

Parquet schema and Spark

I am trying to convert CSV files to Parquet, and I am using Spark to accomplish this.

SparkSession spark = SparkSession
    .builder()
    .appName(appName)
    .config("spark.master", master)
    .getOrCreate();

Dataset<Row> logFile = spark.read().csv("log_file.csv");
logFile.write().parquet("log_file.parquet");

Now the problem is I don't have a schema defined, and the columns look like this (output displayed using printSchema() in Spark):

root
 |-- _c0: string (nullable = true)
 |-- _c1: string (nullable = true)
 |-- _c2: string (nullable = true)
 ....

The CSV has the column names on the first row, but I guess they're ignored. The problem is that only a few columns are strings; I also have ints and dates.

I am only using Spark, no Avro or anything else basically (I've never used Avro).

What are my options to define a schema, and how? If I need to write the Parquet file in another way, then no problem, as long as it's a quick and easy solution.

(I am using Spark standalone for tests / don't know Scala.)

Try using .option("inferSchema", "true"), provided by the Spark CSV reader (the spark-csv package on Spark 1.x; built into Spark since 2.0). This will automatically infer the schema from the data.
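Applied to the question's own setup, a minimal sketch of the inference approach in Java might look like this (Spark 2.x assumed, since the question already calls spark.read().csv(); the file path is the one from the question):

```java
// Sketch assuming Spark 2.x, where the CSV reader is built in.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CsvToParquet {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("csv-to-parquet")
            .master("local[*]") // local/standalone testing, as in the question
            .getOrCreate();

        Dataset<Row> logFile = spark.read()
            .option("header", "true")      // take column names from the first row
            .option("inferSchema", "true") // sample the data to guess int/date/etc. types
            .csv("log_file.csv");

        logFile.printSchema();             // columns now carry real names and inferred types
        logFile.write().parquet("log_file.parquet");
        spark.stop();
    }
}
```

Note that inferSchema triggers an extra pass over the data, which matters for large files; the explicit-schema approach below avoids that.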

You can also define a custom schema for your data using StructType and pass it with .schema(customSchema) so the file is read on the basis of that schema:

import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types._

val sqlContext = new SQLContext(sc)

// Declare each column's name, type, and nullability up front
val customSchema = StructType(Array(
    StructField("year", IntegerType, true),
    StructField("make", StringType, true),
    StructField("model", StringType, true),
    StructField("comment", StringType, true),
    StructField("blank", StringType, true)))

val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true") // use the first line of all files as the header
    .schema(customSchema)     // skip inference and apply the declared schema
    .load("cars.csv")
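Since the question mentions not knowing Scala, the same explicit-schema idea can be sketched in Java with the DataTypes factory; the column names, types, and date format below are illustrative assumptions, not taken from the original log file:

```java
import java.util.Arrays;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class CsvWithSchema {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("csv-with-schema")
            .master("local[*]")
            .getOrCreate();

        // Hypothetical columns standing in for the real log file's layout
        StructType schema = DataTypes.createStructType(Arrays.asList(
            DataTypes.createStructField("id", DataTypes.IntegerType, true),
            DataTypes.createStructField("message", DataTypes.StringType, true),
            DataTypes.createStructField("logged_at", DataTypes.DateType, true)));

        Dataset<Row> logFile = spark.read()
            .option("header", "true")           // consume the header row
            .option("dateFormat", "yyyy-MM-dd") // assumed date layout for the DateType column
            .schema(schema)                     // no inference pass over the data
            .csv("log_file.csv");

        logFile.write().parquet("log_file_typed.parquet");
        spark.stop();
    }
}
```

With an explicit schema, Spark reads the file once, and a row that doesn't match the declared types surfaces as null (or an error, depending on the reader's mode option) rather than silently becoming a string column.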

