
Create a single CSV file for each dataframe row

I need to create a single CSV file for each dataframe row.

The following code creates a single CSV containing the dataframe's contents:

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.SparkConf
import java.sql.Timestamp
import org.apache.spark.sql._
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, LongType, DoubleType};
import org.apache.spark.sql.functions._

val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
var myDF = sqlContext.sql("select a, b, c from my_table")

val filename = "/tmp/myCSV.csv"
myDF.repartition(1).write
  .option("header", "true")
  .option("compression", "none")
  .option("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss.SSS")
  .csv(filename)

Instead, I'd like to create a single CSV for each row:

scala> val myDF = sqlContext.sql("select a, b, c from my_table")

scala> val c = myDF.cache.count // say there are 100 records in total

scala> val newDF = myDF.repartition(c.toInt)
scala> newDF.rdd.getNumPartitions
res34: Int = 100

scala> newDF.write.format("csv").option("header","true").save(<path to write>)
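Note that `repartition(c.toInt)` spreads rows across that many partitions but does not strictly guarantee exactly one row per output file. An alternative sketch (not from the original answer) is to tag each row with a unique id and use the writer's `partitionBy`, so Spark emits one output directory, each holding one CSV part file, per row. The column name `row_id` and the output path `/tmp/perRowCSV` are hypothetical:

```scala
import org.apache.spark.sql.functions.monotonically_increasing_id

// Assign each row a unique (though not necessarily consecutive) id,
// then partition the output on that id so each row lands in its own
// directory, e.g. /tmp/perRowCSV/row_id=0/part-....csv
val withId = myDF.withColumn("row_id", monotonically_increasing_id())

withId.write
  .partitionBy("row_id")
  .option("header", "true")
  .csv("/tmp/perRowCSV")
```

One caveat of this approach: `partitionBy` moves the `row_id` column out of the file contents and into the directory names, and writing one partition per row is expensive for large dataframes, so it is only practical for small row counts like the 100-record example above.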
