
Create a single CSV file for each dataframe row

I need to create a single CSV file for each dataframe row.

The following code creates a single CSV file containing the entire DataFrame:

import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc)
val myDF = sqlContext.sql("select a, b, c from my_table")

// Repartition to a single partition so only one part file is written
val filename = "/tmp/myCSV.csv"
myDF.repartition(1)
  .write
  .option("header", "true")
  .option("compression", "none")
  .option("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss.SSS")
  .csv(filename)
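Note that csv(filename) actually writes a directory named /tmp/myCSV.csv containing a single part file (plus a _SUCCESS marker), not a plain file. A minimal sketch for moving that part file to an ordinary file path, assuming the Hadoop FileSystem API is available on the driver and that the part-* naming holds for your Spark version; the target name /tmp/myCSV_single.csv is illustrative:

import org.apache.hadoop.fs.{FileSystem, Path}

// Find the single part file inside the output directory and rename it
val fs = FileSystem.get(sc.hadoopConfiguration)
val partFile = fs.globStatus(new Path(filename + "/part-*"))(0).getPath
fs.rename(partFile, new Path("/tmp/myCSV_single.csv"))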

I'd like to create a single CSV for each row

One approach is to repartition the DataFrame into as many partitions as there are rows, so that each partition (and therefore each row) is written out as its own CSV part file:

scala> val myDF = sqlContext.sql("select a, b, c from my_table")

scala> val c = myDF.cache.count // Let's say there are 100 records in total

scala> val newDF = myDF.repartition(c.toInt) // one row per partition (round-robin)
scala> newDF.rdd.getNumPartitions
res34: Int = 100

scala> newDF.write.format("csv").option("header","true").save(<path to write>)

This writes 100 part files under <path to write>, each containing a header line and a single data row.
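If one file per row in a separate sub-directory per row is acceptable, a hedged alternative sketch: tag each row with a unique id and let partitionBy write one directory per distinct key (the column name row_id and the output path are illustrative, and monotonically_increasing_id requires Spark 1.6+):

import org.apache.spark.sql.functions.monotonically_increasing_id

// Each distinct row_id value becomes its own row_id=N/ output directory
val withId = myDF.withColumn("row_id", monotonically_increasing_id())
withId.write
  .format("csv")
  .option("header", "true")
  .partitionBy("row_id")
  .save("/tmp/one_csv_per_row")

Note that the row_id value ends up in the directory name (row_id=0/, row_id=1/, ...) rather than inside the CSV itself.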
