
Write/store a DataFrame in a text file

I am trying to write a DataFrame to a text file. If the file contains a single column, I am able to write it to a text file. If the file contains multiple columns, I am facing this error:

Text data source supports only a single column, and you have 2 columns.

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

object replace {

  def main(args: Array[String]): Unit = {

    Logger.getLogger("org").setLevel(Level.ERROR)

    val spark = SparkSession.builder.master("local[1]").appName("Decimal Field Validation").getOrCreate()

    var sourcefile = spark.read.option("header", "true").text("C:/Users/phadpa01/Desktop/inputfiles/decimalvalues.txt")

    // add a running row number (PRGREFNBR) in front of each row
    val rowRDD = sourcefile.rdd.zipWithIndex().map(indexedRow => Row.fromSeq((indexedRow._2.toLong + 1) +: indexedRow._1.toSeq))

    // add a column for PRGREFNBR to the schema
    val newstructure = StructType(Array(StructField("PRGREFNBR", LongType)).++(sourcefile.schema.fields))

    // create a new dataframe containing PRGREFNBR
    sourcefile = spark.createDataFrame(rowRDD, newstructure)

    // fails here: the text data source supports only a single column
    val op = sourcefile.write.mode("overwrite").format("text").save("C:/Users/phadpa01/Desktop/op")
  }
}
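For reference, the error occurs because the text data source can write only a single string column. One workaround that stays in the DataFrame API is to concatenate all columns into one string column before saving (a minimal sketch; the comma separator is an assumption, and the path follows the question):

import org.apache.spark.sql.functions.{col, concat_ws}

// Concatenate every column into a single string column named "value";
// casting to string first keeps non-string columns (e.g. PRGREFNBR) safe.
val singleCol = sourcefile.select(
  concat_ws(",", sourcefile.columns.map(c => col(c).cast("string")): _*).as("value"))

singleCol.write.mode("overwrite").text("C:/Users/phadpa01/Desktop/op")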

You can convert the DataFrame to an RDD, convert each Row to a string, and write the last line as:

val op = sourcefile.rdd.map(_.toString()).saveAsTextFile("C:/Users/phadpa01/Desktop/op")

Edit:

As @philantrovert and @Pravinkumar have pointed out, the above would add [ and ] to each line in the output file, which is true. The solution is to replace them with the empty string:

val op = sourcefile.rdd.map(_.toString().replace("[", "").replace("]", "")).saveAsTextFile("C:/Users/phadpa01/Desktop/op")

One can even use a regex.
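For example, a single replaceAll with a character class strips both brackets in one pass (equivalent to the two replace calls above):

val op = sourcefile.rdd
  .map(_.toString().replaceAll("[\\[\\]]", "")) // remove both "[" and "]" with one regex
  .saveAsTextFile("C:/Users/phadpa01/Desktop/op")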

I would recommend using CSV or another delimited format. The following is an example of the most concise/elegant way to write a .tsv file in Spark 2+:

import org.apache.spark.sql.SaveMode

val tsvWithHeaderOptions: Map[String, String] = Map(
  ("delimiter", "\t"), // Uses "\t" delimiter instead of the default ","
  ("header", "true"))  // Writes a header record with the column names

df.coalesce(1)         // Writes to a single file
  .write
  .mode(SaveMode.Overwrite)
  .options(tsvWithHeaderOptions)
  .csv("output/path")
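Note that coalesce(1) funnels all the data through a single task, so it is only sensible when the output is small enough for one executor; for larger results, drop the coalesce and let Spark write multiple part files.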

You can save it as a text file in CSV format ( .format("csv") ).

The result will be a text file in CSV format; each column will be separated by a comma.

val op = sourcefile.write.mode("overwrite").format("csv").save("C:/Users/phadpa01/Desktop/op")

More info can be found in the Spark programming guide.

I think using substring is more appropriate for all scenarios.

Please check the code below:

sourcefile.rdd
  .map(r => { val x = r.toString; x.substring(1, x.length - 1) }) // strip the leading "[" and trailing "]"
  .saveAsTextFile("C:/Users/phadpa01/Desktop/op")
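Alternatively, Row.mkString builds the line directly from the row's values, so there are no brackets to strip in the first place (a sketch; the comma delimiter is an assumption):

sourcefile.rdd
  .map(_.mkString(",")) // join the field values with a delimiter, no surrounding brackets
  .saveAsTextFile("C:/Users/phadpa01/Desktop/op")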

I save my DF output into a text file using the Databricks API.

myDF.write.format("com.databricks.spark.csv").option("header", "true").save("output.csv")
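Note that from Spark 2.0 onwards the CSV source is built in, so .format("csv") (as shown in the earlier answer) works without the external spark-csv package.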
