
Spark: Writing a DataFrame to an S3 bucket

I am trying to write DataFrame data to an S3 bucket. It works fine as expected. Now I want to write to the S3 bucket based on a condition.

In the data frame I have one column called Flag, and its values are T and F. The condition is: if Flag is F, the data should be written to the S3 bucket; otherwise it should not. Please find the details below.

DF Data :

1015,2017/08,新潟,101,SW,39,1015,2017/08,山形,101,SW,10,29,74.35897435897436,11.0,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,大分,101,SW,14,25,64.1025641025641,15.4,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,山口,101,SW,6,33,84.61538461538461,6.6,T
1015,2017/08,新潟,101,SW,39,1015,2017/08,愛媛,101,SW,5,34,87.17948717948718,5.5,T
1015,2017/08,新潟,101,SW,39,1015,2017/08,神奈川,101,SW,114,75,192.30769230769232,125.4,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,富山,101,SW,12,27,69.23076923076923,13.2,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,高知,101,SW,3,36,92.3076923076923,3.3,T
1015,2017/08,新潟,101,SW,39,1015,2017/08,岩手,101,SW,11,28,71.7948717948718,12.1,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,三重,101,SW,45,6,15.384615384615385,49.5,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,京都,101,SW,23,16,41.02564102564102,25.3,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,静岡,101,SW,32,7,17.94871794871795,35.2,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,鹿児島,101,SW,18,21,53.84615384615385,19.8,F
1015,2017/08,新潟,101,SW,39,1015,2017/08,福島,101,SW,17,22,56.41025641025641,18.7,F

Code :

val df = spark.read.format("csv").option("header","true").option("inferSchema","true").load("s3a://test_system/transcation.csv")
df.createOrReplaceTempView("data")
val res = spark.sql("select count(*) from data")
res.show(10)
res.coalesce(1).write.format("csv").option("header","true").mode("Overwrite")
  .save("s3a://test_system/Output/Test_Result")
res.createOrReplaceTempView("res1")
val res2 = spark.sql("select distinct flag from res1 where flag = 'F'")
if (res2 === 'F')
{
  //writing to s3 bucket as raw data. Here the transcation.csv file.
  df.write.format("csv").option("header","true").mode("Overwrite")
    .save("s3a://test_system/Output/Test_Result/rawdata")
}

I am trying this approach but it is not exporting the df data to the S3 bucket. How can I export/write data to the S3 bucket based on a condition?

Many thanks for your help.

I am assuming you want to write the dataframe when an "F" flag is present in the dataframe.

val df = spark.read.format("csv").option("header","true").option("inferSchema","true").load("s3a://test_system/transcation.csv")
df.createOrReplaceTempView("data")
val res = spark.sql("select count(*) from data")
res.show(10)
res.coalesce(1).write.format("csv").option("header","true").mode("Overwrite")
  .save("s3a://test_system/Output/Test_Result")
res.createOrReplaceTempView("res1")

Here we query the data table, since the res1 table you created above contains just a count. From the resulting dataframe we select only the first row using the first() function, and the first column of that row using getAs[String](0).

val res2 = spark.sql("select distinct flag from data where flag = 'F'").first().getAs[String](0)

println("Printing out res2 = " + res2)
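One caveat (not covered in the original answer): first() throws a NoSuchElementException when the query returns no rows, i.e. when no "F" flag exists at all. A defensive sketch of the same check, shown here on a plain Scala collection standing in for the query result (with Spark you could use res2DF.take(1).headOption instead of first()):

```scala
object SafeFlagCheck {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for the rows returned by the distinct-flag query.
    val queryResult: Seq[String] = Seq("F")

    // headOption returns None on an empty result instead of throwing.
    val flagOpt: Option[String] = queryResult.headOption

    // Only write when an "F" flag was actually found.
    val shouldWrite: Boolean = flagOpt.contains("F")
    println(shouldWrite)
  }
}
```

This way the write is simply skipped on an empty result rather than crashing the job.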

Here we compare the string extracted above with the string "F". Remember that "F" is a String while 'F' is a Char in Scala.

if (res2.equals("F"))
{
  println("Inside the if block")
  //writing to s3 bucket as raw data .Here transcation.csv file.
  df.write.format("csv").option("header","true").mode("Overwrite")
    .save("s3a://test_system/Output/Test_Result/rawdata")
}
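To make the String-vs-Char distinction concrete, here is a minimal, Spark-free illustration (res2 is just a hypothetical local value here):

```scala
object CharVsString {
  def main(args: Array[String]): Unit = {
    val res2: String = "F"       // what getAs[String](0) returns

    println(res2.equals("F"))    // String compared with String
    println(res2 == "F")         // == on Scala objects delegates to equals

    // res2 == 'F' compares a String with a Char and can never be true,
    // which is why the original if (res2 === 'F') condition failed.
  }
}
```

So the fix is simply to extract the flag as a String and compare it against the String literal "F".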
