
PySpark - Write data frame into Hive table

I have an empty Hive table. I am running 18 jobs, and each one may produce a data frame that I need to add to the Hive table as a parquet file.

What I have is something like this:

df2.write.parquet("SOME_HDFS_DIR/my_table_dir")

But this doesn't seem quite right. Do I have to specify a .parquet file name and keep appending to it each time? I have seen some syntax in Scala but not Python.

By default, df.write.parquet fails if the target path already exists, and mode('overwrite') replaces the existing parquet files at that location. To keep adding data instead, use append mode:

df.write.mode('append').parquet('path')

This creates new parquet files in the path on each run, so the data accumulates and you can read it all back from the table.
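As a minimal sketch of how each of the 18 jobs could do this, assuming the Hive table is an external table over a placeholder path hdfs:///SOME_HDFS_DIR/my_table_dir, or alternatively a table named my_db.my_table that already exists in the metastore (both names here are hypothetical):

from pyspark.sql import SparkSession

# Hive support lets Spark talk to the Hive metastore.
spark = (SparkSession.builder
         .appName("append-to-hive-table")
         .enableHiveSupport()
         .getOrCreate())

# Stand-in for the data frame produced by one of the 18 jobs.
df2 = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Option 1: append new parquet files under the table's HDFS location.
# Works when the Hive table is an external table over this directory;
# the path is a placeholder, not a real one.
df2.write.mode("append").parquet("hdfs:///SOME_HDFS_DIR/my_table_dir")

# Option 2: append through the metastore by table name and let Spark
# place the files; assumes the table my_db.my_table already exists.
df2.write.mode("append").format("parquet").saveAsTable("my_db.my_table")

You don't pick a file name yourself: Spark generates unique part-file names on every append, so the 18 jobs can each add their own files independently.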
