
Join data with filename in Spark / PySpark

I'm reading in data from a number of S3 files in PySpark. The S3 keys contain the calendar date the file was created, and I'd like to join the data against that date. Is there any way to join the lines of data in the files with their filenames?

You can add a column to the DataFrame that contains the file name; I use this to identify the source of each row when merging DataFrames later:

from pyspark.sql.functions import lit

filename = 'myawesomefile.csv'

df_new = df.withColumn('file_name', lit(filename))
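The lit() approach above hard-codes a single filename, so it only works if you load each file separately. When reading many S3 files at once, Spark's input_file_name() function tags each row with the path of the file it came from, and the date can then be pulled out of that path. A minimal sketch, assuming S3 keys that embed the creation date as YYYY-MM-DD (a hypothetical key layout); the Spark calls are shown as comments since they need a running SparkSession, and the extraction itself is shown in plain Python:

```python
import re

# In PySpark (Spark >= 1.6) the per-row source path is available without a
# custom record reader:
# from pyspark.sql.functions import input_file_name, regexp_extract
# df = df.withColumn('file_name', input_file_name())
# df = df.withColumn('file_date',
#                    regexp_extract('file_name', r'(\d{4}-\d{2}-\d{2})', 1))

def extract_date(key: str) -> str:
    """Pull the first YYYY-MM-DD substring out of an S3 key.

    Returns an empty string when the key contains no date.
    """
    m = re.search(r'\d{4}-\d{2}-\d{2}', key)
    return m.group(0) if m else ''

# Example with a hypothetical key:
print(extract_date('s3://bucket/logs/2024-01-15/part-0000.csv'))
```

Once the date is a column, the join against the other dataset is an ordinary DataFrame join on that column.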

Here's what I ended up doing:

I subclassed Hadoop's LineRecordReader so that it includes the filename with each line, then overrode TextInputFormat to use my new LineRecordReader.

Then I loaded the file using the newAPIHadoopFile function.
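A custom record reader like that yields (filename, line) pairs, which can then be mapped to (date, line) pairs for the join. A minimal pure-Python sketch of that downstream flow; the newAPIHadoopFile call itself is shown as a comment, and the custom input-format class name in it is hypothetical:

```python
import re

# Loading with the custom input format would look roughly like
# (the input-format class name is hypothetical):
# rdd = sc.newAPIHadoopFile(
#     's3://bucket/data/',
#     'com.example.FilenameTextInputFormat',  # custom TextInputFormat subclass
#     'org.apache.hadoop.io.Text',            # key class: the filename
#     'org.apache.hadoop.io.Text',            # value class: the line
# )

# Simulated output of such a reader: (S3 key, line of data) pairs.
pairs = [
    ('s3://bucket/data/2024-01-15/a.txt', 'row1'),
    ('s3://bucket/data/2024-01-16/b.txt', 'row2'),
]

DATE_RE = re.compile(r'\d{4}-\d{2}-\d{2}')

def with_date(pair):
    """Map a (filename, line) pair to a (date, line) pair for joining."""
    key, line = pair
    m = DATE_RE.search(key)
    return (m.group(0) if m else None, line)

# In Spark this would be rdd.map(with_date); here it is a plain list map.
dated = [with_date(p) for p in pairs]
```

The resulting (date, line) pairs can be joined against any other RDD keyed by date.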

Links:
LineRecordReader: http://tinyurl.com/linerecordreader
TextInputFormat: http://tinyurl.com/textinputformat
newAPIHadoopFile: http://tinyurl.com/newapihadoopfile
