
How to process a PySpark dataframe grouped by column value

I have a huge dataframe of different item_id values and their related data, and I need to process each item_id group separately and in parallel. I tried to repartition the dataframe by item_id using the code below, but it seems it is still being processed as a whole rather than in per-item chunks:

import pandas as pd

data = sqlContext.read.csv(path='/user/data', header=True)
columns = data.columns
result = data.repartition('ITEM_ID') \
        .rdd \
        .mapPartitions(lambda iter: pd.DataFrame(list(iter), columns=columns)) \
        .mapPartitions(scan_item_best_model) \
        .collect()

Also, is repartition the correct approach, or is there something I am doing wrong?
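
For reference, repartition('ITEM_ID') only guarantees that all rows sharing an ITEM_ID land in the same partition; a single partition can still contain several different ITEM_IDs, and the function passed to mapPartitions must return an iterable. Here is a minimal sketch of how the per-item grouping would have to happen inside each partition if one stays with this route (it assumes scan_item_best_model accepts an (item_id, pandas DataFrame) pair, which is a hypothetical signature):

import pandas as pd

def process_partition(rows):
    # rows is an iterator of pyspark Row objects for one partition;
    # the partition may hold several ITEM_IDs, so group them here.
    pdf = pd.DataFrame([row.asDict() for row in rows])
    if pdf.empty:
        return
    for item_id, group in pdf.groupby('ITEM_ID'):
        # hypothetical signature: one (item_id, pandas_df) pair per call
        yield scan_item_best_model((item_id, group))

result = (data.repartition('ITEM_ID')
              .rdd
              .mapPartitions(process_partition)
              .collect())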

After looking around I found this, which addresses a similar problem; in the end I solved it like this:

import pandas as pd
from pyspark.sql import functions as F

data = sqlContext.read.csv(path='/user/data', header=True)
columns = data.columns

# Pack every column of a row into one struct column so the full row
# survives the aggregation below.
df = data.select("ITEM_ID", F.struct(columns).alias("df"))

# Collect all rows of each ITEM_ID into a single list per group.
df = df.groupBy('ITEM_ID').agg(F.collect_list('df').alias('data'))

# Rebuild a pandas DataFrame per ITEM_ID and pass (item_id, pandas_df)
# to the per-item function; an action (e.g. collect()) still triggers it.
df = df.rdd.map(lambda big_df: (big_df['ITEM_ID'],
                                pd.DataFrame.from_records(big_df['data'], columns=columns))).map(
    scan_item_best_model)
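
As a side note, on Spark 3.0+ the same per-group pattern is available through groupBy().applyInPandas(), which hands each ITEM_ID group to a Python function as a pandas DataFrame and avoids the manual struct/collect_list packing. A minimal sketch with a stand-in body and a hypothetical output schema, since the real scan_item_best_model and its return shape are not shown here:

import pandas as pd

def process_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for scan_item_best_model: the real function would fit and
    # score models here, and must return a pandas DataFrame matching the
    # schema declared below.
    item_id = pdf['ITEM_ID'].iloc[0]
    return pd.DataFrame({'ITEM_ID': [item_id], 'n_rows': [len(pdf)]})

result = data.groupBy('ITEM_ID').applyInPandas(
    process_group, schema='ITEM_ID string, n_rows long')
result.show()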
