
How to select and order multiple columns in a PySpark DataFrame after a join

I want to select multiple columns from an existing DataFrame (created after joins) and order the fields to match my target table structure. How can this be done? The approach I have used is below. I am able to select the necessary columns, but not to get them in the required order.

Required (Target Table structure) :
hist_columns = ("acct_nbr","account_sk_id", "zip_code","primary_state", "eff_start_date" ,"eff_end_date","eff_flag")

from pyspark.sql.functions import broadcast

account_sk_df = hist_process_df.join(broadcast(df_sk_lkp), 'acct_nbr', 'inner')
account_sk_df_ld = account_sk_df.select([c for c in account_sk_df.columns if c in hist_columns])

>>> account_sk_df
DataFrame[acct_nbr: string, primary_state: string, zip_code: string, eff_start_date: string, eff_end_date: string, eff_flag: string, hash_sk_id: string, account_sk_id: int]


>>> account_sk_df_ld
DataFrame[acct_nbr: string, primary_state: string, zip_code: string, eff_start_date: string, eff_end_date: string, eff_flag: string, account_sk_id: int]

The account_sk_id needs to be in the 2nd position. What's the best way to do this?

Try selecting the columns by just providing the list, instead of iterating over the existing columns, and the ordering should work:

account_sk_df_ld = account_sk_df.select(*hist_columns)
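As a minimal, self-contained sketch of this approach (the sample rows and values below are made up purely for illustration), selecting with the unpacked hist_columns tuple returns the columns in exactly that order:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Columns deliberately created out of the target order, mirroring the joined DataFrame.
account_sk_df = spark.createDataFrame(
    [("A1", "NY", "10001", "2020-01-01", "9999-12-31", "Y", "h1", 101)],
    ["acct_nbr", "primary_state", "zip_code", "eff_start_date",
     "eff_end_date", "eff_flag", "hash_sk_id", "account_sk_id"],
)

hist_columns = ("acct_nbr", "account_sk_id", "zip_code", "primary_state",
                "eff_start_date", "eff_end_date", "eff_flag")

# select(*hist_columns) projects the columns in the order of hist_columns,
# so account_sk_id ends up in the 2nd position.
account_sk_df_ld = account_sk_df.select(*hist_columns)
print(account_sk_df_ld.columns)
# ['acct_nbr', 'account_sk_id', 'zip_code', 'primary_state',
#  'eff_start_date', 'eff_end_date', 'eff_flag']

The original list comprehension keeps the order of account_sk_df.columns rather than the order of hist_columns, which is why account_sk_id ended up last.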
