I have a very large DataFrame with ~100M rows that looks like this:
    query     score1    score2   key
0  query0  97.149704  1.317513  key1
1  query1  86.344880  1.237784  key2
2  query2  85.192480  1.312714  key3
3  query1  86.240326  1.317513  key4
4  query2  85.492410  1.212714  key5
...
I want to group the dataframe by "query" and then, within each group, get each row's 0-based position when sorted by "score1" and by "score2" (higher is better), so the output should look like this:
    query     score1    score2   key  pos1  pos2
0  query0  97.149704  1.317513  key1     0     0
1  query1  86.344880  1.237784  key2     0     1
2  query2  85.192480  1.312714  key3     1     0
3  query1  86.240326  1.317513  key4     1     0
4  query2  85.492410  1.212714  key5     0     1
Currently, I have a function that looks something like this:

def func(query, df, score1=True):
    mini_df = df[df["query"] == query]
    sort_col = "score1" if score1 else "score2"
    # Sort by the chosen score (higher is better), then use the
    # reset index as the 0-based position within the group.
    mini_df = mini_df.sort_values(sort_col, ascending=False)
    mini_df.reset_index(drop=True, inplace=True)
    mini_df["pos_" + sort_col] = mini_df.index
    return mini_df
which I call from main():

from itertools import repeat
from multiprocessing import Pool, cpu_count

p = Pool(cpu_count())
df_list = list(p.starmap(func, zip(queries, repeat(df))))
df = pd.concat(df_list, ignore_index=True)

but it takes a long time. I am running this on a machine with 96 Intel Xeon CPUs and 512 GB of memory, and it still takes more than 24 hours. What would be a much faster way to achieve this?
Use groupby and rank:
df[['pos1', 'pos2']] = (df.groupby('query')[['score1', 'score2']]
                          .rank(method='max', ascending=False)
                          .sub(1).astype(int))
print(df)
# Output
    query     score1    score2   key  pos1  pos2
0  query0  97.149704  1.317513  key1     0     0
1  query1  86.344880  1.237784  key2     0     1
2  query2  85.192480  1.312714  key3     1     0
3  query1  86.240326  1.317513  key4     1     0
4  query2  85.492410  1.212714  key5     0     1