
Slicing Pandas Dataframe based on a value present in a column which is a list of lists

I have a Pandas DataFrame with a million rows (ids), where one of the columns holds a list of tokens per row, e.g.

df = pd.DataFrame({'id' : [1,2,3,4] ,'token_list' : [['a','b','c'],['c','d'],['a','e','f'],['c','f']]})

I want to create a dictionary with all the unique tokens, 'a', 'b', 'c', 'd', 'e', 'f' (which I already have as a separate list), as keys and, for each key, the list of ids it is associated with. For example, {'a': [1, 3], 'b': [1], 'c': [1, 2, 4]} and so on.

My problem is that there are 12,000 such tokens, and I do not want to use loops to run through each row of the frame; isin does not seem to work either.

Use np.repeat together with numpy.concatenate to flatten first, then groupby with apply(list) and finally to_dict:

import numpy as np

# repeat each id once per token in its list
a = np.repeat(df['id'], df['token_list'].str.len())
# flatten the lists of tokens into one flat array
b = np.concatenate(df['token_list'].values)

# group the repeated ids by token and collect them into lists
d = a.groupby(b).apply(list).to_dict()
print(d)

{'c': [1, 2, 4], 'a': [1, 3], 'b': [1], 'd': [2], 'e': [3], 'f': [3, 4]}

Detail:

print(a)
0    1
0    1
0    1
1    2
1    2
2    3
2    3
2    3
3    4
3    4
Name: id, dtype: int64

print(b)
['a' 'b' 'c' 'c' 'd' 'a' 'e' 'f' 'c' 'f']
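If your pandas is 0.25 or newer, DataFrame.explode does the same repeat-and-flatten step in one call; a minimal sketch on the sample frame:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4],
                   'token_list': [['a', 'b', 'c'], ['c', 'd'],
                                  ['a', 'e', 'f'], ['c', 'f']]})

# explode turns each list element into its own row, repeating the id
exploded = df.explode('token_list')

# then group the ids by token, exactly as above
d = exploded.groupby('token_list')['id'].apply(list).to_dict()
print(d)
# {'a': [1, 3], 'b': [1], 'c': [1, 2, 4], 'd': [2], 'e': [3], 'f': [3, 4]}
```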
Another option is to stack the lists into long format and then group:

df.set_index('id')['token_list'].\
    apply(pd.Series).stack().reset_index(name='V').\
       groupby('V')['id'].apply(list).to_dict()

{'a': [1, 3], 'b': [1], 'c': [1, 2, 4], 'd': [2], 'e': [3], 'f': [3, 4]}
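For comparison: although the question wants to avoid loops, the thing to avoid is looping once per token (12,000 passes over the frame). A single plain-Python pass over the rows touches each token exactly once and is often competitive; a minimal sketch with collections.defaultdict:

```python
from collections import defaultdict

import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4],
                   'token_list': [['a', 'b', 'c'], ['c', 'd'],
                                  ['a', 'e', 'f'], ['c', 'f']]})

d = defaultdict(list)
# one pass over the rows; total work is proportional to the number of tokens
for i, tokens in zip(df['id'], df['token_list']):
    for t in tokens:
        d[t].append(i)

print(dict(d))
# {'a': [1, 3], 'b': [1], 'c': [1, 2, 4], 'd': [2], 'e': [3], 'f': [3, 4]}
```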
