I frequently need to generate network graphs based on the co-occurrences of items in a column. I start off with something like this:
letters
0 [b, a, e, f, c]
1 [a, c, d]
2 [c, b, j]
In the following example, I want to make a table of all pairs of letters, and then have a "weight" column that describes how many times each two-letter pair appeared in the same row together (see the bottom for an example).
I am currently doing large parts of it with a for loop, and I was wondering whether there is a way to vectorize it, as I often deal with enormous datasets that take an extremely long time to process this way. I am also concerned about keeping things within memory limits. This is my code right now:
import pandas as pd
# Make some data
df = pd.DataFrame({'letters': [['b','a','e','f','c'],['a','c','d'],['c','b','j']]})
# I make a list of sets, which contain pairs of all the elements
# that co-occur in the data in the same list
sets = []
for lst in df['letters']:
    for i, a in enumerate(lst):
        for b in lst[i:]:
            if not a == b:
                sets.append({a, b})
# Sets now looks like:
# [{'a', 'b'},
# {'b', 'e'},
# {'b', 'f'},...
# Dataframe with one column containing the sets
df = pd.DataFrame({'weight': sets})
# We count how many times each pair occurs together
df = df['weight'].value_counts().reset_index()
# Split the sets into two separate columns
split = pd.DataFrame(df['index'].values.tolist()) \
.rename(columns = lambda x: f'Node{x+1}') \
.fillna('-')
# Merge the 'weight' column back onto the dataframe
df = pd.concat([df['weight'], split], axis = 1)
print(df.head())
# Output:
weight Node1 Node2
0 2 c b
1 2 a c
2 1 f e
3 1 d c
4 1 j b
As suggested in the other answers, make use of collections.Counter for the counting. Since it behaves like a dict, it needs hashable types. {a, b} is not hashable, because it's a set. Replacing it with a tuple fixes the hashability problem, but introduces possible duplicates (e.g. ('a', 'b') and ('b', 'a')). To fix this issue, just sort the tuple. Since sorted returns a list, we need to turn that back into a tuple: tuple(sorted((a, b))). A bit cumbersome, but convenient in combination with Counter.
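To see the difference concretely, here is a minimal sketch of the hashability point:
from collections import Counter

# A set cannot be used as a dict/Counter key:
# Counter([{'a', 'b'}])  # raises TypeError: unhashable type: 'set'

# A sorted tuple is hashable and order-independent:
pair1 = tuple(sorted(('b', 'a')))
pair2 = tuple(sorted(('a', 'b')))
print(pair1 == pair2)           # True: both are ('a', 'b')
print(Counter([pair1, pair2]))  # Counter({('a', 'b'): 2})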
When rearranged, your nested loops can be replaced with the following comprehension:
sets = [ tuple(sorted((a, b))) for lst in df['letters'] for i, a in enumerate(lst) for b in lst[i:] if not a == b ]
Python has optimizations in place for comprehension execution, so this will already bring some speedup.
Bonus: if you combine it with Counter, you don't even need the result as a list; you can use a generator expression instead, so almost no extra memory is used because the pairs are never all stored at once:
Counter( tuple(sorted((a, b))) for lst in df['letters'] for i, a in enumerate(lst) for b in lst[i:] if not a == b ) # note the lack of [ ] around the expression
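You can verify the memory claim with sys.getsizeof; a quick sketch (the exact byte counts are implementation-dependent):
import sys

data = [list('abcdef')] * 1000
pairs_list = [tuple(sorted((a, b))) for lst in data for i, a in enumerate(lst) for b in lst[i:] if not a == b]
pairs_gen = (tuple(sorted((a, b))) for lst in data for i, a in enumerate(lst) for b in lst[i:] if not a == b)
print(sys.getsizeof(pairs_list))  # grows with the number of pairs (~120 KB of pointers here)
print(sys.getsizeof(pairs_gen))   # roughly 100-200 bytes, regardless of input size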
As usual when dealing with performance, the final answer must come from testing different approaches and choosing the best one. Here I compare the (IMO very elegant and readable) itertools-based approach by @yatu, the original nested for loop, and the comprehension. All tests run on the same sample data, randomly generated to look like the given example.
from timeit import timeit
setup = '''
import numpy as np
import random
from collections import Counter
from itertools import combinations, chain
random.seed(42)
np.random.seed(42)
DF_SIZE = 50000 # make it big
MAX_LEN = 6
list_lengths = np.random.randint(1, MAX_LEN + 1, DF_SIZE)
letters = 'abcdefghijklmnopqrstuvwxyz'
lists = [ random.sample(letters, ln) for ln in list_lengths ] # roughly equivalent to df.letters.tolist()
'''
#################
comprehension = '''Counter( tuple(sorted((a, b))) for lst in lists for i,a in enumerate(lst) for b in lst[i:] if not a == b )'''
itertools = '''Counter(chain.from_iterable(combinations(sorted(i), r=2) for i in lists))'''
original_for_loop = '''
sets = []
for lst in lists:
    for i, a in enumerate(lst):
        for b in lst[i:]:
            if not a == b:
                sets.append(tuple(sorted((a, b))))
Counter(sets)
'''
print(f'Comprehension: {timeit(setup=setup, stmt=comprehension, number=10)}')
print(f'itertools: {timeit(setup=setup, stmt=itertools, number=10)}')
print(f'nested for: {timeit(setup=setup, stmt=original_for_loop, number=10)}')
Running the code above on my machine (Python 3.7) prints:
Comprehension: 1.6664735930098686
itertools: 0.5829475829959847
nested for: 1.751666523006861
So, both suggested approaches improve over the nested for loops, but itertools is indeed faster in this case.
For a performance improvement you could use itertools.combinations to get all length-2 combinations from the inner lists, and Counter to count the pairs in a flattened list.
Note that in addition to obtaining all combinations from each sublist, sorting is a necessary step, since it ensures that each pair of letters always appears in the same order within its tuple:
from itertools import combinations, chain
from collections import Counter
l = df.letters.tolist()
t = chain.from_iterable(combinations(sorted(i), r=2) for i in l)
print(Counter(t))
Counter({('a', 'b'): 1,
('a', 'c'): 2,
('a', 'e'): 1,
('a', 'f'): 1,
('b', 'c'): 2,
('b', 'e'): 1,
('b', 'f'): 1,
('c', 'e'): 1,
('c', 'f'): 1,
('e', 'f'): 1,
('a', 'd'): 1,
('c', 'd'): 1,
('b', 'j'): 1,
('c', 'j'): 1})
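Since the stated goal is a network graph, the Counter can also be fed straight into a graph library. A sketch using networkx (an assumption on my part; it is not used elsewhere in this thread), rebuilding the Counter because the generator t above is exhausted after printing:
from collections import Counter
from itertools import chain, combinations
import networkx as nx

counts = Counter(chain.from_iterable(combinations(sorted(i), r=2) for i in l))
G = nx.Graph()
G.add_weighted_edges_from((a, b, w) for (a, b), w in counts.items())
print(G['a']['c'])  # {'weight': 2} -- 'a' and 'c' co-occur in two rows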
A numpy/scipy solution using sparse incidence matrices:
from itertools import chain
import numpy as np
from scipy import sparse
from simple_benchmark import BenchmarkBuilder, MultiArgument

B = BenchmarkBuilder()

@B.add_function()
def pp(L):
    # row pointers: cumulative sizes of the sublists
    SZS = np.fromiter(chain((0,), map(len, L)), int, len(L) + 1).cumsum()
    # unique letters, and the column index of each occurrence
    unq, idx = np.unique(np.concatenate(L), return_inverse=True)
    # sparse incidence matrix: one row per sublist, one column per letter
    S = sparse.csr_matrix((np.ones(idx.size, int), idx, SZS), (len(L), len(unq)))
    # Gram matrix counts co-occurrences; keep the upper triangle only
    SS = (S.T @ S).tocoo()
    idx = (SS.col > SS.row).nonzero()
    return unq[SS.row[idx]], unq[SS.col[idx]], SS.data[idx]  # left, right, count

from collections import Counter
from itertools import combinations

@B.add_function()
def yatu(L):
    return Counter(chain.from_iterable(combinations(sorted(i), r=2) for i in L))

@B.add_function()
def feature_engineer(L):
    return Counter((min(nodes), max(nodes))
                   for row in L for nodes in combinations(row, 2))

from string import ascii_lowercase as ltrs

ltrs = np.array([*ltrs])

@B.add_arguments('array size')
def argument_provider():
    for exp in range(4, 30):
        n = int(1.4**exp)
        L = [ltrs[np.maximum(0, np.random.randint(-2, 2, 26)).astype(bool).tolist()] for _ in range(n)]
        yield n, L

r = B.run()
r.plot()
We see that the method presented here (pp) comes with the typical numpy constant overhead, but from ~100 sublists onward it starts winning.
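To see why the incidence-matrix product yields co-occurrence counts, here is the matrix for the OP's three rows written out densely (a sketch for illustration only; pp builds it sparsely from the data):
import numpy as np
from scipy import sparse

# Columns: a b c d e f j; one row per sublist, 1 = letter present
S = sparse.csr_matrix(np.array([
    [1, 1, 1, 0, 1, 1, 0],   # ['b', 'a', 'e', 'f', 'c']
    [1, 0, 1, 1, 0, 0, 0],   # ['a', 'c', 'd']
    [0, 1, 1, 0, 0, 0, 1],   # ['c', 'b', 'j']
]))
# (S.T @ S)[i, j] = number of rows containing both letter i and letter j
C = (S.T @ S).toarray()
print(C[0, 2])  # 2: 'a' and 'c' co-occur in two rows
print(C[1, 2])  # 2: 'b' and 'c' co-occur in two rows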
OP's example:
import pandas as pd
df = pd.DataFrame({'letters': [['b','a','e','f','c'],['a','c','d'],['c','b','j']]})
pd.DataFrame(dict(zip(["left", "right", "count"],pp(df['letters']))))
Prints:
left right count
0 a b 1
1 a c 2
2 b c 2
3 c d 1
4 a d 1
5 c e 1
6 a e 1
7 b e 1
8 c f 1
9 e f 1
10 a f 1
11 b f 1
12 b j 1
13 c j 1
Notes to improve efficiency:
Instead of storing the pairs in sets, which are memory hogs and require expensive computation when adding elements, use a tuple whose first element is the smaller of the two.
To calculate the combinations quickly, use itertools.combinations.
To count the combinations, use collections.Counter.
Optionally, convert the counts to a DataFrame.
Here's an example implementation:
from collections import Counter
from itertools import combinations
data = df.letters.tolist()
# data = [['b', 'a', 'e', 'f', 'c'],
# ['a', 'c', 'd'],
# ['c', 'b', 'j']]
counts = Counter((min(nodes), max(nodes)) for row in data for nodes in combinations(row, 2))
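And the optional DataFrame conversion mentioned above could look like this (a sketch; the column names are chosen here to match the OP's desired output):
import pandas as pd

result = pd.DataFrame(
    [(n1, n2, w) for (n1, n2), w in counts.items()],
    columns=['Node1', 'Node2', 'weight'],
).sort_values('weight', ascending=False, ignore_index=True)
print(result.head())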