Is there any chance of optimizing this:
import itertools
data = [['apple', 'banana', 'banana'], ['apple', 'strawberry'], ['banana', 'lemon']]
text = itertools.chain(*data)
for i in set(text):
    print(i, sum(1 for j in data if i in j))
Output:
strawberry 1
lemon 1
apple 2
banana 2
from collections import Counter

c = Counter()
for d in data:
    c.update(set(d))
print(c)

Output:

Counter({'apple': 2, 'banana': 2, 'strawberry': 1, 'lemon': 1})
Use a collections.Counter() object to count documents per word:
from collections import Counter

data = [['apple', 'banana', 'banana'], ['apple', 'strawberry'], ['banana', 'lemon']]
counts = Counter()
for document in data:
    # count unique words only; one count per document
    counts.update(set(document))
Demo:

>>> from collections import Counter
>>> data = [['apple', 'banana', 'banana'], ['apple', 'strawberry'], ['banana', 'lemon']]
>>> counts = Counter()
>>> for document in data:
...     # count unique words only; one count per document
...     counts.update(set(document))
...
>>> for word, documentcount in counts.most_common():
...     print(word, documentcount)
...
apple 2
banana 2
strawberry 1
lemon 1
Using Counter and itertools, you can write it as a single line of code:
from collections import Counter
import itertools
Counter(itertools.chain(*map(set, data)))
Result:
Counter({'apple': 2, 'banana': 2, 'strawberry': 1, 'lemon': 1})
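A near-equivalent variant (a style suggestion, not part of the answer above) uses itertools.chain.from_iterable, which consumes the outer list lazily instead of unpacking it into call arguments with *:

```python
from collections import Counter
from itertools import chain

data = [['apple', 'banana', 'banana'], ['apple', 'strawberry'], ['banana', 'lemon']]

# from_iterable iterates the documents one at a time rather than
# expanding them all into arguments, which helps with many documents
counts = Counter(chain.from_iterable(map(set, data)))
```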
Using elementary building blocks (set and dict):

res = {}
for lst in data:
    for word in set(lst):
        if word not in res:
            res[word] = 0
        res[word] += 1
print(res)
which runs in roughly O(n) time (a single pass over every word, with constant-time dict lookups), whereas your code re-scans every document for each distinct word, which is O(n^2) in the worst case.
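The explicit "if word not in res" initialization can be folded away with collections.defaultdict; this is a sketch of the same linear-time counting, not part of the original answer:

```python
from collections import defaultdict

data = [['apple', 'banana', 'banana'], ['apple', 'strawberry'], ['banana', 'lemon']]

# defaultdict(int) supplies a starting count of 0 for unseen keys,
# removing the need to initialize each word by hand
res = defaultdict(int)
for lst in data:
    for word in set(lst):  # dedupe within a document: one count per document
        res[word] += 1
```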