I have a text and a list of 60k words. I need to find anagrams of the text built from 3 words of the list (the last ones in alphabetical order), and the function should return a tuple of the 3 words that form an anagram of the given text. NOTE: I have to ignore capital letters and spaces in the text.
I have written a function that finds all the words of the list whose letters are contained in the text, but I don't know how to finish finding the anagrams.
def es3(words_list, text):
    # normalize: drop spaces and lowercase
    text = text.replace(" ", "").lower()
    result = []
    for x in words_list:
        if len(x) >= 2:
            # count letters, so multiplicity is respected: each letter of x
            # must occur at least as often in the text
            if all(x.count(c) <= text.count(c) for c in set(x)):
                result.append(x)
    return result
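For reference, the same multiset containment check can also be written with collections.Counter; a small sketch (the helper name `fits` is illustrative, not part of the exercise):

```python
from collections import Counter

def fits(word, text):
    """True if the letters of word are available in text (with multiplicity)."""
    # Counter subtraction keeps only positive counts; empty means contained
    return not (Counter(word) - Counter(text))

print(fits("treni", "andreasterbini"))  # True: all letters are available
print(fits("zzz", "andreasterbini"))    # False: there is no 'z' in the text
```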
Examples:
text = "Andrea Sterbini" -> anagram = ('treni', 'sia', 'brande')
sorted("andreasterbini") == sorted("treni" + "sia" + "brande")
"Angelo Monti" -> ('toni', 'nego', 'mal')
"Angelo Spognardi" -> ('sragion', 'pend', 'lago')
"Ha da veni Baffone" -> ('video', 'beh', 'affanna')
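These examples can be checked mechanically; a quick sanity check, using the normalization described in the question (lowercase, spaces removed):

```python
def normalize(text):
    # lowercase, drop spaces, and sort the remaining letters
    return sorted(text.replace(" ", "").lower())

examples = {
    "Andrea Sterbini": ("treni", "sia", "brande"),
    "Angelo Monti": ("toni", "nego", "mal"),
    "Angelo Spognardi": ("sragion", "pend", "lago"),
    "Ha da veni Baffone": ("video", "beh", "affanna"),
}
for text, triple in examples.items():
    # an anagram uses exactly the same letters, so the sorted letters match
    assert normalize(text) == sorted("".join(triple)), text
print("all examples check out")
```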
The naive algorithm would be: for every 3-combination of words, add an entry
(sorted letters of the three words, triplet)
to a multimap (a map that may accept more than one value per key: in Python, a regular dict key -> [values]). The problem is that the construction of the multimap has O(N^3) time and space complexity. If N = 60,000, that is 216,000 billion operations and values. That's a lot!
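On a tiny word list the naive construction looks like this (a sketch; the real list has 60,000 entries, which is exactly why this approach does not scale):

```python
from itertools import combinations

words = ["toni", "nego", "mal", "treni", "sia"]  # tiny sample list

multimap = {}  # sorted letters of the triplet -> list of triplets
for triplet in combinations(words, 3):
    key = "".join(sorted("".join(triplet)))
    multimap.setdefault(key, []).append(triplet)

# look up the sorted letters of the normalized text
print(multimap.get("".join(sorted("angelomonti")), []))
# [('toni', 'nego', 'mal')]
```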
Let's try to reduce this. Let me rephrase the problem: given a sequence, find three subsequences that 1. are non-overlapping and cover the sequence; 2. are in a given set. Take your example "Angelo Monti" -> ('toni', 'nego', 'mal'):
sequence  a e g i l m n n o o t
subseq1   . . . i . . n . . o t   (toni)
subseq2   . e g . . . . n o . .   (nego)
subseq3   a . . . l m . . . . .   (mal)
Finding three non-overlapping subsequences that cover the sequence is the same problem as partitioning a set of n elements into k groups. The number of such partitions is the Stirling number of the second kind S(n, k), which is bounded by 1/2 * C(n, k) * k^(n-k). Hence, enumerating all the partitions of n elements into k groups has an O(n^k * k^(n-k)) complexity.
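For intuition about the size of this search space, S(n, k) satisfies the standard recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1); a small sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of ways to partition n elements into k non-empty groups."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    # the n-th element either joins one of k existing groups,
    # or forms a new group on its own
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(4, 3))   # 6
print(stirling2(11, 3))  # 28501: partitions of "angelomonti" into 3 groups
```

So for an 11-letter text the enumeration stays small, but it grows exponentially with the text length.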
Let's try to implement this in Python:
def partitions(S, k):
    if len(S) < k:          # can't partition if there are not enough elements
        raise ValueError()
    elif k == 1:
        yield tuple([S])    # one group: return the whole set
    elif len(S) == k:
        yield tuple(map(list, S))  # one element per group: ([e1], ..., [en])
    else:
        e, *M = S           # extract the first element
        for p in partitions(M, k - 1):  # we need k-1 groups because...
            yield ([e], *p)             # ...the first element is a group on its own
        for p in partitions(M, k):
            for i in range(len(p)):     # add the first element to every group in turn
                yield tuple(list(p[:i]) + [[e] + p[i]] + list(p[i + 1:]))
A simple test:
>>> list(partitions("abcd", 3))
[(['a'], ['b'], ['c', 'd']), (['a'], ['b', 'c'], ['d']), (['a'], ['c'], ['b', 'd']), (['a', 'b'], ['c'], ['d']), (['b'], ['a', 'c'], ['d']), (['b'], ['c'], ['a', 'd'])]
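Each yielded partition should be a disjoint cover of the input, and the count should match S(4, 3) = 6; a quick property check (partitions is repeated verbatim so the snippet runs standalone):

```python
from itertools import chain

def partitions(S, k):  # same generator as above
    if len(S) < k:
        raise ValueError()
    elif k == 1:
        yield tuple([S])
    elif len(S) == k:
        yield tuple(map(list, S))
    else:
        e, *M = S
        for p in partitions(M, k - 1):
            yield ([e], *p)
        for p in partitions(M, k):
            for i in range(len(p)):
                yield tuple(list(p[:i]) + [[e] + p[i]] + list(p[i + 1:]))

for p in partitions("abcd", 3):
    # non-overlapping and covering: the groups rebuild the input multiset
    assert sorted(chain(*p)) == sorted("abcd")
assert len(list(partitions("abcd", 3))) == 6  # S(4, 3)
print("partition properties hold")
```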
Now, I will use as word list some words you used in your question:
words = "i have a text and a list of words i need to find anagrams of the text from the list of words using words lasts in alphabetic order and the function should return a tuple of the words that build an anagram of the given text note i have to ignore capital letters and spaces that are in the text i have developed the function that finds all the words of the list of words that are contained in the text but i dont know how to end finding the anagrams and some examples treni sia brande toni nego mal sragion pend lago video beh affanna".split(" ")
And build a dict sorted(letters) -> set of words to check the groups against:
word_by_sorted = {}
for w in words:
    word_by_sorted.setdefault("".join(sorted(w)), set()).add(w)
The result is:
>>> word_by_sorted
{'i': {'i'}, 'aehv': {'have'}, 'a': {'a'}, 'ettx': {'text'}, 'adn': {'and'}, 'ilst': {'list'}, 'fo': {'of'}, 'dorsw': {'words'}, 'deen': {'need'}, 'ot': {'to'}, 'dfin': {'find'}, 'aaagmnrs': {'anagrams'}, 'eht': {'the'}, 'fmor': {'from'}, 'ginsu': {'using'}, 'alsst': {'lasts'}, 'in': {'in'}, 'aabcehilpt': {'alphabetic'}, 'deorr': {'order'}, 'cfinnotu': {'function'}, 'dhlosu': {'should'}, 'enrrtu': {'return'}, 'elptu': {'tuple'}, 'ahtt': {'that'}, 'bdilu': {'build'}, 'an': {'an'}, 'aaagmnr': {'anagram'}, 'eginv': {'given'}, 'enot': {'note'}, 'eginor': {'ignore'}, 'aacilpt': {'capital'}, 'eelrstt': {'letters'}, 'acepss': {'spaces'}, 'aer': {'are'}, 'ddeeelopv': {'developed'}, 'dfins': {'finds'}, 'all': {'all'}, 'acdeinnot': {'contained'}, 'btu': {'but'}, 'dnot': {'dont'}, 'know': {'know'}, 'how': {'how'}, 'den': {'end'}, 'dfgiinn': {'finding'}, 'emos': {'some'}, 'aeelmpsx': {'examples'}, 'einrt': {'treni'}, 'ais': {'sia'}, 'abdenr': {'brande'}, 'inot': {'toni'}, 'egno': {'nego'}, 'alm': {'mal'}, 'aginors': {'sragion'}, 'denp': {'pend'}, 'aglo': {'lago'}, 'deiov': {'video'}, 'beh': {'beh'}, 'aaaffnn': {'affanna'}}
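A group of letters is then matched by sorting it and looking it up; a small sketch on a sample list (the words "note" and "tone" are added here just to show that anagrams share a key):

```python
word_by_sorted = {}
for w in ["toni", "nego", "mal", "note", "tone"]:  # tiny sample list
    word_by_sorted.setdefault("".join(sorted(w)), set()).add(w)

# all words with the same sorted letters land under the same key
print(word_by_sorted["enot"])  # {'note', 'tone'} (set, order may vary)
# an unknown group of letters yields the empty set
print(word_by_sorted.get("".join(sorted("xyz")), set()))  # set()
```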
Now, put the bricks together: check every partition of the text into three groups, and output the words whenever every group is an anagram of a word in the list:
import itertools

for p in partitions("angelomonti", 3):
    L = [word_by_sorted.get("".join(sorted(xs)), set()) for xs in p]
    for anagrams in itertools.product(*L):
        print(anagrams)
Remarks:
word_by_sorted.get("".join(sorted(xs)), set())
searches the sorted group of letters as a string in the dict, and returns a set of words, or an empty set if there is no match.
itertools.product(*L)
creates the cartesian product of the found sets. If one set is empty (a group with no match), the product is empty by definition. Output (there is a reason for the duplicates, try to find it!):
('nego', 'mal', 'toni')
('mal', 'nego', 'toni')
('mal', 'nego', 'toni')
('mal', 'nego', 'toni')
What's important here is that the number of words is no longer an issue (a lookup in a dict is amortized O(1)), but the length of the text to search may become one.
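Putting the bricks into the original function, es3 can be finished along these lines. This is a sketch that returns the first triple found; the exercise's tie-breaking rule ("last in alphabetical order") may require enumerating all solutions and picking one instead. partitions is repeated so the snippet runs standalone:

```python
import itertools

def partitions(S, k):  # same generator as above
    if len(S) < k:
        raise ValueError()
    elif k == 1:
        yield tuple([S])
    elif len(S) == k:
        yield tuple(map(list, S))
    else:
        e, *M = S
        for p in partitions(M, k - 1):
            yield ([e], *p)
        for p in partitions(M, k):
            for i in range(len(p)):
                yield tuple(list(p[:i]) + [[e] + p[i]] + list(p[i + 1:]))

def es3(words_list, text):
    # normalize: drop spaces and lowercase
    text = text.replace(" ", "").lower()
    # multimap: sorted letters -> set of words
    word_by_sorted = {}
    for w in words_list:
        word_by_sorted.setdefault("".join(sorted(w)), set()).add(w)
    # try every partition of the letters into 3 groups
    for p in partitions(text, 3):
        L = [word_by_sorted.get("".join(sorted(xs)), set()) for xs in p]
        for triple in itertools.product(*L):
            return triple  # first anagram found
    return None

print(es3(["toni", "nego", "mal", "treni"], "Angelo Monti"))
# some ordering of ('toni', 'nego', 'mal')
```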