I have two big CSV files. The main one has a product name field, and the other contains keywords. I am searching for these keywords in the product names of the first CSV file. At the moment my code looks like this:
class Keyword:
    """Collects matched keywords and accumulates their counts."""
    def __init__(self):
        self.data = {}

    def add(self, keyword, count):
        # `in self.data` is enough; no need to build .keys()
        if keyword in self.data:
            self.data[keyword] += count
        else:
            self.data[keyword] = count

    def get_match(self):
        # Flatten the dict into [key1, value1, key2, value2, ...]
        temp = []
        for key, value in self.data.items():  # iteritems() is Python 2 only
            temp.append(key)
            temp.append(value)
        return temp
for i, product_row in product_df.iterrows():
    product_title = product_row['title'].lower().replace(',', '')
    k = Keyword()
    for j, keyword_row in keyword_df.iterrows():
        if keyword_row['keyword'] in product_title:
            k.add(keyword_row['keyword'], keyword_row['count'])
    match_items = k.get_match()
    if match_items:
        temp = product_row.tolist()
        temp = [str(x).replace(',', '') for x in temp]
        temp.extend(match_items)
        print(str(temp).strip('[]').replace("'", ''))
This code is extremely slow, and I have many such CSV files that need to be compared with each other. Do you know a more efficient way to compare these files?
If your keywords are actually single words rather than multi-word expressions, my first suggestion is to turn the product title into a set, which makes lookups much faster:
product_title = set(product_row['title'].lower().replace(',','').split())
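A minimal sketch of how the set lookup could replace the inner loop; the sample title and keyword counts below are made-up data for illustration:

```python
# Split the title once into a set of words: each membership test is then
# O(1) on average, instead of a substring scan per keyword.
product_title = "Red Cotton T-Shirt, Large"
title_words = set(product_title.lower().replace(',', '').split())

keywords = {"cotton": 3, "shirt": 1, "wool": 2}  # keyword -> count

# Keep only the keywords that appear as whole words in the title.
matches = {kw: cnt for kw, cnt in keywords.items() if kw in title_words}
print(matches)  # {'cotton': 3}
```

Note that this matches whole tokens only: "shirt" does not match the token "t-shirt", which is exactly the single-word caveat above.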
Read the entire keyword file and store the keywords in a list. Then read the product fields and check whether any of the keywords appear in each field; if they do, print the product.
with open("keywords.txt", "r") as f:
    keywords = f.read().splitlines()

with open("products.txt") as f:
    for product_name in f:
        if any(keyword in product_name for keyword in keywords):
            print(product_name)
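Since the question already loads the data into pandas DataFrames, another option worth trying is to combine all keywords into one regex alternation and let pandas scan every title in a single vectorized pass, instead of a Python loop per (product, keyword) pair. This is a sketch assuming single-word keywords and the `title`/`keyword` column names from the question; the two DataFrames below are made-up stand-ins:

```python
import re
import pandas as pd

# Stand-ins for the question's product_df and keyword_df.
product_df = pd.DataFrame({"title": ["Red Cotton Shirt", "Blue Wool Hat"]})
keyword_df = pd.DataFrame({"keyword": ["cotton", "wool", "silk"],
                           "count": [3, 2, 5]})

# One alternation pattern for all keywords; re.escape guards any
# regex metacharacters that might appear in a keyword.
pattern = "|".join(re.escape(k) for k in keyword_df["keyword"])

# findall returns, per title, the list of keywords found in it.
product_df["matches"] = (product_df["title"]
                         .str.lower()
                         .str.findall(pattern))
print(product_df)
```

Rows with an empty `matches` list can then be filtered out, and counts joined back from `keyword_df`, without ever calling `iterrows()`.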