
How to count occurrences of a word in a certain element in a text file?

This is the code that I have so far. My problem is that it goes through every single word in the text file, but I only want it to go through the last word of each line (the genre of the book: religion, etc.).

import string

# Open the file in read mode
text = open("book_data_file.txt", "r")

# Create an empty dictionary
d = dict()

# Loop through each line of the file
for line in text:
    # Remove the leading spaces and newline character
    line = line.strip()

    # Convert the characters in line to
    # lowercase to avoid case mismatch
    line = line.lower()

    # Remove the punctuation marks from the line
    line = line.translate(line.maketrans("", "", string.punctuation))

    # Split the line into words
    words = line.split(" ")

    # Iterate over each word in line
    for word in words:
        # Check if the word is already in dictionary
        if word in d:
            # Increment count of word by 1
            d[word] = d[word] + 1
        else:
            # Add the word to dictionary with count 1
            d[word] = 1

# Print the contents of dictionary
for key in list(d.keys()):
    print(key, ":", d[key])

And this is a screenshot of the text file (book text file).

My desired output is religion: 4, science: 3, fiction: 2, etc.

Any help would be appreciated.

Using pandas:

import pandas as pd

df = pd.read_csv('file.txt', sep=',')
words_count = df['GENRE'].value_counts()
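To illustrate how this behaves, here is a minimal sketch that builds the DataFrame from an in-memory string instead of `file.txt`; the column names and rows are made-up assumptions standing in for the real file, which is assumed to have a header row with a `GENRE` column:

```python
import pandas as pd
from io import StringIO

# Hypothetical file contents; the real file's columns are assumptions here
data = StringIO(
    "TITLE,AUTHOR,GENRE\n"
    "Book A,Author A,religion\n"
    "Book B,Author B,science\n"
    "Book C,Author C,religion\n"
)

df = pd.read_csv(data, sep=',')

# value_counts() tallies each distinct value in the GENRE column
words_count = df['GENRE'].value_counts()
print(words_count)
```

`value_counts()` returns a Series indexed by genre, sorted by count in descending order, which is exactly the genre-to-count mapping the question asks for.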

Edit:

Just take the last word using indexing: word = line.split(" ")[-1]. Ignore the first line, because it contains the headings, and also skip any blank lines, by using:

if idx==0 or len(line)==0:
     continue

book.txt:

a, b, c, d
a1, b1, c1, d1
a2, b2, c2, d2
a3, b3, c3, d1
a4, b4, c4, d1
a5, b5, c5, d3

import string

# Open the file in read mode
text = open("book.txt", "r")

# Create an empty dictionary
d = dict()

# Loop through each line of the file
for idx, line in enumerate(text):

    # Remove the leading spaces and newline character
    line = line.strip()

    # Skip the heading line and any blank lines
    if idx == 0 or len(line) == 0:
        continue

    # Convert the characters in line to
    # lowercase to avoid case mismatch
    line = line.lower()

    # Remove the punctuation marks from the line
    line = line.translate(line.maketrans("", "", string.punctuation))

    # Take only the last word of the line
    word = line.split(" ")[-1]

    # Check if the word is already in dictionary
    if word in d:
        # Increment count of word by 1
        d[word] = d[word] + 1
    else:
        # Add the word to dictionary with count 1
        d[word] = 1

# Print the contents of dictionary
for key in list(d.keys()):
    print(key, ":", d[key])

d1 : 3
d2 : 1
d3 : 1

If you don't want to use pandas, you're on the right track using a dict. There is actually a subclass of dict in the standard library that does exactly what you want: collections.Counter.

import string
from collections import Counter

def tokenize(line: str):
    # Remove the leading spaces and newline character 
    line = line.strip() 

    # Convert the characters in line to 
    # lowercase to avoid case mismatch 
    line = line.lower() 

    # Remove the punctuation marks from the line 
    line = line.translate(line.maketrans("", "", string.punctuation)) 

    # Split the line into words and return them
    words = line.split(" ")
    return words


def iter_tokens(lines):
    for line in lines:
        yield from tokenize(line)


# Open the file in read mode
with open("book.txt", "r") as text:
    counts = Counter(iter_tokens(text))

print(counts)
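Since the question only wants the last word of each line, Counter can also be built directly from those last words, skipping the header line. A minimal sketch, using in-memory sample lines shaped like the book.txt example above rather than the asker's real file:

```python
from collections import Counter

# Sample lines standing in for book.txt (same shape as the example above)
lines = [
    "a, b, c, d",       # header row, skipped below
    "a1, b1, c1, d1",
    "a2, b2, c2, d2",
    "a3, b3, c3, d1",
]

# Take the last comma-separated field of each line, excluding the header
genres = [line.split(",")[-1].strip().lower() for line in lines[1:]]

# Counter tallies the occurrences of each genre in one step
counts = Counter(genres)
print(counts)
```

Counter also provides `counts.most_common()`, which returns the (genre, count) pairs sorted from most to least frequent.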
