
Scraping PDF - Checking word frequency on specified keywords

I've created a parser that scrapes keywords from a PDF document. Currently, it extracts the top keywords and shows the frequency (how many times each word is repeated in the document).

At this point, I'm looking to check the frequency of specific keywords; however, when I enter the desired keyword, it joins the word together with the top word and gives the same frequency.

Ideally, I'd like to be able to check the frequency of the keywords 1) "GRI" and 2) "CDP".

Would greatly appreciate anyone's help here!

import PyPDF2  # was missing, but PyPDF2 is used below
import pandas as pd
import textract
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import nltk

nltk.download('stopwords')

pdffileobj=open('sample.pdf', 'rb')
pdfreader=PyPDF2.PdfFileReader(pdffileobj)
num_pages=pdfreader.numPages
count = 0
text= " "

while count < num_pages:
    pageObj = pdfreader.getPage(count)
    count +=1
    text += pageObj.extractText()

if text != "":
   text = text
else:
   text = textract.process(fileurl, method='tesseract', language='eng')

nltk.download('punkt')
tokens=word_tokenize(text)

punctuations = ['(',')',';',':','[',']',',','!','=','==','<','>','@','#','$','%','^','&','*','.','//','{','}','...','``','+',"''",]

stop_words = stopwords.words('english')

keywords = [word for word in tokens if word not in stop_words and word not in punctuations]

# print(keywords)
#At this point all the keywords in the document show up 

freq = pd.Series(' '.join(keywords).split()).value_counts()

#Print results show with frequency
print(freq)


To get a single word's frequency from pandas' value_counts() output, you just index it by the word:

freq['word_you_want']
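For example, here is a minimal sketch with an assumed token list (the words below are illustrative, not from your PDF). It also uses `.get()` so a keyword that never appears in the document returns 0 instead of raising a `KeyError`:

```python
import pandas as pd

# Assumed sample tokens standing in for the `keywords` list your parser builds
keywords = ["GRI", "CDP", "GRI", "sustainability", "GRI", "CDP"]

# Same counting step as in your script
freq = pd.Series(keywords).value_counts()

# Direct indexing works when the word is present in the document...
print(freq["GRI"])        # 3
print(freq["CDP"])        # 2

# ...and .get() avoids a KeyError when it is not
print(freq.get("ESG", 0)) # 0
```

Note that `pd.Series(keywords)` is enough here; re-joining the list with `' '.join(...)` and splitting it again, as in your script, is unnecessary since each token is already a single word.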
