I am trying to scrape the titles of the PDFs on this website. However, I get the titles and the links. Why, and how can I fix this?
import requests
from bs4 import BeautifulSoup
import numpy as np

publications = []
text = []
for i in np.arange(12, 19):
    response = requests.get('https://occ.ca/our-publications/page/{}/'.format(i), headers={'User-Agent': 'Mozilla'})
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'lxml')
        pdfs = soup.findAll('div', {"class": "publicationoverlay"})
        links = [pdf.find('a').attrs['href'] for pdf in pdfs]
        publications.extend(links)
        text.extend(pdfs)
Any help would be much appreciated.
You want the .text, though split on '\t' (to exclude the a tag's text) and stripped. I use a Session for efficiency.
import requests
from bs4 import BeautifulSoup
import numpy as np

publications = []
text = []
with requests.Session() as s:
    for i in np.arange(12, 19):
        response = s.get('https://occ.ca/our-publications/page/{}/'.format(i), headers={'User-Agent': 'Mozilla'})
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'lxml')
            pdfs = soup.findAll('div', {"class": "publicationoverlay"})
            text.extend([pdf.text.strip().split('\t')[0] for pdf in pdfs])
You could also remove the child a tags with decompose, after grabbing each href and before taking the .text of the parent:
import requests
from bs4 import BeautifulSoup
import numpy as np

publications = []
text = []
links = []
with requests.Session() as s:
    for i in np.arange(12, 19):
        response = s.get('https://occ.ca/our-publications/page/{}/'.format(i), headers={'User-Agent': 'Mozilla'})
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'lxml')
            for a in soup.select('.publicationoverlay a'):
                links.extend([a['href']])
                a.decompose()
            pdfs = soup.findAll('div', {"class": "publicationoverlay"})
            text.extend([pdf.text.strip() for pdf in pdfs])
print(list(zip(links, text)))
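Since the snippets above need live network access, here is a minimal offline sketch of the decompose() idea, run against made-up markup shaped like the site's publication divs (the class name and file path are assumptions for illustration):

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking one publication entry on the page
html = '''
<div class="publicationoverlay">
  <a href="/files/report.pdf">Download</a>
  Annual Report 2019
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
links = []
for a in soup.select('.publicationoverlay a'):
    links.append(a['href'])
    a.decompose()  # remove the <a> tag so its text no longer pollutes the parent's .text

titles = [div.text.strip() for div in soup.find_all('div', class_='publicationoverlay')]
print(list(zip(links, titles)))  # → [('/files/report.pdf', 'Annual Report 2019')]
```

Once the a tag is decomposed, the parent div's .text contains only the title, so no splitting on '\t' is needed.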