I want to scrape the titles of the PDFs on this website. However, I get the titles together with the link text. How can I fix this?
```python
import requests
from bs4 import BeautifulSoup
import numpy as np

publications = []
text = []
for i in np.arange(12, 19):
    response = requests.get('https://occ.ca/our-publications/page/{}/'.format(i),
                            headers={'User-Agent': 'Mozilla'})
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'lxml')
        pdfs = soup.findAll('div', {"class": "publicationoverlay"})
        links = [pdf.find('a').attrs['href'] for pdf in pdfs]
        publications.extend(links)
        text.extend(pdfs)
```
Any help would be much appreciated.
You want the .text of each div, split on \t (to exclude the child a tag's text) and stripped of surrounding whitespace. I use a Session for efficiency, so the TCP connection is re-used across requests.
```python
import requests
from bs4 import BeautifulSoup
import numpy as np

publications = []
text = []
with requests.Session() as s:
    for i in np.arange(12, 19):
        response = s.get('https://occ.ca/our-publications/page/{}/'.format(i),
                         headers={'User-Agent': 'Mozilla'})
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'lxml')
            pdfs = soup.findAll('div', {"class": "publicationoverlay"})
            text.extend([pdf.text.strip().split('\t')[0] for pdf in pdfs])
```
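To see why the split on \t works, here is a minimal offline sketch. The HTML snippet is hypothetical, mimicking a .publicationoverlay div where the title and the child a tag are separated by a tab (and using the stdlib html.parser so no lxml install is needed):

```python
from bs4 import BeautifulSoup

# Hypothetical markup: title text and child <a> separated by a tab,
# mimicking the structure of the .publicationoverlay divs
html = '<div class="publicationoverlay">Annual Report 2019\t<a href="report.pdf">Read more</a></div>'

soup = BeautifulSoup(html, 'html.parser')
div = soup.find('div', {'class': 'publicationoverlay'})

# .text includes the child <a> text: 'Annual Report 2019\tRead more'
title = div.text.strip().split('\t')[0]  # keep only what precedes the tab
print(title)  # Annual Report 2019
```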
You could also use decompose to remove each child a tag after grabbing its href and before taking the .text of the parent:
```python
import requests
from bs4 import BeautifulSoup
import numpy as np

text = []
links = []
with requests.Session() as s:
    for i in np.arange(12, 19):
        response = s.get('https://occ.ca/our-publications/page/{}/'.format(i),
                         headers={'User-Agent': 'Mozilla'})
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'lxml')
            for a in soup.select('.publicationoverlay a'):
                links.append(a['href'])
                a.decompose()
            pdfs = soup.findAll('div', {"class": "publicationoverlay"})
            text.extend([pdf.text.strip() for pdf in pdfs])
print(list(zip(links, text)))
```
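The decompose approach can likewise be checked offline. This sketch uses a hypothetical snippet shaped like the site's markup (and the stdlib html.parser): after decompose removes the a tag, the parent div's .text contains only the title.

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking a .publicationoverlay div
html = '''
<div class="publicationoverlay">Annual Report 2019
    <a href="https://occ.ca/report-2019.pdf">Read the report</a>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
links = []
for a in soup.select('.publicationoverlay a'):
    links.append(a['href'])
    a.decompose()  # remove the <a> so its text no longer pollutes the parent's .text

titles = [div.text.strip() for div in soup.select('.publicationoverlay')]
print(list(zip(links, titles)))  # [('https://occ.ca/report-2019.pdf', 'Annual Report 2019')]
```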