Need some help identifying the HTML tag that will allow me to pull all the relevant headlines, links and image URLs. My code is currently displaying only one.
I used the Requests library to access the website and BeautifulSoup to parse the HTML. I would like my scraper to be able to scrape at least 4 headlines, with their links and image URLs, from the website. I know it is an HTML tag issue, but I have failed to locate which tag. I have uploaded what I have done so far. The code displays the 1st headline, image URL and headline link.
```python
from bs4 import BeautifulSoup
import requests

# user agent to facilitate end-user interaction with web content
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101'}

# identifying the website to be scraped
source = requests.get('https://www.jse.co.za/', headers=headers).text
# print(source)  # verify the HTML for the page

soup = BeautifulSoup(source, 'lxml')  # html parser
# print(soup.prettify())  # check that the HTML has been parsed

for item in soup.find_all('div', {'class': 'view-content row row-flex'})[0:4]:  # indexing
    text = item.find('h3', {'class': 'card__title'}).text.strip()
    img = item.find('img', {'class': 'media__image'})
    link = item.find('a')
    article_link = link.attrs['href']
    print('ARTICLE HEADLINE')
    print(text)
    print('IMAGE URL')
    print(img['data-src'])
    print('LINK TO ARTICLE')
    print(article_link)
    print()
```
Output:
```
# looking at output of 4 headlines
ARTICLE HEADLINE
South Africa offers investment opportunities to Asia Pacific investors
# looking at output of at least 4 Image URL's
IMAGE URL
/sites/default/files/styles/standard_lg/public/medial/images/2021-06/Web_Banner_0.jpg?h=4ae650de&itok=hdGEy5jA
# I was hoping to scrape at least 4 links
LINK TO ARTICLE
/news/market-news/south-africa-offers-investment-opportunities-asia-pacific-investors
```
Looking at that JSE site, they are using the `article` tag to list each of the news items, along with the `card` class, so I would suggest using `for article in soup.find_all('article')` to split up by those, then within each one get the inner items.
Update: fully working example.
```python
from bs4 import BeautifulSoup
import requests

base_url = 'https://www.jse.co.za'
source = requests.get(base_url).text
print("Got source")

soup = BeautifulSoup(source, 'html.parser')
print("Parsed source")

articles = soup.find_all("article", class_="card")
print(f"Number of articles found: {len(articles)}")

for article in articles:
    print("----------------------------------------------------")
    headline = article.h3.text.strip()
    link = base_url + article.a['href']
    text = article.find("div", class_="field--type-text-with-summary").text.strip()
    img_url = base_url + article.picture.img['data-src']
    print(headline)
    print(link)
    print(text)
    print("Image: " + img_url)
```
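One caveat with the example above: if any card lacks an image, summary or link, attribute access like `article.picture.img` raises `AttributeError`. As a defensive variant, here is a sketch that guards against missing tags and resolves relative URLs with `urljoin`. The sample HTML below is a made-up stand-in for the JSE card markup, not the real page:

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

BASE_URL = 'https://www.jse.co.za'

# Hypothetical sample mimicking the article/card structure described above.
sample_html = """
<article class="card">
  <h3 class="card__title">Sample headline</h3>
  <a href="/news/sample-article">Read more</a>
  <picture><img class="media__image" data-src="/sites/default/files/sample.jpg"></picture>
</article>
<article class="card">
  <h3 class="card__title">Card without an image</h3>
  <a href="/news/no-image">Read more</a>
</article>
"""

soup = BeautifulSoup(sample_html, 'html.parser')
results = []
for article in soup.find_all('article', class_='card'):
    # Each lookup falls back to None instead of raising when a tag is absent.
    headline = article.h3.get_text(strip=True) if article.h3 else None
    link_tag = article.find('a', href=True)
    link = urljoin(BASE_URL, link_tag['href']) if link_tag else None
    img = article.find('img')
    # data-src may be absent on some images; fall back to src.
    img_url = urljoin(BASE_URL, img.get('data-src') or img.get('src')) if img else None
    results.append({'headline': headline, 'link': link, 'image': img_url})

for r in results:
    print(r)
```

The second sample card has no image, so its `image` field comes back as `None` rather than crashing the loop, which is useful when scraping a live page whose cards are not all identical.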