Python - Web scraping using Scrapy
I have just started learning web scraping with the Scrapy framework. I am trying to scrape drug reviews from a medical website using the code below. However, when I run "scrapy runspider spiders/medreview.py -o med.csv", I get errors like "INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)" and med.csv contains no data.
# Importing the Scrapy library
import scrapy

# Creating a new class to implement the Spider
class MedSpider(scrapy.Spider):
    # Spider name
    name = 'reviews'
    # Domain names to scrape
    allowed_domains = ['1mg.com']
    # Base URL for the drug reviews
    myBaseUrl = "https://www.1mg.com/otc/becosules-z-capsule-otc63496/amp"

    # Defining a Scrapy parser
    def parse(self, response):
        data = response.css('.OtcPage__reviews-container___hrKgt')
        ##data = response.css('.ReviewCards__review-card___3Z733')
        # Collecting user reviews
        comments = data.css('.ReviewCards__review-description___WoLdZ')
        count = 0
        # Combining the results
        for review in comments:
            yield {'comment': ''.join(review.xpath('.//text()').extract())}
            count = count + 1
Added "start_urls = myBaseUrl" as @stranac suggested in the comments. Now I am getting some errors in the console.
2020-09-28 16:04:34 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "E:\anaconda\lib\site-packages\scrapy\core\engine.py", line 129, in _next_request
    request = next(slot.start_requests)
  File "E:\anaconda\lib\site-packages\scrapy\spiders\__init__.py", line 77, in start_requests
    yield Request(url, dont_filter=True)
  File "E:\anaconda\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
    self._set_url(url)
  File "E:\anaconda\lib\site-packages\scrapy\http\request\__init__.py", line 69, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: h
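The "Missing scheme in request url: h" error comes from setting start_urls to a bare string: Scrapy iterates over start_urls, and iterating a string yields individual characters, so the first "URL" it tries is just "h". A minimal sketch of the fix, using the asker's URL:

```python
my_base_url = "https://www.1mg.com/otc/becosules-z-capsule-otc63496/amp"

# Wrong: a bare string. Iterating it yields single characters,
# which is why Scrapy complains about the "URL" being just "h".
first_wrong = next(iter(my_base_url))
print(first_wrong)  # h

# Right: start_urls must be a list (or other iterable) of URL strings.
start_urls = [my_base_url]
first_right = next(iter(start_urls))
print(first_right)  # the full URL
```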
You are doing a couple of things wrong. You are trying to scrape reviews from a page where they don't exist. You can find the reviews here or here, so you need to use one of those suggested URLs. To access the data, you also have to define headers in the request. Given that, here is one way you can parse the data:
import scrapy

class MedSpider(scrapy.Spider):
    name = 'reviews'
    start_urls = [
        # "https://www.1mg.com/otc/becosules-z-capsule-otc63496"
        "https://www.1mg.com/otc/becosules-z-capsule-otc63496/reviews"
    ]
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36"}

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, headers=self.headers)

    def parse(self, response):
        for review in response.css("[class^='ReviewCards__review-card']"):
            reviewer_name = review.css("[class^='ReviewCards__name']::text").get()
            reviewer_rating = review.css("[class^='Rating__ratings-container'] > span::text").get()
            print(reviewer_name, reviewer_rating)
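Since the original goal was to export results with "-o med.csv", the parse method should yield items rather than print them; Scrapy's feed export writes every yielded dict as a CSV row. A sketch reusing the selectors above (the review-description selector for the comment text is an assumption carried over from the asker's original code):

```python
# Sketch of a parse method that yields items instead of printing, so that
# `scrapy runspider medreview.py -o med.csv` actually writes rows.
# The review-description selector is assumed from the asker's original code.
def parse(self, response):
    for review in response.css("[class^='ReviewCards__review-card']"):
        yield {
            "name": review.css("[class^='ReviewCards__name']::text").get(),
            "rating": review.css("[class^='Rating__ratings-container'] > span::text").get(),
            # join all text fragments of the review body into one string
            "comment": " ".join(
                review.css("[class^='ReviewCards__review-description'] *::text").getall()
            ).strip(),
        }
```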
Statement: the technical posts on this site follow the CC BY-SA 4.0 license. If you need to repost, please credit this site's URL or the original source. For any questions, contact: yoyou2525@163.com.