How to determine if the generator returned from `yield scrapy.Request` has any data?
In the Scrapy tutorial, the spider extracts the next-page link from the class="next" element and follows it -
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
In my case, the file downloaded from the web server contains no next-page link, but I know the URL format is response.url joined with /page/[page number]/. Requesting a page that yields no quotes still returns a response, e.g. "No quotes found!". Since there are usually fewer than 20 pages, I can iterate over every possible URL by replacing the last 3 lines of the spider with -
for page_num in range(2, 20):
    yield response.follow(f"/page/{page_num}/", callback=self.parse)
However, this forces the spider to request pages that yield no quotes (e.g. http://quotes.toscrape.com/page/11 through 20). How can I adjust my spider to terminate the page_num loop once it hits the first page that yields no quotes (e.g. http://quotes.toscrape.com/page/11 )?
Pseudocode -

page_num = 2
while (quotes are yielded from the response):
    yield response.follow(f"/page/{page_num}/", callback=self.parse)
    page_num += 1
You can use the result of response.css('..') as the condition for requesting the next page.
In that case your code will look like this:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        # get_pagenumber_from_url - helper that extracts the current page
        # number from response.url
        page_num = get_pagenumber_from_url(response.url)

        quotes_sel = response.css('div.quote')
        # quotes_sel is a SelectorList: non-empty (truthy) if the page has
        # item data, empty (falsy) if it doesn't
        for quote in quotes_sel:
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

        # Only request the next page if this page actually yielded quotes
        if quotes_sel:
            next_page_url = f"/page/{page_num + 1}/"
            yield response.follow(next_page_url, callback=self.parse)
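The answer calls get_pagenumber_from_url without defining it. A minimal sketch of such a helper, assuming the page number appears as /page/<n>/ in the URL (as on quotes.toscrape.com) and defaulting to 1 when it is absent, could be:

```python
import re

def get_pagenumber_from_url(url):
    # Extract the trailing page number from URLs like
    # http://quotes.toscrape.com/page/3/ ; default to 1 if no
    # /page/<n>/ segment is present (e.g. the site root).
    match = re.search(r"/page/(\d+)", url)
    return int(match.group(1)) if match else 1
```

Any equivalent parsing (e.g. splitting the path on "/") works; the only requirement is that it returns the integer the spider increments to build the next-page URL.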
Statement: the technical posts on this site follow the CC BY-SA 4.0 license; if you repost, please credit this site or the original source. For any questions contact: yoyou2525@163.com.