Scrapy spider crawls infinitely
Task: My spider should crawl every link of the entire domain and identify whether each one is a product link or a category link, but write only the product links to items.
I set up a rule that allows URLs containing "a-", because it is contained in every product link.
My if condition should simply check whether `productean` is listed; if it is, that is a double check and the link should definitely be a product link.
After that step, it should save the link to my items.
Problem: The spider collects all links instead of only the links containing "a-".
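As background, `LinkExtractor` treats `allow` as a regular expression that is searched against each extracted URL. A minimal stdlib sketch of what the `/a-` filter matches (the example URLs below are hypothetical, not real pages from the site):

```python
import re

# LinkExtractor's allow='/a-' is compiled as a regex and searched
# against each extracted URL, so it matches '/a-' anywhere in the URL.
pattern = re.compile(r'/a-')

urls = [
    'https://www.topart-online.com/a-12345',    # hypothetical product URL
    'https://www.topart-online.com/kategorie',  # hypothetical category URL
]

# Keep only URLs where the pattern is found somewhere in the string.
matched = [u for u in urls if pattern.search(u)]
print(matched)  # only the '/a-' URL survives the filter
```

Note that `re.search` matches anywhere in the URL, so the pattern also fires on query strings or fragments that happen to contain `/a-`.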
EDIT: the code in use
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from ..items import LinkextractorItem


class TopArtSpider(CrawlSpider):
    name = "topart"
    allow_domains = ['topart-online.com']
    start_urls = [
        'https://www.topart-online.com'
    ]
    custom_settings = {'FEED_EXPORT_FIELDS': ['Link']}

    rules = (
        Rule(LinkExtractor(allow='/a-'), callback='parse_filter_item', follow=True),
    )

    def parse_filter_item(self, response):
        exists = response.xpath('.//div[@class="producteant"]').get()
        link = response.xpath('//a/@href')
        if exists:
            response.follow(url=link.get(), callback=self.parse)
            for a in link:
                items = LinkextractorItem()
                items['Link'] = a.get()
                yield items
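A likely cause of the behavior described above is that `response.xpath('//a/@href')` selects every anchor in the document, not just product anchors, so once a page passes the `if exists` check, every link on it gets yielded. A minimal stdlib sketch (with hypothetical hrefs) of filtering the extracted links before yielding:

```python
# Hypothetical hrefs, as the page-wide '//a/@href' selector might return them.
hrefs = [
    '/a-12345-rose',      # product link: contains 'a-'
    '/kategorie/blumen',  # category link
    '/kontakt',           # navigation link
]

# Keep only hrefs that look like product links before building items.
product_links = [h for h in hrefs if 'a-' in h]
print(product_links)
```

In the spider, the same substring check (or a stricter regex) could be applied to each `a.get()` inside the loop before constructing the item.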
# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TopartSpider(CrawlSpider):
    name = 'topart'
    allowed_domains = ['topart-online.com']
    start_urls = ['http://topart-online.com/']

    rules = (
        Rule(LinkExtractor(allow=r'/a-'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        return {'Link': response.url}