
Python - Scrapy - Creating a crawler that gets a list of URLs and crawls them

I am trying to create a spider with the package "Scrapy" that gets a list of URLs and crawls them. I have searched Stack Overflow for an answer but could not find anything that solves the issue.

My script is as follows:

import scrapy
from scrapy import Request


class Try(scrapy.Spider):
    name = "Try"

    def __init__(self, *args, **kwargs):
        super(Try, self).__init__(*args, **kwargs)
        self.start_urls = kwargs.get("urls")
        print(self.start_urls)

    def start_requests(self):
        print(self.start_urls)
        for url in self.start_urls:
            yield Request(url, self.parse)

    def parse(self, response):
        d = response.xpath("//body").extract()

When I crawl the spider:

from scrapy.crawler import CrawlerProcess

Spider = Try(urls=[r"https://www.example.com"])
process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(Spider)
process.start()

I get the following printed for self.start_urls:

  • In the __init__ function, what is printed is: [r"https://www.example.com"] (as passed to the spider).
  • In the start_requests function, what is printed is: None

Why do I get None? Is there another way to approach this issue, or is there a mistake in my spider class?

Thanks for any help given!

I would suggest passing the spider class to process.crawl and supplying the urls parameter there.

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy import Request


class Try(scrapy.Spider):
    name = 'Try'

    def __init__(self, *args, **kwargs):
        super(Try, self).__init__(*args, **kwargs)
        # keyword arguments given to process.crawl() arrive here
        self.start_urls = kwargs.get("urls")

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url, self.parse)

    def parse(self, response):
        d = response.xpath("//body").extract()


process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(Try, urls=[r'https://www.example.com'])
process.start()

If I run

process.crawl(Try, urls=[r"https://www.example.com"])

then it sends urls to Try as I expect, and I don't even need start_requests, because the default implementation already iterates over start_urls.

import scrapy
from scrapy.crawler import CrawlerProcess


class Try(scrapy.Spider):
    name = "Try"

    def __init__(self, *args, **kwargs):
        super(Try, self).__init__(*args, **kwargs)
        self.start_urls = kwargs.get("urls")

    # no start_requests needed: the default one iterates over start_urls
    def parse(self, response):
        print('>>> url:', response.url)
        d = response.xpath("//body").extract()


process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(Try, urls=[r"https://www.example.com"])
process.start()

But if I use

spider = Try(urls=["https://www.example.com"])

process.crawl(spider)

then it looks like Scrapy runs a new Try of its own, without the urls argument, and the list ends up empty.
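
If the URLs have to come from outside the script, a related pattern (a minimal sketch, not part of the original question; the comma-separated urls argument below is an assumption for illustration) is to accept them either as a single string or as a real list, and normalize in __init__:

import scrapy


class Try(scrapy.Spider):
    name = "Try"

    def __init__(self, urls=None, *args, **kwargs):
        super(Try, self).__init__(*args, **kwargs)
        # hypothetical 'urls' argument: a string such as "url1,url2" when passed
        # with scrapy crawl Try -a urls=..., or a plain Python list when passed
        # with process.crawl(Try, urls=[...])
        if isinstance(urls, str):
            self.start_urls = urls.split(",")
        else:
            self.start_urls = urls or []

    def parse(self, response):
        print('>>> url:', response.url)
        d = response.xpath("//body").extract()

With this, both scrapy crawl Try -a urls="https://www.example.com,https://www.example.org" (inside a Scrapy project) and process.crawl(Try, urls=[...]) fill start_urls, while passing a pre-built instance to process.crawl still would not.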
