
Dynamic start_urls value

I am new to scrapy and python. I have written a spider that works fine with the initialized start_urls value.

It also works fine if I hard-code the URL in __init__:

    self.start_urls = ['http://something.com']

But when I read the value in from a JSON file and build the list, I get the error below about a missing scheme in the request URL (Missing%20value).

I feel like I am missing something obvious in either Scrapy or Python because I am a newbie.

class SiteFeedConstructor(CrawlSpider, FeedConstructor):

    name = "Data_Feed"
    start_urls = ['http://www.cnn.com/']

    def __init__(self, *args, **kwargs):
        FeedConstructor.__init__(self, **kwargs)
        kwargs = {}
        super(SiteFeedConstructor, self).__init__(*args, **kwargs)

        self.name = str(self.config_json.get('name', 'Missing value'))
        self.start_urls = str(self.config_json.get('start_urls', 'Missing value'))
        self.start_urls = self.start_urls.split(",")

ERROR:

Traceback (most recent call last):
  File "/usr/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/commands/runspider.py", line 64, in run
    self.crawler.crawl(spider)
  File "/usr/lib/python2.7/dist-packages/scrapy/crawler.py", line 42, in crawl
    requests = spider.start_requests()
  File "/usr/lib/python2.7/dist-packages/scrapy/spider.py", line 55, in start_requests
    reqs.extend(arg_to_iter(self.make_requests_from_url(url)))
  File "/usr/lib/python2.7/dist-packages/scrapy/spider.py", line 59, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 26, in __init__
    self._set_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 61, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: Missing%20value
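The URL in the last line is literally the fallback string 'Missing value' (with its space URL-encoded as %20), which means config_json has no 'start_urls' key at the time __init__ runs. A minimal sketch of that failure mode, using a hypothetical config dict in place of self.config_json:

```python
# Hypothetical config dict standing in for self.config_json;
# note there is no 'start_urls' key.
config_json = {'name': 'Data_Feed'}

# dict.get() returns the default when the key is absent,
# so start_urls becomes the literal string 'Missing value'.
start_urls = str(config_json.get('start_urls', 'Missing value'))
urls = start_urls.split(",")

print(urls)  # ['Missing value'] -- no http:// scheme, hence Scrapy's ValueError
```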

Instead of defining __init__(), override the start_requests() method. By the time start_requests() is called, the config has been loaded, so the spider reads its URLs at the right moment:

This is the method called by Scrapy when the spider is opened for scraping when no particular URLs are specified. If particular URLs are specified, the make_requests_from_url() is used instead to create the Requests. This method is also called only once from Scrapy, so it's safe to implement it as a generator.

class SiteFeedConstructor(CrawlSpider, FeedConstructor):
    name = "Data_Feed"

    def start_requests(self):
        self.name = str(self.config_json.get('name', 'Missing value'))
        for url in str(self.config_json.get('start_urls', 'Missing value')).split(","):
            yield self.make_requests_from_url(url)
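Assuming the JSON config file actually contains a comma-separated start_urls string (the file contents below are hypothetical), the split logic in the loop behaves like this sketch:

```python
import json

# Hypothetical JSON config; the keys mirror those the spider reads.
raw = '{"name": "Data_Feed", "start_urls": "http://www.cnn.com/,http://example.com/"}'
config_json = json.loads(raw)

# With the key present, each piece keeps its http:// scheme,
# so Scrapy can build a valid Request from every entry.
urls = str(config_json.get('start_urls', 'Missing value')).split(",")

print(urls)  # ['http://www.cnn.com/', 'http://example.com/']
```

If the key is missing you still get ['Missing value'], so it may be worth raising an explicit error instead of relying on the default string.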
