
scrapy.Request() prevents me from stepping into my function

Hello everyone~ I am new to Scrapy and I have run into a very strange problem. In short, scrapy.Request() seems to prevent me from stepping into my function.
Here is my code:

# -*- coding: utf-8 -*-
import scrapy
from tutor_job_spy.items import TutorJobSpyItem

class Spyspider(scrapy.Spider):
    name = 'spy'
    #for privacy reasons I delete the url information :)
    allowed_domains = ['']
    url_0 = ''
    start_urls = [url_0, ]
    base_url = ''
    list_previous = []
    list_present = []

    def parse(self, response):
        numbers = response.xpath('//tr[@bgcolor="#d7ecff" or @bgcolor="#eef7ff"]/td[@width="8%" and @height="40"]/span/text()').extract()
        self.list_previous = numbers
        self.list_present = numbers
        yield scrapy.Request(self.url_0, self.keep_spying)

    def keep_spying(self, response):
        numbers = response.xpath('//tr[@bgcolor="#d7ecff" or @bgcolor="#eef7ff"]/td[@width="8%" and @height="40"]/span/text()').extract()
        self.list_previous = self.list_present
        self.list_present = numbers
        # judge if anything new
        if (self.list_present != self.list_previous):  
            self.goto_new_demand(response)
        #time.sleep(60)  #from cache
        yield scrapy.Request(self.url_0, self.keep_spying, dont_filter=True)

    def goto_new_demand(self, response):
        new_demand_links = []
        detail_links = response.xpath('//div[@class="ShowDetail"]/a/@href').extract()
        for i in range(len(self.list_present)):
            if (self.list_present[i] not in self.list_previous):
                new_demand_links.append(self.base_url + detail_links[i])
        if (new_demand_links != []):
            for new_demand_link in new_demand_links:
                yield scrapy.Request(new_demand_link, self.get_new_demand)

    def get_new_demand(self, response):
        new_demand = TutorJobSpyItem()
        new_demand['url'] = response.url
        requirments = response.xpath('//tr[@bgcolor="#eef7ff"]/td[@colspan="2"]/div/text()').extract()[0]
        new_demand['gender'] = self.get_gender(requirments)
        new_demand['region'] = response.xpath('//tr[@bgcolor="#d7ecff"]/td[@align="left"]/text()').extract()[5]
        new_demand['grade'] = response.xpath('//tr[@bgcolor="#d7ecff"]/td[@align="left"]/text()').extract()[7]
        new_demand['subject'] = response.xpath('//tr[@bgcolor="#eef7ff"]/td[@align="left"]/text()').extract()[2]
        return new_demand

    def get_gender(self, requirments):
        if ('女老师' in requirments):
            return 'F'
        elif ('男老师' in requirments):
            return 'M'
        else:
            return 'Both okay'

The problem is that when I debug, I find that I cannot step into goto_new_demand :

if (self.list_present != self.list_previous):  
    self.goto_new_demand(response)

Every time I run or debug the script, it just skips goto_new_demand, but after I comment out yield scrapy.Request(new_demand_link, self.get_new_demand) inside goto_new_demand, I can step into it. I have tried many times and found that I can step into goto_new_demand only when there is no yield scrapy.Request(new_demand_link, self.get_new_demand) in it. Why does that happen?
Thanks in advance to anyone who can offer advice :)
PS:
Scrapy : 1.5.0
lxml : 4.1.1.0
libxml2 : 2.9.5
cssselect : 1.0.3
parsel : 1.3.1
w3lib : 1.18.0
Twisted : 17.9.0
Python : 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)]
pyOpenSSL : 17.5.0 (OpenSSL 1.1.0g 2 Nov 2017)
cryptography : 2.1.4
Platform : Windows-7-6.1.7601-SP1

Problem solved!
I changed goto_new_demand from a generator into a plain function, so the problem was entirely the result of my limited understanding of yield and generators.
Here is the modified code:

if (self.list_present != self.list_previous):
    # yield self.goto_new_demand(response)
    new_demand_links = self.goto_new_demand(response)
    if (new_demand_links != []):
        for new_demand_link in new_demand_links:
            yield scrapy.Request(new_demand_link, self.get_new_demand)

def goto_new_demand(self, response):
    new_demand_links = []
    detail_links = response.xpath('//div[@class="ShowDetail"]/a/@href').extract()
    for i in range(len(self.list_present)):
        if (self.list_present[i] not in self.list_previous):
            new_demand_links.append(self.base_url + detail_links[i])
    return new_demand_links

The reason lies in the answer from Ballack.

The correct way to debug Scrapy spiders is described in the documentation. An especially useful technique is using the Scrapy Shell to inspect responses.
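
For example, here is a minimal sketch (with a hypothetical spider name and URL) of dropping into the Scrapy shell from inside a callback with scrapy.shell.inspect_response :

import scrapy
from scrapy.shell import inspect_response

class DebugSpider(scrapy.Spider):
    # hypothetical spider name and URL, just to show where inspect_response() fits
    name = 'debug_demo'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # opens an interactive shell with `response` in scope, so XPath
        # expressions can be tried out before they go into the spider code
        inspect_response(response, self)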

I think you may need to change this statement

if (self.list_present != self.list_previous):  
    self.goto_new_demand(response)

to:

if (self.list_present != self.list_previous):  
    yield self.goto_new_demand(response)

because self.goto_new_demand() is a generator function (it contains a yield statement), so simply calling self.goto_new_demand(response) will not actually run any of its body.

A simple example of a generator may make this clearer:

def a():
    print("hello")

# invoking a() will print out hello
a()

but for a generator function, simply invoking it will just return a generator object:

def a():
    yield
    print("hello")

# invoking a() will not print out hello; instead it returns a generator object
a()
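
The body only runs once something iterates the generator, for example:

def a():
    yield
    print("hello")

gen = a()
next(gen)   # runs the body up to the bare yield; nothing is printed yet
# a second next(gen) would run the rest of the body, print "hello",
# and then raise StopIteration; exhausting it with list(a()) also prints "hello"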

So, in Scrapy, you should use yield self.goto_new_demand(response) to make goto_new_demand(response) actually run.
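
An alternative sketch, assuming Python 3's yield from (available on the Python 3.6 listed above), is to keep goto_new_demand as a generator and delegate to it inside keep_spying, so every Request it yields is handed straight to Scrapy:

def keep_spying(self, response):
    numbers = response.xpath('//tr[@bgcolor="#d7ecff" or @bgcolor="#eef7ff"]/td[@width="8%" and @height="40"]/span/text()').extract()
    self.list_previous = self.list_present
    self.list_present = numbers
    if (self.list_present != self.list_previous):
        # delegate to the generator: every Request it yields goes to Scrapy
        yield from self.goto_new_demand(response)
    yield scrapy.Request(self.url_0, self.keep_spying, dont_filter=True)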
