
Scrapy request, shell fetch() in spider

I'm trying to reach a specific page, let's call it http://example.com/puppers . This page cannot be reached when connecting to it directly, either with scrapy shell or with a standard scrapy.Request: both result in an HTTP 405.

However, when I run scrapy shell 'http://example.com/kittens' first and then call fetch('http://example.com/puppers'), it works and I get an HTTP 200 OK. I can then extract data in the scrapy shell.
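For reference, the working shell session described above looks roughly like this (the example.com URLs stand in for the real site):

$ scrapy shell 'http://example.com/kittens'
...
>>> fetch('http://example.com/puppers')
>>> response.status
200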

I tried to reproduce this in my spider by setting the Referer header (to URL #1), the User-Agent, and a few other headers while requesting the puppers page (URL #2). I still get a 405.
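The attempt looked something like this; the header values here are illustrative placeholders, not the exact ones used:

# inside a spider callback, e.g. parse()
yield scrapy.Request(
    url="http://example.com/puppers",                     # url #2
    headers={
        "Referer": "http://example.com/kittens",          # url #1
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",  # browser-like UA
    },
    callback=self.parse_puppers,
)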

I appreciate all the help. Thank you.

import scrapy

class PuppersSpider(scrapy.Spider):  # class name and spider name are placeholders
    name = "puppers"
    start_urls = ['http://example.com/kittens']

    def parse(self, response):
        # The kittens page has been fetched; now request the puppers page
        # from within the same crawl, just like fetch() in the shell session.
        yield scrapy.Request(
            url="http://example.com/puppers",
            callback=self.parse_puppers,
        )

    def parse_puppers(self, response):
        # process your puppers
        ...
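This mirrors the shell workflow: the spider visits the kittens page first (via start_urls) and only then requests the puppers page. Scrapy's cookie middleware is enabled by default and keeps any cookies set by the first response, sending them with the second request, which is presumably why the puppers request then returns 200 instead of 405.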
