
Scrapy, can't crawl any page: “TCP connection timed out: 110: Connection timed out.”

New to programming

I can't scrape content from some subdomains of the same website.

For example, I can scrape it.example.com, es.example.com and pt.example.com, but when I try the same with fr.example.com or us.example.com, I get:

2017-12-17 14:20:27 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6025
2017-12-17 14:21:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:38 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://fr.example.com/robots.txt> (failed 1 times): TCP connection timed out: 110: Connection timed out.

Here's the spider, some.py:

import itertools

import scrapy

class SomeSpider(scrapy.Spider):
    name = 'some'
    # allowed_domains takes bare domains, not URLs with a scheme
    allowed_domains = ['fr.example.com']

    def start_requests(self):
        categories = ['thing1', 'thing2', 'thing3']
        base = "https://fr.example.com/things?t={category}&p={index}"

        for category, index in itertools.product(categories, range(1, 11)):
            yield scrapy.Request(base.format(category=category, index=index))

    # parse must be indented inside the class, otherwise Scrapy falls back
    # to the default Spider.parse and raises NotImplementedError
    def parse(self, response):
        response.selector.remove_namespaces()
        info1 = response.css("span.info1").extract()
        info2 = response.css("span.info2").extract()

        for info1_value, info2_value in zip(info1, info2):
            yield {
                'info1': info1_value,
                'info2': info2_value,
            }
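As a sanity check that the failures aren't caused by malformed URLs, the itertools.product loop from start_requests can be run on its own to inspect the requests the spider would issue (fr.example.com stands in for the real site, as in the question):

```python
import itertools

categories = ['thing1', 'thing2', 'thing3']
base = "https://fr.example.com/things?t={category}&p={index}"

# Same pairing the spider uses: every category crossed with pages 1..10
urls = [base.format(category=category, index=index)
        for category, index in itertools.product(categories, range(1, 11))]

print(len(urls))   # 30 requests in total
print(urls[0])     # https://fr.example.com/things?t=thing1&p=1
```

If these URLs look right in a browser but still time out in Scrapy, the problem is at the network level (the server dropping the connection) rather than in the spider code.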

What I have tried:

  1. Run the spider from a different IP (same problem with the same domains)

  2. Add a pool of IPs (didn't work)

  3. Found somewhere on Stack Overflow: in settings.py , set

    USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'

  4. ROBOTSTXT_OBEY = False
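For reference, points 3 and 4 can be combined in settings.py, along with DOWNLOAD_TIMEOUT and RETRY_TIMES (both standard Scrapy settings); the values below are illustrative, not a known fix for this timeout:

```python
# settings.py -- sketch combining the attempts above; values are illustrative
USER_AGENT = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/55.0.2883.95 Safari/537.36')
ROBOTSTXT_OBEY = False    # skip the robots.txt request that is timing out
DOWNLOAD_TIMEOUT = 30     # fail faster than the 180-second default
RETRY_TIMES = 2           # retries per failed request (matches the default)
```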

Any idea is welcome!

Try to access the page with the requests package instead of Scrapy, and see if it works:

import requests

# The URL needs an explicit scheme, or requests raises MissingSchema
url = 'https://fr.example.com'

response = requests.get(url, timeout=10)
print(response.status_code)
print(response.text)
