
selenium: socket.error: [Errno 61] Connection refused

There are 10 links I want to scrape.
When I run the spider, I can get the links in the JSON file, but there are still errors like this:
It seems like Selenium runs twice. What is the problem?
Please guide me. Thank you.

2014-08-06 10:30:26+0800 [spider2] DEBUG: Scraped from <200 http://www.test/a/1>
{'link': u'http://www.test/a/1'}
2014-08-06 10:30:26+0800 [spider2] ERROR: Spider error processing <GET
http://www.test/a/1>
Traceback (most recent call last):
 ........
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
    raise err
socket.error: [Errno 61] Connection refused

Here is my code:

from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from selenium.webdriver.support.wait import WebDriverWait
from scrapy.http.request import Request

class ProductSpider(Spider):
    name = "spider2"  
    start_urls = ['http://www.test.com/']
    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)  
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()    
            item['link'] =  a.get_attribute("href")     
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)  

    def parse_detail(self,response):
        item = response.meta['item']
        yield item
        self.driver.close()

The problem is that you are closing the driver too early.

You should close it only when the spider finishes its work; listen for the spider_closed signal:

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from scrapy.http.request import Request


class ProductSpider(Spider):
    name = "spider2"  
    start_urls = ['http://www.test.com/']
    def __init__(self):
        self.driver = webdriver.Firefox()
        # close the driver only when the whole spider finishes, not per request
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)  
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()    
            item['link'] =  a.get_attribute("href")     
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)  

    def parse_detail(self,response):
        item = response.meta['item']
        yield item

    def spider_closed(self, spider):
        # fired once by Scrapy after the spider has finished all requests
        self.driver.close()

See also: scrapy: Call a function when a spider quits.
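
A side note, not part of the original answer: in newer Scrapy versions scrapy.xlib.pydispatch is no longer available, and signals are usually connected through the from_crawler class method and crawler.signals.connect. Below is a minimal sketch of the same idea under that assumption; the Selenium calls are kept exactly as in the question, and TaItem comes from the asker's own ta.items module.

from scrapy import signals
from scrapy.spiders import Spider
from scrapy.http import Request
from selenium import webdriver

from ta.items import TaItem


class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # hook spider_closed through the crawler's signal manager
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def __init__(self, *args, **kwargs):
        super(ProductSpider, self).__init__(*args, **kwargs)
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)
        links = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in links:
            item = TaItem()
            item['link'] = a.get_attribute("href")
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        yield item

    def spider_closed(self, spider):
        # runs once, after all requests have been processed
        self.driver.quit()

Here driver.quit() is used instead of close(), since quit() ends the whole WebDriver session rather than only closing the current browser window.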
