
Scrapy spider closes prematurely

I programmed Scrapy to scrape a few thousand URL links that I store in a database. I wrote a spider that calls scrapy.Request, passing it URLs fetched from the database. But after crawling 1-2 pages, the spider closes prematurely (with no error). I can't figure out why this happens. The code:

# -*- coding: utf-8 -*-
import scrapy
import olsDBUtil
import tokopediautil
from datetime import datetime
import time

import logging
from scrapy.utils.log import configure_logging


class DataproductSpider(scrapy.Spider):

    dbObj = olsDBUtil.olsDBUtil()
    name = "dataProduct"
    allowed_domains = ["tokopedia.com"]
    newProductLink = list(dbObj.getNewProductLinks(10))
    start_urls = list(newProductLink.pop())
    # start_urls = dbObj.getNewProductLinks(NumOfLinks=2)

    tObj = tokopediautil.TokopediaUtil()

    configure_logging(install_root_handler=False)
    logging.basicConfig(
        filename='log.txt',
        format='%(levelname)s: %(message)s',
        level=logging.INFO
    )


    def parse(self, response):

        if response.status == 200:
            thisIsProductPage = response.selector.xpath(
                "/html/head/meta[@property='og:type']/@content").extract()[0] == 'product'
            if thisIsProductPage:
                vProductID = self.dbObj.getProductIDbyURL(response.url)
                vProductName = response.selector.xpath(
                    "//input[@type='hidden'][@name='product_name']/@value").extract()[0]
                vProductDesc = response.selector.xpath(
                    "//p[@itemprop='description']/text()").extract()[0]
                vProductPrice = response.selector.xpath(
                    "/html/head/meta[@property='product:price:amount']/@content").extract()[0]
                vSiteProductID = response.selector.xpath(
                    "//input[@type='hidden'][@name='product_id']/@value").extract()[0]
                vProductCategory = response.selector.xpath(
                    "//ul[@itemprop='breadcrumb']//text()").extract()[1:-1]
                vProductCategory = ' - '.join(vProductCategory)
                vProductUpdated = response.selector.xpath(
                    "//small[@class='product-pricelastupdated']/i/text()").extract()[0][26:36]
                vProductUpdated = datetime.strptime(vProductUpdated, '%d-%M-%Y')
                vProductVendor = response.selector.xpath(
                    "//a[@id='shop-name-info']/text()").extract()[0]

                vProductStats = self.tObj.getItemSold(vSiteProductID)
                vProductSold = vProductStats['item_sold']
                vProductViewed = self.tObj.getProductView(vSiteProductID)
                vSpecificPortalData = "item-sold - %s , Transaction Success - %s , Transaction Rejected - %s " % (
                    vProductStats['item_sold'], vProductStats['success'], vProductStats['reject'])

                print "productID      : " + str(vProductID)
                print "product Name   : " + vProductName
                print "product Desc   : " + vProductDesc
                print "Product Price  : " + str(vProductPrice)
                print "Product SiteID : " + str(vSiteProductID)
                print "Category       : " + vProductCategory
                print "Product Updated: " + vProductUpdated.strftime('%Y-%m-%d')
                print "Product Vendor : " + vProductVendor
                print "Product Sold   : " + str(vProductSold)
                print "Product Viewed : " + str(vProductViewed)
                print "Site Specific Info: " + vSpecificPortalData

                self.dbObj.storeNewProductData(
                    productID=vProductID,
                    productName=vProductName,
                    productPrice=vProductPrice,
                    productSiteProdID=vSiteProductID,
                    productVendor=vProductVendor,
                    productDesc=vProductDesc,
                    productQtyDilihat=vProductViewed,
                    productTerjual=vProductSold,
                    productCategory=vProductCategory,
                    productSiteSpecificInfo=vSpecificPortalData

                )

                self.dbObj.storeProductRunningData(
                    productID=vProductID,
                    productDilihat=str(vProductViewed),
                    productTerjual=str(vProductSold)

                )

        else:
            print "Error Logged : Page Call Error"

        LinkText = str(self.newProductLink.pop())
        print "LinkText : %s" % LinkText
        print "Total newProductLink is %s" % str(len(self.newProductLink))

        yield scrapy.Request(url=LinkText, callback=self.parse)

Here is the Scrapy log:

INFO: Scrapy 1.3.0 started (bot: tokopedia)
INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tokopedia.spiders', 'HTTPCACHE_EXPIRATION_SECS': 1800, 'SPIDER_MODULES': ['tokopedia.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'tokopedia', 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'}
INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
INFO: Enabled item pipelines:
[]
INFO: Spider opened
INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
DEBUG: Telnet console listening on 127.0.0.1:6023
DEBUG: Crawled (200) <GET https://www.tokopedia.com/karmedia/penjelasan-pembatal-keislaman> (referer: None)
DEBUG: Starting new HTTPS connection (1): js.tokopedia.com
DEBUG: https://js.tokopedia.com:443 "GET /productstats/check?pid=27455429 HTTP/1.1" 200 61
DEBUG: Starting new HTTPS connection (1): www.tokopedia.com
DEBUG: https://www.tokopedia.com:443 "GET /provi/check?pid=27455429&callback=show_product_view HTTP/1.1" 200 31
INFO: Closing spider (finished)
INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 333,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 20815,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 2, 10, 18, 4, 10, 355000),
 'httpcache/firsthand': 1,
 'httpcache/miss': 1,
 'httpcache/store': 1,
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'offsite/filtered': 1,
 'request_depth_max': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 2, 10, 18, 4, 8, 922000)}
INFO: Spider closed (finished)

I changed the scrapy.Request call to use the absolute URL link of the next product, and it worked. I don't understand why this happens... somehow the list.pop() statement doesn't work, even though I converted its result to a string.
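A plausible explanation, assuming dbObj.getNewProductLinks() returns database rows as one-element tuples (as DB-API cursors typically do): newProductLink.pop() then yields a tuple, and str() on a one-element tuple produces something like "('https://...',)" rather than the URL itself. The offsite middleware drops that malformed request, which would match the 'offsite/filtered': 1 entry in the stats. A minimal sketch of the fix under that assumption, for the end of parse():

# Sketch only: assumes each DB row is a one-element tuple such as
# ('https://www.tokopedia.com/...',).
if self.newProductLink:
    row = self.newProductLink.pop()
    # str(row) would give "('https://...',)", which the offsite
    # middleware silently filters; index into the tuple instead.
    LinkText = row[0]
    yield scrapy.Request(url=LinkText, callback=self.parse)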

Try dont_filter=True in your scrapy.Request(). I had a similar problem where the duplicate filter caused a spider (also using pop()) to close prematurely. I see you have an 'offsite/filtered': 1 in your stats, which may point to a filtering problem. See the sketch below.
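As a sketch, the chained request at the end of parse() would become the following; dont_filter=True skips the duplicate filter, and the offsite check honours the same flag, so the request is no longer silently dropped:

yield scrapy.Request(url=LinkText, callback=self.parse, dont_filter=True)

A more robust alternative is to yield every stored URL up front from start_requests() instead of chaining one request per callback, so a single dropped request cannot end the whole crawl. A sketch reusing the question's helpers, again assuming each DB row is a one-element tuple containing the URL:

def start_requests(self):
    # Yield all stored links at once; the scheduler handles the rest.
    for row in self.dbObj.getNewProductLinks(10):
        yield scrapy.Request(url=row[0], callback=self.parse, dont_filter=True)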
