
Confused about running Scrapy from within a Python script

Following the documentation, I can run Scrapy from a Python script, but I can't get the crawl result.

This is my spider:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from items import DmozItem

class DmozSpider(BaseSpider):
    name = "douban"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/group/xxx/discussion"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        # each row is a link inside a title cell of the discussion table
        rows = hxs.select("//table[@class='olt']/tr/td[@class='title']/a")
        items = []
        for row in rows:
            item = DmozItem()
            item["title"] = row.select('text()').extract()[0]
            item["link"] = row.select('@href').extract()[0]
            items.append(item)

        return items

Notice the last line: I try to use the returned parse result. If I run:

 scrapy crawl douban

the terminal prints the returned result.

But I can't get the returned result from the Python script. Here is my Python script:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from spiders.dmoz_spider import DmozSpider
from scrapy.xlib.pydispatch import dispatcher

def stop_reactor():
    reactor.stop()
dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = DmozSpider(domain='www.douban.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg("------------>Running reactor")
result = reactor.run()
print result
log.msg("------------>Running stopped")

I try to get the result from reactor.run(), but it returns nothing.

How can I get the result?

The terminal prints the result because the default log level is set to DEBUG.

When you run your spider from a script and call log.start(), the default log level is set to INFO.

Just replace:

log.start()

with

log.start(loglevel=log.DEBUG)
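
For context, here is a sketch of the asker's driver script with that single change applied (this targets the same pre-1.0 Scrapy API used throughout the question; DmozSpider and its import path come from the question itself):

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from scrapy.xlib.pydispatch import dispatcher
from spiders.dmoz_spider import DmozSpider

def stop_reactor():
    reactor.stop()

# stop the reactor once the spider finishes
dispatcher.connect(stop_reactor, signal=signals.spider_closed)

spider = DmozSpider(domain='www.douban.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()

# DEBUG (instead of the default INFO) makes each scraped item
# appear in the console, just as `scrapy crawl douban` does
log.start(loglevel=log.DEBUG)
reactor.run()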

UPD:

To get the result as a string, you can log everything to a file and then read it back, e.g.:

log.start(logfile="results.log", loglevel=log.DEBUG, crawler=crawler, logstdout=False)

reactor.run()

with open("results.log", "r") as f:
    result = f.read()
print result

Hope that helps.

I found your question while asking myself the same thing, namely: "How can I get the result?" Since this wasn't answered here, I endeavoured to find the answer myself, and now that I have, I can share it:

items = []
def add_item(item):
    items.append(item)
dispatcher.connect(add_item, signal=signals.item_passed)

Or, for Scrapy 0.22 (http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script), replace the last line of my solution with:

crawler.signals.connect(add_item, signals.item_passed)
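
Putting both pieces together, a minimal end-to-end sketch might look like this (it reuses the question's DmozSpider and the older dispatcher style; note that in later Scrapy releases the item_passed signal was renamed item_scraped):

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
from spiders.dmoz_spider import DmozSpider

items = []

def add_item(item):
    # called once for every item the spider returns
    items.append(item)

def stop_reactor():
    reactor.stop()

dispatcher.connect(add_item, signal=signals.item_passed)
dispatcher.connect(stop_reactor, signal=signals.spider_closed)

crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(DmozSpider())
crawler.start()
reactor.run()  # blocks until the spider closes

print items  # the collected DmozItem instances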

My solution is freely adapted from http://www.tryolabs.com/Blog/2011/09/27/calling-scrapy-python-script/.

In my case, I placed the script file at the Scrapy project level; for example, if the spiders live in scrapyproject/scrapyproject/spiders, then I put it at scrapyproject/myscript.py.
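
For illustration, assuming a standard layout as generated by scrapy startproject (all file names other than myscript.py are the usual generated ones), that placement looks like:

scrapyproject/
    scrapy.cfg
    myscript.py          <- the standalone script lives here
    scrapyproject/
        __init__.py
        items.py
        settings.py
        spiders/
            __init__.py
            dmoz_spider.py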
