scrapy running 2 spiders
How can I run 2 spiders in series? Running this runs the first spider but not the second. Is there a way to wait for one to finish?
from scrapy import cmdline
cmdline.execute("scrapy crawl spider1".split())
cmdline.execute("scrapy crawl spider2".split())
Edit1: I changed it to use .wait():
spider1 = subprocess.Popen(cmdline.execute("scrapy crawl spider1".split()))
spider1.wait()
spider2 = subprocess.Popen(cmdline.execute("scrapy crawl spider2".split()))
spider2.wait()
Did I do it wrong? It still only runs the first one.
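For what it's worth, the snippet above passes the *result* of cmdline.execute() to Popen, so the crawl runs in-process before Popen ever starts anything. A minimal sketch of the intended pattern, using placeholder commands in place of ["scrapy", "crawl", "spider1"] so it runs outside a Scrapy project:

```python
import subprocess
import sys

# Hypothetical stand-ins for ["scrapy", "crawl", "spider1"] / "spider2",
# so this sketch runs without a Scrapy project; swap in the real commands.
cmd1 = [sys.executable, "-c", "print('spider1 done')"]
cmd2 = [sys.executable, "-c", "print('spider2 done')"]

# Popen takes the command list itself -- wrapping cmdline.execute() inside
# Popen runs the crawl in-process first and hands Popen its return value,
# which is why only the first spider ever ran.
p1 = subprocess.Popen(cmd1)
rc1 = p1.wait()  # block until the first process exits; returns its exit code

p2 = subprocess.Popen(cmd2)
rc2 = p2.wait()
```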
Edit2:
Traceback (most recent call last):
File "/usr/bin/scrapy", line 9, in <module>
load_entry_point('Scrapy==0.24.6', 'console_scripts', 'scrapy')()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 109, in execute
settings = get_project_settings()
File "/usr/lib/pymodules/python2.7/scrapy/utils/project.py", line 60, in get_project_settings
settings.setmodule(settings_module_path, priority='project')
File "/usr/lib/pymodules/python2.7/scrapy/settings/__init__.py", line 109, in setmodule
module = import_module(module)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named settings
I would use subprocess, which has a .wait() function. Or you could use subprocess.call(), which waits automatically; printing its return value gives you the exit code of the scrapy crawl command (the crawl's terminal output goes to the console as usual).
spider1 = subprocess.call(["scrapy", "crawl", "spider1"])  # blocks until spider1 finishes
print spider1  # exit code of the first crawl
spider2 = subprocess.call(["scrapy", "crawl", "spider2"])
print spider2  # exit code of the second crawl
This method automatically waits until the first spider is done and then calls the second spider.
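If you want the second spider to run only when the first crawl succeeded, a variant of the same idea is subprocess.check_call(), which also waits but raises CalledProcessError on a non-zero exit. A sketch with placeholder commands standing in for the real ["scrapy", "crawl", ...] invocations:

```python
import subprocess
import sys

# Stand-in commands so the sketch runs without a Scrapy project;
# replace with ["scrapy", "crawl", "spider1"] and "spider2".
cmd1 = [sys.executable, "-c", "raise SystemExit(0)"]
cmd2 = [sys.executable, "-c", "raise SystemExit(0)"]

# check_call waits for the command and raises if it exits non-zero,
# so cmd2 only runs after a successful cmd1.
subprocess.check_call(cmd1)
subprocess.check_call(cmd2)
```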