
Run Python Scrapy script via HTTP request

I'm looking for an example of running a Scrapy script via an HTTP request. I plan to send the URL that I need to crawl as a parameter, via the GET or POST method. How can I do that?

You should use scrapyd.

Link to the GitHub project page: https://github.com/scrapy/scrapyd

Once you are using scrapyd, you can use its API to schedule a crawl.
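For example, assuming a project named myproject with a spider named myspider has been deployed to a local scrapyd instance (default port 6800), a crawl can be scheduled by POSTing to the schedule.json endpoint. Any extra field, such as url below, is passed through to the spider as a keyword argument to its __init__. The project, spider, and argument names here are placeholders, not from the original answer:

import requests

# A minimal sketch: schedule a crawl through scrapyd's schedule.json API.
# "myproject", "myspider", and the "url" argument are hypothetical names.
response = requests.post(
    "http://localhost:6800/schedule.json",
    data={
        "project": "myproject",       # project deployed to scrapyd
        "spider": "myspider",         # spider to run
        "url": "http://example.com",  # extra fields become spider arguments
    },
)
print(response.json())  # e.g. {"status": "ok", "jobid": "..."}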

Try something like this:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

# Note: this uses the pre-1.0 Scrapy API (scrapy.log, Crawler.configure);
# newer Scrapy versions replace it with scrapy.crawler.CrawlerProcess.
spider = FollowAllSpider(domain='url.com')  # spider argument, e.g. the site to crawl
settings = get_project_settings()           # load the project's Scrapy settings
crawler = Crawler(settings)
# stop the Twisted reactor once the spider finishes
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()  # blocks until reactor.stop() is called
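To connect this script to the HTTP request from the question, one option (not part of the original answer) is to put a small HTTP server in front of it and launch each crawl as a subprocess, since the Twisted reactor can only be started once per process. The sketch below assumes the script above is saved as crawl_script.py, a hypothetical name, and adapted to read the target URL from sys.argv:

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?url=http://example.com
        params = parse_qs(urlparse(self.path).query)
        url = params.get("url", [""])[0]
        if not url:
            self.send_response(400)
            self.end_headers()
            self.wfile.write(b"missing ?url= parameter")
            return
        # run the crawl in a fresh process so each request gets its own reactor;
        # crawl_script.py is a hypothetical name for the script above
        subprocess.Popen(["python", "crawl_script.py", url])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"crawl started")

HTTPServer(("", 8000), CrawlHandler).serve_forever()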
