
How to stop Scrapy from logging?

I am calling a Scrapy-based crawler from a bigger framework. During the crawl, Scrapy logs all events. After the crawl, Scrapy should stop logging, and the calling framework should take over logging and print to standard out again.

How can I stop Scrapy from taking over all logging and hand it back to my framework?

How do I manage several loggers in Python?
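The standard-library logging module gives each library its own named logger, so Scrapy's output and the framework's output can be configured independently. A minimal sketch, assuming a Scrapy version that routes its messages through stdlib logging under the "scrapy" logger name (the framework logger name here is hypothetical):

```python
import logging
import sys

# The framework's own logger keeps printing to stdout.
framework_log = logging.getLogger("myframework")  # hypothetical name
framework_log.setLevel(logging.INFO)
framework_log.addHandler(logging.StreamHandler(sys.stdout))

# Scrapy's logger is configured separately: raise its level and stop it
# from propagating up to the root logger.
scrapy_log = logging.getLogger("scrapy")
scrapy_log.setLevel(logging.CRITICAL)
scrapy_log.propagate = False

framework_log.info("still reaches standard out")
```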

Update: I added crawler.spider.settings.overrides['LOG_ENABLED'] = False to my crawler, but Scrapy still prevents me from printing to standard out.
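If Scrapy has already attached its own handlers by the time this override runs, flipping the setting afterwards has no effect. One workaround sketch, using only the stdlib logging API, is to strip the root logger's handlers after the crawl and re-install the framework's own (the handler configuration here is illustrative):

```python
import logging
import sys

# After the crawl: detach whatever handlers were attached to the root
# logger during the crawl, then hand stdout back to the framework.
root = logging.getLogger()
for handler in list(root.handlers):
    root.removeHandler(handler)

# Re-attach the framework's handler.
root.addHandler(logging.StreamHandler(sys.stdout))
root.setLevel(logging.INFO)
```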

Change the LOG_ENABLED setting to False.
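The timing matters: LOG_ENABLED is read when the crawler is set up, so overriding it on an already-running spider (as in the update above) comes too late. A minimal sketch passing it in the settings used to build the crawler; the spider here is a placeholder for illustration:

```python
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):  # placeholder spider
    name = "myspider"
    start_urls = ["https://example.com"]

    def parse(self, response):
        yield {"url": response.url}

# Disable Scrapy's logging before the crawl starts.
process = CrawlerProcess(settings={"LOG_ENABLED": False})
process.crawl(MySpider)
process.start()
```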
