Stop Scrapy from logging spider output to Visual Studio Code terminal

Whenever I run my spider with scrapy crawl test -O test.json in my Visual Studio Code terminal, I get output like this:

2023-01-31 14:31:45 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.example.com/product/1>
{'price': 100,
 'newprice': 90
}
2023-01-31 14:31:50 [scrapy.core.engine] INFO: Closing spider (finished)
2023-01-31 14:31:50 [scrapy.extensions.feedexport] INFO: Stored json feed (251 items) in: test.json
2023-01-31 14:31:50 [selenium.webdriver.remote.remote_connection] DEBUG: DELETE http://localhost:61169/session/996866d968ab791730e4f6d87ce2a1ea {}
2023-01-31 14:31:50 [urllib3.connectionpool] DEBUG: http://localhost:61169 "DELETE /session/996866d968ab791730e4f6d87ce2a1ea HTTP/1.1" 200 14
2023-01-31 14:31:50 [selenium.webdriver.remote.remote_connection] DEBUG: Remote response: status=200 | data={"value":null} | headers=HTTPHeaderDict({'Content-Length': '14', 'Content-Type': 'application/json; charset=utf-8', 'cache-control': 'no-cache'})
2023-01-31 14:31:50 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
2023-01-31 14:31:52 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 91321,
 'downloader/request_count': 267,
 'downloader/request_method_count/GET': 267,
 'downloader/response_bytes': 2730055,
 'downloader/response_count': 267,
 'downloader/response_status_count/200': 267,
 'dupefilter/filtered': 121,
 'elapsed_time_seconds': 11.580893,
 'feedexport/success_count/FileFeedStorage': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2023, 1, 31, 13, 31, 50, 495392),
 'httpcompression/response_bytes': 9718676,
 'httpcompression/response_count': 267,
 'item_scraped_count': 251,
 'log_count/DEBUG': 537,
 'log_count/INFO': 11,
 'request_depth_max': 2,
 'response_received_count': 267,
 'scheduler/dequeued': 267,
 'scheduler/dequeued/memory': 267,
 'scheduler/enqueued': 267,
 'scheduler/enqueued/memory': 267,
 'start_time': datetime.datetime(2023, 1, 31, 13, 31, 38, 914499)}
2023-01-31 14:31:52 [scrapy.core.engine] INFO: Spider closed (finished)

I want to log all of this, including the print('hi') lines in my spider, but I DON'T want the scraped item output logged, in this case {'price': 100, 'newprice': 90}.

Inspecting the above, I think I need to disable only the downloader/response_bytes. I've been reading https://docs.scrapy.org/en/latest/topics/logging.html, but I'm not sure where or how to configure my exact use case. I have hundreds of spiders and I don't want to have to add a configuration in each one; I'd rather apply the logging config to all spiders. Do I need to add a separate config file, or add it to an existing one like scrapy.cfg?

UPDATE 1

So here's my folder structure where I created settings.py:

Scrapy\
    tt_spiders\
        myspiders\
            spider1.py
            spider2.py
            settings.py
        middlewares.py
        pipelines.py
        settings.py
    scrapy.cfg
    settings.py

settings.py

if __name__ == "__main__":
    disable_list = ['scrapy.core.engine', 'scrapy.core.scraper', 'scrapy.spiders']
    for element in disable_list:
        logger = logging.getLogger(element)
        logger.disabled = True

    spider = 'example_spider'
    settings = get_project_settings()
    settings['USER_AGENT'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
    process = CrawlerProcess(settings)
    process.crawl(spider)
    process.start()

This throws 3 errors, which makes sense, as I have not defined these (the missing imports are sketched after the list):

  • "logging" is not defined “日志记录”未定义
  • "get_project_settings" is not defined “get_project_settings”未定义
  • "CrawlerProcess" is not defined “CrawlerProcess”未定义

But more importantly, and this is what I don't understand, this code contains spider = 'example_spider', whereas I want this logic to apply to ALL spiders.

So I reduced it to:

if __name__ == "__main__":
    disable_list = ['scrapy.core.scraper']

But the output is still logged. What am I missing?

Let's assume that we have this spider:

spider.py:

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example_spider'
    allowed_domains = ['scrapingclub.com']
    start_urls = ['https://scrapingclub.com/exercise/detail_basic/']

    def parse(self, response):
        item = dict()
        item['title'] = response.xpath('//h3/text()').get()
        item['price'] = response.xpath('//div[@class="card-body"]/h4/text()').get()
        yield item

And its output is:

...
[scrapy.middleware] INFO: Enabled item pipelines:
[]
[scrapy.core.engine] INFO: Spider opened
[scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
[scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapingclub.com/exercise/detail_basic/> (referer: None)
[scrapy.core.scraper] DEBUG: Scraped from <200 https://scrapingclub.com/exercise/detail_basic/>
{'title': 'Long-sleeved Jersey Top', 'price': '$12.99'}
[scrapy.core.engine] INFO: Closing spider (finished)
[scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 329,
 'downloader/request_count': 1,
...

If you want to disable logging for a specific line, just copy the text inside the square brackets and disable its logger. E.g., for [scrapy.core.scraper] DEBUG: Scraped from <200 https://scrapingclub.com/exercise/detail_basic/>, disable the scrapy.core.scraper logger.

main.py:

import logging

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

if __name__ == "__main__":
    # Disable the loggers that produce the lines you want to hide
    disable_list = ['scrapy.core.engine', 'scrapy.core.scraper', 'scrapy.spiders']
    for element in disable_list:
        logger = logging.getLogger(element)
        logger.disabled = True

    spider = 'example_spider'
    settings = get_project_settings()
    settings['USER_AGENT'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
    process = CrawlerProcess(settings)
    process.crawl(spider)
    process.start()

If you want to disable some of the extensions you can set them to None in settings.py:

EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
    'scrapy.extensions.logstats.LogStats': None,
    'scrapy.extensions.corestats.CoreStats': None
}
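
For context: TelnetConsole is what prints the "Telnet console listening on ..." line, LogStats emits the periodic "Crawled 0 pages (at 0 pages/min) ..." INFO messages, and CoreStats fills in stats such as start_time, finish_time and item_scraped_count. Disabling them removes those particular lines but does not affect the scraped-item DEBUG output, which comes from the scrapy.core.scraper logger.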

Update 1:

Add just this to settings.py (without the if __name__ == "__main__": guard, which never runs because settings.py is imported as a module rather than executed as a script):

import logging
disable_list = ['scrapy.core.engine', 'scrapy.core.scraper', 'scrapy.spiders']
for element in disable_list:
    logger = logging.getLogger(element)
    logger.disabled = True
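
If disabling those loggers entirely feels too aggressive (it also hides their warnings and errors), a gentler variant of the same idea, just a sketch assuming the item dumps are the only thing you want to hide, is to raise the level of the one logger that prints them:

import logging

# Hide the DEBUG "Scraped from <...>" item dumps from scrapy.core.scraper,
# while keeping its INFO/WARNING/ERROR messages visible.
logging.getLogger('scrapy.core.scraper').setLevel(logging.INFO)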
