
Extract JSON from Text in Python

I want to extract a JSON/dictionary from a log text.

The sample log text:

2018-06-21 19:42:58 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'locations', 'CLOSESPIDER_TIMEOUT': '14400', 'FEED_FORMAT': 'geojson', 'LOG_FILE': '/geojson_dumps/21_Jun_2018_07_42_54/logs/coastalfarm.log', 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'locations.spiders', 'SPIDER_MODULES': ['locations.spiders'], 'TELNETCONSOLE_ENABLED': '0', 'USER_AGENT': 'Mozilla/5.0'}

2018-06-21 19:43:00 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 369,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 1718,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 6, 21, 14, 13, 0, 841666),
 'item_scraped_count': 4,
 'log_count/INFO': 8,
 'memusage/max': 56856576,
 'memusage/startup': 56856576,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 6, 21, 14, 12, 58, 499385)}

2018-06-21 19:43:00 [scrapy.core.engine] INFO: Spider closed (finished)

I have tried (\{.+$\}) as the regex, but it gives me the dict which is on a single line, {'BOT_NAME': 'locations', ..., 'USER_AGENT': 'Mozilla/5.0'}, which is not what is expected.

The JSON/dictionary I want to extract (note: the dictionary would not always have the same keys; it could differ):

{'downloader/request_bytes': 369,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 1718,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 6, 21, 14, 13, 0, 841666),
 'item_scraped_count': 4,
 'log_count/INFO': 8,
 'memusage/max': 56856576,
 'memusage/startup': 56856576,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 6, 21, 14, 12, 58, 499385)}

Edit: The JSON spans multiple lines, so here's what will do it:

import re

# `log` is assumed to hold the full contents of the log file as a string.
re_str = r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \[scrapy\.statscollectors\] INFO: Dumping Scrapy stats:\s*(\{.+?\})'
stats_re = re.compile(re_str, re.MULTILINE | re.DOTALL)

for match in stats_re.findall(log):
    print(match)
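
For completeness, a minimal sketch of how `log` could be populated before running the above; the filename is hypothetical:

# A minimal sketch: read the whole log file into the `log` string used above.
# The path is hypothetical; substitute the actual log file location.
with open('coastalfarm.log', encoding='utf-8') as f:
    log = f.read()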

If you are after only the lines from the statscollector, then this should get you there (assuming that it's all on one line too):

^.*?\[scrapy\.statscollectors\] INFO: Dumping Scrapy stats: (\{.+\})$
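
A minimal sketch of applying that pattern, assuming `log` holds the log text and the stats dictionary is printed on a single line after the anchor:

import re

# A sketch only: `log` is assumed to hold the full log text, and the stats
# dictionary is assumed to sit on one line after the "Dumping Scrapy stats:" anchor.
pattern = re.compile(
    r'^.*?\[scrapy\.statscollectors\] INFO: Dumping Scrapy stats: (\{.+\})$',
    re.MULTILINE,
)
m = pattern.search(log)
if m:
    print(m.group(1))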

Using a JSON tokenizer makes this a very simple and efficient task, as long as you have an anchor to search for in the original document that allows you to at least identify the beginning of the JSON blob. This uses json-five to extract JSON from HTML:

import json5.tokenizer

with open('5f32d5b4e2c432f660e1df44.html') as f:
    document = f.read()

# Anchor text that immediately precedes the JSON blob in the document.
search_for = "window.__INITIAL_STATE__="
i = document.index(search_for)
j = i + len(search_for)
extract_from = document[j:]

# Tokenize from the anchor onward and collect tokens until the
# top-level braces/brackets are balanced again.
tokens = json5.tokenizer.tokenize(extract_from)
stack = []
collected = []
for token in tokens:
    collected.append(token.value)

    if token.type in ('LBRACE', 'LBRACKET'):
        stack.append(token)
    elif token.type in ('RBRACE', 'RBRACKET'):
        stack.pop()

    # When the stack is empty the blob is complete (a scalar ends after its
    # first token, since nothing was ever pushed onto the stack).
    if not stack:
        break

json_blob = ''.join(collected)

Note that this accounts for the JSON being either a complex type (object, list) or a scalar.
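
If the extracted blob then needs to be turned into Python objects, a short follow-up sketch (assuming the blob is valid JSON5 and using json-five's top-level loads):

import json5

# A sketch: parse the extracted text into Python objects.
# Assumes `json_blob` is the string built by the loop above and is valid JSON5.
data = json5.loads(json_blob)
print(type(data))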
