
Parse large XML files and extract nested elements in Python in an efficient and fast way

I am new to ElementTree. I attempted to parse the following XML file in Python.



<Instrm>
    <Rcrd>
        <NewFinPpt>
            <Id>BT0007YSAWK</Id>
            <FullNm>Turbo Car</FullNm>
            <Cmmdty>false</Cmmdty>
        </NewFinPpt>
        <Issr>529900M2F7D5795H1A49</Issr>
        <Attrbts>
            <Amt Ccy="EUR">15.134</Amt>
            <Authrty>US</Authrty>
            <Prd>
                <Dt>2002-03-20</Dt>
            </Prd>
            <Ven>NYSE</Ven>
        </Attrbts >
    </Rcrd>

    <Rcrd>
        <OldFinPpt>
            <Id>BX0009YNOYK</Id>
            <FullNm>Turbo truk</FullNm>
            <Ccy>EUR</Ccy>
            <Cmmdty>false</Cmmdty>
        </OldFinPpt>
        <Issr>58888M2F7D579536J4</Issr>
        <Attrbts>
            <Amt Ccy="GBP">12.134</Amt>
            <Authrty>UK</Authrty>
            <Prd>
                <Dt>2002-04-21</Dt>
            </Prd>
            <Ven>BOX</Ven>
        </Attrbts >
    </Rcrd>
    <Rcrd>
        <NowFinPpt>
            <Id>GR0009YTEYK</Id>
            <FullNm>Tesla D4</FullNm>
            <Ccy>USD</Ccy>
        </NowFinPpt>
        <Issr>58888M2F7D579536K4</Issr>
        <Attrbts>
            <Amt Ccy="USD">12.28</Amt>
            <Authrty>UK</Authrty>
            <Prd>
                <Dt>2002-04-21</Dt>
            </Prd>
            <Ven>LSE</Ven>
        </Attrbts >
    </Rcrd>
    <Rcrd>
        <FinPpt>
            <Id>YC0009UWLSK</Id>
            <FullNm>Tesla D6</FullNm>
            <Cmmdty>true</Cmmdty>
        </FinPpt>
        <Issr>S7264292F7D57957777</Issr>
        <Attrbts>
            <Amt Ccy="XYS">14.28</Amt>
            <Prd>
                <Dt>2002-04-21</Dt>
            </Prd>
            <Ven>LSE</Ven>
        </Attrbts >
    </Rcrd>
</Instrm>


Since that's a large XML file (500 MB), I only managed to parse the first several thousand entries. My code is:


import xml.etree.ElementTree as et
import pandas as pd

tree = et.parse("newsample.xml")
root = tree.getroot()
lst = []
for country in root.findall("Rcrd"):
    rank = country.find("NewFinPpt")
    if rank is None:
        rank = country.find("OldFinPpt")
    if rank is None:
        rank = country.find("NowFinPpt")
    if rank is None:
        rank = country.find("FinPpt")

    Attrbts = country.find("Attrbts")
    try:
        id = rank.find("Id").text
    except AttributeError:
        id = ""
    try:
        FullNm = rank.find("FullNm").text
    except:
        FullNm=""

    try:
        Ccy = rank.find("Ccy").text
    except:
        Ccy = ""
    try:
        Cmmdty = rank.find("Cmmdty").text
    except:
        Cmmdty = ""

    try:
        Issr = country.find("Issr").text
    except:
        Issr = ""
    try:
        Amt = Attrbts.find("Amt").text
    except:
        Amt = ""
    try:
        Authrty = Attrbts.find("Authrty").text
    except:
        Authrty = ""
    try:
        dt = Attrbts.find("Prd").find("Dt").text
    except:
        dt = ""
    try:
        Ven = Attrbts.find("Ven").text
    except:
        Ven = ""

    lst.append([id, FullNm, Ccy, Cmmdty, Issr, Amt, Authrty, dt, Ven])
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
pd.set_option('max_colwidth', 100)
pd.set_option('display.width', 200)
p = pd.DataFrame(lst, columns=["id", "FullNm", "Ccy", "Cmmdty", "Issr", "Amt", "Authrty", "dt", "Ven"])
print(p)


It seems my loops are not very efficient at extracting nested elements from the XML file. If I tried to parse the whole file, it might take up to 5 hours. Is there any way to make the parsing faster? Maybe use multithreading or iterparse? Thanks.

I recommend using .iterparse() to parse large XML files.

This allows you to stream the XML, which is arguably the fastest and most memory-efficient way to deal with large inputs. But it's not as convenient as .parse(), because you have to keep track of parsing context yourself.

The following keeps a list of parent elements (stack) and reacts appropriately when elements with certain names (and certain ancestors) occur in the stream. It builds a dict (record) and fills it with values as it goes. At the end of each <Rcrd> it yields the dict and moves on to the next one.

import xml.etree.ElementTree as ET

def parse_rcrd(filename):
    stack = []
    record = None

    for event, elem in ET.iterparse(filename, events=('start', 'end')):
        if event == 'start':
            if elem.tag == 'Rcrd':
                record = {
                    'Id': '',
                    'FullNm': '',
                    'Ccy': '',
                    'Cmmdty': '',
                    'Issr': '',
                    'Amt': '',
                    'Authrty': '',
                    'Dt': '',
                    'Ven': '',
                }
            stack.append(elem.tag)
        elif event == 'end':
            # Read values on the 'end' event: at 'start' the element's text is not
            # guaranteed to be parsed yet. The element itself is still on top of
            # the stack here, so its parent is stack[-2].
            if (elem.tag in ['Id', 'FullNm', 'Ccy', 'Cmmdty'] and
                    stack[-2] in ['NewFinPpt', 'OldFinPpt', 'NowFinPpt', 'FinPpt']):
                record[elem.tag] = elem.text
            elif elem.tag == 'Issr' and stack[-2] == 'Rcrd':
                record[elem.tag] = elem.text
            elif (elem.tag in ['Amt', 'Authrty', 'Ven'] and
                    stack[-2] == 'Attrbts'):
                record[elem.tag] = elem.text
            elif (elem.tag == 'Dt' and
                    stack[-2] == 'Prd' and stack[-3] == 'Attrbts'):
                record[elem.tag] = elem.text
            elif elem.tag == 'Rcrd':
                yield record
                # Discard the finished <Rcrd> subtree so memory use stays bounded.
                elem.clear()

            stack.pop()

for rcrd in parse_rcrd('newsample.xml'):
    print(rcrd)
    # or add to a dataframe

This should be plenty fast for your XML. With your sample it prints this:

{'Id': 'BT0007YSAWK', 'FullNm': 'Turbo Car', 'Ccy': '', 'Cmmdty': 'false', 'Issr': '529900M2F7D5795H1A49', 'Amt': '15.134', 'Authrty': 'US', 'Dt': '2002-03-20', 'Ven': 'NYSE'}
{'Id': 'BX0009YNOYK', 'FullNm': 'Turbo truk', 'Ccy': 'EUR', 'Cmmdty': 'false', 'Issr': '58888M2F7D579536J4', 'Amt': '12.134', 'Authrty': 'UK', 'Dt': '2002-04-21', 'Ven': 'BOX'}
{'Id': 'GR0009YTEYK', 'FullNm': 'Tesla D4', 'Ccy': 'USD', 'Cmmdty': '', 'Issr': '58888M2F7D579536K4', 'Amt': '12.28', 'Authrty': 'UK', 'Dt': '2002-04-21', 'Ven': 'LSE'}
{'Id': 'YC0009UWLSK', 'FullNm': 'Tesla D6', 'Ccy': '', 'Cmmdty': 'true', 'Issr': 'S7264292F7D57957777', 'Amt': '14.28', 'Authrty': '', 'Dt': '2002-04-21', 'Ven': 'LSE'}
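
If you want the result in the same tabular form as your original code, the generator can be fed into pandas once it has produced all records. A minimal sketch, assuming the parse_rcrd() function above and that pandas is installed:

import pandas as pd

# Materialize the streamed records; each dict becomes one row, its keys become columns.
records = list(parse_rcrd('newsample.xml'))
df = pd.DataFrame(records)
print(df)

# Optionally save the table so the 500 MB file only has to be parsed once.
# df.to_csv('records.csv', index=False)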

I saw a new method yesterday, and I recommend it to you.

from simplified_scrapy import SimplifiedDoc, utils

doc = SimplifiedDoc(edit=False)
doc.loadFile('newsample.xml', lineByline=True)

lst = []
header = [
    "Id", "FullNm", "Ccy", "Cmmdty", "Issr", "Amt", "Authrty", "Dt", "Ven"
]
lst.append(header)
for rcrd in doc.getIterable('Rcrd'):
    rank = rcrd.select('NewFinPpt|OldFinPpt|NowFinPpt|FinPpt')
    attrbts = rcrd.Attrbts
    lst.append([
        rank.Id.text, rank.FullNm.text, rank.Ccy.text, rank.Cmmdty.text,
        rcrd.Issr.text, attrbts.Amt.text, attrbts.Authrty.text,
        attrbts.Dt.text, attrbts.Ven.text
    ])
print(lst)
# utils.save2csv('rcrd.csv', lst)

If there are no duplicate tags, you can use the following method.

from simplified_scrapy import SimplifiedDoc, utils

doc = SimplifiedDoc(edit=False)
doc.loadFile('newsample.xml', lineByline=True)

lst = []
header = [
    "Id", "FullNm", "Ccy", "Cmmdty", "Issr", "Amt", "Authrty", "Dt", "Ven"
]
lst.append(header)
for rcrd in doc.getIterable('Rcrd'):
    row = []
    for key in header:
        value = rcrd.select(key + '>text()')
        if not value: value = ''
        row.append(value)
    lst.append(row)
print(lst)

Result:

[['Id', 'FullNm', 'Ccy', 'Cmmdty', 'Issr', 'Amt', 'Authrty', 'Dt', 'Ven'], ['BT0007YSAWK', 'Turbo Car', '', 'false', '529900M2F7D5795H1A49', '15.134', 'US', '2002-03-20', 'NYSE'], ['BX0009YNOYK', 'Turbo truk', 'EUR', 'false', '58888M2F7D579536J4', '12.134', 'UK', '2002-04-21', 'BOX'], ['GR0009YTEYK', 'Tesla D4', 'USD', '', '58888M2F7D579536K4', '12.28', 'UK', '2002-04-21', 'LSE'], ['YC0009UWLSK', 'Tesla D6', '', 'true', 'S7264292F7D57957777', '14.28', '', '2002-04-21', 'LSE']]
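
If you would rather not rely on utils.save2csv, the same list of rows can be written out with Python's standard library instead; a small sketch, assuming lst is built as above with the header as its first row:

import csv

# Write the header row and all data rows to a CSV file.
with open('rcrd.csv', 'w', newline='', encoding='utf-8') as f:
    csv.writer(f).writerows(lst)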
