
XML parsing in Python for big data

I am trying to parse an XML file using Python, but the file is around 30 GB, so parsing it takes hours:

import xml.etree.ElementTree as ET

tree = ET.parse('Posts.xml')

In my XML file, there are millions of child elements of the root. Is there any way to make this faster? I don't need to parse all of the children; even the first 100,000 would be fine. All I need is a way to set a limit on how much of the file gets parsed.

You'll want an XML parsing mechanism that doesn't load everything into memory.

You can use ElementTree.iterparse, or you could use SAX.
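A minimal SAX sketch of this idea, using only the standard library. The child-element name `row` and the attribute handling are assumptions about your file's layout (Stack Exchange dumps use `<row …/>` children, but adjust to match yours); the `LimitReached` exception is just a conventional way to abort the parse early once enough elements have been seen:

```python
import xml.sax


class LimitReached(Exception):
    """Raised to abort parsing once enough elements have been seen."""


class RowHandler(xml.sax.ContentHandler):
    # "row" is a hypothetical child-element name; change it for your file.
    def __init__(self, limit):
        super().__init__()
        self.limit = limit
        self.count = 0

    def startElement(self, name, attrs):
        if name == "row":
            self.count += 1
            # ... process attrs here, e.g. attrs.get("Id") ...
            if self.count >= self.limit:
                raise LimitReached


def count_rows(source, limit=100_000):
    """Parse at most `limit` <row> elements, then stop reading the file."""
    handler = RowHandler(limit)
    try:
        xml.sax.parse(source, handler)
    except LimitReached:
        pass  # stopped early on purpose
    return handler.count
```

Because SAX is event-driven and never builds a tree, memory use stays constant regardless of file size, and raising an exception from the handler stops reading the rest of the 30 GB file.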

Here is a page with some XML processing tutorials for Python.

UPDATE: As @marbu said in the comments, if you use ElementTree.iterparse, be sure to use it in such a way that elements are discarded from memory once you've finished processing them.
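A sketch of that iterparse pattern: grab the root from its `start` event, then clear finished children off the root so memory stays bounded. The depth tracking assumes you only want direct children of the root; the generator name and limit are illustrative:

```python
import xml.etree.ElementTree as ET


def iter_first_children(path, limit=100_000):
    """Yield up to `limit` direct children of the root, freeing memory as we go."""
    context = ET.iterparse(path, events=("start", "end"))
    _, root = next(context)          # the root's "start" event arrives first
    depth = 0
    yielded = 0
    for event, elem in context:
        if event == "start":
            depth += 1
        else:  # "end"
            depth -= 1
            if depth == 0:           # a direct child of the root just closed
                yield elem
                yielded += 1
                root.clear()         # drop finished children from the root
                if yielded >= limit:
                    return           # stop reading the rest of the file
```

Calling `root.clear()` after each child is processed is what keeps this from slowly accumulating the whole 30 GB tree in memory; returning after `limit` children means the remainder of the file is never read at all.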
