
Processing large JSON with multiple root elements and reading it into a pandas DataFrame

I want to (pre)process large JSON files (5-10 GB each) that contain multiple root elements. The root elements follow one another without a separator, like this: {}{}....

So I first wrote the following simple code to turn that into valid JSON:

import pandas as pd
from io import StringIO

with open(file) as f:
    file_data = f.read()                        # reads the whole file into memory
    file_data = file_data.replace("}{", "},{")  # fragile if "}{" ever occurs inside a string value
    file_data = "[" + file_data + "]"           # wrap the objects in a JSON array
    df = pd.read_json(StringIO(file_data))      # newer pandas expects a file-like object, not a raw string

Obviously this doesn't work with large files. Even a 400 MB file fails, although I have 16 GB of RAM.

I've read that it's possible to work with chunks, but I can't figure out the "chunk logic" here. Is there a way to "chunkenize" this?
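For reference, one way to "chunkenize" such a stream is Python's json.JSONDecoder.raw_decode, which parses a single object from a string and returns the index where it ended, so the file can be read in fixed-size pieces and decoded object by object. A minimal sketch, assuming Python 3; the function name iter_json_objects and the buffer size are illustrative choices, not from the original post:

import json

def iter_json_objects(path, buf_size=1 << 20):
    # Yield one decoded object at a time from a file that contains
    # concatenated root objects ({}{}...), reading it in ~1 MB pieces
    # instead of loading the whole file into memory.
    decoder = json.JSONDecoder()
    buf = ""
    with open(path, encoding="utf-8") as f:
        while True:
            chunk = f.read(buf_size)
            buf += chunk
            while buf:
                try:
                    obj, end = decoder.raw_decode(buf)
                except json.JSONDecodeError:
                    break  # object is cut off at the buffer end; read more
                yield obj
                buf = buf[end:].lstrip()
            if not chunk:  # EOF
                if buf.strip():
                    raise ValueError("file ends with an incomplete JSON object")
                return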

I'd be glad for any help.

I am having a hard time visualizing the multiple-root-element idea, but you should write the file_data contents to disk and try reading them in separately. While the file is open, it consumes RAM on top of the RAM consumed by the file_data object (and possibly by the modified copy as well, though that is a garbage-collection question; I think garbage collection happens after the function returns). Try calling f.close() explicitly instead of using with, and return the data from a separate function.
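To illustrate the write-to-disk-and-read-back idea, here is a sketch that assumes the iter_json_objects helper above and hypothetical file names: stream the objects out as newline-delimited JSON, which pandas can then read lazily in chunks rather than all at once.

import json
import pandas as pd

src = "big.json"    # the original {}{}... file (hypothetical name)
dst = "big.ndjson"  # one JSON object per line

# Convert on the fly; at most one object is in memory at a time.
with open(dst, "w", encoding="utf-8") as out:
    for obj in iter_json_objects(src):
        out.write(json.dumps(obj) + "\n")

# With lines=True and chunksize set, pandas returns an iterator
# of DataFrames instead of materializing everything in memory.
for df in pd.read_json(dst, lines=True, chunksize=100_000):
    ...  # process each ~100k-row DataFrame here

The chunksize of 100,000 rows is only an example; tune it to whatever fits comfortably in memory for your data.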
