
Convert Python nested JSON-like data to a DataFrame

My records look like this, and I need to write them to a CSV file:

my_data={"data":[{"id":"xyz","type":"book","attributes":{"doc_type":"article","action":"cut"}}]}

This looks like JSON, but the next record starts with "data" rather than "data1", which forces me to read each record separately. I convert each line to a dict using eval(), iterate through the keys and values along a certain path to reach the values I need, and build a list of keys and values from the keys I want. Finally, pd.DataFrame() converts that list into a DataFrame, which I know how to write to CSV. My working code is below, but I am sure there are better ways to do this; mine scales poorly. Thanks.

counter = 1
k = []
v = []
res = []
for line in f2:                 # f2 is the open input file, one record per line
    jline = eval(line)
    counter += 1
    k.append(list(jline[u'data'][0].keys()))
    v.append(list(jline[u'data'][0].values()))
print('keys are:', k)
i = 0
while i < 3:
    j = 0                       # reset the inner index for every row
    while j < 3:
        if k[i][j] == u'id':
            res.append(v[i][j])
        j += 1
    i += 1
# res is my result set
del k[:]
del v[:]

Changing my_data to be:

my_data = [{"id":"xyz","type":"book","attributes":{"doc_type":"article","action":"cut"}}, # Data One
{"id":"xyz2","type":"book","attributes":{"doc_type":"article","action":"cut"}}, # Data Two
{"id":"xyz3","type":"book","attributes":{"doc_type":"article","action":"cut"}}] # Data Three

You can dump this directly into a DataFrame like so:

mydf = pd.DataFrame(my_data)
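Note that pd.DataFrame leaves the nested attributes dict as a single column. If you want those nested fields as their own columns, pandas can flatten them with json_normalize. A minimal sketch, assuming pandas >= 1.0 (older versions expose it as pandas.io.json.json_normalize) and a hypothetical output filename:

import pandas as pd

# Flatten the nested "attributes" dict into dotted columns
# (attributes.doc_type, attributes.action).
flat = pd.json_normalize(my_data)
flat.to_csv('records.csv', index=False)   # hypothetical filename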

It's not clear what your data path would be, but if you are looking for specific combinations of id , type , etc., you could search explicitly:

def find_my_way(data, pattern):
    # pattern = {'id': 'someid', 'type': 'sometype', ...}
    res = []
    for row in data:
        if row.get('id') == pattern.get('id'):
            res.append(row)
    return res


mydf = pd.DataFrame(find_my_way(my_data, pattern))
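For example, with a hypothetical pattern (the pattern dict and output filename here are illustrative, not from the original post):

pattern = {'id': 'xyz2'}              # hypothetical filter
mydf = pd.DataFrame(find_my_way(my_data, pattern))
mydf.to_csv('filtered.csv', index=False)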

EDIT:

Without going into how the API works, in pseudo-code you'll want to do something like the following:

my_objects = []
calls = 0
while calls < maximum:
    calls += 1
    my_data = call_the_api(params)

    data = my_data.get('data')

    if not data:
        continue

    # API calls for single objects usually return a dictionary; calls for
    # groups of objects return lists. This handles both cases.
    if isinstance(data, list):
        my_objects = [*data, *my_objects]

    elif isinstance(data, dict):
        my_objects = [{**data}, *my_objects]

# This unpacks the data responses into a list that you can then load into a
# DataFrame, with the attributes from the API as the columns.

df = pd.DataFrame(my_objects)

Assuming your data from the API looks like:

"""
 {
 "links": {},
 "meta": {},
 "data": {
    "type": "FactivaOrganizationsProfile",
    "id": "Goog",
    "attributes": {
      "key_executives": {
        "source_provider": [
          {
            "code": "FACSET",
            "descriptor": "FactSet Research Systems Inc.",
            "primary": true
          }
        ]
      }
    },
    "relationships": {
      "people": {
        "data": {
            "type": "people",
            "id": "39961704"
          }
      }
    }
  },
 "included": {}
 }
 """

per the documentation, which is why I'm using my_data.get('data').
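If you then want to flatten a nested branch of that response into rows rather than keep it as a single dict column, json_normalize's record_path and meta arguments can walk the nesting. A sketch against the sample above, assuming my_data holds the parsed response (the field choices are illustrative):

import pandas as pd

payload = my_data['data']  # the "data" object from the sample response

# One row per source_provider entry, carrying the top-level id/type along.
df = pd.json_normalize(
    payload,
    record_path=['attributes', 'key_executives', 'source_provider'],
    meta=['id', 'type'],
)
# Columns: code, descriptor, primary, id, type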

That should get you all of the data (unfiltered) into a DataFrame.

Building the DataFrame once at the end, rather than row by row, is also a bit more memory-friendly.
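Applied to your original per-line records, a minimal sketch of that pattern (json.loads replaces eval(), which is safer for untrusted input; the filenames are hypothetical):

import json
import pandas as pd

rows = []
with open('records.txt') as f2:        # hypothetical input file
    for line in f2:
        record = json.loads(line)      # safer than eval()
        rows.extend(record['data'])    # each line wraps a list in "data"
df = pd.DataFrame(rows)                # build the DataFrame once, at the end
df.to_csv('out.csv', index=False)      # hypothetical output file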
