
How to extract values from nested JSON array using pandas

I have a large JSON file (400k lines). I am trying to isolate the following:

Policies - "description"

Policy items - "users" and "database values"

JSON FILE - https://pastebin.com/hv8mLfgx

Expected Output from Pandas: https://imgur.com/a/FVcNGsZ

Everything after "policyItems" is repeated with exactly the same structure throughout the file. I have tried the code below to isolate "users", but it doesn't seem to work. I'm trying to dump all of this into a CSV.

Edit: here is a solution I attempted to follow, but I could not get it to work correctly - Deeply nested JSON response to pandas dataframe

from pandas.io.json import json_normalize as Jnormal
import json
import pprint, csv
import re

with open("Ranger_Policies_20190204_195010.json") as file:
    jsonDF = json.load(file)
    for item in jsonDF['policies'][0]['policyItems'][0]:
        print ('{} - {} - {}'.format(jsonDF['users']))

EDIT 2: I have some working code which is able to grab some of the USERS, but it does not grab all of them. Only 11 out of 25.

from pandas.io.json import json_normalize as Jnormal
import json
import pprint, csv
import re

with open("Ranger_Policies_20190204_195010.json") as file:
    jsonDF = json.load(file)
    pNode = Jnormal(jsonDF['policies'][0]['policyItems'], record_path='users')
    print(pNode.head(500))
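
Note that json_normalize above is only fed jsonDF['policies'][0]['policyItems'], so it can only ever see users from the first policy, which may be why just a subset shows up. A minimal sketch of walking every policy instead (assuming the same pandas.io.json import as above, and that each policy carries a policyItems list):

from pandas.io.json import json_normalize as Jnormal
import json

with open("Ranger_Policies_20190204_195010.json") as file:
    jsonDF = json.load(file)

# record_path walks policyItems -> users for every policy;
# meta pulls in the policy-level description alongside each user.
pNode = Jnormal(jsonDF['policies'],
                record_path=['policyItems', 'users'],
                meta=['description'])
# the scalar user strings land in a column named 0, so give it a proper name
pNode = pNode.rename(columns={0: 'user'})
print(pNode.head(500))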

EDIT 3: This is the final working copy, however I am still not copying over all my TABLE data. I set the loop to simply capture everything and planned to sort it out in Excel. Does anyone have any ideas why I cannot capture all the TABLE values?

import csv
import json

with open("Ranger_Policies_20190204_195010.json") as file:
    json_data = json.load(file)
    with open("test.csv", 'w', newline='') as fd:
        wr = csv.writer(fd)
        wr.writerow(('Database name', 'Users', 'Description', 'Table'))
        for policy in json_data['policies']:
            desc = policy['description']
            db_values = policy['resources']['database']['values']
            db_tables = policy['resources']['table']['values']
            for item in policy['policyItems']:
                users = item['users']
                for dbT in db_tables:
                    for user in users:
                        for db in db_values:
                            _ = wr.writerow((db, user, desc, dbT))
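
A hedged guess at the missing TABLE rows: if some policies have no 'table' entry under 'resources' (or an empty 'values' list there), the inner for dbT in db_tables loop runs zero times, so those policies never reach the CSV at all. Below is a small sketch of the same loop using .get() with fallback placeholders so every policy still produces rows; the '<no database>' and '<no table>' placeholders are made up for illustration:

import csv
import json

with open("Ranger_Policies_20190204_195010.json") as file:
    json_data = json.load(file)
    with open("test.csv", 'w', newline='') as fd:
        wr = csv.writer(fd)
        wr.writerow(('Database name', 'Users', 'Description', 'Table'))
        for policy in json_data['policies']:
            desc = policy.get('description', '')
            resources = policy.get('resources', {})
            # "or" falls back to a placeholder when the key is missing
            # or the values list is empty, instead of skipping the policy
            db_values = resources.get('database', {}).get('values') or ['<no database>']
            db_tables = resources.get('table', {}).get('values') or ['<no table>']
            for item in policy.get('policyItems', []):
                for user in item.get('users', []):
                    for db in db_values:
                        for dbT in db_tables:
                            wr.writerow((db, user, desc, dbT))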

Pandas is overkill here: the csv standard module is enough. You just have to iterate over the policies to extract the description and database values, then over the policyItems to extract the users:

with open("Ranger_Policies_20190204_195010.json") as file:
    jsonDF = json.load(file)
with open("outputfile.csv", newline='') as fd:
    wr = csv.writer(fd)
    _ = wr.writerow(('Database name', 'Users', 'Description'))
    for policy in js['policies']:
        desc = policy['description']
        db_values = policy['resources']['database']['values']
        for item in policy['policyItems']:
            users = item['users']
            for user in users:
                for db in db_values:
                    if db != '*':
                        _ = wr.writerow((db, user, desc))
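
If you still want the result as a DataFrame afterwards (the question title does ask about pandas), the generated CSV can simply be read back in; outputfile.csv below is the file written above:

import pandas as pd

# load the CSV produced by the csv-module loop for further sorting/filtering
df = pd.read_csv("outputfile.csv")
print(df.head())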

Here is one way to do it. Let's assume your JSON data is in a variable called json_data:

from itertools import product

import pandas as pd

def make_dfs(data):
    cols = ['db_name', 'user', 'description']

    for item in data.get('policies'):
        description = item.get('description')
        users = item.get('policyItems', [{}])[0].get('users', [None])
        db_name = item.get('resources', {}).get('database', {}).get('values', [None])
        db_name = [name for name in db_name if name != '*']
        prods = product(db_name, users, [description])
        yield pd.DataFrame.from_records(prods, columns=cols)

df = pd.concat(make_dfs(json_data), ignore_index=True)

print(df)

   db_name          user                               description
0    m2_db          hive  Policy for all - database, table, column
1    m2_db  rangerlookup  Policy for all - database, table, column
2    m2_db     ambari-qa  Policy for all - database, table, column
3    m2_db          af34  Policy for all - database, table, column
4    m2_db          g748  Policy for all - database, table, column
5    m2_db          hdfs  Policy for all - database, table, column
6    m2_db          dh10  Policy for all - database, table, column
7    m2_db          gs22  Policy for all - database, table, column
8    m2_db          dh27  Policy for all - database, table, column
9    m2_db          ct52  Policy for all - database, table, column
10   m2_db  livy_pyspark  Policy for all - database, table, column

Tested on Python 3.5.1 and pandas==0.23.4
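
Since the end goal in the question is a CSV file, a short follow-up (the output filename here is only an example):

# export the concatenated DataFrame to CSV, dropping the numeric index
df.to_csv("policies_users.csv", index=False)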


 