How to load a nested JSON response into a dataframe without saving to a file?
I'm attempting to load a heavily nested JSON into a pandas dataframe. This JSON is the result of an API call in Python. Here is the structure of the JSON:
{
'results': [{
'accountId': 'XXXXXXXXXXXXXXXXXXXXXXXXXXX',
'id': '6ed909bd16',
'partition': None,
'externalId': None,
'metadata': None,
'name': 'NewSeptemberFile20220908',
'description': None,
'created': '2022-09-12T22:04:55.799+00:00',
'lastModified': '2022-09-12T22:06:10.838+00:00',
'lastIndexed': '2022-09-12T22:05:04.551+00:00',
'privacyMode': 'Private',
'userName': 'M M',
'isOwned': True,
'isBase': True,
'hasSourceVideoFile': True,
'state': 'Processed',
'moderationState': 'OK',
'reviewState': 'None',
'processingProgress': '100%',
'durationInSeconds': 58,
'thumbnailVideoId': '6ed909bd16',
'thumbnailId': '5f04af4d-e382-4387-9573-6d9e4bad3b68',
'searchMatches': [],
'indexingPreset': 'Default',
'streamingPreset': 'Default',
'sourceLanguage': 'en-GB',
'sourceLanguages': ['en-GB'],
'personModelId': '00000000-0000-0000-0000-000000000000'
}, {
'accountId': 'XXXXXXXXXXXXXXXXX',
'id': '34344818e8',
'partition': None,
'externalId': None,
'metadata': None,
'name': '3September',
'description': None,
'created': '2022-09-09T17:55:59.696+00:00',
'lastModified': '2022-09-09T17:57:51.057+00:00',
'lastIndexed': '2022-09-09T17:56:04.544+00:00',
'privacyMode': 'Private',
'userName': 'M M',
'isOwned': True,
'isBase': True,
'hasSourceVideoFile': True,
'state': 'Processed',
'moderationState': 'OK',
'reviewState': 'None',
'processingProgress': '100%',
'durationInSeconds': 58,
'thumbnailVideoId': '34344818e8',
'thumbnailId': 'baae7ed1-a791-4481-853c-1707b40b5e77',
'searchMatches': [],
'indexingPreset': 'Default',
'streamingPreset': 'Default',
'sourceLanguage': 'en-GB',
'sourceLanguages': ['en-GB'],
'personModelId': '00000000-0000-0000-0000-000000000000'
}, {
'accountId': 'XXXXXXXXXXXXXXXXXXXXXXXXX',
'id': '82da4b60ef',
'partition': None,
'externalId': None,
'metadata': None,
'name': 'film10',
'description': None,
'created': '2022-08-22T14:24:08.442+00:00',
'lastModified': '2022-09-08T23:13:16.416+00:00',
'lastIndexed': '2022-08-22T14:24:12.605+00:00',
'privacyMode': 'Private',
'userName': 'M M',
'isOwned': True,
'isBase': True,
'hasSourceVideoFile': True,
'state': 'Processed',
'moderationState': 'OK',
'reviewState': 'None',
'processingProgress': '100%',
'durationInSeconds': 58,
'thumbnailVideoId': '82da4b60ef',
'thumbnailId': '5a5f6a71-0302-46a6-93c8-beb918c00b14',
'searchMatches': [],
'indexingPreset': 'Default',
'streamingPreset': 'Default',
'sourceLanguage': 'en-GB',
'sourceLanguages': ['en-GB'],
'personModelId': '00000000-0000-0000-0000-000000000000'
}, {
'accountId': 'XXXXXXXXXXXXXXX',
'id': '7ea0c5e34a',
'partition': None,
'externalId': None,
'metadata': None,
'name': 'davide_quatela--people_in_frankfurt',
'description': None,
'created': '2022-09-07T21:31:52.818+00:00',
'lastModified': '2022-09-08T22:52:52.833+00:00',
'lastIndexed': '2022-09-07T21:31:57.328+00:00',
'privacyMode': 'Private',
'userName': 'M M',
'isOwned': True,
'isBase': True,
'hasSourceVideoFile': True,
'state': 'Processed',
'moderationState': 'OK',
'reviewState': 'None',
'processingProgress': '100%',
'durationInSeconds': 131,
'thumbnailVideoId': '7ea0c5e34a',
'thumbnailId': '3aba8f42-a3a7-4d77-92b0-8cabcc275a3b',
'searchMatches': [],
'indexingPreset': 'Default',
'streamingPreset': 'Default',
'sourceLanguage': 'en-US',
'sourceLanguages': ['en-US'],
'personModelId': '00000000-0000-0000-0000-000000000000'
}, {
'accountId': 'XXXXXXXXXXXXXXXXXX',
'id': '7c45ae7ffe',
'partition': None,
'externalId': None,
'metadata': None,
'name': 'Untitled project',
'description': None,
'created': '2022-08-17T17:36:23.72+00:00',
'lastModified': '2022-08-17T17:36:49.95+00:00',
'lastIndexed': '2022-08-17T17:36:49.95+00:00',
'privacyMode': 'Private',
'userName': 'M M',
'isOwned': True,
'isBase': False,
'hasSourceVideoFile': False,
'state': 'Processed',
'moderationState': 'OK',
'reviewState': 'None',
'processingProgress': '',
'durationInSeconds': 0,
'thumbnailVideoId': None,
'thumbnailId': '00000000-0000-0000-0000-000000000000',
'searchMatches': [],
'indexingPreset': None,
'streamingPreset': 'Default',
'sourceLanguage': 'en-US',
'sourceLanguages': ['en-US'],
'personModelId': '00000000-0000-0000-0000-000000000000'
}
],
'nextPage': {
'pageSize': 25,
'skip': 0,
'done': True
}
}
And this is my code in Python 3.x:
url = "https://api.videoindexer.ai/" + Location + "/Accounts/" + AzVIAccountID + "/Videos?pageSize=25&skip=0&accessToken=" + iAccessToken
response = requests.get(url, headers=hdr)
# Response: 200 OK
print("If response == 200, the script ran OK:", response.status_code)
I want the result to be like this:
Question: Is there a possibility to fetch 'name', 'id', and 'created' of all the videos shown in the JSON result (the API response) into a dataframe directly?
If data contains the data from the question, you can do:
import pandas as pd

df = pd.DataFrame(data["results"])
columns_i_want = ["id", "name", "created"]
df = df[columns_i_want]
print(df)
Prints:
id name created
0 6ed909bd16 NewSeptemberFile20220908 2022-09-12T22:04:55.799+00:00
1 34344818e8 3September 2022-09-09T17:55:59.696+00:00
2 82da4b60ef film10 2022-08-22T14:24:08.442+00:00
3 7ea0c5e34a davide_quatela--people_in_frankfurt 2022-09-07T21:31:52.818+00:00
4 7c45ae7ffe Untitled project 2022-08-17T17:36:23.72+00:00
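Since the 'created' values arrive as ISO-8601 strings, a common follow-up step is converting that column to real datetimes so you can sort or filter by date. A minimal sketch, using a small hypothetical sample shaped like the API's "results" entries:

```python
import pandas as pd

# Hypothetical sample mirroring the shape of the API response.
data = {
    "results": [
        {"id": "6ed909bd16", "name": "NewSeptemberFile20220908",
         "created": "2022-09-12T22:04:55.799+00:00"},
        {"id": "34344818e8", "name": "3September",
         "created": "2022-09-09T17:55:59.696+00:00"},
    ]
}

df = pd.DataFrame(data["results"])
# "created" is a plain string column; convert it to timezone-aware
# datetimes so comparisons and sorting work chronologically.
df["created"] = pd.to_datetime(df["created"])
newest = df.sort_values("created", ascending=False).iloc[0]["name"]
```

After the conversion, expressions like `df[df["created"] > "2022-09-01"]` also work as date filters.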
EDIT:
import requests
import pandas as pd

url = (
"https://api.videoindexer.ai/"
+ Location
+ "/Accounts/"
+ AzVIAccountID
+ "/Videos?pageSize=25&skip=0&accessToken="
+ iAccessToken
)
response = requests.get(url, headers=hdr)
data = response.json()
# make sure the response contains the expected data:
print(data)
# if above command prints correct data, you can do next:
df = pd.DataFrame(data["results"])
columns_i_want = ["id", "name", "created"]
df = df[columns_i_want]
print(df)
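Note that the response also carries a 'nextPage' object with 'pageSize', 'skip', and 'done'. If the account has more than 25 videos, you would need to request further pages until 'done' is true. A sketch of that loop, written against a pluggable fetch function (the real one would call requests.get with the skip value in the query string; the fake one below only illustrates the paging logic):

```python
import pandas as pd

def fetch_all_videos(fetch_page):
    """Collect every page of results into one DataFrame.

    fetch_page(skip) is assumed to return a dict shaped like the API
    response: {"results": [...], "nextPage": {"pageSize": ..., "skip": ...,
    "done": bool}}.
    """
    all_results = []
    skip = 0
    while True:
        page = fetch_page(skip)
        all_results.extend(page["results"])
        next_page = page["nextPage"]
        if next_page["done"]:
            break
        skip += next_page["pageSize"]
    return pd.DataFrame(all_results)

# Stand-in fetcher returning two fake pages, just to exercise the loop.
def fake_fetch(skip):
    if skip == 0:
        return {"results": [{"id": "a", "name": "one"}],
                "nextPage": {"pageSize": 1, "skip": 0, "done": False}}
    return {"results": [{"id": "b", "name": "two"}],
            "nextPage": {"pageSize": 1, "skip": 1, "done": True}}

df_all = fetch_all_videos(fake_fetch)
```

With the real API you would pass a function that rebuilds the URL with the current skip value, e.g. f"...&pageSize=25&skip={skip}&accessToken=...", and returns response.json().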