Merging pandas dataframes
I am fairly new to pandas. I am calling an API whose response looks like this:
Id name number key
1 john 540 us
2 alex 541 us
3 mary 542 us
4 kate 543 us
...
I am calling the same API about 120 times, and each time I get a dataframe with 1000 rows.
def load_full(times):
    item_count = 0
    while item_count <= times:
        response = requests.post(url_2, data=json.dumps(data_two), headers=headers)
        response_json = response.json()
        result = pd.io.json.json_normalize(response_json['hits']['hits'])
        item_count += 1
    print(result)
My goal is to merge those 120 responses, 1000 rows each, into one dataframe which I would export to a .CSV file. I have tried appending and merging, but I can't seem to find the logic that actually gets me what I need: a 120000x4 dataframe.

How would I merge each result into one file containing the result of every API call?

Thank you for your suggestions.
The idea is to create a list of DataFrames with append and then concat them together:
import json

import pandas as pd
import requests

def load_full(times):
    dfs = []
    item_count = 0
    while item_count < times:  # run exactly `times` calls (<= would run one extra)
        response = requests.post(url_2, data=json.dumps(data_two), headers=headers)
        response_json = response.json()
        # pd.io.json.json_normalize is deprecated since pandas 1.0; use pd.json_normalize
        result = pd.json_normalize(response_json['hits']['hits'])
        item_count += 1
        dfs.append(result)
    # stack every per-call DataFrame into one, renumbering the index 0..N-1
    df = pd.concat(dfs, ignore_index=True)
    return df
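A minimal, self-contained sketch of the same list-then-concat pattern, with toy DataFrames standing in for the API responses (the values are made up; replace the loop body with your real request):

```python
import pandas as pd

dfs = []
for i in range(3):  # pretend each iteration is one API call
    chunk = pd.DataFrame({
        "Id": [i * 2 + 1, i * 2 + 2],
        "name": ["john", "alex"],
        "number": [540 + i * 2, 541 + i * 2],
        "key": ["us", "us"],
    })
    dfs.append(chunk)

# One DataFrame with the rows of every chunk and a single 0..N-1 index
df = pd.concat(dfs, ignore_index=True)
print(df.shape)  # (6, 4)

# Export everything to a single CSV without the index column
df.to_csv("combined.csv", index=False)
```

With 120 chunks of 1000 rows each, the same pattern yields the 120000x4 dataframe you're after.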