
Appending to dataframe in while loop pandas

So I am having some issues trying to sort a dataframe. The API my code pulls from only allows 1000 rows at a time and then returns a continuation URL, which my script follows in a while loop. The problem is that on each pass I have it write and append to a CSV. That works fine, but now I need to sort the whole dataframe, which is a problem.

How can I write to a dataframe on each pass and then write that dataframe out to a CSV at the end? Should I append to the dataframe on each loop, or have it create a new dataframe on each pass and then combine them all at the end, and if so, how? I'm not sure how to do this; I barely have it working as is, so any advice would be appreciated.
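The accumulate-then-combine idea in the question can be sketched in isolation. This is a minimal illustration with made-up page data standing in for the API responses; only the `pd.concat`/sort pattern is the point:

```python
import pandas as pd

# Each "page" stands in for one API response worth of rows
pages = [
    [{"timestamp": 3, "high": "10", "low": "7"}],
    [{"timestamp": 1, "high": "5", "low": "4"}],
    [{"timestamp": 2, "high": "9", "low": "2"}],
]

frames = []
for page in pages:
    frames.append(pd.json_normalize(page))  # one dataframe per pass

# Combine once, then sort the whole thing before writing the CSV
df = pd.concat(frames, ignore_index=True)
df["range"] = df["high"].astype(float) - df["low"].astype(float)
df = df.sort_values(by="range")
df.to_csv("combined.csv", index=False)
```

Collecting frames in a list and calling `pd.concat` once avoids re-copying the accumulated data on every pass.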

import requests
import json
import pandas as pd
import time
import os
from itertools import product

#what I need to loop through
instrument = ('btc-usd')
exchange = ('cbse')  
intervals = ('1m','3m')
start_time = '2021-01-14T00:00:00Z'
end_time = '2021-01-16T23:59:59Z'


for (interval,) in product(intervals):  # product yields 1-tuples, so unpack
    page_size = '1000'
    url = f'https://us.market-api.kaiko.io/v2/data/trades.v1/exchanges/{exchange}/spot/{instrument}/aggregations/count_ohlcv_vwap'
    #params = {'interval': interval, 'page_size': page_size, 'start_time': start_time, 'end_time': end_time }
    params = {'interval': interval, 'page_size': page_size }
    KEY = 'xxx'
    headers = {
        "X-Api-Key": KEY,
        "Accept": "application/json",
        "Accept-Encoding": "gzip"
    }

    csv_file = f"{exchange}-{instrument}-{interval}.csv"
    c_token = True

    while(c_token):
        res = requests.get(url, params=params, headers=headers)
        j_data = res.json()
        parse_data = j_data['data']
        c_token = j_data.get('continuation_token')
        today = time.strftime("%Y-%m-%d")
        params = {'continuation_token': c_token}

        if c_token:   
            url = f'https://us.market-api.kaiko.io/v2/data/trades.v1/exchanges/cbse/spot/btc-usd/aggregations/count_ohlcv_vwap?continuation_token={c_token}'        

        # create dataframe
        df = pd.DataFrame.from_dict(pd.json_normalize(parse_data), orient='columns')
        df.insert(1, 'time', pd.to_datetime(df.timestamp.astype(int),unit='ms'))          
        df['range'] = df['high'].astype(float) - df['low'].astype(float)
        df.range = df.range.astype(float)

        #sort
        df = df.sort_values(by='range')
        
        #that means file already exists need to append
        if(csv_file in os.listdir()): 
            csv_string = df.to_csv(index=False, encoding='utf-8', header=False)
            with open(csv_file, 'a') as f:
                f.write(csv_string)
        #that means writing file for the first time        
        else: 
            csv_string = df.to_csv(index=False, encoding='utf-8')
            with open(csv_file, 'w') as f:
                f.write(csv_string)

Perhaps the cleanest and most efficient way is to build up a single empty dataframe and append each page to it.

import requests
import json
import pandas as pd
import time
import os
from itertools import product

#what I need to loop through
instruments = ('btc-usd',)
exchanges = ('cbse',)
intervals = ('1m','3m')  
start_time = '2021-01-14T00:00:00Z'
end_time = '2021-01-16T23:59:59Z'
params = {'page_size': 1000}
KEY = 'xxx'
    
headers = {
        "X-Api-Key": KEY,
        "Accept": "application/json",
        "Accept-Encoding": "gzip"
    }

for instrument, exchange, interval  in product(instruments, exchanges, intervals):
    params['interval'] = interval
    url = f'https://us.market-api.kaiko.io/v2/data/trades.v1/exchanges/{exchange}/spot/{instrument}/aggregations/count_ohlcv_vwap'
    csv_file = f"{exchange}-{instrument}-{interval}.csv"
    df = pd.DataFrame()   # start with empty dataframe

    while True:
        res = requests.get(url, params=params, headers=headers)
        j_data = res.json()
        parse_data = j_data['data']
        df = pd.concat([df, pd.json_normalize(parse_data)], ignore_index=True)  # append this page's rows (DataFrame.append was removed in pandas 2.0)
        if 'continuation_token' in j_data:
            params['continuation_token'] = j_data['continuation_token']
        else:
            break
        
    # These parts can be done outside of the while loop, once all the data has been compiled
    df.insert(1, 'time', pd.to_datetime(df.timestamp.astype(int),unit='ms'))          
    df['range'] = df['high'].astype(float) - df['low'].astype(float)
    df.range = df.range.astype(float)
    df = df.sort_values(by='range')
    df.to_csv(csv_file, index=False, encoding='utf-8')  # write the whole CSV at once

If the size of the combined dataframe is too large for memory, then you could instead read in one page at a time and append it to the CSV, provided the column headings are the same on each page. (You may still need to make sure that pandas writes the columns in the same order each time.)
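One way to keep the column order stable when streaming page by page is to `reindex` each page against a fixed column list before appending. A sketch under the assumption that your pages share the columns listed in `COLUMNS` (adjust the list to your actual data):

```python
import os
import pandas as pd

COLUMNS = ["timestamp", "high", "low"]  # fix the column order up front

def append_page(page_df, csv_file):
    # Reindex so every page writes the same columns in the same order;
    # any column missing from a page becomes NaN instead of shifting the layout
    page_df = page_df.reindex(columns=COLUMNS)
    write_header = not os.path.exists(csv_file)  # header only on the first write
    page_df.to_csv(csv_file, mode="a", header=write_header, index=False)
```

`mode="a"` appends to the file, and gating `header` on whether the file already exists replaces the `os.listdir()` check in the question.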

You can use `df.loc` together with `len` to add a list of values as a new row.

win_results_df = pd.DataFrame(columns=['GameId', 'Team', 'TeamOpponent',
                                       'HomeScore', 'VisitorScore', 'Target'])

df_length = len(win_results_df)
win_results_df.loc[df_length] = [teamOpponent['gameId'],
                                 key, teamOpponent['visitorDisplayName'],
                                 teamOpponent['HomeScore'], teamOpponent['VisitorScore'], True]
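A self-contained sketch of that pattern, with toy values in place of the `teamOpponent` data. Note that growing a dataframe row by row this way copies data on each insert, so it is best kept to small frames:

```python
import pandas as pd

win_results_df = pd.DataFrame(columns=["GameId", "Team", "TeamOpponent",
                                       "HomeScore", "VisitorScore", "Target"])

# On a default integer index, len(df) is always the next free label,
# so df.loc[len(df)] = [...] appends exactly one row per call
for game_id in (101, 102):
    win_results_df.loc[len(win_results_df)] = [game_id, "A", "B", 3, 1, True]
```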

