
Python - avoiding memory error with HUGE data set

I have a python program that connects to a PostGreSQL database. In this database I have quite a lot of data (around 1.2 billion rows). Luckily I don't have to analyse all of those rows at the same time.

Those 1.2 billion rows are spread over several tables (around 30). Currently I'm accessing a table called table_3, in which I want to access all rows with a specific "did" value (as the column is called).

I counted the rows with the SQL command:

SELECT count(*) FROM table_3 WHERE did='356002062376054';

which returns 157 million rows.

I will perform some "analysis" on all of these rows (extracting two specific values), do some calculations on those values, write them to a dictionary, and then save them back to PostGreSQL in a different table.

The problem is that I'm creating so many lists and dictionaries to manage all this that I end up running out of memory, even though I'm using 64-bit Python 3 and have 64 GB of RAM.

Some code:

CONNECTION = psycopg2.connect('<psycopg2 formatted string>')
CURSOR = CONNECTION.cursor()

DID_LIST = ["357139052424715",
            "353224061929963",
            "356002064810514",
            "356002064810183",
            "358188051768472",
            "358188050598029",
            "356002061925067",
            "358188056470108",
            "356002062376054",
            "357460064130045"]

SENSOR_LIST = [1, 2, 3, 4, 5, 6, 7, 8, 9,
               10, 11, 12, 13, 801, 900, 901,
               902, 903, 904, 905, 906, 907,
               908, 909, 910, 911]

for did in DID_LIST:
    table_name = did
    for sensor_id in SENSOR_LIST:
        rows = get_data(did, sensor_id)
        list_object = create_standard_list(sensor_id, rows)  # Memory error happens here
        formatted_list = format_table_dictionary(list_object)  # Or here
        pushed_rows = write_to_table(table_name, formatted_list)  # write_to_table method is omitted as that is not my problem.

def get_data(did, table_id):
    """Getting data from postgresql."""
    table_name = "table_{0}".format(table_id)
    query = """SELECT * FROM {0} WHERE did='{1}'
               ORDER BY timestamp""".format(table_name, did)

    CURSOR.execute(query)
    CONNECTION.commit()

    return CURSOR

def create_standard_list(sensor_id, data):
    """Formats DB data to dictionary"""
    list_object = []

    print("Create standard list")
    for row in data: # data is the psycopg2 CURSOR
        row_timestamp = row[2]
        row_data = row[3]

        temp_object = {"sensor_id": sensor_id, "timestamp": row_timestamp,
                       "data": row_data}

        list_object.append(temp_object)

    return list_object


def format_table_dictionary(list_dict):
    """Formats dictionary to simple data
       table_name = (dates, data_count, first row)"""
    print("Formatting dict to DB")
    temp_today = 0
    dict_list = []
    first_row = {}
    count = 1

    for elem in list_dict:
        # convert to seconds
        date = datetime.fromtimestamp(elem['timestamp'] / 1000)
        today = int(date.strftime('%d'))
        if temp_today is not today:
            if not first_row:
                first_row = elem['data']
            first_row_str = str(first_row)
            dict_object = {"sensor_id": elem['sensor_id'],
                           "date": date.strftime('%d/%m-%Y'),
                           "reading_count": count,
                           # size in MB of data
                           "approx_data_size": (count*len(first_row_str)/1000),
                           "time": date.strftime('%H:%M:%S'),
                           "first_row": first_row}

            dict_list.append(dict_object)
            first_row = {}
            temp_today = today
            count = 0
        else:
            count += 1

    return dict_list

My error occurs when creating either of the two lists, as marked with comments in my code. It manifests as my computer stopping responding, and eventually logging me out. I'm running Windows 10, if that matters.

I know the first list, created in the "create_standard_list" method, could be eliminated and that code could run inside the "format_table_dictionary" code, which would avoid holding a list with 157 million elements in memory. But I think some of the other tables I'll run into will have similar problems, possibly even bigger ones, so I thought I'd optimize it all now — and I'm unsure what I can do?

I guess writing to a file wouldn't really help, since I'd have to read that file back, thereby putting it right back into memory?
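(An option neither answer below spells out, sketched here as an assumption rather than the asker's method: psycopg2 can stream a result set with a server-side cursor — `CONNECTION.cursor(name='some_name')` — and `fetchmany()` then pulls a bounded batch at a time, so the full 157 million rows never sit in RAM. The stub cursor below stands in for a real psycopg2 cursor, which has the same `fetchmany` contract, so the batching logic can be shown on its own:)

```python
def iter_rows(cursor, batch_size=10_000):
    """Yield rows one at a time, fetching them in bounded batches,
    so at most batch_size rows are held in memory at once."""
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            return
        yield from batch

class StubCursor:
    """Minimal stand-in for a psycopg2 cursor (same fetchmany contract)."""
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, size):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

# With a real connection this would be a *named* (server-side) cursor:
# cursor = CONNECTION.cursor(name='stream_table_3')
total = sum(1 for _ in iter_rows(StubCursor(range(25)), batch_size=10))
# all 25 rows are seen, but never more than 10 are fetched at once
```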

Minimalistic example

I have a table:

---------------------------------------------------------------
|Row 1 | did | timestamp | data | unused value | unused value |
|Row 2 | did | timestamp | data | unused value | unused value |
....
---------------------------------------------------------------

table = [{ values from above row1 }, { values from above row2},...]

connection = psycopg2.connect(<connection string>)
cursor = connection.cursor()

cursor.execute("""SELECT * FROM table_3 WHERE did='356002062376054'
                  ORDER BY timestamp""")
table = cursor  # execute() returns None; the cursor itself is iterated

extracted_list = extract(table)
calculated_list = calculate(extracted_list)
... write to db ...

def extract(table):
    """extract all but unused values"""
    new_list = []
    for row in table:
        did = row[0]
        timestamp = row[1]
        data = row[2]

        a_dict = {'did': did, 'timestamp': timestamp, 'data': data}
        new_list.append(a_dict)

    return new_list


def calculate(a_list):
    """perform calculations on values"""
    dict_list = []
    temp_today = 0
    count = 0
    for row in a_list:
        date = datetime.fromtimestamp(row['timestamp'] / 1000) # from ms to sec
        today = int(date.strftime('%d'))
        if temp_today is not today:
            new_dict = {'date': date.strftime('%d/%m-%Y'),
                        'reading_count': count,
                        'time': date.strftime('%H:%M:%S')}
            dict_list.append(new_dict)

    return dict_list

create_standard_list() and format_table_dictionary() could both be built as generators (yield each item instead of returning a complete list). That stops the whole list from being held in memory, and so should solve your problem, e.g.:

def create_standard_list(sensor_id, data):
    for row in data:
        row_timestamp = row[2]
        row_data = row[3]

        temp_object = {"sensor_id": sensor_id, "timestamp": row_timestamp,
                       "data": row_data}
        yield temp_object
       #^ yield each item instead of appending to a list

More info on generators and the yield keyword.
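To make the change concrete, here is a minimal sketch (the fake rows are made up to stand in for the psycopg2 cursor) of the generator version feeding the next stage lazily — rows flow through one at a time and no full list ever exists:

```python
def create_standard_list(sensor_id, data):
    """Generator version: yield each row dict instead of returning a list."""
    for row in data:
        yield {"sensor_id": sensor_id, "timestamp": row[2], "data": row[3]}

# Fake rows with the same shape the real cursor yields: (did, ?, timestamp, data)
fake_rows = [("356002062376054", None, 1485882000000, "a"),
             ("356002062376054", None, 1485968400000, "b")]

items = create_standard_list(9, fake_rows)
first = next(items)  # only one row has been materialised at this point
# first == {"sensor_id": 9, "timestamp": 1485882000000, "data": "a"}
```

format_table_dictionary can consume `items` in its existing `for elem in list_dict:` loop unchanged, since iterating a generator looks identical to iterating a list.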

您在這里嘗試做的是IIUC,它是在Python代碼中模擬SQL GROUP BY表達式。 這永遠不會像直接在數據庫中那樣快速和有效。 你的示例代碼似乎有一些問題,但我把它理解為:要計算每天的行數,對於發生對於給定的每一天did 此外,您對每組價值的最小(或最大或中位,無關緊要)時間感興趣,即每天。

Let's set up a small example table (tested on Oracle):

create table t1 (id number primary key, created timestamp, did number, other_data varchar2(200));  

insert into t1 values (1, to_timestamp('2017-01-31 17:00:00', 'YYYY-MM-DD HH24:MI:SS'), 9001, 'some text');
insert into t1 values (2, to_timestamp('2017-01-31 19:53:00', 'YYYY-MM-DD HH24:MI:SS'), 9001, 'some more text');
insert into t1 values (3, to_timestamp('2017-02-01 08:10:00', 'YYYY-MM-DD HH24:MI:SS'), 9001, 'another day');
insert into t1 values (4, to_timestamp('2017-02-01 15:55:00', 'YYYY-MM-DD HH24:MI:SS'), 9001, 'another day, rainy afternoon');
insert into t1 values (5, to_timestamp('2017-02-01 15:59:00', 'YYYY-MM-DD HH24:MI:SS'), 9002, 'different did');
insert into t1 values (6, to_timestamp('2017-02-03 01:01:00', 'YYYY-MM-DD HH24:MI:SS'), 9001, 'night shift');

We have a few rows, spread over several days. There's also a row with did 9002, which we'll ignore. Now let's get the rows you want to write into your second table, as a simple SELECT .. GROUP BY:

select 
    count(*) cnt, 
    to_char(created, 'YYYY-MM-DD') day, 
    min(to_char(created, 'HH24:MI:SS')) min_time 
from t1 
where did = 9001
group by to_char(created, 'YYYY-MM-DD')
;

We group all rows by the date part of the created column (a timestamp). We select the number of rows per group, the date itself, and - just for fun - the minimum time portion of each group. Result:

cnt day         min_time
2   2017-02-01  08:10:00
1   2017-02-03  01:01:00
2   2017-01-31  17:00:00
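As a sanity check (not part of the original answer), the same grouping can be reproduced in plain Python with itertools.groupby, run over timestamp strings copied from the sample INSERTs above — which also shows why the database should do this work: the Python version needs all rows of a group in memory, the SQL version doesn't:

```python
from itertools import groupby

# (created, did) pairs copied from the sample INSERTs above
rows = [
    ("2017-01-31 17:00:00", 9001),
    ("2017-01-31 19:53:00", 9001),
    ("2017-02-01 08:10:00", 9001),
    ("2017-02-01 15:55:00", 9001),
    ("2017-02-01 15:59:00", 9002),  # different did, filtered out below
    ("2017-02-03 01:01:00", 9001),
]

# WHERE did = 9001
filtered = [created for created, did in rows if did == 9001]

# GROUP BY the date part; the input is already sorted by timestamp,
# which groupby requires for groups to be contiguous
summary = []
for day, grp in groupby(filtered, key=lambda ts: ts[:10]):
    times = [ts[11:] for ts in grp]
    summary.append({"cnt": len(times), "day": day, "min_time": min(times)})
# summary matches the SQL output above (modulo row order)
```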

So now you have your second table as a SELECT. Creating a table from it is trivial:

create table t2 as
select
    ... as above
;
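(The example above is Oracle; the question's database is PostgreSQL, and its timestamp column looks like epoch milliseconds, since the Python code divides by 1000. Under those assumptions — and with a made-up summary table name — the whole pipeline might collapse to one statement that never touches Python memory:)

```sql
CREATE TABLE table_3_summary AS
SELECT
    count(*)                                                     AS reading_count,
    to_char(to_timestamp("timestamp" / 1000), 'YYYY-MM-DD')      AS day,
    min(to_char(to_timestamp("timestamp" / 1000), 'HH24:MI:SS')) AS min_time
FROM table_3
WHERE did = '356002062376054'
GROUP BY to_char(to_timestamp("timestamp" / 1000), 'YYYY-MM-DD');
```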

HTH!

