
Writing Python objects to disk without loading into memory?

I am running a large number of computations whose results I would like to save to disk one item at a time, because the full data set is too large to hold in memory. I tried using shelve to save the results, but I get the error:

HASH: Out of overflow pages.  Increase page size

My code is below. What is the correct way to do this in Python? pickle loads the object into memory. shelve supports writing to disk, but it forces a dictionary structure and you are limited by the number of keys. The final data I am saving is just a list and does not need to be in dictionary form; I only need to be able to read the items back one at a time.

import shelve
def my_data():
  # this is a generator that yields data points
  for n in xrange(very_large_number):
    yield data_point

def save_result():
  db = shelve.open("result")
  n = 0
  for data in my_data():
    # result is a Python object (a tuple)
    result = compute(data)
    # now save result to disk
    db[str(n)] = result
    n += 1  # advance the key so each result is stored under its own entry
  db.close()
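
One possible alternative, sketched below with the my_data and compute names from the snippet above, is to append each pickled result to a single file and stream the items back one at a time with a generator (this assumes each result can be pickled on its own):

import pickle

def save_results(filename='result.pkl'):
  # append each computed result to one file as a separate pickle record
  with open(filename, 'ab') as f:
    for data in my_data():
      pickle.dump(compute(data), f, pickle.HIGHEST_PROTOCOL)

def read_results(filename='result.pkl'):
  # yield the stored results back one at a time, never holding them all in memory
  with open(filename, 'rb') as f:
    try:
      while True:
        yield pickle.load(f)
    except EOFError:
      pass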

This is easy if you use klepto, which gives you the ability to store objects transparently in files or a database. First, I show using the archive backend directly (that is, writing directly to disk).

>>> import klepto
>>> db = klepto.archives.dir_archive('db', serialized=True, cached=False)
>>> db['n'] = 69     
>>> db['add'] = lambda x,y: x+y
>>> db['x'] = 42
>>> db['y'] = 11
>>> db['sub'] = lambda x,y: y-x
>>> 

Then we restart and create a new connection to the on-disk "database".

Python 2.7.11 (default, Dec  5 2015, 23:50:48) 
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import klepto
>>> db = klepto.archives.dir_archive('db', serialized=True, cached=False)
>>> db     
dir_archive('db', {'y': 11, 'x': 42, 'add': <function <lambda> at 0x10e500d70>, 'sub': <function <lambda> at 0x10e500de8>, 'n': 69}, cached=False)
>>> 

Alternatively, you can create a new connection that uses an in-memory proxy. Below, I only load the entries I want into memory.

Python 2.7.11 (default, Dec  5 2015, 23:50:48) 
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import klepto
>>> db = klepto.archives.dir_archive('db', serialized=True, cached=True)
>>> db
dir_archive('db', {}, cached=True)
>>> db.load('x', 'y')  # read multiple
>>> db.load('add')     # read one at a time
>>> db
dir_archive('db', {'y': 11, 'x': 42, 'add': <function <lambda> at 0x1079e7d70>}, cached=True)
>>> db['result'] = db['add'](db['x'],db['y'])
>>> db['result']
53
>>>

...or you can also dump new entries to disk.

>>> db.dump('result')
>>>
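
Applied to the loop in the question, a minimal sketch (reusing the my_data and compute names from there) could write each result straight to disk through a dir_archive opened with cached=False:

>>> import klepto
>>> db = klepto.archives.dir_archive('result', serialized=True, cached=False)
>>> for n, data in enumerate(my_data()):
...     db[str(n)] = compute(data)
...
>>>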

The following program demonstrates how you might handle the process you describe in your question. It simulates the creation, writing, reading, and processing of data that your application might need to replicate. With the default settings, the code generates roughly 32 GB of data and writes it to disk. After some experimentation, enabling gzip compression gave good speed and reduced the file size to about 195 MB. You should adapt the example to your problem, and you may find through trial and error that one compression technique suits it better than the others.

#! /usr/bin/env python3
import os
import pickle


# Uncomment one of these imports to enable file compression:
# from bz2 import open
# from gzip import open
# from lzma import open


DATA_FILE = 'results.dat'
KB = 1 << 10
MB = 1 << 20
GB = 1 << 30
TB = 1 << 40


def main():
    """Demonstrate saving data to and loading data from a file."""
    save_data(develop_data())
    analyze_data(load_data())


def develop_data():
    """Create some sample data that can be saved for later processing."""
    return (os.urandom(1 * KB) * (1 * MB // KB) for _ in range(32 * GB // MB))


def save_data(data):
    """Take in all data and save it for retrieval later on."""
    with open(DATA_FILE, 'wb') as file:
        for obj in data:
            pickle.dump(obj, file, pickle.HIGHEST_PROTOCOL)


def load_data():
    """Load each item that was previously written to disk."""
    with open(DATA_FILE, 'rb') as file:
        try:
            while True:
                yield pickle.load(file)
        except EOFError:
            pass


def analyze_data(data):
    """Pretend to do something useful with each object that was loaded."""
    for obj in data:
        print(hash(obj))


if __name__ == '__main__':
    main()
