
How to store very large 3 dimensional matrix in HDF5 format?

I have a very large matrix: a video file stored as an array of frames, each around 350x250 resolution. A single video file holds around 8,000-10,000 such frames and is around 1-1.5 GB in size. I have determined that HDF5 is a good file format for my use case, as I have to perform a lot of mathematical operations on the file (across the entire depth column). My problem is that I am unable to store this 3D matrix in HDF5. Can someone suggest how to store these frames incrementally (adding frame by frame to the HDF5 file) as a 3D matrix? I am using the h5py Python package.

As an example, let's assume your video has 10 frames with a resolution of 200x200 pixels. You would then create a dataset with dimensions 10 x 200 x 200 x 3 and data type uint8 (each RGB component uses 8 unsigned bits). Here's how this translates to the h5py API. Check the docs for details.

import h5py
import numpy as np

# create an hdf5 file
with h5py.File("/tmp/videos.h5", "w") as f:
    # create a dataset for your movie
    dst = f.create_dataset("myvideo", shape=(10, 200, 200, 3),
                           dtype=np.uint8)
    # fill the 10 frames with a random image
    for frame in range(10):
        dst[frame] = np.random.randint(0, 256, size=(200, 200, 3),
                                       dtype=np.uint8)
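Since the question asks for adding frames one at a time (e.g. when the total frame count isn't known up front), here is a minimal sketch of the same idea with a resizable dataset: `maxshape=(None, ...)` makes the frame axis unlimited, and `Dataset.resize` grows it before each append. The file path and dataset name are illustrative.

```python
import h5py
import numpy as np

with h5py.File("/tmp/videos_incremental.h5", "w") as f:
    # Start with zero frames; the first axis can grow without bound.
    dst = f.create_dataset(
        "myvideo",
        shape=(0, 200, 200, 3),
        maxshape=(None, 200, 200, 3),  # None = unlimited frame axis
        chunks=(1, 200, 200, 3),       # one chunk per frame
        dtype=np.uint8,
    )
    for _ in range(10):
        # Stand-in for a real decoded frame.
        frame = np.random.randint(0, 256, size=(200, 200, 3), dtype=np.uint8)
        dst.resize(dst.shape[0] + 1, axis=0)  # grow by one frame
        dst[-1] = frame                       # write the new frame
```

Note that per-frame chunking makes appending cheap, but if your main access pattern is reading down the depth column (all frames at one pixel position), a chunk shape that spans many frames and a small spatial tile may suit those reads better.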

