

How to easily convert a PyTorch dataloader to tf.Dataset?

How can we convert a PyTorch DataLoader to a tf.data.Dataset?

I came across this snippet:

def convert_pytorch_dataloader_to_tf_dataset(dataloader, batch_size, shuffle=True):
    dataset = tf.data.Dataset.from_generator(
        lambda: dataloader,
        output_types=(tf.float32, tf.float32),
        output_shapes=(tf.TensorShape([256, 512]), tf.TensorShape([2,]))
    )
    if shuffle:
        dataset = dataset.shuffle(buffer_size=len(dataloader.dataset))
    dataset = dataset.batch(batch_size)
    return dataset

But it doesn't work at all.

Is there a built-in option to export DataLoaders to tf.Datasets easily? I have a very complex dataloader, so a simple solution should ensure things stay bug-free. :)
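One likely reason the snippet above fails is that `from_generator` receives PyTorch tensors whose shapes don't match the hard-coded `output_shapes`. A minimal sketch of the usual workaround, converting each batch to NumPy before TensorFlow sees it (the `fake_dataloader` stand-in and helper names here are illustrative, not a built-in API):

```python
import numpy as np

# Hypothetical stand-in for a PyTorch DataLoader: any iterable of
# (features, labels) pairs behaves the same way for this sketch.
def fake_dataloader():
    for _ in range(3):
        yield (np.zeros((4, 8), dtype=np.float32),
               np.zeros((4,), dtype=np.float32))

def dataloader_to_numpy_generator(dataloader_fn):
    """Yield numpy (x, y) pairs that tf.data.Dataset.from_generator accepts."""
    def gen():
        for x, y in dataloader_fn():
            # A torch.Tensor would need x.numpy() here; numpy passes through.
            yield (np.asarray(x, dtype=np.float32),
                   np.asarray(y, dtype=np.float32))
    return gen

# With TensorFlow installed, the conversion would then look like:
# import tensorflow as tf
# ds = tf.data.Dataset.from_generator(
#     dataloader_to_numpy_generator(fake_dataloader),
#     output_signature=(
#         tf.TensorSpec(shape=(None, 8), dtype=tf.float32),
#         tf.TensorSpec(shape=(None,), dtype=tf.float32),
#     ),
# )

batches = list(dataloader_to_numpy_generator(fake_dataloader)())
```

Using `output_signature` with `None` in the batch dimension avoids the fixed-shape mismatch that the original snippet can run into.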

For data in h5py format, you can use the script below. name_x is the feature dataset's name in your h5py file and name_y is your label dataset's name. This method is memory-efficient, and you can feed the data batch by batch.

import h5py
import numpy as np
import tensorflow as tf


class Generator(object):

    def __init__(self, open_directory, batch_size, name_x, name_y):
        self.open_directory = open_directory
        data_f = h5py.File(open_directory, "r")

        self.x = data_f[name_x]
        self.y = data_f[name_y]

        # Leave the batch dimension as None so any batch size is accepted.
        if len(self.x.shape) == 4:
            self.shape_x = (None, self.x.shape[1], self.x.shape[2], self.x.shape[3])
        if len(self.x.shape) == 3:
            self.shape_x = (None, self.x.shape[1], self.x.shape[2])

        if len(self.y.shape) == 4:
            self.shape_y = (None, self.y.shape[1], self.y.shape[2], self.y.shape[3])
        if len(self.y.shape) == 3:
            self.shape_y = (None, self.y.shape[1], self.y.shape[2])

        self.num_samples = self.x.shape[0]
        self.batch_size = batch_size
        # Number of batches per epoch, rounding up for a partial final batch.
        self.epoch_size = self.num_samples // self.batch_size + 1 * (self.num_samples % self.batch_size != 0)

        self.pointer = 0
        self.sample_nums = np.arange(0, self.num_samples)
        np.random.shuffle(self.sample_nums)

    def data_generator(self):
        for batch_num in range(self.epoch_size):
            x = []
            y = []
            for elem_num in range(self.batch_size):
                sample_num = self.sample_nums[self.pointer]
                x += [self.x[sample_num]]
                y += [self.y[sample_num]]
                self.pointer += 1
                # Wrap around at the end of the data and reshuffle for the next epoch.
                if self.pointer == self.num_samples:
                    self.pointer = 0
                    np.random.shuffle(self.sample_nums)
                    break

            x = np.array(x, dtype=np.float32)
            y = np.array(y, dtype=np.float32)
            yield x, y

    def get_dataset(self):
        dataset = tf.data.Dataset.from_generator(
            self.data_generator,
            output_types=(tf.float32, tf.float32),
            output_shapes=(tf.TensorShape(self.shape_x),
                           tf.TensorShape(self.shape_y)))
        dataset = dataset.prefetch(1)
        return dataset
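The pointer-and-reshuffle batching above can be exercised without h5py or TensorFlow by swapping in in-memory arrays. A minimal sketch (the `SimpleGenerator` name is ours; it mirrors the batching logic of the class above):

```python
import numpy as np

class SimpleGenerator:
    """Same shuffled, pointer-based batching as the h5py Generator,
    but over in-memory arrays so the logic is easy to test."""
    def __init__(self, x, y, batch_size, seed=0):
        self.x, self.y = x, y
        self.num_samples = x.shape[0]
        self.batch_size = batch_size
        # ceil(num_samples / batch_size) batches per epoch
        self.epoch_size = self.num_samples // batch_size + (self.num_samples % batch_size != 0)
        self.pointer = 0
        self.rng = np.random.default_rng(seed)
        self.sample_nums = self.rng.permutation(self.num_samples)

    def data_generator(self):
        for _ in range(self.epoch_size):
            xb, yb = [], []
            for _ in range(self.batch_size):
                i = self.sample_nums[self.pointer]
                xb.append(self.x[i])
                yb.append(self.y[i])
                self.pointer += 1
                # Wrap around and reshuffle, leaving a short final batch.
                if self.pointer == self.num_samples:
                    self.pointer = 0
                    self.sample_nums = self.rng.permutation(self.num_samples)
                    break
            yield np.array(xb, np.float32), np.array(yb, np.float32)

x = np.arange(10, dtype=np.float32).reshape(10, 1)
y = np.arange(10, dtype=np.float32)
gen = SimpleGenerator(x, y, batch_size=4)
batches = list(gen.data_generator())
```

With 10 samples and a batch size of 4, one epoch yields three batches of sizes 4, 4, and 2, covering every sample exactly once.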
