Randomly interleaving one tf.data.Dataset with another
I have two datasets:
main_ds = tf.data.Dataset.from_tensor_slices(list(range(1000, 1100)))
background_ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])
I want batches that randomly interleave data from main_ds and background_ds. For instance, a batch of size 10 should look like:
[3, 1017, 1039, 3, 2, 1024, 4, 1, 1053, 4]
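To make the goal concrete, here is a plain-Python sketch of the sampling I am after (random_interleave is only an illustration, not tf.data code): each slot in the batch comes from the main stream with probability 0.5, otherwise from the endlessly repeated background values.

```python
import itertools
import random

def random_interleave(main, background, p_main=0.5, n=10):
    """Build one batch of n items: with probability p_main take the next
    main item, otherwise take the next background item (cycled forever)."""
    main_it = iter(main)
    bg_it = itertools.cycle(background)
    return [next(main_it) if random.random() < p_main else next(bg_it)
            for _ in range(n)]

print(random_interleave(range(1000, 1100), [1, 2, 3, 4]))
```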
I tried the following:
def interlace_background(image, background):
    return tf.cond(tf.random_uniform([]) < .5, lambda: image, lambda: background)

background_ds = background_ds.shuffle(10).repeat(-1)
background_it = background_ds.make_initializable_iterator()
background_next = background_it.get_next()

main_ds = main_ds.shuffle(10)\
    .repeat(-1)\
    .map(lambda x: interlace_background(x, background_next))\
    .batch(10)
main_it = main_ds.make_initializable_iterator()
main_next = main_it.get_next()
but I get a fixed background across all batches:
batch 0: [ 3 1006 3 1001 3 1005 1015 1000 3 3]
batch 1: [1007 3 1012 1018 1013 3 1008 1019 3 3]
batch 2: [1016 3 1025 3 3 3 1021 3 3 1035]
batch 3: [1038 3 3 1023 1020 3 3 1046 1034 1047]
batch 4: [ 3 3 1039 3 3 3 3 3 1053 3]
Why is the background fixed (cf. above, where the background is always 3), and how can I solve this?
Fully reproducible code below:
import tensorflow as tf
import numpy as np

def interlace_background(image, background):
    return tf.cond(tf.random_uniform([]) < .5, lambda: image, lambda: background)

main_ds = tf.data.Dataset.from_tensor_slices(list(range(1000, 1100)))
background_ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

background_ds = background_ds.shuffle(10).repeat(-1)
background_it = background_ds.make_initializable_iterator()
background_next = background_it.get_next()

main_ds = main_ds.shuffle(10)\
    .repeat(-1)\
    .map(lambda x: interlace_background(x, background_next))\
    .batch(10)
main_it = main_ds.make_initializable_iterator()
main_next = main_it.get_next()

with tf.Session() as sess:
    sess.run(background_it.initializer)
    sess.run(main_it.initializer)
    for i in range(5):
        print('batch %i' % i, sess.run(main_next))
The background is fixed because background_next is a tensor created outside the Dataset.map() function: a tensor captured this way is evaluated once, and its value is then reused for every element, so every call to interlace_background sees the same background. Instead, you can feed the background values through the pipeline itself with Dataset.zip() and Dataset.map(). Here is the code:
import tensorflow as tf

def interlace_background(image, background):
    return tf.cond(tf.random_uniform([]) < .5, lambda: image, lambda: background)

main_ds = tf.data.Dataset.from_tensor_slices(list(range(1000, 1100))).shuffle(100)
background_ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).shuffle(4)

new_ds = tf.data.Dataset \
    .zip((main_ds, background_ds)) \
    .repeat(-1) \
    .map(lambda x, y: interlace_background(x, y)) \
    .batch(10)

iterator = new_ds.make_initializable_iterator()
next_item = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    for i in range(5):
        print('batch %i' % i, sess.run(next_item))
Output:
batch 0 [1065 2 4 1 2 4 1 1036 1072 1020]
batch 1 [ 4 3 2 1057 1 4 2 1077 3 1]
batch 2 [ 3 1044 1042 1049 1029 1 3 1069 1018 3]
batch 3 [ 2 4 1089 1094 2 1022 1041 1006 1 3]
batch 4 [1079 2 1 3 1023 1042 4 1018 1054 4]
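As an aside, newer TensorFlow versions ship a built-in for exactly this pattern: tf.data.experimental.sample_from_datasets (available as tf.data.Dataset.sample_from_datasets in TF ≥ 2.7) draws each element from one of the input datasets at random according to the given weights. A minimal sketch, assuming TF 2.x with eager execution:

```python
import tensorflow as tf

main_ds = tf.data.Dataset.from_tensor_slices(list(range(1000, 1100)))
# Repeat the small background dataset so the mixed stream never runs dry.
background_ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).repeat(-1)

# Each element is drawn from main_ds or background_ds with probability 0.5.
mixed = tf.data.experimental.sample_from_datasets(
    [main_ds, background_ds], weights=[0.5, 0.5]).batch(10)

for i, batch in enumerate(mixed.take(3)):
    print('batch %i' % i, batch.numpy())
```

This avoids the zip-based workaround entirely and lets you control the mixing ratio via weights.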