Are we losing data when we use .next() or .take() on a tf.keras.preprocessing.image_dataset_from_directory object?
I create a data generator like this:
# Create test_dataset
test_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    directory=test_dir,
    labels='inferred',
    label_mode='int',
    class_names=None,
    seed=42,
)
# Explore the first batch
for images, labels in test_dataset.take(1):
    print(labels)
it returns:
tf.Tensor([5 3 8 3 8 5 7 6 3 8 4 2 4 5 5 4 0 1 0 5 5 2 6 0 7 9 9 0 4 9 6 4], shape=(32,), dtype=int32)
if I re-run the last part as below:
for images, labels in test_dataset.take(1):
    print(labels)
it returns something different from the first time:
tf.Tensor([0 6 2 5 5 7 5 2 7 4 0 5 0 4 6 5 8 7 7 3 5 1 1 9 5 2 6 6 6 6 2 0], shape=(32,), dtype=int32)
if I recreate test_dataset and explore it as below:
# Create test_dataset
test_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    directory=test_dir,
    labels='inferred',
    label_mode='int',
    class_names=None,
    seed=42,
)
# Explore the first batch
for images, labels in test_dataset.take(1):
    print(labels)
it returns the same as the first time:
tf.Tensor([5 3 8 3 8 5 7 6 3 8 4 2 4 5 5 4 0 1 0 5 5 2 6 0 7 9 9 0 4 9 6 4], shape=(32,), dtype=int32)
Well, I conclude that when I use the take method, the batch is popped out, lost, and no longer accessible for modeling, validation, etc.
My question is: when I call test_dataset.take(1), is the first batch lost? Is there any way to explore batches of a tf.keras.preprocessing.image_dataset_from_directory object without losing them?

This is not a case of losing the batch. The function
tf.keras.preprocessing.image_dataset_from_directory has an argument shuffle whose default value is True. That means the dataset is shuffled at each iteration.
If we dive into the source code:
if shuffle:
    # Shuffle locally at each iteration
    dataset = dataset.shuffle(buffer_size=batch_size * 8, seed=seed)
dataset = dataset.batch(batch_size)
Under the hood, as you can see, it creates a tf.data.Dataset object, which has a shuffle method. The shuffle method has an argument reshuffle_each_iteration that is True by default. With the second take call you are iterating over the dataset again, which causes it to be shuffled again.
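The interaction of the seed with reshuffle_each_iteration explains all three observations in the question. Here is a small pure-Python toy model of those semantics (illustrative only, not the TensorFlow API): the RNG is seeded once when the pipeline is built, so each pass draws fresh shuffle state, while rebuilding the pipeline re-seeds the RNG.

```python
import random

class ShuffledDataset:
    """Toy stand-in for tf.data shuffle semantics (not the TF API)."""

    def __init__(self, items, seed, reshuffle_each_iteration=True):
        self.items = list(items)
        self.seed = seed
        self.reshuffle = reshuffle_each_iteration
        self._rng = random.Random(seed)  # seeded once, at "pipeline build" time

    def __iter__(self):
        # reshuffle_each_iteration=True: keep drawing from the same RNG
        # stream, so every pass yields a new order.
        # reshuffle_each_iteration=False: reset the RNG each pass, so
        # every pass yields the same order.
        rng = self._rng if self.reshuffle else random.Random(self.seed)
        order = self.items[:]
        rng.shuffle(order)
        return iter(order)

ds = ShuffledDataset(range(10), seed=42)
first_pass = list(ds)
second_pass = list(ds)  # differs: the RNG stream has advanced
recreated = list(ShuffledDataset(range(10), seed=42))  # matches first_pass
```

Recreating the dataset re-seeds the RNG, which is exactly why rebuilding test_dataset with seed=42 reproduced the first batch in the question.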
If you set shuffle=False for the dataset, then the data will be sorted in alphanumeric order and its order won't change between iterations.
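Applied to the pipeline from the question, that is a one-argument change (a sketch of the questioner's call; test_dir is their path):

```python
import tensorflow as tf

# Same call as in the question, but with shuffle=False so that
# .take(1) returns the same (alphanumerically ordered) batch every time.
test_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    directory=test_dir,  # path defined by the questioner
    labels='inferred',
    label_mode='int',
    class_names=None,
    shuffle=False,
)
```

With shuffle=False the seed argument no longer affects the order, so it can be dropped.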