Use multiple directories for flow_from_directory in Keras
My scenario is that we have multiple peers, each with their own data, stored in different directories that share the same subdirectory structure. I want to train a model with all of this data, but if I copy everything into a single folder I can no longer keep track of which data came from whom (new data is also created occasionally, so re-copying the files every time is not practical). My data is currently stored like this:
-user01
-user02
-user03
...
(they all share the same subdirectory structure)
I have searched for a solution, but I only found the multi-input cases here and here, which concatenate several inputs into one single parallel input; that is not my case.
I know flow_from_directory()
can only be fed by one directory at a time, so how can I build a custom generator that can be fed by multiple directories at once?
If my question is low quality, please give suggestions on how to improve it. I have also searched Keras's GitHub, but did not find anything I could adapt.
Thank you.
The ImageDataGenerator flow_from_directory
method has a follow_links
argument.
Perhaps you can create one directory populated with symbolic links to the files in all the other directories.
This Stack Overflow question discusses using symlinks with Keras's ImageDataGenerator: Understanding 'follow_links' argument in Keras's ImageDataGenerator?
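A minimal sketch of that symlink idea, using only the standard library (all paths and the "cats" class name are illustrative; here a throwaway demo tree stands in for your user01/user02 directories). Prefixing each link with the user name keeps the provenance you wanted:

```python
import os
import tempfile

# Build one "merged" directory of symbolic links so a single
# flow_from_directory(merged, ..., follow_links=True) call sees every
# user's files, while the user-name prefix preserves provenance.
root = tempfile.mkdtemp()
data_root = os.path.join(root, "data_root")   # holds user01/, user02/, ...
merged = os.path.join(root, "merged")

# Demo tree: two users sharing the same class subdirectory structure.
for user in ("user01", "user02"):
    class_dir = os.path.join(data_root, user, "cats")
    os.makedirs(class_dir)
    open(os.path.join(class_dir, "img0.jpg"), "w").close()

# Link every user's files into one merged class directory.
for user in sorted(os.listdir(data_root)):
    user_dir = os.path.join(data_root, user)
    for class_name in os.listdir(user_dir):
        src_dir = os.path.join(user_dir, class_name)
        link_dir = os.path.join(merged, class_name)
        os.makedirs(link_dir, exist_ok=True)
        for fname in os.listdir(src_dir):
            # Prefix with the user name so we still know whose data it is.
            os.symlink(os.path.join(src_dir, fname),
                       os.path.join(link_dir, f"{user}_{fname}"))

print(sorted(os.listdir(os.path.join(merged, "cats"))))
# ['user01_img0.jpg', 'user02_img0.jpg']
```

After this, a single ImageDataGenerator().flow_from_directory(merged, follow_links=True, ...) call should pick up every user's images.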
After so many days, I hope you have found a solution to your problem, but I will share another idea here so that newcomers who face the same problem in the future, like me, can get help.
A few days ago I ran into this very problem. As user3731622 said, follow_links
would be one solution to your problem. I also think the idea of merging two data generators will work. However, in that case, the batch size of each sub-generator has to be set in proportion to the amount of data in its directory.
Batch size of a sub-generator:

b = (B * n) / sum(n)

Where,
b = Batch Size Of Any Sub-generator
B = Desired Batch Size Of The Merged Generator
n = Number Of Images In That Directory Of Sub-generator
sum(n) = Total Number Of Images In All Directories
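A quick worked example of that formula (the image counts are made up): with a desired merged batch size of 32 and two directories holding 600 and 200 images, the proportional sub-batch sizes come out as 24 and 8:

```python
# Worked example of b = B * n / sum(n); numbers are illustrative.
B = 32                # desired batch size of the merged generator
n = [600, 200]        # number of images in each directory
N = sum(n)            # total images across all directories

sub = [int(B * k / N) for k in n]   # proportional sub-batch sizes
sub[-1] = B - sum(sub[:-1])         # hand any rounding remainder to the last

print(sub)  # [24, 8] -> the sub-batches sum back to the merged batch size
```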
See the code below; it may help:
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import Sequence
import matplotlib.pyplot as plt
import numpy as np
import os
class MergedGenerators(Sequence):
    def __init__(self, batch_size, generators=[], sub_batch_size=[]):
        self.generators = generators
        self.sub_batch_size = sub_batch_size
        self.batch_size = batch_size

    def __len__(self):
        return int(
            sum([(len(self.generators[idx]) * self.sub_batch_size[idx])
                 for idx in range(len(self.sub_batch_size))]) /
            self.batch_size)

    def __getitem__(self, index):
        """Getting items from the generators and packing them"""
        X_batch = []
        Y_batch = []
        for generator in self.generators:
            if generator.class_mode is None:
                x1 = generator[index % len(generator)]
                X_batch = [*X_batch, *x1]
            else:
                x1, y1 = generator[index % len(generator)]
                X_batch = [*X_batch, *x1]
                Y_batch = [*Y_batch, *y1]

        if self.generators[0].class_mode is None:
            return np.array(X_batch)
        return np.array(X_batch), np.array(Y_batch)
def build_datagenerator(dir1=None, dir2=None, batch_size=32):
    n_images_in_dir1 = sum([len(files) for r, d, files in os.walk(dir1)])
    n_images_in_dir2 = sum([len(files) for r, d, files in os.walk(dir2)])

    # The two generators need different batch sizes, because the two
    # directories do not hold the same number of images; this keeps each
    # directory's share of the merged batch proportional.
    generator1_batch_size = int((n_images_in_dir1 * batch_size) /
                                (n_images_in_dir1 + n_images_in_dir2))
    generator2_batch_size = batch_size - generator1_batch_size

    generator1 = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        rotation_range=5.,
        horizontal_flip=True,
    )

    # generator2 has different image augmentation attributes than generator1
    generator2 = ImageDataGenerator(
        rescale=1. / 255,
        zoom_range=0.2,
        horizontal_flip=False,
    )

    generator1 = generator1.flow_from_directory(
        dir1,
        target_size=(128, 128),
        color_mode='rgb',
        class_mode=None,
        batch_size=generator1_batch_size,
        shuffle=True,
        seed=42,
        interpolation="bicubic",
    )
    generator2 = generator2.flow_from_directory(
        dir2,
        target_size=(128, 128),
        color_mode='rgb',
        class_mode=None,
        batch_size=generator2_batch_size,
        shuffle=True,
        seed=42,
        interpolation="bicubic",
    )

    return MergedGenerators(
        batch_size,
        generators=[generator1, generator2],
        sub_batch_size=[generator1_batch_size, generator2_batch_size])
def test_datagen(batch_size=32):
    datagen = build_datagenerator(dir1="./asdf",
                                  dir2="./asdf2",
                                  batch_size=batch_size)
    print("Datagenerator length (Batch count):", len(datagen))

    for batch_count, image_batch in enumerate(datagen):
        if batch_count == 1:
            break

        print("Images: ", image_batch.shape)

        plt.figure(figsize=(10, 10))
        for i in range(image_batch.shape[0]):
            plt.subplot(1, batch_size, i + 1)
            plt.imshow(image_batch[i], interpolation='nearest')
            plt.axis('off')
            plt.tight_layout()

test_datagen(4)
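As a sanity check of the merging arithmetic, the __len__ and __getitem__ logic above can be exercised with plain NumPy stand-ins in place of real ImageDataGenerator instances (the shapes and sub-batch counts below are illustrative, covering the class_mode=None case):

```python
import numpy as np

# Stand-ins for two flow_from_directory generators: lists of sub-batches.
sub_batch_size = [3, 1]                 # proportional sub-batch sizes
gens = [
    [np.zeros((3, 2))] * 5,    # "generator 1": 5 sub-batches of 3 samples
    [np.ones((1, 2))] * 15,    # "generator 2": 15 sub-batches of 1 sample
]
batch_size = sum(sub_batch_size)        # merged batch size = 4

# Merged length, mirroring MergedGenerators.__len__:
length = int(sum(len(g) * b for g, b in zip(gens, sub_batch_size))
             / batch_size)

# One merged batch, mirroring __getitem__ with class_mode=None:
index = 0
X_batch = []
for g in gens:
    X_batch = [*X_batch, *g[index % len(g)]]
batch = np.array(X_batch)

print(length, batch.shape)  # 7 (4, 2)
```

Each merged batch stacks 3 samples from the first stand-in and 1 from the second, so the merged batch size stays fixed at 4 while both sources contribute proportionally.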