
Data Augmentation for Inception v3

I'm trying to use Inception v3 to classify images, but my dataset is very small (I can't get any more images than I already have), so I'd like to augment it with transformations such as rotations or flips. I'm new to TF and can't figure out how to do this. I've read the documentation for ImageDataGenerator, which should augment my data, but when training I still get an error saying that I don't have enough data. I could also use masks, but I don't know how to implement that in TF. Can someone enlighten me? Thanks a lot for any input.

Here's my code:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import RMSprop

# Augment the training images on the fly with random rotations, shifts,
# zooms and flips (each epoch sees freshly transformed copies).
train_datagen = ImageDataGenerator(rescale = 1./255.,
                                   rotation_range = 180,
                                   width_shift_range = 0.2,
                                   height_shift_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True,
                                   vertical_flip = True)
 
test_datagen = ImageDataGenerator(rescale = 1./255.,
                                   rotation_range = 180,
                                   width_shift_range = 0.2,
                                   height_shift_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True,
                                   vertical_flip = True)

 
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    batch_size = 100,
                                                    class_mode = 'binary',
                                                    target_size = (224, 224))


validation_generator =  test_datagen.flow_from_directory(validation_dir,
                                                          batch_size  = 100,
                                                          class_mode  = 'binary',
                                                          target_size = (224, 224))
base_model = InceptionV3(input_shape = (224, 224, 3),
                                include_top = False,
                                weights = 'imagenet')
for layer in base_model.layers:
  layer.trainable = False

# Classification head on top of the frozen InceptionV3 base
x = layers.Flatten()(base_model.output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)
 
model = Model(base_model.input, x)

model.compile(optimizer = RMSprop(learning_rate = 0.0001),
              loss = 'binary_crossentropy',
              metrics = ['acc'])
callbacks = myCallback()
 
history = model.fit_generator(
            train_generator,
            validation_data = validation_generator,
            steps_per_epoch = 100,
            epochs = 10,
            validation_steps = 10,
            verbose = 2,
            callbacks=[callbacks])

Error:

WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 1000 batches). You may need to use the repeat() function when building your dataset.

With batch_size = 100 and steps_per_epoch = 100, the model asks your generator for 10,000 images per epoch, which is more than your small dataset can supply, so the generator runs out of data. As you are using generators, you should calculate the number of steps per epoch as follows:

steps_per_epoch=(data_samples/batch_size)
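For example, a minimal sketch that derives the step counts from the train_generator and validation_generator defined above, using the .samples and .batch_size attributes that flow_from_directory generators expose:

import math

# Derive the step counts from the generators instead of hard-coding them,
# so one epoch covers the whole dataset exactly once.
steps_per_epoch  = math.ceil(train_generator.samples / train_generator.batch_size)
validation_steps = math.ceil(validation_generator.samples / validation_generator.batch_size)

history = model.fit_generator(train_generator,
                              validation_data = validation_generator,
                              steps_per_epoch = steps_per_epoch,
                              epochs = 10,
                              validation_steps = validation_steps,
                              verbose = 2,
                              callbacks = [callbacks])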

OR

You could let the model figure out how many steps there are in an epoch. Did you try running it without the steps_per_epoch parameter?
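A sketch of that alternative, reusing the same generators and model as above (when the step arguments are omitted, Keras infers them from the length of the generator, i.e. ceil(samples / batch_size)):

history = model.fit_generator(train_generator,
                              validation_data = validation_generator,
                              epochs = 10,
                              verbose = 2,
                              callbacks = [callbacks])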

Let us know if the issue still persists. Thanks!
