I am training VGG on some of my own images. I have the following code:
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
import numpy as np

img_width, img_height = 512, 512
top_model_weights_path = 'UIP-versus-inconsistent.h5'
train_dir = 'MasterHRCT/Limited-Cuts-UIP-Inconsistent/train'
validation_dir = 'MasterHRCT/Limited-Cuts-UIP-Inconsistent/validation'
nb_train_samples = 1500
nb_validation_samples = 500
epochs = 50
batch_size = 16
def save_bottleneck_features():
    datagen = ImageDataGenerator(rescale=1. / 255)
    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        train_dir,
        target_size=(img_width, img_height),
        shuffle=False,
        class_mode=None,
        batch_size=batch_size
    )
    bottleneck_features_train = model.predict_generator(generator, nb_train_samples // batch_size)
    np.save(file="UIP-versus-inconsistent_train.npy", arr=bottleneck_features_train)

    generator = datagen.flow_from_directory(
        validation_dir,
        target_size=(img_width, img_height),
        shuffle=False,
        class_mode=None,
        batch_size=batch_size
    )
    bottleneck_features_validation = model.predict_generator(generator, nb_validation_samples // batch_size)
    np.save(file="UIP-versus-inconsistent_validate.npy", arr=bottleneck_features_validation)
Following execution of this I get, as expected from my directory structure:
Found 1500 images belonging to 2 classes.
Found 500 images belonging to 2 classes.
Then I run
train_data = np.load(file="UIP-versus-inconsistent_train.npy")
train_labels = np.array([0] * 750 + [1] * 750)
validation_data = np.load(file="UIP-versus-inconsistent_validate.npy")
validation_labels = np.array([0] * 250 + [1] * 250)
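(A sketch, not part of the original post: rather than hard-coding the labels, they can be derived from the class ordering that `flow_from_directory` uses with `shuffle=False`, and truncated to the number of samples `predict_generator` actually produced. This assumes 750 training images per class, as the hard-coded arrays above imply.)

```python
import numpy as np

nb_train_samples, batch_size = 1500, 16

# With shuffle=False, flow_from_directory yields every image of the first
# (alphabetically sorted) class directory, then the second, so the
# generator's class array is equivalent to:
classes = np.array([0] * 750 + [1] * 750)

# predict_generator only runs complete batches, so truncate the labels
# to the number of samples it actually returned:
n_predicted = (nb_train_samples // batch_size) * batch_size
train_labels = classes[:n_predicted]
print(train_labels.shape)  # (1488,)
```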
And then inspect the data
print("Train data shape", train_data.shape)
print("Train_labels shape", train_labels.shape)
print("Validation_data shape", validation_labels.shape)  # note: prints the labels' shape, not validation_data.shape
print("Validation_labels", validation_labels.shape)
And I get
Train data shape (1488, 16, 16, 512)
Train_labels shape (1488,)
Validation_data shape (496,)
Validation_labels (496,)
And this is variable: instead of having 1500 training examples and 500 validation examples, it's like I "lose" some. Sometimes when I run save_bottleneck_features() the numbers come back right; other times they don't. It seems to happen more often when the process takes a long time. Is there a reproducible explanation for this? A corrupted image, perhaps?
It's simple:

1488 = (1500 // batch_size) * batch_size
496 = (500 // batch_size) * batch_size

The "lost" samples come from floor (integer) division: predict_generator runs exactly `steps` complete batches of batch_size images, and steps = nb_samples // batch_size discards the remainder, so 12 training images and 4 validation images are never predicted. Choosing a batch_size that divides both sample counts evenly avoids the mismatch.
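The arithmetic can be checked directly (a quick sketch using the counts from the question; the batch size of 20 is just one illustrative choice that divides both sample counts):

```python
nb_train_samples, nb_validation_samples, batch_size = 1500, 500, 16

# predict_generator runs exactly `steps` complete batches, so the number
# of predictions is steps * batch_size; the remainder is silently dropped.
train_seen = (nb_train_samples // batch_size) * batch_size
val_seen = (nb_validation_samples // batch_size) * batch_size
print(train_seen, val_seen)  # 1488 496

# One fix: pick a batch_size that divides both sample counts, e.g. 20,
# so every image is covered by a complete batch.
assert nb_train_samples % 20 == 0 and nb_validation_samples % 20 == 0
```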