
How to improve accuracy and validation accuracy in deep learning

I'm training a CNN on my own data. I tried ResNet50, ResNet101, and my own model on the same data, and the accuracy was 63% while the validation accuracy was only 0.08. I know the problem is with my data, so I want to try shuffling it before splitting, but my data is in 26 different classes. How can I shuffle my data before splitting it into training and validation sets? My dataset has more than 36K images.
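For the shuffle itself, a minimal sketch (assuming data and labels are parallel NumPy arrays, as in the snippet below; sklearn.utils.shuffle reorders both in unison so each image keeps its label):

from sklearn.utils import shuffle

# reorder images and labels together so each image keeps its label
data, labels = shuffle(data, labels, random_state=42)

Note that the train_test_split call below already shuffles by default (shuffle=True), and stratify=labels keeps the 26-class distribution identical across both splits.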

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import SGD

(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.25, stratify=labels, random_state=42)

# initialize the training data augmentation object
trainAug = ImageDataGenerator(
    rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# initialize the validation/testing data augmentation object (which
# we'll be adding mean subtraction to)
valAug = ImageDataGenerator()

# define the ImageNet mean subtraction (in RGB order) and set the
# mean subtraction value for each of the data augmentation objects
mean = np.array([123.68, 116.779, 103.939], dtype='float32')
trainAug.mean = mean
valAug.mean = mean

model = Sequential()
# The first two layers with 32 filters of window size 5x5
model.add(Conv2D(32, (5, 5), padding='same', activation='relu', input_shape=(224, 224, 3)))
model.add(Conv2D(32, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(26, activation='softmax'))  # one output unit per class (26 classes)


print("[INFO] compiling model...")
opt = SGD(lr=1e-4, momentum=0.9, decay=1e-4 / args["epochs"])
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])
print("[INFO] training head...")
H = model.fit(
    x=trainAug.flow(trainX, trainY, batch_size=32),
    steps_per_epoch=len(trainX) // 32,
    validation_data=valAug.flow(testX, testY),
    validation_steps=len(testX) // 32,
    epochs=args["epochs"])

You can use the validation_split keyword of the ImageDataGenerator to split your data into training and validation sets automatically.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2) # set validation split

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    subset='training') # set as training data

validation_generator = train_datagen.flow_from_directory(
    train_data_dir, # same directory as training data
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    subset='validation') # set as validation data

model.fit_generator(
    train_generator,
    steps_per_epoch = train_generator.samples // batch_size,
    validation_data = validation_generator, 
    validation_steps = validation_generator.samples // batch_size,
    epochs = nb_epochs)

As the ImageDataGenerator shuffles your input data automatically, using the ImageDataGenerator means your data is both shuffled and split.

In your case you'll need flow instead of flow_from_directory, since your images and labels are already in memory as arrays.
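A minimal sketch of that variant (reusing data and labels from the question, the compiled model, and nb_epochs from the snippet above; the datagen name and the augmentation settings are just placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# one generator handles both the augmentation and the 80/20 split
datagen = ImageDataGenerator(
    rotation_range=30,
    zoom_range=0.15,
    horizontal_flip=True,
    validation_split=0.2)

# flow() works on in-memory arrays and honors the same subset argument
train_generator = datagen.flow(data, labels, batch_size=32,
    subset='training')
validation_generator = datagen.flow(data, labels, batch_size=32,
    subset='validation')

model.fit(
    train_generator,
    steps_per_epoch=train_generator.n // 32,
    validation_data=validation_generator,
    validation_steps=validation_generator.n // 32,
    epochs=nb_epochs)

One caveat: the validation_split slice is taken as a contiguous chunk of the arrays (the shuffling happens when batches are drawn, after the split), so if the 36K images are stored grouped by class, it is still worth shuffling data and labels once before creating the generators.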
