
Training a binary CNN (Keras) - Slow training time

I am training a binary CNN in Keras to classify the polarity of emotions (expressions), e.g. Smiling/Not_smiling. This is my code. I am training on a multi-GPU machine, but I am surprised by how long training takes: each per-class binary model takes 5-6 hours. Is this normal/expected?

I had previously trained a multi-class model combining all the classes, and that took about 4 hours in total.

Note: each pos/neg class contains ~5000-10000 images.

Am I doing this right? Is this training duration expected?

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

class_names = ["smiling","frowning","surprised","sad"]
## set vars!
## model_name and IMG_SIZE are assumed to be defined earlier in the script;
## IMG_SIZE should be 224 to match the generators' target_size below
for cname in class_names:
    print("[+] training: ",model_name,cname)

    dp_path_train = './emotion_data/{0}/train/{1}'.format(model_name,cname)
    dp_path_val = './emotion_data/{0}/val/{1}'.format(model_name,cname)
    dir_checkpoint = './models'
    G = 2 # no. of gpus to use

    batch_size = 32 * G
    step_size = 1000//G
    print("[*] batch size & step size: ", batch_size,step_size)

    model = Sequential()
    model.add(Conv2D(32, kernel_size = (3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(96, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(32, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.3))
    model.add(Dense(1, activation = 'sigmoid'))
    model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

    train_datagen = ImageDataGenerator(rescale = 1./255,
        shear_range = 0.2,
        zoom_range = 0.2,
        horizontal_flip = True)
    test_datagen = ImageDataGenerator(rescale = 1./255)

    training_set = train_datagen.flow_from_directory(dp_path_train,
        target_size = (224, 224),
        batch_size = batch_size,
        class_mode = 'binary')

    test_set = test_datagen.flow_from_directory(dp_path_val,
        target_size = (224, 224),
        batch_size = batch_size,
        class_mode = 'binary')

    # 500 training batches per epoch (1000 // G) for 50 epochs, plus
    # 2000 validation batches after every epoch
    model.fit_generator(training_set,
        steps_per_epoch = step_size,
        epochs = 50,
        validation_data = test_set,
        validation_steps = 2000)

    print("[+] saving model: ",model_name,cname)
    model.save("./models2/{0}_{1}.hdf5".format(model_name,cname))

Removing all of the BatchNormalization layers should help speed things up; alternatively, you can use BatchNormalization more sparingly between the layers of your network architecture.
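For concreteness, here is a minimal sketch of that suggestion: the same stack from the question with the five BatchNormalization layers removed. IMG_SIZE is assumed to be 224 so the input matches the generators' target_size.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

IMG_SIZE = 224  # assumption: matches the generators' target_size in the question

# Same architecture as the question, minus BatchNormalization.
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(96, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])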
