
I'm facing the below issue when I train the model using VGG16

I'm facing the following issue when trying to fit my model:

ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 256, 96, 3), found shape=(None, 1, 8, 3, 512)

The details of my model are given below:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout
from tensorflow.keras.models import Model

img_height = 96
img_width = 256

#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
#Create your own input format (here 256x96x3)
input = Input(shape=(img_width, img_height, 3))

#Use the generated model 
output_vgg16_conv = model_vgg16_conv(input)

#Add the fully-connected layers 
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(512, activation='relu', name='Dense1')(x)
x = Dropout(0.2, name = 'Dropout')(x)
x = Dense(45, activation='softmax', name='predictions')(x)

#Create your own model 
my_model = Model(inputs=input, outputs=x)

#In the summary, the layers of the VGG part are collapsed into a single layer, but their weights are still trained during fitting
my_model.summary()

my_model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)

my_model.fit(
    features,
    labels,
    batch_size = 5,
    epochs = 15,
    validation_split = 0.1,
    callbacks=[TensorBoard()]  #pass a TensorBoard instance, not the class
)

Any suggestions on how to adjust my model to fix the issue? Note that the features are X, the labels are y, there are 4193 images in total, and 4 classes.

My dataset generation code:

import os
from tqdm import tqdm
from tensorflow.keras.preprocessing import image

conv_base = VGG16(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3)
)

Image reshaping:

    for input_image in tqdm(os.listdir(dir)):
        try:

            img = image.load_img(os.path.join(dir, input_image), target_size=(img_width, img_height))
            img_tensor = image.img_to_array(img)
            img_tensor /= 255.

            pic = conv_base.predict(img_tensor.reshape(1, img_width, img_height, 3))
            data.append([pic, index])

        except Exception as e:
            pass

Do I need to make any adjustments to this?

You need to make sure the input to your model is correct. I'm using randomly generated data, tf.random.normal((64, 256, 96, 3)), where 64 is the number of samples, 256 is your img_width, 96 is your img_height, and 3 is the number of channels. Also note that if you have 4 classes, your output layer should have 4 nodes.

import tensorflow as tf

img_height = 96
img_width = 256

#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
#Create your own input format (here 256x96x3)
input = tf.keras.layers.Input(shape=(img_width, img_height, 3))

#Use the generated model 
output_vgg16_conv = model_vgg16_conv(input)

#Add the fully-connected layers 
x = tf.keras.layers.Flatten(name='flatten')(output_vgg16_conv)
x = tf.keras.layers.Dense(512, activation='relu', name='Dense1')(x)
x = tf.keras.layers.Dropout(0.2, name = 'Dropout')(x)
x = tf.keras.layers.Dense(4, activation='softmax', name='predictions')(x)

#Create your own model 
my_model = tf.keras.Model(inputs=input, outputs=x)

#In the summary, the layers of the VGG part are collapsed into a single layer, but their weights are still trained during fitting
my_model.summary()

my_model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)

my_model.fit(
    tf.random.normal((64, 256, 96, 3)),
    tf.random.uniform((64, 1), maxval=4),
    batch_size = 5,
    epochs = 15)
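
The random tensors above are only placeholders that demonstrate the expected shapes. The actual mismatch comes from the dataset generation code: conv_base.predict turns each image into a (1, 8, 3, 512) feature map, while my_model (which already contains the VGG16 base) expects the raw (256, 96, 3) images. Below is a minimal sketch of how the loop could store raw images instead, assuming dir, index and data come from the surrounding per-class loop as in the question:

import os
import numpy as np
from tqdm import tqdm
from tensorflow.keras.preprocessing import image

for input_image in tqdm(os.listdir(dir)):
    try:
        img = image.load_img(os.path.join(dir, input_image), target_size=(img_width, img_height))
        img_tensor = image.img_to_array(img) / 255.  #shape (256, 96, 3)
        data.append([img_tensor, index])             #keep the raw image, no conv_base.predict
    except Exception:
        pass

#Stack into arrays with the shapes my_model expects:
#features -> (num_images, 256, 96, 3), labels -> (num_images,)
features = np.array([item[0] for item in data])
labels = np.array([item[1] for item in data])

With features and labels built this way, the original my_model.fit(features, labels, ...) call should no longer raise the shape error.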
