
ValueError: Input 0 of layer sequential_16 is incompatible with the layer: expected ndim=5, found ndim=4. Full shape received: [None, 224, 224, 3]

I am using transfer learning with MobileNet and then feeding the extracted features to an LSTM to classify video data.

When I set up the training, test, and validation datasets with image_dataset_from_directory(), the images are resized to (224, 224).
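For reference, a minimal sketch of how such a dataset is presumably set up (the directory path and batch size are placeholders, not from my actual code); note that image_dataset_from_directory yields batches of single images with shape (batch, 224, 224, 3), i.e. ndim=4, with no frame/time dimension:

import tensorflow as tf

# Hypothetical dataset setup; yields batches shaped (batch, 224, 224, 3).
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    "data/train",           # placeholder path with one subfolder per class
    image_size=(224, 224),  # TARGETX, TARGETY from the globals below
    batch_size=32,          # placeholder batch size
    label_mode="int")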

Edit: so I need to pad the sequences of the data, but when I do so I get the following error, and I am not quite sure how to do this when using image_dataset_from_directory():

train_dataset = sequence.pad_sequences(train_dataset, maxlen=BATCH_SIZE, padding="post", truncating="post")

InvalidArgumentError: assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP]
     [[{{node decode_image/cond_jpeg/else/_1/decode_image/cond_jpeg/cond_png/else/_20/decode_image/cond_jpeg/cond_png/cond_gif/else/_39/decode_image/cond_jpeg/cond_png/cond_gif/Assert/Assert}}]] [Op:IteratorGetNext]

I checked the train_dataset type:

<BatchDataset shapes: ((None, None, 224, 224, 3), (None, None)), types: (tf.float32, tf.int32)>

Global variables:

TARGETX = 224
TARGETY = 224
CLASSES = 3
SIZE = (TARGETX,TARGETY)
INPUT_SHAPE = (TARGETX, TARGETY, 3)
CHANNELS = 3
NBFRAME = 5
INSHAPE = (NBFRAME, TARGETX, TARGETY, 3)

MobileNet function:

def build_mobilenet(shape=INPUT_SHAPE, nbout=CLASSES):
    # INPUT_SHAPE = (224,224,3)
    # CLASSES = 3
    model = MobileNetV2(
        include_top=False,
        input_shape=shape,
        weights='imagenet')
    model.trainable = True
    output = GlobalMaxPool2D()
    return Sequential([model, output])

LSTM function:

def action_model(shape=INSHAPE, nbout=3):
    # INSHAPE = (5, 224, 224, 3)
    convnet = build_mobilenet(shape[1:])
    
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))
    model.add(LSTM(64))
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(nbout, activation='softmax'))
    return model
model = action_model(INSHAPE, CLASSES)
model.summary()
Model: "sequential_16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
time_distributed_6 (TimeDist (None, 5, 1280)           2257984   
_________________________________________________________________
lstm_5 (LSTM)                (None, 64)                344320    
_________________________________________________________________
dense_45 (Dense)             (None, 1024)              66560     
_________________________________________________________________
dropout_18 (Dropout)         (None, 1024)              0         
_________________________________________________________________
dense_46 (Dense)             (None, 512)               524800    
_________________________________________________________________
dropout_19 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_47 (Dense)             (None, 128)               65664     
_________________________________________________________________
dropout_20 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_48 (Dense)             (None, 64)                8256      
_________________________________________________________________
dense_49 (Dense)             (None, 3)                 195       
=================================================================
Total params: 3,267,779
Trainable params: 3,233,667
Non-trainable params: 34,112

Your model is fine. The problem is the way you are feeding the data.

Your model code:

import tensorflow as tf
import keras
from keras.layers import GlobalMaxPool2D, TimeDistributed, Dense, Dropout, LSTM
from keras.applications import MobileNetV2
from keras.models import Sequential
import numpy as np
from keras.preprocessing.sequence import pad_sequences

TARGETX = 224
TARGETY = 224
CLASSES = 3
SIZE = (TARGETX,TARGETY)
INPUT_SHAPE = (TARGETX, TARGETY, 3)
CHANNELS = 3
NBFRAME = 5
INSHAPE = (NBFRAME, TARGETX, TARGETY, 3)

def build_mobilenet(shape=INPUT_SHAPE, nbout=CLASSES):
    # INPUT_SHAPE = (224,224,3)
    # CLASSES = 3
    model = MobileNetV2(
        include_top=False,
        input_shape=shape,
        weights='imagenet')
    model.trainable = True
    output = GlobalMaxPool2D()
    return Sequential([model, output])

def action_model(shape=INSHAPE, nbout=3):
    # INSHAPE = (5, 224, 224, 3)
    convnet = build_mobilenet(shape[1:])
    
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))
    model.add(LSTM(64))
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(nbout, activation='softmax'))
    return model    

Now let's try this model with some dummy data.

So your model takes a sequence of images (i.e. video frames) and classifies them (the video) into one of 3 classes.

Let's create some dummy data with 4 videos of 10 frames each, i.e. batch size = 4 and time steps = 10:

X = np.random.randn(4, 10, TARGETX, TARGETY, 3)
y = model(X)
print (y.shape)

Output:

(4,3)

As expected, the output size is (4, 3).

Now, the problem you will face when using image_dataset_from_directory is how to batch variable-length videos, since the number of frames will/may differ from video to video. The way to handle it is to use pad_sequences.

For example, if the first video has 10 frames, the second 9 frames, and so on, you can do something like the following:

X = [np.random.randn(10, TARGETX, TARGETY, 3), 
     np.random.randn(9, TARGETX, TARGETY, 3), 
     np.random.randn(8, TARGETX, TARGETY, 3), 
     np.random.randn(7, TARGETX, TARGETY, 3)]

X = pad_sequences(X, dtype="float32")  # keep float pixel values; the default dtype is int32
y = model(X)
print (y.shape)

Output:

(4,3)

So once you have read the images with image_dataset_from_directory, you will have to pad the variable-length frame sequences into batches.
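A minimal sketch of one way to do this, assuming frames are stored one folder per video under one folder per class (the directory layout, paths, and helper names here are assumptions, not part of your code): load each video's frames into an array, pad the list of videos with pad_sequences, and build a tf.data.Dataset from the padded tensor so that each element already has the (frames, 224, 224, 3) shape the TimeDistributed model expects.

import os
import numpy as np
import tensorflow as tf
from keras.preprocessing.image import load_img, img_to_array
from keras.preprocessing.sequence import pad_sequences

def load_video_frames(video_dir, size=SIZE):
    # Load every frame image in one video folder into a (n_frames, 224, 224, 3) array.
    frame_files = sorted(os.listdir(video_dir))
    frames = [img_to_array(load_img(os.path.join(video_dir, f), target_size=size))
              for f in frame_files]
    return np.stack(frames)

def make_video_dataset(root_dir, batch_size=4):
    # Assumed layout: root_dir/<class_name>/<video_name>/<frame>.jpg
    class_names = sorted(os.listdir(root_dir))
    videos, labels = [], []
    for label, cls in enumerate(class_names):
        cls_dir = os.path.join(root_dir, cls)
        for video in sorted(os.listdir(cls_dir)):
            videos.append(load_video_frames(os.path.join(cls_dir, video)))
            labels.append(label)
    # Pad all videos to the length of the longest one; keep float32 pixels.
    videos = pad_sequences(videos, padding="post", dtype="float32")
    return tf.data.Dataset.from_tensor_slices((videos, np.array(labels))).batch(batch_size)

train_dataset = make_video_dataset("data/train")  # placeholder path

This loads everything into memory for the sake of clarity; for a larger dataset you would generate the padded clips lazily, for example with a Python generator wrapped in tf.data.Dataset.from_generator.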

