
Input shape mismatch of tensorflow CNN model when used with keras-tuner

I have an existing CNN model which works fine, and the code is as follows.

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(train_data.shape[1], 1)))
model.add(tf.keras.layers.Conv1D(48, 48, activation=tf.nn.selu, padding='same'))
model.add(tf.keras.layers.MaxPool1D(2))
model.add(tf.keras.layers.Conv1D(48, 96, activation=tf.nn.selu, padding='same'))
model.add(tf.keras.layers.MaxPool1D(2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.keras.activations.relu))
model.add(tf.keras.layers.Dense(1, activation=tf.keras.activations.sigmoid))
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=loss_function)

model.fit(train_data, train_result, epochs=2000, validation_split=0.2, verbose=0, callbacks=[early_stop])

train_data is a set of time series where each series is a 48-value vector.

I'm trying to optimise the hyperparameters using keras-tuner. Referring to the CIFAR example in https://github.com/keras-team/keras-tuner/blob/master/examples/cifar10.py, I've changed my code as follows

def build_model(hp):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(train_data.shape[1], 1)))
    # for i in range(hp.Int('conv_blocks', 3, 5, default=3)):
    filters = hp.Int('filters_' + str(1), 12, 96, step=12)
    for _ in range(2):
        model.add(tf.keras.layers.Conv1D(filters, 3, activation=tf.nn.selu, padding='same'))
        if hp.Choice('pooling_' + str(1), ['avg', 'max']) == 'max':
            model.add(tf.keras.layers.MaxPool1D(2))
        else:
            model.add(tf.keras.layers.AvgPool1D(2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(hp.Int('hidden_size', 30, 100, step=10, default=50),
                                    activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.softmax))
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Float('learning_rate', 1e-4, 1e-2, sampling='log')),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

import kerastuner as kt

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=30, hyperband_iterations=2)
tuner.search(train_data, validation_split=0.2, epochs=30, callbacks=[tf.keras.callbacks.EarlyStopping(patience=1)])

But when I try to run it, I'm getting the following error.

ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (176039, 48)

Can someone help me figure out what I'm doing wrong here?

Your train_data should have 3 dimensions; the last dimension is missing.

train_data = train_data.reshape(-1,48,1)
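As a sanity check, the reshape only adds a trailing channel axis and leaves the values untouched; a minimal sketch with dummy data standing in for the real train_data:

```python
import numpy as np

# Dummy stand-in for the real train_data: 176039 series of 48 values each.
train_data = np.random.randn(176039, 48)

# Conv1D layers expect input shaped (batch, steps, channels),
# so add a channel axis of size 1.
reshaped = train_data.reshape(-1, 48, 1)
print(reshaped.shape)  # (176039, 48, 1)

# np.expand_dims(train_data, axis=-1) produces the same shape.
```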

You're also not passing any labels to the model.
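With sparse_categorical_crossentropy and a 2-unit softmax output, the labels should be integer class indices, one per sample; a minimal sketch with dummy numpy labels (the shape matches the dummy data used below):

```python
import numpy as np

# Integer class indices in {0, 1}, one per sample: the format
# sparse_categorical_crossentropy expects with a 2-unit softmax.
train_labels = np.random.randint(0, 2, (100, 1))
print(train_labels.shape)  # (100, 1)
```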

Here's a dummy working example; you need to pass your own labels accordingly.

import tensorflow as tf
import numpy as np

def build_model(hp):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(48, 1)))
    # for i in range(hp.Int('conv_blocks', 3, 5, default=3)):
    filters = hp.Int('filters_' + str(1), 12, 96, step=12)
    for _ in range(2):
        model.add(tf.keras.layers.Conv1D(filters, 3, activation=tf.nn.selu, padding='same'))
        if hp.Choice('pooling_' + str(1), ['avg', 'max']) == 'max':
            model.add(tf.keras.layers.MaxPool1D(2))
        else:
            model.add(tf.keras.layers.AvgPool1D(2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(hp.Int('hidden_size', 30, 100, step=10, default=50),
                                    activation=tf.keras.activations.relu))
    model.add(tf.keras.layers.Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.softmax))
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Float('learning_rate', 1e-4, 1e-2, sampling='log')),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

import kerastuner as kt

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=30, hyperband_iterations=2)

train_data = np.random.randn(100, 48)

train_data = train_data.reshape(-1,48,1)

train_labels = np.random.randint(0, 2, (100,1))

tuner.search(train_data, train_labels, validation_split=0.2, epochs=30, callbacks=[tf.keras.callbacks.EarlyStopping(patience=1)])
Epoch 1/2
3/3 [==============================] - 0s 82ms/step - loss: 0.7110 - accuracy: 0.4875 - val_loss: 0.6611 - val_accuracy: 0.6500
Epoch 2/2
3/3 [==============================] - 0s 21ms/step - loss: 0.6937 - accuracy: 0.5000 - val_loss: 0.6599 - val_accuracy: 0.7500

Trial complete
Trial summary
|-Trial ID: adc89daddb79f3e5ea6a8c307352e4ee
|-Score: 0.75
|-Best step: 0
Hyperparameters:
|-dropout: 0.4
|-filters_1: 24
|-hidden_size: 70
|-learning_rate: 0.0002528462794256226
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 57ms/step - loss: 0.7196 - accuracy: 0.4750 - val_loss: 0.7451 - val_accuracy: 0.4000
Epoch 2/2
3/3 [==============================] - 0s 21ms/step - loss: 0.7048 - accuracy: 0.5500 - val_loss: 0.7398 - val_accuracy: 0.5000

Trial complete
Trial summary
|-Trial ID: 6042b7a7ca696bf79224cbaf5bc05a42
|-Score: 0.5
|-Best step: 0
Hyperparameters:
|-dropout: 0.1
|-filters_1: 36
|-hidden_size: 50
|-learning_rate: 0.00018055209590750966
|-pooling_1: max
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 58ms/step - loss: 0.7476 - accuracy: 0.4625 - val_loss: 0.7329 - val_accuracy: 0.4500
Epoch 2/2
3/3 [==============================] - 0s 14ms/step - loss: 0.6390 - accuracy: 0.6875 - val_loss: 0.6930 - val_accuracy: 0.4000

Trial complete
Trial summary
|-Trial ID: 394ba122903b467ddf54902b15d04a53
|-Score: 0.44999998807907104
|-Best step: 0
Hyperparameters:
|-dropout: 0.2
|-filters_1: 12
|-hidden_size: 60
|-learning_rate: 0.003343121876306107
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 66ms/step - loss: 0.7547 - accuracy: 0.4625 - val_loss: 0.6964 - val_accuracy: 0.4500
Epoch 2/2
3/3 [==============================] - 0s 19ms/step - loss: 0.5858 - accuracy: 0.7500 - val_loss: 0.6720 - val_accuracy: 0.7000

Trial complete
Trial summary
|-Trial ID: 2479d3c548e70bb0b88a5e4540a7923a
|-Score: 0.699999988079071
|-Best step: 0
Hyperparameters:
|-dropout: 0.1
|-filters_1: 72
|-hidden_size: 50
|-learning_rate: 0.003193348791226863
|-pooling_1: max
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 53ms/step - loss: 0.7166 - accuracy: 0.5125 - val_loss: 0.6674 - val_accuracy: 0.6000
Epoch 2/2
3/3 [==============================] - 0s 19ms/step - loss: 0.7243 - accuracy: 0.4625 - val_loss: 0.6569 - val_accuracy: 0.5500

Trial complete
Trial summary
|-Trial ID: 01a8bb49c51eb81f27dbe7d491d40246
|-Score: 0.6000000238418579
|-Best step: 0
Hyperparameters:
|-dropout: 0.4
|-filters_1: 12
|-hidden_size: 90
|-learning_rate: 0.0008793685539613403
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 57ms/step - loss: 0.7178 - accuracy: 0.4750 - val_loss: 0.7252 - val_accuracy: 0.4500
Epoch 2/2
3/3 [==============================] - 0s 18ms/step - loss: 0.6906 - accuracy: 0.4750 - val_loss: 0.7161 - val_accuracy: 0.4500

Trial complete
Trial summary
|-Trial ID: fc795bd34f19275f9eef882bece8092a
|-Score: 0.44999998807907104
|-Best step: 0
Hyperparameters:
|-dropout: 0.0
|-filters_1: 48
|-hidden_size: 60
|-learning_rate: 0.0002136185900215609
|-pooling_1: avg
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 52ms/step - loss: 0.7821 - accuracy: 0.4375 - val_loss: 0.7021 - val_accuracy: 0.5500
Epoch 2/2
3/3 [==============================] - 0s 13ms/step - loss: 0.5737 - accuracy: 0.7625 - val_loss: 0.6778 - val_accuracy: 0.5500

Trial complete
Trial summary
|-Trial ID: b7c8fc5ae3ffa33970d8dcf9486667ae
|-Score: 0.550000011920929
|-Best step: 0
Hyperparameters:
|-dropout: 0.0
|-filters_1: 72
|-hidden_size: 100
|-learning_rate: 0.0012802609011755962
|-pooling_1: max
|-tuner/bracket: 3
|-tuner/epochs: 2
|-tuner/initial_epoch: 0
|-tuner/round: 0

Epoch 1/2
3/3 [==============================] - 0s 52ms/step - loss: 0.6974 - accuracy: 0.5375 - val_loss: 0.7001 - val_accuracy: 0.7000
Epoch 2/2
3/3 [==============================] - 0s 14ms/step - loss: 0.5605 - accuracy: 0.7750 - val_loss: 0.7182 - val_accuracy: 0.5000

Trial complete
Trial summary
|-Trial ID: c5b4f33e7b657342804b394ae3483a22
|-Score: 0.699999988079071
|-Best step: 0
......
......
