
TensorFlow ValueError: Input 0 of layer sequential is incompatible with the layer

When I run the last part of the code below, I get the following error:

ValueError: Input 0 of layer sequential_1 is incompatible with the layer: expected axis -1 of input shape to have value 28 but received input with shape (None, 30, 30)
import pandas as pd                       
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense, Dropout, Flatten, GRU, SimpleRNN, LSTM, Bidirectional, Activation, TimeDistributed
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

CNNmodel = keras.Sequential()  
CNNmodel.add(Conv1D(32, 2, activation='relu', input_shape=(20,28))) # 32 convolution filters used each of size 2
CNNmodel.add(Conv1D(64, 3, activation='relu'))        # 64 convolution filters used each of size 3
CNNmodel.add(MaxPooling1D(pool_size=(1,)))            # choose the best features via pooling
CNNmodel.add(Dropout(0.25))                           # randomly turn neurons on and off to improve convergence
CNNmodel.add(Flatten())                               # flatten since we only want a classification output
CNNmodel.add(Dense(30, activation='relu'))            # fully connected to get all relevant data
CNNmodel.add(Dropout(0.1))                            # one more dropout
CNNmodel.add(Dense(1, activation='sigmoid'))          # output 

lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-2,
    decay_steps=10000,
    decay_rate=0.9)

opt = keras.optimizers.Adagrad(learning_rate=lr_schedule)

CNNmodel.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

CNNhistory = CNNmodel.fit(x_train, y_train, validation_data=(x_val, y_val),  epochs=20, batch_size=128) # Getting score metrics

scores = CNNmodel.evaluate(x_test, y_test) 
print("Accuracy: %.2f%%" % (scores[1]*100))

Working example code

import pandas as pd                       
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense, Dropout, Flatten, GRU, SimpleRNN, LSTM, Bidirectional, Activation, TimeDistributed
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

CNNmodel = keras.Sequential()  
CNNmodel.add(Conv1D(32, 2, activation='relu', input_shape=(28,28))) # 32 convolution filters used each of size 2
CNNmodel.add(Conv1D(64, 3, activation='relu'))        # 64 convolution filters used each of size 3
CNNmodel.add(MaxPooling1D(pool_size=(1,)))            # choose the best features via pooling
CNNmodel.add(Dropout(0.25))                           # randomly turn neurons on and off to improve convergence
CNNmodel.add(Flatten())                               # flatten since we only want a classification output
CNNmodel.add(Dense(30, activation='relu'))            # fully connected to get all relevant data
CNNmodel.add(Dropout(0.1))                            # one more dropout
CNNmodel.add(Dense(1, activation='sigmoid')) 

lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-2,
    decay_steps=10000,
    decay_rate=0.9)

opt = keras.optimizers.Adagrad(learning_rate=lr_schedule)

CNNmodel.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

CNNhistory = CNNmodel.fit(x_train, y_train, validation_data=(x_test, y_test),  epochs=20, batch_size=128)

Output

Epoch 1/20
469/469 [==============================] - 14s 28ms/step - loss: -3719534848.0000 - accuracy: 0.1124 - val_loss: -11388519424.0000 - val_accuracy: 0.1135
Epoch 2/20
469/469 [==============================] - 9s 18ms/step - loss: -25869672448.0000 - accuracy: 0.1124 - val_loss: -44974247936.0000 - val_accuracy: 0.1135
Epoch 3/20
469/469 [==============================] - 8s 17ms/step - loss: -69248000000.0000 - accuracy: 0.1124 - val_loss: -99721273344.0000 - val_accuracy: 0.1135
Epoch 4/20
469/469 [==============================] - 8s 17ms/step - loss: -133157298176.0000 - accuracy: 0.1124 - val_loss: -175103967232.0000 - val_accuracy: 0.1135
Epoch 5/20
469/469 [==============================] - 8s 18ms/step - loss: -216887148544.0000 - accuracy: 0.1124 - val_loss: -270619656192.0000 - val_accuracy: 0.1135
Epoch 6/20
469/469 [==============================] - 8s 18ms/step - loss: -320444530688.0000 - accuracy: 0.1124 - val_loss: -385881538560.0000 - val_accuracy: 0.1135
Epoch 7/20
469/469 [==============================] - 9s 19ms/step - loss: -443233828864.0000 - accuracy: 0.1124 - val_loss: -520282669056.0000 - val_accuracy: 0.1135
Epoch 8/20
469/469 [==============================] - 8s 18ms/step - loss: -584527708160.0000 - accuracy: 0.1124 - val_loss: -673431617536.0000 - val_accuracy: 0.1135
Epoch 9/20
469/469 [==============================] - 8s 17ms/step - loss: -743466008576.0000 - accuracy: 0.1124 - val_loss: -844648939520.0000 - val_accuracy: 0.1135
Epoch 10/20
469/469 [==============================] - 8s 17ms/step - loss: -920933564416.0000 - accuracy: 0.1124 - val_loss: -1033648603136.0000 - val_accuracy: 0.1135
Epoch 11/20
469/469 [==============================] - 9s 19ms/step - loss: -1113547472896.0000 - accuracy: 0.1124 - val_loss: -1239565729792.0000 - val_accuracy: 0.1135
Epoch 12/20
469/469 [==============================] - 9s 19ms/step - loss: -1324937117696.0000 - accuracy: 0.1124 - val_loss: -1462383280128.0000 - val_accuracy: 0.1135
Epoch 13/20
469/469 [==============================] - 8s 18ms/step - loss: -1552220815360.0000 - accuracy: 0.1124 - val_loss: -1701631885312.0000 - val_accuracy: 0.1135
Epoch 14/20
469/469 [==============================] - 9s 19ms/step - loss: -1793859387392.0000 - accuracy: 0.1124 - val_loss: -1956641505280.0000 - val_accuracy: 0.1135
Epoch 15/20
469/469 [==============================] - 9s 19ms/step - loss: -2052668915712.0000 - accuracy: 0.1124 - val_loss: -2227197444096.0000 - val_accuracy: 0.1135
Epoch 16/20
469/469 [==============================] - 9s 20ms/step - loss: -2327011393536.0000 - accuracy: 0.1124 - val_loss: -2512955113472.0000 - val_accuracy: 0.1135
Epoch 17/20
469/469 [==============================] - 8s 18ms/step - loss: -2612614660096.0000 - accuracy: 0.1124 - val_loss: -2813191520256.0000 - val_accuracy: 0.1135
Epoch 18/20
469/469 [==============================] - 8s 18ms/step - loss: -2914698395648.0000 - accuracy: 0.1124 - val_loss: -3127708745728.0000 - val_accuracy: 0.1135
Epoch 19/20
469/469 [==============================] - 9s 18ms/step - loss: -3229450240000.0000 - accuracy: 0.1124 - val_loss: -3455992463360.0000 - val_accuracy: 0.1135
Epoch 20/20
469/469 [==============================] - 9s 19ms/step - loss: -3558048268288.0000 - accuracy: 0.1124 - val_loss: -3797768994816.0000 - val_accuracy: 0.1135
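The loss drifting ever more negative while accuracy stays pinned near 0.11 is what binary_crossentropy produces when it is fed the raw MNIST labels 0-9 against a single sigmoid output. For the ten digit classes, a softmax head with sparse_categorical_crossentropy is the usual setup. Below is a minimal sketch of that variant on the same Conv1D architecture; the loss/head change and the input scaling are assumptions on my part, not part of the original post:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense, Dropout, Flatten

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0   # scale pixel values to [0, 1]
x_test = x_test / 255.0

model = keras.Sequential([
    Conv1D(32, 2, activation='relu', input_shape=(28, 28)),
    Conv1D(64, 3, activation='relu'),
    MaxPooling1D(pool_size=2),
    Dropout(0.25),
    Flatten(),
    Dense(30, activation='relu'),
    Dropout(0.1),
    Dense(10, activation='softmax'),   # ten classes, one probability per digit
])

model.compile(loss='sparse_categorical_crossentropy',  # integer labels 0-9
              optimizer='adam',
              metrics=['accuracy'])

model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5, batch_size=128)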

