
Why is accuracy and loss staying exactly the same while training?

So I tried to modify the getting-started tutorial at https://www.tensorflow.org/tutorials/keras/basic_classification to use my own data. The goal is to classify images of dogs and cats. The code is very simple and shown below. The problem is that the network does not seem to learn at all; training loss and accuracy stay exactly the same after every epoch.

The images (X_training) and labels (y_training) seem to have the correct format: X_training.shape returns: (18827, 80, 80, 3)

y_training is a one-dimensional list with entries in {0, 1}

I have checked multiple times that the "images" in X_training are labeled correctly: say X_training[i,:,:,:] represents a dog, then y_training[i] will return 1; if X_training[i,:,:,:] represents a cat, then y_training[i] will return 0.
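A quick sanity check like the following can confirm the shapes and label values line up (a minimal sketch using stand-in random arrays in place of the real pickled data; the variable names match the question):

```python
import numpy as np

# Stand-ins for the real pickled arrays, just to illustrate the checks
X_training = np.random.rand(8, 80, 80, 3)
y_training = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# Every image is 80x80 RGB, there is one label per image,
# and labels only take the values 0 and 1
assert X_training.shape[1:] == (80, 80, 3)
assert len(y_training) == len(X_training)
assert set(np.unique(y_training)) <= {0, 1}
print("data looks consistent")
```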

Shown below is the complete python file without the import statements.

#loading the data from 4 pickle files:
pickle_in = open("X_training.pickle","rb")
X_training = pickle.load(pickle_in)

pickle_in = open("X_testing.pickle","rb")
X_testing = pickle.load(pickle_in)

pickle_in = open("y_training.pickle","rb")
y_training = pickle.load(pickle_in)

pickle_in = open("y_testing.pickle","rb")
y_testing = pickle.load(pickle_in)


#normalizing the input data:
X_training = X_training/255.0
X_testing = X_testing/255.0


#building the model:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(80, 80,3)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(1,activation='sigmoid')
])
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])


#running the model:
model.fit(X_training, y_training, epochs=10)

The code compiles and trains for 10 epochs, but neither loss nor accuracy improves; they stay exactly the same after every epoch. The code works fine with the MNIST-fashion dataset used in the tutorial, with slight changes accounting for the difference between multiclass and binary classification and the input shape.

If you want to train a classification model, you must use binary_crossentropy as the loss function, not mean_squared_error, which is used for regression tasks

Replace

model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])

with

model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
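This is not from the original thread, but a small numerical illustration of why the MSE/sigmoid combination can stall: the gradient of squared error through a sigmoid carries an extra p*(1-p) factor that vanishes once the output saturates, while the cross-entropy gradient stays proportional to the error itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_grad(z, y):
    # d/dz of binary cross-entropy after a sigmoid: simply p - y
    return sigmoid(z) - y

def mse_grad(z, y):
    # d/dz of squared error after a sigmoid: 2*(p - y)*p*(1 - p)
    p = sigmoid(z)
    return 2.0 * (p - y) * p * (1.0 - p)

# A confidently wrong prediction: large positive logit, true label 0
z, y = 8.0, 0.0
print(bce_grad(z, y))   # near 1: a strong corrective signal
print(mse_grad(z, y))   # near 0: almost no learning signal
```

With MSE the update shrinks toward zero exactly when the model is most wrong and most confident, which matches the "loss stays exactly the same" symptom.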

Also, I suggest not using relu activation on the dense layer, but linear

Replace

keras.layers.Dense(128, activation=tf.nn.relu),

with

keras.layers.Dense(128),

To make better use of the power of neural networks, use some convolutional layers before your flatten layer

I found a different implementation where a slightly more complex model works. Here is the complete code without the import statements:

#global variables:
batch_size = 32
nr_of_epochs = 64
input_shape = (80,80,3)


#loading the data from 4 pickle files:
pickle_in = open("X_training.pickle","rb")
X_training = pickle.load(pickle_in)

pickle_in = open("X_testing.pickle","rb")
X_testing = pickle.load(pickle_in)

pickle_in = open("y_training.pickle","rb")
y_training = pickle.load(pickle_in)

pickle_in = open("y_testing.pickle","rb")
y_testing = pickle.load(pickle_in)



#building the model
def define_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # compile model
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
model = define_model()


#Possibility for image data augmentation
train_datagen = ImageDataGenerator(rescale=1.0/255.0)
val_datagen = ImageDataGenerator(rescale=1./255.) 
train_generator =train_datagen.flow(X_training,y_training,batch_size=batch_size)
val_generator = val_datagen.flow(X_testing,y_testing,batch_size= batch_size)



#running the model
history = model.fit_generator(train_generator,steps_per_epoch=len(X_training) //batch_size,
                              epochs=nr_of_epochs,validation_data=val_generator,
                              validation_steps=len(X_testing) //batch_size)
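Note that `fit_generator` is deprecated in TensorFlow 2.1+; `model.fit` accepts the same generators directly. A self-contained sketch of the equivalent call, using tiny random stand-ins for the real data and a minimal model so it runs on its own:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Tiny stand-ins for the real arrays, just to demonstrate the call shape
X_training = np.random.rand(16, 80, 80, 3).astype("float32")
y_training = np.random.randint(0, 2, 16)
batch_size = 8

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(80, 80, 3)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

datagen = ImageDataGenerator(rescale=1.0 / 255.0)
generator = datagen.flow(X_training, y_training, batch_size=batch_size)

# model.fit consumes the generator directly, no fit_generator needed
history = model.fit(generator,
                    steps_per_epoch=len(X_training) // batch_size,
                    epochs=1)
print(len(history.history["loss"]))  # one loss value per epoch
```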
