I have a problem using a CNN to classify more than 10,000 classes. How can I solve it?
The model has 5000 classes, so how can I create the layers using TensorFlow or Keras? If I increase the epochs my system gets overloaded and hangs. Here I applied the Adam optimizer with the mean_square_error loss function, and I get very low accuracy. How can I fix it?

'''
epochs = 3
batch_size = 35
model = Sequential()
print(x_train.shape[1],1)
model.add(Conv1D(16, 3, padding='same', activation='relu', input_shape=(128,1)))#x.shape[1:])) # Input shape: (96, 96, 1)
model.add(MaxPooling1D(pool_size=1))
model.add(Conv1D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1))
model.add(Dropout(0.25))
model.add(Conv1D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1))
model.add(Conv1D(128, 3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1))
model.add(Conv1D(256, 3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1))
# Convert all values to 1D array
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(5823))
##checkpointer = ModelCheckpoint(filepath='checkpoint1.hdf5', verbose=1, save_best_only=True)
# Compile model
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
history = model.fit(x_train, y_train_binary,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test_binary))
'''
You must activate the last layer with a function like softmax (the best choice for multi-class classification). As noted in the comments, the best loss function in your case is categorical_crossentropy if your labels are one-hot encoded (use sparse_categorical_crossentropy if they are not).
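A minimal sketch of that fix, assuming the same 5823-class output and one-hot encoded labels as in the question (the pool_size=2 change is an additional assumption: pool_size=1 in the original code performs no downsampling):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv1D(16, 3, padding='same', activation='relu', input_shape=(128, 1)))
model.add(MaxPooling1D(pool_size=2))  # pool_size=1 is a no-op; 2 actually halves the length
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
# softmax turns the 5823 raw outputs into a probability distribution over classes
model.add(Dense(5823, activation='softmax'))

# categorical_crossentropy matches one-hot labels;
# switch to sparse_categorical_crossentropy for integer class labels
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

With mean_squared_error and no final activation, the model treats the 5823 outputs as independent regression targets, which is why accuracy stays very low; softmax plus cross-entropy trains the outputs as class probabilities instead.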