
Testing accuracy higher than training accuracy


Why is my testing accuracy higher than my training accuracy? This is not the case for the validation accuracy. Is it because of the way I split the dataset?

Modifying the network had no effect, so I'm guessing I'm doing something wrong in the dataset preparation part.

The dataset consists of packet captures of malware and of normal activity. The dataset.txt file contains 777 rows and 28 columns in total.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
from keras import models, layers
from keras.layers import Dropout

#converting dataset and labels to numpy arrays
x = np.genfromtxt("dataset.txt", delimiter=",")
y = np.genfromtxt("label.txt", delimiter=",")

#handling missing values
x[np.isnan(x)] = 0

#shuffling the data
indices = np.arange(x.shape[0])
np.random.shuffle(indices)
x = x[indices]
y = y[indices]

#dividing the dataset into train and test 
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

#building the model
def build_model():
        model = models.Sequential()
        model.add(layers.Dense(32, activation='relu', input_shape=(28,)))
        model.add(layers.Dense(32, activation='relu'))
        model.add(layers.Dense(32, activation='relu'))
        model.add(Dropout(0.2))
        model.add(layers.Dense(1, activation='sigmoid'))
        model.compile(optimizer='rmsprop',  loss='binary_crossentropy', 
                      metrics=['accuracy'])
        return model

'''cross validation 
k = 5
num_val_samples = len(x_train) // k
all_scores = []

for i in range(k):
   print('fold #', i)
   x_val = x_train[i * num_val_samples: (i + 1) * num_val_samples]
   y_val = y_train[i * num_val_samples: (i + 1) * num_val_samples]
   partial_x_train = np.concatenate([x_train[:i * num_val_samples], 
                     x_train[(i + 1) * num_val_samples:]], axis=0)
   partial_y_train = np.concatenate([y_train[:i * num_val_samples], 
                     y_train[(i + 1) * num_val_samples:]], axis=0)
   model = build_model()
   model.fit(partial_x_train, partial_y_train,epochs=20, batch_size=16, 
             verbose=0)
   val_loss, val_acc = model.evaluate(x_val, y_val, verbose=0)
   all_scores.append(val_acc)

print(all_scores)
val_acc = np.mean(all_scores)
print(val_loss , val_acc) 
'''

#training the model with the entire training dataset
model = build_model()
model.fit(x_train, y_train, epochs=20, batch_size=16)

#confusion matrix
y_pred = model.predict(x_test)
y_pred = (y_pred > 0.5)
result = confusion_matrix(y_test, y_pred)
print ('Confusion Matrix:')
print(result)

#calculating the test accuracy
model_acc = accuracy_score(y_test, y_pred)
print('Test Accuracy:')
print(model_acc)

This is because Keras reports a running average of the accuracy for each epoch. With only a few epochs, this means that by the end of an epoch your model is better than it was on average during that epoch, so the training accuracy printed by fit() understates the accuracy of the final weights.
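
As a quick check (a minimal sketch reusing the model, x_train, x_test, y_train and y_test from the code above), re-evaluating both splits with the same final weights removes the running-average effect and gives directly comparable numbers:

#evaluate the finished model on both splits using the final weights
train_loss, train_acc = model.evaluate(x_train, y_train, verbose=0)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print('Training accuracy (final weights):', train_acc)
print('Test accuracy (final weights):', test_acc)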

It could also be due to "easier" samples being randomly included in the test set, but that would not happen on every run if you split the data randomly in the same part of the code.
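
If you suspect the test split just happened to receive easier samples, one option (a sketch using the same x and y arrays as above) is a stratified split with a fixed seed, so both splits keep the same class balance and the result is reproducible across runs:

from sklearn.model_selection import train_test_split

#stratify on the labels so train and test have the same class proportions
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=0, stratify=y)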

