
How to save val_loss and val_acc in Keras


I'm having trouble recording 'val_loss' and 'val_acc' in Keras. 'loss' and 'acc' are easy, since they are always recorded in the history returned by model.fit.

The docs say that 'val_loss' is recorded if validation is enabled in fit, and 'val_acc' is recorded if validation and accuracy monitoring are enabled. But what does this mean?

My code is model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[history])

As you can see, I use 5-fold cross-validation and shuffle the data. In this case, how can I enable validation in fit so that 'val_loss' and 'val_acc' are recorded?

Thanks

From the Keras documentation, the models.fit method has the signature:

fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)

'val_loss' is recorded if validation is enabled in fit, and val_acc is recorded if validation and accuracy monitoring are enabled. - This comes from the keras.callbacks.Callback() object, when it is used for the callbacks parameter in the fit method above. It can be used as follows:

    from keras.callbacks import Callback
    logs = Callback()
    model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[logs])
    # Instead of using the history callback, which you've used.
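
Note that a bare Callback() instance does nothing on its own; Keras automatically attaches a History callback to every model.fit call, so you can simply capture the return value instead. A minimal sketch (assuming model, train_data and train_labels are already defined):

    # History is attached automatically; no explicit callback is needed
    history = model.fit(train_data, train_labels, epochs=64, batch_size=10,
                        shuffle=True, validation_split=0.2)
    print(history.history['val_loss'])  # per-epoch validation losses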

'val_loss' is recorded if validation is enabled in fit: when using the model.fit method, this means you are either using the validation_split parameter, or using the validation_data parameter to specify the tuple (x_val, y_val) or tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch.
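
As a sketch of both options (assuming x_train, y_train, x_val and y_val are arrays you have already prepared):

    # Option 1: let Keras split 20% off the training data for validation
    model.fit(x_train, y_train, epochs=64, batch_size=10, validation_split=0.2)

    # Option 2: pass an explicit hold-out set
    model.fit(x_train, y_train, epochs=64, batch_size=10,
              validation_data=(x_val, y_val))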

Returns a History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable). - Keras documentation (return value of the model.fit method)

In the model below:

model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[history])

you are using the History callback. If you assign the return value of model.fit to a variable, as shown below:

history = model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[history])
history.history

then history.history will give you a dictionary containing loss, acc, val_loss and val_acc, like the one shown below:

{'val_loss': [14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849,
  14.431451635814849],
 'val_acc': [0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403,
  0.1046428571712403],
 'loss': [14.555215610322499,
  14.555215534028553,
  14.555215548560733,
  14.555215588524229,
  14.555215592157273,
  14.555215581258137,
  14.555215575808571,
  14.55521561940511,
  14.555215563092913,
  14.555215624854679],
 'acc': [0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571,
  0.09696428571428571]}

You can save the data either with the CSVLogger (as suggested in the comments, shown below) or by the longer method of writing the dictionary to a csv file (as given here: writing a dictionary to a csv).

from keras.callbacks import CSVLogger

csv_logger = CSVLogger('training.log')
model.fit(X_train, Y_train, callbacks=[csv_logger])
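
For the longer route, here is a minimal sketch (assuming history is the object returned by model.fit) that writes every recorded metric to a CSV with one row per epoch:

    import csv

    with open('history.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        keys = list(history.history.keys())  # e.g. loss, acc, val_loss, val_acc
        writer.writerow(['epoch'] + keys)
        rows = zip(*(history.history[k] for k in keys))
        for epoch, row in enumerate(rows, start=1):
            writer.writerow([epoch] + list(row))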

Update: the val_accuracy dictionary key no longer seems to work today. No idea why, but I removed that code from here, even though the OP asked how to log accuracy (besides, loss is what actually matters when comparing cross-validation results).
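
Since the exact key names differ between versions ('val_acc' in older Keras, 'val_accuracy' in recent TF 2.x releases), it is safest to inspect the dictionary before indexing into it:

    # Print the metric names actually recorded by this Keras/TF version
    print(history.history.keys())
    val_acc_key = 'val_accuracy' if 'val_accuracy' in history.history else 'val_acc'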

Using Python 3.7 and Tensorflow 2.0, the following worked for me after much searching, guessing, and repeated failure. I started from someone else's script and wrote what I needed to a .json file; it produces one such .json file per training run, showing the validation loss of each epoch so you can see how the model converged (or didn't); accuracy was logged but not used as a performance metric.

NOTE: you need to fill in yourTrainDir, yourTrainingData, yourValidationData, yourOptimizer, yourLossFunctionFromKerasOrElsewhere, yourNumberOfEpochs, etc., to make this code work:

import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LambdaCallback
import json
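
# NOTE: this assumes a Keras model has already been built beforehand, e.g.
# model = tf.keras.Sequential([...])  # hypothetical; substitute your own architecture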
model.compile(
    optimizer=yourOptimizer,
    loss=yourLossFunctionFromKerasOrElsewhere()
    )

# create a custom callback to enable future cross-validation efforts
yourTrainDir = os.getcwd() + '/yourOutputFolderName/'
uniqueID = np.random.randint(999999) # To distinguish validation runs by saved JSON name
epochValidationLog = open(
    yourTrainDir +
    'val_log_per_epoch_' +
    '{}_'.format(uniqueID) +
    '.json',
    mode='wt',
    buffering=1
    )
ValidationLogsCallback = LambdaCallback(
    on_epoch_end = lambda epoch,
        logs: epochValidationLog.write(
            json.dumps(
                {
                    'oneIndexedEpoch': epoch + 1,
                    'Validationloss': logs['val_loss']
                }
                ) + '\n'
            ),
    on_train_end = lambda logs: epochValidationLog.close()
    )

# set up the list of callbacks
callbacksList = [
    ValidationLogsCallback,
    EarlyStopping(patience=40, verbose=1),
    ]
results = model.fit(
    x=yourTrainingData,
    steps_per_epoch=len(yourTrainingData),
    validation_data=yourValidationData,
    validation_steps=len(yourValidationData),
    epochs=yourNumberOfEpochs,
    verbose=1,
    callbacks=callbacksList
    )

This produces a JSON file in the yourTrainDir folder, recording the validation loss of each training epoch as its own dictionary-like item. Note that the epoch numbering is 1-indexed, so it matches tensorflow's output rather than the actual Python index.
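
For reference, each line in such a file looks roughly like this (values are illustrative only):

    {"oneIndexedEpoch": 1, "Validationloss": 0.4567}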

I output to a .json file, but it could be anything. Here is the code I used to analyze the generated JSON files; I could have put it all into one script, but didn't.

import os
from pathlib import Path
import json

currentDirectory = os.getcwd()
outFileName = 'CVResults.json'
outFile = open(outFileName, mode='wt')
validationLogPaths = Path().glob('val_log_per_epoch_*.json')

# Necessary set to detect short unique IDs for each training session
setStringDecimalDigits = set('1234567890')
trainingSessionsList = []

# Load the JSON files into memory to allow reading.
for validationLogFile in validationLogPaths:
    trainingUniqueIDCandidate = str(validationLogFile)[18:21]

    # Pad unique IDs with fewer than three digits with zeros at front
    thirdPotentialDigitOfUniqueID = trainingUniqueIDCandidate[2]
    if setStringDecimalDigits.isdisjoint(thirdPotentialDigitOfUniqueID):
        secondPotentialDigitOfUniqueID = trainingUniqueIDCandidate[1]
        if setStringDecimalDigits.isdisjoint(secondPotentialDigitOfUniqueID):
            trainingUniqueID = '00' + trainingUniqueIDCandidate[:1]
        else:
            trainingUniqueID = '0' + trainingUniqueIDCandidate[:2]
    else:
        trainingUniqueID = trainingUniqueIDCandidate
    trainingSessionsList.append((trainingUniqueID, validationLogFile))
trainingSessionsList.sort(key=lambda x: x[0])

# Analyze and export cross-validation results
for replicate in range(len(dict(trainingSessionsList).keys())):
    validationLogFile = trainingSessionsList[replicate][1]
    fileOpenForReading = open(
        validationLogFile, mode='r', buffering=1
    )

    with fileOpenForReading as openedFile:
        jsonValidationData = [json.loads(line) for line in openedFile]

    bestEpochResultsDict = {}
    oneIndexedEpochsList = []
    validationLossesList = []
    for line in range(len(jsonValidationData)):
        tempDict = jsonValidationData[line]
        oneIndexedEpochsList.append(tempDict['oneIndexedEpoch'])
        validationLossesList.append(tempDict['Validationloss'])
    trainingStopIndex = min(
        range(len(validationLossesList)),
        key=validationLossesList.__getitem__
    )
    bestEpochResultsDict['Integer_unique_ID'] = trainingSessionsList[replicate][0]
    bestEpochResultsDict['Min_val_loss'] = validationLossesList[trainingStopIndex]
    bestEpochResultsDict['Last_train_epoch'] = oneIndexedEpochsList[trainingStopIndex]
    outFile.write(json.dumps(bestEpochResultsDict, sort_keys=True) + '\n')

outFile.close()

This last bit of code creates a JSON summarizing the CVResults.json generated above:

from pathlib import Path
import json
import os
import statistics

outFile = open("CVAnalysis.json", mode='wt')
CVResultsPath = sorted(Path().glob('*CVResults.json'))
if len(CVResultsPath) > 1:
    print('\nPlease analyze only one CVResults.json file at a time.')
    userAnswer = input('\nI understand only one will be analyzed: y or n ')
    if (userAnswer != 'y') and (userAnswer != 'Y'):
        raise SystemExit('Aborting analysis.')  # stop unless the user confirms
    print('\nAnalyzing results in file {}:'.format(str(CVResultsPath[0])))

# Load the first CVResults.json file into memory to allow reading.
CVResultsFile = CVResultsPath[0]
fileOpenForReading = open(
    CVResultsFile, mode='r', buffering=1
)

outFile.write(
    'Analysis of cross-validation results tabulated in file {}:\n\n'.format(
        os.path.join(os.getcwd(), str(CVResultsFile))
    )
)

with fileOpenForReading as openedFile:
    jsonCVResultsData = [json.loads(line) for line in openedFile]

minimumValidationLossesList = []
trainedOneIndexedEpochsList = []
for line in range(len(jsonCVResultsData)):
    tempDict = jsonCVResultsData[line]
    minimumValidationLossesList.append(tempDict['Min_val_loss'])
    trainedOneIndexedEpochsList.append(tempDict['Last_train_epoch'])
outFile.write(
    '\nTrained validation losses: ' +
    json.dumps(minimumValidationLossesList) +
    '\n'
)
outFile.write(
    '\nTraining epochs required: ' +
    json.dumps(trainedOneIndexedEpochsList) +
    '\n'
)
outFile.write(
    '\n\nMean trained validation loss: ' +
    str(round(statistics.mean(minimumValidationLossesList), 4)) +
    '\n'
)
outFile.write(
    'Median of mean trained validation losses per session: ' +
    str(round(statistics.median(minimumValidationLossesList), 4)) +
    '\n'
)
outFile.write(
    '\n\nMean training epochs required: ' +
    str(round(statistics.mean(trainedOneIndexedEpochsList), 1)) +
    '\n'
)
outFile.write(
    'Median of mean training epochs required per session: ' +
    str(round(statistics.median(trainedOneIndexedEpochsList), 1)) +
    '\n'
)
outFile.close()

The val_loss and val_acc data can be saved while using Keras's ModelCheckpoint class.

from keras.callbacks import ModelCheckpoint

checkpointer = ModelCheckpoint(filepath='yourmodelname.hdf5', 
                               monitor='val_loss', 
                               verbose=1, 
                               save_best_only=False)

history = model.fit(X_train, y_train, epochs=100, validation_split=0.02, callbacks=[checkpointer])

history.history.keys()

# output
# dict_keys(['val_loss', 'val_mae', 'val_acc', 'loss', 'mae', 'acc'])

One important point: if you omit the validation_split attribute, you will only get values for loss, mae and acc.

Hope this helps!
