
Creating custom error function in CNTK

This is part of my current Python code for NN training with the CNTK module:

batch_axis = C.Axis.default_batch_axis()
input_seq_axis = C.Axis.default_dynamic_axis()

input_dynamic_axes = [batch_axis, input_seq_axis]
input_dynamic_axes2 = [batch_axis, input_seq_axis]

input = C.input_variable(n_ins, dynamic_axes=input_dynamic_axes, dtype=numpy.float32)
output = C.input_variable(n_outs, dynamic_axes=input_dynamic_axes2, dtype=numpy.float32)

dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs)

loss = C.squared_error(dnn_model, output)
error = C.squared_error(dnn_model, output)

lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(current_momentum)

learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain=False,
                 l1_regularization_weight=l1_reg, l2_regularization_weight=l2_reg)

trainer = C.Trainer(dnn_model, (loss, error), [learner])  

Here is the code used to create the NN model:

def create_model(features, hidden_layer_type, hidden_layer_size, n_out):
    logger.debug('Creating cntk model')
    assert len(hidden_layer_size) == len(hidden_layer_type)

    n_layers = len(hidden_layer_size)

    my_layers = list()
    for i in range(n_layers):
        if(hidden_layer_type[i] == 'TANH'):
            my_layers.append(C.layers.Dense(hidden_layer_size[i], activation=C.tanh, init=C.layers.glorot_uniform()))
        elif (hidden_layer_type[i] == 'LSTM'):
            my_layers.append(C.layers.Recurrence(C.layers.LSTM(hidden_layer_size[i])))
        else:
            raise Exception('Unknown hidden layer type')

    my_layers.append(C.layers.Dense(n_out, activation=None))

    my_model = C.layers.Sequential(my_layers)
    my_model = my_model(features)

    return my_model

Now I want to change the backpropagation, so that when the error is computed, instead of the direct network output, the output after some additional calculation is used. I tried to define something like this:

def create_error_function(self, prediction, target):

    prediction_denorm = C.element_times(prediction, self.std_vector)
    prediction_denorm = C.plus(prediction_denorm, self.mean_vector)
    prediction_denorm_rounded = C.round(C.element_times(prediction_denorm[0:5], C.round(prediction_denorm[5])))
    prediction_denorm_rounded = C.element_divide(prediction_denorm_rounded, C.round(prediction_denorm[5]))

    prediction_norm = C.minus(prediction_denorm_rounded, self.mean_vector[0:5])
    prediction_norm = C.element_divide(prediction_norm, self.std_vector[0:5])

    first =  C.squared_error(prediction_norm, target[0:5])
    second = C.minus(C.round(prediction_denorm[5]), self.mean_vector[5])
    second = C.element_divide(second, self.std_vector[5])

    return C.plus(first, C.squared_error(second, target[5]))

and used it instead of the standard squared_error. The relevant part of the NN training:

dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs)
error_function = cntk_model.ErrorFunction(cmp_mean_vector, cmp_std_vector)
loss = error_function.create_error_function(dnn_model, output)
error = error_function.create_error_function(dnn_model, output)
lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(current_momentum)

learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain=False,
                 l1_regularization_weight=l1_reg, l2_regularization_weight=l2_reg)

trainer = C.Trainer(dnn_model, (loss, error), [learner])
trainer.train_minibatch({input: temp_train_x, output: temp_train_y})

But after two epochs I started getting always the same average loss, as if my network is not learning.
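One way to see why a loss built around `C.round` stalls training: rounding is piecewise constant, so its derivative is zero almost everywhere and no gradient signal reaches the network parameters. A small NumPy check of this (an analogue for illustration, not CNTK code):

```python
import numpy as np

def numerical_derivative(f, x, eps=1e-4):
    # central finite difference approximation of f'(x)
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# away from the .5 boundaries, the derivative of round() is exactly 0,
# so a loss that passes predictions through round() sends no signal back
x = np.array([0.2, 1.3, 2.7])
grads = numerical_derivative(np.round, x)
```

Here `grads` is all zeros, which is consistent with the loss plateauing after a couple of epochs.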

Every time you want to change how backpropagation works you need to use stop_gradient. It is the only function whose gradient is different from the gradient of the operation it performs in the forward pass. In the forward pass, stop_gradient acts as the identity. In the backward pass, it blocks the gradient from propagating.

To perform an operation f(x) on some x in the forward pass and pretend it never happened in the backward pass, you need to do something like: C.stop_gradient(f(x) - x) + x. In your case that would be

norm_features = C.stop_gradient(features/normalization - features) + features
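The forward/backward behaviour of this pattern (often called a straight-through estimator) can be sketched in plain NumPy, taking f(x) = round(x) as in the question. This is an illustrative analogue of `C.stop_gradient(f(x) - x) + x`, not CNTK code:

```python
import numpy as np

def round_straight_through_forward(x):
    # forward pass: stop_gradient is the identity, so
    # stop_gradient(round(x) - x) + x  ==  round(x)
    return (np.round(x) - x) + x

def round_straight_through_grad(upstream_grad):
    # backward pass: the stop_gradient term contributes no gradient,
    # so only the trailing "+ x" carries gradient, i.e. it passes
    # the upstream gradient through unchanged (as if f were identity)
    return upstream_grad
```

So the network sees the rounded values in the forward pass, yet still receives a usable (identity) gradient in the backward pass instead of the zero gradient of round().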
