
How do I add noise to the weights when calculating the loss with Keras?

I'm new to Keras, and I'm trying to customize the training step in Keras.

Questions:

  1. When customizing the training loop, how do I create a new variable weights_right = weights - (lr + alpha) * gradients in Keras?
  2. How do I feed-forward the NN with the weights as formal parameters? Can I customize the forward function in Keras as in the code below?

Background:

In the stochastic gradient descent algorithm, after feeding forward a mini-batch of data and obtaining the gradients on that mini-batch, I would like to perturb the weights and create a new variable called weights_right, where weights_right = weights - (lr + alpha) * gradients (alpha is a constant), and then feed-forward the NN with weights_right to obtain a new loss.

Part of the code in Python is as follows:

import random
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Network(object):
    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes
        # Biases are column vectors; weights are (y, x) matrices
        # connecting a layer of size x to a layer of size y.
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a, weights=None, biases=None):
        """Return the output of the network if ``a`` is input."""
        if weights is None:
            weights = self.weights
        if biases is None:
            biases = self.biases
        # !!! Note the output layer has no activation for regression.
        for b, w in zip(biases[:-1], weights[:-1]):
            a = sigmoid(np.dot(w, a) + b)
        a = np.dot(weights[-1], a) + biases[-1]
        return a

    # -----------------------------------
    # The following is the important one.
    # -----------------------------------
    def customSGD(self, training_data, training_labels, epochs,
                  mini_batch_size, lr, alpha):
        for epoch in range(epochs):
            random.shuffle(training_data)
            mini_batches = [training_data[k:k + mini_batch_size]
                            for k in range(0, len(training_data), mini_batch_size)]
            for mini_batch in mini_batches:
                gradients_on_mini_batch = get_gradients(mini_batch)
                # ---------------------------------------
                # The following two steps are what
                # I would like to achieve in Keras.
                # ---------------------------------------
                # Create a new variable called weights_right.
                weights_right = weights - (lr + alpha) * gradients_on_mini_batch

                # Feed the NN with weights_right; note that the params
                # in the current NN are still weights, not weights_right.
                pred_right = self.feedforward(training_data, weights_right)
                loss_right = loss_func(pred_right, training_labels)
                ......

                # update weights
                weights = weights - lr * gradients_on_mini_batch

The above code is mainly from Michael Nielsen's online book.

Any help would be appreciated. Thank you so much!

In a custom training loop, you can do whatever you like with the gradients and weights.

@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        logits = model(inputs)
        loss = loss_object(labels, logits)

    weights = model.trainable_variables
    # add manipulation of weights here
    gradients = tape.gradient(loss, weights)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, logits)
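For the specific perturbation in the question, one option is to temporarily assign weights_right to the model's variables, run a second forward pass, and restore the originals before the ordinary update. The following is a minimal sketch, not the only way to do it: it assumes the model, loss_object, and opt defined in the full example below, and the name train_step_perturbed and the alpha default are made up for illustration.

@tf.function
def train_step_perturbed(inputs, labels, alpha=0.01):
    with tf.GradientTape() as tape:
        logits = model(inputs)
        loss = loss_object(labels, logits)

    weights = model.trainable_variables
    gradients = tape.gradient(loss, weights)
    lr = opt.learning_rate  # assumption: read the optimizer's step size

    # Save the current values, then assign the perturbed ones:
    # weights_right = weights - (lr + alpha) * gradients
    originals = [tf.identity(w) for w in weights]
    for w, g in zip(weights, gradients):
        w.assign(w - (lr + alpha) * g)

    # Forward pass evaluated at weights_right.
    logits_right = model(inputs)
    loss_right = loss_object(labels, logits_right)

    # Restore the originals, then apply the ordinary update.
    for w, orig in zip(weights, originals):
        w.assign(orig)
    opt.apply_gradients(zip(gradients, weights))
    return loss, loss_right

Restoring the variables before apply_gradients matters: stateful optimizers such as Adam update the variables in place based on their current values.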

Here is a complete running example:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# Shuffle once; with the default reshuffle_each_iteration=True, the
# take/skip split below would mix train and test examples every epoch.
dataset = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(150, reshuffle_each_iteration=False)

train_dataset = dataset.take(120).batch(4)
test_dataset = dataset.skip(120).take(30).batch(4)


class DenseModel(Model):
    def __init__(self):
        super(DenseModel, self).__init__()
        self.dens1 = Dense(8, activation='elu')
        self.dens2 = Dense(16, activation='relu')
        self.dens3 = Dense(3)

    def call(self, inputs, training=None, **kwargs):
        x = self.dens1(inputs)
        x = self.dens2(x)
        x = self.dens3(x)
        return x


model = DenseModel()

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

train_loss = tf.keras.metrics.Mean()
test_loss = tf.keras.metrics.Mean()

train_acc = tf.keras.metrics.SparseCategoricalAccuracy()
test_acc = tf.keras.metrics.SparseCategoricalAccuracy()


opt = tf.keras.optimizers.Adam(learning_rate=1e-3)


@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        logits = model(inputs)
        loss = loss_object(labels, logits)

    weights = model.trainable_variables
    # add manipulation of weights here
    gradients = tape.gradient(loss, weights)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, logits)


@tf.function
def test_step(inputs, labels):
    logits = model(inputs)
    loss = loss_object(labels, logits)
    test_loss(loss)
    test_acc(labels, logits)


for epoch in range(10):
    template = 'Epoch {:>2} Train Loss {:.3f} Test Loss {:.3f} ' \
               'Train Acc {:.2f} Test Acc {:.2f}'

    train_loss.reset_states()
    test_loss.reset_states()
    train_acc.reset_states()
    test_acc.reset_states()

    for X_train, y_train in train_dataset:
        train_step(X_train, y_train)

    for X_test, y_test in test_dataset:
        test_step(X_test, y_test)

    print(template.format(
        epoch + 1,
        train_loss.result(),
        test_loss.result(),
        train_acc.result(),
        test_acc.result()
    ))
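As for question 2, feeding the network with the weights as formal parameters: Keras layers own their variables, so model(inputs) always uses the variables stored in the layers. A workaround is to write the forward pass by hand with tf.nn ops and pass the weight list explicitly, mirroring the feedforward(a, weights, biases) signature from the question. Below is a minimal sketch for the DenseModel above; the name forward_with is made up, and it assumes the variables arrive in the same order as model.trainable_variables (kernel and bias of each layer, in call order).

def forward_with(variables, x):
    # Expected order: [kernel1, bias1, kernel2, bias2, kernel3, bias3].
    k1, b1, k2, b2, k3, b3 = variables
    x = tf.cast(x, tf.float32)  # load_iris features are float64
    x = tf.nn.elu(tf.matmul(x, k1) + b1)
    x = tf.nn.relu(tf.matmul(x, k2) + b2)
    return tf.matmul(x, k3) + b3  # logits, no output activation

With this, the loss at the perturbed weights can be computed without touching the model's variables at all, e.g. loss_right = loss_object(labels, forward_with([w - (lr + alpha) * g for w, g in zip(weights, gradients)], inputs)).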
