
How do I manually set the weights of an LSTM layer in Tensorflow

I am following the training-loop guide on TensorFlow, where they set the weights with model.w = weights. I cannot do the same for an LSTM from tensorflow.keras.layers.LSTM, because it has no such attribute; the training code fails with 'LSTM' object has no attribute 'w'.
How do I set its weights and biases?

Here is a snippet of the Python traceback:

 File "C:\Users\vishw\Desktop\cse535a3a2\pystuff\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 486, in _fall_back_unconverted
    return _call_unconverted(f, args, kwargs, options)
  File "C:\Users\vishw\Desktop\cse535a3a2\pystuff\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 460, in _call_unconverted
    return f.__self__.call(args, kwargs)
  File "C:\Users\vishw\Desktop\cse535a3a2\pystuff\lib\site-packages\tensorflow\python\eager\function.py", line 3933, in call
    return wrapped_fn(self.weakrefself_target__(), *args, **kwargs)
  File "C:\Users\vishw\Desktop\cse535a2\learning.py", line 27, in networkTraining
    dw, db = t.gradient(loss, [self.__network.w, self.__network.b])## <--------------------------HERE

Here is my code, for your convenience:

class LSTMmodel(tf.Module):
    def __init__(self, arg_name=None):
        super().__init__(name=arg_name)
        self.__input = tf.Variable(initial_value=[0.0 for x in range(7)], dtype=tf.float32)
        # self.__input_reshaped = tf.reshape(self.__input, [1, 7, 1])
        self.__network = tf.keras.layers.LSTM(units=7, input_shape=(7,1))
        self.__output = tf.Variable(initial_value=[0.0 for x in range(7)], dtype=tf.float32)
        self.__output = tf.reshape(self.__output, [1, 7, 1])
    @tf.function
    def networkTraining(self, arg_data_train, arg_labels, arg_learning_rate):
        with tf.GradientTape() as t:
            print('loc 1')
            # self.__input = tf.Variable(arg_data_train)
            print('loc 2')
            self.__input_reshaped = tf.reshape(arg_labels, [len(arg_labels), 7, 1])
            self.__output = self.__network(self.__input_reshaped)
            print('loc 3')
            loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=arg_labels, logits=self.__output)
            print('loc 4')
            dw, db = t.gradient(loss, [self.__network.w, self.__network.b])
            print('loc 5')
        self.__network.w.assign_sub(arg_learning_rate * dw)
        self.__network.b.assign_sub(arg_learning_rate * db)

    @tf.function
    def __call__(self, arg_input=[0 for x in range(7)]):
        self.__input = tf.Variable(arg_input)
        self.__output = self.__network(self.__input)
        return self.__output

Since TensorFlow and Keras work together, you can manually set the weights of an LSTM layer with the set_weights method. Also, I realized that my question is different from this question.
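A minimal sketch of that approach, assuming the standard (non-CuDNN) LSTM variable layout of [kernel, recurrent_kernel, bias]; the inputs x, y and the learning rate are made-up stand-ins shaped like the question's data, and the gradient step uses layer.trainable_variables in place of the nonexistent .w / .b attributes:

```python
import numpy as np
import tensorflow as tf

# A layer shaped like the one in the question: units=7, sequences of
# length 7 with a single feature per time step.
layer = tf.keras.layers.LSTM(units=7)
layer.build((None, 7, 1))  # create the variables without a forward pass

# get_weights() returns [kernel, recurrent_kernel, bias] as numpy arrays;
# with units=7 and input dim 1 their shapes are (1, 28), (7, 28), (28,).
kernel, recurrent_kernel, bias = layer.get_weights()

# set_weights() expects arrays of exactly the same shapes, in the same order.
layer.set_weights([
    np.zeros_like(kernel),
    np.zeros_like(recurrent_kernel),
    np.zeros_like(bias),
])

# For a custom training loop, differentiate with respect to
# layer.trainable_variables rather than .w / .b:
x = tf.random.normal([4, 7, 1])  # hypothetical batch of 4 sequences
y = tf.zeros([4, 7])             # hypothetical labels
learning_rate = 0.01
with tf.GradientTape() as tape:
    logits = layer(x)  # shape (4, 7): batch x units
    loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
grads = tape.gradient(loss, layer.trainable_variables)
for var, grad in zip(layer.trainable_variables, grads):
    var.assign_sub(learning_rate * grad)
```

Note that the three arrays pack the input, forget, cell, and output gate parameters side by side, which is why their last dimension is 4 * units.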

