
TensorFlow 2.0 GradientTape returns None as gradients for Manual Models

I am trying to build a logistic regression model by hand, but GradientTape returns NoneType gradients.

class LogisticRegressionTF:
    def __init__(self,dim):
        #dim = X_train.shape[0]
        tf.random.set_seed(1)
        weight_init = tf.initializers.VarianceScaling(scale=1.0, mode="fan_avg", distribution="uniform", seed=1)
        zeros_init = tf.zeros_initializer()
        self.W = tf.Variable(zeros_init([dim,1]), trainable=True, name="W")
        self.b = tf.Variable(zeros_init([1]), trainable=True, name="b")

    def sigmoid(self,z):
        x = tf.Variable(z, trainable=True,dtype=tf.float32, name='x')
        sigmoid = tf.sigmoid(x)
        result = sigmoid
        return result

    def predict(self, x):
        x = tf.cast(x, dtype=tf.float32)
        h = tf.sigmoid(tf.add(tf.matmul(tf.transpose(self.W), x), self.b))
        return h

    def loss(self,logits, labels):
        z = tf.Variable(logits, trainable=False,dtype=tf.float32, name='z')
        y = tf.Variable(labels, trainable=False,dtype=tf.float32, name='y')
        m = tf.cast(tf.size(z), dtype=tf.float32)
        cost = tf.divide(tf.reduce_sum(y*tf.math.log(z) + (1-y)*tf.math.log(1-z)),-m)
        return cost

    def fit(self,X_train, Y_train, lr_rate = 0.01, epochs = 1000):
        costs=[]
        optimizer = tf.optimizers.SGD(learning_rate=lr_rate)

        for i in range(epochs):
            current_loss = self.loss(self.predict(X_train), Y_train)
            print(current_loss)
            with tf.GradientTape() as t:
                t.watch([self.W, self.b])
                currt_loss = self.loss(self.predict(X_train), Y_train)
                print(currt_loss)
            grads = t.gradient(currt_loss, [self.W, self.b])
            print(grads)
            #optimizer.apply_gradients(zip(grads,[self.W, self.b]))
            self.W.assign_sub(lr_rate * grads[0])
            self.b.assign_sub(lr_rate * grads[1])
            if(i %100 == 0):
                print('Epoch %2d: , loss=%2.5f' %(i, current_loss))
            costs.append(current_loss)

        plt.plot(costs)
        plt.ylim(0,50)
        plt.ylabel('Cost J')
        plt.xlabel('Iterations')

log_reg = LogisticRegressionTF(train_set_x.shape[0])
log_reg.fit(train_set_x, train_set_y)

This raises a TypeError, because the gradients come back as None:

tf.Tensor(0.6931474, shape=(), dtype=float32)
tf.Tensor(0.6931474, shape=(), dtype=float32)
[None, None]

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-192-024668d532b0> in <module>()
      1 log_reg = LogisticRegressionTF(train_set_x.shape[0])
----> 2 log_reg.fit(train_set_x, train_set_y)

<ipython-input-191-4fef932eb231> in fit(self, X_train, Y_train, lr_rate, epochs)
     40             print(grads)
     41             #optimizer.apply_gradients(zip(grads,[self.W, self.b]))
---> 42             self.W.assign_sub(lr_rate * grads[0])
     43             self.b.assign_sub(lr_rate * grads[1])
     44             if(i %100 == 0):

TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'

My hypothesis function is tf.sigmoid(tf.add(tf.matmul(tf.transpose(self.W), x), self.b)).

I defined the cost function manually as tf.divide(tf.reduce_sum(y*tf.math.log(z) + (1-y)*tf.math.log(1-z)), -m), where m is the number of training examples.

I have verified that it returns the loss, e.g. tf.Tensor(0.6931474, shape=(), dtype=float32).

I also added a t.watch(), but nothing changed; it still returns [None, None].

train_set_y.dtype is dtype('int64')

train_set_x.dtype is dtype('float64')

train_set_x.shape is (12288, 209)

train_set_y.shape is (1, 209)

type(train_set_x) is numpy.ndarray

Where am I going wrong?

Thanks

In my environment, TensorFlow is running eagerly, i.e. with Eager Execution enabled. We can check this with tf.executing_eagerly(), which returns True when eager execution is enabled.
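
For example, a quick check in a fresh session (the printed values are what I see locally):

import tensorflow as tf

print(tf.__version__)           # e.g. 2.x
print(tf.executing_eagerly())   # True - eager execution is on by default in TF 2.x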

The problem lies in the loss(self, logits, labels) function:

logits should not be wrapped in tf.Variable(...).

It should be changed to z = logits, so that logits is treated as a Tensor object rather than a tf.Variable object.
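
The reason is that tf.Variable(logits) copies the predicted values into a brand-new variable, so the tape no longer sees any differentiable path from self.W and self.b to the loss. Here is a minimal sketch of the effect (w, x, loss_bad and loss_good are illustrative names, not from the code above):

import tensorflow as tf

w = tf.Variable(2.0)
x = tf.constant(3.0)

with tf.GradientTape(persistent=True) as tape:
    y = w * x                 # recorded on the tape, depends on w
    y_var = tf.Variable(y)    # copies the value; the link back to w is lost
    loss_bad = y_var ** 2
    loss_good = y ** 2

print(tape.gradient(loss_bad, w))   # None - no differentiable path from w
print(tape.gradient(loss_good, w))  # tf.Tensor(36.0, ...) since d/dw (w*x)^2 = 2*w*x^2 = 36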

I also replaced tf.divide with plain eager-style arithmetic (although this is not required).

Before:

    def loss(self,logits, labels):
        z = tf.Variable(logits, trainable=False,dtype=tf.float32, name='z')
        y = tf.Variable(labels, trainable=False,dtype=tf.float32, name='y')
        m = tf.cast(tf.size(z), dtype=tf.float32)
        cost = tf.divide(tf.reduce_sum(y*tf.math.log(z) + (1-y)*tf.math.log(1-z)),-m)
        return cost

After:

    def loss(self,logits, labels):
        z = logits
        y = tf.constant(labels,dtype=tf.float32, name='y')
        m = tf.cast(tf.size(z), dtype=tf.float32)
        cost = (-1/m)*tf.reduce_sum(y*tf.math.log(z) + (1-y)*tf.math.log(1-z))
        return cost
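
With this change the tape can trace from self.W and self.b all the way to the loss, so t.gradient(currt_loss, [self.W, self.b]) returns real tensors instead of [None, None]. A quick sanity check, using LogisticRegressionTF with the fixed loss above (the random arrays below are only stand-ins with the same shapes as train_set_x and train_set_y):

import numpy as np
import tensorflow as tf

X = np.random.rand(12288, 209)          # stand-in for train_set_x
Y = np.random.randint(0, 2, (1, 209))   # stand-in for train_set_y

model = LogisticRegressionTF(X.shape[0])
with tf.GradientTape() as t:
    current_loss = model.loss(model.predict(X), Y)
grads = t.gradient(current_loss, [model.W, model.b])
print([g.shape for g in grads])         # shapes match W and b - no more None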
