
Python, Tensorflow ValueError: No gradients provided for any variable

I have a class called RL_Brain:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.losses import mean_squared_error

class RL_Brain():
    def __init__(self, n_features, n_action, memory_size=10, batch_size=32, gamma=0.9, fi_size=10):
        self.n_features = n_features
        self.n_actions = n_action
        self.fi_size = fi_size  # used below and in learn(); omitted from the original snippet
        self.opt = keras.optimizers.Adam()  # optimizer referenced in learn(); Adam assumed here

        # Encoder: maps an n_features state to a softmax over fi_size discrete codes.
        self.encoder = keras.Sequential([
            Input((self.n_features,)),
            Dense(16, activation='relu', kernel_initializer='glorot_normal', name='encoder_1'),
            Dense(16, activation='relu', kernel_initializer='glorot_normal', name='encoder_2'),
            Dense(16, activation='relu', kernel_initializer='glorot_normal', name='encoder_3'),
            Dense(self.fi_size, activation='softmax', name='fi'),
        ])

        # Decoder: reconstructs the state from the one-hot encoded representation.
        self.decoder = keras.Sequential([
            Input((self.fi_size,)),
            Dense(16, activation='relu', name='decoder_1', trainable=True),
            Dense(16, activation='relu', name='decoder_2', trainable=True),
            Dense(16, activation='relu', name='decoder_3', trainable=True),
            Dense(self.n_features, activation=None, name='decoder_output', trainable=True)
        ])

    def learn(self, state, r, a, state_):
        encoded = tf.one_hot(tf.argmax(self.encoder(state), axis=1), depth=self.fi_size)
        encoded_ = tf.one_hot(tf.argmax(self.encoder(state_), axis=1), depth=self.fi_size)
        decoded_state = self.decoder(encoded).numpy()
        with tf.GradientTape() as tape:
            loss1 = mean_squared_error(state, decoded_state)
        grads = tape.gradient(loss1, self.decoder.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.decoder.trainable_variables))

When I run the learn function, I get the following error:

File "/Users/wangheng/app/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/utils.py", line 78, in filter_empty_gradients raise ValueError("No gradients provided for any variable: %s." % ...

ValueError: No gradients provided for any variable: ['decoder_1/kernel:0', 'decoder_1/bias:0', 'decoder_2/kernel:0', 'decoder_2/bias:0', 'decoder_3/kernel:0', 'decoder_3/bias:0', 'decoder_output/kernel:0', 'decoder_output/bias:0'].

The following line is causing that error:

decoded_state = self.decoder(encoded).numpy()

Calling .numpy() converts the decoder's output into a plain NumPy array, which detaches it from TensorFlow's computation graph. Once you do that, there is no path from your loss function back to your trainable variables, so no gradient can be calculated. There is a second problem as well: the decoder is called before the GradientTape context is entered, and the tape only records operations executed inside its with-block, so the forward pass and the loss computation both need to move inside the tape.
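A minimal corrected sketch of learn, keeping your original structure (the optimizer self.opt and its choice of Adam are assumptions, since the snippet does not show where the optimizer is created):

def learn(self, state, r, a, state_):
    # argmax and one_hot are non-differentiable, so encoded acts as a
    # constant input here: gradients can reach the decoder, but never the encoder.
    encoded = tf.one_hot(tf.argmax(self.encoder(state), axis=1), depth=self.fi_size)
    with tf.GradientTape() as tape:
        # Run the forward pass inside the tape and keep it as a tensor
        # (no .numpy()), so the tape can trace the loss back to the
        # decoder's trainable weights.
        decoded_state = self.decoder(encoded)
        loss1 = tf.reduce_mean(mean_squared_error(state, decoded_state))
    grads = tape.gradient(loss1, self.decoder.trainable_variables)
    self.opt.apply_gradients(zip(grads, self.decoder.trainable_variables))

If you also want gradients to flow into the encoder, the hard argmax/one_hot step itself has to be replaced with a differentiable alternative, for example feeding the encoder's softmax output directly into the decoder.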
