keras.optimizers.Adam.apply_gradients(...) fails

I've been struggling for several days now trying to get a DDPG reinforcement learning agent running on a Raspberry Pi.

The critic part of the model trains fine, but the actor part won't update. The code computes some gradients and then runs the following line:

self.actor_opt.apply_gradients(zip(da_dtheta, self.actor_model.trainable_variables))

Unfortunately this line appears to do nothing: the weights of actor_model are not updated. I verified that the gradients contain values that look sensible.
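For reference, the surrounding update step looks roughly like this. This is a minimal, self-contained sketch with placeholder networks and random states standing in for the real actor_model, critic_model, and replay-buffer batch; the loss follows the usual DDPG policy-gradient form:

import tensorflow as tf

state_dim, action_dim = 3, 1
actor_model = tf.keras.Sequential(
    [tf.keras.layers.Dense(action_dim, input_shape=(state_dim,))])
critic_model = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, input_shape=(state_dim + action_dim,))])
actor_opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

states = tf.random.normal((8, state_dim))
with tf.GradientTape() as tape:
    actions = actor_model(states)                             # a = mu(s)
    q_values = critic_model(tf.concat([states, actions], 1))  # Q(s, mu(s))
    actor_loss = -tf.reduce_mean(q_values)                    # maximize Q
da_dtheta = tape.gradient(actor_loss, actor_model.trainable_variables)

# In eager mode this call mutates the weights immediately
actor_opt.apply_gradients(zip(da_dtheta, actor_model.trainable_variables))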

I noticed this function actually returns an operation, so I tried to assign it to a variable and run it:

grad_op = self.actor_opt.apply_gradients(zip(da_dtheta, self.actor_model.trainable_variables))
grad_op.run()

This doesn't work either, giving me the following cryptic error:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable Adam/iter from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/Adam/iter/N10tensorflow3VarE does not exist
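This particular FailedPreconditionError is a graph-mode symptom: the op was run before the optimizer's internal variables (Adam/iter and the per-variable slots) were ever initialized. A sketch of what actually running grad_op in TF 1.x graph mode would have required (dropping the self. prefixes for a standalone example; under TF 2.x these names live in tf.compat.v1):

import tensorflow as tf

# apply_gradients only *builds* an op here; the optimizer's bookkeeping
# variables such as Adam/iter hold no value until an initializer has run
grad_op = actor_opt.apply_gradients(
    zip(da_dtheta, actor_model.trainable_variables))

with tf.Session() as sess:
    # Initializes the model weights and the optimizer slots; skipping this
    # step is exactly what raises the FailedPreconditionError above
    sess.run(tf.global_variables_initializer())
    sess.run(grad_op)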

The issue was that eager execution was disabled. It turns out I was running TensorFlow 1.4 rather than TensorFlow 2.0 on my Linux machine, and in TensorFlow 1.x eager execution is disabled by default. Without eager execution, apply_gradients doesn't update anything by itself: it only builds a graph operation that has to be run inside a session after the optimizer's variables have been initialized, which is why the call silently left the weights unchanged.
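To confirm the behavior, here is a minimal, self-contained eager-mode check (TF 2.x, where eager execution is on by default); the tiny model is illustrative, not the original actor network:

import tensorflow as tf

assert tf.executing_eagerly()  # True by default in TF 2.x

# Build a toy model and compute gradients of a dummy loss
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

x = tf.random.normal((4, 3))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x)))
grads = tape.gradient(loss, model.trainable_variables)

before = model.trainable_variables[0].numpy().copy()
opt.apply_gradients(zip(grads, model.trainable_variables))  # runs immediately in eager mode
after = model.trainable_variables[0].numpy()

print("weights changed:", (before != after).any())  # expect True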
