
tf and tf.keras Dense layers show completely different behavior in my setup

While using TensorFlow 1.14, I noticed some very strange behavior when comparing tf.layers.Dense with tf.keras.layers.Dense. People on Stack Overflow say that these two layers are exactly the same, and I would basically agree, but plotting the discounted reward while training an actor-critic (AC) agent yields the following graph:

[Figure: discounted reward over training, tf vs tf.keras]

The arguments are exactly the same, and repeated runs reproduce the result (see the differently colored curves in the image). As far as I understand the code, one of the Dense layers inherits from the other: tf.keras.layers.core and tf.layers.core.
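You can inspect that inheritance relationship directly. A minimal sketch, assuming TensorFlow 2.x, where the old tf.layers namespace lives on under tf.compat.v1.layers:

```python
import tensorflow as tf

# In the TF 1.x code base, tf.layers.Dense is defined as a subclass of the
# Keras Dense layer, so the forward computation is identical; any observed
# difference comes from variable scoping/reuse, not from the math.
v1_dense = tf.compat.v1.layers.Dense  # tf.layers.Dense in TF 1.14
print(v1_dense.__mro__)  # inspect the inheritance chain
print(issubclass(v1_dense, tf.keras.layers.Dense))
```

Depending on the installed TensorFlow/Keras versions the Keras class may live in a different module, so the subclass check can report False even where the two implementations share code; the method resolution order printed above shows the actual chain.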

Is anyone able to explain this behavior?

According to a response to a similar issue on the stable-baselines repository, Keras layers do not support sharing weights between multiple agents: each new tf.keras.layers.Dense instance creates its own variables. When training an actor-critic network with multiple environment instances, every environment therefore gets its own network, which leads to completely different results. The fix is to use the plain tensorflow layers directly, which support reuse of the same weights through variable scopes.
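The difference between the two sharing mechanisms can be sketched as follows (assuming TensorFlow 2.x; the scope names and layer sizes are illustrative only):

```python
import tensorflow as tf

v1 = tf.compat.v1

# tf.layers ties variables to the variable-scope name, so a second call
# with reuse=True reads the SAME kernel/bias as the first call.
graph = tf.Graph()
with graph.as_default():
    x = v1.placeholder(tf.float32, [None, 8])
    with v1.variable_scope("shared"):
        actor_head = v1.layers.dense(x, 32, name="fc")
    with v1.variable_scope("shared", reuse=True):
        critic_head = v1.layers.dense(x, 32, name="fc")
    shared = v1.get_collection(v1.GraphKeys.TRAINABLE_VARIABLES, scope="shared")
print(len(shared))  # 2: one kernel + one bias, shared by both calls

# tf.keras: every new Dense object creates fresh weights; weights are
# shared only when the SAME layer instance is called again.
inp = tf.zeros([1, 8])
fc1 = tf.keras.layers.Dense(32)
fc2 = tf.keras.layers.Dense(32)
fc1(inp); fc2(inp)
print(len(fc1.trainable_weights) + len(fc2.trainable_weights))  # 4: two separate layers

shared_fc = tf.keras.layers.Dense(32)
shared_fc(inp); shared_fc(inp)  # second call reuses the existing weights
print(len(shared_fc.trainable_weights))  # still 2
```

So if each environment builds its own tf.keras.layers.Dense objects, the agents train independent networks; with tf.layers and reuse=True (or by passing the same Keras layer objects around), they update one set of weights.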

