
Allocating a GPU fraction in eager execution

Basically, I'm running a reinforcement learning model in eager mode, and I need to limit the amount of memory that each process claims from the GPU. In the graph API, this could be achieved by modifying a tf.ConfigProto() object and creating a session with that config object.

However, in the eager API there is no session. My question then is: how can I manage GPU memory in this case?

tf.enable_eager_execution() accepts a config argument, whose value is the same ConfigProto message.

So, you should be able to set the same options per-process using that.

Hope that helps.
