
How do I calculate MaxQ in Q-learning?

I'm writing an implementation of Q-learning, specifically the Bellman equation.

I'm using the version from a website that walks you through the problem, but I have a question: for maxQ, do I calculate the max reward using all Q-table values of the new state (s') - in my case four possible actions (a'), each with its respective value - or the sum of the Q-table values of all the positions reachable when taking an action (a')?

In other words, do I use the highest Q-value of all the possible actions I can take, or the summed Q-values of all the "neighbouring" squares?

You always use the maximum Q-value over all the possible actions you can take.

The idea is to evaluate the next state (s') by the biggest (best) Q-value among its actions, so that the update keeps tracking the optimal policy π*.
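A minimal sketch of this update in Python, assuming a hypothetical grid world with 16 states and 4 actions, and made-up values for the learning rate and discount factor. The key line is the `np.max` over the next state's row of the Q-table - a max over actions, not a sum over neighbouring squares:

```python
import numpy as np

# Hypothetical grid world: 16 states, 4 actions (up/down/left/right).
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))

alpha = 0.1   # learning rate (assumed value)
gamma = 0.9   # discount factor (assumed value)

def q_update(s, a, reward, s_next):
    """One Q-learning (Bellman) update step.

    max_q is the MAXIMUM Q-value over all actions a' in the next
    state s', not the sum of the neighbouring Q-values.
    """
    max_q = np.max(Q[s_next])  # max over the 4 possible actions a'
    Q[s, a] += alpha * (reward + gamma * max_q - Q[s, a])

# Example: from state 0, taking action 2 leads to state 1 with reward 1.
q_update(0, 2, reward=1.0, s_next=1)
```

Summing the Q-values of the neighbouring squares would instead resemble an expectation over actions, which belongs to other algorithms (e.g. Expected SARSA with a uniform policy), not to Q-learning.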
