
FrozenLake Q-Learning Update Issue

I am learning Q-learning and trying to build a Q-learner for the FrozenLake-v0 problem in OpenAI Gym. Since the problem has only 16 states and 4 possible actions, it should be fairly easy, but it looks like my algorithm is not updating the Q-table correctly.

Here is my Q-learning algorithm:

import gym
import numpy as np
import random as rand  # used for epsilon-greedy exploration below
from gym import wrappers


def run(
    env,
    Qtable,
    N_STEPS=10000,
    alpha=0.2,  # learning rate; (1 - alpha) weights the old Q-value
    rar=0.4,  # random exploration rate
    radr=0.97  # decay rate
):

    # Initialize parameters:
    TOTAL_REWARD = 0
    done = False
    action = env.action_space.sample()
    state = env.reset()

    for _ in range(N_STEPS):
        if done:
            print('TW', TOTAL_REWARD)
            break

        s_prime, reward, done, info = env.step(action)
        # Update Q Table:
        Qtable[state, action] = (1 - alpha) * Qtable[state, action] \
            + alpha * (reward + np.max(Qtable[s_prime, :]))

        # Prepare for the next step:
        # Next New Action:
        if rand.uniform(0, 1) < rar:
            action = env.action_space.sample()
        else:
            action = np.argmax(Qtable[s_prime, :])

        # Update new state:
        state = s_prime
        # Update Decay:
        rar *= radr
        # Update Stats
        TOTAL_REWARD += reward
        if reward > 0:
            print(reward)

    return Qtable, TOTAL_REWARD
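
For comparison, the textbook tabular Q-learning update discounts the bootstrapped future value by a factor gamma; my update line above has no such factor, which effectively sets gamma = 1. A minimal sketch of the discounted update, with a hypothetical gamma parameter (e.g. 0.99) that is not in my code:

gamma = 0.99  # hypothetical discount factor, not present in my code above
Qtable[state, action] = (1 - alpha) * Qtable[state, action] \
    + alpha * (reward + gamma * np.max(Qtable[s_prime, :]))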

Then I run the Q-learner for 1000 iterations:

if __name__ == "__main__":
    # Required Pars:
    N_ITER = 1000
    REWARDS = []
    # Setup the Maze:
    env = gym.make('FrozenLake-v0')

    # Initialize Qtable:
    num_actions = env.unwrapped.nA
    num_states = env.unwrapped.nS
    # Qtable = np.random.uniform(0, 1, size=num_states * num_actions).reshape((num_states, num_actions))
    Qtable = np.zeros((env.observation_space.n, env.action_space.n))

    for _ in range(N_ITER):
        res = run(env, Qtable)
        Qtable = res[0]
        REWARDS.append(res[1])
    print(np.mean(REWARDS))
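
One thing I noticed while writing this up: rar is reset to 0.4 on every call to run, so the random exploration rate only decays within a single episode and starts fresh on each of the 1000 iterations. A hypothetical restructuring that carries the exploration rate across episodes (the per-step decay inside run would have to be removed):

rar = 0.4
for _ in range(N_ITER):
    Qtable, episode_reward = run(env, Qtable, rar=rar)
    rar *= 0.97  # decay exploration once per episode instead of per step
    REWARDS.append(episode_reward)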

Any suggestions would be greatly appreciated!

