
OpenAI Gym - Maze - Using Q-learning - "ValueError: dir cannot be 0. The only valid dirs are dict_keys(['N', 'E', 'S', 'W'])."

I am trying to train an agent to solve a maze using Q-learning.
I created the environment with:

import gym
import gym_maze 
import numpy as np

env = gym.make("maze-v0")

Since the state is given as [x, y] coordinates and I want a 2D Q-learning table, I created a dictionary that maps each state to a value:

states_dic = {}
count = 0
for i in range(5):
    for j in range(5):
        states_dic[i, j] = count
        count+=1
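
For reference, this mapping just flattens the 5x5 grid row by row, so a state's index is x * 5 + y. A quick sanity check (assuming the 5x5 maze size used above):

print(states_dic[2, 3])   # 13, i.e. 2 * 5 + 3
print(len(states_dic))    # 25 states in total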

Then I created the Q-table:

n_actions = env.action_space.n

#Initialize the Q-table to 0
Q_table = np.zeros((len(states_dic),n_actions))
print(Q_table)

[[0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]]

Some variables:

# number of episodes we will run
n_episodes = 10000
# maximum number of iterations per episode
max_iter_episode = 100
# initialize the exploration probability to 1
exploration_proba = 1
# exploration probability decay rate for exponential decrease
exploration_decreasing_decay = 0.001
# minimum exploration probability
min_exploration_proba = 0.01
# discount factor
gamma = 0.99
# learning rate
lr = 0.1

rewards_per_episode = list()
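
With these values the exploration probability follows exp(-0.001 * e), clipped below at 0.01, so it stays close to 1 for the first few hundred episodes and reaches the 0.01 floor after roughly 4,600 episodes. A small sketch of the schedule, reusing the names defined above:

for episode in [0, 1000, 2500, 5000, 10000]:
    proba = max(min_exploration_proba, np.exp(-exploration_decreasing_decay * episode))
    print(episode, round(proba, 3))
# prints: 0 1.0, 1000 0.368, 2500 0.082, 5000 0.01, 10000 0.01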

But when I try to run the Q-learning algorithm, I get the error in the title.

#we iterate over episodes
for e in range(n_episodes):
    #we initialize the first state of the episode
    current_state = env.reset()
    done = False
    
    #sum the rewards that the agent gets from the environment
    total_episode_reward = 0

    for i in range(max_iter_episode): 
        if np.random.uniform(0,1) < exploration_proba:
            action = env.action_space.sample()
        else:
            action = np.argmax(Q_table[current_state,:])
            
        next_state, reward, done, _ = env.step(action)

        current_coordinate_x = int(current_state[0])
        current_coordinate_y = int(current_state[1])

        next_coordinate_x = int(next_state[0])
        next_coordinate_y = int(next_state[1])


        # update Q-table using the Q-learning iteration    
        current_Q_table_coordinates = states_dic[current_coordinate_x, current_coordinate_y]
        next_Q_table_coordinates = states_dic[next_coordinate_x, next_coordinate_y]
        
        Q_table[current_Q_table_coordinates, action] = (1-lr) *Q_table[current_Q_table_coordinates, action] +lr*(reward + gamma*max(Q_table[next_Q_table_coordinates,:]))
    
        total_episode_reward = total_episode_reward + reward
        # If the episode is finished, we leave the for loop
        if done:
            break
        current_state = next_state
    #We update the exploration proba using exponential decay formula 
    exploration_proba = max(min_exploration_proba,\
                            np.exp(-exploration_decreasing_decay*e))
    rewards_per_episode.append(total_episode_reward)

Update:
Sharing the full error traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-11-74e6fe3c1212> in <module>()
     25         # The environment runs the chosen action and returns
     26         # the next state, a reward and true if the epiosed is ended.
---> 27         next_state, reward, done, _ = env.step(action)
     28 
     29         ####    ####    ####    ####

/Users/x/anaconda3/envs/y/lib/python3.6/site-packages/gym/wrappers/time_limit.py in step(self, action)
     14     def step(self, action):
     15         assert self._elapsed_steps is not None, "Cannot call env.step() before calling reset()"
---> 16         observation, reward, done, info = self.env.step(action)
     17         self._elapsed_steps += 1
     18         if self._elapsed_steps >= self._max_episode_steps:

/Users/x/anaconda3/envs/y/lib/python3.6/site-packages/gym_maze-0.4-py3.6.egg/gym_maze/envs/maze_env.py in step(self, action)
     75             self.maze_view.move_robot(self.ACTION[action])
     76         else:
---> 77             self.maze_view.move_robot(action)
     78 
     79         if np.array_equal(self.maze_view.robot, self.maze_view.goal):

/Users/x/anaconda3/envs/y/lib/python3.6/site-packages/gym_maze-0.4-py3.6.egg/gym_maze/envs/maze_view_2d.py in move_robot(self, dir)
     93         if dir not in self.__maze.COMPASS.keys():
     94             raise ValueError("dir cannot be %s. The only valid dirs are %s."
---> 95                              % (str(dir), str(self.__maze.COMPASS.keys())))
     96 
     97         if self.__maze.is_open(self.__robot, dir):

ValueError: dir cannot be 1. The only valid dirs are dict_keys(['N', 'E', 'S', 'W']).

Second update: fixed thanks to some debugging by @Alexander L. Hayes.

#we iterate over episodes
for e in range(n_episodes):
    #we initialize the first state of the episode
    current_state = env.reset()
    done = False
    
    #sum the rewards that the agent gets from the environment
    total_episode_reward = 0

    for i in range(max_iter_episode): 
        current_coordinate_x = int(current_state[0])
        current_coordinate_y = int(current_state[1])
        current_Q_table_coordinates = states_dic[current_coordinate_x, current_coordinate_y]

        if np.random.uniform(0,1) < exploration_proba:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q_table[current_Q_table_coordinates]))


        next_state, reward, done, _ = env.step(action)

        next_coordinate_x = int(next_state[0])
        next_coordinate_y = int(next_state[1])


        # update our Q-table using the Q-learning iteration
        next_Q_table_coordinates = states_dic[next_coordinate_x, next_coordinate_y]
        
        Q_table[current_Q_table_coordinates, action] = (1-lr) *Q_table[current_Q_table_coordinates, action] +lr*(reward + gamma*max(Q_table[next_Q_table_coordinates,:]))
    
        total_episode_reward = total_episode_reward + reward
        # If the episode is finished, we leave the for loop
        if done:
            break
        current_state = next_state
    #We update the exploration proba using exponential decay formula 
    exploration_proba = max(min_exploration_proba,\
                            np.exp(-exploration_decreasing_decay*e))
    rewards_per_episode.append(total_episode_reward)


    

First guess (related, but not the answer):

In gym environments (e.g. FrozenLake), discrete actions are usually encoded as integers.

It looks like the error is caused by this environment's non-standard way of representing actions.

I've annotated what I assume the types might be when the action variable is set:

if np.random.uniform(0,1) < exploration_proba:
    # Is this a string?
    action = env.action_space.sample()
else:
    # np.argmax returns an int
    action = np.argmax(Q_table[current_state,:])
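
One quick way to check that assumption is to inspect the action space and what sample() actually returns (a small probe against the same maze-v0 environment):

print(env.action_space)                 # the declared action space
sample = env.action_space.sample()
print(sample, type(sample))             # what the exploration branch passes to env.step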

Replacing the else branch with something like this might work:

_action_map = {0: "N", 1: "E", 2: "S", 3: "W"}

action = _action_map[np.argmax(Q_table[current_state,:])]

Second guess (not even close, but useful for context):

It looks like this is built on the MattChanTK/gym-maze repository.


Third guess (very close):

I've narrowed the problem down to how actions are selected from the Q function. Here is a modified version where I added a breakpoint:

for e in range(n_episodes):
    current_state = env.reset()
    done = False
    total_episode_reward = 0

    for i in range(max_iter_episode):
        if np.random.uniform(0,1) < exploration_proba:
            action = env.action_space.sample()
        else:
            print("From Q_table:")
            action = np.argmax(Q_table[current_state,:])
            import pdb; pdb.set_trace()

Solution (credit to @Penguin, who figured it out ☺️):

Convert current_state to Q-table coordinates, and cast the result of np.argmax to int:

for i in range(max_iter_episode): 
    current_coordinate_x = int(current_state[0])
    current_coordinate_y = int(current_state[1])
    current_Q_table_coordinates = states_dic[current_coordinate_x, current_coordinate_y]

    if np.random.uniform(0,1) < exploration_proba:
        action = env.action_space.sample()
    else:
        action = int(np.argmax(Q_table[current_Q_table_coordinates]))
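
Why the int() cast matters (my reading of the traceback, so treat the step() detail as an assumption): np.argmax returns a NumPy integer such as np.int64, which is not an instance of Python's int, and maze_env.step appears to translate an action into a compass direction only when it gets a plain int; anything else is handed straight to move_robot, which then raises the ValueError. Fixing the indexing matters too: with a coordinate pair, Q_table[current_state, :] selects two rows at once, so np.argmax over that 2x4 slice can return values up to 7. A minimal illustration of the type issue:

import numpy as np

a = np.argmax(np.zeros(4))
print(type(a), isinstance(a, int))            # a NumPy integer, not a Python int
print(type(int(a)), isinstance(int(a), int))  # a plain int, which (presumably) env.step maps to 'N'/'E'/'S'/'W'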
