
Reinforcement learning algorithm using turtle graphics not functioning

I'm currently trying to implement a Q-table algorithm in an environment created with turtle graphics. When I try to run the Q-learning algorithm, I get an error stating:

  File "<ipython-input-1-cf5669494f75>", line 304, in <module>
    rl()

  File "<ipython-input-1-cf5669494f75>", line 282, in rl
    A = choose_action(S, q_table)

  File "<ipython-input-1-cf5669494f75>", line 162, in choose_action
    state_actions = q_table.iloc[state, :]

  File "/Users/himansuodedra/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1367, in __getitem__
    return self._getitem_tuple(key)

  File "/Users/himansuodedra/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1737, in _getitem_tuple
    self._has_valid_tuple(tup)

  File "/Users/himansuodedra/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 204, in _has_valid_tuple
    if not self._has_valid_type(k, i):

  File "/Users/himansuodedra/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1674, in _has_valid_type
    return self._is_valid_list_like(key, axis)

  File "/Users/himansuodedra/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1723, in _is_valid_list_like
    raise IndexingError('Too many indexers')

IndexingError: Too many indexers

I cannot seem to pinpoint the problem; the logic looks fine to me. I am able to build the environment, but after that the script gets stuck and I am forced to terminate it. Any help would be great. The code is below:

"""
Reinforcement Learning using table lookup Q-learning method.
An agent "Blue circle" is positioned in a grid and must make its way to the 
green square. This is the end goal. Each time the agent should improve its 
strategy to reach the final Square. There are two traps the red and the wall 
which will reset the agent. 
"""
import turtle
import pandas as pd
import numpy as np
import time

np.random.seed(2)

""" Setting Parameters """

#N_STATES = 12   # the size of the 2D world
ACTIONS = ['left', 'right', 'down','up']     # available actions
EPSILON = 0.9   # greedy policy (randomness factor)
ALPHA = 0.1     # learning rate 
GAMMA = 0.9    # discount factor
MAX_EPISODES = 13   # maximum episodes
FRESH_TIME = 0.3    # fresh time for one move


def isGoal():
    if player.xcor() == -25 and player.ycor() == 225:
        player.goto(-175,125)
        status_func(1)
        S_ = 'terminal'
        R = 1
        interaction = 'Episode %s: total_steps = %s' %(episode+1, step_counter)
        print('\r{}'.format(interaction), end='')
        time.sleep(2)
        print('\r', end='')
        return S_, R
    else:
        pass


def isFire():
    if player.xcor() == -25 and player.ycor() == 175:
        player.goto(-175,125)
        status_func(3)
        S_ = 'terminal'
        R = -1
        interaction = 'Episode %s: total_steps = %s' %(episode+1, step_counter)
        print('\r{}'.format(interaction), end='')
        time.sleep(2)
        print('\r', end='')
        return S_, R
    else:
        pass 


def isWall():
    if player.xcor() == -125 and player.ycor() == 175:
        player.goto(-175,125)
        status_func(2)
        S_ = 'terminal'
        R = -1
        interaction = 'Episode %s: total_steps = %s' %(episode+1, step_counter)
        print('\r{}'.format(interaction), end='')
        time.sleep(2)
        print('\r', end='')
        return S_, R
    else:
        pass


""" Player Movement """

playerspeed = 50

""" Create the token object """

player = turtle.Turtle()
player.color("blue")
player.shape("circle")
player.penup()
player.speed(0)
player.setposition(-175,125)
player.setheading(90)



#Move the player left and right
def move_left():
    x = player.xcor()
    x -= playerspeed
    if x < -175:
        x = -175
    player.setx(x)
    isGoal()
    isFire()
    isWall()
    S_ = player.pos()
    R = 0

def move_right():
    x = player.xcor()
    x += playerspeed
    if x > -25:
        x = -25
    player.setx(x)
    isGoal()
    isFire()
    isWall()
    S_ = player.pos()
    R = 0

def move_up():
    y = player.ycor()
    y += playerspeed
    if y > 225:
        y = 225
    player.sety(y)
    isGoal()
    isFire()
    isWall()
    S_ = player.pos()
    R = 0

def move_down():
    y = player.ycor()
    y -= playerspeed
    if y < 125:
        y = 125
    player.sety(y)
    isGoal()
    isFire()
    isWall()
    S_ = player.pos()
    R = 0

#Create Keyboard Bindings
turtle.listen()
turtle.onkey(move_left, "Left")
turtle.onkey(move_right, "Right")
turtle.onkey(move_up, "Up")
turtle.onkey(move_down, "Down")

def build_q_table(n_states, actions):
    table = pd.DataFrame(
        np.zeros((n_states, len(actions))),     # q_table initial values
        columns=actions,    # action names
    )
    # print(table)    # show table
    return table


def choose_action(state, q_table):
    # This is how to choose an action
    state_actions = q_table.iloc[state, :]
    # act non-greedy or state-action have no value
    if (np.random.uniform() > EPSILON) or ((state_actions == 0).all()): 
        action_name = np.random.choice(ACTIONS)
    else:   # act greedy
        # replace argmax to idxmax as argmax means a different function 
        action_name = state_actions.idxmax()    
    return action_name



def get_env_feedback(S, A):
    if A == 'right':
        move_right()
    elif A == 'left':
        move_left()
    elif A == 'up':
        move_up()
    else: #down 
        move_down()
    return S_, R



def update_env(S, episode, step_counter):
    wn = turtle.Screen()
    wn.bgcolor("white")
    wn.title("test")

    """ Create the Grid """

    greg = turtle.Turtle()
    greg.speed(0)

    def create_square(size,color="black"):
        greg.color(color)
        greg.pd()
        for i in range(4):
            greg.fd(size)
            greg.lt(90)
        greg.pu()
        greg.fd(size)

    def row(size,color="black"):
        for i in range(4):
            create_square(size)

    def board(size,color="black"):
        greg.pu()
        greg.goto(-(size*4),(size*4))
        for i in range(3):
            row(size)
            greg.bk(size*4)
            greg.rt(90)
            greg.fd(size)
            greg.lt(90)

    def color_square(start_pos,distance_sq, sq_width, color):
        greg.pu()
        greg.goto(start_pos)
        greg.fd(distance_sq)
        greg.color(color)
        greg.begin_fill()
        for i in range(4):
            greg.fd(sq_width)
            greg.lt(90)
        greg.end_fill()
        greg.pu()

    def initiate_grid(): 
        board(50)
        color_square((-200,200),150, 50,color="green")
        color_square((-200,150),50, 50,color="black")
        color_square((-200,150),150, 50,color="red")
        greg.hideturtle()

    initiate_grid()

    """ Create the token object """

    player = turtle.Turtle()
    player.color("blue")
    player.shape("circle")
    player.penup()
    player.speed(0)
    player.setposition(S)
    player.setheading(90)




def rl():
    possible_states = {0:(-175,125),
                      1:(-175,175),
                      2:(-175,225),
                      3:(-125,125),
                      4:(-125,175),
                      5:(-125,225),
                      6:(-75,125),
                      7:(-75,175),
                      8:(-75,225),
                      9:(-25,125),
                      10:(-25,175),
                      11:(-25,225)}

    inv_possible_states = {v:k for k,v in possible_states.items()}

    #build the qtable 
    q_table = build_q_table(len(possible_states),ACTIONS)
    for episode in range(MAX_EPISODES):
        step_counter = 0
        which_state = 0
        S = possible_states[which_state]
        is_terminated = False
        update_env(S,episode,step_counter)
        while not is_terminated:

            A = choose_action(S, q_table)
            # take action & get next state and reward
            S_, R = get_env_feedback(S, A) 
            q_predict = q_table.loc[S, A]
            if S_ != 'terminal':
                S_ = inv_possible_states[S_]
                # next state is not terminal
                q_target = R + GAMMA * q_table.iloc[S_, :].max()   
            else:
                q_target = R     # next state is terminal
                is_terminated = True    # terminate this episode

            q_table.loc[S, A] += ALPHA * (q_target - q_predict)  # update
            S = S_  # move to next state

            update_env(S, episode, step_counter+1)
            step_counter += 1
    return q_table



rl()

Short answer: You are confusing the screen coordinates with the 12 states of the environment.

Long answer: When A = choose_action(S, q_table) is called and the choose_action method is executed, you run into problems with the following line of code within that method:

state_actions = q_table.iloc[state, :]

The error IndexingError: Too many indexers is telling you that the value you are trying to use to index the q_table is not valid for it.

If you print out the state variable that gets passed into the choose_action function, you'll see this:

(-175, 125)

But that doesn't make sense. If you print the entire Q-table before the error happens, you'll see the following values:

    left  right  down   up
0    0.0    0.0   0.0  0.0
1    0.0    0.0   0.0  0.0
2    0.0    0.0   0.0  0.0
3    0.0    0.0   0.0  0.0
4    0.0    0.0   0.0  0.0
5    0.0    0.0   0.0  0.0
6    0.0    0.0   0.0  0.0
7    0.0    0.0   0.0  0.0
8    0.0    0.0   0.0  0.0
9    0.0    0.0   0.0  0.0
10   0.0    0.0   0.0  0.0
11   0.0    0.0   0.0  0.0

The values are all zeros because nothing has been learned yet. But your code is trying to access q_table.iloc[state, :] when state is equal to (-175, 125). That doesn't make any sense!

The value you pass into the choose_action method should correspond to one of the twelve states of the environment, represented in the q_table by the integers 0 to 11.
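
To see the difference concretely, here is a minimal check you could run on its own (it only rebuilds an empty 12-row Q-table; nothing else from the script is needed):

import numpy as np
import pandas as pd

ACTIONS = ['left', 'right', 'down', 'up']
q_table = pd.DataFrame(np.zeros((12, len(ACTIONS))), columns=ACTIONS)

print(q_table.iloc[0, :])         # works: the (all-zero) action values for state 0

try:
    q_table.iloc[(-175, 125), :]  # indexing with a coordinate tuple instead of a row index
except Exception as e:
    print(type(e).__name__, e)    # IndexingError: Too many indexers, as in the traceback above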

The problem seems to be caused by this line:

S = possible_states[which_state]

☝️ That line of code in the rl method changes the value of S to (-175, 125). If S is supposed to represent which state of the environment the agent is in, then S should always be an integer between 0 and 11 (inclusive).

You need to make sure that you correctly separate the screen locations that turtle-graphics draws from the 12 states of the environment that the agent is exploring. turtle-graphics doesn't know how to draw the environment states as they are stored within the q_table, and the q_table doesn't know which states of the environment are associated with the coordinates that turtle-graphics uses to draw the squares.
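
As an illustration of that separation, here is a minimal, turtle-free sketch (an assumption about the intended design, not a drop-in replacement for the full script). It reuses the possible_states mapping and the Q-learning update from the question, keeps S as an integer row index everywhere the table is touched, and replaces get_env_feedback with a hypothetical step() stub; coordinates are only looked up where the turtle would be drawn.

import numpy as np
import pandas as pd

ACTIONS = ['left', 'right', 'down', 'up']
EPSILON, ALPHA, GAMMA = 0.9, 0.1, 0.9

# integer state index -> screen coordinate; only the drawing code needs the coordinate
possible_states = {i: (-175 + 50 * (i // 3), 125 + 50 * (i % 3)) for i in range(12)}

q_table = pd.DataFrame(np.zeros((len(possible_states), len(ACTIONS))), columns=ACTIONS)

def choose_action(state, q_table):
    # state must be an integer row index (0-11), never a coordinate tuple
    state_actions = q_table.iloc[state, :]
    if (np.random.uniform() > EPSILON) or ((state_actions == 0).all()):
        return np.random.choice(ACTIONS)
    return state_actions.idxmax()

def step(state, action):
    # hypothetical stand-in for get_env_feedback: it returns the *next state index*
    # (or 'terminal') and a reward, never the turtle's (x, y) position
    next_state = min(state + 1, 11)   # dummy transition, purely for illustration
    if next_state == 11:              # state 11 is (-25, 225), the goal square
        return 'terminal', 1
    return next_state, 0

S = 0                                 # start state as an index, not as (-175, 125)
while True:
    A = choose_action(S, q_table)
    S_, R = step(S, A)
    q_predict = q_table.loc[S, A]
    if S_ != 'terminal':
        q_target = R + GAMMA * q_table.iloc[S_, :].max()
    else:
        q_target = R
    q_table.loc[S, A] += ALPHA * (q_target - q_predict)
    if S_ == 'terminal':
        break
    S = S_
    # only here would the screen be updated, e.g. player.goto(possible_states[S])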
