
Stuck on understanding the difference between the update rules of TD(0) and TD(λ)

I'm studying temporal-difference learning from this post. The update rule of TD(0) is clear to me, but in TD(λ) I don't understand how the utility values of all the previous states are updated in a single update.

Here is the diagram given for comparison of both updates:

[image: comparison of TD(0) and TD(λ) updates]

The above diagram is explained as follows:

In TD(λ) the result is propagated back to all the previous states thanks to the eligibility traces.

My question is: how is the information propagated to all the previous states in a single update, even if we are using the following update rule with eligibility traces?

[image: TD(λ) update rule with eligibility traces]
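(The image is not reproduced here. Judging from the surrounding text, it shows the standard tabular TD(λ) update with accumulating eligibility traces; as an assumption about what the figure contains, that rule can be written as:)

δ_t = r_{t+1} + γ U_t(s_{t+1}) − U_t(s_t)
e_t(s) = γ λ e_{t−1}(s) + 1[s = s_t]          (for every state s)
U_{t+1}(s) = U_t(s) + α δ_t e_t(s)            (for every state s)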

Here, in a single update, we're only updating the utility of a single state, U_t(s), so how do the utilities of all the previous states get updated?

Edit

As per the answer, it is clear that this update is applied at every single step, and that is how the information is propagated. If this is the case, it still confuses me, because the only difference between the update rules is the eligibility trace.

So even if the eligibility trace is non-zero for the previous states, the value of delta will be zero in the above case (because initially the rewards and the utility function are initialized to 0). Then how is it possible for previous states to get utility values other than zero in the first update?

Also, in the given Python implementation, the following output is given after a single iteration:

 [[ 0.       0.04595  0.1      0.     ]
 [ 0.       0.       0.       0.     ]
 [ 0.       0.       0.       0.     ]]

Here only 2 values are updated, instead of all 5 previous states as shown in the figure. What am I missing here?

So even if the eligibility trace is non-zero for the previous states, the value of delta will be zero in the above case (because initially the rewards and the utility function are initialized to 0). Then how is it possible for previous states to get utility values other than zero in the first update?

You are right that, in the first update, all rewards and updates will still be 0 (unless we already manage to reach the goal in a single step, in which case the reward won't be 0).

However, the eligibility traces e_t will continue to "remember" or "memorize" all the states that we have previously visited. So, as soon as we do manage to reach the goal state and get a non-zero reward, the eligibility traces will still remember all the states that we went through. Those states will still have non-zero entries in the table of eligibility traces, and therefore all get a non-zero update at once as soon as you observe your first reward.

The table of eligibility traces does decay every time step (it is multiplied by gamma * lambda_), so the magnitude of the updates to states that were visited a long time ago will be smaller than the magnitude of the updates to states that were visited very recently. But we do continue to remember all those states; they keep non-zero entries (under the assumption that gamma > 0 and lambda_ > 0). This allows the values of all visited states to be updated, not as soon as we reach those states, but as soon as we observe a non-zero reward (or, in epochs after the first epoch, as soon as we reach a state for which we already have an existing non-zero predicted value) after having visited them at some earlier point in time.
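To make this concrete, here is a minimal sketch of a tabular TD(λ) update on a toy 5-state corridor where only the final transition yields a reward. This is my own example, not the article's grid world; the state layout and the alpha, gamma and lambda_ values are assumptions.

import numpy as np

n_states = 5                      # states 0..4; state 4 is the goal (assumed layout)
alpha, gamma, lambda_ = 0.1, 0.999, 0.5   # assumed values

U = np.zeros(n_states)            # utilities, initialized to 0
traces = np.zeros(n_states)       # eligibility traces, initialized to 0

state = 0
for step in range(n_states - 1):
    next_state = state + 1                              # deterministic "move right"
    reward = 1.0 if next_state == n_states - 1 else 0.0

    # TD error of the current transition only
    delta = reward + gamma * U[next_state] - U[state]

    # decay every trace, then bump the trace of the current state
    traces *= gamma * lambda_
    traces[state] += 1.0

    # this single line updates EVERY state, weighted by its trace;
    # it changes nothing while delta == 0, but on the last step the
    # non-zero reward is propagated back to all visited states at once
    U += alpha * delta * traces

    state = next_state

print(U)   # every visited state now has a non-zero value

On the final step delta becomes non-zero, and the single line U += alpha * delta * traces raises the value of every previously visited state in one go, with older states receiving smaller updates because their traces have decayed.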


Also, in the given Python implementation, the following output is given after a single iteration:

 [[ 0.       0.04595  0.1      0.     ]
 [ 0.       0.       0.       0.     ]
 [ 0.       0.       0.       0.     ]]

Here only 2 values are updated, instead of all 5 previous states as shown in the figure. What am I missing here?

The first part of their code looks as follows:

for epoch in range(tot_epoch):
  #Reset and return the first observation
  observation = env.reset(exploring_starts=True)

So, every new epoch, they start by resetting the environment using the exploring_starts flag. If we look at the implementation of their environment, we see that using this flag means we always start out at a random initial position.

So, I suspect that, when the code was run to generate that output, the initial position was simply randomly selected to be the position two steps to the left of the goal, rather than the position in the bottom-left corner. If the initial position is randomly selected to already be closer to the goal, the agent only ever visits those two states for which you see non-zero updates, so those will also be the only states with non-zero entries in the table of eligibility traces and therefore the only states that receive non-zero updates.

If the initial position is indeed the position in the bottom-left corner, a correct implementation of the algorithm would indeed update the values for all the states along that path (assuming no extra tricks are added, like setting entries to 0 if they happen to get "close enough" to 0 due to decaying).


I'd also like to note that there is in fact a mistake in the code on that page: they do not reset all the entries of the table of eligibility traces to 0 when resetting the environment / starting a new epoch. This should be done. If this is not done, the eligibility traces will still memorize states that were visited during previous epochs and also still update all of those, even if they were not visited again in the new epoch. This is incorrect. A correct version of their code should start out like this:

for epoch in range(tot_epoch):
    #Reset and return the first observation
    observation = env.reset(exploring_starts=True)
    trace_matrix = trace_matrix * 0.0    # IMPORTANT, added this        
    for step in range(1000):
        ...

You are missing a small but important detail: the update rule is applied to all states, not only the current state. So, in practice, you are updating all the states whose e_t(s) is different from zero.

Edit

The delta is not zero because it is computed for the current state when the episode ends and the agent receives a reward of +1. Therefore, after computing a delta different from zero, you update all the states using that delta and the current eligibility traces.
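In other words, assuming a NumPy utility array U and a trace array traces (hypothetical names, not necessarily those used in the article's code), the difference between the two update rules boils down to something like this:

import numpy as np

alpha, delta, s = 0.1, 1.0, 3                      # illustrative values (assumed)
U = np.zeros(5)                                    # utilities, all still 0
traces = np.array([0.12, 0.25, 0.5, 1.0, 0.0])     # example eligibility traces

# TD(0): only the current state s is touched
U_td0 = U.copy()
U_td0[s] += alpha * delta

# TD(lambda): the same delta is applied to every state, weighted by its trace
U_tdlambda = U + alpha * delta * traces

print(U_td0)        # only index 3 changed
print(U_tdlambda)   # every state with a non-zero trace changed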

I don't know why the output of the Python implementation (I haven't checked it carefully) updates only 2 values, but please verify that the eligibility traces for all 5 previous states are different from 0, and if that is not the case, try to understand why. Sometimes you are not interested in maintaining traces below a very small threshold (e.g., 10e-5), because they have a very small effect on the learning process and waste computational resources.

As can be seen, δ is used to calculate a state's utility. But δ uses the next state's utility, as seen in the article. This means that for TD(0) it will update all the states, because to calculate U_t(s) we need the next state's utility, and so on.

  • At the same time, the eligibility trace uses the previous step's eligibility trace.
  • In the eligibility trace, since gamma is 0.999 (less than 1) and lambda = 0.5, multiplying by gamma and lambda gives the traces a strong decrease,
  • while weight is added through a +1 only to the state of the current time t.
  • This means that the closer a state is to the current state, the more weight it has, and thus the bigger its multiplier,
  • and the further apart it is, the less weight and the smaller its multiplier.

In TD(λ) there is also an extra decrease from bringing the eligibility trace into the calculation. The above leads to the conclusion that the previous values are 0 in the first iteration because all the utilities are 0 at the start, and in TD(λ), when they are updated, they are decreased much more strongly by the eligibility trace. You could say they are extremely small, too small to calculate or consider.
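For reference, with the gamma = 0.999 and lambda = 0.5 mentioned above, the trace of a state visited k steps ago (and not revisited since) has shrunk by a factor of (gamma * lambda)^k, which a quick calculation illustrates:

gamma, lambda_ = 0.999, 0.5

# decay factor applied to a trace k steps after the state was visited
for k in range(6):
    print(k, round((gamma * lambda_) ** k, 4))
# 0 1.0
# 1 0.4995
# 2 0.2495
# 3 0.1246
# 4 0.0623
# 5 0.0311

So the traces of earlier states shrink quickly but stay non-zero; this is the "extra decrease" referred to above.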
