
Pytorch Double DQN not working properly

I'm trying to build a double DQN network for CartPole-v0, but the network doesn't seem to be working as expected and stagnates at around 8-9 reward. What am I doing wrong?

Each step in the learning phase:

def make_step(model, target_model, optimizer, criterion, observation, action, reward, next_observation):
    inp_obv = torch.Tensor(observation)
    q = model(inp_obv)                   # online network Q-values for the current observation
    q_argmax = torch.argmax(q.data)      # greedy action w.r.t. the current observation
    q = q[action]                        # Q-value of the action actually taken

    inp_next_obv = torch.Tensor(next_observation)
    q_next = target_model(inp_next_obv)  # target network Q-values for the next observation
    q_a_next = q_next[q_argmax]          # evaluated at the action selected above

    #LHS of the double DQN equation
    obv_reward = q

    #RHS of the double DQN equation
    target_reward = torch.Tensor([reward]) + GAMMA*q_a_next.detach()

    #Backprop
    loss = criterion(obv_reward, target_reward) #MSELoss
    loss.backward()
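
For reference, in standard Double DQN (van Hasselt et al., 2016) the greedy action for the bootstrap term is selected by the online network on the next observation, not the current one. A minimal sketch of that target computation, reusing the model/target_model pair from above; the done flag for terminal transitions is an assumption, not something present in the original code:

def double_dqn_target(model, target_model, reward, next_observation, done):
    # Online network selects the greedy action on the NEXT observation...
    next_q_online = model(torch.Tensor(next_observation))
    best_action = torch.argmax(next_q_online)
    # ...and the target network evaluates that action.
    next_q_target = target_model(torch.Tensor(next_observation)).detach()
    # Terminal transitions get no bootstrap term (`done` is hypothetical here;
    # the original code does not handle episode termination).
    return torch.Tensor([reward]) + GAMMA * next_q_target[best_action] * (1.0 - float(done))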

Code wrapping make_step:

optimizer.zero_grad() #RMSprop on net
if e%2 == 0:
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

GAMMA *= GAMMA
optimizer.step()

What am I doing wrong? Thank you.

Updating the target network less frequently (increasing the interval between weight copies) can solve the problem. The target network is meant to lag behind the online network so that it provides stable bootstrap targets; copying the weights every 2 episodes keeps the two networks nearly identical, which defeats that purpose.

optimizer.zero_grad() #RMSprop on net
if e % 100 == 0:
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

GAMMA *= GAMMA
optimizer.step()
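
One more detail worth flagging: GAMMA *= GAMMA squares the discount factor after every episode, so it collapses toward zero within a few episodes; in standard DQN the discount factor is a fixed constant (commonly 0.99). A minimal sketch of the same loop with a constant discount, everything else as in the answer above:

GAMMA = 0.99  # fixed discount factor, set once and never squared

optimizer.zero_grad() #RMSprop on net
if e % 100 == 0:
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

optimizer.step()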
