ValueError: RolloutWorker has no input_reader object
I am using RLlib and I am trying to run APEX_DDPG with tune on a multi-agent environment, with Ray v1.10 on Python 3.9.6. I get the following error:
ValueError: RolloutWorker has no input_reader object! Cannot call sample(). You can try setting create_env_on_driver to True.
I found the source of the error in the docs, in the RolloutWorker class definition:
if self.fake_sampler and self.last_batch is not None:
    return self.last_batch
elif self.input_reader is None:
    raise ValueError("RolloutWorker has no input_reader object! "
                     "Cannot call sample(). You can try setting "
                     "create_env_on_driver to True.")
But I do not know how to solve it, since I am a little bit new to RLlib.
I'm also new to Ray and RLlib, and I ran into this error today. My problem was that I forgot to add my env to config. You may try adding your environment to your config before calling ApexDDPGTrainer(config=config) or ray.tune(config=config).
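For example, a minimal sketch of that fix (assuming an env class like the MyEnv defined in the example below; the env_config key is just illustrative):

from ray.rllib.agents.ddpg import ApexDDPGTrainer

config = {
    "env": MyEnv,      # the key that was missing: an env class or a registered env name
    "env_config": {},  # passed to your env's constructor
}
trainer = ApexDDPGTrainer(config=config)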
The following is an example from Ray's official docs:
import gym, ray
from ray.rllib.agents import ppo

class MyEnv(gym.Env):
    def __init__(self, env_config):
        self.action_space = <gym.Space>
        self.observation_space = <gym.Space>
    def reset(self):
        return <obs>
    def step(self, action):
        return <obs>, <reward: float>, <done: bool>, <info: dict>

ray.init()
trainer = ppo.PPOTrainer(env=MyEnv, config={
    "env_config": {},  # config to pass to env class
})
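The <...> placeholders come from the docs; filled in with concrete (purely illustrative) spaces and return values, a runnable version might look like this:

import gym
import numpy as np
import ray
from gym import spaces
from ray.rllib.agents import ppo

class MyEnv(gym.Env):
    def __init__(self, env_config):
        # Illustrative spaces: a 4-dim continuous observation and a binary action.
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        obs = self.observation_space.sample()
        return obs, 0.0, True, {}  # illustrative reward; one-step episodes keep the sketch minimal

ray.init()
trainer = ppo.PPOTrainer(env=MyEnv, config={"env_config": {}})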
You may also register your custom environment first:
from ray.tune.registry import register_env

def env_creator(env_config):
    return MyEnv(...)  # return an env instance

register_env("my_env", env_creator)
trainer = ppo.PPOTrainer(env="my_env")
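Tying this back to the question's setup, here is a sketch of the same fix for APEX_DDPG under tune (the worker count and stop condition are illustrative assumptions; env_creator is the function registered above):

import ray
from ray import tune
from ray.tune.registry import register_env

ray.init()
register_env("my_env", env_creator)  # env_creator as defined above

tune.run(
    "APEX_DDPG",                      # RLlib's registered name for the Ape-X DDPG trainer
    config={
        "env": "my_env",              # without this key, rollout workers have no env to sample from
        "num_workers": 2,             # illustrative
    },
    stop={"training_iteration": 10},  # illustrative
)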