
ValueError: RolloutWorker has no input_reader object

I am using RLlib and trying to run APEX_DDPG with tune on a multi-agent environment, with Ray v1.10 on Python 3.9.6. I get the following error:

ValueError: RolloutWorker has no input_reader object! Cannot call sample(). You can try setting create_env_on_driver to True.

I found the source of the error in the docs, in the RolloutWorker class definition:

if self.fake_sampler and self.last_batch is not None:
    return self.last_batch
elif self.input_reader is None:
    raise ValueError("RolloutWorker has no input_reader object! "
                     "Cannot call sample(). You can try setting "
                     "create_env_on_driver to True.")

But I do not know how to solve it, since I am fairly new to RLlib.

I'm also new to Ray and RLlib, and I ran into this error today. My problem was that I forgot to add my env to the config. Try adding your environment to your config before calling ApexDDPGTrainer(config=config) or tune.run(..., config=config), as sketched after the examples below.

The following is an example from ray's official doc, with the placeholders filled in so it actually runs:

import gym, ray
from ray.rllib.agents import ppo

class MyEnv(gym.Env):
    def __init__(self, env_config):
        # Minimal stand-ins for the doc's <gym.Space> placeholders;
        # replace with your real spaces.
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,))
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(2,))
    def reset(self):
        return self.observation_space.sample()  # <obs>
    def step(self, action):
        # <obs>, <reward: float>, <done: bool>, <info: dict>
        return self.observation_space.sample(), 0.0, True, {}

ray.init()
trainer = ppo.PPOTrainer(env=MyEnv, config={
    "env_config": {},  # config to pass to env class
})
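
The env can equivalently be specified inside the config dict, which is the form that ApexDDPGTrainer(config=config) relies on. A minimal sketch of that variant, reusing MyEnv from above:

# Equivalent: put the env inside the config instead of passing env=.
trainer = ppo.PPOTrainer(config={
    "env": MyEnv,      # the key whose absence triggers the error
    "env_config": {},
})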

You may also register your custom environment first:

from ray.tune.registry import register_env

def env_creator(env_config):
    return MyEnv(env_config)  # return an env instance

register_env("my_env", env_creator)
trainer = ppo.PPOTrainer(env="my_env")
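
For the APEX_DDPG-with-tune setup from the question, the same registration can be combined with tune.run. A minimal sketch, assuming an environment with a continuous (Box) action space as DDPG requires; for the multi-agent case, the registered class would be a MultiAgentEnv subclass:

from ray import tune

tune.run(
    "APEX_DDPG",
    stop={"training_iteration": 1},  # stop early; this is only a sketch
    config={
        "env": "my_env",   # omitting this key is what caused the error above
        "env_config": {},
    },
)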
