PPOAgent + CartPole = ValueError: actor_network output spec does not match action spec:
I am trying to experiment with tf_agents' PPOAgent in the CartPole-v1 environment, but I get the following error when instantiating the agent itself:
ValueError: actor_network output spec does not match action spec:
TensorSpec(shape=(2,), dtype=tf.float32, name=None)
vs.
BoundedTensorSpec(shape=(), dtype=tf.int64, name='action', minimum=array(0, dtype=int64), maximum=array(1, dtype=int64))
I believe the problem is that my network's output is tf.float32 rather than tf.int64, but I may be wrong. I don't know how to make the network output an integer, and as far as I can tell that is neither possible nor desirable.
If I run a continuous environment such as MountainCarContinuous-v0 instead, I get a different error:
ValueError: Unexpected output from `actor_network`. Expected `Distribution` objects, but saw output spec: TensorSpec(shape=(1,), dtype=tf.float32, name=None)
Here is the relevant code (mostly taken from the DQN tutorial):
import tensorflow as tf
import tf_agents
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import sequential
from tf_agents.specs import tensor_spec

# env_name = 'MountainCarContinuous-v0'
env_name = 'CartPole-v1'
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
train_env.reset()
eval_env.reset()

actor_layer_params = (100, 50)
critic_layer_params = (100, 50)

action_tensor_spec = tensor_spec.from_spec(train_env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

# Define a helper function to create Dense layers configured with the right
# activation and kernel initializer.
def dense_layer(num_units):
    return tf.keras.layers.Dense(
        num_units,
        activation=tf.keras.activations.relu,
        kernel_initializer=tf.keras.initializers.VarianceScaling(
            scale=2.0, mode='fan_in', distribution='truncated_normal'))

# Actor network
dense_layers = [dense_layer(num_units) for num_units in actor_layer_params]
actions_layer = tf.keras.layers.Dense(
    1,
    name='actions',
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(
        minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))
ActorNet = sequential.Sequential(dense_layers + [actions_layer])

# Critic/value network
dense_layers = [dense_layer(num_units) for num_units in critic_layer_params]
criticism_layer = tf.keras.layers.Dense(
    1,
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(
        minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))
CriticNet = sequential.Sequential(dense_layers + [criticism_layer])

learning_rate = 1e-3  # value not shown in the original snippet
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)

# Error occurs here
agent = tf_agents.agents.PPOAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    optimizer=optimizer,
    actor_net=ActorNet,
    value_net=CriticNet,
    train_step_counter=train_step_counter)
I feel like I must be missing something obvious, or have a fundamental misunderstanding; any and all help would be appreciated. I could not find an example of PPOAgent in use.
Figured it out: I needed to use a network that returns a distribution, such as ActorDistributionNetwork.
For details see: https://www.tensorflow.org/agents/api_docs/python/tf_agents/networks/actor_distribution_network/ActorDistributionNetwork