Not able to use Embedding Layer with tf.distribute.MirroredStrategy

I am trying to parallelize a model with an embedding layer on TensorFlow version 2.4.1, but it throws the following error:

InvalidArgumentError: Cannot assign a device for operation sequential/emb_layer/embedding_lookup/ReadVariableOp: Could not satisfy explicit device specification '' because the node {{colocation_node sequential/emb_layer/embedding_lookup/ReadVariableOp}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0, /job:localhost/replica:0/task:0/device:GPU:0]. 
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=2 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
GatherV2: GPU CPU XLA_CPU XLA_GPU 
Cast: GPU CPU XLA_CPU XLA_GPU 
Const: GPU CPU XLA_CPU XLA_GPU 
ResourceSparseApplyAdagradV2: CPU 
_Arg: GPU CPU XLA_CPU XLA_GPU 
ReadVariableOp: GPU CPU XLA_CPU XLA_GPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  sequential_emb_layer_embedding_lookup_readvariableop_resource (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  adagrad_adagrad_update_update_0_resourcesparseapplyadagradv2_accum (_Arg)  framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
  sequential/emb_layer/embedding_lookup/ReadVariableOp (ReadVariableOp) 
  sequential/emb_layer/embedding_lookup/axis (Const) 
  sequential/emb_layer/embedding_lookup (GatherV2) 
  gradient_tape/sequential/emb_layer/embedding_lookup/Shape (Const) 
  gradient_tape/sequential/emb_layer/embedding_lookup/Cast (Cast) 
  Adagrad/Adagrad/update/update_0/ResourceSparseApplyAdagradV2 (ResourceSparseApplyAdagradV2) /job:localhost/replica:0/task:0/device:GPU:0

     [[{{node sequential/emb_layer/embedding_lookup/ReadVariableOp}}]] [Op:__inference_train_function_631]

Simplifying the model down to a bare-bones one to make it reproducible:

import tensorflow as tf

# Note: the original variable name suggested CentralStorageStrategy, but the
# strategy actually constructed is MirroredStrategy (synchronous data
# parallelism across the local GPUs).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    user_model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10, 2, name="emb_layer")
    ])
user_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1), loss="mse")
user_model.fit([1], [[1, 2]], epochs=3)  # raises the InvalidArgumentError above

Any help would be greatly appreciated. Thanks!

So I finally figured out the problem, in case anyone is looking for the answer.

TensorFlow does not yet have a complete GPU implementation of the Adagrad optimizer. Its sparse update op, ResourceSparseApplyAdagradV2, has only a CPU kernel (as the colocation debug info above shows), and sparse updates are exactly what an embedding layer's gradients require, so the op fails when the variables are placed on the GPU. Adagrad therefore cannot be used with an embedding layer under a data-parallel strategy. Switching to Adam or RMSprop works fine, as in the sketch below.
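For reference, here is a minimal sketch of the fix under the same assumptions as the snippet above (same toy model and data; the 0.1 learning rate is carried over only for demonstration). Per the answer, Adam's sparse update path runs on the GPU, so the colocation error goes away:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    user_model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10, 2, name="emb_layer")
    ])
    # Compiling inside the scope keeps the optimizer's slot variables
    # co-located with the mirrored model variables.
    user_model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")

user_model.fit([1], [[1, 2]], epochs=3)  # trains without the colocation error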
