Keras `multi_gpu_model` usage causes error `yolo_head` is not defined

I have a keras_yolo Python implementation, and I am trying to get training to work across multiple GPUs; the `multi_gpu_model` option sounds like a good place to start.

However, my problem is that the same code works just fine in a single CPU/GPU setup but fails with `NameError: name 'yolo_head' is not defined` when run through `multi_gpu_model`. The full stack trace:

    parallel_model = multi_gpu_model(model, cpu_relocation=True)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/utils/multi_gpu_utils.py", line 200, in multi_gpu_model
    model = clone_model(model)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/models.py", line 251, in clone_model
    return _clone_functional_model(model, input_tensors=input_tensors)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/models.py", line 152, in _clone_functional_model
    layer(computed_tensors, **kwargs))
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/layers/core.py", line 687, in call
    return self.function(inputs, **arguments)
  File "/mnt/data/DeepLeague/YAD2K/yad2k/models/keras_yolo.py", line 199, in yolo_loss
    pred_xy, pred_wh, pred_confidence, pred_class_prob = yolo_head(

Here is a link to the definition of yolo_head: https://github.com/farzaa/DeepLeague/blob/c87fcd89d9f9e81421609eb397bf95433270f0e2/YAD2K/yad2k/models/keras_yolo.py#L66

I have not yet dug into the `multi_gpu_model` code to understand how the copying works under the hood, and I was hoping to avoid having to do that.

The issue is that any custom name referenced inside a Keras `Lambda` layer must be imported explicitly within the function that uses it. When `multi_gpu_model` calls `clone_model`, each `Lambda` layer is rebuilt from its serialized config, and the rebuilt function no longer sees the globals of the module it was originally defined in, which is why the module-level `yolo_head` import goes missing.
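
A minimal, self-contained sketch of the same failure and fix (an illustration only, assuming standalone Keras 2.x with the TensorFlow backend, as in the traceback; `squash`, `squash_fixed` and the `KB` alias are made-up names, not part of YAD2K):

import keras.backend as KB             # module-level import, playing the role of the yolo_head import
from keras.layers import Input, Lambda
from keras.models import Model, clone_model


def squash(x):
    # Refers to the module-level alias `KB`. When multi_gpu_model calls
    # clone_model, the Lambda is rebuilt from its config and the rebuilt
    # function no longer sees this module's globals, so calling the cloned
    # layer raises NameError, just as yolo_head goes missing in the question.
    return KB.tanh(x)


def squash_fixed(x):
    # Function-level import: the import is part of the function body itself,
    # so it still resolves after the Lambda has been cloned.
    import keras.backend as KB
    return KB.tanh(x)


inp = Input(shape=(4,))
broken = Model(inp, Lambda(squash)(inp))
fixed = Model(inp, Lambda(squash_fixed)(inp))

# clone_model(broken)  # raises NameError: name 'KB' is not defined
clone_model(fixed)     # succeeds; multi_gpu_model(fixed, ...) clones the same way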

In this case that means `yolo_head` must be 're-imported' at the function level of `yolo_loss`, like this:

def yolo_loss(args, anchors, num_classes, rescore_confidence=False, print_loss=False):
    # Function-level import so the name still resolves after the Lambda is cloned
    from yad2k.models.keras_yolo import yolo_head
    ...
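
With that change in place, `clone_model` can rebuild the loss `Lambda` and the call from the traceback should go through. A hedged usage sketch, assuming `model` is the single-GPU YAD2K model whose loss `Lambda` layer is named 'yolo_loss' and now does the function-level import; the optimizer and pass-through loss are illustrative choices, not taken from the question:

from keras.utils import multi_gpu_model

# clone_model inside multi_gpu_model can now rebuild the yolo_loss Lambda.
parallel_model = multi_gpu_model(model, cpu_relocation=True)

# The YOLO loss is computed inside the Lambda layer itself, so the compiled
# loss simply passes that layer's output through.
parallel_model.compile(optimizer='adam',
                       loss={'yolo_loss': lambda y_true, y_pred: y_pred})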
