
Distributed Learning with TensorFlow2 is not working

I'm trying to get distributed TF working in VS Code with TensorFlow version 2.0.0a (the CPU version).

I'm using a Windows and a Linux system (two different computers), and both work fine on their own.

For the distributed TF I followed the tutorial at https://www.tensorflow.org/alpha/guide/distribute_strategy .

I already tried different ports and turning off the firewalls. I also tried switching the master system from Windows to Linux, but now I think it might be a problem with the code, or maybe with the TF version, which is labeled as experimental.

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow_datasets as tfds
import tensorflow as tf
import json
import os

BUFFER_SIZE = 10000
BATCH_SIZE = 64

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
train_datasets_unbatched = datasets['train'].map(scale).shuffle(BUFFER_SIZE)
train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)

def build_and_compile_cnn_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10, activation='softmax')
  ])

  model.compile(
      loss=tf.keras.losses.sparse_categorical_crossentropy,
      optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
      metrics=['accuracy'])

  return model

# multi-worker configuration:
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': 0}
})

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
NUM_WORKERS = 2
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS

# --------------------------------------------------------------------
# In the following line the error occurs
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
# --------------------------------------------------------------------

with strategy.scope():
    multi_worker_model = build_and_compile_cnn_model()
    multi_worker_model.fit(x=train_datasets, epochs=3)

I expect the worker to start the learning process, but instead I get the error:

"F tensorflow/core/framework/device_base.cc:33] Device does not implement name()" “F tensorflow / core / framework / device_base.cc:33]设备未实现name()”

As far as I know, each worker should have a unique task index, for example:

on the first machine you should have:

os.environ['TF_CONFIG'] = json.dumps({    
    'cluster': {    
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]    
    },    
    'task': {'type': 'worker', 'index': 0}    
})

and on the second:

os.environ['TF_CONFIG'] = json.dumps({    
    'cluster': {    
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]    
    },    
    'task': {'type': 'worker', 'index': 1}    
})
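In practice you can keep a single script for both machines and inject the index from outside. A minimal sketch, assuming the index is passed as a command-line argument (the argument convention here is my own, not from the original code):

import json
import os
import sys

# 0 on the first machine, 1 on the second, e.g. `python train.py 0`
worker_index = int(sys.argv[1])

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': worker_index}
})

# TF_CONFIG should be set before the strategy is constructed, since
# MultiWorkerMirroredStrategy reads it at creation time.
import tensorflow as tf
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

Both machines must use the identical 'cluster' list; only the 'task' index differs between them.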
