
Distributed Learning with TensorFlow 2 is not working

I'm trying to get distributed TensorFlow working in VS Code with TensorFlow version 2.0.0a (the CPU version).

I'm using a Windows and a Linux system (two different computers), and both work fine on their own.

For distributed TF I followed the tutorial at https://www.tensorflow.org/alpha/guide/distribute_strategy .

I have already tried different ports and turned off the firewalls. I also tried switching the master system from Windows to Linux, but now I think it might be a problem with the code, or maybe with the TF version, which is labeled as experimental.
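To rule out networking problems first, a quick reachability check can be run between the machines. This is a minimal sketch, assuming the worker addresses used in the TF_CONFIG below; it only verifies that a TCP connection can be opened to each worker port, and a port will only answer once the corresponding worker process is running:

import socket

# Worker addresses taken from the TF_CONFIG below (adjust to your cluster).
workers = [("192.168.0.12", 2468), ("192.168.0.13", 1357)]

for host, port in workers:
    try:
        # Open a plain TCP connection with a short timeout.
        with socket.create_connection((host, port), timeout=5):
            print(host, port, "reachable")
    except OSError as err:
        print(host, port, "NOT reachable:", err)

The full training script is below.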

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow_datasets as tfds
import tensorflow as tf
import json
import os

BUFFER_SIZE = 10000
BATCH_SIZE = 64

def scale(image, label):
    # Scale pixel values from [0, 255] to [0, 1].
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label


datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)

train_datasets_unbatched = datasets['train'].map(scale).shuffle(BUFFER_SIZE)

train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)

def build_and_compile_cnn_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10, activation='softmax')
  ])

  model.compile(
      loss=tf.keras.losses.sparse_categorical_crossentropy,
      optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
      metrics=['accuracy'])

  return model


# Multi-worker configuration:

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': 0}
})

# TF_CONFIG must be set before the strategy is created, because the
# strategy reads the cluster spec at construction time.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
NUM_WORKERS = 2

# Scale the global batch size with the number of workers, as in the
# multi-worker tutorial.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS

#--------------------------------------------------------------------
# The error occurs on the following line:
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
#--------------------------------------------------------------------


with strategy.scope():
    multi_worker_model = build_and_compile_cnn_model()
    multi_worker_model.fit(x=train_datasets, epochs=3)

I expect the worker to start the learning process, but instead I get the error:

"F tensorflow/core/framework/device_base.cc:33] Device does not implement name()"

As far as I know, each worker should have a unique task index; a way to set it from the command line is sketched after the two snippets below.

On the first machine you should have:

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': 0}
})

and on the second:

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': 1}
})
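Since the two configurations differ only in the task index, one way to run the same script on both machines is to read the index from the command line. This is a minimal sketch; the --task_index flag and the train.py file name are assumptions for illustration, not part of the original code:

import argparse
import json
import os

parser = argparse.ArgumentParser()
# Hypothetical flag: pass 0 on the first machine, 1 on the second.
parser.add_argument('--task_index', type=int, required=True)
args = parser.parse_args()

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["192.168.0.12:2468", "192.168.0.13:1357"]
    },
    'task': {'type': 'worker', 'index': args.task_index}
})

You would then start the same script as python train.py --task_index=0 on the first machine and python train.py --task_index=1 on the second, keeping the cluster spec identical on both.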
