
Slow training with GPU on Google Cloud ML Engine

Sorry if my question is dumb, but I have spent a lot of time trying to understand the cause of the problem and couldn't, so here it is.

I'm training a Tacotron model on Google Cloud ML Engine. I trained it before on FloydHub and it was pretty fast, so I configured my project to run on Google ML Engine.

These are the major changes I made to my project.

Original:

with open(metadata_filename, encoding='utf-8') as f:
  self._metadata = [line.strip().split('|') for line in f]
  hours = sum((int(x[2]) for x in self._metadata)) * hparams.frame_shift_ms / (3600 * 1000)
  log('Loaded metadata for %d examples (%.2f hours)' % (len(self._metadata), hours))

My config:

with file_io.FileIO(metadata_filename, 'r') as f:
     self._metadata = [line.strip().split('|') for line in f]
     hours = sum((int(x[2]) for x in self._metadata)) * hparams.frame_shift_ms / (3600 * 1000)
     log('Loaded metadata for %d examples (%.2f hours)' % (len(self._metadata), hours))
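Since `file_io.FileIO` returns a file-like object, the metadata parsing itself is unchanged; a minimal sketch of that parsing with an in-memory file standing in for the GCS read (the sample metadata line is made up for illustration):

```python
from io import StringIO

# A fake metadata line in the same pipe-delimited layout the code expects:
# linear-spectrogram file | mel file | frame count | text
sample = "linear-001.npy|mel-001.npy|240|hello world\n"

# StringIO stands in here for file_io.FileIO(metadata_filename, 'r'),
# which yields lines the same way a local text file does
f = StringIO(sample)
metadata = [line.strip().split('|') for line in f]
```

If the parsing works locally with `open`, it should behave identically through `FileIO`; the slowdown is unlikely to come from this part.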

Original:

def _get_next_example(self):
    '''Loads a single example (input, mel_target, linear_target, cost) from disk'''
    if self._offset >= len(self._metadata):
      self._offset = 0
      random.shuffle(self._metadata)
    meta = self._metadata[self._offset]
    self._offset += 1

    text = meta[3]
    if self._cmudict and random.random() < _p_cmudict:
      text = ' '.join([self._maybe_get_arpabet(word) for word in text.split(' ')])

    input_data = np.asarray(text_to_sequence(text, self._cleaner_names), dtype=np.int32)
    linear_target = np.load(os.path.join(self._datadir, meta[0]))
    mel_target = np.load(os.path.join(self._datadir, meta[1]))
    return (input_data, mel_target, linear_target, len(linear_target))

My config:

def _get_next_example(self):
    '''Loads a single example (input, mel_target, linear_target, cost) from disk'''
    if self._offset >= len(self._metadata):
        self._offset = 0
        random.shuffle(self._metadata)
    meta = self._metadata[self._offset]
    self._offset += 1

    text = meta[3]
    if self._cmudict and random.random() < _p_cmudict:
        text = ' '.join([self._maybe_get_arpabet(word) for word in text.split(' ')])

    input_data = np.asarray(text_to_sequence(text, self._cleaner_names), dtype=np.int32)
    f = BytesIO(file_io.read_file_to_string(
        os.path.join(self._datadir, meta[0]), binary_mode=True))
    linear_target = np.load(f)
    s = BytesIO(file_io.read_file_to_string(
        os.path.join(self._datadir, meta[1]), binary_mode=True))
    mel_target = np.load(s)
    return (input_data, mel_target, linear_target, len(linear_target))
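The only real change in this version is deserializing the `.npy` bytes through `BytesIO` instead of `np.load` on a local path; a self-contained sketch of that round trip, using an in-memory buffer in place of the `file_io.read_file_to_string(..., binary_mode=True)` call:

```python
from io import BytesIO
import numpy as np

def load_npy_bytes(raw_bytes):
    """Deserialize a NumPy array from raw .npy bytes, e.g. the value
    returned by file_io.read_file_to_string(path, binary_mode=True)."""
    return np.load(BytesIO(raw_bytes))

# Round-trip demo: save an array to an in-memory buffer, then load it
# back the same way the training code loads a target from GCS bytes
buf = BytesIO()
np.save(buf, np.arange(4, dtype=np.int32))
arr = load_npy_bytes(buf.getvalue())
```

The deserialization itself is cheap; if anything in this path is slow, it is more likely the per-example network read from GCS than the `BytesIO`/`np.load` step.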

Here are two screenshots showing the difference: Google ML, FloydHub.

And this is the training command I use on Google ML (with scale-tier=BASIC_GPU):

gcloud ml-engine jobs submit training "$JOB_NAME" --stream-logs --module-name trainier.train --package-path trainier --staging-bucket "$BUCKET_NAME" --region "us-central1" --scale-tier=basic-gpu --config ~/gp-master/config.yaml --runtime-version=1.4 -- --base_dir "$BASEE_DIR" --input "$TRAIN_DATA"

So my question is: did I do something that could cause this slow data reading, or is there a problem with Google Cloud ML (which I doubt)?

OK, I figured it out: I should have put tensorflow-gpu==1.4 instead of tensorflow==1.4 in the required packages ^^
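For reference, that dependency lives in the trainer package's setup.py, which ML Engine uses to build the training environment. A minimal sketch of such a file (the package name and layout are assumed from the gcloud command above, not taken from the actual project):

```python
# setup.py -- minimal sketch of an ML Engine trainer package
from setuptools import setup, find_packages

REQUIRED_PACKAGES = [
    # tensorflow-gpu, NOT plain tensorflow: the CPU-only wheel still runs
    # on a BASIC_GPU machine but never touches the GPU, which is what
    # made training so slow here
    'tensorflow-gpu==1.4',
]

setup(
    name='trainier',  # matches --package-path in the gcloud command
    version='0.1',
    packages=find_packages(),
    install_requires=REQUIRED_PACKAGES,
)
```

With `--runtime-version=1.4` the runtime already ships a matching TensorFlow, so any version pinned here must agree with it.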
