
How to skip erroneous elements at the IO level in Apache Beam with Dataflow?

I am doing some analysis on TFRecords stored in GCP, but some of the records inside the files are corrupted, so when I run my pipeline and hit more than four errors, the pipeline breaks. I think this is a constraint of the DataflowRunner and not of Beam itself.

Here is my processing script:

import argparse
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.metrics.metric import Metrics

from apache_beam.runners.direct import direct_runner
import tensorflow as tf

input_ = "path_to_bucket"


def _parse_example(serialized_example):
  """Return inputs and targets Tensors from a serialized tf.Example."""
  data_fields = {
      "inputs": tf.io.VarLenFeature(tf.int64),
      "targets": tf.io.VarLenFeature(tf.int64)
  }
  parsed = tf.io.parse_single_example(serialized_example, data_fields)
  inputs = tf.sparse.to_dense(parsed["inputs"])
  targets = tf.sparse.to_dense(parsed["targets"])
  return inputs, targets


class MyFnDo(beam.DoFn):

  def __init__(self):
    beam.DoFn.__init__(self)
    self.input_tokens = Metrics.distribution(self.__class__, 'input_tokens')
    self.output_tokens = Metrics.distribution(self.__class__, 'output_tokens')
    self.num_examples = Metrics.counter(self.__class__, 'num_examples')
    self.decode_errors = Metrics.counter(self.__class__, 'decode_errors')

  def process(self, element):
    # inputs = element.features.feature['inputs'].int64_list.value
    # outputs = element.features.feature['outputs'].int64_list.value
    try:
      inputs, outputs = _parse_example(element)
      self.input_tokens.update(len(inputs))
      self.output_tokens.update(len(outputs))
      self.num_examples.inc()
    except Exception:
      self.decode_errors.inc()



def main(argv):
  parser = argparse.ArgumentParser()
  parser.add_argument('--input', dest='input', default=input_, help='input tfrecords')
  # parser.add_argument('--output', dest='output', default='gs://', help='output file')

  known_args, pipeline_args = parser.parse_known_args(argv)
  pipeline_options = PipelineOptions(pipeline_args)

  with beam.Pipeline(options=pipeline_options) as p:
    tfrecords = p | "Read TFRecords" >> beam.io.ReadFromTFRecord(known_args.input,
                                                                 coder=beam.coders.ProtoCoder(tf.train.Example))
    tfrecords | "count mean" >> beam.ParDo(MyFnDo())


if __name__ == '__main__':
  main(None)

So basically, how can I skip the corrupted TFRecords and log their count during my analysis?
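For example, one way to skip corrupted records after they have been read is to yield from the DoFn only the elements that parse successfully and count the failures, so downstream transforms never see bad records. A minimal sketch reusing _parse_example and Metrics from above (SkipBadRecordsFn is a hypothetical name, not from the original code):

class SkipBadRecordsFn(beam.DoFn):
  """Yields parsed (inputs, targets) pairs and drops records that fail to parse."""

  def __init__(self):
    self.decode_errors = Metrics.counter(self.__class__, 'decode_errors')

  def process(self, element):
    try:
      inputs, targets = _parse_example(element)
    except Exception:
      # Corrupted record: count it and emit nothing, so downstream
      # transforms only receive elements that parsed successfully.
      self.decode_errors.inc()
      return
    yield inputs, targets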

There was a conceptual issue with the original pipeline: beam.io.ReadFromTFRecord reads a single TFRecord dataset (which may be sharded across multiple files), whereas I was giving it a list of many individual TFRecord files, and that was causing the error. Switching from ReadFromTFRecord to ReadAllFromTFRecord resolved my issue.

from apache_beam.io.tfrecordio import ReadAllFromTFRecord

p = beam.Pipeline(runner=direct_runner.DirectRunner())
# Create a PCollection of matching file paths, then read records from each file.
tfrecords = p | beam.Create(tf.io.gfile.glob(input_)) | ReadAllFromTFRecord(coder=beam.coders.ProtoCoder(tf.train.Example))
tfrecords | beam.ParDo(MyFnDo())
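The number of skipped records can then be read back from the decode_errors counter once the pipeline has finished. A minimal sketch, assuming the DirectRunner pipeline above and keeping the PipelineResult returned by p.run():

from apache_beam.metrics.metric import MetricsFilter

result = p.run()
result.wait_until_finish()

# Query the decode_errors counter declared in MyFnDo and print how many
# records failed to parse and were skipped.
for counter in result.metrics().query(MetricsFilter().with_name('decode_errors'))['counters']:
  print('skipped records:', counter.committed)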
