
How to append results in a pipeline using Apache Beam Python?

I have an Apache Beam pipeline that gets some texts from input files via Pub/Sub notifications, then runs some transformations that produce a sentence and a score. But my writer overwrites the results instead of appending them. Is there any append module for beam.filesystems?

from __future__ import absolute_import

import argparse
import logging
from datetime import datetime

import json
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
import apache_beam as beam
import apache_beam.transforms.window as window

from apache_beam.io.filesystems import FileSystems
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import StandardOptions



def run(argv=None):
  """Build and run the pipeline."""
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--output',
      dest='output',
      required=True,
      help='GCS destination folder to save the results to (example: gs://BUCKET_NAME/path)')
  group = parser.add_mutually_exclusive_group(required=True)
  group.add_argument(
      '--input_topic',
      help=('Input PubSub topic of the form '
            '"projects/<project name>/topics/<topic name>".'))
  group.add_argument(
      '--input_subscription',
      help=('Input PubSub subscription of the form '
            '"projects/<project name>/subscriptions/<subscription name>".'))
  known_args, pipeline_args = parser.parse_known_args(argv)

  # We use the save_main_session option because one or more DoFn's in this
  # workflow rely on global context (e.g., a module imported at module level).
  pipeline_options = PipelineOptions(pipeline_args)
  pipeline_options.view_as(SetupOptions).save_main_session = True
  pipeline_options.view_as(StandardOptions).streaming = True
  p = beam.Pipeline(options=pipeline_options)

  # Read from PubSub into a PCollection.
  if known_args.input_subscription:
    messages = (p
                | beam.io.ReadFromPubSub(
                    subscription=known_args.input_subscription)
                .with_output_types(bytes))
  else:
    messages = (p
                | beam.io.ReadFromPubSub(topic=known_args.input_topic)
                .with_output_types(bytes))


  def print_row(row):
    # Debugging helper (unused in the pipeline below).
    print(type(row))

  # Parse each Pub/Sub message as a GCS object-change notification.
  file_metadata_pcoll = (messages
                         | 'decode' >> beam.Map(lambda x: json.loads(x.decode('utf-8'))))

  # Open each referenced file; iterating the handle yields its lines as bytes.
  lines = (file_metadata_pcoll
           | 'read_file' >> beam.FlatMap(
               lambda metadata: FileSystems.open(
                   'gs://%s/%s' % (metadata['bucket'], metadata['name']))))


  # Run sentiment analysis on each comma-separated chunk of a line.
  class Split(beam.DoFn):
    def process(self, element):
        element = element.rstrip(b'\n')
        texts = element.split(b',')
        client = language.LanguageServiceClient()
        result = []
        for dat in texts:
            document = types.Document(
                content=dat, type=enums.Document.Type.PLAIN_TEXT)
            sent_analysis = client.analyze_sentiment(document=document)
            sentiment = sent_analysis.document_sentiment
            result.append([(dat, sentiment.score)])
        return result

  # Format each (sentence, score) tuple into a CSV line.
  class WriteToCSV(beam.DoFn):
    def process(self, element):
        return ['{},{}'.format(element[0][0], element[0][1])]


  class WriteToGCS(beam.DoFn):
    def __init__(self, outdir):
        # Note: the outdir argument is ignored and the path is hard-coded.
        source_date = datetime.now().strftime("%Y%m%d-%H%M%S")
        self.outdir = "gs://bucket-name/output" + format(source_date) + '.txt'

    def process(self, element):
        # FileSystems.create opens a brand-new file each time, so every
        # element overwrites whatever was written before.
        writer = FileSystems.create(self.outdir, 'text/plain')
        writer.write(element)
        writer.close()

  sentiment_analysis = (lines
                        | 'split' >> beam.ParDo(Split())
                        | beam.WindowInto(window.FixedWindows(15, 0)))

  format_csv = (sentiment_analysis
                | 'CSV formatting' >> beam.ParDo(WriteToCSV())
                | 'encode' >> beam.Map(lambda x: x.encode('utf-8')).with_output_types(bytes)
                | 'Save file' >> beam.ParDo(WriteToGCS(known_args.output)))


  result = p.run()
  result.wait_until_finish()

if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  run()

So instead of getting this:

<sentence 1> <score>
<sentence 2> <score>
...
<sentence n> <score>

I just get this:

<sentence n> <score>

I think I just need some minor fixes. I am stuck, please help.

FileSystems.create always creates a new file (beam.filesystems has no append mode), and your WriteToGCS DoFn calls it with the same path for every element, so each write replaces the previous one. For this, you could try using beam.io.textio.WriteToText instead:

messages = (p
    | "Read From PubSub" >> beam.io.ReadFromPubSub(
        subscription=known_args.input_subscription)
    | "Write to GCS" >> beam.io.WriteToText(
        'gs://<your_bucket>/<your_file>',
        file_name_suffix='.txt',
        append_trailing_newlines=True,
        shard_name_template=''))

Because shard_name_template='' disables sharding, this will give you a single file as the output when you finish your streaming job.
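
If you need files to appear continuously while the streaming job is still running, rather than one file when it finishes, windowed writes with apache_beam.io.fileio can also work. This is a minimal sketch, assuming a recent Beam SDK that includes the fileio module; formatted_lines (a PCollection of CSV strings), the bucket path, and the 60-second window size are placeholders:

import apache_beam as beam
import apache_beam.transforms.window as window
from apache_beam.io import fileio

# Window the unbounded stream so each window's elements are grouped
# into their own output file(s) instead of one global write.
windowed = (formatted_lines
            | 'window' >> beam.WindowInto(window.FixedWindows(60)))

# WriteToFiles appends every element in a window to the same file,
# so nothing is overwritten; the default sink emits one text line
# per element.
results = (windowed
           | 'write windowed files' >> fileio.WriteToFiles(
               path='gs://<your_bucket>/output/',
               file_naming=fileio.default_file_naming(
                   prefix='sentiment', suffix='.txt')))

With this approach the custom WriteToGCS DoFn is not needed at all, since file creation and naming are handled by the sink.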

Hope it helps!
