
How can I tell Dataflow to "use_unsupported_python_version" with PipelineOptions?

I'm trying to use Google Dataflow to transfer data from one BigQuery table to another:

import apache_beam as beam
from apache_beam.io.gcp.internal.clients import bigquery
from apache_beam.options.pipeline_options import PipelineOptions

import argparse

def parseArgs():
  parser = argparse.ArgumentParser()
  parser.add_argument(
    '--experiment',
    default='use_unsupported_python_version',
    help='This does not seem to do anything.')
  args, beam_args = parser.parse_known_args()
  return beam_args

def beamer(rows=[]):
  if len(rows) == 0:
    return

  project = 'myproject-474601'
  gcs_temp_location = 'gs://my_temp_bucket/tmp'
  gcs_staging_location = 'gs://my_temp_bucket/staging'

  table_spec = bigquery.TableReference(
    projectId=project,
    datasetId='mydataset',
    tableId='test')
  beam_options = PipelineOptions(
    parseArgs(), # This doesn't seem to work.
    project=project,
    runner='DataflowRunner',
    job_name='unique-job-name',
    temp_location=gcs_temp_location,
    staging_location=gcs_staging_location,
    use_unsupported_python_version=True, # This doesn't work either. :(
    experiment='use_unsupported_python_version' # This also doesn't work.
  )

  with beam.Pipeline(options=beam_options) as p:
    quotes = p | beam.Create(rows)

    quotes | beam.io.WriteToBigQuery(
      table_spec,
      # custom_gcs_temp_location = gcs_temp_location, # Not needed?
      method='FILE_LOADS',
      write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
      create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
  return

if __name__ == '__main__':
    beamer(rows=[{'id': 'ein', 'value': None, 'year': None, 'valueHistory': [{'year': 2021, 'amount': 900}]}])

But apparently Dataflow doesn't support my Python version because I'm getting this error:

Exception: Dataflow runner currently supports Python versions ['3.6', '3.7', '3.8'], got 3.9.7 (default, Sep 16 2021, 08:50:36) 
[Clang 10.0.0 ].
To ignore this requirement and start a job using an unsupported version of Python interpreter, pass --experiment use_unsupported_python_version pipeline option.

So I tried adding a use_unsupported_python_version parameter to PipelineOptions, to no avail. I also tried an experiment option. The official pipeline options docs show parsed command-line args being merged into PipelineOptions, so I tried that too.

Yet I continue to get the same unsupported version error. How can I get Dataflow to use my version of Python?

Try passing experiments=['use_unsupported_python_version']. You can delete your implementation of parseArgs as well; as written it never forwards the experiment to Beam, because argparse consumes the registered --experiment flag into its own namespace, so beam_args comes back empty and the default value is silently dropped.
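Based on that answer, a minimal sketch of the corrected options (reusing the project and bucket names from the question):

from apache_beam.options.pipeline_options import PipelineOptions

# experiments is a list-valued option, so it is passed under the plural
# name as a list, not as a bare experiment= string.
beam_options = PipelineOptions(
  project='myproject-474601',
  runner='DataflowRunner',
  job_name='unique-job-name',
  temp_location='gs://my_temp_bucket/tmp',
  staging_location='gs://my_temp_bucket/staging',
  experiments=['use_unsupported_python_version'],
)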

Alternatively, pass the option as a command-line style flag: '--experiment=use_unsupported_python_version'.
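PipelineOptions also accepts a flags list as its first argument and parses it like sys.argv, so the flag form can be set in code as well; a sketch with the same values as above:

from apache_beam.options.pipeline_options import PipelineOptions

# The flags list is parsed like command-line arguments, so this is
# equivalent to passing experiments=[...] as a keyword argument.
beam_options = PipelineOptions(
  ['--experiment=use_unsupported_python_version'],
  project='myproject-474601',
  runner='DataflowRunner',
  job_name='unique-job-name',
  temp_location='gs://my_temp_bucket/tmp',
  staging_location='gs://my_temp_bucket/staging',
)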
