For the life of me I cannot figure out what is going on here.
I am starting a Glue job via Boto3 (from Lambda, but testing locally gives exactly the same issue), and when I pass parameters in via the `start_job_run` API I get the same error, yet in the logs the parameters all look correct. Here is the output (I have changed some names, such as the buckets).
Glue code (sample):

import sys
import boto3
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext


def main():
    args = getResolvedOptions(sys.argv, [
        'JOB_NAME',
        's3_bucket',
        's3_temp_prefix',
        's3_schema_prefix',
        's3_processed_prefix',
        'ingestion_run_id'
    ])
    sc = SparkContext()
    glueContext = GlueContext(sc)
    logger = glueContext.get_logger()
    job = Job(glueContext)
    job.init(args['JOB_NAME'], args)

    s3_client = boto3.client('s3')
    s3_bucket = args['s3_bucket']
    temp_prefix = args['s3_temp_prefix']
    schema_prefix = args['s3_schema_prefix']
    processed_prefix = args['s3_processed_prefix']
    ingestion_run_id = args['ingestion_run_id']

    logger.info(f's3_bucket: {s3_bucket}')
    logger.info(f'temp_prefix: {temp_prefix}')
    logger.info(f'schema_prefix: {schema_prefix}')
    logger.info(f'processed_prefix: {processed_prefix}')
    logger.info(f'ingestion_run_id: {ingestion_run_id}')
SAM template to create the Glue job:

  CreateDataset:
    Type: AWS::Glue::Job
    Properties:
      Command:
        Name: glueetl
        PythonVersion: 3
        ScriptLocation: !Sub "s3://bucket-name/GLUE/create_dataset.py"
      DefaultArguments:
        "--extra-py-files": "s3://bucket-name/GLUE/S3GetKeys.py"
        "--enable-continuous-cloudwatch-log": ""
        "--enable-metrics": ""
      GlueVersion: 2.0
      MaxRetries: 0
      Role: !GetAtt GlueRole.Arn
      Timeout: 360
      WorkerType: Standard
      NumberOfWorkers: 15
Code to attempt to start the Glue job:

import boto3

session = boto3.session.Session(profile_name='glue_admin', region_name=region)
client = session.client('glue')

name = 'CreateDataset-1uPuNfIw1Tjd'
args = {
    "--s3_bucket": 'bucket-name',
    "--s3_temp_prefix": 'TEMP',
    "--s3_schema_prefix": 'SCHEMA',
    "--s3_processed_prefix": 'PROCESSED',
    "--ingestion_run_id": 'FakeRun'
}
client.start_job_run(JobName=name, Arguments=args)
This starts the job fine, but then the script errors, and this is the log left behind. From what I can see, the parameters seem to be lined up correctly?
Wed Feb 10 09:16:00 UTC 2021/usr/bin/java -cp /opt/amazon/conf:/opt/amazon/lib/hadoop-lzo/*:/opt/amazon/lib/emrfs-lib/*:/opt/amazon/spark/jars/*:/opt/amazon/superjar/*:/opt/amazon/lib/*:/opt/amazon/Scala2.11/* com.amazonaws.services.glue.PrepareLaunch --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.minExecutors=1 --conf spark.dynamicAllocation.maxExecutors=29 --conf spark.executor.memory=5g --conf spark.executor.cores=4 --conf spark.driver.memory=5g --JOB_ID j_76c49a0d580594d5c0f584458cc0c9d519 --enable-metrics --extra-py-files s3://bucket-name/GLUE/S3GetKeys.py --JOB_RUN_ID jr_c0b9049abf1ee1161de189a901dd4be05694c1c42863 --s3_schema_prefix SCHEMA --enable-continuous-cloudwatch-log --s3_bucket bucket-name --scriptLocation s3://bucket-name/GLUE/create_dataset.py --s3_temp_prefix TEMP --ingestion_run_id FakeRun --s3_processed_prefix PROCESSED --JOB_NAME CreateDataset-1uPuNfIw1Tjd
Bucket name has been altered for this post but it matches exactly.
Fail point in the Glue job log:
java.lang.IllegalArgumentException: For input string: "--s3_bucket"
The bucket name has no illegal characters, though it does contain '-'?
Thanks in advance for any help.
This happened because the --enable-continuous-cloudwatch-log argument expects a value, and since you didn't provide one, the argument parser assumed the next token was its value (--enable-continuous-cloudwatch-log --s3_bucket). In this case that token was --s3_bucket, which is an invalid value for the --enable-continuous-cloudwatch-log option, hence the error.
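As a minimal sketch (this is not Glue's actual launcher code, just an illustration of the failure mode), here is how a greedy key/value pairing of the launch arguments mis-aligns everything once one flag arrives without a value:

```python
def pair_args(argv):
    """Pair up a flat argument list, assuming every flag takes a value."""
    pairs = {}
    i = 0
    while i < len(argv):
        key = argv[i]
        # Greedily take the next token as this flag's value
        value = argv[i + 1] if i + 1 < len(argv) else None
        pairs[key] = value
        i += 2
    return pairs

argv = ["--enable-continuous-cloudwatch-log", "--s3_bucket", "bucket-name"]
print(pair_args(argv))
# "--s3_bucket" is consumed as the logging flag's value, and
# "bucket-name" is left dangling as a key with no value
```

The usual fix is to give the flag an explicit value in DefaultArguments, i.e. "--enable-continuous-cloudwatch-log": "true" (and likewise "--enable-metrics": "true"), which is how the AWS Glue special-parameter documentation says these flags should be enabled, so the parser never has to guess.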