How to get a better log from a BigQuery schema error
I ran into the same problem: "Error while reading data, error message: JSON table encountered too many errors, giving up. Rows", and I'm fairly sure it is related to the schema:
RuntimeError: BigQuery job beam_bq_job_LOAD_AUTOMATIC_JOB_NAME_LOAD_STEP_... failed. Error Result: <ErrorProto location: 'gs://dataflow/tmp/bq_load/some_file'
message: 'Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details. File: gs://some_file'
reason: 'invalid'> [while running 'WriteToBigQuery/BigQueryBatchFileLoads/WaitForDestinationLoadJobs-ptransform-27']
The problem here is that I have a large schema (I'm running a Dataflow job), and manually checking it for small mistakes is tedious. Is there any way to see a better error message / get more logs that actually pinpoint which part of the schema is wrong?
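As a side note, when a load job fails you can pull the full errors[] collection from the BigQuery job itself, which is often more specific than the one-line summary surfaced by Beam. A minimal sketch: the fetch shown in the comments requires the google-cloud-bigquery client, and the sample error entries below are hypothetical, shaped like BigQuery ErrorProto records.

```python
# Sketch: summarizing the errors[] collection of a failed BigQuery load job.
# Fetching the real job requires the google-cloud-bigquery client, e.g.:
#   from google.cloud import bigquery
#   job = bigquery.Client().get_job(job_id, location="EU")
#   errors = job.errors  # list of ErrorProto-like dicts, or None
# The sample entries below are hypothetical illustrations of that shape.

def summarize_job_errors(errors):
    """Turn BigQuery's errors[] entries into readable one-line summaries."""
    lines = []
    for err in errors or []:
        lines.append(
            f"{err.get('reason', 'unknown')} @ {err.get('location', '?')}: "
            f"{err.get('message', '')}"
        )
    return lines

sample_errors = [
    {"reason": "invalid", "location": "gs://some_file",
     "message": "JSON table encountered too many errors, giving up."},
    {"reason": "invalid", "location": "field: team.city",
     "message": "Could not convert value to string."},
]

for line in summarize_job_errors(sample_errors):
    print(line)
```

The per-error `location` field is where BigQuery usually names the offending file or field, which is exactly the detail missing from the top-level message.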
I often run into the same problem with Beam Python and BigQueryIO; in this case the error is not explicit and does not point to the faulty field in the schema.
To work around this kind of issue, I usually apply schema or object validation to the input elements and send the failing elements to a dead-letter queue. I then sink the errors to a BigQuery table for analysis.
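The validation idea can be sketched without any Beam dependency; in a real pipeline this logic would live in a DoFn whose failing elements go to a tagged side output (the dead-letter queue). The schema and row shapes below are hypothetical examples, not a real table definition.

```python
# Sketch: validate a row dict against an expected BigQuery-style schema
# before writing. Field names and types here are hypothetical examples.

EXPECTED_SCHEMA = {"name": str, "country": str, "city": str}

def validate_row(row, schema=EXPECTED_SCHEMA):
    """Return a list of problems; an empty list means the row is valid."""
    problems = []
    for field, expected_type in schema.items():
        if field not in row:
            problems.append(f"missing field '{field}'")
        elif not isinstance(row[field], expected_type):
            problems.append(
                f"field '{field}' has type {type(row[field]).__name__}, "
                f"expected {expected_type.__name__}"
            )
    for field in row:
        if field not in schema:
            problems.append(f"unexpected field '{field}'")
    return problems

good = {"name": "PSG", "country": "France", "city": "Paris"}
bad = {"name": "PSG", "country": 33}

print(validate_row(good))  # []
print(validate_row(bad))
```

Rows with a non-empty problem list are the ones you route to the dead-letter output instead of WriteToBigQuery, so the load job never sees them.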
I created a library called Asgarde to simplify error handling in Beam:
# Beam pipeline with the Asgarde library.
input_teams: PCollection[str] = p | 'Read' >> beam.Create(team_names)
result = (CollectionComposer.of(input_teams)
          .map('Map with country', lambda tname: TeamInfo(name=tname, country=team_countries[tname], city=''))
          .map('Map with city', lambda tinfo: TeamInfo(name=tinfo.name, country=tinfo.country, city=team_cities[tinfo.name]))
          .filter('Filter french team', lambda tinfo: tinfo.country == 'France'))
result_outputs: PCollection[TeamInfo] = result.outputs
result_failures: PCollection[Failure] = result.failures
The CollectionComposer wrapper is created from a PCollection, and this structure returns:
- a PCollection of good outputs
- a PCollection of failures
Each failure is represented by a Failure object:
from dataclasses import dataclass

@dataclass
class Failure:
    pipeline_step: str
    input_element: str
    exception: Exception
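To sink such failures into BigQuery, each Failure has to be flattened into a serializable row first. A minimal sketch of that step, repeating the Failure definition so the snippet is self-contained; the row layout is my assumption for illustration, not Asgarde's actual serialization:

```python
from dataclasses import dataclass

@dataclass
class Failure:
    pipeline_step: str
    input_element: str
    exception: Exception

def failure_to_row(failure: Failure) -> dict:
    """Flatten a Failure into a plain dict suitable for a BigQuery insert."""
    return {
        "pipeline_step": failure.pipeline_step,
        "input_element": failure.input_element,
        # Exception objects are not serializable; keep type and message only.
        "exception_type": type(failure.exception).__name__,
        "exception_message": str(failure.exception),
    }

row = failure_to_row(Failure(
    pipeline_step="Map with country",
    input_element="UnknownTeam",
    exception=KeyError("UnknownTeam"),
))
print(row)
```

With rows in this shape, the failures PCollection can be written with a plain WriteToBigQuery step alongside the main output.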
You can then sink the Failure PCollection to a BigQuery table for analysis.
You can also have a look at this article, which I'm sharing here as well: Dead letter queue for errors with Beam, Asgarde, Dataflow and alerting in real time.