Writing a pandas DataFrame with a nested structure to DynamoDB using Python and AWS Lambda
I am trying to write a pandas DataFrame to a DynamoDB table. The frame contains nested objects:
{
    "PK": {"S": "2"},
    "SK": {"S": "INFO"},
    "001": {
        "M": {
            "New_Some": {"N": "6"},
            "New_Some1": {"N": "2"},
            "New_Some2": {"N": "1"},
            "New_Some3": {"N": "1"}
        }
    },
    "status": {"S": "New"},
    "ModelVals": {
        "L": [
            {
                "M": {
                    "Models": {"L": [{"S": "ABC123"}]},
                    "class": {"S": "XYZ222"}
                }
            }
        ]
    }
}
My pandas DataFrame columns contain lists and dictionaries like the following:
column1 - ["mfg_nom", "mfg_nom", "mfg_nom", "mfg_nom"]
column2 - ["ZZY", "ZZY", "XYZ", "XYZ"]
column3 - ["1", "2", "2", "1"]
and
column4 - {"New_Some": "0.000"}
column5 - {"New_Some1": "636.000"}
column6 - {}
column7 - {"Insta": 7, "Other": 7}
The DataFrame contains multiple nested lists and dictionaries. How can I build each row as an item to insert into DynamoDB? So far I have tried the code below, but it only works for strings, not for arrays:
df = wr.s3.read_parquet(path=s3_path,
                        dataset=dataset,
                        path_suffix=path_suffix).isna()

with table.batch_writer() as batch:
    for index, row in df.iterrows():
        print(row.to_json())
        batch.put_item(json.loads(row.to_json(), parse_float=Decimal))
I get this error:
"An error occurred (ValidationException) when calling the BatchWriteItem operation: The provided key element does not match the schema",
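This ValidationException generally means the item being written is missing the table's key attributes (here PK/SK) or they have the wrong type. Note that the trailing `.isna()` in the snippet above replaces every value with a boolean, so the PK/SK strings are lost before the write ever happens. A minimal sketch of turning one row into a DynamoDB-friendly item, dropping NaN/None values and converting floats to `Decimal` (`row_to_item` is a hypothetical helper name, not part of any library):

```python
import json
import math
from decimal import Decimal

def row_to_item(row: dict) -> dict:
    """Convert one DataFrame row (as a plain dict) into a
    DynamoDB-compatible item: drop None/NaN values and turn
    floats into Decimal, which DynamoDB requires for numbers."""
    cleaned = {k: v for k, v in row.items()
               if v is not None
               and not (isinstance(v, float) and math.isnan(v))}
    # Round-trip through JSON so floats nested inside lists and
    # dicts are also parsed back as Decimal.
    return json.loads(json.dumps(cleaned), parse_float=Decimal)

item = row_to_item({"PK": "2", "SK": "INFO",
                    "001": {"New_Some": 6.0, "New_Some1": 2.0},
                    "status": None})
```

After this conversion, `batch.put_item(Item=item)` receives string keys and `Decimal` numbers, which matches what boto3 expects.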
My suggestion would be to use AWS Glue instead of Lambda: it has a built-in DynamoDB connector that lets you read from S3 and write directly to DynamoDB.
However, if you must use Lambda, you can use awswrangler's DynamoDB module:
https://aws-sdk-pandas.readthedocs.io/en/stable/stubs/awswrangler.dynamodb.put_df.html
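A minimal sketch of the `put_df` approach is below. The table name `my-table` is a placeholder, and the actual AWS call is gated behind a flag since it needs credentials and a real table:

```python
from decimal import Decimal

import pandas as pd

# Build a frame whose nested column already uses Decimal,
# since DynamoDB rejects native Python floats.
df = pd.DataFrame([
    {"PK": "2", "SK": "INFO", "status": "New",
     "001": {"New_Some": Decimal("6"), "New_Some1": Decimal("2")}},
])

RUN_AWS = False  # flip to True with AWS credentials configured
if RUN_AWS:
    import awswrangler as wr
    # Writes one item per DataFrame row to the DynamoDB table.
    wr.dynamodb.put_df(df=df, table_name="my-table")
```

`put_df` handles the per-row serialization for you, so nested dict columns go in as DynamoDB maps without a manual `batch_writer` loop.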