Writing a pandas DataFrame with a nested structure to DynamoDB using Python and AWS Lambda

I am trying to write a pandas DataFrame to a DynamoDB table. The frame contains nested objects; the target item looks like this (DynamoDB JSON):
{
"PK": {
"S": "2"
},
"SK": {
"S": "INFO"
},
"001": {
"M": {
"New_Some": {
"N": "6"
},
"New_Some1": {
"N": "2"
},
"New_Some2": {
"N": "1"
},
"New_Some3": {
"N": "1"
}
}
},
"status": {
"S": "New"
},
"ModelVals": {
"L": [
{
"M": {
"Models": {
"L": [
{
"S": "ABC123"
}
]
},
"class": {
"S": "XYZ222"
}
}
}
]
}
}
My pandas DataFrame columns contain lists and dicts like the following:
column1- ["mfg_nom", "mfg_nom", "mfg_nom", "mfg_nom"]
column2 - ["ZZY", "ZZY", "XYZ", "XYZ"]
column3 - ["1", "2", "2", "1"]
and
column4 - {"New_Some": "0.000"}
column5 - {"New_Some1": "636.000"}
column6 - {}
column7 - {"Insta": 7, "Other": 7}
The DataFrame contains multiple nested lists and dicts. How can I build such an item and insert it into DynamoDB? So far I have tried the code below, but it only works for strings, not arrays:
# Imports needed by this snippet; `s3_path`, `dataset`, `path_suffix`
# and the `table` resource are defined elsewhere in my function.
import json
from decimal import Decimal

import awswrangler as wr

df = wr.s3.read_parquet(path=s3_path,
                        dataset=dataset,
                        path_suffix=path_suffix).isna()

with table.batch_writer() as batch:
    for index, row in df.iterrows():
        print(row.to_json())
        batch.put_item(Item=json.loads(row.to_json(), parse_float=Decimal))
I get this error:
"An error occurred (ValidationException) when calling the BatchWriteItem operation: The provided key element does not match the schema"
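As background on the `batch.put_item` approach attempted above: boto3's `Table` resource accepts plain Python dicts, but rejects `float` values (they must be `Decimal`), and every item must include the table's key attributes (`PK` and `SK` in the item shown earlier). A minimal sketch of recursively converting one nested row into DynamoDB-safe types; `to_dynamo` and the sample row are hypothetical names for illustration:

```python
import math
from decimal import Decimal


def to_dynamo(obj):
    """Recursively convert a value into types boto3's Table resource accepts:
    floats become Decimal (boto3 rejects float), NaN becomes None,
    and dicts/lists are walked element by element."""
    if isinstance(obj, float):
        if math.isnan(obj):
            return None
        return Decimal(str(obj))
    if isinstance(obj, dict):
        return {k: to_dynamo(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_dynamo(v) for v in obj]
    return obj


# A sample row shaped like the DataFrame in the question,
# including the key attributes PK and SK.
row = {
    "PK": "2",
    "SK": "INFO",
    "status": "New",
    "column4": {"New_Some": 0.0},
    "column7": {"Insta": 7, "Other": 7},
}
item = to_dynamo(row)
# `item` can now be passed to batch.put_item(Item=item)
```

The key-schema ValidationException above typically means the converted item is missing (or has renamed) the `PK`/`SK` attributes, so it is worth printing one converted item and checking the key attributes survived.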
My suggestion is to use AWS Glue instead of Lambda; it has a built-in DynamoDB connector that lets you read from S3 and write directly to DynamoDB.
However, if you must use Lambda, you can use awswrangler's DynamoDB support:
https://aws-sdk-pandas.readthedocs.io/en/stable/stubs/awswrangler.dynamodb.put_df.html
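A minimal sketch of that approach, assuming the table's keys are `PK` and `SK` as in the item above; the `write_frame` helper and the sample table name are illustrative, not part of the awswrangler API:

```python
import pandas as pd


def write_frame(df: pd.DataFrame, table_name: str) -> None:
    """Write each DataFrame row as one DynamoDB item via awswrangler.

    put_df handles the type conversion itself: nested dicts/lists in
    object columns are stored as DynamoDB map/list types.
    """
    # Imported lazily, a common pattern to keep Lambda cold starts fast.
    import awswrangler as wr

    wr.dynamodb.put_df(df=df, table_name=table_name)


# A frame shaped like the question's data, including the key columns.
df = pd.DataFrame({
    "PK": ["2"],
    "SK": ["INFO"],
    "status": ["New"],
    "column7": [{"Insta": 7, "Other": 7}],
})

# write_frame(df, "my-table")  # "my-table" is a placeholder name
```

Note that `put_df` still requires the DataFrame to contain the table's key columns with matching names, so the same key-schema check applies here as with `batch_writer`.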