Old connectionId when new web socket connection triggered by DynamoDB
I see in CloudWatch that only sporadically is the connectionId of a new web socket connection successfully conveyed by an event triggered by a new and correct DynamoDB entry...
What I do:
wss://xxxxxx.execute-api.us-east-1.amazonaws.com/production

A Connect lambda function successfully stores the connectionId in a DynamoDB table on each connection. This connectionId varies per connection.
A Broadcast lambda function. In its current form this Broadcast function (see below) only prints the connectionId conveyed in the event (the result is printed in CloudWatch). Here is the problem: this connectionId keeps being the same, while it should change for every new connection... as if there were a lazy event cache that should be cleared?

A Disconnect lambda function successfully removes the connectionId from the DynamoDB table. I thought this would help prevent mixing up connectionIds, but it does not.
Question: How is it possible that the same connectionId is repeatedly printed for different web socket connections? (see attached images) The DynamoDB tables work as expected, and once triggered should convey the correct table entry/value in the event, shouldn't they?
Side question: Why are some log streams in CloudWatch clustered while others are separate?
Broadcast lambda python code
import json
import boto3

client = boto3.client('apigatewaymanagementapi', endpoint_url="https://xxxxxx.execute-api.us-east-1.amazonaws.com/production")

def lambda_handler(event, context):
    # get connectionId from the DynamoDB stream event
    # note: this always reads the first record of the batch, whatever its type
    print(event['Records'][0]['dynamodb']['Keys']['connectionid']['S'])
CloudWatch logs for different web socket connections
This problem was resolved by filtering the events sent to the Broadcast script for INSERT events. Once the table is listed as a trigger, DynamoDB also sends MODIFY and REMOVE events to the script, and these are therefore logged in CloudWatch as well.
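To illustrate why this fixes it (with made-up connection ids): a single stream invocation can batch several record types, so unconditionally reading Records[0] can surface a stale id from a REMOVE record instead of the new one.

```python
# Hypothetical DynamoDB Streams event: one invocation can batch INSERT,
# MODIFY and REMOVE records, so Records[0] is not necessarily the new entry.
sample_event = {
    'Records': [
        {'eventName': 'REMOVE',
         'dynamodb': {'Keys': {'connectionid': {'S': 'old-conn-id'}}}},
        {'eventName': 'INSERT',
         'dynamodb': {'Keys': {'connectionid': {'S': 'new-conn-id'}}}},
    ]
}

# Keep only the ids coming from INSERT records
inserted = [r['dynamodb']['Keys']['connectionid']['S']
            for r in sample_event['Records']
            if r.get('eventName') == 'INSERT']
print(inserted)  # ['new-conn-id']
```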
The code below does the job in the Broadcast script:
# only look at INSERT DynamoDB events and collect the connectionId
# (note: '== INSERT' rather than "in ('INSERT')", which is a substring test)
if record.get('eventName') == 'INSERT':
    connection_id = record['dynamodb']['Keys']['connectionid']['S']
    print("connectionId: ", connection_id)