
Long cold start on AWS Lambda python function

I have converted a very simple Flask app (ie two simple routes that run a SQL query and render the results as a table) to a Lambda function with Zappa. The deployed package is around 20MB. Because the traffic will be very low, I am not using any warming mechanism for my function.

The function only requires 128MB of memory, and when it runs at that setting after all previous instances have been destroyed, the cold start takes approximately 16 seconds.

This intuitively seems like a long time, and it conflicts with what I have read (eg here), which suggests that Python functions not in a VPC have relatively low cold-start latency.

If I add memory to the function, the cold start time seems to decrease linearly. Again, this conflicts with what I have read (eg here, as above), which says memory is not a factor in cold-start latency. This is my table of invocation times: [table of cold-start durations by memory size]

Should I be surprised by these results, or am I missing something?

Thanks

Stephen

I'm not (so) surprised. Keep in mind that below roughly 1.8 GB of RAM a function gets less than one full vCPU, and CPU power grows linearly with allocated memory.

Try this tool for fine-tuning your Lambda memory/power and cost. If you don't want to raise the memory, try using Provisioned Concurrency to cut cold starts.
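For example, Provisioned Concurrency can be configured from the AWS CLI. This is a minimal sketch; the function name `my-flask-app` and alias `prod` are placeholders (Provisioned Concurrency must target a published version or alias, not `$LATEST`):

```shell
# Keep one execution environment pre-initialized so requests
# hitting it never pay the cold-start penalty.
aws lambda put-provisioned-concurrency-config \
  --function-name my-flask-app \
  --qualifier prod \
  --provisioned-concurrent-executions 1
```

Note that Provisioned Concurrency is billed for the time it is configured, even with zero traffic, so for a very low-traffic function it may cost more than simply raising the memory setting.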

PS: are you sure those times are due to cold starts?

Before calling the handler function, the underlying CPU isn't throttled (see this re:Invent video). Since the billed duration decreases as you increase the memory, my guess is that you have written function definitions (or other heavy initialization) inside the handler, which runs with its CPU throttled in proportion to the memory allocated to the function and therefore takes longer.

Try defining all functions and static variables outside the handler, and keep the handler code minimal. This ensures Lambda spends more time outside the handler, at full CPU capacity, before invoking the handler, where the CPU is throttled.
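A minimal sketch of that restructuring, assuming a hypothetical `expensive_setup()` standing in for whatever the Flask/Zappa app does at startup (imports, DB connections, etc.):

```python
import json

def expensive_setup():
    # Placeholder for heavy work: importing large libraries,
    # opening DB connection pools, loading config, and so on.
    return {"table": "results"}

# Module-level code runs once per cold start, during the init
# phase, where the sandbox is not CPU-throttled by memory size.
CONFIG = expensive_setup()

def handler(event, context):
    # Keep the handler thin: it runs with CPU allocated in
    # proportion to the configured memory.
    return {
        "statusCode": 200,
        "body": json.dumps({"table": CONFIG["table"]}),
    }
```

With this layout, repeat invocations on a warm environment reuse `CONFIG` instead of re-running the setup on every request.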

The best way to learn more is to profile your function code using X-Ray segments to see where the function spends its time. This will paint a clearer picture of whether these are indeed cold starts, or just the function taking longer.

Note: Cold-start durations are not counted towards your function duration metrics; they show up as "Init Duration" when you enable X-Ray tracing.
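Enabling active tracing is a one-line change; again `my-flask-app` is a placeholder function name:

```shell
# Turn on X-Ray active tracing so invocations record segments,
# including an "Init Duration" for cold starts.
aws lambda update-function-configuration \
  --function-name my-flask-app \
  --tracing-config Mode=Active
```

After this, the X-Ray console (or the REPORT line in CloudWatch Logs) separates init time from handler time, which directly answers whether the 16 seconds is cold-start overhead or handler work.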

