Applying fine-grained control over scale out
I have a blob-triggered function that takes the contents of the blob and sends it to an SFTP drop location.
Depending on the SFTP destination, I need to be able to control the scale out.
For example: if destination == 'sftp.alex.com', scale the function out to no more than 5 instances; if destination == 'sftp.othersite.com', scale out to no more than 20.
Blob example:
{
"payload":"binary-formatted string",
"destination":"sftp.alex.com"
}
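For illustration, reading this blob and looking up the desired per-destination cap might be sketched as follows. The destination names and limits come from the question; the lookup helper and the default fallback are assumptions, not an Azure Functions API:

```python
import json

# Hypothetical per-destination scale-out caps, taken from the question.
DESTINATION_LIMITS = {
    "sftp.alex.com": 5,
    "sftp.othersite.com": 20,
}
DEFAULT_LIMIT = 1  # assumed fallback for destinations not listed above

def max_instances_for(blob_text: str) -> int:
    """Parse the blob JSON and return the desired instance cap for its destination."""
    blob = json.loads(blob_text)
    return DESTINATION_LIMITS.get(blob["destination"], DEFAULT_LIMIT)

blob_text = '{"payload": "binary-formatted string", "destination": "sftp.alex.com"}'
print(max_instances_for(blob_text))  # 5
```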
Is this type of fine-grained control over the scale out of the Azure Function available?
Is this type of fine-grained control over the scale out of the Azure Function available?
No, this is not possible.
On the Consumption plan, you can't directly affect the scaling algorithm of Azure Functions. This serverless plan scales automatically, and you're charged for compute resources only when your functions are running.
On an App Service plan, you can scale out the instance count manually or automatically.
The Blob trigger specifically has some known limitations; in particular, there may be delays in processing blobs. For faster scaling, I suggest using Event Grid triggers, which should scale well for both your "priority" and "non-priority" customers. Refer to this similar issue.
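Since the platform itself won't throttle per message, one workaround is to enforce the per-destination limit inside the function code. The sketch below is an assumption, not an Azure API: it caps concurrent SFTP uploads with a per-destination semaphore, which limits concurrency only within a single instance, not the instance count:

```python
import threading

# Per-destination concurrency caps (values taken from the question).
LIMITS = {"sftp.alex.com": 5, "sftp.othersite.com": 20}

# One semaphore per destination; bounds in-flight uploads in this process only.
_semaphores = {dest: threading.BoundedSemaphore(n) for dest, n in LIMITS.items()}

def upload_with_cap(destination: str, do_upload) -> None:
    """Run do_upload() while holding the destination's semaphore slot."""
    sem = _semaphores[destination]
    with sem:
        do_upload()
```

A hypothetical `do_upload` callable would wrap the actual SFTP transfer; calls beyond the cap for a destination block until a slot frees up.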