Prometheus rate query over a long period
When using the rate function over a long period (e.g. 7d), I am getting the error "query processing would load too many samples into memory in query execution".
My query is:
histogram_quantile(0.90, rate(http_request_in_seconds_bucket[7d]))
This error happens because Prometheus has a limit on the number of samples it can load into memory for a single query.
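To see why a 7d range selector can hit that limit, here is a rough back-of-the-envelope sketch. The 15s scrape interval and the 5000 bucket series are assumed example values (a histogram multiplied across buckets and instances easily reaches thousands of series); 50,000,000 is Prometheus's default sample limit:

```python
# Rough estimate of how many samples one range selector must load.
# SCRAPE_INTERVAL_S and SERIES are hypothetical example values, not
# taken from the original question.
SCRAPE_INTERVAL_S = 15           # assumed scrape interval
SERIES = 5000                    # assumed number of bucket time series
MAX_SAMPLES = 50_000_000         # Prometheus default --query.max-samples

def samples_loaded(range_seconds: int,
                   interval_s: int = SCRAPE_INTERVAL_S,
                   series: int = SERIES) -> int:
    """Samples per series over the range, times the number of series."""
    return (range_seconds // interval_s) * series

seven_days = 7 * 24 * 3600
print(samples_loaded(seven_days))                 # 201600000
print(samples_loaded(seven_days) > MAX_SAMPLES)   # True: over the limit
```

Under these assumptions the 7d query needs roughly 200 million samples, about four times the default limit, which matches the error above.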
I solved this with subqueries, which were added in Prometheus 2.7. They allow you to separately query smaller time intervals and then aggregate the results together.
For example, I changed my query into multiple 24-hour subqueries and then averaged them together:
histogram_quantile(0.90, avg_over_time(rate(http_request_in_seconds_bucket[24h])[7d:12h]))
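To unpack the subquery syntax (this is a sketch of how the query above decomposes, not a change to it): a subquery has the form `<instant_query>[<range>:<resolution>]`, where the inner expression is re-evaluated at each resolution step over the range.

```
# Subquery form (Prometheus >= 2.7):  <instant_query>[<range>:<resolution>]
# Here rate(...[24h]) is evaluated every 12h over the last 7d,
# yielding 14 points that avg_over_time then averages before
# histogram_quantile computes the 90th percentile.
histogram_quantile(
  0.90,
  avg_over_time(
    rate(http_request_in_seconds_bucket[24h])[7d:12h]
  )
)
```

Because each inner `rate` evaluation only loads 24h of samples at a time, no single step exceeds the in-memory sample limit.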
There are other solutions: you can increase the query.max-samples limit to raise the number of samples Prometheus is allowed to process per query (not recommended, since it increases memory pressure on the server).
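As a sketch of that alternative: the limit is set with the `--query.max-samples` command-line flag when starting the server (the flag is real; the value below is an arbitrary example, and doubling the default is not an endorsement):

```
# Start Prometheus with a higher per-query sample limit
# (default: 50000000). Raising it trades safety for memory.
prometheus --config.file=prometheus.yml --query.max-samples=100000000
```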