I am trying to run a Coverity scan on Python files; the job is automated as a pipeline on GitLab. The scan runs on a runner with the following Kubernetes configuration:
```yaml
cpuLimit: 1500m
# cpuLimitOverwriteMaxAllowed: 400m
memoryLimit: 3Gi
# memoryLimitOverwriteMaxAllowed: 512Mi
cpuRequests: 1500m
# cpuRequestsOverwriteMaxAllowed: 200m
memoryRequests: 1500Mi
# memoryRequestsOverwriteMaxAllowed: 256Mi
resources:
  limits:
    memory: 3Gi
    cpu: 1500m
  requests:
    memory: 3Gi
    cpu: 1500m
```
Running the below commands:
I suspect this has something to do with my Kubernetes pod CPU and memory limits. Please suggest?
Almost certainly, the problem is that the analysis has exceeded the configured 3 GiB memory limit and has been killed by the OOM killer. You can confirm that diagnosis by using the kubectl describe pod command, as described, for example, here or here.
The analysis memory requirements are documented at Coverity Memory and CPU Requirements, with the basic formula being:
1.0 GiB + (0.5 GiB * number of analysis workers)
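As a quick sanity check, the formula can be evaluated directly (a minimal sketch; the 1.0 GiB base and 0.5 GiB-per-worker figures come from the formula above):

```python
def analysis_memory_gib(workers: int) -> float:
    # Coverity's documented estimate: 1.0 GiB base + 0.5 GiB per analysis worker.
    return 1.0 + 0.5 * workers

print(analysis_memory_gib(32))  # 17.0 -- far above a 3 GiB pod limit
print(analysis_memory_gib(4))   # 3.0  -- right at the limit
```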
The console output shown in your screenshot says:
Using 32 workers as limited by CPU(s)
That means, per the formula, that the analysis needs 1.0 GiB + (0.5 GiB * 32) = 17 GiB. And it really will use all of that memory, even for a relatively small program, because it starts all of the workers immediately and each one will use its allotted 0.5 GiB.
By default, cov-analyze starts as many workers as there are CPU cores on the machine. To stay under 3 GiB, you will need to pass --jobs 4 in order to limit the number of workers to 4. (I'd start with --jobs 3 to be on the safe side.)
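The safe worker count can also be derived by inverting the formula against the pod's memory limit (a sketch under the same assumed figures as the formula above):

```python
import math

def max_workers(memory_limit_gib: float) -> int:
    # Invert 1.0 + 0.5 * workers <= limit; keep at least one worker.
    return max(1, math.floor((memory_limit_gib - 1.0) / 0.5))

# A 3 GiB limit allows at most 4 workers, and 4 workers sit exactly at
# the limit -- which is why --jobs 3 leaves some headroom for the rest
# of the process.
print(max_workers(3.0))  # 4
```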