
Coverity Scan on GitLab pipeline failing

I am trying to run a Coverity scan on Python files; the job is automated as a pipeline on GitLab.

Running the scan on a runner with the following Kubernetes configuration:

  cpuLimit: 1500m
  # cpuLimitOverwriteMaxAllowed: 400m
  memoryLimit: 3Gi
  # memoryLimitOverwriteMaxAllowed: 512Mi
  cpuRequests: 1500m
  # cpuRequestsOverwriteMaxAllowed: 200m
  memoryRequests: 1500Mi
  # memoryRequestsOverwriteMaxAllowed: 256Mi
   
  resources:
    limits:
      memory: 3Gi
      cpu: 1500m
    requests:
      memory: 3Gi
      cpu: 1500m

I am running the following commands:

  1. cov-capture --source-dir $CI_PROJECT_DIR --dir idir
  2. cov-analyze --dir idir

The second command fails with the error below.
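For context, a minimal GitLab CI job running these two commands might look like the following sketch. The job name, image, and resource-override variables are assumptions; the `KUBERNETES_*` overrides only take effect if the runner's `*OverwriteMaxAllowed` settings permit them.

```yaml
coverity_scan:
  stage: test
  # Image name is an assumption -- use whichever image ships your Coverity tools.
  image: my-registry/coverity-tools:latest
  variables:
    # Per-job overrides for the Kubernetes executor (subject to the
    # runner's ...OverwriteMaxAllowed limits shown above).
    KUBERNETES_MEMORY_LIMIT: 3Gi
    KUBERNETES_CPU_LIMIT: 1500m
  script:
    - cov-capture --source-dir "$CI_PROJECT_DIR" --dir idir
    - cov-analyze --dir idir
```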

I suspect this has something to do with my Kubernetes pod CPU and memory limits. Please suggest?

(screenshot of the cov-analyze error output)

Almost certainly, the problem is that the analysis has exceeded the configured 3 GiB memory limit and has been killed by the OOM killer. You can confirm that diagnosis by using the kubectl describe pod command as described, for example, here or here.
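Concretely, a check like the following should show the pod's last termination state (pod name and namespace are placeholders):

```sh
# Look for "Last State: Terminated" with "Reason: OOMKilled"
kubectl describe pod <runner-pod-name> -n <gitlab-runner-namespace>

# Or query the terminated reason directly:
kubectl get pod <runner-pod-name> -n <gitlab-runner-namespace> \
  -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
```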

The analysis memory requirements are documented at Coverity Memory and CPU Requirements, with the basic formula being:

1.0 GiB + (0.5 GiB * number of analysis workers)

The console output shown in your screenshot says:

Using 32 workers as limited by CPU(s)

That means, per the formula, that the analysis needs 17 GiB. And it really will use all of that memory, even for a relatively small program, because it starts all of the workers immediately and each one will use its allotted 0.5 GiB.
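The arithmetic can be sketched directly from the documented formula (the function names here are just for illustration):

```python
def analysis_memory_gib(workers: int) -> float:
    """Approximate cov-analyze memory need per the documented formula:
    1.0 GiB base + 0.5 GiB per analysis worker."""
    return 1.0 + 0.5 * workers

def max_workers(memory_limit_gib: float) -> int:
    """Largest worker count that fits within the given memory limit."""
    return int((memory_limit_gib - 1.0) // 0.5)

print(analysis_memory_gib(32))  # 32 workers -> 17.0 GiB
print(max_workers(3.0))         # a 3 GiB limit allows at most 4 workers
```

Note that 4 workers lands exactly on the 3 GiB limit, which is why starting with 3 leaves some headroom.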

By default, cov-analyze starts as many workers as there are CPU cores on the machine. To stay under 3 GiB, you will need to pass --jobs 4 in order to limit the number of workers to 4. (I'd start with --jobs 3 to be on the safe side.)
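So the adjusted analyze step would look something like:

```sh
cov-analyze --dir idir --jobs 4
# or, leaving headroom under the 3 GiB limit:
cov-analyze --dir idir --jobs 3
```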
