
cache is not working on all runners and branches

I have an Angular app and I have written a pipeline for it like this:


image: node:16.13.2
variables:
  DOCKER_HOST: myurl
  GIT_STRATEGY: clone
  TAG_LATEST: latest
  TAG_COMMIT: $CI_COMMIT_REF_NAME-$CI_COMMIT_SHORT_SHA

.login_into_nexus: &login_into_nexus
  - echo "Login Into Nexus...."
  - docker login -u $NEXUS_USERNAME -p $NEXUS_PASS $NEXUS_URL

services:
  - docker:dind

stages:
  - build

install-dependency:
  stage: .pre
  script:
    - npm i --prefer-offline # install dependencies
  cache:
    key: "$CI_JOB_NAME"  # note: "{$CI_JOB_NAME}" would be a literal string, not an expanded variable
    paths:
      - node_modules
    policy: pull-push
  artifacts:
    paths:
      - node_modules/

build:
  stage: build
  needs:
    - job: install-dependency
      artifacts: true
  script:
    - npm run build:aot
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE != "push" && $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_REF_NAME == "master"'
    - if: '$CI_PIPELINE_SOURCE != "push" && $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_REF_NAME == "develop"'
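Since the goal is to reuse `node_modules` across branches, a cache key based on the lockfile (rather than the branch or job name) is a common pattern: every branch with identical dependencies then hits the same cache entry. A hedged sketch of the `install-dependency` job, assuming npm and a `package-lock.json` at the repository root:


install-dependency:
  stage: .pre
  script:
    - npm i --prefer-offline
  cache:
    # the key changes only when the lockfile changes, so all
    # branches with the same dependencies share one cache
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    policy: pull-push
  artifacts:
    paths:
      - node_modules/


`cache:key:files` is available from GitLab 12.5, so it works on 13.3.5-ee. Note that this only solves the per-branch key problem; sharing the cache between separate runner hosts still requires distributed caching.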

config.toml:


concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "runner-global-1"
  output_limit = 10000000
  url = "myurl"
  token = "QmeDZw6u2Qa48n6asVHE"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-2"
  output_limit = 10000000
  url = "myurl"
  token = "YYaXwQfLZ-2zSL8eHMGP"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-3"
  output_limit = 10000000
  url = "myurl"
  token = "-EUSye1c7h7tQyEk2VfH"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-4"
  output_limit = 10000000
  url = "myurl"
  token = "S7gPu3r2xVzc2CTZzy7z"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "ruby:2.6"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0

[[runners]]
  name = "runner-global-6"
  output_limit = 10000000
  url = "myurl"
  token = "U_VQCMkj_AN5AfVuWyCR"
  executor = "docker"
  cache_dir = "/tmp/build-cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "ruby:2.6"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/tmp/build-cache:/cache:rw"]
    shm_size = 0




As seen above, the pipeline downloads node modules from my Nexus and installs them in the install-dependency job.

I also have 5 runners on this project, and each one of them can pick up the job. But each runner saves the cache for itself, and when I run the pipeline on another branch, it won't use the cache saved on the other branch.

My GitLab version is: 13.3.5-ee

You must enable distributed caching in order for all your runners to share the same cache. Otherwise, the default is that the cache is local to each runner.
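To expand on that: with the docker executor, the local cache lives on each runner's host (here, `/tmp/build-cache` mounted into the container as `/cache`), so five runners end up with five independent caches. Distributed caching stores the cache archives in object storage that all runners can reach. A minimal sketch of the `[runners.cache]` section for an S3-compatible backend such as MinIO (the endpoint, bucket name, and credentials below are placeholders, not values from the question):


[runners.cache]
  Type = "s3"
  Shared = true                           # share the cache between runners
  [runners.cache.s3]
    ServerAddress = "minio.example.com"   # placeholder: any S3-compatible endpoint
    AccessKey = "ACCESS_KEY"              # placeholder
    SecretKey = "SECRET_KEY"              # placeholder
    BucketName = "runner-cache"           # placeholder
    Insecure = false


With `Shared = true`, cache entries are not scoped to a single runner, so any of the five runners can restore an archive uploaded by another.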

