
Buildah vs Kaniko

I'm using ArgoWorkflow to automate our CI/CD chains. In order to build images and push them to our private registry, we are faced with a choice between buildah and kaniko, but I can't put my finger on the main differences between the two: their pros and cons, and how each tool handles parallel builds and cache management. Can anyone clarify these points, or even suggest another tool that could do the job more simply? Some clarification on the subject would be really helpful. Thanks in advance.

kaniko is very simple to set up and has some magic that lets it work with no special requirements in Kubernetes :)
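
Something like this (an untested sketch; the repository URL, destination image, and the regcred pull secret are placeholders) is enough to run kaniko as a plain, unprivileged Pod:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:v1.7.0
      args:
        - --context=git://github.com/example/repo.git    # placeholder repo
        - --dockerfile=Dockerfile
        - --destination=registry.example.com/app:latest  # placeholder image
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker                     # where the executor reads registry credentials
  volumes:
    - name: docker-config
      secret:
        secretName: regcred                              # placeholder docker-registry secret
        items:
          - key: .dockerconfigjson
            path: config.json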

I also tried buildah but was unable to configure it and found it too complex to set up in a Kubernetes environment.

You can use an internal Docker registry for kaniko's cache management, though local storage can be configured instead (not tried yet). Just use the latest version of kaniko (v1.7.0), which fixes an important bug in cached-layer management.
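
A rough sketch of both caching modes (image names are placeholders; note that --cache-repo caches layers in a registry, while --cache-dir, populated by the kaniko warmer, only caches base images locally):

# Layer cache pushed to the registry (the approach the functions below use):
/kaniko/executor \
    --cache \
    --cache-repo "$CI_REGISTRY_IMAGE/cache" \
    --context "$CI_PROJECT_DIR" \
    --destination "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG"

# Local base-image cache (the untested alternative mentioned above):
/kaniko/warmer --cache-dir=/cache --image=alpine:3.15
/kaniko/executor \
    --cache \
    --cache-dir=/cache \
    --context "$CI_PROJECT_DIR" \
    --destination "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG"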

These are some functions that I use in my GitLab CI pipelines, executed by a GitLab runner in Kubernetes (they should hopefully clarify the setup and usage of kaniko):

# Write a Docker config.json with registry credentials so kaniko can
# push to the GitLab registry. $DOCKER_CONFIG is set per job.
function kaniko_config
{
    local docker_auth="$(echo -n "$CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD" | base64)"

    mkdir -p "$DOCKER_CONFIG"
    [ -e "$DOCKER_CONFIG/config.json" ] || \
        cat <<JSON > "$DOCKER_CONFIG/config.json"
{
    "auths": {
        "$CI_REGISTRY": {
            "auth": "$docker_auth"
        }
    }
}
JSON
}

# Usage example (.gitlab-ci.yml)
#
# build php:
#   extends: .build
#   variables:
#     DOCKER_CONFIG: "$CI_PROJECT_DIR/php/.docker"
#     DOCKER_IMAGE_PHP_DEVEL_BRANCH: &php-devel-image "${CI_REGISTRY_IMAGE}/php:${CI_COMMIT_REF_SLUG}-build"
#   script:
#     - kaniko_build
#       --destination $DOCKER_IMAGE_PHP_DEVEL_BRANCH
#       --dockerfile $CI_PROJECT_DIR/docker/images/php/Dockerfile
#       --target devel

# Build and push an image with the kaniko executor. Extra executor
# flags (e.g. --destination, --dockerfile) are passed as arguments.
function kaniko_build
{
    kaniko_config
    echo "Kaniko cache enabled ($CI_REGISTRY_IMAGE/cache)"
    /kaniko/executor \
        --build-arg http_proxy="${HTTP_PROXY}" \
        --build-arg https_proxy="${HTTPS_PROXY}" \
        --build-arg no_proxy="${NO_PROXY}" \
        --cache --cache-repo "$CI_REGISTRY_IMAGE/cache" \
        --context "$CI_PROJECT_DIR" \
        --digest-file=/dev/termination-log \
        --label "com.qwant.ci.job.id=${CI_JOB_ID}" \
        --label "com.qwant.ci.pipeline.id=${CI_PIPELINE_ID}" \
        --verbosity info \
        "$@"

    [ -r /dev/termination-log ] && \
        echo "Manifest digest: $(cat /dev/termination-log)"
}

With these functions a new image can be built with:

stages:
  - build

build app:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.7.0-debug
    entrypoint: [""]
  variables:
    DOCKER_CONFIG: "$CI_PROJECT_DIR/app/.docker"
    DOCKER_IMAGE_APP_RELEASE_BRANCH: &app-devel-image "${CI_REGISTRY_IMAGE}/phelps:${CI_COMMIT_REF_SLUG}"
    GIT_SUBMODULE_STRATEGY: recursive
  before_script:
    - source ci/libkaniko.sh
  script:
    - kaniko_build
      --destination $DOCKER_IMAGE_APP_RELEASE_BRANCH
      --digest-file $CI_PROJECT_DIR/docker-content-digest-app
      --dockerfile $CI_PROJECT_DIR/docker/Dockerfile
  artifacts:
    paths:
      - docker-content-digest-app
  tags:
    - k8s-runner

buildah will require either a privileged container with more than one UID, or a container running with CAP_SETUID and CAP_SETGID, to build container images. It does not hack around these requirements on the file system the way kaniko does; it runs full containers when building.

Using --isolation chroot will make it a little easier to get buildah to work within Kubernetes.
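
Something along these lines (an untested sketch; the destination tag is a placeholder, and delivering the build context into /workspace, e.g. via a volume or initContainer, is omitted):

apiVersion: v1
kind: Pod
metadata:
  name: buildah-build
spec:
  restartPolicy: Never
  containers:
    - name: buildah
      image: quay.io/buildah/stable
      workingDir: /workspace
      command: ["buildah", "bud"]
      args:
        - --isolation=chroot                       # avoids needing nested containers
        - --storage-driver=vfs                     # sidesteps overlay-on-overlay issues
        - --tag=registry.example.com/app:latest    # placeholder destination
        - .
      securityContext:
        capabilities:
          add: ["SETUID", "SETGID"]                # as explained above, for multi-UID images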
