
Why are Docker filesystem permissions behaving differently on GitHub Actions (Permission denied on server)

I have an app with a Dockerfile:

# From here on we use the least-privileged `node` user to run the backend.
USER node
WORKDIR /app


# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production

# Copy repo skeleton first, to avoid unnecessary docker cache invalidation.
# The skeleton contains the package.json of each package in the monorepo,
# and along with yarn.lock and the root package.json, that's enough to run yarn install.
COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz

This is part of the Dockerfile generated by the official Backstage CLI ( https://github.com/backstage/backstage/issues/15421 ).

When I run docker build on my Mac and on my colleague's Windows machine, the build works.

However, when we attempt the same build on GitHub Actions or on Microsoft ADO (Azure DevOps), the docker build fails with a filesystem permissions error:

Step 7/11 : RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
 ---> Running in b20314a0495a
tar: packages: Cannot mkdir: Permission denied
tar: packages/app/package.json: Cannot open: No such file or directory
tar: packages: Cannot mkdir: Permission denied
tar: packages/backend/package.json: Cannot open: No such file or directory
tar: Exiting with failure status due to previous errors

I did some Googling and found that creating the directory and changing its owner to the "node" USER above, prior to the tar operation, solves the problem.

So in fact this Dockerfile works both on my machine AND on GitHub Actions. Notice the first two lines - they are the entire difference between the two Dockerfiles:

RUN mkdir -p /app
RUN chown node /app

# From here on we use the least-privileged `node` user to run the backend.
USER node
WORKDIR /app


# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production

# Copy repo skeleton first, to avoid unnecessary docker cache invalidation.
# The skeleton contains the package.json of each package in the monorepo,
# and along with yarn.lock and the root package.json, that's enough to run yarn install.
COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz

What I do not understand is: why does the first Dockerfile build succeed on Mac/Windows while it fails on GitHub Actions (Linux?)? Why does the GitHub Actions version require the additional ownership change on the folder, while the Mac/Windows version does not? Maybe it's something to do with the Docker version?
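
One way to compare the environments would be a temporary debug step placed right after WORKDIR /app (a diagnostic sketch only, not part of the generated Dockerfile; it prints the effective build user and the numeric owner of /app in each environment):

# Hypothetical debug step - remove after comparing local vs. CI output
RUN id && ls -ldn /app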

I'm pretty certain the issue happens on Linux specifically, since on Windows and macOS Docker runs inside a VM. The reason it would occur on a "real Linux" machine and not in a VM is that user IDs and group IDs are shared between the Docker host and Docker containers (see more here).
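
As a small illustration of that sharing (assuming a Linux host with Docker installed; the bind mount here is only to demonstrate the uid/gid pass-through, it is not what the build does):

touch /tmp/owned-by-me
docker run --rm -v /tmp/owned-by-me:/data/owned-by-me alpine ls -ln /data
# On a Linux host, the numeric uid/gid printed inside the container match
# the host user that created the file - there is no translation layer.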

The GitHub Actions runner probably has specific permissions for the uids/gids contained in the tar.gz, and for the tar.gz file itself, while on your local macOS/Windows the dedicated Docker VM doesn't have any "real" Linux user management.
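
To see what ownership the skeleton archive actually carries, it can be listed with numeric owners (GNU tar; the path is the one referenced in the COPY line above):

tar tzvf packages/backend/dist/skeleton.tar.gz --numeric-owner | head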

I had a similar issue when I used Bitbucket's CI, which has some odd policies around uids/gids. This is probably a similar case.
