
Docker: how to copy files from a container to the host?

I need to share a folder on the host with a container: data created inside the container should be readable and modifiable on the host, and vice versa.

I decided to use the docker run -v argument for this, inspired by this project: https://github.com/itzg/docker-minecraft-server . It implements exactly the principle I described:

docker run -d -v /path/on/host:/data \
    -e TYPE=PAPER -e FORCE_REDOWNLOAD=true \
    -p 25565:25565 -e EULA=TRUE --name mc itzg/minecraft-server

All the data is available on the host in /path/on/host . Files can be modified on the host and the changes are reflected in the container, and vice versa: /path/on/host and /data stay permanently in sync.
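For example, a file created on the host immediately shows up inside the container, and vice versa:

touch /path/on/host/test.txt
docker exec mc ls /data    # test.txt is visible inside the container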

So I decided to do the same in my project, a Node.js bot. But nothing came of it: files created in the container are not copied to the host. The mount only seems to work in one direction, from the host into the container. Now I wonder how files can be copied from the container to the host.

How can I achieve the same result as in the example above, or implement it differently, so that instances can be conveniently launched with shared folders from the terminal in one line?

Other guides did not help me. I want to do this specifically with -v.


Without -v, the files are created in WORKDIR inside the container:

docker run --name app node-app

With -v, WORKDIR is empty, or contains only whatever files already exist in /tmp/node-app on the host:

docker run -v /tmp/node-app:/app --name app node-app

Dockerfile

FROM node:lts
# note: /app is declared as a volume before any files are copied in;
# with the classic builder, changes a RUN step makes under a declared
# VOLUME path are discarded, so the node_modules created by
# npm install below may not survive the build
VOLUME /app/
COPY ./repo/package.json /app/package.json
WORKDIR /app/
RUN npm install --production
COPY ./repo /app/
CMD npm start

rsync is an excellent way to share files between systems. Find the IP address of the destination machine on your internal Docker network, then:

rsync -av <src-directory> <username>@<dest-ip>:<dest-directory>
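For example (a hypothetical setup: an SSH server runs on the host and is reachable from the container, e.g. at the default bridge gateway 172.17.0.1):

# find a container's IP on the Docker network, if you need the other direction
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app

# from inside the container, push /app out to the host over SSH
rsync -av /app/ user@172.17.0.1:/tmp/node-app/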

A few potential problems:

  1. If you're using Docker Desktop, you need to make sure your directory is shared, since the Docker engine actually runs in a VM: /tmp inside the VM may not be the same as /tmp on your machine.

On macOS: Docker -> Preferences -> Resources -> FILE SHARING

  2. You also need to be careful with mounts. When you run a container with the -v option as above, Docker bind-mounts a host directory inside the container. If you bind mount a directory on top of your /app directory, you won't see anything that was in /app before the mount (see the demonstration after this list). Choose an empty or non-existent directory for the bind mount.

  3. When you use COPY in a Dockerfile, the assets are copied from the build context permanently and statically into the image. If you need dynamic assets, skip that step and just do the bind mount at runtime.
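A quick demonstration of point 2, assuming the node-app image from the question and an empty directory /tmp/empty on the host:

docker run --rm node-app ls /app                      # the files copied in at build time
docker run --rm -v /tmp/empty:/app node-app ls /app   # nothing: the bind mount hides the image's /app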

Option 1: If you cannot change the image, you can use a named volume that points to a bind mount. That gives you the normal initialization behaviour of named volumes, but at a path of your choosing. Note that named volumes are only initialized when they are empty; once files exist in the named volume, future container starts will not copy from the container filesystem (e.g. if the image gets updated). Here are 3 different ways that can be done:

  # create the volume in advance
  $ docker volume create --driver local \
      --opt type=none \
      --opt device=/home/user/test \
      --opt o=bind \
      test_vol

  # create on the fly with --mount
  $ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

  # inside a docker-compose file
  ...
  volumes:
    bind-test:
      driver: local
      driver_opts:
        type: none
        o: bind
        device: /home/user/test
  ...
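Then run the container with the named volume like any other (foo stands in for your image, as in the --mount example above). Because the volume resolves to /home/user/test, the image's files at /container/path are copied out there on the first start, while the directory is still empty:

  $ docker run -it --rm -v test_vol:/container/path foo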

Option 2: If you want something easy to use on multiple hosts, you can configure an entrypoint in the image that copies files from a path inside the image out to the volume. To do this, store the files in a different location within the image than the volume mount, and include a script that runs a copy, rsync, or similar. An example of this is in my save-volume and load-volume scripts in my base image repo. Similar techniques are used in other images on Docker Hub that use host volumes; I'm thinking specifically of the jenkins image.
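A minimal sketch of such an entrypoint script (hypothetical layout: the image ships its files in /app-dist, the volume is mounted at /app, and GNU cp's -n flag skips files that already exist):

#!/bin/sh
# seed the mounted volume without overwriting existing files
cp -rn /app-dist/. /app/
# hand control to the container's main command
exec "$@"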

This may not be the best solution, but I got what I wanted. It turns out I understood only superficially how Docker works. I will take all the comments above into account so I can use Docker more professionally. Thank you for leaving them.

Ultimately, I added an ENTRYPOINT instruction that runs a shell script; the script copies the files without overwriting and then starts the program via npm.

Dockerfile

FROM node:12-alpine

VOLUME ["/data"]

...

# Copy script
COPY start.sh /home/docker-start.sh

ENTRYPOINT ["/bin/sh", "/home/docker-start.sh"]

start.sh

#!/bin/sh

# move the app files into the mounted volume, without overwriting
# files that already exist there from a previous run
mv -n /app/* /data
# clean up whatever is left of the original /app
rm -rf /app
cd /data
npm start
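The image is built once from the project directory containing the Dockerfile and start.sh:

docker build -t homosanians/appname .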

Now the program can be deployed with a single line:

docker run -v /opt/appname/ow:/data --name ow homosanians/appname
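After the first start, the files moved out of /app show up on the host:

ls /opt/appname/ow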
