I need to create a Docker image containing a large database. The database can be populated from specified folders using a script, populate_net_db.sh. I want this population to happen at build time, so that developers working with the database can start a container without waiting a long time for the database to be populated.
What I have tried (this worked, but is not what I need):
I can create the database at run time and populate it with the mysql Docker image by putting the required files into the /docker-entrypoint-initdb.d folder. This takes significant time to set things up when the container first starts, but otherwise works as needed: I can access the DB with docker exec -it "container_name" mysql -u root -p exactly as required.
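For reference, the run-time approach above can be expressed as a compose file. This is a hypothetical sketch: the image tag, the root password, and the ./initdb host folder are my assumptions, not taken from the question.

```yaml
# Hypothetical compose sketch of the run-time approach.
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret   # assumption: throwaway dev password
    volumes:
      # Scripts (.sh) and dumps (.sql) in this folder run once, on the
      # first start against an empty data directory -- this is where the
      # long population step happens.
      - ./initdb:/docker-entrypoint-initdb.d:ro
```

The catch, as noted, is that the population cost is paid on every fresh container, not once at build time.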
I can create the same database at build time using a RUN command, and I can see during the build that the tables are set up correctly. However, when I run such an image, a fresh MySQL data directory is initialized and the database from the build is gone.
Is there a way to have the build-time database show up in docker exec -it "container_name" mysql -u root -p?
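The reason the build-time data disappears is that the official mysql image declares /var/lib/mysql as a VOLUME, so a fresh anonymous volume shadows that path when a container starts. A hypothetical sketch of the build-time attempt (exact initialization flags vary by MySQL version; the comments are the important part):

```dockerfile
# Hypothetical sketch of populating the database at build time.
FROM mysql:8.0
COPY populate_net_db.sh /tmp/populate_net_db.sh
# Initialize a data directory, start a temporary server, populate it,
# then shut down cleanly so the on-disk files are consistent.
RUN mysqld --initialize-insecure --user=mysql \
 && mysqld --user=mysql --daemonize \
 && bash /tmp/populate_net_db.sh \
 && mysqladmin -u root shutdown
# The populated files now sit in /var/lib/mysql -- but the base image
# declares that path as a VOLUME, so at run time a new anonymous volume
# mounts over it and the build-time data is invisible.
```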
I managed to solve this problem by saving the contents of /var/lib/mysql/ to a temporary folder during the build: mv /var/lib/mysql /mnt/tmp/. Then I created an entrypoint script to put in the /docker-entrypoint-initdb.d folder. At run time this script deletes the contents of /var/lib/mysql/ and moves the saved state back from /mnt/tmp/. This causes MySQL to restart, but the database becomes available very quickly after the container starts: this method takes ~5 minutes for a 27 GB database, versus more than an hour when loading the same database from a dump file.
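The restore step described above can be sketched as a small shell function. This is a hypothetical reconstruction, not the author's actual script; the paths are passed as parameters (in the container they would be /mnt/tmp/mysql and /var/lib/mysql) so the logic can be exercised outside a container.

```shell
#!/bin/sh
# Hypothetical restore script for /docker-entrypoint-initdb.d, modeled on
# the approach described above: the build saves the initialized data
# directory to a staging path, and this swaps it back in on first start.

restore_datadir() {
    saved="$1"    # e.g. /mnt/tmp/mysql, written at build time
    datadir="$2"  # e.g. /var/lib/mysql, the freshly initialized volume
    # Discard whatever the entrypoint just initialized...
    rm -rf "${saved:?}/.." >/dev/null 2>&1 || true  # no-op guard; see below
    rm -rf "${datadir:?}"/*
    # ...and move the build-time state into place. mv is fast because both
    # paths are usually on the same filesystem layer.
    mv "$saved"/* "$datadir"/
}
```

The script runs before the entrypoint hands control back to mysqld, so the server restart picks up the restored files, which is why the database is available within minutes instead of replaying a full dump.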