
Unable to import large database to docker mysql container

I'm trying to import a large database into a mysql container. I've mounted host directories as volumes for the mysql container, so the data is persistent on the host. The sql file being imported is 14 GB+. The mysql container becomes unresponsive halfway through the import. When I run docker stats I can see the CPU % usage drop below 1 once the mysql container has eaten all the available memory. I tried increasing Docker's memory up to 10 GB, and it creates more tables from the import when I allocate more memory to Docker. But I cannot allocate more than 10 GB from the host.

Following is my docker-compose.yml file:

mysql:
    image: mysql:5.6
    environment:
        - MYSQL_ROOT_PASSWORD=12345678
    volumes:
        - ./mysql/lib:/var/lib/mysql
        - ./mysql/conf.d:/etc/mysql/conf.d
        - ./mysql/log:/var/log/mysql
        - /tmp:/tmp
    ports:
        - "3306:3306"

I'm using Docker for Mac, which has docker version 1.12.1.

I was using docker exec -it docker_mysql_1 /bin/bash to log in to the container and import the sql file from /tmp.

I also tried the way recommended by the mysql repo, mounting the sql file into /docker-entrypoint-initdb.d. But that also halts the mysql init.
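Besides the initdb.d approach, a common workaround for very large dumps is to stream the file into the mysql client non-interactively rather than sourcing it from an interactive shell. A sketch, using the container name and root password from the compose file above; the dump filename is a placeholder:

```shell
# Stream the dump from the host straight into mysqld inside the container.
# Note -i (keep stdin open) rather than -it: a TTY would break the redirect.
# /tmp/dump.sql is a placeholder path for the 14 GB dump file.
docker exec -i docker_mysql_1 mysql -uroot -p12345678 < /tmp/dump.sql
```

This avoids copying the file into the container and keeps the import running even if the interactive shell would have timed out.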

UPDATE 1

$ docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 2
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 18
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.20-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 9.744 GiB
Name: moby
ID: 43S4:LA5E:6MTG:IFOG:HHJC:HYLX:LYIT:YU43:QGBQ:K5I5:Z6LP:AENZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 16
 Goroutines: 27
 System Time: 2016-10-12T07:52:58.516469676Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8


$ df -h
Filesystem      Size   Used  Avail Capacity iused      ifree %iused  Mounted on
/dev/disk1     233Gi  141Gi   92Gi    61% 2181510 4292785769    0%   /
devfs          193Ki  193Ki    0Bi   100%     668          0  100%   /dev
map -hosts       0Bi    0Bi    0Bi   100%       0          0  100%   /net
map auto_home    0Bi    0Bi    0Bi   100%       0          0  100%   /home
/dev/disk2s2   466Gi   64Gi  401Gi    14%    1857 4294965422    0%   /Volumes/mac
/dev/disk2s3   465Gi   29Gi  436Gi     7%  236633    3575589    6%   /Volumes/PORTABLE
/dev/disk3s1   100Mi   86Mi   14Mi    86%      12 4294967267    0%   /Volumes/Vagrant

I was using /dev/disk1 directories to mount volumes.

I had a similar issue when trying to load a big sql file into my database. I just had to increase the maximum packet size in the container, and the import worked as expected. For example, if you want to increase the maximum size of your SQL file to 512 MB and your container is named my_mysql, you can adjust the packet size in a running container with this command:

docker exec -it my_mysql bash -c "echo 'max_allowed_packet = 512M' >> /etc/mysql/mysql.conf.d/mysqld.cnf" 

This appends the line to the config file. After this, you need to restart the container:

docker restart my_mysql
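To confirm that the new value actually took effect after the restart, you can query it from the running container (same my_mysql container name as above; -p prompts for the root password):

```shell
# Should report max_allowed_packet = 536870912 (512M in bytes) once the
# restarted mysqld has picked up the appended line.
docker exec -it my_mysql mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
```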

I solved the phpmyadmin import-of-large-database error by changing an environment variable in docker-compose.yml:

UPLOAD_LIMIT=1G


myadmin:
        image: phpmyadmin/phpmyadmin
        container_name: phpmyadmin
        ports:
            - "8083:80"
        environment:
            - UPLOAD_LIMIT=1G
            - PMA_ARBITRARY=1
            - PMA_HOST=${MYSQL_HOST}
        restart: always
        depends_on:
            - mysqldb

I also ran into a similar problem. Follow the process below; it might help you. First, copy your sql file (filename.sql) into the db container:

docker cp filename.sql docker_db_container:/filename.sql

Later, log in to your db container and populate the db with this file (filename.sql). To load the file into the database, start mysql inside the db container and select the database you want to import into, i.e. use database_name; then run:

source /filename.sql

If you still face an issue with large packet size, increase the container's packet size:

docker exec -it docker_db_container bash -c "echo 'max_allowed_packet = 1024M' >> /etc/mysql/mysql.conf.d/mysqld.cnf"
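Since the question's compose file already mounts ./mysql/conf.d into the container at /etc/mysql/conf.d, the same setting can instead be persisted on the host side, so it survives container recreation. A sketch; the import.cnf filename is arbitrary, as any *.cnf file in conf.d is read at startup:

```shell
# Create a drop-in config on the host; the compose volume
# ./mysql/conf.d:/etc/mysql/conf.d makes it visible to mysqld
# after a container restart.
mkdir -p ./mysql/conf.d
cat > ./mysql/conf.d/import.cnf <<'EOF'
[mysqld]
max_allowed_packet = 1024M
EOF
```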
