

Import Multiple .sql dump files into mysql database from shell

I have a directory with a bunch of .sql files that are mysql dumps of each database on my server.

e.g.

database1-2011-01-15.sql
database2-2011-01-15.sql
...

There are quite a lot of them, actually.

I need to create a shell script, or probably a single line, that will import each database.

I'm running on a Linux Debian machine.

I'm thinking there is some way to pipe the results of an ls into some find command or something...

Any help and education is much appreciated.

EDIT

So ultimately I want to automatically import one file at a time into the database.

E.g. if I did it manually on one, it would be:

mysql -u root -ppassword < database1-2011-01-15.sql

cat *.sql | mysql? Do you need them in any specific order?
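For reference, the all-in-one pipe could look like the sketch below. This assumes each dump already contains its own CREATE DATABASE and USE statements (which mysqldump emits when run with the --databases option), since a bare pipe gives mysql no other way to know where each dump should go:

cat *.sql | mysql -u root -p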

If you have too many to handle this way, then try something like:

find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch

This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file instead of having it read from stdin.
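To see what the pipeline feeds to mysql, you can run just the first two stages:

find . -name '*.sql' | awk '{ print "source",$0 }'

which prints one source command per file, e.g. source ./database1-2011-01-15.sql; the mysql client executes each source command as if you had typed it at its prompt. With credentials added (an assumption about your setup, not part of the original answer), the full pipeline becomes:

find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch -u root -p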

A one-liner that reads in all the .sql files and imports them:

for SQL in *.sql; do DB=${SQL/\.sql/}; echo "importing $DB"; mysql "$DB" < "$SQL"; done

The only trick is the bash substring replacement to strip out the .sql to get the database name.
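Note that with dated filenames like database1-2011-01-15.sql, that replacement still leaves the date in the database name. A hedged variant, assuming every file follows the name-YYYY-MM-DD.sql pattern, strips the date suffix as well:

for SQL in *.sql; do DB=${SQL%-????-??-??.sql}; echo "importing $DB"; mysql "$DB" < "$SQL"; done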

There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:

for i in *.sql
do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < "$i"
done
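One caveat, not from the original answer: --password=whatever on the command line is visible to other users via ps. A common alternative is to put the credentials in a ~/.my.cnf (chmod 600), which the mysql client reads automatically:

[client]
user = admin_privileged_user
password = whatever

after which the loop body shrinks to mysql your_database_here < "$i".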

mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.

I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.

I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.

Here it is on GitHub

It's simple and straightforward; it allows you to specify mysql connection parameters, and will decompress gzip'ed sql files on the fly. It assumes you have a file per database, and that the base of the filename is the desired database name.

So:

myload foo.sql bar.sql.gz

This will create databases called "foo" and "bar" (if they don't already exist), and import the sql file into each.
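The real implementation lives in the GitHub repo; purely to illustrate the behavior described above, a minimal hypothetical sketch (not the actual myload code) could look like:

#!/bin/sh
# Hypothetical sketch of myload-like behavior -- see the real script on GitHub.
for f in "$@"; do
  db=$(basename "$f")
  db=${db%.gz}     # drop optional .gz
  db=${db%.sql}    # drop .sql, leaving the database name
  mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  case "$f" in
    *.gz) gunzip -c "$f" | mysql "$db" ;;   # decompress gzip'ed dumps on the fly
    *)    mysql "$db" < "$f" ;;
  esac
done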

For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset, specified either by name or regex).
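As a similarly hedged sketch of that direction (not the actual mydumpall script), dumping every database to its own dated .sql.gz file might look like this, assuming credentials come from ~/.my.cnf or extra flags:

#!/bin/sh
# Hypothetical sketch: one compressed, dated dump file per database.
for db in $(mysql --batch --skip-column-names -e 'SHOW DATABASES' \
    | grep -Ev '^(information_schema|performance_schema)$'); do
  mysqldump --databases "$db" | gzip > "$db-$(date +%F).sql.gz"
done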

I don't remember the mysqldump syntax, but it would be something like this:

 find . -name '*.sql' | xargs mysql ...
