
Importing large sql file to MySql via command line

I'm trying to import an SQL file of around 300MB into MySQL via the command line in Ubuntu. I used:

source /var/www/myfile.sql;

Right now it's displaying a seemingly endless stream of rows like:

Query OK, 1 row affected (0.03 sec)

However, it's been running for a while now. I've never imported a file this large before, so I just want to know whether this is normal. If the process stalls or hits an error, will that show up on the command line, or will the process just go on indefinitely?

Thanks

You can import a .sql file using standard input like this:

mysql -u <user> -p<password> <dbname> < file.sql

Note: there should be no space between -p and <password>.

Reference: http://dev.mysql.com/doc/refman/5.0/en/mysql-batch-commands.html

Note on suggested edits: this answer was slightly changed by suggested edits to use the inline password parameter. I can recommend that for scripts, but you should be aware that writing the password directly in the parameter (-p<password>) may leave it cached in your shell history, revealing it to anyone who can read the history file. Plain -p instead asks you to enter the password on standard input.
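For interactive use, a minimal sketch of the safer variant (myuser, mydb, and dump.sql are placeholder names, not anything from the question):

# -p with no value attached prompts for the password, so it never lands in shell history
mysql -u myuser -p mydb < dump.sql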

Regarding the time taken to import huge files: the most important reason it takes so long is that MySQL's default setting is autocommit = true. Set that off before importing your file and then watch the import work like a gem...

First, open MySQL:

mysql -u root -p

Then you just need to do the following:

mysql> use your_db

mysql> SET autocommit=0; source the_sql_file.sql; COMMIT;
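The same idea as a non-interactive one-liner, sketched under the assumption of a bash-like shell (your_db and the_sql_file.sql are the placeholders from above): the dump is wrapped between SET autocommit=0 and COMMIT so the whole file runs as a single transaction.

# Wrap the dump in one transaction; your_db / the_sql_file.sql are placeholders
{ echo "SET autocommit=0;"; cat the_sql_file.sql; echo "COMMIT;"; } | mysql -u root -p your_db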

+1 to @MartinNuc: you can run the mysql client in batch mode and then you won't see the long stream of "OK" lines.
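In practice you get batch mode simply by redirecting the file on standard input instead of running source from an interactive prompt, as a minimal sketch (names are placeholders):

# Non-interactive (batch) mode: the per-statement "Query OK" lines are not printed
mysql -u root -p your_db < the_sql_file.sql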

The amount of time it takes to import a given SQL file depends on a lot of things: not only the size of the file, but also the type of statements in it, how powerful your server is, and how many other things are running at the same time.

@MartinNuc says he can load 4GB of SQL in 4-5 minutes, but I have run 0.5GB SQL files that took 45 minutes on a smaller server.

We can't really guess how long it will take to run your SQL script on your server.


Re your comment:

@MartinNuc is correct that you can choose to make the mysql client print every statement. Or you could open a second session and run mysql> SHOW PROCESSLIST to see what's running. But you're probably more interested in a "percentage done" figure, or an estimate of how long it will take to complete the remaining statements.

Sorry, there is no such feature. The mysql client doesn't know how long later statements will take to run, or even how many there are, so it can't give a meaningful estimate of the time remaining.
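One shell-level workaround, a sketch that assumes the separate pv utility is installed (it is not part of the mysql client): piping the dump through pv shows a progress bar based on bytes read, which is a rough stand-in for "percentage done".

# pv prints a byte-based progress bar as the file streams into mysql
pv the_sql_file.sql | mysql -u root -p your_db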

The solution I use for large SQL restores is the mysqldumpsplitter script. I split my sql.gz into individual tables, then load up something like MySQL Workbench and process each one as a restore into the desired schema.

Here is the script: https://github.com/kedarvj/mysqldumpsplitter

And this works for larger SQL restores; my average on one site I work with is a 2.5GB sql.gz file, 20GB uncompressed, and ~100GB once fully restored.
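If you want the splitting idea without the script, here is a rough generic sketch, assuming the standard "-- Table structure for table" comment markers that mysqldump emits; `orders` and dump.sql are placeholder names, and note the dump's global header/footer (charset settings and so on) are not carried over.

# Print only the section for table `orders`; each table-marker line flips the flag
awk '/^-- Table structure for table/{p=/`orders`/} p' dump.sql > orders.sql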

Importing large sql file to MySql via command line

  1. First, download the file.
  2. Place the file in your home directory.
  3. Use the following command in your terminal (CMD):
  4. Syntax: mysql -u username -p databasename < file.sql

Example: mysql -u root -p aanew < aanew.sql


 