
How to write the Mysql binary log position of master when doing a mysqldump from slave?

I am currently running mysqldump on a Mysql slave to back up our database. This has worked fine for backing up our data itself, but what I would like to supplement it with is the binary log position of the master that corresponds with the data generated by the mysqldump.

Doing this would allow us to restore our slave (or set up new slaves) without having to do a separate mysqldump on the main database just to grab the binary log position of the master. We would just take the data generated by the mysqldump, combine it with the binary log information we generated, and voila... be resynced.

So far, my research has gotten me very CLOSE to being able to accomplish this goal, but I can't seem to figure out an automated way to pull it off. Here are the "almosts" I've uncovered:

  • If we were running mysqldump from the main database, we could use the "--master-data" parameter with mysqldump to log the master's binary position along with the dump data (I presume this would probably also work if we started generating binary logs from our slave, but that seems like overkill for what we want to accomplish)
  • If we wanted to do this in a non-automated way, we could log into the slave's database and run "STOP SLAVE SQL_THREAD;" followed by "SHOW SLAVE STATUS;" ( http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html ). But this isn't going to do us any good unless we know in advance that we want to back something up from the slave.
  • If we had $500/year to blow, we could use the InnoDb hot backup plugin and just run our mysqldumps from the main DB. But we don't have that money, and I don't want to add any extra I/O on our main DB anyway.
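The non-automated procedure from the second point can be sketched as a script. This is only a sketch: it assumes the mysql client can authenticate on its own (e.g. via ~/.my.cnf), and the actual pipe into mysql is left commented out so the steps stay visible.

```shell
# Sketch of the manual procedure (assumption: mysql client credentials
# come from ~/.my.cnf). The SQL is assembled here; the pipe into the
# mysql client is commented out.
sql='STOP SLAVE SQL_THREAD;
SHOW SLAVE STATUS \G
START SLAVE;'

# printf '%s\n' "$sql" | mysql
printf '%s\n' "$sql"
```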

This seems like something common enough that somebody must have figured out before; hopefully that somebody is using Stack Overflow?

The following shell script will run in cron or periodic; replace variables as necessary (defaults are written for FreeBSD):

# MySQL executable location
mysql=/usr/local/bin/mysql

# MySQLDump location
mysqldump=/usr/local/bin/mysqldump

# MySQL Username and password
userpassword=" --user=<username> --password=<password>"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"

# Databases
databases="db1 db2 db3"

# Backup Directory
backupdir=/usr/backups

# Stop the slave SQL thread so the log position stays fixed
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD;'

set `date +'%Y %m %d'`

# Binary Log Positions
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep '[^_]Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Read_Master_Log_Pos'`

# Write Binlog Info
echo $masterlogfile >> ${backupdir}/info-$1-$2-$3.txt
echo $masterlogpos >> ${backupdir}/info-$1-$2-$3.txt

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/${database}-$1-$2-$3.sql.gz
done

# Unlock
$mysql $userpassword -e 'START SLAVE'

echo "Dump Complete!"

exit 0

Although Ross's script is on the right track, @joatis is right when he says to stop the slave before checking the master log position. The reason is that a READ LOCK will not preserve the Read_Master_Log_Pos that is retrieved with SHOW SLAVE STATUS.

To see that this is the case, log into MySQL on your slave and run:

FLUSH TABLES WITH READ LOCK

SHOW SLAVE STATUS \G

Note the Read_Master_Log_Pos.

Wait a few seconds and once again run:

SHOW SLAVE STATUS \G

You should notice that the Read_Master_Log_Pos has changed.

Since the backup is initiated quickly after we check the status, the log position recorded by the script may be accurate. However, it's preferable to follow the procedure here: http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html

And run STOP SLAVE SQL_THREAD; instead of FLUSH TABLES WITH READ LOCK for the duration of the backup.

When done, start replication again with START SLAVE.

Also, if you wish to back up the bin-logs for incremental backups or as an extra safety measure, it is useful to append --flush-logs to the $dumpoptions variable above.
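Concretely, that means extending the options string before the dump loop runs (the variable name is taken from the scripts above):

```shell
# Append --flush-logs so each dump also rotates the binary logs,
# giving a clean starting point for incremental binlog backups.
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"
dumpoptions="$dumpoptions --flush-logs"
echo "$dumpoptions"
```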

Using Read_Master_Log_Pos as the position to continue from the master means you can end up with missing data.

The Read_Master_Log_Pos variable is the position in the master binary log file that the slave IO thread is up to.

The problem here is that even in the small amount of time between stopping the slave SQL thread and retrieving the Read_Master_Log_Pos, the IO thread may have received more data from the master which hasn't yet been applied by the stopped SQL thread.

This results in the Read_Master_Log_Pos being further ahead than the data returned in the mysqldump, leaving a gap in the data when it is imported and continued on another slave.

The correct value to use on the slave is Exec_Master_Log_Pos, which is the position in the master binary log file that the slave SQL thread last executed, meaning there is no data gap between the mysqldump and the Exec_Master_Log_Pos.
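The extraction can also be made a little more robust with awk, so the info file holds just the file name and position rather than the whole status line. A small sketch, using sample `SHOW SLAVE STATUS \G` output in place of a live mysql call (the coordinates below are made-up values):

```shell
# Sample `SHOW SLAVE STATUS \G` excerpt standing in for a live mysql call
# (hypothetical coordinates). Note: we read Exec_Master_Log_Pos, not
# Read_Master_Log_Pos, for the reasons described above.
status='             Master_Log_File: mysql-bin.000123
          Read_Master_Log_Pos: 120110
          Exec_Master_Log_Pos: 119875'

# '[^_]Master_Log_File' skips Relay_Master_Log_File, as in Ross's grep.
masterlogfile=$(printf '%s\n' "$status" | awk -F': *' '/[^_]Master_Log_File/ {print $2}')
masterlogpos=$(printf '%s\n' "$status" | awk -F': *' '/Exec_Master_Log_Pos/ {print $2}')

echo "$masterlogfile $masterlogpos"   # mysql-bin.000123 119875
```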

Using Ross's script above, the correct usage would be:

# MySQL executable location
mysql=/usr/bin/mysql

# MySQLDump executable location
mysqldump=/usr/bin/mysqldump

# MySQL Username and password
userpassword=" --user=<username> --password=<password>"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"

# Databases to dump
databases="db1 db2 db3"

# Backup Directory
# You need to create this dir
backupdir=~/mysqldump


# Stop slave sql thread

echo -n "Stopping slave SQL_THREAD... "
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD;'
echo "Done."

set `date +'%Y %m %d'`

# Get Binary Log Positions

echo "Logging master status..."
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep '[^_]Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Exec_Master_Log_Pos'`

# Write log Info

echo $masterlogfile
echo $masterlogpos
echo $masterlogfile >> ${backupdir}/$1-$2-$3_info.txt
echo $masterlogpos >> ${backupdir}/$1-$2-$3_info.txt

# Dump the databases

echo "Dumping MySQL Databases..."
for database in $databases
do
echo -n "$database... "
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/$1-$2-$3_${database}.sql.gz
echo "Done."
done

# Start slave again

echo -n "Starting slave... "
$mysql $userpassword -e 'START SLAVE'
echo "Done."

echo "All complete!"

exit 0

Your second option looks like the right track.

I had to figure out a way to do differential backups using mysqldump. I ended up writing a script that chose what databases to back up and then executed mysqldump. Couldn't you create a script that followed the steps mentioned in http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_master-data and call that from a cron job?

  1. Connect to mysql and run "STOP SLAVE"
  2. Execute SHOW SLAVE STATUS
  3. Store file_name, file_pos in variables
  4. Dump and restart the slave.

Just a thought, but I'm guessing you could append the "CHANGE MASTER TO" line to the dumpfile and it would get executed when you restored/set up the new slave.
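As a sketch of that idea, the statement can be assembled from the coordinates the backup script logged. The host name and coordinates below are placeholders, not values from the original post:

```shell
# Assemble a CHANGE MASTER TO statement from the logged coordinates.
# masterhost is an assumption; the file/pos would come from the
# info file the backup script wrote.
masterhost='master.example.com'
masterlogfile='mysql-bin.000123'
masterlogpos=119875

changemaster="CHANGE MASTER TO MASTER_HOST='$masterhost', MASTER_LOG_FILE='$masterlogfile', MASTER_LOG_POS=$masterlogpos;"

# Appending it to the dump makes it run automatically on import:
# echo "$changemaster" >> dumpfile.sql
echo "$changemaster"
```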

mysqldump (on 5.6) seems to have an option --dump-slave that, when executed on a slave, records the binary log coordinates of the master that the node was a slave of. The intent of such a dump is exactly what you are describing.
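A hedged example of that invocation: per the 5.6 manual, --dump-slave=2 embeds the master's coordinates as a commented-out CHANGE MASTER TO line, while --dump-slave=1 leaves the statement active so it runs on import. Credentials are placeholders, and the command is assembled but not executed here since it needs a live slave.

```shell
# --dump-slave stops the slave SQL thread for the duration of the dump
# and records the master's binlog coordinates in the dump header.
# Placeholders: <username>, <password>. Command built but not run here.
dumpslavecmd="mysqldump --user=<username> --password=<password> --all-databases --single-transaction --dump-slave=2"
# $dumpslavecmd > slave_dump.sql
echo "$dumpslavecmd"
```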

(I am late, I know)
