I am currently running mysqldump on a MySQL slave to back up our database. This has worked fine for backing up the data itself, but what I would like to supplement it with is the binary log position of the master that corresponds to the data generated by the mysqldump.
Doing this would allow us to restore our slave (or set up new slaves) without having to run a separate mysqldump on the main database just to grab the master's binary log position. We would take the data generated by the mysqldump, combine it with the binary log information we recorded, and voila... be resynced.
So far, my research has gotten me very CLOSE to being able to accomplish this goal, but I can't seem to figure out an automated way to pull it off. Here are the "almosts" I've uncovered:
This seems like something common enough that somebody must have figured out before, hopefully that somebody is using Stack Overflow?
The following shell script will run in cron or periodic, replace variables as necessary (defaults are written for FreeBSD):
# MySQL executable location
mysql=/usr/local/bin/mysql
# MySQLDump location
mysqldump=/usr/local/bin/mysqldump
# MySQL Username and password
userpassword=" --user=<username> --password=<password>"
# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"
# Databases
databases="db1 db2 db3"
# Backup Directory
backupdir=/usr/backups
# Stop the slave SQL thread so the replication position stays fixed
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD;'
# Set $1/$2/$3 to year/month/day for the filenames below
set `date +'%Y %m %d'`
# Binary Log Positions
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep '[^_]Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Read_Master_Log_Pos'`
# Write Binlog Info
echo $masterlogfile >> ${backupdir}/info-$1-$2-$3.txt
echo $masterlogpos >> ${backupdir}/info-$1-$2-$3.txt
# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/${database}-$1-$2-$3.sql.gz
done
# Unlock
$mysql $userpassword -e 'START SLAVE'
echo "Dump Complete!"
exit 0
Although Ross's script is on the right track, @joatis is right when he says to stop the slave before checking the master log position. The reason is that a READ LOCK will not freeze the Read_Master_Log_Pos retrieved with SHOW SLAVE STATUS.
To see that this is the case, log into MySQL on your slave and run:
FLUSH TABLES WITH READ LOCK
SHOW SLAVE STATUS \G
Note the Read_Master_Log_Pos value. Wait a few seconds and once again run:
SHOW SLAVE STATUS \G
You should notice that the Read_Master_Log_Pos has changed.
Since the backup is initiated immediately after we check the status, the log position recorded by the script may happen to be accurate. However, it's preferable to follow the procedure described here: http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html and run STOP SLAVE SQL_THREAD; instead of FLUSH TABLES WITH READ LOCK for the duration of the backup. When done, start replication again with START SLAVE.
Also, if you wish to back up the bin-logs for incremental backups or as an extra safety measure, it is useful to append --flush-logs to the $dumpoptions variable above.
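For example, the $dumpoptions line from the script above would become:

```shell
# Same options as the script above, with --flush-logs appended so the
# server rotates its binary logs at the moment the dump is taken
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --flush-logs"
echo "$dumpoptions"
```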
Using Read_Master_Log_Pos as the position to continue from the master means you can end up with missing data.
The Read_Master_Log_Pos variable is the position in the master binary log file that the slave IO thread is up to.
The problem here is that even in the small amount of time between stopping the slave SQL thread and retrieving the Read_Master_Log_Pos, the IO thread may have received more data from the master that the (now stopped) SQL thread has not yet applied.
This results in the Read_Master_Log_Pos being further ahead than the data returned in the mysqldump, leaving a gap in the data when imported and continued on another slave.
The correct value to use on the slave is Exec_Master_Log_Pos, which is the position in the master binary log file that the slave SQL thread last executed, so there is no data gap between the mysqldump and Exec_Master_Log_Pos. (Its companion file name is Relay_Master_Log_File, since the IO thread's Master_Log_File may already point at a newer file.)
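The gap is visible in SHOW SLAVE STATUS output. A minimal sketch, using made-up file names and positions to stand in for real output from a slave:

```shell
# Illustrative excerpt of `SHOW SLAVE STATUS \G` output (values are
# made up); on a live slave you would capture the real thing, e.g.:
#   status="$($mysql $userpassword -e 'SHOW SLAVE STATUS \G')"
status='             Master_Log_File: mysql-bin.000123
         Read_Master_Log_Pos: 1090
       Relay_Master_Log_File: mysql-bin.000123
         Exec_Master_Log_Pos: 1042'

# Exec_Master_Log_Pos (what the SQL thread has applied) lags behind
# Read_Master_Log_Pos (what the IO thread has fetched); it pairs with
# Relay_Master_Log_File rather than Master_Log_File
execfile=$(echo "$status" | awk '/Relay_Master_Log_File:/ {print $2}')
execpos=$(echo "$status" | awk '/Exec_Master_Log_Pos:/ {print $2}')
echo "CHANGE MASTER TO MASTER_LOG_FILE='$execfile', MASTER_LOG_POS=$execpos;"
```

Here the dump contains data only up to position 1042, so 1042 (not 1090) is the position a new slave should continue from.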
Using Ross's script above, the correct usage would be:
# MySQL executable location
mysql=/usr/bin/mysql
# MySQLDump executable location
mysqldump=/usr/bin/mysqldump
# MySQL Username and password
userpassword=" --user=<username> --password=<password>"
# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"
# Databases to dump
databases="db1 db2 db3"
# Backup Directory
# You need to create this dir
backupdir=~/mysqldump
# Stop slave sql thread
echo -n "Stopping slave SQL_THREAD... "
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD;'
echo "Done."
set `date +'%Y %m %d'`
# Get Binary Log Positions
echo "Logging master status..."
# Relay_Master_Log_File is the file that pairs with Exec_Master_Log_Pos
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Relay_Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Exec_Master_Log_Pos'`
# Write log Info
echo $masterlogfile
echo $masterlogpos
echo $masterlogfile >> ${backupdir}/$1-$2-$3_info.txt
echo $masterlogpos >> ${backupdir}/$1-$2-$3_info.txt
# Dump the databases
echo "Dumping MySQL Databases..."
for database in $databases
do
echo -n "$database... "
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/$1-$2-$3_${database}.sql.gz
echo "Done."
done
# Start slave again
echo -n "Starting slave... "
$mysql $userpassword -e 'START SLAVE'
echo "Done."
echo "All complete!"
exit 0
Your second option looks like the right track.
I had to figure a way to do differential backups using mysqldump. I ended up writing a script that chose what databases to back up and then executed mysqldump. Couldn't you create a script that followed the steps mentioned in http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_master-data and call that from a cron job?
Just a thought, but I'm guessing you could append the "CHANGE MASTER TO" statement to the dump file and it would get executed when you restored/set up the new slave.
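A sketch of that idea, using placeholder coordinates and a placeholder dump file name:

```shell
# Hypothetical coordinates recorded earlier from SHOW SLAVE STATUS,
# and a placeholder dump file
masterlogfile='mysql-bin.000123'
masterlogpos=1042
dumpfile=backup.sql

# Appended to the dump, this statement is executed automatically when
# the dump is replayed on the new slave, pointing it at the master
printf "CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;\n" \
    "$masterlogfile" "$masterlogpos" >> "$dumpfile"
tail -n 1 "$dumpfile"
```

The new slave would still need its connection settings (MASTER_HOST, MASTER_USER, and so on) supplied separately, either in an earlier CHANGE MASTER TO or in existing configuration.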
mysqldump (on 5.6) has an option, --dump-slave, that when executed on a slave records the binary log coordinates of the master that the node was replicating from. The intent of such a dump is exactly what you are describing.
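An invocation along these lines (the connection options are placeholders; the block only assembles and prints the command so the sketch stays self-contained):

```shell
# --dump-slave makes mysqldump, run on the slave, embed the master's
# binlog coordinates in the dump as a CHANGE MASTER TO statement
cmd="mysqldump --user=<username> --password --all-databases --dump-slave"
echo "$cmd"
```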
(I am late, I know.)