I need to set up some MySQL replication. Some background:
Right now the system uses a revision number on every row of every table, and we periodically check these numbers for modifications (and update the corresponding rows). It's quite slow.
What I'm thinking of is that every SELECT/INSERT/UPDATE query is logged in a specific table, and the "slave server" periodically asks the "master server" for the contents of this table and applies the corresponding queries.
What is your opinion on that idea?
I know it's not perfect: a server might go down before all the queries are propagated. But I want to minimize the possible problems, with as few lines of code as possible.
What would be the best way to implement it?
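To make the idea concrete, here is a minimal sketch of what such a query-log table and the slave's polling query might look like. All names (query_log, query_text, @last_applied_id) are illustrative, not part of any existing schema:

```sql
-- Sketch of the proposed change-log table (names are illustrative)
CREATE TABLE query_log (
    id          BIGINT AUTO_INCREMENT PRIMARY KEY,
    executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    query_text  TEXT NOT NULL
);

-- The slave would remember the last id it applied and periodically fetch
-- everything newer, then execute each query_text in order:
SELECT id, query_text
FROM query_log
WHERE id > @last_applied_id
ORDER BY id;
```

Note that a plain SELECT does not modify data, so for replication purposes only INSERT/UPDATE/DELETE statements would actually need to be logged.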
I successfully used a combination of triggers and a federated table to simulate replication of data from a MyISAM table on one server to a MyISAM table on a different server in a shared hosting environment.
Any inserts / updates / deletes on my master table are replicated to my federated table on the same server via AFTER INSERT / AFTER UPDATE / AFTER DELETE triggers. That federated table then pushes the changes to a table on a different server.
I can't take the credit for coming up with this approach as it was very helpfully documented by RolandoMySQLDBA on Server Fault:
Is a MySQL stored procedure able to insert/update to a remote backup MySQL server?
Here are the steps I implemented:
On SERVER2, I created a table (let's call it slave_table ) to receive the replicated data, with the same columns as the master table.
On SERVER1, I created a table (let's call it federated_table ) with columns matching those in master_table , specifying the FEDERATED storage engine and a CONNECTION string referencing slave_table on SERVER2: CONNECTION='mysql://username:password@SERVER2:port/database/slave_table';
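Putting that together, the full table definition might look like the following. The column layout (id, title) is assumed to match the trigger statements later in this answer; adjust it to your actual master_table:

```sql
-- Assumed columns: adjust to match master_table exactly
CREATE TABLE federated_table (
    id    INT NOT NULL,
    title VARCHAR(255),
    PRIMARY KEY (id)
)
ENGINE=FEDERATED
CONNECTION='mysql://username:password@SERVER2:port/database/slave_table';
```

The FEDERATED table stores no data locally; every read or write against it is sent over the wire to slave_table on SERVER2.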
I added AFTER INSERT , AFTER UPDATE and AFTER DELETE triggers to master_table which contained the SQL commands
INSERT INTO federated_table VALUES (NEW.id,NEW.title);
UPDATE federated_table SET id=NEW.id,title=NEW.title WHERE id=OLD.id;
and
DELETE FROM federated_table WHERE id=OLD.id;
respectively.
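As a complete sketch, the three triggers could be created like this (the trigger names are hypothetical; the bodies are the statements above):

```sql
DELIMITER //

CREATE TRIGGER master_table_ai AFTER INSERT ON master_table
FOR EACH ROW
BEGIN
    INSERT INTO federated_table VALUES (NEW.id, NEW.title);
END//

CREATE TRIGGER master_table_au AFTER UPDATE ON master_table
FOR EACH ROW
BEGIN
    UPDATE federated_table SET id = NEW.id, title = NEW.title WHERE id = OLD.id;
END//

CREATE TRIGGER master_table_ad AFTER DELETE ON master_table
FOR EACH ROW
BEGIN
    DELETE FROM federated_table WHERE id = OLD.id;
END//

DELIMITER ;
```

One caveat worth knowing: if SERVER2 is unreachable, the trigger's write to the FEDERATED table fails and the original statement on master_table fails with it, so the two servers stay consistent at the cost of availability.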
I hope that helps someone in a similar situation.
Two ideas:
A cron job that finds the max(ID) in each backup database table and then fetches all records from the main database beyond that.
To include the suggestion from my comment: duplicate your writes directly to the 2nd database instead of writing the queries to a table. This adds a bit of overhead, but might be the easiest to implement.
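The first idea can be sketched in two queries; the table and column names here (some_table, id) are placeholders:

```sql
-- On the backup server: find the newest row already copied
SELECT MAX(id) INTO @last_id FROM backup_db.some_table;

-- Against the main database: pull everything newer and insert it
-- into the backup table
SELECT * FROM main_db.some_table WHERE id > @last_id;
```

Note that this catch-up approach only sees new inserts with a growing ID; updates and deletes of existing rows would go unnoticed unless you also track something like a last-modified timestamp.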