
MySQL replication in shared hosting

I need to do some MySQL replication. Some information:

  • I have two database instances on shared hosting, so I can't use MySQL replication (I don't have access to the configuration files).
  • It's for a non-profit (educational) project, so we can't afford our own servers.
  • If the main server is down for a few minutes it's generally not that bad, but there are specific days when we REALLY need a backup solution synchronized with the main server (time-limited events on the website).

Right now the system uses a revision number on every row of every table, and we periodically check these numbers for modifications (and update the corresponding rows). It's quite slow.

What I'm thinking of is that every SELECT/INSERT/UPDATE query is logged in a specific table, and the "slave server" periodically asks the "master server" for the contents of this table and applies the corresponding queries.

What is your opinion of that idea?

I know it's not perfect; a server might go down before all the queries are propagated. But I want to minimize the possible problems, with as few lines of code as possible.

What would be the best possible way to implement it?

  • In the PHP code, on every SELECT/INSERT/UPDATE I can do another INSERT into a specific table (I simply insert the query text); a sketch of such a log table follows below
  • With a trigger?
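
For illustration, here is a minimal sketch of the logging idea; the table and column names (query_log, executed_sql, and the scores example) are hypothetical, not something that exists in the current system:

    -- Hypothetical log table on the "master server"; the PHP code records
    -- every write it performs here as raw SQL text.
    CREATE TABLE query_log (
        id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        executed_sql TEXT NOT NULL,
        executed_at  DATETIME NOT NULL
    );

    -- Example: alongside the real UPDATE, the PHP code also runs:
    INSERT INTO query_log (executed_sql, executed_at)
    VALUES ('UPDATE scores SET points = 10 WHERE user_id = 42', NOW());

    -- The "slave server" periodically fetches the rows it has not yet
    -- applied (tracking the last id it replayed) and executes
    -- executed_sql locally.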

I successfully used a combination of triggers and a federated table to simulate replication of data from a MyISAM table on one server to a MyISAM table on a different server in a shared hosting environment.

Any inserts / updates / deletes on my master table are replicated to my federated table on the same server via AFTER INSERT / AFTER UPDATE / AFTER DELETE triggers. That federated table then pushes the changes to a table on a different server.

I can't take credit for coming up with this approach, as it was very helpfully documented by RolandoMySQLDBA on Server Fault:
Is a MySQL stored procedure able to insert/update to a remote backup MySQL server?

Here are the steps I implemented:

On SERVER2...

  • I created a table (let's call it slave_table) with columns which matched those in the master table (let's call it master_table) on SERVER1.
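
    For example (a sketch only; the two-column id/title layout is borrowed from the trigger statements later in this answer, so substitute your real columns):

        -- On SERVER2: an ordinary table that receives the replicated rows.
        CREATE TABLE slave_table (
            id    INT NOT NULL PRIMARY KEY,
            title VARCHAR(255) NOT NULL
        ) ENGINE=MyISAM;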

On SERVER1...

  • I created a table (let's call it federated_table) with columns which matched those in master_table, specifying a FEDERATED storage engine and a CONNECTION string to reference slave_table on SERVER2... CONNECTION='mysql://username:password@SERVER2:port/database/slave_table';
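
    As a sketch, with the same assumed id/title columns (this also requires the host to have the FEDERATED engine enabled):

        -- On SERVER1: a FEDERATED table that proxies slave_table on SERVER2.
        -- No data is stored locally; every write is forwarded over the
        -- connection string, so the column list must match slave_table.
        CREATE TABLE federated_table (
            id    INT NOT NULL PRIMARY KEY,
            title VARCHAR(255) NOT NULL
        ) ENGINE=FEDERATED
        CONNECTION='mysql://username:password@SERVER2:port/database/slave_table';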

  • I added AFTER INSERT, AFTER UPDATE and AFTER DELETE triggers to master_table which contained SQL commands to...
    INSERT INTO federated_table VALUES (NEW.id,NEW.title);
    UPDATE federated_table SET id=NEW.id,title=NEW.title WHERE id=OLD.id; and
    DELETE FROM federated_table WHERE id=OLD.id; respectively.
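
    Put together, the three triggers look roughly like this (the trigger names are my own; only the bodies come from the list above):

        DELIMITER //

        CREATE TRIGGER master_table_ai AFTER INSERT ON master_table
        FOR EACH ROW
            INSERT INTO federated_table VALUES (NEW.id, NEW.title);
        //

        CREATE TRIGGER master_table_au AFTER UPDATE ON master_table
        FOR EACH ROW
            UPDATE federated_table SET id = NEW.id, title = NEW.title
            WHERE id = OLD.id;
        //

        CREATE TRIGGER master_table_ad AFTER DELETE ON master_table
        FOR EACH ROW
            DELETE FROM federated_table WHERE id = OLD.id;
        //

        DELIMITER ;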

I hope that helps someone in a similar situation.

Two ideas:

  1. A cron job that finds the max(ID) in each backup database table and then fetches all the records in the main database beyond that (see the sketch after this list).

  2. To include the suggestion from my comment: duplicate your writes directly to the 2nd database instead of writing the queries to a table. This may cause a bit of overhead, but it might be the easiest to implement.
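
A rough sketch of idea 1, assuming a hypothetical items table with an AUTO_INCREMENT id (note that this only catches new inserts, not updates or deletes):

    -- On the backup database: find the newest row already copied.
    SELECT MAX(id) FROM items;
    -- Suppose this returns 4821.

    -- On the main database: fetch everything the backup has not seen yet.
    SELECT * FROM items WHERE id > 4821;
    -- The cron script then inserts the returned rows into the backup's
    -- items table.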
