
MySQL and Bidirectional Replication (aka Master-Master)

This is a configuration where data can be written to two different MySQL servers and changes made on either server will be sent over to the other so both have all of the changes.

[Diagram: bidirectional database replication]

In this configuration, both servers are masters and both are slaves and the data is kept in sync by passing changes in both directions.

There are a couple of very important things to keep in mind in a configuration like this.

  • Primary keys created on server A cannot clash with those created on server B. Fields that auto increment can cause you problems unless you take special steps.
  • Hard drive speed becomes an issue. Normally, hard drive speed is not the most important thing when dealing with a database. RAM for caching and CPU can be more important. But with bidirectional replication, drive speed becomes important due to the high level of disk I/O that you will encounter.
  • Although triggers will be active on both servers, you will want to turn off events on one of the servers. Pick one server to be your main master and let the events run there. If you are not using events in your applications, they should be turned off on both servers.
  • Keeping your data storage area neat is always important for maintenance purposes. But with replication this becomes even more important because you may need to make full data image copies.

We are going to walk through the process from start to finish of creating a bidirectional replication system.

Although configuration settings are listed in various places in this document, you can and probably should make all of your configuration changes at one time before you start moving/copying data. This will mean fewer server stops and restarts.

You can do much of this while the server is running but there will be some periods of time when the server is shut down and not accessible.



The replication configuration

You will need to edit the MySQL configuration file my.cnf so the server knows it is supposed to be logging changes and pulling changes from the other server. You can make all of these changes while the server is running; they won't take effect until you restart the server. Here is an example of the configuration changes you may need to make. Each item is discussed below the sample.

server-id        = 1
auto-increment-increment = 2
auto-increment-offset = 1
replicate-same-server-id = 0
log-bin          = /var/lib/mysql/binlog/bin
log-bin-index    = /var/lib/mysql/binlog/bin.index
relay-log        = /var/lib/mysql/relaylog/relay
relay-log-index  = /var/lib/mysql/relaylog/relay.index
expire_logs_days = 10
max_binlog_size  = 100M
binlog_format    = 'MIXED'

server-id * This must be a unique integer greater than zero. No two servers can have the same ID or replication will not happen. It is easiest to start with 1 and go up by one with each server.

auto-increment-increment * This is very important. It should be set to the number of master servers you will be running. We have picked 2 since we will have two masters.

auto-increment-offset * This is equally important and should be unique for each server. To keep things simple, if you number your masters 1 and 2, this number should be the same as your server-id. Together with the previous value, this ensures that auto increment values don't clash between servers.
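The interaction of these two settings can be sketched with plain shell arithmetic (no MySQL involved). Each server generates keys of the form offset + n * increment, so with an increment of 2, server 1 only ever produces odd IDs and server 2 only even ones:

```shell
increment=2   # auto-increment-increment: the number of masters
for n in 0 1 2 3; do
  s1=$((1 + n * increment))   # server 1: auto-increment-offset = 1
  s2=$((2 + n * increment))   # server 2: auto-increment-offset = 2
  echo "server1=$s1 server2=$s2"
done
```

Because the two sequences interleave rather than overlap, rows inserted concurrently on both masters can never be assigned the same auto increment key.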

replicate-same-server-id * When set to zero, this tells the server to ignore any replicated event that carries its own server ID, which keeps changes from looping back to the server that made them.

log-bin * This tells the server to log all changes to the binary log. It also gives a file prefix to be used. In our example all files will start with "bin" followed by a sequence number (e.g. bin.000001). It is important to start this entry with a slash and give the full path to the data. This is mainly for clarity, but if you don't start with a slash the file is created at a relative location, and relative locations can cause confusion later.

log-bin-index * This is the location and name of the binary log index file. In our example we are keeping it in the same folder as the binary log files. It tells the server where it is in the binary log process.

relay-log * This works the same way as log-bin, although this is for the relay files that hold changes received from the other master.

relay-log-index * This works like log-bin-index, although this is for the relay log files coming from the other master.

expire_logs_days * This is the number of days to keep the binary logs. You can play with this. I find that after 5 days it is often easier to start the slave (or second master in this case) from scratch rather than letting it catch up. But for this example it is being set to 10 days. Adjust it as you see fit.

max_binlog_size * The maximum file size for binary logs created by the master. I like 100 meg but you can go larger or smaller to fit your needs.

binlog_format * This can be ROW, STATEMENT or MIXED. Each has advantages and disadvantages. When working with master-slave configurations I find ROW is the best setting, but in master-master configurations MIXED is the way to go. This is because records can be modified on either server, and ROW replication will die if a row is deleted on server A and then on server B before server A's change is synced: when the delete arrives from server A, the record is already gone from server B and replication stops. In MIXED mode, most commands are sent as DML statements rather than binary row changes, so deleting a row that doesn't exist is not a problem. You can try STATEMENT, but there are some things that won't replicate. In most cases MIXED is the proper setting here.
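To make that failure mode concrete, here is a toy shell simulation; this is not MySQL, just a text file standing in for server B's copy of a table. A statement-style replay of a delete is a harmless no-op when the row is already gone, while row-based replication insists the exact row still exists and halts when it doesn't:

```shell
printf '1\n2\n3\n' > /tmp/tableB.txt    # server B's copy of the table (one id per line)
# Server B deletes row 2 locally:
grep -v '^2$' /tmp/tableB.txt > /tmp/t.txt && mv /tmp/t.txt /tmp/tableB.txt

# Statement-style replay of server A's "DELETE WHERE id=2": a harmless no-op.
grep -v '^2$' /tmp/tableB.txt > /tmp/t.txt && mv /tmp/t.txt /tmp/tableB.txt

# Row-style replay: the exact row must still be present, and here it isn't.
if ! grep -q '^2$' /tmp/tableB.txt; then
  echo "row 2 already gone: row-based replication would stop here"
fi
```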

Start with a clean configuration

It is unlikely that you are building a new replication system from scratch. But if you are, your life will be a bit easier because your data can start out in the best layout from the word go instead of being cleaned up before starting the process. Although you don't need to follow this cleaning process to make replication work, it will help you in the long run. If you don't care about cleaning up your data folder, you can skip down a bit.

Out of the box, MySQL builds one big data file and everything gets mixed in there, making it hard to split data if you run out of disk space. It is highly recommended that you put each InnoDB database in its own folder. MySQL also puts all of the data related files in one folder, making it hard to weed out the data from binary logs and replication logs. To make your life a lot easier, you should put your data in one folder, binary logs in another, and relay logs in yet another. This is done in the my.cnf file and the entries might look like this...

datadir          = /var/lib/mysql/data
log-bin          = /var/lib/mysql/binlog/bin
log-bin-index    = /var/lib/mysql/binlog/bin.index
relay-log        = /var/lib/mysql/relaylog/relay
relay-log-index  = /var/lib/mysql/relaylog/relay.index

In your current data folder "/var/lib/mysql" create the three folders "data", "binlog" and "relaylog". Make sure they are all owned by the mysql user. On Linux, use the chown command to do this.
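A minimal sketch of that folder setup; the scratch path /tmp/mysql-layout-demo is used here only so the commands can run without root. On a real server the base would be /var/lib/mysql and the chown line would be uncommented:

```shell
BASE=/tmp/mysql-layout-demo          # on a real server: BASE=/var/lib/mysql
mkdir -p "$BASE/data" "$BASE/binlog" "$BASE/relaylog"
# The server runs as the mysql user, so it must own all three folders:
# sudo chown -R mysql:mysql "$BASE/data" "$BASE/binlog" "$BASE/relaylog"
ls "$BASE"
```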

Shut down MySQL and move all of your existing data files into the data folder. If you are already performing master/slave replication, you can either move the log files into the respective folders, or if you are flushing your current slave(s) you can just delete all of those binary log files. If you are keeping your logs, make sure the entries in the my.cnf file match the naming pattern you are currently using. It is best to start fresh here if possible.

Once the data has been moved, you can restart MySQL.

Taking a snapshot of the current state

As with master-slave configurations, you will need to know exactly where in the binary log the master is. To do this the server must be running but no data changes can be taking place. So let's start by opening two connections to the server. One will allow you to stop data changes and get the binary log position. The other will allow you to shut down the database without letting any data changes sneak in.

In the first shell session, log into mysql with root access so you can create a replication user (if you haven't already done so), stop data updates, and finally get the current binary log position. If you already have a replication user, you can skip the first two commands in this example:

mysql > CREATE USER '{user_name}'@'%' IDENTIFIED BY '{secret}';
mysql > GRANT REPLICATION SLAVE ON *.* TO '{user_name}'@'%';
mysql > FLUSH TABLES WITH READ LOCK;
mysql > SHOW MASTER STATUS;
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
| bin.000003       | 86767926 |              |                  |
1 row in set (0.07 sec)
mysql >

Record this information but leave the session open. If you exit, the read lock will be released, the binary log could be altered, and new changes could be made to the database. If this happens, you will have to repeat the lock and status operation because the slave won't be able to sync properly.

In the other shell session, you can now stop the database. On linux systems you should be able to use the service command. In any case you will need permission to do this so you may need to use sudo or elevate your access in another way. We will use sudo in this example:

sudo service mysql stop

Once the database stops, you can close out of the mysql session you used to get the read lock and, from this shell, make a full binary backup of the current state of the database. We will use zip in this example, but you can use tar or even scp to directly copy the data from one server to another. However you make your copy, the goal is the same: take an exact snapshot of the data and the current binary log files. For this example it is assumed that you made the folder changes to put the data and log files each in separate folders.

# ls -l
total 12
drwxr-xr-x  2 root root 4096 Mar 10 09:04 binlog
drwxr-xr-x 25 root root 4096 Mar 10 09:04 data
drwxr-xr-x  2 root root 4096 Mar 10 09:04 relaylog
# zip -r mysql-backup.zip data binlog relaylog
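As a sketch of the snapshot step, here is the tar variant mentioned above, run against a scratch copy of the layout; the scratch paths and archive name are made up for illustration:

```shell
SRC=/tmp/mysql-snap-demo
mkdir -p "$SRC/data" "$SRC/binlog" "$SRC/relaylog"
echo "demo" > "$SRC/data/ibdata1"      # stand-in for a real data file
# Snapshot the data and the binary logs together into one archive:
tar czf /tmp/mysql-snapshot.tar.gz -C "$SRC" data binlog relaylog
tar tzf /tmp/mysql-snapshot.tar.gz | grep 'data/ibdata1'
```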

When the backup process has finished, delete all of the relay log files (if any) and the info files in the main data folder (if they exist). The correct info files and relay log files will be created as soon as you start the server. Be sure the second master is stopped, and then you can restart the database server and open a new mysql client so we can initialize the slave side of the process.

sudo service mysql start

mysql > CHANGE MASTER TO
     ->    MASTER_HOST='{ip address to second server}',
     ->    MASTER_USER='{replication user name}',
     ->    MASTER_PASSWORD='{replication user password}',
     ->    MASTER_LOG_FILE='bin.000003',
     ->    MASTER_LOG_POS=86767926;

Because the other server is not running, these settings will be saved and the relay log files will be created but nothing will start copying yet. We will need to shut down this server for just a moment when we perform this same process on the other server. This server is now a replication master and also a slave waiting to make a connection.

Setting up the second master

Use whatever method works best for you to get the backup file from the server we just set up over to this new server. Shut down the MySQL server if it is running. You also want to copy the my.cnf file so we can make a few adjustments. At a minimum, you will need to change the server-id and auto-increment-offset values. If your data is going to be in the same folder structure you are fine; otherwise make the needed adjustments.

server-id        = 2
auto-increment-offset = 2

Restore the data that we backed up on the first server over to this server. This should create the same basic file structure. We are going to need to delete a few files and allow this server to create new versions. You will delete all relay log files and the relay log index file, along with the info files located in the same folder as your data. We delete these files so the second server will come up fresh. We will populate that information once we get logged into the mysql client.

Shut down the first server. We don't want this server to start processing changes early and possibly cause errors. Once the first server is shut down, you can start this server and perform the same configuration process within a MySQL client session.

sudo service mysql start

mysql > CHANGE MASTER TO
     ->    MASTER_HOST='{ip address to first server}',
     ->    MASTER_USER='{replication user name}',
     ->    MASTER_PASSWORD='{replication user password}',
     ->    MASTER_LOG_FILE='bin.000003',
     ->    MASTER_LOG_POS=86767926;

The only thing in this process that should be changed is the IP address to the server. Because we created this system using an exact copy of the first, the bin log information would be exactly the same. We even copied those bin logs from the first server.

Now you can start the slave:

mysql > START SLAVE;

No replication will actually happen yet because the first server is still offline, so start that server and issue the same command to start the slave over there.

Both servers should now be up and running and syncing changes between each other. The best way to test this is to perform some insert, update and delete statements on each box and watch the changes happen on the other box.

Use these commands to check the status of the master and slave on each machine from time to time to make sure things are working.

show master status;
show slave status;

Remember, running replication is not a replacement for quality and frequent data backups but it will give you the ability to perform backups on one machine while leaving the other running.

If one server goes offline for a short time (even a couple of days), it will slowly catch up when it comes back. If the server is too far behind for your needs, or if it is actually damaged and can't start syncing again, simply take another snapshot of the server that is running properly (or the one with more data, if both are running but one is too far behind) and recreate the second master from scratch. It is often faster to recreate the second master if things are more than a few days out of sync.

When the second server starts replicating, it may already be several hours out of sync with the first server. It is a good idea to let it catch up before you let it start processing update requests. This isn't required, but it may help get the two systems in sync faster by reducing the load on the server that is trying to catch up.

Hopefully, this helped you. It took me several failed starts to get this process down and make it easy.


Comments

Comment: Hi, if I want to create a bidirectional replica on 2 servers but with different DBs, I mean the below scenario: Server1 contains masterA (DB1 & DB2) and slaveB (DB3 and some tables of DB4); Server2 contains masterB (DB3 and DB4) and slaveA (some tables of DB1/DB2). Should I configure them as a bidirectional replica or as master-slave replicas? Also, should I put the below options in each my.cnf file?

[mysqld]
server-id = 1
auto-increment-increment = 2
auto-increment-offset = 1
replicate-same-server-id = 0
log-bin = /var/lib/mysql/binlog/mysql-bin
relay-log = /var/lib/mysql/relaylog/mysql-relay
relay-log-index = /var/lib/mysql/relaylog/mysql-relay.index
expire_logs_days = 10
max_binlog_size = 100M
binlog_format = 'MIXED'
skip-external-locking

Reply: When you make a slave, all of the databases on Server1 will be copied to Server2 except for those you tell replication to skip. So the idea of having DB1 & DB2 on one server and DB3 & DB4 on another server makes no sense. Without telling replication to skip databases, when you start replication both Server1 and Server2 will end up with DB1 - DB4. Also, any additional databases you create once replication is started will automatically end up on both servers. When you start replication, both servers need to start in a consistent state. This means they both need to look identical as far as the databases and tables within each database are concerned. Only the configuration files should be different, and in most cases only a couple of differences should exist, such as the auto-increment values and the server id. The goal of replication is to end up with two (or more) identical copies of all data.

Comment: Thanks for sharing, it helped a great deal to understand the process. I didn't like the thought of leaving a session open manually, so I scripted the part that dumps the database in order to limit the impact on a live database. That script starts the lock in a background process.
All content on this site is copyright ©2004-2019 and is not to be reproduced without prior permission.