MariaDB provides high availability through two main technologies: MariaDB Replication, which copies data asynchronously or semi-synchronously, and Galera Cluster, which provides synchronous multi-master clustering with automatic failover and no data loss on node failure.
Galera Load Balancer is a simple load balancer specifically designed for Galera Cluster. Like Galera, it runs only on Linux. Galera Load Balancer is developed and maintained by Codership. Documentation is available at fromdual.com.
Galera Load Balancer is inspired by Pen, which is a generic TCP load balancer. However, since Pen is a generic TCP connection load balancer, the techniques it uses are not well-suited to the particular use case of database servers. Galera Load Balancer is optimized for this type of workload.
Several balancing policies are supported. Each node can be assigned a different weight. Nodes with a higher weight are preferred. Depending on the selected policy, other nodes can even be ignored until the preferred nodes crash.
A lightweight daemon called glbd receives connections from clients and redirects them to nodes. There is no dedicated client for this daemon: a generic TCP client, such as nc, can be used to send administrative commands and read the usage statistics.
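As an illustration, glbd might be started and queried like this (the listen address, node addresses, and weights are placeholders; check your glb version's documentation for the exact options):

```shell
# Start glbd listening on port 3306, with a control socket on 127.0.0.1:4444.
# Each destination is host:port:weight; node1 is preferred due to its higher weight.
glbd --daemon --threads 4 --control 127.0.0.1:4444 \
     0.0.0.0:3306 \
     node1:3306:2 node2:3306:1 node3:3306:1

# Read usage statistics through the control socket with a generic TCP client:
echo "getinfo" | nc 127.0.0.1 4444
```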
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Galera Cluster provides high availability with synchronous replication, while adding asynchronous replication boosts redundancy for disaster recovery or reporting.
MariaDB replication can be used to replicate between MariaDB Galera Cluster and MariaDB Server. This article will discuss how to do that.
Before we set up replication, we need to ensure that the cluster is configured properly. This involves the following steps:
Set log_slave_updates=ON on all nodes in the cluster. See Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log and Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master for more information on why this is important. It is also needed to enable wsrep GTID mode.
Set server_id to the same value on all nodes in the cluster. See Using MariaDB Replication with MariaDB Galera Cluster: Setting server_id on Cluster Nodes for more information on what this means.
If you want to use GTID replication, then you also need to configure some things to enable wsrep GTID mode. For example:
wsrep_gtid_mode=ON needs to be set on all nodes in the cluster.
wsrep_gtid_domain_id needs to be set to the same value on all nodes in the cluster so that each cluster node uses the same domain when assigning GTIDs for Galera Cluster's write sets.
log_slave_updates needs to be enabled on all nodes in the cluster. See MDEV-9855 about that.
And as an extra safety measure:
gtid_domain_id should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured wsrep_gtid_domain_id value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning GTIDs for non-Galera transactions, such as DDL executed with wsrep_sst_method=RSU set or DML executed with wsrep_on=OFF set.
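Taken together, the cluster-side settings above might look like this in each cluster node's configuration file (the server_id and domain ID values are illustrative):

```ini
[mariadb]
log_slave_updates    = ON
server_id            = 100   # same on every node in the cluster
wsrep_gtid_mode      = ON
wsrep_gtid_domain_id = 1     # same on every node in the cluster
gtid_domain_id       = 11    # unique per node, and different from wsrep_gtid_domain_id
```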
Before we set up replication, we also need to ensure that the MariaDB Server replica is configured properly. This involves the following steps:
Set server_id to a different value than the one that the cluster nodes are using.
Set gtid_domain_id to a value that is different than the wsrep_gtid_domain_id and gtid_domain_id values that the cluster nodes are using.
Set log_bin and log_slave_updates=ON if you want the replica to log the transactions that it replicates.
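The corresponding replica-side settings might look like this (values are illustrative and must not collide with the cluster's values):

```ini
[mariadb]
server_id         = 200           # different from the cluster nodes' server_id
gtid_domain_id    = 20            # different from the cluster's wsrep_gtid_domain_id
                                  # and gtid_domain_id values
log_bin           = mariadb-bin   # optional: log replicated transactions
log_slave_updates = ON            # optional: see above
```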
Our process to set up replication is going to be similar to the process described at Setting up a Replication Slave with mariadb-backup, but it will be modified a bit to work in this context.
The very first step is to start the nodes in the first cluster. The first node will have to be bootstrapped. The other nodes can be started normally.
Once the nodes are started, you need to pick a specific node that will act as the replication primary for the MariaDB Server.
The first step is to simply take and prepare a fresh full backup of the node that you have chosen to be the replication primary. For example:
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
And then you would prepare the backup as you normally would. For example:
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
Once the backup is done and prepared, you can copy it to the MariaDB Server that will be acting as replica. For example:
$ rsync -avrP /var/mariadb/backup dc2-dbserver1:/var/mariadb/backup
At this point, you can restore the backup to the datadir, as you normally would. For example:
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
Then adjust file permissions, if necessary:
$ chown -R mysql:mysql /var/lib/mysql/
Now that the backup has been restored to the MariaDB Server replica, you can start the MariaDB Server process.
Before the MariaDB Server replica can begin replicating from the cluster's primary, you need to create a user account on the primary that the replica can use to connect, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'dc2-dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'dc2-dbserver1';
At this point, you need to get the replication coordinates of the primary from the original backup.
The coordinates will be in the xtrabackup_binlog_info file.
mariadb-backup dumps replication coordinates in two forms: GTID strings and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. In this case, it is probably better to use the GTID coordinates.
For example:
mariadb-bin.000096 568 0-1-2
Regardless of the coordinates you use, you will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.
If you want to use GTIDs, then you will have to first set gtid_slave_pos to the GTID coordinates pulled from the xtrabackup_binlog_info file, and set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO command. For example:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
If you want to use the binary log file and position coordinates, then you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command to the file and position coordinates pulled from the xtrabackup_binlog_info file. For example:
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the replica now, so you should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
Now that the MariaDB Server is up, ensure that it does not start accepting writes yet if you want to set up circular replication between the cluster and the MariaDB Server.
You can also set up circular replication between the cluster and MariaDB Server, which means that the MariaDB Server replicates from the cluster, and the cluster also replicates from the MariaDB Server.
Before circular replication can begin, you also need to create a user account on the MariaDB Server, since it will be acting as the replication primary to the cluster's replica, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'c1dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c1dbserver1';
How this is done would depend on whether you want to use the GTID coordinates or the binary log file and position coordinates.
Regardless, you need to ensure that the MariaDB Server is not accepting any writes other than those that it replicates from the cluster at this stage.
To get the GTID coordinates on the MariaDB server, you can check gtid_current_pos by executing:
SHOW GLOBAL VARIABLES LIKE 'gtid_current_pos';
Then on the node acting as a replica in the cluster, you can set up replication by setting gtid_slave_pos to the GTID that was returned and then executing CHANGE MASTER TO:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
To get the binary log file and position coordinates on the MariaDB server, you can execute SHOW MASTER STATUS:
SHOW MASTER STATUS;
Then on the node acting as a replica in the cluster, you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command. For example:
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the circular replication on the node in the first cluster now, so you should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
MariaDB replication can be used for replication between two MariaDB Galera Clusters. This article will discuss how to do that.
Before we set up replication, we need to ensure that the clusters are configured properly. This involves the following steps:
Set log_slave_updates=ON on all nodes in both clusters. See Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log and Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master for more information on why this is important. This is also needed to enable wsrep GTID mode.
Set server_id to the same value on all nodes in a given cluster, but be sure to use a different value in each cluster. See Using MariaDB Replication with MariaDB Galera Cluster: Setting server_id on Cluster Nodes for more information on what this means.
If you want to use GTID replication, then you also need to configure some things to enable wsrep GTID mode. For example:
wsrep_gtid_mode=ON needs to be set on all nodes in each cluster.
wsrep_gtid_domain_id needs to be set to the same value on all nodes in a given cluster so that each cluster node uses the same domain when assigning GTIDs for Galera Cluster's write sets. Each cluster should have this set to a different value so that each cluster uses different domains when assigning GTIDs for their write sets.
log_slave_updates needs to be enabled on all nodes in the cluster. See MDEV-9855 about that.
And as an extra safety measure:
gtid_domain_id should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured wsrep_gtid_domain_id value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning GTIDs for non-Galera transactions, such as DDL executed with wsrep_sst_method=RSU set or DML executed with wsrep_on=OFF set.
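Putting the settings above together, the nodes of the two clusters might be configured like this (server IDs and domain IDs are illustrative):

```ini
# Nodes in the first cluster
[mariadb]
log_slave_updates    = ON
server_id            = 100   # same on every node in this cluster
wsrep_gtid_mode      = ON
wsrep_gtid_domain_id = 1     # same on every node in this cluster
gtid_domain_id       = 11    # unique per node, different from both clusters'
                             # wsrep_gtid_domain_id values

# Nodes in the second cluster use the same layout with different values,
# e.g. server_id = 200, wsrep_gtid_domain_id = 2, gtid_domain_id = 21, 22, ...
```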
Our process to set up replication is going to be similar to the process described at Setting up a Replication Slave with mariadb-backup, but it will be modified a bit to work in this context.
The very first step is to start the nodes in the first cluster. The first node will have to be bootstrapped. The other nodes can be started normally.
Once the nodes are started, you need to pick a specific node that will act as the replication primary for the second cluster.
The first step is to simply take and prepare a fresh full backup of the node that you have chosen to be the replication primary. For example:
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
And then you would prepare the backup as you normally would. For example:
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
Once the backup is done and prepared, you can copy it to the node in the second cluster that will be acting as replica. For example:
$ rsync -avrP /var/mariadb/backup c2dbserver:/var/mariadb/backup
At this point, you can restore the backup to the datadir, as you normally would. For example:
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
Then adjust file permissions, if necessary:
$ chown -R mysql:mysql /var/lib/mysql/
Now that the backup has been restored to the second cluster's replica, you can start the server by bootstrapping the node.
Before the second cluster's replica can begin replicating from the first cluster's primary, you need to create a user account on the primary that the replica can use to connect, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'c2dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c2dbserver1';
At this point, you need to get the replication coordinates of the primary from the original backup.
The coordinates will be in the xtrabackup_binlog_info file.
mariadb-backup dumps replication coordinates in two forms: GTID strings and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. In this case, it is probably better to use the GTID coordinates.
For example:
mariadb-bin.000096 568 0-1-2
Regardless of the coordinates you use, you will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.
If you want to use GTIDs, then you will have to first set gtid_slave_pos to the GTID coordinates pulled from the xtrabackup_binlog_info file, and set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO command. For example:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
If you want to use the binary log file and position coordinates, then you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command to the file and position coordinates pulled from the xtrabackup_binlog_info file. For example:
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the replica now, so you should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
If the replica is replicating normally, then the next step would be to start the MariaDB Server process on the other nodes in the second cluster.
Now that the second cluster is up, ensure that it does not start accepting writes yet if you want to set up circular replication between the two clusters.
You can also set up circular replication between the two clusters, which means that the second cluster replicates from the first cluster, and the first cluster also replicates from the second cluster.
Before circular replication can begin, you also need to create a user account on the second cluster's primary that the first cluster's replica can use to connect, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'c1dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c1dbserver1';
How this is done would depend on whether you want to use the GTID coordinates or the binary log file and position coordinates.
Regardless, you need to ensure that the second cluster is not accepting any writes other than those that it replicates from the first cluster at this stage.
To get the GTID coordinates on the second cluster, you can check gtid_current_pos by executing:
SHOW GLOBAL VARIABLES LIKE 'gtid_current_pos';
Then on the first cluster, you can set up replication by setting gtid_slave_pos to the GTID that was returned and then executing CHANGE MASTER TO:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
To get the binary log file and position coordinates on the second cluster, you can execute SHOW MASTER STATUS:
SHOW MASTER STATUS;
Then on the first cluster, you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command. For example:
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the circular replication on the node in the first cluster now, so you should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
MariaDB's global transaction IDs (GTIDs) are very useful when used with MariaDB replication, which is primarily what that feature was developed for. Galera Cluster, on the other hand, was developed by Codership for all MySQL and MariaDB variants, and the initial development of the technology pre-dated MariaDB's GTID implementation. As a side effect, MariaDB Galera Cluster (at least until MariaDB 10.5.1) only partially supports MariaDB's GTID implementation.
Galera Cluster has its own certification-based replication method that is substantially different from MariaDB replication. However, it would still be beneficial if MariaDB Galera Cluster was able to associate a Galera Cluster write set with a GTID that is globally unique but that is also consistent for that write set on each cluster node.
Before MariaDB 10.5.1, MariaDB Galera Cluster did not replicate the original GTID with the write set except in cases where the transaction was originally applied by a slave SQL thread. Each node independently generated its own GTID for each write set in most cases. See MDEV-20720.
MariaDB has a feature called wsrep GTID mode, controlled by the wsrep_gtid_mode system variable. When this mode is enabled, MariaDB uses some tricks to try to associate each Galera Cluster write set with a GTID that is globally unique, but that is also consistent for that write set on each cluster node. These tricks work in some cases, but GTIDs can still become inconsistent among cluster nodes.
Several things need to be configured for wsrep GTID mode to work, such as
wsrep_gtid_mode=ON needs to be set on all nodes in the cluster.
wsrep_gtid_domain_id needs to be set to the same value on all nodes in a given cluster, so that each cluster node uses the same domain when assigning GTIDs for Galera Cluster's write sets. When replicating between two clusters, each cluster should have this set to a different value, so that each cluster uses different domains when assigning GTIDs for their write sets.
log_slave_updates needs to be enabled on all nodes in the cluster. See MDEV-9855.
And as an extra safety measure:
gtid_domain_id should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured wsrep_gtid_domain_id value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning GTIDs for non-Galera transactions, such as DDL executed with wsrep_sst_method=RSU set or DML executed with wsrep_on=OFF set.
If you want to prevent writes from accidentally generating local GTIDs, you can set wsrep_gtid_mode=DISALLOW_LOCAL_GTID. In this case, such writes fail with an error: ERROR 4165 (HY000): Galera replication not supported.
You can temporarily override this restriction for a session with SET sql_log_bin = 0;
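As a sketch of this behavior (the table name is hypothetical):

```sql
-- With wsrep_gtid_mode = DISALLOW_LOCAL_GTID, a write that would create a
-- local GTID (for example, to a non-replicated MyISAM table) fails:
INSERT INTO test.t1_myisam VALUES (1);
-- ERROR 4165 (HY000): Galera replication not supported

-- Disabling binary logging for the session allows the write through:
SET SESSION sql_log_bin = 0;
INSERT INTO test.t1_myisam VALUES (1);
SET SESSION sql_log_bin = 1;
```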
For information on setting server_id, see Using MariaDB Replication with MariaDB Galera Cluster: Setting server_id on Cluster Nodes.
Until MariaDB 10.5.1, there were known cases where GTIDs could become inconsistent across the cluster nodes.
A known issue (fixed in MariaDB 10.5.1) is:
Implicitly dropped temporary tables can make GTIDs inconsistent. See MDEV-14153 and MDEV-20720.
This does not necessarily imply that wsrep GTID mode works perfectly in all other situations. If you discover any other issues with it, please report a bug.
If a Galera Cluster node is also a replication slave, then that node's slave SQL thread will be applying transactions that it replicates from its replication master. If the node has log_slave_updates=ON set, then each transaction that the slave SQL thread applies will also generate a Galera Cluster write set that is replicated to the rest of the nodes in the cluster.
In MariaDB 10.1.30 and earlier, the node acting as slave would apply the transaction with the original GTID that it received from the master, and the other Galera Cluster nodes would generate their own GTIDs for the transaction when they replicated the write set.
In MariaDB 10.1.31 and later, the node acting as slave will include the transaction's original Gtid_Log_Event in the replicated write set, so all nodes should associate the write set with its original GTID. See MDEV-13431 about that.
MariaDB replication and MariaDB Galera Cluster can be used together. However, there are some things that have to be taken into account.
If you want to use MariaDB replication and MariaDB Galera Cluster together, then the following tutorials may be useful:
If a Galera Cluster node is also a replication master, then some additional configuration may be needed.
Like with MariaDB replication, write sets that are received by a node with Galera Cluster's certification-based replication are not written to the binary log by default.
If the node is a replication master, then its replication slaves only replicate transactions that are in the binary log, so this means that the transactions that correspond to Galera Cluster write-sets would not be replicated by any replication slaves by default. If you would like a node to write its replicated write sets to the binary log, then you will have to set log_slave_updates=ON. If the node has any replication slaves, then this would also allow those slaves to replicate the transactions that corresponded to those write sets.
See Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log for more information.
If a Galera Cluster node is also a replication slave, then some additional configuration may be needed.
If the node is a replication slave, then the node's slave SQL thread will be applying transactions that it replicates from its replication master. Transactions applied by the slave SQL thread will only generate Galera Cluster write-sets if the node has log_slave_updates=ON set. Therefore, in order to replicate these transactions to the rest of the nodes in the cluster, log_slave_updates=ON must be set.
If the node is a replication slave, then it is probably also a good idea to enable wsrep_restart_slave. When this is enabled, the node will restart its slave threads whenever it rejoins the cluster.
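The two replica-related settings above might look like this in the cluster node's configuration file:

```ini
[mariadb]
log_slave_updates  = ON   # required so transactions applied by the slave SQL
                          # thread generate Galera write sets
wsrep_restart_slave = ON  # restart the slave threads whenever the node
                          # rejoins the cluster
```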
Both MariaDB replication and MariaDB Galera Cluster support replication filters, so extra caution must be taken when using all of these features together. See Configuring MariaDB Galera Cluster: Replication Filters for more details on how MariaDB Galera Cluster interprets replication filters.
It is most common to set server_id to the same value on each node in a given cluster. Since MariaDB Galera Cluster uses a virtually synchronous certification-based replication, all nodes should have the same data, so in a logical sense, a cluster can be considered in many cases a single logical server for purposes related to MariaDB replication. The binary logs of each cluster node might even contain roughly the same transactions and GTIDs if log_slave_updates=ON is set and if wsrep GTID mode is enabled and if non-Galera transactions are not being executed on any nodes.
There are cases when it might make sense to set a different server_id value on each node in a given cluster. For example, if log_slave_updates=OFF is set and if another cluster or a standard MariaDB Server is using multi-source replication to replicate transactions from each cluster node individually, then it would be required to set a different server_id value on each node for this to work.
Keep in mind that if replication is set up in a scenario where each cluster node has a different server_id value, and if the replication topology is set up in such a way that a cluster node can replicate the same transactions through Galera and through MariaDB replication, then you may need to configure the cluster node to ignore these transactions when setting up MariaDB replication. You can do so by setting IGNORE_SERVER_IDS to the server IDs of all nodes in the same cluster when executing CHANGE MASTER TO. For example, this might be required when circular replication is set up between two separate clusters, and each cluster node has a different server_id value, and each cluster has log_slave_updates=ON set.
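For instance, if the nodes of this node's own cluster use server IDs 101, 102, and 103 (illustrative values), the CHANGE MASTER TO on a node replicating from the other cluster might look like this sketch:

```sql
CHANGE MASTER TO
   MASTER_HOST="c2dbserver1",
   MASTER_PORT=3310,
   MASTER_USER="repl",
   MASTER_PASSWORD="password",
   MASTER_USE_GTID=slave_pos,
   -- skip transactions that originated from nodes in this node's own cluster:
   IGNORE_SERVER_IDS=(101,102,103);
START SLAVE;
```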
This page is licensed: CC BY-SA / Gnu FDL