Install MariaDB Server using binary packages. This section guides you through deploying pre-compiled versions on various platforms, offering a straightforward approach for setup and upgrades.
The MariaDB project signs their MariaDB packages for Debian, Ubuntu, Fedora, CentOS, and Red Hat.
Our repositories for Debian "Sid" and Ubuntu 16.04 "Xenial" and later use the following GPG signing key. As detailed in MDEV-9781, APT 1.2.7 and later prefers SHA2 GPG keys and now prints warnings when a repository is signed with a SHA1 key, such as our previous GPG key. We have therefore created a SHA2 key for use with these repositories.
Information about this key:
The short Key ID is: 0xC74CD1D8
The long Key ID is: 0xF1656F24C74CD1D8
The full fingerprint of the key is: 177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
The key can be added on Debian-based systems using the following command:
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
Usage of the apt-key command is deprecated in the latest versions of Debian and Ubuntu. The replacement method is to download the keyring file to the /etc/apt/trusted.gpg.d/ directory. This can be done with the following:
sudo curl -LsSo /etc/apt/trusted.gpg.d/mariadb-keyring-2019.gpg https://supplychain.mariadb.com/mariadb-keyring-2019.gpg
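After downloading, it is worth confirming that the keyring actually carries the fingerprint listed above before trusting it. A minimal sketch, assuming gnupg is installed and the keyring was saved to the path used in the command above; the expected fingerprint is the one published on this page:

```shell
expected='177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8'
# gpg prints fingerprints without spaces in --with-colons output, so normalize ours
actual=$(gpg --show-keys --with-colons /etc/apt/trusted.gpg.d/mariadb-keyring-2019.gpg \
         | awk -F: '/^fpr/ { print $10; exit }')
if [ "$(printf '%s' "$expected" | tr -d ' ')" = "$actual" ]; then
    echo "keyring fingerprint matches"
else
    echo "fingerprint mismatch -- do not trust this keyring" >&2
fi
```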
Beginning in 2023, we migrated the key used to sign our yum/dnf/zypper repositories and our source code and binary tarballs to the same key we use for Debian and Ubuntu. This unifies our GPG signing and enables our repositories to be compatible with FIPS and other regulations that mandate a stronger signing key.
The key can be imported on RPM-based systems using the following command:
sudo rpm --import https://supplychain.mariadb.com/MariaDB-Server-GPG-KEY
or
sudo rpmkeys --import https://supplychain.mariadb.com/MariaDB-Server-GPG-KEY
The GPG Key ID of the MariaDB signing key we used for yum/dnf/zypper repositories and to sign our source code tarballs until the end of 2022 was 0xCBCB082A1BB943DB. The short form of the ID is 0x1BB943DB, and the full key fingerprint is:
1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB
This key was used by the yum/dnf/zypper repositories for RedHat, CentOS, Fedora, openSUSE, and SLES.
If you configure the mariadb.org rpm repositories using the repository configuration tool (see below) then your package manager will prompt you to import the key the first time you install a package from the repository.
You can also import the key directly using the following command:
sudo rpmkeys --import https://supplychain.mariadb.com/MariaDB-Server-GPG-KEY
See this page for details on using the mariadb_repo_setup script to configure repositories that use these keys.
See this page for details on configuring MariaDB Foundation repositories that use these keys.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB was originally designed as a drop-in replacement for MySQL, with more features, new storage engines, fewer bugs, and better performance, but you can also install it alongside MySQL. (This can be useful, for example, if you want to migrate databases/applications one by one.)
Here are the steps to install MariaDB alongside an existing MySQL installation.
Download the compiled binary tar.gz that contains the latest version (mariadb-5.5.24-linux-x86_64.tar.gz as of this writing) and extract the files into a directory of your choice. I will assume for this article that the directory is /opt.
[root@mariadb-near-mysql ~]# cat /etc/issue
CentOS release 6.2 (Final)
[root@mariadb-near-mysql ~]# rpm -qa mysql*
mysql-5.1.61-1.el6_2.1.x86_64
mysql-libs-5.1.61-1.el6_2.1.x86_64
mysql-server-5.1.61-1.el6_2.1.x86_64
[root@mariadb-near-mysql ~]# ps axf | grep mysqld
2072 pts/0 S+ 0:00 \_ grep mysqld
1867 ? S 0:01 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock ...
1974 ? Sl 0:06 \_ /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql ...
Create data directory and symlinks as below:
[root@mariadb-near-mysql opt]# mkdir mariadb-data
[root@mariadb-near-mysql opt]# ln -s mariadb-5.5.24-linux-x86_64 mariadb
[root@mariadb-near-mysql opt]# ls -al
total 20
drwxr-xr-x. 5 root root 4096 2012-06-06 07:27 .
dr-xr-xr-x. 23 root root 4096 2012-06-06 06:38 ..
lrwxrwxrwx. 1 root root 27 2012-06-06 07:27 mariadb -> mariadb-5.5.24-linux-x86_64
drwxr-xr-x. 13 root root 4096 2012-06-06 07:07 mariadb-5.5.24-linux-x86_64
drwxr-xr-x. 2 root root 4096 2012-06-06 07:26 mariadb-data
Create group mariadb and user mariadb and set correct ownerships:
[root@mariadb-near-mysql opt]# groupadd --system mariadb
[root@mariadb-near-mysql opt]# useradd -c "MariaDB Server" -d /opt/mariadb -g mariadb --system mariadb
[root@mariadb-near-mysql opt]# chown -R mariadb:mariadb mariadb-5.5.24-linux-x86_64/
[root@mariadb-near-mysql opt]# chown -R mariadb:mariadb mariadb-data/
Create a new my.cnf in /opt/mariadb from support files:
[root@mariadb-near-mysql opt]# cp mariadb/support-files/my-medium.cnf mariadb-data/my.cnf
[root@mariadb-near-mysql opt]# chown mariadb:mariadb mariadb-data/my.cnf
Edit the file /opt/mariadb-data/my.cnf and add custom paths, socket, port, user and, most important of all, the data directory and base directory. The file should end up with at least the following:
[client]
port = 3307
socket = /opt/mariadb-data/mariadb.sock
[mysqld]
datadir = /opt/mariadb-data
basedir = /opt/mariadb
port = 3307
socket = /opt/mariadb-data/mariadb.sock
user = mariadb
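Before starting the second instance, it is worth checking that nothing is already listening on the port you chose. A quick sketch, assuming the ss utility from iproute2 is available:

```shell
# Report whether anything already listens on TCP port 3307
if ss -ltn 2>/dev/null | grep -q ':3307[[:space:]]'; then
    echo "port 3307 is already in use -- pick another port in my.cnf"
else
    echo "port 3307 is free"
fi
```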
Copy the init.d script from the support files to the right location:
[root@mariadb-near-mysql opt]# cp mariadb/support-files/mysql.server /etc/init.d/mariadb
[root@mariadb-near-mysql opt]# chmod +x /etc/init.d/mariadb
Edit /etc/init.d/mariadb replacing mysql with mariadb as below:
- # Provides: mysql
+ # Provides: mariadb
- basedir=
+ basedir=/opt/mariadb
- datadir=
+ datadir=/opt/mariadb-data
- lock_file_path="$lockdir/mysql"
+ lock_file_path="$lockdir/mariadb"
The trickiest part is the last set of changes to this file. You need to tell MariaDB to use only one cnf file. In the start section, after $bindir/mysqld_safe, add --defaults-file=/opt/mariadb-data/my.cnf. The lines should finally look like:
# Give extra arguments to mysqld with the my.cnf file. This script
# may be overwritten at next upgrade.
$bindir/mysqld_safe --defaults-file=/opt/mariadb-data/my.cnf --datadir="$datadir" --pid-file="$mysqld_pid_file_path" $other_args >/dev/null 2>&1 &
The same change needs to be made to the mariadb-admin command below in the wait_for_ready() function so that the mariadb start command can properly listen for the server start. In the wait_for_ready() function, after $bindir/mariadb-admin add --defaults-file=/opt/mariadb-data/my.cnf. The lines should look like:
wait_for_ready () {
[...]
if $bindir/mariadb-admin --defaults-file=/opt/mariadb-data/my.cnf ping >/dev/null 2>&1; then
Run mariadb-install-db by explicitly giving it the my.cnf file as argument:
[root@mariadb-near-mysql opt]# cd mariadb
[root@mariadb-near-mysql mariadb]# scripts/mariadb-install-db --defaults-file=/opt/mariadb-data/my.cnf
Now you can start MariaDB with:
[root@mariadb-near-mysql opt]# /etc/init.d/mariadb start
Starting MySQL... [ OK ]
Make MariaDB start at system start:
[root@mariadb-near-mysql opt]# cd /etc/init.d
[root@mariadb-near-mysql init.d]# chkconfig --add mariadb
[root@mariadb-near-mysql init.d]# chkconfig --levels 3 mariadb on
Finally test that you have both instances running:
[root@mariadb-near-mysql ~]# mysql -e "SELECT VERSION();"
+-----------+
| VERSION() |
+-----------+
| 5.1.61 |
+-----------+
[root@mariadb-near-mysql ~]# mysql -e "SELECT VERSION();" --socket=/opt/mariadb-data/mariadb.sock
+----------------+
| VERSION() |
+----------------+
| 5.5.24-MariaDB |
+----------------+
Because the socket, the my.cnf file, and the databases all live in /opt/mariadb-data, upgrading the MariaDB version only requires you to:
extract the new version from the archive in /opt near the current version
stop MariaDB
change the symlink mariadb to point to the new directory
start MariaDB
run the upgrade script, remembering to provide the socket option --socket=/opt/mariadb-data/mariadb.sock
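The steps above can be sketched as follows; the new version's directory name is a hypothetical placeholder, and mysql_upgrade is the upgrade script shipped with MariaDB 5.5-era tarballs:

```shell
cd /opt
tar -zxf mariadb-NEW-VERSION-linux-x86_64.tar.gz   # 1. extract the new version next to the old one
/etc/init.d/mariadb stop                           # 2. stop MariaDB
ln -sfn mariadb-NEW-VERSION-linux-x86_64 mariadb   # 3. re-point the symlink in one step
/etc/init.d/mariadb start                          # 4. start MariaDB
mariadb/bin/mysql_upgrade --socket=/opt/mariadb-data/mariadb.sock   # 5. run the upgrade script
```

The ln -sfn form replaces the existing symlink in place, so the mariadb path never points at a missing directory during the swap.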
This page is licensed: CC BY-SA / Gnu FDL
Binary tarballs (bintars) are compressed tar archives that contain pre-compiled executables, libraries, and other deployment dependencies. They can usually be installed on any modern Linux distribution.
MariaDB binary tarballs are named following the pattern mariadb-VERSION-OS.tar.gz. Be sure to download the correct version for your machine.
Note: Some older binary tarballs are marked '(GLIBC_2.14)' or '(requires GLIBC_2.14+)'. These binaries are built the same as the others, but on a newer build host, and they require GLIBC 2.14 or higher. Use the other binaries for machines with older versions of GLIBC installed. Run ldd --version to see which version is running on your distribution. Others are marked 'systemd'; these are for systems with systemd and GLIBC 2.19 or higher.
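The note above can be turned into a quick check. This is a sketch: meets_min is a hypothetical helper relying on GNU sort's -V (version sort) and -C (check order) options, and the parsing assumes the usual glibc ldd banner:

```shell
# meets_min VERSION MINIMUM -> success if VERSION >= MINIMUM (uses GNU sort -V)
meets_min() {
    printf '%s\n%s\n' "$2" "$1" | sort -V -C
}

# First line of `ldd --version` ends with the GLIBC version, e.g. "... 2.35"
glibc=$(ldd --version | head -n1 | grep -oE '[0-9]+\.[0-9]+' | tail -n1)
if meets_min "$glibc" 2.19; then
    echo "GLIBC $glibc: any tarball, including the systemd builds, should work"
elif meets_min "$glibc" 2.14; then
    echo "GLIBC $glibc: use a non-systemd tarball"
else
    echo "GLIBC $glibc: use the tarballs built for older GLIBC"
fi
```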
Binary tarballs provide multiple benefits:
They are highly OS independent. As long as you get the bintar matching your architecture, your GLIBC version, and whether you are using systemd or not, it should work almost anywhere.
You do not need to be root to use them.
They can be installed by anyone to any path, including one's home directory.
You can have any number of different MariaDB installations on the same machine. This is often desired during upgrades when one wants to have the old installation running until switching to the new one.
You can use them on systems for which MariaDB does not support native packages.
To install the binaries, unpack the distribution into the directory of your choice and run the mariadb-install-db script.
In the example below we install MariaDB in the /usr/local/mysql directory (this is the default location for MariaDB on many platforms). However, any other directory works too.
We install the binary with a symlink to the original name. This is done so that you can easily change MariaDB versions just by moving the symlink to point to another directory.
MariaDB searches for the configuration files '/etc/my.cnf' (on some systems '/etc/mysql/my.cnf') and '~/.my.cnf'. If you have an old my.cnf file (maybe from a system installation of MariaDB or MySQL), you need to take care that you don't accidentally use it with your new binary tar installation.
The normal solution for this is to ignore the my.cnf file in /etc when you use the programs in the tar file. This is done by creating your own .my.cnf file in your home directory and telling mariadb-install-db, mariadbd-safe, and possibly mariadb (the command-line client utility) to use only that one with the option '--defaults-file=~/.my.cnf'. Note that this has to be the first option for the above commands!
If you have root access to the system, you probably want to install MariaDB under the user and group 'mysql' (to keep compatibility with MySQL installations):
groupadd mysql
useradd -g mysql mysql
cd /usr/local
tar -zxvpf /path-to/mariadb-VERSION-OS.tar.gz
ln -s mariadb-VERSION-OS mysql
cd mysql
./scripts/mariadb-install-db --user=mysql
chown -R root .
chown -R mysql data
The symlinking with ln -s is recommended, as it makes it easy to install several MariaDB versions at the same time (for easy testing, upgrading, downgrading, etc.).
If you are installing MariaDB to replace MySQL, then you can leave out the call to mariadb-install-db. Instead, shut down MySQL. MariaDB should find the path to the data directory in your old /etc/my.cnf file (the path may vary depending on your system).
To start mariadbd you should now do:
./bin/mariadbd-safe --user=mysql &
or
./bin/mariadbd-safe --defaults-file=~/.my.cnf --user=mysql &
To test the connection, modify your $PATH so you can invoke clients such as mariadb, mariadb-dump, etc.
export PATH=$PATH:/usr/local/mysql/bin/
You may want to modify your .bashrc or .bash_profile to make the change permanent.
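One way to do that, as a sketch: append the export line to ~/.bashrc only if it is not already present, so re-running the snippet never duplicates it (the grep guard is the only addition here; the PATH line is the one shown above):

```shell
# Append the PATH line to ~/.bashrc exactly once
line='export PATH=$PATH:/usr/local/mysql/bin'
touch ~/.bashrc
grep -qxF "$line" ~/.bashrc || echo "$line" >> ~/.bashrc
```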
Below, change /usr/local to the directory of your choice.
cd /usr/local
gunzip < /path-to/mariadb-VERSION-OS.tar.gz | tar xf -
ln -s mariadb-VERSION-OS mysql
cd mysql
./scripts/mariadb-install-db --defaults-file=~/.my.cnf
If you have problems with the above gunzip command line and you have GNU tar, you can instead do:
tar xfz /path-to/mariadb-VERSION-OS.tar.gz
To start mariadbd you should now do:
./bin/mariadbd-safe --defaults-file=~/.my.cnf &
You can get mariadbd (the MariaDB server) to autostart by copying the mysql.server file to the right place.
cp support-files/mysql.server /etc/init.d/mysql.server
The exact place depends on your system. The mysql.server file contains instructions on how to use and fine-tune it.
For systemd installation the mariadb.service file will need to be copied from the support-files/systemd folder to the /usr/lib/systemd/system/ folder.
cp support-files/systemd/mariadb.service /usr/lib/systemd/system/mariadb.service
Note that by default the /usr/ directory is write-protected by systemd, so with the data directory in /usr/local/mysql/data as per the instructions above, you also need to make that directory writable. You can do so by adding an extra service include file:
mkdir /etc/systemd/system/mariadb.service.d/
cat > /etc/systemd/system/mariadb.service.d/datadir.conf <<EOF
[Service]
ReadWritePaths=/usr/local/mysql/data
EOF
systemctl daemon-reload
After this you can start and stop the service using systemctl start mariadb.service and systemctl stop mariadb.service, respectively.
Please refer to the systemd page for further information.
After this, remember to set proper passwords for all accounts accessible from untrusted sources, to avoid exposing the host to security risks!
Also consider using mysql.server to start MariaDB automatically when your system boots.
On systems using systemd, you can instead enable automatic startup during system boot with systemctl enable mariadb.service.
For details on the exact steps used to build the binaries, see the compiling MariaDB section of the KB.
This page is licensed: CC BY-SA / Gnu FDL
On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant .deb packages from MariaDB's repository using apt, aptitude, Ubuntu Software Center, Synaptic Package Manager, or another package manager.
This page walks you through the simple installation steps using apt.
We currently have APT repositories for the following Linux distributions:
Debian 10 (Buster)
Debian 11 (Bullseye)
Debian 12 (Bookworm)
Debian Unstable (Sid)
Ubuntu 18.04 LTS (Bionic)
Ubuntu 20.04 LTS (Focal)
Ubuntu 22.04 (Jammy)
Ubuntu 22.10 (Kinetic)
Ubuntu 23.04 (Lunar)
If you want to install MariaDB with apt, then you can configure apt to install from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.
MariaDB Corporation provides a MariaDB Package Repository for several Linux distributions that use apt to manage packages. This repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup. The MariaDB Package Repository setup script automatically configures your system to install packages from the MariaDB Package Repository.
To use the script, execute the following command:
curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
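If you prefer not to pipe a remote script straight into a root shell, you can download it first, review it, and verify it against the checksum published on mariadb.com before running it. A sketch; `<published-sha256>` is a placeholder you must replace with the real value from the download page:

```shell
# Download, verify, then run -- rather than curl | sudo bash in one step
curl -LsSO https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
echo "<published-sha256>  mariadb_repo_setup" | sha256sum -c - \
    && sudo bash mariadb_repo_setup
```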
Note that this script also configures a repository for MariaDB MaxScale and a repository for MariaDB Tools, which currently only contains Percona XtraBackup and its dependencies.
See MariaDB Package Repository Setup and Usage for more information.
If you want to install MariaDB with apt, then you can configure apt to install from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.
The MariaDB Foundation provides a MariaDB repository for several Linux distributions that use apt to manage packages. This repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup. The MariaDB Repository Configuration Tool can easily generate the appropriate commands to add the repository for your distribution.
There are several ways to add the repository.
Executing add-apt-repository
One way to add an apt repository is by using the add-apt-repository command. This command will add the repository configuration to /etc/apt/sources.list.
For example, if you wanted to use the repository to install MariaDB 10.6 on Ubuntu 18.04 LTS (Bionic), then you could use the following commands to add the MariaDB apt repository:
sudo apt-get install software-properties-common
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el] http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu bionic main'
And then you would have to update the package cache by executing the following command:
sudo apt update
Creating a Source List File
Another way to add an apt repository is by creating a source list file in /etc/apt/sources.list.d/.
For example, if you wanted to use the repository to install MariaDB 10.6 on Ubuntu 18.04 LTS (Bionic), then you could create the MariaDB.list file in /etc/apt/sources.list.d/ with the following contents to add the MariaDB apt repository:
# MariaDB 10.6 repository list - created 2019-01-27 09:50 UTC
# http://downloads.mariadb.org/mariadb/repositories/
deb [arch=amd64,arm64,ppc64el] http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu bionic main
deb-src http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu bionic main
And then you would have to update the package cache by executing the following command:
sudo apt update
Using Ubuntu Software Center
Another way to add an apt repository is by using Ubuntu Software Center.
You can do this by going to the Software Sources window. This window can be opened either by navigating to Edit > Software Sources or by navigating to System > Administration > Software Sources.
Once the Software Sources window is open, go to the Other Software tab, and click the Add button. At that point, you can input the repository information provided by the MariaDB Repository Configuration Tool.
See here for more information.
Using Synaptic Package Manager
Another way to add an apt repository is by using Synaptic Package Manager.
You can do this by going to the Software Sources window. This window can be opened either by navigating to System > Administrator > Software Sources or by navigating to Settings > Repositories.
Once the Software Sources window is open, go to the Other Software tab, and click the Add button. At that point, you can input the repository information provided by the MariaDB Repository Configuration Tool.
See here for more information.
If you wish to pin the apt repository to a specific minor release, or if you would like to downgrade to a specific minor release, then you can create an apt repository with the URL hard-coded to that specific minor release.
The MariaDB Foundation archives repositories of old minor releases at the following URL:
Archives only cover the distributions and architectures supported at the time of release. For example, MariaDB 10.6.21 exists for Ubuntu Bionic, Focal, Jammy, Kinetic, and Lunar, as can be seen by looking in [dists](https://archive.mariadb.org/mariadb-10.6.21/repo/ubuntu/dists).
For example, if you wanted to pin your repository to MariaDB 10.5.9 on Ubuntu 20.04 LTS (Focal), then you would first have to remove any existing MariaDB repository source list file from /etc/apt/sources.list.d/. Then you could use the following command to add the MariaDB apt repository:
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el,s390x] http://archive.mariadb.org/mariadb-10.5.9/repo/ubuntu/ focal main main/debug'
Ensure you have the signing key installed.
Ubuntu Xenial and older will need:
sudo apt-get install -y apt-transport-https
And then you would have to update the package cache by executing the following command:
sudo apt update
MariaDB's apt repository can be updated to a new major release. How this is done depends on how you originally configured the repository.
If you configured apt to install from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script, then you can update the major release that the repository uses by running the script again.
If you configured apt to install from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool, then you can update the major release in various ways, depending on how you originally added the repository.
Updating a Repository with add-apt-repository
If you added the apt repository by using the add-apt-repository command, then you can update the major release that the repository uses by using the add-apt-repository command again.
First, look for the repository string for the old version in /etc/apt/sources.list.
Then, you can remove the repository for the old version by executing the add-apt-repository command with the --remove option. For example, if you wanted to remove a MariaDB 10.6 repository, then you could do so by executing something like the following:
sudo add-apt-repository --remove 'deb [arch=amd64,arm64,ppc64el] http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu bionic main'
After that, you can add the repository for the new version with the add-apt-repository command. For example, if you wanted to use the repository to install MariaDB 10.6 on Ubuntu 18.04 LTS (Bionic), then you could use the following commands to add the MariaDB apt repository:
sudo apt-get install software-properties-common
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el] http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu bionic main'
And then you would have to update the package cache by executing the following command:
sudo apt update
After that, the repository should refer to MariaDB 10.6.
Updating a Source List File
If you added the apt repository by creating a source list file in /etc/apt/sources.list.d/, then you can update the major release that the repository uses by updating the source list file in place. For example, if you wanted to change the repository from MariaDB 10.5 to MariaDB 10.6, and the source list file was at /etc/apt/sources.list.d/MariaDB.list, then you could execute the following:
sudo sed -i 's/10.5/10.6/' /etc/apt/sources.list.d/MariaDB.list
And then you would have to update the package cache by executing the following command:
sudo apt update
After that, the repository should refer to MariaDB 10.6.
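Before touching the real file, the substitution can be exercised on a scratch copy of a typical MariaDB.list (using the same mirror URL as earlier on this page). Note that sed's s/10.5/10.6/ replaces only the first occurrence per line, which is sufficient here because the version appears once per line:

```shell
cat > /tmp/MariaDB.list <<'EOF'
deb [arch=amd64,arm64,ppc64el] http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.5/ubuntu bionic main
deb-src http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.5/ubuntu bionic main
EOF
sed -i 's/10.5/10.6/' /tmp/MariaDB.list
grep -c '10\.6' /tmp/MariaDB.list   # prints 2: both lines now point at 10.6
```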
Before MariaDB can be installed, you also have to import the GPG public key that is used to verify the digital signatures of the packages in our repositories. This allows the apt utility to verify the integrity of the packages that it installs.
Prior to Debian 9 (Stretch) and Ubuntu 16.04 LTS (Xenial), the ID of our GPG public key was 0xcbcb082a1bb943db. The full key fingerprint is:
1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB
The apt-key utility can be used to import this key. For example:
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
Starting with Debian 9 (Stretch) and Ubuntu 16.04 LTS (Xenial), the ID of our GPG public key is 0xF1656F24C74CD1D8. The full key fingerprint is:
177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
The apt-key utility can be used to import this key. For example:
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
Starting with Debian 9 (Stretch), the dirmngr package needs to be installed before the GPG public key can be imported. To install it, execute: sudo apt install dirmngr
If you are unsure which GPG public key you need, then it is perfectly safe to import both keys.
The command used to import the GPG public key is the same on both Debian and Ubuntu. For example:
$ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.ASyOPV87XC --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
gpg: requesting key 1BB943DB from hkp server keyserver.ubuntu.com
gpg: key 1BB943DB: "MariaDB Package Signing Key <package-signing-key@mariadb.org>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1
Once the GPG public key is imported, you are ready to install packages from the repository.
After the apt repository is configured, you can install MariaDB by executing the apt-get command. The specific command you use depends on which packages you want to install.
To install the most common packages, first you would have to update the package cache by executing the following command:
sudo apt update
Then, execute the following command:
sudo apt-get install mariadb-server galera-4 mariadb-client libmariadb3 mariadb-backup mariadb-common
To install MariaDB Server, first you would have to update the package cache by executing the following command:
sudo apt update
Then, execute the following command:
sudo apt-get install mariadb-server
The process to install MariaDB Galera Cluster with the MariaDB apt repository is practically the same as installing standard MariaDB Server.
Galera Cluster support is included in the standard MariaDB Server packages, so you will need to install the mariadb-server package, as you normally would. You also need to install the galera-4 package to obtain the Galera 4 wsrep provider library.
To install MariaDB Galera Cluster, first you would have to update the package cache by executing the following command:
sudo apt update
To install MariaDB Galera Cluster, you could execute the following command:
sudo apt-get install mariadb-server mariadb-client galera-4
MariaDB Galera Cluster also has a separate package, called galera-arbitrator-4, that can be installed on arbitrator nodes. This package should be installed on whatever node you want to serve as the arbitrator. It can either run on a separate server that is not acting as a cluster node, which is the recommended configuration, or it can run on a server that is also acting as an existing cluster node.
To install the arbitrator package, you could execute the following command:
sudo apt-get install galera-arbitrator-4
See MariaDB Galera Cluster for more information on MariaDB Galera Cluster.
MariaDB Connector/C is included as the client library.
To install the clients and client libraries, first you would have to update the package cache by executing the following command:
sudo apt update
Then, execute the following command:
sudo apt-get install mariadb-client libmariadb3
To install mariadb-backup, first you would have to update the package cache by executing the following command:
sudo apt update
Then, execute the following command:
sudo apt-get install mariadb-backup
Some plugins may also need to be installed.
For example, to install the cracklib_password_check password validation plugin, first you would have to update the package cache by executing the following command:
sudo apt update
Then, execute the following command:
sudo apt-get install mariadb-cracklib-password-check
The MariaDB apt repository contains the last few versions of MariaDB. To show which versions are available, use the apt-cache command:
sudo apt-cache showpkg mariadb-server
In the output you will see the available versions.
To install an older version of a package instead of the latest version, specify the package name, an equals sign, and then the version number.
However, when installing an older version of a package, if apt-get has to install dependencies, then it will automatically choose to install the latest versions of those packages. To ensure that all MariaDB packages are on the same version in this scenario, it is necessary to specify them all. Therefore, to install MariaDB 10.6.21 from this apt repository, we would do the following:
sudo apt-get install mariadb-server=10.6.21-1 mariadb-client=10.6.21-1 libmariadb3=10.6.21-1 mariadb-backup=10.6.21-1 mariadb-common=10.6.21-1
The rest of the install and setup process is as normal.
While it is not recommended, it is possible to download and install the .deb packages manually. However, it is generally recommended to use a package manager like apt-get.
A tarball that contains the .deb packages can be downloaded from the following URL:
For example, to install the MariaDB 10.6.21 .deb packages on Ubuntu 18.04 LTS (Bionic), you could execute the following:
sudo apt-get update
sudo apt-get install libdbi-perl libdbd-mysql-perl psmisc libaio1 socat
wget https://downloads.mariadb.com/MariaDB/mariadb-10.6.21/repo/ubuntu/mariadb-10.6.21-ubuntu-bionic-amd64-debs.tar
tar -xvf mariadb-10.6.21-ubuntu-bionic-amd64-debs.tar
cd mariadb-10.6.21-ubuntu-bionic-amd64-debs/
sudo dpkg --install ./mariadb-common*.deb \
./mysql-common*.deb \
./mariadb-client*.deb \
./libmariadb3*.deb \
./libmysqlclient18*.deb
sudo dpkg --install ./mariadb-server*.deb \
./mariadb-backup*.deb \
./galera-4*.deb
After the installation is complete, you can start MariaDB.
If you are using MariaDB Galera Cluster, then keep in mind that the first node will have to be bootstrapped.
The available DEB packages depend on the specific MariaDB release series.
For MariaDB, the following DEBs are available:
galera-4
The WSREP provider for Galera 4.
libmariadb3
Dynamic client libraries.
libmariadb-dev
Development headers and static libraries.
libmariadbclient18
Virtual package to satisfy external dependencies.
libmysqlclient18
Virtual package to satisfy external dependencies.
mariadb-backup
mariadb-client
Client tools like mariadb CLI, mariadb-dump, and others.
mariadb-client-core
Core client tools
mariadb-common
Character set files and /etc/my.cnf
mariadb-plugin-connect
The CONNECT storage engine.
mariadb-plugin-cracklib-password-check
The cracklib_password_check password validation plugin.
mariadb-plugin-gssapi-client
The client-side component of the gssapi authentication plugin.
mariadb-plugin-gssapi-server
The server-side component of the gssapi authentication plugin.
mariadb-plugin-rocksdb
The MyRocks storage engine.
mariadb-plugin-spider
The SPIDER storage engine.
mariadb-plugin-tokudb
The TokuDB storage engine.
mariadb-server
The server and server tools, like myisamchk and mariadb-hotcopy are here.
mariadb-server-core
The core server.
mariadb-test
mysql-client-test executable, and mysql-test framework with the tests.
mariadb-test-data
MariaDB database regression test suite - data files
When the mariadb-server DEB package is installed, it will create a user and group named mysql, if they do not already exist.
This page is licensed: CC BY-SA / Gnu FDL
MSI packages are available for the x64 (64-bit) processor architecture and, in some older releases only, for x86 (32-bit). We'll use screenshots from an x64 installation below (the 32-bit installer is very similar).
This is the typical mode of installation. To start the installer, just click on the mariadb-...msi file.
Click on "I accept the terms"
Here, you can choose what features to install. By default, all features are installed with the exception of the debug symbols. If the "Database instance" feature is selected, the installer will create a database instance, by default running as a service. In this case the installer will present additional dialogs to control various database properties. Note that you do not necessarily have to create an instance at this stage. For example, if you already have MySQL or MariaDB databases running as services, you can just upgrade them during the installation. Also, you can create additional database instances after the installation, with the mysql_install_db.exe utility.
NOTE: By default, if you install a database instance, the data directory will be in the "data" folder under the installation root. To change the data directory location, select "Database instance" in the feature tree, and use the "Browse" button to point to another place.
This dialog is shown if you selected the "Database instance" feature. Here, you can set the password for the "root" database user and specify whether root can access the database from remote machines. The "Create anonymous account" setting allows for anonymous (non-authenticated) users. It is off by default, and it is not recommended to change this setting.
Install as service
Defines whether the database should be run as a service. If it should be run as a service, then it also defines the service name. It is recommended to run your database instance as a service as it greatly
simplifies database management. In MariaDB 10.4 and later, the default service name used by the MSI installer is "MariaDB". In 10.3 and before, the default service name used by the MSI installer is "MySQL". Note that the default service name for the --install and --install-manual options for mysqld.exe
is "MySQL" in all versions of MariaDB.
Enable Networking
Whether to enable TCP/IP (recommended) and which port MariaDB should listen to. If security is a concern, you can change the bind-address parameter post-installation to bind to only local addresses. If the "Enable networking" checkbox is deselected, the database will use named pipes for communication.
InnoDB engine settings
Defines the InnoDB buffer pool size, and the InnoDB page size. The default buffer pool size is 12.5% of RAM, and depending on your requirements you can give InnoDB more (up to 70-80% RAM). 32 bit versions of MariaDB have restrictions on maximum buffer pool size, which is approximately 1GB, due to virtual address space limitations for 32bit processes. A 16k page size is suitable for most situations. See the innodb_page_size system variable for details on other settings.
At this point, all installation settings are collected. Click on the "Install" button.
Installation is finished now. If you have upgradable instances of MariaDB/MySQL, running as services, this dialog will present a "Do you want to upgrade existing instances" checkbox (if selected, it launches the Upgrade Wizard post-installation).
If you installed a database instance as service, the service will be running already.
Installation will add some entries in the Start Menu:
MariaDB Client - Starts command line client mysql.exe
Command Prompt - Starts a command prompt. The environment is set so that the "bin" directory of the installation is included in the PATH environment variable, i.e., you can use this command prompt to issue MariaDB commands (mysqladmin, mysql, etc.)
Database directory - Opens the data directory in Explorer.
Error log - Opens the database error log in Notepad.
my.ini - Opens the database configuration file my.ini in Notepad.
Upgrade Wizard - Starts the Wizard to upgrade an existing MariaDB/MySQL database instance to this MariaDB version.
In the Explorer applet "Programs and Features" (or "Add/Remove programs" on older Windows), find the entry for MariaDB, choose Uninstall/Change and click on the "Remove" button in the dialog below.
If you installed a database instance, you will need to decide if you want to remove or keep the data in the database directory.
The MSI installer supports silent installations as well. In its simplest form silent installation with all defaults can be performed from an elevated command prompt like this:
msiexec /i path-to-package.msi /qn
Note: the installation is silent due to msiexec.exe's /qn switch (no user interface); if you omit the switch, the installation will have the full UI.
Silent installations also support installation properties (a property corresponds, for example, to the checked/unchecked state of a checkbox in the UI, a user password, etc.). With properties, the command line to install the MSI package would look like this:
msiexec /i path-to-package.msi [PROPERTY_1=VALUE_1 ... PROPERTY_N=VALUE_N] /qn
The MSI installer package requires property names to be all capitals and to contain only English letters. By convention, for a boolean property, an empty value means "false" and a non-empty value means "true".
MariaDB installation supports the following properties:
INSTALLDIR (default: %ProgramFiles%\MariaDB \): Installation root
PORT (default: 3306): --port parameter for the server
ALLOWREMOTEROOTACCESS: Allow remote access for the root user
BUFFERPOOLSIZE (default: RAM/8): Buffer pool size for InnoDB
CLEANUPDATA (default: 1): Remove the data directory (uninstall only)
DATADIR (default: INSTALLDIR\data): Location of the data directory
DEFAULTUSER: Allow anonymous users
PASSWORD: Password of the root user
SERVICENAME: Name of the Windows service. A service is not created if this value is empty.
SKIPNETWORKING: Skip networking
STDCONFIG (default: 1): Corresponds to "optimize for transactions" in the GUI; default engine InnoDB, strict SQL mode
UTF8: If set, adds character-set-server=utf8 to the my.ini file
PAGESIZE (default: 16K): Page size for InnoDB
Feature is a Windows installer term for a unit of installation. Features can be selected and deselected in the UI in the feature tree in the "Custom Setup" dialog.
Silent installation supports adding features with the special property ADDLOCAL=Feature_1,...,Feature_N and removing features with REMOVE=Feature_1,...,Feature_N.
Features in the MariaDB installer:
DBInstance (installed by default): Install database instance
Client (installed by default): Command-line client programs
MYSQLSERVER (installed by default): Install server
SharedLibraries (installed by default): Install client shared library
DEVEL (installed by default): Install C/C++ header files and client libraries
HeidiSQL (installed by default): Installs HeidiSQL
All examples here require running as administrator (an elevated command prompt on Windows Vista and later).
Install default features, database instance as service, non-default datadir and port
msiexec /i path-to-package.msi SERVICENAME=MySQL DATADIR=C:\mariadb5.2\data PORT=3307 /qn
Install service, add debug symbols, do not add development components (client libraries and headers)
msiexec /i path-to-package.msi SERVICENAME=MySQL ADDLOCAL=DEBUGSYMBOLS REMOVE=DEVEL /qn
To uninstall silently, use the REMOVE=ALL property with msiexec:
msiexec /i path-to-package.msi REMOVE=ALL /qn
To keep the data directory during an uninstall, you will need to pass an additional parameter:
msiexec /i path-to-package.msi REMOVE=ALL CLEANUPDATA="" /qn
If you encounter a bug in the installer, the installer logs should be used for
diagnosis. Please attach verbose logs to the bug reports you create. To create a verbose
installer log, start the installer from the command line with the /l*v
switch, like so:
msiexec.exe /i path-to-package.msi /l*v path-to-logfile.txt
It is possible to install 32-bit and 64-bit packages on the same Windows x64 machine.
Apart from testing, an example where this feature can be useful is a development scenario, where users want to run a 64 bit server and develop both 32 and 64 bit client components. In this case the full 64 bit package can be installed, including a database instance plus development-related features (headers and libraries) from the 32 bit package.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Server is available for installation on macOS (formerly Mac OS X) via the Homebrew package manager.
MariaDB Server is available as a Homebrew "bottle", a pre-compiled package, which means you can install it without having to build from source yourself, saving time.
After installing Homebrew, MariaDB Server can be installed with this command:
brew install mariadb
After installation, start MariaDB Server:
mysql.server start
To auto-start MariaDB Server, use Homebrew's services functionality, which configures auto-start with the launchctl utility from launchd:
brew services start mariadb
After MariaDB Server is started, you can log in as your user:
mysql
Or log in as root:
sudo mysql -u root
First you may need to update your brew installation:
brew update
Then, to upgrade MariaDB Server:
brew upgrade mariadb
In addition to the "bottled" MariaDB Server package available from Homebrew, you can use Homebrew to build MariaDB from source. This is useful if you want to use a different version of the server or enable some different capabilities that are not included in the bottle package.
Two components not included in the bottle package are the CONNECT and OQGRAPH engines, because they have non-standard dependencies. To build MariaDB Server with these engines, you must first install boost and judy. Follow these steps to install the dependencies and build the server:
brew install boost judy
brew install mariadb --build-from-source
You can also use Homebrew to build and install a pre-release version of MariaDB Server. Use this command to build and install a "development" version of MariaDB Server:
brew install mariadb --devel
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Server does not currently provide a .pkg installer for macOS. For information about how to install MariaDB Server on macOS using Homebrew, see Installing MariaDB Server on macOS Using Homebrew.
This page is licensed: CC BY-SA / Gnu FDL
Users need to run mariadb-install-db.exe without parameters to create a data directory, e.g.:
C:\zip_unpack\directory> bin\mariadb-install-db.exe
Then you can start the server like this:
C:\zip_unpack\directory> bin\mariadbd.exe --console
If you would like to customize the server instance (data directory, install as a service, etc.), please refer to the mariadb-install-db.exe documentation.
This page is licensed: CC BY-SA / Gnu FDL
If you are looking to set up MariaDB Server, it is often easiest to use a repository. The MariaDB Foundation has a repository configuration tool and MariaDB Corporation provides two convenient shell scripts to configure access to their MariaDB Package Repositories:
mariadb_es_repo_setup, for MariaDB Enterprise Server
mariadb_repo_setup, for MariaDB Community Server
The repository setup script can be downloaded and verified in the following way:
Download the MariaDB Enterprise Server script:
curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
Verify the checksum of the script:
echo "${checksum} mariadb_es_repo_setup" | sha256sum -c -
Download the MariaDB Community Server script:
curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup
Verify the checksum of the script:
echo "${checksum} mariadb_repo_setup" | sha256sum -c -
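The verification pattern above can be exercised end to end with a local stand-in file. Note that demo_script.sh and its digest are placeholders for illustration: with the real setup script, you would compare against the checksum published by MariaDB (see the version tables at the end of this page), not one you compute yourself.

```shell
# Stand-in for the downloaded setup script (hypothetical file name).
printf '#!/bin/sh\necho hello\n' > demo_script.sh

# Compute the file's SHA-256 digest; for the real script this value
# would come from the published checksum tables instead.
checksum=$(sha256sum demo_script.sh | awk '{print $1}')

# sha256sum -c reads "digest  filename" lines from stdin and reports OK
# when the file on disk matches the digest.
echo "${checksum}  demo_script.sh" | sha256sum -c -
```

If the file has been tampered with or the download was corrupted, sha256sum reports FAILED and exits non-zero, which makes the check safe to use in scripts.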
For the script to work, the curl package needs to be installed on your system. Additionally, on Debian and Ubuntu, the apt-transport-https package needs to be installed. The script will check if these are installed and let you know before it attempts to create the repository configuration on your system.
They can be installed on your system as follows (the commands below are for RHEL/CentOS, Debian/Ubuntu, and SLES, respectively):
sudo dnf install curl
sudo apt update
sudo apt install curl apt-transport-https
sudo zypper install curl
After the script is downloaded, you need to run it with root user permissions. This is normally accomplished by using the sudo command:
Retrieve your customer downloads token:
Navigate to https://customers.mariadb.com/downloads/token/
Log in
Copy the Customer Download Token
Substitute your token for ${token} when running the mariadb_es_repo_setup script, below
Set the script to be executable:
chmod +x mariadb_es_repo_setup
Run the script:
sudo ./mariadb_es_repo_setup --token="${token}" --apply
Set the script to be executable:
chmod +x mariadb_repo_setup
Run the script:
sudo ./mariadb_repo_setup
The script will set up different repositories in a single repository configuration file. The primary two are the MariaDB Server Repository, and the MariaDB MaxScale Repository.
The default repositories set up by mariadb_es_repo_setup are:
MariaDB Enterprise Server Repository
A MariaDB Enterprise Server Debug Repository (Ubuntu only)
MariaDB Enterprise MaxScale Repository
MariaDB Tools Repository
The default repositories set up by mariadb_repo_setup are:
MariaDB Community Server Repository
MariaDB Community Server Debug Repository (Ubuntu only)
MariaDB MaxScale Repository
MariaDB Tools Repository
The MariaDB Repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup.
The binaries in MariaDB Corporation's MariaDB Repository are currently identical to the binaries in MariaDB Foundation's MariaDB Repository that is configured with the MariaDB Repository Configuration Tool.
By default, the mariadb_repo_setup script will configure your system to install from the 11.rolling repository, which contains the latest stable version of MariaDB Community Server.
The mariadb_es_repo_setup script will set up the current stable version of MariaDB Enterprise Server.
If you would like to stick to a specific release series, then you will need to either manually edit the repository configuration file to point to that specific version or series, or run the MariaDB Package Repository setup script again using the --mariadb-server-version option. For example, if you wanted to specifically use the 11.4 series you would use: --mariadb-server-version=11.4.
If you do not want to configure the MariaDB Repository on your system, for example if you are setting up a server just running MariaDB MaxScale, then you can use the --skip-server option to prevent the setup script from configuring the server repository.
The MariaDB MaxScale Repository contains software packages related to MariaDB MaxScale.
By default, the script will configure your system to install from the repository of the latest GA version of MariaDB MaxScale. When a new major GA release occurs, the repository will automatically switch to the new version. If instead you would like to stay on a particular version, you will need to manually edit the repository configuration file and change 'latest' to the version you want (e.g. '6.1'), or run the MariaDB Package Repository setup script again, specifying the particular version or series you want.
Older versions of the MariaDB Package Repository setup script would configure a specific MariaDB MaxScale series in the repository (i.e. 24.02), so if you used the script in the past to set up your repository and want MariaDB MaxScale to automatically use the latest GA version, then change 24.02 (or whatever version it is set to in the repository configuration) to latest, or download the current version of the setup script and re-run it to set up the repository again.
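The manual edit described above can be sketched with sed on a local copy of the repository configuration file. The repository line below is a simplified stand-in, not the exact format the setup script writes; on a real system the file would be /etc/apt/sources.list.d/mariadb.list (APT) or /etc/yum.repos.d/mariadb.repo (YUM/ZYpp), and editing it requires root.

```shell
# Create a local sample of a MaxScale repository line (simplified stand-in).
cat > mariadb.list <<'EOF'
deb https://dlm.mariadb.com/repo/maxscale/latest/apt focal main
EOF

# Pin MaxScale to a specific series by replacing "latest" with the version.
sed -i 's|maxscale/latest|maxscale/6.1|' mariadb.list
cat mariadb.list
```

After the edit, the next package-cache update (apt-get update, yum makecache, etc.) will track the pinned series instead of the rolling "latest" repository.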
The script can configure your system to install from the repository of an older version of MariaDB MaxScale if you use the --mariadb-maxscale-version option. For example, --mariadb-maxscale-version=25.01.
If you do not want to configure the MariaDB MaxScale Repository on your system, then you can use the --skip-maxscale option to prevent the setup script from configuring it.
The script supports Linux distributions that are officially supported by MariaDB Corporation's MariaDB TX subscription. However, a MariaDB TX subscription with MariaDB Corporation is not required to use the MariaDB Package Repository.
The distributions currently supported by the script include:
Red Hat Enterprise Linux (RHEL) 8, and 9
Debian 10 (Buster), 11 (Bullseye), 12 (Bookworm)
Ubuntu 20.04 LTS (Focal), 22.04 LTS (Jammy), and 24.04 LTS (Noble)
SUSE Linux Enterprise Server (SLES) 12 and 15
To install MariaDB on distributions not supported by the MariaDB Package Repository setup script, please consider using MariaDB Foundation's MariaDB Repository Configuration Tool. Some Linux distributions also include MariaDB in their own repositories.
To provide options to the script, you must tell bash to expect them by executing bash with the options -s --, for example:
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --help
--help: Display a usage message and exit
--mariadb-server-version=: Override the default MariaDB Server version. By default, the script will use '11.rolling'
--mariadb-maxscale-version=: Override the default MariaDB MaxScale version. By default, the script will use 'latest'
--os-type=: Override detection of OS type. Acceptable values include debian, ubuntu, rhel, and sles
--os-version=: Override detection of OS version. Acceptable values depend on the OS type you specify
--skip-key-import: Skip importing GPG signing keys
--skip-maxscale: Skip the 'MaxScale' repository
--skip-server: Skip the 'MariaDB Server' repository
--skip-tools: Skip the 'Tools' repository
--skip-verify: Skip verification of MariaDB Server versions. Use with caution, as this can lead to an invalid repository configuration file being created
--skip-check-installed: Skip tests for required prerequisites for this script
--skip-eol-check: Skip tests for versions being past their EOL date
--skip-os-eol-check: Skip tests for operating system versions being past their EOL date
--write-to-stdout: Write output to stdout instead of to the OS's repository configuration file. This will also skip importing GPG public keys and updating the package cache on platforms where that behavior exists
--mariadb-server-version
By default, the script will configure your system to install from the repository of the latest GA version of MariaDB. If a new major GA release occurs and you would like to upgrade to it, then you will need to either manually edit the repository configuration file to point to the new version, or run the MariaDB Package Repository setup script again.
The script can also configure your system to install from the repository of a different version of MariaDB if you use the --mariadb-server-version option.
The string mariadb- has to be prepended to the version number. For example, to configure your system to install from the repository of MariaDB 10.6, that would be:
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --mariadb-server-version="mariadb-10.6"
The following MariaDB versions are currently supported:
mariadb-10.5
mariadb-10.6
mariadb-10.11
mariadb-11.4
mariadb-11.7
mariadb-11.8
mariadb-11.rolling
mariadb-11.rc
If you want to pin the repository to a specific minor release, such as MariaDB 10.6.14, then you can also specify the minor release, for example mariadb-10.6.14. This may be helpful if you want to avoid upgrades. However, avoiding upgrades is not recommended, since minor maintenance releases may contain important bug fixes and fixes for security vulnerabilities.
--mariadb-maxscale-version
By default, the script will configure your system to install from the repository of the latest GA version of MariaDB MaxScale.
If you would like to pin the repository to a specific version of MariaDB MaxScale, then you will need to either manually edit the repository configuration file to point to the desired version, or use the --mariadb-maxscale-version option.
For example, to configure your system to install from the repository of MariaDB MaxScale 6.1, that would be:
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --mariadb-maxscale-version="6.1"
The following MariaDB MaxScale versions are currently supported:
MaxScale 1.4
MaxScale 2.0
MaxScale 2.1
MaxScale 2.2
MaxScale 2.3
MaxScale 2.4
MaxScale 2.5
MaxScale 6.1
MaxScale 6.2
The special identifiers latest (for the latest GA release) and beta (for the latest beta release) are also supported. By default, the mariadb_repo_setup script uses latest as the version.
--os-type and --os-version
If you want to run this script on an unsupported OS that you believe to be package-compatible with an OS that is supported, then you can use the --os-type and --os-version options to override the script's OS detection. If you use either option, then you must use both options.
The supported values for --os-type are:
rhel
debian
ubuntu
sles
If you use a non-supported value, then the script will fail, just as it would fail if you ran the script on an unsupported OS.
The supported values for --os-version are entirely dependent on the OS type.
For Red Hat Enterprise Linux (RHEL) and CentOS, 7 and 8 are valid options.
For Debian and Ubuntu, the version must be specified as the codename of the specific release. For example, Debian 9 must be specified as stretch, and Ubuntu 18.04 must be specified as bionic.
These options can be useful if your distribution is a fork of another distribution. As an example, Linux Mint 18.1 is based on and is fully compatible with Ubuntu 16.04 LTS (Xenial). Therefore, if you are using Linux Mint 18.1, you can configure your system to install from the repository of Ubuntu 16.04 LTS (Xenial) by specifying --os-type=ubuntu and --os-version=xenial to the MariaDB Package Repository setup script.
For example, to manually set the --os-type and --os-version to RHEL 8 you could do:
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --os-type=rhel --os-version=8
--write-to-stdout
The --write-to-stdout option will prevent the script from modifying anything on the system. The repository configuration will not be written to the repository configuration file. Instead, it will be printed to standard output. That allows the configuration to be reviewed, redirected elsewhere, consumed by another script, or used in some other way.
The --write-to-stdout option automatically enables --skip-key-import.
For example:
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --write-to-stdout
On Red Hat Enterprise Linux (RHEL) and CentOS, the MariaDB Package Repository setup script performs the following tasks:
Creates a repository configuration file at /etc/yum.repos.d/mariadb.repo.
Imports the GPG public key used to verify the signature of MariaDB software packages with rpm --import from downloads.mariadb.com.
On Debian and Ubuntu, the MariaDB Package Repository setup script performs the following tasks:
Creates a repository configuration file at /etc/apt/sources.list.d/mariadb.list.
Creates a package preferences file at /etc/apt/preferences.d/mariadb-enterprise.pref, which gives packages from MariaDB repositories a higher priority than packages from OS and other repositories, which can help avoid conflicts. It looks like the following:
Package: *
Pin: origin downloads.mariadb.com
Pin-Priority: 1000
Imports the GPG public key used to verify the signature of MariaDB software packages with apt-key from the keyserver.ubuntu.com key server.
Updates the package cache with package definitions from the MariaDB Package Repository with apt-get update.
On SUSE Linux Enterprise Server (SLES), the MariaDB Package Repository setup script performs the following tasks:
Creates a repository configuration file at /etc/zypp/repos.d/mariadb.repo.
Imports the GPG public key used to verify the signature of MariaDB software packages with rpm --import from downloads.mariadb.com.
After setting up the MariaDB Package Repository, you can install the software packages in the supported repositories.
To install MariaDB on Red Hat Enterprise Linux (RHEL) and CentOS, see the instructions at Installing MariaDB Packages with YUM. For example:
sudo yum install MariaDB-server MariaDB-client MariaDB-backup
To install MariaDB MaxScale on Red Hat Enterprise Linux (RHEL) and CentOS, see the instructions at MariaDB MaxScale Installation Guide. For example:
sudo yum install maxscale
To install MariaDB on Debian and Ubuntu, see the instructions at Installing MariaDB Packages with APT. For example:
sudo apt-get install mariadb-server mariadb-client mariadb-backup
To install MariaDB MaxScale on Debian and Ubuntu, see the instructions at MariaDB MaxScale Installation Guide. For example:
sudo apt-get install maxscale
To install MariaDB on SUSE Linux Enterprise Server (SLES), see the instructions at Installing MariaDB Packages with ZYpp. For example:
sudo zypper install MariaDB-server MariaDB-client MariaDB-backup
To install MariaDB MaxScale on SUSE Linux Enterprise Server (SLES), see the instructions at MariaDB MaxScale Installation Guide. For example:
sudo zypper install maxscale
mariadb_es_repo_setup Versions
Version     sha256sum
2025-06-04  4d483b4df193831a0101d3dfa7fb3e17411dda7fc06c31be4f9e089c325403c0
2025-01-16  99ea6c55dbf32bfc42cdcd05c892aebc5e51b06f4c72ec209031639d6e7db9fe
2025-01-07  b98c6436e01ff33d7e88513edd7b77a965c4500d6d52ee3f106a198a558927af
2024-11-19  97e5ef25b4c4a4bd70b30da46b1eae0b57db2f755ef820a28d254e902ab5a879
2024-11-13  0c181ada4e7a4cd1d7688435c478893502675b880be2b918af7d998e239eb325
2024-09-20  c12da6a9baa57eab7fa685aa24bf76e6929a8c67f4cd244835520c0181007753
2024-09-09  733f247c626d965304b678b62a4b86eb4bb8bf956f98a241b6578dedc6ca4020
2024-06-12  b96fcd684a84bbe1080b6276f424537fc9d9c11ebe243ad8b9a45dd459f6ee4f
2023-07-27  f8eb9c1b59ccfd979d27e39798d2f2a98447dd29e2149ce92bf606aab4493ad9
2023-03-13  8dfef0ec98eb03a4455df07b33107a6d4601425c9df0ab5749b8f10bf3abdcbb
2022-10-26  3f4a9d1c507a846a598e95d6223871aade69a9955276455324e7cc5f54a87021
2022-09-12  713a8f78ea7bab3eccfb46dc14e61cd54c5cf5a08acb5c320ef5370d375e48bd
2022-06-14  cfcd35671125d657a212d92b93be7b1f4ad2fda58dfa8b5ab4b601bf3afa4eae
2022-03-11  53efddb84ea12efa7d521499a7474065bd4a60c721492d0e72b4336192f4033f
2021-12-13  5feb2aac767c512cc9e2af674d1aef42df0b775ba2968fffa8700eb42702bd44
2021-10-13  4f266ff758fe15eeb9b8b448a003eb53e93f3064baf1acb789dd39de4f534b1d
2021-09-14  b741361ea3a0a9fcaa30888a63ff3a8a4021882f126cf4ef26cf616493a29315
2021-08-26  a49347a4e36f99c5b248403ed9fb9b33a2f07f5e24605a694b1b1e24d7199f28
2021-06-29  99e768b24ae430b37dec7cb69cdd625396630dba18f5e1588ee24d3d8bb97064
2021-06-14  ec08f8ede524f568b3766795ad8ca1a0d0ac4db355a18c3d85681d7f9c0f8c09
2021-05-04  bf67a231c477fba0060996a83b197c29617b6193e1167f6f062216ae13c716c7
2021-03-15  99c7f4a3473a397d824d5f591274c2a4f5ebf6dc292eea154800bbaca04ddc7e
2021-02-12  c78db828709d94876406a0ea346f13fbc38e73996795903f40e3c21385857dd4
2020-12-16  c01fa97aed71ca0cd37cba7036ff80ab40efed4cc261c890aa2aa11cd8ab4e2f
2020-12-15  e42f1f16f2c78a3de0e73dcc2a9081e2f771b3161f4f4ceecb13ea788d84673b
2020-12-14  4aaf495606633a47c55ea602829e67e702aec0a5c6ff6b1af90709c19ee9f322
2020-10-07  93fa0df3d6491a791f5d699158dcfe3e6ce20c45ddc2f534ed2f5eac6468ff0a
2020-09-08  eeebe9e08dffb8a4e820cc0f673afe437621060129169ea3db0790eb649dbe9b
2020-07-16  957bc29576e8fd320fa18e35fa49b5733f3c8eeb4ca06792fb1f05e089c810ff
mariadb_repo_setup Versions
Version     sha256sum
2025-02-13  c4a0f3dade02c51a6a28ca3609a13d7a0f8910cccbb90935a2f218454d3a914a
2024-11-14  ceaa5bd124c4d10a892c384e201bb6e0910d370ebce235306d2e4b860ed36560
2024-08-14  6083ef1974d11f49d42ae668fb9d513f7dc2c6276ffa47caed488c4b47268593
2024-05-30  26e5bf36846003c4fe455713777a4e4a613da0df3b7f74b6dad1cb901f324a84
2024-02-16  30d2a05509d1c129dd7dd8430507e6a7729a4854ea10c9dcf6be88964f3fdc25
2023-11-21  2d7291993f1b71b5dc84cc1d23a65a5e01e783aa765c2bf5ff4ab62814bb5da1
2023-08-21  935944a2ab2b2a48a47f68711b43ad2d698c97f1c3a7d074b34058060c2ad21b
2023-08-14  f5ba8677ad888cf1562df647d3ee843c8c1529ed63a896bede79d01b2ecc3c1d
2023-06-09  3a562a8861fc6362229314772c33c289d9096bafb0865ba4ea108847b78768d2
2023-02-16  ad125f01bada12a1ba2f9986a21c59d2cccbe8d584e7f55079ecbeb7f43a4da4
2022-11-17  367a80b01083c34899958cddc62525104a3de6069161d309039e84048d89ee98b
2022-08-22  733cf126b03f73050e242102592658913d10829a5bf056ab77e7f864b3f8de1f
2022-08-15  f99e1d560bd72a3a23f64eaede8982d5494407cafa8f995de45fb9a7274ebc5c
2022-06-14  d4e4635eeb79b0e96483bd70703209c63da55a236eadd7397f769ee434d92ca8
2022-02-08  b9e90cde27affc2a44f9fc60e302ccfcacf71f4ae02071f30d570e6048c28597
2022-01-18  c330d2755e18e48c3bba300a2898b0fc8ad2d3326d50b64e02fe65c67b454599
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Corporation provides package tarballs for some MariaDB database products.
Package tarballs provide multiple benefits:
Package tarballs are compressed tar archives that contain software packages.
Software packages can be installed using the operating system's package manager without relying on a remote repository.
RPM (.rpm) files are distributed for CentOS, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES).
DEB (.deb) files are distributed for Debian and Ubuntu.
If you want to deploy MariaDB database products without using a package tarball, alternative deployment methods are available. Available deployment methods are component-specific.
MariaDB database products can be deployed with package tarballs to support use cases, such as:
Transfer the package tarball to an air-gapped network for installation without an internet connection
Install software using a package manager without configuring a package repository
Automatically install missing dependencies using a package manager
The following MariaDB database products can be deployed using package tarballs:
MariaDB Community Server 10.5
MariaDB Community Server 10.6
MariaDB Enterprise Server 10.5
MariaDB Enterprise Server 10.6
MariaDB Enterprise Server 11.4
MariaDB MaxScale 22.08
MariaDB Corporation provides package tarballs (.debs.tar, .rpms.tar) to support customers who leverage in-house package repositories to distribute software to their servers. Secure any such repository to prevent outside access.
MariaDB Corporation provides multiple interfaces to download package tarballs.
Steps to download a package tarball:
Go to the MariaDB Downloads page
Complete customer login
Select the desired version and operating system, then click the Download button
Package tarballs can be downloaded using command-line tools or automation from the MariaDB Download interface with the Customer Download Token.
For additional information, see "Download Binary Files".
Once downloaded and extracted, you can:
Install .rpm packages (RHEL, CentOS, and SLES) using rpm -i
Install .deb packages (Debian, Ubuntu) using dpkg -i
Install from the simple package repositories included in the tarball. Missing dependencies will be resolved when using the apt, yum, or zypper package manager. See the README file enclosed in the package tarball for more information.
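The extract-then-install flow can be sketched as follows. The tarball and package names below are stand-ins created locally so the extract step can be shown end to end; a real download would look like mariadb-enterprise-server-*.debs.tar, and the actual install commands require root, so they are shown commented out.

```shell
# Create a stand-in package tarball (hypothetical names, for illustration).
mkdir -p pkgs && touch pkgs/mariadb-server.deb pkgs/mariadb-client.deb
tar -cf demo.debs.tar pkgs

# Extract the tarball and inspect its contents.
tar -xf demo.debs.tar
ls pkgs

# On a real Debian/Ubuntu system, install the packages as root:
#   sudo dpkg -i pkgs/*.deb
#   sudo apt-get -f install   # resolve any missing dependencies
# On RHEL/CentOS/SLES, the equivalent would be: sudo rpm -i pkgs/*.rpm
```

Using the simple repositories included in a real tarball (per its README) instead of raw dpkg/rpm lets the package manager resolve dependencies automatically.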
Test packages before placement in an internal package repository for distribution to your servers. Secure this repository from outside access.
Installation places the software on the system; the software must then be configured before the database server is ready for use.
This page is: Copyright © 2025 MariaDB. All rights reserved.
MariaDB Corporation provides package repositories, including the MariaDB Enterprise Repository and the MariaDB Community Repository, that can be used to install MariaDB products using the operating system's package manager. Local mirrors of the package repositories can be used for local deployments.
Local package repository mirrors provide multiple benefits:
MariaDB Corporation's official package repositories are the source for the local mirror.
Tools provided by the operating system are used to create and maintain the local mirror.
After a local mirror is created, the mirror can be used just like the MariaDB repositories to install MariaDB products using the operating system's package manager.
If you want to deploy MariaDB database products without using a local package repository mirror, alternative deployment methods are available. Available deployment methods are component-specific.
MariaDB database products can be deployed with local package repository mirrors to support use cases, such as:
Install from the mirror on an air-gapped network that is not connected to the internet
Remove packages from mirror for versions that are not used in the local environment
Add packages to mirror for tools and clients that are used in the local environment
Automatically install missing dependencies using a package manager
The following MariaDB database products can be deployed using package repositories:
MariaDB ColumnStore 5 (included with MariaDB Community Server 10.5)
MariaDB ColumnStore 6 (included with MariaDB Community Server 10.6)
MariaDB Community Server 10.2
MariaDB Community Server 10.3
MariaDB Community Server 10.4
MariaDB Community Server 10.5 (excluding ColumnStore 5)
MariaDB Community Server 10.6 (excluding ColumnStore 6)
MariaDB Enterprise ColumnStore 5 (included with MariaDB Enterprise Server 10.5)
MariaDB Enterprise ColumnStore 6 (included with MariaDB Enterprise Server 10.6)
MariaDB Enterprise Server 10.2
MariaDB Enterprise Server 10.3
MariaDB Enterprise Server 10.4
MariaDB Enterprise Server 10.5
MariaDB Enterprise Server 10.6
MariaDB Enterprise Server 11.4
MariaDB MaxScale 2.4
MariaDB MaxScale 2.5
MariaDB MaxScale 6
MariaDB MaxScale 22.08
The package manager depends on the operating system:
CentOS 7: YUM
Debian 9: APT
Debian 10: APT
Debian 11: APT
Red Hat Enterprise Linux 7 (RHEL 7): YUM
Red Hat Enterprise Linux 8 (RHEL 8): YUM
Rocky Linux 8: YUM
SUSE Linux Enterprise Server 12 (SLES 12): ZYpp
SUSE Linux Enterprise Server 15 (SLES 15): ZYpp
Ubuntu 18.04 LTS (Bionic Beaver): APT
Ubuntu 20.04 LTS (Focal Fossa): APT
Creating a local mirror of the MariaDB Enterprise Repository or the MariaDB Community Repository enables you to distribute MariaDB products to your servers from a local repository you support. Secure any such repository mirror to prevent outside access.
Set up a repository mirroring tool, for example:
For YUM: reposync
For APT: debmirror
Secure the repository mirror to prevent outside access.
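As a sketch of the YUM case, a sync script on the mirror host might look like the following. The repo id "mariadb" and the mirror directory are assumptions, and reposync flag names differ between the yum-utils and dnf versions, so check reposync --help on your system:

```shell
#!/bin/sh
# Hypothetical repo id and target directory -- adjust to your setup.
# The host is assumed to already have the MariaDB repository configured
# in /etc/yum.repos.d/.
REPO_ID="mariadb"
MIRROR_DIR="/srv/mirrors/mariadb"

# Build the reposync invocation. --download-metadata keeps repodata so that
# clients can point their package manager directly at the mirror.
SYNC_CMD="reposync --repoid=$REPO_ID --download-path=$MIRROR_DIR --download-metadata"

# Printed here instead of executed, since running the sync requires the
# repository to be configured and network access to it.
echo "$SYNC_CMD"
```

The resulting directory can then be served over HTTP and referenced from a local .repo file on the target servers.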
Automate MariaDB Server deployment and administration. This section covers tools and practices for streamlined installation, configuration, and ongoing management using binary packages.
This page compares the automation systems that are covered by this section of the MariaDB documentation. More information about these systems is presented in the relevant pages, and more systems may be added in the future.
Different automation systems provide different ways to describe our infrastructure. Understanding how they work is the first step in evaluating them and choosing one for our organization.
Ansible code consists of the following components:
An inventory determines which hosts Ansible should be able to deploy. Each host may belong to one or more groups. Groups may have children, forming a hierarchy. This is useful because it allows us to deploy on a group, or to assign variables to a group.
A role describes the state that a host, or group of hosts, should reach after a deploy.
A play associates hosts or groups to their roles. Each host or group can have more than one role.
A role consists of a list of tasks. Despite its name, a task is not necessarily something to do, but something that must exist in a certain state.
Tasks can use variables. They can affect how a task is executed (for example a variable could be a file name), or even whether a task is executed or not. Variables exist at role, group or host level. Variables can also be passed by the user when a play is applied.
Playbooks are the code that is used to define tasks and variables.
Facts are data that Ansible retrieves from remote hosts before deploying. This is a very important step, because facts may determine which tasks are executed or how they are executed. Facts include, for example, the operating system family or its version. A playbook sees facts as pre-set variables.
Modules implement actions that tasks can use. Action examples are file (to declare that files and directories must exist) or mysql_variables (to declare MySQL/MariaDB variables that need to be set).
See Ansible Overview - concepts for more details and an example.
Puppet code consists of the following components:
An inventory file defines a set of groups and their targets (the members of a group). Plugins can be used to retrieve groups and targets dynamically, so they are equivalent to Ansible dynamic inventories.
A manifest is a file that describes a configuration.
A resource is a component that should run on a server. For example, "file" and "service" are built-in resource types.
An attribute relates to a resource and affects the way it is applied. For example, a resource of type "file" can have attributes like "owner" and "mode".
A class groups resources and variables, describing a logical part of server configuration. A class can be associated to several servers. A class is part of a manifest.
A module is a set of manifests and describes an infrastructure or a part of it.
Classes can have typed parameters that affect how they are applied.
Properties are variables that are read from the remote server, and cannot be arbitrarily assigned.
Facts are pre-set variables collected by Puppet before applying or compiling a manifest.
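To make these concepts concrete, here is a minimal hypothetical manifest; the class, file, and service names are illustrative, not taken from a published module:

```puppet
# A class grouping two resources, with a typed parameter.
class mariadb_config (
  String $cnf_owner = 'mysql',
) {
  # A "file" resource with "owner" and "mode" attributes.
  file { '/etc/my.cnf.d/server.cnf':
    ensure => file,
    owner  => $cnf_owner,
    mode   => '0644',
  }
  # A "service" resource declaring that MariaDB must be running.
  service { 'mariadb':
    ensure => running,
    enable => true,
  }
}
```

Applying the class on several servers yields the same declared state on each of them.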
The architecture of the various systems is different. Their architectures determine how a deploy physically works, and what is needed to be able to deploy.
Ansible architecture is simple. Ansible can run from any host, and can apply its playbooks on remote hosts. To do this, it runs commands via SSH. In practice, in most cases the commands are run as superuser via sudo, though this is not always necessary.
Inventories can be dynamic. In this case, when we apply a playbook Ansible connects to remote services to discover hosts.
Ansible playbooks are applied via the ansible-playbook binary. Changes to playbooks are only applied when we perform this operation.
To recap, Ansible does not need to be installed on the servers it administers. It needs SSH access, and normally its user needs to be able to run sudo. It is also possible to configure a dynamic inventory, and a remote service to be used for this purpose.
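As an illustration, a minimal static inventory in INI format might look as follows; the host and group names are hypothetical:

```ini
[db_main]
mariadb-01
mariadb-02

[db_analytics]
mariadb-replica-01

; A parent group whose children are the two groups above; variables
; assigned to "db" apply to the members of both children.
[db:children]
db_main
db_analytics
```

A playbook can then be applied to "db_main", "db_analytics", or the whole "db" hierarchy.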
Puppet supports two types of architecture: agent-master or standalone. The agent-master architecture is recommended by Puppet Labs, and it is the most popular among Puppet users. For this reason, those who prefer a standalone architecture tend to prefer Ansible.
When this architecture is chosen, manifests are sent to the Puppet master. There can be more than one master, for high availability reasons. All target hosts run a Puppet agent. Normally this is a service that automatically starts at system boot. The agent contacts a master at a given interval. It sends its facts, and the master uses them to compile a catalog from the manifests. A catalog is a description of what exactly an individual server should run. The agent receives the catalog and checks if there are differences between its current configuration and the catalog. If differences are found, the agent applies the relevant parts of the catalog.
An optional component is PuppetDB. This is a central place where some data are stored, including manifests, retrieved facts and logs. PuppetDB is based on PostgreSQL and there are no plans to support MariaDB or other DBMSs.
If a manual change is made to a remote server, it will likely be overwritten the next time the Puppet agent runs. To avoid this, the Puppet agent service can be stopped.
As mentioned, this architecture is not recommended by Puppet Labs nor popular amongst Puppet users. It is similar to Ansible architecture.
Users can apply manifests from any host with Puppet installed. This could be their laptop but, in order to emulate the behavior of an agent-master architecture, Puppet normally runs on a dedicated node as a cron job. The Puppet apply application retrieves facts from the remote hosts, compiles a catalog for each host, checks which parts of it need to be applied, and applies them remotely.
If a manual change is made to a remote server, it will be overwritten the next time Puppet apply runs. To avoid this, comment out any cron job running Puppet apply, or comment out the target server in the inventory.
As mentioned, Puppet supports plugins to retrieve the inventory dynamically from remote services. In an agent-master architecture, one has to make sure that each target host has access to these services. In a standalone architecture, one has to make sure that the hosts running Puppet apply have access to these services.
Often our automation repositories need to contain secrets, like MariaDB user passwords or private keys for SSH authentication.
Both Ansible and Puppet support integration with secret stores, like Hashicorp Vault. For Puppet integration, see Integrations with secret stores.
In the simplest case, Ansible allows encrypting secrets in playbooks and decrypting them during execution using ansible-vault. This requires minimal effort to handle secrets. However, it is not the most secure way to store them: the passwords that disclose the secrets need to be shared with the users who have the right to use them, and brute force attacks are possible.
Automation software communities are very important, because they make available a wide variety of modules to handle specific software.
Ansible is open source, released under the terms of the GNU GPL. It is produced by Red Hat, which has a page about Red Hat Ansible Automation Platform Partners, who can provide support and consulting.
Ansible Galaxy is a big repository of Ansible roles produced by both the vendor and the community. Ansible comes with ansible-galaxy, a tool that can be used to create roles and upload them to Ansible Galaxy.
At the time of this writing, Ansible does not have official MariaDB-specific modules. The official MySQL modules can be used; however, be careful not to use features that only apply to MySQL. There are several community-maintained MariaDB roles.
Puppet is open source, released under the GNU GPL. It is produced by a company of the same name. The page Puppet Partners lists partners that can provide support and consulting about Puppet.
Puppet Forge is a big repository of modules produced by the vendor and by the community, as well as how-to guides.
Currently Puppet has many MariaDB modules.
For more information about the systems mentioned in this page, from a MariaDB user's perspective:
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB has an event scheduler that can be used to automate tasks, making them run at regular intervals of time. This page is about using events for automation. For more information about events themselves, and how to work with them, see event scheduler.
Events can be compared to Unix cron jobs or Windows scheduled tasks. MariaDB events have at least the following benefits compared to those tools:
Events are system-independent. The same code can run on any system.
Events are written in procedural SQL. There is no need to install other languages or libraries.
If you use user-defined functions, you can still take advantage of them in your events.
Events run in MariaDB. An implication, for example, is that the results of queries remain in MariaDB itself and are not sent to a client. This means that network glitches don't affect events, there is no overhead due to data roundtrip, and therefore locks are held for a shorter time.
Some drawbacks of using events are the following:
Events can only perform tasks that can be developed in SQL. So, for example, it is not possible to send alerts. Access to files or remote databases is limited.
The event scheduler runs as a single thread. Events that are scheduled to run while another event is running will wait until the other event has finished, so there is no guarantee that an event will run at exactly its scheduled time. This should not be a problem as long as events are short-lived.
For more events limitations, see Event Limitations.
In many cases you may prefer to develop scripts in an external programming language. However, you should know that simple tasks consisting of a few queries can easily be implemented as events.
When using events to automate tasks, there are good practices one may want to follow.
Move your SQL code into a stored procedure. All the event does is call the stored procedure. Several events may call the same stored procedure, perhaps with different parameters, and the procedure can also be called manually if necessary. This avoids code duplication, and it separates the logic from the schedule, making it possible to change an event without any risk of changing the logic, and the other way around.
Just like cron jobs, events should log whether they succeed or fail. Logging debug messages may also be useful for non-trivial events. This information can be logged into a dedicated table, whose contents can be watched by a monitoring tool like Grafana. This makes it possible to visualize the status of events in a dashboard, and to send alerts in case of a failure.
Some examples of tasks that could easily be automated with events:
Copying data from a remote table to a local table by night, using the CONNECT storage engine. This can be a good idea if many rows need to be copied, because data won't be sent to an external client.
Periodically delete historical data. For example, rows that are older than 5 years. Nothing prevents us from doing this with an external script, but probably this wouldn't add any value.
Periodically delete invalid rows. In an e-commerce, they could be abandoned carts. In a messaging system, they could be messages to users that don't exist anymore.
Add a new partition to a table and drop the oldest one (partition rotation).
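The historical-data example can be sketched following the good practices above: the logic lives in a procedure, the event only carries the schedule, and the outcome is logged to a table. The table, column, and procedure names here are hypothetical:

```sql
-- Log table that a monitoring tool (e.g. Grafana) can watch.
CREATE TABLE event_log (
  ts DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  event_name VARCHAR(64) NOT NULL,
  message VARCHAR(255) NOT NULL
);

-- The logic is in a procedure, so it can also be called manually.
DELIMITER //
CREATE PROCEDURE purge_history()
BEGIN
  DELETE FROM orders_history WHERE created_at < NOW() - INTERVAL 5 YEAR;
  -- Capture the affected-row count before running another statement.
  SET @deleted_rows = ROW_COUNT();
  INSERT INTO event_log (event_name, message)
    VALUES ('purge_history', CONCAT('deleted ', @deleted_rows, ' rows'));
END //
DELIMITER ;

-- The event only contains the schedule.
CREATE EVENT purge_history_nightly
  ON SCHEDULE EVERY 1 DAY
  STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 3 HOUR
  DO CALL purge_history();
```

Remember that the event scheduler must be enabled (event_scheduler=ON) for the event to run.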
The MariaDB Foundation maintains a Downloads REST API. See the Downloads API documentation to find out all the tasks that you can accomplish with this API. Generally speaking, it provides information about MariaDB products and available versions, which makes it easy to automate upgrades of MariaDB and related products.
The Downloads API exposes HTTPS endpoints that return information in JSON format. HTTP and JSON are extremely common standards that can be easily used with any programming language. All the information provided by the API is public, so no authentication is required.
Linux shells are great for writing simple scripts. They are compatible with each other to some extent, so simple scripts can be run on almost any Unix/Linux system. In the following examples we'll use Bash.
On Linux, some programs you'll generally need to work with any REST API are:
curl, to call HTTP URLs and get their output.
jq, to extract or transform information from a JSON document.
A trivial use case is to write a script that checks the list of MariaDB GA major versions and, when something changes, sends us an email, so we can test the newest GA version and possibly install it.
The script in this example will be extremely simple. We'll do it this way:
Retrieve the JSON object describing all MariaDB versions.
For each element of the array, only show the release_id and release_status properties, and concatenate them.
Apply a filter, so we only select the rows containing 'stable' but not 'old' (so we exclude 'Old Stable').
From the remaining rows, only show the first column (the version number).
If the results we obtained differ from the contents of the previously written file (see the last step), send an email.
Save the results into a file.
This is something that we can easily do with a Unix shell:
#!/bin/bash
current_ga_versions=$(
curl https://downloads.mariadb.org/rest-api/mariadb/ | \
jq -r '.major_releases[] | .release_id + " " + .release_status' | \
grep -i 'stable' | grep -vi 'old' | \
cut -d ' ' -f 1
)
# create file if it doesn't exist, then compare version lists
touch ga_versions
previous_ga_versions=$( cat ga_versions )
echo "$current_ga_versions" > ga_versions
if [ "$current_ga_versions" != "$previous_ga_versions" ];
then
mail -s 'NOTE: New MariaDB GA Versions' devops@example.com <<< 'There seems to be a new MariaDB GA version! Yay!'
fi
The only non-standard command here is jq. It is a great way to manipulate JSON documents, so if you don't know it you may want to take a look at jq documentation.
To use the API with Python, we need a module that is able to send HTTP requests and parse a JSON output. The requests module has both these features. It can be installed as follows:
pip install requests
The following script prints stable versions to the standard output:
#!/usr/bin/env python
import requests
response = requests.get('https://downloads.mariadb.org/rest-api/mariadb/').json()
for x in response['major_releases']:
if x['release_status'] == 'Stable':
print(x['release_id'])
requests.get() makes an HTTP call of type GET, and the response's .json() method returns a dictionary representing the previously obtained JSON document.
Vault is open source software for secret management provided by HashiCorp. It is designed to avoid sharing secrets of various types, like passwords and private keys. When building automation, Vault is a good solution to avoid storing secrets in plain text in a repository.
MariaDB and Vault may relate to each other in several ways:
MariaDB has a Hashicorp Key Management plugin, to manage and rotate encryption keys for data-at-rest encryption.
User passwords can be stored in Vault.
MariaDB (and MySQL) can be used as a secret engine, a component which stores, generates, or encrypts data.
MariaDB (and MySQL) can be used as a backend storage, providing durability for Vault data.
For information about how to install Vault, see Install Vault, as well as MySQL/MariaDB Database Secrets Engine.
Vault is used via an HTTP/HTTPS API.
Vault is identity-based. Users login and Vault sends them a token that is valid for a certain amount of time, or until certain conditions occur. Users with a valid token may request to obtain secrets for which they have proper permissions.
Vault encrypts the secrets it stores.
Vault can optionally audit changes to secrets and secrets requests by the users.
Vault is a server. This allows decoupling the secrets management logic from the clients, which only need to log in and keep a token until it expires.
The server can actually be a cluster of servers, to implement high availability.
The main Vault components are:
Storage Backend: This is where the secrets are stored. Vault only sends encrypted data to the storage backend.
HTTP API: This API is used by the clients, and provides access to the Vault server.
Barrier: Similarly to an actual barrier, it protects all inner Vault components. The HTTP API and the storage backend are outside of the barrier and could be accessed by anyone. All communications from and to these components have to pass through the barrier. The barrier verifies data and encrypts it. The barrier can have two states: sealed or unsealed. Data can only pass through when the barrier is unsealed. All the following components are located inside the barrier.
Auth Method: Handles login attempts from clients. When a login succeeds, the auth method returns a list of security policies to Vault core.
Token Store: Here the tokens generated as a result of a succeeded login are stored.
Secrets Engines: These components manage secrets. They can have different levels of complexity. Some of them simply expect to receive a key, and return the corresponding secret. Others may generate secrets, including one-time-passwords.
Audit Devices: These components log the requests received by Vault and the responses sent back to the clients. There may be multiple devices, in which case an Audit Broker sends the request or response to the proper device.
It is possible to start Vault in dev mode:
vault server -dev
Dev mode is useful for learning Vault, or running experiments on some particular features. It is extremely insecure, because dev mode is equivalent to starting Vault with several insecure options. This means that Vault should never run in production in dev mode. However, this also means that all the regular Vault features are available in dev mode.
Dev mode simplifies all operations. Actually, no configuration is necessary to get Vault up and running in dev mode. It makes it possible to communicate with the Vault API from the shell without any authentication. Data is stored in memory by default. Vault is unsealed by default, and if explicitly sealed, it can be unsealed using only one key.
For more details, see "Dev" Server Mode in Vault documentation.
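As a sketch of what talking to the HTTP API looks like in dev mode, the snippet below builds a request that reads a KV secret. The address and secret path are assumptions, and VAULT_TOKEN is the root token that dev mode prints at startup:

```shell
#!/bin/sh
# Dev-mode Vault listens on 127.0.0.1:8200 over plain HTTP by default.
VAULT_ADDR='http://127.0.0.1:8200'
# KV version 2 inserts "data/" between the mount point and the secret path.
SECRET_PATH='secret/data/mariadb/app1'

# The request Vault expects: the client token goes in the X-Vault-Token
# header. Printed rather than executed, since it needs a running server.
REQUEST="curl -sS -H \"X-Vault-Token: \$VAULT_TOKEN\" $VAULT_ADDR/v1/$SECRET_PATH"
echo "$REQUEST"
```

The response is a JSON document with the secret's key/value pairs under .data.data, which can be extracted with jq.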
Orchestrator is no longer actively maintained.
Orchestrator is a MySQL and MariaDB high availability and replication management tool. It is released by Shlomi Noach under the terms of the Apache License, version 2.0.
Orchestrator provides automation for MariaDB replication in the following ways:
It can be used to perform certain operations, like repairing broken replication or moving a replica from one master to another. These operations can be requested using CLI commands, or via the GUI provided with Orchestrator. The actual commands sent to MariaDB are automated by Orchestrator, and the user doesn't have to worry about the details.
Orchestrator can also automatically perform a failover in case a master crashes or is unreachable by its replicas. If that is the case, Orchestrator will promote one of the replicas to a master. The replica to promote is chosen based on several criteria, like the server versions, the binary log formats in use, and the datacenter locations.
Note that, if we don't want to use Orchestrator to automate operations, we can still use it as a dynamic inventory. Other tools can use it to obtain a list of existing MariaDB servers via its REST API or CLI commands.
Orchestrator has several big users, listed in the documentation Users page. It is also included in the PMM monitoring solution.
To install Orchestrator, see:
The install.md for a manual installation;
The links in README.md, to install Orchestrator using automation tools.
Currently, Orchestrator fully supports MariaDB GTID, replication, and semi-synchronous replication. While Orchestrator does not support Galera specific logic, it works with Galera clusters. For details, see Supported Topologies and Versions in Orchestrator documentation.
Orchestrator consists of a single executable called orchestrator. This is a process that periodically connects to the target servers. It will run SQL queries against target servers, so it needs a user with proper permissions. When the process is running, a GUI is available via a web browser, at the URL 'localhost:3000'. It also exposes a REST API (see Using the web API in the Orchestrator documentation).
Orchestrator expects to find a JSON configuration file called orchestrator.conf.json in /etc.
A database is used to store the configuration and the state of the target servers. By default, this is done using built-in SQLite. However, it is possible to use an external MariaDB or MySQL server instance.
If a cluster of Orchestrator instances is running, only one central database is used. One Orchestrator node is active, while the others are passive and are only used for failover. If the active node crashes or becomes unreachable, one of the other nodes becomes the active instance. The active_node table shows which node is active. Nodes communicate with each other using the Raft protocol.
As mentioned, Orchestrator can be used from the command-line. Here you can find some examples.
List clusters:
orchestrator -c clusters
Discover a specified instance and add it to the known topology:
orchestrator -c discover -i <host>:<port>
Forget about an instance:
orchestrator -c forget -i <host>:<port>
Move a replica to a different master:
orchestrator -c relocate -i <replica-host>:<replica-port> -d <new-master-host>:<new-master-port>
Move a replica up, so that it becomes a "sibling" of its master:
orchestrator -c move-up -i <replica-host>:<replica-port>
Move a replica down, so that it becomes a replica of its "sibling":
orchestrator -c move-below -i <replica-host>:<replica-port> -d <master-host>:<master-port>
Make a node read-only:
orchestrator -c set-read-only -i <host>:<port>
Make a node writeable:
orchestrator -c set-writeable -i <host>:<port>
The --debug and --stack options can be added to the above commands to make them more verbose.
The README.md file lists some related community projects, including modules to automate Orchestrator with Puppet and other technologies.
On GitHub you can also find links to projects that allow the use of automation software to deploy and manage Orchestrator.
MariaDB includes a powerful configuration system. This is enough when we need to deploy a single MariaDB instance, or a small number of instances. But many modern organisations have many database servers. Deploying and upgrading them manually could require too much time, and would be error-prone.
Several tools exist to deploy and manage several servers automatically. These tools operate at a higher level, and execute tasks like installing MariaDB, running queries, or generating new configuration files based on a template. Instead of upgrading servers manually, users can launch a command to upgrade a group of servers, and the automation software will run the necessary tasks.
Servers can be described in a code repository. This description can include MariaDB version, its configuration, users, backup jobs, and so on. This code is human-readable, and can serve as a documentation of which servers exist and how they are configured. The code is typically versioned in a repository, to allow collaborative development and track the changes that occurred over time. This is a paradigm called Infrastructure as Code.
Automation code is high-level and one usually doesn't care how operations are implemented. Their implementation is delegated to modules that handle specific components of the infrastructure. For example, a module could work equally well with the apt and yum package managers. Other modules can implement operations for a specific cloud vendor, so we declare that we want a snapshot to be taken, but we don't need to write the commands to make it happen. For special cases, it is of course possible to write Bash commands, or scripts in any language, and declare that they must be run.
Manual interventions on the servers will still be possible. This is useful, for example, to investigate performance problems. But it is important to leave the servers in the state that is described by the code.
This code is not something you write once and never touch again. It is periodically necessary to modify infrastructures to update some software, add new replicas, and so on. Once the base code is in place, making such changes is often trivial and potentially it can be done in minutes.
Once replication is in place, two important aspects to automate are load balancing and failover.
Proxies can implement load balancing, redirecting the queries they receive to different servers and trying to distribute the load equally. They can also check that MariaDB servers are running and in good health, thus avoiding sending queries to a server that is down or struggling.
However, this does not solve the problem with replication: if a primary server crashes, its replicas should point to another server. Usually this means that an existing replica is promoted to a primary. This kind of change is possible thanks to MariaDB GTID.
One can promote a replica to a primary by making a change to the existing automation code. This is typically simple and relatively quick for a human operator to do. But the operation takes time, and in the meantime the service could be down.
Automating failover minimises the time to recover. A way to do it is to use Orchestrator, a tool that can automatically promote a replica to a primary. The choice of the replica to promote is made in a smart way, taking into account things like the server versions and the binary log format.
General information and hints on how to automate MariaDB deployments and configuration with Ansible, an open source tool to automate deployment, configuration and operations
Ansible is a tool to automate servers configuration management. It is produced by Red Hat and it is open source software released under the terms of the GNU GPL.
It is entirely possible to use Ansible to automate MariaDB deployments and configuration. This page contains generic information for MariaDB users who want to learn, or evaluate, Ansible.
For information about how to install Ansible, see Installing Ansible in Ansible documentation.
Normally, Ansible can run from any computer that has access to the target hosts to be automated. It is not uncommon for all members of a team to have Ansible installed on their own laptops, and use it to deploy.
Red Hat offers a commercial version of Ansible called Ansible Tower. It consists of a REST API and a web-based interface that work as a hub that handles all normal Ansible operations.
An alternative is AWX. AWX is the open source upstream project from which many Ansible Tower features are originally developed. AWX is released under the terms of the Apache License 2.0. However, Red Hat does not recommend running AWX in production.
AWX development is fast. It has several features that may or may not end up in Ansible Tower. Ansible Tower is more focused on making AWX features more robust, providing a stable tool to automate production environments.
Ansible allows us to write playbooks that describe how our servers should be configured. Playbooks are lists of tasks.
Tasks are usually declarative. You don't explain how to do something, you declare what should be done.
Playbooks are idempotent. When you apply a playbook, tasks are only run if necessary.
Here is a task example:
- name: Install Perl
package:
name: perl
state: present
"Install Perl" is just a description that will appear on screen when the task is applied. Then we use the package module to declare that a package called "perl" should be installed. When we apply the playbook, if Perl is already installed nothing happens. Otherwise, Ansible installs it.
When we apply a playbook, the last information that appears on the screen is a recap like the following:
PLAY RECAP ***************************************************************************************************
mariadb-01 : ok=6 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
This means that six tasks completed successfully; of these, two made changes to the system, while the others found it already in the desired state.
As the above example shows, Ansible playbooks are written in YAML.
Modules (like package) can be written in any language, as long as they are able to process a JSON input and produce a JSON output. However, the Ansible community prefers to write them in Python, which is the language Ansible is written in.
A piece of Ansible code that can be applied to a server is called a playbook.
A task is the smallest brick of code in a playbook. The name is a bit misleading, though, because an Ansible task should not be seen as "something to do". Instead, it is a minimal description of a component of a server. In the example above, we can see a task.
A task uses a single module, which is an interface that Ansible uses to interact with a specific system component. In the example, the module is "package".
A task also has attributes, which describe what should be done with that module, and how. In the example above, "name" and "state" are both attributes. By convention, the state attribute exists for nearly every module (though there may be exceptions). Typically, it accepts at least the "present" and "absent" values, to indicate whether an object should exist or not.
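For instance, the same package module can be used with state: absent to declare that a package must not be present (the package name here is just an example):

```yaml
# Declaring that a package must NOT be installed.
- name: Remove telnet
  package:
    name: telnet
    state: absent
```

As with the earlier example, applying this task is idempotent: nothing happens if the package is already absent.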
Other important code concepts are:
An inventory determines which hosts Ansible should be able to deploy. Each host may belong to one or more groups. Groups may have children, forming a hierarchy. This is useful because it allows us to deploy on a group, or to assign variables to a group.
A role describes the state that a host, or group of hosts, should reach after a deploy.
A play associates hosts or groups to their roles. Each host or group can have more than one role.
A role is a playbook that describes how certain servers should be configured, based on the logical role they have in the infrastructure. Servers can have multiple roles, for example the same server could have both the "mariadb" and the "mydumper" role, meaning that they run MariaDB and they have mydumper installed (as shown later).
Tasks can use variables. They can affect how a task is executed (for example a variable could be a file name), or even whether a task is executed or not. Variables exist at role, group or host level. Variables can also be passed by the user when a play is applied.
Facts are data that Ansible retrieves from remote hosts before deploying. This is a very important step, because facts may determine which tasks are executed or how they are executed. Facts include, for example, the operating system family or its version. A playbook sees facts as pre-set variables.
Modules implement actions that tasks can use. Action examples are file (to declare that files and directories must exist) or mysql_variables (to declare MySQL/MariaDB variables that need to be set).
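As a sketch, a YAML inventory with a small group hierarchy could look like this (host and group names are hypothetical):

```yaml
# inventory.yml: two groups, both children of a "db" parent group.
# Variables assigned to "db" apply to all hosts in its child groups.
all:
  children:
    db:
      children:
        db-main:
          hosts:
            mariadb-main-01:
            mariadb-main-02:
        db-analytics:
          hosts:
            mariadb-replica-01:
      vars:
        mariadb_version: "10.6"
```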
Let's describe a hypothetical infrastructure to find out how these concepts can apply to MariaDB.
The inventory could define the following groups:
"db-main" for the cluster used by our website. All nodes belong to this group.
"db-analytics" for our replicas used by data analysts.
"dump" for one or more servers that take dumps from the replicas.
"proxysql" for one or more hosts that run ProxySQL.
Then we'll need the following roles:
"mariadb-node" for the nodes in "db-main". This role describes how to set up the nodes of a cluster using Galera.
"mariadb-replica" for the members of "db-analytics". It describes a running replica, and it includes the tasks that are necessary to provision the node if the data directory is empty when the playbook is applied. The hostname of the primary server is defined in a variable.
"mariadb". The aforementioned "mariadb-node" and "mariadb-replica" can be children of this group. They have many things in common (filesystem for the data directory, some basic MariaDB configuration, some installed tools...), so it could make sense to avoid duplication and describe the common traits in a super-role.
A "mariadb-backup" role to take backups with mariadb-backup, running jobs during the night. We can associate this role to the "db-main" group, or we could create a child group for servers that will take the backups.
"mariadb-dump" for the server that takes dumps with mariadb-dump. Note that we may decide to take dumps on a replica, so the same host may belong to "db-analytics" and "mariadb-dump".
"proxysql" for the namesake group.
Ansible architecture is extremely simple. Ansible can run on any host. To apply playbooks, it connects to the target hosts and runs system commands. By default the connection happens via ssh, though it is possible to develop connection plugins to use different methods. Applying playbooks locally without establishing a connection is also possible.
Modules can be written in any language, though Python is the most common choice in the Ansible community. Modules receive JSON "requests" and facts from Ansible core, they are supposed to run useful commands on a target host, and then they should return information in JSON. Their output informs Ansible whether something has changed on the remote server and if the operations succeeded.
Ansible is not centralized. It can run on any host, and it is common for a team to run it from several laptops. However, to simplify things and improve security, it may be desirable to run it from a dedicated host. Users will connect to that host, and apply Ansible playbooks.
Ansible Automation Platform YouTube channel
Further information about the concepts discussed in this page can be found in Ansible documentation:
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Ansible can be used to manage Docker container upgrades and configuration changes. Docker has native ways to do this, namely Dockerfiles and Docker Compose. But sometimes there are reasons to start basic containers from an image and then manage configuration with Ansible or similar software. See Benefits of Managing Docker Containers with Automation Software.
In this page we'll discuss how to use Ansible to manage Docker containers.
Ansible has modules to manage the Docker server, Docker containers, and Docker Compose. These modules are maintained by the community.
A dynamic inventory plugin for Docker exists. It retrieves the list of existing containers from Docker.
Docker modules and the Docker inventory plugin communicate with Docker using its API. The connection to the API can use TLS and supports key authenticity verification.
To communicate with the Docker API, Ansible needs a proper Python module installed on the Ansible node (docker or docker-py).
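As an illustration, a container can be declared with the docker_container module from the community.docker collection (the container name, image tag, and variable name are hypothetical):

```yaml
# Declare a running MariaDB container; Ansible starts it only if needed
- name: Run a MariaDB container
  community.docker.docker_container:
    name: mariadb-01
    image: mariadb:10.6
    env:
      MARIADB_ROOT_PASSWORD: "{{ mariadb_root_password }}"
    state: started
    restart_policy: unless-stopped
```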
Several roles exist to deploy Docker and configure it. They can be found in Ansible Galaxy.
Further information can be found in Ansible documentation.
docker_container module.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
If we manage several remote servers, running commands on them manually can be frustrating and time consuming. Ansible allows one to run commands on a whole group of servers.
This page shows some examples of ansible-playbook invocations. We'll see how to deploy roles or parts of them to remote servers. Then we'll see how to run commands on remote hosts, and possibly to get information from them. Make sure to read Ansible Overview first, to understand Ansible general concepts.
Let's start with the simplest example: we just want our local Ansible to ping remote servers to see if they are reachable. Here's how to do it:
ansible -i production-mariadb all -m ping
Before proceeding with more useful examples, let's discuss this syntax.
ansible is the executable we can call to run a command from remote servers.
-i production-mariadb means that the servers must be read from an inventory called production-mariadb.
all means that the command must be executed against all servers from the above inventory.
-m ping specifies that we want to run the ping module. This is not the ping Linux command: it tells us whether Ansible is able to connect to a remote server and run a simple command on it.
To run ping on a specific group or host, we can just replace "all" with a group name or host name from the inventory:
ansible -i production-mariadb main_cluster -m ping
The previous examples show how to run an Ansible module on remote servers. But it's also possible to run custom commands over SSH. Here's how:
ansible -i production-mariadb all -a 'echo $PATH'
This command shows the value of $PATH
on all servers in the inventory "production-mariadb".
We can also run commands as root by adding the -b (or --become) option:
# print a MariaDB variable
ansible -i production-mariadb all -b -a 'mysql -e "SHOW GLOBAL VARIABLES LIKE '\''innodb_buffer_pool_size'\'';"'
# reboot servers
ansible -i production-mariadb all -b -a 'reboot'
We saw how to run commands on remote hosts. Applying roles to remote hosts is not much harder; we just need to add some information. An example:
ansible-playbook -i production-mariadb production-mariadb.yml
Let's see what changed:
ansible-playbook is the executable file that we need to call to apply playbooks and roles.
production-mariadb.yml is the play that associates the servers listed in the inventory to their roles.
If we call ansible-playbook with no additional arguments, we will apply all applicable roles to all the servers mentioned in the play.
To only apply roles to certain servers, we can use the -l parameter to specify a group, an individual host, or a pattern:
# Apply only to hosts in the mariadb-main group
ansible-playbook -i production-mariadb -l mariadb-main production-mariadb.yml
# Apply to the mariadb-main-01 host
ansible-playbook -i production-mariadb -l mariadb-main-01 production-mariadb.yml
# Apply to multiple hosts whose name starts with "mariadb-main-"
ansible-playbook -i production-mariadb -l 'mariadb-main-*' production-mariadb.yml
We can also apply tasks from roles selectively. Tasks may optionally have tags, and each tag corresponds to an operation that we may want to run on our remote hosts. For example, a "mariadb" role could have the "timezone-update" tag, to update the contents of the timezone tables. To only apply the tasks with the "timezone-update" tag, we can use this command:
ansible-playbook -i production-mariadb --tag timezone-update production-mariadb.yml
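Inside the role, the tagged task could be sketched like this (command and tag name follow the example above):

```yaml
# This task runs during a normal deploy, and also when the play
# is applied with --tag timezone-update
- name: Reload the timezone tables
  tags: [ timezone-update ]
  shell: mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql --database=mysql
```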
Using tags is especially useful for database servers. While most of the technologies typically managed by Ansible are stateless (web servers, load balancers, etc.) database servers are not. We must pay special attention not to run tasks that could cause a database server outage, for example destroying its data directory or restarting the service when it is not necessary.
We should always test our playbooks and roles on test servers before applying them to production. However, if test servers and production servers are not exactly in the same state (which means, some facts may differ) it is still possible that applying roles will fail. If it fails in the initial stage, Ansible will not touch the remote hosts at all. But there are cases where Ansible could successfully apply some tasks, and fail to apply another task. After the first failure, ansible-playbook will show errors and exit. But this could leave a host in an inconsistent state.
Ansible has a check mode that is meant to greatly reduce the chances of a failure. When run in check mode, ansible-playbook will read the inventory, the play and roles; it will figure out which tasks need to be applied; then it will connect to target hosts, read facts, and value all the relevant variables. If all these steps succeed, it is unlikely that running ansible-playbook without check mode will fail.
To run ansible-playbook in check mode, just add the --check (or -C) parameter.
Further documentation can be found in the Ansible website:
ansible tool.
ansible-playbook tool.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
This page contains links to Ansible modules and roles that can be used to automate MariaDB deployment and configuration. The list is not meant to be exhaustive. Use it as a starting point, but then please do your own research.
At the time of writing, there are no MariaDB-specific modules in Ansible Galaxy, but the MySQL modules can be used. Trying to use a MySQL-specific feature may result in errors or unexpected behavior; however, the same applies when trying to use a feature not supported by the MySQL version in use.
Currently, the MySQL collection in Ansible Galaxy contains at least the following modules:
mysql_db: manages MySQL databases.
mysql_info: gathers information about a MySQL server.
mysql_query: runs SQL queries against MySQL.
mysql_replication: configures and operates asynchronous replication.
mysql_user: creates, modifies and deletes MySQL users.
mysql_variables: manages MySQL configuration.
Note that some modules only exist as shortcuts, and it is possible to use mysql_query instead. However, it is important to note that mysql_query is not idempotent: Ansible does not parse MySQL queries, so it cannot check whether a query actually needs to be run.
To install this collection locally:
ansible-galaxy collection install community.mysql
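For example, a task using mysql_user from this collection could be sketched as follows (user name, host pattern, and privileges are illustrative):

```yaml
# Declare a MariaDB application user; idempotent, unlike mysql_query
- name: Create the application user
  community.mysql.mysql_user:
    name: app
    host: '10.0.0.%'
    password: "{{ app_db_password }}"
    priv: 'appdb.*:SELECT,INSERT,UPDATE,DELETE'
    state: present
```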
MariaDB Corporation maintains a ColumnStore playbook on GitHub.
Let's see some other modules that are useful to manage MariaDB servers.
Modules like shell and command allow one to run system commands.
To deploy on Windows, win_shell and win_command can be used.
Among other things, it is possible to use one of these modules to run MariaDB queries:
- name: Make the server read-only
  # become root to log into MariaDB with the unix_socket plugin
  become: yes
  shell: $( which mysql ) -e "SET GLOBAL read_only = 1;"
The main disadvantage with these modules is that they are not idempotent, because they're meant to run arbitrary system commands that Ansible can't understand. They are still useful in a variety of cases:
To run queries, because mysql_query is also not idempotent.
In cases when other modules do not allow us to use the exact arguments we need to use, we can achieve our goals by writing shell commands ourselves.
To run custom scripts that implement non-trivial logic. Implementing complex logic in Ansible tasks is possible, but it can be tricky and inefficient.
To call command-line tools. There may be specific roles for some of the most common tools, but most of the time using them is an unnecessary complication.
An important part of configuration management is copying configuration files to remote servers.
The copy module allows us to copy files to target hosts. This is convenient for static files that we want to copy exactly as they are. An example task:
- name: Copy my.cnf
  copy:
    src: ./files/my.cnf.1
    dest: /etc/mysql/my.cnf
As you can see, the local name and the name on the remote host don't need to match. This is convenient, because it makes it easy to use different configuration files for different servers. By default, files to copy are located in a files subdirectory in the role.
However, typically the content of a configuration file should vary based on the target host, the group and various variables. To do this, we can use the template module, which compiles and copies templates written in Jinja.
A simple template task:
- name: Compile and copy my.cnf
  template:
    src: ./templates/my.cnf.j2
    dest: /etc/mysql/my.cnf
Again, the local and the remote names don't have to match. By default, Jinja templates are located in a templates subdirectory in the role, and by convention they have the .j2 extension. This is because Ansible uses Jinja version 2 for templating, at the time of writing.
A simple template example:
## WARNING: DO NOT EDIT THIS FILE MANUALLY !!
## IF YOU DO, THIS FILE WILL BE OVERWRITTEN BY ANSIBLE
[mysqld]
innodb_buffer_pool_size = {{ innodb_buffer_pool_size }}
{% if connect_work_size is defined %}
connect_work_size = {{ connect_work_size }}
{% endif %}
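The variables referenced by such a template would typically be defined at group or host level, for example (file path and values are illustrative):

```yaml
# group_vars/db-main.yml: variables compiled into my.cnf.j2
innodb_buffer_pool_size: 8G
connect_work_size: 67108864
```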
The following modules are also often used for database servers:
user, useful to create the system user and group that run MariaDB binary.
file can be used to make sure that MariaDB directories (like the data directory) exist and have proper permissions. It can also be used to upload static files.
service is useful after installing MariaDB as a service, to start it, restart it or stop it.
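A sketch combining these three modules; the user name, paths, and service name are the common defaults, but verify them for your distribution:

```yaml
- name: Create the mysql system user
  user:
    name: mysql
    system: yes
    shell: /bin/false

- name: Ensure the data directory exists with proper permissions
  file:
    path: /var/lib/mysql
    state: directory
    owner: mysql
    group: mysql
    mode: '0750'

- name: Make sure the MariaDB service is running and starts at boot
  service:
    name: mariadb
    state: started
    enabled: yes
```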
Specific roles exist for MariaDB in Ansible Galaxy. Using them is generally preferable, to avoid incompatibilities and to take advantage of MariaDB-specific features. However, using MySQL or Percona Server roles is also possible. This probably makes sense for users who also administer MySQL and Percona Server instances.
To find roles that suit you, check the Ansible Galaxy search page. Most roles are also available on GitHub.
You can also search roles using the ansible-galaxy tool:
ansible-galaxy search mariadb
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
This page refers to the operations described in Installing MariaDB .deb Files. Refer to that page for a complete list and explanation of the tasks that should be performed.
Here we discuss how to automate such tasks using Ansible. For example, here we show how to install a package or how to import a GPG key; but for an updated list of the necessary packages and for the keyserver to use, you should refer to Installing MariaDB .deb Files.
To add a repository:
- name: Add specified repository into sources list
ansible.builtin.apt_repository:
repo: deb [arch=amd64,arm64,ppc64el] http://sfo1.mirrors.digitalocean.com/mariadb/repo/10.3/ubuntu bionic main
state: present
If you prefer to keep the repository information in a source list file in the Ansible repository, you can upload that file to the target hosts in this way:
- name: Upload the MariaDB source list file
  ansible.builtin.copy:
    src: ./files/mariadb.list
    dest: /etc/apt/sources.list.d/mariadb.list
    owner: root
    group: root
    mode: '0644'
Both the Ansible modules ansible.builtin.apt and ansible.builtin.apt_repository have an update_cache attribute. In ansible.builtin.apt it is set to "no" by default. Whenever a task sets it to "yes", apt-get update is run on the target system. You have three ways to make sure that repositories are updated.
The first is to use ansible.builtin.apt_repository to add the desired repository, as shown above. So you only need to worry about updating repositories if you use the file method.
The second is to make sure that update_cache is set to "yes" when you install a package:
- name: Install foo
  apt:
    name: foo
    update_cache: yes
But if you run certain tasks conditionally, this option may not be very convenient. So the third option is to update the repository cache explicitly as a separate task:
- name: Update repositories
  apt:
    update_cache: yes
To import the GPG key for MariaDB we can use the ansible.builtin.apt_key Ansible module. For example:
- name: Add an apt key by id from a keyserver
  ansible.builtin.apt_key:
    keyserver: hkp://keyserver.ubuntu.com:80
    id: 0xF1656F24C74CD1D8
To install Deb packages into a system:
- name: Install software-properties-common
  apt:
    name: software-properties-common
    state: present
To make sure that a specific version is installed, performing an upgrade or a downgrade if necessary:
- name: Install foo 1.0
  apt:
    name: foo=1.0
To install a package or upgrade it to the latest version, use state: latest.
To install multiple packages at once:
- name: Install the necessary packages
  apt:
    pkg:
      - pkg1
      - pkg2=1.0
If all your servers run on the same system, you will always use ansible.builtin.apt, and the names and versions of the packages will be the same for all servers. But suppose you have some servers running systems from the Debian family, and others running systems from the Red Hat family. In this case, you may find it convenient to use two different task files for the two types of systems. To include the proper file for the target host's system:
- include_tasks: mariadb-debian.yml
  when: ansible_facts['os_family'] == 'Debian'
Facts such as ansible_facts['os_family'], shown above, are the variables you can use to run the tasks appropriate for the target system.
There is also a system-independent package module, but if the package names depend on the target system using it may be of very little benefit.
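For completeness, a sketch of the package module; the variable used to parametrize the package name per system is hypothetical:

```yaml
# "package" delegates to apt, yum, etc. on the target system.
# The package name itself may still differ between distributions,
# so it is parametrized here via a (hypothetical) variable.
- name: Install MariaDB server
  package:
    name: "{{ mariadb_package_name | default('mariadb-server') }}"
    state: present
```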
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
An Ansible role often runs commands that require certain privileges, so it must perform some forms of login, using passwords or key pairs. In the context of database automation, we normally talk about: SSH access, sudo access, and access to MariaDB. If we write these secrets (passwords or private keys) in clear text in an Ansible repository, anyone who has access to the repository can access them, and this is not what we want.
Let's see how we can manage secrets.
Most of the time, Ansible connects to the target hosts via SSH. It is common to use the local system username and the SSH keys installed in ~/.ssh, which is the SSH client's default. In this case, nothing has to be done on the clients to allow Ansible to use SSH, as long as they are already able to connect to the target hosts.
It is also possible to specify a different username as ANSIBLE_REMOTE_USER and an SSH configuration file as ANSIBLE_NETCONF_SSH_CONFIG. These settings can be specified in Ansible configuration file or as environment variables.
ANSIBLE_ASK_PASS can also be set. If it is, Ansible will prompt the user to type an SSH password.
As a general rule, any configuration that implies communicating sensitive information to the people who connect to a system implies some degree of risk. Therefore, the most common choice is to allow users to log into remote systems with their local usernames, using SSH keys.
Once Ansible is able to connect to remote hosts, it can also be used to install the public keys of some users to grant them access. Sharing public keys implies no risk. Sharing private keys is never necessary, and must be avoided.
MariaDB has a unix_socket authentication plugin that lets some users skip entering a password, as long as they are logged into the operating system. This authentication method is used by default for the root user. It is a good way to avoid having one more password, and possibly writing it to a .my.cnf file so that the user doesn't have to type it.
Even for users who connect remotely, it is normally not necessary to insert passwords in an Ansible file. When we create a user with a password, a hash of the original password is stored in MariaDB. That hash can be found in the mysql.user table. To know the hash of a password without even creating a user, we can use the PASSWORD() function:
SELECT PASSWORD('my_password12') AS hash;
When we create a user, we can actually specify a hash instead of the password the user will have to type:
CREATE USER user@host IDENTIFIED BY PASSWORD '*54958E764CE10E50764C2EECBB71D01F08549980';
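With Ansible, the mysql_user module can take such a hash directly when its encrypted option is set (a sketch; the user name and host are illustrative, the hash is the one from the example above):

```yaml
# Create a user from a password hash instead of a clear-text password
- name: Create a user from a hash
  community.mysql.mysql_user:
    name: app
    host: '%'
    password: '*54958E764CE10E50764C2EECBB71D01F08549980'
    encrypted: yes
    state: present
```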
Even if you try to avoid sharing secrets, it's likely you'll have to keep some in Ansible. For example, MariaDB users that connect remotely have passwords, and if we want Ansible to create and manage those users, the hashes must be placed somewhere in our Ansible repository. While a hash cannot be converted back to a password, treating hashes as secrets is usually a good idea. Ansible provides a native way to handle secrets: ansible-vault.
In the simplest case, we can manage all our passwords with a single ansible-vault password. When we add or change a password in some file (typically a file in host_vars or group_vars) we use ansible-vault to encrypt it. While doing so, we are asked to insert our ansible-vault password. When we apply a role and Ansible needs to decrypt that secret, it asks us to enter our ansible-vault password again.
ansible-vault can use more than one password. Each password can manage a different set of secrets. So, for example, some users may have the password to manage regular MariaDB users passwords, and only one may have the password that is needed to manage the root user.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
For documentation about the mariadb-tzinfo-to-sql
utility, see mysql_tzinfo_to_sql. This page is about running it using Ansible.
First, we should make sure the system timezone data is installed and up to date. For example, on Ubuntu we can use the following task. For other systems, use the proper module and package name.
- name: Update timezone info
  tags: [ timezone-update ]
  apt:
    name: tzdata
    state: latest
    install_recommends: no
  register: timezone_info
register: timezone_info
This task installs the latest version of the tzdata package, unless it is already installed and up to date. We register the timezone_info variable, so that the next task runs only if the package was installed or updated.
We also specify a timezone-update tag, so we can apply the role to only update the timezone tables.
The next task runs mariadb-tzinfo-to-sql.
- name: Move system timezone info into MariaDB
  tags: [ timezone-update ]
  shell: >
    mysql_tzinfo_to_sql /usr/share/zoneinfo
    | grep -v "^Warning"
    | mysql --database=mysql
  when: timezone_info.changed
We use the shell module to run the command. Running a command in this way is not idempotent, so we specify when: timezone_info.changed to only run it when necessary. Some warnings may be generated, so we pipe the output of mysql_tzinfo_to_sql to grep to filter them out.
If we're using MariaDB Galera Cluster, we'll want to update the timezone tables on only one node, because the other nodes will replicate the changes. For convenience, we can run this operation on the first node. If the node hostnames are defined in a list called cluster_hosts, we can check whether the current node is the first one in this way:
when: timezone_info.changed and inventory_hostname == cluster_hosts[0].hostname
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
General information and hints on how to automate MariaDB deployments and configuration with Puppet, an open source tool for deployment, configuration, and operations.
Puppet is a tool to automate server configuration management. It is produced by Puppet Inc, and released under the terms of the Apache License, version 2.
It is entirely possible to use Puppet to automate MariaDB deployments and configuration. This page contains generic information for MariaDB users who want to learn, or evaluate, Puppet.
Puppet modules can be searched using Puppet Forge. Most of them are also published on GitHub with open source licenses. Puppet Forge allows filtering modules to only view the most reliable: supported by Puppet, supported by a Puppet partner, or approved.
For information about installing Puppet, see Installing and upgrading in Puppet documentation.
With Puppet, you write manifests that describe the resources you need to run on certain servers and their attributes.
Therefore manifests are declarative. You don't write the steps to achieve the desired result. Instead, you describe the desired result. When Puppet detects differences between your description and the current state of a server, it decides what to do to fix those differences.
Manifests are also idempotent. You don't need to worry about the effects of applying a manifest twice. This may happen (see Architecture below) but it won't have any side effects.
Here's an example of how to describe a resource in a manifest:
file { '/etc/motd':
  content => '',
  ensure  => present,
}
This block describes a resource. The resource type is file, while the resource itself is /etc/motd. The description consists of a set of attributes. The most important is ensure, which in this case states that the file must exist. It is also common to use this attribute to indicate that a file (perhaps created by a previous version of the manifest) must not exist.
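For example, to state that a leftover file must not exist (the path is hypothetical):

```puppet
# Remove a file left behind by a previous version of the manifest
file { '/etc/motd.old':
  ensure => absent,
}
```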
These classes of resource types exist:
Built-in resources, or Puppet core resources: Resources that are part of Puppet, maintained by the Puppet team.
Defined resources: Resources that are defined as a combination of other resources. They are written in the Puppet domain-specific language.
Custom resources: Resources that are written by users, in the Ruby language.
To obtain information about resources:
# list existing resource types
puppet resource --types
# print information about the file resource type
puppet describe file
To group several resources in a reusable class:
class ssh_server {
  file { '/etc/motd':
    content => '',
    ensure  => present,
  }
  file { '/etc/issue.net':
    content => '',
    ensure  => present,
  }
}
There are several ways to include a class. For example:
include ssh_server
Puppet has a main manifest that can be a site.pp file or a directory containing .pp files. For simple infrastructures, we can define the nodes here. For more complex infrastructures, we may prefer to import other files that define the nodes.
Nodes are defined in this way:
node 'maria-1.example.com' {
  include common
  include mariadb
}
The resource type is node. Then we specify a hostname that is used to match this node to an existing host. This can also be a list of hostnames, a regular expression that matches multiple nodes, or the default keyword that matches all hosts. To use a regular expression:
node /^(maria|mysql)-[1-3]\.example\.com$/ {
  include common
}
The most important Puppet concepts are the following:
Target: A host whose configuration is managed via Puppet.
Group: A logical group of targets. For example there may be a mariadb group, and several targets may be part of it.
Facts: Information collected from the targets, like the system name or system version. They're collected by a Ruby gem called Facter. They can be core facts (collected by default) or custom facts (defined by the user).
Manifest: A description that can be applied to a target.
Catalog: A compiled manifest.
Apply: Modifying the state of a target so that it reflects its description in a manifest.
Module: A set of manifests.
Resource: A minimal piece of description. A manifest consists of a set of resources, which describe components of a system, like a file or a service.
Resource type: Determines the class of a resource. For example there is a file resource type, and a manifest can contain any number of resources of this type, each describing a different file.
Attribute: A characteristic of a resource, like a file's owner or its mode.
Class: A group of resources that can be reused in several manifests.
Depending on how the user decides to deploy changes, Puppet can use two different architectures:
An Agent-master architecture. This is the preferred way to use Puppet.
A standalone architecture, that is similar to Ansible architecture.
A Puppet master stores a catalog for each target. There may be more than one Puppet master, for redundancy.
Each target runs a Puppet agent in the background. Each Puppet agent periodically connects to the Puppet master, sending its facts. The Puppet master compiles the relevant manifest using the facts it receives, and sends back a catalog. Note that it is also possible to store the catalogs in PuppetDB instead.
Once the Puppet agent receives the up-to-date catalog, it checks all resources and compares them with its current state. It applies the necessary changes to make sure that its state reflects the resources present in the catalog.
With this architecture, the targets run Puppet apply. This application usually runs as a Linux cron job or a Windows scheduled task, but it can also be manually invoked by the user.
When Puppet apply runs, it compiles the latest versions of manifests using the local facts. Then it checks every resource from the resulting catalogs and compares it to the state of the local system, applying changes where needed.
Newly created or modified manifests are normally deployed to the targets, so Puppet apply can read them from the local host. However it is possible to use PuppetDB instead.
PuppetDB is a Puppet node that runs a PostgreSQL database to store information that can be used by other nodes. PuppetDB can be used with both the Agent-master and the standalone architectures, but it is always optional. However it is necessary to use some advanced Puppet features.
PuppetDB stores the following information:
The latest facts from each target.
The latest catalogs, compiled by Puppet apply or a Puppet master.
Optionally, the recent history of each node's activities.
With both architectures, it is possible to have a component called an External Node Classifier (ENC). This is a script or an executable written in any language that Puppet can call to determine the list of classes that should be applied to a certain target.
An ENC receives a node name as input, and should return a list of classes, parameters, etc., as a YAML hash.
Bolt can be used in both architectures to run operations against a target or a set of targets. These operations can be commands passed manually to Bolt, scripts, Puppet tasks or plans. Bolt directly connects to targets via ssh and runs system commands.
See Bolt Examples to get an idea of what you can do with Bolt.
hiera is a hierarchical configuration system that allows us to:
Store configuration in separate files;
Include the relevant configuration files for every server we automate with Puppet.
See Puppet hiera Configuration System for more information.
More information about the topics discussed in this page can be found in the Puppet documentation:
Puppet Glossary in Puppet documentation.
Overview of Puppet's architecture in Puppet documentation.
Classifying nodes in Puppet documentation.
Hiera in Puppet documentation.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Puppet can also be used to manage Docker container upgrades and configuration changes. Docker has more specific tools for this purpose, but sometimes there are reasons to choose alternatives. See Benefits of Managing Docker Containers with Automation Software.
In this page you will find out what managing Docker with Puppet looks like. All the snippets in this page use the docker resource type, supported by the Puppet company.
Installing or upgrading Docker is simple:
class { 'docker':
use_upstream_package_source => false,
version => '17.09.0~ce-0~debian',
}
In this example we are using our system's repositories instead of Docker official repositories, and we are specifying the desired version. To upgrade Docker later, all we need to do is to modify the version number. While specifying a version is not mandatory, it is a good idea because it makes our manifest more reproducible.
To uninstall Docker:
class { 'docker':
ensure => absent
}
Check the docker resource type documentation to find out how to use more features: for example, you can use Docker Enterprise Edition, or bind the Docker daemon to a TCP port.
To pull an image from Dockerhub:
docker::image { 'mariadb:10.0': }
We specified the 10.0 tag to get the desired MariaDB version. If we don't, the image with the latest tag will be used. Note that this is not desirable in production, because it can lead to unexpected upgrades.
You can also write a Dockerfile yourself, and then build it to create a Docker image. To do so, you need to instruct Puppet to copy the Dockerfile to the target and then build it:
file { '/path/to/remote/Dockerfile':
ensure => file,
source => 'puppet:///path/to/local/Dockerfile',
}
docker::image { 'image_name':
docker_file => '/path/to/remote/Dockerfile'
}
It is also possible to subscribe to Dockerfile changes, and automatically rebuild the image whenever a new file is found:
docker::image { 'image_name':
docker_file => '/path/to/remote/Dockerfile',
subscribe => File['/path/to/remote/Dockerfile'],
}
To remove an image that was possibly built or pulled:
docker::image { 'mariadb':
ensure => absent
}
To run a container:
docker::run { 'mariadb-01':
image => 'mariadb:10.5',
ports => ['3306:6606']
}
mariadb-01 is the container name. We specified the optional 10.5 tag, and we mapped the guest port 3306 to the host port 6606. In production, you normally don't map ports, because you don't need to connect MariaDB clients from the host system to MariaDB servers in the containers. Third-party tools can be installed as separate containers.
docker resource type documentation, in Puppet documentation.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
This page contains links to Puppet modules that can be used to automate MariaDB deployment and configuration. The list is not meant to be exhaustive. Use it as a starting point, but then please do your own research.
Puppet Forge is the website to search for Puppet modules, maintained by the Puppet company. Modules are searched by the technology that needs to be automated, and the target operating system.
Search criteria include whether the modules are supported by Puppet or its partners, and whether a module is approved by Puppet. Approved modules are certified by Puppet based on their quality and maintenance standards.
Some modules that support the Puppet Development Kit allow some types of acceptance tests.
We can run a static analysis on a module's source code to find certain bad practices that are likely to be a source of bugs:
pdk validate
If a module's authors wrote unit tests, we can run them in this way:
pdk test unit
At the time of writing, there are no supported or approved modules for MariaDB.
However, there is a mysql module supported by Puppet that also supports the Puppet Development Kit. Though it doesn't support MariaDB-specific features, it works with MariaDB. Its documentation shows how to use the module to install MariaDB on certain operating systems.
Several unsupported, non-approved modules exist for MariaDB and MaxScale.
Puppet Forge website.
Puppet Development Kit documentation.
Modules overview in Puppet documentation.
Beginner's guide to writing modules in Puppet documentation.
Puppet Supported Modules page in Puppet Forge.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
hiera is part of Puppet. It is a hierarchical configuration system that allows us to:
Store configuration in separate files;
Include the relevant configuration files for every server we automate with Puppet.
Each hierarchy allows one to choose the proper configuration file for a resource, based on certain criteria. For example, criteria may include node names, node groups, operating systems, or datacenters. Hierarchies are defined in a hiera.yaml file, which also defines a path for the files in each hierarchy.
Puppet facts are commonly used to select the proper files to use. For example, a path may be defined as "os/%{facts.os.name}.yaml". In this case, each resource will use a file named after the operating system it uses, in the os directory. You may need to use custom facts, for example to check which microservices will use a MariaDB server, or in which datacenter it runs.
We do not have to create a file for each possible value of a certain fact. We can define a default configuration file with settings that are reasonable for most resources. Other files, when included, will override some of the default settings.
A hiera configuration file will look like this:
version: 5
defaults:
datadir: global
data_hash: yaml_data
hierarchy:
- name: "Node data"
path: "nodes/%{trusted.certname}.yaml"
- name: "OS data"
path: "os/%{facts.os.family}.yaml"
- name: "Per-datacenter business group data" # Uses custom facts.
path: "location/%{facts.whereami}/%{facts.group}.yaml"
This file includes node-specific files, OS-specific files, and files specific to a datacenter and business group. In Hiera, hierarchies listed first have the highest priority, so node-specific data overrides data from the other levels.
We can actually have several Hiera configuration files. hiera.yaml is the global file, but we will typically have additional Hiera configuration files for each environment. So we can include the configuration files that apply to production, staging, etc., plus global configuration files that should be included for every environment.
Importantly, we can also have Hiera configuration files for each module. So, for example, a separate mariadb/hiera.yaml file may define the hierarchies for MariaDB servers. This allows us to define, for example, different configuration files for MariaDB and for MaxScale, as most of the needed settings are typically different.
You probably noticed that, in the previous example, we defined data_hash: yaml_data, which indicates that configuration files are written in YAML. Other allowed formats are JSON and HOCON. The data_hash setting is defined in defaults, but it can be overridden by hierarchies.
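For example, with the hierarchy above (whose data directory is global), an OS-level file could set defaults that a node-level file overrides. The keys shown are hypothetical:

```yaml
# global/os/Debian.yaml -- defaults for Debian-family nodes
mariadb::server::config_dir: '/etc/mysql/mariadb.conf.d'
mariadb::server::max_connections: 200

# global/nodes/maria-1.example.com.yaml -- node-specific override;
# this hierarchy is listed first, so it wins for this node
mariadb::server::max_connections: 500
```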
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
This page shows some examples of what we can do with Bolt to administer a set of MariaDB servers. Bolt is a tool that is part of the Puppet ecosystem.
For information about installing Bolt, see Installing Bolt in Bolt documentation.
The simplest way to call Bolt and instruct it to do something on some remote targets is the following:
bolt ... --targets 100.100.100.100,200.200.200.200,300.300.300.300
However, for non-trivial setups it is usually better to use an inventory file. An example:
targets:
- uri: maria-1.example.com
name: maria_1
alias: mariadb_main
...
In this way, it will be possible to refer to the target by name or alias.
We can also define groups, followed by the group members. For example:
groups:
- name: mariadb-staging
targets:
- uri: maria-1.example.com
name: maria_1
- uri: maria-2.example.com
name: maria_2
- name: mariadb-production
targets:
...
...
With an inventory of this type, it will be possible to run Bolt actions against all the targets that are members of a group:
bolt ... --targets mariadb-staging
In the examples in the rest of the page, the --targets parameter will be indicated in this way, for simplicity: --targets <targets>.
The simplest way to run a command remotely is the following:
bolt command run 'mariadb-admin start-all-slaves' --targets <targets>
To copy a file or a whole directory to targets:
bolt file upload /path/to/source /path/to/destination --targets <targets>
To copy a file or a whole directory from the targets to the local host:
bolt file download /path/to/source /path/to/destination --targets <targets>
We can use Bolt to run a local script on remote targets. Bolt will temporarily copy the script to the targets, run it, and delete it from the targets. This is convenient for scripts that are meant to only run once.
bolt script run rotate_logs.sh --targets <targets>
Puppet tasks are not always as powerful as custom scripts, but they are simpler and many of them are idempotent. The following task stops MariaDB replication:
bolt task run mysql::sql --targets <targets> sql="STOP REPLICA"
It is also possible to apply whole manifests or portions of Puppet code (resources) on the targets.
To apply a manifest:
bolt apply manifests/server.pp --targets <targets>
To apply a resource description:
bolt apply --execute "file { '/etc/mysql/my.cnf': ensure => present }" --targets <targets>
Further information about the concepts explained in this page can be found in Bolt documentation:
Inventory Files in Bolt documentation.
Applying Puppet code in Bolt documentation.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Containers are an OCI standard format for bundling software and its dependencies into a single distributable image. They can be used for production, development or testing.
Docker Inc. runs a Docker Official Images program to provide users with an essential base implementation of MariaDB in a container and to exemplify best practices of a container.
The containers are available on Docker Hub as docker.io/library/mariadb, though many container runtime implementations will fill in the docker.io/library prefix when the host/path isn't specified.
The containers are in an Open Container Initiative format, which allows them to be interoperable with a number of container runtime implementations. Docker, or more fully Docker Engine, is just one of the many available runtimes.
Many people use MariaDB Docker Official Image containers in CI systems like GitHub Actions, though it's possible to use them in production environments like Kubernetes.
The MariaDB Server container images are available with a number of tags:
A full version, like 10.11.5
A major version like 10.11
The most recent stable GA version - latest
The most recent stable LTS version - lts
Versions that aren't stable are suffixed with -rc or -alpha to clearly show their release status; this also enables Renovatebot and other tools that follow semantic versioning to track updates.
For consistency between testing and production environments, using the SHA hash of the image is recommended, like docker.io/library/mariadb@sha256:29fe5062baf36bae8ec68f21a3dce4f0372dadc185e687624f1252fc49d91c67. There is a list mapping the history of tags to SHA hashes in the Docker Library repository.
MariaDB has many plugins. Most are not enabled by default; some are included in the mariadb container, while others need to be installed from additional packages.
The following methods summarize Installing plugins in the MariaDB Docker Library Container (mariadb.org blog post) on this topic.
To see which plugins are available in the mariadb image:
$ docker run --rm mariadb:latest ls -C /usr/lib/mysql/plugin
Using the --plugin-load-add flag with the plugin name (the flag can be repeated), the plugins will be loaded and ready when the container is started.
For example, to enable the simple_password_check plugin:
$ docker run --name some-mariadb -e MARIADB_ROOT_PASSWORD=my-secret-pw --network=host -d mariadb:latest --plugin-load-add=simple_password_check
plugin-load-add can be used as a configuration option to load plugins. The example below loads the FederatedX Storage Engine.
$ printf "[mariadb]\nplugin-load-add=ha_federatedx\n" > /my/custom/federatedx.conf
$ docker run --name some-mariadb -v /my/custom:/etc/mysql/conf.d -e MARIADB_ROOT_PASSWORD=my-secret-pw -d mariadb:latest
INSTALL SONAME can be used to install a plugin as part of the database initialization.
Create the SQL file used in initialization:
$ echo 'INSTALL SONAME "disks";' > my_initdb/disks.sql
In this case, my_initdb is a /docker-entrypoint-initdb.d directory per the "Initializing a fresh instance" section above.
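The steps above can be sketched as follows; the docker run invocation is shown as a comment because it requires a running daemon, and the directory, container name and password are examples:

```shell
# Create the init directory on the host and add the SQL file:
mkdir -p my_initdb
echo 'INSTALL SONAME "disks";' > my_initdb/disks.sql

# Mount it at container creation so the file runs once, during the
# first initialization of the data directory:
#   docker run -d --name some-mariadb \
#     -v "$PWD/my_initdb":/docker-entrypoint-initdb.d:ro \
#     -e MARIADB_ROOT_PASSWORD=my-secret-pw mariadb:latest

cat my_initdb/disks.sql
```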
A number of plugins are in separate packages to reduce their installation size. The package names of MariaDB-created plugins can be determined using the following command:
$ docker run --rm mariadb:latest sh -c 'apt-get update -qq && apt-cache search mariadb-plugin'
A new image needs to be created when using additional packages. The mariadb image can, however, be used as a base:
In the following, the CONNECT Storage Engine is installed:
FROM mariadb:latest
RUN apt-get update && \
apt-get install mariadb-plugin-connect -y && \
rm -rf /var/lib/apt/lists/*
Installing plugins from packages creates a configuration file in the directory /etc/mysql/mariadb.conf.d/ that loads the plugin on startup.
This page is licensed: CC BY-SA / Gnu FDL
In this page we'll discuss why automating containers with software like Ansible or Puppet may be desirable in some cases. To do so, we'll first need to discuss why containers are described as ephemeral, and how this applies to containerized database servers (particularly MariaDB).
During the discussion, we should keep in mind that Docker Engine, CRI-O, containerd, Mirantis Container Runtime, Podman and other OCI container runtimes can be used to set up production and/or development environments. These use cases are very different from a database perspective: a production database may be big, and typically contains data that we don't want to lose. Development environments usually contain small sample data that can be rebuilt relatively quickly. This page focuses on the latter case.
Images are an OCI-specified format, and building them from Dockerfiles is one way to create them. Containers are the OCI runtime-specified way of creating a running instance of an image. Normally, a container is not modified from the moment it is created. In other words, containers are usually designed to be ephemeral, meaning that they can be destroyed and replaced with new containers at any time. Provided that there is proper redundancy (for example, there are several web servers running the same services), destroying one container and starting a new one of the same type won't cause any damage.
We will discuss a bit later how this applies to MariaDB, and more generally to database servers.
When something should change, for example some software version or configuration, normally Dockerfiles are updated and containers are recreated from the latest image versions. For this reason, containers shouldn't contain anything that shouldn't be lost, and recreating them should be an extremely cheap operation. Docker Compose or the Swarm mode are used to declare which containers form a certain environment, and how they communicate with each other.
On the contrary, Ansible and Puppet are mainly built to manage the configuration of existing servers. They don't recreate servers; they change their configuration. So Docker and Ansible have very different approaches. For this reason, Ansible and Puppet are not frequently used to deploy containers to production. However, using them together can bring some benefits, especially for development environments.
More on this later in the page. First, we need to understand how these concepts apply to database servers.
Using ephemeral containers works very well for stateless technologies, like web servers and proxies. These technologies virtually only need binaries, configuration and small amounts of data (web pages). If some data need to be restored after a container creation, it will be a fast operation.
In the case of a database, the problem is that data can be large and need to be written somewhere. We don't want all databases to disappear when we destroy a container. Even if we had an up-to-date backup, restoring it would take time.
However, OCI containers have a feature called volumes. A volume is a directory in the host system mapped to a directory in one or more containers. Volumes are not destroyed when containers are destroyed. They can be used to share data between any number of containers and the host system. Therefore, they are also a good way to persist data.
Suppose a MariaDB container called mariadb-main-01 uses a volume that is mapped to /var/docker/volumes/mariadb-main. At some point we want to use a more recent MariaDB version. As explained earlier, the container way to do this is to destroy the container and create a new one that uses a more recent version of the MariaDB image.
So, we will destroy mariadb-main-01. The volume is still there. Then we create a new container with the same name, but based on a newer image. We make sure to link the volume to the new container too, so it will be able to use /var/docker/volumes/mariadb-main again. At this point we may want to run mariadb-upgrade, but apart from that, everything should just work.
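The steps just described can be sketched with Docker commands like these; they require a running Docker daemon, and the image tag is an example:

```shell
# Destroy the old container; the volume survives:
docker stop mariadb-main-01
docker rm mariadb-main-01

# Recreate it from a newer image, attached to the same volume.
# MARIADB_AUTO_UPGRADE=1 runs mariadb-upgrade on startup if needed.
docker run -d --name mariadb-main-01 \
  -v /var/docker/volumes/mariadb-main:/var/lib/mysql \
  -e MARIADB_AUTO_UPGRADE=1 \
  mariadb:11.4
```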
The container runtime implementations also provide the opportunity to create a volume with an explicit name and this is also persistent. The actual location on the filesystem is managed by the runtime.
The above described steps are simple, but running them manually is time consuming and error-prone. Automating them with some automation software like Ansible or Puppet is often desirable.
Containers can be deployed in the following ways:
Manually. See Installing and Using MariaDB via Docker. This is not recommended for production, or for complex environments. However, it can easily be done for the simplest cases. If we want to make changes to our custom images, we'll need to modify the Dockerfiles, destroy the containers and recreate them.
With Docker Compose. See Setting Up a LAMP Stack with Docker Compose for a simple example. When we modify a Dockerfile, we'll need to destroy the containers and recreate them, which is usually as simple as running docker compose down
followed by docker compose up
. After changing docker-compose.yml
(maybe to add a container or a network) we'll simply need to run docker compose up
again, because it is idempotent.
Using Ansible, Puppet or other automation software, as mentioned before. We can use Ansible or Puppet to create the containers, and run them again every time we want to apply some change to the containers. This means that the containers are potentially created once and modified any number of times.
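For the Docker Compose approach, a minimal docker-compose.yml for a single MariaDB service might look like this; the image tag, password and volume name are examples:

```yaml
# docker-compose.yml -- a single MariaDB service
services:
  mariadb:
    image: mariadb:11.4
    environment:
      MARIADB_ROOT_PASSWORD: my-secret-pw
    volumes:
      - mariadb_data:/var/lib/mysql

# A named volume, so the data survives container recreation
volumes:
  mariadb_data:
```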
In all these cases, it is entirely possible to add Vagrant to the picture. Vagrant is a way to deploy or provision several hosts, including virtual machines (the most common case) and containers. It is agnostic regarding the underlying technology, so it can deploy to a virtual machine, a container, or even a remote server in the same way. Containers can work with Vagrant in two ways:
As a provisioner. In this case Vagrant will most commonly deploy a virtual machine, and will use Docker to set up the applications that need to run in it, as containers. This guarantees a higher level of isolation compared to running the containers on the local host, especially if you have different environments to deploy locally, because you can have them on different virtual machines.
As a provider. Vagrant will deploy one or more containers locally. Once each container is up, Vagrant can optionally use a provisioner on it, to make sure that the container runs the proper software with proper configuration. In this case, Ansible, Puppet or other automation software can be used as a provisioner. But again, this is optional: it is possible to make changes to the Dockerfiles and recreate the containers every time.
Containers can be entirely managed with Docker Compose or the Swarm mode. This is often a good idea.
However, choosing to use automation software like Ansible or Puppet has some benefits too. Benefits include:
Containers allow working without modifying the host system, and their creation is very fast. Much faster than virtual machines. This makes containers desirable for development environments.
As explained, making all containers ephemeral and using volumes to store important data is possible. But this means adding some complexity to adapt an ephemeral philosophy to technologies that are not ephemeral by nature (databases). Also, many database professionals don't like this approach. Using automation software allows easily triggering upgrades and configuration changes in the containers, treating them as non-ephemeral systems.
Sometimes containers are only used in development environments. If production databases are managed via Ansible, Puppet, or other automation software, this could lead to some code duplication. Dealing with configuration changes using the same procedures will reduce the cost of maintenance.
While recreating containers is fast, being able to apply small changes with Ansible or Puppet can be more convenient in some cases: particularly if we write files into the container itself, or if recreating a container bootstrap involves some lengthy procedure.
Trying to do something non-standard with Dockerfiles can be tricky. For example, running two processes in a container is possible but can be problematic, as containers are designed to run a single main process per container. However, there are situations when this is desirable: for example, PMM containers run several different processes. Launching additional processes with Ansible or Puppet may be easier than doing it with a Dockerfile.
With all this in mind, let's see some examples of cases when managing containers with Ansible, Puppet or other automation software is preferable, rather than destroying containers every time we want to make a change:
We use Ansible or Puppet in production, and we try to keep development environments as similar as possible to production. By using Ansible/Puppet in development too, we can reuse part of the code.
We make changes to the containers often, and recreating containers is not as fast as it should be (for example because a MariaDB dump needs to be restored).
Creating a container implies some complex logic that does not easily fit a Dockerfile or Docker Compose (including, but not limited to, running multiple processes per container).
That said, every case is different. There are environments where these advantages do not apply, or bring a very small benefit. In those cases, the cost of adding some automation with Ansible, Puppet or similar software is probably not justified.
Suppose you want to manage containers configuration with Ansible.
At first glance, the simplest way is to run Ansible on the host system. It will need to connect to the containers via SSH, so they need to expose port 22. But we have multiple containers, so we'll need to map port 22 of each container to a different port on the host. This is hard to maintain and potentially insecure: in production you want to avoid exposing any container port to the host.
A better solution is to run Ansible itself in a container. The playbooks will be in a container volume, so we can access them from the host system to manage them more easily. The Ansible container will communicate with other containers over a container network, using the standard port 22 (or another port of your choice) for all containers.
See these pages on how to manage containers with different automation technologies:
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB databases in containers need backup and restore like their non-container equivalents.
In this section, we will assume that the MariaDB container has been created as follows:
$ docker volume create mariadb_data
$ docker volume create mariadb_backup
$ docker run --rm \
-v mariadb_data:/var/lib/mysql \
-v mariadb_backup:/backup \
mariadb \
chown -R mysql:mysql /var/lib/mysql /backup
$ docker run -d --name mariadb \
-v mariadb_data:/var/lib/mysql \
-v mariadb_backup:/backup \
-e MARIADB_ROOT_PASSWORD='MariaDB11!' \
<mariadb-image>
mariadb-dump is in the Docker Official Image and can be used as follows:
$ docker exec mariadb \
sh -c 'mariadb-dump --all-databases -u root -p"$MARIADB_ROOT_PASSWORD" > backup/db.sql'
For restoring data, you can use the following docker exec
command:
$ docker exec mariadb \
sh -c 'mariadb -u root -p"$MARIADB_ROOT_PASSWORD" < backup/db.sql'
mariadb-backup is in the Docker Official Image.
mariadb-backup can create a backup as follows:
To perform a backup using mariadb-backup, a second container is started that shares the original container's data directory. An additional volume for the backup needs to be included in the second backup instance. Authentication against the MariaDB database instance is required to successfully complete the backup. In the example below, a mysql@localhost user is used, with the MariaDB server's Unix socket shared with the backup container.
Note: Privileges listed here are for 10.5+. For an exact list, see mariadb-backup: Authentication and Privileges.
$ docker volume create mariadb_data
$ docker volume create mariadb_backup
$ docker run --rm \
-v mariadb_data:/var/lib/mysql \
-v mariadb_backup:/backup \
mariadb \
chown -R mysql:mysql /var/lib/mysql /backup
$ docker run -d --name mariadb \
-v mariadb_data:/var/lib/mysql \
-v mariadb_backup:/backup \
-e MARIADB_ROOT_PASSWORD='MariaDB11!' \
-e MARIADB_MYSQL_LOCALHOST_USER=1 \
-e MARIADB_MYSQL_LOCALHOST_GRANTS='RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR' \
<mariadb-image>
mariadb-backup will run as the mysql user in the container, so the permissions on /backup will need to ensure that it can be written to by this user:
$ docker exec --user mysql mariadb mariadb-backup --backup --target-dir=backup
These steps restore the backup made with mariadb-backup.
At some point before doing the restore, the backup needs to be prepared. The prepare must be done with the same MariaDB version that performed the backup. Perform the prepare like this:
$ docker run --rm \
--name mariadb-restore \
-v mariadb_backup:/backup \
<mariadb-image> \
mariadb-backup --prepare --target-dir=backup
Now that the backup is prepared, start the container with both the data and the backup volumes and restore the backup. The data directory must be empty to perform this action:
$ docker volume create mariadb_restore
$ docker run --rm \
-v mariadb_restore:/var/lib/mysql \
--name mariadb-restore-change-permissions \
<mariadb-image> \
chown mysql: /var/lib/mysql
$ docker run --rm \
--name mariadb-restore \
-v mariadb_restore:/var/lib/mysql \
-v mariadb_backup:/backup \
--user mysql \
<mariadb-image> \
mariadb-backup --copy-back --target-dir=backup
With the mariadb_restore volume containing the restored backup, start the container normally, as this is an initialized data directory. At this point a later version of the <mariadb-image> container can be used:
$ docker run -d --name mariadb \
-v mariadb_restore:/var/lib/mysql \
-e MARIADB_AUTO_UPGRADE=1 \
-e MARIADB_ROOT_PASSWORD='MariaDB11!' \
<mariadb-image>
On the environment variables here:
MARIADB_AUTO_UPGRADE, in addition to upgrading the system tables, ensures there is a healthcheck user.
MARIADB_ROOT_PASSWORD is a convenience if any scripts, like the logical backup above, use the environment variable. This environment variable is not strictly required.
For further information on mariadb-backup, see mariadb-backup Overview.
This page is licensed: CC BY-SA / Gnu FDL
When using containers in production, it is important to be aware of container security concerns.
Depending on the container runtime, containers may be running on the host system's kernel or a kernel shared with other containers. If this kernel has security bugs, those bugs are also present in the containers. Malicious containers may attempt to exploit a kernel vulnerability to impact the confidentiality, integrity or availability of other containers.
In particular, Linux based containers have a container runtime that can use the following features:
Namespaces, to isolate containers from each other and make sure that a container can't establish unauthorized connections to another container.
cgroups, to limit the resources (CPU, memory, IO) that each container can consume.
The administrators of a system should be particularly careful to upgrade the kernel whenever security bugs to these features are fixed.
It is important to note that when we upgrade the kernel, runC or Docker itself we cause downtime for all the containers running on the system.
Containers are built from images. If security is a major concern, you should make sure that the images you use are secure.
If you want to be sure that you are pulling authentic images, you should only pull images signed with Docker Content Trust. Signing only ensures authenticity of origin; it doesn't guarantee that the signing entity is trustworthy.
Updated images should be used. An image usually downloads package information at build time. If the image was not built recently, a newly created container will have old packages. Updating the packages on container creation, and regularly re-updating them, ensures that the container uses the most recent package versions. Rebuilding an image often reduces the time needed to update the packages the first time.
Security bugs are usually important for a database server, so you don't want your version of MariaDB to contain known security bugs. But suppose you also have a bug in Docker, in runC, or in the kernel. A bug in a user-facing application may allow an attacker to exploit a bug in those lower level technologies. So, after gaining access to the container, an attacker may gain access to the host system. This is why system administrators should keep both the host system and the software running in the containers updated.
For more information, see the following links:
Container Security from Red Hat.
Docker security on Docker documentation.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
OCI containers, frequently and incorrectly called Docker containers, are created from OCI images. An image contains software that can be launched, including the underlying system. A container is an instance of that software.
When we want to automate MariaDB, we may want to create our own image, with MariaDB and the desired configuration, that fulfils our needs.
The "source code" of an image is a Dockerfile. A Dockerfile is written in a Docker-specific language, and can be compiled into an image by the docker binary, using the docker build command. It can also be compiled by buildah, using buildah bud.
Most images are based on another image. The base image is specified at the beginning of the Dockerfile, with the FROM
directive. If the base image is not present in the local system, it is downloaded from the repository specified, or if not specified, from the default repository of the build program. This is often Docker Hub. For example, we can build a mariadb-rocksdb:10.5
image starting from the debian:13
image. In this way, we'll have all the software included in a standard Debian image, and we'll add MariaDB and its configuration upon that image.
Each of the following Dockerfile directives is compiled into a new Docker image, identified by a SHA256 string. Each of these images is based on the image compiled from the previous directive. A physical compiled image can serve as a base for any number of images. This mechanism saves a lot of disk space, download time and build time.
The following diagram shows the relationship between Dockerfiles, images and containers:
Here's a simple Dockerfile example:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y mariadb-server
EXPOSE 3306
LABEL version="1.0"
LABEL description="MariaDB Server"
HEALTHCHECK --start-period=5m \
CMD mariadb -e 'SELECT @@datadir;' || exit 1
CMD ["mariadbd"]
This example is not very good for practical purposes, but it shows what a Dockerfile looks like.
First, we declare that the base image to use is ubuntu:20.04.
Then we run some commands to install MariaDB from the Ubuntu default repositories.
We define some metadata about the image with LABEL. Any label is valid.
We declare that port 3306 (the MariaDB default port) should be exposed. However, this has no effect if the port is not published at container creation.
We also define a healthcheck. This is a command that is run to check whether the container is healthy. If the return code is 0 the healthcheck succeeds; if it's 1, it fails. In the MariaDB-specific case, we want to check that the server is running and able to answer a simple query. This is better than just checking that the MariaDB process is running, because MariaDB could be running but unable to respond, for example because max_connections was reached or data is corrupted. We read a system variable, because we should not assume that any user-created table exists. We also specify --start-period to allow some time for MariaDB to start, keeping in mind that restarting it may take some time if some data is corrupted. Note that there can be only one healthcheck: if the command is specified multiple times, only the last occurrence takes effect.
Finally, we set the container command: mariadbd. This command is run when a container based on this image starts. When the process stops or crashes, the container stops immediately.
Note that, in a container, we normally run mariadbd directly, or via exec mariadbd in an entrypoint script, rather than running mysqld_safe or running MariaDB as a service. Container restarts can be handled by the container service. See automatic restart.
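A minimal entrypoint following this pattern might look like the sketch below; the setup step and the MARIADB_EXTRA_OPTS variable are hypothetical, not part of any official image:

```shell
#!/bin/bash
# docker-entrypoint.sh (sketch): do any setup, then replace the shell
# with mariadbd so it runs as PID 1 and receives stop signals directly.
set -e

: "${MARIADB_EXTRA_OPTS:=}"       # hypothetical extra server options
echo "running pre-start setup..." # e.g. generate a config file here

# exec replaces this script's process with mariadbd; without exec,
# the signal sent by "docker stop" would reach the shell, not mariadbd.
exec mariadbd $MARIADB_EXTRA_OPTS "$@"
```

Because of the exec, there is no lingering shell between the container runtime and the server process.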
See the documentation links below to learn the syntax allowed in a Dockerfile.
It is possible to use variables in a Dockerfile. This allows us, for example, to install different packages, install different versions of a package, or configure software differently depending on how variables are set, without modifying the Dockerfile itself.
To use a variable, we can do something like this:
FROM ubuntu:20.04
ARG MARIADB_CONFIG_FILE
...
ENTRYPOINT mariadbd --defaults-file=$MARIADB_CONFIG_FILE
Here ARG is used after the FROM directive, so the variable cannot be used in FROM. It is also possible to declare a variable before FROM, so a variable can be used to select the base image or its tag; but in that case the variable cannot be used after the FROM directive, unless ARG is re-declared after FROM. Here is an example:
ARG UBUNTU_VERSION
FROM ubuntu:$UBUNTU_VERSION
# Uncomment the next line to make UBUNTU_VERSION visible again:
# ARG UBUNTU_VERSION
# Otherwise the variable is empty here, because the ARG above
# was declared before FROM:
RUN echo "Ubuntu version: $UBUNTU_VERSION" > /var/build_log
We'll have to assign variables a value when we build the Dockerfile, in this way:
docker build --build-arg UBUNTU_VERSION=20.04 .
Note that Dockerfile variables are just placeholders for values. Dockerfiles do not support assignment, conditionals or loops.
Dockerfiles are normally versioned, as well as the files that are copied to the images.
Once an image is built, it can be pushed to a container registry. Whenever an image is needed on a host to start containers from it, it is pulled from the registry.
A default container registry for OCI images is Docker Hub. It contains Docker Official Images maintained by the Docker Library team and the community. Any individual or organization can open an account and push images to Docker Hub. Most Docker images are open source: the Dockerfiles and the needed files to build the images are usually on GitHub.
It is also possible to set up a self-hosted registry. Images can be pushed to that registry and pulled from it, instead of using Docker Hub. If the registry is not publicly accessible, it can be used to store images used by the organization without making them publicly available.
But a self-hosted registry can also be useful for open source images: if an image is available both on Docker Hub and on a self-hosted registry, it will still be possible to pull it when Docker Hub is down or unreachable.
The names of images developed by the community follow this schema:
repository/maintainer/technology
It doesn't matter if the maintainer is an individual or an organization. For images available on Docker Hub, the maintainer is the name of a Docker Hub account.
Official images maintained by the Docker Library maintainers have the implicit namespace library, filled in by the container fetching tool. For example, the official MariaDB image is called mariadb, which is an alias for docker.io/library/mariadb.
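As an illustration of these naming rules (registry.example.com and the maintainer name somemaintainer are hypothetical placeholders, not real repositories):

```shell
# Official image on Docker Hub (implicit docker.io/library/ prefix):
docker pull mariadb:10.11

# Community image, following the repository/maintainer/technology schema:
docker pull docker.io/somemaintainer/mariadb-rocksdb:10.5

# The same image mirrored on a self-hosted registry:
docker pull registry.example.com/somemaintainer/mariadb-rocksdb:10.5
```

All three commands use the same schema; only the registry and namespace parts change.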
All images have a tag, which identifies the version or the variant of an image. For example, all MariaDB versions available on Docker are used as image tags. MariaDB 10.11 is called mariadb:10.11.
By convention, tags form a hierarchy. So, for example, there is a 10.1.1 tag whose meaning will not change over time, while 10.5 always identifies the latest stable version in the 10.5 branch: for some time it was 10.5.1, then it became 10.5.2, and so on.
When we pull an image without specifying a tag (i.e., docker pull mariadb), we are implicitly requesting the image with the latest tag. This tag is even more mutable: at different points in time, it pointed to the latest 10.0 version, to the latest 10.1 version, and so on.
In production, it is always better to know for sure which version we are installing. Therefore it is better to specify a tag whose meaning won't change over time, like 10.5.21. To stay on the latest LTS version, the lts tag can be used.
To pull an image from Docker Hub or a self-hosted registry, we use the docker pull command. For example:
docker pull mariadb:10.5
This command downloads the specified image if it is not already present in the system, or if the local version is not up to date.
After modifying a Dockerfile, we can build an image in this way:
docker build .
This step can be automated by services like Docker Hub and GitHub. Check those services' documentation to find out how this feature works.
Once an image is created, it can be pushed to a registry. We can do it in this way:
docker push <image_name>:<tag>
Docker has a feature called Docker Content Trust (DCT). It is a system used to digitally sign images, based on PEM keys. For environments where security is a major concern, it is important to sign images before pushing them. This can be done with both Docker Hub and self-hosted registries.
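As a sketch, content trust can be enabled per shell session before pushing; the image name and tag below are placeholders:

```shell
# Enable Docker Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# With DCT enabled, docker push signs the image. On the first push,
# Docker prompts for passphrases for the root and repository keys.
docker push registry.example.com/myorg/mariadb-custom:10.5
```

The generated keys should be backed up securely: losing the root key means losing the ability to sign new tags for that repository.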
As mentioned, a Dockerfile is built by creating a new image for each directive that follows FROM. This leads to some considerations.
Sometimes it can be a good idea to run several shell commands in a single RUN directive, to avoid creating images that are not useful.
Modifying a directive means that all subsequent directives also need to be rebuilt. When possible, directives that are expected to change often should follow directives that will change seldom.
Directives like LABEL or EXPOSE should be placed close to the end of the Dockerfile. This way they will be rebuilt often, but the operation is cheap. On the other hand, changing a label should not trigger a long rebuild process.
Variables should be used to avoid Dockerfile proliferation. But if a variable is used, every value it can take should be tested. So, be sure not to use variables without a good reason.
Writing logic in a Dockerfile is impossible or very hard. Call shell scripts instead, and put your logic in them. For example, in a shell script it is easy to perform a certain operation only if a variable is set to a certain value.
If you need MariaDB containers with different configurations or different sets of plugins, use the method explained above. Do not create several Dockerfiles, with different tags, for each desired configuration or plugin set. This may lead to undesired code duplication and increased maintenance costs.
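As a sketch of this approach, a helper script called from a RUN directive could select a configuration file based on a build argument; the script name, the profiles, and the file names below are hypothetical:

```shell
# select-config.sh (sketch): pick a config fragment based on a variable,
# instead of maintaining one Dockerfile per configuration.
MARIADB_PROFILE="${MARIADB_PROFILE:-oltp}"

case "$MARIADB_PROFILE" in
  analytics) cfg="columnstore.cnf" ;;
  oltp)      cfg="innodb.cnf" ;;
  *)         echo "unknown profile: $MARIADB_PROFILE" >&2; exit 1 ;;
esac

echo "selected config: $cfg"
# A real script would now copy $cfg into /etc/mysql/conf.d/.
```

The Dockerfile would pass the variable through with ARG MARIADB_PROFILE and RUN MARIADB_PROFILE=$MARIADB_PROFILE ./select-config.sh, keeping all the branching in the script.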
More details can be found in the Docker documentation:
See also:
Privacy-Enhanced Mail on Wikipedia.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Corporation provides Docker images for MariaDB Enterprise Server in the MariaDB Enterprise Docker Registry.
Docker provides multiple benefits:
Docker is an open platform for developing, shipping, and running applications that allows you to separate your applications from your infrastructure.
Docker images are portable. A Docker image can be deployed in a Docker container on any system using the Docker platform, regardless of the host operating system.
Docker containers are isolated from the host operating system and from other Docker containers.
If you want to deploy MariaDB Enterprise Server without Docker, alternative deployment methods are available.
MariaDB Enterprise Server can be deployed with Docker to support use cases that require software to be rapidly deployed on existing infrastructure, such as:
Continuously create and destroy automated testing environments as part of a continuous integration (CI) pipeline
Create a small test environment on a local workstation
Create multiple isolated test environments on the same host
Deployment alongside related containers using Docker Compose
The following products and versions can be deployed using the MariaDB Enterprise Docker Registry:
MariaDB Enterprise Server 10.5
MariaDB Enterprise Server 10.6
MariaDB Enterprise Server 11.4
For details about which storage engines and plugins are supported in the images for each version, see "MariaDB Enterprise Docker Registry".
To deploy MariaDB Enterprise Server in a Docker container, follow the instructions below.
MariaDB Corporation requires customers to authenticate when logging in to the MariaDB Enterprise Docker Registry. A customer-specific Customer Download Token must be provided as the password.
Customer Download Tokens are available through the MariaDB Customer Portal.
To retrieve the customer download token for your account:
Navigate to the Customer Download Token at the MariaDB Customer Portal.
Log in using your MariaDB ID.
Copy the Customer Download Token to use as the password when logging in to the MariaDB Enterprise Docker Registry.
Log in to the MariaDB Enterprise Docker Registry by executing docker login:
$ docker login docker.mariadb.com
When prompted, enter the login details:
As the user name, enter the email address associated with your MariaDB ID.
As the password, enter your Customer Download Token.
The login details will be saved.
Confirm the login details were saved by checking the ~/.docker/config.json file for a JSON object named "docker.mariadb.com" inside an "auths" parent JSON object:
$ cat ~/.docker/config.json
{
"auths": {
"docker.mariadb.com": {
"auth": "<auth_hash>"
}
}
}
The enterprise-server repository in the MariaDB Enterprise Docker Registry contains images for different MariaDB Enterprise Server releases using specific tags. Before continuing, you will need to decide which tag to use.
To deploy a container using the most recent image for the latest MariaDB Enterprise Server release series (currently 11.4), use the latest tag.
For additional information, see "MariaDB Enterprise Docker Registry: Supported Tags".
Pull the Docker image with the chosen tag by executing docker pull:
$ docker pull docker.mariadb.com/enterprise-server:latest
latest: Pulling from enterprise-server
5d87d5506868: Pull complete
Digest: sha256:68795ca747901e3402e30dab71d6d8bc72bce727db3b9e4888979468be77d250
Status: Downloaded newer image for docker.mariadb.com/enterprise-server:latest
docker.mariadb.com/enterprise-server:latest
Confirm the Docker image has been pulled by executing docker images:
$ docker images \
--filter=reference='docker.mariadb.com/enterprise-server'
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.mariadb.com/enterprise-server latest dd17291aa340 3 months ago 451MB
Create a container using the pulled Docker image by executing docker run:
$ docker run --detach \
--name mariadb-es-latest \
--env MARIADB_ROOT_PASSWORD='YourSecurePassword123!' \
--publish '3307:3306/tcp' \
docker.mariadb.com/enterprise-server:latest \
--log-bin=mariadb-bin \
<other mariadbd command-line options>
3082ab69e565be21c6157bb5a3d8c849ec03a2c51576778ac417a8a3aa9e7537
Configure the container and set the root password using environment variables by setting the --env command-line option.
Configure TCP port bindings for the container by setting the --publish or --publish-all command-line options.
Configure MariaDB Enterprise Server by setting mariadbd command-line options.
Confirm the container is running by executing docker ps:
$ docker ps \
--all \
--filter ancestor='docker.mariadb.com/enterprise-server:latest'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3082ab69e565 docker.mariadb.com/enterprise-server:latest "/es-entrypoint.sh -…" 12 seconds ago Up 11 seconds 3306/tcp mariadb-es-latest
By default, Docker uses Docker bridge networking for new containers. For details on how to use host networking for new containers, see "Create a Container with Host Networking".
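As a sketch of the host-networking alternative (the container name here is a placeholder):

```shell
# With --network host the container shares the host's network stack,
# so mariadbd listens directly on the host's port 3306 and the
# --publish option is not needed (it is ignored in this mode).
docker run --detach \
  --name mariadb-es-host \
  --env MARIADB_ROOT_PASSWORD='YourSecurePassword123!' \
  --network host \
  docker.mariadb.com/enterprise-server:latest
```

Host networking removes the port-mapping layer, but also removes the network isolation between the container and the host.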
Connect to the container by executing MariaDB Client on the container using docker exec:
$ docker exec --interactive --tty \
mariadb-es-latest \
mariadb \
--user=root \
--password
Confirm the container is using the correct version of MariaDB Enterprise Server by querying the version system variable with the SHOW GLOBAL VARIABLES statement:
SHOW GLOBAL VARIABLES
LIKE 'version'\G
*************************** 1. row ***************************
Variable_name: version
Value: 11.4.4-2-MariaDB-enterprise-log
Exit the container using exit:
exit
Bye
Stop a Docker container using docker stop:
$ docker stop mariadb-es-latest
mariadb-es-latest
Confirm the container is stopped by executing docker ps:
$ docker ps \
--all \
--filter ancestor='docker.mariadb.com/enterprise-server:latest'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3082ab69e565 docker.mariadb.com/enterprise-server:latest "/es-entrypoint.sh -…" 2 minutes ago Exited (143) About a minute ago mariadb-es-latest
Remove a Docker container using docker rm:
$ docker rm mariadb-es-latest
mariadb-es-latest
Confirm the container is removed by executing docker ps:
$ docker ps \
--all \
--filter ancestor='docker.mariadb.com/enterprise-server:latest'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This page is: Copyright © 2025 MariaDB. All rights reserved.
This process shows how to deploy MariaDB in a Docker container running on an EC2 instance. First we'll create the EC2 VM, then we'll deploy Docker to it. After that, we'll pull the MariaDB Docker image which we'll use to create a running container with a MariaDB instance. Finally, we'll load a sample database into the MariaDB instance.
Create a VM in AWS EC2
Install the MariaDB client on your local machine, either bundled with MariaDB Server or standalone.
Log in to AWS and navigate to the EC2 service home.
Choose a Region for EC2 in the upper right corner of the console.
Launch (1) Instance, giving the instance a name (e.g. mrdb-ubuntu-docker-use1), and create or re-use a key pair.
Choose Ubuntu 22.04 or similar free tier instance
Choose hardware, t2.micro or similar free tier instance
Create a Key Pair with a name (e.g. mrdb-docker-aws-pk.pem if using OpenSSH at the command line, or mrdb-docker-aws-pk.ppk for use with programs like PuTTY).
Create or select a security group where SSH is allowed from anywhere 0.0.0.0/0. If you’d like to make this more secure, it can be restricted to a specific IP address or CIDR block.
{{aws-firewall}}
Accept remaining instance creation defaults and click “launch instance”.
Save the *.pem or *.ppk keyfile on your local hard drive when prompted. You will need it later. If you’re on Linux, don’t forget to change permissions on the downloaded *.pem / *.ppk key file:
$ chmod 400 mrdb-docker-pk.pem
Click into the instance summary (EC2 > Instances > Instance ID) and click on the “security” tab towards the bottom.
{{security-group}}
In the relevant security group for your instance, create an inbound rule so that TCP port 3306 is open, allowing external connections to MariaDB (like your local command-line client for MariaDB). While you're there, double-check that port 22 is open for SSH.
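If you prefer the AWS CLI over the console, the same inbound rule can be added like this; the security group ID below is a placeholder:

```shell
# Open TCP 3306 on the instance's security group.
# For better security, restrict --cidr to your own IP instead of 0.0.0.0/0.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 0.0.0.0/0
```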
Install Docker on the EC2 VM
For more detailed instructions, refer to Installing and Using MariaDB via Docker
Back in the instance summary (EC2 > Instances > Instance ID), copy the public IP (e.g. ww.xx.yyy.zzz)
{{aws-instance-ip}}
Open a terminal window, navigate to the directory containing the private key (*.pem or *.ppk) file, and start an SSH remote shell session by typing:
$ ssh -i mrdb-docker-pk.pem ubuntu@ww.xx.yyy.zzz
(switch ww.xx.yyy.zzz for your IP address from step 14).
Are you sure you want to continue connecting (yes/no/[fingerprint])? Say yes
Escalate to root
$ sudo su
Install Docker
$ curl -fsSL https://get.docker.com | sudo sh
Pull the MariaDB Docker image and create the container
Pull MariaDB Docker image
$ docker pull mariadb:lts
Start the MariaDB Docker process.
At your terminal / command line, type:
$ docker run --detach --name mariadb-docker -v \Users\YouUID\Documents\YourDirName:/var/lib/mysql:Z -p 3306:3306 -e MARIADB_ROOT_PASSWORD=yoursecurepassword mariadb:lts
The -v flag mounts a directory of your choice as /var/lib/mysql, ensuring that the data is persistent. Windows file paths like C:\Users\YouUID\Documents\YourDirName should be represented as above. Linux file paths should be absolute, not relative. Obviously, replace the root password with something more secure than this example for anything other than development purposes.
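On Linux, the same run command with an absolute host path might look like this; the /srv/mariadb-data path is only an example:

```shell
# Bind-mount an absolute host directory over /var/lib/mysql so the
# data survives container removal; :Z relabels it on SELinux hosts.
docker run --detach --name mariadb-docker \
  -v /srv/mariadb-data:/var/lib/mysql:Z \
  -p 3306:3306 \
  -e MARIADB_ROOT_PASSWORD=yoursecurepassword \
  mariadb:lts
```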
Shell into container
$ docker exec -it mariadb-docker bash
Log in to MariaDB inside the container
Using the root password specified in step 20, type:
$ mariadb -pyoursecurepassword
Setup admin account with permission for remote connection, configure access control
MariaDB [(none)]> CREATE USER 'admin'@'%' IDENTIFIED BY 'admin';
MariaDB [(none)]> GRANT ALL ON *.* to 'admin'@'%' WITH GRANT OPTION;
MariaDB [(none)]> SHOW GRANTS FOR admin;
Obviously replace these passwords with something that is a bit more secure than you see in this example for anything other than development purposes.
Setup service account for your app with permission for remote connection, configure access control
MariaDB [(none)]> CREATE USER 'yourappname'@'%' IDENTIFIED BY 'yoursecurepassword';
MariaDB [(none)]> GRANT INSERT, UPDATE, DELETE ON *.* to 'yourappname'@'%';
MariaDB [(none)]> SHOW GRANTS FOR yourappname;
Obviously replace these passwords with something that is a bit more secure than you see in this example for anything other than development purposes.
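As a tighter variant, the grant can be scoped to a single database instead of *.*; the database name yourappdb below is an example:

```sql
-- Restrict the service account to one database rather than all of them.
CREATE DATABASE IF NOT EXISTS yourappdb;
CREATE USER IF NOT EXISTS 'yourappname'@'%' IDENTIFIED BY 'yoursecurepassword';
GRANT SELECT, INSERT, UPDATE, DELETE ON yourappdb.* TO 'yourappname'@'%';
SHOW GRANTS FOR 'yourappname'@'%';
```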
Load up your database from your preexisting SQL script that contains CREATE DATABASE; USE DATABASE; and CREATE TABLE statements.
In a new local terminal window, not your SSH session, change directory to the directory containing your database creation script, say, init.sql in this example. Type:
$ mariadb --host=ww.xx.yyy.zzz --port=3306 --user=admin --password=admin -e "SOURCE init.sql"
(switch ww.xx.yyy.zzz for your IP address from step 14).
This page is licensed: CC BY-SA / Gnu FDL
This process shows how to deploy MariaDB in a Docker container running on an GCE instance. First we'll create the GCE VM, then we'll deploy Docker to it. After that, we'll pull the MariaDB Docker image which we'll use to create a running container with a MariaDB instance. Finally, we'll load a sample database into the MariaDB instance.
Create a VM in Google Cloud Compute Engine
Install the MariaDB client on your local machine, either bundled with MariaDB Server or standalone.
Log in to Google Cloud and navigate to VM instances.
Enable Compute Engine API if you haven’t already.
Click create instance, give instance a name (e.g. mrdb-ubuntu-docker-use1b), choose a region and zone.
Machine configuration: Choose general-purpose / E2 micro
Boot Disk > Change
Switch the operating system to a modern Ubuntu release x86/64 CPU architecture, or similar free tier offering.
Create a firewall rule in the Firewall Policies section of the console. After naming it, change the targets, add 0.0.0.0/0 as a source IP range, and open TCP port 3306. Then Click create.
Connect using Google Cloud’s built in browser SSH. Accept all prompts for authorization.
Install Docker on the GCE VM
For more detailed instructions, refer to Installing and Using MariaDB via Docker
Escalate to root
$ sudo su
Install Docker
$ curl -fsSL get.docker.com | sudo sh
Pull Docker image
$ docker pull mariadb:lts
Start the MariaDB Docker process.
At your terminal / command line, type:
$ docker run --detach --name mariadb-docker -v \Users\YouUID\Documents\YourDirName:/var/lib/mysql:Z -p 3306:3306 -e MARIADB_ROOT_PASSWORD=yoursecurepassword mariadb:lts
The -v flag mounts a directory of your choice as /var/lib/mysql, ensuring that the data is persistent. Windows file paths like C:\Users\YouUID\Documents\YourDirName should be represented as above. Linux file paths should be absolute, not relative. Obviously, replace the root password with something more secure than this example for anything other than development purposes.
Shell into container
$ docker exec -it mariadb-docker bash
Log in to MariaDB inside the container
Using the root password specified in step 12, type:
$ mariadb -pyoursecurepassword
Set up an admin account with permission for remote connections, and configure access control. Execute these SQL commands in sequence:
MariaDB [(none)]> CREATE USER 'admin'@'%' IDENTIFIED BY 'admin';
MariaDB [(none)]> GRANT ALL ON *.* TO 'admin'@'%' WITH GRANT OPTION;
MariaDB [(none)]> SHOW GRANTS FOR admin;
Obviously replace these passwords with something that is a bit more secure than you see in this example for anything other than development purposes.
Set up a service account for your app with permission for remote connections, and configure access control. Execute these SQL commands in sequence:
MariaDB [(none)]> CREATE USER 'yourappname'@'%' IDENTIFIED BY 'yoursecurepassword';
MariaDB [(none)]> GRANT INSERT, UPDATE, DELETE ON *.* TO 'yourappname'@'%';
MariaDB [(none)]> SHOW GRANTS FOR yourappname;
Obviously replace these passwords with something that is a bit more secure than you see in this example for anything other than development purposes.
Load up your database from your preexisting SQL script that contains CREATE DATABASE; USE DATABASE; and CREATE TABLE statements.
Copy the external IP address of your VM instance from the Console in the VM instances list.
In a new local terminal window, not your SSH session, change directory to the directory containing your database creation script, say, init.sql in this example.
Type:
$ mariadb --host=ww.xx.yyy.zzz --port=3306 --user=admin --password=admin -e "SOURCE init.sql"
(switch ww.xx.yyy.zzz for your IP address from step 17).
This page is licensed: CC BY-SA / Gnu FDL
This process shows how to deploy MariaDB in a Docker container running on an Azure VM instance. First we'll create the Azure VM, then we'll deploy Docker to it. After that, we'll pull the MariaDB Docker image which we'll use to create a running container with a MariaDB instance. Finally, we'll load a sample database into the MariaDB instance.
Create a VM in Azure
Install the MariaDB client on your local machine, either bundled with MariaDB Server or standalone.
Log in to Azure and navigate to Azure Virtual Machines.
Create a VM. Give the VM a name (e.g. mrdb-ubuntu-docker-use1), and create a new resource group or use an existing one. Select a region and availability zone, and choose Ubuntu 22.04 LTS x64 (free services eligible).
Choose the VM instance size, like a B1s or similar free tier instance. Note that Azure's free tier works on a credit-based system for new accounts.
Configure an administrator account and generate a new key pair, and give the key pair a name.
Click "Review + Create" at the very bottom of the "create virtual machine" page to create the VM.
Download the SSH keys and keep them in a safe place; you will need them later. For this example, let's name the key file mrdb-docker-pk.pem.
If your local machine is Linux or you are using WSL on Windows, open a terminal window and:
$ mv /mnt/c/<path-to-downloaded-key>/mrdb-docker-pk.pem ~/.ssh/
$ chmod 400 ~/.ssh/mrdb-docker-pk.pem
Once the VM is deployed, click "Go to resource" to get back to the virtual machine's overview page.
From the overview page's left-hand navigation, choose Settings > Networking.
Click "add inbound port rule"
Configure the port rule to allow TCP port 3306 inbound (MySQL) so that you can make external connections from your local MariaDB command-line client to the Dockerized MariaDB instance in your Azure Linux VM.
Navigate back to the virtual machine's overview page. Then copy the public IP address to the clipboard.
Install Docker on the Azure VM
For more detailed instructions, refer to Installing and Using MariaDB via Docker
Open terminal window, referencing the path to the private key (*.pem or *.ppk) file, and start a SSH remote shell session by typing:
$ ssh -i ~/.ssh/mrdb-docker-pk.pem azureuser@ww.xx.yyy.zzz
(switch ww.xx.yyy.zzz for your IP address from step 12, and replace "mrdb-docker-pk.pem" with your keyfile name if you chose something different).
If you forget your administrator account details, simply go to the left-hand navigation and choose settings > connect, and Azure will display the public IP address, admin username, and port for you.
Are you sure you want to continue connecting (yes/no/[fingerprint])? Say yes
Escalate to root
$ sudo su
Some Microsoft Azure machine images come with Docker preinstalled. If for any reason you need to reinstall it, or you chose a machine type that does not have Docker preinstalled, you can install Docker inside your SSH session with cURL by typing:
$ curl -fsSL get.docker.com | sudo sh
Pull the MariaDB Docker image and create the container
Pull MariaDB Docker image
$ docker pull mariadb:lts
Start the MariaDB Docker process.
At your terminal / command line, type:
$ docker run --detach --name mariadb-docker -v \Users\YouUID\Documents\YourDirName:/var/lib/mysql:Z -p 3306:3306 -e MARIADB_ROOT_PASSWORD=yoursecurepassword mariadb:lts
The -v flag mounts a directory of your choice as /var/lib/mysql, ensuring that the data is persistent. Windows file paths like C:\Users\YouUID\Documents\YourDirName should be represented as above. Linux file paths should be absolute, not relative. Obviously, replace the root password with something more secure than this example for anything other than development purposes.
Shell into container
$ docker exec -it mariadb-docker bash
Log in to MariaDB inside the container
Using the root password specified in step 20, type:
$ mariadb -pyoursecurepassword
Setup admin account with permission for remote connection, configure access control
MariaDB [(none)]> CREATE USER 'admin'@'%' IDENTIFIED BY 'admin';
MariaDB [(none)]> GRANT ALL ON *.* TO 'admin'@'%' WITH GRANT OPTION;
MariaDB [(none)]> SHOW GRANTS FOR admin;
Obviously replace these passwords with something that is a bit more secure than you see in this example for anything other than development purposes.
Setup service account for your app with permission for remote connection, configure access control
MariaDB [(none)]> CREATE USER 'yourappname'@'%' IDENTIFIED BY 'yoursecurepassword';
MariaDB [(none)]> GRANT INSERT, UPDATE, DELETE ON *.* TO 'yourappname'@'%';
MariaDB [(none)]> SHOW GRANTS FOR yourappname;
Obviously replace these passwords with something that is a bit more secure than you see in this example for anything other than development purposes.
Load up your database from your preexisting SQL script that contains CREATE DATABASE; USE DATABASE; and CREATE TABLE statements.
In a new local terminal window, not your SSH session, change directory to the directory containing your database creation script, say, init.sql in this example. Then type:
$ mariadb --host=ww.xx.yyy.zzz --port=3306 --user=admin --password=admin -e "SOURCE init.sql"
(switch ww.xx.yyy.zzz for your IP address from step 12).
This page is licensed: CC BY-SA / Gnu FDL
Frequently asked questions about the Docker Official Image
If you have an existing data directory and wish to reset the root and user passwords, and to create a database which the user can fully modify, perform the following steps.
First create a passwordreset.sql file:
CREATE USER IF NOT EXISTS root@localhost IDENTIFIED BY 'thisismyrootpassword';
SET PASSWORD FOR root@localhost = PASSWORD('thisismyrootpassword');
GRANT ALL ON *.* TO root@localhost WITH GRANT OPTION;
GRANT PROXY ON ''@'%' ON root@localhost WITH GRANT OPTION;
CREATE USER IF NOT EXISTS root@'%' IDENTIFIED BY 'thisismyrootpassword';
SET PASSWORD FOR root@'%' = PASSWORD('thisismyrootpassword');
GRANT ALL ON *.* TO root@'%' WITH GRANT OPTION;
GRANT PROXY ON ''@'%' ON root@'%' WITH GRANT OPTION;
CREATE USER IF NOT EXISTS myuser@'%' IDENTIFIED BY 'thisismyuserpassword';
SET PASSWORD FOR myuser@'%' = PASSWORD('thisismyuserpassword');
CREATE DATABASE IF NOT EXISTS databasename;
GRANT ALL ON databasename.* TO myuser@'%';
Adjust myuser, databasename and the passwords as needed.
Then:
$ docker run --rm -v /my/own/datadir:/var/lib/mysql -v /my/own/passwordreset.sql:/passwordreset.sql:z %%IMAGE%%:latest --init-file=/passwordreset.sql
On restarting the MariaDB container with this /my/own/datadir, the root and myuser passwords will be reset.
Are you getting errors like the following, where a temporary server start fails to succeed in 30 seconds?
Example of log:
2023-01-28 12:53:42+00:00 [Note] [Entrypoint]: Starting temporary server
2023-01-28 12:53:42+00:00 [Note] [Entrypoint]: Waiting for server startup
2023-01-28 12:53:42 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 72 ...
....
2023-01-28 12:53:42 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
2023-01-28 12:54:13 0 [Note] mariadbd: ready for connections.
Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
2023-01-28 12:54:13+00:00 [ERROR] [Entrypoint]: Unable to start server.
The timeout on a temporary server start is a quite generous 30 seconds.
The lack of a message like the following indicates it failed to complete writing a temporary file of 12MiB in 30 seconds.
2023-01-28 12:53:46 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
If the datadir is on remote storage, it may be a bit slow. It is ideal to have the InnoDB temporary path on local storage, which can be configured using the following command-line or configuration setting:
innodb_temp_data_file_path=/dev/shm/ibtmp1:12M:autoextend
Note: depending on container runtime this space may be limited.
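For example, the option can be passed as a mariadbd command-line argument when creating the container; the container name below is a placeholder:

```shell
# Arguments after the image name are passed straight to mariadbd,
# placing the InnoDB temporary tablespace on fast local tmpfs.
docker run --detach --name mariadb-tmpfix \
  -e MARIADB_ROOT_PASSWORD=yoursecurepassword \
  mariadb:latest \
  --innodb_temp_data_file_path=/dev/shm/ibtmp1:12M:autoextend
```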
MARIADB_REPLICATION_USER / MARIADB_REPLICATION_PASSWORD specify the authentication for the connection. MARIADB_MASTER_HOST is the indicator that the container is a replica, and specifies the container (aka host) name of the master.
A docker-compose.yml example:
version: "3"
services:
master:
image: mariadb:latest
command: --log-bin --log-basename=mariadb
environment:
- MARIADB_ROOT_PASSWORD=password
- MARIADB_USER=testuser
- MARIADB_PASSWORD=password
- MARIADB_DATABASE=testdb
- MARIADB_REPLICATION_USER=repl
- MARIADB_REPLICATION_PASSWORD=replicationpass
healthcheck:
test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
interval: 10s
timeout: 5s
retries: 3
replica:
image: mariadb:latest
command: --server-id=2 --log-basename=mariadb
environment:
- MARIADB_ROOT_PASSWORD=password
- MARIADB_MASTER_HOST=master
- MARIADB_REPLICATION_USER=repl
- MARIADB_REPLICATION_PASSWORD=replicationpass
- MARIADB_HEALTHCHECK_GRANTS=REPLICA MONITOR
healthcheck:
test: ["CMD", "healthcheck.sh", "--connect", "--replication_io", "--replication_sql", "--replication_seconds_behind_master=1", "--replication"]
interval: 10s
timeout: 5s
retries: 3
depends_on:
master:
condition: service_healthy
This will show up in the container log as:
2024-01-29 17:38:13 0 [ERROR] Incorrect definition of table mysql.event: expected column 'definer' at position 3 to have type varchar(, found type char(141).
2024-01-29 17:38:13 0 [ERROR] mariadbd: Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler.
The cause is that the underlying table structure has changed since the last MariaDB version. The easiest solution is to start the container with the environment variable MARIADB_AUTO_UPGRADE=1, and the system tables will be updated. This is safe to keep enabled, as it detects the installed version. The next start should not show this error.
This will show up in the error log as:
2022-05-23 12:29:20 0 [ERROR] InnoDB: Upgrade after a crash is not supported. The redo log was created with MariaDB 10.5.4.
2022-05-23 12:29:20 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
This happens when a higher MariaDB version attempts to start after the shutdown of the previous version crashed.
By crashed, we mean MariaDB was force-killed or had a hard power failure. MariaDB, being a durable database, can recover from these, if started with the same version. The redo log, however, is a less stable format, so the recovery has to happen on the same Major.Minor version, in this case 10.5. This error message is saying that you went from a force-killed MariaDB straight to a later version.
So whenever you encounter this message, start again with the tag set to the version in the error message, like 10.5.4; or, as the redo log format is consistent within a Major.Minor version, 10.5 is sufficient. After this has started correctly, cleanly shut the service down and it will be recovered.
The logs on shutdown should have a message like:
2023-11-06 10:49:23 0 [Note] InnoDB: Shutdown completed; log sequence number 84360; transaction id 49
2023-11-06 10:49:23 0 [Note] mariadbd: Shutdown complete
After you see this, you can update your MariaDB tag to a later version.
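As a quick check before changing the image tag, a saved copy of the container log can be searched for the shutdown marker (the log file name here is an assumption):

```shell
# Assumes the container log was saved to mariadb.log, e.g. via:
#   docker logs mariadb > mariadb.log 2>&1
if grep -q 'mariadbd: Shutdown complete' mariadb.log; then
    echo "clean shutdown - safe to upgrade"
else
    echo "no clean shutdown found - start the old version again"
fi
```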
2024-02-06 03:03:18+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.11.6+maria~ubu2204 started.
/usr/local/bin/docker-entrypoint.sh: line 600: /var/lib/mysql//mysql_upgrade_info: Permission denied
2024-02-06 03:03:18+00:00 [Note] [Entrypoint]: MariaDB upgrade (mariadb-upgrade) required, but skipped due to $MARIADB_AUTO_UPGRADE setting
2024-02-06 3:03:18 0 [Warning] Can't create test file '/var/lib/mysql/80a2bb81d698.lower-test' (Errcode: 13 "Permission denied")
2024-02-06 3:03:18 0 [Note] Starting MariaDB 10.11.6-MariaDB-1:10.11.6+maria~ubu2204 source revision fecd78b83785d5ae96f2c6ff340375be803cd299 as process 1
2024-02-06 3:03:18 0 [ERROR] mariadbd: Can't create/write to file './ddl_recovery.log' (Errcode: 13 "Permission denied")
2024-02-06 3:03:18 0 [ERROR] DDL_LOG: Failed to create ddl log file: ./ddl_recovery.log
2024-02-06 3:03:18 0 [ERROR] Aborting
Or:
2024-08-16 4:54:05 0 [ERROR] InnoDB: Operating system error number 13 in a file operation.
2024-08-16 4:54:05 0 [ERROR] InnoDB: The error means mariadbd does not have the access rights to the directory.
In this case, the container is running as a user that, inside the container, does not have write permissions on the datadir /var/lib/mysql.
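One common fix, sketched here with assumed paths, is to align the host directory's ownership with the user the container runs as (UID 999 is the default mysql user in the official image), or to run the container as the owning user:

```shell
# Sketch with an assumed host datadir at /srv/mariadb-data.
# UID 999 is the mysql user in the official image; verify with:
#   docker run --rm mariadb:latest id mysql
sudo chown -R 999:999 /srv/mariadb-data

# Alternatively, run the container as the UID that owns the directory:
docker run --user "$(stat -c %u /srv/mariadb-data)" \
  -v /srv/mariadb-data:/var/lib/mysql \
  -e MARIADB_ROOT_PASSWORD=examplepass \
  -d mariadb:latest
```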
This error comes from a corrupted transaction coordinator log. It produces a log message like the following:
2024-05-21 8:55:58 0 [Note] Recovering after a crash using tc.log
2024-05-21 8:55:58 0 [ERROR] Bad magic header in tc log
2024-05-21 8:55:58 0 [ERROR] Crash recovery failed. Either correct the problem (if it's, for example, out of memory error) and restart, or delete tc log and start server with --tc-heuristic-recover={commit|rollback}
2024-05-21 8:55:58 0 [ERROR] Can't init tc log
2024-05-21 8:55:58 0 [ERROR] Aborting
The cause is indicated by the first note: it is a crash recovery. As in the Every MariaDB start is a crash recovery answer below, this indicates that MariaDB wasn't given enough time by the container runtime to shut down cleanly. After that unclean shutdown, a newer MariaDB version was started, and it doesn't recognise the updated magic information in the header.
MariaDB should always perform crash recovery with the same version that actually crashed, the same major/minor number at least.
As such, the solution is to restart the container with the previous MariaDB version that was running, and to configure the container runtime to allow a longer stop time. See the Every MariaDB start is a crash recovery answer below to verify the timeout is sufficiently extended.
Do you get on every start:
db-1 | 2023-02-25 19:10:02 0 [Note] Starting MariaDB 10.11.2-MariaDB-1:10.11.2+maria~ubu2204-log source revision cafba8761af55ae16cc69c9b53a341340a845b36 as process 1
db-1 | 2023-02-25 19:10:02 0 [Note] mariadbd: Aria engine: starting recovery
db-1 | tables to flush: 3 2 1 0
db-1 | (0.0 seconds);
db-1 | 2023-02-25 19:10:02 0 [Note] mariadbd: Aria engine: recovery done
...
db-1 | 2023-02-26 13:03:29 0 [Note] InnoDB: Initializing buffer pool, total size = 32.000GiB, chunk size = 512.000MiB
db-1 | 2023-02-26 13:03:29 0 [Note] InnoDB: Completed initialization of buffer pool
db-1 | 2023-02-26 13:03:29 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
db-1 | 2023-02-26 13:03:29 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=193796878816
Container runtimes assume containers start and stop very quickly. Check the shutdown logs; there may be messages like:
db-1 | 2023-02-26 13:03:17 0 [Note] InnoDB: Starting shutdown...
db-1 | 2023-02-26 13:03:17 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
db-1 | 2023-02-26 13:03:17 0 [Note] InnoDB: Restricted to 519200 pages due to innodb_buf_pool_dump_pct=25
db-1 | 2023-02-26 13:03:17 0 [Note] InnoDB: Buffer pool(s) dump completed at 230226 13:03:17
db-1 exited with code 0
Note that the logs didn't include the following messages:
db-1 | 2023-02-26 13:03:43 0 [Note] InnoDB: Shutdown completed; log sequence number 46590; transaction id 15
db-1 | 2023-02-26 13:03:43 0 [Note] mariadbd: Shutdown complete
As these messages aren't present, the container was killed before it could shut down cleanly. When this happens, the startup will be a crash recovery, and you won't be able to upgrade your MariaDB instance (see the previous FAQ) to the next Major.Minor version.
The solution is to extend the timeout in the container runtime to allow MariaDB to complete its shutdown.
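For example (names and values are placeholders), the stop timeout can be raised either when creating the container or for a one-off stop:

```shell
# Give MariaDB up to 2 minutes to shut down before Docker sends SIGKILL.
# Set a per-container default at creation time:
docker run --stop-timeout 120 \
  -e MARIADB_ROOT_PASSWORD=examplepass -d mariadb:latest

# Or for a one-off stop of an existing container:
docker stop --time=120 mariadbtest
```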
docker volume create backup
docker run --name mdb -v backup:/backup -v datavolume:/var/lib/mysql mariadb
docker exec mdb mariadb-backup --backup --target-dir=/backup/d --user root --password soverysecret
docker exec mdb mariadb-backup --prepare --target-dir=/backup/d
docker exec mdb sh -c '[ ! -f /backup/d/.my-healthcheck.cnf ] && cp /var/lib/mysql/.my-healthcheck.cnf /backup/d'
docker exec --workdir /backup/d mdb tar -Jcf ../backup.tar.xz .
docker exec mdb rm -rf /backup/d
With a backup prepared as shown previously:
docker run -v backup:/docker-entrypoint-initdb.d -v newdatavolume:/var/lib/mysql mariadb
Because Apptainer has all the filesystems read-only except for the volume, the /run/mysqld directory, which is used for the PID file and socket, needs to be writable. An easy way is to mark this as a scratch directory.
mkdir mydatadir
apptainer run --no-home --bind $PWD/mydatadir:/var/lib/mysql --env MARIADB_RANDOM_ROOT_PASSWORD=1 --net --network-args "portmap=3308:3306/tcp" --fakeroot --scratch=/run/mysqld docker://mariadb:10.5
Alternately:
apptainer run --no-home --bind $PWD/mydatadir:/var/lib/mysql --env MARIADB_RANDOM_ROOT_PASSWORD=1 --net --network-args "portmap=3308:3306/tcp" --fakeroot docker://mariadb:10.5 --socket=/var/lib/mysql/mariadb.sock --pid-file=/var/lib/mysql/mariadb.pid
The MariaDB entrypoint briefly starts as root. If an explicit volume is present and owned by root, the entrypoint needs to be briefly root so it can use the CHOWN capability to change the volume's owner to a user that can write to it. After this one action is taken, the entrypoint uses gosu to drop to a non-root user and continues execution. There is no accessible exploit vector to remotely affect the container startup while it is briefly running as the root user.
Yes. Using user: 2022 in a compose file, or --user 2022 on the command line, will run the entrypoint as user id 2022. When this occurs, it is assumed that the datadir volume has the right permissions for MariaDB to access it. This can be useful if your local user has user id 2022 and your datadir is owned locally by this user. Note that the user names defined outside the container do not exist inside it, so working with numeric ids is more portable.
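A minimal sketch of this, with an assumed path and example password:

```shell
# Sketch: run the container as UID 2022 against a datadir owned by that UID.
# The path and password are placeholders.
mkdir -p /tmp/mariadb-data
sudo chown 2022 /tmp/mariadb-data
docker run --user 2022 \
  -v /tmp/mariadb-data:/var/lib/mysql \
  -e MARIADB_ROOT_PASSWORD=examplepass \
  -d mariadb:latest
```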
This page is licensed: CC BY-SA / Gnu FDL
Sometimes we want to install a specific version of MariaDB, MariaDB ColumnStore, or MaxScale on a certain system, but no packages are available. Or maybe, we simply want to isolate MariaDB from the rest of the system, to be sure that we won't cause any damage.
A virtual machine would certainly serve the scope. However, this means installing a system on the top of another system. It requires a lot of resources.
In many cases, the best solution is using containers. Docker is a framework that runs containers. A container is meant to run a specific daemon, and the software that is needed for that daemon to properly work. Docker does not virtualize a whole system; a container only includes the packages that are not included in the underlying system.
Docker requires a very small amount of resources. It can run on a virtualized system. It is used both in development and in production environments. Docker is an open source project, released under the Apache License, version 2.
Note that, while your package repositories could have a package called docker
, it is probably not the Docker we are talking about. The Docker package could be called docker.io
or docker-engine
.
For information about installing Docker, see Get Docker in Docker documentation.
The script below will install the Docker repositories, required kernel modules and packages on the most common Linux distributions:
curl -sSL https://get.docker.com/ | sh
On some systems you may have to start the dockerd daemon
yourself:
sudo systemctl start docker
sudo gpasswd -a "${USER}" docker
If you don't have dockerd
running, you will get the following error for most docker
commands:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
The easiest way to use MariaDB on Docker is choosing a MariaDB image and creating a container.
You can download a MariaDB image for Docker from the Official Docker MariaDB repository, or choose another image that better suits your needs. You can search Docker Hub (the official set of repositories) for an image with this command:
docker search mariadb
Once you have found an image that you want to use, you can download it via Docker. Some layers including necessary dependencies will be downloaded too. Note that, once a layer is downloaded for a certain image, Docker will not need to download it again for another image.
For example, if you want to install the default MariaDB image, you can type:
docker pull mariadb:10.4
This will install the 10.4 version. Versions 10.2, 10.3, 10.5 are also valid choices.
You will see a list of necessary layers. For each layer, Docker will say if it is already present, or its download progress.
To get a list of installed images:
docker images
An image is not a running process; it is just the software needed to be launched. To run it, we must create a container first. The command needed to create a container can usually be found in the image documentation. For example, to create a container for the official MariaDB image:
docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -p 3306:3306 -d docker.io/library/mariadb:10.3
mariadbtest
is the name we want to assign the container. If we don't specify a name, an id will be automatically generated.
10.2 and 10.5 are also valid target versions:
docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -p 3306:3306 -d docker.io/library/mariadb:10.2
docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -p 3306:3306 -d docker.io/library/mariadb:10.5
Optionally, after the image name, we can specify some options for mariadbd. For example:
docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -p 3306:3306 -d mariadb:10.3 --log-bin --binlog-format=MIXED
Docker will respond with the container's id. But, just to be sure that the container has been created and is running, we can get a list of running containers in this way:
docker ps
We should get an output similar to this one:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
819b786a8b48 mariadb "/docker-entrypoint. 4 minutes ago Up 4 minutes 3306/tcp mariadbtest
Docker allows us to restart a container with a single command:
docker restart mariadbtest
The container can also be stopped like this:
docker stop mariadbtest
The container will not be destroyed by this command. The data will still live inside the container, even if MariaDB is not running. To restart the container and see our data, we can issue:
docker start mariadbtest
With docker stop
, the container will be gracefully terminated: a SIGTERM
signal will be sent to the mariadbd
process, and Docker will wait for the process to shutdown before returning the control to the shell. However, it is also possible to set a timeout, after which the process will be immediately killed with a SIGKILL
. Or it is possible to immediately kill the process, with no timeout.
docker stop --time=30 mariadbtest
docker kill mariadbtest
In case we want to destroy a container, perhaps because the image does not suit our needs, we can stop it and then run:
docker rm mariadbtest
Note that the command above does not destroy the data volume that Docker has created for /var/lib/mysql. If you want to destroy the volume as well, use:
docker rm -v mariadbtest
When we start a container, we can use the --restart
option to set an automatic restart policy. This is useful in production.
Allowed values are:
no
: No automatic restart.
on-failure
: The container restarts if it exits with a non-zero exit code.
unless-stopped
: Always restart the container, unless it was explicitly stopped as shown above.
always
: Similar to unless-stopped
, but when Docker itself restarts, even containers that were explicitly stopped will restart.
It is possible to change the restart policy of existing, possibly running containers:
docker update --restart always mariadb
# or, to change the restart policy of all containers:
docker update --restart always $(docker ps -q)
A use case for changing the restart policy of existing containers is performing maintenance in production. For example, before upgrading the Docker version, we may want to change all containers restart policy to always
, so they will restart as soon as the new version is up and running. However, if some containers are stopped and not needed at the moment, we can change their restart policy to unless-stopped
.
A container can also be frozen with the pause
command. Docker will freeze the process using cgroups. MariaDB will not know that it is being frozen and, when we unpause
it, MariaDB will resume its work as expected.
Both pause
and unpause
accept one or more container names. So, if we are running a cluster, we can freeze and resume all nodes simultaneously:
docker pause node1 node2 node3
docker unpause node1 node2 node3
Pausing a container is very useful when we need to temporarily free our system's resources. If the container is not crucial at this moment (for example, it is performing some batch work), we can free it to allow other programs to run faster.
If the container doesn't start, or is not working properly, we can investigate with the following command:
docker logs mariadbtest
This command shows what the daemon sent to stdout since the last start attempt: the text that we typically see when we invoke mariadbd
from the command line.
On some systems, commands such as docker stop mariadbtest
and docker restart mariadbtest
may fail with a permissions error. This can be caused by AppArmor, and even sudo
won't allow you to execute the command. In this case, you will need to find out which profile is causing the problem and correct it, or disable it. Disabling AppArmor altogether is not recommended, especially in production.
To check which operations were prevented by AppArmor, see AppArmor Failures in AppArmor documentation.
To disable a profile, create a symlink with the profile name (in this example, mariadbd
) in /etc/apparmor.d/disable
, and then reload profiles:
ln -s /etc/apparmor.d/usr.sbin.mariadbd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mariadbd
For more information, see Policy Layout in AppArmor documentation.
After disabling the profile, you may need to run:
sudo service docker restart
docker system prune --all --volumes
Warning: the prune command removes all stopped containers and all unused images and volumes.
Restarting the system will then allow Docker to operate normally.
To access the container via Bash, we can run this command:
docker exec -it mariadbtest bash
Now we can use normal Linux commands like cd, ls, etc. We will have root privileges. We can even install our favorite file editor, for example:
apt-get update
apt-get install vim
In some images, no repository is configured by default, so we may need to add them.
Note that if we run mariadb-admin shutdown or the SHUTDOWN command to stop the container, the container will be deactivated, and we will automatically exit to our system.
If we try to connect to the MariaDB server on localhost
, the client will bypass networking and attempt to connect to the server using a socket file in the local filesystem. However, this doesn't work when MariaDB is running inside a container because the server's filesystem is isolated from the host. The client can't access the socket file which is inside the container, so it fails to connect.
Therefore connections to the MariaDB server must be made using TCP, even when the client is running on the same machine as the server container.
Find the IP address that has been assigned to the container:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mariadbtest
You can now connect to the MariaDB server using a TCP connection to that IP address.
After enabling network connections in MariaDB as described above, we will be able to connect to the server from outside the container.
On the host, run the client and set the server address ("-h") to the container's IP address that you found in the previous step:
mysql -h 172.17.0.2 -u root -p
This simple form of the connection should work in most situations. Depending on your configuration, it may also be necessary to specify the port for the server or to force TCP mode:
mysql -h 172.17.0.2 -P 3306 --protocol=TCP -u root -p
Multiple MariaDB servers running in separate Docker containers can connect to each other using TCP. This is useful for forming a Galera cluster or for replication.
When running a cluster or a replication setup via Docker, we will want the containers to use different ports. The fastest way to achieve this is mapping the containers ports to different port on our system. We can do this when creating the containers (docker run
command), by using the -p
option, several times if necessary. For example, for Galera nodes we will use a mapping similar to this one:
-p 4306:3306 -p 5567:5567 -p 5444:5444 -p 5568:5568
It is possible to download a Linux distribution image, and to install MariaDB on it. This is not much harder than installing MariaDB on a regular operating system (which is easy), but it is still the hardest option. Normally we will try existing images first. However, it is possible that no image is available for the exact version we want, or we want a custom installation, or perhaps we want to use a distribution for which no images are available. In these cases, we will install MariaDB in an operating system image.
First, we need the system image to run as a daemon. If we skip this step, MariaDB and all databases will be lost when the container stops.
To daemonize an image, we need to give it a command that never ends. In the following example, we will create a Debian daemon that constantly pings the 8.8.8.8 address:
docker run --name debian -p 3306:3306 -d debian /bin/sh -c "while true; do ping 8.8.8.8; done"
At this point, we can enter the shell and issue commands. First we will need to update the repositories, or no packages will be available. We can also update the packages, in case some of them are newer than the image. Then, we will need to install a text editor; we will need it to edit configuration files. For example:
# start an interactive Bash session in the container
docker exec -ti debian bash
apt-get -y update
apt-get -y upgrade
apt-get -y install vim
Now we are ready to install MariaDB in the way we prefer.
This page is licensed: CC BY-SA / Gnu FDL
Images can be found on MariaDB Docker Hub. To get the list of images run
$ docker images -a
It is good practice to create a container network and attach containers to it:
$ docker network create mynetwork
Start the container with server options
To start the container in the background with the MariaDB server image run:
$ docker run --rm --detach \
--env MARIADB_ROOT_PASSWORD=sosecret \
--network mynetwork \
--name mariadb-server \
mariadb:latest
Additionally, environment variables are also provided.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad374ec8a272 mariadb:latest "docker-entrypoint.s…" 3 seconds ago Up 1 second 3306/tcp mariadb-server
Note: specify the flag -a in case you want to see all containers
To start the mariadb client inside the created container and run specific commands, run the following:
$ docker exec -it mariadb-server mariadb -psosecret -e "SHOW PLUGINS"
$ docker logs mariadb-server
In the logs you can find status information about the server, plugins, generated passwords, errors and so on.
$ docker restart mariadb-server
$ docker exec -it mariadb-server bash
$ docker run --detach --env MARIADB_USER=anel \
--env MARIADB_PASSWORD=anel \
--env MARIADB_DATABASE=my_db \
--env MARIADB_RANDOM_ROOT_PASSWORD=1 \
--volume $PWD/my_container_config:/etc/mysql/conf.d:z \
--network mynetwork \
--name mariadb-server1 \
mariadb:latest
One can specify custom configuration files through the /etc/mysql/conf.d volume during container startup.
$ docker run --detach --env MARIADB_USER=anel \
--env MARIADB_PASSWORD=anel \
--env MARIADB_DATABASE=my_db \
--env MARIADB_RANDOM_ROOT_PASSWORD=1 \
--volume $PWD/my_init_db:/docker-entrypoint-initdb.d \
--network mynetwork \
--name mariadb-server1 \
mariadb:latest
The user created with the environment variables has full grants only on the MARIADB_DATABASE database. In order to override those grants, one can specify grants to a user, or execute any SQL statements, from a host file mounted into docker-entrypoint-initdb.d. In the my_init_db directory we can find the file, created like this:
$ echo "GRANT ALL PRIVILEGES ON *.* TO anel;" > my_init_db/my_grants.sql
This page is licensed: CC BY-SA / Gnu FDL
When you start the image, you can adjust the initialization of the MariaDB Server instance by passing one or more environment variables on the docker run command line. Do note that all of the variables below, except MARIADB_AUTO_UPGRADE
, will have no effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
From tag 10.2.38, 10.3.29, 10.4.19, 10.5.10 onwards, and all 10.6 and later tags, the MARIADB_* equivalent variables are provided. MARIADB_* variants will always be used in preference to MYSQL_* variants.
One of MARIADB_ROOT_PASSWORD_HASH, MARIADB_ROOT_PASSWORD, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD, or MARIADB_RANDOM_ROOT_PASSWORD (or equivalents, including *_FILE), is required. The other environment variables are optional.
This specifies the password that will be set for the MariaDB root superuser account.
Set to a non-empty value, like 1
, to allow the container to be started with a blank password for the root user. NOTE: Setting this variable to yes is not recommended unless you really know what you are doing, since this will leave your MariaDB instance completely unprotected, allowing anyone to gain complete superuser access.
Set to a non-empty value, like yes, to generate a random initial password for the root user. The generated root password will be printed to stdout (GENERATED ROOT PASSWORD: .....).
This is the hostname part of the root user created. By default this is %, however it can be set to any default MariaDB allowed hostname component. Setting this to localhost will prevent any root user being accessible except via the unix socket.
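For example, to make the root user reachable only via the unix socket (the password is a placeholder):

```shell
# Root will only be able to connect through the unix socket,
# not over TCP from other hosts or containers.
docker run -e MARIADB_ROOT_PASSWORD=examplepass \
  -e MARIADB_ROOT_HOST=localhost \
  -d mariadb:latest
```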
This variable allows you to specify the name of a database to be created on image startup.
Both user and password variables, along with a database, are required for a user to be created. This user will be granted all access (corresponding to GRANT ALL) to the MARIADB_DATABASE database.
Do not use this mechanism to create the root superuser, that user gets created by default with the password specified by the MARIADB_ROOT_PASSWORD / MYSQL_ROOT_PASSWORD variable.
Set MARIADB_MYSQL_LOCALHOST_USER to a non-empty value to create the mysql@localhost database user. This user is especially useful for a variety of health checks and backup scripts.
The mysql@localhost user gets USAGE privileges by default. If more access is required, additional global privileges can be provided as a comma-separated list. If you are sharing a volume containing MariaDB's unix socket (/var/run/mysqld by default), privileges beyond USAGE can result in confidentiality, integrity and availability risks, so use a minimal set. It is also possible to use this user for mariadb-backup. The healthcheck.sh script also documents the required privileges for each health check test.
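A sketch of this, using the image's MARIADB_MYSQL_LOCALHOST_GRANTS variable for the extra privileges; the grant list shown is only an example for a backup-style user:

```shell
# Create mysql@localhost with a minimal grant set (example list only;
# trim it to what your backup or monitoring scripts actually need).
docker run -e MARIADB_ROOT_PASSWORD=examplepass \
  -e MARIADB_MYSQL_LOCALHOST_USER=1 \
  -e MARIADB_MYSQL_LOCALHOST_GRANTS="RELOAD, PROCESS, LOCK TABLES" \
  -d mariadb:latest
```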
Set MARIADB_HEALTHCHECK_GRANTS to the grants required to be given to the healthcheck@localhost
, healthcheck@127.0.0.1
, healthcheck@::1
, users. When not specified the default grant is USAGE.
The main value used here will be REPLICA MONITOR for the healthcheck --replication test.
By default, the entrypoint script automatically loads the timezone data needed for the CONVERT_TZ() function. If it is not needed, any non-empty value disables timezone loading.
Set MARIADB_AUTO_UPGRADE to a non-empty value to have the entrypoint check whether mariadb-upgrade needs to run, and if so, run the upgrade before starting the MariaDB server.
Before the upgrade, a backup of the system database is created in the top of the datadir with the name system_mysql_backup_*.sql.zst. This backup process can be disabled by setting MARIADB_DISABLE_UPGRADE_BACKUP to a non-empty value.
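A sketch (the volume name and password are placeholders):

```shell
# Enable the automatic upgrade check on container start; if an upgrade
# is needed, system tables are backed up and mariadb-upgrade is run.
docker run -e MARIADB_ROOT_PASSWORD=examplepass \
  -e MARIADB_AUTO_UPGRADE=1 \
  -v datavolume:/var/lib/mysql \
  -d mariadb:latest
# To skip the pre-upgrade backup of the system tables, additionally pass:
#   -e MARIADB_DISABLE_UPGRADE_BACKUP=1
```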
If MARIADB_AUTO_UPGRADE
is set, and the .my-healthcheck.cnf
file is missing, the healthcheck
users are recreated if they don't exist, MARIADB_HEALTHCHECK_GRANTS
grants are given, the passwords of the healthcheck
users are reset to a random value and the .my-healthcheck.cnf
file is recreated with the new password populated.
When specified, the container will connect to this host and replicate from it.
When MARIADB_MASTER_HOST is specified, MARIADB_REPLICATION_USER and MARIADB_REPLICATION_PASSWORD will be used to connect to the master.
When MARIADB_MASTER_HOST is not specified (i.e. on the master), the MARIADB_REPLICATION_USER will be created with the REPLICATION REPLICA grants required for a client to start replication.
This page is licensed: CC BY-SA / Gnu FDL
Docker Compose is a tool that allows one to declare which Docker containers should run, and which relationships should exist between them. It follows the infrastructure as code approach, just like most automation software and Docker itself.
For information about installing Docker Compose, see Install Docker Compose in Docker documentation.
The docker-compose.yml File
When using Docker Compose, the Docker infrastructure must be described in a YAML file called docker-compose.yml.
Let's see an example:
version: "3"

services:
  web:
    image: "apache:${PHP_VERSION}"
    restart: 'always'
    depends_on:
      - mariadb
    ports:
      - '8080:80'
    links:
      - mariadb
  mariadb:
    image: "mariadb:${MARIADB_VERSION}"
    restart: 'always'
    volumes:
      - "/var/lib/mysql/data:${MARIADB_DATA_DIR}"
      - "/var/lib/mysql/logs:${MARIADB_LOG_DIR}"
      - /var/docker/mariadb/conf:/etc/mysql
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_DATABASE: "${MYSQL_DATABASE}"
      MYSQL_USER: "${MYSQL_USER}"
      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
In the first line we declare that we are using version 3 of the Docker compose language.
Then we have the list of services, namely the web
and the mariadb
services.
Let's see the properties of the services:
ports
maps the host system's port 8080 to the container's port 80. This is very useful for a development environment, but not in production, because it allows us to connect our browser to the containerized web server. Normally there is no need to connect to MariaDB from the host system.
links
declares that this container must be able to connect mariadb
. The hostname is the container name.
depends_on
declares that mariadb
needs to start before web
. This is because we cannot do anything with our application until MariaDB is ready to accept connections.
restart: always
declares that the containers must restart if they crash.
volumes
creates volumes for the container if it is set in a service definition, or a volume that can be used by any container if it is set globally, at the same level as services
. Volumes are directories in the host system that can be accessed by any number of containers. This allows destroying a container without losing data.
environment
sets environment variables inside the container. This is important because in setting these variables we set the MariaDB root credentials for the container.
It is good practice to create volumes for:
The data directory, so we don't lose data when a container is created or replaced, perhaps to upgrade MariaDB.
The directory where we put all the logs, if it is not the datadir.
The directory containing all configuration files (for development environments), so we can edit those files with the editor installed in the host system. Normally no editor is installed in containers. In production we don't need to do this, because we can copy files from a repository located in the host system to the containers.
Note that Docker Compose variables are just placeholders for values. Compose does not support assignment, conditionals or loops.
In the above example you can see several variables, like ${MARIADB_VERSION}
. Before executing the file, Docker Compose will replace this syntax with the MARIADB_VERSION
variable.
Variables allow making Docker Compose files more re-usable: in this case, we can use any MariaDB image version without modifying the Docker Compose file.
The most common way to pass variables is to write them into a file. This has the benefit of allowing us to version the variable file along with the Docker Compose file. It uses the same syntax you would use in BASH:
PHP_VERSION=8.0
MARIADB_VERSION=10.5
...
For bigger setups, it could make sense to use different environment files for different services. To do so, we need to specify the file to use in the Compose file:
services:
  web:
    env_file:
      - web-variables.env
...
Docker Compose is operated using docker-compose
. Here we'll see the most common commands. For more commands and for more information about the commands mentioned here, see the documentation.
Docker Compose assumes that the Compose file is located in the current directory and is called docker-compose.yml
. To use a different file, the -f <filename>
parameter must be specified.
To pull the necessary images:
docker-compose pull
Containers described in the Compose file can be created in several ways.
To create them only if they do not exist:
docker-compose up --no-recreate
To create them if they do not exist, and recreate them if their image or configuration has changed:
docker-compose up
To recreate containers in all cases:
docker-compose up --force-recreate
Normally docker-compose up starts the containers. To create them without starting them, add the --no-start option.
To restart containers without recreating them:
docker-compose restart
To kill a container by sending it a SIGKILL:
docker-compose kill <service>
To instantly remove a running container:
docker-compose rm -f <service>
To tear down all containers created by the current Compose file:
docker-compose down
Further information about the concepts explained in this page can be found in the Docker documentation:
Overview of Docker Compose in the Docker documentation.
Compose file in the Docker documentation.
Docker Compose on GitHub.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
The healthcheck.sh script is part of the Docker Official Image of MariaDB Server, and can be found in that image's repository.
The script processes a number of arguments and tests together, in strict order. Arguments pertaining to a test must occur before the test name. If a test fails, no further processing is performed. Both arguments and tests begin with a double hyphen.
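This ordering rule can be sketched as a tiny argument loop. The option and test names below are illustrative, not the real healthcheck.sh implementation: options are remembered as they are seen, and each test runs with whatever option values precede it on the command line:

```shell
# Toy sketch of strict left-to-right processing: an option only affects
# the tests that follow it, and a failing test would stop processing.
healthcheck_sketch() {
  timeout=5 results=""          # an option with its default value
  for arg in "$@"; do
    case "$arg" in
      --timeout=*)              # option: remembered for later tests
        timeout="${arg#--timeout=}" ;;
      --*)                      # test: runs immediately with current options
        echo "running $arg (timeout=$timeout)"
        results="$results $arg" ;;
    esac
  done
  echo "passed:$results"
}

healthcheck_sketch --connect --timeout=10 --innodb_initialized
```

Here --connect runs with the default timeout of 5, while --innodb_initialized sees the --timeout=10 that precedes it, mirroring the "arguments before the test name" rule.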
By default (since 2023-06-27), official images create healthcheck@localhost, healthcheck@127.0.0.1 and healthcheck@::1 users with a random password and USAGE privileges. MARIADB_HEALTHCHECK_GRANTS can be used for --replication, where additional grants are required. This information is stored in .my-healthcheck.cnf in the datadir of the container, and the file is passed as the --defaults-extra-file to the healthcheck.sh script if it exists. The .my-healthcheck.cnf file also sets protocol=tcp for the mariadb client, so --connect is effectively part of every test.
Setting [MARIADB_AUTO_UPGRADE=1](mariadb-server-docker-official-image-environment-variables.md#mariadb_auto_upgrade) will regenerate the .my-healthcheck.cnf file if missing, and recreate the healthcheck users of the database with a new random password. The current port configuration of the MariaDB container is written into this file.
The MARIADB_MYSQL_LOCALHOST_USER=1 and MARIADB_MYSQL_LOCALHOST_GRANTS environment variables can also be used; with the creation of the healthcheck users, these remain backwards compatible.
An example of a compose file that uses healthcheck.sh to determine a healthy service as a dependency before starting a wordpress service:
version: "3"
services:
mariadb:
image: mariadb:lts
environment:
- MARIADB_DATABASE=testdb
- MARIADB_USER=testuser
- MARIADB_PASSWORD=password
- MARIADB_RANDOM_ROOT_PASSWORD=1
healthcheck:
test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
start_period: 10s
interval: 10s
timeout: 5s
retries: 3
wordpress:
image: wordpress
environment:
- WORDPRESS_DB_HOST=mariadb
- WORDPRESS_DB_NAME=testdb
- WORDPRESS_DB_USER=testuser
- WORDPRESS_DB_PASSWORD=password
depends_on:
mariadb:
condition: service_healthy
--connect - This test succeeds when an external user can connect to the TCP port of MariaDB Server. It strictly tests just the TCP connection, not whether any authentication works.
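The spirit of this connection test can be sketched with a bare TCP probe. The host and port below are illustrative, and this is not the script's actual code, just the same idea: open a TCP connection and report reachability without attempting authentication:

```shell
# Sketch of a pure TCP reachability probe, like the --connect test:
# no login is attempted, only the socket connection itself.
probe() {
  # bash's /dev/tcp pseudo-device; fails if nothing accepts the connection
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

probe 127.0.0.1 3306   # reports "open" only if something listens on 3306
```

Because no credentials are exchanged, a server that accepts TCP connections but rejects all logins would still pass a probe like this.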
--innodb_initialized - This test is true when InnoDB has completed initializing. This includes any rollback or crash recovery that may be occurring in the background as MariaDB is starting.
The connecting user must have USAGE privileges to perform this test.
--innodb_buffer_pool_loaded - This indicates that a previously saved buffer pool dump has been completely loaded into the InnoDB buffer pool, and as such the server has a hot cache ready for use. It checks innodb_buffer_pool_load_status for a "complete" indicator. This test doesn't check whether innodb_buffer_pool_load_at_startup was set at startup.
The connecting user must have USAGE privileges to perform this test.
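A hedged sketch of how such a status check can be decided in shell follows; the status string is a made-up sample rather than real server output, which the actual test would read from the server with the mariadb client:

```shell
# Illustrative: decide health from a status variable's text.
# The sample string below is an assumption, not real server output.
status='Buffer pool(s) load completed at 240101 12:00:00'

case "$status" in
  *completed*) echo healthy ;;    # load finished: hot cache available
  *)           echo unhealthy ;;  # still loading, or load not started
esac
```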
--galera_online - This indicates that the Galera node is online, according to the wsrep_local_state variable. This excludes states like "joining" and "donor", in which it cannot serve SQL queries.
The connecting user must have USAGE privileges to perform this test.
--replication - This tests a replica based on the --replication_* parameters. The replica test must pass all of its subtests to be true. The subtests are:
io - the IO thread is running
sql - the sql thread is running
seconds_behind_master - the replica is less than X seconds behind the master.
sql_remaining_delay - the delayed replica is less than X seconds behind the master's execution of the same SQL.
These are tested for all connections if --replication_all is set (the default), or for the connection specified by --replication_name.
The connecting user must have the REPLICATION CLIENT privilege if using a version older than MariaDB 10.5, or REPLICA MONITOR for MariaDB 10.5 or later.
--mariadbupgrade - This healthcheck indicates that the MariaDB instance has been upgraded to the current version.
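As an illustrative sketch, an upgrade check of this kind can boil down to comparing a version recorded in the datadir with the running server's version. The file name and version strings below are assumptions for the demo, not the script's exact logic:

```shell
# Illustrative upgrade check: compare the version recorded after the
# last upgrade with the current server version. Names are assumptions.
datadir=$(mktemp -d)
server_version="10.11.6-MariaDB"
# a marker like this is written after a successful upgrade
echo "$server_version" > "$datadir/mysql_upgrade_info"

recorded=$(cat "$datadir/mysql_upgrade_info")
if [ "$recorded" = "$server_version" ]; then
  echo "up-to-date"
else
  echo "upgrade needed"
fi
```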
--replication_all - Checks all replication sources (the default).
--replication_name=n - Sets the multi-source connection name tested. Unsets --replication_all.
--replication_io - The IO thread is running.
--replication_sql - The SQL thread is running.
--replication_seconds_behind_master=n - Less than or equal to this many seconds of delay.
--replication_sql_remaining_delay=n - Less than or equal to this many seconds of remaining delay.
--su=user - Change to this user. Can only be done once, as the root user is the default for healthchecks.
--su-mysql - Change to the mysql unix user. Like --su, this respawns the script and so resets all parameters; it should be the first argument. The MARIADB_MYSQL_LOCALHOST_USER=1 environment variable is designed around usage here.
--datadir - For the --mariadbupgrade test, sets where the upgrade file is located.
Any remaining arguments are passed to the mariadb client for all tests except --mariadbupgrade.
healthcheck.sh --su-mysql --connect --innodb_initialized
Switch to the mysql user, and check that a connection can be made and InnoDB is initialized.
healthcheck.sh --su-mysql --connect --replication_io --replication_sql --replication_seconds_behind_master=600 --replication_sql_remaining_delay=30 --replication_name=archive --replication --replication_seconds_behind_master=10 --replication_name=channel1 --replication
Switch to the mysql user and check that connections can be made. For the replication channel "archive", ensure the IO and SQL threads are running, the seconds behind master is less than 600, and the SQL remaining delay is less than 30 seconds. For "channel1", the seconds behind master is limited to a maximum of 10 seconds.
This page is licensed: CC BY-SA / Gnu FDL
General information and hints on deploying MariaDB Kubernetes (K8s) containers, an open source container orchestration system which automates deployments, horizontal scaling, configuration, and operations.
Operators basically instruct Kubernetes about how to manage a certain technology. Kubernetes comes with some default operators, but it is possible to create custom operators. Operators created by the community can be found on OperatorHub.io.
Kubernetes provides a declarative API. To support a specific (i.e. MariaDB) technology or implement a desired behavior (i.e. provisioning a replica), we extend Kubernetes API. This involves creating two main components:
A custom resource.
A custom controller.
A custom resource adds an API endpoint, so the resource can be managed via the API server. It includes functionality to get information about the resource, like a list of the existing servers.
A custom controller implements the checks that must be performed against the resource to check if its state should be corrected using the API. In the case of MariaDB, some reasonable checks would be verifying that it accepts connections, replication is running, and a server is (or is not) read only.
MariaDB Enterprise Operator provides a seamless way to run and operate containerized versions of MariaDB Enterprise Server and MaxScale on Kubernetes, allowing you to leverage Kubernetes orchestration and automation capabilities. This document outlines the features and advantages of using Kubernetes and the MariaDB Enterprise Operator to streamline the deployment and management of MariaDB and MaxScale instances.
Find the documentation here.
mariadb-operator is a Kubernetes operator that allows you to run and operate MariaDB in a cloud native way. It aims to declaratively manage MariaDB instances using Kubernetes CRDs instead of imperative commands.
It's available in both Artifact Hub and Operator Hub and supports the following features:
Easily provision and configure MariaDB servers in Kubernetes.
Multiple HA modes: Galera Cluster or MariaDB Replication.
Automated primary failover and cluster recovery.
Advanced HA with MaxScale: a sophisticated database proxy, router, and load balancer for MariaDB.
Flexible storage configuration. Volume expansion.
Take, restore, and schedule backups.
Multiple backup storage types: S3 compatible, PVCs, and Kubernetes volumes.
Policy-driven backup retention with compression options: bzip2 and gzip.
Target recovery time: restore the closest available backup to the specified time.
Bootstrap new instances from: backups, S3, PVCs...
Cluster-aware rolling update: roll out replica Pods one by one, wait for each of them to become ready, and then proceed with the primary Pod.
Multiple update strategies: ReplicasFirstPrimaryLast, OnDelete, and Never.
Automated data-plane updates.
my.cnf change detection: automatically trigger updates when my.cnf changes.
Suspend operator reconciliation for maintenance operations.
Issue, configure, and rotate TLS certificates and CAs.
Native integration with cert-manager: automatically create Certificate resources.
Prometheus metrics via mysqld-exporter and maxscale-exporter.
Native integration with prometheus-operator: automatically create ServiceMonitor resources.
Declaratively manage SQL resources: users, grants, and logical databases.
Configure connections for your applications.
Orchestrate and schedule SQL scripts.
Validation webhooks to provide CRD immutability.
Additional printer columns to report the current CRD status.
CRDs designed according to the Kubernetes API conventions.
Install it using helm, OLM, or static manifests.
Multiple deployment modes: cluster-wide and single namespace.
Multi-arch distroless image.
GitOps friendly.
Please, refer to the documentation, the API reference and the example suite for further detail.
Content initially contributed by Vettabase Ltd. Updated 11/6/24 by MariaDB.
This page is licensed: CC BY-SA / Gnu FDL
Kubernetes, or K8s, is software to orchestrate containers. It is released under the terms of an open source license, Apache License 2.0.
Kubernetes was originally developed by Google. Currently it is maintained by the Cloud Native Computing Foundation (CNCF), with the status of Graduated Project.
For information about how to setup a learning environment or a production environment, see Getting started in Kubernetes documentation.
Kubernetes runs in a cluster. A cluster runs a workload: a set of servers that are meant to work together (web servers, database servers, etc).
A Kubernetes cluster consists of the following components:
Nodes run containers with the servers needed by our applications.
Controllers constantly check the cluster nodes' current state, and compare it with the desired state.
A Control Plane is a set of different components that store the cluster desired state and take decisions about the nodes. The Control Plane provides an API that is used by the controllers.
For more information on Kubernetes architecture, see Concepts and Kubernetes Components in Kubernetes documentation.
A node is a system that is responsible to run one or more pods. A pod is a set of containers that run a Kubernetes workload or part of it. All containers that run in the same pod are also located on the same node. Usually identical pods run on different nodes for fault tolerance.
For more details, see Nodes in the Kubernetes documentation.
Every node must necessarily have the following components:
kubelet
kube-proxy
A container runtime
kubelet has a set of PodSpecs which describe the desired state of pods. It checks that the current state of the pods matches the desired state. It especially takes care that containers don't crash.
In a typical Kubernetes cluster, several containers located in different pods need to connect to other containers, located in the same pods (for performance and fault tolerance reasons). Therefore, when we develop and deploy an application, we can't know in advance the IPs of the containers to which it will have to connect. For example, an application server may need to connect to MariaDB, but the MariaDB IP will be different for every pod.
The main purpose of kube-proxy is to implement the concept of Kubernetes services. When an application needs to connect to MariaDB, it will connect to the MariaDB service. kube-proxy will receive the request and will redirect it to a running MariaDB container in the same pod.
Kubernetes manages the containers in a pod via a container runtime, or container manager, that supports the Kubernetes Container Runtime Interface (CRI). Container runtimes that meet this requirement are listed on the Container runtimes page in the Kubernetes documentation. More information about the Container Runtime Interface can be found on GitHub.
Originally, Kubernetes used Docker as a container runtime. This was later deprecated, but Docker images can still be used using any container runtime.
Controllers constantly check if there are differences between the pod's current state and their desired state. When differences are found, controllers try to fix them. Each node type controls one or more resource types. Several types of controllers are needed to run a cluster.
Most of the actions taken by the controllers use the API server in the Control Plane. However, this is not necessarily true for custom controllers, and some actions cannot be performed via the Control Plane. For example, if some nodes crash, adding new nodes involves taking actions outside of the Kubernetes cluster, and controllers will have to do this themselves.
It is possible to write custom controllers to perform checks that require knowledge about a specific technology. For example, a MariaDB custom controller may want to check if replication is working by issuing SHOW REPLICA STATUS commands. This logic is specific to the way MariaDB works, and can only be implemented in a custom controller. Custom controllers are usually part of operators.
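The check-and-correct loop at the heart of any controller can be sketched in a few lines of shell. The replica counts here are plain variables standing in for real cluster state; an actual controller would read and change state through the Kubernetes API instead:

```shell
# Minimal sketch of the controller pattern: compare the current state
# with the desired state and correct the drift until they match.
desired_replicas=3
current_replicas=1

while [ "$current_replicas" -ne "$desired_replicas" ]; do
  if [ "$current_replicas" -lt "$desired_replicas" ]; then
    current_replicas=$((current_replicas + 1))   # create a missing replica
    echo "scaled up to $current_replicas"
  else
    current_replicas=$((current_replicas - 1))   # remove an extra replica
    echo "scaled down to $current_replicas"
  fi
done
echo "in sync: $current_replicas replicas"
```

A MariaDB-specific controller applies the same loop, but its "state" includes things like replication health, which is why the checks must be implemented with knowledge of MariaDB itself.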
For more information, see Controllers in the Kubernetes documentation.
The control plane consists of the following components.
For more information about the control plane, see Control Plane Components in Kubernetes documentation.
An API Server exposes API functions both internally and externally. It is essential to coordinate Kubernetes components so that they react to node's change of state, and it allows the user to send commands.
The default implementation of the API Server is kube-apiserver. It is able to scale horizontally and to balance the load between its instances.
Most controllers run in this component.
etcd contains all data used by a Kubernetes cluster. It is a good idea to take regular backups of etcd data.
When a new pod is created, kube-scheduler decides which node should host it. The decision is made based on several criteria, like the resource requirements for the pod.
cloud-controller-manager implements the logic and API of a cloud provider. It receives requests from the API Server and performs specific actions, like creating an instance in AWS. It also runs controllers that are specific to a cloud vendor.
Kubernetes comes with a set of tools that allow us to communicate with the API server and test a cluster.
kubectl allows communication with the API server and run commands on a Kubernetes cluster.
kubeadm allows creating a Kubernetes cluster that is ready to receive commands from kubectl.
kind and minikube are tools meant to create and manage test clusters on a personal machine. They work on Linux, macOS and Windows. kind creates a cluster that consists of Docker containers, and therefore requires Docker to be installed. minikube runs a single-node cluster on the local machine.
Kubernetes on Wikipedia.
Kubernetes organization on GitHub.
(video) MariaDB database clusters on Kubernetes, by Pengfei Ma, at MariaDB Server Fest 2020.
Series of posts by Anel Husakovic on the MariaDB Foundation blog:
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Vagrant is an open source tool to quickly set up machines that can be used for development and testing. They can be local virtual machines, Docker containers, AWS EC2 instances, and so on.
In this page we discuss how to create a Vagrantfile, which you can use to create new boxes or machines. This content is specifically written to address the needs of MariaDB users.
A Vagrantfile is a Ruby file that instructs Vagrant to create, depending on how it is executed, new Vagrant machines or boxes. You can see a box as a compiled Vagrantfile. It describes a type of Vagrant machines. From a box, we can create new Vagrant machines. However, while a box is easy to distribute to a team or to a wider public, a Vagrantfile can also directly create one or more Vagrant machines, without generating any box.
Here is a simple Vagrantfile example:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/bionic64"
config.vm.provider "virtualbox"
config.vm.provision :shell, path: "bootstrap.sh"
end
Vagrant.configure("2") returns the Vagrant configuration object for the new box. In the block, we'll use the config alias to refer to this object. We are going to use version 2 of the Vagrant API.
vm.box is the base box that we are going to use. It is Ubuntu Bionic Beaver (18.04 LTS), 64-bit version, provided by HashiCorp. The schema for box names is simple: the maintainer's account in Vagrant Cloud, followed by the box name.
We use vm.provision to specify the name of the file that is going to be executed at machine creation, to provision the machine. bootstrap.sh is the conventional name used in most cases.
To create new Vagrant machines from the Vagrantfile, move to the directory that contains the Vagrant project and run:
vagrant up
To compile the Vagrantfile into a box:
vagrant package
These operations can take time. To preventively check if the Vagrantfile contains syntax errors or certain types of bugs:
vagrant validate
A provider allows Vagrant to create a Vagrant machine using a certain technology. Different providers may enable a virtual machine manager (VirtualBox, VMWare, Hyper-V...), a container manager (Docker), or remote cloud hosts (AWS, Google Compute Engine...).
Some providers are developed by third parties. app.vagrantup.com supports searching for boxes that support the most important third-party providers. To find out how to develop a new provider, see Plugin Development: Providers.
Provider options can be specified. Options affect the type of Vagrant machine that is created, like the number of virtual CPUs. Different providers support different options.
It is possible to specify multiple providers. In this case, Vagrant will try to use them in the order they appear in the Vagrantfile. It will try the first provider; if it is not available it will try the second; and so on.
Here is an example of providers usage:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/bionic64"
config.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--memory", 1024 * 4]
end
config.vm.provider "vmware_fusion"
end
In this example, we try to use VirtualBox to create a virtual machine. We specify that this machine must have 4G of RAM (1024M * 4). If VirtualBox is not available, Vagrant will try to use VMWare.
This mechanism is useful for at least a couple of reasons:
Different users may use different systems, and maybe they don't have the same virtualization technologies installed.
We can gradually move from one provider to another. For a period of time, some users will have the new virtualization technology installed, and they will use it; other users will only have the old technology installed, but they will still be able to create machines with Vagrant.
We can use different methods for provisioning. The simplest provisioner is shell, which allows one to run a Bash file to provision a machine. Other provisioners allow setting up the machines using automation software, including Ansible, Puppet, Chef and Salt.
To find out how to develop a new provisioner, see Plugin Development: Provisioners.
In the example above, the shell provisioner runs bootstrap.sh inside the Vagrant machine to provision it. A simple bootstrap.sh may look like the following:
#!/bin/bash
apt-get update
# install the required packages, for example:
apt-get install -y mariadb-server
To find out the steps to install MariaDB on your system of choice, see the Getting, Installing, and Upgrading MariaDB section.
You may also want to restore a database backup in the new Vagrant machine. In this way, you can have the database needed by the application you are developing. To find out how to do it, see Backup and Restore Overview. The most flexible type of backup (meaning that it works between different MariaDB versions, and in some cases even between MariaDB and different DBMSs) is a dump.
On Linux machines, the shell provisioner uses the default shell. On Windows machines, it uses PowerShell.
If we use the shell provisioner, we need a way to upload files to the new machine when it is created. We could use the file provisioner, but it works by connecting to the machine via SSH, and the default user doesn't have permissions on any directory except the synced folders. We could change the target directory's owner, or add the default user to a group with the necessary privileges, but these are not considered good practices.
Instead, we can just put the file we need to upload somewhere in the synced folder, and then copy it with a shell command:
cp ./files/my.cnf /etc/mysql/conf.d/
Here is an example of how to provision a Vagrant machine or box by running Ansible:
Vagrant.configure("2") do |config|
...
config.vm.provision "ansible" do |ansible|
ansible.playbook = "vagrant.yml"
end
end
With the Ansible provisioner, Ansible runs on the host system and applies a playbook in the guest system. In this example, it runs a playbook called vagrant.yml. The Ansible Local provisioner instead runs the playbook inside the Vagrant machine.
For more information, see Using Vagrant and Ansible in the Ansible documentation. For an introduction to Ansible for MariaDB users, see Ansible and MariaDB.
To provision a Vagrant machine or box by running Puppet:
Vagrant.configure("2") do |config|
...
config.vm.provision "puppet" do |puppet|
puppet.manifests_path = "manifests"
puppet.manifest_file = "default.pp"
end
end
In this example, Puppet Apply runs in the host system and no Puppet Server is needed. Puppet expects to find a manifests directory in the project directory, and expects it to contain default.pp, which will be used as an entry point. Note that puppet.manifests_path and puppet.manifest_file are set to their default values here.
Puppet needs to be installed in the guest machine.
To use a Puppet server, the puppet_server provisioner can be used:
Vagrant.configure("2") do |config|
...
config.vm.provision "puppet_server" do |puppet|
puppet.puppet_server = "puppet.example.com"
end
end
See the Puppet Apply provisioner and the Puppet Agent Provisioner.
For an introduction to Puppet for MariaDB users, see Puppet and MariaDB.
To restore a backup into MariaDB, in most cases we need to be able to copy it from the host system to the box. We may also want to occasionally copy MariaDB logs from the box to the host system, to be able to investigate problems.
The project directory (the one that contains the Vagrantfile) is shared by default with the virtual machine and mapped to the /vagrant directory (the synced folder). It is good practice to put there all files that should be shared with the box when it is started. Those files should normally be versioned.
The synced folder can be changed. In the above example, we could simply add one line:
config.vm.synced_folder "/host/path", "/guest/path"
The synced folder can also be disabled:
config.vm.synced_folder '.', '/vagrant', disabled: true
Note that multiple Vagrant machines may have synced folders that point to the same directory on the host system. This can be useful in some cases, if you prefer to test some functionality quickly rather than replicating the production environment as faithfully as possible. For example, to test whether you're able to take a backup from one machine and restore it to another, you can store the backup in a common directory.
It is often desirable for a machine to be able to communicate with "the outside". This can be done in several ways:
Private networks;
Public networks;
Exposing ports to the host.
Remember that Vagrant doesn't create machines itself; instead, it asks a provider to create and manage them. Some providers support all of these communication methods, while others may only support some of them, or even none at all. When you create a Vagrantfile that uses one of these networking features, it is implicit that this can only happen if the provider you are using supports them. Check your provider's documentation to find out which features it supports.
The default provider, VirtualBox, supports all of these communication methods, including multiple networks.
A private network is a network that can only be accessed by machines running on the same host. Usually this also means that the machines must run on the same provider (for example, they all must be VirtualBox virtual machines).
Some providers support multiple private networks. This means that every network has a different name and can be accessed by different machines.
The following line shows how to create or join a private network called "example", where this machine's IP is assigned by the provider via DHCP:
config.vm.network 'private_network', name: 'example', type: 'dhcp'
While this is very convenient to avoid IP conflicts, sometimes you prefer to assign some IP's manually, in this way:
config.vm.network 'private_network', name: 'example', ip: '111.222.111.222'
Public networks, in contrast, are networks that can also be accessed by machines that don't run on the same host or with the same provider.
To let a machine join a public network:
# use provider DHCP:
config.vm.network "public_network", use_dhcp_assigned_default_route: true
# assign ip manually:
config.vm.network "public_network", ip: "111.222.111.222"
To improve security, you may want to configure a gateway:
config.vm.provision "shell", run: "always", inline: "route add default gw 111.222.111.222"
Vagrant allows us to map a TCP or UDP port in a guest system to a TCP or UDP port in the host system. For example, you can map a virtual machine port 3306 to the host port 12345. Then you can connect MariaDB in this way:
mariadb -hlocalhost -P12345 -u<user> -p<password>
You are not required to map a port to a port with a different number. In the above example, if the port 3306 in your host is not in use, you are free to map the guest port 3306 to the host port 3306.
There are a couple of caveats:
You can't map a single host port to multiple guest ports. If you want to expose the port 3306 from multiple Vagrant machines, you'll have to map them to different host ports. When running many machines this can be hard to maintain.
Ports with numbers below 1024 are privileged ports. Mapping privileged ports requires root privileges.
To expose a port:
config.vm.network 'forwarded_port', guest: 3306, host: 3306
Suppose you run MariaDB and an application server in two separate Vagrant machines. It's usually best to let them communicate via a private network, because this greatly increases your security. The application server will still need to expose ports to the host, so the application can be tested with a web browser.
Suppose you have multiple environments of the same type, like the one described above. They run different applications that don't communicate with each other. In this case, if your provider supports this, you will run multiple private networks. You will need to expose the applications servers ports, mapping them to different host ports.
You may even want to implement different private networks to create an environment that reflects production complexity. Maybe in production you have a cluster of three MariaDB servers, and the application servers communicate with them via a proxy layer (ProxySQL, HAProxy, or MaxScale). So the applications can communicate with the proxies, but have no way to reach MariaDB directly. So there is a private network called "database" that can be accessed by the MariaDB servers and the proxy servers, and another private network called "application" that can be accessed by the proxy servers and the application servers. This requires that your provider supports multiple private networks.
Using public networks instead of private ones will allow VMs that run on different hosts to be part of your topology. This is generally considered an insecure practice, so you should ask yourself whether you really need to do this.
The vagrant-mariadb-examples repository is an example of a Vagrantfile that creates a box containing MariaDB and some useful tools for developers.
Further information can be found in Vagrant documentation.
See also Ruby documentation.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Vagrant is a tool to create and manage development machines (Vagrant boxes). They are usually virtual machines on the localhost system, but they could also be Docker containers or remote machines. Vagrant is open source software maintained by HashiCorp and released under the MIT license.
Vagrant benefits include simplicity, and a system to create test boxes that is mostly independent from the technology used.
For information about installing Vagrant, see Installation in Vagrant documentation.
In this page we discuss basic Vagrant concepts.
A Vagrant machine is compiled from a box. It can be a virtual machine, a container or a remote server from a cloud service.
A box is a package that can be used to create Vagrant machines. We can download boxes from app.vagrantup.com, or we can build a new box from a Vagrantfile. A box can be used as a base for another box. The base boxes are usually operating system boxes downloaded from app.vagrantup.com.
A provider is responsible for providing the virtualization technology that will run our machine.
A provisioner is responsible for installing and configuring the necessary software on a newly created Vagrant machine.
The above concepts are probably easier to understand with an example.
We can use an Ubuntu box as a base to build a Vagrant machine with MariaDB. So we write a Vagrantfile for this purpose. In the Vagrantfile we specify VirtualBox as a provider. And we use the Ansible provisioner to install and configure MariaDB. Once we finish this Vagrantfile, we can run a Vagrant command to start a Vagrant machine, which is actually a VirtualBox VM running MariaDB on Ubuntu.
The following diagram should make the example clear:
A Vagrantfile is a file that describes how to create one or more Vagrant machines. Vagrantfiles use the Ruby language, as well as objects provided by Vagrant itself.
A Vagrantfile is often based on a box, which is usually an operating system in which we are going to install our software. For example, one can create a MariaDB Vagrantfile based on the ubuntu/trusty64 box. A Vagrantfile can describe a box with a single server, like MariaDB, but it can also contain a whole environment, like LAMP. For most practical use cases, having the whole environment in a single box is more convenient.
Boxes can be searched in Vagrant Cloud. Most of their Vagrantfiles are available on GitHub. Searches can be made, among other things, by keyword to find a specific technology, and by provider.
A provider adds support for creating a specific type of machines. Vagrant comes with several providers, for example:
VirtualBox allows one to create virtual machines with VirtualBox.
Hyper-V allows one to create virtual machines with Microsoft Hyper-V.
Docker allows one to create Docker containers. On non-Linux systems, Vagrant will create a VM to run Docker.
Alternative providers are maintained by third parties or sold by HashiCorp. They allow one to create different types of machines, for example using VMware.
Some examples of useful providers, recognized by the community:
If you need to create machines with different technologies, or deploy them to unsupported cloud platforms, you can develop a custom provider in Ruby. To find out how, see Plugin Development: Providers in the Vagrant documentation. The Vagrant AWS Provider was initially written as an example provider.
A provisioner is a technology used to deploy software to the newly created machines.
The simplest provisioner is shell, which runs a shell script inside the Vagrant machine. A powershell provisioner is also available.
Other provisioners use automation software to provision the machine. There are provisioners that allow one to use Ansible, Puppet, Chef or Salt. Where relevant, there are different provisioners allowing the use of these technologies in a distributed way (for example, using Puppet apply) or in a centralized way (for example, using a Puppet server).
It is interesting to note that there is both a Docker provider and a Docker provisioner. This means that a Vagrant machine can be a Docker container, thanks to the docker provider. Or it could be any virtualisation technology with Docker running in it, thanks to the docker provisioner. In the latter case, Docker pulls images and starts containers to run the software that should be running in the Vagrant machine.
If you need to use an unsupported provisioning method, you can develop a custom provisioner in Ruby. See Plugin Development: Provisioners in the Vagrant documentation.
It is possible to install a plugin with this command:
vagrant plugin install <plugin_name>
A Vagrantfile can require that a plugin is installed in this way:
require 'plugin_name'
A plugin can be a Vagrant plugin or a Ruby gem installable from rubygems.org. It is possible to install a plugin that only exists locally by specifying its path.
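A Vagrantfile can also check for a required plugin explicitly and fail early with a helpful message. The following fragment is a sketch; the plugin name vagrant-vbguest is only an example:

```ruby
# Hypothetical Vagrantfile fragment: fail early if a plugin is missing.
unless Vagrant.has_plugin?("vagrant-vbguest")
  raise "Missing plugin; run: vagrant plugin install vagrant-vbguest"
end
```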
HashiCorp published an article that describes its plans for Vagrant 3.0.
Vagrant will switch to a client-server architecture. Most of the logic will be stored in the server, while the development machines will run a thin client that communicates with the server. It will be possible to store the configuration in a central database.
Another notable change is that Vagrant is switching from Ruby to Go. For some time, it will still be possible to use Vagrantfiles and plugins written in Ruby. However, in the future Vagrantfiles and plugins should be written in one of the languages that support gRPC (not necessarily Go). Vagrantfiles can also be written in HCL, HashiCorp Configuration Language.
This is a list of the most common Vagrant commands. For a complete list, see Command-Line Interface in Vagrant documentation.
To list the available boxes:
vagrant box list
To start a machine from a box:
cd /box/directory
vagrant up
To connect to a machine:
vagrant ssh
To see the status and id of all machines:
vagrant global-status
To destroy a machine:
vagrant destroy <id>
Here are some valuable websites and pages for Vagrant users.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Databases typically contain information to which access should be restricted. For this reason, it's worth discussing some security concerns that Vagrant users should be aware of.
By default, Vagrant machines are only accessible from the localhost. SSH access uses randomly generated key pairs, and therefore it is secure.
The password for root and vagrant is "vagrant" by default. Consider changing it.
By default, the project folder in the host system is shared with the machine, which sees it as /vagrant. This means that whoever has access to the project folder also has read and write access to the synced folder. If this is a problem, make sure to properly restrict access to the synced folder.
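In the Vagrantfile, the synced folder can be disabled entirely, or mounted with restrictive options. This is a sketch; whether a given mount option is honored depends on the synced folder type and the guest OS:

```ruby
Vagrant.configure("2") do |config|
  # Option 1: disable the default synced folder entirely
  config.vm.synced_folder ".", "/vagrant", disabled: true

  # Option 2: keep it, but mount it read-only inside the machine
  # config.vm.synced_folder ".", "/vagrant", mount_options: ["ro"]
end
```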
If we need to exchange files between the host system and the Vagrant machine, it is not advisable to disable the synced folder. This is because the only alternative is to use the file provisioner, which works by copying files to the machine via ssh. The problem is that the default ssh user does not have permission to write to any directory by default, and changing this would be less secure than using a synced folder.
When a machine is provisioned, it should read the needed files from the synced folder or copy them to other places. Files in the synced folder should not be accessed by the Vagrant machine during its normal activities. For example, it is fine to load a dump from the synced folder during provisioning, and it is fine to copy configuration files from the synced folder to directories in /etc during provisioning. But it is a bad practice to let MariaDB use table files located in the synced folder.
Note that security bugs are not reported as normal bugs. Information about security bugs is not public. See Security at HashiCorp for details.
Content initially contributed by Vettabase Ltd.
This page is licensed: CC BY-SA / Gnu FDL
Install and manage MariaDB Server using RPM packages. This section provides detailed instructions for deploying and upgrading MariaDB on RPM-based Linux distributions.
The available RPM packages depend on the specific MariaDB release series.
The following RPMs are available in current versions of MariaDB:
galera-4
The WSREP provider for Galera 4.
MariaDB-backup
The mariadb-backup tool.
MariaDB-backup-debuginfo
Debuginfo for mariadb-backup
MariaDB-client
Client tools like mariadb CLI, mariadb-dump, and others.
MariaDB-client-debuginfo
Debuginfo for client tools like mariadb CLI, mariadb-dump, and others.
MariaDB-common
Character set files and /etc/my.cnf
MariaDB-common-debuginfo
Debuginfo for character set files and /etc/my.cnf
MariaDB-compat
Old shared client libraries; these may be needed by old MariaDB or MySQL clients.
MariaDB-connect-engine
The CONNECT storage engine.
MariaDB-connect-engine-debuginfo
Debuginfo for the CONNECT storage engine.
MariaDB-cracklib-password-check
The cracklib_password_check password validation plugin.
MariaDB-cracklib-password-check-debuginfo
Debuginfo for the cracklib_password_check password validation plugin.
MariaDB-devel
Development headers and static libraries.
MariaDB-devel-debuginfo
Debuginfo for development headers and static libraries.
MariaDB-gssapi-server
The gssapi authentication plugin.
MariaDB-gssapi-server-debuginfo
Debuginfo for the gssapi authentication plugin.
MariaDB-rocksdb-engine
The MyRocks storage engine.
MariaDB-rocksdb-engine-debuginfo
Debuginfo for the MyRocks storage engine.
MariaDB-server
The server and server tools, such as myisamchk and mariadb-hotcopy.
MariaDB-server-compat
Symbolic links from old MySQL tool names to MariaDB, like mysqladmin -> mariadb-admin or mysql -> mariadb. Good to have if you are using MySQL tool names in your scripts.
MariaDB-server-debuginfo
Debuginfo for the server and server tools, such as myisamchk and mariadb-hotcopy.
MariaDB-shared
Dynamic client libraries.
MariaDB-shared-debuginfo
Debuginfo for dynamic client libraries.
MariaDB-test
The mysql-client-test executable, and the mysql-test framework with the tests.
MariaDB-test-debuginfo
Debuginfo for the mysql-client-test executable, and the mysql-test framework with the tests.
MariaDB-tokudb-engine
The TokuDB storage engine.
MariaDB-tokudb-engine-debuginfo
Debuginfo for the TokuDB storage engine.
Preferably, you should install MariaDB RPM packages using the package manager of your Linux distribution, for example yum or zypper. But you can also use the lower-level rpm tool.
When the MariaDB-server RPM package is installed, it will create a user and group named mysql, if they do not already exist.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB RPM packages since MariaDB 5.1.55 are signed.
The key we use has an id of 1BB943DB, and the key fingerprint is:
1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB
To check the signature you first need to import the public part of the key like so:
gpg --keyserver hkp://pgp.mit.edu --recv-keys 1BB943DB
Next you need to let rpm know about the key, like so:
gpg --export --armour 1BB943DB > mariadb-signing-key.asc
sudo rpm --import mariadb-signing-key.asc
You can check to see if the key was imported with:
rpm -qa gpg-pubkey*
Once the key is imported, you can check the signature of the MariaDB RPM files by running something like the following in your download directory:
rpm --checksig $(find . -name '*.rpm')
The output of the above will look something like this (make sure gpg shows up on each OK line):
me@desktop:~$ rpm --checksig $(find . -name '*.rpm')
./kvm-rpm-centos5-amd64/rpms/MariaDB-test-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-server-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-client-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-shared-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-devel-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-debuginfo-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/srpms/MariaDB-5.1.55-98.el5.src.rpm: (sha1) dsa sha1 md5 gpg OK
This page is licensed: CC BY-SA / Gnu FDL
This article describes how to download the RPM files and install them using the rpm command.
It is highly recommended to Install MariaDB with yum where possible.
Navigate to the MariaDB downloads page, choose the desired database version, and then select the RPMs that match your Linux distribution and architecture.
Clicking those links takes you to a local mirror. Choose the rpms link and download the desired packages. The packages will be similar to the following:
MariaDB-client-5.2.5-99.el5.x86_64.rpm
MariaDB-debuginfo-5.2.5-99.el5.x86_64.rpm
MariaDB-devel-5.2.5-99.el5.x86_64.rpm
MariaDB-server-5.2.5-99.el5.x86_64.rpm
MariaDB-shared-5.2.5-99.el5.x86_64.rpm
MariaDB-test-5.2.5-99.el5.x86_64.rpm
For a standard server installation you will need to download at least the client, shared, and server RPM files. See About the MariaDB RPM Files for more information about what is included in each RPM package.
After downloading the MariaDB RPM files, you might want to check their signatures. See Checking MariaDB RPM Package Signatures for more information about checking signatures.
rpm --checksig $(find . -name '*.rpm')
Prior to installing MariaDB, be aware that it will conflict with an existing installation of MySQL. To check whether MySQL is already installed, issue the command:
rpm -qa 'mysql*'
If necessary, you can remove the MySQL packages found before installing MariaDB.
To install MariaDB, use the command:
rpm -ivh MariaDB-*
You should see output such as the following:
Preparing... ########################################### [100%]
1:MariaDB-shared ########################################### [ 14%]
2:MariaDB-client ########################################### [ 29%]
3:MariaDB-client ########################################### [ 43%]
4:MariaDB-debuginfo ########################################### [ 57%]
5:MariaDB-devel ########################################### [ 71%]
6:MariaDB-server ########################################### [ 86%]
PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mariadb-admin -u root password 'new-password'
/usr/bin/mariadb-admin -u root -h hostname password 'new-password'
Alternatively you can run:
/usr/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the MySQL manual for more instructions.
Please report any problems with the /usr/bin/mysqlbug script!
The latest information about MariaDB is available at http://www.askmonty.org/.
You can find additional information about the MySQL part at:
http://dev.mysql.com
Support MariaDB development by buying support/new features from
Monty Program Ab. You can contact us about this at sales@askmonty.org.
Alternatively consider joining our community based development effort:
http://askmonty.org/wiki/index.php/MariaDB#How_can_I_participate_in_the_development_of_MariaDB
Starting MySQL....[ OK ]
Giving mysqld 2 seconds to start
7:MariaDB-test ########################################### [100%]
Be sure to follow the instructions given in the preceding output and create a password for the root user either by using mariadb-admin or by running the /usr/bin/mysql_secure_installation script.
Installing the MariaDB RPM files installs the MySQL tools in the /usr/bin directory. You can confirm that MariaDB has been installed by using the mariadb client program. Issuing the command mariadb should give you the MariaDB prompt.
This page is licensed: CC BY-SA / Gnu FDL
On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM packages from MariaDB's repository using zypper.
This page walks you through the simple installation steps using zypper.
We currently have ZYpp repositories for the following Linux distributions:
SUSE Linux Enterprise Server (SLES) 12
SUSE Linux Enterprise Server (SLES) 15
openSUSE 15
openSUSE 42
If you want to install MariaDB with zypper, then you can configure zypper to install from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.
MariaDB Corporation provides a MariaDB Package Repository for several Linux distributions that use zypper to manage packages. This repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup. The MariaDB Package Repository setup script automatically configures your system to install packages from the MariaDB Package Repository.
To use the script, execute the following command:
curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
Note that this script also configures a repository for MariaDB MaxScale and a repository for MariaDB Tools, which currently only contains Percona XtraBackup and its dependencies.
See MariaDB Package Repository Setup and Usage for more information.
If you want to install MariaDB with zypper, then you can configure zypper to install from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.
The MariaDB Foundation provides a MariaDB repository for several Linux distributions that use zypper to manage packages. This repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup. The MariaDB Repository Configuration Tool can easily generate the appropriate commands to add the repository for your distribution.
For example, if you wanted to use the repository to install MariaDB 10.6 on SLES 15, then you could use the following commands to add the MariaDB zypper repository:
sudo zypper addrepo --gpgcheck --refresh https://yum.mariadb.org/10.6/sles/15/x86_64 mariadb
sudo zypper --gpg-auto-import-keys refresh
If you wish to pin the zypper repository to a specific minor release, or if you would like to downgrade to a specific minor release, then you can create a zypper repository with the URL hard-coded to that specific minor release.
The MariaDB Foundation archives repositories of old minor releases at the following URL:
So if you can't find the repository of a specific minor release at yum.mariadb.org, then it would be a good idea to check the archive.
For example, if you wanted to pin your repository to MariaDB 10.6.21 on SLES 15, then you could use the following commands to add the MariaDB zypper repository:
sudo zypper removerepo mariadb
sudo zypper addrepo --gpgcheck --refresh https://yum.mariadb.org/10.6.21/sles/15/x86_64 mariadb
MariaDB's zypper repository can be updated to a new major release. How this is done depends on how you originally configured the repository.
If you configured zypper to install from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script, then you can update the major release that the repository uses by running the script again.
If you configured zypper to install from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool, then you can update the major release that the repository uses by removing the repository for the old version and adding the repository for the new version.
First, you can remove the repository for the old version by executing the following command:
sudo zypper removerepo mariadb
After that, you can add the repository for the new version. For example, if you wanted to use the repository to install MariaDB 10.6 on SLES 15, then you could use the following commands to add the MariaDB zypper repository:
sudo zypper addrepo --gpgcheck --refresh https://yum.mariadb.org/10.6/sles/15/x86_64 mariadb
sudo zypper --gpg-auto-import-keys refresh
After that, the repository should refer to MariaDB 10.6.
Before MariaDB can be installed, you also have to import the GPG public key that is used to verify the digital signatures of the packages in our repositories. This allows the zypper and rpm utilities to verify the integrity of the packages that they install.
The id of our GPG public key is 0xcbcb082a1bb943db. The short form of the id is 0x1BB943DB. The full key fingerprint is:
1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB
The rpm utility can be used to import this key. For example:
sudo rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
Once the GPG public key is imported, you are ready to install packages from the repository.
After the zypper repository is configured, you can install MariaDB by executing the zypper command. The specific command to use depends on which packages you want to install.
To install the most common packages, execute the following command:
sudo zypper install MariaDB-server galera-4 MariaDB-client MariaDB-shared MariaDB-backup MariaDB-common
To install MariaDB Server, execute the following command:
sudo zypper install MariaDB-server
The process to install MariaDB Galera Cluster with the MariaDB zypper repository is practically the same as installing standard MariaDB Server.
Galera Cluster support has been included in the standard MariaDB Server packages, so you will need to install the MariaDB-server package, as you normally would.
You also need to install the galera-4 package to obtain the Galera 4 wsrep provider library.
To install MariaDB Galera Cluster, you could execute the following command:
sudo zypper install MariaDB-server MariaDB-client galera-4
If you haven't yet imported the MariaDB GPG public key, then zypper will prompt you to import it after it downloads the packages, but before it prompts you to install them.
See MariaDB Galera Cluster for more information on MariaDB Galera Cluster.
MariaDB Connector/C has been included as the client library. However, the package name for the client library has not been changed.
To install the clients and client libraries, execute the following command:
sudo zypper install MariaDB-client MariaDB-shared
To install mariadb-backup, execute the following command:
sudo zypper install MariaDB-backup
Some plugins may also need to be installed.
For example, to install the cracklib_password_check password validation plugin, execute the following command:
sudo zypper install MariaDB-cracklib-password-check
The MariaDB zypper repository also contains debuginfo packages. These packages may be needed when debugging a problem.
To install debuginfo for the most common packages, execute the following command:
sudo zypper install MariaDB-server-debuginfo MariaDB-client-debuginfo MariaDB-shared-debuginfo MariaDB-backup-debuginfo MariaDB-common-debuginfo
To install debuginfo for MariaDB Server, execute the following command:
sudo zypper install MariaDB-server-debuginfo
MariaDB Connector/C has been included as the client library. However, the package name for the client library has not been changed.
To install debuginfo for the clients and client libraries, execute the following command:
sudo zypper install MariaDB-client-debuginfo MariaDB-shared-debuginfo
To install debuginfo for mariadb-backup, execute the following command:
sudo zypper install MariaDB-backup-debuginfo
For some plugins, debuginfo may also need to be installed.
For example, to install debuginfo for the cracklib_password_check password validation plugin, execute the following command:
sudo zypper install MariaDB-cracklib-password-check-debuginfo
The MariaDB zypper repository contains the last few versions of MariaDB. To show what versions are available, use the following command:
zypper search --details MariaDB-server
In the output you will see the available versions.
To install an older version of a package instead of the latest, specify the package name, a dash, and then the version number. You only need to specify enough of the version number for it to be unique among the other available versions.
However, when installing an older version of a package, if zypper has to install dependencies, then it will automatically choose to install the latest versions of those packages. To ensure that all MariaDB packages are on the same version in this scenario, it is necessary to specify them all.
The packages that the MariaDB-server package depends on are: MariaDB-client, MariaDB-shared, and MariaDB-common. Therefore, to install MariaDB 10.6.21 from this zypper repository, we would do the following:
sudo zypper install MariaDB-server-10.6.21 MariaDB-client-10.6.21 MariaDB-shared-10.6.21 MariaDB-backup-10.6.21 MariaDB-common-10.6.21
The rest of the install and setup process is as normal.
After the installation is complete, you can start MariaDB.
If you are using MariaDB Galera Cluster, then keep in mind that the first node will have to be bootstrapped.
This page is licensed: CC BY-SA / Gnu FDL
If you are using DirectAdmin and you encounter any issues with Installing MariaDB with YUM, then the directions below may help. The process is very straightforward.
Note: Installing with YUM is preferable to installing the MariaDB RPM packages manually, so only do this if you are having issues such as:
Starting httpd:
httpd:
Syntax error on line 18 of /etc/httpd/conf/httpd.conf:
Syntax error on line 1 of /etc/httpd/conf/extra/httpd-phpmodules.conf:
Cannot load /usr/lib/apache/libphp5.so into server:
libmysqlclient.so.18: cannot open shared object file: No such file or directory
Or:
Starting httpd:
httpd:
Syntax error on line 18 of /etc/httpd/conf/httpd.conf:
Syntax error on line 1 of /etc/httpd/conf/extra/httpd-phpmodules.conf:
Cannot load /usr/lib/apache/libphp5.so into server:
/usr/lib/apache/libphp5.so: undefined symbol: client_errors
To install the RPMs, there is a quick and easy guide to Installing MariaDB with the RPM Tool. Follow the instructions there.
We do not want DirectAdmin's custombuild to remove or overwrite our MariaDB installation whenever an update is performed. To prevent this, disable automatic MySQL installation.
Edit /usr/local/directadmin/custombuild/options.conf
Change:
mysql_inst=yes
To:
mysql_inst=no
Note: When MariaDB is installed manually (i.e. not using YUM), updates are not automatic. You will need to update the RPMs yourself.
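If you prefer to script the edit, a sed one-liner like the following sketch would work. It is shown here on a sample copy of the file; in practice you would run the sed command on /usr/local/directadmin/custombuild/options.conf:

```shell
# Demonstrate the change on a sample copy of options.conf
printf 'mysql_inst=yes\n' > /tmp/options.conf.sample
sed -i 's/^mysql_inst=yes$/mysql_inst=no/' /tmp/options.conf.sample
cat /tmp/options.conf.sample   # prints: mysql_inst=no
```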
This page is licensed: CC BY-SA / Gnu FDL
Here are the detailed steps for installing MariaDB (version 10.1.21) via RPMs on CentOS 7.
The RPMs needed for the installation are all available on the MariaDB website and are given below:
jemalloc-3.6.0-1.el7.x86_64.rpm
MariaDB-10.1.21-centos7-x86_64-client.rpm
MariaDB-10.1.21-centos7-x86_64-compat.rpm
galera-25.3.19-1.rhel7.el7.centos.x86_64.rpm
jemalloc-devel-3.6.0-1.el7.x86_64.rpm
MariaDB-10.1.21-centos7-x86_64-common.rpm
MariaDB-10.1.21-centos7-x86_64-server.rpm
Step by step installation:
First install all of the dependencies needed. It's easy to do this via YUM packages: yum install rsync nmap lsof perl-DBI nc
rpm -ivh jemalloc-3.6.0-1.el7.x86_64.rpm
rpm -ivh jemalloc-devel-3.6.0-1.el7.x86_64.rpm
rpm -ivh MariaDB-10.1.21-centos7-x86_64-common.rpm MariaDB-10.1.21-centos7-x86_64-compat.rpm MariaDB-10.1.21-centos7-x86_64-client.rpm galera-25.3.19-1.rhel7.el7.centos.x86_64.rpm MariaDB-10.1.21-centos7-x86_64-server.rpm
While installing MariaDB-10.1.21-centos7-x86_64-common.rpm there might be a conflict with older MariaDB packages. We need to remove them and install the original RPM again.
Here is the error message for dependencies:
# rpm -ivh MariaDB-10.1.21-centos7-x86_64-common.rpm
warning: MariaDB-10.1.21-centos7-x86_64-common.rpm: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
error: Failed dependencies:
mariadb-libs < 1:10.1.21-1.el7.centos conflicts with MariaDB-common-10.1.21-1.el7.centos.x86_64
Solution: search for this package:
# rpm -qa | grep mariadb-libs
mariadb-libs-5.5.52-1.el7.x86_64
Remove this package:
# rpm -ev --nodeps mariadb-libs-5.5.52-1.el7.x86_64
Preparing packages...
mariadb-libs-1:5.5.52-1.el7.x86_64
While installing the Galera package, installation might fail due to a missing dependency. Here is the error message:
[root@centos-2 /]# rpm -ivh galera-25.3.19-1.rhel7.el7.centos.x86_64.rpm
error: Failed dependencies:
libboost_program_options.so.1.53.0()(64bit) is needed by galera-25.3.19-1.rhel7.el7.centos.x86_64
The dependency for the Galera package is libboost_program_options.so.1.53.0.
Solution:
yum install boost-devel.x86_64
Another warning message that may appear while installing the Galera package is shown below:
warning: galera-25.3.19-1.rhel7.el7.centos.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
The solution for this is to import the key:
# rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB
After step 4, the installation is complete. The last step is to run mysql_secure_installation to secure the production server by disallowing remote login for root, creating a root password, and removing the test database.
mysql_secure_installation
This page is licensed: CC BY-SA / Gnu FDL
The following article is about different issues people have encountered when installing MariaDB on RHEL / CentOS.
It is highly recommended to install with yum where possible.
In RHEL/CentOS it is also possible to install an RPM or a tarball. The RPM is preferred, unless you want to install many versions of MariaDB or install MariaDB in a non-standard location.
If you removed a MySQL RPM to install MariaDB, note that on uninstall the MySQL RPM renames /etc/my.cnf to /etc/my.cnf.rpmsave.
After installing MariaDB you should do the following to restore your configuration options:
mv /etc/my.cnf.rpmsave /etc/my.cnf
If you are using any of the following options in your /etc/my.cnf or other my.cnf file you should remove them. This is also true for MySQL 5.1 or newer:
skip-bdb
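The obsolete option can be removed by hand, or with a sed one-liner like this sketch. It is run on a sample copy here; in practice the target would be /etc/my.cnf:

```shell
# Create a sample my.cnf containing the obsolete option
printf '[mysqld]\nskip-bdb\nkey_buffer_size=16M\n' > /tmp/my.cnf.sample
# Delete any line consisting solely of the obsolete option
sed -i '/^skip-bdb$/d' /tmp/my.cnf.sample
cat /tmp/my.cnf.sample
```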
This page is licensed: CC BY-SA / Gnu FDL
MariaDB source RPMs (SRPMs) are not packaged on all platforms for which MariaDB RPMs are packaged.
The reason is that MariaDB's build process relies heavily on cmake for a lot of things. In this specific case, MariaDB's build process relies on CMake CPack Package Generators to build RPMs. The specific package generator that it uses to build RPMs is called CPackRPM.
Support for source RPMs in CPackRPM became usable with MariaDB's build system starting from around cmake 3.10. This means that we do not produce source RPMs on platforms where the installed cmake version is older than that.
See also Building MariaDB from a Source RPM.
This page is licensed: CC BY-SA / Gnu FDL
RHEL, CentOS, Fedora, and other similar RPM-based Linux distributions provide their own MariaDB packages, which are supported by those distributions. If you have a particular need for a later version than what is in the distribution, then MariaDB provides repositories for them.
Using repositories rather than installing RPMs directly allows for easy updates when a new release is made. It is highly recommended to install the relevant RPM packages from MariaDB's repository using yum or dnf. CentOS 7 still uses yum, most others use dnf, and SUSE/openSUSE use zypper.
This page walks you through the simple installation steps using dnf and yum.
We currently have YUM/DNF repositories for the following Linux distributions, and for the versions that are in standard (not extended) support:
Red Hat Enterprise Linux (RHEL)
CentOS
Fedora
openSUSE
SUSE
If you want to install MariaDB with yum, then you can configure yum to install from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.
MariaDB Corporation provides a MariaDB Package Repository for several Linux distributions that use yum to manage packages. This repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup. The MariaDB Package Repository setup script automatically configures your system to install packages from the MariaDB Package Repository.
To use the script, execute the following command:
curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
Note that this script also configures a repository for MariaDB MaxScale and a repository for MariaDB Tools, which currently only contains Percona XtraBackup and its dependencies.
See MariaDB Package Repository Setup and Usage for more information.
If you want to install MariaDB with yum, then you can configure yum to install from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.
The MariaDB Foundation provides a MariaDB repository for several Linux distributions that use yum to manage packages. This repository contains software packages related to MariaDB Server, including the server itself, clients and utilities, client libraries, plugins, and mariadb-backup. The MariaDB Repository Configuration Tool can easily generate the appropriate configuration file to add the repository for your distribution.
Once you have the appropriate repository configuration section for your distribution, add it to a file named MariaDB.repo under /etc/yum.repos.d/.
For example, if you wanted to use the repository to install MariaDB 10.6 on RHEL (any version), then you could use the following yum repository configuration in /etc/yum.repos.d/MariaDB.repo:
[mariadb]
name = MariaDB
baseurl = https://rpm.mariadb.org/10.6/rhel/$releasever/$basearch
gpgkey= https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
The example file above includes a gpgkey line to automatically fetch the GPG public key that is used to verify the digital signatures of the packages in our repositories. This allows the yum, dnf, and rpm utilities to verify the integrity of the packages that they install.
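One way to create the repository file is with a shell heredoc. The sketch below writes to /tmp so the commands work without root; in practice you would move the file to /etc/yum.repos.d/MariaDB.repo:

```shell
# Write the repository definition shown above (to /tmp for illustration)
cat > /tmp/MariaDB.repo <<'EOF'
[mariadb]
name = MariaDB
baseurl = https://rpm.mariadb.org/10.6/rhel/$releasever/$basearch
gpgkey= https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF
# Then move it into place: sudo mv /tmp/MariaDB.repo /etc/yum.repos.d/
grep -c '=' /tmp/MariaDB.repo   # prints: 4
```

The quoted heredoc delimiter ('EOF') keeps $releasever and $basearch literal, so yum can expand them itself.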
If you wish to pin the yum repository to a specific minor release, or if you would like to do a yum downgrade to a specific minor release, then you can create a yum repository configuration with a baseurl option set to that specific minor release.
The MariaDB Foundation archives repositories of all old releases at the following URL:
Note that this archive isn't configured as a highly available server. For that purpose, please use the main mirrors.
For example, if you wanted to pin your repository to MariaDB 10.8.8 on CentOS 7, then you could use the following yum repository configuration in /etc/yum.repos.d/MariaDB.repo:
[mariadb]
name = MariaDB-10.8.8
baseurl= http://archive.mariadb.org/mariadb-10.8.8/yum/centos/$releasever/$basearch
gpgkey= https://archive.mariadb.org/PublicKey
gpgcheck=1
Note that if you change an existing repository configuration, then you may need to execute the following:
sudo yum clean all
MariaDB's yum repository can be updated to a new major release. How this is done depends on how you originally configured the repository.
If you configured yum to install from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script, then you can update the major release that the repository uses by running the script again.
If you configured yum to install from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool, then you can update the major release that the repository uses by updating the yum repository configuration file in-place. For example, if you wanted to change the repository from MariaDB 10.6 to MariaDB 10.11, and if the repository configuration file was at /etc/yum.repos.d/MariaDB.repo, then you could execute the following:
sudo sed -i 's/10.6/10.11/' /etc/yum.repos.d/MariaDB.repo
After that, the repository should refer to MariaDB 10.11.
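To see what the in-place edit does without touching the live configuration, you can stage a local copy first. The sketch below uses the 10.6 example file from earlier and a slightly more targeted sed expression that anchors on the slashes around the version, so only the version component of the URL is changed (versions here are examples; adjust to your case):

```shell
# Stage a local copy of the repository file (contents from the 10.6 example).
cat > MariaDB.repo <<'EOF'
[mariadb]
name = MariaDB
baseurl = https://rpm.mariadb.org/10.6/rhel/$releasever/$basearch
gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
EOF
# Anchoring on the surrounding slashes avoids accidental matches elsewhere.
sed -i 's|/10\.6/|/10.11/|' MariaDB.repo
grep baseurl MariaDB.repo
# baseurl = https://rpm.mariadb.org/10.11/rhel/$releasever/$basearch
```

Once the result looks right, the same sed expression can be applied to the real file under /etc/yum.repos.d/ with sudo.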
If the yum repository is pinned to a specific minor release, then the above sed command can result in an invalid repository configuration. In that case, the recommended options are:
Edit the MariaDB.repo repository file manually.
Or delete the MariaDB.repo repository file, and then install the repository of the new version with the more robust MariaDB Package Repository setup script.
Before MariaDB can be installed, you also have to import the GPG public key that is used to verify the digital signatures of the packages in our repositories. This allows the yum, dnf, and rpm utilities to verify the integrity of the packages that they install.
The ID of our GPG public key is:
short form: 0xC74CD1D8
long form: 0xF1656F24C74CD1D8
full fingerprint: 177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
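The two key IDs are simply the trailing hex digits of the full fingerprint: the long ID is the last 16 digits and the short ID the last 8. A quick shell check confirms this for the key above:

```shell
# Full fingerprint of the MariaDB signing key, with spaces removed.
FPR="177F4010FE56CA3336300305F1656F24C74CD1D8"
# Long key ID = last 16 hex digits; short key ID = last 8.
LONG=$(printf '%s' "$FPR" | tail -c 16)
SHORT=$(printf '%s' "$FPR" | tail -c 8)
echo "long key ID:  0x$LONG"   # 0xF1656F24C74CD1D8
echo "short key ID: 0x$SHORT"  # 0xC74CD1D8
```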
yum should prompt you to import the GPG public key the first time that you install a package from MariaDB's repository. However, if you like, the rpm utility can be used to manually import this key instead. For example:
sudo rpm --import https://supplychain.mariadb.com/MariaDB-Server-GPG-KEY
Once the GPG public key is imported, you are ready to install packages from the repository.
For releases before 2023, an older SHA1-based GPG key was used. The ID of this older GPG public key was 0xCBCB082A1BB943DB. The short form was 0x1BB943DB. The full key fingerprint was:
1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB
After the dnf/yum repository is configured, you can install MariaDB by executing the dnf or yum command. The specific command to use depends on which packages you want to install.
To install the most common packages, execute the following command:
sudo dnf install MariaDB-server galera-4 MariaDB-client MariaDB-shared MariaDB-backup MariaDB-common
To install MariaDB Server, execute the following command:
sudo dnf install MariaDB-server
The process to install MariaDB Galera Cluster with the MariaDB yum repository is practically the same as installing standard MariaDB Server.
You need to install the galera-4 package to obtain the Galera 4 wsrep provider library.
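Once the galera-4 package is installed, the wsrep provider library it ships is referenced from the server configuration. A minimal sketch of such a fragment, assuming the library path used by the galera-4 package on x86_64 systems (verify the actual path on your system):

```ini
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
```

A working cluster needs further settings (cluster address, node name, and so on); see the MariaDB Galera Cluster documentation referenced below.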
To install MariaDB Galera Cluster, you could execute the following command:
sudo yum install MariaDB-server MariaDB-client galera-4
If you haven't yet imported the MariaDB GPG public key, then yum will prompt you to import it after it downloads the packages, but before it prompts you to install them.
See MariaDB Galera Cluster for more information on MariaDB Galera Cluster.
MariaDB Connector/C is included as the client library (statically linked). However, the package name for the client library has not been changed.
To install the clients and client libraries, execute the following command:
sudo yum install MariaDB-client MariaDB-shared
If you want to compile your own programs against MariaDB Connector/C, execute the following command:
sudo yum install MariaDB-devel
To install mariadb-backup, execute the following command:
sudo yum install MariaDB-backup
Some plugins may also need to be installed.
For example, to install the cracklib_password_check password validation plugin, execute the following command:
sudo yum install MariaDB-cracklib-password-check
The MariaDB yum repository also contains debuginfo packages. These packages may be needed when debugging a problem.
To install debuginfo for the most common packages, execute the following command:
sudo yum install MariaDB-server-debuginfo MariaDB-client-debuginfo MariaDB-shared-debuginfo MariaDB-backup-debuginfo MariaDB-common-debuginfo
Every package has a corresponding debuginfo package, named by appending -debuginfo to the package name.
To install debuginfo for MariaDB Server, execute the following command:
sudo yum install MariaDB-server-debuginfo
The MariaDB yum repository contains the last few versions of MariaDB. To show what versions are available, use the following command:
yum list --showduplicates MariaDB-server
The output shows the available versions. For example:
$ yum list --showduplicates MariaDB-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirrors.ovh.net
* extras: centos.mirrors.ovh.net
* updates: centos.mirrors.ovh.net
Available Packages
MariaDB-server.x86_64 10.3.10-1.el7.centos mariadb
MariaDB-server.x86_64 10.3.11-1.el7.centos mariadb
MariaDB-server.x86_64 10.3.12-1.el7.centos mariadb
mariadb-server.x86_64 1:5.5.60-1.el7_5 base
The MariaDB yum repository in this example contains MariaDB 10.3.10, MariaDB 10.3.11, and MariaDB 10.3.12. The CentOS base yum repository also contains MariaDB 5.5.60.
To install an older version of a package instead of the latest version, specify the package name, a dash, and then the version number. You only need to specify enough of the version number for it to be unique among the other available versions.
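As an illustration of "enough of the version number to be unique": against the example listing above, the prefix 10.3.1 matches all three 10.3.x releases, while 10.3.11 matches exactly one. A quick way to sanity-check a prefix against a version list:

```shell
# Versions taken from the example listing above.
printf '%s\n' 10.3.10-1.el7.centos 10.3.11-1.el7.centos 10.3.12-1.el7.centos > versions.txt
# Count how many versions each candidate prefix matches.
grep -c '^10\.3\.1'  versions.txt   # 3 matches: ambiguous
grep -c '^10\.3\.11' versions.txt   # 1 match: unique
```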
However, when installing an older version of a package, if yum has to install dependencies, then it will automatically choose to install the latest versions of those packages. To ensure that all MariaDB packages are on the same version in this scenario, it is necessary to specify them all.
The packages that the MariaDB-server package depends on are: MariaDB-client, MariaDB-shared, and MariaDB-common. Therefore, to install MariaDB 10.3.11 from this yum repository, we would do the following:
sudo yum install MariaDB-server-10.3.11 MariaDB-client-10.3.11 MariaDB-shared-10.3.11 MariaDB-backup-10.3.11 MariaDB-common-10.3.11
The rest of the install and setup process is as normal.
After the installation is complete, you can start MariaDB.
If you are using MariaDB Galera Cluster, then keep in mind that the first node will have to be bootstrapped.
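On systemd-based installs, bootstrapping is typically done with the galera_new_cluster helper that ships with the MariaDB Galera packages; a sketch of the usual sequence:

```shell
# First node only: bootstrap a new cluster.
#   sudo galera_new_cluster
# Every remaining node: start normally, and it joins the cluster.
#   sudo systemctl start mariadb
```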
This page is licensed: CC BY-SA / Gnu FDL