Dive into advanced configurations for MariaDB Server performance. This section covers in-depth tuning parameters, optimization strategies, and best practices to maximize speed and efficiency.
Articles on how to set up MariaDB optimally on different systems.
When InnoDB writes to the filesystem, there is generally no guarantee that a given write operation will be complete (not partial) if there is a power failure, or if the operating system crashes at the exact moment the write is being done.
Without detection or prevention of partial writes, the integrity of the database can be compromised after recovery.
Since its inception, InnoDB has had a mechanism to detect and ignore partial writes: the InnoDB Doublewrite Buffer (InnoDB page checksums can also be used to detect a partial write).
The doublewrite buffer, controlled by the innodb_doublewrite system variable, comes with its own set of problems. Especially on SSDs, writing each page twice can have detrimental effects, such as increased write amplification and wear.
A better solution is to directly ask the filesystem to provide an atomic (all or nothing) write guarantee. Currently this is only available on a few SSD cards.
At startup, MariaDB 10.2 and later automatically detect whether any of the supported SSD cards are in use.
When an InnoDB table is opened, the server checks whether the tablespace for the table is on a device that supports atomic writes; if so, atomic writes are automatically enabled for the table. If atomic write support is not detected, the doublewrite buffer is used.
One can disable atomic write support for all cards by setting the variable innodb_use_atomic_writes to OFF in your my.cnf file. It is ON by default.
To use atomic writes instead of the doublewrite buffer, add:
innodb_use_atomic_writes = 1
to the my.cnf config file.
Note that atomic writes are only supported on Fusion-io devices that use the NVMFS file system in these versions of MariaDB.
The following happens when atomic writes are enabled:
If innodb_flush_method is not O_DIRECT, ALL_O_DIRECT, or O_DIRECT_NO_FSYNC, it is switched to O_DIRECT.
innodb_use_fallocate is switched ON (files are extended using posix_fallocate rather than by writing zeros after the end of the file).
Whenever an InnoDB data file is opened, a special ioctl() is issued to switch on atomic writes. If the call fails, an error is logged and returned to the caller. This means that if the system tablespace is not located on an atomic-write-capable device or filesystem, InnoDB/XtraDB will refuse to start.
If innodb_doublewrite is set to ON, innodb_doublewrite will be switched OFF and a message written to the error log.
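To confirm at runtime which mechanism the server ended up using, you can check the relevant variables; this is just a quick sanity check, and the values shown will depend on your version and hardware:
MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE 'innodb_use_atomic_writes';
MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';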
MariaDB currently supports atomic writes on the following devices:
Fusion-io devices with the NVMFS file system. MariaDB 5.5 and above.
Shannon SSD. MariaDB 10.2 and above.
This page is licensed: CC BY-SA / Gnu FDL
For optimal IO performance when running a database on modern hardware, we recommend the none scheduler (previously called noop) for SSDs and NVMe devices, and mq-deadline (previously called deadline) for hard disks.
You can check your scheduler setting with:
cat /sys/block/${DEVICE}/queue/scheduler
For instance, the output should look like this:
cat /sys/block/vdb/queue/scheduler
[none] mq-deadline kyber bfq
Older kernels may look like:
cat /sys/block/sda/queue/scheduler
[noop] deadline cfq
Writing the new scheduler name to the same /sys node changes the scheduler at runtime:
echo none > /sys/block/vdb/queue/scheduler
(On older kernels, write noop instead.)
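The change above only lasts until the next reboot. A common way to make the scheduler choice persistent is a udev rule; the following is only a sketch, and the file name and device match patterns are examples that may need adjusting for your system:
# /etc/udev/rules.d/60-io-scheduler.rules (example path)
# non-rotational devices (SSD/NVMe): use none
ACTION=="add|change", KERNEL=="sd[a-z]|vd[a-z]|nvme[0-9]n[0-9]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
# rotational devices (hard disks): use mq-deadline
ACTION=="add|change", KERNEL=="sd[a-z]|vd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="mq-deadline"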
The impact of schedulers depends significantly on workload and hardware. You can measure IO latency using the biolatency script from bcc-tools, with the aim of keeping the mean as low as possible.
By default, the system limits how many open file descriptors a process can have open at one time. It has both a soft and hard limit. On many systems, both the soft and hard limit default to 1024. On an active database server, it is very easy to exceed 1024 open file descriptors. Therefore, you may need to increase the soft and hard limits. There are a few ways to do so.
If you are using mysqld_safe or mariadbd-safe to start mysqld, then see the instructions at mariadbd-safe: Configuring the Open Files Limit.
If you are using systemd to start mysqld, then see the instructions at systemd: Configuring the Open Files Limit.
Otherwise, you can set the soft and hard limits for the mysql user account by adding the following lines to /etc/security/limits.conf:
mysql soft nofile 65535
mysql hard nofile 65535
After the system is rebooted, the mysql user should use the new limits, and the user's ulimit output should look like the following:
$ ulimit -Sn
65535
$ ulimit -Hn
65535
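To verify the limit that actually applies to the running database process (rather than to your login shell), you can inspect /proc; this sketch assumes the server binary is named mariadbd or mysqld and that only one instance is running:
# show the open files limit of the running server process
cat /proc/$(pidof mariadbd || pidof mysqld)/limits | grep "open files"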
By default, the system limits the size of core files that can be created. It has both a soft and hard limit. On many systems, the soft limit defaults to 0. If you want to enable core dumps, you may need to increase the soft and hard limits. There are a few ways to do so.
If you are using mysqld_safe or mariadbd-safe to start mysqld, then see the instructions at mariadbd-safe: Configuring the Core File Size.
If you are using systemd to start mysqld, then see the instructions at systemd: Configuring the Core File Size.
Otherwise, you can set the soft and hard limits for the mysql user account by adding the following lines to /etc/security/limits.conf:
mysql soft core unlimited
mysql hard core unlimited
After the system is rebooted, the mysql user should use the new limits, and the user's ulimit output should look like the following:
$ ulimit -Sc
unlimited
$ ulimit -Hc
unlimited
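Note that where a core file is written (and whether it is intercepted by a tool such as systemd-coredump or abrt) is controlled by the kernel.core_pattern sysctl, so it is worth checking as well; the target directory below is only an example:
# check where core files will be written
sysctl kernel.core_pattern
# example: write cores to /var/crash (the directory must exist and be writable by the mysql user)
sysctl -w kernel.core_pattern=/var/crash/core.%e.%p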
This page is licensed: CC BY-SA / Gnu FDL
This article will help you configure MariaDB for optimal performance.
By default, MariaDB is configured to work on a desktop system and therefore use relatively few resources. To optimize installation for a dedicated server, you have to do a few minutes of work.
For this article we assume that you are going to run MariaDB on a dedicated server.
Feel free to update this article if you have more ideas.
MariaDB is normally configured by editing the my.cnf file. In the next section you have a list of variables that you may want to configure for dedicated MariaDB servers.
InnoDB is normally the default storage engine with MariaDB.
You should set innodb_buffer_pool_size to about 80% of your memory. The goal is to ensure that 80% of your working set is in memory.
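For example, a minimal my.cnf sketch for a dedicated server with 16 GB of RAM (the value is an example only; adjust it to your hardware and working set):
[mariadb]
# roughly 80% of RAM on a dedicated 16 GB machine
innodb_buffer_pool_size = 12G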
Some other important InnoDB variables:
innodb_buffer_pool_instances. Deprecated and ignored from MariaDB 10.5.1.
innodb_adaptive_max_sleep_delay. Deprecated and ignored from MariaDB 10.5.5.
innodb_thread_concurrency. Deprecated and ignored from MariaDB 10.5.5.
MariaDB uses by default the Aria storage engine for internal temporary files. If you have many temporary files, you should set aria_pagecache_buffer_size to a reasonably large value so that temporary overflow data is not flushed to disk. The default is 128M.
You can check if Aria is configured properly by executing:
MariaDB [test]> show global status like "Aria%";
+-----------------------------------+-------+
| Variable_name | Value |
+-----------------------------------+-------+
| Aria_pagecache_blocks_not_flushed | 0 |
| Aria_pagecache_blocks_unused | 964 |
| Aria_pagecache_blocks_used | 232 |
| Aria_pagecache_read_requests | 9598 |
| Aria_pagecache_reads | 0 |
| Aria_pagecache_write_requests | 222 |
| Aria_pagecache_writes | 0 |
| Aria_transaction_log_syncs | 0 |
+-----------------------------------+-------+
If Aria_pagecache_reads is much smaller than Aria_pagecache_read_requests and Aria_pagecache_writes is much smaller than Aria_pagecache_write_requests, then your setup is good. If aria_pagecache_buffer_size is big enough, the two variables should be 0, as in the output above.
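If the counters above show many Aria_pagecache_reads or blocks that are not flushed, you can raise the buffer in my.cnf; the value below is only an example, so size it to hold your typical temporary data:
[mariadb]
aria_pagecache_buffer_size = 512M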
If you don't use MyISAM tables explicitly (true for most MariaDB 10.4+ users), you can set key_buffer_size to a very low value, like 64K.
Using memory tables for internal temporary results can speed up execution. However, if the memory table gets full, then the memory table will be moved to disk, which can hurt performance.
You can check how the internal memory tables are performing by executing:
MariaDB [test]> show global status like "Created%tables%";
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1 |
| Created_tmp_tables | 2 |
+-------------------------+-------+
Created_tmp_tables is the total number of internal temporary tables created while executing statements such as SELECT. Created_tmp_disk_tables shows how many of these spilled to disk.
You can increase the storage for internal temporary tables by setting max_heap_table_size and tmp_memory_table_size high enough. These values are per connection.
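For example (the values are illustrative; both limits apply per connection, and the smaller of the two caps the size of an in-memory internal temporary table):
[mariadb]
max_heap_table_size   = 64M
tmp_memory_table_size = 64M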
If you are doing a lot of fast connections/disconnects, you should increase back_log and, if you are running MariaDB 10.1 or earlier, thread_cache_size.
If you have a lot (> 128) of simultaneously running fast queries, you should consider setting thread_handling to pool-of-threads.
If you are connecting from a lot of different machines you should increase host_cache_size to the max number of machines (default 128) to cache the resolving of hostnames. If you don't connect from a lot of machines, you can set this to a very low value!
The performance schema helps you understand what is taking time and resources.
The slow query log helps you find queries that are running slowly (see the example below).
OPTIMIZE TABLE helps you defragment tables.
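A minimal my.cnf sketch for enabling the slow query log (the threshold and file path are examples only):
[mariadb]
slow_query_log      = 1
long_query_time     = 1
slow_query_log_file = /var/log/mysql/mariadb-slow.log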
This page is licensed: CC BY-SA / Gnu FDL
Obviously, accessing swap memory from disk is far slower than accessing RAM directly. This is particularly bad on a database server because:
MariaDB's internal algorithms assume that memory is not swap, and are highly inefficient if it is. Some algorithms are intended to avoid or delay disk IO, and use memory where possible - performing this with swap can be worse than just doing it on disk in the first place.
Swapping increases IO compared to just using the disk directly, as pages are actively moved in and out of swap. Even something like evicting a dirty page that is no longer going to be kept in memory, while designed to improve efficiency, will cost more IO when swap is involved.
Database locks are particularly inefficient in swap. They are designed to be obtained and released often and quickly, and pausing to perform disk IO will have a serious impact on their usability.
The main way to avoid swapping is to make sure you have enough RAM for all processes that need to run on the machine. Setting the system variables too high can mean that under load the server runs short of memory, and needs to use swap. So understanding what settings to use and how these impact your server's memory usage is critical.
Linux has a swappiness setting which determines the balance between swapping out pages (chunks of memory) from RAM to a preconfigured swap space on the hard drive.
The setting is from 0 to 100, with lower values meaning a lower likelihood of swapping. The default is usually 60 - you can check this by running:
sysctl vm.swappiness
The default setting encourages the server to use swap. Since there probably won't be much else on the database server besides MariaDB processes to put into swap, you'll probably want to reduce this to zero to avoid swapping as much as possible. You can change the default by adding a line to the sysctl.conf file (usually found in /etc/sysctl.conf).
To set the swappiness to zero, add the line:
vm.swappiness = 0
This normally takes effect after a reboot, but you can change the value without rebooting as follows:
sysctl -w vm.swappiness=0
Since RHEL 6.4, setting swappiness=0 more aggressively avoids swapping out, which increases the risk of OOM killing under strong memory and I/O pressure.
A low swappiness setting is recommended for database workloads. For MariaDB databases, it is recommended to set swappiness to a value of 1.
vm.swappiness = 1
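As above, the setting can be applied immediately and persisted across reboots; this is a sketch, and some distributions prefer drop-in files under /etc/sysctl.d/ instead of editing /etc/sysctl.conf directly:
# apply immediately
sysctl -w vm.swappiness=1
# persist across reboots, then reload the sysctl settings
echo "vm.swappiness = 1" >> /etc/sysctl.conf
sysctl -p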
While some disable swap altogether, and you certainly want to keep database processes from using it, it can be prudent to leave some swap space so that the kernel can at least fall over gracefully should a memory spike occur. Having emergency swap available gives you some scope to kill any runaway processes.
This page is licensed: CC BY-SA / Gnu FDL
This category contains information about Fusion-io support in MariaDB
Fusion-io develops PCIe based NAND flash memory cards and related software that can be used to speed up MariaDB databases.
The ioDrive branded products can be used as block devices (super-fast disks) or to extend basic DRAM memory. ioDrive is deployed by installing it in an x86 server and then installing the card driver under the operating system. All mainline 64-bit operating systems and hypervisors are supported: RHEL, CentOS, SuSE, Debian, OEL, etc., as well as VMware, Microsoft Windows/Server, and others. Drivers and their features are constantly developed further.
ioDrive cards support software RAID and you can combine two or more physical cards into one logical drive. Through ioMemory SDK and its APIs, one can integrate and enable more thorough interworking between your own software and the cards - and cut latency.
The key differentiator between a Fusion-io card and a legacy SSD/HDD is the following: a Fusion-io card is connected directly to the system bus (PCIe), which enables high data transfer throughput (1.5 GB/s, 3.0 GB/s or 6 GB/s) and allows the fast direct memory access (DMA) method to be used to transfer data. The ATA/SATA protocol stack is omitted and therefore latency is cut short. Fusion-io performance is dependent on server speed: the faster your processors and the newer your PCIe bus, the better the ioDrive performance. Fusion-io memory is non-volatile; in other words, data remains on the card even when the server is powered off.
You can start by using ioDrive for database files that need heavy random access.
Whole database on ioDrive.
In some cases, Fusion-io devices allow for atomic writes, which allows the server to safely disable the doublewrite buffer.
Use ioDrive as a write-through read cache. This is possible on server level with Fusion-io directCache software or in VMware environments using ioTurbine software or the ioCache bundle product. Reads happen from ioDrive and all writes go directly to your SAN or disk.
Highly Available shared storage with ION. Have two different hosts, Fusion-io cards in them and share/replicate data with Fusion-io's ION software.
The luxurious Platinum setup: MariaDB Galera Cluster running on Fusion-io SLC cards on several hosts.
Starting with MariaDB 5.5.31, MariaDB Server supports atomic writes on Fusion-io devices that use the NVMFS (formerly called DirectFS) file system. Unfortunately, NVMFS was never offered under ‘General Availability’, and SanDisk declared that NVMFS would reach end-of-life in December 2015. Therefore, NVMFS support is no longer offered by SanDisk.
MariaDB Server does not currently support atomic writes on Fusion-io devices with any other file systems.
See atomic write support for more information about MariaDB Server's atomic write support.
Extend InnoDB disk cache to be stored on Fusion-io acting as extended memory.
Fusion-io memory can be formatted with a sector size of either 512 or 4096 bytes. Bigger sectors are expected to be faster, but only if I/O is done in blocks of 4 KB or multiples of that. Speaking of MariaDB: if only InnoDB data files are stored in Fusion-io memory, all I/O is done in blocks of 16 KB, and thus the 4 KB sector size can be used. If the InnoDB redo log (I/O block size: 512 bytes) goes to the same Fusion-io memory, then the smaller 512-byte sector size should be used.
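Before choosing, you can check which logical and physical sector sizes a device reports (the device name /dev/fioa is only an example):
# logical sector size and physical sector size, in bytes
blockdev --getss --getpbsz /dev/fioa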
Note: XtraDB has the experimental feature of an increased InnoDB log block size of 4K. If this is enabled, then both redo log I/O and page I/O in InnoDB will match a sector size of 4K.
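If you experiment with that feature, it is configured in my.cnf; this is only a sketch, since innodb_log_block_size is an XtraDB-specific variable in older MariaDB releases and changing it requires recreating the redo logs:
[mariadb]
# XtraDB only, experimental: align redo log I/O with 4 KB sectors
innodb_log_block_size = 4096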
As for file systems: currently XFS is expected to yield the best performance with MariaDB. However, depending on the exact kernel version and the version of the XFS code in use, one might be affected by a bug that severely limits XFS performance in concurrent environments. This has been fixed in kernel versions above 3.5, or in RHEL 6 kernels kernel-2.6.32-358 or later (because bug 807503 was fixed).
On the pitbull machine where I ran such tests, ext4 was faster than XFS for 32 or more threads:
up to 8 threads, XFS was a few percent faster (10% on average).
at 16 threads it was a draw (2036 tps vs. 2070 tps).
at 32 threads ext4 was 28% faster (2345 tps vs. 1829 tps).
at 64 threads ext4 was even 47% faster (2362 tps vs. 1601 tps).
at higher concurrency ext4 lost its bite, but was still consistently better than XFS.
Those numbers are for spinning disks. I guess for Fusion-io memory the XFS numbers will be even worse.
There are several card models. ioDrive is older generation, ioDrive2 is newer. SLC sustains more writes. MLC is good enough for normal use.
ioDrive2, capacities per card 365GB, 785GB, 1.2TB with MLC. 400GB and 600GB with SLC, performance up to 535000 IOPS & 1.5GB/s bandwidth
ioDrive2 Duo, capacities per card 2.4TB MLC and 1.2TB SLC, performance up to 935000 IOPS & 3.0GB/s bandwidth
ioDrive, capacities per card 320GB, 640GB MLC and 160GB, 320GB SLC, performance up to 145000 IOPS & 790MB/s bandwidth
ioDrive Duo, capacities per card 640GB, 1.28TB MLC and 320GB, 640GB SLC, performance up to 285000 IOPS & 1.5GB/s bandwidth
ioDrive Octal, capacities per card 5TB and 10TB MLC, performance up to 1350000 IOPS & 6.7GB/s bandwidth
ioFX, a 420GB QDP MLC workstation product, 1.4GB/s bandwidth
ioCache, a 600GB MLC card with ioTurbine software bundle that can be used to speed up VMware based virtual hosts.
ioScale, 3.2TB card, building block to enable all-flash data center build out in hyperscale web and cloud environments. Product has been developed in co-operation with Facebook.
directCache - transforms ioDrive to work as a read cache in your server. Writes go directly to your SAN
ioTurbine - read cache software for VMware
ION - transforms ioDrive into a shareable storage
ioSphere - software to manage and monitor several ioDrives
This page is licensed: CC BY-SA / Gnu FDL
Release date: 12 Dec 2014
For the highlights of this release, see the release notes.
The revision number links will take you to the revision's page on Launchpad. On Launchpad you can view more details of the revision and view diffs of the code modified in that revision.
Revision #4009 Thu 2014-12-04 13:19:51 +0200
MDEV-7262: innodb.innodb-mdev7046 and innodb-page_compression* fail on BuildBot
Revision #4008 Wed 2014-12-03 13:23:42 +0200
Fix problem with trims.
Revision #4007 Wed 2014-12-03 12:05:00 +0200
Fix compiler error on fallocate and flags used.
Revision #4006 Tue 2014-12-02 20:26:21 +0200
Fix buildbot valgrind errors on test innodb.innodb-page_compression_tables
Revision #4005 [merge] Mon 2014-12-01 11:52:51 +0200
Merge MariaDB 10.0.15 from lp:maria/10.0 up to revision 4521
Revision #4004 Mon 2014-11-24 12:08:45 +0200
MDEV-7166: innodb.innodb-page_compression_zip fails in buildbot
Revision #4003 Wed 2014-11-19 20:20:31 +0200
MDEV-7133: InnoDB: Assertion failure in dict_tf_is_valid
Revision #4002 Wed 2014-11-12 10:06:39 +0200
MDEV-7088: Query stats for compression based on TRIM size
Revision #4001 Fri 2014-11-07 12:06:53 +0200
Move debug output inside a UNIV_DEBUG.
Revision #4000 Tue 2014-11-04 17:20:27 +0200
Fix posix_fallocate error message and add temporal debug output to resolve the problems on trim.
Revision #3999 Tue 2014-11-04 11:37:55 +0200
Fixed trim operation alligment problem.
Revision #3998 Wed 2014-10-29 08:51:17 +0200
MDEV-6648: InnoDB: Add support for 4K sector size if supported
Revision #3997 [merge] Mon 2014-10-20 11:34:21 +0300
Merge MariaDB 10.0.14 from lp:maria/10.0 up to revision 4116.
Revision #3996 [merge] Tue 2014-09-23 12:46:21 +0300
Merge MariaDB 10.0.13 i.e. lp:maria/10.0 up to revision 4346.
Revision #3995 Wed 2014-08-27 15:39:05 +0300
Fix small error on LZMA compression failure error message.
Revision #3994 Thu 2014-08-07 13:40:00 +0300
MDEV-6548: Incorrect compression on LZMA.
Revision #3993 Thu 2014-07-31 11:47:21 +0300
Merge MariaDB 10.1 -> 10.0-FusionIO
Revision #3992 Wed 2014-07-23 12:03:48 +0300
Fix default value for innodb-compression-algorithm to be 0 (uncompressed) to avoid test failures.
Revision #3991 Mon 2014-07-21 21:17:58 +0300
MDEV-6354: Implement a way to read MySQL 5.7.4-labs-tplc page compression format (Fusion-IO).
Revision #3990 [merge] Sat 2014-06-28 13:10:57 +0300
Merge lp:maria/10.0 up to MariaDB 10.0.12 i.e. revision 4252.
Revision #3989 Fri 2014-06-27 17:32:03 +0300
MDEV-6392: Change innodb_have_lzo and innodb_have_lz4 as a static variables and reduce the number of ifdef's
Revision #3988 Thu 2014-06-26 07:50:48 +0300
MDEV-6361: innodb_compression_algorithm configuration variable can be set to unsupported value.
Revision #3987 Mon 2014-05-26 20:42:06 +0200
compilation failure on Win64
Revision #3986 Mon 2014-05-26 20:41:10 +0200
use ENUM not ULONG for innodb-compression-algorithm command-line option
Revision #3985 Mon 2014-05-26 20:31:03 +0200
compilation failure on Windows
Revision #3984 Mon 2014-05-26 20:27:14 +0200
don't include the file that 1) not present everywhere 2) not used anyway
Revision #3983 Mon 2014-05-26 20:26:51 +0200
temporarily disable lzo compression
Revision #3982 Mon 2014-05-26 20:26:04 +0200
lzo.cmake: don't use the same symbol for two different tests
Revision #3981 Fri 2014-05-23 08:20:43 +0300
Fix compiler warnings.
Revision #3980 Thu 2014-05-22 21:03:26 +0300
Fix compiler error if LZO is not installed.
Revision #3979 Thu 2014-05-22 19:48:34 +0300
Fixed compiler errors caused by merge error.
Revision #3978 Thu 2014-05-22 16:31:31 +0300
Fix some compiler warnings and small errors on code.
Revision #3977 Fri 2014-05-16 15:30:13 +0300
Code cleanup after review.
Revision #3976 Mon 2014-04-28 07:52:41 +0300
Fixed small error on compression failure error text.
Revision #3975 Wed 2014-04-23 19:23:11 +0300
Fixed bug on free buffer space calculation when LZO is used. Fixed bug on function call when InnoDB plugin is used.
Revision #3974 [merge] Thu 2014-04-17 08:22:54 +0300
Merge lp:maria/10.0 up to MariaDB 10.0.10 revision 4140.
Revision #3973 Wed 2014-04-16 16:55:36 +0300
MDEV-6070: FusionIO: Failure to create a table with ATOMIC_WRITES option leaves the database in inconsistent state,
Revision #3972 Tue 2014-04-15 14:28:25 +0300
Added support for LZO compression method.
This page is licensed: CC BY-SA / Gnu FDL
Note: This page describes features in an unreleased version of MariaDB. Unreleased means there are no official packages or binaries available for download which contain the features. If you want to try out any of the new features described here, you will need to get and compile the code yourself.
Release date: 12 Dec 2014
For an overview of MariaDB 10.0 Fusion-io, see the Fusion-io Introduction page.
Thanks, and enjoy MariaDB!
Since the MariaDB 10.0.9 Fusion-io preview release, the following notable changes have been made.
Merged with MariaDB 10.0.15 release
Added support for 4K sector size if supported
Added status variables for 1K, 2K, 4K, 8K, 16K, and 32K trims
Added innodb-compression-algorithm configuration variable to select default compression method
Added support for
LZO compression
LZMA compression
bzip2 compression
For a complete list of changes made in MariaDB 10.0.15 Fusion-io, with links to detailed information on each push, see the changelog.
This page is licensed: CC BY-SA / Gnu FDL