This section is for articles on debugging MariaDB
The CMAKE_BUILD_TYPE Option
This option enables multiple debug instrumentation aspects within the MariaDB server. These provide more detailed information about complex parts of the server and can be used to implement and run tests where the concurrent execution of multiple threads must be controlled to achieve a specific state. If you are not doing this, the following option is sufficient.
Compiling MariaDB with full debug information includes all code symbols and also adds code for internal testing of structures and for tracing MariaDB execution. A full debug binary is notably (about 30%) slower than a normal binary. Most of this overhead can be removed by configuring with -DWITH_DBUG_TRACE=OFF.
You can configure a debug build by executing cmake and setting the CMAKE_BUILD_TYPE option to Debug. For example:
cmake -DCMAKE_BUILD_TYPE=Debug source_directory
To compile:
cmake --build .
You can find a list of the needed packages/libraries for building on Linux here.
To build MariaDB with debug symbols, to get better stack traces and to be able to debug the binary with gdb, you need to supply the -Og and -g3 options to the gcc compiler for a debuggable binary with limited optimization tradeoffs. Pass these options to cmake like so:
cmake -DCMAKE_CXX_FLAGS='-Og -g3' -DCMAKE_C_FLAGS='-Og -g3' source_directory
To compile:
cmake --build .
To build a "mariadbd" binary with debugging enabled that uses the same parameters as the ones used in Debian/Ubuntu binary packages, do as follows (you must have a deb-src line for one of the MariaDB repositories in your /etc/apt/sources.list in order to do this):
apt install build-essential devscripts fakeroot dpkg-dev
apt-get build-dep mariadb-server
apt-get source mariadb-server
cd mariadb-*
DEB_CFLAGS_APPEND='-Og -g3' DEB_CXXFLAGS_APPEND='-Og -g3' debuild -us -uc
The packages created will have these flags set.
The commands shown below replace the release mariadbd
binary with the debug mariadbd
binary that you compiled. Most importantly, they replace the binary in a way which makes it trivial to revert back to the
original release mariadbd
binary.
First, stop MariaDB.
Then, use the mv
utility to rename the release mariadbd
binary:
sudo mv /usr/sbin/mariadbd /usr/sbin/mariadbd-orig
Note: Do not use the cp
utility because that will change the file modification timestamp.
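The difference is that mv keeps the original file and its metadata, while cp creates a new file. This can be demonstrated with a sketch using a throwaway file instead of the real mariadbd binary:

```shell
# Sketch only: show that mv preserves the inode (and thus the timestamps),
# using a temporary file rather than the real mariadbd binary.
tmp=$(mktemp -d)
echo data > "$tmp/orig"
ino_before=$(ls -i "$tmp/orig" | awk '{print $1}')
mv "$tmp/orig" "$tmp/moved"
ino_after=$(ls -i "$tmp/moved" | awk '{print $1}')
[ "$ino_before" = "$ino_after" ] && echo "mv preserved the inode"
rm -r "$tmp"
```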
Then, install the debug mariadbd
binary from your source tree:
sudo install ~/mariadb-*/sql/mariadbd /usr/sbin/mariadbd
Then, start MariaDB.
Be sure to replace /usr/sbin/mariadbd
with the path to your mariadbd
binary and to also replace ~/mariadb-*/sql/mariadbd
with the path to your debug mariadbd binary.
If you want to restore your original mariadbd
binary, you can do so with the following process:
First, stop MariaDB.
Then, execute the following command to move the debug mariadbd binary out of the way:
sudo mv /usr/sbin/mariadbd /usr/sbin/mariadbd-debug
Then, execute the following command to move the original mariadbd
release binary back into place:
sudo mv /usr/sbin/mariadbd-orig /usr/sbin/mariadbd
Then, start MariaDB.
Be sure to replace /usr/sbin/mariadbd
with the path to your mariadbd
binary
The debug mariadbd-debug
binary is still present if it is needed again in the future.
A debug binary contains many code checks and assertions that are not present in production builds; leaving them out makes production builds faster. In some cases, when one is trying to find a hard-to-repeat bug, it can be beneficial to have these checks in production builds too.
Compiling with -DDBUG_ASSERT_AS_PRINTF
will change DBUG_ASSERT() to print any failed check to the error log.
cmake . -DDBUG_ASSERT_AS_PRINTF
Enabling the above option should not have any notable impact on performance (probably < 1% slowdown). This is achieved by grouping asserts in MariaDB server code into two groups:
Fast checks, using DBUG_ASSERT()
: These are converted to printing to error log.
Slow checks, using DBUG_SLOW_ASSERT()
: These are always removed in production builds.
This page is licensed: CC BY-SA / Gnu FDL
If mysqld
is crashing, creating a trace file is a good way to find the issue.
A mysqld
binary that has been compiled with debugging support can create trace files using the DBUG package created by Fred Fish. To find out if your mysqld
binary has debugging support, run mysqld -V
on the command line. If the version number ends in -debug
then your mysqld
binary was compiled with debugging support.
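The check can be scripted; as a sketch, the version line below stands in for real mysqld -V output:

```shell
# Stand-in for: version_line=$(mysqld -V)
version_line="mysqld  Ver 10.5.8-MariaDB-debug for Linux on x86_64"
case "$version_line" in
  *-debug*) build=debug ;;    # version number ends in -debug
  *)        build=release ;;
esac
echo "$build build"
```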
See Compiling MariaDB for debugging for instructions on how to create your own mysqld
binary with debugging enabled.
To create the trace log, start mysqld
like so:
mysqld --debug
Without options for --debug, the trace file will be named /tmp/mysqld.trace
in MySQL and in MariaDB versions before 10.5, and /tmp/mariadbd.trace
starting from MariaDB 10.5.
On Windows, the debug mysqld
is called mysqld-debug
and you should also use the --standalone
option. So the command on Windows will look like:
mysqld-debug --debug --standalone
Once the server is started, use the regular mysql
command-line client (or another client) to connect and work with the server.
After you are finished debugging, stop the server with:
mysqladmin shutdown
Trace files can grow to a significant size. You can reduce their size by telling the server to only log certain items.
The --debug
flag can take extra options in the form of a colon (:) delimited string of options. Individual options can have comma-separated sub-options.
For example:
mysqld --debug=d,info,error,query:o,/tmp/mariadbd.trace
The 'd
' option limits the output to the named DBUG_ macros. In the above example, the /tmp/mariadbd.trace
tracefile will contain output from the info, error, and query DBUG macros. A 'd
' by itself (with no sub-options) will select all DBUG_ macros.
The 'o
' option redirects the output to a file (/tmp/mariadbd.trace
in the example above) and overwrites the file if it exists.
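To make the option-string grammar concrete, here is a sketch that splits the example spec above into its colon-delimited fields in shell:

```shell
spec='d,info,error,query:o,/tmp/mariadbd.trace'
# Fields are separated by ':'; sub-options within a field by ','.
IFS=':' read -r flags output <<EOF
$spec
EOF
echo "flag field:   $flags"    # the 'd' option and its sub-options
echo "output field: $output"   # the 'o' option and its file argument
```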
This page is licensed: CC BY-SA / Gnu FDL
Even if you don't have a server that is compiled for debugging, there are still ways to get more information out from it if things go wrong.
When things go wrong, it's always better to have a version of the mysqld daemon that is not stripped.
shell> file /usr/sbin/mysqld
If the output says 'not stripped', then you are fine. If it says 'stripped', you should either download a binary with debugging information or compile it yourself without stripping the binary.
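As a sketch, the relevant distinction in the file output is "stripped" versus "not stripped" (the line below stands in for real file output):

```shell
# Stand-in for: file_output=$(file /usr/sbin/mysqld)
file_output="/usr/sbin/mysqld: ELF 64-bit LSB executable, x86-64, not stripped"
case "$file_output" in
  *"not stripped"*) status="debug info present" ;;      # must be checked first
  *stripped*)       status="stripped: get a binary with symbols" ;;
esac
echo "$status"
```

Note that "not stripped" must be matched before "stripped", since the former contains the latter.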
Read the Profiling Memory Usage page for more information on how to debug high memory consumption.
If you have a problem with a mysqld process that keeps on growing, you can use tcmalloc to find out what is allocating memory:
Depending on the system you have to install the tcmalloc
(OpenSuse) or the google-perftools-lib
(RedHat, Centos) package.
The following set of commands starts mysqld with memory profiling and if you kill it with SIGABRT, you will get a core dump that you can examine:
HEAPPROFILE=/tmp/mysqld.prof /usr/sbin/mysqld_safe --malloc-lib=tcmalloc --core-file-size=unlimited --core-file
or if you prefer to invoke mysqld directly:
ulimit -c unlimited
LD_PRELOAD=/usr/lib64/libtcmalloc.so.4 HEAPPROFILE=/tmp/mysqld.prof /usr/sbin/mysqld --core-file
You can of course add other mysqld options to the end of the above line.
Now start your client/application that uses MariaDB. You can find where memory is allocated in the /tmp/mysqld.prof
file. If you find any memory issues, please report this in the MariaDB bug tracker!
New Ubuntu releases do not allow one process to examine the memory of an arbitrary user's process. As a result, when trying to attach GDB to a running MariaDB (or any other process) instance, one gets the following error in GDB:
ptrace: Operation not permitted
More details are available in the Ubuntu Wiki.
To allow GDB to attach, one needs to edit the /proc/sys/kernel/yama/ptrace_scope
sysctl value.
To change it temporarily, open a root shell and issue:
echo 0 > /proc/sys/kernel/yama/ptrace_scope
To change it permanently, edit as root:
/etc/sysctl.d/10-ptrace.conf
and set the value to 0
.
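After the edit, the relevant line in that file would look like this (a sketch; Ubuntu ships the file with the value set to 1 by default):

```
kernel.yama.ptrace_scope = 0
```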
If your mysqld server hangs, you may want to debug it to know what happened.
Preferably the server should be compiled for debugging, but it's not strictly necessary:
cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_VALGRIND=ON .
make -j4
To know what the server is doing:
Find out the process number of mysqld
ps -edalf | grep mysqld
Attach to the process and get a back trace:
gdb -p 'pid of mysqld' path-to-mysqld
set height 0
set logging file /tmp/mysqld.log
set logging on
thread apply all backtrace full
After the above, you have a full backtrace, including all local variables, in the mysqld.log
file. Note that you will only get all variables if the server is not stripped.
This page is licensed: CC BY-SA / Gnu FDL
If you have MariaDB compiled for debugging you can both use it in a debugger, like ddd or gdb, and get comprehensive trace files of the execution of MariaDB. The trace files allow you to both see the flow of the code and to see the differences in execution by comparing two trace files.
Core dumps are also much easier to investigate if they come from a debug binary.
Note that a binary compiled for debugging and tracing is about 10-20% slower than a normal binary. If you just compile a binary for debugging (option -g
with gcc) the speed difference compared to a normal binary is negligible.
Execute:
mariadbd --debug --help
If you are using MariaDB before 10.5, then you should use mysqld
instead of mariadbd
!
If you get the error unknown option '--debug'
, then MariaDB is not compiled
for debugging and tracing.
On Unix you need to pass -DCMAKE_BUILD_TYPE=Debug
to cmake to compile with debug information.
Here is how you compile with debug on older versions:
Use the scripts in the BUILD directory that will compile MariaDB with most common debug options and plugins, for example:
./BUILD/compile-pentium64-debug-max
For the most common configurations there exists a fine-tuned script in the BUILD directory.
If you want to use valgrind, a very good memory instrumentation tool and memory overrun checker, you should use
./BUILD/compile-pentium64-valgrind-max
Some recommended debugging scripts for Intel/AMD are:
BUILD/compile-pentium64-debug-max
BUILD/compile-pentium64-valgrind-max
This is an example of how to compile MariaDB for debugging in your home directory with MariaDB 5.2.9 as an example:
cd ~
mkdir mariadb
cd mariadb
tar xvf mariadb-5.2.9.tar.gz
ln -s mariadb-5.2.9 current
cd current
./BUILD/compile-pentium64-debug-max
The last command will produce a debug version of sql/mysqld
.
The following example creates the MariaDB databases in /data
.
./scripts/mariadb-install-db --srcdir=. --datadir=/data
The following example is using ddd
, an excellent graphical debugger in Linux. If you don't have ddd
installed, you can use gdb
instead.
cd sql
ddd ./mariadbd &
In ddd
or gdb
run --datadir=/data --language=./share/english --gdb
You can set the options in your ~/.my.cnf file so as not to have to repeat them on the run
line.
If you run mysqld
with --debug
, you will get a trace file in /tmp/mysqld.trace that shows what is happening.
Note that you can have different options in the configuration file for each MariaDB version (like having a specific language directory).
If you get a crash while running mariadb-test-run
you can debug this in a debugger by using one of the following options:
mariadb-test-run --gdb failing-test-name
or if you prefer the ddd
debugger:
mariadb-test-run --ddd failing-test-name
[client-server]
socket=/tmp/mysql-dbug.sock
port=3307
[mariadb]
datadir=/my/data
loose-innodb_file_per_table
server_id= 1
log-basename=master
loose-debug-mutex-deadlock-detector
max-connections=20
lc-messages=en_us
[mariadb-10.0]
lc-messages-dir=/my/maria-10.0/sql/share
[mariadb-10.1]
lc-messages-dir=/my/maria-10.1/sql/share
[mariadb-10.2]
lc-messages-dir=/my/maria-10.2/sql/share
[mariadb-10.3]
lc-messages-dir=/my/maria-10.3/sql/share
The above .my.cnf
file:
Uses an explicit socket for both client and server.
Assumes the server source is in /my/maria-xxx. You should change this to point to where your sources are located.
Has a unique path for each MariaDB version so that one doesn't have to specify --lc-messages-dir or --language even when switching between debugging different MariaDB versions.
This page is licensed: CC BY-SA / Gnu FDL
Debugging memory usage on CentOS 7.
This page describes how to debug MariaDB's memory usage. It uses CentOS 7 but can be applied to other systems as well.
The idea is to employ the Google PerfTools heap profiler.
On CentOS:
sudo yum install gperftools
service mariadb stop
systemctl edit mariadb
This will open an editor.
Add this content and close the file:
[Service]
Environment="HEAPPROFILE=/tmp/heap-prof-1"
Environment="HEAP_PROFILE_ALLOCATION_INTERVAL=10737418240"
Environment="HEAP_PROFILE_INUSE_INTERVAL=1073741824"
Environment="LD_PRELOAD=/usr/lib64/libtcmalloc.so.4"
Then run
service mariadb start
Then, run the workload. When memory consumption becomes large enough, run
ls -la /tmp/heap-prof-*
This should show several files.
Copy away the last one of them:
cp /tmp/heap-prof-1.0007.heap .
Then, run
pprof --dot /usr/sbin/mysqld heap-prof-1.0007.heap > 7.dot
(Note: this produces a lot of statements like /bin/addr2line: Dwarf Error: ...
. This may be because it cannot find locations from the plugin .so files in mariadbd. In any case, it is not a showstopper at the moment.)
Then, please send us the 7.dot file.
This page is licensed: CC BY-SA / Gnu FDL
Core dumps are enabled by default on Windows, so this step can be skipped on Windows. See MDEV-18439 for more information.
In order to enable core dumps, you need to set the core_file system variable either on the command-line or in a relevant server option group in an option file. For example:
[mariadb]
...
core_file
You can check your current value by executing:
my_print_defaults --mariadbd
core_file is a system variable. Its value can be checked at runtime by executing the following:
SHOW GLOBAL VARIABLES LIKE 'core_file';
There are some additional details related to using core files on Linux.
On some systems there is a limit on the sizes of core files that can be dumped. You can check the system's current system-wide limit by executing the following:
ulimit -c
You can check the current limit of the mariadbd
process specifically by executing the following:
sudo cat /proc/$(pidof mariadbd)/limits | grep "core file"
If you need to change the core size limit, the method you use depends on how you start mariadbd
. See the sections below for more details.
The resource limits for the mariadbd
process are printed to the error log when the mariadbd
process crashes. That way, users can confirm whether the process may have been allowed to dump a core file. See MDEV-15051 for more information.
If you are starting MariaDB by running mysqld_safe, then configuring the following in the [mysqld_safe]
option group in an option file should allow for unlimited sized core files:
[mysqld_safe]
...
core_file_size=unlimited
You can check your current values by executing:
my_print_defaults mysqld_safe
See mysqld_safe: Configuring the Core File Size for more details.
Note: If you are using mysqld_safe and running mariadbd
as the root
user, then no
core file is created on some systems. The solution is to run mariadbd
as another user.
If you are starting mariadbd manually or in a custom script, then you can allow for unlimited sized core files by executing the following in the same shell or script in which mariadbd is executed:
ulimit -c unlimited
If you are starting mariadbd
using systemd, then you may need to customize the MariaDB service to allow for unlimited size core files. For example, you could execute the following:
Using sudo systemctl edit mariadb.service
add the contents:
[Service]
LimitCORE=infinity
See systemd: Configuring the Core File Size for more details.
To get a core dump in a mariadb
container on Linux, the sysctl kernel.core_pattern
must not begin with a pipe to an executable that doesn't exist in the container. Setting it to a plain core
is recommended.
Also see Container with Debug Symbols.
If you want to change the system-wide limit to allow for unlimited size core files for the mysql
user account, then you can do so by adding the following lines to a file in /etc/security/limits.d/. For example:
sudo tee /etc/security/limits.d/mariadb_core.conf <<EOF
mysql soft core unlimited
mysql hard core unlimited
EOF
The system would have to be restarted for this change to take effect.
See Configuring Linux for MariaDB: Configuring the Core File Size for more details.
If you are using Linux, then it can be helpful to change a few settings to alter where the core files is written and what file name is used. This is done by setting the kernel.core_pattern
and kernel.core_uses_pid
attributes. You can check the current values by executing the following:
sysctl kernel.core_pattern
sysctl kernel.core_uses_pid
If you are using mysql-test-run
and want to have the core as part of the test result, the optimal setting is probably the following (store cores in the current directory as core.<process id>):
sudo sysctl kernel.core_pattern=core.%p kernel.core_uses_pid=0
If you are using a production system, you probably want to have the core files in a specific directory, not in the data directory. The place to store cores can be temporarily altered using the sysctl utility, but it is often more common to alter it via the /proc file system. See the following example:
sudo mkdir /tmp/corefiles
sudo chmod 777 /tmp/corefiles
sudo sysctl kernel.core_pattern=/tmp/corefiles/core
sudo sysctl kernel.core_uses_pid=1
The above commands will tell the system to put core files in /tmp/corefiles
, and it also tells the system to put the process ID in the file name.
If you want to make these changes permanent, then you can add the following to a file in /etc/sysctl.d/. For example:
sudo tee /etc/sysctl.d/mariadb_core.conf <<EOF
kernel.core_pattern=/tmp/corefiles/core
kernel.core_uses_pid=1
EOF
Note: if you are using containers, the pid is always going to be 1, so this may not be a useful setting. Appending an identifier like %t to the kernel.core_pattern will generate more unique files.
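For example, a container-friendly pattern could include a timestamp instead of the PID so successive cores do not overwrite each other (a sketch; the directory is the same assumed /tmp/corefiles as above):

```
kernel.core_pattern=/tmp/corefiles/core.%t
```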
The value of kernel.core_pattern
is printed to the error log when the mariadbd
process crashes. That way, users can determine where the process may have dumped a core file. See MDEV-15051 for more information.
Note: Ensure that you have enough free disk space in the path pointed to by kernel.core_pattern
.
Core dump management can be automated using systemd, which then centrally manages all core dump files and provides information about detected core dumps and access to collected core files using the coredumpctl utility.
This is enabled by default on Red Hat Enterprise Linux 8 and CentOS 8, and maybe on other contemporary Linux distribution releases by now, too. It can easily be checked by looking at the kernel.core_pattern setting. If it looks like this, systemd-coredump is enabled:
# sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
On other distributions like Ubuntu (at least up to 21.10) it is not enabled by default, but can be set up manually.
To see all recent core dumps on the system you can then simply run
# coredumpctl list
Or you can check for MariaDB Server core dumps specifically with:
# coredumpctl list /usr/sbin/mariadbd
If an actual core file got stored you'll see present
in the COREFILE column of the output, you can then extract the core file with:
# coredumpctl dump -o mariadbd.core ...PID...
using the process id number from the PID column, or when you just want to retrieve the latest MariaDB Server related entry:
# coredumpctl dump -o mariadb.core /usr/sbin/mariadbd
Starting with systemd
248 it is also possible to invoke the gdb
debugger directly using the new --debugger-arguments=...
option, e.g. making the extraction of all thread backtraces from the most recent MariaDB server crash a one liner without even having to extract the core dump file first (requires gdb
to be installed):
# coredumpctl debug --debugger-arguments="-batch -ex 'thread apply all bt full'" /usr/sbin/mariadbd
So far none of the long-term support Linux distribution releases have a new enough systemd
version for this; the (as of this writing) still upcoming Ubuntu 22.04 "Jammy Jellyfish" will probably be the first to support it.
Since mariadbd
executes setuid, you may have to set fs.suid_dumpable=2
to allow core dumps on Linux. You can check the current fs.suid_dumpable
value by using the sysctl utility. For example:
sysctl fs.suid_dumpable
You can temporarily set it to 2
by using the sysctl utility. For example:
sudo sysctl -w fs.suid_dumpable=2
Or you can temporarily set it to 2
by writing to the /proc file system. For example:
echo 2 | sudo tee /proc/sys/fs/suid_dumpable
If you want to permanently set it to 2
then you can add the following to a file in /etc/sysctl.conf.d/:
sudo tee /etc/sysctl.d/mariadb_fs_suid_dumpable.conf <<EOF
fs.suid_dumpable=2
EOF
Note: If you don't want to change fs.suid_dumpable
, then another solution is to start mariadbd
directly as the mysql
user, so that the setuid call is not needed.
You can get a core dump from a running server with:
sudo gcore -o filename $(pidof mariadbd)
This will store a core dump in filename.pid
, where pid is the process ID of mariadbd
. mariadbd
will continue running and will not be affected by gcore
.
Another method is to force a core file for mariadbd
by sending the process the SIGABRT
signal, which has signal code 6
. This is very useful for getting the state of an unresponsive mariadbd
process. However, this will cause mariadbd
to crash, and crash recovery will be run on restart.
You can send the signal with the kill command. For example:
sudo kill -6 $(pidof mariadbd)
As an alternative to $(pidof mariadbd)
, you can find the process ID either by using the ps utility or by checking the file defined by the pid_file system variable.
Core dumps are enabled by default on Windows. See MDEV-18439 for more information.
There are some additional details related to using core files on Windows.
On Windows, the core file is created as a minidump file.
For details on how to configure and read the minidump file, see How to read the small memory dump file that is created by Windows if a crash occurs.
See the IBM Core Dump Handler project.
If your mariadbd
binary is built with Address Sanitizer (ASAN) then it will not be able to generate a core file.
Core files usually contain a dump of all memory in the process's full address space. This means that if a server has some large buffers configured (such as a large InnoDB buffer pool), then the server's core files can get very large.
Some large buffers have been excluded from core files on some systems as a way to reduce the size.
The following buffers are excluded:
InnoDB Redo log buffer (fixed 2M)
The buffers are only excluded on Linux when using kernel version 3.4 and above and when using a non-debug build of mariadbd
. Some Linux kernel versions have a bug which would cause the following warning to be printed to the log:
Sep 25 10:41:19 srv1 mysqld: 2018-09-25 10:41:19 0 [Warning] InnoDB: Failed to set memory to DODUMP: Invalid argument ptr 0x2aaac3400000 size 33554432
In those cases, the core dump may exclude some additional data. If that is not a concern, then the warning can be ignored. The problem can be fixed by upgrading to a Linux kernel version in which the bug is fixed.
This page is licensed: CC BY-SA / Gnu FDL
When mariadbd
crashes, it will write a stack trace in the error log by default. This is because the stack_trace option defaults to ON
. With a normal release build, this stack trace in the error log may look something like this:
2019-03-28 23:31:08 0x7ff4dc62d700 InnoDB: Assertion failure in file /home/buildbot/buildbot/build/mariadb-10.2.23/storage/innobase/rem/rem0rec.cc line 574
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
InnoDB: about forcing recovery.
190328 23:31:08 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.2.23-MariaDB-10.2.23+maria~stretch
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=234
max_threads=752
thread_count=273
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1783435 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7ff4d8001f28
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7ff4dc62ccc8 thread_stack 0x49000
*** buffer overflow detected ***: /usr/sbin/mysqld terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x70bfb)[0x7ffa09af5bfb]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7ffa09b7e437]
/lib/x86_64-linux-gnu/libc.so.6(+0xf7570)[0x7ffa09b7c570]
/lib/x86_64-linux-gnu/libc.so.6(+0xf93aa)[0x7ffa09b7e3aa]
/usr/sbin/mysqld(my_addr_resolve+0xe2)[0x55ca42284922]
/usr/sbin/mysqld(my_print_stacktrace+0x1bb)[0x55ca4226b1eb]
/usr/sbin/mysqld(handle_fatal_signal+0x41d)[0x55ca41d0a01d]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110e0)[0x7ffa0b4180e0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf)[0x7ffa09ab7fff]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7ffa09ab942a]
/usr/sbin/mysqld(+0x40f971)[0x55ca41ab8971]
/usr/sbin/mysqld(+0x887df6)[0x55ca41f30df6]
/usr/sbin/mysqld(+0x863673)[0x55ca41f0c673]
/usr/sbin/mysqld(+0x96648e)[0x55ca4200f48e]
/usr/sbin/mysqld(+0x89b559)[0x55ca41f44559]
/usr/sbin/mysqld(+0x8a15e4)[0x55ca41f4a5e4]
/usr/sbin/mysqld(+0x8a2187)[0x55ca41f4b187]
/usr/sbin/mysqld(+0x8b1a20)[0x55ca41f5aa20]
/usr/sbin/mysqld(+0x7f5c04)[0x55ca41e9ec04]
/usr/sbin/mysqld(_ZN7handler12ha_write_rowEPh+0x107)[0x55ca41d140d7]
/usr/sbin/mysqld(_Z12write_recordP3THDP5TABLEP12st_copy_info+0x72)[0x55ca41b4b992]
/usr/sbin/mysqld(_Z12mysql_insertP3THDP10TABLE_LISTR4ListI4ItemERS3_IS5_ES6_S6_15enum_duplicatesb+0x1206)[0x55ca41b560f6]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x3f68)[0x55ca41b6bee8]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_statebb+0x28a)[0x55ca41b70e4a]
/usr/sbin/mysqld(+0x4c864f)[0x55ca41b7164f]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcjbb+0x1a7c)[0x55ca41b737fc]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x176)[0x55ca41b748a6]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP7CONNECT+0x25a)[0x55ca41c3ec0a]
/usr/sbin/mysqld(handle_one_connection+0x3d)[0x55ca41c3ed7d]
/usr/sbin/mysqld(+0xb75791)[0x55ca4221e791]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7ffa0b40e4a4]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7ffa09b6dd0f]
If you plan to report a bug about the problem, then this information can be very useful for MariaDB's developers to track down the root cause. However, notice that some of the function names in the call stack are missing. In some cases, this partial stack trace may not be enough to find out exactly where the problem is.
A full stack trace can only be produced if you have debugging symbols for your mariadbd
binary.
Debug information is used by debugging tools to produce a meaningful stack trace. Importantly, these packages do not replace any existing production executables or in any way interfere with the way the production server ran before these packages were installed.
If you are obtaining a backtrace for a core dump, you can move the core dump to a different server that has the identical mariadb-server and debug info packages, and perform the backtrace there with no loss of information.
On some Linux distributions, you may be able to install debuginfo
packages that contain debugging symbols.
Currently, debuginfo
packages may not allow the server to print a nice stack trace in the error log; see MDEV-20738 for more information. However, they do allow users to extract full stack traces from core dumps.
The MariaDB yum
repository contains debuginfo packages.
On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum
has been replaced by dnf
, which is the next major version of yum
. However, yum
commands still work on many systems that use dnf
. For example:
sudo yum install MariaDB-server-debuginfo
See Installing MariaDB with yum/dnf: Installing Debug Info Packages with YUM for more information.
The MariaDB zypper
repository contains debuginfo packages.
On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using zypper. For example:
sudo zypper install MariaDB-server-debuginfo
See Installing MariaDB with zypper: Installing Debug Info Packages with ZYpp for more information.
These are for when you already installed MariaDB from a MariaDB mirror.
For Ubuntu an additional repository step is needed:
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el,s390x] https://ftp.osuosl.org/pub/mariadb/repo/10.5/ubuntu focal main/debug'
Adjust 10.5
to the major version you are debugging and focal
to the required distribution.
apt-get update && apt-get install -y mariadb-server-core-10.5-dbgsym
From MariaDB 10.9 the version isn't included in the package name and mariadb-server-core-dbgsym
can be used as the package name.
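The version-dependent package name can be selected with a small sketch (the names follow the text above; the version matching is an illustrative assumption):

```shell
# Pick the dbgsym package name by MariaDB major version:
# before 10.9 the version is part of the name, from 10.9 it is not.
major=10.9
case "$major" in
  10.[0-8]) pkg="mariadb-server-core-$major-dbgsym" ;;
  *)        pkg="mariadb-server-core-dbgsym" ;;
esac
echo "$pkg"
```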
If you used the MariaDB versions provided by Debian or Ubuntu see the following links.
For Debian see AutomaticDebugPackages
For Ubuntu see Debug Symbol Packages
Debugging symbols are available to install on Windows.
Debugging symbols can be installed with the MSI installer. Debugging symbols are not installed by default. You must perform a custom installation and explicitly choose to install debugging symbols.
The MSI installer can be downloaded from the MariaDB downloads page.
MariaDB also provides a ZIP package that contains debugging symbols on Windows.
The ZIP package that contains debugging symbol can be downloaded from the MariaDB downloads page.
These are currently only per major release version and are generated out of CI; they are always built from the latest version of the main branch on GitHub.
They are available at quay.io/mariadb-foundation/mariadb-debug (see the tags tab).
Use the container name quay.io/mariadb-foundation/mariadb-debug:VERSION
.
Where VERSION
corresponds to the major version you wish to test with.
Build using a Containerfile/Dockerfile:
ARG VERSION=10.11
FROM mariadb:$VERSION
RUN apt-get update \
&& apt-get install -y gdb mariadb-server-core-dbgsym=$(dpkg -s mariadb-server-core | awk '/Version/{print $NF}') ; \
rm -rf /var/lib/apt/lists/*
Build with:
buildah bud --tag mariadb_debug --build-arg VERSION=10.11.6 -f Containerfile .
Then you will have a mariadb_debug
container.
Example use:
podman run --entrypoint gdb --user mysql --volume mariadb_data:/var/lib/mysql mariadb_debug -ex r --args /usr/sbin/mariadbd
To enable core dumps, see Enabling Core Dumps for details.
At the bottom of the error log there will be some text about the core location including:
Writing a core file...
Working directory at /var/lib/mariadb
Resource Limits:
Limit Soft Limit Hard Limit Units
...
Max core file size unlimited unlimited bytes
...
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
Kernel version: Linux version 6.0.0-0.rc2.19.fc38.x86_64 (mockbuild@bkernel01.iad2.fedoraproject.org) (gcc (GCC) 12.2.1 20220819 (Red Hat 12.2.1-1), GNU ld version 2.39-2.fc38) #1 SMP PREEMPT_DYNAMIC Mon Aug 22 12:52:40 UTC 2022
If there was a core file size limit in the resource limits, there may be limited or no core file information.
If the core pattern begins with a |, then what follows is the executable that handled the core file during the crash. The following sections show a few techniques to access the core, depending on the pattern. If another program is used, look at its manual page to see how to get access to the core file.
If a plain core filename is in the "Core pattern" there's a good chance it will be in the Working directory location. It might have a .{process number}
suffix on the filename.
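A quick way to look for such a file; the path below is an example, assuming the default data directory:

```shell
# Look for plain core files (optionally with a .{pid} suffix) in the
# working directory / datadir. Adjust the path for your system.
datadir=/var/lib/mysql
cores=$(find "$datadir" -maxdepth 1 -name 'core*' 2>/dev/null || true)
echo "${cores:-no core files found in $datadir}"
```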
If you are running MariaDB in a container, the locations where the core dump can be generated are limited. Look at the container log; this is likely where the error log information is. The "Core pattern" of a Linux system is currently a global fixed value. The consequence is that if the core pattern refers to a program, that program is unlikely to exist in the container and won't be executed on the crash.
The system wide crash handler can be changed with sysctl kernel.core_pattern=core
to set this back to a file based crash. With this, the crash should occur in the working directory, normally the /var/lib/mysql
data directory of the container volume.
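The current setting can be inspected from inside or outside the container; a sketch:

```shell
# Show how the kernel will currently handle core dumps; this is a global,
# kernel-wide value shared by all containers on the host.
core_pattern=$(cat /proc/sys/kernel/core_pattern)
echo "Core pattern: $core_pattern"
# To switch back to a plain file-based core (run as root on the host):
#   sysctl kernel.core_pattern=core
```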
For systemd-coredump
, there is a program coredumpctl
to manage access.
coredumpctl list
TIME PID UID GID SIG COREFILE EXE >
Fri 2022-09-09 14:16:37 AEST 213571 1000 1000 SIGSEGV present /usr/sbin/mariadbd
To access the program using gdb
, coredumpctl debug
(which defaults to the last crash) will load the core dump in gdb. The instructions in the next section can then be used for extracting information.
See also: extracting core dumps with systemd-coredump.
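Besides coredumpctl debug, the raw core file can also be exported for later analysis. A sketch, assuming systemd-coredump has recorded a mariadbd crash; the output path is an example:

```shell
# Export the most recent mariadbd core dump to a file, if coredumpctl exists.
if command -v coredumpctl >/dev/null 2>&1; then
    coredumpctl dump mariadbd --output=/tmp/mariadbd.core 2>/dev/null \
        && status="exported to /tmp/mariadbd.core" \
        || status="no mariadbd core dump recorded"
else
    status="coredumpctl not installed"
fi
echo "$status"
```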
A core pattern of |/usr/libexec/abrt-hook-ccpp
indicates that the abrt
system is in use.
abrt-cli is a command line user interface for accessing the core file.
A core pattern of |/usr/share/apport/apport
indicates apport
.
For more information see Apport Project Wiki.
apport-retrace allows you to "Examine Locally" and run a gdb
session. Once you have gdb started, the instructions in the next section can be used for extracting information.
gdb
on Linux
To analyze the core file on Linux, you can use gdb.
For example, to open a core file with gdb, you could execute the following:
sudo gdb /usr/sbin/mariadbd /var/lib/mysql/core.932
Be sure to replace /usr/sbin/mariadbd
with the path to your mariadbd
binary (might be mysqld
on MariaDB 10.4 and earlier) and to also replace /var/lib/mysql/core.932
with the path to your core file.
Once gdb has opened the core file, if you want to log all output to a file, then you could execute the following commands:
set logging file /tmp/gdb_output.log
set logging on
If you do not execute set logging file
, then the set logging on
command creates a gdb.txt
in your current working directory. Redirecting the output to a file is useful, because it can make it easier to analyze. It also makes it easier to send the information to a MariaDB developer, if that becomes necessary.
Run any commands that you would like. For example, you could get the backtraces.
Once you are done, you can exit gdb by executing the quit command.
gdb
on Linux
On Linux, once you have debugging symbols for your mariadbd
binary, you can use the gdb utility to get backtraces, which are what gdb
calls stack traces. Backtraces can be obtained from a core file or from a running mariadbd
process.
Full backtraces are preferred and will contain function arguments, which can contain useful information such as query strings, so it can make the information easier to analyze.
To get a full backtrace of the main thread, you could execute the following:
bt -frame-arguments all full
If you want to get a full backtrace of all threads, then you could execute the following:
thread apply all bt -frame-arguments all full
If you want to get a full backtrace to a file to report a bug, the recommended way is to use gdb:
set logging on
set pagination off
set print frame-arguments all
thread apply all bt full
set logging off
This will write the full backtrace into the file gdb.txt
.
Sometimes it can be helpful to get full backtraces for all threads. The full backtraces will contain function arguments, which can contain useful information such as query strings, so it can make the information easier to analyze.
To get full backtraces for all threads from a mariadbd
core file, execute a command like the following:
sudo gdb --batch --eval-command="set print frame-arguments all" --eval-command="thread apply all bt full" /usr/sbin/mariadbd /var/lib/mysql/core.932 > mariadbd_full_bt_all_threads.txt
Be sure to replace /usr/sbin/mariadbd
with the path to your mariadbd
binary and to also replace /var/lib/mysql/core.932
with the path to your core dump.
The backtraces will be output to the file mariadbd_full_bt_all_threads.txt
.
mariadbd
Process
Sometimes it can be helpful to get full backtraces for all threads. The full backtraces will contain function arguments, which can contain useful information such as query strings, so it can make the information easier to analyze.
To get full backtraces for all threads from a running mariadbd
process, execute a command like the following:
sudo gdb --batch --eval-command="set print frame-arguments all" --eval-command="thread apply all bt full" /usr/sbin/mariadbd $(pgrep -xn mariadbd) > mariadbd_full_bt_all_threads.txt
Be sure to replace /usr/sbin/mariadbd
with the path to your mariadbd
binary.
The backtraces will be output to the file mariadbd_full_bt_all_threads.txt
.
Sometimes a system is too busy to obtain the backtrace in batch mode. If this is the case, gcore $(pidof mariadbd)
can save the core, and the backtrace can then be obtained from the dumped core.
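A sketch of that approach; the paths are examples, and gcore ships with gdb. The commands are only attempted when a mariadbd process is actually running:

```shell
# Snapshot a running server with gcore, then extract backtraces from the
# saved core in batch mode (gcore appends the pid to the output name).
pid=$(pidof mariadbd 2>/dev/null || true)
if [ -n "$pid" ]; then
    sudo gcore -o /tmp/mariadbd.core "$pid"
    sudo gdb --batch -ex 'set print frame-arguments all' \
        -ex 'thread apply all bt full' \
        /usr/sbin/mariadbd "/tmp/mariadbd.core.$pid" > mariadbd_full_bt_all_threads.txt
    result="backtrace written"
else
    result="mariadbd is not running"
fi
echo "$result"
```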
If the crash or assertion is repeatable it could be easiest to run mariadbd
under gdb.
The container image name here can be a prebuilt one from quay.io/mariadb-foundation/mariadb-debug
or an explicit version built yourself as above.
docker run -v datavolume:/var/lib/mysql/ --rm --user mysql quay.io/mariadb-foundation/mariadb-debug:10.11 gdb -ex r -ex 'thread apply all bt -frame-arguments all full' --args mariadbd
In docker-compose.yml form this looks like:
services:
mariadb:
image: quay.io/mariadb-foundation/mariadb-debug:10.11
volumes:
- mariadbdata:/var/lib/mysql
environment:
- MARIADB_ROOT_PASSWORD=bob
command: gdb -ex r -ex 'thread apply all bt -frame-arguments all full' --args mariadbd
user: mysql
volumes:
mariadbdata: {}
Note: this assumes the data directory is already initialized. Omit command
and user
if it is not.
If you wish to attach to an existing process in a container, the container needs to be started with the SYS_PTRACE capability. The sysctl kernel.yama.ptrace_scope
that allows this should also be set to 0.
$ podman run -v data:/var/lib/mysql/ --cap-add SYS_PTRACE --name mtest -d quay.io/mariadb-foundation/mariadb-debug:11.2
$ podman exec --user mysql mtest gdb -p 1 -ex 'thread apply all bt -frame-arguments all full'
Note: podman
has the same arguments and behaviour as docker
if you'd rather use that.
or in compose:
cap_add:
- SYS_PTRACE
The container process is always pid one, and here we use c
then thread apply all bt -frame-arguments all full
as the pre-loaded gdb
commands. When a particular signal, like an assertion failure or SEGV, is triggered, the backtrace will be displayed.
First, the sysctl kernel.core_pattern
needs to be core
. If it starts with a pipe character, the kernel will try to execute that program within the container. This is a kernel-wide setting and cannot be applied to a specific container. The bottom of a crash report will show you what it is set to.
With this set, just running a container with debug info is sufficient. The core should be dumped on the datadir volume. With the core
dump here, analysis can occur like:
podman run --rm --user mysql --volume data:/var/lib/mysql -i mariadb_debug gdb --batch --eval-command="set print frame-arguments all" --eval-command="thread apply all bt full" /usr/sbin/mariadbd /var/lib/mysql/core | tee mariadbd_full_bt_all_threads.txt
If you are concerned about debuggers running on your production database, you can also copy the database to another location.
This is useful when you know which statement crashed the server.
Just start mariadbd with the options --datadir=/copy-of-original-data-directory --core-file --stack-trace --socket=/tmp/mariadbd-alone.sock --skip-networking
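Put together, a sketch of the whole procedure; the paths are examples, and the server is only started when a data directory and the mariadbd binary actually exist:

```shell
# Run the server standalone on a copy of the data, so a debugger or a crash
# cannot disturb the production instance. Adjust paths for your system.
src=/var/lib/mysql
copy=/tmp/mariadb-data-copy
if [ -d "$src" ] && command -v mariadbd >/dev/null 2>&1; then
    cp -a "$src" "$copy"
    mariadbd --datadir="$copy" --core-file --stack-trace \
        --socket=/tmp/mariadbd-alone.sock --skip-networking
fi
```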
In order to disable stack traces in the error log, you can configure the skip_stack_trace option either on the command-line or in a relevant server option group in an option file. For example:
[mariadb]
...
skip_stack_trace
If you encounter some problem in MariaDB, then MariaDB's developers would appreciate if you would report a bug at the MariaDB JIRA bug tracker. Please include the following information:
Your full stack trace.
Your error log.
Your option files.
How to reproduce the problem.
SHOW CREATE TABLE {table} (for each table in the query) and EXPLAIN {query} if the crash is query related.
A MariaDB FTP server is available for large and/or sensitive information. Please upload it as a .tar.gz
or .zip
archive.
For very difficult or critical errors, you should consider uploading the following information to the MariaDB FTP server:
Your build of mariadbd
(if you compiled it), otherwise version information on the mariadb-server package.
Your core file.
Your contact information.
The associated JIRA issue identifier for the bug, if you reported a bug.
This information will allow MariaDB developers to analyze it and try to create a fix.
This page is licensed: CC BY-SA / Gnu FDL
This article provides a walkthrough on using the Process Monitor on Windows, tracing file system access by mysqld.exe during the "install plugin" call.
Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, registry and process/thread activity. It is part of the Sysinternals suite developed by Mark Russinovich and Bryce Cogswell. Process Monitor can be directly downloaded from ProcessMonitor.zip. More description can be found at https://learn.microsoft.com/en-us/sysinternals/downloads/procmon
There is no installation necessary; the single executable can be used after unpacking. I suggest putting procmon into some directory in the PATH environment variable.
The purpose of the following exercise is to learn how to use procmon to trace mysqld.exe calls to the filesystem.
We assume that mysqld.exe is already started.
Start procmon.exe. A dialog will pop up that offers to set a filter. Use this dialog to set the filter to "Process name" "is" "mysqld.exe", as shown in the screenshot below.
Click the "Add" button to include mysqld.exe in the filter, then "Apply" and "OK".
Capture events (menu File => Capture Events, Ctrl+E).
Start mysql command line client and connect to the server. Execute
mysql> install plugin blackhole soname 'ha_blackhole.dll';
Query OK, 0 rows affected (0.03 sec)
Saving the trace
Back to Process Monitor Windows, you should see the filesystem events initiated by the "INSTALL PLUGIN" operation
To save it, choose File/Save.
It is also possible to see stack traces corresponding to the events. For this to work, symbol support needs to be configured. This only needs to be done once.
Install Debugging Tools for Windows (google on how to do that).
Switch to Process Monitor's menu Options => Configure symbols.
Add dbghelp.dll from your installation of Debugging Tools into the "dbghelp.dll path" input field. On my system it is C:\Program Files\Debugging Tools for Windows (x64)\dbghelp.dll
In the "symbol path" input field, add srv*C:\symbols*https://msdl.microsoft.com/download/symbols;<path\to\your\installation\bin> (substitute the last path element with the real path to your installation)
This is how it looks on my machine:
Once symbols are configured, you'll get a stack trace corresponding to a filesystem event by simply doubleclicking on the line corresponding to the event. This is what I see after clicking on the first event of my tracing session (corresponds to opening my.ini file)
It is also possible to save the whole trace with callstacks as text (File/Save, choose XML, include callstack + resolve callstack).
This page is licensed: CC BY-SA / Gnu FDL
A MariaDB server configured with --with-debug=full
has the following differences from a normal MariaDB server:
You can have a maximum of 1000 tables locked at the same time in one statement. (Defined by MAX_LOCKS
in mysys/thr_lock.c.) This is to detect loops in the used lists.
You can have a maximum of 1000 threads locking the same table. (Defined by MAX_THREADS
in mysys/thr_lock.c.) This is to detect loops in the used lists.
Deadlock detection for mutexes is done at runtime. If wrong mutex handling is found, an error is printed to the error log. (Define SAFE_MUTEX
)
Memory overruns/underruns and memory that was not freed are reported to the error log. (Define SAFEMALLOC
)
You can get a trace of what mysqld
(and most other binaries) is doing by starting it with the --debug
option. The trace is usually put in /tmp
or C:\
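A sketch of the DBUG option syntax; the flags shown are assumed from the DBUG package (d enables debug output, t traces function calls, i adds thread ids, o,FILE redirects the trace to a file), and the command is only printed here since it requires a debug build:

```shell
# Hypothetical invocation of a debug build; writes the trace to /tmp.
debug_opts="d:t:i:o,/tmp/mariadbd.trace"
echo "mariadbd --debug=$debug_opts"
```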
This page is licensed: CC BY-SA / Gnu FDL