Configure multiple backup strategies and perform restoration.
A physical backup is a snapshot of the entire data directory (/var/lib/mysql), including all data files. This type of backup captures the exact state of the database at a specific point in time, allowing for quick restoration in case of data loss or corruption.
Physical backups are the recommended method for backing up MariaDB databases, especially in production environments, as they are faster and more efficient than logical backups.
Multiple strategies are available for performing physical backups, including:
mariadb-backup: Taken using mariadb-backup, which is available in the MariaDB Enterprise images. The operator supports scheduling Jobs to perform backups using this utility.
Kubernetes VolumeSnapshots: Use Kubernetes VolumeSnapshots to create snapshots of the persistent volumes used by the MariaDB Pods. This method relies on a compatible CSI (Container Storage Interface) driver that supports volume snapshots. See the VolumeSnapshots section for more details.
In order to use VolumeSnapshots, you will need to provide a VolumeSnapshotClass that is compatible with your storage provider. The operator will use this class to create snapshots of the persistent volumes:
For the rest of the compatible storage types, the mariadb-backup CLI will be used to perform the backup. For instance, to use S3 as backup storage:
Multiple storage types are supported for storing physical backups, including:
S3 compatible storage: Store backups in any S3 compatible storage, such as AWS S3 or MinIO.
Persistent Volume Claims (PVC): Use any of the StorageClasses available in your Kubernetes cluster to create a PersistentVolumeClaim (PVC) for storing backups.
Kubernetes Volumes: Store backups in any of the volume types supported by Kubernetes out of the box, such as NFS.
Physical backups can be scheduled using the spec.schedule field in the PhysicalBackup resource. The schedule is defined using a cron expression and allows you to specify how often backups should be taken:
If you want to immediately trigger a backup after creating the PhysicalBackup resource, you can set the immediate field to true. This will create a backup right away, regardless of the schedule.
If you want to suspend the schedule, you can set the suspend field to true. This will prevent any new backups from being created until the PhysicalBackup is resumed.
When using physical backups based on mariadb-backup, you are able to choose the compression algorithm used to compress the backup files. The available options are:
bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.
gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.
none: No compression.
To specify the compression algorithm, you can use the compression field in the PhysicalBackup resource:
compression is defaulted to none by the operator.
You can define a retention policy both for backups based on mariadb-backup and for VolumeSnapshots. The retention policy allows you to specify how long backups should be retained before they are automatically deleted. This can be defined via the maxRetention field in the PhysicalBackup resource:
When using physical backups based on mariadb-backup, the operator will automatically delete backup files in the specified storage that are older than the retention period.
When using VolumeSnapshots, the operator will automatically delete the VolumeSnapshot resources older than the retention period using the Kubernetes API.
Physical backups can only be restored in brand new MariaDB instances without any existing data. This means that you cannot restore a physical backup into an existing MariaDB instance that already has data.
To perform a restoration, you can specify a PhysicalBackup as restoration source under the spec.bootstrapFrom field in the MariaDB resource:
This will take into account the backup strategy and storage type used in the PhysicalBackup, and it will perform the restoration accordingly.
As an alternative, you can also provide a reference to an S3 bucket that was previously used to store the physical backup files:
It is important to note that the backupContentType field must be set to Physical when restoring from a physical backup. This ensures that the operator uses the correct restoration method.
To restore a VolumeSnapshot, you can provide a reference to a specific VolumeSnapshot resource in the spec.bootstrapFrom field:
By default, the operator will match the closest backup available to the current time. You can specify a different target recovery time by using the targetRecoveryTime field under spec.bootstrapFrom in the MariaDB resource. This lets you define the exact point in time you want to restore to:
By default, both backups based on mariadb-backup and VolumeSnapshots will have a timeout of 1 hour. You can change this timeout by using the timeout field in the PhysicalBackup resource:
When timed out, the operator will delete the Jobs or VolumeSnapshot resources associated with the PhysicalBackup resource. The operator will create new Jobs or VolumeSnapshots to retry the backup operation if the PhysicalBackup resource is still scheduled.
When taking backups based on mariadb-backup, you can specify extra options to be passed to the mariadb-backup command using the args field in the PhysicalBackup resource:
Refer to the mariadb-backup documentation for a list of available options.
Credentials for accessing an S3 compatible storage can be provided via the s3 key in the storage field of the PhysicalBackup resource. The credentials can be provided as a reference to a Kubernetes Secret:
Alternatively, if you are running in EKS, you can use dynamic credentials from an EKS Service Account using EKS Pod Identity or IRSA:
By leaving out the accessKeyIdSecretKeyRef and secretAccessKeySecretKeyRef credentials and pointing to the correct serviceAccountName, the backup Job will use the dynamic credentials from EKS.
When using S3 storage for backups, a staging area is used for keeping the external backups while they are being processed. By default, this staging area is an emptyDir volume, which means that the backups are temporarily stored in the node's local storage where the PhysicalBackup Job is scheduled. In production environments, large backups may lead to issues if the node doesn't have sufficient space, potentially causing the backup/restore process to fail.
Additionally, when restoring these backups, the operator will pull the backup files from S3, uncompress them if needed, and restore them to each of the MariaDB Pods in the cluster individually. To save network bandwidth and compute resources, a staging area is used to keep the uncompressed backup files after they have been restored to the first MariaDB Pod. This allows the operator to restore the same backup to the rest of the MariaDB Pods seamlessly, without needing to pull and uncompress the backup again.
To configure the staging area, you can use the stagingStorage field in the PhysicalBackup resource:
Similarly, you may also use a staging area when bootstrapping from a physical backup, under the spec.bootstrapFrom field in the MariaDB resource:
In the examples above, a PVC with the default StorageClass will be provisioned and used as the staging area.
VolumeSnapshots
Before using this feature, ensure that you meet the following prerequisites:
The snapshot controller and its CRs are installed in the cluster.
You have a compatible CSI driver that supports VolumeSnapshots installed in the cluster.
You have a VolumeSnapshotClass configured for your CSI driver.
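If you do not have one yet, a VolumeSnapshotClass might look like the following. This is a minimal sketch that assumes the CSI hostpath driver referenced in the examples on this page; the driver name and deletionPolicy depend on your storage provider:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
# Assumed driver: replace with the CSI driver deployed in your cluster.
driver: hostpath.csi.k8s.io
deletionPolicy: Delete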
The operator is capable of creating VolumeSnapshots of the PVCs used by the MariaDB Pods. This allows you to create point-in-time snapshots of your data in a Kubernetes-native way, leveraging the capabilities of your storage provider.
Most of the fields described in this documentation apply to VolumeSnapshots, including scheduling, retention policy, and compression. The main difference with the mariadb-backup based backups is that the operator will not create a Job to perform the backup, but instead it will create a VolumeSnapshot resource directly.
In order to create consistent, point-in-time snapshots of the MariaDB data, the operator will perform the following steps:
Execute a BACKUP STAGE START statement followed by BACKUP STAGE BLOCK_COMMIT in one of the secondary Pods.
Create a VolumeSnapshot resource of the data PVC mounted by the MariaDB secondary Pod.
Wait until the VolumeSnapshot resource becomes ready. When timing out, the operator will delete the VolumeSnapshot resource and retry the operation.
Issue a BACKUP STAGE END statement.
This backup process is described in the MariaDB documentation and is designed to be non-blocking.
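For reference, the statement sequence issued on the secondary Pod is roughly equivalent to the following SQL; the VolumeSnapshot of the data PVC is created between BLOCK_COMMIT and END:
BACKUP STAGE START;
BACKUP STAGE BLOCK_COMMIT;
-- the VolumeSnapshot of the data PVC is taken at this point
BACKUP STAGE END;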
Both for the mariadb-backup and VolumeSnapshot strategies, the enterprise operator performs non-blocking physical backups by leveraging BACKUP STAGE statements. This implies that backups are taken without long read locks, enabling consistent, production-grade backups with minimal impact on running workloads, ideal for high-availability and performance-sensitive environments.
When restoring a backup, the root credentials specified through the spec.rootPasswordSecretKeyRef field in the MariaDB resource must match the ones in the backup. These credentials are utilized by the liveness and readiness probes, and if they are invalid, the probes will fail, causing your MariaDB Pods to restart after the backup restoration.
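For example, assuming the MariaDB resource points rootPasswordSecretKeyRef at a Secret named mariadb with a root-password key (names are illustrative), that Secret must contain the same root password that was in effect when the backup was taken:
apiVersion: v1
kind: Secret
metadata:
  name: mariadb
stringData:
  # Must match the root password stored in the backup being restored.
  root-password: MariaDB11!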
When using backups based on mariadb-backup, restoring and uncompressing large backups can consume significant compute resources and may cause restoration Jobs to become stuck due to insufficient resources. To prevent this, you can define the compute resources allocated to the Job:
ReadWriteOncePod access mode partially supported
When using backups based on mariadb-backup, the data PVC used by the MariaDB Pod cannot use the ReadWriteOncePod access mode, as it needs to be mounted at the same time by both the MariaDB Pod and the PhysicalBackup Job. In this case, please use either the ReadWriteOnce or ReadWriteMany access modes instead.
Alternatively, if you want to keep using the ReadWriteOncePod access mode, you must use backups based on VolumeSnapshots, which do not require creating a Job to perform the backup and therefore avoid the volume sharing limitation.
PhysicalBackup Jobs scheduling
PhysicalBackup Jobs must mount the data PVC used by one of the secondary MariaDB Pods. To avoid scheduling issues caused by the commonly used ReadWriteOnce access mode, the operator schedules backup Jobs on the same node as MariaDB by default.
If you prefer to disable this behavior and allow Jobs to run on any node, you can set podAffinity=false:
This configuration may be suitable when using the ReadWriteMany access mode, which allows multiple Pods across different nodes to mount the volume simultaneously.
Custom columns are used to display the status of the PhysicalBackup resource:
To get a higher level of detail, you can also check the status field directly:
You may also check the related events for the PhysicalBackup resource to see if there are any issues:
mariadb-backup log copy incomplete: consider increasing innodb_log_file_size
In some situations, when using the mariadb-backup strategy, you may encounter the following error in the backup Job logs:
This can be addressed by increasing the innodb_log_file_size in the MariaDB configuration. You can do this by adding the following to your MariaDB resource:
Refer to the MariaDB documentation for further details on this issue.
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
storage:
volumeSnapshot:
volumeSnapshotClassName: csi-hostpath-snapclass
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
storage:
s3:
bucket: physicalbackups
endpoint: minio.minio.svc.cluster.local:9000
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: ca.crt
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
schedule:
cron: "*/1 * * * *"
suspend: false
immediate: true
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
compression: bzip2
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
maxRetention: 720h # 30 days
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
bootstrapFrom:
backupRef:
name: physicalbackup
kind: PhysicalBackup
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
bootstrapFrom:
s3:
bucket: physicalbackups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: ca.crt
backupContentType: Physical
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
bootstrapFrom:
volumeSnapshotRef:
name: physicalbackup-20250611163352
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
bootstrapFrom:
targetRecoveryTime: 2025-06-17T08:07:00Z
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
timeout: 2h
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
args:
- "--verbose"
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
storage:
s3:
bucket: physicalbackups
endpoint: minio.minio.svc.cluster.local:9000
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: ca.crt
apiVersion: v1
kind: ServiceAccount
metadata:
name: mariadb-backup
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<<account_id>>:role/my-role-irsa
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
serviceAccountName: mariadb-backup
storage:
s3:
bucket: physicalbackups
prefix: mariadb
endpoint: s3.us-east-1.amazonaws.com
region: us-east-1
tls:
enabled: true
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
storage:
s3:
bucket: physicalbackups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
region: us-east-1
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: ca.crt
stagingStorage:
persistentVolumeClaim:
resources:
requests:
storage: 1Gi
accessModes:
- ReadWriteOnce
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
bootstrapFrom:
s3:
bucket: physicalbackups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: ca.crt
backupContentType: Physical
stagingStorage:
persistentVolumeClaim:
resources:
requests:
storage: 1Gi
accessModes:
- ReadWriteOnce
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
bootstrapFrom:
restoreJob:
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
memory: 1Gi
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
name: physicalbackup
spec:
mariaDbRef:
name: mariadb
podAffinity: false
kubectl get physicalbackups
NAME COMPLETE STATUS MARIADB LAST SCHEDULED AGE
physicalbackup True Success mariadb 17s 17s
kubectl get physicalbackups physicalbackup -o json | jq -r '.status'
{
"conditions": [
{
"lastTransitionTime": "2025-07-14T07:01:14Z",
"message": "Success",
"reason": "JobComplete",
"status": "True",
"type": "Complete"
}
],
"lastScheduleCheckTime": "2025-07-14T07:00:00Z",
"lastScheduleTime": "2025-07-14T07:00:00Z",
"nextScheduleTime": "2025-07-15T07:00:00Z"
}
kubectl get events --field-selector involvedObject.name=physicalbackup
LAST SEEN TYPE REASON OBJECT MESSAGE
116s Normal WaitForFirstConsumer persistentvolumeclaim/physicalbackup waiting for first consumer to be created before binding
116s Normal JobScheduled physicalbackup/physicalbackup Job physicalbackup-20250714140837 scheduled
116s Normal ExternalProvisioning persistentvolumeclaim/physicalbackup Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
116s Normal Provisioning persistentvolumeclaim/physicalbackup External provisioner is provisioning volume for claim "default/physicalbackup"
113s Normal ProvisioningSucceeded persistentvolumeclaim/physicalbackup Successfully provisioned volume pvc-7b7c71f9-ea7e-4950-b612-2d41d7ab35b7
mariadb [00] 2025-08-04 09:15:57 Was only able to copy log from 58087 to 59916, not 68968; try increasing
innodb_log_file_size
mariadb mariabackup: Stopping log copying thread.
[00] 2025-08-04 09:15:57 Retrying read of log at LSN=59916
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
...
myCnf: |
[mariadb]
innodb_log_file_size=200M
A logical backup is a backup that contains the logical structure of the database, such as tables, indexes, and data, rather than the physical storage format. It is created using mariadb-dump, which generates SQL statements that can be used to recreate the database schema and populate it with data.
Logical backups serve not only as a source of restoration, but also enable data mobility between MariaDB instances. These backups are called "logical" because they are independent of the MariaDB topology, as they only contain DDLs and INSERT statements to populate data.
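As an illustration of what "logical" means here, the dump produced by mariadb-dump is plain SQL along these lines (an invented excerpt, not taken from a real backup):
-- Schema definition (DDL)
CREATE DATABASE IF NOT EXISTS `app`;
USE `app`;
CREATE TABLE `users` (
  `id` INT NOT NULL PRIMARY KEY,
  `name` VARCHAR(255) NOT NULL
);
-- Data population
INSERT INTO `users` VALUES (1,'alice'),(2,'bob');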
Although logical backups are a great fit for data mobility and migrations, they are not as efficient as physical backups for large databases. For this reason, physical backups are the recommended method for backing up MariaDB databases, especially in production environments.
Currently, the following storage types are supported:
S3 compatible storage: Store backups in any S3 compatible storage, such as AWS S3 or MinIO.
PVCs: Use the StorageClasses available in your Kubernetes cluster to provision a PVC dedicated to storing the backup files.
Kubernetes volumes: Use any of the volume types supported natively by Kubernetes.
Our recommendation is to store the backups externally in an S3 compatible storage.
Backup CR
You can take a one-time backup of your MariaDB instance by declaring the following resource:
This will use the default StorageClass to provision a PVC that will hold the backup files, but ideally you should use an S3 compatible storage:
By providing the authentication details and the TLS configuration via references to Secret keys, this example will store the backups in a local Minio instance.
Alternatively you can use dynamic credentials from an EKS Service Account using EKS Pod Identity or IRSA:
By leaving out the accessKeyIdSecretKeyRef and secretAccessKeySecretKeyRef credentials and pointing to the correct serviceAccountName, the backup Job will use the dynamic credentials from EKS.
To minimize the Recovery Point Objective (RPO) and mitigate the risk of data loss, it is recommended to perform backups regularly. You can do so by providing a spec.schedule in your Backup resource:
This resource gets reconciled into a CronJob that periodically takes the backups.
It is important to note that regularly scheduled Backups pair very well with the target recovery time feature detailed below.
Given that the backups can consume a substantial amount of storage, it is crucial to define your retention policy by providing the spec.maxRetention field in your Backup resource:
You are able to compress backups by providing the compression algorithm you want to use in the spec.compression field:
Currently the following compression algorithms are supported:
bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.
gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.
none: No compression.
compression is defaulted to none by the operator.
Restore CR
You can easily restore a Backup in your MariaDB instance by creating the following resource:
This will trigger a Job that will mount the same storage as the Backup and apply the dump to your MariaDB database.
Nevertheless, the Restore resource doesn't necessarily need to specify a spec.backupRef; you can point to another storage source that contains backup files, for example an S3 bucket:
If you have multiple backups available, especially after configuring a scheduled Backup, the operator is able to infer which backup to restore based on the spec.targetRecoveryTime field.
The operator will look for the closest backup available and utilize it to restore your MariaDB instance.
By default, spec.targetRecoveryTime will be set to the current time, which means that the latest available backup will be used.
Bootstrap new MariaDB instances
To minimize your Recovery Time Objective (RTO) and to swiftly spin up new clusters from existing Backups, you can provide a Restore source directly in the MariaDB object via the spec.bootstrapFrom field:
As in the Restore resource, you don't strictly need to specify a reference to a Backup, you can provide other storage types that contain backup files:
Under the hood, the operator creates a Restore object just after the MariaDB resource becomes ready. The advantage of using spec.bootstrapFrom over a standalone Restore is that the MariaDB is bootstrap-aware and this will allow the operator to hold primary switchover/failover operations until the restoration is finished.
By default, all the logical databases are backed up when a Backup is created, but you may also select specific databases by providing the databases field:
When it comes to restore, all the databases available in the backup will be restored, but you may also choose a single database to be restored via the database field available in the Restore resource:
There are a couple of points to consider here:
The referred database (db1 in the example) must previously exist for the Restore to succeed.
The mariadb CLI invoked by the operator under the hood only supports selecting a single database to restore via the --one-database option; restoring multiple specific databases is not supported.
Not all the flags supported by mariadb-dump and mariadb have their counterpart field in the Backup and Restore CRs respectively, but you may pass extra options by using the args field. For example, setting the --verbose flag can be helpful to track the progress of backup and restore operations:
Refer to the mariadb-dump and mariadb CLI documentation for the available options.
When using S3 storage for backups, a staging area is used for keeping the external backups while they are being processed. By default, this staging area is an emptyDir volume, which means that the backups are temporarily stored in the node's local storage where the Backup/Restore Job is scheduled. In production environments, large backups may lead to issues if the node doesn't have sufficient space, potentially causing the backup/restore process to fail.
To overcome this limitation, you are able to define your own staging area by setting the stagingStorage field in both the Backup and Restore CRs:
In the examples above, a PVC with the default StorageClass will be used as the staging area. Refer to the API reference for more configuration options.
Similarly, you may also use a custom staging area when bootstrapping a MariaDB from a backup, under the spec.bootstrapFrom field:
When restoring a backup, the root credentials specified through the spec.rootPasswordSecretKeyRef field in the MariaDB resource must match the ones in the backup. These credentials are utilized by the liveness and readiness probes, and if they are invalid, the probes will fail, causing your MariaDB Pods to restart after the backup restoration.
Restoring large backups can consume significant compute resources and may cause Restore Jobs to become stuck due to insufficient resources. To prevent this, you can define the compute resources allocated to the Job:
mysql.global_priv
Galera only replicates tables using the InnoDB engine; see the Galera documentation for details.
This does not include mysql.global_priv, the table used to store users and grants, which uses the MyISAM engine. This basically means that a Galera instance with mysql.global_priv populated will not replicate this data to an empty Galera instance. However, DDL statements (CREATE USER, ALTER USER ...) will be replicated.
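If you want to check the storage engine of this table in your own instance, a query along these lines will show it:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'global_priv';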
With this in mind, consider a restore scenario where:
The backup file includes a DROP TABLE statement for the mysql.global_priv table.
The backup has some INSERT statements for the mysql.global_priv table.
The Galera cluster has 3 nodes: galera-0, galera-1 and galera-2.
This is what happens behind the scenes while restoring the backup:
The backup is restored in galera-0.
The DROP TABLE statement is a DDL, so it will be executed in galera-0, galera-1 and galera-2.
The INSERT statements are not DDLs, so they will only be applied to galera-0, resulting in galera-1 and galera-2 not having the mysql.global_priv table.
After the backup is fully restored, the liveness and readiness probes will kick in; they will succeed in galera-0, but they will fail in galera-1 and galera-2, as they rely on the root credentials available in mysql.global_priv, resulting in galera-1 and galera-2 getting restarted.
To address this issue, when backing up MariaDB instances with Galera enabled, the mysql.global_priv table will be excluded from backups by using the --ignore-table option with mariadb-dump. This prevents the replication of the DROP TABLE statement for the mysql.global_priv table. You can opt out of this feature by setting spec.ignoreGlobalPriv=false in the Backup resource.
Also, to avoid situations where mysql.global_priv is unreplicated, all the entries in that table must be managed via DDLs. This is the recommended approach suggested in the Galera documentation. There are a couple of ways to guarantee this:
Use the rootPasswordSecretKeyRef, username and passwordSecretKeyRef fields of the MariaDB CR to create the root and initial user respectively. These fields are translated into DDLs by the image entrypoint, as shown in the sketch below.
Rely on the User and Grant CRs to create additional users and grants. Refer to the SQL resources documentation for further detail.
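As a sketch of the first approach, the following MariaDB snippet declares the root and an initial application user purely through CR fields. The field names follow the MariaDB CR used throughout this page; the Secret names, keys and user/database names are illustrative:
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  # Root credentials, applied as DDLs by the image entrypoint.
  rootPasswordSecretKeyRef:
    name: mariadb
    key: root-password
  # Initial application user and database.
  username: app
  passwordSecretKeyRef:
    name: mariadb
    key: password
  database: app
  replicas: 3
  galera:
    enabled: true
  storage:
    size: 1Gi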
LOCK TABLES
Galera is not compatible with the LOCK TABLES statement.
For this reason, the operator automatically adds the --skip-add-locks option to the Backup to overcome this limitation.
Migrating an external MariaDB to a MariaDB running in Kubernetes
You can leverage logical backups to bring your external MariaDB data into a new MariaDB instance running in Kubernetes. Follow this runbook to do so:
Take a logical backup of your external MariaDB using one of the commands below:
If you are using Galera or planning to migrate to a Galera instance, make sure you understand the Galera limitations described above and use the following command instead:
Ensure that your backup file is named in the following format: backup.2024-08-26T12:24:34Z.sql. If the file name does not follow this format, it will be ignored by the operator.
Upload the backup file to one of the supported storage types. We recommend using S3.
Create your MariaDB resource declaring that you want to bootstrap from a backup and providing a targetRecoveryTime that matches the backup:
If you are using Galera in your new instance, migrate your previous users and grants to use the User and Grant CRs. Refer to the SQL resources documentation for further detail.
Migrating to a MariaDB with a different topology
Database mobility between MariaDB instances with different topologies is possible with logical backups. However, there are a couple of technical details that you need to be aware of in the following scenarios:
Between non-Galera MariaDBs: this should be fully compatible; no issues have been detected.
When Galera MariaDBs are involved: there are a couple of limitations regarding backups in Galera, so please make sure you read the mysql.global_priv section above before proceeding.
To overcome these limitations, the Backup in the standalone/replicated instance needs to be taken with spec.ignoreGlobalPriv=true. In the following example, we are backing up a standalone MariaDB (single instance):
Once the previous Backup is completed, we will be able to bootstrap a new Galera instance from it:
Pods restarting after bootstrapping from a backup
Please make sure you understand the Galera limitations described in the mysql.global_priv section above.
After doing so, ensure that your backup does not contain a DROP TABLE mysql.global_priv; statement, as it will cause your liveness and readiness probes to fail after the backup restoration.
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
storage:
persistentVolumeClaim:
resources:
requests:
storage: 100Mi
accessModes:
- ReadWriteOnce
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
storage:
s3:
bucket: backups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
region: us-east-1
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: tls.crt
apiVersion: v1
kind: ServiceAccount
metadata:
name: mariadb-backup
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<<account_id>>:role/my-role-irsa
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
serviceAccountName: mariadb-backup
storage:
s3:
bucket: backups
prefix: mariadb
endpoint: s3.us-east-1.amazonaws.com
region: us-east-1
tls:
enabled: true
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
schedule:
cron: "*/1 * * * *"
suspend: false
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
maxRetention: 720h # 30 days
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
compression: gzip
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
name: restore
spec:
mariaDbRef:
name: mariadb
backupRef:
name: backup
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
name: restore
spec:
mariaDbRef:
name: mariadb
s3:
bucket: backups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
region: us-east-1
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: tls.crt
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
name: restore
spec:
mariaDbRef:
name: mariadb
backupRef:
name: backup
targetRecoveryTime: 2023-12-19T09:00:00Z
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-from-backup
spec:
storage:
size: 1Gi
bootstrapFrom:
backupRef:
name: backup
targetRecoveryTime: 2023-12-19T09:00:00Z
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-from-backup
spec:
storage:
size: 1Gi
bootstrapFrom:
s3:
bucket: backups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: tls.crt
targetRecoveryTime: 2023-12-19T09:00:00Z
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
databases:
- db1
- db2
- db3
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
name: restore
spec:
mariaDbRef:
name: mariadb
backupRef:
name: backup
database: db1
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
args:
- --verbose
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
name: restore
spec:
mariaDbRef:
name: mariadb
backupRef:
name: backup
args:
- --verbose
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
storage:
s3:
...
stagingStorage:
persistentVolumeClaim:
resources:
requests:
storage: 10Gi
accessModes:
- ReadWriteOnce
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
name: restore
spec:
s3:
...
stagingStorage:
persistentVolumeClaim:
resources:
requests:
storage: 10Gi
accessModes:
- ReadWriteOnce
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
bootstrapFrom:
s3:
...
stagingStorage:
persistentVolumeClaim:
resources:
requests:
storage: 10Gi
accessModes:
- ReadWriteOnce
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
storage:
size: 1Gi
bootstrapFrom:
restoreJob:
args:
- --verbose
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
memory: 1Gi
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup
spec:
mariaDbRef:
name: mariadb
ignoreGlobalPriv: false
mariadb-dump --user=${MARIADB_USER} --password=${MARIADB_PASSWORD} --host=${MARIADB_HOST} --single-transaction --events --routines --all-databases > backup.2024-08-26T12:24:34Z.sql
mariadb-dump --user=${MARIADB_USER} --password=${MARIADB_PASSWORD} --host=${MARIADB_HOST} --single-transaction --events --routines --all-databases --skip-add-locks --ignore-table=mysql.global_priv > backup.2024-08-26T12:24:34Z.sql
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
rootPasswordSecretKeyRef:
name: mariadb
key: root-password
replicas: 3
galera:
enabled: true
storage:
size: 1Gi
bootstrapFrom:
s3:
bucket: backups
prefix: mariadb
endpoint: minio.minio.svc.cluster.local:9000
accessKeyIdSecretKeyRef:
name: minio
key: access-key-id
secretAccessKeySecretKeyRef:
name: minio
key: secret-access-key
tls:
enabled: true
caSecretKeyRef:
name: minio-ca
key: tls.crt
targetRecoveryTime: 2024-08-26T12:24:34Z
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
name: backup-standalone
spec:
mariaDbRef:
name: mariadb-standalone
ignoreGlobalPriv: true
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
replicas: 3
galera:
enabled: true
storage:
size: 1Gi
bootstrapFrom:
backupRef:
name: backup-standalone