Columnar storage engine with S3-compatible object storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales reads via MaxScale
Bulk data import
This procedure describes the deployment of the ColumnStore Object Storage topology with MariaDB Enterprise Server 11.4, MariaDB Enterprise ColumnStore 23.10, and MariaDB MaxScale 22.08.
MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.
This procedure has 9 steps, which are executed in sequence.
This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Prepare ColumnStore Nodes
Configure Shared Local Storage
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server
Test MariaDB Enterprise Server
Install MariaDB MaxScale
Start and Configure MariaDB MaxScale
Test MariaDB MaxScale
Import Data
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
MariaDB Enterprise Server: Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
MariaDB MaxScale: Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers.
MariaDB Enterprise ColumnStore: Columnar storage engine that is highly available, optimized for Online Analytical Processing (OLAP) workloads, and provides scalable query execution.
Cluster Management API (CMAPI): Provides a REST API for multi-node administration.
This procedure references the following MaxScale components:
Listener: Listens for client connections to MaxScale, then passes them to the router service.
MariaDB Monitor: Tracks changes in the state of MariaDB Enterprise Servers.
Read Connection Router: Routes connections from the listener to any available Enterprise ColumnStore node.
Read/Write Split Router: Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server.
Server Module: Connection configuration in MaxScale to an Enterprise ColumnStore node.
The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and virtually unlimited data storage by leveraging S3-compatible object storage.
The topology consists of:
One or more MaxScale nodes
An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI
The MaxScale nodes:
Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)
Accept client and application connections
Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)
The ColumnStore nodes:
Receive queries from MaxScale
Execute queries
Use S3-compatible object storage for data
Use shared local storage for the Storage Manager directory
These requirements are for the ColumnStore Object Storage topology as deployed by this procedure.
Node Count
Operating System
Minimum Hardware Requirements
Recommended Hardware Requirements
Storage Requirements
S3-Compatible Object Storage Requirements
Preferred Object Storage Providers: Cloud
Preferred Object Storage Providers: Hardware
Shared Local Storage Directories
Shared Local Storage Options
MaxScale nodes: 1 or more required.
Enterprise ColumnStore nodes: 3 or more required for high availability. Always use an odd number of nodes in a multi-node ColumnStore deployment to avoid split-brain scenarios.
In alignment with the enterprise lifecycle, this ColumnStore Object Storage topology is provided for:
CentOS Linux 7 (x86_64)
Debian 10 (x86_64)
Red Hat Enterprise Linux 7 (x86_64)
Red Hat Enterprise Linux 8 (x86_64)
Ubuntu 18.04 LTS (x86_64)
Ubuntu 20.04 LTS (x86_64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but they can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MaxScale node: 4+ cores, 4+ GB memory
Enterprise ColumnStore node: 4+ cores, 4+ GB memory
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
And the following error message will be raised to the client:
ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
MaxScale node: 8+ cores, 16+ GB memory
Enterprise ColumnStore node: 64+ cores, 128+ GB memory
The ColumnStore Object Storage topology requires the following storage types:
The ColumnStore Object Storage topology uses S3-compatible object storage to store data.
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
The ColumnStore Object Storage topology uses S3-compatible object storage to store data.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:
The use of S3-compatible object storage services other than the preferred providers is at your own risk.
If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.
Amazon Web Services (AWS) S3
Google Cloud Storage
Azure Storage
Alibaba Cloud Object Storage Service
Cloudian HyperStore
Cohesity S3
Dell EMC
IBM Cloud Object Storage
Seagate Lyve Rack
Quantum ActiveScale
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
The Storage Manager directory is located at the following path by default:
/var/lib/columnstore/storagemanager
The most common shared local storage options for the ColumnStore Object Storage topology are:
EBS (Elastic Block Store) Multi-Attach
AWS
EBS is a high-performance block-storage service for AWS (Amazon Web Services).
EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
EFS (Elastic File System)
AWS
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).
For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
Filestore
GCP
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.
GlusterFS
On-premises
GlusterFS is a distributed file system.
GlusterFS supports replication and failover.
NFS (Network File System)
On-premises
NFS is a distributed file system.
If NFS is used, the storage should be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.
For best results, MariaDB Corporation recommends the following storage options:
AWS: Amazon S3 storage for data; EBS Multi-Attach or EFS for the Storage Manager directory
GCP: Google Object Storage (S3-compatible) for data; Filestore for the Storage Manager directory
On-premises: Any S3-compatible object storage for data; NFS for the Storage Manager directory
Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.
Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.
Many programming languages also have libraries for interacting with REST APIs.
The examples below show how to use the CMAPI with curl. CMAPI endpoint URLs use the following format:
https://{server}:{port}/cmapi/{version}/{route}/{command}
For example:
https://mcs1:8640/cmapi/0.4.0/cluster/shutdown
https://mcs1:8640/cmapi/0.4.0/cluster/start
https://mcs1:8640/cmapi/0.4.0/cluster/status
With CMAPI 1.4 and later:
https://mcs1:8640/cmapi/0.4.0/cluster/node
With CMAPI 1.3 and earlier:
https://mcs1:8640/cmapi/0.4.0/cluster/add-node
https://mcs1:8640/cmapi/0.4.0/cluster/remove-node
Each CMAPI call must include the following HTTP headers:
'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'
'Content-Type': 'application/json'
The x-api-key header can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
With CMAPI 1.4 and later:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
With CMAPI 1.3 and earlier:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
With CMAPI 1.4 and later:
$ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
With CMAPI 1.3 and earlier:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
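For example, a minimal sketch of the SQL method (max_connections is used here purely as an illustrative dynamic system variable):
$ sudo mariadb -e "SET GLOBAL max_connections = 500;"
$ sudo mariadb -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"
Changes made with SET GLOBAL do not persist across restarts; persistent settings belong in a configuration file.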
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
Example configuration file paths by distribution:
CentOS, Red Hat Enterprise Linux (RHEL): /etc/my.cnf.d/z-custom-mariadb.cnf
Debian, Ubuntu: /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
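For example, a minimal sketch of creating such a custom configuration file on CentOS / RHEL (the log_error setting is illustrative; adjust the path for Debian / Ubuntu):
$ sudo tee /etc/my.cnf.d/z-custom-mariadb.cnf <<'EOF'
[mariadb]
log_error = mariadbd.err
EOF
The server must then be restarted to apply the change.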
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
For additional information, see "Starting and Stopping MariaDB".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
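For example, a quick check of the data directory and error log locations on a running server, assuming the root@localhost account:
$ sudo mariadb -e "SELECT @@datadir, @@log_error;"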
The systemctl command is used to start and stop the ColumnStore service.
Start
sudo systemctl start mariadb-columnstore
Stop
sudo systemctl stop mariadb-columnstore
Restart
sudo systemctl restart mariadb-columnstore
Enable during startup
sudo systemctl enable mariadb-columnstore
Disable during startup
sudo systemctl disable mariadb-columnstore
Status
sudo systemctl status mariadb-columnstore
In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.
The systemctl command is used to start and stop the CMAPI service.
Start
sudo systemctl start mariadb-columnstore-cmapi
Stop
sudo systemctl stop mariadb-columnstore-cmapi
Restart
sudo systemctl restart mariadb-columnstore-cmapi
Enable during startup
sudo systemctl enable mariadb-columnstore-cmapi
Disable during startup
sudo systemctl disable mariadb-columnstore-cmapi
Status
sudo systemctl status mariadb-columnstore-cmapi
For additional information on endpoints, see "CMAPI".
MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.
Command-line utility to perform administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
The procedure on these pages configures MaxScale using MaxCtrl.
The systemctl command is used to start and stop the MaxScale service.
Start
sudo systemctl start maxscale
Stop
sudo systemctl stop maxscale
Restart
sudo systemctl restart maxscale
Enable during startup
sudo systemctl enable maxscale
Disable during startup
sudo systemctl disable maxscale
Status
sudo systemctl status maxscale
For additional information, see "Starting and Stopping MariaDB".
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
Next: Step 1: Prepare ColumnStore Nodes.
This page details step 1 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step prepares systems to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
Use the sysctl command to set the kernel parameters at runtime
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
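Optionally, confirm that a parameter took effect. For example:
$ sysctl vm.swappiness
vm.swappiness = 1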
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
$ sudo setenforce permissive
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Confirm that SELinux is in permissive mode:
$ sudo getenforce
Permissive
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
$ sudo systemctl disable apparmor
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
AppArmor will be configured and re-enabled later in this deployment procedure.
MariaDB Enterprise ColumnStore requires the following TCP ports:
3306: MariaDB Client traffic
8600-8630: Inter-node communication (port range)
8640: CMAPI
8700: Inter-node communication
8800: Inter-node communication
The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.
The firewall will be configured and re-enabled later in this deployment procedure.
The steps to disable the firewall depend on the specific firewall used by the operating system.
Check if the firewalld service is running:
$ sudo systemctl status firewalld
If the firewalld service is running, stop it:
$ sudo systemctl stop firewalld
Firewalld will be configured and re-enabled later in this deployment procedure.
Check if the UFW service is running:
$ sudo ufw status verbose
If the UFW service is running, stop it:
$ sudo ufw disable
UFW will be configured and re-enabled later in this deployment procedure.
To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.
Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.
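As an illustrative sketch only (sg-0123456789abcdef0 is a placeholder security group ID), a self-referencing rule that opens all protocols and ports between members of the same security group can be added with the AWS CLI:
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --source-group sg-0123456789abcdef0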
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
$ sudo yum install glibc-locale-source glibc-langpack-en
Set the system's locale to en_US.UTF-8 by executing localedef:
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.
On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:
192.0.2.1 mcs1
192.0.2.2 mcs2
192.0.2.3 mcs3
192.0.2.100 mxs1
Replace the IP addresses with the addresses in your own environment.
With the ColumnStore Object Storage topology, it is important to create the S3 bucket before you start ColumnStore. All Enterprise ColumnStore nodes access data from the same bucket.
If you already have an S3 bucket, confirm that the bucket is empty.
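For example, assuming the AWS CLI is configured and your_columnstore_bucket_name is a placeholder for your bucket, an empty listing confirms that the bucket is empty:
$ aws s3 ls s3://your_columnstore_bucket_name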
S3 bucket configuration will be performed later in this procedure.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 1 of 9.
Next: Step 2: Configure Shared Local Storage.
This page details step 2 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step configures shared local storage on systems hosting Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
In a ColumnStore Object Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory to be located on shared local storage.
The Storage Manager directory is at the following path:
/var/lib/columnstore/storagemanager
The Storage Manager directory must be mounted on every ColumnStore node.
Select a Shared Local Storage solution for the Storage Manager directory:
For additional information, see "Shared Local Storage Options".
EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For Enterprise ColumnStore deployments in AWS:
EBS Multi-Attach is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EBS Multi-Attach.
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).
For deployments in AWS:
EFS is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EFS.
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For Enterprise ColumnStore deployments in GCP:
Filestore is the recommended option for the Storage Manager directory.
Google Object Storage (S3-compatible) is the recommended option for data.
Consult the vendor documentation for details on how to configure Filestore.
GlusterFS is a distributed file system.
GlusterFS is a shared local storage option, but it is not one of the recommended options.
For more information, see "Recommended Storage Options".
On each Enterprise ColumnStore node, install GlusterFS.
Install on CentOS / RHEL 8 (YUM):
$ sudo yum install --enablerepo=PowerTools glusterfs-server
Install on CentOS / RHEL 7 (YUM):
$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server
Install on Debian (APT):
$ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | sudo apt-key add -
$ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
$ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
$ DEBARCH=$(dpkg --print-architecture)
$ echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main | sudo tee /etc/apt/sources.list.d/gluster.list
$ sudo apt update
$ sudo apt install glusterfs-server
Install on Ubuntu (APT):
$ sudo apt update
$ sudo apt install glusterfs-server
Start the GlusterFS daemon:
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
Before you can create a volume with GlusterFS, you must probe each node from a peer node.
On the primary node, probe all of the other cluster nodes:
$ sudo gluster peer probe mcs2
$ sudo gluster peer probe mcs3
On one of the replica nodes, probe the primary node to confirm that it is connected:
$ sudo gluster peer probe mcs1
peer probe: Host mcs1 port 24007 already in peer list
On the primary node, check the peer status:
$ sudo gluster peer status
Number of Peers: 2
Hostname: mcs2
Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
State: Peer in Cluster (Connected)
Hostname: mcs3
Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
State: Peer in Cluster (Connected)
Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.
On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:
$ sudo mkdir -p /brick/storagemanager
On the primary node, create the GlusterFS volumes:
$ sudo gluster volume create storagemanager \
replica 3 \
mcs1:/brick/storagemanager \
mcs2:/brick/storagemanager \
mcs3:/brick/storagemanager \
force
On the primary node, start the volume:
$ sudo gluster volume start storagemanager
On each Enterprise ColumnStore node, create mount points for the volumes:
$ sudo mkdir -p /var/lib/columnstore/storagemanager
On each Enterprise ColumnStore node, add the mount points to /etc/fstab:
127.0.0.1:storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0
On each Enterprise ColumnStore node, mount the volumes:
$ sudo mount -a
NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments:
NFS is the recommended option for the Storage Manager directory.
Any S3-compatible storage is the recommended option for data.
Consult the documentation for your NFS implementation for details on how to configure NFS.
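As a hedged sketch, assuming a hypothetical NFS server named nfs1 exporting /exports/storagemanager, the /etc/fstab entry on each Enterprise ColumnStore node might look like this:
nfs1:/exports/storagemanager /var/lib/columnstore/storagemanager nfs defaults,sync,_netdev 0 0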
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 2 of 9.
Next: Step 3: Install MariaDB Enterprise Server.
This page details step 3 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step installs MariaDB Enterprise Server, MariaDB Enterprise ColumnStore 23.10, CMAPI, and dependencies.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
$ sudo yum install curl
Install on Debian / Ubuntu (APT):
$ sudo apt install curl apt-transport-https
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "4d483b4df193831a0101d3dfa7fb3e17411dda7fc06c31be4f9e089c325403c0 mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"
On each Enterprise ColumnStore node, install additional dependencies:
Install on CentOS and RHEL (YUM):
$ sudo yum install jemalloc jq curl
Install on Debian 9 and Ubuntu 18.04 (APT):
$ sudo apt install libjemalloc1 jq curl
Install on Debian 10 and Ubuntu 20.04 (APT):
$ sudo apt install libjemalloc2 jq curl
On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine \
MariaDB-columnstore-cmapi
Install on Debian / Ubuntu (APT):
$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore \
mariadb-columnstore-cmapi
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 3 of 9.
Next: Step 4: Start and Configure MariaDB Enterprise Server.
This page details step 4 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step starts and configures MariaDB Enterprise Server, and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
$ sudo systemctl stop mariadb
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
$ sudo systemctl stop mariadb-columnstore
On each Enterprise ColumnStore node, stop the CMAPI service:
$ sudo systemctl stop mariadb-columnstore-cmapi
On each Enterprise ColumnStore node, configure Enterprise Server. Mandatory system variables and options for ColumnStore Object Storage include:
character_set_server: Set this system variable to utf8.
collation_server: Set this system variable to utf8_general_ci.
columnstore_use_import_for_batchinsert: Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
gtid_strict_mode: Set this system variable to ON.
log_bin: Set this option to the file you want to use for the binary log. Setting this option enables binary logging.
log_bin_index: Set this option to the file you want to use to track binlog filenames.
log_slave_updates: Set this system variable to ON.
relay_log: Set this option to the file you want to use for the relay logs. Setting this option enables relay logging.
relay_log_index: Set this option to the file you want to use to index relay log filenames.
server_id: Sets the numeric server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
Example Configuration
[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1
On each Enterprise ColumnStore node, configure S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:
[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode = enabled
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path
The S3-compatible object storage options are configured under [S3]:
The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".
The endpoint option must be set to the endpoint for the S3-compatible object storage.
The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.
To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.
To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.
The local cache options are configured under [Cache]:
The cache_size option is set to 2 GB by default.
The path option is set to /var/lib/columnstore/storagemanager/cache by default.
Ensure that the specified path has sufficient storage space for the specified cache size.
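For example, to confirm the free space available under the Storage Manager directory, which contains the default cache path:
$ df -h /var/lib/columnstore/storagemanager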
On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
$ sudo systemctl stop mariadb-columnstore
When the CMAPI service is started in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.
On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
$ sudo systemctl start mariadb-columnstore-cmapi
$ sudo systemctl enable mariadb-columnstore-cmapi
For additional information, see "Starting and Stopping MariaDB".
The ColumnStore Object Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
On the primary server, create the user account with the CREATE USER statement:
CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';
On the primary server, grant the user account SELECT privileges on all databases with the GRANT statement:
GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';
On each Enterprise ColumnStore node, configure the ColumnStore utility user:
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
On each Enterprise ColumnStore node, set the password:
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
ColumnStore Object Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create the replication user.
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
GRANT REPLICA MONITOR,
REPLICATION REPLICA,
REPLICATION REPLICA ADMIN,
REPLICATION MASTER ADMIN
ON *.* TO 'repl'@'192.0.2.%';
ColumnStore Object Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
CREATE USER 'mxs'@'192.0.2.%'
IDENTIFIED BY 'mxs_passwd';
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
Use the GRANT statement to grant privileges required by the MariaDB Monitor.
GRANT BINLOG ADMIN,
READ_ONLY ADMIN,
RELOAD,
REPLICA MONITOR,
REPLICATION MASTER ADMIN,
REPLICATION REPLICA ADMIN,
REPLICATION REPLICA,
SHOW DATABASES,
SELECT
ON *.* TO 'mxs'@'192.0.2.%';
On each replica server, configure MariaDB Replication:
Use the CHANGE MASTER TO statement to configure the connection to the primary server:
CHANGE MASTER TO
MASTER_HOST='192.0.2.1',
MASTER_USER='repl',
MASTER_PASSWORD='repl_passwd',
MASTER_USE_GTID=slave_pos;
Start replication using the START REPLICA statement:
START REPLICA;
Confirm that replication is working using the SHOW REPLICA STATUS statement:
SHOW REPLICA STATUS;
Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
SET GLOBAL read_only=ON;
Initiate the primary server using CMAPI.
Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.
For example, to create a random 256-bit API key using openssl rand:
$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
This document will use the following API key in further examples, but users should create their own:
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.1"}' \
| jq .
{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.1"
}
Use CMAPI to check the status of the cluster node:
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}
}
Add the replica servers with CMAPI:
For each replica server, use CMAPI to add the replica server to the cluster. The previously set API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.2"}' \
| jq .
{
"timestamp": "2020-10-28 00:42:42.796050",
"node_id": "192.0.2.2"
}
After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
$ sudo yum install policycoreutils policycoreutils-python
On RHEL 8, install the following:
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
If no audit events were found, this will print the following:
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
If audit events were found, the new SELinux policy can be loaded using semodule:
$ sudo semodule -i mariadb_local.pp
Set SELinux to enforcing mode:
$ sudo setenforce enforcing
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Confirm that SELinux is in enforcing mode:
$ sudo getenforce
Enforcing
For information on how to create a profile, see How to create an AppArmor Profile on Ubuntu.com.
The specific steps to configure the firewall service depend on the platform.
Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:
Check if the firewalld service is running:
$ sudo systemctl status firewalld
If the firewalld service was stopped to perform the installation, start it now:
$ sudo systemctl start firewalld
Open up the relevant ports using firewall-cmd. For example, if your cluster nodes are in the 192.0.2.0/24 subnet:
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="3306" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8600-8630" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8640" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8700" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8800" protocol="tcp"
accept'
Reload the runtime configuration:
$ sudo firewall-cmd --reload
Configure UFW for Enterprise ColumnStore on Ubuntu:
Check if the UFW service is running:
$ sudo ufw status verbose
If the UFW service was stopped to perform the installation, start it now:
$ sudo ufw enable
Open up the relevant ports using ufw.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
Reload the runtime configuration:
$ sudo ufw reload
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 4 of 9.
Next: Step 5: Test MariaDB Enterprise Server.
This page details step 5 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.
This action is performed on each Enterprise ColumnStore node.
Test the S3 configuration by executing the following:
$ sudo testS3Connection
StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK
If the testS3Connection command does not return OK, investigate the S3 configuration.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the MariaDB Enterprise Server service is running by executing the following:
$ systemctl status mariadb
If the service is not running on any node, start the service by executing the following on that node:
$ sudo systemctl start mariadb
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise ColumnStore node:
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.
This action is performed on each Enterprise ColumnStore node.
Execute the following query:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+
The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
Use Systemd to test whether the CMAPI service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the CMAPI service is running by executing the following:
$ systemctl status mariadb-columnstore-cmapi
If the service is not running on any node, start the service by executing the following on that node:
$ sudo systemctl start mariadb-columnstore-cmapi
Use CMAPI to request the ColumnStore status. The API key needs to be provided as part of the X-API-key HTTP header.
This action is performed with the CMAPI service on the primary server.
Check the ColumnStore status using curl by executing the following:
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
Use MariaDB Client to test DDL.
On the primary server, use the MariaDB Client to connect to the node:
$ sudo mariadb
Create a test database and ColumnStore table:
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE = ColumnStore;
On each replica server, use the MariaDB Client to connect to the node:
$ sudo mariadb
Confirm that the database and table exist:
SHOW CREATE TABLE test.contacts\G
If the database or table do not exist on any node, then check the replication configuration.
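For example, a quick way to surface the replication state and any errors on a replica (the output is filtered loosely, since field names vary between versions):
$ sudo mariadb -e "SHOW REPLICA STATUS\G" | grep -iE 'running|error'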
Use MariaDB Client to test DML.
On the primary server, use the MariaDB Client to connect to the node:
$ sudo mariadb
Insert sample data into the table created in the DDL test:
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");
On each replica server, use the MariaDB Client to connect to the node:
$ sudo mariadb
Execute a SELECT query to retrieve the data:
SELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+
If the data is not returned on any node, check the ColumnStore status and the storage configuration.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 5 of 9.
Next: Step 6: Install MariaDB MaxScale.
This page details step 6 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step installs MariaDB MaxScale 22.08.
ColumnStore Object Storage requires 1 or more MaxScale nodes.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
$ sudo yum install curl
Install on Debian / Ubuntu (APT):
$ sudo apt install curl apt-transport-https
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "4d483b4df193831a0101d3dfa7fb3e17411dda7fc06c31be4f9e089c325403c0 mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="22.08"
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
$ sudo yum install maxscale
Install on Debian / Ubuntu (APT):
$ sudo apt install maxscale
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 6 of 9.
Next: Step 7: Start and Configure MariaDB MaxScale.
This page details step 7 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step starts and configures MariaDB MaxScale 22.08.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false
For additional information, see "Global Parameters".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
$ sudo systemctl restart maxscale
For additional information, see "Start and Stop Services".
On the MaxScale node, use maxctrl create to create a server object for each Enterprise ColumnStore node:
$ maxctrl create server mcs1 192.0.2.1
$ maxctrl create server mcs2 192.0.2.2
$ maxctrl create server mcs3 192.0.2.3
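Optionally, confirm that the server objects were created:
$ maxctrl list servers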
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).
On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:
$ maxctrl create monitor columnstore_monitor mariadbmon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
replication_user=repl \
replication_password='REPLICATION_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3
In this example:
columnstore_monitor is an arbitrary name that is used to identify the new monitor.
mariadbmon is the name of the module that implements the MariaDB Monitor.
user=mxs sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.
replication_user=repl sets the replication_user parameter to the database user account that MaxScale uses to set up replication.
replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to set up replication.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.
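Optionally, confirm that the monitor is running and sees all three servers:
$ maxctrl show monitor columnstore_monitor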
Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Read Connection Router (readconnroute) performs connection-based load balancing:
Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool
Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool
Read/Write Split Router (readwritesplit) performs query-based load balancing:
Routes write queries to an Enterprise ColumnStore node designated as the primary server
Routes read queries to Enterprise ColumnStore nodes designated as replica servers
Automatically reconnects after node failures
Automatically replays transactions after node failures
Optionally enforces causal reads
Use the MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers mcs1 mcs2 mcs3
In this example:
connection_router_service
is an arbitrary name that is used to identify the new service.
readconnroute
is the name of the module that implements the Read Connection Router.
user=mxs
sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
password='MAXSCALE_USER_PASSWORD'
sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
router_options=slave
sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers
sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.
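For example, if the pool should later include the primary as well as the replicas, router_options could be changed at runtime (illustrative only; keep slave for a strictly read-only pool):
$ maxctrl alter service connection_router_service router_options=master,slave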
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient
In this example:
connection_router_service
is the name of the readconnroute service that was previously created.
connection_router_listener
is an arbitrary name that is used to identify the new listener.
3308
is the TCP port.
protocol=MariaDBClient
sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3
In this example:
query_router_service
is an arbitrary name that is used to identify the new service.
readwritesplit
is the name of the module that implements the Read/Write Split Router.
user=mxs
sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
password='MAXSCALE_USER_PASSWORD'
sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
--servers
sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.
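For example, the transaction replay behavior described earlier is disabled by default and could be enabled at runtime (a sketch; review the readwritesplit documentation for your MaxScale version before enabling this in production):
$ maxctrl alter service query_router_service transaction_replay=true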
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient
In this example:
query_router_service
is the name of the readwritesplit service that was previously created.
query_router_listener
is an arbitrary name that is used to identify the new listener.
3307
is the TCP port.
protocol=MariaDBClient
sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
On the MaxScale node, use maxctrl start services to start the services and monitors:
$ maxctrl start services
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 7 of 9.
Next: Step 8: Test MariaDB MaxScale
This page details step 8 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step tests MariaDB MaxScale 22.08.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
$ maxctrl show maxscale
┌──────────────┬───────────────────────────────────────────────────────┐
│ Version │ 22.08.15 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Commit │ 3761fa7a52046bc58faad8b5a139116f9e33364c │
├──────────────┼───────────────────────────────────────────────────────┤
│ Started At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Activated At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Uptime │ 868 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Config Sync │ null │
├──────────────┼───────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "admin_auth": true, │
│ │ "admin_enabled": true, │
│ │ "admin_gui": true, │
│ │ "admin_host": "0.0.0.0", │
│ │ "admin_log_auth_failures": true, │
│ │ "admin_pam_readonly_service": null, │
│ │ "admin_pam_readwrite_service": null, │
│ │ "admin_port": 8989, │
│ │ "admin_secure_gui": false, │
│ │ "admin_ssl_ca_cert": null, │
│ │ "admin_ssl_cert": null, │
│ │ "admin_ssl_key": null, │
│ │ "admin_ssl_version": "MAX", │
│ │ "auth_connect_timeout": "10000ms", │
│ │ "auth_read_timeout": "10000ms", │
│ │ "auth_write_timeout": "10000ms", │
│ │ "cachedir": "/var/cache/maxscale", │
│ │ "config_sync_cluster": null, │
│ │ "config_sync_interval": "5000ms", │
│ │ "config_sync_password": "*****", │
│ │ "config_sync_timeout": "10000ms", │
│ │ "config_sync_user": null, │
│ │ "connector_plugindir": "/usr/lib64/mysql/plugin", │
│ │ "datadir": "/var/lib/maxscale", │
│ │ "debug": null, │
│ │ "dump_last_statements": "never", │
│ │ "execdir": "/usr/bin", │
│ │ "language": "/var/lib/maxscale", │
│ │ "libdir": "/usr/lib64/maxscale", │
│ │ "load_persisted_configs": true, │
│ │ "local_address": null, │
│ │ "log_debug": false, │
│ │ "log_info": false, │
│ │ "log_notice": true, │
│ │ "log_throttling": { │
│ │ "count": 10, │
│ │ "suppress": 10000, │
│ │ "window": 1000 │
│ │ }, │
│ │ "log_warn_super_user": false, │
│ │ "log_warning": true, │
│ │ "logdir": "/var/log/maxscale", │
│ │ "max_auth_errors_until_block": 10, │
│ │ "maxlog": true, │
│ │ "module_configdir": "/etc/maxscale.modules.d", │
│ │ "ms_timestamp": false, │
│ │ "passive": false, │
│ │ "persistdir": "/var/lib/maxscale/maxscale.cnf.d", │
│ │ "piddir": "/var/run/maxscale", │
│ │ "query_classifier": "qc_sqlite", │
│ │ "query_classifier_args": null, │
│ │ "query_classifier_cache_size": 289073971, │
│ │ "query_retries": 1, │
│ │ "query_retry_timeout": "5000ms", │
│ │ "rebalance_period": "0ms", │
│ │ "rebalance_threshold": 20, │
│ │ "rebalance_window": 10, │
│ │ "retain_last_statements": 0, │
│ │ "session_trace": 0, │
│ │ "skip_permission_checks": false, │
│ │ "sql_mode": "default", │
│ │ "syslog": true, │
│ │ "threads": 1, │
│ │ "users_refresh_interval": "0ms", │
│ │ "users_refresh_time": "30000ms", │
│ │ "writeq_high_water": 16777216, │
│ │ "writeq_low_water": 8192 │
│ │ } │
└──────────────┴───────────────────────────────────────────────────────┘
The output should align with the global MaxScale configuration in the new configuration file that you created.
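The same information is also available from the MaxScale REST API, which listens on the admin_port shown above (8989). A quick check from the MaxScale node, assuming the default REST API credentials (admin:mariadb), which should be changed in production:
$ curl -u admin:mariadb http://127.0.0.1:8989/v1/maxscale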
Check Server Configuration
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
$ maxctrl list servers
┌────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs1 │ 192.0.2.1 │ 3306 │ 1 │ Master, Running │ 0-1-25 │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs2 │ 192.0.2.2 │ 3306 │ 1 │ Slave, Running │ 0-1-25 │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs3 │ 192.0.2.3 │ 3306 │ 1 │ Slave, Running │ 0-1-25 │
└────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┘
For each server object, view the configuration:
$ maxctrl show server mcs1
┌─────────────────────┬───────────────────────────────────────────┐
│ Server │ mcs1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Address │ 192.0.2.1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Port │ 3306 │
├─────────────────────┼───────────────────────────────────────────┤
│ State │ Master, Running │
├─────────────────────┼───────────────────────────────────────────┤
│ Version │ 11.4.5-3-MariaDB-enterprise-log │
├─────────────────────┼───────────────────────────────────────────┤
│ Last Event │ master_up │
├─────────────────────┼───────────────────────────────────────────┤
│ Triggered At │ Thu, 05 Aug 2021 20:22:26 GMT │
├─────────────────────┼───────────────────────────────────────────┤
│ Services │ connection_router_service │
│ │ query_router_service │
├─────────────────────┼───────────────────────────────────────────┤
│ Monitors │ columnstore_monitor │
├─────────────────────┼───────────────────────────────────────────┤
│ Master ID │ -1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Node ID │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Slave Server IDs │ │
├─────────────────────┼───────────────────────────────────────────┤
│ Current Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Total Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Max Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Statistics │ { │
│ │ "active_operations": 0, │
│ │ "adaptive_avg_select_time": "0ns", │
│ │ "connection_pool_empty": 0, │
│ │ "connections": 1, │
│ │ "max_connections": 1, │
│ │ "max_pool_size": 0, │
│ │ "persistent_connections": 0, │
│ │ "reused_connections": 0, │
│ │ "routed_packets": 0, │
│ │ "total_connections": 1 │
│ │ } │
├─────────────────────┼───────────────────────────────────────────┤
│ Parameters │ { │
│ │ "address": "192.0.2.1", │
│ │ "disk_space_threshold": null, │
│ │ "extra_port": 0, │
│ │ "monitorpw": null, │
│ │ "monitoruser": null, │
│ │ "persistmaxtime": "0ms", │
│ │ "persistpoolmax": 0, │
│ │ "port": 3306, │
│ │ "priority": 0, │
│ │ "proxy_protocol": false, │
│ │ "rank": "primary", │
│ │ "socket": null, │
│ │ "ssl": false, │
│ │ "ssl_ca_cert": null, │
│ │ "ssl_cert": null, │
│ │ "ssl_cert_verify_depth": 9, │
│ │ "ssl_cipher": null, │
│ │ "ssl_key": null, │
│ │ "ssl_verify_peer_certificate": false, │
│ │ "ssl_verify_peer_host": false, │
│ │ "ssl_version": "MAX" │
│ │ } │
└─────────────────────┴───────────────────────────────────────────┘
The output should align with the server object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
$ maxctrl list monitors
┌─────────────────────┬─────────┬──────────────────┐
│ Monitor │ State │ Servers │
├─────────────────────┼─────────┼──────────────────┤
│ columnstore_monitor │ Running │ mcs1, mcs2, mcs3 │
└─────────────────────┴─────────┴──────────────────┘
For each monitor, view the monitor configuration:
$ maxctrl show monitor columnstore_monitor
┌─────────────────────┬─────────────────────────────────────┐
│ Monitor │ columnstore_monitor │
├─────────────────────┼─────────────────────────────────────┤
│ Module │ mariadbmon │
├─────────────────────┼─────────────────────────────────────┤
│ State │ Running │
├─────────────────────┼─────────────────────────────────────┤
│ Servers │ mcs1 │
│ │ mcs2 │
│ │ mcs3 │
├─────────────────────┼─────────────────────────────────────┤
│ Parameters │ { │
│ │ "backend_connect_attempts": 1, │
│ │ "backend_connect_timeout": 3, │
│ │ "backend_read_timeout": 3, │
│ │ "backend_write_timeout": 3, │
│ │ "disk_space_check_interval": 0, │
│ │ "disk_space_threshold": null, │
│ │ "events": "all", │
│ │ "journal_max_age": 28800, │
│ │ "module": "mariadbmon", │
│ │ "monitor_interval": 2000, │
│ │ "password": "*****", │
│ │ "script": null, │
│ │ "script_timeout": 90, │
│ │ "user": "mxs" │
│ │ } │
├─────────────────────┼─────────────────────────────────────┤
│ Monitor Diagnostics │ {} │
└─────────────────────┴─────────────────────────────────────┘
The output should align with the MariaDB Monitor (mariadbmon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
$ maxctrl list services
┌───────────────────────────┬────────────────┬─────────────┬───────────────────┬──────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Servers │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
│ connection_router_service │ readconnroute │ 0 │ 0 │ mcs1, mcs2, mcs3 │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
│ query_router_service │ readwritesplit │ 0 │ 0 │ mcs1, mcs2, mcs3 │
└───────────────────────────┴────────────────┴─────────────┴───────────────────┴──────────────────┘
For each service, view the service configuration:
$ maxctrl show service query_router_service
┌─────────────────────┬─────────────────────────────────────────────────────────────┐
│ Service │ query_router_service │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router │ readwritesplit │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ State │ Started │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Started At │ Sat Aug 28 21:41:16 2021 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Current Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Total Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Max Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Cluster │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Servers │ mcs1 │
│ │ mcs2 │
│ │ mcs3 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Services │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Filters │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "auth_all_servers": false, │
│ │ "causal_reads": "false", │
│ │ "causal_reads_timeout": "10000ms", │
│ │ "connection_keepalive": "300000ms", │
│ │ "connection_timeout": "0ms", │
│ │ "delayed_retry": false, │
│ │ "delayed_retry_timeout": "10000ms", │
│ │ "disable_sescmd_history": false, │
│ │ "enable_root_user": false, │
│ │ "idle_session_pool_time": "-1000ms", │
│ │ "lazy_connect": false, │
│ │ "localhost_match_wildcard_host": true, │
│ │ "log_auth_warnings": true, │
│ │ "master_accept_reads": false, │
│ │ "master_failure_mode": "fail_instantly", │
│ │ "master_reconnection": false, │
│ │ "max_connections": 0, │
│ │ "max_sescmd_history": 50, │
│ │ "max_slave_connections": 255, │
│ │ "max_slave_replication_lag": "0ms", │
│ │ "net_write_timeout": "0ms", │
│ │ "optimistic_trx": false, │
│ │ "password": "*****", │
│ │ "prune_sescmd_history": true, │
│ │ "rank": "primary", │
│ │ "retain_last_statements": -1, │
│ │ "retry_failed_reads": true, │
│ │ "reuse_prepared_statements": false, │
│ │ "router": "readwritesplit", │
│ │ "session_trace": false, │
│ │ "session_track_trx_state": false, │
│ │ "slave_connections": 255, │
│ │ "slave_selection_criteria": "LEAST_CURRENT_OPERATIONS", │
│ │ "strict_multi_stmt": false, │
│ │ "strict_sp_calls": false, │
│ │ "strip_db_esc": true, │
│ │ "transaction_replay": false, │
│ │ "transaction_replay_attempts": 5, │
│ │ "transaction_replay_max_size": 1073741824, │
│ │ "transaction_replay_retry_on_deadlock": false, │
│ │ "type": "service", │
│ │ "use_sql_variables_in": "all", │
│ │ "user": "mxs", │
│ │ "version_string": null │
│ │ } │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router Diagnostics │ { │
│ │ "avg_sescmd_history_length": 0, │
│ │ "max_sescmd_history_length": 0, │
│ │ "queries": 0, │
│ │ "replayed_transactions": 0, │
│ │ "ro_transactions": 0, │
│ │ "route_all": 0, │
│ │ "route_master": 0, │
│ │ "route_slave": 0, │
│ │ "rw_transactions": 0, │
│ │ "server_query_statistics": [] │
│ │ } │
└─────────────────────┴─────────────────────────────────────────────────────────────┘
The output should align with the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement for duplicate user accounts can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server.
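A minimal sketch of that alternative, assuming 192.0.2.10 is the IP address of the MaxScale node as in the examples below. On the MaxScale node, enable proxy_protocol for each server object:
$ maxctrl alter server mcs1 proxy_protocol=true
On each Enterprise ColumnStore node, allow the MaxScale node's address in the server configuration:
[mariadb]
proxy_protocol_networks = 192.0.2.10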
This action is performed on the primary Enterprise ColumnStore node:
Connect to the primary Enterprise ColumnStore node:
$ sudo mariadb
Create the database user account for your MaxScale node:
CREATE USER 'app_user'@'192.0.2.10' IDENTIFIED BY 'app_user_passwd';
Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
GRANT ALL ON test.* TO 'app_user'@'192.0.2.10';
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on the primary Enterprise ColumnStore node:
Create the database user account for your application server:
CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.
This action is performed on a client connected to the MaxScale node:
$ mariadb --host 192.0.2.10 --port 3307 \
--user app_user --password
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read Connection Router (in the example, 3308):
$ mariadb --host 192.0.2.10 --port 3308 \
--user app_user --password
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname and server_id system variables to identify which node you are connected to:
SELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs2 | 2 |
+-------------------+--------------------+
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):
$ mariadb --host 192.0.2.10 --port 3307 \
--user app_user --password
Use the application user credentials you created for the --user and --password options.
In one terminal, create the test table:
CREATE TABLE test.load_balancing_test (
id INT PRIMARY KEY AUTO_INCREMENT,
hostname VARCHAR(256),
server_id INT
);
In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id system variables:
INSERT INTO test.load_balancing_test (hostname, server_id)
VALUES (@@global.hostname, @@global.server_id);
In one terminal, issue a SELECT statement to query the results:
SELECT * FROM test.load_balancing_test;
+----+----------+-----------+
| id | hostname | server_id |
+----+----------+-----------+
| 1 | mcs1 | 1 |
| 2 | mcs1 | 1 |
| 3 | mcs1 | 1 |
+----+----------+-----------+
Although MaxScale handled connections from multiple terminals, it routed all INSERT statements to the current primary Enterprise ColumnStore node, which in the example is mcs1.
If you configured the Read/Write Split Router (readwritesplit), confirm that MaxScale routes read queries on this router to replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (readwritesplit) (in the example, 3307):
$ mariadb --host 192.0.2.10 --port 3307 \
--user app_user --password
Use the application user credentials you created for the --user and --password options.
Query the hostname and server_id system variables to identify which server MaxScale routed the query to:
SELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs2 | 2 |
+-------------------+--------------------+
Resend the query:
SELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs3 | 3 |
+-------------------+--------------------+
Confirm that MaxScale routes the SELECT statements to different replica servers.
For more information on the available routing criteria, see slave_selection_criteria.
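For example, the selection criteria could be changed at runtime (ADAPTIVE_ROUTING is one of several supported values; confirm availability for your MaxScale version):
$ maxctrl alter service query_router_service slave_selection_criteria=ADAPTIVE_ROUTING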
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 8 of 9.
Next: Step 9: Import Data
This page details step 9 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step bulk imports data to Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
CREATE DATABASE inventory;
For each table that you are importing, create the table with the CREATE TABLE statement:
CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables:
cpimport, a bulk-load command-line utility
The LOAD DATA INFILE statement
Import from a remote database, which uses a normal database client and avoids dumping data to intermediate files
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
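cpimport also handles other delimited formats. For example, a sketch for a comma-separated file with double-quoted fields (the .csv path is hypothetical):
$ sudo cpimport -s ',' -E '"' inventory products /tmp/inventory-products.csv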
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use LOAD DATA INFILE statement:
LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;
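LOAD DATA INFILE accepts the usual field and line clauses, so a CSV variant might look like the following (a sketch; the .csv path is hypothetical):
LOAD DATA INFILE '/tmp/inventory-products.csv'
INTO TABLE inventory.products
FIELDS TERMINATED BY ',' ENCLOSED BY '"';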
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products
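In practice, the client connects to the remote source database rather than the local server. A sketch with connection options added (the host and user shown are hypothetical):
$ mariadb --quick \
--skip-column-names \
--host 192.0.2.50 \
--user source_user --password \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products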
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 9 of 9.
This procedure is complete.