Learn how to import data into MariaDB ColumnStore. This section covers various methods and tools for efficiently loading large datasets into your columnar database for analytical workloads.
MariaDB Enterprise ColumnStore performs bulk data loads very efficiently using a variety of mechanisms, including the cpimport tool, specialized handling of certain SQL statements, and minimal locking during data import.
MariaDB Enterprise ColumnStore includes a bulk data loading tool called cpimport, which provides several benefits:
Bypasses the SQL layer to decrease overhead
Does not block read queries
Requires a write metadata lock on the table, which can be monitored with the METADATA_LOCK_INFO plugin
Appends the new data to the table. While the bulk load is in progress, the newly appended data is temporarily hidden from queries. After the bulk load is complete, the newly appended data is visible to queries.
Inserts each row in the order the rows are read from the source file. Users can optimize data loads for Enterprise ColumnStore's automatic partitioning by loading presorted data files. For additional information, see "Load Ordered Data in Proper Order".
Supports parallel distributed bulk loads
Imports data from text files
Imports data from binary files
Imports data from standard input (stdin)
MariaDB Enterprise ColumnStore enables batch insert mode by default.
When batch insert mode is enabled, MariaDB Enterprise ColumnStore has special handling for the following statements:
INSERT INTO .. SELECT FROM ..
LOAD DATA [ LOCAL ] INFILE
Enterprise ColumnStore uses the following rules:
If the statement is executed outside of a transaction, Enterprise ColumnStore loads the data using cpimport, which is a command-line utility that is designed to efficiently load data in bulk. It executes cpimport using a wrapper called cpimport.bin.
If the statement is executed inside of a transaction, Enterprise ColumnStore loads the data using the DML interface, which is slower.
Batch insert mode can be disabled by setting the columnstore_use_import_for_batchinsert system variable to OFF. When batch insert mode is disabled, Enterprise ColumnStore executes the statements using the DML interface, which is slower.
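For example, the following disables batch insert mode for the current session (a minimal sketch; the variable can also be set globally):

# Disable batch insert mode for this session
SET SESSION columnstore_use_import_for_batchinsert=OFF;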
MariaDB Enterprise ColumnStore requires a write metadata lock (MDL) on the table when a bulk data load is performed with cpimport.
When a bulk data load is running:
Read queries will not be blocked.
Write queries and concurrent bulk data loads on the same table will be blocked until the bulk data load operation is complete, and the write metadata lock on the table has been released.
The write metadata lock (MDL) can be monitored with the METADATA_LOCK_INFO plugin.
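For example, the plugin can be installed and active metadata locks inspected with SQL (a sketch; the plugin ships with MariaDB but is not loaded by default):

# Install the plugin once, then inspect current metadata locks
INSTALL SONAME 'metadata_lock_info';
SELECT * FROM information_schema.METADATA_LOCK_INFO;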
Enterprise ColumnStore supports the following data loading methods:

Method | Speed | Interface | Input | Data Location | Benefits
cpimport | Fastest | Shell | Text file, binary file, standard input (stdin) | Server file system | Lowest latency; bypasses SQL layer; non-blocking
columnstore_info.load_from_s3 | Fast | SQL | Text file | S3-compatible object storage | Loads data from the cloud; translates the operation to a cpimport command; non-blocking
LOAD DATA INFILE | Fast | SQL | Text file | Server file system, client file system | Translates the operation to a cpimport command; non-blocking
INSERT INTO .. SELECT FROM .. | Slow | SQL | Other table(s) | Same MariaDB server | Translates the operation to a cpimport command; non-blocking
MariaDB Enterprise ColumnStore includes a bulk data loading tool called cpimport, which bypasses the SQL layer to decrease the overhead of bulk data loading.
Refer to the cpimport modes and ColumnStore Bulk Data Loading documentation for additional information.
The cpimport tool:
Bypasses the SQL layer to decrease overhead
Does not block read queries
Requires a write metadata lock on the table, which can be monitored with the METADATA_LOCK_INFO plugin
Appends the new data to the table. While the bulk load is in progress, the newly appended data is temporarily hidden from queries. After the bulk load is complete, the newly appended data is visible to queries.
Inserts each row in the order the rows are read from the source file. Users can optimize data loads for Enterprise ColumnStore's automatic partitioning by loading presorted data files.
Supports parallel distributed bulk loads
Imports data from text files
Imports data from binary files
Imports data from standard input (stdin)
You can load data using the cpimport tool in the following cases:
You are loading data into a ColumnStore table from a text file stored on the primary node's file system.
You are loading data into a ColumnStore table from a binary file stored on the primary node's file system.
You are loading data into a ColumnStore table from the output of a command running on the primary node.
MariaDB Enterprise ColumnStore requires a write metadata lock (MDL) on the table when a bulk data load is performed with cpimport.
When a bulk data load is running:
Read queries will not be blocked.
Write queries and concurrent bulk data loads on the same table will be blocked until the bulk data load operation is complete, and the write metadata lock on the table has been released.
The write metadata lock (MDL) can be monitored with the METADATA_LOCK_INFO plugin.
Before data can be imported into the tables, the schema must be created.
Connect to the primary server using MariaDB Client:
$ mariadb --host 192.168.0.100 --port 5001 \
--user db_user --password \
--default-character-set=utf8
After the command is executed, it will prompt you for a password.
For each database that you are importing, create the database with the CREATE DATABASE statement:
CREATE DATABASE inventory;
For each table that you are importing, create the table with the CREATE TABLE statement:
CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
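To confirm that each table was created with the ColumnStore storage engine, you can check information_schema (an optional sanity check):

# Verify the storage engine of the new table
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'inventory';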
When MariaDB Enterprise ColumnStore performs a bulk data load, it appends data to the table in the order in which the data is read. Appending data reduces the I/O requirements of bulk data loads, so that larger data sets can be loaded very efficiently.
While the bulk load is in progress, the newly appended data is temporarily hidden from queries.
After the bulk load is complete, the newly appended data is visible to queries.
When MariaDB Enterprise ColumnStore performs a bulk data load, it appends data to the table in the order in which the data is read.
The order of data can have a significant effect on performance with Enterprise ColumnStore, so it can be helpful to sort the data in the input file prior to importing it.
For additional information, see "Load Ordered Data in Proper Order".
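For example, if the source data lives in a row-oriented table, SELECT ... INTO OUTFILE can produce a presorted, pipe-delimited input file for cpimport (a sketch; innodb_inventory.products is a hypothetical source table, and the pipe matches cpimport's default delimiter):

# Write a presorted, pipe-delimited input file for cpimport
SELECT product_name, supplier, quantity, unit_cost
INTO OUTFILE '/tmp/inventory-products.txt'
FIELDS TERMINATED BY '|'
FROM innodb_inventory.products
ORDER BY product_name;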
Before importing a file into MariaDB Enterprise ColumnStore, confirm that the field delimiter is not present in the data.
The default field delimiter for the cpimport tool is a pipe (|).
To use a different delimiter, you can set the field delimiter.
The cpimport tool can import data from a text file if a file is provided as an argument after the database and table name.
For example, to import the file inventory-products.txt into the products table in the inventory database:
$ sudo cpimport \
inventory products \
inventory-products.txt
The cpimport tool can import data from a binary file if the -I1 or -I2 option is provided and a file is provided as an argument after the database and table name.
For example, to import the file inventory-products.bin into the products table in the inventory database:
$ sudo cpimport -I1 \
inventory products \
inventory-products.bin
The -I1 and -I2 options allow two different binary import modes to be selected:
-I1: Numeric fields containing NULL will be treated as NULL unless the column has a default value
-I2: Numeric fields containing NULL will be saturated
The binary file should use the following format for data:

Data Type | Format | NULL Value
BIGINT | Little-endian integer format | Signed: 0x8000000000000000ULL; Unsigned: 0xFFFFFFFFFFFFFFFEULL
CHAR | String padded with '\0' to match the length of the field | '\0' for the full length of the field
DATE | Use the format represented by the struct Date (below) | 0xFFFFFFFE
DATETIME | Use the format represented by the struct DateTime (below) | 0xFFFFFFFFFFFFFFFEULL
DECIMAL | Integer representation of the value without the decimal point; size depends on precision (1-2: 2 bytes, 3-4: 3 bytes, 4-9: 4 bytes, 10+: 8 bytes) | Signed and unsigned: see the equivalent-sized integer
DOUBLE | Native IEEE floating-point format | 0xFFFAAAAAAAAAAAAAULL
FLOAT | Native IEEE floating-point format | 0xFFAAAAAA
INT | Little-endian integer format | Signed: 0x80000000; Unsigned: 0xFFFFFFFE
SMALLINT | Little-endian integer format | Signed: 0x8000; Unsigned: 0xFFFE
TINYINT | Little-endian integer format | Signed: 0x80; Unsigned: 0xFE
VARCHAR | String padded with '\0' to match the length of the field | '\0' for the full length of the field
In binary input files, the cpimport tool expects DATE columns to be in the following format:
struct Date
{
unsigned spare : 6;
unsigned day : 6;
unsigned month : 4;
unsigned year : 16;
};
In binary input files, the cpimport tool expects DATETIME columns to be in the following format:
struct DateTime
{
unsigned msecond : 20;
unsigned second : 6;
unsigned minute : 6;
unsigned hour : 6;
unsigned day : 6;
unsigned month : 4;
unsigned year : 16;
};
The cpimport tool can import data from standard input (stdin) if no file is provided as an argument.
Importing from standard input is useful in many scenarios.
One scenario is when you want to import data from a remote database. You can use MariaDB Client to query the table using the SELECT statement, and then pipe the results into the standard input of the cpimport tool:
$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products
The cpimport tool can import data from a file stored in a remote S3 bucket.
You can use the AWS CLI to copy the file from S3, and then pipe the contents into the standard input of the cpimport tool:
$ aws s3 cp --quiet s3://columnstore-test/inventory-products.csv - \
| cpimport -s ',' inventory products
Alternatively, the columnstore_info.load_from_s3 stored procedure can import data from S3-compatible cloud object storage.
The default field delimiter for the cpimport tool is a pipe (|).
If your data file uses a different field delimiter, you can specify the field delimiter with the -s option.
For a TSV (tab-separated values) file:
$ sudo cpimport -s '\t' \
inventory products \
inventory-products.tsv
For a CSV (comma-separated values) file:
$ sudo cpimport -s ',' \
inventory products \
inventory-products.csv
By default, the cpimport tool does not expect fields to be quoted.
If your data file uses quotes around fields, you can specify the quote character with the -E option.
To load a TSV (tab-separated values) file that uses double quotes:
$ sudo cpimport -s '\t' -E '"' \
inventory products \
inventory-products.tsv
To load a CSV (comma-separated values) file that uses optional single quotes:
$ sudo cpimport -s ',' -E "'" \
inventory products \
inventory-products.csv
The cpimport tool writes logs to different directories, depending on the Enterprise ColumnStore version:
In Enterprise ColumnStore 5.5.2 and later, logs are written to /var/log/mariadb/columnstore/bulk/
In Enterprise ColumnStore 5 releases before 5.5.2, logs are written to /var/lib/columnstore/data/bulk/
In Enterprise ColumnStore 1.4, logs are written to /usr/local/mariadb/columnstore/bulk/
The cpimport tool requires column values to be in the same order in the input file as the columns in the table definition.
The cpimport tool requires DATE values to be specified in the format YYYY-MM-DD.
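If your source data stores dates in another format, you can normalize them in the query that generates the input file (a sketch; staging.orders, order_id, and order_date_text are hypothetical names):

# Convert string dates such as '12/31/2023' into YYYY-MM-DD
SELECT order_id, STR_TO_DATE(order_date_text, '%m/%d/%Y')
FROM staging.orders;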
The cpimport tool does not write bulk data loads to the transaction log, so they are not transactional.
The cpimport tool does not write bulk data loads to the binary log, so they cannot be replicated using MariaDB replication.
When Enterprise ColumnStore uses object storage and the Storage Manager directory uses EFS in the default Bursting Throughput mode, the cpimport tool can have performance problems if multiple data load operations are executed consecutively. The performance problems can occur because the Bursting Throughput mode scales the rate relative to the size of the file system, so the burst credits for a small Storage Manager volume can be fully consumed very quickly.
When this problem occurs, some solutions are:
Avoid using burst credits by using Provisioned Throughput mode instead of Bursting Throughput mode
Monitor burst credit balances in AWS and run data load operations when burst credits are available
Increase the burst credit balance by increasing the file system size (for example, by creating a dummy file)
Additional information is available in the Amazon EFS performance documentation.
MariaDB Enterprise ColumnStore automatically translates INSERT INTO .. SELECT FROM .. statements into bulk data loads. By default, it translates the statement into a bulk data load that uses cpimport.bin, which is an internal wrapper around the cpimport tool.
You can load data using INSERT INTO .. SELECT FROM .. in the following cases:
You are loading data into a ColumnStore table by querying one or more local tables.
MariaDB Enterprise ColumnStore enables batch insert mode by default.
When batch insert mode is enabled, MariaDB Enterprise ColumnStore has special handling for INSERT INTO .. SELECT FROM .. statements.
Enterprise ColumnStore uses the following rules:
If the statement is executed outside of a transaction, Enterprise ColumnStore loads the data using cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Enterprise ColumnStore executes cpimport using a wrapper called cpimport.bin.
If the statement is executed inside of a transaction, Enterprise ColumnStore loads the data using the DML interface, which is slower.
Batch insert mode can be disabled by setting the columnstore_use_import_for_batchinsert system variable to OFF. When batch insert mode is disabled, Enterprise ColumnStore executes the statements using the DML interface, which is slower.
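For example, the same statement takes the fast path or the slow path depending on transactional context (a sketch; innodb_inventory.products is a hypothetical row-oriented source table):

# Outside a transaction: loaded in bulk via cpimport
INSERT INTO inventory.products
SELECT * FROM innodb_inventory.products;

# Inside a transaction: loaded through the slower DML interface
START TRANSACTION;
INSERT INTO inventory.products
SELECT * FROM innodb_inventory.products;
COMMIT;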
MariaDB Enterprise ColumnStore requires a write metadata lock (MDL) on the table when a bulk data load is performed with cpimport.
When a bulk data load is running:
Read queries will not be blocked.
Write queries and concurrent bulk data loads on the same table will be blocked until the bulk data load operation is complete, and the write metadata lock on the table has been released.
The write metadata lock (MDL) can be monitored with the METADATA_LOCK_INFO plugin.
Before data can be imported into the tables, the schema must be created.
Connect to the primary server using MariaDB Client:
$ mariadb --host 192.168.0.100 --port 5001 \
--user db_user --password \
--default-character-set=utf8
After the command is executed, it will prompt you for a password.
For each database that you are importing, create the database with the CREATE DATABASE statement:
CREATE DATABASE inventory;
For each table that you are importing, create the table with the CREATE TABLE statement:
CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
When MariaDB Enterprise ColumnStore performs a bulk data load, it appends data to the table in the order in which the data is read. Appending data reduces the I/O requirements of bulk data loads, so that larger data sets can be loaded very efficiently.
While the bulk load is in progress, the newly appended data is temporarily hidden from queries.
After the bulk load is complete, the newly appended data is visible to queries.
When MariaDB Enterprise ColumnStore performs a bulk data load, it appends data to the table in the order in which the data is read.
The order of data can have a significant effect on performance with Enterprise ColumnStore. If your data is not already sorted, it can be helpful to sort the query results using an ORDER BY clause.
For example:
# Perform import from other table
# and insert in sorted order
INSERT INTO inventory.orders
SELECT *
FROM innodb_inventory.orders
ORDER BY order_date;
For additional information, see "Load Ordered Data in Proper Order".
Before importing a table's data into MariaDB Enterprise ColumnStore, confirm that the field delimiter is not present in the data.
The field delimiter is determined by the columnstore_import_for_batchinsert_delimiter system variable. By default, Enterprise ColumnStore sets the field delimiter to the ASCII value 7, which corresponds to the BEL character.
To use a different delimiter, you can set the field delimiter.
When the data is passed to cpimport, each value is separated by a field delimiter. The field delimiter is determined by the columnstore_import_for_batchinsert_delimiter system variable. By default, Enterprise ColumnStore sets the field delimiter to the ASCII value 7, which corresponds to the BEL character. In general, setting the field delimiter is only required if your data contains this value.
Set the field delimiter by setting the columnstore_import_for_batchinsert_delimiter system variable to the ASCII value for the desired delimiter character. For example, if you want to use a comma (,) as the field delimiter, you would set columnstore_import_for_batchinsert_delimiter to 44:
# Set field delimiter
SET SESSION columnstore_import_for_batchinsert_delimiter=44;
# Perform import from other table
INSERT INTO inventory.products
SELECT *
FROM innodb_inventory.products;
When the data is passed to cpimport, each value is enclosed by a quote character. The quote character is determined by the columnstore_import_for_batchinsert_enclosed_by system variable. By default, Enterprise ColumnStore sets the quote character to the ASCII value 17, which corresponds to the DC1 character. In general, setting the quote character is only required if your data contains this value.
Set the quote character by setting the columnstore_import_for_batchinsert_enclosed_by system variable to the ASCII value for the desired quote character. For example, if you want to use a double quote (") as the quote character, you would set columnstore_import_for_batchinsert_enclosed_by to 34:
# Set quote character
SET SESSION columnstore_import_for_batchinsert_enclosed_by=34;
# Perform import from other table
INSERT INTO inventory.products
SELECT *
FROM innodb_inventory.products;
MariaDB Enterprise ColumnStore automatically translates LOAD DATA [ LOCAL ] INFILE statements into bulk data loads. By default, it translates the statement into a bulk data load that uses cpimport.bin, which is an internal wrapper around the cpimport tool.
You can load data using the LOAD DATA INFILE statement in the following cases:
You are loading data into a ColumnStore table from a text file stored on the primary node's file system.
You are loading data into a ColumnStore table from a text file stored on the client's file system. In this case, LOAD DATA LOCAL INFILE must be used.
MariaDB Enterprise ColumnStore enables batch insert mode by default.
When batch insert mode is enabled, MariaDB Enterprise ColumnStore has special handling for LOAD DATA INFILE statements.
Enterprise ColumnStore uses the following rules:
If the statement is executed outside of a transaction, Enterprise ColumnStore loads the data using cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Enterprise ColumnStore executes cpimport using a wrapper called cpimport.bin.
If the statement is executed inside of a transaction, Enterprise ColumnStore loads the data using the DML interface, which is slower.
Batch insert mode can be disabled by setting the columnstore_use_import_for_batchinsert system variable to OFF. When batch insert mode is disabled, Enterprise ColumnStore executes the statements using the DML interface, which is slower.
Starting with MariaDB Enterprise ColumnStore 6, an insert cache can be enabled to speed up LOAD DATA INFILE statements.
The insert cache is disabled by default, but it can be enabled by configuring columnstore_cache_inserts=ON:

[mariadb]
...
columnstore_cache_inserts=ON
The cache is flushed to ColumnStore in the following scenarios:
When the number of cached rows exceeds the value of columnstore_cache_flush_threshold
When a statement other than INSERT or LOAD DATA INFILE is executed
cpimport is used to flush the insert cache to ColumnStore when columnstore_cache_use_import=ON is configured.
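To confirm the cache-related settings on a running server, you can list the variables by prefix (variable names as given above):

# Inspect the insert cache settings
SHOW GLOBAL VARIABLES LIKE 'columnstore_cache%';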
MariaDB Enterprise ColumnStore requires a write metadata lock (MDL) on the table when a bulk data load is performed with cpimport.
When a bulk data load is running:
Read queries will not be blocked.
Write queries and concurrent bulk data loads on the same table will be blocked until the bulk data load operation is complete, and the write metadata lock on the table has been released.
The write metadata lock (MDL) can be monitored with the METADATA_LOCK_INFO plugin.
Before data can be imported into the tables, the schema must be created.
Connect to the primary server using MariaDB Client:
$ mariadb --host 192.168.0.100 --port 5001 \
--user db_user --password \
--default-character-set=utf8
After the command is executed, it will prompt you for a password.
For each database that you are importing, create the database with the CREATE DATABASE statement:
CREATE DATABASE inventory;
For each table that you are importing, create the table with the CREATE TABLE statement:
CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
When MariaDB Enterprise ColumnStore performs a bulk data load, it appends data to the table in the order in which the data is read. Appending data reduces the I/O requirements of bulk data loads, so that larger data sets can be loaded very efficiently.
While the bulk load is in progress, the newly appended data is temporarily hidden from queries.
After the bulk load is complete, the newly appended data is visible to queries.
When MariaDB Enterprise ColumnStore performs a bulk data load, it appends data to the table in the order in which the data is read.
The order of data can have a significant effect on performance with Enterprise ColumnStore, so it can be helpful to sort the data in the input file prior to importing it.
For additional information, see "Load Ordered Data in Proper Order".
Before importing a file into MariaDB Enterprise ColumnStore, confirm that the field delimiter is not present in the data.
The field delimiter is determined by the columnstore_import_for_batchinsert_delimiter system variable. By default, Enterprise ColumnStore sets the field delimiter to the ASCII value 7, which corresponds to the BEL character.
To use another delimiter, you can set the field delimiter.
If you are loading a file located on the client, you can use the LOAD DATA LOCAL INFILE statement. However, for this statement to work, the client must explicitly enable local infile support.
If you are using MariaDB Client, the --local-infile option can be used:
$ mariadb --host 192.168.0.1 \
--user db_user --password \
--default-character-set=utf8 \
--local-infile
If you are using MariaDB Connector/C, the MYSQL_OPT_LOCAL_INFILE option can be set with the mysql_optionsv() function:
/* enable local infile */
unsigned int enable_local_infile = 1;
mysql_optionsv(mysql, MYSQL_OPT_LOCAL_INFILE, (void *) &enable_local_infile);
If you are using MariaDB Connector/J, the allowLocalInfile parameter can be set for the connection:
Connection connection = DriverManager.getConnection("jdbc:mariadb://192.168.0.1/test?user=test_user&password=myPassword&allowLocalInfile=true");
If you are using MariaDB Connector/Node.js, the permitLocalInfile parameter can be set for the connection:
mariadb.createConnection({
host: '192.168.0.1',
user:'test_user',
password: 'myPassword',
permitLocalInfile: true
});
If you are using MariaDB Connector/Python, the local_infile parameter can be set for the connection:
conn = mariadb.connect(
user="test_user",
password="myPassword",
host="192.168.0.1",
port=3306,
local_infile=True)
The default field delimiter for the LOAD DATA INFILE statement is a tab.
If your data file uses a different field delimiter, you can specify the field delimiter with the FIELDS TERMINATED BY clause.
To load a TSV (tab-separated values) file:
LOAD DATA INFILE 'inventory-products.tsv'
INTO TABLE inventory.products;
To load a CSV (comma-separated values) file:
LOAD DATA LOCAL INFILE 'inventory-products.csv'
INTO TABLE accounts.contacts
FIELDS TERMINATED BY ',';
By default, the LOAD DATA INFILE statement does not expect fields to be quoted.
If your data file uses quotes around fields, you can specify the quote character with the FIELDS [OPTIONALLY] ENCLOSED BY clause.
To load a TSV (tab-separated values) file that uses double quotes:
LOAD DATA INFILE 'inventory-products.tsv'
INTO TABLE inventory.products
FIELDS ENCLOSED BY '"';
To load a CSV (comma-separated values) file that uses optional single quotes:
LOAD DATA LOCAL INFILE 'inventory-products.csv'
INTO TABLE accounts.contacts
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\'';
MariaDB Enterprise ColumnStore ignores the ON DUPLICATE KEY clause. Please ensure that duplicate data is removed prior to performing a bulk data load.
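For example, if the data is staged in another table before loading, a duplicate check might look like this (a sketch; staging.products is a hypothetical staging table, and product_name is assumed to be the unique key):

# Find duplicate keys before performing the bulk load
SELECT product_name, COUNT(*) AS cnt
FROM staging.products
GROUP BY product_name
HAVING cnt > 1;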
MariaDB Enterprise ColumnStore includes a stored procedure called columnstore_info.load_from_s3, which can load data from a plain-text file containing delimiter-separated values (such as CSV or TSV) stored on S3-compatible cloud object storage.
The columnstore_info.load_from_s3 stored procedure is available starting with MariaDB Enterprise ColumnStore 23.02.
Before you import data with the columnstore_info.load_from_s3 stored procedure, the authentication credentials and the region can be configured using system variables:
columnstore_s3_key
columnstore_s3_secret
columnstore_s3_region
For example, the following statements show how to set the system variables for your current session:
SET columnstore_s3_key='S3_KEY';
SET columnstore_s3_secret='S3_SECRET';
SET columnstore_s3_region='S3_REGION';
To import data with the columnstore_info.load_from_s3 stored procedure, use the CALL statement:
CALL columnstore_info.load_from_s3('BUCKET_URL',
'FILE_NAME',
'DATABASE_NAME',
'TABLE_NAME',
'TERMINATED_BY',
'ENCLOSED_BY',
'ESCAPED_BY');
Replace 'BUCKET_URL' with the URL of your bucket. The protocol in the URL must be s3: for AWS S3 or gs: for Google Cloud Storage.
Replace 'FILE_NAME' with the name of the file to load from. The file must be a plain-text file containing delimiter-separated values, such as a comma-separated values (CSV) or tab-separated values (TSV) file. The supported file format is similar to the plain-text file formats supported by cpimport and LOAD DATA [ LOCAL ] INFILE. Please note that this stored procedure can't load dump files created by mariadb-dump.
Replace 'DATABASE_NAME' with the name of the database to import into.
Replace 'TABLE_NAME' with the name of the table to import into.
Replace 'TERMINATED_BY' with the field terminator used in the file, similar to the -s command-line option for cpimport.
Replace 'ENCLOSED_BY' with the quote character used in the file, similar to the -E command-line option for cpimport.
Replace 'ESCAPED_BY' with the escape character used in the file, similar to the -C command-line option for cpimport.
All parameters are mandatory.
For example, to load a comma-separated values (CSV) file from AWS S3:
CALL columnstore_info.load_from_s3('s3://mariadb-columnstore-test-data/',
'test-data-db1-tab1.csv',
'db1',
'tab1',
',',
'"',
'\\');
When the stored procedure completes, it returns JSON containing the status of the operation. If the JSON shows an error or "success": false, check your table to see if some or all of your data was loaded, because many errors are non-fatal.
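Because many errors are non-fatal, a simple follow-up check is to count the loaded rows and compare the total against the source file (table names from the example above):

# Verify how many rows were loaded
SELECT COUNT(*) FROM db1.tab1;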
When the data file is stored in Amazon S3, the AWS user only requires the s3:GetObject action on the bucket.
For example, the AWS user can use a user policy like the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "readBucket",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
This page is: Copyright © 2025 MariaDB. All rights reserved.