LOAD DATA INFILE
Overview
The LOAD DATA INFILE statement reads lines from a text file into a table at high speed. The file can be read from the server host or from [S3 Compatible Object Storage](../../../App-Develop/import-data/bulk-load/load-s3.md).
- Write data from a table to a file using SELECT ... INTO OUTFILE.
- Read the file back into the table using LOAD DATA INFILE.
- The FIELDS and LINES clauses have the same syntax in both LOAD DATA INFILE and SELECT ... INTO OUTFILE; both statements use the FIELDS and LINES parameters to specify how the data format is processed.
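As a sketch of this round trip (table names and file path hypothetical), data exported with SELECT ... INTO OUTFILE can be loaded back with LOAD DATA INFILE:

```sql
-- Export table t1 to a file on the server host
SELECT * FROM t1 INTO OUTFILE '/tmp/t1.csv';
-- Re-import the file into another table with the same structure
LOAD DATA INFILE '/tmp/t1.csv' INTO TABLE t1_copy;
```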
Grammar Structure
```sql
LOAD DATA [LOCAL]
    INFILE 'file_name'
    INTO TABLE tbl_name
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number {LINES | ROWS}]
    [SET column_name_1=nullif(column_name_1, expr1), column_name_2=nullif(column_name_2, expr2)...]
    [PARALLEL {'TRUE' | 'FALSE'}]
    [STRICT {'TRUE' | 'FALSE'}]
```
Parameter Description
The parameters in the above syntax structure are explained as follows:
INFILE
LOAD DATA INFILE 'file_name':
Usage scenario: the data file to be loaded is on the same machine as the MatrixOne host server.
file_name can be given as a relative or an absolute path.
LOAD DATA LOCAL INFILE 'file_name':
Usage scenario: the data file to be loaded is not on the same machine as the MatrixOne host server; that is, the data file is on the client.
file_name can be given as a relative or an absolute path.
CHARACTER SET
If the file content uses a character set different from the default, use CHARACTER SET to specify it. For example, CHARACTER SET utf8 declares the character set of the imported content to be utf8:

```sql
LOAD DATA LOCAL INFILE 'yourfilepath' INTO TABLE xx CHARACTER SET utf8;
```
Note
In addition to utf8, LOAD DATA also supports specifying character sets such as utf_8, UTF_16, UTF_xx, gbk, abcd, etc. Character set names containing a - (such as utf-8, UTF-16) are not supported yet.
IGNORE LINES
The IGNORE number LINES clause can be used to ignore lines at the beginning of a file. For example, you can use IGNORE 1 LINES to skip an initial header row containing the column names:

```sql
LOAD DATA LOCAL INFILE '/tmp/test.txt' INTO TABLE table1 IGNORE 1 LINES;
```
FIELDS and LINES parameter description
Use the FIELDS and LINES parameters to specify how the data format is processed.
The FIELDS and LINES clauses have the same syntax in the LOAD DATA and SELECT ... INTO OUTFILE statements. Both clauses are optional, but if both are specified, FIELDS must precede LINES.
If you specify a FIELDS clause, each of its subclauses (TERMINATED BY, [OPTIONALLY] ENCLOSED BY) is also optional, except that you must specify at least one of them.
LOAD DATA also supports the use of hexadecimal ASCII character expressions or binary ASCII character expressions as parameters for FIELDS ENCLOSED BY and FIELDS TERMINATED BY.
If no data-handling parameters are specified, the following defaults are used:

```sql
FIELDS TERMINATED BY '\t' ENCLOSED BY '"' ESCAPED BY '\\' LINES TERMINATED BY '\n'
```
Note
- FIELDS TERMINATED BY '\t': use \t, and only \t, as the field separator.
- ENCLOSED BY '"': use ", and only ", as the enclosing character.
- ESCAPED BY '\\': use \, and only \, as the escape character.
- LINES TERMINATED BY '\n': use \n or \r\n, and only these, as the line separator.
FIELDS TERMINATED BY
FIELDS TERMINATED BY specifies the delimiter between fields. The specified value can be longer than one character.

Example:

To read a comma-separated file:

```sql
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE table1
FIELDS TERMINATED BY ',';
```
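Because the delimiter may be longer than one character, a file whose fields are separated by, say, a double pipe can be loaded the same way (file name hypothetical):

```sql
LOAD DATA LOCAL INFILE 'data_pipes.txt' INTO TABLE table1
FIELDS TERMINATED BY '||';
```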
FIELDS ENCLOSED BY
The value specified by FIELDS ENCLOSED BY is the character that encloses input values; it must be a single character. If input values are not necessarily enclosed in quotes, add OPTIONALLY before the ENCLOSED BY option.
As the following example shows, some input values may be enclosed in quotes while others are not:

```sql
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE table1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"';
```

If ENCLOSED BY is not preceded by OPTIONALLY, for example ENCLOSED BY '"', all fields are expected to be enclosed in double quotes.
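For comparison, a minimal sketch of the non-OPTIONALLY form, assuming every field in the file is double-quoted (file name hypothetical):

```sql
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE table1
FIELDS TERMINATED BY ',' ENCLOSED BY '"';
```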
FIELDS ESCAPED BY
FIELDS ESCAPED BY lets you specify an escape character; the default value \\ represents a single backslash. If the FIELDS ESCAPED BY character is not empty, that character is stripped from the input, and the character that follows it is taken literally as part of the field value.
However, some double-character sequences have special meanings, as shown in the table:
| Escape sequence | Characters represented by sequence |
|---|---|
| \0 | Space character |
| \b | Backspace symbol |
| \n | Line breaks |
| \r | Carriage Return |
| \t | Tab |
| \Z | End character (Ctrl+Z) |
Example
- Example 1
The data.txt content is as follows:

```
(base) admin@admindeMacBook-Pro case % cat data.txt
1 a\\b
```

Connect to MatrixOne and execute the following statements to import the data.txt content into t1:

```sql
create table t1(n1 int, n2 varchar(255));
load data local infile 'Users/admin/test/case/data.txt' into table t1;

mysql> select * from t1;
+------+------+
| n1   | n2   |
+------+------+
|    1 | a\b  |
+------+------+
1 row in set (0.00 sec)
```

The result in n2 is a\b, because the first \ is the escape character and is therefore stripped.
- Example 2
The data.txt content is as follows:

```
(base) admin@admindeMacBook-Pro case % cat data.txt
1 a\\b
```

Connect to MatrixOne and execute the following statements to import the data.txt content into t2:

```sql
create table t2(n1 int, n2 varchar(255));
load data local infile 'Users/admin/test/case/data.txt' into table t2 fields escaped by 'a';

mysql> select * from t2;
+------+------+
| n1   | n2   |
+------+------+
|    1 | \\b  |
+------+------+
1 row in set (0.00 sec)
```

The result in n2 is \\b, because the escape character is specified as a here, so the a is stripped.
- Example 3
The data.txt content is as follows:

```
(base) admin@admindeMacBook-Pro case % cat data.txt
1 a\\b
```

Connect to MatrixOne and execute the following statements to import the data.txt content into t3:

```sql
create table t3(n1 int, n2 varchar(255));
load data local infile 'Users/admin/test/case/data.txt' into table t3 fields escaped by '';

mysql> SELECT * FROM t3;
+------+------+
| n1   | n2   |
+------+------+
|    1 | a\\b |
+------+------+
1 row in set (0.01 sec)
```

The result in n2 is a\\b: when ESCAPED BY is empty, characters are read as-is, with no escaping.
- Example 4
The data.txt content is as follows:

```
(base) admin@admindeMacBook-Pro case % cat data.txt
1 a\0b
2 c\bd
3 a\nb
4 a\rb
5 a\tb
6 a\Zb
```

Connect to MatrixOne and execute the following statements to import the data.txt content into t4:

```sql
create table t4(n1 int, n2 varchar(255));
load data local infile 'Users/admin/test/case/data.txt' into table t4;

mysql> select * from t4;
+------+------+
| n1   | n2   |
+------+------+
|    1 | a b  |
|    2 | d    |
|    3 | a
b  |
|    4 | a
b  |
|    5 | a	b  |
|    6 | ab   |
+------+------+
6 rows in set (0.01 sec)
```

When n1=1, the result of n2 is a b, because \0 is a space character;
When n1=2, the result of n2 is d, because \b is a backspace character that erases the preceding c;
When n1=3, the result of n2 is a followed by a line break, because \n is a line break;
When n1=4, the result of n2 is a followed by b after a carriage return, because \r is a carriage return;
When n1=5, the result of n2 is a and b separated by a tab, because \t is a tab;
When n1=6, the result of n2 is ab, because \Z is the end character.
LINES TERMINATED BY
LINES TERMINATED BY is used to specify the ending character of a line. The LINES TERMINATED BY value can exceed one character.
For example, if lines in a CSV file end with a carriage return/line feed pair, you can load it with LINES TERMINATED BY '\r\n' or LINES TERMINATED BY '\n':

```sql
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE table1
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
```
LINES STARTING BY
If all input lines share a common prefix that you want to ignore, use LINES STARTING BY 'prefix_string' to skip the prefix and anything before it.
If a line does not contain the prefix, the entire line is skipped, as in the following statement:

```sql
LOAD DATA LOCAL INFILE '/tmp/test.txt' INTO TABLE table1
FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
```
If the data file looks like this:

```
xxx"abc",1
something xxx"def",2
"ghi",3
```

The resulting rows are ("abc", 1) and ("def", 2). The third line in the file is skipped because it does not contain the prefix.
SET
MatrixOne currently supports only SET column_name=nullif(column_name,expr): NULL is returned when column_name = expr, otherwise the original value of column_name is returned. For example, SET a=nullif(a, 1) returns NULL when a=1, and the original value of column a otherwise.
Using this pattern, you can set SET column_name=nullif(column_name,"null") while loading the file to store the literal string "null" in the column as NULL.
Example
-
    The local file test.txt contains:

    ```
    id,user_name,sex
    1,"weder","man"
    2,"tom","man"
    null,wederTom,"man"
    ```

-
    Create a new table user in MatrixOne:

    ```sql
    create database aaa;
    use aaa;
    CREATE TABLE `user` (`id` int(11), `user_name` varchar(255), `sex` varchar(255));
    ```

-
    Use the following command line to import test.txt into the table user:

    ```sql
    LOAD DATA LOCAL INFILE '/tmp/test.txt' INTO TABLE user SET id=nullif(id,"null");
    ```

-
    The imported table content is as follows:

    ```sql
    mysql> select * from user;
    +------+-----------+------+
    | id   | user_name | sex  |
    +------+-----------+------+
    |    1 | weder     | man  |
    |    2 | tom       | man  |
    | NULL | wederTom  | man  |
    +------+-----------+------+
    ```
PARALLEL
For a large, well-formatted file, such as a JSONLines file or a CSV file with no line breaks inside any field, you can use PARALLEL to load the file in parallel and speed up loading.
For example, a 2 GB file can be loaded by two threads: the second thread first seeks to the 1 GB offset and then reads and loads from there. The two threads thus read the file simultaneously, each handling 1 GB of data.
Example:
```sql
-- Enable parallel loading
load data local infile 'file_name' into table tbl_name FIELDS TERMINATED BY '|' ENCLOSED BY '\"' LINES TERMINATED BY '\n' IGNORE 1 LINES PARALLEL 'TRUE';
-- Disable parallel loading
load data local infile 'file_name' into table tbl_name FIELDS TERMINATED BY '|' ENCLOSED BY '\"' LINES TERMINATED BY '\n' IGNORE 1 LINES PARALLEL 'FALSE';
-- Parallel loading is disabled by default
load data local infile 'file_name' into table tbl_name FIELDS TERMINATED BY '|' ENCLOSED BY '\"' LINES TERMINATED BY '\n' IGNORE 1 LINES;
```
Note
The value of [PARALLEL {'TRUE' | 'FALSE'}] currently accepts only TRUE or FALSE, case-insensitively.
Note: If the PARALLEL field is omitted from the LOAD statement, parallel loading is disabled by default for CSV files and enabled by default for JSONLines files. If a field in a CSV file contains the line terminator (such as '\n'), parallel loading may split the file incorrectly and cause data errors. For very large files, it is recommended to split the file manually at line-break boundaries before enabling parallel loading.
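For instance, after a large file has been split manually at newline boundaries into two chunks (file names and table name hypothetical), each chunk can be loaded with parallel loading enabled:

```sql
load data local infile '/tmp/big_part1.csv' into table tbl_name FIELDS TERMINATED BY ',' PARALLEL 'TRUE';
load data local infile '/tmp/big_part2.csv' into table tbl_name FIELDS TERMINATED BY ',' PARALLEL 'TRUE';
```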
STRICT
MatrixOne supports the STRICT parameter to specify how the file is split for parallel loading; it takes effect only when PARALLEL is 'TRUE'. The default value of STRICT is 'TRUE', which enables read-ahead verification: splitting does not rely on newline characters alone, but pre-reads candidate split points and verifies them against the column definitions of the table; a position is accepted as a valid split point only when the data matches the column definitions. When STRICT is 'FALSE', the file is split at the line terminator (default '\n') during parallel import; if fields contain line terminators, the split may be incorrect.
Example:
```sql
-- Enable read-ahead verification
load data local infile 'file_name' into table tbl_name PARALLEL 'TRUE' STRICT 'TRUE';
-- Disable read-ahead verification
load data local infile 'file_name' into table tbl_name PARALLEL 'TRUE' STRICT 'FALSE';
```
Supported file formats
In the current version of MatrixOne Intelligence, LOAD DATA supports files in CSV format and JSONLines format.
For documentation on importing these formats, see Import .csv Format Data and Import JSONLines Data.
Note
LOAD DATA supports importing files compressed with lz4, gz, bz2, zlib, or flate; importing archives ending with .tar or .tar.xx is not supported.
CSV Format Standard Description
The CSV format loaded by MatrixOne Intelligence complies with the RFC 4180 standard. The format is as follows:
-
    Each record is located on a separate line, delimited by a line break (CRLF):

    ```
    aaa,bbb,ccc CRLF
    zzz,yyy,xxx CRLF
    ```

    Imported into the table as:

    ```
    +------+------+------+
    | col1 | col2 | col3 |
    +------+------+------+
    | aaa  | bbb  | ccc  |
    | zzz  | yyy  | xxx  |
    +------+------+------+
    ```
-
    The last record in the file may or may not have an ending line break (CRLF):

    ```
    aaa,bbb,ccc CRLF
    zzz,yyy,xxx
    ```

    Imported into the table as:

    ```
    +------+------+------+
    | col1 | col2 | col3 |
    +------+------+------+
    | aaa  | bbb  | ccc  |
    | zzz  | yyy  | xxx  |
    +------+------+------+
    ```
-
    An optional header line may appear as the first line of the file, with the same format as a normal record line. For example:

    ```
    field_name,field_name,field_name CRLF
    aaa,bbb,ccc CRLF
    zzz,yyy,xxx CRLF
    ```

    Imported into the table as:

    ```
    +------------+------------+------------+
    | field_name | field_name | field_name |
    +------------+------------+------------+
    | aaa        | bbb        | ccc        |
    | zzz        | yyy        | xxx        |
    +------------+------------+------------+
    ```
-
    The header and each record may contain one or more fields separated by commas. Spaces within a field are part of the field and must not be ignored. The last field in a record must not be followed by a comma. For example:

    ```
    aaa,bbb,ccc
    ```

    or:

    ```
    a aa,bbb,cc c
    ```

    Both examples are legal.

    Imported into the table as:

    ```
    +------+------+------+
    | col1 | col2 | col3 |
    +------+------+------+
    | aaa  | bbb  | ccc  |
    +------+------+------+
    ```

    or:

    ```
    +------+------+------+
    | col1 | col2 | col3 |
    +------+------+------+
    | a aa | bbb  | cc c |
    +------+------+------+
    ```
-
    Each field may or may not be enclosed in double quotes. If a field is not enclosed in double quotes, double quotes must not appear inside it. For example:

    ```
    "aaa","bbb","ccc" CRLF
    zzz,yyy,xxx
    ```

    or:

    ```
    "aaa","bbb",ccc CRLF
    zzz,yyy,xxx
    ```

    Both examples are legal.

    Imported into the table as:

    ```
    +------+------+------+
    | col1 | col2 | col3 |
    +------+------+------+
    | aaa  | bbb  | ccc  |
    | zzz  | yyy  | xxx  |
    +------+------+------+
    ```
-
    Fields containing line breaks (CRLF), double quotes, or commas should be enclosed in double quotes. For example:

    ```
    "aaa","b CRLF
    bb","ccc" CRLF
    zzz,yyy,xxx
    ```

    Imported into the table as:

    ```
    +------+------+------+
    | col1 | col2 | col3 |
    +------+------+------+
    | aaa  | b
    bb   | ccc  |
    | zzz  | yyy  | xxx  |
    +------+------+------+
    ```
-
    If a field is enclosed in double quotes, any double quote appearing inside the field must be escaped by doubling it; otherwise the first of the two double quotes is parsed as an escape character and only one double quote is kept. For example:

    ```
    "aaa","b"bb","ccc"
    ```

    The CSV above parses "b"bb" into b"bb. If the intended field value is b"bb, it should be written as:

    ```
    "aaa","b""bb","ccc"
    ```

    or:

    ```
    "aaa",b"bb,"ccc"
    ```
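The quoting rules above can be exercised with a minimal load; the table name, file path, and file contents here are hypothetical:

```sql
-- /tmp/quotes.csv contains:  "aaa","b""bb","ccc"
create table csv_demo (col1 varchar(255), col2 varchar(255), col3 varchar(255));
load data local infile '/tmp/quotes.csv' into table csv_demo
fields terminated by ',' enclosed by '"';
-- Per the doubling rule above, col2 is loaded as: b"bb
```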
Example
You can also refer to the following syntax example to quickly understand LOAD DATA:
Example 1: LOAD CSV
Simple import example
A local file named char_varchar.csv contains the following data:
a|b|c|d
"a"|"b"|"c"|"d"
'a'|'b'|'c'|'d'
"'a'"|"'b'"|"'c'"|"'d'"
"aa|aa"|"bb|bb"|"cc|cc"|"dd|dd"
"aa|"|"bb|"cc|"|"dd|"
"aa|||aa"|"bb||bb"|"cc||cc"|"dd|||dd"
"aa'|'||aa"|"bb'|'|bb"|"cc'|'||cc"|"dd'|'||dd"
aa"aa|bb"bb|cc"cc|dd"dd
"aa"aa"|"bb"bb"|"cc"cc"|"dd"dd"
"aa""aa"|"bb"bb"|"cc""cc"|"dd""dd"
"aa""aa"|"bb""bb"|"cc""cc"|"dd""dd""dd"
"aa""""aa"|"bb""""bb"|"cc""""cc"|"dd""""dd"
"aa""|aa"|"bb"|bb"|"cc""|cc"|"dd"
"aa""""|aa"|"bb""|bb"|"cc""""|cc"|"dd""""|dd"
|||
|||||
""|""|"|"|
""""|"""|"""|""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
""""""|"""""|""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Create tables in MatrixOne:
mysql> drop table if exists t1;
Query OK, 0 rows affected (0.01 sec)
mysql> create table t1(
-> col1 char(225),
-> col2 varchar(225),
-> col3 text,
-> col4 varchar(225)
-> );
Query OK, 0 rows affected (0.02 sec)
Import the data file into table t1 in MatrixOne:

```sql
load data local infile '<your-local-file-path>/char_varchar.csv' into table t1 fields terminated by '|';
```
The query results are as follows:
mysql> select * from t1;
+------------------------------------------------------------------------------------------------------------------------------
| col1 | col2 | col3 | col4 |
+------------------------------------------------------------------------------------------------------------------------------
| a | b | c | d |
| a | b | c | d |
| 'a' | 'b' | 'c' | 'd' |
| 'a' | 'b' | 'c' | 'd' |
| aa|aa | bb|bb | cc|cc | dd|dd |
| aa| | bb| | cc| | dd| |
| aa|||aa | bb||bb | cc||cc | dd|||dd |
| aa'|'||aa | bb'|'|bb | cc'|'|cc | dd'|'||dd |
| aa"aa | bb"bb | cc"cc | dd"dd |
| aa"aa | bb"bb | cc"cc | dd"dd |
| aa"aa | bb"bb | cc"cc | dd"dd |
| aa""aa | bb""bb | cc""cc | dd""dd |
| aa""aa | bb""bb | cc""cc | dd""dd |
| aa"|aa | bb"|bb | cc"|cc | dd"|dd |
| aa""|aa | bb""|bb | cc""|cc | dd""|dd |
| | | | | |
| | | | | |
| | | | | |
| " | " | " | " |
| "" | "" | "" | "" |
+------------------------------------------------------------------------------------------------------------------------------
20 rows in set (0.00 sec)
Add conditional import example
Following the simple example above, you can modify the LOAD DATA statement and append the conditions LINES STARTING BY 'aa' ignore 10 lines;:

```sql
delete from t1;
load data local infile '<your-local-file-path>/char_varchar.csv' into table t1 fields terminated by '|' LINES STARTING BY 'aa' ignore 10 lines;
```
The query results are as follows:
mysql> select * from t1;
+-----------+-----------+-----------+-----------+
| col1      | col2      | col3      | col4      |
+-----------+-----------+-----------+-----------+
| aa"aa | bb"bb | cc"cc | dd"dd |
| aa""aa | bb""bb | cc""cc | dd""dd |
| aa""aa | bb""bb | cc""cc | dd""dd |
| aa"|aa | bb"|bb | cc"|cc | dd"|dd |
| aa""|aa | bb""|bb | cc""|cc | dd""|dd |
| | | | | |
| | | | | |
| | | | | |
| " | " | " | " |
| "" | "" | "" | "" |
+-----------+-----------+-----------+-----------+
10 rows in set (0.00 sec)
As you can see, the query results ignore the first 10 lines and ignore the public prefix aa.
For detailed steps on how to import CSV format files, see Import .csv format data.
Example 2: LOAD JSONLines
Simple import example
A local file named jsonline_array.jl contains the following data:
[true,1,"var","2020-09-07","2020-09-07 00:00:00","2020-09-07 00:00:00","18",121.11,["1",2,null,false,true,{"q":1}],"1qaz",null,null]
["true","1","var","2020-09-07","2020-09-07 00:00:00","2020-09-07 00:00:00","18","121.11",{"c":1,"b":["a","b",{"q":4}]},"1aza",null,null]
Create tables in MatrixOne Intelligence:
mysql> drop table if exists t1;
Query OK, 0 rows affected (0.01 sec)
mysql> create table t1(col1 bool,col2 int,col3 varchar(100), col4 date,col5 datetime,col6 timestamp,col7 decimal,col8 float,col9 json,col10 text,col11 json,col12 bool);
Query OK, 0 rows affected (0.03 sec)
Import the data file into table t1 in MatrixOne Intelligence:

```sql
load data local infile {'filepath'='<your-local-file-path>/jsonline_array.jl','format'='jsonline','jsondata'='array'} into table t1;
```
The query results are as follows:
mysql> select * from t1;
+------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| col1 | col2 | col3 | col4 | col5 | col6 | col7 | col8 | col9 | col10 | col11 | col12 |
+------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| true | 1 | var | 2020-09-07 | 2020-09-07 00:00:00 | 2020-09-07 00:00:00 | 18 | 121.11 | ["1", 2, null, false, true, {"q": 1}] | 1qaz | NULL | NULL |
| true | 1 | var | 2020-09-07 | 2020-09-07 00:00:00 | 2020-09-07 00:00:00 | 18 | 121.11 | {"b": ["a", "b", {"q": 4}], "c": 1} | 1aza | NULL | NULL |
+------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2 rows in set (0.00 sec)
Add conditional import example
Following the simple example above, you can modify the LOAD DATA statement and append ignore 1 lines to the end of the statement to see the difference:
```sql
delete from t1;
load data local infile {'filepath'='<your-local-file-path>/jsonline_array.jl','format'='jsonline','jsondata'='array'} into table t1 ignore 1 lines;
```
The query results are as follows:
mysql> select * from t1;
+------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| col1 | col2 | col3 | col4 | col5 | col6 | col7 | col8 | col9 | col10 | col11 | col12 |
+------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| true | 1 | var | 2020-09-07 | 2020-09-07 00:00:00 | 2020-09-07 00:00:00 | 18 | 121.11 | {"b": ["a", "b", {"q": 4}], "c": 1} | 1aza | NULL | NULL |
+------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 row in set (0.00 sec)
As you can see, the query result ignores the first line.
For detailed steps on how to import JSONLines format files, see Import JSONLines Data.
Limitations

- The REPLACE and IGNORE modifiers resolve conflicts on unique indexes: REPLACE replaces existing conflicting rows with the new data, while IGNORE keeps the existing data and discards the new data. Neither modifier is supported in MatrixOne yet.
- MatrixOne currently supports SET only in the form SET column_name=nullif(column_name,expr).
- When parallel loading is enabled, make sure that no field in the file contains the specified line terminator, such as '\n'; otherwise the data may be loaded incorrectly.
- Parallel loading requires the file to be uncompressed; parallel loading of compressed files is not supported.
- To use LOAD DATA LOCAL for local loading, connect to the MatrixOne Intelligence service host from the command line: mysql -h <mo-host-ip> -P 6001 -uroot -p111 --local-infile.
- MatrixOne does not currently support ESCAPED BY, and writing or reading special characters differs somewhat from MySQL.