4. Command-Line Parameter Description - squids-io/dts-doc GitHub Wiki

# Command-Line Parameter Description

## ConnectDB

| Parameter | Description |
| --- | --- |
| `--source` | source DB connection string: `username/[email protected]:3306` |
| `--target` | target DB connection string: `username/[email protected]:3306` |
| `--dbtype` | database type, e.g. `mysql` |
| `--enable-ssl-source` | enable SSL for the source connection (default `n`) |
| `--enable-ssl-target` | enable SSL for the target connection (default `n`) |
| `--ssl-cafile-source` | path to the source SSL CA file |
| `--ssl-cafile-target` | path to the target SSL CA file |
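The connection options above can be combined as in the following sketch. The binary name `dbmotion` is an assumption (inferred from the `dbmotion.log` default under `--log-output`), and the credentials, hosts, and CA path are placeholders:

```shell
# Hypothetical invocation; binary name, users, hosts, and paths are placeholders.
dbmotion \
  --source 'app_user/[email protected]:3306' \
  --target 'app_user/[email protected]:3306' \
  --dbtype mysql \
  --enable-ssl-source y \
  --ssl-cafile-source /etc/ssl/certs/source-ca.pem
```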
## WorkSetting

| Parameter | Description |
| --- | --- |
| `--work-threads` | number of worker threads; maximum 48 (default 4) |
| `--max-connections` | maximum number of source or target database connections; maximum 64 (default 8) |
| `--split-rowcount` | specify the table split size, used to move a table's data in parallel; maximum 99999999 (default 50000) |
| `--commit-batchsize` | batch commit size for target table row insert/merge; maximum 50000 (default 200) |
| `--fetch-batchsize` | batch fetch size for source table rows; maximum 100000 (default 10000) |
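A tuning sketch for larger tables, staying within the documented maximums; `dbmotion` and the `$SRC`/`$TGT` connection strings are placeholders:

```shell
# More parallelism for big tables; 16 threads and 32 connections are
# illustrative values within the documented limits (48 and 64).
dbmotion \
  --source "$SRC" --target "$TGT" --dbtype mysql \
  --work-threads 16 \
  --max-connections 32 \
  --split-rowcount 200000 \
  --commit-batchsize 1000 \
  --fetch-batchsize 20000
```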
## MigrateOption

| Parameter | Description |
| --- | --- |
| `--move-model` | `onlydata` \| `onlymeta` \| `all`: move table data, object structure, or both (default `onlydata`) |
| `--do-truncate` | `n` \| `y`: truncate the target table before moving data (default `n`) |
| `--exists-handle` | `ignore` \| `drop`: used in the `onlymeta`/`all` model when the target object already exists (default `ignore`) |
| `--count-model` | `estimate` \| `count`: how the table row count is obtained (default `estimate`) |
| `--enable-merge` | `n` \| `y`: enable replace/merge into the target table; if the table has no PK/UK, it falls back to insert mode (default `n`) |
| `--clear-status` | `n` \| `y`: clear the migration status stored in SQLite (default `n`) |
| `--only-check` | `n` \| `y`: only check the migration conditions and produce a checklist; does not migrate anything (default `n`) |
| `--peek-model` | `metadatasync` \| `datasync` \| `metadata` \| `data` \| `meta` \| `sync`: internal use only (default `metadatasync`) |
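A full-migration sketch combining these options; `dbmotion` and `$SRC`/`$TGT` are placeholders as above:

```shell
# Move both structure and data, truncate target tables first, and drop
# any target objects that already exist before recreating them.
dbmotion \
  --source "$SRC" --target "$TGT" --dbtype mysql \
  --move-model all \
  --do-truncate y \
  --exists-handle drop \
  --enable-merge y
```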
## Objects

| Parameter | Description |
| --- | --- |
| `--tables` | comma-separated table list; use a colon to map a table to a new name, for example `--tables=sc1.table1,sc1.table2,sc1.tableA:sc1.tableC` |
| `--triggers` | specify the trigger list |
| `--views` | specify the view list |
| `--procedures` | specify the procedure list |
| `--functions` | specify the function list |
| `--events` | specify the event list |
| `--schemas` | specify the schema list; overrides the tables/triggers/procedures/views/events/functions options |
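An object-selection sketch; `dbmotion` is a placeholder name, and the comma-separated format for `--views` is assumed by analogy with the documented `--tables` example:

```shell
# Migrate two tables, renaming sc1.tableA to sc1.tableC on the target,
# plus two views (list format assumed to match --tables).
dbmotion \
  --source "$SRC" --target "$TGT" --dbtype mysql \
  --tables=sc1.table1,sc1.table2,sc1.tableA:sc1.tableC \
  --views=sc1.view1,sc1.view2
```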
## Changed Sync

| Parameter | Description |
| --- | --- |
| `--binlog` | starting binlog file for change data capture, such as `mysql-bin.000035` |
| `--pos` | starting position within the binlog file, such as `9527` |
| `--gtid` | starting GTID for change data capture; see `SHOW MASTER STATUS` / `SHOW SLAVE STATUS` |
| `--sync-changed` | `n` \| `y`: automatically sync incremental changed data (default `n`) |
| `--sync-mode` | `slave` \| `canal`: changed-data sync mode (default `slave`) |
| `--stop-slave` | `n` \| `y`: stop master/slave replication; internal use only (default `n`) |
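An incremental-sync sketch using the binlog file and position values from the examples above; `dbmotion` and `$SRC`/`$TGT` remain placeholders:

```shell
# Start change capture from a known binlog position and keep syncing
# incremental changes in the default slave mode.
dbmotion \
  --source "$SRC" --target "$TGT" --dbtype mysql \
  --binlog mysql-bin.000035 \
  --pos 9527 \
  --sync-changed y \
  --sync-mode slave
```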
## Data Check

| Parameter | Description |
| --- | --- |
| `--pre-check` | `text` \| `json`: pre-check the source/target DB and output a text or JSON report; does not migrate anything |
| `--data-check` | `n` \| `y`: check data and object consistency between the source and target DB (default `n`) |
| `--check-mode` | `ptchecksum` \| `checksum`: `ptchecksum` checks online, so the application does not need to be stopped (default `ptchecksum`) |
| `--get-result` | `objects` \| `tables` \| `sync` \| `check`: get the result of object migration, table data, changed sync, or data check |
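A verification sketch; `dbmotion` and `$SRC`/`$TGT` are placeholders, and whether `--get-result` can be run as a standalone step is an assumption:

```shell
# Pre-check the environment and emit a JSON report (no data is moved).
dbmotion --source "$SRC" --target "$TGT" --dbtype mysql --pre-check json

# Verify consistency online; ptchecksum needs no application downtime.
dbmotion --source "$SRC" --target "$TGT" --dbtype mysql \
  --data-check y --check-mode ptchecksum
```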
## ExportData

| Parameter | Description |
| --- | --- |
| `--export-data` | `n` \| `y`: export data to the filesystem (default `n`) |
| `--struct-dump` | `n` \| `y`: use mysqldump to dump the DB structure (default `n`) |
| `--data-dir` | specify the output data directory (default `./`) |
| `--file-format` | `text` \| `parquet`: specify the output file format (default `text`) |
| `--column-separator` | visible or invisible characters, such as `\t`, `x07`, or `,` (default `\t`) |
| `--line-breaker` | visible or invisible characters, such as `\n` or `x08` (default `\n`) |
| `--file-encoding` | specify the data file character encoding (default `utf8`) |
| `--parquet-rgsizeMB` | specify the Parquet row group size in MB (default 128) |
| `--parquet-pageKB` | specify the Parquet page size in KB (default 8) |
| `--parquet-compress` | `gzip` \| `lz4` \| `snappy` \| `uncompress`: specify the compression type (default `gzip`) |
| `--parquet-filerows` | specify the number of rows per Parquet file (default 1000000) |
| `--to-hdfs` | write files to HDFS; specify the HDFS namenode connection string, for example `[email protected]:9000` |
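An export sketch writing Parquet files to a local directory; `dbmotion`, the output path, and the tuning values are placeholders:

```shell
# Export table data as snappy-compressed Parquet files into ./out,
# capping each file at 500000 rows.
dbmotion \
  --source "$SRC" --dbtype mysql \
  --export-data y \
  --data-dir ./out \
  --file-format parquet \
  --parquet-compress snappy \
  --parquet-filerows 500000
```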
## Input/Output

| Parameter | Description |
| --- | --- |
| `--sql-file` | specify a `.sql` file to import data from |
| `--json-file` | use a JSON file to specify the objects to move; overrides the objects/schemas options |
| `--json-model` | `object` \| `schema`: whether the JSON file describes individual objects or whole schemas; must be used with `--json-file` |
| `--log-output` | `console` \| `file`: log output mode; the log file is `dbmotion.log` (default `console`) |
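An import sketch; `dbmotion`, `$TGT`, and the `.sql` file path are placeholders:

```shell
# Import a SQL file into the target and write logs to dbmotion.log
# instead of the console.
dbmotion \
  --target "$TGT" --dbtype mysql \
  --sql-file ./backup.sql \
  --log-output file
```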
## Others

| Parameter | Description |
| --- | --- |
| `--result-storage` | `mysql` \| `sqlite`: where the status information is stored (default `sqlite`) |
| `--connection-string` | the URL used to connect to MySQL when `--result-storage` is `mysql` (default `""`) |
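A sketch keeping migration status in MySQL rather than the default SQLite file; `dbmotion` and the credentials are placeholders, and the `--connection-string` format is assumed to match the `--source`/`--target` connect-string syntax:

```shell
# Store status information in an external MySQL instance.
dbmotion \
  --source "$SRC" --target "$TGT" --dbtype mysql \
  --result-storage mysql \
  --connection-string 'status_user/status_pw@10.0.0.9:3306'
```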