Tencent Cloud

Data Transfer Service


Use Instructions

Last updated: 2024-07-08 17:34:39
Sync Object
1. Only base tables and views can be synced. Functions, triggers, stored procedures, and other objects are not supported.
2. Only the InnoDB storage engine is supported. If any table uses another engine, an error will be reported during the task check.
3. Interrelated data objects need to be synced at the same time, otherwise sync failure will occur.
4. The source TDSQL MySQL limits the number of tables to a maximum of 5,000 for the entire instance. Exceeding this limit will cause DTS task errors; in addition, having too many tables increases access time at the source, leading to performance fluctuation and decline.
5. During the incremental sync stage, if source database table names contain the strings TDSQLagent or tdsql_sub, they might be filtered out or cause sync anomalies, as these names are identical to the TDSQL system's temporary table names: TDSQLagent is a temporary table for scaling, and tdsql_sub tables are subtables for hash-list and hash-range. Therefore, it is recommended not to give these names to source tables that will be synced.
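Point 5 above can be verified before configuring the task. The following is a minimal pre-flight sketch (a hypothetical helper, not part of DTS) that flags table names containing the reserved patterns:

```python
# Pre-flight check: flag tables whose names contain the TDSQL system
# temporary-table patterns (TDSQLagent, tdsql_sub), which may be filtered
# out or cause incremental sync anomalies. The patterns come from the
# text above; the helper itself is illustrative only.

RESERVED_PATTERNS = ("tdsqlagent", "tdsql_sub")

def risky_tables(table_names):
    """Return the tables whose names contain a reserved TDSQL pattern."""
    return [t for t in table_names
            if any(p in t.lower() for p in RESERVED_PATTERNS)]

tables = ["orders", "TDSQLagent_tmp_1", "user_tdsql_sub_0", "payments"]
print(risky_tables(tables))  # ['TDSQLagent_tmp_1', 'user_tdsql_sub_0']
```

Running such a check against the list of tables to be synced, before task creation, avoids discovering the naming conflict only after incremental sync misbehaves.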
Sync Feature
Currently, the primary key conflict resolution policy only supports conflict overwrite. For both full and incremental stages, primary key data conflicts will be handled by conflict overwrite.
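Conflict overwrite behaves like an upsert in which the incoming row always wins, similar in spirit to MySQL's REPLACE. A minimal in-memory sketch of that semantics (illustrative only, not DTS internals):

```python
# Illustration of "conflict overwrite": when an incoming row's primary key
# already exists at the target, the incoming row replaces the existing one;
# otherwise it is inserted. In-memory sketch, not the DTS implementation.

def apply_rows(target, incoming_rows):
    """target: dict keyed by primary key; incoming rows always win."""
    for pk, row in incoming_rows:
        target[pk] = row  # overwrite on conflict, insert otherwise
    return target

target = {1: {"name": "alice"}, 2: {"name": "bob"}}
incoming = [(2, {"name": "bob-updated"}), (3, {"name": "carol"})]
print(apply_rows(target, incoming))
# {1: {'name': 'alice'}, 2: {'name': 'bob-updated'}, 3: {'name': 'carol'}}
```

The practical consequence is that any pre-existing target row sharing a primary key with a source row is silently replaced in both the full and incremental stages.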
Source Database Impact
1. DBbridge occupies some source database resources while performing full data sync, which may increase the load and stress on the source database. If your database specification is low, it is recommended to run the sync during your business off-peak period.
2. During data sync, DBbridge will use the account that executes the sync task to write into the system database __tencentdb__ in the source database, recording transaction marker ID and other metadata. It's necessary to ensure that the source database has read-write permissions for __tencentdb__.
To ensure that data comparison issues can be traced, the __tencentdb__ system database will not be deleted from the source database after the sync task is completed.
The space occupied by the __tencentdb__ system database is very small, approximately one ten-thousandth to one thousandth of the source database's storage space (for example, if the source database is 50 GB, the __tencentdb__ system database is about 5 MB - 50 MB). Moreover, because it uses a single thread and a waiting connection mechanism, it has almost no impact on the source database's performance and will not compete for resources.
3. By default, a lock-free sync method is used. The full data export stage does not place a global lock (FTWRL) on the source database; it only places table locks on tables without a primary key.
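The sizing rule stated above for __tencentdb__ can be turned into a quick estimate. A small sketch of the arithmetic (the ratio bounds come from the text; the helper name is made up for illustration):

```python
# Rough size estimate for the __tencentdb__ metadata database, using the
# ratio stated above: about 1/10,000 to 1/1,000 of the source database's
# storage space.

def tencentdb_size_range_mb(source_gb):
    """Return (low, high) estimated size in MB for a source of source_gb GB."""
    source_mb = source_gb * 1024
    return source_mb / 10_000, source_mb / 1_000

low, high = tencentdb_size_range_mb(50)  # 50 GB source, as in the example
print(f"{low:.1f} MB - {high:.1f} MB")   # 5.1 MB - 51.2 MB
```

This matches the worked example in the text (a 50 GB source yields roughly 5 MB to 50 MB of metadata).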
Operation Restrictions
Please do not perform the following operations during sync, as they will cause the sync task to fail.
1. During the full data export stage, please do not execute any DDL operations that change the database or table structure in the source database.
2. During the sync task, do not modify or delete user information (including username, password, and permissions) and port numbers in both the source and target databases.
3. Do not clear Binlogs in the source database.
Data Type
1. Incremental sync does not support modifying the primary key. This includes changes to the primary key column, the partition table distribution key, and comments on primary key columns, as well as adding, deleting, or modifying column fields and their lengths.
2. Geometry-related data types are not supported, and tasks will report an error when encountering such data types.
3. For tables with floating-point columns, differences in precision handling between the full and incremental stages may cause precision inconsistencies in the sync result.
4. During the incremental sync process, if the source database generates Binlog statements in the STATEMENT format, it will lead to sync failure.
5. When the source TDSQL MySQL uses the MariaDB 10.1.x kernel, the timestamp type with a specified precision (e.g., timestamp(3)) is not supported; otherwise, the DTS task will report an error. To resolve this, remove the precision and then recreate the task.
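The restrictions on geometry types (point 2) and on timestamp precision under the MariaDB 10.1.x kernel (point 5) can be pre-checked from column type declarations. A hypothetical pre-flight helper, assuming column types are available as plain strings:

```python
import re

# Pre-check of column type declarations against the data type restrictions
# above: geometry-family types cause task errors, and on the MariaDB 10.1.x
# kernel a timestamp declared with explicit precision (e.g., timestamp(3))
# needs the precision removed. Illustrative helper, not part of DTS.

GEOMETRY_TYPES = {"geometry", "point", "linestring", "polygon",
                  "multipoint", "multilinestring", "multipolygon",
                  "geometrycollection"}

def check_column_types(columns, mariadb_101_kernel=False):
    """columns: list of (name, type_decl) pairs. Returns a list of warnings."""
    warnings = []
    for name, decl in columns:
        base = re.match(r"\s*(\w+)", decl).group(1).lower()
        if base in GEOMETRY_TYPES:
            warnings.append(f"{name}: geometry type '{base}' not supported")
        if mariadb_101_kernel and re.match(r"\s*timestamp\s*\(\d+\)", decl, re.I):
            warnings.append(f"{name}: remove precision from '{decl.strip()}'")
    return warnings

cols = [("geo", "POINT"), ("ts", "timestamp(3)"), ("id", "bigint")]
print(check_column_types(cols, mariadb_101_kernel=True))
```

In practice the column declarations could be pulled from information_schema before creating the task; the check above only illustrates the rules, not how DTS enforces them.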
Transaction
1. Transactions that contain both DML and DDL statements are not supported; encountering such a transaction will result in an error.
2. When the source database is TDSQL MySQL (MariaDB 5.6 kernel), XA transactions are not supported; encountering an XA transaction will result in an error.
HA Switch and Scaling
1. If the source database is a non-GTID database, DBbridge does not support source HA switchover. Once the source TDSQL MySQL undergoes a switchover, DBbridge incremental sync may be interrupted.
2. If the source is a self-built database connected to TDSQL MySQL via SET connections, and a SET node is added or deleted at the source after the sync task starts, the DBbridge sync task will report an error. You need to modify the SET configuration information in DBbridge (to keep it consistent with the actual SETs at the source) and then restart the task; only then can data of the newly added or deleted SET be synced.
Partition Table Sync
1. The full sync stage supports syncing primary/secondary partition tables, but the partition syntax must comply with TDSQL MySQL standards. Primary hash partition tables can only be created through the shardkey method. The key syntax for creating partition tables in TDSQL MySQL is as follows; for detailed syntax, please refer to TDSQL MySQL Table Creation Syntax Example.
Primary Hash partition: shardkey
Primary Range partition: TDSQL_DISTRIBUTED BY RANGE
Primary List partition: TDSQL_DISTRIBUTED BY LIST
Primary Hash partition + secondary Range/List partition: shardkey + PARTITION BY RANGE/LIST
Primary Range partition + secondary Range/List partition: TDSQL_DISTRIBUTED BY RANGE + PARTITION BY RANGE/LIST
Primary List partition + secondary Range/List partition: TDSQL_DISTRIBUTED BY LIST + PARTITION BY RANGE/LIST
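The keyword combinations above can be summarized as a lookup table. An illustrative sketch (the dict and its keys are an aid for this document, not a DTS or TDSQL API):

```python
# Summary of the TDSQL MySQL partition-creation keywords listed above,
# keyed by (primary partition type, secondary partition type). The table
# values are the required DDL keywords; see the TDSQL MySQL table creation
# syntax documentation for complete CREATE TABLE statements.

PARTITION_SYNTAX = {
    ("hash", None):          "shardkey",
    ("range", None):         "TDSQL_DISTRIBUTED BY RANGE",
    ("list", None):          "TDSQL_DISTRIBUTED BY LIST",
    ("hash", "range/list"):  "shardkey + PARTITION BY RANGE/LIST",
    ("range", "range/list"): "TDSQL_DISTRIBUTED BY RANGE + PARTITION BY RANGE/LIST",
    ("list", "range/list"):  "TDSQL_DISTRIBUTED BY LIST + PARTITION BY RANGE/LIST",
}

print(PARTITION_SYNTAX[("hash", None)])           # shardkey
print(PARTITION_SYNTAX[("range", "range/list")])
```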
2. During the incremental sync stage, repeatedly creating a secondary partition table, dropping it, and then creating it again in quick succession is not supported, as this might lead to task exceptions due to table type conflicts. Dropping a non-existent secondary partition table before creating a secondary partition table might cause a deadlock, with no error reported by the task, requiring manual unlocking.
3. TDSQL MySQL sync to MySQL/MariaDB/Percona link: If the source database to be synced contains secondary partition tables, they will be single tables on the target side after sync.
Consistency Check
1. The scope of the data consistency check is limited to comparing the selected database objects in the source database with those synced to the target database. If users write data to the target database during the sync task, this data is not included in the check scope, nor are other advanced objects (such as stored procedures, functions, and views). If Structure Initialization is not selected in the sync task configuration (indicating that table structures are not synced), table structures are also not checked during the consistency check.
2. The current check task does not recognize DDL operations. If DDL operations are performed in the source database during sync, the result of the check will be inconsistent. Users need to initiate a new check task to obtain accurate comparison results.
3. In the consistency check task, the timeout limit for DTS to query data from the source or the target is 10 minutes per query. This applies to each block check query, row check query, etc. If a single query exceeds 10 minutes (for example, when querying large tables from the source), it will cause the check task to fail.
4. If only some DML types are selected or WHERE conditions are set for filtering in the sync task configuration, the source and target databases will diverge, so consistency checks are not supported. To perform a consistency check, all DML and DDL types must be selected.
5. Data check supports unidirectional sync only; it does not support complex topologies like many-to-one or one-to-many.
6. The following configuration in a sync task may lead to inconsistent check results. Please be aware of this when creating a check task.
If the 'Full Data Initialization' option is not selected during data initialization, there may be inconsistencies between the source and the target data, potentially leading to discrepancies in the data check results.
7. A data consistency check task may increase the load in the source database instance. Therefore, you need to perform such tasks during the business off-peak period.
8. A data consistency check task can be executed repeatedly, but a DBbridge task can initiate only one consistency check task at any time.
9. Complete comparison and sampling comparison require the tables to have a primary key or unique key; otherwise, they will be skipped and not checked. Row count check does not require a primary key or unique key.
10. If the user chooses to end the sync task before the data consistency check is complete, the consistency check will fail.
11. Consistency checks use the account that executes the sync task to create a table CRC_xxx_CMP_xxx in the source database, which records the data comparison information during the sync task.
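The complete comparison described above is, in spirit, a primary-key-driven digest comparison between source and target snapshots. A pure-Python sketch under that assumption (DTS's actual block/row check queries and CRC_xxx_CMP_xxx bookkeeping are more involved):

```python
import hashlib

# Sketch of a primary-key-based consistency check in the spirit of the
# complete comparison above: hash each row on both sides and report keys
# whose digests differ or that exist on one side only. Pure-Python
# illustration, not the DTS implementation. This is also why tables
# without a primary key or unique key can only get a row count check.

def row_digest(row):
    """Stable digest of a row dict, independent of key order."""
    return hashlib.md5(repr(sorted(row.items())).encode()).hexdigest()

def compare_tables(source, target):
    """source/target: dicts of primary key -> row dict; return differences."""
    diffs = []
    for pk in sorted(set(source) | set(target)):
        if pk not in source or pk not in target:
            diffs.append((pk, "missing"))
        elif row_digest(source[pk]) != row_digest(target[pk]):
            diffs.append((pk, "mismatch"))
    return diffs

src = {1: {"v": "a"}, 2: {"v": "b"}, 3: {"v": "c"}}
tgt = {1: {"v": "a"}, 2: {"v": "B"}}
print(compare_tables(src, tgt))  # [(2, 'mismatch'), (3, 'missing')]
```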
