Tencent Cloud

Stream Compute Service

Glossary

Last updated: 2023-11-08 16:21:05
The table below lists some common terms for ETL jobs.
| Term | Description |
|------|-------------|
| Stream computing | The computing of data in stream form. A stream computing job reads data from one or more data sources, continuously processes the data streams with the engine's operators, and writes the results to sinks such as message queues, databases, data warehouses, and storage services. |
| Data source | The source that continuously generates data for stream computing. |
| Data sink | The destination to which the results of stream computing are written. |
| Schema | The structure information of a table, such as column names and types. In PostgreSQL specifically, a schema is a namespace inside a database: smaller than a database and larger than a table. |
| MySQL | A common relational database, which can be used as the data source or sink of an ETL job. |
| PostgreSQL | A relational database similar to MySQL. |
| ClickHouse | A columnar database management system (DBMS) for online analytical processing (OLAP). It can be used as the data sink of an ETL job. |
| Elasticsearch | A real-time search and data analytics engine. |
| Field mapping | The process of extracting data from a data source, computing and cleansing it, and loading it into the data sink. |
| Constant field | A custom field with a fixed value that you can add to the data source or data sink. |
| Calculated field | A field produced by converting or computing values extracted from the data source, using the built-in functions of Stream Compute Service. |
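The relationships among these terms can be sketched as a toy pipeline. This is illustrative Python only, not the service's API: real ETL jobs are configured in the Stream Compute Service console, and every name below (the records, the `ap-seoul` constant, the helper functions) is hypothetical.

```python
def source():
    # Data source: yields a stream of raw records (finite here for demo purposes).
    yield {"name": "alice", "amount": "12.5"}
    yield {"name": "bob", "amount": "7.0"}

def field_mapping(records):
    # Field mapping: extract, cleanse, and compute fields for each record.
    for rec in records:
        yield {
            "user": rec["name"].upper(),                      # calculated field (value conversion)
            "amount_cents": int(float(rec["amount"]) * 100),  # calculated field (computation)
            "region": "ap-seoul",                             # constant field (fixed custom value)
        }

# Data sink: the destination that collects the results (a list stands in here).
sink = list(field_mapping(source()))
print(sink)
```

In a real job the sink would be a system such as ClickHouse or Elasticsearch rather than an in-memory list, and the per-record transformations would be expressed with the service's built-in functions instead of Python code.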

