
Introduction to Model Management

Last updated: 2026-02-04 18:36:31
Model Management is a module of the Tencent Cloud TI-ONE Platform (TI-ONE) for managing and optimizing models. It comprises two sub-modules: Model Repository and Model Optimization. Model Repository is the unified entry point for model management on TI-ONE, where you can manage models trained on the platform as well as your own local models.
You can import trained models and perform CRUD operations on third-party models.
You can delete, modify, or view models that have been accelerated through the Model Optimization module.
You can package managed models, together with their inference code and configuration files, for service release.
You can manage models, model versions, and sub-directories under these versions.
You can view online services associated with models.
You can view batch prediction tasks associated with models.
Model hot update is supported.
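The capabilities above all revolve around one layout: each model owns versioned sub-directories holding the model file, inference code, and configuration. The sketch below is a hypothetical, local-only illustration of that layout, not the tikit SDK or any real TI-ONE API; all names in it are made up:

```python
# Hypothetical sketch of a Model Repository-style layout (not a real TI-ONE API):
# repo/<model>/<version>/ holds the model file, inference code, and config,
# which is the unit that gets packaged for service release.
import pathlib
import tempfile

def register_version(root, model_name, version, files):
    """Create repo/<model>/<version>/ and write the given files into it."""
    vdir = pathlib.Path(root) / model_name / version
    vdir.mkdir(parents=True, exist_ok=True)
    for name, content in files.items():
        (vdir / name).write_text(content)
    return vdir

root = tempfile.mkdtemp()
vdir = register_version(root, "resnet50", "v1", {
    "model.pb": "binary-placeholder",        # the trained model artifact
    "inference.py": "# inference entry point",
    "config.json": "{}",
})
print(sorted(p.name for p in vdir.iterdir()))
# → ['config.json', 'inference.py', 'model.pb']
```

Releasing a new version ("v2") under the same model name, rather than overwriting "v1", is what makes per-version operations such as hot update and rollback possible.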
Model Optimization is the TI-ONE module for inference acceleration. It uses TI Acceleration Service (TI-ACC) to accelerate inference for the models in Model Repository, reducing cost and improving efficiency. Models optimized through this module can be used only for inference services in Model Services on TI-ONE.
You can perform inference acceleration on the models in Model Repository.
You can quickly accelerate models in batches.
You can accelerate common models in fields such as Computer Vision (CV), natural language processing (NLP), and Optical Character Recognition (OCR).
Two optimization levels are supported: lossless and FP16.
You can optimize models with multiple input nodes, and with both fixed and dynamic input shapes.
An acceleration report for the inference stage of the model can be generated.
You can save optimized models to Model Repository for service release.
