
Operation Guide

Updated at: Aug 12, 2019 GMT+08:00

ModelArts provides online code development environments and an AI development lifecycle that covers data preparation, model training, model management, and service deployment. For developers who are familiar with code editing and debugging and with common AI engines, this helps build models quickly and efficiently.

This document describes how to perform AI development on the ModelArts management console. If you develop with the API or SDK instead, see the ModelArts SDK Reference or ModelArts API Reference.

For end-to-end examples of the AI development lifecycle, see Modeling with MXNet and Modeling with Notebook.

AI Development Lifecycle

The AI Development Lifecycle function provided by ModelArts accommodates developers' existing habits and offers a variety of engines and scenarios to choose from. The following table describes the entire process, from data preparation to service deployment, on the ModelArts platform.

Table 1 Process description

| Task | Sub-task | Description | Reference |
|------|----------|-------------|-----------|
| Prepare Data | Create a dataset. | Create a dataset in ModelArts to manage and preprocess your business data. | Creating a Dataset |
| | Label data. | Label and preprocess the data in your dataset based on the business logic to facilitate subsequent training. Data labeling directly affects the model training effect. | Labeling Data |
| | Publish the dataset. | After labeling the data, publish the dataset to generate a dataset version that can be used for model training. | Publishing a Dataset |
| Develop Script | Create a notebook. | Create a notebook instance as the development environment. | Creating and Opening a Notebook Instance |
| | Compile code. | Write and debug code in the notebook instance. You can also use a ModelArts sample notebook to build a model directly. | Using ModelArts Sample Notebooks |
| | Export the .py file. | Export the training script as a .py file for subsequent operations, such as model training and management. | Using the Convert to Python File Function |
| Train a Model | Create a training job. | Create a training job and upload the training script. After the training is complete, the generated model is stored in OBS. | Creating a Training Job |
| | (Optional) Create a TensorBoard job. | Create a TensorBoard job to visualize the training process and adjust and optimize the model. Currently, TensorBoard supports only the MXNet and TensorFlow engines. | Managing a TensorBoard Job |
| Manage Models | Compile inference code and configuration files. | Following the model package specifications provided by ModelArts, write the inference code and configuration file for your model, and save them to the training output location. | Model Package Specifications |
| | Import the model. | Import the trained model into ModelArts to facilitate service deployment. | Importing a Model |
| Deploy a Model | Deploy the model as a service. | Deploy the model as a real-time service or a batch service. | |
| | Access the service. | If the model is deployed as a real-time service, you can call the service for prediction. If it is deployed as a batch service, you can view the prediction results. | |
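The training-job step above runs a standalone .py script and copies its output directory back to OBS. A minimal sketch of such a script, assuming the common ModelArts convention of receiving the dataset and output locations through `--data_url` and `--train_url` arguments (treat these parameter names as an assumption and confirm them against your job's runtime parameters in the console):

```python
# Skeleton of a training script for a training job. The --data_url and
# --train_url argument names follow a common ModelArts convention
# (assumption); check your job's configured runtime parameters.
import argparse
import json
import os


def train(data_dir: str, output_dir: str) -> dict:
    """Placeholder training loop: in a real script, this is where the
    MXNet/TensorFlow model would be built and fitted on data_dir."""
    metrics = {"epochs": 3, "final_loss": 0.12}  # illustrative values only
    os.makedirs(output_dir, exist_ok=True)
    # Persist the trained artifact; the platform copies output_dir to OBS.
    with open(os.path.join(output_dir, "model.json"), "w") as f:
        json.dump(metrics, f)
    return metrics


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_url", default="/tmp/data", help="input dataset path")
    parser.add_argument("--train_url", default="/tmp/output", help="model output path")
    args, _ = parser.parse_known_args()
    train(args.data_url, args.train_url)
```

Because the model lands in the directory passed as `--train_url`, the subsequent "Import the model" step can pick it up from the training output location.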
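The "Compile inference code" step requires a handler that the serving runtime calls for each request. The authoritative interface is defined in Model Package Specifications; the class and method names below are illustrative assumptions, not the official interface, but they show the typical preprocess / inference / postprocess structure such a handler follows:

```python
# Illustrative sketch of an inference handler. The real class name, base
# class, and hook names are defined by the ModelArts model package
# specifications; the ones below are assumptions for illustration.
import json


class InferenceService:
    def __init__(self, model_path: str):
        # Location of the trained artifact produced by the training job.
        self.model_path = model_path

    def _preprocess(self, request_body: str) -> dict:
        # Parse the raw HTTP request body into model-ready features.
        return json.loads(request_body)

    def _inference(self, features: dict) -> dict:
        # Placeholder: a real handler would load the model from
        # self.model_path and run it here.
        score = 1.0 if features.get("value", 0) > 0 else 0.0
        return {"score": score}

    def _postprocess(self, result: dict) -> str:
        # Serialize the prediction for the HTTP response.
        return json.dumps(result)

    def handle(self, request_body: str) -> str:
        return self._postprocess(self._inference(self._preprocess(request_body)))
```

Saving this handler and its configuration file alongside the model in the training output location is what allows the import step to package them together.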
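Once deployed as a real-time service, the model is called over HTTPS. A hedged sketch of assembling such a call with the standard library (the endpoint URL is a placeholder taken from the service's details page, and the `X-Auth-Token` header is assumed to carry an IAM token obtained separately):

```python
# Build a prediction request for a deployed real-time service. The
# endpoint and token are placeholders; the X-Auth-Token header name is
# an assumption based on common IAM token authentication.
import json
import urllib.request


def build_prediction_request(endpoint: str, token: str,
                             payload: dict) -> urllib.request.Request:
    """Assemble the POST request; sending it with urllib.request.urlopen
    is omitted here because it needs a live service and a valid token."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=data,
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )
```

For a batch service there is no request to send; the prediction results are written to the configured output location instead, as the table above notes.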
