
Flower Recognition (Using a TensorFlow Built-in Algorithm for Image Classification)

Updated at: Apr 13, 2020 GMT+08:00

ModelArts provides built-in algorithms based on mainstream industry engines for AI beginners who have some AI development experience. You do not need to develop a model yourself; instead, you can use a built-in algorithm to train your existing data and quickly deploy the resulting model as a service. Built-in algorithms can be used in scenarios such as object detection (identifying object classes and locations) and image classification.

This section provides an example of a flower image classification application to help you quickly get familiar with the process of building a model using a ModelArts built-in algorithm. In this example, you use the labeled images in the sample flower dataset, train them with the built-in algorithm ResNet_v1_50 to obtain an available model, and deploy the model as a real-time service. After the deployment is complete, you can use the real-time service to identify which type of flower an input image contains.

Before you start, carefully complete the following preparations. To build a model using a built-in algorithm, perform the steps that follow:


  • You have registered with HUAWEI CLOUD and checked the account status before using ModelArts. The account cannot be in arrears or frozen.
  • You have obtained the AK/SK of the account and configured the AK/SK in Settings of ModelArts.
  • You have created a bucket and folders in OBS for storing the sample dataset and model. In this example, create a bucket named test-modelarts and folders listed in Table 1.
    For details about how to create OBS buckets and folders, see Creating a Bucket and Creating a Folder. Ensure that the OBS directory you use and ModelArts are in the same region.
    Table 1 Folder list

      • test-modelarts/dataset-flowers: Stores the dataset.
      • test-modelarts/model-test: Stores the model and prediction files output by the training job.
      • test-modelarts/train-log: Stores training job logs.

Step 1: Prepare Data

ModelArts provides a sample dataset of flowers named Flowers-Data-Set. This example uses this dataset to build a model. Perform the following operations to upload the dataset to the OBS directory test-modelarts/dataset-flowers created during the preparations.

  • Data labeling has been completed for the Flowers-Data-Set dataset. The .txt files are the labeling files of the corresponding images. Therefore, the data labeling step is skipped in this example.
  • If you want to use your own dataset, skip this step, upload the dataset to the OBS folder, and select this directory in Step 2: Train a Model. If your dataset is not labeled, choose Data Management > Datasets to create a dataset and manually label your dataset before creating a training job.
  1. Download the Flowers-Data-Set dataset to the local PC.
  2. Decompress the file to the Flowers-Data-Set directory on the local PC.
  3. Upload all files in the Flowers-Data-Set folder to the test-modelarts/dataset-flowers directory on OBS in batches. For details about how to upload files, see Uploading a File.
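The batch upload maps each local file to an object key under the target OBS directory. As a rough illustration of that mapping (the actual upload is done through OBS Browser, the console, or the OBS SDK; the function below is a hypothetical helper, not part of any SDK), a sketch might look like this:

```python
import os
import posixpath

def build_obs_keys(local_dir, obs_prefix):
    """Map every file under local_dir to an OBS object key under obs_prefix.

    The keys mirror the dataset layout the training job expects: each image
    (.jpg) sits next to its labeling file (.txt).
    """
    keys = {}
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            local_path = os.path.join(root, name)
            # OBS object keys always use forward slashes.
            rel = os.path.relpath(local_path, local_dir).replace(os.sep, "/")
            keys[local_path] = posixpath.join(obs_prefix, rel)
    return keys
```

For example, a local file Flowers-Data-Set/daisy_1.jpg would map to the object key test-modelarts/dataset-flowers/daisy_1.jpg.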

Step 2: Train a Model

After data preparation is completed, you can create a training job and select the built-in algorithm ResNet_v1_50 to generate an available model.

The ResNet_v1_50 algorithm is based on the TensorFlow engine (TF-1.8.0-python2.7) and is used for image classification. For more information about built-in algorithms, choose Training Jobs > Built-in Algorithms to view details such as the usage, engine type, and precision of each algorithm.

  1. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. The Training Jobs page is displayed.
  2. Click Create. The Create Training Job page is displayed.
  3. On the Create Training Job page, set required parameters.
    1. In the basic information area, retain the default values for Billing Mode and Version. Set Name and Description as prompted.
      Figure 1 Entering the name and description
    2. In the parameter configuration area, set Algorithm Source, Data Source, Training Output Path, Running Parameter, and Job Log Path.
      • Algorithm Source: Click Select and select the ResNet_v1_50 algorithm from the built-in algorithm list.
      • Data Source: The imported dataset has been labeled. Therefore, directly import the dataset from its storage location. Select Data path, click Select on the right of the text box, and select the OBS path to the dataset, for example, /test-modelarts/dataset-flowers/.
      • Training Output Path: Select the OBS path to store the model and prediction files, that is, select the created model-test folder. If no folder is available, click Select and create a folder in the dialog box that is displayed.
      • Running Parameter: After the ResNet_v1_50 algorithm is selected, the max_epoches parameter is displayed with a default value of 100. In this example, you are advised to change the value of max_epoches to 10. One epoch trains on the entire dataset once, so setting max_epoches to 10 performs 10 epochs of training. The training duration increases as the value of max_epoches increases.
      • Job Log Path: Select the OBS path to store job logs, that is, select the created train-log folder. If no folder is available, click Select and create a folder in the dialog box that is displayed.
      Figure 2 Parameter settings
    3. In the resource setting area, select Public resource pools, and set Specifications and Compute Nodes.

      If you select free specifications, read the prompt carefully and select I have read and agree to the above statements.

      Figure 3 Resource settings
    4. Click Next.
  4. On the Confirm tab page, check the parameters of the training job and click Submit.
  5. On the Training Jobs page, view the status of the created training job. It takes a couple of minutes to create and run a training job. When the job status changes to Successful, the training job is successfully created.

    You can click the name of the training job to go to the job details page and learn about the configurations, logs, and resource usage of the training job. You can obtain the generated model file from the OBS path specified by Training Output Path, that is, /test-modelarts/model-test/.

    Figure 4 Training job details
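The effect of max_epoches on training duration comes down to simple arithmetic: the number of optimizer steps is the steps per epoch times the number of epochs. The sketch below illustrates this; the image count and batch size are hypothetical values for illustration only, not parameters taken from the ResNet_v1_50 configuration.

```python
import math

def training_steps(num_images, batch_size, max_epoches):
    """Total optimizer steps: one epoch is one full pass over the dataset."""
    steps_per_epoch = math.ceil(num_images / batch_size)
    return steps_per_epoch * max_epoches

# Hypothetical numbers: a few thousand flower images, batch size 32.
# Lowering max_epoches from the default 100 to 10 (as in this example)
# runs one tenth of the steps, which is why the job finishes much sooner.
print(training_steps(3670, 32, 10))   # 115 steps/epoch * 10 epochs
print(training_steps(3670, 32, 100))  # same steps/epoch, 10x the epochs
```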

Step 3: (Optional) Create a Visualization Job to View the Model Training Process

Currently, visualization jobs provided by ModelArts are TensorBoard jobs by default. TensorBoard is a tool that visualizes the computational graph of TensorFlow or MXNet during running, the trends of metrics over time, and the data used in training. Currently, visualization jobs support only training jobs based on the TensorFlow and MXNet engines.

If the detailed information on the training details page is sufficient for you to determine the model quality and build a model, skip this step and go to Step 4: Import the Model.

  1. On the ModelArts management console, choose Training Management > Training Jobs in the left navigation pane, and then click the Visualization Jobs tab.
  2. On the Visualization Jobs tab page, click Create.
  3. On the Create Visualization Job page, set required parameters and click Next.
    The default visualization job type is Visualization Job and cannot be changed. Configure the Name and Training Output Path parameters of the visualization job. The Training Output Path parameter must be set to the value of Training Output Path of the training job. In the previous steps, Training Output Path is set to /test-modelarts/model-test/. Enable Auto Stop, and select 1 hour later to avoid unnecessary expense.
    Figure 5 Configuring visualization job parameters
  4. On the Confirm tab page, click Submit.
  5. Go to the Visualization Jobs tab page and wait for a while. When the visualization job status changes to Running, the job has been successfully created.

    For a running visualization job, you can click the job name to go to its visualization page. On this page, you can learn about the training process of the model. If the training process and parameters meet the requirements, you can proceed with Step 4: Import the Model.

    Figure 6 Visualization page

Step 4: Import the Model

The trained model is stored in the OBS path. You can import the model to ModelArts for management and deployment.

  1. On the ModelArts management console, choose Model Management > Models in the left navigation pane. The Models page is displayed.
  2. On the Models page, click Import.
  3. On the Import Model page, set required parameters and click Next.
    Set Name and Version. Set Meta Model Source to Training job. The system automatically selects the training job you created; you can also select another available training job from the drop-down list box. Because this example is simple, retain the default values for the other parameters.
    Figure 7 Importing a model
  4. After the model is imported, the Models page is displayed. You can view the imported model and its versions on the Models page.
    Figure 8 Models

Step 5: Deploy the Model as a Service

After the model is imported and its status becomes normal, you can deploy it as a real-time, batch, or edge service. The following describes how to deploy a real-time service.

  1. On the Model Management > Models page, click the triangle next to a model name to expand all versions of the model. In the row of the target model version, click Deploy in the Operation column and select Real-Time Services from the drop-down list box. The Deploy page is displayed.
  2. On the Deploy page, set required parameters and click Next.

    Set the name of the real-time service and enable the Auto Stop function. In Model and Configuration, the system automatically selects the model and version used in Step 4: Import the Model. Select resource specifications from the drop-down list box of Specifications, for example, CPU: 2 vCPUs | 8 GiB. Retain the default values for other parameters.

    You are advised to retain the default settings for Data Collection and Hard Example Filtering, that is, disable the two functions.
    Figure 9 Deploying the model as a real-time service
  3. On the Confirm tab page, check the configurations and click Submit to create a real-time service.
  4. Choose Service Deployment > Real-Time Services to view information about the real-time service. It takes several minutes to deploy the model. When the service status changes to Running, the real-time service is successfully deployed.
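Besides the console's Prediction tab, a deployed real-time service can also be called over HTTPS. As a minimal sketch of assembling such a call (the endpoint URL, token, and content type below are placeholders -- the real API address comes from the service's details page, authentication requires a valid IAM token, and the expected payload format depends on the model's input configuration):

```python
import urllib.request

def build_predict_request(endpoint_url, token, image_bytes):
    """Build (but do not send) a POST request to a real-time service.

    endpoint_url and token are placeholders: copy the real API address from
    the deployed service's details page and obtain a valid token first.
    """
    req = urllib.request.Request(url=endpoint_url, data=image_bytes, method="POST")
    # ModelArts services authenticate requests with a token header.
    req.add_header("X-Auth-Token", token)
    # Assumed content type; adjust to match your model's input definition.
    req.add_header("Content-Type", "application/octet-stream")
    return req
```

Sending the request is then a matter of passing it to urllib.request.urlopen (omitted here, since it requires a live service).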

Step 6: Test the Service

After the real-time service is deployed, access the service to send a prediction request for test.

  1. On the Real-Time Services management page, click the name of the real-time service. The real-time service details page is displayed.
  2. On the real-time service details page, click the Prediction tab.
  3. Click Upload next to Image File to upload an image with flowers and click Predict.

    After the prediction is completed, the prediction result is displayed in the Test Result pane. Based on the confidence scores of the prediction result, the flower in the image is a daisy.

    To obtain a meaningful test result, do not use images from the sample dataset for the test.

    Figure 10 Prediction result
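The service returns the prediction as JSON containing per-class confidence scores. The exact field names depend on the deployed model, so the response shape below is hypothetical; the sketch simply shows how a client might pick the top-scoring class from such a payload:

```python
import json

# Hypothetical response body -- the actual schema returned by your deployed
# service may differ; check the Prediction tab output for the real format.
sample_response = json.dumps({
    "predicted_label": "daisy",
    "scores": [["daisy", 0.95], ["tulips", 0.03], ["roses", 0.02]],
})

def top_prediction(response_body):
    """Return the (label, confidence) pair with the highest score."""
    result = json.loads(response_body)
    return max(result["scores"], key=lambda pair: pair[1])
```

With the sample payload above, top_prediction returns the daisy entry, matching the result shown in the Test Result pane.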

Step 7: Delete Related Resources to Avoid Unnecessary Charging

To avoid unnecessary charging, you are advised to delete related resources, such as the real-time service, visualization job, training job, data, and OBS directories after trial use.

  • To delete a real-time service, go to the Real-Time Services page, and choose More > Delete in the Operation column.
  • To delete a visualization job, choose Training Jobs > Visualization Jobs and click Delete in the Operation column.
  • To delete a training job, go to the Training Jobs page and click Delete in the Operation column.
  • To delete data, access OBS, delete the uploaded data, and delete the folder and OBS bucket.
