
Flower Recognition (Using a Built-in Algorithm for Image Classification)

Updated at: Sep 12, 2019 GMT+08:00

ModelArts provides built-in algorithms based on mainstream AI engines for beginners who have some AI development experience. You do not need to develop a model yourself; you can train your existing data with a built-in algorithm and quickly deploy the resulting model as a service. Built-in algorithms cover scenarios such as image classification and object detection (identifying object classes and locations).

This section provides an example of a flower image classification application to help you quickly get familiar with the process of building a model using a ModelArts built-in algorithm. In this example, you use the labeled built-in flower image dataset, train it with the built-in ResNet_v1_50 algorithm to obtain an available model, and deploy the model as a real-time service. After the deployment is complete, you can use the real-time service to identify whether an input image contains a certain type of flower.

Before you start, complete the preparations described in Preparations. Then, to build a model using a built-in algorithm, perform the following steps:

Preparations

  • You have registered with HUAWEI CLOUD, and your account is not in arrears or frozen.
  • You have obtained the AK/SK of the account and configured it on the Settings page of ModelArts.
  • You have created a bucket and folders in OBS for storing the sample dataset and model. In this example, create a bucket named test-modelarts and folders listed in Table 1.
    For details about how to create an OBS bucket and folder, see Creating a Bucket and Creating a Folder.
    Table 1 Folder list

    Folder             Usage
    -----------------  ------------------------------------------------------
    dataset-flowers    Stores the dataset.
    model-test         Stores the model and prediction files output during training.
    train-log          Stores training job logs.

Step 1: Prepare Data

ModelArts provides a sample dataset of flowers named Flowers-Data-Set. This example uses this dataset to build a model. Perform the following operations to upload the dataset to the test-modelarts/dataset-flowers OBS directory created in Preparations.

NOTE:
  • Data labeling has been completed for the Flowers-Data-Set dataset; the .txt files are the labeling files for the corresponding images. Therefore, no data labeling operation is required in this step.
  • If you want to use your own dataset, skip this step, upload the dataset to the OBS folder, and select this directory in Step 2: Train a Model. If your dataset is not labeled, choose Data Management > Data Labeling to create a labeling job and manually label your dataset before creating a training job.
  1. Download the Flowers-Data-Set dataset to the local PC.
  2. Decompress the Flowers-Data-Set.zip file to the Flowers-Data-Set directory on the local PC.
  3. Upload all files in the Flowers-Data-Set directory to the test-modelarts/dataset-flowers directory on OBS. For details about how to upload files, see Uploading a File.
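Before uploading, you may want to sanity-check the decompressed dataset locally. The sketch below is a hypothetical helper (not part of ModelArts) that pairs each image with its .txt label file, assuming the layout described above, in which every image has a labeling file of the same base name:

```python
import os

def pair_images_with_labels(dataset_dir):
    """Pair each image in dataset_dir with its .txt label file.

    Returns (pairs, unlabeled), where pairs maps an image filename to
    its label filename, and unlabeled lists images missing a label.
    Assumes the Flowers-Data-Set layout: one .txt label per image.
    """
    image_exts = {".jpg", ".jpeg", ".png"}
    files = set(os.listdir(dataset_dir))
    pairs, unlabeled = {}, []
    for name in sorted(files):
        base, ext = os.path.splitext(name)
        if ext.lower() in image_exts:
            label = base + ".txt"
            if label in files:
                pairs[name] = label
            else:
                unlabeled.append(name)
    return pairs, unlabeled
```

If unlabeled is not empty for your own dataset, label those images through Data Management > Data Labeling before creating the training job.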

Step 2: Train a Model

After data preparation is completed, you can create a training job, select the built-in algorithm ResNet_v1_50, and generate an available model.

The ResNet_v1_50 algorithm is based on the TensorFlow engine (TF-1.8.0-python2.7) and is used for image classification. For more information about built-in algorithms, choose Training Jobs > Built-in Algorithms to view details such as the usage, engine type, and precision of each algorithm.

  1. Log in to the ModelArts management console. In the left navigation pane, choose Training Jobs. The Training Jobs page is displayed.
  2. Click Create. The Create Training Job page is displayed.
  3. On the Create Training Job page, set required parameters.
    1. In the basic information area, retain the default values for Billing Mode and Version. Set Name and Description as prompted.
      Figure 1 Entering the name and description
    2. In the parameter configuration area, set Data Source, Algorithm Source, Running Parameter, Training Output Path, and Job Log Path.

      Data Source: The imported dataset has been labeled. Therefore, directly import the dataset from its storage location. Select Data path, click Select on the right of the text box, and select the OBS path to the dataset, for example, /test-modelarts/dataset-flowers/.

      Algorithm Source: Click Select and select the ResNet_v1_50 algorithm from the built-in algorithm list.

      Running Parameter: After the ResNet_v1_50 algorithm is selected, the max_epoches parameter is displayed with a default value of 100. One epoch trains the entire dataset once, so setting max_epoches to 10 performs 10 passes over the dataset. In this example, change the value of max_epoches to 10. The training duration increases as the value of max_epoches grows.

      Training Output Path: Select the OBS path to the model and prediction files, that is, select the created model-test folder. If no folder is available, click Select and create a folder in the dialog box that is displayed.

      Job Log Path: Select the OBS path to store job logs, that is, select the created train-log folder. If no folder is available, click Select and create a folder in the dialog box that is displayed.
      Figure 2 Parameter settings
    3. In the resource setting area, click Select on the right of the Resource Pool text box, select the public resource pool Compute GPU (P100), and set Compute Nodes to 1.
      NOTE:

      To improve the training efficiency, this example uses the GPU for training. However, the cost of the GPU is higher than that of the CPU. You can select an available resource pool based on the actual situation.

      Figure 3 Resource settings
    4. Click Next.
  4. On the Confirm tab page, check the parameters of the training job and click Submit.
  5. On the Training Jobs page, view the status of the created training job. It takes a couple of minutes to create and run a training job. When the job status changes to Successful, the training job is successfully created.

    You can click the name of the training job to go to the job details page and learn about the configurations, logs, resource usage, and evaluation result of the training job. You can obtain the generated model file from the OBS path specified by Training Output Path, that is, /test-modelarts/model-test/.

    Figure 4 Training job details
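The effect of max_epoches can be reasoned about with simple arithmetic: one epoch is one full pass over the dataset, so the number of optimizer steps grows linearly with it. The sketch below illustrates this; the dataset size and batch size used are assumed values for illustration, not documented defaults of the ResNet_v1_50 algorithm:

```python
import math

def training_steps(num_images, batch_size, max_epoches):
    """Total optimizer steps: steps per epoch times number of epochs.

    One epoch = one full pass over the dataset, so steps per epoch is
    the dataset size divided by the batch size, rounded up.
    """
    steps_per_epoch = math.ceil(num_images / batch_size)
    return steps_per_epoch * max_epoches

# Assumed dataset size of 3,000 images and batch size of 32,
# with max_epoches=10 as recommended in this example:
print(training_steps(3000, 32, 10))  # -> 940
```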

Step 3: (Optional) Create a TensorBoard Job to View the Model Training Process

TensorBoard is a tool that visualizes the TensorFlow computational graph at run time, the trend of various metrics over time, and the data used during training. Currently, TensorBoard supports only training jobs based on the TensorFlow and MXNet engines.

If the detailed information on the training details page is sufficient for you to determine the model quality and build a model, skip this step and go to Step 4: Import the Model.

  1. On the ModelArts management console, choose Training Jobs in the left navigation pane, and then click the TensorBoard tab. The TensorBoard management page is displayed.
  2. On the TensorBoard management page, click Create.
  3. On the Create TensorBoard Job page, set required parameters and click Next.
    Set Name and Log Path. Log Path must be set to the value of Training Output Path of the training job. In the preceding steps, Training Output Path is set to /test-modelarts/model-test/.
    Figure 5 Configuring TensorBoard job parameters
  4. On the Confirm tab page, click Submit to create a TensorBoard job.
  5. Go to the TensorBoard management page and wait for a while. When the TensorBoard job status changes to Running, the job is successfully created.

    For a running TensorBoard job, you can click the job name to go to the TensorBoard page. On this page, you can learn about the training process of the model. If the training process and parameters meet the requirements, you can proceed with Step 4: Import the Model.

    Figure 6 TensorBoard page

Step 4: Import the Model

The trained model is stored in the OBS path. You can import the model to ModelArts for management and deployment.

  1. On the ModelArts management console, choose Model Management > Models in the left navigation pane. The Models page is displayed.
  2. On the Models page, click Import.
  3. On the Import Model page, set required parameters and click Next.
    Set Name and Version. Set Meta Model Source to Training job; the system automatically selects the training job you just created, or you can select another available training job from the drop-down list box. Because this example is simple, retain the default values for the other parameters.
    Figure 7 Importing a model
  4. After the model is imported, the Models page is displayed. You can view the imported model and its versions on the Models page.
    Figure 8 Models

Step 5: Deploy a Service

After the model is imported, deploy it as a real-time, batch, or edge service. The following describes how to deploy a real-time service.

  1. On the Models page, click Deploy in the Operation column and select Real-Time Service from the drop-down list box. The Deploy page is displayed.
  2. On the Deploy page, set required parameters and click Next.
    Set the name of the real-time service and enable the Auto Stop function. In Model and Configuration, the system automatically selects the model and version created in Step 4: Import the Model. Select a resource flavor from the drop-down list box of Instance Flavor, for example, 2 vCPUs|8 GiB. Retain the default values for other parameters.
    Figure 9 Deploying a real-time service
  3. On the Confirm tab page, check the configurations and click Submit to create a real-time service.
  4. Choose Service Deployment > Real-Time Service to view information about the real-time service. It takes several minutes to deploy the model. When the service status changes to Running, the real-time service is successfully deployed.
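Waiting for the status to change to Running can also be scripted. The sketch below is deliberately generic: get_status stands in for whatever call you use to query the service status (for example, through the ModelArts SDK or API); the function name and status strings here are assumptions for illustration:

```python
import time

def wait_until_running(get_status, timeout_s=600, poll_s=10):
    """Poll get_status() until it returns 'Running' or the timeout expires.

    get_status is any zero-argument callable returning the current
    service status string; a 'Failed' status aborts early.
    Returns True if the service reached Running, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "Running":
            return True
        if status == "Failed":
            raise RuntimeError("service deployment failed")
        time.sleep(poll_s)
    return False
```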

Step 6: Test the Service

After the real-time service is deployed, access the service to send a prediction request for test.

  1. On the Real-Time Services management page, click the name of the real-time service. The real-time service details page is displayed.
  2. On the real-time service details page, click the Prediction tab.
  3. Click Upload next to Image File to upload an image with flowers and click Predict.

    After the prediction is completed, the prediction result is displayed in the Test Result pane. Based on the confidence score of the prediction result, the flowers in the image are tulips.

    NOTE:

    For a meaningful test, do not use images that are already in the sample dataset.

    Figure 10 Prediction result
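Beyond the console's Prediction tab, a deployed real-time service can also be called over HTTPS. The sketch below only assembles the pieces of such a request without sending it; the endpoint URL, token, and file field name are placeholders — take the real values from the service's details page in the console:

```python
import os

def build_prediction_request(endpoint_url, auth_token, image_path,
                             file_field="images"):
    """Assemble a prediction request for a real-time service (not sent).

    endpoint_url, auth_token, and file_field are placeholders; obtain
    the real values from the console. A real call would send the image
    as multipart/form-data, for example with
    requests.post(req["url"], headers=req["headers"], files=req["files"]).
    """
    with open(image_path, "rb") as f:
        payload = f.read()
    return {
        "url": endpoint_url,
        "headers": {"X-Auth-Token": auth_token},
        "files": {file_field: (os.path.basename(image_path), payload)},
    }
```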

Step 7: Delete Related Resources to Avoid Unnecessary Charging

To avoid unnecessary charging, you are advised to delete related resources, such as the real-time service, TensorBoard job, training job, dataset, and OBS directories after trial use.

  • To delete a real-time service, go to the Real-Time Services page, and choose More > Delete in the Operation column.
  • To delete a TensorBoard job, choose Training Jobs > TensorBoard and click Delete in the Operation column.
  • To delete a training job, go to the Training Jobs page and click Delete in the Operation column.
  • To delete a dataset, access OBS, delete the uploaded dataset, and delete the folder and OBS bucket.
