
Application Scenarios

Updated at: May 30, 2019 GMT+08:00

Auto Scaling


  • During promotions and flash sales, online shopping apps see a dramatic rise in user access and may quickly run short of computing resources. How can cloud computing resources be adapted automatically to changing demand?
  • Live video platforms find it difficult to predict the number of viewers, let alone plan how much CPU and memory to provision in advance. Is there a way to start small and easily scale a live video platform as CPU or memory usage grows?
  • The number of game players surges at 12:00 and from 18:00 to 23:00 every day. It would be ideal to scale game apps automatically at scheduled times.

CCE's Solution

CCE automatically adjusts computing resources to match fluctuating service load, based on custom auto-scaling policies. To scale computing resources at the cluster level, CCE adds or removes cloud servers. To scale computing resources at the workload level, CCE adds or removes containers.
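As a sketch of what workload-level scaling looks like in a Kubernetes-based cluster such as CCE, a standard HorizontalPodAutoscaler can grow or shrink the number of containers with the load. All names and thresholds below are illustrative, not CCE defaults:

```yaml
# Illustrative HorizontalPodAutoscaler: keeps between 2 and 10 replicas
# of a Deployment named "shop-frontend", scaling out when average CPU
# utilization across the pods exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop-frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Cluster-level scaling works analogously one layer down: when pending pods cannot be scheduled, the autoscaler add-on provisions additional cloud servers as nodes.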

Benefits of CCE
  • Flexible

    Allows multiple scaling policies and scales containers within seconds when specified conditions are met.

  • High availability

    Automatically detects the statuses of instances in auto-scaling groups and replaces unhealthy instances with new ones.

  • Low cost

    Charges you only for the cloud servers that you use.

Related Services

autoscaler (an add-on used for auto cluster scaling), AOM (a cloud service used for workload scaling)

Figure 1 How auto scaling works

Traffic Management


Internet technologies keep evolving, and the complexity of large enterprise systems now exceeds what traditional system architectures can handle. As a result, the microservice architecture has been rising in popularity. The idea behind it is to divide a complex application into smaller components called microservices, which are developed, deployed, and scaled independently. The combined use of microservices and containers simplifies microservice delivery while improving application reliability and scalability.

However, as the number of microservices grows, so does the complexity of O&M, commissioning, and security management in a distributed application architecture. Developers can no longer focus on application development: they have to write extra code for microservice governance and are often distracted by the tedious work of building a governance solution and making it work seamlessly with the existing application.

CCE's Solution

Istio service mesh is deeply integrated into CCE. Istio's out-of-the-box traffic management feature allows you to complete grayscale release, observe your traffic, and control the flow of traffic without needing to change code.
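To illustrate, a grayscale release in Istio is typically expressed as weighted routing in a VirtualService. The host, subset names, and weights below are examples only; the subsets would be defined in a matching DestinationRule:

```yaml
# Illustrative Istio VirtualService for a grayscale (canary) release:
# 90% of traffic goes to the stable subset v1, 10% to the new subset v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Shifting more traffic to the new version is then just a matter of editing the weights, with no application code changes.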

Benefits of CCE

  • Out-of-the-box usability

    Istio service mesh can be started in just a few clicks and works seamlessly with CCE. Once started, Istio service mesh can intelligently control the flow of traffic.

  • Intelligent routing

    HTTP/TCP connection policies and security policies can be enforced without requiring you to rewrite code.

  • Visibility into traffic

    Based on the monitoring data that is collected non-intrusively, Istio service mesh works closely with HUAWEI CLOUD APM service to provide a panoramic view of your services, including real-time traffic topology, call tracing, performance monitoring, and runtime diagnosis.

Related Services

Elastic Load Balance (ELB), Application Performance Management (APM), Application Operations Management (AOM)

Figure 2 How traffic management works

Continuous DevOps Delivery


Today's IT industry is growing rapidly and needs to respond quickly as diverse, fast-changing customer needs emerge at scale. Only with fast, continuous integration can IT players keep adding new features and gear their products to customer needs. Traditional enterprises, and even Internet enterprises, may face challenges such as low R&D efficiency, outdated tools, and slow releases when they practice continuous integration (CI). Continuous delivery (CD) is the key that helps them out of this dilemma.

CCE's Solution

CCE works with SWR to provide continuous DevOps features that automatically complete code compilation, image building, dark launching, and containerization based on source code. The continuous DevOps features work seamlessly with traditional CI/CD systems, making it easier to containerize applications.
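The end of such a pipeline is typically a Deployment that pulls the freshly built image from SWR. The image path format and all names below are illustrative; substitute your own region, organization, and image name:

```yaml
# Illustrative Deployment pulling an application image from SWR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          # Example SWR image address: <registry>/<organization>/<image>:<tag>
          image: swr.cn-north-4.myhuaweicloud.com/my-org/demo-app:v1.0.0
          ports:
            - containerPort: 8080
```

A CI system only needs to update the image tag in this manifest (or via the API) for each build to trigger a rolling update.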

Benefits of CCE
  • Efficient process management

    Reduces scripting workload by more than 80% through streamlined process interaction.

  • Flexible integration

    Provides various APIs to integrate with existing CI/CD systems, greatly facilitating customization.

  • High performance

    Schedules tasks flexibly with a fully containerized architecture.

Related Services

Software Repository for Container (SWR), Object Storage Service (OBS), Virtual Private Network (VPN)

Figure 3 How continuous DevOps delivery works

Hybrid Cloud


  • Multi-cloud deployment and disaster recovery

To achieve high service availability, customers prefer to deploy applications on container services from multiple cloud providers. When one cloud is down, the application load is automatically distributed to the other clouds.

  • Traffic distribution and auto scaling

Large enterprise systems need to span cloud facilities in different regions. They also need to be automatically resizable: they can start small and then scale up as the system load grows. This frees enterprises from planning, purchasing, and maintaining more cloud facilities than needed, and turns large fixed costs into much smaller variable costs.

  • Migration to the cloud and database hosting

Finance, security, and other industries in which data confidentiality is a top concern usually want to keep critical systems in local IDCs while moving other systems to the cloud. All systems, whether in local IDCs or in the cloud, are expected to be managed from a unified dashboard.

  • Separation of development environment from deployment environment

To protect intellectual property, customers want to set up the production environment on a public cloud while keeping the development environment in a local IDC.

CCE's Solution

Applications and data can be migrated seamlessly between your on-premises network and the cloud, facilitating resource scheduling and disaster recovery (DR). This is made possible by environment-independent containers, network connectivity between private and public clouds, and the ability to manage containers on CCE and your private cloud together.

Benefits of CCE

  • On-Cloud DR

    Multicloud helps protect systems from outages. When a cloud is faulty, system load is automatically diverted to other clouds to ensure service continuity.

  • Automatic traffic distribution

Access latency is reduced by directing user requests to the regional cloud that is closest to the users. When applications in local IDCs are overloaded, some access requests can be distributed to the cloud, which automatically creates application instances to handle the fluctuating load.

  • Separation of computing from data; capability sharing

    Sensitive service data is separated from general service data. Separation can also be achieved between the development environment and the production environment, as well as between special computing and general services. Through auto scaling and unified cluster management, the hybrid cloud combines the resources and technological advantages of the on-premises system and the cloud.

  • Reduced cost

The resource pool on the public cloud can respond quickly to load spikes, so you no longer need to reserve large amounts of resources in advance, which significantly reduces resource costs.

Related Services

Elastic Cloud Server (ECS), Direct Connect (DC), Virtual Private Network (VPN), Software Repository for Container (SWR)

Figure 4 How hybrid cloud works

High-Performance AI Computing


In industries such as AI, gene sequencing, and video processing, computing tasks are compute-intensive and usually run on GPUs, bare metal servers, and other hardware that delivers high computing power. These industries prefer to run computing services on the public cloud, where abundant computing resources are available, while running general services in a private cloud to avoid the cost of operating computing facilities at scale.

CCE's Solution

Running containers on high-performance GPU-accelerated cloud servers improves AI computing performance threefold to fivefold. GPUs are expensive, and sharing a GPU among containers greatly reduces AI computing costs. In addition to these performance and cost advantages, CCE offers fully managed clusters that hide the complexity of deploying and managing your AI applications so you can focus on high-value development.
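As a minimal sketch, a containerized training job requests GPU capacity through the standard Kubernetes extended resource `nvidia.com/gpu`; the scheduler then places the pod on a GPU-accelerated node. The pod name and image below are examples:

```yaml
# Illustrative Pod requesting one GPU for a training workload.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: tensorflow/tensorflow:latest-gpu
      resources:
        limits:
          nvidia.com/gpu: 1   # number of GPUs this container may use
```

GPU sharing among containers, as described above, allows finer-grained allocation than whole-GPU limits like this one.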

Benefits of CCE
  • Superior performance

    The bare-metal NUMA architecture and high-speed InfiniBand NICs drive a three- to five-fold improvement in AI computing performance.

  • Efficient computing

    GPUs are shared and scheduled among multiple containers, greatly reducing computing costs.

  • Extensive field experience

    AI containers are compatible with all mainstream GPU models and have been used at scale in HUAWEI CLOUD's Enterprise Intelligence (EI) products.

Related Services

GPU-accelerated Cloud Server (GACS), Elastic Load Balance (ELB), Object Storage Service (OBS)

Figure 5 How AI computing works
