
Basic Concepts

Updated at: Jun 25, 2019 GMT+08:00

Region

Regions are geographic areas isolated from each other. Resources are region-specific and cannot be used across regions over internal network connections. For low network latency and quick resource access, select the region nearest to you. Public services, such as Object Storage Service (OBS), Virtual Private Cloud (VPC), Elastic IP (EIP), and Image Management Service (IMS), are shared within a region. For example, if you or your customers are in Beijing, select the cn-north region.

If you are not sure in which region or endpoint the CCE service is available, see Regions and Endpoints.

Availability Zone (AZ)

An AZ consists of one or more physical data centers equipped with independent ventilation, fire, water, and electricity facilities. Compute, network, storage, and other resources in an AZ are logically divided into multiple clusters. AZs within a region are interconnected over high-speed optical fiber, allowing you to build cross-AZ high-availability systems.

Cluster

A cluster is a group of one or more cloud servers (also known as nodes) in the same subnet. It has all the cloud resources (including VPCs and compute resources) required for running containers.

Node

A node is a cloud server (virtual or physical machine) running an instance of the Docker Engine. Containers are deployed, run, and managed on nodes. The node agent (kubelet) runs on each node to manage container instances on the node. The number of nodes in a cluster can be scaled.

Virtual Private Cloud (VPC)

A VPC is a logically isolated virtual network that facilitates secure internal network management and configurations. Resources in the same VPC can communicate with each other, but those in different VPCs cannot communicate with each other by default. VPCs provide the same network functions as physical networks and also advanced network services, such as elastic IP addresses and security groups.

Security Group

A security group is a collection of access control rules for ECSs that have the same security protection requirements and are mutually trusted in a VPC. After a security group is created, you can create different access rules for the security group to protect the ECSs that are added to this security group.

For more information, see Security Group.

Relationship Between Clusters, VPCs, Security Groups, and Nodes

As shown in Figure 1, a region may contain multiple VPCs. A VPC consists of one or more subnets, which communicate with each other through a subnet gateway. A cluster is created in a subnet. There are three scenarios:
  • Different clusters are created in different VPCs.
  • Different clusters are created in the same subnet.
  • Different clusters are created in different subnets.
Figure 1 Relationship between clusters, VPCs, security groups, and nodes

Pod

A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP address, and options that govern how the container(s) should run.

Figure 2 Pod
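For illustration only, the following is a minimal pod manifest in standard Kubernetes YAML (not specific to CCE); the pod name, container name, and image are placeholder values.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod            # placeholder pod name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx              # single application container in the pod
    image: nginx:1.17        # placeholder container image
    ports:
    - containerPort: 80      # port the container listens on
```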

Workload

A workload is an abstract model of a group of pods in Kubernetes. Kubernetes classifies workloads into Deployments, StatefulSets, DaemonSets, jobs, and cron jobs.

  • Deployment: Pods of a Deployment are completely independent of each other and deliver the same functions. Deployments support auto scaling and rolling updates. Nginx and WordPress are typical Deployments; a sample Deployment manifest is sketched below.
  • StatefulSet: Pods in a StatefulSet are not completely independent of each other. StatefulSets have stable persistent storage and stable unique network identifiers. They support ordered, graceful deployment, scaling, and deletion. MySQL-HA and etcd are typical StatefulSets.
  • Job: A job is a one-time task that runs to completion. It can be executed immediately after being created.
  • Cron job: A cron job runs a job periodically on a given schedule. For example, you can create a cron job that performs time synchronization for all active nodes at a fixed time point.
Figure 3 Relationship between workloads and pods
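As a sketch of how a Deployment manages a group of identical pods, the following standard Kubernetes manifest (names, image, and replica count are placeholder values) runs two Nginx pods; scaling the workload only requires changing the replicas field, and changing the image triggers a rolling update.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment     # placeholder workload name
spec:
  replicas: 2                # two independent, identical pods
  selector:
    matchLabels:
      app: nginx             # selects the pods managed by this Deployment
  template:                  # pod template used for every replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17    # changing this image triggers a rolling update
```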

Container

A container is a running instance of a Docker image. Multiple containers can run on one node. Containers are essentially software processes, but unlike traditional software processes, they have their own namespaces and do not run directly on the host.

Orchestration

An orchestration template describes the definitions of a group of container services and the dependencies between them. You can use orchestration templates to deploy and manage multi-container applications and non-containerized applications.

Image

Docker created the industry standard for packaging containerized applications. A Docker image is a template that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. You can use images from Docker Hub, HUAWEI CLOUD Software Repository for Container, or private image registries. For example, a Docker image can contain a complete Ubuntu operating system in which only the required programs and dependencies are installed. Docker images become Docker containers at runtime; that is, Docker containers are created from Docker images. Docker provides an easy way to create and update your own images. You can also download images created by other users.

Containers can be created, started, stopped, deleted, and suspended.

Namespace

A namespace is an abstract collection of resources and objects. It enables resources to be organized into non-overlapping groups. Multiple namespaces can be created inside a cluster and isolated from each other. This enables namespaces to share the same cluster services without affecting each other.

For example, you can split up resources into development and testing environments, and use separate namespaces to contain resources of the two environments.
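As a minimal sketch of this split (standard Kubernetes YAML; the namespace names are illustrative), the following manifests create two isolated namespaces, and workloads can then be created in either one without affecting the other:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development          # namespace for the development environment
---
apiVersion: v1
kind: Namespace
metadata:
  name: testing              # namespace for the testing environment
```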

Affinity and Anti-Affinity

If an application is not containerized, multiple components of the application may run on the same virtual machine and processes communicate with each other. However, in the case of containerization, software processes are packed into different containers and each container has its own lifecycle. For example, the transaction process is packed into a container while the monitoring/logging process and local storage process are packed into other containers. If closely related container processes run on distant nodes, routing between them will be costly and slow.

  • Affinity: Containers are scheduled onto the nearest node. For example, if application A and application B frequently interact with each other, it is necessary to use the affinity feature to keep the two applications as close as possible or even let them run on the same node. In this way, no performance loss will occur due to slow routing.
  • Anti-affinity: Instances of the same application spread across different nodes to achieve higher availability. If a node goes down, instances on other nodes are not affected. For example, if an application has multiple replicas, it is necessary to use the anti-affinity feature to deploy the replicas on different nodes. In this way, no single point of failure will occur. A sample anti-affinity rule is sketched below.
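
The anti-affinity case can be sketched with a standard Kubernetes pod anti-affinity rule; the excerpt below belongs inside a pod (or pod template) spec, and the app label value is a placeholder.

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web                         # replicas of the same application
        topologyKey: kubernetes.io/hostname  # place matching pods on different nodes
```

Affinity rules use the same structure, with podAffinity in place of podAntiAffinity.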

Istio Service Mesh

Istio service mesh is an open platform that connects, secures, controls, and observes microservices.

Istio service mesh is integrated into CCE and provides a non-intrusive approach to microservice governance. It supports complete lifecycle management and traffic management, and is compatible with the Kubernetes and Istio ecosystems. You can start Istio service mesh in just a few clicks. Istio service mesh then intelligently controls the flow of traffic using features such as load balancing, circuit breaking, and rate limiting. Built-in support for canary release, blue-green release, and other forms of grayscale release enables you to automate release management all in one place. Based on monitoring data that is collected non-intrusively, Istio service mesh works closely with the HUAWEI CLOUD Application Performance Management (APM) service to provide a panoramic view of your services, including real-time traffic topology, call tracing, performance monitoring, and runtime diagnosis.
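As a rough sketch of traffic splitting for a canary release (assuming Istio is installed and a DestinationRule already defines the v1 and v2 subsets for a hypothetical reviews service), an Istio VirtualService can route most traffic to the current version and a small share to the new one:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews              # hypothetical service
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1           # current version keeps 90% of traffic
      weight: 90
    - destination:
        host: reviews
        subset: v2           # canary version receives 10% of traffic
      weight: 10
```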
