OpenShift Container Platform 4.2 includes an operator-driven framework that manages the network infrastructure through the Container Network Interface (CNI). The CNI interface provides a choice of CNI plug-ins that you can deploy to enable various types of communication channels. The CNI interface is also used to enable access to SmartNICs and to add-in adapters and devices such as GPUs and FPGAs.
Containers that are deployed within the Kubernetes cluster operate from within a pod, which runs on a server (node).
Applications are run on “worker” (compute) nodes. Each worker node is equipped with resources such as CPU cores, memory, storage, NICs, and add-in host adapters (GPUs, SmartNICs, FPGAs, and so on). Kubernetes provides a mechanism, the Container Network Interface (CNI) API, to enable add-in resources such as NICs, GPUs, and FPGAs. The CNI API uses the Multus CNI plug-in to enable attachment of multiple adapter interfaces to each pod. Custom Resource Definition (CRD) objects handle the configuration of Multus CNI plug-ins.
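The following minimal sketch shows how an additional interface might be attached through Multus, assuming a hypothetical macvlan configuration; the names macvlan-conf and eth1, the address range, and the image are illustrative placeholders, not values from this guide:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf                 # hypothetical secondary network
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [ { "address": "192.168.10.10/24" } ]
      }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-nic-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf   # request the extra interface
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0         # placeholder image

The annotation causes Multus to plumb a second interface into the pod in addition to the default cluster network interface.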
Kubernetes control plane nodes host the cluster-wide control plane infrastructure that includes:
- The Kubernetes API server (kube-apiserver)
- The etcd key-value store, which holds cluster state
- The controller manager (kube-controller-manager)
- The scheduler (kube-scheduler)
A pod, a basic unit of application deployment, consists of one or more containers that are deployed together on the same worker node. A pod shares the worker node network infrastructure with the other network resources that make up the cluster. As service demand expands, more identical pods are often deployed to the same or other worker nodes.
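As a sketch of how identical pods scale out, the following hypothetical Deployment asks the scheduler to run three copies of the same pod across the available worker nodes; every name and the image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                        # three identical pods; raise as demand grows
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/frontend:1.0   # placeholder image
        ports:
        - containerPort: 8080

Scaling out is then a single change, for example oc scale deployment frontend --replicas=5.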
Networking is critical to the operation of a Kubernetes cluster; without networking, your container ecosystem ceases to function. Four basic network communication flows arise within every Kubernetes cluster:
- Container-to-container communication within a pod
- Pod-to-pod communication across the cluster
- Pod-to-service communication
- External-to-service communication from outside the cluster
Containers within a pod share Linux kernel namespaces and cgroups and rely on Linux operating system process isolation methods. Containers in the same pod can communicate over standard IPC methods such as semaphores or shared memory, and they use the localhost network address to communicate within their pod. Containers that communicate with any external pod originate their traffic from the IP address of their own pod.
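A minimal two-container pod illustrates the shared network namespace; because both containers share one network identity, the sidecar reaches the web container at localhost rather than through a pod IP (the images are assumed placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: shared-namespace-demo
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0        # assumed to listen on port 8080
    ports:
    - containerPort: 8080
  - name: sidecar
    image: registry.example.com/tools:1.0
    # The containers share the pod network namespace, so localhost:8080
    # reaches the web container directly.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:8080/; sleep 10; done"]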
Application containers make use of shared storage volumes (generally configured as part of the pod resource) that are mounted into each container of the pod. Pods generally make use of ephemeral storage, so that when the pod expires, its storage is released and any data it held is considered lost. Storage that is assigned to a pod is shared with all the containers that operate within it; in other words, a pod and its containers share the same portion of the host file system. A pod can also be configured to use persistent storage volumes, which are likewise shared by all containers within the pod. Persistent volumes permit application storage to persist across container restarts.
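As a sketch of both storage models, the hypothetical pod below shares an ephemeral emptyDir volume between two containers and also mounts a persistent volume claim; the volume, claim, and image names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: storage-demo
spec:
  containers:
  - name: writer
    image: registry.example.com/app:1.0
    volumeMounts:
    - name: scratch                  # ephemeral: lost when the pod expires
      mountPath: /var/scratch
    - name: data                     # persistent: survives container restarts
      mountPath: /var/data
  - name: reader
    image: registry.example.com/tools:1.0
    volumeMounts:
    - name: scratch                  # same volume, visible to every container
      mountPath: /var/scratch
  volumes:
  - name: scratch
    emptyDir: {}
  - name: data
    persistentVolumeClaim:
      claimName: app-data            # hypothetical pre-provisioned claim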
Network traffic that is associated with nonlocal storage must be able to route across the node network infrastructure.
Services are generally used to abstract access to Kubernetes pods. Every node in a Kubernetes cluster runs kube-proxy, which is responsible for implementing the virtual IP (VIP) for services.
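A minimal ClusterIP service sketch follows; kube-proxy on every node implements the virtual IP for it, load-balancing connections across the pods that match the selector (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend                    # pods carrying this label back the service
  ports:
  - port: 80                         # VIP port implemented by kube-proxy
    targetPort: 8080                 # container port inside the backing pods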
Kubernetes supports two primary modes of finding (or resolving) a service:
- Environment variables that are injected into pods when they are created
- Cluster DNS, which assigns each service a resolvable name
Some parts of your application (for example, front ends) might need to expose a service outside the cluster. If the service uses HTTP/HTTPS or any other TLS-encrypted protocol, use an ingress controller; otherwise, use a load balancer, external IP address, or a node port. A node port exposes the service on a static port on the node IP address; a NodePort-type service resource exposes the service on a specific port on all nodes in the cluster. Ensure that the external IP addresses are routed to the nodes.
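The following hedged sketch exposes the same hypothetical service on a static port of every node's IP address; the port value is an arbitrary choice from the default NodePort range:

apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080                  # static port opened on all cluster nodes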
The OpenShift Container Platform uses an ingress controller to provide external access. The ingress controller generally runs on two worker nodes but can be scaled up as required.
Dell EMC recommends creating a wildcard DNS entry and then setting up an ingress controller. This method enables you to work only within the context of an ingress controller. An ingress controller accepts external requests and then proxies them based on the routes that are provisioned.
A service is exposed by creating a route that targets its ClusterIP. Routes are created in an OpenShift Container Platform project, and sets of routes are admitted into ingress controllers.
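For illustration, a route can be created with oc expose service frontend or declaratively, as in the sketch below; the hostname assumes a hypothetical wildcard DNS entry for *.apps.example.com:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  host: frontend.apps.example.com    # resolved by the wildcard DNS entry
  to:
    kind: Service
    name: frontend                   # ClusterIP service behind the route
  tls:
    termination: edge                # the ingress controller terminates TLS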
Sharing ingress controllers enables you to:
- Balance incoming traffic load across a set of routes
- Isolate traffic for particular routes or projects to a specific ingress controller
Sharing can be performed based on route labels or namespaces, as the sketch below shows.
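As a sketch of sharding by route labels, a hypothetical additional ingress controller could admit only routes that carry a given label; the domain and label values are assumptions:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: internal.apps.example.com  # hypothetical wildcard domain for this shard
  routeSelector:
    matchLabels:
      type: internal                 # admit only routes labeled type=internal

A namespaceSelector can be used in the same way to shard by namespace instead of by route label.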
In addition to the Operator Framework, three main operators are available for network administration:
- The Cluster Network Operator, which deploys and manages the cluster network components
- The DNS Operator, which deploys and manages CoreDNS for name resolution within the cluster
- The Ingress Operator, which deploys and manages ingress controllers for external access to services
OpenShift SDN creates an overlay network based on Open vSwitch (OVS), which enables communication between pods across the OpenShift Container Platform cluster. OpenShift SDN operates in one of the following modes (a configuration sketch follows this list):
- Network policy mode (the default), which allows project administrators to configure their own isolation policies with NetworkPolicy objects
- Multitenant mode, which provides project-level isolation for pods and services
- Subnet mode, which provides a flat pod network with no isolation
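The mode is selected through the Cluster Network Operator configuration; the following minimal sketch shows the shape of that configuration, with network policy mode chosen for illustration:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy            # or Multitenant, or Subnet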
OpenShift Container Platform 4.2 supports additional SDN orchestration and management plug-ins that comply with the CNI specification. See Chapter 6 Use Cases for examples of use cases for CNI plug-ins.
A number of distributed microservices work together to make up an application. OpenShift Service Mesh connects these distributed microservices over the networks within the cluster, and potentially across multiple clusters. The Service Mesh implementation is based on Istio, an open source project.
OpenShift Service Mesh provides a uniform way to connect, manage, and observe microservices-based applications. It is installed through operators from the OperatorHub. Service Mesh uses code from the following open source projects, each delivered through its own operator:
- Istio, for traffic management and policy
- Jaeger, for distributed tracing
- Kiali, for mesh observability and visualization
- Elasticsearch, for storing traces
Service Mesh has key functional components that belong to either the data plane or the control plane:
- Data plane: Envoy proxies, deployed as sidecar containers, that intercept and mediate all network traffic between the microservices in the mesh
- Control plane: Pilot, which configures the proxies at run time; Mixer, which enforces access control and gathers telemetry; and Citadel, which issues and rotates certificates
Service Mesh controls traffic flows between microservices, enforces access policies, and aggregates telemetry data. It provides a policy-driven set of controls over the network pathways that the SDN and CNI plug-ins provide.
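As an illustration of that policy-driven control, a hypothetical Istio VirtualService might shift traffic between two versions of a service; the service name, subsets, and weights are assumptions, and the subsets would be defined in a separate DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                          # in-mesh service name
  http:
  - route:
    - destination:
        host: reviews
        subset: v1                   # defined in a DestinationRule (not shown)
      weight: 90                     # 90 percent of requests stay on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10                     # 10 percent canary traffic to v2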
Users define the granularity of Service Mesh deployment, enabling them to meet their specific deployment and application needs. Service Mesh can be employed at the cluster level or at the project level.
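At the project level, mesh membership is declared through a ServiceMeshMemberRoll in the control plane project; the member project names below are hypothetical:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system            # project hosting the Service Mesh control plane
spec:
  members:
  - bookinfo                         # hypothetical projects joined to the mesh
  - payments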
Monitoring and troubleshooting the OpenShift Container Platform 4.2 cluster are important tasks. The cluster administrator can:
- Monitor cluster health and capacity with the built-in Prometheus-based monitoring stack (a configuration sketch follows this list)
- Visualize cluster metrics through the provided Grafana dashboards
- Define and route alerts through Alertmanager
- Inspect cluster operators, nodes, and pods with the web console and the oc command-line tool
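As one example, the built-in monitoring stack is tuned through the cluster-monitoring-config ConfigMap; the retention value below is an illustrative assumption, not a recommendation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h                 # hypothetical metrics retention period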