Applications run on worker (compute) nodes. Each worker node is equipped with resources such as CPU cores, memory, storage, NICs, and add-in host adapters (GPUs, SmartNICs, FPGAs, and so on). Kubernetes orchestrates network resources through the Container Network Interface (CNI) API.
A pod, the basic unit of application deployment, consists of one or more containers that are deployed together on the same worker node. A pod shares the worker node's network infrastructure with the other network resources that make up the cluster. As service demand grows, additional identical pods are typically deployed to the same or other worker nodes.
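As a minimal sketch of this packaging, the following pod manifest deploys two containers together; the names, images, and ports are illustrative assumptions, not values from this guide. Because the containers share the pod's network namespace, they can reach each other over localhost:

```yaml
# Hypothetical two-container pod. Both containers share the pod's
# network namespace, so they communicate over the localhost address.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # illustrative name
  labels:
    app: example-app           # illustrative label, referenced by later examples
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0     # illustrative image
    ports:
    - containerPort: 8080
  - name: sidecar-cache
    image: registry.example.com/cache:1.0   # reachable from "web" at localhost:6379
    ports:
    - containerPort: 6379
```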
Networking is critical to the operation of an OpenShift Container Platform cluster. Four basic network communication flows arise within every cluster:
Containers within a pod share the pod's network namespace and communicate with one another over the localhost address. Traffic from a container to any pod other than its own originates from the pod's IP address.
Application containers use shared storage volumes (generally configured as part of the pod resource) that are mounted into each pod. Network traffic associated with nonlocal storage must be able to route across the node network infrastructure.
Services abstract access to Kubernetes pods. Every node in a Kubernetes cluster runs kube-proxy, which is responsible for implementing the virtual IP (VIP) for services. Kubernetes supports two primary modes of finding (or resolving) a service:
Some parts of an application (for example, front ends) might have to expose a service outside the application. If the service uses HTTP, HTTPS, or any other TLS-encrypted protocol, use an ingress controller. Otherwise, use a load balancer, an external IP address, or a node port.
A node port exposes a service on a static port of every node IP address. A Service resource of type NodePort exposes the service on the same port on all nodes in the cluster. Ensure that external IP addresses are routed to the nodes.
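The following is a minimal sketch of a NodePort service; the service name, selector label, and port numbers are illustrative assumptions rather than values from this guide:

```yaml
# Illustrative NodePort service: exposes the pods labeled
# "app: example-app" on TCP port 30080 of every node IP address.
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: example-app        # matches the pod label from the earlier sketch
  ports:
  - port: 8080              # cluster-internal service port
    targetPort: 8080        # container port on the selected pods
    nodePort: 30080         # static port on every node (default range 30000-32767)
```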
OpenShift Container Platform uses an ingress controller to provide external access. The ingress controller generally runs on two worker nodes but can be scaled up as required. Dell Technologies recommends creating a wildcard DNS entry and then setting up an ingress controller. This method enables you to work only within the context of an ingress controller. An ingress controller accepts external HTTP, HTTPS, and TLS requests (using SNI) and proxies them based on the routes that are provisioned.
You can expose a service by creating a route, which uses the cluster IP of the service. Routes are created in an OpenShift Container Platform project, and a set of routes is admitted into one or more ingress controllers.
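A route might look like the following sketch; the host name, project, and service name are assumptions for illustration, and the cluster's wildcard DNS entry must cover the chosen host:

```yaml
# Illustrative route: the ingress controller terminates TLS at the edge
# and proxies requests for www.example.com to the backing service.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
  namespace: example-project   # illustrative project
spec:
  host: www.example.com        # must resolve to the ingress controller (wildcard DNS)
  to:
    kind: Service
    name: example-service      # illustrative service name
  port:
    targetPort: 8080           # service port to forward to
  tls:
    termination: edge          # ingress controller terminates TLS
```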
You can perform sharding (horizontal partitioning) on route labels or namespaces. Sharding ingress controllers enables you to:
The following operators are available for network administration:
OpenShift SDN creates an overlay network based on Open vSwitch (OVS). The overlay network enables communication between pods across the cluster. OpenShift SDN operates in one of the following modes:
OpenShift Container Platform 4.3 supports additional SDN orchestration and management plug-ins that comply with the CNI specification. See Use cases for examples of CNI plug-in use cases.
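As a sketch of where the default network and SDN mode are selected, the cluster network operator configuration resembles the following; the CIDR values are common defaults and the mode shown is one possible choice, not a prescription from this guide:

```yaml
# Sketch of the cluster network operator configuration selecting
# OpenShift SDN as the default network and setting its mode.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14        # pod network (illustrative default)
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16              # service VIP network (illustrative default)
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy      # assumed mode for this sketch
```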
Distributed microservices work together to make up an application. Service Mesh provides a uniform method to connect, manage, and observe microservices-based applications. The Red Hat OpenShift Service Mesh implementation is based on Istio, an open-source project. Service Mesh is not part of a default installation; the user must install it by using operators from the OperatorHub.
Service Mesh has key functional components that belong to either the data plane or the control plane:
Users define the granularity of a Service Mesh deployment, enabling them to meet their specific deployment and application needs. Service Mesh can be employed at the cluster level or at the project level, as the sketch below shows.
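Assuming the Service Mesh control plane is installed in a project such as istio-system, project-level scoping is expressed with a member roll resource similar to the following sketch; the member project names are illustrative assumptions:

```yaml
# Illustrative ServiceMeshMemberRoll: scopes the mesh to specific
# projects rather than to the whole cluster.
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default                # the member roll is named "default"
  namespace: istio-system      # the control-plane project (illustrative)
spec:
  members:
  - example-project            # illustrative projects enrolled in the mesh
  - example-project-2
```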
OpenShift Container Platform 4.3 supports multiple networks and comes with a default network. The cluster administrator defines additional networks by using the Multus CNI plug-in, which chains other CNI plug-ins. Additional networks are useful for increasing the networking capacity of pods and for meeting traffic-separation requirements.
The following CNI plug-ins are available for creating additional networks:
When pods are provisioned with additional network interfaces that are based on macvlan or ipvlan, the corresponding leaf-switch ports must match the VLAN configuration of the host. Failure to configure them properly results in loss of traffic.
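For illustration, an additional macvlan network might be defined as in the following sketch; the attachment name, host NIC, and subnet are assumptions that must match the actual host and leaf-switch configuration:

```yaml
# Illustrative macvlan attachment: pods referencing this definition get an
# extra interface bridged onto the host NIC "ens1f0". The leaf-switch port
# for that NIC must carry the matching VLAN, or traffic is lost.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-storage        # illustrative name
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens1f0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.10.0/24"
      }
    }
```

A pod then requests the extra interface with the annotation k8s.v1.cni.cncf.io/networks: macvlan-storage on its metadata.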