Applications run on compute nodes. Each compute node is equipped with resources such as CPU cores, memory, storage, NICs, and add-in host adapters including GPUs, SmartNICs, and FPGAs. Kubernetes provides a mechanism to enable orchestration of network resources through the Container Network Interface (CNI) API.
A pod, which is a basic unit of application deployment, consists of one or more containers that are deployed together on the same compute node. A pod shares the compute node network infrastructure with the other network resources that make up the cluster. As service demand expands, additional identical pods are often deployed to the same or other compute nodes.
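As a concrete illustration of containers deployed together in one pod, the following is a minimal sketch of a pod manifest; the pod name, container names, and image references are illustrative, not part of any specific deployment:

```yaml
# Illustrative pod with two containers that are always scheduled
# together on the same compute node and share the pod's network
# namespace and IP address.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical name
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0      # placeholder image
    ports:
    - containerPort: 8080
  - name: log-sidecar
    image: registry.example.com/logger:1.0   # placeholder image
```

Because both containers share the pod's network namespace, the sidecar can reach the web container at `localhost:8080`.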
Networking is critical to the operation of an OpenShift Container Platform cluster. Four basic network communication flows occur within every cluster:
Containers that communicate within their pod use the local host network address. Containers that communicate with any external pod originate their traffic based on the IP address of the pod.
Application containers use shared storage volumes (configured as part of the pod resource) that are mounted as part of the shared storage for each pod. Network traffic that might be associated with nonlocal storage must be able to route across node network infrastructure.
Services are used to abstract access to Kubernetes pods. Every node in a Kubernetes cluster runs kube-proxy, which is responsible for implementing the virtual IP (VIP) for services. Kubernetes supports two primary modes of finding (or resolving) a service: environment variables and cluster DNS.
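A minimal service definition looks like the following sketch; the service name, namespace, and labels are assumptions for illustration:

```yaml
# Illustrative ClusterIP service. kube-proxy on every node implements
# the VIP and load-balances traffic to the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical name
  namespace: demo      # hypothetical namespace
spec:
  selector:
    app: web           # pods carrying this label back the service
  ports:
  - port: 80           # port exposed on the service VIP
    targetPort: 8080   # port the containers listen on
```

With cluster DNS, other pods resolve this service as `web.demo.svc.cluster.local`.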
Some parts of the application (front ends, for example) might need to expose a service outside the application. If the service uses HTTP, HTTPS, or any other TLS-encrypted protocol, use an ingress controller; for other protocols, use a load balancer, an external IP, or a node port.
A node port exposes the service on a static port on the node IP address. A service of type NodePort exposes the service on the same static port on every node in the cluster. Ensure that external IP addresses are routed to the nodes.
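The node-port behavior described above can be sketched as follows; the service name, selector, and chosen port are illustrative assumptions:

```yaml
# Illustrative NodePort service. The nodePort is opened on every node
# in the cluster; external clients reach the service at <nodeIP>:30080.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # must fall in the cluster's node-port range
                       # (30000-32767 by default)
```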
OpenShift Container Platform uses an ingress controller to provide external access. The ingress controller defaults to running on two compute nodes, but it can be scaled up as required. Dell Technologies recommends creating a wildcard DNS entry and then setting up an ingress controller. This method enables you to work only within the context of an ingress controller. An ingress controller accepts external HTTP, HTTPS, and TLS requests using SNI, and then proxies them based on the routes that are provisioned.
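Scaling the ingress controller beyond the default two replicas is done through the `IngressController` resource managed by the Ingress Operator. A sketch of such a change, assuming the stock `default` controller:

```yaml
# Illustrative patch target: raise the default ingress controller
# from two replicas to three. Applied in the operator's namespace.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3   # number of router pods serving external traffic
```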
You can expose a service by creating a route and using the cluster IP. Cluster IP routes are created in the OpenShift Container Platform project, and a set of routes is admitted into ingress controllers.
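A route that exposes a service through the ingress controller can be sketched as follows; the host name and service name are placeholders, and the host must fall under the wildcard DNS entry described earlier:

```yaml
# Illustrative route. The ingress controller admits the route and
# proxies matching HTTP/HTTPS requests to the backing service.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web                      # hypothetical name
  namespace: demo                # hypothetical project
spec:
  host: web.apps.example.com     # placeholder; resolved by wildcard DNS
  to:
    kind: Service
    name: web
  tls:
    termination: edge            # optional: terminate TLS at the router
```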
You can perform sharding (horizontal partitioning of data) on route labels or namespaces. Sharding enables you to:
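Sharding on route labels is configured on the ingress controller itself. The following sketch assumes a second, hypothetical controller named `internal` that admits only routes carrying a matching label:

```yaml
# Illustrative sharded ingress controller: only routes labeled
# type=internal are admitted by this controller.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal                         # hypothetical name
  namespace: openshift-ingress-operator
spec:
  domain: internal.apps.example.com      # placeholder domain
  routeSelector:
    matchLabels:
      type: internal
```

Sharding by namespace works the same way using `namespaceSelector` in place of `routeSelector`.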
The following operators are available for network administration:
The CNI specification serves to make the networking layer of containerized applications pluggable and extensible across container runtimes. The specification is used in both upstream Kubernetes and OpenShift in the pod network. The pod network is implemented not by Kubernetes itself but by various CNI plug-ins. The most commonly used CNI plug-ins are:
OpenShift SDN creates an overlay network that is based on Open vSwitch (OVS). The overlay network enables communication between pods across the cluster. OVS operates in one of the following modes:
OpenShift Container Platform 4.6 also supports using Open Virtual Network (OVN)-Kubernetes as the CNI network provider. OVN-Kubernetes will become the default CNI network provider in a future release of OpenShift Container Platform. OpenShift Container Platform 4.6 supports additional SDN orchestration and management plug-ins that comply with the CNI specification. See Use cases for examples.
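The CNI network provider is selected at installation time in `install-config.yaml`. The following excerpt is a sketch showing OVN-Kubernetes selected in place of the default, with the documented default network CIDRs:

```yaml
# install-config.yaml excerpt (illustrative): select OVN-Kubernetes
# as the cluster's CNI network provider.
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14    # pod network; default value
    hostPrefix: 23         # /23 per-node pod subnet; default value
  serviceNetwork:
  - 172.30.0.0/16          # service VIP network; default value
```

The network type cannot be changed after installation without redeploying the cluster, so choose the provider before installing.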
Distributed microservices work together to make up an application. Service Mesh provides a uniform method to connect, manage, and observe microservices-based applications. The Red Hat OpenShift implementation of Service Mesh is based on Istio, an open-source project. Use operators from the OperatorHub to install Service Mesh; OpenShift Service Mesh is not installed automatically as part of a default installation.
Service Mesh has key functional components that belong to either the data plane or the control plane:
Service Mesh can be employed at the cluster level or the project level. Users define the granularity of the Service Mesh deployment, enabling them to meet their specific deployment and application needs. For more information, see .
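Project-level granularity is expressed by enrolling projects in the mesh through a member roll. A sketch, assuming a control plane installed in `istio-system` and two hypothetical project names:

```yaml
# Illustrative member roll: only the listed projects participate
# in the Service Mesh; all other projects are unaffected.
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system    # control-plane namespace
spec:
  members:
  - bookinfo                 # hypothetical project
  - frontend                 # hypothetical project
```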
OpenShift Container Platform 4.6 also supports software-defined multiple networks. The platform comes with a default network. The cluster administrator defines additional networks using the Multus CNI plug-in, and then chains the plug-ins. The additional networks are useful for increasing the networking capacity of the pods and meeting traffic separation requirements.
The following CNI plug-ins are available for creating additional networks:
When pods are provisioned with additional network interfaces that are based on macvlan or ipvlan, corresponding leaf-switch ports must match the VLAN configuration of the host. Failure to properly configure the ports results in a loss of traffic.
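An additional macvlan network of the kind described above can be sketched as a `NetworkAttachmentDefinition`; the attachment name, parent interface, and namespace are assumptions for illustration:

```yaml
# Illustrative macvlan attachment for a secondary pod interface.
# The parent interface (master) and IPAM settings must match the
# host and leaf-switch VLAN configuration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net        # hypothetical name
  namespace: demo          # hypothetical namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens1f0",  // placeholder host NIC
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }
```

A pod requests the extra interface by adding the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net` to its metadata; Multus then attaches the secondary interface alongside the default pod network.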