Design Guide—Red Hat OpenShift Container Platform 4.14 on AMD-powered Dell Infrastructure: OpenShift network operations
Applications run on compute nodes. Each compute node is equipped with resources such as CPU cores, memory, storage, NICs, and add-in host adapters including GPUs, SmartNICs, and FPGAs. Kubernetes provides a mechanism to enable orchestration of network resources through the Container Network Interface (CNI) API.
The platform uses the Multus CNI plug-in to enable attachment of multiple network interfaces to each pod. Custom resource (CR) objects configure the Multus CNI plug-ins. For more information, see Multus CNI.
A pod is a basic unit of application deployment. Each pod consists of one or more containers that are deployed together on the same compute node. A pod shares the compute node network infrastructure with the other network resources that make up the cluster. As demand expands, additional identical pods are often deployed to the same or other compute nodes. For more information, see OpenShift Container Platform 4.14 Documentation.
Networking is critical to the operation of an OpenShift Container cluster. Four basic network communication flows occur within every cluster:
Containers that communicate within their pod use the local host network address. Containers that communicate with containers in other pods originate their traffic from the IP address of their pod.
Application containers use shared storage volumes. These volumes are configured as part of the pod resource and are mounted as part of the shared storage for each pod. Network traffic that might be associated with nonlocal storage must be able to route across node network infrastructure. For more information, see Services, Load Balancing, and Networking.
Services are used to abstract access to Kubernetes pods. Every node in a Kubernetes cluster runs kube-proxy, which is responsible for implementing the virtual IP (VIP) for services. Kubernetes supports two primary modes of finding (or resolving) a service: environment variables and DNS.
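As an illustration, a minimal ClusterIP Service that abstracts a set of pods selected by label might look as follows (the service name, label, and ports are hypothetical):

```yaml
# Hypothetical example: a ClusterIP Service that fronts pods labeled app: web.
# kube-proxy on each node programs the service VIP so that traffic to the
# Service is distributed across the matching pod endpoints.
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # resolvable in-cluster by DNS as web-svc.<namespace>.svc.cluster.local
spec:
  selector:
    app: web           # pods carrying this label become endpoints of the service
  ports:
    - protocol: TCP
      port: 80         # port exposed on the service VIP
      targetPort: 8080 # container port on the selected pods
```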
Some parts of the application (front ends, for example) might need to expose a service outside the application. If the service uses HTTP, HTTPS, or any other TLS-encrypted protocol, use an ingress controller. For other protocols, use a load balancer, an external service IP address, or a node port.
A node port exposes the service on a static port on the node IP address. A Service of type NodePort exposes the service on the same static port on every node in the cluster. Ensure that external IP addresses are routed to the nodes. For more information, see Virtual IPs and Service Proxies.
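A NodePort service can be sketched as follows (the names, label, and port values are assumptions, not part of this guide):

```yaml
# Hypothetical example: a NodePort Service that exposes the backend pods on
# static port 30080 on every node. Any routable node IP address can then
# receive external traffic for the service on that port.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080   # must fall within the cluster node port range (default 30000-32767)
```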
OpenShift Container Platform uses an ingress controller to provide external access. The ingress controller defaults to running on two compute nodes, but it can be scaled up as needed. For more information, see Configuring ingress cluster traffic using an Ingress Controller.
An ingress controller accepts external HTTP, HTTPS, and TLS requests using SNI, and then proxies them based on the routes that are provisioned. Dell Technologies recommends creating a wildcard DNS entry and then setting up an ingress controller. This method enables you to work within the context of an ingress controller only.
You can expose a service by creating a route and using the cluster IP address. Cluster IP routes are created in the OpenShift Container Platform project, and a set of routes is admitted into ingress controllers.
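For example, a route can be declared as a Route resource (the service name and hostname below are hypothetical; the host must fall under the wildcard DNS entry that resolves to the ingress controller):

```yaml
# Hypothetical example: a Route that exposes the service web-svc outside the
# cluster through the ingress controller.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web-route
spec:
  host: web.apps.example.com   # resolved by the wildcard DNS entry, e.g. *.apps.example.com
  to:
    kind: Service
    name: web-svc
  port:
    targetPort: 8080
  tls:
    termination: edge          # the ingress controller terminates TLS
```

Equivalently, a route for an existing service can be created with `oc expose service web-svc`.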
You can use “sharding,” which is the horizontal partitioning of routes based on route labels or namespaces, to balance incoming traffic load among ingress controllers and to isolate traffic to a specific ingress controller.
For more information, see Ingress sharding in OpenShift Container Platform 4.14.
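As a sketch, an additional Ingress Controller can be sharded by route labels (the controller name, label key, and domain are assumptions for illustration):

```yaml
# Hypothetical example: an additional IngressController that admits only
# routes carrying the label type: sharded. Routes without this label remain
# with the default ingress controller.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: sharded.apps.example.com   # routes admitted by this controller use this wildcard domain
  routeSelector:
    matchLabels:
      type: sharded
```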
The following operators are available for network administration:
OpenShift SDN creates an overlay network that is based on Open vSwitch (OVS). The overlay network enables communication between pods across the cluster. OpenShift SDN operates in one of the following modes: network policy (the default), multitenant, or subnet.
Open Virtual Network (OVN)-Kubernetes is the CNI plug-in that serves as the network provider for the default cluster network. OpenShift Container Platform 4.14 also supports SDN orchestration and management for additional plug-ins that comply with the CNI specification. For more information, see About the OpenShift SDN network plugin.
OpenShift Container Platform 4.14 also supports software-defined multiple networks. The platform comes with a default network. The cluster administrator uses the Multus CNI plug-in to define additional networks, and then chains the plug-ins. The additional networks serve to increase the networking capacity of the pods and meet traffic separation requirements.
The following CNI plug-ins are available for additional networks: bridge, host-device, ipvlan, and macvlan.
For more information, see Attaching a pod to an additional network.
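As a sketch, an additional macvlan network can be defined with a NetworkAttachmentDefinition custom resource (the network name, host interface, and IPAM addresses below are assumptions):

```yaml
# Hypothetical example: a macvlan additional network defined as a
# NetworkAttachmentDefinition. The host master interface and the static IPAM
# range must match the node and leaf-switch VLAN configuration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens1f0",
      "mode": "bridge",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "192.168.10.10/24" }
        ]
      }
    }
```

A pod requests the additional interface through the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net` in its pod specification.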
When pods are provisioned with additional network interfaces that are based on macvlan or ipvlan, corresponding leaf-switch ports must match the VLAN configuration of the host. Failure to properly configure the ports results in a loss of traffic.
For more information, see Updating node network configuration.