OpenShift network operations
In Kubernetes, containers separate applications from underlying host infrastructure. Each container is assigned resources including CPU, memory, and network interfaces. Kubernetes provides a mechanism to enable the orchestration of network resources through the Container Network Interface (CNI) API.
OpenShift uses the Multus CNI plug-in to enable attaching multiple network interfaces to each pod. Custom Resource Definition (CRD) objects are responsible for configuring the additional Multus-managed interfaces.
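As an illustrative sketch, a secondary interface is defined through a NetworkAttachmentDefinition custom resource; the attachment name, host interface, and CNI type below are hypothetical examples, not values from this solution:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net            # illustrative attachment name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens3f0",        # assumed host NIC carrying the secondary network
    "mode": "bridge",
    "ipam": { "type": "static" }
  }'
```

A pod then requests the attachment with the `k8s.v1.cni.cncf.io/networks: macvlan-net` annotation, and Multus adds the interface alongside the default pod network interface.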
A pod, a basic unit of application deployment, consists of one or more containers that are deployed together on the same compute node. A pod shares the compute node network infrastructure with the other network resources that make up the cluster. As service demand expands, more identical pods are often deployed to the same or other compute nodes.
Networking is critical to the operation of an OpenShift Container Platform cluster. Four basic network communication flows occur within every cluster:
Containers within the same pod share a network namespace and communicate with each other over the localhost address. Traffic that a container sends to any other pod originates from its own pod's IP address.
Application containers use shared storage volumes that are configured as part of the pod resource and mounted into each container in the pod. Network traffic associated with non-local storage must be able to route across the node network infrastructure.
Services are used to abstract access to Kubernetes pods. Every node in a Kubernetes cluster runs kube-proxy, which is responsible for implementing the virtual IP (VIP) for services. Kubernetes supports two primary modes of finding (or resolving) a service: environment variables and DNS.
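For illustration, a minimal ClusterIP Service (the names and ports are hypothetical) gives a set of pods a stable VIP; clients in the cluster can also resolve it through cluster DNS as `my-app.<namespace>.svc.cluster.local`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app               # illustrative service name
spec:
  selector:
    app: my-app              # matches the label on the backing pods
  ports:
  - port: 80                 # port exposed on the service VIP
    targetPort: 8080         # container port on the pods
```

kube-proxy on each node programs the forwarding rules that map the VIP and port to the current set of healthy pod endpoints.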
Parts of an application, such as front ends, might require exposure as a service outside the cluster. If the service uses HTTP, HTTPS, or any other TLS-encrypted protocol, use an ingress controller; otherwise, use a load balancer, an external service IP address, or a node port.
A node port exposes the service on a static port on each node's IP address. A Service resource of type NodePort exposes the service on the same port on every node in the cluster. Ensure that external traffic can be routed to the node IP addresses.
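A minimal sketch of a NodePort Service, with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport      # illustrative service name
spec:
  type: NodePort
  selector:
    app: my-app              # matches the label on the backing pods
  ports:
  - port: 80                 # service VIP port inside the cluster
    targetPort: 8080         # container port on the pods
    nodePort: 30080          # static port opened on every node (default range 30000-32767)
```

External clients reach the service at `<any-node-IP>:30080`, and the node forwards the traffic to a backing pod.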
The OpenShift Container Platform uses an ingress controller to provide external access. The ingress controller defaults to running on two compute nodes, but it can be scaled up as required. Dell Technologies recommends creating a wildcard DNS entry and then setting up an ingress controller, so that all external access can be managed within the context of the ingress controller alone. An ingress controller accepts external HTTP, HTTPS, and TLS requests using SNI, and then proxies them based on the routes that are provisioned.
You can expose a service by creating a route and using the cluster IP. Cluster IP routes are created in the OpenShift Container Platform project, and a set of routes is admitted into ingress controllers.
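As an illustrative example, a route for an existing service can be declared directly; the host and service names below are hypothetical, with the host resolved by the recommended wildcard DNS entry:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                      # illustrative route name
spec:
  host: my-app.apps.example.com     # assumed hostname under the wildcard DNS entry
  to:
    kind: Service
    name: my-app                    # the cluster-IP service being exposed
  tls:
    termination: edge               # terminate TLS at the ingress controller
```

The ingress controller admits the route and proxies matching requests to the service's endpoints.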
You can perform sharding (horizontal partitioning) of routes by route label or by namespace. Sharding enables you to distribute routes across multiple ingress controllers and to dedicate specific ingress controllers to specific sets of routes or tenants.
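A sketch of label-based sharding, using a hypothetical shard name and domain: an additional IngressController is created with a route selector, so that only routes carrying the matching label are admitted to it.

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: finance-shard                    # illustrative shard name
  namespace: openshift-ingress-operator
spec:
  domain: finance.apps.example.com       # assumed domain served by this shard
  routeSelector:
    matchLabels:
      shard: finance                     # only routes labeled shard=finance are admitted
```

Namespace-based sharding works the same way with `namespaceSelector` in place of `routeSelector`.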
Operators such as the Cluster Network Operator, the DNS Operator, and the Ingress Operator are available for network administration.
The Affirmed 5GC with Red Hat OpenShift solution primarily uses the OVN-Kubernetes CNI plug-in. The CNI specification makes the networking layer of containerized applications pluggable and extensible across container runtimes. Both upstream Kubernetes and OpenShift use the CNI specification for the pod network. The pod network is implemented not by Kubernetes itself but by CNI plug-ins, the most commonly used of which are OpenShift SDN and OVN-Kubernetes.
The OpenShift Software Defined Network (SDN) creates an overlay network that is based on Open vSwitch (OVS). The overlay network enables communication between pods across the cluster. The OpenShift SDN operates in one of the following modes: network policy (the default), multitenant, or subnet.
OpenShift Container Platform 4.6 also supports OVN-Kubernetes (Open Virtual Network for Kubernetes) as the CNI network provider. OVN-Kubernetes will become the default CNI network provider in a future release of OpenShift. OpenShift Container Platform 4.6 also supports additional SDN orchestration and management plug-ins that comply with the CNI specification.
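For example, the CNI network provider is selected at installation time in `install-config.yaml`; this sketch selects OVN-Kubernetes, with illustrative (commonly used default) CIDR values:

```yaml
networking:
  networkType: OVNKubernetes    # OpenShiftSDN is the default in OpenShift 4.6
  clusterNetwork:
  - cidr: 10.128.0.0/14         # illustrative pod network CIDR
    hostPrefix: 23              # /23 subnet per node
  serviceNetwork:
  - 172.30.0.0/16               # illustrative service VIP range
```

The network type cannot be changed after installation without migrating the cluster network provider.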
Distributed microservices work together to make up an application. Service Mesh provides a uniform method to connect, manage, and observe microservices-based applications. Service Mesh is not installed automatically as part of a default installation. You must use operators from the OperatorHub to install Service Mesh.
Service Mesh has key functional components that belong to either the data plane or the control plane. The data plane consists of the Envoy sidecar proxies that mediate traffic between services; the control plane configures those proxies and applies traffic-management and security policy.
Users define the granularity of the Service Mesh deployment, enabling them to meet their specific deployment and application needs. Service Mesh can be deployed at the cluster level or project level. For more information, see the OpenShift Service Mesh documentation.
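As a sketch of project-level scoping, OpenShift Service Mesh uses a ServiceMeshMemberRoll resource to enumerate the projects that join a given mesh; the member project names below are hypothetical:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default                  # the member roll must be named "default"
  namespace: istio-system        # assumed control-plane project for this mesh
spec:
  members:
  - bookinfo                     # illustrative member projects
  - my-app
```

Only workloads in listed projects receive sidecar injection and participate in the mesh.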
Single Root Input/Output Virtualization (SR-IOV) enables the creation of multiple virtual functions from one physical function of a PCIe device such as a NIC. You can use this capability to create many virtual functions from a single NIC and attach each virtual function to a pod. Latency is reduced because traffic bypasses the software switching layer and its I/O overhead. You can also use SR-IOV to connect a single pod to multiple networks by attaching multiple virtual functions on different networks. You can configure SR-IOV in OpenShift by using the SR-IOV Network Operator, which creates the virtual functions and provisions the additional networks. Dell Technologies has validated the Intel XXV710 25 GbE and Mellanox ConnectX-4 NICs with OpenShift Container Platform 4.6 and supports using these cards with the platform. For more information, see the Dell Technologies – Red Hat OpenShift Container Platform Reference Architecture for Telecom Deployment Guide at the Dell Technologies Solutions Info Hub for Communication Service Providers.
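A minimal sketch of how the SR-IOV Network Operator is told to create virtual functions: an SriovNetworkNodePolicy selects the target NICs and the number of VFs. The policy name, VF count, and device selector below are illustrative assumptions, not validated values from this solution:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-xxv710            # illustrative policy name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intel_xxv710     # advertised as an allocatable node resource
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8                      # virtual functions to create per matching NIC
  nicSelector:
    vendor: "8086"               # Intel PCI vendor ID
    deviceID: "158b"             # assumed XXV710 PCI device ID
  deviceType: vfio-pci           # userspace driver, e.g. for DPDK workloads
```

An SriovNetwork resource then exposes the resulting VF pool as an attachable network, and pods request a VF through the usual network attachment annotation.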