NUMA (Non-Uniform Memory Access) boundaries are crucial for Telco workloads on a CaaS (Container as a Service) platform because they directly affect workload performance and efficiency. Telco applications, such as virtual network functions (VNFs) and packet-processing applications, require high-speed data processing and low-latency communication between CPU cores and memory. The NUMA architecture determines how close memory is to each CPU core, and therefore how quickly a Telco workload can reach the data it is processing.
In a NUMA system, the CPU cores are grouped into multiple nodes, each with its own local memory. Accessing local memory is faster than accessing remote memory attached to another node, because remote accesses must traverse the inter-node interconnect. Telco workloads rely heavily on fast memory access to process and exchange data efficiently; when a workload spans multiple NUMA nodes, remote memory accesses raise latency and reduce performance.
To optimize the performance of Telco workloads, align each workload's CPU affinity with the underlying NUMA boundaries. CPU pinning, which dedicates specific CPU cores to a particular workload, helps ensure that the workload runs on cores within a single NUMA node. The workload then accesses only local memory, which delivers lower latency and more deterministic performance for Telco applications.
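On Kubernetes, CPU pinning is typically achieved by combining the kubelet CPU Manager's static policy with a pod in the Guaranteed QoS class, that is, integer CPU requests equal to limits. The sketch below shows such a pod spec; the pod, container, and image names are illustrative placeholders.

```yaml
# Sketch: a Guaranteed-QoS pod that is eligible for exclusive (pinned)
# CPU cores when the node's kubelet runs the static CPU Manager policy.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-cnf                      # illustrative name
spec:
  containers:
  - name: packet-processor
    image: registry.example.com/cnf/packet-processor:1.0   # placeholder image
    resources:
      requests:
        cpu: "4"                      # integer CPU count...
        memory: 8Gi
      limits:
        cpu: "4"                      # ...equal to the request => Guaranteed QoS
        memory: 8Gi
```

With a fractional CPU request (for example, 3500m) the pod still runs, but its containers share cores with other pods instead of receiving exclusive ones.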
Challenge: Enforcing CPU pinning and NUMA boundaries for Containerized Network Functions (CNFs) is crucial for performance and resource utilization, but it is difficult to guarantee in dynamic container orchestration environments, where the scheduler is free to place and move pods.
Solution: Kubernetes CPU Manager, Topology Manager, and NUMA-aware scheduling
To address the challenge of CPU pinning and NUMA boundaries for CNFs, Kubernetes provides mechanisms for fine-grained control over CPU and NUMA affinity. The kubelet CPU Manager, when configured with the static policy, grants exclusive CPU cores to qualifying CNF pods, binding them to specific cores for the life of the pod. This pinning minimizes performance variation from context switching and cache interference and maintains consistent performance levels.
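As a sketch of how the static policy is enabled with the upstream kubelet, the following KubeletConfiguration fragment turns on the static CPU Manager policy and reserves two host cores for system daemons. The reserved core list is an illustrative assumption; on OpenShift, equivalent settings are applied through a KubeletConfig custom resource or a performance profile rather than by editing this file directly.

```yaml
# Sketch: KubeletConfiguration fragment enabling exclusive CPU allocation.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static        # default is "none" (no pinning)
cpuManagerReconcilePeriod: 5s   # how often cpuset assignments are reconciled
reservedSystemCPUs: "0,1"       # illustrative: keep cores 0-1 for host processes
```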
Similarly, the kubelet Topology Manager lets operators enforce NUMA boundaries for CNFs. Using the NUMA-affinity hints collected from the CPU Manager and device plugins, it can require that a pod's CPU, memory, and device allocations all come from the same NUMA node, reducing latency and improving memory access.
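A sketch of the corresponding Topology Manager setting follows. With the single-numa-node policy, the kubelet rejects admission of a Guaranteed pod whose CPU, memory, and device allocations cannot all be satisfied from one NUMA node.

```yaml
# Sketch: KubeletConfiguration fragment aligning resource allocations
# to a single NUMA node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static                  # Topology Manager consumes CPU Manager hints
topologyManagerPolicy: single-numa-node   # alternatives: none, best-effort, restricted
```

The weaker policies trade strictness for schedulability: best-effort prefers NUMA alignment but admits the pod anyway, while restricted rejects only pods whose preferred alignment cannot be met.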
In addition, operators such as the OpenShift NUMA Resources Operator deploy a NUMA-aware secondary scheduler that accounts for per-NUMA-node resource availability, ensuring that CNFs are placed on nodes that can satisfy their CPU and memory requests within a single NUMA node.
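Where a NUMA-aware secondary scheduler is deployed, a CNF opts into it through the pod's schedulerName field, as in this sketch. The scheduler name shown is an assumption and varies by deployment; use the name your operator actually installs.

```yaml
# Sketch: route a CNF pod to a NUMA-aware secondary scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: numa-aware-cnf                 # illustrative name
spec:
  schedulerName: topo-aware-scheduler  # assumed name; deployment-specific
  containers:
  - name: cnf
    image: registry.example.com/cnf:1.0   # placeholder image
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "2"
        memory: 4Gi
```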
CPU Manager on OpenShift 4.13: https://docs.openshift.com/container-platform/4.13/scalability_and_performance/using-cpu-manager.html