Performance profile deployment for low latency
You can install the Performance Addon Operator (PAO) by using the OpenShift CLI, oc. After installing the PAO on OpenShift Container Platform 4.6, verify that the Operator pod is running:
[core@r190bcsah cnf-tests]$ oc get pods -n openshift-performance-addon
NAME READY STATUS RESTARTS AGE
performance-operator-76bd65ccc7-dh7q8 1/1 Running 0 25d
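The CLI-based installation that produces the pod above typically consists of applying a Namespace, an OperatorGroup, and a Subscription. The following is a sketch only; the channel, catalog source, and object names are assumptions and must match your cluster's catalogs:

```yaml
# Sketch of the objects applied to install the PAO with oc.
# Channel "4.6" and the redhat-operators catalog are assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-performance-addon
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: performance-addon-operator
  namespace: openshift-performance-addon
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: performance-addon-operator
  namespace: openshift-performance-addon
spec:
  channel: "4.6"
  name: performance-addon-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply the manifests with oc apply -f, then run the oc get pods check shown above.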
A performance profile can be applied either to all compute nodes or to a subset of them. If you plan to apply the performance profile to only a subset of compute nodes, create a MachineConfigPool (MCP) to which the profile can attach.
Note: Performance profiles must only be attached to a MachineConfigPool where all nodes in the pool have the same hardware configuration.
In this guide, the worker-cnf MachineConfigPool is used to attach the performance profile.
Note: Each node in the pool must have the same label.
oc label node <node_name> node-role.kubernetes.io/worker-cnf=
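Assuming the node label above, the worker-cnf pool can be defined with a MachineConfigPool CR along the following lines. This is a sketch; the selectors must match the labels in use on your cluster:

```yaml
# Sketch of a MachineConfigPool that selects nodes labeled
# node-role.kubernetes.io/worker-cnf and inherits worker configs.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-cnf
  labels:
    machineconfiguration.openshift.io/role: worker-cnf
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, worker-cnf]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-cnf: ""
```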
The nodes in the pool reboot to apply the changes.
[core@r190bcsah ~]$ oc get mcp
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-c6071bdd292e07e3386ee3f564d11cff True False False 3 3 3 0 50d
worker rendered-worker-f8da45f41f15c3fec7a7d6dc8bf25e99 True False False 0 0 0 0 50d
worker-1 rendered-worker-1-212bdae31cab1697518866e1d7679a8d True False False 1 1 1 0 21d
worker-cnf rendered-worker-cnf-51306a870fe800fd0654aedd6b731aea True False False 2 2 2 0 36d
This sample file in GitHub provides performance profile CRs for reference.
Note: These CRs are based on the hardware configuration of a Dell EMC PowerEdge XE2420 server. Use them as a reference only.
Two performance profiles are provided: one with simultaneous multithreading (SMT) enabled and one with SMT disabled.
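An SMT-enabled profile might look like the following sketch. The profile name, hugepage counts, and apiVersion are assumptions (the apiVersion varies by release); the cpu sets follow the sample CPU specification discussed later in this section:

```yaml
# Sketch of a PerformanceProfile CR for the worker-cnf pool.
# Name, hugepage count, and apiVersion are assumptions.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: worker-cnf-performanceprofile
spec:
  cpu:
    isolated: "1,3,5-47,49,51,53-95"
    reserved: "0,48,2,50,4,52"
  hugepages:
    defaultHugepagesSize: "1G"
    pages:
      - size: "1G"
        count: 16        # assumption; size per workload requirements
  realTimeKernel:
    enabled: true        # installs the real-time kernel on pool nodes
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
```

Applying a CR of this shape triggers the node reboots described next.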
After a PerformanceProfile CR is created, each node in the MCP reboots. A node might reboot multiple times.
[core@r190bcsah ~]$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
compute-0.oss.labs Ready worker,worker-cnf 21d v1.19.0+3b01205 100.67.190.134 <none> Red Hat Enterprise Linux CoreOS 46.82.202101191342-0 (Ootpa) 4.18.0-193.40.1.rt13.90.el8_2.x86_64 cri-o://1.19.1-4.rhaos4.6.git3846aab.el8
compute-1.oss.labs Ready worker,worker-1 50d v1.19.0+3b01205 100.67.190.135 <none> Red Hat Enterprise Linux CoreOS 46.82.202101191342-0 (Ootpa) 4.18.0-193.40.1.el8_2.x86_64 cri-o://1.19.1-4.rhaos4.6.git3846aab.el8
...<omitted_output>
Note: Nodes that are running the real-time kernel have “rt” embedded in the kernel version, as shown for compute-0.oss.labs in the code extract (4.18.0-193.40.1.rt13.90.el8_2.x86_64). Nodes that are not running the real-time kernel, such as compute-1.oss.labs, do not have “rt” embedded in the kernel version.
For optimal low-latency performance, specify a high-performance CPU configuration in the performance profile. Each CPU in the “reserved” set must be on the same NUMA node. If SMT is enabled, both logical CPU cores on a physical CPU core must be designated as “isolated” or “reserved.”
[core@compute-2 ~]$ lscpu --all --extended
The following code sample shows an abbreviated output:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0 0 0 0 0:0:0:0 yes
1 1 1 1 1:1:1:1 yes
2 0 0 2 2:2:2:0 yes
3 1 1 3 3:3:3:1 yes
4 0 0 4 4:4:4:0 yes
5 1 1 5 5:5:5:1 yes
6 0 0 6 6:6:6:0 yes
…
48 0 0 0 0:0:0:0 yes
49 1 1 1 1:1:1:1 yes
50 0 0 2 2:2:2:0 yes
51 1 1 3 3:3:3:1 yes
52 0 0 4 4:4:4:0 yes
53 1 1 5 5:5:5:1 yes
54 0 0 6 6:6:6:0 yes
The following code shows a sample CPU specification:
isolated: "1,3,5-47,49,51,53-95"
reserved: "0,48,2,50,4,52"
Each of the cores in the “reserved” set is on NUMA node 0, and every logical core pair is designated either as “isolated” or as “reserved.” There is no case where one logical core in a pair is “isolated” while its sibling is “reserved.”
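The pairing rule can be checked mechanically. The following sketch mirrors the sample CPU specification above and assumes the sibling layout shown in the lscpu extract, where logical CPU n and CPU n+48 share a physical core on this hypothetical 96-CPU server:

```python
# Sketch: verify that no SMT sibling pair is split between the
# "isolated" and "reserved" sets of a performance profile.

def parse_cpuset(spec):
    """Expand a cpuset string such as "0,48,2,50" or "5-47" into a set of ints."""
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

isolated = parse_cpuset("1,3,5-47,49,51,53-95")
reserved = parse_cpuset("0,48,2,50,4,52")

# Per the lscpu extract, CPU n and CPU n+48 are siblings on one core.
for cpu in range(48):
    pair = {cpu, cpu + 48}
    # Both siblings must land in the same set.
    assert pair <= isolated or pair <= reserved, f"split pair: {pair}"
print("no sibling pair is split between isolated and reserved")
```

The same check can be adapted to any sibling layout by reading the CORE column of lscpu --all --extended instead of assuming the n/n+48 pairing.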
For some use cases, it might be necessary to use a stricter topology management policy. Ensure that you set either “best-effort” or “restricted” topology management policies in the PerformanceProfile.
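The policy is set in the numa stanza of the profile, as in the following fragment of a PerformanceProfile spec:

```yaml
# PerformanceProfile fragment; "restricted" is the stricter of the
# two policies named above ("best-effort" is the other).
spec:
  numa:
    topologyPolicy: "restricted"
```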
Note: For more information, see the Red Hat Topology Manager documentation.