OpenShift Virtualization Networking
Tue, 10 Oct 2023 09:55:24 -0000
Introduction
Red Hat OpenShift Virtualization enables users to run virtual machines (VMs) alongside containers on the same platform, simplifying management and reducing the complexity of maintaining separate infrastructures and management tools. OpenShift Virtualization unifies the operations and management of VMs and containers on the same platform, helping organizations to benefit from their existing investments in virtualization.
The integration of VMs and containers on the same platform reduces the operational overhead and maximizes the hardware usage. The seamless deployment of OpenShift Virtualization makes configuration quick and easy for administrators. An enhanced web console provides a graphical portal to manage these virtualized resources. The feature enables multiple virtualization tasks, including:
- Creating and managing Linux and Windows VMs
- Connecting to VMs through various consoles and CLI tools
- Importing and cloning existing VMs
- Managing network interface controllers and storage disks that are attached to VMs
- Live-migrating VMs between nodes
OpenShift Virtualization is available as an operator in the OpenShift Operator Hub. The operator can be installed from the CLI or the OpenShift web console. The Operator Lifecycle Manager (OLM) deploys operator pods for OpenShift Virtualization components such as compute, storage, networking, scaling, and templating. OLM also deploys the hyperconverged-cluster-operator pod, which is responsible for the deployment, configuration, and life cycle of other components, and the helper pods hco-webhook and hyperconverged-cluster-cli-download. For more information, see OpenShift Virtualization architecture | Virtualization | OpenShift Container Platform 4.12.
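If you install from the CLI, the operator deployment comes down to a namespace, an OperatorGroup, and a Subscription. The following is a minimal sketch based on the pattern in the OpenShift documentation; the openshift-cnv namespace and the kubevirt-hyperconverged package name are the documented defaults, and the channel depends on your release:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "stable"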
This blog provides an overview of the Dell-validated environment, describes the advantages of a dedicated network for VMs, and explains how to configure that network on the cluster by using the NMState operator and how to enable virtualization on the Red Hat OpenShift Container Platform.
Environment overview
The Dell OpenShift team used Dell PowerEdge R760 servers to host the Red Hat OpenShift 4.12 Container Platform and to validate OpenShift Virtualization with a dedicated network for VMs. For more information about deploying an OpenShift cluster on Dell powered bare metal servers, see the Red Hat OpenShift Container Platform 4.12 on Dell Infrastructure Implementation Guide.
The OpenShift MachineNetwork uses the 192.168.32.0/24 network. A dedicated VLAN with the IP address range 192.168.4.0/24 is created for the VMs. A dedicated physical interface on OpenShift nodes is configured for the VM network using NMState.
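For context, the MachineNetwork above is declared at cluster installation time. A minimal sketch of the relevant networking stanza in install-config.yaml, assuming the values used in this environment, looks like this:
networking:
  machineNetwork:
    - cidr: 192.168.32.0/24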
We installed the OpenShift Virtualization operator and created a hyper-converged custom resource on the cluster.
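The hyperconverged custom resource can be as small as the following sketch; the name kubevirt-hyperconverged and the openshift-cnv namespace are required by the operator, and an empty spec accepts the defaults:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}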
Lastly, we installed the Dell CSI PowerStore driver on the cluster to provision the NFS storage that holds the ISOs for the VMs.
Why a dedicated network for virtual machines?
OpenShift VMs can use a dedicated network with a VLAN that is different from the one used by the OpenShift cluster. A network for VMs is created on a dedicated network interface on OpenShift nodes, with an IP address range that does not overlap with the cluster’s MachineNetwork.
Configuring a dedicated network for VMs allows for isolation between the VM network and the cluster or external network, helping administrators to manage VMs easily. A dedicated network also helps enhance security and increase performance.
Configuring a dedicated network using NMState
The Kubernetes NMState operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift cluster’s nodes. For more information, see About the Kubernetes NMState Operator - Kubernetes NMState | Networking | OpenShift Container Platform 4.12.
OpenShift Virtualization uses NMState to report on and configure the state of the node network, making it possible to modify network policy configuration. For example, you can create a Linux bridge on all nodes by applying a single configuration manifest to the cluster.
You can install the NMState operator from the Operator Hub on the OpenShift web console, and then create an NMState custom resource. A NodeNetworkConfigurationPolicy describes the requested network configuration on nodes. Update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
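The NMState custom resource itself is minimal. A sketch follows, noting that the operator documentation requires the instance to be named nmstate:
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate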
To attach a VM to an additional network, we performed the following steps:
- Create a Linux bridge node network configuration policy.
- Create a Linux bridge network attachment definition to provide Layer-2 networking to pods and VMs.
- Configure the VM, enabling the VM to recognize the network attachment definition.
After installing the NMState operator on the cluster, we applied the following NodeNetworkConfigurationPolicy to create a Linux bridge that attaches to the second Ethernet interface (eno12409):
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eno12409-policy
spec:
  nodeSelector:
    kubernetes.io/hostname: cnv-21
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge with eno12409 as a port
        type: linux-bridge
        state: up
        ipv4:
          address:
            - prefix-length: 24
              ip: 192.168.4.21
          dhcp: false
          enabled: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eno12409
We created a VM by booting a Red Hat Enterprise Linux 8.6 ISO. The network attachment definition must be created in the same namespace as the pod or VM; we added a network interface that references it to the VM and assigned the new VM an IP address from the dedicated network.
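For reference, the following is a sketch of a Linux bridge network attachment definition and of the VM stanza that consumes it, following the format in the OpenShift documentation for connecting a VM to a Linux bridge network; the names br1-network and vm-net are illustrative:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br1-network
  annotations:
    # points the definition at the br1 bridge created by the policy above
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "br1-network",
      "type": "cnv-bridge",
      "bridge": "br1",
      "macspoofchk": true
    }
In the VirtualMachine spec, the interface then references the definition through Multus:
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: vm-net
              bridge: {}
      networks:
        - name: vm-net
          multus:
            networkName: br1-network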
We also performed a live migration on the VM without interrupting the virtual workload or access, and then verified that the VM IP address remained the same.
References
- About OpenShift Virtualization | Virtualization | OpenShift Container Platform 4.12
- About the Kubernetes NMState Operator - Kubernetes NMState | Networking | OpenShift Container Platform 4.12
- Updating node network configuration - Kubernetes NMState | Networking | OpenShift Container Platform 4.12
- Connecting a virtual machine to a Linux bridge network - Virtual machines | Virtualization | OpenShift Container Platform 4.12
Related Blog Posts
OpenShift Virtualization with NVIDIA virtual GPU - Part 2
Tue, 23 Apr 2024 16:22:00 -0000
OpenShift Virtualization with NVIDIA virtual GPU
This blog describes how to set up OpenShift Virtualization on OpenShift Container Platform clusters using nodes that are equipped with different NVIDIA GPU (Graphics Processing Unit) cards. The tables at the end of this blog show component versions and combinations of GPU workloads that the Dell OpenShift team validated across nodes in OpenShift cluster versions 4.14 and 4.12. NVIDIA and CUDA (Compute Unified Device Architecture) drivers are installed on RHEL 8 VMs, and a sample Spark application is created to consume the GPU/vGPU resources.
For a comprehensive overview of NVIDIA vGPU, GPU Operator, and OpenShift Virtualization, as well as the architecture of our validated environment, see OpenShift Virtualization with NVIDIA virtual GPU - Part 1.
Before you start
- Install the Dell CSI PowerStore driver on the cluster to provision NFS volumes. For more information about deploying Dell CSI drivers on OpenShift, see the Red Hat OpenShift Container Platform 4.12 on Dell Infrastructure Implementation Guide.
- Enable SR-IOV on the OpenShift nodes to give the VMs direct hardware access to network resources.
- Install the OpenShift Virtualization operator and create a HyperConverged CR on the cluster.
- Optionally, configure a dedicated network for virtual machines using Kubernetes NMState.
- Install the Node Feature Discovery operator and create a NodeFeatureDiscovery CR.
- Install the NVIDIA GPU operator from Operator Hub.
- Create a MachineConfig resource to enable the Input-Output Memory Management Unit (IOMMU) driver on the nodes before configuring mediated devices (a sketch follows this list).
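A sketch of such a MachineConfig, following the example in the OpenShift virtualization documentation; the intel_iommu=on kernel argument applies to Intel hosts:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 100-worker-iommu
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - intel_iommu=on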
Steps
1. Add the GPU workload configuration label to the node:
oc label node <node-name> --overwrite nvidia.com/gpu.workload.config=vm-vgpu
You can assign the following values to the label: container, vm-passthrough, and vm-vgpu. The GPU operator uses the value of this label when determining which operands to deploy to support the workload type.
2. Annotate the HyperConverged CR with the DisableMDEVConfiguration feature gate so that the GPU operator, rather than OpenShift Virtualization, manages the configuration of mediated devices:
oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[{"op": "add", "path": "/spec/configuration/developerConfiguration/featureGates/-", "value": "DisableMDEVConfiguration" }]'
3. Build the vGPU manager image:
a. Download the vGPU Software from Software Downloads in the NVIDIA Licensing Portal for the platform, platform version, and vGPU product version you want.
The vGPU software bundle is packaged as NVIDIA-GRID-Linux-KVM-<version>.zip.
b. Extract the bundle to obtain the NVIDIA vGPU Manager for Linux (NVIDIA-Linux-x86_64-<version>-vgpu-kvm.run file) in the Host_Drivers folder.
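A sketch of the extraction step, keeping the placeholder version from the bundle name above:
# extract the bundle and locate the vGPU Manager runfile in Host_Drivers
unzip NVIDIA-GRID-Linux-KVM-<version>.zip -d vgpu-software
ls vgpu-software/Host_Drivers/NVIDIA-Linux-x86_64-<version>-vgpu-kvm.run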
4. On your administration node, clone the driver container image repository, and change to the vgpu-manager/rhel8 directory:
git clone https://gitlab.com/nvidia/container-images/driver
cd driver/vgpu-manager/rhel8
5. Export variables for the name of the private registry to which the driver image is pushed, the NVIDIA vGPU manager version, the Red Hat CoreOS version (in the format rhcos4.x, where x is the supported OCP minor version), and the CUDA base image version used to build the driver image. Build the image using Docker or Podman and push it to the private registry:
export PRIVATE_REGISTRY=docker.io/indira0408 VERSION=525.125.06 OS_TAG=rhcos4.14 CUDA_VERSION=12.0
docker build --build-arg DRIVER_VERSION=${VERSION} --build-arg CUDA_VERSION=${CUDA_VERSION} -t ${PRIVATE_REGISTRY}/vgpu-manager:${VERSION}-${OS_TAG} .
podman push docker.io/indira0408/vgpu-manager:525.125.06-rhcos4.14
6. Create an imagePullSecret with user credentials for authenticating to the private registry in the nvidia-gpu-operator namespace. Create a ClusterPolicy CR with the following custom configuration, passing the vGPU manager image that you built in the previous step:
sandboxWorkloads.enabled=true
vgpuManager.enabled=true
vgpuManager.repository=docker.io/indira0408
vgpuManager.image=vgpu-manager
vgpuManager.version=525.125.06
vgpuManager.imagePullSecrets=private-registry-secret
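For reference, the secret can be created with oc, and the settings above map to ClusterPolicy fields as in the following sketch; the secret name private-registry-secret mirrors the settings above, the CR name gpu-cluster-policy is illustrative, and the exact schema should be verified against your GPU operator version:
oc create secret docker-registry private-registry-secret \
  --docker-server=docker.io --docker-username=<username> --docker-password=<password> \
  -n nvidia-gpu-operator

apiVersion: nvidia.com/v1
kind: ClusterPolicy
metadata:
  name: gpu-cluster-policy
spec:
  sandboxWorkloads:
    enabled: true
  vgpuManager:
    enabled: true
    repository: docker.io/indira0408
    image: vgpu-manager
    version: "525.125.06"
    imagePullSecrets:
      - private-registry-secret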
7. After the ClusterPolicy status changes to “Ready,” edit the HyperConverged CR to allow PCI/mediated devices. For examples of HyperConverged CRs for different PCI and mediated devices, see the Dell ISG OpenShift-bare-metal git page.
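As a generic illustration of the pattern those examples follow, the HyperConverged spec lists the mediated device types to create and the host devices that VMs are permitted to consume. The values below are taken from the OpenShift documentation example and must be replaced with the mdev type and vGPU profile that match your GPU; older releases name the first field mediatedDevicesTypes:
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
      - nvidia-231
  permittedHostDevices:
    mediatedDevices:
      - mdevNameSelector: GRID T4-2Q
        resourceName: nvidia.com/GRID_T4-2Q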
8. Create a RHEL 8.6 VM and assign the vGPU device. For instructions on how to create a VM on OpenShift, see Creating virtual machines.
9. Optionally, change the vGPU profile by labeling the node with a vGPU profile name:
oc label node cnv-vgpu1 nvidia.com/vgpu.config=A40-8Q
The GPU operator then re-creates the vGPU manager drivers. Update the PCI devices and mediated devices in the HyperConverged CR to match the new profile.
Installing NVIDIA drivers on an RHEL 8.6 VM
Note: A vGPU-assigned VM must have the vGPU driver installed. The vGPU software's "Guest_Drivers" folder contains the package and runfile installers for drivers. You can install either the data center driver or the vGPU driver on a VM that has been assigned a single physical GPU through GPU Passthrough mode. Get the data center drivers for the operating system, architecture, and version that you want from NVIDIA Unix Drivers.
1. Register the VM to the Red Hat subscription server using subscription-manager:
sudo subscription-manager register --username <username> --password <password>
2. Install make and compilation tools on the VM:
yum install -y make
yum group install "Development Tools" -y
3. Disable the Nouveau kernel:
echo 'blacklist nouveau' | sudo tee -a /etc/modprobe.d/blacklist.conf
4. Reboot the VM to apply the change:
reboot
5. Install Kernel headers:
yum install -y kernel-devel-$(uname -r) kernel-headers-$(uname -r)
The NVIDIA driver requires that the kernel headers and development packages for the running version of the kernel be installed at the time of the driver installation.
6. Install the NVIDIA drivers using the runfile installer. Copy the NVIDIA-Linux-x86_64-525.125.06-grid.run file from the Guest_Drivers folder in the downloaded vGPU software to the VM:
chmod +x NVIDIA-Linux-x86_64-525.125.06-grid.run
sh NVIDIA-Linux-x86_64-525.125.06-grid.run
7. Select the options you require and install the drivers.
8. Run the nvidia-smi command to view the GPU device and the NVIDIA and CUDA driver versions.
Installing a Spark application on VMs to consume vGPU
Prerequisites
NVIDIA and CUDA drivers are installed on the VM.
Steps
1. Install the Open-JDK package on the VM:
yum install java-1.8.0-openjdk -y
2. Choose the required version of Spark tarball from Downloads | Apache Spark.
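For example, the Spark 3.5.0 build used in the following steps can be fetched from the Apache archive; the URL assumes that release is still published there:
wget https://archive.apache.org/dist/spark/spark-3.5.0/spark-3.5.0-bin-hadoop3.tgz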
3. Unpack the tar file into the /opt directory:
tar -xvf spark-3.5.0-bin-hadoop3.tgz
mv spark-3.5.0-bin-hadoop3 /opt/
mv /opt/spark-3.5.0-bin-hadoop3/ /opt/spark
4. Choose the Spark release you want, and then download the NVIDIA RAPIDS Accelerator for Apache Spark plug-in jar file into the /opt/spark/jars directory from Spark Rapids Download:
wget https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.10.0/rapids-4-spark_2.12-23.10.0.jar
5. Export variables for the Spark home and Java home directories in .bash_profile, and then reload .bash_profile:
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$PATH
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.392.b08-4.el8.x86_64/jre/"
export PATH=$JAVA_HOME/bin:$PATH
source .bash_profile
6. Download the GPU discovery script from the Apache Spark GitHub repository, save it locally as /root/getGpusResources.sh, and make it executable:
wget -O /root/getGpusResources.sh https://raw.githubusercontent.com/apache/spark/master/examples/src/main/scripts/getGpusResources.sh
chmod +x /root/getGpusResources.sh
7. Launch the Spark shell with the following configuration settings and run a small compute program to use the vGPU device:
/opt/spark/bin/spark-shell --jars /opt/spark/jars/rapids-4-spark_2.12-23.10.0.jar --conf spark.plugins=com.nvidia.spark.SQLPlugin --conf spark.executor.resource.gpu.discoveryScript=/root/getGpusResources.sh --conf spark.executor.resource.gpu.vendor=nvidia.com --conf spark.rapids.sql.enabled=true --conf spark.executor.resource.gpu.amount=1
scala> val df = sc.makeRDD(1 to 1000000000, 6).toDF
scala> val df2 = sc.makeRDD(1 to 1000000000, 6).toDF
scala> df.select( $"value" as "a").join(df2.select($"value" as "b"), $"a" === $"b").count
8. Run nvidia-smi in the other terminal to monitor vGPU utilization:
watch nvidia-smi
The output shows the Java process and Volatile GPU-utilization percentage.
Validated scenarios and versions
References
- NVIDIA GPU Operator with OpenShift Virtualization
- NVIDIA Virtual GPU (vGPU) Software Documentation
- Configuring virtual GPUs - Virtual machines | Virtualization | OpenShift Container Platform 4.14
- GPU Operator Component Matrix
- NVIDIA Virtual GPU Software Documentation
- NVIDIA® Virtual GPU Software Supported GPUs
OpenShift Virtualization with NVIDIA virtual GPU - Part 1
Thu, 15 Feb 2024 08:55:16 -0000
OpenShift Virtualization with NVIDIA virtual GPU
Red Hat OpenShift Virtualization enables users to run virtual machines (VMs) alongside containers on the same platform, simplifying management and reducing the complexity of maintaining separate infrastructures and management tools. OpenShift Virtualization unifies the operations and management of VMs and containers on the same platform, helping organizations to benefit from their existing virtualization investments. The seamless deployment of OpenShift Virtualization makes configuration quick and easy for administrators. An enhanced web console provides a graphical portal to manage these virtualized resources. For more information, see OpenShift Virtualization.
NVIDIA virtual GPU
NVIDIA virtual GPU (vGPU) products leverage NVIDIA GPU capabilities to accelerate compute-intensive workloads, Artificial Intelligence/Machine Learning (AI/ML), data processing, scientific computing, and professional workstations across on-premises, hybrid, and multicloud environments.
NVIDIA vGPU technology enables multiple VMs to access and share the resources of a single physical GPU through virtualization capabilities. You can install the NVIDIA vGPU software in data centers, cloud platforms, and virtual desktop infrastructure (VDI). The vGPU software stack divides the GPU, enabling efficient GPU resource sharing, improved performance for graphics-intensive applications within virtualized environments, and flexibility in allocating GPU resources to different VMs based on workload demands.
NVIDIA vGPU technology on OpenShift accelerates both containerized and VM-based workloads through the use of GPU devices. vGPU creates a mediated device (mdev) that represents a virtual GPU instance. The performance of the physical GPU is divided among these virtual devices and made available on OpenShift Container Platform. Although you can assign multiple vGPU devices to VMs, you can only allocate a vGPU device to one VM at a time.
Common use cases in OpenShift environments include AI/ML model training and inference, data processing, and complex simulations. Scalable and efficient GPU resource utilization can significantly improve performance.
NVIDIA GPU operator for OpenShift
The NVIDIA GPU operator for OpenShift is a Kubernetes operator that automates the deployment and management of the components of GPU-enabled workloads, including device drivers, container runtimes, and monitoring tools. The operator enables OpenShift Virtualization to attach GPUs or virtual GPUs to workloads running on OpenShift Container Platform. Users can easily provision and manage GPU-enabled VMs that run complex AI/ML workloads, and the operator can work in tandem with vGPU technology to streamline the management of GPU resources.
The GPU operator is responsible for configuring every node in the cluster with the required components to support GPU devices in Red Hat OpenShift. It is flexible enough to support heterogeneous clusters that may contain multiple GPU device types.
The GPU operator uses the Kubernetes operator framework to automate the deployment and management of all the NVIDIA software components on worker nodes, depending on which GPU workload is configured to run on those nodes. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plug-in for GPUs, the NVIDIA Container Toolkit, automatic node labeling using GPU Feature Discovery, DCGM-based monitoring, and more.
Architecture
A CSI-enabled storage provider is configured on the cluster to provision storage for VMs. The PowerStore 5000T standard deployment model provides organizations with all the benefits of a unified storage platform for block, file, and NFS storage, while also enabling flexible growth with the intelligent scale-up and scale-out capability of appliance clusters.
Dell Container Storage Modules (CSMs) enable simple and consistent integration and automation experiences, extending enterprise storage capabilities. Storage modules for Dell PowerStore expose enterprise features of storage arrays to Kubernetes, enabling developers to effortlessly leverage these features in their deployments, making PowerStore an ideal candidate for a VM storage solution. For information about deploying Dell CSI drivers on OpenShift, see Dell CSM.
OpenShift VMs are configured to use a separate network with a VLAN that is different from the MachineNetwork that is configured in the OpenShift cluster. To achieve isolation and security, create a network for VMs on a dedicated network interface on OpenShift nodes using an IP address range that does not overlap with the cluster’s MachineNetwork. The nodes are configured with a second network, and VMs are built on this network using the Kubernetes NMState operator. For more information, see OpenShift Virtualization Networking.
Further, the OpenShift worker nodes are enabled with:
- Single Root Input/Output Virtualization (SR-IOV): SR-IOV allows for more efficient use of network resources and improves the overall performance of network traffic in virtualized environments by enhancing how physical network devices are shared.
- Input/Output Memory Management Unit (IOMMU): IOMMU isolates and protects the memory spaces of devices such as network cards or GPUs and VMs. This isolation ensures that a malfunctioning or malicious device driver cannot access or corrupt the memory that is allocated to other devices or VMs. IOMMU is essential for SR-IOV, which allows a single physical device to appear as multiple virtual devices to different VMs, providing direct and isolated access to the VMs.
An OpenShift worker node can run GPU-accelerated containers, GPU-accelerated VMs with GPU passthrough, or GPU-accelerated VMs with vGPU, but not a combination of these. The prerequisites for running containers and VMs with GPUs vary, with the primary difference being the required drivers. During the GPU operator deployment, OpenShift worker nodes are labeled with the details of the detected GPU devices. The labels are used for scheduling pods to be deployed by the GPU operator. The ClusterPolicy custom resource (CR) that is included with the GPU operator installs the required drivers and components as determined by the node labels. For example, the data center driver is needed for containers, the vfio-pci driver is needed for GPU passthrough, and the NVIDIA vGPU Manager is needed for creating vGPU devices. For more information, see NVIDIA GPU Operator with OpenShift Virtualization.
The architecture diagram shows how the GPU operator is configured to deploy different software components on worker nodes depending on what GPU workload is configured to run on those nodes:
- Containers: The GPU operator installs the NVIDIA GPU device drivers, and a container is assigned a whole GPU. The GPU operator also installs components such as the NVIDIA Container Toolkit, NVIDIA device plug-in, CUDA validator, DCGM exporter, and operator validator.
- VMs with Passthrough GPU: The VFIO driver that is created by the GPU operator gives the VM direct and exclusive access to the GPU resources, dedicating an entire physical GPU to a VM. This configuration is commonly used in scenarios where a VM requires direct access to the full capabilities of a GPU, such as high-performance computing (HPC) workloads, gaming VMs, or applications that demand GPU acceleration.
- VMs with vGPU: The vGPU manager enables virtualization of a physical GPU. A single GPU is sliced into multiple vGPU instances, allowing VMs to share the GPU resources.
For containerized workloads that do not require the capability of an entire GPU, you can configure an OpenShift cluster with Multi-Instance GPU (MIG). By allowing the partitioning of a single physical GPU into multiple smaller instances, NVIDIA MIG technology enables each instance to be allocated to different containers, providing isolation and resource allocation for different tasks. MIG is different from vGPU in that the isolation is implemented by the device firmware. Also, MIG uses hardware boundaries, whereas vGPU is a higher-level, software-only approach.
When the driver installation is complete, OpenShift Virtualization automatically creates vGPUs and PCI Host devices based on GPU device configuration information that is provided in the HyperConverged CR. These devices are then assigned to VMs.
For detailed instructions for installing OpenShift Virtualization on the hardware depicted in Figure 1, as well as component versions and GPU workload combinations that have been validated across nodes, see OpenShift Virtualization with NVIDIA virtual GPU - Part 2.
References
- NVIDIA GPU Operator with OpenShift Virtualization
- NVIDIA Virtual GPU (vGPU) Software Documentation
- Configuring virtual GPUs
- GPU Operator Component Matrix
- NVIDIA Virtual GPU Software Documentation