Enabling GPUs for use inside OpenShift Container Platform 4.6
To install the NFD Operator, log in to the OpenShift cluster through the web console (see Accessing the OpenShift web console).
$ oc new-project gpu-operator-resources
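If you prefer the CLI to the web console, the NFD Operator can also be installed by creating an OperatorGroup and Subscription. The namespace, channel, and catalog source shown below are assumptions for an OpenShift 4.6 cluster; verify them against the values displayed in OperatorHub before applying:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nfd
  namespace: openshift-nfd          # assumed namespace; match your cluster
spec:
  targetNamespaces:
  - openshift-nfd
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: "4.6"                    # assumed channel; check OperatorHub
  name: nfd
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```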
The Install Operator page opens, as shown in the following figure:
After the NFD pods have started running, more compute node labels are added.
Note: Depending on the GPUs that you are using, the node labels that NFD generates might vary. The V100 GPUs in this example generate the pci-10de.present=true label (10de is the NVIDIA PCI vendor ID).
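To confirm which labels NFD has applied, you can inspect a node directly. The node name below is a placeholder:

```shell
$ oc describe node <node_name> | grep feature.node.kubernetes.io
```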
Installing the NVIDIA GPU Operator
To install the NVIDIA GPU Operator:
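As with NFD, the NVIDIA GPU Operator can alternatively be subscribed to from the CLI. The package name, channel, and catalog source below are assumptions based on the certified operator catalog; confirm them against the OperatorHub entry for your cluster version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: gpu-operator-resources   # the project created earlier
spec:
  channel: stable                     # assumed channel; check OperatorHub
  name: gpu-operator-certified
  source: certified-operators
  sourceNamespace: openshift-marketplace
```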
A new nvidia.com/gpu resource is displayed in the node spec for nodes with GPUs.
$ oc get node <gpu_node> -o yaml | grep -i nvidia.com/gpu
The following output is displayed:
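The exact values depend on the node's hardware. On a node with two GPUs installed, the capacity and allocatable sections of the node spec typically each contain a matching line (counts illustrative):

```
    nvidia.com/gpu: "2"
    nvidia.com/gpu: "2"
```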
Providing GPU resources to a pod
Sample Pod Spec
As shown in the following sample Pod Spec, you can provide GPUs to pods by specifying the GPU resource nvidia.com/gpu and requesting the number of GPUs that you want. This number must not exceed the number of GPUs present on a specific node.
apiVersion: v1
kind: Pod
metadata:
  name: tensorflow-benchmarks-gpu
spec:
  nodeSelector:
    nvidia.com/gpu.product: Tesla-V100-PCIE-32GB
  containers:
  - image: nvcr.io/nvidia/tensorflow:19.09-py3
    name: cudnn
    command: ["/bin/sh","-c"]
    args: ["git clone https://github.com/tensorflow/benchmarks.git;cd benchmarks/scripts/tf_cnn_benchmarks;python3 tf_cnn_benchmarks.py --num_gpus=2 --data_format=NHWC --batch_size=32 --model=resnet50 --variable_update=parameter_server"]
    resources:
      limits:
        nvidia.com/gpu: 2
      requests:
        nvidia.com/gpu: 2
  restartPolicy: Never
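After saving the spec to a file (the filename here is illustrative), create the pod and follow the benchmark output:

```shell
$ oc apply -f tensorflow-benchmarks-gpu.yaml
$ oc logs -f tensorflow-benchmarks-gpu
```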
The NVIDIA GPU Operator also deploys a gpu-feature-discovery pod on each compute node. This pod labels its node with information about the GPU type, family, count, and so on. These node labels can be used in the Pod Spec to schedule workloads based on criteria such as the GPU product name, as shown under nodeSelector in the preceding sample.
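To see the labels that gpu-feature-discovery has applied, and therefore the values available for use in a nodeSelector, filter a node's labels; the node name is a placeholder:

```shell
$ oc describe node <gpu_node> | grep nvidia.com
```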
The following documentation provides additional information about the operation and usage of GPUs in the OpenShift environment: