Running Gaudi® Jobs using Kubernetes
Create a Kubernetes Job that requests a Gaudi® device through the container's resources.limits field. The following example uses Intel® Gaudi®'s PyTorch container image:
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: habanalabs-gaudi-demo
spec:
  template:
    spec:
      hostIPC: true
      restartPolicy: OnFailure
      containers:
        - name: habana-ai-base-container
          image: vault.habana.ai/gaudi-docker/1.17.1/ubuntu22.04/habanalabs/pytorch-installer-2.3.1:latest
          workingDir: /root
          command: ["hl-smi"]
          securityContext:
            capabilities:
              add: ["SYS_NICE"]
          resources:
            limits:
              habana.ai/gaudi: 1
              memory: 409Gi
              hugepages-2Mi: 95000Mi
EOF
kubectl get pods
# Retrieve the name of the pod, then view its output:
kubectl logs <pod-name>
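The resources.limits block is also where multi-card jobs are sized. As a minimal sketch, assuming a node populated with eight Gaudi® cards (as in a fully configured XE9680) and leaving the rest of the Job spec unchanged, the limit could be raised like this; the memory and hugepage figures shown are placeholders that would need to be scaled to the node's actual capacity:

```yaml
          resources:
            limits:
              habana.ai/gaudi: 8       # request all eight cards on the node (assumption)
              memory: 409Gi            # adjust to the memory available on your node
              hugepages-2Mi: 95000Mi   # adjust hugepage pre-allocation to match the workload
```

The scheduler will only place the pod on a node whose device plugin advertises at least that many habana.ai/gaudi resources, so requesting more cards than any single node provides leaves the pod Pending.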