Our results validate the use of GPUs and GPU partitions for both training and inference. We also demonstrate that AI frameworks, such as the cnvrg.io MLOps platform, can be deployed on VMware vSphere with Tanzu and NVIDIA AI Enterprise. Finally, we show that real-life applications, such as the AI radiologist use case, can be deployed on these clusters, and that the corresponding AI models can be trained and used for inference there.