Dell Technologies has worked closely with cnvrg.io to deliver MLOps for AI/ML adopters through a jointly engineered and tested solution. The solution helps organizations capitalize on the benefits of MLOps for ML and AI workloads, as shown in the following figure:
The Optimize Machine Learning Through MLOps with Dell APEX and cnvrg.io Design Guide provides guidance for architecting, deploying, and operating MLOps in the data center.
The design guide validates the cnvrg.io MLOps platform with the Dell Validated Design for AI described in Virtualizing GPUs for AI with VMware and NVIDIA. The solution uses VMware vSphere with Tanzu for the Kubernetes layer and NVIDIA AI Enterprise software for additional applications, frameworks, and tools that researchers, data scientists, and developers can use to build ML models and analyze data. It is powered by Dell PowerEdge servers for compute (with optional NVIDIA GPUs) and coupled with Dell PowerScale storage, which provides the analytics performance and concurrency at scale that are critical to consistently feeding the most data-hungry ML and AI algorithms.
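On a stack like this, data scientists typically consume GPUs through standard Kubernetes resource requests. The following manifest is a minimal sketch, not taken from the design guide: it assumes the NVIDIA device plugin (included with NVIDIA AI Enterprise) is installed on the Tanzu Kubernetes cluster, and the pod name, container image, and tag are hypothetical examples.

```yaml
# Hypothetical example: a training pod requesting one NVIDIA GPU
# on a vSphere with Tanzu Kubernetes cluster. Assumes the NVIDIA
# device plugin exposes the nvidia.com/gpu resource to the scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:23.10-py3   # example NGC container image
    command: ["python", "train.py"]           # hypothetical entry point
    resources:
      limits:
        nvidia.com/gpu: 1         # schedule onto a GPU-equipped worker node
```

In this design, cnvrg.io submits workloads such as this to the Tanzu Kubernetes layer on the user's behalf, so GPU scheduling is handled by Kubernetes rather than by individual researchers.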
Note: Other Dell Validated Designs for container platforms can be used with similar results, but Dell Technologies has not validated them as of the publication of this white paper.