Dell Technologies, VMware, and NVIDIA® have a long history of delivering integrated solutions and jointly supporting enterprise customers. This seamless consumption and operational experience allows customers to focus on growing their business and meeting their customers' needs.
The three companies are jointly developing solutions to improve the state-of-the-art practices for implementing AI in the enterprise. Flexibility continues to be a key tenet of, and a desirable trait in, machine learning platforms that are used to train and host models for AI. This design guide provides background information and recommendations for using a wide variety of NVIDIA graphics processing unit (GPU) acceleration products and the many server form factors from Dell Technologies to match almost any AI application requirement.
The millions of existing VMware professionals can quickly apply their skills to the deployment and operation of this machine learning platform. One lesson learned over the last decade is that machine learning platforms with custom architectures, whether on-premises or with a cloud service provider, introduce integration challenges and higher costs. The ability of an organization to maintain and evolve its existing operational centers of excellence improves total cost of ownership and overall business continuity.
In this design guide, we present design guidelines for an enterprise AI infrastructure based on our engineering proof-of-concept lab. Our lab setup includes servers, storage, and networking from Dell Technologies; virtualization software from VMware; and acceleration, networking, and software from NVIDIA. This design guide describes the recommended configurations, network topologies, deployment guidelines, and observed performance.