Dell Technologies offers a range of acceleration-optimized servers with an extensive portfolio of NVIDIA GPUs. The Dell PowerEdge XE9680 server is featured in this design for generative AI training.
The PowerEdge adaptive compute approach produces servers engineered around the latest technology advances to deliver predictable, profitable outcomes.
The primary hardware components used in this solution are described below.
The PowerEdge XE9680 is a high-performance server built for demanding AI, machine learning, and deep learning workloads, enabling you to rapidly develop, train, and deploy large machine learning models.
The PowerEdge XE9680 server is the industry's first server to ship with eight NVIDIA H100 GPUs and NVIDIA AI software. It is designed to maximize AI throughput, providing enterprises with a highly refined, systemized, and scalable platform to help them achieve breakthroughs in NLP, recommender systems, data analytics, and more. Its air-cooled 6U chassis supports the highest-wattage next-generation technologies at ambient temperatures up to 35°C, and it provides high-speed networking with NVIDIA ConnectX-7 smart network interface cards (SmartNICs).
The NVIDIA H100 Tensor Core GPU delivers exceptional performance, scalability, and security for every workload. With the fourth-generation NVIDIA NVLink Switch System, the H100 accelerates AI workloads with a dedicated Transformer Engine for trillion-parameter language models. The H100 uses breakthrough innovations in the NVIDIA Hopper architecture to deliver industry-leading conversational AI, speeding up large language models by 30 times over the previous generation.
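To make the scale of trillion-parameter training concrete, the sketch below estimates how many eight-GPU XE9680 servers are needed just to hold model weights, gradients, and optimizer state in GPU memory. It is a back-of-envelope illustration only: the 16-bytes-per-parameter figure is a common rule of thumb for mixed-precision training with the Adam optimizer, it ignores activation memory and parallelism overheads, and the 80 GB capacity refers to the H100 SXM GPU in this design.

```python
import math

# Rough rule of thumb for mixed-precision Adam training (illustrative,
# not a measured figure): 2 bytes FP16 weights + 2 bytes FP16 gradients
# + 12 bytes FP32 optimizer state (master weights, momentum, variance).
BYTES_PER_PARAM = 2 + 2 + 12  # 16 bytes per parameter, excluding activations

H100_MEMORY_GB = 80    # HBM capacity of one NVIDIA H100 SXM GPU
GPUS_PER_SERVER = 8    # eight H100 GPUs per PowerEdge XE9680

def servers_needed(n_params: float) -> int:
    """Minimum XE9680 servers to hold weights + gradients + optimizer state."""
    total_gb = n_params * BYTES_PER_PARAM / 1e9
    per_server_gb = H100_MEMORY_GB * GPUS_PER_SERVER  # 640 GB per server
    return math.ceil(total_gb / per_server_gb)

print(servers_needed(70e9))   # 70B-parameter model -> 2 servers
print(servers_needed(1e12))   # 1T-parameter model  -> 25 servers
```

Even under these optimistic assumptions, a trillion-parameter model spans dozens of servers, which is why the high-speed NVLink and ConnectX-7 fabrics described above matter for multi-node training.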