Maximizing Llama Open Source Model Inference Performance with Tensor Parallelism on a Dell XE9680 with H100s

We value your feedback
Dell Technologies and the authors of this document welcome your feedback. Contact the Dell Technologies team by email.
Authors: Vaishali Gupta, Brandt Springman
Contributors: Vaishali Gupta, Brandt Springman, Bhavesh Patel
Note: For links to other documentation for this topic, see the Gen AI | Dell Technologies Info Hub.