Performance benchmarking overview
Benchmarking an infrastructure before using it for LLM tasks such as training and inferencing is important for several reasons. First, it establishes whether the infrastructure can handle the computational demands of these tasks; LLM workloads require significant computational resources, and benchmarking provides insight into whether the current infrastructure can meet those demands. Second, it enables resource allocation to be optimized so that the infrastructure is used efficiently and cost-effectively. Third, benchmarking can expose bottlenecks in the infrastructure that would otherwise hinder the performance of LLM tasks.
By identifying these issues in advance, you can take steps to mitigate them, ensuring smooth and efficient operation. Finally, benchmarking provides a baseline against which future upgrades or changes to the infrastructure can be measured, aiding continuous improvement efforts. Benchmarking is therefore a vital step in preparing an infrastructure for LLM tasks.
The following sections present some of the end-to-end performance benchmarking performed as part of validating this design, covering the inferencing and fine-tuning methodologies, as well as microbenchmarks of some of the key components.
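To make the microbenchmarking idea concrete, the following is a minimal, hypothetical sketch of a timing harness: it warms up a workload, times repeated runs, and reports the mean and standard deviation. The `workload` function here is a placeholder for a real kernel launch or inference call, and the helper name `microbenchmark` is illustrative, not part of any specific tool used in this design.

```python
import time
import statistics

def microbenchmark(fn, *, warmup=3, iters=10):
    """Time fn() over several iterations after warmup runs.

    Warmup runs are discarded so that cold caches and lazy
    initialization do not skew the measured samples.
    Returns (mean_seconds, stdev_seconds).
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Dummy CPU-bound workload standing in for a real GPU kernel
# or model-inference call in an actual benchmark.
def workload():
    sum(i * i for i in range(100_000))

mean_s, stdev_s = microbenchmark(workload)
print(f"mean {mean_s * 1e3:.2f} ms, stdev {stdev_s * 1e3:.2f} ms")
```

Reporting variance alongside the mean matters in practice: a high standard deviation often signals contention or thermal throttling, which is exactly the kind of bottleneck benchmarking is meant to surface before production use.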