Benchmarking an infrastructure before using it for LLM tasks such as training and inference is crucial for several reasons. First, it establishes whether the infrastructure can meet the substantial computational demands of these workloads. Second, it enables efficient and cost-effective resource allocation. Third, it exposes bottlenecks that could hinder LLM performance, so you can mitigate them before they affect production workloads. Finally, it provides a baseline against which future upgrades or configuration changes can be measured, supporting continuous improvement. Benchmarking is therefore a vital step in preparing an infrastructure for LLM tasks.
Numerous microbenchmark tools are available from the ROCm, AMD, and other repositories on GitHub. We highlight a few that demonstrate the performance of MI300X accelerators and the performance uplift from MI210 to MI300X accelerators.
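To illustrate the measurement pattern these microbenchmark tools follow, here is a minimal, hypothetical sketch of a GEMM throughput microbenchmark. It uses NumPy on the CPU so it runs anywhere; the function name and parameters are illustrative, not part of any AMD tool. On an accelerator, the same structure applies, with the operands placed on the device and device synchronization performed before reading the timer.

```python
import time
import numpy as np

def measure_matmul_tflops(n=1024, iters=10):
    """Time an n x n float32 GEMM and return achieved TFLOP/s."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - start
    # A dense n x n GEMM performs roughly 2 * n^3 floating-point operations.
    flops = 2 * n**3 * iters
    return flops / elapsed / 1e12

if __name__ == "__main__":
    print(f"Achieved throughput: {measure_matmul_tflops():.3f} TFLOP/s")
```

Running the timed kernel many times and discarding the warm-up iteration is the key pattern: it amortizes launch overhead and avoids counting one-time initialization costs, which is how the microbenchmarks below report sustained rather than peak-burst throughput.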