We have outlined a high-performance system for training and serving code-focused large language models, built from scalable components that can be expanded when additional capacity is needed.
The testing performed shows that training a code LLM on additional datasets can improve the relevance and accuracy of its output when used during development activities.
This testing only scratched the surface of how an LLM can be trained, tuned, and optimized for the specific tasks of a specific company. Different models and datasets can be used to further refine the system's output and achieve even better results.
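As a minimal sketch of what preparing such a company-specific fine-tuning dataset might look like, the snippet below converts in-house (description, code) pairs into prompt/completion records of the kind commonly used for supervised fine-tuning. The record schema, helper name, and sample snippets are illustrative assumptions, not part of the paper's tested configuration:

```python
import json


def build_finetune_records(snippets):
    """Convert (description, code) pairs into prompt/completion
    records suitable for supervised fine-tuning of a code LLM."""
    records = []
    for description, code in snippets:
        records.append({
            "prompt": f"# Task: {description}\n",
            "completion": code.strip() + "\n",
        })
    return records


if __name__ == "__main__":
    # Hypothetical in-house snippets standing in for a company dataset.
    snippets = [
        ("add two numbers", "def add(a, b):\n    return a + b"),
        ("greet a user", "def greet(name):\n    return f'Hello, {name}!'"),
    ]
    # Emit one JSON record per line (JSONL), a common training-data format.
    for record in build_finetune_records(snippets):
        print(json.dumps(record))
```

A pipeline like this would typically feed a fine-tuning job such as the one described earlier in the paper, with the JSONL output stored where the training workload can read it.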