Large language models (LLMs) are sophisticated AI models designed to understand and generate human-like text, enabling a wide range of natural language processing applications. One limitation of LLMs is that, once trained, they have no knowledge of events or information that appeared after their training cutoff date. Retrieval Augmented Generation (RAG) extends the functionality of an LLM by retrieving facts from an external knowledge base hosted in a vector database such as Redis or PGVector and supplying them to the model at query time.
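The RAG pattern described above can be sketched in a few lines. The snippet below is a minimal illustration only: the toy `embed` function and in-memory `VectorStore` are hypothetical stand-ins for a real embedding model and a production vector database such as Redis or PGVector, and the assembled prompt stands in for the call to the LLM.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for a real embedding model: hash character trigrams
    # into a fixed-size unit vector. A production RAG system would call
    # an embedding model here instead.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """Minimal in-memory stand-in for a vector database (Redis/PGVector)."""
    def __init__(self) -> None:
        self.docs: list[tuple[list[float], str]] = []

    def add(self, text: str) -> None:
        self.docs.append((embed(text), text))

    def search(self, query: str, k: int = 1) -> list[str]:
        # Rank stored passages by similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(d[0], q), reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.add("The platform release 2024.2 adds GPU node support.")
store.add("Employee travel policy requires manager approval.")

question = "Which release adds GPU node support?"
context = store.search(question, k=1)[0]

# Retrieved facts are prepended to the prompt before it reaches the LLM,
# grounding the answer in up-to-date, organization-specific content.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The key design point is that freshness comes from the knowledge base, not the model: updating the vector store immediately changes what the assistant can answer, with no retraining.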
In this solution, we deployed an LLM-based digital assistant that answers user questions. By augmenting the model with relevant documentation, the answers stay up to date and include information unique to the organization. The solution uses the data science pipelines in Red Hat OpenShift AI, running on Dell APEX Cloud Platform for Red Hat OpenShift, to ingest that documentation periodically. It also offers advanced options, including a choice of LLMs and vector databases and the ability to adjust hyperparameters from the user interface.
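A periodic ingestion run of the kind the data science pipelines perform can be sketched as follows. This is a simplified illustration, not the guide's actual pipeline code: the `chunk` and `ingest` helpers, the chunk sizes, and the dict standing in for the vector database are all assumptions made for the example.

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks so retrieval can return
    focused passages rather than entire files."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def ingest(documents: dict[str, str], upsert) -> int:
    """One pipeline run: chunk every document and upsert each chunk under a
    stable (source, index) key, so a re-run overwrites stale entries instead
    of duplicating them."""
    count = 0
    for name, text in documents.items():
        for i, piece in enumerate(chunk(text)):
            upsert(f"{name}#{i}", piece)  # real pipelines embed, then upsert
            count += 1
    return count

store: dict[str, str] = {}  # stand-in for the vector database
n = ingest({"guide.md": "A" * 450}, store.__setitem__)
```

Running ingestion on a schedule is what keeps the assistant's answers current: each run refreshes the vector database, and stable chunk keys make repeated runs idempotent.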