Solution approach
An architecture based on the Dell APEX Cloud Platform for Red Hat OpenShift is proposed to address these business challenges. This turnkey solution platform includes Dell hardware with integrated Red Hat software components.
The following figure shows an overview of the main elements.
While most components in the diagram can be part of the standard Dell APEX Cloud Platform for Red Hat OpenShift delivery, some components, such as the top-of-rack switches, are acquired separately. Dell Technologies offers a family of switches and network products for customers who need a network solution.
A summary of the required installation process for the software components, such as Red Hat OpenShift AI, the NVIDIA GPU Operator, and NVIDIA Riva, is highlighted in the section Implementation guidance. Key elements of this solution are described in the following subsections.
APEX Cloud Platform for Red Hat OpenShift encompasses most of the three bottom layers in Figure 1. The servers with NVIDIA GPUs are preconfigured with Red Hat Enterprise Linux CoreOS and the Red Hat OpenShift Container Platform, which are responsible for host provisioning, Kubernetes deployment, management, monitoring, and more. They also include the APEX Cloud Platform Foundation Software, which integrates the infrastructure management into the OpenShift Web Console. The storage is built on Dell software-defined storage.
The two main NVIDIA components in the solution are the NVIDIA GPU Operator and the NVIDIA Riva services. Both are installed in the OpenShift Container Platform, as shown in Figure 1. The GPU Operator[1] uses the operator framework in Kubernetes to automate the management of all NVIDIA software components needed to provision the GPU. The NVIDIA Riva deployment consists of two containers: an init container that downloads and deploys the desired models, and a second container that runs the Riva API microservices. The NVIDIA GPU Operator and the NVIDIA Riva services use Dell software-defined storage through the CSI (Container Storage Interface) driver to store the models and, optionally, audio and text files, depending on the developed application. Depending on workload and performance requirements, alternative storage solutions, such as ObjectScale or PowerScale, could also be considered.
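To illustrate how the Riva init container could persist downloaded models on the Dell software-defined storage through the CSI driver, the following is a minimal PersistentVolumeClaim sketch. The namespace, claim name, requested size, and storage class name are illustrative assumptions, not values prescribed by this solution; use the storage class actually exposed by the Dell CSI driver in your cluster.

```yaml
# Hypothetical PVC for the Riva model repository.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: riva-model-repo        # assumed claim name
  namespace: riva              # assumed namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi            # size depends on the models deployed
  storageClassName: dell-csi-sds   # assumption: name of the Dell SDS storage class
```

The init container would mount this claim at its model repository path so that downloaded models survive pod restarts and can be shared with the Riva API microservices container.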
Red Hat OpenShift AI, shown as the top layer in Figure 1, offers organizations an efficient way to deploy an integrated set of common open-source and third-party tools to perform ML modeling. The ML models developed using Red Hat OpenShift AI are portable: they can be deployed to production in containers, on-premises, at the edge, or in the public cloud. OpenShift AI is installed as an add-on and requires a subscription. Once installed, it is available in the OpenShift console to create and configure specific data science projects, defining storage in the OpenShift cluster, models, and data sources.
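In an OpenShift AI workbench, application code for a speech pipeline typically prepares audio before sending it to an ASR service such as Riva. The following is a minimal, standard-library-only sketch of that client-side step; the 16 kHz sample rate and 100 ms chunk size are illustrative assumptions, not requirements of this solution.

```python
import io
import wave

def chunk_wav(wav_bytes, chunk_ms=100):
    """Split mono 16-bit PCM WAV audio into fixed-duration chunks,
    as a streaming ASR client would before sending requests."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        rate = wf.getframerate()
        frames_per_chunk = rate * chunk_ms // 1000
        chunks = []
        while True:
            data = wf.readframes(frames_per_chunk)
            if not data:
                break
            chunks.append(data)
    return rate, chunks

# Build one second of silence as a 16 kHz mono WAV for demonstration.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)       # 16-bit samples
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)

rate, chunks = chunk_wav(buf.getvalue(), chunk_ms=100)
print(rate, len(chunks))  # prints: 16000 10
```

Each 100 ms chunk here holds 1600 frames (3200 bytes); in a real application, these chunks would be streamed to the Riva ASR endpoint rather than printed.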
By choosing this solution, IT teams not only overcome the barriers of AI deployment but also optimize business outcomes, as described in the section Solution benefits.