Web application sample
Now that you are familiar with the Riva API service, the next step is to deploy an application that leverages these services. In this example, a video chat application that performs automatic transcription and named entity recognition (NER) is deployed as a proof of concept, using the publicly available code for the Riva Contact application. This Riva Contact Center Video Conference is a lightweight Node.js sample application. A similar application could be used in a call center, where the call transcription and named entity recognition could launch additional queries to reduce response time, assess agent performance or customer satisfaction metrics, and ingest data for AI model training or fine-tuning.
Before you start to build your application, confirm that the integrated OpenShift Container Registry (OCR) is deployed to manage your container images. The container registry stores the output of the Source-to-Image (S2I) build, which is then used for deployment, as explained in the OpenShift documentation on understanding image builds. See the internal registry overview page to learn more about deploying a registry in your OpenShift cluster.
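As a quick sanity check, you can verify from the CLI that the integrated registry is enabled before starting the build. This is a minimal sketch using standard `oc` commands; the exact output depends on your cluster configuration:

```shell
# The Image Registry Operator's management state should be "Managed"
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.managementState}'

# The registry pods run in the openshift-image-registry namespace
oc get pods -n openshift-image-registry
```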
Switch to the Developer view in your Red Hat OpenShift console to start the application deployment. Click the +Add option to create your new project, or select an existing project from the drop-down menu. Find the Git Repository tile and select Import from Git. In the Git Repo URL field, enter the address of the NVIDIA Riva sample apps repository: https://github.com/nvidia-riva/sample-apps.git.
In the advanced Git options, enter /riva-contact in the Context dir field to specify the correct application to import. A Node.js builder image is automatically suggested, as shown in Figure 10. Complete the remaining steps by giving your application a name and defining a target port. Be sure to select the Create Route check box so that the application is exposed at a public URL.
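If you prefer the command line, the same import, build, and route creation can be sketched with `oc new-app` and `oc expose`. The application name `riva-contact` is chosen here for illustration; substitute your own:

```shell
# Source-to-Image build from the Riva sample apps repository,
# using the riva-contact subdirectory and the Node.js builder image
oc new-app nodejs~https://github.com/nvidia-riva/sample-apps.git \
  --context-dir=riva-contact --name=riva-contact

# Expose the generated service at a public URL
# (equivalent to selecting the Create Route check box)
oc expose service/riva-contact
```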
When creating your own application, you must set the address of your Riva API using its internal hostname. Because the application code in this example was imported directly from the NVIDIA sample GitHub repository without edits, a secret was created to overwrite the env.txt file with one containing the correct Riva API server hostname. Alternatively, you can clone the application and edit the env.txt file to point to the Riva API server and service port deployed in your Red Hat OpenShift cluster.
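The secret-based approach can be sketched as follows. The variable name, service hostname, port, and mount path below are assumptions for illustration (the mount path matches the default working directory of the Node.js S2I image); check the sample repository's env.txt for the exact variables it expects:

```shell
# Write a local env.txt pointing at the in-cluster Riva API service
# (hostname, port, and variable name are assumptions for this sketch)
cat > env.txt <<'EOF'
RIVA_API_URL=riva-api.riva.svc.cluster.local:50051
EOF

# Store it as a secret in the application's project
oc create secret generic riva-contact-env --from-file=env.txt

# Mount the secret file over the application's env.txt,
# overwriting the copy baked into the image
oc set volume deployment/riva-contact --add \
  --name=riva-env --type=secret --secret-name=riva-contact-env \
  --mount-path=/opt/app-root/src/env.txt --sub-path=env.txt
```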
When you click Create, OpenShift builds the application pod and creates a service and a route to expose it. After the build completes, you can find the URL for the application by switching back to the Administrator view, under Networking > Routes.
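The route's public hostname can also be retrieved from the CLI; `riva-contact` is the illustrative application name assumed above:

```shell
# Print the public hostname assigned to the application's route
oc get route riva-contact -o jsonpath='{.spec.host}'
```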
The sample application provides a video chat for two participants, who can connect using their auto-assigned IDs. Start the application to see the speech recognition and natural language processing features in action: the AI-generated transcript is displayed with live entity tagging that captures, in this example, persons, locations, organizations, times/dates, and other categories.
This example shows the simplicity of deploying an AI application that can be customized to any business need. The charts in Figure 12 show that NVIDIA A2 Tensor Core GPU utilization was minimal while the web application was running in the OpenShift cluster. This proof of concept used basic resources available in the APEX Cloud Platform for Red Hat OpenShift. Customers who want to tailor a solution to their business needs and performance requirements, including scaling up applications for concurrent streaming calls, can consider other GPU or storage options.