Red Hat OpenShift AI tests
Having confirmed that the service is functioning as expected, the next step is to verify that you can connect to it from outside the Riva pod but still within the same OpenShift cluster. Continuing this hypothetical AI application development journey, in this phase you learn how to write your own code against the various Riva speech services by using Jupyter notebooks on the Red Hat OpenShift AI platform.
In the console application launcher (the black-and-white icon that resembles a grid), navigate to OpenShift Self-Managed Services to open the Red Hat OpenShift AI environment. Under the Data Science Projects menu, create a new project.
Follow the on-screen steps to create a workbench using your preferred notebook image (minimal Python, for example) and select NVIDIA GPU as the accelerator. You must also define persistent storage for the workbench.
After the project is created, launch the workbench to open the JupyterLab Launcher. In the left navigation bar, go to Git and click Clone a Repository. Add the GitHub URL https://github.com/nvidia-riva/tutorials to use the NVIDIA-created sample notebooks.
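Alternatively, if you prefer the command line, you can clone the same repository from a terminal session in the workbench:

git clone https://github.com/nvidia-riva/tutorials.git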
Still within the OpenShift AI project workbench, in the JupyterLab Launcher, open a terminal and use the following command to install the NVIDIA Riva Python client:
pip install nvidia-riva-client
The initial test uses the asr-basics.ipynb notebook. Note that after opening a notebook for the first time, you must modify the address of the API server, as shown in Figure 7. You can find this internal hostname under Services in the Networking menu of the Red Hat OpenShift console; see Figure 6 to identify where the hostname is located.
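As a sketch, the cell that creates the client connection typically looks like the following; the hostname and port shown here are placeholders, and you must substitute the internal service hostname from your own cluster:

import riva.client

# Placeholder URI: replace with the internal hostname and gRPC port found
# under Networking > Services in the OpenShift console.
auth = riva.client.Auth(uri="riva-api.riva.svc.cluster.local:50051", use_ssl=False)
asr_service = riva.client.ASRService(auth)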
This notebook processes the same en-US_sample.wav file used in the section Riva API Pod internal tests. The transcription output is shown in Figure 8.
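For reference, the core of this test is an offline recognition call similar to the following sketch; the service URI is again a placeholder for your cluster-internal address:

import riva.client

# Connect to the Riva API service (placeholder URI, as above).
auth = riva.client.Auth(uri="riva-api.riva.svc.cluster.local:50051")
asr_service = riva.client.ASRService(auth)

# Configure recognition for the bundled English sample.
config = riva.client.RecognitionConfig(
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

# Read the WAV file and send it in a single offline request,
# then print the resulting transcript.
with open("en-US_sample.wav", "rb") as fh:
    data = fh.read()
response = asr_service.offline_recognize(data, config)
print(response.results[0].alternatives[0].transcript)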
To demonstrate another example of the available functionality, navigate to the tts-basics-customize-ssml.ipynb notebook. Again, change the server hostname URI as described for the ASR notebook. The first example in this notebook shows how to generate synthetic speech for a given text entry. After running the notebook, a play button appears in the output cells; it controls an audio clip with the AI-generated response. Click the play button to hear the results.
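In code, the basic synthesis step looks roughly like the sketch below; the service URI is again a placeholder, and the voice name is an assumption that must match one of the TTS voices actually deployed with Riva:

import numpy as np
import IPython.display as ipd
import riva.client

# Connect to the Riva API service (placeholder URI).
auth = riva.client.Auth(uri="riva-api.riva.svc.cluster.local:50051")
tts_service = riva.client.SpeechSynthesisService(auth)

# Synthesize a short sentence; the voice name must match a deployed model.
response = tts_service.synthesize(
    text="Hello from Riva on OpenShift.",
    voice_name="English-US.Female-1",
    language_code="en-US",
    sample_rate_hz=44100,
)

# The response carries raw 16-bit PCM samples; wrap them for notebook playback.
audio = np.frombuffer(response.audio, dtype=np.int16)
ipd.Audio(audio, rate=44100)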
This notebook also shows several options to customize the speech output, such as rate, pitch, emotion, emphasis, and even pronunciation.
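For instance, wrapping the input text in SSML prosody tags adjusts the rate and pitch. The following minimal sketch assumes the tts_service client and voice name from the previous example; the exact attribute values supported vary by Riva release:

# SSML input: slow the speaking rate and raise the pitch.
ssml = (
    '<speak><prosody rate="slow" pitch="high">'
    "The weather tomorrow should be sunny."
    "</prosody></speak>"
)
response = tts_service.synthesize(
    text=ssml,
    voice_name="English-US.Female-1",
    language_code="en-US",
    sample_rate_hz=44100,
)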
Note: Parts of these notebooks run only if the associated models were enabled when deploying Riva. If applicable, go back to your values.yaml file to confirm that the specific models you need are included. For instance, a Megatron model is required for multilingual neural machine translation (NMT).
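As an illustration only, the model selection lives in the model repository generator section of values.yaml; the keys and model artifact names below are assumptions that vary by Riva version, so verify them against the chart you deployed:

modelRepoGenerator:
  ngcModelConfigs:
    tritonGroup0:
      models:
        # Illustrative placeholders; use the exact NGC artifact names and
        # versions for your Riva release.
        - nvidia/riva/rmir_asr_conformer_en_us_str:x.y.z
        - nvidia/riva/rmir_tts_fastpitch_hifigan_en_us_ipa:x.y.z
        - nvidia/riva/rmir_megatronnmt_any_en_500m:x.y.z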