Hardware design
The architecture for this solution follows the two-layer integrated deployment option for the Dell APEX Cloud Platform: separate compute and storage MC (multicloud) nodes, each built on 4th Generation Intel® Xeon® processors.
The compute nodes run Red Hat OpenShift on bare metal, meaning that OpenShift runs directly on the hardware and supports virtualization and serverless computing without requiring a hypervisor. The currently available NVIDIA Ampere architecture-based GPUs are listed in the NVIDIA Riva support matrix. For the latest product configurations, including GPU options, consult the Dell APEX Cloud Platform for Red Hat OpenShift specification sheet.
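To illustrate how a Riva-style workload would target these GPUs, the sketch below shows a pod spec that requests a GPU through the `nvidia.com/gpu` resource exposed by the NVIDIA GPU Operator. The pod name and image tag are placeholders for illustration, not part of the validated design.

```yaml
# Hypothetical pod spec: requests one GPU on a compute node.
# The nvidia.com/gpu resource is advertised by the NVIDIA GPU Operator.
apiVersion: v1
kind: Pod
metadata:
  name: riva-speech-example          # placeholder name
spec:
  containers:
    - name: riva-server
      image: nvcr.io/nvidia/riva/riva-speech:latest   # tag is a placeholder
      resources:
        limits:
          nvidia.com/gpu: 1          # schedules the pod onto a GPU-equipped node
```

The scheduler places the pod only on compute nodes whose GPUs have been discovered and advertised by the operator.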
The storage nodes run Dell software-defined storage (SDS) software on Red Hat Enterprise Linux (RHEL) directly on those nodes. The management components for Dell SDS are co-resident on the storage nodes and do not consume resources on the compute nodes. Essential integration between OpenShift and the Dell SDS Element Manager is built in. For more information, see the Dell APEX Cloud Platform for Red Hat OpenShift specification sheet.
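From the application's point of view, compute-node workloads consume Dell SDS capacity through a CSI-provisioned storage class. A minimal sketch follows; the claim name and the `storageClassName` value are assumptions for illustration, since the actual class name is created by the platform deployment.

```yaml
# Hypothetical PVC: an application pod on a compute node claims
# capacity from the Dell SDS cluster via a CSI storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-repository             # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: dell-sds         # placeholder; use the class your platform provisions
```

Pods then mount the claim like any other persistent volume, while provisioning and data placement are handled by the storage nodes.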
The Dell APEX Cloud Platform for Red Hat OpenShift uses two dual-port network interface cards (NICs) to connect to top-of-rack switches, plus a single gigabit management switch for the integrated Dell Remote Access Controllers (iDRACs). Although the NIC options include 25Gb and 100Gb, the 25Gb NICs can be connected to 10Gb switches if needed. The solution uses three VLANs (virtual local area networks): one Management VLAN for OpenShift application traffic and two Data Path VLANs for connectivity to the storage cluster.
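One way to express this VLAN layout declaratively on OpenShift is through the Kubernetes NMState Operator. The sketch below defines one of the two data-path VLANs on top of a bond over the 25Gb ports; the policy name, interface names, and VLAN ID are placeholders, not values prescribed by the platform.

```yaml
# Hypothetical NodeNetworkConfigurationPolicy (NMState): one data-path
# VLAN layered on a bonded NIC pair. Names and IDs are placeholders.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: data-path-vlan-a             # placeholder name
spec:
  desiredState:
    interfaces:
      - name: bond0.101              # data-path VLAN A (placeholder ID)
        type: vlan
        state: up
        vlan:
          base-iface: bond0          # bond over the two 25Gb ports
          id: 101                    # placeholder VLAN ID
```

A second, analogous policy would define the other data-path VLAN, while the Management VLAN is typically configured during platform deployment.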