This three-node architecture is designed for high availability. It can be extended to four or more nodes to consolidate more databases.
Figure 1. Architecture design
The server layer consists of:
- Database ESXi servers: Three PowerEdge R750 servers running the VMware ESXi 7.0 U3 hypervisor for two VMs running the Red Hat Enterprise Linux 8.5 guest operating system.
- R640 management/tool server: The R640 server runs VMware ESXi 7.0 as the hypervisor to host multiple VMs, as follows:
- VMware vCenter Server Appliance (VCSA) deployed as a VM
- VM with Red Hat Enterprise Linux 8.2 guest operating system installed to run the HammerDB test tool
- Three ESXi database server hosts, each containing:
- Two 25 GbE network interface cards (NICs): Two dual-port 25 GbE NICs for Oracle public and vMotion traffic
- Two 32 Gbps host bus adapters (HBAs): Two dual-port 32 Gbps HBAs for SAN traffic in the database servers
- One 1 GbE management rNDC: At least one 1 GbE rNDC or LOM port for in-band management of the server from within the operating system
- One 1 GbE integrated Dell Remote Access Controller (iDRAC) port: A dedicated iDRAC Ethernet port for out-of-band (OOB) management of the servers
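The per-host connectivity listed above can be tallied in a quick sketch. The adapter and port counts come from the list; the aggregate-bandwidth arithmetic is ours:

```python
# Tally ports and aggregate bandwidth for one database ESXi host.
# Adapter counts are from the design; the math is simple multiplication.
adapters = {
    "25 GbE NIC": {"count": 2, "ports_each": 2, "gbps_per_port": 25},
    "32 Gbps FC HBA": {"count": 2, "ports_each": 2, "gbps_per_port": 32},
}

def total_ports(a):
    return a["count"] * a["ports_each"]

def aggregate_gbps(a):
    return total_ports(a) * a["gbps_per_port"]

for name, a in adapters.items():
    print(f"{name}: {total_ports(a)} ports, {aggregate_gbps(a)} Gb/s aggregate")
# 25 GbE NIC: 4 ports, 100 Gb/s aggregate
# 32 Gbps FC HBA: 4 ports, 128 Gb/s aggregate
```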
The network layer consists of the following types of connectivity:
- Management network between vCenter and ESXi hosts. This network also connects the iDRAC Ethernet ports of all the servers
- vMotion network for VM migration using vMotion between ESXi hosts
- Public network that connects the database servers and the Oracle database instances with the rest of the data center network
- SAN FC network that connects database servers and PowerStore SAN storage
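One way to picture the four networks above is as an ESXi port-group layout. This is a minimal sketch: the network roles come from the text, but the VLAN IDs and uplink names are hypothetical placeholders, not values from the design:

```python
# Sketch of the four networks as ESXi port groups. VLAN IDs and
# vmnic/vmhba names are illustrative assumptions only.
networks = {
    "Management":    {"vlan": 100,  "uplinks": ["vmnic0"],            "fabric": "1 GbE"},
    "vMotion":       {"vlan": 200,  "uplinks": ["vmnic2", "vmnic3"],  "fabric": "25 GbE"},
    "Oracle public": {"vlan": 300,  "uplinks": ["vmnic2", "vmnic3"],  "fabric": "25 GbE"},
    "FC SAN":        {"vlan": None, "uplinks": ["vmhba0", "vmhba1"],  "fabric": "32 Gbps FC"},
}

# The data-path networks (vMotion, public, SAN) ride redundant adapters.
for name, net in networks.items():
    redundant = len(net["uplinks"]) >= 2
    print(f"{name}: fabric={net['fabric']}, redundant uplinks={redundant}")
```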
The networks are implemented with the following switches:
- One 1 GbE switch for the management network
- Two 25 GbE ToR switches for the Oracle public and vMotion network
- Two 32 Gbps FC switches for FC SAN connectivity
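A rough port-count sketch for the redundant switch pairs follows. It assumes each host splits its dual-port adapters across the two switches for redundancy, which is a typical cabling pattern; the text does not spell the cabling out:

```python
# Estimate ports consumed on each switch of a redundant pair,
# assuming one port from each dual-port adapter goes to each switch.
db_hosts = 3
ports_per_host_per_25gbe_switch = 2  # one port from each of two dual-port NICs
ports_per_host_per_fc_switch = 2     # one port from each of two dual-port HBAs

ports_per_25gbe_switch = db_hosts * ports_per_host_per_25gbe_switch
ports_per_fc_switch = db_hosts * ports_per_host_per_fc_switch
print(ports_per_25gbe_switch, ports_per_fc_switch)  # 6 6
```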
The storage layer consists of a PowerStore T model array serving as the FC SAN storage for the databases and as the datastores for the VM operating system volumes.