We used the Dell EMC PowerEdge MX7000 modular chassis, which provides high-performance data center infrastructure, for both compute and network resources in this solution.
Compute or server layer
The compute or server resources for this reference architecture are:
- One PowerEdge MX840c blade for Oracle databases—We deployed this four-socket blade server with the VMware ESXi 6.7 hypervisor and configured it to run three single-node Oracle database virtual machines (VMs). We deployed each VM with Oracle 18c (18.3.0) Grid Infrastructure (GI) and a standalone Oracle Database 18c (18.3.0) running on Red Hat Enterprise Linux 7.4 as the guest operating system. We configured the VMs as follows:
- We configured the first VM to run the Oracle OLTP production database workload.
- We configured the second VM to run the Oracle DSS database workload.
- We configured the third VM to run an OLTP database workload that we created as a snapshot of the OLTP production database on the PowerMax storage array.
For details about the ESXi host, VMs for Oracle databases, and virtual network configuration, see Appendix B: Design and Configuration Details.
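As a quick sanity check after a deployment like the one above, the Oracle VMs registered on the ESXi 6.7 host can be listed from the ESXi shell. This is an illustrative sketch only; the Vmid used in the second command is a placeholder, not a value from this solution:

```shell
# List all VMs registered on the ESXi host (run from the ESXi shell).
# Output columns include Vmid, Name, Datastore, and Guest OS.
vim-cmd vmsvc/getallvms

# Check the power state of a single VM by its Vmid
# (replace 1 with a Vmid reported by the command above).
vim-cmd vmsvc/power.getstate 1
```

The same check applies to the SQL Server host described next.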
- One PowerEdge MX840c blade for SQL Server databases—We deployed this four-socket blade server with the VMware ESXi 6.7 hypervisor and used it to run five single-node SQL Server database virtual machines (VMs). We deployed a standalone SQL Server 2017 instance on each VM with Red Hat Enterprise Linux 7.6 as the guest operating system. We configured the VMs as follows:
- We configured the first two VMs to run the OLTP SQL production database workload.
- We configured the third and the fourth VMs to run the SQL DSS database workload.
- We configured the fifth VM to run an OLTP database workload that we created as a snapshot of the OLTP production database on the PowerMax storage array, simulating a development or test environment.
For details about the ESXi host, VMs for SQL Server databases, and virtual network configuration, see Appendix B: Design and Configuration Details.
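Inside each SQL Server VM, the standalone instance can be verified from the RHEL guest shell. This is a hedged sketch using the default SQL Server on Linux service name and tools path; the sa password is a placeholder:

```shell
# From inside a SQL Server VM (RHEL 7.6 guest), confirm the
# mssql-server service is running.
systemctl status mssql-server --no-pager

# Query the instance to confirm the SQL Server 2017 build;
# '<sa-password>' is a placeholder for the actual sa credential.
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P '<sa-password>' \
    -Q "SELECT @@VERSION;"
```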
- MX840c blade subcomponents—Each MX840c blade used for the Oracle and SQL Server databases contains four 20-core Intel Xeon Scalable processors, 1,536 GB of RAM, and four QLogic QL41262 dual-port 25 GbE mezzanine converged network adapters (CNAs) for LAN and SAN traffic. We configured two of the mezzanine cards for Fibre Channel over Ethernet (FCoE) SAN traffic and the remaining two cards for LAN traffic. We enabled NIC partitioning (NPAR) on all the mezzanine cards. For details about the CNA configuration, see Converged network adapter configuration in Appendix B.
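On each ESXi host, the CNA partitions and FCoE configuration can be inspected with standard esxcli commands. This is a sketch of a verification step, not a configuration procedure; with NPAR enabled, each physical QL41262 port presents multiple vmnic partitions to the hypervisor:

```shell
# List the physical uplinks presented by the QL41262 mezzanine cards;
# NPAR partitions appear as separate vmnic entries.
esxcli network nic list

# List the FCoE-capable CNA ports and any activated FCoE adapters.
esxcli fcoe nic list
esxcli fcoe adapter list
```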
We used the PowerEdge MX7000 modular infrastructure to provide the network switching layer in this solution. The network layer consists of:
- Two MX9116n Fabric Switching Engine (FSE) I/O Modules (IOMs) or switches—We configured two MX9116n IOMs, installed in MX fabric slot A1 and MX fabric slot B1, to carry converged LAN and SAN traffic in this solution. We configured the two IOMs in Virtual Link Trunking (VLT) mode. We configured two QSFP28 (100 Gb) external-facing unified ports in 4x 16 Gb/s Fibre Channel (FC) breakout mode and directly attached them to the PowerMax 2000 storage array. We configured one QSFP28 external-facing port in 4x 10 GbE breakout mode and uplinked it to the spine switches for external LAN connectivity. We configured the 25 GbE internal-facing ports connected to the CNA ports in the MX840c blades to carry both FCoE and LAN traffic. For details about the LAN and SAN network configuration, including FC zoning, see Compute and network design in Appendix B.
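The VLT and breakout settings described above map to OS10 CLI configuration on each MX9116n IOM. The following is an illustrative sketch only: the VLT domain ID, backup destination address, discovery-interface range, and port-group numbers are all placeholders, and exact syntax should be confirmed against the OS10 Enterprise Edition documentation for the MX9116n:

```
! Hypothetical OS10 sketch -- all identifiers are placeholders
configure terminal

! VLT between the two MX9116n IOMs
vlt-domain 1
 backup destination 100.67.0.2
 discovery-interface ethernet1/1/37-1/1/40

! Break a QSFP28 unified port-group out into 4x 16 Gb/s FC
! for direct attachment to the PowerMax 2000
port-group 1/1/15
 mode FC 16g-4x

! Break a QSFP28 port-group out into 4x 10 GbE for the spine uplink
port-group 1/1/16
 mode Eth 10g-4x
```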
- Redundant MX management modules—We connected the redundant 1 GbE MX management modules to 1 GbE switches and used them to manage the MX7000 chassis and MX9116n IOMs and to connect to the iDRACs on the MX840c blades. For details about MX7000 chassis management, see the Dell EMC OpenManage Enterprise-Modular Edition Version 1.00.01 for PowerEdge MX7000 Chassis User's Guide.