Introduction
This XtremIO X2-based Ready Stack solution is designed to consolidate multiple mixed-workload database environments on a single system. We tested and validated the solution with the following database environments:
- An OLTP production database in a physical environment (two nodes)
- XtremIO Virtual Copy (XVC) databases in a physical environment (single node):
  - One XVC database that was repurposed from the production database for an OLAP workload
  - One OLTP XVC database that was repurposed from the production database for development
- OLTP databases running on two VMs (single node)
Logical architecture overview
The following figure shows the logical architecture of the consolidated mixed-database environment, including the multiple layers of infrastructure components in this Ready Stack solution and the Data Domain DD6300 backup appliance that provides data protection.

Figure 1. Logical architecture overview
Server layer
The server layer consists of:
- R640 management server—vCenter Server Appliance is deployed as a VM on a single PowerEdge R640 server running ESXi 6.7 U3 as the hypervisor.
- R940 PROD database servers—The production database is deployed on two PowerEdge R940 servers running Red Hat Enterprise Linux 7.4 as the bare-metal operating system.
- R740 XVC database server—The two stand-alone databases that are repurposed from the production database are deployed on a single PowerEdge R740 server running Red Hat Enterprise Linux 7.4 as the bare-metal operating system.
- R940 virtual database server—The two virtual OLTP databases are deployed as separate VMs running Red Hat Enterprise Linux 7.4 as the guest operating system. Both VMs are running on a single PowerEdge R940 server that is installed with the ESXi 6.7 U3 hypervisor.
Each database server has:
- Two dual-port 10 GbE NICs—For public and private (PROD servers only) network traffic
- Two dual-port 16 Gbps HBAs—For SAN traffic
- At least one 1 GbE management rNDC or LOM port—For in-band server management from within the operating system
- 1 GbE dedicated iDRAC Ethernet port—For out-of-band (OOB) management of the server
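To make the intended redundancy explicit, the following Python sketch summarizes the per-server port inventory from the list above and checks that each data-path role spans two physical adapters. The role names and data structure are illustrative assumptions, not output from any Dell tool.

```python
# Per-database-server connectivity summary (counts taken from the list above).
# The keys and role descriptions are illustrative assumptions.
server_ports = {
    "lan_10gbe":  {"ports": 4, "adapters": 2, "role": "public and private (PROD only) traffic"},
    "fc_16gbps":  {"ports": 4, "adapters": 2, "role": "SAN traffic"},
    "mgmt_1gbe":  {"ports": 1, "adapters": 1, "role": "in-band management (rNDC or LOM)"},
    "idrac_1gbe": {"ports": 1, "adapters": 1, "role": "out-of-band (OOB) management via iDRAC"},
}

# LAN and SAN traffic should survive the loss of a single NIC or HBA,
# so each of those roles must span at least two physical adapters.
for role in ("lan_10gbe", "fc_16gbps"):
    assert server_ports[role]["adapters"] >= 2, f"{role} has no adapter-level redundancy"
```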
Switch layer
The switch layer consists of:
- Two 10 GbE ToR switches—Redundant S5248F-ON top-of-rack (ToR) LAN switches that support the public, private, and backup traffic
- Two 32 Gbps FC switches—Redundant DS-6620B switches for FC SAN traffic and connectivity between the database servers and the storage array
- One 1 GbE management switch—A 1U S4148T-ON switch for the management traffic
Storage layer
The storage layer consists of:
- XtremIO X2 array—XtremIO X2 is the FC SAN storage that consolidates all the databases. The array in this solution is a two X-Brick cluster. Each X-Brick has two storage controllers, and each storage controller has two front-end 16 Gbps FC ports.
- Data Domain DD6300 appliance—For database backup and recovery, we tested the solution with the DD6300 backup appliance. Two 10 GbE ports from the DD6300 appliance were connected to the ToR switches for the backup and recovery traffic.
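As a quick sanity check on the SAN fan-out, the following Python sketch computes the array's front-end FC port count and the resulting path count per volume for one database server. The zoning layout (each HBA port zoned to every array port on its fabric) is an assumption for illustration, not the validated zoning configuration.

```python
# Front-end FC port count for the two X-Brick XtremIO X2 cluster described above.
x_bricks = 2
controllers_per_brick = 2
fc_ports_per_controller = 2
array_fe_ports = x_bricks * controllers_per_brick * fc_ports_per_controller
print(f"XtremIO front-end FC ports: {array_fe_ports}")  # 8

# Assumed zoning: the array's front-end ports are split evenly across the two
# fabrics, and each server HBA port is zoned to every array port on its fabric.
hba_ports_per_server = 4                                  # two dual-port 16 Gbps HBAs
fabrics = 2
hba_ports_per_fabric = hba_ports_per_server // fabrics    # 2
array_ports_per_fabric = array_fe_ports // fabrics        # 4
paths_per_volume = fabrics * hba_ports_per_fabric * array_ports_per_fabric
print(f"Paths per volume per server (assumed zoning): {paths_per_volume}")  # 16
```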
Physical design overview
This section provides an overview of the physical LAN and SAN design and the connectivity that is deployed in this solution.
The following figure shows the physical design and redundant connectivity between the database servers, 10 GbE ToR switches, 1 GbE management switch, XtremIO X2 cluster, and the DS-6620B fabric switches:

Figure 2. Physical design and connectivity
The DD6300 backup appliance (not shown) is connected to the 10 GbE public network.
The SAN design features redundant components and connectivity at every level to ensure that there is no single point of failure. This design enables the database servers to reach the storage array even if any of the following components fails (see the sketch after this list):
- One or more HBA ports
- One HBA
- One FC switch
- One XtremIO front-end port or storage controller
- One XtremIO X-Brick
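A minimal way to reason about this claim is to model each path as an HBA-port, fabric-switch, array-port triple and confirm that removing any single component still leaves at least one live path. The following Python sketch does exactly that; the component names and fabric assignments are illustrative assumptions, not the actual zoning.

```python
from itertools import product

# Minimal model of the redundant SAN described above. Component names and the
# fabric layout are illustrative assumptions, not the validated configuration.
hba_ports   = ["hba1_p1", "hba1_p2", "hba2_p1", "hba2_p2"]
fc_switches = ["fabric_a", "fabric_b"]
array_ports = [f"x{b}_sc{c}_fc{p}" for b in (1, 2) for c in (1, 2) for p in (1, 2)]

def fabric_of(port):
    # Assumption: odd-numbered ports connect to fabric A, even-numbered to fabric B.
    return "fabric_a" if port.endswith("1") else "fabric_b"

def live_paths(failed):
    """Paths (HBA port, switch, array port) that survive the failed components."""
    paths = []
    for hba, switch, target in product(hba_ports, fc_switches, array_ports):
        if fabric_of(hba) != switch or fabric_of(target) != switch:
            continue  # a path stays within a single fabric
        affected = {hba, switch, target,
                    hba.split("_")[0],          # whole HBA (both of its ports)
                    target.rsplit("_", 1)[0],   # whole storage controller
                    target.split("_")[0]}       # whole X-Brick
        if failed & affected:
            continue
        paths.append((hba, switch, target))
    return paths

# Every single-component failure must leave at least one path to the array.
controllers = ["x1_sc1", "x1_sc2", "x2_sc1", "x2_sc2"]
single_failures = (hba_ports + fc_switches + array_ports
                   + ["hba1", "hba2"] + controllers + ["x1", "x2"])
for component in single_failures:
    assert live_paths({component}), f"{component} would be a single point of failure"
print("No single point of failure in the modeled SAN design")
```

With no failures, the model yields 16 paths per volume, which matches the path-count arithmetic shown earlier.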
Virtual network design
The following figure provides a high-level overview of the virtual network design that is implemented in the ESXi database host in the virtual environment. The figure also shows the mapping between the virtual switches and the physical switches.

Figure 3. Virtual network design in the ESXi virtual database host
The main components of the design are:
- Public VDS—We created one vSphere Distributed Switch (VDS), which contains two distributed port groups:
  - The public port group provides the virtual interfaces for database public traffic for the two database VMs.
  - The physical uplinks port group is used to add the two 10 GbE physical network ports that are connected to the external 10 GbE ToR switches.
- Standard switch—This switch contains two default port groups:
  - The management network port group provides the VMkernel port vmk0 to manage the ESXi host from the vCenter Server Appliance.
  - The VM network port group provides the 1 GbE virtual interfaces for in-band management of the database VMs.
The management traffic is routed through the server's 1 GbE physical rNDC or LOM port, which is connected to the external management switch.
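The same design can be captured declaratively. The following Python sketch mirrors Figure 3 as plain data and checks that the public data path uses two uplinks spread across both ToR switches. The switch, port-group, and vmnic names are illustrative assumptions, not the exact object names used in the tested environment.

```python
# Declarative summary of the virtual network design shown in Figure 3.
# All names below are illustrative, not the exact vCenter object names.
virtual_switches = {
    "public_vds": {
        "type": "distributed",
        "port_groups": {
            "public": {"purpose": "database public traffic for the two database VMs"},
            "uplinks": {"purpose": "physical uplinks to the LAN",
                        "physical_nics": ["vmnic_10g_0", "vmnic_10g_1"],
                        "connected_to": ["ToR switch 1", "ToR switch 2"]},
        },
    },
    "standard_switch": {
        "type": "standard",
        "port_groups": {
            "management_network": {"purpose": "VMkernel port vmk0 for ESXi host management"},
            "vm_network": {"purpose": "1 GbE in-band management of the database VMs",
                           "physical_nics": ["vmnic_1g_0"],
                           "connected_to": ["management switch"]},
        },
    },
}

# The public data path should have two uplinks, one to each external ToR switch.
uplinks = virtual_switches["public_vds"]["port_groups"]["uplinks"]
assert len(uplinks["physical_nics"]) == 2 and len(set(uplinks["connected_to"])) == 2
```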