Before you set up the ESXi database server, ensure that the vCenter Server Appliance VM is installed and set up on the management server. This setup is required to connect to the vSphere Web Client to perform the tasks in this section.
To set up the ESXi database server, follow the procedures in this section to:
On the R940-based virtual database server, install ESXi 6.7 U3 by repeating the procedures in Installing ESXi and deploying vCenter Server Appliance on the management server. However, skip the section about deploying vCenter Server Appliance: it is already installed on the management server, so installing it again on the virtual database server is unnecessary.
When we installed ESXi 6.7 U3 on the database server, we applied the following best practices for our virtual environment.
Note: In our test environment, we applied best practices in accordance with the information in the Dell EMC XtremIO Storage Array Host Configuration Guide. See that guide for the complete list of best practices for XtremIO in a VMware ESXi host environment and for details about how to apply them.
The following table lists the default and recommended HBA queue depth settings for ESXi 6.7 U3 hosts connecting to XtremIO X2 storage arrays:
Table 29. HBA queue depth settings in ESXi-based hosts
| Parameter | Default value | Recommended value |
| LUN Queue Depth | QLogic: 64; Emulex: 30 | QLogic: 256; Emulex: 128 |
| HBA Queue Depth | QLogic: Not applicable; Emulex: 8192 | QLogic: Not applicable; Emulex: 8192 (maximum) |
We set the LUN queue depth to 256, the recommended value for the QLogic HBAs that we used in our virtualized database server. This value ensures that the XtremIO X2 storage arrays handle an optimal number of SCSI commands (including I/O requests).
Note: vSphere no longer reads the QLogic HBA Queue Depth setting, so it is not relevant when you configure a vSphere host with QLogic HBAs.
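For QLogic HBAs, the LUN queue depth is a driver module parameter rather than a per-device setting. The following is a sketch that assumes the inbox qlnativefc driver; the module and parameter names can differ by driver version, so verify them on your host before applying:

```shell
# Sketch, assuming the inbox qlnativefc driver. Confirm the module name
# first with: esxcli system module list | grep ql
# Set the QLogic LUN queue depth to the recommended value of 256.
esxcli system module parameters set -m qlnativefc -p ql2xmaxqdepth=256

# Display the configured value; the change takes effect after a host reboot.
esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth
```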
Configure the virtual database ESXi host by using vSphere Native Multipathing (NMP) and the following parameter values on the host for optimal performance with the XtremIO X2 array:
Note: In ESXi 6.7 U3, the default path selection policy is round-robin and the default path switching frequency is 1. Therefore, no change was needed in our virtualized database server.
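Where a host's defaults differ (for example, on an earlier ESXi release), a user-defined SATP claim rule can apply these values to all XtremIO volumes. The following is a sketch based on the XtremIO host configuration guidance; verify the exact options against that guide before use:

```shell
# Sketch: claim new XtremIO volumes with round-robin path selection and
# a path switch after every I/O (iops=1). Not needed in our environment,
# because ESXi 6.7 U3 already uses these defaults.
esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" \
    -M XtremIO -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA \
    -t vendor -V XtremIO
```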
Configure the following ESXi host parameters:
$> esxcli storage nmp path list | grep XtremIO -B1 | grep "\ naa" | sort | uniq
$> esxcli storage core device set -d <naa.xxx> -O 256
where <naa.xxx> is the XtremIO volume that is obtained from the preceding command.
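The two commands can be combined into a small loop that applies the queue depth setting to every XtremIO volume on the host (a sketch; run it in the ESXi shell):

```shell
# Sketch: set the outstanding I/O limit (-O) to 256 for each XtremIO
# volume found by the path listing above.
for naa in $(esxcli storage nmp path list | grep XtremIO -B1 \
             | grep "\ naa" | sort | uniq); do
  esxcli storage core device set -d "$naa" -O 256
done

# Spot-check one volume; "No of outstanding IOs with competing worlds"
# should now report 256.
esxcli storage core device list -d <naa.xxx> | grep -i outstanding
```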
To create the data center and cluster containers inside vSphere:
https://<vCenter Server Administrator FQDN or IP>/vsphere-client
After you create a vSphere data center, add the hosts to vCenter by performing the following steps on each of the servers that will be part of the data center:
https://<vCenter Server Administrator FQDN or IP>/vsphere-client
VDS provides centralized VM network administration. With a VDS, the virtual switches of the individual ESXi hosts are abstracted into a single pool that spans multiple ESXi hosts within the data center.
Note: Typically, for single ESXi host implementations, standard switches and port groups are sufficient. In this solution, even though we used only one ESXi host, we created distributed switches and port groups to provide the means to easily expand the environment by using multiple ESXi hosts, if needed.
To configure a VDS:
Create corresponding port groups.
The following table describes the distributed switches and port groups for the basic virtual network infrastructure for the databases:
Table 30. Distributed switches and port groups
| Distributed switch name | Distributed port group names | Purpose |
| DBPub | DBPublic | Interfaces for the database public and backup and recovery network |
| | vMotion* | Interfaces for vSphere vMotion activity |
| | DBPub-DVUplinks-61 (uplink port group) | Connection for the two 10 GbE physical ports that the database public, vMotion, and backup and recovery traffic share on each ESXi host |
* A vMotion network is used for a solution with multiple ESXi hosts and is not required for this solution. If you add more ESXi hosts later, then you can use vMotion to migrate the VMs among the hosts.
Note: The uplink port group is created by default when you create the distributed switch.
Create a VDS on VMware vSphere 6.7 U3 or later as follows:
Uplink ports connect VDS to the physical NICs on associated hosts.
Network I/O Control monitors the load over the network and dynamically allocates resources.
After you create distributed switches, create port groups:
i For VLAN type, select VLAN.
ii For VLAN ID, select the ID.
Note: We set VLAN ID 16 for the DBPublic (database public and backup and recovery traffic) distributed port group and VLAN ID 99 for the vMotion distributed port group. The two uplink ports that the DBPublic and vMotion distributed port groups share were also tagged with VLAN IDs 16 and 99 on the two ToR S5248F-ON switches to which they connect.
iii Under Advanced, select Customize default policies configuration.
iv Click Next.
vMotion on DBPub
VMFS datastores are repositories for VMs. You can set up VMFS datastores for iSCSI-based storage, FC-based storage, and local storage.
Note: First install and configure any required adapters and rescan the adapters to discover newly added storage devices.
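A rescan can also be triggered from the ESXi shell (a sketch using standard esxcli commands):

```shell
# Rescan all HBAs for new devices and VMFS volumes.
esxcli storage core adapter rescan --all

# Confirm that the new XtremIO volumes are visible to the host.
esxcli storage core device list | grep -i xtremio
```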
Create datastores in vSphere 6.7 U3:
Figure 16. Creating a datastore
The following table provides the details of the datastore design in the virtual database environment:
Table 31. vSphere VM datastore design of XtremIO X2 storage volumes for the virtualized databases
| Datastore name | Datastore size (GB) | Purpose |
| C3-VM-OS | 600 | 1 datastore for the 2 guest operating systems (xorb-virt-1 and xorb-virt-2). Each guest operating system Virtual Machine Disk (VMDK) is 250 GB. |
| C3-OCR1 | 100 | 3 datastores for the cluster voting disks of the virtualized databases. Each VMDK is 48 GB. |
| C3-OCR2 | 100 | |
| C3-OCR3 | 100 | |
| C3-DATA1 | 600 | 4 datastores for data disks in virtualized databases. Each VMDK is 298 GB. |
| C3-DATA2 | 600 | |
| C3-DATA3 | 600 | |
| C3-DATA4 | 600 | |
| C3-REDO1 | 100 | 2 datastores for redo disks in virtualized databases. Each VMDK is 48 GB. |
| C3-REDO2 | 100 | |
| C3-FRA | 200 | 1 datastore for recovery disks in virtualized databases. Each VMDK is 98 GB. |
| C3-TEMP | 500 | 1 datastore for temp disks in virtualized databases. Each VMDK is 248 GB. |
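After you create the datastores, you can confirm from the ESXi shell that each one is mounted (a sketch; the C3- prefix follows the naming in Table 31):

```shell
# List mounted file systems; each C3-* datastore should appear as VMFS-6
# with the size shown in Table 31.
esxcli storage filesystem list | grep C3-
```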
Create the two database VMs:
Figure 17. Creating a VM
The following table shows the specific guest operating system and VM settings for both VMs. We left all values set to the default except the values that are specified in the table.
Table 32. VM settings for virtual databases
| VM setting | Value |
| Guest operating system | Red Hat Enterprise Linux 7.4 |
| Virtual CPU (vCPU) count | 18 |
| Virtual Memory (vMem) (GB) | 140 |
| vMem Reservation (GB) | 120 |
| OS Hard Disk 1 Size (GB) | 250 |
ESXi networking provides communication between VMs on the same host, between VMs on different hosts, and between VMs and other virtual and physical machines. It also supports ESXi host management and communication between VMkernel services and the physical network.
To set up networking:
Figure 18. Assigning an uplink to an adapter
i Select Select an existing network and click Browse to select the vMotion port group, as shown in the following figure:
Figure 19. Selecting the port group
ii Click OK, and then click Next.
iii At Port group properties, select vMotion traffic and click Next.
iv Select Use static IPv4 settings, provide the details, and then click Finish.
After adding cluster hosts to VDS, add network adapters to VMs:
The new network adapter appears at the bottom of the list.
Choose the DBPublic distributed port group that was created for the database public traffic.
Set up volumes and hard disks for VM1:
Figure 20. New SCSI controller settings
Table 33. SCSI controller and hard disk mappings in each virtual database VM
| Controller | Disk purpose | Number of disks |
| SCSI 0:0 | Guest operating system disk | 1 |
| SCSI 1:0 to SCSI 1:3 | Standalone database DATA1–4 disks | 4 |
| SCSI 2:0 to SCSI 2:1 | Standalone database REDO1–2 disks | 2 |
| SCSI 3:0 to SCSI 3:4 | Standalone database OCR1–3, FRA1, and TEMP1 disks | 5 |
| Total | | 12 |
Set up volumes and hard disks for VM2: