VMware ESXi host resources are pooled into clusters that contain CPU, memory, network, and storage resources for allocation to VMs. Clusters can scale up to a maximum of 64 hosts, enabling support for thousands of VMs.
VMware vSphere clusters support Cisco UCS B-Series blade servers and C-Series rack servers. VMware vSphere clusters are configured with VMware DRS and HA, with EVC mode enabled to ease expansions and migrations. On single-system configurations, management clusters with fewer than three hosts may not have VMware DRS or HA configured.
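As an illustration, cluster services can be enabled programmatically. The following is a minimal pyVmomi sketch, not the platform's deployment tooling; the vCenter address, credentials, cluster name, and EVC mode key are placeholders for your environment.

```python
# Minimal sketch: enable DRS, HA, and EVC on an existing cluster with pyVmomi.
# All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "PROD-CL01")
view.Destroy()

# Enable DRS (fully automated) and vSphere HA in one reconfigure task
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                        defaultVmBehavior="fullyAutomated"),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

# EVC is set through the cluster's EVC manager; the mode key depends on the CPUs
cluster.EvcManager().ConfigureEvcMode_Task("intel-broadwell")
```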
3-Tier Platforms configure a VMware VDS per cluster to increase clarity and establish discrete configurations for ease of management.
VMware VDS uses Class of Service (CoS) to increase the resiliency and performance of the virtual network.
VMware vSphere includes vSphere Lifecycle Manager (vLCM) for host patch management.
On each cluster, VMware vLCM implements a single cluster-image concept. Server firmware component and hardware compatibility checks are not included. To understand the capabilities and limitations of this feature, see About Managing Host and Cluster Lifecycle. For information about the scalability that vLCM supports, see the VMware Configuration Maximums Matrix.
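As an illustration, a cluster's desired image can be read through the vSphere Automation REST API. The following sketch assumes vSphere 8 endpoint conventions; the vCenter address, credentials, and cluster ID (domain-c8) are placeholders.

```python
# Minimal sketch: read the vLCM desired image for one cluster over REST.
import requests

VC = "https://vcenter.example.com"
s = requests.Session()
s.verify = False  # lab only; verify certificates in production

# POST /api/session returns a token used in the vmware-api-session-id header
token = s.post(f"{VC}/api/session",
               auth=("administrator@vsphere.local", "changeme")).json()
s.headers["vmware-api-session-id"] = token

# Desired image: base ESXi version, components, and vendor add-ons
image = s.get(f"{VC}/api/esx/settings/clusters/domain-c8/software").json()
print(image.get("base_image", {}).get("version"))
```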
VMware vLCM is not supported in environments that contain:
The following table shows VMware vLCM cluster image components:
Table 101. VMware vLCM cluster image components
| Component | Management | Workload |
| --- | --- | --- |
| VMware ESXi | VMware ESXi 8.0 or later | VMware ESXi 8.0 or later |
| Updated drivers | Cisco VIC FC NIC driver (nfnic), Cisco VIC Ethernet NIC driver (nenic), Intel native network driver (ixgben) | Cisco VIC FC NIC driver (nfnic), Cisco VIC Ethernet NIC driver (nenic), Intel native network driver (i40en) |
| Vendor add-ons | N/A | PowerPath/VE |
The 3-Tier Platform with VMware vSphere supports block-level storage using VMFS, or file-level storage using NFS. Data stores are provisioned on storage arrays that use SAN connectivity.
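As a sketch of file-level provisioning, an NFS export can be mounted as a datastore on a host with pyVmomi. The array address, export path, and datastore name below are placeholders.

```python
# Minimal sketch: mount an NFS export as a datastore on one ESXi host.
from pyVmomi import vim

def mount_nfs_datastore(host):  # host: a vim.HostSystem already retrieved
    spec = vim.host.NasVolume.Specification(
        remoteHost="nas01.example.com",   # NFS server address on the array
        remotePath="/exports/prod_ds01",  # export path
        localPath="PROD_NFS_DS01",        # datastore name as seen in vCenter
        accessMode="readWrite",
        type="NFS")                       # NFSv3; use "NFS41" for NFSv4.1
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```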
The 3-Tier Platform with VMware vSphere supports VMware vSphere vVols for VMs running production workloads.
vVols simplify operations through policy-driven automation that allows:
The following storage arrays support vVols on the FC protocol:
The vVols reside in storage containers that logically represent a pool of raw storage capacity on the storage array. On the vCenter Server and ESXi side, storage containers are presented as vVol data stores.
Before creating vVol data stores from storage containers, ensure that the storage array VASA provider is registered to the vCenter server. The Unity XT storage array is automatically registered in vSphere as a VASA provider when the corresponding vCenter and ESXi hosts are granted access to the system in Unisphere.
The VMware ESXi host does not have direct access to the vVol storage. Instead, the host accesses the vVols through an intermediate point in the data path, called a protocol endpoint. The protocol endpoints establish a data path on demand from the VMs to their respective vVols. VMware ESXi hosts must be zoned to the array, like traditional LUNs, for access to the protocol endpoints. Ensure that the vCenter instance that manages the vVol host cluster has VM storage policies that match the service levels that are present in the storage data store and capability profiles.
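The datastore types can be confirmed programmatically. The following sketch, assuming the cluster object from the earlier example, lists the vVol data stores visible to a cluster. Protocol endpoints themselves can be verified per host with esxcli storage vvol protocolendpoint list.

```python
# Minimal sketch: list the vVol data stores visible to a cluster.
from pyVmomi import vim

def vvol_datastores(cluster):  # cluster: vim.ClusterComputeResource
    # summary.type distinguishes VMFS, NFS, and VVOL data stores
    return [ds for ds in cluster.datastore if ds.summary.type == "VVOL"]

for ds in vvol_datastores(cluster):
    print(ds.name, ds.summary.capacity)
```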
Each production ESXi host cluster has separate storage containers provisioned. At least one storage container is provisioned for each ESXi host cluster for the VMware vSphere Cluster Services (vCLS) VMs.
Adhere to the following best practices:
When using VMware as the hypervisor, you can achieve bandwidth prioritization for different traffic classes using the vSphere Distributed Switch (VDS). These traffic classes include host management, vSphere vMotion, and VM network traffic. The VDS, which you can configure, manage, and monitor from a central interface, provides:
The following diagram shows the VDS configuration. Dual-port FC host bus adapters are used to connect to shared storage:
Figure 21. VDS configuration
VMware VDS provides virtual networking using a minimum of four uplinks that are presented to VMware ESXi.
vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth. Equal distribution provides consistency and balance across all Cisco UCS B-Series blade servers and C-Series rack servers, regardless of the VIC hardware.
VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the vNICs for the migration of uplinks to the VMware VDS.
NIOC is used on each management domain VDS for the specific port groups to optimize the traffic flow for priority data. QoS is used on each workload domain VDS with specific port groups to increase the resiliency and performance of the virtual network.
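As a brief sketch, NIOC is switched on per VDS. The dvs variable is assumed to reference the management-domain switch (vim.dvs.VmwareDistributedVirtualSwitch), retrieved as in the earlier cluster example.

```python
# Minimal sketch: enable Network I/O Control on a management-domain VDS.
dvs.EnableNetworkResourceManagement(enable=True)
# Per-class shares (management, vMotion, and so on) are then set through the
# switch ConfigSpec; the values depend on the platform design.
```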
The following table shows the VDS01 and VDS02 port group attributes for the management domain:
Table 102. VDS01 and VDS02 port group attributes for the management domain
| Port group | VDS number | MTU | Physical adapter | Teaming and failover | Load balancing |
| --- | --- | --- | --- | --- | --- |
| Management | 01 | 1500 | Uplink1 (VMNIC 0) | Active | Originating port ID |
| | | | Uplink2 (VMNIC 1) | Active | |
| vMotion | 01 | 9000 | Uplink1 (VMNIC 0) | Active | Explicit failover |
| | | | Uplink2 (VMNIC 1) | Standby | |
| FT | 01 | 9000 | Uplink1 (VMNIC 0) | Standby | Originating port ID |
| | | | Uplink2 (VMNIC 1) | Active | |
| iSCSI (A) | 02 | 9000 | Uplink1 (VMNIC 4) | Active | Originating port ID |
| | | | Uplink2 (VMNIC 5) | Unused | |
| iSCSI (B) | 02 | 9000 | Uplink1 (VMNIC 4) | Unused | Originating port ID |
| | | | Uplink2 (VMNIC 5) | Active | |
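As an illustration, the vMotion row in Table 102 (explicit failover, Uplink1 active, Uplink2 standby) could be realized with pyVmomi as follows. The dvs object, VLAN ID, and port count are placeholders, and the uplink names must match the switch's uplink port names.

```python
# Minimal sketch: create the vMotion port group with explicit failover order.
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value="failover_explicit"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=["Uplink1"],     # VMNIC 0
        standbyUplinkPort=["Uplink2"]))   # VMNIC 1

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="vMotion",
    type="earlyBinding",
    numPorts=16,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            inherited=False, vlanId=1120),  # sample vMotion VLAN
        uplinkTeamingPolicy=teaming))

dvs.AddDVPortgroup_Task([pg_spec])  # returns a vCenter task
```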
The following table shows the VDS01 port group attributes for the workload domain:
Table 103. VDS01 port group attributes for the workload domain
| Port group | MTU | Physical adapter | Teaming and failover | VMware CoS | Load balancing |
| --- | --- | --- | --- | --- | --- |
| Management | 1500 | Uplink1 (VMNIC 0) | Active | Platinum (CoS 6, DSCP 48) | Originating port ID |
| | | Uplink2 (VMNIC 1) | Active | | |
| | | Uplink3 (VMNIC 2) | Active | | |
| | | Uplink4 (VMNIC 3) | Active | | |
| vMotion | 9000 | Uplink1 (VMNIC 0) | Active | Gold (CoS 4, DSCP 26) | Explicit failover |
| | | Uplink2 (VMNIC 1) | Standby | | |
| | | Uplink3 (VMNIC 2) | Active | | |
| | | Uplink4 (VMNIC 3) | Standby | | |
| NFS | 9000 | Uplink1 (VMNIC 0) | Active | Silver (CoS 2, DSCP 16) | Originating port ID |
| | | Uplink2 (VMNIC 1) | Active | | |
| | | Uplink3 (VMNIC 2) | Active | | |
| | | Uplink4 (VMNIC 3) | Active | | |
| FT | 9000 | Uplink1 (VMNIC 0) | Standby | Best effort | Originating port ID |
| | | Uplink2 (VMNIC 1) | Active | | |
| | | Uplink3 (VMNIC 2) | Active | | |
| | | Uplink4 (VMNIC 3) | Standby | | |
| BRS data | 9000 | Uplink1 (VMNIC 0) | Active | N/A | Originating port ID |
| | | Uplink2 (VMNIC 1) | Active | | |
| | | Uplink3 (VMNIC 2) | Active | | |
| | | Uplink4 (VMNIC 3) | Active | | |
VMware vSphere uses unique names for VLANs and port groups. Sample values are provided for the management solution and the 3-Tier Platform. Work with your Dell Technologies Sales representative to customize naming requirements. Unique naming is essential for autonomous computing, containers, and the use of APIs and custom scripting to manage virtual infrastructure.
The VLAN names and port group names meet the requirements to deploy the management domain.
Distributed port group names contain specific information to determine the function and location of the port group. The port group names contain:
The switch and Cisco UCSM VLAN IDs are examples of values that use the same methodology as the port group names, without VMware object information. The VLAN IDs and names contain:
VMware object information that is not required in the physical infrastructure is not shown.
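As an illustration of why consistent naming matters for scripting, the following is a purely hypothetical naming helper. The components and separator are examples only, not the platform's defined convention.

```python
# Hypothetical sketch of a port-group naming helper; the convention shown
# here is illustrative, not the one defined during platform design.
def port_group_name(region: str, function: str, vlan_id: int) -> str:
    """Build a deterministic port group name, for example 'sfo01-mgmt-vlan1611'."""
    return f"{region}-{function}-vlan{vlan_id}"

print(port_group_name("sfo01", "mgmt", 1611))
```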
The following figure shows the management port group that is connected to the VMware VDS in San Francisco (Region A):
Figure 23. Management port group connected to the VMware VDS in Region A
The following figure shows the management port group that is connected to the VMware VDS in Los Angeles (Region B):
Figure 24. Management port group connected to the VMware VDS in Region B
The following figure shows the VI workload domain vSphere vMotion port group that is connected to the VMware VDS in San Francisco (Region A):
Figure 25. VI workload domain vMotion port group connected to the VMware VDS in Region A
The following example shows the Region B (Los Angeles) data center:
Figure 26. Region B data center
The following example shows the Region A (San Francisco) data center:
Figure 27. Region A data center
The following example shows the Site B (Los Angeles) data center:
Figure 28. Site B data center