Dynamic cluster overview
Dynamic clusters differ from other VxRail cluster types in the resource selected for primary storage. Other cluster types depend on the local vSAN datastore as the primary storage resource. With a dynamic cluster, the nodes used to build the cluster do not have local disk drives, so an external storage resource is required to support workloads and applications.
A dynamic cluster may be preferable to other cluster types in these situations:
- You already have an investment in compatible external storage resources in your data centers that can serve as primary storage for a dynamic cluster.
- The business and operational requirements for the applications targeted for the VxRail cluster can be better served with existing storage resources.
- Stranded assets resulting from node expansion are less likely with a dynamic cluster.
If a dynamic cluster is the best fit for your business and operational requirements:
- Verify that the storage resource in the data center you plan to use for VxRail dynamic clusters is supported. See the VxRail 8.x Support Matrix to verify dynamic node compatibility.
- The target data center for the VxRail dynamic cluster must already have deployed one of the supported options for primary storage.
- We recommend using the technical documentation provided for the selected external storage to prepare the storage for a VxRail dynamic cluster.
- Any performance issues that are diagnosed at the storage level may be related to infrastructure outside of VxRail and must be managed separately.
- A dynamic cluster does not have the same level of visibility into, and control over, an external storage resource as it does over a local vSAN datastore.
One of the external storage resources supported with dynamic clusters is Fibre Channel storage. With this option, a compatible Fibre Channel storage array can be configured to supply a single VMFS datastore or multiple datastores to a VxRail dynamic cluster.
Figure 21. Dynamic cluster using FC-connected VMFS for primary storage
This option has the following prerequisites:
- Verify that the storage array in your data center that is planned to support the dynamic cluster is supported with VxRail. Consult the VxRail E-Lab Navigator to verify compatibility.
- Verify you have sufficient free capacity on the storage array. A VxRail dynamic cluster requires a VMFS device with a minimum of 800 GB to support workload and applications.
- At the time of ordering, include enough compatible Fibre Channel adapter cards. VxRail recommends a minimum of two Fibre Channel adapter cards per node for redundancy, although a single dual-port adapter card can be used.
- Verify that you have sufficient open ports on your Fibre Channel switches to accommodate the connections required from each VxRail node.
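As an illustration of the FC workflow, the following ESXi shell commands sketch how an FC-attached LUN might be formatted as a VMFS datastore after zoning and masking are complete. The device identifier, partition number, and datastore name are hypothetical, and a partition is assumed to already exist on the LUN:

```shell
# List the FC adapters and attached devices visible to this ESXi host.
esxcli storage core adapter list
esxcli storage core device list | grep -i fibre

# Format the FC LUN as a VMFS6 datastore. The naa. identifier below is
# a placeholder -- substitute the device ID reported for your LUN.
vmkfstools --createfs vmfs6 -S DynamicClusterDS \
    /vmfs/devices/disks/naa.60000970000197700123533030334542:1
```

In practice this formatting step is usually performed once from vCenter or a single host, after which the datastore is visible to all nodes zoned to the same LUN.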
Another option for primary storage with dynamic clusters is an existing vSphere or VxRail cluster with a local vSAN datastore in your data center. The virtual machines running on the dynamic cluster use the free storage capacity on the remote vSAN datastore and the compute resources of the dynamic cluster nodes.
Figure 22. Dynamic cluster using a remote vSAN datastore for primary storage
If you choose to pursue this option, ensure that the following guidelines are understood:
- Verify that the cluster being targeted to supply storage resources to the dynamic cluster is at a supported VxRail version. See the VxRail Support Matrix to verify whether an upgrade is needed on this cluster.
- Both vSAN Original Storage Architecture (OSA) and vSAN Express Storage Architecture (ESA) support the sharing of vSAN datastores, but sharing two different vSAN architectures across VxRail clusters is not supported. Both clusters must be running the same vSAN architecture type.
- If you already have a cluster that is sharing its vSAN datastore to other clusters, ensure that you have not reached the maximum of five clusters already mounted to this vSAN datastore.
- The physical Ethernet network in your data center must support connectivity between the cluster nodes of both clusters.
- If Layer 3 connectivity is required, routing settings such as static routes or BGP must be configured.
- The RTT latency between the cluster sharing the vSAN datastore and the dynamic cluster nodes must be less than 5 milliseconds.
- Routable IP addresses must be used on the VMkernel adapters supporting vSAN on both the cluster nodes sharing the vSAN datastore and dynamic cluster nodes.
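The routing and RTT requirements above can be spot-checked from the ESXi shell of a dynamic cluster node with vmkping. The VMkernel interface name and target IP address below are examples only:

```shell
# Verify routed reachability and round-trip latency from the vSAN
# VMkernel adapter to a vSAN VMkernel address on the serving cluster.
vmkping -I vmk2 192.168.50.10

# If the vSAN network uses an MTU of 9000, repeat with a jumbo-sized
# payload and the don't-fragment flag to confirm end-to-end support.
vmkping -I vmk2 -s 8972 -d 192.168.50.10
```

A reported average RTT under 5 ms satisfies the latency requirement stated above; a failure of the second command typically indicates an MTU mismatch along the path.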
PowerFlex is a Dell storage product that is supported with VxRail dynamic clusters. PowerFlex systems provide IP-based storage to VxRail dynamic clusters that can be configured as primary storage for virtual machine workloads.
Figure 23. VxRail dynamic cluster storage provided by PowerFlex virtual volume
The PowerFlex system configures pools of storage through a virtualization process, and manages the allocation of virtual volumes to connected clients. Virtual volumes can be configured to meet certain capacity, performance, and scalability characteristics to align with the workload requirements planned for the VxRail dynamic cluster.
The PowerFlex architecture combines both the compute and storage in a fabric-connected network architecture, with Dell PowerEdge servers serving as the hardware foundation for block storage capacity.
If you plan to leverage virtual volumes provided by a PowerFlex system as the primary storage for your VxRail dynamic cluster, follow these best practices to ensure a successful deployment:
- Follow the guidance in the Dell PowerFlex Networking Best Practices and Considerations to ensure the supporting network infrastructure is properly planned and configured.
- Follow the steps in How to configure Dell PowerFlex Storage with VxRail Dynamic Nodes if you are unfamiliar with provisioning PowerFlex storage for this purpose.
- Reserve two Ethernet ports on each VxRail node planned for the dynamic cluster to support connectivity to the PowerFlex volumes serving as primary storage.
- Enable jumbo frames on the network configured to support VxRail dynamic cluster storage.
- After the VxRail dynamic cluster is built, plan to configure a separate virtual distributed switch with new port groups in vCenter to support connectivity to the PowerFlex front-end system.
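As a sketch of the jumbo-frame guidance above, the following ESXi shell commands show how the MTU on a storage VMkernel adapter might be checked and set. The interface name and PowerFlex data IP address are illustrative; the distributed switch and port groups themselves are created in vCenter, and jumbo frames must also be enabled end to end on the physical network:

```shell
# Review the current VMkernel adapters and their MTU values.
esxcli network ip interface list

# Set an MTU of 9000 on the VMkernel adapter reserved for PowerFlex
# storage traffic (vmk3 is an example name).
esxcli network ip interface set -i vmk3 -m 9000

# Confirm that jumbo frames pass to a PowerFlex data IP (example address).
vmkping -I vmk3 -s 8972 -d 192.168.150.20
```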
If the data center does not support Fibre Channel storage or shared vSAN resources, and does not have a PowerFlex storage array deployed, storage over an IP network is supported for VxRail.
Figure 24. IP-based external storage supporting VxRail dynamic clusters
The supported storage resources can be either block-based or file-based. With the block-level storage option, the LUN is presented to the VxRail cluster nodes over an IP network. vSphere is then used to configure a VMFS datastore from the LUN.
iSCSI is supported with VxRail and is a standard feature in VMware vSphere. iSCSI is enabled by configuring a software adapter on a NIC on the VxRail nodes. The adapter then serves as an iSCSI initiator, connecting to targets on external storage arrays that present LUNs back to the initiator.
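The software iSCSI setup can be sketched with standard esxcli commands on an ESXi host. The adapter name and target portal address below are examples; in a VxRail deployment, confirm the networking design before binding storage traffic to a NIC:

```shell
# Enable the vSphere software iSCSI adapter on the host.
esxcli iscsi software set --enabled=true

# Identify the software iSCSI adapter name (for example, vmhba64).
esxcli iscsi adapter list

# Add a dynamic discovery (SendTargets) portal on the storage array,
# then rescan so discovered LUNs become visible to the host.
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.60.5:3260
esxcli storage core adapter rescan --adapter vmhba64
```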
The NFS option also works over an IP network, except the storage presented back to the VxRail cluster is from a compatible file server, and the storage format is file-based instead of block-based. With this option, the external file system is mounted by the VxRail nodes to enable access over the IP network and configured to serve as a datastore.
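Mounting an NFS export as a datastore can likewise be sketched from the ESXi shell. The file server name, export path, and datastore name are examples, and in most deployments the mount is performed across all nodes from vCenter rather than per host:

```shell
# Mount an NFS export as a datastore on this ESXi host.
esxcli storage nfs add --host nfs01.example.local \
    --share /exports/vxrail-ds --volume-name DynamicNFS01

# Verify that the datastore is mounted.
esxcli storage nfs list
```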
Leveraging NVMe (Non-Volatile Memory Express) is supported with VxRail for connecting to block-based storage. NVMe can support local storage devices over PCIe, and storage devices over an FC network or an IP network. NVMe can serve as an alternative to FC or iSCSI storage for demanding workloads, as it is designed for use with faster storage devices enabled with nonvolatile memory.
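For the IP-network case, connecting a host to an NVMe over TCP subsystem can be sketched with the esxcli nvme namespace. The adapter name, controller IP address, port, and subsystem NQN below are placeholders for illustration:

```shell
# List the NVMe-capable adapters on the host.
esxcli nvme adapter list

# Discover subsystems exposed by the array's NVMe/TCP controller,
# then connect to one of them (all values below are examples).
esxcli nvme fabrics discover -a vmhba65 -i 192.168.70.8 -p 4420
esxcli nvme fabrics connect -a vmhba65 -i 192.168.70.8 -p 4420 \
    -s nqn.2014-08.org.example:subsystem-01
```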
Figure 25. NVMe software stack in vSphere
With all the storage options for dynamic clusters, verify that the storage resource in the data center you plan to use for VxRail dynamic clusters is supported. See the VxRail E-Lab Navigator to verify compatibility.