The VMware vSAN 2-node cluster is designed for environments where a minimal configuration is a key requirement.
VxRail v4.7.100 was the first release to support the vSAN 2-node cluster with direct-connect configuration. Starting with VxRail v4.7.410, vSAN 2-node cluster with switch configuration is also supported.
Note: A minimum of four ports is required for both configurations.
This guide provides information for planning a vSAN 2-node cluster infrastructure on the VxRail platform. It focuses on the VxRail implementation of the vSAN 2-node cluster, including minimum requirements and recommendations.
For detailed information about VMware vSAN 2-node cluster architecture and concepts, see the VMware vSAN 2-Node Guide.
A VMware vSAN 2-node cluster on VxRail consists of a cluster with two VxRail nodes, and a Witness host deployed as a virtual appliance. The VxRail cluster is deployed and managed by VxRail Manager and VMware vCenter Server.
A vSAN 2-node configuration is very similar to a stretched-cluster configuration. In the event of a failure, the Witness host provides quorum for the two data nodes. As in a stretched-cluster configuration, the requirement of one Witness per cluster still applies.
Unlike a stretched cluster, the vCenter Server and the Witness host are typically located in a main data center, as shown in Figure 1, while the two vSAN data nodes are typically deployed in a remote location. However, the Witness host can also be deployed at the same site as the data nodes. The most common deployment for multiple 2-node clusters is to host multiple Witnesses in the same management cluster as the vCenter Server. This deployment optimizes the infrastructure cost by sharing the vSphere licenses and the management hosts.
This design is made possible by the low bandwidth required for communication between the data nodes and the Witness.
A vSAN 2-node configuration maintains the same high-availability characteristics as a regular vSAN cluster. Each physical node is configured as a vSAN fault domain, which means that a virtual machine can have one copy of its data in each fault domain. If a node or a device fails, the virtual machine remains accessible through the alternate replica and the Witness components.
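As an illustration only, the following minimal pyVmomi sketch lists the fault domain reported by each host in a cluster, which in a 2-node configuration should show one fault domain per data node. The vCenter address, credentials, and cluster name are placeholders, and the exact property paths can vary by vSphere release.

```python
# Illustrative sketch only: list the vSAN fault domain reported by each host
# in a cluster using pyVmomi. The hostname, credentials, and cluster name
# below are placeholders for this example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VC_HOST = "vcenter.example.local"        # placeholder vCenter FQDN
CLUSTER_NAME = "vxrail-2node-cluster"    # placeholder cluster name

ctx = ssl._create_unverified_context()   # lab-only: skips certificate checks
si = SmartConnect(host=VC_HOST, user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER_NAME)
    for host in cluster.host:
        # Each data node in a 2-node cluster should report its own fault domain.
        fd = host.config.vsanHostConfig.faultDomainInfo
        print(f"{host.name}: fault domain = {fd.name if fd and fd.name else '(none)'}")
    view.Destroy()
finally:
    Disconnect(si)
```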
When the failed node is restored, the Distributed Resource Scheduler (DRS) automatically rebalances the virtual machines between the two nodes. DRS is not required, but it is highly recommended; it requires a vSphere Enterprise edition license or higher.
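The DRS setting can also be verified or enabled programmatically. The hedged sketch below assumes a cluster object obtained in the same way as in the previous example and enables DRS in fully automated mode; it is an illustration, not a VxRail-specific procedure.

```python
# Illustrative sketch only: check and enable DRS (fully automated) on a
# ClusterComputeResource object obtained as in the previous example.
from pyVmomi import vim

def ensure_drs_enabled(cluster):
    """Enable DRS in fully automated mode if it is not already enabled."""
    drs = cluster.configurationEx.drsConfig
    if drs.enabled and drs.defaultVmBehavior == vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated:
        print(f"DRS is already fully automated on {cluster.name}")
        return None
    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))
    # modify=True merges this spec with the existing cluster configuration.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```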