A VxRail cluster depends on adjacent Ethernet switches, commonly called “top-of-rack” switches, to support cluster operations. VxRail is broadly compatible with most Ethernet switches that are available on the market. For best results, select a switch platform that meets the operational and performance criteria for your planned use cases.
The VxRail product does not have a backplane, so the adjacent “top-of-rack” switch enables all connectivity between the nodes that make up a VxRail cluster. All the networks (management, storage, virtual machine movement, guest networks) configured within the VxRail cluster depend on the “top-of-rack” switches for this connectivity.
The network traffic that is configured in a VxRail cluster is Layer 2. VxRail integrates efficiently with the physical “top-of-rack” switches by assigning virtual LANs (VLANs) to the individual VxRail Layer 2 networks in the cluster. This functionality eases network administration and integration with the upstream network.
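To make the VLAN segmentation concrete, the short Python sketch below builds the 4-byte 802.1Q header that carries a 12-bit VLAN ID, which is the tag the “top-of-rack” switch uses to keep each Layer 2 network separate. The VLAN IDs and network names shown are arbitrary placeholders for illustration, not VxRail defaults or recommended values.

# Illustrative sketch only: builds an 802.1Q VLAN tag to show how a 12-bit
# VLAN ID segments Layer 2 traffic on the top-of-rack switch.
import struct

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Return the 4-byte 802.1Q tag: TPID 0x8100 followed by the PCP/DEI/VID fields."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | vlan_id          # PCP in the top 3 bits, DEI left at 0
    return struct.pack("!HH", 0x8100, tci)    # network byte order

# One VLAN per VxRail Layer 2 network (example values only).
for network, vid in [("management", 110), ("storage", 120), ("vMotion", 130), ("guest", 140)]:
    print(f"{network:<10} VLAN {vid}: tag bytes {dot1q_tag(vid).hex()}")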
The VxRail product has two separate and distinct management networks. One management network extends externally to connect to IT administration and external data center services. The second management network is isolated, visible only to the VxRail nodes.
Figure 4. VxRail management networks
The network that is visible only to the VxRail nodes depends on IPv6 multicast services configured on the adjacent “top-of-rack” switches for node discovery. One node is automatically designated as the primary node; it acts as the source and listens for packets from the other nodes using multicast. A VLAN assignment on this network limits the multicast traffic to only the interfaces connected to this internal management network.
Multicast Listener Discovery (MLD) snooping and querier, a common Ethernet switch feature, is designed to further constrain the flooding of multicast traffic by examining MLD messages and then forwarding multicast traffic only to interested interfaces. The traffic on this node discovery network is already constrained through the configuration of this VLAN on the ports supporting the VxRail cluster, so enabling this feature may provide some incremental efficiency benefit but does not negatively impact network efficiency.
Note: If your data center networking policy restricts the IPv6 multicast protocol, you can manually assign IP addresses to the VxRail nodes as an alternative to automatic discovery.
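As a rough illustration of how multicast-based discovery behaves on this isolated VLAN, the Python sketch below joins an IPv6 multicast group on one interface and prints any datagrams it receives. The multicast group, UDP port, and interface name are hypothetical placeholders, not the actual parameters that VxRail uses for node discovery.

# Illustrative IPv6 multicast listener (not the VxRail discovery protocol itself).
# The group address, port, and interface name below are hypothetical placeholders.
import socket
import struct

MCAST_GROUP = "ff02::1:3"                  # placeholder link-local multicast group
MCAST_PORT = 5355                          # placeholder UDP port
IFINDEX = socket.if_nametoindex("eth0")    # interface attached to the internal management VLAN

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", MCAST_PORT))

# Joining the group sends an MLD report; a switch with MLD snooping enabled uses
# such reports to forward group traffic only to interested ports.
mreq = socket.inet_pton(socket.AF_INET6, MCAST_GROUP) + struct.pack("@I", IFINDEX)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, addr = sock.recvfrom(1500)
    print(f"received {len(data)} bytes from {addr[0]}")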
The Ethernet switch does not have to support Layer 3 services or be licensed for them. You can enable routing services further upstream in the network infrastructure or at this “top-of-rack” switch.
A VxRail cluster can be deployed in a “flat” network using the default VLAN on the switch. It can also be configured so that the management, storage, and guest networks are segmented by virtual LANs for efficient operations. For best results, especially in a production environment, deploy only managed switches and use VLANs. A VxRail cluster that is built on a “flat” network should be considered only for test cases or for temporary usage.
In certain instances, additional switch features and functionality are necessary to support specific use cases or requirements.
Depending on the interoperability requirements for different types of storage resources, other switch features may also need to be considered when selecting Ethernet switches for your VxRail cluster.
Decide if you plan to use one or two switches for the VxRail cluster. One switch is acceptable and is often used in test and development environments. Two or more switches are required to support sustained performance, high availability, and failover in production environments.
VxRail is a software-defined data center that depends on the physical top-of-rack switching for network communications, and it is engineered to enable full redundancy and failure protection across the cluster. For customer environments that require protection from a single point of failure, the adjacent network supporting the VxRail cluster must also be designed and configured to eliminate any single point of failure. A minimum of two switches should be deployed to support high availability and balance the workload on the VxRail cluster, and they should be linked with a pair of cables to support the flow of Layer 2 traffic between the switches.
Consider link aggregation to enable load-balancing and failure protection at the port level. NIC teaming, which pairs a set of physical ports into a logical port for this purpose, is supported in VxRail versions 7.0.130 and later. These logical port pairings can peer with a pair of ports on the adjacent switches to enable the load-balancing of demanding VxRail networks.
Figure 5. Multichassis link aggregation across two switches
For network-intense workloads that require high availability, consider switches that support multichassis link aggregation, such as Cisco Virtual Port Channel or Dell VLT Port Channel. This feature can be used to enable load-balancing from the VxRail cluster across a logical switch port that is configured between the two linked switches.
Support for the Link Aggregation Control Protocol (LACP) at the cluster level was also introduced in VxRail version 7.0.130. The switches supporting the VxRail cluster should support LACP for better manageability and broader load-balancing options.
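To illustrate the load-balancing behavior that link aggregation provides, the Python sketch below hashes a flow's addresses and ports to choose one member link, so all packets of a given flow stay on the same physical port while different flows spread across the team. This is a generic example of hash-based link selection, not the specific algorithm used by vSphere or by Cisco or Dell switches; the uplink names and addresses are placeholders.

# Generic illustration of hash-based member-link selection in a link aggregation
# group (LAG). Not the actual vSphere or switch hashing algorithm; the uplink
# names and addresses are placeholders.
import zlib

def select_uplink(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  uplinks: list[str]) -> str:
    """Map a flow to one member link so its packets stay in order on a single port."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

# Two physical ports teamed into one logical port, peering with a port channel
# on the adjacent switches (example values only).
uplinks = ["vmnic0", "vmnic1"]
print(select_uplink("192.168.10.11", "192.168.10.12", 49152, 2049, uplinks))
print(select_uplink("192.168.10.11", "192.168.10.13", 49153, 2049, uplinks))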