Step 1. Assess your requirements and perform a sizing exercise to determine the quantity and characteristics of the VxRail nodes needed to support your planned workloads and targeted use cases.
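The arithmetic behind a sizing exercise can be sketched in a few lines. All figures below (per-node capacity, vCPU ratio, N+1 headroom) are illustrative assumptions, not Dell sizing rules; use the official VxRail sizing tools for an actual exercise.

```python
import math

# Assumed aggregate workload requirements (hypothetical figures).
required = {"vcpus": 320, "ram_gb": 2048, "usable_tb": 60}

# Assumed usable capacity per node after vSAN overhead, with a 4:1
# vCPU-to-core overcommit ratio on a 64-core node.
per_node = {"vcpus": 64 * 4, "ram_gb": 512, "usable_tb": 12}

# Size to the most constrained dimension, then add one node of N+1 headroom.
nodes = max(math.ceil(required[k] / per_node[k]) for k in required) + 1
print(f"Plan for at least {nodes} VxRail nodes")
```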
Step 2. Determine the number of physical racks needed to house the VxRail nodes and top-of-rack switches required to meet workload requirements. Verify that the data center has sufficient floor space, power, and cooling.
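As a rough illustration of the rack and power check, the sketch below assumes 2U nodes, 42U racks with two 1U top-of-rack switches, and hypothetical power figures; substitute the actual specifications for your node and switch models.

```python
import math

nodes = 6                         # node count from the sizing exercise
node_ru, node_watts = 2, 1100     # assumed rack units and peak draw per node
switch_ru, switch_watts = 1, 350  # assumed per-switch figures, two ToRs per rack
rack_ru = 42

nodes_per_rack = (rack_ru - 2 * switch_ru) // node_ru
racks = math.ceil(nodes / nodes_per_rack)
peak_watts = nodes_per_rack * node_watts + 2 * switch_watts
print(f"{racks} rack(s); budget ~{peak_watts} W per fully populated rack")
```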
Step 3. Determine the network switch topology that aligns with your business and operational requirements. See the sample wiring diagrams in Appendix F: Physical Network Switch Examples for guidance on the options supported for VxRail cluster operations.
Step 4. Based on the sizing exercise, determine the number of Ethernet ports on each VxRail node you want to reserve for VxRail networking.
- Two ports might be sufficient in cases where the resource consumption on the cluster is low and will not exceed available bandwidth.
- Workloads with a high resource requirement or with a high potential for growth will benefit from a 4-port deployment. Resource-intensive networks, such as the vSAN and vMotion networks, benefit from the 4-port option because two ports can be reserved just for those demanding networks.
- The 4-port option is required to enable link aggregation of demanding networks for the purposes of load balancing. In this case, the two ports that are reserved exclusively for the resource-intensive networks (vSAN and possibly vMotion) are configured into a logical channel to enable load balancing.
The VxRail cluster must be at version 7.0.130 or later to support link aggregation.
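A rough way to frame the two-port versus four-port decision is to compare estimated peak traffic against the capacity a single port must carry if its partner fails. The bandwidth estimates below are hypothetical; base a real decision on measured or projected traffic for each VxRail network.

```python
port_gbps = 25  # assumed port speed
est_peak_gbps = {"vsan": 18, "vmotion": 8, "mgmt": 1, "workload": 10}

total = sum(est_peak_gbps.values())
# With two ports (active/standby), one port must be able to carry everything.
# Otherwise plan four ports, so vSAN (and optionally vMotion) get a dedicated
# pair that can also be link-aggregated on VxRail 7.0.130 or later.
ports = 2 if total <= port_gbps else 4
print(f"Estimated peak {total} Gbps -> plan a {ports}-port deployment")
```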
Step 5. Determine the optimal VxRail adapter and Ethernet port types to meet planned workload and availability requirements.
- VxRail supports 1 GbE, 10 GbE, and 25 GbE connectivity options to build the initial cluster.
- Starting with VxRail version 7.0.130, you have flexibility in the selection of Ethernet adapter types. You can:
  - Reserve and use only ports on the NDC for VxRail cluster networking.
  - Reserve and use both NDC-based and PCIe-based ports for VxRail cluster networking.
  - Reserve and use only PCIe-based ports for VxRail cluster networking.
- If your performance and availability requirements might change later, you can reserve and use just NDC ports to build the initial cluster, and then migrate certain VxRail networks to PCIe-based ports.
The VxRail cluster must be at version 7.0.010 or later to migrate VxRail networks to PCIe-based ports.
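The version constraints in this step can be captured in a small validation sketch. The plan structure and port list below are hypothetical; the version thresholds are the ones stated in this guide.

```python
vxrail_version = (7, 0, 130)   # deployed VxRail version (hypothetical)
reserved = [("ndc", 25), ("ndc", 25), ("pcie", 25), ("pcie", 25)]

sources = {adapter for adapter, _ in reserved}
if sources == {"ndc"}:
    print("NDC-only reservation: supported on all VxRail versions")
elif vxrail_version >= (7, 0, 130):
    print("Mixed NDC/PCIe or PCIe-only reservation: supported (7.0.130+)")
else:
    print("Not supported: mixed or PCIe-only requires VxRail 7.0.130 or later")
```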
Step 6. Decide whether to attach the VxRail nodes to the switches with RJ-45, SFP+, or SFP28 connections.
- VxRail nodes with RJ-45 ports require CAT5 or CAT6 cables. CAT6 cables are included with every VxRail.
- VxRail nodes with SFP+ ports require optics modules (transceivers) and optical cables, or Twinax Direct-Attach Copper (DAC) cables. These cables and optics are not included; you must supply your own. The optics on the NIC and switch ends must operate at the same wavelength.
- VxRail nodes with SFP28 ports require high thermal optics for ports on the NDC. Optics that are rated for standard thermal specifications can be used on the expansion PCIe network ports supporting SFP28 connectivity.
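The cabling rules in this step reduce to a simple lookup, summarized in the sketch below; treat it as a planning aid, since optic and cable part selection remains site-specific.

```python
# Connector-to-cabling rules as described in Step 6.
cabling = {
    "RJ-45": "CAT5 or CAT6 copper (CAT6 cables included with every VxRail)",
    "SFP+":  "Optics + optical cable (matching wavelengths) or Twinax DAC",
    "SFP28": "High-thermal optics on NDC ports; standard-thermal optics "
             "acceptable on PCIe expansion ports",
}
for connector, rule in cabling.items():
    print(f"{connector:6} -> {rule}")
```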
Step 7. Determine the additional ports and port speeds needed on the switches for uplinks to your core network infrastructure and for inter-switch links in dual-switch topologies. Select a switch or switches that provide sufficient port capacity and characteristics.
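A per-switch port budget for a dual-switch topology can be tallied as follows. The counts are assumptions for illustration, including the single management access port reserved in the next step.

```python
nodes, ports_per_node = 6, 4         # from Steps 1 and 4
uplinks, isl, mgmt_access = 2, 2, 1  # assumed uplink, inter-switch, access ports

# Node connections are split evenly across the two top-of-rack switches.
node_ports_per_switch = nodes * ports_per_node // 2
needed = node_ports_per_switch + uplinks + isl + mgmt_access
print(f"Each top-of-rack switch needs at least {needed} ports")
```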
Step 8. Reserve one additional port on the switch for a workstation or laptop to access the VxRail management interface for the cluster.
- The additional port for management access is not required if connectivity is already available elsewhere on the logical path from a jump host on the VxRail external management VLAN.
- Decide whether to deploy a separate switch to support connectivity to the VxRail node management port (iDRAC) on each node. Dell iDRAC supports 1 GbE connectivity. Dell Technologies recommends deploying a dedicated 1 GbE switch for this purpose. In certain cases, you can also use open ports on the top-of-rack switches.