PowerSwitch networking provides flexible, powerful top-of-rack (ToR) switches for data centers of all sizes. These switches are built to support the modern workloads and applications of the open networking era, delivering low latency, high performance, and high density with hardware and software redundancy.
This validated design uses the S5248F-ON and S3100 family, although other switch models can be used.
Design principles
Dell PowerSwitch networking products are designed for ease of use and to enable resilient network creation. Dell Technologies recommends designs that apply the following principles:
Meet the network capacity and separation requirements of container pods.
Configure dual-homing to two Virtual Link Trunking (VLT) switches.
Create a scalable and resilient network fabric.
Enable monitoring of container communications.
Container network capacity and separation
Container networking takes advantage of the high-speed (25/100 GbE) network interfaces of the Dell PowerEdge server portfolio. To meet network capacity requirements, pods can attach to more networks by using available Container Network Interface (CNI) plugins.
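As an illustration of attaching a pod to an additional network, the sketch below uses the Multus CNI meta-plugin's NetworkAttachmentDefinition format. Multus is not mandated by this design, and the attachment name, master interface (bond0), and subnet are illustrative assumptions:

```yaml
# Illustrative sketch: a macvlan secondary network served by Multus.
# "bond0" and the subnet are assumed values, not part of this design.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: data-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "bond0",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.20.0/24"
    }
  }'
```

A pod then requests this network with the annotation `k8s.v1.cni.cncf.io/networks: data-net`, giving it a second interface on the additional network.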
Robin Cloud Native Platform relies on the following network configuration:
Each node that is part of the Robin CNP cluster has at least two NICs, each connected to at least two switches. The switches must have Virtual Link Trunking (VLT) connections so that they operate together as a single unit of connectivity. This configuration provides redundant data paths for all network traffic. The NICs at each node, and the ports that they connect to on each switch, can be aggregated using bonding to assure High Availability (HA) operation.
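As a configuration sketch of the dual-homed bonding described above, the following Linux commands aggregate two NICs into an LACP (802.3ad) bond, matching the VLT LAG presented by the switch pair. The interface names and address are illustrative assumptions, not values from this design:

```bash
# Create an LACP bond; the VLT switch pair presents the two uplinks as one LAG
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast

# Enslave both NICs (each cabled to a different VLT peer switch)
ip link set eno1 down && ip link set eno1 master bond0
ip link set eno2 down && ip link set eno2 master bond0

# Bring the bond up and assign an illustrative address
ip link set bond0 up
ip addr add 192.168.10.11/24 dev bond0
```

With this configuration, either NIC, cable, or switch can fail without interrupting node connectivity, because the surviving link continues to carry all traffic.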
Network fabric
The microservices data traffic requires a nonblocking fabric. Dell Technologies recommends that you deploy a leaf-spine network.
Resilient networking
Each server in the rack is connected to two S5248F-ON leaf switches with 25 GbE network interfaces and one S3148 management switch for iDRAC connectivity.
The Dell network design employs a VLT connection between the two leaf switches. Because all paths in a VLT environment are active, the design achieves high throughput while still protecting against hardware failures.
VLT technology enables a server to uplink multiple physical trunks into multiple PowerSwitch switches by treating the uplinks as one logical trunk. A VLT-connected pair of switches acts as a single switch to a connecting server, and both links from the server can forward and receive traffic. VLT replaces Spanning Tree Protocol (STP)-based networks by providing both redundancy and full bandwidth utilization through multiple active paths.
The major benefits of VLT technology are:
Dual control plane for highly available, resilient network services
Full utilization of the active Link Aggregation Group (LAG) interfaces
Active/active design for seamless operations during maintenance events
The VLT configuration in this design uses two 100 GbE ports between each ToR switch. The remaining 100 GbE ports can be used for high-speed connectivity to spine switches, or directly to the data center core network infrastructure.
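A high-level SmartFabric OS10 sketch of such a VLT domain is shown below. The domain ID, discovery-interface numbers, and backup destination address are illustrative assumptions, not values mandated by this design:

```
! Illustrative sketch on one VLT peer; the other peer mirrors this
! configuration with its own backup destination address.
OS10(config)# vlt-domain 1
OS10(conf-vlt-1)# discovery-interface ethernet1/1/53-1/1/54
OS10(conf-vlt-1)# backup destination 100.67.10.2
OS10(conf-vlt-1)# peer-routing
```

The discovery interfaces form the VLT interconnect between the two ToR switches, and the backup destination provides a heartbeat path if that interconnect fails.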
Network design
This design includes three logical networks:
Corporate network
The external network is used for the public API, the Robin Cloud Native Platform (Robin CNP) web interface, and applications exposed to the Corporate network.
Cluster Data network
This internal network is the primary, nonroutable network for cluster management, internode communication, and server provisioning using Preboot Execution Environment (PXE) and HTTP. DNS and DHCP services also reside on this network to provide deployment functionality. Network Address Translation (NAT) configured on the bastion node provides communication with the Internet.
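On a Linux bastion node, such NAT is commonly implemented with a masquerade rule. The configuration sketch below uses an assumed internal subnet and external interface name, which are illustrative only:

```bash
# Allow the bastion node to forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the internal Cluster Data network (example
# subnet 192.168.10.0/24) out of the external-facing interface eth0
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE
```

Nodes on the nonroutable internal network can then reach the Internet through the bastion while remaining unreachable from outside.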
iDRAC or Baseboard Management Controller (BMC) network
The iDRAC or BMC network is a secured, isolated network for switch and server hardware management, including access to the iDRAC9 module and Serial-over-LAN. Optional connections to the Corporate network can be configured to enable more direct access to the hardware. This network is also known as the Out-of-Band (OOB) network.
The figure below shows the Robin Cloud Native Platform logical network components.
PowerSwitch configuration
Dell PowerSwitch high-capacity network switches are cost-effective and easy to deploy. They provide a clear path to a software-defined data center, offering:
High density for 25, 40, 50, or 100 GbE deployments in top-of-rack (ToR), middle-of-row, and end-of-row deployments
A choice of 25 GbE and 100 GbE switches, including the S5048F-ON, S5148F-ON, S5212F-ON, S5224F-ON, S5248F-ON, S5296F-ON, and S5232F-ON
A 10 GbE, 25 GbE, 40 GbE, 50 GbE, or 100 GbE modular switch - the S6100-ON
S6100-ON modules that include:
16-port 40 GbE Quad Small Form-factor Pluggable (QSFP)+
Eight-port 100 GbE QSFP28
Combination module with four 100 GbE C Form-factor Pluggable (CXP) ports and four 100 GbE QSFP28 ports
This solution design uses the PowerSwitch S5248F-ON configured with Dell SmartFabric OS10. SmartFabric OS10 enables multilayered disaggregation of network functions that are layered on an open-source Linux-based operating system. The following topic describes a high-level configuration of the PowerSwitch switches that are used for a Robin Cloud Native Platform deployment at various scales.
High availability and load balancing
This design uses the following High Availability (HA) features:
Robin Cloud Native Platform—Multiple control plane nodes and associated infrastructure
Resilient load balancing—Three control plane nodes running HAProxy and Keepalived
Dell cloud native infrastructure— PowerEdge servers with dual NICs
Dell PowerSwitch—Spine-leaf fabric with Virtual Link Trunking (VLT)
HAProxy—Manages API server requests and redirects them in a round-robin manner to the control plane nodes
Keepalived—An open-source project that implements routing software using the Virtual Router Redundancy Protocol (VRRP). If the primary server fails, VRRP enables a switchover to a backup server; the switchover is achieved by using a Virtual IP address (VIP).
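Minimal configuration sketches for both components are shown below. The node addresses, VIP, and API port are illustrative assumptions: Keepalived holds the VIP on one control plane node at a time, and HAProxy on each node round-robins API requests across all three nodes:

```
# /etc/keepalived/keepalived.conf (sketch; addresses are assumed)
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other two nodes
    interface bond0
    virtual_router_id 51
    priority 101            # use a lower priority on the backups
    virtual_ipaddress {
        192.168.10.100/24   # the shared VIP
    }
}

# /etc/haproxy/haproxy.cfg (sketch; addresses are assumed)
frontend api
    bind 192.168.10.100:6443
    mode tcp
    default_backend control_plane
backend control_plane
    mode tcp
    balance roundrobin
    server cp1 192.168.10.11:6443 check
    server cp2 192.168.10.12:6443 check
    server cp3 192.168.10.13:6443 check
```

If the node holding the VIP fails, VRRP moves the VIP to a backup node, where a local HAProxy instance continues distributing requests to the surviving control plane nodes.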
See the Robin.io document, High Availability, for more information regarding the HA features of Robin CNP.