This section provides the configuration steps required to set up the topology below. To deploy your own customized solution, follow the steps using the values from your Initial Configuration Worksheet, adjusting the number of times each step is repeated as needed. If the outcome of a step is not as expected, see the SmartFabric Storage Software Troubleshooting Guide Alerts and Events Reference, available on the SmartFabric Storage Software Download for NVMe/TCP SAN Documentation page, for assistance.
The topology used in this guide has the same PowerStore configuration as the converged LAN and SAN topology; the same SFSS host configuration as the dual SAN with dedicated, air-gapped SAN switches topology and the dual SAN with dedicated, air-gapped SAN spine/leaf fabrics over Layer 3 topology; and the same ESXi host configuration as all three example network topologies.
SFSS is shown on both ESXi and Linux for demonstration purposes only.
Component attributes
| Component purpose | Dell product | Port usage | Number of ports |
|---|---|---|---|
| ESXi Management Cluster | PowerEdge servers | Management and SFSS control traffic | 2 per host |
| Linux Management Cluster | PowerEdge servers | Management and SFSS control traffic | 2 per host |
| Workload Cluster | PowerEdge servers | ESXi management | 2 per host |
| Workload Cluster | PowerEdge servers | NVMe/TCP control and data traffic | 2 per host |
| Linux Workload Cluster | PowerEdge servers | Management | 1 per host |
| Linux Workload Cluster | PowerEdge servers | NVMe/TCP control and data traffic | 2 per host |
| Storage Subsystem | PowerStore T | Out-of-band management ports | 1 per node |
| Storage Subsystem | PowerStore T | NVMe/TCP control and data traffic | 2 per node |
| Storage Subsystem | PowerMax | NVMe/TCP control and data traffic | 1 per node |
| Centralized Discovery Controller | SFSS VM | Out-of-band management port | 1 |
| Centralized Discovery Controller | SFSS VM | NVMe/TCP control traffic | 1 per CDC |
| SFSS and SFS Administration | OMNI | Out-of-band management port | 1 |
Network attributes
| Dell PowerSwitch switches | Role |
|---|---|
| Leafs 1A and 1B | Top of Rack (ToR) switches for the PowerEdge servers |
| Leafs 2A and 2B | NVMe/TCP switches for the PowerStore T, PowerMax, and PowerEdge servers |
| Spines 1 and 2 | Connect the leafs in a SmartFabric spine/leaf topology. Control traffic from SFSS to the endpoints traverses the spines. |
| VLAN ID | Description | Network | Gateway | Server interfaces |
|---|---|---|---|---|
| 1811 | Management | 172.18.11.0/24 | 172.18.11.254 | Tagged |
| 1812 | vMotion | 172.18.12.0/24 | None | Tagged |
| 1814 | Guest Network | 172.18.14.0/24 | 172.18.14.254 | Tagged |
| 1821 | NVMe/TCP traffic for SAN A | 172.18.21.0/24 | None, L2 only | Tagged |
| 1822 | NVMe/TCP traffic for SAN B | 172.18.22.0/24 | None, L2 only | Tagged |
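Before configuring switches and hosts, the VLAN plan above can be sanity-checked programmatically. The following is a minimal sketch using Python's standard `ipaddress` module, with the VLAN values copied from the table (`None` marks the L2-only SAN VLANs, which have no gateway):

```python
import ipaddress

# VLAN ID -> (description, network, gateway); values from the VLAN table above.
vlans = {
    1811: ("Management", "172.18.11.0/24", "172.18.11.254"),
    1812: ("vMotion", "172.18.12.0/24", None),
    1814: ("Guest Network", "172.18.14.0/24", "172.18.14.254"),
    1821: ("NVMe/TCP SAN A", "172.18.21.0/24", None),  # L2 only, no gateway
    1822: ("NVMe/TCP SAN B", "172.18.22.0/24", None),  # L2 only, no gateway
}

for vid, (desc, net, gw) in vlans.items():
    network = ipaddress.ip_network(net)
    # A routed VLAN's gateway must fall inside its own subnet.
    if gw is not None:
        assert ipaddress.ip_address(gw) in network, f"VLAN {vid}: gateway outside subnet"
    print(f"VLAN {vid} ({desc}): {network}, gateway {gw or 'none (L2 only)'}")
```

A check like this catches the most common worksheet mistakes, such as a gateway typed into the wrong subnet, before any switch configuration begins.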
| FQDN or hostname | Management IP address | Description |
|---|---|---|
| esxi01.dell.lab | 172.18.11.101 | ESXi host, Mgmt Cluster |
| esxi02.dell.lab | 172.18.11.102 | ESXi host, Mgmt Cluster |
| esxi03.dell.lab | 172.18.11.103 | ESXi host, Mgmt Cluster |
| esxi04.dell.lab | 172.18.11.104 | ESXi host, Mgmt Cluster |
| esxi05.dell.lab | 172.18.11.105 | ESXi host, Workload Cluster |
| esxi06.dell.lab | 172.18.11.106 | ESXi host, Workload Cluster |
| esxi07.dell.lab | 172.18.11.107 | ESXi host, Workload Cluster |
| esxi08.dell.lab | 172.18.11.108 | ESXi host, Workload Cluster |
| lin01.dell.lab | 172.18.11.109 | Linux host, Management host |
| lin02.dell.lab | 172.18.11.110 | Linux host, Workload host |
| vcenter.dell.lab | 172.18.11.62 | vCenter VM |
| omni.dell.lab | 172.18.11.56 | OpenManage Network Integration software VM |
| sfss.dell.lab | 172.18.11.57 | SmartFabric Storage Software VM |
| PowerStore-Cluster-01 | 100.67.108.190 | PowerStore T appliance |
| PowerMax | 192.168.61.123 | PowerMax subsystem |
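Every host and VM on the management VLAN (1811) draws its address from 172.18.11.0/24, so the address plan above can also be verified with a short Python check. This is a sketch with the FQDNs and IPs copied from the table; the PowerStore and PowerMax management addresses sit on separate out-of-band networks and are deliberately excluded:

```python
import ipaddress

mgmt_net = ipaddress.ip_network("172.18.11.0/24")  # VLAN 1811, Management

# FQDN -> management IP address, copied from the table above (VLAN 1811 only).
mgmt_hosts = {
    "esxi01.dell.lab": "172.18.11.101",
    "esxi02.dell.lab": "172.18.11.102",
    "esxi03.dell.lab": "172.18.11.103",
    "esxi04.dell.lab": "172.18.11.104",
    "esxi05.dell.lab": "172.18.11.105",
    "esxi06.dell.lab": "172.18.11.106",
    "esxi07.dell.lab": "172.18.11.107",
    "esxi08.dell.lab": "172.18.11.108",
    "lin01.dell.lab": "172.18.11.109",
    "lin02.dell.lab": "172.18.11.110",
    "vcenter.dell.lab": "172.18.11.62",
    "omni.dell.lab": "172.18.11.56",
    "sfss.dell.lab": "172.18.11.57",
}

# Every management address must be unique and fall inside VLAN 1811's subnet.
addrs = [ipaddress.ip_address(ip) for ip in mgmt_hosts.values()]
assert len(set(addrs)) == len(addrs), "duplicate management IP address"
assert all(a in mgmt_net for a in addrs), "address outside 172.18.11.0/24"
print(f"{len(mgmt_hosts)} hosts verified in {mgmt_net}")
```

Running a check like this against your completed Initial Configuration Worksheet flags duplicate or out-of-subnet management addresses before they surface as unreachable hosts during deployment.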
| Infrastructure services | IP address |
|---|---|
| DNS | 172.18.11.50 |
| NTP | 172.18.11.50 |