The following steps prepare the ESXi hosts for NVMe/TCP connectivity.
In this example, the C02-vDS-NVMeTCP distributed switch is created for NVMe/TCP host traffic. Alternatively, you can use an existing vDS or standard switches. You can also use one virtual switch per SAN port; this example uses a single vDS for both SAN ports, with each SAN port group mapped to a different uplink.
You will configure two port groups on this vDS: one for SAN-A and one for SAN-B traffic.
To create a vDS:
To add the hosts to the vDS, perform the following steps:
In this section, two port groups for NVMe/TCP storage traffic are created on the vDS. In this example, C02-NVMeTCP-SAN-A and C02-NVMeTCP-SAN-B are created on the C02-vDS-NVMeTCP distributed switch.
To configure the SAN A port group for the NVMe/TCP VMkernels, perform the following steps:
Follow the steps to create VMkernel ports for NVMe/TCP on all hosts connected to the vDS.
The first set of steps provides instructions for port group C02-NVMeTCP-SAN-A. The steps are repeated using different IP settings for port group C02-NVMeTCP-SAN-B.
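As an alternative to the vCenter steps above, the VMkernel ports can also be created from the ESXi shell. The following is a hedged sketch only: the vmk number, dvPort ID, and IP settings are placeholders for your environment, not values taken from this guide.

```shell
# Hypothetical example: create an NVMe/TCP VMkernel port on the SAN A port group.
# vmk3, the dvPort ID, and the IP address/netmask are placeholders.
esxcli network ip interface add --interface-name vmk3 \
    --dvs-name C02-vDS-NVMeTCP --dvport-id 10

# Assign a static IPv4 address on the SAN A subnet
esxcli network ip interface ipv4 set -i vmk3 -t static \
    -I 192.168.100.11 -N 255.255.255.0

# Tag the VMkernel port for NVMe/TCP traffic (requires ESXi 7.0 U3 or later)
esxcli network ip interface tag add -i vmk3 -t NVMeTCP
```

Repeat with a different vmk interface, dvPort, and subnet for the SAN B port group so that each SAN stays on its own uplink and IP network.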
You can create the NVMe/TCP storage adapters using the ESXi CLI or the vCenter GUI.
To create the storage adapters using the CLI, run the following esxcli command once per SAN uplink: esxcli nvme fabrics enable -p TCP -d [vmnic], replacing [vmnic] with the physical adapter used for NVMe/TCP traffic on that SAN.
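For a two-SAN layout like the one in this example, the command above would be run once per uplink. The vmnic names below are placeholders, not taken from this guide; substitute the uplinks mapped to your SAN A and SAN B port groups.

```shell
# Hypothetical example: enable a software NVMe/TCP adapter on each SAN uplink.
esxcli nvme fabrics enable -p TCP -d vmnic4   # uplink mapped to SAN A
esxcli nvme fabrics enable -p TCP -d vmnic5   # uplink mapped to SAN B

# Verify that the new vmhba storage adapters were created
esxcli nvme adapter list
```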
To create storage adapters using the vCenter GUI: