Design recommendations for VPLEX and metro node
The following recommendations apply to VPLEX and metro node-based Integrated Data Protection solutions that provide business continuity and DRR. The recommendations cover the following areas:
Management network
Adhere to the following best practices when configuring the management network for VPLEX on converged systems:
- The VPLEX Management Server in each converged system must connect to the converged system management network (101 or 1801 or OOB).
- VPLEX Metro requires a routable connection between the VPLEX management servers for each cluster, and between each management server and the Cluster Witness server. If a firewall exists between any of these servers, it must allow ICMP and IPsec traffic in both directions. For more information, see the VPLEX Security Configuration Guide on the Dell Technologies Support website.
- Metro node in a Metro configuration requires a routable connection between the director’s management IP addresses of each cluster in the Metro system. If a firewall exists between any of these servers, see the information in the Metro Node Security Configuration Guide on the Dell Technologies Support website.
Ideally, this routable connection does not use the L2 DCI link. You do not have to extend the management VLAN.
- Link latency between the two VPLEX management servers and the VPLEX Cluster Witness server must not exceed 1 second.
- The IP management network must not be able to route packets to the reserved IPv4 VPLEX subnets 128.221.252.0/24 and 128.221.253.0/24.
- The IP management network must not be able to route packets to the following reserved IPv4 metro node subnets: 128.221.250.0/24, 128.221.251.0/24, 128.221.252.0/24, and 128.221.253.0/24.
- The management servers (MMCSs) for the VPLEX VS6 require two management network connections and two IP addresses per VPLEX cluster.
Even though the VPLEX VS6 uses two MMCSs, the second MMCS does not currently provide any function (such as HA or failover). However, it must be configured with an IP address and remain operational at all times. A failure of MMCS-B generates alerts and is treated as a Field Replaceable Unit (FRU) event.
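As a simple check of the routable-connection and latency requirements above, verify ICMP reachability from each management endpoint to its peers. This is a minimal sketch that assumes a Linux shell on the management server; the addresses are placeholders:
ping -c 5 <remote cluster management IP>
ping -c 5 <Cluster Witness server IP>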
Workload mobility and extending VLANs
When you configure workload mobility solutions, adhere to the following best practices:
- Ensure that there is Layer-2 (L2) adjacency between the two converged system production networks. This is to ensure connectivity following a live vSphere vMotion migration or a VMware HA-triggered VM restart event.
- At a minimum, extend the ESXi Management (205 or 1611) VLAN that hosts the VMware vSphere management VMs between sites. This requires trunking, outside the converged system, a subset of VLANs that are generally considered internal.
- The customer is responsible for the L2 extension between sites. The technology that they choose for this (OTV or back-to-back vPC) must comply with latency requirements.
- Starting with VMware vSphere 6.0, VMware supports Layer 3 vMotion, which means that you do not have to extend the Layer 2 VLAN 206 that is used for VMware vSphere vMotion on the compute hosts.
- For full resilience, provide gateway redundancy between sites, ideally implementing FHRP isolation to avoid hosts and virtual machines unnecessarily crossing the DCI link to reach their gateway.
VPLEX Metro detach rules
When configuring VPLEX Metro, bear in mind the following considerations:
- A detach rule is defined for each VPLEX Metro distributed vVol.
- When a communication failure occurs between the two clusters in a VPLEX Metro configuration, the detach rule identifies which VPLEX cluster must detach its mirror leg, allowing services to continue.
- The purpose of a defined preferred site is to ensure that there is no possibility of a split-brain scenario with both VPLEX clusters continuing to allow I/O during a communication failure.
- After a complete communication failure between the two VPLEX clusters, the preferred site continues to provide service to the distributed vVol.
- The other VPLEX cluster suspends I/O service to the volumes. That cluster is referred to as the nonpreferred site.
- The detach rule is at the distributed vVol level so that either converged system in a VPLEX Metro configuration can be the preferred site for some distributed vVols and the nonpreferred site for others.
- A VPLEX Metro instance can support up to 5,000 distributed vVols, and each volume has its own detach rule.
- VPLEX Witness failure-handling semantics apply only to the distributed vVols in a consistency group (CG). CGs have a bias rule, similar to a detach rule, that determines the preferred site; a CLI sketch follows this list.
- All distributed vVols common to the same set of VMs must be in one CG. All VMs that are associated with that CG must be located at the preferred site.
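Detach rules for consistency groups are managed through the VPLEX CLI. The following is a hedged sketch only; the consistency group name is a placeholder, and the exact command syntax and delay value should be verified against the VPLEX CLI guide for your GeoSynchrony release:
VPlexcli:/> cd /clusters/cluster-1/consistency-groups/<CG name>
VPlexcli:/clusters/cluster-1/consistency-groups/<CG name>> set-detach-rule winner --cluster cluster-1 --delay 5s
In this sketch, cluster-1 becomes the preferred (winning) site for all distributed volumes in that CG after the configured delay.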
Failure conditions that invoke VPLEX detach rules
The following failure conditions invoke VPLEX detach rules:
- A total VPLEX cluster failure at one site (all directors in a cluster)
- A VPLEX WAN partition
Total VPLEX cluster failure at one site
The following conditions occur during the failure and after the failure is resolved:
- A complete VPLEX cluster failure triggers detach rule behavior because the surviving VPLEX cluster cannot distinguish between an intersite communication loss and a complete VPLEX cluster failure.
- The distributed vVols whose preferred site is the surviving VPLEX cluster continue to run without interruption.
- The distributed vVols whose preferred site is the site with the failed VPLEX cluster enter I/O suspension.
- After the VPLEX cluster failure is resolved, the distributed vVols are reestablished, enabling I/O on both VPLEX clusters in a Metro configuration.
VPLEX WAN partition
The following conditions occur during the failure and after the failure is resolved:
- The VPLEX cluster WAN partition (intersite communication failure) also triggers execution of the detach rule.
- Each distributed vVol allows I/O to continue at the preferred site and suspends I/O at the nonpreferred site.
- After the VPLEX cluster WAN partition condition is resolved, the distributed vVols are re-established, enabling I/O on both VPLEX clusters in a Metro configuration.
VPLEX Metro persistent device loss
VMware vSphere does not recognize all types of total path failure when running in VPLEX environments. To remedy this issue, configure the advanced VMware ESXi settings for persistent device loss (PDL) handling in VPLEX Metro environments, following these guidelines:
- VMware vSphere recognizes two types of total path failure to a VMware ESXi server. Either condition can be declared by the VMware ESXi server following a failure condition.
- All paths down (APD): APD is a state declared by a VMware ESXi server when all paths to a device are lost without the array returning SCSI sense codes that indicate a permanent loss. The host continues to retry I/O.
- Persistent device loss (PDL): PDL is a state declared by a VMware ESXi server when a SCSI sense code is sent from the underlying storage array (in this case, VPLEX) to the VMware ESXi host, informing the host that the paths can no longer be used. This condition can occur if VPLEX suffers a WAN partition causing storage volumes at the non-preferred location to suspend. VPLEX sends the PDL SCSI sense code to the VMware ESXi server from the site that is suspending (the non-preferred site).
By default, VMware HA does not treat a SCSI PDL state as a reason to power off a VM and invoke an HA failover; the affected VMs keep running without access to their storage, which causes an outage. This is not acceptable when VMware HA is used with VPLEX in a stretched cluster configuration.
- VMware vSphere can act on the SCSI PDL state by powering off the VM, invoking HA failover. This behavior requires additional settings in the VMware vSphere cluster. For more information, see “Advanced VMware parameters to regulate PDL conditions” in the Tier-3 Platform Logical Build Guide. To obtain a copy of the guide, contact your Dell Technologies sales representative.
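The authoritative parameter names and values are defined in the Tier-3 Platform Logical Build Guide. As an illustration only, the ESXi advanced options commonly associated with PDL handling in stretched-cluster designs can be set from the ESXi Shell as follows; the boot-time option requires a host reboot to take effect, and vSphere 6.0 and later can provide equivalent behavior through VM Component Protection in the HA settings:
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 1
esxcli system settings advanced set -o /VMkernel/Boot/terminateVMOnPDL -i 1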
VMware vCenter Server placement
When implementing HA for VMware vCenter Server, adhere to the following best practices:
- In a stretched cluster solution, VMware vCenter Server requires site mobility capabilities to enable automatic failover to either converged system if there is an outage.
- Without VPLEX Metro and a VMware stretched clustering solution, each converged system would use its own instance of vCenter Server in the 3-Tier Management solution to manage its ESXi hosts. To deploy an active/active stretched clustering solution with VPLEX Metro, use a single VMware vCenter Server instance to manage all the hosts in the two converged systems that will participate in VMware stretched clusters.
The following options are supported on converged systems:
- If all ESXi hosts from Site-A and Site-B participate in a VMware stretched cluster, one vCenter server hosts all 3-Tier Management solution VMs from Site-A and Site-B, and all VMs on VPLEX distributed volumes that are attached to VMware ESXi hosts in Site-A and Site-B.
- For any VMware ESXi hosts in Site-A or Site-B or both that do not participate in a VMware stretched cluster, use the vCenter Server in each converged system to manage the 3-Tier Management solution and the local VMware ESXi hosts. Deploy a third vCenter server to manage the participating VMware ESXi hosts.
- Converged system management VMs include the VMware vCenter Server Appliance and the consolidated PowerPath vApp. All other management functions that reside in the 3-Tier Management solution remain local to their converged system.
- Ensure that you apply the additional VMware parameters to Cisco UCS C-Series servers when building a VMware stretched cluster. For more information, see “Advanced VMware parameters to regulate PDL conditions” in the Tier-3 Platform Logical Build Guide. To obtain a copy of the guide, contact your Dell Technologies sales representative.
Design recommendations for VPLEX on converged systems
General best practices for the back end
Take account of the following general best practices when configuring back-end connectivity:
- Each VPLEX director has redundant physical connections to the back-end storage array across dual fabrics.
- For single converged systems, only 50 percent of the available VPLEX back-end ports, A1-FC00, A1-FC01, B1-FC00, and B1-FC01, are connected as standard for each engine.
- Use the remaining VPLEX back-end ports, A1-FC02, A1-FC03, B1-FC02, and B1-FC03, in a high-bandwidth environment or for connecting a second back-end storage array.
- VMware vVols are not supported on VPLEX.
- Moving the ESXi host boot LUN behind VPLEX is not supported. Boot LUNs must be provided to the host directly from the storage array, and not from VPLEX.
Storage array-specific best practices
Take account of the following best practices when configuring back-end connectivity.
VNX2 storage arrays
- Each VPLEX director must have logical and physical connectivity to storage controllers.
- VPLEX recognizes the nonpreferred paths and does not use them during normal conditions.
- Set mode 4 (ALUA) during VPLEX back-end initiator registration before the device presentation. Do not change this value after the presentation.
The minimum supported configuration is four paths (two active and two passive) per VPLEX director for any given LUN. For more information about VPLEX back-end zoning, see the Dell Integrated Data Protection Zoning for VPLEX spreadsheet. To obtain a copy of the spreadsheet, contact your Dell Technologies sales representative.
Dell Unity and Unity XT storage arrays
- Each VPLEX director must have logical and physical connectivity to storage controllers.
- The minimum supported configuration is four paths (two active and two passive) per VPLEX director for any given LUN.
The Dell Unity and Unity XT arrays come in two versions, a hybrid array and an all-flash array. Both versions potentially require large amounts of bandwidth. The following tables show the bandwidth that is required to ensure that VPLEX properly protects the data:
Table 80. VPLEX VS6 back-end requirements for Dell Unity and Unity XT arrays
| Number of VPLEX VS6 engines | VPLEX (50 percent cabled) <-> Dell Unity and Unity XT configuration size |
| Single | 64 Gbps <-> 32 Gbps | 64 Gbps <-> 64 Gbps | - | - | - |
| Dual | - | 128 Gbps <-> 64 Gbps | 128 Gbps <-> 128 Gbps | - | - |
| Quad | - | - | 256 Gbps <-> 128 Gbps | 256 Gbps <-> 192 Gbps | 256 Gbps <-> 256 Gbps |
By default, Dell Technologies cables only 50 percent of the ports on each VPLEX VS6 engine on converged systems. There are four ports, each running at 16 Gbps. The Unity and Unity XT arrays start with the XS configuration with two 16 Gbps FC ports each. Each larger model adds an additional two FC ports (XS=2, S=4, M=6, L=8, and XL=10 FC 16 Gbps ports). In the preceding table, the configurations in bold can support the maximum bandwidth requirement with only 50 percent of the ports cabled. The configurations in plain type are oversubscribing the VPLEX ports when only 50 percent of the ports are cabled, and the Dell Technologies Sales Team must order the VPLEX configuration with all VPLEX back-end ports cabled.
Note: VPLEX VS6 can support all available Dell Unity and Unity XT configurations without oversubscribing the VPLEX ports, even with only 50 percent of the ports cabled.
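For example, a dual-engine VPLEX VS6 cabled at 50 percent provides 2 engines x 4 ports x 16 Gbps = 128 Gbps of back-end bandwidth. A Unity XT configuration with eight 16 Gbps FC ports also provides 128 Gbps, which corresponds to the 128 Gbps <-> 128 Gbps entry in the table, so the VPLEX back-end ports are not oversubscribed.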
For information about VPLEX back-end zoning, see the Dell Integrated Data Protection Zoning for VPLEX spreadsheet. To obtain a copy of the spreadsheet, contact your Dell Technologies sales representative.
PowerStore arrays
Take account of the following best practices when configuring back-end connectivity with PowerStore storage arrays:
- Each node has two I/O module (IOM) slots and at least one FC IOM.
- Every VPLEX engine is zoned to all appliances in the array cluster using the same FC ports. Each appliance owns a LUN, enabling LUN migration between the appliances in the array cluster.
- The minimum supported configuration is four paths per VPLEX director for any given LUN.
VPLEX VS6 - PowerStore with one FC IOM
The following table shows configurations that oversubscribe the VPLEX ports when 50 percent of the ports are cabled:
Table 81. VPLEX ports oversubscription configurations – one FC IOM
| VPLEX (50% cabled) <-> Dell PowerStore configuration | 1 x PowerStore appliance | 2 x PowerStore appliances | 3 x PowerStore appliances | 4 x PowerStore appliances |
| Single VPLEX VS6 engine | 64 Gbps <-> 128 Gbps | 64 Gbps <-> 256 Gbps | - | - |
| Dual VPLEX VS6 engines | - | 128 Gbps <-> 256 Gbps | 128 Gbps <-> 384 Gbps | 128 Gbps <-> 512 Gbps |
| Quad VPLEX VS6 engines | N/A | N/A | N/A | N/A |
By default, Dell connects 50 percent of the ports on each VPLEX VS6 engine in a converged system, which means that each engine has four connected ports, each running at 16 Gbps.
The PowerStore array starts with a single appliance with two 32 Gbps FC ports per node, for a total of four 32 Gbps FC ports per appliance. The array can scale up to a total of four appliances. Each appliance adds an additional four 32 Gbps ports. Proper sizing is required to determine the number of ports to cable.
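For example, a single VPLEX VS6 engine cabled at 50 percent provides 4 x 16 Gbps = 64 Gbps, while one PowerStore appliance with one FC IOM per node provides 4 x 32 Gbps = 128 Gbps. This corresponds to the 64 Gbps <-> 128 Gbps entry in Table 81, so the VPLEX ports are oversubscribed 2:1 and sizing must confirm that the expected workload fits within the VPLEX bandwidth.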
VPLEX VS6 - PowerStore with two FC IOM
The following table shows configurations that oversubscribe the VPLEX ports when 100 percent of the ports are cabled:
Table 82. Oversubscription of VPLEX ports with two FC IOM
| VPLEX (100% cabled) <-> Dell PowerStore configuration with two FC IOM | 1 x PowerStore appliance | 2 x PowerStore appliances | 3 x PowerStore appliances | 4 x PowerStore appliances |
| Single VPLEX VS6 engine | 128 Gbps <-> 256 Gbps | 128 Gbps <-> 512 Gbps | - | - |
| Dual VPLEX VS6 engines | - | 256 Gbps <-> 512 Gbps | 256 Gbps <-> 768 Gbps | 256 Gbps <-> 1024 Gbps |
| Quad VPLEX VS6 engines | N/A | N/A | N/A | N/A |
By default, Dell connects 50 percent of the ports on each VPLEX VS6 engine in a converged system, which means that each engine has four connected ports, each running at 16 Gbps. The preceding table shows the VPLEX cabled at 100 percent, which means that each engine has eight ports, each running at 16 Gbps.
The PowerStore array initially consists of a single appliance. With two FC IOMs, each node in the appliance has four 32 Gbps FC ports, for a total of eight 32 Gbps FC ports per appliance. The PowerStore array can scale up to four appliances, with each appliance contributing eight additional 32 Gbps ports. However, even when all the ports are cabled, as shown in the preceding table, each configuration oversubscribes the VPLEX ports. Proper system sizing is therefore necessary to determine the number of ports to cable.
For more information, see the Dell Integrated Data Protection Zoning for VPLEX spreadsheet for the VPLEX back-end. To obtain a copy of the spreadsheet, contact your Dell Technologies sales representative.
VMAX, VMAX3/AF, and PowerMax arrays
Take account of the following best practices when configuring back-end connectivity with VMAX, VMAX3/AF, and PowerMax series storage arrays:
- Each VPLEX director must have a minimum of two I/O paths to every local back-end storage array and to every storage volume that is presented from that storage array.
- Each director can have a maximum of four active paths to a given LUN. Exceeding that limit is not supported.
The zoning examples in this guide enforce the maximum path recommendations and must not be modified to exceed four active paths per director.
For more information, see the Dell EMC Integrated Data Protection Zoning for VPLEX spreadsheet. Look for the following topics:
- VPLEX back-end zoning
- Front-end port assignments. See the VMAX3_AF-FE ports tab for VMAX3 and PowerMax configurations that include eNAS.
Configuring VMAX AF bit settings for connections to VPLEX
The following table describes best practices for configuring VMAX AF bit settings for connections to VPLEX:
Table 83. VMAX AF bit settings for connections to VPLEX
| Set | Do not set | Optional |
| SPC-2 Compliance (SPC2) | Disable Queue Reset on Unit Attention (D) | Link speed |
| SCSI-3 Compliance (SC3) | Avoid Reset Broadcast (ARB) | Enable Auto-Negotiation (EAN) |
| Enable Point-to-Point (PP) | Environment Reports to Host (E) | ACLX |
| Unique Worldwide Name (UWN) | Soft Reset (S) | |
| Common Serial Number (C) | Return Busy (B) | |
| For Release 5.2 and later: OS-2007 (OS compliance) | Enable Sunapee (SCL) | |
| | Sequent Bit (SEQ) | |
| | Non-Participant (N) | |
| | For releases 5.2 and prior: OS-2007 (OS compliance) | |
- Set the ACLX bit on each VMAX FA that is zoned to VPLEX back-end ports if the VMAX FA is also shared for hosts that require conflicting bit settings.
- Enable the OS2007 bit on VMAX FAs that are connected to VPLEX back-end ports.
Note: Enabling the OS2007 bit allows VPLEX to detect configuration changes on the array storage view and to react to them by automatically rediscovering the back-end storage view and detecting LUN remapping issues.
- To enable the OS2007 bit on VMAX FA port 8e:0, run the following commands:
symconfigure -sid xxx -cmd "set port 8e:0 SCSI_Support1=ENABLE;" preview
symconfigure -sid xxx -cmd "set port 8e:0 SCSI_Support1=ENABLE;" commit
- It may not be possible to set the OS2007 bit on the target VMAX FA port because the port is shared with non-VPLEX initiators. In that case, run the following commands:
symaccess -sid xxx -wwn <VPLEX initiator> set hba_flags on os2007 -enable
symaccess -sid xxx -wwn <VPLEX initiator> list logins -dirport 8e:0 -v
- For VMAX3 arrays using Solutions Enabler 8.4 and later, run the following command to set the OS2007 bit on shared FA ports:
symaccess -sid xxx set hba_flags on OS2007 -enable -wwn <VPLEX initiator> -dirport ALL:ALL
- Repeat the previous commands for each VPLEX initiator.
This procedure is non-disruptive to host I/O and requires no specific actions on VPLEX.
Best practices for the VPLEX front end
Adhere to the following best practices when configuring front-end zoning for VPLEX on converged systems:
- Ensure that the front-end I/O module on each director has a physical connection to each fabric. Connect even ports to Fabric A, and odd ports to Fabric B.
- Ensure that each host has at least one path to an A director and one path to a B director on each fabric, for a total of four logical paths. A minimum of four paths is required to perform a non-disruptive upgrade (NDU).
- Ensure that a VMware ESXi cluster connects to exactly two VPLEX directors.
- For a dual or quad engine, spread the host I/O paths across the engines and the directors.
- Use PowerPath on each host to provide load balancing and failover on converged systems.
- Use the adaptive policy for non cross-connect configurations on converged systems.
- Register all VMware ESXi host initiators with a Host Type of default.
- For single converged systems, ensure that only 50 percent of the available VPLEX front-end ports are connected as standard for each engine: A0-FC00, A0-FC01, B0-FC00, and B0-FC01.
- Use the remaining VPLEX front-end ports, A0-FC02, A0-FC03, B0-FC02, and B0-FC03, in a cross-connect configuration or when connecting a second converged system.
- Ensure that field personnel rebalance the front-end host connectivity when upgrading the engine count (single to dual, dual to quad) on the customer's premises.
Best practices for VPLEX WAN-COM
The following section describes best practices for configuring VPLEX WAN-COM.
General connectivity best practices
Keep in mind that:
- In a VPLEX configuration, intracluster connectivity refers to director-to-director communication in the VPLEX cluster.
- In a Metro configuration, intercluster connectivity refers to communication between the VPLEX clusters. The configuration uses the WAN-COM module on each director.
- Customers must purchase VPLEX Metro with the appropriate WAN-COM module because reconfiguration of the hardware module is not supported after the initial installation.
VPLEX FC-WAN-COM connectivity best practices
Take account of the following general best practices when configuring Cisco MDS replication VSANs and FC Inter-Switch Links for VPLEX Metro with FC replication.
When you deploy VPLEX Metro, ensure that:
- Internal intersite connections terminate at the converged system Cisco MDS switches; for example, an ISL to a local DWDM multiplexor (MUX) or the termination of a directly connected point-to-point dark fiber link.
- External intersite connections are completely external to the converged systems. VPLEX WAN-COM replication ports connect directly to external switches, and no ISLs exist between the converged systems.
- The FC WAN-COM module supports switched fabric, DWDM, and FCIP protocols.
- The FC WAN-COM module does not support using Cisco Inter-VSAN routing for WAN-COM zoning.
- Use independent FC WAN links for redundancy on converged systems.
- Each VPLEX director has two FC WAN ports. You must connect these to separate fabrics to maximize redundancy and fault tolerance.
- Logically isolate replication traffic from other traffic using dedicated VSANs.
Configuring the Inter-Switch Link
Adhere to the following guidelines when configuring Inter-Switch Links (ISLs) for trunking and buffer-to-buffer credits, and when deploying VPLEX Metro with DWDM and SONET.
Trunking
Adhere to the following best practices:
- Trunking enables interconnected ports to transmit and receive frames in more than one Cisco VSAN over the same physical link using the enhanced ISL (EISL) frame format.
- When trunking mode is disabled, add ISL interfaces to the Cisco VSAN that is being extended before activating the links to ensure that only the Cisco VSAN is extended between converged systems.
- By default, trunk mode is enabled on all FC interfaces, but it takes effect only when in E-port mode.
- An operational E-port with trunk mode enabled is referred to as a TE port.
- The trunk-allowed Cisco VSANs configured for TE ports are used by the trunking protocol to determine the allowed active Cisco VSANs in which frames can be received or transmitted.
- If configuring ISLs with trunking mode enabled, do not add local Cisco VSANs to the trunk-allowed Cisco VSAN list.
- Enable trunking only when multiple Cisco VSANs must be extended between converged systems.
- On the Cisco MDS 9000 switches, set the primary converged system end of the trunk setting to “on” and the secondary converged system end to “auto.”
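As an illustration of these trunking guidelines, the following Cisco MDS NX-OS sketch configures an ISL interface as a trunking E port that carries only the replication VSAN. The interface and VSAN ID are placeholders; the secondary converged system end would use switchport trunk mode auto instead of on:
configure terminal
interface <ISL interface>
switchport mode E
switchport trunk mode on
switchport trunk allowed vsan <replication VSAN ID>
no shutdown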
For more information about how to configure inter-switch links between converged systems, see the following documentation:
Buffer-to-buffer credits
Adhere to the following best practices:
- FC uses buffer-to-buffer credits (BB_Credits) as a mechanism for hardware-based flow control, so that switch hardware does not have to discard frames during periods of high congestion.
- The standard FC flow control and BB_Credit values are adequate for most short-haul deployments. Additional buffering and WAN-optimized flow control are often needed for longer distances.
- Determining sufficient BB_Credits before use is crucial because miscalculations might lead to performance degradation due to credit starvation.
- Add 20 percent to the calculated BB_Credit value to account for spikes in traffic.
- Credit starvation occurs when the number of available credits reaches zero, preventing further FC transmission. This condition triggers a timeout, causing the ISL to reinitialize.
- Depending on the distance between the ISL endpoints and on the MDS switch or switching module in use, additional BB_Credits might be required to ensure optimal operation.
The following table provides guidelines for determining how many BB_Credits are required based on the distance and speed of the ISL. If the calculated value exceeds the default value, adjust the ISL interface configuration.
Table 84. Recommended buffer-to-buffer credit configuration settings
ISL link speed (Gbps) | BB credits per km |
1 | 0.5 |
2 | 1 |
4 | 2 |
8 | 4 |
The following table shows the default and maximum values for the buffer-to-buffer credit configuration settings per ISL.
Note: The Cisco MDS 9148 Multilayer Fabric Switch has a maximum of 128 BB_Credits per port group.
Table 85. Buffer-to-buffer credit configuration settings per ISL
| Switch | Default BB_Credit buffers per ISL port | Maximum BB_Credit buffers per ISL port |
| Cisco MDS 9148 Multilayer Fabric Switch | 32 | 125 |
| 48-Port 8-Gbps FC Module | 250 | 500 |
| 24-Port 8-Gbps FC Channel Module | 500 | 500 |
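As a worked example, an 8 Gbps ISL over a 50 km link needs approximately 50 km x 4 BB_Credits per km = 200 BB_Credits; adding 20 percent for traffic spikes gives 240. That value fits within the 250-credit default of the 48-Port 8-Gbps FC Module, but it exceeds both the default (32) and the maximum (125) of the Cisco MDS 9148 Multilayer Fabric Switch. Where an adjustment is required, the receive BB_Credit value is typically set on the ISL interface with a command of the following form; the interface is a placeholder, and the command and supported range should be verified against the Cisco MDS configuration guide for your platform and release:
interface <ISL interface>
switchport fcrxbbcredit 240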
DWDM and SONET best practices
If using DWDM or SONET connectivity between converged systems, ensure that the two rings have diverse pathing and that latency is measured for both paths. The following behavior is expected:
- VPLEX directors load-balance (round-robin) across the two paths, so any large discrepancy in latency causes VPLEX to operate at speeds based on the slower path.
- VPLEX issues call home events if there is a large discrepancy but does not take action.
Cisco MDS Inter-VSAN Routing (IVR)
Before you configure Cisco MDS Inter-VSAN routing (IVR), review the following information about licensing and the Cisco MDS Enterprise Package part numbers.
Licensing
Take into account that:
- The Cisco MDS Enterprise Package is required to enable Inter-VSAN Routing (IVR) on each converged system FC switch. IVR is part of the Advanced Traffic-Engineering feature set.
- IVR allows a selective transfer of data traffic between specific initiators and targets on different VSANs, eliminating the need to merge VSANs into a single logical fabric.
- IVR facilitates resource sharing across VSANs without compromising the VSAN benefits of scalability, reliability, availability, or network security.
- IVR works across WANs using FCIP. FCIP is supported by VPLEX Metro.
- The Cisco MDS Enterprise Package enables zone-based Quality of Service (QoS) to complement the standard QoS that is already available, and extended buffer-to-buffer credits to increase the distance for SAN extension.
- In addition to Advanced Traffic-Engineering, the Cisco MDS Enterprise Package enables Enhanced Network Security with the following features:
- Cisco TrustSec FC Link Encryption
- Switch-switch and host-switch authentication with the FC Security Protocol (FCSP)
- Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP)
- Port security that locks mappings of entities to switch ports
- VSAN-based access control
- IP Security (IPsec) for FCIP
- Digital certificates and fabric binding for open systems
Cisco MDS Enterprise Package
The following table shows the part numbers to use when ordering a Cisco MDS Enterprise Package:
Table 86. Cisco MDS Enterprise Package part numbers
Part number | Description |
M97ENTK9 | Cisco MDS Enterprise Package for one Cisco MDS 9700 Series Multilayer Director |
M9500ENT1K9 | Cisco MDS Enterprise Package for one Cisco MDS 9500 Series Multilayer Director |
M9100ENT1K9 | Cisco MDS Enterprise Package for one Cisco MDS 9100 Fabric Switch |
VPLEX IP WAN-COM connectivity
Adhere to the following best practices.
- Connect the IP WAN ports to the Cisco Nexus 9000. The IP WAN ports are optical 10 Gbps and do not automatically negotiate down to slower speeds. They must connect to 10 Gbps SFPs.
- Provide 10 Gbps connectivity locally where the IP WAN-COM ports attach to the network. Other network segments can run at speeds other than 10 Gbps.
- Assign all IP WAN ports in port-group 0 to the same VLAN in each site.
- Assign all IP WAN ports in port-group 1 to the same VLAN in each site.
The IP WAN ports do not support 802.1Q tagging.
- Use different VLANs for port-group 0 and port-group 1.
- Configure the switch interfaces as access ports.
- The Maximum Transmission Unit (MTU) size attribute affects IP WAN-COM performance. The default IPv4 MTU size for network switches is 1,500.
- Increasing the size of the MTU increases performance over the WAN.
- VPLEX supports a maximum MTU size of 9,000. Use the highest MTU size that is supported on the network.
- Configure every network switch in the path between the VPLEX clusters to support jumbo frames. Otherwise, the frame is fragmented into multiple smaller frames with an MTU of 1500, which negatively affects performance.
- When jumbo frames are used with IPv6, the routers do not fragment the packet on behalf of the source. Instead, they drop the packet and send back an error message.
- Set the correct socket buffer size according to your anticipated workload. The following values are suggested starting points for your baselining process:
- 1 MB is optimal for an MTU of 1,500 with an RTT of 1 millisecond.
- 5 MB is optimal for an MTU of 1,500 with an RTT of 10 milliseconds.
- 5 MB is optimal for an MTU of 9,000 with an RTT of 1 millisecond and 10 milliseconds.
For instructions on how to change the MTU size or the socket buffer size, see docu58234 VPLEX IP Networking: Implementation Planning and Best Practices on the Dell Technologies Support website.
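The following Cisco Nexus 9000 sketch illustrates the access-port, VLAN, and MTU guidelines above for one IP WAN-COM port. The interface and VLAN ID are placeholders, and depending on the platform and NX-OS release, jumbo frames might instead be enabled through a system-level network-qos policy rather than a per-interface MTU:
interface <Ethernet interface>
description VPLEX IP WAN-COM port-group 0
switchport
switchport mode access
switchport access vlan <port-group 0 VLAN ID>
mtu 9216
no shutdown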
VPLEX Cluster Witness Server configuration
Before you configure the VPLEX Cluster Witness Server, take into consideration that:
- VPLEX Cluster Witness Server is a VPLEX component that is provisioned as a VM on a VMware ESXi host. VPLEX Witness is typically deployed in a third site or failure domain to enforce isolation from failures that could potentially affect the VPLEX clusters at either site.
- Deploying a VPLEX Metro solution with VPLEX Witness provides continuous availability to the storage volumes if there is a site failure or intercluster link failure (WAN partition).
- VPLEX Witness failure-handling semantics apply only to the distributed vVols in a CG.
- VPLEX Witness server is a mandatory requirement for cross-connect.
- VPLEX Witness server is recommended for non cross-connect.
VPLEX security considerations
Important: A VPLEX Metro system does not support native encryption over an IP WAN-COM link.
Dell Technologies recommends that you deploy an external encryption solution such as IPsec to achieve data confidentiality and end-point authentication over IP WAN-COM links between VPLEX clusters on converged systems.
VPLEX port usage and firewall rules
When configuring VPLEX solutions, apply the recommended guidelines for port usage and firewall rules. Look for the VPLEX Security GeoSynchrony Configuration Guide on the Dell Support website.
Design recommendations for metro node
General best practices for the Dell metro node back-end
On converged systems, take into account that:
- Each metro node director must have redundant physical connections to the back-end storage array across dual fabrics.
- Metro node systems require that 100 percent of all front-end and back-end ports are connected.
- Metro node does not support vVols.
- Moving the ESXi host boot LUN behind metro node is not supported. Boot-from-SAN LUNs must be provided to the host directly from the storage array, and not from metro node.
Zoning guidelines and examples
Metro node zoning is based on the following design principles:
- Zone metro node director A ports to one group of four storage array ports.
- Zone metro node director B ports to a different group of four storage array ports.
- Each metro node director is treated as a VMware ESXi cluster (host). Follow the host connectivity guidelines in the Tier-3 Logical Build Guide, which is available on the Dell Support website. In a metro node environment, the array ports that are reserved for the first VMware ESXi cluster are zoned to metro node director-A, with two active paths per fabric. The storage array ports that are reserved for the second VMware ESXi cluster are zoned to metro node director-B.
- The even back-end port (IO-02) connects to Fabric A. The odd back-end port (IO-03) connects to Fabric B.
See the Dell Integrated Data Protection Zoning for metro node spreadsheet for back-end zoning examples with Dell UnityXT and PowerStore arrays. To obtain a copy of the spreadsheet, contact your Dell Technologies sales representative.
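The zoning spreadsheet is the authoritative reference. As a generic illustration only, a back-end zone for one metro node director-A port on Fabric A might look like the following on a Cisco MDS switch; the VSAN ID, zone and zone set names, and WWPNs are placeholders:
zone name <metro node dirA BE zone> vsan <Fabric A VSAN ID>
member pwwn <metro node director-A port IO-02 WWPN>
member pwwn <array target port 1 WWPN>
member pwwn <array target port 2 WWPN>
zoneset name <Fabric A zone set> vsan <Fabric A VSAN ID>
member <metro node dirA BE zone>
zoneset activate name <Fabric A zone set> vsan <Fabric A VSAN ID>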
Array-specific best practices for the Dell metro node back end
Adhere to the following best practices for specific storage arrays.
Unity XT
As an ALUA array, the Unity XT array follows the four-active, four-passive connectivity rule: each director node of the metro node cluster must have four active and four passive paths to all LUNs.
Each metro node director has two back-end ports. For compliance, each metro node director back-end port must be zoned to two ports on each Unity XT storage processor (SP). The SP that owns the LUN provides the active paths to the LUN, while the second SP provides the passive paths.
When configuring back-end connectivity with Unity and Unity XT storage arrays, ensure that:
- Each metro node director has logical and physical connectivity to storage controllers.
- The minimum supported configuration is eight paths (four active and four passive) per metro node director for any given LUN.
For more information, see the Dell EMC Integrated Data Protection Zoning for metro node spreadsheet on the Dell Technologies Support website.
PowerStore
When configuring back-end connectivity with PowerStore storage arrays, ensure that:
- Each PowerStore node has two IOM slots.
- Each node has at least one FC IOM.
- The even FC ports (0 and 2) are cabled to the Cisco MDS switch A fabric.
- The odd FC ports (1 and 3) are cabled to the Cisco MDS switch B fabric.
- Each metro node engine is zoned to a port group on each appliance: FC00 is zoned to two Fabric A ports, and FC01 is zoned to two Fabric B ports.
- Each metro node engine is zoned to all appliances in the array cluster using the same FC ports. Each LUN is owned by one appliance, enabling LUN migration between appliances in the same array cluster.
- Each metro node director must have logical and physical connectivity to storage controllers.
- The minimum supported configuration is four paths per metro node director for any given LUN.
General best practices for the Dell metro node front end
When configuring front-end zoning for metro node on converged systems, ensure that:
- The front-end I/O module on each metro node director has a minimum of two physical connections, one to each fabric.
- The even port (IO-00) connects to Fabric A, and the odd port (IO-01) connects to Fabric B.
- Each host has at least one path to metro node director-A and one path to metro node director B on each fabric, for a total of four logical paths.
- Four paths are configured from each host to the metro node cluster to complete an NDU.
- PowerPath provides load-balancing and failover on converged systems.
- PowerPath adaptive policy is used for non cross-connect configurations on converged systems.
- All VMware ESXi host initiators are registered with a Host Type of default.
Metro node WAN-COM connectivity
In a metro node configuration:
- Intracluster connectivity refers to director-to-director communication within the metro node cluster.
- Inter-cluster connectivity refers to communication between the metro node clusters.
IP WAN-COM connectivity
Connect the IP WAN ports to the Cisco Nexus 9000 switches. Take into account that:
- Each of the four metro node directors has two WAN ports configured into two port groups, WC-00 and WC-01.
- Independent WAN links are strongly recommended for redundancy.
- Two separate non-vPC VLANs must be defined: one on switch A for the WC-00 port group and one on switch B for the WC-01 port group.
- IP WAN-COM ports cannot participate in a port channel, so attaching to a member switch in a vPC pair results in orphan ports. The vPC peer-link between Cisco Nexus switches in a converged system is not sized to account for metro node Metro traffic.
- Separate uplinks are configured for the non-vPC VLANs. You can attach IP WAN-COM ports directly to the customer's dark fiber network switches.
- As optical 10 Gbps ports, the IP WAN-COM ports do not automatically negotiate down to slower speeds and must connect to 10 Gbps SFPs. Provide 10 Gbps connectivity locally where the IP WAN-COM ports attach to the network. Other network segments can run at speeds other than 10 Gbps.
- All WC-00 ports are assigned to the same VLAN in each site.
- All WC-01 ports are assigned to the same VLAN in each site.
- The IP WAN ports do not support 802.1Q (VLAN) tagging. Configure the switch interfaces as access ports.
- The MTU size attribute affects IP WAN-COM performance.
- The default IPv4 MTU size for network switches is 1,500.
- Increasing the size of the MTU increases performance over the WAN.
- Metro node supports a maximum MTU size of 9,000. Use the highest MTU size that is supported on the network.
- Every network switch in the path between the metro node clusters is configured to support jumbo frames. Otherwise, the frame is fragmented into multiple smaller frames with an MTU of 1,500, which negatively affects performance.
- The supported network round-trip latency is less than or equal to 10 milliseconds.
- The correct socket buffer size (socket-buf-size) is set for your anticipated workload.
To start your baselining process, use the following socket buffer size values:
- 1 MB for an MTU of 1,500 with an RTT of 1 millisecond
- 5 MB for an MTU of 1,500 with an RTT of 10 milliseconds
- 5 MB for an MTU of 9,000 with an RTT of 1 millisecond and 10 milliseconds
For information about changing the MTU size or the socket buffer size, look for the Metro node best practices document on the Dell Technologies Support website.
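Before placing metro node WAN-COM traffic on the intersite path, you can confirm end-to-end jumbo frame support with a do-not-fragment ping from a Linux test host attached to the same VLAN and path. The target address is a placeholder, and a payload of 8972 bytes accounts for the 28 bytes of IP and ICMP overhead within a 9,000-byte MTU:
ping -M do -s 8972 <remote test host IP>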
Metro Cross-Connect best practices
When configuring a host Cross-Connect for metro node Metro, take into account that:
- The maximum round-trip latency for Cross-Connect is 1 millisecond with all metro node Metro configurations.
- The maximum round-trip latency for Cross-Connect is 5 milliseconds with VMware vSphere.
- Metro node Witness server is mandatory for Cross-Connect.
- Dell Technologies does not support Cross-Connect for converged systems that do not include Cisco MDS FC switches. This is referred to as the separated networking option.
- Host Cross-Connect requires front-end SAN connectivity between converged systems. This ensures that hosts in the primary converged system can communicate with front-end ports of the metro node cluster that is connected to the secondary converged system, and conversely.
- PowerPath/VE must be configured with active paths to the local metro node cluster.
- PowerPath/VE provides an auto-standby feature that groups logical paths by the metro node cluster and determines which has the lowest path latency to identify local and remote.
- PowerPath/VE autostandby must be enabled for each VMware ESXi host in a Cross-Connect configuration.
Autostandby with the proximity trigger determines and selects the preferred paths to a distributed volume and places nonpreferred paths in autostandby mode (asb:prox).
Here is the syntax of the command to enable autostandby (a verification example follows this list):
rpowermt set autostandby=on trigger=prox host=x.x.x.x
- The auto-resume feature must be set to true for all metro node Metro CGs.
- The thin-rebuild feature must be set to true for all metro node Metro distributed volumes.
- Cross-Connect zoning follows the rules that are defined in metro node front-end (FE) logical port groups.
- Storage array cross-connect (back-end SAN connectivity) is sometimes implemented for adding protection to system volumes.
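To confirm that autostandby has taken effect on a host, display the PowerPath/VE path states; nonpreferred paths to the distributed volumes should report the asb:prox mode (the host address is a placeholder):
rpowermt display dev=all host=x.x.x.x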