Present ME5 storage to Hyper-V hosts and VMs
Hyper-V supports DAS (SAS, FC, iSCSI) and SAN (FC, iSCSI) configurations with ME5.
See the Dell PowerVault ME5 Administrator’s Guide and the Dell PowerVault ME5 Deployment Guide at Dell Technologies Support for an in-depth review of transports and cabling options.
The choice of transport depends on customer preference and factors such as the size of the environment, the cost of the hardware, and the required support expertise.
iSCSI has grown in popularity for several reasons, including the improved performance of the higher-bandwidth Ethernet options now available. A converged Ethernet configuration also reduces complexity and cost, which benefits small office, branch office, and edge use cases with limited hardware footprints.
Regardless of the transport, it is a best practice to ensure redundant paths to each host by configuring MPIO. For test or development environments that can accommodate downtime without business impact, a less costly, less resilient design that uses a single path may be acceptable.
In a Hyper-V environment, all hosts that are clustered should be configured to use a single common transport (FC, iSCSI, or SAS).
There is limited Microsoft support for mixing transports on the same host. Mixing transports is not a recommended best practice, but there are some use cases for doing so temporarily.
For example, when migrating from one transport type to another, both transports may need to be available to a host during a transition period. If mixed transports must be used, use a single transport for each volume that is mapped to the host.
Note: Do not map a volume to a Windows host using more than one transport. Mixing transports for the same volume results in unpredictable, service-affecting I/O behavior in path failure scenarios. Map each volume using a single transport.
Windows Server and Hyper-V natively support MPIO through a Device Specific Module (DSM). The Microsoft DSM that is bundled with the Windows Server operating system is fully supported with ME5 arrays.
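The following is a minimal PowerShell sketch of enabling MPIO and having the Microsoft DSM claim ME5 devices. The vendor and product ID strings are assumptions for illustration; confirm the exact IDs for ME5 in the Dell documentation or with mpclaim -e.

# Enable the Windows Server MPIO feature (a reboot may be required)
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Let the Microsoft DSM automatically claim SAS- and iSCSI-attached devices
Enable-MSDSMAutomaticClaim -BusType SAS
Enable-MSDSMAutomaticClaim -BusType iSCSI

# For FC, register the array hardware ID with the DSM explicitly
# ("DellEMC"/"ME5" are illustrative values; verify before use)
New-MSDSMSupportedHW -VendorId "DellEMC" -ProductId "ME5"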
Windows and Hyper-V hosts default to the Round Robin with Subset policy with ME5 storage, which works well for most Hyper-V environments. Specify a different supported MPIO policy if necessary.
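As a sketch, the current default policy can be checked with the MSDSM cmdlets, and a per-disk policy can be changed with the mpclaim utility (the disk number 1 below is a placeholder):

# Inspect the Microsoft DSM default load-balance policy for newly claimed devices
Get-MSDSMGlobalDefaultLoadBalancePolicy

# If required, set Round Robin with Subset (policy 3) on MPIO disk 1
mpclaim.exe -l -d 1 3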
In the example shown in Figure 15, each ME5 storage controller (Controller A and Controller B) has four FC front-end (FE) paths connected to dual fabrics, for eight paths total. Connecting fewer FE paths, such as two on each controller for four paths total, is also acceptable.
In Figure 15, a volume mapped from ME5 to a host lists eight total paths.
The Active/Optimized paths are associated with the ME5 storage controller that the volume is assigned to. The Active/Unoptimized paths are associated with the secondary or standby ME5 storage controller for that same volume.
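To confirm this path layout from the Windows host, the mpclaim utility can list MPIO disks and per-path states; a brief sketch follows (the disk number 1 is a placeholder):

# List MPIO-claimed disks and their load-balance policies
mpclaim.exe -s -d

# Show per-path detail for MPIO disk 1, including which paths report
# Active/Optimized (owning controller) and Active/Unoptimized
mpclaim.exe -s -d 1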
When creating volumes on PowerVault, the wizard alternates controller ownership in a round-robin fashion to help balance load across the controllers. Administrators can override this behavior and specify a controller when creating a volume.
Best-practice recommendations include the following:
ME5 block storage can also be presented directly to Hyper-V guest VMs using the following methods:
In-guest iSCSI: Configure the host and VM network so the VM can access ME5 iSCSI volumes through a Hyper-V host or cluster network (a connection sketch follows this list).
Physical disks: Physical disks presented to a Hyper-V VM are often referred to as pass-through disks. A pass-through disk is mapped to a Hyper-V host or cluster, and I/O access is passed through directly to a Hyper-V guest VM. The Hyper-V host or cluster has visibility to a pass-through disk and assigns it a LUN ID, but does not have I/O access. Hyper-V keeps the disk in a reserved state. Only the guest VM has I/O access.
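For the in-guest iSCSI method, the following is a minimal PowerShell sketch run inside the guest VM; the portal address 192.168.100.10 is a placeholder for an ME5 iSCSI port IP:

# Start the iSCSI initiator service and set it to start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register an ME5 iSCSI portal and connect persistently across reboots
New-IscsiTargetPortal -TargetPortalAddress "192.168.100.10"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true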
ME5 arrays support in-guest iSCSI and pass-through disks (direct-attached disks) mapped to guest VMs. However, using direct-attached storage for guest VMs is not recommended as a best practice unless there is a specific use case that requires it. Typical use cases include:
Note: Legacy Hyper-V environments that are using direct-attached disks for guest VM clustering should consider switching to shared virtual hard disks when migrating to a newer Hyper-V version.
Use a consistent LUN number when mapping shared volumes: quorum disks, cluster disks, and cluster shared volumes. Leverage host groups on the ME5 array to simplify the task of assigning consistent LUN numbers.
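To verify that each shared volume presents the same LUN ID on every node, a PowerShell sketch such as the following can compare the SCSI LUN and disk serial number across nodes (the node names are placeholders; the serial number identifies the same ME5 volume on each node):

$nodes = "HV-Node1", "HV-Node2"
Invoke-Command -ComputerName $nodes {
    Get-CimInstance -ClassName Win32_DiskDrive |
        Select-Object Index, SCSILogicalUnit, SerialNumber
} | Sort-Object SerialNumber |
    Format-Table PSComputerName, Index, SCSILogicalUnit, SerialNumber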
Note: Hyper-V hosts that use boot-from-SAN cannot be added to ME5 host groups. See the Boot from SAN section of this white paper for details.
By default, PowerVault Manager assigns the next available LUN ID that is common to all members when mapping a new volume to a host group or a group of hosts. Changing LUN IDs after this initial assignment may be necessary to make them consistent.
Each cluster shared volume (CSV) can support one VM or many VMs. How many VMs to place on a CSV is a function of user preference, the workload, and how ME5 storage features such as snapshots and replication will be used. Placing multiple VMs on a CSV is a good design starting point in most scenarios. Adjust this strategy for specific use cases.
Some advantages for a many-to-one strategy include the following:
Some advantages for a one-to-one strategy include the following:
Other strategies include placing VHDs with a common purpose on a CSV. For example, place boot VHDs on a common CSV, and place data VHDs on other CSVs.
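As an illustration of that strategy, the following PowerShell sketch places a boot VHDX and a data VHDX for one VM on different CSVs; the VM name and CSV paths are placeholders:

# Create the VM with its boot VHDX on one CSV
New-VM -Name "App01" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\ClusterStorage\Volume1\App01\App01-boot.vhdx" `
    -NewVHDSizeBytes 80GB

# Create a data VHDX on a second CSV and attach it to the VM
New-VHD -Path "C:\ClusterStorage\Volume2\App01\App01-data.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "App01" -Path "C:\ClusterStorage\Volume2\App01\App01-data.vhdx"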