Administrators should use virtual hard disks (VHD, VHDX, VHDS) with Hyper-V VMs whenever possible. However, Hyper-V does support three methods for presenting SAN block storage directly to a guest VM. Limit the use of pass-through or direct attached disks to specific or temporary use cases. The three supported methods are:
- In-guest iSCSI.
- Virtual Fibre Channel (vFC). For more information about vFC, see Virtual Fibre Channel.
- Physical pass-through disk.
If a use case requires direct-attached storage, use in-guest iSCSI; it is the best practice among the three options.
Note: PowerStore does not support Metro Volumes configured as boot-from-SAN disks, in-guest iSCSI disks, or vFC disks.
In-guest iSCSI
Configure the host and VM network so the VM can access PowerStore iSCSI volumes through a Hyper-V host or cluster data network.
- Configure in-guest iSCSI on the VM the same way you configure iSCSI on a physical host.
- A guest VM supports MPIO when multiple paths are available to the VM. Install and configure the multipath I/O feature on the VM.
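The in-guest configuration above can be sketched in PowerShell, run from within the guest VM. This is a minimal sketch; the portal address below is a placeholder, and your PowerStore iSCSI target addresses and authentication settings will differ:

```powershell
# Run inside the guest VM (example values; substitute your PowerStore
# iSCSI portal address).
# Start the iSCSI initiator service and set it to start automatically.
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi

# Install the Multipath I/O feature and claim iSCSI devices for MPIO.
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Discover the PowerStore iSCSI portal and connect with MPIO enabled.
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
```

A reboot may be required after installing the Multipath I/O feature before MPIO claims take effect.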
vFC
A separate section of this paper covers vFC in more detail. See Virtual Fibre Channel.
Physical pass-through disks
A pass-through disk is a physical disk presented directly to a Hyper-V VM. The Hyper-V host or cluster assigns a LUN ID to a pass-through disk but does not have I/O access to the disk. Hyper-V keeps the disk in a reserved state. Only the guest VM has read/write I/O access.
- Hyper-V 2008 introduced pass-through disks. Pass-through disks are a legacy option that preserves backwards compatibility.
- Pass-through disks are no longer necessary because newer releases of Hyper-V provide equivalent functionality through generation 2 guest VMs, the VHDX format, and shared VHDs.
- Avoid the use of pass-through disks, particularly in cluster environments, other than for temporary or specific use cases.
- Map a SAN volume to the physical Hyper-V host or cluster nodes.
- Bring the disk online, initialize the disk, and then take the disk offline.
- Use Hyper-V Manager or Failover Cluster Manager to attach the disk directly to a guest VM.
- Generation 1 VMs (with virtual IDE or virtual SCSI controllers) and generation 2 VMs (with virtual SCSI controllers) support pass-through disks.
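The mapping steps above can be sketched in PowerShell on the Hyper-V host. This is an illustrative sketch; the disk number and VM name below are placeholder values:

```powershell
# Run on the Hyper-V host after mapping the PowerStore volume to the host.
# Identify the newly mapped disk (disk number 3 is an example).
Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus

# Bring the disk online and initialize it, then take it offline again.
Set-Disk -Number 3 -IsOffline $false
Initialize-Disk -Number 3 -PartitionStyle GPT
Set-Disk -Number 3 -IsOffline $true

# Attach the offline physical disk directly to the guest VM as a
# pass-through disk on its virtual SCSI controller.
Add-VMHardDiskDrive -VMName "GuestVM1" -ControllerType SCSI -DiskNumber 3
```

The disk must remain offline on the host; Hyper-V keeps it in a reserved state so that only the guest VM has read/write access.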
PowerStore appliances support in-guest iSCSI, vFC, and direct-attached (pass-through) disks mapped to Hyper-V guest VMs. However, avoid these options as a best practice unless a specific use case requires them. Typical use cases include:
- Performance: Direct-attached disks bypass the host file system and therefore offer slightly better performance than a VHD or VHDX. However, for most workloads there is no significant performance difference between a direct-attached disk and a virtual hard disk.
- Preferred: Redesign the environment to eliminate virtual disk performance as a bottleneck rather than switching to a direct-attached disk.
- Clustering: VM clustering on legacy Hyper-V platforms requires the use of direct-attached disks. Use of shared VHDs is the preferred option for VM clustering (HA) with Server 2012 R2 and newer.
- Troubleshooting: Use of a direct-attached disk can be helpful if you must troubleshoot I/O performance on an isolated physical volume that is separate from all other servers and workloads.
- Custom snapshot or replication policy: When it is necessary, apply a custom PowerStore protection policy (snapshots and replication) to a specific disk (volume).
- Preferred: Place a virtual hard disk on a dedicated cluster volume or cluster shared volume (CSV) in a one-to-one configuration. Then, apply PowerStore snapshots and replication to the cluster disk or CSV.
- Capacity: Legacy VHDs support a maximum size of 2 TB. VHDX supports a maximum size of 64 TB. The maximum supported size of a direct-attached disk is usually much larger than 64 TB; it is a function of the VM operating system.
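As a point of comparison for the capacity limits above, a VHDX can be created at the 64 TB format maximum on the host. A minimal sketch; the path is a placeholder:

```powershell
# Create a dynamically expanding VHDX at the 64 TB format maximum
# (path is an example).
New-VHD -Path "C:\ClusterStorage\Volume1\LargeData.vhdx" -SizeBytes 64TB -Dynamic
```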
In-guest iSCSI, vFC, and pass-through disk storage limitations
- Native Hyper-V Snapshots: You lose the ability to perform a native Hyper-V snapshot when a VM uses direct-attached storage. However, the ability to protect direct-attached volumes with PowerStore snapshots and replication is unaffected.
- Complexity: Use of direct-attached disks increases complexity. Of the three options, vFC is the most complicated. More complexity increases management overhead.
- Mobility: Direct-attached disks create a physical hardware layer dependency that reduces VM mobility.
- Scale: Each pass-through disk consumes a LUN ID on each host in a Hyper-V cluster. Extensive use of pass-through disks quickly becomes impractical and unmanageable at scale on a Hyper-V cluster. Use pass-through disks only if they are required for a specific use case. Management of any of the three types of direct-attached storage becomes increasingly difficult at scale.
- Differencing Disks: The use of a pass-through disk as a boot volume on a guest VM prevents the use of a differencing disk.
- Metro Volume: PowerStore does not support in-guest iSCSI or vFC disks configured as Metro Volumes. Pass-through disks are supported as Metro Volumes when configured as data disks, but not as boot disks.
Note: Legacy Hyper-V environments that use direct-attached disks for guest VM clustering (HA) should switch to shared virtual hard disks (VHDS) as a best practice when modernizing.
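As a sketch of the modernization path in the note above, a VHD Set (.vhds) can be created on a cluster shared volume and attached to each guest cluster node. The paths and VM names below are placeholder values:

```powershell
# Create a shared VHD Set on a cluster shared volume (example path).
# The .vhds extension creates a VHD Set (Windows Server 2016 and newer).
New-VHD -Path "C:\ClusterStorage\Volume1\SharedData.vhds" -SizeBytes 500GB -Dynamic

# Attach the same VHD Set to each guest VM in the cluster.
Add-VMHardDiskDrive -VMName "ClusterNode1" -Path "C:\ClusterStorage\Volume1\SharedData.vhds"
Add-VMHardDiskDrive -VMName "ClusterNode2" -Path "C:\ClusterStorage\Volume1\SharedData.vhds"
```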