VMware vSphere offers multiple ways of formatting virtual disks through the UI or CLI. For new virtual machine creation, only the formats eagerzeroedthick, zeroedthick, and thin are offered as options.
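For reference, the same three formats can be specified when creating a virtual disk from the ESXi shell with vmkfstools. The following is an illustrative sketch only; the datastore, VM folder, and disk file names are placeholders and should be replaced with values appropriate to the environment.

# Create a 90 GB virtual disk in each of the three allocation formats (paths are hypothetical)
vmkfstools -c 90G -d zeroedthick /vmfs/volumes/iSCSI_1/VM1/VM1_zt.vmdk
vmkfstools -c 90G -d thin /vmfs/volumes/iSCSI_1/VM1/VM1_thin.vmdk
vmkfstools -c 90G -d eagerzeroedthick /vmfs/volumes/iSCSI_1/VM1/VM1_ezt.vmdk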
Before discussing which type of virtual disk to use, it is worth mentioning the disk controllers and whether Dell prefers one type of controller over another.
There are three types of controllers as of the current vSphere 7.0 release: SCSI, NVMe, and SATA. When a new VM is created, the GuestOS selection drives the controller that VMware selects by default. For instance, if macOS is selected, VMware assigns a SATA controller. If Windows or Linux is selected, a SCSI controller is assigned. In addition, VMware also assigns a SCSI controller “type”: BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, or VMware Paravirtual (PVSCSI).
For most Windows GuestOS versions, VMware assigns LSI Logic SAS; for Linux, it assigns PVSCSI. Though there are some differences, these two types are similar and can be used interchangeably.
For most situations, Dell Technologies recommends keeping the VMware default assignment of the controller and type. For environments with intensive I/O, however, PVSCSI generally provides better performance and uses fewer CPU cycles. PVSCSI is recommended under those conditions, but there is no requirement to change the controller type.
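As an illustration, the controller type is recorded in the VM configuration (.vmx) file. The entries below are a sketch showing how an LSI Logic SAS assignment differs from a PVSCSI assignment; the controller number (scsi0) is an example, and the GuestOS must have the PVSCSI driver available before the type is switched.

# Example .vmx entries for the first SCSI controller (scsi0)
scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"   # LSI Logic SAS
# For VMware Paravirtual (PVSCSI), the same key would instead read:
# scsi0.virtualDev = "pvscsi"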
The NVMe controller is designed to improve performance while reducing CPU overhead when accessing high-performing NVMe media. Hardware version 13 is required along with a supported GuestOS. This requirement usually entails installing a driver if the operating system does not have one natively. The NVMe controller is assigned by default when using a newer Windows GuestOS (for example, Windows Server 2016). There are some limitations with the NVMe controller, and it does not work with all applications, so thoroughly reviewing the requirements is important. VMware indicates that the NVMe controller may provide better performance for NVMe media. However, VMware also clearly states that when using the HPP plug-in for pathing, using PVSCSI is a best practice. Ultimately, for customers using NVMeoF on the PowerMax, either controller type is acceptable. Testing is always recommended.
Note: Changing the controller type or controller itself on an existing VM may require that certain steps are completed. See VMware article 341383 for more detail.
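To see which multipathing plug-in owns a device when weighing the HPP best practice above, the claiming plug-in can be listed from the ESXi shell. The commands below are a sketch; the output lists the devices claimed by each plug-in.

# List devices claimed by the native multipathing plug-in (NMP)
esxcli storage nmp device list
# List devices claimed by the high-performance plug-in (HPP), typically NVMe-oF devices
esxcli storage hpp device list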
The default option when creating, cloning, or converting virtual disks in the vSphere Client is Thick Provision Lazy Zeroed, commonly known as the zeroedthick format. In this allocation scheme, the storage that is required for the virtual disk is reserved in the datastore, but the VMware kernel does not initialize all the blocks. The guest operating system initializes blocks as writes to previously uninitialized blocks are performed. If the VMFS attempts to read blocks of data that it has not previously written, it returns zeros to the guest operating system. This is true even when information from a previous allocation (data “deleted” on the host, but not deallocated on the thin pool) is available. The VMFS does not present stale data to the guest operating system when the virtual disk is created using the zeroedthick format.
Since the VMFS volume reports the virtual disk as fully allocated, the risk of oversubscribing the datastore is removed; oversubscription can occur only at the array layer, not at the VMware layer. The virtual disks do not require more space on the VMFS volume because their reserved size is static with this allocation mechanism. More space is needed only if additional virtual disks are added. For example, in Figure 94, a single VM resides on the 500 GB datastore iSCSI_1. The VM has a single 90 GB virtual disk that uses the zeroedthick allocation method. The datastore browser reports the virtual disk as consuming the full 90 GB.
However, since the VMware kernel does not initialize unused blocks, the full 90 GB is not consumed on the thin device backing the datastore. In the example, the virtual disk resides on the thin device 0003A. No space is consumed, as seen in Figure 95.
If the VMware administrator does not have access to Unisphere, VSI is recommended for viewing the relationship between the back-end device and the datastore. The highlighted area in Figure 96 is pulled directly from the array and shows the current allocation, again zero.
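If neither Unisphere nor VSI is available, the device backing a datastore can still be identified from the ESXi shell, and the resulting device identifier can then be matched to the PowerMax device in whatever array tool is available. A minimal sketch; the naa identifier shown is a placeholder.

# Show which device (naa identifier) backs each VMFS datastore
esxcli storage vmfs extent list
# Show details, including vendor and size, for a specific device
esxcli storage core device list -d naa.60000970000197xxxxxxxxxxxxxxxxxx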
Like zeroedthick, the thin allocation mechanism is also virtual provisioning-friendly. However, as explained in this section, it should be used with caution together with array virtual provisioning. Thin virtual disks increase the efficiency of storage utilization in virtualized environments by using only the amount of underlying storage resources needed for that virtual disk, exactly like zeroedthick. But unlike zeroedthick, thin virtual disks do not reserve space on the VMFS volume, which allows more virtual disks per VMFS. Upon the initial provisioning of the virtual disk, the disk is provided with an allocation equal to one block size worth of storage from the datastore. As that space is filled, additional chunks of storage in multiples of the VMFS block size (1 MB) are allocated, so the demand on underlying storage grows as the virtual disk fills.
Using the same-sized VM as in Figure 94, a single 90 GB virtual disk is created that uses the thin allocation method rather than zeroedthick. Note in Figure 97 that the VMDK now reports as consuming zero space.
As is the case with the zeroedthick allocation format, the VMware kernel does not initialize unused blocks for thin virtual disks. The full 90 GB is not reserved on the VMFS or consumed on the thin device on the array. The virtual disk that is presented, as shown in Figure 97, resides on thin device 0003A. It consumes no space, as shown in Figure 98.
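The difference between the provisioned size and the space actually consumed by a thin VMDK can also be observed from the ESXi shell. This is a sketch assuming a VM folder on the iSCSI_1 datastore; on VMFS, du reports the blocks actually allocated to the flat file, while ls reports the provisioned size.

# Provisioned size of the thin virtual disk (reports the full 90 GB)
ls -lh /vmfs/volumes/iSCSI_1/VM1/VM1-flat.vmdk
# Space actually allocated on the datastore (near zero for a newly created thin disk)
du -h /vmfs/volumes/iSCSI_1/VM1/VM1-flat.vmdk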
With the eagerzeroedthick allocation mechanism (known as Thick Provision Eager Zeroed in the vSphere Client, or EZT), space that is required for the virtual disk is completely allocated and written to at creation time. This condition leads to a full reservation of space on the VMFS datastore and on the underlying PowerMax device. So, it takes longer to create disks in this format than to create other types of disks.
A single 90 GB virtual disk that uses the eagerzeroedthick allocation method is shown in Figure 99. The datastore browser reports the virtual disk as consuming the entire 90 GB on the volume, like zeroedthick but unlike thin.
On most thin arrays, eagerzeroedthick causes the array to reserve an amount of space equal to what VMware reserves. However, the PowerMax does not work this way. The eagerzeroedthick virtual disk resides on thin device 0003A and, as highlighted in Figure 100, no space is consumed. The reason is addressed in the next sections.
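Existing virtual disks can also be converted to eagerzeroedthick after the fact with vmkfstools. A minimal sketch, assuming the VM is powered off and with placeholder paths:

# Inflate a thin virtual disk to eagerzeroedthick
vmkfstools -j /vmfs/volumes/iSCSI_1/VM1/VM1.vmdk
# Eager-zero (convert) an existing zeroedthick virtual disk in place
vmkfstools -k /vmfs/volumes/iSCSI_1/VM1/VM1.vmdk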
VMware vSphere offers various VMware Storage APIs for Array Integration (VAAI) that provide the capability to offload specific storage operations to Dell PowerMax. This capability increases both overall system performance and efficiency. One of these supported primitives is Block Zero, which directly impacts eagerzeroedthick disks. Block Zero is also known as “write same” since it uses the WRITE SAME SCSI operation (0x93).
Using software to zero out a virtual disk is a slow and inefficient process. The typical back and forth from host to array is avoided by offloading this task to the array. The full capability of the array can be used to accomplish the zeroing.
When an EZT virtual disk is created on VMware with a PowerMax backend, the PowerMax employs Block Zero to zero out the disk. As explained, most arrays then reserve space equal to the VMDK size; however, the PowerMax architecture is different and no space is reserved. The PowerMax can be thought of as a single pool of storage, since new extents can be allocated from any thin pool on the array. Therefore, the only way that a thin device can run out of space is if the entire array runs out of space. It was deemed unnecessary to reserve space on the PowerMax when using Block Zero. Even so, the array changes the extents to prime them for data and must still process the write same requests from VMware, so the creation time for an EZT VMDK is not as quick as the other formats.
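Whether a device presented from the array supports the Block Zero (write same) primitive can be confirmed per device from the ESXi shell. A sketch, with the naa identifier as a placeholder; the Zero Status field in the output indicates support.

# Display VAAI primitive support for a specific device
esxcli storage core device vaai status get -d naa.60000970000197xxxxxxxxxxxxxxxxxx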
Block Zero is enabled by default and does not require any user intervention. If desired, Block Zero can be disabled on an ESXi host by using the vSphere Client or command-line utilities. Block Zero is disabled or enabled by altering the setting DataMover.HardwareAcceleratedInit in the ESXi host advanced settings.
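For reference, the same setting can be inspected and changed from the ESXi shell. This is a sketch of the esxcli form of the advanced option named above; a value of 0 disables Block Zero and 1 re-enables it.

# Display the current value of the Block Zero (hardware-accelerated init) setting
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
# Disable Block Zero (not generally recommended)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
# Re-enable Block Zero
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1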