Dell EMC PowerMax and VMAX All Flash: eNAS Best Practices: File system considerations
You must specify the maximum size to which the file system can automatically extend; the upper limit is 16 TB. If the maximum size is larger than the actual file system size, the maximum size is what is presented to administrators as the file system size.
Note: Enabling automatic file system extension does not reserve space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that automatic extension can succeed. If the available storage is less than the maximum size setting, automatic extension fails: administrators receive an error message when the file system becomes full, even though the file system appears to have free space, and the file system must then be extended manually.
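Because the space is not reserved, it is worth confirming that the backing pool actually has enough free capacity to reach the maximum size before relying on automatic extension. A quick check from the Control Station might look like the following sketch (the pool name is an illustrative placeholder):

```shell
# Report total, used, and available capacity for a storage pool.
# "mypool" is an example name; use nas_pool -list to see your pools.
nas_pool -size mypool
```

Compare the available capacity reported here against the maximum size you intend to set for the file system.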
For more information about auto extending a file system, refer to the Managing Volumes and File Systems with VNX AVM document.
Automatic Volume Management (AVM) is a Dell EMC storage feature that automates volume creation and volume management. By using the VNX command options and interfaces that support AVM, system administrators can create and extend file systems without creating and managing the underlying volumes.
The automatic file system extension feature automatically extends file systems that are created with AVM when the file systems reach their specified high water mark (HWM). Thin provisioning works with automatic file system extension and allows the file system to grow on demand. With thin provisioning, the space presented to the user or application is the maximum size setting, while only a portion of that space is actually allocated to the file system.
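As a sketch, creating an AVM file system with automatic extension and thin (virtual) provisioning enabled might look like the following. The names and sizes are illustrative, and the exact flags (in particular `-vp` for virtual provisioning) can vary by release, so verify them against the CLI reference for your version:

```shell
# Create a 100 GB file system from pool "mypool" with automatic
# extension, a 90% high water mark, and a 1 TB maximum size.
# With virtual provisioning (-vp yes), clients see the 1 TB
# maximum while only the allocated space is consumed from the pool.
nas_fs -name fs01 -create size=100G pool=mypool \
    -auto_extend yes -vp yes -hwm 90% -max_size 1T
```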
The AVM feature automatically creates and manages file system storage. AVM is storage-system independent and supports existing requirements for automatic storage allocation (SnapSure, SRDF, and IP replication).
It is recommended that no file system exceed 90% utilization. If a file system does reach this threshold, extend it. Reserving at least 10% of the file system capacity allows for:
If you want to create multiple file systems from the same Storage Group/nas_pool, use the slice volumes option. Otherwise, the file system you are creating consumes all of the specified Storage Group/nas_pool, making it unavailable for any other file systems.
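For example, creating two file systems from the same pool with the slice volumes option might look like this (file system names, sizes, and pool name are illustrative):

```shell
# slice=y carves each file system from slice volumes, so the
# remaining pool capacity stays available for other file systems.
nas_fs -name fs01 -create size=200G pool=mypool -option slice=y
nas_fs -name fs02 -create size=200G pool=mypool -option slice=y
```

Without `slice=y`, the first create would consume the entire pool and the second would fail for lack of space.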
If you use automatic extension of file systems, set an HWM. The HWM specifies the utilization percentage at which the file system is automatically extended. The default HWM is 90%. Once a file system reaches its HWM, it is extended, striping across all devices in the pool, until its used capacity falls below the HWM again. Depending on the number and size of the volumes in the underlying storage pool, the resulting utilization might land just 1% below the HWM or considerably lower.
Setting a maximum file system limit specifies the maximum size that a file system can grow to. To specify this maximum, type an integer and specify T for terabyte, G for gigabyte (the default), or M for megabyte.
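Combining the two settings, enabling automatic extension on an existing file system with an explicit HWM and maximum size might look like the following sketch (the file system name and values are illustrative):

```shell
# Extend fs01 automatically when it reaches 85% utilization,
# up to a ceiling of 2 TB (T = terabyte; G is the default unit).
nas_fs -modify fs01 -auto_extend yes -hwm 85% -max_size 2T
```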
If you do not set either of these values but do set up automatic extension of file systems:
If you host an application with a heavy I/O profile on eNAS, you can set a host I/O limit on the storage group that the application uses. This limit caps the amount of traffic that the array services for the SG, regardless of the SLO set against it.
The I/O limit set on a storage group provisioned for eNAS applies to all file systems created on the volumes in that storage group. If the host I/O limit set at the storage group level needs to map transparently to the corresponding eNAS file system, there must be a one-to-one correlation between them.
Assigning a specific host I/O limit for IOPS, for example, to a storage group (file system) with low-performance requirements can ensure that a spike in I/O demand does not saturate its storage, inadvertently cause FAST to migrate extents to higher tiers, or overload the storage and affect the performance of more critical applications.
Placing a specific IOPS limit on a storage group limits the total IOPS for the storage group. However, it does not prevent FAST from moving data based on the SLO for that group. For example, a storage group with Gold SLO may have data in both EFD and hard-drive tiers to satisfy SLO compliance yet be limited to the IOPS set by Host I/O Limits.