When selecting the appropriate Azure storage, the following comparison can help refine your requirements; a rough sizing sketch follows the comparison.
You need access to the Azure account used for Dell APEX Block Storage, along with the appropriate permissions from your Azure infrastructure administrator.
Because the Azure infrastructure accrues cost during the deployment process, it is recommended to have an approved budget for the initial deployment, even if it is only for testing.
Azure Premium managed disks – Balanced use case:
- Standard_F48s_v2 series VM
- Simple and easy to deploy
- Inherently persistent
- Suited to customers that power nodes on and off regularly
- Requires more instances to reach higher performance levels
- Storage is software attached and handled by Azure

Locally attached NVMe – Performance use case:
- Standard_L32as_v3 and Standard_L64as_v3 series VMs
- Storage is locally attached at the instance level
- Higher performance with fewer instances (up to 10x)
- No additional storage charges, but capacity requirements may call for more instances, which do incur an instance cost
- Turning off all nodes of the cluster requires additional configuration to preserve data and maintain resilience
- NVMe drives are ephemeral: data must be backed up to avoid data loss if the Dell SDS cluster is shut down
- DDVE to S3 can be used as a backup solution
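To make the trade-off concrete, the following is a minimal sizing sketch in Python. The per-node capacity and IOPS figures (PREMIUM_DISK_NODE, LOCAL_NVME_NODE) are hypothetical placeholders, not published Dell or Azure numbers; substitute the results of your own sizing exercise.

```python
from math import ceil

# Hypothetical per-node characteristics (placeholders, not published figures).
PREMIUM_DISK_NODE = {"usable_tb": 30, "iops": 80_000}   # e.g., Standard_F48s_v2 + Premium managed disks
LOCAL_NVME_NODE = {"usable_tb": 14, "iops": 800_000}    # e.g., Standard_L64as_v3 local NVMe

def nodes_required(profile: dict, capacity_tb: float, target_iops: int) -> int:
    """Return how many nodes satisfy both the capacity and the IOPS target."""
    by_capacity = ceil(capacity_tb / profile["usable_tb"])
    by_iops = ceil(target_iops / profile["iops"])
    # At least three nodes, matching the minimum of one device per fault set.
    return max(by_capacity, by_iops, 3)

if __name__ == "__main__":
    capacity_tb, target_iops = 100, 1_500_000
    print("Premium managed disks:", nodes_required(PREMIUM_DISK_NODE, capacity_tb, target_iops), "nodes")
    print("Locally attached NVMe:", nodes_required(LOCAL_NVME_NODE, capacity_tb, target_iops), "nodes")
```

With these placeholder figures, the NVMe profile reaches the IOPS target with far fewer nodes, which mirrors the comparison above: the managed-disk option scales performance by adding instances, while the NVMe option trades persistence for density.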
The following table summarizes the supported capabilities and system limits of Dell APEX Block Storage for Microsoft Azure; a configuration-check sketch follows the table.
Item | Limit |
System raw capacity | 16 PB |
Device size | Minimum: 240 GB; Maximum: 15.36 TB (SSD) |
Minimum storage pool size | 720 GB |
Minimum devices (drives) per storage pool | 3, one per fault set |
Volume size | Minimum: 8 GB; Maximum: 1 PB |
Maximum file system partitions per volume | 15 |
Maximum total number of volumes and snapshots in a system | 131,072 |
Maximum total number of volumes and snapshots in a protection domain | 32,768 |
Maximum total number of volumes and snapshots per storage pool | 32,768 |
Maximum number of snapshots in a single vTree | 126 |
Maximum raw capacity per protection domain | 8 PB |
Maximum raw capacity per SDS | 160 TB (medium granularity) |
Maximum SDCs per system | 2048 |
Maximum SDSs per system | 512 |
Maximum SDSs per protection domain | 128 |
Maximum devices (drives) per SDS server | 64 |
Maximum devices per protection domain | 8192 |
Maximum devices per storage pool | 300 |
Total size of all volumes per storage pool | 4 PB |
Maximum volumes that can be mapped to a single SDC | 1024 |
System over-provisioning factor | 5x of net capacity per medium granularity (MG) layout |
Maximum protection domains per system | 256 |
Maximum storage pools per system | 1024 |
Maximum storage pools per protection domain | 64 |
Maximum fault sets per protection domain | 64 |
Maximum number of snapshots a snapshot policy can be defined to retain per vTree (not including locked snapshots) | 60 |
Maximum snapshot policies per system | 1000 |
Maximum user accounts | 256 |
Maximum number of concurrent logged-in management clients (GUI/REST/CLI) | 128 |
Maximum volumes that can be mapped by API (GUI/REST/CLI) concurrently | 1024 |
Maximum number of configured syslog servers | 16 (must be an even number) |
Volumes per local consistency group (snapshot) | 1024 |
Maximum number of volume-to-SDC mappings per system | 262,143 |
SCSI-3 reservation type | Write Exclusive - Registrants Only |
Number of destination systems for replication | 4 |
Maximum number of SDRs per system | 128 |
Maximum number of replication consistency groups (RCGs) | 1024 |
Maximum replication pairs in RCG with initial copy | 1024 |
Maximum number of volume pairs per RCG | 1024 |
Maximum volume pairs per system | 32,000 |
Maximum number of remote protection domains | 6 |
Maximum number of copies per RCG | 1 |
Recovery Point Objective (RPO) | Minimum: 15 seconds; Maximum: 1 hour |
Maximum replicated volume size | 64 TB |
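The limits above can be encoded and checked before a deployment is scripted. The following is a minimal validation sketch; the plan dictionary and its field names are assumptions for illustration, and only a few limits from the table are included.

```python
# Selected limits copied from the table above.
LIMITS = {
    "sds_per_protection_domain": 128,
    "devices_per_sds": 64,
    "volumes_mapped_per_sdc": 1_024,
    "min_volume_size_gb": 8,
    "max_volume_size_gb": 1_024 * 1_024,  # 1 PB expressed in GB
}

def validate_plan(plan: dict) -> list[str]:
    """Return a list of limit violations found in a planned deployment."""
    issues = []
    if plan["sds_per_protection_domain"] > LIMITS["sds_per_protection_domain"]:
        issues.append("More than 128 SDSs in a protection domain.")
    if plan["devices_per_sds"] > LIMITS["devices_per_sds"]:
        issues.append("More than 64 devices on a single SDS server.")
    if plan["volumes_mapped_per_sdc"] > LIMITS["volumes_mapped_per_sdc"]:
        issues.append("More than 1024 volumes mapped to a single SDC.")
    if not LIMITS["min_volume_size_gb"] <= plan["volume_size_gb"] <= LIMITS["max_volume_size_gb"]:
        issues.append("Volume size outside the 8 GB to 1 PB range.")
    return issues

if __name__ == "__main__":
    # Hypothetical plan values for illustration only.
    plan = {
        "sds_per_protection_domain": 64,
        "devices_per_sds": 10,
        "volumes_mapped_per_sdc": 200,
        "volume_size_gb": 4_096,
    }
    print(validate_plan(plan) or "Plan is within the published limits.")
```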