Remote Direct Memory Access VM deployment and sizing recommendations
Azure Stack HCI supports Remote Direct Memory Access (RDMA) through either the Internet Wide Area RDMA Protocol (iWARP) or the RDMA over Converged Ethernet (RoCE) protocol. Switch Embedded Teaming (SET) supports teaming RDMA adapter ports. With Azure Stack HCI, if a single adapter has RDMA capabilities, it is identified as such during deployment and assigned to the backend network for storage traffic to maximize performance.
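As a quick sanity check, the RDMA capability of host adapters and a SET-enabled virtual switch can be inspected with the built-in networking and Hyper-V cmdlets. This is a minimal sketch; the switch and adapter names ("SETSwitch", "NIC1", "NIC2") are placeholders for your environment.

```powershell
# List adapters and whether RDMA is enabled on each (iWARP- or RoCE-capable NICs)
Get-NetAdapterRdma | Format-Table Name, Enabled, InterfaceDescription

# Create a SET-enabled virtual switch that teams two RDMA-capable ports
# ("NIC1" and "NIC2" are placeholder adapter names)
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true
```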
When deploying VMs in a clustered environment, customers should follow a few rules to optimize for performance. Size VMs so that they do not span NUMA nodes, whether through vCPU core or memory assignments. NUMA spanning can be disabled on each Hyper-V node in the host settings.
Note: Use this setting with extreme caution. If a VM that spans multiple NUMA nodes is created on, or migrated to, a host with NUMA spanning disabled, the VM will fail to power on. This setting also cannot be used with dynamic memory.
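A sketch of how NUMA spanning can be inspected and disabled on a Hyper-V host with standard cmdlets, assuming it is run on each node; the VM name "SQLVM01" is a placeholder:

```powershell
# Show the host's NUMA topology so VMs can be sized to fit inside one node
Get-VMHostNumaNode

# Disable NUMA spanning on this host (restarting the Hyper-V Virtual
# Machine Management service applies the change)
Set-VMHost -NumaSpanningEnabled $false
Restart-Service vmms

# Check a given VM's vCPU count against its per-NUMA-node limit
Get-VMProcessor -VMName "SQLVM01" | Format-Table VMName, Count, MaximumCountPerNumaNode
```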
Customers should be careful not to over-subscribe resources, including the number of physical cores and the amount of memory, when performance is a priority. For instance, over-committing VM vCPUs is possible but lowers overall performance.
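One way to check for vCPU over-subscription on a host is to compare the total vCPUs assigned across its VMs with the host's logical processor count; a minimal sketch using standard Hyper-V cmdlets:

```powershell
# Total vCPUs assigned across all VMs on this host
$assigned = (Get-VM | Measure-Object -Property ProcessorCount -Sum).Sum

# Logical processors available on the host
$available = (Get-VMHost).LogicalProcessorCount

# A ratio above 1.0 indicates vCPU over-commitment
"{0} vCPUs assigned / {1} logical processors (ratio {2:N2})" -f $assigned, $available, ($assigned / $available)
```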
Choosing the right virtual disk sizes, memory assignments, and vCPU counts for your configuration, and not over-subscribing resources, should provide a consistent architecture with predictable performance.
The following list includes best practices for configuring the Windows operating system: