NFSv3 is the ubiquitous protocol for clients accessing storage due to the maturity of the protocol version, ease of implementation, and wide availability of client and server stacks.
There are some useful configuration settings to keep in mind when using a OneFS powered cluster with NFS clients in a performance-oriented environment.
For NFSv3 and NFSv4, the maximum read and write sizes (rsize and wsize) are 1 MB; the OneFS default read size is 128 KB. When you mount NFS exports from a cluster, a larger read and write size for remote procedure calls can improve throughput. An NFS client negotiates the largest size both sides support by default, so explicitly setting a smaller value on the client overrides that negotiation and can undermine performance.
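On a Linux client, the negotiated rsize and wsize can be verified after mounting by inspecting /proc/mounts (or the output of nfsstat -m). A minimal sketch, using a sample /proc/mounts entry with illustrative values in place of a live mount:

```shell
# Sample /proc/mounts entry for an NFSv3 mount (hypothetical server and paths)
line='cluster:/ifs/data /mnt/data nfs rw,vers=3,rsize=1048576,wsize=1048576,proto=tcp 0 0'

# Extract the negotiated read and write sizes from the mount options
rsize=$(echo "$line" | grep -o 'rsize=[0-9]*' | cut -d= -f2)
wsize=$(echo "$line" | grep -o 'wsize=[0-9]*' | cut -d= -f2)

# With no explicit rsize/wsize on the mount, both should negotiate to 1 MB
echo "rsize=$rsize wsize=$wsize"
```

Values well below 1048576 here would indicate that a client-side override is capping transfer sizes.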
For performance workloads, the recommendation is to avoid explicitly setting the rsize or wsize parameters on NFS clients when mounting a cluster's NFS exports, whether directly or via the automounter. Instead, for NFSv3 clients, use the following mount parameters:
mount -o vers=3,rw,tcp,hard,intr,retry=2,retrans=5,timeo=600 <server>:<export> <mountpoint>
For NFS clients that support it, the READDIRPLUS call can improve performance by prefetching file handles, attribute information, and directory entries, plus information that allows the client to request additional directory entries in a subsequent READDIRPLUS transaction. This relieves the client from having to query the server separately for each entry.
For an environment with a high file count, the readdirplus prefetch can be configured to a value higher than the default value of 10. For a low file count environment, you can experiment with setting it lower than the default.
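A sketch of adjusting the prefetch count from the cluster CLI. The sysctl name below is an assumption drawn from OneFS tuning guidance and should be verified against your release before use:

```shell
# Raise the READDIRPLUS prefetch count for high-file-count environments
# (sysctl name is an assumption; the documented default is 10)
isi_sysctl_cluster vfs.nfsrd.readdirplus_prefetch=128
```

As with any prefetch tuning, benchmark with a representative directory listing workload before and after the change.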
Another recommendation for performance NFS workflows is to use asynchronous (async) mounts from the client. Conversely, using sync as a client mount option makes all write operations synchronous, usually resulting in poor write performance. Use sync mounts only when a client application depends on synchronous write semantics but does not request them itself (for example, by opening files with O_SYNC).
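To illustrate the difference, two client mount sketches; the server name and export path are hypothetical placeholders:

```shell
# async (the Linux default): writes are buffered on the client and flushed
# in larger batches - recommended for performance workloads
mount -o vers=3,rw,tcp,hard,async cluster:/ifs/data /mnt/data

# sync: every write must reach stable storage before the call returns -
# expect substantially lower write throughput
mount -o vers=3,rw,tcp,hard,sync cluster:/ifs/data /mnt/data
```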
The number of threads used by the OneFS NFS server is dynamically allocated and auto-tuned, and is dependent on the amount of available RAM.
As a conservative best practice, active NFS v3 or v4 connections should be kept under 1,000, where possible. Although no maximum limit for NFS connections has been established, the number of available TCP sockets can limit the number of NFS connections. The number of connections that a node can process depends on the ratio of active-to-idle connections as well as the resources available to process the sessions. Monitoring the number of NFS connections to each node helps prevent overloading a node with connections.
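One simple way to monitor per-node connection counts is to count established TCP sessions on the NFS port (2049). A sketch that parses sample `ss -tn` style output; in practice you would run `ss -tn` (or `netstat -an`) on each node and apply the same filter:

```shell
# Sample established-connection lines (illustrative addresses);
# fields: state, recv-q, send-q, local addr:port, peer addr:port
sample='ESTAB 0 0 10.1.1.10:2049 10.1.2.21:876
ESTAB 0 0 10.1.1.10:2049 10.1.2.22:901
ESTAB 0 0 10.1.1.10:443  10.1.2.23:555'

# Keep only sessions whose local port is 2049 (NFS) and count them
count=$(echo "$sample" | awk '$4 ~ /:2049$/' | wc -l)
echo "active NFS connections: $count"
```

Tracking this count over time per node makes it easy to spot nodes approaching the conservative 1,000-connection guideline.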
The recommended limit for NFS exports per cluster is 40,000. To maximize performance, configure NFS exports for asynchronous commit.
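On the cluster side, asynchronous commit is configured per export. A sketch assuming export ID 1; the ID and option spelling are assumptions, so confirm them with `isi nfs exports modify --help` on your release:

```shell
# Enable asynchronous commit for export ID 1 (ID and flag name are
# assumptions for illustration)
isi nfs exports modify 1 --commit-asynchronous true

# Confirm the change
isi nfs exports view 1
```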
For larger NFS environments, consider the following:
OneFS 9.2 and later versions include Remote Direct Memory Access (RDMA) support for applications and clients with NFS over RDMA. This allows substantially higher throughput, especially for single-connection and read-intensive workloads, while also reducing both cluster and client CPU utilization. Specifically, OneFS 9.2 and later support NFSv3 over RDMA via the RoCEv2 network protocol (also known as Routable RoCE or RRoCE). New OneFS CLI and WebUI configuration options have been added, including global enablement, plus IP pool configuration, filtering, and verification of RoCEv2-capable network interfaces.
NFS over RDMA is also available on all PowerScale nodes that contain Mellanox ConnectX network adapters on the front end with either 25, 40, or 100 GbE connectivity. The ‘isi network interfaces list’ CLI command can be used to easily identify which of a cluster’s NICs support RDMA.
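For example, RDMA-capable front-end interfaces can be identified from the CLI; the grep filter below is a convenience, not part of the documented command, and output columns vary by OneFS release:

```shell
# List the cluster's network interfaces and filter for RDMA-capable NICs
isi network interfaces list --verbose | grep -i rdma
```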
On the client side, NFS clients also need RoCEv2-capable NICs and drivers, and must be running RoCEv2.
Key considerations when using NFS over RDMA include:
For more information, see the OneFS NFS Design Considerations and Best Practices white paper.