Question: Which ports are required for clients to access over NFS?
111 (portmapper, TCP/UDP), 2049 (nfsd TCP; mountd TCP/UDP; statd TCP), and 10000 (nlockmgr, TCP/UDP)
Note: These services are part of the ECS software and are not exposed by the underlying operating system.
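The port list above can be captured as a small lookup table, and basic TCP reachability from a client can be checked before attempting a mount. This is an illustrative sketch, not ECS tooling; the table mirrors the assignments stated above, and the host name is a placeholder:

```python
import socket

# Ports required for NFS access to ECS, per the answer above.
# Each service maps to (port, supported transports).
ECS_NFS_PORTS = {
    "portmapper": (111, ("tcp", "udp")),
    "nfsd": (2049, ("tcp",)),
    "mountd": (2049, ("tcp", "udp")),
    "statd": (2049, ("tcp",)),
    "nlockmgr": (10000, ("tcp", "udp")),
}

def check_tcp_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (host is a placeholder for an ECS node address):
# for service, (port, transports) in ECS_NFS_PORTS.items():
#     if "tcp" in transports:
#         print(service, port, check_tcp_port("ecs-node.example.com", port))
```

A UDP check would require an actual request/response exchange (for example, an RPC NULL call), so only the TCP side is sketched here.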
Question: What are the primary use cases for NFS on ECS?
Question: What are the implications of the "server-side" metadata caching the ECS NFS implementation uses to increase performance by reducing related disk operations?
Metadata is cached by nodes for the NFS operations they serve to clients. The cache allows metadata to be served more quickly than if disk access were required for each operation. Changes to metadata are not globally tracked across nodes and as such are not reflected instantly on other nodes. If client1 and client2 both connect to the same ECS node, both see the same information, since it is served either from the same cache or from disk.
Metadata in cache is considered live until it times out. This means that if a change is made directly on disk for an object, and a client subsequently performs a listing operation on that object, older data from the cache is returned to the client until the cache entry expires. After expiration, requests for the metadata are served from disk and the cache is repopulated with the most recent information. If client1 and client2 connect to different ECS nodes, there is a possibility that they see different information while the related metadata remains in cache, until the cache times out. In short, metadata is cached locally to each ECS node and is not globally coherent.
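The timeout behavior described above can be modeled with a simple TTL cache. This is an illustrative sketch, not ECS code; the class name, the TTL value, and the use of a dict as a stand-in for on-disk metadata are all assumptions:

```python
import time

class TtlMetadataCache:
    """Per-node metadata cache: entries are served until they expire,
    after which the next request falls through to 'disk'."""

    def __init__(self, ttl_seconds: float, disk: dict):
        self.ttl = ttl_seconds
        self.disk = disk    # stand-in for authoritative on-disk metadata
        self._cache = {}    # key -> (value, expiry_time)

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value        # possibly stale until the TTL elapses
        # Cache miss or expired entry: read from disk, repopulate cache.
        value = self.disk[key]
        self._cache[key] = (value, time.monotonic() + self.ttl)
        return value

# Two "nodes" with independent caches over shared on-disk state:
disk = {"/b1/file": {"size": 100}}
node1 = TtlMetadataCache(0.05, disk)
node2 = TtlMetadataCache(0.05, disk)
node1.get("/b1/file")
node2.get("/b1/file")

disk["/b1/file"] = {"size": 200}    # change lands directly on disk
stale = node1.get("/b1/file")       # still served from node1's cache
time.sleep(0.06)
fresh = node1.get("/b1/file")       # TTL elapsed: re-read from disk
```

Until the sleep elapses, `stale` still shows the old size even though disk has changed, which is exactly the window during which two clients on different nodes can see different metadata.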
Question: Are there no known limitations or hard-coded values for max number of directories or files per directory?
There are no known limitations, but the more files a directory contains, the longer a listing of its contents takes to complete.
Question: Is there a max file size in ECS?
For NFS only, the maximum file size allowed is 16 TB.
Question: How do storage administrators configure NFS access to a bucket?
Along with creating the exports, for a user to access a file over NFS, a mapping must be created between a bucket user and a UNIX UID and GID. With this mapping, ECS can translate the UID and GID received over the wire as part of an NFS operation into a bucket user in order to determine access. ECS does not retrieve user mappings from authentication sources; all mappings must be created by a storage administrator.
Similarly, for access from a client configured with Kerberos, a mapping between principal names and UID/GID is required so that ECS can return a UID and GID over the wire to the client in its response.
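The two mapping directions described above can be sketched as a pair of administrator-maintained lookup tables. This is an illustration of the concept only; the function names, user names, principal, and IDs are all hypothetical:

```python
# Administrator-created mappings. ECS does not pull these from an
# authentication source, so every entry must be configured explicitly.
# All names and IDs below are illustrative.
uid_gid_to_bucket_user = {
    (1001, 1001): "bucket_user1",
}
principal_to_ids = {
    "alice@EXAMPLE.COM": (1001, 1001),
}

def resolve_bucket_user(uid: int, gid: int):
    """Translate the UID/GID received over the wire in an NFS operation
    into a bucket user so access can be evaluated; None = no mapping."""
    return uid_gid_to_bucket_user.get((uid, gid))

def resolve_kerberos_ids(principal: str):
    """Look up the UID/GID that would be returned over the wire to a
    Kerberos client for the given principal; None = no mapping."""
    return principal_to_ids.get(principal)
```

A request arriving with an unmapped UID/GID (or an unmapped principal) resolves to `None` here, which corresponds to the administrator not having created the required mapping.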
Question: What authentication methods are supported for ECS NFS?
ECS NFS supports the sys (AUTH_SYS), krb5, krb5i, and krb5p security flavors.
Question: Does ECS support all NFS v3 operations?
ECS NFS supports all NFSv3 procedures EXCEPT LINK (create a hard link to an object).
Question: Can files be accessed over NFS during a site outage?
Read access over NFS is available during a site outage, just as with all other protocol access. Write access over NFS depends on the zone ownership of the affected path during the outage.
For example, suppose a three-site replication group contains an Access During Outage (ADO)-enabled namespace, ns1, and ns1 contains a file-system-enabled bucket, b1, which is also configured with ADO enabled. A three-directory-deep path, /ns1/b1/dir1/dir2/dir3, exists in b1, and each directory was created by, and is owned by, a different site (zone).
If site 2, the owner of dir2, is temporarily unavailable, the contents of dir2 are read-only during the outage, while the contents of dir1 (owned by site 1) and dir3 (owned by site 3) remain writable.
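The ownership rule in the example above can be sketched as a simple availability check. This is an illustration of the behavior described, not ECS logic; the directory names and zone labels come from the example, and the function name is an assumption:

```python
# Per-directory zone ownership for /ns1/b1/dir1/dir2/dir3, matching the
# example above: each directory is owned by the site that created it.
directory_owner = {
    "dir1": "site1",
    "dir2": "site2",
    "dir3": "site3",
}

def is_writable(directory: str, unavailable_zones: set) -> bool:
    """During a site outage (ADO enabled), a directory's contents remain
    writable only if the zone that owns the directory is still available;
    otherwise they are read-only until the site recovers."""
    return directory_owner[directory] not in unavailable_zones

# Site 2 goes down, as in the example:
outage = {"site2"}
states = {d: is_writable(d, outage) for d in directory_owner}
# dir2 becomes read-only; dir1 and dir3 remain writable.
```

Note that writability is decided per directory by its owning zone, not by the path as a whole: a writable dir3 can sit below a read-only dir2.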