Blogs

Short articles related to Dell PowerScale.

  • PowerScale
  • OneFS
  • troubleshooting
  • firewall

OneFS Firewall Management and Troubleshooting

Nick Trimbee

Thu, 25 May 2023 14:41:59 -0000


In the final blog in this series, we’ll focus on step five of the OneFS firewall provisioning process and turn our attention to some of the management and monitoring considerations and troubleshooting tools associated with the firewall.

One can manage and monitor the firewall in OneFS 9.5 using the CLI, platform API, or WebUI. Because data security threats come from inside an environment as well as out, such as from a rogue IT employee, a good practice is to constrain the use of all-powerful ‘root’, ‘administrator’, and ‘sudo’ accounts as much as possible. Instead of granting cluster admins full rights, a preferred approach is to use OneFS’ comprehensive authentication, authorization, and accounting framework.

OneFS role-based access control (RBAC) can be used to explicitly limit who has access to configure and monitor the firewall. A cluster security administrator selects the desired access zone, creates a zone-aware role within it, assigns privileges, and then assigns members. For example, from the WebUI under Access > Membership and roles > Roles:

When these members log in to the cluster from a configuration interface (WebUI, platform API, or CLI), they inherit their assigned privileges.

Accessing the firewall from the WebUI and CLI in OneFS 9.5 requires the new ISI_PRIV_FIREWALL administration privilege.

# isi auth privileges -v | grep -i -A 2 firewall
         ID: ISI_PRIV_FIREWALL
Description: Configure network firewall
       Name: Firewall
   Category: Configuration
 Permission: w

This privilege can be assigned one of four permission levels for a role:

Permission Indicator   Description
-                      No permission.
R                      Read-only permission.
X                      Execute permission.
W                      Write permission.

By default, the built-in ‘SystemAdmin’ role is granted write privileges to administer the firewall, while the built-in ‘AuditAdmin’ role has read permission to view the firewall configuration and logs.
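These defaults can be verified from the CLI. For example (the same check is shown again later in this series):

# isi auth roles view SystemAdmin | grep -A2 -i firewall
             ID: ISI_PRIV_FIREWALL
     Permission: w
# isi auth roles view AuditAdmin | grep -A2 -i firewall
             ID: ISI_PRIV_FIREWALL
     Permission: r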

With OneFS RBAC, an enhanced security approach for a site could be to create two additional roles on a cluster, each with an increasing realm of trust. For example:

1.  An IT ops/helpdesk role with ‘read’ access to the firewall configuration would permit monitoring and troubleshooting the firewall, but no changes:

RBAC Role   Firewall Privilege   Permission
IT_Ops      ISI_PRIV_FIREWALL    Read

For example:

# isi auth roles create IT_Ops
# isi auth roles modify IT_Ops --add-priv-read ISI_PRIV_FIREWALL
# isi auth roles view IT_Ops | grep -A2 -i firewall
             ID: ISI_PRIV_FIREWALL
      Permission: r

2.  A Firewall Admin role would provide full firewall configuration and management rights:

RBAC Role       Firewall Privilege   Permission
FirewallAdmin   ISI_PRIV_FIREWALL    Write

For example:

# isi auth roles create FirewallAdmin
# isi auth roles modify FirewallAdmin --add-priv-write ISI_PRIV_FIREWALL
# isi auth roles view FirewallAdmin | grep -A2 -i firewall
ID: ISI_PRIV_FIREWALL
Permission: w

Note that when configuring OneFS RBAC, remember to remove the ‘ISI_PRIV_AUTH’ and ‘ISI_PRIV_ROLE’ privileges from all but the most trusted administrators.

Additionally, enterprise security management tools such as CyberArk can be incorporated to manage authentication and access control holistically across an environment. These can be configured to change passwords on trusted accounts frequently (every hour or so), require multi-level approvals prior to retrieving passwords, and track and audit password requests and trends.

OneFS firewall limits

When working with the OneFS firewall, there are some upper bounds to the configurable attributes to keep in mind. These include:

Name                     Value   Description
MAX_INTERFACES           500     Maximum number of L2 interfaces (Ethernet, VLAN, and LAGG) on a node.
MAX_SUBNETS              100     Maximum number of subnets within a OneFS cluster.
MAX_POOLS                100     Maximum number of network pools within a OneFS cluster.
DEFAULT_MAX_RULES        100     Default maximum number of rules within a firewall policy.
MAX_RULES                200     Upper limit of rules within a firewall policy.
MAX_ACTIVE_RULES         5000    Upper limit of total active rules across the whole cluster.
MAX_INACTIVE_POLICIES    200     Maximum number of policies not applied to any network subnet or pool; these are not written into the ipfw table.
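If a policy needs more than the default 100 rules, the per-policy cap can be raised up to the MAX_RULES limit with the --max-rule-num option described later in this series. A quick sketch, assuming a custom policy named fw_test1 already exists:

# isi network firewall policies modify fw_test1 --max-rule-num 200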

Firewall performance

Be aware that while the OneFS firewall can greatly enhance the network security of a cluster, its packet inspection and filtering activity does incur a slight performance penalty (generally less than 5%).

Firewall and hardening mode

If OneFS STIG hardening (that is, from ‘isi hardening apply’) is applied to a cluster with the OneFS firewall disabled, the firewall will be automatically activated. On the other hand, if the firewall is already enabled, then there will be no change and it will remain active.

Firewall and user-configurable ports

Some OneFS services allow the TCP/UDP ports on which the daemon listens to be changed. These include:

Service   CLI Command                                   Default Port
NDMP      isi ndmp settings global modify --port        10000
S3        isi s3 settings global modify --https-port    9020, 9021
SSH       isi ssh settings modify --port                22

The default ports for these services are already configured in the associated global policy rules. For example, for the S3 protocol:

# isi network firewall rules list | grep s3
default_pools_policy.rule_s3                  55     Firewall rule on s3 service                                                              allow
# isi network firewall rules view default_pools_policy.rule_s3
          ID: default_pools_policy.rule_s3
        Name: rule_s3
       Index: 55
 Description: Firewall rule on s3 service
    Protocol: TCP
   Dst Ports: 9020, 9021
Src Networks: -
   Src Ports: -
      Action: allow

Note that the global policies, or any custom policies, do not auto-update if these ports are reconfigured. This means that the firewall policies must be manually updated when changing ports. For example, if the NDMP port is changed from 10000 to 10001:

# isi ndmp settings global view
                       Service: False
                           Port: 10000
                            DMA: generic
          Bre Max Num Contexts: 64
MSB Context Retention Duration: 300
MSR Context Retention Duration: 600
        Stub File Open Timeout: 15
             Enable Redirector: False
              Enable Throttler: False
       Throttler CPU Threshold: 50
# isi ndmp settings global modify --port 10001
# isi ndmp settings global view | grep -i port
                           Port: 10001

The firewall’s NDMP rule port configuration must also be reset to 10001:

# isi network firewall rule list | grep ndmp
default_pools_policy.rule_ndmp                44     Firewall rule on ndmp service                                                            allow
# isi network firewall rule modify default_pools_policy.rule_ndmp --dst-ports 10001 --live
# isi network firewall rule view default_pools_policy.rule_ndmp | grep -i dst
   Dst Ports: 10001

Note that the --live flag is specified to enact this port change immediately.
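The same approach applies to the other user-configurable services. For example, if the S3 HTTPS port were moved from 9021 to a hypothetical 9121, the corresponding firewall rule would also need updating (the port value here is purely illustrative):

# isi s3 settings global modify --https-port 9121
# isi network firewall rules modify default_pools_policy.rule_s3 --dst-ports 9020,9121 --live
# isi network firewall rules view default_pools_policy.rule_s3 | grep -i dst
   Dst Ports: 9020, 9121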

Firewall and source-based routing

Under the hood, OneFS source-based routing (SBR) and the OneFS firewall both leverage ‘ipfw’. As such, SBR and the firewall share the single ipfw table in the kernel. However, the two features use separate ipfw table partitions.

This allows SBR and the firewall to be activated independently of each other. For example, even if the firewall is disabled, SBR can still be enabled and any configured SBR rules displayed as expected (that is, using ipfw set 0 show).

Firewall and IPv6

Note that the firewall’s global default policies have a rule allowing ICMP6 by default. For IPv6 enabled networks, ICMP6 is critical for the functioning of NDP (Neighbor Discovery Protocol). As such, when creating custom firewall policies and rules for IPv6-enabled network subnets/pools, be sure to add a rule allowing ICMP6 to support NDP. As discussed in a previous blog, an alternative (and potentially easier) approach is to clone a global policy to a new one and just customize its ruleset instead.
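For example, a minimal sketch of the clone-and-customize approach (the clone’s name is illustrative), which preserves the default ICMP6 rule while still allowing further per-pool customization:

# isi network firewall policies clone default_pools_policy ipv6_pools_policy
# isi network firewall policies modify ipv6_pools_policy --add-pools groupnet0.subnet0.pool0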

Firewall and FTP

The OneFS FTP service can work in two modes: Active and Passive. Passive mode is the default, where FTP data connections are created on top of random ephemeral ports. However, because the OneFS firewall requires fixed ports to operate, it only supports the FTP service in Active mode. Attempts to enable the firewall with FTP running in Passive mode will generate the following warning:

# isi ftp settings view | grep -i active
          Active Mode: No
# isi network firewall settings modify --enabled yes
FTP service is running in Passive mode. Enabling network firewall will lead to FTP clients having their connections blocked. To avoid this, please enable FTP active mode and ensure clients are configured in active mode before retrying. Are you sure you want to proceed and enable network firewall? (yes/[no]):

To activate the OneFS firewall in conjunction with the FTP service, first ensure that the FTP service is running in Active mode before enabling the firewall. For example:

# isi ftp settings view | grep -i enable
  FTP Service Enabled: Yes
# isi ftp settings view | grep -i active
          Active Mode: No
# isi ftp settings modify --active-mode true
# isi ftp settings view | grep -i active
          Active Mode: Yes
# isi network firewall settings modify --enabled yes

Note: Verify FTP active mode support and/or firewall settings on the client side, too.

Firewall monitoring and troubleshooting

When it comes to monitoring the OneFS firewall, the following logfiles and utilities provide a variety of information and are a good source to start investigating an issue:

Utility                       Description
/var/log/isi_firewall_d.log   Main OneFS firewall log file, which includes information from the firewall daemon.
/var/log/isi_papi_d.log       Logfile for the platform API, including firewall-related handlers.
isi_gconfig -t firewall       CLI command that displays all firewall configuration info.
ipfw show                     CLI command that displays the ipfw table residing in the FreeBSD kernel.

Note that the preceding files and command output are automatically included in logsets generated by the ‘isi_gather_info’ data collection tool.

You can run the isi_gconfig command with the ‘-q’ flag to identify any values that are not at their default settings. For example, the stock (default) isi_firewall_d gconfig context will not report any configuration entries:

# isi_gconfig -q -t firewall
[root] {version:1}

The firewall can also be run in the foreground for additional active rule reporting and debug output. For example, first shut down the isi_firewall_d service:

# isi services -a isi_firewall_d disable
The service 'isi_firewall_d' has been disabled.

Next, start up the firewall with the ‘-f’ flag.

# isi_firewall_d -f
Acquiring kevents for flxconfig
Acquiring kevents for nodeinfo
Acquiring kevents for firewall config
Initialize the firewall library
Initialize the ipfw set
ipfw: Rule added by ipfw is for temporary use and will be auto flushed soon. Use isi firewall instead.
cmd:/sbin/ipfw set enable 0 normal termination, exit code:0
isi_firewall_d is now running
Loaded master FlexNet config (rev:312)
Update the local firewall with changed files: flx_config, Node info, Firewall config
Start to update the firewall rule...
flx_config version changed!                              latest_flx_config_revision: new:312, orig:0
node_info version changed!                               latest_node_info_revision: new:1, orig:0
firewall gconfig version changed!                                latest_fw_gconfig_revision: new:17, orig:0
Start to update the firewall rule for firewall configuration (gconfig)
Start to handle the firewall configure (gconfig)
Handle the firewall policy default_pools_policy
ipfw: Rule added by ipfw is for temporary use and will be auto flushed soon. Use isi firewall instead.
32043 allow tcp from any to any 10000 in
cmd:/sbin/ipfw add 32043 set 8 allow TCP from any  to any 10000 in  normal termination, exit code:0
ipfw: Rule added by ipfw is for temporary use and will be auto flushed soon. Use isi firewall instead.
32044 allow tcp from any to any 389,636 in
cmd:/sbin/ipfw add 32044 set 8 allow TCP from any  to any 389,636 in  normal termination, exit code:0
Snip...

If the OneFS firewall is enabled and some network traffic is blocked, either this or the ipfw show CLI command will often provide the first clues.

Please note that the ipfw command should NEVER be used to modify the OneFS firewall table!

For example, say a rule is added to the default pools policy denying traffic on port 9876 from all source networks (0.0.0.0/0):

# isi network firewall rules create default_pools_policy.rule_9876 --index=100 --dst-ports 9876 --src-networks 0.0.0.0/0 --action deny --live
# isi network firewall rules view default_pools_policy.rule_9876
          ID: default_pools_policy.rule_9876
        Name: rule_9876
       Index: 100
 Description:
    Protocol: ALL
   Dst Ports: 9876
Src Networks: 0.0.0.0/0
   Src Ports: -
      Action: deny

Running ipfw show and grepping for the port will show this new rule:

# ipfw show | grep 9876
32099            0               0 deny ip from any to any 9876 in

The ipfw show command output also reports statistics on how many IP packets have matched each rule. This can be incredibly useful when investigating firewall issues. For example, a telnet session is initiated to the cluster on port 9876 from a client:

# telnet 10.224.127.8 9876
Trying 10.224.127.8...
telnet: connect to address 10.224.127.8: Operation timed out
telnet: Unable to connect to remote host

The connection attempt will time out because the port 9876 ‘deny’ rule will silently drop the packets. At the same time, the ipfw show command will increment its counter to report on the denied packets. For example:

# ipfw show | grep 9876
32099            9             540 deny ip from any to any 9876 in

If this behavior is not anticipated or desired, you can find the rule name by searching the rules list for the port number, in this case port 9876:

# isi network firewall rules list | grep 9876
default_pools_policy.rule_9876                100                                                                 deny

The offending rule can then be reverted to ‘allow’ traffic on port 9876:

# isi network firewall rules modify default_pools_policy.rule_9876 --action allow --live

Or easily deleted, if preferred:

# isi network firewall rules delete default_pools_policy.rule_9876 --live
Are you sure you want to delete firewall rule default_pools_policy.rule_9876? (yes/[no]): yes

Author: Nick Trimbee




  • Isilon
  • PowerScale
  • OneFS
  • APEX

Running PowerScale OneFS in Cloud - APEX File Storage for AWS

Lieven Lin

Wed, 24 May 2023 14:36:21 -0000


PowerScale OneFS 9.6 now brings a new offering in AWS cloud — APEX File Storage for AWS. APEX File Storage for AWS is a software-defined cloud file storage service that provides high-performance, flexible, secure, and scalable file storage for AWS environments. It is a fully customer managed service that is designed to meet the needs of enterprise-scale file workloads running on AWS.

Benefits of running OneFS in Cloud

APEX File Storage for AWS brings the OneFS distributed file system software into the public cloud, allowing users to have the same management experience in the cloud as with their on-premises PowerScale appliance.

With APEX File Storage for AWS, you can easily deploy and manage file storage on AWS, without the need for hardware or software management. The service provides a scalable and elastic storage infrastructure that can grow or shrink, according to your actual business needs.

Some of the key features and benefits of APEX File Storage for AWS include:

  • Scale-out: APEX File Storage for AWS is powered by the Dell PowerScale OneFS distributed file system. You can start with a small OneFS cluster and then expand it incrementally as your data storage requirements grow. Cluster capacity can be scaled on-demand up to 1PiB. 
  • Data management: APEX File Storage for AWS provides powerful data management capabilities, such as snapshot, data replication, and backup and restore. Because OneFS features are the same in the cloud as in on-premises, organizations can simplify operations and reduce management complexity with a consistent user experience.
  • Simplified journey to hybrid cloud: More and more organizations operate in a hybrid cloud environment, where they need to move data between on-premises and cloud-based environments. APEX File Storage for AWS can help you bridge this gap by facilitating seamless data mobility between on-premises and the cloud with native replication and by providing a consistent data management platform across both environments. Once in the cloud, customers can take advantage of enterprise-class OneFS features such as multi-protocol support, CloudPools, data reduction, and snapshots, to run their workloads in the same way as they do on-premises. APEX File Storage for AWS can use CloudPools to tier cold or infrequently accessed data to lower cost cloud storage, such as AWS S3 object storage. CloudPools extends the OneFS namespace to the private/public cloud and allows you to store much more data than the usable cluster capacity.
  • High performance: APEX File Storage for AWS delivers high-performance file storage with low-latency access to data, ensuring that you can access data quickly and efficiently.

Architecture

The architecture of APEX File Storage for AWS is based on the OneFS distributed file system, which consists of multiple cluster nodes to provide a single global namespace. Each cluster node is an instance of OneFS software that runs on an AWS EC2 instance and provides storage capacity and compute resources. The following diagram shows the architecture of APEX File Storage for AWS.

  • Availability zone: APEX File Storage for AWS is designed to run in a single AWS availability zone to get the best performance.
  • Virtual Private Cloud (VPC): APEX File Storage for AWS requires an AWS VPC to provide network connectivity.
  • OneFS cluster internal subnet: The cluster nodes communicate with each other through the internal subnet. The internal subnet must be isolated from instances that are not in the cluster, so the cluster nodes' internal network interfaces require a dedicated subnet that is not shared with other EC2 instances.
  • OneFS cluster external subnet: The cluster nodes communicate with clients through the external subnet by using different protocols, such as NFS, SMB, and S3.
  • OneFS cluster internal network interfaces: Network interfaces that are located in the internal subnet.
  • OneFS cluster external network interfaces: Network interfaces that are located in the external subnet.
  • OneFS cluster internal security group: The security group applies to the cluster internal network interfaces and allows all traffic between the cluster nodes’ internal network interfaces only.
  • OneFS cluster external security group: The security group applies to cluster external network interfaces and allows specific ingress traffic from clients.
  • Elastic Compute Cloud (EC2) instance nodes: Cluster nodes that run the OneFS filesystem backed by Elastic Block Store (EBS) volumes and that provide network bandwidth.
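To make the networking layout above more concrete, here is a minimal, hypothetical AWS CLI sketch of the underlying constructs. All IDs, names, CIDR ranges, and ports are illustrative assumptions and are not prescribed by APEX File Storage for AWS:

# aws ec2 create-vpc --cidr-block 10.0.0.0/16
# aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24
# aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.2.0/24
# aws ec2 create-security-group --group-name onefs-internal --description "OneFS cluster internal" --vpc-id vpc-0abc123
# aws ec2 authorize-security-group-ingress --group-id sg-0int123 --protocol all --source-group sg-0int123
# aws ec2 create-security-group --group-name onefs-external --description "OneFS cluster external" --vpc-id vpc-0abc123
# aws ec2 authorize-security-group-ingress --group-id sg-0ext456 --protocol tcp --port 2049 --cidr 10.0.2.0/24

In this sketch, the first subnet is dedicated to the cluster's internal interfaces and the second to the client-facing external interfaces. The internal security group only permits traffic from itself (that is, between cluster nodes), while the external security group opens specific client protocols, in this case NFS on TCP port 2049, to the client network.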

 

Supported cluster configuration

APEX File Storage for AWS provides two types of cluster configurations:

  • Solid State Drive (SSD) cluster: APEX File Storage for AWS supports clusters backed by General Purpose SSD (gp3) EBS volumes with up to 1PiB cluster raw capacity. The gp3 EBS volumes are the latest generation of General Purpose SSD volumes, and the lowest cost SSD volume offered by AWS EBS. They balance price and performance for a wide variety of workloads.

      Configuration item                 Supported options
      Cluster size                       4 to 6 nodes
      EC2 instance type                  m5dn.8xlarge, m5dn.12xlarge, m5dn.16xlarge, or m5dn.24xlarge (all nodes in a cluster must be the same instance size; see Amazon EC2 m5 instances for more details)
      EBS volume (disk) type             gp3
      EBS volume (disk) count per node   5, 6, 10, 12, 15, 18, or 20
      Single EBS volume size             1TiB - 16TiB
      Cluster raw capacity               24TiB - 1PiB
      Cluster protection level           +2n

  • Hard Disk Drive (HDD) cluster: APEX File Storage for AWS supports clusters backed by Throughput Optimized HDD (st1) EBS volumes with up to 360TiB cluster raw capacity. The st1 EBS volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large sequential workloads.

      Configuration item                 Supported options
      Cluster size                       4 to 6 nodes
      EC2 instance type                  m5dn.8xlarge, m5dn.12xlarge, m5dn.16xlarge, or m5dn.24xlarge (all nodes in a cluster must be the same instance size; see Amazon EC2 m5 instances for more details)
      EBS volume (disk) type             st1
      EBS volume (disk) count per node   5 or 6
      Single EBS volume size             4TiB or 10TiB
      Cluster raw capacity               80TiB - 360TiB
      Cluster protection level           +2n

APEX File Storage for AWS can deliver 10GB/s sequential read and 4GB/s sequential write performance as the cluster size grows. To learn more, see the APEX File Storage for AWS documentation.

Author: Lieven Lin


  • security
  • PowerScale
  • OneFS

OneFS Firewall Configuration–Part 2

Nick Trimbee

Wed, 17 May 2023 19:13:33 -0000


In the previous article in this OneFS firewall series, we reviewed the upgrade, activation, and policy selection components of the firewall provisioning process.

Now, we turn our attention to the firewall rule configuration step of the process.

As stated previously, role-based access control (RBAC) explicitly limits who has access to manage the OneFS firewall. So, ensure that the user account that will be used to enable and configure the OneFS firewall belongs to a role with the ‘ISI_PRIV_FIREWALL’ write privilege.

4. Configuring Firewall Rules

When the desired policy is created, the next step is to configure the rules. Clearly, the first step here is to decide which ports and services need securing or opening, beyond the defaults.

The following CLI syntax returns a list of all the firewall’s default services, plus their respective ports, protocols, and aliases, sorted by ascending port number:

# isi network firewall services list
Service Name     Port  Protocol   Aliases
---------------------------------------------
ftp-data         20    TCP        -
ftp              21    TCP        -
ssh              22    TCP        -
smtp             25    TCP        -
dns              53    TCP        domain
                       UDP
http             80    TCP        www
                                  www-http
kerberos         88    TCP        kerberos-sec
                       UDP
rpcbind          111   TCP        portmapper
                       UDP        sunrpc
                                  rpc.bind
ntp              123   UDP        -
dcerpc           135   TCP        epmap
                       UDP        loc-srv
netbios-ns       137   UDP        -
netbios-dgm      138   UDP        -
netbios-ssn      139   UDP        -
snmp             161   UDP        -
snmptrap         162   UDP        snmp-trap
mountd           300   TCP        nfsmountd
                       UDP
statd            302   TCP        nfsstatd
                       UDP
lockd            304   TCP        nfslockd
                       UDP
nfsrquotad       305   TCP        -
                       UDP
nfsmgmtd         306   TCP        -
                       UDP
ldap             389   TCP        -
                       UDP
https            443   TCP        -
smb              445   TCP        microsoft-ds
hdfs-datanode    585   TCP        -
asf-rmcp         623   TCP        -
                       UDP
ldaps            636   TCP        sldap
asf-secure-rmcp  664   TCP        -
                       UDP
ftps-data        989   TCP        -
ftps             990   TCP        -
nfs              2049  TCP        nfsd
                       UDP
tcp-2097         2097  TCP        -
tcp-2098         2098  TCP        -
tcp-3148         3148  TCP        -
tcp-3149         3149  TCP        -
tcp-3268         3268  TCP        -
tcp-3269         3269  TCP        -
tcp-5667         5667  TCP        -
tcp-5668         5668  TCP        -
isi_ph_rpcd      6557  TCP        -
isi_dm_d         7722  TCP        -
hdfs-namenode    8020  TCP        -
isi_webui        8080  TCP        apache2
webhdfs          8082  TCP        -
tcp-8083         8083  TCP        -
ambari-handshake 8440  TCP        -
ambari-heartbeat 8441  TCP        -
tcp-8443         8443  TCP        -
tcp-8470         8470  TCP        -
s3-http          9020  TCP        -
s3-https         9021  TCP        -
isi_esrs_d       9443  TCP        -
ndmp             10000 TCP        -
cee              12228 TCP        -
nfsrdma          20049 TCP        -
                       UDP
tcp-28080        28080 TCP        -
---------------------------------------------
Total: 55

Similarly, the following CLI command generates a list of existing rules and their associated policies, sorted in alphabetical order. For example, to show the first five rules:

# isi network firewall rules list --limit 5
ID                                            Index  Description                                                    Action
----------------------------------------------------------------------------------------------------------------------------
default_pools_policy.rule_ambari_handshake    41     Firewall rule on ambari-handshake service                      allow
default_pools_policy.rule_ambari_heartbeat    42     Firewall rule on ambari-heartbeat service                      allow
default_pools_policy.rule_catalog_search_req  50     Firewall rule on service for global catalog search requests   allow
default_pools_policy.rule_cee                 52     Firewall rule on cee service                                   allow
default_pools_policy.rule_dcerpc_tcp          18     Firewall rule on dcerpc(TCP) service                           allow
----------------------------------------------------------------------------------------------------------------------------
Total: 5

Both the ‘isi network firewall rules list’ and the ‘isi network firewall services list’ commands also have a ‘-v’ verbose option, and can return their output in csv, list, table, or json format via the ‘--format’ flag.
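For example, assuming the standard OneFS output formatting option, the first rule can be returned as JSON for scripting purposes:

# isi network firewall rules list --limit 1 --format json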

To view the detailed info for a given firewall rule, in this case the default SMB rule, use the following CLI syntax:

# isi network firewall rules view default_pools_policy.rule_smb
          ID: default_pools_policy.rule_smb
        Name: rule_smb
       Index: 3
 Description: Firewall rule on smb service
    Protocol: TCP
   Dst Ports: smb
Src Networks: -
   Src Ports: -
      Action: allow

Existing rules can be modified and new rules created and added into an existing firewall policy with the ‘isi network firewall rules create’ CLI syntax. Command options include:

  • --action: The action to take on matching packets. Allow means pass the packets, deny means silently drop them, and reject means reply with an ICMP error code.
  • id: Specifies the ID of the new rule to create. The rule must be added to an existing policy. The ID can be up to 32 alphanumeric characters long and can include underscores or hyphens, but cannot include spaces or other punctuation. Specify the rule ID in the format <policy_name>.<rule_name>. The rule name must be unique in the policy.
  • --index: The rule index within the policy. The valid value is between 1 and 99, and a lower value has a higher priority. If not specified, the rule automatically takes the next available index (before default rule 100).
  • --live: Only to be used when creating, modifying, or deleting a rule in an active policy. Such changes take effect immediately on all network subnets and pools associated with the policy. Using the --live option on a rule in an inactive policy will be rejected, and an error message returned.
  • --protocol: The protocol matched for inbound packets. Available values are tcp, udp, icmp, and all. If not configured, the default protocol all is used.
  • --dst-ports: The network ports/services provided by the storage system, identified by destination port(s). The protocol specified by --protocol is applied to these destination ports.
  • --src-networks: One or more IP addresses with corresponding netmasks to be matched by this rule. The correct format for this parameter is address/netmask, such as 192.0.2.128/25. Separate multiple address/netmask pairs with commas. Use the value 0.0.0.0/0 for “any”.
  • --src-ports: The network ports/services identified by source port(s). The protocol specified by --protocol is applied to these source ports.

Note that, unlike for firewall policies, there is no provision for cloning individual rules.

The following CLI syntax can be used to create new firewall rules. For example, to add ‘allow’ rules for the HTTP and SSH protocols, plus a ‘deny’ rule for port TCP 9876, into firewall policy fw_test1:

# isi network firewall rules create fw_test1.rule_http --index 1 --dst-ports http --src-networks 10.20.30.0/24,20.30.40.0/24 --action allow
# isi network firewall rules create fw_test1.rule_ssh --index 2 --dst-ports ssh --src-networks 10.20.30.0/24,20.30.40.0/16 --action allow
# isi network firewall rules create fw_test1.rule_tcp_9876 --index 3 --protocol tcp --dst-ports 9876 --src-networks 10.20.30.0/24,20.30.40.0/24 --action deny

When a new rule is created in a policy, if the index value is not specified, it will automatically inherit the next available number in the series (such as index=4 in this case).

# isi network firewall rules create fw_test1.rule_2049 --protocol udp --dst-ports 2049 --src-networks 30.1.0.0/16 --action deny

For a more draconian approach, a ‘deny’ rule could be created using the match-everything ‘*’ wildcard for destination ports and a 0.0.0.0/0 network and mask, which would silently drop all traffic:

# isi network firewall rules create fw_test1.rule_1234 --index=100 --dst-ports * --src-networks 0.0.0.0/0 --action deny

When modifying existing firewall rules, use the following CLI syntax, in this case to change the source network of an HTTP allow rule (index 1) in firewall policy fw_test1:

# isi network firewall rules modify fw_test1.rule_http --index 1 --protocol ip --dst-ports http --src-networks 10.1.0.0/16 --action allow

Or to modify an SSH rule (index 2) in firewall policy fw_test1, changing the action from ‘allow’ to ‘deny’:

# isi network firewall rules modify fw_test1.rule_ssh --index 2 --protocol tcp --dst-ports ssh --src-networks 10.1.0.0/16,20.2.0.0/16 --action deny

Also, to re-order the custom TCP 9876 rule from the earlier example from index 3 to index 7 in firewall policy fw_test1:

# isi network firewall rules modify fw_test1.rule_tcp_9876 --index 7

Note that any existing rules with an index of 7 or higher will have their index values incremented by one.
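The re-indexing can be verified by viewing the rule after the move (the output below is illustrative):

# isi network firewall rules view fw_test1.rule_tcp_9876 | grep -i index
       Index: 7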

When deleting a rule from a firewall policy, any rule reordering is handled automatically. If the policy has been applied to a network pool, the ‘–live’ option can be used to force the change to take effect immediately. For example, to delete the HTTP rule from the firewall policy ‘fw_test1’:

# isi network firewall rules delete fw_test1.rule_http --live

Firewall rules can also be created, modified, and deleted within a policy from the WebUI by navigating to Cluster management > Firewall Configuration > Firewall Policies. For example, to create a rule that permits SupportAssist and Secure Gateway traffic on the 10.219.0.0/16 network:

Once saved, the new rule is then displayed in the Firewall Configuration page:

5. Firewall management and monitoring.

In the next and final article in this series, we’ll turn our attention to managing, monitoring, and troubleshooting the OneFS firewall (Step 5).

Author: Nick Trimbee



  • security
  • PowerScale
  • OneFS

OneFS Firewall Configuration—Part 1

Nick Trimbee

Tue, 02 May 2023 17:21:12 -0000


The new firewall in OneFS 9.5 enhances the security of the cluster and helps prevent unauthorized access to the storage system. When enabled, the default firewall configuration allows remote systems access to a specific set of default services for data, management, and inter-cluster interfaces (network pools).

The basic OneFS firewall provisioning process is as follows:

 

Note that role-based access control (RBAC) explicitly limits who has access to manage the OneFS firewall. In addition to the ubiquitous root, the cluster’s built-in SystemAdmin role has write privileges to configure and administer the firewall.

1.  Upgrade cluster to OneFS 9.5.

First, to provision the firewall, the cluster must be running OneFS 9.5.

If you are upgrading from an earlier release, the OneFS 9.5 upgrade must be committed before enabling the firewall.

Also, be aware that configuration and management of the firewall in OneFS 9.5 requires the new ISI_PRIV_FIREWALL administration privilege. 

# isi auth privilege | grep -i firewall
ISI_PRIV_FIREWALL                   Configure network firewall

This privilege can be granted to a role with either read-only or read/write permissions. By default, the built-in SystemAdmin role is granted write privileges to administer the firewall:

# isi auth roles view SystemAdmin | grep -A2 -i firewall
             ID: ISI_PRIV_FIREWALL
     Permission: w

Additionally, the built-in AuditAdmin role has read permission to view the firewall configuration and logs, and so on:

# isi auth roles view AuditAdmin | grep -A2 -i firewall
             ID: ISI_PRIV_FIREWALL
     Permission: r

Ensure that the user account that will be used to enable and configure the OneFS firewall belongs to a role with the ISI_PRIV_FIREWALL write privilege.
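For example, a minimal sketch of granting firewall rights to a dedicated admin account. The role and user names here are hypothetical, and the --add-user option is assumed to be available for managing role membership:

# isi auth roles create FwAdmins
# isi auth roles modify FwAdmins --add-priv-write ISI_PRIV_FIREWALL
# isi auth roles modify FwAdmins --add-user fwadmin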

2.  Activate firewall.

The OneFS firewall can be either enabled or disabled, with the latter as the default state. 

The following CLI syntax will display the firewall’s global status (in this case disabled, the default):

# isi network firewall settings view
Enabled: False

Firewall activation can be easily performed from the CLI as follows:

# isi network firewall settings modify --enabled true
# isi network firewall settings view
Enabled: True

Or from the WebUI under Cluster management > Firewall Configuration > Settings:

Note that the firewall is automatically enabled when STIG hardening is applied to a cluster.

3.  Select policies.

A cluster’s existing firewall policies can be easily viewed from the CLI with the following command:

# isi network firewall policies list
ID        Pools                    Subnets                   Rules
 -----------------------------------------------------------------------------
 fw_test1  groupnet0.subnet0.pool0  groupnet0.subnet1         test_rule1
 -----------------------------------------------------------------------------
 Total: 1

Or from the WebUI under Cluster management > Firewall Configuration > Firewall Policies:

The OneFS firewall offers four main strategies when it comes to selecting a firewall policy: 

  1. Retaining the default policy
  2. Reconfiguring the default policy
  3. Cloning the default policy and reconfiguring
  4. Creating a custom firewall policy

We’ll consider each of these strategies in order:

a.  Retaining the default policy

In many cases, the default OneFS firewall policy value provides acceptable protection for a security-conscious organization. In these instances, once the OneFS firewall has been enabled on a cluster, no further configuration is required, and the cluster administrators can move on to the management and monitoring phase.

By default, all front-end cluster interfaces (network pools) use the default firewall policy. While the default policy can be modified, be aware that it is global: any change against it will affect all network pools using this default policy.

The default firewall policies assigned to each interface are as follows:

  • Default pools policy: Contains rules for the inbound default ports for TCP and UDP services in OneFS.
  • Default subnets policy: Contains rules for DNS port 53, ICMP, and ICMP6.

These can be viewed from the CLI as follows:

# isi network firewall policies view default_pools_policy
            ID: default_pools_policy
          Name: default_pools_policy
    Description: Default Firewall Pools Policy
Default Action: deny
      Max Rules: 100
          Pools: groupnet0.subnet0.pool0, groupnet0.subnet0.testpool1, groupnet0.subnet0.testpool2, groupnet0.subnet0.testpool3, groupnet0.subnet0.testpool4, groupnet0.subnet0.poolcava
        Subnets: -
          Rules: rule_ldap_tcp, rule_ldap_udp, rule_reserved_for_hw_tcp, rule_reserved_for_hw_udp, rule_isi_SyncIQ, rule_catalog_search_req, rule_lwswift, rule_session_transfer, rule_s3, rule_nfs_tcp, rule_nfs_udp, rule_smb, rule_hdfs_datanode, rule_nfsrdma_tcp, rule_nfsrdma_udp, rule_ftp_data, rule_ftps_data, rule_ftp, rule_ssh, rule_smtp, rule_http, rule_kerberos_tcp, rule_kerberos_udp, rule_rpcbind_tcp, rule_rpcbind_udp, rule_ntp, rule_dcerpc_tcp, rule_dcerpc_udp, rule_netbios_ns, rule_netbios_dgm, rule_netbios_ssn, rule_snmp, rule_snmptrap, rule_mountd_tcp, rule_mountd_udp, rule_statd_tcp, rule_statd_udp, rule_lockd_tcp, rule_lockd_udp, rule_nfsrquotad_tcp, rule_nfsrquotad_udp, rule_nfsmgmtd_tcp, rule_nfsmgmtd_udp, rule_https, rule_ldaps, rule_ftps, rule_hdfs_namenode, rule_isi_webui, rule_webhdfs, rule_ambari_handshake, rule_ambari_heartbeat, rule_isi_esrs_d, rule_ndmp, rule_isi_ph_rpcd, rule_cee, rule_icmp, rule_icmp6, rule_isi_dm_d
# isi network firewall policies view default_subnets_policy
            ID: default_subnets_policy
          Name: default_subnets_policy
    Description: Default Firewall Subnets Policy
Default Action: deny
      Max Rules: 100
          Pools: -
        Subnets: groupnet0.subnet0
          Rules: rule_subnets_dns_tcp, rule_subnets_dns_udp, rule_icmp, rule_icmp6

Or from the WebUI under Cluster management > Firewall Configuration > Firewall Policies:

b.  Reconfiguring the default policy

Depending on an organization’s threat levels or security mandates, there may be a need to restrict access to certain additional IP addresses and/or management service protocols.

If the default policy is deemed insufficient, reconfiguring the default firewall policy can be a good option if only a small number of rule changes are required. The specifics of creating, modifying, and deleting individual firewall rules are covered in the next article in this series (step 4).

Note that if new rule changes behave unexpectedly, or firewall configuration generally goes awry, OneFS does provide a “get out of jail free” card. In a pinch, the global firewall policy can be quickly and easily restored to its default values. This can be achieved with the following CLI syntax:

# isi network firewall reset-global-policy
This command will reset the global firewall policies to the original system defaults. Are you sure you want to continue? (yes/[no]):

Alternatively, the default policy can also be easily reverted from the WebUI by clicking the Reset default policies button:

 c.  Cloning the default policy and reconfiguring

Another option is cloning, which can be useful when batch modification or a large number of changes to the current policy are required. By cloning the default firewall policy, an exact copy of the existing policy and its rules is generated, but with a new policy name. For example:

# isi network firewall policies clone default_pools_policy clone_default_pools_policy
# isi network firewall policies list | grep -i clone
clone_default_pools_policy -                           

Cloning can also be initiated from the WebUI under Firewall Configuration > Firewall Policies > More Actions > Clone Policy:

Enter a name for the clone in the Policy Name field in the pop-up window, and click Save:

 Once cloned, the policy can then be easily reconfigured to suit. For example, to modify the policy fw_test1 and change its default-action from deny-all to allow-all:

# isi network firewall policies modify fw_test1 --default-action allow-all

When modifying a firewall policy, you can use the --live CLI option to force the change to take effect immediately. Note that the --live option is only valid when modifying or deleting an active custom policy, or when modifying a default policy. Such changes will take effect immediately on all network subnets and pools associated with this policy. Using the --live option on an inactive policy will be rejected, and an error message returned.

Options for creating or modifying a firewall policy include:

  • --default-action: Automatically adds a rule to the bottom of the rule set for the created policy (index 100) that either denies all or allows all traffic not matched by earlier rules.
  • --max-rule-num: By default, each policy can contain a maximum of 100 rules (including the one default rule), allowing up to 99 user-configured rules. This option expands the maximum rule number to a specified value, currently capped at 200 (that is, up to 199 user-configurable rules).
  • --add-subnets: Specifies the network subnet(s) to add to the policy, separated by commas.
  • --remove-subnets: Specifies the network subnets to remove from the policy, which then fall back to the global policy.
  • --add-pools: Specifies the network pool(s) to add to the policy, separated by commas.
  • --remove-pools: Specifies the network pools to remove from the policy, which then fall back to the global policy.
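For example, a hypothetical policy combining several of these options at creation time (the policy name and pool are illustrative):

# isi network firewall policies create fw_test2 --default-action deny --max-rule-num 200 --add-pools groupnet0.subnet0.pool1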

When you modify firewall policies, OneFS issues the following warning to verify the changes and help avoid the risk of a self-induced denial-of-service:   

# isi network firewall policies modify --pools groupnet0.subnet0.pool0 fw_test1
Changing the Firewall Policy associated with a subnet or pool may change the networks and/or services allowed to connect to OneFS. Please confirm you have selected the correct Firewall Policy and Subnets/Pools. Are you sure you want to continue? (yes/[no]): yes

Once again, having the following CLI command handy, plus console access to the cluster, is always a prudent move:

# isi network firewall reset-global-policy

Note that adding network pools or subnets to a firewall policy will cause their previous policy to be removed from them. Similarly, adding network pools or subnets to the global default policy will revert any custom policy configuration they might have. For example, to apply the firewall policy fw_test1 to IP pools groupnet0.subnet0.pool0 and groupnet0.subnet0.pool1:

# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall
       Firewall Policy: default_pools_policy
# isi network firewall policies modify fw_test1 --add-pools groupnet0.subnet0.pool0,groupnet0.subnet0.pool1
# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall
       Firewall Policy: fw_test1

Or to apply the firewall policy fw_test1 to the IP pool groupnet0.subnet0.pool0 and the subnet groupnet0.subnet0:

# isi network firewall policies modify fw_test1 --add-pools groupnet0.subnet0.pool0 --add-subnets groupnet0.subnet0
# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall
 Firewall Policy: fw_test1
# isi network subnets view groupnet0.subnet0 | grep -i firewall
 Firewall Policy: fw_test1

To reapply global policy at any time, either add the pools to the default policy:

# isi network firewall policies modify default_pools_policy --add-pools groupnet0.subnet0.pool0,groupnet0.subnet0.pool1
# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall
 Firewall Policy: default_pools_policy
# isi network subnets view groupnet0.subnet1 | grep -i firewall
 Firewall Policy: default_subnets_policy

Or remove the pool from the custom policy:

# isi network firewall policies modify fw_test1 --remove-pools groupnet0.subnet0.pool0,groupnet0.subnet0.pool1

You can also manage firewall policies on a network pool in the OneFS WebUI by going to Cluster configuration > Network configuration > External network > Edit pool details. For example:

 

Be aware that cloning is not limited to the default policy: clones can be made of any custom policy too. For example:

# isi network firewall policies clone clone_default_pools_policy fw_test1

d.  Creating a custom firewall policy

Alternatively, a custom firewall policy can also be created from scratch. This can be accomplished from the CLI using the following syntax, in this case to create a firewall policy named fw_test1:

# isi network firewall policies create fw_test1 --default-action deny
# isi network firewall policies view fw_test1
            ID: fw_test1
          Name: fw_test1
    Description:
Default Action: deny
      Max Rules: 100
          Pools: -
        Subnets: -
          Rules: -

Note that if a default-action is not specified in the CLI command syntax, it will automatically default to deny.

Firewall policies can also be configured in the OneFS WebUI by going to Cluster management > Firewall Configuration > Firewall Policies > Create Policy:

However, in contrast to the CLI, if a default-action is not specified when a policy is created in the WebUI, the automatic default is to Allow because the drop-down list works alphabetically.

If and when a firewall policy is no longer required, it can be swiftly and easily removed. For example, the following CLI syntax deletes the firewall policy fw_test1, clearing out any rules within this policy container:

# isi network firewall policies delete fw_test1
Are you sure you want to delete firewall policy fw_test1? (yes/[no]): yes

Note that the default global policies cannot be deleted.

# isi network firewall policies delete default_subnets_policy
Are you sure you want to delete firewall policy default_subnets_policy? (yes/[no]): yes
Firewall policy: Cannot delete default policy default_subnets_policy.

4.  Configure firewall rules.

 In the next article in this series, we’ll turn our attention to this step, configuring the OneFS firewall rules.

 

 

  • security
  • PowerScale
  • OneFS

OneFS Host-Based Firewall

Nick Trimbee

Wed, 26 Apr 2023 15:40:15 -0000


Among the array of security features introduced in OneFS 9.5 is a new host-based firewall. This firewall allows cluster administrators to configure policies and rules on a PowerScale cluster in order to meet the network and application management needs and security mandates of an organization.

The OneFS firewall protects the cluster’s external, or front-end, network and operates as a packet filter for inbound traffic. It is available upon installation or upgrade to OneFS 9.5 but is disabled by default in both cases. In addition to manual activation, applying the OneFS STIG hardening profile automatically enables the firewall and its default policies.

The firewall generally manages IP packet filtering in accordance with the OneFS Security Configuration Guide, especially with regard to network port usage. Packet control is governed by firewall policies, each of which has one or more individual rules.

  • Firewall policy: A set of firewall rules with a default action. Rules within a policy are matched by index, in ascending order.
  • Firewall rule: Specifies which network packets the firewall engine should match (by protocol, source ports, destination ports, and source network address) and the action to take upon a match: allow, deny, or reject.

 A security best practice is to enable the OneFS firewall using the default policies, with any adjustments as required. The recommended configuration process is as follows:

  1. Access: Ensure that the cluster uses a default SSH or HTTP port before enabling the firewall. The default firewall policies block all nondefault ports until you change the policies.
  2. Enable: Enable the OneFS firewall.
  3. Compare: Compare your cluster network port configurations against the default ports listed in Network port usage.
  4. Configure: Edit the default firewall policies to accommodate any non-standard ports in use in the cluster. Note that the firewall policies do not automatically update when port configurations are changed.
  5. Constrain: Limit access to the OneFS Web UI to specific administrator terminals.

Under the hood, the OneFS firewall is built upon the ubiquitous ipfirewall, or ipfw, which is FreeBSD’s native stateful firewall, packet filter, and traffic accounting facility.

Firewall configuration and management is available through the CLI, platform API, or WebUI, and OneFS 9.5 introduces a new Firewall Configuration page to support this. Note that the firewall is only available once a cluster is already running OneFS 9.5 and the feature has been manually enabled, activating the isi_firewall_d service. The firewall’s configuration is split between gconfig, which handles the settings and policies, and the ipfw table, which stores the rules themselves.

The firewall gracefully handles SmartConnect dynamic IP movement between nodes since firewall policies are applied per network pool. Additionally, being network pool based allows the firewall to support OneFS access zones and shared/multitenancy models. 

The individual firewall rules, which are essentially simplified wrappers around ipfw rules, work by matching packets through the 5-tuples that uniquely identify an IPv4 UDP or TCP session:

  • Source IP address
  • Source port
  • Destination IP address
  • Destination port
  • Transport protocol

The rules are then organized within a firewall policy, which can be applied to one or more network pools. 

Note that each pool can only have a single firewall policy applied to it. If there is no custom firewall policy configured for a network pool, it automatically uses the global default firewall policy.

When enabled, the OneFS firewall function is cluster wide, and all inbound packets from external interfaces will go through either the custom policy or default global policy before reaching the protocol handling pathways. Packets passed to the firewall are compared against each of the rules in the policy, in rule-number order. Multiple rules with the same number are permitted, in which case they are processed in order of insertion. When a match is found, the action corresponding to that matching rule is performed. A packet is checked against the active ruleset in multiple places in the protocol stack, and the basic flow is as follows: 

  1. Get the logical interface for incoming packets.
  2. Find all network pools assigned to this interface.
  3. Compare these network pools one by one with destination IP address to find the matching pool (either custom firewall policy, or default global policy).
  4. Compare each rule with service (protocol and destination ports) and source IP address in this pool in order of lowest index value.  If matched, perform actions according to the associated rule.
  5. If no rule matches, go to the final rule (deny all or allow all), which is specified upon policy creation.
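For example, step 3 of this flow can be confirmed from the CLI by checking which firewall policy a given network pool is associated with (pool name as used elsewhere in this series):

# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall
       Firewall Policy: default_pools_policy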

The OneFS firewall automatically reserves 20,000 rules in the ipfw table for its custom and default policies and rules. By default, each policy can have a maximum of 100 rules, including one default rule. This translates to an effective maximum of 99 user-defined rules per policy, because the default rule is reserved and cannot be modified. As such, a maximum of 198 policies can be applied to pools or subnets since the default-pools-policy and default-subnets-policy are reserved and cannot be deleted.

Additional firewall bounds and limits to keep in mind include:

Name                     Value   Description
MAX_INTERFACES           500     Maximum number of Layer 2 interfaces per node (including Ethernet, VLAN, and LAGG interfaces).
MAX_SUBNETS              100     Maximum number of subnets within a OneFS cluster.
MAX_POOLS                100     Maximum number of network pools within a OneFS cluster.
DEFAULT_MAX_RULES        100     Default maximum number of rules within a firewall policy.
MAX_RULES                200     Upper limit of rules within a firewall policy.
MAX_ACTIVE_RULES         5000    Upper limit of total active rules across the whole cluster.
MAX_INACTIVE_POLICIES    200     Maximum number of policies not applied to any network subnet or pool; these are not written into the ipfw table.

The firewall default global policy is ready to use out of the box and, unless a custom policy has been explicitly configured, all network pools use this global policy. Custom policies can be configured by either cloning and modifying an existing policy or creating one from scratch. 

  • Custom policy: A user-defined container with a set of rules. A policy can be applied to multiple network pools, but a network pool can only have one policy applied to it.
  • Firewall rule: An ipfw-like rule that can be used to restrict remote access. Each rule has an index that is valid within its policy. Index values range from 1 to 99, with lower numbers having higher priority. Source networks are described by IP address and netmask, and services can be expressed either by port number (for example, 80) or service name (for example, http, ssh, smb). The * wildcard can also be used to denote all services. Supported actions include allow, deny, and reject.
  • Default policy: A global policy to manage all default services, used for maintaining OneFS minimum running and management. There are two default policies, default-pools-policy and default-subnets-policy. Neither can be deleted, although individual rule modification is permitted in each. While deny any is the default action of these policies, the defined service rules have a default action to allow all remote access, and all packets not matching any of the rules are automatically dropped.
  • Default services: The firewall’s predefined default services include the usual suspects, such as DNS, FTP, HDFS, HTTP, HTTPS, ICMP, NDMP, NFS, NTP, S3, SMB, SNMP, SSH, and so on. A full listing is available in the isi network firewall services list CLI command output.

For a given network pool, either the global policy or a custom policy is assigned and takes effect. Additionally, all configuration changes to either policy type are managed by gconfig and are persistent across cluster reboots.

In the next article in this series we’ll take a look at the CLI and WebUI configuration and management of the OneFS firewall. 

 

 

  • security
  • PowerScale
  • OneFS
  • snapshots

OneFS Snapshot Security

Nick Trimbee

Fri, 21 Apr 2023 17:11:00 -0000


In this era of elevated cyber-crime and data security threats, there is increasing demand for immutable, tamper-proof snapshots. Often this need arises as part of a broader security mandate, ideally proactively, but oftentimes as a response to a security incident. OneFS addresses this requirement in the following ways:

On-cluster:

  • Read-only snapshots
  • Snapshot locks
  • Role-based administration

Off-cluster:

  • SyncIQ snapshot replication
  • Cyber-vaulting

Read-only snapshots

At its core, OneFS SnapshotIQ generates read-only, point-in-time, space efficient copies of a defined subset of a cluster’s data.

Only the changed blocks of a file are stored when updating OneFS snapshots, ensuring efficient storage utilization. They are also highly scalable and typically take less than a second to create, while generating little performance overhead. As such, the RPO (recovery point objective) and RTO (recovery time objective) of a OneFS snapshot can be very small and highly flexible, with the use of rich policies and schedules.

OneFS Snapshots are created manually, on a schedule, or automatically generated by OneFS to facilitate system operations. But whatever the generation method, when a snapshot has been taken, its contents cannot be manually altered.

Snapshot Locks

In addition to snapshot contents immutability, for an enhanced level of tamper-proofing, SnapshotIQ also provides the ability to lock snapshots with the ‘isi snapshot locks’ CLI syntax. This prevents snapshots from being accidentally or unintentionally deleted.

For example, a manual snapshot, ‘snaploc1’ is taken of /ifs/test:

# isi snapshot snapshots create /ifs/test --name snaploc1
# isi snapshot snapshots list | grep snaploc1
79188 snaploc1                                     /ifs/test

A lock is then placed on it (in this case lock ID=1):

# isi snapshot locks create snaploc1
# isi snapshot locks list snaploc1
ID
----
1
----
Total: 1

Attempts to delete the snapshot fail because the lock prevents its removal:

# isi snapshot snapshots delete snaploc1
Are you sure? (yes/[no]): yes
Snapshot "snaploc1" can't be deleted because it is locked

The CLI command ‘isi snapshot locks delete <lock_ID>’ can be used to clear existing snapshot locks, if desired. For example, to remove the only lock (ID=1) from snapshot ‘snaploc1’:

# isi snapshot locks list snaploc1
ID
----
1
----
Total: 1
# isi snapshot locks delete snaploc1 1
Are you sure you want to delete snapshot lock 1 from snaploc1? (yes/[no]): yes
# isi snap locks view snaploc1 1
No such lock

When the lock is removed, the snapshot can then be deleted:

# isi snapshot snapshots delete snaploc1
Are you sure? (yes/[no]): yes
# isi snapshot snapshots list| grep -i snaploc1 | wc -l
       0

Note that a snapshot can have up to a maximum of sixteen locks on it at any time. Also, lock numbers are continually incremented and not recycled upon deletion.

Like snapshot expiration, snapshot locks can also have an expiration time configured. For example, to set a lock on snapshot ‘snaploc1’ that expires at 1am on April 1st, 2024:

# isi snap lock create snaploc1 --expires '2024-04-01T01:00:00'
# isi snap lock list snaploc1
ID
----
36
----
Total: 1
# isi snap lock view snaploc1 36
     ID: 36
Comment:
Expires: 2024-04-01T01:00:00
  Count: 1

Note that if the duration period of a particular snapshot lock expires but others remain, OneFS will not delete that snapshot until all the locks on it have been deleted or expired.

The following table provides an example snapshot expiration schedule, with monthly locked snapshots to prevent deletion (a hedged CLI sketch of creating such schedules follows the table):

Snapshot Frequency | Snapshot Time | Snapshot Expiration | Max Retained Snapshots
Every other hour | Start at 12:00AM, end at 11:59AM | 1 day | 27
Every day | At 12:00AM | 1 week | -
Every week | Saturday at 12:00AM | 1 month | -
Every month | First Saturday of month at 12:00AM | Locked | -
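As a hedged CLI sketch of the hourly and daily tiers above, snapshot schedules with expirations can be created with the isi snapshot schedules create command. The path, naming patterns, and schedule strings here are illustrative, and the monthly tier would still need its locks applied separately (for example, with isi snapshot locks create), because schedules themselves do not lock snapshots:

# isi snapshot schedules create every2hours /ifs/test Backup_%m-%d-%Y_%H:%M "Every day every 2 hours" --duration 1D
# isi snapshot schedules create daily /ifs/test Backup_%m-%d-%Y "Every day at 12:00 AM" --duration 1W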

Role-based Access Control

Read-only snapshots plus locks provide physically secure snapshots on a cluster. However, anyone who can log in to the cluster with the required elevated administrator privileges can still remove locks and delete snapshots.

Because data security threats come from inside an environment as well as out, such as from a disgruntled IT employee or other internal bad actor, another key to a robust security profile is to constrain the use of all-powerful ‘root’, ‘administrator’, and ‘sudo’ accounts as much as possible. Instead of granting cluster admins full rights, a preferred security best practice is to leverage the comprehensive authentication, authorization, and accounting framework that OneFS natively provides.

OneFS role-based access control (RBAC) can be used to explicitly limit who has access to manage and delete snapshots. This granular control allows you to craft administrative roles that can create and manage snapshot schedules, but prevent their unlocking and/or deletion. Similarly, lock removal and snapshot deletion can be isolated to a specific security role (or to root only).

A cluster security administrator selects the desired access zone, creates a zone-aware role within it, assigns privileges, and then assigns members.

For example, from the WebUI under Access > Membership and roles > Roles:

When these members access the cluster through the WebUI, Platform API, or CLI, they inherit their assigned privileges.

The specific privileges that can be used to segment OneFS snapshot management include:

Privilege

Description

ISI_PRIV_SNAPSHOT_ALIAS

Aliasing for snapshots

ISI_PRIV_SNAPSHOT_LOCKS

Locking of snapshots from deletion

ISI_PRIV_SNAPSHOT_PENDING

Upcoming snapshots based on schedules

ISI_PRIV_SNAPSHOT_RESTORE

Restoring directory to a particular snapshot

ISI_PRIV_SNAPSHOT_SCHEDULES

Scheduling for periodic snapshots

ISI_PRIV_SNAPSHOT_SETTING

Service and access settings

ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT

Manual snapshots and locks

ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY

Snapshot summary and usage details

Each privilege can be assigned one of four permission levels for a role, including:

Permission Indicator

Description

No permission

R

Read-only permission

X

Execute permission

W

Write permission

The ability for a user to delete a snapshot is governed by the ‘ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT’ privilege. Similarly, the ‘ISI_PRIV_SNAPSHOT_LOCKS’ privilege governs lock creation and removal.

In the following example, the ‘snap’ role has ‘read’ rights for the ‘ISI_PRIV_SNAPSHOT_LOCKS’ privilege, allowing a user associated with this role to view snapshot locks:

# isi auth roles view snap | grep -i -A 1 locks
             ID: ISI_PRIV_SNAPSHOT_LOCKS
     Permission: r
--
# isi snapshot locks list snaploc1
ID
----
1
----
Total: 1

However, attempts to remove the lock ‘ID 1’ from the ‘snaploc1’ snapshot fail without write privileges:

# isi snapshot locks delete snaploc1 1
Privilege check failed. The following write privilege is required: Snapshot locks (ISI_PRIV_SNAPSHOT_LOCKS)

Write privileges are then added to ‘ISI_PRIV_SNAPSHOT_LOCKS’ in the ‘snap’ role:

# isi auth roles modify snap --add-priv-write ISI_PRIV_SNAPSHOT_LOCKS
# isi auth roles view snap | grep -i -A 1 locks
             ID: ISI_PRIV_SNAPSHOT_LOCKS
     Permission: w
--

This allows the lock ‘ID 1’ to be successfully deleted from the ‘snaploc1’ snapshot:

# isi snapshot locks delete snaploc1 1
Are you sure you want to delete snapshot lock 1 from snaploc1? (yes/[no]): yes
# isi snap locks view snaploc1 1
No such lock

Using OneFS RBAC, an enhanced security approach for a site could be to create three OneFS roles on a cluster, each with an increasing realm of trust:

1.  First, an IT ops/helpdesk role with ‘read’ access to the snapshot attributes would permit monitoring and troubleshooting, but no changes (a hedged CLI sketch for provisioning such a role follows the table):

Snapshot Privilege

Permission

ISI_PRIV_SNAPSHOT_ALIAS

Read

ISI_PRIV_SNAPSHOT_LOCKS

Read

ISI_PRIV_SNAPSHOT_PENDING

Read

ISI_PRIV_SNAPSHOT_RESTORE

Read

ISI_PRIV_SNAPSHOT_SCHEDULES

Read

ISI_PRIV_SNAPSHOT_SETTING

Read

ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT

Read

ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY

Read
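For instance, a minimal sketch of provisioning such a read-only role from the CLI, where the role and member names are hypothetical and the remaining privileges would be added the same way:

# isi auth roles create SnapMonitor
# isi auth roles modify SnapMonitor --add-priv-read ISI_PRIV_SNAPSHOT_LOCKS
# isi auth roles modify SnapMonitor --add-priv-read ISI_PRIV_SNAPSHOT_SCHEDULES
# isi auth roles modify SnapMonitor --add-priv-read ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY
# isi auth roles modify SnapMonitor --add-user helpdesk1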

2.  Next, a cluster admin role, with ‘read’ privileges for ‘ISI_PRIV_SNAPSHOT_LOCKS’ and ‘ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT’ would prevent snapshot and lock deletion, but provide ‘write’ access for schedule configuration, restores, and so on.

Snapshot Privilege

Permission

ISI_PRIV_SNAPSHOT_ALIAS

Write

ISI_PRIV_SNAPSHOT_LOCKS

Read

ISI_PRIV_SNAPSHOT_PENDING

Write

ISI_PRIV_SNAPSHOT_RESTORE

Write

ISI_PRIV_SNAPSHOT_SCHEDULES

Write

ISI_PRIV_SNAPSHOT_SETTING

Write

ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT

Read

ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY

Write

3.  Finally, a cluster security admin role (root equivalence) would provide full snapshot configuration and management, lock control, and deletion rights:

Snapshot Privilege

Permission

ISI_PRIV_SNAPSHOT_ALIAS

Write

ISI_PRIV_SNAPSHOT_LOCKS

Write

ISI_PRIV_SNAPSHOT_PENDING

Write

ISI_PRIV_SNAPSHOT_RESTORE

Write

ISI_PRIV_SNAPSHOT_SCHEDULES

Write

ISI_PRIV_SNAPSHOT_SETTING

Write

ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT

Write

ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY

Write

Note that when configuring OneFS RBAC, remember to remove the ‘ISI_PRIV_AUTH’ and ‘ISI_PRIV_ROLE’ privilege from all but the most trusted administrators.

Additionally, enterprise security management tools such as CyberArk can also be incorporated to manage authentication and access control holistically across an environment. These can be configured to change passwords on trusted accounts frequently (every hour or so, for example), require multi-level approvals prior to retrieving passwords, and track and audit password requests and trends.

While this article focuses exclusively on OneFS snapshots, the expanded use of RBAC granular privileges for enhanced security is germane to most key areas of cluster management and data protection, such as SyncIQ replication, and so on.

Snapshot replication

In addition to using snapshots for its own checkpointing system, SyncIQ, the OneFS data replication engine, supports snapshot replication to a target cluster.

OneFS SyncIQ replication policies contain an option for triggering a replication policy when a snapshot of the source directory is completed. Additionally, at the onset of a new policy configuration, when the “Whenever a Snapshot of the Source Directory is Taken” option is selected, a checkbox appears to enable any existing snapshots in the source directory to be replicated. More information is available in this SyncIQ paper.
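As a hedged example, a SyncIQ policy that fires whenever a snapshot of the source directory is taken might be created from the CLI roughly as follows. The policy name, target host, and paths are placeholders, and the positional arguments and schedule keyword should be confirmed against the isi sync policies create help output:

# isi sync policies create snaprepl sync /ifs/test 10.1.2.3 /ifs/test-dr --schedule when-snapshot-taken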

Cyber-vaulting

File data is arguably the most difficult to protect, because:

  • It is the only type of data where potentially all employees have a direct connection to the storage (with other storage types, access is through an application).
  • File data is linked (or mounted) to the client’s operating system, which means that gaining access to the OS is sufficient to reach potentially critical data.
  • Users are the largest breach point.

The Cybersecurity Framework (CSF) from the National Institute of Standards and Technology (NIST) categorizes the stages from threat identification through recovery:

Within the ‘Protect’ phase, there are two core aspects:

  • Applying all the core protection features available on the OneFS platform, namely:

Feature

Description

Access control

Where the core data protection functions are being executed. Assess who actually needs write access.

Immutability

Having immutable snapshots, replica versions, and so on. Augmenting backup strategy with an archiving strategy with SmartLock WORM.

Encryption

Encrypting both data in-flight and data at rest.

Anti-virus

Integrating with anti-virus/anti-malware protection that does content inspection.

Security advisories

Dell Security Advisories (DSA) inform customers about fixes to common vulnerabilities and exposures. 

  • Data isolation provides a last resort copy of business critical data, and can be achieved by using an air gap to isolate the cyber vault copy of the data. The vault copy is logically separated from the production copy of the data. Data syncing happens only intermittently by closing the air gap after ensuring that there are no known issues.

The combination of OneFS snapshots and SyncIQ replication allows for granular data recovery. This means that only the affected files are recovered, while the most recent changes are preserved for the unaffected data. While an on-prem air-gapped cyber vault can still provide secure network isolation, in the event of an attack, the ability to failover to a fully operational ‘clean slate’ remote site provides additional security and peace of mind.

We’ll explore PowerScale cyber protection and recovery in more depth in a future article.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • SupportAssist

OneFS SupportAssist Architecture and Operation

Nick Trimbee

Fri, 21 Apr 2023 16:41:36 -0000

|

Read Time: 0 minutes

The previous article in this series looked at an overview of OneFS SupportAssist. Now, we’ll turn our attention to its core architecture and operation.

Under the hood, SupportAssist relies on the following infrastructure and services:

Service

Name

ESE

Embedded Service Enabler.

isi_rice_d

Remote Information Connectivity Engine (RICE).

isi_crispies_d

Coordinator for RICE Incidental Service Peripherals including ESE Start.

Gconfig

OneFS centralized configuration infrastructure.

MCP

Master Control Program – starts, monitors, and restarts OneFS services.

Tardis

Configuration service and database.

Transaction journal

Task manager for RICE.

Of these, ESE, isi_crispies_d, isi_rice_d, and the Transaction Journal are new in OneFS 9.5 and exclusive to SupportAssist. By contrast, Gconfig, MCP, and Tardis are all legacy services that are used by multiple other OneFS components.

The Remote Information Connectivity Engine (RICE) represents the new SupportAssist ecosystem for OneFS to connect to the Dell backend. The high level architecture is as follows:

Dell’s Embedded Service Enabler (ESE) is at the core of the connectivity platform and acts as a unified communications broker between the PowerScale cluster and Dell Support. ESE runs as a OneFS service and, on startup, looks for an on-premises gateway server. If none is found, it connects back to the connectivity pipe (SRS). The collector service then interacts with ESE to send telemetry, obtain upgrade packages, transmit alerts and events, and so on.

Depending on the available resources, ESE provides a base functionality with additional optional capabilities to enhance serviceability. ESE is multithreaded, and each payload type is handled by specific threads. For example, events are handled by event threads, binary and structured payloads are handled by web threads, and so on. Within OneFS, ESE gets installed to /usr/local/ese and runs as ‘ese’ user and group.

The responsibilities of isi_rice_d include listening for network changes, getting eligible nodes elected for communication, monitoring notifications from CRISPIES, and engaging Task Manager when ESE is ready to go.

The Task Manager is a core component of the RICE engine. Its responsibility is to watch the incoming tasks that are placed into the journal and to assign workers to step through the tasks  until completion. It controls the resource utilization (Python threads) and distributes tasks that are waiting on a priority basis.

The ‘isi_crispies_d’ service exists to ensure that ESE is only running on the RICE active node, and nowhere else. It acts, in effect, like a specialized MCP just for ESE and RICE-associated services, such as IPA. This entails starting ESE on the RICE active node, re-starting it if it crashes on the RICE active node, and stopping it and restarting it on the appropriate node if the RICE active instance moves to another node. We are using ‘isi_crispies_d’ for this, and not MCP, because MCP does not support a service running on only one node at a time.

The core responsibilities of ‘isi_crispies_d’ include:

  • Starting and stopping ESE on the RICE active node
  • Monitoring ESE and restarting, if necessary. ‘isi_crispies_d’ restarts ESE on the node if it crashes. It will retry a couple of times and then notify RICE if it’s unable to start ESE.
  • Listening for gconfig changes and updating ESE. Stopping ESE if unable to make a change and notifying RICE.
  • Monitoring other related services.

The state of ESE, and of other RICE service peripherals, is stored in the OneFS tardis configuration database so that it can be checked by RICE. Similarly, ‘isi_crispies_d’ monitors the OneFS Tardis configuration database to see which node is designated as the RICE ‘active’ node.

The ‘isi_telemetry_d’ daemon is started by MCP and runs when SupportAssist is enabled. It does not have to be running on the same node as the active RICE and ESE instance. Only one instance of ‘isi_telemetry_d’ will be active at any time, and the other nodes will be waiting for the lock.

You can query the current status and setup of SupportAssist on a PowerScale cluster by using the ‘isi supportassist settings view’ CLI command. For example:

# isi supportassist settings view
        Service enabled: Yes
       Connection State: enabled
      OneFS Software ID: ELMISL08224764
          Network Pools: subnet0:pool0
        Connection mode: direct
           Gateway host: -
           Gateway port: -
    Backup Gateway host: -
    Backup Gateway port: -
  Enable Remote Support: Yes
Automatic Case Creation: Yes
       Download enabled: Yes

You can also do this from the WebUI by navigating to Cluster management > General settings > SupportAssist:

You can enable or disable SupportAssist by using the ‘isi services’ CLI command set. For example:

# isi services isi_supportassist disable
The service 'isi_supportassist' has been disabled.
# isi services isi_supportassist enable
The service 'isi_supportassist' has been enabled.
# isi services -a | grep supportassist
   isi_supportassist    SupportAssist Monitor                    Enabled

You can check the core services, as follows:

# ps -auxw | grep -e 'rice' -e 'crispies' | grep -v grep
root    8348    9.4   0.0 109844  60984  -   Ss   22:14        0:00.06 /usr/libexec/isilon/isi_crispies_d /usr/bin/isi_crispies_d
root    8183    8.8   0.0 108060  64396  -   Ss   22:14        0:01.58 /usr/libexec/isilon/isi_rice_d /usr/bin/isi_rice_d

Note that when a cluster is provisioned with SupportAssist, ESRS can no longer be used. However, customers that have not previously connected their clusters to Dell Support can still provision ESRS, but will be presented with a message encouraging them to adopt the best practice of using SupportAssist.

Additionally, SupportAssist in OneFS 9.5 does not currently support IPv6 networking, so clusters deployed in IPv6 environments should continue to use ESRS until SupportAssist IPv6 integration is introduced in a future OneFS release.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

OneFS SupportAssist Management and Troubleshooting

Nick Trimbee

Tue, 18 Apr 2023 20:07:18 -0000

|

Read Time: 0 minutes

In this final article in the OneFS SupportAssist series, we turn our attention to management and troubleshooting.

Once the provisioning process above is complete, the isi supportassist settings view CLI command reports the status and health of SupportAssist operations on the cluster.

# isi supportassist settings view
        Service enabled: Yes
       Connection State: enabled
      OneFS Software ID: xxxxxxxxxx
          Network Pools: subnet0:pool0
        Connection mode: direct
           Gateway host: -
           Gateway port: -
    Backup Gateway host: -
    Backup Gateway port: -
  Enable Remote Support: Yes
Automatic Case Creation: Yes
       Download enabled: Yes

This can also be obtained from the WebUI by going to Cluster management > General settings > SupportAssist:

 There are some caveats and considerations to keep in mind when upgrading to OneFS 9.5 and enabling SupportAssist, including:

  • SupportAssist is disabled when STIG hardening is applied to a cluster.
  • Using SupportAssist on a hardened cluster is not supported.
  • Clusters with the OneFS network firewall enabled (isi network firewall settings) might need to allow outbound traffic on port 9443.
  • SupportAssist is supported on a cluster that’s running in Compliance mode.
  • Secure keys are held in key manager under the RICE domain.

Also, note that Secure Remote Services can no longer be used after SupportAssist has been provisioned on a cluster.

SupportAssist has a variety of components that gather and transmit various pieces of OneFS data and telemetry to Dell Support and backend services through the Embedded Service Enabler (ESE). These workflows include CELOG events; in-product activation (IPA) information; CloudIQ telemetry data; Isi-Gather-info (IGI) logsets; and provisioning, configuration, and authentication data to ESE and the various backend services.

Activity

Information

Events and alerts

SupportAssist can be configured to send CELOG events.

Diagnostics

The OneFS isi diagnostics gather and isi_gather_info logfile collation and transmission commands have a SupportAssist option. 

HealthChecks

HealthCheck definitions are updated using SupportAssist.

License Activation

The isi license activation start command uses SupportAssist to connect.

Remote Support

Remote Support uses SupportAssist and the Connectivity Hub to assist customers with their clusters.

Telemetry

CloudIQ telemetry data is sent using SupportAssist. 

CELOG

Once SupportAssist is up and running, it can be configured to send CELOG events and attachments through ESE to CLM. This can be managed with the isi event channels CLI command, as shown in the following examples:

# isi event channels list
ID   Name                Type          Enabled
-----------------------------------------------
1    RemoteSupport       connectemc    No
2    Heartbeat Self-Test heartbeat     Yes
3    SupportAssist       supportassist No
-----------------------------------------------
Total: 3
# isi event channels view SupportAssist
     ID: 3
   Name: SupportAssist
   Type: supportassist
Enabled: No
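To turn the channel on once SupportAssist has been provisioned, something like the following should work, although the exact flag name should be confirmed with the isi event channels modify help output:

# isi event channels modify SupportAssist --enabled=true

The isi event channels view SupportAssist command can then be used to confirm that the channel reports as enabled.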

Or from the WebUI:

CloudIQ telemetry

In OneFS 9.5, SupportAssist provides an option to send telemetry data to CloudIQ. This can be enabled from the CLI as follows:

# isi supportassist telemetry modify --telemetry-enabled 1 --telemetry-persist 0
# isi supportassist telemetry view
        Telemetry Enabled: Yes
        Telemetry Persist: No
        Telemetry Threads: 8
Offline Collection Period: 7200

Or in the SupportAssist WebUI:

Diagnostics gather

Also in OneFS 9.5, the isi diagnostics gather and isi_gather_info CLI commands both include a --supportassist upload option for log gathers, which also allows them to continue to function through a new “emergency mode” when the cluster is unhealthy. For example, to start a gather from the CLI that will be uploaded through SupportAssist:

# isi diagnostics gather start --supportassist 1

Similarly, for ISI gather info:

# isi_gather_info --supportassist

Or to explicitly avoid using SupportAssist for ISI gather info log gather upload:

# isi_gather_info --nosupportassist

This can also be configured from the WebUI at Cluster management > General configuration > Diagnostics > Gather:

License Activation through SupportAssist

PowerScale License Activation (previously known as In-Product Activation) facilitates the management of the cluster's entitlements and licenses by communicating directly with Software Licensing Central through SupportAssist.

To activate OneFS product licenses through the SupportAssist WebUI:

  1. Go to Cluster management > Licensing. 
    For example, on a new cluster without any signed licenses:


     
  2. Click the Update & Refresh button in the License Activation section. In the Activation File Wizard, select the software modules that you want in the activation file.

     

  3. Click Review changes, review the selections, click Proceed, and finally click Activate.

Note that it can take up to 24 hours for the activation to occur.

Alternatively, cluster license activation codes (LAC) can also be added manually.

Troubleshooting

When it comes to troubleshooting SupportAssist, the basic process flow is as follows:

 
The OneFS components and services above are:

Component

Info

ESE

Embedded Service Enabler

isi_rice_d

Remote Information Connectivity Engine (RICE)

isi_crispies_d

Coordinator for RICE Incidental Service Peripherals including ESE Start

Gconfig

OneFS centralized configuration infrastructure

MCP

Master Control Program; starts, monitors, and restarts OneFS services

Tardis

Configuration service and database

Transaction journal

Task manager for RICE

Of these, ESE, isi_crispies_d, isi_rice_d, and the transaction journal are new in OneFS 9.5 and exclusive to SupportAssist. In contrast, Gconfig, MCP, and Tardis are all legacy services that are used by multiple other OneFS components. 

For its connectivity, SupportAssist elects a single leader node within the subnet pool, and NANON (not attached to a network) nodes are automatically avoided. Ports 443 and 8443 must be open for bi-directional communication between the cluster and Connectivity Hub, and port 9443 is used for communicating with a gateway. The SupportAssist ESE component communicates with a number of Dell backend services:

  • SRS
  • Connectivity Hub
  • CLM
  • ELMS/Licensing
  • SDR
  • Lightning
  • Log Processor
  • CloudIQ
  • ESE

Debugging backend issues might involve one or more services, and Dell Support can assist with this process.

The main log files for investigating and troubleshooting SupportAssist issues and idiosyncrasies are isi_rice_d.log and isi_crispies_d.log. There is also an ese_log, which can be useful, too. These logs can be found at:

Component

Logfile location

Info

Rice

/var/log/isi_rice_d.log

Per node

Crispies

/var/log/isi_crispies_d.log

Per node

ESE

/ifs/.ifsvar/ese/var/log/ESE.log

Cluster-wide for single instance ESE

Debug level logging can be configured from the CLI as follows:

# isi_for_array isi_ilog -a isi_crispies_d --level=debug+
# isi_for_array isi_ilog -a isi_rice_d --level=debug+

Note that the OneFS log gathers (such as the output from the isi_gather_info utility) will capture all the above log files, plus the pertinent SupportAssist Gconfig contexts and Tardis namespaces, for later analysis.

If needed, the Rice and ESE configurations can also be viewed as follows:

# isi_gconfig -t ese
[root] {version:1}
ese.mode (char*) = direct
ese.connection_state (char*) = disabled
ese.enable_remote_support (bool) = true
ese.automatic_case_creation (bool) = true
ese.event_muted (bool) = false
ese.primary_contact.first_name (char*) =
ese.primary_contact.last_name (char*) =
ese.primary_contact.email (char*) =
ese.primary_contact.phone (char*) =
ese.primary_contact.language (char*) =
ese.secondary_contact.first_name (char*) =
ese.secondary_contact.last_name (char*) =
ese.secondary_contact.email (char*) =
ese.secondary_contact.phone (char*) =
ese.secondary_contact.language (char*) =
(empty dir ese.gateway_endpoints)
ese.defaultBackendType (char*) = srs
ese.ipAddress (char*) = 127.0.0.1
ese.useSSL (bool) = true
ese.srsPrefix (char*) = /esrs/{version}/devices
ese.directEndpointsUseProxy (bool) = false
ese.enableDataItemApi (bool) = true
ese.usingBuiltinConfig (bool) = false
ese.productFrontendPrefix (char*) = platform/16/supportassist
ese.productFrontendType (char*) = webrest
ese.contractVersion (char*) = 1.0
ese.systemMode (char*) = normal
ese.srsTransferType (char*) = ISILON-GW
ese.targetEnvironment (char*) = PROD
 
# isi_gconfig -t rice
[root] {version:1}
rice.enabled (bool) = false
rice.ese_provisioned (bool) = false
rice.hardware_key_present (bool) = false
rice.supportassist_dismissed (bool) = false
rice.eligible_lnns (char*) = []
rice.instance_swid (char*) =
rice.task_prune_interval (int) = 86400
rice.last_task_prune_time (uint) = 0
rice.event_prune_max_items (int) = 100
rice.event_prune_days_to_keep (int) = 30
rice.jnl_tasks_prune_max_items (int) = 100
rice.jnl_tasks_prune_days_to_keep (int) = 30
rice.config_reserved_workers (int) = 1
rice.event_reserved_workers (int) = 1
rice.telemetry_reserved_workers (int) = 1
rice.license_reserved_workers (int) = 1
rice.log_reserved_workers (int) = 1
rice.download_reserved_workers (int) = 1
rice.misc_task_workers (int) = 3
rice.accepted_terms (bool) = false
(empty dir rice.network_pools)
rice.telemetry_enabled (bool) = true
rice.telemetry_persist (bool) = false
rice.telemetry_threads (uint) = 8
rice.enable_download (bool) = true
rice.init_performed (bool) = false
rice.ese_disconnect_alert_timeout (int) = 14400
rice.offline_collection_period (uint) = 7200

The -q flag can also be used in conjunction with the isi_gconfig command to identify any values that are not at their default settings. For example, the stock (default) Rice gconfig context will not report any configuration entries:

# isi_gconfig -q -t rice
[root] {version:1}

 

Read Full Blog
  • PowerScale
  • OneFS

OneFS SupportAssist Provisioning – Part 2

Nick Trimbee

Thu, 13 Apr 2023 21:29:24 -0000

|

Read Time: 0 minutes

In the previous article in this OneFS SupportAssist series, we reviewed the off-cluster prerequisites for enabling OneFS SupportAssist:

  1. Upgrading the cluster to OneFS 9.5.
  2. Obtaining the secure access key and PIN.
  3. Selecting either direct connectivity or gateway connectivity.
  4. If using gateway connectivity, installing Secure Connect Gateway v5.x.

In this article, we turn our attention to step 5: Provisioning SupportAssist on the cluster.

As part of this process, we’ll be using the access key and PIN credentials previously obtained from the Dell Support portal in step 2 above.

Provisioning SupportAssist on a cluster

SupportAssist can be configured from the OneFS 9.5 WebUI by going to Cluster management > General settings > SupportAssist. To initiate the provisioning process on a cluster, click the Connect SupportAssist link, as shown here:

If SupportAssist is not configured, the Remote support page displays the following banner, warning of the future deprecation of SRS:

Similarly, when SupportAssist is not configured, the SupportAssist WebUI page also displays verbiage recommending the adoption of SupportAssist:

There is also a Connect SupportAssist button to begin the provisioning process.

Selecting the Connect SupportAssist button initiates the setup wizard.

1.  Telemetry Notice

 


The first step requires checking and accepting the Infrastructure Telemetry Notice:



2.  Support Contract



For the next step, enter the details for the primary support contact, as prompted:

 
You can also provide the information from the CLI by using the isi supportassist contacts command set. For example:

# isi supportassist contacts modify --primary-first-name=Nick --primary-last-name=Trimbee --primary-email=trimbn@isilon.com


3.  Establish Connections

Next, complete the Establish Connections page.

This involves the following steps:

      • Selecting the network pool(s)
      • Adding the secure access key and PIN
      • Configuring either direct or gateway access
      • Selecting whether to allow remote support, CloudIQ telemetry, and auto case creation

a.  Select network pool(s).

At least one statically allocated IPv4 network subnet and pool are required for provisioning SupportAssist. OneFS 9.5 does not support IPv6 networking for SupportAssist remote connectivity. However, IPv6 support is planned for a future release.

Select one or more network pools or subnets from the options displayed. In this example, we select subnet0.pool0:



Alternatively, from the CLI, select one or more static subnets or pools for outbound communication using the following syntax:

# isi supportassist settings modify --network-pools="subnet0.pool0"

Additionally, if the cluster has the OneFS 9.5 network firewall enabled (“isi network firewall settings”), ensure that outbound traffic is allowed on port 9443.

b.  Add secure access key and PIN.

In this next step, add the secure access key and PIN. These should have been obtained earlier in the provisioning procedure from the following Dell Support site: https://www.dell.com/support/connectivity/product/isilon-onefs.


Alternatively, if configuring SupportAssist from the OneFS CLI, add the key and pin by using the following syntax:

# isi supportassist provision start --access-key <key> --pin <pin>


c.  Configure access.

  • Direct access

To configure direct access (the default) from the CLI, ensure that the following parameter is set:

# isi supportassist settings modify --connection-mode direct
# isi supportassist settings view | grep -i "connection mode"
        Connection mode: direct
  • Gateway access

Alternatively, to connect through a gateway, select the Connect via Secure Connect Gateway button:

Complete the Gateway host and Gateway port fields as appropriate for the environment.

Alternatively, to set up a gateway configuration from the CLI, use the isi supportassist settings modify syntax. For example, to use the gateway FQDN secure-connect-gateway.yourdomain.com and the default port 9443:

# isi supportassist settings modify --connection-mode gateway
# isi supportassist settings view | grep -i "connection mode"
        Connection mode: gateway
# isi supportassist settings modify --gateway-host secure-connect-gateway.yourdomain.com --gateway-port 9443

When setting up the gateway connectivity option, Secure Connect Gateway v5.0 or later must be deployed within the data center. SupportAssist is incompatible with either ESRS gateway v3.52 or SAE gateway v4. However, Secure Connect Gateway v5.x is backward compatible with PowerScale OneFS ESRS, which allows the gateway to be provisioned and configured ahead of a cluster upgrade to OneFS 9.5.

d. Configure support options.

Finally, configure the support options:



When you have completed the configuration, the WebUI will confirm that SupportAssist is successfully configured and enabled, as follows:

 
Or from the CLI:

# isi supportassist settings view
        Service enabled: Yes
       Connection State: enabled
      OneFS Software ID: ELMISL0223BJJC
          Network Pools: subnet0.pool0, subnet0.testpool1, subnet0.testpool2, subnet0.testpool3, subnet0.testpool4
        Connection mode: gateway
           Gateway host: eng-sea-scgv5stg3.west.isilon.com
           Gateway port: 9443
    Backup Gateway host: eng-sea-scgv5stg.west.isilon.com
    Backup Gateway port: 9443
  Enable Remote Support: Yes
Automatic Case Creation: Yes
       Download enabled: Yes

 

 

Read Full Blog
  • PowerScale
  • OneFS

OneFS SupportAssist Provisioning – Part 1

Nick Trimbee

Thu, 13 Apr 2023 20:20:31 -0000

|

Read Time: 0 minutes

In OneFS 9.5, several OneFS components now leverage SupportAssist as their secure off-cluster data retrieval and communication channel. These components include:

Component

Details

Events and Alerts

SupportAssist can send CELOG events and attachments through Embedded Service Enabler (ESE) to CLM.

Diagnostics

Logfile gathers can be uploaded to Dell through SupportAssist.

License activation

License activation uses SupportAssist for the isi license activation start CLI command.

Telemetry

Telemetry is sent through SupportAssist to CloudIQ for analytics.

Health check

Health check definition downloads now leverage SupportAssist.

Remote Support

Remote Support now uses SupportAssist along with Connectivity Hub.

For existing clusters, SupportAssist supports the same basic workflows as its predecessor, ESRS, so the transition from old to new is generally pretty seamless.

The overall process for enabling OneFS SupportAssist is as follows:

  1. Upgrade the cluster to OneFS 9.5.
  2. Obtain the secure access key and PIN.
  3. Select either direct connectivity or gateway connectivity.
  4. If using gateway connectivity, install Secure Connect Gateway v5.x.
  5. Provision SupportAssist on the cluster.

 We’ll go through each of these configuration steps in order:

1.  Upgrading to OneFS 9.5

First, the cluster must be running OneFS 9.5 to configure SupportAssist.

There are some additional considerations and caveats to bear in mind when upgrading to OneFS 9.5 and planning on enabling SupportAssist. These include:

  • SupportAssist is disabled when STIG hardening is applied to the cluster.
  • Using SupportAssist on a hardened cluster is not supported.
  • Clusters with the OneFS network firewall enabled (”isi network firewall settings”) might need to allow outbound traffic on ports 443 and 8443, plus 9443 if gateway (SCG) connectivity is configured.
  • SupportAssist is supported on a cluster that’s running in Compliance mode.
  • If you are upgrading from an earlier release, the OneFS 9.5 upgrade must be committed before SupportAssist can be provisioned.

Also, ensure that the user account that will be used to enable SupportAssist belongs to a role with the ISI_PRIV_REMOTE_SUPPORT read and write privilege:

# isi auth privileges | grep REMOTE
ISI_PRIV_REMOTE_SUPPORT                           
  Configure remote support

 For example, for an ese user account:

# isi auth roles view SupportAssistRole
       Name: SupportAssistRole
Description: -
    Members: ese
 Privileges
             ID: ISI_PRIV_LOGIN_PAPI
     Permission: r
             ID: ISI_PRIV_REMOTE_SUPPORT
      Permission: w

2.  Obtaining secure access key and PIN

An access key and pin are required to provision SupportAssist, and these secure keys are held in key manager under the RICE domain. This access key and pin can be obtained from the following Dell Support site: https://www.dell.com/support/connectivity/product/isilon-onefs.

In the Quick link navigation bar, select the Generate Access key link:

 On the following page, select the appropriate button:

The credentials required to obtain an access key and pin vary, depending on prior cluster configuration. Sites that have previously provisioned ESRS will need their OneFS Software ID (SWID) to obtain their access key and pin.

The isi license list CLI command can be used to determine a cluster’s SWID. For example:

# isi license list | grep "OneFS Software ID"
OneFS Software ID: ELMISL999CKKD

However, customers with new clusters and/or customers who have not previously provisioned ESRS or SupportAssist will require their Site ID to obtain the access key and pin.

Note that any new cluster hardware shipping after January 2023 will already have an integrated key, so this key can be used in place of the Site ID.

For example, if this is the first time registering this cluster and it does not have an integrated key, select Yes, let’s register:


 Enter the Site ID, site name, and location information for the cluster:

Choose a 4-digit PIN and save it for future reference. After that, click Create My Access Key:

The access key is then generated.
 

An automated email containing the pertinent key info is sent from the Dell Services Connectivity Team. For example:

This access key is valid for one week, after which it automatically expires.

Next, in the cluster’s WebUI, go back to Cluster management > General settings > SupportAssist and enter the access key and PIN information in the appropriate fields. Finally, click Finish Setup to complete the SupportAssist provisioning process:



3.  Deciding between direct or gateway topology 


A topology decision will need to be made between implementing either direct connectivity or gateway connectivity, depending on the needs of the environment:

  • Direct connect:



  • Gateway connect:


SupportAssist uses ports 443 and 8443 by default for bi-directional communication between the cluster and Connectivity Hub. These ports will need to be open across any firewalls or packet filters between the cluster and the corporate network edge to allow connectivity to Dell Support.

Additionally, port 9443 is used for communicating with a gateway (SCG).

# grep -i esrs /etc/services
isi_esrs_d      9443/tcp   #EMC Secure Remote Support outbound alerts

4.  Installing Secure Connect Gateway (optional) 

This step is only required when deploying Dell Secure Connect Gateway (SCG). If a direct connect topology is preferred, go directly to step 5.

When configuring SupportAssist with the gateway connectivity option, Secure Connect Gateway v5.0 or later must be deployed within the data center.

Dell SCG is available for Linux, Windows, Hyper-V, and VMware environments, and, as of this writing, the latest version is 5.14.00.16. The installation binaries can be downloaded from https://www.dell.com/support/home/en-us/product-support/product/secure-connect-gateway/drivers.

Download SCG as follows:

  1. Sign in to www.dell.com/SCG-App. The Secure Connect Gateway - Application Edition page is displayed. If you have issues signing in using your business account or if you are unable to access the page even after signing in, contact Dell Administrative Support.
  2. In the Quick links section, click Generate Access key.
  3. On the Generate Access Key page, perform the following steps:
    1. Select a site ID, site name, or site location.
    2. Enter a four-digit PIN and click Generate key. An access key is generated and sent to your email address. NOTE: The access key and PIN must be used within seven days and cannot be used to register multiple instances of SCG.
    3. Click Done.
  4. On the Secure Connect Gateway – Application Edition page, click the Drivers & Downloads tab.
  5. Search and select the required version.
  6. In the ACTION column, click Download.

The following steps are required to set up SCG:

https://dl.dell.com/content/docu105633_secure-connect-gateway-application-edition-quick-setup-guide.pdf?language=en-us


 Pertinent resources for installing SCG include:


Another useful source of SCG installation, configuration, and troubleshooting information is the Dell Support forum: https://www.dell.com/community/Secure-Connect-Gateway/bd-p/SCG

5.  Provisioning SupportAssist on the cluster

 At this point, the off-cluster prestaging work should be complete.

In the next article in this series, we turn our attention to the SupportAssist provisioning process on the cluster itself (step 5).

 

 

Read Full Blog
  • PowerScale
  • OneFS

Dell PowerScale OneFS Introduction for NetApp Admins

Aqib Kazi

Tue, 04 Apr 2023 17:15:00 -0000

|

Read Time: 0 minutes

For enterprises to harness the advantages of advanced storage technologies with Dell PowerScale, a transition from an existing platform is necessary. Enterprises are challenged by how the new architecture will fit into the existing infrastructure. This blog post provides an overview of PowerScale architecture, features, and nomenclature for enterprises migrating from NetApp ONTAP.

PowerScale overview

The PowerScale OneFS operating system is based on a distributed architecture, built from the ground up as a clustered system. Each PowerScale node provides compute, memory, networking, and storage. The concepts of controllers, HA, active/standby, and disk shelves are not applicable in a pure scale-out architecture. Thus, when a node is added to a cluster, the cluster performance and capacity increase collectively.

Due to the scale-out distributed architecture with a single namespace, single volume, single file system, and one single pane of management, the system management is far simpler than with traditional NAS platforms. In addition, the data protection is software-based rather than RAID-based, eliminating all the associated complexities, including configuration, maintenance, and additional storage utilization. Administrators do not have to be concerned with RAID groups or load distribution.

NetApp’s ONTAP storage operating system has evolved into a clustered system with controllers. The system includes ONTAP FlexGroups composed of aggregates and FlexVols across nodes.

OneFS is a single volume, which makes cluster management simple. As the cluster grows in capacity, the single volume automatically grows. Administrators are no longer required to migrate data between volumes manually. OneFS repopulates and balances data between all nodes when a new node is added, making the node part of the global namespace. All the nodes in a PowerScale cluster are equal in the hierarchy. Drives share data intranode and internode.

PowerScale is easy to deploy, operate, and manage. Most enterprises require only one full-time employee to manage a PowerScale cluster.

For more information about the PowerScale OneFS architecture, see PowerScale OneFS Technical Overview and Dell PowerScale OneFS Operating System.

DiagramDescription automatically generated

Figure 1. Dell PowerScale scale-out NAS architecture

OneFS and NetApp software features

The single volume and single namespace of PowerScale OneFS also lead to a unique feature set. Because the entire NAS is a single file system, the concepts of FlexVols, shares, qtrees, and FlexGroups do not apply. Each NetApp volume has specific properties associated with limited storage space. Adding more storage space to NetApp ONTAP could be an onerous process depending on the current architecture. Conversely, on a PowerScale cluster, as soon as a node is added, the cluster is rebalanced automatically, leading to minimal administrator management. 

NetApp’s continued dependence on volumes creates potential added complexity for storage administrators. From a software perspective, the intricacies that arise from the concept of volumes span across all the features. Configuring software features requires administrators to base decisions on the volume concept, limiting configuration options. The volume concept is further magnified by the impacts on storage utilization. 

The fact that OneFS is a single volume means that many features are not volume dependent but, rather, span the entire cluster. SnapshotIQ, NDMP backups, and SmartQuotas do not have limits based on volumes; instead, they are cluster-specific or directory-specific.

As a single-volume NAS designed for file storage, OneFS has the scalable capacity with ease of management combined with features that administrators require. Robust policy-driven features such as SmartConnect, SmartPools, and CloudPools enable maximum utilization of nodes for superior performance and storage efficiency for maximum value. You can use SmartConnect to configure access zones that are mapped to specific node performances. SmartPools can tier cold data to nodes with deep archive storage, and CloudPools can store frozen data in the cloud. Regardless of where the data is residing, it is presented as a single namespace to the end user.

Storage utilization and data protection

Storage utilization is the amount of storage available after the NAS system overhead is deducted. The overhead consists of the space required for data protection and the operating system.

For data protection, OneFS uses software-based Reed-Solomon Error Correction with up to N+4 protection. OneFS offers several custom protection options that cover node and drive failures. The custom protection options vary according to the cluster configuration. OneFS provides data protection against more simultaneous hardware failures and is software-based, providing a significantly higher storage utilization. 

The software-based data protection stripes data across nodes in stripe units, some of which are Forward Error Correction (FEC) or parity units. The FEC units provide the information needed to rebuild data in the case of a drive or node failure. Protection can be customized for node loss only, or for hybrid protection against both node and drive failures.

With software-based data protection, the protection scheme is not per cluster. It has additional granularity that allows for making data protection specific to a file or directory—without creating additional storage volumes or manually migrating data. Instead, OneFS runs a job in the background, moving data as configured.
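For example, a hedged sketch of requesting a specific protection level on a directory tree and then inspecting it, where the path and protection value are illustrative:

# isi set -R -p +2d:1n /ifs/data/important
# isi get /ifs/data/important

OneFS then restripes the affected files to the requested protection in the background.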

Figure 2. OneFS data protection

OneFS protects data stored on failing nodes, or drives in a cluster through a process called SmartFail. During the process, OneFS places a device into quarantine and, depending on the severity of the issue, places the data on the device into a read-only state. While a device is quarantined, OneFS reprotects the data on the device by distributing the data to other devices. 

NetApp’s data protection is all RAID-based, including NetApp RAID-TEC, NetApp RAID-DP, and RAID 4. NetApp only supports a maximum of triple parity, and simultaneous node failures in an HA pair are not supported. 

For more information about SmartFail, see the following blog: OneFS Smartfail. For more information about OneFS data protection, see High Availability and Data Protection with Dell PowerScale Scale-Out NAS.

NetApp FlexVols, shares, and Qtrees

NetApp requires administrators to manually create space and explicitly define aggregates and flexible volumes. The concepts of FlexVols, shares, and Qtrees do not exist in OneFS, because the file system is a single volume and namespace spanning the entire cluster. 

SMB shares and NFS exports are created through the OneFS web or command-line interface. Either method lets the user create a share or export within seconds, complete with security options. SmartQuotas is used to manage storage limits cluster-wide, across the entire namespace, including accounting, warning messages, and hard enforcement limits. The limits can be applied by directory, user, or group. 

Conversely, ONTAP quota management is at the volume or FlexGroup level, creating additional administrative overhead because the process is more onerous.
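As a hedged illustration of how quickly shares and exports can be provisioned in OneFS, where the share name, path, and flag spellings are assumptions to be checked against the CLI help:

# isi smb shares create Projects --path=/ifs/data/projects
# isi nfs exports create /ifs/data/projects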

Snapshots

The OneFS snapshot feature is SnapshotIQ, which does not have specified or enforced limits for snapshots per directory or snapshots per cluster. However, the best practice is 1,024 snapshots per directory and 20,000 snapshots per cluster. OneFS also supports writable snapshots. For more information about SnapshotIQ and writable snapshots, see High Availability and Data Protection with Dell PowerScale Scale-Out NAS.

NetApp Snapshot supports 255 snapshots per volume in ONTAP 9.3 and earlier. ONTAP 9.4 and later versions support 1,023 snapshots per volume. By default, NetApp requires a space reservation of 5 percent in the volume when snapshots are used, requiring the space reservation to be monitored and manually increased if space becomes exhausted. Further, the space reservation can also affect volume availability. The space reservation requirement creates additional administration overhead and affects storage efficiency by setting aside space that might or might not be used.

Data replication

Data replication is required for disaster recovery, RPO, or RTO requirements. OneFS provides data replication through SyncIQ and SmartSync. 

SyncIQ provides asynchronous data replication, whereas NetApp’s asynchronous replication, which is called SnapMirror, is block-based replication. SyncIQ provides options for ensuring that all data is retained during failover and failback from the disaster recovery cluster. SyncIQ is fully configurable with options for execution times and bandwidth management. A SyncIQ target cluster may be configured as a target for several source clusters. 

SyncIQ offers a single-button automated process for failover and failback with Superna Eyeglass DR Edition. For more information about Superna Eyeglass DR Edition, see Superna | DR Edition (supernaeyeglass.com).

SyncIQ allows configurable options for replication down to a specific file, directory, or entire cluster. Conversely, NetApp’s SnapMirror replication starts at the volume at a minimum. The volume concept and dependence on volume requirements continue to add management complexity and overhead for administrators while also wasting storage utilization.

To address the requirements of the modern enterprise, OneFS version 9.4.0.0 introduced SmartSync. This feature replicates file-to-file data between PowerScale clusters. SmartSync cloud copy replicates file-to-object data from PowerScale clusters to Dell ECS and cloud providers. Having multiple target destinations allows administrators to store multiple copies of a dataset across locations, providing further disaster recovery readiness. SmartSync cloud copy can also pull the replicated object data from a cloud provider back to a PowerScale cluster in file format. For more information about SyncIQ, see Dell PowerScale SyncIQ: Architecture, Configuration, and Considerations. For more information about SmartSync, see Dell PowerScale SmartSync.

Quotas

OneFS SmartQuotas provides configurable options to monitor and enforce storage limits at the user, group, cluster, directory, or subdirectory level. ONTAP quotas are user-, tree-, volume-, or group-based.
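For example, a minimal SmartQuotas sketch that places an advisory and a hard limit on a directory, with the path, thresholds, and flag names as illustrative assumptions:

# isi quota quotas create /ifs/data/projects directory --advisory-threshold=8T --hard-threshold=10T
# isi quota quotas list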

For more information about SmartQuotas, see Storage Quota Management and Provisioning with Dell PowerScale SmartQuotas.

Load balancing and multitenancy

Because OneFS is a distributed architecture across a collection of nodes, client connectivity to these nodes requires load balancing. OneFS SmartConnect provides options for balancing the client connections to the nodes within a cluster. Balancing options are round-robin or based on current load. Also, SmartConnect zones can be configured to have clients connect based on group and performance needs. For example, the Engineering group might require high-performance nodes. A zone can be configured, forcing connections to those nodes.

NetApp ONTAP supports multitenancy with Storage Virtual Machines (SVMs), formerly known as vServers, and Logical Interfaces (LIFs). SVMs isolate storage and network resources across a cluster of controller HA pairs. SVMs require managing protocols, shares, and volumes for successful provisioning. Volumes cannot be nondisruptively moved between SVMs. ONTAP supports load balancing using LIFs, but configuration is manual and must be implemented by the storage administrator. Further, it requires continuous monitoring because it is based on the load on the controller. 

OneFS provides multitenancy through SmartConnect and access zones. Management is simple because the file system is one volume and access is provided by hostname and directory, rather than by volume. SmartConnect is policy-driven and does not require continuous monitoring. SmartConnect settings may be changed on demand as the requirements change.

SmartConnect zones allow administrators to provision DNS hostnames specific to IP pools, subnets, and network interfaces. If only a single authentication provider is required, all the SmartConnect zones map to a default access zone. However, if directory access and authentication providers vary, multiple access zones are provisioned, mapping to a directory, authentication provider, and SmartConnect zone. As a result, authenticated users of an access zone only have visibility into their respective directory. Conversely, an administrator with complete file system access can migrate data nondisruptively between directories.
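A hedged sketch of attaching a SmartConnect zone name to an IP pool and setting its balancing policy, where the pool path, zone name, and flag spellings are assumptions:

# isi network pools modify groupnet0.subnet0.pool0 --sc-dns-zone=eng.cluster.example.com --sc-connect-policy=conn_count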

For more information about SmartConnect, see PowerScale: Network Design Considerations.

Compression and deduplication

Both ONTAP and OneFS provide compression. The OneFS deduplication feature is SmartDedupe, which allows deduplication to run at a cluster-wide level, improving overall Data Reduction Rate (DRR) and storage utilization. With ONTAP, the deduplication is enabled at the aggregate level, and it cannot cross over nodes. 

For more information about OneFS data reduction, see Dell PowerScale OneFS: Data Reduction and Storage Efficiency. For more information about SmartDedupe, see Next-Generation Storage Efficiency with Dell PowerScale SmartDedupe.

Data tiering

OneFS has integrated features to tier data based on the data’s age or file type. NetApp has similar functionality with FabricPools.

OneFS SmartPools uses robust policies to enable data placement and movement across multiple types of storage. SmartPools can be configured to move data to a set of nodes automatically. For example, if a file has not been accessed in the last 90 days, it can be migrated to nodes with deeper storage, allowing admins to define the value of storage based on performance. 

OneFS CloudPools migrates data to a cloud provider, with only a stub remaining on the PowerScale cluster, based on similar policies. CloudPools not only tiers data to a cloud provider but also recalls the data back to the cluster as demanded. From a user perspective, all the data is still in a single namespace, irrespective of where it resides.

Figure 3. OneFS SmartPools and CloudPools

ONTAP tiers to S3 object stores using FabricPools.
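As a hedged illustration of the 90-day example above, a SmartPools file pool policy might look something like the following, where the policy name, tier name, filter attributes, and date value are assumptions and the exact filter syntax should be verified with the isi filepool policies create help output. The fixed date stands in for "not accessed in roughly the last 90 days":

# isi filepool policies create ArchiveCold --begin-filter --accessed-time=2023-01-01 --operator=lt --end-filter --data-storage-target=Archive_tier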

For more information about SmartPools, see Storage Tiering with Dell PowerScale SmartPools. For more information about CloudPools, see:

Monitoring

Dell InsightIQ and Dell CloudIQ provide performance monitoring and reporting capabilities. InsightIQ includes advanced analytics to optimize applications, correlate cluster events, and accurately forecast future storage needs. NetApp provides performance monitoring and reporting with Cloud Insights and Active IQ, which are accessible within BlueXP.  

For more information about CloudIQ, see CloudIQ: A Detailed Review. For more information about InsightIQ, see InsightIQ on Dell Support.

Security

Similar to ONTAP, the PowerScale OneFS operating system comes with a comprehensive set of integrated security features. These features include data at rest and data in flight encryption, virus scanning tool, WORM SmartLock compliance, external key manager for data at rest encryption, STIG-hardened security profile, Common Criteria certification, and support for UEFI Secure Boot across PowerScale platforms. Further, OneFS may be configured for a Zero Trust architecture and PCI-DSS. 

Superna security 

Superna exclusively provides the following security-focused applications for PowerScale OneFS: 

  • Ransomware Defender: Provides real-time event processing through user behavior analytics. The events are used to detect and stop a ransomware attack before it occurs.
  • Easy Auditor: Offers a flat-rate license model and ease-of-use features that simplify auditing and securing PBs of data.
  • Performance Auditor: Provides a real-time file I/O view of PowerScale nodes, simplifying root-cause analysis of performance impacts, assessment of the changes needed to optimize performance, and debugging of user, network, and application performance issues.
  • Airgap: Deployed in two configurations, depending on cluster scale and the security features required:
      • Basic Airgap configuration, which deploys the Ransomware Defender agent on one of the primary clusters being protected.
      • Enterprise Airgap configuration, which deploys the Ransomware Defender agent on the cyber vault cluster. This option offers greater scalability and additional security features.

Figure 4. Superna security

NetApp ONTAP security is limited to the integrated features listed above. Additional applications for further security monitoring, like Superna, are not available for ONTAP.

For more information about Superna security, see supernaeyeglass.com. For more information about PowerScale security, see Dell PowerScale OneFS: Security Considerations.

Authentication and access control

NetApp and PowerScale OneFS both support several methods for user authentication and access control. OneFS supports UNIX and Windows permissions for data-level access control. OneFS is designed for a mixed environment that allows the configuration of both Windows Access Control Lists (ACLs) and standard UNIX permissions on the cluster file system. In addition, OneFS provides user and identity mapping, permission mapping, and merging between Windows and UNIX environments.

OneFS supports local and remote authentication providers. Anonymous access is supported for protocols that allow it. Concurrent use of multiple authentication provider types, including Active Directory, LDAP, and NIS, is supported. For example, OneFS is often configured to authenticate Windows clients with Active Directory and to authenticate UNIX clients with LDAP.
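
As a brief, hedged illustration (the domain, server, and base DN values are hypothetical), an Active Directory provider and an LDAP provider can be added from the CLI along these lines:

# isi auth ads create example.com --user administrator
# isi auth ldap create ldap1 --server-uris ldap://ldap.example.com --base-dn "dc=example,dc=com"

Each provider is then associated with the appropriate access zone so that clients authenticate against the correct directory.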

Role-based access control

OneFS supports role-based access control (RBAC), allowing administrative tasks to be carried out without a root or administrator account. A role is a collection of OneFS privileges that are limited to an area of administration. Custom roles for security, auditing, storage, or backup tasks may be provisioned with RBAC. Privileges are assigned to roles. As users log in to the cluster through the platform API, the OneFS command-line interface, or the OneFS web administration interface, they are granted privileges based on their role membership.

For more information about OneFS authentication and access control, see PowerScale OneFS Authentication, Identity Management, and Authorization.

Learn more about PowerScale OneFS

To learn more about PowerScale OneFS, see the following resources:

 

Read Full Blog
  • OneFS
  • monitoring
  • troubleshooting
  • SmartQoS

OneFS SmartQoS Monitoring and Troubleshooting

Nick Trimbee

Tue, 21 Mar 2023 18:30:54 -0000

|

Read Time: 0 minutes

The previous articles in this series have covered the SmartQoS architecture, configuration, and management. Now, we’ll turn our attention to monitoring and troubleshooting.

You can use the ‘isi statistics workload’ CLI command to monitor the dataset’s performance. The ‘Ops’ column displays the current protocol operations per second. In the following example, Ops stabilize around 9.8, which is just below the configured limit value of 10 Ops.

# isi statistics workload --dataset ds1

 

Similarly, this next example from the SmartQoS WebUI shows a small NFS workflow performing 497 protocol Ops in a pinned workload with a limit of 500 Ops:

You can pin multiple paths and protocols by selecting the ‘Pin Workload’ option for a given Dataset. Here, four directory path workloads are each configured with different Protocol Ops limits:
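
An equivalent configuration from the CLI might look like the following (the paths are illustrative, the workload IDs shown are assigned by OneFS when each workload is pinned, and only two of the four workloads are shown):

# isi performance workloads pin ds1 protocol:nfs3 path:/ifs/data/proj1
# isi performance workloads modify ds1 101 --limits protocol_ops:500
# isi performance workloads pin ds1 protocol:nfs3 path:/ifs/data/proj2
# isi performance workloads modify ds1 102 --limits protocol_ops:1000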

When it comes to troubleshooting SmartQoS, there are a few areas that are worth checking right away, including the SmartQoS Ops limit configuration, isi_pp_d and isi_stats_d daemons, and the protocol service(s).

  1. For suspected Ops limit configuration issues, first confirm that the SmartQoS limits feature is enabled:

# isi performance settings view
Top N Collections: 1024
Time In Queue Threshold (ms): 10.0
Target read latency in microseconds: 12000.0
Target write latency in microseconds: 12000.0
Protocol Ops Limit Enabled: Yes

Next, verify that the workload level protocols_ops limit is correctly configured:

# isi performance workloads view <workload>

Check whether any errors are reported in the isi_tardis_d configuration log:

# cat /var/log/isi_tardis_d.log

  2. To investigate isi_pp_d, first check that the service is enabled:

# isi services -a isi_pp_d
Service 'isi_pp_d' is enabled.

If necessary, you can restart the isi_pp_d service as follows:

# isi services isi_pp_d disable
Service 'isi_pp_d' is disabled.
# isi services isi_pp_d enable
Service 'isi_pp_d' is enabled.

There’s also an isi_pp_d debug tool, which can be helpful in a pinch:

# isi_pp_d -h
Usage: isi_pp_d [-ldhs]
-l Run as a leader process; otherwise, run as a follower. Only one leader process on the cluster will be active.
-d Run in debug mode (do not daemonize).
-s Display pp_leader node (devid and lnn)
-h Display this help.

You can enable debugging on the isi_pp_d log file with the following command syntax:

# isi_ilog -a isi_pp_d -l debug, /var/log/isi_pp_d.log

For example, the following log snippet shows a typical isi_ppd_d.log message communication between isi_pp_d leader and isi_pp_d followers:

/ifs/.ifsvar/modules/pp/comm/SETTINGS
[090500b000000b80,08020000:0000bfddffffffff,09000100:ffbcff7cbb9779de,09000100:d8d2fee9ff9e3bfe,090001 00:0000000075f0dfdf]      
100,,,,20,1658854839  < in the format of <workload_id, cputime, disk_reads, disk_writes, protocol_ops, timestamp>

Here, extracts from the /var/log/isi_pp_d.log logfiles on nodes 1 and 2 of a cluster illustrate the different stages of protocol Ops limit enforcement and usage:

  3. To investigate isi_stats_d, first confirm that the isi_stats_d service is enabled:

# isi services -a isi_stats_d
Service 'isi_stats_d' is enabled.

If necessary, you can restart the isi_stats_d service as follows:

# isi services isi_stats_d disable
# isi services isi_stats_d enable

You can view the workload level statistics with the following command:

# isi statistics workload list --dataset=<name>

You can enable debugging on the isi_stats_d log file with the following command syntax:

# isi_stats_tool --action set_tracelevel --value debug
# cat /var/log/isi_stats_d.log

  4. To investigate protocol issues, the ‘isi services’ and ‘lwsm’ CLI commands can be useful. For example, to check the status of the S3 protocol:

# /usr/likewise/bin/lwsm list | grep -i protocol
hdfs                       [protocol]    stopped
lwswift                    [protocol]    running (lwswift: 8393)
nfs                        [protocol]    running (nfs: 8396)
s3                         [protocol]    stopped
srv                        [protocol]    running (lwio: 8096)
# /usr/likewise/bin/lwsm status s3
stopped
# /usr/likewise/bin/lwsm info s3
Service: s3
Description: S3 Server
Categories: protocol
Path: /usr/likewise/lib/lw-svcm/s3.so
Arguments:
Dependencies: lsass onefs_s3 AuditEnabled?flt_audit_s3
Container: s3

This CLI output confirms that the S3 protocol is inactive. First, verify that the S3 service is enabled:

# isi services -a | grep -i s3
   s3                   S3 Service                               Enabled

If the service is enabled but not running, you can restart it as follows:

# /usr/likewise/bin/lwsm restart s3
Stopping service: s3
Starting service: s3

To investigate further, you can increase the protocol’s log level verbosity. For example, to set the s3 log to ‘debug’:

# isi s3 log-level view
Current logging level is 'info'
# isi s3 log-level modify debug
# isi s3 log-level view
Current logging level is 'debug'

Next, view and monitor the appropriate protocol log. For example, for the S3 protocol:

# cat /var/log/s3.log
# tail -f /var/log/s3.log

Beyond the above, you can monitor /var/log/messages for pertinent errors, because the main partition performance (PP) modules log to this file. You can enable debug level logging for the various PP modules as follows.

Dataset:

# sysctl ilog.ifs.acct.raa.syslog=debug+
ilog.ifs.acct.raa.syslog: error,warning,notice (inherited) -> error,warning,notice,info,debug

Workload:

# sysctl ilog.ifs.acct.rat.syslog=debug+
ilog.ifs.acct.rat.syslog: error,warning,notice (inherited) -> error,warning,notice,info,debug

Actor work:

# sysctl ilog.ifs.acct.work.syslog=debug+
ilog.ifs.acct.work.syslog: error,warning,notice (inherited) -> error,warning,notice,info,debug

When finished, you can restore the default logging levels for the above modules as follows:

# sysctl ilog.ifs.acct.raa.syslog=notice+
# sysctl ilog.ifs.acct.rat.syslog=notice+
# sysctl ilog.ifs.acct.work.syslog=notice+

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • NAS
  • clusters
  • SmartQoS

OneFS SmartQoS Configuration and Setup

Nick Trimbee

Tue, 14 Mar 2023 16:06:06 -0000

|

Read Time: 0 minutes

In the previous article in this series, we looked at the underlying architecture and management of SmartQoS in OneFS 9.5. Next, we’ll step through an example SmartQoS configuration using the CLI and WebUI.

After an initial set up, configuring a SmartQoS protocol Ops limit comprises four fundamental steps. These are:

 

  1. Identify metrics of interest: used for tracking, to enforce an Ops limit. In this example, ‘path’ and ‘protocol’ are the metrics used to identify the workload.
  2. Create a dataset: for tracking all of the chosen metric categories. In this example, the dataset ‘ds1’ is created with the metrics identified.
  3. Pin a workload: to specify exactly which values to track within the chosen metrics. In this example, path: /ifs/data/client_exports and protocol: nfs3.
  4. Set a limit: to cap Ops based on the dataset, metrics (categories), and metric values defined by the workload. In this example, a protocol_ops limit of 100.

Step 1:

First, select a metric of interest. For this example, we’ll use the following:

  • Protocol: NFSv3
  • Path: /ifs/test/expt_nfs

If not already present, create and verify an NFS export – in this case at /ifs/test/expt_nfs:

# isi nfs exports create /ifs/test/expt_nfs
# isi nfs exports list
ID Zone Paths Description
------------------------------------------------
1 System /ifs/test/expt_nfs
------------------------------------------------

Or from the WebUI, under Protocols UNIX sharing (NFS) > NFS exports:

Step 2:

The ‘dataset’ designation is used to categorize workload by various identification metrics, including:

  • Username: UID or SID
  • Primary groupname: Primary GID or GSID
  • Secondary groupname: Secondary GID or GSID
  • Zone name
  • IP address: Local or remote IP address, or IP address range
  • Path: Except for the S3 protocol
  • Share: SMB share or NFS export ID
  • Protocol: NFSv3, NFSv4, NFSoRDMA, SMB, or S3

SmartQoS in OneFS 9.5 only allows protocol Ops as the transient resources used for configuring a limit ceiling.

For example, you can use the following CLI command to create a dataset ‘ds1’, specifying protocol and path as the ID metrics:

# isi performance datasets create --name ds1 protocol path
Created new performance dataset 'ds1' with ID number 1.

Note: Resource usage tracking by the ‘path’ metric is only supported by SMB and NFS.
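
As another hedged example (the dataset name is arbitrary), a dataset could instead be keyed on user and protocol, using the same command syntax:

# isi performance datasets create --name ds2 username protocol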

The following command displays any configured datasets:

# isi performance datasets list

Or, from the WebUI, by navigating to Cluster management > Smart QoS:

Step 3:

After you have created the dataset, you can pin a workload to it by specifying the metric values. For example:

# isi performance workloads pin ds1 protocol:nfs3 path:/ifs/test/expt_nfs

Pinned performance dataset workload with ID number 100.

Or from the WebUI, by browsing to Cluster management > Smart QoS > Pin workload:

After pinning a workload, the entry appears in the ‘Top Workloads’ section of the WebUI page. However, wait at least 30 seconds to start receiving updates.

To list all the pinned workloads from a specified dataset, use the following command:

# isi performance workloads list ds1

The prior command’s output indicates that there are currently no limits set for this workload.

By default, a protocol ops limit exists for each workload. However, it is set to the maximum (the maximum value of a 64-bit unsigned integer). This is represented in the CLI output by a dash (“-“) if a limit has not been explicitly configured:

# isi performance workloads list ds1
ID   Name  Metric Values           Creation Time       Cluster Resource Impact  Client Impact   Limits
--------------------------------------------------------------------------------------
100  -     path:/ifs/test/expt_nfs 2023-02-02T12:06:05  -          -              -
           protocol:nfs3
--------------------------------------------------------------------------------------
Total: 1

Step 4:

For a pinned workload in a dataset, you can configure a limit for the protocol ops limit from the CLI, using the following syntax:

# isi performance workloads modify <dataset> <workload ID> --limits protocol_ops:<value>

When configuring SmartQoS, always be aware that it is a powerful performance throttling tool which can be applied to significant areas of a cluster’s data and userbase. For example, protocol Ops limits can be configured for metrics such as ‘path:/ifs’, which would affect the entire /ifs filesystem, or ‘zone_name:System’ which would limit the System access zone and all users within it. While such configurations are entirely valid, they would have a significant, system-wide impact. As such, exercise caution when configuring SmartQoS to avoid any inadvertent, unintended, or unexpected performance constraints.
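
As a hedged illustration (the dataset name and path are hypothetical), prefer pinning a narrowly scoped workload, such as a single export path, rather than a broad metric value such as path:/ifs or zone_name:System:

# isi performance workloads pin ds1 protocol:nfs3 path:/ifs/data/projects/proj1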

In the following example, the dataset is ‘ds1’, the workload ID is ‘100’, and the protocol Ops limit is set to the value ‘10’:

# isi performance workloads modify ds1 100 --limits protocol_ops:10
protocol_ops: 18446744073709551615 -> 10

Or from the WebUI, by browsing to Cluster management > Smart QoS > Pin and throttle workload:

You can use the ‘isi performance workloads’ command in ‘list’ mode to show details of the workloads in dataset ‘ds1’. In this case, ‘Limits’ is set to protocol_ops = 10.

# isi performance workloads list ds1
ID   Name  Metric Values           Creation Time       Cluster Resource Impact  Client Impact   Limits
--------------------------------------------------------------------------------------
100  -     path:/ifs/test/expt_nfs 2023-02-02T12:06:05  -   -  protocol_ops:10
           protocol:nfs3
--------------------------------------------------------------------------------------
Total: 1

Or in ‘view’ mode:

# isi performance workloads view ds1 100
                     ID: 100
                   Name: -
          Metric Values: path:/ifs/test/expt_nfs, protocol:nfs3
          Creation Time: 2023-02-02T12:06:05
Cluster Resource Impact: -
          Client Impact: -
                 Limits: protocol_ops:10

Or from the WebUI, by browsing to Cluster management > Smart QoS:

You can easily modify the limit value of a pinned workload with the following CLI syntax. For example, to set the limit to 100 Ops:

# isi performance workloads modify ds1 100 --limits protocol_ops:100

Or from the WebUI, by browsing to Cluster management > Smart QoS > Edit throttle:

Similarly, you can use the following CLI command to easily remove a protocol ops limit for a pinned workload:

# isi performance workloads modify ds1 100 --no-protocol-ops-limit

Or from the WebUI, by browsing to Cluster management > Smart QoS > Remove throttle:

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

OneFS SupportAssist

Nick Trimbee

Mon, 13 Mar 2023 23:31:33 -0000

|

Read Time: 0 minutes

Among the myriad of new features included in the OneFS 9.5 release is SupportAssist, Dell’s next-gen remote connectivity system. SupportAssist is included with all support plans (features vary based on service level agreement).

Dell SupportAssist rapidly identifies, diagnoses, and resolves cluster issues and provides the following key benefits:

  • Improves productivity by replacing manual routines with automated support
  • Accelerates resolution, or avoid issues completely, with predictive issue detection and proactive remediation

Within OneFS, SupportAssist transmits events, logs, and telemetry from PowerScale to Dell support. As such, it provides a full replacement for the legacy ESRS.

Delivering a consistent remote support experience across the Dell storage portfolio, SupportAssist is intended for all sites that can send telemetry off-cluster to Dell over the Internet. SupportAssist integrates the Dell Embedded Service Enabler (ESE) into PowerScale OneFS along with a suite of daemons to allow its use on a distributed system.

A quick comparison of SupportAssist and ESRS:

  • SupportAssist: Dell’s next-generation remote connectivity solution. ESRS: Being phased out of service.
  • SupportAssist: Can connect either directly or through supporting gateways. ESRS: Can only use gateways for remote connectivity.
  • SupportAssist: Uses Connectivity Hub to coordinate support. ESRS: Uses ServiceLink to coordinate support.

Using the Dell Connectivity Hub, SupportAssist can interact either directly or through a Secure Connect gateway.

SupportAssist has a variety of components that gather and transmit various pieces of OneFS data and telemetry to Dell Support and backend services through the Embedded Service Enabler (ESE). These workflows include CELOG events; In-product activation (IPA) information; CloudIQ telemetry data; Isi-Gather-info (IGI) logsets; and provisioning, configuration, and authentication data to ESE and the various backend services.

  • CELOG: In OneFS 9.5, SupportAssist can be configured to send CELOG events and attachments through ESE to CLM. CELOG has a “supportassist” channel that, when active, creates an EVENT task for SupportAssist to propagate.
  • License activation: The isi license activation start command uses SupportAssist to connect. Several pieces of PowerScale and OneFS functionality require licenses and must communicate with the Dell backend services in order to register and activate those cluster licenses. In OneFS 9.5, SupportAssist is the preferred mechanism to send those license activations through ESE to the Dell backend. License information can be generated with the isi license generate CLI command and then activated with the isi license activation start syntax.
  • Provisioning: SupportAssist must register with backend services in a process known as provisioning. This process must be run before ESE will respond on any of its other available API endpoints. Provisioning can only successfully occur once per installation, and subsequent provisioning tasks will fail. SupportAssist must be configured through the CLI or WebUI before provisioning. The provisioning process uses authentication information that was stored in the key manager upon the first boot.
  • Diagnostics: The OneFS isi diagnostics gather and isi_gather_info logfile collation and transmission commands have a --supportassist option.
  • Healthchecks: HealthCheck definitions are updated using SupportAssist.
  • Telemetry: CloudIQ telemetry data is sent using SupportAssist.
  • Remote support: Remote Support uses SupportAssist and the Connectivity Hub to assist customers with their clusters.

SupportAssist requires an access key and PIN, or hardware key, to be enabled, with most customers likely using the access key and PIN method. Secure keys are held in the key manager under the RICE domain.
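
As a rough sketch only (the exact subcommand and flag names should be verified against the OneFS 9.5 CLI reference, and the key and PIN values are placeholders), provisioning with an access key and PIN follows this general pattern:

# isi supportassist provision start --access-key <key> --pin <pin>
# isi supportassist settings view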

In addition to the transmission of data from the cluster to Dell, Connectivity Hub also allows inbound remote support sessions to be established for remote cluster troubleshooting.

 In the next article in this series, we’ll take a deeper look at the SupportAssist architecture and operation.

 

 

Read Full Blog
  • PowerScale
  • OneFS
  • SmartQoS

OneFS SmartQoS Architecture and Management

Nick Trimbee

Wed, 01 Mar 2023 22:34:30 -0000

|

Read Time: 0 minutes

The SmartQoS Protocol Ops limits architecture, introduced in OneFS 9.5, involves three primary capabilities:

  • Resource tracking
  • Resource limit distribution
  • Throttling

Under the hood, the OneFS protocol heads (NFS, SMB, and S3) identify and track how many protocol operations are being processed through a specific export or share. The existing partitioned performance (PP) reporting infrastructure is leveraged for cluster wide resource usage collection, limit calculation and distribution, along with new OneFS 9.5 functionality to support pinned workload protocol Ops limits.

The protocol scheduling module (LwSched) has a built-in throttling capability that allows the execution of individual operations to be delayed by temporarily pausing them, or ‘sleeping’. Additionally, in OneFS 9.5, the partitioned performance kernel modules have also been enhanced to calculate ‘sleep time’ based on operation count resource information (requested, average usage, and so on) – both within the current throttling window, and for a specific workload.

We can characterize the fundamental SmartQoS workflow as follows:

  1. Configuration, using the CLI, pAPI, or WebUI.
  2. Statistics gatherer obtains Op/s data from the partitioned performance (PP) kernel.
  3. Stats gatherer communicates Op/s data to PP leader service.
  4. Leader queries config manager for per-cluster rate limit.
  5. Leader calculates per-node limit.
  6. PP follower service is notified of per-node Op/s limit.
  7. Kernel is informed of new per-node limit.
  8. Work is scheduled with rate-limited resource.
  9. Kernel returns sleep time, if needed.

When an admin configures a per-cluster protocol Ops limit, the statistics gathering service, isi_stats_d, begins collecting workload resource information from the partitioned performance (PP) kernel on each node in the cluster (every 30 seconds by default) and notifies the isi_pp_d leader service of this resource information. Next, the leader obtains the per-cluster protocol Ops limit, plus additional resource consumption metrics, from the isi_acct_cpp service in isi_tardis_d (the OneFS cluster configuration service), and calculates each node’s protocol Ops limit for the next throttling window. It then instructs the isi_pp_d follower service on each node to update the kernel with the newly calculated protocol Ops limit, plus a request to reset the throttling window.
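
To see which node currently holds the isi_pp_d leader role in this workflow, the daemon’s ‘-s’ option (described in the SmartQoS troubleshooting article in this series) reports the pp_leader node’s devid and LNN:

# isi_pp_d -s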

When the kernel receives a scheduling request for a work item from the protocol scheduler (LwSched), the kernel calculates the required ‘sleep time’ value, based on the current node protocol Ops limit and resource usage in the current throttling window. If insufficient resources are available, the work item execution thread is put to sleep for a specific interval returned from the PP kernel. If resources are available, or the thread is reactivated from sleeping, it executes the work item and reports the resource usage statistics back to PP, releasing any scheduling resources it may own.

SmartQoS can be configured through either the CLI, platform API, or WebUI, and OneFS 9.5 introduces a new SmartQoS WebUI page to support this. Note that SmartQoS is only available when an upgrade to OneFS 9.5 has been committed, and any attempt to configure or run the feature prior to upgrade commit will fail with the following message:

# isi performance workloads modify DS1 -w WS1 --limits protocol_ops:50000
 Setting of protocol ops limits not available until upgrade has been committed

When a cluster is running OneFS 9.5 and the release is committed, the SmartQoS feature is enabled by default. This, and the current configuration, can be confirmed using the following CLI command:

 # isi performance settings view
                   Top N Collections: 1024
        Time In Queue Threshold (ms): 10.0
 Target read latency in microseconds: 12000.0
Target write latency in microseconds: 12000.0
          Protocol Ops Limit Enabled: Yes

In OneFS 9.5, the ‘isi performance settings modify’ CLI command now includes a ‘protocol-ops-limit-enabled’ parameter to allow the feature to be easily disabled (or re-enabled) across the cluster. For example:

# isi performance settings modify --protocol-ops-limit-enabled false
protocol_ops_limit_enabled: True -> False
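
To re-enable the feature cluster-wide, set the same parameter back to true:

# isi performance settings modify --protocol-ops-limit-enabled true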

Similarly, the ‘isi performance settings view’ CLI command has been extended to report the protocol OPs limit state:

# isi performance settings view
Top N Collections: 1024
Protocol Ops Limit Enabled: Yes

In order to set a protocol OPs limit on workload from the CLI, the ‘isi performance workload pin’ and ‘isi performance workload modify’ commands now accept an optional ‘–limits’ parameter. For example, to create a pinned workload with the ‘protocol_ops’ limit set to 10000:

# isi performance workload pin test protocol:nfs3 --limits protocol_ops:10000

Similarly, to modify an existing workload’s ‘protocol_ops’ limit to 20000:

# isi performance workload modify test 101 --limits protocol_ops:20000
protocol_ops: 10000 -> 20000

When configuring SmartQoS, always be aware that it is a powerful throttling tool that can be applied to significant areas of a cluster’s data and userbase. For example, protocol OPs limits can be configured for metrics such as ‘path:/ifs’, which would affect the entire /ifs filesystem, or ‘zone_name:System’ which would limit the System access zone and all users within it.

While such configurations are entirely valid, they would have a significant, system-wide impact. As such, exercise caution when configuring SmartQoS to avoid any inadvertent, unintended, or unexpected performance constraints.

To clear a protocol Ops limit on a workload, the ‘isi performance workload modify’ CLI command has been extended to accept an optional ‘--no-protocol-ops-limit’ argument. For example:

# isi performance workload modify test 101 --no-protocol-ops-limit
protocol_ops: 20000 -> 18446744073709551615

Note that the value of ‘18446744073709551615’ in the command output above represents ‘NO_LIMIT’ set.

You can view a workload’s protocol Ops limit by using the ‘isi performance workload list’ and ‘isi performance workload view’ CLI commands, which have been modified in OneFS 9.5 to display the limits appropriately. For example:

# isi performance workload list test
ID Name Metric Values Creation Time Impact Limits
---------------------------------------------------------------------
101 - protocol:nfs3 2023-02-02T22:35:02 - protocol_ops:20000
---------------------------------------------------------------------
# isi performance workload view test 101
ID: 101
Name: -
Metric Values: protocol:nfs3
Creation Time: 2023-02-02T22:35:02
Impact: -
Limits: protocol_ops:20000

In the next article in this series, we’ll step through an example SmartQoS configuration and verification from both the CLI and WebUI.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • SmartQoS
  • performance management

OneFS SmartQoS

Nick Trimbee

Thu, 23 Feb 2023 22:34:49 -0000

|

Read Time: 0 minutes

Built atop the partitioned performance (PP) resource monitoring framework, OneFS 9.5 introduces a new SmartQoS performance management feature. SmartQoS allows a cluster administrator to set limits on the maximum number of protocol operations per second (Protocol Ops) that individual pinned workloads can consume, in order to achieve desired business workload prioritization. Among the benefits of this new QoS functionality are:

  • Enabling IT infrastructure teams to achieve performance SLAs
  • Allowing throttling of rogue or low priority workloads and hence prioritization of other business critical workloads
  • Helping minimize data unavailability events due to overloaded clusters

 

This new SmartQoS feature in OneFS 9.5 supports the NFS, SMB and S3 protocols, including mixed traffic to the same workload.

But first, a quick refresher. The partitioned performance resource monitoring framework, which initially debuted in OneFS 8.0.1, enables OneFS to track and report the use of transient system resources (resources that only exist at a given instant), providing insight into who is consuming what resources, and how much of them. Examples include CPU time, network bandwidth, IOPS, disk accesses, and cache hits, and so on.

OneFS partitioned performance is an ongoing project that in OneFS 9.5 now provides control and insights. This allows control of work flowing through the system, prioritization and protection of mission critical workflows, and the ability to detect if a cluster is at capacity.

Because identification of work is highly subjective, OneFS partitioned performance resource monitoring provides significant configuration flexibility, by allowing cluster admins to craft exactly how they want to define, track, and manage workloads. For example, an administrator might want to partition their work based on criteria such as which user is accessing the cluster, the export/share they are using, which IP address they’re coming from – and often a combination of all three.

OneFS has always provided client and protocol statistics, but they were typically front-end only. Similarly, OneFS has provided CPU, cache, and disk statistics, but they did not display who was consuming them. Partitioned performance unites these two realms, tracking the usage of the CPU, drives, and caches, and spanning the initiator/participant barrier.

OneFS collects the resources consumed and groups them into distinct workloads. The aggregation of these workloads comprises a performance dataset.

  • Workload: A set of identification metrics and the resources used. For example, {username:nick, zone_name:System} consumed {cpu:1.5s, bytes_in:100K, bytes_out:50M, …}.
  • Performance dataset: The set of identification metrics by which to aggregate workloads, plus the list of workloads collected that match that specification. For example, {usernames, zone_names}.
  • Filter: A method for including only workloads that match specific identification metrics. For example:
      • {username:nick, zone_name:System}
      • {username:jane, zone_name:System}
      • {username:nick, zone_name:Perf}

The following metrics are tracked by partitioned performance resource monitoring:

Identification Metrics

  • Username / UID / SID
  • Primary Groupname / GID / GSID
  • Secondary Groupname / GID / GSID
  • Zone Name
  • Local/Remote IP Address/Range
  • Path
  • Share / Export ID
  • Protocol
  • System Name
  • Job Type

Transient Resources

  • CPU Usage
  • Bytes In/Out – Net traffic minus TCP headers
  • IOPs – Protocol OPs
  • Disk Reads – Blocks read from disk
  • Disk Writes – Block written to the journal, including protection
  • L2 Hits – Blocks read from L2 cache
  • L3 Hits – Blocks read from L3 cache
  • Latency – Sum of time taken from start to finish of OP
  • ReadLatency
  • WriteLatency
  • OtherLatency

Performance Statistics

  • Read/Write/Other Latency

Supported Protocols

  • NFS
  • SMB
  • S3
  • Jobs
  • Background Services

Be aware that, in OneFS 9.5, SmartQoS currently does not support the following Partitioned Performance criteria:

Metrics

  • System Name
  • Job Type

Workloads

  • Top workloads (as they are dynamically and automatically generated by the kernel)
  • Workloads belonging to the ‘system’ dataset

Protocols

  • Jobs
  • Background services

When pinning a workload to a dataset, note that the more metrics there are in that dataset, the more parameters need to be defined when pinning to it. For example:

Dataset = zone_name, protocol, username

To set a limit on this dataset, you’d need to pin the workload by also specifying the zone name, protocol, and username.

When using the remote_address and/or local_address metrics, you can also specify a subnet. For example: 10.123.456.0/24

With the exception of the system dataset, you must configure performance datasets before statistics are collected.

For SmartQoS in OneFS 9.5, you can define and configure limits as a maximum number of protocol operations (Protocol Ops) per second across the following protocols:

  • NFSv3
  • NFSv4
  • NFSoRDMA
  • SMB
  • S3

You can apply a Protocol Ops limit to up to four custom datasets. All pinned workloads within a dataset can have a limit configured, up to a maximum of 1024 workloads per dataset. If multiple workloads happen to share a common metric value with overlapping limits, the lowest configured limit is enforced.

Note that when upgrading to OneFS 9.5, SmartQoS is activated only when the new release has been successfully committed.

In the next article in this series, we’ll take a deeper look at SmartQoS’ underlying architecture and workflow.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • SmartPools

OneFS SmartPools Transfer Limits Configuration and Management

Nick Trimbee

Thu, 16 Feb 2023 15:48:08 -0000

|

Read Time: 0 minutes

In the first article in this series, we looked at the architecture and considerations of the new SmartPools transfer limits in OneFS 9.5. Now, we turn our attention to the configuration and management of this feature.

From the control plane side, OneFS 9.5 contains several WebUI and CLI enhancements to reflect the new SmartPools transfer limits functionality. Probably the most obvious change is in the Local storage usage status histogram, where tiers and their child node pools have been aggregated for a more logical grouping. Also, blue limit-lines have been added above each of the storage pools, and a red warning status is displayed for any pools that have exceeded the transfer limit.

Similarly, the storage pools status page now includes transfer limit details, with the 90% limit displayed for any storage pools using the default setting.

From the CLI, the isi storagepool nodepools view command reports the transfer limit status and percentage for a pool. The used SSD and HDD bytes percentages in the command output indicate where the pool utilization is relative to the transfer limit.

The storage transfer limit can easily be configured from the CLI, either for a specific pool, as a cluster-wide default, or disabled entirely, using the new --transfer-limit and --default-transfer-limit flags.

The following CLI command can be used to set the transfer limit for a specific storage pool:

# isi storagepool {nodepools | tiers} modify <name> --transfer-limit={0-100, default, disabled}

For example, to set a limit of 80% on an A200 nodepool:

# isi storagepool nodepools modify a200_30tb_1.6tb-ssd_96gb --transfer-limit=80

Or to set the default limit of 90% on tier perf1:

# isi storagepool tiers modify perf1 --transfer-limit=default

Note that setting the transfer limit of a tier automatically applies to all its child node pools, regardless of any prior child limit configurations.

The global isi storagepool settings view CLI command output shows the default transfer limit, which is 90% but can be configured anywhere from 0 to 100%.
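
For example, to quickly confirm the current default (the grep filter is just a convenience, and assumes the field name contains ‘transfer’):

# isi storagepool settings view | grep -i transfer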

This default limit can be reconfigured from the CLI with the following syntax:

# isi storagepool settings modify --default-transfer-limit={0-100, disabled}

For example, to set a new default transfer limit of 85%:

# isi storagepool settings modify --default-transfer-limit=85

And the same changes can be made from the SmartPools WebUI, by navigating to Storage pools > SmartPools settings:

Once a SmartPools job has completed in OneFS 9.5, the job report contains a new field: ‘Files not moved due to transfer limit exceeded’.

# isi job reports view 1056 
... 
... 
Policy/testpolicy/Access changes skipped 0 
Policy/testpolicy/ADS containers matched 'head’ 0 
Policy/testpolicy/ADS containers matched 'snapshot’ 0 
Policy/testpolicy/ADS streams matched 'head’ 0 
Policy/testpolicy/ADS streams matched 'snapshot’ 0 
Policy/testpolicy/Directories matched 'head’ 0 
Policy/testpolicy/Directories matched 'snapshot’ 0 
Policy/testpolicy/File creation templates matched 0 
Policy/testpolicy/Files matched 'head’ 0 
Policy/testpolicy/Files matched 'snapshot’ 0 
Policy/testpolicy/Files not moved due to transfer limit exceeded 0 
Policy/testpolicy/Files packed 0 
Policy/testpolicy/Files repacked 0 
Policy/testpolicy/Files unpacked 0 
Policy/testpolicy/Packing changes skipped 0 
Policy/testpolicy/Protection changes skipped 0 
Policy/testpolicy/Skipped files already in containers 0 
Policy/testpolicy/Skipped packing non-regular files 0 
Policy/testpolicy/Skipped packing regular files 0

Additionally, the SYS STORAGEPOOL FILL LIMIT EXCEEDED alert is triggered at the Info level when a storage pool’s usage has exceeded its transfer limit. Each hour, CELOG fires off a monitor helper script that measures how full each storage pool is relative to its transfer limit. The usage is gathered by reading from the disk pool database, and the transfer limits are stored in gconfig. If a node pool has a transfer limit of 50% and usage of 75%, the monitor helper would report a measurement of 150%, triggering an alert.

# isi event view 126 
ID: 126 
Started: 11/29 20:32 
Causes Long: storagepool: vonefs_13gb_4.2gb-ssd_6gb:hdd usage: 33.4, transfer limit: 30.0 
Lnn: 0 
Devid: 0 
Last Event: 2022-11-29T20:32:16 
Ignore: No 
Ignore Time: Never 
Resolved: No 
Resolve Time: Never 
Ended: -- 
Events: 1 
Severity: information

And from the WebUI:


And there you have it: Transfer limits, and the first step in the evolution toward a smarter SmartPools.

 

Read Full Blog
  • PowerScale
  • OneFS
  • SmartPools

OneFS SmartPools Transfer Limits

Nick Trimbee

Wed, 15 Feb 2023 22:53:09 -0000

|

Read Time: 0 minutes

The new OneFS 9.5 release introduces the first phase of engineering’s Smarter SmartPools initiative, and delivers a new feature called SmartPools transfer limits.

The goal of SmartPools Transfer Limits is to address spillover. Previously, when file pool policies were executed, OneFS had no guardrails to protect against overfilling the destination or target storage pool. So if a pool was overfilled, data would unexpectedly spill over into other storage pools.

An overflow would result in storagepool usage exceeding 100%, and cause the SmartPools job itself to do a considerable amount of unnecessary work, trying to send files to a given storagepool. But because the pool was full, it would then have to send those files off to another storage pool that was below capacity. This would result in data going where it wasn’t intended, and the potential for individual files to end up getting split between pools. Also, if the full pool was on the most highly performing storage in the cluster, all subsequent newly created data would now land on slower storage, affecting its throughput and latency. The recovery from a spillover can be fairly cumbersome because it’s tough for the cluster to regain balance, and urgent system administration may be required to free space on the affected tier.

In order to address this, SmartPools Transfer Limits allows a cluster admin to configure a storagepool capacity-usage threshold, expressed as a percentage, and beyond which file pool policies stop moving data to that particular storage pool.

These transfer limits only take effect when running jobs that apply filepool policies, such as SmartPools, SmartPoolsTree, and FilePolicy.

The main benefits of this feature are two-fold:

  • Safety, in that OneFS avoids undesirable actions, so the customer is prevented from getting into escalation situations, because SmartPools won’t overfill storage pools.
  • Performance, because transfer limits avoid unnecessary work, and allow the SmartPools job to finish sooner.

Under the hood, a cluster’s storagepool SSD and HDD usage is calculated using the same algorithm as reported by the ‘isi storagepool list’ CLI command. This means that a pool’s VHS (virtual hot spare) reserved capacity is respected by SmartPools transfer limits. When a SmartPools job is running, there is at least one worker on each node processing a single LIN at any given time. In order to calculate the current HDD and SSD usage per storagepool, the worker must read from the diskpool database. To circumvent this potential bottleneck, the filepool policy algorithm caches the diskpool database contents in memory for up to 10 seconds.

Transfer limits are stored in gconfig, and a separate entry is stored within the ‘smartpools.storagepools’ hierarchy for each explicitly defined transfer limit.

Note that in the SmartPools lexicon, ‘storage pool’ is a generic term denoting either a tier or nodepool. Additionally, SmartPools tiers comprise one or more constituent nodepools.

Each gconfig transfer limit entry stores a limit value and the diskpool database identifier of the storagepool to which the transfer limit applies. Additionally, a ‘transfer limit state’ field specifies which of three states the limit is in:

Limit state

Description

Default

Fallback to the default transfer limit.

Disabled

Ignore transfer limit.

Enabled

The corresponding transfer limit value is valid.

A SmartPools transfer limit does not affect the general ingress, restriping, or reprotection of files, regardless of how full the storage pool is where that file is located. So if you’re creating or modifying a file on the cluster, it will be created there anyway. This will continue up until the pool reaches 100% capacity, at which point it will then spill over.

The default transfer limit is 90% of a pool’s capacity. This applies to all storage pools where the cluster admin hasn’t explicitly set a threshold. Note also that the default limit doesn’t get set until a cluster upgrade to OneFS 9.5 has been committed. So if you’re running a SmartPools policy job during an upgrade, you’ll have the preexisting behavior, which is to send the file to wherever the file pool policy instructs it to go. It’s also worth noting that, even though the default transfer limit is set on commit, if a job was running over that commit edge, you’d have to pause and resume it for the new limit behavior to take effect. This is because the new configuration is loaded lazily when the job workers are started up, so even though the configuration changes, a pause and resume is needed to pick up those changes.
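
For example, assuming a SmartPools job happened to be running across the upgrade commit (the job ID below is illustrative), pausing and resuming it forces the workers to reload the new limit configuration:

# isi job jobs list
# isi job jobs pause 273
# isi job jobs resume 273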

SmartPools itself needs to be licensed on a cluster in order for transfer limits to work. And limits can be configured at the tier or nodepool level. But if you change the limit of a tier, it automatically applies to all of its child nodepools, regardless of any prior child limit configurations. The transfer limit feature can also be disabled, which results in the same spillover behavior OneFS always displayed, and any configured limits will not be respected.

Note that a filepool policy’s transfer limits algorithm does not consider the size of the file when deciding whether to move it to the policy’s target storagepool, regardless of whether the file is empty, or a large file. Similarly, a target storagepool’s usage must exceed its transfer limit before the filepool policy will stop moving data to that target pool. The assumption here is that any storagepool usage overshoot is insignificant in scale compared to the capacity of a cluster’s storagepool.

A SmartPools file pool policy allows you to send snapshot or HEAD data blocks to different targets, if so desired.

Because the transfer limit applies to the storagepool itself, and not to the file pool policy, it’s important to note that, if you’ve got varying storagepool targets and one file pool policy, you may have a situation where the head data blocks do get moved. But if the snapshot is pointing at a storage pool that has exceeded its transfer limit, its blocks will not be moved.

File pool policies also allow you to specify how a mixed node’s SSDs are used: either as L3 cache, or as an SSD strategy for head and snapshot blocks. If the SSDs in a node are configured for L3, they are not being used for storage, so any transfer limits are irrelevant to it. As an alternative to L3 cache, SmartPools offers three main categories of SSD strategy:  

  • Avoid, which means send all blocks to HDD 
  • Data, which means send everything to SSD 
  • Metadata Read or Write, which sends varying numbers of metadata mirrors to SSD, and data blocks to hard disk.

To reflect this, SmartPools transfer limits are slightly nuanced when it comes to SSD strategies. That is, if the storagepool target contains both HDD and SSD, the usage capacity of both mediums needs to be below the transfer limit in order for the file to be moved to that target. For example, take two node pools, NP1 and NP2.

A file pool policy, Pol1, is configured and which matches all files under /ifs/dir1, with an SSD strategy of Metadata Write, and pool NP1 as the target for HEAD’s data blocks. For snapshots, the target is NP2, with an ‘avoid’ SSD strategy, so just writing to hard disk for both snapshot data and metadata.

When a SmartPools job runs and attempts to apply this file pool policy, it sees that SSD usage is above the 85% configured transfer limit for NP1. So, even though the hard disk capacity usage is below the limit, neither HEAD data nor metadata will be sent to NP1.

For the snapshot, the SSD usage is also above the NP2 pool’s transfer limit of 90%.

However, because the SSD strategy is ‘avoid’, and because the hard disk usage is below the limit, the snapshot’s data and metadata get successfully sent to the NP2 HDDs.

Author: Nick Trimbee

Read Full Blog
  • security
  • PowerScale
  • OneFS
  • cybersecurity

PowerScale OneFS 9.5 Delivers New Security Features and Performance Gains

Nick Trimbee

Fri, 28 Apr 2023 19:57:51 -0000

|

Read Time: 0 minutes

PowerScale – the world’s most flexible[1] and cyber-secure scale-out NAS solution[2]  – is powering up the new year with the launch of the innovative OneFS 9.5 release. With data integrity and protection being top of mind in this era of unprecedented corporate cyber threats, OneFS 9.5 brings an array of new security features and functionality to keep your unstructured data and workloads more secure than ever, as well as delivering significant performance gains on the PowerScale nodes – such as up to 55% higher performance on all-flash F600 and F900 nodes as compared with the previous OneFS release.[3]   

OneFS and hardware security features 

New PowerScale OneFS 9.5 security enhancements include those that directly satisfy US Federal and DoD mandates, such as FIPS 140-2, Common Criteria, and DISA STIGs – in addition to general enterprise data security requirements. Multi-factor authentication (MFA), single sign-on (SSO) support, data encryption in-flight and at rest, TLS 1.2, USGv6R1 IPv6 support, SED Master Key rekey, plus a new host-based firewall are all part of OneFS 9.5. 

15TB and 30TB self-encrypting (SED) SSDs now enable PowerScale platforms running OneFS 9.5 to scale up to 186 PB of encrypted raw capacity per cluster – all within a single volume and filesystem, and before any additional compression and deduplication benefit.  

Delivering federal-grade security to protect data under a zero trust model 

Security-wise, the United States Government has stringent requirements for infrastructure providers such as Dell Technologies, requiring vendors to certify that products comply with requirements such as USGv6, STIGs, DoDIN APL, Common Criteria, and so on. Activating the OneFS 9.5 cluster hardening option implements a default maximum security configuration with AES and SHA cryptography, which automatically renders a cluster FIPS 140-2 compliant. 

OneFS 9.5 introduces SAML-based single sign-on (SSO) from both the command line and WebUI using a redesigned login screen. OneFS SSO is compatible with identity providers (IDPs) such as Active Directory Federation Services, and is also multi-tenant aware, allowing independent configuration for each of a cluster’s Access Zones. 

Federal APL requirements mandate that a system must validate all certificates in a chain up to a trusted CA root certificate. To address this, OneFS 9.5 introduces a common Public Key Infrastructure (PKI) library to issue, maintain, and revoke public key certificates. These certificates provide digital signature and encryption capabilities, using public key cryptography to provide identification and authentication, data integrity, and confidentiality. This PKI library is used by all OneFS components that need PKI certificate verification support, such as SecureSMTP, ensuring that they all meet Federal PKI requirements. 

This new OneFS 9.5 PKI and certificate authority infrastructure enables multi-factor authentication, allowing users to swipe a CAC or PIV smartcard containing their login credentials to gain access to a cluster, rather than manually entering username and password information. Additional account policy restrictions in OneFS 9.5 automatically disable inactive accounts, provide concurrent administrative session limits, and implement a delay after a failed login.  

As part of FIPS 140-2 compliance, OneFS 9.5 introduces a new key manager, providing a secure central repository for secrets such as machine passwords, Kerberos keytabs, and other credentials, with the option of using MCF (modular crypt format) with SHA256 or SHA512 hash types. OneFS protocols and services may be configured to support FIPS 140-2 data-in-flight encryption compliance, while SED clusters and the new Master Key re-key capability provide FIPS 140-2 data-at-rest encryption. Plus, any unused or non-compliant services are easily disabled.  

On the network side, the Federal APL has several IPv6 (USGv6) requirements that are focused on allowing granular control of individual components of a cluster’s IPv6 stack, such as duplicate address detection (DAD) and link local IP control. Satisfying both STIG and APL requirements, the new OneFS 9.5 front-end firewall allows security admins to restrict the management interface to specified subnets and to implement port blocking and packet filtering rules from the cluster’s command line or WebUI, in accordance with federal or corporate security policy. 

Improving performance for the most demanding workloads

OneFS 9.5 unlocks dramatic performance gains, particularly for the all-flash NVMe platforms, where the PowerScale F900 can now support line-rate streaming reads. SmartCache enhancements allow OneFS 9.5 to deliver streaming read performance gains of up to 55% on the F-series F600 and F900 nodes,[3] delivering benefit to media and entertainment workloads, plus AI, machine learning, deep learning, and more. 

Enhancements to SmartPools in OneFS 9.5 introduce configurable transfer limits. These limits include maximum capacity thresholds, expressed as a percentage, above which SmartPools will not attempt to move files to a particular tier, boosting both reliability and tiering performance. 

Granular cluster performance control is enabled with the debut of PowerScale SmartQoS, which allows admins to configure limits on the maximum number of protocol operations that NFS, S3, SMB, or mixed protocol workloads can consume. 

Enhancing enterprise-grade supportability and serviceability

OneFS 9.5 enables SupportAssist, Dell’s next generation remote connectivity system for transmitting events, logs, and telemetry from a PowerScale cluster to Dell Support. SupportAssist provides a full replacement for ESRS, as well as enabling Dell Support to perform remote diagnosis and remediation of cluster issues. 

Upgrading to OneFS 9.5 

The new OneFS 9.5 code is available on the Dell Technologies Support site, as both an upgrade and reimage file, allowing both installation and upgrade of this new release.  

Author: Nick Trimbee

[1] Based on Dell analysis, August 2021.

[2] Based on Dell analysis comparing cybersecurity software capabilities offered for Dell PowerScale vs. competitive products, September 2022.

[3] Based on Dell internal testing, January 2023. Actual results will vary.


Read Full Blog
  • PowerScale
  • OneFS
  • diagnostics

OneFS Diagnostics

Nick Trimbee

Sun, 18 Dec 2022 19:43:36 -0000

|

Read Time: 0 minutes

In addition to the /usr/bin/isi_gather_info tool, OneFS also provides both a GUI and a common ‘isi’ CLI version of the tool – albeit with slightly reduced functionality. This means that a OneFS log gather can be initiated either from the WebUI, or by using the ‘isi diagnostics’ CLI command set with the following syntax:

# isi diagnostics gather start

The diagnostics gather status can also be queried as follows:

# isi diagnostics gather status
Gather is running.

When the command has completed, the gather tarfile can be found under /ifs/data/Isilon_Support.
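
For example, to confirm the most recent gather package (directory contents will vary by cluster and gather history):

# ls -lt /ifs/data/Isilon_Support/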

You can also view and modify the ‘isi diagnostics’ configuration as follows:

# isi diagnostics gather settings view
                Upload: Yes
                  ESRS: Yes
         Supportassist: Yes
           Gather Mode: full
  HTTP Insecure Upload: No
      HTTP Upload Host:
      HTTP Upload Path:
     HTTP Upload Proxy:
HTTP Upload Proxy Port: -
            Ftp Upload: Yes
       Ftp Upload Host: ftp.isilon.com
       Ftp Upload Path: /incoming
      Ftp Upload Proxy:
 Ftp Upload Proxy Port: -
       Ftp Upload User: anonymous
   Ftp Upload Ssl Cert:
   Ftp Upload Insecure: No

The configuration options for the ‘isi diagnostics gather’ CLI command include:

  • --upload <boolean>: Enable gather upload.
  • --esrs <boolean>: Use ESRS for gather upload.
  • --gather-mode (incremental | full): Type of gather: incremental or full.
  • --http-insecure-upload <boolean>: Enable insecure HTTP upload on completed gather.
  • --http-upload-host <string>: HTTP host to use for HTTP upload.
  • --http-upload-path <string>: Path on HTTP server to use for HTTP upload.
  • --http-upload-proxy <string>: Proxy server to use for HTTP upload.
  • --http-upload-proxy-port <integer>: Proxy server port to use for HTTP upload.
  • --clear-http-upload-proxy-port: Clear proxy server port to use for HTTP upload.
  • --ftp-upload <boolean>: Enable FTP upload on completed gather.
  • --ftp-upload-host <string>: FTP host to use for FTP upload.
  • --ftp-upload-path <string>: Path on FTP server to use for FTP upload.
  • --ftp-upload-proxy <string>: Proxy server to use for FTP upload.
  • --ftp-upload-proxy-port <integer>: Proxy server port to use for FTP upload.
  • --clear-ftp-upload-proxy-port: Clear proxy server port to use for FTP upload.
  • --ftp-upload-user <string>: FTP user to use for FTP upload.
  • --ftp-upload-ssl-cert <string>: Specifies the SSL certificate to use in the FTPS connection.
  • --ftp-upload-insecure <boolean>: Whether to attempt a plain-text FTP upload.
  • --ftp-upload-pass <string>: Password for the FTP upload user.
  • --set-ftp-upload-pass: Specify the FTP upload password interactively.

As mentioned above, ‘isi diagnostics gather’ does not present quite as broad an array of features as the isi_gather_info utility. This is primarily for security purposes, because ‘isi diagnostics’ does not require root privileges to run. Instead, a user account with the ‘ISI_PRIV_SYS_SUPPORT’ RBAC privilege is needed in order to run a gather from either the WebUI or ‘isi diagnostics gather’ CLI interface.

When a gather is running, a second instance cannot be started from any other node until that instance finishes. Typically, a warning similar to the following appears:

"It appears that another instance of gather is running on the cluster somewhere. If you would like to force gather to run anyways, use the --force-multiple-igi flag. If you believe this message is in error, you may delete the lock file here: /ifs/.ifsvar/run/gather.node."

You can remove this lock as follows:

# rm -f /ifs/.ifsvar/run/gather.node

You can also initiate a log gather from the OneFS WebUI by navigating to Cluster management > Diagnostics > Gather:

 

The WebUI also uses the ‘isi diagnostics’ platform API handler and so, like the CLI command, also offers a subset of the full isi_gather_info functionality.

A limited menu of configuration options is also available in the WebUI, under Cluster management > Diagnostics > Gather settings:

Also contained within the OneFS diagnostics command set is the ‘isi diagnostics netlogger’ utility. Netlogger captures IP traffic over a period of time for network and protocol analysis.

Under the hood, netlogger is a Python wrapper around the ubiquitous tcpdump utility, and can be run either from the OneFS command line or WebUI.

For example, from the WebUI, browse to Cluster management > Diagnostics > Netlogger:

Alternatively, from the OneFS CLI, the isi_netlogger command captures traffic on the interface (‘--interfaces’) over a timeout period of minutes (‘--duration’), and stores a specified number of log files (‘--count’).

Here’s the basic syntax of the CLI utility:

 # isi diagnostics netlogger start
        [--interfaces <str>]
        [--count <integer>]
        [--duration <duration>]
        [--snaplength <integer>]
        [--nodelist <str>]
        [--clients <str>]
        [--ports <str>]
        [--protocols (ip | ip6 | arp | tcp | udp)]
        [{--help | -h}]

Note that using the ‘-b’ bpf buffer size option will temporarily change the default buffer size while netlogger is running.

To display help text for netlogger command options, specify 'isi diagnostics netlogger start -h'. The command options include:

| Netlogger Option | Description |
|---|---|
| --interfaces <str> | Limit packet collection to specified network interfaces. |
| --count <integer> | The number of packet capture files to keep after they reach the duration limit. Defaults to the latest 3 files. 0 is infinite. |
| --duration <duration> | How long to run the capture before rotating the capture file. Default is 10 minutes. |
| --snaplength <integer> | The maximum amount of data for each packet that is captured. Default is 320 bytes. Valid range is 64 to 9100 bytes. |
| --nodelist <str> | List of nodes specified by LNN on which to run the capture. |
| --clients <str> | Limit packet collection to specified client hostnames / IP addresses. |
| --ports <str> | Limit packet collection to specified TCP or UDP ports. |
| --protocols (ip, ip6, arp, tcp, udp) | Limit packet collection to specified protocols. |

Netlogger’s log files are stored by default under /ifs/netlog/<node_name>.

You can also use the WebUI to configure the netlogger parameters under Cluster management > Diagnostics > Netlogger settings:

Be aware that ‘isi diagnostics netlogger’ can consume significant cluster resources. When running the tool on a production cluster, consider its potential impact on the system.

When the command has completed, the capture file(s) are stored under:

# /ifs/netlog/[nodename]

You can also use the following command to incorporate netlogger output files into a gather_info bundle:

# isi_gather_info -n [node#] -f /ifs/netlog

To capture on multiple nodes of the cluster, you can prefix the netlogger command by the versatile isi_for_array utility. For example:

# isi_for_array -s ‘isi diagnostics netlogger --nodelist 2,3 --duration 5 --snaplength 256’

This command creates five-minute incremental capture files on nodes 2 and 3, using a snaplength of 256 bytes, which captures the first 256 bytes of each packet. These five-minute logs are kept for about three days. The naming convention is of the form netlog-<node_name>-<date>-<time>.pcap. For example:

# ls /ifs/netlog/tme_h700-1
netlog-tme_h700-1.2022-09-02_10.31.28.pcap

When using netlogger, set the ‘--snaplength’ option appropriately, depending on the protocol, in order to capture the right amount of detail in the packet headers and/or payload. Or, if you want the entire contents of every packet, use a value of zero (‘--snaplength 0’).

The default snaplength for netlogger is to capture 320 bytes per packet, which is typically sufficient for most protocols.

However, for SMB, a snaplength of 512 is sometimes required. Note that depending on a node’s traffic quantity, a snaplength of 0 (that is: capture the whole packet) can potentially overwhelm the network interface driver.

All the output gets written to files under the /ifs/netlog directory, and the default capture time is ten minutes (‘--duration 10’).

You can apply filters to constrain traffic to/from certain hosts or protocols. For example, to limit output to traffic between client 10.10.10.1 and the cluster node:

# isi diagnostics netlogger --duration 5 --snaplength 256 --clients 10.10.10.1

Or to capture only NFS traffic, filter on port 2049:

# isi diagnostics netlogger --ports 2049
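
Similarly, to capture SMB traffic with the larger snaplength noted above, a command along these lines can be used (the TCP port 445 filter and five-minute duration are illustrative values rather than a prescribed invocation):

# isi diagnostics netlogger --ports 445 --snaplength 512 --duration 5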

Author: Nick Trimbee


Read Full Blog
  • PowerScale
  • OneFS
  • logfiles

OneFS Logfile Collection with isi_gather_info

Nick Trimbee

Sun, 18 Dec 2022 19:11:11 -0000

|

Read Time: 0 minutes

The previous blog outlining the investigation and troubleshooting of OneFS deadlocks and hang-dumps generated several questions about OneFS logfile gathering. So it seemed like a germane topic to explore in an article.

The OneFS ‘isi_gather_info’  utility has long been a cluster staple for collecting and collating context and configuration that primarily aids support in the identification and resolution of bugs and issues. As such, it is arguably OneFS’ primary support tool and, in terms of actual functionality, it performs the following roles:

  1. Executes many commands, scripts, and utilities on the cluster, and saves their results.
  2. Gathers all these files into a single ‘gzipped’ package.
  3. Transmits the gather package back to Dell, using several optional transport methods.

By default, a log gather tarfile is written to the /ifs/data/Isilon_Support/pkg/ directory. It can also be uploaded to Dell using the following means:

| Transport Mechanism | Description | TCP Port |
|---|---|---|
| ESRS | Uses Dell EMC Secure Remote Support (ESRS) for gather upload. | 443/8443 |
| FTP | Use FTP to upload completed gather. | 21 |
| HTTP | Use HTTP to upload gather. | 80/443 |

More specifically, the ‘isi_gather_info’ CLI command syntax includes the following options:

| Option | Description |
|---|---|
| --upload <boolean> | Enable gather upload. |
| --esrs <boolean> | Use ESRS for gather upload. |
| --gather-mode (incremental or full) | Type of gather: incremental, or full. |
| --http-insecure-upload <boolean> | Enable insecure HTTP upload on completed gather. |
| --http-upload-host <string> | HTTP host to use for HTTP upload. |
| --http-upload-path <string> | Path on HTTP server to use for HTTP upload. |
| --http-upload-proxy <string> | Proxy server to use for HTTP upload. |
| --http-upload-proxy-port <integer> | Proxy server port to use for HTTP upload. |
| --clear-http-upload-proxy-port | Clear proxy server port to use for HTTP upload. |
| --ftp-upload <boolean> | Enable FTP upload on completed gather. |
| --ftp-upload-host <string> | FTP host to use for FTP upload. |
| --ftp-upload-path <string> | Path on FTP server to use for FTP upload. |
| --ftp-upload-proxy <string> | Proxy server to use for FTP upload. |
| --ftp-upload-proxy-port <integer> | Proxy server port to use for FTP upload. |
| --clear-ftp-upload-proxy-port | Clear proxy server port to use for FTP upload. |
| --ftp-upload-user <string> | FTP user to use for FTP upload. |
| --ftp-upload-ssl-cert <string> | Specifies the SSL certificate to use in FTPS connection. |
| --ftp-upload-insecure <boolean> | Whether to attempt a plain text FTP upload. |
| --ftp-upload-pass <string> | Password for the FTP upload user. |
| --set-ftp-upload-pass | Specify the FTP upload password interactively. |
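
For example, a purely illustrative invocation (the FTP host, path, and user shown here are placeholders, not defaults to rely on) that runs a full gather and uploads it to an internal FTP server might look like this:

# isi_gather_info --gather-mode full --esrs false --ftp-upload true --ftp-upload-host ftp.example.com --ftp-upload-path /incoming --ftp-upload-user gatheruser --set-ftp-upload-pass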

When the gather arrives at Dell, it is automatically unpacked by a support process and analyzed using the ‘logviewer’ tool.

Under the hood, there are two principal components responsible for running a gather. These are:

| Component | Description |
|---|---|
| Overlord | The manager process, triggered by the user, which oversees all the isi_gather_info tasks that are executed on a single node. |
| Minion | The worker process, which runs a series of commands (specified by the overlord) on a specific node. |

The ‘isi_gather_info’ utility is primarily written in Python, with its configuration under the purview of MCP, and RPC services provided by the isi_rpc_d daemon.

For example:

# isi_gather_info&
# ps -auxw | grep -i gather
root   91620    4.4  0.1 125024  79028  1  I+   16:23        0:02.12 python /usr/bin/isi_gather_info (python3.8)
root   91629    3.2  0.0  91020  39728  -  S    16:23        0:01.89 isi_rpc_d: isi.gather.minion.minion.GatherManager (isi_rpc_d)
root   93231    0.0  0.0  11148   2692  0  D+   16:23        0:00.01 grep -i gather

The overlord uses isi_rdo (the OneFS remote command execution daemon) to start up the minion processes and informs them of the commands to be executed by an ephemeral XML file, typically stored at /ifs/.ifsvar/run/<uuid>-gather_commands.xml. The minion then spins up an executor and a command for each entry in the XML file.

The parallel process executor (the default) acts as a pool, launching commands until the specified degree of parallelism is reached. The commands themselves take care of running and processing their results, checking frequently to ensure that the timeout threshold has not been passed.

The executor also keeps track of which commands are currently running, and how many are complete, and writes them to a file so that the overlord process can display useful information. When this is complete, the executor returns the runtime information to the minion, which records the benchmark file. The executor will also safely shut itself down if the isi_gather_info lock file disappears, such as if the isi_gather_info process is killed.

During a gather, the minion returns nothing to the overlord process, because the output of its work is written to disk.

Architecturally, the ‘gather’ process comprises an eight phase workflow:

 

The details of each phase are as follows:

| Phase | Description |
|---|---|
| 1. Setup | Reads from the arguments passed in, and from any config files on disk, and sets up the config dictionary, which will be used throughout the rest of the codebase. Most of the code for this step is contained in isilon/lib/python/gather/igi_config/configuration.py. This is also the step where the program is most likely to exit, if some config arguments end up being invalid. |
| 2. Run local | Executes all the cluster commands, which are run on the same node that is starting the gather. All these commands run in parallel (up to the current parallelism value). This is typically the second longest running phase. |
| 3. Run nodes | Executes the node commands across all of the cluster’s nodes. This runs on each node, and while these commands run in parallel (up to the current parallelism value), they do not run in parallel with the local step. |
| 4. Collect | Ensures that all results end up on the overlord node (the node that started gather). If gather is using /ifs, it is very fast, but if it’s not, it needs to SCP all the node results to a single node. |
| 5. Generate Extra Files | Generates nodes_info and package_info.xml. These two files are present in every single gather, and tell us some important metadata about the cluster. |
| 6. Packing | Packs (tars and gzips) all the results. This is typically the longest running phase, often by an order of magnitude. |
| 7. Upload | Transports the tarfile package to its specified destination. Depending on the geographic location, this phase might also be lengthy. |
| 8. Cleanup | Cleans up any intermediary files that were created on the cluster. This phase will run even if gather fails or is interrupted. |

Because the isi_gather_info tool is primarily intended for troubleshooting clusters with issues, it runs as root (or compadmin in compliance mode), because it needs to be able to execute under degraded conditions (that is, without GMP, during upgrade, and under cluster splits, and so on). Given these atypical requirements, isi_gather_info is built as a stand-alone utility, rather than using the platform API for data collection.

The time it takes to complete a gather is typically determined by cluster configuration, rather than size. For example, a gather on a small cluster with a large number of NFS shares will take significantly longer than on a large cluster with a similar NFS configuration. Incremental gathers are not recommended, because the base that’s required to check against in the log store may be deleted. By default, gathers only persist for two weeks in the log processor.

On completion of a gather, a tar’d and zipped logset is generated and placed under the cluster’s /ifs/data/Isilon_Support/pkg directory by default. A standard gather tarfile unpacks to the following top-level structure:

# du -sh *
536M    IsilonLogs-powerscale-f900-cl1-20220816-172533-3983fba9-3fdc-446c-8d4b-21392d2c425d.tgz
320K    benchmark
 24K    celog_events.xml
 24K    command_line
128K    complete
449M    local
 24K    local.log
 24K    nodes_info
 24K    overlord.log
 83M    powerscale-f900-cl1-1
 24K    powerscale-f900-cl1-1.log
119M    powerscale-f900-cl1-2
 24K    powerscale-f900-cl1-2.log
134M    powerscale-f900-cl1-3
 24K    powerscale-f900-cl1-3.log

In this case, for a three node F900 cluster, the compressed tarfile is 536 MB in size. The bulk of the data, which is primarily CLI command output, logs, and sysctl output, is contained in the ‘local’ and individual node directories (powerscale-f900-cl1-*). Each node directory contains a tarfile, varlog.tar, containing all the pertinent logfiles for that node.
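
To examine a gather offline, the bundle and a node’s varlog archive can be unpacked with standard tools. For instance, using the example filenames above (the exact tarfile name and node directory paths will differ for each cluster and gather):

# tar -zxvf IsilonLogs-powerscale-f900-cl1-20220816-172533-3983fba9-3fdc-446c-8d4b-21392d2c425d.tgz
# tar -xvf powerscale-f900-cl1-1/varlog.tar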

The root directory of the tarfile includes the following:

| Item | Description |
|---|---|
| benchmark | Runtimes for all commands executed by the gather. |
| celog_events.xml | Info about the customer (name, phone, email, and so on), plus significant details about the cluster and individual nodes, including cluster/node names, node serial numbers, configuration ID, OneFS version info, and events. |
| complete | Lists of completed commands run across the cluster and on individual nodes. |
| local | See below. |
| nodes_info | General information about the nodes, including the node ID, the IP address, the node name, and the logical node number. |
| overlord.log | Gather execution and issue log. |
| package_info.xml | Cluster version details, GUID, S/N, and customer info (name, phone, email, and so on). |
| command_line | Syntax of the gather commands run. |

Notable contents of the ‘local’ directory (all the cluster-wide commands that are executed on the node running the gather) include:

| Local Contents Item | Description |
|---|---|
| isi_alerts_history | A list of all alerts that have ever occurred on the cluster, including the event ID (comprising the number of the initiating node and the event number), the times the alert was issued and resolved, the severity, the logical node number of the node(s) to which the alert applies, and the alert message. |
| isi_job_list | Information about Job Engine processes, including job names, enabled status, priority policy, and descriptions. |
| isi_job_schedule | A schedule of when Job Engine processes run, including the job name, the schedule for a job, and the next time that a run of the job will occur. |
| isi_license | The current license status of all of the modules. |
| isi_network_interfaces | State and configuration of all the cluster’s network interfaces. |
| isi_nfs_exports | Configuration detail for all the cluster’s NFS exports. |
| isi_services | Listing of all the OneFS services and whether they are enabled or disabled. More detailed configuration for each service is contained in separate files. For example, for SnapshotIQ: snapshot_list, snapshot_schedule, snapshot_settings, snapshot_usage, and writable_snapshot_list. |
| isi_smb | Detailed configuration info for all the cluster’s SMB shares. |
| isi_stat | Overall status of the cluster, including networks, drives, and so on. |
| isi_statistics | CPU, protocol, and disk IO stats. |

Contents of each node directory include:

| Node Contents Item | Description |
|---|---|
| df | Output of the df command. |
| du | Output of the du command. Note that it runs ‘du -h’, which reports capacity in human-readable form but makes the output harder to sort. |
| isi_alerts | A list of outstanding alerts on the node. |
| ps and ps_full | Lists of all running processes at the time that isi_gather_info was executed. |

As the isi_gather_info command runs, status is provided in the interactive CLI session:

# isi_gather_info
Configuring
    COMPLETE
running local commands
    IN PROGRESS \
Progress of local
[########################################################  ]
147/152 files written  \
Some active commands are: ifsvar_modules_jobengine_cp, isi_statistics_heat, ifsv
ar_modules

When the gather has completed, the location of the tarfile on the cluster itself is reported as follows:

# isi_gather_info
Configuring
    COMPLETE
running local commands
    COMPLETE
running node commands
    COMPLETE
collecting files
    COMPLETE
generating package_info.xml
    COMPLETE
tarring gather
    COMPLETE
uploading gather
    COMPLETE

The path to the tar-ed gather is:

/ifs/data/Isilon_Support/pkg/IsilonLogs-h5001-20220830-122839-23af1154-779c-41e9-b0bd-d10a026c9214.tgz

If the gather upload services are unavailable, errors are displayed on the console, as shown here:

…
uploading gather
    FAILED
        ESRS failed - ESRS has not been provisioned
        FTP failed - pycurl error: (28, 'Failed to connect to ftp.isilon.com port 21 after 81630 ms: Operation timed out')

Author: Nick Trimbee

Read Full Blog
  • networking
  • PowerScale
  • OneFS
  • clusters

OneFS Hardware Network Considerations

Nick Trimbee

Wed, 07 Dec 2022 20:54:43 -0000

|

Read Time: 0 minutes

As we’ve seen in prior articles in this series, OneFS and the PowerScale platforms support a variety of Ethernet speeds, cable and connector styles, and network interface counts, depending on the node type selected. However, unlike the back-end network, Dell Technologies does not specify particular front-end switch models, allowing PowerScale clusters to seamlessly integrate into the data link layer (layer 2) of an organization’s existing Ethernet IP network infrastructure. For example:

 

A layer 2 looped topology, as shown here, extends VLANs between the distribution/aggregation switches, with spanning tree protocol (STP) preventing network loops by shutting down redundant paths. The access layer uplinks can be used to load balance VLANs. This distributed architecture allows the cluster’s external network to connect to multiple access switches, affording each node similar levels of availability, performance, and management properties.

Link aggregation can be used to combine multiple Ethernet interfaces into a single link-layer interface, and is implemented between a single switch and a PowerScale node, where transparent failover or switch port redundancy is required. Link aggregation assumes that all links are full duplex, point to point, and at the same data rate, providing graceful recovery from link failures. If a link fails, traffic is automatically sent to the next available link without disruption.
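
As a hedged illustration (the pool name is hypothetical, and the available modes depend on the OneFS release), the aggregation mode for a pool built on aggregated interfaces can be set from the CLI:

# isi network pools modify groupnet0.subnet0.pool0 --aggregation-mode lacp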

Quality of service (QoS) can be implemented through differentiated services code point (DSCP), by specifying a value in the packet header that maps to an ‘effort level’ for traffic. Because OneFS does not provide an option for tagging packets with a specified DSCP marking, the recommended practice is to configure the first hop ports to insert DSCP values on the access switches connected to the PowerScale nodes. OneFS does however retain headers for packets that already have a specified DSCP value.

When designing a cluster, the recommendation is that each node have at least one front-end interface configured, preferably in at least one static SmartConnect zone. Although a cluster can be run in a ‘not all nodes on the network’ (NANON) configuration, where feasible, the recommendation is to connect all nodes to the front-end network(s). Additionally, cluster services such as SNMP, ESRS, ICAP, and auth providers (AD, LDAP, NIS, and so on) prefer that each node have an address that can reach the external servers.

In contrast with scale-up NAS platforms that use separate network interfaces for out-of-band management and configuration, OneFS traditionally performs all cluster network management in-band. However, PowerScale nodes typically contain a dedicated 1Gb Ethernet port that can be configured as a management network through IPMI or iDRAC, simplifying administration of a large cluster. OneFS also supports using a node’s serial port as an RS-232 out-of-band management interface. This practice is highly recommended for large clusters. Serial connectivity can provide reliable BIOS-level command line access for on-site or remote service staff to perform maintenance, troubleshooting, and installation operations.

SmartConnect provides a configurable allocation method for each IP address pool:

| Allocation Method | Attributes |
|---|---|
| Static | One IP per interface is assigned; will likely require fewer IPs to meet minimum requirements. No failover of IPs to other interfaces. |
| Dynamic | Multiple IPs per interface are assigned; will require more IPs to meet minimum requirements. Failover of IPs to other interfaces; failback policies are needed. |

The default ‘static’ allocation assigns a single persistent IP address to each interface selected in the pool, leaving additional pool IP addresses unassigned if the number of addresses exceeds the total interfaces.

The lowest IP address of the pool is assigned to the lowest Logical Node Number (LNN) from the selected interfaces. The same is true for the second-lowest IP address and LNN, and so on. If a node or interface becomes unavailable, this IP address does not move to another node or interface. Also, when the node or interface becomes unavailable, it is removed from the SmartConnect zone, and new connections will not be assigned to the node. When the node is available again, SmartConnect automatically adds it back into the zone and assigns new connections.

By contrast, ‘dynamic’ allocation divides all available IP addresses in the pool across all selected interfaces. OneFS attempts to assign the IP addresses as evenly as possible. However, if the interface-to-IP address ratio is not an integer value, a single interface might have more IP addresses than another. As such, wherever possible, ensure that all the interfaces have the same number of IP addresses.
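
For instance, a sketch of switching a hypothetical pool from the default static allocation to dynamic (assuming the ‘--alloc-method’ option in the installed OneFS release):

# isi network pools modify groupnet0.subnet0.pool0 --alloc-method dynamic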

In concert with dynamic allocation, dynamic failover provides high availability by transparently migrating IP addresses to another node when an interface is not available. If a node becomes unavailable, all the IP addresses it was hosting are reallocated across the new set of available nodes in accordance with the configured failover load-balancing policy. The default IP address failover policy is round robin, which evenly distributes IP addresses from the unavailable node across available nodes. Because the IP address remains consistent, irrespective of the node on which it resides, failover to the client is transparent, so high availability is seamless.

The other available IP address failover policies are the same as the initial client connection balancing policies, that is, connection count, throughput, or CPU usage. In most scenarios, round robin is not only the best option but also the most common. However, the other failover policies are available for specific workflows.

The decision on whether to implement dynamic failover depends on the protocol(s) being used, general workflow attributes, and any high-availability design requirements:

| Protocol | State | Suggested Allocation Strategy |
|---|---|---|
| NFSv3 | Stateless | Dynamic |
| NFSv4 | Stateful | Dynamic or Static, depending on mount daemon, OneFS version, and Kerberos. |
| SMB | Stateful | Dynamic or Static |
| SMB Multi-channel | Stateful | Dynamic or Static |
| S3 | Stateless | Dynamic or Static |
| HDFS | Stateful | Dynamic or Static. HDFS uses separate name-node and data-node connections. Allocation strategy depends on the need for data locality and/or multi-protocol, that is: HDFS + NFSv3: Dynamic pool; HDFS + SMB: Static pool. |
| HTTP | Stateless | Static |
| FTP | Stateful | Static |
| SyncIQ | Stateful | Static required |

Assigning each workload or data store to a unique IP address enables OneFS SmartConnect to move each workload to one of the other interfaces. This minimizes the additional work that a remaining node in the SmartConnect pool must absorb and ensures that the workload is evenly distributed across all the other nodes in the pool.

Static IP pools require one IP address for each logical interface within the pool. Because each node provides two interfaces for external networking, if link aggregation is not configured, this would require 2*N IP addresses for a static pool.

Determining the number of IP addresses within a dynamic allocation pool varies depending on the workflow, node count, and the estimated number of clients that would be in a failover event. While dynamic pools need, at a minimum, the number of IP addresses to match a pool’s node count, the ‘N * (N – 1)’ formula can often prove useful for calculating the required number of IP addresses for smaller pools. In this equation, N is the number of nodes that will participate in the pool.

For example, a SmartConnect pool with four-node interfaces, using the ‘N * (N – 1)’ model will result in three unique IP addresses being allocated to each node. A failure on one node interface will cause each of that interface’s three IP addresses to fail over to a different node in the pool. This ensures that each of the three active interfaces remaining in the pool receives one IP address from the failed node interface. If client connections to that node are evenly balanced across its three IP addresses, SmartConnect will evenly distribute the workloads to the remaining pool members. For larger clusters, this formula may not be feasible due to the sheer number of IP addresses required.

Enabling jumbo frames (Maximum Transmission Unit set to 9000 bytes) typically yields improved throughput and slightly lower CPU usage compared with standard frames, where the MTU is set to 1500 bytes. For example, with 40 Gb Ethernet connections, jumbo frames provide about five percent better throughput and about one percent less CPU usage.
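
As an illustrative sketch (the subnet name is hypothetical), the MTU is configured per subnet in OneFS, and the connected switch ports must also be set to a matching value end to end:

# isi network subnets modify groupnet0.subnet0 --mtu 9000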

OneFS provides the ability to optimize storage performance by designating zones to support specific workloads or subsets of clients. Different network traffic types can be segregated on separate subnets using SmartConnect pools.

For large clusters, partitioning the cluster’s networking resources and allocating bandwidth to each workload can help minimize the likelihood that heavy traffic from one workload will affect network throughput for another. This is particularly true for SyncIQ replication and NDMP backup traffic, which can frequently benefit from its own set of interfaces, separate from user and client IO load.

The ‘groupnet’ networking object is part of OneFS’ support for multi-tenancy. Groupnets sit above subnets and pools and allow separate Access Zones to contain distinct DNS settings.

The management and data network(s) can then be incorporated into different Access Zones, each with their own DNS, directory access services, and routing, as appropriate.
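
For example, a minimal sketch of creating an additional groupnet with its own DNS configuration (the groupnet name, DNS server addresses, and search domain are hypothetical):

# isi network groupnets create groupnet1 --dns-servers 10.1.1.10,10.1.1.11 --dns-search tenant1.example.com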

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • clusters

OneFS Hardware Platform Considerations

Nick Trimbee

Wed, 07 Dec 2022 20:42:17 -0000

|

Read Time: 0 minutes

A key decision for performance, particularly in a large cluster environment, is the type and quantity of nodes deployed. Heterogeneous clusters can be architected with a wide variety of node styles and capacities, to meet the needs of a varied data set and a wide spectrum of workloads. These node styles encompass several hardware generations, and fall loosely into three main categories or tiers. While heterogeneous clusters can easily include many hardware classes and configurations, the best practice of simplicity for building clusters holds true here too.

Consider the physical cluster layout and environmental factors, particularly when designing and planning a large cluster installation. These factors include:

  • Redundant power supply
  • Airflow and cooling
  • Rackspace requirements
  • Floor tile weight constraints
  • Networking requirements
  • Cabling distance limitations

The following table details the physical dimensions, weight, power draw, and thermal properties for the range of PowerScale F-series all-flash nodes:

| Model | Tier | Height | Width | Depth | RU | Weight | Max Watts | Watts | Max BTU | Normal BTU |
|---|---|---|---|---|---|---|---|---|---|---|
| F900 | All-flash NVMe performance | 2U (2 x 1.75 IN) | 17.8 IN / 45 cm | 31.8 IN / 85.9 cm | 2RU | 73 lbs | 1297 | 859 | 4425 | 2931 |
| F600 | All-flash NVMe performance | 1U (1.75 IN) | 17.8 IN / 45 cm | 31.8 IN / 85.9 cm | 1RU | 43 lbs | 718 | 467 | 2450 | 1594 |
| F200 | All-flash performance | 1U (1.75 IN) | 17.8 IN / 45 cm | 31.8 IN / 85.9 cm | 1RU | 47 lbs | 395 | 239 | 1346 | 816 |

Note that the table above represents individual nodes. A minimum of three similar nodes are required for a node pool.

Similarly, the following table details the physical dimensions, weight, power draw, and thermal properties for the range of PowerScale chassis-based platforms:

| Model | Tier | Height | Width | Depth | RU | Weight | Max Watts | Watts | Max BTU | Normal BTU |
|---|---|---|---|---|---|---|---|---|---|---|
| F800/F810 | All-flash performance | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 169 lbs (77 kg) | 1764 | 1300 | 6019 | 4436 |
| H700 | Hybrid/Utility | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 261 lbs (100 kg) | 1920 | 1528 | 6551 | 5214 |
| H7000 | Hybrid/Utility | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 39 IN / 99.06 cm | 4RU | 312 lbs (129 kg) | 2080 | 1688 | 7087 | 5760 |
| H600 | Hybrid/Utility | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 213 lbs (97 kg) | 1990 | 1704 | 6790 | 5816 |
| H5600 | Hybrid/Utility | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 39 IN / 99.06 cm | 4RU | 285 lbs (129 kg) | 1906 | 1312 | 6504 | 4476 |
| H500 | Hybrid/Utility | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 248 lbs (112 kg) | 1906 | 1312 | 6504 | 4476 |
| H400 | Hybrid/Utility | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 242 lbs (110 kg) | 1558 | 1112 | 5316 | 3788 |
| A300 | Archive | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 252 lbs (100 kg) | 1460 | 1070 | 4982 | 3651 |
| A3000 | Archive | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 39 IN / 99.06 cm | 4RU | 303 lbs (129 kg) | 1620 | 1230 | 5528 | 4197 |
| A200 | Archive | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 35 IN / 88.9 cm | 4RU | 219 lbs (100 kg) | 1460 | 1052 | 4982 | 3584 |
| A2000 | Archive | 4U (4 x 1.75 IN) | 17.6 IN / 45 cm | 39 IN / 99.06 cm | 4RU | 285 lbs (129 kg) | 1520 | 1110 | 5186 | 3788 |

Note that this table represents 4RU chassis, each of which contains four PowerScale platform nodes (the minimum node pool size).

The following figure shows the locations of both the front-end (ext-1 & ext-2) and back-end (int-1 & int-2) network interfaces on the PowerScale stand-alone F-series and chassis-based nodes:

 

A PowerScale cluster’s back-end network is analogous to a distributed systems bus. Each node has two back-end interfaces for redundancy that run in an active/passive configuration (int-1 and int-2 above). The primary interface is connected to the primary switch; the secondary interface is connected to a separate switch.

For nodes using 40/100 Gb or 25/10 Gb Ethernet or InfiniBand connected with multimode fiber, the maximum cable length is 150 meters. This allows a cluster to span multiple rack rows, floors, and even buildings, if necessary. While this can solve floor space challenges, in order to perform any physical administration activity on nodes, you must know where the equipment is located.

The following table shows the various PowerScale node types and their respective back-end network support. While Ethernet is the preferred medium – particularly for large PowerScale clusters – InfiniBand is also supported for compatibility with legacy Isilon clusters.

| Node Models | Details |
|---|---|
| F200, F600, F900 | F200: nodes support a 10 GbE or 25 GbE connection to the access switch using the same NIC. A breakout cable can connect up to four nodes to a single switch port. F600: nodes support a 40 GbE or 100 GbE connection to the access switch using the same NIC. F900: nodes support a 40 GbE or 100 GbE connection to the access switch using the same NIC. |
| H700, H7000, A300, A3000 | Support a 40 GbE or 100 GbE connection to the access switch using the same NIC, or a 25 GbE or 10 GbE connection to the leaf using the same NIC. A breakout cable can connect a 40 GbE switch port to four 10 GbE nodes or a 100 GbE switch port to four 25 GbE nodes. |
| F810, F800, H600, H500, H5600 | Performance nodes support a 40 GbE connection to the access switch. |
| A200, A2000, H400 | Archive nodes support a 10 GbE connection to the access switch using a breakout cable. A breakout cable can connect a 40 GbE switch port to four 10 GbE nodes or a 100 GbE switch port to four 10 GbE nodes. |

Currently only Dell Technologies approved switches are supported for back-end Ethernet and IB cluster interconnection. These include:

| Switch Model | Port Count | Port Speed | Height (Rack Units) | Role | Notes |
|---|---|---|---|---|---|
| Dell S4112 | 24 | 10GbE | ½ | ToR | 10 GbE only. |
| Dell 4148 | 48 | 10GbE | 1 | ToR | 10 GbE only. |
| Dell S5232 | 32 | 100GbE | 1 | Leaf or Spine | Supports 4x10GbE or 4x25GbE breakout cables. Total of 124 10GbE or 25GbE nodes as top-of-rack back-end switch. Port 32 does not support breakout. |
| Dell Z9100 | 32 | 100GbE | 1 | Leaf or Spine | Supports 4x10GbE or 4x25GbE breakout cables. Total of 128 10GbE or 25GbE nodes as top-of-rack back-end switch. |
| Dell Z9264 | 64 | 100GbE | 2 | Leaf or Spine | Supports 4x10GbE or 4x25GbE breakout cables. Total of 128 10GbE or 25GbE nodes as top-of-rack back-end switch. |
| Arista 7304 | 128 | 40GbE | 8 | Enterprise core | 40GbE or 10GbE line cards. |
| Arista 7308 | 256 | 40GbE | 13 | Enterprise/large cluster | 40GbE or 10GbE line cards. |
| Mellanox Neptune MSX6790 | 36 | QDR | 1 | IB fabric | 32Gb/s quad data rate InfiniBand. |

Be aware that the use of patch panels is not supported for PowerScale cluster back-end connections, regardless of overall cable lengths. All connections must be a single link, single cable directly between the node and back-end switch. Also, Ethernet and InfiniBand switches must not be reconfigured or used for any traffic beyond a single cluster.

Support for leaf spine back-end Ethernet network topologies was first introduced in OneFS 8.2. In a leaf-spine network switch architecture, the PowerScale nodes connect to leaf switches at the access, or leaf, layer of the network. At the next level, the aggregation and core network layers are condensed into a single spine layer. Each leaf switch connects to each spine switch to ensure that all leaf switches are no more than one hop away from one another. For example:

Leaf-to-spine switch connections require even distribution, to ensure the same number of spine connections from each leaf switch. This helps minimize latency and reduces the likelihood of bottlenecks in the back-end network. By design, a leaf spine network architecture is both highly scalable and redundant.

Leaf spine network deployments can have a minimum of two leaf switches and one spine switch. For small to medium clusters in a single rack, the back-end network typically uses two redundant top-of-rack (ToR) switches, rather than implementing a more complex leaf-spine topology.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • clusters
  • cabling

OneFS Hardware Installation Considerations

Nick Trimbee

Wed, 07 Dec 2022 20:29:30 -0000

|

Read Time: 0 minutes

When it comes to physically installing PowerScale nodes, most use a 35 inch depth chassis and will fit in a standard depth data center cabinet. Nodes can be secured to standard storage racks with their sliding rail kits, included in all node packaging and compatible with racks using either 3/8 inch square holes, 9/32 inch round holes, or 10-32 / 12-24 / M5X.8 / M6X1 pre-threaded holes. These supplied rail kit mounting brackets are adjustable in length, from 24 inches to 36 inches, to accommodate different rack depths. When selecting an enclosure for PowerScale nodes, ensure that the rack supports the minimum and maximum rail kit sizes.

 

| Rack Component | Description |
|---|---|
| a | Distance between the front surface of the rack and the front NEMA rail |
| b | Distance between NEMA rails, minimum = 24 in (609.6 mm), maximum = 34 in (863.6 mm) |
| c | Distance between the rear of the chassis and the rear of the rack, min = 2.3 in (58.42 mm) |
| d | Distance between the inner front of the front door and the NEMA rail, min = 2.5 in (63.5 mm) |
| e | Distance between the inside of the rear post and the rear vertical edge of the chassis and rails, min = 2.5 in (63.5 mm) |
| f | Width of the rear rack post |
| g | 19 in (482.6 mm) + (2e), min = 24 in (609.6 mm) |
| h | 19 in (482.6 mm) NEMA + (2e) + (2f). Note: width of the PDU + 0.5 in (13 mm) <= e + f. If j = i + c + PDU depth + 3 in (76.2 mm), then h = min 23.6 in (600 mm), assuming the PDU is mounted beyond i + c. |
| i | Chassis depth: normal chassis = 35.80 in (909 mm); deep chassis = 40.40 in (1026 mm). Switch depth is measured from the front NEMA rail (the inner rail is fixed at 36.25 in / 921 mm). Allow up to 6 in (155 mm) for cable bend radius when routing up to 32 cables to one side of the rack. Select the greater of the installed equipment. |
| j | Minimum rack depth = i + c |
| k | Front |
| l | Rear |
| m | Front door |
| n | Rear door |
| p | Rack post |
| q | PDU |
| r | NEMA |
| s | NEMA 19 inch |
| t | Rack top view |
| u | Distance from front NEMA rail to chassis face: Dell PowerScale deep and normal chassis = 0 in |

However, the high-capacity models, such as the F800/810, H7000, H5600, A3000 and A2000, have 40 inch depth chassis and require extended depth cabinets, such as the APC 3350 or Dell Titan-HD rack.

Additional room must be provided for opening the FRU service trays at the rear of the nodes and, in the chassis-based 4RU platforms, the disk sleds at the front of the chassis. Except for the 2RU F900, the stand-alone PowerScale all-flash nodes are 1RU in height (including the 1RU diskless P100 accelerator and B100 backup accelerator nodes).

Power-wise, each cabinet typically requires between two and six independent single or three-phase power sources. To determine the specific requirements, use the published technical specifications and device rating labels for the devices to calculate the total current draw for each rack.

| Specification | North American 3 wire connection (2 L and 1 G) | International 3 wire connection (1 L, 1 N, and 1 G) |
|---|---|---|
| Input nominal voltage | 200–240 V ac +/- 10% L – L nom | 220–240 V ac +/- 10% L – L nom |
| Frequency | 50–60 Hz | 50–60 Hz |
| Circuit breakers | 30 A | 32 A |
| Power zones | Two | Two |
| Power requirements at site (minimum to maximum) | Single-phase: six 30A drops, two per zone. Three-phase Delta: two 50A drops, one per zone. Three-phase Wye: two 32A drops, one per zone. | Single-phase: six 30A drops, two per zone. Three-phase Delta: two 50A drops, one per zone. Three-phase Wye: two 32A drops, one per zone. |

Additionally, the recommended environmental conditions to support optimal PowerScale cluster operation are as follows:

| Attribute | Details |
|---|---|
| Temperature | Operate at >=90 percent of the time between 10 degrees Celsius and 35 degrees Celsius, and <=10 percent of the time between 5 degrees Celsius and 40 degrees Celsius. |
| Humidity | 40 to 55 percent relative humidity. |
| Weight | A fully configured cabinet must sit on at least two floor tiles, and can weigh approximately 1588 kilograms (3500 pounds). |
| Altitude | 0 meters to 2439 meters (0 to 8,000 ft) above sea level operating altitude. |

Weight is a critical factor to keep in mind, particularly with the chassis-based nodes. Individual 4RU chassis can weigh up to around 300 lbs each, and the maximum floor tile capacity for each individual cabinet or rack must be kept in mind. For the deep node styles (H7000, H5600, A3000 and A2000), the considerable node weight may prevent racks from being fully populated with PowerScale equipment. If the cluster uses a variety of node types, installing the larger, heavier nodes at the bottom of each rack and the lighter chassis at the top can help distribute weight evenly across the cluster racks’ floor tiles.

Note that there are no lift handles on the PowerScale 4RU chassis. However, the drive sleds can be removed to provide handling points if no lift is available. With all the drive sleds removed, but leaving the rear compute modules inserted, the chassis weight drops to a more manageable 115 lbs or so. It is strongly recommended to use a lift for installation of 4RU chassis.

Cluster back-end switches ship with the appropriate rails (or tray) for proper installation of the switch in the rack. These rail kits are adjustable to fit NEMA front rail to rear rail spacing ranging from 22 in to 34 in.

Note that some manufacturers’ Ethernet switch rails are designed to overhang the rear NEMA rails, helping to align the switch with the PowerScale chassis at the rear of the rack. These require a minimum clearance of 36 in from the front NEMA rail to the rear of the rack, in order to ensure that the rack door can be closed.

Consider the following large cluster topology, for example:

This contiguous rack architecture is designed to scale up to the current maximum PowerScale cluster size of 252 nodes, in 63 4RU chassis, across nine racks as the environment grows – while still keeping cable management relatively simple. Note that this configuration assumes 1RU per node. If you are using F900 nodes, which are 2RU in size, be sure to budget for additional rack capacity.

Successful large cluster infrastructures depend on the proficiency of the installer and their optimizations for maintenance and future expansion. Some good data center design practices include:

  • Pre-allocating and reserving adjacent racks in the same isle to accommodate the anticipated future cluster expansion
  • Reserving an empty ‘mailbox’ slot in the top half of each rack for any pass-through cable management needs
  • Dedicating one of the racks in the group for the back-end and front-end distribution/spine switches – in this case rack R3

For Hadoop workloads, PowerScale clusters are compatible with the rack awareness feature of HDFS to provide balancing in the placement of data. Rack locality keeps the data flow internal to the rack.

Excess cabling can be neatly stored in 12” service coils on a cable tray above the rack, if available, or at the side of the rack as illustrated below.

The use of intelligent power distribution units (PDUs) within each rack can facilitate the remote power cycling of nodes, if desired.

For deep nodes such as the H7000 and A3000 hardware, where chassis depth can be a limiting factor, horizontally mounted PDUs within the rack can be used in place of vertical PDUs, if necessary. If front-mounted, partial depth Ethernet switches are deployed, you can install horizontal PDUs in the rear of the rack directly behind the switches to maximize available rack capacity.

With copper cables (such as SFP+, QSFP, CX4), the maximum cable length is typically limited to 10 meters or less. After factoring in for dressing the cables to maintain some level of organization and proximity within the racks and cable trays, all the racks with PowerScale nodes need to be near each other – either in the same rack row or close by in an adjacent row – or adopt a leaf-spine topology, with leaf switches in each rack.

If greater physical distance between nodes is required, support for multimode fiber (QSFP+, MPO, LC, etc) extends the cable length limitation to 150 meters. This allows nodes to be housed on separate floors or on the far side of a floor in a datacenter if necessary. While solving the floor space problem, this does have the potential to introduce new administrative and management challenges.

The following table lists the various cable types, form factors, and supported lengths available for PowerScale nodes:

| Cable Form Factor | Medium | Speed (Gb/s) | Max Length |
|---|---|---|---|
| QSFP28 | Optical | 100Gb | 30M |
| MPO | Optical | 100/40Gb | 150M |
| QSFP28 | Copper | 100Gb | 5M |
| QSFP+ | Optical | 40Gb | 10M |
| LC | Optical | 25/10Gb | 150M |
| QSFP+ | Copper | 40Gb | 5M |
| SFP28 | Copper | 25Gb | 5M |
| SFP+ | Copper | 10Gb | 7M |
| CX4 | Copper | IB QDR/DDR | 10M |

The connector types for the cables above can be identified as follows:

As for the nodes themselves, the following rear views indicate the locations of the various network interfaces:

Note that Int-a and int-b indicate the primary and secondary back-end networks, whereas Ext-1 and Ext-2 are the front-end client networks interfaces.

Be aware that damage to the InfiniBand or Ethernet cables (copper or optical fiber) can negatively affect cluster performance. Never bend cables beyond the recommended bend radius, which is typically 10–12 times the diameter of the cable. For example, if a cable is 1.6 inches, round up to 2 inches and multiply by 10 for an acceptable bend radius.

Cables differ, so follow the explicit recommendations of the cable manufacturer.

The most important design attribute for bend radius consideration is the minimum mated cable clearance (Mmcc). Mmcc is the distance from the bulkhead of the chassis through the mated connectors/strain relief including the depth of the associated 90 degree bend. Multimode fiber has many modes of light (fiber optic) traveling through the core. As each of these modes moves closer to the edge of the core, light and the signal are more likely to be reduced, especially if the cable is bent. In a traditional multimode cable, as the bend radius is decreased, the amount of light that leaks out of the core increases, and the signal decreases. Best practices for data cabling include:

  • Keep cables away from sharp edges or metal corners.
  • Avoid bundling network cables with power cables. If network and power cables are not bundled separately, electromagnetic interference (EMI) can affect the data stream.
  • When bundling cables, do not pinch or constrict the cables.
  • Avoid using zip ties to bundle cables. Instead use Velcro hook-and-loop ties that do not have hard edges, and can be removed without cutting. Fastening cables with Velcro ties also reduces the impact of gravity on the bend radius.

Note that the effects of gravity can also decrease the bend radius and result in degradation of signal power and quality.

Cables, particularly when bundled, can also obstruct the movement of conditioned air around the cluster, and cables should be secured away from fans. Flooring seals and grommets can be useful to keep conditioned air from escaping through cable holes. Also ensure that smaller Ethernet switches are drawing cool air from the front of the rack, not from inside the cabinet. This can be achieved either with switch placement or by using rack shelving.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • clusters
  • cooling
  • cabling

OneFS Hardware Environmental and Logistical Considerations

Nick Trimbee

Wed, 07 Dec 2022 17:28:21 -0000

|

Read Time: 0 minutes

In this article, we turn our attention to some of the environmental and logistical aspects of cluster design, installation, and management.

In addition to available rack space and physical proximity of nodes, provision needs to be made for adequate power and cooling as the cluster expands. New generations of drives and nodes typically deliver increased storage density, which often magnifies the power draw and cooling requirements per rack unit.

The recommendation is for a large cluster’s power supply to be fully redundant and backed up with a battery UPS and/or power generator. In the worst case, if a cluster does lose power, the nodes are protected internally by filesystem journals, which preserve any in-flight uncommitted writes. However, the time to restore power and bring up a large cluster from an unclean shutdown can be considerable.

Like most data center equipment, the cooling fans in PowerScale nodes and switches pull air from the front to the back of the chassis. To complement this, data centers often employ a hot aisle/cold aisle rack configuration, where cool, low humidity air is supplied in the aisle at the front of each rack or cabinet, either at the floor or ceiling level, and warm exhaust air is returned at ceiling level in the aisle to the rear of each rack.

Given the significant power draw, heat density, and weight of cluster hardware, some datacenters are limited in the number of nodes each rack can support. For partially filled racks, the use of blank panels to cover the front and rear of any unfilled rack units can help to efficiently direct airflow through the equipment.

The table below shows the various front and back-end network speeds and connector form factors across the PowerScale storage node portfolio.

| Speed (Gb/s) | Form Factor | Front-end / Back-end | Node Models |
|---|---|---|---|
| 100/40 | QSFP28 | Back-end | F900, F600, H700, H7000, A300, A3000, P100, B100 |
| 40 / QDR | QSFP+ | Back-end | F800, F810, H600, H5600, H500, H400, A200, A2000 |
| 25/10 | SFP28 | Back-end | F900, F600, F200, H700, H7000, A300, A3000, P100, B100 |
| 10 / QDR | QSFP+ | Back-end | H400, A200, A2000 |
| 100/40 | QSFP28 | Front-end | F900, F600, H700, H7000, A300, A3000, P100, B100 |
| 40 / QDR | QSFP+ | Front-end | F800, F810, H600, H5600, H500, H400, A200, A2000 |
| 25/10 | SFP28 | Front-end | F900, F600, F200, H700, H7000, A300, A3000, P100, B100 |
| 25/10 | SFP+ | Front-end | F800, F810, H600, H5600, H500, H400, A200, A2000 |
| 10 / QDR | SFP+ | Front-end | F800, F810, H600, H5600, H500, H400, A200, A2000 |

With large clusters, especially when the nodes may not be racked in a contiguous manner, it is highly advised to have all the nodes and switches connected to serial console concentrators and remote power controllers. However, to perform any physical administration or break/fix activity on nodes, you must know where the equipment is located and have administrative resources available to access and service all locations.

As such, the following best practices are recommended:

  • Develop and update thorough physical architectural documentation.
  • Implement an intuitive cable coloring standard.
  • Be fastidious and consistent about cable labeling.
  • Use the appropriate length of cable for the run and create a neat 12” loop of any excess cable, secured with Velcro.
  • Observe appropriate cable bend ratios, particularly with fiber cables.
  • Dress cables and maintain a disciplined cable management ethos.
  • Keep a detailed cluster hardware maintenance log.
  • Where appropriate, maintain a ‘mailbox’ space for cable management.

Disciplined cable management and labeling for ease of identification is particularly important in larger PowerScale clusters, where density of cabling is high. Each chassis can require up to 28 cables, as shown in the following table:

| Cabling Component | Medium | Cable Quantity per Chassis |
|---|---|---|
| Back-end network | Ethernet or InfiniBand | 8 |
| Front-end network | Ethernet | 8 |
| Management interface | 1Gb Ethernet | 4 |
| Serial console | DB9 RS-232 | 4 |
| Power cord | 110V or 220V AC power | 4 |
| Total |  | 28 |

The recommendations for cabling a PowerScale chassis are:

  • Split cabling in the middle of the chassis, between nodes 2 and 3.
  • Route Ethernet and Infiniband cables towards the lower side of the chassis.
  • Connect power cords for nodes 1 and 3 to PDU A, and power cords for nodes 2 and 4 to PDU B.
  • Bundle network cables with the AC power cords for ease of management.
  • Leave enough cable slack for servicing each individual node’s FRUs.

 

Similarly, the stand-alone F-series all flash nodes, in particular the 1RU F600 and F200 nodes, also have a similar density of cabling per rack unit:

| Cabling Component | Medium | Cable Quantity per F-series Node |
|---|---|---|
| Back-end network | 10 or 40 Gb Ethernet or QDR InfiniBand | 2 |
| Front-end network | 10 or 40 Gb Ethernet | 2 |
| Management interface | 1Gb Ethernet | 1 |
| Serial console | DB9 RS-232 | 1 |
| Power cord | 110V or 220V AC power | 2 |
| Total |  | 8 |

Consistent and meticulous cable labeling and management is particularly important in large clusters. PowerScale chassis that employ both front and back-end Ethernet networks can include up to 20 Ethernet connections per 4RU chassis.

In each node’s compute module, there are two PCI slots for the Ethernet cards (NICs). Viewed from the rear of the chassis, in each node the right-hand slot (HBA Slot 0) houses the NIC for the front-end network, and the left-hand slot (HBA Slot 1) houses the NIC for the back-end network. There is also a separate built-in 1Gb Ethernet port on each node for cluster management traffic.

While there is no requirement that node 1 aligns with port 1 on each of the back-end switches, it can certainly make cluster and switch management and troubleshooting considerably simpler. Even if exact port alignment is not possible, with large clusters, ensure that the cables are clearly labeled and connected to similar port regions on the back-end switches.

PowerScale nodes and the drives they contain have identifying LED lights to indicate when a component has failed and to allow proactive identification of resources. You can use the ‘isi led’ CLI command to illuminate specific node and drive indicator lights, as needed, to aid in identification.

Drive repair times depend on a variety of factors:

  • OneFS release (determines Job Engine version and how efficiently it operates)
  • System hardware (determines drive types, amount of CPU, RAM, and so on)
  • Filesystem: Amount of data, data composition (lots of small vs large files), protection, tunables, and so on.
  • Load on the cluster during the drive failure

A useful method to estimate future FlexProtect runtime is to use old repair runtimes as a guide, if available.

The drives in the PowerScale chassis-based platforms have a bay-grid nomenclature, where A-E indicates each of the sleds and 0-6 would point to the drive position in the sled. The drive closest to the front is 0, whereas the drive closest to the back is 2/3/5, depending on the drive sled type.

When it comes to updating and refreshing hardware in a large cluster, swapping nodes can be a lengthy process of somewhat unpredictable duration. Data has to be evacuated from each old node during the Smartfail process prior to its removal, and restriped and balanced across the new hardware’s drives. During this time there will also be potentially impactful group changes as new nodes are added and the old ones removed.

However, if replacing an entire node-pool as part of a tech refresh, a SmartPools filepool policy can be crafted to migrate the data to another nodepool across the back-end network. When complete, the nodes can then be Smartfailed out, which should progress swiftly because they are now empty.
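
As a hedged sketch of that approach (the policy name, path filter, and target node pool here are hypothetical, and the exact filter syntax should be confirmed against the installed OneFS release), a file pool policy can steer the data to the new pool and then be applied by running the SmartPools job:

# isi filepool policies create evac_old_pool --begin-filter --path=/ifs --end-filter --data-storage-target=new_nodepool
# isi job jobs start SmartPools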

If multiple nodes are Smartfailed simultaneously, at the final stage of the process the node remove is serialized with around 60 seconds pause between each. The Smartfail job places the selected nodes in read-only mode while it copies the protection stripes to the cluster’s free space. Using SmartPools to evacuate data from a node or set of nodes in preparation to remove them is generally a good idea, and is usually a relatively fast process.

Another efficient approach can often be to swap drives out into new chassis. In addition to being considerably faster, the drive swapping process focuses the disruption on a single whole cluster down event. Estimating the time to complete a drive swap, or ‘disk tango’ process, is simpler and more accurate and can typically be completed in a single maintenance window.

With PowerScale chassis-based platforms, such as the H700 and A300, the available hardware ‘tango’ options are expanded and simplified. Given the modular design of these platforms, the compute and chassis tango strategies typically replace the disk tango:

| Replacement Strategy | Component | PowerScale F-series | Chassis-based Nodes | Description |
|---|---|---|---|---|
| Disk tango | Drives / drive sleds | x | x | Swapping out data drives or drive sleds. |
| Compute tango | Chassis compute modules |  | x | Rather than swapping out the twenty drive sleds in a chassis, it is usually cleaner to exchange the four compute modules. |
| Chassis tango | 4RU chassis |  | x | Typically only required if there is an issue with the chassis mid-plane. |

Note that any of the above ‘tango’ procedures should only be executed under the recommendation and supervision of Dell support.

Author: Nick Trimbee




Read Full Blog
  • data protection
  • PowerScale
  • NAS
  • zero trust

Address your Security Challenges with Zero Trust Model on Dell PowerScale

Aqib Kazi

Mon, 03 Oct 2022 16:39:01 -0000

|

Read Time: 0 minutes

Dell PowerScale, the world’s most secure NAS storage array[1], continues to evolve its already rich security capabilities with the recent introduction of External Key Manager for Data-at-Rest-Encryption, enhancements to the STIG security profile, and support for UEFI Secure Boot across PowerScale platforms. 

Our next release of PowerScale OneFS  adds new security features that include software-based firewall functionality, multi-factor authentication with support for CAC/PIV, SSO for administrative WebUI, and FIPS-compliant data in flight. 

As the PowerScale security feature set continues to advance, meeting the highest level of federal compliance is paramount to support industry and federal security standards. We are excited to announce that our scheduled verification by the Defense Information Systems Agency (DISA) for inclusion on the DoD Approved Product List will begin in March 2023. For more information, see the DISA schedule here.

Moreover, OneFS will embrace the move to IPv6-only networks with support for USGv6-r1, a critical network standard applicable to hundreds of federal agencies and to the most security-conscious enterprises, including the DoD. Refreshed Common Criteria certification activities are underway and will provide a highly regarded international and enterprise-focused complement to other standards being supported.

We believe that implementing the zero trust model is the best foundation for building a robust security framework for PowerScale. This model and its principles are discussed below.  

Supercharge Dell PowerScale security with the zero trust model

In the age of digital transformation, multiple cloud providers, and remote employees, the confines of the traditional data center are no longer enough to provide the highest levels of security. Traditionally, security meant placing your devices inside an imaginary “bubble”: as long as devices were inside the protected bubble, security was assumed to be handled by firewalls at the perimeter. However, the age-old notion that an organization’s security rests on the perimeter firewall is no longer valid, and that perimeter is often the easiest point for a malicious party to attack.


Now that the data center is no longer confined to one location, the security framework must evolve, transform, and adapt. Although firewalls are still critical to network infrastructure, security must extend beyond firewalls and perimeter security devices alone.

Why is data security important?

Although this seems like an easy question, it’s essential to understand the value of what is being protected. Traditionally, an organization’s most valuable assets were its infrastructure, including a building and the assets required to produce its goods. However, in the age of Digital Transformation, organizations have realized that the most critical asset is their data.

Why a zero trust model?

Because data is an organization’s most valuable asset, protecting the data is paramount. And how do we protect this data in the modern environment without data center confines? Enter the zero trust model!

Although Forrester Research first defined the zero trust architecture in 2010, it has recently received more attention as the ever-changing security environment has brought a sharper focus on cybersecurity. The zero trust architecture is a general model and must be refined for a specific implementation. For example, in September 2019, the National Institute of Standards and Technology (NIST) introduced its concept of Zero Trust Architecture. The White House has also published an Executive Order on Improving the Nation’s Cybersecurity, which includes zero trust initiatives.

In a zero trust architecture, all devices must be validated and authenticated. The concept applies to all devices and hosts, ensuring that none are trusted until proven otherwise. In essence, the model adheres to a “never trust, always verify” policy for all devices.   

NIST Special Publication 800-207 Zero Trust Architecture states that a zero trust model is architected with the following design tenets:

  • All data sources and computing services are considered resources.
  • All communication is secured regardless of network location.
  • Access to individual enterprise resources is granted on a per session basis.
  • Access to resources is determined by dynamic policy—including the observable state of client identity, application/service, and the requesting asset—and may include other behavioral and environmental attributes.
  • The enterprise monitors and measures the integrity and security posture of all owned and associated assets.
  • All resource authentication and authorization are dynamic and strictly enforced before access is allowed.
  • The enterprise collects as much information as possible related to the current state of assets, network infrastructure, and communications and uses it to improve its security posture.


PowerScale OneFS follows the zero trust model

The PowerScale family of scale-out NAS solutions includes all-flash, hybrid, and archive storage nodes that can be deployed across the entire enterprise – from the edge, to core, and the cloud, to handle the most demanding file-based workloads. PowerScale OneFS combines the three layers of storage architecture—file system, volume manager, and data protection—into a scale-out NAS cluster. Dell Technologies follows the NIST Cybersecurity Framework to apply zero trust principles on a PowerScale cluster. The NIST Framework identifies five principles: identify, protect, detect, respond, and recover. Combining the framework from the NIST CSF and the data model provides the basis for the PowerScale zero trust architecture in five key stages, as shown in the following figure.

Let’s look at each of these stages and what Dell Technologies tools can be used to implement them.

1. Locate, sort, and tag the dataset

To secure an asset, the first step is to identify the asset. In our case, it is data. To secure a dataset, it must first be located, sorted, and tagged to secure it effectively. This can be an onerous process depending on the number of datasets and their size. We recommend using the Superna Eyeglass Search and Recover feature to understand your unstructured data and to provide insights through a single pane of glass, as shown in the following image. For more information, see the Eyeglass Search and Recover Product Overview.

2. Roles and access

Once we know the data we are securing, the next step is to associate roles with the indexed data. Role-specific administrators and users then have access only to the subset of data necessary for their responsibilities. PowerScale OneFS allows system access to be limited to an administrative role through role-based access control (RBAC). As a best practice, assign only the minimum required privileges to each administrator as a baseline; more privileges can be added later as needed. For more information, see PowerScale OneFS Authentication, Identity Management, and Authorization.

3. Encryption

For the next step in deploying the zero trust model, use encryption to protect the data from theft and man-in-the-middle attacks.

Data at Rest Encryption

PowerScale OneFS provides Data at Rest Encryption (D@RE) using self-encrypting drives (SEDs), allowing data to be encrypted during writes and decrypted during reads with a 256-bit AES encryption key, referred to as the data encryption key (DEK). OneFS wraps the DEK for each SED in an authentication key (AK). The AKs for each drive are placed in a key manager (KM) that is stored securely in an encrypted database, the key manager database (KMDB). The KMDB is in turn encrypted with a 256-bit master key (MK), and that master key is stored external to the PowerScale cluster using a Key Management Interoperability Protocol (KMIP)-compliant key manager server, as shown in the following figure. For more information, see PowerScale Data at Rest Encryption.

 

Data in flight encryption

Data in flight is encrypted using the SMB3 and NFSv4.1 protocols. SMB encryption can be used by clients that support SMB3 encryption, including Windows Server 2012, 2012 R2, and 2016, and Windows 10 and 11. Although SMB supports encryption natively, NFS requires Kerberos authentication to encrypt data in flight. OneFS release 9.3.0.0 supports NFSv4.1, allowing Kerberos to be used to encrypt traffic between the client and the PowerScale cluster.
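As a simple illustration, SMB3 encryption can be switched on cluster-wide from the OneFS CLI. The following is a sketch only; the option name is an assumption to verify against the CLI reference for your OneFS release:

# Review the current global SMB settings, then enable SMB3 encryption
# (option name assumed; confirm with 'isi smb settings global view --help').
isi smb settings global view
isi smb settings global modify --support-smb3-encryption=yes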

Once protocol access is encrypted, the next step is encrypting data replication. OneFS supports over-the-wire, end-to-end encryption for SyncIQ data replication, protecting and securing in-flight data between clusters. For more information about these features, see the related Dell PowerScale white papers.

4. Cybersecurity

In an environment of ever-increasing cyber threats, cyber protection must be part of any security model. Superna Eyeglass Ransomware Defender for PowerScale provides cyber resiliency. It protects a PowerScale cluster by detecting attack events in real-time and recovering from cyber-attacks. Event triggers create an automated response with real-time access auditing, as shown in the following figure.

The Enterprise AirGap capability creates an isolated data copy in a cyber vault that is network isolated from the production environment, as shown in the following figure. For more about PowerScale Cyber Protection Solution, check out this comprehensive eBook.

5. Monitoring

Monitoring is a critical component of applying a zero trust model. A PowerScale cluster should constantly be monitored through several tools for insights into cluster performance and tracking anomalies. Monitoring options for a PowerScale cluster include the following:

  • Dell CloudIQ for proactive monitoring, machine learning, and predictive analytics.
  • Superna Ransomware Defender for protecting a PowerScale cluster by detecting attack events in real-time and recovering from cyber-attacks. It also offers AirGap.
  • PowerScale OneFS SDK for creating custom applications specific to an organization. The SDK uses the OneFS API to configure, manage, and monitor cluster functionality, providing greater visibility into a PowerScale cluster (see the sketch following this list).
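As a minimal illustration of API-driven monitoring, cluster statistics can be sampled from the CLI or over the OneFS Platform API and fed into an external tool. This is a sketch only: the command options, API endpoint, statistics key, user, and hostname below are assumptions to adapt to your environment.

# Sample per-node system statistics in CSV for ingestion by a monitoring pipeline:
isi statistics system list --format=csv

# Query a statistic remotely over the Platform API (endpoint and key are illustrative):
curl -sk -u monitor_user "https://cluster.example.com:8080/platform/1/statistics/current?key=cluster.health"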

Conclusion

This blog introduces implementing the zero trust model on a PowerScale cluster. For additional details and applying a complete zero trust implementation, see the PowerScale Zero Trust Architecture section in the Dell PowerScale OneFS: Security Considerations white paper. You can also explore the other sections in this paper to learn more about all PowerScale security considerations.

Author: Aqib Kazi

[1] Based on Dell analysis comparing cybersecurity software capabilities offered for Dell PowerScale vs competitive products, September 2022.


Read Full Blog
  • security
  • PowerScale
  • cybersecurity

PowerScale Security Baseline Checklist

Aqib Kazi

Sat, 01 Oct 2022 23:21:56 -0000

|

Read Time: 0 minutes

As a security best practice, a quarterly security review is recommended. An aggressive security posture for a PowerScale cluster is composed of different facets, not all of which apply to every organization: an organization’s industry, clients, business, and IT administrative requirements determine what is applicable. To establish an aggressive security posture for a PowerScale cluster, use the checklist in the following table as a security baseline.

This table serves as a security baseline and must be adapted to specific organizational requirements. See the Dell PowerScale OneFS: Security Considerations white paper for a comprehensive explanation of the concepts in the table below.

Further, cluster security is not a single event but an ongoing process. Monitor this blog for updates; as new updates become available, this post will be revised. Consider implementing an organizational security review on a quarterly basis.

The items listed in the following checklist are not in order of importance or hierarchy but rather form an aggressive security posture as more features are implemented.

Table 1.  PowerScale security baseline checklist

| Security Feature | Configuration | Links | Complete (Y/N) | Notes |
|---|---|---|---|---|
| Data at Rest Encryption | Implement external key manager with SEDs | PowerScale Data at Rest Encryption |   |   |
| Data in flight encryption | Encrypt protocol communication and data replication | PowerScale: Solution Design and Considerations for SMB Environments; PowerScale OneFS NFS Design Considerations and Best Practices; PowerScale SyncIQ: Architecture, Configuration, and Considerations |   |   |
| Role-based access control (RBAC) | Assign the lowest possible access required for each role | Dell PowerScale OneFS: Authentication, Identity Management, and Authorization |   |   |
| Multi-factor authentication |   | Dell PowerScale OneFS: Authentication, Identity Management, and Authorization; Disabling the WebUI and other non-essential services |   |   |
| Cybersecurity |   | Cyber Protection and Recovery for Dell PowerScale; Superna Ransomware Defender & AirGap 2.0 |   |   |
| Monitoring | Monitor cluster activity | Dell CloudIQ - AIOps for Intelligent IT Infrastructure Insights; Various Superna applications |   |   |
| Secure Boot | Configure PowerScale Secure Boot | See PowerScale Secure Boot section |   |   |
| Auditing | Configure auditing | File System Auditing with Dell PowerScale and Dell Common Event Enabler |   |   |
| Custom applications | Create a custom application for cluster monitoring | PowerScale OneFS SDK |   |   |
| Perform a quarterly security review | Review all organizational security requirements and current implementation. | Check this paper and checklist for updates; Monitor security advisories for PowerScale: https://www.dell.com/support/security/en-us |   |   |
| General cluster security best practices |   | See the Security best practices section in the Security Configuration Guide for the relevant release at OneFS Info Hubs |   |   |
| Login, authentication, and privileges best practices |   |   |   |   |
| SNMP security best practices |   |   |   |   |
| SSH security best practices |   |   |   |   |
| Data-access protocols best practices |   |   |   |   |
| Web interface security best practices |   |   |   |   |
| Anti-Virus |   | PowerScale: AntiVirus Solutions |   |   |

Author: Aqib Kazi


Read Full Blog
  • PowerScale
  • OneFS
  • object storage

Distributed Media Workflows with PowerScale OneFS and Superna Golden Copy

Gregory Shiff

Tue, 06 Sep 2022 20:46:32 -0000

|

Read Time: 0 minutes

Object is the new core

Content creation workflows are increasingly distributed between multiple sites and cloud providers. Data orchestration has long been a key component in these workflows. With the extra complexity (and functionality) of multiple on-premises and cloud infrastructures, automated data orchestration is more crucial than ever.

There has been a subtle but significant shift in how media companies store and manage data. In the old way, file storage formed the “core” and data was eventually archived off to tape or object storage for long-term retention. The new way of managing data flips this paradigm. Object storage has become the new “core” with performant file storage at edge locations used for data processing and manipulation.

Various factors have influenced this shift. These factors include the ever-increasing volume of data involved in modern productions, the expanding role of public cloud providers (for whom object storage is the default), and media application support.

  

Figure 1.  Global storage environment

With this shift in roles, new techniques for data orchestration become necessary. Data management vendors are reacting to these requirements for data movement and global file system solutions.

However, many of these solutions require data to be ingested and accessed through dedicated proprietary gateways. Often this gateway approach means that the data is now inaccessible using the native S3 API.

PowerScale OneFS and Superna Golden Copy provide a way of orchestrating data between file and object that retains the best qualities of both types of storage. Data is available to be accessed on both the performant edge (PowerScale) and the object core (ECS or public cloud) with no lock in at either end.

Further, Superna Golden Copy is directly integrated with the PowerScale OneFS API. The OneFS snapshot change list is used for immediate incremental data moves. Filesystem metadata is preserved in S3 tags.

Together, Golden Copy and OneFS form a solution built for seamless movement of data between locations, file systems, and object storage. File structure and metadata are preserved.

Right tool for the job

Data that originates on object storage needs to be accessible natively by systems that can speak object APIs. Also, some subset of data needs to be moved to file storage for further processing. Production data that originates on file storage similarly needs native access. That file data will need to be moved to object storage for long-term retention and to make it accessible to globally distributed resources.

Content creation workflows are spread across multiple teams working in many locations. Multisite productions require distributed storage ecosystems that can span geographies. This architecture is well suited to a core of object storage as the “central source of truth”. Pools of highly performant file storage serve teams in their various global locations.

The Golden Copy GraphQL API allows external systems to control, configure, and monitor Golden Copy jobs. This type of API-based data orchestration is essential to the complex global pipelines of content creators. Manually moving large amounts of data is untenable. Schedule-based movement of data aligns well with some content creation workflows; other workflows require more ad hoc data movement.

 Figure 2.  Object Core with GoldenCopy and PowerScale

A large ecosystem of production management tools, such as Autodesk Shotgrid, exists for managing global teams. These tools are excellent for managing projects, but do not typically include dedicated data movers. Data movement can be particularly challenging when large amounts of media need to be shifted between object and file.

Production asset management can trigger data moves with Golden Copy based on metadata changes to a production or scene. This kind of API and metadata driven data orchestration fits in the MovieLabs 2030 vision for software-defined workflows for content creation. This topic is covered in some detail for tiering within a OneFS file system in the paper: A Metadata Driven Approach to On Demand Tiering.

For more information about using PowerScale OneFS together with Superna GoldenCopy, see my full white paper PowerScale OneFS: Distributed Media Workflows.

Author: Gregory Shiff

Read Full Blog
  • AI
  • PowerScale
  • NFS
  • performance metrics

Artificial Intelligence for IT operations (AIOps) in PowerScale Performance Prediction

Vincent Shen

Tue, 06 Sep 2022 18:14:53 -0000

|

Read Time: 0 minutes

AI has been a hot topic in recent years. A common question from our customers is ‘How can AI help the day-to-day operation and management of PowerScale?’ It’s a very interesting question: although AI opens up many possibilities, there are still relatively few implementations of it in IT infrastructure.

But, we finally have something that is very inspiring. Here is what we have achieved as proof of concept in our lab with the support of AI Dynamics, a professional AI platform company. 

Challenges for IT operations and opportunities for AIOps

With the increase in complexity of IT infrastructure comes an increase in the amount of data produced by these systems. Real-time performance logs, usage reports, audits, and other metadata can add up to gigabytes or terabytes a day. It is a big challenge for the IT department to analyze this data and to extract proactive predictions, such as IT infrastructure performance issues and their bottlenecks.

AIOps is the methodology to address these challenges. The term ‘AIOps’ refers to the use of artificial intelligence (AI), specifically machine learning (ML) techniques, to ingest, analyze, and learn from large volumes of data from every corner of the IT environment. The goal of AIOps is to allow IT departments to manage their assets and tackle performance challenges proactively, in real-time (or better), before they become system-wide issues. 

PowerScale key performance prediction using AIOps

Overview

In this solution, we identify NFS latency as the PowerScale performance indicator that customers would like to see predictive reporting about. The goal of the AI model is to study historical system activity and predict the NFS latency at five-minute intervals for four hours in the future. A traditional software system can use these predictions to alert users of a potential performance bottleneck based on the user’s specified latency threshold level and spike duration. In the future, AI models can be built that help diagnose the source of these issues so that both an alert and a best-recommended solution can be reported to the user.

The whole training process involves the following three steps (I’ll explain the details in the following sections):

  • Data preparation – to get the raw data and extract the useful features as the input for training and validation
  • Training the model – to pick up a proper AI architecture and tune the parameters for accuracy
  • Model validation – to validate the AI model based on the data set obtained from the training

Data preparation

The raw performance data is collected through Dell Secure Remote Services (SRS) from 12 different all-flash PowerScale clusters from an electronic design automation (EDA) customer each week. We identify and extract 26 performance key metrics from the raw data, most of which are logged and updated every five minutes. AI Dynamics NeoPulse is used to extract some additional fields (such as the day of the week and time of day from the UNIX timestamp fields) to allow the model to make better predictions. Each week new data was collected from the PowerScale cluster to increase the size of the training dataset and to improve the AI model. During every training run, we also withheld 10% of the data, which we used to test the AI model in the testing phase. This is separate from the 10% of training data withheld for validation.

Figure 1.  Data preparation process

Training the model

Over a period of two months, more than 50 different AI models were trained using a variety of different time series architectures, varying model architecture parameters, hyperparameters, and data engineering techniques to maximize performance, without overfitting to existing data. When these training pipelines were created in NeoPulse, they could easily be reused as new data arrived from the client each week, to rerun training and testing in order to quantify the performance of the model.

At the end of the two-month period, we had built a model that could correctly predict, for 70% of the next 48 five-minute intervals (four hours in total), whether this one performance metric (NFS3 latency) would be above a threshold of 10ms.

Model validation

In the data preparation phase, we withheld 10% of the total data set for AI model validation and testing. With the current AI model, end users can easily configure the latency threshold they want. In this case, we validated the model at 10ms and 15ms latency thresholds. The model can correctly identify over 70% of 10ms latency spikes and 60% of 15ms latency spikes over the entire ensuing four-hour period.

Figure 2.  Model Validation

Results

In this solution, we used NFS latency from PowerScale as the indicator to be predicted. The AI model uses the performance data from the previous four hours to predict the trends and spikes of NFS latency in the next four hours. If the software identifies a five-minute period when a >10ms latency spike would occur more than 70% of the time, it will trigger a configurable alert to the user.

The following diagram shows an example. At 8:55 a.m., the AI model predicts the NFS latency from 8:55 a.m. to 12:55 p.m., based on the input of performance data from 4:55 a.m. to 8:55 a.m. The AI model makes predictions for each five-minute period over the prediction duration. The model predicts a few isolated spikes in latency, with a large consecutive cluster of high latency between around 12 p.m. and 12:55 p.m. A software system can use this prediction to alert the user about the expected increase in latency, giving them over three hours to get ahead of the problem and reduce the server load. In the graph, the dotted line shows the AI model’s prediction, whereas the solid line shows actual performance.


Figure 3.  Dell PowerScale NFS Latency Forecasting

To sum up, the solution achieved the following:

  • By using the previous four hours of PowerScale performance data, the solution can forecast the next four hours of any specified metric.
  • For NFS3 latency, the solution was benchmarked as correctly identifying periods when latency would be above 10ms 70% of the time.
  • The data and model training pipelines created for this task can easily be adapted to predict other performance metrics, such as NFS throughput spikes, SMB latency spikes, and so on.
  • The AI learns to improve its predictions week by week as it adapts to each customer’s nuanced usage patterns, creating customized models for each customer’s idiosyncratic workload profiles.

Conclusion

AIOps introduces the intelligence needed to manage the complexity of modern IT environments. The NeoPulse platform from AI Dynamics makes AIOps easy to implement. In an all-flash configuration of Dell PowerScale clusters, performance is one of the key considerations. Hundreds or thousands of performance logs are generated every day, and AIOps can easily consume these existing logs and provide insight into potential performance bottlenecks. Dell servers with GPUs are great platforms for performing training and inference, not just for this model but for any other new AI challenge an organization wishes to tackle, across dozens of problem types.

For additional details about our testing, see the white paper Key Performance Prediction using Artificial Intelligence for IT operations (AIOps).

Author: Vincent Shen

Read Full Blog
  • data storage
  • CSI
  • PowerScale

Network Design for PowerScale CSI

Sean Zhan Florian Coulombel

Tue, 23 Aug 2022 17:00:45 -0000

|

Read Time: 0 minutes

Network connectivity is an essential part of any infrastructure architecture. When it comes to how Kubernetes connects to PowerScale, there are several options to configure the Container Storage Interface (CSI). In this post, we will cover the concepts and configuration you can implement.

The story starts with CSI plugin architecture.

CSI plugins

Like all other Dell storage CSI, PowerScale CSI follows the Kubernetes CSI standard by implementing functions in two components.

  • CSI controller plugin
  • CSI node plugin 

The CSI controller plugin is deployed as a Kubernetes Deployment, typically with two or three replicas for high availability, with only one instance acting as the leader. The controller is responsible for communicating with PowerScale using the Platform API to manage volumes (on PowerScale, that means creating and deleting directories, NFS exports, and quotas), to update the NFS client list when a Pod moves, and so on.

The CSI node plugin is a Kubernetes DaemonSet, running on all nodes by default. It is responsible for mounting the NFS export from PowerScale and mapping the NFS mount path into a Pod as persistent storage, so that applications and users in the Pod can access the data on PowerScale.

Roles, privileges, and access zone

Because CSI needs to access both PAPI (the PowerScale Platform API) and NFS data, a single user role typically isn’t secure enough: the role used for PAPI access needs more privileges than a normal user.

According to the PowerScale CSI manual, CSI requires a user that has the following privileges to perform all CSI functions:

| Privilege | Type |
|---|---|
| ISI_PRIV_LOGIN_PAPI | Read Only |
| ISI_PRIV_NFS | Read Write |
| ISI_PRIV_QUOTA | Read Write |
| ISI_PRIV_SNAPSHOT | Read Write |
| ISI_PRIV_IFS_RESTORE | Read Only |
| ISI_PRIV_NS_IFS_ACCESS | Read Only |
| ISI_PRIV_IFS_BACKUP | Read Only |

Among these privileges, ISI_PRIV_SNAPSHOT and ISI_PRIV_QUOTA are only available in the System zone, and this complicates things a bit. To fully utilize CSI features such as volume snapshot, volume clone, and volume capacity management, you have to allow the CSI driver to access the PowerScale System zone. If you enable CSM for replication, the user also needs the ISI_PRIV_SYNCIQ privilege, which is a System-zone privilege too.
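For illustration, a dedicated service account and role carrying these privileges could be created on the cluster along the following lines. This is a sketch only: the user and role names are hypothetical, and the exact flags should be verified against your OneFS release and the CSI installation guide.

# Hypothetical account and role names; privileges per the table above.
# The role must live in the System zone for the snapshot and quota privileges.
isi auth users create csi-user --enabled=true --set-password
isi auth roles create CSIDriverRole
isi auth roles modify CSIDriverRole \
    --add-priv-read=ISI_PRIV_LOGIN_PAPI \
    --add-priv-write=ISI_PRIV_NFS \
    --add-priv-write=ISI_PRIV_QUOTA \
    --add-priv-write=ISI_PRIV_SNAPSHOT \
    --add-priv-read=ISI_PRIV_IFS_RESTORE \
    --add-priv-read=ISI_PRIV_NS_IFS_ACCESS \
    --add-priv-read=ISI_PRIV_IFS_BACKUP
isi auth roles modify CSIDriverRole --add-user=csi-user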

By contrast, there isn’t any specific role requirement for applications/users in Kubernetes to access data: the data is shared by the normal NFS protocol. As long as they have the right ACL to access the files, they are good. For this data accessing requirement, a non-system zone is suitable and recommended.

These two access zones are defined in different places in CSI configuration files:

  • The PAPI access zone name (FQDN) needs to be set in the secret yaml file as “endpoint”, for example “f200.isilon.com”.
  • The data access zone name (FQDN) needs to be set in the storageclass yaml file as “AzServiceIP”, for example “openshift-data.isilon.com”.

If an admin really cannot expose their System zone to the Kubernetes cluster, they have to disable the snapshot and quota features in the CSI installation configuration file (values.yaml). In this way, the PAPI access zone can be a non-System access zone.

The following diagram shows how the Kubernetes cluster connects to PowerScale access zones.

Network

Normally a Kubernetes cluster comes with many networks: a pod inter-communication network, a cluster service network, and so on. Luckily, the PowerScale network doesn’t have to join any of them. The CSI pods can access a host’s network directly, without going through the Kubernetes internal network. This also has the advantage of providing a dedicated high-performance network for data transfer.

For example, on a Kubernetes host, there are two NICs: IP 192.168.1.x and 172.24.1.x. NIC 192.168.1.x is used for Kubernetes, and is aligned with its hostname. NIC 172.24.1.x isn’t managed by Kubernetes. In this case, we can use NIC 172.24.1.x for data transfer between Kubernetes hosts and PowerScale.

Because the CSI driver by default uses the IP that is aligned with the host’s hostname, to make CSI recognize the second NIC (172.24.1.x) we have to explicitly set the IP range in “allowedNetworks” in the values.yaml file during the CSI driver installation. For example:

allowedNetworks: [172.24.1.0/24]

Also, in this network configuration, it’s unlikely that the Kubernetes internal DNS can resolve the PowerScale FQDN. So, we also have to make sure the “dnsPolicy” has been set to “ClusterFirstWithHostNet” in the values.yaml file. With this dnsPolicy, the CSI pods will reach the DNS server in /etc/resolv.conf in the host OS, not the internal DNS server of Kubernetes.
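Putting these two settings together, the relevant excerpt of the driver’s values.yaml looks something like the following (a sketch using the example subnet from above; all other values.yaml parameters are omitted):

# values.yaml excerpt for the CSI PowerScale driver installation
allowedNetworks: [172.24.1.0/24]        # data-path subnet, not the PAPI network
dnsPolicy: "ClusterFirstWithHostNet"    # resolve PowerScale FQDNs via the host's DNS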

The following diagram shows the configuration mentioned above:

Please note that the “allowedNetworks” setting only affects the data access zone, and not the PAPI access zone. In fact, CSI just uses this parameter to decide which host IP should be set as the NFS client IP on the PowerScale side.

Regarding network routing, CSI simply follows the OS route configuration. Because of that, if we want the PAPI access zone traffic to go through the primary NIC (192.168.1.x) and the data access zone traffic to go through the second NIC (172.24.1.x), we have to change the route configuration of the Kubernetes host, not this parameter.
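For example, a static route could be added on each Kubernetes host so that traffic toward the PowerScale data access zone leaves through the second NIC. The subnet, gateway, and interface name below are hypothetical:

# Route the PowerScale data access zone subnet via the 172.24.1.x NIC
# (hypothetical addresses and interface name; make the route persistent per your OS).
ip route add 172.24.2.0/24 via 172.24.1.1 dev ens224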

Hopefully this blog helps you understand the network configuration for PowerScale CSI better. Stay tuned for more information on Dell Containers & Storage!

Authors: Sean Zhan, Florian Coulombel

Read Full Blog
  • security
  • PowerScale
  • OneFS

Disabling the WebUI and other Non-essential Services

Aqib Kazi

Mon, 25 Jul 2022 13:43:38 -0000

|

Read Time: 0 minutes

In today's security environment, organizations must adhere to governance security requirements, including disabling specific HTTP services.

OneFS release 9.4.0.0 introduces an option to selectively disable non-essential cluster services rather than disabling all HTTP services. Selective disabling lets administrators determine which services are necessary, while the essential services on the cluster continue to run. You can disable the following non-essential services:

  • PowerScaleUI (WebUI)
  • Platform-API-External
  • RESTful Access to Namespace (RAN)
  • RemoteService

Each of these services can be disabled independently and has no impact on other HTTP-based data services. The services can be disabled through the CLI or API with the ISI_PRIV_HTTP privilege. To manage the non-essential services from the CLI, use the isi http services list command to list the services. Use the isi http services view and isi http services modify commands to view and modify the services. The impact of disabling each of the services is listed in the following table.
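For example, the following commands (run with the ISI_PRIV_HTTP privilege) list the services, inspect one, and re-enable a previously disabled one. The service IDs are those listed in the table below; the exact argument form may vary slightly by release.

# List the non-essential HTTP services and their current state:
isi http services list

# Inspect a single service before changing it:
isi http services view Platform-API-External

# Re-enable a previously disabled service:
isi http services modify --service-id=Platform-API-External --enabled=true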

HTTP services impacts

| Service | Impacts |
|---|---|
| PowerScaleUI | The WebUI is entirely disabled. Attempting to access the WebUI displays “Service Unavailable. Please contact Administrator.” |
| Platform-API-External | Disabling the Platform-API-External service does not impact the Platform-API-Internal service of the cluster; the internal service continues to function even when the external service is disabled. However, if the Platform-API-External service is disabled, the WebUI is also disabled, because the WebUI uses the Platform-API-External service. |
| RAN (RESTful Access to Namespace) | If RAN is disabled, the File System Explorer and File Browser WebUI components are restricted. |
| RemoteService | If RemoteService is disabled, the remote support UI and the InProduct Activation UI components are restricted. |

To disable the WebUI, use the following command:

isi http services modify --service-id=PowerScaleUI --enabled=false

Author: Aqib Kazi



Read Full Blog
  • VMware
  • PowerScale
  • cloud
  • Google Cloud
  • NAS

Dell PowerScale for Google Cloud New Release Available

Lieven Lin

Fri, 22 Jul 2022 17:58:28 -0000

|

Read Time: 0 minutes

PowerScale for Google Cloud provides the native-cloud experience of file services with high performance. It is a scalable file service that provides high-speed file access over multiple protocols, including SMB, NFS, and HDFS. PowerScale for Google Cloud enables customers to run their cloud workloads on the PowerScale scale-out NAS storage system. The following figure shows the architecture of PowerScale for Google Cloud. The three main parts are the Dell Technologies partner data center, the Dell Technologies Google Cloud organization (isiloncloud.com), and the customer’s Google Cloud organization (for example, customer-a.com and customer-b.com).

PowerScale for Google Cloud: a new release

We proudly released a new version of PowerScale for Google Cloud on July 8, 2022. It provides the following key features and enhancements:

More flexible configuration to choose

In the previous version of PowerScale for Google Cloud, only a few pre-defined node tiers were available. With the latest version, you can purchase all PowerScale node types to fit your business needs and accelerate your cloud-native file service experience.

New location available in EMEA region

In the previous version, the supported regions include North America and APJ (Australia and Singapore). We are now adding the EMEA region, which includes London, Frankfurt, Paris, and Warsaw.

Google Cloud VMware Engine (GCVE) Certification

PowerScale for Google Cloud is now certified to support GCVE. GCVE guest VMs can connect to PowerScale for Google Cloud file services to fully leverage PowerScale cluster storage. We’ll be taking a deeper look at the details in blog articles in the next few weeks.

Want to know more about the powerful cloud file service solution? Just click these links:

Resources

Author: Lieven Lin


Read Full Blog
  • PowerScale
  • OneFS
  • NAS

PowerScale Delivers Better Efficiency and Higher Node Density with Gen2 QLC Drives

Cris Banson

Wed, 13 Jul 2022 14:50:00 -0000

|

Read Time: 0 minutes

Quad-level cell (QLC) flash memory 15TB and 30TB drives have just been made available for the PowerScale F900 and F600 all-flash models. These new QLC drives, supported by the currently shipping OneFS 9.4 release, offer our customers optimum economics for NAS workloads that require performance, reliability, and capacity – such as financial modeling, media and entertainment, artificial intelligence (AI), machine learning (ML), and deep learning (DL). See the preview of this technology that we provided in May at Dell Technologies World (DTW).

PowerScale F900/F600 QLC raw capacity

| Model | Chassis design (per node) | Raw capacity per node | Raw capacity for maximum cluster configuration (252 nodes) |
|---|---|---|---|
| F900 | 2U with 24 NVMe SSD drives | 737.28TB with 30.72TB QLC; 368.6TB with 15.36TB QLC | 185.79PB with 30.72TB QLC; 92.89PB with 15.36TB QLC |
| F600 | 1U with 8 NVMe SSD drives | 245.76TB with 30.72TB QLC; 122.88TB with 15.36TB QLC | 61.93PB with 30.72TB QLC; 30.96PB with 15.36TB QLC |

QLC drives expand the data lake with up to 2x more capacity than previous generations in the same footprint, while delivering savings in consolidated rack space and power/cooling. From the edge to the core and to the cloud, PowerScale systems deliver simplicity, value, performance, flexibility, and choice.   

  • Dell PowerScale with QLC drives delivers better efficiency with half the power and half the rack space required per TB as compared with current highest capacity node.1
  • Dell PowerScale with QLC drives delivers up to 2x higher raw cluster capacity as compared with current all-flash drives.2
  • Dell PowerScale with QLC drives delivers up to 2x higher raw node density as compared with current all-flash drives.3
  • Dell PowerScale with QLC drives delivers up to 19% lower price per TB as compared with current all-flash drives.4 

PowerScale nodes built with QLC drives can deliver the same level of performance as nodes built with TLC drives, while requiring only half the power and half the rack space. They are also up to 19% lower in price per TB, delivering superior economics and value to our customers. QLC-enabled nodes performed at parity or slightly better than TLC-enabled nodes for throughput benchmarks and SPEC workloads.5

QLC drive-enabled nodes deliver the same performance as TLC while improving efficiency and doubling cluster capacity

These QLC drives become part of the overall lifecycle management system within OneFS, which gives PowerScale a major TCO advantage over the competition. Seamless integration of nodes with QLC drives into existing PowerScale clusters allows those clusters to take on new workloads. To address the storage capacity, performance needs, and cost optimization requirements of today’s workloads (while being powerful enough to handle the unpredictable demands of tomorrow), PowerScale systems are designed to provide customers with choice, scale, and flexibility.

“With PowerScale, we have the flexibility to deploy the right storage with the right performance and right capacity to meet our business needs of today and the future,” said Michael Loggins, Global Vice President, Information Technology, SMC Corporation of America.

For more information about the PowerScale F600 and F900 QLC drives, visit the PowerScale all-flash spec sheet.

-------------------------------------------------------

1Based on Dell internal analysis, June 2022. Actual results will vary.   

2Based on Dell internal analysis, June 2022.

3Based on Dell internal analysis, June 2022.

4Based on Dell internal pricing analysis, June 2022. Actual results will vary.   

5Based on Dell internal testing, April 2022. Actual results will vary. 

 

Author: Cris Banson


Read Full Blog
  • PowerScale
  • OneFS
  • data access

Data Access in OneFS - Part 2: Introduction to OneFS Access Tokens

Lieven Lin

Fri, 01 Jul 2022 14:15:16 -0000

|

Read Time: 0 minutes

Recap

In the previous blog, we introduced the OneFS file permission basics, including:

1. OneFS file permission is only in one of the following states:

  • POSIX mode bits - authoritative with a synthetic ACL
  • OneFS ACL - authoritative with approximate POSIX mode bits

2. No matter the OneFS file permission state, the on-disk identity for a file is always a UID, a GID, or an SID. The name of a user or group is for display only.

3. When OneFS receives a user access request, it generates an access token for the user and compares the token to the file permissions based on UID/GID/SID.

Therefore, in this blog, we will explain what UID/GID/SID is, and will explain what a OneFS access token is. Now, let’s start by looking at UID/GID/SIDs.

UID/GID and SID

In daily life, we usually refer to a user or a group by its username or group name. In a NAS system, however, a user or group is identified by a UID, GID, or SID, and the NAS system resolves that UID, GID, or SID into the related username or group name for display.

A UID/GID is a positive integer used in UNIX environments to identify a user or group. UIDs and GIDs are usually provided by the local operating system or an LDAP server.

The SID is usually used in a Windows environment to identify users and groups. SIDs are usually provided by the local operating system or Active Directory (AD). An SID is written in the format:

            S-(revision level)-(identifier authority)-(subauthority1)-(subauthority2)-(etc.)

for example:

S-1-5-21-1004336348-1177238915-682003330-512

For more information about SIDs, see the Microsoft article: What Are Security Identifiers?.

OneFS access token

In OneFS, information about users and groups is managed and stored in different authentication providers, including UID/GID and SID information, and user group membership information. OneFS can add multiple types of authentication provider, including:

  • Active Directory (AD)
  • Lightweight Directory Access Protocol (LDAP) servers
  • NIS
  • File provider
  • Local provider

OneFS retrieves a user’s identity (UID/GID/SID) and group memberships from the above authentication providers. Assuming that we have a user named Joe, OneFS tries to resolve Joe’s UID/GID and group memberships from LDAP, NIS, file provider, and Local provider. Meanwhile, it also tries to resolve Joe’s SID and group memberships from AD, file provider, or local provider. 

  • If neither UID/GID nor SID can be found in any of the authentication providers, the user does not exist. User access may be denied or be mapped to the ‘nobody’ user, depending on your protocol. 
  • If only a UID/GID can be found or only a SID can be found, OneFS generates a fake UID or SID for the user.

It is not always the case that OneFS needs to resolve a user from username to UID/GID/SID. It is also possible that OneFS needs to resolve a user in reverse: that is, resolve a UID to its related username. This usually occurs when using NFSv3. When OneFS gets all UID/GID/SID information for a user, it will maintain the identity relationship in a local database, which records the UID <--> SID and GID <-->SID mapping, also known as the ID mapping function in OneFS.

Now, you should have an overall idea about how OneFS maintains the important UID/GID/SID information, and how to retrieve this information as needed.

Next, let’s see how OneFS can determine whether different usernames from different authentication providers actually belong to the same real user. For example, how can we tell whether the Joe in AD and the joe_f in LDAP are the same person? If they are, OneFS needs to ensure that they have the same access to the same file, even over different protocols.

That is the magic of the OneFS user mapping function. The default user mapping rule maps together users that have the same username in different authentication providers. For example, the Joe in AD and the Joe in LDAP are considered the same user. You must create user mapping rules if a real user has different names in different authentication providers. A user mapping rule can use different operators, providing more flexible management of different usernames across authentication providers; the operators are Append, Insert, Replace, Remove Groups, and Join. See OneFS user mapping operators for more details. Remember that user mapping is simply a function that determines whether the user information in an authentication provider should be used when generating an access token.

Although it is easy to confuse user mapping with ID mapping, user mapping is the process of identifying users across authentication providers for the purpose of token generation. After the token is generated, the mappings of SID<-->UID are placed in the ID mapping database.

Finally, OneFS must choose an authoritative identity (that is, an on-disk identity) from the collected or generated UID/GID/SID for the user. This identity is stored on disk and is used when the file is created or when ownership of the file changes, thereby determining the file permissions.

In a single protocol environment, determining the On-Disk Identity is simple because Windows uses SIDs and Linux uses UIDs. However, in a multi-protocol environment, only one identity is stored, and the challenge is determining which one is stored. By default, the policy configured for on-disk identity is Native mode. Native mode is the best option for most environments. OneFS selects the real value between the SID and UID/GID. If both the SID and UID/GID are real values, OneFS selects UID/GID. Please note that this blog series is based on the default policy setting.
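The currently configured on-disk identity policy can be checked from the CLI; a quick sketch (the grep pattern is just a convenience, and the exact output labels may vary by release):

# Show the global authentication settings, including the on-disk identity policy
# ('native' by default):
isi auth settings global view | grep -i "on disk"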

Now you should have an overall understanding of user mapping, ID mapping, and on-disk identity. These are the key concepts when understanding user access tokens and doing troubleshooting. Finally, let’s see what an access token contains. 

You can view a user’s access token by using the command isi auth mapping token <username> in OneFS. Here is an example of Joe’s access token:

vonefs-aima-1# isi auth mapping token Joe
                   User
                       Name: Joe
                        UID: 2001
                        SID: S-1-5-21-1137111906-3057660394-507681705-1002
                    On Disk: 2001
                    ZID: 1
                   Zone: System
             Privileges: -
          Primary Group
                       Name: Market
                        GID: 2003
                        SID: S-1-5-21-1137111906-3057660394-507681705-1006
                    On Disk: 2003
Supplemental Identities
                       Name: Authenticated Users
                        SID: S-1-5-11

From the above output, we can see that an access token contains the following information:

  • User’s username, UID, SID, and final on-disk identity
  • Access zone ID and name
  • OneFS RBAC privileges
  • Primary group name, GID, SID, and final on-disk identity
  • Supplemental group names, GID or SID.

Still, remember that we have a file created and owned by Joe in the previous blog? Here are the file permissions:

vonefs-aima-1# ls -le acl-file.txt
-rwxrwxr-x +   1 Joe  Market   69 May 28 01:08 acl-file.txt
 OWNER: user:Joe
 GROUP: group:Market
 0: user:Joe allow file_gen_all
 1: group:Market allow file_gen_read,file_gen_execute
 2: user:Bob allow file_gen_all
 3: everyone allow file_gen_read,file_gen_execute

The ls -le command here shows the user’s username only. And we already emphasized that the on-disk identity is always UID/GID or SID, so let’s use the ls -len command to show the on-disk identities. In the following output, we see that Joe’s on-disk identity is his UID 2001, and his GID 2003. When Joe wants to access the file, OneFS compares Joe’s access token with the file permissions below, finds that Joe’s UID is 2001 in his token, and grants him access to the file.

vonefs-aima-1# ls -len acl-file.txt
-rwxrwxr-x +   1 2001  2003   69 May 28 01:08 acl-file.txt
 OWNER: user:2001
 GROUP: group:2003
 0: user:2001 allow file_gen_all
 1: group:2003 allow file_gen_read,file_gen_execute
 2: user:2002 allow file_gen_all
 3: SID:S-1-1-0 allow file_gen_read,file_gen_execute

The above Joe is a OneFS local user from a local provider. Next, we will see what the access token looks like if a user’s SID is from AD and UID/GID is from LDAP.

Let’s assume that user John has an account named John_AD in AD, and also has an account named John_LDAP in LDAP server. This means that OneFS has to ensure that the two usernames have consistent access permissions on a file. To achieve that, we need to create a user mapping rule to join them together, so that the final access token will contain the SID information in AD and UID/GID information in LDAP. The access token for John_AD looks like this:

vonefs-aima-1# isi auth mapping token vlab\\John_AD
                   User
                       Name: VLAB\john_ad
                         UID: 1000019
                        SID: S-1-5-21-2529895029-2434557131-462378659-1110
                    On Disk: S-1-5-21-2529895029-2434557131-462378659-1110
                    ZID: 1
                   Zone: System
             Privileges: -
          Primary Group
                        Name: VLAB\domain users
                         GID: 1000041
                         SID: S-1-5-21-2529895029-2434557131-462378659-513
                    On Disk: S-1-5-21-2529895029-2434557131-462378659-513
Supplemental Identities
                        Name: Users
                         GID: 1545
                         SID: S-1-5-32-545
 
                        Name: Authenticated Users
                         SID: S-1-5-11
 
                       Name: John_LDAP
                        UID: 19421
                         SID: S-1-22-1-19421
 
                        Name: ldap_users
                         GID: 32084
                         SID: S-1-22-2-32084

Assume that a file owned by and accessible only to John_LDAP has the file permissions shown in the following output. Because John_AD and John_LDAP are joined together with a user mapping rule, the John_LDAP identity (UID) is also present in the John_AD access token, so John_AD can also access the file.

vonefs-aima-1# ls -le john_ldap.txt
-rwx------     1 John_LDAP  ldap_users  19 Jun 15 07:36 john_ldap.txt
 OWNER: user:John_LDAP
 GROUP: group:ldap_users
 SYNTHETIC ACL
 0: user:John_LDAP allow file_gen_read,file_gen_write,file_gen_execute,std_write_dac
 1: group:ldap_users allow std_read_dac,std_synchronize,file_read_attr

You should now have an understanding of OneFS access tokens, and how they are used to determine a user’s authorized operation on data, through file permission checking.

In my next blog, we will see what happens when different protocols access OneFS data.

Resources

Author: Lieven Lin

 



Read Full Blog
  • data protection
  • PowerScale
  • OneFS

OneFS Smartfail

Nick Trimbee

Mon, 27 Jun 2022 21:03:17 -0000

|

Read Time: 0 minutes

OneFS protects data stored on failing nodes or drives in a cluster through a process called smartfail. During the process, OneFS places a device into quarantine and, depending on the severity of the issue, the data on it into a read-only state. While a device is quarantined, OneFS reprotects the data on the device by distributing the data to other devices.

After all data eviction or reconstruction is complete, OneFS logically removes the device from the cluster, and the node or drive can be physically replaced. OneFS only automatically smartfails devices as a last resort. Nodes and/or drives can also be manually smartfailed. However, it is strongly recommended to first consult Dell Technical Support.

Occasionally a device might fail before OneFS detects a problem. If a drive fails without being smartfailed, OneFS automatically starts rebuilding the data to available free space on the cluster. However, because a node might recover from a transient issue, if a node fails, OneFS does not start rebuilding data unless it is logically removed from the cluster.

A node that is unavailable and reported by isi status as ‘D’, or down, can be smartfailed. If the node is hard down, likely with a significant hardware issue, the smartfail process will take longer because data has to be recalculated from the FEC protection parity blocks. That said, it’s well worth attempting to bring the node up if at all possible – especially if the cluster, and/or node pools, is at the default +2D:1N protection. The concern here is that, with a node down, there is a risk of data loss if a drive or other component goes bad during the smartfail process.

If possible, and assuming the disk content is still intact, it can often be quicker to have the node hardware repaired. In this case, the entire node’s chassis (or compute module in the case of Gen 6 hardware) could be replaced and the old disks inserted with original content on them. This should only be performed at the recommendation and under the supervision of Dell Technical Support. If the node is down because of a journal inconsistency, it will have to be smartfailed out. In this case, engage Dell Technical Support to determine an appropriate action plan.

The recommended procedure for smartfailing a node is as follows. In this example, we’ll assume that node 4 is down:

From the CLI of any node except node 4, run the following command to smartfail out the node:

# isi devices node smartfail --node-lnn 4

Verify that the node is removed from the cluster.

# isi status –q

(An ‘—S-’ will appear in node 4’s ‘Health’ column to indicate it has been smartfailed).

Monitor the successful completion of the job engine’s MultiScan, FlexProtect/FlexProtectLIN jobs:

# isi job status

Un-cable and remove the node from the rack for disposal.

As mentioned previously, there are two primary Job Engine jobs that run as a result of a smartfail:

  • MultiScan
  • FlexProtect or FlexProtectLIN

MultiScan performs the work of both the AutoBalance and Collect jobs simultaneously, and it is triggered after every group change. The reason is that new file layouts and file deletions that happen during a disruption to the cluster might be imperfectly balanced or, in the case of deletions, simply lost.

The Collect job reclaims free space from previously unavailable nodes or drives. A mark and sweep garbage collector, it identifies everything valid on the filesystem in the first phase. In the second phase, the Collect job scans the drives, freeing anything that isn’t marked valid.

When node and drive usage across the cluster are out of balance, the AutoBalance job scans through all the drives looking for files to re-layout, to make use of the less filled devices.

The purpose of the FlexProtect job is to scan the file system after a device failure to ensure that all files remain protected. Incomplete protection levels are fixed, in addition to missing data or parity blocks caused by drive or node failures. This job is started automatically after smartfailing a drive or node. If a smartfailed device was the reason the job started, the device is marked gone (completely removed from the cluster) at the end of the job.

Although a new node can be added to a cluster at any time, it’s best to avoid major group changes during a smartfail operation. This helps to avoid any unnecessary interruptions of a critical job engine data reprotection job. However, because a node is down, there is a window of risk while the cluster is rebuilding the data from that node. Under pressing circumstances, the smartfail operation can be paused, the node added, and then the smartfail resumed once the new node has successfully joined the cluster.

Be aware that if the node you are adding is the same node that was smartfailed, the cluster maintains a record of that node and may prevent the re-introduction of that node until the smartfail is complete. To mitigate risk, Dell Technical Support should definitely be involved to ensure data integrity.

The time for a smartfail to complete is hard to predict with any accuracy, and depends on:

| Attribute | Description |
|---|---|
| OneFS release | Determines the OneFS job engine version and how efficiently it operates. |
| System hardware | Drive types, CPU, RAM, and so on. |
| File system | Quantity and type of data (that is, small vs. large files), protection, tunables, and so on. |
| Cluster load | Processor and CPU utilization, capacity utilization, and so on. |

Typical smartfail runtimes range from minutes (for fairly empty, idle nodes with SSD and SAS drives) to days (for nodes with large SATA drives and a high capacity utilization). The FlexProtect job already runs at the highest job engine priority (value=1) and medium impact by default. As such, there isn’t much that can be done to speed up this job, beyond reducing cluster load.

Smartfail is also a valuable tool for proactive cluster node replacement, such as during a hardware refresh. Provided that cluster quorum is not broken, a smartfail can be initiated on multiple nodes concurrently – but never on more than n/2 – 1 nodes (rounded up). For example, on a ten-node cluster, no more than four nodes should be smartfailed at once.

If replacing an entire node pool as part of a tech refresh, a SmartPools filepool policy can be crafted to migrate the data to another node pool across the backend network. When complete, the nodes can then be smartfailed out, which should progress swiftly because they are now empty.

If multiple nodes are smartfailed simultaneously, at the final stage of the process the node removals are serialized, with roughly a 60-second pause between each. The smartfail job places the selected nodes in read-only mode while it copies the protection stripes to the cluster's free space. Using SmartPools to evacuate data from a node or set of nodes in preparation for removing them is generally a good idea, and usually a relatively fast process.

SmartPools’ Virtual Hot Spare (VHS) functionality helps ensure that node pools maintain enough free space to successfully re-protect data in the event of a smartfail. Though configured globally, VHS actually operates at the node pool level so that nodes with different size drives reserve the appropriate VHS space. This helps ensure that while data may move from one disk pool to another during repair, it remains on the same class of storage. VHS reservations are cluster wide and configurable, as either a percentage of total storage (0-20%), or as a number of virtual drives (1-4), with the default being 10%.
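
For example, the VHS reservation can be viewed and adjusted through the global storage pool settings from the CLI. The flag names below are believed correct for recent OneFS releases but are worth confirming against the 'isi storagepool settings modify --help' output on your cluster:

# isi storagepool settings view
# isi storagepool settings modify --virtual-hot-spare-limit-percent 10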

Note: a smartfail is not guaranteed to remove all data on a node. Any pool in a cluster that’s flagged with the ‘System’ flag can store /ifs/.ifsvar data. A filepool policy to move the regular data won’t address this data. Also, because SmartPools ‘spillover’ may have occurred at some point, there is no guarantee that an ‘empty’ node is completely devoid of data. For this reason, OneFS still has to search the tree for files that may have blocks residing on the node.

Nodes can be easily smartfailed from the OneFS WebUI by navigating to Cluster Management > Hardware Configuration and selecting ‘Actions > More > Smartfail Node’ for the desired node(s):

Alternatively, the following CLI commands can be used to initiate and then halt the node smartfail process. First, the ‘isi devices node smartfail’ command kicks off the smartfail process on a node and removes it from the cluster.

# isi devices node smartfail -h
Syntax
# isi devices node smartfail
[--node-lnn <integer>]
[--force | -f]
[--verbose | -v]

If necessary, the ‘isi devices node stopfail’ command can be used to discontinue the smartfail process on a node.

# isi devices node stopfail -h
Syntax
isi devices node stopfail
[--node-lnn <integer>]
[--force | -f]
[--verbose | -v]

Similarly, individual drives within a node can be smartfailed with the ‘isi devices drive smartfail’ CLI command.

# isi devices drive smartfail { <bay> | --lnum <integer> | --sled <string> }
        [--node-lnn <integer>]
        [{--force | -f}]
        [{--verbose | -v}]
        [{--help | -h}]

Author: Nick Trimbee



  • PowerScale
  • OneFS
  • SmartPools

OneFS SmartPools and the FilePolicy Job

Nick Trimbee

Fri, 24 Jun 2022 18:22:15 -0000

|

Read Time: 0 minutes

Traditionally, OneFS has used the SmartPools jobs to apply its file pool policies. To accomplish this, the SmartPools job visits every file, and the SmartPoolsTree job visits a tree of files. However, the scanning portion of these jobs can result in significant random impact to the cluster and lengthy execution times, particularly in the case of the SmartPools job. To address this, OneFS also provides the FilePolicy job, which offers a faster, lower impact method for applying file pool policies than the full-blown SmartPools job.

But first, a quick Job Engine refresher…

As we know, the Job Engine is OneFS’ parallel task scheduling framework, and is responsible for the distribution, execution, and impact management of critical jobs and operations across the entire cluster.

The OneFS Job Engine schedules and manages all data protection and background cluster tasks: creating jobs for each task, prioritizing them, and ensuring that inter-node communication and cluster-wide capacity utilization and performance are balanced and optimized. The Job Engine ensures that core cluster functions have priority over less important work, and gives applications integrated with OneFS – Isilon add-on software or applications integrating with OneFS by means of the OneFS API – the ability to control the priority of their various functions to ensure the best resource utilization.

Each job, such as the SmartPools job, has an “Impact Profile” comprising a configurable Impact Policy and an Impact Schedule, which together characterize how much of the system’s resources the job will consume. The amount of work a job has to do is fixed, but the resources dedicated to that work can be tuned to minimize the impact to other cluster functions, like serving client data.

Here’s a list of the specific jobs that are directly associated with OneFS SmartPools:

Job | Description
SmartPools | Job that runs and moves data between the tiers of nodes within the same cluster. Also executes the CloudPools functionality if licensed and configured.
SmartPoolsTree | Enforces SmartPools file pool policies on a subtree.
FilePolicy | Efficient changelist-based SmartPools file pool policy job.
IndexUpdate | Creates and updates an efficient file system index for the FilePolicy job.
SetProtectPlus | Applies the default file policy. This job is disabled if SmartPools is activated on the cluster.

In conjunction with the IndexUpdate job, FilePolicy improves job scan performance by using a ‘file system index’, or changelist, to find files needing policy changes, rather than a full tree scan.

 

Avoiding a full treewalk dramatically decreases the amount of locking and metadata scanning work the job is required to perform, reducing impact on CPU and disk – albeit at the expense of not doing everything that SmartPools does. The FilePolicy job enforces just the SmartPools file pool policies, as opposed to the storage pool settings. For example, FilePolicy does not deal with changes to storage pools or storage pool settings, such as:

  • Restriping activity due to adding, removing, or reorganizing node pools
  • Changes to storage pool settings or defaults, including protection

However, most of the time, SmartPools and FilePolicy perform the same work. Disabled by default, FilePolicy supports the full range of file pool policy features, reports the same information, and provides the same configuration options as the SmartPools job. Because FilePolicy is a changelist-based job, it performs best when run frequently – once or multiple times a day, depending on the configured file pool policies, data size, and rate of change.

Job schedules can easily be configured from the OneFS WebUI by navigating to Cluster Management > Job Operations, highlighting the desired job, and selecting ‘View/Edit’. The following example illustrates configuring the IndexUpdate job to run every six hours at a LOW impact level with a priority value of 5:

When enabling and using the FilePolicy and IndexUpdate jobs, the recommendation is to continue running the SmartPools job as well, but at a reduced frequency (monthly).

In addition to running on a configured schedule, the FilePolicy job can also be executed manually.

FilePolicy requires access to a current index. If the IndexUpdate job has not yet been run, attempting to start the FilePolicy job will fail, with an error message prompting you to run the IndexUpdate job first. Once the index has been created, the FilePolicy job will run successfully. The IndexUpdate job can be run several times a day (for example, every six hours) to keep the index current and prevent the snapshots from getting too large.
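
For example, to build the index and then apply the file pool policies manually from the CLI (job names follow the same lower-case convention used with ‘isi job jobs start’ elsewhere in this post):

# isi job jobs start indexupdate
# isi job jobs start filepolicy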

Consider using the FilePolicy job with the job schedules below for workflows and datasets with the following characteristics:

  • Data with long retention times
  • Large number of small files
  • Path-based File Pool filters configured
  • Where the FSAnalyze job is already running on the cluster (InsightIQ monitored clusters)
  • There is already a SnapshotIQ schedule configured
  • When the SmartPools job typically takes a day or more to run to completion at LOW impact

For clusters without the characteristics described above, the recommendation is to continue running the SmartPools job as usual and not to activate the FilePolicy job.

The following table provides a suggested job schedule when deploying FilePolicy:

Job | Schedule | Impact | Priority
FilePolicy | Every day at 22:00 | LOW | 6
IndexUpdate | Every six hours, every day | LOW | 5
SmartPools | Monthly – Sunday at 23:00 | LOW | 6

Because no two clusters are the same, this suggested job schedule may require additional tuning to meet the needs of a specific environment.
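
As a sketch, the schedule in the preceding table could be applied from the CLI with the ‘isi job types modify’ command; the exact schedule grammar accepted ("every day at 22:00", and so on) and flag names should be verified against your OneFS release:

# isi job types modify filepolicy --schedule "every day at 22:00" --priority 6 --policy LOW
# isi job types modify indexupdate --schedule "every 6 hours" --priority 5 --policy LOW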

Note that when clusters running older OneFS versions and the FSAnalyze job are upgraded to OneFS 8.2.x or later, the legacy FSAnalyze index and snapshots are removed and replaced by new snapshots the first time that IndexUpdate is run. The new index stores considerably more file and snapshot attributes than the old FSA index. Until the IndexUpdate job effects this change, FSA keeps running on the old index and snapshots.

Author: Nick Trimbee

  • PowerScale
  • OneFS
  • CloudPools

Preparations for Upgrading a CloudPools Environment

Jason He

Thu, 23 Jun 2022 15:51:46 -0000

|

Read Time: 0 minutes

Introduction

CloudPools 2.0 brings many improvements and was released along with OneFS 8.2.0. It’s valuable to be able to upgrade OneFS from 8.x to 8.2.x or later and leverage the data management benefits of CloudPools 2.0.

This blog describes the preparations for upgrading a CloudPools environment. The purpose is to avoid potential issues when upgrading OneFS from 8.x to 8.2.x or later (that is, from CloudPools 1.0 to CloudPools 2.0).

For the recommended procedure for upgrading a CloudPools environment, see the document PowerScale CloudPools: Upgrading 8.x to 8.2.2.x or later.

For the best practices and considerations for CloudPools upgrades, see the white paper Dell PowerScale: CloudPools and ECS.

This blog covers the preparations both on cloud providers and on PowerScale clusters.

Cloud providers

CloudPools is a OneFS feature that allows customers to archive or tier data from a PowerScale cluster to cloud storage, including public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, Alibaba Cloud, or a private cloud based on Dell ECS.

Important: Run the isi cloud account list command to verify which cloud providers are used for CloudPools. Different authentications are used with different cloud providers for CloudPools, which might cause potential issues when upgrading a CloudPools environment.

AWS signature authentication is used for AWS, Dell ECS, and Google Cloud. In OneFS releases prior to 8.2, only AWS SigV2 is supported for CloudPools. Starting with OneFS 8.2, AWS SigV4 support is added, which provides an extra level of security for authentication through an enhanced signing algorithm. For more information about V4, see Authenticating Requests: AWS Signature V4. AWS SigV4 is used automatically for CloudPools in OneFS 8.2.x or later if the configurations (CloudPools and cloud providers) are correct. Note that a different authentication mechanism is used for Azure and Alibaba Cloud.

If public cloud providers are used in a customer’s environment, there should be no issues, because all required configurations are managed by the public cloud provider.

If Dell ECS is used in a customer’s environment, the ECS configurations are implemented separately, and you need to make sure that they are correct on ECS, including the load balancer and Domain Name System (DNS).

This section only covers the preparations for CloudPools and Dell ECS before upgrading OneFS from 8.x to 8.2.x or later.

Dell ECS

In general, CloudPools may already have archived a lot of data from a PowerScale (Isilon) cluster to ECS before an upgrade of OneFS from 8.x to 8.2.x or later. That means that most of the configurations required for CloudPools should already exist. For more information about CloudPools and ECS, see the white paper Dell PowerScale: CloudPools and ECS.

This section covers the following configurations for ECS before a OneFS upgrade from 8.x to 8.2.x or later.

  • Load balancer
  • DNS
  • Base URL

Load balancer

A load balancer balances traffic to the various ECS nodes from the PowerScale cluster, and can provide much better performance and throughput for CloudPools. A load balancer is strongly recommended for CloudPools 2.0 and ECS. Several Dell white papers provide information about how to implement a load balancer with ECS.

DNS

AWS always has a wildcard DNS record configured. See the document Virtual hosting of buckets, which introduces path-style access and virtual hosted-style access for a bucket. It also shows how to associate a hostname with an Amazon S3 bucket using CNAMEs for virtual hosted-style access.

Meanwhile, the path-style URL will be deprecated on September 23, 2022. Buckets created after that date must be referenced using the virtual-hosted model. For the reasons behind moving to the virtual-hosted model, see the document Amazon S3 Path Deprecation Plan – The Rest of the Story.

ECS supports Amazon S3 compatible applications that use virtual hosted-style and path-style addressing schemes. (For more information, see the document Bucket and namespace addressing.) To help ensure the proper DNS configuration for ECS, see the document DNS configuration.

The procedure for configuring DNS depends on your DNS server or DNS provider.

For example, in the following scenario DNS is set up on a Windows server. The two tables below show the DNS entries created; customers must create their own DNS entries.

Name | Record Type | FQDN | IP Address | Comment
ecs | A | ecs.demo.local | 192.168.1.40 | The FQDN of the load balancer will be ecs.demo.local.

Name | Record Type | FQDN | FQDN for target host | Comment
cloudpools_uri | CNAME | cloudpools_uri.demo.local | ecs.demo.local | If you create an SSL certificate for the ECS S3 service, the certificate must include the non-wildcard version as a Subject Alternative Name.
*.cloudpools_uri | CNAME | *.cloudpools_uri.demo.local | ecs.demo.local | Used for virtual host addressing for a bucket.


 

Base URL

In CloudPools 2.0 and ECS, a base URL must be created on ECS. For details about creating a Base URL on ECS, see the section Appendix A Base URL in the white paper Dell PowerScale: CloudPools and ECS.

When creating a new Base URL, keep the default setting (No) for Use with Namespace. Make sure that the Base URL is the FQDN alias of the load balancer virtual IP.

PowerScale clusters

If SyncIQ is configured for CloudPools, run the following commands on both the source and target PowerScale clusters to check and record the CloudPools configurations, including cloud storage accounts, cloud pools, file pool policies, and SyncIQ policies.

# isi cloud accounts list -v
# isi cloud pools list -v
# isi filepool policies list -v
# isi sync policies list -v

For CloudPools and ECS, make sure that the URI is the FQDN alias of the load balancer virtual IP.
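
For example, the URI configured for an ECS account can be confirmed with the ‘isi cloud accounts view’ command (the account name here is hypothetical):

# isi cloud accounts view ecs-account1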

Important: It is strongly recommended that no job (such as for CloudPools/SmartPools, SyncIQ, and NDMP) be running before upgrading.  

In a SyncIQ environment, upgrade the SyncIQ target cluster before upgrading the source cluster. OneFS allows SyncIQ to send CP1.0 formatted SmartLink files to the target, where they will be converted into CP2.0 formatted SmartLink files. (If the source cluster is upgraded first, Sync operations will fail until both are upgraded; the only known resolution is to reconfigure the Sync policy to "Deep Copy".)

The customer may also have active (read and write) CloudPools accounts on both the source and target PowerScale clusters, replicating SmartLink files of active CloudPools accounts bidirectionally. That means that the source is also a target. In this case, you need to reconfigure the Sync policy to “Deep Copy” on one of the PowerScale clusters. After that, the target with replicated SmartLink files should be upgraded first.

Summary

This blog covered what you need to check, on cloud providers and PowerScale clusters, before upgrading OneFS from 8.x to 8.2.x or later (that is, from CloudPools 1.0 to CloudPools 2.0). My hope is that it can help you avoid potential CloudPools issues when upgrading a CloudPools environment.

Author: Jason He, Principal Engineering Technologist

  • Isilon
  • security
  • PowerScale
  • OneFS

PowerScale Now Supports Secure Boot Across More Platforms

Aqib Kazi

Tue, 21 Jun 2022 19:55:15 -0000

|

Read Time: 0 minutes

Dell PowerScale OneFS 9.3.0.0 first introduced support for Secure Boot on the Dell Isilon A2000 platform. Now, OneFS 9.4.0.0 expands that support across the PowerScale A300, A3000, B100, F200, F600, F900, H700, H7000, and P100 platforms.

Secure Boot was introduced by the Unified Extensible Firmware Interface (UEFI) Forum as part of the UEFI 2.3.1 specification. The goal of Secure Boot is to ensure device security in the preboot environment by allowing only authorized EFI binaries to be loaded during the boot process.

The operating system boot loaders are signed with a digital signature. PowerScale Secure Boot takes the UEFI framework further by including the OneFS kernel and modules. The UEFI infrastructure is responsible for the EFI signature validation and binary loading within UEFI Secure Boot. Also, the FreeBSD veriexec function can perform signature validation for the boot loader and kernel. The PowerScale Secure Boot feature runs during the nodes’ bootup process only, using public-key cryptography to verify the signed code and ensure that only trusted code is loaded on the node.

Supported platforms

PowerScale Secure Boot is available on the following platforms:

Platform | NFP version | OneFS release
Isilon A2000 | 11.4 or later | 9.3.0.0 or later
PowerScale A300, A3000, B100, F200, F600, F900, H700, H7000, P100 | 11.4 or later | 9.4.0.0 or later

Considerations

Before configuring the PowerScale Secure Boot feature, consider the following:

  • Isilon and PowerScale nodes are not shipped with PowerScale Secure Boot enabled. However, you can enable the feature to meet site requirements.
  • A PowerScale cluster composed of both PowerScale Secure Boot enabled nodes and PowerScale Secure Boot disabled nodes is supported.
  • A license is not required for PowerScale Secure Boot because the feature is natively supported.
  • At any point, you can enable or disable the PowerScale Secure Boot feature.
  • Plan a maintenance window to enable or disable the PowerScale Secure Boot feature, because a node reboot is required during the process to toggle the feature.
  • The PowerScale Secure Boot feature does not impact cluster performance, because the feature is only run at bootup.

Configuration

For more information about configuring the PowerScale Secure Boot feature, see the document Dell PowerScale OneFS Secure Boot.


Author: Aqib Kazi


  • PowerScale
  • OneFS

OneFS SnapRevert Job

Nick Trimbee

Tue, 21 Jun 2022 19:44:06 -0000

|

Read Time: 0 minutes

There have been a couple of recent inquiries from the field about the SnapRevert job.

For context, SnapRevert is one of three main methods for restoring data from a OneFS snapshot. The options are shown here: 

Method | Description
Copy | Copying specific files and directories directly from the snapshot
Clone | Cloning a file from the snapshot
Revert | Reverting the entire snapshot using the SnapRevert job

However, the most efficient of these approaches is the SnapRevert job, which automates the restoration of an entire snapshot to its top-level directory. This allows for quickly reverting to a previous, known-good recovery point (for example, if there is a virus outbreak). The SnapRevert job can be run from the Job Engine WebUI, and requires adding the desired snapshot ID.

 

There are two main components to SnapRevert:

  • The file system domain that the objects are put into.
  • The job that reverts everything back to what’s in a snapshot.

So, what exactly is a SnapRevert domain? At a high level, a domain defines a set of behaviors for a collection of files under a specified directory tree. The SnapRevert domain is described as a restricted writer domain, in OneFS parlance. Essentially, this is a piece of extra filesystem metadata and associated locking that prevents a domain’s files from being written to while restoring a last known good snapshot.

Because the SnapRevert domain is essentially just a metadata attribute placed onto a file/directory, a best practice is to create the domain before there is data. This avoids having to wait for DomainMark (the aptly named job that marks a domain’s files) to walk the entire tree, setting that attribute on every file and directory within it.

The SnapRevert job itself actually uses a local SyncIQ policy to copy data out of the snapshot, discarding any changes to the original directory. When the SnapRevert job completes, the original data is left in the directory tree. In other words, after the job completes, the file system (HEAD) is exactly as it was at the point in time that the snapshot was taken. The LINs for the files or directories do not change because what is there is not a copy.

To manually run SnapRevert, go to the OneFS WebUI > Cluster Management > Job Operations > Job Types > SnapRevert, and click the Start Job button.

Also, you can adjust the job’s impact policy and relative priority, if desired.

Before a snapshot is reverted, SnapshotIQ creates a point-in-time copy of the data that is being replaced. This enables the snapshot revert to be undone later, if necessary.

Individual files, rather than entire snapshots, can also be restored in place using the isi_file_revert command-line utility.

# isi_file_revert
usage:
isi_file_revert -l lin -s snapid
isi_file_revert -p path -s snapid
-d (debug output)
-f (force, no confirmation)

This can help drastically simplify virtual machine management and recovery, for example.

Before creating snapshots, it is worth considering that reverting a snapshot requires that a SnapRevert domain exist for the directory that is being restored. As such, we recommend that you create SnapRevert domains for those directories while the directories are empty. Creating a domain for an empty (or sparsely populated) directory takes considerably less time.

Files may belong to multiple domains. Each file stores a set of domain IDs, indicating which domains it belongs to, in its inode’s extended attributes table. Files inherit this set of domain IDs from their parent directories when they are created or moved. The domain IDs refer to the domain settings themselves, which are stored in a separate system B-tree. These B-tree entries describe the type of the domain (flags) and various other attributes.

As mentioned, a Restricted-Write domain prevents writes to any files except by threads that are granted permission to do so. A SnapRevert domain that does not currently enforce Restricted-Write shows up as (Writable) in the CLI domain listing.

Occasionally, a domain will be marked as (Incomplete). This means that the domain will not enforce its specified behavior. Domains created by the job engine are incomplete if not all files that are part of the domain are marked as being members of that domain. Since each file contains a list of domains of which it is a member, that list must be kept up to date for each file. The domain is incomplete until each file’s domain list is correct.

Besides SnapRevert, OneFS also uses domains for SyncIQ replication and SnapLock immutable archiving.

A SnapRevert domain must be created on a directory before it can be reverted to a particular point in time snapshot. As mentioned before, we recommend creating SnapRevert domains for a directory while the directory is empty.

The root path of the SnapRevert domain must be the same root path of the snapshot. For instance, a domain with a root path of /ifs/data/marketing cannot be used to revert a snapshot with a root path of /ifs/data/marketing/archive.

For example, for snapshot DailyBackup_04-27-2021_12:00 which is rooted at /ifs/data/marketing/archive, you would perform the following:

1. Set the SnapRevert domain by running the DomainMark job (which marks all files).

# isi job jobs start domainmark --root /ifs/data/marketing --dm-type SnapRevert

2. Verify that the domain has been created.

# isi_classic domain list -l

To restore a directory back to the state it was in at the point in time when a snapshot was taken, you need to:

  • Create a SnapRevert domain for the directory
  • Create a snapshot of a directory

 To accomplish this, do the following:

1. Identify the ID of the snapshot you want to revert by running the isi snapshot snapshots view command and picking your point in time (PIT).

For example:

# isi snapshot snapshots view DailyBackup_04-27-2021_12:00
ID: 38
Name: DailyBackup_04-27-2021_12:00
Path: /ifs/data/marketing
Has Locks: No
Schedule: daily
Alias: -
Created: 2021-04-27T12:00:05
Expires: 2021-08-26T12:00:00
Size: 0b
Shadow Bytes: 0b
% Reserve: 0.00%
% Filesystem: 0.00%
State: active

2. Revert to a snapshot by running the isi job jobs start command. The following command reverts to snapshot ID 38 named DailyBackup_04-27-2021_12:00.

# isi job jobs start snaprevert --snapid 38

You can also perform this action from the WebUI. Go to Cluster Management > Job Operations > Job Types > SnapRevert, and click the Start Job button.

OneFS automatically creates a snapshot before the SnapRevert process reverts the specified directory tree. The naming convention for these snapshots is of the form: <snapshot_name>.pre_revert.*

# isi snap snap list | grep pre_revert
39 DailyBackup_04-27-2021_12:00.pre_revert.1655328160 /ifs/data/marketing

This allows for an easy rollback of a SnapRevert if the desired results are not achieved.

If a domain is currently preventing the modification or deletion of a file, a protection domain cannot be created on a directory that contains that file. For example, if files under /ifs/data/smartlock are set to a WORM state by a SmartLock domain, OneFS will not allow a SnapRevert domain to be created on /ifs/data/.

If desired or required, SnapRevert domains can also be deleted using the job engine CLI. For example, to delete the SnapRevert domain at /ifs/data/marketing:

# isi job jobs start domainmark --root /ifs/data/marketing --dm-type SnapRevert --delete

 

Author: Nick Trimbee

  • PowerScale
  • OneFS
  • data access

Data Access in OneFS - Part 1: Introduction to OneFS File Permissions

Lieven Lin

Thu, 16 Jun 2022 20:29:24 -0000

|

Read Time: 0 minutes

About this blog series

Have you ever been confused about PowerScale OneFS multi-protocol data access? If so, this blog series will help you out by demystifying it. Different Network Attached Storage vendors have different designs for implementing multi-protocol data access. With OneFS, you can access the same set of data consistently from different operating systems and protocols.

To make it simple, the overall data access process in OneFS includes:

  1. When a client user tries to access OneFS cluster data by means of a protocol (such as SMB, NFS, or S3), OneFS must first authenticate the client user.
  2. When authentication succeeds, OneFS checks whether the user has permission on the file share, where the share type depends on the access protocol: an SMB share, NFS export, or S3 bucket.
  3. Only when the user is authorized on the file share does OneFS apply user mapping rules and, in most cases, generate an access token for the user. The access token contains the following information:
  • The user's Security Identifier (SID), User Identifier (UID), and Group Identifier (GID)
  • The user's supplemental groups
  • The user's role-based access control (RBAC) privileges
  4. Finally, OneFS enforces the permissions on the target data for the user. This step evaluates the file permissions based on the user's access token and the file share level permissions.

Does it sound simple, but with some details still confusing? For example, what exactly are UIDs, GIDs, and SIDs? What is an access token? How does OneFS evaluate file permissions? Don’t worry if you are not familiar with these concepts. Keep reading and we’ll explain!

To make it easier, we will start with OneFS file permissions, and then introduce OneFS access tokens. Finally, we will see how data access depends on the protocol you use.

In this blog series, we’ll cover the following topics:

  • Data Access in OneFS - Part 1: Introduction to OneFS File Permissions
  • Data Access in OneFS - Part 2: Introduction to OneFS Access Tokens
  • Data Access in OneFS - Part 3: Why Use Different Protocols?
  • Data Access in OneFS - Part 4: Using NFSv3 and NFSv4.x
  • Data Access in OneFS - Part 5: Using SMB
  • Data Access in OneFS - Part 6: Using S3
  • More to add…

Now let's have a look at OneFS file permissions. In a multi-protocol environment, the OneFS operating system is designed to support basic POSIX mode bits and Access Control Lists (ACLs). Therefore, two file permission states are designated:

  • POSIX mode bits - authoritative with a synthetic ACL
  • OneFS ACL - authoritative with approximate POSIX mode bits

POSIX mode bits - authoritative with a synthetic ACL

POSIX mode bits only define three specific permissions: read(r), write(w), and execute(x). Meanwhile, there are three classes to which you can assign permissions: Owner, Group, and Others.

  • Owner: represents the owner of a file/directory.
  • Group: represents the group of a file/directory.
  • Others: represents the users who are not the owner, nor a member of the group.

The ls -le command displays a file’s permissions; the ls -led command displays a directory’s permissions. If a file has these permissions:

-rw-rw-r--

then:

-rw-rw-r--         the first triplet (rw-) means that the owner has read and write permissions

-rw-rw-r--         the second triplet (rw-) means that the group has read and write permissions

-rw-rw-r--         the third triplet (r--) means that all others have only read permission

In the following example for the file posix-file.txt, the file owner Joe has read and write access permissions, the file group Market has read and write access permissions, and all others only have read access permissions.

Also displayed here is the synthetic ACL (shown beneath the SYNTHETIC ACL flag), which indicates that the file is in the POSIX mode bits file permission state. Three Access Control Entries (ACEs) are created for the synthetic ACL, which are simply another way of representing the file’s POSIX mode bits permissions.

vonefs-aima-1# ls -le posix-file.txt
-rw-rw-r--     1 Joe  Market   65 May 28 02:08 posix-file.txt
 OWNER: user:Joe
 GROUP: group:Market
 SYNTHETIC ACL
 0: user:Joe allow file_gen_read,file_gen_write,std_write_dac
 1: group:Market allow file_gen_read,file_gen_write
 2: everyone allow file_gen_read

When OneFS receives a user access request, it generates an access token for the user and compares the token to the file permissions – in this case, the POSIX mode bits.  

OneFS ACL - authoritative with approximate POSIX mode bits

In contrast to POSIX mode bits, OneFS ACLs support more expressive permissions. (For all available permissions, which are listed in Table 1 through Table 3 of the documentation, see Access Control Lists on Dell EMC PowerScale OneFS.) A OneFS ACL consists of one or more Access Control Entries (ACEs). A OneFS ACE contains the following information:

  • ACE index: indicates the ACE order in an ACL
  • Identity type: indicates the identity type; supported identity types include user, group, everyone, creator_owner, creator_group, and owner_rights
  • Identity ID: in OneFS, the UID/GID/SID is stored on disk instead of user names or group names. The name of a user or group is for display only.
  • ACE type: The type of the ACE (allow or deny)
  • ACE permissions and inheritance flags: A list of permissions and inheritance flags separated by commas

For example, the ACE "0: group:Engineer allow file_gen_read,file_gen_execute" indicates that its index is 0, and allows the group called Engineer to have file_gen_read and file_gen_execute access permissions.

The following example shows a full ACL for a file. Although there is no SYNTHETIC ACL flag, there is a "+" character following the POSIX mode bits that indicates that the file is in the OneFS real ACL state. The file’s OneFS ACL grants full permission to users Joe and Bob. It also grants file_gen_read and file_gen_execute permissions to the group Market and to everyone. In this case, the POSIX mode bits are for representation only: you cannot tell the accurate file permissions from the approximate POSIX mode bits. You should therefore always rely on the OneFS ACL to check file permissions.

vonefs-aima-1# ls -le acl-file.txt
-rwxrwxr-x +   1 Joe  Market   69 May 28 01:08 acl-file.txt
 OWNER: user:Joe
 GROUP: group:Market
 0: user:Joe allow file_gen_all
 1: group:Market allow file_gen_read,file_gen_execute
 2: user:Bob allow file_gen_all
 3: everyone allow file_gen_read,file_gen_execute

No matter the OneFS file permission state, the on-disk identity for a file is always a UID, a GID, or an SID. So, for the above two files, file permissions stored on disk are:

vonefs-aima-1# ls -len posix-file.txt
-rw-rw-r--     1 2001  2003   65 May 28 02:08 posix-file.txt
 OWNER: user:2001
 GROUP: group:2003
 SYNTHETIC ACL
 0: user:2001 allow file_gen_read,file_gen_write,std_write_dac
 1: group:2003 allow file_gen_read,file_gen_write
 2: SID:S-1-1-0 allow file_gen_read
 
vonefs-aima-1# ls -len acl-file.txt
-rwxrwxr-x +   1 2001  2003   69 May 28 01:08 acl-file.txt
 OWNER: user:2001
 GROUP: group:2003
 0: user:2001 allow file_gen_all
 1: group:2003 allow file_gen_read,file_gen_execute
 2: user:2002 allow file_gen_all
 3: SID:S-1-1-0 allow file_gen_read,file_gen_execute

When OneFS receives a user access request, it generates an access token for the user and compares the token to the file permissions. OneFS grants access when the file permissions include an ACE that allows the identity in the token to access the file, and does not include an ACE that denies the identity access.

When evaluating the file permissions against a user's access token, OneFS checks the ACEs one by one, following the ACE index order, and stops checking as soon as one of the following conditions is met:

  • All of the required permissions for the access request are allowed by the ACEs: the access request is authorized.
  • Any one of the required permissions for the access request is explicitly denied by an ACE: the access request is denied.
  • All ACEs have been checked, but not all of the required permissions for the access request are allowed: the access request is also denied.

Let’s say we have a file named acl-file01.txt that has the file permissions shown below. When user Bob tries to read the data in the file, OneFS checks the ACEs from index 0 to index 3. When it reaches ACE index 1, that ACE explicitly denies Bob read permission. ACE evaluation stops, and read access is denied.

vonefs-aima-1# ls -le acl-file01.txt
-rwxrw-r-- +   1 Joe  Market   12 May 28 06:19 acl-file01.txt
 OWNER: user:Joe
 GROUP: group:Market
 0: user:Joe allow file_gen_all
 1: user:Bob deny file_gen_read
 2: user:Bob allow file_gen_read,file_gen_write
 3: everyone allow file_gen_read

Now let’s say that we still have the file named acl-file01.txt, but the file permissions are now a little different, as shown below. When user Bob tries to read the data in the file, OneFS checks the ACEs from index 0 to index 3. When it reaches ACE index 1, that ACE explicitly allows Bob read permission. ACE evaluation therefore ends, and read access is authorized. This is why it is recommended to put all “deny” ACEs ahead of “allow” ACEs if you want to explicitly deny specific permissions to specific users or groups.

vonefs-aima-1# ls -le acl-file01.txt
-rwxrw-r-- +   1 Joe  Market   12 May 28 06:19 acl-file01.txt
 OWNER: user:Joe
 GROUP: group:Market
 0: user:Joe allow file_gen_all
 1: user:Bob allow file_gen_read,file_gen_write
 2: user:Bob deny file_gen_read
 3: everyone allow file_gen_read

File permission state changes

As mentioned before, a file can only be in one permission state at a time. However, that state can be flipped. If a file is in the POSIX state, it can be flipped to an ACL file by modifying the permissions from SMB/NFSv4 clients or by using the chmod command in OneFS. If a file is in the ACL state, it can be flipped back to a POSIX file by using the OneFS CLI command chmod -b XXX <filename>, where ‘XXX’ specifies the new POSIX permissions. For more examples, see File permission state changes.
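
For example, to flip the acl-file.txt file shown earlier back to the POSIX-authoritative state (the 644 mode here is purely illustrative):

# chmod -b 644 acl-file.txt
# ls -le acl-file.txt

After the flip, the ls -le output once again shows the SYNTHETIC ACL flag, just like the posix-file.txt example above.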

Now, you should be able to check a file’s permission on OneFS with the command ls -len filename, and check a directory’s permissions on OneFS with the command ls -lend directory_name.

In my next blog, we will cover what an access token is and how to check a user’s access token!


Author: Lieven Lin

  • PowerScale
  • data management
  • OneFS
  • data reduction

Understanding ‘Total inlined data savings’ When Using ’isi_cstats’

Yunlong Zhang

Thu, 12 May 2022 14:22:45 -0000

|

Read Time: 0 minutes

Recently a customer contacted us because he thought there was an error in the output of the OneFS CLI command ‘isi_cstats’. Starting with OneFS 9.3, the ‘isi_cstats’ command includes the accounted number of inlined files within /ifs. It also contains a statistic called “Total inlined data savings”.

This customer expected the ‘Total inlined data savings’ number to simply be ‘Total inlined files’ multiplied by 8KB, and he thought the reported value was wrong because it does not take the protection level into account.

In OneFS, for the 2d:1n protection level, each file smaller than 128KB is stored as 3X mirrors. Take the screenshot below as an example.

 

If we do some calculation here,

379,948,336 * 8KB = 3,039,586,688KiB = 2898.78GiB

we can see that the 2,899GiB from the command output is calculated as one block per inlined file. So, in our example, the customer would think that ‘Total inlined data savings’ should report 2898.78 GiB * 3, because of the 2d:1n protection level. 

Well, this statistic is not the actual savings; it is really the logical on-disk cost of all inlined files. We can’t accurately report the physical savings because that depends on the non-inlined protection overhead, which can vary. For example, as the worked figures after this list illustrate:

  • If the protection level is 2d:1n, without the data inlining in 8KB inode feature, each of the inlined files would cost 8KB * 3.
  • If the protection level is 3d:1n1d, it will become 8KB * 4.
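
As a rough illustration using the numbers above, and assuming none of the inlined files were compressed, the avoided physical cost scales with the protection overhead:

2,898.78GiB * 3 = 8,696.34GiB    (potential physical blocks avoided at 2d:1n)
2,898.78GiB * 4 = 11,595.12GiB   (potential physical blocks avoided at 3d:1n1d)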

One more thing to consider: if a file is smaller than 8KB after compression, it will be inlined into an inode as well. Therefore, this statistic doesn’t represent logical savings either, because it doesn’t take compression into account. To report the logical savings, the total logical size of all inlined files would need to be tracked.

To avoid any confusion, we plan to rename this statistic to “Total inline data” in the next version of OneFS. We also plan to show more useful information about total logical data of inlined files, in addition to “Total inline data”.

For more information about the reporting of data reduction features, see the white paper   PowerScale OneFS: Data Reduction and Storage Efficiency on the Info Hub.

Author: Yunlong Zhang, Principal Engineering Technologist

  • PowerScale
  • data management
  • OneFS

OneFS Data Reduction and Efficiency Reporting

Nick Trimbee

Wed, 04 May 2022 14:36:26 -0000

|

Read Time: 0 minutes

Among the objectives of OneFS reduction and efficiency reporting is to provide ‘industry standard’ statistics, allowing easier comprehension of cluster efficiency. It’s an ongoing process, and prior to OneFS 9.2 there was limited tracking of certain filesystem statistics – particularly application physical and filesystem logical – which meant that data reduction and storage efficiency ratios had to be estimated. This is no longer the case, and OneFS 9.2 and later provides accurate data reduction and efficiency metrics at a per-file, quota, and cluster-wide granularity.

The following table provides descriptions for the various OneFS reporting metrics, while also attempting to rationalize their naming conventions with other general industry terminology:

OneFS Metric | Also Known As | Description
Protected logical | Application logical | Data size including sparse data, zero block eliminated data, and CloudPools data stubbed to a cloud tier.
Logical data | Effective; Filesystem logical | Data size excluding protection overhead and sparse data, and including data efficiency savings (compression and deduplication).
Zero-removal saved |  | Capacity savings from zero removal.
Dedupe saved |  | Capacity savings from deduplication.
Compression saved |  | Capacity savings from in-line compression.
Preprotected physical | Usable; Application physical | Data size excluding protection overhead and including storage efficiency savings.
Protection overhead |  | Size of erasure coding used to protect data.
Protected physical | Raw; Filesystem physical | Total footprint of data including protection overhead (FEC erasure coding) and excluding data efficiency savings (compression and deduplication).
Dedupe ratio |  | Deduplication ratio. Will be displayed as 1.0:1 if there are no deduplicated blocks on the cluster.
Compression ratio |  | Usable reduction ratio from compression, calculated by dividing ‘logical data’ by ‘preprotected physical’ and expressed as x:1.
Inlined data ratio |  | Efficiency ratio from storing small files’ data within their inodes, thereby not requiring any data or protection blocks for their storage.
Data reduction ratio | Effective to Usable | Usable efficiency ratio from compression and deduplication. Will display the same value as the compression ratio if there is no deduplication on the cluster.
Efficiency ratio | Effective to Raw | Overall raw efficiency ratio, expressed as x:1.

So let’s take these metrics and look at what they represent and how they’re calculated.

  • Application logical, or protected logical, is the application data that can be written to the cluster, irrespective of where it’s stored.
  • Removing the sparse data from application logical results in filesystem logical, also known simply as logical data or effective. This can be data that was always sparse, was zero block eliminated, or data that has been tiered off-cluster by means of CloudPools, and so on.

  (Note that filesystem logical was not accurately tracked in releases prior to OneFS 9.2, so metrics prior to this were somewhat estimated.)

  • Next, data reduction techniques such as compression and deduplication further reduce filesystem logical to application physical, or pre-protected physical. This is the physical size of the application data residing on the filesystem drives, and does not include metadata, protection overhead, or data moved to the cloud.

  • Filesystem physical is application physical with data protection overhead added – including inode, mirroring, and FEC blocks. Filesystem physical is also referred to as protected physical.

  • The data reduction ratio is the amount that’s been reduced from the filesystem logical down to the application physical.

  • Finally, the storage efficiency ratio is the filesystem logical divided by the filesystem physical.

With the enhanced data reduction reporting in OneFS 9.2 and later, the actual statistics themselves are largely the same, just calculated more accurately.

The storage efficiency data was available in releases prior to OneFS 9.2, albeit somewhat estimated, but the data reduction metrics were introduced with OneFS 9.2.

The following tools are available to query these reduction and efficiency metrics at file, quota, and cluster-wide granularity:

Realm | OneFS Command | OneFS Platform API
File | isi get -D |
Quota | isi quota quotas list -v | 12/quota/quotas
Cluster-wide | isi statistics data-reduction | 1/statistics/current?key=cluster.data.reduce.*
Detailed cluster-wide | isi_cstats | 1/statistics/current?key=cluster.cstats.*

Note that the ‘isi_cstats’ CLI command provides some additional, behind-the-scenes details. The interface goes through platform API to fetch these stats.
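
For example, the cluster-wide metrics in the table above can also be retrieved directly from the platform API with any HTTPS client. A minimal sketch using curl, substituting real credentials and the cluster’s address (the endpoint path is the one listed in the table):

# curl -k -u <user>:<password> "https://<cluster_ip>:8080/platform/1/statistics/current?key=cluster.data.reduce.*"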

The ‘isi statistics data-reduction’ CLI command is the most comprehensive of the data reduction reporting CLI utilities. For example:

# isi statistics data-reduction
                      Recent Writes Cluster Data Reduction
                           (5 mins)
--------------------- ------------- ----------------------
Logical data                  6.18M                  6.02T
Zero-removal saved                0                      -
Deduplication saved          56.00k                  3.65T
Compression saved             4.16M                  1.96G
Preprotected physical         1.96M                  2.37T
Protection overhead           5.86M                910.76G
Protected physical            7.82M                  3.40T
Zero removal ratio         1.00 : 1                      -
Deduplication ratio        1.01 : 1               2.54 : 1
Compression ratio          3.12 : 1               1.02 : 1
Data reduction ratio       3.15 : 1               2.54 : 1
Inlined data ratio         1.04 : 1               1.00 : 1
Efficiency ratio           0.79 : 1               1.77 : 1

The ‘recent writes’ data in the first column provides precise statistics for the five-minute period prior to running the command. By contrast, the ‘cluster data reduction’ metrics in the second column are slightly less real-time but reflect the overall data and efficiencies across the cluster. Be aware that, in OneFS 9.1 and earlier, the right-hand column metrics are designated by the ‘Est’ prefix, denoting an estimated value. However, in OneFS 9.2 and later, the ‘logical data’ and ‘preprotected physical’ metrics are tracked and reported accurately, rather than estimated.

The ratio data in each column is calculated from the values above it. For instance, to calculate the data reduction ratio, the ‘logical data’ (effective) is divided by the ‘preprotected physical’ (usable) value. From the output above, this would be:

6.02 / 2.37 = 2.54              Or a Data Reduction ratio of 2.54:1

Similarly, the ‘efficiency ratio’ is calculated by dividing the ‘logical data’ (effective) by the ‘protected physical’ (raw) value. From the output above, this yields:

6.02 / 3.40 = 1.77              Or an Efficiency ratio of 1.77:1

OneFS SmartQuotas reports the capacity saving from in-line data reduction as a storage efficiency ratio. SmartQuotas reports efficiency as a ratio across the desired data set as specified in the quota path field. The efficiency ratio is for the full quota directory and its contents, including any overhead, and reflects the net efficiency of compression and deduplication. On a cluster with licensed and configured SmartQuotas, this efficiency ratio can be easily viewed from the WebUI by navigating to File System > SmartQuotas > Quotas and Usage. In OneFS 9.2 and later, in addition to the storage efficiency ratio, the data reduction ratio is also displayed. 

Similarly, the same data can be accessed from the OneFS command line by using the ‘isi quota quotas list’ CLI command. For example:

# isi quota quotas list
Type    AppliesTo   Path  Snap  Hard   Soft  Adv  Used   Reduction  Efficiency
----------------------------------------------------------------------------
directory DEFAULT    /ifs   No    -     -      -    6.02T 2.54 : 1   1.77 : 1
----------------------------------------------------------------------------

Total: 1

More detail, including both the physical (raw) and logical (effective) data capacities, is also available by using the ‘isi quota quotas view <path> <type>’ CLI command. For example:

# isi quota quotas view /ifs directory
                        Path: /ifs
                        Type: directory
                   Snapshots: No
                    Enforced: No
                   Container: No
                      Linked: No
                       Usage
                           Files: 5759676
         Physical(With Overhead): 6.93T
        FSPhysical(Deduplicated): 3.41T
         FSLogical(W/O Overhead): 6.02T
        AppLogical(ApparentSize): 6.01T
                   ShadowLogical: -
                    PhysicalData: 2.01T
                      Protection: 781.34G
     Reduction(Logical/Data): 2.54 : 1
Efficiency(Logical/Physical): 1.77 : 1

To configure SmartQuotas for in-line data efficiency reporting, create a directory quota at the top-level file system directory of interest, for example /ifs. Creating and configuring a directory quota is a simple procedure and can be performed from the WebUI by navigating to File System > SmartQuotas > Quotas and Usage and selecting Create a Quota. In the Create a quota dialog, set the Quota type to ‘Directory quota’, add the preferred top-level path to report on, select ’Application logical size’ for Quota Accounting, and set the Quota Limits to ‘Track storage without specifying a storage limit’. Finally, click the ‘Create Quota’ button to confirm the configuration and activate the new directory quota.
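
The equivalent tracking-only quota can also be created from the CLI. As a sketch (the ‘--thresholds-on’ option name and its ‘applogicalsize’ value should be confirmed against your OneFS release; with no limits specified, the quota simply tracks usage):

# isi quota quotas create /ifs directory --thresholds-on=applogicalsize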

The efficiency ratio is a single, point-in-time efficiency metric that is calculated per quota directory and includes the sum of in-line compression, zero block removal, in-line dedupe, and SmartDedupe. This is in contrast to a history of stats over time, as reported in the ‘isi statistics data-reduction’ CLI command output, described above. As such, the efficiency ratio for the entire quota directory reflects what is actually there.

Author: Nick Trimbee

  • data storage
  • PowerScale
  • OneFS

OneFS In-line Dedupe

Nick Trimbee

Thu, 12 May 2022 14:48:01 -0000

|

Read Time: 0 minutes

Among the features and functionality delivered in the new OneFS 9.4 release is the promotion of in-line dedupe to enabled by default, further enhancing PowerScale’s dollar-per-TB economics, rack density and value.

Part of the OneFS data reduction suite, in-line dedupe initially debuted in OneFS 8.2.1. However, it had to be enabled manually, so many customers simply didn’t use it. With this enhancement, new clusters running OneFS 9.4 now have in-line dedupe enabled by default.

Cluster configuration | In-line dedupe | In-line compression
New cluster running OneFS 9.4 | Enabled | Enabled
New cluster running OneFS 9.3 or earlier | Disabled | Enabled
Cluster with in-line dedupe enabled that is upgraded to OneFS 9.4 | Enabled | Enabled
Cluster with in-line dedupe disabled that is upgraded to OneFS 9.4 | Disabled | Enabled

That said, any clusters that upgrade to 9.4 will not see any change to their current in-line dedupe configuration during the upgrade. There is also no change to the behavior of in-line compression, which remains enabled by default in all OneFS versions from 8.1.3 onwards.

But before we examine the under-the-hood changes in OneFS 9.4, let’s have a quick dedupe refresher.

Currently, OneFS in-line data reduction, which encompasses compression, dedupe, and zero block removal, is supported on the F900, F600, and F200 all-flash nodes, plus the F810, H5600, H700/7000, and A300/3000 Gen6.x chassis.

Within the OneFS data reduction pipeline, zero block removal is performed first, followed by dedupe, and then compression. This order allows each phase to reduce the scope of work for each subsequent phase.

Unlike SmartDedupe, which performs deduplication once data has been written to disk (post-process), in-line dedupe acts in real time, deduplicating data as it is ingested into the cluster. Storage efficiency is achieved by scanning the data for identical blocks as it is received and then eliminating the duplicates.

When in-line dedupe discovers a duplicate block, it moves a single copy of the block to a special set of files known as shadow stores. These are file-system containers that allow data to be stored in a sharable manner. As such, files stored under OneFS can contain both physical data and pointers, or references, to shared blocks in shadow stores.

Shadow stores are similar to regular files but are hidden from the file system namespace, so they cannot be accessed through a pathname. A shadow store typically grows to a maximum size of 2 GB, which is around 256 K blocks, and each block can be referenced by 32,000 files. If the reference count limit is reached, a new block is allocated, which may or may not be in the same shadow store. Also, shadow stores do not reference other shadow stores. And snapshots of shadow stores are not permitted because the data contained in shadow stores cannot be overwritten.

When a client writes a file to a node pool configured for in-line dedupe on a cluster, the write operation is divided up into whole 8 KB blocks. Each block is hashed, and its cryptographic ‘fingerprint’ is compared against an in-memory index for a match. At this point, one of the following will happen:

  1. If a match is discovered with an existing shadow store block, a byte-by-byte comparison is performed. If the comparison is successful, the data is removed from the current write operation and replaced with a shadow reference.
  2. When a match is found with another LIN, the data is written to a shadow store instead and is replaced with a shadow reference. Next, a work request is generated and queued that includes the location for the new shadow store block, the matching LIN and block, and the data hash. A byte-by-byte data comparison is performed to verify the match and the request is then processed.
  3. If no match is found, the data is written to the file natively and the hash for the block is added to the in-memory index.

For in-line dedupe to perform on a write operation, the following conditions need to be true:

  • In-line dedupe must be globally enabled on the cluster.
  • The current operation is writing data (not a truncate or write zero operation).
  • The no_dedupe flag is not set on the file.
  • The file is not a special file type, such as an alternate data stream (ADS) or an EC (endurant cache) file.
  • Write data includes fully overwritten and aligned blocks.
  • The write is not part of a rehydrate operation.
  • The file has not been packed (containerized) by small file storage efficiency (SFSE).

 OneFS in-line dedupe uses the 128-bit CityHash algorithm, which is both fast and cryptographically strong. This contrasts with the OneFS post-process SmartDedupe, which uses SHA-1 hashing.

Each node in a cluster with in-line dedupe enabled has its own in-memory hash index that it compares block fingerprints against. The index lives in system RAM and is allocated using physically contiguous pages and is accessed directly with physical addresses. This avoids the need to traverse virtual memory mappings and does not incur the cost of translation lookaside buffer (TLB) misses, minimizing dedupe performance impact.

The maximum size of the hash index is governed by a pair of sysctl settings: one caps the size at 16 GB, and the other limits the maximum size to 10% of total RAM. The stricter of these two constraints applies. While these settings are configurable, the recommended best practice is to use the default configuration. Any changes to these settings should only be performed under the supervision of Dell support.

Since in-line dedupe and SmartDedupe use different hashing algorithms, the indexes for each are not shared directly. However, the work performed by each dedupe solution can be used by each other. For instance, if SmartDedupe writes data to a shadow store, when those blocks are read, the read-hashing component of in-line dedupe sees those blocks and indexes them.

When a match is found, in-line dedupe performs a byte-by-byte comparison of each block to be shared to avoid the potential for a hash collision. Data is prefetched before the byte-by-byte check and is compared against the L1 cache buffer directly, avoiding unnecessary data copies and adding minimal overhead. Once the matching blocks are compared and verified as identical, they are shared by writing the matching data to a common shadow store and creating references from the original files to this shadow store.

In-line dedupe samples every whole block that is written and handles each block independently, so it can aggressively locate duplicate blocks. If a contiguous run of matching blocks is detected, in-line dedupe merges the results into regions and processes them efficiently.

In-line dedupe also detects dedupe opportunities from the read path, and blocks are hashed as they are read into L1 cache and inserted into the index. If an existing entry exists for that hash, in-line dedupe knows there is a block-sharing opportunity between the block it just read and the one previously indexed. It combines that information and queues a request to an asynchronous dedupe worker thread. As such, it is possible to deduplicate a data set purely by reading it all. To help mitigate the performance impact, the hashing is performed out-of-band in the prefetch path, rather than in the latency-sensitive read path.

The original in-line dedupe control path had a limitation: it provided no gconfig control settings for the default-disabled in-line dedupe. In OneFS 9.4, two separate features now interact to distinguish between a new cluster and an upgrade to an existing cluster configuration:

For the first feature, upon upgrade to 9.4 on an existing cluster, if there is no in-line dedupe config present, the upgrade explicitly sets it to disabled in gconfig. This has no effect on an existing cluster since it’s already disabled. Similarly, if the upgrading cluster already has an existing in-line dedupe setting in gconfig, OneFS takes no action.

For the other half of the functionality, when booting OneFS 9.4, a node looks in gconfig to see if there’s an in-line dedupe setting. If no config is present, OneFS enables it by default. Therefore, new OneFS 9.4 clusters automatically enable dedupe, and existing clusters retain their legacy setting upon upgrade.

Since the in-line dedupe configuration is binary (either on or off across a cluster), you can easily control it manually through the OneFS command line interface (CLI). The isi dedupe inline settings modify command can enable or disable in-line dedupe at any point: before, during, or after the upgrade.

For example, you can globally disable in-line dedupe and verify it using the following CLI command:

# isi dedupe inline settings view
Mode: enabled
# isi dedupe inline settings modify --mode disabled
# isi dedupe inline settings view
Mode: disabled

Similarly, the following syntax enables in-line dedupe:

# isi dedupe inline settings view
Mode: disabled
# isi dedupe inline settings modify --mode enabled
# isi dedupe inline settings view
Mode: enabled

While there are no visible userspace changes when files are deduplicated, if deduplication has occurred, both the disk usage and the physical blocks metrics reported by the isi get -DD CLI command are reduced. Also, at the bottom of the command’s output, the logical block statistics report the number of shadow blocks. For example:

Metatree logical blocks:    zero=260814 shadow=362 ditto=0 prealloc=0 block=2 compressed=0
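For example, running the command against a deduplicated file and filtering for this line (the file path here is just an illustrative placeholder):

# isi get -DD /ifs/data/testfile | grep -i shadow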

In-line dedupe can also be paused from the CLI:

# isi dedupe inline settings modify --mode paused
# isi dedupe inline settings view
Mode: paused

It’s worth noting that this global setting expresses the desired state, and each node then attempts to enact the new configuration. It cannot guarantee the change, however, because not all node types support in-line dedupe. For example, the following output is from a heterogeneous cluster with an F200 three-node pool that supports in-line dedupe and an H400 four-node pool that does not.

Here, we can see that in-line dedupe is globally enabled on the cluster:

# isi dedupe inline settings view
Mode: enabled

However, you can use the isi_for_array isi_inline_dedupe_status command to display the actual setting and state of each node:

# isi dedupe inline settings view
Mode: enabled
# isi_for_array -s isi_inline_dedupe_status
1: OK Node setting enabled is correct
2: OK Node setting enabled is correct
3: OK Node setting enabled is correct
4: OK Node does not support inline dedupe and current is disabled
5: OK Node does not support inline dedupe and current is disabled
6: OK Node does not support inline dedupe and current is disabled
7: OK Node does not support inline dedupe and current is disabled

Any changes to the dedupe configuration are also logged to /var/log/messages and can be found by grepping for inline_dedupe.
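For example, a quick way to check a node for recent configuration changes (the exact log line format varies by release):

# grep -i inline_dedupe /var/log/messages

To check every node at once, the same search can be wrapped in isi_for_array:

# isi_for_array -s 'grep -i inline_dedupe /var/log/messages'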

In a nutshell, in-line compression has always been enabled by default since its introduction in OneFS 8.1.3. For new clusters running 9.4 and above, in-line dedupe is on by default. For clusters running 9.3 and earlier, in-line dedupe remains disabled by default. And existing clusters that upgrade to 9.4 will not see any change to their current in-line dedupe config during upgrade.

And here’s the OneFS in-line data reduction platform support matrix for good measure:

Read Full Blog
  • PowerScale
  • OneFS
  • performance metrics

PowerScale Update: QLC Support, Incredible Performance and TCO

David Noy

Mon, 02 May 2022 15:50:26 -0000

|

Read Time: 0 minutes

Dell PowerScale is known for its exceptional feature set, which offers scalability, flexibility, and simplicity. Our customers frequently start with one workload, such as file share consolidation or mixed media storage, and then scale out OneFS to support all types of workloads, leveraging the simple, cloud-like single-pool storage architecture.

To provide our customers with even more flexibility and choice, this summer we will introduce new quad-level cell (QLC) flash memory drives in 15TB and 30TB capacities for our PowerScale F900 and F600 all-flash models. We are also seeing streaming read performance improvements of up to 25% or more, depending on workload, with all-flash nodes in the subsequent PowerScale OneFS release.1

Delivering latest-generation, Gen 2 QLC Support

With the many important and needed improvements in reliability and performance delivered by Gen 2 QLC technology, we’ve reached the optimal point in the development of QLC technology to deliver QLC flash drives for the PowerScale F900 and F600 all-flash models. These new QLC drives, supported by the currently shipping OneFS 9.4 release, will offer our customers incredible economics for fast NAS workloads that need both performance and capacity – such as financial modeling, media and entertainment, artificial intelligence (AI), machine learning (ML), and deep learning (DL). With 30TB QLC drive support, we are able to increase the raw density per node to 720TB for PowerScale F900 and 240TB for PowerScale F600 – and lower the cost of flash for our customers.  

OneFS.next Performance Boost 

Another emerging PowerScale feature of interest, targeted for an upcoming OneFS software release, is a major performance enhancement that will unlock streaming read throughput gains of up to 25% or more, depending on workload, for our flagship all-flash PowerScale F-series NVMe platforms.1 This significant performance boost will be of particular benefit to customers with high throughput, streaming read-heavy workloads, such as media and entertainment hi-res playout, ADAS for the automotive industry, and financial services high frequency, complex trading queries. Pairing nicely with the aforementioned performance boost is PowerScale’s support for NFS over RDMA (NFSoRDMA), which can further accelerate high throughput performance, especially for single connection and read intensive workloads such as machine learning – while also dramatically reducing both cluster and client CPU utilization. 

All Together Now

Further, these drives become part of the overall life cycle management system within OneFS. This gives PowerScale a major TCO advantage over the competition. In harmony with this forthcoming streaming performance enhancement, OneFS’s non-disruptive upgrade framework will enable existing PowerScale environments to seamlessly and non-disruptively up-rev their cluster software and enjoy this major performance boost on PowerScale F900 and F600 pools – free from any hardware addition, modification, reconfiguration, intervention, or downtime. 

These are just a few of the exciting things we have in the works for PowerScale, the world’s most flexible scale-out NAS solution.2

If you are attending Dell Technologies World, check out these sessions for more about our PowerScale innovations.  

  • Discover the latest Enhancements to PowerScale for Unstructured Storage Solutions
    • May 3 at 12 p.m. in Lando 4205
  • Improve Threat Detection, Isolation and Data Recovery with PowerScale Cyber Protection
    • May 3 or May 4 at 3 p.m. in Lando 4205
  • Top 10 Tips to Get More out of Your PowerScale Investment
    • May 3 at 12 p.m. in Palazzo I
  • Ask the Experts: Harness the Power of Your Unstructured Data
    • May 4 at 3 p.m. in Zeno 4601

_________________ 

1 Based on Dell internal testing, April 2022. Actual results will vary.

2 Based on internal Dell analysis of publicly available information, August 2021.

Author: David Noy, Vice President of Product Management, Unstructured Data Solutions and Data Protection Solutions, Dell Technologies



Read Full Blog
  • PowerScale
  • OneFS

Announcing PowerScale OneFS 9.4!

Nick Trimbee

Fri, 28 Apr 2023 19:52:18 -0000

|

Read Time: 0 minutes

Arriving in time for Dell Technologies World 2022, the new PowerScale OneFS 9.4 release shipped on Monday 4th April 2022. 

OneFS 9.4 brings with it a wide array of new features and functionality, including:

  • SmartSync Data Mover: Introduction of a new OneFS SmartSync data mover, allowing flexible data movement and copying, incremental resyncs, push and pull data transfer, and one-time file to object copy. Complementary to SyncIQ, SmartSync provides an additional option for data transfer, including to object storage targets such as ECS, AWS, and Azure.
  • IB to Ethernet Backend Migration: Non-disruptive rolling InfiniBand to Ethernet back-end network migration for legacy Gen6 clusters.
  • Secure Boot: Secure boot support is extended to include the F900, F600, F200, H700/7000, and A700/7000 platforms.
  • Smarter SmartConnect Diagnostics: Identifies non-resolvable nodes and provides their detailed status, allowing the root cause to be easily pinpointed.
  • In-line Dedupe: In-line deduplication will be enabled by default on new OneFS 9.4 clusters. Clusters upgraded to OneFS 9.4 will maintain their current dedupe configuration.
  • Healthcheck Auto-updates: Automatic monitoring, download, and installation of new healthcheck packages as they are released.
  • CloudIQ Protocol Statistics: New protocol statistics ‘count’ keys are added, allowing performance to be measured over a specified time window and providing point-in-time protocol information.
  • SRS Alerts and CELOG Event Limiting: Prevents CELOG from sending unnecessary event types to Dell SRS and restricts CELOG alerts from customer-created channels.
  • CloudPools Statistics: Automated statistics gathering on CloudPools accounts and policies, providing insights for planning and troubleshooting CloudPools-related activities.

We’ll be taking a deeper look at some of these new features in blog articles over the course of the next few weeks. 

Meanwhile, the new OneFS 9.4 code is available for download on the Dell Online Support site, in both upgrade and reimage file formats. 

Enjoy your OneFS 9.4 experience!

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

OneFS Caching Hierarchy

Nick Trimbee

Tue, 22 Mar 2022 20:05:56 -0000

|

Read Time: 0 minutes

Caching occurs in OneFS at multiple levels, and for a variety of types of data. For this discussion we’ll concentrate on the caching of file system structures in main memory and on SSD.

OneFS’ caching infrastructure design is based on aggregating each individual node’s cache into one cluster wide, globally accessible pool of memory. This is done by using an efficient messaging system, which allows all the nodes’ memory caches to be available to each and every node in the cluster.

For remote memory access, OneFS uses the Sockets Direct Protocol (SDP) over an Ethernet or Infiniband (IB) backend interconnect on the cluster. SDP provides an efficient, socket-like interface between nodes which, by using a switched star topology, ensures that remote memory addresses are only ever one hop away. While not as fast as local memory, remote memory access is still very fast due to the low latency of the backend network.

OneFS uses up to three levels of read cache, plus an NVRAM-backed write cache, or write coalescer. The first two types of read cache, level 1 (L1) and level 2 (L2), are memory (RAM) based, and analogous to the cache used in CPUs. These two cache layers are present in all PowerScale storage nodes. An optional third tier of read cache, called SmartFlash, or Level 3 cache (L3), is also configurable on nodes that contain solid state drives (SSDs). L3 cache is an eviction cache that is populated by L2 cache blocks as they are aged out from memory.

The OneFS caching subsystem is coherent across the cluster. This means that if the same content exists in the private caches of multiple nodes, this cached data is consistent across all instances. For example, consider the following scenario:

  1. Node 2 and Node 4 each have a copy of data located at an address in shared cache.
  2. Node 4, in response to a write request, invalidates node 2’s copy.
  3. Node 4 then updates the value.
  4. Node 2 must re-read the data from shared cache to get the updated value.

OneFS uses the MESI Protocol to maintain cache coherency, implementing an “invalidate-on-write” policy to ensure that all data is consistent across the entire shared cache. The various states that in-cache data can take are:

M – Modified: The data exists only in local cache, and has been changed from the value in shared cache. Modified data is referred to as ‘dirty’.

E – Exclusive: The data exists only in local cache, but matches what is in shared cache. This data is referred to as ‘clean’.

S – Shared: The data in local cache may also be in other local caches in the cluster.

I – Invalid: A lock (exclusive or shared) has been lost on the data.

L1 cache, or front-end cache, is memory that is nearest to the protocol layers (such as NFS, SMB, and so on) used by clients, or initiators, connected to that node. The main task of L1 is to prefetch data from remote nodes. Data is pre-fetched per file, and this is optimized to reduce the latency associated with the nodes’ IB back-end network. Because the IB interconnect latency is relatively small, the size of L1 cache, and the typical amount of data stored per request, is less than L2 cache.

L1 is also known as remote cache because it contains data retrieved from other nodes in the cluster. It is coherent across the cluster, but is used only by the node on which it resides, and is not accessible by other nodes. Data in L1 cache on storage nodes is aggressively discarded after it is used. L1 cache uses file-based addressing, in which data is accessed by means of an offset into a file object. The L1 cache refers to memory on the same node as the initiator. It is only accessible to the local node, and typically the cache is not the primary copy of the data. This is analogous to the L1 cache on a CPU core, which may be invalidated as other cores write to main memory. L1 cache coherency is managed using a MESI-like protocol using distributed locks, as described above.

L2, or back-end cache, refers to local memory on the node on which a particular block of data is stored. L2 reduces the latency of a read operation by not requiring a seek directly from the disk drives. As such, the amount of data prefetched into L2 cache for use by remote nodes is much greater than that in L1 cache.

L2 is also known as local cache because it contains data retrieved from disk drives located on that node and then made available for requests from remote nodes. Data in L2 cache is evicted according to a Least Recently Used (LRU) algorithm. Data in L2 cache is addressed by the local node using an offset into a disk drive which is local to that node. Because the node knows where the data requested by the remote nodes is located on disk, this is a very fast way of retrieving data destined for remote nodes. A remote node accesses L2 cache by doing a lookup of the block address for a particular file object. As described above, there is no MESI invalidation necessary here and the cache is updated automatically during writes and kept coherent by the transaction system and NVRAM.

L3 cache is a subsystem that caches evicted L2 blocks on a node. Unlike L1 and L2, not all nodes or clusters have an L3 cache, because it requires solid state drives (SSDs) to be present and exclusively reserved and configured for caching use. L3 serves as a large, cost-effective way of extending a node’s read cache from gigabytes to terabytes. This allows clients to retain a larger working set of data in cache, before being forced to retrieve data from higher latency spinning disk. The L3 cache is populated with “interesting” L2 blocks dropped from memory by L2’s least recently used cache eviction algorithm. Unlike RAM based caches, because L3 is based on persistent flash storage, once the cache is populated, or warmed, it’s highly durable and persists across node reboots, and so on. L3 uses a custom log-based file system with an index of cached blocks. The SSDs provide very good random read access characteristics, such that a hit in L3 cache is not that much slower than a hit in L2.

To use multiple SSDs for cache effectively and automatically, L3 uses a consistent hashing approach to associate an L2 block address with one L3 SSD. In the event of an L3 drive failure, a portion of the cache will obviously disappear, but the remaining cache entries on other drives will still be valid. Before a new L3 drive can be added to the hash, some cache entries must be invalidated.

OneFS also uses a dedicated inode cache in which recently requested inodes are kept. The inode cache frequently has a large impact on performance, because clients often cache data, and many network I/O activities are primarily requests for file attributes and metadata, which can be quickly returned from the cached inode.

OneFS provides tools to accurately assess the performance of the various levels of cache at a point in time. These cache statistics can be viewed from the OneFS CLI using the isi_cache_stats command. Statistics for L1, L2, and L3 cache are displayed for both data and metadata. For example:

# isi_cache_stats
Totals
l1_data: a 224G 100% r 226G 100% p 318M 77%, l1_encoded: a 0.0B 0% r 0.0B 0% p 0.0B 0%, l1_meta: r 4.5T 99% p 152K 48%,
l2_data: r 1.2G 95% p 115M 79%, l2_meta: r 27G 72% p 28M 3%,
l3_data: r 0.0B 0% p 0.0B 0%, l3_meta: r 8G 99% p 0.0B 0%

For more detailed and formatted output, use the verbose ‘isi_cache_stats -v’ option.

It’s worth noting that for L3 cache, the prefetch statistics will always read zero, since it’s a pure eviction cache and does not use data or metadata prefetch.

Due to balanced data distribution, automatic rebalancing, and distributed processing, OneFS is able to leverage additional CPUs, network ports, and memory as the system grows. This also allows the caching subsystem (and, by virtue, throughput and IOPS) to scale linearly with the cluster size.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • NFS

OneFS Endurant Cache

Nick Trimbee

Tue, 22 Mar 2022 18:27:04 -0000

|

Read Time: 0 minutes

My earlier blog post on multi-threaded I/O generated several questions on synchronous writes in OneFS. So, this seemed like a useful topic to explore in a bit more detail.

OneFS natively provides a caching mechanism for synchronous writes – or writes that require a stable write acknowledgement to be returned to a client. This functionality is known as the Endurant Cache, or EC.

The EC operates in conjunction with the OneFS write cache, or coalescer, to ingest, protect, and aggregate small synchronous NFS writes. The incoming write blocks are staged to NVRAM, ensuring the integrity of the write, even during the unlikely event of a node’s power loss.  Furthermore, EC also creates multiple mirrored copies of the data, further guaranteeing protection from single node and, if desired, multiple node failures.

EC improves the latency associated with synchronous writes by reducing the time to acknowledgement back to the client. This process removes the Read-Modify-Write (R-M-W) operations from the acknowledgement latency path, while also leveraging the coalescer to optimize writes to disk. EC is also tightly coupled with OneFS’ multi-threaded I/O (Multi-writer) process, to support concurrent writes from multiple client writer threads to the same file. And the design of EC ensures that the cached writes do not impact snapshot performance.

The endurant cache uses write logging to combine and protect small writes at random offsets into 8KB linear writes. To achieve this, the writes go to special mirrored files, or ‘Logstores’. The response to a stable write request can be sent once the data is committed to the logstore. Logstores can be written to by several threads from the same node and are highly optimized to enable low-latency concurrent writes.

Note that if a write uses the EC, the coalescer must also be used. If the coalescer is disabled on a file, but EC is enabled, the coalescer will still be active with all data backed by the EC.

So what exactly does an endurant cache write sequence look like?

Say an NFS client wishes to write a file to a PowerScale cluster over NFS with the O_SYNC flag set, requiring a confirmed or synchronous write acknowledgement. Here is the sequence of events that occurs to facilitate a stable write.

1. A client, connected to node 3, begins the write process sending protocol level blocks. 4K is the optimal block size for the endurant cache.

 

2. The NFS client’s writes are temporarily stored in the write coalescer portion of node 3’s RAM. The write coalescer aggregates uncommitted blocks so that OneFS can, ideally, write out full protection groups where possible, reducing latency over protocols that allow “unstable” writes. Writing to RAM has far less latency than writing directly to disk.

3. Once in the write coalescer, the endurant cache log-writer process writes mirrored copies of the data blocks in parallel to the EC Log Files.

The protection level of the mirrored EC log files is the same as that of the data being written by the NFS client.

4. When the data copies are received into the EC Log Files, a stable write exists and a write acknowledgement (ACK) is returned to the NFS client confirming the stable write has occurred. The client assumes the write is completed and can close the write session.

5. The write coalescer then processes the file just like a non-EC write at this point. The write coalescer fills and is routinely flushed as required as an asynchronous write to the block allocation manager (BAM) and the BAM safe write (BSW) path processes.

6. The file is split into 128K data stripe units (DSUs), parity protection (FEC) is calculated, and FEC stripe units (FSUs) are created.

7. The layout and write plan is then determined, and the stripe units are written to their corresponding nodes’ L2 Cache and NVRAM. The EC logfiles are cleared from NVRAM at this point. OneFS uses a Fast Invalid Path process to de-allocate the EC Log Files from NVRAM.

8. Stripe Units are then flushed to physical disk.

9. Once written to physical disk, the data stripe unit (DSU) and FEC stripe unit (FSU) copies created during the write are cleared from NVRAM but remain in L2 cache until flushed to make room for more recently accessed data.

As far as protection goes, the number of logfile mirrors created by EC is always one more than the on-disk protection level of the file. For example:

File Protection Level     Number of EC Mirrored Copies
+1n                       2
2x                        3
+2n                       3
+2d:1n                    3
+3n                       4
+3d:1n                    4
+4n                       5

The EC mirrors are only used if the initiator node is lost. In the unlikely event that this occurs, the participant nodes replay their EC journals and complete the writes.

If the write is an EC candidate, the data remains in the coalescer, an EC write is constructed, and the appropriate coalescer region is marked as EC. The EC write is a write into a logstore (hidden mirrored file) and the data is placed into the journal.

Assuming the journal is sufficiently empty, the write is held there (cached) and only flushed to disk when the journal is full, thereby saving additional disk activity.

An optimal workload for EC involves small-block synchronous, sequential writes – something like an audit or redo log, for example. In that case, the coalescer will accumulate a full protection group’s worth of data and be able to perform an efficient FEC write.

The happy medium is a synchronous small block type load, particularly where the I/O rate is low and the client is latency-sensitive. In this case, the latency will be reduced and, if the I/O rate is low enough, it won’t create serious pressure.

The undesirable scenario is when the cluster is already spindle-bound and the workload is such that it generates a lot of journal pressure. In this case, EC is just going to aggravate things.

So how exactly do you configure the endurant cache?

The endurant cache is on by default, but if it has been disabled, setting the efs.bam.ec.mode sysctl to ‘1’ will re-enable it:

# isi_sysctl_cluster efs.bam.ec.mode=1

EC can also be enabled and disabled per directory:

# isi set -c [on|off|endurant_all|coal_only] <directory_name>

To enable the coalescer but switch off EC, run:

# isi set -c coal_only <directory_name>

And to disable the endurant cache completely:

# isi_for_array -s isi_sysctl_cluster efs.bam.ec.mode=0

A return value of zero on each node from the following command will verify that EC is disabled across the cluster:

# isi_for_array -s sysctl efs.bam.ec.stats.write_blocks
efs.bam.ec.stats.write_blocks: 0

If the output to this command is incrementing, EC is delivering stable writes.

Be aware that if the Endurant Cache is disabled on a cluster, the sysctl efs.bam.ec.stats.write_blocks output on each node will not return to zero, because this sysctl is a counter, not a rate. These counters won’t reset until the node is rebooted.

As mentioned previously, EC applies to stable writes, namely:

  • Writes with O_SYNC and/or O_DIRECT flags set
  • Files on synchronous NFS mounts
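For example, one simple way to generate EC-eligible traffic from a Linux NFS client for testing is to force small synchronous writes; the cluster name, export, and file path below are placeholders:

# mount -t nfs -o vers=3 cluster.example.com:/ifs/data /mnt/test
# dd if=/dev/zero of=/mnt/test/ec_testfile bs=4k count=1000 oflag=sync

While this runs, the efs.bam.ec.stats.write_blocks counter described above should increment on the cluster.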

When it comes to analyzing any performance issues involving EC workloads, consider the following:

  • What changed with the workload?
  • If upgrading OneFS, did the prior version also have EC enabled? 

If the workload has moved to new cluster hardware:

  • Does the performance issue occur during periods of high CPU utilization?
  • Which part of the workload is creating a deluge of stable writes?
  • Was there a large change in spindle or node count?
  • Has the OneFS protection level changed?
  • Is the SSD strategy the same?

Disabling EC is typically done cluster-wide and this can adversely impact certain workflow elements. If the EC load is localized to a subset of the files being written, an alternative way to reduce the EC heat might be to disable the coalescer buffers for some particular target directories, which would be a more targeted adjustment. This can be configured using the isi set -c off command.

One of the more likely causes of performance degradation is from applications aggressively flushing over-writes and, as a result, generating a flurry of ‘commit’ operations. This can generate heavy read/modify/write (r-m-w) cycles, inflating the average disk queue depth, and resulting in significantly slower random reads. The isi statistics protocol CLI command output will indicate whether the ‘commit’ rate is high.

It’s worth noting that synchronous writes do not require using the NFS ‘sync’ mount option. Any programmer who is concerned with write persistence can simply specify an O_FSYNC or O_DIRECT flag on the open() operation to force synchronous write semantics for that file handle. With Linux, writes using O_DIRECT will be separately accounted for in the Linux ‘mountstats’ output. Although it’s almost exclusively associated with NFS, the EC code is actually protocol-agnostic. If writes are synchronous (write-through) and are either misaligned or smaller than 8k, they have the potential to trigger EC, regardless of the protocol.

The endurant cache can provide a significant latency benefit for small (such as 4K), random synchronous writes – albeit at a cost of some additional work for the system.

However, it’s worth bearing the following caveats in mind:

  • EC is not intended for more general purpose I/O.
  • There is a finite amount of EC available. As load increases, EC can potentially ‘fall behind’ and end up being a bottleneck.
  • Endurant Cache does not improve read performance, since it’s strictly part of the write process.
  • EC will not increase performance of asynchronous writes – only synchronous writes.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

OneFS Writes

Nick Trimbee

Mon, 14 Mar 2022 23:13:12 -0000

|

Read Time: 0 minutes

OneFS runs equally across all the nodes in a cluster such that no one node controls the cluster and all nodes are true peers. Looking from a high-level at the components within each node, the I/O stack is split into a top layer, or initiator, and a bottom layer, or participant. This division is used as a logical model for the analysis of OneFS’ read and write paths.

At a physical-level, CPUs and memory cache in the nodes are simultaneously handling initiator and participant tasks for I/O taking place throughout the cluster. There are caches and a distributed lock manager that are excluded from the diagram below for simplicity’s sake.

 

When a client connects to a node to write a file, it is connecting to the top half or initiator of that node. Files are broken into smaller logical chunks called stripes before being written to the bottom half or participant of a node (disk). Failure-safe buffering using a write coalescer is used to ensure that writes are efficient and read-modify-write operations are avoided. The size of each file chunk is referred to as the stripe unit size. OneFS stripes data across all nodes and protects the files, directories, and associated metadata via software erasure-code or mirroring.

OneFS determines the appropriate data layout to optimize for storage efficiency and performance. When a client connects to a node, that node’s initiator acts as the ‘captain’ for the write data layout of that file. Data, erasure code (FEC) protection, metadata, and inodes are all distributed on multiple nodes, and spread across multiple drives within nodes. The following figure shows a file write occurring across all nodes in a three node cluster.

OneFS uses a cluster’s Ethernet or Infiniband back-end network to allocate and automatically stripe data across all nodes. As data is written, it’s also protected at the specified level.

When writes take place, OneFS divides data out into atomic units called protection groups. Redundancy is built into protection groups, such that if every protection group is safe, then the entire file is safe. For files protected by FEC, a protection group consists of a series of data blocks as well as a set of parity blocks for reconstruction of the data blocks in the event of drive or node failure. For mirrored files, a protection group consists of all of the mirrors of a set of blocks.

OneFS is capable of switching the type of protection group used in a file dynamically, as it is writing. This allows the cluster to continue without blocking in situations when temporary node failure prevents the desired level of parity protection from being applied. In this case, mirroring can be used temporarily to allow writes to continue. When nodes are restored to the cluster, these mirrored protection groups are automatically converted back to FEC protected.

During a write, data is broken into stripe units and these are spread across multiple nodes as a protection group. As data is being laid out across the cluster, erasure codes or mirrors, as required, are distributed within each protection group to ensure that files are protected at all times.

One of the key functions of the OneFS AutoBalance job is to reallocate and balance data and, where possible, make storage space more usable and efficient. In most cases, the stripe width of larger files can be increased to take advantage of new free space, as nodes are added, and to make the on-disk layout more efficient.

The initiator top half of the ‘captain’ node uses a modified two-phase commit (2PC) transaction to safely distribute writes across the cluster, as follows:

Every node that owns blocks in a particular write operation is involved in a two-phase commit mechanism, which relies on NVRAM for journaling all the transactions that are occurring across every node in the storage cluster. Using multiple nodes’ NVRAM in parallel allows for high-throughput writes, while maintaining data safety against all manner of failure conditions, including power failures. If a node should fail mid-transaction, the transaction is restarted instantly without that node involved. When the node returns, it simply replays its journal from NVRAM.

In a write operation, the initiator also orchestrates the layout of data and metadata, the creation of erasure codes, and lock management and permissions control. OneFS can also optimize layout decisions to better suit the workflow. These access patterns, which can be configured at a per-file or directory level, include:

Concurrency: Optimizes for current load on the cluster, featuring many simultaneous clients.

Streaming: Optimizes for high-speed streaming of a single file, for example to enable very fast reading with a single client.

Random: Optimizes for unpredictable access to the file, by adjusting striping and disabling the use of prefetch.
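These access pattern settings can also be applied per file or directory from the CLI with the isi set command. The following is just a sketch, with a placeholder path; confirm the exact flags for your release with isi set --help:

# isi set -R -l streaming /ifs/data/media

The -R flag applies the change recursively, and running isi get against the same path reports the layout setting currently in effect.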

 

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

OneFS File Locking and Concurrent Access

Nick Trimbee

Mon, 14 Mar 2022 23:03:37 -0000

|

Read Time: 0 minutes

There has been a bevy of recent questions around how OneFS allows various clients attached to different nodes of a cluster to simultaneously read from and write to the same file. So it seemed like a good time for a quick refresher on some of the concepts and mechanics behind OneFS concurrency and distributed locking.

 

File locking is the mechanism that allows multiple users or processes to access data concurrently and safely. For reading data, this is a fairly straightforward process involving shared locks. With writes, however, things become more complex and require exclusive locking, because data must be kept consistent.
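For instance, the difference between the two can be seen from an NFS client using the standard Linux flock utility to take advisory locks; the mount point and file below are placeholders:

# flock -s /mnt/cluster/data/report.csv -c 'cat /mnt/cluster/data/report.csv'
# flock -x /mnt/cluster/data/report.csv -c 'echo "new row" >> /mnt/cluster/data/report.csv'

The first command holds a shared (read) lock for the duration of the read, while the second holds an exclusive (write) lock while appending, so concurrent readers can coexist but writers cannot.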

OneFS has a fully distributed lock manager that marshals locks on data across all the nodes in a storage cluster. This locking manager is highly extensible and allows for multiple lock types to support both file system locks, as well as cluster-coherent protocol-level locks, such as SMB share mode locks or NFS advisory-mode locks. OneFS supports delegated locks such as SMB oplocks and NFSv4 delegations.

Every node in a cluster can act as coordinator for locking resources, and a coordinator is assigned to lockable resources based upon a hashing algorithm. This selection algorithm is designed so that the coordinator almost always ends up on a different node than the initiator of the request. When a lock is requested for a file, it can either be a shared lock or an exclusive lock. A shared lock is primarily used for reads and allows multiple users to share the lock simultaneously. An exclusive lock, on the other hand, allows only one user access to the resource at any given moment, and is typically used for writes. Exclusive lock types include:

Mark Lock: An exclusive lock resource used to synchronize the marking and sweeping processes for the Collect job engine job.

Snapshot Lock: As the name suggests, the exclusive snapshot lock that synchronizes the process of creating and deleting snapshots.

Write Lock: An exclusive lock that’s used to quiesce writes for particular operations, including snapshot creates, non-empty directory renames, and marks.

The OneFS locking infrastructure has its own terminology, and includes the following definitions:

Domain: Refers to the specific lock attributes (recursion, deadlock detection, memory use limits, and so on) and context for a particular lock application. There is one definition of owner, resource, and lock types, and only locks within a particular domain might conflict.

Lock Type: Determines the contention among lockers. A shared or read lock does not contend with other types of shared or read locks, while an exclusive or write lock contends with all other types. Lock types include:

  • Advisory
  • Anti-virus
  • Data
  • Delete
  • LIN
  • Mark
  • Oplocks
  • Quota
  • Read
  • Share Mode
  • SMB byte-range
  • Snapshot
  • Write

Locker: Identifies the entity that acquires a lock.

Owner: A locker that has successfully acquired a particular lock. A locker may own multiple locks of the same or different type as a result of recursive locking.

Resource: Identifies a particular lock. Lock acquisition only contends on the same resource. The resource ID is typically a LIN to associate locks with files.

Waiter: Has requested a lock but has not yet been granted or acquired it.

Here’s an example of how threads from different nodes could request a lock from the coordinator:

  1. Node 2 is selected as the lock coordinator of these resources.
  2. Thread 1 from Node 4 and thread 2 from Node 3 request a shared lock on a file from Node 2 at the same time.
  3. Node 2 checks if an exclusive lock exists for the requested file.
  4. If no exclusive locks exist, Node 2 grants thread 1 from Node 4 and thread 2 from Node 3 shared locks on the requested file.
  5. Node 3 and Node 4 are now performing a read on the requested file.
  6. Thread 3 from Node 1 requests an exclusive lock for the same file as being read by Node 3 and Node 4.
  7. Node 2 checks with Node 3 and Node 4 if the shared locks can be reclaimed.
  8. Node 3 and Node 4 are still reading so Node 2 asks thread 3 from Node 1 to wait for a brief instant.
  9. Thread 3 from Node 1 blocks until the exclusive lock is granted by Node 2 and then completes the write operation.

Author: Nick Trimbee


Read Full Blog

OneFS Time Synchronization and NTP

Nick Trimbee

Fri, 11 Mar 2022 16:08:05 -0000

|

Read Time: 0 minutes

OneFS provides a network time protocol (NTP) service to ensure that all nodes in a cluster can easily be synchronized to the same time source. This service automatically adjusts a cluster’s date and time settings to that of one or more external NTP servers.

You can perform NTP configuration on a cluster using the isi ntp command line (CLI) utility, rather than modifying the nodes’ /etc/ntp.conf files manually. The syntax for this command is divided into two parts: servers and settings. For example:

# isi ntp settings
Description:
    View and modify cluster NTP configuration.
Required Privileges:
    ISI_PRIV_NTP
Usage:
    isi ntp settings <action>
        [--timeout <integer>]
        [{--help | -h}]
Actions:
    modify    Modify cluster NTP configuration.
    view      View cluster NTP configuration.
Options:
  Display Options:
    --timeout <integer>
        Number of seconds for a command timeout (specified as 'isi --timeout NNN
        <command>').
    --help | -h
        Display help for this command.
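The ‘servers’ half of the command manages the list of external NTP servers the cluster references. For example (a minimal sketch; the available options vary by release):

# isi ntp servers create time.isilon.com
# isi ntp servers list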

There is also an isi_ntp_config CLI command available in OneFS that provides a richer configuration set and combines the server and settings functionality:

Usage: isi_ntp_config COMMAND [ARGUMENTS ...]
Commands:
    help
      Print this help and exit.
    list
      List all configured info.
    add server SERVER [OPTION]
      Add SERVER to ntp.conf.  If ntp.conf is already
      configured for SERVER, the configuration will be replaced.
      You can specify any server option. See NTP.CONF(5)
 
    delete server SERVER
      Remove server configuration for SERVER if it exists.
   
    add exclude NODE [NODE...]
      Add NODE (or space separated nodes) to NTP excluded entry.
      Excluded nodes are not used for NTP communication with external
      NTP servers.
 
    delete exclude NODE [NODE...]
      Delete NODE (or space separated Nodes) from NTP excluded entry.
 
    keyfile KEYFILE_PATH
      Specify keyfile path for NTP auth. Specify "" to clear value.
      KEYFILE_PATH has to be a path under /ifs.
 
    chimers [COUNT | "default"]
      Display or modify the number of chimers NTP uses.
      Specify "default" to clear the value.

By default, if the cluster has more than three nodes, three of the nodes are selected as chimers. Chimers are nodes which can contact the external NTP servers. If the cluster consists of three nodes or fewer, only one node is selected as a chimer. If no external NTP server is set, the chimers use the local clock instead. The other non-chimer nodes use the chimer nodes as their NTP servers. The chimer nodes are selected by the lowest node number which is not excluded from chimer duty.
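For example, the chimers subcommand of isi_ntp_config displays or changes the number of chimer nodes; the count shown here is purely illustrative:

# isi_ntp_config chimers
# isi_ntp_config chimers 2
# isi_ntp_config chimers default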

If a node is configured as a chimer, its /etc/ntp.conf entry will resemble:

# This node is one of the 3 chimer nodes that can contact external NTP
# servers. The non-chimer nodes will use this node as well as the other
# chimers as their NTP servers.
server time.isilon.com
# The other chimer nodes on this cluster:
server 192.168.10.150 iburst
server 192.168.10.151 iburst
# If none or bad connection to external servers this node may become
# the time server for this cluster. The system clock will be a time
# source and run at a high stratum

Besides managing NTP servers and authentication, you can exclude individual nodes from communicating with external NTP servers.
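For example, to prevent nodes 4 and 5 from communicating with the external time sources, and then to remove that exclusion again (the node numbers are illustrative):

# isi_ntp_config add exclude 4 5
# isi_ntp_config delete exclude 4 5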

The local clock of the node is set as an NTP server at a high stratum level. In NTP, a server with a lower stratum number is preferred, so if an external NTP server is configured, the system prefers it over the local clock. The stratum level for a chimer is determined by its chimer number: the first chimer is set to stratum 9, the second to stratum 11, and subsequent chimers increment the stratum number by 2. This ensures that the non-chimer nodes prefer to get the time from the first chimer, if available.

For a non-chimer node, its /etc/ntp.conf entry will resemble:

# This node is _not_ one of the 3 chimer nodes that can contact external
# NTP servers. These are the cluster's chimer nodes:
server 192.168.10.149 iburst true
server 192.168.10.150 iburst true
server 192.168.10.151 iburst true

When configuring NTP on a cluster, you can specify more than one NTP server to synchronize the system time from, allowing for full redundancy of sync targets. The cluster periodically contacts the server or servers and adjusts the time, date, or both as necessary, based on the information it receives.

You can use the isi_ntp_config CLI command to configure which NTP servers a cluster will reference. For example, the following syntax adds the server time.isilon.com:

# isi_ntp_config add server time.isilon.com

Alternatively, you can manage the NTP configuration from the WebUI by going to Cluster Management > General Settings > NTP.

NTP also provides basic authentication-based security using symmetrical keys, if preferred.
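For example, pointing the cluster at a keyfile stored under /ifs enables key-based authentication; the path below is a placeholder:

# isi_ntp_config keyfile /ifs/data/ntp/ntp.keys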

If no NTP servers are available, Windows Active Directory (AD) can synchronize domain members to a primary clock running on the domain controller or controllers. If there are no external NTP servers configured and the cluster is joined to AD, OneFS uses the Windows domain controller as the NTP time server. If the cluster and domain time become out of sync by more than four minutes, OneFS generates an event notification.

Be aware that if the cluster and Active Directory drift out of time sync by more than five minutes, AD authentication will cease to function.

If both NTP server and domain controller are not available, you can manually set the cluster’s time, date and time zone using the isi config CLI command. For example:

1. Run the isi config command. The command-line prompt changes to indicate that you are in the isi config subsystem:

# isi config
Welcome to the Isilon IQ configuration console.
Copyright (c) 2001-2017 EMC Corporation. All Rights Reserved.
Enter 'help' to see list of available commands.
Enter 'help <command>' to see help for a specific command.
Enter 'quit' at any prompt to discard changes and exit.
        Node build: Isilon OneFS v8.2.2 B_8_2_2(RELEASE)
        Node serial number: JWXER170300301
>>> 

2. Specify the current date and time by running the date command. For example, the following command sets the cluster time to 9:20 AM on April 23, 2020:

>>> date 2020/04/23 09:20:00
Date is set to 2020/04/23 09:20:00

3. The help timezone command lists the available timezones. For example:

>>> help timezone
 
timezone [<timezone identifier>]
 
Sets the time zone on the cluster to the specified time zone.
Valid time zone identifiers are:
        Greenwich Mean Time
        Eastern Time Zone
        Central Time Zone
        Mountain Time Zone
        Pacific Time Zone
        Arizona
        Alaska
        Hawaii
        Japan
        Advanced

4. To verify the currently configured time zone, run the timezone command. For example:

>>> timezone
The current time zone is: Greenwich Mean Time

5. To change the time zone, enter the timezone command followed by one of the displayed options. For example, the following command changes the time zone to Alaska:

>>> timezone Alaska
Time zone is set to Alaska

A message confirming the new time zone setting displays. If your preferred time zone did not display when you ran the help timezone command, enter timezone Advanced. After a warning screen displays, you will see a list of regions. When you select a region, a list of specific time zones for that region appears. Select the preferred time zone (you may need to scroll), and enter OK or Cancel until you return to the isi config prompt.

6. When done, run the commit command to save your changes and exit isi config.

>>> commit
Commit succeeded.

Alternatively, you can manage these time and date parameters from the WebUI by going to Cluster Management > General Settings > Date and Time.


Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

OneFS Multi-writer

Nick Trimbee

Fri, 04 Mar 2022 21:09:19 -0000

|

Read Time: 0 minutes

In one of my other blog articles, we looked at write locking and shared access in OneFS. Next, we’ll delve another layer deeper into OneFS concurrent file access.

The OneFS locking hierarchy also provides a mechanism called Multi-writer, which allows a cluster to support concurrent writes from multiple client writer threads to the same file. This granular write locking is achieved by sub-dividing the file into separate regions and granting exclusive data write locks to these individual ranges, as opposed to the entire file. This process allows multiple clients, or write threads, attached to a node to simultaneously write to different regions of the same file.

Concurrent writes to a single file need more than just supporting data locks for ranges. Each writer also needs to update a file’s metadata attributes such as timestamps or block count. A mechanism for managing inode consistency is also needed, since OneFS is based on the concept of a single inode lock per file.

In addition to the standard shared read and exclusive write locks, OneFS also provides the following locking primitives, through journal deltas, to allow multiple threads to simultaneously read or write a file’s metadata attributes:

OneFS Lock Types include:

Exclusive: A thread can read or modify any field in the inode. When the transaction is committed, the entire inode block is written to disk, along with any extended attribute blocks.

Shared: A thread can read, but not modify, any inode field.

DeltaWrite: A thread can modify any inode fields which support delta-writes. These operations are sent to the journal as a set of deltas when the transaction is committed.

DeltaRead: A thread can read any field which cannot be modified by inode deltas.

These locks allow separate threads to have a Shared lock on the same LIN, or for different threads to have a DeltaWrite lock on the same LIN. However, it is not possible for one thread to have a Shared lock and another to have a DeltaWrite. This is because the Shared thread cannot perform a coherent read of a field which is in the process of being modified by the DeltaWrite thread.

The DeltaRead lock is compatible with both the Shared and DeltaWrite lock. Typically the file system will attempt to take a DeltaRead lock for a read operation, and a DeltaWrite lock for a write, since this allows maximum concurrency, as all these locks are compatible.

Here’s what the write lock incompatibilities look like:

OneFS protects data by writing file blocks (restriping) across multiple drives on different nodes. The Job Engine defines a restripe set comprising jobs which involve file-system management, protection and on-disk layout. The restripe set contains the following jobs:

  • AutoBalance & AutoBalanceLin
  • FlexProtect & FlexProtectLin
  • MediaScan
  • MultiScan
  • SetProtectPlus
  • SmartPools
  • Upgrade

Multi-writer for restripe allows multiple restripe worker threads to operate on a single file concurrently. This, in turn, improves read/write performance during file re-protection operations, plus helps reduce the window of risk (MTTDL) during drive Smartfails or other failures. This is particularly true for workflows consisting of large files, while one of the above restripe jobs is running. Typically, the larger the files on the cluster, the more benefit multi-writer for restripe will offer.

With multi-writer for restripe, an exclusive lock is no longer required on the LIN during the actual restripe of data. Instead, OneFS tries to use a delta write lock to update the cursors used to track which parts of the file need restriping. This means that a client application or program should be able to continue to write to the file while the restripe operation is underway.

An exclusive lock is only required for a very short period of time while a file is set up to be restriped. A file will have fixed widths for each restripe lock, and the number of range locks will depend on the quantity of threads and nodes which are actively restriping a single file.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • SmartPools

OneFS FilePolicy Job

Nick Trimbee

Fri, 04 Mar 2022 15:25:02 -0000

|

Read Time: 0 minutes

Traditionally, OneFS has used the SmartPools jobs to apply its file pool policies. To accomplish this, the SmartPools job visits every file, and the SmartPoolsTree job visits a tree of files. However, the scanning portion of these jobs can result in significant random impact to the cluster and lengthy execution times, particularly in the case of the SmartPools job.

To address this, OneFS also provides the FilePolicy job, which offers a faster, lower impact method for applying file pool policies than the full-blown SmartPools job.

But first, a quick Job Engine refresher…

As we know, the Job Engine is OneFS’ parallel task scheduling framework, and is responsible for the distribution, execution, and impact management of critical jobs and operations across the entire cluster.

The OneFS Job Engine schedules and manages all the data protection and background cluster tasks: creating jobs for each task, prioritizing them, and ensuring that inter-node communication and cluster wide capacity utilization and performance are balanced and optimized. Job Engine ensures that core cluster functions have priority over less important work and gives applications integrated with OneFS – Isilon add-on software or applications integrating to OneFS via the OneFS API – the ability to control the priority of their various functions to ensure the best resource utilization.

Each job (such as the SmartPools job) has an “Impact Profile” comprising a configurable Impact Policy and an Impact Schedule, which together characterize how much of the system’s resources the job will consume. The amount of work a job has to do is fixed, but the resources dedicated to that work can be tuned to minimize the impact to other cluster functions, like serving client data.

Here’s a list of the specific jobs that are directly associated with OneFS SmartPools:

Job              Description
SmartPools       Job that runs and moves data between the tiers of nodes within the same cluster. Also executes the CloudPools functionality if licensed and configured.
SmartPoolsTree   Enforces SmartPools file policies on a subtree.
FilePolicy       Efficient changelist-based SmartPools file pool policy job.
IndexUpdate      Creates and updates an efficient file system index for the FilePolicy job.
SetProtectPlus   Applies the default file policy. This job is disabled if SmartPools is activated on the cluster.

In conjunction with the IndexUpdate job, FilePolicy improves job scan performance by using a ‘file system index’, or changelist, to find files needing policy changes, rather than a full tree scan.

Avoiding a full treewalk dramatically decreases the amount of locking and metadata scanning work the job is required to perform, reducing impact on CPU and disk – albeit at the expense of not doing everything that SmartPools does. The FilePolicy job enforces just the SmartPools file pool policies, as opposed to the storage pool settings. For example, FilePolicy does not deal with changes to storage pools or storage pool settings, such as:

  • Restriping activity due to adding, removing, or reorganizing node pools
  • Changes to storage pool settings or defaults, including protection

However, the majority of the time SmartPools and FilePolicy perform the same work. Disabled by default, FilePolicy supports the full range of file pool policy features, reports the same information, and provides the same configuration options as the SmartPools job. Because FilePolicy is a changelist-based job, it performs best when run frequently – once or multiple times a day, depending on the configured file pool policies, data size, and rate of change.

Job schedules can easily be configured from the OneFS WebUI by navigating to Cluster Management > Job Operations, highlighting the desired job, and selecting ‘View/Edit’. The following example illustrates configuring the IndexUpdate job to run every six hours at a LOW impact level with a priority value of 5:

When enabling and using the FilePolicy and IndexUpdate jobs, the recommendation is to continue running the SmartPools job as well, but at a reduced frequency (monthly).

In addition to running on a configured schedule, the FilePolicy job can also be executed manually.

FilePolicy requires access to a current index. If the IndexUpdate job has not yet been run, attempting to start the FilePolicy job will fail with the error shown in the following figure. Instructions in the error message are displayed, prompting to run the IndexUpdate job first. When the index has been created, the FilePolicy job will run successfully. The IndexUpdate job can be run several times daily (that is, every six hours) to keep the index current and prevent the snapshots from getting large.
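For example, the two jobs can be started manually from the CLI, running IndexUpdate first so that FilePolicy has a current index to work from:

# isi job jobs start IndexUpdate
# isi job jobs start FilePolicy

Progress can then be monitored with the isi job jobs list command.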

Consider using the FilePolicy job with the job schedule shown in the table below for workflows and datasets that have these characteristics:

  • Data with long retention times
  • Large number of small files
  • Path-based File Pool filters configured
  • Where the FSAnalyze job is already running on the cluster (InsightIQ-monitored clusters)
  • There is already a SnapshotIQ schedule configured
  • When the SmartPools job typically takes a day or more to run to completion at LOW impact

For clusters without these characteristics, the recommendation is to continue running the SmartPools job as usual and not to activate the FilePolicy job.

The following table provides a suggested job schedule when deploying FilePolicy:

Job            Schedule                      Impact    Priority
FilePolicy     Every day at 22:00            LOW       6
IndexUpdate    Every six hours, every day    LOW       5
SmartPools     Monthly – Sunday at 23:00     LOW       6

Because no two clusters are the same, this suggested job schedule may require additional tuning to meet the needs of a specific environment.
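If you prefer the CLI to the WebUI, the suggested schedule above can be applied with the isi job types modify command. Treat the following as a sketch only: the recurrence string syntax varies by release, so confirm it with isi job types modify --help before use:

# isi job types modify FilePolicy --schedule "every day at 22:00" --policy LOW --priority 6
# isi job types modify IndexUpdate --schedule "every day every 6 hours" --policy LOW --priority 5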

Note that when clusters running older OneFS versions and the FSAnalyze job are upgraded to OneFS 8.2.x or later, the legacy FSAnalyze index and snapshots are removed and replaced by new snapshots the first time that IndexUpdate is run. The new index stores considerably more file and snapshot attributes than the old FSA index. Until the IndexUpdate job effects this change, FSA keeps running on the old index and snapshots.

Author: Nick Trimbee



Read Full Blog
  • data storage
  • data tiering
  • PowerScale
  • API
  • OneFS

A Metadata-based Approach to Tiering in PowerScale OneFS

Gregory Shiff

Wed, 02 Mar 2022 22:56:32 -0000

|

Read Time: 0 minutes

OneFS SmartPools provides sophisticated tiering between storage node types. Rules based on file attributes such as last accessed time or creation date can be configured in OneFS to drive transparent motion of data between PowerScale node types. This kind of “set and forget” approach to data tiering is ideal for some industries but not workable for most content creation workflows.

A classic case of how this kind of tiering falls short for media is the real-time nature of video playback. For an extreme example, take an uncompressed 4K (or even 8K) image sequence that might require more than 1.5 GB/s of throughput to play properly. If this media has been tiered down to low-performing archive storage and it needs to be used, those files must be migrated back up before they will play. This causes delays and confusion all around, and makes media storage administrators hesitant to archive anything.

The good news is that the PowerScale OneFS ecosystem has a better way of doing things!

The approach I have taken here is to pull metadata from elsewhere in the workflow and use it to drive on demand tiering in OneFS. How does that work? OneFS supports file extended attributes, which are <key/value> pairs (metadata!) that can be written to the files and directories stored in OneFS. File Policies can be configured in OneFS to move data based on those file extended attributes. And a SmartPoolsTree job can be run on only the path that needs to be moved. All this goodness can be controlled externally by combining the DataIQ API and the OneFS API.

Figure 1: API flow

Note that while I’m focused on combining the DataIQ and OneFS APIs in this post, other API driven tools with OneFS file system visibility could be substituted for DataIQ.

DataIQ

DataIQ is a data indexing and analysis tool. It runs as an external virtual machine and maintains an index of mounted file systems. DataIQ’s file system crawler is efficient, fast, and lightweight, meaning it can be kept up to date with little impact on the storage devices it is indexing.

DataIQ has a concept called “tagging”. Tags in DataIQ apply to directories and provide a mechanism for reporting sets of related data. A tag in DataIQ is an arbitrary <key>/<value> pair. Directories can be tagged in DataIQ in three different ways:

  • Autotagging rules:
    1. Tags are automatically placed in the file system based on regular expressions defined in the Autotagging configuration menu.
  • Use of .cntag files:
    1. Empty files named in the format <key>.<value>.cntag are placed in directories and will be recognized as tags by DataIQ.
  • API-based tagging:
    1. The DataIQ API allows for external tagging of directories.

Tags can be placed throughout a file system and then reported on as a group. For instance, temporary render directories could contain a render.temp.cntag file. Similarly, an external tool could access the DataIQ API and place a <Project/Name> tag on the top-level directory of each project. DataIQ can generate reports on the storage capacity those tags are consuming.

File system extended attributes in OneFS

As I mentioned earlier, OneFS supports file extended attributes. Extended attributes are arbitrary metadata tags in the form of <key/value> pairs that can be applied to files and directories. Extended attributes are not visible in the graphical interface or when accessing files over a share or export. However, the attributes can be accessed using the OneFS CLI with the getextattr and setextattr commands.

Figure 2: File extended attributes
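For example, setting and then reading back a user-namespace attribute from the OneFS CLI might look like the following; the attribute name, value, and file path here are purely illustrative:

# setextattr user ShotStatus fin /ifs/projects/shot001/plate_0001.dpx
# getextattr user ShotStatus /ifs/projects/shot001/plate_0001.dpx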

The SmartPools job engine will move data between node pools based on these file attributes. And it is that SmartPools functionality that uses this metadata to perform on demand data tiering.

Crucially, OneFS supports creation of file system extended attributes from an external script using the OneFS REST API. The OneFS API Reference Guide has great information about setting and reading back file system extended attributes.

Figure 3: File policy configuration

Tiering example with Autodesk Shotgrid, DataIQ, and OneFS

Autodesk ShotGrid (formerly Shotgun) is a production resource management tool common in the visual effects and animation industries. ShotGrid is a cloud-based tool that allows for coordination of large production teams. Although it isn’t a storage management tool, its business logic can be useful in deciding what tier of storage a particular set of files should live on. For instance, if a shot tracked in ShotGrid is complete and delivered, the files associated with that shot could be moved to archive.

DataIQ plug-in for Autodesk ShotGrid

The open-source DataIQ plug-in for ShotGrid is available on GitHub here:

Dell DataIQ Autodesk ShotGrid Plugin

This plug-in is proof of concept code to show how the ShotGrid and DataIQ APIs can be combined to tag data in DataIQ based on shot status in ShotGrid. The DataIQ tags are dynamically updated with the current shot status in ShotGrid.

Here is a “shot” in ShotGrid configured with various possible statuses:

Figure 4: ShotGrid status

The following figure of DataIQ shows where the shot status field from ShotGrid has been automatically applied as a tag in DataIQ.

Figure 5: DataIQ tags

Once metadata from ShotGrid has been pulled into DataIQ, that information can be used to drive OneFS SmartPools tiering:

  1. A user (or system) passes the DataIQ tag <key/values> to the DataIQ API. The DataIQ API returns a list of directories associated with that tag.
  2. A directory chosen from Step 1 above can be passed back to the DataIQ API to get a listing of all contents by way of the DataIQ file index.
  3. Those items are passed programmatically to the OneFS API. The <key/value> pair of the original DataIQ tag is written as an extended attribute directly to the targeted files and directories.  
  4. And finally, the SmartPoolsTree job can be run on the parent path chosen in Step 2 above to begin tiering the data immediately. 
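As a minimal sketch of steps 3 and 4, assuming a hypothetical cluster address, credentials, tag, and path (and with the namespace metadata payload shape to be verified against the OneFS API Reference Guide), the extended attribute write and the follow-up job might look like this:

# curl -sku admin:password -X PUT "https://cluster.example.com:8080/namespace/ifs/projects/shot001?metadata" -H "Content-Type: application/json" -d '{"action":"update","attrs":[{"name":"ShotStatus","value":"fin","op":"update","namespace":"user"}]}'
# isi job start SmartPoolsTree --path /ifs/projects/shot001

The --path option for SmartPoolsTree is also an assumption to check against ‘isi job start --help’; the point is simply that the tag written in step 3 and the tiering job in step 4 can both be driven from the same external script.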

Using business logic to drive storage tiering

DataIQ and OneFS provide the APIs necessary to drive storage tiering based on business logic. Striking efficiencies can be gained by taking advantage of the metadata that exists in many workflow tools. It is a matter of “connecting the dots”.

The example in this blog uses ShotGrid and DataIQ, however it is easy to imagine that similar metadata-based techniques could be developed using other file system index tools. In the media and entertainment ecosystem, media asset management and production asset management systems immediately come to mind as candidates for this kind of API level integration.

As data volumes increase exponentially, it is unrealistic to keep all files on the highest-cost tiers of storage. Various automated storage tiering approaches have been around for years, but for many use cases they fall short. Bringing together rich metadata and an API-driven workflow bridges the gap.

To see the Python required to put this process together, refer to my white paper PowerScale OneFS: A Metadata Driven Approach to On Demand Tiering.

Author: Gregory Shiff, Principal Solutions Architect, Media & Entertainment    LinkedIn


Read Full Blog
  • PowerScale
  • OneFS
  • syslog protocol

Understanding the Protocol Syslog Format in PowerScale OneFS

Vincent Shen

Wed, 23 Feb 2022 19:23:07 -0000

|

Read Time: 0 minutes

Recently I’ve received several queries on the format of the audit protocol syslog in PowerScale. It is a little bit complicated for the following reasons:

  1. For different protocol operations (such as OPEN and CLOSE), various fields have been defined to meet auditing goals.
  2. Some fields are easy to parse and some are more difficult.
  3. It is not currently documented.

Syslog format

The following shows, for each audited protocol operation, the fields that appear in the PowerScale protocol syslog payload, in order (OPEN and CLOSE carry the most fields, up to 13):

LOGON: userSID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, username
LOGOFF: userSID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, username
TREE-CONNECT: userSID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus
READ: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, inode/lin, filename
WRITE: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, inode/lin, filename
CLOSE: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, bytesRead, bytesWrite, inode/lin, filename
DELETE: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, inode/lin, filename
GET_SECURITY: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, inode/lin, filename
SET_SECURITY: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, inode/lin, filename
OPEN: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, desiredAccess, isDirectory, createResult, inode/lin, filename
RENAME: userSID, userID, zoneName, ZoneID, clientIPAddr, protocol, operation, ntStatus, isDirectory, inode/lin, filename, newFileName

Some Notes:

  1. Starting with OneFS 9.2.0.0, the syslog output follows RFC 5425.
  2. userSID: A user SID is a unique identifier for an object in Active Directory or NT4 domains. On a native Windows file server (as well as some other CIFS server implementations), this SID is used directly to determine a user's identity, and is generally stored on every file or folder in the file system that the user has rights to. SIDs commonly start with the letter 'S' and include a series of numbers and dashes.
  3. userID: On most UNIX-based systems, file and folder permissions are assigned to UIDs and GIDs (most commonly found in /etc/passwd and /etc/group).
  4. protocol: it’s one of the following:
    1. SMB
    2. NFS
    3. HDFS

      SMB is also returned for the LOGON, LOGOFF, and TREE-CONNECT operations.

  5. ntStatus:

  1. If the ntStatus field is 0, it will return “SUCCESS”.
  2. If the ntStatus field is non-zero, it will return “FAILD: <NT Status Code>”.
  3. If the ntStatus field is not in the payload, it will return “ERROR”.
  4. You can refer to the Microsoft Open Specifications (https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55) for the value and description of the NT Status Code.

  6. isDirectory:

  1. If it’s a file, it will return “FILE”.
  2. If it’s a directory, it will return “DIR”.

Example

 

Conclusion

I hope you have found this helpful.

Thanks for reading!

Author: Vincent Shen





Read Full Blog
  • PowerScale
  • OneFS

OneFS and Long Filenames

Nick Trimbee

Fri, 28 Jan 2022 21:24:39 -0000

|

Read Time: 0 minutes

Another feature debut in OneFS 9.3 is support for long filenames. Until now, the OneFS filename limit has been capped at 255 bytes. However, depending on the encoding type, this could potentially be an impediment for certain languages such as Chinese, Hebrew, Japanese, Korean, and Thai, and can create issues for customers who work with international languages that use multi-byte UTF-8 characters.

Since some international languages use up to 4 bytes per character, a file name of 255 bytes could be limited to as few as 63 characters when using certain languages on a cluster.

To address this, the new long filenames feature provides support for names up to 255 Unicode characters, by increasing the maximum file name length from 255 bytes to 1024 bytes. In conjunction with this, the OneFS maximum path length is also increased from 1024 bytes to 4096 bytes.

Before creating a name length configuration, the cluster must be running OneFS 9.3. However, the long filename feature is not activated or enabled by default. You have to opt-in by creating a “name length” configuration. That said, the recommendation is to only enable long filename support if you are actually planning on using it. This is because, once enabled, OneFS does not track if, when, or where, a long file name or path is created.

The following procedure can be used to configure a PowerScale cluster for long filename support:

Step 1: Ensure cluster is running OneFS 9.3 or later

The ‘uname’ CLI command output will display a cluster’s current OneFS version.

For example:

# uname -sr
Isilon OneFS v9.3.0.0

The current OneFS version information is also displayed at the upper right of any of the OneFS WebUI pages. If the output from Step 1 shows the cluster running an earlier release, an upgrade to OneFS 9.3 will be required. This can be accomplished either by using the ‘isi upgrade cluster’ CLI command or from the OneFS WebUI, by going to Cluster Management > Upgrade.

Once the upgrade has completed it will need to be committed, either by following the WebUI prompts, or using the ‘isi upgrade cluster commit’ CLI command.

Step 2: Verify the cluster’s long filename support configuration

The ‘isi namelength list’ CLI command output will verify a cluster’s long filename support status. For example, the following cluster already has long filename support enabled on the /ifs/tst path:

# isi namelength list
Path     Policy     Max Bytes   Max Chars
-----------------------------------------
/ifs/tst restricted 255         255
-----------------------------------------
Total: 1

Step 3. Configure long filename support

The ‘isi namelength create <path>’ CLI command can be run on the cluster to enable long filename support.

# mkdir /ifs/lfn
# isi namelength create --max-bytes 1024 --max-chars 1024 /ifs/lfn

If the maximum values are not specified, a name length configuration is created with the defaults of 255 bytes and 255 characters.

Step 4: Confirm long filename support is configured

The ‘isi namelength list’ CLI command output will confirm that the cluster’s /ifs/lfn directory path is now configured to support long filenames:

# isi namelength list
Path     Policy     Max Bytes   Max Chars
-----------------------------------------
/ifs/lfn custom      1024       1024
/ifs/tst restricted 255         255
-----------------------------------------
Total: 2

Name length configuration is set up per directory and can be nested. Plus, cluster-wide configuration can be applied by configuring at the root /ifs level.

Filename length configurations have two defaults:

  • “Full” – which is 1024 bytes, 255 characters.
  • “Restricted” – which is 255 bytes, 255 characters, and the default if no additional long filename configuration is specified.

Note that removing the long name configuration for a directory will not affect its contents, including any previously created files and directories with long names. However, it will prevent any new long-named files or subdirectories from being created under that directory.
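Assuming the namelength subcommands follow the same create/list pattern shown above (worth confirming with ‘isi namelength --help’), removing the configuration for a directory would look something like:

# isi namelength delete /ifs/lfn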

If a filename is too long for a particular protocol, OneFS will automatically truncate the name to around 249 bytes with a ‘hash’ appended to it, which can be used to consistently identify and access the file. This shortening process is referred to as ‘name mangling’. If, for example, a filename longer than 255 bytes is returned in a directory listing over NFSv3, the file’s mangled name will be presented. Any subsequent lookups of this mangled name will resolve to the same file with the original long name. Be aware that filename extensions will be lost when a name is mangled, which can have ramifications for Windows applications, and so on.

If long filename support is enabled on a cluster with active SyncIQ policies, all source and target clusters must have OneFS 9.3 or later installed and committed, and long filename support enabled.

However, the long name configuration does not need to be identical between the source and target clusters -- it only needs to be enabled. This can be done via the following sysctl command:

# sysctl efs.bam.long_file_name_enabled=1

When the target cluster for a SyncIQ policy does not support long filenames and the source domain has long filenames enabled, the replication job will fail, and the subsequent SyncIQ job report will include an error message to that effect.

Note that the OneFS checks are unable to identify a cascaded replication target running an earlier OneFS version and/or without long filenames configured.

So there are a couple of things to bear in mind when using long filenames:

  • Restoring data from a 9.3 NDMP backup containing long filenames to a cluster running an earlier OneFS version will fail with an ‘ENAMETOOLONG’ error for each long-named file. However, all the files with regular length names will be successfully restored from the backup stream.
  • OneFS ICAP does not support long filenames. However CAVA, ICAP’s replacement, is compatible.
  • The ‘isi_vol_copy’ migration utility does not support long filenames.
  • Neither does the OneFS WebDAV protocol implementation.
  • Symbolic links created via SMB are limited to 1024 bytes due to the size limit on extended attributes.
  • Any pathnames specified in long filename pAPI operations are limited to 4068 bytes.
  • And finally, while an increase in long named files and directories could potentially reduce the number of names the OneFS metadata structures can hold, the overall performance impact of creating files with longer names is negligible.

Author: Nick Trimbee




Read Full Blog
  • PowerScale
  • OneFS

OneFS Virtual Hot Spare

Nick Trimbee

Fri, 28 Jan 2022 21:12:37 -0000

|

Read Time: 0 minutes

There have been several recent questions from the field around how a cluster manages space reservation and pre-allocation of capacity for data repair and drive rebuilds.

OneFS provides a mechanism called Virtual Hot Spare (VHS), which helps ensure that node pools maintain enough free space to successfully re-protect data in the event of drive failure.

Although globally configured, Virtual Hot Spare actually operates at the node pool level so that nodes with different size drives reserve the appropriate VHS space. This helps ensure that, while data may move from one disk pool to another during repair, it remains on the same class of storage. VHS reservations are cluster wide and configurable as either a percentage of total storage (0-20%) or as a number of virtual drives (1-4). To achieve this, the reservation mechanism allocates a fraction of the node pool’s VHS space in each of its constituent disk pools.

No space is reserved for VHS on SSDs unless the entire node pool consists of SSDs. This means that a failed SSD may have data moved to HDDs during repair, but without adding additional configuration settings. This avoids reserving an unreasonable percentage of the SSD space in a node pool.

The default for new clusters is for Virtual Hot Spare to have both “subtract the space reserved for the virtual hot spare…” and “deny new data writes…” enabled with one virtual drive. On upgrade, existing settings are maintained.

It is strongly encouraged to keep Virtual Hot Spare enabled on a cluster, and a best practice is to configure 10% of total storage for VHS. If VHS is disabled and you upgrade OneFS, VHS will remain disabled. If VHS is disabled on your cluster, first check to ensure the cluster has sufficient free space to safely enable VHS, and then enable it.

VHS can be configured via the OneFS WebUI, and is always available, regardless of whether SmartPools has been licensed on a cluster. For example:

 

From the CLI, the cluster’s VHS configuration is part of the storage pool settings, and can be viewed with the following syntax:

# isi storagepool settings view
     Automatically Manage Protection: files_at_default
Automatically Manage Io Optimization: files_at_default
Protect Directories One Level Higher: Yes
       Global Namespace Acceleration: disabled
       Virtual Hot Spare Deny Writes: Yes
        Virtual Hot Spare Hide Spare: Yes
      Virtual Hot Spare Limit Drives: 1
     Virtual Hot Spare Limit Percent: 10
             Global Spillover Target: anywhere
                    Spillover Enabled: Yes
        SSD L3 Cache Default Enabled: Yes
                     SSD Qab Mirrors: one
            SSD System Btree Mirrors: one
            SSD System Delta Mirrors: one

Similarly, the following command will set the cluster’s VHS space reservation to 10%.

# isi storagepool settings modify --virtual-hot-spare-limit-percent 10

Bear in mind that reservations for virtual hot sparing will affect spillover. For example, if VHS is configured to reserve 10% of a pool’s capacity, spillover will occur at 90% full.

Spillover allows data that is being sent to a full pool to be diverted to an alternate pool. Spillover is enabled by default on clusters that have more than one pool. If you have a SmartPools license on the cluster, you can disable Spillover. However, it is recommended that you keep Spillover enabled. If a pool is full and Spillover is disabled, you might get a “no space available” error but still have a large amount of space left on the cluster.

If the cluster is inadvertently configured to allow data writes to the reserved VHS space, the following informational warning will be displayed in the SmartPools WebUI:

There is also no requirement for reserved space for snapshots in OneFS. Snapshots can use as much or as little of the available file system space as desirable and necessary.

A snapshot reserve can be configured if preferred, although this will be an accounting reservation rather than a hard limit, and is not a recommended best practice. If desired, the snapshot reserve can be set via the OneFS command line interface (CLI) by running the ‘isi snapshot settings modify --reserve’ command.

For example, the following command will set the snapshot reserve to 10%:

# isi snapshot settings modify --reserve 10

It’s worth noting that the snapshot reserve does not constrain the amount of space that snapshots can use on the cluster. Snapshots can consume a greater percentage of storage capacity than is specified by the snapshot reserve.

Additionally, when using SmartPools, snapshots can be stored on a different node pool or tier than the one the original data resides on.

For example, as above, the snapshots taken on a performance-aligned tier can be physically housed on a more cost-effective archive tier.

Author: Nick Trimbee

Read Full Blog
  • security
  • PowerScale
  • OneFS
  • MFA

Configure SSH Multi-Factor Authentication on OneFS Using Duo

Lieven Lin

Thu, 27 Jan 2022 21:03:07 -0000

|

Read Time: 0 minutes

Duo Security at Cisco is a vendor of cloud-based multi-factor authentication (MFA) services. MFA helps prevent an attacker from masquerading as an authenticated user. Duo allows an administrator to require multiple options for secondary authentication. With multi-factor authentication, even if an attacker steals a username and password, they cannot easily authenticate to a network service without the user's device.

SSH Multi-Factor Authentication (MFA) with Duo is a new feature introduced in OneFS 8.2. Currently, OneFS supports the SSH MFA with Duo service through SMS (short message service), phone callback, and Push notification via the Duo app. This blog describes how to integrate OneFS SSH MFA with the Duo service.

Duo supports many kinds of applications, such as Microsoft Azure Active Directory, Cisco Webex, and Amazon Web Services. For a OneFS cluster, it appears as a "Unix Application" entry. To integrate OneFS with the Duo service, you must configure both the Duo service and the OneFS cluster. Before configuring OneFS with Duo, you need to have a Duo account. In this blog, we used a trial version account for demonstration purposes.

Failback mode

By default, the SSH failback mode for Duo in OneFS is “safe”, which allows common authentication if the Duo service is not available. The “secure” mode will deny SSH access if the Duo service is not available, including the bypass users, because the bypass users are defined and validated in the Duo service. To configure the failback mode in OneFS, specify the --failmode option using the following command:

# isi auth duo modify --failmode <safe | secure>

Exclusion group

By default, all groups are required to use Duo unless the group is configured to bypass Duo authentication. The --groups option allows you to exclude dedicated user groups from Duo service authentication, or to restrict it to specific groups. This provides a way to configure users so they can still SSH into the cluster even when the Duo service is not available and the failback mode is set to “secure”. Otherwise, all users may be locked out of the cluster in this situation.

To configure the exclusion group option, prefix the group name with an exclamation character “!” and precede it with an asterisk to ensure that all other groups use the Duo service. For example:

# isi auth duo modify --groups=”*,!groupname”

Note: zsh shell requires the “!” to be escaped. In this case, the example above should be changed to:

# isi auth duo modify --groups=”*,\!groupname”

Prepare the Duo service for OneFS

1. Use your new Duo account to log into the Duo Admin Panel. Select the Application item from the left menu, then click Protect an Application, as shown in Figure 1.

Figure 1  Protect an Application

2.  Type “Unix Application” in the search bar. Click Protect this Application to create a new UNIX Application entry.

Figure 2  Search for UNIX Application

3. Scroll down the creation page to find the Settings section. Type a name for the new UNIX Application. Use a name that identifies your OneFS cluster, as shown in Figure 3. In the Settings section, you can also find Duo’s username normalization setting.

By default, Duo username normalization is not AD aware. This means that it will alter incoming usernames before trying to match them to a user account. For example, "DOMAIN\username", "username@domain.com", and "username" are treated as the same user. For other options, refer to the Duo documentation.

Figure 3  UNIX Application Name

4. Check the required information for OneFS under the Details section, including API hostname, integration key, and secret key, as shown in Figure 4.

Figure 4  Required Information for OneFS

5. Manually enroll a user. In this example, we are creating a user named admin, which is the default OneFS administrator user. Switch the menu item to Users and click the Add User button, as shown in Figure 5. For details about user enrollment in the Duo service, refer to the Duo documentation Enrolling Users.

Figure 5  User Enrollment

6. Type the user name, as shown in Figure 6.

Figure 6  Manual User Enrollment

7. Find the Phones settings in the user page and click the Add Phone button to add a device for the user. See Figure 7.

Figure 7  Add Phone for User

8. Type your phone number.

Figure 8  Add New Phone

9. (optional) If you want to use Duo push authentication methods, you need to install the Duo Mobile app in the phone and activate the Duo Mobile app. As highlighted in Figure 9, click the link to activate the Duo Mobile app.

Figure 9  Activate the Duo Mobile app

The Duo service is now prepared for OneFS. Now let's go on to configure OneFS.

Configuring and verifying OneFS

1. By default, the authentication setting template is set to “any”. To use OneFS with the Duo service, the authentication setting template must not be set to “any” or “custom”. It should be set to “password”, “publickey”, or “both”. In the following example, we are configuring the setting to “password”, which uses the user’s password plus Duo for SSH MFA.

# isi ssh modify --auth-settings-template=password

2. To confirm the authentication method, use the following command:

# isi ssh settings view | grep "Auth Settings Template"
      Auth Settings Template: password

3. Configure the required Duo service information and enable it for SSH MFA, as shown here. Use the same information as when we set up the UNIX Application in Duo, including API hostname, integration key, and secret key.

# isi auth duo modify --enabled=true --failmode=safe --host=api-13b1ee8c.duosecurity.com --ikey=DIRHW4IRSC7Q4R1YQ3CQ --set-skey
Enter skey:
Confirm:

4. Verify SSH MFA using the user “admin”. An SMS passcode and the user’s password are used for authentication in this example, as shown in Figure 10.

Figure 10 SSH MFA Verification

You have now completed the configuration on both your Duo service portal and your OneFS cluster. SSH users now have to authenticate with Duo, further strengthening your OneFS cluster security with MFA enabled.

Author: Lieven Lin

 



Read Full Blog
  • backup
  • PowerScale
  • CPU
  • OneFS
  • SAN

Introducing the Accelerator Nodes – the Latest Additions to the Dell PowerScale Family

Cris Banson

Thu, 20 Jan 2022 14:45:39 -0000

|

Read Time: 0 minutes

The Dell PowerScale family recently added accelerator nodes. Accelerator nodes contribute additional CPU, memory, and network bandwidth to a cluster that already has adequate storage resources.

The PowerScale accelerator nodes include the PowerScale P100 performance accelerator and the PowerScale B100 backup accelerator. Both the P100 and B100 are based on 1U PowerEdge R640 servers and can be part of a PowerScale cluster that is powered by OneFS 9.3 or later. The accelerator nodes contain boot media only and are optimized for CPU/memory configurations. A single P100 or B100 node can be added to a cluster. Expansion is through single node increments.

PowerScale all-flash and all-NVMe storage deliver the necessary performance to meet demanding workloads. If additional capabilities are required, new nodes can be non-disruptively added to the cluster, to provide both performance and capacity. There may be specialized compute-bound workloads that require extra performance but don’t need any additional capacity. These types of workloads may benefit by adding the PowerScale P100 performance accelerator node to the cluster. The accelerator node contributes CPU, memory, and network bandwidth capabilities to the cluster. This accelerated storage solution delivers incremental performance at a lower cost. Let’s look at each in detail.  

A PowerScale P100 Performance Accelerator node adds performance to the workflows on a PowerScale cluster that is composed of CPU-bound nodes. The P100 provides a dedicated cache, separate from the cluster. Adding CPU to the cluster will improve performance where there are read/re-read intensive workloads. The P100 also provides additional network bandwidth to a cluster through the additional front-end ports.

With rapid data growth, organizations are challenged by shrinking backup windows that impact business productivity and the ability to meet IT requirements for tape backup and compliance archiving. In such an environment, providing fast, efficient, and reliable data protection is essential. Given the 24x7 nature of the business, a high-performance backup solution delivers the performance and scale to address the SLAs of the business. Adding one or more PowerScale B100 backup accelerator nodes to a PowerScale cluster can reduce risk while addressing backup protection needs.

A PowerScale B100 Backup Accelerator enables backing up a PowerScale cluster using a two-way NDMP protocol. The B100 is delivered in a cost-effective form factor to address the SLA targets and tape backup needs of a wide variety of workloads. Each node includes Fibre Channel ports that can connect directly to a tape subsystem or a Storage Area Network (SAN). The B100 can benefit backup operations as it reduces overhead on the cluster, by going through the Fibre Channel ports directly, thereby separating front-end and NDMP traffic.

The PowerScale P100 and B100 nodes can be monitored using the same tools available today, including the OneFS web administration interface, the OneFS command-line interface, Dell DataIQ, and InsightIQ.

In a world where unstructured data is growing rapidly and taking over the data center, organizations need an enterprise storage solution that provides the flexibility to address the additional performance needs of certain workloads, and that meets the organization’s overall data protection requirements. 

The following information provides the technical specifications and best practice design considerations of the PowerScale Accelerator nodes:

Author: Cris Banson


Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS & Files Per Directory

Nick Trimbee

Thu, 13 Jan 2022 15:00:46 -0000

|

Read Time: 0 minutes

Had several recent inquiries from the field asking about low-impact methods to count the number of files in large directories (containing hundreds of thousands to millions of files).

Unfortunately, there’s no ‘silver bullet’ command or data source available that will provide that count instantaneously: Something will have to perform a treewalk to gather these stats.  That said, there are a couple of approaches to this, each with its pros and cons:

  • If the customer has a SmartQuotas license, they can configure an advisory directory quota on the directories they want to check. As mentioned, the first job run will require walking the directory tree, but they can get fast, low-impact reports moving forward.
  • Another approach is using traditional UNIX commands, either from the OneFS CLI or, less desirably, from a UNIX client. The two following commands will both take time to run:
# ls -f /path/to/directory | wc -l
# find /path/to/directory -type f | wc -l

It’s worth noting that when counting files with ls, you’ll probably get faster results by omitting the ‘-l’ flag and using the ‘-f’ flag instead. This is because ‘-l’ resolves UIDs and GIDs to display users/groups, which creates more work and thereby slows the listing. In contrast, ‘-f’ allows the ‘ls’ command to avoid sorting the output. This should be faster and reduce memory consumption when listing extremely large numbers of files.

Ultimately, there really is no quick way to walk a file system and count the files – especially since both ls and find are single threaded commands.  Running either of these in the background with output redirected to a file is probably the best approach.
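For example, the count can be pushed into the background with its output redirected to a file; the paths here are illustrative:

# nohup sh -c 'find /ifs/data/mydir -type f | wc -l > /ifs/data/mydir_filecount.txt' > /dev/null 2>&1 &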

Depending on your arguments for the ls or find command, you can gather a comprehensive set of context info and metadata on a single pass.

# find /path/to/scan -ls > output.file

It will take quite a while for the command to complete, but once you have the output stashed in a file you can pull all sorts of useful data from it.

Assuming a latency of 10ms per file it would take 33 minutes for 200,000 files. While this estimate may be conservative, there are typically multiple protocol ops that need to be done to each file, and they do add up. Plus, as mentioned before, ‘ls’ is a single threaded command.

  • If possible, ensure the directories of interest are stored on a file pool that has at least one of the metadata mirrors on SSD (metadata-read).
  • Windows Explorer can also enumerate the files in a directory tree surprisingly quickly. All you get is a file count, but it can work pretty well.
  • If the directory you wish to know the file count for just happens to be /ifs, you can run the LinCount job, which will tell you how many LINs there are in the file system.

Lincount (relatively) quickly scans the filesystem and returns the total count of LINs (logical inodes). The LIN count is essentially equivalent to the total file and directory count on a cluster. The job itself runs by default at the LOW priority and is the fastest method of determining object count on OneFS, assuming no other job has run to completion.

The following syntax can be used to kick off the Lincount job from the OneFS CLI:

# isi job start lincount

The output from this will be along the lines of “Added job [52]”.

Note: The number in square brackets is the job ID.

To view results, run the following command from the CLI:

# isi job reports view [job ID]

For example:

# isi job reports view 52
LinCount[52] phase 1 (2021-07-06T09:33:33)
------------------------------------------
Elapsed time 1 seconds
Errors 0
Job mode LinCount
LINs traversed 1722
SINs traversed 0

The "LINs traversed" metric indicates that 1722 files and directories were found.

Note: The Lincount job will also include snapshot revisions of LINs in its count.

Alternatively, if another treewalk job has run against the directory you wish to know the count for, you might be in luck.

At any rate, hundreds of thousands of files is a large number to store in one directory. To reduce the directory enumeration time, where possible divide the files up into multiple subdirectories.

When it comes to NFS, the behavior is going to partially depend on whether the client is doing READDIRPLUS operations vs READDIR. READDIRPLUS is useful if the client is going to need the metadata. However, if all you’re trying to do is list the filenames, it actually makes that operation much slower.

If you only read the filenames in the directory, and you don’t attempt to stat any associated metadata, then this requires a relatively small amount of I/O to pull the names from the meta-tree and should be fairly fast.

If this has already been done recently, some or all of the blocks are likely to already be in L2 cache. As such, a subsequent operation won’t need to read from hard disk and will be substantially faster.

NFS is more complicated regarding what it will and won’t cache on the client side, particularly with the attribute cache and the timeouts that are associated with it.

Here are some options from fastest to slowest:

  • If NFS is using READDIR, as opposed to READDIRPLUS, and the ‘ls’ command is invoked with the appropriate arguments to prevent it polling metadata or sorting the output, execution will be relatively swift.
  • If ‘ls’ polls the metadata (or if NFS uses READDIRPLUS) but doesn’t sort the results, output will start appearing almost immediately, but the listing will take longer to complete overall.
  • If ‘ls’ sorts the output, nothing will be displayed until ls has read everything and sorted it, then you’ll get the output in a deluge at the end.

  

Author: Nick Trimbee

 

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS NFS Netgroups

Nick Trimbee

Thu, 13 Jan 2022 15:17:23 -0000

|

Read Time: 0 minutes

A OneFS network group, or netgroup, defines a network-wide group of hosts and users. As such, they can be used to restrict access to shared NFS filesystems, etc. Network groups are stored in a network information services, such as LDAP, NIS, or NIS+, rather than in a local file. Netgroups help to simplify the identification and management of people and machines for access control.

The isi_netgroup_d service provides netgroup lookups and caching for consumers of the ‘isi_nfs’ library.  Only mountd and the ‘isi nfs’ command-line interface use this service.  The isi_netgroup_d daemon maintains a fast, persistent cluster-coherent cache containing netgroups and netgroup members.  isi_netgroup_d enforces netgroup TTLs and netgroup retries.  A persistent cache database (SQLite) exists to store and recover cache data across reboots.  Communication with isi_netgroup_d is via RPC and it will register its service and port with the local rpcbind.

Within OneFS, the netgroup cache possesses the following gconfig configuration parameters:

# isi_gconfig -t nfs-config | grep cache
shared_config.bypass_netgroup_cache_daemon (bool) = false
netcache_config.nc_ng_expiration (uint32) = 3600000
netcache_config.nc_ng_lifetime (uint32) = 604800
netcache_config.nc_ng_retry_wait (uint32) = 30000
netcache_config.ncdb_busy_timeout (uint32) = 900000
netcache_config.ncdb_write (uint32) = 43200000
netcache_config.nc_max_hosts (uint32) = 200000

Similarly, the following files are used by the isi_netgroup_d daemon:

File                                   Purpose
/var/run/isi_netgroup_d.pid            The PID of the currently running isi_netgroup_d
/ifs/.ifs/modules/nfs/nfs_config.gc    Server configuration file
/ifs/.ifs/modules/nfs/netcache.db      Persistent cache database
/var/log/isi_netgroup_d.log            Log output file

 In general, using IP addresses works better than hostnames for netgroups. This is because hostnames require a DNS lookup and resolution from FQDN to IP address. Using IP addresses directly saves this overhead.

Resolving a large set of hosts in the allow/deny list is significantly faster when using netgroups. Entering a large host list directly in the NFS export means OneFS has to look up the hosts for each individual NFS export. With netgroups, once a host is looked up it is cached, so it doesn’t have to be looked up again if there is overlap between exports. It is also better to use an LDAP (or NIS) server for netgroups instead of the flat file. If you have a large list of hosts in the netgroups file, it can take a while to resolve because the lookup is single threaded and sequential, whereas LDAP/NIS provider-based netgroup lookups are parallelized.
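For illustration, a netgroup entry uses the standard (host,user,domain) triple format, so an IP-based netgroup served from LDAP, NIS, or a flat file might look like the following (the group name and addresses are hypothetical):

nfsclients (10.1.1.10,,) (10.1.1.11,,) (10.1.1.12,,)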

The OneFS netgroup cache has a default limit in gconfig of 200,000 host entries.

# isi_gconfig -t nfs-config | grep max
netcache_config.nc_max_hosts (uint32) = 200000

So, what is the waiting period between when /etc/netgroup is updated and when the NFS export recognizes the change? OneFS uses a netgroup cache, and both its expiration and lifetime are tunable. The netgroup expiration and lifetime can be configured with the following CLI command:

# isi nfs netgroup modify
    --expiration or -e <duration>
        Set the netgroup expiration time.
    --lifetime or -l <duration>
        Set the netgroup lifetime.
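For example, to shorten the cache expiration to 30 minutes while keeping a one-week lifetime (the duration syntax shown is an assumption; check the accepted format with ‘isi nfs netgroup modify --help’):

# isi nfs netgroup modify --expiration 30m --lifetime 1W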

OneFS also provides the ‘isi nfs netgroups flush’ CLI command, which can be used to force a reload of the file.

# isi nfs netgroup flush
        [--host <string>]
        [{--verbose | -v}]
        [{--help | -h}]

Options:
    --host <string>
        IP address of the node to flush. Default is all nodes.

Display Options:
    --verbose | -v
        Display more detailed information.
    --help | -h
        Display help for this command.

However, it is not recommended to flush the cache as part of normal cluster operation. A refresh will walk the file and update the cache as needed.

Another area of caution is applying a netgroup with unresolved hostname(s). This will also slow down resolution of the hosts in the file when a refresh or node startup happens. The best practice is to ensure that each host in the netgroups file is resolvable in DNS, or to just use IP addresses rather than names in the netgroup.

When it comes to switching to a netgroup for clients already on an export, a netgroup can be added and the clients removed in one step (for example, modifying export #1 with --add-clients <netgroup> --remove-clients 1,2,3, and so on). The cluster allows a mix of netgroup and host entries, so duplicates are tolerated. However, it’s worth noting that if there are unresolvable hosts in both areas, the startup resolution time will take that much longer.

 

 

Author: Nick Trimbee

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS Protocol Auditing

Nick Trimbee

Thu, 13 Jan 2022 15:38:26 -0000

|

Read Time: 0 minutes

Auditing can detect potential sources of data loss, fraud, inappropriate entitlements, access attempts that should not occur, and a range of other anomalies that are indicators of risk. This can be especially useful when the audit associates data access with specific user identities.

In the interests of data security, OneFS provides ‘chain of custody’ auditing by logging specific activity on the cluster. This includes OneFS configuration changes plus NFS, SMB, and HDFS client protocol activity, which are required for organizational IT security compliance, as mandated by regulatory bodies like HIPAA, SOX, FISMA, MPAA, etc.

OneFS auditing uses Dell EMC’s Common Event Enabler (CEE) to provide compatibility with external audit applications. A cluster can write audit events across up to five CEE servers per node in a parallel, load-balanced configuration. This allows OneFS to deliver an end-to-end, enterprise-grade audit solution that efficiently integrates with third-party solutions like Varonis DatAdvantage.

OneFS auditing provides control over exactly what protocol activity is audited. For example:

  • Stops collection of unneeded audit events that third-party applications do not register for
  • Reduces the number of audit events collected to only what is needed, so fewer unneeded events are stored on /ifs and sent off cluster

OneFS protocol auditing events are configurable at CEE granularity, with each OneFS event mapping directly to a CEE event. This allows customers to configure protocol auditing to collect only what their auditing application requests, reducing both the number of events discarded by CEE and stored on /ifs.

The ‘isi audit settings’ command syntax and corresponding platform API are used to specify the desired events for the audit filter to collect.

A ‘detail_type’ field within OneFS internal protocol audit events allows a direct mapping to CEE audit events. For example:

"protocol":"SMB2",
"zoneID":1,
"zoneName":"System",
"eventType":"rename",
"detailType":"rename-directory",
"isDirectory":true,
"clientIPAddr":"10.32.xxx.xxx",
"fileName":"\\ifs\\test\\New folder",
"newFileName":"\\ifs\\test\\ABC",
"userSID":"S-1-22-1-0",
"userID":0,
Old audit events are processed and mapped to the same CEE audit events as in previous releases. Backwards compatibility is maintained with previous audit events such that old versions ignore the new field. There are no changes to external audit events sent to CEE or syslog.

  • New default audit events are applied when creating an access zone

Here are the protocol audit events:

New OneFS Audit Event      Pre-8.2 Audit Event
create_file                create
create_directory           create
open_file_write            create
open_file_read             create
open_file_noaccess         create
open_directory             create
close_file_unmodified      close
close_file_modified        close
close_directory            close
delete_file                delete
delete_directory           delete
rename_file                rename
rename_directory           rename
set_security_file          set_security
set_security_directory     set_security
get_security_file          get_security
get_security_directory     get_security
write_file                 write
read_file                  read

Audit events: logon, logoff, tree_connect

The ‘isi audit settings’ CLI command syntax is as follows:

Usage:
    isi audit <subcommand>

Subcommands:
    settings    Manage settings related to audit configuration.
    topics      Manage audit topics.
    logs        Delete out of date audit logs manually & monitor process.
    progress    Get the audit event time.

All options that take <events> use the protocol audit events:

# isi audit settings view --zone=<zone>
# isi audit settings modify --audit-success=<events> --zone=<zone>
# isi audit settings modify --audit-failure=<events> --zone=<zone>
# isi audit settings modify --syslog-audit-events=<events> --zone=<zone>
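Putting this together, a hedged example that collects only successful delete and rename events in the System access zone, using event names from the table above, would be:

# isi audit settings modify --audit-success=delete_file,delete_directory,rename_file,rename_directory --zone=System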

When it comes to troubleshooting audit on a cluster, the ‘isi_audit_viewer’ utility can be used to list protocol audit events collected.

# isi_audit_viewer -h
Usage: isi_audit_viewer [ -n <nodeid> | -t <topic> | -s <starttime> |
         -e <endtime> | -v ]
         -n <nodeid> : Specify node id to browse (default: local node)
         -t <topic>  : Choose topic to browse.
            Topics are "config" and "protocol" (default: "config")
         -s <start>  : Browse audit logs starting at <starttime>
         -e <end>    : Browse audit logs ending at <endtime>
         -v verbose  : Prints out start / end time range before printing records
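For example, to dump the protocol audit events collected on the local node, with the start and end time range printed first:

# isi_audit_viewer -t protocol -v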

The new audit event type appears in the ‘detail_type’ field. Any errors that are encountered while processing audit events, and while delivering them to an external CEE server, are written to the log file ‘/var/log/isi_audit_cee.log’. Additionally, the protocol-specific logs will contain any issues the audit filter encounters while collecting audit events.

These protocol log files are:

Protocol    Log file
HDFS        /var/log/hdfs.log
NFS         /var/log/nfs.log
SMB         /var/log/lwiod.log
S3          /var/log/s3.log

 

Author: Nick Trimbee

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS Hardware Fault Tolerance

Nick Trimbee

Thu, 13 Jan 2022 15:42:03 -0000

|

Read Time: 0 minutes

There have been several inquiries recently around PowerScale clusters and hardware fault tolerance, above and beyond file level data protection via erasure coding. It seemed like a useful topic for a blog article, so here are some of the techniques which OneFS employs to help protect data against the threat of hardware errors:

File system journal

Every PowerScale node is equipped with a battery backed NVRAM file system journal. Each journal is used by OneFS as stable storage, and guards write transactions against sudden power loss or other catastrophic events. The journal protects the consistency of the file system and the battery charge lasts up to three days. Since each member node of a cluster contains an NVRAM controller, the entire OneFS file system is therefore fully journaled.

Proactive device failure

OneFS will proactively remove, or SmartFail, any drive that reaches a particular threshold of detected Error Correction Code (ECC) errors, and automatically reconstruct the data from that drive and locate it elsewhere on the cluster. Both SmartFail and the subsequent repair process are fully automated and hence require no administrator intervention.

Data integrity

ISI Data Integrity (IDI) is the OneFS process that protects file system structures against corruption via 32-bit CRC checksums. All OneFS blocks, both for file and metadata, utilize checksum verification. Metadata checksums are housed in the metadata blocks themselves, whereas file data checksums are stored as metadata, thereby providing referential integrity. All checksums are recomputed by the initiator, the node servicing a particular read, on every request.

In the event that the recomputed checksum does not match the stored checksum, OneFS will generate a system alert, log the event, retrieve and return the corresponding error correcting code (ECC) block to the client and attempt to repair the suspect data block.

Protocol checksums

In addition to blocks and metadata, OneFS also provides checksum verification for Remote Block Management (RBM) protocol data. RBM is a unicast, RPC-based protocol used over the back-end cluster interconnect. Checksums on the RBM protocol are in addition to the InfiniBand hardware checksums provided at the network layer, and are used to detect and isolate machines with certain faulty hardware components and exhibiting other failure states.

Dynamic sector repair

OneFS includes a Dynamic Sector Repair (DSR) feature whereby bad disk sectors can be forced by the file system to be rewritten elsewhere. When OneFS fails to read a block during normal operation, DSR is invoked to reconstruct the missing data and write it to either a different location on the drive or to another drive on the node. This is done to ensure that subsequent reads of the block do not fail. DSR is fully automated and completely transparent to the end-user. Disk sector errors and Cyclic Redundancy Check (CRC) mismatches use almost the same mechanism as the drive rebuild process.

MediaScan

MediaScan’s role within OneFS is to check disk sectors and deploy the above DSR mechanism in order to force disk drives to fix any sector ECC errors they may encounter. Implemented as one of the phases of the OneFS job engine, MediaScan is run automatically based on a predefined schedule. Designed as a low-impact, background process, MediaScan is fully distributed and can thereby leverage the benefits of a cluster’s parallel architecture.

IntegrityScan

IntegrityScan, another component of the OneFS job engine, is responsible for examining the entire file system for inconsistencies. It does this by systematically reading every block and verifying its associated checksum. Unlike traditional ‘fsck’ style file system integrity checking tools, IntegrityScan is designed to run while the cluster is fully operational, thereby removing the need for any downtime. In the event that IntegrityScan detects a checksum mismatch, a system alert is generated and written to the syslog and OneFS automatically attempts to repair the suspect block.

The IntegrityScan phase is run manually if the integrity of the file system is ever in doubt. Although this process may take several days to complete, the file system is online and completely available during this time. Additionally, like all phases of the OneFS job engine, IntegrityScan can be prioritized, paused or stopped, depending on the impact to cluster operations and other jobs.

Fault isolation

Because OneFS protects its data at the file-level, any inconsistencies or data loss is isolated to the unavailable or failing device—the rest of the file system remains intact and available.

For example, a ten node, S210 cluster, protected at +2d:1n, sustains three simultaneous drive failures—one in each of three nodes. Even in this degraded state, I/O errors would only occur on the very small subset of data housed on all three of these drives. The remainder of the data striped across the other two hundred and thirty-seven drives would be totally unaffected. Contrast this behavior with a traditional RAID6 system, where losing more than two drives in a RAID-set will render it unusable and necessitate a full restore from backups.

Similarly, in the unlikely event that a portion of the file system does become corrupt (whether as a result of a software or firmware bug, etc.) or a media error occurs where a section of the disk has failed, only the portion of the file system associated with this area on disk will be affected. All healthy areas will still be available and protected.

As mentioned above, referential checksums of both data and meta-data are used to catch silent data corruption (data corruption not associated with hardware failures). The checksums for file data blocks are stored as metadata, outside the actual blocks they reference, and thus provide referential integrity.

Accelerated drive rebuilds

The time that it takes a storage system to rebuild data from a failed disk drive is crucial to the data reliability of that system. With the advent of four terabyte drives, and the creation of increasingly larger single volumes and file systems, typical recovery times for multi-terabyte drive failures are becoming multiple days or even weeks. During this MTTDL period, storage systems are vulnerable to additional drive failures and the resulting data loss and downtime.

Since OneFS is built upon a highly distributed architecture, it’s able to leverage the CPU, memory and spindles from multiple nodes to reconstruct data from failed drives in a highly parallel and efficient manner. Because a PowerScale cluster is not bound by the speed of any particular drive, OneFS is able to recover from drive failures extremely quickly and this efficiency grows relative to cluster size. As such, a failed drive within a cluster will be rebuilt an order of magnitude faster than hardware RAID-based storage devices. Additionally, OneFS has no requirement for dedicated ‘hot-spare’ drives.

Automatic drive firmware updates

Clusters support automatic drive firmware updates for new and replacement drives, as part of the non-disruptive firmware update process. Firmware updates are delivered via drive support packages, which both simplify and streamline the management of existing and new drives across the cluster. This ensures that drive firmware is up to date and mitigates the likelihood of failures due to known drive issues. As such, automatic drive firmware updates are an important component of OneFS’ high availability and non-disruptive operations strategy.

 

 

Author: Nick Trimbee

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS and SMB Encryption

Nick Trimbee

Thu, 13 Jan 2022 15:49:36 -0000

|

Read Time: 0 minutes

Received a couple of recent questions around SMB encryption, which is supported in addition to the other components of the SMB3 protocol dialect that OneFS supports, including multi-channel, continuous availability (CA), and witness.

OneFS allows encryption for SMB3 clients to be configured on a per share, zone, or cluster-wide basis. When configuring encryption at the cluster-wide level, OneFS provides the option to also allow unencrypted connections for older, non-SMB3 clients.

The following CLI command will indicate whether SMB3 encryption has already been configured globally on the cluster:

# isi smb settings global view | grep -i encryption
     Support Smb3 Encryption: No
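To enable it globally, the corresponding modify command takes a matching flag; the exact option name below is an assumption based on the setting shown above, so confirm it with ‘isi smb settings global modify --help’:

# isi smb settings global modify --support-smb3-encryption=yes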

The following table lists what behavior a variety of Microsoft Windows and Apple Mac OS versions will support with respect to SMB3 encryption:

Operating System | Description
Windows Vista/Server 2008 | Can only access non-encrypted shares if the cluster is configured to allow non-encrypted connections
Windows 7/Server 2008 R2 | Can only access non-encrypted shares if the cluster is configured to allow non-encrypted connections
Windows 8/Server 2012 | Can access encrypted shares (and non-encrypted shares if the cluster is configured to allow non-encrypted connections)
Windows 8.1/Server 2012 R2 | Can access encrypted shares (and non-encrypted shares if the cluster is configured to allow non-encrypted connections)
Windows 10/Server 2016 | Can access encrypted shares (and non-encrypted shares if the cluster is configured to allow non-encrypted connections)
OS X 10.12 | Can access encrypted shares (and non-encrypted shares if the cluster is configured to allow non-encrypted connections)

 Note that only operating systems which support SMB3 encryption can work with encrypted shares. These operating systems can also work with unencrypted shares, but only if the cluster is configured to allow non-encrypted connections. Other operating systems can access non-encrypted shares only if the cluster is configured to allow non-encrypted connections.

If encryption is enabled for an existing share or zone, and if the cluster is set to only allow encrypted connections, only Windows 8/Server 2012 and later and OSX 10.12 will be able to access that share or zone. Encryption cannot be turned on or off at the client level.
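
For instance, encryption can be enabled on an existing share, or across an entire access zone, from the CLI. The following is a sketch only: the share-level flag mirrors the ‘create’ syntax shown below, while the zone-level option name is an assumption based on the global setting described later in this article:

# isi smb shares modify DataEncrypt --smb3-encryption-enabled true
# isi smb settings zone modify --zone=System --support-smb3-encryption=yes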

The following CLI procedures will configure SMB3 encryption on a specific share, rather than globally across the cluster:

As a prerequisite, ensure that the cluster and clients are bound and connected to the desired Active Directory domain (for example in this case, ad1.com).

To create a share with SMB3 encryption enabled from the CLI:

# mkdir -p /ifs/smb/data_encrypt
# chmod +a group "AD1\\Domain Users" allow generic_all /ifs/smb/data_encrypt
# isi smb shares create DataEncrypt /ifs/smb/data_encrypt --smb3-encryption-enabled true
# isi smb shares permission modify DataEncrypt --wellknown Everyone -d allow -p full

To verify that an SMB3 client session is actually being encrypted, launch a remote desktop protocol (RDP) session to the Windows client, log in as administrator, and perform the following:

  1. Ensure a packet capture and analysis tool such as Wireshark is installed.
  2. Start a Wireshark capture using the capture filter “port 445”.
  3. Map the DataEncrypt share from the second node in the cluster.
  4. Create a file on the desktop of the client (for example, README-W10.txt).
  5. Copy the README-W10.txt file from the desktop on the client to the DataEncrypt share using Windows explorer.exe.
  6. Stop the Wireshark capture.
  7. Set the Wireshark display filter to “smb2” plus the IP address of node 1:
    1. Examine the SMB2_NEGOTIATE packet exchange to verify the capabilities, negotiated contexts, and protocol dialect (3.1.1).
    2. Examine the SMB2_TREE_CONNECT exchange to verify that encryption support has not been enabled for this share.
    3. Examine the SMB2_WRITE requests to confirm that the file contents are readable.
  8. Set the Wireshark display filter to “smb2” plus the IP address of node 2:
    1. Examine the SMB2_NEGOTIATE packet exchange to verify the capabilities, negotiated contexts, and protocol dialect (3.1.1).
    2. Examine the SMB2_TREE_CONNECT exchange to verify that encryption support has been enabled for this share.
    3. Examine the communication following the successful SMB2_TREE_CONNECT response to confirm that the packets are encrypted.
  9. Save the Wireshark capture to the DataEncrypt share using the name Win10-SMB3EncryptionDemo.pcap.

SMB3 encryption can also be applied globally to a cluster. This will mean that all the SMB communication with the cluster will be encrypted, not just with individual shares. SMB clients that don’t support SMB3 encryption will only be able to connect to the cluster so long as it is configured to allow non-encrypted connections. The following table presents the available global SMB3 encryption config options:

Setting | Description
Disabled | Encryption for SMBv3 clients is not enabled on this cluster.
Enable SMB3 encryption | Permits encrypted SMBv3 client connections to the cluster but does not make encryption mandatory. Unencrypted SMBv3 clients can still connect to the cluster when this option is enabled. Note that this setting does not actively enable SMBv3 encryption: to encrypt SMBv3 client connections to the cluster, you must first select this option and then activate encryption on the client side. This setting applies to all shares in the cluster.
Reject unencrypted SMB3 client connections | Makes encryption mandatory for all SMBv3 client connections to the cluster. When this setting is active, only encrypted SMBv3 clients can connect to the cluster. SMBv3 clients that do not have encryption enabled are denied access. This setting applies to all shares in the cluster.

The following CLI syntax will configure global SMB3 encryption:

# isi smb settings global modify --support-smb3-encryption=yes

Verify the global encryption settings on a cluster by running:

# isi smb settings global view | grep -i encrypt
Reject Unencrypted Access: Yes
     Support Smb3 Encryption: Yes

Global SMB3 encryption can also be enabled from the WebUI by browsing to Protocols > Windows Sharing (SMB) > SMB Server Settings: 

 

 Author: Nick Trimbee

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS File Pool Policies

Nick Trimbee

Thu, 13 Jan 2022 15:56:39 -0000

|

Read Time: 0 minutes

A OneFS file pool policy can be easily generated from either the CLI or WebUI. For example, the following CLI syntax configures a policy named ‘ARCHIVE_OLD’, which archives older files to a lower storage tier.

# isi filepool policies modify ARCHIVE_OLD --description "Move older files to archive storage" --data-storage-target TIER_A --data-ssd-strategy metadata-write --begin-filter --file-type=file --and --birth-time=2021-01-01 --operator=lt --and --accessed-time=2021-09-01 --operator=lt --end-filter

After a file match with a File Pool policy occurs, the SmartPools job uses the settings in the matching policy to store and protect the file. However, a matching policy might not specify all settings for the matched file. In this case, the default policy is used for those settings not specified in the custom policy. For each file stored on a cluster, the system needs to determine the following:

  • Requested protection level
  • Data storage target for local data cache
  • SSD strategy for metadata and data
  • Protection level for local data cache
  • Configuration for snapshots
  • SmartCache setting
  • L3 cache setting
  • Data access pattern
  • CloudPools actions (if any)

 If no File Pool policy matches a file, the default policy specifies all storage settings for the file. The default policy, in effect, matches all files not matched by any other SmartPools policy. For this reason, the default policy is the last in the file pool policy list, and, as such, always the last policy that SmartPools applies.

Next, SmartPools checks the file’s current settings against those the policy would assign to identify those which do not match.  Once SmartPools has the complete list of settings that it needs to apply to that file, it sets them all simultaneously, and moves to restripe that file to reflect any and all changes to Node Pool, protection, SmartCache use, layout, etc.
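
As a minimal illustration, the SmartPools job can also be started manually from the CLI (assuming the default impact policy and schedule), after which its progress appears in the job list:

# isi job jobs start SmartPools
# isi job jobs list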

Custom File Attributes, or user attributes, can be used when more granular control is needed than can be achieved using the standard file attributes options (File Name, Path, File Type, File Size, Modified Time, Create Time, Metadata Change Time, Access Time).  User Attributes use key value pairs to tag files with additional identifying criteria which SmartPools can then use to apply File Pool policies. While SmartPools has no utility to set file attributes, this can be done easily by using the ‘setextattr’ command.

Custom File Attributes are generally used to designate ownership or create project affinities. Once set, they are leveraged by SmartPools just as File Name, File Type or any other file attribute to specify location, protection and performance access for a matching group of files.

For example, the following CLI commands can be used to set and verify the existence of the attribute ‘key1’ with value ‘val1’ on a file ‘attrib.txt’:

# setextattr user key1 val1 attrib.txt
# getextattr user key1 attrib.txt
attrib.txt    val1
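
Since OneFS is FreeBSD-based, the companion extended attribute utilities can also be handy here. For example, the following sketch lists and then removes a user attribute on the same file, using standard FreeBSD tools:

# lsextattr user attrib.txt
# rmextattr user key1 attrib.txt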

A File Pool policy can be crafted to match and act upon a specific custom attribute and/or value.

For example, the File Policy below, created via the OneFS WebUI, will match files with the custom attribute ‘key1=val1’ and move them to the ‘Archive_1’ tier:

 


Once a subset of a cluster’s files have been marked with a custom attribute, either manually or as part of a custom application or workflow, they will then be moved to the Archive_1 tier upon the next successful run of the SmartPools job.

The file system explorer (and ‘isi get -D’ CLI command) provides a detailed view of where SmartPools-managed data is at any time by both the actual Node Pool location and the File Pool policy-dictated location (i.e. where that file will move after the next successful completion of the SmartPools job).

When data is written to the cluster, SmartPools writes it to a single Node Pool only.  This means that, in almost all cases, a file exists in its entirety within a Node Pool, and not across Node Pools.  SmartPools determines which pool to write to based on one of two situations:

  • If a file matches a file pool policy based on directory path, that file will be written into the Node Pool dictated by the File Pool policy immediately.
  • If a file matches a file pool policy which is based on any other criteria besides path name, SmartPools will write that file to the Node Pool with the most available capacity.

If the file matches a file pool policy that places it on a different Node Pool than the highest capacity Node Pool, it will be moved when the next scheduled SmartPools job runs.

For performance, charge back, ownership or security purposes it is sometimes important to know exactly where a specific file or group of files is on disk at any given time.  While any file in a SmartPools environment typically exists entirely in one Storage Pool, there are exceptions when a single file may be split (usually only on a temporary basis) across two or more Node Pools at one time.

SmartPools generally only allows a file to reside in one Node Pool, but a file may temporarily span several Node Pools in certain situations.  When a File Pool policy dictates a file move from one Node Pool to another, that file will exist partially on the source Node Pool and partially on the destination Node Pool until the move is complete.  If the Node Pool configuration is changed (for example, when splitting a Node Pool into two Node Pools), a file may be split across those two new pools until the next scheduled SmartPools job runs.  If a Node Pool fills up and data spills over to another Node Pool so the cluster can continue accepting writes, a file may be split over the intended Node Pool and the default spillover Node Pool.  The final circumstance under which a file may span more than one Node Pool is during typical restriping activities, such as cross-Node Pool rebalances or rebuilds.


Author: Nick Trimbee

 

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

OneFS Path-based File Pool Policies

Nick Trimbee

Thu, 13 Jan 2022 16:30:42 -0000

|

Read Time: 0 minutes

As we saw in a previous article, when data is written to the cluster, SmartPools determines which pool to write to based on whether the file matches a File Pool policy by directory path or by other criteria.

If a file matches a file pool policy which is based on any other criteria besides path name, SmartPools will write that file to the Node Pool with the most available capacity.

However, if a file matches a file pool policy based on directory path, that file will be written into the Node Pool dictated by the File Pool policy immediately.

 

 

If the file matches a file pool policy that places it on a different Node Pool than the highest capacity Node Pool, it will be moved when the next scheduled SmartPools job runs.

If a filepool policy applies to a directory, any new files written to it will automatically inherit the settings from the parent directory. Typically, there is not much variance between the directory and the new file. So, assuming the settings are correct, the file is written straight to the desired pool or tier, with the appropriate protection, etc. This applies to access protocols like NFS and SMB, as well as copy commands like ‘cp’ issued directly from the OneFS command line interface (CLI). However, if the file settings differ from the parent directory, the SmartPools job will correct them and restripe the file. This will happen when the job next runs, rather than at the time of file creation.

However, if a file is simply moved into the directory (using UNIX CLI commands such as cp, mv, and so on), its re-tiering will not occur until a SmartPools, SetProtectPlus, MultiScan, or AutoBalance job runs to completion. Since these jobs can each perform a re-layout of data, this is when the files will be re-assigned to the desired pool. The file movement can be verified by running the following command from the OneFS CLI:

# isi get -dD <dir>

So the key is whether you’re doing a copy (that is, a new write) or not. As long as you’re doing writes and the parent directory of the destination has the appropriate file pool policy applied, you should get the behavior you want.

One thing to note: if the desired operation is really a move rather than a copy, it may be faster to change the file pool policy and then run a recursive ‘isi filepool apply --recurse’ on the affected files.

There’s negligible difference between using an NFS or SMB client versus performing the copy on-cluster via the OneFS CLI. As mentioned above, using isi filepool apply will be slightly quicker than a straight copy and delete, since the copy is parallelized above the filesystem layer.

A file pool policy may be crafted which dictates that anything written to path /ifs/path1 is automatically moved directly to the Archive tier. This can easily be configured from the OneFS WebUI by navigating to File System > Storage Pools > File Pool Policies:

 

In the example above, a path based policy is created such that data written to /ifs/path1 will automatically be placed on the cluster’s F600 node pool.

For file Pool Policies that dictate placement of data based on its path, data typically lands on the correct node pool or tier without a SmartPools job running.  File Pool Policies that dictate placement of data on other attributes besides path name get written to Disk Pool with the highest available capacity and then moved, if necessary, to match a File Pool policy, when the next SmartPools job runs.  This ensures that write performance is not sacrificed for initial data placement.

Any data not covered by a File Pool policy is moved to a tier that can be selected as a default for exactly this purpose.  If no Disk Pool has been selected for this purpose, SmartPools will default to the Node Pool with the most available capacity.

Be aware that, when reconfiguring an existing path-based filepool policy to target a different nodepool or tier, the change will not immediately take effect for new incoming data. The directory where new files will be created must be updated first, and there are several options available to address this:

  • Running the SmartPools job will achieve this. However, this can take a significant amount of time, as the job may entail restriping or migrating a large quantity of file data.
  • Invoking the ‘isi filepool apply <path>’ command on the single directory in question will do it very rapidly. This option is ideal for a single, or small number of, ‘incoming’ data directories.
  • To update all directories in a given subtree, but not affect the files’ actual data layouts, use:
# isi filepool apply --dont-restripe --recurse /ifs/path1


  • OneFS also contains the SmartPoolsTree job engine job specifically for this purpose. This can be invoked as follows:
# isi job start SmartPoolsTree --directory-only --path /ifs/path
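
Whichever method is used, the resulting job can be tracked from the CLI, for example (the job ID below is purely illustrative):

# isi job status
# isi job jobs view 273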

For example, a cluster has both an F600 pool and an A2000 pool. A directory (/ifs/path1) is created and a file (file1.txt) written to it:

# mkdir /ifs/path1
# cd !$; touch file1.txt

As we can see, this file is written to the default A2000 pool:

# isi get -DD /ifs/path1/file1.txt | grep -i pool
*  Disk pools:         policy any pool group ID -> data target a2000_200tb_800gb-ssd_16gb:97(97), metadata target a2000_200tb_800gb-ssd_16gb:97(97)

Next, a path-based file pool policy named ‘Path1’ is created such that files written to /ifs/path1 are automatically directed to the cluster’s F600 tier:

# isi filepool policies create Path1 --data-storage-target f600_30tb-ssd_192gb --begin-filter --path=/ifs/path1 --end-filter
# isi filepool policies list
Name  Description  CloudPools State
------------------------------------
Path1              No access
------------------------------------    
Total: 1
# isi filepool policies view Path1
Name: Path1
Description:
                   CloudPools State: No access
                CloudPools Details: Policy has no CloudPools actions
                       Apply Order: 1
             File Matching Pattern: Path == path1 (begins with)
          Set Requested Protection: -
               Data Access Pattern: -
                  Enable Coalescer: -
                    Enable Packing: -
               Data Storage Target: f600_30tb-ssd_192gb
                 Data SSD Strategy: metadata
           Snapshot Storage Target: -
             Snapshot SSD Strategy: -
                        Cloud Pool: -
         Cloud Compression Enabled: -
          Cloud Encryption Enabled: -
              Cloud Data Retention: -
Cloud Incremental Backup Retention: -
       Cloud Full Backup Retention: -
               Cloud Accessibility: -
                  Cloud Read Ahead: -
            Cloud Cache Expiration: -
         Cloud Writeback Frequency: -
                                ID: Path1

The ‘isi filepool apply’ command is run on /ifs/path1 in order to activate the path-based file policy:

# isi filepool apply /ifs/path1

A file (file-new1.txt) is then created under /ifs/path1:

# touch /ifs/path1/file-new1.txt

An inspection shows that this file is written to the F600 pool, as expected per the Path1 file pool policy:

# isi get -DD /ifs/path1/file-new1.txt | grep -i pool
*  Disk pools:         policy f600_30tb-ssd_192gb(9) -> data target f600_30tb-ssd_192gb:10(10), metadata target f600_30tb-ssd_192gb:10(10)
 

The legacy file (/ifs/path1/file1.txt) is still on the A2000 pool, despite the path-based policy. However, this policy can be enacted on pre-existing data by running the following:

# isi filepool apply --dont-restripe --recurse /ifs/path1

Now, the legacy file’s disk pool policy also targets the F600 pool, and any new writes to the /ifs/path1 directory will be written to the F600s. Note that, because the ‘--dont-restripe’ flag was used, the existing blocks of file1.txt remain on the A2000s (as shown in the data target below) until the file is restriped by a job such as SmartPools:

# isi get -DD file1.txt | grep -i pool
*  Disk pools:         policy f600_30tb-ssd_192gb(9) -> data target a2000_200tb_800gb-ssd_16gb:97(97), metadata target a2000_200tb_800gb-ssd_16gb:97(97)

 


Author: Nick Trimbee

Read Full Blog
  • data storage
  • Isilon
  • PowerScale

PowerScale Gen6 Chassis Hardware Resilience

Nick Trimbee

Thu, 13 Jan 2022 16:48:24 -0000

|

Read Time: 0 minutes

In this article, we’ll take a quick look at the OneFS journal and boot drive mirroring functionality in PowerScale chassis-based hardware:

PowerScale Gen6 platforms, such as the new H700/7000 and A300/3000, store the local filesystem journal and its mirror in the DRAM of the battery-backed compute node blade.  Each 4RU Gen6 chassis houses four nodes, and each node comprises a ‘compute node blade’ (CPU, memory, NICs), plus drive containers, or sleds, for its storage.

A node’s file system journal is protected against sudden power loss or hardware failure by OneFS journal vault functionality – otherwise known as ‘powerfail memory persistence’ (PMP). PMP automatically stores both the local journal and the journal mirror on a separate flash drive across both nodes in a node pair:

This journal de-staging process is known as ‘vaulting’, during which the journal is protected by a dedicated battery in each node until it’s safely written from DRAM to SSD on both nodes in a node-pair. With PMP, constant power isn’t required to protect the journal in a degraded state since the journal is saved to M.2 flash and mirrored on the partner node.

So, the mirrored journal is comprised of both hardware and software components, including the following constituent parts:

Journal Hardware Components

  • System DRAM
  • M.2 vault flash
  • Battery Backup Unit (BBU)
  • Non-Transparent Bridge (NTB) PCIe link to partner node
  • Clean copy on disk

Journal Software Components

  • Power-fail Memory Persistence (PMP)
  • Mirrored Non-volatile Interface (MNVI)
  • IFS Journal + Node State Block (NSB)
  • Utilities

Asynchronous DRAM Refresh (ADR) preserves RAM contents when the operating system is not running. ADR is important for preserving RAM journal contents across reboots, and it does not require any software coordination to do so.

The journal vault feature encompasses the hardware, firmware, and operating system support that ensure the journal’s contents are preserved across power failure. The mechanism is similar to the NVRAM controller on previous generation nodes but does not use a dedicated PCI card.

On power failure, the PMP vaulting functionality is responsible for copying both the local journal and the local copy of the partner node’s journal to persistent flash. On restoration of power, PMP is responsible for restoring the contents of both journals from flash to RAM and notifying the operating system.

A single dedicated flash device is attached via M.2 slot on the motherboard of the node’s compute module, residing under the battery backup unit (BBU) pack. To be serviced, the entire compute module must be removed.

If the M.2 flash needs to be replaced for any reason, it will be properly partitioned and the PMP structure will be created as part of arming the node for vaulting.

The battery backup unit (BBU), when fully charged, provides enough power to vault both the local and partner journal during a power failure event.

A single battery is utilized in the BBU, which also supports back-to-back vaulting.

On the software side, the journal’s Power-fail Memory Persistence (PMP) provides an equivalent to the NVRAM controller‘s vault/restore capabilities to preserve Journal. The PMP partition on the M.2 flash drive provides an interface between the OS and firmware.

If a node boots and its primary journal is found to be invalid for whatever reason, it has three paths for recourse:

  • Recover journal from its M.2 vault.
  • Recover journal from its disk backup copy.
  • Recover journal from its partner node’s mirrored copy.

The mirrored journal must guard against rolling back to a stale copy of the journal on reboot. This necessitates storing information about the state of journal copies outside the journal. As such, the Node State Block (NSB) is a persistent disk block that stores local and remote journal status (clean/dirty, valid/invalid, etc), as well as other non-journal information. NSB stores this node status outside the journal itself and ensures that a node does not revert to a stale copy of the journal upon reboot.

Here’s the detail of an individual node’s compute module:

Of particular note is the ‘journal active’ LED, which is displayed as a white hand icon.

When this white hand icon is illuminated, it indicates that the mirrored journal is actively vaulting, and it is not safe to remove the node!

There is also a blue ‘power’ LED, and a yellow ‘fault’ LED per node. If the blue LED is off, the node may still be in standby mode, in which case it may still be possible to pull debug information from the baseboard management controller (BMC).

The flashing yellow ‘fault’ LED has several state indication frequencies:

Blink Speed | Blink Frequency | Indicator
Fast blink | ¼ Hz | BIOS
Medium blink | 1 Hz | Extended POST
Slow blink | 4 Hz | Booting OS
Off | Off | OS running

The mirrored non-volatile interface (MNVI) sits below /ifs and above RAM and the NTB, and provides the abstraction of a reliable memory device to the /ifs journal. MNVI is responsible for synchronizing journal contents to peer node RAM, at the direction of the journal, and persisting writes to both systems while in a paired state. It upcalls into the journal on NTB link events and notifies the journal of operation completion (mirror sync, block IO, etc.).

For example, when rebooting after a power outage, a node automatically loads the MNVI. It then establishes a link with its partner node and synchronizes its journal mirror across the PCIe Non-Transparent Bridge (NTB).

Prior to mounting /ifs, OneFS locates a valid copy of the journal from one of the following locations in order of preference:

Order | Journal Location | Description
1st | Local disk | A local copy that has been backed up to disk
2nd | Local vault | A local copy of the journal restored from Vault into DRAM
3rd | Partner node | A mirror copy of the journal from the partner node

 

If the node was shut down properly, it will boot using a local disk copy of the journal.  The journal will be restored into DRAM and /ifs will mount. On the other hand, if the node suffered a power disruption the journal will be restored into DRAM from the M.2 vault flash instead (the PMP copies the journal into the M.2 vault during a power failure).

In the event that OneFS is unable to locate a valid journal on either the hard drives or M.2 flash on a node, it will retrieve a mirrored copy of the journal from its partner node over the NTB.  This is referred to as ‘Sync-back’.

Note: Sync-back state only occurs when attempting to mount /ifs.

On booting, if a node detects that its journal mirror on the partner node is out of sync (invalid), but the local journal is clean, /ifs will continue to mount.  Subsequent writes are then copied to the remote journal in a process known as ‘sync-forward’.

Here’s a list of the primary journal states:

Journal State | Description
Sync-forward | State in which writes to a journal are mirrored to the partner node.
Sync-back | Journal is copied back from the partner node. Only occurs when attempting to mount /ifs.
Vaulting | Storing a copy of the journal on M.2 flash during power failure. Vaulting is performed by PMP.

 During normal operation, writes to the primary journal and its mirror are managed by the MNVI device module, which writes through local memory to the partner node’s journal via the NTB. If the NTB is unavailable for an extended period, write operations can still be completed successfully on each node. For example, if the NTB link goes down in the middle of a write operation, the local journal write operation will complete. Read operations are processed from local memory.

Additional journal protection for Gen 6 nodes is provided by OneFS powerfail memory persistence (PMP) functionality, which guards against PCI bus errors that can cause the NTB to fail.  If an error is detected, the CPU requests a ‘persistent reset’, during which the memory state is protected and the node is rebooted. When the node is back up again, the journal is marked as intact and no further repair action is needed.

If a node loses power, the hardware notifies the BMC, initiating a memory persistent shutdown.  At this point the node is running on battery power. The node is forced to reboot and load the PMP module, which preserves its local journal and its partner’s mirrored journal by storing them on M.2 flash.  The PMP module then disables the battery and powers itself off.

Once power is back on and the node restarted, the PMP module first restores the journal before attempting to mount /ifs.  Once done, the node then continues through system boot, validating the journal, setting sync-forward or sync-back states, etc.

During boot, isi_checkjournal and isi_testjournal will invoke isi_pmp. If the M.2 vault devices are unformatted, isi_pmp will format the devices.

On clean shutdown, isi_save_journal stashes a backup copy of the /dev/mnv0 device on the root filesystem, just as it does for the NVRAM journals in previous generations of hardware.

If a mirrored journal issue is suspected, or notified via cluster alerts, the best place to start troubleshooting is to take a look at the node’s log events. The journal logs to /var/log/messages, with entries tagged as ‘journal_mirror’.
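
For example, a simple grep will pull the relevant entries from the log described above:

# grep -i journal_mirror /var/log/messages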

The following new CELOG events have also been added in OneFS 8.1 for cluster alerting about mirrored journal issues:

CELOG Event | Description
HW_GEN6_NTB_LINK_OUTAGE | Non-transparent bridge (NTB) PCIe link is unavailable
FILESYS_JOURNAL_VERIFY_FAILURE | No valid journal copy found on node

Another reliability optimization for the Gen6 platform is boot mirroring. Gen6 does not use dedicated bootflash devices, as with previous generation nodes. Instead, OneFS boot and other OS partitions are stored on a node’s data drives. These OS partitions are always mirrored (except for crash dump partitions). The two mirrors protect against disk sled removal. Since each drive in a disk sled belongs to a separate disk pool, both elements of a mirror cannot live on the same sled.

The boot and other OS partitions are 8GB and reserved at the beginning of each data drive for boot mirrors. OneFS automatically rebalances these mirrors in anticipation of, and in response to, service events. Mirror rebalancing is triggered by drive events such as suspend, softfail and hard loss.

The following command will confirm that boot mirroring is working as intended:

# isi_mirrorctl verify

When it comes to smartfailing nodes, here are a couple of other things to be aware of with mirror journal and the Gen6 platform:

  • When you smartfail a node in a node pair, you do not have to smartfail its partner node.
  • A node will still run indefinitely with its partner missing. However, this significantly increases the window of risk since there’s no journal mirror to rely on (in addition to lack of redundant power supply, etc).
  • If you do smartfail a single node in a pair, the journal is still protected by the vault and powerfail memory persistence.

 

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS

PowerScale Platform Update

Nick Trimbee

Wed, 12 Jan 2022 22:52:06 -0000

|

Read Time: 0 minutes

In this blog, we’ll take a quick peek at the new PowerScale Hybrid H700/7000 and Archive A300/3000 hardware platforms that were released in September 2021. So the current PowerScale platform family hierarchy is as follows:

Here’s the lowdown on the new additions to the hardware portfolio:

Model | Tier | Drives per Chassis | Max Chassis Capacity (16TB HDD) | CPU per Node | Memory per Node | Network
H700 | Hybrid/Utility | Standard: 60 x 3.5” HDD | 960TB | 2.9GHz, 16c | 384GB | FE: 100GbE; BE: 100GbE or IB
H7000 | Hybrid/Utility | Deep: 80 x 3.5” HDD | 1280TB | 2.9GHz, 16c | 384GB | FE: 100GbE; BE: 100GbE or IB
A300 | Archive | Standard: 60 x 3.5” HDD | 960TB | 1.9GHz, 16c | 96GB | FE: 25GbE; BE: 25GbE or IB
A3000 | Archive | Deep: 80 x 3.5” HDD | 1280TB | 1.9GHz, 16c | 96GB | FE: 25GbE; BE: 25GbE or IB

The PowerScale H700 provides performance and value to support demanding file workloads. With up to 960 TB of HDD per chassis, the H700 also includes inline compression and deduplication capabilities to further extend the usable capacity.

The PowerScale H7000 is a versatile, high performance, high capacity hybrid platform with up to 1280 TB per chassis. The deep chassis based H7000 can consolidate a range of file workloads on a single platform. The H7000 includes inline compression and deduplication capabilities.

On the active archive side, the PowerScale A300 combines performance, near-primary accessibility, value, and ease of use. The A300 provides between 120 TB to 960 TB per chassis and scales to 60 PB in a single cluster. The A300 includes inline compression and deduplication capabilities.

The PowerScale A3000 is an ideal solution for high performance, high density, deep archive storage that safeguards data efficiently for long-term retention. The A3000 stores up to 1280 TB per chassis and scales to north of 80 PB in a single cluster. The A3000 also includes inline compression and deduplication.

These new H700/7000 and A300/3000 nodes require OneFS 9.2.1, and can be seamlessly added to an existing cluster, offering the full complement of OneFS data services including snapshots, replication, quotas, analytics, data reduction, load balancing, and local and cloud tiering. In addition to the storage HDDs, all also contain a small quantity of SSD for L3 cache or metadata acceleration.
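
Before adding the new nodes, it is worth confirming that the existing cluster is already running OneFS 9.2.1 or later, for example:

# isi version
# isi status -q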

Unlike the all-flash PowerScale F900, F600, and F200 stand-alone nodes, which require a minimum of three nodes to form a cluster, these chassis-based platforms require a single chassis of four nodes to create a cluster, with support for both InfiniBand and Ethernet backend network connectivity.

Each H700/7000 and A300/3000 chassis contains four compute modules (one per node), and five drive containers, or sleds, per node. These sleds occupy bays in the front of each chassis, with a node’s drive sleds stacked vertically:

The drive sled is a tray that slides into the front of the chassis, and contains between three and four 3.5 inch drives in an H700/0 or A300/0, depending on the drive size and configuration of the particular node. Both regular hard drives and self-encrypting drives (SEDs) are available in 2, 4, 8, 12, and 16TB capacities.

Each drive sled has a white ‘not safe to remove’ LED on its front top left, as well as a blue power/activity LED, and an amber fault LED.

The compute modules for each node are housed in the rear of the chassis, and contain CPU, memory, networking, and SSDs, as well as power supplies. Nodes 1 and 2 are a node pair, as are nodes 3 and 4. Each node pair shares a mirrored journal and two power supplies:

Here’s the detail of an individual compute module, which contains a multi-core Cascade Lake CPU, memory, an M.2 flash journal, up to two SSDs for L3 cache, six DIMM channels, front-end 40/100 or 10/25 Gb Ethernet, back-end 40/100 or 10/25 Gb Ethernet or InfiniBand, an Ethernet management interface, and power supply and cooling fans:

On the front of each chassis is an LCD front panel control with back-lit buttons and four LED Light Bar Segments – one per node. These LEDs typically display blue for normal operation or yellow to indicate a node fault. This LCD display is hinged so it can be swung clear of the drive sleds for non-disruptive HDD replacement:

So, in summary, the new Gen6 hardware delivers:

  • More Power
    • More cores, more memory, and more cache
    • A300/3000 up to 2x faster than previous generation (A200/2000)
  • More Choice
    • 100GbE, 25GbE, and InfiniBand options for cluster interconnect
    • Node compatibility for all hybrid and archive nodes
    • 30 TB to 320 TB per rack unit
  • More Value
    • Inline data reduction across the PowerScale family
    • Lowest $/GB and most density among comparable solutions

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • node exclusion

OneFS Job Execution and Node Exclusion

Nick Trimbee

Thu, 06 Jan 2022 23:26:13 -0000

|

Read Time: 0 minutes

Up through OneFS 9.2, a job engine job was an all or nothing entity. Whenever a job ran, it involved the entire cluster – regardless of individual node type, load, or condition. As such, any nodes that were overloaded or in a degraded state could still impact the execution ability of the job at large.

To address this, OneFS 9.3 provides the capability to exclude one or more nodes from participating in running a job. This allows the temporary removal of any nodes with high load, or other issues, from the job execution pool so that jobs do not become stuck.

The majority of the OneFS job engine’s jobs have no default schedule and are typically manually started by a cluster administrator or process. Other jobs such as FSAnalyze, MediaScan, ShadowStoreDelete, and SmartPools, are normally started via a schedule. The job engine can also initiate certain jobs on its own. For example, if the SnapshotIQ process detects that a snapshot has been marked for deletion, it will automatically queue a SnapshotDelete job.

The Job Engine will also execute jobs in response to certain system event triggers. In the case of a cluster group change, for example the addition or subtraction of a node or drive, OneFS automatically informs the job engine, which responds by starting a FlexProtect job. The coordinator notices that the group change includes a newly-smart-failed device and then initiates a FlexProtect job in response.

Job administration and execution can be controlled via the WebUI, CLI, or platform API. A job can be started, stopped, paused and resumed, and this is managed via the job engines’ check-pointing system. For each of these control methods, additional administrative security can be configured using roles-based access control (RBAC).

The job engine’s impact control and work throttling mechanism can limit the rate at which individual jobs can run. Throttling is employed at a per-manager process level, so job impact can be managed both granularly and gracefully.

 

Every twenty seconds, the coordinator process gathers cluster CPU and individual disk I/O load data from all the nodes across the cluster. The coordinator uses this information, in combination with the job impact configuration, to decide how many threads can run on each cluster node to service each running job. This can be a fractional number, and fractional thread counts are achieved by having a thread sleep for a given percentage of each second.

Using this CPU and disk I/O load data, every sixty seconds the coordinator evaluates how busy the various nodes are and makes a job throttling decision, instructing the various job engine processes as to the action they need to take. This enables throttling to be sensitive to workloads in which CPU and disk I/O load metrics yield different results. There are also separate load thresholds tailored to the different classes of drives used in OneFS powered clusters, from capacity optimized SATA disks to flash-based SSDs.

Configuration is via the OneFS CLI and gconfig and is global, such that it applies to all jobs on startup. However, the exclusion configuration is not dynamic, and once a job is started with the final node set, there is no further reconfiguration permitted. So if a participant node is excluded, it will remain excluded until the job has completed. Similarly, if a participant needs to be excluded, the current job will have to be cancelled and a new job started. Any nodes can be excluded, including the node running the job engine’s coordinator process. The coordinator will still monitor the job, it just won’t spawn a manager for the job.

The list of participating nodes for a job are computed in three phases:

  1. Query the cluster’s GMP group.
  2. Call to job.get_participating_nodes to get a subset from the gmp group.
  3. Remove the nodes listed in core.excluded_participants from the subset.

The CLI syntax for configuring an excluded nodes list on a cluster is as follows (in this example, excluding nodes one through three):

# isi_gconfig -t job-config core.excluded_participants="{1,2,3}"

The ‘excluded_participants’ are entered as a comma-separated devid value list with no spaces, specified within braces and double quotes. All excluded nodes must be specified in full, since there’s no aggregation. Note that, while the excluded participant configuration will be displayed via gconfig, it is not reported as part of the ‘sysctl efs.gmp.group’ output.
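
The current exclusion configuration can be confirmed by querying the same gconfig parameter, for example (output format may vary slightly):

# isi_gconfig -t job-config core.excluded_participants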

A job engine node exclusion configuration can be easily reset to avoid excluding any nodes by assigning the “{}” value:

# isi_gconfig -t job-config core.excluded_participants="{}"

A ‘core.excluded_participant_percent_warn’ parameter defines the maximum percentage of removed nodes:

# isi_gconfig -t job-config core.excluded_participant_percent_warn
core.excluded_participant_percent_warn (uint) = 10

This parameter defaults to 10%, above which a CELOG event warning is generated.

As many nodes as desired can be removed from the job group. A CELOG informational event will report any removed nodes. If too many nodes have been removed (that is, the threshold set by the gconfig parameter above is exceeded), CELOG will fire a warning event. If nodes are specified for removal but are not part of the GMP group, a different warning event will be triggered.

If all nodes are removed, a CLI/pAPI error will be returned, the job will fail, and a CELOG warning will fire. For example:

# isi job jobs start LinCount

Job operation failed: The job had no participants left. Check core.excluded_participants setting and make sure there is at least one node to run the job:  Invalid argument

# isi job status

10   LinCount         Failed    2021-10-24T20:45:23

------------------------------------------------------------------

Total: 9

Note, however, that the following core system maintenance jobs will continue to run across all nodes in a cluster even if a node exclusion has been configured:

  • AutoBalance
  • Collect
  • FlexProtect
  • MediaScan
  • MultiScan

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • Google Cloud
  • Dell EMC PowerScale

Setting Up PowerScale for Google Cloud SmartConnect

Lieven Lin

Wed, 29 Dec 2021 17:48:23 -0000

|

Read Time: 0 minutes

In the Dell EMC PowerScale for Google Cloud solution, OneFS uses the cluster service FQDN as its SmartConnect Zone name with a round-robin client-connection balancing policy. The round-robin policy is a default setting and is recommended for most cases in OneFS. (For more details about the OneFS SmartConnect load-balancing policy, see the Load Balancing section of the white paper Dell EMC PowerScale: Network Design Considerations.)

After the cluster is deployed, you must find the OneFS SmartConnect service IP in the clusters page within Google Cloud Console. Then, configure your DNS server to delegate the cluster service FQDN zone to the OneFS Service IP. You need to configure a forwarding rule in Google Cloud DNS which forwards the cluster service FQDN query to the DNS server, and set up a zone delegation on the DNS server that points to the cluster service IP. The following figure shows the DNS query flow by leveraging Google Cloud DNS along with a DNS server in the project.

  1. VM clients send a DNS request for Cluster service FQDN to the Google Cloud DNS service.
  2. Google Cloud DNS forwards the request to the DNS server.
  3. The DNS server forwards the request to the cluster service IP, which is responsible for resolving the cluster service FQDN to an available node IP.
  4. SmartConnect returns a node IP to the client. The client can now access cluster data.

Because Google Cloud DNS cannot communicate with the OneFS cluster directly, we use a DNS server that is located in the authorized VPC network to forward the SmartConnect DNS request to the cluster. You can use either a Windows server or a Linux server. In this blog we use a Windows server to show the detailed steps.

Obtain required cluster information

The following information is required before setting up SmartConnect:

  • Cluster service FQDN -- This is the OneFS SmartConnect zone name used by clients.
  • Service IP -- This is the OneFS SmartConnect service IP that is responsible for resolving the client DNS request and returning an available node IP to clients.
  • Authorized network -- By default, only the machines on an authorized VPC network can access a PowerScale cluster.

To obtain this required information, do the following:

  1. In the Google Cloud Console navigation menu, click PowerScale and then click Clusters.
  2. Find your cluster row, where you can see the cluster service FQDN and service IP:

3. To find the authorized network information, click the name of the cluster. From the PowerScale Cluster Details page, find the authorized network from the Network information, highlighted here:

Set up a DNS server

If you already have an available DNS server that is connected to the cluster authorized network, you can use this existing DNS server and skip Step 1 and Step 2 below.

  1. In the Google Cloud Console navigation menu, click Compute Engine and then click VM instances. In this example, we are creating a Windows VM instance as a DNS server. Make sure your DNS server is connected to the cluster authorized network.
  2. Log into the DNS server and install DNS Server Role in the Windows machine. (If you are using a Linux machine, you can use Bind software instead.)
  3. Create a new DNS zone in the DNS server:

4. Create an (A) record for the cluster service IP. (See the section DNS delegation best practices of the white paper Dell EMC PowerScale: Network Design Considerations for more details.)

5. Create a new delegation for your cluster service FQDN (sc-demo.tme.local in this example) and point the delegation server to your cluster service IP (A) record created above (sip-demo.tme.local in this example), as shown here:

Configure Cloud DNS and firewall rules

  1. In the Google Cloud Console navigation menu, click Network services and then click Cloud DNS.
  2. Click the CREATE ZONE button.
  3. Choose the Private zone type and enter your Cluster Service FQDN in the DNS name field. Choose Forward queries to another server and your cluster authorized network, as shown here:

4. Obtain the DNS server IP address that you configured in the ‘Set up a DNS server’ step.

5. Point the destination DNS server to your own DNS server IP address, then click the Create button.

6. Add firewall rules to allow ingress DNS traffic to your DNS server from Cloud DNS. In the Google Cloud Console navigation menu, click VPC network and then click Firewall.

7. Click the CREATE FIREWALL RULE button.

8. Create a new Firewall rule and include the following options:

  • In the Network field, make sure the cluster authorized network is selected.
  • Source filter: IPv4 ranges
  • Source IPv4 ranges: 35.199.192.0/19. This is the IP range Cloud DNS requests will originate from. See Cloud DNS zones overview for more details.
  • Protocols and ports: TCP 53 and UDP 53.

See the following example:

9. The resulting firewall rule in Google Cloud will appear as follows:

Verify your SmartConnect

  1. Log into a VM instance that is connected to an authorized network. (This example uses a Linux machine.)
  2. Resolve the cluster service FQDN via nslookup and mount a file share via NFS.
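
For example, from the Linux client, using the sc-demo.tme.local FQDN from the earlier DNS delegation example (the NFS export path and mount point below are purely illustrative):

# nslookup sc-demo.tme.local
# mkdir -p /mnt/demo
# mount -t nfs sc-demo.tme.local:/ifs/data /mnt/demo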

Conclusion

A PowerScale cluster is a distributed file system composed of multiple nodes. We always recommend using the SmartConnect feature to balance client connections across all cluster nodes. This way, you can maximize PowerScale cluster performance and provide maximum value to your business. Try it now in your Dell EMC PowerScale for Google Cloud solution.

Author: Lieven Lin


Read Full Blog
  • PowerScale
  • OneFS
  • Google Cloud
  • Gallery SIENNA

Live Broadcast Recording Using OneFS for Google Cloud, Gallery SIENNA ND, and Adobe Premiere Pro

Andy Copeland

Fri, 07 Jan 2022 14:03:44 -0000

|

Read Time: 0 minutes

Here at Dell Technologies, we tested a cloud native real-time NDI ISO feed ingest workflow based on Gallery SIENNA, OneFS, and Adobe Premiere Pro, all running natively in Google Cloud.

TL; DR... it's awesome!

Mark Gilbert (CTO at Gallery SIENNA) had noticed there was a growing demand in the market for highly scalable, enterprise-grade file storage in the public cloud for ISO recording. So, we were excited to test this much-needed solution.

Sure, we could have just spun up a cloud-compute instance, created some SMB shares or NFS exports on it, and off you go. But then you quickly find that your ability to scale becomes an issue.

Arguably, the most critical part of any live broadcast is the bulk recording of ISO feeds, and as camera technology improves, recorded data is growing at an ever-increasing pace. Resolutions are increasing, frame rates are faster and internet connection pipes are getting larger.

This is where OneFS for Google Cloud steps in.

Remote production is now a must rather than a nice-to-have for every studio. The production world has had to adopt it, embrace it and buckle in for the ride. There are some great products out there to help businesses enable remote-production workflows. Gallery SIENNA is one of these products. It enables NDI-from-anywhere workflows that help to reduce utilization on over-contended connections.

You can purchase OneFS for Google Cloud through the Google Cloud Marketplace, attach it to a Gallery SIENNA Processing Engine via NFS export and start recording at the click of a button. In our testing, as soon as the recorders began writing, we were able to open and manipulate the files in Adobe Premiere Pro, which we connected to via SMB to prove out that multi-protocol worked too. This was all while the files were being recorded, and we could expand them in real-time in the timeline as they grow.

Infrastructure components (provisioned in Google Cloud):

  • 1 x OneFS for Google Cloud
  • 1 x Ubuntu VM
    • Running Gallery SIENNA ND Processing Engine
  • 1 x Windows 10 VM
    • NDI Tools
    • Adobe Premiere Pro

We used a SIENNA ND Processing Engine to generate six real-time record feeds: three were 3840p60 NDI, and the other three were 1080p30 DNxHD 145.

One of the great benefits of using Gallery SIENNA ND on Google Cloud is that our ingest could have come from anywhere. We could have used any internet-connected device that can reach the Google Cloud instance, be that a static connection in a purpose-built facility or a 4G/5G cell phone camera on the street with the NDI tools on it.

High-level workflow:

  1. Added a Signal Generator node (3840p60) into our SIENNA ND Processing Engine instance
  2. Used the SIENNA ND node-based architecture to add on a timecode burn and frame sync
  3. Added 3 x <NDI Recorder>
  4. Configured the recorders to write out to an NFS export on our OneFS for Google Cloud instance
  5. Added a StreamLink Test node (1080p30) into the same SIENNA ND Processing Engine instance
  6. Added timecode burn and frame sync nodes again
  7. Added 3 x <DNxHD 145 Recorder>
  8. Configured the recorders to write out to the same NFS export on our OneFS for Google Cloud instance
  9. Hit record on all recorders

Once the record was running, we added a "Media Picker" node and selected one of the files that we were recording. Then, we connected this growing file and one of the frame-sync outputs to a "multiviewer" node. We then watched both the live feed and chase play from disk as it was being laid down.

To cap it off, we also mounted one of the output paths using SMB from a Google Cloud hosted Windows 10 instance running Adobe Premiere Pro, and we were able to import, scrub and expand the files as they grew in real-time, allowing us to chase edit.

To find out more about the Dell Technologies offers for Media and Entertainment, feel free to get in touch by DM, or click here to find one of our experts in your time zone.

See the following links for more information about OneFS for Google Cloud and Gallery SIENNA.

Author: Andy Copeland

 



Read Full Blog
  • PowerScale
  • OneFS
  • Dell EMC PowerScale
  • data inlining

OneFS Data Inlining – Performance and Monitoring

Nick Trimbee

Tue, 16 Nov 2021 19:57:36 -0000

|

Read Time: 0 minutes

In the second of this series of articles on data inlining, we’ll shift the focus to monitoring and performance.

The storage efficiency potential of inode inlining can be significant for data sets comprising large numbers of small files, which would have required a separate inode and data blocks for housing these files prior to OneFS 9.3.

Latency-wise, the write performance for inlined file writes is typically comparable to, or slightly better than, that of regular files, because OneFS does not have to allocate extra blocks and protect them. This is also true for reads, because OneFS doesn’t have to search for and retrieve any blocks beyond the inode itself. This also frees up space in the OneFS read caching layers, as well as on disk, in addition to requiring fewer CPU cycles.

The following figure illustrates the levels of indirection a file access request takes to get to its data. Unlike a standard file, an inline file skips the later stages of the path, which involve the inode metatree redirection to the remote data blocks.

Access starts with the Superblock, which is located at multiple fixed block addresses on each drive in the cluster. The Superblock contains the address locations of the LIN Master block, which contains the root of the LIN B+ Tree (LIN table).  The LIN B+Tree maps logical inode numbers to the actual inode addresses on disk, which, in the case of an inlined file, also contains the data. This saves the overhead of finding the address locations of the file’s data blocks and retrieving data from them.

For hybrid nodes with sufficient SSD capacity, using the metadata-write SSD strategy automatically places inlined small files on flash media. However, because the SSDs on hybrid nodes default to 512 byte formatting, when using metadata read/write strategies, you must set the ‘--force-8k-inodes’ flag for these SSD metadata pools in order for files to be inlined. This can be a useful performance configuration for small file HPC workloads, such as EDA, for data that is not residing on an all-flash tier. But keep in mind that forcing 8KB inodes on a hybrid pool’s SSDs will result in a considerable reduction in available inode capacity compared with the default 512 byte inode configuration.

You can use the OneFS ‘isi_drivenum’ CLI command to verify the drive block sizes in a node. For example, the following output for a PowerScale Gen6 H-series node shows drive Bay 1 containing an SSD with 4KB physical and 512 byte logical formatting, and Bays A to E containing hard disks (HDDs) with both 4KB logical and physical formatting.

# isi_drivenum -bz
Bay 1  Physical Block Size: 4096     Logical Block Size:   512
Bay 2  Physical Block Size: N/A     Logical Block Size:   N/A
Bay A0 Physical Block Size: 4096     Logical Block Size:   4096
Bay A1 Physical Block Size: 4096     Logical Block Size:   4096
Bay A2 Physical Block Size: 4096     Logical Block Size:   4096
Bay B0 Physical Block Size: 4096     Logical Block Size:   4096
Bay B1 Physical Block Size: 4096     Logical Block Size:   4096
Bay B2 Physical Block Size: 4096     Logical Block Size:   4096
Bay C0 Physical Block Size: 4096     Logical Block Size:   4096
Bay C1 Physical Block Size: 4096     Logical Block Size:   4096
Bay C2 Physical Block Size: 4096     Logical Block Size:   4096
Bay D0 Physical Block Size: 4096     Logical Block Size:   4096
Bay D1 Physical Block Size: 4096     Logical Block Size:   4096
Bay D2 Physical Block Size: 4096     Logical Block Size:   4096
Bay E0 Physical Block Size: 4096     Logical Block Size:   4096
Bay E1 Physical Block Size: 4096     Logical Block Size:   4096
Bay E2 Physical Block Size: 4096     Logical Block Size:   4096

Note that the SSD disk pools used in PowerScale hybrid nodes that are configured for meta-read or meta-write SSD strategies use 512 byte inodes by default. This can significantly save space on these pools, because they often have limited capacity, but it will prevent data inlining from occurring. By contrast, PowerScale all-flash nodepools are configured by default for 8KB inodes.

The OneFS ‘isi get’ CLI command provides a convenient method to verify which size inodes are in use in a given node pool. The command’s output includes both the inode mirrors size and the inline status of a file.

When it comes to efficiency reporting, OneFS 9.3 provides three improved CLI tools for validating and reporting the presence and benefits of data inlining, namely:

  1. The ‘isi statistics data-reduction’ CLI command has been enhanced to report inlined data metrics, including both the capacity saved and an inlined data efficiency ratio:
# isi statistics data-reduction
                      Recent Writes Cluster Data Reduction
                           (5 mins)
--------------------- ------------- ----------------------
Logical data                 90.16G                 18.05T
Zero-removal saved                0                      -
Deduplication saved           5.25G                624.51G
Compression saved             2.08G                303.46G
Inlined data saved            1.35G                  2.83T
Preprotected physical        82.83G                 14.32T
Protection overhead          13.92G                  2.13T
Protected physical           96.74G                 26.28T
Zero removal ratio         1.00 : 1                      -
Deduplication ratio        1.06 : 1               1.03 : 1
Compression ratio          1.03 : 1               1.02 : 1
Data reduction ratio       1.09 : 1               1.05 : 1
Inlined data ratio         1.02 : 1               1.20 : 1
Efficiency ratio           0.93 : 1               0.69 : 1

Be aware that the effect of data inlining is not included in the data reduction ratio, because inlining does not actually reduce the data in any way; it simply relocates the data and protects it more efficiently. Data inlining is, however, included in the overall storage efficiency ratio.

The ‘Inlined data saved’ value represents the number of files that have been inlined multiplied by 8KB (the inode size). This value is needed to keep the compression ratio and data reduction ratio accurate.
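
As a quick sanity check, multiplying the inlined file count reported by ‘isi_cstats’ in the next item (379,948,336 files) by the 8KB inode size reproduces both the 2.83T ‘Inlined data saved’ figure above and the roughly 2899 GB savings reported by ‘isi_cstats’, while the overall efficiency ratio follows from dividing logical data by protected physical capacity:
# echo '379948336 * 8192 / 1024^3' | bc
2898
# echo 'scale=2; 379948336 * 8192 / 1024^4' | bc
2.83
# echo 'scale=4; 18471 / 26890' | bc
.6869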

  2. The ‘isi_cstats’ CLI command now includes an accounting of the number of inlined files within /ifs, which is displayed by default in its console output:
# isi_cstats
Total files                 : 397234451
Total inlined files         : 379948336
Total directories           : 32380092
Total logical data          : 18471 GB
Total shadowed data         : 624 GB
Total physical data         : 26890 GB
Total reduced data          : 14645 GB
Total protection data       : 2181 GB
Total inode data            : 9748 GB
Current logical data        : 18471 GB
Current shadowed data       : 624 GB
Current physical data       : 26878 GB
Snapshot logical data       : 0 B
Snapshot shadowed data      : 0 B
Snapshot physical data      : 32768 B
Total inlined data savings  : 2899 GB
Total inlined data ratio    : 1.1979 : 1
Total compression savings   : 303 GB
Total compression ratio     : 1.0173 : 1
Total deduplication savings : 624 GB
Total deduplication ratio   : 1.0350 : 1
Total containerized data    : 0 B
Total container efficiency  : 1.0000 : 1
Total data reduction ratio  : 1.0529 : 1
Total storage efficiency    : 0.6869 : 1
Raw counts
{ type=bsin files=3889 lsize=314023936 pblk=1596633 refs=81840315 data=18449 prot=25474 ibyte=23381504 fsize=8351563907072 iblocks=0 }
{ type=csin files=0 lsize=0 pblk=0 refs=0 data=0 prot=0 ibyte=0 fsize=0 iblocks=0 }
{ type=hdir files=32380091 lsize=0 pblk=35537884 refs=0 data=0 prot=0 ibyte=1020737587200 fsize=0 iblocks=0 }
{ type=hfile files=397230562 lsize=19832702476288 pblk=2209730024 refs=81801976 data=1919481750 prot=285828971 ibyte=9446188553728 fsize=17202141701528 iblocks=379948336 }
{ type=sdir files=1 lsize=0 pblk=0 refs=0 data=0 prot=0 ibyte=32768 fsize=0 iblocks=0 }
{ type=sfile files=0 lsize=0 pblk=0 refs=0 data=0 prot=0 ibyte=0 fsize=0 iblocks=0 }
  3. The ‘isi get’ CLI command can be used to determine whether a file has been inlined. The output reports the file’s logical ‘size’ but shows that it consumes zero physical, data, and protection blocks. An ‘inlined data’ attribute further down in the output also confirms that the file is inlined:
# isi get -DD file1
* Size:              2
* Physical Blocks:  0
* Phys. Data Blocks: 0
* Protection Blocks: 0
* Logical Size:      8192
PROTECTION GROUPS
* Dynamic Attributes (6 bytes):
*
ATTRIBUTE           OFFSET SIZE
Policy Domains      0      6
INLINED DATA
0,0,0:8192[DIRTY]#1
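
Because inlined files are flagged in the ‘isi get -DD’ output, a simple shell loop can be used to spot-check how many files under a directory are currently inlined. This is just an illustrative sketch (the path is a placeholder) and it inspects each file individually, so it is best suited to small directory trees:
# find /ifs/data/smallfiles -type f | while read -r f; do isi get -DD "$f" | grep -q 'INLINED DATA' && echo "$f"; done | wc -l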

So, in summary, some considerations and recommended practices for data inlining in OneFS 9.3 include the following:

  • Data inlining is opportunistic and is only supported on node pools with 8KB inodes.
  • No additional software, hardware, or licenses are required for data inlining.
  • There are no CLI or WebUI management controls for data inlining.
  • Data inlining is automatically enabled on applicable nodepools after an upgrade to OneFS 9.3 is committed.
  • However, data inlining only occurs for new writes; OneFS 9.3 does not inline any existing data during the upgrade process. Applicable small files are instead inlined the next time they are written.
  • Since data inlining is automatically enabled globally on clusters running OneFS 9.3, OneFS recognizes any diskpools with 512 byte inodes and transparently avoids inlining data on them.
  • In OneFS 9.3, data inlining is not performed on regular files during tiering, truncation, upgrade, and so on.
  • CloudPools Smartlink stubs, sparse files, and writable snapshot files are also not candidates for data inlining in OneFS 9.3.
  • OneFS shadow stores do not use data inlining. As a result:
  • Small file packing is disabled for inlined data files.
  • Cloning works as expected with inlined data files.
  • Inlined data files are not deduplicated. Conversely, files that have already been deduplicated will not be inlined afterwards.
  • Certain operations may cause data inlining to be reversed, such as moving files from an 8KB diskpool to a 512 byte diskpool, forcefully allocating blocks on a file, sparse punching, and so on.

The new OneFS 9.3 data inlining feature delivers on the promise of small file storage efficiency at scale, providing significant storage cost savings, without sacrificing performance, ease of use, or data protection.

Author: Nick Trimbee

Read Full Blog
  • PowerScale
  • OneFS
  • Dell EMC PowerScale
  • data inlining

OneFS Small File Data Inlining

Nick Trimbee

Tue, 16 Nov 2021 19:41:09 -0000

|

Read Time: 0 minutes

OneFS 9.3 introduces a new filesystem storage efficiency feature that stores a small file’s data within the inode, rather than allocating additional storage space. The principal benefits of data inlining in OneFS include:

  • Reduced storage capacity utilization for small file datasets, generating an improved cost per TB ratio
  • Dramatically improved SSD wear life
  • Potential read and write performance improvement for small files
  • Zero configuration, adaptive operation, and full transparency at the OneFS file system level
  • Broad compatibility with other OneFS data services, including compression and deduplication

Data inlining explicitly avoids allocation during write operations because small files do not require any data or protection blocks for their storage. Instead, the file content is stored directly in unused space within the file’s inode. This approach is also highly flash media friendly because it significantly reduces the quantity of writes to SSD drives.

OneFS inodes, or index nodes, are a special class of data structure that store file attributes and pointers to file data locations on disk.  They serve a similar purpose to traditional UNIX file system inodes, but also have some additional unique properties. Each file system object, whether it be a file, directory, symbolic link, alternate data stream container, or shadow store, is represented by an inode.

Within OneFS, the SSD node pools in F-series all-flash nodes always use 8KB inodes. For hybrid and archive platforms, the inodes on the HDD node pools are either 512 bytes or 8KB in size, determined by the physical and logical block size of the hard drives or SSDs in the node pool.

There are three different styles of drive formatting used in OneFS nodes, depending on the manufacturer’s specifications:

Drive Formatting     Characteristics
4Kn (native)         Both the physical and logical block sizes are 4096B.
512n (native)        Both the physical and logical block sizes are 512B.
512e (emulated)      A physical block size of 4096B, but a logical block size of 512B.

If the drives in a cluster’s nodepool are native 4Kn formatted, by default the inodes on this nodepool will be 8KB in size.  Alternatively, if the drives are 512e formatted, then inodes by default will be 512B in size. However, they can also be reconfigured to 8KB in size if the ‘force-8k-inodes’ setting is set to true.

A OneFS inode is composed of several sections. These include:

  • A static area, which is typically 134 bytes in size and contains fixed-width, commonly used attributes like POSIX mode bits, owner, and file size. 
  • Next, the regular inode contains a metatree cache, which is used to translate a file operation directly into the appropriate protection group. However, for inline inodes, the metatree is no longer required, so data is stored directly in this area instead. 
  • Following this is a preallocated dynamic inode area where the primary attributes, such as OneFS ACLs, protection policies, embedded B+ Tree roots, timestamps, and so on, are cached. 
  • And lastly a sector where the IDI checksum code is stored.

When a file write coming from the writeback cache, or coalescer, is determined to be a candidate for data inlining, it goes through a fast write path in BSW (BAM safe write - the standard OneFS write path). Compression will be applied, if appropriate, before the inline data is written to storage.

The read path for inlined files is similar to that for regular files. However, if the file data is not already available in the caching layers, it is read directly from the inode, rather than from separate disk blocks as with regular files.

Protection for inlined data operates the same way as for other inodes and involves mirroring. OneFS uses mirroring as protection for all metadata because it is simple and does not require the additional processing overhead of erasure coding. The number of inode mirrors is determined by the nodepool’s achieved protection policy, according to the following table:

OneFS Protection Level     Number of Inode Mirrors
+1n                        2 inodes per file
+2d:1n                     3 inodes per file
+2n                        3 inodes per file
+3d:1n                     4 inodes per file
+3d:1n1d                   4 inodes per file
+3n                        4 inodes per file
+4d:1n                     5 inodes per file
+4d:2n                     5 inodes per file
+4n                        5 inodes per file
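
To put these mirror counts in perspective, consider a hypothetical 2KB file on a pool protected at +2d:1n, where inodes, and typically small files, are 3x mirrored. Without inlining, the file consumes a mirrored 8KB data block in addition to its inode mirrors; with inlining, only the inode mirrors remain. A rough back-of-the-envelope comparison, with the first calculation being the inlined footprint (three 8KB inode mirrors) and the second the non-inlined footprint (inode mirrors plus a mirrored 8KB data block), both in KB:
# echo '3 * 8' | bc
24
# echo '(3 * 8) + (3 * 8)' | bc
48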

Unlike file inodes above, directory inodes, which comprise the OneFS single namespace, are mirrored at one level higher than the achieved protection policy. The root of the LIN Tree is the most critical metadata type and is always mirrored at 8x.

Data inlining is automatically enabled by default on all 8KB formatted nodepools for clusters running OneFS 9.3, and does not require any additional software, hardware, or product licenses in order to operate. Its operation is fully transparent and, as such, there are no OneFS CLI or WebUI controls to configure or manage inlining.

In order to upgrade to OneFS 9.3 and benefit from data inlining, the cluster must be running OneFS 8.2.1 or later. A full upgrade commit to OneFS 9.3 is required before inlining becomes operational.

Be aware that data inlining in OneFS 9.3 does have some notable caveats. Specifically, data inlining will not be performed in the following instances:

  • When upgrading to OneFS 9.3 from an earlier release which does not support inlining
  • During restriping operations, such as SmartPools tiering, when data is moved from a 512 byte diskpool to an 8KB diskpool
  • Writing CloudPools SmartLink stub files
  • On file truncation down to non-zero size
  • Sparse files (for example, NDMP sparse punch files) where allocated blocks are replaced with sparse blocks at various file offsets
  • For files within a writable snapshot

Similarly, in OneFS 9.3 the following operations may cause data inlining to be reversed, with the inlined data spilled back out of the inode:

  • Restriping from an 8KB diskpool to a 512 byte diskpool
  • Forcefully allocating blocks on a file (for example, using the POSIX ‘madvise’ system call)
  • Sparse punching a file
  • Enabling CloudPools BCM (BAM cache manager) on a file

These caveats will be addressed in a future release.

Author: Nick Trimbee


Read Full Blog
  • Isilon
  • PowerScale
  • OneFS
  • NFS
  • RDMA
  • Dell EMC PowerScale
  • Media and Entertainment
  • 8K

Boosting Storage Performance for Media and Entertainment with RDMA

Gregory Shiff

Tue, 02 Nov 2021 20:20:27 -0000

|

Read Time: 0 minutes

We are in a new golden era of content creation. The explosion of streaming services has brought an unprecedented volume of new and amazing media. Production, post-production, visual effects, animation, finishing: everyone is booked solid with work. And the expectations for this content are higher than ever, with new technically challenging formats becoming the norm rather than the exception. Anyone who has had to work with this content knows that even in 2021, working natively with 8K video or high frame rate 4K video is no joke.  

During post, storage and workstation performance can be huge bottlenecks. These bottlenecks can be particularly painful for “hero” seats that are tasked with working in real time with uncompressed media.

So, let’s look at a new PowerScale OneFS 9.2 feature that can improve storage and workstation performance simultaneously. That technology is Remote Direct Memory Access (RDMA), and specifically NFS over RDMA.

Why NFS? Linux is still the operating system of choice for the applications that media professionals use to work with the most challenging media. Even if those applications have Windows or macOS variants, the Linux version is what is used at the truly high end. And the native way for a Linux computer to access network storage is NFS. In particular, NFS over TCP.

Already this article is going down a rabbit hole of acronyms! I imagine that most people reading are already familiar with NFS (and SMB) and TCP (and UDP) and on and on. For the benefit of those folks who are not, NFS stands for Network File System. NFS is how Linux systems talk to network storage (there are other ways, but mostly, it is NFS). NFS traffic sits on top of other lower-level network protocols, in particular TCP (or UDP, but mostly it is TCP). TCP does a great job of handling things like packet loss on congested networks, but that comes with performance implications. Back to RDMA.

As the name implies, RDMA is a protocol that allows for a client system to copy data from a storage server’s memory directly into that client’s own memory. And in doing so, the client system bypasses many of the buffering layers inherent in TCP. This direct communication improves storage throughput and reduces latency in moving data between server and client. It also reduces CPU load on both the client and storage server.

RDMA was developed in the 1990s to support high performance compute workloads running over InfiniBand networks. In the 2000s, two methods of running RDMA over Ethernet networks were developed: iWARP and RoCE. Without going into too much detail, iWARP uses TCP for RDMA communications and RoCE uses UDP. There are various benefits and drawbacks of these two approaches. iWARP’s reliance on TCP allows for greater flexibility in network design, but suffers from many of the same performance drawbacks of native TCP communications. RoCE has reduced CPU overhead as compared to iWARP, but requires a lossless network. Once again, without going into too much detail, RoCE is the clear winner given that we are looking for the maximum storage performance with the lowest CPU load. And that is exactly what PowerScale OneFS uses, RoCE (actually RoCEv2, also known as Routable RoCE or RRoCE).

So, put that all together, and you can run NFS traffic over RDMA leveraging RoCE! Yes, back into alphabet soup land. But what this means is that if your environment and PowerScale storage nodes support it, you can massively boost performance by mounting the network storage with a few mount options. And that is a neat trick. The performance gains of RDMA are impressive. In some cases, RDMA is twice as performant as TCP, all other things being equal (with a similar drop in workstation utilization).
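
By way of illustration, on a Linux client with an RDMA-capable (RoCEv2) NIC, an NFSv3-over-RDMA mount generally looks something like the following. The SmartConnect zone name, export path, and mount point here are placeholders, and 20049 is the standard NFS-over-RDMA port; see the white paper referenced at the end of this article for the validated procedure for your environment:
# mount -t nfs -o vers=3,proto=rdma,port=20049 cluster.example.com:/ifs/media /mnt/media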

A good place to start learning if your PowerScale nodes support RDMA is my colleague Nick Trimbee’s excellent blog: Unstructured Data Tips.

Let’s bring this back to media creation and look at some real-world examples that were tested for this article. The first example is playing an uncompressed 8K DPX image sequence in DaVinci Resolve. Uncompressed video puts less of a strain on the workstation (no real-time decompression), but the file sizes and bandwidth requirements are huge. As an image sequence, each frame of video is a separate file, and at 8K resolution, that meant that each file was approximately 190 MB. Sustaining 24 frames per second playback requires around 4.5 GB per second! Long story short, the image sequence would not play with the storage mounted using TCP. Mounting the exact same storage using RDMA was a night and day difference: 8K video at 24 frames per second in Resolve over the network.

Now let’s look at workstation performance. Because to be fair, uncompressed 8K video is unwieldy to store or work with. The number of facilities truly working in uncompressed 8K is small. Far more common is a format such as 6K PIZ compressed OpenEXR. OpenEXR is another image sequence format (file per frame), and PIZ compression is lossless, retaining full image fidelity. The PIZ compressed image sequence I used here had frames that were between 80 MB and 110 MB each. Sustaining 24 frames per second requires around 2.7 GB per second. This bandwidth is less than uncompressed 8K but still substantial. However, the real challenge is that the workstation needs to decompress each frame of video as it is being read. Pulling the 6K image sequence into DaVinci Resolve again and attempting playback over the network storage mounted using TCP did not work. The combination of CPU cycles required for reading the files over network storage and decoding each 6K frame was too much. RDMA was the key for this kind of playback. And sure enough, remounting the storage using RDMA enabled smooth playback of this OpenEXR 6K PIZ image sequence over the network in Resolve.
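
Both bandwidth figures fall straight out of frame size multiplied by frame rate. A quick check, with frame sizes in MB and results in GB per second, lands in the same ballpark as the figures quoted above:
# echo 'scale=2; 190 * 24 / 1000' | bc
4.56
# echo 'scale=2; 110 * 24 / 1000' | bc
2.64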

Going a little deeper into workstation performance, let us look at some other common video formats: Sony XAVC and Apple ProRes 422 HQ, both at full 4K DCI resolution and 59.94 frames per second. This time, Autodesk Flame 2022 was used as the playback application. Flame has a debug mode that shows video disk dropped frames, GPU dropped frames, and broadcast output dropped frames. With the file system mounted using TCP or RDMA, the video disk never dropped a frame.

The storage is plenty fast enough. However, with the file system mounted using TCP, the broadcast output dropped thousands of frames, and the workstation could not keep up. Playing back the same material over RDMA was a different story: smooth broadcast output and essentially no dropped frames at all. In this case, it was all about the CPU cycles freed up by RDMA.

NFS over RDMA is a big deal for PowerScale OneFS environments supporting the highest end playback. The twin benefits of storage performance and workstation CPU savings change what is possible with network storage. For more specifics about the storage environment, the tests run, and how to leverage NFS over RDMA, see my detailed white paper PowerScale OneFS: NFS over RDMA for Media.

Author: Gregory Shiff, Principal Solutions Architect, Media & Entertainment    LinkedIn

Read Full Blog
  • Isilon
  • data protection
  • security
  • PowerScale
  • OneFS
  • Dell EMC PowerScale

PowerScale OneFS Release 9.3 now supports Secure Boot

Aqib Kazi

Fri, 22 Oct 2021 20:50:20 -0000

|

Read Time: 0 minutes

Many organizations are looking for ways to further secure systems and processes in today's complex security environments. The grim reality is that a device is typically most susceptible to loading malicious code during its boot sequence.

With the introduction of OneFS 9.3, the UEFI Secure Boot feature is now supported on Isilon A2000 nodes. Not only does the release support the UEFI Secure Boot feature, but OneFS goes a step further by adding FreeBSD’s signature validation. Combining UEFI Secure Boot and FreeBSD’s signature validation helps protect the boo