


Short articles related to Dell PowerStore.


Let’s Talk File (#3) – PowerStore File Systems

Wei Chen

Thu, 12 May 2022 15:39:24 -0000




A file system is a storage resource that holds data and can be accessed through file sharing protocols such as SMB or NFS. The PowerStore file system architecture is designed to be highly scalable, efficient, performance-focused, and flexible. PowerStore offers a 64-bit file system that is mature and robust, enabling it to be used in many of the traditional NAS file use cases.

File system highlights

PowerStore file systems can accommodate large amounts of data, directories, and files. Each individual file system is designed to scale up to 256TB in size, hold 10 million subdirectories per directory, and store 32 billion files. Don’t forget that PowerStore can support up to 500 file systems on an appliance as well!

All file systems are thinly provisioned and always have compression and deduplication enabled. This means that capacity is allocated on demand as capacity is consumed on the file system. In addition, compression and deduplication help reduce the total cost of ownership and increase the efficiency of the system by reducing the amount of physical capacity that is needed to store the data. Savings are not only limited to the file system itself, but also to its snapshots and thin clones. Compression and deduplication occur inline between the system cache and the backend drives. The compression task is offloaded to a dedicated chip on the node, which frees up CPU cycles.

PowerStore file systems are tuned and optimized for high performance across all use cases. In addition, platform components such as Non-Volatile Memory Express (NVMe) drives and high-speed connectivity options enable the system to maintain low response times while servicing large workloads.

How to provision a file system

Now that you understand the benefits of the PowerStore file system, let’s review the file system provisioning process. PowerStore Manager makes it quick and simple to provision a file system, create NFS exports and/or SMB shares, configure access, and apply a protection policy using a single wizard.

To create a file system, open PowerStore Manager and navigate to Storage > File Systems > Create. The file system creation wizard prompts you for the information displayed in the following table.



NAS Server

Select the NAS server that will be used to access this file system, ensuring the necessary protocols are enabled on the NAS server for client access.

Name

Provide a name for the file system.

Size

Specify the size of the file system that is presented to the client, between 3GB and 256TB.

NFS Export (Optional)

Only displayed if NFS is enabled on the NAS server. Provide a name for the NFS export if NFS access is desired. The NFS Export Path is displayed so you can easily mount the NFS export on the client.

Configure Access

Only displayed if an NFS export is created. This screen has the following settings:

  • Minimum Security – Determines the type of security that is enforced on the NFS export
    • Sys (Default) –  Uses client-provided UNIX UIDs and GIDs for NFS authentication
    • Kerberos – Kerberos, Kerberos with Integrity, or Kerberos with Encryption can be selected if Secure NFS is enabled on the NAS server
  • Default Access – Determines the access permissions for all hosts that attempt to connect to the NFS export
    • No Access (Default)
    • Read/Write
    • Read-Only
    • Read/Write, allow Root
    • Read-Only, allow Root
    • The allow root options are equivalent to no_root_squash on UNIX systems. This means that if the user has root access on the client, they are also granted root access to the NFS export
  • Override List - For hosts that need a different access setting than the default
    • Hostnames, IP addresses, or subnets can be added to this list along with one of the access options above.
    • Examples: 
      • Hostname:
      • IPv4 address:
      • IPv6 address: fd00:c6:a8:1::1
      • Subnet with Netmask:
      • Subnet with Prefix:
    • Multiple entries can also be added simultaneously in a comma-separated format.
    • Entries can also be populated by uploading a CSV file. A template with syntax and examples is provided in the wizard.

SMB Share (Optional)

Only displayed if SMB is enabled on the NAS server. This screen has the following settings:

  • Name – Name for the SMB Share
  • Offline Availability – Determines whether files and programs on a share are available when offline
    • None (Default) – Nothing is available offline
    • Manual – Only specified files and programs are available offline
    • Programs – Programs are available offline
    • Documents – Documents are available offline
  • UMASK (Default 022) - The UMASK is a bitmask that controls the default UNIX permissions for newly created files and folders. This setting only applies to new files and folders that are created on SMB on multiprotocol file systems.
  • Continuous Availability (Default Disabled) - Allows persistent access to file systems without loss of the session state
  • Protocol Encryption (Default Disabled) - Provides in-flight data encryption between SMB3 clients and the NAS server
  • Access-Based Enumeration (Default Disabled) - Restricts the display of files and folders based on the access privileges of the user attempting to view them
  • Branch Cache Enabled (Default Disabled) - Allows users to access data that is stored on a remote NAS server locally over the LAN, removing the need to traverse the WAN
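The UMASK arithmetic described above can be sketched in a few lines. This is a minimal illustration of standard UNIX umask semantics, not PowerStore code:

```python
# Sketch of how a UMASK of 022 yields default UNIX permissions.
# New directories start from mode 777 and new files from mode 666
# before the mask is applied (standard UNIX umask semantics).

def apply_umask(base: int, umask: int) -> int:
    """Clear the permission bits named by the umask."""
    return base & ~umask

UMASK = 0o022
dir_mode = apply_umask(0o777, UMASK)   # 0o755 -> rwxr-xr-x
file_mode = apply_umask(0o666, UMASK)  # 0o644 -> rw-r--r--
```

With the default of 022, group and other lose write permission, so new folders come up as 755 and new files as 644.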

Protection Policy

Select a protection policy to protect the file system.

The following screenshot shows an example of the summary page when creating a new file system. In this example, we provisioned the file system, NFS export, SMB share, configured host access, and applied a protection policy to the file system.

If you’re testing PowerStore file for the first time, you may want to start with a basic minimum configuration. To do this, all you need to do is choose a NAS server, provide a file system name, specify a size, and create either an NFS export or an SMB share. If you enable NFS, you’ll also need to enable host access for your client.

After the file system and NFS export or SMB share are provisioned, you can mount the file system on your host for access:

  • SMB: \\<SMB_Server>\<SMB_Share_Name>
  • NFS: mount <NFS_Server>:/<NFS_Export_Name> /<Mountpoint>
    • For example: mount nas:/fs /mnt

File system management

PowerStore file systems provide increased flexibility by providing the ability to shrink and extend file systems as needed. Shrink and extend operations are used to resize the file system and update the capacity that is seen by the client. Extend operations do not change how much capacity is allocated to the file system. However, shrink operations may be able to reclaim unused space, depending on how much capacity is allocated to the file system and the presence of snapshots or thin clones.

If the advertised file system size is too small or full, extending it allows additional data to be written to the file system. If the advertised file system size is too large, shrinking it limits the amount of data that can be written to the file system. For shrink and extend, the minimum value is equal to the used size of the file system; the maximum value is 256 TB. You cannot shrink the file system to less than the used size, because this would cause the client to see the file system as more than 100% full.
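The resize bounds described above can be sketched as a small validation routine. Names and the byte arithmetic are illustrative, not a PowerStore API:

```python
GIB = 2**30
TIB = 2**40  # treating "TB" in the UI as a binary terabyte for illustration

MIN_FS_SIZE = 3 * GIB      # smallest size the creation wizard accepts
MAX_FS_SIZE = 256 * TIB    # largest supported file system size

def validate_resize(new_size: int, used_size: int) -> bool:
    """A resize is only valid between the used size and 256 TB, so the
    client can never see the file system as more than 100% full."""
    return max(used_size, MIN_FS_SIZE) <= new_size <= MAX_FS_SIZE
```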

The following figure shows the file system properties page in PowerStore Manager, where you can shrink or extend a file system.

File system performance metrics

Performance metrics are available to view the latency, IOPS, bandwidth, and IO size at the file system level. You can tweak the timeline to view preset timeframes ranging from the last hour to the last 2 years, or drag and zoom in to specific sections of the graph. You can also export the metrics to file types such as PNG, PDF, JPEG, or CSV.

File-specific metrics are also available at the node, cluster, and appliance level. At the node level, SMB and NFS protocol metrics can also be viewed. The available metrics are:

  • Read, Write, and Total IOPS
  • Read, Write, and Total Bandwidth
  • Read, Write, and Average Latency
  • Read, Write, and Average IO Size
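The combined metrics relate to the per-direction ones in a straightforward way: total IOPS and bandwidth are the sums of the read and write values, and average IO size is bandwidth divided by IOPS. A minimal sketch (field names are hypothetical, not the PowerStore REST schema):

```python
# Illustrative derivation of the combined metrics from per-direction
# samples, for example when post-processing an exported CSV.

def combine(read_iops, write_iops, read_bw, write_bw):
    total_iops = read_iops + write_iops
    total_bw = read_bw + write_bw
    # Average IO size is total bandwidth divided by total IOPS.
    avg_io_size = total_bw / total_iops if total_iops else 0.0
    return total_iops, total_bw, avg_io_size
```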

The following figure shows the file metrics page that displays the NFS protocol metrics on Node A.


Congratulations! You have successfully provisioned a file system, NFS export, SMB share, and accessed it from a host. Now you can write files and folders or run workloads on the file system. We also reviewed how to leverage shrink and extend to update the file system size, and looked at some of the performance metrics so you can monitor your file systems. Stay tuned for the next blog in this series where we’ll take a deeper dive into the SMB protocol.


Author: Wei Chen, Senior Principal Engineering Technologist



Let’s Talk File (#2) – PowerStore NAS Servers

Wei Chen

Wed, 20 Apr 2022 17:22:25 -0000




PowerStore file uses virtualized file servers that are called NAS servers, which are a critical piece of a file environment. In this blog, we will review what NAS servers are, study the NAS server architecture and its benefits, take a quick look at the NAS server settings, and walk through the process to create a new NAS server using PowerStore Manager.

What is a NAS server? A NAS server provides administrators with the ability to specify how PowerStore and its clients should connect, authenticate, and communicate with each other. It contains the configuration, interfaces, and environmental information that is used to facilitate access to the data residing on the file systems. In addition, features such as anti-virus protection, backups, user mappings, and more are also configured on the NAS server.

NAS Server Architecture

PowerStore’s modern NAS server architecture provides many inherent benefits. NAS servers have many responsibilities, including enabling access to file systems, providing data separation, and acting as a basis for multi-tenancy. They are also used as components for load balancing and high availability. This makes it quick and simple to deploy a feature-rich and enterprise-level file solution that meets your business requirements. The image below illustrates the NAS server architecture on PowerStore. 

Each NAS server has its own independent configuration, enabling it to be used to enforce multitenancy. This is useful when hosting multiple tenants on a single system, such as for service providers. Each NAS server can be tailored to meet the requirements of each tenant without impacting the other NAS servers on the same appliance.

When creating a file system, the file system is assigned to a NAS server. Each NAS server has its own set of file systems to store file data. Because each NAS server is logically separated from the others, clients that have access to one NAS server do not inherently have access to the file systems on the other NAS servers. To access file systems on a different NAS server, clients must separately authenticate using the methods specified by that NAS server.

Each PowerStore node can host multiple NAS servers and both nodes are actively used to service file IO. New NAS servers are automatically assigned on a round-robin basis across both available nodes. This active/active architecture enables load balancing, provides high availability, and allows both nodes to serve file data simultaneously. If a PowerStore node reboots, NAS servers and their corresponding file systems automatically fail over to the surviving node. NAS servers are also automatically moved to the peer node and back during the upgrade process. After the upgrade completes, the NAS servers return to the node they were assigned to at the beginning of the upgrade. 
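The round-robin placement described above can be sketched as follows. The real assignment logic is internal to PowerStoreOS; this only illustrates the balancing idea:

```python
from itertools import cycle

# Minimal sketch of round-robin NAS server placement across an
# appliance's two nodes: new NAS servers alternate between nodes so
# both actively serve file IO.

def assign_nas_servers(nas_servers, nodes=("Node A", "Node B")):
    return {server: node for server, node in zip(nas_servers, cycle(nodes))}
```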

NAS Server Settings

Let’s do a quick review of some of the items that can be configured on a NAS server. See the following table for a list of items along with a short description of their purpose.

Don’t worry if you’re not familiar with some of these services or terms because they’re not all required. You only need to enable and configure services that you are actively using in your specific environment. We’ll also cover these services in more detail in future blogs in this series.




Interface

IP address, subnet, gateway, and VLAN to access the NAS server

Access Protocols

Server Message Block (SMB) – Primarily used by Windows clients for SMB shares

Network File System (NFS) – Primarily used by UNIX and VMware ESXi clients for NFS exports

File Transfer Protocol (FTP) – Used by all clients for file transfers

SSH File Transfer Protocol (SFTP) - Used by all clients for secure file transfers

Lightweight Directory Access Protocol (LDAP) / Network Information Service (NIS) / Local Files

Resolving user IDs and names to each other

Domain Name System (DNS)

Resolving IP addresses and names to each other


Anti-Virus

Anti-virus servers used to identify and eliminate known viruses before they infect other files

Network Data Management Protocol (NDMP)

A standard used for backing up file storage


Kerberos

A distributed authentication service used for Secure NFS

How to Configure a NAS Server

When deploying a file environment, the first resource you should provision on PowerStore is the NAS server. Now that you understand how they work, let’s go ahead and create one. To do this, open PowerStore Manager and navigate to Storage > NAS Servers. The NAS server creation wizard prompts you for the information displayed in the table below. All of these options can also be modified after creation, if needed.



Interface Details (Required)

  • IP Address
  • Subnet Mask or Prefix Length
  • Gateway (Optional)
  • VLAN ID (0-4094) – Must be different from the Management and Storage VLANs

Sharing Protocols (Optional)

  • SMB – Either Standalone or Active Directory (AD) Domain Joined
  • NFSv3
  • NFSv4

Note: If both SMB and NFS protocols are enabled, multiprotocol access is automatically enabled

UNIX Directory Services (shown if NFS is enabled)

  • Local Files
  • Secure NFS

DNS (Required for AD Joined SMB Servers, but otherwise optional)

  • DNS Transport Protocol – UDP or TCP
  • Domain Name
  • DNS Servers

The screenshot below shows an example of the summary page when configuring a new NAS server. In this example, we created an interface, AD-joined SMB Server, NFSv3, and DNS.

If you’re testing PowerStore file for the first time, you may want to start with a basic minimum configuration. To do this, all you need to configure is an interface and at least one access protocol.
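A NAS server can also be created programmatically through the PowerStore REST API. The endpoint path and request body below are assumptions for illustration only; consult the PowerStore REST API Reference Guide for the actual schema and authentication details:

```python
import json

# Hypothetical sketch of creating a minimal NAS server through the
# PowerStore REST API. Endpoint and field names are assumptions.

def build_nas_server_request(base_url: str, name: str):
    url = f"{base_url}/api/rest/nas_server"
    body = json.dumps({"name": name})
    return url, body

# A real script would POST this with an HTTPS client and credentials,
# for example: requests.post(url, data=body, auth=(user, password))
```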


Now that we have our NAS server configured, our clients have a communication path to connect to PowerStore using a file protocol! This is the first major step, but we’re not done yet. Next, we need to provision a file system to store our data and shares or exports to make the file system accessible to clients. Stay tuned for the next blog in this series where we’ll review file system provisioning, management, and monitoring.


Author: Wei Chen, Senior Principal Engineering Technologist



What’s New in PowerStoreOS 2.1.1

Ethan Stokes

Tue, 19 Apr 2022 10:00:08 -0000



New releases continue to pile on for PowerStore, and today marks the most recent release with PowerStoreOS 2.1.1. This new release unlocks a lot of content for a service pack, but to fully understand what it delivers, we’ll need to revisit the previous release, PowerStoreOS 2.1.

PowerStoreOS 2.1 packed a lot into a minor release, including several key features on top of continued performance improvements and general enhancements. The anchor features were front-end NVMe/TCP access and integration with SmartFabric Storage Software. However, this release also included DC support for PowerStore 500, dynamic node affinity for improved storage intelligence, and various management, security and serviceability features. 

The first service pack for PowerStoreOS 2.1, also known as PowerStoreOS 2.1.1, is supported on all PowerStore models, including PowerStore T and PowerStore X. If you recall, with the PowerStoreOS 2.1 launch in January, the new software was only made available to PowerStore T appliances. With this latest release, all software features introduced in PowerStoreOS 2.1 are now available on PowerStore X. Besides bringing the new set of features to PowerStore X, this release introduces several general system enhancements to both platforms, and specific improvements to PowerStore X models.

PowerStoreOS 2.1.1

PowerStoreOS 2.1.1 brings the features of PowerStoreOS 2.1 to PowerStore X appliances, plus some general system enhancements. Beyond the capabilities of PowerStoreOS 2.1, PowerStoreOS 2.1.1 also introduces vSphere 7 for PowerStore X, a brand new capability available in this latest release.

PowerStoreOS 2.1 for PowerStore X

Since PowerStoreOS 2.1.1 unlocks the new features of PowerStoreOS 2.1 on PowerStore X, it makes sense to recap those features here. The following features were all introduced in the previous release, and they are now fully supported on PowerStore X models:

  • NVMe/TCP: Support for host connectivity using NVMe over Ethernet fabrics with NVMe/TCP on existing embedded and IO module Ethernet ports.
  • SmartFabric Storage Software (SFSS) support: A software product that enables an end-to-end automated and integrated NVMe/TCP fabric connecting NVMe hosts and targets.
  • Dynamic node affinity: Dynamically-set node access when mapping volumes to hosts and the ability to automatically change node affinity for load balancing purposes.
  • Customizable login message: Enables storage administrators to create, enable and disable a customizable login message.
  • Application tags: Allows users to create application tags to label volumes for better organization and management.
  • Thin packages and upgrades: Adds support for off-release packages such as hotfixes, disk firmware or improved health checks.

For more detail on the PowerStoreOS 2.1 release, make sure to check out the blog What’s New with the Dell PowerStoreOS 2.1 Release?

vSphere 7 for PowerStore X

The jump from vSphere 6.7 to vSphere 7 delivers significant improvements to the ESXi nodes, which serve as the foundation of any PowerStore X cluster. A multitude of security enhancements ensure that your system has all the newest developments and improvements that were captured in vSphere 7. 

Another change introduced in vSphere 7.0 is vSphere Cluster Services (vCLS), which modifies how vSphere DRS and vSphere HA are implemented for the ESXi cluster. vCLS ensures the continued functionality of vSphere DRS and vSphere HA if the vCenter Server instance becomes unavailable. Because both features are crucial to any PowerStore X cluster, this change will certainly be noticed by observant virtualization administrators. Although hidden in the standard inventory view, the vCLS components appear as virtual machines when viewing the PowerStore vVol datastore.


After you deploy a PowerStore X cluster running PowerStoreOS 2.1.1, you can confirm the vSphere version running on the hosts by selecting them directly in vSphere. Note that as additional updates are released for PowerStore, the exact version of vSphere may not match the version captured in the screenshot below. Make sure to reference the PowerStore Simple Support Matrix to get the most up-to-date information on supported versions.

In addition to vSphere, PowerStore Manager also captures this information. From the Dashboard page, simply navigate to Compute > Hosts & Host Groups and note the ESXi Version column. This column is not enabled by default and must be added using the Show/Hide Columns option to the right of the table.

Upgrading to PowerStoreOS 2.1.1

All these new features sound great, but the next logical question is: How do I get this code running on my system? Thankfully, PowerStore fully supports a non-disruptive upgrade (NDU) to PowerStoreOS 2.1.1 on both PowerStore T and PowerStore X appliances.

PowerStore T upgrades

While much of the new content in PowerStoreOS 2.1.1 is directed toward PowerStore X systems, there are still several general system enhancements and bug fixes that benefit PowerStore T appliances. PowerStore T upgrades are fully supported on systems running PowerStoreOS 1.X or 2.X. Make sure to download the latest version of the PowerStore Release Notes to determine which software upgrade packages are required based on the version of code you are currently running. For all PowerStore upgrades, see the Dell EMC PowerStore Software Upgrade Guide.

PowerStore X upgrades

PowerStoreOS 2.1.1 upgrades are fully supported on PowerStore X clusters running PowerStoreOS 2.0.X. If the cluster is running an earlier version, first perform an upgrade to PowerStoreOS 2.0.X. Once that is satisfied, ensure that the vCenter Server connected to the PowerStore X cluster is running a supported version of vSphere 7.0. To view the current list of supported vCenter Server versions, see the VMware Licensing and Support for PowerStore X table in the PowerStore Simple Support Matrix. Finally, make sure to follow the Dell EMC PowerStore Software Upgrade Guide.
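The upgrade gating above can be sketched as a simple version check. Version-string handling is simplified for illustration:

```python
# PowerStore X clusters must already run PowerStoreOS 2.0.x before
# moving to 2.1.1; earlier releases need an intermediate hop.

def powerstore_x_upgrade_path(current: str) -> list:
    major, minor = (int(p) for p in current.split(".")[:2])
    if (major, minor) >= (2, 0):
        return ["2.1.1"]
    # Earlier releases upgrade to 2.0.x first, then to 2.1.1.
    return ["2.0.x", "2.1.1"]
```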


The PowerStoreOS 2.1.1 release provides new capabilities to PowerStore X systems, unlocking NVMe/TCP, SmartFabric Storage Software support, vSphere 7, dynamic node affinity, and much more. Adding to these new features, several system enhancements and bug fixes are delivered for both PowerStore X and PowerStore T model appliances. With easy, non-disruptive upgrade options for all PowerStore models, this is a great release for any currently deployed system. 



Ethan Stokes, Senior Engineering Technologist


Have You Checked Your Snapshots Lately?

Ryan Poulin

Mon, 28 Mar 2022 21:44:38 -0000



While this question may sound like a line from a scary movie, it is a serious one. Have you checked your snapshots lately?

In many regions of the world, seasonal time changes occur to maximize daylight hours and ensure darkness falls later in the day. This time shift is commonly known as Daylight Time, Daylight Saving Time, or Summer Time. Regions that observe this practice change their clocks by 30 minutes or 1 hour, depending on the region and time of year. At the time of this publication, multiple regions of the world have recently experienced a time change, while others will experience one shortly.

Some storage systems use Coordinated Universal Time (UTC) internally for logging purposes and to run scheduled jobs. Users typically create a schedule to run a task based on their local time, but the storage system then adjusts this time and runs the job based on the internal UTC time. When a regional change in time occurs, scheduled tasks that run on a UTC schedule “shift” when compared to wall clock time. Something that used to run at one time locally may seem to run at another, but only because the wall clock time in the region has changed. While this shift in schedule may not be an issue for most, some customers do notice the change. Some have found that jobs such as snapshot creations and deletions now occur during other scheduled tasks such as backups, or that snapshots now miss the intended time, such as the beginning or end of the business workday.

To show what I mean, let’s use the Eastern US time zone as an example. Let’s say a user has created a rule to take a snapshot daily at 12:00 AM midnight local time. When Daylight Saving Time is not in observance, 12:00 AM US EST is equivalent to 5:00 AM UTC and the snapshot schedule will be configured to run at 5:00 AM UTC daily within the system. On Sunday, March 13, 2022 at 2:00 AM the regions of the United States that observe time changes altered their clocks 1 hour forward. The 2:00 AM hour instantaneously became 3:00 AM and an hour of time was seemingly lost.

As the figure below shows, a scheduled job that is configured to run at 5:00 AM UTC daily was taking snapshots at 12:00 AM local time but now runs at 1:00 AM local time, due to the UTC schedule of the storage system and the time change. A similar shift also occurs when the time change occurs again later in the year.
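You can reproduce this shift with Python’s zoneinfo module: a fixed 5:00 AM UTC schedule maps to different Eastern wall-clock times on either side of the March 2022 change.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# A fixed 5:00 AM UTC schedule lands at different Eastern wall-clock
# times on either side of the March 13, 2022 US DST change.

EASTERN = ZoneInfo("America/New_York")

before = datetime(2022, 3, 12, 5, 0, tzinfo=ZoneInfo("UTC"))
after = datetime(2022, 3, 14, 5, 0, tzinfo=ZoneInfo("UTC"))

print(before.astimezone(EASTERN).strftime("%H:%M"))  # 00:00 local (EST)
print(after.astimezone(EASTERN).strftime("%H:%M"))   # 01:00 local (EDT)
```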


Within PowerStore, protection policies, snapshot rules, and replication rules are used to apply data protection to a resource. A snapshot rule is created to tell the system when to create a snapshot on a resource. The snapshot rule is then added to a protection policy, and the protection policy is assigned to a resource. When creating a snapshot rule, the user can either choose a fixed interval based on several hours or provide a specific time to create a snapshot.

For systems running PowerStoreOS 2.0 or later, when specifying the exact time to create a snapshot, the user also selects a time zone. The time zone drop-down list defaults to the user’s local time zone, but it can be adjusted if the system is physically located in a different time zone. Specifying a specific time with a time zone ensures that seasonal time changes do not impact the creation time of a snapshot.

For systems that were configured with a release prior to PowerStoreOS 2.0 and later upgraded, it is a good idea to review the snapshot rules and ensure that those configured for a particular time of day are still set correctly.

So, I ask again: Have you checked your snapshots lately?


Technical Documentation

Demos and Hands-on Labs

  • To see how PowerStore’s features work and integrate with different applications, see the PowerStore Demos YouTube playlist.
  • To gain firsthand experience with PowerStore, see our many Hands-On Labs.

Author: Ryan Poulin


Let’s Talk File (#1) – PowerStore File Overview

Wei Chen

Mon, 21 Mar 2022 14:15:46 -0000




Our customers have a wide variety of traditional and modern workloads. Each of these workloads connects to the underlying infrastructure using various networking protocols. PowerStore’s single architecture for block, file, and vVols uses the latest technologies to achieve these disparate objectives without sacrificing the cost-effective nature of midrange storage. PowerStore provides the ultimate workload flexibility and enables IT to simplify and consolidate infrastructure.

PowerStore features a native file solution that is highly scalable, efficient, performance-focused, and flexible. In this new series, we’ll visit the vast world of PowerStore file and review the comprehensive set of features that it offers. Over the course of this series, we’ll cover everything from NAS servers, file systems, quotas, snapshots and thin clones, protocols, multiprotocol, directory services, and more. We’ll start with the basics before diving in deeper, so no previous file experience or knowledge is required! 

File Overview

Let’s start with a quick and high-level overview of file. File-level storage is a storage type where files are shared across a network to a group of heterogeneous clients. It is also known as Network-Attached Storage (NAS). File-level storage is widely used across small and medium-sized businesses to large enterprises across the world. If you’ve ever connected to a shared drive such as a home directory or departmental share, then you’ve used file-level storage before.

PowerStore File

File functionality is natively available on PowerStore T model appliances that are configured in Unified mode. There are no extra pieces of software, hardware, or licenses required to enable this functionality. All file management, monitoring, and provisioning capabilities are available in the HTML5-based PowerStore Manager.

Within an appliance, both nodes are used for both file as well as block. This configuration creates a fully redundant and active/active architecture where both nodes are used to serve file data simultaneously. This design provides the ability to load balance across both nodes while also ensuring high availability.

PowerStore supports the following file access protocols:

  • Server Message Block (SMB) – Primarily used by Windows clients for SMB shares
  • Network File System (NFS) – Primarily used by UNIX clients for NFS exports
  • File Transfer Protocol (FTP) – Used by all clients for file transfers
  • SSH File Transfer Protocol (SFTP) – Used by all clients for secure file transfers

PowerStore File Systems

PowerStore features a 64-bit file system architecture that is designed to be highly scalable, efficient, performant, and flexible. PowerStore also includes a rich supporting file feature set to ensure that the data is secure, protected, and can be easily monitored.

PowerStore file systems are also tuned and optimized for high performance. In addition, platform components such as Non-Volatile Memory Express (NVMe) drives and dual-socket CPUs enable the system to maintain low response times while servicing large workloads.

The maturity and robustness of the PowerStore file system combined with these supporting features enables it to be used in many file-level storage use cases, such as departmental shares or home directories.


With the native file capabilities available on PowerStore, administrators can easily implement a file solution that is designed for the modern data center. Throughout this blog series, we’ll review how quick and easy it is to configure file in your data center.

Now that we have an overview of file, we can begin jumping into more specific technical details. Stay tuned for the next blog in this series where we’ll start by looking at NAS servers and their role in facilitating access to the data residing on the file systems.


Author: Wei Chen, Senior Principal Engineering Technologist



What’s New with the Dell PowerStoreOS 2.1 Release?

Andrew Sirpis

Tue, 15 Mar 2022 21:40:18 -0000



2022 got off to a great start with the PowerStoreOS 2.1 release. It builds upon the previous release with performance improvements and added functionality to support current and future workloads. PowerStoreOS 2.1 provides these key features:

  • NVMe/TCP – This protocol, based on standard IP over Ethernet networks, allows users to take advantage of their existing network for storage. NVMe/TCP is much more efficient, parallel, and scalable than SCSI. It makes an external networked array feel like direct-attached storage. PowerStoreOS 2.1 introduced support for NVMe/TCP on PowerStore appliances, which allows users to configure Ethernet interfaces for iSCSI or NVMe/TCP host connectivity. 


  • SmartFabric Storage Software (SFSS) integration – SFSS is a new and innovative software product from Dell Technologies that enables an end-to-end automated and integrated NVMe-oF Ethernet fabric connecting NVMe hosts and targets using TCP. The solution was designed in partnership with VMware and provides enterprise organizations with the agility to stay ahead of growing workload demands. It supports modern, automated, and secure storage connectivity both today and for future hybrid cloud migrations.
  • Dynamic Node Affinity – This feature allows PowerStore to dynamically set a node for access when mapping volumes to a host, and allows it to automatically change the node affinity for load balancing purposes.
  • DC support with PowerStore 500 – Allows users to utilize DC power supply units instead of just AC with their storage appliance.  

  • Management and Serviceability
    • Customizable Login Message – Enables storage administrators to create, enable, and disable a customizable login message.  
    • Application Tags – Allows users to specify a volume application tag during volume creation. This allows labeling of the volumes with a specific category and application type, based on the use case. Using application-centric management, users can view and sort through volumes by the application tag, by adding the new “Application” column in the list view.    
    • Thin Packages and Upgrades – In PowerStore Manager you can manage, upload, and deploy various non-disruptive upgrade (NDU) packages. Generally, the NDU packages fall into two categories: software releases and thin packages. Software releases are PowerStoreOS upgrades that contain the full operating-system (OS) image, or patch or hotfix images, for a specific OS version. Thin packages contain a smaller, more targeted amount of functionality than regular PowerStoreOS packages, which lets Dell offer off-release updates such as hotfixes, disk firmware, and pre-upgrade health check updates, usually without requiring node reboots. Thin packages are new in the 2.1 release, and because they’re typically smaller in size, they save users time and effort during the install process.
    • Telemetry Notices – There is a notification displayed after the EULA which provides information about the Dell Telemetry collector and privacy policy information.  
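The Application Tags feature above can also be driven programmatically. The snippet below is a rough sketch of building a volume-creation request body that carries an application tag; the "app_type" field name and its value are assumptions for illustration, so check the PowerStore REST API reference for the exact field names and allowed values.

```python
import json

# Hypothetical sketch: JSON body for a volume-create call that includes an
# application tag. The "app_type" field and value below are assumptions for
# illustration -- consult the PowerStore REST API reference before use.
def build_volume_request(name, size_bytes, app_type):
    """Return the request body for a volume-create POST (illustrative)."""
    return json.dumps({
        "name": name,
        "size": size_bytes,   # volume size in bytes
        "app_type": app_type, # drives the "Application" column in PowerStore Manager
    })

body = build_volume_request("SQLData01", 100 * 1024**3, "Relational_Databases_Other")
```

Once created this way, the volume would surface under the new “Application” column in the list view, letting users sort volumes by tag.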

Check out the information below about this jam-packed release: white papers, videos, and an interactive demo. The Dell Info Hub has a wealth of great material, and we hope it helps you with your innovative technology solutions!  




Interactive Demo

Author: Andrew Sirpis, Senior Principal Engineering Technologist


PowerStore API

PowerStore REST API: Using Filtering to Fine Tune Your Queries

Robert Weilhammer

Fri, 11 Mar 2022 16:03:19 -0000


Read Time: 0 minutes

The PowerStore REST API provides a powerful way to manage a PowerStore cluster, especially when using your own scripts or automation tools.

Almost all PowerStore functionality is available through the REST API – and in some areas even more, because some attributes are not available in the PowerStore Manager GUI.

A great place to start learning more about the REST API is the integrated SwaggerUI, which provides online documentation with test functionality on your system. SwaggerUI uses an OpenAPI definition; this definition can be downloaded from SwaggerUI and used with some 3rd party tools. SwaggerUI is available on all PowerStore models and types at https://<PowerStore>/swaggerui in your preferred browser.

When working with the PowerStore REST API, it’s not always obvious how to query some attributes. For example, it’s easy to filter for a specific volume name to get the id, size, and type of a volume or volumes when using “*” as a wildcard.

To query for all volumes with “Test” somewhere in their name, we can use name=like.*Test* as the query string:
% curl -k -s -u user:pass -X GET "https://powerstore.lab/api/rest/volume?select=id,name,size,type&name=like.*Test*" | jq .
[
  {
    "id": "a6fa6b1c-2cf6-4959-a632-f8405abc10ed",
    "name": "TestVolume",
    "size": 107374182400,
    "type": "Primary"
  }
]
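For script authors who prefer Python over curl, here is a minimal sketch that assembles the same filtered query with the standard library. The hostname and any credentials handling are placeholders for your environment.

```python
from urllib.parse import urlencode

# Rough Python equivalent of the curl query above; hostname is a placeholder.
base = "https://powerstore.lab/api/rest/volume"
params = {
    "select": "id,name,size,type",
    "name": "like.*Test*",  # filter: volume name contains "Test"
}
# Keep "*" and "," unescaped so the filter reaches the API verbatim.
url = base + "?" + urlencode(params, safe="*,")
```

The resulting url matches the query string used with curl above, and can then be fetched with any HTTP client using basic authentication.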

In that example, although we know that there are multiple snapshots for this particular volume, the REST API query on the parent volume name does not show the related snapshots. This is because a snapshot’s name does not have to contain the name of its parent volume. From PowerStore Manager we know that this volume has three snapshots, but their names do not relate to the volume name:  

How is it possible to get the same output with a REST API query? We know that everything in PowerStore is managed with IDs, and the API description in SwaggerUI shows that a volume could have an attribute parent_id underneath the protection_data section.

All volumes with a protection_data->>parent_id equal to the ID of our “TestVolume” are the related snapshots of TestVolume. The key for the query is the syntax for nested attributes: attribute->>sub_attribute, as in protection_data->>parent_id.


The resulting curl command to query for the snapshot volumes uses the same syntax to select “creator_type” from the nested resource:

% curl -k -s -u user:pass -X GET 'https://powerstore/api/rest/volume?select=id,name,protection_data->>creator_type,creation_timestamp&protection_data->>parent_id=eq.a6fa6b1c-2cf6-4959-a632-f8405abc10ed' | jq .
[
  {
    "id": "051ef888-a815-4be7-a2fb-a39c20ee5e43",
    "name": "2nd snap with new 1GB file",
    "creator_type": "User",
    "creation_timestamp": "2022-02-03T15:35:53.920133+00:00"
  },
  {
    "id": "23a26cb6-a806-48e9-9525-a2fb8acf2fcf",
    "name": "snap with 1 GB file",
    "creator_type": "User",
    "creation_timestamp": "2022-02-03T15:34:07.891755+00:00"
  },
  {
    "id": "ef30b14e-daf8-4ef8-8079-70de6ebdb628",
    "name": "after deleting files",
    "creator_type": "User",
    "creation_timestamp": "2022-02-03T15:37:21.189443+00:00"
  }
]
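Putting both queries together, a pair of small hypothetical helpers can first build the URL that resolves a volume’s id from its name, and then build the snapshot query on protection_data->>parent_id. This sketch only constructs the URLs; issuing the requests with authentication is left out, and the host is a placeholder.

```python
from urllib.parse import urlencode

# Hypothetical helpers that only build the query URLs; sending the requests
# (basic auth, TLS settings, JSON parsing) is left to the reader.
BASE = "https://powerstore.lab/api/rest/volume"  # placeholder host

def volume_by_name_url(name):
    """URL that resolves a volume's id from its exact name."""
    return BASE + "?" + urlencode(
        {"select": "id,name", "name": f"eq.{name}"}, safe="*,>")

def snapshots_of_url(parent_id):
    """URL that lists snapshots whose protection_data->>parent_id matches."""
    params = {
        "select": "id,name,protection_data->>creator_type,creation_timestamp",
        "protection_data->>parent_id": f"eq.{parent_id}",
    }
    return BASE + "?" + urlencode(params, safe="*,>")
```

The safe="*,>" argument keeps the filter characters unescaped so the query strings match the curl examples above.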


For more white papers, blogs, and other resources about PowerStore, please visit our PowerStore Info Hub.


Author: Robert Weilhammer, Principal Engineering Technologist


vSphere VMware Kubernetes PowerStore Tanzu Amazon EKS

Exploring Amazon EKS Anywhere on PowerStore X – Part I

Jason Boche

Tue, 11 Jan 2022 17:53:03 -0000


Read Time: 0 minutes

A number of years ago, I began hearing about containers and containerized applications. Kiosks started popping up at VMworld showcasing fun and interesting use cases, as well as practical uses of containerized applications. A short time later, my perception was that focus had shifted from containers to container orchestration and management, or simply put, Kubernetes. I got my first real hands-on experience with Kubernetes about 18 months ago when I got heavily involved with VMware’s Project Pacific and vSphere with Tanzu. The learning experience was great, and it ultimately led to authoring a technical white paper titled Dell EMC PowerStore and VMware vSphere with Tanzu and TKG Clusters.

Just recently, a Product Manager made me aware of a newly released Kubernetes distribution worth checking out: Amazon Elastic Kubernetes Service Anywhere (Amazon EKS Anywhere). Amazon EKS Anywhere was preannounced at AWS re:Invent 2020 and announced as generally available in September 2021.

Amazon EKS Anywhere is a deployment option for Amazon EKS that enables customers to stand up Kubernetes clusters on-premises using VMware vSphere 7+ as the platform (bare metal platform support is planned for later this year). Aside from a vSphere integrated control plane and running vSphere native pods, the Amazon EKS Anywhere approach felt similar to the work I performed with vSphere with Tanzu. Control plane nodes and worker nodes are deployed to vSphere infrastructure and consume native storage made available by a vSphere administrator. Storage can be block, file, vVol, vSAN, or any combination of these. Just like vSphere with Tanzu, storage consumption, including persistent volumes and persistent volume claims, is made easy by leveraging the Cloud Native Storage (CNS) feature in vCenter Server (released in vSphere 6.7 Update 3). No CSI driver installation necessary.

Amazon EKS users will immediately gravitate towards the consistent AWS management experience in Amazon EKS Anywhere. vSphere administrators will enjoy the ease of deployment and integration with vSphere infrastructure that they already have on-premises. To add to that, Amazon EKS Anywhere is Open Source. It can be downloaded and fully deployed without software or license purchase. You don’t even need an AWS account.

I found PowerStore was a good fit for vSphere with Tanzu, especially the PowerStore X model, which has a built-in vSphere hypervisor, allowing customers to run applications directly on the same appliance through a feature known as AppsON.

The question that quickly surfaces is: What about Amazon EKS Anywhere on PowerStore X on-premises or as an Edge use case? It’s a definite possibility. Amazon EKS Anywhere has already been validated on VxRail. The AppsON deployment option in PowerStore 2.1 offers vSphere 7 Update 3 compute nodes connected by a vSphere Distributed Switch out of the box, plus support for both vVol and block storage. CNS will enable DevOps teams to consume vVol storage on a storage policy basis for their containerized applications, which is great for PowerStore because it boasts one of the most efficient vVol implementations on the market today. The native PowerStore CSI driver is also available as a deployment option. What about sizing and scale? Amazon EKS Anywhere deploys on a single PowerStore X appliance consisting of two nodes but can be scaled across four clustered PowerStore X appliances for a total of eight nodes.

As is often the case, I went to the lab and set up a proof of concept environment consisting of Amazon EKS Anywhere running on PowerStore X 2.1 infrastructure. In short, the deployment was wildly successful. I was up and running popular containerized demo applications in a relatively short amount of time. In Part II of this series, I will go deeper into the technical side, sharing some of the steps I followed to deploy Amazon EKS Anywhere on PowerStore X.

Author: Jason Boche

Twitter: @jasonboche


PowerMax containers data storage Kubernetes PowerStore

Part 2 – The ‘What’ - Introducing Dell Container Storage Modules (CSM)

Itzik Reich

Fri, 19 Nov 2021 14:04:44 -0000


Read Time: 0 minutes

In the first post of the series, which you can read all about here, I discussed some of the challenges associated with managing the storage and data protection aspects of Kubernetes. Now, let’s discuss our solutions:

Enter CSM, or Introduction to Container Storage Modules

Remember the 2019 session and the in-depth thinking we had gone through about our customers’ real world needs? The Kubernetes ecosystem is growing rapidly and when it comes to storage integration, CSI plugins offer a way to expose block and file storage systems to containerized workloads on Container Orchestration systems (COs) like Kubernetes. 

Container Storage Modules (CSM) improve the observability, usability, and data mobility of stateful applications using the Dell Technologies storage portfolio. CSM also extends Kubernetes storage features beyond what is available in the Container Storage Interface (CSI) specification. Together, CSM and the underlying CSI plugins pioneer application-aware, application-consistent backup and recovery on top of the most comprehensive enterprise-grade storage and data protection portfolio for Kubernetes. 

CSM extends enterprise storage capabilities to Kubernetes. It reduces management complexity so that developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization, and resiliency. CSM is open source and freely available.

Dell EMC Container Storage Modules (CSM) brings powerful enterprise storage features and functionality to Kubernetes for easier adoption of cloud-native workloads, improved productivity, and scalable operations. This release delivers software modules for storage management that give developers the ability to build automation for IT needs and other critical enterprise storage features. These include data replication across data centers, role-based access control (RBAC) authorization, observability, and resiliency for disaster recovery and avoidance. Improved resource utilization enables automated access from K8s environments to any of our portfolio storage systems, and:

  • Gives teams the flexibility to choose whichever back end lets them provision and leverage the strengths of the individual system
  • Flexible + simple = powerful
  • Puts storage that isn’t 100% utilized to work

This enables the K8s environment manager to directly allocate storage and services, and it:

  • Reduces time
  • Gives them a pool to draw from and then lets them go handle it
  • Frees up the developer to develop

Extend Enterprise Storage to Kubernetes – accelerating adoption of cloud-native workloads with proven enterprise storage:

  • Dell EMC Container Storage Modules (CSM) enables a high-performing and resilient enterprise storage foundation for Kubernetes.
  • CSM delivers a full stack of enterprise capabilities such as industry-leading replication, authorization, failure recovery, and management. These capabilities accelerate deployment testing, resulting in a faster application deployment life cycle.
  • CSM allows developers and storage admins to take advantage of the unique benefits of Dell EMC storage systems, such as PowerMax Metro smart DR and the PowerFlex software-defined storage architecture.
  • Dell Technologies has purpose-built platforms for streaming data, IoT, and Edge computing use cases, designed with container-based architecture and management.

Empower Developers – Improve productivity by reducing development life cycles

  • CSM reduces storage management complexity with observability modules so developers can consume enterprise storage with ease.
  • It also provides a complete K8s solution stack that delivers an integrated experience for developers and storage admins.
  • Customers will be able to take advantage of consistent monitoring, management, and policy enforcement across enterprise storage and DevOps environments.

Automate storage operations – Integrate enterprise storage with existing Kubernetes toolsets for scalable operations

  • CSM allows customers to realize the promise of infrastructure as code for frictionless data collection and consumption
  • CSM observability empowers customers to create storage pools across multiple storage arrays for minimal storage management
  • CSM delivers an integrated experience that bridges the gap between Kubernetes admins/developers and traditional IT admins, further solidifying enterprise storage’s role as a viable alternative to public cloud while eliminating silos and shadow IT.

The modules are separated into these six specific capabilities:

Observability – Delivers a single pane for the K8s/container administrator to view the whole CSM environment, using the Grafana and Prometheus dashboards that K8s admins are already familiar with for monitoring persistent storage performance.

Replication – Enables array replication capabilities to K8s users with support for stretched and replica K8s clusters. 

Authorization – Gives storage and Kubernetes administrators the ability to apply RBAC and usage rules for our CSI drivers. 

Resiliency – Enables K8s node failover by monitoring persistent volume health, designed to make Kubernetes Applications, including those that use persistent storage, more resilient to node failures. The module is focused on detecting node failures (power failure), K8s control plane network failures, and Array I/O network failures, and to move the protected pods to hardware that is functioning correctly. 

Volume Placement – Intelligent volume placement for Kubernetes workloads, optimized based on available capacity.

Snapshots – CSI-based snapshots for operational recovery and data repurposing. The Snapshots feature is part of the CSI plugins for the different Dell EMC arrays and takes advantage of state-of-the-art snapshot technology to protect and repurpose data. In addition to point-in-time recovery, these snapshots are writable and can be mounted for test and dev and analytics use cases without impacting the production volumes.

These modules are planned for RTS, with a rolling release prioritized by customer demand per storage platform – applicable to PowerScale, PowerStore, PowerMax, PowerFlex, and Unity XT. Available at RTS:

  • Authorization Module
    • PowerScale
    • PowerMax
    • PowerFlex
  • Resiliency Module
    • PowerFlex
    • Unity XT
  • Observability Module
    • PowerFlex
    • PowerStore
  • Replication Module
    • PowerMax Metro/Async
  • One Installer

The publicly accessible repository for CSM is on GitHub, and the documentation there provides a complete set of material on CSM.

Here is an overview demo of CSM:

Watched it? Awesome, now let’s go deeper into the modules:


CSM for Observability is part of the CSM (Container Storage Modules) open-source suite of Kubernetes storage enablers for Dell EMC products. It is an OpenTelemetry agent that collects array-level metrics for Dell EMC storage so they can be scraped into a Prometheus database. With CSM for Observability, you gain visibility not only into the capacity of the volumes and file shares you manage with Dell CSM CSI (Container Storage Interface) drivers, but also into their performance in terms of bandwidth, IOPS, and response time. Thanks to pre-packaged Grafana dashboards, you can browse these metrics’ history and see the topology between a Kubernetes PV (Persistent Volume) and its translation as a LUN or file share in the backend array. This module also allows Kubernetes admins to collect array-level metrics to check overall capacity and performance directly from the Prometheus/Grafana tools rather than interfacing directly with the storage system itself. Metrics data is collected and pushed to the OpenTelemetry Collector, so it can be processed and exported in a format consumable by Prometheus. 

CSM for Observability currently supports PowerFlex and PowerStore. Its key high-level features are:

  • Collect and expose Volume Metrics via the OpenTelemetry Collector
  • Collect and expose File System Metrics via the OpenTelemetry Collector
  • Collect and expose (K8s) node metrics via the OpenTelemetry Collector
  • Collect and expose filesystem capacity metrics via the OpenTelemetry Collector
  • Collect and expose block storage capacity metrics via the OpenTelemetry Collector
  • Non-disruptive config changes
  • Non-disruptive log level changes
  • Grafana Dashboards for displaying metrics and topology data

Below, you can see the module working with PowerStore:

And PowerFlex:

The publicly accessible repository is on GitHub; see its documentation for a complete set of material on CSM Observability.


CSM for Replication is the module that allows provisioning of replicated volumes using Dell storage. CSM for Replication currently supports PowerMax and PowerStore.

Key High-Level Features:

  • Replication of PersistentVolumes (PVs) across Kubernetes clusters, in multi-cluster and single-cluster topologies
  • Replication action execution (planned/unplanned failover, sync, pause, resume)
  • Async/Sync/Metro configurations support (PowerStore only supports Async)
  • repctl – CLI tool that helps with replication related procedures across multiple K8s clusters

The publicly accessible repository for CSM is on GitHub; see its documentation for a complete set of material on CSM Replication.

The following video includes an Introduction and the Architecture (using PowerMax as the example):

Below, you can see end-to-end demos on how to configure CSM replication for PowerStore, and how to perform failover & failback operations of WordPress and MySQL DB, using PowerStore Async replication. 


Performing Failover & Failback (Reprotect):

Using PowerMax?

  • The following video shows synchronous replication using CSM Replication for PowerMax SRDF Sync Replication with File I/O being generated.

  • This video shows Active-Active High-Availability using CSM Replication for PowerMax SRDF Metro Volumes with PostgreSQL:


  • CSM for Authorization is part of the CSM (Container Storage Modules) open-source suite of Kubernetes storage enablers for Dell EMC products. CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules.
  • Storage administrators of compatible storage platforms will be able to apply quota and RBAC rules that instantly and automatically restrict cluster tenants’ usage of storage resources. Users of storage through CSM for Authorization do not need to have storage admin root credentials to access the storage system.
  • Kubernetes administrators will have an interface to create, delete, and manage roles/groups to which storage rules may be applied. Administrators and/or users can then generate authentication tokens that can be used by tenants to use storage with proper access policies being automatically enforced.
  • CSM for Authorization currently supports PowerFlex, PowerMax, and PowerScale.

Its key high-level features are:

  • Ability to set storage quota limits to ensure K8s tenants are not over-consuming storage 
  • Ability to create access control policies to ensure K8s tenant clusters are not accessing storage that does not belong to them 
  • Ability to shield storage credentials from Kubernetes administrators, ensuring that credentials are only handled by storage admins

The publicly accessible repository is on GitHub; see its documentation for a complete set of material on CSM Authorization.

Below, you can see the Authorization module for PowerFlex:


User applications can run into problems when their Pods need to be resilient to node failure. This is especially true of applications deployed with StatefulSets that use PersistentVolumeClaims. Because Kubernetes guarantees that there will never be two copies of the same StatefulSet Pod running at the same time and accessing storage, it does not clean up StatefulSet Pods when the node executing them fails.

CSM for Resiliency currently supports PowerFlex and Unity. 

Key High-Level Features:

  • Detect pod failures for the following failure types – Node failure, K8s Control Plane Network failure, Array I/O Network failure
  • Cleanup pod artifacts from failed nodes
  • Revoke PV access from failed nodes

Below, you can see a demo of the Resiliency module for PowerFlex:

The publicly accessible repo is on GitHub; see its documentation for a complete set of material on CSM Resiliency.

The Snapshots feature is part of the CSI plugins of the different Dell EMC arrays and takes advantage of the state-of-the-art snapshot technology to protect and repurpose data. In addition to point-in-time recovery, these snapshots are writable and can be mounted for test and dev and analytics use cases without impacting the production volumes.

See the following demo about volume groups snapshots for PowerFlex:

No man (or customer) is an island, and Kubernetes comes in many flavors. Here at Dell Technologies, we offer a wide variety of solutions: storage arrays for every need (from PowerStore to PowerFlex to PowerMax to PowerScale and ECS), turnkey solutions like VxRail with or without VCF, and deep storage-array integration with everything from upstream Kubernetes to Red Hat OpenShift (including the OpenShift Operator) to vSphere with Tanzu, just so we can meet you where you are today AND tomorrow.

With Dell Technologies’ broad portfolio designed for modern and flexible IT growth, customers can employ end-to-end storage, data protection, compute, and open networking solutions that support rapid container adoption. Developers can create and integrate modern data applications by relying on accessible open-source integrated frameworks and tools across bare metal, virtual, and containerized platforms. Dell enables support for organizational autonomy and real-time benefits for container and Kubernetes platforms, with adherence to IT best practices based on an organization’s own design needs.

In the next post, we will be covering the ‘How’ to install the new CSI 2.0 Common installer and the CSM modules.

PowerMax containers data storage Kubernetes PowerStore

Introducing Dell Container Storage Modules (CSM), Part 1 - The 'Why'

Itzik Reich

Fri, 19 Nov 2021 14:04:44 -0000


Read Time: 0 minutes

Dell Tech World 2019. Yeah, the days of actual in-person conferences. Michael Dell is on stage, and during his keynote he says, “we are fully embracing Kubernetes”. My session is the next one, where I explain our upcoming integration of storage arrays with the Kubernetes CSI (Container Storage Interface) API. Now, don’t get me wrong, CSI is awesome! But at the end of my session, a lot of people come up to me asking very similar questions. The theme was ‘how do I keep track of what’s happening in the storage array?’ You see, CSI doesn’t have role-based access to the storage array, not to mention things like quota management. At a very high level, think about storage admins who want to embrace Kubernetes but are afraid of losing control of their storage arrays. If ‘CSI’ sounds like the name of a TV show, I encourage you to stop here and first read my earlier blog posts about it.

Back to 2019. After my session, I gathered a team of product managers and we started to think about upcoming customer needs. We didn’t need a crystal ball; rather, as the largest storage company in the world, we started interviewing customers about their upcoming needs regarding K8s. Now, let’s take a step back and discuss the emergence of cloud-native apps and Kubernetes.

In the past, companies would rely on Waterfall development and ITIL change management operational practices. This meant organizations had to plan for:

  • Long Development cycles before handing an application to ops
  • IT ops often resisting change and slow innovation

Now companies want to take advantage of a new development cycle called Agile along with DevOps operational practices. This new foundation for IT accelerates innovation through:

  • Rapid iteration and quick releases
  • Collaboration via involving the IT ops teams throughout the process

Operational practices aren’t the only evolving element in today’s enterprises; application architectures are quickly changing as well. For years, monolithic architectures were the standard for applications. These types of applications had great power and efficiency and ran on virtual machines. However, they have proven costly to reconfigure and update, and they take a long time to load. In cloud-native applications, components of the app are segmented into microservices, which are then bundled and deployed via containers. This container/microservice relationship allows cloud-native apps to be updated and scaled independently. To manage these containerized workloads, organizations use an open-source management platform called Kubernetes. To give a real-world example, imagine monolithic apps like a freight train: there is a lot of power and capacity, but it takes a long time to load and is not easy to reconfigure. Cloud-native apps function more like a fleet of delivery vehicles, with reduced capacity but resilient and flexible in changing the payload or adapting capacity as needed. A fleet of delivery vehicles needs a conductor to schedule and coordinate the service, and that is the role Kubernetes plays for containers in a cloud-native environment. Both approaches are present in today’s modern apps, but the speed and flexibility of cloud-native apps is shifting priorities everywhere.

Let’s dig more into this shift in software development and delivery. Leading this shift is the use of microservices: loosely coupled components that are self-contained, highly available, and easily deployable. Containers enable these microservices patterns by providing lightweight packages that use resources efficiently, giving developers ‘build once, run anywhere’ flexibility at the scale they are embracing. Then came Kubernetes. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It has become the industry “go-to” for service discovery, load balancing, and storage orchestration. With agile development comes the need for speed and continuous delivery, which, with the right tools and infrastructure, can create the right business outcomes as demands increase. With the advent of flexible cloud-native applications, DevOps teams formed and created their own agile frameworks that increased delivery of code with less dysfunction and overhead than traditional models, sometimes intentionally or unintentionally bypassing IT Operations’ best practices and, with them, the opportunity to build modern IT infrastructures that support and enhance their development initiatives.

As traditional models for software development evolve, so does the infrastructure that supports them. IT Operations’ best practices can be applied to these new models through the enterprise-level data management tools that Dell Technologies provides. DevOps teams require seamless, non-disruptive, and reliable mechanisms to continue meeting business demands with agility and scale. With Dell Technologies’ broad portfolio designed for modern and flexible IT growth, customers can employ end-to-end storage, data protection, compute, and open networking solutions that support accelerated container adoption. Developers can create and integrate modern applications by relying on accessible open-source integrated frameworks and tools across bare metal, virtual, and containerized platforms. Dell enables support for DevOps elasticity and real-time benefits for container and Kubernetes platforms, applying best practices based on each organization’s own design and needs.

Dell Technologies aligns developers and IT operations, empowering them to design and operate cloud-native organizations while meeting business demands and increasing the quality of outputs. This is supported by industry standards built around containers: container storage interfaces (CSI), plug-ins with Container Storage Modules, and PowerProtect Data Manager. Availability is the aspect of data that customers, and every level of the business, ultimately care about from just about every angle; especially securely accessed data, whether on-premises or in the cloud. Though developers may claim to understand Kubernetes inside and out, they often miss features at the IT operations level that we can provide. With a portfolio as big as ours, we must understand the maturity level the customer is in. Storage administrators will defer to their PowerMax or VxRail; if they want to continue to purchase these products, they will appreciate built-in container/Kubernetes support that is easy to onboard without disrupting their developers. At the application layer, teams employing Kubernetes or OpenShift well into the software-defined journey may find PowerFlex a natural choice. CSI downloads on GitHub exceed 1 million. Kubernetes developers often know little about storage beyond local storage servers and drives, whereas their operational partners care about resiliency, snapshots, restore, replication, compression, and security. Across this variety of storage solutions, CSI plug-ins and Container Storage Modules simplify deployment choices, with an emphasis on applying operational best practices. 


  • Cloud-Native Computing Foundation (CNCF) SIG contributor
  • Complete E2E integrated industry-standard APIs
  • Self-service CSI driver workflows
  • GitHub repository for developers
  • Partner integrations with VMware Tanzu, Red Hat OpenShift, Google Anthos, Rancher, and others
  • DevOps and IaC integration with Ansible, Terraform, Puppet, ServiceNow, vRO, Python, PowerShell, etc.
  • Kubernetes Certified Service Provider (KCSP) Consultant Services

Automate & Manage:

  • Container Storage Modules (CSM)
    • Data replication across data centers
    • RBAC authorization
    • Observability
    • Resiliency (disaster recovery & avoidance)
  • Single platform, Kubernetes & application-aware data protection
  • Application consistent backups
    • MySQL, MongoDB, Cassandra, Postgres, etc.
  • Infrastructure Automation & Lifecycle Management
    • API driven software-defined infrastructure with automated lifecycle management
  • Policy-based protection
    • Replication, retention, tiering to S3-compatible object storage, SLA reporting
  • Provide in-cloud options for developers with support for AWS, Azure backup policies

Scale & Secure:

  • Provisioning and automating policies
  • Extract data value in real-time through open networking and server/compute
  • Deploy data protection backup and restores via PowerProtect Data Manager
  • Integrated Systems; VxBlock, VxRail, PowerFlex, Azure Stack 
  • Manage Kubernetes with PowerScale in multi-cloud environments
  • Accelerate with edge / bare metal via Kubespray / Streaming Data Platform (SDP) w/ Ready Stack for Red Hat OpenShift platforms
  • Obtain seamless security and secure critical data via CloudLink

Ok, let’s talk Kubernetes.

Kubernetes adoption is really starting to pick up. By 2025, it is expected that up to 70% of enterprises will be using Kubernetes, and that 54% will deploy it primarily in their production environments! Yep, that means we are way beyond the "kicking the tires" phase. A few weeks ago, I talked with my manager about these trends.

But it's not all rosy; Kubernetes presents plenty of challenges. To name a few:

Lack of internal alignment results in shadow IT, which makes the IT admin's job harder: less visibility and monitoring, and more difficulty meeting security and compliance requirements. Kubernetes also cannot automatically guarantee that resources are properly allocated between the different workloads running in a cluster; to set that up, you need to configure resource quotas manually. The opportunity is to align developers and IT operations by empowering them to design and operate cloud-native organizations while achieving business demands and increasing quality outputs.
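As a minimal sketch of what manual quota setup looks like, here is a basic Kubernetes ResourceQuota manifest (the namespace, name, and limits below are purely illustrative, not from any Dell reference configuration):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota        # illustrative name
  namespace: team-a         # illustrative namespace
spec:
  hard:
    requests.cpu: "8"       # total CPU the namespace may request
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    persistentvolumeclaims: "10"   # cap on the number of PVCs
```

Applied with kubectl, a quota like this keeps one team's workloads from starving the rest of the cluster, which is exactly the kind of guardrail IT operations can bring to a developer-driven Kubernetes environment.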

In the next post, I will share the ‘What’ are we releasing to tackle these challenges...

Read Full Blog
Microsoft hybrid cloud PowerStore Azure Arc Azure Arc-enabled Services APEX

Azure Arc Data Services on Dell EMC PowerStore

Doug Bernhardt

Thu, 04 Nov 2021 19:37:31 -0000


Read Time: 0 minutes

Azure Data Services is a powerful data platform for building cloud and hybrid data applications. Both Microsoft and Dell Technologies see tremendous benefit in a hybrid computing approach. Therefore, when Microsoft announced Azure Arc and Azure Arc-enabled data services as a way to enable hybrid cloud, this was exciting news!

Enhancing value

Dell Technologies is always looking for ways to bring value to customers by not only developing our own world-class solutions, but also working with key technology partners. The goal is to provide the best experience for critical business applications. As a key Microsoft partner, Dell Technologies looks for opportunities to perform co-engineering and validation whenever possible. We participated in the Azure Arc-enabled Data Services validation process, provided feedback into the program, and worked to validate several Dell EMC products for use with Azure Arc Data Services.

Big announcement

When Microsoft announced general availability of Azure Arc-enabled data services earlier this year, Dell Technologies was there with several supported platforms and solutions from Day 1. The biggest announcement was around Azure Arc-enabled data services with APEX Data Storage Services. However, what you may have missed is that Dell EMC PowerStore is also validated on Azure Arc-enabled data services!

What does this validation mean?

This validation means that Dell Technologies has run a series of tests on the certified solutions to ensure that they provide the required features and integrations. The results of the testing were then reviewed and approved by Microsoft. In the case of PowerStore, both the PowerStore X and PowerStore T versions were fully validated. Full details of the validation program are available on GitHub.

Go forth and deploy with confidence knowing that the Dell EMC PowerStore platform is fully validated for Azure Arc!

More information

In addition to PowerStore, Dell Technologies leads the way in certified platforms. Additional details about this validation can be found here.

For more information, you can find lots of great material and detailed examples for Dell EMC PowerStore here: Databases and Data Analytics | Dell Technologies Info Hub

You can find complete information on all Dell EMC storage products on Dell Technologies Info Hub.

Author: Doug Bernhardt  Twitter  LinkedIn

Read Full Blog
AppSync data protection PowerStore Dell EMC AppSync

What’s new with Dell EMC AppSync Copy Management Software and PowerStore

Andrew Sirpis

Thu, 14 Oct 2021 21:15:43 -0000


Read Time: 0 minutes

For those who don't already know, Dell EMC AppSync is a software package that simplifies and automates the process of generating and consuming copies of production data. At a high level, AppSync can perform end-to-end operations such as quiescing the database, snapping the volumes, and mounting and recovering the database. For many end users, these operations can be difficult without AppSync because of the variety of applications and storage platforms involved.  

AppSync provides a single pane of glass and its workflows work the same, regardless of the underlying array or application. AppSync natively supports Oracle, Microsoft SQL Server, Microsoft Exchange, SAP HANA, VMware, and filesystems from various operating systems. The product also provides an extensible framework through plug-in scripts to deliver a copy data management solution for custom applications.    

Customers use AppSync for repurposing data, operational recovery, and backup acceleration.

There are two primary workflows for AppSync:

  • Protection workflows allow customers to schedule copy creation and expiration to meet service level objectives of operational recovery or backup acceleration.
  • Repurposing workflows allow customers to schedule the creation and refresh of multi-generation copies.  

Both workflows offer automated mount and recovery of the copy data.  

AppSync is available as a 90-day full featured trial download, and provides two licensing options:  

  • The AppSync Basic license ships with all PowerStore systems.
  • The AppSync Advanced license is fully featured and available for purchase with all Dell EMC primary arrays.

For supported configurations, see the AppSync support matrix. (You’ll need to create a Dell EMC account profile if you do not already have one.)

The latest version, AppSync 4.3 – released on July 13th, 2021 – contains many new features and enhancements, but in this blog I want to focus on the PowerStore system support and functionality. AppSync has supported PowerStore since version 4.1. Because PowerStore supports both licensing models mentioned above, testing it and implementing it into production is simple.            

AppSync supports the discovery of both the PowerStore T and PowerStore X model hardware and multi-appliance clusters. The new PowerStore 500 is also supported: a low-cost, entry-level PowerStore system that can support up to 25 NVMe drives. Clustering the 500 model with other PowerStore 1000-9000 models is fully supported. For more details, check out the PowerStore 500 product page and the PowerStore Info Hub.  

PowerStore 500

PowerStore can use both local and remote AppSync workflows: Protection and Repurposing.  Production restore is supported for both local and remote. AppSync uses the PowerStore Snapshot and Thin Clone technologies embedded in the appliance, so copies are created instantly and efficiently. It also leverages PowerStore async native replication for remote copy management. (When replicating between two PowerStore systems, source to target, you can only have one target system.) The figure below shows how a PowerStore array is discovered in AppSync.

We have more sources of information about integrating AppSync and PowerStore. Here are some to get you started:

  • Dell EMC PowerStore and AppSync Integration – This video shows how AppSync can automatically create remote application consistent copies on PowerStore for Microsoft SQL Server. (The configuration includes a PowerStore X model appliance at the source site, running AppSync and SQL Server virtual machines using PowerStore AppsON functionality. The remote site uses a PowerStore T model as the replication destination site.)
  • Dell EMC PowerStore: AppSync – This white paper provides guidelines for integrating the two for copy management.   

You can also find other AppSync related documents on the Dell Info Hub AppSync section.  

We hope you have a great experience using these products together!

Author: Andrew Sirpis  LinkedIn

Read Full Blog
PowerStore PowerStoreOS PowerStoreCLI

What is PowerStoreCLI Filtering?

Ryan Meyer

Thu, 14 Oct 2021 20:20:56 -0000


Read Time: 0 minutes

What is PowerStoreCLI Filtering? With the sheer volume of features being pumped out with every Dell EMC PowerStore release, this may be a common question, as minor features sometimes get overlooked. That's why I'm here to share some helpful PowerStoreCLI tips and show why you just might want to use the filtering feature that got an update in PowerStoreOS 2.0.

PowerStoreCLI, also known as pstcli, is a lightweight command-line interface application that installs on your client and allows you to manage, provision, and modify your PowerStore cluster.

For starters, check out the Dell EMC PowerStore CLI Reference Guide and Dell EMC PowerStore REST API Reference Guide to see the wide variety of CLI and REST API commands available. You can also download the pstcli client from the Dell Product Support page.

Now, when it comes to fetching information about the system through pstcli, we generally use the "show" action command to display the needed information. This could be a list of volumes, replications, hosts, alerts, or what have you. The bottom line is that you're going to use the show command quite a bit, and sometimes the output can be a little cumbersome to sift through if you don't know what you're looking for.

This is where filtering comes into play, using the "-query" switch. With the query capabilities, you can fine-tune your output to focus precisely on what you're looking for. Query supports various conditions (and, or, not) and operators such as ==, <, >, !=, like, and contains, to name a few. If you're familiar with SQL, the condition and operator syntax is very similar to how you would type out SQL statements. But as with all pstcli commands, you can always put a "-help" at the end of a command if you can't remember the syntax.

Filtering your pstcli commands combined with custom scripts can be quite a powerful automation tool, allowing you to filter output directly from the source, rather than putting all your filtering logic on the client side. There are tons of ways to automate through scripting which I will save for another discussion. I’ll mainly be focusing on the command line filtering aspects from the PowerStore side. Let’s look at an example of how you can filter the output of your alerts with pstcli.

I'll be using the commands in a session to reduce screenshot clutter, so you don't see all the login information. You don't need to use a session to use pstcli filtering, but it's a neat way to get familiar with pstcli without having to see all the login and password info on every command. If you don't know how to establish a pstcli session to your PowerStore, I recommend checking out the Dell EMC PowerStore: CLI User Guide.

The basic alert command is “alert show”. This will blast out every cached alert on the system, both active and cleared alerts.

alert show

I only listed the first few alerts in this figure because this was a long-standing system with hundreds of cleared alerts and only a few active. As you can see, there is a lot of information in the output. By default, most of the columns are abbreviated unless you have a very wide terminal, so the output may not give much insight into what's happening with the system at first glance. Combine that with the fact that you may have hundreds of lines to look at. This is where filtering can really clear things up and provide a more targeted view of your command output.

So, let’s apply some filtering. Perhaps I only want to see active alerts and ones that have a severity other than Info.

alert show -query 'state == ACTIVE && severity != Info'

There, now my output went from 100+ lines to only displaying five alerts!

Take it one step further with the -select switch to filter out the extra columns. Let’s say my script only needs the event ID, Event Code, Timestamp, and Severity.

alert show -select id,event_code,raised_timestamp,severity -query 'state == ACTIVE && severity != Info'

By the way, if you prefer the REST API, you can apply the same filtering logic to your REST commands! Here's a sample REST command using curl that returns the same output as our example above:

curl -k -u admin:<PowerStore_password> -X GET "https://<PowerStore_IP>/api/rest/alert?select=id,event_code,raised_timestamp,severity&severity=neq.Info&state=eq.ACTIVE" -H "accept: application/json"

There we go: we've filtered through tons of alerts to see only the five active alerts that we are interested in, while viewing only the information we need. From here, as you can imagine, the possibilities are endless!
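If you script against the REST API often, it can help to assemble these filter strings programmatically rather than by hand. Here is a minimal sketch in Python, using only the standard library; the helper name and structure are my own, not part of any Dell tooling, but the output matches the field=op.value filter syntax shown in the curl example:

```python
from urllib.parse import urlencode

def build_alert_query(select=None, **filters):
    """Build a PowerStore-style REST query string for /api/rest/alert.

    Filters are passed as field=(operator, value) pairs, matching the
    REST API's field=op.value filter syntax (e.g. state=eq.ACTIVE).
    """
    params = []
    if select:
        params.append(("select", ",".join(select)))
    for field, (op, value) in filters.items():
        params.append((field, f"{op}.{value}"))
    # Keep "." and "," literal so the query reads like the curl example
    return urlencode(params, safe=".,")

query = build_alert_query(
    select=["id", "event_code", "raised_timestamp", "severity"],
    severity=("neq", "Info"),
    state=("eq", "ACTIVE"),
)
print(query)
# select=id,event_code,raised_timestamp,severity&severity=neq.Info&state=eq.ACTIVE
```

You would append the resulting string to the alert endpoint URL and send it with curl or the HTTP client of your choice.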

For more information on PowerStore, I suggest checking out the PowerStore Info Hub.

Author: Ryan Meyer   LinkedIn

Read Full Blog
NVMe PowerStore NFS PowerStoreOS

Introduction to the PowerStore Platform Offerings and their Benefits

Kenneth Avilés Padilla

Wed, 06 Oct 2021 14:24:27 -0000


Read Time: 0 minutes

In May 2020, we released Dell EMC PowerStore, our groundbreaking storage family with a new container-based microservices architecture that is driven by machine learning. This versatile platform includes advanced storage technologies and a performance-centric design that delivers scale-up and scale-out capabilities, always-on data reduction, and support for next-generation media. 

PowerStore provides a data-centric, intelligent, and adaptable infrastructure that supports both traditional and modern workloads. 


Figure 1. Overview of PowerStore

Let’s start by going over the hardware details for the appliance. The PowerStore appliance is a 2U, two-node, all NVMe Base Enclosure with an active/active architecture. Each node has access to the same storage, with Active-optimized/Active-unoptimized front-end connectivity. 


The following figures show the front and back views of the PowerStore appliance: 

Figure 2. Front view of PowerStore appliance

Figure 3. Back view of PowerStore 1000-9000 appliance

The PowerStore models start at the PowerStore 500 and go all the way up to the PowerStore 9000. The configurations available vary by model. Here is a comparison of the key specifications:

  • Capacity (cluster): 28.57 TB to 11.36 PB effective (11.52 TB to 3.59 PB raw)
  • Max drives (appliance / cluster): 96 / 384
  • Expansion (appliance): add up to three expansion enclosures per appliance
  • Drive types: NVMe SSD and NVMe SCM, plus SAS SSD (only in the expansion enclosures)
  • Cluster scalability: up to four appliances; mix and match appliances of any model/configuration of the same type
  • Embedded ports: 4-port card with 25/10/1 GbE optical/SFP+ and Twinax or 10/1 GbE BaseT; 2-port card with 10/1 GbE optical/SFP+ and Twinax
  • IO modules (2 per node): 32/16/8 Gb FC or 16/8/4 Gb FC; 25/10/1 GbE optical/SFP+ and Twinax (PowerStore T only); 10/1 GbE BaseT (PowerStore T only)
  • Front-end connectivity: FC: 32 Gb NVMe/FC and 32/16/8 Gb FC; Ethernet: 25/10/1 GbE iSCSI and File
For more details about the PowerStore hardware, see the Introduction to the Platform white paper. 

Model types

PowerStore comes in two model types: PowerStore T and PowerStore X. Both types run the PowerStoreOS. 

PowerStore T

For the PowerStore T, the PowerStoreOS runs on the purpose-built hardware as shown in Figure 4. PowerStore T can support a unified deployment with the option to run block, file, and vVol workloads, all from within the same appliance.

PowerStore T supports the following:

  • SAN (NVMe/FC, FC and iSCSI)
  • NAS (NFS, SMB, FTP and SFTP)
  • vVol (FC and iSCSI)

Figure 4. PowerStore T

PowerStore T has two deployment modes to ensure that the platform delivers maximum performance for each use case. The deployment mode is a one-time setting configured when using the Initial Configuration Wizard. The following describes the two deployment modes available as part of the storage configuration: unified and block optimized.

  • Unified: 
    1. Default storage configuration (factory state)
    2. Supports SAN, NAS, and vVol
    3. Resources shared between block and file components
  • Block Optimized:
    1. Alternate storage configuration (requires a quick reboot)
    2. Supports SAN and vVol
    3. Resources dedicated to block components

Depending on the storage configuration you set, the PowerStore T can cover various use cases and fulfill many of the roles that a traditional storage array would, but with added benefits. 

Some use cases that the PowerStore T can cover:

  • With the Unified storage configuration: file workloads are supported. This entails support for home directories, content repositories, SMB shares, NFS exports, multiprotocol file systems (access through SMB and NFS in parallel), and more. For more details, see the File Capabilities white paper.
  • With the Block Optimized storage configuration: For customers running block only workloads, you can leverage PowerStore T with the traditional FC and iSCSI protocols, in addition to running NVMe/FC. 

Now for our second model type, PowerStore X. 

PowerStore X

PowerStore X runs the same PowerStoreOS as the PowerStore T, but virtualized: it runs as virtual machines (VMs) on VMware ESXi hosts installed directly on the purpose-built hardware. This model type includes one of the key features, known as AppsON. As the name suggests, it can run your typical block workloads alongside customer application VMs. Figure 5 provides a glimpse of this model. 

Figure 5. PowerStore X

PowerStore X supports the following:

  • SAN (NVMe/FC, FC, and iSCSI)
  • vVol (FC and iSCSI)
  • Embedded Applications (Virtual Machines)

You can leverage AppsON for multiple use cases that span Edge, Remote Office, data intensive applications, and so on. 

Some example use cases are:

  • Applications: As organizations strive to simplify, while continuing to keep up with ongoing accelerated demands, AppsON can be leveraged to help consolidate the infrastructure that is running business-critical applications. It can host a broad range of applications, such as MongoDB (MongoDB Solution Guide), Microsoft SQL Server (Microsoft SQL Server Best Practices), or Splunk (Capture the Power of Splunk with Dell EMC PowerStore), to name a few. For white papers regarding databases and data analytics, see the databases and data analytics page. 
  • VM Mobility: As mentioned previously, with AppsON we can host VMs natively within PowerStore. This allows for greater flexibility through VMware vSphere, because we can leverage Compute vMotion and Storage vMotion to seamlessly move applications between PowerStore and other VMware targets. For example, you can deploy applications on external ESXi hosts, hyperconverged infrastructure (such as Dell EMC VxRail), or directly on the PowerStore appliance, and migrate them transparently between those targets. 

We have provided a high-level overview and some examples. There are additional use cases that PowerStore can cover. 


Technical documentation

To learn more about the different features that PowerStore provides, see our technical white papers.

Demos and Hands-on Labs 

To see how PowerStore’s features work and integrate with different applications, see the PowerStore Demos YouTube playlist. 

To gain firsthand experience with PowerStore, see our Hands-On Labs site for multiple labs.

Author: Kenneth Avilés Padilla  LinkedIn

Read Full Blog
SQL Server containers Kubernetes PowerStore

Kubernetes Brings Self-service Storage

Doug Bernhardt

Tue, 28 Sep 2021 18:49:52 -0000


Read Time: 0 minutes

There are all sorts of articles and information on the various benefits of Kubernetes and container-based applications. When I first started using Kubernetes (K8s) a couple of years ago, I noticed that storage, or Persistent Storage as it is known in K8s, brings a new and exciting twist to storage management. Using the Container Storage Interface (CSI), storage provisioning is automated, providing real self-service for storage. Once my storage appliance was set up and my Dell EMC CSI driver was deployed, I was managing my storage provisioning entirely from within K8s!

Self-service realized

Earlier in my career as a SQL Server Database Administrator (DBA), I had to plan my storage requirements very carefully, submit a request to the storage team, listen to them moan and groan as if I had asked for their firstborn child, and then ultimately be given half of the storage I requested. As my data requirements grew, I had to repeat this process each time I needed more storage. In their defense, this was several years ago, before data reduction and thin provisioning were common.

When running stateful applications and database engines, such as Microsoft SQL Server on K8s, the application owner or database administrator no longer needs to involve the storage administrator when provisioning storage. Volume creation, volume deletion, host mapping, and even volume expansion and snapshots are handled through the CSI driver! All the common functions that you need for day-to-day storage management data are provided by the K8s control plane through common commands.

K8s storage management

When Persistent Storage is required in Kubernetes, using the K8s control plane commands, you create a Persistent Volume Claim (PVC). The PVC contains basic information such as the name, storage type, and the size.

To increase the volume size, you simply modify the size in the PVC definition. Want to manage snapshots? That too can also be done through K8s commands. When it’s time to delete the volume, simply delete the PVC.
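As a sketch of what that PVC looks like, here is a minimal manifest (the claim name, size, and storage class name below are illustrative; in practice the storage class would reference your deployed CSI driver):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata                    # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # requested size; edit and re-apply to expand
  storageClassName: powerstore-xfs # hypothetical class backed by the CSI driver
```

You would create it with `kubectl apply -f`, grow it by editing the `storage:` value and re-applying, and remove it with `kubectl delete pvc sqldata`.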

Because the CSI storage interface is generic, you don’t need to know the details of the storage appliance. Those details are contained in the CSI driver configuration and a storage class that references it. Therefore, the provisioning commands are the same across different storage appliances.

Rethinking storage provisioning

It's a whole new approach to managing storage for data-hungry applications, one that not only empowers application owners but also challenges how storage management is done and the traditional roles in a classic IT organization. With great power comes great responsibility!

For more information, you can find lots of great material and detailed examples for Dell EMC PowerStore here: Databases and Data Analytics | Dell Technologies Info Hub

You can find complete information on all Dell EMC storage products on Dell Technologies Info Hub.

All Dell EMC CSI drivers and complete documentation can be found on GitHub. Complete information on Kubernetes and CSI is also found on GitHub.

Author: Doug Bernhardt

Twitter: @DougBern

Read Full Blog