Automate and standardize SAP operations using Dell EMC ESI for storage integration
Tue, 22 Sep 2020 13:36:46 -0000
Enterprise SAP landscapes can have dozens of interrelated instances when you include all the nonproduction systems that are used for development, testing, training, and sandbox experimentation. SAP Landscape Management (LaMa) software combined with the Dell EMC Enterprise Storage Integrator (ESI) for SAP LaMa simplifies management of these complex SAP environments by using advanced storage-based local and remote replication services that are integrated into Dell EMC storage systems.
Dell Technologies offers the SAP-enabled enterprise one of the industry’s broadest portfolios of storage array options. All the storage systems that are listed in this blog post are supported by Dell EMC ESI for SAP LaMa software, simplifying landscape management. Customers can choose a solution from any of the Unity, Unity XT, VMAX3, and PowerMax storage array models and get integration with SAP LaMa to improve management of their SAP systems. Unity XT arrays are midrange storage platforms that are designed for performance, efficiency, and data protection. PowerMax arrays are larger storage platforms that accelerate applications with end-to-end NVMe flash storage, global deduplication and compression, and data protection. The following table shows the storage arrays that ESI supports:
Table 1: ESI-supported storage systems (unified SAN and NAS)
SAP LaMa is an automation and orchestration solution that replaces manual or scripted processes for creating clones, copies, and related refresh activities. Simplified landscape management provides key business benefits, including improved service quality and the capability to drive new business innovation. SAP LaMa combined with Dell EMC ESI provides a single pane of glass for operations such as SAP system relocation, snapshots, provisioning processes, and more. These capabilities increase manageability and promote business agility by enabling administration teams to address rapidly changing organizational demands. Dell EMC supports SAP LaMa in physical, virtual, and cloud technologies, providing a single pathway to manage most landscape configurations.
Examples of improved operational capabilities include:
- SAP LaMa System Relocation—This operation enables relocation of an SAP system from the original location to another host that is recognized by SAP LaMa. System relocation operations are useful when the primary SAP server system requires scheduled maintenance or an upgrade. The entire relocation operation is automated, with ESI enabling administrators to be quickly up and running on another server. The following configurations are supported:
- Physical-to-physical (P2P) bare-metal to bare-metal
- Physical-to-virtual (P2V) bare-metal to VMware virtual using physical Raw Device Mappings (pRDMs)
- Virtual-to-virtual (V2V) VMware VMDK disks from one VM to another
- SAP LaMa managed snapshots—This operation enables “snap copying” of all source volumes from an SAP system by using a single API call to maintain storage consistency. Storage snapshots are a low-overhead point-in-time image of source volumes on a storage array. Customers can use these snapshots in place of full copies for many management tasks. For example, PowerMax and VMAX arrays use SnapVX to create a consistent image of SAP system volumes. Snapshots are more efficient than full copies because only the data changes between the source volumes and the images are copied to the “snap copy” volumes. For many SAP landscape management operations, PowerMax and VMAX snapshots consume only a small fraction of the space that is used on the primary SAP system storage array.
- SAP LaMa system provisioning
- System Clone—This operation duplicates a system that is currently running or a previously created managed snapshot. The duplicated clone and source systems have identical system IDs. The clone is isolated on a dedicated network to prevent application users from connecting to the wrong system by mistake. By default, system clones created on Dell EMC storage arrays use space-efficient snapshots of the source volumes to save space. By selecting the Full Copy option, customers can instead create a full clone, which doubles the storage space that is consumed.
- System Copy—This operation creates a copy of an existing SAP system with a new unique SAP system ID, host name, and IP address. A system copy is useful when the business needs a copy of either a production or nonproduction system for quality assurance, development, or testing. The two key differences from a system clone are that a copy operation creates a new identity (ID, host name, IP address) and that the copy uses new storage volumes with full space allocations.
- System Refresh—This operation refreshes all or a selected part of an existing system, as specified by the user. System refresh procedures offer three options: Refresh system, Refresh database (database only), and Restore-based refresh. Refreshing an existing system is frequently faster than creating a copy. Also, the system refresh procedure enables application teams to continue using the SAP system that they are familiar with, reducing complexity. The restore-based refresh procedure integrates with leading Dell EMC data protection solutions such as PowerProtect Application Direct database agents and Data Domain with DDBoost.
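The snapshot space efficiency described in the managed-snapshots operation above can be sketched with illustrative numbers (the system size and change rate below are assumptions, not array-reported figures):

```python
def copy_space_gb(source_gb, changed_fraction, num_copies):
    """Compare space consumed by space-efficient snapshots, which store
    only changed data, against full copies of the source volumes."""
    snapshot_gb = source_gb * changed_fraction * num_copies
    full_copy_gb = source_gb * num_copies
    return snapshot_gb, full_copy_gb

# A 2048 GB SAP system, roughly 5% data change, four nonproduction copies:
snapshot_gb, full_copy_gb = copy_space_gb(2048, 0.05, 4)
print(snapshot_gb, full_copy_gb)
```

With these assumed numbers, four snapshot-based clones would hold on the order of 400 GB of changed data, versus 8 TB for four full copies.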
Figure 1: Dell EMC ESI integration with SAP Landscape Management
In addition to the preceding scenarios, customers can streamline operations such as monitoring and data protection by enabling Dell EMC integration with SAP LaMa. For example, with data protection integration, you can perform on-demand and scheduled backups of the SAP system. By using Unity, Unity XT, VMAX3, and PowerMax ESI integration, customers can automate most system operations for SAP. Further, the opportunity for increased storage savings through efficient storage snapshots means that SAP customers can have a greater number of SAP systems consuming less overall space on their Dell EMC storage systems.
The Dell Technologies SAP site is the place to start learning about the features and capabilities that both Dell EMC storage and PowerEdge servers offer. If you are interested in more technical material, see the Enterprise Storage Integrator for SAP Landscape Management End-user Guide 8.0 on the Dell Technologies support site.
Related Blog Posts
Mitigating Slow Drain with Per Initiator Bandwidth Limits
Wed, 22 Jun 2022 13:16:04 -0000
This blog discusses a feature recently introduced for PowerMax and VMAX All Flash arrays as part of the Foxtail release. The feature allows customers to leverage QoS settings on the PowerMax/VMAX All Flash to reduce or eliminate Slow Drain issues caused by servers with slow HBAs. It was introduced in Q2 2019 (OS 5978 Q2 2019 SR 5978.479.479). From a PowerMax or VMAX All Flash perspective, it helps customers alleviate congestion spreading caused by Slow Drain devices, such as HBAs that have a lower link speed than the storage array SLIC. Customers have been burdened by the Slow Drain phenomenon for many years, and it can lead to severe fabric-wide performance degradation.
While there are many definitions and descriptions of Slow Drain, in this blog we define Slow Drain as:
Slow Drain is an FC SAN phenomenon where a single FC end point, due to an inability to accept data from the link at the speed at which it is being sent, causes switch/link buffers and credits to be consumed, resulting in data being “backed up” on the SAN. This causes overall SAN degradation as central components, such as switches, encounter resources that are monopolized by traffic that is destined for the slow drain device, impacting all SAN traffic.
In short, Slow Drain is caused by an end device that is not capable of accepting data at the rate that it is being received.
For example, if an 8 Gb/s HBA sends a series of large-block read requests to a 32 Gb/s PowerMax front-end Coldspell SLIC, the transmission rate will be 32 Gb/s when the array starts to send the data back to the host. Because the HBA can only receive data at 8 Gb/s, congestion builds at the host end. Too much congestion can lead to congestion spreading, which can cause unrelated server and storage ports to experience performance degradation.
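As a back-of-the-envelope sketch of that mismatch (deliberately ignoring buffer-credit flow control, which in practice pushes the congestion back into the fabric rather than letting data queue indefinitely):

```python
def queued_gigabytes(seconds, send_gbps, drain_gbps):
    """Data that piles up when a port sends faster than the end device
    can drain it; gigabits are converted to gigabytes (divide by 8)."""
    excess_gbps = max(send_gbps - drain_gbps, 0)
    return excess_gbps * seconds / 8

# A 32 Gb/s array port feeding an 8 Gb/s HBA leaves 24 Gb/s of excess,
# about 3 GB of queued data for every second of sustained reads.
print(queued_gigabytes(1, 32, 8))
```

The point is only the asymmetry: the excess has to be absorbed somewhere, and that somewhere is shared switch buffers.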
The goal of the Foxtail release is to reduce Slow Drain issues, which are difficult to prevent and diagnose. It does this through a mechanism that lets the customer prevent an end device from becoming a Slow Drain device by limiting the amount of data that is sent to it.
On PowerMax, this is accomplished by applying a per initiator bandwidth limit, which caps the data sent to the end device (host) at a rate it can receive. Customers can use Unisphere or Solutions Enabler QoS settings to keep faster array SLICs from overwhelming slower host HBAs. Figure 1 shows a scenario of congestion spreading caused by Slow Drain devices, which can lead to severe fabric-wide performance degradation.
Figure 1: Congestion spreading
Implementing per initiator bandwidth limits
Customers can now configure per initiator bandwidth limits at an Initiator Group (IG) or Host level. The I/O for that Initiator is throttled to a set limit by the PowerMaxOS. This can be configured through Unisphere, REST API, Solutions Enabler (9.1 and above), and Inlines.
Note: The Array must be running PowerMaxOS 5978 Q2 2019 or later.
Figure 2: Unisphere for PowerMax 9.1 and higher support
This release also includes a bandwidth limit setting. Users can reach the new menu item by clicking the “traffic lights” icon, which opens the Set Bandwidth Limit dialog. The range for the bandwidth limit is between zero and the maximum rate that the initiators can support (for example, 16 or 32 Gb/s).
Note: The menu item that opens the dialog is enabled only for Fibre Channel (FC) hosts and is disabled for iSCSI hosts.
The bandwidth limit is set as a value in MB/s. For example, Figure 3 shows setting the bandwidth limit to 800 MB/s. The details panel for the host displays an extra field for the bandwidth limit.
Figure 3: Unisphere Set Host Bandwidth Limit
Slow Drain monitoring
Typically, B2B credit starvation or drops in R_RDYs are symptoms of Slow Drain. Writes can take a long time to complete if they are stuck in the queue behind reads; this may also cause the XFER_RDY to be delayed.
The Bandwidth Limit Exceeded seconds metric is available with Solutions Enabler 9.1. In the Performance dashboard, this is the number of seconds that the director port and initiator have run at maximum quota. The metric uses the bw_exceeded_count counter. The KPI is available under the initiator objects section.
Solutions Enabler 9.1 also features enhanced support for setting bandwidth limits at the Initiator Group level. It allows the user to create an IG with bandwidth limits, set bandwidth limits on an IG, clear bandwidth limits on an IG, modify the bandwidth limits on an IG, and display the IG bandwidth limits.
Figure 4: Solutions Enabler bandwidth limit
REST API Support
The REST API can also be used to set bandwidth limits. All REST communication is HTTPS over IP, and calls are authenticated against the Unisphere for PowerMax server. The REST API supports the four main HTTP verbs: GET, POST, PUT, and DELETE.
To set the bandwidth limit, we used a REST PUT call, as shown in Figure 6.
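A sketch of how such a PUT call might be assembled from Python. The endpoint path and payload key names below are illustrative assumptions, not the documented API; consult the Unisphere for PowerMax REST API guide for the exact resource and parameter names.

```python
import json

# Hypothetical endpoint and payload shape for setting a host bandwidth
# limit -- illustrative only; verify names against the Unisphere for
# PowerMax REST API documentation before use.
def build_set_bw_limit_request(unisphere_host, array_id, host_id, limit_mb_s):
    url = (f"https://{unisphere_host}:8443/univmax/restapi/91/sloprovisioning"
           f"/symmetrix/{array_id}/host/{host_id}")
    payload = {
        "editHostActionParam": {
            "setBandwidthLimitParam": {"hostBandwidthLimitMBSec": limit_mb_s}
        }
    }
    return url, json.dumps(payload)

url, body = build_set_bw_limit_request(
    "unisphere.example.com", "000197900123", "oracle_host_1", 800)
# The request would then be sent with an HTTP PUT, for example:
# requests.put(url, data=body, auth=(user, password),
#              headers={"Content-Type": "application/json"})
```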
Figure 5: Inlines support
Note: Inlines support is also available with the 8F Utility.
The following list provides more information about this release:
- Host I/O limits and initiator limits can coexist, one at the SG level and the other at the initiator level.
- PowerMaxOS supports a maximum of 4096 limits; host I/O limits and initiator limits share this pool.
- When an initiator connects to multiple directors, the per initiator limit is distributed evenly across the directors.
- Limits can be set for a child IG (Host) only, not for a parent IG (Host Group).
- An initiator must be in an IG before a bandwidth limit can be set on it.
- The IG must contain at least one initiator before a bandwidth limit can be set for it.
- For PowerMaxOS downgrades (NDDs), if the system has initiators with bandwidth limits set, the downgrade is blocked until the limits are cleared from the configuration.
- Currently, only FC is supported.
Note: The bandwidth limit is set per initiator and split across directors. Although the limit is applied within the IG/Host group screen within Unisphere for PowerMax, it applies to every initiator in that group. This means that the limit is not an aggregate across all the initiators in the group; it is applied individually to each of them and split across directors.
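The split described in the note can be expressed directly. The helper below is just an illustration of the stated distribution rule, not an API:

```python
def per_director_share(limit_mb_s, directors_logged_in):
    """Per the distribution rule above: each initiator receives the full
    limit, divided evenly across the directors it is logged in to."""
    return limit_mb_s / directors_logged_in

# An initiator limited to 800 MB/s and logged in to two directors
# is throttled to 400 MB/s on each director.
print(per_director_share(800, 2))
```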
- Unisphere for PowerMax Release Notes (log in required)
- Connectrix Switch Cisco & Brocade Series: Performance issues in a SAN (log in required)
- Congestion Spreading and How to Avoid It
- Slow Drain Knowledge Map
Author: Pat Tarrant, Principal Engineering Technologist
Looking Ahead: Dell Container Storage Modules 1.2
Mon, 21 Mar 2022 14:42:31 -0000
The quarterly update for Dell CSI Drivers & Dell Container Storage Modules (CSM) is here! Here’s what we’re planning.
New CSM Operator!
Dell Container Storage Modules (CSM) add data services and features that are not in the scope of the CSI specification today. The new CSM Operator simplifies the deployment of CSMs. With an ever-growing ecosystem and added features, deploying a driver and its affiliated modules needs to be carefully planned before beginning the deployment.
The new CSM Operator:
- Serves as a one-stop shop for deploying all Dell CSI drivers and Container Storage Modules
- Simplifies the install and upgrade operations
- Leverages the Operator framework to give a clear status of the deployment of the resources
- Is certified by Red Hat OpenShift
In the short to medium term, the CSM Operator will deprecate the experimental CSM Installer.
Replication support with PowerScale
For disaster recovery protection, PowerScale implements data replication between appliances by means of the SyncIQ feature. SyncIQ replicates data between two sites, where one is read-write and the other is read-only, similar to other Dell storage backends with asynchronous or synchronous replication.
The role of the CSM replication module and underlying CSI driver is to provision the volume within Kubernetes clusters and prepare the export configurations, quotas, and so on.
CSM Replication for PowerScale has been designed and implemented in such a way that it won’t collide with your existing Superna Eyeglass DR utility.
A live-action demo will be posted in the coming weeks on our VP YouTube channel: https://www.youtube.com/user/itzikreich/.
Across the portfolio
In this release, each CSI driver:
- Supports OpenShift 4.9
- Supports Kubernetes 1.23
- Supports the CSI Spec 1.5
- Updates to the latest UBI-minimal image
- Supports fsGroupPolicy
There are three possible options:
- None -- which means that the fsGroup directive from the securityContext will be ignored
- File -- which means that the fsGroup directive will be applied on the volume. This is the default setting for NAS systems such as PowerScale or Unity-File.
- ReadWriteOnceWithFSType -- which means that the fsGroup directive will be applied on the volume if it has fsType defined and is ReadWriteOnce. This is the default setting for block systems such as PowerMax and PowerStore-Block.
In all cases, Dell CSI drivers let kubelet perform the change ownership operations and do not do it at the driver level.
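For reference, fsGroupPolicy is a field on the CSIDriver object itself. A minimal sketch of where it lives in the manifest, shown here as a Python dict (the driver name is hypothetical):

```python
# Minimal sketch of a CSIDriver manifest; the driver name below is
# hypothetical, and the comment lists the three valid fsGroupPolicy values.
csidriver = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "CSIDriver",
    "metadata": {"name": "csi-example.dellemc.com"},
    "spec": {
        # One of: "None", "File", "ReadWriteOnceWithFSType"
        "fsGroupPolicy": "ReadWriteOnceWithFSType",
    },
}
print(csidriver["spec"]["fsGroupPolicy"])
```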
Standalone Helm install
Drivers for PowerFlex and Unity can now be installed with the help of the install scripts we provide under the dell-csi-installer directory.
Note: To ensure that you install the driver on a supported Kubernetes version, the Helm charts take advantage of the kubeVersion field. Some Kubernetes distributions use labels in kubectl version (such as v1.21.3-mirantis-1 and v1.20.7-eks-1-20-7) that require manual editing.
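One way to see why those labels need editing: a strict semver comparison chokes on the distribution suffix. A small sketch of normalizing such version strings before checking them against a chart's kubeVersion constraint:

```python
import re

def normalize_kube_version(raw):
    """Strip distribution suffixes such as '-mirantis-1' or '-eks-1-20-7'
    so the core semver can be compared against a chart's kubeVersion."""
    match = re.match(r"v?(\d+\.\d+\.\d+)", raw)
    return match.group(1) if match else None

print(normalize_kube_version("v1.21.3-mirantis-1"))  # 1.21.3
print(normalize_kube_version("v1.20.7-eks-1-20-7"))  # 1.20.7
```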
Volume Health Monitoring support
This feature is currently in alpha in Kubernetes (in Q1-2022), and is disabled with a default installation.
Once enabled, the drivers expose standard storage metrics, such as capacity usage and inode usage, through the Kubernetes /metrics endpoint. The metrics flow natively into popular dashboards, such as the ones built into OpenShift Monitoring.
Pave the way for full open source!
All Dell drivers and dependencies, such as gopowerstore, gobrick, and more, are now on GitHub and will be fully open source. The umbrella project is and remains https://github.com/dell/csm, where you can open tickets and see the roadmap.
Google Anthos 1.9
NFSv4 POSIX and ACL support
- In PowerScale, you can use a plain ACL or built-in values such as private_read, private, public_read, public_read_write, and public, or custom ones.
- In PowerStore, you can use the custom ones such as A::OWNER@:RWX, A::GROUP@:RWX, and A::OWNER@:rxtncy.
For more details you can:
- Watch these great CSM demos on our VP YouTube channel: https://www.youtube.com/user/itzikreich/
- Read the FAQs
- Subscribe to GitHub notifications to stay informed of the latest releases at: https://github.com/dell/csm
- Ask for help or chat with us on Slack
Author: Florian Coulombel