
Blogs

Dell Technologies' blog posts about the PowerOne system


vSphere PowerOne

Integrating Dell EMC PowerOne and ServiceNow software

David Iovino

Thu, 03 Sep 2020 22:25:30 -0000


This blog post shares the results of our project to integrate PowerOne and the ServiceNow platform. The goal was to demonstrate how PowerOne autonomous operations can be integrated into a ServiceNow workflow.

What is ServiceNow?

ServiceNow software is a comprehensive platform that facilitates IT operations at scale. It is well known for its broad capabilities; we used only a subset of them, and they are not covered in detail here. (For details about ServiceNow, see https://www.servicenow.com/.)

 

About PowerOne

PowerOne is part of Dell’s Converged Infrastructure (CI) family and provides all the benefits of an engineered system. PowerOne is outcome oriented and declarative. Outcomes are primarily focused on the lifecycle of Cluster Resource Groups (CRGs). A CRG is a logical construct that combines compute, storage, network, and a platform (currently VMware vSphere). The following outcomes can be declared by an operator:

 

·      Create: Autonomously create a new CRG based on a set of requirements.

·      Expand: Autonomously expand an existing CRG based on a new (expanded) set of requirements.

·      Reduce: Autonomously reduce an existing CRG based on a new (reduced) set of requirements.

·      Update: Autonomously update an existing CRG to a new platform (vSphere) version.

·      Delete: Autonomously delete an existing CRG, returning resources to the available pool.

 

The secret sauce of PowerOne lies in the PowerOne Controller. The PowerOne Controller leverages a microservices architecture and delivers outcomes through autonomous operations. PowerOne Controller functionality is exposed through the PowerOne API, which orchestration tools such as ServiceNow software can use directly. (PowerOne Navigator, an HTML5-based user interface (UI) that uses the PowerOne API, is provided for operator convenience.)

 

Project Criteria

·      Allow the end user to create a CRG using a ServiceNow workflow

·      Allow the end user to initiate the process from ServiceNow’s Service Catalog

·      Allow the end user to specify requirements, and then to select among offered configurations 

·      Incorporate ServiceNow’s request/approval mechanism into the overall workflow 

 

ServiceNow Integration

The following diagram illustrates the integration between ServiceNow and PowerOne:



Note the following, starting with the Dell P1 Controller:

 

·      Dell P1 Controller:  The PowerOne simulator running in a virtual machine within the Dell corporate environment. 

·      MID Server:  The ServiceNow MID (Management, Instrumentation, and Discovery) server application running in the same virtual machine as the Dell P1 Controller. The MID server communicates with the ServiceNow Integration Hub. Commands are sent to the MID server, which then executes API calls against the Dell P1 Controller. The MID server then sends the response data back to the Integration Hub.

·      Integration Hub:  Enables simple extension of ServiceNow workflows to third-party services. Communicates with the MID server via the ECC (External Communication Channel) queue. 

·      Service Script:  Runs in the ServiceNow instance and interacts with both the client script and the Integration Hub.

·      Client Script:  Runs in the browser and interacts with both the end user and the service script.

 

Using the PowerOne API to create a CRG

The following diagram illustrates the three-step process for creating a new CRG:


 

·      Request - The first API call (POST) allows the client to access the PowerOne select-assets process. The client sends CRG requirements (platform, cores, memory, storage, and storage features) as a JSON object. PowerOne determines the CRG configurations that meet the requirements, then returns this data to the client as a JSON object.

·      Build - The second API call (POST) triggers the create-cluster automation within the PowerOne Controller. Because execution time will vary, depending on the size of the CRG, this is an asynchronous operation. The PowerOne Controller returns a job-id to the client, so that the client can monitor the job status.

·      Monitor - The third API call (GET) allows the client to monitor the status of the create-cluster job. (A sketch of this three-call exchange follows the list.)
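To make the exchange concrete, here is a minimal sketch of the three calls, expressed as JavaScript comments and objects. The endpoint paths, field names, and status values are illustrative assumptions; this post does not reproduce the exact PowerOne API schema.

// A minimal sketch of the three-step exchange. Endpoint paths, field
// names, and status values are assumptions for illustration only.

// 1. Request: POST the CRG requirements to the select-assets process.
var requirements = {
    platform: 'vsphere',              // currently the only supported platform
    cpu_cores: 64,
    memory_gb: 512,
    storage_gb: 10240,
    storage_features: ['compression']
};
// PowerOne responds with the configurations that meet the requirements, e.g.:
// { configurations: [ { id: 'cfg-1', nodes: 4, cpu_cores: 96, memory_gb: 768, ... } ] }

// 2. Build: POST the chosen configuration to trigger create-cluster.
// Because the operation is asynchronous, the controller answers with a
// job handle rather than waiting for completion, e.g.:
// { job_id: '42', status: 'RUNNING' }

// 3. Monitor: GET the job resource (for example /jobs/42) until the
// status reaches a terminal state such as 'COMPLETED' or 'FAILED'.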

 

More about ServiceNow integration

This section provides additional details about the integration work we performed. Most of the following images were taken directly from the ServiceNow portal.

Catalog Item

Below we can see the catalog item we created and named New Dell PowerOne Cluster. We also added a Flow called DP1 Cluster Request Process under the Process Engine tab. We will talk about that Flow later.


Variable Set

We also defined, within the Catalog Item, a Variable Set called DP1 Specifications.

We can see six variables included within the Variable Set:


In the web browser, this Variable Set is rendered as follows:

Let’s take a moment to review these variables.

·      Because it is possible to have multiple MID servers, the variable dp1_controller allows us to specify which one to use.

·      The variables desired_cpu_cores, desired_ram, and desired_storage are used by the select-assets API call.

·      The variable cluster_name will be used by the create-cluster API call.

·      The client script uses the props select box to display the response from the select-assets API call.

Below we can see that four client scripts are defined within the Variable Set for onChange events. This is the same script applied to four separate variables.

The effect is that if the value for dp1_controller, desired_cpu_cores, desired_ram, or desired_storage is changed, the script triggers.


Client Script

The client script is shown below.

(Note the GlideAjax class, which allows server-side code to be executed from the client.)

The client script: 

·      Executes the service script DP1Ajax

·      Passes the variables desired_cpu_cores, desired_ram, desired_storage, and dp1_controller as parameters

·      Processes the response from DP1Ajax, then displays it in the props select box.
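As a rough illustration, an onChange client script along these lines might look as follows. The sysparm parameter names and the DP1Ajax method name (getConfigurations) are assumptions rather than the exact script shown in the portal; GlideAjax, g_form.clearOptions, and g_form.addOption are standard ServiceNow client APIs.

function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue === '') {
        return;
    }

    // GlideAjax executes the DP1Ajax script include on the server.
    var ga = new GlideAjax('DP1Ajax');
    ga.addParam('sysparm_name', 'getConfigurations'); // method name is an assumption
    ga.addParam('sysparm_cores', g_form.getValue('desired_cpu_cores'));
    ga.addParam('sysparm_ram', g_form.getValue('desired_ram'));
    ga.addParam('sysparm_storage', g_form.getValue('desired_storage'));
    ga.addParam('sysparm_controller', g_form.getValue('dp1_controller'));

    ga.getXMLAnswer(function (answer) {
        // Rebuild the props select box from the select-assets response.
        g_form.clearOptions('props');
        var data = JSON.parse(answer || '{}');
        var configs = data.configurations || [];
        for (var i = 0; i < configs.length; i++) {
            g_form.addOption('props', configs[i].id, configs[i].label);
        }
    });
}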



Service Script

The following are snippets from the service script DP1Ajax.

When DP1Ajax is executed, it uses the parameters passed by the client script as part of a select-assets API call to PowerOne:


The DP1Ajax service script then processes the results and returns them to the client script:
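A plausible shape for such a script include is sketched below. The endpoint URL, the /select-assets path, and the parameter names are assumptions; AbstractAjaxProcessor and sn_ws.RESTMessageV2 (including MID server routing via executeAsync/waitForResponse) are standard ServiceNow server APIs.

var DP1Ajax = Class.create();
DP1Ajax.prototype = Object.extendsObject(global.AbstractAjaxProcessor, {

    getConfigurations: function () {
        // Assemble the select-assets request body from the client parameters.
        // (platform and storage features omitted here for brevity)
        var body = {
            cpu_cores: parseInt(this.getParameter('sysparm_cores'), 10),
            memory_gb: parseInt(this.getParameter('sysparm_ram'), 10),
            storage_gb: parseInt(this.getParameter('sysparm_storage'), 10)
        };

        // Route the API call through the MID server named by the client.
        var rm = new sn_ws.RESTMessageV2();
        rm.setHttpMethod('post');
        rm.setEndpoint('https://dp1-controller.example.com/api/select-assets'); // hypothetical URL
        rm.setMIDServer(this.getParameter('sysparm_controller'));
        rm.setRequestHeader('Content-Type', 'application/json');
        rm.setRequestBody(JSON.stringify(body));

        var response = rm.executeAsync();
        response.waitForResponse(60); // seconds to wait for the ECC queue round trip

        // Hand the candidate configurations back to the client as a JSON string.
        return response.getBody();
    },

    type: 'DP1Ajax'
});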

Flow

Shown below is the DP1 Cluster Request Process Flow from Flow Designer.

Up to this point:

·      The interaction between ServiceNow and PowerOne has occurred outside of a Flow.

·      The client script and the service script interact to handle all select-assets API calls.

·      Any change to the following variables will result in a new select-assets API call: 

o  desired_cpu_cores, desired_ram, desired_storage, dp1_controller

When ordering the Catalog Item, the end user triggers the Flow by clicking the Order button. The key steps within the Flow are as follows:

·      Ask for Approval 

·      If approved:

  •     The DP1 Create Cluster Sub Flow handles the create-cluster API call.
  •     The DP1 Provisioning Status Sub Flow monitors the status of the create-cluster job on the PowerOne Controller. (A sketch of this polling step follows the list.)
  •     Once complete, the Flow triggers an email to the end user that contains the details about the CRG that was created.
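Expressed as a script for clarity, the monitoring step amounts to a polling loop like the one below. The job endpoint path, the status values, and the polling interval are assumptions; a Flow Designer sub flow would implement the same logic with flow actions rather than code.

// A sketch of the monitoring logic; endpoint path, status values, and
// interval are illustrative assumptions.
function waitForJob(jobId, midServer) {
    var status = 'RUNNING';
    while (status === 'RUNNING') {
        var rm = new sn_ws.RESTMessageV2();
        rm.setHttpMethod('get');
        rm.setEndpoint('https://dp1-controller.example.com/api/jobs/' + jobId); // hypothetical URL
        rm.setMIDServer(midServer);

        var response = rm.executeAsync();
        response.waitForResponse(60);
        status = JSON.parse(response.getBody()).status;

        if (status === 'RUNNING') {
            gs.sleep(30000); // pause 30 seconds between checks (global scope only)
        }
    }
    return status; // 'COMPLETED' or 'FAILED'
}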



Conclusion

Because of its outcome-oriented nature, PowerOne can be easily integrated into any orchestration tool. The select-assets operation eliminates the need to employ a complex process to query and select inventory. The process of getting from requirements to a CRG that is ready for virtual machines couldn’t be simpler.


David Iovino - LinkedIn

 




PowerOne networking

Extending Layer 2 networking for workload migration to PowerOne systems

Iñigo Olcoz

Fri, 07 Aug 2020 21:07:34 -0000


The PowerOne System represents a new paradigm in the way we operate and manage the infrastructure life cycle in the data center. Greenfield workload deployments are ideal candidates for the new operational model, but it is also desirable to include existing business workloads on the new platform.

The default mechanism for connecting a PowerOne System to a customer network is by means of Layer 3. Layer 3 has numerous benefits in terms of scalability, failure domain isolation, and ease of operation.

Figure 1: PowerOne System fabric connection to the customer network

However, against this background, customers may need to leverage traditional Layer 2 technologies at the integration point, in tandem with Layer 3, to satisfy either of the following two key use cases, both of which are described in detail in our white paper:

  1. Migrating workloads from the customer network into the PowerOne System, without the need to change the identity (the IP addresses) of the virtual machines. The net result is that the customer’s workload IP gateway migrates to the PowerOne Dell ToR switches, in the guise of VXLAN IP anycast gateways. In this use case, all virtual machines attached to that VLAN are migrated into the PowerOne System.
  2. Mixed workload and legacy system use case: In some instances, the customer may be unable to migrate the entire network into the PowerOne System, and may be able to migrate only a portion of the workload. Other components, such as non-virtualized storage, physical Layer 2 firewalls, and load balancers, may not be good candidates for migration, or indeed cannot be migrated.

For this use case, a Layer 2 infrastructure can be configured, in tandem with the Layer 3 architecture, to facilitate Layer 2 communication between virtual devices on the PowerOne System and legacy non-virtualized systems on the customer network. This scenario implies that the default gateway for these networks will remain on the customer network, even for virtualized traffic within the PowerOne domain, though it is entirely possible for the gateway to reside on the PowerOne System.

When a customer adds a PowerOne System to their infrastructure, they can create cluster resource groups (CRGs). (A CRG is an aggregation of compute, storage, and networking infrastructure assets created to satisfy the resource needs of new application workloads.) But what about existing workloads -- the ones that are already running in the customer’s infrastructure? There are instances in which customers do not wish to re-IP or lift-and-shift their workload VMs and subnets in order to migrate them successfully onto a PowerOne system. A re-IP process is generally highly disruptive for existing applications, both operationally and technically.

In order to leverage the considerable intelligent resources available in PowerOne, these existing workloads need to find their way to the PowerOne System. PowerOne standardizes its networking on the principles of Layer 3 because of its inherent ability:

  • To scale in the data center (for possible future multi-Pod configurations)
  • To provide a native mechanism to integrate into a spine/leaf architecture

Situations also arise in which a customer wishes to natively extend existing Layer 2 constructs from their network into the PowerOne environment. This could include bare-metal server or appliance integrations for service chaining or excess resource allocations. Our white paper also describes how to enable the infrastructure so that customers can seamlessly migrate their applications, with application identity intact, into a PowerOne system, as well as simply extending existing or new VLANs into PowerOne.

Solution

Fortunately, when migrating existing workloads to a PowerOne system, it is not necessary to apply new IP addresses to the workload VMs. This simplifies the overall workload network architecture and management, makes migrating workloads a much simpler process, and eliminates the unnecessary risk of a re-IP process. Keeping the virtual machines’ IP addresses the same in both the source and destination workload clusters is therefore all-important: preserving the application identity, including its IP addresses, avoids changes at the DNS level. Using industry-standard tools and procedures for migrations, the potential challenges can be overcome.

In many cases we would like to integrate customer Layer 2 constructs into a PowerOne system. This use case shows how customer switch(es) can be connected in order to facilitate Layer 2 extension into, and out of, the PowerOne system fabric.

Summary

There are two use cases for extending a customer’s existing Layer 2 network constructs into PowerOne:

  1. To extend temporarily only the customer’s Layer 2 network, with the goal of migrating existing virtualized workloads into PowerOne. After completing the migration, we have the option to remove the Layer 2 network. The result is that Layer 3 networking operates within the PowerOne ToR switches, as per original PowerOne design, with existing workloads migrated into PowerOne without the need to re-IP the VMs on which they are running.
  2. To extend permanently the customer’s Layer 2 network AND migrate the entire customer VLAN and associated IP subnet onto the PowerOne infrastructure. In this case, customer switch(es) can be connected to the PowerOne system fabric in order to facilitate Layer 2 extension into, and out of, the PowerOne system fabric.

For more details on these two approaches, please see our white paper: Enabling PowerOne Infrastructure for Network Layer 2 Extension.



PowerOne SRDF disaster recovery

PowerOne and SRDF (Part 3)

Iñigo Olcoz

Mon, 20 Jul 2020 20:12:56 -0000


In this third and final blog post on PowerOne and SRDF, we focus on the SRDF/Metro use case. We explain the network requirements for providing Layer 2 and Layer 3 services, and go into some depth on best practices and recommendations for setting up a PowerOne system in an SRDF/Metro scenario.

Network Architecture

First, we need to define how we will stretch the network for a Metro scenario. As part of the project design, we must determine how to provide Layer 2 and Layer 3 networking for vSphere, NSX Management, and vMotion networks.

The SRDF/Metro use case consists of three sites: two workload sites (local and remote) and a witness site. (Remember that a witness serves to prevent data inconsistencies between local and remote sites. A witness can be virtual or physical.)

Here are the essential Dell EMC Best Practices for setting up SRDF/Metro on both workload sites and on a witness site:

  • Where possible, use dedicated ports on each PowerMax for connectivity to the dedicated replication network. The network must meet the latency requirements for SRDF/Metro and VMware Metro Storage Clusters.
  • Use non-uniform host access to simplify SAN design and provide predictable I/O latency for workloads.
  • Implement vSphere and NSX-T Management as required to meet operational needs and constraints through a dedicated management cluster or through a cluster that is shared with workloads.
  • Implement vCenter HA with a witness site or with restart recovery, depending on network architecture and operational needs.
  • Follow Dell Technologies Best Practices (NSX-T Data Center Administration Guide) to set up NSX-T L2 VPN across both sites.
  • Deploy or migrate workload VMs with NSX-T L2 VPN as the VM Network, for example, for application access.
  • Configure workload VMs to leverage a VMware NSX-T L2 VPN or stretched VXLAN implementation in Dell Smart Fabric Services to retain identical IP addressing on both sites.

The proposed architecture options and their implications appear in the following examples.

Figure 1: Layer 2 networking architecture

In this example, a Layer 2 stretched network is implemented using either of the following:

  • A BGP eVPN—The network is extended by running a VXLAN tunnel over the top of the Layer 3 handoff from a PowerOne system.
  • Layer 2 trunking—Dedicated 802.1Q trunk ports can be configured on the Dell S5232-F switches used as leaf devices, mapping incoming 802.1Q tags into the proper VXLAN virtual networks. If the customer already has a VLAN that is Layer 2 adjacent between sites, this effectively extends it to the other data center, where the other PowerOne system uses the same method to map an incoming tag to the VXLAN virtual network.

Stretching the management VLAN adds complexity to the physical network but has the advantage of retaining identical IP addressing for management components across sites.

Figure 2: Layer 3 networking architecture

In this example, you can include Layer 3 networking to eliminate the need to stretch a VLAN at the physical network level. This example shows how vCenter HA can be used to distribute vSphere Management across sites.

This approach requires that all NSX-T management components are configured with DNS entries that have a short TTL. It also requires a re-IP operation and a DNS update after restarting on Site 2. All workload VMs are restarted automatically in this scenario, through vCenter HA, and are immediately fully functional on the NSX-T L2 VPN.

Management activities, such as changing the NSX-T configuration, become available once the NSX-T Management VMs are restarted.

Figure 3: Distributed networking architecture

Another option is to use the third site for witness duties. Centralizing the management components at the third site isolates them from the workload sites. In this case, in addition to the PowerMax witness, all the vCenter and NSX-T network management elements reside at the third site, so we do not need to restart those services if a workload site fails.

Protecting the management in the third site is not covered in this example.

All of these vMSC architectures (as described in Best Practices for Using Dell EMC SRDF/Metro in a VMware vSphere Metro Storage Cluster) form a highly available business-continuity scenario. In this kind of scenario, both primary and secondary sites are perceived as one by a PowerOne vSphere host and from a CRG provisioning perspective. Stretching the sites at the storage and network levels enables seamless vMotion and Storage vMotion operations between the two sites.

PowerOne continues to deliver the now-traditional Converged Infrastructure values with extensive automation for site local needs, while simultaneously allowing for seamless integration with outcomes that fall outside of autonomous operations. All operations related to these stretched CRGs (such as provisioning, expansion, and lifecycle management) can be tailored to extended use cases, such as SRDF/Metro, through traditional configuration approaches widely used in the industry today.

Best Practices and Recommendations

To determine the correct PowerOne technology configuration for the required outcome, we first need to determine the organization’s continuity requirements for a given cluster, in terms of the following:

  • Recovery Time Objective (RTO)—How long does it take to get the cluster working again after site failure?
  • Recovery Point Objective (RPO)—How much data is the customer prepared to lose?

Designing the architecture means:

  • Determining RTO and RPO requirements and site distance constraints
  • Working with an organization to understand their continuity requirements and DR needs
  • Designing and estimating the price of the appropriate PowerOne configurations to meet those needs

It is typical for an organization to classify its recovery needs on a per-application basis, resulting in collections of applications that have common availability requirements. This maps well to the PowerOne approach in which the cluster is the primary unit of consumption. At the cluster level, one cluster could have an RPO and RTO that is different from that of another cluster, allowing a direct mapping of the recovery needs of applications to the clusters in which they run.

But investment in operational continuity is the starting point. Having confidence that your plan will work is the critical point. Confidence comes from regular testing that is minimally disruptive and performed in a controlled manner.

PowerOne makes possible these various outcomes by making the right set of components and other resources available. This means that the initial sizing work must include understanding the organization’s continuity requirements and how they will be implemented. This will help ensure that the components and configurations needed to fulfill the requirements are incorporated in the system definition and that the correct technologies and capacities are available at implementation.

Operational Continuity: Solutions

To investigate further how an organization’s continuity solution can be provided, let’s examine the best-practice recovery configurations for PowerOne, and how they can be extended to achieve the required outcomes.

We need to address questions about physical connectivity at the storage and network layers, as well as how we configure the logical behavior of our vSphere environments to minimize RPO. To use traditional configuration techniques for non-autonomous extended use cases such as SRDF, the PowerOne Controller is invoked to reserve components and allow their seamless hand-off. We rely on well-proven tools such as VMware Site Recovery Manager to automate failover and failback operations, and to configure the production network on the remote vSphere site.

SRDF/Metro

PowerOne with SRDF/Metro provides an organization’s continuity solution in which we can define RPO and RTO as zero or near-zero when VMware vSphere Metro Storage Cluster technology and architecture are also implemented.

There are a number of considerations to take into account when designing a PowerOne system with SRDF/Metro architecture, specifically:  

  • Compute, storage, and network configuration
  • How we design our vSphere environment in terms of vCenter configuration (HA and Platform Service Controller architectures)
  • NSX-T management design guidelines
  • A few considerations about the SRDF witness 
  • Potential benefits of including a third site in the architecture design
  • Architectural considerations on site design and intersite replication

Conclusion

There is a market demand for a way to deploy critical applications in a highly available manner. As an architecture option for building that solution, combining the core Converged Infrastructure and site-local autonomous outcomes of PowerOne with traditional configuration capabilities for SRDF (and VMware SRM) delivers cumulative, industry-leading value from each of those components in one solution, all fully supported by Dell Technologies.

For additional in-depth information, please read the supporting white paper: Protecting Business-Critical Workloads with Dell EMC SRDF and PowerOne.

PowerOne SRDF disaster recovery

PowerOne and SRDF (Part 2)

Iñigo Olcoz

Thu, 25 Jun 2020 17:55:46 -0000


PowerOne with SRDF use cases and associated topologies

In my previous blog post (PowerOne and SRDF Part I) we introduced the business context and technologies involved in a PowerOne with SRDF scenario. In this second blog post we describe two data center use cases and the associated topologies:

  • Two sites with SRDF Synchronous (SRDF/S) or Asynchronous (SRDF/A) data replication with Remote Restart (disaster restart protection for virtual infrastructure)
  • Two sites in a stretch cluster configuration with synchronous data mirroring (SRDF/Metro). This stretched cluster use case relies on the existence of a third site to perform the witness role.

Protected clusters with SRDF/S and SRDF/A

In this topology, vSphere clusters built using traditional configuration techniques on PowerOne systems are recovered at a defined secondary site on a per-cluster basis. The recovery process can take up to several minutes, depending on the number of servers involved.

The functional requirements for this approach are:

  1. Deploy two PowerOne Systems with PowerMax, one at the primary site and one at the recovery site.  These systems must be licensed for SRDF and VMware SRM.

  2. Create, modify, or delete a cluster at the primary site. Add volumes to the replication set. Select the mode of protection based on RPO: synchronous for zero data loss, or asynchronous where some data loss is acceptable (from a few seconds to minutes, depending on the RPO).

  3. Add stretched application VLAN(s), typically called overlay networks, such as VMware VXLAN using NSX, so that IP addressing works at either the primary or the recovery site. Any mechanism to re-IP and to modify DNS entries as needed at the secondary site may also be valid.

  4. Create, modify or delete remote array connections, called SRDF Groups. Links must be scalable so you can add links to increase throughput.

  5. Invoke failover/failback to prove all technologies and operational processes are functioning as expected.

  6. In order to be able to attach the replicated storage volumes for failover and failback, create storage-less clusters and add them to the vCenter instance at the secondary site.

  7. For non-disruptive failover testing, SRM provides an orchestrated recovery validation mechanism called Bubble. Bubble provides a recovery area using SnapVX clones of R2 volumes without impacting the replication process. Isolate the Bubble recovery to specific VLANs or subnets (so that it does not overlap with production or recovery networks).

This approach describes a typical Disaster Recovery scenario. Through the integration of PowerOne, SRDF/S/A, and VMware SRM, we can create an automated DR architecture that, in case of a site failure, will failover the production CRGs to the secondary disaster site. VMware SRM automates this multi-step recovery of virtual machines. For more details about pre-requisites, supported devices, and configurations, see Implementing Dell EMC SRDF SRA with VMware SRM.

Stretched clusters with SRDF/Metro

In this topology, PowerOne clusters are always on. If one site goes down, VMs are restarted on surviving servers. Application architecture determines recovery time: monolithic applications that require all VMs to be restarted will have a wait time, whereas distributed or cloud-native applications continue to run without interruption, at lower capacity, until the failed VMs restart.

Figure 1: PowerOne with SRDF Metro architecture

The functional requirements for this approach are:

  1. Deploy two PowerOne Systems, with PowerMax licensed for SRDF/Metro, within metro distance.  Create a low-latency communication channel between them to avoid disk write delays.

  2. Create, modify, or delete remote array connections, called SRDF Groups, which require redundant ports and replication adapters using Ethernet or Fibre Channel protocols. Links must be scalable so you can add links to increase throughput.

  3. Create, modify, or delete VMware Metro Storage Cluster(s) using SRDF device pairing. Split servers 50/50 across both systems so that storage is bidirectionally mirrored across both sites.

  4. Add stretched application VLAN(s), such as VMware VXLAN using NSX, so that IP addressing works regardless of which half of the cluster runs the application VM.

  5. (Optional) Implement any fine-grained workload migration controls to address restarting the workloads if application-specific needs or dependencies arise.

  6. Create, delete, or modify SRDF pairs. Suspend and deactivate SRDF/Metro failure recovery controls.

  7. Implement either a bias or witness mechanism to prevent data inconsistencies with multi-access at both sides. The witness is an external arbitrator reachable by both sites. The witness can be a Virtual Witness (vWitness) or a physical array acting as a witness.

In the third and final part of this blog series, we will explore the network architecture, best practices, and recommendations for the different PowerOne with SRDF scenarios we have presented.

PowerOne SRDF disaster recovery

PowerOne and SRDF (Part I)

Iñigo Olcoz

Tue, 23 Jun 2020 18:03:50 -0000


Introduction

Planning for disaster recovery is essential for IT organizations who are designing their environments to support business-critical applications. Each new application or instance must be deployed with enough resiliency to overcome common hazards such as floods, fire, power failures, and human error.

To design a resilient IT infrastructure, we must first consider the datacenter site, or sites. If there is a single datacenter site, simply duplicating the IT infrastructure will not provide the desired resiliency if the failure event impacts the entire site. If we deploy our business applications across more than one site, we would need mechanisms to replicate the information across sites. In the event of a site loss, even with information replicated, we would still need to introduce processes or tools that would help during the subsequent failover and failback operations.

All industries and geographies share a need for a resilient IT architecture, one that is manifested in IT architecture proposals that consider factors such as:

  • Site Distance – Depending on the distance between sites, the required technologies will vary, and the recovery scenarios will be different. Relatively short distances (under 100 km with Round Trip Time under 10 ms) allow the use of more powerful tools in order to minimize the following two factors (RTO and RPO).
  • Recovery Time Objective (RTO) – Every business or application may allow for a different length of time during which to recover when a failure occurs. Some will only support a few seconds of application downtime or no downtime at all, while others may be able to withstand minutes or even hours. This factor greatly influences the architectural requirements.
  • Recovery Point Objective (RPO) – Another key factor when defining a solution is the amount of data a business can afford to lose in the event of an application outage or site loss. In some cases, a business might be able to withstand recovery of its applications to a data state that existed minutes or even hours before the failure; in other cases, the business could not withstand the loss of a single transaction.

In this context, we propose a solution to address this business need with a highly effective and function-rich architecture featuring Dell EMC PowerOne with Dell EMC Symmetrix Remote Data Facility (SRDF) and VMware Site Recovery Manager.

This blog post is part one of a three-part series. In this first installment, we describe the business context and the technologies involved. In part two, we cover the main use cases that this series addresses. In the third and last blog post, we share some technology recommendations and best practices.

PowerOne and SRDF technology overview

Dell EMC PowerOne combines compute, storage, and networking in a fully engineered and highly automated converged infrastructure that provides autonomous operations, all-in-one simplicity, and flexible consumption options. With PowerOne, IT organizations can start moving from traditional operations to modern cloud outcomes.

Based on vSphere clusters, PowerOne delivers business outcomes. During daily tasks, such as provisioning workloads, the customer is never required to specify low-level details about IP stack configuration parameters, storage array configuration object names, and so on. Instead, the customer is asked only to identify the capacity required to support the target workload. All other information required to deliver the desired outcome is derived from system standards and best practices.

Dell EMC Symmetrix Remote Data Facility (SRDF) solutions provide near real-time copies of application data from a production storage array to one or more remote storage arrays. The main use cases are:

  • Disaster recovery
  • High availability
  • Data migration

In a traditional SRDF device pair relationship, the secondary device (“R2”), is read-only, and writes are disabled. Only the primary device (“R1”) is enabled for read and write activity. With SRDF/Metro, the R2 is also write-enabled and accessible by the host or application. The R2 takes on the personality of the R1, including the World Wide Name (WWN). A host would see both the R1 and R2 as the same device.

When SRDF/Metro is used in conjunction with VMware vSphere across various hosts in two sites, a VMware vSphere Metro Storage Cluster (vMSC) is formed. A VMware vMSC infrastructure is a stretched cluster -- an architecture that extends local network and storage configuration across remote sites, enabling on-demand and nonintrusive workload mobility.

VMware Site Recovery Manager (SRM) is another technology that can play a key role in simplifying operations in multi-site architectures. VMware SRM provides workflow, business continuity, and disaster-restart process management for VMware vSphere workloads. For the SRDF/Metro use case, SRM is not required, because we can build a vMSC and the multi-site deployment is perceived by vSphere workloads as a single stretched site. However, SRM is a mainstream technology for handling failover and failback operations with SRDF/S/A. In a second use case documented in the white paper Protecting Business-Critical Workloads with Dell EMC SRDF and PowerOne, VMware SRM leverages SRDF replication to protect PowerOne Cluster Resource Groups (CRGs).

The integration of VMware SRM with SRDF automates storage-based disaster restart operations on PowerOne systems. In the white paper, we focus on the availability and disaster recovery scenarios made possible by PowerOne.


Figure 1: PowerOne with SRDF basic architecture

In my next blog post, we will explore two use cases and their associated technologies:

  • Two data center sites with SRDF Synchronous or Asynchronous (SRDF/S or SRDF/A)
  • Two sites in a stretch cluster configuration with synchronous data mirroring (SRDF/Metro).

 

TCO PowerOne

PowerOne Total Cost of Ownership (TCO) - A qualitative analysis

David Iovino

Wed, 15 Apr 2020 17:38:06 -0000


As a member of the technical marketing team that covers PowerOne, I see many requests for quantitative analysis data related to PowerOne. Usually, these requests are needed for a TCO spreadsheet. We provide this information where available, but some aspects of PowerOne, or for that matter any product, are resistant to quantitative analysis. In addition, quantitative analyses themselves aren’t completely devoid of subjectivity.

When aspects of a solution are resistant to quantitative analysis, they are often left unexplored, even when their analysis would be useful to decision makers. The point of this post is to explore qualitative aspects of PowerOne and encourage their use in TCO analysis. The following qualitative aspects of PowerOne are not exhaustive, and I may add to or expand upon this post in the future.

Wisdom from Charlie Munger

"People calculate too much and think too little." 

This is not only true in the world of finance and investing, but also in the world of selling Information Technology (IT) products and solutions. As part of the sales cycle, there is a strong push to show how CAPEX spending will, directly and quantitatively, translate into OPEX savings. This is certainly a reasonable exercise, but often these translations miss some qualitative aspects that are harder to evaluate.

Consumable Infrastructure Automation

PowerOne takes the concept of an engineered system one step further and includes consumable automation that meets the same rigorous standards of engineered systems. This allows the customer to consume infrastructure automation vs. either manually configuring that infrastructure or taking on the heavy lift of building, testing, and maintaining their own infrastructure automation.

              Okay, so how does that reduce TCO for a customer?

98% reduction in manual tasks

Because PowerOne automates configuration tasks, customers will see a 98% reduction in the manual tasks they would otherwise perform to configure their infrastructure. This reduction bears fruit not only in reduced effort, but also in fewer human errors.

Consume vs. build automation

Dell has invested, and will continue to invest, thousands of hours to build, test, and maintain PowerOne automation that customers can easily consume. One aspect of building automation often gets overlooked: the difference in skill and effort required to build an automation script that runs once from a single known state vs. built-in automation that must operate over a long period of time and from many possible states.

Simplified consumption

Not only does Dell build, test and maintain PowerOne automation, but it also exposes access to that automation through a simple RESTful API. This eliminates the need for customers to have knowledge of infrastructure automation mechanisms and allows them to leverage a single RESTful API interface. This allows a customer’s developers to focus their automation efforts at the virtualization, application, and business layers, instead of the infrastructure layer.

Outcome oriented

In addition to the benefits of consume vs. build and simplified consumption through a single RESTful API, PowerOne is designed to be outcome oriented. This important aspect helps simplify the translation of business requirements to infrastructure configuration.

And so how does that reduce TCO for a customer?

Easier translation of business requirements

PowerOne helps the customer specify CRG requirements in the form of compute, memory, and storage. PowerOne then proposes configurations that meet those requirements and that adhere to Dell best practices. This approach allows for easier translation of application sizing to CRG requirements.

Modern Software Design

In order to deliver consistent outcomes over time, PowerOne automation is managed by the PowerOne Controller. To ensure that the controller software can easily evolve, PowerOne employs a microservices-based architecture that uses opensource software.

              How does that reduce TCO for a customer?

Opensource isn’t free!

Leveraging opensource software provides flexibility, but also adds complexity, such as managing opensource licensing. This task is often overlooked, but it is essential to prevent exposing the business to legal risk. PowerOne includes opensource software from many projects; this software makes PowerOne’s modern software design possible. When a customer buys PowerOne, they can leverage these benefits without the overhead of managing opensource licensing.

An Engineered System

PowerOne is an engineered system. It brings together compute, storage, and network components into a single system. Thousands of hours have been invested in its design, quality assurance, and interoperability testing.

              Now how does that reduce TCO for a customer?

System Architecture

If a customer embarks on a “build your own (BYO)” project, they will need to invest many hours in creating a system architecture. This extends beyond just determining the connectivity of the individual components. It requires that various requirements and characteristics be explored: performance, configuration options, scalability, and so on. By contrast, when a customer purchases a PowerOne system, Dell Technologies has already designed and validated the system architecture. The customer simply needs to assess whether that system’s architecture fits their needs.

Procurement

The procurement process often gets overlooked. Though it varies for each customer, each must complete some design and planning work before the procurement process can begin. If a customer embarks on a BYO project, most if not all of the system architecture must be in place, and a significant amount of the logistics work must be complete before the procurement process can begin. Instead, when a customer purchases a PowerOne system, they must perform only a small amount of logistics work ahead of procurement.

Summary

PowerOne provides many qualitative benefits to consider when evaluating whether PowerOne is right for you. As the industry continues to move towards automated datacenters, these qualitative aspects will become ever more important. Today, an infrastructure system should do more than provide quality infrastructure -- it should easily dovetail into a larger datacenter automation framework. PowerOne does just that and does it well.

David Iovino - LinkedIn

