Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications
Mon, 29 Jun 2020 14:48:57 -0000
Reference Architecture Validation Whitepaper Now Available!
Many of us here at Dell Technologies regularly have conversations with customers about what we refer to as the “Power of the Portfolio.” What does this mean exactly? It is a reference to the fact that Dell Technologies has a robust and broad portfolio of modern IT infrastructure products and solutions across storage, networking, compute, virtualization, data protection, security, and more! At first glance, it can seem overwhelming; some even consider it complex to sort through. We, on the other hand, see it as an advantage: it allows us to address the vast majority of our customers’ technical needs and support them as a strategic technology partner.
It is one thing to have the quality and quantity of products and tools to get the job done -- it’s another to leverage this portfolio of products to deliver on what customers want most: business outcomes.
As Dell Technologies continues to innovate, we are making the best use of the technologies we have and developing ways to use them together seamlessly to deliver better business outcomes for our customers. The conversations we have are not about this product OR that product. Instead, they are about bringing together this set of products AND that set of products to deliver a SOLUTION, giving our customers the best of everything Dell Technologies has to offer, without compromise and with reduced risk.
Figure 1: Cloud Foundation on VxRail Platform Components
The Dell Technologies Cloud Platform is an example of one of these solutions. And there is no better illustration of how to take advantage of the “Power of the Portfolio” than a newly published reference architecture white paper, Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications, which validates the use of the Dell EMC PowerMax system with SRDF/Metro in a Dell Technologies Cloud Platform (VMware Cloud Foundation on Dell EMC VxRail) multi-site stretched-cluster deployment configuration. This configuration provides the highest levels of application availability for customers running mission-critical workloads in their Cloud Foundation on VxRail private cloud, levels that would otherwise not be possible with the core DTCP alone.
Let’s briefly review some of the components used in the reference architecture and how they were configured and tested.
Using external storage with VCF on VxRail
Customers commonly ask whether they can use external storage in Cloud Foundation on VxRail deployments. The answer is yes! This helps customers ease into the transition to a software-defined architecture from an operational perspective. It also helps customers leverage the investments in their existing infrastructure for the many different workloads that might still require external storage services.
There are two important use cases for external storage with Cloud Foundation: principal storage and supplemental storage.
- Principal storage - SDDC Manager provisions a workload domain that uses vSAN, NFS, or Fibre Channel (FC) storage as a workload domain cluster’s principal storage (the initial shared storage that is used to create a cluster). By default, VCF uses vSAN as the principal storage for a cluster. The option to use NFS or FC-connected external storage is also available, enabling administrators to create a workload domain cluster whose principal storage is a previously provisioned NFS datastore or an FC-based VMFS datastore instead of vSAN. External storage as principal storage is supported only on VI workload domains, because vSAN is the required principal storage for the management domain in VCF.
- Supplemental storage - This involves mounting previously provisioned external NFS, iSCSI, vVols, or FC storage to a Cloud Foundation workload domain cluster that is using vSAN as the principal storage. Supporting external storage for these workload domain clusters is comparable to the experience of administrators using standard vSphere clusters who want to attach secondary datastores to those clusters.
At the time of writing, Cloud Foundation on VxRail supports supplemental storage use cases only. This is how external storage was used in the reference architecture solution configuration.
The Dell EMC PowerMax is the first Dell EMC hardware platform that uses an end-to-end Non-Volatile Memory Express (NVMe) architecture for customer data. NVMe is a set of standards that define a PCI Express (PCIe) interface used to efficiently access data storage volumes based on Non-Volatile Memory (NVM) media, which includes modern NAND-based flash along with higher-performing Storage Class Memory (SCM) media technologies. The NVMe-based PowerMax array fully unlocks the bandwidth, IOPS, and latency performance benefits that NVM media and multi-core CPUs offer to host-based applications—benefits that are unattainable using the previous generation of all-flash storage arrays. For a more detailed technical overview of the PowerMax Family, please check out the whitepaper Dell EMC PowerMax: Family Overview.
The following figure shows the PowerMax 2000 and PowerMax 8000 models.
Figure 2: PowerMax product family
The Symmetrix Remote Data Facility (SRDF) maintains real-time (or near real-time) copies of data on a PowerMax production storage array at one or more remote PowerMax storage arrays. SRDF has three primary applications:
- Disaster recovery
- High availability
- Data migration
In the case of this reference architecture, SRDF/Metro was used to provide enhanced levels of high availability across two availability zone sites. For a complete technical overview of SRDF, please check out this great SRDF whitepaper: Dell EMC SRDF.
Now that we are familiar with the components used in the solution, let’s discuss the details of the solution architecture that was used.
This overall solution design provides enhanced levels of flexibility and availability that extend the core capabilities of the VCF on VxRail cloud platform. The VCF on VxRail solution natively supports a stretched-cluster configuration for the management domain and a VI workload domain between two availability zones by using vSAN stretched clusters. A PowerMax SRDF/Metro configuration in a vSphere Metro Storage Cluster (vMSC) is added to protect VI workload domain workloads by providing supplemental storage to the clusters that run them.
Two types of vMSC configurations are verified with stretched Cloud Foundation on VxRail: uniform and non-uniform.
- Uniform host access configuration - vSphere hosts from both sites are all connected to a storage node in the storage cluster across all sites. Paths presented to vSphere hosts are stretched across a distance.
- Non-uniform host access configuration - vSphere hosts at each site are connected only to storage nodes at the same site. Paths presented to vSphere hosts from storage nodes are limited to the local site.
The following figure shows the topology used in the reference architecture of the Cloud Foundation uniform stretched-cluster configuration with PowerMax SRDF/Metro.
Figure 3: Cloud Foundation on VxRail uniform stretched-cluster config with PowerMax SRDF/Metro
The following figure shows the topology used in the reference architecture of the Cloud Foundation on VxRail non-uniform stretched cluster configuration with PowerMax SRDF/Metro.
Figure 4: Cloud Foundation on VxRail non-uniform stretched-cluster config with PowerMax SRDF/Metro
Solution Validation Testing Methodology
We completed solution validation testing across the following major categories for both iSCSI- and FC-connected devices:
- Functional Verification Tests - This testing addresses the basic operations that are performed when PowerMax is used as supplemental storage with VMware VCF on VxRail.
- High Availability Tests - HA testing validates the capability of the solution to avoid a single point of failure, from the hardware component port level up to the data center site level.
- Reliability Tests - Reliability testing validates that the individual components and the system as a whole remain dependable under a sustained level of stress.
For complete details on all of the individual validation test scenarios that were performed, and the pass/fail results, check out the whitepaper.
To summarize, this white paper describes how Dell EMC engineers integrated VMware Cloud Foundation on VxRail with PowerMax SRDF/Metro and provides the design configuration steps that they took to automatically provision PowerMax storage by using the PowerMax vRO plug-in. The paper validates that the Cloud Foundation on VxRail solution functions as expected in both a PowerMax uniform vMSC configuration and a non-uniform vMSC configuration by passing all the designed test cases. This reference architecture validation demonstrates the power of the Dell Technologies portfolio to provide customers with modern cloud infrastructure technologies that deliver the highest levels of application availability for business-critical and mission-critical applications running in their private clouds.
Find the link to the white paper below along with other VCF on VxRail resources and see how you can leverage the “Power of the Portfolio” to support your business!
Twitter - @vwhippersnapper
Related Blog Posts
The Dell Technologies Cloud Platform – Smaller in Size, Big on Features
Wed, 20 May 2020 13:07:08 -0000
The latest VMware Cloud Foundation 4.0 on VxRail 7.0 release introduces a more accessible entry cloud option with support for new four node configurations. It also delivers a simple and direct path to vSphere with Kubernetes at cloud scale.
The Dell Technologies team is very excited to announce that May 12, 2020 marked the general availability of our latest Dell Technologies Cloud Platform release, VMware Cloud Foundation 4.0 on VxRail 7.0. There is so much to unpack in this release across all layers of the platform, from the latest features of VCF 4.0 to newly supported deployment configurations for VCF on VxRail. To help you navigate through all of the goodness, I have broken out this post into two sections: VCF 4.0 updates and new features introduced specifically to VCF on VxRail deployments. Let’s jump right to it!
VMware Cloud Foundation 4.0 Updates
A lot of great information on VCF 4.0 features has already been published by VMware as part of their Modern Apps Launch earlier this year. If you haven’t caught up yet, check out the links to some VMware blogs at the end of this post. Some of my favorite new features include support for vSphere with Kubernetes (GAMECHANGER!), support for NSX-T in the Management Domain, and the NSX-T compatible Virtual Distributed Switch.
Now let’s dive into the items that are new to VCF on VxRail deployments, specifically ones that customers can take advantage of on top of the latest VCF 4.0 goodness.
New to VCF 4.0 on VxRail 7.0 Deployments
VCF Consolidated Architecture Four Node Deployment Support for Entry Level Cloud (available beginning May 26, 2020)
New to VCF on VxRail is support for the VCF Consolidated Architecture deployment option. Until now, VCF on VxRail required that all deployments use the VCF Standard Architecture. This was due to several factors: a major one was that NSX-T was not supported in the VCF Management Domain until this latest release. Having this capability was a prerequisite before we could support the consolidated architecture with VCF on VxRail.
Before we jump into the details of a VCF Consolidated Architecture deployment, let's review what the current VCF Standard deployment is all about.
VCF Standard Architecture Details
This deployment consists of:
- A minimum of seven VxRail nodes (eight recommended)
- A four-node Management Domain dedicated to running the VCF management software, plus at least one dedicated workload domain consisting of a three-node cluster (four recommended) to run user workloads
- The Management Domain runs its own dedicated vCenter and NSX-T instance
- The workload domains are deployed with their own dedicated vCenter instances and choice of dedicated or shared NSX-T instances that are separate from the Management Domain NSX-T instance.
A summary of features includes:
- Requires a minimum of 7 nodes (8 recommended)
- A Management Domain dedicated to run management software components
- Dedicated VxRail VI domain(s) for user workloads
- Each workload domain can consist of multiple clusters
- Up to 15 domains are supported per VCF instance including the Management Domain
- vCenter instances run in linked-mode
- Supports vSAN storage only as principal storage
- Supports using external storage as supplemental storage
This deployment architecture design is preferred because it provides the most flexibility, scalability, and workload isolation for customers scaling their clouds in production. However, this does require a larger initial infrastructure footprint, and thus cost, to get started.
For something that allows customers to start smaller, VMware developed a validated VCF Consolidated Architecture option. This allows for the Management domain cluster to run both the VCF management components and a customer’s general purpose server VM workloads. Since you are just using the Management Domain infrastructure to run both your management components and user workloads, your minimum infrastructure starting point consists of the four nodes required to create your Management Domain. In this model, vSphere Resource Pools are used to logically isolate cluster resources to the respective workloads running on the cluster. A single vCenter and NSX-T instance is used for all workloads running on the Management Domain cluster.
VCF Consolidated Architecture Details
A summary of features of a Consolidated Architecture deployment:
- Minimum of 4 VxRail nodes
- Infrastructure and compute VMs run together on shared management domain
- Resource Pools used to segregate and isolate workload types
- Supports multi-cluster and scale to documented vSphere maximums
- Does not support running Horizon Virtual Desktop or vSphere with Kubernetes workloads
- Supports vSAN storage only as principal storage
- Supports using external storage as supplemental storage for workload clusters
For customers to get started with an entry level cloud for general purpose VM server workloads, this option provides a smaller entry point, both in terms of required infrastructure footprint as well as cost.
With the Dell Technologies Cloud Platform, we now have you covered across your scalability spectrum, from entry level to cloud scale!
Automated and Validated Lifecycle Management Support for vSphere with Kubernetes Enabled Workload Domain Clusters
How is it that we can support this? How does this work? What benefits does this provide you, as a VCF on VxRail administrator, as a part of this latest release? You may be asking yourself these questions. Well, the answer is through the unique integration that Dell Technologies and VMware have co-engineered between SDDC Manager and VxRail Manager. With these integrations, we have developed a unique set of LCM capabilities that can benefit our customers tremendously. You can read more about the details in one of my previous blog posts here.
VCF 4.0 on VxRail 7.0 customers who benefit from the automated full-stack LCM integration built into the platform can now include in that integration the vSphere with Kubernetes components that are part of the ESXi hypervisor! Customers are future-proofed: when the need arises, vSphere with Kubernetes enabled clusters can be lifecycle-managed automatically with fully validated VxRail LCM workflows natively integrated into the SDDC Manager management experience. Cool, right?! This means that you can now bring the same streamlined operations capabilities to your modern apps infrastructure that you already enjoy for your traditional apps! The figure below illustrates the LCM process for VCF on VxRail.
VCF on VxRail LCM Integrated Workflow
Introduction of initial support of VCF (SDDC Manager) Public APIs
VMware Cloud Foundation first introduced the concept of SDDC Manager Public APIs back in version 3.8. These APIs have expanded in subsequent releases and have been geared toward VCF deployments on Ready Nodes.
Well, we are happy to say that in this latest release, the VCF on VxRail team is offering initial support for VCF Public APIs. These will include a subset of the various APIs that are applicable to a VCF on VxRail deployment. For a full listing of the available APIs, please refer to the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.
Another new API related feature in this release is the availability of the VMware Cloud Foundation Developer Center. This provides some very handy API references and code samples built right into the SDDC Manager UI. These references are readily accessible and help our customers to better integrate their own systems and other third party systems directly into VMware Cloud Foundation on VxRail. The figure below provides a summary and a sneak peek at what this looks like.
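To make the API discussion concrete, here is a minimal sketch of what a client built from those references might look like. The hostname, credentials, and response handling are illustrative assumptions (not taken from the API Reference Guide), and the certificate check is disabled only as a lab convenience; the `/v1/tokens` and `/v1/domains` routes follow the general pattern of the VMware Cloud Foundation public API, but consult the guide above for the exact schemas supported in your release.

```python
import json
import ssl
import urllib.request

# Hypothetical SDDC Manager endpoint -- replace with your own deployment's FQDN.
SDDC_MANAGER = "https://sddc-manager.example.local"

def auth_header(token):
    """Build the Bearer authorization header used for authenticated API calls."""
    return {"Authorization": f"Bearer {token}", "Accept": "application/json"}

def _request(url, headers, body=None):
    """Issue an HTTPS request and decode the JSON response."""
    # Lab convenience only: skip certificate verification for self-signed certs.
    ctx = ssl._create_unverified_context()
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, headers=headers)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

def get_token(username, password):
    """Request an access token from SDDC Manager for subsequent calls."""
    payload = {"username": username, "password": password}
    result = _request(f"{SDDC_MANAGER}/v1/tokens",
                      {"Accept": "application/json"}, payload)
    return result["accessToken"]

def list_domains(token):
    """List the Management and VI workload domains known to SDDC Manager."""
    result = _request(f"{SDDC_MANAGER}/v1/domains", auth_header(token))
    return result.get("elements", [])
```

The Developer Center's built-in code samples cover the same token-then-call pattern, so a sketch like this can be cross-checked directly against the samples surfaced in the SDDC Manager UI.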
VMware Cloud Foundation Developer Center SDDC Manager UI View
Reduced VxRail Networking Hardware Configuration Requirements
Finally, we end our journey of new features on the hardware front. In this release, we have officially reduced the minimum VxRail node networking hardware configuration required for VCF use cases. With the introduction of vSphere 7.0 in VCF 4.0, admins can now use the vSphere Distributed Switch (VDS) for NSX-T, and the separate N-VDS switch has been deprecated. So why is this important, and how does it lead to VxRail node network hardware configuration improvements?
Well, up until now, VxRail and SDDC management networks have been configured to use the VDS. And this VDS would be configured to use at least two physical NIC ports as uplinks for high availability. When introducing the use of NSX-T on VxRail, an administrator would need to create a separate N-VDS switch for the NSX-T traffic to use. This switch would require its own pair of dedicated uplinks for high availability. Thus, in VCF on VxRail environments in which NSX-T would be used, each VxRail node would require a minimum of four physical NIC ports to support the two different pairs of uplinks for each of the switches. This resulted in a higher infrastructure footprint for both the VxRail nodes and for a customer’s Top of Rack Switch infrastructure because they would need to turn on more ports on the switch to support all of these host connections. This, in turn, would come with a higher cost.
Fast forward to this release -- now we can run NSX-T traffic on the same VDS as the VxRail and SDDC Manager management traffic. And when you can share the same VDS, you can reduce the number of physical uplink ports needed for high availability down to two, reducing the upfront hardware footprint and cost across the board! Win-win! The following figure highlights this new feature.
NSX-T Dual pNIC Features
Well, that about sums it all up. Thanks for coming on this journey and learning about the boatload of new features in VCF 4.0 on VxRail 7.0. As always, feel free to check out the additional resources for more information. Until next time, stay safe and stay healthy out there!
VMware Cloud Foundation on Dell EMC VxRail Integration Features Series: Part 1 -- Full Stack Automated LCM
Fri, 03 Apr 2020 21:39:05 -0000
VMware Cloud Foundation on Dell EMC VxRail Integration Features Series: Part 1
Full Stack Automated Lifecycle Management
It’s no surprise that VMware Cloud Foundation on VxRail features numerous unique integrations with many VCF components, such as SDDC Manager and even VMware Cloud Builder. These integrations are the result of the co-engineering efforts by Dell Technologies and VMware with every release of VCF on VxRail. The following figure highlights some of the components that are part of this integration effort.
These integrations of VCF on VxRail offer customers a unique set of features in various categories, from security to infrastructure deployment and expansion, to deep monitoring and visibility that have all been developed to drive infrastructure operations.
Where do these integrations exist? The following figure outlines how they impact a customer’s Day 0 to Day 2 operations experience with VCF on VxRail.
In this series I will showcase some of these unique integration features, including some of the more nuanced ones. But for this initial post, I want to highlight one of the most popular and differentiated customer benefits that emerged from this integration work: full stack automated lifecycle management (LCM).
VxRail already delivers a differentiated LCM customer experience through its Continuously Validated States capabilities for the entire VxRail hardware and software stack. (As you may know, the VxRail stack includes the hardware and firmware of compute, network, and storage components, along with VMware ESXi, VMware vSAN, and the Dell EMC VxRail HCI System software itself, which includes VxRail Manager.)
With VCF on VxRail, VxRail Manager is integrated natively into the SDDC Manager LCM framework: administrators drive LCM through the SDDC Manager UI, and SDDC Manager calls VxRail Manager APIs when executing LCM workflows. This integration allows SDDC Manager to leverage all of the LCM capabilities that natively exist in VxRail right out of the box. SDDC Manager can then execute SDDC software LCM AND drive native VxRail HCI system LCM, using the native VxRail Manager APIs and the Continuously Validated State update packages for both the VxRail software and hardware components.
All of this happens seamlessly behind the scenes when administrators use the SDDC Manager UI to kick off native SDDC Manager workflows. This means that customers don’t have to leave the SDDC Manager UI management experience at all for full stack SDDC software and VxRail HCI infrastructure LCM operations. How cool is that?! The following figure illustrates the concepts behind this effective relationship.
For more details about how this LCM experience works, check out my lightboard talk about it!
Also, if you want some hands-on experience performing LCM operations for the full VCF on VxRail stack, check out the VCF on VxRail Interactive Demo to see this and some of the other unique integrations!
I am already hard at work writing up the next blog post in the series. Check back soon to learn more.
Twitter - @vwhippersnapper