Blogs

The latest news about VxRail releases and updates


  • vSphere
  • VxRail
  • vSAN
  • ESA
  • LCM
  • Express Storage Architecture
  • vLCM
  • 8.0.210
  • VD-4000

Learn About the Latest Major VxRail Software Release: VxRail 8.0.210

Daniel Chiu

Fri, 15 Mar 2024 20:13:39 -0000


It’s springtime, VxRail customers!  VxRail 8.0.210 is our latest software release to bloom.  Come see for yourself what makes this software release shine.

VxRail 8.0.210 provides support for VMware vSphere 8.0 Update 2b.  All existing platforms that support VxRail 8.0 can upgrade to VxRail 8.0.210.  This is also the first VxRail 8.0 software to support the hybrid and all-flash models of the VE-660 and VP-760 nodes based on Dell PowerEdge 16th Generation platforms that were released last summer and the edge-optimized VD-4000 platform that was released early last year.

Read on for a deep dive into the release content.  For a more comprehensive rundown of the features and enhancements in VxRail 8.0.210, see the release notes.

Support for VD-4000

The support for VD-4000 includes vSAN Original Storage Architecture (OSA) and vSAN Express Storage Architecture (ESA).  VD-4000 was launched last year with VxRail 7.0 support with vSAN OSA.  Support in VxRail 8.0.210 carries over all previously supported configurations for VD-4000 with vSAN OSA.  What may intrigue you even more is that VxRail 8.0.210 is introducing first-time support for VD-4000 with vSAN ESA.

In the second half of last year, VMware reduced the hardware requirements to run vSAN ESA in order to extend its adoption into edge environments.  This change enabled customers to consider running the latest vSAN technology in areas where price points and infrastructure resource constraints were barriers to adoption.  VxRail added support for the reduced hardware requirements shortly after for existing platforms that already supported vSAN ESA, including the E660N, P670N, VE-660 (all-NVMe), and VP-760 (all-NVMe).  With the VD-4000, the VxRail portfolio now has an edge-optimized platform that can run vSAN ESA in environments that also have space, energy consumption, and environmental constraints.  To top that off, it's the first VxRail platform to support a single-processor node running vSAN ESA, further reducing the price point.

It is important to set performance expectations when running workload applications on the VD-4000 platform.  While our performance testing on vSAN ESA showed stellar gains to the point where we made the argument to invest in 100GbE to maximize performance (check it out here), it is essential to understand that the VD-4000 platform is running with an Intel Xeon-D processor with reduced memory and bandwidth resources. In short, while a VD-4000 running vSAN ESA won’t be setting any performance records, it can be a great solution for your edge sites if you are looking to standardize on the latest vSAN technology and take advantage of vSAN ESA’s data services and erasure coding efficiencies.

Lifecycle management enhancements

VxRail 8.0.210 offers support for a few vLCM feature enhancements that came with vSphere 8.0 Update 2.  In addition, the VxRail implementation of these enhancements further simplifies the user experience.

For vLCM-enabled VxRail clusters, we’ve made it easier to benefit from VMware ESXi Quick Boot.  The VxRail Manager UI has been enhanced so that users can enable Quick Boot one time, and VxRail will maintain the setting whenever there is a Quick Boot-compatible cluster update.  As a refresher for some folks not familiar with Quick Boot, it is an operating system-level reboot of the node that skips the hardware initialization.  It can reduce the node reboot time by up to three minutes, providing significant time savings when updating large clusters.  That said, any cluster update that involves firmware updates is not Quick Boot-compatible.

Using Quick Boot had been cumbersome in the past because it required several manual steps.  To use Quick Boot for a cluster update, you would need to go to the vSphere Update Manager to enable the Quick Boot setting.  Because the setting resets to Disabled after the reboot, this step had to be repeated for any Quick Boot-compatible cluster update.  Now, the setting can be persisted to avoid manual intervention.

As shown in the following figure, the update advisor report now informs you whether a cluster update is Quick Boot-compatible so that the information is part of your update planning procedure.  VxRail leverages the ESXi Quick Boot compatibility utility for this status check.


Figure 1. VxRail update advisor report highlighting Quick Boot information

Another new vLCM feature enhancement that VxRail supports is parallel remediation.  This enhancement allows you to update multiple nodes at the same time, which can significantly cut down on the overall cluster update time.  However, it only applies to VxRail dynamic nodes, because vSAN clusters still need to be updated one node at a time to adhere to storage policy settings.

This feature offers substantial benefits in reducing the maintenance window, and VxRail’s implementation of the feature offers additional protections over how it can be used on vSAN Ready Nodes.  For example, enabling parallel remediation with vSAN Ready Nodes means that you would be responsible for managing when nodes go into and out of maintenance mode as well as ensuring application availability because vCenter will not check whether the nodes that you select will disrupt application uptime.  The VxRail implementation adds safety checks that help mitigate potential pitfalls, ensuring a smoother parallel remediation process.

VxRail Manager manages when nodes enter and exit maintenance mode and provides the same level of error checking that it already performs on cluster updates.  You have the option of letting VxRail Manager automatically set the maximum number of nodes that it will update concurrently, or you can input your own number.  The manual setting is capped at the total node count minus two, to ensure that the VxRail Manager VM and vCenter Server VM can continue to run on separate nodes during the cluster update.


Figure 2. Options for setting the maximum number of concurrent node remediations

During the cluster update, VxRail Manager intelligently reduces the node count of concurrent updates if a node cannot enter maintenance mode or if the application workload cannot be migrated to another node to ensure availability.  VxRail Manager will automatically defer that node to the next batch of node updates in the cluster update operation.
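To make the concurrency rule concrete, here is a minimal Python sketch of the manual cap described above. It is an illustration only, not VxRail Manager's actual logic:

```python
def max_concurrent_remediations(total_nodes: int) -> int:
    """Illustration of the manual cap described above: reserve two nodes
    so the VxRail Manager VM and vCenter Server VM can keep running on
    separate nodes during the cluster update."""
    reserved = 2  # one node for the VxRail Manager VM, one for vCenter Server
    return max(1, total_nodes - reserved)

# Example: an 8-node dynamic node cluster allows at most 6 concurrent updates.
print(max_concurrent_remediations(8))  # -> 6
```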

The last vLCM feature enhancement in VxRail 8.0.210 that I want to discuss is installation file pre-staging.  The idea is to upload as many installation files for the node update as possible onto the node before the update operation actually begins.  Transfer times can be lengthy, so any reduction in the maintenance window has a positive impact on the production environment.

To reap the maximum benefits of this feature, consider using the scheduling feature when setting up your cluster update.  Initiating a cluster update with a future start time allows VxRail Manager the time to pre-stage the files onto the nodes before the update begins.

As you can see, the three vLCM feature enhancements can have varying levels of impact on your VxRail clusters.  Automated Quick Boot enablement only benefits cluster updates that are Quick Boot-compatible, meaning there is not a firmware update included in the package.  Parallel remediation only applies to VxRail dynamic node clusters.  To maximize installation files pre-staging, you need to schedule cluster updates in advance.  

That said, two commonalities across all three vLCM feature enhancements are that you must have your VxRail clusters running in vLCM mode, and that the VxRail implementation of these enhancements makes them more secure and easier to use.  As shown in the following figure, the Updates page in the VxRail Manager UI has been enhanced so that you can easily manage these vLCM features at the cluster level.


Figure 3. VxRail Update Settings for vLCM features

VxRail dynamic nodes

VxRail 8.0.210 also introduces an enhancement for dynamic node clusters with a VxRail-managed vCenter Server.  In a recent VxRail software release, VxRail added an option for you to deploy a VxRail-managed vCenter Server with your dynamic node cluster as a Day 1 operation.  The initial support was for Fibre Channel-attached storage.  This release adds support for dynamic node clusters using IP-attached storage for their primary datastore.  That means iSCSI, NFS, and NVMe over TCP attached storage from PowerMax, VMAX, PowerStore, Unity XT, PowerFlex, and VMware vSAN cross-cluster capacity sharing is now supported.  Just like before, you are still responsible for acquiring and applying your own vCenter Server license before the 60-day evaluation period expires.

Password management

Password management is one of the key areas of focus in this software release.  To reduce the manual steps needed to modify the vCenter Server management and iDRAC root account passwords, the VxRail Manager UI now lets you make these changes through a wizard-driven workflow, instead of changing the password on the vCenter Server or iDRAC itself and then returning to the VxRail Manager UI to provide the updated password.  The enhancement simplifies the experience and reduces potential user errors.

To update the vCenter Server management credentials, there is a new Security page that replaces the Certificates page.  As illustrated in the following figure, the page now provides a Certificate tab for certificate management and a Credentials tab for changing the vCenter Server management password.


Figure 4. How to update the vCenter Server management credentials

To update the iDRAC root account password, there is a new iDRAC Configuration page where you can click the Edit button to launch a wizard to change the password.


Figure 5. How to update the iDRAC root password

Deployment flexibility

Lastly, I want to touch on two features in deployment flexibility.

Over the past few years, the VxRail team has invested heavily in empowering you with the tools to recompose and rebuild the clusters on your own.  One example is making our VxRail nodes customer-deployable with the VxRail Configuration Portal.  Another is the node imaging tool.

The node image management tool offers three options:

  • Client-based: Windows and Linux support. Remotely image target nodes with the desired software version, up to 5 nodes concurrently.
  • API: Image target nodes from VxRail Manager using the VxRail API. Automate full deployment of a cluster using VxRail API calls and custom scripting.
  • USB (new): Boot a VxRail node directly from a USB drive to initiate reimaging (image as many nodes concurrently as desired). Professional services do not need to connect their laptop to the customer network. Can reimage the VD-4000 witness.

Figure 6. Different options to use the node image management tool

Initially, the node imaging tool was Windows client-based: the workstation stores the VxRail software ISO image locally or on a share.  By connecting the workstation to the local network where the target nodes reside, the imaging tool can connect to the iDRAC of each target node.  Users can reimage up to 5 nodes on the local network concurrently.  In a more recent VxRail release, we added Linux client support for the tool.

We’ve also refactored the tool into a microservice within the VxRail HCI System Software so that it can be used via VxRail API.  This method added more flexibility so that you can automate the full deployment of your cluster by using VxRail API calls and custom scripting.  
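For the API route, long-running VxRail operations generally return a request ID that you can poll for completion. The following Python sketch shows that pattern with the requests library; the host, credentials, and response fields are placeholders for illustration, and GET /v1/requests/{id} is the general asynchronous-tracking call (consult the VxRail API documentation for the actual node-imaging endpoints):

```python
import time

import requests

VXM_HOST = "vxm.example.com"  # VxRail Manager address (placeholder)
AUTH = ("administrator@vsphere.local", "password")  # vCenter SSO credentials (placeholder)

def wait_for_request(request_id: str, poll_seconds: int = 30) -> dict:
    """Poll a long-running VxRail operation until it finishes.

    GET /v1/requests/{id} is the general pattern for tracking asynchronous
    VxRail API operations; the exact state values shown are illustrative.
    """
    while True:
        response = requests.get(
            f"https://{VXM_HOST}/rest/vxm/v1/requests/{request_id}",
            auth=AUTH,
            verify=False,  # lab environments often use self-signed certificates
        )
        response.raise_for_status()
        status = response.json()
        if status.get("state") in ("COMPLETED", "FAILED"):
            return status
        time.sleep(poll_seconds)
```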

In VxRail 8.0.210, we are introducing the USB version of the tool.  Here, the tool can be self-contained on a USB drive so that users can plug the USB drive into a node, boot from it, and initiate reimaging.  This provides benefits in scenarios where the 5-node maximum for concurrent reimage jobs is an issue.  With this option, users can scale reimage jobs by setting up more USB drives.  The USB version of the tool now allows an option to reimage the embedded witness on the VD-4000.

The final feature for deployment flexibility is support for IPv6.  Whether your environment is exhausting the IPv4 address pool or there are requirements in your organization to future-proof your networking with IPv6, you will be pleasantly surprised by the level of support that VxRail offers.

You can deploy IPv6 in a dual or single network stack.  In a dual network stack, you can have both IPv4 and IPv6 addresses for your management network.  In a single network stack, the management network is only on the IPv6 network.  Initial support is for VxRail clusters running vSAN OSA with 3 or more nodes.  Other than that, the feature set is on par with what you see with IPv4.  You select the network stack at cluster deployment.

Conclusion

VxRail 8.0.210 offers a plethora of new features and platform support such that there is something for everyone.  As you digest the information about this release, know that updating your cluster to the latest VxRail software provides you with the best return on your investment from a security and capability standpoint.  Backed by VxRail Continuously Validated States, you can update your cluster to the latest software with confidence.  For more information about VxRail 8.0.210, please refer to the release notes.  For more information about VxRail in general, visit the Dell Technologies website.

 

Author: Daniel Chiu, VxRail Technical Marketing

https://www.linkedin.com/in/daniel-chiu-8422287/


  • HCI
  • Ansible
  • VxRail
  • API
  • automation
  • PowerShell

VxRail API—Updated List of Useful Public Resources

Karol Boguniewicz

Fri, 09 Feb 2024 16:07:26 -0000


Well-managed companies are always looking for new ways to increase efficiency and reduce costs while maintaining excellence in the quality of their products and services. Hence, IT departments and service providers look at the cloud and Application Programming Interfaces (APIs) as the enablers for automation, driving efficiency, consistency, and cost-savings.

This blog helps you get started with VxRail API by grouping the most useful VxRail API resources available from various public sources in one place. This list of resources is updated every few months. Consider bookmarking this blog as it is a useful reference.

Before jumping into the list, it is essential to answer some of the most obvious questions:

What is VxRail API?

VxRail API is a feature of the VxRail HCI System Software that exposes management functions with a RESTful application programming interface. It is designed for ease of use by VxRail customers and ecosystem partners who want to better integrate third-party products with VxRail systems. VxRail API is:

  • Simple to use—Thanks to embedded, interactive, web-based documentation and to PowerShell and Ansible modules, you can easily consume the API using a supported web browser, a command line interface familiar to Windows and VMware vSphere admins, or Ansible playbooks (see the short example after this list).
  • Powerful—VxRail offers dozens of API calls for essential operations such as automated life cycle management (LCM), and its capabilities are growing with every new release.
  • Extensible—This API is designed to complement REST APIs from VMware (such as vSphere Automation API, PowerCLI, and VMware Cloud Foundation on Dell EMC VxRail API), offering a familiar look and feel and vast capabilities.
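To show how approachable the REST interface is, here is a minimal Python sketch using the requests library. It assumes a reachable VxRail Manager host and valid vCenter SSO credentials (both placeholders here); GET /v1/system is the basic system-information call, and the response field shown is illustrative:

```python
import requests

VXM_HOST = "vxm.example.com"  # VxRail Manager address (placeholder)
AUTH = ("administrator@vsphere.local", "password")  # vCenter SSO credentials (placeholder)

# GET /v1/system returns basic VxRail system information.
response = requests.get(
    f"https://{VXM_HOST}/rest/vxm/v1/system",
    auth=AUTH,
    verify=False,  # lab environments often use self-signed certificates
)
response.raise_for_status()

# Field name is illustrative: the system call reports details such as the
# installed VxRail HCI System Software version.
print(response.json().get("version"))
```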

Why is VxRail API relevant?

VxRail API enables you to use the full power of automation and orchestration services across your data center. This extensibility enables you to build and operate infrastructure with cloud-like scale and agility. It also streamlines the integration of the infrastructure into your IT environment and processes. Instead of manually managing your environment through the user interface, the software can programmatically trigger and run repeatable operations.

More customers are embracing DevOps and Infrastructure as Code (IaC) models because they need reliable and repeatable processes to configure the underlying infrastructure resources that are required for applications. IaC uses APIs to store configurations in code, making operations repeatable and greatly reducing errors.

How can I start? Where can I find more information?

To help you navigate through all available resources, I grouped them by level of technical difficulty, starting with 101 (the simplest, explaining the basics, use cases, and value proposition), through 201, up to 301 (the most in-depth technical level).

101 Level

  • Solution Brief—Dell VxRail API – Solution Brief is a concise brochure that describes the VxRail API at a high level, typical use cases, and where you can find additional resources for a quick start. I highly recommend starting your exploration from this resource.
       
  • Learning Tool—VxRail Interactive Journey is the "go-to resource" to learn about VxRail and HCI System Software. It includes a dedicated module for the VxRail API, with essential resources to maximize your learning experience.

  • On-demand Session—Automation with VxRail API is a one-hour interactive learning session delivered as part of the Tech Exchange Live VxRail Series, available on-demand. This session is an excellent introduction for anyone new to VxRail API, discussing the value, typical use cases, and how to get started.
  • On-demand Session—Infrastructure as Code (IaC) with VxRail is another one-hour interactive learning session delivered as part of the Tech Exchange Live VxRail Series, available on-demand. This one is an introduction to adopting Infrastructure as Code on VxRail, with automation tools like Ansible and Terraform.
  • Instructor Session—Automation with VxRail is a live, interactive training session offered by Dell Technologies Education Services. Hear directly from the VxRail team about new capabilities and what’s on the roadmap for new VxRail releases and the latest advancements.
      During the session you will:
      • Learn about the VxRail ecosystem and leverage its automation capabilities
      • Elevate performance of automated VxRail operations using the latest tools
      • Experience live demonstrations of customer use cases and apply these examples to your environment
      • Increase your knowledge of VxRail API tools such as PowerShell and Ansible modules
      • Receive bonus material to support you in your automation journey
  • Infographic—Dell VxRail HCI System Software RESTful API is an infographic that provides quick facts about VxRail HCI System Software differentiation. This infographic explains the value of VxRail API.
  • Whiteboard Video—Level up your HCI automation with VxRail API is a technical whiteboard video that introduces you to automation with VxRail API. We discuss different ways you can access the API and provide example use cases.
  • Blog Post—Take VxRail automation to the next level by leveraging APIs is my first blog that focuses on VxRail API. It addresses some of the challenges related to managing a farm of VxRail clusters and how VxRail API can be a solution. It also covers the enhancements introduced in VxRail HCI System Software 4.7.300, such as Swagger and PowerShell integration.
  • Blog Post—VxRail – API PowerShell Module Examples is a blog from my colleague David, explaining how to install and get started with the VxRail API PowerShell Modules Package.
  • Blog Post—Infrastructure as Code with VxRail Made Easier with Ansible Modules for Dell VxRail is my blog with an introduction to VxRail Ansible Modules, including a demo.
  • (New!) Blog Post—VxRail Edge Automation Unleashed - Simplifying Satellite Node Management with Ansible is my blog explaining the use of Ansible Modules for Dell VxRail for satellite node management, including a demo.
  • Blog Post—Protecting VxRail from Power Disturbances is my second API-related blog, in which I explain an exciting use case by Eaton, our ecosystem partner and the first UPS vendor to integrate its power management solution with VxRail using the VxRail API.
  • Blog Post—Protecting VxRail From Unplanned Power Outages: More Choices Available describes another UPS solution integrated with the VxRail API, from our ecosystem partner APC (Schneider Electric).
  • Demo—VxRail API – Overview is our first VxRail API demo published on the official Dell YouTube channel. Recorded using VxRail HCI System Software 4.7.300, it explains VxRail API basics, the API enhancements introduced in that version, and how you can explore the API using the Swagger UI.
  • Demo—VxRail API – PowerShell Package is a continuation of the API overview demo referenced above, focusing on PowerShell integration. It was recorded using VxRail HCI System Software 4.7.300.
  • Demo—Ansible Modules for Dell VxRail provides a quick overview of VxRail Ansible Modules. It was recorded using VxRail HCI System Software 7.0.x.
  • (New!) Demo—Ansible Modules for Dell VxRail – Automating Satellite Node Management continues the subject of VxRail Ansible Modules, showcasing the satellite node management use case for the Edge. It was recorded using VxRail HCI System Software 8.0.x.

201 Level

  • (Updated!) Hands-On Lab—HOL-0310-01 - Scalable Virtualization, Compute, and Storage with the VxRail REST API allows you to experience the VxRail API in a virtualized demo environment using various tools. It premiered at Dell Technologies World 2022 and is a very valuable self-learning tool for VxRail API. It includes five modules:
    • Module 1: Getting Started (~10 min / Basic) - The aim of this module is to get the lab up and running and dip your toe in the VxRail API waters using our web-based interactive documentation.
        • Access interactive API documentation
        • Explore available VxRail API functions
        • Test a VxRail API function
        • Explore Dell Technologies' Developer Portal
    • Module 2: Monitoring and Maintenance (~15 min / Intermediate) - In this module you will navigate our VxRail PowerShell Modules and the VxRail Manager, to become more familiar with the options available to monitor the health indicators of a VxRail cluster. There are also some maintenance tasks that show how these functions can simplify the management of your environment.
        Monitoring the health of a VxRail cluster:
        • Check the cluster's overall health
        • Check the health of the nodes
        • Check the individual components of a node
        Maintenance of a VxRail cluster:
        • View iDRAC IP configuration
        • Collect a log bundle of the VxRail cluster
        • Cluster shutdown (Dry run)
    • Module 3: Add & Update VxRail Satellite Nodes (~30 min / Intermediate) - In this module, you will experiment with adding and updating VxRail satellite nodes using VxRail API and VxRail API PowerShell Modules.
        • Add a VxRail satellite node
        • Update a VxRail satellite node
    • Module 4: Cluster Expansion or Scaling Out (~25 min / Advanced) - In this module, you will experience our official VxRail Ansible Modules and how easy it is to expand the cluster with an additional node.
        • Connect to Ansible server
        • View VxRail Ansible Modules documentation
        • Add a node to the existing VxRail cluster
        • Verify cluster state after expansion
    • Module 5: Lifecycle Management or LCM (~25 min / Advanced) - In this module, you will experience our VxRail APIs using Postman. You will see how easy LCM operations are using our VxRail API and software.
        • Explore Postman
        • Generate a compliance report
        • Explore the LCM pre-check and LCM upgrade API functions available to bring the cluster to the next VxRail version.

 

Note: If you’re a customer, you will need to ask your Dell or partner account team to create a session for you and provide a hyperlink to access this lab.

  • vBrownBag session—vSphere and VxRail REST API: Get Started in an Easy Way is a vBrownBag community session that took place at the VMworld 2020 TechTalks Live event. There are no slides and no “marketing fluff,” but an extensive demo showing: 
    1. How you can begin your API journey by using interactive, web-based API documentation
    2. How you can use these APIs from different frameworks (such as scripting with PowerShell in Windows environments) and configuration management tools (such as Ansible on Linux)
    3. How you can consume these APIs virtually from any application in any programming language.
  • vBrownBag session—Automating large scale HCI deployments programmatically using REST APIs is a vBrownBag community session that took place at the VMworld 2021 TechTalks Live event. This approximately 10-minute session discusses sample use cases and the tools at your disposal, allowing you to quickly jumpstart your API journey in various frameworks. It includes a demo of VxRail cluster expansion using PowerShell.

301 Level

  • (Updated!) Manual—VxRail API User Guide at Dell Technologies Developer Portal is an official web-based version of the reference manual for VxRail API. It provides a detailed description of each available API function.
      Make sure to check the “Tutorials” section of this web-based manual, which contains code examples for various use cases and will replace the API Cookbook over time.
  • Manual—VxRail API User Guide is an official reference manual for VxRail API in PDF format. It provides a detailed description of each available API function, support information for specific VxRail HCI System Software versions, request parameters and possible response codes, successful call response data models, and example values returned. Dell Technologies Support portal access is required.
  • (Updated!) Ansible Modules—The Ansible Modules for Dell VxRail available on GitHub and Ansible Galaxy allow data center and IT administrators to use Red Hat Ansible to automate and orchestrate the configuration and management of Dell VxRail.
      The Ansible Modules for Dell VxRail are used for gathering system information and performing cluster-level operations. These tasks can be executed by running simple playbooks written in YAML syntax. The modules are written so that all operations are idempotent; making multiple identical requests has the same effect as making a single request (see the small conceptual sketch after this list).
  • PowerShell Package—VxRail API PowerShell Modules is a package with VxRail.API PowerShell Modules that allows simplified access to the VxRail API, using dedicated PowerShell commands and integrated help. This version supports VxRail HCI System Software 7.0.010 or later.
    Note: You must sign into the Dell Technologies Support portal to access this link successfully.
  • API Reference—vSphere Automation API is an official vSphere REST API reference that provides API documentation, request/response samples, and usage descriptions of the vSphere services.
  • API Reference—VMware Cloud Foundation on Dell VxRail API Reference Guide is an official VMware Cloud Foundation (VCF) on VxRail REST API reference that provides API documentation, request/response samples, and usage descriptions of the VCF on VxRail services.
  • Blog Post—Deployment of Workload Domains on VMware Cloud Foundation 4.0 on Dell VxRail using Public API is a VMware blog explaining how you can deploy a workload domain on VCF on VxRail using the API with the curl shell command.
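To illustrate what idempotent means in practice, here is a small conceptual Python sketch (not the modules' actual code): an operation first reads the current state and only issues a change when the desired state differs, so re-running the same playbook leaves the system unchanged.

```python
def ensure_satellite_node(current_nodes: set, desired_node: str) -> str:
    """Conceptual sketch of idempotent behavior: act only when the observed
    state differs from the desired state."""
    if desired_node in current_nodes:
        return "unchanged"  # repeating the request has no further effect
    current_nodes.add(desired_node)  # stand-in for the real add-node API call
    return "changed"

nodes = {"node-01", "node-02"}
print(ensure_satellite_node(nodes, "node-03"))  # -> changed
print(ensure_satellite_node(nodes, "node-03"))  # -> unchanged (idempotent)
```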

I hope you find this list useful. If so, make sure that you bookmark this blog for your reference. I will update it over time to include the latest collateral.

Enjoy your Infrastructure as Code journey with the VxRail API!


Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing 

Twitter: @cl0udguide


  • VMware
  • VxRail
  • NVMe
  • vSAN
  • 16G
  • vSAN ESA
  • VP-760
  • VE-660
  • vSAN OSA

VxRail’s Latest Hardware Evolution

Michael Athanasiou

Thu, 04 Jan 2024 17:22:21 -0000


December is a time of celebration and anticipation, a month in which we may reflect on the events of the year and look ahead to what is yet to come. Charles Dickens’ “A Christmas Carol” – and its many stage and movie remakes – is one of those literary classics that helps showcase this season’s magic at its finest. It is even said that there is a special kind of magic—one full of excitement, innovation, and productivity—that finds a way to (hyper)converge the past, present, and future for data center administrators all around the world who have been good all year!  

No, your wondering eyes do not deceive you. Appearing today are VxRail’s next generation platforms—the VE-660 and VP-760—in all-new, all-NVMe configurations! While Santa’s elves have spent the year building their backlog of toys and planning supply-chain delivery logistics that rival SLA standards of the world’s largest e-tailers, the VxRail team has been hard at work innovating our VxRail family portfolio to ensure that your workloads can run faster than ever before. So, let’s grab a glass of eggnog and invite the holiday spirits along for a tour of VxRail past, present, and future to better understand our latest portfolio addition.


Figure 1. VxRail VE-660

Figure 2. VxRail VP-760


Spirit of VxRail Past

Figure 3. Santa still runs 3-tier architecture. He needs VxRail and the speed of NVMe!

When VxRail first launched almost 8 years ago in early 2016, we introduced the concept of hyperconverged infrastructure to the masses with one easily managed platform that combined best-of-breed Dell PowerEdge servers with VMware technology. This new age of data center management brought better performance, extended capabilities, and time-saving advantages to data center admins everywhere. Over the years, we’ve sought to improve the offering by taking advantage of the latest hardware standards and technologies.

This was especially true earlier this summer when we launched the VE-660 and VP-760 VxRail platforms based on 16th Generation Dell PowerEdge servers. These next-gen successors to the VxRail E-Series and P-Series platforms not only contained the latest hardware innovations, but also represented a systemic change in the overall VxRail offering. 

First, the mainline E- and P-series platforms were respectively re-christened as the VE-660 and VP-760. This was done primarily to invite easier comparison points to the underlying PowerEdge servers on which they’re based – the R660 and R760. Second, we tracked how the use of accelerators in the data center had evolved over the years and made the strategic decision to fold the capabilities of the V-Series platform into the P-Series by way of specific riser configurations. Now, customers have the ability to glean all the benefits of a high-performance 2U system with the choice of either storage-optimized (up to 28 total drive bays) or accelerator-optimized (up to 2x double wide or 6x single wide GPUs) chassis configurations—whichever best aligns to the specifics of their workload needs. And third, VxRail platforms dropped the storage type suffix from the model name. Hybrid and all-flash (and as of today, all-NVMe—more on this later) storage variants are now offered as part of the riser configuration selection options of these baseline platforms, where applicable.

These changes are representative of how the breadth and depth of customer needs have grown tremendously over the years. By taking these steps to streamline the VxRail portfolio, we charted an evolutionary path forward that continues our commitment to offer greater customer choice and flexibility.

Spirit of VxRail Present

These themes of greater choice and flexibility are amplified by the architectural improvements underpinning these new VxRail platforms. Primary among them is the introduction of Intel® 4th Generation Xeon® Scalable processors. Intel’s latest generation of processors does more than bump VxRail core density per socket to 56 (112 max per node). These processors also come with built-in AMX (Advanced Matrix Extensions) accelerators that support AI and HPC workloads without the need for any additional drivers or hardware. For a deeper dive into the Intel® AMX capability set, the Spirit of VxRail Present invites you to read this blog: VxRail and Intel® AMX, Bringing AI Everywhere, authored by Una O’Herlihy.

Intel’s latest processors also usher in support for DDR5 memory and PCIe Gen 5, two other architectural pillars that underpin significant jumps in performance. The following table offers a high-level overview and comparison of these pillars and a useful at-a-glance primer for those considering a technology refresh from earlier generation VxRail:

Table 1. VxRail 14th Generation to 16th Generation comparison

                      VxRail VE-660 & VP-760                  VxRail E560, P570 & V570
Intel Chipset         4th Generation Xeon (Sapphire Rapids)   2nd Generation Xeon (Cascade Lake)
Cores                 8 - 56                                  4 - 28
TDP                   125W – 350W                             85W – 205W
Max DRAM Memory       4TB per socket                          1.5TB per socket
Memory Channels       8 (DDR5)                                6 (DDR4)
Memory Bandwidth      Up to 4800 MT/s                         Up to 2933 MT/s
PCIe Generation       PCIe Gen 5                              PCIe Gen 3
PCIe Lanes            80                                      48
PCIe Throughput       32 GT/s                                 8 GT/s

As the operational needs of a business change day-by-day, finding the right balance between workload density and load balance can often feel like an infinite war for resources. The adoption of DDR5 memory across the latest generation of VxRail platforms offers additional flexibility in the way system resources can be divvied up by virtue of two key benefits: greater memory density and faster bandwidth. The VE-660 and VP-760 wield eight memory channels per processor, with the ability to slot up to two 4800MT/s DIMMs per channel for a maximum memory capacity of 8TB per node. Compared to a VxRail P570, the density and speed improvements are staggering: 33% more memory channels per processor, 2.6x increase in per system total memory, and up to a 64% increase in memory speed! With faster and greater density compute and memory available for workloads, each node in a VxRail cluster can handle more VMs, and if there is ever a case of task bottlenecking, there are plenty of resources still available for optimal load balancing.
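A quick back-of-the-envelope check of those generational figures, assuming two processors per node, reproduces the claims above:

```python
# Back-of-the-envelope check of the memory claims above (assumes 2 sockets/node).
channels_new, channels_old = 8, 6          # DDR5 vs DDR4 channels per processor
speed_new, speed_old = 4800, 2933          # memory speed in MT/s
per_socket_new, per_socket_old = 4.0, 1.5  # max DRAM in TB per socket

print(f"Memory channels: +{channels_new / channels_old - 1:.0%}")  # ~ +33%
print(f"Memory speed:    +{speed_new / speed_old - 1:.0%}")        # ~ +64%
print(f"Per-node memory: {per_socket_new * 2:.0f}TB vs {per_socket_old * 2:.0f}TB "
      f"({per_socket_new / per_socket_old:.2f}x)")  # ~2.67x, i.e. the ~2.6x above
```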

When we consider the presence of PCIe Gen 5, we see an even greater increase in the overall performance envelope. PowerEdge’s Next-Generation Tech Note does a great job of contextualizing the capabilities of PCIe Gen 5. The main takeaway for VxRail, however, is that it increases the maximum bandwidth achievable from various peripheral components by roughly 25% when compared to PCIe Gen 4 and roughly 66% when compared to PCIe Gen 3. In particular, the jump in available PCIe lanes (48 lanes to a luxurious 80 lanes) and associated throughput (8 GT/s to 32 GT/s per lane) from Gen 3 to Gen 5 significantly reduces performance bottlenecks, resulting in faster storage transfer rates and more bandwidth for accelerators to process AI and ML workloads. 

PCIe Gen 5 is also backwards compatible with previous generation peripherals, enabling a certain degree of flexibility with respect to VxRail’s component extensibility and longevity in the data center. Yesterday’s technologies can still be used, but the VE-660 and VP-760 can adapt to growing workload demands by taking full advantage of the latest peripherals as they are released. They are even equipped with an additional PCIe slot over their E- & P-Series predecessors, providing extra dimensions of configuration. These boons in flexibility ensure any investment into this generation of VxRail enjoys longer relevance as your infrastructure backbone.

Spirit of VxRail Future

Even with all these architectural improvements defining the VP-760 and VE-660, we knew we could find ways of improving the capability set. So, we made our list of desired features (and checked it twice!) and determined that the best way to augment these next-generation hardware enhancements would be with the introduction of all-NVMe storage options. 

The Spirit of VxRail Past wishes to remind us that VxRail with all-NVMe storage is not new—NVMe first made its way to the VxRail lineup with the P580N and E560N almost four years ago and has been a mainstay facet of the VxRail with vSAN architecture ever since. However, what is most compelling about all-NVMe versions of the VE-660 and VP-760—what the Spirit of VxRail Future wishes to strongly communicate—is that NVMe opens the door to two very compelling benefits: additional flexibility of choice with respect to vSAN architecture and an associated increase in overall storage capacity with the addition of read intensive NVMe drives in sizes of up to 15.36TB.

The following figure outlines all of the generational advantages customers can benefit from when transitioning from existing 14th Generation VxRail environments to VP-760 all-NVMe platforms.

Figure 4. The VxRail 16th Generation all-NVMe advantage

In addition, VxRail on 16th Generation hardware can now support deployments with either vSAN Original Storage Architecture (OSA) or vSAN Express Storage Architecture (ESA). David Glynn provided a great summary of the core value vSAN ESA brings to the table for VxRail in his blog written nearly a year ago. With today’s launch, the VP-760 and VE-660 can now take advantage of vSAN ESA’s single-tier storage architecture that enables RAID-5 resiliency and capacity with RAID-1 performance. Customers who choose to deploy with vSAN OSA can also see the benefit of these new read intensive NVMe drives, with a total storage per node of up to 122.88TB in the VE-660 and 322.56TB in the VP-760. For those who deploy with vSAN ESA, maximum achievable storage is 153.6TB on the VE-660 and up to 368.64TB on the VP-760.

The Spirit of VxRail Future has seen the value of all-NVMe and is content knowing that VxRail will continue to underpin VMware mission-critical workloads for years to come.

Resources

 

Author: Mike Athanasiou, Sr. Engineering Technologist


  • VxRail

VxRail and Intel® AMX, Bringing AI Everywhere

Una O'Herlihy

Wed, 13 Dec 2023 22:54:31 -0000


We have seen exponential growth and adoption of Artificial Intelligence (AI) across nearly every sector in recent years, with many implementing their AI strategies as soon as possible to tap into the benefits and efficiencies AI has to offer. 

With our VxRail platforms, we have been supporting fully integrated and pre-installed GPUs for many years, with an array of NVIDIA GPUs available that already caters for high performance compute, graphics, and, of course, AI workloads. 

These GPUs are stood up as part of the first-run deployment of your VxRail system (though licensing for NVIDIA will be separate) and will be displayed and managed in vCenter. On top of this integration with vSphere and VxRail Manager, the GPUs’ lifecycle management is also taken care of through vLCM, where the GPU VIB is added to VxRail’s LCM bundle and then upgraded as part of that upgrade process.

When considering the type of accelerator you need for your VxRail system, there is now an additional option outside of discrete GPUs, which may have just enough acceleration capabilities to cater to your AI workloads.

Figure 1. Embrace AI with VxRail

Our VxRail 16th Generation platforms, launched this past summer, come with a choice of Intel® 4th generation Xeon® Scalable processors, all of which come with built-in accelerators called Intel® Advanced Matrix Extensions (AMX) that are deeply embedded in every core of the processor. The Intel® AMX accelerator, which benefits both AI and HPC workloads, is supported out-of-the-box and comes as standard without any requirement for drivers, special hardware, or additional licensing.

In this blog, we will cover the performance testing carried out by our VxRail Performance team in conjunction with Intel, as well as the gains we can expect to see when running AI inferencing workloads on our VxRail 16th Generation platforms leveraging Intel® AMX.

But first – what is Intel® AMX and how does it work?

Intel® AMX’s architecture consists of two components:

  • Tiles – consisting of eight two-dimensional registers, each 1 kilobyte in size, that store large chunks of data
  • Tile Matrix Multiplication (TMUL) – an accelerator engine attached to the tiles that performs matrix-multiply computations for AI

The accelerator works by combining larger 2D register files called tiles with a set of matrix multiplication instructions, enabling Intel® AMX to deliver the type of matrix compute functionality that you commonly find in dedicated AI accelerators (such as GPUs) directly in the CPU cores. This allows AI workloads to run on the CPU instead of offloading them to dedicated GPUs. 

Figure 2. Intel AMX Architecture Tile and TMUL

With this functionality, the Intel® AMX accelerator works best with AI workloads that rely on matrix math, like natural language processing, recommendation systems, and image recognition. The Intel® AMX accelerator delivers acceleration for both inferencing and deep learning on these workloads, providing a significant performance boost which we will cover shortly. 

There are two data types – INT8 and BF16 – supported for Intel® AMX, both of which allow for the matrix multiplication I mentioned earlier (a short sketch showing this path from TensorFlow follows the use case list below).

Some Intel® AMX workload use cases include:

  • Image recognition
  • Natural language processing
  • Recommendation systems
  • Media processing 
  • Machine translation
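Frameworks typically reach AMX through Intel's oneDNN library rather than hand-written intrinsics. As a minimal sketch, assuming a Linux x86 TensorFlow build with oneDNN enabled (the default since TensorFlow 2.9) running on an AMX-capable Xeon, you can run a BF16 matrix multiply and use oneDNN's verbose log to observe which instruction set it dispatches to:

```python
import os

# oneDNN prints one log line per executed primitive, including the ISA used.
os.environ["ONEDNN_VERBOSE"] = "1"  # must be set before TensorFlow is imported

import tensorflow as tf

# BF16 is one of the two AMX data types; cast the operands accordingly.
a = tf.cast(tf.random.uniform((1024, 1024)), tf.bfloat16)
b = tf.cast(tf.random.uniform((1024, 1024)), tf.bfloat16)

# On an AMX-capable CPU, the verbose output for this matmul should mention
# an AMX ISA (for example, avx512_core_amx); older Xeons fall back to AVX-512.
c = tf.matmul(a, b)
print(c.shape)  # (1024, 1024)
```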

Did you say performance testing? 

Yes, I did. 

With this testing, we saw increased AI performance for two sets of benchmark results that demonstrated the generation-to-generation inference performance gains delivered by our 16th generation VxRail VE-660 platform (with Intel® AMX!) compared to previous 15th generation VxRail platforms. 

The testing was focused on the inferencing of two different AI tasks, one for image classification with the ResNet50 model and the other for natural language processing with the BERT-large Model. The following covers the details of the testing:

Benchmark Testing: 

  • ResNet50 for Image Classification
  • BERT benchmark for Natural Language Processing

Framework: TensorFlow 2.11

Table 1. Tested VxRail hardware overview (components listed per node)

                       16th Generation              15th Generation              15th Generation (different processor)
System Name            VxRail VE-660                VxRail E660N                 VxRail E660N
Number of Nodes        4                            4                            4
Processor Model        Intel® Xeon® 6430 (32c)      Intel® Xeon® 6338 (32c)      Intel® Xeon® 6330 (28c)
Intel® AMX?            Yes                          No                           No
Processors per node    2                            2                            2
Core count per node    64                           64                           56
Processor Frequency    2.1 GHz, 3.4 GHz Turbo       2.0 GHz, 3.0 GHz Turbo       2.0 GHz, 3.10 GHz Turbo
Memory per node        512GB RAM                    512GB RAM                    512GB RAM
Storage                2x disk groups               2x disk groups               2x disk groups
                       (1x cache, 3x capacity)      (1x cache, 4x capacity)      (1x cache, 4x capacity)
vSAN OSA               vSAN OSA 8.0 U2              vSAN OSA 8.0                 vSAN OSA 8.0
VxRail version         Engineering pre-release      8.0.010                      8.0.010
                       VxRail code

We can see in the following figures that the ResNet50 image classification throughput increased by 3.1x, and we see a 3.7x increase in AI performance for the BERT benchmark results for natural language processing (NLP).

Figure 3. VxRail Generation-to-Generation – ResNet 50 inference results

Figure 4. VxRail Generation-to-Generation - BERT inference results

This exceptional increase in performance illustrates the type of AI performance gains you can achieve with Intel® AMX on VxRail without needing to invest in dedicated GPUs, enabling you to start your AI journey whenever you want.

Before we go, let’s review some highlights…

Intel® AMX and VxRail are…

  • Already included in any Intel processor on VxRail 16th Generation VE-660 and VP-760 platforms
  • Highly optimized for matrix operations common to AI workloads
  • Cost-effective, allowing you to run AI workloads without the need of a dedicated GPU
  • Integral to increased AI performance on VxRail 16th Generation platforms
    • 3.1x for Image Classification* 
    • 3.7x for Natural Language Processing (NLP)* 

Intel® AMX and VxRail support…

  • Most popular AI frameworks, including TensorFlow, PyTorch, OpenVINO, and more
  • INT8 and BF16 data types
  • Deep Learning AI Inference and Training Workloads for:
    • Image recognition
    • Natural language processing
    • Recommendation systems
    • Media processing 
    • Machine translation

(*Results based on engineering pre-release VxRail code)

Conclusion

Our VE-660 and VP-760 VxRail platforms come with built-in Intel® AMX accelerators which improve AI performance by 3.1x for image classification and 3.7x for NLP. The combination of these 16th Generation VxRail platforms and 4th generation Intel® Xeon® processors provides a cost-effective solution for customers that rely on Intel® AMX to meet their SLA for AI workload acceleration. 

 

Author: Una O’Herlihy

  • VMware
  • VxRail
  • Automation

A Closer Look at New Features Brought with VxRail 7.0.480

Dylan Jackson

Sat, 17 Feb 2024 23:57:31 -0000


The landscape of VxRail software is ever-evolving. As software releases become available, so too do new features and functions. These new features and functions create a more robust ecosystem, focusing on simplifying regular tasks that appear mundane but are critical to maintaining a secure, up-to-date, and healthy IT environment. VxRail 7.0.480 brought several new and enhanced capabilities to administrators, continuing to build on the streamlined infrastructure management experience that VxRail offers. Many of these improvements are part of the LCM experience. Let’s take a moment to discuss some of these new software improvements and what they can do for infrastructure staff. These include expanded storage of update advisor reports from one report to thirty reports, the ability to export compliance reports to preservable files, automated node reboots for clusters, and extended ADC bundle and installer metadata upload functionality for improved prechecking and update advisor reporting.  

Extended update advisor report availability 

Administrative teams have likely seen various update advisor reports. These reports have been part of the VxRail LCM experience for the past few releases and present a look at the cluster as it is at the moment. That said, storing multiple reports helps provide a documented history of the cluster. VxRail 7.0.480 has taken these singular reports and extended their storage to hold up to thirty reports, granting administrators the information and reporting to review up to the last thirty updates.

Figure 1. A view of the pane showing multiple update advisor reports available for review

Imagine that you have a large cluster. Different nodes could need different remediating actions. The ability to maintain multiple reports would enable administrators to address issues raised in a report while also creating a documentation trail for when corrective actions take multiple administrative cycles spanning extended lengths of time, possibly exceeding a day.

Export of compliance drift reports

Compliance drift reports are another reporting element of the LCM process, helping administrators to ensure that clusters conform with a Continuously Validated State (CVS) on a daily basis. This frees up administrators to attend to business-specific tasks, while ensuring that the more mundane work of gathering software versions for review is automated. This is a critical task that helps prevent time-intensive infrastructure issues that IT teams need to dedicate resources to correcting. Additionally, these reports ensure that LCM updates are successful by identifying any components that may have drifted from what is defined by the current Continuously Validated State.

Figure 2. The option to export a drift report to a local HTML file

These compliance drift reports, shown in the preceding figure, can now be exported, aiding administrators in creating and maintaining a documented history of their clusters' adherence to Continuously Validated States. Each report can be grouped by components and is saved to an HTML file, preserving the original view that VxRail administrators have come to know. 




Sequential node reboot

Our next new feature automates the sequential reboot of nodes within a cluster, a task that many customers perform manually. The automatic node reboot function is found within the Hosts submenu in the Configure tab. As shown in the following demonstration, administrators simply select the nodes they want to reboot, click the reboot button, and then complete the wizard. The wizard offers the option to begin rebooting immediately or to schedule the reboots for a later time. Once this selection is made, the wizard runs a precheck, and the reboot cycles can begin. While this feature most benefits larger clusters, clusters of any size benefit from automating infrastructure tasks. Node reboots can help further improve update cycle success rates by clearing issues like memory utilization or restarting any potentially hung processes. 

As an example, let’s consider memory utilization again. If there were an issue with the balloon driver making memory available, the update precheck would detect it; rebooting the node, however, would restart the service and force the memory to be made available once again. We’ve also observed cases where larger clusters are updated less often than smaller clusters due to longer maintenance windows, which can lead to longer times between reboots for larger clusters. The sequential reboot of nodes within a cluster eases the difficulty of restarting larger clusters through automation and orchestration, leading to restarts with minimal administrator activity. This can clear a variety of issues that could halt an upgrade.

That said, manually rebooting each node within a cluster can require a significant time investment. Imagine for a moment that we have a 20-node cluster. If it took just 10 minutes per node to migrate workloads away from a node, restart the host, bring it back online and relaunch software services, and finally bring workloads back, cycling through all 20 nodes would still take over three hours of an administrator's undivided attention. In reality, this reboot cycle would likely take longer. Automating these actions allows clusters to benefit from them while freeing IT staff to focus on other critical business tasks.

ADC bundle and installer metadata upload

VxRail 7.0.480 brings the ability to use the adaptive data collector (ADC) bundle and installer metadata, shown being uploaded in the following demonstration, to update the LCM precheck and update advisor functions that VxRail Manager provides. This is helpful because the precheck routinely gains new checks, leading to a more robust precheck and more successful LCM update cycles. For example, one of the more recent precheck additions is a check on memory utilization. The LCM precheck examines CPU and memory utilization of the vCenter Server appliance. If either CPU or memory utilization exceeds an 80% threshold, a warning appears in the precheck report. If the check occurs as part of an upgrade cycle, the warning appears in the update progress dashboard. The update advisor metadata file includes all the version information related to the target VxRail release version. This allows the update advisor to create reports showing the current, expected, and target software versions for each LCM cycle. These packages are pulled automatically over the network by VxRail Manager for clusters using a Secure Connect Gateway connection and are also available to offline dark sites using the Local Updates tab.
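As an illustration only (not the actual VxRail precheck code), the utilization rule described above amounts to a simple threshold comparison:

```python
def precheck_vcenter_utilization(cpu_pct: float, mem_pct: float,
                                 threshold: float = 80.0) -> list:
    """Conceptual sketch of the precheck rule above: warn when vCenter
    Server appliance CPU or memory utilization exceeds the 80% threshold."""
    warnings = []
    if cpu_pct > threshold:
        warnings.append(f"vCenter CPU utilization {cpu_pct:.0f}% exceeds {threshold:.0f}%")
    if mem_pct > threshold:
        warnings.append(f"vCenter memory utilization {mem_pct:.0f}% exceeds {threshold:.0f}%")
    return warnings

# Example: CPU over the threshold triggers a warning in the precheck report.
print(precheck_vcenter_utilization(cpu_pct=85, mem_pct=60))
```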

Conclusion

The VxRail engineering team routinely delivers new features and functions to our customers. In this blog, we reviewed the enhancements for expanded update advisor report storage, the ability to export drift reports to local HTML files, automated cluster node reboot cycles, and the enhanced LCM precheck and update advisor with the ADC bundle and installer metadata file uploads. As we move forward, we continue to enhance LCM operations and minimize the time required to manage VxRail. As such, VxRail is a fantastic choice to run your virtualized workloads and will continue to become a more robust and administration-friendly platform.

 

Author: Dylan Jackson, Engineering Technologist


  • HCI
  • Ansible
  • VxRail
  • API
  • satellite node
  • Edge
  • REST
  • Karol Boguniewicz

VxRail Edge Automation Unleashed - Simplifying Satellite Node Management with Ansible

Karol Boguniewicz

Thu, 30 Nov 2023 17:43:03 -0000


VxRail Edge Automation Unleashed

Simplifying Satellite Node Management with Ansible

In the previous blog, Infrastructure as Code with VxRail made easier with Ansible Modules for Dell VxRail, I introduced the modules which enable the automation of VxRail operations through code-driven processes using Ansible and VxRail API. This approach not only streamlines IT infrastructure management but also aligns with Infrastructure as Code (IaC) principles, benefiting both technical experts and business leaders.

The corresponding demo is available on YouTube:


The previous blog laid the foundation for the continued journey where we explore more advanced Ansible automation techniques, with a focus on satellite node management in the VxRail ecosystem. I highly recommend checking out that blog before diving deeper into the topics discussed here, as the concepts discussed in this demo will be much easier to absorb.

What are the VxRail satellite nodes?

VxRail satellite nodes are individual nodes designed specifically for deployment in edge environments and are managed through a centralized primary VxRail cluster. Satellite nodes do not leverage vSAN to provide storage resources and are an ideal solution for those workloads where the SLA and compute demands do not justify even the smallest of VxRail 2-node vSAN clusters.

Satellite nodes enable customers to achieve uniform and centralized operations within the data center and at the edge, ensuring VxRail management throughout. This includes comprehensive, automated lifecycle management for VxRail satellite nodes, covering both hardware and software and significantly reducing the need for manual intervention.

To learn more about satellite nodes, please check the following blogs from my colleagues:

Automating VxRail satellite node operations using Ansible

You can leverage the Ansible Modules for Dell VxRail to automate various VxRail operations, including more advanced use cases, like satellite node management. It’s possible today by using the provided samples available in the official repository on GitHub.

Have a look at the following demo, which leverages the latest available version of these modules at the time of recording – 2.2.0. In the demo, I discuss and demonstrate how you can perform the following operations from Ansible:

  • Collecting information about the number of satellite nodes added to the primary VxRail cluster
  • Adding a new satellite node to the primary VxRail cluster
  • Performing lifecycle management operations – staging the upgrade bundle and executing the upgrade on managed satellite nodes
  • Removing a satellite node from the primary cluster


The examples used in the demo are slightly modified versions of the following samples from the modules' documentation on GitHub. If you’d like to replicate them in your environment, here are links to the corresponding samples for your reference:

In the demo, you can also observe one of the more interesting features of the Ansible Modules for Dell VxRail, shown in action but not explained explicitly. You might be aware that some VxRail API functions are available in multiple versions – typically, a new version is made available when new features arrive in the VxRail HCI System Software, while previous versions are retained to provide backward compatibility. One example is “GET /vX/system”, which is used to retrieve the number of satellite nodes – this property was introduced in version 4. If you don’t specify a version, the modules automatically select the latest supported version, simplifying the end-user experience.
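
To make this more concrete, here is a minimal sketch of what retrieving that satellite node count could look like when calling the VxRail API directly from Ansible with the built-in uri module. The /rest/vxm/v4/system path, the satellite_node_count response field, and the host and credential variables are all assumptions for illustration; the Ansible Modules for Dell VxRail wrap this same call and handle version selection for you, so treat this as a companion to the official samples rather than a replacement for them.

```yaml
---
# Minimal sketch only: the /rest/vxm/v4/system path and the
# satellite_node_count field are assumptions based on the v4 system API
# discussed above -- verify both against the VxRail API documentation.
- name: Report satellite nodes managed by the primary VxRail cluster
  hosts: localhost
  gather_facts: false
  vars:
    vxm_host: vxm.example.local            # placeholder VxRail Manager address
    vxm_user: administrator@vsphere.local  # placeholder; keep real credentials in Ansible Vault
    vxm_pass: changeme
  tasks:
    - name: Call GET /v4/system on VxRail Manager
      ansible.builtin.uri:
        url: "https://{{ vxm_host }}/rest/vxm/v4/system"
        method: GET
        user: "{{ vxm_user }}"
        password: "{{ vxm_pass }}"
        force_basic_auth: true
        validate_certs: false              # fine in a lab sandbox; validate certificates in production
        return_content: true
      register: system_info

    - name: Show the satellite node count
      ansible.builtin.debug:
        msg: "Satellite nodes: {{ system_info.json.satellite_node_count }}"
```

Running something like this against the hands-on lab described below is a low-risk way to see the versioned endpoints in action before adopting the modules in your own environment.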

How can you get more hands-on experience with automating VxRail operations programmatically?

The above demo, covering satellite node management using Ansible, was configured in the VxRail API hands-on lab available in the Dell Technologies Demo Center. With the help of the Demo Center team, we built this lab as a self-education tool for learning the VxRail API and how it can be used to automate VxRail operations through various methods: exploring the built-in, interactive, web-based documentation, the VxRail API PowerShell Modules, the Ansible Modules for Dell VxRail, and Postman.

The hands-on lab provides a safe VxRail API sandbox, where you can easily start experimenting by following the exercises from the lab guide or trying some other use cases on your own without any concerns about making configuration changes to the VxRail system.

The lab was refreshed for the Dell Technologies World 2023 conference to leverage VxRail HCI System Software 8.0.x and the latest version of the Ansible Modules. If you’re a Dell partner, you have direct access; if you’re a customer who’d like access, please contact your account SE from Dell or your Dell partner. The lab is available in the catalog as “HOL-0310-01 - Scalable Virtualization, Compute, and Storage with the VxRail REST API”.

Conclusion

In the fast-evolving landscape of IT infrastructure, the ability to automate operations efficiently is not just a convenience but a necessity. With the power of the Ansible Modules for Dell VxRail, we've explored how this necessity can be met, using the satellite node use case as an example. We encourage you to embrace the full potential of VxRail automation using the VxRail API and Ansible or other tools. If this is new to you, you can gain experience by experimenting with the hands-on lab available in the Demo Center catalog.

Resources


Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter/X: @cl0udguide
LinkedIn: https://www.linkedin.com/in/boguniewicz/



  • VCF on VxRail
  • Cloud Foundation on VxRail

Learn About the Latest VMware Cloud Foundation 5.1 on Dell VxRail 8.0.200 Release

Jason Marques

Tue, 05 Dec 2023 17:06:36 -0000

|

Read Time: 0 minutes

Pairing more configuration flexibility with more integrated automation delivers even more simplified outcomes to meet more business needs!

More is what sums up this latest Cloud Foundation on VxRail release! This new release is based on the latest software bill of materials (BOM) featuring vSphere 8.0 U2, vSAN 8.0 U2, and NSX 4.1.2. Read on for more details.

 

Operations and serviceability user experience updates

SDDC Manager WFO UI custom host networking configuration enhancements

With this enhancement, the administrator can configure networking of a new workload domain or VxRail cluster using either “Default” VxRail Network Profiles or a “Custom” Network Profile configuration. Cloud Foundation on VxRail already supports the ability for administrators to deploy custom host networking configurations using the SDDC Manager WFO API deployment method; however, this new feature brings that support to the SDDC Manager WFO UI deployment method, making it even easier to operationalize.

The following demo walks through using the SDDC Manager WFO UI to create a new workload domain with a VxRail cluster that is configured with vSAN ESA and VxRail vLCM mode enabled and a custom network profile.

New VCF Infrastructure as Code (IaC) tooling with new Terraform VCF Provider and PowerCLI VCF Module

Infrastructure teams can now utilize the Terraform Provider for VCF and the VCF module integrated into VMware’s official PowerCLI tool to practice Infrastructure as Code (IaC), allowing them to deploy, manage, and operate VMware Cloud Foundation on VxRail deployments.

By using prebuilt, best-practice IaC code designed to interface with a single VCF API, teams can accelerate infrastructure provisioning and lessen the burden of developing and maintaining code for individual infrastructure components that deliver similar outcomes.

Important Note: Not all operations using these tools may be supported in Cloud Foundation on VxRail. Please refer to tool documentation links at the bottom of this post for details.

    

LCM updates

Day 1 VxRail vLCM mode compatibility for management and workload domains

VMware Cloud Foundation 5.1 on VxRail 8.0.200 now supports the configuration and deployment of new domains using vSphere Lifecycle Manager Images (vLCM) enabled VxRail clusters, depicted in Figure 1. vLCM-enabled VxRail clusters leverage VxRail Manager to unify updates of not only your ESXi image but also your BIOS, firmware, and drivers in a single update process, all orchestrated through the integrated SDDC Manager’s native LCM operations experience via VxRail APIs. VxRail clusters will have their VxRail Continuously Validated State image managed at the cluster level by VxRail Manager, just as in VxRail standard LCM mode enabled clusters.


Figure 1. High-level VxRail vLCM mode architecture

Mixed-mode support for workload domains as a steady state

Existing VMware Cloud Foundation 5.x on VxRail 8.x deployments now allow administrators to run workload domains of different VCF 5.x versions as a steady state. Administrators can now update the management domain and any other workload domain of a VCF 5.0 deployment to the latest VCF 5.x version without the need to upgrade all workload domains. Mixed-mode support also allows administrators to leverage the benefits of new SDDC Manager features in the management domain without having to upgrade a full VCF 5.x on VxRail 8.x instance.

Asynchronous download support for SDDC Manager update precheck files

SDDC Manager update precheck files can now be downloaded and updated asynchronously from full release updates, an addition to similar async VxRail-specific precheck file updates that already exist within VxRail Manager. This feature allows administrators to download, deploy, and run SDDC Manager update prechecks tailored to a specific VMware Cloud Foundation on VxRail release. SDDC Manager precheck files are created by VMware engineering and contain detailed checks for SDDC Manager to run prior to upgrading to a newer VCF on VxRail target release, as shown in the following figure.

Figure 2. High-level process of asynchronous download support for SDDC Manager update precheck files

 

Networking updates

Support for the separation of DvPG for management appliances and ESXi host (VMkernel) management

Prior to this release, the default networking topology deployed by VMware Cloud Foundation on VxRail consisted of ESXi host management interfaces (VMkernel interfaces) and management components (vCenter Server, SDDC Manager, NSX components, VxRail Manager, and so on) being applied to the same Distributed Virtual Port Group (DvPG). This new DvPG separation feature enables traffic isolation between management component VMs and ESXi host management VMkernel interfaces, helping align to an organization’s desired security posture. Figure 3 illustrates this new configuration architecture.

Figure 3. New DvPG architecture

Configure custom NSX Edge cluster without 2-tier routing (via API)

VMware Cloud Foundation 5.1 on VxRail 8.0.200 now provides the option to deploy a custom NSX Edge cluster without the need to configure both a Tier-0 and Tier-1 gateway. These types of NSX Edge cluster deployments can be configured using the SDDC Manager (API only).

Static IP-based NSX Tunnel End Point and Sub Transport Node Profile assignment support for L3 aware clusters and L2/L3 vSAN stretched clusters

VxRail stretched clusters that are deployed using vSAN OSA can now be configured with vLCM mode enabled. In addition, administrators can now configure NSX Host TEPs to utilize an NSX static IP pool and no longer need to manually maintain an external DHCP server to support Layer 3 vSAN OSA stretched clusters, as illustrated in the following figure.

Figure 4. TEP Configuration Flexibility Example for vSAN Stretched Clusters

Building off these capabilities, deployments of VxRail stretched clusters with vSAN OSA which are configured using static IP Pools can now also leverage Sub-Transport Node Profiles (Sub-TNP), a feature introduced with NSX-T 3.2.2 and NSX 4.1.

Sub-TNPs can be used to prepare clusters of hosts without L2 adjacency to the Host TEP VLAN. This is useful for customers with rack-based IP schemas and allows Host TEP IPs to be configured on their own separate networks. Configuring vSAN stretched clusters using NSX Sub-TNP provides increased security, allowing administrators to enable and configure Distributed Malware Prevention and Detection. An example of this is depicted in the following figure.

Figure 5. Sub-TNP vSAN L3 Stretched Cluster Configuration Example

Note: Stretched VxRail with vSAN ESA clusters are not yet supported.

Support for multiple VDS for NSX host networking configurations

This release now provides the option to configure multiple VDS for NSX through the SDDC Manager WFO UI and WFO API.

Administrators can now configure additional VxRail host VDS prepared for NSX (VDS for NSX) to configure using VLAN Transport Zones (VLAN TZs), as shown in the following figure. This provides administrators the added benefit of configuring NSX Distributed Firewall (DFW) for workloads in VLAN transport zones, allowing security to be more granular. These capabilities further simplify the configuration of advanced networking and security for Cloud Foundation on VxRail.


Figure 6. Configuring additional VxRail host VDS for NSX to configure using VLAN TZs

 

Security and access updates

Okta SSO identity federation support

VMware Cloud Foundation 5.1 on VxRail 8.0.200 now supports the option to configure the VMware Identity Broker for federation using Okta (a third-party identity provider). Once configured, federated users can seamlessly move between the vCenter Server and NSX Manager consoles without being prompted to re-authenticate.

 

Storage updates

vSAN OSA/ESA support for management and workload domain VxRail clusters

VMware Cloud Foundation 5.1 on VxRail 8.0.200 adds support for both vSAN OSA-based and vSAN ESA-based VxRail clusters when deploying a new management domain (greenfield VCF on VxRail instance) and new workload domains/clusters in VCF on VxRail instances that have been upgraded to this latest release. VCF requires that vSAN ESA-based cluster deployments have vLCM mode enabled. Also, as of this release, only 15th generation VxRail vSAN ESA compatible hardware platforms are supported. 16th generation VxRail platform support is planned for a future release.

Support for vSAN OSA/ESA remote datastores as principal storage when used with VxRail dynamic node workload domain clusters

This release adds support for VxRail dynamic node compute-only clusters in cross-cluster capacity sharing use cases. This means that vSAN OSA or ESA remote datastores sourced from a standard VxRail HCI cluster with vSAN within the same workload domain can now be used as principal storage for VxRail dynamic node compute-only workload domain clusters. This capability is available via the SDDC Manager WFO script deployment method only.

 

Platform and scale updates

Increased VCF remote cluster maximum support for up to 16 nodes and up to 150ms latency

There are new validated updates to the supported network requirements for VCF remote clusters. The links to remote sites now require 10 Mbps of available bandwidth and a latency of less than 150 ms.

There have also been updates to the VCF remote cluster size scalability ranges. A VCF remote cluster now requires a minimum of 3 hosts when using local vSAN as cluster principal storage, or 2 hosts when using supported Dell external storage as principal storage with VxRail dynamic nodes. At the upper end, VCF remote clusters cannot exceed the new maximum of 16 VxRail hosts in either case.

Note: Support for this feature is expected to be available after GA.

Support for 2-node workload domain VxRail dynamic node clusters when using VMFS on FC Dell external storage as principal storage

Cloud Foundation on VxRail now supports the ability to deploy 2-node dynamic node-based workload domain clusters when using VMFS on FC Dell external storage as cluster Principal storage.

Increased GPU scale for Private AI

Nvidia GPUs can be configured for AI / ML to support a variety of different use cases. In VMware Cloud Foundation 5.1 on VxRail 8.0.200, where GPUs have been configured for vGPUs, a VM can now be configured with up to 16 vGPU profiles that represent all of a GPU or parts of a GPU. These enhancements allow customers to support larger Generative AI and large-language model (LLM) workloads while delivering maximum performance.

 

VxRail hardware platform updates

15th generation VxRail E660N and P670N all-NVMe vSAN ESA hardware platform support

Cloud Foundation on VxRail administrators can now use VxRail hardware platforms that have been qualified to run vSAN ESA and VxRail 8.0.200 software. The all-NVMe VxRail platforms such as the 15th generation VxRail E660N and P670N can now be ordered and deployed in Cloud Foundation 5.1 on VxRail 8.0.200 environments.

 

Hybrid cloud management updates

VCF mixed licensing mode support

VMware Cloud Foundation 5.1 on VxRail 8.0.200 introduces support for both Key-based and Keyless licensing for existing deployments, as illustrated in the following figure. 

To enable the deployment, the management domain must first be cloud connected and subscribed. Once complete, enhanced SDDC Manager workflows allow administrators the option to license a new workload domain using Keyless licenses (cloud connected subscription) or Key-based licenses (perpetual or cloud disconnected subscription). This deployment scenario is referred to as Mixed Licensing Mode. All licensing used within a domain must be homogeneous, meaning all components within a domain must use either a Key-based or Keyless license and not a combination thereof.

Figure 7. Understanding Key-based and Keyless licensing for existing deployments

VMware Cloud Disaster Recovery service for VCF cloud connected subscription deployments

VMware Cloud Foundation on VxRail cloud connected subscriptions now support VMware Cloud Disaster Recovery (VCDR) as an add-on service through the VMware Cloud Portal.

 

Other asynchronous release-independent related updates

VMware redefines Cloud Foundation product lifecycle policies

The product lifecycle policies for new and existing VMware Cloud Foundation releases have been redefined by VMware. VCF on VxRail product lifecycle policies align with VMware’s VCF product lifecycle policy. 

End of General Support for VCF 5.x is now four (4) years from the original VCF 5.0 launch date. This change allows IT teams to run their VMware Cloud Foundation on VxRail deployments for longer before planning an upgrade, providing more control for IT organizations to adopt a cloud operating model that evolves at the pace of their business. 

 

Summary

Well, there you have it! Another release in the books. If you want even more information beyond what was discussed here, feel free to check out the resources linked below. See you next time!

 

Resources

 

Author: Jason Marques

Twitter:  @vWhipperSnapper 

  • VxRail cluster

Meet Deployment Needs as Varied as the Autumn Leaves with VxRail

Dylan Jackson

Tue, 14 Nov 2023 17:57:28 -0000

|

Read Time: 0 minutes

Meet Deployment Needs as Varied as the Autumn Leaves with VxRail

As we dive into the midst of the autumn season, the trees’ leaves all tint into a million hues of autumnal glee. Those of us in the US have the Thanksgiving holiday to look forward to. I’m certainly looking forward to a heaping plate of turkey and mashed potatoes all the while enjoying the seasonal beauty of a crisp autumn day. Like the many varieties of autumn leaves, VxRail is here to meet every hue of your deployment needs this season. The VxRail Technical Marketing team has recently released new video content showcasing how VxRail can suit a smorgasbord of colorful use cases.

VxRail within the Datacenter

For our first course, let’s discuss the VxRail with vSAN deployment flexibility video. Deploying VxRail clusters with vSAN is the most common deployment option, the turkey of this analogy. Prior to deploying a VxRail with vSAN cluster, one of the first choices to make relates to the storage architecture. Be sure to watch this video if you want to learn more about VxRail deployments with vSAN and the original and express storage architectures. It will walk you through many of the details and benefits of using VxRail to power your vSAN-backed datacenter workloads.

If you aren’t particularly familiar with dynamic nodes, then our second-course video is certainly for you. Dynamic nodes are a fantastic way to bring additional compute power to the datacenter. They do so by offering compute-only nodes that leverage Dell SAN arrays for storage. When backed by PowerStore-T arrays, users can also take advantage of the Dynamic AppsOn feature. Dynamic AppsOn is a “better together” enhancement, much like a loaded baked potato. Datacenters aren’t the only ones with a place at the VxRail dinner table, though. There are also 2-node clusters and satellite node options that extend the benefits of VxRail automation to more locations.

Reaching Out to Branch Offices and Edge Sites

2-node clusters and satellite nodes are the mashed potatoes and gravy of the VxRail deployment story. Much like how you wouldn’t make a plate of only potatoes, these deployment types are meant to work with VxRail vSAN and dynamic node clusters. 2-node clusters are an excellent choice for branch locations because they allow administrators to extend the LCM and automation enhancements from primary datacenters to remote locations. See the VxRail video playlist for more, including videos on 2-node clusters and satellite nodes.

Finally, we complete our deployment options with VxRail satellite nodes for dessert. Satellite nodes are stand-alone nodes managed by a VxRail cluster but aren’t cluster members themselves. They’re great for deployments at edge locations where even a 2-node cluster would be too much. If extending automation to remote sites with small footprint requirements is something you need, try satellite nodes.

Regardless of the deployment type, all nodes run the same VxRail HCI System Software. This is why the user experience remains consistent across VxRail with vSAN clusters, dynamic node clusters, 2-node clusters, and satellite nodes. Daniel Chiu, from the VxRail Technical Marketing team, created a video explaining all the advantages that the VxRail HCI System Software provides. I would definitely recommend checking it out to gain a better understanding of the software that makes the VxRail experience possible.

Conclusion

Autumn is certainly one of my favorite seasons of the year. It brings much needed color into the world, and it feels like such a breath of fresh air. Much like how no two autumn leaves are the same tone of amber, we see an equal variety in needs when it comes to infrastructure deployments. No matter the business case, VxRail can be built to suit. If more resources are needed in the datacenter, VxRail clusters with vSAN can certainly fill that need. VxRail dynamic nodes also suit datacenter needs by providing compute power for Dell SAN arrays or even enhancing storage utilization through vSAN HCI Mesh. Both 2-node clusters and satellite nodes allow customers to extend their compute resources outside the datacenter to remote offices, branch offices, and other edge sites while simplifying the management activity needed to maintain this infrastructure.

Resources

 

Author: Dylan Jackson, VxRail Technical Marketing


  • HCI
  • VMware
  • VxRail
  • Dynamic nodes
  • LCM
  • Lifecycle Management
  • 7.0.480

Learn More About the Latest Major VxRail Software Release: VxRail 7.0.480

Daniel Chiu

Tue, 24 Oct 2023 15:51:48 -0000

|

Read Time: 0 minutes

Happy Autumn, VxRail customers!  As the morning air gets chillier and the sun rises later, this blog on our latest software release – VxRail 7.0.480 – paired with your Pumpkin Spice Latte will give you the boost you need to kick start your day.  It may not be as tasty as freshly made cider donuts, but this software release has significant additions to the VxRail lifecycle management experience that can surely excite everyone.

VxRail 7.0.480 provides support for VMware ESXi 7.0 Update 3o and VMware vCenter Server 7.0 Update 3o. All existing platforms that support VxRail 7.0, except ones based on Dell PowerEdge 13th Generation platforms, can upgrade to VxRail 7.0.480. This includes the VxRail systems based on PowerEdge 16th Generation platforms that were released in August.

Read on for a deep dive into the VxRail Lifecycle Management (LCM) features and enhancements in this latest VxRail release. For a more comprehensive rundown of the features and enhancements in VxRail 7.0.480, see the release notes.

 

Improving update planning activities for unconnected clusters or clusters with limited connectivity

VxRail 7.0.450, released earlier this year, provided significant improvements to update planning activities in a major effort to streamline administrative work and increase cluster update success rates. Enhancements to the cluster pre-update health check and the introduction of the update advisor report were designed to drive even more simplicity to your update planning activities. By having VxRail Manager automatically run the update advisor report, inclusive of the pre-update health check, every 24 hours against the latest information, you will always have an up-to-date report to determine your cluster’s readiness to upgrade to the latest VxRail software version.

If you are not familiar with the LCM capabilities added in VxRail 7.0.450, you can review this blog for more information.

VxRail 7.0.450 offered a seamless path for clusters that are connected to the Dell cloud to take advantage of these new capabilities. Internet-connected clusters can automatically download LCM pre-checks and the installer metadata files, which provide the manifest information about the latest VxRail software version, from the Dell cloud. The ability to periodically scan the Dell cloud for the latest files ensures the update advisor report is always up to date to support your decision-making.

While unconnected clusters could use these features, the user experience in VxRail 7.0.450 made it more cumbersome for users to upload the latest LCM pre-checks and installer metadata files. VxRail 7.0.480 aims to improve the user experience for those who have clusters deployed in dark or remote sites that have limited network connectivity.

Starting in VxRail 7.0.480, users of unconnected clusters will have an easier experience uploading the latest LCM pre-checks file onto VxRail Manager. The VxRail Manager UI has been enhanced so that you no longer have to upload via the CLI.

Knowing that some clusters are deployed in areas where network bandwidth is at a premium, the VxRail Manager UI has also been updated so that you only need to upload the installer metadata file to generate the update advisor report. In VxRail 7.0.450, users had to upload the full LCM bundle to generate the update advisor report. The difference in payload size (greater than 10 GB for a full LCM bundle versus a 50 KB installer metadata file) is a tremendous improvement for bandwidth-constrained clusters, eliminating a barrier to relying on the update advisor report as a standard cluster management practice. With VxRail 7.0.480, whether you have connected or unconnected clusters, these update planning features are easy to use and will help increase your cluster update success rates.

To accommodate these improvements, the Local Updates tab has been reorganized. There are now two sub-tabs underneath the Local Updates tab:

  • The Update sub-tab represents the existing cluster update workflow where you would upload the full LCM bundle to generate the update advisor report and initiate the cluster update operation.
  • The Plan and Update sub-tab is the recommended path, incorporating the enhancements in VxRail 7.0.480. Here you can upload the latest LCM pre-checks file and the installer metadata file downloaded from the Dell Support website. Uploading the LCM pre-checks file is optional when creating a new report because there may not always be an updated file to apply. However, you do need to upload an installer metadata file to generate a new report from here. Once uploaded, VxRail Manager will generate an update advisor report against that installer metadata file every 24 hours.

Figure 1. New look to the Local Updates tab

 

Easier record-keeping for compliance drift and update advisor reports

VxRail 7.0.480 adds new functionality to make compliance drift reports exportable outside the VxRail Manager UI, while also introducing a History tab to access past update advisor reports.

Some of you use the contents of the compliance drift report to build out a larger infrastructure status report for information sharing across your organization. Making the report exportable simplifies that report-building process. When exporting the report, there is an option to group the information by host if you prefer.

Note that the compliance check functionality has moved from the Compliance tab under the Updates page to a separate page, which you can navigate to by selecting Compliance from under the VxRail section.

 

Figure 2. Exporting the compliance drift report

The exit of the Compliance tab comes with the introduction of the History tab on the Updates page in VxRail 7.0.480. Because VxRail Manager automatically generates a new update advisor report every 24 hours (and you have the option to generate one on demand), the update advisor report is often overwritten. To avoid the need to constantly export reports as a form of record-keeping, the new History tab stores the last 30 update advisor reports. The reports are listed in a table where you can see which target version each report was run against and when it was run. To view a full report, click the icon in the left-hand column.

Figure 3. New History tab to store the last 30 update advisor reports

 

Addressing cluster update challenges for larger-sized clusters

For some of you that have larger-sized clusters, cluster updates pose challenges that may prevent you from upgrading more frequently.  For example, the length of the maintenance window required to complete a full cluster update may not fit within your normal business operations such that any cluster update activity will impact service availability.  As a result, cluster updates are kept to a minimum and nodes inevitably are not rebooted for long periods of time.  While the cluster pre-update health check is an effective tool to determine cluster readiness for an upgrade, some issues may be lurking that a node reboot can uncover.  That’s why some of you script your own node reboot sequence that acts as a test run for a cluster upgrade.  The script reboots each node one at a time to ensure service levels of your workloads are maintained.  If any nodes fail to reboot, you can investigate those nodes.
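
To give a sense of what that do-it-yourself approach involves, here is a rough Ansible sketch of a rolling node reboot built on the community.vmware collection, which is one common way such scripts are written. The inventory group, vCenter address, credentials, and vSAN setting are all assumptions for illustration, and none of the prechecks that the new VxRail feature performs are replicated here.

```yaml
---
# Rough sketch of the do-it-yourself rolling reboot that VxRail 7.0.480 now
# automates in the VxRail Manager UI. Built on the community.vmware
# collection against vCenter; the inventory group, credentials, and vSAN
# setting are assumptions, and the new feature's cluster-level and
# node-level prechecks are not replicated here.
- name: Reboot cluster nodes one at a time as a pre-upgrade test run
  hosts: esxi_nodes             # hypothetical inventory group of ESXi hosts
  gather_facts: false
  serial: 1                     # one node at a time preserves workload service levels
  vars:
    vcenter_host: vcenter.example.local        # placeholder vCenter address
    vcenter_user: administrator@vsphere.local  # placeholder; keep real credentials in Ansible Vault
    vcenter_pass: changeme
  tasks:
    - name: Put the node into maintenance mode, evacuating VMs
      community.vmware.vmware_maintenancemode:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: false
        esxi_hostname: "{{ inventory_hostname }}"
        evacuate: true
        vsan: ensureObjectAccessibility        # assumed vSAN decommission mode for a short reboot
        state: present
      delegate_to: localhost

    - name: Reboot the node
      community.vmware.vmware_host_powerstate:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: false
        esxi_hostname: "{{ inventory_hostname }}"
        state: reboot-host
      delegate_to: localhost

    - name: Wait for the node to come back online
      ansible.builtin.wait_for:
        host: "{{ inventory_hostname }}"
        port: 443
        delay: 120              # simplified; a robust script would first wait for the host to go offline
        timeout: 1800
      delegate_to: localhost

    - name: Take the node out of maintenance mode
      community.vmware.vmware_maintenancemode:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: false
        esxi_hostname: "{{ inventory_hostname }}"
        state: absent
      delegate_to: localhost
```

The amount of sequencing and error handling such a script demands, and the testing needed to trust it, is exactly the burden that the built-in feature described next removes.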

VxRail 7.0.480 introduces the node reboot sequence in the VxRail Manager UI so that you do not have to manage your own scripts anymore. The new feature includes cluster-level and node-level prechecks to ensure it is safe to perform this activity. If nodes fail to reboot, there is an option for you to retry the reboot or skip it. Making this activity easy may also encourage more customers to perform this additional precheck before upgrading their clusters.

 Figure 4. Selecting nodes in a cluster to reboot in sequential order

 

Figure 5. Monitoring the node reboot sequence on the dashboard

VxRail 7.0.480 also provides the capability to split your cluster update into multiple parts. Doing so allows you to separate your cluster upgrade into smaller maintenance windows and work around your business operation needs. Though this capability can reduce the impact of a cluster upgrade on your organization, VMware does recommend that you complete the full upgrade within one week, given that some Day 2 operations are disabled while the cluster is partially upgraded. VxRail enables this capability only through the VxRail API. When a cluster is in a partially upgraded state, features in the Updates tab are disabled and a banner appears alerting you of the cluster state. Cluster expansion and node removal operations are also unavailable in this scenario.
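
If you are curious what driving a split update through automation could look like, here is a hedged sketch using the same uri-based approach as the earlier example. The /rest/vxm/v1/lcm/upgrade path and bundle_file_locator field follow the public VxRail LCM upgrade API, but the target_hosts field shown for limiting the update to a subset of nodes is a placeholder assumption; consult the VxRail API reference for the actual partial-upgrade request schema before attempting anything like this.

```yaml
---
# Hedged sketch of a split (partial) cluster update via the VxRail API.
# The endpoint path and bundle_file_locator field follow the public LCM
# upgrade API, but target_hosts is a placeholder assumption for limiting
# the run to one maintenance window -- check the VxRail API reference for
# the real partial-upgrade schema before using anything like this.
- name: Start a cluster update limited to a subset of nodes
  hosts: localhost
  gather_facts: false
  vars:
    vxm_host: vxm.example.local            # placeholder VxRail Manager address
    vxm_user: administrator@vsphere.local  # placeholder; keep real credentials in Ansible Vault
    vxm_pass: changeme
  tasks:
    - name: Request an upgrade covering only the first maintenance window
      ansible.builtin.uri:
        url: "https://{{ vxm_host }}/rest/vxm/v1/lcm/upgrade"
        method: POST
        user: "{{ vxm_user }}"
        password: "{{ vxm_pass }}"
        force_basic_auth: true
        validate_certs: false
        body_format: json
        body:
          bundle_file_locator: /data/store/vxrail-composite-bundle.zip  # placeholder bundle path
          target_hosts:                    # placeholder field: the subset for this window
            - node01.example.local
            - node02.example.local
        status_code: 202                   # assumed async-accept status; adjust per the API reference
      register: upgrade_request

    - name: Show the returned request ID for progress tracking
      ansible.builtin.debug:
        msg: "Upgrade request: {{ upgrade_request.json }}"
```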

 

Conclusion

The new lifecycle management capabilities added to VxRail 7.0.480 are part of the continual evolution of the VxRail LCM experience.  They also represent how we value your feedback on how to improve the product and our dedication to making your suggestions come to fruition.  The LCM capabilities added to this software release will drive more effective cluster update planning, which will result in higher rates of cluster update success that will drive more efficiencies in your IT operations.  Though this blog focuses on the improvements in lifecycle management, please refer to the release notes for VxRail 7.0.480 for a complete list of features and enhancements added to this release. For more information about VxRail in general, visit the Dell Technologies website.

 

Author:  Daniel Chiu

 

  • VxRail
  • SFS
  • SmartFabric Services
  • OMNI

What’s happening with SmartFabric Services and VxRail 8.0

Jim Slaughter

Fri, 15 Sep 2023 20:06:45 -0000

|

Read Time: 0 minutes

This article describes the SmartFabric Services (SFS) Automated VxRail Switch Configuration feature in VxRail and explains why it was removed in VxRail 8.0.

VxRail 4.7 and 7.0 releases included Automated VxRail Switch Configuration. This feature was designed for SFS and was always enabled. It automatically configured VxRail networks on SmartFabric switches during VxRail deployment. However, this integration prevented the ability to support SFS with VxRail in some network environments, such as with a vSAN stretched cluster or VMware Cloud Foundation.

Starting in VxRail 7.0.400, the option to manually disable Automated VxRail Switch Configuration was added to the VxRail deployment wizard, as shown below.

Figure 1. VxRail 7.0.400 deployment wizard Resources page

This option is described in New Deployment Option for SmartFabric Services with VxRail, and is present in VxRail 7.0.400 and later VxRail 7.x releases. If Automated VxRail Switch Configuration is set to Disabled during VxRail deployment as recommended, SFS can be supported in other network environments.

Starting in VxRail 8.0, the Top-of-Rack (TOR) Switch section in the VxRail deployment wizard has been removed as shown below.

 

Figure 2. VxRail 8.0 deployment wizard Resources page

Automated VxRail Switch Configuration is automatically disabled in VxRail 8.0. Disabling this feature ensures that new SFS with VxRail installations are supported in other network environments.

Disabling automated switch configuration only affects SmartFabric switch automation during VxRail deployment or when adding VxRail nodes to an existing cluster after deployment. With the feature disabled, you will use the SFS UI to place VxRail node-connected ports in the correct networks instead of the automation.

You can still configure SmartFabric switches automatically after VxRail deployment by registering the VxRail vCenter Server with OpenManage Network Integration (OMNI). When registration is complete, networks created in vCenter continue to be automatically configured on the SmartFabric switches using OMNI.

The Dell Networking SmartFabric Services Deployment with VxRail 7.0.400 deployment guide still applies to VxRail 8.0 deployments. The only difference is that the Resources page of the VxRail deployment wizard will look like Figure 2 instead of Figure 1.

Resources

Dell Networking SmartFabric Services Deployment with VxRail 7.0.400

New Deployment Option for SmartFabric Services with VxRail

OpenManage Network Integration User Guide Release 3.2

Dell SmartFabric Services User Guide Release 10.5.4

  • VxRail
  • SmartFabric
  • SFS
  • SmartFabric Services

New Deployment Option for SmartFabric Services with VxRail

Jim Slaughter

Fri, 15 Sep 2023 20:06:45 -0000

|

Read Time: 0 minutes

VxRail 7.0.400 introduces an option that opens up new functionality for SmartFabric Services (SFS) switches with VxRail deployments. This new option allows you to choose whether automated SmartFabric switch configuration is enabled or disabled during VxRail deployment.

In VxRail versions earlier than 7.0.400, automated SmartFabric switch configuration during deployment is always enabled, and there is no option to disable it. The automation configures VxRail networks on the SmartFabric switches during deployment, but it limits the ability to support SFS with VxRail in a wide range of network environments after deployment. This limitation exists because internal settings specific to SFS are made to VxRail Manager, which restricts the ability to expand the network or add additional products in some environments.

Starting with VxRail 7.0.400, the VxRail Deployment Wizard provides an option to set Automated VxRail Switch Configuration to Disabled. Dell Technologies recommends disabling the automation, as shown below. 

With automated switch configuration disabled, the VxRail Deployment wizard skips the switch configuration step. The advantage of bypassing this automation during deployment is that SFS and VxRail can now be supported across a wide array of network environments after deployment. It will also simplify the VxRail upgrade process in the future.

Disabling automated switch configuration only affects SmartFabric switch automation during VxRail deployment or when adding nodes to an existing VxRail cluster after deployment. The SFS UI is used to place VxRail node-connected ports in the correct networks instead of the automation.

Automated SmartFabric switch configuration after VxRail is deployed is still supported by registering the VxRail vCenter Server with OpenManage Network Integration (OMNI). When registration is done, networks that are created in vCenter continue to be automatically configured on the SmartFabric switches by OMNI.

In summary, for new deployments with SFS and VxRail 7.0.400 or later, it is recommended that you disable the automated switch configuration during deployment. This action will give you more flexibility when expanding your network in the future.

Resources

Dell Networking SmartFabric Services Deployment with VxRail 7.0.400

OpenManage Network Integration User Guide Release 3.2

Dell SmartFabric Services User Guide Release 10.5.4

  • HCI
  • VMware
  • VxRail
  • life cycle management
  • CloudIQ

Impacting the World, One Happy Customer at a Time

Robert Percy

Fri, 25 Aug 2023 21:45:55 -0000

|

Read Time: 0 minutes

As I get back from a lovely week of relaxing and resetting by the beach in Mauritius, I have had time to realize that sometimes the most important thing to do is to find the time to recharge. I have come back with a burst of energy, ready to get back to doing what I love most – spending time helping customers build simple infrastructure solutions.

As a core member of the Dell Technologies infrastructure solutions sales team, I have come to realize that our core job is to solve problems. All businesses today are out there solving customer problems and challenges, either by producing goods or delivering services. Most businesses today have a lot of behind-the-scenes challenges to overcome to be able to help their customers.  

Technology plays a big part in everything we do today, and IT teams must be on top of their game all the time to ensure businesses can continue to focus on what’s important – Customers!

I have had the opportunity to work with a non-profit organization that is literally making the world a better place for everyone globally. The work they do is non-stop and it is not easy. Their work requires an immaculate IT setup that needs to be always online, secure, and able to scale for their bespoke applications. Their current setup has gone through some major changes in terms of their applications and tracking methodologies. They had been experiencing multiple information and data silos, complexity in infrastructure management, and data security issues. In helping them find a way to simplify their IT, we too played a part in making the world a better place.

We had a few conversations and agreed that we needed to build the entire infrastructure on one platform. In this case, VMware was the unanimous choice. The two biggest challenges were to eliminate silos and to simplify management. HCI was the best way to achieve both, and we chose VxRail HCI systems. This solution went on to deliver a consistent platform across the edge, core, and cloud. It has proven to be a solution that can manage all of the compute, storage, and networking resources through a single pane of glass with vCenter -- all under a single support umbrella for all of the hardware and software deployed.

Lifecycle management with BIOS, firmware, and software updates and upgrades can be a painful and time-consuming process. But what if I told you we can automate these tasks with one-click upgrades, one node at a time without any downtime – how does that sound? When I asked, the CTO was happy, and the IT manager was happier. All those investments in our R&D labs with over 100 people working on resolving some of the most common challenges -- like upgrades for IT teams around the world -- now made sense.  

What made the solution choice easier was the ability to remotely monitor it from anywhere in the world with CloudIQ, and its ability to scale and grow, not just on premises but in the cloud -- any cloud, at any time.

Did we manage to resolve their IT challenges? Yes -- with a simplified solution like VxRail that provides performance, management simplicity, automation of tasks, and the flexibility to grow and scale. The customer was delighted, knowing full well that they now have an infrastructure setup that helps them do all their work consistently and that lets them expand into different geographic regions as well.

At the end of it all, I certainly enjoyed my time off after helping build an infrastructure solution for an organization doing something so meaningful. While I was away, I did get a postcard from the IT manager, who was able to take his wonderful family out for a nice little vacation, knowing that he could easily manage anything he needed to from anywhere in the world.

On to helping our next customer get the same peace of mind so they can leave their mark on the world too.

Author: Manish Bajaj

  • VxRail
  • CloudIQ
  • systems management
  • multi-cluster

Empowering Cloud-based Multi-cluster Management Using VxRail with CloudIQ

Dylan Jackson

Fri, 18 Aug 2023 23:01:25 -0000

|

Read Time: 0 minutes

Introduction

In today's digital landscape, organizations across various industries are generating and accumulating staggering amounts of data. To harness the potential of this explosive data growth, businesses heavily rely on cluster computing. Managing these clusters effectively is critical to optimizing performance and ensuring continuous operations. VxRail clusters provide massive amounts of automation right out of the box, which helps administrators accomplish significantly more.

But as the number of clusters grows, a centralized management interface becomes more and more valuable. That’s why I wanted to talk to you about CloudIQ today and introduce three exciting new features:

  • Support for 2-node and stretched vSAN clusters
  • DPU monitoring
  • Performance anomaly detection with historical seasonality

These advancements revolutionize cluster management because they offer enhanced efficiency, flexibility, and performance to meet the evolving needs of modern enterprises.

The evolution of cloud-based cluster management

Traditional on-premises cluster management frequently presents challenges with hardware maintenance, scalability issues, and costly infrastructure investments. Cluster management with CloudIQ has proved to be a game-changer, allowing businesses to centralize the management of hardware and infrastructure to a single cloud-based tool.

By combining VxRail automation with CloudIQ, enterprises can focus on optimizing their applications and workflows while more easily handling cluster provisioning, scaling, and maintenance. This paradigm shift not only improves resource allocation and utilization but also enables organizations to adapt more quickly to dynamic workloads.

2-Node and stretched vSAN cluster support for CloudIQ

In response to diverse business needs, CloudIQ now supports two additional cluster deployment types: the 2-node and stretched vSAN clusters.

2-Node Clusters

Traditionally, clusters required a minimum of three nodes to maintain high availability, because having an odd number of nodes helped avoid split-brain scenarios. However, 2-node clusters can also address this challenge and ensure fault tolerance and high availability.

2-node clusters use advanced quorum mechanisms, allowing them to make decisions efficiently despite the lack of a third node. The nodes in the cluster communicate with each other and decide on quorum based on various factors, such as network connectivity, storage health, and other cluster components. This setup significantly reduces infrastructure costs and is ideal for small to medium-sized businesses that require robust cluster management without the expense of additional nodes. 2-node clusters appear in the same location in CloudIQ as the rest of your clusters: select Systems under the Monitor tab, then select the HCI inventory option, and your enrolled VxRail clusters will populate there.

Stretched vSAN clusters

Businesses often want to deploy clusters across multiple geographically distributed data centers to improve disaster recovery and enhance business continuity. Stretched VxRail clusters with vSAN provide an excellent solution by extending vSAN technology across multiple data centers.

Key benefits of stretched VxRail clusters with vSAN include:

  1. Disaster Recovery: By replicating data between data centers, these clusters protect against site-wide outages, ensuring that operations continue seamlessly in case of a data center failure.
  2. Load Balancing: Stretched clusters intelligently distribute workloads across different data centers, based on demand, to optimize resource utilization and performance.
  3. Data Locality: Organizations can maintain data locality to comply with regional data regulations and reduce data access latency for end-users across different geographical regions.

Data Processing Unit Reporting

In a clustered environment, data processing units (DPUs) can become critical for efficient resource management. DPUs are hardware accelerators designed to offload specific data processing tasks, such as NSX networking services and other functions handled by the vSphere Distributed Services Engine, to enhance overall cluster performance for specific workloads.

The Data Processing Unit Reporting feature provides insight into the details of DPUs within the cluster. Cluster administrators can view the hardware information for each DPU, including the host name of each server with a DPU, the model, the OS version running on the host, the slot the DPU is installed in, each DPU’s serial number, and the manufacturer.

Performance Anomaly Detection

Unanticipated performance fluctuations can significantly impact application responsiveness and overall user experience. To address this concern, CloudIQ now integrates Performance Anomaly Detection—an intelligent monitoring feature that proactively identifies performance issues as they develop.

How does Performance Anomaly Detection work?

This feature uses machine learning algorithms to establish baseline performance patterns for various cluster metrics, including CPU utilization, memory utilization, power consumption, and networking.


When configured, the system continuously monitors real-time performance metrics and compares them to the baseline.

When CloudIQ detects any deviations from the expected behavior, it can raise alerts, enabling administrators to investigate and rectify potential problems immediately. This proactive approach ensures that performance issues are addressed before they affect critical operations, reducing downtime and enhancing user satisfaction.

Conclusion

As the demand for efficient data processing and storage continues to grow, cloud-based cluster management becomes vital for modern enterprises. The introduction of 2-node and stretched vSAN cluster support, data processing unit reporting, and performance anomaly detection takes cluster management to new heights. By leveraging the cutting-edge features of CloudIQ with VxRail, business organizations can unlock unparalleled efficiency, scalability, and performance, gaining a competitive advantage in today's fast-paced digital landscape. Embracing cloud-based cluster management with CloudIQ and its new features will undoubtedly pave the way for a bright and productive future for organizations and industries of all sizes.

Author: Dylan Jackson

  • vSphere
  • VxRail
  • vSAN
  • security
  • RecoverPoint
  • encryption
  • data center
  • STIG
  • cryptography
  • DoD

Enhancing your Data Center Security with VxRail

Olatunji Adeyeye

Fri, 28 Jul 2023 22:16:57 -0000

|

Read Time: 0 minutes

In addition to providing operational efficiency, VxRail fundamentally sets up a secure foundation for your organization’s data center. This blog post provides a high-level overview of VxRail security. For a complete understanding of VxRail security features, read the VxRail Comprehensive Security by Design white paper or view the three-part video series VxRail Security: A Secure Foundation for your Data Center:

The white paper and videos provide a complete picture of how security begins with VxRail design and extends through VxRail deployment in your IT infrastructure.

As an introduction to what you can expect to learn from the videos, here’s the first of the three:


The integrated components of VxRail are designed to help secure your data center, starting from the PowerEdge server layer running on Intel or AMD processors, to the VMware vSphere (ESXi) layer integrated with vSAN for virtual storage, to the VxRail HCI system software layer that provides life cycle management through VxRail Manager (which is accessed through the vCenter plug-in), and to other add-ons from Dell and VMware, such as RecoverPoint for Virtual Machines. The video series and security by design white paper provide information about data protection and how VxRail creates a stable environment to ensure business continuity.

VxRail is engineered to employ the protect, detect, and recover functions of the NIST framework to boost cyber resiliency. VxRail includes integrated features to protect the VxRail BIOS, firmware, and your organization’s data stored in vSAN. The VxRail system built on the PowerEdge server has a system lockdown feature that prevents configuration changes that may lead to security vulnerabilities. The PowerEdge hardware verifies the integrity of software update files moving through the integrated stack using the embedded UEFI Secure Boot feature, which ensures that the files are from vetted sources.

Furthermore, the VxRail nodes are protected through Intel’s Trusted Execution Technology (TXT). TXT prevents the introduction of malware into the VxRail nodes by verifying the cryptographically signed PowerEdge server firmware, BIOS, and hypervisor version. Also, VxRail devices deployed in open environments are protected using bezel locks, which prevent the introduction of malware-infected USB drives by allowing the ports to be disabled and enabled. In addition, VxRail satellite nodes are protected from theft and the compromise of data privacy by self-encrypting drives (SEDs).

To secure your organization’s workloads, VxRail is designed to protect data and VMs using VxRail Manager, VMware vSphere, and vSAN. Data in use, data at rest, and data in transit are encrypted in accordance with FIPS 140-2 Level 1. The encryption keys are carefully stored using Dell BSAFE Crypto-C Micro Edition and two FIPS-validated cryptographic modules using AES 256-bit encryption.

Dell provides hardening packages for your VxRail using the Security Requirements Guide published by the Defense Information Systems Agency (DISA) for customers seeking additional security that meets their industry or sector requirements. For more information about hardening your IT infrastructure, see the resource links at the end of this post.

If you have not already watched the VxRail security video series or read the white paper, I hope this short summary of features gives you some insight into the tremendous features of VxRail security. To learn more about how VxRail provides a secure foundation for your data center through a carefully vetted supply chain, secure development life cycle, and many other features provided by VxRail, see the following resources:

Author:

Olatunji Adeyeye, Product Manager

  • VxRail
  • life cycle management
  • LCM
  • upgrades

Take Advantage of the Latest Enhancements to VxRail Life Cycle Management

Daniel Chiu

Tue, 20 Jun 2023 16:52:40 -0000

|

Read Time: 0 minutes

Providing the best life cycle management experience for HCI is not easy, nor is it a one-time job for which we can pat ourselves on the back and move on to the next endeavor. It’s a continuous cycle that incorporates feature enhancements and improvements based on your feedback. While we know that improving VxRail LCM is vitally important for us to continue to deliver differentiating value to you, it is just as important that your clusters continue to run the latest software to realize the benefits. In this post, I’ll provide a deep dive into the LCM enhancements introduced in the past few software releases so you can consider the added functionality that you can benefit from.

Focus areas for improved LCM

Going back into last year, we prioritized four focus areas to improve your LCM experience. While the value is incremental when you look at just a single software release, this post provides a holistic perspective of how VxRail has improved upon LCM over time to further increase the efficiencies that you enjoy today.

  • Based on data that we have gathered on reported cluster update failures, we found that almost half of the update failures occurred because a node failed to enter maintenance mode. Effectively addressing this issue can potentially be the most impactful benefit for our customer base.
  • As the VxRail footprint expands beyond the data center, resource constraints such as network bandwidth and Internet connectivity can become significant hurdles for effectively deploying infrastructure solutions at the edge. Recent enhancements in VxRail focused on creating space-efficient LCM bundle transfers.
  • Doing more with less is a common thread across all organizations and industries. In the context of VxRail LCM, we’re looking to further simplify your cluster update planning experience by putting more actionable information at your fingertips.
  • While no product, including VxRail, can avoid a failure from ever happening, VxRail looks to put you in a better position to protect your cluster and quickly recover from a failure.

Figure 1. 12+ month recap of LCM enhancements

Now that you know about the four focus areas, let’s get into the details about the actual improvements that have been introduced in the last 12+ months.

Mitigating maintenance mode failures

In our investigation, we were able to identify three major issues that caused a cluster update to fail because a node did not enter maintenance mode:

  • The VMware Tools installer was still mounted on a VM.
  • VMs were pinned to a host due to an existing policy.
  • vSAN resynchronization was taking too long and exceeded the timeout value.

In VxRail 7.0.350, prechecks were added for the first two issues. When a pre-update health check is run, these new VxRail prechecks identify those issues if they exist and alert you in the report so that you can remedy the issue before initiating a cluster update. In the same release, the timeout value to wait for a node to enter maintenance mode was doubled to reduce the chance that vSAN resynchronization does not finish in time.  

Next, the cluster update capability set was also enhanced to address a cluster update failure due to a node not entering maintenance mode as expected. With the combination of enhancements made to cluster update error handling and cluster update retry operations in VxRail 7.0.350 and VxRail 7.0.400 respectively, VxRail is now able to handle this scenario much more efficiently. If a node fails to enter maintenance mode, the cluster update operation now skips the node and continues on to the next node instead of failing out of the operation altogether. Upon running the cluster update retry operation, VxRail can automatically detect which node requires an update instead of updating the entire cluster.

Space-efficient LCM bundle transfers

The next area of improvement addressed reducing the package sizes of the LCM bundles. A smaller package size can be very beneficial for bandwidth-constrained environments such as edge locations.  

VxRail 7.0.350 introduced the capability for you to designate a local Windows client at your data center to be the central repository and distributor of LCM bundles for remote VxRail clusters that are not connected to the Internet. Using a separate PowerShell cmdlet installed on the client, you can initiate space-efficient bundle transfers from the client to your remote clusters on your internal network. The transfer operation automatically scans the manifest of the Continuously Validated State (VxRail software version) running on the VxRail cluster and determines the delta compared to the requested LCM bundle. Instead of transferring the full LCM bundle, which is greater than 10 GB in size, it only packages the necessary installation files. A much smaller LCM bundle can cut down on bandwidth usage and transfer times.

Figure 2. Central repository and distributor of LCM bundles to remote VxRail clusters

In VxRail 7.0.450, space-efficient LCM bundles can also be created when VxRail Manager downloads an LCM bundle from the Dell cloud. This feature requires that the VxRail Manager be connected to the Dell cloud.

Simplified cluster update planning experience

The next set of LCM enhancements is centered around providing you with critical insights to maximize the probability of a successful cluster update and for the information to be up-to-date and readily available whenever you need it.

Since VxRail 7.0.400, the pre-update health check includes a RecoverPoint for VMs compatibility precheck to detect whether the installed RecoverPoint for VMs version is compatible with the target VxRail software version.

VxRail 7.0.450 increased the frequency at which the VxRail prechecks file is updated. The increased frequency ensures that any additional prechecks added by engineering because of technology changes or new learnings from support cases are incorporated into the VxRail prechecks file that is run against your cluster. When your cluster is connected to the Dell cloud, VxRail Manager periodically scans for the latest VxRail prechecks file.

VxRail 7.0.450 also automated the health check to run every 24 hours. The combination of automated VxRail prechecks file scans and health check runs ensure that you have access to an up-to-date health check report once you log in to VxRail Manager.

VxRail 7.0.450 also further simplified your cluster update planning experience by consolidating into a single, exportable report all the necessary insights about your cluster to help you decide whether to move forward with a cluster update. This update advisor report has four sections:

  • VxRail Update Advisor Report Summary includes the current VxRail version running on the cluster, the target (or selected) VxRail version, the estimated duration of a cluster update, a link to the release notes, and information about your service VM backups.

Figure 3. Update advisor report—summary report

  • VxRail Components shows which components need to be updated to get to the target VxRail version. The table includes the current version and target version for each component.

Figure 4. Update advisor report—components report

  • VxRail Precheck is the previously mentioned pre-update health check report, inclusive of all the enhancements discussed.

Figure 5. Update advisor report—LCM precheck report

  • VxRail Custom Components is a report that highlights user-managed components installed on the cluster. You should consider these custom components when deciding whether to schedule a cluster update.

Figure 6. Update advisor report—custom components report

When VxRail Manager is connected to the Dell cloud, it automatically scans for new update paths. Once a new update path is detected, VxRail Manager downloads a lightweight manifest file that contains all the information needed to produce the update advisor report. The report is automatically generated every 24 hours. This feature is designed to streamline the availability of up-to-date critical insights to help you make an informed decision about a cluster update.
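For readers who script against VxRail Manager, something like the following Python sketch could retrieve the latest report. The endpoint path, credentials, and response fields here are assumptions for illustration only; check the VxRail API documentation for the actual interface:

```python
# Hypothetical sketch of pulling the latest update advisor report.
import requests

VXM = "https://vxm.example.local"   # placeholder VxRail Manager address

def latest_update_advisor_report():
    resp = requests.get(
        f"{VXM}/rest/vxm/v1/lcm/update-advisor-report",   # assumed, illustrative path
        auth=("administrator@vsphere.local", "password"), # placeholder credentials
        verify=False,  # lab-only; use CA-signed certificates in production
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

report = latest_update_advisor_report()
print(report.get("targetVersion"), report.get("estimatedDuration"))
```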

Serviceability

The last set of LCM enhancements that I will cover is around serviceability. While many of the features discussed earlier are meant to be proactive and to prevent failures, there are times when failures can still occur. Being able to efficiently troubleshoot the issues is critically important to getting your clusters back up and running quickly.

In VxRail 7.0.410, the logging capability was enhanced in a couple of areas so that the Dell Support team can pinpoint issues faster. When a pre-update health check identifies failures, the offending host is now recorded. If a node does fail to enter maintenance mode, the logs now capture the reason for the failure.

In VxRail 7.0.450, we automated the backup of the VxRail Manager VM and vCenter Server VM (if it’s VxRail managed). Now you can easily back up your service VMs before updating a cluster.  

Figure 7. Automate VxRail backup of service VMs before a cluster update

This feature is also integrated into the update advisor report, where you can see the latest backup on the report summary and click a link to go to the backup page to create another backup.

Value of VxRail life cycle management

If life cycle management is one of the major reasons that you chose to invest in VxRail, our continuous improvements to life cycle management should be a compelling reason to keep your clusters running the latest software. VxRail life cycle management continues to provide significant value by addressing the challenges that your organization faces today.

Figure 8. VxRail benefits (data from "The Business Value of Dell VxRail HCI," April 2023, IDC)

An IDC study sponsored by Dell Technologies, The Business Value of Dell VxRail HCI, found that the value VxRail LCM provides to organizations is significant and compelling. The results of this study are major proof points for why you should continue investing in VxRail to mitigate these challenges:

  • Overburdened IT staff. The automated LCM and the mechanisms in VxRail that maintain cluster integrity throughout the life of the cluster drive significant efficiencies for your IT infrastructure team.
  • Unplanned outages that lead to significant disruption to businesses. The benefit of pretested and prevalidated sets of drivers, firmware, and software, which we call VxRail Continuously Validated States, is a significant reduction in risk as you update your HCI cluster from one version to the next.
  • Time spent deploying infrastructure, which slows the pace at which your business can innovate. The automation and integrated validation checks speed up deployment times without compromising security.

Conclusion

The emphasis that we put on improving your LCM experience is extraordinary, and we encourage you to maximize your investment in VxRail. Updating to the latest VxRail software release gives you access to the many LCM enhancements that can drive greater efficiencies in your organization. And with VxRail Continuously Validated States, you can safely get to the next software release and the ones that follow.

Resources

For more information about the features in VxRail 7.0.400, check out this blog post:
https://infohub.delltechnologies.com/p/learn-about-the-latest-major-vxrail-software-release-vxrail-7-0-400/

For more information about the features in VxRail 7.0.450, see this post:
https://infohub.delltechnologies.com/p/learn-about-the-latest-major-vxrail-software-release-vxrail-7-0-450/

If you want to learn about the latest in the VxRail portfolio, you can check the VxRail page on the Dell Technologies website:
https://www.dell.com/en-us/dt/converged-infrastructure/vxrail/index.htm

Author: Daniel Chiu, VxRail Technical Marketing
https://www.linkedin.com/in/daniel-chiu-8422287/

  • VxRail
  • VCF
  • VCF on VxRail
  • Cloud Foundation on VxRail
  • VCF on VxRail Releases

Announcing VMware Cloud Foundation 5.0 on Dell VxRail 8.0.100

Karol Boguniewicz Karol Boguniewicz

Thu, 22 Jun 2023 13:00:59 -0000

|

Read Time: 0 minutes

A more flexible and scalable hybrid cloud platform with simpler upgrades from previous releases

The latest release of the co-engineered turnkey hybrid cloud platform is now available, and I wanted to take this great opportunity to discuss its enhancements.

Many new features are included in this major release, including support for the latest VCF and VxRail software component versions based on the latest vSphere 8.0 U1 virtualization platform generation, and more. Read on for the details!

In-place upgrade lifecycle management enhancements

Support for automated in-place upgrades from VCF 4.3.x and higher to VCF 5.0

This is the most significant feature our customers have been waiting for. In the past, due to significant architectural changes between major VCF releases and their SDDC components (such as NSX), a migration was required to move from one major version to the next. (Moving from VCF 4.x to VCF 5.x is considered a major version upgrade.) In this release, this type of upgrade is now drastically improved.

After the SDDC Manager has been upgraded to version 5.0 (by downloading the latest SDDC Manager update bundle and performing an in-place, automated, SDDC Manager-orchestrated LCM update operation), an administrator can follow the new built-in SDDC Manager in-place upgrade workflow. The workflow is designed to assist in upgrading the environment without needing any migrations. Domains and clusters can be upgraded sequentially or in parallel, which reduces the number and duration of maintenance windows and allows administrators to complete an upgrade in less time. Also, VI domain skip-level upgrade support allows customers running VCF 4.3.x or VCF 4.4.x BOMs in their domains to streamline their upgrade path to VCF 5.0 by skipping the intermediary VCF 4.4.x and 4.5.x versions, respectively. All of this is performed automatically as part of the native SDDC Manager LCM workflows.

What does this look like from the VCF on VxRail administrator’s perspective? The administrator is first notified that a new SDDC Manager 5.0 upgrade is available. Administrators will be guided first to update their SDDC Manager instance to version 5.0. With SDDC Manager 5.0 in place, administrators can take advantage of the new enhancements which streamline the in-place upgrade process that can be used for the remaining components in the environment. These enhancements follow VMware best practices, reduce risk, and allow administrators to upgrade the full stack of the platform in a staged manner. These enhancements include:

  • Context aware prechecks
  • vRealize Suite prechecks
  • Config drift awareness
  • vCenter Server migration workflow
  • Licensing update workflow

The following image highlights part of the new upgrade experience from the SDDC Manager UI. First, on the Updates tab for a given domain, we can see the availability of the upgrade from VCF 4.5.1 to VCF 5.0.0 on VxRail 8.0.100. (Note: In this example, the first upgrade bundle for SDDC Manager 5.0 was already applied.)

When the administrator clicks View Bundles, SDDC Manager displays a high-level workflow that highlights the upgrade bundles that would be applied, and in which order.

To see the in-place upgrade in action, check out the following demo:

Now let’s dive a little deeper into the upgrade workflow steps. The following diagram depicts the end-to-end workflow for performing an in-place LCM upgrade from VCF 4.3.x/4.4.x/4.5.x to VCF 5.0 for the management domain.

The in-place upgrade workflow for the management domain consists of the following six steps:

  1. Plan and prepare by ensuring all important prerequisites are met (for example, the minimum supported VCF on VxRail version for an in-place upgrade is validated, in-place upgrade supported topologies are being used, and so on).
  2. Run an update precheck and resolve any issues before starting the upgrade process.
  3. Download the VMware Cloud Foundation and VxRail Upgrade Bundles from the Online Depot within SDDC Manager using a MyVMware account and a Dell support online depot account respectively.
  4. Upgrade components using the automated guided workflow, including SDDC Manager, NSX-T Data Center, vCenter Server for VCF, and VxRail hosts.
  5. Apply configuration drifts, which capture required configuration changes between release versions.
  6. When the upgrade is completed, update component licensing using the built-in SDDC Manager workflow (only applicable for VCF instances deployed using perpetual licensing).

Upgrading workload domains follows a similar workflow.
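As a conceptual sketch of the staged flow in the six steps above (not the actual SDDC Manager implementation), the following Python outline gates each component upgrade on a successful precheck. The component order and the stub functions are illustrative:

```python
# Conceptual sketch of precheck-gated, staged upgrades for the management domain.

UPGRADE_ORDER = ["SDDC Manager", "NSX-T Data Center", "vCenter Server", "VxRail hosts"]

def run_precheck(component: str) -> bool:
    return True   # stub: a real precheck validates health and compatibility

def apply_bundle(component: str) -> None:
    print(f"applying upgrade bundle to {component} ...")

for component in UPGRADE_ORDER:
    if not run_precheck(component):
        raise SystemExit(f"resolve precheck failures on {component} before continuing")
    apply_bundle(component)

# Remaining steps from the workflow above:
print("apply configuration drifts, then update licensing (perpetual licenses only)")
```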

If performed manually, the in-place upgrade process to VCF 5.0 on VxRail 8.0.100 from previous releases would be potentially error-prone and time-consuming. The guided, simplified, and automated experience now provided in SDDC Manager 5.0 greatly reduces the effort and risk for customers by enabling them to perform this operation on their own in a fully controlled and automated manner, providing a much better user experience and more value.

SDDC Manager context aware prechecks

Keeping a large-scale cloud environment in a healthy, well-managed state is very important to achieve the desired service levels and increase the success rate of LCM operations. In SDDC Manager 5.0, prechecks have been further enhanced and are now context aware. But what does this mean?

Before performing the upgrade, administrators can choose to run a precheck against a specific VCF release ("Target Version") or run a "General Upgrade Readiness" precheck. Each type of precheck allows the administrator to select the specific VCF on VxRail objects to run the precheck on: an entire domain, a VxRail cluster, or even an individual SDDC software component such as NSX or the vRealize/Aria Suite components. Running a precheck at the per-VxRail-cluster level, for example, can be useful for large workload domains configured with multiple clusters, because it can reduce planned maintenance windows by updating components of the domain separately.

But what is the difference between the “Target Version” and “General Upgrade Readiness” precheck types? Let me explain:

  • Target Version precheck - Runs the prechecks relevant to the components that change between the source and target VCF on VxRail release. (Note that the drop-down in the SDDC Manager UI will only show target versions from VCF 5.x on VxRail 8.x after the SDDC Manager has been updated to 5.0.) This reduces the risk of issues during the upgrade to the target VCF release.
  • General Upgrade Readiness precheck - Can be run at any time to plan and assess upgrade readiness without triggering an upgrade. It can also be run periodically as a health check on a given SDDC component.
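To make the distinction between the two precheck types concrete, here is a hypothetical Python sketch of how the two requests might be shaped. The payload structure below is an assumption for readability, not the documented SDDC Manager API schema:

```python
# Hypothetical illustration of the two precheck modes.
import json

def build_precheck_request(scope_id: str, target_version: str | None = None) -> dict:
    request = {"resources": [{"resourceId": scope_id}]}   # domain, cluster, component...
    if target_version:
        # "Target Version" mode: checks specific to the source->target hop
        request["targetVersion"] = target_version
    # Otherwise: "General Upgrade Readiness" mode, runnable at any time
    return request

print(json.dumps(build_precheck_request("wld-01", "5.0.0.0"), indent=2))
print(json.dumps(build_precheck_request("cluster-02"), indent=2))
```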

The following screenshot shows what this feature looks like from the system administrator perspective in the SDDC Manager UI:

Platform security and scalability enhancements

Isolated domains with individual SSO

Another significant new feature I’d like to highlight is the introduction of Isolated workload domains. This has a significant impact on both the security and scalability of the platform.

In the past, VMware Cloud Foundation 4.x deployments by design have been configured to use a single SSO instance shared between the management domain and all VI workload domains (WLDs). All WLDs’ vCenter Servers are connected to each other using Enhanced Linked Mode (ELM). After a user is logged into SDDC Manager, ELM provides seamless access to all the products in the stack without being challenged to authenticate again.

VCF 5.0 on VxRail 8.0.100 deployments allow administrators to configure new workload domains using separate SSO instances. These are known as Isolated domains. This capability can be very useful, especially for Managed Service Providers (MSPs) who can allocate Isolated workload domains to different tenants with their own SSO domains for better security and separation between the tenant environments. Each Isolated SSO domain within VCF 5.0 on VxRail 8.0.100 is also configured with its own NSX instance.

As a positive side effect of this new design, the maximum number of supported domains per VCF on VxRail instance has increased to 25 (this includes the management domain and assumes all workload domains are deployed as isolated domains). This scalability gain is possible because Isolated domains are not joined through ELM to the management SSO domain, so the maximum of 15 vCenter Servers in a single ELM configuration no longer caps the total. In other words, increasing the security and separation between workload domains can also increase the overall scalability of the VCF on VxRail cloud platform.

The following diagram illustrates how customers can increase the scalability of the VCF on VxRail platform by adding isolated domains with their dedicated SSO:

What does this new feature look like from the VCF on VxRail administrator’s perspective?

When creating a new workload domain, there’s a new option in the UI wizard allowing either to join the new WLD into the management SSO domain or create a new SSO domain:

After the SSO domain is created, its information is shown in the workload domain summary screen:

General LCM updates

VxRail accurate versioning support and SDDC Manager ‘Release Versions’ UI and API enhancements

These two features should be discussed together. Beginning with VCF 5.0 on VxRail 8.0.100 and higher, enhancements were made to the SDDC Manager LCM service that enable more granular tracking of which current and previous VxRail versions are compatible with current and previous VCF versions. This makes VCF on VxRail more flexible by supporting different VxRail versions within a given VCF release. It allows admins to apply and track asynchronous VxRail release patches outside of the 1:1 mapped, fully compatible VCF on VxRail release, rather than waiting for that release to become available. Information about available and supported release versions for VCF and VxRail is integrated into the SDDC Manager UI and API.

Flexible WLD target versions

VCF 5.0 on VxRail 8.0.100 introduces the ability for each workload domain to have different versions as far back as N-2 releases, where N is the current version on the management domain. With this new flexibility, VCF on VxRail administrators are not forced to upgrade workload domain versions to match the management domain immediately. In the context of VCF 5.0 on VxRail 8.0.100, this can help admins plan upgrades over a long period of time when maintenance windows are tight.

SDDC Manager config drift awareness

Each VMware Cloud Foundation release introduces several new features and configuration changes to its underlying components. Update bundles contain these configuration changes to ensure an upgraded VCF on VxRail instance will function like a greenfield deployment of the same version. Configuration drift awareness allows administrators to view parameters and configuration changes as part of the upgrade. An example of configuration drift is adding a new service account or ESXi lockdown enhancement. This added visibility helps customers better understand new features and capabilities and their impact on their deployments.

The following screenshot shows how this new feature appears to the administrator of the platform:

SDDC Manager prechecks for vRealize/Aria Suite component versions

SDDC Manager 5.0 allows administrators to run a precheck for vRealize/VMware Aria Suite component compatibility. The vRealize/Aria Suite component precheck is run before upgrading core SDDC components (NSX, vCenter Server, and ESXi) to a newer VCF target release, and can be run from VCF 4.3.x on VxRail 7.x and above. The precheck will verify if all existing vRealize/Aria Suite components will be compatible with core SDDC components of a newer VCF target release by checking them against the VMware Product Interoperability Matrix.  

General security updates

Enhanced certificate management

VCF 5.0 on VxRail 8.0.100 contains improved workflows that orchestrate the process of configuring Certificate Authorities and Certificate Signing Requests. Administrators can better manage certificates in VMware Cloud Foundation, with improved certificate upload and installation, and new workflows to ensure certificate validity, trust, and proper installation. These new workflows help to reduce configuration time and minimize configuration errors.

Storage updates

Support for NVMe over TCP connected supplemental storage

Supplemental storage can be used to add storage capacity to any domain or cluster within VCF, including the management domain. It is configured as a Day 2 operation. What’s new in VCF 5.0 on VxRail 8.0.100 is the support for the supplemental storage to be connected with the NVMe over TCP protocol.

Administrators can benefit from using NVMe over TCP storage in a standard Ethernet-based networking environment. NVMe over TCP can be more cost-efficient than NVMe over FC because it eliminates the need to deploy and manage a Fibre Channel fabric.

Operations and serviceability updates

VCF+ enhancements

VMware Cloud Foundation+ has been enhanced for the VCF 5.0 release, allowing greater scale and integrated lifecycle management. First, scalability has increased: administrators can now connect up to eight domains per VCF instance (including the management domain) to the VMware Cloud portal. Second, updates to the Lifecycle Notification Service within the VMware Cloud portal provide visibility of pending updates to any component within the VCF+ inventory. Administrators can initiate updates through the VCF+ Lifecycle Management Notification Service, which connects back to the specific SDDC Manager instance to be updated. From there, administrators can use familiar SDDC Manager prechecks and workflows to update their environment.

VxRail hardware platform updates

Support for single socket 15G VxRail P670F

A new VxRail hardware platform is now supported, providing customers more flexibility and choice. The single-socket VxRail P670F, a performance-focused platform, is now supported in VCF on VxRail deployments and can offer customers savings on software licensing in specific scenarios.

Other asynchronous release related updates

VCF Async Patch Tool 1.1.0.1 release

While not directly tied to the VCF 5.0 on VxRail 8.0.100 release, VMware has also released the latest version of the VCF Async Patch Tool. This latest version now supports applying patches to VCF 5.0 on VxRail 8.0.100 environments.

Summary

VMware Cloud Foundation 5.0 on Dell VxRail 8.0.100 is a new major platform release based on the latest generation of VMware’s vSphere 8 hypervisor. It provides several exciting new capabilities, especially around automated upgrades and lifecycle management. This is the first major release that provides guided, simplified upgrades between the major releases directly in the SDDC Manager UI, offering a much better experience and more value for customers.

All of this makes the new VCF on VxRail release a more flexible and scalable hybrid cloud platform, with simpler upgrades from previous releases, and lays the foundation for even more beneficial features to come.

Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing

Twitter: @cl0udguide

Additional Resources:

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on InfoHub

VCF on VxRail Interactive Demo

VxRail Videos

 

  • VMware
  • VxRail
  • Kubernetes
  • Tanzu

Deploying VMware Tanzu for Kubernetes Operations on Dell VxRail: Now for the Multicloud

Jason Marques Jason Marques

Wed, 17 May 2023 15:56:43 -0000

|

Read Time: 0 minutes

VMware Tanzu for Kubernetes Operations (TKO) on Dell VxRail is a jointly validated Dell and VMware reference architecture solution designed to streamline Kubernetes use for the enterprise. The latest version has been extended to showcase multicloud application deployment and operations use cases. Read on for more details.

VMware Tanzu and Dell VxRail joint solutions

VMware TKO on Dell VxRail is yet another example of the strong partnership and joint development efforts that Dell and VMware continue to deliver on behalf of our joint customers so they can find success in their infrastructure modernization and digital transformation efforts. It is an addition to an existing portfolio of jointly developed and/or engineered products and reference architecture solutions that are built upon VxRail as the foundation to help customers accelerate and simplify their Kubernetes adoption.

Figure 1 highlights the joint VMware Tanzu and Dell VxRail offerings available today. Each is specifically designed to meet customers where they are in their journey to Kubernetes adoption.

Figure 1.  Joint VMware Tanzu and Dell VxRail solutions

VMware TKO on VxRail

VMware Tanzu For Kubernetes Operations on Dell VxRail reference architecture updates

This latest release of the jointly developed reference architecture builds off the first release. To learn more about what TKO on VxRail is and our objective for jointly developing it, take a look at this blog post introducing its first iteration.

Okay… Now that you are all caught up, let’s dive into what is new in this latest version of the reference architecture.

Additional TKO multicloud components

Let’s dive a bit deeper and highlight what we see as the essential building blocks for your cloud infrastructure transformation that are included in the TKO edition of Tanzu.

First, you’re going to need a consistent Kubernetes runtime like Tanzu Kubernetes Grid (TKG) so you can manage and upgrade clusters consistently as you move to a multicloud Kubernetes environment.

Next, you’re going to need some way to manage your platform and having a management plane like Tanzu Mission Control (TMC) that provides centralized visibility and control over your platform will be critical to helping you roll this out to distributed teams.

Also, having platform-wide observability like Aria Operations for Applications (formerly known as Tanzu/Aria Observability) ensures that you can effectively monitor and troubleshoot issues faster. Having data protection capabilities allows you to protect your data both at rest and in transit, which is critical if your teams will be deploying applications that run across clusters and clouds. And with NSX Advanced Load Balancer, TKO can also help you implement global load balancing and advanced traffic routing that allows for automated service discovery and north-south traffic management.

TKO on VxRail, VMware and Dell’s joint solution for core IT and cloud platform teams, can help you get started with your IT modernization project and enable you to build a standardized platform that will support you as you grow and expand to more clouds.

In the initial release of the reference architecture with VxRail, Tanzu Mission Control (TMC) and Aria Operations for Applications were used, and a solid on-premises foundation was established for building our multicloud architecture onward. The following figure shows the TKO items included in the first iteration.

VMware Tanzu for Kubernetes Operations features.

Figure 2.  Base TKO components used in initial version of reference architecture

In this second phase, we extended the on-premises architecture to a true multicloud environment fit for a new generation of applications.

Added to the latest version of the reference architecture are VMware Cloud on AWS, an Amazon EKS service, Tanzu Service Mesh, and Global Server Load Balancing (GSLB) functionality provided by NSX Advanced Load Balancer to build a global namespace for modern applications.

New TMC functionalities were also added that were not part of the first reference architecture, such as EKS LCM and continuous delivery capabilities. AWS was used for this reference architecture not only because it is still the most widely used public cloud provider, but also because the VMware SaaS products have the most features available for AWS cloud services. Other hyperscaler public cloud provider services are still in the VMware development pipeline. For example, today you can perform life cycle management of Amazon EKS clusters through Tanzu Mission Control; this capability isn't available yet with other cloud providers. The following figure highlights the high-level set of components used in this latest reference architecture update.

Figure 3.  Additional components used in latest version of TKO on VxRail RA

New multicloud testing environment

To test this multicloud architecture, the Dell and VMware engineering teams needed a true multicloud environment. Figure 4 illustrates a snapshot of the multisite/multicloud lab infrastructure that our VMware and Dell engineering teams built to provide a “real-world” environment to test and showcase our solutions. We use this environment to work on projects with internal teams and external partners.

Figure 4.  Dell/VMware Multicloud Innovation Lab Environments

The environment is made up of five data centers and private clouds across the US, all connected by VMware SD-WAN, delivering a private multicloud environment. An Equinix data center provides the fiber backbone to connect with most public cloud providers as well as VMware Cloud Services. 

Extended TKO on VxRail multicloud architecture

Figure 5 shows the multicloud implementation of Tanzu for Kubernetes Operations on VxRail. Here you have K8s clusters on-premises and running on multiple cloud providers. 

 

Figure 5.  TKO on VxRail Reference Architecture Multicloud Architecture

Tanzu Mission Control (TMC), which is part of Tanzu for Kubernetes Operations, provides you with a management plane through which platform operators or DevOps team members can manage the entire K8s environment across clouds. Developers can have self-service access, authenticated by either cloud identity providers like Okta or Microsoft Active Directory or through corporate Active Directory federation. With TMC, you can assign consistent policies across your cross-cloud K8s clusters. DevOps teams can use the TMC Terraform provider to manage the clusters as infrastructure-as-code. 

Through TMC support for K8s open-source project technologies such as Velero, teams can back up clusters either to Azure blob, Amazon S3, or on-prem S3 storage solutions such as Dell ECS, Dell ObjectScale, or another object storage of their choice. 

When you enable data protection for a cluster, Tanzu Mission Control installs Velero with Restic (an open-source backup tool), configured to use the opt-out approach. With this approach, Velero backs up all pod volumes using Restic.

TMC integration with Aria Operations for Applications (formerly Tanzu/Aria Observability) delivers fine-grained insights and analytics about the microservices applications running across the multicloud environments.

TMC also has integration with Tanzu Service Mesh (TSM), so you can add your clusters to TSM. When the TKO on VxRail multicloud reference architecture is implemented, users would connect to their multicloud microservices applications through a single URL provided by NSX Advanced Load Balancer (formerly AVI Load Balancer) in conjunction with TSM. TSM provides advanced, end-to-end connectivity, security, and insights for modern applications—across application end users, microservices, APIs, and data—enabling compliance with service level objectives (SLOs) and data protection and privacy regulations.

TKO on VxRail business outcomes

Dell and VMware know what business outcomes matter to enterprises, and together we help customers map those outcomes to transformations.

Figure 6 highlights the business outcomes that customers are asking for and that we are delivering through the Tanzu portfolio on VxRail today. They also set the stage to inform our joint development teams about future capabilities we look forward to delivering.

Figure 6.  TKO on VxRail and business outcomes alignment

Learn more at Dell Technologies World 2023

Want to dive deeper into VMware Tanzu for Kubernetes Operations on Dell VxRail? Visit our interactive Dell Technologies and VMware booths at Dell Technologies World to talk with any of our experts. You can also attend our session Simplify & Streamline via VMware Tanzu for Kubernetes Operations on VxRail.

Also, feel free to check out the VMware Blog on this topic, written by Ather Jamil from VMware. It includes some cool demos showing TKO on VxRail in action!

Author: Jason Marques (Dell Technologies)
Twitter:
@vWhipperSnapper

Contributor: Ather Jamil (VMware)

Resources

 

 

  • VxRail
  • PowerStore
  • life cycle management
  • Dynamic AppsON

Dell VxRail and Dell PowerStore: Better Together Through Dynamic AppsON

Wei Chen Wei Chen

Fri, 05 May 2023 16:48:57 -0000

|

Read Time: 0 minutes

Dynamic AppsON overview

When two products come together with new and unique capabilities, customers benefit from the “better together” value that is created. That value is clearly visible with Dynamic AppsON, which is a configuration that provides an exclusive integration between compute-only Dell VxRail dynamic nodes and a Dell PowerStore storage system.

Dynamic AppsON enables independent scaling of compute and storage, providing flexibility of choice by increasing the extensibility of both platforms. It provides VxRail environments access to PowerStore enterprise efficiency, data protection, and resiliency features. Additionally, it helps PowerStore environments quickly expand compute for CPU-intensive workloads in a traditional three-tier architecture.

Another integration point that further enhances the Dynamic AppsON experience is the Virtual Storage Integrator (VSI). VSI brings storage provisioning, management, and monitoring capabilities directly into vCenter. It enables the ability to perform common storage tasks and provides additional visibility into the storage system without needing to launch PowerStore Manager.

With Dynamic AppsON, you have the flexibility to choose the type of datastore and connectivity that fits your environment. Dell Technologies recommends vVols and NVMe/TCP.

Leveraging the native vVol capabilities of PowerStore is the optimal way to provision VM datastores. It enables increased storage granularity at the VM level, offloads data services to PowerStore, and allows storage policy-based management directly in vCenter, further establishing vCenter as the common operating environment for the administrator.

For connectivity, NVMe/TCP is recommended because it provides significant advantages. It enables performance comparable to direct-attached storage while retaining the cost-effectiveness, scalability, and flexibility of traditional Ethernet.

 

Figure 1.  Dynamic AppsON overview

For more information about Dynamic AppsON, see the Dell VxRail and Dell PowerStore: Better Together Through Dynamic AppsON white paper.

Dynamic AppsON lifecycle management

Dell VxRail and Dell PowerStore have taken this integration a step further by introducing lifecycle management for Dynamic AppsON deployments. This enables an administrator to view the PowerStore details and initiate a code upgrade directly from VxRail Manager in vCenter. By leveraging the VxRail Manager user interface and workflows, an administrator does not need to switch between multiple interfaces for the lifecycle management operations.

The lifecycle management functionality from VxRail Manager is exclusively enabled through VSI. Dynamic AppsON lifecycle management is available starting with VxRail 7.0.450, PowerStoreOS 3.0, and Virtual Storage Integrator (VSI) 10.2.

Dynamic AppsON lifecycle management provides the following capabilities in VxRail Manager in vCenter:

  • View the attached storage system type and software version
  • Upload a code bundle from the local client directly to PowerStore
  • Run a Pre-Upgrade Health Check on PowerStore and report any warnings and failures
  • Initiate a code upgrade and track the progress until completion

The following figures show these Dynamic AppsON lifecycle management tasks in VxRail Manager.

Figure 2.  PowerStore code reporting

Figure 3.  PowerStore code upload

Figure 4.  PowerStore Pre-Upgrade Health Check

Figure 5.  PowerStore upgrade in progress

Figure 6.  PowerStore upgrade completed successfully

Figure 7.  Updated PowerStore code version

To see all these lifecycle management tasks in action from start to finish, refer to this video:


Conclusion

With the addition of lifecycle management for Dynamic AppsON, the number of storage management tasks for which a virtualization and storage administrator has to leave vCenter is reduced. This functionality provides a consistent, common, and efficient management experience for both VxRail and PowerStore. The integration between VxRail, PowerStore, and the VSI plug-in enables consistent workflows and visibility between storage and compute. Better together through Dynamic AppsON, brought to you by Dell VxRail and Dell PowerStore.

Resources

 
Author: Wei Chen, Technical Staff, Engineering Technologist

LinkedIn

  • VxRail
  • PowerStore
  • life cycle management
  • Dynamic nodes
  • Dynamic AppsON
  • LCM

Learn About the Latest Major VxRail Software Release: VxRail 7.0.450

Daniel Chiu Daniel Chiu

Thu, 11 May 2023 16:14:15 -0000

|

Read Time: 0 minutes

To our many VxRail customers, you know that our innovation train is a constant machine that keeps on delivering more value while keeping you on a continuously validated track. The next stop on your VxRail journey brings you to VxRail 7.0.450 which offers significant benefits to life cycle management and dynamic node clusters.

VxRail 7.0.450 provides support for VMware ESXi 7.0 Update 3l and VMware vCenter 7.0 Update 3l. All existing platforms that support VxRail 7.0 can update to 7.0.450.

This blog provides a deep dive into some of the life cycle management enhancements as well as PowerStore Life Cycle Management integration into VxRail Manager for VxRail dynamic node clusters. For a more comprehensive rundown of the features introduced in this release, see the release notes.

Life cycle management

The life cycle management features that I am covering can provide the most impact to our VxRail customers. The first set of features are designed to offer you actionable information at your fingertips. Imagine taking your first sip of coffee or tea as you log onto VxRail Manager at the start of your day, and you immediately have all the up-to-date information that you need to make decisions and plan out your work.

VxRail pre-update health check

The VxRail pre-update health check, or pre-check as the VxRail Manager UI refers to it, has been an important tool for determining the overall health of your clusters and assessing their readiness for a cluster update. The output of this report helps you become aware of troublesome areas and provides you with information, such as Knowledge Base articles, to resolve the issues. This tool relies on a script that can be automatically uploaded onto the VxRail Manager VM, if the cluster is securely connected to the Dell cloud, or manually uploaded as a bundle procured from the Dell Support website.

For the health check to stay reliable and improve over time, the development of the health check script needs to incorporate a continuous feedback loop so that the script can easily evolve. Feedback can come from our Dell Services and escalation engineering teams as they learn from support cases, and from the engineering team as new capabilities and additions are introduced to the VxRail offering.

To provide an even more accurate assessment of the cluster health and readiness for a cluster update, the VxRail team has increased the frequency of how often the health check script is updated. Starting with VxRail 7.0.450, clusters that are connected to the Dell cloud will automatically scan for new health check scripts multiple times per day. The health check will automatically run every 24 hours, with the latest script in hand, so that you will have an up-to-date report ready for your review whenever you log onto VxRail Manager. This enhancement has just made the pre-update health check even more reliable and convenient.

For clusters that are not connected to the Dell cloud, you can still benefit from the increased frequency of health script updates. However, you are responsible for checking for any updates on the Dell Support website, downloading them, and staging the script on VxRail Manager for the tool to utilize it.

VxRail cluster update planning

The next enhancement that I will delve into provides a simpler and more convenient cluster update planning experience. VxRail 7.0.450 introduces more automation into the cluster update planning operations, so that you have all the information that you need to plan for an update without manual intervention.

For a cluster connected to the Dell cloud, VxRail Manager will automatically scan for new update paths that are relevant to that particular cluster.  This scan happens multiple times a day. If a new update path is found, VxRail Manager will download the lightweight manifest file from that target LCM composite bundle. This file provides the metadata of the LCM composite bundle, including the manifest of the target VxRail Continuously Validated State.

The following figure shows the information of two update paths provided by their manifest files to populate the Internet Updates tab. That information includes the target VxRail software version, estimated cluster update time, link to the release notes, and whether reboots are required for the nodes to complete an update to this target version. (You can disregard the actual software version numbers: these are engineering test builds used to demonstrate the new functionality.)

VxRail Manager, by default, recommends the next software version on the same software train. For the recommended path, VxRail Manager automatically generates an update advisor report, which is a new feature for cluster update planning. An update advisor report is a single exportable report that consolidates the output from existing planning tools:

  • The same metadata for the update path, as provided on the Internet Updates tab:

  • The update advisory that provides a component-by-component change analysis, which helps users build IT infrastructure change reports:

  • The health check report that was discussed earlier:

  • The user-managed component report that reminds users whether they need to update non-VxRail managed components for a cluster update:

This report is automatically generated every 24 hours so that you can log onto VxRail Manager and have all the up-to-date information at your disposal to make informed decisions. This feature will make your life easier because you no longer have to manually run all these jobs and wait for them to complete!

For a non-recommended update, you can manually generate an update advisor report using the Actions button for the listed update path. For clusters not connected to Dell cloud, you can still benefit from the update advisor report. However, instead of downloading a lightweight manifest file, you would have to download the full LCM bundle from the Dell Support website to generate the report.

Smart bundle

The last life cycle management feature that I want to focus on is smart bundles. The term ‘smart bundle’ refers to a space-efficient LCM bundle that can be downloaded from the Dell cloud. If you are using CloudIQ today to manage your VxRail clusters, this feature will be familiar to you. A space-efficient bundle is created by first performing a change analysis of the VxRail Continuously Validated State currently running on a cluster versus the target VxRail Continuously Validated State that a user wants to download for their cluster. The change analysis determines the delta of install files in the full LCM bundle that the cluster needs to download and update to the target version.

In VxRail 7.0.450, you can now initiate smart bundle transfers from VxRail Manager. Smart bundles can greatly reduce the transfer size of an update bundle, which can be extremely beneficial for bandwidth-constrained environments. To use the smart bundle feature, the cluster has to be configured to connect to CloudIQ in the Dell cloud. If VxRail Manager is not properly configured to use the smart bundle feature or if the smart bundle operation fails, VxRail Manager defaults to using the traditional method of downloading the full LCM bundle from the Dell cloud.
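The fallback behavior can be pictured with a short Python sketch. The function names are hypothetical stand-ins for VxRail Manager internals, not a real interface:

```python
# Conceptual sketch of the smart bundle fallback described above.

def download_smart_bundle(target: str) -> str:
    raise ConnectionError("CloudIQ connectivity not configured")  # simulated failure

def download_full_bundle(target: str) -> str:
    return f"full LCM bundle for {target}"

def fetch_update_bundle(target: str) -> str:
    try:
        return download_smart_bundle(target)   # delta-only transfer via CloudIQ
    except Exception as err:
        print(f"smart bundle unavailable ({err}); falling back")
        return download_full_bundle(target)    # traditional full download

print(fetch_update_bundle("7.0.450"))
```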

VxRail dynamic nodes with PowerStore

VxRail 7.0.450 introduces the much-anticipated integration of PowerStore life cycle management into VxRail Manager for a configuration consisting of VxRail dynamic nodes using PowerStore as the primary storage (also referred to as Dynamic AppsON). This integration further centralizes PowerStore management onto the vCenter Server console for VMware environments. With the Virtual Storage Integrator (VSI) plugin to vCenter, you have been able to provision PowerStore storage and manage data services. Now, you can use the VxRail Manager plugin to manage a PowerStore update and view the array’s software version.

To enable this functionality, VxRail leverages the VSI’s new API server to communicate with PowerStore Manager, initiate lifecycle management operations, and retrieve status information. The API server was developed exclusively for VxRail Manager in a Dynamic AppsON configuration. You start the LCM workflow by first uploading the update bundle to PowerStore Manager, then running an update pre-check, and lastly running the update. The operations are initiated from VxRail Manager, but the actual operations are executed on PowerStore Manager.
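Conceptually, the three-step workflow could be scripted as follows. The vsi_post helper, its routes, and the response fields are assumptions for illustration only; in practice, VxRail Manager drives the real VSI API server:

```python
# Hypothetical walk-through of upload -> pre-check -> upgrade.
import time
import requests

VSI = "https://vsi-api.example.local"   # placeholder address
SESSION = requests.Session()
SESSION.verify = False                  # lab-only; validate certificates in production

def vsi_post(path: str, **payload) -> dict:
    resp = SESSION.post(f"{VSI}{path}", json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

# 1. Upload the PowerStore code bundle
job = vsi_post("/upload-bundle", bundle="PowerStoreOS-3.0.tgz")
# 2. Run the pre-upgrade health check and stop on warnings/failures
check = vsi_post("/pre-upgrade-health-check", bundle_id=job["id"])
assert not check.get("failures"), check
# 3. Initiate the upgrade and poll progress until completion
upgrade = vsi_post("/upgrade", bundle_id=job["id"])
while vsi_post("/status", job_id=upgrade["id"])["state"] == "RUNNING":
    time.sleep(30)
```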

The following video shows the PowerStore LCM workflow that can be run from the VxRail Manager. You can update a PowerStore that is using any storage type, except NFS, as the primary storage for a VxRail dynamic node cluster.


Conclusion

Although VxRail 7.0.450 is a jam-packed release with many new features and enhancements, the features I’ve described are the headliners and deserve a deeper dive to unpack the capability set. Overall, the set of LCM enhancements in this release provides immense value for your future cluster management and update experience. For the full list of features introduced in this release, see the release notes. And for more information about VxRail in general, check out the Dell VxRail Hyperconverged Infrastructure page on www.dell.com.

Author: Daniel Chiu


  • NVIDIA
  • VxRail
  • VMware Cloud Foundation
  • GPU
  • VCF
  • VCF Async Patch Tool
  • VCF on VxRail
  • serviceability
  • Cloud Foundation on VxRail

What’s New: VMware Cloud Foundation 4.5.1 on Dell VxRail 7.0.450 Release and More!

Jason Marques Jason Marques

Thu, 11 May 2023 15:55:52 -0000

|

Read Time: 0 minutes

This latest Cloud Foundation (VCF) on VxRail release includes updated versions of software BOM components, a bunch of new VxRail platform enhancements, and some good ol’ under-the-hood improvements that lay the groundwork for future features designed to deliver an even better customer experience. Read on for the highlights…

VCF on VxRail operations and serviceability enhancements

View Nvidia GPU hardware details in VxRail Manager vCenter plugin ‘Physical View’ and VxRail API

Leveraging the power of GPU acceleration with VCF on VxRail delivers a lot of value to organizations looking to harness the power of their data. VCF on VxRail makes operationalizing infrastructure with Nvidia GPUs easier with native GPU visualization and details using the VxRail Manager vCenter Plugin ‘Physical View’ and VxRail API. Administrators can quickly gain deeper hardware insights into the health and details of the Nvidia GPUs running on their VxRail nodes, easily map the hardware layer to the virtual layer, and improve infrastructure management and serviceability operations.

Figure 1 shows what this looks like.

Figure 1.  Nvidia GPU visualization and details – VxRail vCenter Plugin ‘Physical View’ UI

Support for capturing, displaying, and proactive Dell dial-home alerting of new VxRail iDRAC system events and alarms

Introduced in VxRail 7.0.450 and available in VCF 4.5.1 on VxRail 7.0.450 are enhancements to VxRail Manager intelligent system health monitoring of iDRAC critical and warning system events. With this new feature, new iDRAC warning and critical system events are captured, and through VxRail Manager integration with both iDRAC and vCenter, alarms are triggered and posted in vCenter.

Customers can view these events and alarms in the native vCenter UI and the VxRail Manager vCenter Plugin Physical View which contains KB article links in the event description to provide added details and guidance on remediation. These new events also trigger call home actions to inform Dell support about the incident.

These improvements are designed to improve the serviceability and support experience for customers of VCF on VxRail. Figures 2 and 3 show these events as they appear in the vCenter UI ‘All Issues’ view and the VxRail Manager vCenter Plugin Physical View UI, respectively.


Figure 2.  New iDRAC events displayed in the vCenter UI ‘All Issues’ view

Figure 3.  New iDRAC events displayed in the VxRail Manager vCenter Plugin UI ‘Physical View’

Support for capturing, displaying, and proactive dial-home alerting of new iDRAC NIC port down events and alarms

To further improve system serviceability and simplify operations, VxRail 7.0.450 introduces the capturing of new iDRAC system events related to host NIC port link status. These include a NIC port down warning event, indicated by the NIC100 event code, and a ‘NIC port is started/up’ informational event, indicated by the NIC101 event code.

A NIC100 event indicates either that a network cable is not connected, or that the network device is not working.

A NIC101 event indicates that the transition from a network link ‘down’ state to a network link ‘started’ or ‘up’ state has been detected on the corresponding NIC port.

VxRail Manager now creates new VxM events that track these NIC port states.

As a result, users can be alerted through an alarm in vCenter when a NIC port is down. VxRail Manager will also generate a dial-home event when a NIC port is down. When the condition is no longer present, VxRail Manager will automatically clear the alarm by generating a clear-alarm event.

Finally, to reduce the number of false positive events and prevent unnecessary alarm and dial-home events, VxRail Manager implements an intelligent throttling mechanism to handle situations in which false positive alarms related to network maintenance activities could occur. This makes the alarms and events that are triggered more credible for an admin to act on.

Table 1 contains a summary of the details of these two events and the VxRail Manager serviceability behavior.

Table 1.  iDRAC NIC port down and started event and behavior details

Let’s double click on this serviceability behavior in a bit more detail.

Figure 4 depicts the behavior process flow VxRail Manager takes when iDRAC discovers and triggers a NIC port down system event. Let’s walk through the details now:

1.  The first thing that occurs is that iDRAC discovers that the NIC port state has gone down and triggers a NIC port down event.

2.  Next, iDRAC will send that event to VxRail Manager.

3.  At this stage, VxRail Manager validates how long the NIC port down event has been active and checks whether a NIC port started (or up) event has been triggered within a 30-minute window of the original NIC port down event. If no NIC port started event has been triggered, VxRail Manager begins throttling NIC port down event communication to prevent duplicate alerts about the same event.

If during the 30-minute window, a NIC port started event has been detected, VxRail Manager will cease throttling and clear the event.

4. When the VxRail Manager event throttling state is active, VxRail Manager will log it in its event history.

5. VxRail Manager will then trigger a vCenter alarm and post the event to vCenter.

6.  Finally, VxRail Manager will trigger a NIC port down dial home event communication to backend Dell Support Systems, if connected.

Figure 4.  Processing VxRail NIC port down events, and VxRail Manager throttling logic
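Here is a conceptual Python sketch of the 30-minute throttling window shown in Figure 4 (not VxRail Manager source code); timestamps are in seconds for brevity:

```python
# Conceptual sketch of NIC port down event throttling.

THROTTLE_WINDOW = 30 * 60  # seconds

class NicPortMonitor:
    def __init__(self):
        self.down_since = None          # when NIC100 (port down) was first seen

    def on_event(self, code: str, now: float) -> str:
        if code == "NIC100":            # port down
            if self.down_since is not None and now - self.down_since < THROTTLE_WINDOW:
                return "throttled"      # suppress duplicate alarm/dial-home traffic
            self.down_since = now
            return "alarm + dial home"  # post alarm to vCenter, notify Dell support
        if code == "NIC101":            # port started/up
            self.down_since = None
            return "clear alarm"        # generate a clear-alarm event
        return "ignored"

mon = NicPortMonitor()
print(mon.on_event("NIC100", 0))      # alarm + dial home
print(mon.on_event("NIC100", 600))    # throttled (within the 30-minute window)
print(mon.on_event("NIC101", 900))    # clear alarm
```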

Figure 5 shows what this looks like in the vCenter UI.

Figure 5.  VxRail NIC port down trigger alarm in vCenter UI

Figure 6 shows what this looks like in the VxRail Manager vCenter Plugin ‘Physical View’ UI.

Figure 6.  VxRail Manager vCenter Plugin ‘Physical View’ UI view of a VxRail NIC port down event

VCF on VxRail storage updates

Support for new PowerMax 2500 and 8500 storage arrays with VxRail 14G and 15G dynamic nodes using VMFS on FC principal storage

Starting in VCF 4.5.1 on VxRail 7.0.450, support has been added for the latest next gen Dell PowerMax 2500 and 8500 storage systems as VMFS on FC principal storage when deployed with 14G and 15G VxRail dynamic node clusters in VI workload domains.

Figure 7 lists the Dell storage arrays that support VxRail dynamic node clusters using VMFS on FC principal storage for VCF on VxRail, along with the corresponding supported FC HBA makes and models.

Note: Compatible supported array firmware and software versions are published in the Dell E-Lab Support Matrix for reference.

Figure 7.  Supported Dell storage arrays used as VMFS on FC principal storage

VCF on VxRail lifecycle management enhancements

VCF Async Patch Tool 1.0.1.1 update

This tool addresses both LCM and security areas. Although it is not officially a feature of any specific VCF on VxRail release, it does get released asynchronously (pun intended) and is designed for use in VCF and VCF on VxRail environments. Thus, it deserves a call out.

For some background, the VCF Async Patch Tool is a new CLI based tool that allows cloud admins to apply individual component out-of-band security patches to their VCF on VxRail environment, separately from an official VCF LCM update release. This enables organizations to address security vulnerabilities faster without having to wait for a full VCF release update. It also allows admins to install these patches themselves without needing to engage support resources to get them applied manually.

With this latest AP Tool 1.0.1.1 release, the AP Tool now supports patching VxRail (which includes all the components in a VxRail update bundle: VxRail Manager and ESXi software components, and VxRail hardware firmware/drivers) within VCF on VxRail environments. This is a great addition to the tool’s initial support for patching vCenter and NSX Manager in its first release. VCF on VxRail customers now have a centralized and standardized process for applying security patches for core VCF and VxRail software and core VxRail HCI stack hardware components (such as server BIOS or pNIC firmware/drivers), all in a simple and integrated manner that VCF on VxRail customers have come to expect from a jointly engineered, integrated, turnkey hybrid cloud platform.

Note: Hardware patching is made possible by how VxRail implements hardware updates with the core VxRail update bundle. All VxRail patches for VxRail Manager, ESXi, and hardware components are delivered in the VxRail update bundle, which the AP Tool leverages to apply them.

From an operational standpoint, when patches for the respective software and hardware components have been applied, and a new VCF on VxRail BOM update is available that includes the security fixes, admins can use the tool to download the latest VCF on VxRail LCM release bundles and upgrade their environment back to an official in-band VCF on VxRail release BOM. After that, admins can continue to use the native SDDC Manager LCM workflow process for applying additional VCF on VxRail upgrades. Figure 8 highlights this process at a high level.

Figure 8.  Async Patch Tool overview

You can access VCF Async Patch Tool instructions and documentation from VMware’s website.

Summary

In this latest release, the new features and platform improvements help set the stage for even more innovation in the future. For more details about bug fixes in this release, see VMware Cloud Foundation on Dell VxRail Release Notes. For this and other Cloud Foundation on VxRail information, see the following additional resources.

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional Resources


  • VxRail

Optimize and Streamline Data Center Operations with VxRail

Dylan Jackson Dylan Jackson

Fri, 31 Mar 2023 19:07:36 -0000

|

Read Time: 0 minutes

Winter will soon be coming to an end, and Spring will be right behind. More outdoor life will resume as the season of refreshment and renewal approaches. In that spirit, I want to reseed the VxRail field of knowledge. One of the most critical areas of improvement that VxRail offers is in lifecycle management; just like how an attentive gardener can improve a plant’s life and fruit output, so too can VxRail improve and enhance the lifecycle management of clusters. We’ll take a look at how VxRail accomplishes this objective within this blog and the linked videos.

VxRail Manager is an on-cluster virtual machine that makes cluster management more accessible than ever through VMware vCenter. Much like how the tractor transformed harvesting wheat, VxRail Manager transforms how we manage clusters; both enable people to accomplish much more. This virtual machine enables VxRail functionality in customer environments, providing automation solutions built by our VxRail team and support for customer-built scripts through an API. We’ll discuss some of these automation features later in this blog: update prechecks, cluster update cycles, compliance checks, and cluster expansion. These functions, as well as many others, are also available by API call, which enables customers to craft their own automation solutions in addition to what VxRail already provides. The automation features included with VxRail, such as cluster shutdown, are readily consumed within vCenter, eliminating the need for administrators to familiarize themselves with yet another interface.
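As a small example of consuming that API, the following Python snippet queries a system-information endpoint. The endpoint path is based on the public VxRail API but should be verified against your version's documentation; the host name, credentials, and response fields are placeholders:

```python
# Hedged sketch of a customer-built script calling the VxRail API.
import requests

resp = requests.get(
    "https://vxm.example.local/rest/vxm/v1/system",   # system-information endpoint
    auth=("administrator@vsphere.local", "password"), # placeholder credentials
    verify=False,   # lab-only; validate certificates in production
    timeout=30,
)
resp.raise_for_status()
system = resp.json()
print("VxRail version:", system.get("version"))
```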

Enhanced update prechecks

The first significant part of the VxRail HCI System Software we’ll cover in this blog is update prechecks. The VxRail update precheck massively assists administrators by helping to confirm that a cluster is healthy enough to upgrade. This precheck script performs hundreds of tasks, but, as an example, let’s focus on just a couple of them.

Updates can halt for a variety of reasons. For example, if a cluster has insufficient storage, the whole cluster will fail to update. If a node was left in maintenance mode, it would not update. By running the optional VxRail update precheck, these headaches can be avoided. To get a look at the precheck, check out the video to the right. Just like how a tractor will till a field to prepare it for more successful seeding, the precheck also helps prepare clusters for more successful upgrades. VxRail Manager identifies issues like storage availability with this check, but it doesn’t stop there. VxRail Manager then provides administrators with a solution recommendation through a link to a Dell Knowledge Base article. Running the precheck takes a fraction of the time an update requires and has the potential to save massive amounts of labor and time by avoiding problems both large and small. So, while the precheck is optional, I’d recommend running it before every update cycle. The ability to prevent most issues from taking root is one everyone can appreciate.

Prechecks aren’t the only improvement, however. Now that VxRail Manager has confirmed that our clusters are healthy, we would want to apply an update. This is where we discuss some of the most critical enhancements a VxRail solution provides. 

Easier updates

Updating a traditional, non-VxRail cluster is a process with a significant number of steps. Administrators need to create baseline images for each cluster and add any drivers, firmware, or other vendor add-ons each time. The difference with VxRail comes from implementing what we’ve come to call “Continuously Validated States,” which make the LCM experience so much better. The first improvement is that they remove the need to create baselines and cluster profiles (a video overview of the update process is here to the left). The VxRail team assembles the majority of the components needed in an update cycle, requiring only devices like GPUs and Fibre Channel controllers, plus additional VMware software components like NSX-T, to be added to the update packages. This saves administrators time and effort, but it also provides massive long-term stability benefits. VxRail doesn’t just throw a bunch of component updates into a zip file and call it a day. These update packages undergo extensive testing in a multimillion-dollar VxRail testing facility for over 100,000 hours to help clusters reach as high as 99.9999 percent uptime in some cases. Administrators can then use the update packages created by the VxRail team to move their clusters from one Continuously Validated State to the next, keeping their VxRail environment in a known-good, happy state as it moves through the cluster lifecycle.

The update packages created by VxRail can be used in two different ways. Clusters with Internet access can pull these files through the network using the Internet Update button in the UI. Air-gapped clusters can move between Continuously Validated States through the Offline Update option, which uses local bundle files downloaded from the Dell support site. Returning to our gardening and cultivation analogies, you can certainly till a plot with a garden hoe, but you can do much more much easier with a rotary tiller. Similarly, administrators can manage a DIY HCI environment manually, but it will be significantly more difficult. 

If you’d like to know about the new VxRail 8.0.000 release, you can read an article on it on the Info Hub Blog page.

Confirming the cluster state

Update automation ensures that VxRail clusters remain in a Continuously Validated State during update cycles, but clusters spend most of their lives completing business operations. VxRail Manager includes a compliance checker tool that helps ensure that clusters continue to adhere to their Continuously Validated State and identifies any installed versions that drift away from it. This check is available on demand within the update interface of the UI. The compliance checker tool examines each node in a cluster, collects the current version set, and then displays the output in a way that calls out problem items specifically. When you consider that a significant part of the value that VxRail brings to the table is based on using and adhering to Continuously Validated States, it’s easy to see how helpful an automated inspection tool can be.

Easier and faster growth

Moving along, VxRail also enhances cluster growth. VxRail provides heterogeneous hardware support, which means customers can add different nodes as needed to best address their resource demands. For example, this could mean adding different node models or even different hardware generations to a cluster. There are software improvements as well. When an administrator goes to add their new node or nodes, VxRail Manager scans the hosts available on the network for their software and firmware versions and confirms compatibility with the rest of the cluster. You can view the check in the video to the right, but it’s powered by the same concept behind VxRail updates: Continuously Validated States and the certainty their use provides. Clusters aren’t static environments: updates happen, and different versions in the environment could make compatibility checks a new hurdle! VxRail alleviates this pinch point with the use of the node image management tool. This tool, also known as NIM, is much like an enhanced Rapid Appliance Self-Recovery, or RASR, operation. Where a RASR will reimage a particular node with several steps, NIM allows for multiple node reimaging operations in parallel. The tool and instructions for its use can be found with SolVe online procedures.

Managing an HCI environment takes a lot of work with many tasks to accomplish. VxRail automates many of these tasks, particularly when it comes to lifecycle management. The automation enhancements don’t stop with what has been engineered into the software. VxRail also features an API that exposes the same calls and features used in the UI for script automation. Our API features over 100 calls, providing the toolkit necessary to create all kinds of special-purpose automation solutions. If VxRail is our tractor, the API toolkit helps customers create their own cultivators, levellers, hay balers, and more! The precheck, updates, and compliance check enhancements we talked about earlier in this blog are all examples of actions that the API can take. As environments grow larger, management options need to grow with them. The VxRail API offers an expansive toolkit for developers to build the tools that businesses need.
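To make that concrete, here is a minimal sketch of calling the VxRail API from the command line. It assumes the system information endpoint documented in the VxRail API Guide, basic authentication with vCenter administrator credentials, and a placeholder VxRail Manager address; adjust all three for your environment:

# Retrieve VxRail system information, including the installed VxRail HCI
# System Software version, from VxRail Manager (host and credentials are placeholders)
curl -k -s -u 'administrator@vsphere.local:MyPassword!' \
  https://vxrail-manager.example.com/rest/vxm/v1/system

The same pattern applies to the precheck, update, compliance, and expansion calls mentioned above; the API documentation lists the exact endpoints and payloads for each.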

Conclusion

Enhancing cluster lifecycle management and enabling administrators to manage more infrastructure are core parts of the VxRail offering. We covered how VxRail improves lifecycle management and empowers administrators with the update precheck that ensures clusters are ready to upgrade safely. We also covered the improved cluster update process with reduced labor time, compliance checks that ensure that cluster nodes adhere to their Continuously Validated States between upgrade cycles, and the cluster expansion tool with image control. The VxRail API can interface with these processes, enabling custom automation solutions within customer data centers. Just like how the humble tractor provided productivity that was previously unimaginable, VxRail provides the automation to maintain data centers more efficiently than ever. For more information about VxRail, visit the VxRail Info Hub.

 Author: Dylan Jackson, Engineering Technologist


Read Full Blog
  • VMware
  • VxRail
  • vSAN
  • ESA
  • 100GbE

100 GbE Networking – Harness the Performance of vSAN Express Storage Architecture

David Glynn David Glynn

Wed, 05 Apr 2023 12:48:50 -0000

|

Read Time: 0 minutes

For a few years, 25 GbE networking has been the mainstay of rack networking, with 100 GbE reserved for uplinks to spine or aggregation switches. 25 GbE provides a significant leap in bandwidth over 10 GbE, and today carries no outstanding price premium over it, making it a clear winner for new buildouts. But should we continue with this winning 25 GbE strategy? Is it time to look to a future of 100 GbE networking within the rack? Or is that future now?

This question stems from my last blog post, VxRail with vSAN Express Storage Architecture (ESA), where I called out VMware’s recommendation of 100 GbE networking for maximum performance. But just how much more performance can vSAN ESA deliver with 100 GbE networking? VxRail is fortunate to have its own performance team, who stood up two six-node VxRail with vSAN ESA clusters, identical except for the networking: one was configured with Broadcom 57414 25 GbE networking, and the other with Broadcom 57508 100 GbE networking. For more VxRail white papers, guides, and blog posts, visit the VxRail Info Hub.

When it comes to benchmark tests, there is a large variety to choose from. Some benchmark tests are ideal for generating headline hero numbers for marketing purposes (think quarter-mile drag racing). Others are good for helping to diagnose issues. Finally, there are benchmark tests that are reflective of real-world workloads. OLTP32K is a popular one: it reflects online transaction processing with a 70/30 read-write split and a 32K block size, in line with the aggregated results from thousands of Live Optics workload observations across millions of servers.

One more thing before we get to the results of the VxRail performance team’s testing: the environment configuration. We used a storage policy of erasure coding with a failure tolerance of two and compression enabled.

When VMware announced vSAN with Express Storage Architecture, they published a series of blogs, all of which I encourage you to read. But as part of our 25 GbE vs 100 GbE testing, we also wanted to verify the astounding claims in RAID-5/6 with the Performance of RAID-1 using the vSAN Express Storage Architecture and vSAN 8 Compression - Express Storage Architecture. In short: forget the normal rules of storage performance; VMware threw that book out of the window. We didn’t throw our copy out of the window, well not at first, but once our results validated their claims… it went out.

Let’s look at the data:

 Figure 1. ESA: OLTP32KB 70/30 RAID6 25 GbE vs 100 GbE performance graph

Boom! A 78% increase in peak IOPS with a substantial 49% drop in latency. This is a HUGE increase in performance, and the sole difference is the use of the Broadcom 57508 100 GbE networking. Also, check out that latency ramp-up on the 25 GbE line; it’s just like hitting a wall, while the 100 GbE line stays almost flat.

But nobody runs constantly at 100%, at least they shouldn’t. 60 to 70% of absolute maximum is typically a comfortable day-to-day peak workload, leaving some headroom for spikes or node maintenance. In that range, there is an 88% increase in IOPS with a 19 to 21% drop in latency, the smaller drop attributable to the 25 GbE configuration not yet hitting a wall. As much as applications like high performance, what they really need is performance delivered with consistent and predictable latency, and if that latency is low, all the better. If we focus on just latency, the 100 GbE networking enabled 350K IOPS to be delivered at 0.73 ms, while the 25 GbE networking could squeak out 106K IOPS at 0.72 ms. That may not be the fairest of comparisons, but it does highlight how much 100 GbE networking can benefit latency-sensitive workloads.

Boom, again! This next benchmark is not reflective of real-world workloads; it is a diagnostic test that stresses the network with its 100% read and 100% write workloads. Can it find the bottleneck that 25 GbE hit in the previous benchmark?

 Figure 2. ESA: 512KB RAID6 25 GbE vs 100 GbE performance graph

This testing was performed on a six-node cluster, with each node contributing one-sixth of the throughput shown in this graph: 20,359 MB/s of random read throughput for the 25 GbE cluster, or 3,393 MB/s per node. That is slightly above the theoretical maximum of 3,125 MB/s that 25 GbE can deliver (25 Gb/s ÷ 8 bits per byte = 3,125 MB/s). How can a cluster exceed line rate? In the world of HCI, the virtual machine workload is co-resident with the storage. As a result, some of the I/O is local to the workload, resulting in higher-than-theoretical throughput. For comparison, the 100 GbE cluster achieved 48,594 MB/s of random read throughput, or 8,099 MB/s per node, out of a theoretical maximum of 12,500 MB/s.

But this is just the first release of the Express Storage Architecture. In the past, VMware has delivered significant gains to vSAN, as seen in the lab-based performance analysis Harnessing the Performance of Dell EMC VxRail 7.0.100. We can only speculate on what else is in store to improve upon this initial release.

What about costs, you ask? Street pricing can vary greatly depending on the region, so it's best to reach out to your Dell account team for local pricing information. Using US list pricing as of March 2023, I got the following:

Table 1. Network component pricing

Component                    Dell PN    List price   Per port   25 GbE   100 GbE
Broadcom 57414 dual 25 Gb    540-BBUJ   $769         $385       $385     -
S5248F-ON 48 port 25 GbE     210-APEX   $59,216      $1,234     $1,234   -
25 GbE Passive Copper DAC    470-BBCX   $125         $125       $125     -
Broadcom 57508 dual 100 Gb   540-BDEF   $2,484       $1,242     -        $1,242
S5232F-ON 32 port 100 GbE    210-APHK   $62,475      $1,952     -        $1,952
100 GbE Passive Copper DAC   470-ABOX   $360         $360       -        $360
Total per port                                                  $1,743   $3,554

Overall, the per-port cost of the 100 GbE equipment was 2.04 times that of the 25 GbE equipment. However, this doubling of network cost provides four times the bandwidth, a 78% increase in storage performance, and a 49% reduction in latency.

If your workload is IOPS-bound or latency-sensitive and you had planned to address this by adding more VxRail nodes, consider this a wakeup call. Adding dual 100 Gb networking came at a total list cost of $42,648 for the twelve ports used (12 × $3,554). This cost is significantly less than the list price of a single VxRail node and a fraction of the list cost of adding enough VxRail nodes to achieve the same level of performance increase.

Reach out to your networking team; they would be delighted to help deploy the 100 Gb switches your savings funded. If decision-makers need further encouragement, send them this link to the white paper on this same topic, Dell VxRail Performance Analysis (similar content, just more formal), and this link to VMware’s vSAN 8 Total Cost of Ownership white paper.

While 25 GbE has its place in the datacenter, when it comes to deploying vSAN Express Storage Architecture, it's clear that we're moving beyond it and onto 100 GbE. The future is now 100 GbE, and we thank Broadcom for joining us on this journey.



Read Full Blog
  • VMware Cloud Foundation
  • VCF
  • VCF on VxRail
  • VCF on VxRail storage

Getting To Know VMware Cloud Foundation on Dell VxRail Flexible Storage Options

Jason Marques Jason Marques

Thu, 09 Feb 2023 20:45:22 -0000

|

Read Time: 0 minutes

Have you been tasked with executing your company’s cloud transformation strategy, and are you worried about creating yet another infrastructure silo just for the subset of workloads that will run in this cloud? Do you need a solution that delivers storage flexibility? Then you have come to the right place.

Dell Technologies and VMware have you covered with VMware Cloud Foundation on Dell VxRail. VCF on VxRail delivers cloud infrastructure with the storage flexibility to meet you where you are in your cloud adoption journey.

This new whiteboard video walks through the flexible storage options that you can take advantage of to align with your business, workload, and operational needs. Check it out below.


And for more information about VCF on VxRail, visit the VxRail Info Hub page.

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional Resources

 Videos


Read Full Blog
  • VMware
  • vSphere
  • VxRail
  • PCIe

VxRail and SmartDPUs—A Transformative Data Center Pairing

David Glynn David Glynn

Tue, 17 Jan 2023 21:15:58 -0000

|

Read Time: 0 minutes

What VMware announced as Project Monterey back in September 2020 has finally come to fruition as the vSphere Distributed Services Engine. While that name may seem like a mouthful, it really does a great job of describing what the product does in just a few words.

vSphere Distributed Services Engine provides a consistent management and services platform to deliver dynamic hardware for the growing world of agile distributed applications. The vSphere Distributed Services Engine will, in the future, be the engine upon which non-hypervisor vSphere services run. Today it begins with NSX, but VMware has set its sights on moving vSAN storage and host management services to the vSphere Distributed Services Engine, thus freeing up x86 resources for virtual machines, containers, and the applications they support.

Powering vSphere Distributed Services Engine is a new type of PCIe card known as a data processing unit (DPU), currently available from NVIDIA and AMD. At Dell, we are calling them SmartDPUs, as these PCIe cards and the software they run are the cornerstone of tomorrow’s disaggregated cloud-native data center.

From a hardware perspective, it would be easy to assume that a SmartDPU is just a fancy network card; after all, the most distinguishing external features are the SFP ports. But hiding under the large heatsink is a small but powerful server with its own processor, memory, and storage. Most important is the programmable hardware I/O accelerator, the core of the SmartDPU that delivers its performance. The PowerEdge server team at Dell has gone a step further: they’ve tightly coupled the SmartDPUs with the existing PowerEdge iDRAC through the serial port and side-band server communication connections, bypassing the RJ45 management port. This allows the iDRAC to manage not only the PowerEdge server but also the SmartDPUs. As awesome as the hardware is, it needs software for its power to be unleashed.

This is where vSphere Distributed Services Engine comes into play. In this initial release, VMware is moving NSX and the networking and security services that it provides to the vSphere environment from the main x86 CPU and onto the SmartDPU. This provides several benefits: The most obvious is that this will free up x86 CPU resources for virtual machine and container workloads. Less obvious is the creation of an air gap between the NSX services and ESXi, enabling zero trust security. Does this mean that SmartDPUs are just an NSX offload card? Yes and no. VMware and the industry are taking the first small steps in what will be a leap forward for data center infrastructure and design. Future steps by VMware will expand the vSphere Distributed Services Engine to have storage and host management services running on the SmartDPUs, thus leaving the x86 CPU to run host virtual machines and containers.

VMware’s journey does not stop there. The next steps may seem blasphemous at first: VMware will provide bare-metal support, enabling Linux or Windows to be deployed directly on the x86 hardware. VMware acknowledges that not every workload is suited to run on vSphere, but these workloads would still benefit from the security, networking, and storage services provided by the vSphere Distributed Services Engine, transforming the data center, breaking down silo walls, and distributing and aggregating any and all workloads.

Where does VxRail fit in all this? In the same place as always: standing on the shoulders of the PowerEdge and VMware giants, looking to remove complexity and friction from technology, making it easier and simpler for you to purchase, deploy, manage, update, and, most importantly, use this transformative technology, freeing up your cycles to refactor your applications to meet the ever-growing needs of your business. VxRail will be supporting the vSphere Distributed Services Engine with the AMD Pensando and NVIDIA Bluefield-2 SmartDPUs on our core platforms—the E660F, V670F, and P670N. These nodes will be available in configurations for both VxRail with vSAN original storage architecture and VxRail dynamic nodes.

The journey of the modern data center is complex and ever changing, but with VxRail at your side you are in good company.

 
Author: David Glynn, Sr. Principal Engineer, VxRail Technical Marketing

 

Read Full Blog
  • VMware
  • vSphere
  • VxRail
  • vSAN

Learn About the Latest Major VxRail Software Release: VxRail 8.0.000

Daniel Chiu Daniel Chiu

Mon, 09 Jan 2023 14:45:15 -0000

|

Read Time: 0 minutes

Happy New Year!  I hope you had a wonderful and restful holiday, and you have come back reinvigorated.  Because much like the fitness centers in January, this VxRail blog site is going to get busy.  We have a few major releases in line to greet you, and there is much to learn.  


First in line is the VxRail 8.0.000 software release that provides introductory support for VMware vSphere 8, which has created quite the buzz these past few months.  Let’s walk through the highlights of this release.

  • For VxRail users who want to be early adopters of vSphere 8, VxRail 8.0.000 provides the first upgrade path for VxRail clusters to transition to VMware’s latest vSphere software train. Only clusters with VxRail nodes based on either the 14th generation or 15th generation PowerEdge servers can upgrade to vSphere 8, because VMware has removed support for a legacy BIOS driver used by 13th generation PowerEdge servers. Importantly, users need to upgrade their vCenter Server to version 8.0 before a cluster upgrade, and vSAN 8.0 clusters require users to upgrade their existing vSphere and vSAN licenses. In VxRail 8.0.000, the VxRail Manager has been enhanced to check platform compatibility and warn users of license issues before they become a problem. Users should always consult the release notes to fully prepare for a major upgrade.
  • VxRail 8.0.000 also provides introductory support for vSAN Express Storage Architecture (ESA), which has garnered much attention for its potential while eliciting just as much curiosity because of its newness. To level set, vSAN ESA is an optimized version of vSAN that exploits the full potential of the very latest in hardware, such as multi-core processing, faster and larger capacity memory, and NVMe technology to unlock new capabilities to drive new levels of performance and efficiency.  You can get an in-depth look at vSAN ESA in David Glynn’s blog. It is important to note that vSAN ESA is an alternative, optional vSAN architecture. The existing architecture (which is now referred to as Original Storage Architecture (OSA)) is still available in vSAN 8.  It’s a choice that users can make on which one to use when deploying clusters.

    In order to deploy VxRail clusters with vSAN ESA, you need to order brand-new VxRail nodes specifically configured for vSAN ESA. This new architecture eliminates the use of discrete cache and capacity drives: nodes require all-NVMe storage drives, and each drive contributes to both cache and capacity. VxRail 8.0.000 offers two platform choices: the E660N and the P670N. The user will select either 3.2 TB or 6.4 TB TLC NVMe storage drives to populate each node in their new VxRail cluster with vSAN ESA. To learn about the configuration options, see David Glynn’s blog.
  • Support for vSphere 8 in VxRail 8.0.000 also includes the increased cache size for VxRail clusters with vSAN 8.0 OSA. The increase from 600 GB to 1.6 TB per disk group will provide a significant performance gain. VxRail already offers cache drives that can take advantage of the larger cache size. Note that it is easier to deploy a new cluster with a larger cache size than to expand the cache of an existing cluster. (For existing clusters, nodes need their disk groups rebuilt when the cache is expanded, which can be a lengthy and tedious endeavor.)

Major VMware releases like vSphere 8 often shine a light on the differentiated experience that our VxRail users enjoy. The checklist of considerations only grows when you’re looking to upgrade to a new software train. VxRail users have come to expect that VxRail provides them the necessary guardrails to guide them safely along the upgrade path to reach their destination. The 800,000 hours of test run time performed by our 100+ staff members, who are dedicated to maintaining the VxRail Continuously Validated States, is what gives our customers the confidence to move fearlessly from one software version to the next.  And for customers looking to explore the potential of vSAN ESA, the partnership between VxRail and VMware engineering teams adds to why VxRail is the fastest and most effective path for users to maximize the return on their investment in VMware’s latest technologies.

If you’re interested in upgrading to VxRail 8.0.000, please read the release notes.

If you’re looking for more information about vSAN ESA and VxRail’s support for vSAN ESA, check out this blog.

Author: Daniel Chiu


Read Full Blog
  • VMware
  • VxRail
  • vSAN
  • price/performance

VxRail with vSAN Express Storage Architecture (ESA)

David Glynn David Glynn

Mon, 09 Jan 2023 14:40:28 -0000

|

Read Time: 0 minutes

vSAN Express Storage Architecture: the start of a new era? It may well be, given the dramatic gains in performance that VMware is claiming (my VxRail with vSAN ESA performance blog will be next month) and the major changes to the capabilities of data services provided in vSAN ESA. It’s important to understand that this is the next step in vSAN’s evolution, not an end. vSAN’s Original Storage Architecture (OSA) has been continuously evolving since it was first released in 2014. vSAN 8.0 Express Storage Architecture is just another small step on that evolutionary journey, well, maybe more of a giant leap. A giant leap that VxRail will take along with it.

vSAN OSA was designed at a time when spinning disks were the norm, flash was expensive, processors with double-digit core counts were new-ish, and 10 Gb networking was for switch uplinks, not servers. Since then, there have been significant changes in the underlying hardware, which vSAN has benefited from and leveraged along the way. Fast forward to today: spinning disk is for archival use, NVMe is relatively cheap, 96-core processors exist, and 25 Gb networking is the greenfield default, with 100 Gb networking available for a small premium. It is therefore no surprise to see VMware optimizing vSAN to exploit the full potential of the latest hardware, unlocking new capabilities, higher efficiency, and more performance. Does this spell the end of the road for vSAN OSA? Far from it! Both architectures are part of vSAN 8.0, with OSA getting several improvements, the most exciting of which is the increased write buffer capacity from 600 GB to 1.6 TB per disk group. This will not only increase performance but, equally important, also improve performance consistency.

Before you get excited about the performance gains and new capabilities that upgrading to vSAN ESA will unlock on your VxRail cluster, be aware of one important item. vSAN ESA is, for now, greenfield only. This was done to enable vSAN ESA to fully exploit the potential of the latest in server hardware and protocols to deliver new capabilities and performance to meet the ever-evolving demands that business places on today’s IT infrastructure.

Aside from being greenfield only, vSAN ESA has particular hardware requirements. So that you can hit the ground running this New Year with vSAN ESA, we’ve refreshed the VxRail E660N and P670N with configurations that meet or exceed vSAN ESA’s significantly different requirements, enabling you to purchase with confidence:

  • Six 3.2TB or 6.4TB mixed-use TLC NVMe devices
  • 32 cores 
  • 512GB RAM
  • Two 25Gb NIC ports, with 100Gb recommended for maximum performance. Yes, vSAN ESA will saturate a 25Gb network port. And yes, you could bond multiple 25Gb network ports, but the price delta (including switches and cables) between quad 25Gb and dual 100Gb networking is surprisingly small.

And as you’d expect, VxRail Manager has already been in training and is hitting the ground running alongside you. At deployment, VxRail Manager will recognize this new configuration, deploy the cluster following vSAN ESA best practices and compatibility checks, and perform future updates with Continuously Validated States.

But hardware is only half of the story. What VMware did with vSAN to take advantage of the vastly improved hardware landscape is key. vSAN ESA stands on the shoulders of the work that OSA has done, re-using much of it but optimizing the data path for today’s hardware. These architectural changes occur in two places: a new log-structured file system, and an optimized log-structured object manager and data structure. Pete Koehler’s blog post An Introduction to the vSAN Express Storage Architecture explains this in a clear and concise manner, and much better than I could. What I found most interesting is that these changes have created a storage paradox: high-performing erasure coding with highly valued data services:

Figure 1.  Log structured file system - optimized data handling

  • Data services like compression and encryption occur at the highest layer, minimizing process amplification, and lowering processor and network utilization. To put this another way, data services are done once, and the resulting now smaller and encrypted data load is sent over the network to be written on multiple hosts. 
  • The log-structured file system rapidly ingests data, while organizing it into a full stripe write. The key part here is that full stripe write. Typically, with erasure coding, a partial stripe write is done. This results in a read-modify-write which causes the performance overhead we traditionally associate with RAID5 and RAID6. Thus, full stripe writes enable the space efficiency of RAID5/6 erasure coding with the performance of RAID1 mirroring. 
  • Snapshots also benefit from the log structured file system, with writes written to new areas of storage and metadata pointers tracking which data belongs to which snapshot. This change enables Scalable, High-Performance Native Snapshots with compatibility for existing 3rd party VDAP backup solutions and vSphere Replication.

Does this mean we get to have our cake and eat it too? This is certainly the case, but check out my next blog where we’ll delve into the brilliant results from the extensive testing by the VxRail Performance team. 

Back to the hardware half of the story. Don’t let the cost of mixed-use NVMe drives scare you away from vSAN ESA; the TCO of ESA is actually lower than that of OSA. A few minor things contribute to this: no SAS controller and no dedicated cache devices. The big saving, however, is that because of ESA’s RAID-5/6 with the performance of RAID-1, less capacity is needed. Traditionally, performance meant RAID1 mirroring, which requires twice the raw capacity, but ESA RAID6 can deliver comparable performance with 33% more usable capacity and better resiliency with a failures-to-tolerate of two. (For example, 120 TB of raw capacity yields roughly 60 TB usable with RAID1 mirroring, but about 80 TB usable with RAID6 4+2.) Even small clusters benefit from ESA with adaptive RAID5, which has a 2+1 data placement scheme for use on clusters with as few as three nodes. As these small clusters grow beyond five nodes, vSAN ESA will adapt that RAID5 2+1 data placement to the more efficient RAID5 4+1 data placement.

Figure 2.  Comparing vSAN OSA and ESA on different storage policies against performance and efficiency

Finally, ESA has an ace up its sleeve with the more resource efficient and granular compression with a claimed “up to a 4x improvement over original storage architecture”. ESA’s minimum hardware requirements may seem high, but bear in mind, they are specified to enable ESA to deliver the high performance it is capable of. When running the same workload that you have today on your VxRail with vSAN OSA cluster on a VxRail with vSAN ESA cluster, the resource consumption will be noticeably lower – releasing resources for additional virtual machines and their workloads. 

A big shoutout to my technical marketing peers over at VMware for the many great blogs, videos, and other assets they have delivered. I linked to several of them above, but you can find all of their vSAN ESA material over at core.vmware.com/vsan-esa, including a very detailed FAQ and an updated Foundations of vSAN Architecture video series on YouTube.

vSAN Express Storage Architecture is a giant leap forward for datacenter administrators everywhere and will draw more of them into the world of hyperconverged infrastructure. vSAN ESA on VxRail provides the most effective and secure path for customers to leverage this new technology. The New Era has started, and it is going to be very interesting.

Author: David Glynn, Sr. Principal Engineer, VxRail Technical Marketing

Images courtesy of VMware

For the curious and the planners out there, migrating to vSAN ESA is, as you’d expect, just a vMotion and SvMotion.


Read Full Blog
  • HCI
  • Ansible
  • VxRail
  • API
  • REST API

Infrastructure as Code with VxRail Made Easier with Ansible Modules for Dell VxRail

Karol Boguniewicz Karol Boguniewicz

Tue, 15 Nov 2022 16:27:36 -0000

|

Read Time: 0 minutes

Many customers are looking at Infrastructure as Code (IaC) as a better way to automate their IT environment, which is especially relevant for those adopting DevOps. However, not many customers are aware of a capability for accelerating IaC implementation with VxRail that we have offered for some time already: Ansible Modules for Dell VxRail.

What is it? It's a collection of Ansible modules, developed and maintained by Dell, that uses the VxRail API to automate VxRail operations from Ansible.

By the way, if you're new to the VxRail API, first watch the introductory whiteboard video available on YouTube.


Ansible Modules for Dell VxRail are well-suited for IaC use cases. They are written in such a way that all requests are idempotent and hence fault-tolerant. This means that the result of a successfully performed request is independent of the number of times it is run.

Besides that, instead of just providing a wrapper for individual API functions, we automated holistic workflows (for instance, cluster deployment, cluster expansion, LCM upgrade, and so on), so customers don't have to figure out how to monitor the operation of the asynchronous VxRail API functions.  These modules provide rich functionality and are maintained by Dell; this means we're introducing new functionality over time. They are already mature—we recently released version 1.4.

Finally, we are also reducing the risk for customers willing to adopt the Ansible modules in their environment, thanks to the community support model, which allows you to interact with the global community of experts. From the implementation point of view, the architecture and end-user experience are similar to the modules we provide for Dell storage systems.

Getting Started

Ansible Modules for Dell VxRail are available publicly from the standard code repositories: Ansible Galaxy and GitHub. You don't need a Dell Support account to download and start using them.

Requirements

The requirements for the specific version are documented in the "Prerequisites" section of the description/README file.

In general, you need a Linux-based server with the supported Ansible and Python versions. Before installing the modules, you have to install a corresponding, lightweight Python SDK library named "VxRail Ansible Utility," which is responsible for the low-level communication with the VxRail API. You must also meet the minimum version requirements for the VxRail HCI System Software on the VxRail cluster.

This is a summary of requirements for the latest available version (1.4.0) at the time of writing this blog:

Ansible Modules for Dell VxRail   VxRail HCI System Software version   Python version   VxRail Ansible Utility version   Ansible version
1.4.0                             7.0.400                              3.7.x            1.4.0                            2.9 and 2.10

Installation

You can install the SDK library by using git and pip commands. For example:

git clone https://github.com/dell/ansible-vxrail-utility.git
cd ansible-vxrail-utility/
pip install .

Then you can install the collection of modules with this command:

ansible-galaxy collection install dellemc.vxrail:1.4.0
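
To confirm that the collection installed where Ansible expects, you can list it (a quick check; note that the collection list subcommand is available in Ansible 2.10 but not in 2.9):

ansible-galaxy collection list dellemc.vxrail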

Testing

After the successful installation, we're ready to test the modules and communication between the Ansible automation server and VxRail API.

I recommend performing that check with a simple module (and corresponding API function) such as dellemc_vxrail_getsysteminfo, using GET /system to retrieve VxRail System Information.

Let's have a look at this example (you can find the source code on GitHub):
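Here is a minimal sketch of what such a playbook can look like. The collection and module names come from this blog, while the parameter names (vxmip, vcadmin, vcpasswd) are assumptions to be verified against the module documentation on GitHub:

---
- name: Retrieve VxRail system information
  hosts: localhost
  connection: local
  vars:
    vxmip: 192.168.10.100                 # VxRail Manager address (parameter name assumed)
    vcadmin: administrator@vsphere.local  # vCenter administrator account
    vcpasswd: MyPassword!                 # in practice, keep this in an Ansible vault file
  tasks:
    - name: Call GET /system through the getsysteminfo module
      dellemc.vxrail.dellemc_vxrail_getsysteminfo:
        vxmip: "{{ vxmip }}"
        vcadmin: "{{ vcadmin }}"
        vcpasswd: "{{ vcpasswd }}"
      register: system_info

    - name: Display the retrieved system information
      ansible.builtin.debug:
        var: system_info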

 
Note that this playbook is run on a local Ansible server (localhost), which communicates with the VxRail API running on the VxRail Manager appliance using the SDK library. In the vars section, we need to provide, at a minimum, the authentication to VxRail Manager for calling the corresponding API function. We could move these variable definitions to a separate file and include the file in the playbook with vars_files. We could also store sensitive information, such as passwords, in an encrypted file using the Ansible vault feature. However, for the simplicity of this example, we are not using this option.

After running this playbook, we should see output containing the retrieved VxRail system information.


Cluster expansion example

Now let's have a look at a bit more sophisticated, yet still easy-to-understand, example. A typical operation that many VxRail customers face at some point is cluster expansion. Let's see how to perform this operation with Ansible (the source code is available on GitHub):
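Here is a hedged sketch of what the expansion playbook can look like. The host_psnt, vcpasswd, mgt_passwd, and root_passwd variables and the sensitive-vars.yml vault file are described in this blog; the module name (dellemc_vxrail_cluster_expansion) and the remaining parameter names are assumptions to be checked against the documentation on GitHub:

---
- name: Expand a VxRail cluster with a new node
  hosts: localhost
  connection: local
  vars_files:
    - sensitive-vars.yml                  # vault-encrypted file with vcpasswd, mgt_passwd, root_passwd
  vars:
    vxmip: 192.168.10.100                 # VxRail Manager address (parameter name assumed)
    vcadmin: administrator@vsphere.local
    host_psnt: ABCDEF1234567              # PSNT of the node selected from the available pool
    hostname: vxrail-node-05              # hostname for the new node (assumed parameter)
    mgt_ip: 192.168.10.105                # management IP for the new node (assumed parameter)
  tasks:
    - name: Add the node and monitor the asynchronous expansion until it finishes
      dellemc.vxrail.dellemc_vxrail_cluster_expansion:
        vxmip: "{{ vxmip }}"
        vcadmin: "{{ vcadmin }}"
        vcpasswd: "{{ vcpasswd }}"
        host_psnt: "{{ host_psnt }}"
        hostname: "{{ hostname }}"
        mgt_ip: "{{ mgt_ip }}"
        mgt_passwd: "{{ mgt_passwd }}"
        root_passwd: "{{ root_passwd }}"
      register: expansion_result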

In this case, I've exported the definitions of the sensitive variables, such as vcpasswd, mgt_passwd, and root_passwd, into a separate, encrypted Ansible vault file, sensitive-vars.yml, to follow the best practice of not storing them in clear text directly in playbooks.

As you might expect, besides the authentication, we now need to provide more parameters, namely the configuration of the newly added host, defined in the vars section. We select the new host from the pool of available hosts, using the PSNT identifier (host_psnt variable).

This is an example of an operation performed by an asynchronous API function. Cluster expansion is not something that is completed immediately but takes minutes. Therefore, the progress of the expansion is monitored in a loop until it finishes or the number of retries is passed. If you communicated with the VxRail API directly by using the URI module from your playbook, you would have to take care of such monitoring logic on your own; here, you can use the example we provide.

You can watch the operation of the cluster expansion Ansible playbook with my commentary in this demo: 



Getting help

The primary source of information about the Ansible Modules for Dell VxRail is the documentation available on GitHub. There you'll find all the necessary details on all currently available modules: a quick description, supported endpoints (the VxRail API functions used), required and optional parameters, return values, and the location of the log file (modules have a built-in logging feature to simplify troubleshooting; logs are written to the /tmp directory on the Ansible automation server). The GitHub documentation also contains multiple samples showing how to use the modules, which you can easily clone and adjust as needed to the specifics of your VxRail environment.

There's also built-in documentation for the modules, accessible with the ansible-doc command.
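
For example, assuming the fully qualified module name used in the sketches above, you can pull up a module's documentation right from the shell:

ansible-doc dellemc.vxrail.dellemc_vxrail_getsysteminfo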

Finally, the Dell Automation Community is a public discussion forum where you can post your questions and ask for help as needed.

Conclusion 

I hope you now understand the Ansible Modules for Dell VxRail and how to get started. Let me quickly recap the value proposition for our customers. The modules are well-suited for IaC use cases, thanks to automating holistic workflows and idempotency. They are maintained by Dell and supported by the Dell Automation Community, which reduces risk. These modules are much easier to use than the alternative of accessing the VxRail API on your own. We provide many examples that can be adjusted to the specifics of the customer’s environment.

Resources

To learn more, see these resources:

Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter: @cl0udguide


Read Full Blog
  • VxRail

Scaling Up VxRail: Managing an Ecosystem

Dylan Jackson Dylan Jackson

Tue, 08 Nov 2022 20:13:27 -0000

|

Read Time: 0 minutes

This is the sixth article in a series introducing VxRail concepts.

The engineering team behind VxRail has done a fantastic job building cluster and lifecycle management tools into our software. The cluster update process is an excellent example of one of these software enhancements. However, we need to go further. The value of these enhancements decays a bit as you add more and more clusters, resulting in more and more actions required to manage an environment, an end result that is antithetical to the entire idea behind VxRail. Fortunately, this reintroduction of complexity never occurs, thanks to the features and functionality of the VxRail API and CloudIQ. The API scales out management operations by providing access to many of the same software calls that VxRail makes. Then we have CloudIQ, a cloud-based management utility that interfaces with various Dell infrastructure products and that VxRail uses to improve cluster management as environments scale out.

Expanding and automating your VxRail environment

For readers who aren’t familiar with what APIs are, the acronym stands for “Application Programming Interface.” APIs exist to help two, or sometimes more, pieces of software communicate with each other. VxRail has its own API that works in conjunction with VMware APIs and the Redfish API for the iDRAC and hardware. This enables the management of hardware and both VMware and VxRail software at scale. The VxRail API Guide shows the full range of calls available to developers; the last count I saw was over 70 individual calls. There’s more to the API than its comprehensive nature, though: it is also simple to use. The API can be consumed through the Swagger web interface and a PowerShell module, providing simple command line interfaces that IT staff are already familiar with.

The API can help customers of any size, but those I see benefiting the most are large customers with tens to hundreds of nodes across many clusters. The scale of these environments creates a need for further automation that can link VxRail clusters with management tools and practices. Some use-case examples include node discovery, to see what hardware is available and the versions running on that hardware, and examining node and cluster health throughout the data center. The API can also enable infrastructure-as-code projects, such as automatically spinning up and winding down clusters as needed. Even automating simple tasks, like shutting down clusters in a way that maintains data consistency, provides massive value to VxRail customers.
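
As an illustration, here is a minimal sketch of that cluster shutdown example driven straight from the shell. It assumes the cluster shutdown endpoint and its dryrun flag as documented in the VxRail API Guide, basic authentication with vCenter administrator credentials, and a placeholder VxRail Manager address; verify all of these against the guide for your release:

# Ask VxRail Manager to validate, but not actually perform, a cluster shutdown.
# A dry run checks whether the cluster can be shut down cleanly.
curl -k -X POST \
  -u 'administrator@vsphere.local:MyPassword!' \
  -H 'Content-Type: application/json' \
  -d '{"dryrun": true}' \
  https://vxrail-manager.example.com/rest/vxm/v1/cluster/shutdown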

CloudIQ: Helping Manage Your Ecosystem

VxRail has more than the API to aid in managing large environments. As great as the API is, it takes a bit of preparation to use, whereas CloudIQ is ready for use as soon as Secure Connect Gateway is enabled and clusters are enrolled. If you haven’t heard of CloudIQ, I recommend checking out the CloudIQ simulator. The simulator doesn’t provide access to the complete feature set of CloudIQ but makes for an excellent introduction to what the product can do.

CloudIQ is a cloud-based application that monitors and helps resolve problems with Dell storage, server, data protection, networking, HCI, and CI products, and APEX services. You might see CloudIQ referred to as an AIOps application, short for artificial intelligence for IT operations. In the case of VxRail, cluster data is sent in through Secure Connect Gateway, where CloudIQ can then perform analytics functions. The output of these analytics can be used to create custom reports, estimate storage utilization, reduce IT risk, and recover from problems faster. Beginning in May and continuing into June, Dell ran a survey of CloudIQ users. These users were able to accelerate IT recovery by 2x to as much as 10x, which saved them about an entire workday per week, on average. CloudIQ provides all this with no financial or IT overhead, because it is freely available to Dell customers connecting to the Dell cloud.

Conclusion

Growth is exciting, but it comes with new challenges, and old ones don’t go away; they get bigger. VxRail provides customers with an API designed to work with the iDRAC and VMware APIs to provide automation throughout the entire cluster stack. This helps customers reduce repetitive labor tasks and create infrastructure-as-code projects. Then with CloudIQ, IT staff can get a view of their Dell infrastructure equipment from one pane of glass. For VxRail, this includes software versions, cluster health scores, the ability to initiate updates, and other functionality. While the API offers most of its value to customers with very large VxRail footprints, almost all customers can also benefit from CloudIQ to view multiple clusters as well as the rest of their Dell infrastructure equipment.

 

Read Full Blog
  • VxRail

Recovering Clusters Faster: VxRail Serviceability

Dylan Jackson Dylan Jackson

Tue, 08 Nov 2022 20:13:28 -0000

|

Read Time: 0 minutes

This is the fifth article in a series introducing VxRail concepts.

Every tool or piece of equipment out there requires maintenance of some kind. That’s as true for the cars we drive as it is for the servers, storage, and switches that power our data centers. However, a lot of data center maintenance is reactive. Look at hardware failure as an example. If a drive were to fail in one of your clusters, nothing would happen until IT staff respond. VxRail offers the ability to automate some of these responses. Let’s talk about what happens when things go sideways in a cluster’s life. 

Help Righting the Ship

One of the roles that the VxRail Manager VM fills is that of a centralized alert collector. VxRail integrates with the iDRAC to monitor hardware health and with vCenter to monitor VMware software, in addition to its own internal alerts and events. VxRail monitors all this information and creates a more holistic monitoring system for a cluster. This obviously benefits IT staff, but there are some additional benefits to this multi-level integration that other solutions might struggle to match.

VxRail uses a service called “Secure Connect Gateway” to establish a static connection to Dell data centers. This enables a lot of functionality on VxRail, including with CloudIQ for multi-cluster management, but that’s the subject of a future discussion. This static connection helps technical support become more proactive in helping you recover your clusters. For example, say you had a disk fail. If Secure Connect Gateway is enabled, VxRail would dial home and create a case automatically. Support could then use this to confirm the disk failure and confirm that there aren’t any other hardware or software issues being raised. Depending on what warranty services you have, you could even opt to have a replacement hard drive sent out automatically. It wasn’t uncommon for me to see support cases where we were the first to let the administrators know that there was an issue. It was definitely nice to be able to tell them a correction was already on its way out to them.

The phone-home capability provided by Secure Connect Gateway adds more value than just automating parts of some dispatches; the gateway also aids in the support experience. It can do this in a few ways, including providing an integrated support chat applet, accessible from the VxRail Support tab in vCenter. Secure Connect Gateway also facilitates the transfer of the system logs needed to troubleshoot almost any problem in the VxRail stack. These logs include the VxRail Manager virtual machine logs, vCenter logs, ESXi logs, iDRAC logs, and platform logs. The vCenter and ESXi logs are specific to the software powering the cluster, while the iDRAC and platform logs contain the hardware inventory, LCM activity, the out-of-band hardware log, and more.

I’ve touched on a lot of topics surrounding the support experience, but there’s one more that absolutely needs to be mentioned—that’s the people in support! The technical support staff standing behind VxRail are a very talented and knowledgeable group of folks. Many of these agents are VMware Certified Professionals, some looking for higher levels of certification, like the VCIX, one of VMware’s expert level certifications. This cumulative knowledge pool allows our support team to resolve over 95% of the incidents they encounter without needing a higher-level VMware engagement. However, in the instances where a VMware engagement is needed, say that a bug is discovered with vCenter for example, then VxRail support can escalate to VMware on the end customer’s behalf. This helps to create continuity in the support experience that might be missing from a solution without the jointly engineered nature of VxRail.

Conclusion

Servicing clusters can become a challenge, no matter the environment. Hardware and software both encounter failures that require an IT staff response. As environments grow and scale, the challenge of maintaining health for the environment grows, too. To help meet this expanding problem, VxRail helps administrators by automatically collecting events and alerts from the hardware and both VMware and VxRail software. This information can then be compressed into log bundles that can be shared with support. Contacting support is even easier, thanks to an integrated chat connecting your host to VxRail support staff. These support staff are specialists in both VMware and VxRail software, capable of resolving a vast majority of all issues with a single vendor. Our final discussion will be on the extensibility of VxRail, featuring CloudIQ and the VxRail API. See you there!


Read Full Blog
  • VxRail
  • Kubernetes
  • Tanzu
  • VCF
  • K8s

Improved management insights and integrated control in VMware Cloud Foundation 4.5 on Dell VxRail 7.0.400

Jason Marques Jason Marques

Tue, 11 Oct 2022 12:59:13 -0000

|

Read Time: 0 minutes

The latest release of the co-engineered hybrid cloud platform delivers new capabilities to help you manage your cloud with the precision and ease of a fighter jet pilot in the cockpit! The new VMware Cloud Foundation (VCF) on VxRail release includes support for the latest Cloud Foundation and VxRail software components based on vSphere 7, the latest VxRail P670N single socket All-NVMe 15th Generation HW platform, and VxRail API integrations with SDDC Manager. These components streamline and automate VxRail cluster creation and LCM operations, provide greater insights into platform health and activity status, and more! There is a ton of airspace to cover, ready to take off? Then buckle up and let’s hit Mach 10, Maverick!

VCF on VxRail operations and serviceability enhancements

Support for VxRail cluster creation automation using SDDC Manager UI

The best pilots are those that can access the most fully integrated tools to get the job done all from one place: the cockpit interface that they use every day. Cloud Foundation on VxRail administrators should also be able to access the best tools, minus the cockpit of course.

The newest VCF on VxRail release introduces support for VxRail cluster creation as a fully integrated end-to-end SDDC Manager workflow, driven from within the SDDC Manager UI. This integrated API-driven workload domain and VxRail cluster SDDC Manager feature extends the deep integration capabilities between SDDC Manager and VxRail Manager. This integration enables users to deploy VxRail clusters when creating new VI workload domains or expanding existing workload domains (by adding new VxRail clusters into them), all from an SDDC Manager UI-driven end-to-end workflow experience.

In the initial SDDC Manager UI deployment workflow integration, only unused VxRail nodes discovered by VxRail Manager are supported. It also only supports clusters that are using one of the VxRail predefined network profile cluster configuration options. This method supports deploying VxRail clusters using both vSAN and VMFS on FC as principal storage options.

Another enhancement allows administrators to provide custom user-defined cluster names and custom user-defined VDS and port group names as configuration parameters as part of this workflow.

You can watch this new feature in action in this demo.

Now that’s some great co-piloting!

Support for SDDC Manager WFO Script VxRail cluster deployment configuration enhancements

The SDDC Manager WFO Script deployment method was first introduced in VCF 4.3 on VxRail 7.0.202 to support advanced VxRail cluster configuration deployments within VCF on VxRail environments. This deployment method is also integrated with the VxRail API and can be used with or without VxRail JSON cluster configuration files as inputs, depending on what type of advanced VxRail cluster configurations are desired.

Note:

  • The legacy method for deploying VxRail clusters using the VxRail Manager Deployment Wizard has been deprecated with this release.
  • VxRail cluster deployments using the SDDC Manager WFO Script method currently require the use of professional services.

Proactive notifications about expired passwords and certificates in SDDC Manager UI and from VCF public API

To deliver improved management insights into the cloud infrastructure system and its health status, this release introduces new proactive SDDC Manager UI notifications for impending VCF and VxRail component expired passwords and certificates. Now, within 30 days of expiration, a notification banner is automatically displayed in the SDDC Manager UI to give cloud administrators enough time to plan a course of action before these components expire. Figure 1 illustrates these notifications in the SDDC Manager UI.

Figure 1. Proactive password and certificate expiration notifications in SDDC Manager UI

VCF also displays different types of password status categories to help better identify a given account’s password state. These status categories include: 

  • Active – Password is in a healthy state and not within a pending expiry window. No action is necessary.
  • Expiring – Password is in a healthy state but is reaching a pending expiry date. Action should be taken to use SDDC Manager Password Management to update the password.
  • Disconnected – Password of component is unknown or not in sync with the SDDC Manager managed passwords database inventory. Action should be taken to update the password at the component and remediate with SDDC Manager to resync.

The password status is displayed on the SDDC Manager UI Password Management dashboard so that users can easily reference it. 

Figure 2. Password status display in SDDC Manager UI

Similarly, certificate status state is also monitored. Depending on the certificate state, administrators can remediate expired certificates using the automated SDDC Manager certificate management capabilities, as shown in Figure 3.

Figure 3. Certificate status and management in SDDC Manager UI

Finally, administrators looking to capture this information programmatically can now use the VCF public API to query the system for any expired passwords and certificates. 
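
As a rough sketch of what that query might look like from the shell, the following uses the VCF public API's token and credentials endpoints. The endpoint paths and the accessToken field name are taken from the VCF public API as we understand it; treat them, along with the placeholder address and credentials, as assumptions to verify against the VCF API reference for your release:

# Request an API access token from SDDC Manager
TOKEN=$(curl -k -s -X POST https://sddc-manager.example.com/v1/tokens \
  -H 'Content-Type: application/json' \
  -d '{"username": "administrator@vsphere.local", "password": "MyPassword!"}' \
  | jq -r '.accessToken')

# List the credentials that SDDC Manager manages; the response can then be
# inspected for accounts approaching expiration
curl -k -s -H "Authorization: Bearer $TOKEN" \
  https://sddc-manager.example.com/v1/credentials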

Add and delete hosts from WLD clusters within a workload domain in parallel using SDDC Manager UI or VCF public API

Agility and efficiency are what cloud administrators strive for. The last thing anyone wants is to wait for the system to complete a task before being able to perform the next one. To address this, VCF on VxRail now allows administrators to add and delete hosts in clusters within a workload domain in parallel, using the SDDC Manager UI or the VCF public API (see the API sketch after the note below). This helps infrastructure management operations complete faster: some may even say at Mach 9!

Note:

  • Prerequisite: Currently, VxRail nodes must first be added to existing clusters using VxRail Manager before executing SDDC Manager add host workflow operations in VCF.
  • Currently, a maximum of 10 operations of each type can be performed simultaneously. Always check the VMware Configuration Maximums Guide for VCF documentation for the latest supported configuration maximums.

SDDC Manager UI: Support for Day 2 renaming of VCF cluster objects

To continue making the VCF on VxRail platform more accommodating to each organization’s governance policies and naming conventions, this release enables administrators to rename VCF cluster objects from within the SDDC Manager UI as a Day 2 operation.

New menu actions to rename the cluster are visible in-context when operating on cluster objects in the SDDC Manager UI. This is just the first step in a larger initiative to make VCF on VxRail even more adaptable to naming conventions across many other VCF objects in the future. Figure 4 shows the new in-context rename cluster menu option.

Figure 4.  Day 2 Rename Cluster Menu Option in SDDC Manager UI

Support for assigning user-defined tags to WLD, cluster, and host VCF objects in SDDC Manager

VCF on VxRail now incorporates SDDC Manager support for assigning and displaying user-defined tags for workload domain, cluster, and host VCF objects.

Administrators now see a new Tags pane in the SDDC Manager UI that displays tags that have been created and assigned to WLD, cluster, and host VCF objects. If no tags exist or are not yet assigned, or if changes to existing tags are needed, an assign link lets the administrator assign a tag or launch into that object in vCenter, where tag management (create, delete, modify) can be performed. When tags are instantiated, VCF syncs them and allows administrators to assign and display them in the Tags pane in the SDDC Manager UI, as shown in Figure 5.

Figure 5. User-defined tags visibility and assignment, using SDDC Manager

Support for SDDC Manager onboarding within SDDC Manager UI

VCF on VxRail is a powerful and flexible hybrid cloud platform that enables administrators to manage and configure the platform to meet their business requirements. To help organizations make the most of their strategic investments and start operationalizing them quicker, this release introduces support for a new SDDC Manager UI onboarding experience.

The new onboarding experience:

  • Focuses on Learn and plan and Configure SDDC Manager phases, with drill-down to configure each phase
  • Includes in-product context that enables administrators to learn, plan, and configure their workload domains, with added details such as documentation articles and technical illustrations
  • Introduces a step-by-step UI walkthrough wizard for initial SDDC Manager configuration setup
  • Provides a guided UI tour of SDDC Manager at each stage of configuration, reducing the learning curve for customers
  • Provides opt-out and revisit options for added flexibility

Figure 6 illustrates the new onboarding capabilities.

Figure 6. SDDC Manager Onboarding and UI Tour Experience

VCF on VxRail lifecycle management enhancements

VCF integration with VxRail Retry API

The new VCF on VxRail release delivers new integration between SDDC Manager and the VxRail Retry API to help reduce overall LCM execution time. If a cloud administrator attempts an LCM operation on a VxRail cluster within their VCF on VxRail workload domain and only a subset of the nodes in the cluster upgrades successfully, another LCM attempt is required to fully upgrade the remaining nodes in the cluster.

Before the VxRail Retry API, VxRail Manager would start LCM from the first node in the cluster and scan each node to determine whether it required an upgrade, even if the node had already been successfully upgraded. This rescan behavior added unnecessary time to the LCM execution window for customers with large VxRail clusters.

The VxRail Retry API has made LCM even smarter. During an LCM update where a cluster has a mix of updated and non-updated nodes, VxRail Manager automatically skips right to the non-updated nodes only and runs through the LCM process from there until all remaining non-updated nodes are upgraded. This can provide cloud administrators with significant time savings. Figure 7 shows the behavior difference between standard and enhanced VxRail Retry API Behavior.

Figure 7. Comparison between standard and enhanced VxRail Retry API LCM Behavior 

The VxRail Retry API behavior for VCF 4.5 on VxRail 7.0.400 has been natively integrated into the SDDC Manager LCM workflow. Administrators can continue to manage their VxRail upgrades within the SDDC Manager UI per usual. They can also take advantage of these improved operational workflows without any additional manual configuration changes.

Improved SDDC Manager prechecks

More prechecks have been integrated into the platform to help fortify stability and simplify operations. These are:

  • Verification of valid licenses for software components
  • Checks for expired NSX Edge cluster passwords
  • Verification that the system is not in an inconsistent state caused by any prior failed workflows
  • Additional host maintenance mode prechecks
    1. Determine if a host is in maintenance mode
    2. Determine whether CPU reservation for NSX-T is beyond VCF recommendation
    3. Determine whether DRS policy has changed from the VCF recommended (Fully Automated)
  • Additional filesystem capacity and permissions checks

While VCF on VxRail already has core prechecks that monitor many common system health issues, more prechecks will continue to be integrated into the platform with each new release.

Support for vSAN health check silencing

The new VCF on VxRail release also includes vSAN health check interoperability improvements. These improvements allow VCF to:

  • Address common upgrade blockers due to vSAN HCL precheck false positives
  • Allow vSAN pre-checks to be more granular, which enables the administrator to only perform those that are applicable to their environment
  • Display failed vSAN health checks during domain-level pre-check and upgrade LCM operations
  • Enable administrators to silence health checks

Display VCF configurations drift bundle progress details in SDDC Manager UI during LCM operations

In a VCF on VxRail context, configuration drift is the set of configuration changes required to bring upgraded BOM components (such as vCenter, NSX, and so on) into alignment with a new VCF on VxRail installation. These configuration changes are delivered by VCF configuration-drift LCM update bundles.

VCF configuration drift update improvements deliver greater visibility into what specifically is being changed, improved error details for better troubleshooting, and more efficient behavior for retry operations.

VCF Async Patch Tool support

VCF Async Patch Tool support offers both LCM and security enhancements.

Note: This feature is not officially included in this new release, but it is newly available.

The VCF Async Patch Tool is a new CLI-based tool that allows cloud administrators to apply individual component out-of-band security patches to their VCF on VxRail environment, separate from an official VCF LCM update release. This enables organizations to address security vulnerabilities faster, without having to wait for a full VCF release update. It also gives administrators control to install these patches without having to engage support resources.

Today, VCF on VxRail supports the ability to use the VCF Async Patch Tool for NSX-T and vCenter security patch updates only. Once patches have been applied and a new VCF BOM update is available that includes the security fixes, administrators can use the tool to download the latest VCF LCM release bundles and upgrade their environment back to an official in-band VCF release BOM. After that, administrators can continue to use the native SDDC Manager LCM workflow process to apply additional VCF on VxRail upgrades.

Note: Using VCF Async Patch Tool for VxRail and ESXi patch updates is not yet supported for VCF on VxRail deployments. There is currently separate manual guidance available for customers needing to apply patches for those components.

Instructions on downloading and using the VCF Async Patch Tool can be found here.

VCF on VxRail hardware platform enhancements

Support for 24-drive All-NVMe 15th Generation P670N VxRail platform

The VxRail 7.0.400 release delivers support for the latest VxRail 15th Generation P670N VxRail hardware platform. This 2U1N single CPU socket model delivers an All-NVMe storage configuration of up to 24 drives for improved workload performance. Now that would be a powerful single-engine aircraft!

Time to come in for a landing…

I don’t know about you, but I am flying high with excitement about all the innovation delivered with this release. Now it’s time to take ourselves down for a landing. For more information, see the following additional resources so you can become your organization’s Cloud Ace.

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional resources

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on Info Hub

VCF on VxRail Interactive Demo

VxRail YouTube channel

Read Full Blog
  • VxRail

VxRail Cluster Integrity and Resilience

Dylan Jackson Dylan Jackson

Tue, 01 Nov 2022 13:30:05 -0000

|

Read Time: 0 minutes

This is the fourth article in a series introducing VxRail concepts.

Maintaining cluster integrity is an important task to ensure normal business operations. Some readers might not have a great understanding of what cluster integrity means, so let’s quickly define what we’re discussing. Cluster integrity describes a state where a cluster remains free of hardware and software errors. One of the primary challenges to maintaining cluster integrity is handling change through the cluster life cycle. It’s very likely that an administrator will need to make some kind of change—whether to address something minor, like a disk failure, or something more complex, like needing to add new hardware to a cluster. Software updates are a different kind of challenge. Each cluster node has drivers, firmware, an operating system, and the more elaborate VMware and VxRail system software, adding up to a considerable number of files to consider. VxRail helps make patching more holistic and successful.


The hardware life cycle

As a Dell Support specialist, I spoke with customers about many challenges, and one of the biggest was moving through the hardware lifecycle. For customers working in a nonclustered environment, hardware refreshes frequently come along with highly involved migrations. Administrators can create clusters from these nodes to better provision IT resources, but such clusters are made from off-the-shelf hardware that isn't necessarily intended to work together. VxRail HCI System Software simplifies scale-out operations and drive additions.

Part of the VxRail advantage is the ability to provide administrators confidence while it facilitates change. Continuously Validated States help to achieve this by providing administrators with hardware choices they can select, knowing that their cluster’s current state was built to support that new hardware. For example, maybe you have a 3-node cluster and are ready to add nodes and expand it to a 5-node cluster, but a matching NIC isn’t available. Continuously Validated States define other NICs that will work without creating compatibility problems. Automation scripting, which is used at the time of adding the new nodes to the cluster, scans the network, identifies the available hosts, and orchestrates node assignment to the cluster. This allows customers to scale out a cluster quite easily.

Nodes can also be removed from a cluster with similar scripting. This scripting helps clusters make intergenerational migrations when used in conjunction with the node addition automation. Once the new-generation hosts arrive at the data center, they get added to the cluster with one wizard and can be removed with another to complete the life cycle move. However, VxRail also supports heterogeneous clusters. This would allow you to continue using the older cluster nodes as long as they are needed and comply with each of the cluster’s Continuously Validated States as the cluster progresses through the life cycle.

Software patching

Continuing with our travel analogy from a previous blog in this series, if the update process is a vehicle, then each patch is a piece of cargo that an administrator has to worry about. These patches can quickly build up into backlogs for IT staff, even if only some of them need to be applied to clusters. VxRail life cycle management processes improve patch control by consolidating these independent release cycles into singular bundles that have confirmed compatibility between the new software packages. This helps promote cluster integrity by creating order among these patches. The patches are bundled up into VxRail releases that are made available to cluster owners within 30 days of VMware’s releases. IT staff can then use these resources to be more selective in the patching process and can think of these cycles as opportunities to add new features and functionality to clusters, as opposed to a task with no clearly defined benefit or purpose. 

Conclusion

Maintaining cluster integrity means maintaining the stable and productive working state that businesses need nodes to be in. An HCI cluster built internally faces additional challenges maintaining this integrity, especially as software and hardware changes are needed in the environment. VxRail Continuously Validated States help to both broaden the viable hardware options and provide software patch control through update bundles that bake in the patches. Orchestration automation serves to control the cluster as patches are applied or when hardware changes, such as adding new nodes to the cluster, are made. The next article in this series will discuss serviceability and include topics like disk replacement, alerts and events, the overall support experience, and more!

Read Full Blog
  • VxRail

Moving Through the VxRail Cluster Life Cycle

Dylan Jackson Dylan Jackson

Tue, 01 Nov 2022 13:29:21 -0000

|

Read Time: 0 minutes

This is the third article in a series introducing VxRail concepts.

The last entry in this series discussed Continuously Validated States and the benefits that come with having new cluster states tested before they ever arrive or are implemented. This article is about movement. More specifically, movement to new cluster states with new software, firmware, and drivers. If Continuously Validated States help provide known-good destination states and create our map, then the VxRail enhanced update process creates the vehicle used to move from one state to the next. Let’s dive into some of the specifics that illustrate the advantage of the VxRail process over traditional update processes and vLCM.


Single-bundle updates

The first step in an update cycle is to define a new state, so let’s discuss that first. Whether updating a single server or an HCI solution you built yourself, the first step in building this state is identifying all the hardware so that nothing gets missed in the cycle. Once all hardware is accounted for, administrators can begin to collect the updated driver and firmware packages. Depending on the volume of hardware, updating a single node might well require around 15 different packages to touch all the system software, drivers, and firmware. If the environment has different hardware among nodes, then this task becomes more complex with more components to account for.

Where administrators would previously construct an update themselves, VxRail users perform updates by using prebuilt packages. These prebuilt packages contain the components to move a cluster to its next Continuously Validated State and are intended to service the VxRail family, as opposed to a specific cluster. This means that whether you’re primarily working with smaller clusters with different hardware or large, monolithic clusters, you can use a single bundle to bring the entire cluster up to date. In addition to the individual update packages, the update also carries a new Continuously Validated State with it. This frees up IT resources to complete more important tasks that have a greater business impact.

Life cycle management prechecks

In addition to compressing updates into single packages, the VxRail update process performs a series of readiness prechecks to ensure that the cluster is in a state where it is ready to accept an update. These tasks are examples of VxRail automation that obviously wouldn’t be present in an IT-designed HCI solution or traditional infrastructure. Let’s talk about what some of these prechecks are and what they can do for you as a user.

The precheck process examines more than 200 different items, so I won’t go into all of them here. However, I would like to highlight a few areas. Let’s start this part of the discussion with hardware and work our way up. Hardware examination runs a full range of exams to confirm cluster health. For example, physical checks are performed on memory to look for memory bit errors that could cause a host to crash during the update. Some other examples include inventory checks, to confirm that the hardware profile hasn’t changed to include components that our bundles can’t address, such as an unsupported NIC or another PCI device.

Prechecks extend to software versions as well. Software prechecks examine items such as whether a host successfully entered maintenance mode or if services are in the proper state to begin an update. These prechecks, in some cases, replace user interaction, as with the ability to cycle hosts into maintenance mode.

Change analysis

After the prechecks are complete, VxRail shows users all the hardware and software affected by the update. This helps users understand exactly what is changing in the environment. As indicated in the screenshot, this information also helps identify the specific changes to the cluster.

Launching an update

Users have two options for launching an update—they can update the cluster immediately, or they can schedule it to run at a planned time. A lot of customers I worked with liked to schedule their updates to run over weekends. Users might think that this is largely analogous to VMware’s vLCM. vLCM does offer automation benefits, but users must still create their own cluster profiles, create their own images, and perform their own testing. So, while vLCM certainly offers some automation advantages, VxRail takes this further by enhancing the update package collection and application processes. VxRail clusters can also be updated through the API or with CloudIQ.

Conclusion

Hopefully, this has helped illuminate some of the value that VxRail can provide to the cluster update cycle. Users get the benefit of consolidated update packages, saving the time and effort of collecting these files themselves. An in-depth series of prechecks then combs through cluster hardware and software to confirm that a cluster is in an ideal state to accept update packages. Once this is complete, change analysis scripting specifies the changes to be made to the environment. Finally, with the application of the update, VxRail sequentially moves node to node and cycles each through the update list, placing the nodes into maintenance mode and having vMotion move workloads to other available nodes. Taken together, these services, which are under continuous improvement by VxRail Engineering, help to make the update cycle as easy as possible.

 

Read Full Blog
  • VxRail
  • life cycle management

Easing Life Cycle Management with VxRail

Dylan Jackson Dylan Jackson

Thu, 13 Oct 2022 22:47:20 -0000

|

Read Time: 0 minutes

This is the second article in a series introducing VxRail concepts.

I mentioned in the introduction blog that I previously worked in Technical Support for Dell. That experience really set the stage for me to embrace VxRail because the VxRail approach to life cycle management eases a lot of the pain points I saw in support engagements. Many of the issues I saw were resolved with system updates, and VxRail makes moving through the life cycle significantly easier than with traditional hardware or an internally built solution. We do this with our state management model, known as Continuously Validated States. Let’s take some time to understand what these are, because they help enable VxRail customers to do more with their infrastructure more easily than before.


Defining a state

I’m someone who likes to be thorough, so if you already understand what a system state is, then you can skip this section. But for readers newer to infrastructure, this might be a different way to think about things. A system state, be it good or bad, refers to the hardware, firmware, drivers, and system software that power the infrastructure. When your servers or clusters are in a “good” or “happy” state, then everything is working optimally. A “bad” or “faulty” state might have a compatibility issue creating crashes, or it might contain failed hardware. Replacing failed hardware is an example of modifying the hardware state. Modifying the software state might look like an update to VMware software. All these changes then represent new individual states.

VxRail takes the chaos out of traditional state management for customers and replaces it with confidence. VxRail Continuously Validated States make the exchange from chaos to confidence possible. Updating a cluster, such as to a new vCenter version, means changing a cluster, and that change introduces uncertainty. That uncertainty is natural because customers are moving their infrastructure into new unknown configurations. 

Let’s discuss the “Validated” portion of Continuously Validated States. VxRail engineering validates the current state, the state you intend to go to, and the continuity through the update cycle. Customers can gain tremendous value by relying on VxRail Engineering to validate all three aspects of an upgrade. This is the “Validated” part of Continuously Validated States that completely inverts the experience I got used to while working in Technical Support.

Moving to a new state

When you make a change, such as adding a driver or updating system software, you are modifying the system state. Making changes to system states has always been a problem with different remediation strategies that have revealed new IT challenges. I believe the challenge that Continuously Validated States best addresses can be described as, “I need my infrastructure to help me respond to new business needs and make moving through the life cycle as easy as possible.” Modifying an HCI cluster designed internally would present additional difficulties because you don’t know what kind of behavior to expect without testing.

This kind of change anxiety is what the validation step in our state-creation process aims to correct. Before the VxRail Engineering team releases a new VxRail update package (a package that would change your cluster's system state), the package is tested in the team's dedicated testing facility for nearly 800,000 cumulative hours. The facility has comprehensive access to the hardware that VxRail supports, allowing thorough testing. The purpose of this testing is to first ensure that all the new supported configurations are stable, and then ensure that the move from old cluster states to the new states is a reliable process.

Lifecycle continuity

The creation of a series of known-good configurations isn’t the only benefit VxRail can provide with this different approach to state management. Let’s talk about the continuity that Continuously Validated States provide. VxRail clusters spend their entire lives conforming with and moving between different configurations supported and defined by the Continuously Validated State. This creates a continuity that begins from the time a cluster is first unloaded from the truck, persists through the changes of both the update cycle and hardware modification, and continues on to the final point of cluster retirement. 

Let’s tie these ideas together. I like to think of Continuously Validated States as being like a GPS that helps avoid road construction during a cluster’s life. VxRail can do this because our engineering teams are building the roads and identifying the best routes.  Go ahead and imagine a map for me. I like to imagine a map of my home state. No matter what kind of map, it’s going to have a bunch of points and show you how to move from one point to another. Continuously Validated States serve a similar role for your clusters. Much like the points on your map, each of these states verifies new hardware and software versions for customers to move their clusters to. These states serve another role like that of a GPS—they help identify the ideal paths between states and help clusters efficiently move between them. As you might have guessed, the Continuously Validated States model isn’t simple cartography. This ideal path is identified through hundreds of thousands of testing hours performed by VxRail Engineering team members in a massive million-dollar lab environment. Those movement paths, in combination with software tooling in the update process, create continuity for clusters as they move between states and proceed through their life cycles.

Conclusion

Hopefully, this blog has helped distinguish how Continuously Validated States change configuration management for the better. Changing the configuration state of production clusters is an anxiety-generating action that VxRail eases by creating, testing, and validating known-good configuration states for customers. The result is that customers can update their equipment with more confidence than ever and spend more IT resources focused on enabling business projects than on performing maintenance tasks. Mike Athanasiou, a colleague of mine, did a fantastic job with our Interactive Journey video series. In the videos, Mike shows how the use of Continuously Validated States enhances different areas of cluster management. I found the videos helpful in better understanding VxRail. 

The next entry in this blog series will address the advantage that VxRail offers in the update process.

Read Full Blog
  • HCI
  • VxRail

Introducing VxRail Concepts

Dylan Jackson Dylan Jackson

Tue, 04 Oct 2022 14:12:18 -0000

|

Read Time: 0 minutes

Believe it or not, VxRail has been around for six years!  It may not sound like a particularly long time because there are products in the Dell storage portfolio that got their start a couple of decades ago. In any case, the early rapid growth of VxRail created a sizable customer base that has matured and developed into seasoned VxRail users. The figures I last saw cited just over 17,000 customers with some 237,000 nodes deployed. As a new member of the VxRail product group, I can see that our VxRail content has matured and developed with our seasoned customers to an extent that there’s an assumed common knowledge about VxRail. This is amazing for the existing VxRail community, but it would also be nice for people new to VxRail, like me, who are in a different phase of their VxRail journey, to learn about it from the ground up. After a few months on the job, I saw a great opportunity to do just that—create content that gets back to the basics of VxRail and focuses on the fundamentals of VxRail with vSAN and VxRail Dynamic Nodes for traditional SAN architecture.

While I was new to the VxRail technical marketing team at the start of the year, I’d previously spent many years as a Dell Support specialist, primarily working with our PowerEdge servers and Compellent storage offerings. It has definitely been a rewarding experience so far learning about HCI and VxRail—though sometimes it was like trying to drink from a firehose. While I have many resources to lean on, most other folks don’t have the same benefit. That’s why I’m building a blog series that covers the basics of VxRail so you can quickly get a leg up on your VxRail journey. Through this, you can have a solid foundation so that you can consume more advanced VxRail content. Here’s what I have so far in my queue:

  • Understanding the importance of cluster integrity—The biggest pain point, I find, is the amount of time that infrastructure administrators spend ensuring application uptime and compliance and looking out for the next system update. Understanding the continuously validated state of VxRail is a game changer.
  • Cluster updates—Knowing what needs to be updated is critical, but the actual process of doing a system update to a cluster isn’t a walk in the park. The VxRail cluster update experience is a significant factor in your operational investment.
  • Maintaining cluster integrity—VxRail has some very helpful tools to ensure that the cluster is running in an optimal state. These tools can make your administration activity much easier.
  • Serviceability—There will be times when things go wrong; that’s inevitable. I want to bring focus onto how VxRail can make this part of the experience as streamlined as possible. Current clusters feature phone-home capabilities that provide a massive benefit to the support experience.
  • Extensibility—While this might be a bit on the advanced side, it’s good to know that VxRail is designed to be managed efficiently at scale. Whether your concern is deployment complexity or scaling up the management experience, this will be a good read. 

Over the course of the next few weeks, I’ll be posting blogs about these topics. Stay tuned! For those new to VxRail, I hope I can provide a great start to your VxRail journey. Feel more than welcome to reach out and connect with me on LinkedIn. If you have suggestions for other topics that you’d like covered, I’d be more than happy to hear them.


Read Full Blog
  • VxRail
  • ESXi
  • security
  • life cycle management
  • vCenter Server

Learn About the Latest Major VxRail Software Release: VxRail 7.0.400

Daniel Chiu Daniel Chiu

Thu, 22 Sep 2022 13:11:44 -0000

|

Read Time: 0 minutes

As many parts of the world welcome the fall season and the cooler temperatures that it brings, one area that has not cooled down is VxRail. The latest VxRail software release, 7.0.400, introduces a slew of new features that will surely fire up our VxRail customers and spur them to schedule their next cluster update.

VxRail 7.0.400 provides support for VMware ESXi 7.0 Update 3g and VMware vCenter Server 7.0 Update 3g. All existing platforms that support VxRail 7.0 can upgrade to VxRail 7.0.400. Upgrades from VxRail 4.5 and 4.7 are supported, which is an important consideration because standard support from Dell for those versions ends on September 30.

VxRail 7.0.400 software introduces features in the following areas:

  • Life cycle management
  • Dynamic nodes
  • Security
  • Configuration flexibility
  • Serviceability

This blog delves into major enhancements in those areas. For a more comprehensive rundown of the features added to this release, see the release notes.

Life cycle management

Because life cycle management is a key area of value differentiation for our VxRail customers, the VxRail team is continuously looking for ways to further enhance the life cycle management experience. One aspect that has come into recent focus is handling cluster update failures caused by VxRail nodes failing to enter maintenance mode.

During a cluster update, nodes are put into maintenance mode one at time. Their workloads are moved onto the remaining nodes in the cluster to maintain availability while the nodes go through software, firmware, and driver updates. VxRail 7.0.350 introduced capabilities to notify users of situations such as host pinning and mounted VM tools on the host that can cause nodes to fail to enter maintenance mode, so users can address those situations before initiating a cluster update.

VxRail 7.0.400 addresses this cluster update failure scenario even further by being smarter with how it handles this issue once the cluster update is in operation. If a node fails to enter maintenance mode, VxRail automatically skips that node and moves onto the next node. Previously, this scenario would cause the cluster update operation to fail. Now, users can run that cluster update and process as many nodes as possible. Users can then run a cluster update retry, which targets only the nodes that were skipped. The combination of skipping nodes and targeted retry of those skipped nodes significantly improves the cluster update experience.

Figure 1: Addressing nodes failing to enter maintenance mode

In VxRail 7.0.400, a Dell RecoverPoint for VMs compatibility check has been added to the update advisory report, cluster update pre-check, and cluster update operation to inform users of a potential incompatibility scenario. Having data protection in an unsupported state puts an environment at risk. The addition of the compatibility check is great news for RecoverPoint for VMs users because this previously manual task is now automated, helping to reduce risk and streamline operations.

VxRail dynamic nodes

Since the introduction of VxRail dynamic nodes last year, we've incrementally added more storage protocol support for increased flexibility. NFS, CIFS, and iSCSI support were added earlier this year. In VxRail 7.0.400, users can configure their VxRail dynamic nodes with storage from Dell PowerStore using NVMe over Fabrics with TCP (NVMe-oF/TCP). NVMe provides much faster data access than SATA and SAS. This support requires Dell PowerStoreOS 2.1 or later and Dell PowerSwitch with the virtual Dell SmartFabric Storage Software appliance.

VxRail cluster deployment using NVMe-oF/TCP is not much different from setting up iSCSI storage as the primary datastore for VxRail dynamic node clusters. The cluster must go through the Day 1 bring-up activities to establish IP connectivity. From there, the user can set up the port group, VMkernel adapters, and NVMe-oF/TCP adapter to access the storage shared from the PowerStore.

Setting up NVMe-oF/TCP between the VxRail dynamic node cluster and PowerStore is separate from the cluster deployment activities. You can find more information about deploying NVMe-oF/TCP here: https://infohub.delltechnologies.com/t/smartfabric-storage-software-deployment-guide/.

VxRail 7.0.400 also adds VMware Virtual Volumes (vVols) support for VxRail dynamic nodes. Cluster deployment with vVols over Fibre Channel follows a workflow similar to cluster deployment with a VMFS datastore. Provisioning and zoning of the Virtual Volume needs to be done before the Day 1 bring-up. The VxRail Manager VM is installed onto the datastore as part of the Day 1 bring-up.

For vVols over IP, the Day 1 bring-up needs to be completed first to establish IP connectivity. Then the Virtual Volume can be mounted and a datastore can be created from it for the VxRail Manager VM.

Figure 2: Workflow to set up VxRail dynamic node clusters with VMware Virtual Volumes

VxRail 7.0.400 introduces the option for customers to deploy a local VxRail managed vCenter Server with their VxRail dynamic node cluster. The Day 1 bring-up installs a vCenter Server onto the cluster with a 60-day evaluation license, and the customer is required to purchase their own vCenter Server license. VxRail customers are accustomed to having a Standard edition vCenter Server license packaged with their VxRail purchase. However, that vCenter Server license is bundled with the VMware vSAN license, not the VMware vSphere license, and dynamic node clusters do not use vSAN.

VxRail 7.0.400 supports the use of Dell PowerPath/VE with VxRail dynamic nodes, which is important to many storage customers who have been relying on PowerPath software for multipathing capabilities. With VxRail 7.0.400, VxRail dynamic nodes can use PowerPath with PowerStore, PowerMax, or Unity XT storage array via NFS, iSCSI, or NVMe over Fibre Channel storage protocol.

Security

Another topic that continues to burn bright, no matter the season, is security. As threats continue to evolve, it’s important to continue to advance security measures for the infrastructure. VxRail 7.0.400 introduces capabilities that make it even easier for customers to further protect their clusters.

While the security configuration rules set forth by the Security Technical Implementation Guide (STIG) are required for customers working in or with the U.S. federal government and Department of Defense, other customers can benefit from hardening their own clusters. VxRail 7.0.400 automatically applies a subset of the STIG rules on all VxRail clusters. These rules protect VM controls and the underlying SUSE Linux operating system controls. Application of the rules occurs without any user intervention upon an upgrade to VxRail 7.0.400 and at the cluster deployment with this software version, providing a seamless experience. This feature increases the security baseline for all VxRail clusters starting with VxRail 7.0.400.

Digital certificates are used to verify the external communication between trusted entities. VxRail customers have two options for digital certificates. Self-signed certificates use the VxRail Manager as the certificate authority to sign the certificate. Customers use this option if they don't need a Certificate Authority or choose not to pay for the service. Otherwise, customers can import a certificate signed by a Certificate Authority to the VxRail Manager. Both options require certificates to be shared between the VxRail Manager and vCenter Server for secure communication to manage the cluster.

Previously, both options required manual intervention, at varying levels, to manage certificate renewals and ensure uninterrupted communication between the VxRail Manager and the vCenter Server. Loss of communication can affect cluster management operations, though not the application workloads.

Figure 3: Workflow for managing certificates

With VxRail 7.0.400, all areas of managing certificates have been simplified to make it easier and safer to import and manage certificates over time. Now, VxRail certificates can be imported via the VxRail Manager and API. There’s an API to import the vCenter certificate into the VxRail trust store. Renewals can be managed automatically via the VxRail Manager so that customers do not need to constantly check expiring certificates and replace certificates. Alternatively, new API calls have been created to perform these activities. While these features simplify the experience for customers already using certificates, hopefully the simplified certificate management will encourage more customers to use it to further secure their environment.

VxRail 7.0.400 also introduces an end-to-end upgrade bundle integrity check, added to both the pre-update health check and the cluster update operation. The signing certificate is verified to ensure the validity of the root certificate authority, the digital signature is verified, and the bundle manifest is checked to ensure that the contents of the bundle have not been altered.

Configuration flexibility

With any major VxRail software release comes enhancements in configuration flexibility. VxRail 7.0.400 provides more flexibility for base networking and more flexibility in using and managing satellite nodes.

Previous VxRail software releases introduced long-awaited support for dynamic link aggregation for vSAN and vSphere vMotion traffic, and support for two vSphere Distributed Switches (VDS) to separate management traffic from vSAN and vMotion traffic. VxRail 7.0.400 removes the previous restriction of four ports for base networking. Customers can now deploy clusters with six or eight ports for base networking while employing link aggregation, multiple VDS, or both.

Figure 4: Two VDS with six NIC ports

Figure 5: Two VDS with eight NIC ports with link redundancy for vMotion traffic and link aggregation for vSAN traffic

With VxRail 7.0.400, customers can convert their vSphere Standard Switch on their satellite nodes to a customer-managed VDS after deployment. This support allows customers to more easily manage their VDS and satellite nodes at scale.

Serviceability

The most noteworthy serviceability enhancement I want to mention is the ability to create service tickets from the VxRail Manager UI. This functionality makes it easier for customers to submit service tickets, which can speed resolution time and improve the feedback loop for providing product improvement suggestions. This feature requires an active connection with the Embedded Service Enabler to Dell Support Services. Customers can submit up to five attachments to support a service ticket.

Figure 6: Input form to create a service request

Conclusion

VxRail 7.0.400 is no doubt one of the more feature-heavy VxRail software releases in some time. Customers big and small will find value in the capability set. This software release enhances existing features while also introducing new tools that further focus on VxRail operational simplicity. While this blog covers the highlights of this release, I recommend that you review the release notes to further understand all the capabilities in VxRail 7.0.400. 

Read Full Blog
  • HCI
  • PowerEdge
  • VMware
  • VxRail
  • vSAN

New VxRail Node Lets You Start Small with Greater Flexibility in Scaling and Additional Resiliency

David Glynn David Glynn

Mon, 29 Aug 2022 19:00:25 -0000

|

Read Time: 0 minutes

When deploying infrastructure, it is important to know two things: current resource needs and that those resource needs will grow. What we don’t always know is in what way the demands for resources will grow. Resource growth is rarely equal across all resources. Storage demands will grow more rapidly than compute, or vice-versa. At the end of the day, we can only make an educated guess, and time will tell if we guessed right. We can, however, make intelligent choices that increase the flexibility of our growth options and give us the ability to scale resources independently. Enter the single processor Dell VxRail P670F.

The availability of the P670F with only a single processor provides more growth flexibility for customers with smaller clusters. Choosing a less compute-dense single processor node means that the same compute workload will require more nodes. There are two benefits to this:

  • More efficient storage: More nodes in the cluster opens the door to using the more capacity-efficient erasure coding vSAN storage option. Erasure coding, also known as parity RAID (such as RAID 5 and RAID 6), has a capacity overhead of 33% compared to the 100% overhead that mirroring requires. Erasure coding can deliver 50% more usable storage capacity while using the same amount of raw capacity (see the worked example after this list). While this increase in storage comes with a write performance penalty, VxRail with vSAN has shown that the performance gap between erasure coding and mirroring has narrowed significantly.
  • Reduced cluster overhead: Clusters are designed around N+1, where ‘N’ represents sufficient resources to run the preferred workload, and ‘+1’ represents spare, unused resources held in reserve should a failure occur in the nodes that make up the N. As the number of nodes in N increases, the percentage of overall resources kept in reserve for planned and unplanned downtime drops: the spare node is 25% of a 4-node cluster but only 12.5% of an 8-node cluster.

Figure 1: Single processor P670F disk group options

You may be wondering, “How does all of this deliver flexibility in the options for scaling?” 

You can scale out the cluster by adding a node. Adding a node is the standard option and can be the right choice if you want to increase both compute and storage resources. However, if you want to grow storage, adding capacity drives will deliver that additional storage capacity. The single processor P670F has disk slots for up to 21 capacity drives with three cache drives, which can be populated one at a time, providing over 160 TB of raw storage. (This is also a good time to review virtual machine storage policies: does that application really need mirrored storage?) The single processor P670F does not have a single socket motherboard. Instead, it has the same dual socket motherboard as the existing P670F—very much a platform designed for expanding CPU and memory in the future.

If you are starting small, even really small, as in a 2-node cluster (don’t worry, you can still scale out to 64 nodes), the single processor P670F has additional features that may be of interest to you. Our customers frequently deploy 2-node clusters outside of their core data center, at the edge or at remote locations that can be difficult to access. In these situations, the additional data resiliency provided by Nested Fault Domains in vSAN is attractive. Providing this additional resiliency on 2-node clusters requires at least three disk groups in each node, for which the single processor P670F is perfectly suited. For more information, see VMware’s Teodora Hristov blog post about Nested fault domain for 2-node cluster deployments. She also posts related information and blog posts on Twitter.

It is impressive how a single change in configuration options can add so much more configuration flexibility, enabling you to optimize your VxRail nodes specifically to your use cases and needs. These configuration options impact your systems today and as you scale into the future.

Author Information

Author: David Glynn, Sr. Principal Engineer, VxRail Technical Marketing

Twitter: @d_glynn


Read Full Blog
  • VxRail
  • SUSE Rancher

Find Your Edge: Running SUSE Rancher and K3s with SLE Micro on Dell VxRail

Jason Marques Jason Marques

Tue, 16 Aug 2022 13:51:15 -0000

|

Read Time: 0 minutes

The goal of our ongoing partnership between Dell Technologies and SUSE is to bring validated modern products and solutions to market that enable our joint customers to operate CNCF-Certified Kubernetes clusters in the core, in the cloud, and at the edge, to support their digital businesses and harness the power of their data.

Existing examples of this collaboration have already begun to bear fruit with work done to validate SUSE Rancher and RKE2 on Dell VxRail. You can find more information on that in a Solution Brief here and blog post here. This initial example was to highlight deploying and operating Kubernetes clusters in a core datacenter use case.

But what about providing examples of jointly validated solutions for near edge use cases? More and more organizations are looking to deploy solutions at the edge since that is an increasing area where data is being generated and analyzed. As a result, this is where the focus of our ongoing technology validation efforts recently moved.

Our latest validation exercise featured deploying SUSE Rancher and K3s with the SUSE Linux Enterprise Micro operating system (SLE Micro) and running it on Dell VxRail hyperconverged infrastructure. These technologies were installed in a non-production lab environment by a team of SUSE and Dell VxRail engineers. All the installation steps followed the SUSE documentation without any unique VxRail customization. This illustrates the seamless compatibility of these technologies and allows for standardized deployment practices using the out-of-the-box capabilities of both VxRail and the SUSE products.

Solution Components Overview

Before jumping into the details of the solution validation itself, let’s do a quick review of the major components that we used.

SUSE Rancher is a complete software stack for teams that are adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes (K8s) clusters, including lightweight K3s clusters, across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.

K3s is a CNCF sandbox project that delivers a lightweight yet powerful certified Kubernetes distribution.

SUSE Linux Enterprise Micro (SLE Micro) is an ultra-reliable, lightweight operating system purpose built for containerized and virtualized workloads.

Dell VxRail is the only fully integrated, pre-configured, and tested HCI system optimized with VMware vSphere, making it ideal for customers who want to leverage SUSE Rancher, K3s, and SLE Micro through vSphere to create and operate lightweight Kubernetes clusters on-premises or at the edge.

Validation Deployment Details

Now, let’s dive into the details of the deployment for this solution validation.

First, we deployed a single VxRail cluster with these specifications:

  • 4 x VxRail E660F nodes running VxRail 7.0.370 version software
    • 2 x Intel® Xeon® Gold 6330 CPUs
    • 512 GB RAM
    • Broadcom Adv. Dual 25 Gb Ethernet NIC
    • 2 x vSAN Disk Groups:
      • 1 x 800 GB Cache Disk
      • 3 x 4 TB Capacity Disks
  • vSphere K8s CSI/CNS

After we built the VxRail cluster, we deployed a set of three virtual machines running SLE Micro 5.1. We installed a multi-node K3s cluster running version 1.23.6 with Server and Agent services, Etcd, and a ContainerD container runtime on these VMs. We then installed SUSE Rancher 2.6.3 on the K3s cluster.  Also included in the K3s cluster for Rancher installation were Fleet GitOps services, Prometheus monitoring and metrics capture services, and Grafana metrics visualization services. All of this formed our Rancher Management Server. 

We then used Rancher to deploy managed K3s workload clusters. In this validation, we used the Rancher Management Server cluster to deploy two managed K3s workload clusters. These managed workload clusters were a single-node K3s cluster and a six-node K3s cluster, both running on vSphere VMs with the SLE Micro operating system installed.

You can easily modify this validation to be more highly available and production ready. The following diagram shows how to incorporate more resilience.

Figure 1: SUSE Rancher and K3s with SLE Micro on Dell VxRail - Production Architecture

The Rancher Management Server stays the same, because it was already deployed as highly available: a four-node VxRail cluster and three SLE Micro VMs running a multi-node K3s cluster. As a production best practice, managed K3s workload clusters should run on highly available infrastructure that is separate from the Rancher Management Server, to maintain separation of management and workloads. In this case, you can deploy a second four-node VxRail cluster. For the managed K3s workload clusters, use a minimum of three nodes to provide high availability for the Etcd services and the workloads running on them. However, three nodes are not enough to also provide node separation between the Etcd services and the workloads. To remedy this, you can deploy a minimum six-node K3s cluster (as shown in the diagram with the K3s Kubernetes 2 Prod cluster).

Summary

Although this validation features Dell VxRail, you can also deploy similar architectures using other Dell hardware platforms, such as Dell PowerEdge and Dell vSAN Ready Nodes running VMware vSphere! 

For more information and to see other jointly validated reference architectures using Dell infrastructure with SUSE Rancher, K3s, and more, check out the following resource pages and documentation. We hope to see you back here again soon.

Author: Jason Marques

Twitter: @vWhippersnapper

Dell Resources

SUSE Resources

Read Full Blog
  • HCI
  • VxRail
  • API
  • UPS

Protecting VxRail From Unplanned Power Outages: More Choices Available

Karol Boguniewicz Karol Boguniewicz

Thu, 16 Jun 2022 18:06:44 -0000

|

Read Time: 0 minutes

In my previous blog, Protecting VxRail from Power Disturbances, I described the first API-integrated solution that helps customers preserve data integrity on VxRail if there are unplanned power events. Today, I'm excited to introduce another solution that resulted from our close partnership with Schneider Electric (APC).

Why is it important?

Over the last few years, VxRail has become a critical HCI system and data-center building block for over 15,000 customers who have deployed more than 220,000 nodes globally. When HCI was first introduced, it was often considered for specific workloads such as VDI or ROBO locations. However, with the evolution of hardware and software capabilities, VxRail became a catalyst in data-center modernization, deployed across various use cases from core to cloud to edge. Today, customers are deploying VxRail for mission-critical workloads because it is powerful enough to meet the most demanding requirements for performance, capacity, availability, and rich data services.

Dell Technologies is a leader in data-protection solutions and offers a portfolio of products that can fulfill even the most demanding RPO and RTO requirements from customers. In addition to using traditional data-protection solutions, it is best practice to use a UPS to protect the infrastructure and ensure data integrity if there are unplanned power events. In this blog, I want to highlight a new solution from Schneider Electric, the provider of APC Smart-UPS systems.

The APC UPS protection solution for VxRail

Schneider Electric is one of Dell Technologies’ strategic partners in the Extended Technologies Complete Program. It provides Dell Technologies with APC UPS and IT rack enclosures offering a comprehensive solution set of infrastructure hardware, monitoring, management software, and service options.

PowerChute Network Shutdown version 4.5 seamlessly integrates with VxRail by communicating over the network with the APC UPS. If there is a power outage, PowerChute can gracefully shut down VxRail clusters using the VxRail API. As a result of this integration, PowerChute can run on the same protected VxRail cluster, saving space and reducing hardware costs.

Solution components:

  • VxRail cluster with VxRail HCI System Software version 7.0.320, 4.7.540 or higher
  • Dell Smart-UPS Online 5kVA DLRT5KRMXLT or Dell Smart-UPS Online 3kVA DLRT3000RMXLA
  • UPS Network Management Card 3 (AP9640, AP9641, or AP9643) with NMC firmware version v2.2 or higher
  • Either a 1-Year or 3-Year PowerChute license for each VxRail node in the cluster (PowerChute Network Shutdown software version 4.5 or higher)

Key benefits of this solution include:

  • Unattended, graceful shutdown of virtual machines (VMs), followed by the VxRail cluster that avoids data corruption thanks to integration with the VxRail API.
  • Minimal downtime after critical events have passed with a pre-configured automated start-up sequence, which is useful at remote or unattended sites.
  • Full deployment within the VxRail cluster saves space and reduces hardware requirements since you don't have to deploy PowerChute on a separate machine outside the cluster.
  • Edge-ready, with support for vSAN 2-node clusters.
  • Redundant VxRail API-based cluster shutdown. In a redundant UPS set-up, if one NMC3 is offline, PowerChute will connect to one or more available NMC3s to carry out the VxRail cluster shutdown.

How does it work?

This is easiest to describe using the following diagram, which covers the steps taken in a power event and when the event is cleared:

How PowerChute Network Shutdown works with VxRail

I highly recommend watching the demo of this solution in action, which is listed in the Additional resources section at the end of this blog.

Summary

Protection against unplanned power events should be a part of a business continuity strategy for all customers who run their critical workloads on VxRail. This practice ensures data integrity by enabling automated and graceful shutdown of VxRail clusters. Customers now have more choice in providing such protection, with the new version of PowerChute Network Shutdown software for APC UPS systems integrated with VxRail API and validated with VxRail.

Additional resources

Website: Schneider Electric APC and Dell Technologies Alliance Website

Solution brochure: PowerChute Network Shutdown v4.5 Brochure

Solution demo video: PowerChute Network Shutdown v4.5 VxRail Technical Demo

Video: APC PowerChute Network Shutdown 4.5 and Dell VxRail Integration 

Previous blog: Protecting VxRail from Power Disturbances

Author:

Karol Boguniewicz, Senior Principal Engineering Technologist, Dell Technologies

LinkedIn: Karol Boguniewicz

Twitter: @cl0udguide

Read Full Blog
  • VxRail
  • Dell SolVe

Dell Technologies SolVe: Increase Your Solution Satisfaction

Vic Dery

Thu, 02 Jun 2022 14:57:22 -0000

|

Read Time: 0 minutes

Dell Technologies SolVe: Increase Your Solution Satisfaction

With technology constantly changing and advancing, a continuous effort is necessary to improve processes and procedures. I can remember having to make changes in an environment and grabbing a user manual to ensure I got it right. Problems arose when the process changed after it was documented: the last code update changed the screens or did not provide information about the new area I needed to click. While it did not happen most of the time, it was frustrating when it did. So let me introduce you to Dell SolVe.

SolVe stands for Solutions for Validating your Engagement. It is a knowledge solution that you can use to access trusted, best-practice, guided instructions for accomplishing common service tasks. After logging in to https://solve.dell.com, you will find detailed step-by-step instructions for completing numerous tasks and processes.  With SolVe’s verified procedures, I could have gone into the data center or approached my laptop with confidence that I could complete the task to solve my challenges (pun intended!).   

While SolVe is available for products across the Dell portfolio, this blog focuses on SolVe as it relates to VxRail. SolVe provides the blueprints for successful procedures that complement various VxRail processes. You can think of these as tailored procedures that are specific to your hardware and software versions.

 Figure 1 shows a high-level view of SolVe for VxRail options:

  

Figure 1: SolVe for VxRail 

Under VxRail Procedures, five main categories are available: Connectivity, Install, Upgrade, Replacement Procedures, and Miscellaneous (other Dell products may have sections that reflect that product's options). A Reference Material section contains Documentation and Support Matrices. Select a category to view more specific options in that category, for example, select Replacement Procedures to view Hardware Replacement Procedures.  

SolVe is an outstanding tool to ensure that you have globally consistent processes at your fingertips. From installation to upgrading to replacing customer-replaceable parts like drives, fans, and power supplies, SolVe provides easy-to-understand instructions to ensure your success in completing these tasks.

The following example shows how to generate the process for replacing a Capacity HDD on a VxRail P570: 

Select Replacement Procedures > Hardware Replacement Procedures to start the generation procedure (Figure 2). You are prompted to provide the required information. 

 

 Figure 2: Providing information 

Select the VxRail model and Hardware Component that you want to replace. Note: If any alerts or warnings are displayed at the top of the page, click Acknowledge to activate the NEXT button. These special alerts and warnings may be critical to ensure that the process is completed successfully.

Next, select the applicable VxRail HCI System Software (Figure 3) to ensure that the procedure is generated for that version.  

  

Figure 3: Selecting the software version 

In the Usage information section (Figure 4), you can provide information about the product or service request for which this procedure will be used. While this information is not mandatory, it is useful for documenting which work was performed on which node or if you want to delegate the task.   

 

Figure 4: Providing usage information 

You can review a summary of your selections before you generate the procedure (Figure 5).

Figure 5: Generating the procedure

After you click GENERATE, a PDF file downloads with the information that is required to complete the task. In our hard drive replacement example, the PDF includes a list of recommended tools. 

SolVe makes it possible to share and promote relevant, focused information for a planned activity in a globally consistent manner. Other benefits of using SolVe include increasing the success of first-time fixes, not having to call support, and having the freedom to do the work when you are ready. I recommend visiting https://solve.dell.com to test-drive a process and see how easy it is to get custom procedures. Using the generated PDF procedure may also simplify the process of getting change control approval. Using SolVe instead of user manuals would have saved me a lot of time and frustration by giving me a single source of truth across multiple data centers in heterogeneous environments and enabling me to stay on top of processes that evolve over time.

Resources 
SolVe Procedure Generator
Dell Support 

Author: Vic Dery  LinkedIn

Read Full Blog
  • VMware
  • VxRail
  • Kubernetes
  • VCF
  • validated solution

Announcing VMware Cloud Foundation 4.4.1 on Dell VxRail 7.0.371

Jason Marques

Wed, 25 May 2022 14:07:35 -0000

|

Read Time: 0 minutes

With each turn of the calendar, as winter dissipates and the warmer spring weather brings new life back into the world, a certain rite of passage comes along with it: spring cleaning! As much as we all hate to do it, it is necessary to ensure that we keep everything operating in tip-top shape. Whether it be errands like cleaning inside your home or repairing the lawn mower to be able to cut the grass, we all have them, and we all recognize they are important, no matter how much we try to avoid them.

The VMware Cloud Foundation (VCF) on Dell VxRail team also believes in applying a spring cleaning mindset when it comes to your VCF on Dell VxRail cloud environment. This will allow your cloud environment to keep running in an optimal state and better serve you and your consumers. 

So, in the spirit of the spring season, Dell is happy to announce the release of Cloud Foundation 4.4.1 on VxRail 7.0.371. Beginning on May 25, 2022, existing VCF on VxRail customers can upgrade to this latest version through lifecycle management, while support for new deployments will be available beginning June 2, 2022.

This new release introduces the following “spring cleaning” enhancements:

  • New component software version updates
  • New VxRail LCM logic improvements
  • New VxRail serviceability enhancements
  • VCF and VxRail software security bug fixes
  • VCF on VxRail with VMware Validated Solution Enhancements

VCF on VxRail life cycle management enhancements

New VxRail prechecks and vSAN resync timeout improvements

Starting with this release, the VxRail LCM logic has been modified to address scenarios when the cluster update process may fail to put a node into Maintenance Mode. This LCM logic enhancement is leveraged in addition to similar SDDC Manager prechecks that already exist. All VxRail prechecks are used when SDDC Manager calls on VxRail to run its precheck workflow prior to an LCM update. SDDC Manager does this by using its integration with the VxRail Health Check API. SDDC Manager also calls on these prechecks during an LCM update using its integration with the VxRail LCM API. So, VCF on VxRail customers benefit from this VxRail enhancement seamlessly. 

Failing to enter Maintenance Mode can cause VxRail cluster updates to fail. Finding ways to mitigate this type of failure will significantly enhance the LCM reliability experience for many VCF on VxRail customers. 

Figure 1: VCF on VxRail LCM

The following list describes scenarios in which a VxRail node could fail to enter maintenance mode and that are improved by the latest enhancements (a detection sketch follows the list):

  • VMware Tools ISOs mounted to customer VM workloads: The VxRail LCM precheck now detects whether VMware Tools images are mounted. If they are, it is the administrator's responsibility to address the issue in their environment before initiating a VxRail cluster update.
  • VMs pinned to specific hosts: The VxRail LCM precheck now detects whether host pinning is configured for VMs. If it is, it is the administrator's responsibility to address the configuration in their environment before initiating a cluster update.
  • vSAN resync timeout: During the cluster update process, a node update can fail if a vSAN resync takes too long while the system waits to put the node into Maintenance Mode, causing a timeout. To prevent this, the VxRail vSAN resync timeout value has been doubled while the cluster update waits for the vSAN resync to finish.
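
If you want to spot the first condition yourself before kicking off an update, a quick vSphere API query can flag it. The following is an illustrative pyVmomi sketch (not VxRail's actual precheck code) that lists VMs with a connected CD-ROM device, which is how a mounted VMware Tools ISO typically manifests; the vCenter hostname and credentials are placeholders.

```python
# Illustrative precheck helper (hypothetical; not VxRail's own code): list VMs whose
# CD-ROM is connected, e.g. a mounted VMware Tools ISO that can block maintenance mode.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcenter.example.local",  # placeholder vCenter
                  user="administrator@vsphere.local", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip inaccessible VMs
            continue
        for dev in vm.config.hardware.device:
            # A connected CD-ROM often indicates a mounted ISO such as VMware Tools
            if isinstance(dev, vim.vm.device.VirtualCdrom) and dev.connectable.connected:
                print(f"{vm.name}: connected CD-ROM ({dev.deviceInfo.summary})")
    view.Destroy()
finally:
    Disconnect(si)
```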

VCF on VxRail serviceability enhancements

Support for next generation Dell secure remote service connectivity agent and gateway

VxRail systems will now use the next generation secure remote service connectivity agent and the Secure Connect Gateway to connect to the Dell cloud for dial home serviceability. This new connectivity agent running within VxRail will also be used on all Dell infrastructure products.  

Figure 2: Next Generation Dell Secure remote connectivity agent and gateway architecture

The Secure Connect Gateway is the 5th generation gateway that acts as a centralization point for Dell products in the customer environment to manage the connection to the Dell cloud. This remote connectivity enables bi-directional communication between the product and the Dell cloud. Products can send telemetry data and event information to the Dell cloud, which can be used to facilitate remote support by Dell services as well as to deliver cloud services such as CloudIQ, MyService360, Licensing Portal, and Service Link.

The latest generation remote service connector is intended to provide a uniform telemetry experience across all Dell ISG products. By providing standardization, customers can reduce the redundant infrastructure used to provide remote services for all their Dell products. The connectivity agent also introduces a simpler setup experience by streamlining and automating the setup process of the secure remote service for new VxRail cluster deployments.

 

Figure 3: Enabling secure remote gateway connectivity

For existing VxRail clusters running an earlier version than VCF 4.4.1 on VxRail 7.0.371 in a VCF on VxRail deployment, the migration to the new secure connect gateway requires the administrator to first upgrade the older generation Dell serviceability gateways in their environment (whether the 3rd generation Secure Remote Service gateway or the 4th generation Dell SupportAssist Enterprise gateway).

Once the gateways are upgraded to the latest 5th generation Dell Secure Connect Gateway, the VCF on VxRail administrator can perform the VxRail cluster update for the migration as part of the standard VCF on VxRail LCM process. The built-in VxRail LCM precheck steps will inform the administrator to upgrade their gateways if necessary. The VxRail cluster update will then retrieve the gateway configuration for the connectivity agent and convert the device or access key to a unique connectivity key for remote connection authentication. Administrators should be aware that this additional migration work may add a one-time increase of roughly 15 minutes to the total cluster update time.

New nodes that are shipped with VxRail 7.0.350 or higher will also now include a unique connectivity key for the secure remote gateway. Dell manufacturing will embed this key into the iDRAC of the VxRail nodes. So, instead of a user logging onto the Dell support portal to retrieve the access key to enable secure remote services, the enablement process will automatically retrieve this unique connectivity key from iDRAC for the connectivity agent to enable the connection. This feature is designed to simplify and streamline the secure connect gateway serviceability setup experience.

Customers can also have a direct connection to Dell cloud bypassing having a gateway deployed.  This option is available for any clusters running VxRail 7.0.350 and higher.

VxRail dial home payload improvements

VxRail dial home payload improvements have been introduced to help provide Dell support with additional key cluster information in the dial home payload itself and capture more system error conditions to help further improve VCF on VxRail serviceability and reduce time to resolution of any VxRail related issues. 

 Additional payload information now includes: 

  • Smart Logs: Smart logging automatically collects the logs on the node of the call-home event, which provides additional information to the Support team when necessary. Starting with VCF 4.4.1 on VxRail 7.0.371, smart logging functionality has been redesigned to achieve the following tasks: 
    1. Adapt smart logging workflow to the new secure remote gateway architecture
    2. Associate smart log with Dell Service Request (SR) such that the smart log file can be included in the SR as a link.
  • Sub-component details: These include information such as the part number and slot number for CRU/FRU items such as disk drives and memory DIMMs for more efficient auto-dispatch of these failed components.
  • VxRail cluster personality identifier information: To help make the troubleshooting experience more efficient, this cluster metadata allows Dell Support to know that the VxRail clusters are deployed within a VCF on VxRail environment.

 Also included are additional error conditions that are now captured to bring VxRail events into parity with existing PowerEdge events and additional ADC error states. And finally, to reduce the cost of service and improve the customer experience by avoiding a deluge of unnecessary event information, some events are no longer reported.

VxRail physical view UI update now includes Fibre Channel HBA hardware view

New support for FC HBA physical hardware views has been introduced as part of the VxRail Manager vCenter Plugin Physical View UI for E560F, P570F, and V570F VxRail nodes that support externally attached storage.

 Supported FC HBAs include the following Emulex and QLogic models:

  • Emulex LPE 35002 Dual Port 32 Gb HBA
  • Emulex LPE 31002 Dual Port 16 Gb HBA
  • QLogic 2772 Dual Port 32 Gb HBA
  • QLogic 2692 Dual Port 16 Gb HBA

 

Figure 4: Fibre Channel HBA physical hardware view in VxRail Manager vCenter Plugin – firmware

This new functionality provides a UI viewing experience similar to what administrators are already used to seeing for physical NICs and NIC ports. The new FC HBA view includes port link status and firmware/driver version information. An example of the firmware/driver views is shown in Figure 4.

VCF on VxRail security enhancements

VCF and VxRail software security vulnerability fixes

This release includes several security vulnerabilities fixes for both VxRail and VCF software components.

 VxRail Software 7.0.371 contains fixes that resolve multiple security vulnerabilities. Some of these include:

  • DSA-2022-084
  • DSA-2022-056
  • DSA-2021-255
  • iDRAC8 Updates 

 For more information, see iDRAC8 2.82.82.82 Release Notes 

 For more details on the DSAs, see the Dell Security Advisory (DSA) portal and search for DSA IDs.

 VCF 4.4.1 Software: This contains fixes that resolve issues in NSX-T by introducing support for NSX-T 3.1.7.3.2. For more information about these issues, see the VMware KB Article.

 vRealize Suite Software: In the last VCF 4.4 on VxRail 7.0.320 release, we introduced vRealize Flexible Upgrades. Read more about it here. As a result, the vRealize Suite components (other than vRealize Suite Lifecycle Manager) are no longer part of the VCF core software package. So if security vulnerabilities are discovered and relevant patches need to be applied, the process has changed. Those vRealize component software updates are no longer delivered and applied through VCF software update bundles. Starting from the VCF 4.4 on VxRail 7.0.320 release, administrators must apply them independently using vRSLCM.

 I bring this up because some vRealize Suite component security patches have been released that are relevant to VCF 4.4.1 on VxRail 7.0.371 deployments. See this blog post, written by my peers on the VMware team, describing the issue related to VMSA-2022-0011 and how to apply the fixes for it.

 VCF on VxRail with VMware Validated Solution enhancements

New VCF on VxRail qualification with VMware Validated Solutions

For those of you who aren’t aware, VMware Validated Solutions are validated technical implementations built and tested by VMware and VMware Partners. These solutions are designed to help customers solve common business problems using VMware Cloud Foundation as the foundational infrastructure. Types of solutions include Site Protection and Disaster Recovery for VMware Cloud Foundation, using multi-site VCF deployments with stretched NSX-T networks, and Advanced Load Balancing for VMware Cloud Foundation, using VMware NSX Advanced Load Balancer for workloads on VCF. These validated solution designs have been enhanced over time to include VMware-developed automation scripts to help customers further simplify and accelerate implementation. You can learn more about them here.

 Although this solution is not directly tied to this latest VCF 4.4.1 on VxRail 7.0.371 release as a release feature itself, VMware and Dell can now qualify the VMware Validated Solutions on VCF on VxRail. All VVS solutions that are qualified will be marked with a VxRail tag. 

Figure 5: VMware Validated Solutions Portal

These solutions get updated asynchronously from VCF releases. Be sure to check the VMware VVS portal for the latest updates on existing solutions or to see when new solutions are added.

That’s a wrap

Thanks for taking the time to learn more about VMware Cloud Foundation on Dell VxRail. For even more solution information, see the Additional Resources links at the bottom of this post. I don’t know about you, but I feel squeaky clean already! Can’t say the same about my outdoor landscaping though...I should probably go address that…

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional Resources

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on InfoHub

VCF on VxRail Interactive Demos

VxRail Videos

Read Full Blog
  • NVIDIA
  • VxRail
  • NVIDIA Omniverse

Preparing for the Metaverse with NVIDIA Omniverse and Dell Technologies

George O'Toole

Thu, 14 Apr 2022 20:01:45 -0000

|

Read Time: 0 minutes

Initially published on March 21, 2022 at: https://infohub.delltechnologies.com/p/preparing-for-the-metaverse-with-nvidia-omniverse-and-dell-technologies/

As technology evolves, it can sometimes seem like creativity leads to complexity. In the case of 3D graphics creation, teams with a broad range of skills must come together. Each of the team members — from artists, designers, and animators to engineers and project managers — typically have a special skill set requiring their own tools, systems, and work environment. 

But the technical issues do not stop there. As existing technologies mature and new ones enter the scene, the number of specialized 3D design and content creation tools rapidly increases. Many of them lack compatibility or interoperability with other tools. And across the 3D graphics ecosystem, hybrid workforces require a “physical workstation” experience wherever they may be. 

It is no small task to provide compute-power access to a geographically distributed team in a way that enables collaboration without compromising security. But to remain competitive and retain top talent, companies must provide the necessary tools to the remote workforce. 

Solutions for remote, collaborative, and secure 3D graphics creation

The new Dell Validated Design for VDI with NVIDIA Omniverse makes it possible for 3D graphics creation teams to work together from anywhere in real-time using multiple applications within shared virtual 3D worlds. NVIDIA® Omniverse users can access the resources and compute power they need through a virtualized desktop infrastructure (VDI), without the need for a local physical workstation. 

By running NVIDIA Omniverse on Dell PowerEdge NVIDIA‑Certified Systems™, companies can enable 3D graphics teams to connect with major design tools, assets, and projects to collaborate and iterate seamlessly from early-stage ideation through to finished creation.

Designs for NVIDIA Omniverse rely on Dell PowerEdge NVIDIA‑Certified Systems and leverage a VxRail‑based virtual workstation environment with NVIDIA A40 GPUs. VMware® Horizon® can provide workload virtualization. The designs were validated with Autodesk® Maya® 3D animation and visual effects software. 

The flexible solution supports the varying needs of different industries. Within media and entertainment, for example, the goal might be to create virtual worlds for films or games. Those in architecture, engineering, and construction might want to see their innovative building designs come to life within a 3D reality. Manufacturers might look to collaborate on interactive and physically accurate visualizations of potential future products or create realistic simulations of factory floors. 

For all organizations, the benefits are significant:

  • Empower the workforce–Enhance 3D interactive user collaboration and productivity for remote and distributed teams.
  • Boost efficiency–Streamline deployment, management and support with engineering‑validated designs created in close collaboration with NVIDIA.
  • Collaborate securely–Protect data against cyberthreats from the data center to the endpoint with built‑in security features.

Transforming 3D graphics production

At Dell Technologies, we are excited about NVIDIA Omniverse and its ability to enable collaboration for designers, engineers, and other innovators leveraging Dell PowerEdge servers. Dell Validated Design for VDI with NVIDIA Omniverse solutions have the potential to transform every stage of 3D production, a crucial step toward the metaverse. According to the Marvel® database, “The Omniverse is every version of reality and existence imaginable and unimaginable.” It brings together our physical and digital lives with augmented reality, extended reality, and virtual reality, much like the Oasis in Ready Player One.

An open platform built for virtual collaboration and real‑time simulation, NVIDIA Omniverse streamlines and accelerates even the most complex 3D workflows by enabling remote workers to collaborate using multiple apps simultaneously. Dell Technologies is committed to offering innovative solutions to help customers manage their collaborative and performance-intensive workflows.

Learn more about Dell Validated Design for VDI with NVIDIA Omniverse.

 

Read Full Blog
  • VMware
  • VxRail
  • security

Latest Security Enhancements for VxRail – April 2022

Vic Dery and Linka Biaggi

Tue, 26 Apr 2022 15:59:16 -0000

|

Read Time: 0 minutes

Dell VxRail: Comprehensive Security by Design

VxRail is the only co-engineered, fully integrated, pre-configured, and pre-tested VMware hyperconverged integrated system that is optimized for VMware vSAN. This has been the case since VxRail was launched over six years ago in February of 2016. VxRail is a truly remarkable “Better Together” story. It stands out as a testament to tight integration work, as no other vendor has gone as deep in their integration as VxRail has with vSAN.

VxRail’s simplicity, scalability, and performance, along with the ongoing rapid pace of innovation, make it a platform for data center modernization and more. One could say that VxRail helps future-proof businesses. VxRail also provides a fast and straightforward path to security transformation from Cloud to Core to Edge.

Dell Technologies created and has maintained for years the Dell VxRail: Comprehensive Security by Design white paper, which provides an overview of VxRail security features, updates, and details about security options. Security is part of VxRail’s DNA; it was in the foreground of the concept. Security is similar to the medical field in that it requires continuous learning, skills, and process updates. Keeping up with security demands requires users to follow these same practices, and there is always more that can be done.

The following links provide detailed information about VxRail. If you are not familiar with VxRail, use these two links to gain additional insight into the product.

  • VxRail Interactive Journey provides a better way for technical buyers to get familiar with VxRail and quickly come away with what makes VxRail awesome through an immersive experience for consuming videos, podcasts, and interactive demos.
  • Dell VxRail System TechBook is a conceptual and architectural review of the Dell VxRail, optimized for VMware vSAN. The TechBook describes how hyperconverged infrastructure drives digital transformation and focuses on the VxRail system as a leading hyperconverged technology solution.

The following list includes key security updates that are provided in the April 2022 version of the white paper:

  • CloudIQ — Updates the CloudIQ section to include the rebranding from MyVxRail to CloudIQ. The switch to CloudIQ brings consistency while delivering the same quality of service across Dell Technologies solutions.
  • Role Based Access Control (RBAC) — Adds the use of RBAC to keep customer data safe, with independent viewing so that customers can only view their own data.
  • Ransomware — Provides new details, especially regarding the supply chain, focusing on the growth in the number of targeted attacks and business types.
  • Snapshot recovery — Describes the shift to using vSAN snapshots as a means of recovery, specifically using point-in-time recovery snaps to create backups.

VxRail is the only HCI system on the market that fully integrates Dell PowerEdge servers with VMware vSphere and vSAN. Because VxRail is built on our award-winning PowerEdge platform, it inherits security features native to the hardware.

Note: Additional security documentation, such as the PowerEdge and VMware security white papers, provides deeper and more specific security-related information about the products that make up VxRail.

This blog is a high-level overview of some of the information in the newly revised white paper. There is a continuous effort to enhance the VxRail security landscape, and the goal here is to simplify the delivery of security information and keep it relevant for our readers.

For more information, see the Dell VxRail: Comprehensive Security by Design or the VxRail Security Infographic for a quick overview.

Additional resources

Dell VxRail: Comprehensive Security by Design

This white paper describes both integrated and optional security features, best practices, and proven techniques for securing your VxRail system from the Core to the Edge to the Cloud.

VxRail Interactive Journey  

VxRail Interactive Journey provides a better way for technical buyers to get familiar with VxRail and quickly come away with what makes VxRail awesome through an immersive experience for consuming videos, podcasts, and interactive demos.

Dell VxRail System TechBook  

The TechBook is a conceptual and architectural review of the Dell VxRail, optimized for VMware vSAN. The TechBook describes how hyperconverged infrastructure drives digital transformation and focuses on the VxRail system as a leading hyperconverged technology solution.

Technical White Paper: Cyber Resilient Security in Dell PowerEdge Servers

The PowerEdge paper details the security features built into in the PowerEdge Cyber Resilient Platform, many enabled by the Dell Remote Access Controller (iDRAC9)

VMware Product Security 

VMware Product Security provides an overview of VMware's commitment to building trust with the customer 

Read Full Blog
  • HCI
  • VMware
  • vSphere
  • VxRail
  • security
  • vCenter

HCI Security Simplified: Protecting Dell VxRail with VMware NSX Security

Karol Boguniewicz and Francois Tallet

Fri, 08 Apr 2022 18:14:37 -0000

|

Read Time: 0 minutes

The challenge

Cybersecurity and protection against ransomware attacks are among the top priorities for most customers who have successfully implemented or are going through a digital transformation. According to ESG’s 2022 Technology Spending Intentions Survey:

  • 69 percent of respondents shared that their spending on cybersecurity will increase in 2022 (#1).
  • 48 percent of respondents believe their IT organizations have a problematic shortage of existing skills in this area (#1).
  • 38 percent of respondents believe that strengthening cybersecurity will drive the majority of technology spending in their organization in the next 12 months (#1).

The data clearly shows that this area is one of the top concerns for our customers today. Given the perceived skills shortage, they need solutions that significantly simplify the work of strengthening cybersecurity.

It is worth reiterating the critical role that networking plays within Hyperconverged Infrastructure (HCI). In contrast to legacy three-tier architectures, which typically have a dedicated storage network and storage, HCI architecture is more integrated and simplified. Its design lets you share the same network infrastructure for workload-related traffic and intercluster communication with the software-defined storage. The accessibility of the running workloads (from the external network) depends on the reliability of this network infrastructure, and on setting it up properly. The proper setup also impacts the performance and availability of the storage and, as a result, the whole HCI system. To prevent human error, it is best to employ automated solutions to enforce configuration best practices.

VxRail as an HCI system supports VMware NSX, which provides tremendous value for increasing cybersecurity in the data center, with features like microsegmentation and AI-based behavioral analysis and threat prevention. Although NSX is fully validated with VxRail as part of the VMware Cloud Foundation (VCF) on VxRail platform, setting it up outside of VCF requires strong networking skills. The comprehensive capabilities of this network virtualization platform might be overwhelming for VMware vSphere administrators who are not networking experts. What if you only want to consume the security features? This scenario presents a common challenge, especially for customers who are deploying small VxRail environments with few nodes and do not require the full VCF on VxRail stack.

The great news is that VMware recognized these customer challenges and now offers a simplified method to deploy NSX for security use cases. This method fits the improved operational experience our customers are used to with VxRail. This experience is possible with a new VMware vCenter Plug-in for NSX, which we introduce in this blog.

NSX and security

NSX is a comprehensive virtualization platform that provides advanced networking and security capabilities that are entirely decoupled from the physical infrastructure. Implementing networking and security in software, distributed across the hosts responsible for running virtual workloads, provides significant benefits:

  • Flexibility—Total flexibility for positioning workloads in the data center enables optimal use of compute resources (a key aspect of virtualization).
  • Optimal consumption of CPU resources—Advanced NSX features only consume CPU from the hosts when they are used. This consumption leads to lower cost and simplified provisioning when compared to running the features on dedicated appliances.
  • High performance—NSX features are performed in VMware ESXi kernel space, a unique capability on vSphere.

The networking benefits are evident for large deployments, with NSX running in almost all Fortune 100 companies and many medium scale businesses. In today’s world of widespread viruses, ransomware, and even cyber warfare, the security aspect of NSX built on top of the NSX distributed firewall (DFW) is relevant to vSphere customers, regardless of their size.

The NSX DFW is a software firewall instantiated on the vNICs of the virtual machines in the data center. Thanks to its inline position, it provides maximum filtering granularity because it can inspect the traffic coming in and going out of every virtual machine without requiring redirection of the traffic to a security appliance, as shown in the following figure. It also moves along with the virtual machine during vMotion and maintains its state.

 

Figure 1: Traditional firewall appliance compared to the NSX DFW

The NSX DFW’s state-of-the-art capabilities are configured centrally from the NSX Manager and allow security policies to be implemented independently of the network infrastructure. This method makes it easy to implement microsegmentation and compliance requirements without dedicating racks, servers, or subnets to a specific type of workload. With the NSX DFW, security teams can deploy advanced threat prevention capabilities such as distributed IDS/IPS, network sandboxing, and network traffic analysis/network detection and response (NTA/NDR) to protect against known and zero-day threats.
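
To make the "configured centrally, enforced everywhere" idea concrete, here is a hedged Python sketch that uses the NSX declarative Policy REST API to create a tag-based VM group and a simple DFW policy. The manager address, credentials, group name, and tag are illustrative assumptions; the path layout follows the documented Policy API pattern, but treat this as a sketch rather than a production script.

```python
# Sketch: create a tag-based group and a DFW rule via the NSX Policy API.
# Paths follow the declarative Policy API pattern; all names/tags are illustrative.
import requests

NSX = "https://nsx-manager.example.local/policy/api/v1"  # placeholder NSX Manager
AUTH = ("admin", "changeme")                             # placeholder credentials

def patch(path: str, body: dict) -> None:
    """PATCH a declarative policy object into the NSX intent tree."""
    r = requests.patch(f"{NSX}{path}", json=body, auth=AUTH, verify=False)  # lab only
    r.raise_for_status()

# Group of VMs carrying the (illustrative) tag "web"
patch("/infra/domains/default/groups/web-vms", {
    "display_name": "web-vms",
    "expression": [{
        "resource_type": "Condition", "member_type": "VirtualMachine",
        "key": "Tag", "operator": "EQUALS", "value": "web",
    }],
})

# DFW policy allowing only HTTPS into the web group
patch("/infra/domains/default/security-policies/web-policy", {
    "display_name": "web-policy",
    "category": "Application",
    "rules": [{
        "display_name": "allow-https-to-web",
        "resource_type": "Rule",
        "source_groups": ["ANY"],
        "destination_groups": ["/infra/domains/default/groups/web-vms"],
        "services": ["/infra/services/HTTPS"],
        "action": "ALLOW",
    }],
})
```

Because the rule is attached to the group rather than to a subnet or port, it follows the VMs wherever they run, which is exactly the property the wizard described below builds on.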

A dedicated solution for security

Many NSX customers who are satisfied with the networking capability of vSphere run their production environment on a VDS with VLAN-backed dvportgroups. They deploy NSX for its security features only and do not need its advanced networking components. Until now, those customers had to migrate their virtual machines to NSX-backed dvportgroups to benefit from the NSX DFW. This migration is easy, but managing networking from NSX modifies the workflow of all the teams, including those teams that are not concerned with security:

Figure 2: Traditional NSX deployment

Starting with NSX 3.2, you can run NSX security on a regular VDS, without introducing the networking components of NSX. The security team receives all the benefits of NSX DFW, and there is no impact to any other team:

Figure 3: NSX Security with vCenter Plugin

Even better, NSX can now integrate further with vCenter, thanks to a plug-in that allows you to configure NSX from the vCenter UI. This method means that NSX can be consumed as a simple security add-on for a traditional vSphere deployment.

How to deploy and configure NSX Security

Requirements

First, we need to ensure that our VxRail environment meets the following requirements:

  • vCenter Server 7.0 U3c (included with VxRail 7.0.320)
  • VDS 6.7 or later
  • The OVA for NSX-T with the vCenter Plugin version 3.2 or later and an appropriate NSX license

Deploy the NSX Manager and the NSX DFW on ESXi hosts

Running NSX in a vSphere environment consists of deploying a single NSX Manager virtual machine protected by vSphere HA. A shortcut in vCenter enables this step:

Figure 4: Deploy the NSX Manager appliance virtual machine from the NSX tab in vCenter

When the NSX Manager is up and running, it sets up a one-to-one association with vCenter and uploads the plug-in that presents the NSX UI in vCenter, as if NSX security is part of vCenter. The vCenter administrator becomes an effective NSX security administrator.

The next step, performed directly from the vCenter UI, is to enter the NSX license and select the cluster on which to install the NSX DFW binaries:

Figure 5: Select the clusters that will receive the NSX DFW binaries

After the DFW binaries are installed on the ESXi hosts, the NSX security is deployed and operational. You can exit the security configuration wizard (and configure directly from the NSX view in the vCenter UI) or let the wizard run.

Run the security configuration wizard

After installing the NSX binaries on the ESXi hosts, the plug-in runs a wizard that guides you through the configuration of basic security rules according to VMware best practices. The wizard gives the vSphere administrator simple guidance for implementing a baseline configuration that the security team can build on later. There are three different steps in this guided workflow.

First step—Segment the data center in groups

Perform the following steps, as shown in the following figure:

  • Create an infrastructure group, identifying the services that the workloads in the data center will access. These services typically include DNS, NTP, DHCP servers, and so on.
  • Segment the data center coarsely into environments, such as Development, Production, and DMZ groups.
  • Segment the data center finely by identifying applications running across the different environments.

Figure 6: Example of group creation

Second step—Define communication between different groups

Perform the following steps, as shown in the following figure:

  • Define which groups can access the infrastructure services
  • Define how the different environments communicate with each other
  • Define how applications communicate with each other

Figure 7: Define the communication between environments using a graphical representation

Third step—Review the configuration and publish it to the NSX DFW

After reviewing the configuration, publish the configuration to NSX:

Figure 8: Review DFW rules before exiting the wizard

The full NSX UI is now available in vCenter. Select the NSX tab to access the NSX UI directly.

Final thoughts

The new VMware vCenter Plug-in for NSX drastically simplifies the deployment and adoption of NSX with VxRail for security use cases. In the past, advanced knowledge of the network virtualization platform was required. A vSphere administrator can now deploy it easily, using an intuitive configuration wizard available directly from vCenter.

The VMware vCenter Plug-in for NSX provides the kind of simplified and optimized experience that VxRail customers are used to when managing their HCI environment. It also addresses the challenge that customers face today, improving security even with a perceived shortage of skills in this area. Also, it can be configured easily and quickly, making the robust NSX security features more available for smaller HCI deployments.

Additional resources:

VMworld 2021 Session: NET1483 - Deploy and Manage NSX-T via vCenter: A Single Console to Drive VMware SDDC

Planning Guide: Dell EMC VxRail Network Planning Guide – Physical and Logical Network Considerations and Planning

ESG Research Report: 2022 Technology Intentions Survey

Authors:

Francois Tallet, Technical Product Manager, VMware

Karol Boguniewicz, Senior Principal Engineering Technologist, Dell Technologies

Read Full Blog
  • VxRail
  • life cycle management
  • edge
  • API
  • satellite node

Enhancing Satellite Node Management at Scale

Stephen Graham

Tue, 15 Mar 2022 20:30:40 -0000

|

Read Time: 0 minutes

Satellite nodes are a great addition to the VxRail portfolio, empowering users at the edge, as described in David Glynn’s blog Satellite Nodes: Because sometimes even a 2-node cluster is too much. Although satellite nodes are still new, we’ve been working hard and have already started making improvements. Dell’s latest VxRail 7.0.350 release has a number of new VxRail enhancements, and in this blog we’ll focus on these new satellite node features:

  • Improved life cycle management (LCM)
  • New APIs
  • Improved security

Improved LCM

The first way we’ve improved satellite nodes is by reducing the required maintenance window. To do this, the satellite node update process has now been split in two. Instead of staging the recovery bundle and performing the update in one step, you can now stage the recovery bundle and perform the update separately.

Staging the bundle in advance is great because we know bandwidth can be limited at the edge and this allows ample time to transfer the bundle in advance to ensure your update happens during your scheduled maintenance window. Once your bundles are staged, it’s as simple as scheduling the updates and letting VxRail execute the node update. This improvement ensures that you can complete the update within the expected timeframe to minimize downtime. Satellite nodes sit outside the cluster and, as a result, workloads will go offline while the node is updated.

New APIs

Do you have a large number of edge locations that could use satellite nodes and need an easier way to manage them at scale? Good news! These new APIs are perfect for making edge life at scale easier.

The new APIs include:

  • Satellite node LCM
  • Add a satellite node to a managed folder
  • Remove a satellite node from a managed folder

The introductory release of VxRail satellite nodes featured LCM operations through the VxRail Manager plug-in, which could be quite time consuming if you are managing a large number of satellite nodes. We saw room for improvement so now administrators can use VxRail APIs to add, update, and remove satellite nodes to simplify and speed up operations. 

You can use the satellite node LCM API to adjust configuration settings that benefit management at scale, such as adjusting the number of satellite nodes you want to update in parallel. For example, although the default is to update 20 nodes in parallel, you can initiate updates for up to 30 satellite nodes in parallel, as needed. 

There is also a failure rate feature that will set a condition to exit from an LCM operation. For example, if you are updating multiple satellite nodes at one time and nodes are failing to update, the failure rate setting is a way to abort the operation altogether if the rate surpasses a set threshold. The default threshold is 20% but can be set anywhere from 1% to 100%. Using the VxRail API, you can adjust settings like this that are not available in the VxRail Manager.
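
As a rough illustration of what driving these settings programmatically could look like, here is a hedged Python sketch. The endpoint path and payload field names (the target host list, parallel-update count, and failure-rate threshold) are hypothetical placeholders, not the documented schema; the parallelism and threshold values themselves come from the description above. Consult the VxRail 7.0.350 API reference for the real contract.

```python
# Hypothetical sketch of a satellite node batch update via the VxRail API.
# Endpoint and field names are illustrative placeholders, not the documented
# schema; consult the VxRail 7.0.350 API reference for the real one.
import requests

VXM = "https://vxrail-manager.example.local/rest/vxm"  # placeholder manager address
AUTH = ("administrator@vsphere.local", "changeme")     # placeholder credentials

payload = {
    "target_hosts": ["satellite-01", "satellite-02", "satellite-03"],  # nodes to update
    "max_parallel_updates": 30,    # default is 20; up to 30 in parallel per the blog
    "failure_rate_threshold": 20,  # abort if more than 20% of node updates fail (1-100)
}

resp = requests.post(f"{VXM}/v1/lcm/satellite/upgrade", json=payload,
                     auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
print("LCM request accepted:", resp.json())
```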

These new APIs are great for users with a large number of VxRail satellite nodes. Adding, removing, and updating satellite nodes can now be automated through the new APIs, saving you precious time across your edge locations.

Improved Security

VxRail satellite nodes can now use Secure Enterprise Key Management (SEKM), made available through the Dell PowerEdge servers that VxRail is built on. What is SEKM you might ask? Well, SEKM gives you the ability to secure drive access using encryption keys stored on a central key management server (not on the satellite node). 

SEKM is great for many reasons. First, an edge location might be more exposed and have less physical security than your typical data center but that doesn’t mean securing your data is any less important. SEKM keeps your data drives locked even if the entire server is stolen. When paired with self-encrypting drives, you can secure the data even further. Second, the encryption keys are stored in a centralized location, making it easier to manage the security of large numbers of satellite nodes instead of having to manage each satellite node individually.
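
Because SEKM is configured at the PowerEdge/iDRAC layer, one way to audit a fleet of satellite nodes is through each node's iDRAC Redfish interface. The sketch below simply walks the standard Redfish storage collection and prints each controller; the exact attribute that reports SEKM or encryption state lives under vendor-specific Dell OEM extensions and varies by iDRAC release, so it is intentionally omitted here, and the address and credentials are placeholders.

```python
# Sketch: enumerate storage controllers on a node's iDRAC via standard Redfish paths.
# A fleet audit script could extend this to read Dell OEM attributes for SEKM state;
# those attribute names vary by iDRAC release, so they are intentionally omitted.
import requests

IDRAC = "https://idrac-satellite-01.example.local"  # placeholder iDRAC address
AUTH = ("root", "calvin")                           # placeholder credentials

base = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Storage"
storage = requests.get(base, auth=AUTH, verify=False).json()  # lab only

for member in storage.get("Members", []):
    ctrl = requests.get(f"{IDRAC}{member['@odata.id']}",
                        auth=AUTH, verify=False).json()
    print(ctrl.get("Id"), "-", ctrl.get("Name"))
```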

In this blog we’ve highlighted some exciting new satellite node features, including an improved update process, new APIs, and enhanced security, all of which enhance managing the edge at scale. Check out the full VxRail 7.0.350 release and see the full list of enhancements by clicking the link below.

Thanks for reading!

Resources

Author: Stephen Graham, VxRail Tech Marketing

Read Full Blog
  • HCI
  • VxRail
  • vSAN

Satellite nodes: Because sometimes even a 2-node cluster is too much

David Glynn

Tue, 01 Mar 2022 15:03:31 -0000

|

Read Time: 0 minutes

Wait a minute, where's me cluster? Oh no.

You may have noticed a different approach from Dell EMC VxRail in Daniel Chiu’s blog A Taste of VxRail Deployment Flexibility. In short, we are extending the value of VxRail into new adjacencies, into new places, and new use cases. With the release of VxRail dynamic nodes in September, these benefits became a new reality in the landscape of VxRail deployment flexibility:

  • Using VxRail for compute clusters with vSAN HCI Mesh
  • Using storage arrays with VxRail dynamic nodes in VMware Cloud Foundation on VxRail
  • Extending the benefits of VxRail HCI System Software to traditional 3-tier architectures using Dell EMC for primary storage

The newest adjacency in 7.0.320 is the VxRail satellite node, as sometimes even a 2-node cluster may be too much.

A VxRail satellite node is ideal for those workloads where the SLA and compute demands do not justify even the smallest of 2-node clusters – in the past you might have even recycled a desktop to meet these requirements. Think retail and ROBO with their many distributed sites, or 5G with its “shared nothing” architecture. But in today’s IT environment, out of sight cannot mean out of mind. Workloads are everywhere and anywhere. The datacenter and the public cloud are just two of the many locations where workloads exist, and compute is needed. These infrastructure needs are well understood, and in the case of public cloud – out of scope. The challenge for IT is managing and maintaining the growing and varied infrastructure demands of workloads outside of the data center, like the edge, in its many different forms. The demands of the edge vary greatly. But even with infrastructure needs met with a single server, IT is still on the hook for managing and maintaining it.

While satellite nodes are a single node extension of VxRail, they are managed and life cycled by the VxRail Manager from a VxRail with vSAN cluster. Targeted at existing VxRail customers, these single nodes should not be thought of as lightweights. We’re leveraging the existing VxRail E660, E660F, and V670F with all their varied hardware options, and have added support for the PERC H755 adapter for local RAID protected storage. This provides options as lightweight as a E660 with an eight core Intel Xeon Gen 3 Scalable processor and spinning disks, all the way up to a V670F with dual 40 core Intel Xeon Gen 3 Scalable processors, accelerated by a pair of NVIDIA Ampere A100 80GB Data Center GPUs, and over 150 TB of flash storage. Because edge workloads come in all sizes from small to HUUUUGE!!!

Back when I started in IT, a story about a missing Novell server discovered after four years sealed behind a wall was making the rounds. While it was later claimed to be false, it was a story that resonated with many seasoned IT professionals and continues to do so today. Regardless of where a workload is running, the onus is on IT not only to protect that workload, but also to protect the network, all the other workloads on the network, and anything that might connect to that workload. This is done in layers, with firewalls, DMZs, VPNs, and so on. But it is also done by keeping hypervisors updated, and BIOS and firmware up to date.

For six years, VxRail HCI System Software has been helping virtualization administrators keep their VxRail with vSAN clusters up to date, regardless of where they are in the world -- be it at a remote monitoring station, running a grocery store, or in the dark, sealed up behind a wall. VxRail satellite nodes and VxRail dynamic nodes extend the VxRail operating model into new adjacencies. We are enabling you, our customers, to manage and lifecycle these ever-growing and diverse workloads with the click of a button.

Also in the VxRail 7.0.320 release are two notable stand-outs. The first is validation of Dell EMC PowerFlex scale-out SDS as an option for use with VxRail dynamic nodes. The second is increased resilience for vSAN 2-node clusters (which also applies to stretched clusters), which are often used at the edge. Both John Nicholson and Teodora Hristov of VMware do a great job of explaining the nuts and bolts of this useful addition. But I want to reiterate that for 2-node deployments, this increased resilience will require that each node have three disk groups.

Don’t let the fact that a workload is too small, or too remote, or not suited to HCI, be the reason for your company to be at risk by running out-of-date firmware and BIOS. There is more flexibility than ever with VxRail, much more, and the value of VxRail’s automation and HCI System Software can now be extended to the granularity of a single node deployment.

Author: David Glynn, Sr. Principal Engineer, VxRail Tech Marketing
Twitter: @d_glynn

Read Full Blog
  • NVIDIA
  • VxRail
  • Kubernetes
  • VMware Cloud Foundation
  • Tanzu
  • VCF

New Year’s Resolutions Fulfilled: Cloud Foundation on VxRail

Jason Marques

Thu, 10 Feb 2022 13:24:57 -0000

|

Read Time: 0 minutes

New Year’s Resolutions Fulfilled: VMware Cloud Foundation 4.4 on VxRail 7.0.320

Many of us make New Year’s resolutions for ourselves with each turn of the calendar. We hope everyone is still on track!

The Cloud Foundation on VxRail team wanted to establish our own resolutions too. And with that, Dell Technologies and VMware have come together to fulfill our resolution of continuing to innovate by making operating and securing cloud platforms easier for our customers while helping them unlock the power of their data.

And as a result, we are happy to announce the availability of our first release of the new year: VMware Cloud Foundation 4.4 on Dell VxRail 7.0.320! This release includes Cloud Foundation and VxRail software component version updates that include patches for some recent widely known security vulnerabilities. It also adds support for Dell ObjectScale on the vSAN Data Persistence Platform (vDPp), support for additional 15th generation VxRail platforms, new security hardening features, lifecycle management improvements, new NVIDIA GPU workload support, and more. Phew! So be resolute and read on for the details.

VCF on VxRail Storage Enhancements

VCF on VxRail Lifecycle Management Enhancements

VCF on VxRail Hardware Platform Enhancements

VCF on VxRail Developer and AI-Ready Enterprise Platform Enhancements

VCF on VxRail Operations Enhancements

VCF on VxRail Security Enhancements

VCF on VxRail Storage Enhancements

Support for vSAN Data Persistence Platform and Dell ObjectScale Modern Stateful Object Storage Services

Initially introduced in vSphere 7.0 U1, the vSAN Data Persistence Platform (vDPp) is now supported as part of VCF 4.4 on VxRail 7.0.320. Check out this great VMware blog post to learn more about vDPp.

Beginning in this release, support for running the new Dell ObjectScale data service on top of vDPp is also available. This next-gen, cloud-native, software-defined object storage service is geared toward IT teams who are looking to extend their cloud platform to run Kubernetes-native stateful modern application data services. To learn more about ObjectScale, please refer to this blog post. Note: VCF on VxRail currently supports using vDPp in a vSAN “Shared Nothing Architecture Mode” only. 

The following figure illustrates the high-level architecture of vDPp.

 Figure 1 – vDPp and ObjectScale 

As a result of this new capability, VCF on VxRail customers can further extend the storage flexibility the platform can support with S3 compatible object storage delivered as part of the turnkey cloud infrastructure management/operations experience.

Giving customers more storage flexibility resolution: Check!

VCF on VxRail Lifecycle Management Enhancements

Improved SDDC Manager LCM Prechecks

This release brings even more intelligence embedded into the SDDC Manager LCM precheck workflow. When performing an upgrade, the SDDC Manager needs to communicate with various components to complete various actions, and it requires that certain system resources be configured correctly and available.

To avoid any potential issues during LCM activities, VCF administrators can run SDDC Manager prechecks to weed out any issues before any LCM operation is executed. In this latest release, SDDC Manager adds six additional checks:

  • Password validity (including expired passwords)
  • File system permissions
  • File system capacity
  • CPU reservation for NSX-T Managers
  • Hosts in maintenance mode
  • DRS configuration mode

All these checks apply to ESXi, vCenter, NSX-T, NSX-T Edge VMs, VxRail Manager, and vRealize Suite components in the VCF on VxRail environment. Figure 2 below illustrates some examples of what these prechecks look like from the SDDC Manager UI.

 Figure 2 – New SDDC Manager Prechecks

Giving customers enhanced LCM improvements resolution: Check!

vRealize Suite Lifecycle Manager Flexible Upgrades

VCF 4.4 has been enhanced to allow vRealize Suite products to be updated independently, without having to upgrade the VCF SDDC stack.

 

 Figure 3 – vRSLCM Flexible Upgrades 

This means that from VCF 4.4 on, administrators will use vRSLCM to manage vRealize Suite update bundles and to orchestrate and apply those upgrades to vRealize Suite products (vRealize Automation, vRealize Operations, vRealize Log Insight, Workspace ONE Access, and more) independently from the core VCF version upgrade, to better align with an organization’s business requirements. It also helps decouple VCF infrastructure team updates from DevOps team updates, enabling teams to consume new vRealize features quickly. And finally, it enables an independent update cadence between VCF and vRealize versions, which improves interoperability flexibility. And who doesn’t like flexibility? Am I right?

One last note with this enhancement: SDDC Manager will no longer be used to manage vRealize Suite component update bundles and orchestrate vRealize Suite component LCM updates. With this change, future versions of VCF will not include vRealize Suite components as part of its software components. vRSLCM will still be a part of VCF software components validated for compatibility for each VCF release since that will continue to be deployed and updated using SDDC Manager. As such, SDDC Manager continues to manage vRSLCM install and update bundles just as it has done up to this point.

Giving customers enhanced LCM flexibility resolution: Check!

VCF on VxRail Hardware Platform Enhancements

Support For New 15th Generation Intel-Based VxRail Dynamic Node Platforms

VxRail 7.0.320 includes support for the latest 15th Generation VxRail dynamic nodes for the E, P, and V series models. These can be used when deploying VMFS on FC Principal storage VxRail VI Workload Domain clusters. Figure 4 below highlights details for each model.

       

 Figure 4 – New 15th Generation VxRail dynamic node models

Also, as it relates to using VxRail dynamic nodes when deploying VMFS on FC Principal storage, support for NVMe over FC configurations has been introduced. Because it is part of the VxRail 7.0.320 release, VCF on VxRail customers simply inherit it from VxRail. It’s like finding a fifth chicken nugget in the bag after ordering the four-piece meal! Wait, it is New Year’s—I should have used a healthier food example. Oops!

Support For New 15th Generation Intel-Based VxRail With vSAN Platforms (S670 and E660N)

In addition to new 15th generation dynamic nodes, this release introduces support for two new 15th generation VxRail node types, the S670 and E660N. The S670 is our 2U storage density optimized hybrid platform based on the PowerEdge R750 while the E660N is our 1U “everything” all NVMe platform based on the PowerEdge R650.

Giving customers more hardware platform choices resolution: Check!

VCF on VxRail Developer and AI-Ready Enterprise Platform Enhancements

NVIDIA GPU Options for AI and ML Workload Use Cases

As AI and ML applications become more critical within organizations, IT teams are looking at the best approaches to run them within their own data centers to ensure ease of manageability and scale, improved security, and governance.

As a follow on to the innovative and collaborative partnerships between Dell Technologies, VMware, and NVIDIA that were first introduced at VMworld 2021, we are happy to announce, with this VCF on VxRail release, the ability to run GPUs within VMware Cloud Foundation 4.4 on VxRail 7.0.320 to deliver an end-to-end AI-Ready enterprise platform that is simple to deploy and operate.                   

                      

Figure 5 – VCF with Tanzu on VxRail + NVIDIA AI-Ready Enterprise Platform

VMware Cloud Foundation with Tanzu, when used together with NVIDIA-certified systems like VxRail and NVIDIA AI Enterprise Suite software, delivers an end-to-end AI/ML enterprise platform. And with VxRail being the first and only HCI integrated system certified with NVIDIA AI Enterprise Suite and its supported GPUs, IT teams can deliver and provision GPU resources in a variety of ways, while data scientists can easily consume and scale those GPU resources when they need them.

While the details of setting this up are beyond the scope of this blog post, you can find more information on using NVIDIA GPUs with VxRail and the NVIDIA AI Enterprise Software Suite, along with VMware’s own blog post about this new support, using the links at the end of this post.

Giving customers a simple path to unlock the power of their data resolution: Check!

VCF on VxRail Operations Enhancements

Configure DNS/NTP From SDDC Manager UI

This new feature simplifies and streamlines DNS and NTP Day 2 management operations for cloud administrators. In previous releases, all DNS and NTP configuration was included in the VCF Bring Up Parameter file used by Cloud Builder at the time of VCF on VxRail installation, but there was no straightforward way to update these settings once VCF on VxRail had been deployed. Now, if modifications to these configurations are needed, they can be performed within the SDDC Manager UI as a simple Day 2 operation. This feature integrates SDDC Manager with native VxRail APIs to automate VxRail cluster DNS/NTP settings. The figure below shows what this looks like.


 Figure 6 – DNS/NTP Day 2 Configuration From SDDC Manager UI
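For teams that prefer automation over the UI, the same Day 2 operation is also reachable through the SDDC Manager public API. Here is a minimal sketch using Python and requests; the hostname and token are placeholders, the payload schema is illustrative, and you should verify the endpoint against the VCF API reference for your release:

```python
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"   # placeholder hostname
TOKEN = "<access-token>"  # obtained via POST /v1/tokens with SSO credentials

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Illustrative payload -- check the DnsConfiguration model in your
# release's VCF API reference for the exact schema.
dns_spec = {"dnsServers": [{"ipAddress": "172.16.1.10", "isPrimary": True}]}

# PUT /v1/system/dns-configuration applies the new DNS settings across the
# VCF instance; SDDC Manager pushes the change to VxRail clusters through
# the native VxRail APIs mentioned above.
resp = requests.put(
    f"{SDDC_MANAGER}/v1/system/dns-configuration",
    json=dns_spec,
    headers=headers,
    verify=False,  # lab only; use proper certificate trust in production
)
resp.raise_for_status()
print("DNS update accepted:", resp.status_code)
```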

Giving customers a simpler and more flexible day 2 operations experience resolution: Check!

VCF on VxRail Security Enhancements

Activity Logging For VCF REST API Call-Driven Actions

Administrators can now ensure audit tracking for activity that takes place using the VCF REST API. In this release, SDDC Manager logs capture SDDC Manager API activity from the SDDC Manager UI and other sources with user context. This can be used to audit VCF activity and makes the logs easier to analyze. Figure 7 below illustrates this activity. The log entries include the following data points:

  • Timestamp
  • Username
  • Client IP
  • User agent
  • API called
  • API method

 Figure 7 – SDDC Manager REST API Activity Logging

Each of the SDDC Manager core services has a dedicated activity log. These logs reside in the respective /var/log/vmware/vcf/*service*/ directories on the SDDC Manager VM.
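If you want to pull these entries out programmatically, a small script run on the SDDC Manager VM can do it. A sketch follows; the log file glob and the assumption that the username appears verbatim in each entry are illustrative, so inspect a real activity log under /var/log/vmware/vcf/ before relying on any parsing:

```python
import glob

def find_user_activity(username, pattern="/var/log/vmware/vcf/*/activity*.log"):
    """Collect activity-log lines that mention a given user.

    Both the glob pattern and the assumption that the username appears
    verbatim in each entry are illustrative; inspect a real log first.
    """
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="replace") as log:
            for line in log:
                if username in line:
                    hits.append((path, line.rstrip()))
    return hits

for path, entry in find_user_activity("administrator@vsphere.local"):
    print(f"{path}: {entry}")
```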

Giving customers enhanced security logging resolution: Check!

Enhanced Access Security Hardening

This release disables the SSH service on ESXi hosts by default, following the vSphere security configuration guide recommendation.

This applies to new and upgraded VMware Cloud Foundation 4.4 on VxRail 7.0.320 deployments.
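If you later need SSH on a specific host for troubleshooting, you can start the service on demand rather than weakening the default everywhere. A minimal pyVmomi sketch, with placeholder vCenter and host names:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only TLS handling; use verified certificates in production.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="<password>",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        if host.name == "esxi-01.example.com":         # placeholder host
            # Start the SSH service now, but leave its startup policy
            # alone so the hardened default returns after a reboot.
            host.configManager.serviceSystem.StartService(id="TSM-SSH")
            print(f"SSH started on {host.name}")
    view.Destroy()
finally:
    Disconnect(si)
```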

Giving customers enhanced default platform security hardening resolution: Check!

Log4j and Apache HTTP Server Fixes

No security conversation is complete without addressing the headache that has been the talk of the technology world recently: the Log4j and Apache HTTP Server vulnerability discoveries. VCF on VxRail customers can rest assured that fixes for these vulnerabilities are included as part of this release.

Kicking Log4j and Apache HTTP bugs to the curb resolution: Check!

To wrap up…

Well, that about covers it for this new batch of updates. For the full list of new features, please refer to the release notes listed below. There are additional resource links at the bottom of this post. We hope to continue making good on our VCF on VxRail platform resolutions throughout the year! Hopefully, we all can say the same for ourselves in other areas of our lives. Now, where is that treadmill...?

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional resources

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on InfoHub

VxRail Videos

Virtualizing GPUs for AI Workloads with NVIDIA AI Enterprise Suite and VxRail Whitepaper

VMware Blog Post on new VCF 4.4 support of NVIDIA AI Enterprise Suite and GPUs

Read Full Blog
  • VxRail
  • SUSE
  • Kubernetes

Running SUSE Rancher and Rancher Kubernetes Engine (RKE2) on Dell VxRail

Vic Dery Vic Dery

Wed, 19 Jan 2022 14:29:54 -0000

|

Read Time: 0 minutes



As containerization is exploding in many data centers, Dell Technologies is continuing to assist customers on their DevOps adoption journey by developing infrastructure solutions that can act as the foundation for running their modern containerized business applications. These hyperconverged infrastructure (HCI) and storage solutions are called DevOps-ready platforms. VxRail is included in the DevOps-ready platform family as a scalable HCI integrated system infrastructure solution with automated lifecycle management that eases the IT operations experience and helps speed up the delivery of infrastructure resources to developers, thus enhancing their DevOps end-user experience.

While it is important to have DevOps-ready infrastructure to underpin an organization’s DevOps adoption journey, a solution is not complete without a cloud native container orchestration platform, such as Kubernetes, on top. SUSE Rancher is an open-source Kubernetes management platform that fills this role. Together, SUSE Rancher and VxRail enable customers to implement a multi-cloud deployment strategy and ensure that their organizations can more effectively control resource costs and adhere to corporate governance mandates while maintaining the flexibility of cloud operations on-premises.

VxRail is the only fully integrated, pre-configured, and tested HCI system optimized with VMware vSphere, making it ideal for customers who want to leverage SUSE Rancher through vSphere to create and operate Kubernetes clusters on-premises. VxRail’s proven hyperconverged infrastructure allows VMs to be created for traditional workloads and for apps not yet ready to be containerized. As a result, SUSE Rancher can potentially reduce the number of operating systems and virtual machines that have to be created or installed. Additionally, SUSE Rancher on VxRail enables IT operations to control the management of VxRail while giving DevOps teams the ability to build and manage their own containers via the SUSE Rancher interface. Running SUSE Rancher on VxRail delivers a seamless and automated operations experience across cloud-native and traditional workloads.

SUSE Rancher is the Kubernetes (K8s) cluster management part of the SUSE portfolio, and Rancher Kubernetes Engine (RKE2) is the Kubernetes runtime component. SUSE Rancher is a complete enterprise computing platform for running Kubernetes clusters on-premises, in the cloud, or at the edge. One hundred percent open-source software with zero lock-in, SUSE Rancher fits in perfectly with a multi-cluster, hybrid, or multi-cloud container orchestration strategy. A recently published solution brief describes the work conducted by the Dell Customer Solution Center engineering team to validate the deployment of SUSE Rancher and RKE2 on VxRail, and highlights the better-together experience customers can get from the combination.

The Dell Customer Solution Center allows customers to experience Dell’s end-to-end solutions portfolio with a personalized engagement with Dell customer solution center engineers. These engagements are designed to help customers identify and solve their business challenges by utilizing Dell Technologies solutions to optimize the innovation within their organizations. Customer Solution Center services range from proof of concepts to technical deep dive conversations and presentations and more. Customers can work with these Dell Technologies experts in our dedicated labs or remotely from any location with the latest showcases of our offerings. For more information on the Dell Customer Solution Center, see https://www.delltechnologies.com/csc

As part of the solution validation, the solution center engineering team deployed SUSE Rancher and RKE2 in various ways, using single-node as well as multi-node deployments and using both automated and manual installation processes. This effort followed the instructions found in the SUSE Rancher user documentation. As a result, the team could confirm the ease of deployment of the platform.

Deploying SUSE Rancher is as easy as following the documentation. This documentation can be found on the SUSE Rancher website and provides straightforward information about initial installation requirements. SUSE Rancher can be freely downloaded and installed on your VxRail infrastructure. A SUSE Rancher support subscription is available for purchase through Dell Technologies.
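As a quick post-install sanity check, keeping with the “easy as following the documentation” theme, a few lines with the official Kubernetes Python client can confirm that the Rancher pods are up in the cattle-system namespace (the namespace Rancher’s Helm chart uses by default); this sketch assumes a kubeconfig from your RKE2 install is in place:

```python
from kubernetes import client, config

# Load credentials from the kubeconfig created during the RKE2 install.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="cattle-system").items:
    ready = all(c.ready for c in (pod.status.container_statuses or []))
    print(f"{pod.metadata.name}: phase={pod.status.phase} ready={ready}")
```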

VxRail is a DevOps-ready platform that can run traditional workloads alongside container orchestrators such as SUSE Rancher and RKE2. In addition, SUSE Rancher and RKE2 provide a Kubernetes platform that addresses operational and security challenges. Together, VxRail and SUSE Rancher make it easy for businesses to standardize both IT and developer operations on-premises and in the public cloud and accelerate their DevOps adoption journey.

VxRail Resources

 

Dell Technologies Resources

SUSE Rancher Resources


Author information

Vic Dery, Senior Principal Technical Marketing Engineer 

Vic.Dery@dell.com

LinkedIn


Read Full Blog
  • HCI
  • containers
  • VxRail

Containing The Future With Dell EMC VxRail

Vic Dery Vic Dery

Thu, 04 Nov 2021 15:21:05 -0000

|

Read Time: 0 minutes

Containing The Future With Dell EMC VxRail: Modern HCI Infrastructure for Running Container Orchestration Platforms

The world of containers is here, and it is driving business forward. Developers and infrastructure operators are designing, deploying, and integrating next-generation cloud native applications using a combination of containers and virtual machines (VMs), and taking advantage of the benefits that each delivers.

This evolution empowers customers to use their existing virtualization knowledge and extend it to containerized applications. Rather than develop siloed infrastructures that cater to individual workload types during this transition, many organizations are looking for a unified infrastructure platform that supports running both VMs and containers. This is where VxRail comes in.

The VxRail infrastructure is designed to run both VMs and containers. Regardless of the container orchestration platform, VxRail provides a scalable and life cycle-managed environment for consistently running containers across single or multicluster solutions. The simplicity of running container orchestration platforms on VxRail frees up organizations to focus on the business value and benefits that the solution delivers.

In recent years, a steady stream of validations and reference architectures for running containers on VxRail has highlighted the following:

  • More customers are running container frameworks alongside—or even within—their virtualization frameworks, making for a smoother shift into the adoption of containers.
  • Organizations are seeing VxRail as an ideal foundational infrastructure platform for quickly adopting containers and supporting their container orchestration runtime ecosystems of choice.

VxRail Hyperconverged Infrastructure (HCI) capabilities

VxRail Hyperconverged Infrastructure (HCI)-integrated systems help accelerate data center modernization, deploy hybrid clouds, and implement developer-ready application platforms based on Kubernetes (K8s). These tasks are possible because VxRail supports running the most demanding workloads and applications, whether VM-based or containerized, while simplifying operations for IT infrastructure teams.

VxRail is the only fully integrated, preconfigured, and tested HCI system optimized for VMware. It delivers a seamless, automated operational experience with 100 percent native integration between VxRail Manager and vCenter. Intelligent life cycle management automates non-disruptive upgrades, updates, and node addition or retirement while keeping the VxRail infrastructure in a continuously validated state to ensure that workloads are always available.

These features make VxRail ideal for running container orchestration platforms, specifically those platforms that require vSphere for operation. As a result, VxRail provides customers with the flexibility to choose container orchestration platforms that are right for them. It enables them to run the container orchestration platform on a common HCI infrastructure platform that may be used with other traditional workloads.

Validating VxRail across container platform options

Dell Technologies helps customers accelerate their multicloud adoption and ensure that they have choices to select the best container orchestration platform. This flexibility has been confirmed through the development of a series of validation or reference architectures across several of the most widely adopted container orchestration platform distributions.

With VxRail, these containerized solutions deliver the same benefits on-premises or in the cloud. The following figure highlights some of these distribution options where validation work has been performed.

Figure 1. Examples of the container platform options with VxRail

Let’s look at specific examples of running VxRail with some of today’s most commonly adopted orchestration platforms.

VMware Cloud Foundation with Tanzu on VxRail

VMware Tanzu enables businesses to build, run, and manage modern applications on any cloud and continuously deliver value to their customers. With VMware Tanzu, organizations can simplify multicloud operations and free up developers to move faster with easy access to the right resources. It also enables development and operations teams to work together to deliver transformative business results.

Figure 2

These capabilities start with the Tanzu Kubernetes Grid (TKG) runtime. With TKG, VMware uses the leading open-source technologies in the Kubernetes ecosystem to build a full Kubernetes runtime platform capable of running mission-critical customer applications.

TKG has the following open-source technologies, which VMware supports, built into its runtime platform for easy enterprise adoption:

  • Cluster API for cluster life cycle management
  • Harbor for container registry
  • Contour for ingress
  • Fluentbit for logging
  • Grafana and Prometheus for monitoring
  • Antrea and Calico for container networking
  • Velero for backup and recovery
  • Sonobuoy for conformance testing

With VMware Tanzu, businesses also have the flexibility for implementing the TKG runtime. They can do any of the following:

  1. Run TKG on any infrastructure, including vSphere, VMware Cloud on AWS, or native public clouds like AWS
  2. Run TKG in vSphere by using the TKG Service, which is bundled as a part of vSphere 7 with Tanzu and VMware Cloud Foundation (VCF) with Tanzu
  3. Run TKG as a service with Tanzu Mission Control (TMC)

Having touched on these TKG runtime implementation options, let’s look at the method used in our validated reference architecture: VMware Cloud Foundation with Tanzu on VxRail using the TKG Service. Why did we choose this method out of the three available? Because it delivers the type of easy deployment and operation that customers are looking for! VMware Cloud Foundation on VxRail delivers a simple and direct path to the hybrid cloud and Kubernetes at cloud scale with one complete, automated platform.

The Reference Architecture document provides general design and deployment guidelines for running modern applications such as Confluent Kafka and Elasticsearch on VMware Cloud Foundation with Tanzu on VxRail. Find the Running Modern Applications with VMware Cloud Foundation with Tanzu on Dell EMC VxRail document here.

Amazon EKS Anywhere on VxRail

Amazon EKS Anywhere is a deployment option that enables customers to create and operate Kubernetes clusters on-premises using VMware vSphere, while allowing for connectivity and portability to AWS public cloud environments. It also provides operational consistency and tooling with AWS EKS.

Dell Technologies and Amazon recently validated Dell EMC VxRail running Amazon EKS Anywhere, in addition to the use of Dell EMC VxRail dynamic node clusters and Dell EMC PowerStore to provide the back-end storage for Amazon EKS Anywhere. (Dynamic nodes are not limited to this solution as they are features of VxRail and not specific to Amazon EKS Anywhere.)

Figure 3. Amazon EKS Anywhere on VxRail

VxRail is a strong platform choice for EKS Anywhere, which requires vSphere for production environments. EKS Anywhere running on VxRail delivers a seamless, automated operational experience for VxRail infrastructure across cloud-native and traditional workloads.

VxRail intelligent life cycle management automates non-disruptive upgrades and updates to keep its infrastructure in a continuously validated state, ensuring that workloads keep running on optimized clusters. This automation greatly reduces risk so that customers can stay current with the multiple releases of Kubernetes and the EKS platform, which are updated using EKS Anywhere. VxRail and EKS Anywhere make it easy to standardize both IT and developer operations on-premises and in the Amazon public cloud.

EKS Anywhere is built on open-source software, using VMware vSphere to create and operate Kubernetes on-premises with automated deployment, scaling, and management of containerized applications. EKS Anywhere provides an installable software package for creating and operating on-premises Kubernetes clusters based on Amazon EKS Distro—the same Kubernetes distribution used by Amazon EKS for clusters on AWS.

By simplifying the creation and operation of on-premises Kubernetes clusters and automating cluster management, EKS Anywhere can reduce support costs and avoid the maintenance of redundant open-source and third-party tools. The EKS console can also display all of your Kubernetes clusters, including EKS Anywhere clusters, through the EKS Connector (public preview).

Amazon EKS Anywhere is available by free download from AWS here. For more details, see the Running Amazon Elastic Kubernetes Service Anywhere on Dell EMC VxRail Solutions Brief here.

Red Hat OpenShift with VMware Cloud Foundation on VxRail

Red Hat OpenShift ships with Red Hat Enterprise Linux CoreOS for the Kubernetes control plane nodes. It supports both Red Hat Enterprise Linux CoreOS and Red Hat Enterprise Linux for worker nodes.

OpenShift supports the Open Container Initiative (OCI), an open governance structure for container formats and runtimes, and includes hundreds of fixes for defects, security, and performance issues for upstream Kubernetes in each release. It is tested with dozens of technologies as a tightly integrated platform supported over a nine-year life cycle. OpenShift includes software-defined networking, validates additional common networking solutions, and validates numerous storage and third-party plug-ins for its releases.

Figure 4. OpenShift with VMware Cloud Foundation on VxRail

VMware Cloud Foundation on VxRail delivers flexible, consistent, secure infrastructure and operations across private and public clouds. It is well suited to meet the demands of modern applications running on Red Hat OpenShift Container Platform in a virtualized environment and makes it easy to manage the life cycle of the hybrid cloud environment. A unified management plane is also available for all applications, including OpenShift. 

VMware Cloud Foundation uses leading virtualization technologies, including vSphere, NSX-T, and vSAN. VxRail Manager and VMware Cloud Foundation Manager provide the life cycle management, and vSAN provides reliable, high-performance, and flexible storage to OpenShift. NSX-T provides the secure, high-performance virtual networking infrastructure to OpenShift, and vSphere DRS and vSphere HA deliver efficient resource usage and high availability. All of these technologies combine to create a consolidated solution for running OpenShift Container Platform with VMware Cloud Foundation on VxRail.

The Running Red Hat OpenShift Container Platform on VMware Cloud Foundation Reference Architecture document, which demonstrates the architecture of running OpenShift Container Platform with VMware Cloud Foundation on VxRail, can be found here. This document shows the configuration details, hardware resources, and software resources used in the solution validation, along with various configuration options and best practices.

Conclusion

Dell Technologies and VMware continue to see containers as a high-value technology foundation for the future of enterprise solutions. While this blog post is heavily focused on containerization, keep in mind the significant and lasting role that virtualization continues to play in modern data centers. Not every workload is suited for containerization, so containers complement virtualization while setting the foundation for building flexible containerized systems and platforms on VxRail.

Additional resources

Author information

Vic Dery, Senior Principal Technical Marketing Engineer

Vic.Dery@dell.com

LinkedIn


Read Full Blog
  • VxRail

A Taste of VxRail Deployment Flexibility

Daniel Chiu Daniel Chiu

Thu, 28 Oct 2021 11:13:18 -0000

|

Read Time: 0 minutes

With the recent announcements of VxRail dynamic nodes and satellite nodes, the VxRail portfolio is certainly getting more diverse.  Like after any good trick-or-treating run, it’s time to sort through the bag of goodies.  Yes, here in the United States it’s Halloween time if you can believe it, though stores are trying to confuse you by putting up Christmas decorations already.

The addition of VxRail dynamic nodes and VxRail satellite nodes allows VxRail to address even more customer workloads.  This blog breaks down the different deployment options that are now available at the datacenter and at the edge.  So, let’s check out what’s in that bag.

VxRail for the datacenter

Figure 1 VxRail node with vSAN

At the core of the VxRail portfolio is the VxRail cluster with vSAN.  To me, the VxRail node with vSAN plays the role of the Snickers bar: a hyperconvergence of caramel, peanuts, and milk chocolate with the heartiness and versatility to satisfy your need for energy whether at home or far from it.  Similarly, the VxRail node is composed of software-defined compute and storage (vSphere and vSAN), internal cache and capacity drives, and network cards.  Running VxRail HCI System Software, the VxRail cluster provides a hyperconverged infrastructure (HCI) that allows customers to cost-effectively scale and incrementally expand their cluster, from as few as 3 nodes to 64 nodes, to match the pace of growth of their workload requirements.  Most VxRail customers start with this deployment type as their introduction to the world of HCI.

Figure 2 VxRail series types

 The VxRail node is available in six different series that are based on several PowerEdge Server platforms to offer different combinations of space-efficiency, performance, storage capacity, and workload diversity. 

For situations where customers are looking for site resiliency to service their applications, they can turn to stretched clusters.  A cluster can be stretched across two datacenters so that, in case one site experiences a catastrophic event that causes it to go offline, the secondary site can automatically service the same applications to the clients.  Because writes to storage need to be mirrored onto the secondary site before they are acknowledged on the primary site, the two sites are typically in the same region so that latency does not significantly impact the quality of service of the applications running on the primary site.  

With the addition of VxRail dynamic nodes, VMware Cloud Foundation (VCF) on VxRail customers can now better address use cases where they continue to utilize their enterprise storage arrays to run mission-critical or life-critical workloads for data resiliency and data protection.  Almost every industry has applications that fall under this category, such as financial service applications or critical patient care services.  Customers typically store these applications on enterprise storage arrays and rely on vSphere clusters for virtualized compute resources.  By deploying VxRail dynamic node clusters as vSphere clusters, customers benefit from the same operational consistency and simplicity across all their VxRail clusters.

Figure 3 VxRail dynamic node

Like Halloween candy without nuts, there are use cases for VxRail nodes without drives.  VxRail dynamic nodes are compute-only nodes without internal storage, which means they don’t require vSAN licenses.  They are available in the E, P, and V Series.  VxRail dynamic nodes rely on an external storage resource as their primary storage: either Dell EMC storage arrays or datastores shared by vSAN clusters using VMware vSAN HCI Mesh.  With VxRail dynamic nodes in the fold, VCF on VxRail customers can include workload domains that use existing enterprise storage arrays for their critical workloads without incurring vSAN license costs.  For customers looking to optimize their vSAN resources, VxRail dynamic node clusters allow them to scale compute and storage independently for certain workloads, like Oracle, to reduce vSAN license costs.

To learn more about VxRail dynamic nodes, you can take a look at my previous blog about VxRail 7.0.240.

VxRail for the edge

As customers look to extend more to the edge to process information closer to where it is collected, the VxRail portfolio is extending as well, helping customers expand their VxRail footprint while maintaining operational consistency and simplicity from the core to the edge.  The edge space covers a wide spectrum of IT infrastructure requirements, from scaled-down datacenter infrastructure to extreme remote locations that can be space-constrained, power-constrained, bandwidth-constrained, or subject to harsh climate and use.  While the VxRail portfolio does not address the furthest ends of the far edge, let’s walk through the deployment options available within the portfolio.

Starting with the scaled-down datacenter infrastructure, the VxRail cluster with vSAN may still be the right fit for some edge profiles.   For locations such as regional engineering hubs or satellite university campuses, having a three or four-node cluster can provide the performance and availability required to meet the site needs.

Like Twix, the VxRail 2-node cluster with vSAN comes as a pair of VxRail nodes with vSAN.  When used with the E Series or D Series, the 2-node cluster is the smallest form factor for a vSAN cluster in the VxRail portfolio.  This deployment type requires a witness appliance installed outside of the cluster to maintain quorum and enable recovery when a failed node comes back online.


Figure 4 VxRail D series

 

As mentioned before, the D Series is the ruggedized VxRail node, with a much shorter depth at 20 inches.  It’s a very interesting option for edge locations where space is limited or the ambient environment would be too much of a challenge for a typical datacenter solution.  Let’s say you want to run a VxRail on an airplane that’s 15,000 feet (~4,500 meters) above ground.  You can find more details here.

With the newly announced VxRail satellite nodes, there is a great opportunity to extend the VxRail footprint even further to locations where, previously, it just was not the right fit whether it be cost-related, space-related, or the inability to even manage the infrastructure.  VxRail satellite nodes are like the M&Ms in this VxRail bag of goodies.  You can have a lot of them and they may look different on the outside but, at each core, it’s the same milk chocolatey center.  

Figure 5 VxRail satellite node management paradigm

VxRail satellite nodes are single VxRail nodes designed to operate at the outer edges as an extension of a VxRail cluster with vSAN, which manages them.  In the retail industry, you can find them at shops running sales inventory, payment, and ordering applications.  VxRail satellite nodes will be available on three VxRail models (E660, E660F, and V670F) and run the same VxRail HCI System Software as other VxRail deployment offerings.  VxRail satellite nodes act as separate ESXi hosts.  They do not run vSAN but have their own internal storage that is protected via an onboard RAID controller.

For edge locations where application availability is not as important as the cost, the VxRail satellite node is the most cost-effective VxRail solution.  Satellite nodes are centrally managed by a VxRail cluster with vSAN, typically deployed at a regional datacenter.  Virtual administrators can monitor the health of the satellite nodes, run health checks, and initiate node updates from a central location.


VxRail HCI System Software as the common denominator

Though the new offerings in the VxRail portfolio differ from what you normally view as a VxRail node, all VxRail nodes run the same VxRail HCI System Software.  Like sugar for candy, once you have a taste you want more.  The common operating model allows VxRail customers to confidently apply Continuously Validated States across their VxRail footprint to maximize their investment in VMware software in a secure way.  VxRail HCI System Software continues to provide the peace of mind to allow our customers to innovate and transform their infrastructure as their workload demands evolve from the datacenter to the far reaches at the edge.

Conclusion

Unlike the sugar highs and lows that we all will get from consuming too much Halloween candy, this VxRail bag of goodies delivers the operational steadiness and consistency that will help our customers achieve the management bliss they’ll need for their IT infrastructure from the core to the edge.   To learn more about VxRail deployment flexibility, listen to our latest podcast featuring Ash McCarty, Director of product management in VxRail platforms, as he provides a technical deep dive into the VxRail dynamic node and VxRail satellite node offerings.

Author Information

Daniel Chiu, Senior Technical Marketing Manager at Dell Technologies

LinkedIn: https://www.linkedin.com/in/daniel-chiu-8422287/ 


Read Full Blog
  • Intel
  • HCI
  • VMware
  • VxRail
  • vSAN
  • Optane

I feel the need – the need for speed (and endurance): Intel Optane edition

David Glynn David Glynn

Wed, 13 Oct 2021 17:37:52 -0000

|

Read Time: 0 minutes

It has only been three short months since we launched VxRail on 15th Generation PowerEdge, but we're already expanding the selection of configuration offerings. So far we've added 18 additional processors to power your workloads, including some high-frequency, low-core-count options (delightful news for those with applications that are licensed per core), an additional NVIDIA GPU (the A30), and a slew of additional drives, and we've doubled the RAM capacity to 8TB. I've probably missed something, as it can be hard to keep up with all the innovations taking place within this race car that is VxRail!

In my last blog, I hinted at one of those drive additions: faster cache drives. Today I'm excited to announce that you can now order the 400GB or 800GB Intel P5800X, Intel’s second-generation Optane NVMe drive, and turbocharge your VxRail with it. Before we delve into some of the performance numbers, let’s discuss what it is about the Optane drives that makes them so special. More specifically, what is it about them that enables them to deliver so much more performance, in addition to significantly higher endurance ratings?

To grossly over-simplify it (and my apologies in advance to the Intel engineers who poured their lives into this): when writing to NAND flash, an erase cycle needs to be performed before a write can be made. These erase cycles are time-consuming operations and are the main reason why random write IO capability on NAND flash is often a fraction of the read capability. Additionally, garbage collection runs continuously in the background to ensure that space is available for incoming writes. Optane, on the other hand, performs bit-level write-in-place operations, so it requires no erase cycle, no garbage collection, and incurs no write performance penalty. Hence, its random write IO capability almost matches its random read IO capability. So just how much better is endurance with this new Optane drive? Endurance is measured in Drive Writes Per Day (DWPD): how many times the drive's entire capacity could be overwritten each day of its warranty life. For the 1.6TB NVMe P5600 this is 3 DWPD, or 55 MB per second, every second for five years, which is just shy of 9PB of writes. Not bad. The 800GB Optane P5800X, rated at 100 DWPD, will endure 146PB over its five-year warranty life, or almost 1 GB per second (926 MB/s) every second for five years. Not quite indestructible, but that is a lot of writes; so much so that you don’t need extra capacity for wear leveling, and a smaller capacity drive will suffice.
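The endurance arithmetic above is easy to reproduce from the drives' published capacity, DWPD rating, and five-year warranty period:

```python
SECONDS_PER_DAY = 24 * 60 * 60
WARRANTY_DAYS = 5 * 365        # five-year warranty period

def endurance(capacity_gb, dwpd):
    """Return (petabytes written over warranty, sustained MB/s)."""
    total_pb = capacity_gb * dwpd * WARRANTY_DAYS / 1_000_000
    mb_per_sec = capacity_gb * dwpd * 1000 / SECONDS_PER_DAY
    return total_pb, mb_per_sec

print("P5600  (1.6TB, 3 DWPD):   %.1f PB, %.0f MB/s" % endurance(1600, 3))
print("P5800X (800GB, 100 DWPD): %.0f PB, %.0f MB/s" % endurance(800, 100))
# -> roughly 8.8 PB at ~56 MB/s, and 146 PB at ~926 MB/s,
#    matching the figures above, give or take rounding
```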

You might wonder why you should care about endurance, since Dell EMC will replace the drive under warranty anyway. There are three reasons. First, when a cache drive fails, its diskgroup is taken offline, so not only have you lost performance and capacity, but your environment also takes on the additional burden of a rebuild operation to re-protect your data. Second, more and more systems are being deployed outside of the core data center. Replacing a drive in your data center is straightforward, and you might even have spares onsite, but what about outside of your core datacenter? What is your plan for replacing a drive at a remote office a thousand miles away? What if that remote location is not an office but an oil rig one hundred miles offshore, or a cruise ship halfway around the world, where the cost of getting a replacement drive there is not trivial? In these remote locations onsite spares are commonplace, but the exceptions are what lead me to the third reason: Murphy's Law. IT and IT staffing might be an afterthought at these remote locations. Getting a failed drive swapped out at a remote location that lacks true IT staffing may not get the priority it deserves, and then there is that ever-present risk of user error... “Oh, you meant the other drive?!? Sorry...”

Cache in its many forms plays an important role in the datacenter, enabling switches and storage to deliver higher levels of performance. On VxRail, our cache drives fall into two categories, SAS and NVMe, with NVMe delivering up to 35% higher IOPS and 14% lower latency. Among our NVMe cache drives we have two from Intel: the 1.6TB P5600, and the Optane P5800X in 400GB and 800GB capacities. The links below will bring you to each drive's specifications, including performance details. But how does performance at the drive level impact performance at the solution level? Because at the end of the day, the solution level is what your application consumes, after cache mirroring, network hops, and the vSAN stack. Intel is a great partner to work with: when we checked with them about publishing solution-level performance data comparing the two drives side by side, they were all for it.

In my over-simplified explanation above, I described how the write cycle for Optane drives is significantly different because an erase operation does not need to be performed first. So how does that play out in a full solution stack? Figure 1 compares a four-node VxRail P670F cluster running a 100% sequential write 64KB workload. This is not a test that reflects any real-world workload, but it really stresses the vSAN cache layer, highlights the consistent write performance that 3D XPoint technology delivers, and shows how Optane is able to de-stage cache when it fills up without compromising performance.

Figure 1: Optane cache drives deliver consistent and predictable write performance

When we look at performance, there are two numbers to keep in mind: IOPS and latency. The target is high IOPS with low and predictable latency, at a real-world IO size and read:write ratio. To that end, let’s look at how VxRail performance differs with the P5600 and P5800X under OLTP32K (70R30W) and RDBMS (60R40W) benchmark workloads, as shown in Figure 2.

Figure 2: Optane cache drives deliver higher performance and lower latency across a variety of workload types.

It doesn't take an expert to see that with the P5800X, this four-node VxRail P670F cluster's peak performance is significantly higher than when it is equipped with the P5600 as a cache drive: for RDBMS workloads, up to 44% higher IOPS with a 37% reduction in latency. But peak performance isn't everything. Many workloads, particularly databases, place a higher importance on latency requirements. What if our workload, database or otherwise, requires 1ms response times? Maybe this is the Service Level Agreement (SLA) that the infrastructure team has with the application team. In such a situation, based on the data shown, and for an OLTP 70:30 workload with a 32K block size, the VxRail cluster would deliver over twice the performance at the same latency SLA, going from 147,746 to 314,300 IOPS.
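The "over twice" claim is straightforward to verify from those two data points:

```python
base_iops, optane_iops = 147_746, 314_300
print(f"{optane_iops / base_iops:.2f}x IOPS at the same 1 ms latency SLA")  # ~2.13x
```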

In the datacenter, as in life, we are often faced with "Good, fast, or cheap. Choose two." When you compare the price tag of the P5600 and P5800X side by side, the Optane drive has a significant premium for its good and fast. However, keep in mind that you are not buying an individual drive, you are buying a full stack solution of several pieces of hardware and software, where the cost of the premium pales in comparison to the increased endurance and performance. Whether you are looking to turbo charge your VxRail like a racecar, or make it as robust as a tank, Intel Optane SSD drives will get you both.

Author Information 

David Glynn, Technical Marketing Engineer,  VxRail at Dell Technologies

Twitter: @d_glynn

LinkedIn: David Glynn

Additional Resources

Intel SSD D7P5600 Series 1.6TB 2.5in PCIe 4.0 x4 3D3 TLC Product Specifications

Intel Optane SSD DC P5800X Series 800GB 2.5in PCIe x4 3D XPoint Product Specifications

Read Full Blog
  • VxRail
  • VMware Cloud Foundation

Cloud Foundation on VxRail is Even More “Dynamic” and “Power”-ful

Jason Marques Jason Marques

Wed, 03 Aug 2022 21:32:14 -0000

|

Read Time: 0 minutes

Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.3.1 on Dell EMC VxRail 7.0.241. This new release extends the flexible VxRail platform principal storage options with Dell EMC Fibre Channel storage, and adds new VxRail storage integration enhancements, new VxRail dynamic node and 15th generation hardware platform support, new LCM enhancements, and security and deployment updates. Read on for details!

Cloud Foundation on VxRail: Storage Enhancements

VMFS on FC Principal Storage with Dell EMC PowerMax, PowerStore, and Unity XT Storage, and 14th Generation VxRail Dynamic Nodes

More co-engineered goodness makes its way into this release with new support for VMFS on FC Principal storage options for VI Workload Domains.

First, VxRail 7.0.241 supports deploying clusters that use VMFS over FC storage as their principal storage instead of vSAN when using the new 14th Generation VxRail dynamic nodes. Customers with Dell EMC PowerMax, Dell EMC PowerStore-T, and Dell EMC Unity XT arrays can now leverage their existing storage investments for VxRail environments, while still taking advantage of VxRail HCI System Software cluster validation, automation, and life cycle management for the compute, hardware, and software infrastructure components.

This principal FC storage support also extends into use cases for Cloud Foundation on VxRail. VCF 4.3.1 now provides updated SDDC Manager awareness of these new types of VMFS on FC principal storage-based 14th Generation VxRail dynamic node clusters. In addition, we have also updated SDDC Manager workflows to support adding either VMFS on FC principal storage-based 14th Generation VxRail dynamic node clusters or vSAN based principal storage VxRail HCI node clusters into VI Workload Domains. 

With this latest enhancement, VCF on VxRail delivers even more storage flexibility to best meet your workload requirements. The figure below illustrates the different ways in which storage can be leveraged in VCF on VxRail deployments, across workload domain types and across principal and supplemental storage use cases. (Note that external storage, including remote vSAN HCI Mesh datastores, was already supported with VCF on VxRail prior to this release, but only as supplemental storage.)

Figure 1: Cloud Foundation on VxRail Supported Storage Options

To get some hands-on experience creating a new VxRail VI workload domain using VMFS on FC principal storage and VxRail dynamic nodes with PowerStore-T, check out this new interactive demo that walks you through the process.

Cloud Foundation on VxRail LCM Enhancements

SDDC Manager LCM Precheck and Cluster Operation Workflows Integrated with VxRail Health Check API

SDDC Manager has always enabled VCF administrators to perform ad hoc LCM prechecks. These prechecks are used to validate the VCF environment health and configuration status of workload domains, to avoid running into issues while executing LCM and cluster management related workflows. 

This latest release includes more co-engineered enhancements to these prechecks by integrating them with native VxRail Health Check APIs. As a result, SDDC Manager ad hoc precheck, LCM, and cluster management-related workflows call on these VxRail APIs to obtain detailed VxRail system-specific cluster health and configuration checks. This brings administrators a more turnkey platform experience, one that factors in the underlying HCI system hardware and software delivered by VxRail, all within the native SDDC Manager administration experience.

Figure 2: Integrated VxRail Health Check API with SDDC Manager LCM precheck

VxRail Hardware Platform Enhancements

Intel-based 15th Generation VxRail HCI Nodes and 14th Generation VxRail Dynamic Nodes

The VxRail 7.0.241 release brings about new HW platform support with Intel-based 15th Generation VxRail HCI nodes and new 14th Generation VxRail dynamic nodes. For more information on the latest VxRail hardware platforms, check out these blogs:

Cloud Foundation on VxRail Deployment Enhancements

New VxRail First Run Options to Set Network MTU and Multiple VxRail System VDS via the UI

As part of the management cluster prep for VCF on VxRail deployments, the network MTU size of the management cluster system network must be configured before Cloud Builder executes the VCF bring-up. This ensures that the management cluster can support the prerequisites for the deployment and installation of NSX-T and aligns it with the required VCF best-practice design architecture.

Prior to this release, these network settings had to be manually configured. Now, they are set as part of the standard VxRail First Run cluster deployment automation process. This streamlines prerequisite management cluster configuration for VCF and speeds up VCF on VxRail deployments, bringing a faster Time-To-Value (TTV) for customers.

One can now also use the VxRail Manager First Run UI Deployment Wizard to deploy VxRail clusters with multiple system VDS configured. In previous versions of VxRail, this was only available through the VxRail API. The wizard allows you to configure this and other cluster settings to simplify cluster deployments while delivering more flexible cluster configuration options.


Cloud Foundation on VxRail Security Enhancements

ESXi Lockdown Mode For VxRail Clusters

No blog post is complete without calling out new security feature enhancements. And VCF 4.3.1 on VxRail 7.0.241 delivers. Introduced in this release is new support for ESXi lockdown mode for VxRail clusters. 

After a workload domain and a corresponding VxRail cluster have been created, a user can use the vSphere Web Client to configure lockdown mode on a given VxRail host. VCF also allows you to enable or disable lockdown mode for an entire workload domain or cluster by using the SOS command line utility, which automates the process across several hosts quickly. (Important: VCF currently only supports the implementation of normal lockdown mode, which is what the SOS utility configures. ‘Strict’ lockdown mode is currently not supported.)
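For context, here is roughly what the per-host operation looks like at the vSphere API level, as a pyVmomi sketch; in VCF on VxRail you would normally let the SOS utility drive this across hosts rather than scripting it yourself, and the host lookup is assumed to have happened elsewhere:

```python
from pyVmomi import vim

def set_normal_lockdown(host, enable):
    """Toggle normal lockdown mode on one ESXi host.

    'host' is a vim.HostSystem, looked up via any standard pyVmomi
    inventory walk. Strict lockdown (which also shuts off the Direct
    Console UI) is deliberately not offered here, matching the VCF
    support statement above.
    """
    mode = (vim.host.HostAccessManager.LockdownMode.lockdownNormal
            if enable
            else vim.host.HostAccessManager.LockdownMode.lockdownDisabled)
    host.configManager.hostAccessManager.ChangeLockdownMode(mode)
```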

Well there you have it. Tons of new VCF on VxRail goodies to ponder for now. As always, for more information on VxRail and VCF on VxRail, please check out the links at the end of this blog and other VxRail related blogs here on the InfoHub.

Additional Resources

VMware Cloud Foundation on Dell EMC VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos 

Author Information

Author: Jason Marques

Twitter - @vWhipperSnapper


Read Full Blog
  • VxRail

Delivering VxRail simplicity with vLCM compatibility

Daniel Chiu Daniel Chiu

Tue, 28 Sep 2021 17:23:22 -0000

|

Read Time: 0 minutes

As the days start off with cooler mornings and later sunrises, we welcome the autumn season.  Growing up, each season brought forth its own traditions and activities.  While venturing through corn mazes was fun, autumn first and foremost meant that it was apple-picking time.  Combing through the orchard, you’re constantly looking for which apple to pick, even comparing ones from the same branch, because no two are alike.  The newly introduced VMware vSphere Lifecycle Manager (vLCM) compatibility in VxRail 7.0.240 is similar: the VxRail implementation differs from that of the Dell EMC vSAN Ready Nodes, even though they’re from the same vLCM “branch.”

Now that VxRail offers vLCM compatibility, it’s a good opportunity to provide an update to Cliff’s blog post last year where he provided a comprehensive review of the customer experiences with lifecycle management of vSAN Ready Nodes and VxRail clusters. While my previous blog post about the VxRail 7.0.240 release provided a summary of VxRail’s vLCM implementation and the added value, I’ll focus more on customer experience this time. Combining the practice of Continuously Validated States to ensure cluster integrity with a VxRail-driven experience truly showcases how automated the vLCM process can be. 

In this blog, I’ll cover the following:

  • Overview of VMware vLCM
  • Compare how to establish a baseline image 
  • Compare how to perform a cluster update

Overview of VMware vLCM

Figure 1: VMware vSphere Lifecycle Manager vLCM framework

VMware vLCM was introduced in vSphere 7.0 as a framework that allows software and hardware to be updated together as a single system.  Combining the ESXi image and component firmware and drivers into a single workflow helps streamline the update experience.  To do that, server vendors are tasked with developing their own plugin to this vLCM framework to perform the function of the firmware and drivers add-on, as depicted in Figure 1.  The server vendor implementation builds the hardware catalog of firmware and drivers on the server and supplies the bits to vCenter.  For some components, the server vendors do not supply the firmware and drivers themselves and rely on the individual component vendors to provide the add-on capability.  Put together, the software and hardware form a cluster image.  To start using vLCM, you build out a cluster image and assign it as the baseline image.  For future updates, you build out a cluster image and assign it as the desired state image.  Drift detection between the two determines what needs to be remediated for the cluster to arrive at the desired state.
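If you are curious what a cluster image looks like under the covers, vCenter exposes the desired-state document through the vSphere Automation REST API. The following read-only sketch uses placeholder names and a placeholder cluster ID; verify the endpoint paths and response fields against the API reference for your vSphere version:

```python
import requests

VCENTER = "https://vcenter.example.com"   # placeholder
CLUSTER_ID = "domain-c8"                  # placeholder managed object ID

# Create an API session (POST /api/session with basic auth returns a token).
session = requests.post(
    f"{VCENTER}/api/session",
    auth=("administrator@vsphere.local", "<password>"),
    verify=False,  # lab only; use proper certificate trust in production
)
session.raise_for_status()
token = session.json()

# Read the cluster's desired software specification: the base ESXi image,
# the vendor add-on, and any components layered on top.
spec = requests.get(
    f"{VCENTER}/api/esx/settings/clusters/{CLUSTER_ID}/software",
    headers={"vmware-api-session-id": token},
    verify=False,
)
spec.raise_for_status()
doc = spec.json()
print("Base image:", doc.get("base_image"))
print("Add-on    :", doc.get("add_on"))
```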

For Dell EMC vSAN Ready Nodes, you use the OMIVV (OpenManage Integration with VMware vCenter) plugin to vCenter to use the vLCM framework.  VxRail, in contrast, has enhanced VxRail Manager to plug into vCenter for its vLCM implementation.  The difference between the two implementations really drives home that vSAN Ready Nodes, whether Dell EMC’s or another server vendor’s, deliver a customer-driven experience versus VxRail’s VxRail-driven experience.  Both implementations have their merits because they target different customer problems.  The customer-driven experience makes sense for customers who have already invested the IT resources to have more operational control over what is installed on their clusters.  For customers looking for operational efficiency that reduces and simplifies their day-to-day responsibility to administer and secure infrastructure, the VxRail-driven experience provides the confidence to be able to do so.

Enabling VMware vLCM with the baseline image

A baseline image is a cluster image that you have identified as the version set that delivers that happy state for your cluster.  The IT operations team is happy because the cluster is running secure and stable code that complies with their company’s security standards.  End users of the applications running on the cluster are happy because they are getting the consistent service required to perform their jobs.

For Dell EMC vSAN Ready Nodes or any vSAN Ready Nodes, users first need to arrive at what the baseline image should be before deploying their clusters.  That requires research and testing to validate that the set of firmware and drivers are compatible and interoperable with the ESXi image.  Importing it into vLCM framework involves a series of steps.

Figure 2: Customer-driven process to establish a baseline image for Dell EMC vSAN Ready Nodes

 

Dell EMC vSAN Ready Nodes use the OMIVV plugin to interface with vCenter Server.  A user needs to first deploy the OMIVV virtual machine on vCenter.

  1. Once it is deployed, the user registers it with vCenter Server.
  2. From the vCenter UI, the user configures the host credentials profile for iDRAC and the host.
  3. To acquire the bits for the firmware and drivers, the user installs the Dell Repository Manager, which provides the depot for all firmware and drivers.  Here is where the user builds the catalog of firmware and drivers component by component (BIOS, NICs, storage controllers, IO controllers, and so on) for the cluster.
  4. With the catalog in place, the user uploads each file into an NFS/CIFS share that the vCenter Server can access.
  5. From the vCenter UI, the user creates a repository profile that points to the share with the firmware and drivers.
  6. Finally, the user defines the cluster profile with the ESXi image running on the cluster and the repository profile.  This cluster profile becomes the baseline image for future compliance checks and drift remediation scans.

For VxRail, vLCM is not automatically enabled once your cluster is updated to VxRail 7.0.240.  It’s a decision you make based on the benefits that vLCM compatibility provides (described in my previous blog post), and once enabled, it cannot be disabled.  To enable vLCM, your VxRail cluster needs to be running in a Continuously Validated State, so it is a good idea to run the compliance checker first.

Once you have made the decision to move forward, VxRail’s vLCM implementation is astoundingly simple! There’s no need for you to define the baseline image because you’re already running in a Continuously Validated State. The VxRail implementation hides the plugin interaction and uses the vLCM APIs to automate all the previously described manual steps. As a result, enabling vLCM and establishing the baseline image have been reduced to a three-step process.

  1. Enter the vCenter user credentials.
  2. VxRail automatically performs a compliance check to verify that the cluster is running in a Continuously Validated State.
  3. VxRail automatically ports the Continuously Validated State into the formation of the baseline image.

And that’s it!  The following video clip captures the compliance check you can run first, and then the three-step process to enable vLCM:

 Figure 3: How to enable vLCM on VxRail

Cluster update with vLCM

For Dell EMC vSAN Ready Nodes, the customer-driven process to build the desired state image is similar to the baseline image. It requires investigation, research, and testing to define the next happy state and the use of the Dell Repository Manager to save and export the hardware catalog to vCenter. From there, users build out a cluster image that includes the ESXi image and the hardware catalog that becomes the desired state image.  

Not surprisingly, performing a cluster update with vLCM doesn’t fall too far from the VxRail tree: VxRail streamlines the process down to a few steps within VxRail Manager. By using the vLCM APIs, VxRail incorporates the vLCM process into the VxRail Manager experience for a complete LCM experience.

Figure 4: Process to perform cluster update with VxRail

  1. From the new update advisor tool, select the target VxRail version to which you want to update your cluster.  The update advisor then generates a drift remediation report (called an advisory report) that provides a component-by-component analysis of what needs to be updated.  This information, along with the estimated update time, will help you plan the length of your maintenance window.
  2. Run a cluster readiness precheck ahead of your maintenance window; this is good practice because it gives you time to address any issues that may be found before your scheduled window, or to plan for additional time.
  3. Having passed the precheck, VxRail Manager incorporates the vLCM process into its own experience.  VxRail Manager includes the vendor add-on capability in vLCM so that you can add separate firmware and drivers that are not part of the VxRail Continuously Validated State, such as a Fibre Channel HBA.  Using the vLCM APIs, VxRail automatically ports the Continuously Validated State LCM bundle and any non-VxRail-managed component firmware and drivers into the cluster image for remediation.
  4. If you want to customize the cluster image even more, with NSX-T or Tanzu VIBs, you can add them from the vCenter UI.  Once they are included in the desired state image, you have the option of initiating the remediation from either vCenter or the VxRail Manager UI.  If you are not adding these VIBs, the entire cluster update experience stays within the simple and familiar VxRail Manager experience.

Check out the following video clip to see this end-to-end process in action:

Figure 5: How to update your VxRail cluster with VMware vLCM

Conclusion

With both Dell EMC vSAN Ready Nodes and VxRail using the same vLCM framework, it’s a much easier task to deliver an apples-to-apples comparison that clearly shows the simplicity of VxRail LCM with vLCM compatibility. This vLCM implementation is a perfect example of how VxRail is built with VMware and made to enhance VMware. We’ve integrated the innovations of vLCM into the simple and streamlined VxRail-driven experience. As VMware delivers more features to vLCM, VxRail is well positioned to present these capabilities in VxRail fashion.

For more information about this topic, check out the latest podcast: https://infohub.delltechnologies.com/p/vxrail-vlcm-compatibility/ 

Author Information

Daniel Chiu, Senior Technical Marketing Manager at Dell Technologies

LinkedIn: https://www.linkedin.com/in/daniel-chiu-8422287/



Read Full Blog
  • HCI
  • PowerEdge
  • VMware
  • VxRail
  • vSAN

It’s Been a Dell EMC VxRail Filled Summer

David Glynn David Glynn

Wed, 03 Aug 2022 21:32:14 -0000

|

Read Time: 0 minutes

Get your VxRail learn on with Tech Field Days and ESG

It has been a busy summer with the launch of our Next Gen VxRail nodes built on 15th Generation PowerEdge servers. This has included working with the fantastic people at ESG and Tech Field Day. Working with these top tier luminaries really forces us to distill our messaging to the key points – no small feat, particularly with so many new releases and enhancements.

If you are not familiar with Tech Field Days, they are “a series of invite-only technical meetings between delegates invited from around the world and sponsoring enterprise IT companies that share their products and ideas through presentations, demos, roundtables, and more”. The delegates are all hand-picked industry thought leaders – those wicked smart people you are following on Twitter – and they ask tough questions. Earlier this month, Dell Technologies spent two days with them: a day dedicated to storage, and a day for VxRail. You can catch the recordings from both days here: Dell Technologies HCI & Storage: Cutting Edge Infrastructure to Drive Your Business.

Of the twelve great jam-packed VxRail sessions, if you cannot watch them all, do make time in your day for these three:

One more suggestion: if you are new to VxRail, or on the fence about deploying VxRail, tune into this session from Adam Little, Senior Cybersecurity Administrator for New Belgium Brewing, about the reasons they selected VxRail. Even brewing needs high availability, redundancy, and simplicity.

ESG is an IT analyst, research, validation, and strategy firm whose staff is well known for their technical prowess and, frankly, are fun to work with. Maybe that is because they are techies at heart who love geeking out over new hardware. I got to work with Tony Palmer as he audited the results of our VxRail on 15th Generation PowerEdge performance testing. Tony went through things with a fine-tooth comb and asked a lot of great (and tough) probing questions.

What was most interesting was how he looked at the same data in a very different way – quickly zeroing in on how much performance VxRail could deliver at sub-millisecond latency, versus peak performance. Tony pointed out: “It’s important to note that not too long ago, performance this high with sub-millisecond response times required a significant investment in specialized storage hardware.” Personally, I love this independent validation. It is one thing for our performance team to benchmark VxRail performance, but it is quite another for an analyst firm to audit our results and to be blown out of the water to the degree they were. Read their full Technical Validation of Dell EMC VxRail on 15th Generation PowerEdge Technology: Pushing the Boundaries of Performance and VM Density for Business- and Mission-critical Workloads, and then follow it up with some of their previous work on VxRail.

If performance concerns have been holding you back from putting your toes in the HCI waters, now is a great time to jump in. The 3rd Gen Intel® Xeon® Scalable processors are faster and have more cores, but they also bring other hardware architectural changes. From a storage performance perspective, the most impactful of those is PCIe Gen 4, with double the bandwidth of PCIe Gen 3, which was introduced in 2012.

From the OLTP16K (70/30) workload in the following figure, we can see that just upgrading the vSAN cache drive to PCIe Gen 4 unleashes an additional 37% of performance. If that is not enough, enabling RDMA for vSAN nets an additional 21%. One more thing: this is with only two disk groups… check in with me later for when we crank performance up to 11 with four disk groups, faster cache drives, and a few more changes.

With OLTP4K (70/30) peak IOPS performance clocking in at 155K with 0.853ms latency per node, VxRail can support workloads that demand the most of storage performance. But performance is not always the focus of storage.

If your workloads benefit from SAN data services such as PowerStore’s 4:1 Data Reduction or PowerMax’s SRDF, then now is a great time to learn about the VxRail Advantage and the benefits that VxRail Lifecycle Management provides. Check out Daniel Chiu’s blog post on VxRail dynamic nodes, where the power of the portfolio is delivering the best of both worlds.

 

Author: David Glynn, Twitter (@d_glynn), LinkedIn



  • VMware
  • PowerMax
  • VxRail
  • PowerStore
  • life cycle management
  • VCF

Learn more about the latest major VxRail software update: VxRail 7.0.240

Daniel Chiu

Wed, 15 Sep 2021 16:27:30 -0000

|

Read Time: 0 minutes

In a blink of an eye, September is already here. All those well-deserved August holidays have come and gone. As those summer memories with colorful umbrella drinks in hand fade into the background, your focus now turns to finishing this year strong. With the recent announcement on the latest VxRail software release, VxRail is providing the juice to get you well on your way.

VxRail HCI System Software version 7.0.240 has arrived with much anticipation as it includes the expansion of the VxRail product portfolio in the form of VxRail dynamic nodes and significant lifecycle management (LCM) enhancements that our VxRail customers will surely appreciate. Dynamic nodes extend the spectrum of use cases for VxRail by addressing more workload types. The LCM enhancements in the latest software release add to the operational simplicity that VxRail users truly value by increasing the level of automation and flexibility to ensure cluster integrity throughout the life of their cluster.

VxRail dynamic nodes

As described in the external launch event, VxRail dynamic nodes benefit customers who are committed to running their mission-critical, data-centric workloads on Dell EMC storage arrays for their enterprise-level resiliency and data protection capabilities, but who also value the operational certainty that VxRail offers their IT teams. This use case can be particularly relevant for customers who have standardized on VCF on VxRail as the infrastructure building block for their cloud operating model. These scenarios apply to the financial and medical industries, among many others. For some customers, scaling storage and compute independently in their HCI environments can better suit their application workloads, whether through better use of resources or a potential reduction in license costs for compute-intensive workloads like Oracle.

Piqued your interest? Let’s move deeper into the technical details so you can better understand how VxRail dynamic nodes address these use cases.

Figure 1: VxRail dynamic node offering

  1. VxRail dynamic nodes are compute-only nodes running vSphere. Dynamic nodes run VMware ESXi with vSphere Enterprise Plus licenses but do not have vSAN licenses.
  2. They do not have any internal drives. As a result, the VxRail Manager VM runs on an external datastore that can come from either Dell EMC storage arrays (PowerStore-T, PowerMax, and Unity XT) or VMware vSAN HCI Mesh. Customers can now scale their compute and storage independently while some customers can continue to leverage their Dell EMC storage arrays for enterprise-level resiliency options.
  3. Dynamic nodes run on the same VxRail HCI System Software as any other VxRail cluster. The same intelligent LCM experience backed by VxRail’s Continuously Validated States exists in dynamic nodes.

Figure 2: VxRail dynamic node platforms

Like the three-flavor Neapolitan ice cream tub, there’s a flavor of dynamic nodes to match each application requirement. While dynamic nodes do not have cache or capacity drives, all other hardware configurations on these models are available. The E-series is the space-efficient 1U platform. The P-series is the performance-focused platform. The V-series is optimized for GPU acceleration with up to six GPUs per node.

For those wanting to use their Dell EMC storage arrays with these brand-new VxRail dynamic nodes, here are some important pieces of information to consider.

  • With VxRail 7.0.240, Dell EMC PowerStore-T, PowerMax, and UnityXT are the supported external arrays for this use case. Third-party storage arrays are not supported.
  • Storage connectivity is through Fibre Channel, using either 16Gb or 32Gb Dell EMC Connectrix Brocade or Cisco MDS FC switches.
  • Management of the storage array and Fibre-Channel switch is done separately including lifecycle management, zoning, and provisioning of storage. VxRail HCI System Software is responsible for the LCM of the dynamic nodes themselves.
  • When deploying a dynamic node cluster, the datastores need to be already provisioned and zoned to the dynamic nodes.
  • The storage array and dynamic nodes are sold separately and supported discretely by Dell Technologies.

 

LCM Enhancements

Now let’s move onto the LCM enhancements in VxRail 7.0.240. There are three notable enhancements that VxRail users will notice – unless their thoughts have drifted away into those summertime memories.

Figure 3: Update advisor

First, update advisor is a new tool to help you plan your next cluster update.  From the Updates > Internet Updates tab, you can now see a list of available update paths for your specific cluster.  This feature does not replace your responsibility to review the release notes and decide which version to update your cluster to, but it does generate an advisory report with critical information to let you know what needs to be updated based on your cluster’s current Continuously Validated State.  Update advisor is a helpful tool for planning your maintenance window.

Figure 4: Sample compliance drift report

Second, VxRail Manager now has a compliance checker that will detect any unforeseen version drift from the current Continuously Validated State running on your VxRail cluster. As shown in the image above, it provides a component-by-component report as part of the compliance check. It runs daily by default and can be initiated on demand.

The third LCM enhancement is VxRail LCM compatibility with VMware vSphere Lifecycle Manager (vLCM).

Figure 5: VMware vSphere Lifecycle Manager vLCM framework

As a refresher, VMware vLCM was introduced in vSphere 7.0 as a framework that allows software (ESXi) and hardware (firmware and drivers) to be updated together as a single system. VMware supplies the base image, which is the ESXi image, and it is then up to hardware vendors, like Dell Technologies, to provide the hardware support manager that plugs into that framework to supply the necessary firmware and drivers and to update them. Together, they form the baseline image that is used for the compliance checker. When updating the cluster, a desired state image is built from a combination of the VMware-provided ESXi image and vendor-provided firmware and drivers. Based on the drift detection analysis between the baseline and desired state images, vLCM remediates the hosts on the cluster to complete the update.
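To make the framework concrete, the desired state image and its compliance result are both readable through the vSphere Automation REST API. The Python sketch below follows vSphere 7.0+ REST conventions; the cluster identifier, credentials, and exact response fields are assumptions for illustration, so check them against your vCenter’s API reference.

```python
import requests

VCENTER = "https://vcenter.example.local"  # assumed vCenter address
CLUSTER = "domain-c8"                      # assumed cluster identifier (MoRef ID)

# Create an API session (vSphere 7.0+ REST convention).
sid = requests.post(f"{VCENTER}/api/session",
                    auth=("administrator@vsphere.local", "changeme"),
                    verify=False).json()
headers = {"vmware-api-session-id": sid}

# Read the cluster's desired state image: base ESXi image, vendor add-on, components.
image = requests.get(f"{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software",
                     headers=headers, verify=False).json()
print("Base image:", image.get("base_image"))
print("Vendor add-on:", image.get("add_on"))

# Read the last drift/compliance result for the cluster.
compliance = requests.get(
    f"{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software/compliance",
    headers=headers, verify=False).json()
print("Compliance status:", compliance.get("status"))
```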

VxRail’s newly introduced vLCM compatibility enables the VxRail Manager VM to plug into the framework to perform cluster updates using VxRail-provided desired state images in the form of Continuously Validated States. Essentially, VxRail has automated the hardware support manager plugin setup and the export of the firmware and driver depot to vCenter, which is a very manual process for other HCI solutions. While other hardware support manager plugins to vLCM require a multiple-step procedure to establish a baseline image and desired state image, along with interaction with multiple interfaces, VxRail’s implementation leverages the vLCM APIs to abstract those complexities away into a streamlined experience all within VxRail Manager. Because VxRail Manager already stores the Continuously Validated State on its VM, identifying and exporting the hardware firmware and drivers on the VxRail stack can easily be automated. The simplicity of VxRail’s support for vLCM cannot be overstated.

Figure 6: VxRail’s vLCM implementation automates and simplifies the user’s cluster update experience

Similarly, performing cluster updates is a streamlined process once the LCM bundle is downloaded onto the VxRail Manager VM. From VxRail Manager, via the vLCM APIs, the bundle is loaded onto the vLCM framework as the desired state image. In short, vLCM compatibility is mostly transparent to the user as the LCM experience still runs through VxRail Manager.

The next likely question is: why offer this enhancement? The explanation comes down to two points, both related to cutting down the time it takes to update the cluster.

  1. Consolidate VMware software updates – for users who already run NSX-T or vSphere with Tanzu, vLCM allows those VIBs to be included in the desired state image. Instead of updating each piece of VMware software separately, they can all be updated together in a single boot cycle.
  2. Consolidate non-VxRail managed components – there are a few components, such as the FC HBA, that are not part of the Continuously Validated State. Those components would otherwise need to be updated separately, which may require additional host reboots. The vendor addon feature in vLCM, as shown in the image above, provides the capability to include component firmware and drivers in the cluster image for a consolidated update cycle. Using the vLCM APIs, VxRail has incorporated the vendor addon feature into its vLCM implementation in VxRail Manager. (A simplified sketch of such a cluster image specification follows this list.)
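For illustration only, a vLCM desired state specification is essentially a structured document that combines the VMware base image, an optional vendor addon, and individual components. The sketch below uses made-up names and version strings; the authoritative schema is defined by the vSphere API.

```python
# Simplified, illustrative shape of a vLCM desired state software spec.
# All version strings and component names below are hypothetical.
desired_state_spec = {
    "base_image": {"version": "7.0.2-0.0.00000000"},  # VMware-supplied ESXi image
    "add_on": {                                       # vendor addon (firmware/driver bundle)
        "name": "example-vendor-addon",
        "version": "1.0-example",
    },
    "components": {
        # Non-VxRail managed extras, e.g. an FC HBA driver or NSX-T/Tanzu VIBs.
        "example-fc-hba-driver": "2.3-example",
    },
}
```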

By introducing vLCM compatibility into VxRail LCM, users can benefit from these capabilities. With VxRail 7.0.240, the use of vLCM is disabled by default; users can choose to enable vLCM immediately or enable it later. Developing vLCM compatibility is also a strategic decision that positions VxRail to adopt more vLCM capabilities as they arrive.

Conclusion

VxRail 7.0.240 is a monumental software release that expands the breadth of the VxRail portfolio’s reach in addressing workload types with VxRail dynamic nodes, and its depth by enhancing its differentiated LCM experience with more ways to ensure cluster integrity and improve cluster maintenance times. Though the summer is drawing to a close, VxRail is providing you the boost to stay dynamic and finish 2021 strong. Keep an eye out for more content about the latest VxRail release.

For more information about VxRail dynamic nodes, you can check out the VxRail launch page: https://www.delltechnologies.com/en-us/events/vxrail-launch.htm.

If you want to learn more about how VxRail LCM differentiates itself from other HCI vendors using VMware vLCM, you can read these previously posted blogs:

Exploring the customer experience with lifecycle management for vSAN ReadyNodes and VxRail clusters

How does vSphere LCM compare with VxRail LCM?

Author Information

Daniel Chiu, Senior Technical Marketing Manager at Dell Technologies

LinkedIn: https://www.linkedin.com/in/daniel-chiu-8422287/

  • NVIDIA
  • HCI
  • VxRail
  • Kubernetes
  • hybrid cloud
  • Tanzu
  • edge

VxRail Tech Field Day – A technical relay race for the athletically challenged

Kathleen Cintorrino

Wed, 03 Aug 2022 21:32:14 -0000

|

Read Time: 0 minutes

Earlier this summer we announced key advancements to VxRail Hyperconverged Infrastructure (HCI) hardware and software, including major updates to help customers simplify their infrastructure, scale when and where they need to, and address more workloads while adopting next generation technologies. We recently discussed the technical know-how behind those new and exciting innovations at a special Dell Technologies Tech Field Day event.

Tech Field Day offers an opportunity for companies like Dell to share, learn, and interact with a small group of independent technical influencers in a series of focused technical sessions. Channel your elementary school-aged self, and imagine a day dedicated to a tug-of-workloads, hybrid cloud water balloon toss, and a flexible storage balancing act. If that sounds like your idea of a good time, you’ll find these sessions beneficial. They were originally livestreamed, uncensored conversations that highlighted how VxRail drives value for our customers across four key areas:

  1. Optimizing operations and automating as much as possible to allow customers to focus resources on strategic IT initiatives rather than on maintaining the infrastructure
  2. Embracing hybrid cloud to rapidly deploy on-demand services and scale without limits
  3. Optimizing workloads and modern apps to drive business innovation
  4. Unlocking business value at the edge

These themes were carried throughout the full-day event and we covered a lot of ground on the unique capabilities and differentiators for our products – like how we’ve added even more flexibility to our HCI portfolio with VxRail dynamic nodes, the ways we’re helping customers accelerate adoption of Kubernetes with Tanzu on VxRail, and how we’re partnering with NVIDIA to deliver superior performance for AI workloads. We also heard New Belgium Brewing discuss how they cut their data center costs by about 80% with VxRail, demonstrating significant savings while boosting IT agility and efficiency.

Each session highlighted not only the passion our technologists, customers, and partners have for VxRail, but also excitement from our audience of technical influencers who had many “aha moments” throughout. The influencers were thrilled to have an opportunity to dive into our latest advancements and ask all their burning questions to our technologists directly.

Ready, set, go! Watch Tech Field Day on-demand

I invite you to check out the individual sessions on our Dell Technologies YouTube Channel, where we have curated a dedicated playlist. It features:

Additional Resources

Our growing list of VxRail technical videos.

 

Author: Kathleen Cintorrino, Twitter (@k_lasorsa), LinkedIn

 


  • VMware
  • VxRail
  • VCF

Secure Cloud: Check! Flexible Cloud Networking: Check! Powerful Cloud Hardware: Check!

Karol Boguniewicz

Wed, 03 Aug 2022 21:32:14 -0000

|

Read Time: 0 minutes

Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.3.0 on VxRail 7.0.202. This new release provides several security-related enhancements, including FIPS 140-2 support, password auto-rotation support, SDDC Manager secure API authentication, data protection enhancements, and more. VxRail-specific enhancements include support for the more powerful 3rd Gen AMD EPYC™ CPUs and NVIDIA A100 GPUs (check this blog for more information about the corresponding VxRail release), and more flexible network configuration options with support for multiple System Virtual Distributed Switches (vDS).

Let’s quickly discuss the comprehensive list of the new enhancements and features:

VCF and VxRail Software BOM Updates

These include updated versions of vSphere, vSAN, NSX-T, and VxRail Manager. Please refer to the VCF on VxRail Release Notes for comprehensive, up-to-date information about the release and supported software versions.

VCF on VxRail Networking Enhancements

Day 2 AVN deployment using SDDC Manager workflows

The configuration of an NSX-T Edge cluster and AVN networks is now a post-deployment process that is automated through SDDC Manager. This approach simplifies and accelerates the VCF on VxRail bring-up and provides more flexibility for the network configuration after the initial deployment of the platform.

Figure 1: Cloud Foundation Initial Deployment – Day 2 NSX-T Edge and AVN

Shrink and expand operations of NSX-T Edge Clusters using SDDC Manager workflows

NSX-T Edge clusters can now be expanded and shrunk using built-in automation within SDDC Manager. This allows VCF operators to scale resources on demand without having to size for peak demand up front, which results in more flexibility and better use of infrastructure resources in the platform.
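Under the hood, both operations are driven through the SDDC Manager public API. As a rough sketch of the call pattern in Python (the path follows the published VCF API’s edge cluster resource, but the payload fields here are illustrative and should be verified against the VCF API reference for your release):

```python
import requests

SDDC = "https://sddc-manager.example.local"           # assumed SDDC Manager address
HEADERS = {"Authorization": "Bearer <access-token>"}  # token from POST /v1/tokens

# Illustrative expansion request against an existing NSX-T Edge cluster.
spec = {
    "operation": "EXPANSION",  # field names are illustrative
    "edgeNodeSpecs": [],       # edge node definitions elided for brevity
}
resp = requests.patch(f"{SDDC}/v1/edge-clusters/EDGE-CLUSTER-ID",
                      json=spec, headers=HEADERS, verify=False)
print(resp.status_code, resp.text)
```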

VxRail Multiple System VDS support

Support for a two System Virtual Distributed Switch (vDS) configuration was introduced in VxRail 7.0.13x. VCF 4.3 on VxRail 7.0.202 now supports a VxRail deployed with two system vDSs, offering more flexibility and choice for the network configuration of the platform. This is relevant for customers with strict requirements for separating network traffic (for instance, some customers might want to use a dedicated network fabric and vDS for vSAN). See Figure 2 below for a sample diagram of the newly supported network topology:

Figure 2: Multiple System VDS Configuration Example

VCF on VxRail Data Protection Enhancements

Expanded SDDC Manager backup and restore capabilities for improved VCF platform recovery

This new release introduces the ability to define a periodic backup schedule and backup retention policies, and to enable or disable these schedules in the SDDC Manager UI, resulting in simplified backup and recovery of the platform (see the screenshot below in Figure 3).

Figure 3: Backup Schedule

VCF on VxRail Security Enhancements

SDDC Manager certificate management operations – expanded support for using SAN attributes

The built-in automated workflow for generating certificate signing requests (CSRs) within SDDC Manager has been further enhanced to include the option to input a Subject Alternative Name (SAN) when generating the request. This improves security and prevents vulnerability scanners from flagging invalid certificates.
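SDDC Manager drives this workflow for you, but as a standalone illustration of what the SAN attribute adds to a CSR, here is a minimal Python sketch using the `cryptography` library (hostnames are placeholders):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "sddc-manager.example.local"),
    ]))
    # The SAN extension lists every name the certificate must be valid for,
    # which is what keeps scanners from flagging the issued certificate.
    .add_extension(
        x509.SubjectAlternativeName([
            x509.DNSName("sddc-manager.example.local"),
            x509.DNSName("sddc-manager"),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```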

 

SDDC Manager Password Management auto-rotation support

Many customers need to rotate and update passwords regularly across their infrastructure, and this can be a tedious task if not automated. VCF 4.3 provides automation to update individual supported platform component passwords or rotate all supported platform component passwords (including integrated VxRail Manager passwords) in a single workflow. This feature enhances the security and improves the productivity of the platform admins.

 

FIPS 140-2 Support for SDDC Manager, vCenter, and Cloud Builder

This new support increases the number of VCF on VxRail components that are FIPS 140-2 compliant in addition to VxRail Manager, which is already compliant with this security standard. It improves platform security and regulatory compliance with FIPS 140-2.

 

Improved VCF API security

Token-based Auth API access is now enabled by default within VCF 4.3 for secure authentication to SDDC Manager. Access to private APIs that use Basic Auth has been restricted. This change improves platform security when interacting with the VCF API.
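The pattern is to exchange credentials once for a short-lived access token, then authorize every subsequent call with that token. Below is a minimal Python sketch of the flow, reusing the password rotation feature above as the follow-up call; the `/v1/tokens` and `/v1/credentials` paths follow the published VCF API, but the payload fields are illustrative and should be checked against your release.

```python
import requests

SDDC = "https://sddc-manager.example.local"  # assumed SDDC Manager address

# Exchange credentials for an access token (token-based auth).
tokens = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "changeme"},
    verify=False,
).json()
headers = {"Authorization": f"Bearer {tokens['accessToken']}"}

# Use the token for subsequent calls, e.g. rotating a component password.
rotate = requests.patch(
    f"{SDDC}/v1/credentials",
    json={
        "operationType": "ROTATE",                 # illustrative payload shape
        "elements": [{
            "resourceName": "vxrm.example.local",  # hypothetical resource
            "resourceType": "VXRAIL_MANAGER",
            "credentials": [{"credentialType": "SSH", "username": "root"}],
        }],
    },
    headers=headers, verify=False,
)
print(rotate.status_code)
```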

 

VxRail Hardware Platform Enhancements

VCF 4.3 on VxRail 7.0.202 brings new hardware features, including support for the AMD 3rd Generation EPYC CPU platform and NVIDIA A100 GPUs.

These new hardware options provide better performance and more configuration choices. Check this blog for more information about the corresponding VxRail release.

 

VCF on VxRail Multi-Site Architecture Enhancements

NSX-T Federation guidance - upgrade and password management Day 2 operations 

This release provides new manual guidance for password and certificate management, and for backup and restore of Global Managers.

As you can see, most of the new enhancements in this release are focused on improving platform security and providing more flexibility of the network configurations. Dell Technologies and VMware continue to deliver the optimized, turnkey platform experience for customers adopting the hybrid cloud operating model. If you’d like to learn more, please check the additional resources linked below.


Additional Resources

VMware Cloud Foundation on Dell EMC VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos   


Author Information 

Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing 

Twitter: @cl0udguide 





 

  • SQL Server
  • containers
  • VxRail
  • Kubernetes
  • Tanzu

Microsoft SQL Server Big Data Clusters on Tanzu Kubernetes Grid on Dell EMC VxRail

Vic Dery

Thu, 19 Aug 2021 15:25:13 -0000

|

Read Time: 0 minutes

A recently created reference architecture, running Microsoft SQL Server Big Data Clusters (BDC) on Tanzu Kubernetes Grid (TKG) on Dell EMC VxRail, demonstrates a fast and simple way to get started with big data workloads running on Kubernetes. It also shows how the containerized workloads run on VxRail.

SQL BDC on TKG on VxRail enables simplified servicing for cloud native workloads, and is designed to scale with business needs. Administrators can implement the policies for namespaces and manage access and quota allocation for application-focused management. All of this helps build a developer ready infrastructure with enterprise-grade Kubernetes with advanced governance, reliability, and security. 
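As one concrete example of the namespace-level policy and quota management mentioned above, the sketch below uses the official Kubernetes Python client to create a namespace and attach a ResourceQuota for a BDC-style workload. The namespace name and limits are placeholders, not values from the reference architecture.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
v1 = client.CoreV1Api()

ns = "bdc-dev"  # hypothetical namespace for an application team

# Create the namespace.
v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

# Attach a ResourceQuota so the workload cannot exhaust the cluster.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="bdc-quota", namespace=ns),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "32", "requests.memory": "128Gi",  # placeholder
              "limits.cpu": "64", "limits.memory": "256Gi"},     # limits
    ),
)
v1.create_namespaced_resource_quota(namespace=ns, body=quota)
```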

This reference architecture also validated SQL BDC with Spark SQL TPC-DS benchmark optimized parameters. The test results showed that Tanzu Kubernetes Grid on VxRail provides linear scalability (for complex TPC-DS-like decision support workloads that use different query types) with predictable query response time and high throughput.

The business value of using SQL BDC and TKG on VxRail is based on the five measurements below, which are covered in more detail within the reference architecture:

  • Simplified installation of Kubernetes 
  • Automated multi-cluster operations 
  • Integrated platform services
  • Open source alignment
  • Production Ready

Cross-functional teams from Dell EMC VxRail, VMware, and Microsoft have reviewed the reference architecture for content and supportability. This can provide comfort for those wanting to run on Tanzu. Some notes from Microsoft's release notes for Cumulative Update 12 (CU12) of BDC:

SQL Server Big Data Clusters is supported as a workload. Microsoft provides support for the software components on the containers installed and configured by SQL Server Big Data Clusters only. Kubernetes itself, and other containers that may influence SQL Server Big Data Clusters behavior, are not supported by the (Microsoft) support team. For Kubernetes support please contact your certified Kubernetes distribution provider.

Note: This reference architecture provides general design and deployment guidelines of running Microsoft SQL Server Big Data Clusters on VMware Tanzu™ Kubernetes Grid™ on Dell EMC VxRail. The reference architecture also applies to any compatible hardware platforms running VMware Tanzu Kubernetes Grid on vSAN™.

To wrap up, VxRail provides SQL BDC on Tanzu as a scalable and secure platform to deliver key business outcomes. This reference architecture highlights one of the first known supported solutions built on Tanzu Kubernetes Grid to manage Kubernetes. The paper covers the full spectrum of the build, the testing, and the expected performance on VxRail.

Resources:

Author: Vic Dery – Linkedin

  • HCI
  • VxRail
  • SaaS

VxRail Interactive Journey – new whiteboard video on SaaS multi-cluster management

Daniel Chiu

Fri, 30 Jul 2021 13:26:38 -0000

|

Read Time: 0 minutes

As promised, we’re constantly refreshing content on the VxRail Interactive Journey.  For those not familiar with the awesomeness of the VxRail Interactive Journey, you can check out this short blog.  In the latest release, there is a brand-new whiteboard video to walk you through the architectural framework of SaaS multi-cluster management in VxRail HCI System Software.  As customers look to scale their VxRail footprint, the need for simple global management grows.  

In this whiteboard video, you will learn how we’re able to extend the operational simplicity in VxRail to SaaS multi-cluster management and deliver that experience to you on an easy-to-use web portal.  The video covers three key areas of SaaS multi-cluster management.

  • Monitoring – The Adaptive Data Collector service in VxRail HCI System Software gathers telemetry data from the HCI stack for more streamlined monitoring of all your clusters.
  • Multi-cluster management – The added use of the Secure Remote Services gateway for bi-directional communication between the VxRail clusters and the web portal enables LCM services at scale.  
  • Security – The need for managing who has access to clusters becomes even more important with cluster configuration capabilities.

I invite you to check out this new whiteboard video in the VxRail Interactive Journey. You can find it in the SaaS multi-cluster management module.

 


 

 

  • HCI
  • PowerEdge
  • VMware
  • VxRail
  • vSAN

Our fastest and biggest launch ever! - We’ve also made it simpler

David Glynn

Tue, 13 Jul 2021 19:23:47 -0000

|

Read Time: 0 minutes

With this hardware launch, we at VxRail are refreshing our mainline platforms. Our “everything” E Series, our performance-focused P Series, and our virtualization-accelerated V Series. You’ve probably already guessed that these nodes are faster and bigger. This is always the case with new hardware in the tech industry, thanks to Moore’s Law of "Cramming more components onto integrated circuits,” but we’ve also made this hardware release simpler. Let’s dig into these changes, what they mean to you, the consumer, and what choices you may need to consider.

Faster. Bigger.

The headline in this could well be the 3rd Generation Intel Xeon Scalable processor (code-named Ice Lake) with its increased cores and performance. After all, the CPU is the heart of every computing device, from the nebulous public cloud to the smart refrigerator in your kitchen. But there is more to CPUs and servers than cores and clock speeds. The most significant of these, in my opinion, is support for the fourth generation of the PCIe bus. PCIe Gen 3 was introduced on 12th Generation PowerEdge servers in 2012, so the arrival of PCIe Gen 4, with double the bandwidth and 33% more lanes, is very much appreciated. The PCIe bus is the highway network that connects everything together; this increase in bandwidth and lanes drives change and enables improvements in many other components.

The most significant impact for VxRail is the performance that it unlocks with PCIe Gen 4 NVMe drives, available on all the new nodes including the V Series. With vSAN’s distributed architecture, all writes go to multiple cache drives on multiple nodes. Anything that improves cache performance, be it higher bandwidth, lower-latency networking, or faster cache drives, will drive overall application performance and increased densities. For the relatively small price premium of NVMe cache drives over SAS cache drives, VxRail can deliver up to 35% higher IOPS and up to 14% lower latency (OLTP 32K on RAID 1). NVMe cache drives also reduce the performance impact of enabling data services like deduplication, compression, and encryption at rest. For more information, check out this paper from our performance team last year (did you know that VxRail has its own performance testing team?) where they compared the performance impact of deduplication plus compression, compression only, and no data reduction. The data highlights the small impact that compression-only has on performance and the benefit of NVMe cache drives.

Staying with storage, the new SAS HBA has double the number of lanes, which doubles the bandwidth available to drives. Don’t assume that this means twice the storage performance – wait for my next post where I’ll delve into those details with ESG. It is a topic worthy of its own post and well worth the wait, I promise! The SAS HBA has been moved to the front of the node, right behind the drive bay. This is noteworthy because it frees up a PCIe slot on some configurations. We also freed up a PCIe slot on all configurations with the new Boot Optimized Storage Solution (BOSS) device – more on the new BOSS device below. These changes deliver a third PCIe slot on the E Series, and flexibility on the V Series with support for six GPUs while still offering PCIe slots for networking and FC expansion. Some would argue you can never have enough PCIe slots, but we argued, and sacrificed these gains on the P Series in favor of delivering four additional capacity drive slots, providing 184 TB of raw storage capacity in 2U. Don’t worry, there are still plenty of PCIe slots for additional networking or Fibre Channel cards – yes, in case you missed it, you can add Fibre Channel storage to your favorite HCI platform, extending the storage offerings for your various workloads, through the addition of QLogic or Emulex 16/32Gb Fibre Channel cards. These are also PCIe Gen 4 to drive maximum performance.

PCIe Gen 4 is also enabling network cards to drive more throughput. With this new generation of VxRail, we are launching with an onboard quad-port 25GbE networking card, 2.5 times more than what the previous generation launched with. See the “Get thee to 25GbE: A trilogy of reasons” section in my recent post to see why you need to be looking at 25GbE NICs today, even if you are not upgrading your network switches just yet. With this release, VxRail is shifting our onboard networking to the Open Compute Project (OCP) spec 3.0 form factor. For you, the customer, this means greater choice in onboard network cards, with 10 cards from three vendors available at launch and more to come. If you are not familiar with OCP, check it out. OCP is a large cross-company organization that started as an internal project at Facebook but now has a diverse membership of almost 100 companies working “collaboratively on redesigning hardware technology to efficiently support the growing demands on compute infrastructure.” The quad-port 25GbE NIC consumes only half of the bandwidth that OCP 3.0 can support, so we all have an interesting networking future.

Simpler

This hardware release is not just faster and bigger; we have also made these VxRail nodes simpler. Simplicity, like beauty, is in the eye of the beholder; there isn’t an industry benchmark for it, but I think you’ll agree with me that these changes will make life simpler in the data center. The new BOSS-S2 device is located at the rear of the node and is hot-pluggable. In the event of a failure of a RAID 1-protected M.2 SATA drive, it can easily and non-disruptively be replaced without powering off and opening the node. We’ve also relocated the power supplies; there is now one on each side of the chassis. This improves air flow and cooling, and enables easier and tidier cabling – we’ve all seen those rats’ nests of cables in the data center. Moving around to the front, we’ve added a Quick Resource Locator (QRL) to the chassis luggage tag. Scanned with an Android or iOS app, it displays system and warranty details and provides links to SolVe procedures and documentation. Sticking with mobile applications, we’ve added OpenManage Mobile with Quick Sync 2, which enables, at the press of the Wireless Activation button, access to iDRAC and all the troubleshooting help it provides – no more dragging a crash cart across the data center.

VxRail is more than the sum of its components, be it through Lifecycle Management, simpler cloud operations, or ongoing product education. The value it delivers is seen daily by our 12.4K customers around the globe. Today we celebrate not just our successes and our new release, but also the successes and achievements of the giants that hoist us up to stand on their shoulders and enable VxRail and our customers to reach for the stars. Join us as we continue our journey and Reimagine HCI.

References

VxRail Spec Sheet

DellTechnologies.com/VxRail

  • SQL Server
  • VxRail
  • Tanzu
  • Azure Arc
  • TKG
  • PostgresSQL

Azure Arc-enabled data services meets VxRail

Vic Dery

Tue, 29 Jun 2021 14:00:38 -0000

|

Read Time: 0 minutes

On June 29th, 2021, Microsoft and Dell Technologies announced the launch of Azure Arc-enabled data services as part of the Azure Hybrid and Multicloud Digital Event. This launch benefits several Dell Technologies product lines, including Dell EMC VxRail, PowerFlex, PowerMax, and PowerStore. While my colleague, Robert Sonders, discusses the technical details of Azure Arc-enabled data services in his blog, I want to highlight the value that VxRail delivers for customers when running Azure Arc-enabled data services on VMware Tanzu Kubernetes Grid (TKG).

 

The figure above shows the stack transition from VxRail up through TKG to Azure Arc-enabled data services. The boundaries between these products are not as hard as they are when you add SQL Server to a bare-metal server. VxRail, specifically VMware vSphere, has to integrate at some point with TKG. This is defined in what is now called the “Holistic” model used to communicate between data formats or models. For more information, see Why Canonicalization Should Be a Core Component of Your SQL Server Modernization (Part 2). The vSphere software plays a key role in running TKG, just as TKG relies on vSphere to run its control plane. TKG and Kubernetes (K8s) have a similar relationship when it comes to controlling Azure Arc in Kubernetes.

Azure Arc-enabled data services is an application data services layer available to developers that now provides either a SQL Managed Instance or a PostgreSQL Hyperscale instance. With Azure Arc-enabled data services, these instances receive updates on a frequent basis, including servicing patches and new features – similar to the experience in Azure. In addition, updates from the Microsoft Container Registry are provided to you, and you set deployment cadences in line with your own policies. Again, Robert's blog goes into much more detail about Azure Arc-enabled data services.

VMware Tanzu Kubernetes Grid (TKG) addresses challenges in the enterprise Kubernetes runtimes, allowing Kubernetes-orchestrated containers across multiple cloud infrastructures. TKG is also tested, signed, and supported by VMware. This includes signed and supported open-source applications: shared Kubernetes services that a production Kubernetes environment requires.

As of v1.3.0, TKG supports Photon OS and now Ubuntu 20.04. Adding Ubuntu 20.04 to TKG’s list of supported operating systems allows for the “bring your own node image” experience by providing different OS options when deploying consistent Kubernetes environments and containers. 

VxRail provides the only fully integrated, pre-configured, and pre-tested VMware hyperconverged integrated system optimized for VMware solutions. VxRail transforms HCI networking and simplifies VMware cloud adoption while meeting any HCI use cases, including support for many of the most demanding workloads and applications. Powered by Dell EMC PowerEdge server platforms and VxRail HCI System software, VxRail features next-generation technology to future-proof your infrastructure and enable deep integration across the VMware ecosystem. The advanced VMware hybrid cloud integration and automation simplifies the deployment of a secure VxRail cloud infrastructure.

VxRail helps make this layer of the stack more efficient by simplifying management and operations. Although there is no direct VxRail integration with TKG or with Azure Arc-enabled data services applications, infrastructure admins can dynamically expand and lifecycle the infrastructure to meet the resource needs of next-generation applications developed with Azure Arc that run on TKG. With VxRail, we deliver automated lifecycle management (LCM) and automated cluster right-sizing: clusters can expand, and shrink, to quickly deliver the right infrastructure resources for next-generation apps when needed, bringing agility to on-prem infrastructure as well as to cloud-native app architectures.

Conclusion

Dell EMC VxRail provides the secure, scalable, and proven platform to build Azure Arc-enabled data services on TKG. VxRail allows enterprises to effectively execute across edge, private, public, hybrid, and multi-clouds while now adding Azure Arc-enabled data services. VxRail is the solid foundation for running TKG, which provides a consistent, upstream-compatible implementation of Kubernetes, all while delivering the Azure Arc-enabled data services as a solution. 

Dell Technologies focuses on innovation, enhancing solutions, and providing validated solutions like Azure Arc-enabled data services. This solution helps your organization focus on what matters to you.

Additional Resources

VxRail white papers: https://infohub.delltechnologies.com/t/white-papers-40/

Robert Sonders’ InFocus blogs: https://infocus.delltechnologies.com/author/robert_sonders/

Azure Arc validation program: https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/validation-program

TKG 1.3.1 docs: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.3.1/rn/VMware-Tanzu-Kubernetes-Grid-131-Release-Notes.html#k8s-versions

Blog: Fuel Azure hybrid cloud with a validated infrastructure: https://www.delltechnologies.com/en-us/blog/fuel-azure-hybrid-cloud-with-a-validated-infrastructure/

Blog: Let’s get technical, and fuel your Azure hybrid cloud with Azure Arc: https://infocus.delltechnologies.com/robert_sonders/lets-get-technical-and-fuel-your-azure-hybrid-cloud-with-azure-arc/

  • VxRail
  • Kubernetes
  • VMware Cloud Foundation
  • OpenShift
  • VCF

Containing yourself with OpenShift running VMware Cloud Foundation (VCF) on VxRail

Vic Dery

Wed, 16 Jun 2021 19:24:28 -0000

|

Read Time: 0 minutes


Red Hat OpenShift is a container application platform running on Red Hat Enterprise Linux CoreOS (RHCOS) and built on top of Kubernetes. In addition, OpenShift includes everything you need for running a secure infrastructure and operations across private and public clouds, like a container runtime, networking, monitoring, container registry, authentication, and authorization above the Linux container host. 


Why OpenShift with VMware Cloud Foundation (VCF) on VxRail? Using VCF on VxRail provides an enterprise solution that includes security, workload isolation, lifecycle management, and more. In addition, OpenShift provides its security and lifecycle management for the components of OpenShift, such as RHEL CoreOS, K8s, and the clusters. The combined outcome can be an enhanced solution for OpenShift customers using VCF on VxRail.

OpenShift using VCF on VxRail leverages VMware’s leading virtualization technologies, including vSphere, NSX-T, and vSAN.  This allows businesses to make better decisions, achieve faster outcomes, and reduce cost and risk.

Using VCF and workload domains can provide environment isolation, separating production from development operations. And with the scalability of VCF on VxRail, growth is manageable and control is handled at the VCF level, allowing the OpenShift layer to scale out.

Details around the reference architecture guide can be found here.

  • NVIDIA
  • HCI
  • VMware
  • VxRail
  • Horizon
  • GPU
  • AMD

More GPUs, CPUs and performance - oh my!

David Glynn

Mon, 14 Jun 2021 14:30:46 -0000

|

Read Time: 0 minutes

Continuous hardware and software changes deployed with VxRail’s Continuously Validated State

A wonderful aspect of software-defined-anything, particularly when built on world class PowerEdge servers, is speed of innovation. With a software-defined platform like VxRail, new technologies and improvements are continuously added to provide benefits and gains today, and not a year or so in the future. With the release of VxRail 7.0.200, we are at it again! This release brings support for VMware vSphere and vSAN 7.0 Update 2, and for new hardware: 3rd Gen AMD EPYC processors (Milan), and more powerful hardware from NVIDIA with their A100 and A40 GPUs. 

VMware, as always, does a great job of detailing the many enhanced or new features in a release. From high level What’s New corporate or personal blog posts, to in-depth videos by Duncan Epping. However, there are a few changes that I want to highlight:

 Get thee to 25GbE: A trilogy of reasons - Storage, load-balancing, and pricing.

vSAN is a distributed storage system. As such, anything that improves the network or networking efficiency improves storage performance and application performance -- but there is more to networking than big, low-latency pipes. RDMA has been a part of vSphere since the 6.5 release; it is only with 7.0 Update 2 that it is leveraged by vSAN. John Nicholson explains the nuts and bolts of vSAN RDMA in this blog post, but only touches on the performance gains. From our performance testing, I can share with you the gains we have seen with VxRail: up to 5% reduction in CPU utilization, up to 25% lower latency, and up to 18% higher IOPS, along with increases in read and write throughput. It should be noted that even with medium-block IO, vSAN is more than capable of saturating a 10GbE port; RDMA pushes performance beyond that, and we’ve yet to see what Intel 3rd Generation Xeon processors will bring. The only fly in the ointment for vSAN RDMA is the current short list of approved network cards – no doubt more will be added soon.

 vSAN is not the only feature that enjoys large low-latency pipes. Niels Hagoort describes the changes in vSphere 7.0 Update 2 that have made vMotion faster, thus making Balancing Workloads Invisible and the lives of virtualization administrators everywhere a lot better. Aside: Can I say how awesome it is to see VMware continuing to enhance a foundational feature that they first introduced in 2003, a feature that for many was that lightbulb Aha! moment that started their virtualization journey. 

One last nudge: pricing. The cost delta between 10GbE and 25GbE network hardware is minimal, so for greenfield deployments the choice is easy; you may not need it today, but workloads and demands continue to grow. For brownfield, where the existing network is not due for replacement, the choice is still easy: 25GbE NICs and switch ports can negotiate down to 10GbE, making a phased migration – VxRail nodes now, switches in the future – possible. The inverse is also possible: upgrade the network to 25GbE switches while still connecting your existing VxRail 10GbE SFP+ NIC ports.

Is 25GbE in your infrastructure upgrade plans yet? If not, maybe it should be.

 A duo of AMD goodness

Last year we released two AMD-based VxRail platforms, the E665/F and the P675F/N, so I’m delighted to see CPU scheduler optimizations for AMD EPYC processors, as described in Aditya Sahu’s blog post. What is even better is the 29-page performance study Aditya links to; the depth of detail on how ESXi CPU scheduling worked, and didn’t work, with AMD EPYC processors is truly educational. The extensive performance testing VMware continuously runs, and the results they share (spoiler: they achieved very significant gains), are also a worthwhile read. In our testing, we’ve seen that with these scheduler optimizations alone, VxRail 7.0.200 can provide up to 27% more IOPS and up to 27% lower latency for both RAID 1 and RAID 5 with relational database (RDBMS 22K 60R/40W 100% random) workloads.

VxRail begins shipping the 3rd Generation AMD EPYC processors – also known as Milan – in VxRail E665 and P675 nodes later this month. These are not a replacement for the current 2nd Gen EPYC processors we offer, but rather the addition of higher performing 24-core, 32-core, and 64-core choices to the VxRail lineup, delivering up to 33% more IOPS and 16% lower latency across a range of workloads and block sizes. Check out this VMware blog post for the performance gains they showcase with the VMmark benchmarking tool.

HCI Mesh – only recently introduced, yet already getting better

When VMware released HCI Mesh just last October, it enabled stranded storage on one VxRail cluster to be consumed by another VxRail cluster. With the release of VxRail 7.0.200, this has been expanded to make it applicable to more customers by enabling any vSphere cluster to also consume that excess storage capacity – these remote clusters do not require a vSAN license and consume the storage in the same manner they would any other datastore. This opens up some interesting multi-cluster use cases, for example:

In solutions where software application licensing requires each core/socket in the vSphere cluster to be licensed, the licensing cost can easily dwarf other costs. Now such an application can be deployed on a small compute-only cluster while consuming storage from the larger VxRail cluster. Or, where the density of storage per socket didn’t make VxRail viable, it can now be achieved with a smaller VxRail cluster plus a separate compute-only cluster. If only all the goodness that is VxRail was available in a compute-only cluster – now that would be something dynamic

 A GPU for every workload

GPUs, once the domain of PC gamers, are now a data center staple with their parallel processing capabilities accelerating a variety of workloads. The versatile VxRail V Series has multiple NVIDIA GPUs to choose from and we’ve added two more with the addition of the NVIDIA A40 and A100. The A40 is for sophisticated visual computing workloads – think large complex CAD models, while the A100 is optimized for deep learning inference workloads for high-end data science.

 Evolution of hardware in a software-defined world

PowerEdge took a big step forward with their recent release built on 3rd Gen Intel Xeon Scalable processors. Software-defined principles enable VxRail not only to quickly leverage this big step forward, but also to quickly leverage all the small steps in hardware changes throughout a generation. Building on the latest PowerEdge servers, we are reimagining HCI with VxRail with the next generation VxRail E660/F, P670F, and V670F. Plus, what’s great about VxRail is that you can seamlessly integrate this latest technology into your existing infrastructure environment. This is an exciting release, but equally exciting are all the incremental changes that VxRail software-defined infrastructure will get along the way with PowerEdge and VMware.

VxRail, flexibility is at its core.

 Availability

  • VxRail systems with Intel 3rd Generation Xeon processors will be globally available in July 2021.
  • VxRail systems with AMD 3rd Generation EPYC processors will be globally available in June 2021.
  • VxRail HCI System Software updates will be globally available in July 2021.
  • VxRail dynamic nodes will be globally available in August 2021.
  • VxRail self-deployment options will begin availability in North America through an early access program in August 2021.

Additional resources


  • VxRail
  • VMware Cloud Foundation

Test driving Dell EMC VxRail with the CTO Advisor

Kathleen Cintorrino

Fri, 11 Jun 2021 07:57:49 -0000

|

Read Time: 0 minutes

You wouldn’t buy a new car without researching and test driving it first, would you?  The same holds true for your data center – test driving, researching, checking peer reviews, and carefully weighing your IT infrastructure options are key to making an informed decision for your business.

Keith Townsend, a technical analyst also known as the CTO Advisor, and his team recently did just that with Dell EMC VxRail. Through a study commissioned by Dell Technologies, the CTO Advisor team stepped into the driver’s seat to investigate the differentiated value of VxRail as part of a broader strategy to modernize their CTO Advisor Hybrid Infrastructure.

The CTO Advisor team completed two weeks of rigorous testing with two main objectives in mind. First, they examined whether VxRail HCI System Software, a suite of integrated software elements that sits between VxRail infrastructure components such as vSAN and VMware Cloud Foundation, adds unique, differentiated value for businesses like the CTO Advisor. Second, they looked specifically at integration between VxRail, VMware Cloud Foundation, and Intel – does it offer unique differentiation versus competitors?

Spoiler alert: the answers to both of their questions were yes and yes! But you don’t have to take my word for it -- you can join Keith and the CTO Advisor team for the ride. Available now, their seven-part video series documents their VxRail journey from start-to-finish and includes technical conversations with our technologists. Their videos showed how VxRail:

  • Simplifies day two operations with VxRail Manager
  • Delivers multi-cluster management for a single-pane-of-glass view of your entire infrastructure
  • Enables custom automation with RESTful APIs (see the sketch after this list)
  • Enables automation across the entire stack with ecosystem connectors
  • Reduces risk with the electronic compatibility matrix
  • Delivers deep integration with VMware Cloud Foundation for a turnkey hybrid cloud experience
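As a taste of that RESTful API point, the call pattern is plain HTTPS against VxRail Manager. Below is a minimal Python sketch; the address, credentials, and `/rest/vxm/v1/system` path are assumptions for illustration, so verify the path against the VxRail API reference for your version.

```python
import requests

# Assumed VxRail Manager address and credentials.
resp = requests.get(
    "https://vxm.example.local/rest/vxm/v1/system",  # illustrative endpoint
    auth=("administrator@vsphere.local", "changeme"),
    verify=False,
)
print(resp.json())  # e.g. cluster-level info such as the installed version
```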

Watch at the links below, and be sure to view the report and project page!


Presales Interview

Keith Townsend takes on the "Executive" persona in this video. Jeremy Merrill, a Dell Technologies VxSeal, briefs Keith on the strengths and benefits of VxRail. This video sets the stage for much of the validation testing that the CTO Advisor team performed.




VxRail Technical Overview

Keith Townsend sets the stage for the project. He discusses the problem he would like to solve, current pain points, and outcomes he would like to see from VxRail.


SaaS Controller Surprise

In this technical review, CTO Advisor Analyst Alastair Cooke sits down with CTO Advisor Engineer Thom Greene and Dell Technologies VxSeal Curtis Edwards. The trio review the work from a more traditional technical perspective. Specifically, the team goes deeper into the SaaS controller.




VxRail Technical Overview – Part 1

Keith rejoins the program for a conversation with Thom Greene. Thom shares the high-level technical results of his VMware Cloud Foundation on VxRail analysis. Keith probes Thom to understand the overall potential operational value for the CTOA as an organization.


VxRail Technical Overview – Part 2

Alastair and Thom are joined by Dell Technologies technologist Joe Mauer to discuss the technical details of VMware Cloud Foundation on VxRail, as well as findings from Thom’s review.




End Design

Keith wraps the technical overview and describes the benefits of VxRail within the CTO Advisor Hybrid Infrastructure.


Concluding Analysis

Keith provides an executive overview of the research the team conducted and introduces the written work.







  • VxRail

VxRail Interactive Journey – the momentum train continues!

Daniel Chiu

Thu, 13 May 2021 17:39:53 -0000

|

Read Time: 0 minutes

Tired of spending hours doing web searches on product research? Tired of trying to stitch together each nugget of information to form your own mental model of how it works and what value it brings?  Yeah... we feel your pain! There should be a better way to get all that information in a tight and concise package that doesn’t make your eyes dry out. And that’s what the VxRail Interactive Journey is designed to deliver: a clear and engaging learning experience wrapped into a single package for easy consumption.

 

Introduced in March, the VxRail Interactive Journey provides a better way for technical buyers to get familiar with VxRail and quickly come away with what makes VxRail awesome, through an immersive experience for consuming videos, podcasts, and interactive demos. We’ve designed VxRail Interactive Journey for frequent updates, so expect new content to be added as VxRail evolves and as we find ways to build upon the experience.

And speaking of updates, I have a significant one to share today! In this latest update to the VxRail Interactive Journey, we’ve added the Support and Serviceability module. While serviceability and support are not often at the top of the list when learning about a product, they encompass some critical aspects of a product that support your everyday needs. We are excited to share with you our latest podcast featuring Christine Fantoni, VxRail product manager and serviceability enthusiast, who shares her passion for this topic and how VxRail serviceability drives product reliability.

Whether it’s your first time or you’re intrigued to see the new module, I invite you to check out the VxRail Interactive Journey (https://vxrail.is/interactivejourney)!  

  • NVIDIA
  • VDI

Breaking down the barriers for VDI with VxRail and NVIDIA vGPU

Todd Day

Wed, 21 Apr 2021 15:17:54 -0000

|

Read Time: 0 minutes

Desktop transformation initiatives often lead customers to look at desktop and application virtualization. According to Gartner, “Although few organizations planned for the global circumstances of COVID-19, many will now decide to have some desktop virtualization presence to expedite business resumption.” 

However, customers looking to embrace these technologies have faced several hurdles, including:

  • Significant up-front CapEx investments for storage, compute, and network infrastructure 
  • Long planning, design, and procurement cycles 
  • High cost of adding additional capacity to meet demand
  • Difficulty delivering a consistent user experience across locations and devices 

These hurdles have often caused desktop transformation initiatives to fail fast, but there is good news on the horizon. Dell Technologies and VMware have come together to provide customers with a superior solution stack that will allow them to get started more quickly than ever, with simple and cost-effective end-to-end desktop and application virtualization solutions using NVIDIA vGPU and powered by VxRail.

Dell Technologies VDI solutions powered by VxRail

Dell Technologies VDI solutions based on VxRail feature a superior solution stack at an exceptional total cost of ownership (TCO). The solutions are built on Dell EMC VxRail and leverage VMware Horizon 8 or Horizon Apps, with NVIDIA GPUs for those who need high-performance graphics. Wyse Thin and Zero clients, OptiPlex micro form factor desktops, and Dell monitors are also available as part of these solutions. Simply plug in, power up, and provision virtual desktops in less than an hour, reducing the time needed to plan, design, and scale your virtual desktop and application environment.

VxRail HCI System Software provides out-of-the-box automation and orchestration for deployment and day-to-day system-based operational tasks, reducing the overall IT OpEx required to manage the stack. You are not likely to find any build-it-yourself solution that provides this level of lifecycle management, automation, and operational simplicity.

Dell EMC VxRail and NVIDIA GPU: a powerful combination

Remote work has become the new normal, and organizations must enable their workforces to be productive anywhere while ensuring critical data remains secure.

Enterprises are turning to GPU-accelerated virtual desktop infrastructure (VDI) because GPU-enabled VDI provides workstation-like performance, allowing creative and technical professionals to collaborate on large models and access the most intensive 3D graphics applications.

Together with VMware Horizon, NVIDIA virtual GPU solutions help businesses to securely centralize all applications and data while providing users with an experience equivalent to the traditional desktop.

NVIDIA vGPU software included with the latest VMware Horizon release, which is available now, helps transform workflows so users can access data outside the confines of traditional desktops, workstations, and offices. Enterprises can seamlessly collaborate in real time, from any location, and on any device.

With NVIDIA vGPU and VMware Horizon, professional artists, designers, and engineers can access new features such as 10-bit HDR and high-resolution 8K display support from their virtual workstations while working from home.

How NVIDIA GPU and Dell EMC VxRail power VDI


In a VDI environment powered by NVIDIA virtual GPU, the virtual GPU software is installed at the virtualization layer. The NVIDIA software creates virtual GPUs that enable every virtual machine to share a physical GPU installed on the server or allows for multiple GPUs to be allocated on a single VM to power the most demanding workloads. The NVIDIA virtualization software includes a driver for every VM. Because work that was previously done by the CPU is offloaded to the GPU, the users, even demanding engineering and creative users, have a much better experience.


Virtual GPU for every workload on Dell EMC VxRail

As more knowledge workers are added to a server, the server eventually runs out of CPU resources. Adding an NVIDIA GPU offloads work that would otherwise consume CPU cycles, resulting in improved user experience and performance. We used the NVIDIA nVector knowledge worker VDI workload to test user experience and performance with NVIDIA GPUs. The NVIDIA M10, T4, A40, RTX6000/8000, and V100S, all of which are available on Dell EMC VxRail, achieve similar performance for this workload.

Customers are realizing the benefits of increased resource utilization by leveraging GPU-accelerated Dell EMC VxRail to run virtual desktops and workstations. They are also leveraging these resources to run compute workloads, such as AI or ML, when users are logged off. Customers who want to run compute workloads on the same infrastructure on which they run VDI might leverage a V100S to do so. For the complete list, see NVIDIA GPU cards supported on Dell EMC VxRail.

Conclusion

 With the prevalence of graphics-intensive applications and the deployment of Windows 10 across the enterprise, adding graphics acceleration to VDI powered by NVIDIA virtual GPU technology is critical to preserving the user experience. Moreover, adding NVIDIA GRID with NVIDIA GPU to VDI deployments increases user density on each server, which means that more users can be supported with a better experience.

To learn more about measuring user experience in your own environments, contact your Dell Account Executive.

Useful links

 Video: VMware Horizon on Dell Technologies Cloud

Dell Technologies Solutions: Empowering your remote workforce

Certified GPU for VxRail: NVIDIA vGPU for VxRail

Everything VxRail: Dell EMC VxRail

VDI Design Guide: VMware Horizon on VxRail and vSAN Ready Nodes

Latest VxRail release: Simpler cloud operations and more deployment options!




Read Full Blog
  • VMware
  • VxRail
  • Kubernetes
  • VMware Cloud Foundation
  • Tanzu

Simpler Cloud Operations and Even More Deployment Options Please!

Jason Marques

Wed, 03 Aug 2022 21:32:14 -0000

|

Read Time: 0 minutes

The latest VMware Cloud Foundation on Dell EMC VxRail release debuts LCM and storage enhancements, support for transitioning from VCF Consolidated to VCF Standard Architecture, AMD-based VxRail hardware platforms, and more!


Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.2.0 on VxRail 7.0.131.

This release brings about support for the latest versions of VCF and Dell EMC VxRail that provide a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new updates and enhancements.

Some Important Updates:

VCF on VxRail Management Operations

Ability For Customers to Perform Their Own VxRail Cluster Expansion Operations in VCF on VxRail Workload Domains. Sometimes the best announcements that come with a new release have nothing to do with a new technical feature but are instead about new customer-driven serviceability operations. The VCF on VxRail team is happy to announce this new serviceability enhancement. Customers no longer need to purchase a professional services engagement simply to expand a single-site, layer 2-configured VxRail WLD cluster by adding nodes to it. This saves time and money and gives customers the freedom to perform these operations on their own.

This aligns to existing support that already exists for customers performing similar cluster expansion operations for VxRail systems deployed as standard clusters in non-VCF use cases. 

Note: There are some restrictions on which cluster configurations support customer-driven expansion. Stretched VxRail cluster deployments and layer 3 VxRail cluster configurations still require engagement with professional services, as these are more advanced deployment scenarios. Please reach out to your local Dell Technologies account team for a complete list of the cluster configurations that are supported for customer-driven expansions.

VCF on VxRail Deployment and Services

Support for Transitioning From VCF on VxRail Consolidated Architecture to VCF on VxRail Standard Architecture. Continuing the operations improvements, the VCF on VxRail team is also happy to announce this new capability. We introduced support for VCF Consolidated Architecture deployments in VCF on VxRail back in May 2020. You can read about it here. VCF Consolidated Architecture deployments provide customers a way to familiarize themselves with VCF on VxRail in their core datacenters without a significant investment in cost and infrastructure footprint. Now, with support for transitioning from VCF Consolidated Architecture to VCF Standard Architecture, customers can expand as their scale demands it in their core, edge, or distributed datacenters! Now that’s flexible!

Please reach out to your local Dell Technologies account team for details on the transition engagement process requirements.

And Some Notable Enhancements:

VxRail Hardware Platform

AMD-based VxRail Platform Support in VCF 4.x Deployments. With the latest VxRail 7.0.131 HCI System Software release, ALL available AMD-based VxRail series models are now supported in VCF 4.x deployments. These models include VxRail E-Series and P-Series and support single-socket 2nd Gen AMD EPYC™ processors with 8 to 64 cores, allowing for extremely high core densities per socket.

The figure below shows the latest VxRail HW platform family.

 


For more info on these AMD platforms, check out my colleague David Glynn’s blog post here, written when AMD platform support was first introduced to the VxRail family last year. (Note: New 2U P-Series options have been released since that post.)

VCF on VxRail Multi-Site Architecture

NSX-T 3.1 Federation Now Supported with VCF 4.2 on VxRail 7.0.131. NSX-T Federation provides a cloud-like operating model for network administrators by simplifying the consumption of networking and security constructs. NSX-T Federation includes centralized management, consistent networking and security policy configuration with enforcement, and synchronized operational state across large-scale federated NSX-T deployments. With NSX-T Federation, VCF on VxRail customers can leverage stretched networks and unified security policies across multi-region VCF on VxRail deployments, providing workload mobility and simplified disaster recovery. This initial support will be through prescriptive manual guidance that will be made available soon after VCF on VxRail solution general availability. For a detailed explanation of NSX-T Federation, check out this VMware blog post here.

The figure below depicts what the high-level architecture would look like.


VCF on VxRail Storage

VCF 4.2 on VxRail 7.0.131 Support for VMware HCI Mesh. VMware HCI Mesh is a vSAN feature that provides “Disaggregated HCI” exclusively through software. In the context of VCF on VxRail, HCI Mesh allows an administrator to easily define a relationship between two or more vSAN clusters contained within a workload domain. It allows a vSAN cluster to borrow capacity from other vSAN clusters, improving agility and efficiency in an environment. This disaggregation allows the administrator to separate compute from storage. HCI Mesh uses vSAN’s native protocols for optimal efficiency and interoperability between vSAN clusters, accomplishing this with a client/server architecture. vCenter is used to configure the remote datastore on the client side. Various configuration options allow multiple clients to access the same datastore on a server. VMs can be created that utilize the storage capacity provided by the server, which enables other common features, such as performing a vMotion of a VM from one vSAN cluster to another.

The figure below depicts this architecture.

VCF on VxRail Networking

This release continues to extend networking flexibility to further adapt to various customer environments and to reduce deployment efforts. 

Customer-Defined IP Pools for NSX-T TEP IP Addresses for the Management Domain and Workload Domain Hosts. To extend networking flexibility, this release introduces NSX-T TEP IP Address Pools. This complements the existing support for using DHCP to assign NSX-T TEP IPs and allows customers to avoid deploying and maintaining a separate DHCP server for this purpose. Admins can choose to use IP Pools as part of VCF Bring Up by entering this information in the Cloud Builder template configuration file. The IP Pool will then be automatically configured during Bring Up by Cloud Builder. There is also a new option to choose DHCP or IP Pools during new workload domain deployments in SDDC Manager.
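As a rough illustration only, the sketch below shows how such a pool might be expressed. The ipAddressPoolSpec structure mirrors the shape used by the VCF public API for NSX-T specs, but treat the exact key names, nesting, and addresses as assumptions to verify against the Cloud Builder template for your VCF release.

```python
import json

# Hypothetical excerpt of a Cloud Builder/VCF configuration defining an
# NSX-T host overlay (TEP) IP Pool instead of relying on DHCP. Key names
# follow the VCF public API's ipAddressPoolSpec shape but are illustrative;
# verify them against the template for your release.
nsxt_host_overlay = {
    "transportType": "OVERLAY",
    "ipAddressPoolSpec": {
        "name": "host-tep-pool",
        "subnets": [
            {
                "cidr": "172.16.50.0/24",
                "gateway": "172.16.50.1",
                "ipAddressPoolRanges": [
                    {"start": "172.16.50.10", "end": "172.16.50.100"}
                ],
            }
        ],
    },
}

print(json.dumps(nsxt_host_overlay, indent=2))
```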

The figure below illustrates what this looks like. Once domains are deployed, IP address blocks are managed through each domain’s NSX Manager respectively. 


 

pNIC-Level Redundancy Configuration During VxRail First Run. Network-flexible configurations are further extended with this feature in VxRail 7.0.131. It allows an administrator to configure VxRail System VDS traffic across NDC and PCIe pNICs automatically during VxRail First Run using a new VxRail Custom NIC Profile option. Not only does this provide additional high-availability network configurations for VCF on VxRail domain clusters, it also further simplifies operations by removing the need for additional Day 2 activities to reach the same host configuration outcome.

Specify the VxRail Network Port Group Binding Mode During VxRail First Run. To further accelerate and simplify VCF on VxRail deployments, VxRail 7.0.131 has introduced this new enhancement designed with VCF in mind. VCF requires all host Port Group Binding Modes be set to Ephemeral. VxRail First Run now enables admins to have this parameter configured automatically, reducing the number of manual steps needed to prep VxRail hosts for VCF on VxRail use. Admins can set this parameter using the VxRail First Run JSON configuration file or manually enter it into the UI during deployment. 
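To make the idea concrete, here is a minimal, hypothetical fragment of what such a First Run JSON setting could look like. The key names below are invented for illustration and are not the actual VxRail schema, which varies by release; the requirement they encode (ephemeral binding on every host port group) is the point.

```python
import json

# Hypothetical First Run JSON fragment: every system port group uses
# ephemeral binding, which is what VCF requires. Key names ("portgroups",
# "binding_mode") are illustrative only, not the real VxRail schema.
first_run_fragment = {
    "portgroups": [
        {"name": "Management Network", "binding_mode": "ephemeral"},
        {"name": "vMotion", "binding_mode": "ephemeral"},
        {"name": "vSAN", "binding_mode": "ephemeral"},
    ]
}

print(json.dumps(first_run_fragment, indent=2))
```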

 The figure below illustrates an example of what this looks like in the Dell EMC VxRail Deployment Wizard UI.

 

VCF on VxRail LCM

New SDDC Manager LCM Manifest Architecture. This new LCM Manifest architecture changes the way SDDC Manager handles the metadata required to enable upgrade operations as compared to the legacy architecture used up until this release.

With the legacy LCM Manifest architecture: 

  • The metadata used to determine upgrade sequencing and availability was published as part of the LCM bundle itself or was part of SDDC Manager VM.
  • The architecture did not allow changes to the metadata after a bundle was published, which limited VMware’s ability to modify upgrade sequencing without requiring an upgrade to a new VCF release.

The newly updated LCM Manifest architecture helps address these challenges by enabling dynamic updates to LCM metadata, enabling future functionality such as recalling upgrade bundles or modifying skip level upgrade sequencing.

VCF Skip-Level Upgrades Using SDDC Manager UI and Public API. Keeping up with new releases can be challenging, and scheduling maintenance windows to perform upgrades is not always easy for every customer. The goal of this enhancement is to give VCF on VxRail administrators the flexibility to reduce the number of stepwise upgrades needed to get to the latest SDDC Manager/VCF release when they are multiple versions behind. All required upgrade steps are now automated as a single SDDC Manager-orchestrated LCM workflow built upon the new SDDC Manager LCM Manifest architecture. VCF skip-level upgrades allow admins to quickly and directly adopt the code version of their choice and to reduce maintenance window requirements.

Note: To take advantage of VCF skip level upgrades for future VCF releases, customers must be at a minimum of VCF 4.2.

The figure below shows what this option looks like in the SDDC Manager UI. 
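For admins driving this through the public API rather than the UI, a minimal sketch of discovering the available upgrade bundles might look like the following. The /v1/tokens and /v1/bundles endpoints come from the VCF public API, but verify the exact paths and response fields against the API reference for your VCF version; the hostname and credentials are placeholders.

```python
import requests

SDDC_MANAGER = "https://sddc-manager.example.local"  # placeholder hostname

# Authenticate against the VCF public API. POST /v1/tokens returns a bearer
# token; confirm the endpoint against the API reference for your VCF version.
resp = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "********"},
    verify=False,  # lab only: skips TLS verification for self-signed certs
)
resp.raise_for_status()
token = resp.json()["accessToken"]

# List the upgrade bundles SDDC Manager knows about, including any
# skip-level target published through the new LCM Manifest.
bundles = requests.get(
    f"{SDDC_MANAGER}/v1/bundles",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()

for bundle in bundles.get("elements", []):
    # Field names such as "version" and "downloadStatus" are assumptions
    # to confirm against the bundle model in your VCF API documentation.
    print(bundle.get("id"), bundle.get("version"), bundle.get("downloadStatus"))
```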

Improvements to Upgrade Resiliency Through VCF Password Prechecks. Other LCM enhancements in this release come in the area of password prechecks. When performing an upgrade, VCF needs to communicate with various components to complete various actions. Of course, to do this, SDDC Manager needs to have valid credentials. If the passwords have expired or have been changed outside of VCF, the patching operation fails. To avoid these issues, VCF now checks that the needed credentials are valid prior to commencing the patching operation. These checks occur both during the execution of the pre-check validation and during an upgrade of a resource, such as ESXi, NSX-T, vCenter, or VxRail Manager. Check out what this looks like in the figure below.


Automated In-Place vRSLCM Upgrades. Upgrading vRSLCM in the past required the deployment of a net new vRSLCM appliance. With VCF 4.2, the SDDC Manager keeps the existing vRSLCM appliance, takes a snapshot of it, then transfers the upgrade packages directly to it and upgrades everything in place. This provides a more simplified and streamlined LCM experience.

VCF API Performance Enhancements. Administrators who use a programmatic approach will experience a quicker retrieval of information through the caching of certain information when executing API calls.

VCF on VxRail Security

Mitigate Man-In-The-Middle Attacks. Want to prevent man-in-the-middle attacks on your VCF on VxRail cloud infrastructure? This release is for you. Introduced in VCF 4.2, customers can leverage SSH RSA fingerprint and SSL thumbprint enforcement capabilities that are built natively into SDDC Manager to verify the authenticity of cloud infrastructure components (vCenter, ESXi, and VxRail Manager). Customers can choose to enable this feature for their VCF on VxRail deployment during VCF Bring Up by filling in the affiliated parameter fields in the Cloud Builder configuration file.

An SSH RSA fingerprint comes from the host’s SSH public key, while an SSL thumbprint comes from the host’s certificate. One or more of these data points can be used to validate the authenticity of VCF on VxRail infrastructure components when they are added and configured into the environment. For the Management Domain, both SSH fingerprints and SSL thumbprints are available to use, while Workload Domains have SSH fingerprints available. See what this looks like in the figure below.
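As a side note, you can see the kind of value SDDC Manager compares by computing a host’s SSL thumbprint yourself. Below is a minimal Python sketch using only the standard library; the hostname is a placeholder, and whether your deployment expects a SHA-1 or SHA-256 thumbprint is an assumption to confirm, so both are printed.

```python
import hashlib
import ssl

# Compute the SSL thumbprint of an infrastructure component, i.e. the hash
# of its DER-encoded TLS certificate, formatted as colon-separated hex.
host, port = "vxrail-manager.example.local", 443  # placeholder host

pem_cert = ssl.get_server_certificate((host, port))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)

for algo in ("sha1", "sha256"):
    digest = hashlib.new(algo, der_cert).hexdigest().upper()
    thumbprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
    print(f"{algo}: {thumbprint}")
```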



Natively Integrated Dell Technologies Next Gen SSO Support With SDDC Manager. Dell Technologies Next Gen SSO is a newly implemented backend service used in authenticating with Dell Technologies support repositories where VxRail update bundles are published. With the native integration that SDDC Manager has with monitoring and downloading the latest supported VxRail upgrade bundles from this depot, SDDC Manager now utilizes this new SSO service for its authentication. While this is completely transparent to customers, existing VCF on VxRail customers may need to log SDDC Manager out of their current depot connection and re-authenticate with their existing credentials to ensure future VxRail updates are accessible by SDDC Manager.   

New Advanced Security Add-on for VMware Cloud Foundation License SKUs: Though not necessarily affiliated with the VCF 4.2 on VxRail 7.0.131 BOM directly, new VMware security license SKUs for Cloud Foundation are now available for customers who want to bring their own VCF licenses to VCF on VxRail deployments. 

The Advanced Security Add-on for VMware Cloud Foundation now includes advanced threat protection, and workload and endpoint security that provides the following capabilities:

  • Carbon Black Workload Advanced: This includes Next Generation Anti-Virus, Workload Audit/Remediation, and Workload EDR.
  • Advanced Threat Prevention Add-on for NSX Data Center: Coming in Advanced and Enterprise Plus editions, this includes NSX Firewall, NSX Distributed IDS/IPS, NSX Intelligence, and Advanced Threat Prevention.
  • NSX Advanced Load Balancer with Web Application Firewall

Updated VMware Cloud Foundation and VxRail BOM

VMware Cloud Foundation 4.2.0 on VxRail 7.0.131 introduces support for the latest versions of the SDDC and VxRail. For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.

Well, that about covers it for this release. The innovation continues with co-engineered features coming from all layers of the VCF on VxRail stack. This further illustrates the commitment that Dell Technologies and VMware have to drive simplified turnkey customer outcomes. Until next time, feel free to check out the links below to learn more about VCF on VxRail.

Jason Marques
Twitter - @vwhippersnapper

 Additional Resources


Read Full Blog
  • VxRail
  • SAP HANA
  • SAP
  • edge

Deploying SAP HANA at the Rugged Edge

David Glynn

Mon, 14 Dec 2020 18:38:19 -0000

|

Read Time: 0 minutes

SAP HANA is one of those demanding workloads that has been steadfastly contained within the clean walls of the core data center. However, this time last year VxRail began to chip away at these walls and brought you SAP HANA certified configurations based on the VxRail all-flash P570F workhorse and powerful quad socket all-NVMe P580N. This year, we are once again in the giving mood and are bringing SAP HANA to the edge. Let us explain.


Dell Technologies defines the edge as follows: “The edge exists wherever the digital world & physical world intersect. It’s where data is securely collected, generated and processed to create new value.” This is a very broad definition that extends the edge from the data center to oil rigs to mobile response centers for natural disasters. It is a tall order not only to provide compute and storage in such harsh locations, but also to provide enough of both to meet the strict and demanding needs of SAP HANA, all while not consuming a lot of physical space. After all, it is the edge, where space is at a premium.

Shrinking the amount of rack space needed was the easier of the two challenges, and our 1U E for Everything (or should that be E for Everywhere?) was a perfect fit. The all-flash E560F and all-NVMe E560N, both of which can be enhanced with Intel Optane Persistent Memory, can be thought of as the shorter sibling of our 2U P570F, packing a powerful punch with equivalent processor and memory configurations.

While the E Series fits the bill for space-constrained environments, it still needs data center-like conditions. This is not the case for the durable D560F, the tough little champion that joined the VxRail family in June of this year, and which is now the only SAP HANA certified ruggedized platform in the industry. Weighing in at a lightweight 28 lbs. with a short depth of 20 inches, this little fighter will run all day at 45°C with eight-hour sprints of up to 55°C, all while enduring shock, vibration, dust, humidity, and EMI, as this little box is MIL-STD 810G and DNV-GL Maritime certified. In other words, if your holiday plans involve a trip to hot sand beaches, a ship cruise through a hurricane, or an alpine climb, and you’re bringing SAP HANA with you (we promise we won’t ask why), then the durable D560F is for you.

The best presents sometimes come in small packages. So, we won’t belabor this blog with anything more than to announce that these two little gems, the E560 and the D560, are now SAP HANA certified.

Author: David Glynn, Sr. Principal Engineer, VxRail Tech Marketing

References:

360° View: VxRail D Series: The Toughest VxRail Yet 

Video: HCI Computing at the Edge

Solution brief: Taking HCI to the Edge: Rugged Efficiency for Federal Teams

White paper: ESG Technical Validation: Dell EMC VxRail and Intel Optane Persistent Memory -- Enabling High Performance and Greater VM Density for HCI Workloads

Press release: Dell Technologies Brings IT Infrastructure and Cloud Capabilities to Edge Environments

SAP Certification link: Certified and Supported SAP HANA® Hardware Directory 



Read Full Blog
  • VxRail
  • VMware Cloud Foundation
  • DTCP
  • life cycle management

Lifecycle Management for vSAN Ready Nodes and VxRail Clusters: Part 2 – Cloud Foundation Use Cases

Cliff Cahill

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

In my previous post, I explored the customer experience of using vSphere Lifecycle Manager Images (vLCM Images) versus VxRail Manager to maintain HCI stack integrity and complete full stack software updates for standard vSphere cluster use cases. It was clear that VxRail Manager optimized operational efficiency while taking ownership of software validation for the complete cluster, removing the burden of testing and reducing the overall risk during a lifecycle management event. However, questions I frequently get are: Do those same values carry over when using VxRail as part of a VMware Cloud Foundation on VxRail (Dell Technologies Cloud Platform) deployment? Are vLCM Images even used in VCF on VxRail deployments? In this post, I want to dive into answering these questions.

There are some excellent resources available on the VxRail InfoHub web portal, along with several blog posts that discuss the unique integration between SDDC Manager and VxRail Manager in the area of LCM (like this one). I suggest that you check them out if you are unfamiliar with VCF and SDDC Manager functionality, as they will help you follow along with this post.

Before we dive in, there are a few items that you should be aware of regarding SDDC Manager and vLCM. I won’t go into all of them here, but you can check out the VCF 4.1 Release Notes, vLCM Requirements and Limitations, VCF 4.1 Admin Guide, and Tanzu documentation for more details. A few worth highlighting include:

  • You cannot deploy a Service VM to an NSX Manager that is associated with a workload domain that is using vSphere Lifecycle Manager Images
  • Management domains, and thus VCF consolidated architecture deployments, can only support vSphere Lifecycle Manager Baselines (formerly known as VUM) based updates because vLCM Images use is not supported in the Management domain default cluster. 
  • VMware vSphere Replication is considered a non-integrated solution and cannot be used in conjunction with vLCM Images. 
  • While vLCM Images supports both NSX-T and vSphere with Kubernetes, it does not support both enabled at the same time within a cluster. This means you cannot use vLCM Images with vSphere with Kubernetes within a VCF workload domain deployment.

As in my last post, the main area of focus here is the customer experience with VMware Cloud Foundation using vLCM and VxRail, specifically:

  • Defining the initial baseline node image configuration
  • Planning for a cluster update
  • Executing the cluster update
  • Sustaining cluster integrity over the long term

Oh, and one last important point to make before we get into the details. As of this writing, vLCM is only used when deploying VCF on server/vSAN Ready Nodes and is not used when deploying VCF on VxRail. As a result, all information covered here when comparing vLCM with VxRail Manager essentially compares the LCM experience of running VCF on servers/vSAN Ready Nodes vs VCF on VxRail.

Defining the Initial Baseline Node Image Configuration  

How is it Done With vLCM Images?

We covered this in detail in my previous post. The requirements for VCF-based systems remain the same, but one thing to highlight in VCF use cases is that the customer is still responsible for the installation, configuration, and ongoing updating of the hardware vendor HSM components used in the vLCM Images-based environment. SDDC Manager does not automatically deploy these components, nor does it lifecycle them for you.

VCF deployments do differ in the area of initial VCF node imaging. In VCF deployments there are two initial node imaging options for customers:

  • A manual install of ESXi and associated driver and firmware packages
  • A semi-automated process using a service called VMware Imaging Appliance (VIA) that runs as a part of the Cloud Builder appliance. 

The VIA service tool uses a PXE boot environment to image nodes, which need to be on the same L2 domain as the appliance and reachable over an untagged VLAN (VLAN ID 0). ESXi images and VIBs can be uploaded to the Cloud Builder appliance VIA service. Hostnames and IP addresses are assigned during the imaging process. Once initial imaging is complete, and Cloud Builder has run through its automated workflow, you are left with a provisioned Management Domain. (One important consideration here regarding initial node baseline images: customers need to ensure that the hardware and software components included in these images are validated against the VCF and ESXi software and hardware BOMs that have been certified for the version of VCF that will be installed in their environment.) The default cluster within the Management Domain cannot use vLCM Images for future cluster LCM updates.

When you are creating a new VI Workload Domain in a VCF on vSAN Ready Nodes deployment, that is when you can choose to enable vLCM Images as your method of choice for cluster updates; alternatively, you can select vLCM Baselines. (Note: When using vLCM Baselines, firmware updates are not maintained as part of cluster lifecycle management.) If you opt to use vLCM Images, you cannot revert to using vLCM Baselines for cluster management. So it is very important to choose wisely and understand which LCM operating model is needed prior to deploying the workload domain. Because this blog post focuses on vLCM Images, let’s review what is involved when you select this option.

To begin, it’s important to know that you cannot create a vLCM Images-based workload domain until you import an image into SDDC Manager. But you cannot import an image into SDDC Manager until you have a vLCM Images enabled cluster.  

To get over this chicken and egg scenario, the administrator needs to create an empty cluster within the Management Domain where you can set up the image requirements and assign firmware and driver profiles that you have validated during the planning and preparation phase for the initial cluster build. The following figure provides an example of creating the temporary cluster needed to configure vLCM Images. 

Figure 1  Creating a temporary cluster to enable vLCM Images as part of the initial setup.

When defining vLCM images, similar to when defining the initial baseline node images, customers are responsible for ensuring that these images are validated against the VCF software BOM that has been certified for the version of VCF that is installed in the environment.

When you are satisfied with the image configuration and you have defined the Driver, Firmware, and Cluster Profiles, export the required JSON, ESXi ISO, and ZIP files from the vSphere UI to your local file system, as shown in the following figure. These files include:

  • SOFTWARE_SPEC_1386209123.json
  • ISO_IMAGE_1904428419.iso
  • OFFLINE_BUNDLE_1829789659.zip

Figure 2  Exporting Images

Next, within the vCenter UI, go to the Developer Center menu and choose the API Explorer tab. At this stage you need to run several API commands.

To do this, first select your endpoint (vCenter Server) from the drop-down, then select the vCenter-related APIs. You will be presented with all the applicable vCenter APIs for your chosen endpoint. Expand the Cluster section and execute the GET command for /rest/vcenter/cluster, as shown in the following figure.

 

Figure 3  In Developer Center: List all Clusters

This displays all the clusters managed by that vCenter and provides a variable for each cluster. Click on vcenter.cluster.summary (Dell-VSRN-Temp) and copy its value (Domain-c2022 in my example); you will use it in the next step.

Change the focus on the API explorer to ESX and execute a GET API command for /api/esx/settings/clusters/Domain-c2022/software.

Fill in the cluster id parameter (Domain-c2022) as the required value to run the API command (see the following figure). Once executed, click on the download json option and an additional json file downloads to your local file system. 

Figure 4  Execute the Cluster software API Command
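If you prefer scripting these calls over clicking through the API Explorer, a minimal Python sketch of the same two requests might look like this. The session and endpoint paths follow the vSphere 7 Automation API; the hostname, credentials, and cluster name are placeholders, and the /rest/ versus /api/ endpoint variants differ slightly in response shape, so verify against your vCenter version.

```python
import requests

VCENTER = "https://vcenter.example.local"           # placeholder
AUTH = ("administrator@vsphere.local", "********")  # placeholder credentials

# Create an API session (vSphere 7 Automation API). The returned token is
# passed in the vmware-api-session-id header on subsequent calls.
session = requests.post(f"{VCENTER}/api/session", auth=AUTH, verify=False)
session.raise_for_status()
headers = {"vmware-api-session-id": session.json()}

# Step 1: list clusters and find the ID of the temporary cluster
# (Dell-VSRN-Temp in this example) -- the scripted equivalent of the
# GET cluster call shown above.
clusters = requests.get(f"{VCENTER}/api/vcenter/cluster",
                        headers=headers, verify=False).json()
cluster_id = next(c["cluster"] for c in clusters if c["name"] == "Dell-VSRN-Temp")

# Step 2: fetch the cluster's vLCM software specification and save it as
# the response-body JSON that SDDC Manager's Import Cluster Image expects.
software = requests.get(
    f"{VCENTER}/api/esx/settings/clusters/{cluster_id}/software",
    headers=headers, verify=False)
software.raise_for_status()

with open("Response-body.json", "w") as f:
    f.write(software.text)
```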

At this point, you have four files:

  1. SOFTWARE_SPEC_1386209123.json
  2. ISO_IMAGE_1904428419.iso
  3. OFFLINE_BUNDLE_1829789659.zip
  4. Response-body.json

Finally, within SDDC Manager, select Repository then Image Management and Import Cluster Image. Here you need to import the four files mentioned above. As you import the individual files, make sure that you specify a name for the cluster image and import them in the correct order. Once the import is successful, you can now start to deploy your first vLCM Images enabled workload domain.

How is it Done Using VxRail LCM?

VxRail’s key integrations with Cloud Foundation start even before any VCF on VxRail components are installed, at Dell facilities as part of the manufacturing process. Here, the nodes are loaded with a VxRail Continuously Validated State image that includes all pre-validated vSphere, vSAN, and hardware firmware components. This means that once VxRail nodes are racked, stacked, and powered on within your datacenter, they are ready to be used to install a new VCF instance, create new workload domains, expand existing workload domains with new clusters, or expand clusters on an existing system.

For new VCF deployments, Cloud Builder has unique integrated workflows that tailor a streamlined deployment process for VxRail, leveraging existing capabilities for VxRail cluster management operations. Once SDDC Manager is deployed using Cloud Builder, two update bundle repositories can be configured.

Figure 5  SDDC Manager Repository Settings 

The first is the VMware repository, which is used for VMware software such as vSphere, NSX, and SDDC Manager. The second is the Dell EMC repository for VxRail software. Once you configure and authenticate with the appropriate user account credentials in SDDC Manager, it automatically connects to the VxRail repository at Dell EMC and pulls down the next available VxRail update package. Each available VxRail update package will have already been validated, tested, and certified with the version of VCF running in the customer’s environment.

Figure 6  VxRail Software Bundle in SDDC Manager

The following figure summarizes the steps needed for defining initial baseline node images for VCF using vLCM Images and VCF using VxRail Manager.

Figure 7  Initial baseline node images and configuration  

Planning for a Cluster Update 

How is it Done Using vLCM Images?

Although we have reviewed this in detail before, it is worth mentioning here again. Ownership of this process lies on the shoulders of the administrator. In this model, customers take on the responsibility of validating and testing the software and driver combination of their desired state image to ensure full stack interoperability and integrity, and of ensuring that the component versions fall within the supported VCF software BOM being used in their environment.

How is it Done Using VxRail LCM?

The VxRail approach is much different. The VxRail engineering teams spend thousands of test hours across multiple platforms to validate each release. The end user is given a single image to leverage, knowing that Dell Technologies has completed the very heavy lift of platform validation. As I mentioned above, SDDC Manager will download the correct bundle based on your VCF release and mark it available within your SDDC Manager. When customers see a new image available, they are guaranteed that it is already compatible with their VCF deployment. This curated bundle management and validation is part of the turnkey experience customers gain with VCF on VxRail.

The following figure illustrates the differences in planning a cluster update for VCF with vLCM Images and VCF with VxRail.

Figure 8  Planning for a cluster update

Executing the Cluster Update

How is it Done Using vLCM Images?

Defining the baseline node image is vital for defining the hardware health of your cluster. Defining a target version for your system’s next update is equally important. It should involve testing the specific combination of components for the desired image, in addition to the standard interoperability validation performed by the Ready Node hardware vendor when updates to server hardware firmware and drivers are released. Once the hardware baseline is known, the ESXi image must be imported into vCenter. Driver, firmware, and Cluster Profiles must then be defined in vCenter so they are ready to be exported.

We use the same process as originally outlined for the initial setup: export the images, run the relevant API calls, and import the files into SDDC Manager. Every future update follows this same process. Additional firmware and driver profiles will have to be created if new workload domains or clusters are added with different server hardware configurations. Thus, a deployment that caters to multiple hardware use cases will end up with several driver/firmware profiles that need to be managed and tested independently.

How is it Done Using VxRail LCM?

SDDC Manager is the orchestration engine. It:

  • Determines when each update is applicable
  • Ensures that each update is made available in the correct order
  • Ensures that components such as SDDC Manager, vCenter, NSX-T, and VxRail are updated and coordinated in the correct manner

For VxRail LCM updates, SDDC Manager sends API calls directly to each VxRail Manager for every cluster being updated to initiate a cluster upgrade. From that point on, VxRail Manager takes ownership of the VxRail update execution, using the same native VxRail Manager LCM execution process that is used in non-VCF VxRail deployments. During LCM execution, VxRail Manager provides constant feedback to SDDC Manager throughout the process. VxRail updates these components:

  • VMware ESXi
  • vSAN
  • Hardware firmware
  • Hardware drivers

To understand the full range of hardware components that are updated with each release, I urge you to check out the VxRail 7.0 Support Matrix.

The following figure summarizes the steps required to execute cluster updates for VCF with vLCM Images and VCF with VxRail.

Figure 9  Executing a cluster update workflow

Sustaining Cluster Integrity Over the Long Term

How is it Done Using vLCM Images?

Unlike standalone vSphere cluster deployments, where vLCM Images manages images on a per-cluster basis, VMware Cloud Foundation allows you to manage all cluster images once imported and repurpose them for other clusters or workload domains. A definite improvement, but each new update still requires you to create the image, firmware, and driver combinations in vCenter first and then import them into SDDC Manager. Of course, this is after you have repeated the planning phases and completed all the driver and firmware interoperability testing.

Also, it is important to note that if your cluster is being managed by vLCM Images and you need to expand it with hardware that is not identical to the original hosts (this can happen when hardware components go end of sale, or when you have different hardware or firmware requirements for different nodes), you can no longer leverage vLCM Images, nor can you change back to using vLCM Baselines. So proper planning is very important.

How is it Done Using VxRail LCM?

VxRail LCM supports customers’ ability to grow their clusters with heterogeneous nodes over time. Different generations of servers, or servers with differing hardware characteristics, can be mixed within a cluster in accordance with application profile requirements. A single pre-validated image is made available that covers all hardware profiles. All of this is factored into each VxRail Continuously Validated State update bundle, which is applied to each individual cluster based on its current component version state.

Conclusion

When we piece together the bigger picture with all the LCM stages combined, it provides an excellent representation of the ease of management when VxRail is at the heart of your VCF deployment. 

Figure 10  Comparing vLCM Images and VxRail LCM cluster update operations 

It’s clear to see that VxRail, with its pre-validated engineered approach, can provide a differentiated customer experience when it comes to operational efficiency, during both the initial deployment phase and the continuous lifecycle management of the HCI. 

While vLCM Images provides a significant improvement from manually applying the updates, the planning and testing required can become quite iterative. And when newer hardware profiles are introduced over the lifespan of the system, things could become more difficult to manage, introducing additional complexity. 

By contrast, VxRail provides a single update file for each release that is curated and made accessible within SDDC Manager natively, with no additional administration effort required. It’s simplicity at its finest, and simplicity is at the core of the VxRail turnkey customer experience.

Cliff Cahill
Dell EMC VxRail Engineering Technologist
Twitter: @cliffcahill
LinkedIn: http://linkedin.com/in/cliffcahill

Additional Resources 

VCF on VxRail Interactive Demo

VxRail page on DellTechnologies.com

VxRail Videos



Read Full Blog
  • VMware
  • vSphere
  • VxRail
  • vSAN
  • life cycle management

Update to VxRail 7.0.100 and Unleash the Performance Within It

David Glynn

Thu, 05 Nov 2020 23:07:52 -0000

|

Read Time: 0 minutes

What could be better than faster storage? How about faster storage, more capacity, and better durability?

Last week at Dell Technologies we released VxRail 7.0.100. This release brings support for the latest versions of VMware vSphere and vSAN 7.0 Update 1. Typically, in an update release we will see a new feature or two, but VMware outdid themselves and crammed not only a load of new or significantly enhanced features into this update, but also some game-changing performance enhancements. As my peers at VMware already did a fantastic job of explaining these features, I won’t even attempt to replicate their work – you can find links to the blogs on features that caught my attention in the reference section below. Rather, I want to draw attention to the performance gains, and ask the question: Could RAID5 with compression only be the new normal?

Don’t worry, I can already hear the cries of “Max performance needs RAID1, RAID5 has IO amplification and parity overhead, data reduction services have drawbacks” – but bear with me a little. I’m not suggesting that RAID5 with compression only be used for all workloads; some workloads are definitely unsuitable – streams of compressed video come to mind. Rather, I’m merely suggesting that after our customers go through the painless process of updating their cluster to VxRail 7.0.100 from any of our 36 previous releases over the past two years (yes, you can leap straight from 4.5.211 to 7.0.100 in a single update, and yes, we do handle the converging and decommissioning of the Platform Services Controller), they check out the reduction in storage IO latency that their existing workload is putting on their VxRail cluster, and investigate what it represents – in short, more storage performance headroom.

As customers buy VxRail clusters to support production workloads, they can’t exactly load them up with a variety of benchmark workload tests to see how far they can push them. But at VxRail we are fortunate to have our own dedicated performance team, who have enough VxRail nodes to run a mid-sized enterprise and access to a large library of components, so they can replicate almost any VxRail configuration we sell (and a few we don’t). So there is data behind my outrageous suggestion; it isn’t just back-of-the-napkin mathematics. Grab a copy of the performance team’s recent findings in their whitepaper, Harnessing the performance of Dell EMC VxRail 7.0.100: A lab based performance analysis of VxRail, and skip to figure 3. There you’ll find some very telling before and after latency curves, with and without data reduction services, for an RDBMS workload. Spoiler: 58% more peak IOPS and almost 40% lower latency; with compression only, this drops to a still very significant 45% more peak IOPS with 30% lower latency. (For those of you screaming “but failure domains,” check out the blog Space Efficiency Using the New “Compression only” Option, where Pete Koehler explains the issue and how it no longer exists with compression only.) But what about RAID5? Skip up to figure 1, which summarizes the across-the-board performance gains for IOPS and throughput – impressive, right? Now slide down to figure 2 to compare throughput; in particular, compare RAID1 on 7.0 with RAID5 on 7.0 U1 – the read performance is almost identical, while the gap in write performance has narrowed. Write performance on RAID5 will likely always lag RAID1 due to IO amplification, but VMware is clearly looking to narrow that gap as much as possible.

If nothing else, the whitepaper should tell you that a simple hassle-free upgrade to VxRail 7.0.100 will unlock additional performance headroom on your vSAN cluster without any additional costs, and that the tradeoffs associated with RAID5 and data reduction services (compression only) are greatly reduced. There are opportunistic space savings to be had from compression only, but they require committing to a cluster-wide change to unlock, which is something that should not be taken lightly. However, the guaranteed 33% capacity savings of RAID5 can be unlocked per virtual machine and reverted just as easily, representing a lower risk. I opened by asking whether RAID5 with compression only could be the new normal, and I firmly believe the data indicates that this is a viable option for many more workloads.
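To see where that guaranteed 33% comes from, here is the quick arithmetic, using vSAN’s standard FTT=1 overheads:

```python
# Back-of-the-napkin check on the guaranteed RAID5 capacity savings.
# RAID1 (FTT=1, mirroring) writes two full copies of the data; RAID5
# (FTT=1, erasure coding, 3+1) writes four components for every three
# components of data.
data_gb = 100

raid1_consumed = data_gb * 2        # 200 GB of raw capacity
raid5_consumed = data_gb * 4 / 3    # ~133 GB of raw capacity

savings = (raid1_consumed - raid5_consumed) / raid1_consumed
print(f"RAID5 vs RAID1 capacity savings: {savings:.0%}")  # -> 33%
```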

References:

My peers at VMware (John Nicholson, Pete Flecha (these two are the voices and brains behind the vSpeakingPodcast – definitely worth listening to), Teodora Todorova Hristov, Pete Koehler and Cedric Rajendran) have written great and in-depth blogs about these features that caught my attention:

vSAN HCI Mesh – eliminate stranded storage by enabling VMs registered to cluster A access storage from cluster B

Shared Witness for 2-Node Deployments - reduced administration time and infrastructure costs through one witness for up to sixty-four 2-node clusters

Enhanced vSAN File Services – adds SMB v2.1 and v3 for Windows and Mac clients, and adds Kerberos authentication for existing NFS v4.1

Space Efficiency: Compression only option - For demanding workloads that cannot take advantage of deduplication. Compression only has higher throughput, lower latency, and significantly reduced impact on write performance compared to deduplication and compression. Compression only has a reduced failure domain and 7x faster data rebuild rates.

Spare Capacity Management – slack space guidance of 25% has been replaced with a calculated Operational Reserve that requires less space and decreases with scale. There is an additional option to enable Host rebuild reserve; the VxRail Sizing Tool reserves this by default when sizing configurations with the Add HA filter

Enhanced Durability during Maintenance Mode – data intended for a host in maintenance mode is temporarily recorded in a delta file on another host, preserving the configured FTT during Maintenance Mode operations

 

Read Full Blog
  • VMware
  • VxRail
  • Kubernetes
  • VMware Cloud Foundation
  • Tanzu
  • DTCP

Take VMware Tanzu to the Cloud Edge with Dell Technologies Cloud Platform

Jason Marques

Wed, 12 Jul 2023 16:23:35 -0000

|

Read Time: 0 minutes

Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.1.0 on VxRail 7.0.100.

This release brings support for the latest versions of VMware Cloud Foundation and Dell EMC VxRail to the Dell Technologies Cloud Platform and provides a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new features.

Updated VMware Cloud Foundation and VxRail BOM

Cloud Foundation 4.1 on VxRail 7.0.100 introduces support for the latest versions of the SDDC listed below:

  • vSphere 7.0 U1 
  • vSAN 7.0 U1 
  • NSX-T 3.0 P02
  • vRealize Suite Lifecycle Manager 8.1 P01
  • vRealize Automation 8.1 P02
  • vRealize Log Insight 8.1.1
  • vRealize Operations Manager 8.1.1
  • VxRail 7.0.100

For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.

VMware Cloud Foundation Software Feature Updates

VCF on VxRail Management Enhancements

vSphere Cluster Level Services (vCLS)

vSphere Cluster Services is a new capability introduced in the vSphere 7 Update 1 release that is included as a part of VCF 4.1. It runs as a set of virtual machines deployed on top of every vSphere cluster. Its initial functionality provides foundational capabilities that are needed to create a decoupled and distributed control plane for clustering services in vSphere. vCLS ensures cluster services like vSphere DRS and vSphere HA are all available to maintain the resources and health of the workloads running in the clusters independent of the availability of vCenter Server. The figure below shows the components that make up vCLS from the vSphere Web Client.

Figure 1

Not only is vSphere 7 providing modernized data services like embedded vSphere Native Pods with vSphere with Tanzu, but features like vCLS are also beginning the evolution toward distributed control planes!

VCF Managed Resources and VxRail Cluster Object Renaming Support

VCF can now rename resource objects post creation, including the ability to rename domains, datacenters, and VxRail clusters.

The domain is managed by the SDDC Manager. As a result, you will find that there are additional options within the SDDC Manager UI that will allow you to rename these objects. 

VxRail Cluster objects are managed by a given vCenter server instance. In order to change cluster names, you will need to change the name within vCenter Server. Once you do, you can go back to the SDDC Manager and after a refresh of the UI, the new cluster name will be retrieved by the SDDC Manager and shown.

In addition to the domain and VxRail cluster object rename, SDDC Manager now supports the use of a customized Datacenter object name. The enhanced VxRail VI WLD creation wizard has been updated to include an input for Datacenter Name, which is automatically imported into the SDDC Manager inventory during the VxRail VI WLD creation workflow. Note: Make sure the Datacenter name matches the one used during the VxRail Cluster First Run. The figure below shows the Datacenter Input step in the enhanced VxRail VI WLD creation wizard from within SDDC Manager.

 

Figure 2

Being able to customize resource object names makes VCF on VxRail more flexible in aligning with an IT organization’s naming policies.

VxRail Integrated SDDC Manager WLD Cluster Node Removal Workflow Optimization

Furthering the Dell Technologies and VMware co-engineering integration efforts for VCF on VxRail, new workflow optimizations have been introduced in VCF 4.1 that take advantage of VxRail Manager APIs for VxRail cluster host removal operations.

 When the time comes for VCF on VxRail cloud administrators to remove hosts from WLD clusters and repurpose them for other domains, admins will use the SDDC Manager “Remove Host from WLD Cluster” workflow to perform this task. This remove host operation has now been fully integrated with native VxRail Manager APIs to automate removing physical VxRail hosts from a VxRail cluster as a single end-to-end automated workflow that is kicked off from the SDDC Manager UI or VCF API. This integration further simplifies and streamlines VxRail infrastructure management operations all from within common VMware SDDC management tools. The figure below illustrates the SDDC Manager sub tasks that include new VxRail API calls used by SDDC Manager as a part of the workflow.

 Figure 3

 Note: Removed VxRail nodes require reimaging prior to repurposing them into other domains. This reimaging currently requires Dell EMC support to perform.

I18N Internationalization and Localization (SDDC Manager)

SDDC Manager now has international language support that meets the I18N Internationalization and Localization standard. Options to select the desired language are available in the Cloud Builder UI, which installs SDDC Manager using the selected language settings. SDDC Manager will have localization support for the following languages – German, Japanese, Chinese, French, and Spanish. The figure below illustrates an example of what this would look like in the SDDC Manager UI.

Figure 4

vRealize Suite Enhancements 

VCF Aware vRSLCM

New in VCF 4.1, the vRealize Suite is fully integrated into VCF. The SDDC Manager deploys vRSLCM and creates a two-way communication channel between the two components. When deployed, vRSLCM is VCF-aware and reports back to the SDDC Manager which vRealize products are installed. The installation of vRealize Suite components utilizes standardized VVD best-practice deployment designs leveraging Application Virtual Networks (AVNs).

Software Bundles for the vRealize Suite are all downloaded and managed through the SDDC Manager. When patches or updates become available for the vRealize Suite, lifecycle management of the vRealize Suite components is controlled from the SDDC Manager, calling on vRSLCM to execute the updates as part of SDDC Manager LCM workflows. The figure below showcases the process for enabling vRealize Suite for VCF.

 Figure 5

VCF Multi-Site Architecture Enhancements

VCF Remote Cluster Support

VCF Remote Cluster Support enables customers to extend their VCF on VxRail operational capabilities to ROBO and Cloud Edge sites, enabling consistent operations from core to edge. Pair this with an awesome selection of VxRail hardware platform options and Dell Technologies has your edge use cases covered. More on hardware platforms later. For a detailed explanation of this exciting new feature, check out the link to a VMware blog post on the topic at the end of this post.

VCF LCM Enhancements

NSX-T Edge and Host Cluster-Level and Parallel Upgrades

With previous VCF on VxRail releases, NSX-T upgrades were all-encompassing, meaning that a single update required updates to all the transport hosts as well as the NSX Edge and Manager components in one operation.

With VCF 4.1, support has been added to perform staggered NSX updates to help minimize maintenance windows. Now, an NSX upgrade can consist of three distinct parts:

  • Updating of edges
    1. Can be one job or multiple jobs. Rerun the wizard.
    2. Must be done before moving to the hosts
  • Updating the transport hosts
  • Once the hosts within the clusters have been updated, the NSX Managers can be updated.

Multiple NSX Edge and/or host transport clusters within the NSX-T instance can be upgraded in parallel. The administrator has the option to choose some clusters without having to choose all of them. Clusters within an NSX-T fabric can also be chosen to be upgraded sequentially, one at a time.

NSX-T components can be updated in several ways:

  • NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded together in parallel (default) 
  • NSX-T Edges can be upgraded independently of NSX-T Host Clusters
  • NSX-T Host Clusters can be upgraded independently of NSX-T Edges only after the Edges are upgraded first
  • NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded sequentially one after another.

The figure below visually depicts these options.

 Figure 6

These options provide Cloud admins with a ton of flexibility so they can properly plan and execute NSX-T LCM updates within their respective maintenance windows. More flexible and simpler operations. Nice! 

VCF Security Enhancements

Read-Only Access Role, Local and Service Accounts

A new ‘view-only’ role has been added to VCF 4.1. For some context, let’s talk a bit now about what happens when logging into the SDDC Manager. 

First, you will provide a username and password. This information gets sent to the SDDC Manager, which then sends it to the SSO domain for verification. Once verified, the SDDC Manager can see what role the account has privileges for.

In previous versions of Cloud Foundation, the role would either be for an Administrator or it would be for an Operator. 

Now, there is a third role available called a ‘Viewer’. As its name suggests, this is a view-only role with no ability to create, delete, or modify objects. Users assigned this role may not see certain items in the SDDC Manager UI, such as the User screen. They may also see a message saying they are unauthorized to perform certain actions.

Also new, VCF now has a local account that can be used during an SSO failure. To help understand why this is needed, consider: What happens when the SSO domain is unavailable for some reason? In this case, the user would not be able to log in. To address this, administrators can now configure a VCF local account called admin@local. This account allows certain actions to be performed until the SSO domain is functional again. This VCF local account is defined in the deployment worksheet and used in the VCF bring-up process. If bring-up has already been completed and the local account was not configured, a warning banner is displayed on the SDDC Manager UI until the local account is configured.

Lastly, SDDC Manager now uses new service accounts to streamline communications between SDDC Manager and the products within Cloud Foundation. These service accounts follow VVD guidelines for pre-defined usernames and are administered through the admin user account, improving inter-component communications within the VCF stack.

VCF Data Protection Enhancements

As described in this blog, with VCF 4.1, SDDC Manager backup-recovery workflows and APIs have been improved with capabilities such as backup management, backup scheduling, retention policies, on-demand backups, and automatic retries on failure. The improvements also include public APIs for the third-party ecosystem and certified backup solutions from Dell PowerProtect.
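
For a flavor of what the improved backup APIs enable, here is a hedged sketch of setting a backup schedule with a retention policy through SDDC Manager. The endpoint path and field names reflect our reading of the VCF 4.1 public API and should be treated as illustrative rather than authoritative.

```python
import requests

SDDC = "https://sddc-manager.example.local"   # address is a placeholder
HEADERS = {"Authorization": "Bearer <api-token>"}

# Schedule a weekly SDDC Manager backup with a simple retention policy.
# Endpoint path and field names are illustrative, not authoritative.
backup_config = {
    "backupSchedules": [{
        "resourceType": "SDDC_MANAGER",
        "frequency": "WEEKLY",
        "daysOfWeek": ["SUNDAY"],
        "hourOfDay": 2,
        "minuteOfHour": 0,
    }],
    "retentionPolicy": {"numberOfMostRecentBackups": 10},
}
resp = requests.patch(f"{SDDC}/v1/system/backup-configuration",
                      json=backup_config, headers=HEADERS, verify=False)
print(resp.status_code)
```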

VxRail Software Feature Updates

VxRail Networking Enhancements

VxRail 4 x 25Gbps pNIC redundancy

VxRail engineering continues to innovate in areas that drive more value to customers, and the latest VCF on VxRail release follows through on delivering just that. New in this release, customers can use the automated VxRail First Run process to deploy VCF on VxRail nodes using 4 x 25Gbps physical port configurations to run the VxRail System vDS for system traffic such as Management, vSAN, and vMotion. The physical port configuration of the VxRail nodes consists of 2 x 25Gbps NDC ports plus 2 x 25Gbps PCIe NIC ports.

In this 4 x 25Gbps setup, NSX-T traffic runs on the same System vDS. But what is great here (and where the flexibility comes in) is that customers can also choose to separate NSX-T traffic onto its own NSX-T vDS that uplinks to separate physical PCIe NIC ports by using SDDC Manager APIs. This ability was first introduced in the last release and can also be leveraged here to expand the flexibility of VxRail host network configurations.

The figure below illustrates the option to select the base 4 x 25Gbps port configuration during VxRail First Run.

Figure 7

By allowing the VxRail System vDS to run across both the NDC NIC ports and the PCIe NIC ports, customers gain an extra layer of physical NIC redundancy and high availability. This was already supported with 10Gbps-based VxRail nodes; this release brings the same high availability option to 25Gbps-based VxRail nodes. Extra network high availability AND 25Gbps performance!? Sign me up!

VxRail Hardware Platform Updates

Recently introduced support for the ruggedized D-Series VxRail hardware platforms (D560/D560F) continues to expand the range of VxRail hardware platforms supported in the Dell Technologies Cloud Platform.

These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage and, more importantly, for the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas.

These D-Series systems are a perfect match for the VCF Remote Cluster features introduced in Cloud Foundation 4.1.0. Together, they enable Cloud Foundation with Tanzu on VxRail to reach space-constrained and challenging ROBO/edge sites to run cloud native and traditional workloads, extending existing VCF on VxRail operations to these locations! Cool, right?!

To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.

Well that about covers it all for this release. The innovation train continues. Until next time, feel free to check out the links below to learn more about DTCP (VCF on VxRail).

 

Jason Marques

Twitter - @vwhippersnapper

 

Additional Resources

VMware Blog Post on VCF Remote Clusters

Cloud Foundation on VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos


Read Full Blog
  • HCI
  • vSphere
  • VxRail
  • security
  • life cycle management
  • SaaS

Building on VxRail HCI System Software: the advantages of multi-cluster active management capabilities

Daniel Chiu Daniel Chiu

Tue, 29 Sep 2020 19:03:05 -0000

|

Read Time: 0 minutes

The signs of autumn are all around us, from the total takeover of pumpkin-spiced everything to fall foliage worthy of Bob Ross’s brush. Autumn brings plenty of change, and so does the latest release of VxRail ACE, or should I preface that with ‘formerly known as’? I’ll get to that explanation shortly.

This release introduces multi-cluster update functionality that will further streamline the lifecycle management (LCM) of your VxRail clusters at scale. With this active management feature comes a new licensing structure and role-based access control to enable the active management of your clusters.

Formerly known as VxRail ACE

The colors of the leaves are changing and so is the VxRail ACE name. The brand name VxRail ACE (Analytical Consulting Engine), will no longer be used as of this release. While it had a catchy name and was easy to say, there are two reasons for this change. First, Analytical Consulting Engine no longer describes the full value or how we intend to expand the features in the future. It has grown beyond the analytics and monitoring capabilities of what was originally introduced in VxRail ACE a year ago and now includes several valuable LCM operations that greatly expand its scope. Secondly, VxRail ACE has always been part of the VxRail HCI System Software offering. Describing the functionality as part of the overall value of VxRail HCI System Software, instead of having its own name, simplifies the message of VxRail’s value differentiation.

Going forward, the capability set (that is, analytics, monitoring, and LCM operations) will be referred to as SaaS multi-cluster management, a more accurate description. The web portal is now referred to as MyVxRail.

Cluster updates

The cluster update capability is the first active management feature offered by SaaS multi-cluster management. It builds on the existing LCM operational tools for planning cluster updates: on-demand pre-update health checks (LCM pre-check) and update bundle download and staging. Now you can initiate updates of your VxRail clusters at scale from MyVxRail. The benefits of cluster updates on MyVxRail tie closely to existing LCM operations. During the planning phase, you can run LCM pre-checks on the clusters you want to update. This tells you whether a cluster is ready for an update and pinpoints areas for remediation on clusters that are not ready. From there, you can schedule your maintenance window and, from MyVxRail, initiate the download and staging of the VxRail update bundle onto those clusters. With this release, you can now execute cluster updates for those clusters. Now that’s operational efficiency!

When setting up a cluster update operation, you have the benefit of two pieces of information: a time estimate for the update and the change data. The update time estimate helps you determine the length of the maintenance window; it is generated from telemetry gathered across the install base to provide more accurate information. The change data is the list of components that require an update to reach the target VxRail version.

Figure 1  MyVxRail Updates tab

Role-based access control

Active management requires role-based access control so that you can grant permissions to perform configuration changes on your VxRail clusters only to the appropriate individuals. You don’t want just anyone with access to MyVxRail performing cluster updates. SaaS multi-cluster management leverages vCenter access control for role-based access. From MyVxRail, you can register MyVxRail with the vCenter Servers that manage your VxRail clusters. The registration process adds VxRail privileges to vCenter Server so you can build roles with specific SaaS multi-cluster management capabilities.

MyVxRail registers the following privileges on vCenter:

  • Download software bundle: downloads and stages the VxRail software bundle onto the cluster
  • Execute health check: performs an on-demand pre-update health check on the cluster
  • Execute cluster update: initiates the cluster update operation on the clusters
  • Manage update credentials: modifies the VxRail infrastructure credentials used for active management
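
For administrators who prefer to script their audits, here is a small pyVmomi sketch that lists the privileges registered on a vCenter Server and filters for VxRail-related entries. The vCenter address is a placeholder, and the assumption that MyVxRail's privileges carry "VxRail" in their privilege group name is ours, for illustration.

```python
import ssl
from pyVim.connect import SmartConnect

# vCenter address and credentials are placeholders.
ctx = ssl._create_unverified_context()   # lab-only: skip TLS verification
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)

auth = si.RetrieveContent().authorizationManager

# After MyVxRail registration, its privileges appear alongside the
# built-in vCenter privileges and can be composed into custom roles.
for priv in auth.privilegeList:
    if "vxrail" in priv.privGroupName.lower():   # naming is an assumption
        print(priv.privId, "-", priv.name)
```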

Figure 2  VxRail privileges for vCenter access control

VxRail Infrastructure Credentials

We’ve done more to make it easier to perform cluster updates at scale. Typically, when you perform a single cluster update, you have to enter the root account credentials for vCenter Server, Platform Services Controller, and VxRail Manager. The process is the same when performing it from VxRail Manager. But it can get tedious when you have multiple clusters to update.

VxRail Infrastructure Credentials can store those credentials so you can enter them once, at the initial setup of active management, and not have to do it again as you perform a multi-cluster update. MyVxRail can read the stored credentials that are saved on each individual cluster for security.

Big time saver! But how secure is it? More secure than hiding Halloween candy from children. For a user to perform a cluster update, the administrator needs to add the ‘execute cluster update’ privilege to the role assigned to that user. Root credentials can only be managed by users assigned a role that has the ‘manage update credentials’ privilege.

Figure 3  MyVxRail credentials manager

Licensing

The last topic is licensing. While all the capabilities you have been using on MyVxRail come with the purchase of the VxRail HCI System Software license, multi-cluster update is different. This feature requires a fee-based add-on software license called ‘SaaS active multi-cluster management for VxRail HCI System Software’. All VxRail nodes come with VxRail HCI System Software, which gives you access to MyVxRail and the SaaS multi-cluster management features, except for cluster update. To perform an update of a cluster from MyVxRail, all nodes in the cluster must have the add-on software license.

Conclusion

That is a lot to consume for one release. Hopefully, unlike your Thanksgiving meal, you can stay awake for the ending. While the brand name VxRail ACE is no more, we’re continuing to deliver value-adding capabilities. Multi-cluster update is a great feature to further your use of MyVxRail for LCM of your VxRail clusters. With role-based access and VxRail infrastructure credentials, rest assured you’re benefitting from multi-cluster update without sacrificing security.

To learn more about these features, check out the VxRail techbook and the interactive demo for SaaS multi-cluster management.

 

Daniel Chiu, VxRail Technical Marketing

LinkedIn

Read Full Blog
  • HCI
  • VMware
  • vSphere
  • VxRail
  • OpenManage

Exploring the customer experience with lifecycle management for vSAN Ready Nodes and VxRail clusters

Cliff Cahill Cliff Cahill

Thu, 24 Sep 2020 19:41:49 -0000

|

Read Time: 0 minutes

The difference between VMware vSphere LCM (vLCM) and Dell EMC VxRail LCM is still a trending topic that most HCI customers and prospects want more information about. While we compared the two methods at a high level in our previous blog post, let’s dive into the more technical aspects of the LCM operations of VMware vLCM and VxRail LCM. The detailed explanation in this blog post should give you a more complete understanding of your role as an administrator for cluster lifecycle management with vLCM versus VxRail LCM.

Even though vLCM has introduced a vast improvement in automating cluster updates, lifecycle management is more than executing cluster updates. With vLCM, lifecycle management is still very much a customer-driven endeavor. By contrast, VxRail’s overarching goal for LCM is operational simplicity, leveraging Continuously Validated States to drive cluster LCM for the customer. This is a large part of why VxRail has gained over 8,600 customers since launching in early 2016.

In this blog post, I’ll explain the four major areas of LCM:

  • Defining the initial baseline configuration
  • Planning for a cluster update
  • Executing the cluster update
  • Sustaining cluster integrity over the long term

Defining the initial baseline configuration

The baseline configuration is a vital part of establishing a steady state for the life of your cluster. The baseline configuration is the current known good state of your HCI stack. In this configuration, all the component software and firmware versions are compatible with one another. Interoperability testing has validated full stack integrity for application performance and availability while also meeting security standards in place. This is the ‘happy’ state for you and your cluster. Any changes to the configuration use this baseline to know what needs to be rectified to return to the ‘happy’ state.

How is it done with vLCM?

vLCM depends on the hardware vendor to provide a Hardware Management Services virtual machine. Dell provides this support for its Dell EMC PowerEdge servers, including vSAN ReadyNodes. I’ll use this implementation to explain the overall process. Dell EMC vSAN ReadyNodes use the OpenManage Integration for VMware vCenter (OMIVV) plugin to connect to and register with the vCenter Server.

Once the VM is deployed and registered, you need to create a credential-based profile. This profile captures two accounts: one for the out-of-band hardware interface, the iDRAC, and the other for the root credentials for the ESXi host. Future changes to the passwords require updating the profile accordingly.

With the VM connection and profile in place, vLCM uses a Catalog XML file to define the initial baseline configuration. To create the Catalog XML file, you need to install and configure the Dell Repository Manager (DRM) to build the hardware profile. Once a profile is defined to your specification, it must be exported and stored on an NFS or CIFS share. The profile is then used to populate the Repository Profile data in the OMIVV UI. If you are unsure of your configuration, refer to the vSAN Hardware Compatibility List (HCL) for the specific supported firmware versions. Once the hardware profile is created, you can associate it with the cluster profile. With the cluster profile defined, you can enable drift detection. Any future change to the Catalog XML file is done within the DRM.

It’s important to note that vLCM was introduced in vSphere 7.0. To use vLCM, you must first update or deploy your clusters to run vSphere 7.x.

How is it done with VxRail LCM?

With VxRail, when the cluster arrives at the customer data center, it’s already running in a ‘happy’ state. For VxRail, the ‘happy’ state is called Continuously Validated States. The term is pluralized because VxRail defines all the ‘happy’ states that your cluster will update to over time. This means that your cluster is always running in a ‘happy’ state without you needing to research, define, and test to arrive at Continuously Validated States throughout the life of your cluster. VxRail (well, specifically the VxRail engineering team) does it for you. This has been a central tenet of VxRail since the product first launched with vSphere 6.0. Since then, it has helped customers transition to vSphere 6.5, 6.7, and now 7.0.

Once VxRail cluster initialization is complete, use your Dell EMC Support credentials to configure the VxRail repository setting within vCenter. The VxRail Manager plugin for vCenter then automatically connects to the VxRail repository at Dell EMC and pulls down the next available update package.

Figure 1  Defining the initial baseline configuration

Planning for a cluster update

Updates are a constant in IT: VMware is continually adding new capabilities and product/security fixes that require updating to newer software versions. Take, for example, the vSphere 7.0 Update 1 release that VMware and Dell Technologies just announced; those eye-opening features become available when you update to that release. You can check out just how often VMware has historically updated vSphere here: https://kb.vmware.com/s/article/2143832.

As you know, planning for a cluster update is an iterative process with inherent risk associated with it. Failing to plan diligently can cause adverse effects on your cluster, ranging from network outages and node failure to data unavailability or data loss. That said, it’s important to mitigate the risk where you can.

How is it done with vLCM?

With vLCM, the responsibility of planning for a cluster update rests on the customer’s shoulders, including the risk. Understanding the bill of materials that makes up your server’s hardware profile is paramount to success. Once all the components are known and a target version of vSphere ESXi is specified, the supported driver and firmware versions need to be investigated and documented. You must consult the VMware Compatibility Guide to find out which drivers and firmware are supported for each ESXi release.

It is important to note that although vLCM gives you the toolset to apply firmware and driver updates, it does not validate compatibility or support for each combination for you, except for the HBA Driver. This task is firmly in the customer’s domain. It is advisable to validate and test the combination in a separate test environment to ensure that no performance regression or issues are introduced into the production environment. Interoperability testing can be an extensive and expensive undertaking. Customers should create and define robust testing processes to ensure that full interoperability and compatibility is met for all components managed and upgraded by vLCM.

With Dell EMC vSAN Ready Nodes, customers can rest assured that the HCL certification and compatibility validation steps have been performed. However, the customer is still responsible for interoperability testing.

How is it done with VxRail LCM?

VxRail engineering has taken a unique approach to LCM. Rather than leaving the time-consuming LCM planning to already overburdened IT departments, they have drastically reduced the risk by investing over $60 million, more than 25,000 hours of testing for major releases, and more than 100 team members into a comprehensive regression test plan. This plan is completed prior to every VxRail code release. (This is in addition to the testing and validation performed by PowerEdge, on which VxRail nodes are built.)

Dell EMC VxRail engineering performs this testing within 30 days of any new VMware release (even quicker for express patches), so that customers can continually benefit from the latest VMware software innovations and confidently address security vulnerabilities. You may have heard this called “synchronous release”.

The outcome of this effort is a single update bundle that is used to update the entire HCI stack, including the operating system, the hardware’s drivers and firmware, and management components such as VxRail Manager and vCenter. This allows VxRail to define the declarative configuration we mentioned previously (“Continuously Validated States”), allowing us to move easily from one validated state to the next with each update.

 

Figure 2  Planning for a cluster update

Executing the cluster update

The biggest improvement with vLCM is its ability to orchestrate and automate a full stack HCI cluster update. This simplifies the update operation and brings enormous time savings. This process is showcased in a recent study performed by Principled Technologies with PowerEdge Servers with vSphere (not including vSAN).

How is it done with vLCM?

The first step is to import the ESXi ISO via the vLCM tab in the vCenter Server UI. Once it is uploaded, select the relevant cluster and ensure that the cluster profile (created in the initial baseline configuration phase) is associated with the cluster being updated. Now you can apply the target configuration by editing the ESXi image and, from the OMIVV UI, choosing the correct firmware and driver to apply to the hardware profile. Once a compliance scan is complete, you will have the option to remediate all hosts.

If there are multiple homogeneous clusters to update, it can be as easy as executing the cluster update against the same cluster profile. However, if the next cluster has a different hardware configuration, you have to perform the above steps all over again. Customers with varying hardware and software requirements for their clusters will have to repeat many of these steps, including the planning tasks, to ensure stack integrity.

How is it done with VxRail LCM?

With VxRail and Continuously Validated States, updating from one configuration to another is even simpler. You can access the VxRail Manager directly within the vCenter Server UI to initiate the update. The LCM operation automatically retrieves the update bundle from the VxRail repository, runs a full stack pre-update health check, and performs the cluster update.

With VxRail, performing multi-cluster updates is as simple as performing a single-cluster update; the same LCM cluster update workflow is followed. While different hardware configurations on separate clusters add more labor for IT staff with vSAN Ready Nodes, this doesn’t apply to VxRail. In fact, in the latest release of our SaaS multi-cluster management capability set, customers can now easily perform cluster updates at scale from our cloud-based management platform, MyVxRail.

Figure 3  Executing a cluster update

Sustaining cluster integrity over the long term

The long-term integrity of a cluster outlasts the software and hardware in it. As mentioned earlier, because new releases come frequently, software has a very short life span. While hardware has more staying power, it won’t outlast some of the applications running on it. New hardware platforms will emerge. New hardware devices will enter the market and enable new workloads, such as machine learning, graphics rendering, and visualization workflows. You will need the cluster to evolve non-disruptively to deliver the application performance, availability, and diversity your end users require.

How is it done with vLCM?

In its current form, vLCM will struggle with long-term cluster lifecycle management. In particular, its inability to support heterogeneous nodes (nodes with different hardware configurations) in the same cluster will limit application diversification and the ability to take advantage of new hardware platforms without impacting end users.

How is it done with VxRail LCM?

VxRail LCM touts its ability to allow customers to grow non-disruptively and to scale their clusters over time. That includes adding non-identical nodes into the clusters for new applications, adding new hardware devices for new applications or more capacity, or retiring old hardware from the cluster.

Conclusion 

Figure 4  Comparing vSphere LCM and VxRail LCM cluster update operations driven by the customer

The VMware vLCM approach empowers customers who are looking for more configuration flexibility and control. They have the option to select their own hardware components and firmware to build the cluster profile. With this freedom comes the responsibility to define the HCI stack and make investments in equipment and personnel to ensure stack integrity. vLCM supports this customer-driven approach with improvements in cluster update execution for faster outcomes.

Dell EMC VxRail LCM continues to take a more comprehensive approach, optimizing operational efficiency from the point of view of the customer. VxRail customers value its LCM capabilities because they reduce operational time and effort that can be diverted to other areas of need in IT. VxRail takes on the responsibility of driving stack integrity for the lifecycle management of the cluster with Continuously Validated States. And VxRail sustains stack integrity throughout the life of the cluster, allowing you to simply and predictably evolve with technology trends.

Cliff Cahill
VxRail Engineering Technologist
Twitter @cliffcahill
LinkedIn http://linkedin.com/in/cliffcahill





Read Full Blog
  • VMware
  • VxRail
  • VMware Cloud Foundation

The Latest VxRail Platform Innovation is Now Included in Your Cloud

Jason Marques Jason Marques

Tue, 25 Aug 2020 10:33:16 -0000

|

Read Time: 0 minutes

The Dell Technologies Cloud Platform, VCF on VxRail, now supports the latest VxRail HCI System Software release, featuring a new and improved first run experience, host geo-location tagging capabilities, hardware platform updates, and enhanced security features.

Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1.1 on VxRail 7.0.010.

This release brings support for the latest version of VxRail to the Dell Technologies Cloud Platform. Let’s review what these new features are all about. 


Updated VxRail Software Bill of Materials 

Please check out the VCF on VxRail release notes for a full listing of the supported software BOM associated with this release. You can find the link at the bottom of the page.



VxRail Hardware Platform Updates 

VxRail 7.0.010 brings new support for the ruggedized D-Series VxRail hardware platforms (D560/D560F). These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage and, more importantly, for the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas. To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.


Also, this release reintroduces GPU support that was not in the initial VCF 4.0 on VxRail 7.0 release.



New and Improved VxRail First Run Experience  

This release introduces a new Day 1 VxRail cluster first run workflow along with UI enhancements. The new Day 1 first run deployment wizard comprises 13 steps, or top-level tasks. This Day 1 workflow update was required to support new VxRail HCI System Software enhancements.


The new UI provides improved flexibility for configuration data entry during deployment. These options include allowing unique hostnames for each ESXi host without forcing a naming convention, allowing non-sequential IP addresses for hosts in the cluster, and supporting a geographical location ID tag (for example, Rack Name or Rack Position). It provides a cleaner interface with a consistent look and feel for information, warnings, and errors. There is improved validation, providing better feedback when errors are encountered or validation checks fail. And finally, the options to manually enter all the configuration parameters or to upload a pre-defined configuration via a YAML or JSON file are still available too! The figure below illustrates the new first run steps and UI.


 

Figure 1

 

New VxRail API to Automate Day 1 VxRail First Run Cluster Creation 

This feature allows for fast and consistent VxRail cluster deployments using the programmatic extensibility of a REST API. It gives administrators an alternative to the VxRail Manager first run UI for creating VxRail clusters.
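
As a rough illustration of the new Day 1 API, the sketch below posts a trimmed-down first run specification to VxRail Manager. The endpoint path, payload keys, and addresses are our assumptions based on the VxRail REST API's conventions; a real deployment spec carries many more sections.

```python
import requests

VXM = "https://vxrail-manager.example.local"   # address is a placeholder

# A trimmed-down first run specification; a real spec carries many more
# sections (accounts, networks, vSAN, and so on). Keys are illustrative.
day1_spec = {
    "version": "1",
    "vcenter": {"datacenter_name": "dc-01", "cluster_name": "vxrail-01"},
    "hosts": [
        {"hostname": "esx-01",
         "geo_location": {"rack_name": "R1", "order_number": 1}},
        # ...up to six nodes can now be supplied at first run
    ],
}
resp = requests.post(f"{VXM}/rest/vxm/v1/system/initialize",
                     json=day1_spec,
                     auth=("administrator@vsphere.local", "<password>"),
                     verify=False)  # lab-only: skip TLS verification
print(resp.status_code)
```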



Day 1 Support to Initially Deploy Up to Six Nodes in a VxRail Cluster During VxRail First Run 

Previously, the maximum number of nodes that could be deployed in the VxRail first run was four. Administrators who needed larger VxRail cluster sizes had to create the cluster with four nodes and, once it was in place, perform node expansions to reach the desired cluster size. This new feature reduces the time needed to create larger VxRail clusters by allowing a larger starting point of six VxRail nodes.



VxRail Host Geo-Location Tagging 

This is probably one of the coolest and most underrated features in the release, in my opinion. VxRail Manager now supports geographic location tags for VxRail hosts. This capability provides important admin-defined host metadata that can help customers gain greater visibility into the physical location of the HCI infrastructure that makes up their cloud. This information is configured under “Host Settings” during VxRail first run, as illustrated in the figure below.



Figure 2

 

As shown, the two values that make up the geo-location tags are Rack Name and Rack Position. These values are stored in the iDRAC of each VxRail host. You may be asking yourself, “Great! I can add metadata for my VxRail hosts, but what can I do with it?” Together, these values help a cloud administrator identify a VxRail host’s position within a given rack in the data center. Cloud administrators can then leverage this data to choose the host order displayed in the VxRail Manager vCenter plugin Physical View. The figure below illustrates what this looks like.



Figure 3

 

As data center environments grow, VxRail host expansion operations can be used to add infrastructure capacity. The automated “Add VxRail Hosts” expansion workflow has been updated to include a new Host Location step, which allows administrators to add geo-location Rack Name and Rack Position metadata for the new hosts being added to an existing VxRail cluster. The figure below shows what a host expansion operation looks like.



Figure 4

 

In this fast-paced world of digital transformation, it is not uncommon for cloud data center infrastructure to be moved within a data center after it has been installed, whether due to physical rack expansion design changes or infrastructure repurposing. These situations were also considered in the design of VxRail geo-location tags: there is an option to dynamically edit an existing host’s geo-location information. When this is done, VxRail Manager automatically updates the host’s iDRAC with the new values. The figure below shows what the host edit looks like.
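
A hedged sketch of what such an edit could look like through the VxRail REST API is below. The endpoint path and payload keys are assumptions for illustration; the UI shown in the figure is the documented way to make this change.

```python
import requests

VXM = "https://vxrail-manager.example.local"   # address is a placeholder

# Update a host's rack name and rack position; VxRail Manager then pushes
# the new values down to the host's iDRAC. Path and keys are illustrative.
patch = {"geo_location": {"rack_name": "R2", "order_number": 7}}
resp = requests.patch(f"{VXM}/rest/vxm/v1/hosts/<host-serial-number>",
                      json=patch,
                      auth=("administrator@vsphere.local", "<password>"),
                      verify=False)  # lab-only: skip TLS verification
print(resp.status_code)
```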



Figure 5

 

All these geo-location management capabilities provide VCF on VxRail administrators with full-stack physical-to-virtual infrastructure mapping that helps further extend the Cloud Foundation management experience and simplify operations! And this capability is only available with the Dell Technologies Cloud Platform (VCF on VxRail)! How cool is that?!



VxRail Security Enhancements 


Added Security Compliance With The Addition of FIPS 140-2 Level 1 Validated Cryptography For VxRail Manager 

Cloud Foundation on VxRail offers intrinsic security built into every layer of the solution stack, from hardware silicon to storage to compute to networking to governance controls. This helps customers make security a built-in part of the platform for traditional workloads as well as container-based cloud native workloads, rather than something that is bolted on after the fact.

 

Building on the intrinsic security capabilities of the platform are the following new features: 


VxRail Manager is now FIPS 140-2 compliant, offering built-in intrinsic encryption and meeting the high security standards required by the US Department of Defense.


From VxRail 7.0.010 onward, VxRail has ‘FIPS inside’! This entails built-in features such as:

  • VxRail Manager Data-in-Transit (e.g., HTTPS interfaces, SSH) 
  • VxRail Manager's SLES12 FIPS usage 
  • VxRail Manager - encryption used for password caching 


Disable VxRail LCM operations from vCenter 

To limit administrator configuration errors that could result from performing VxRail LCM operations from within vCenter rather than through SDDC Manager, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager Plugin Updates screen out of the box. This requires administrators to use SDDC Manager for all LCM operations, which guarantees that the full stack of hardware and software used has been qualified and validated for their environment. The figure below illustrates what this looks like.



Figure 6

 


Disable VxRail Host Rename/Re-IP operations in vCenter 


Continuing with the idea of limiting administrator configuration errors, this feature prevents administrators from performing VxRail Host Edit operations from within vCenter that are not supported in VCF. To maintain a consistent operating experience, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager Plugin Hosts screen out of the box. The figure below illustrates what this looks like.



Figure 7

 

Now those are some intrinsic security features! 

 

Well that about covers all the new features! Thanks for taking the time to learn more about this latest release. As always, check out some of the links at the bottom of this page to access additional VCF on VxRail resources. 


Jason Marques 

Twitter -@vwhippersnapper 



Additional Resources 



 

Read Full Blog
  • Intel
  • HCI
  • VxRail
  • Optane

VxRail & Intel Optane for Extreme Performance

KJ Bedard KJ Bedard

Tue, 02 Mar 2021 17:47:20 -0000

|

Read Time: 0 minutes

Enabling high performance for HCI workloads is exactly what happens when VxRail is configured with Intel Optane Persistent Memory (PMem). Optane PMem provides compute and storage performance to better serve applications and business-critical workloads. So, what is Intel Optane Persistent Memory? Persistent memory is memory that can also be used as storage, providing RAM-like performance, very low latency, and high bandwidth. It’s great for applications that require or consume large amounts of memory, like SAP HANA, and it has many other use cases, as shown in Figure 1. VxRail is certified for SAP HANA as well as for Intel Optane PMem.

Moreover, PMem can be used as block storage where data can be written persistently; a great example is DBMS log files. A key advantage of this technology is that you can start small with a single PMem card (or module), then scale and grow as needed, with the ability to add up to 12 cards. Customers can take advantage of PMem immediately because there’s no need to make major hardware or configuration changes, nor budget for a large capital expenditure.

There are a wide variety of use cases today including those you see here:

Figure 1: Intel Optane PMem Use Cases


PMem offers two very different operating modes, Memory and App Direct, and in turn App Direct can be used in two very different ways.

First, Intel Optane PMem in Memory mode is not yet supported by VxRail. This mode acts as volatile system memory and provides significantly lower cost per GB than traditional DRAM DIMMs. A follow-on update to this blog will describe this mode and test results in much more detail once it is supported.

As for App Direct mode (supported today), PMem is consumed by virtual machines either as a block storage device, known as vPMemDisk, or as byte-addressable memory, known as Virtual NVDIMM. Both provide great benefit to the applications running in a virtual machine, just in very different ways. vPMemDisk can be used with any virtual machine hardware version and any guest OS; since it’s presented as a block device, it is treated like any other virtual disk, and applications and/or data can be placed on it. The second consumption method, Virtual NVDIMM, has the advantage of being addressed in the same way as regular RAM while retaining its data through reboots or power failures. This is a considerable plus for large in-memory databases like SAP HANA, where cache warm-up, or the time to load tables into memory, can be significant!

However, it’s important to note that, like any other memory module, the PMem module does not provide data redundancy. This may not be an issue for some data files on commonly used applications that can be re-created in case of a host failure. But a key principle when using PMem, either as block storage or byte addressable memory is that the applications are responsible for handling data replication to provide durability. 

New data redundancy options are expected on applications that are using PMem and should be well understood before deployment.

First, we’ll look at test results using PMem as a virtual disk (vPMemDisk). Our engineering team tested VxRail with PMem in App Direct mode and ran comparison tests against a VxRail all-flash system (P570F series platform). The testing simulated a typical 4K OLTP workload with a 70/30 read/write ratio. Our results achieved more than 1.8M IOPS, or 6X more than the all-flash VxRail system. That equates to 93% faster response times (lower latency) and 6X greater throughput, as shown here:





Figure 2: VxRail PMem App Direct versus VxRail all-flash


This latency difference indicates the potential to improve the performance of legacy applications by placing specific data files on a PMem module, for example, placing log files on PMem. To verify the benefit of this log acceleration use case, we ran a TPC-C benchmark comparing a VxRail configured with the log file on a vPMemDisk to a VxRail all-flash vSAN, and we saw a 46% improvement in the number of transactions per minute.


Figure 3: Log file acceleration use case



For the second consumption method, we tested PMem in App Direct mode using Virtual NVDIMM. We performed tests using 1, 2, 4, 8, and then 12 PMem modules. All testing has been evaluated and validated by ESG (Enterprise Strategy Group); the certified white paper is highlighted in the resources section.


Figure 4: NVDIMM device testing (vSAN not-optimized versus optimized PMem NVDIMM)


The results prove linear scalability as we increase the number of modules from 1 to 12. With 12 PMem modules, VxRail achieves 80 times more IOPS than when running against vSAN not optimized (meaning VxRail all-flash vSAN with no PMem involved), and 100X for the 4K random write (RW) workload. The right half of the graphic depicts throughput results for very large, 64KB IO. When PMem is optimized on 12 modules, we saw 28X higher throughput for the 64KB random read (RR) workload, and PMem is 13 times faster for the 64KB RW workload.

What you see here is amazing performance on a single VxRail host and almost linear scalability when adding PMem!! Yes, that warrants a double bang. If you were to max out a 64-node cluster, the potential scalability is phenomenal and game changing! 

So, what does all this mean? Key takeaways are:  


  • The local performance of VxRail with Intel Optane PMem can scale to more than 12M read IOPS and more than 4M write IOPS, or 70GB/s read throughput and 22GB/s write throughput, on a single host with 12 x 128GB PMem modules.
  • The use of PMem modules doesn’t affect regular activity on vSAN datastores and extends the value of your VxRail platform in many ways:
    • It can be used to accelerate legacy applications, such as RDBMS log acceleration
    • It enables the deployment of in-memory databases and applications that can benefit from the higher IO throughput provided by PMem while still benefiting from vSAN characteristics in the VxRail platform
    • It not only increases the performance of traditional HCI workloads such as VDI, but also supports performance-intensive transactional and analytics workloads
    • It offers orders-of-magnitude faster performance than traditional storage
    • It provides more memory for less cost, as PMem is much less costly than DRAM


The validation testing referenced here was completed by ESG (Enterprise Strategy Group). White papers and other resources on VxRail for extreme performance are available via the links listed below.


Additional Resources


ESG Validation: Dell EMC VxRail and Intel Optane Persistent Memory

VxRail and Intel Optane for Extreme Performance – Engineering presentation

High Performance for HCI Workloads with Dell EMC VxRail & Intel Optane Persistent Memory - infographic

Dell EMC & Intel Optane Persistent Memory - video

ESG Validation: Dell EMC VxRail with Intel Xeon Scalable Processors and Intel Optane SSDs




By: KJ Bedard – VxRail Technical Marketing Engineer

LinkedIn: https://www.linkedin.com/in/kj-bedard-50b25315/

Twitter: @KJbedard

Read Full Blog
  • VMware
  • vSphere
  • VxRail
  • networking
  • life cycle management

Adding to the VxRail summer party with the release of VxRail 7.0.010

Daniel Chiu Daniel Chiu

Mon, 17 Aug 2020 18:31:32 -0000

|

Read Time: 0 minutes

After releasing multiple VxRail 4.7 software versions in the early summer, the VxRail 7.0 software train has now joined the party. Like any considerate guest, VxRail 7.0.010 does not come empty-handed. This new software release brings new waves of cluster deployment flexibility so you can run a wider range of application workloads on VxRail, as well as new lifecycle management enhancements that let you sit back and enjoy the party during your next cluster update.

The following capabilities expand the workload possibilities that can run on VxRail clusters:

  • More network flexibility with support for customer-supplied virtual distributed switch (vDS) – Oftentimes, customers with a large deployment of VxRail clusters prefer to standardize their vDS so they can reuse the same configuration on multiple clusters. Standardization simplifies cluster deployment operations and vDS management and reduces errors. This is sure to be a hit for our party guests with grand plans to expand their VxRail footprint.
  • Network redundancy – Support for customer-supplied vDS also enables network card-level redundancy and link aggregation. Now you can create a NIC teaming policy that can tolerate a network card failure for VxRail system traffic; for example, the policy could include a port on the NDC and another port on the PCIe network card, so if one network card becomes unavailable, traffic still runs through the remaining card. With link aggregation, you can increase network bandwidth by utilizing multiple ports in an active/active connection, selecting the load balancing option when configuring the NIC teaming policy (see the sketch after this list).

Network card level redundancy with active/active network connections

 

  • FIPS 140-2 Level 1 validated cryptography – Industry sectors such as the federal sector require this level of security for any applications that access sensitive data. The VxRail software now meets this standard by using validated cryptographic libraries, encrypting data in transit, and protecting stored keys and credentials. Combined with existing vSAN encryption, which already meets this standard for data at rest, VxRail clusters can fit even more environments in industry sectors with higher security standards. The guest list for this party is only getting bigger.
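
Here is the sketch promised above: a minimal pyVmomi example that sets an active/active teaming policy on a distributed portgroup whose active uplinks span two physical NICs. The vCenter address, portgroup name, and uplink names are placeholders, and a customer-supplied vDS deployment would normally define this policy as part of the standardized switch configuration.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# vCenter address, portgroup name, and uplink names are placeholders.
ctx = ssl._create_unverified_context()   # lab-only: skip TLS verification
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed portgroup carrying VxRail system traffic.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "vxrail-mgmt-pg")
view.Destroy()

# Active/active teaming whose uplinks span the NDC and the PCIe NIC.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.policy = vim.StringPolicy(value="loadbalance_loadbased")
teaming.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
teaming.uplinkPortOrder.activeUplinkPort = ["uplink1", "uplink3"]  # one port per card

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.uplinkTeamingPolicy = teaming

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
spec.defaultPortConfig = port_config
pg.ReconfigureDVPortgroup_Task(spec=spec)
```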

Along with these features that increase the market opportunity for VxRail clusters, VxRail 7.0.010 also brings lifecycle management enhancements to the party. VxRail has strengthened its LCM pre-upgrade health check to cover more ecosystem components in the VxRail stack. Already providing checks against the HCI hardware and software, VxRail is extending them to ancillary components such as the vCenter Server, the Secure Remote Services gateway, RecoverPoint for VMs software, and the witness host used for 2-node and stretched clusters. The LCM pre-upgrade health check performs a version compatibility check against these components before upgrading the VxRail cluster. With a stronger LCM pre-upgrade health check, you’ll have more time for summer fun.

VxRail 7.0.010 is here to keep the VxRail summer party going. These new capabilities will help our customers accelerate innovation by providing an HCI platform that delivers the infrastructure flexibility their applications require, while giving administrators the operational freedom and simplicity to update their clusters fearlessly.

Interested in learning more about VxRail 7.0.010? You can find more details in the release notes.

Daniel Chiu, VxRail Technical Marketing

LinkedIn

Read Full Blog
  • VMware
  • VxRail
  • Kubernetes
  • VMware Cloud Foundation
  • DTCP

Announcing VMware Cloud Foundation 4.0.1 on Dell EMC VxRail 7.0

Jason Marques Jason Marques

Wed, 03 Aug 2022 15:21:13 -0000

|

Read Time: 0 minutes

The latest Dell Technologies Cloud Platform release introduces new support for vSphere with Kubernetes for entry cloud deployments, and more.

Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1 on VxRail 7.0.


This release offers several enhancements, including vSphere with Kubernetes support for entry cloud deployments, enhanced bring-up features for more extensibility and accelerated deployments, increased network configuration options, and more efficient LCM capabilities for NSX-T components. Below is the full listing of features in this release:


  • Kubernetes in the management domain: vSphere with Kubernetes is now supported in the management domain. With VMware Cloud Foundation Workload Management, you can deploy vSphere with Kubernetes on the management domain default cluster starting with only four VxRail nodes. This means that DTCP entry cloud deployments can take advantage of running Kubernetes containerized workloads alongside general purpose VM workloads on a common infrastructure! 
  • Multi-pNIC/multi-vDS during VCF bring-up: The Cloud Builder deployment parameter workbook now provides five vSphere Distributed Switch (vDS) profiles that allow you to perform bring-up of hosts with two, four, or six physical NICs (pNICs) and to create up to two vSphere Distributed Switches for isolating system (Management, vMotion, vSAN) traffic from overlay (Host, Edge, and Uplinks) traffic.
  • Multi-pNIC/multi-vDS API support: The VCF API now supports configuring a second vSphere Distributed Switch (vDS) using up to four physical NICs (pNICs), providing more flexibility to support high performance use cases and physical traffic separation. 
  • NSX-T cluster-level upgrade support: Users can upgrade specific host clusters within a workload domain so that the upgrade can fit into their maintenance windows bringing about more efficient upgrades. 
  • Cloud Builder API support for bring-up operations: VCF on VxRail deployment workflows have been enhanced to support a new Cloud Builder API for bring-up operations. VCF software installation on VxRail during bring-up can now be done using either an API or the GUI, providing even more platform extensibility (see the sketch after this list).
  • Automated externalization of the vCenter Server for the management domain: Externalizing the vCenter Server that gets created during the VxRail first run (the one used for the management domain) is now automated as part of the bring-up process. This enhanced integration between the VCF Cloud Builder bring-up automation workflow and VxRail API helps to further accelerate installation times for VCF on VxRail deployments.
  • BOM Updates: Updated VCF software Bill of Materials with new product versions. 
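
Here is the sketch promised in the Cloud Builder API bullet above: validating and then starting bring-up by posting the JSON equivalent of the deployment parameter workbook to Cloud Builder. The endpoint paths follow our reading of the Cloud Builder public API, and the file name, address, and credentials are placeholders.

```python
import requests

CB = "https://cloud-builder.example.local"   # address is a placeholder
AUTH = ("admin", "<password>")
HEADERS = {"Content-Type": "application/json"}

# The SDDC spec is the JSON equivalent of the deployment parameter
# workbook; the file name here is illustrative.
with open("vcf-on-vxrail-sddc-spec.json") as f:
    sddc_spec = f.read()

# Validate the specification first...
v = requests.post(f"{CB}/v1/sddcs/validations", data=sddc_spec,
                  headers=HEADERS, auth=AUTH, verify=False)
print("validation:", v.status_code)

# ...then start bring-up once validation succeeds.
b = requests.post(f"{CB}/v1/sddcs", data=sddc_spec,
                  headers=HEADERS, auth=AUTH, verify=False)
print("bring-up:", b.status_code)
```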


Jason Marques 

Twitter -@vwhippersnapper 

 

Additional Resources 


Read Full Blog
  • VxRail
  • AMD

2nd Gen AMD EPYC now available to power your favorite hyperconverged platform ;) VxRail

David Glynn David Glynn

Mon, 17 Aug 2020 18:31:32 -0000

|

Read Time: 0 minutes

Expanding the range of VxRail choices to include 64-cores of 2nd Gen AMD EPYC compute

Last month, Dell EMC expanded our very popular E Series (the E is for Everything) with the introduction of the E665/F/N, our very first VxRail with an AMD processor, and what a processor it is! The 2nd Gen AMD EPYC processor came to market with a lot of industry-leading capabilities:

  • Up to 64 cores in a single processor, with 8, 12, 16, 24, 32, or 48 core offerings also available
  • Eight memory channels that are not only more numerous but also faster, at 3200 MT/s; the 2nd Gen EPYC can also address much more memory per processor
  • 7nm transistors; smaller transistors mean more powerful and more energy-efficient processors
  • Up to 128 lanes of PCIe Gen 4.0, with 2X the bandwidth of PCIe Gen 3.0

These industry-leading capabilities enable the VxRail E665 series to deliver dual-socket performance in a single-socket model, providing up to 90% greater general-purpose CPU capacity than other VxRail models configured with single-socket processors.

So, what is the sweet spot or ideal use case for the E665? As always, it depends. Unlike the D Series (our D for Durable Series), also launched last month, which has clear rugged use cases, the E665 and the rest of the E Series very much live up to their “Everything” name and perform admirably in a variety of use cases.

While the 2nd Gen EPYC 64-core processors grab the headlines, there are multiple AMD processor options, including the 16-core AMD 7F52 at 3.50GHz with a max boost of 3.9GHz for applications that benefit from raw clock speed or where application licensing is core-based. On the topic of licensing, I would be remiss if I didn’t mention VMware’s update to its per-CPU pricing earlier this year, which results in processors with more than 32 cores requiring a second VMware per-CPU license. This may make a 32-core processor an attractive option from an overall capacity and performance versus hardware and licensing cost perspective.

Speaking of overall costs, the E665 has dual 10Gb RJ45/SFP+ or dual 25Gb SFP28 base networking options, which can be further expanded with PCIe NICs, including a dual 100Gb SFP28 option. From a cost perspective, the price delta between 10Gb and 25Gb networking is minimal. This is worth considering, particularly for greenfield sites, and even for brownfield sites where the networking may be upgraded in the near future. Last year, we began offering Fibre Channel cards on VxRail, and they are also available on the E665. While FC connectivity may sound strange for a hyperconverged infrastructure platform, it makes sense for many of our customers who have existing SAN infrastructure, or applications (PowerMax for extremely large databases requiring SRDF) or storage needs (Isilon as a large file repository for medical files) that are better suited to SAN. While we’d prefer these SANs to be Dell EMC products, as long as it is on the VMware SAN HCL, it can be connected. Providing this option enables customers to get the best both worlds have to offer.

The options don’t stop there. While the majority of VxRail nodes are sold in all-flash configurations, there are customers whose needs are met with hybrid configs or who are looking toward all-NVMe options. The E665 can be configured with as little as 960GB of raw storage capacity, up to maximums of 14TB hybrid, 46TB all-flash, or 32TB all-NVMe. Memory options consist of 4, 8, or 16 RDIMMs of 16GB, 32GB, or 64GB each. Maximum memory performance, 3200 MT/s, is achieved with one DIMM per memory channel; adding a second matching DIMM per channel reduces bandwidth slightly, to 2933 MT/s.
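
As a back-of-the-envelope check on those memory numbers, the theoretical peak bandwidth of a fully populated socket follows from the standard 64-bit (8-byte) DDR4 channel width; the small sketch below runs the arithmetic.

```python
# Theoretical peak DRAM bandwidth for one EPYC socket, assuming the
# standard 64-bit (8-byte) DDR4 channel width.
channels = 8
bytes_per_transfer = 8

peak_3200 = channels * 3200e6 * bytes_per_transfer   # one DIMM per channel
peak_2933 = channels * 2933e6 * bytes_per_transfer   # two DIMMs per channel

print(f"{peak_3200 / 1e9:.1f} GB/s")   # 204.8 GB/s
print(f"{peak_2933 / 1e9:.1f} GB/s")   # ~187.7 GB/s
```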

VxRail and Dell Technologies very much recognize that the needs of our customers vary greatly. A product with a single set of options cannot meet all our customers’ different needs. Today, VxRail offers six different series, each with a different focus:

  • Everything E Series, a power-packed 1U of choice
  • Performance-focused P Series, with dual or quad socket options
  • VDI-focused V Series, with a choice of five different NVIDIA GPUs
  • Durable D Series, MIL-STD 810G certified for extreme heat, sand, dust, and vibration
  • Storage-dense S Series, with 96TB of hybrid storage capacity
  • General purpose and compute-dense G Series, with 228 cores in a 2U form factor

With these highly flexible configuration choices, there is a VxRail for almost every use case, and if there isn’t, there is more than likely something in the broad Dell Technologies portfolio that is.

     

Author: David Glynn, Sr. Principal Engineer, VxRail Tech Marketing

Resources:
VxRail Spec Sheet
E665 Product Brief
E665 One Pager
D560 3D product landing page
D Series video
D Series spec sheet
D Series Product Brief

Read Full Blog
  • VxRail
  • EHR
  • MEDITECH
  • healthcare

Healthcare Homerun for VxRail – MEDITECH Certified

Vic Dery Vic Dery

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

At Dell Technologies, we are excited and proud to announce the VxRail HCI (hyperconverged infrastructure) certification for MEDITECH. Dell Technologies is #1 in the hyperconverged systems segment, a position held for 12 consecutive quarters1. VxRail is the only fully integrated, pre-configured, and tested hyperconverged infrastructure that simplifies and extends VMware environments. This solution helps simplify MEDITECH environments running on VMware VMs, improving performance and scalability by bringing together and optimizing multiple workloads.

With this Dell Technologies certified solution that leverages VxRail, MEDITECH environments are easier to use and have a lower risk of failure, while continuing to provide a fiscally responsible approach.

Dell EMC and MEDITECH worked closely with an approved integrator* during the certification of VxRail running the MEDITECH test harness. Testing consisted of a VxRail cluster supporting all VMs required for the MEDITECH application while providing infrastructure redundancy. MBF (MEDITECH Backup Facilitator) backups are accomplished with Dell EMC NetWorker NMMEDI in conjunction with RecoverPoint for VMs, a combination that has been tested and is backed by best-in-class implementation and a continuous focus on positive customer experience.

1 IDC WW Quarterly Converged Systems Tracker, Vendor Revenue (US$M) Q1 2020, June 18, 2020

 

Conclusion:

Dell Technologies makes IT transformation real for MEDITECH environments with a data-first approach. As a leading provider of healthcare IT infrastructure, we are uniquely positioned to offer a full breadth of solutions for MEDITECH environments. In fact, more than 60% of MEDITECH’s customers deploy a Dell Technologies solution2. For these reasons, we are excited and proud to add this certification, which supports MEDITECH Expanse, 6.x, Client/Server, and MAGIC environments, to our Dell Technologies healthcare portfolio.

*Special thanks to Teknicor for providing their best practices, assistance and lab space for this certification process.

2 HIMSS Analytics, May 2019.

 

Resources:

Dell Healthcare page

Read Solutions for MEDITECH Environments 

 

Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing

@VxVicTX 

Linkedin

Read Full Blog
  • SQL Server
  • big data
  • Hadoop
  • VxRail
  • Microsoft
  • Big Data Cluster

Big Solutions on Dell EMC VxRail with SQL 2019 Big Data Cluster

Vic Dery Vic Dery

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

The amount of data, and the variety of formats, that organizations must manage, ingest, and analyze has been the driving force behind Microsoft SQL Server 2019 Big Data Clusters (BDC). SQL Server 2019 BDC enables the deployment of scalable clusters of SQL Server, Spark, and containerized HDFS (Hadoop Distributed File System) running on Kubernetes.

We recently deployed and tested SQL Server 2019 BDC on Dell EMC VxRail hyperconverged infrastructure to demonstrate how VxRail delivers the performance, scalability, and flexibility needed to bring these multiple workloads together.    

The Dell EMC VxRail platform was selected for its ability to incorporate compute, storage, virtualization, and management in one platform offering. The key feature of the VxRail HCI is the integration of vSphere, vSAN, and VxRail HCI System Software for an efficient and reliable deployment and operations experience. The use of VxRail with SQL Server 2019 BDC makes it easy to unite relational data with big data.  

The testing demonstrates the advantages of using VxRail with SQL Server 2019 BDC for analytic application development. This also demonstrates how Docker, Kubernetes, and the vSphere Container Storage Interface (CSI) driver accelerate the application development life cycle when they are used with VxRail. The lab environment for development and testing used four VxRail E560F nodes supported by the vSphere CSI driver. With this solution, developers can provision SQL Server BDC in containerized environments without the complexities of traditional methods for installing databases and provisioning storage.

Our white paper, Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail shows the power of implementing SQL Server 2019 BDC technologies on VxRail. Integrating SQL Server 2019 RDBMS, SQL Server BDC, MongoDB, and Oracle RDBMS helps to create a unified data analytics application. Using VxRail enhances the ability of SQL Server 2019 to scale out storage and compute clusters while embracing the virtualization techniques from VMware. This SQL Server 2019 BDC solution also benefits from the simplicity of a complete yet flexible validated Dell EMC VxRail with Kubernetes management and storage integration.

The solution demonstrates the combined value of the following technologies: 

  • VxRail E560F – All-flash performance
  • Large tables stored on a scaled-out HDFS storage cluster that is hosted by BDC 
  • Smaller related data tables that are hosted on SQL Server, MongoDB, and Oracle databases 
  • Distributed queries that are enabled by the PolyBase capability in SQL Server 2019 to process Transact-SQL queries that access external data in SQL Server, Oracle, Teradata, and MongoDB. 
  • Red Hat Enterprise Linux

 

Big Data Cluster Services



This diagram shows how the pools are built. It highlights the benefits of Kubernetes features for container orchestration at scale, including:

  • Autoscaling, replication, and recovery of containers 
  • Intracontainer communication, such as IP sharing 
  • A single entity—a pod—for creating and managing multiple containers 
  • A container resource usage and performance analysis agent, cAdvisor 
  • Network pluggable architecture 
  • Load balancing 
  • Health check service

The white paper, Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail, also addresses big data storage, the tools for handling big data, and the details around testing with TPC-H. When we tested data virtualization with PolyBase, the queries ran without error and returned results that joined all four data sources.

Because data virtualization does not involve physically copying or moving the data, the data remains available to business users in real time, and BDC simplifies and centralizes access to, and analysis of, the organization’s data sphere. It enables IT to manage the solution by consolidating big data and data virtualization on one platform with a proven set of tools.
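To make the developer experience concrete, here is a minimal sketch, using Python’s pyodbc, of the kind of Transact-SQL query PolyBase enables: a join between a local SQL Server table and an external table mapped to another data source. The connection string, credentials, and table names are all illustrative, not taken from the white paper’s test setup:

```python
import pyodbc

# Connect to the SQL Server master instance endpoint of the BDC
# (host, port, and credentials are illustrative only)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=bdc-master.example.local,31433;"
    "DATABASE=Sales;UID=sqluser;PWD=example-password"
)
cursor = conn.cursor()

# dbo.Orders is a local SQL Server table; ext.Customers is a PolyBase
# external table mapped to, say, an Oracle database. PolyBase fetches the
# external rows and joins the results transparently.
cursor.execute("""
    SELECT o.OrderID, o.OrderDate, c.CustomerName
    FROM dbo.Orders AS o
    JOIN ext.Customers AS c ON c.CustomerID = o.CustomerID
""")
for row in cursor.fetchall():
    print(row)
```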

Success starts with the right foundation:

SQL Server 2019 BDC is a compelling new way to utilize SQL Server to bring high-value relational data and high-volume big data together on a unified, scalable data platform. All of this can be deployed with VxRail, enabling enterprises to experience the power of PolyBase to virtualize their data stores, create data lakes, and create scalable data marts in a unified, secure environment without needing to implement slow and costly Extract, Transform, and Load (ETL) pipelines. This makes data-driven applications and analysis more responsive and productive. SQL Server 2019 BDC and Dell EMC VxRail provide a complete unified data platform to deliver intelligent applications that can help make any organization more successful.

Read the full paper to learn more about how Dell EMC VxRail with SQL 2019 Big Data Clusters can:

  • Bring high-value relational data and high-volume big data together on a single, scalable platform.
  • Incorporate intelligent features and get insights from more of your data—including data stored beyond SQL Server in Hadoop, Oracle, Teradata, and MongoDB.
  • Support and enhance your database management and data-driven apps with advanced analytics using Hadoop and Spark. 

 

Additional VxRail & SQL resources:

Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail 

Microsoft SQL Server on VMware Cloud Foundation on Dell EMC VxRail

SQL on VxRail Solution brief

Key Benefits of Running Microsoft SQL Server on Dell EMC hyperconverged infrastructure (HCI) - Whitepaper

Key benefits of running Microsoft SQL Server on Dell EMC Hyperconverged Infrastructure (HCI) - Infographic

Architecting Microsoft SQL Server on VMware vSphere

 

Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing

@VxVicTX

Read Full Blog
  • VMware
  • PowerMax
  • VxRail
  • VMware Cloud Foundation
  • SRDF

Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications

Jason Marques Jason Marques

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

Reference Architecture Validation Whitepaper Now Available!

Many of us here at Dell Technologies regularly have conversations with customers about what we refer to as the “Power of the Portfolio.” What does this mean exactly? It is a reference to the fact that Dell Technologies has a robust and broad portfolio of modern IT infrastructure products and solutions across storage, networking, compute, virtualization, data protection, security, and more! At first glance, it can seem overwhelming; some even consider it complex to sort through. We, on the other hand, see it as an advantage: it allows us to solve the vast majority of our customers’ technical needs and support them as a strategic technology partner. 

It is one thing to have the quality and quantity of products and tools to get the job done -- it’s another to leverage this portfolio of products to deliver on what customers want most: business outcomes.

As Dell Technologies continues to innovate, we are making the best use of the technologies we have and developing ways to use them together seamlessly in order to deliver better business outcomes for our customers. The conversations we have are not about this product OR that product; instead, they are about bringing together this set of products AND that set of products to deliver a SOLUTION, giving our customers the best of everything Dell Technologies has to offer without compromise and with reduced risk.


Figure 1: Cloud Foundation on VxRail Platform Components


The Dell Technologies Cloud Platform is an example of one of these solutions. And there is no better example of how to take advantage of the “Power of the Portfolio” than the one in a newly published reference architecture white paper, Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications, which validates the use of the Dell EMC PowerMax system with SRDF/Metro in a Dell Technologies Cloud Platform (VMware Cloud Foundation on Dell EMC VxRail) multi-site stretched-cluster deployment configuration. This configuration provides the highest levels of application availability for customers who are running mission-critical workloads in their Cloud Foundation on VxRail private cloud, which would otherwise not be possible with core DTCP alone.

Let’s briefly review some of the components used in the reference architecture and how they were configured and tested. 


Using external storage with VCF on VxRail

Customers commonly ask whether they can use external storage in Cloud Foundation on VxRail deployments. The answer is yes! This helps customers ease into the transition to a software-defined architecture from an operational perspective. It also helps customers leverage the investments in their existing infrastructure for the many different workloads that might still require external storage services.

External storage and Cloud Foundation have two important use cases: principal storage and supplemental storage. 

  • Principal storage - SDDC Manager provisions a workload domain that uses vSAN, NFS, or Fibre Channel (FC) storage for a workload domain cluster’s principal storage (the initial shared storage that is used to create a cluster). By default, VCF uses vSAN storage as the principal storage for a cluster. The option to use NFS and FC-connected external storage is also available. This option enables administrators to create a workload domain cluster whose principal storage can be a previously provisioned NFS datastore or an FC-based VMFS datastore instead of vSAN. External storage as principal storage is only supported on VI workload domains, as vSAN is the required principal storage for the management domain in VCF.
  • Supplemental storage - This involves mounting previously provisioned external NFS, iSCSI, vVols, or FC storage to a Cloud Foundation workload domain cluster that is using vSAN as the principal storage. Supporting external storage for these workload domain clusters is comparable to the experience of administrators using standard vSphere clusters who want to attach secondary datastores to those clusters. 

At the time of writing, Cloud Foundation on VxRail supports supplemental storage use cases only. This is how external storage was used in the reference architecture solution configuration.


PowerMax Family

The Dell EMC PowerMax is the first Dell EMC hardware platform that uses an end-to-end Non-Volatile Memory Express (NVMe) architecture for customer data. NVMe is a set of standards that define a PCI Express (PCIe) interface used to efficiently access data storage volumes based on Non-Volatile Memory (NVM) media, which includes modern NAND-based flash along with higher-performing Storage Class Memory (SCM) media technologies. The NVMe-based PowerMax array fully unlocks the bandwidth, IOPS, and latency performance benefits that NVM media and multi-core CPUs offer to host-based applications—benefits that are unattainable using the previous generation of all-flash storage arrays. For a more detailed technical overview of the PowerMax family, please check out the whitepaper Dell EMC PowerMax: Family Overview.

The following figure shows the PowerMax 2000 and PowerMax 8000 models.


Figure 2: PowerMax product family

SRDF/Metro

The Symmetrix Remote Data Facility (SRDF) maintains real-time (or near real-time) copies of data on a PowerMax production storage array at one or more remote PowerMax storage arrays. SRDF has three primary applications: 

  • Disaster recovery
  • High availability
  • Data migration

In the case of this reference architecture, SRDF/Metro was used to provide enhanced levels of high availability across two availability zone sites. For a complete technical overview of SRDF, please check out this great SRDF whitepaper: Dell EMC SRDF.


Solution Architecture

Now that we are familiar with the components used in the solution, let’s discuss the details of the solution architecture that was used. 

This overall solution design provides enhanced levels of flexibility and availability that extend the core capabilities of the VCF on VxRail cloud platform. The VCF on VxRail solution natively supports a stretched-cluster configuration for the management domain and a VI workload domain between two availability zones by using vSAN stretched clusters. A PowerMax SRDF/Metro with vSphere Metro Storage Cluster (vMSC) configuration is added to protect VI workload domain workloads by providing supplementary storage for the workloads running on them.

Two types of vMSC configurations are verified with stretched Cloud Foundation on VxRail: uniform and non-uniform.

  • Uniform host access configuration - vSphere hosts from both sites are all connected to a storage node in the storage cluster across all sites. Paths presented to vSphere hosts are stretched across a distance.
  • Non-uniform host access configuration - vSphere hosts at each site are connected only to storage nodes at the same site. Paths presented to vSphere hosts from storage nodes are limited to the local site.

The following figure shows the topology used in the reference architecture of the Cloud Foundation uniform stretched-cluster configuration with PowerMax SRDF/Metro.

Figure 3: Cloud Foundation on VxRail uniform stretched-cluster config with PowerMax SRDF/Metro 


The following figure shows the topology used in the reference architecture of the Cloud Foundation on VxRail non-uniform stretched cluster configuration with PowerMax SRDF/Metro.

 Figure 4: Cloud Foundation on VxRail non-uniform stretched-cluster config with PowerMax SRDF/Metro 


Solution Validation Testing Methodology

We completed solution validation testing across the following major categories for both iSCSI and FC connected devices:

  • Functional Verification Tests - This testing addresses the basic operations that are performed when PowerMax is used as supplementary storage with VMware VCF on VxRail.
  • High Availability Tests - HA testing helps validate the capability of the solution to avoid a single point of failure, from the hardware component port level up to the IDC site level.
  • Reliability Tests - In general, reliability testing validates whether the components and the whole system are reliable enough with a certain level of stress running on them.

For complete details on all of the individual validation test scenarios that were performed, and the pass/fail results, check out the whitepaper.


Summary

To summarize, this white paper describes how Dell EMC engineers integrated VMware Cloud Foundation on VxRail with PowerMax SRDF/Metro and provides the design configuration steps that they took to automatically provision PowerMax storage by using the PowerMax vRO plug-in. The paper validates that the Cloud Foundation on VxRail solution functions as expected in both a PowerMax uniform vMSC configuration and a non-uniform vMSC configuration by passing all the designed test cases. This reference architecture validation demonstrates the power of the Dell Technologies portfolio to provide customers with modern cloud infrastructure technologies that deliver the highest levels of application availability for business-critical and mission-critical applications running in their private clouds.

Find the link to the white paper below along with other VCF on VxRail resources and see how you can leverage the “Power of the Portfolio” to support your business!

Jason Marques

Twitter - @vwhippersnapper


Additional Resources

Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications Reference Architecture Validation Whitepaper

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos

Read Full Blog
  • VMware
  • VxRail
  • VMware Cloud Foundation
  • DTCP

Announcing General Availability of VCF 3.10.0.1 on VxRail 4.7.511

Karol Boguniewicz Karol Boguniewicz

Mon, 21 Sep 2020 14:08:45 -0000

|

Read Time: 0 minutes

Improved automated lifecycle management and new hardware options

Today (7/2), Dell Technologies is announcing General Availability of VMware Cloud Foundation 3.10.0.1 on VxRail 4.7.511. 


Why are we releasing 3.10.0.1?

Because VMware notified us about an important upcoming patch for Cloud Foundation 3.10, and we wanted to incorporate it into the GA version on VxRail for the best experience for our customers.


What’s New?

This new release introduces VCF enhancements and VxRail enhancements.


VMware Cloud Foundation 3.10.0.1 enhancements:

  • ESXi Cluster-Level and Parallel Upgrades - Enables customers to update the ESXi software on multiple clusters in the management domain or in a workload domain in parallel. Parallel upgrades reduce the overall time required to upgrade the VCF environment. 

Figure 1. ESXi Cluster-Level and Parallel Upgrades


  • NSX-T Data Center Cluster-Level and Parallel Upgrades - Enables customers to upgrade all edge clusters in parallel, and then all host clusters in parallel. Again, parallel upgrades reduce the overall time required to upgrade the VCF environment. Customers can also select specific clusters to upgrade, using multiple upgrade windows, so there is no requirement for all clusters to be available at a given time.

Figure 2. NSX-T Cluster-Level and Parallel Upgrades



  • Skip-Level Upgrades - Enables customers to upgrade to VMware Cloud Foundation on Dell EMC VxRail 3.10 from versions 3.7 and later.  Note: in the case of VCF on VxRail, this must be performed by Dell EMC Customer Support at this time – customer-enabled skip-level upgrades will be supported when the feature is available in the GUI. Customers with active support contracts should open a Service Request with Dell EMC Customer Support to schedule the skip-level upgrade activity.


  • Option to disable Application Virtual Networks (AVNs) during bring-up - AVNs deploy vRealize Suite components on NSX overlay networks, and we recommend using them during bring-up. Customers can now disable this feature, for instance, if they are not planning to use vRealize Suite components.

  • Support for multiple NSX-T Transport Zones - Some customers require this option due to their architecture/security standards, for even better separation of the network traffic. It’s now available as a Day 2 configuration option that can be enabled by customers or VMware Professional Services.
  • BOM Updates - Updated Bill of Materials with new product versions. For an updated BOM, please consult the release notes.


VxRail 4.7.511 enhancements:

  • VCF on VxRail login using RSA SecurID two-factor authentication - Allows customers to implement more secure, two-factor authentication for VCF on VxRail using the RSA SecurID solution.


  • Support for new hardware options - Please check this blog post and the press release for more details on VxRail 4.7.510 platform features:
  • Intel Optane Persistent Memory 
  • VxRail D560 / D560F – ruggedized VxRail nodes
  • VxRail E665/F/N – AMD-based VxRail nodes


Summary

VMware Cloud Foundation 3.10.0.1 on VxRail 4.7.511 provides several features that allow existing customers to upgrade their platform more efficiently than ever before. The updated LCM capabilities offer not only more efficiency (with parallelism), but also more flexibility in handling maintenance windows. With skip-level upgrade, available in this version as a professional service, it’s also possible to get to this latest release much faster. This increases security and allows customers to get the most benefit from their existing investments in the platform. New customers will benefit from the broader spectrum of hardware options, including ruggedized (D-series) and AMD-based nodes.


Additional resources:

Blog post about VCF 4.0 on VxRail 7.0: The Dell Technologies Cloud Platform – Smaller in Size, Big on Features

Press release: Dell Technologies Brings IT Infrastructure and Cloud Capabilities to Edge Environments

Blog post about new features in VxRail 4.7.510: VxRail brings key features with the release of 4.7.510

VCF on VxRail technical whitepaper

VMware Cloud Foundation 3.10 on Dell EMC VxRail Release Notes from VMware

Blog post about VCF 3.10 from VMware: Introducing VMware Cloud Foundation 3.10


Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing

Twitter: @cl0udguide 

Read Full Blog
  • Intel
  • HCI
  • VxRail
  • security
  • Optane
  • life cycle management

VxRail brings key features with the release of 4.7.510

KJ Bedard KJ Bedard

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

VxRail recently released a new version of our software, 4.7.510, which brings key feature functionality and product offerings.

At a high level, this release further solidifies VxRail’s commitment to releasing synchronously with vSphere within 30 days or less. The VxRail 4.7.510 release integrates and aligns with VMware by including the vSphere 6.7U3 patch release.  More importantly, vSphere 6.7U3 provides the underlying support for Intel Optane persistent memory (PMem), also offered in this release.

Intel Optane persistent memory is a non-volatile storage medium with RAM-like performance characteristics. Intel Optane PMem in a hyperconverged VxRail environment accelerates IT transformation with faster analytics (think in-memory DBMS) and cloud services.

Intel Optane PMem (in App Direct mode) provides added memory options for the E560/F/N and P570/F and is supported on version 4.7.410. Additionally, PMem will be supported on the P580N beginning with version 4.7.510 on July 14.

This technology is ideal for many use cases, including in-memory databases and block storage devices, and it’s flexible and scalable, allowing you to start small with a single PMem module (card) and scale as needed. Other use cases include real-time analytics and transaction processing, journaling, massively parallel query functions, checkpoint acceleration, recovery time reduction, paging reduction, and overall application performance improvements.

New functionality enables customers to schedule and run "on demand” health checks in advance of, and independent of, an LCM upgrade. Not only does this give customers the flexibility to proactively troubleshoot issues, but it ensures that clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade schedules, as they can rest assured that clusters will seamlessly upgrade within a specified window. Of course, running health checks on a regular basis provides peace of mind in knowing that your clusters are always ready for unscheduled patches and security updates.
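For teams that want to fold these checks into their own tooling, cluster health can also be read over the VxRail REST API. The sketch below is a minimal illustration, assuming the v1 system endpoint and its health field as published in VxRail API references; verify both against the API guide for your release:

```python
import requests

VXM = "https://vxm.example.local"  # VxRail Manager address (illustrative)

# GET /rest/vxm/v1/system returns cluster-level information, including
# a health summary, in published VxRail API versions (field names may
# vary by release -- check your API reference).
resp = requests.get(
    f"{VXM}/rest/vxm/v1/system",
    auth=("administrator@vsphere.local", "example-password"),
    verify=False,  # lab only; use proper certificate verification in production
)
resp.raise_for_status()
info = resp.json()
print("VxRail version:", info.get("version"))
print("Cluster health:", info.get("health"))
```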

Finally, the VxRail 4.7.510 release introduces optimized security functionality with two-factor authentication (2FA) with SecurID for VxRail. 2FA allows users to log in to VxRail via the vCenter plugin when vCenter is configured for RSA 2FA. Prior to this version, the user was required to enter a username and password. The RSA Authentication Manager automatically verifies multiple prerequisites and system components to identify and authenticate users. This new functionality saves time by alleviating the username/password entry process for VxRail access. Two-factor authentication methods are often required by government agencies and large enterprises. VxRail has already incorporated enhanced security offerings including security hardening, VxRail ACLs and RBAC, KMIP-compliant key management, secure logging, and DARE, and now, with the release of 4.7.510, the inclusion of 2FA further distinguishes VxRail as a market leader.

Please check out these resources for more VxRail 4.7.510 information:

VxRail Spec sheet

VxRail Technical FAQ

VxRail 4.7.510 release notes

By:  KJ Bedard - VxRail Technical Marketing Engineer

LinkedIn: https://www.linkedin.com/in/kj-bedard-50b25315/

Twitter: @KJbedard

Read Full Blog
  • HCI
  • VxRail

Protecting VxRail from Power Disturbances

Karol Boguniewicz Karol Boguniewicz

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

Preserving data integrity in case of unplanned power events

The challenge

Over the last few years, VxRail has evolved significantly -- becoming an ideal platform for most use cases and applications, spanning the core data center, edge locations, and the cloud. With its simplicity, scalability, and flexibility, it’s a great foundation for customers’ digital transformation initiatives, as well as high value and more demanding workloads, such as SAP HANA.

Running more business-critical workloads requires following best practices regarding data protection and availability. Dell Technologies specializes in data protection solutions and offers a portfolio of products that can fulfill even the most demanding RPO/RTO requirements from our customers. However, we are probably not giving enough attention to the other area related to this topic: protection against power disturbances and outages. Uninterruptible Power Supply (UPS) systems are at the heart of a data center’s electrical systems, and because VxRail is running critical workloads, it is a best practice to leverage a UPS to protect them and to ensure data integrity in case of unplanned power events. I want to highlight a solution from one of our partners – Eaton.

The solution

Eaton is an Advantage member of the Dell EMC Technology Connect Partner Program and the first UPS vendor to integrate their solution with VxRail. Eaton’s solution is a great example of how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers. By integrating Eaton’s Intelligent Power Manager (IPM) software with VxRail APIs and leveraging Eaton’s Gigabit Network Card, the solution can run on the same protected VxRail cluster. This removes the need for additional external compute infrastructure to host the power management software - just a compatible Eaton UPS is required.

The solution consists of:

  • VxRail version min. 4.5.300, 4.7.x, 7.0.x and above
  • Eaton IPM SW v 1.67.243 or above
  • Eaton UPS – 5P, 5PX, 9PX, 9SX, 93PM, 93E, 9PXM
  • Eaton M2 Network Card FW v 1.7.5
  • IPM Gold License Perpetual

The main benefits are:

  • Preserving data integrity and business continuity by enabling automated and graceful shutdown of VxRail clusters that are experiencing unplanned extended power events
  • Reducing the need for onsite IT staff with simple set-up and remote management of power infrastructure using familiar VMware tools
  • Safeguarding the VxRail system from power anomalies and environmental threats

How does it work?

It’s quite simple (see the figure below). What’s interesting and unique is that the IPM software, which is running on the cluster, delegates the final shutdown of the system VMs and cluster to the card in the UPS device, and the card uses VxRail APIs to execute the cluster shutdown.

Figure 1. Eaton UPS and VxRail integration explained
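To show what that API-driven shutdown looks like in practice, here is a minimal sketch of calling the VxRail cluster shutdown endpoint with a dry run—the same kind of call the card makes. The endpoint path and dryrun parameter follow published VxRail API documentation, but treat them as assumptions and confirm against the API reference for your VxRail version:

```python
import requests

VXM = "https://vxm.example.local"  # VxRail Manager address (illustrative)

# A dry run validates that the cluster can be shut down gracefully
# without actually powering anything off.
resp = requests.post(
    f"{VXM}/rest/vxm/v1/cluster/shutdown",
    json={"dryrun": True},
    auth=("administrator@vsphere.local", "example-password"),
    verify=False,  # lab only; use proper certificate verification in production
)
resp.raise_for_status()
print("Shutdown dry run accepted, request ID:", resp.json().get("request_id"))
```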

Summary

Protection against unplanned power events should be a part of a business continuity strategy for all customers who run their critical workloads on VxRail. This ensures data integrity by enabling automated and graceful shutdown of VxRail cluster(s). Eaton’s solution is a great example of providing such protection and how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers.


Additional resources:

Eaton website: Eaton ensures connectivity and protects Dell EMC VxRail from power disturbances

Brochure: Eaton delivers advanced power management for Dell EMC VxRail systems

Blog post: Take VxRail automation to the next level by leveraging APIs

 

Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing

Twitter: @cl0udguide 

 

Read Full Blog
  • VxRail
  • Epic
  • EHR
  • healthcare

VxRail extends flexibility in Healthcare with new EHR best practices

Vic Dery Vic Dery

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

The healthcare industry is under pressure not only to deliver as health providers, but also to make the infrastructure that operates the healthcare system secure, scalable, and simple to use. This allows healthcare providers to focus on patients.  VxRail has had a great deal of success in the healthcare vertical because its core values align so closely with those demanded by the industry. With early successes in VDI (Virtual Desktop Infrastructure), healthcare IT departments expanded VxRail to more business-critical and even life-critical IT use cases, because it proved that it is highly scalable and simple to use, and has security built into everything it does.

“Best Practices for VMware vSAN with Epic on Dell EMC VxRail,” created in collaboration with our peers at VMware, highlights the considerations around a small to medium size environment, specifically for Epic. It uses a six-node VxRail configuration to provide predictable and consistent performance, as well as Life Cycle Management (LCM) for the VxRail. The VxRail node used in this best practice is an E560N – an all-NVMe solution.  Balancing workload and budget requirements, the dual-socket E560N provides a cost-effective, space-efficient 1U platform. Available with up to 32TB of NVMe capacity, the E560N is the first all-NVMe 1U VxRail platform. The all-NVMe capability provides higher performance at low queue depths, making it much easier to reliably deliver very high real-world performance for a SQL Server database management system (DBMS). Running multiple healthcare applications, including EHR, while maintaining the secure, scalable, and simplified use of VxRail is possible, enabling healthcare IT departments to scale and expand infrastructure to meet the ever-growing demands of health providers and the healthcare industry.

VxRail has had a great deal of success in the healthcare vertical because its core values align so closely with those demanded by the industry:

  • Secure - Security is a core part of VxRail design. It starts with the supply chain and the components used to build the system, continues into the features and software designed into it, and evolves with every lifecycle management update to the VxRail HCI (hyperconverged infrastructure) system software. The most recently added feature supports two-factor authentication (2FA) to provide an additional layer of security. VxRail has FIPS 140-2 validated encryption based on the vSAN architecture. A detailed whitepaper covering VxRail security features and certifications is available here.
  • Scalable - What makes VxRail an effective solution for healthcare is its ability to scale, not just for EHR applications, but for any application sharing the solution. With VxRail, you can size based on the known or expected workloads for the initial deployment and provide a solution that meets that workload requirement. This allows healthcare IT to buy for the requirements of today, not the estimated requirements of three to five years down the road, because VxRail will scale easily into the future. Scaling VxRail is easy: need more compute, just add another node; need more storage space, just add additional capacity drives.
  • Simplicity - Why is simplicity important, not just for healthcare solutions, but for all workloads? It is about IT teams being able to focus on their business and spend less effort continuously maintaining environments. VxRail simplifies operations with software-driven automation and lifecycle management. VxRail is continuously tested and updated as a solution, from the BIOS and firmware to the HCI and VMware software.

VxRail is flexible enough to support hospital systems alongside other applications for business and even education. A great example of this flexibility can be seen in this Mercy Ships case study. The new best practices for Epic EHR, combined with VxRail’s proven successes with VDI in the healthcare vertical, are a testament to VxRail’s versatility.

 

Additional Resources:

Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing

@VxVicTX

Best Practices for VMware vSAN with Epic on Dell EMC VxRail - Here

Dell EMC VxRail Comprehensive Security Design - Here

See more solutions from Dell for healthcare and life sciences - Here  

Customer profile Mercy Ships - Here

Read Full Blog
  • Intel
  • VMware
  • VxRail
  • vSAN
  • Optane

Top benefits to using Intel Optane NVMe for cache drives in VxRail

David Glynn David Glynn

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

Performance, endurance, and all without a price jump!

There is a saying that “a picture paints a thousand words,” but let me add that “a graph can make for an awesome picture.”

Last August we at VxRail worked with ESG on a technical validation paper that included, among other things, the recent addition of Intel Optane NVMe drives for the vSAN caching layer. Figure 3 in this paper is a graph showing the results of a throughput benchmark workload (more on benchmarks later). When I do customer briefings and the question of vSAN caching performance comes up, this is my go-to whiteboard sketch because on its own it paints a very clear picture about the benefit of using Optane drives – and also because it is easy to draw.

In the public and private cloud, predictability of performance is important, doubly so for any form of latency. This is where caching comes into play: rather than having to wait on a busy system, we just leave it in the write cache inbox and get an acknowledgment. The inverse is also true. Like many parents, I read almost the same bedtime stories to my young kids every night, and you can be sure those books remain close to hand on my bedside “read cache” table. This write and read caching greatly helps in providing performance and consistent latency. With vSAN all-flash there is no longer any read cache, as the flash drives at the capacity layer provide enough random read access performance… just as my collection of bedtime story books has been replaced with a Kindle full of eBooks. Back to the write cache inbox where we’ve been dropping things off – at some point, this write cache needs to be emptied, and this is where the Intel Optane NVMe drives shine. Drawing the comparison back to my kids, I no longer drive to a library to drop off books. With a flick of my finger I can return, or in cache terms de-stage, books from my Kindle back to the town library – the capacity drives, if you will. This is a lot less disruptive to my day-to-day life: I don’t need to schedule it, I don’t need to stop what I’m doing, and with a bit of practice I’ve been able to do this mid-story. Let’s look at this in actual IT terms and business benefits.

To really show off how well the Optane drives shine, we want to stress the write cache as much as possible. This is where benchmarking tools and the right knowledge of how to apply them come into play. We had ESG design and run these benchmarking workloads for us. Now let’s be clear, this test is not reflective of a real-world workload but was designed purely to stress the write cache, in particular the de-staging from cache to capacity. The workload that created my go-to whiteboard sketch was the 100% sequential 64KB workload with a 1.2TB working set per node for 75 minutes.

The graph clearly shows the benefit of the Optane drives: they keep on chugging at 2,500MB/sec of throughput the entire time without missing a beat. What’s not to like about that! This is usually when the techie customer in the room will try to burst my bubble by pointing out that this unrealistic workload is in no way reflective of their environment, or most environments… which is true. A more real-world workload would be a simulated relational database workload with a 22KB block size, mixing random 8K and sequential 128K I/O, with 60% reads and 40% writes, and a 600GB per node working set, which is quite a mouthful and is shown in figure 5. The results there show a steady 8.4-8.8% increase in IOPS across the board and a slower rise in latency, resulting in a 10.5% lower response time under 80% load. 

Those of you running OLTP workloads will appreciate the graph shown in figure 6 where HammerDB was used to emulate the database activity of a typical online brokerage firm. The Optane cache drives under that workload sustained a remarkable 61% more transactions per minute (TPM) and new orders per minute (NOPM). That can result in significant business improvement for an online brokerage firm who adopts Optane drives versus one who is using NAND SSDs.

When it comes to write cache, performance is not everything; write endurance is also extremely important. The vSAN spec requires that cache drives be SSD Endurance Class C (3,650 TBW) or above, and Intel Optane beats this hands down with an over tenfold margin at 41 PBW (41,984 TBW). The Intel Optane 3D XPoint architecture allows memory cells to be individually addressed in a dense, transistor-less, stackable design. This extremely high write endurance has let us spec a smaller cache drive, which in turn lets us maintain a similar VxRail node price point, enabling you, the customer, to get more performance for your dollar. 

What’s not to like? Typically, you get to pick any two of faster/better/cheaper. With Intel Optane drives in your VxRail you get all three: more performance and better endurance, at roughly the same cost. Wins all around!

Author: David Glynn, Sr Principal Engineer, VxRail Tech Marketing

Resources: Dell EMC VxRail with Intel Xeon Scalable Processors and Intel Optane SSDs


Read Full Blog
  • VxRail
  • DTCP

The Dell Technologies Cloud Platform – Smaller in Size, Big on Features

Jason Marques Jason Marques

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

The latest VMware Cloud Foundation 4.0 on VxRail 7.0 release introduces a more accessible entry cloud option with support for new four node configurations. It also delivers a simple and direct path to vSphere with Kubernetes at cloud scale.

The Dell Technologies team is very excited to announce that May 12, 2020 marked the general availability of our latest Dell Technologies Cloud Platform release, VMware Cloud Foundation 4.0 on VxRail 7.0. There is so much to unpack in this release across all layers of the platform, from the latest features of VCF 4.0 to newly supported deployment configurations new to VCF on VxRail. To help you navigate through all of the goodness, I have broken out this post into two sections: VCF 4.0 updates and new features introduced specifically to VCF on VxRail deployments. Let’s jump right to it!




VMware Cloud Foundation 4.0 Updates

A lot of great information on VCF 4.0 features was already published by VMware as part of their Modern Apps Launch earlier this year. If you haven’t caught up yet, check out the links to some VMware blogs at the end of this post. Some of my favorite new features include support for vSphere with Kubernetes (GAMECHANGER!), support for NSX-T in the Management Domain, and the NSX-T compatible Virtual Distributed Switch.

Now let’s dive into the items that are new to VCF on VxRail deployments, specifically ones that customers can take advantage of on top of the latest VCF 4.0 goodness.


New to VCF 4.0 on VxRail 7.0 Deployments

VCF Consolidated Architecture Four Node Deployment Support for Entry Level Cloud (available beginning May 26, 2020)

New to VCF on VxRail is support for the VCF Consolidated Architecture deployment option. Until now, VCF on VxRail required that all deployments use the VCF Standard Architecture. This was due to several factors, a major one being that NSX-T was not supported in the VCF Management Domain until this latest release. Having this capability was a prerequisite for supporting the consolidated architecture with VCF on VxRail.

Before we jump into the details of a VCF Consolidated Architecture deployment, let's review what the current VCF Standard deployment is all about.


VCF Standard Architecture Details


This deployment would consist of:

  • A minimum of seven VxRail nodes (eight are recommended)
  • A four-node Management Domain dedicated to running the VCF management software, and at least one dedicated workload domain consisting of a three-node cluster (four are recommended) to run user workloads
  • The Management Domain runs its own dedicated vCenter and NSX-T instance
  • The workload domains are deployed with their own dedicated vCenter instances and choice of dedicated or shared NSX-T instances that are separate from the Management Domain NSX-T instance.


A summary of features includes:

  • Requires a minimum of 7 nodes (8 recommended)
  • A Management Domain dedicated to run management software components 
  • Dedicated VxRail VI domain(s) for user workloads
  • Each workload domain can consist of multiple clusters
  • Up to 15 domains are supported per VCF instance including the Management Domain
  • vCenter instances run in linked-mode
  • Supports vSAN storage only as principal storage
  • Supports using external storage as supplemental storage


This deployment architecture design is preferred because it provides the most flexibility, scalability, and workload isolation for customers scaling their clouds in production. However, this does require a larger initial infrastructure footprint, and thus cost, to get started.

For something that allows customers to start smaller, VMware developed a validated VCF Consolidated Architecture option. This allows for the Management domain cluster to run both the VCF management components and a customer’s general purpose server VM workloads. Since you are just using the Management Domain infrastructure to run both your management components and user workloads, your minimum infrastructure starting point consists of the four nodes required to create your Management Domain. In this model, vSphere Resource Pools are used to logically isolate cluster resources to the respective workloads running on the cluster. A single vCenter and NSX-T instance is used for all workloads running on the Management Domain cluster. 


VCF Consolidated Architecture Details


A summary of features of a Consolidated Architecture deployment:

  • Minimum of 4 VxRail nodes
  • Infrastructure and compute VMs run together on shared management domain
  • Resource Pools used to separate and isolate workload types
  • Supports multi-cluster and scale to documented vSphere maximums
  • Does not support running Horizon Virtual Desktop or vSphere with Kubernetes workloads
  • Supports vSAN storage only as principal storage
  • Supports using external storage as supplemental storage for workload clusters

For customers to get started with an entry level cloud for general purpose VM server workloads, this option provides a smaller entry point, both in terms of required infrastructure footprint as well as cost.

With the Dell Technologies Cloud Platform, we now have you covered across your scalability spectrum, from entry level to cloud scale! 


Automated and Validated Lifecycle Management Support for vSphere with Kubernetes Enabled Workload Domain Clusters

How is it that we can support this? How does this work? What benefits does this provide you, as a VCF on VxRail administrator, as a part of this latest release? You may be asking yourself these questions. Well, the answer is through the unique integration that Dell Technologies and VMware have co-engineered between SDDC Manager and VxRail Manager. With these integrations, we have developed a unique set of LCM capabilities that can benefit our customers tremendously. You can read more about the details in one of my previous blog posts here.

VCF 4.0 on VxRail 7.0 customers who benefit from the automated full-stack LCM integration built into the platform can now extend this integration to the vSphere with Kubernetes components that are part of the ESXi hypervisor! Customers are future-proofed: when the need arises, they can automatically lifecycle manage vSphere with Kubernetes enabled clusters with fully automated and validated VxRail LCM workflows natively integrated into the SDDC Manager management experience. Cool, right?! This means that you can now bring the same streamlined operations capabilities to your modern apps infrastructure just like you already do for your traditional apps! The figure below illustrates the LCM process for VCF on VxRail.


VCF on VxRail LCM Integrated Workflow


Introduction of initial support of VCF (SDDC Manager) Public APIs

VMware Cloud Foundation first introduced the concept of SDDC Manager Public APIs back in version 3.8. These APIs have expanded in subsequent releases and have been geared toward VCF deployments on Ready Nodes.

Well, we are happy to say that in this latest release, the VCF on VxRail team is offering initial support for VCF Public APIs. These will include a subset of the various APIs that are applicable to a VCF on VxRail deployment. For a full listing of the available APIs, please refer to the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.
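As a taste of what these public APIs enable, the sketch below authenticates to SDDC Manager and lists workload domains. The token and domain endpoints follow the VMware Cloud Foundation public API; since the exact subset supported on VxRail varies, treat the paths as assumptions and confirm them against the API Reference Guide mentioned above:

```python
import requests

SDDC = "https://sddc-manager.example.local"  # illustrative address

# Exchange credentials for an API access token
# (POST /v1/tokens per the VCF public API)
token = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "administrator@vsphere.local",
          "password": "example-password"},
    verify=False,  # lab only; use proper certificate verification in production
).json()["accessToken"]

# List workload domains and their current status (GET /v1/domains)
domains = requests.get(
    f"{SDDC}/v1/domains",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()
for domain in domains.get("elements", []):
    print(domain["name"], "-", domain["status"])
```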

Another new API-related feature in this release is the availability of the VMware Cloud Foundation Developer Center. This provides some very handy API references and code samples built right into the SDDC Manager UI. These references are readily accessible and help our customers better integrate their own systems and other third-party systems directly into VMware Cloud Foundation on VxRail. The figure below provides a summary and a sneak peek at what this looks like.


VMware Cloud Foundation Developer Center SDDC Manager UI View


Reduced VxRail Networking Hardware Configuration Requirements

Finally, we end our journey of new features on the hardware front. In this release, we have officially reduced the minimum VxRail node networking hardware configuration required for VCF use cases. With the introduction of vSphere 7.0 in VCF 4.0, admins can now use the vSphere Distributed Switch (VDS) for NSX-T; the need for a separate N-VDS switch has been deprecated. So why is this important, and how does this lead to VxRail node network hardware configuration improvements? 

Well, up until now, VxRail and SDDC management networks have been configured to use the VDS. And this VDS would be configured to use at least two physical NIC ports as uplinks for high availability. When introducing the use of NSX-T on VxRail, an administrator would need to create a separate N-VDS switch for the NSX-T traffic to use. This switch would require its own pair of dedicated uplinks for high availability. Thus, in VCF on VxRail environments in which NSX-T would be used, each VxRail node would require a minimum of four physical NIC ports to support the two different pairs of uplinks for each of the switches. This resulted in a higher infrastructure footprint for both the VxRail nodes and for a customer’s Top of Rack Switch infrastructure because they would need to turn on more ports on the switch to support all of these host connections. This, in turn, would come with a higher cost.

Fast forward to this release -- now we can run NSX-T traffic on the same VDS as the VxRail and SDDC Manager management traffic. And when you can share the same VDS, you can reduce the number of physical uplink ports needed for high availability down to two, reducing the upfront hardware footprint and cost across the board! Win-win! The following figure highlights this new feature.


NSX-T Dual pNIC Features


Well, that about sums it all up. Thanks for coming on this journey and learning about the boatload of new features in VCF 4.0 on VxRail 7.0. As always, feel free to check out the additional resources for more information. Until next time, stay safe and stay healthy out there!

Jason Marques

Twitter -@vwhippersnapper




Additional Resources

What’s New in Cloud Foundation 4 VMware Blog Post

Delivering Kubernetes At Scale With VMware Cloud Foundation (Part 1) VMware Blog Post

Consistency Makes the Tanzu Difference VMware Blog Post


VxRail page on DellTechnologies.com

VCF on VxRail Guides

VMware Cloud Foundation 4.0 on VxRail 7.0 Documentation and Release Notes

VxRail Videos

VCF on VxRail Interactive Demos

Read Full Blog
  • VMware
  • vSphere
  • VxRail
  • vSAN
  • life cycle management

Introducing VxRail 7.0.000 with vSphere 7.0 support

Daniel Chiu Daniel Chiu

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

The VxRail team may all be sheltering at our own homes nowadays, but that doesn’t mean we’re just binging on Netflix and Disney Plus content. We have been hard at work to deliver on our continuing commitment to provide our customers a supporting VxRail software bundle within 30 days of any vSphere release. And this time it’s for the highly touted vSphere 7.0! You can find more information about vSphere and vSAN 7.0 in the vSphere and vSAN product areas in VMware Virtual Blocks blogs.

Here’s what you need to know about VxRail 7.0.000:

  • VxRail 7.x train – You may have noticed we’ve jumped from a 4.7 release train to a 7.0 release train. What did you miss?? Well... there are no secret 5.x or 6.x release trains. We have decided to align with the vSAN versions, starting with VxRail 7.x. This will make it easier for you to map VxRail versions to vSAN versions.
  • Accelerate innovation – The primary focus of this VxRail release is our synchronous release commitment to the vSphere 7.0 release. This release provides our users the opportunity to run vSphere 7.0 on their clusters. The most likely use cases would be for users who are planning to transition production infrastructure to vSphere 7.0 but first want to evaluate it in a test environment, or for users who are keen on running the latest VMware software.
  • Operational freedom – You may have heard that vSphere 7.0 introduces an enhanced version of vSphere Update Manager called vSphere LCM, or vLCM for short. While vLCM definitely improves upon the automation and orchestration of updating an HCI stack, VxRail’s LCM still has the advantage over vLCM (check out my blog to learn more). For example, VMware is currently not recommending that vSAN Ready Node users upgrade to vSphere 7.0 because of driver forward-compatibility issues (you can read more about it in this KB article). That doesn’t stop VxRail from allowing you to upgrade your clusters to vSphere 7.0. The extensive research, testing, and validation work that goes into delivering Continuously Validated States for VxRail mitigates that issue.
  • Networking flexibility – Aside from synchronous release, the most notable new feature/capability is that VxRail consolidates the switch configuration for VxRail system traffic and NSX-T traffic. You can now run your VM traffic managed by NSX-T Manager on the same two ports used for VxRail system traffic (such as VxRail Management, vSAN, and vMotion) on the Network Daughter Card (NDC). Instead of requiring a 4-port NDC, users can use a 2-port NDC.

Consolidated switch configuration for VxRail system traffic managed by VxRail Manager/vCenter and VM traffic by NSX-T Manager

All said, VxRail 7.0.000 is a critical release that further exemplifies our alignment with VMware’s strategy and why VxRail is the platform of choice for vSAN technology and VMware’s Software-Defined Data Center solutions.

Our commitment to synchronous release for any vSphere release is important for users who want to benefit from the latest VMware innovations and for users who prioritize a secure platform over everything else. A case in point is the vCenter express patch that rolled out a couple of weeks ago to address a critical security vulnerability (you can find out more here). Within eight days of the express patch release, the VxRail team was able to run through all its testing and validation against all supported configurations to deliver a supported software bundle. Our $60M testing lab investment and 100+ team members dedicated to testing and quality assurance make that possible.

If you’re interested in upgrading your clusters to VxRail 7.0.000, please be sure to read the Release Notes.

Daniel Chiu, VxRail Technical Marketing

LinkedIn

Read Full Blog
  • VMware
  • vSphere
  • VxRail
  • life cycle management

How does vSphere LCM compare with VxRail LCM?

Daniel Chiu Daniel Chiu

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

VMware’s announcement of vSphere 7.0 this month included a highly anticipated enhanced version of vSphere Update Manager (VUM), which is now called vSphere Lifecycle Manager (vLCM).   Beyond the name change, much is intriguing: its capabilities, the customer benefits, and (what I have often been asked) the key differences between vLCM and VxRail lifecycle management.   I’ll address these three main areas of interest in this post and explain why VxRail LCM still has the advantage.

At its core, vLCM shifts to a desired-state configuration model that allows vSphere administrators to manage clusters by using image profiles for both server hardware and ESXi software. This new approach allows more consistency in the ESXi host image across clusters, and centralizes and simplifies managing the HCI stack. vSphere administrators can now design their own image profile that consists of the ESXi software and the firmware and drivers for the hardware components in the hosts. They can run a compliance check against the vSAN Hardware Compatibility List (HCL) for the storage HBA before executing the update with the image. vLCM can check for version drift, identifying differences between what’s installed on ESXi hosts and the image profile saved on the vCenter Server. To top that off, vLCM can recommend new target versions that are compatible with the image profile. All of these are great features that simplify the operational experience of HCI LCM.

Let’s dig deeper so you can get a better appreciation for how these capabilities are delivered. vLCM relies on the Cluster Image Management service to allow administrators to build that desired state. At a minimum, the desired state starts with the ESXi image, which requires communication with the VMware Compatibility Guide and vSAN HCL to identify the appropriate version. To build a vCenter Server plugin that includes hardware drivers and firmware on top of the ESXi image, hardware vendors need to provide the files that fill out the rest of the desired image profile. Only when the desired state is complete with both hardware and software can capabilities such as simplified upgrades, compliance checks, version drift detection, and version recommendations benefit administrators the most. At this time, Dell and HPE have provided this addon.


vLCM Image Builder – courtesy of https://blogs.vmware.com/vsphere/2020/03/vsphere-7-features.html
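For the API-minded, here is a hedged sketch of reading a cluster’s desired base image through the vSphere Automation REST API, which is how vLCM exposes the desired state programmatically. The session and base-image endpoints reflect the vSphere 7.0 API as I understand it, and the cluster identifier is hypothetical; confirm both against your vCenter’s API documentation:

```python
import requests

VC = "https://vcenter.example.local"  # illustrative vCenter address
CLUSTER = "domain-c8"                 # hypothetical cluster identifier (MoID)

session = requests.Session()
session.verify = False  # lab only; use proper certificate verification in production

# Create an API session; the returned string becomes the session token
token = session.post(
    f"{VC}/api/session",
    auth=("administrator@vsphere.local", "example-password"),
).json()
session.headers["vmware-api-session-id"] = token

# Read the desired base ESXi image for a vLCM-managed cluster
base_image = session.get(
    f"{VC}/api/esx/settings/clusters/{CLUSTER}/software/base-image"
).json()
print("Desired ESXi base image:", base_image.get("version"))
```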

While vLCM’s desired state configuration model provides a strong foundation to drive better operational efficiency in lifecycle management, there are caveats today.   I’ll focus on three key differences that will best help you in differentiating vLCM from VxRail LCM:

1. Validated state vs. desired state – Desired state does not mean validated state. VxRail invests significant resources to identify a validated version set of software, drivers, and firmware (what we term a Continuously Validated State), lifting the burden of defining, testing, and validating a desired state off the shoulders of administrators. With 100+ dedicated VxRail team members, over $60 million of lab investments, and over 25,000 runtime hours of testing for each major release, VxRail users can rest assured when it comes to LCM of their VxRail clusters.

vLCM’s model relies heavily on its ecosystem to produce a desired state for the full stack.  Hardware vendors need to provide the bits for the drivers and firmware as well as the compliance check for most of the HCI stack.  Below is a snippet of the VxRail support matrix for VxRail 4.7.100 to show you some of the hardware components a VxRail Continuously Validated State delivers.   Beyond the storage HBA, it is the responsibility of the hardware vendor to perform compliance checks of the remaining hardware on the server.  Once compliance checks pass, users are responsible for validating the desired state.


2. Heterogeneous vs. homogeneous hosts – vCenter Server can only have one image profile per cluster.  That means clusters need to have hosts that have identical hardware configurations in order to use vLCM.  VxRail LCM supports a variety of mixed node configurations for use cases, such as when adding new generation servers into a cluster, or having multiple hardware configurations (that is, different node types) in the same cluster. For vSAN Ready Nodes, if an administrator has mixed node configurations, they still have the option to continue using VUM instead of vLCM -- a choice they have to make after they upgrade their cluster to vSphere 7.0.  


3. Support – Troubleshooting LCM issues may well include the hardware vendor addon. Though vLCM’s desired state includes hardware and software, the support is still potentially separate. The administrator would need to collect the hardware vendor addon’s logs and contact the hardware vendor separately from VMware. (It is worth noting that both Dell and HPE are VMware-certified support delivery partners. When considering your vSAN Ready Node partner, you may want to be sure that the hardware provider is also capable of delivering support for VMware.) With VxRail, a single-vendor support model by default streamlines all support calls directly to Dell Technical Support. With their in-depth VMware knowledge, Dell Technical Support can resolve cases quickly; 95% of support cases are resolved without requiring coordination with VMware support.

In evaluating vLCM, I’ll refer to the LCM value tiers. There are three levels, starting from lower to higher customer value: update orchestration, configuration stability, and decision support:

  • Automation and orchestration is the foundation to streamlining full stack LCM. In order to simplify LCM, the stack needs to be managed as one.  
  • Configuration stability delivers the assurance to administrators that they can efficiently evolve their clusters (that is, new generation hardware, new software innovation) without disrupting availability or performance for their workloads.
  • Decision support is where we can offload the decision-making burden from the administrator.


Explaining the Lifecycle Management value tiers for customers

vLCM has simplified full-stack LCM by automating and orchestrating hardware and software upgrades into a single process flow. The next step is configuration stability, which is not just stable code (which every HCI stack should claim), but the confidence customers have in knowing that non-disruptive LCM of their HCI requires minimal work on their part. When VxRail releases a composite bundle, VxRail customers know that it has been extensively tested against a wide range of configurations to assure uptime and performance. For most VxRail customers I’ve talked to, LCM assurance and workload continuity are the benefits they value most.

VMware has done a great job with its initial release of vLCM. vSAN Ready Node customers, especially those who use nodes from vendors like Dell that support the capability (and who can also be a support delivery partner), will certainly benefit from the improvements over VUM. Hopefully, with the differences outlined above, you will have a greater appreciation for where vLCM is in its evolution, and where VxRail continues to innovate and keep its advantage.


Daniel Chiu, VxRail Technical Marketing

LinkedIn

Read Full Blog
  • HCI
  • VxRail
  • SmartFabric
  • PowerSwitch
  • OpenManage

SmartFabric Services for VxRail

Karol Boguniewicz Karol Boguniewicz

Mon, 17 Aug 2020 18:31:31 -0000

|

Read Time: 0 minutes

HCI networking made easy (again!). Now even more powerful with multi-rack support.


The Challenge

Network infrastructure is a critical component of HCI. In contrast to legacy 3-tier architectures, which typically have dedicated storage and a dedicated storage network, HCI architecture is more integrated and simplified. Its design allows the same network infrastructure to be shared by workload-related traffic, cluster communication, and the software-defined storage. The reliability and proper setup of this network infrastructure not only determine the accessibility of the running workloads (from the external network), they also determine the performance and availability of the storage, and as a result, of the whole HCI system.

Unfortunately, in most cases, setting up this critical component properly is complex and error-prone. Why? Because of the disconnect between the responsible teams. Typically, configuring a physical network requires expert network knowledge, which is quite rare among HCI admins. The reverse is also true: network admins typically have limited knowledge of HCI systems, because this is not their area of expertise and responsibility.

The situation gets even more challenging with increasingly complex deployments, when you go beyond just a pair of ToR switches and beyond a single-rack system. This scenario is becoming more common as HCI becomes a mainstream architecture within the data center, thanks to its maturity, simplicity, and recognition as a perfect infrastructure foundation for digital transformation and VDI/End User Computing (EUC) initiatives. You need much more computing power and storage capacity to handle increased workload requirements.

At the same time, with the broader adoption of HCI, customers are looking for ways to connect their existing infrastructure to the same fabric, in order to simplify the migration process to the new architecture or to leverage dedicated external NAS systems, such as Isilon, to store files and application or user data.


A brief history of SmartFabric Services for VxRail

Here at Dell Technologies we recognize these challenges. That’s why we introduced SmartFabric Services (SFS) for VxRail. SFS for VxRail is part of the Dell EMC Networking SmartFabric OS10 Enterprise Edition software that runs on the Dell EMC PowerSwitch networking portfolio. We announced the first version of SFS for VxRail at VMworld 2018. With this functionality, customers can quickly and easily deploy and automate data center fabrics for VxRail, while at the same time reducing the risk of misconfiguration.

Since that time, Dell has expanded the capabilities of SFS for VxRail. The initial release of SFS for VxRail allowed VxRail to fully configure the switch fabric to support the VxRail cluster (as part of the VxRail 4.7.0 release back in Dec 2018). The following release included automated discovery of nodes added to a VxRail cluster (as part of VxRail 4.7.100 in Jan 2019).


The new solution

This week we are excited to introduce a major new release of SFS for VxRail as a part of Dell EMC SmartFabric OS 10.5.0.5 and VxRail 4.7.410.

So, what are the main enhancements?

  • Automation at scale
     Customers can easily scale their VxRail deployments, starting with a single rack with two ToR leaf switches, and expand to multi-rack, multi-cluster VxRail deployments with up to 20 switches in a leaf-spine network architecture at a single site. SFS now automates over 99% (!) of the network configuration steps* for leaf and spine fabrics across multiple racks, significantly simplifying complex multi-rack deployments.
  • Improved usability
     An updated version of the OpenManage Network Integration (OMNI) plugin provides a single pane for “day 2” fabric management and operations through vCenter (the main management interface used by VxRail and vSphere admins), and a new embedded SFS UI simplifying “day 1” setup of the fabric.
  • Greater expandability
     Customers can now connect non-VxRail devices, such as additional PowerEdge servers or NAS systems, to the same fabric. The onboarding can be performed as a “day 2” operation from the OMNI plugin. In this way, customers can reduce the cost of additional switching infrastructure when building more sophisticated solutions with VxRail.

 

Figure 1. Comparison of a multi-rack VxRail deployment, without and with SFS


Solution components

In order to take advantage of this solution, you need the following components:

  • At a minimum, a pair of supported Dell EMC PowerSwitch Data Center Switches. For an up-to-date list of supported hardware and software components, please consult the latest VxRail Support Matrix. At the time of writing this post, the following models are supported: S4100 (10GbE) and S5200 (25GbE) series for the leaf layer, and Z9200 series or S5232 for the spine layer. To learn more about the Dell EMC PowerSwitch product portfolio, please visit the PowerSwitch website.
  • Dell EMC Networking SmartFabric OS10 Enterprise Edition (version 10.5.0.5 or later). This operating system is available for the Dell EMC PowerSwitch Data Center Switches, and implements SFS functionality. To learn more, please visit the OS10 website.
  • A VxRail cluster consisting of 10GbE or 25GbE nodes, with software version 4.7.410 or later.
  • OpenManage Network Integration (OMNI) for VMware vCenter version 1.2.30 or later.


How does the multi-rack feature work?

The multi-rack feature is implemented through the Hardware VTEP functionality in Dell EMC PowerSwitches and the automated creation of a VxLAN tunnel network across the switch fabric spanning multiple racks.

VxLAN (Virtual Extensible Local Area Network) is an overlay technology that allows you to extend a Layer 2 (L2) “overlay” network over a Layer 3 (L3) “underlay” network. The encapsulation works by adding a VxLAN header to the original L2 Ethernet frame and placing it into an IP/UDP packet to be transported across the L3 underlay network.
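To make the encapsulation concrete, here is a minimal Python sketch using the open-source scapy library. It is purely illustrative: the MAC and IP addresses and the VNI value are made-up placeholders, and SFS performs this encapsulation in switch hardware rather than in software. It simply builds a VxLAN-encapsulated frame the way described above:

# Illustrative only: constructs the VxLAN encapsulation described above
# in software. Addresses and the VNI are placeholders; a PowerSwitch
# hardware VTEP performs this encapsulation in silicon.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# The original L2 frame sent by a VxRail node (the "overlay" traffic)
inner = (
    Ether(src="02:00:00:aa:bb:01", dst="02:00:00:aa:bb:02")
    / IP(src="192.168.10.11", dst="192.168.10.12")
    / UDP(sport=12345, dport=8080)
)

# Outer headers added by the VTEP: the inner frame rides inside an
# IP/UDP packet (UDP port 4789 is the IANA-assigned VxLAN port)
# across the L3 leaf-spine underlay.
encapsulated = (
    Ether()
    / IP(src="10.0.0.1", dst="10.0.0.2")
    / UDP(sport=49152, dport=4789)
    / VXLAN(vni=5001)
    / inner
)

encapsulated.show()  # prints the nested layering of the resulting packet

At the remote VTEP, decapsulation simply strips the outer headers and delivers the original frame, which is why the overlay looks like a single flat L2 network to the VxRail nodes.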

By default, all VxRail networks are configured as L2. With the configuration of this VxLAN tunnel, the L2 network is “stretched” across multiple racks with VxRail nodes. This allows for the scalability of L3 networks with the VM mobility benefits of an L2 network. For example, the nodes in a VxRail cluster can reside on any rack within the SmartFabric network, and VMs can be migrated within the same VxRail cluster to any other node without manual network configuration.

Figure 2. Overview of the VLAN and VxLAN VxRail traffic with SFS for multi-rack VxRail

This new functionality is enabled by the new L3 Fabric personality, available as of OS 10.5.0.5, which automates the configuration of a leaf-spine fabric in a single-rack or multi-rack deployment and supports both L2 and L3 upstream connectivity. What is a fabric personality? An SFS personality is a setting that determines the functionality and supported configurations of the switch fabric.

To see how simple it is to configure the fabric and to deploy a VxRail multi-rack cluster with SFS, please see the following demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks.

Single pane for management and “day 2” operations

SFS not only automates the initial deployment (“day 1” fabric setup), but also greatly simplifies the ongoing management and operations of the fabric. This is done through vCenter, a familiar interface for VxRail / vSphere admins, via the OMNI plugin, which is distributed as a virtual appliance.

It’s powerful! From this “VMware admin-friendly” interface you can:

  • Add a SmartFabric instance to be managed (OMNI supports multiple fabrics to be managed from the same vCenter / OMNI plugin).
  • Get visibility into the configured fabric – domain, fabric nodes, rack, switches, and so on.
  • Visualize the fabric and the configured connections between the fabric elements with a “live” diagram that allows “drill-down” to get more specific information (Figure 3).
  • Manage breakout ports and jump ports, as well as on-board additional servers or non-VxRail devices.
  • Configure L2 or L3 fabric uplinks, allowing more flexibility and support of multiple fabric topologies.
  • Create, edit, and delete VxLAN and VLAN-based networks, to customize the network setup for specific business needs.
  • Create a host-centric network inventory that provides a clear mapping between configured virtual and physical components (interfaces, switches, networks, and VMs). For instance, you can inspect virtual and physical network configuration from the same host monitoring view in vCenter (Figure 4). This is extremely useful for troubleshooting potential network connectivity issues.
  • Upgrade SmartFabric OS on the physical switches in the fabric and replace a switch, which simplifies the lifecycle management of the fabric.


Figure 3. Sample view from the OMNI vCenter plugin showing a fabric topology

To see how simple it is to deploy the OMNI plugin and to get familiar with some of the options available from its interface, please see the following demo: Dell EMC OpenManage Network Integration for VMware vCenter.

OMNI also monitors the VMware virtual networks for changes (such as to portgroups in vSS and vDS VMware virtual switches) and, as necessary, reconfigures the underlying physical fabric.

Figure 4. OMNI – monitor virtual and physical network configuration from a single view


Thanks to OMNI, managing the physical network for VxRail becomes much simpler, less error-prone, and can be done by the VxRail admin directly from a familiar management interface, without having to log into the console of the physical switches that are part of the fabric.

Supported topologies

This new SFS release is very flexible and supports multiple fabric topologies. Due to the limited size of this post, I will only list them by name:

  • Single-Rack – just a pair of leaf switches in a single rack, supports both L2 and L3 upstream connectivity / uplinks – the equivalent of the previous SFS functionality
  • (New) Single-Rack to Multi-Rack – starts with a pair of switches, expands to multi-rack by adding spine switches and additional racks with leaf switches
  • (New) Multi-Rack with Leaf Border – adds upstream connectivity via the pair of leaf switches; this supports both L2 and L3 uplinks
  • (New) Multi-Rack with Spine Border - adds upstream connectivity via the pair of spine switches; this supports L3 uplinks
  • (New) Multi-Rack with Dedicated Leaf Border - adds upstream connectivity via the dedicated pair of border switches above the spine layer; this supports L3 uplinks

For detailed information on these topologies, please consult Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide.

Note that SFS for VxRail does not currently support NSX-T or VCF on VxRail.

Final thoughts

This latest version of SmartFabric Services for VxRail takes HCI network automation to the next level, solving the much bigger network complexity problem of a multi-rack environment compared to the much simpler single-rack, dual-switch configuration. With SFS, customers can:

  • Reduce the CAPEX and OPEX related to HCI network infrastructure, thanks to automation (reducing over 99% of required configuration steps* when setting up a multi-rack fabric), and a reduced infrastructure footprint
  • Accelerate the deployment of essential IT infrastructure for their business initiatives
  • Reduce the risk related to the error-prone configuration of complex multi-rack, multi-cluster HCI deployments
  • Increase the availability and performance of hosted applications
  • Use a familiar management console (vSphere Client / vCenter) to drive additional automation of day 2 operations
  • Rapidly perform any necessary changes to the physical network, in an automated way, without requiring highly-skilled network personnel


Additional resources:

VxRail Support Matrix

Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide

Dell EMC Networking SmartFabric Services Deployment with VxRail

SmartFabric Services for OpenManage Network Integration User Guide Release 1.2

Demo: Dell EMC OpenManage Network Integration for VMware vCenter

Demo: Expand SmartFabric and VxRail to Multi-Rack

Demo: Dell EMC Networking SFS Deployment with VxRail - L2 Uplinks

Demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks


Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing

Twitter: @cl0udguide 


*Disclaimer: Based on internal analysis comparing SmartFabric to manual network configuration, Oct 2019.  Actual results will vary.


Read Full Blog
  • VMware
  • Oracle
  • VxRail
  • Oracle RAC
  • VMware Cloud Foundation

Built to Scale with VCF on VxRail and Oracle 19C RAC

Vic Dery Vic Dery

Wed, 11 Nov 2020 19:58:39 -0000

|

Read Time: 0 minutes

The newly released Oracle RAC on Dell EMC VxRail with VMware Cloud Foundation (VCF) Reference Architecture (RA) guides customers in building an efficient and high-performing hyperconverged infrastructure to run their OLTP workloads. Scalability was the primary goal of this RA, and the testing also highlighted performance. As Oracle RAC scaled, throughput increased to over 1 million transactions per minute (TPM), while read latency stayed sub-millisecond (0.64-0.70 ms). The performance achieved with VxRail is a great added benefit to the core design points for Oracle RAC environments, whose primary focus is the availability and resiliency of the solution. Links to a reference architecture (“Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail”) and a solution brief (“Deploying Oracle RAC on Dell EMC VxRail”) are available here and at the end of this post.

The RAC solution with VxRail scaled out easily: you simply add a new node to join an existing VxRail cluster. The VxRail Manager provides a simple path that automatically discovers and non-disruptively adds each new node. VMware vSphere and vSAN can then rebalance resources and workloads across the cluster, creating a single resource pool for compute and storage.

The VxRail clusters were built with eight P570F nodes: four for the VCF Management Domain and four for the Oracle RAC Workload Domain.


 


Specifics on the build, including the hardware and software used, are detailed within the reference architecture. It also provides information on the testing, tools used, and results.


 

This graph shows TPM and response time as the RAC node count increases from one to four. Notice that the average TPM increased along a near-linear trendline (shown by the dotted line) as additional RAC nodes were added, while total application response time was maintained at 20 milliseconds or less.

Note: The TPM near-linear trendline is shown in the above graph (blue dotted line). As additional RAC nodes are added, performance increases, as does RAC high availability. Perfectly linear TPM growth (equal added performance per node) is not achieved because of the RAC nodes’ dependency on concurrency of access, instance, network, and other factors. See the RA for additional performance-related information.

Summary of performance

Different-sized databases kept the TPM at the same level (about one million) while keeping the application response time at 20 ms or below. When increasing the database size, the physical read and write IOPS increased near-linearly, as reported by Oracle AWR. This indicated that more read and write I/O requests were served by the backend storage under the same configuration. Overall, even at a peak of up to 100,000 client IOPS, vSAN still provided excellent storage performance, with sub-millisecond read latency and single-digit-millisecond write latency.

Sidebar about Oracle licensing: While not mentioned in the RA, VxRail offers several facilities to control Oracle licenses and, in some cases, eliminate the need for costly licensed options. These include a broad choice of CPU core configurations, some with fewer cores and higher processing power per core, to maximize the customer’s Oracle workload performance while minimizing the license requirements. Costly add-on options such as encryption and compression can be provided via vSAN and are handled by VxRail. Further, vSphere hypervisor features like DRS allow Oracle VMs to be confined to licensed nodes only.

You can speak to a Dell Technologies Oracle specialist for more details on how to control Oracle licensing costs for VMware environments.

Conclusion

Oracle Database 19c on VxRail offers customers performance, scalability, reliability, and security for all their operational and analytical workloads. The Oracle RAC on VxRail test environment was first created to highlight the architecture. It also had the added benefit of showcasing the great performance VxRail delivers. If you need more performance, it is simple to adjust the configuration by adding more VxRail nodes to the cluster. If you need more storage, add more drives to meet the scale required of the database. Dell Technologies has Oracle specialists to ensure the VxRail cluster will meet the scale and performance outcomes desired for Oracle environments.


Additional Resources:

Reference Architecture - Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail

Solution Brief - Deploying Oracle RAC on Dell EMC VxRail

Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing

@VxVicTX

Special thank you to David Glynn for assisting with the reviews  



Read Full Blog
  • VMware
  • VxRail
  • VMware Cloud Foundation
  • life cycle management

VMware Cloud Foundation on VxRail Integration Features Series: Part 1—Full Stack Automated LCM

Jason Marques Jason Marques

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

Full Stack Automated Lifecycle Management

It’s no surprise that VMware Cloud Foundation on VxRail features numerous unique integrations with many VCF components, such as SDDC Manager and even VMware Cloud Builder. These integrations are the result of the co-engineering efforts by Dell Technologies and VMware with every release of VCF on VxRail. The following figure highlights some of the components that are part of this integration effort.

These integrations of VCF on VxRail offer customers a unique set of features in various categories, from security to infrastructure deployment and expansion to deep monitoring and visibility, all developed to streamline infrastructure operations.

Where do these integrations exist? The following figure outlines how they impact a customer’s Day 0 to Day 2 operations experience with VCF on VxRail.

In this series I will showcase some of these unique integration features, including some of the more nuanced ones. But for this initial post, I want to highlight one of the most popular and differentiated customer benefits that emerged from this integration work: full stack automated lifecycle management (LCM).

VxRail already delivers a differentiated LCM customer experience through its Continuously Validated States capabilities for the entire VxRail hardware and software stack. (As you may know, the VxRail stack includes the hardware and firmware of compute, network, and storage components, along with VMware ESXi, VMware vSAN, and the Dell EMC VxRail HCI System software itself, which includes VxRail Manager.)

With VCF on VxRail, VxRail Manager is natively integrated into the SDDC Manager LCM framework through the SDDC Manager UI, with SDDC Manager calling VxRail Manager APIs when executing LCM workflows. This integration allows SDDC Manager to leverage all of the LCM capabilities that natively exist in VxRail right out of the box. SDDC Manager can then execute SDDC software LCM AND drive native VxRail HCI system LCM. It does this by leveraging native VxRail Manager APIs and the Continuously Validated State update packages for both the VxRail software and hardware components.
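To give a concrete flavor of what driving VxRail LCM through an API looks like, here is a hypothetical Python sketch of the kind of call that kicks off a cluster upgrade via the VxRail Manager public REST API. The endpoint path, request body field, address, and credentials shown are assumptions for illustration only; the SDDC Manager integration itself happens behind the scenes, and the VxRail API guide is the authoritative reference:

# Hypothetical sketch: starting a VxRail cluster upgrade through the
# VxRail Manager REST API. The endpoint path and body field shown here
# are assumptions for illustration; they are not the documented SDDC
# Manager integration, which is internal.
import requests

VXM = "https://vxm.example.local"  # placeholder VxRail Manager address
AUTH = ("administrator@vsphere.local", "changeme")  # placeholder credentials

payload = {
    # assumed field naming a composite bundle already staged on VxRail Manager
    "bundle_file_locator": "/tmp/vxrail-composite-bundle.zip",
}

resp = requests.post(f"{VXM}/rest/vxm/v1/lcm/upgrade",
                     json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print("Upgrade request accepted:", resp.json())

In VCF on VxRail, administrators never make such calls themselves; SDDC Manager orchestrates the equivalent workflows end to end.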

All of this happens seamlessly behind the scenes when administrators use the SDDC Manager UI to kick off native SDDC Manager workflows. This means that customers don’t have to leave the SDDC Manager UI management experience at all for full stack SDDC software and VxRail HCI infrastructure LCM operations. How cool is that?! The following figure illustrates the concepts behind this effective relationship.

For more details about how this LCM experience works, check out my lightboard talk about it!

Also, if you want to get some hands-on experience walking through LCM operations for the full VCF on VxRail stack, check out the VCF on VxRail Interactive Demo to see this and some of the other unique integrations!

I am already hard at work writing up the next blog post in the series. Check back soon to learn more.

Jason Marques

Twitter - @vwhippersnapper

Additional Resources

VxRail page on DellTechnologies.com

VCF on VxRail Guides

VCF on VxRail Whitepaper

VCF 3.9.1 on VxRail 4.7.410 Release Notes

VxRail Videos

VCF on VxRail Interactive Demos

Read Full Blog
  • HCI
  • VxRail

Take VxRail automation to the next level by leveraging APIs

Karol Boguniewicz Karol Boguniewicz

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

VxRail REST APIs

The Challenge

VxRail Manager, available as part of the HCI System Software, drastically simplifies the lifecycle management and operations of a single VxRail cluster. With a “single click” user experience available directly in the vCenter interface, you can perform a full upgrade of all software components of the cluster, including not only vSphere and vSAN, but also the complete set of server hardware firmware and drivers, such as NICs, disk controller(s), drives, and so on. That’s a simplified experience that you won’t find in any other VMware-based HCI solution.

But what if you need to manage not a single cluster, but a farm consisting of dozens or hundreds of VxRail clusters? Or maybe you’re using an orchestration tool to holistically automate your IT infrastructure and processes? Would you still need to log in manually as an operator to each of these clusters separately and click a button to shut down a cluster, collect log or health data, or perform LCM operations?

This is where VxRail REST APIs come in handy.

The VxRail API Solution

REST APIs are very important for customers who would like to programmatically automate operations of their VxRail-based IT environment and integrate with external configuration management or cloud management tools.

In VxRail HCI System Software 4.7.300 we’ve introduced very significant improvements in this space:

  • Swagger integration - which allows for simplified consumption of the APIs and their documentation;
  • Comprehensiveness – we’ve almost doubled the number of public APIs available;
  • PowerShell integration – which allows consumption of the APIs from Microsoft PowerShell or VMware PowerCLI.

The easiest way to start using these APIs is through the web browser, thanks to the Swagger integration. Swagger is an open-source toolkit that simplifies OpenAPI development and can be launched from within the VxRail Manager virtual appliance. To access the documentation, simply open the following URL in the web browser: https://<VxM_IP>/rest/vxm/api-doc.html (where <VxM_IP> stands for the IP address of the VxRail Manager) and you should see a page similar to the one shown below:

Figure 1. Sample view into VxRail REST APIs via Swagger

This interface is aimed at customers who are leveraging orchestration or configuration management tools; they can use it to accelerate the integration of VxRail clusters into their automation workflows. VxRail API is complementary to the APIs offered by VMware.
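For a taste of what programmatic consumption looks like, below is a minimal Python sketch that retrieves basic system information. It is illustrative only: the host name and credentials are placeholders, and the /rest/vxm/v1/system endpoint and its response fields are assumptions to verify against the API documentation for your release:

# Minimal sketch: query VxRail system information over the REST API.
# Host, credentials, and the exact endpoint and response fields should
# be checked against the VxRail API documentation for your release.
import requests

VXM_HOST = "vxm.example.local"  # placeholder VxRail Manager address
AUTH = ("administrator@vsphere.local", "changeme")  # placeholder credentials

resp = requests.get(
    f"https://{VXM_HOST}/rest/vxm/v1/system",
    auth=AUTH,
    verify=False,  # lab-only shortcut; use proper CA verification in production
)
resp.raise_for_status()
print("VxRail HCI System Software version:", resp.json().get("version"))

Pointed at dozens or hundreds of VxRail Managers in a loop, the same handful of lines becomes exactly the kind of at-scale automation that the manual, per-cluster workflow cannot provide.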

Would you like to see this in action? Watch the first part of the recorded demo available in the additional resources section.

PowerShell integration for Windows environments

Customers who prefer scripting in a Windows environment using Microsoft PowerShell or VMware PowerCLI will benefit from the VxRail.API PowerShell Modules Package. It simplifies the consumption of the VxRail REST APIs from PowerShell and focuses more on the physical infrastructure layer, while management of VMware vSphere and the solutions layered on top (such as Software-Defined Data Center, Horizon, and so on) can be scripted using the similar interface available in VMware PowerCLI.

Figure 2. VxRail.API PowerShell Modules Package

To see that in action, check the second part of the recorded demo available in the additional resources section.

Bringing it all together

VxRail REST APIs further simplify IT operations, fostering operational freedom and a reduction in OPEX for large enterprises, service providers, and midsize enterprises. The integrations with Swagger and PowerShell make them much more convenient to use. This is an area of VxRail HCI System Software that is rapidly gaining new capabilities, so please make sure to check the latest advancements with every new VxRail release.


Additional resources:

Demo: VxRail API - Overview

Demo: VxRail API - PowerShell Package

Dell EMC VxRail Appliance – API User Guide

VxRail PowerShell Package

VxRail API Cookbook

VMware API Explorer


Author: Karol Boguniewicz, Sr Principal Engineer, VxRail Tech Marketing

Twitter: @cl0udguide 

Read Full Blog
  • VxRail

Latest enhancements to VxRail ACE

Daniel Chiu Daniel Chiu

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

VxRail ACE

February 4, 2020

One of the key areas of focus for VxRail ACE (Analytical Consulting Engine) is active multi-cluster management.  With ACE, users have a central point to manage multiple VxRail clusters more conveniently.  Updating the system software of multiple VxRail clusters is one activity where ACE greatly benefits users: it is a time-consuming operation that requires careful planning and coordination.  In the initial release of ACE, users could facilitate the transfer of update bundles to all their clusters with ACE acting as the single control point, rather than logging onto every vCenter console to do the same activity.  That can save quite a bit of time.

On-demand pre-upgrade cluster health checks

In the latest ACE update, users can now run on-demand health checks prior to upgrading to find out whether their clusters are ready for a system update.  By identifying which clusters are ready and which ones are not, users can more effectively schedule their maintenance windows in advance.  It allows them to see which clusters require troubleshooting and which ones can start the update process.  In ACE, an on-demand cluster health check is referred to as a Pre-Check.

For more information about this feature, you can check out this video: https://vxrail.is/aceupdates

Deployment types

Another feature that came out with this update is the identification of the cluster deployment type.  This means ACE will now display whether the cluster is a standard VxRail cluster, in a VMware Validated Design deployment, in a VMware Cloud Foundation on VxRail deployment (as used in Dell Technologies Data Center-as-a-Service), a 2-node vSAN cluster, or in a stretched cluster configuration.

Daniel Chiu, VxRail Technical Marketing

LinkedIn

Read Full Blog
  • VMware
  • VxRail
  • VMware Cloud Foundation

VCF on VxRail – More business-critical workloads welcome!

Jason Marques Jason Marques

Wed, 07 Feb 2024 22:30:10 -0000

|

Read Time: 0 minutes

New platform enhancements for stronger mobility and flexibility 

February 4, 2020


Today, Dell EMC has made the newest VCF 3.9.1 on VxRail 4.7.410 release available for download for existing VCF on VxRail customers, with availability for new customers coming on February 19, 2020. Let’s dive into what’s new in this latest version.

Expand your turnkey cloud experience with additional unique VCF on VxRail integrations

This release continues the co-engineering innovation efforts of Dell EMC and VMware to provide our joint customers with better outcomes. This time we tackle the area of security: VxRail password management for VxRail Manager accounts such as root and mystic, as well as for ESXi accounts, has been integrated into the SDDC Manager UI Password Management framework. Now the components of the full SDDC and HCI infrastructure stack can be centrally managed as one complete turnkey platform using your native VCF management tool, SDDC Manager. Figure 1 illustrates what this looks like.

Figure 1


Support for Layer 3 VxRail Stretched Cluster Configuration Automation

Building off the support for Layer 3 stretched clusters introduced in VCF 3.9 on VxRail 4.7.300 with manual guidance, VCF 3.9.1 on VxRail 4.7.410 now supports automating the configuration of Layer 3 VxRail stretched clusters for both NSX-V and NSX-T backed VxRail VI Workload Domains. This is accomplished using the CLI in the VCF SOS Utility.


Greater management visibility and control across multiple VCF instances

For new installations, this release provides the ability to extend a common management and security model across two VCF on VxRail instance deployments by sharing a common Single Sign-On (SSO) Domain between the PSCs of multiple VMware Cloud Foundation instances, so that the management and VxRail VI Workload Domains are visible in each of the instances. This is known as a Federated SSO Domain.

What does this mean exactly?  Referring to Figure 2, this translates into the ability for Site B to join the SSO instance of Site A.  This allows VCF to further align with the VMware Validated Design (VVD) practice of sharing SSO domains where it makes sense, based on the Enhanced Linked Mode 150 ms RTT limitation.

This leverages a recent option made available in the VxRail first run to connect the VxRail cluster to an existing SSO Domain (PSCs).  So, when you stand up the VxRail cluster for the second management domain affiliated with the second VCF instance deployed in Site B, you connect it to the SSO (PSCs) created by the first management domain of the VCF instance in Site A.


Figure 2


Application Virtual Networks – Enabling Stronger Mobility and Flexibility with VMware Cloud Foundation

One of the new features in the 3.9.1 release of VMware Cloud Foundation (VCF) is use of Application Virtual Networks (AVNs) to completely abstract the hardware and realize the true value from a software-defined cloud computing model. Read more about it on VMware’s blog post here. Key note on this feature: It is automatically set up for new VCF 3.9.1 installations. Customers who are upgrading from a previous version of VCF would need to engage with the VMware Professional Services Organization (PSO) to configure AVN at this time. Figure 3 shows the message existing customers will see when attempting the upgrade.


Figure 3


VxRail 4.7.410 platform enhancements

VxRail 4.7.410 brings a slew of new hardware platforms and hardware configuration enhancements that expand your ability to support even more business-critical applications.


Figure 4


Figure 5


There you have it! We hope you find these latest features beneficial. Until next time…


Jason Marques

Twitter - @vwhippersnapper

Additional Resources

VxRail page on DellTechnologies.com

VCF on VxRail Guides

VCF on VxRail Whitepaper

VCF 3.9.1 on VxRail 4.7.410 Release Notes

VxRail Videos

VCF on VxRail Interactive Demos 


Read Full Blog
  • VMware
  • VxRail
  • vRealize Operations

Announcing all-new VxRail Management Pack for vRealize Operations

Daniel Chiu Daniel Chiu

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

Now adding VxRail awareness to your vRealize Operations 

January 22, 2020


As the new year rolls in, the VxRail team is slowly warming up to it.  Right as we settle back in after the holiday festivities, we’re on to another release announcement.  This time, it’s an entirely new software tool: VxRail Management Pack for vRealize Operations.

For those not familiar with vRealize Operations, it’s VMware’s operations management software tool that provides its customers the ability to maintain and tune their virtual application infrastructure with the aid of artificial intelligence and machine learning.  It connects to vCenter Server and collects metrics, events, configurations, and logs about the vSAN clusters and the virtual workloads running on them.  vRealize Operations also understands the topology and object relationships of the virtual application infrastructure.  With all these features, it is capable of driving intelligent remediation, ensuring configuration compliance, monitoring capacity and cost optimization, and maintaining performance optimization.  It’s an outcome-based tool designed to self-drive according to user-defined intents, powered by its AI/ML engine.

The VxRail Management Pack is an additional free-of-charge software pack that can be installed onto vRealize Operations to provide VxRail cluster awareness.  Without this Management Pack, vRealize Operations can still detect vSAN clusters but cannot discern that they are VxRail clusters.  The Management Pack consists of an adapter that collects 65 distinct VxRail events, analytics logic specific to VxRail, and three custom dashboards.  These VxRail events are translated into VxRail alerts on vRealize Operations so that users have helpful information to understand health issues, along with a recommended course of resolution.  With custom dashboards, users can easily go to VxRail-specific views to troubleshoot issues and make use of existing vRealize Operations capabilities in the context of VxRail clusters.

The VxRail Management Pack is not for every VxRail user because it requires a vRealize Operations Advanced or Enterprise license.  For enterprise customers or customers who have already invested in VMware’s vRealize Operations suite, it can be an easy add-on to help manage your VxRail clusters.

To download the VxRail Management Pack, go to VMware Solution Exchange: https://marketplace.vmware.com/vsx/.

Author:  Daniel Chiu, Dell EMC VxRail Technical Marketing

LinkedIn

Read Full Blog
  • HCI
  • VxRail

VxRail drives the hyperconverged evolution with the release of 4.7.410

KJ Bedard KJ Bedard

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

VxRail announces new hardware and software in this latest release

January 6, 2020

VxRail recently released a new version of our software, 4.7.410, which we announced at VMworld EMEA in November. This release brings cutting-edge enhancements for networking options and edge deployments, support for the Mellanox 100GbE PCIe NIC, and two new drive types.  

Improvements and newly developed functionality for VxRail 2-node implementations provide a more user-friendly experience, now supporting both direct connect and new switched connectivity options. VxRail 2-node is increasingly popular for edge deployments, and Dell EMC continues to bolster features and functionality in support of our edge and 2-node customer base.

This release also includes improvements for VxRail networking capabilities that more closely align VxRail with VMware’s best practices for NIC port maximums and network teaming policies.  VxRail networking enhancements handle network traffic more efficiently thanks to support for two additional load balancing policies. These new policies determine how to route network traffic in the event of bottlenecks, and the result is increased throughput on a NIC port. In addition, VxRail now supports the same three routing/teaming policies as VMware.

Dell EMC also announced support for Fibre Channel HBAs in mid-summer of 2019, and with that, the 4.7.410 release has broadened capabilities by supporting external storage integration.  VxRail recognizes that an external array is connected and makes it available to vCenter for use as secondary storage.  The storage is now automatically recognized during day 1 installation operations, or on day 2, when external storage is added to expand the storage capacity for VxRail.

In addition to the 4.7.410 release, VxRail added a new set of hardware choices and options, including the Mellanox ConnectX-5 100GbE NIC cards benefitting a variety of use cases such as media broadcasting, a larger 8TB 2.5” 7200 rpm HDD commonly used for video surveillance, and a 7.6TB “Value SAS SSD”. Value SAS drives offer attractive pricing (similar to SATA) with performance slightly below other SAS drives and are great for larger read-friendly workloads. And finally, there’s big news for the VxRail E Series platforms (E560/E560F/E560N), which all now support the Nvidia T4 GPU.  This is the first time VxRail is supporting GPU cards outside of the V Series. The Nvidia T4 GPU is optimized for high-performance workloads and suitable for running a combination of entry-level machine learning, VDI, and data inferencing.

These exciting new features and enhancements in the 4.7.410 release enable customers to expand the breadth of business workloads across all VxRail implementations.

Please check out these resources for more VxRail information:

VMWare EMEA Announcements 

VxRail Spec Sheet

VxRail Network Planning Guide

VxRail 4.7.x Release Notes (requires log-in)


By:  KJ Bedard - VxRail Technical Marketing Engineer

LinkedIn: https://www.linkedin.com/in/kj-bedard-50b25315/

Twitter: @KJbedard

Read Full Blog
  • VxRail
  • SAP HANA
  • SAP

New all-NVMe VxRail platforms deliver highest levels of performance

Daniel Chiu Daniel Chiu

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

Two new all-NVMe VxRail platforms deliver highest levels of performance

December 11, 2019

If you have not been tuned into the VxRail announcements at VMworld Barcelona last month, this is news to you.  VxRail is adding more performance punch to the family with two new all-NVMe platforms.   The VxRail E Series 560N and P Series 580N, with the 2nd Generation Intel® Xeon® Scalable Processors, offer increased performance while enabling customers to take advantage of decreasing NVMe costs.

Balancing workload and budget requirements, the dual-socket E560N provides a cost-effective, space-efficient 1U platform for read-intensive and other complex workloads.  Configured with up to 32TB of NVMe capacity, the E560N is the first all-NVMe 1U VxRail platform.  Based on the PowerEdge R640, the E560N can run a mix of workloads including data warehouses, ecommerce, databases, and high-performance computing.  With support for Nvidia T4 GPUs, the E560N is also equipped to run a wide range of modern cloud-based applications, including machine learning, deep learning, and virtual desktop workloads.

Built for memory-intensive, high-compute workloads, the new P580N is the first quad-socket VxRail platform and also the first all-NVMe 2U VxRail platform.  Based on the PowerEdge R840, the P580N can be configured with up to 80TB of NVMe capacity.  This platform is ideal for in-memory databases and has been certified by SAP for SAP HANA.  The P580N provides 2x the CPU compared to the P570/F and offers 25% more processing potential than virtual storage appliance (VSA) based 4-socket HCI platforms, which require a dedicated socket to run the VSA.

The completion of the SAP HANA certification for the P580N, which coincides with the P580N’s general availability, demonstrates the ongoing commitment to position VxRail as the HCI platform of choice for SAP HANA solutions.  The P580N provides even more memory and processing power than the SAP HANA certified P570F platform.  An updated Validation Guide for SAP HANA on VxRail will be available in early January on the Dell EMC SAP solutions landing page for VxRail.

For more information about VxRail E560N and P580N, please check out the resources below:


VxRail Spec Sheet

All Things VxRail at dellemc.com

SAP HANA Certification page for VxRail

Dell EMC VxRail SAP Solutions at dellemc.com

Available 12/20/2019 - Dell EMC Validation Guide for SAP HANA with VxRail


By: 

Daniel Chiu

linkedin.com/in/daniel-chiu-8422287

Vic Dery

@VxVicTx

linkedin.com/in/vicdery

Read Full Blog
  • VMware
  • VxRail
  • VMware Cloud Foundation

Innovation with Cloud Foundation on VxRail

Jason Marques Jason Marques

Wed, 03 Aug 2022 21:32:16 -0000

|

Read Time: 0 minutes

VCF 3.9 ON VxRail 4.7.300 Improves Management, Flexibility, and Simplicity at Scale 

December 2019

As you may already know, VxRail is the HCI foundation for the Dell Technologies Cloud Platform. The new Dell Technologies On Demand offerings bring automation and financial models similar to the public cloud to on-premises environments. VMware Cloud Foundation on Dell EMC VxRail allows customers to manage all cloud operations through a familiar set of tools, offering a consistent experience with a single vendor support relationship from Dell EMC.

Joint engineering between VMware and Dell EMC continuously improves VMware Cloud Foundation on VxRail. This has made VxRail the first hyperconverged system fully integrated with VMware Cloud Foundation SDDC Manager and the only jointly engineered HCI system with deep VMware Cloud Foundation integration. VCF on VxRail delivers unique integrations with Cloud Foundation that offer a seamless, automated upgrade experience. Customers adopting VxRail as the HCI foundation for Dell Technologies Cloud Platform will realize greater flexibility and simplicity when managing VMware Cloud Foundation on VxRail at scale. These benefits are further illustrated by the new features available in the latest version of VMware Cloud Foundation 3.9 on VxRail 4.7.300.

The first feature expands the ability to support global management and visibility across large, complex multi-region private and hybrid clouds. This is delivered through global multi-instance management of large-scale VCF 3.9 on VxRail 4.7.300 deployments with a single pane of glass (see figure below). Customers who have many VCF on VxRail instances deployed throughout their environment now have a common dashboard view into all of them to further simplify operations and gain insights.

Figure 1

The new features don’t just stop there: VCF 3.9 on VxRail 4.7.300 provides greater networking flexibility. VMware Cloud Foundation 3.9 on VxRail 4.7.300 adds support for Dell EMC VxRail layer 3 networking stretched cluster configurations, allowing customers to further scale VCF on VxRail environments for more highly available use cases in order to support mission-critical workloads. The layer 3 support applies to both NSX-V and NSX-T backed workload domain clusters.

Another area of new network flexibility features is the ability to select the host physical network adapters (pNICs) you want to assign for NSX-T traffic on your VxRail workload domain cluster (see figure below). Users can now select the pNICs used for the NSX-T Virtual Distributed Switch (N-VDS) from the SDDC Manager UI in the Add VxRail Cluster workflow. This gives you the flexibility to choose the VxRail host physical network configuration that best aligns with your desired NSX-T configuration and business requirements. Do you want to deploy your VxRail clusters using the base network daughter card (NDC) ports on each VxRail host for all standard traffic but use separate PCIe NIC ports for NSX-T traffic? Go for it! Do you want to use 10GbE connections for standard traffic and 25GbE for NSX-T traffic? We got you there too! Host network configuration flexibility is now in your hands and is only available with VCF on VxRail.

Figure 2

Finally, no good VCF on VxRail conversation can go by without talking about lifecycle management. VMware Cloud Foundation 3.9 on VxRail 4.7.300 also delivers simplicity and flexibility for managing at scale, with greater control over workload domain upgrades. Customers now have the flexibility to select which clusters within a multi-cluster workload domain to upgrade, in order to better align with business requirements and maintenance windows. Upgrading VCF on VxRail clusters is further simplified with VxRail Smart LCM (introduced in the 4.7.300 release), which determines exactly which firmware components need to be updated on each cluster and pre-stages each node, saving up to 20% of upgrade time (see next figure). Scheduling of these cluster upgrades is also supported. With VCF 3.9 and VxRail Smart LCM, you can streamline the upgrade process across your hybrid cloud.

Figure 3

As you can see the innovation continues with Cloud Foundation on VxRail.

Jason Marques  Twitter - @vwhippersnapper  Linked In -  linkedin.com/in/jason-marques-47022837

Additional Resources

VxRail page on DellEMC.com

VCF on VxRail Guides

VCF on VxRail Whitepaper

VMware release notes for VCF 3.9 on VxRail 4.7.300

VxRail Videos

VCF on VxRail Interactive Demos 


Read Full Blog
  • VxRail

Analytical Consulting Engine (ACE)

Christine Spartichino Christine Spartichino

Mon, 17 Aug 2020 18:31:30 -0000

|

Read Time: 0 minutes

VxRail plays its ACE, now generally available 

November 2019

VxRail ACE (Analytical Consulting Engine), the new artificial intelligence-infused component of the VxRail HCI System Software, was announced just a few months ago at Dell Technologies World and has been in global early access. Over 500 customers leveraged the early access program for ACE, allowing developers to collect feedback and implement enhancements prior to general availability of the product. It is with great excitement that we can say VxRail ACE is now generally available to all VxRail customers. By incorporating continuous innovation/continuous development (CIDC) using the Pivotal Platform (also known as Pivotal Cloud Foundry) container-based framework, the Dell EMC developers behind ACE have made rapid iterations to improve the offering, and customer demand has driven new features onto the roadmap. ACE is holding true to its design principles and commitment to deliver adaptive, frequent releases.


Figure 1. ACE Design Principles and Goals

VxRail ACE is a centralized data collection and analytics platform that uses machine learning capabilities to perform capacity forecasting and self-optimization, helping you keep your HCI stack operating at peak performance and ready for future workloads. In addition to some of the initial features available during early access, ACE now provides new functionality for intelligent upgrades of multiple clusters (see image below). You can now see the current software version of each cluster along with all available upgrade versions. ACE allows you to select the desired version for each VxRail cluster. You can now manage at scale to standardize across all sites and clusters, with the ability to customize by cluster. This becomes advantageous when some sites or clusters need to remain at a specific version of VxRail software.

If you haven’t seen ACE in action yet, check out the additional links and videos below that showcase the features described in this post. For our 6,000+ VxRail customers, please visit our support site and Admin Guide to learn how to access ACE.

Christine Spartichino, Twitter - @cspartichino   Linked In -  linkedin.com/in/spartichino

For more information on VxRail, check out these great resources:

VxRail ACE Solution Brief

VxRail ACE Overview Demos

VxRail ACE Updates

VxRail ACE Announcement

All Things VxRail

Support (for existing customers)

Read Full Blog