
Delivering VxRail simplicity with vLCM compatibility
Tue, 28 Sep 2021 17:23:22 -0000
As the days start off with cooler mornings and later sunrises, we welcome the autumn season. Growing up, each season brought forth its own traditions and activities. While venturing through corn mazes was fun, autumn first and foremost meant that it was apple-picking time. Combing through the orchard, you're constantly looking for which apple to pick, even comparing ones from the same branch because no two are alike. The same is true of the newly introduced VMware vSphere Lifecycle Manager (vLCM) compatibility in VxRail 7.0.240: the VxRail implementation differs from that of the Dell EMC vSAN Ready Nodes, even though both come from the same vLCM "branch."
Now that VxRail offers vLCM compatibility, it's a good opportunity to provide an update to Cliff's blog post from last year, in which he provided a comprehensive review of the customer experience with lifecycle management of vSAN Ready Nodes and VxRail clusters. While my previous blog post about the VxRail 7.0.240 release provided a summary of VxRail's vLCM implementation and the added value, I'll focus more on the customer experience this time. Combining the practice of Continuously Validated States to ensure cluster integrity with a VxRail-driven experience truly showcases how automated the vLCM process can be.
In this blog, I’ll cover the following:
- An overview of VMware vLCM
- A comparison of how to establish a baseline image
- A comparison of how to perform a cluster update
Overview of VMware vLCM
Figure 1: VMware vSphere Lifecycle Manager (vLCM) framework
VMware vLCM was introduced in vSphere 7.0 as a framework that allows software and hardware to be updated together as a single system. Combining the ESXi image with component firmware and drivers in a single workflow helps streamline the update experience. To do that, server vendors are tasked with developing their own plugins into the vLCM framework to perform the function of the firmware and drivers addon depicted in Figure 1. The server vendor implementation provides the functionality to build the hardware catalog of firmware and drivers for the server and supply the bits to vCenter. For some components, the server vendor does not supply the firmware and drivers and relies on the individual component vendors to provide the addon capability. Put together, the software and hardware form a cluster image. To start using vLCM, you build out a cluster image and assign it as the baseline image. For each future update, you build out a new cluster image and assign it as the desired state image. Drift detection between the two determines what needs to be remediated for the cluster to arrive at the desired state.
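Because the desired state and drift detection are exposed through the vSphere Automation REST API, the flow is easy to see programmatically. Here is a minimal Python sketch, assuming a placeholder vCenter address, credentials, and cluster ID; the endpoint paths follow the vSphere 7.0 Update 2 Automation API, so verify them against the documentation for your build:

```python
import requests

VCENTER = "vcenter.example.com"   # placeholder vCenter FQDN
CLUSTER = "domain-c8"             # placeholder cluster managed object ID
AUTH = ("administrator@vsphere.local", "password")

# Authenticate and obtain an API session token.
token = requests.post(f"https://{VCENTER}/api/session",
                      auth=AUTH, verify=False).json()  # verify=False: lab only
headers = {"vmware-api-session-id": token}

# Read the cluster's current desired-state specification (the cluster image).
image = requests.get(
    f"https://{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software",
    headers=headers, verify=False)
print(image.json())

# Kick off a drift (compliance) check of all hosts against that image.
task = requests.post(
    f"https://{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software"
    "?action=check-compliance&vmw-task=true",
    headers=headers, verify=False)
print("Compliance check task:", task.json())
```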
For Dell EMC vSAN Ready Nodes, you use the OMIVV (OpenManage Integration with VMware vCenter) plugin to vCenter to use the vLCM framework. VxRail, in turn, has enhanced VxRail Manager to plug into vCenter in its vLCM implementation. The difference between the two implementations really drives home that vSAN Ready Nodes, whether they are Dell EMC's or another server vendor's, deliver a customer-driven experience versus a VxRail-driven experience. Both implementations have their merits because they target different customer problems. The customer-driven experience makes sense for customers who have already invested the IT resources to have more operational control over what is installed on their clusters. For customers looking for operational efficiency that reduces and simplifies their day-to-day responsibility to administer and secure infrastructure, the VxRail-driven experience provides them with the confidence to do so.
Enabling VMware vLCM with the baseline image
A baseline image is a cluster image that you have identified as the version set that delivers that happy state for your cluster. The IT operations team is happy because the cluster is running secure and stable code that complies with the company's security standards. End users of the applications running on the cluster are happy because they are getting the consistent service required to perform their jobs.
For Dell EMC vSAN Ready Nodes, or any vSAN Ready Nodes, users first need to determine what the baseline image should be before deploying their clusters. That requires research and testing to validate that the set of firmware and drivers is compatible and interoperable with the ESXi image. Importing it into the vLCM framework then involves a series of steps.

Figure 2: Customer-driven process to establish a baseline image for Dell EMC vSAN Ready Nodes
Dell EMC vSAN Ready Nodes use the OMIVV plugin to interface with vCenter Server. The process involves the following steps:
- The user first deploys the OMIVV virtual machine on vCenter.
- Once deployed, the user has to register it with vCenter Server.
- From the vCenter UI, the user must configure the host credentials profile for iDRAC and the host.
- To acquire the bits for the firmware and drivers, the user needs to install the Dell Repository Manager, which provides the depot for all firmware and drivers. Here is where the user can build the catalog of firmware and drivers component by component (BIOS, NICs, storage controllers, IO controllers, and so on) for their cluster.
- With the catalog in place, the user uploads each file into an NFS/CIFS share that the vCenter Server can access.
- From the vCenter UI, the user creates a repository profile that points to the share with the firmware and drivers. Next is defining the cluster profile with the ESXi image running on the cluster and the repository profile. This cluster profile becomes the baseline image for future compliance checks and drift remediation scans. (The sketch after this list shows the desired-state model these steps feed into.)
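OMIVV drives these steps through the vCenter UI, but the underlying desired-state model is the vLCM draft workflow in the vSphere Automation REST API. The following is a hedged sketch of that model, not of the OMIVV plugin itself; the vCenter address, cluster ID, and base-image version are placeholders, and the endpoint paths should be verified against the vLCM API documentation for your build:

```python
import requests

VCENTER = "vcenter.example.com"   # placeholder
CLUSTER = "domain-c8"             # placeholder cluster ID
AUTH = ("administrator@vsphere.local", "password")

token = requests.post(f"https://{VCENTER}/api/session",
                      auth=AUTH, verify=False).json()  # verify=False: lab only
headers = {"vmware-api-session-id": token}
drafts = f"https://{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software/drafts"

# 1. Create a working draft of the cluster image.
draft_id = requests.post(drafts, headers=headers, verify=False).json()

# 2. Set the ESXi base image version in the draft (placeholder version string).
requests.put(f"{drafts}/{draft_id}/software/base-image",
             headers=headers, verify=False,
             json={"version": "7.0.2-0.0.17867351"})

# 3. Commit the draft; it becomes the cluster's desired-state (baseline) image.
requests.post(f"{drafts}/{draft_id}?action=commit&vmw-task=true",
              headers=headers, verify=False,
              json={"message": "Initial baseline image"})
```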
For VxRail, vLCM is not automatically enabled once your cluster is updated to VxRail 7.0.240. It's a decision you make based on the benefits that vLCM compatibility provides (described in my previous blog post), and once enabled, it cannot be disabled. To enable vLCM, your VxRail cluster needs to be running in a Continuously Validated State, so it is a good idea to run the compliance checker first.
Once you have made the decision to move forward, VxRail's vLCM implementation is astoundingly simple! There's no need for you to define the baseline image because you're already running in a Continuously Validated State. The VxRail implementation hides the plugin interaction and uses the vLCM APIs to automate all the previously described manual steps. As a result, enabling vLCM and establishing the baseline image have been reduced to a three-step process.
- Enter the vCenter user credentials.
- VxRail automatically performs a compliance check to verify that the cluster is running in a Continuously Validated State.
- VxRail automatically ports the Continuously Validated State into the baseline image.
And that's it! The following video clip captures the compliance check you can run first and then the three-step process to enable vLCM:
Figure 3: How to enable vLCM on VxRail
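If you prefer automation over the VxRail Manager UI, VxRail also publishes a REST API that authenticates with vCenter SSO credentials. The endpoint and payload below are assumptions on my part, included only to illustrate the shape of such a call; confirm the actual vLCM enablement path in the VxRail API reference for 7.0.240 before using it:

```python
import requests

VXM = "vxrail-manager.example.com"                  # placeholder VxRail Manager FQDN
AUTH = ("administrator@vsphere.local", "password")  # vCenter SSO credentials

# Hypothetical endpoint and body, for illustration only. Check the official
# VxRail API reference for the exact vLCM enablement call in your release.
resp = requests.post(
    f"https://{VXM}/rest/vxm/v1/vlcm/enablement",
    auth=AUTH, verify=False,  # verify=False: lab use only
    json={"vcenter_username": AUTH[0], "vcenter_password": AUTH[1]},
)
print(resp.status_code, resp.text)
```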
Cluster update with vLCM
For Dell EMC vSAN Ready Nodes, the customer-driven process to build the desired state image is similar to the one for the baseline image. It requires investigation, research, and testing to define the next happy state, and the use of the Dell Repository Manager to save and export the hardware catalog to vCenter. From there, users build out a cluster image that includes the ESXi image and the hardware catalog, and that becomes the desired state image.
Not surprisingly, performing a cluster update with vLCM doesn't fall too far from the VxRail tree: VxRail streamlines that process down to a few steps within VxRail Manager. By using the vLCM APIs, VxRail incorporates the vLCM process into the VxRail Manager experience for a complete LCM experience.
Figure 4: Process to perform cluster update with VxRail
- From the new update advisor tool, select the target VxRail version to which you want to update your cluster. The update advisor then generates a drift remediation report (called an advisory report) that provides a component-by-component analysis of what needs to be updated. This information, along with the estimated update time, will help you plan the length of your maintenance window.
- Running a cluster readiness precheck ahead of your maintenance window is good practice. It allows you time to address any issues that may be found ahead of your scheduled window or to plan for additional time.
- Once the precheck passes, VxRail Manager incorporates the vLCM process into its own experience. VxRail Manager includes the vendor addon capability in vLCM so that you can add separate firmware and drivers that are not part of the VxRail Continuously Validated State, such as a Fibre Channel HBA. Using the vLCM APIs, VxRail can automatically port the Continuously Validated State LCM bundle and any non-VxRail-managed component firmware and drivers into the cluster image for remediation.
- If you want to customize the cluster image even more with NSX-T or Tanzu VIBs, you can add them from the vCenter UI. Once they are included in the desired state image, you have the option of initiating the remediation from either vCenter or the VxRail Manager UI. For those not adding these VIBs, the entire cluster update experience stays within the simple and familiar VxRail Manager experience. (For script-driven operations, see the sketch after this list.)
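These same steps can also be driven through the VxRail REST API. This is a minimal sketch under stated assumptions: precheck and upgrade endpoints exist in recent VxRail API versions, but the exact paths and request-body fields shown here (bundle locator, credential objects) are illustrative placeholders to be checked against the API reference for your release:

```python
import requests

VXM = "vxrail-manager.example.com"                  # placeholder
AUTH = ("administrator@vsphere.local", "password")  # vCenter SSO credentials

# Run the cluster readiness precheck ahead of the maintenance window.
# Path and body follow recent VxRail API versions; verify for your release.
precheck = requests.post(
    f"https://{VXM}/rest/vxm/v1/lcm/precheck",
    auth=AUTH, verify=False,  # verify=False: lab use only
    json={"health_precheck_type": "FULL_PRECHECK"},
)
print("Precheck request:", precheck.status_code, precheck.text)

# Start the cluster update using a previously staged composite bundle.
# The bundle path and credential field names are placeholders.
upgrade = requests.post(
    f"https://{VXM}/rest/vxm/v1/lcm/upgrade",
    auth=AUTH, verify=False,
    json={
        "bundle_file_locator": "/tmp/vxrail_upgrade_bundle.zip",
        "vxrail": {
            "vc_admin_user": {"username": AUTH[0], "password": AUTH[1]},
            "vxm_root_user": {"username": "root", "password": "password"},
        },
    },
)
print("Upgrade request:", upgrade.status_code, upgrade.text)
```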
Check out the following video clip to see this end-to-end process in action:
Figure 5: How to update your VxRail cluster with VMware vLCM
Conclusion
With both Dell EMC vSAN Ready Nodes and VxRail using the same vLCM framework, it's a much easier task to deliver an apples-to-apples comparison that clearly shows the simplicity of VxRail LCM with vLCM compatibility. This vLCM implementation is a perfect example of how VxRail is built with VMware and made to enhance VMware. We've integrated the innovations of vLCM into the simple and streamlined VxRail-driven experience. As VMware looks to deliver more features to vLCM, VxRail is well positioned to present these capabilities in VxRail fashion.
For more information about this topic, check out the latest podcast: https://infohub.delltechnologies.com/p/vxrail-vlcm-compatibility/
Author Information
Daniel Chiu, Senior Technical Marketing Manager at Dell Technologies
LinkedIn: https://www.linkedin.com/in/daniel-chiu-8422287/
Related Blog Posts

Learn About the Latest Major VxRail Software Release: VxRail 8.0.000
Mon, 09 Jan 2023 14:45:15 -0000
Happy New Year! I hope you had a wonderful and restful holiday, and you have come back reinvigorated. Because much like the fitness centers in January, this VxRail blog site is going to get busy. We have a few major releases in line to greet you, and there is much to learn.
First in line is the VxRail 8.0.000 software release that provides introductory support for VMware vSphere 8, which has created quite the buzz these past few months. Let’s walk through the highlights of this release.
- For VxRail users who want to be early adopters of vSphere 8, VxRail 8.0.000 provides the first upgrade path for VxRail clusters to transition to VMware's latest vSphere software train. Only clusters with VxRail nodes based on either the 14th generation or 15th generation PowerEdge servers can upgrade to vSphere 8, because VMware has removed support for a legacy BIOS driver used by 13th generation PowerEdge servers. Importantly, users need to upgrade their vCenter Server to version 8.0 before a cluster upgrade, and vSAN 8.0 clusters require users to upgrade their existing vSphere and vSAN licenses. (A quick version inventory, like the sketch after this list, is a useful first step.) In VxRail 8.0.000, VxRail Manager has been enhanced to check platform compatibility and warn users of license issues to prevent unsupported configurations. Users should always consult the release notes to fully prepare for a major upgrade.
- VxRail 8.0.000 also provides introductory support for vSAN Express Storage Architecture (ESA), which has garnered much attention for its potential while eliciting just as much curiosity because of its newness. To level set, vSAN ESA is an optimized version of vSAN that exploits the full potential of the very latest in hardware, such as multi-core processing, faster and larger capacity memory, and NVMe technology, to unlock new capabilities and drive new levels of performance and efficiency. You can get an in-depth look at vSAN ESA in David Glynn's blog. It is important to note that vSAN ESA is an alternative, optional vSAN architecture. The existing architecture (now referred to as the Original Storage Architecture (OSA)) is still available in vSAN 8. Users choose which architecture to use when deploying a cluster.
To deploy VxRail clusters with vSAN ESA, you need to order brand-new VxRail nodes specifically configured for vSAN ESA. This new architecture eliminates the use of discrete cache and capacity drives: nodes require all-NVMe storage drives, and each drive contributes to both cache and capacity. VxRail 8.0.000 offers two platform choices, the E660N and the P670N, and users select either 3.2 TB or 6.4 TB TLC NVMe storage drives to populate each node in their new VxRail cluster with vSAN ESA. To learn about the configuration options, see David Glynn's blog.
- The vSphere 8 support in VxRail 8.0.000 also includes support for the increased cache size for VxRail clusters with vSAN 8.0 OSA. The increase from 600 GB to 1.6 TB will provide a significant performance gain, and VxRail already has cache drives that can take advantage of the larger cache size. It is easier to deploy a new cluster with a larger cache size than to expand the current cache size on an existing cluster. (For existing clusters, nodes need their disk groups rebuilt when the cache is expanded, which can be a lengthy and tedious endeavor.)
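As referenced in the first bullet above, a simple inventory of current vCenter and ESXi versions is a sensible starting point before planning the jump to vSphere 8. Here is a small sketch using the pyVmomi library (pip install pyvmomi); the host name and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# vCenter Server must be on 8.0 before any cluster upgrade to vSphere 8.
print("vCenter:", si.content.about.fullName)

# Report the current ESXi build on every host in the inventory.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(f"{host.name}: {host.config.product.fullName}")

Disconnect(si)
```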
Major VMware releases like vSphere 8 often shine a light on the differentiated experience that our VxRail users enjoy. The checklist of considerations only grows when you’re looking to upgrade to a new software train. VxRail users have come to expect that VxRail provides them the necessary guardrails to guide them safely along the upgrade path to reach their destination. The 800,000 hours of test run time performed by our 100+ staff members, who are dedicated to maintaining the VxRail Continuously Validated States, is what gives our customers the confidence to move fearlessly from one software version to the next. And for customers looking to explore the potential of vSAN ESA, the partnership between VxRail and VMware engineering teams adds to why VxRail is the fastest and most effective path for users to maximize the return on their investment in VMware’s latest technologies.
If you’re interested in upgrading to VxRail 8.0.000, please read the release notes.
If you’re looking for more information about vSAN ESA and VxRail’s support for vSAN ESA, check out this blog.
Author: Daniel Chiu

Learn About the Latest Major VxRail Software Release: VxRail 7.0.400
Thu, 22 Sep 2022 13:11:44 -0000
As many parts of the world welcome the fall season and the cooler temperatures that it brings, one area that has not cooled down is VxRail. The latest VxRail software release, 7.0.400, introduces a slew of new features that will surely fire up our VxRail customers and spur them to schedule their next cluster update.
VxRail 7.0.400 provides support for VMware ESXi 7.0 Update 3g and VMware vCenter Server 7.0 Update 3g. All existing platforms that support VxRail 7.0 can upgrade to VxRail 7.0.400. Upgrades from VxRail 4.5 and 4.7 are supported, which is an important consideration because standard support from Dell for those versions ends on September 30.
VxRail 7.0.400 software introduces features in the following areas:
- Life cycle management
- Dynamic nodes
- Security
- Configuration flexibility
- Serviceability
This blog delves into major enhancements in those areas. For a more comprehensive rundown of the features added to this release, see the release notes.
Life cycle management
Because life cycle management is a key area of value differentiation for our VxRail customers, the VxRail team is continuously looking for ways to further enhance the life cycle management experience. One aspect that has come into recent focus is handling cluster update failures caused by VxRail nodes failing to enter maintenance mode.
During a cluster update, nodes are put into maintenance mode one at a time. Their workloads are moved onto the remaining nodes in the cluster to maintain availability while each node goes through software, firmware, and driver updates. VxRail 7.0.350 introduced capabilities to notify users of situations, such as host pinning and mounted VM tools on a host, that can cause nodes to fail to enter maintenance mode, so users can address those situations before initiating a cluster update.
VxRail 7.0.400 addresses this cluster update failure scenario even further by being smarter about how it handles the issue once the cluster update is in progress. If a node fails to enter maintenance mode, VxRail automatically skips that node and moves on to the next one. Previously, this scenario would cause the cluster update operation to fail. Now, users can run the cluster update and process as many nodes as possible, then run a cluster update retry that targets only the nodes that were skipped. The combination of skipping nodes and the targeted retry of those skipped nodes significantly improves the cluster update experience.
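To make the control flow concrete, here is a toy Python illustration of the skip-and-retry pattern described above. It is not VxRail code, and the function names are invented for the example:

```python
# Toy illustration of skip-and-retry during a rolling update (not VxRail code).
def rolling_update(nodes, enter_maintenance_mode, update_node):
    skipped = []
    for node in nodes:
        if not enter_maintenance_mode(node):
            # Pre-7.0.400 behavior: the whole update would fail here.
            # 7.0.400 behavior: record the node and keep going.
            skipped.append(node)
            continue
        update_node(node)
    return skipped

# First pass updates every node it can; the retry targets only skipped nodes.
# remaining = rolling_update(cluster_nodes, try_enter_mm, apply_update)
# rolling_update(remaining, try_enter_mm, apply_update)
```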
Figure 1: Addressing nodes failing to enter maintenance mode
In VxRail 7.0.400, a Dell RecoverPoint for VMs compatibility check has been added to the update advisory report, the cluster update pre-check, and the cluster update operation to inform users of a potential incompatibility scenario. Having data protection in an unsupported state puts an environment at risk. The addition of the compatibility check is great news for RecoverPoint for VMs users because this previously manual task is now automated, helping to reduce risk and streamline operations.
VxRail dynamic nodes
Since the introduction of VxRail dynamic nodes last year, we've incrementally added more storage protocol support for increased flexibility. NFS, CIFS, and iSCSI support were added earlier this year. In VxRail 7.0.400, users can configure their VxRail dynamic nodes with storage from Dell PowerStore using NVMe over Fabrics with TCP transport (NVMe-oF/TCP). NVMe provides much faster data access compared to SATA and SAS. This support requires Dell PowerStoreOS 2.1 or later and Dell PowerSwitch with the virtual Dell SmartFabric Storage Service appliance.
VxRail cluster deployment using NVMe-oF/TCP is not much different from setting up iSCSI storage as the primary datastore for VxRail dynamic node clusters. The cluster must go through the Day 1 bring-up activities to establish IP connectivity. From there, the user can set up the port group, VMkernel adapters, and NVMe-oF/TCP adapter to access the storage shared from the PowerStore.
Setting up NVMe-oF/TCP between the VxRail dynamic node cluster and PowerStore is separate from the cluster deployment activities. You can find more information about deploying NVMe-oF/TCP here: https://infohub.delltechnologies.com/t/smartfabric-storage-software-deployment-guide/.
VxRail 7.0.400 also adds VMware Virtual Volumes (vVols) support for VxRail dynamic nodes. Cluster deployment with vVols over Fibre Channel follows a workflow similar to cluster deployment with a VMFS datastore. Provisioning and zoning of the Virtual Volume need to be done before the Day 1 bring-up. The VxRail Manager VM is installed onto the datastore as part of the Day 1 bring-up.
For vVols over IP, the Day 1 bring-up needs to be completed first to establish IP connectivity. Then the Virtual Volume can be mounted and a datastore can be created from it for the VxRail Manager VM.
Figure 2: Workflow to set up VxRail dynamic node clusters with VMware Virtual Volumes
VxRail 7.0.400 introduces the option for customers to deploy a local VxRail managed vCenter Server with their VxRail dynamic node cluster. The Day 1 bring-up installs a vCenter Server onto the cluster with a 60-day evaluation license, and the customer is required to purchase their own vCenter Server license. VxRail customers are accustomed to having a Standard edition vCenter Server license packaged with their VxRail purchase. However, that vCenter Server license is bundled with the VMware vSAN license, not the VMware vSphere license, and because dynamic node clusters do not run vSAN, the bundled license does not apply to them.
VxRail 7.0.400 supports the use of Dell PowerPath/VE with VxRail dynamic nodes, which is important to the many storage customers who have been relying on PowerPath software for multipathing capabilities. With VxRail 7.0.400, VxRail dynamic nodes can use PowerPath with PowerStore, PowerMax, or Unity XT storage arrays via the NFS, iSCSI, or NVMe over Fibre Channel storage protocols.
Security
Another topic that continues to burn bright, no matter the season, is security. As threats continue to evolve, it’s important to continue to advance security measures for the infrastructure. VxRail 7.0.400 introduces capabilities that make it even easier for customers to further protect their clusters.
While the security configuration rules set forth by the Security Technical Implementation Guide (STIG) are required for customers working in or with the U.S. federal government and Department of Defense, other customers can also benefit from hardening their clusters. VxRail 7.0.400 automatically applies a subset of the STIG rules on all VxRail clusters. These rules protect VM controls and the underlying SUSE Linux operating system controls. The rules are applied without any user intervention, both upon an upgrade to VxRail 7.0.400 and at cluster deployment with this software version, providing a seamless experience. This feature raises the security baseline for all VxRail clusters starting with VxRail 7.0.400.
Digital certificates are used to verify external communication between trusted entities. VxRail customers have two options for digital certificates. Self-signed certificates use VxRail as the certificate authority to sign the certificate; customers use this option if they don't need a Certificate Authority or choose not to pay for the service. Otherwise, customers can import a certificate signed by a Certificate Authority into VxRail Manager. Both options require certificates to be shared between VxRail Manager and vCenter Server for secure communication to manage the cluster.
Previously, both options required manual intervention, at varying levels, to manage certificate renewals and ensure uninterrupted communication between the VxRail Manager and the vCenter Server. Loss of communication can affect cluster management operations, though not the application workloads.
Figure 3: Workflow for managing certificates
With VxRail 7.0.400, all areas of managing certificates have been simplified to make it easier and safer to import and manage certificates over time. Now, VxRail certificates can be imported via the VxRail Manager UI and the API, and there is an API to import the vCenter certificate into the VxRail trust store. Renewals can be managed automatically via VxRail Manager so that customers do not need to constantly check for expiring certificates and replace them; alternatively, new API calls have been created to perform these activities. While these features simplify the experience for customers already using certificates, the simplified certificate management will hopefully encourage more customers to use certificates to further secure their environments.
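As a rough illustration of what the trust-store import could look like from Python, consider the sketch below. The endpoint path and payload field are assumptions on my part; check the VxRail 7.0.400 API reference for the documented certificate APIs:

```python
import requests

VXM = "vxrail-manager.example.com"                  # placeholder
AUTH = ("administrator@vsphere.local", "password")  # vCenter SSO credentials

# Read the vCenter certificate to be added to the VxRail trust store.
with open("vcenter_cert.pem") as f:
    cert_pem = f.read()

# Hypothetical endpoint and body, for illustration only; confirm both in the
# VxRail API reference for your release.
resp = requests.post(
    f"https://{VXM}/rest/vxm/v1/trust-store/certificates",
    auth=AUTH, verify=False,  # verify=False: lab use only
    json={"cert": cert_pem},
)
print(resp.status_code, resp.text)
```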
VxRail 7.0.400 also introduces an end-to-end upgrade bundle integrity check, which has been added to the pre-update health check and the cluster update operation. The signing certificate is verified to ensure the validity of the root certificate authority, the digital certificate itself is verified, and the bundle manifest is checked to ensure that the contents of the bundle have not been altered.
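The manifest portion of such a check is easy to picture. Here is a generic Python illustration, not the VxRail implementation, that verifies file digests against a simple JSON manifest of an assumed shape:

```python
import hashlib
import json

def verify_manifest(manifest_path):
    """Verify bundle files against a manifest of SHA-256 digests.

    Assumed manifest shape: {"files": {"esxi.zip": "<sha256 hex digest>", ...}}
    """
    with open(manifest_path) as f:
        manifest = json.load(f)
    for name, expected in manifest["files"].items():
        with open(name, "rb") as bundle_file:
            digest = hashlib.sha256(bundle_file.read()).hexdigest()
        if digest != expected:
            raise ValueError(f"{name} failed the integrity check")
    print("All bundle files match the manifest")
```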
Configuration flexibility
With any major VxRail software release comes enhancements in configuration flexibility. VxRail 7.0.400 provides more flexibility for base networking and more flexibility in using and managing satellite nodes.
Previous VxRail software releases introduced long-awaited support for dynamic link aggregation for vSAN and vSphere vMotion traffic, and support for two vSphere Distributed Switches (VDS) to separate management traffic from vSAN and vMotion traffic. VxRail 7.0.400 removes the previous restriction of four ports for base networking: customers can now deploy clusters with six or eight ports for base networking while employing link aggregation, multiple VDS, or both.
Figure 4: Two VDS with six NIC ports
Figure 5: Two VDS with eight NIC ports with link redundancy for vMotion traffic and link aggregation for vSAN traffic
With VxRail 7.0.400, customers can convert their vSphere Standard Switch on their satellite nodes to a customer-managed VDS after deployment. This support allows customers to more easily manage their VDS and satellite nodes at scale.
Serviceability
The most noteworthy serviceability enhancement I want to mention is the ability to create service tickets from the VxRail Manager UI. This functionality makes it easier for customers to submit service tickets, which can speed resolution time and improve the feedback loop for providing product improvement suggestions. This feature requires an active connection with the Embedded Service Enabler to Dell Support Services. Customers can submit up to five attachments to support a service ticket.
Figure 6: Input form to create a service request
Conclusion
VxRail 7.0.400 is no doubt one of the more feature-heavy VxRail software releases in some time. Customers big and small will find value in the capability set. This software release enhances existing features while also introducing new tools that further focus on VxRail operational simplicity. While this blog covers the highlights of this release, I recommend that you review the release notes to further understand all the capabilities in VxRail 7.0.400.