

The latest news about VxRail releases and updates



Building on VxRail HCI System Software: the advantages of multi-cluster active management capabilities

Daniel Chiu

Tue, 29 Sep 2020 19:03:05 -0000



The signs of autumn are all around us, from the total takeover of pumpkin-spiced everything to the beautiful fall foliage worthy of Bob Ross’s inspiration. Just as autumn brings change, so does the latest release of VxRail ACE, or should I preface that with ‘formerly known as’? I’ll get to that explanation shortly.

This release introduces multi-cluster update functionality that will further streamline the lifecycle management (LCM) of your VxRail clusters at scale. With this active management feature comes a new licensing structure and role-based access control to enable the active management of your clusters.

Formerly known as VxRail ACE

The colors of the leaves are changing and so is the VxRail ACE name. The brand name VxRail ACE (Analytical Consulting Engine) will no longer be used as of this release. While it was a catchy name and easy to say, there are two reasons for this change. First, Analytical Consulting Engine no longer describes the full value of the offering or how we intend to expand its features in the future. The functionality has grown beyond the analytics and monitoring capabilities originally introduced in VxRail ACE a year ago and now includes several valuable LCM operations that greatly expand its scope. Second, VxRail ACE has always been part of the VxRail HCI System Software offering. Describing the functionality as part of the overall value of VxRail HCI System Software, instead of giving it its own name, simplifies the message of VxRail’s value differentiation.

Going forward, the capability set (that is, analytics, monitoring, and LCM operations) will be referred to as SaaS multi-cluster management, a more accurate description. The web portal is now referred to as MyVxRail.

Cluster updates

Cluster updates is the first active management feature offered by SaaS multi-cluster management. It builds on the existing LCM operational tools for planning cluster updates: on-demand pre-update health checks (LCM pre-check) and update bundle downloads and staging. Now you can initiate updates of your VxRail clusters at scale from MyVxRail. The benefits of cluster updates on MyVxRail tie closely with existing LCM operations. During the planning phase, you can run LCM pre-checks of the clusters you want to update. This informs you if a cluster is ready for an update and pinpoints areas for remediation for clusters that are not ready. From there, you can schedule your maintenance window to perform a cluster update and, from MyVxRail, initiate the download and staging of the VxRail update bundle onto those clusters. With this release, you can now execute cluster updates for those clusters. Now that’s operational efficiency!

When setting up a cluster update operation, you have the benefit of two pieces of information: a time estimate for the update and the change data. The update time estimate helps you determine the length of the maintenance window; it is generated from telemetry gathered across the install base to provide more accurate information. The change data is the list of components that require an update to reach the target VxRail version.
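As an illustration of how an update time estimate might feed maintenance-window planning, here is a small sketch. The per-node duration, fixed overhead, and safety buffer are hypothetical values for illustration, not figures produced by MyVxRail:

```python
# Hypothetical maintenance-window estimate for a rolling, one-node-at-a-time
# cluster update. All durations below are illustrative assumptions.
from math import ceil

def estimate_window_minutes(node_count: int,
                            per_node_minutes: int = 45,
                            fixed_overhead_minutes: int = 30,
                            buffer_pct: float = 0.25) -> int:
    """Fixed overhead plus per-node update time, padded with a safety
    buffer and rounded up to the nearest 15 minutes."""
    raw = fixed_overhead_minutes + node_count * per_node_minutes
    padded = raw * (1 + buffer_pct)
    return ceil(padded / 15) * 15

print(estimate_window_minutes(4))  # a 4-node cluster -> 270 minutes
```

The real estimate from MyVxRail already accounts for install-base telemetry, so a padded calculation like this would only be a sanity check on the scheduled window.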

Figure 1  MyVxRail Updates tab

Role-based access control

Active management requires role-based access control so that you can grant permissions to the appropriate individuals to perform configuration changes to your VxRail clusters. You don’t want just anyone with access to MyVxRail performing cluster updates. SaaS multi-cluster management leverages vCenter access control for role-based access. From MyVxRail, you can register MyVxRail with the vCenter Servers that manage your VxRail clusters. The registration process adds VxRail privileges to vCenter Server so you can build roles with specific SaaS multi-cluster management capabilities.

MyVxRail registers the following privileges on vCenter:

  • Download software bundle: downloads and stages the VxRail software bundle onto the cluster
  • Execute health check: performs an on-demand pre-update health check on the cluster
  • Execute cluster update: initiates the cluster update operation on the clusters
  • Manage update credentials: modifies the VxRail infrastructure credentials used for active management
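To make the model concrete, here is a minimal sketch of a role-based access check built from these privileges. The role names and privilege identifiers are paraphrased from the bullet list for illustration; the actual privilege IDs registered in vCenter may differ:

```python
# Sketch of the RBAC model: roles are sets of VxRail privileges, and an
# operation is allowed only if the user's role grants the matching privilege.
# Privilege names are paraphrased, not the real vCenter privilege IDs.
VXRAIL_PRIVILEGES = {
    "DownloadSoftwareBundle",
    "ExecuteHealthCheck",
    "ExecuteClusterUpdate",
    "ManageUpdateCredentials",
}

# Two example roles: an operator who can only plan, and an updater who
# can also execute cluster updates.
roles = {
    "vxrail-operator": {"DownloadSoftwareBundle", "ExecuteHealthCheck"},
    "vxrail-updater": {"DownloadSoftwareBundle", "ExecuteHealthCheck",
                       "ExecuteClusterUpdate"},
}

def can(role: str, privilege: str) -> bool:
    """True if the role grants the requested VxRail privilege."""
    assert privilege in VXRAIL_PRIVILEGES, f"unknown privilege: {privilege}"
    return privilege in roles.get(role, set())

print(can("vxrail-operator", "ExecuteClusterUpdate"))  # False
print(can("vxrail-updater", "ExecuteClusterUpdate"))   # True
```

In the real product these checks happen in vCenter; the sketch only shows why granting ‘execute cluster update’ to a role is what separates planning from active management.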

Figure 2  VxRail privileges for vCenter access control

VxRail Infrastructure Credentials

We’ve done more to make it easier to perform cluster updates at scale. Typically, when you’re performing a single cluster update, you have to enter the root account credentials for vCenter Server, Platform Services Controller, and VxRail Manager. That’s the same process when performing it from VxRail Manager. But that process can get tedious when you have multiple clusters to update.

VxRail Infrastructure Credentials can store those credentials so you can enter them once, at the initial setup of active management, and not have to enter them again each time you perform a multi-cluster update. For security, the stored credentials are saved on each individual cluster, where MyVxRail can read them.

Big time saver! But how secure is it? More secure than hiding Halloween candy from children. For a user to perform a cluster update, the administrator needs to add the ‘execute cluster update’ privilege to the role assigned to that user. Root credentials can only be managed by users assigned a role that has the ‘manage update credentials’ privilege.

Figure 3  MyVxRail credentials manager


The last topic is licensing. While all the capabilities you have been using on MyVxRail come with the purchase of the VxRail HCI System Software license, multi-cluster update is different. This feature requires a fee-based add-on software license called ‘SaaS active multi-cluster management for VxRail HCI System Software’. All VxRail nodes come with VxRail HCI System Software, which gives you access to MyVxRail and the SaaS multi-cluster management features, except for cluster update. To perform an update of a cluster on MyVxRail, all nodes in the cluster must have the add-on software license.


That is a lot to consume for one release. Hopefully, unlike your Thanksgiving meal, you can stay awake for the ending. While the brand name VxRail ACE is no more, we’re continuing to deliver value-adding capabilities. Multi-cluster update is a great feature to further your use of MyVxRail for LCM of your VxRail clusters. With role-based access and VxRail infrastructure credentials, rest assured you’re benefitting from multi-cluster update without sacrificing security.

To learn more about these features, check out the VxRail techbook and the interactive demo for SaaS multi-cluster management.


Daniel Chiu, VxRail Technical Marketing



Exploring the customer experience with lifecycle management for vSAN Ready Nodes and VxRail clusters

Cliff Cahill

Thu, 24 Sep 2020 19:41:49 -0000



The difference between VMware vSphere LCM (vLCM) and Dell EMC VxRail LCM is still a trending topic that most HCI customers and prospects want more information about. While we compared the two methods at a high level in our previous blog post, let’s dive into the more technical aspects of the LCM operations of VMware vLCM and VxRail LCM. The detailed explanation in this blog post should give you a more complete understanding of your role as an administrator for cluster lifecycle management with vLCM versus VxRail LCM.

Even though vLCM has introduced a vast improvement in automating cluster updates, lifecycle management is more than executing cluster updates. With vLCM, lifecycle management is still very much a customer-driven endeavor. By contrast, VxRail’s overarching goal for LCM is operational simplicity, leveraging Continuously Validated States to drive cluster LCM for the customer. This is a large part of why VxRail has gained over 8,600 customers since its launch in early 2016.

In this blog post, I’ll explain the four major areas of LCM:

  • Defining the initial baseline configuration
  • Planning for a cluster update
  • Executing the cluster update
  • Sustaining cluster integrity over the long term

Defining the initial baseline configuration

The baseline configuration is a vital part of establishing a steady state for the life of your cluster. The baseline configuration is the current known good state of your HCI stack. In this configuration, all the component software and firmware versions are compatible with one another. Interoperability testing has validated full stack integrity for application performance and availability, while also meeting the security standards in place. This is the ‘happy’ state for you and your cluster. Any changes to the configuration use this baseline to determine what needs to be rectified to return to the ‘happy’ state.

How is it done with vLCM?

vLCM depends on the hardware vendor to provide a Hardware Management Services virtual machine. Dell provides this support for its Dell EMC PowerEdge servers, including vSAN ReadyNodes. I’ll use this implementation to explain the overall process. Dell EMC vSAN ReadyNodes use the OpenManage Integration for VMware vCenter (OMIVV) plugin to connect to and register with the vCenter Server.

Once the VM is deployed and registered, you need to create a credential-based profile. This profile captures two accounts: one for the out-of-band hardware interface, the iDRAC, and the other for the root credentials for the ESXi host. Future changes to the passwords require updating the profile accordingly.

With the VM connection and profile in place, vLCM uses a Catalog XML file to define the initial baseline configuration. To create the Catalog XML file, you need to install and configure the Dell Repository Manager (DRM) to build the hardware profile. Once a profile is defined to your specification, it must be exported and stored on an NFS or CIFS share. The profile is then used to populate the Repository Profile data in the OMIVV UI. If you are unsure of your configuration, refer to the vSAN Hardware Compatibility List (HCL) for the specific supported firmware versions. Once the hardware profile is created, you can associate it with the cluster profile. With the cluster profile defined, you can enable drift detection. Any future change to the Catalog XML file is made within the DRM.

It’s important to note that vLCM was introduced in vSphere 7.0. To use vLCM, you must first update or deploy your clusters to run vSphere 7.x.

How is it done with VxRail LCM?

With VxRail, when the cluster arrives at the customer data center, it’s already running in a ‘happy’ state. For VxRail, the ‘happy’ state is called Continuously Validated States. The term is pluralized because VxRail defines all the ‘happy’ states that your cluster will update to over time. This means that your cluster is always running in a ‘happy’ state without you needing to research, define, and test to arrive at Continuously Validated States throughout the life of your cluster. VxRail (well, specifically the VxRail engineering team) does it for you. This has been a central tenet of VxRail since the product first launched with vSphere 6.0. Since then it has helped customers transition to vSphere 6.5, 6.7, and now 7.0.

Once VxRail cluster initialization is complete, use your Dell EMC Support credentials to configure the VxRail repository setting within vCenter. The VxRail Manager plugin to vCenter then automatically connects to the VxRail repository at Dell EMC and pulls down the next available update package.

Figure 1  Defining the initial baseline configuration

Planning for a cluster update

Updates are a constant in IT, and VMware is continually adding new capabilities and product/security fixes that require updating to newer versions of software. Take, for example, the vSphere 7.0 Update 1 release that VMware and Dell Technologies just announced. Those eye-opening features are available to you when you update to that release. VMware’s release history shows just how often vSphere has historically been updated.

As you know, planning for a cluster update is an iterative process with inherent risk associated with it. Failing to plan diligently can cause adverse effects on your cluster, ranging from network outages and node failure to data unavailability or data loss. That said, it’s important to mitigate the risk where you can.

How is it done with vLCM?

With vLCM, the responsibility of planning for a cluster update rests on the customer’s shoulders, including the risk. Understanding the Bill of Materials that makes up your server’s hardware profile is paramount to success. Once all the components are known and a target version of vSphere ESXi is specified, the supported driver and firmware versions need to be investigated and documented. You must consult the VMware Compatibility Guide to find out which drivers and firmware are supported for each ESXi release.

It is important to note that although vLCM gives you the toolset to apply firmware and driver updates, it does not validate compatibility or support for each combination for you, except for the HBA driver. This task is firmly in the customer’s domain. It is advisable to validate and test each combination in a separate test environment to ensure that no performance regressions or issues are introduced into the production environment. Interoperability testing can be an extensive and expensive undertaking. Customers should create and define robust testing processes to ensure full interoperability and compatibility across all components managed and upgraded by vLCM.

With Dell EMC vSAN Ready Nodes, customers can rest assured that the HCL certification and compatibility validation steps have been performed. However, the customer is still responsible for interoperability testing.

How is it done with VxRail LCM?

VxRail engineering has taken a unique approach to LCM. Rather than leaving the time-consuming LCM planning to already overburdened IT departments, they have drastically reduced the risk by investing over $60 million, more than 25,000 hours of testing for major releases, and more than 100 team members into a comprehensive regression test plan. This plan is completed prior to every VxRail code release. (This is in addition to the testing and validation performed by PowerEdge, on which VxRail nodes are built.)

Dell EMC VxRail engineering performs this testing within 30 days of any new VMware release (even quicker for express patches), so that customers can continually benefit from the latest VMware software innovations and confidently address security vulnerabilities. You may have heard this called “synchronous release”.

The outcome of this effort is a single update bundle that is used to update the entire HCI stack, including the operating system, the hardware’s drivers and firmware, and management components such as VxRail Manager and vCenter. This allows VxRail to define the declarative configuration we mentioned previously (“Continuously Validated States”), allowing us to move easily from one validated state to the next with each update.


Figure 2  Planning for a cluster update

Executing the cluster update

The biggest improvement with vLCM is its ability to orchestrate and automate a full stack HCI cluster update. This simplifies the update operation and brings enormous time savings. This process is showcased in a recent study performed by Principled Technologies with PowerEdge Servers with vSphere (not including vSAN).

How is it done with vLCM?

The first step is to import the ESXi ISO via the vLCM tab in the vCenter Server UI. Once it is uploaded, select the relevant cluster and ensure that the cluster profile (created in the initial baseline configuration phase) is associated with the cluster being updated. Now you can apply the target configuration by editing the ESXi image and, from the OMIVV UI, choosing the correct firmware and driver to apply to the hardware profile. Once a compliance scan is complete, you have the option to remediate all hosts.

If there are multiple homogeneous clusters you need to update, it can be as easy as using the same cluster profile to execute the cluster update against. However, if the next cluster has a different hardware configuration, you have to perform the above steps all over again. Customers with varying hardware and software requirements for their clusters will have to repeat many of these steps, including the planning tasks, to ensure stack integrity.

How is it done with VxRail LCM?

With VxRail and Continuously Validated States, updating from one configuration to another is even simpler. You can access the VxRail Manager directly within the vCenter Server UI to initiate the update. The LCM operation automatically retrieves the update bundle from the VxRail repository, runs a full stack pre-update health check, and performs the cluster update.

With VxRail, performing multi-cluster updates is as simple as performing a single-cluster update. The same LCM cluster update workflow is followed. While different hardware configurations on separate clusters will add more labor for IT staff for vSAN Ready Nodes, this doesn’t apply to VxRail.  In fact, in the latest release of our SaaS multi-cluster management capability set, customers can now easily perform cluster updates at scale from our cloud-based management platform, MyVxRail.

Figure 3  Executing a cluster update

Sustaining cluster integrity over the long term

The long-term integrity of a cluster outlasts the software and hardware in it. As mentioned earlier, because new releases are made available frequently, software has a very short life span. While hardware has more staying power, it won’t outlast some of the applications running on it. New hardware platforms will emerge. New hardware devices will enter the market that will launch new workloads, such as machine learning, graphics rendering, and visualization workflows. You will need the cluster to evolve non-disruptively to deliver the application performance, availability, and diversity your end-users require.

How is it done with vLCM?

In its current form, vLCM will struggle in long-term cluster lifecycle management. In particular, its inability to support heterogeneous nodes (nodes with different hardware configurations) in the same cluster will limit its application diversification and its ability to take advantage of new hardware platforms without impacting end-users.

How is it done with VxRail LCM?

VxRail LCM touts its ability to allow customers to grow non-disruptively and to scale their clusters over time. That includes adding non-identical nodes into the clusters for new applications, adding new hardware devices for new applications or more capacity, or retiring old hardware from the cluster.


Figure 4  Comparing vSphere LCM and VxRail LCM cluster update operations driven by the customer

The VMware vLCM approach empowers customers who are looking for more configuration flexibility and control. They have the option to select their own hardware components and firmware to build the cluster profile. With this freedom comes the responsibility to define the HCI stack and make investments in equipment and personnel to ensure stack integrity. vLCM supports this customer-driven approach with improvements in cluster update execution for faster outcomes.

Dell EMC VxRail LCM continues to take a more comprehensive approach to optimizing operational efficiency from the point of view of the customer. VxRail customers value its LCM capabilities because they reduce operational time and effort, which can be diverted to other areas of need in IT. VxRail takes on the responsibility of driving stack integrity for the lifecycle management of the cluster with Continuously Validated States. And VxRail sustains stack integrity throughout the life of the cluster, allowing you to simply and predictably evolve with technology trends.

Cliff Cahill
VxRail Engineering Technologist
Twitter @cliffcahill


The Latest VxRail Platform Innovation is Now Included in Your Cloud

Jason Marques

Tue, 18 Aug 2020 15:32:11 -0000



The Dell Technologies Cloud Platform, VCF on VxRail, now supports the latest VxRail HCI System Software release, featuring a new and improved first run experience, host geo-location tagging capabilities, hardware platform updates, and enhanced security features.

Dell Technologies and VMware are happy to announce the general availability of VCF on VxRail 7.0.010.

This release brings support for the latest version of VxRail to the Dell Technologies Cloud Platform. Let’s review what these new features are all about. 

Updated VxRail Software Bill of Materials 

Please check out the VCF on VxRail release notes for a full listing of the supported software BOM associated with this release. You can find the link at the bottom of the page.

VxRail Hardware Platform Updates 

VxRail 7.0.010 brings new support for the ruggedized D-Series VxRail hardware platforms (D560/D560F). These ruggedized and durable platforms are designed to meet demands for more compute, performance, and storage, and, more importantly, the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas. To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.

Also, this release reintroduces GPU support that was not available in the initial VCF 4.0 on VxRail 7.0 release.

New and Improved VxRail First Run Experience  

This release introduces a new Day 1 VxRail cluster first run workflow along with UI enhancements. The new Day 1 VxRail first run deployment wizard comprises 13 steps, or top-level tasks. This Day 1 workflow update was required to support new VxRail HCI System Software enhancements.

The new UI provides improved flexibility for configuration data entry during deployment. These options include:

  • Unique hostnames for each ESXi host, without forcing a naming convention
  • Non-sequential IP addresses for hosts in the cluster
  • Support for a geographical location ID tag, such as Rack Name or Rack Position

The UI also provides a cleaner interface with a consistent look and feel for information, warnings, and errors. Validation is improved, providing better feedback when errors are encountered or validation checks fail. And finally, the options to manually enter all the configuration parameters or to upload a pre-defined configuration via a YAML or JSON file are still available! The figure below illustrates the new first run steps and UI.


Figure 1


New VxRail API to Automate Day 1 VxRail First Run Cluster Creation 

This feature allows for fast and consistent VxRail cluster deployments using the programmatic extensibility of a REST API. It gives administrators an additional option for creating VxRail clusters beyond the VxRail Manager first run UI.
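A minimal sketch of what driving a Day 1 deployment from a script could look like. The endpoint path, payload field names, hostnames, and credentials below are illustrative placeholders, not the documented VxRail API contract; consult the VxRail API reference for the real request schema:

```python
# Build a hypothetical Day 1 cluster-creation request body. All field names
# and values here are invented for illustration only.
import json

spec = {
    "version": "7.0.010",
    "vcenter": {"datacenter": "DC1", "cluster": "VxRail-Cluster-01"},
    "hosts": [
        {"hostname": f"esx-node-{i:02d}",  # unique, non-templated hostnames
         "rack_name": "Rack-A",            # geo-location tag (see below)
         "rack_position": str(i)}
        for i in range(1, 5)               # a 4-node starting cluster
    ],
}

print(json.dumps(spec, indent=2))

# Submitting the spec might look like the following (it needs the third-party
# `requests` package and a reachable VxRail Manager, so it stays commented):
# import requests
# r = requests.post("https://vxrail-manager.example.com/rest/vxm/v1/cluster",
#                   auth=("administrator@vsphere.local", "changeme"),
#                   json=spec, verify=False)
# r.raise_for_status()
```

The same JSON document could be version-controlled and reused, which is where the consistency benefit over the interactive wizard comes from.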

Day 1 Support to Initially Deploy Up to Six Nodes in a VxRail Cluster During VxRail First Run 

The previous maximum node deployment supported in the VxRail first run was four. Administrators who needed VxRail cluster sizes over four nodes had to create the cluster with four nodes and, once that was in place, perform node expansions to reach the desired cluster size. This new feature helps reduce the time needed to initially create larger VxRail clusters by allowing a larger starting point of six VxRail nodes.

VxRail Host Geo-Location Tagging 

This is probably one of the coolest and most underrated features in the release in my opinion. VxRail Manager now supports geographic location tags for VxRail hosts. This capability allows for important admin-defined host metadata that can assist many customers in gaining greater visibility of the physical location of the HCI infrastructure that makes up their cloud. This information is configured as “Host Settings” during VxRail first run as illustrated in the figure below. 

Figure 2


As shown, the two values that make up the geo-location tags are Rack Name and Rack Position. These values are stored in the iDRAC of each VxRail host. You may be asking yourself, “Great! I have the ability to add additional metadata for my VxRail hosts, but what can I do with it?” Together, these values help a cloud administrator identify a VxRail host’s position within a given rack in the data center. Cloud administrators can then leverage this data to choose the order in which VxRail hosts are displayed in the VxRail Manager vCenter plugin Physical View. The figure below illustrates what this would look like.
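As a sketch of how these tags could drive host ordering in a physical view, consider sorting hosts by rack name and then by position within the rack. The host names and tag values are made up for the example:

```python
# Illustrative only: order hosts for display using their geo-location tags.
# Names and tag values are invented; real values come from each host's iDRAC.
hosts = [
    {"name": "vxrail-03", "rack_name": "Rack-B", "rack_position": 2},
    {"name": "vxrail-01", "rack_name": "Rack-A", "rack_position": 5},
    {"name": "vxrail-02", "rack_name": "Rack-A", "rack_position": 1},
]

# Group by rack, then order by position within each rack.
ordered = sorted(hosts, key=lambda h: (h["rack_name"], h["rack_position"]))
print([h["name"] for h in ordered])  # ['vxrail-02', 'vxrail-01', 'vxrail-03']
```

The point is that once the physical location is captured as structured metadata, the display order in a management UI becomes a simple sort rather than something the administrator has to arrange by hand.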

Figure 3


As data center environments grow, VxRail host expansion operations can be used to add infrastructure capacity. The VxRail “Add VxRail Hosts” automated expansion workflow has been updated to include a new Host Location step, which allows you to add geo-location Rack Name and Rack Position metadata for the new hosts being added to an existing VxRail cluster. The figure below shows what a host expansion operation looks like.

Figure 4


In this fast-paced world of digital transformation, it is not uncommon for cloud data center infrastructure to be moved within a data center after it has been installed, whether due to physical rack expansion design changes or infrastructure repurposing. These situations were also considered in the design of VxRail geo-location tags: there is an option to dynamically edit an existing host’s geo-location information. When this is done, VxRail Manager automatically updates the host’s iDRAC with the new values. The figure below shows what the host edit looks like.

Figure 5


All these geo-location management capabilities provide VCF on VxRail administrators with full stack physical-to-virtual infrastructure mapping that helps further extend the Cloud Foundation management experience and simplify operations! And this capability is only available with the Dell Technologies Cloud Platform (VCF on VxRail)! How cool is that?!

VxRail Security Enhancements 

Added Security Compliance with FIPS 140-2 Level 1 Validated Cryptography for VxRail Manager

Cloud Foundation on VxRail offers intrinsic security built into every layer of the solution stack, from hardware silicon to storage to compute to networking to governance controls. This helps customers make security a built-in part of the platform for traditional workloads as well as container-based cloud-native workloads, rather than something bolted on after the fact.


Building on the intrinsic security capabilities of the platform are the following new features: 

VxRail Manager is now FIPS 140-2 compliant, offering built-in intrinsic encryption and meeting the high security standards required by the US Department of Defense.

From VxRail 7.0.010 onward, VxRail has ‘FIPS inside’! This entails built-in features such as:

  • VxRail Manager Data-in-Transit (e.g., HTTPS interfaces, SSH) 
  • VxRail Manager's SLES12 FIPS usage 
  • VxRail Manager - encryption used for password caching 

Disable VxRail LCM operations from vCenter 

To limit administrator configuration errors that could result from performing VxRail LCM operations from within vCenter rather than through SDDC Manager, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager Plugin Updates screen out of the box. This requires administrators to use SDDC Manager for all LCM operations, which guarantees that the full HW/SW stack has been qualified and validated for their environment. The figure below illustrates what this looks like.

Figure 6


Disable VxRail Host Rename/Re-IP operations in vCenter 

Continuing the theme of limiting administrator configuration errors, this feature prevents administrators from performing VxRail Host Edit operations from within vCenter that are not supported in VCF. To maintain this operating experience, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager Plugin Hosts screen out of the box. The figure below illustrates what this looks like.

Figure 7


Now those are some intrinsic security features! 


Well that about covers all the new features! Thanks for taking the time to learn more about this latest release. As always, check out some of the links at the bottom of this page to access additional VCF on VxRail resources. 

Jason Marques 

Twitter -@vwhippersnapper 

Additional Resources 



VxRail & Intel Optane for Extreme Performance

KJ Bedard

Fri, 07 Aug 2020 15:33:49 -0000



Enabling high performance for HCI workloads is exactly what happens when VxRail is configured with Intel Optane Persistent Memory (PMem). Optane PMem provides compute and storage performance to better serve applications and business-critical workloads. So, what is Intel Optane Persistent Memory? It is memory that can be used as storage, providing RAM-like performance with very low latency and high bandwidth. It’s great for applications that require or consume large amounts of memory, like SAP HANA, and it has many other use cases, as shown in Figure 1. VxRail is certified for SAP HANA as well as for Intel Optane PMem.

Moreover, PMem can be used as block storage where data can be written persistently; a great example is DBMS log files. A key advantage of this technology is that you can start small with a single PMem card (or module), then scale and grow as needed, adding up to 12 cards. Customers can take advantage of PMem immediately because there’s no need to make major hardware or configuration changes, nor to budget for a large capital expenditure.

There are a wide variety of use cases today including those you see here:

Figure 1: Intel Optane PMem Use Cases

PMem offers two very different operating modes, Memory mode and App Direct mode, and App Direct can in turn be used in two very different ways.

First, note that Intel Optane PMem in Memory mode is not yet supported by VxRail. This mode acts as volatile system memory and provides a significantly lower cost per GB than traditional DRAM DIMMs. A follow-up to this blog will describe this mode and its test results in much more detail once it is supported.

As for App Direct mode (supported today), PMem is consumed by virtual machines either as a block storage device, known as vPMemDisk, or as byte-addressable memory, known as virtual NVDIMM. Both provide great benefit to the applications running in a virtual machine, just in very different ways. vPMemDisk can be used with any virtual machine hardware version and any guest OS. Since it’s presented as a block device, it is treated like any other virtual disk, and applications and/or data can then be placed on it. The second consumption method, NVDIMM, has the advantage of being addressed in the same way as regular RAM, yet it retains its data through reboots and power failures. This is a considerable plus for large in-memory databases like SAP HANA, where cache warm-up, or the time to load tables into memory, can be significant!

However, it’s important to note that, like any other memory module, the PMem module does not provide data redundancy. This may not be an issue for some data files on commonly used applications that can be re-created in case of a host failure. But a key principle when using PMem, either as block storage or byte addressable memory is that the applications are responsible for handling data replication to provide durability. 

New data redundancy options are expected on applications that are using PMem and should be well understood before deployment.

First, we'll look at test results using PMem as a virtual disk (vPMemDisk). Our engineering team tested VxRail with PMem in App Direct mode and ran comparison tests against an all-flash VxRail (P570F series platform). The testing simulated a typical 4K OLTP workload with a 70/30 read/write ratio. Our results achieved more than 1.8M IOPS, 6X more than the all-flash VxRail system. That equates to 93% faster response times (lower latency) and 6X greater throughput, as shown here:

Figure 2: VxRail PMem App Direct versus VxRail all-flash

This latency difference indicates the potential to improve the performance of legacy applications by placing specific data files, such as log files, on a PMem module. To verify the benefit of this log acceleration use case, we ran a TPC-C benchmark comparing a VxRail configured with its log file on a vPMemDisk against an all-flash VxRail vSAN, and we saw a 46% improvement in the number of transactions per minute.

Figure 3: Log file acceleration use case

For the second consumption method, we tested PMem in App Direct mode using the NVDIMM consumption method, performing tests with 1, 2, 4, 8, and then 12 PMem modules. All testing has been evaluated and validated by ESG (Enterprise Strategy Group). The certified white paper has been published as highlighted in the resources section.

Figure 4: NVDIMM device testing (vSAN not-optimized versus optimized PMem NVDIMM)

The results show linear scalability as the number of modules increases from 1 to 12. With 12 PMem modules, VxRail achieves 80 times more IOPS than when running against non-optimized vSAN (meaning all-flash VxRail vSAN with no PMem involved), and 100X for the 4K RW workload. The right half of the graphic depicts throughput results for very large (64KB) IO. With PMem optimized on 12 modules, we saw 28X higher throughput for the 64KB random read (RR) workload, and PMem is 13 times faster for the 64KB RW.

What you see here is amazing performance on a single VxRail host and almost linear scalability when adding PMem!! Yes, that warrants a double bang. If you were to max out a 64-node cluster, the potential scalability is phenomenal and game changing! 
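As a back-of-envelope way to reason about "almost linear" scaling, a simple efficiency calculation helps. The IOPS figures below are invented purely for illustration, not the measured ESG results:

```python
# Sanity-checking near-linear scaling with invented IOPS figures
# (illustrative only, not the measured ESG results).
measured_iops = {1: 1.05e6, 2: 2.0e6, 4: 3.9e6, 8: 7.6e6, 12: 11.2e6}

base = measured_iops[1]
for modules, iops in sorted(measured_iops.items()):
    efficiency = iops / (base * modules)   # 1.0 would be perfectly linear
    print(f"{modules:>2} modules: {iops / 1e6:4.1f}M IOPS, "
          f"scaling efficiency {efficiency:.0%}")

# Every configuration stays within 15% of perfect linear scaling
assert all(iops / (base * n) > 0.85 for n, iops in measured_iops.items())
```

Efficiency near 100% at every module count is what "linear scalability" means in practice: doubling the modules roughly doubles the IOPS.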

So, what does all this mean? Key takeaways are:  

  • The local performance of VxRail with Intel Optane PMem can scale to 12M read IOPS and more than 4M write IOPS, or 70GB/s read throughput and 22GB/s write throughput, on a single host.
  • The use of PMem modules doesn't affect regular activity on vSAN datastores and extends the value of your VxRail platform in many ways:
    • It can be used to accelerate legacy applications, such as RDBMS log acceleration
    • It enables the deployment of in-memory databases and applications that benefit from the higher IO throughput of PMem while still taking advantage of vSAN characteristics in the VxRail platform
    • The local performance of a single host with 12 x 128GB PMem modules achieves more than 12M read IOPS and more than 4M write IOPS
    • It not only increases the performance of traditional HCI workloads such as VDI, but also supports performance-intensive transactional and analytics workloads
    • It offers orders-of-magnitude faster performance than traditional storage
    • It provides more memory for less cost, as PMem is much less costly than DRAM

Validation testing was completed by ESG (Enterprise Strategy Group). White papers and other resources on VxRail for extreme performance are available via the links listed below.

Additional Resources

ESG Validation: Dell EMC VxRail and Intel Optane Persistent Memory

VxRail and Intel Optane for Extreme Performance – Engineering presentation

High Performance for HCI Workloads with Dell EMC VxRail & Intel Optane Persistent Memory - infographic

Dell EMC & Intel Optane Persistent Memory - video

ESG Validation: Dell EMC VxRail with Intel Xeon Scalable Processors and Intel Optane SSDs

By: KJ Bedard – VxRail Technical Marketing Engineer


Twitter: @KJbedard

Read Full Blog
vSphere VMware VxRail networking life cycle management

Adding to the VxRail summer party with the release of VxRail 7.0.010

Daniel Chiu

Wed, 29 Jul 2020 20:15:50 -0000


Read Time: 0 minutes

After multiple VxRail 4.7 software releases in the early summer, the VxRail 7.0 software train has now joined the party. Like any considerate guest, VxRail 7.0.010 does not come empty-handed. This new software release brings new waves of cluster deployment flexibility so you can run a wider range of application workloads on VxRail, as well as new lifecycle management enhancements that let you sit back and enjoy the party during your next cluster update.

The following capabilities expand the workload possibilities that can run on VxRail clusters:

  • More network flexibility with support for customer-supplied virtual distributed switch (VDS) – Customers with a large deployment of VxRail clusters often prefer to standardize their VDS so they can re-use the same configuration on multiple clusters. Standardization simplifies cluster deployment operations and VDS management and reduces errors. This is sure to be a hit for our party guests with grand plans to expand their VxRail footprint.
  • Network redundancy – the support for customer-supplied VDS also enables support for network card level redundancy and link aggregation. Now you can create a NIC teaming policy that can tolerate a network card failure for VxRail system traffic. For example, the policy would include a port on the NDC and another port on the PCIe network card. If one network card becomes unavailable, the traffic can still run through the remaining network card. With link aggregation, you can increase the network bandwidth by utilizing multiple ports in an active/active network connection. You can select the load balancing option when configuring the NIC teaming policy.

Network card level redundancy with active/active network connections
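The failover behavior described above can be sketched conceptually. This is not the vSphere API, just a toy model with made-up uplink names, showing why a team spanning two physical cards tolerates a card failure:

```python
# Conceptual model only (not the vSphere API): an active/active NIC team
# keeps system traffic flowing when one physical card fails.
# Uplink names are illustrative.
def active_uplinks(team, failed=frozenset()):
    """Uplinks still eligible to carry traffic after any failures."""
    return [nic for nic in team if nic not in failed]

team = ["ndc-port0", "pcie-port0"]   # one port on the NDC, one on the PCIe card

assert active_uplinks(team) == ["ndc-port0", "pcie-port0"]    # both carry traffic
assert active_uplinks(team, {"ndc-port0"}) == ["pcie-port0"]  # card failure tolerated
```

The same placement rule, one port per physical card, is what distinguishes card-level redundancy from simple port-level redundancy on a single NIC.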


  • FIPS 140-2 Level 1 validated cryptography – Industry sectors such as the federal sector require this level of security for any applications that access sensitive data. The VxRail software now meets this standard, using validated cryptographic libraries to encrypt data in transit and to protect stored keys and credentials. Combined with existing vSAN encryption, which already meets this standard for data at rest, VxRail clusters can fit even more environments in industry sectors with higher security standards. The guest list for this party is only getting bigger.
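For a flavor of what FIPS-approved primitives look like in practice, here is a small illustrative sketch using Python's standard library. Note the hedge: FIPS 140-2 validation applies to a whole cryptographic module, not just the algorithms, but SHA-2 hashing and HMAC are among the approved algorithm families:

```python
import hashlib
import hmac
import os

# Illustrative only: the data and key below are invented. SHA-256 and
# HMAC-SHA-256 are FIPS-approved algorithm families available in stdlib.
key = os.urandom(32)                       # 256-bit key (keep protected)
credential = b"service-account-secret"     # made-up sensitive data

digest = hashlib.sha256(credential).hexdigest()            # approved hash
tag = hmac.new(key, credential, hashlib.sha256).digest()   # keyed MAC

# Verification should use a constant-time comparison
assert hmac.compare_digest(tag, hmac.new(key, credential, hashlib.sha256).digest())
```

In a validated deployment the same operations would run inside a FIPS-validated module rather than plain CPython, which is exactly what the VxRail software change above provides.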

Along with these features that increase the market opportunity for VxRail clusters, lifecycle management enhancements also come with VxRail 7.0.010's entrance to the party. VxRail has strengthened its LCM pre-upgrade health check to include more ecosystem components in the VxRail stack. Already checking the HCI hardware and software, VxRail now extends to ancillary components such as the vCenter Server, the Secure Remote Services gateway, RecoverPoint for VMs software, and the witness host used for 2-node and stretched clusters. The LCM pre-upgrade health check performs a version compatibility check against these components before upgrading the VxRail cluster. With a stronger LCM pre-upgrade health check, you'll have more time for summer fun.
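Conceptually, a pre-upgrade version compatibility check boils down to comparing each component's inventory version against a supported matrix. A hypothetical sketch (component names and version numbers invented for illustration):

```python
# Hypothetical sketch of the idea behind a pre-upgrade health check:
# compare each ecosystem component's inventory version against a
# supported matrix before allowing the cluster update.
# All component names and versions below are invented.
SUPPORTED = {
    "vcenter": {"7.0.0", "7.0.1"},
    "witness_host": {"7.0.0"},
    "srs_gateway": {"3.48"},
}

def precheck(inventory):
    """Map each component to True (compatible) or False (blocks the upgrade)."""
    return {name: version in SUPPORTED.get(name, set())
            for name, version in inventory.items()}

result = precheck({"vcenter": "7.0.1",
                   "witness_host": "6.7.0",
                   "srs_gateway": "3.48"})
assert result == {"vcenter": True, "witness_host": False, "srs_gateway": True}
assert not all(result.values())   # one incompatible component halts the update
```

The value of running this before the upgrade, rather than during it, is that an incompatible witness host or gateway is caught while the cluster is still fully serviceable.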

VxRail 7.0.010 is here to keep the VxRail summer party going. These new capabilities will help our customers accelerate innovation by providing an HCI platform that delivers the infrastructure flexibility their applications require, while giving administrators the operational freedom and simplicity to update their clusters fearlessly.

Interested in learning more about VxRail 7.0.010? You can find more details in the release notes.

Daniel Chiu, VxRail Technical Marketing


Read Full Blog
VMware VxRail Kubernetes VMware Cloud Foundation DTCP

Announcing VMware Cloud Foundation 4.0.1 on Dell EMC VxRail 7.0

Jason Marques

Wed, 29 Jul 2020 13:38:33 -0000


Read Time: 0 minutes

The latest Dell Technologies Cloud Platform release introduces new support for vSphere with Kubernetes for entry cloud deployments and more  

Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1 on VxRail 7.0. 

This release offers several enhancements, including vSphere with Kubernetes support for entry cloud deployments, enhanced bring-up features for more extensibility and accelerated deployments, increased network configuration options, and more efficient LCM capabilities for NSX-T components. Below is the full listing of features that can be found in this release:

  • Kubernetes in the management domain: vSphere with Kubernetes is now supported in the management domain. With VMware Cloud Foundation Workload Management, you can deploy vSphere with Kubernetes on the management domain default cluster starting with only four VxRail nodes. This means that DTCP entry cloud deployments can take advantage of running Kubernetes containerized workloads alongside general purpose VM workloads on a common infrastructure! 
  • Multi-pNIC/multi-vDS during VCF bring-up: The Cloud Builder deployment parameter workbook now provides five vSphere Distributed Switch (vDS) profiles that allow you to perform bring-up of hosts with two, four, or six physical NICs (pNICs) and to create up to two vSphere Distributed Switches for isolating system (Management, vMotion, vSAN) traffic from overlay (Host, Edge, and Uplinks) traffic. 
  • Multi-pNIC/multi-vDS API support: The VCF API now supports configuring a second vSphere Distributed Switch (vDS) using up to four physical NICs (pNICs), providing more flexibility to support high performance use cases and physical traffic separation. 
  • NSX-T cluster-level upgrade support: Users can upgrade specific host clusters within a workload domain so that the upgrade can fit into their maintenance windows, bringing about more efficient upgrades. 
  • Cloud Builder API support for bring-up operations – VCF on VxRail deployment workflows have been enhanced to support using a new Cloud Builder API for bring-up operations. VCF software installation on VxRail during VCF bring-up can now be done using either an API or GUI providing even more platform extensibility capabilities.
  • Automated externalization of the vCenter Server for the management domain: Externalizing the vCenter Server that gets created during the VxRail first run (the one used for the management domain) is now automated as part of the bring-up process. This enhanced integration between the VCF Cloud Builder bring-up automation workflow and VxRail API helps to further accelerate installation times for VCF on VxRail deployments.
  • BOM Updates: Updated VCF software Bill of Materials with new product versions. 

Jason Marques 

Twitter -@vwhippersnapper 


Additional Resources 

Read Full Blog
VxRail AMD

2nd Gen AMD EPYC now available to power your favorite hyperconverged platform ;) VxRail

David Glynn

Mon, 27 Jul 2020 18:46:53 -0000


Read Time: 0 minutes

Expanding the range of VxRail choices to include 64-cores of 2nd Gen AMD EPYC compute

Last month, Dell EMC expanded our very popular E Series (the E for Everything Series) with the introduction of the E665/F/N, our very first VxRail with an AMD processor, and what a processor it is! The 2nd Gen AMD EPYC processor came to market with a lot of industry-leading capabilities:

  • Up to 64 cores in a single processor, with 8, 12, 16, 24, 32, or 48 core offerings also available
  • Eight memory channels that are not only more numerous but also faster, at 3200 MT/s. The 2nd Gen EPYC can also address much more memory per processor
  • 7nm transistors. Smaller transistors mean more powerful and more energy efficient processors 
  • Up to 128 lanes of PCIe Gen 4.0, with 2X the bandwidth of PCIe Gen 3.0.

These industry leading capabilities enable the VxRail E665 series to deliver dual socket performance in a single socket model - and can provide up to 90% greater general-purpose CPU capacity than other VxRail models when configured with single socket processors.

So, what is the sweet spot or ideal use case for the E665? As always, it depends on many things. Unlike the D Series (our D for Durable Series) that we also launched last month, which has clear rugged use cases, the E665 and the rest of the E Series very much live up to their “Everything” name, and perform admirably in a variety of use cases.

While the 2nd Gen EPYC 64-core processors grab the headlines, there are multiple AMD processor options, including the 16-core AMD 7F52 at 3.5GHz with a max boost of 3.9GHz for applications that benefit from raw clock speed, or where application licensing is core-based. On the topic of licensing, I would be remiss if I didn't mention VMware's update to its per-CPU pricing earlier this year, which results in processors with more than 32 cores requiring a second VMware per-CPU license. This may make a 32-core processor an attractive option from an overall capacity and performance versus hardware and licensing cost perspective.
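The licensing arithmetic is simple enough to sketch. Under the 2020 update, each VMware per-CPU license covers up to 32 cores:

```python
import math

# Under VMware's 2020 per-CPU pricing update, one license covers up to
# 32 cores, so a processor with more than 32 cores needs a second license.
def licenses_per_socket(cores, cores_per_license=32):
    return math.ceil(cores / cores_per_license)

assert licenses_per_socket(16) == 1   # e.g. the AMD 7F52
assert licenses_per_socket(32) == 1   # 32-core EPYC: still one license
assert licenses_per_socket(64) == 2   # 64-core EPYC: two licenses
```

This is why the 32-core parts sit at a sweet spot: the last core count that fits inside a single per-CPU license.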

Speaking of overall costs, the E665 has dual 10Gb RJ45/SFP+ or dual 25Gb SFP28 base networking options, which can be further expanded with PCIe NICs, including a dual 100Gb SFP28 option. From a cost perspective, the price delta between 10Gb and 25Gb networking is minimal. This is worth considering, particularly for greenfield sites, and even for brownfield sites where the networking may be upgraded in the near future. Last year, we began offering Fibre Channel cards on VxRail, which are also available on the E665. While FC connectivity may sound strange for a hyperconverged infrastructure platform, it makes sense for many of our customers who have existing SAN infrastructure, or applications and storage needs that are better suited to SAN (for example, PowerMax for extremely large databases requiring SRDF, or Isilon as a large file repository for medical files). While we'd prefer these SANs to be Dell EMC products, as long as a SAN is on the VMware SAN HCL, it can be connected. Providing this option enables customers to get the best both worlds have to offer.

The options don't stop there. While the majority of VxRail nodes are sold with all-flash configurations, there are customers whose needs are met with hybrid configs, or who are looking toward all-NVMe options. The E665 can be configured with as little as 960GB of raw storage capacity, up to maximums of 14TB hybrid, 46TB all-flash, or 32TB all-NVMe. Memory options consist of 4, 8, or 16 RDIMMs of 16GB, 32GB, or 64GB each. Maximum memory performance, 3200 MT/s, is achieved with one DIMM per memory channel; adding a second matching DIMM reduces bandwidth slightly to 2933 MT/s.
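For those curious about the theoretical ceiling, per-socket peak memory bandwidth can be estimated from channel count and transfer rate, since each 64-bit channel moves 8 bytes per transfer:

```python
# Back-of-envelope theoretical peak memory bandwidth per socket:
# channels x transfer rate (MT/s) x 8 bytes per 64-bit transfer.
def peak_bw_gbs(channels, megatransfers):
    return channels * megatransfers * 1e6 * 8 / 1e9

one_dimm_per_channel = peak_bw_gbs(8, 3200)    # one DIMM per channel
two_dimms_per_channel = peak_bw_gbs(8, 2933)   # second DIMM drops the rate

assert round(one_dimm_per_channel, 1) == 204.8   # GB/s
assert round(two_dimms_per_channel, 1) == 187.7  # GB/s
```

So populating the second DIMM per channel trades roughly 8% of peak bandwidth for double the capacity, which is often the right call.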

VxRail and Dell Technologies very much recognize that the needs of our customers vary greatly. No product with a single set of options can meet all of our customers' different needs. Today, VxRail offers six different series, each with a different focus: 

  • Everything E Series, a power-packed 1U of choice
  • Performance-focused P Series with dual or quad socket options
  • VDI-focused V Series with a choice of five different NVIDIA GPUs
  • Durable D Series, MIL-STD 810G certified for extreme heat, sand, dust, and vibration
  • Storage-dense S Series with 96TB of hybrid storage capacity
  • General purpose and compute-dense G Series with 228 cores in a 2U form factor

With these highly flexible configuration choices, there is a VxRail for almost every use case, and if there isn't, there is more than likely something in the broad Dell Technologies portfolio that is.


    Author: David Glynn, Sr. Principal Engineer, VxRail Tech Marketing

    VxRail Spec Sheet
    E665 Product Brief
    E665 One Pager
    D560 3D product landing page
    D Series video
    D Series spec sheet
    D Series Product Brief

Read Full Blog
VxRail EHR MEDITECH healthcare

Healthcare Homerun for VxRail – MEDITECH Certified

Vic Dery

Thu, 16 Jul 2020 19:14:23 -0000


Read Time: 0 minutes

At Dell Technologies we are excited and proud to announce the VxRail HCI (Hyperconverged Infrastructure) certification for MEDITECH. Dell Technologies is #1 in the Hyperconverged Systems segment, a position held for 12 consecutive quarters1. VxRail is the only fully integrated, pre-configured, and tested hyperconverged infrastructure that simplifies and extends VMware environments. This solution helps simplify MEDITECH environments that use VMware VMs, improving performance and scalability by bringing together and optimizing multiple workloads. 

With this Dell Technologies certified solution leveraging VxRail, MEDITECH environments are easier to use and carry a lower risk of failure, while continuing to provide a fiscally responsible approach.

Dell EMC and MEDITECH worked closely with an approved integrator* during the certification of VxRail running the MEDITECH test harness. Testing consisted of a VxRail cluster supporting all VMs required for the MEDITECH application and providing infrastructure redundancy. MBF (MEDITECH Backup Facilitator) backups are accomplished with Dell EMC NetWorker NMMEDI in conjunction with RecoverPoint for VMs, which has been tested and is backed by best-in-class implementation and a continuous focus on positive customer experience.

IDC WW Quarterly Converged Systems Tracker, Vendor Revenue (US$M) Q1 2020, June 18, 2020



Dell Technologies makes IT transformation real for MEDITECH environments with a data-first approach, and as a leading provider of healthcare IT infrastructure we are uniquely positioned to offer a full breadth of solutions for MEDITECH environments. In fact, more than 60% of MEDITECH's customers deploy a Dell Technologies solution2. For these reasons, at Dell Technologies we're excited and proud to add this certification, which supports MEDITECH Expanse, 6.X, Client/Server, and MAGIC environments, to our Dell Technologies Healthcare portfolio. 

*Special thanks to Teknicor for providing their best practices, assistance and lab space for this certification process.

2 HIMSS Analytics, May 2019.



Dell Healthcare page

Read Solutions for MEDITECH Environments 


Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing



Read Full Blog
SQL Server big data Hadoop VxRail Microsoft Big Data Cluster

Big Solutions on Dell EMC VxRail with SQL 2019 Big Data Cluster

Vic Dery

Thu, 09 Jul 2020 19:20:04 -0000


Read Time: 0 minutes

The amount of data, in many different formats, that organizations must manage, ingest, and analyze has been the driving force behind Microsoft SQL Server 2019 Big Data Clusters (BDC). SQL Server 2019 BDC enables the deployment of scalable clusters of SQL Server, Spark, and containerized HDFS (Hadoop Distributed File System) running on Kubernetes.

We recently deployed and tested SQL Server 2019 BDC on Dell EMC VxRail hyperconverged infrastructure to demonstrate how VxRail delivers the performance, scalability, and flexibility needed to bring these multiple workloads together.    

The Dell EMC VxRail platform was selected for its ability to incorporate compute, storage, virtualization, and management in one platform offering. The key feature of the VxRail HCI is the integration of vSphere, vSAN, and VxRail HCI System Software for an efficient and reliable deployment and operations experience. The use of VxRail with SQL Server 2019 BDC makes it easy to unite relational data with big data.  

The testing demonstrates the advantages of using VxRail with SQL Server 2019 BDC for analytic application development. This also demonstrates how Docker, Kubernetes, and the vSphere Container Storage Interface (CSI) driver accelerate the application development life cycle when they are used with VxRail. The lab environment for development and testing used four VxRail E560F nodes supported by the vSphere CSI driver. With this solution, developers can provision SQL Server BDC in containerized environments without the complexities of traditional methods for installing databases and provisioning storage.

Our white paper, Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail shows the power of implementing SQL Server 2019 BDC technologies on VxRail. Integrating SQL Server 2019 RDBMS, SQL Server BDC, MongoDB, and Oracle RDBMS helps to create a unified data analytics application. Using VxRail enhances the ability of SQL Server 2019 to scale out storage and compute clusters while embracing the virtualization techniques from VMware. This SQL Server 2019 BDC solution also benefits from the simplicity of a complete yet flexible validated Dell EMC VxRail with Kubernetes management and storage integration.

The solution demonstrates the combined value of the following technologies: 

  • VxRail E560F – All-flash performance
  • Large tables stored on a scaled-out HDFS storage cluster that is hosted by BDC 
  • Smaller related data tables that are hosted on SQL Server, MongoDB, and Oracle databases 
  • Distributed queries that are enabled by the PolyBase capability in SQL Server 2019 to process Transact-SQL queries that access external data in SQL Server, Oracle, Teradata, and MongoDB. 
  • Red Hat Enterprise Linux


Big Data Cluster Services

This diagram shows how the pools are built and details the benefits of Kubernetes features for container orchestration at scale, including:

  • Autoscaling, replication, and recovery of containers 
  • Intracontainer communication, such as IP sharing 
  • A single entity—a pod—for creating and managing multiple containers 
  • A container resource usage and performance analysis agent, cAdvisor 
  • Network pluggable architecture 
  • Load balancing 
  • Health check service
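To illustrate the pod concept from the list above, here is a minimal sketch of a Kubernetes pod manifest, expressed as a Python dictionary. The manifest fields follow the standard Kubernetes pod schema, but the container names and images are illustrative, not taken from the tested BDC deployment:

```python
# Minimal sketch of the pod concept: a single managed entity holding
# multiple containers that share a network identity. Standard Kubernetes
# pod schema; names and images are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "bdc-demo-pod"},
    "spec": {
        "containers": [
            {"name": "sql-engine", "image": "example/sql-server:2019"},
            {"name": "hdfs-datanode", "image": "example/hadoop-hdfs:3"},
        ],
    },
}

# Both containers are created, scheduled, and recovered as one unit,
# and they share the pod's IP (intracontainer communication, IP sharing).
names = [c["name"] for c in pod["spec"]["containers"]]
assert names == ["sql-engine", "hdfs-datanode"]
```

This one-entity-many-containers model is what lets BDC treat a SQL Server engine and its supporting services as a single deployable, replicable unit.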

This white paper, Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail,  addresses big data storage, the tools for handling big data, and the details around testing with TPC-H. When we tested data virtualization with PolyBase, the queries were successful, running without error and returning the results that joined all four data sources.

Because data virtualization does not involve physically copying and moving the data (so that the data is available to business users in real-time), BDC simplifies and centralizes access to and analysis of the organization’s data sphere. It enables IT to manage the solution by consolidating big data and data virtualization on one platform with a proven set of tools.

Success starts with the right foundation:

SQL Server 2019 BDC is a compelling new way to utilize SQL Server to bring high-value relational data and high-volume big data together on a unified, scalable data platform. All of this can be deployed with VxRail, enabling enterprises to experience the power of PolyBase to virtualize their data stores, create data lakes, and create scalable data marts in a unified, secure environment without needing to implement slow and costly Extract, Transform, and Load (ETL) pipelines. This makes data-driven applications and analysis more responsive and productive. SQL Server 2019 BDC and Dell EMC VxRail provide a complete unified data platform to deliver intelligent applications that can help make any organization more successful.

Read the full paper to learn more about how Dell EMC VxRail with SQL 2019 Big Data Clusters can:

  • Bring high-value relational data and high-volume big data together on a single, scalable platform.
  • Incorporate intelligent features and get insights from more of your data, including data stored beyond SQL Server in Hadoop, Oracle, Teradata, and MongoDB.
  • Support and enhance your database management and data-driven apps with advanced analytics using Hadoop and Spark. 


Additional VxRail & SQL resources:

Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail 

Microsoft SQL Server on VMware Cloud Foundation on Dell EMC VxRail

SQL on VxRail Solution brief

Key Benefits of Running Microsoft SQL Server on Dell EMC hyperconverged infrastructure (HCI) - Whitepaper

Key benefits of running Microsoft SQL Server on Dell EMC Hyperconverged Infrastructure (HCI) - Infographic

Architecting Microsoft SQL Server on VMware vSphere


Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing


Read Full Blog
VMware PowerMax VxRail VMware Cloud Foundation SRDF

Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications

Jason Marques

Mon, 29 Jun 2020 14:48:57 -0000


Read Time: 0 minutes

Reference Architecture Validation Whitepaper Now Available!

Many of us here at Dell Technologies regularly have conversations with customers and talk about what we refer to as the “Power of the Portfolio.” What does this mean exactly? It is essentially a reference to the fact that, as Dell Technologies, we have a robust and broad portfolio of modern IT infrastructure products and solutions across storage, networking, compute, virtualization, data protection, security, and more! At first glance, it can seem overwhelming to many. Some even say it could be considered complex to sort through. But we, as Dell Technologies, on the other hand, see it as an advantage. It allows us to solve a vast majority of our customers’ technical needs and support them as a strategic technology partner. 

It is one thing to have the quality and quantity of products and tools to get the job done -- it’s another to leverage this portfolio of products to deliver on what customers want most: business outcomes.

As Dell Technologies continues to innovate, we are making the best use of the technologies we have and are developing ways to use them together seamlessly in order to deliver better business outcomes for our customers. The conversations we have are not about this product OR that product but instead they are about bringing together this set of products AND that set of products to deliver a SOLUTION giving our customers the best of everything Dell Technologies has to offer without compromise and with reduced risk.

Figure 1: Cloud Foundation on VxRail Platform Components

The Dell Technologies Cloud Platform is an example of one of these solutions. And there is no better example that illustrates how to take advantage of the "Power of the Portfolio" than one that appears in a newly published reference architecture white paper, which focuses on validating the use of the Dell EMC PowerMax system with SRDF/Metro in a Dell Technologies Cloud Platform (VMware Cloud Foundation on Dell EMC VxRail) multi-site stretched-cluster deployment configuration (Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications). This configuration provides the highest levels of application availability for customers running mission-critical workloads in their Cloud Foundation on VxRail private cloud, levels that would otherwise not be possible with core DTCP alone.

Let’s briefly review some of the components used in the reference architecture and how they were configured and tested. 

Using external storage with VCF on VxRail

Customers commonly ask whether they can use external storage in Cloud Foundation on VxRail deployments. The answer is yes! This helps customers ease into the transition to a software-defined architecture from an operational perspective. It also helps customers leverage the investments in their existing infrastructure for the many different workloads that might still require external storage services.

External storage and Cloud Foundation have two important use cases: principal storage and supplemental storage. 

  • Principal storage - SDDC Manager provisions a workload domain that uses vSAN, NFS, or Fibre Channel (FC) storage for a workload domain cluster's principal storage (the initial shared storage that is used to create a cluster). By default, VCF uses vSAN storage as the principal storage for a cluster. The option to use NFS and FC-connected external storage is also available. This option enables administrators to create a workload domain cluster whose principal storage can be a previously provisioned NFS datastore or an FC-based VMFS datastore instead of vSAN. External storage as principal storage is only supported on VI workload domains, as vSAN is the required principal storage for the management domain in VCF.
  • Supplemental storage - This involves mounting previously provisioned external NFS, iSCSI, vVols, or FC storage to a Cloud Foundation workload domain cluster that is using vSAN as the principal storage. Supporting external storage for these workload domain clusters is comparable to the experience of administrators using standard vSphere clusters who want to attach secondary datastores to those clusters. 

At the time of writing, Cloud Foundation on VxRail supports supplemental storage use cases only. This is how external storage was used in the reference architecture solution configuration.
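The principal-versus-supplemental rules above can be summarized in a small sketch. This is a simplification of the documented behavior, with the supplemental-only restriction for Cloud Foundation on VxRail noted in a comment:

```python
# Simplified sketch of the storage-role rules described above. At the time
# of writing, Cloud Foundation on VxRail supports only the supplemental
# use case; the principal options shown apply to core VCF.
def allowed_storage(domain, role):
    if role == "principal":
        # vSAN is mandatory for the management domain; VI workload domains
        # may alternatively use NFS or FC-based VMFS as principal storage.
        return ["vsan"] if domain == "management" else ["vsan", "nfs", "fc-vmfs"]
    if role == "supplemental":
        return ["nfs", "iscsi", "vvols", "fc"]
    raise ValueError(f"unknown storage role: {role}")

assert allowed_storage("management", "principal") == ["vsan"]
assert "nfs" in allowed_storage("vi-workload", "principal")
assert "vvols" in allowed_storage("vi-workload", "supplemental")
```

In the reference architecture, the PowerMax SRDF/Metro devices map onto the supplemental branch of this logic, mounted alongside the cluster's vSAN principal storage.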

PowerMax Family

The Dell EMC PowerMax is the first Dell EMC hardware platform that uses an end-to-end Non-Volatile Memory Express (NVMe) architecture for customer data. NVMe is a set of standards that define a PCI Express (PCIe) interface used to efficiently access data storage volumes based on Non-Volatile Memory (NVM) media, which includes modern NAND-based flash along with higher-performing Storage Class Memory (SCM) media technologies. The NVMe-based PowerMax array fully unlocks the bandwidth, IOPS, and latency performance benefits that NVM media and multi-core CPUs offer to host-based applications, benefits that are unattainable using the previous generation of all-flash storage arrays. For a more detailed technical overview of the PowerMax family, please check out the whitepaper Dell EMC PowerMax: Family Overview.

The following figure shows the PowerMax 2000 and PowerMax 8000 models.

Figure 2: PowerMax product family


The Symmetrix Remote Data Facility (SRDF) maintains real-time (or near real-time) copies of data on a PowerMax production storage array at one or more remote PowerMax storage arrays. SRDF has three primary applications: 

  • Disaster recovery
  • High availability
  • Data migration

In the case of this reference architecture, SRDF/Metro was used to provide enhanced levels of high availability across two availability zone sites. For a complete technical overview of SRDF, please check out this great SRDF whitepaper: Dell EMC SRDF.

Solution Architecture

Now that we are familiar with the components used in the solution, let’s discuss the details of the solution architecture that was used. 

This overall solution design provides enhanced levels of flexibility and availability that extend the core capabilities of the VCF on VxRail cloud platform. The VCF on VxRail solution natively supports a stretched-cluster configuration for the management domain and a VI workload domain between two availability zones by using vSAN stretched clusters. A PowerMax SRDF/Metro configuration with vSphere Metro Storage Cluster (vMSC) is added to protect VI workload domain workloads by using supplementary storage for the workloads that are running on them.

Two types of vMSC configurations are verified with stretched Cloud Foundation on VxRail: uniform and non-uniform.

  • Uniform host access configuration - vSphere hosts from both sites are all connected to a storage node in the storage cluster across all sites. Paths presented to vSphere hosts are stretched across a distance.
  • Non-uniform host access configuration - vSphere hosts at each site are connected only to storage nodes at the same site. Paths presented to vSphere hosts from storage nodes are limited to the local site.

The following figure shows the topology used in the reference architecture of the Cloud Foundation uniform stretched-cluster configuration with PowerMax SRDF/Metro.

Figure 3: Cloud Foundation on VxRail uniform stretched-cluster config with PowerMax SRDF/Metro 

The following figure shows the topology used in the reference architecture of the Cloud Foundation on VxRail non-uniform stretched cluster configuration with PowerMax SRDF/Metro.

 Figure 4: Cloud Foundation on VxRail non-uniform stretched-cluster config with PowerMax SRDF/Metro 

Solution Validation Testing Methodology

We completed solution validation testing across the following major categories for both iSCSI and FC connected devices:

  • Functional Verification Tests - This testing addresses the basic operations that are performed when PowerMax is used as supplementary storage with VMware VCF on VxRail.
  • High Availability Tests - HA testing helps validate the capability of the solution to avoid a single point of failure, from the hardware component port level up to the IDC site level.
  • Reliability Tests - In general, reliability testing validates whether the components and the whole system are reliable enough with a certain level of stress running on them.

For complete details on all of the individual validation test scenarios that were performed, and the pass/fail results, check out the whitepaper.


To summarize, this white paper describes how Dell EMC engineers integrated VMware Cloud Foundation on VxRail with PowerMax SRDF/Metro and provides the design configuration steps that they took to automatically provision PowerMax storage by using the PowerMax vRO plug-in. The paper validates that the Cloud Foundation on VxRail solution functions as expected in both a PowerMax uniform vMSC configuration and a non-uniform vMSC configuration by passing all the designed test cases. This reference architecture validation demonstrates the power of the Dell Technologies portfolio to provide customers with modern cloud infrastructure technologies that deliver the highest levels of application availability for business-critical and mission-critical applications running in their private clouds.

Find the link to the white paper below along with other VCF on VxRail resources and see how you can leverage the “Power of the Portfolio” to support your business!

Jason Marques

Twitter - @vwhippersnapper

Additional Resources

Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications Reference Architecture Validation Whitepaper

VxRail page on

VxRail Videos

VCF on VxRail Interactive Demos

VMware VxRail VMware Cloud Foundation DTCP

Announcing General Availability of VCF on VxRail 4.7.511

Karol Boguniewicz

Thu, 18 Jun 2020 14:57:10 -0000



Improved automated lifecycle management and new hardware options

Today (7/2), Dell Technologies is announcing General Availability of VMware Cloud Foundation on VxRail 4.7.511. 

Why we are releasing

Because we’ve been notified about an upcoming important patch for Cloud Foundation version 3.10 from VMware, and we wanted to incorporate it in a GA version on VxRail for the best experience for our customers.

What’s New?

This new release introduces VCF enhancements and VxRail enhancements.

VMware Cloud Foundation enhancements:

  • ESXi Cluster-Level and Parallel Upgrades - Enables customers to update the ESXi software on multiple clusters in the management domain or in a workload domain in parallel. Parallel upgrades reduce the overall time required to upgrade the VCF environment. 

Figure 1. ESXi Cluster-Level and Parallel Upgrades

  • NSX-T Data Center Cluster-Level and Parallel Upgrades - Enables customers to upgrade all edge clusters in parallel, and then all host clusters in parallel. Again, parallel upgrades reduce the overall time required to upgrade the VCF environment. Customers can also select specific clusters to upgrade across multiple upgrade windows, so there’s no requirement for all clusters to be available at a given time.

Figure 2. NSX-T Cluster-Level and Parallel Upgrades

  • Skip Level Upgrades - Enables customers to upgrade to VMware Cloud Foundation on Dell EMC VxRail 3.10 from versions 3.7 and later. Note: in the case of VCF on VxRail, this must be performed by Dell EMC Customer Support at this time; customer-enabled skip level upgrades will be supported when the feature is available in the GUI. Customers with active support contracts should open a Service Request with Dell EMC Customer Support to schedule the skip level upgrade activity.

  • Option to disable Application Virtual Networks (AVNs) during Bring-up - AVNs deploy vRealize Suite components on NSX overlay networks, and we recommend using them during bring-up. Customers can now disable this feature, for instance, if they are not planning to use vRealize Suite components.

  • Support for multiple NSX-T Transport Zones - Some customers require this option due to their architecture/security standards, for even better separation of the network traffic. It’s now available as a Day 2 configuration option that can be enabled by customers or VMware Professional Services.
  • BOM Updates - Updated Bill of Materials with new product versions. For an updated BOM, please consult the release notes.
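The time savings from the parallel upgrade capability described above come down to simple arithmetic: sequential upgrades add cluster durations together, while parallel upgrades take only as long as the slowest cluster. The sketch below illustrates this; the cluster count and per-cluster duration are illustrative assumptions, not measured values.

```python
# Illustrative comparison of sequential vs. parallel cluster upgrades.
# Cluster counts and per-cluster durations are hypothetical examples.

def sequential_upgrade_time(durations_min):
    """Clusters upgraded one after another: total time is the sum."""
    return sum(durations_min)

def parallel_upgrade_time(durations_min):
    """Clusters upgraded concurrently: total time is the longest single upgrade."""
    return max(durations_min)

# Four workload-domain clusters, each taking roughly 90 minutes (assumed).
clusters = [90, 90, 90, 90]
print(sequential_upgrade_time(clusters))  # 360 minutes end to end
print(parallel_upgrade_time(clusters))    # 90 minutes end to end
```

With four similar clusters, the parallel window is roughly a quarter of the sequential one, which is the effect the release notes describe.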

VxRail 4.7.511 enhancements:

  • VCF on VxRail login using RSA SecurID two-factor authentication - Allows customers to implement more secure, two-factor authentication for VCF on VxRail using the RSA SecurID solution.

  • Support for new hardware options - Please check this blog post and the press release for more details on VxRail 4.7.510 platform features:
  • Intel Optane Persistent Memory 
  • VxRail D560 / D560F – ruggedized VxRail nodes
  • VxRail E665/F/N – AMD-based VxRail nodes


VMware Cloud Foundation on VxRail 4.7.511 provides several features that allow existing customers to upgrade their platform more efficiently than ever before. The updated LCM capabilities offer not only more efficiency (with parallelism), but more flexibility in terms of handling the maintenance windows. With skip level upgrade, available in this version as a professional service, it’s also possible to get to this latest release much faster. This increases security, and allows customers to get the most benefit from their existing investments in the platform. New customers will benefit from the broader spectrum of hardware options, including ruggedized (D-series) and AMD-based nodes.

Additional resources:

Blog post about VCF 4.0 on VxRail 7.0: The Dell Technologies Cloud Platform – Smaller in Size, Big on Features

Press release: Dell Technologies Brings IT Infrastructure and Cloud Capabilities to Edge Environments

Blog post about new features in VxRail 4.7.510: VxRail brings key features with the release of 4.7.510

VCF on VxRail technical whitepaper

VMware Cloud Foundation 3.10 on Dell EMC VxRail Release Notes from VMware

Blog post about VCF 3.10 from VMware: Introducing VMware Cloud Foundation 3.10

Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing

Twitter: @cl0udguide 

Intel HCI VxRail security Optane life cycle management

VxRail brings key features with the release of 4.7.510

KJ Bedard

Thu, 18 Jun 2020 14:24:47 -0000



VxRail recently released a new version of our software, 4.7.510, which brings key feature functionality and product offerings.

At a high level, this release further solidifies VxRail’s commitment to synchronous releases with vSphere within 30 days or less. The VxRail 4.7.510 release integrates and aligns with VMware by including the vSphere 6.7U3 patch release. More importantly, vSphere 6.7U3 provides the underlying support for Intel Optane persistent memory (or PMem), also offered in this release.

Intel Optane persistent memory is a non-volatile storage medium with RAM-like performance characteristics. In a hyperconverged VxRail environment, Intel Optane PMem accelerates IT transformation with faster analytics (think in-memory DBMS) and cloud services.

Intel Optane PMem (in App Direct mode) provides added memory options for the E560/F/N and P570/F and is supported on version 4.7.410. Additionally, PMem will be supported on the P580N beginning with version 4.7.510 on July 14.

This technology is ideal for many use cases, including in-memory databases and block storage devices, and it’s flexible and scalable, allowing you to start small with a single PMem module (card) and scale as needed. Other use cases include real-time analytics and transaction processing, journaling, massive parallel query functions, checkpoint acceleration, recovery time reduction, paging reduction, and overall application performance improvements.

New functionality enables customers to schedule and run "on demand" health checks in advance of, and independently from, an LCM upgrade. Not only does this give customers the flexibility to proactively troubleshoot issues, it ensures that clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers with stringent upgrade schedules, as they can rest assured that clusters will seamlessly upgrade within a specified window. Of course, running health checks on a regular basis provides peace of mind in knowing that your clusters are always ready for unscheduled patches and security updates.
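As a rough illustration of how an on-demand health check might be driven programmatically, the sketch below only builds an HTTPS request against a VxRail Manager host. The endpoint path, payload, and host name are hypothetical placeholders, not a documented VxRail API route; consult the official VxRail API guide for the actual call.

```python
# Sketch only: builds the request for an on-demand cluster health check.
# The endpoint path below is a hypothetical placeholder, not a documented
# VxRail API route; check the official VxRail API guide for the real one.

def build_health_check_request(vxm_host, user, password):
    url = f"https://{vxm_host}/rest/vxm/v1/cluster/health-check"  # hypothetical
    return {"method": "POST", "url": url, "auth": (user, password),
            "json": {"profile": "pre-upgrade"}}  # illustrative payload

req = build_health_check_request("vxm.example.local", "admin", "secret")
# A client such as `requests` could then send it:
#   requests.request(req["method"], req["url"], auth=req["auth"],
#                    json=req["json"], verify=False)
print(req["url"])
```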

Finally, the VxRail 4.7.510 release introduces optimized security functionality with two-factor authentication (or 2FA) with SecurID for VxRail. 2FA allows users to log in to VxRail via the vCenter plugin when vCenter is configured for RSA 2FA. Prior to this version, users were required to enter a username and password. The RSA Authentication Manager automatically verifies multiple prerequisites and system components to identify and authenticate users. This new functionality saves time by eliminating the username/password entry process for VxRail access. Two-factor authentication methods are often required by government agencies or large enterprises. VxRail has already incorporated enhanced security offerings including security hardening, VxRail ACLs and RBAC, KMIP-compliant key management, secure logging, and DARE, and now with the release of 4.7.510, the inclusion of 2FA further distinguishes VxRail as a market leader.

Please check out these resources for more VxRail 4.7.510 information:

VxRail Spec sheet

VxRail Technical FAQ

VxRail 4.7.510 release notes

By:  KJ Bedard - VxRail Technical Marketing Engineer


Twitter: @KJbedard

HCI VxRail

Protecting VxRail from Power Disturbances

Karol Boguniewicz

Fri, 12 Jun 2020 13:03:51 -0000



Preserving data integrity in case of unplanned power events

The challenge

Over the last few years, VxRail has evolved significantly -- becoming an ideal platform for most use cases and applications, spanning the core data center, edge locations, and the cloud. With its simplicity, scalability, and flexibility, it’s a great foundation for customers’ digital transformation initiatives, as well as high value and more demanding workloads, such as SAP HANA.

Running more business-critical workloads requires following best practices regarding data protection and availability. Dell Technologies specializes in data protection solutions and offers a portfolio of products that can fulfill even the most demanding RPO/RTO requirements from our customers. However, we are probably not giving enough attention to the other area related to this topic: protection against power disturbances and outages. Uninterruptible Power Supply (UPS) systems are at the heart of a data center’s electrical systems, and because VxRail is running critical workloads, it is a best practice to leverage a UPS to protect them and to ensure data integrity in case of unplanned power events. I want to highlight a solution from one of our partners – Eaton.

The solution

Eaton is an Advantage member of the Dell EMC Technology Connect Partner Program and the first UPS vendor to integrate their solution with VxRail. Eaton’s solution is a great example of how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers. By integrating Eaton’s Intelligent Power Manager (IPM) software with VxRail APIs and leveraging Eaton’s Gigabit Network Card, the solution can run on the same VxRail cluster it protects. This removes the need for additional external compute infrastructure to host the power management software - just a compatible Eaton UPS is required.

The solution consists of:

  • VxRail version 4.5.300 (minimum), 4.7.x, or 7.0.x and above
  • Eaton IPM SW v 1.67.243 or above
  • Eaton UPS – 5P, 5PX, 9PX, 9SX, 93PM, 93E, 9PXM
  • Eaton M2 Network Card FW v 1.7.5
  • IPM Gold License Perpetual

The main benefits are:

  • Preserving data integrity and business continuity by enabling automated and graceful shutdown of VxRail clusters that are experiencing unplanned extended power events
  • Reducing the need for onsite IT staff with simple set-up and remote management of power infrastructure using familiar VMware tools
  • Safeguarding the VxRail system from power anomalies and environmental threats

How does it work?

It’s quite simple (see the figure below). What’s interesting and unique is that the IPM software, which is running on the cluster, delegates the final shutdown of the system VMs and cluster to the card in the UPS device, and the card uses VxRail APIs to execute the cluster shutdown.

Figure 1. Eaton UPS and VxRail integration explained
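To make the delegation concrete, the sketch below builds the kind of graceful cluster shutdown request the UPS network card would issue through the VxRail API. The host name and credentials are placeholders, and the exact endpoint path and payload should be treated as assumptions to verify against the official VxRail API guide before use.

```python
# Minimal sketch of a graceful cluster shutdown request via the VxRail API,
# the kind of call Eaton's solution delegates to the UPS network card.
# Host and credentials are placeholders; verify the exact endpoint and
# payload against the official VxRail API guide before use.

def build_shutdown_request(vxm_host, user, password, dry_run=True):
    url = f"https://{vxm_host}/rest/vxm/v1/cluster/shutdown"
    # A dry run validates that the cluster *can* shut down gracefully
    # without actually powering anything off.
    return {"url": url, "auth": (user, password), "json": {"dryrun": dry_run}}

req = build_shutdown_request("vxm.example.local",
                             "administrator@vsphere.local", "secret",
                             dry_run=True)
# With `requests`, the card-side logic would then send something like:
#   requests.post(req["url"], auth=req["auth"], json=req["json"], verify=False)
print(req["json"])  # {'dryrun': True}
```

Running the dry run first is the sensible design here: the UPS can confirm during normal operation that a future emergency shutdown would succeed.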


Protection against unplanned power events should be part of a business continuity strategy for all customers who run their critical workloads on VxRail, as it preserves data integrity by enabling an automated and graceful shutdown of VxRail clusters. Eaton’s solution is a great example of such protection, and of how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers.

Additional resources:

Eaton website: Eaton ensures connectivity and protects Dell EMC VxRail from power disturbances

Brochure: Eaton delivers advanced power management for Dell EMC VxRail systems

Blog post: Take VxRail automation to the next level by leveraging APIs


Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing

Twitter: @cl0udguide 


VxRail Epic EHR healthcare

VxRail extends flexibility in Healthcare with new EHR best practices

Vic Dery

Mon, 01 Jun 2020 13:39:38 -0000



The healthcare industry is under pressure not only to deliver as health providers, but also to make the infrastructure that operates the healthcare system secure, scalable, and simple to use, allowing healthcare providers to focus on patients. VxRail has had a great deal of success in the healthcare vertical because its core values align so closely with those demanded by the industry. With early successes in VDI (Virtual Desktop Infrastructure), healthcare IT departments expanded to more business-critical and even life-critical IT use cases with VxRail, because it proved that it can be highly scalable, simple to use, and has security built into everything it does.

“Best Practices for VMware vSAN with Epic on Dell EMC VxRail”, created in collaboration with our peers at VMware, highlights the considerations for a small to medium size environment, specifically for Epic. It uses a six-node VxRail configuration to provide predictable and consistent performance, as well as Life Cycle Management (LCM) for the VxRail. The VxRail node used in this best practice is an E560N – an all-NVMe solution. Balancing workload and budget requirements, the dual-socket E560N provides a cost-effective, space-efficient 1U platform. Available with up to 32TB of NVMe capacity, the E560N is the first all-NVMe 1U VxRail platform. The all-NVMe capability provides higher performance at low queue depths, making it much easier to reliably deliver very high real-world performance for a SQL Server database management system (DBMS). Running multiple healthcare applications, including EHR, while maintaining the secure, scalable, and simplified use of VxRail is possible, enabling healthcare IT departments to scale and expand infrastructure to meet the ever-growing demands of health providers and the healthcare industry.

VxRail has had a great deal of success in the healthcare vertical because its core values align so closely with those demanded by the industry:

  • Secure - Security is a core part of VxRail design. It starts with the supply chain and the components used to build the system, continues into the features and software designed into it, and evolves with every lifecycle management update to the VxRail HCI (hyperconverged infrastructure) system software. The most recent feature added supports two-factor authentication (2FA) to provide an additional layer of security. VxRail has FIPS 140-2 validated encryption based on the vSAN architecture. A detailed whitepaper covering VxRail security features and certifications is available here.
  • Scalable - What makes VxRail an effective solution for healthcare is its ability to scale, not just for EHR applications, but for any application sharing the solution. With VxRail, you can size based on the known or expected workloads for the initial deployment and provide a solution that meets that workload requirement. This allows for the healthcare infrastructure to buy for the requirements of today, not the estimated requirements of three to five years down the road, as VxRail will scale easily into the future. Scaling the VxRail is easy... need more compute, just add an additional node; need more storage space, just add additional capacity drives.
  • Simplicity - Why is simplicity important when it comes to not just healthcare solutions, but all workloads? It is about IT teams being able to focus on their business and less on the continuous effort to maintain environments. VxRail simplifies operations with software-driven automation and lifecycle management. VxRail is continuously tested and updated as a solution, from the BIOS and firmware to the HCI and VMware software.

VxRail is flexible enough to support hospital systems alongside other applications for business and even education. A great example of this flexibility can be seen in this Mercy Ships case study. The new best practices for Epic EHR, combined with the proven successes that VxRail has with VDI in the healthcare vertical, are a testament to VxRail’s versatility.


Additional Resources:

Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing


Best Practices for VMware vSAN with Epic on Dell EMC VxRail - Here

Dell EMC VxRail Comprehensive Security Design - Here

See more solutions from Dell for healthcare and life sciences - Here  

Customer profile Mercy Ships - Here

Intel VMware VxRail vSAN Optane

Top benefits to using Intel Optane NVMe for cache drives in VxRail

David Glynn

Wed, 20 May 2020 14:42:17 -0000



Performance, endurance, and all without a price jump!

There is a saying that “A picture paints a thousand words” but let me add that a “graph can make for an awesome picture”.

Last August we at VxRail worked with ESG on a technical validation paper that included, among other things, the recent addition of Intel Optane NVMe drives for the vSAN caching layer. Figure 3 in this paper is a graph showing the results of a throughput benchmark workload (more on benchmarks later). When I do customer briefings and the question of vSAN caching performance comes up, this is my go-to whiteboard sketch because on its own it paints a very clear picture about the benefit of using Optane drives – and also because it is easy to draw.

In the public and private cloud, predictability of performance is important, doubly so for any form of latency. This is where caching comes into play: rather than having to wait on a busy system, we just leave it in the write cache inbox and get an acknowledgment. The inverse is also true. Like many parents, I read almost the same bedtime stories to my young kids every night, so you can be sure those books remain close to hand on my bedside “read cache” table. This write and read caching greatly helps in providing performance and consistent latency. With vSAN all-flash there is no longer any read cache, as the flash drives at the capacity layer provide enough random read access performance… just as my collection of bedtime story books has been replaced with a Kindle full of eBooks. Back to the write cache inbox where we’ve been dropping things off – at some point, this write cache needs to be emptied, and this is where the Intel Optane NVMe drives shine. Drawing the comparison back to my kids, I no longer drive to a library to drop off books. With a flick of my finger I can return, or in cache terms de-stage, books from my Kindle back to the town library - the capacity drives, if you will. This is a lot less disruptive to my day-to-day life: I don’t need to schedule it, I don’t need to stop what I’m doing, and with a bit of practice I’ve been able to do this mid-story. Let’s look at this in actual IT terms and business benefits.

To really show off how well the Optane drives shine, we want to stress the write cache as much as possible. This is where benchmarking tools and the right knowledge of how to apply them come into play. We had ESG design and run these benchmarking workloads for us. Now let’s be clear, this test is not reflective of a real-world workload but was designed purely to stress the write cache, in particular the de-staging from cache to capacity. The workload that created my go-to whiteboard sketch was the 100% sequential 64KB workload with a 1.2TB working set per node for 75 minutes.

The graph clearly shows the benefit of the Optane drives, they keep on chugging at 2,500MB/sec of throughput the entire time without dropping a beat. What’s not to like about that! This is usually when the techie customer in the room will try to burst my bubble by pointing out the unrealistic workload that is in no way reflective of their environment, or most environments… which is true. A more real-world workload would be a simulated relational database workload with a 22KB block size, mixing random 8K and sequential 128K I/O, with 60% reads and 40% writes, and a 600GB per node working set, which is quite a mouthful and is shown in figure 5. The results there show a steady 8.4-8.8% increase in IOPS across the board and a slower rise in latency resulting in a 10.5% lower response time under 80% load. 

Those of you running OLTP workloads will appreciate the graph shown in figure 6 where HammerDB was used to emulate the database activity of a typical online brokerage firm. The Optane cache drives under that workload sustained a remarkable 61% more transactions per minute (TPM) and new orders per minute (NOPM). That can result in significant business improvement for an online brokerage firm who adopts Optane drives versus one who is using NAND SSDs.

When it comes to write cache, performance is not everything, write endurance is also extremely important. The vSAN spec requires that cache drives be SSD Endurance Class C (3,650 TBW) or above, and Intel Optane beats this hands down with an over tenfold margin at 41 PBW (41,984 TBW). The Intel Optane 3D XPoint architecture allows memory cells to be individually addressed in a dense, transistor-less, stackable design. This extremely high write endurance capability has let us spec a smaller sized cache drive, which in turn lets us maintain a similar VxRail node price point, enabling you the customer to get more performance for your dollar. 
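The endurance margin quoted above works out as simple division, using only the figures stated in the text:

```python
# Endurance margin of Intel Optane vs. the vSAN Endurance Class C minimum.
# Figures are taken from the text above (TBW = terabytes written).

VSAN_CLASS_C_MIN_TBW = 3_650     # vSAN spec floor for cache drives
OPTANE_ENDURANCE_TBW = 41_984    # 41 PBW expressed in TBW

margin = OPTANE_ENDURANCE_TBW / VSAN_CLASS_C_MIN_TBW
print(f"{margin:.1f}x the required endurance")  # ~11.5x, i.e. over tenfold
```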

What’s not to like? Typically, you get to pick any two; faster/better/cheaper. With Intel Optane drives in your VxRail you get all three; more performance and better endurance, at roughly the same cost. Wins all around!

Author: David Glynn, Sr Principal Engineer, VxRail Tech Marketing

Resources: Dell EMC VxRail with Intel Xeon Scalable Processors and Intel Optane SSDs


The Dell Technologies Cloud Platform – Smaller in Size, Big on Features

Jason Marques

Wed, 20 May 2020 13:07:08 -0000



The latest VMware Cloud Foundation 4.0 on VxRail 7.0 release introduces a more accessible entry cloud option with support for new four node configurations. It also delivers a simple and direct path to vSphere with Kubernetes at cloud scale.

The Dell Technologies team is very excited to announce that May 12, 2020 marked the general availability of our latest Dell Technologies Cloud Platform release, VMware Cloud Foundation 4.0 on VxRail 7.0. There is so much to unpack in this release across all layers of the platform, from the latest features of VCF 4.0 to newly supported deployment configurations new to VCF on VxRail. To help you navigate through all of the goodness, I have broken out this post into two sections: VCF 4.0 updates and new features introduced specifically to VCF on VxRail deployments. Let’s jump right to it!

VMware Cloud Foundation 4.0 Updates

A lot of great information on VCF 4.0 features was already published by VMware as part of their Modern Apps Launch earlier this year. If you haven’t caught up yet, check out the links to some VMware blogs at the end of this post. Some of my favorite new features include new support for vSphere with Kubernetes (GAMECHANGER!), support for NSX-T in the Management Domain, and the NSX-T compatible Virtual Distributed Switch.

Now let’s dive into the items that are new to VCF on VxRail deployments, specifically ones that customers can take advantage of on top of the latest VCF 4.0 goodness.

New to VCF 4.0 on VxRail 7.0 Deployments

VCF Consolidated Architecture Four Node Deployment Support for Entry Level Cloud (available beginning May 26, 2020)

New to VCF on VxRail is support for the VCF Consolidated Architecture deployment option. Until now, VCF on VxRail required that all deployments use the VCF Standard Architecture. This was due to several factors: a major one was that NSX-T was not supported in the VCF Management Domain until this latest release. Having this capability was a prerequisite before we could support the consolidated architecture with VCF on VxRail.

Before we jump into the details of a VCF Consolidated Architecture deployment, let's review what the current VCF Standard deployment is all about.

VCF Standard Architecture Details

This deployment would consist of:

  • A minimum of seven VxRail nodes (however eight is recommended)
  • A four node Management Domain dedicated to run the VCF management software and at least one dedicated workload domain that consists of a three node cluster (however four is recommended) to run user workloads
  • The Management Domain runs its own dedicated vCenter and NSX-T instance
  • The workload domains are deployed with their own dedicated vCenter instances and choice of dedicated or shared NSX-T instances that are separate from the Management Domain NSX-T instance.

A summary of features includes:

  • Requires a minimum of 7 nodes (8 recommended)
  • A Management Domain dedicated to run management software components 
  • Dedicated VxRail VI domain(s) for user workloads
  • Each workload domain can consist of multiple clusters
  • Up to 15 domains are supported per VCF instance including the Management Domain
  • vCenter instances run in linked-mode
  • Supports vSAN storage only as principal storage
  • Supports using external storage as supplemental storage

This deployment architecture design is preferred because it provides the most flexibility, scalability, and workload isolation for customers scaling their clouds in production. However, this does require a larger initial infrastructure footprint, and thus cost, to get started.

For something that allows customers to start smaller, VMware developed a validated VCF Consolidated Architecture option. This allows for the Management domain cluster to run both the VCF management components and a customer’s general purpose server VM workloads. Since you are just using the Management Domain infrastructure to run both your management components and user workloads, your minimum infrastructure starting point consists of the four nodes required to create your Management Domain. In this model, vSphere Resource Pools are used to logically isolate cluster resources to the respective workloads running on the cluster. A single vCenter and NSX-T instance is used for all workloads running on the Management Domain cluster. 

VCF Consolidated Architecture Details

A summary of features of a Consolidated Architecture deployment:

  • Minimum of 4 VxRail nodes
  • Infrastructure and compute VMs run together on shared management domain
  • Resource Pools used to segregate and isolate workload types
  • Supports multiple clusters and scales to documented vSphere maximums
  • Does not support running Horizon Virtual Desktop or vSphere with Kubernetes workloads
  • Supports vSAN storage only as principal storage
  • Supports using external storage as supplemental storage for workload clusters

For customers to get started with an entry level cloud for general purpose VM server workloads, this option provides a smaller entry point, both in terms of required infrastructure footprint as well as cost.
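The footprint difference between the two architectures can be expressed with the node counts stated above:

```python
# Minimum VxRail node footprint for the two VCF architecture options,
# using the node counts stated in the text above.

STANDARD_MIN_NODES = 4 + 3    # 4-node management domain + 3-node workload domain
STANDARD_RECOMMENDED = 4 + 4  # 4-node management + 4-node workload domain
CONSOLIDATED_MIN_NODES = 4    # shared management/workload cluster

saved = STANDARD_MIN_NODES - CONSOLIDATED_MIN_NODES
print(f"Consolidated starts {saved} nodes smaller than Standard's "
      f"minimum of {STANDARD_MIN_NODES}")
```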

With the Dell Technologies Cloud Platform, we now have you covered across your scalability spectrum, from entry level to cloud scale! 

Automated and Validated Lifecycle Management Support for vSphere with Kubernetes Enabled Workload Domain Clusters

How is it that we can support this? How does this work? What benefits does this provide you, as a VCF on VxRail administrator, as a part of this latest release? You may be asking yourself these questions. Well, the answer is through the unique integration that Dell Technologies and VMware have co-engineered between SDDC Manager and VxRail Manager. With these integrations, we have developed a unique set of LCM capabilities that can benefit our customers tremendously. You can read more about the details in one of my previous blog posts here.

VCF 4.0 on VxRail 7.0 customers who benefit from the automated full stack LCM integration built into the platform can now extend that integration to the vSphere with Kubernetes components that are part of the ESXi hypervisor! Customers are future-proofed: when the need arises, vSphere with Kubernetes enabled clusters can be lifecycle managed with fully automated and validated VxRail LCM workflows natively integrated into the SDDC Manager management experience. Cool, right?! This means that you can now bring the same streamlined operations capabilities to your modern apps infrastructure just like you already do for your traditional apps! The figure below illustrates the LCM process for VCF on VxRail.

VCF on VxRail LCM Integrated Workflow

Introduction of initial support of VCF (SDDC Manager) Public APIs

VMware Cloud Foundation first introduced the concept of SDDC Manager Public APIs back in version 3.8. These APIs have expanded in subsequent releases and have been geared toward VCF deployments on Ready Nodes.

Well, we are happy to say that in this latest release, the VCF on VxRail team is offering initial support for VCF Public APIs. These will include a subset of the various APIs that are applicable to a VCF on VxRail deployment. For a full listing of the available APIs, please refer to the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.

Another new API related feature in this release is the availability of the VMware Cloud Foundation Developer Center. This provides some very handy API references and code samples built right into the SDDC Manager UI. These references are readily accessible and help our customers to better integrate their own systems and other third party systems directly into VMware Cloud Foundation on VxRail. The figure below provides a summary and a sneak peek at what this looks like.

VMware Cloud Foundation Developer Center SDDC Manager UI View
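As a sketch of what consuming these public APIs might look like, the snippet below builds the two calls a simple integration typically starts with: requesting a token from SDDC Manager, then listing workload domains. Host names and credentials are placeholders, and the exact endpoint paths should be verified against the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.

```python
# Sketch of calling the SDDC Manager public API: request a token, then list
# workload domains. Host and credentials are placeholders; verify endpoints
# against the VCF on Dell EMC VxRail API Reference Guide.

def build_token_request(sddc_host, user, password):
    return {"url": f"https://{sddc_host}/v1/tokens",
            "json": {"username": user, "password": password}}

def build_domains_request(sddc_host, access_token):
    return {"url": f"https://{sddc_host}/v1/domains",
            "headers": {"Authorization": f"Bearer {access_token}"}}

tok = build_token_request("sddc-manager.example.local",
                          "administrator@vsphere.local", "secret")
dom = build_domains_request("sddc-manager.example.local", "<token>")
# With `requests`: r = requests.post(tok["url"], json=tok["json"], verify=False)
#                  requests.get(dom["url"], headers=dom["headers"], verify=False)
print(dom["url"])
```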

Reduced VxRail Networking Hardware Configuration Requirements

Finally, we end our journey of new features on the hardware front. In this release, we have officially reduced the minimum VxRail node networking hardware configuration required for VCF use cases. With the introduction of vSphere 7.0 in VCF 4.0, admins can now use the vSphere Distributed Switch (VDS) for NSX-T; the need for a separate N-VDS switch has been deprecated. So why is this important, and how does this lead to VxRail node network hardware configuration improvements?

Well, up until now, VxRail and SDDC management networks have been configured to use the VDS. And this VDS would be configured to use at least two physical NIC ports as uplinks for high availability. When introducing the use of NSX-T on VxRail, an administrator would need to create a separate N-VDS switch for the NSX-T traffic to use. This switch would require its own pair of dedicated uplinks for high availability. Thus, in VCF on VxRail environments in which NSX-T would be used, each VxRail node would require a minimum of four physical NIC ports to support the two different pairs of uplinks for each of the switches. This resulted in a higher infrastructure footprint for both the VxRail nodes and for a customer’s Top of Rack Switch infrastructure because they would need to turn on more ports on the switch to support all of these host connections. This, in turn, would come with a higher cost.

Fast forward to this release -- now we can run NSX-T traffic on the same VDS as the VxRail and SDDC Manager management traffic. And when you can share the same VDS, you can reduce the number of physical uplink ports needed for high availability down to two, cutting the upfront hardware footprint and cost across the board! Win-win! The following figure highlights this new feature.

NSX-T Dual pNIC Features

Well, that about sums it all up. Thanks for coming on this journey and learning about the boatload of new features in VCF 4.0 on VxRail 7.0. As always, feel free to check out the additional resources for more information. Until next time, stay safe and stay healthy out there!

Jason Marques

Twitter -@vwhippersnapper

Additional Resources

What’s New in Cloud Foundation 4 VMware Blog Post

Delivering Kubernetes At Scale With VMware Cloud Foundation (Part 1) VMware Blog Post

Consistency Makes the Tanzu Difference VMware Blog Post

VxRail page on

VCF on VxRail Guides

VMware Cloud Foundation 4.0 on VxRail 7.0 Documentation and Release Notes

VxRail Videos

VCF on VxRail Interactive Demos

vSphere VMware VxRail vSAN life cycle management

Introducing VxRail 7.0.000 with vSphere 7.0 support

Daniel Chiu

Tue, 28 Apr 2020 13:23:14 -0000



The VxRail team may all be sheltering at our own homes nowadays, but that doesn’t mean we’re just binging on Netflix and Disney Plus content. We have been hard at work to deliver on our continuing commitment to provide our customers a supporting VxRail software bundle within 30 days of any vSphere release. And this time it’s for the highly touted vSphere 7.0! You can find more information about vSphere and vSAN 7.0 in the vSphere and vSAN product areas in VMware Virtual Blocks blogs.

Here’s what you need to know about VxRail 7.0.000:

  • VxRail 7.x train – You may have noticed we’ve jumped from a 4.7 release train to a 7.0 release train. What did you miss?? Well... there are no secret 5.x or 6.x release trains. We have made the decision to align with the vSAN versions, starting with VxRail 7.x. This makes it easier for you to map VxRail versions to vSAN versions.
  • Accelerate innovation – The primary focus of this VxRail release is our synchronous release commitment to the vSphere 7.0 release. This release provides our users the opportunity to run vSphere 7.0 on their clusters. The most likely use cases would be for users who are planning to transition production infrastructure to vSphere 7.0 but first want to evaluate it in a test environment, or for users who are keen on running the latest VMware software.
  • Operational freedom – You may have heard that vSphere 7.0 introduces an enhanced version of vSphere Update Manager that they call vSphere LCM, or vLCM for short. While vLCM definitely improves upon the automation and orchestration of updating an HCI stack, VxRail’s LCM still has the advantage over vLCM (check out my blog to learn more). For example, VMware is currently not recommending that vSAN Ready Nodes users upgrade to vSphere 7.0 because of driver forward-compatibility issues (you can read more about it in this KB article). That doesn’t stop VxRail from allowing you to upgrade your clusters to vSphere 7.0. The extensive research, testing, and validation work that goes into delivering Continuously Validated States for VxRail mitigates that issue.
  • Networking flexibility – Aside from synchronous release, the most notable new feature/capability is that VxRail consolidates the switch configuration for VxRail system traffic and NSX-T traffic. You can now run your VM traffic managed by NSX-T Manager on the same two ports used for VxRail system traffic (such as VxRail Management, vSAN, and vMotion) on the Network Daughter Card (NDC). Instead of requiring a 4-port NDC, users can use a 2-port NDC.

Consolidated switch configuration for VxRail system traffic managed by VxRail Manager/vCenter and VM traffic by NSX-T Manager

All said, VxRail 7.0.000 is a critical release that further exemplifies our alignment with VMware’s strategy and why VxRail is the platform of choice for vSAN technology and VMware’s Software-Defined Data Center solutions.

Our commitment to synchronous release for any vSphere release is important for users who want to benefit from the latest VMware innovations or for users who prioritize a secure platform over everything else. A case in point is the vCenter express patch that rolled out a couple of weeks ago to address a critical security vulnerability (you can find out more here). Within eight days of the express patch release, the VxRail team was able to run through all its testing and validation against all supported configurations to deliver a supported software bundle. Our $60M testing lab investment and 100+ team members dedicated to testing and quality assurance make that possible.

If you’re interested in upgrading your clusters to VxRail 7.0.000, please be sure to read the Release Notes.

Daniel Chiu, VxRail Technical Marketing


vSphere VMware VxRail life cycle management

How does vSphere LCM compare with VxRail LCM?

Daniel Chiu

Fri, 24 Apr 2020 14:35:44 -0000



VMware’s announcement of vSphere 7.0 this month included a highly anticipated enhanced version of vSphere Update Manager (VUM), which is now called vSphere Lifecycle Manager (vLCM).   Beyond the name change, much is intriguing: its capabilities, the customer benefits, and (what I have often been asked) the key differences between vLCM and VxRail lifecycle management.   I’ll address these three main areas of interest in this post and explain why VxRail LCM still has the advantage.

At its core, vLCM shifts to a desired state configuration model that allows vSphere administrators to manage clusters by using image profiles for both server hardware and ESXi software. This new approach allows more consistency in the ESXi host image across clusters, and centralizes and simplifies managing the HCI stack. vSphere administrators can now design their own image profile that consists of the ESXi software, and the firmware and drivers for the hardware components in the hosts.   They can run a check for compliance against the vSAN Hardware Compatibility List (HCL) for HBA compliance before executing the update with the image. vLCM can check for version drift that identifies differences between what’s installed on ESXi hosts versus the image profile saved on the vCenter Server.  To top that off, vLCM can recommend new target versions that are compatible with the image profile.  All of these are great features to simplify the operational experience of HCI LCM.
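The version-drift idea is easy to picture in code. The sketch below is a generic illustration, not VMware's implementation: it compares a host's installed component versions against a desired image profile and reports the differences. The component names and version strings are made up for the example.

```python
def detect_drift(desired: dict, installed: dict) -> dict:
    """Return components whose installed version differs from the desired profile.

    Each entry maps a component name to its desired and installed versions;
    a missing component reports an installed version of None.
    """
    return {
        component: {"desired": want, "installed": installed.get(component)}
        for component, want in desired.items()
        if installed.get(component) != want
    }


# Hypothetical desired image profile and host inventory for illustration.
desired_profile = {"esxi": "7.0.0", "hba_firmware": "16.17.00.03", "nic_driver": "4.1.3"}
host_inventory = {"esxi": "7.0.0", "hba_firmware": "16.00.01.00", "nic_driver": "4.1.3"}

# A non-empty result means the host has drifted from the desired state.
drift = detect_drift(desired_profile, host_inventory)
```

In this toy run only the HBA firmware is out of compliance, so remediation would target that single component.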

Let’s dig deeper so you can get a better appreciation for how these capabilities are delivered. vLCM relies on the Cluster Image Management service to allow administrators to build that desired state. At a minimum, the desired state starts with the ESXi image, which requires communication with the VMware Compatibility Guide and vSAN HCL to identify the appropriate version. To build a vCenter Server plugin that layers hardware drivers and firmware on top of the ESXi image, hardware vendors need to provide the files that fill out the rest of the desired image profile. Only when the desired state is complete with both hardware and software do capabilities such as simplified upgrades, compliance checks, version drift detection, and version recommendation benefit administrators the most. At this time, Dell and HPE have provided this addon.

vLCM Image Builder – courtesy of

While vLCM’s desired state configuration model provides a strong foundation to drive better operational efficiency in lifecycle management, there are caveats today.   I’ll focus on three key differences that will best help you in differentiating vLCM from VxRail LCM:

1. Validated state vs. desired state – Desired state does not mean validated state. VxRail invests significant resources to identify a validated version set of software, drivers, and firmware (what we term a Continuously Validated State), relieving administrators of the burden of defining a desired state, testing it, and validating it. With more than 100 dedicated VxRail team members, over $60 million of lab investments, and over 25,000 runtime hours to test each major release, VxRail users can rest assured when it comes to LCM of their VxRail clusters.

vLCM’s model relies heavily on its ecosystem to produce a desired state for the full stack.  Hardware vendors need to provide the bits for the drivers and firmware as well as the compliance check for most of the HCI stack.  Below is a snippet of the VxRail support matrix for VxRail 4.7.100 to show you some of the hardware components a VxRail Continuously Validated State delivers.   Beyond the storage HBA, it is the responsibility of the hardware vendor to perform compliance checks of the remaining hardware on the server.  Once compliance checks pass, users are responsible for validating the desired state.

2. Heterogeneous vs. homogeneous hosts – vCenter Server can only have one image profile per cluster.  That means clusters need to have hosts that have identical hardware configurations in order to use vLCM.  VxRail LCM supports a variety of mixed node configurations for use cases, such as when adding new generation servers into a cluster, or having multiple hardware configurations (that is, different node types) in the same cluster. For vSAN Ready Nodes, if an administrator has mixed node configurations, they still have the option to continue using VUM instead of vLCM -- a choice they have to make after they upgrade their cluster to vSphere 7.0.  

3. Support – troubleshooting LCM issues may well include the hardware vendor addon. Though vLCM’s desired state includes hardware and software, the support is still potentially separate. The administrator would need to collect the hardware vendor addon’s logs and contact the hardware vendor separately from VMware. (It is worth noting that both Dell and HPE are VMware certified support delivery partners. When considering your vSAN Ready Node partner, you may want to be sure that the hardware provider is also capable of delivering support for VMware.) With VxRail, a single-vendor support model by default streamlines all support calls directly to Dell Technical Support. With their in-depth VMware knowledge, Dell Technical Support resolves 95% of support cases without requiring coordination with VMware support.

In evaluating vLCM, I’ll refer to the LCM value tiers. There are three levels, starting from lower to higher customer value: update orchestration, configuration stability, and decision support:

  • Automation and orchestration is the foundation to streamlining full stack LCM. In order to simplify LCM, the stack needs to be managed as one.  
  • Configuration stability delivers the assurance to administrators that they can efficiently evolve their clusters (that is, new generation hardware, new software innovation) without disrupting availability or performance for their workloads.
  • Decision support is where we can offload the decision-making burden from the administrator.

Explaining the Lifecycle Management value tiers for customers

vLCM has simplified full stack LCM by automating and orchestrating hardware and software upgrades into a single process flow. The next step is configuration stability, which is not just stable code (which all HCI stacks should claim), but the confidence customers have in knowing that non-disruptive LCM of their HCI requires minimal work on their part. When VxRail releases a composite bundle, VxRail customers know that it has been extensively tested against a wide range of configurations to assure uptime and performance. For most VxRail customers I’ve talked to, LCM assurance and workload continuity are the benefits they value most.

VMware has done a great job with its initial release of vLCM. vSAN Ready Node customers, especially those who use nodes from vendors like Dell that support the capability (and who can also be a support delivery partner), will certainly benefit from the improvements over VUM. Hopefully, with the differences outlined above, you will have a greater appreciation for where vLCM is in its evolution, and where VxRail continues innovating and keeping its advantage.  

Daniel Chiu, VxRail Technical Marketing


HCI VxRail SmartFabric PowerSwitch OpenManage

SmartFabric Services for VxRail

Karol Boguniewicz

Fri, 24 Apr 2020 13:50:14 -0000



HCI networking made easy (again!). Now even more powerful with multi-rack support.

The Challenge

Network infrastructure is a critical component of HCI. In contrast to legacy 3-tier architectures, which typically have a dedicated storage and storage network, HCI architecture is more integrated and simplified. Its design allows you to share the same network infrastructure used for workload-related traffic and inter-cluster communication with the software-defined storage. Reliability and the proper setup of this network infrastructure not only determines the accessibility of the running workloads (from the external network), it also determines the performance and availability of the storage, and as a result, the whole HCI system.

Unfortunately, in most cases, setting up this critical component properly is complex and error-prone. Why? Because of the disconnect between the responsible teams. Typically configuring a physical network requires expert network knowledge which is quite rare among HCI admins. The reverse is also true: network admins typically have a limited knowledge of HCI systems, because this is not their area of expertise and responsibility.

The situation gets even more challenging when you think about increasingly complex deployments, when you go beyond just a pair of ToR switches and beyond a single-rack system. This scenario is becoming more common, as HCI is becoming a mainstream architecture within the data center, thanks to its maturity, simplicity, and being recognized as a perfect infrastructure foundation for the digital transformation and VDI/End User Computing (EUC) initiatives. You need much more computing power and storage capacity to handle increased workload requirements.

At the same time, with the broader adoption of HCI, customers are looking for ways to connect their existing infrastructure to the same fabric, in order to simplify the migration process to the new architecture or to leverage dedicated external NAS systems, such as Isilon, to store files and application or user data.

A brief history of SmartFabric Services for VxRail

Here at Dell Technologies we recognize these challenges. That’s why we introduced SmartFabric Services (SFS) for VxRail. SFS for VxRail is built into Dell EMC Networking SmartFabric OS10 Enterprise Edition, the software that runs on the Dell EMC PowerSwitch networking portfolio. We announced the first version of SFS for VxRail at VMworld 2018. With this functionality, customers can quickly and easily deploy and automate data center fabrics for VxRail, while at the same time reducing the risk of misconfiguration.

Since that time, Dell has expanded the capabilities of SFS for VxRail. The initial release of SFS for VxRail allowed VxRail to fully configure the switch fabric to support the VxRail cluster (as part of the VxRail 4.7.0 release back in Dec 2018). The following release included automated discovery of nodes added to a VxRail cluster (as part of VxRail 4.7.100 in Jan 2019).

The new solution

This week we are excited to introduce a major new release of SFS for VxRail as a part of Dell EMC SmartFabric OS10 and VxRail 4.7.410.

So, what are the main enhancements?

  • Automation at scale
     Customers can easily scale their VxRail deployments, starting with a single rack with two ToR leaf switches, and expand to multi-rack, multi-cluster VxRail deployments with up to 20 switches in a leaf-spine network architecture at a single site. SFS now automates over 99% (!) of the network configuration steps* for leaf and spine fabrics across multiple racks, significantly simplifying complex multi-rack deployments.
  • Improved usability
     An updated version of the OpenManage Network Integration (OMNI) plugin provides a single pane for “day 2” fabric management and operations through vCenter (the main management interface used by VxRail and vSphere admins), and a new embedded SFS UI simplifying “day 1” setup of the fabric.
  • Greater expandability
    Customers can now connect non-VxRail devices, such as additional PowerEdge servers or NAS systems, to the same fabric. The onboarding can be performed as a “day 2” operation from the OMNI plugin. In this way, customers can reduce the cost of additional switching infrastructure when building more sophisticated solutions with VxRail.


Figure 1. Comparison of a multi-rack VxRail deployment, without and with SFS

Solution components

In order to take advantage of this solution, you need the following components:

  • At a minimum a pair of supported Dell EMC PowerSwitch Data Center Switches. For an up-to-date list of supported hardware and software components, please consult the latest VxRail Support Matrix. At the time of writing this post, the following models are supported: S4100 (10GbE) and S5200 (25GbE) series for the leaf and Z9200 series or S5232 for the spine layer. To learn more about the Dell EMC PowerSwitch product portfolio, please visit the PowerSwitch website.
  • Dell EMC Networking SmartFabric OS10 Enterprise Edition (version or later). This operating system is available for the Dell EMC PowerSwitch Data Center Switches, and implements SFS functionality. To learn more, please visit the OS10 website.
  • A VxRail cluster consisting of 10GbE or 25GbE nodes, with software version 4.7.410 or later.
  • OpenManage Network Integration (OMNI) for VMware vCenter version 1.2.30 or later.

How does the multi-rack feature work?

The multi-rack feature relies on the Hardware VTEP functionality in Dell EMC PowerSwitches and the automated creation of a VxLAN tunnel network across the switch fabric in multiple racks.

VxLAN (Virtual Extensible Local Area Network) is an overlay technology that lets you extend a Layer 2 (L2) “overlay” network over a Layer 3 (L3) “underlay” network. It does so by adding a VxLAN header to the original L2 Ethernet frame and placing it into an IP/UDP packet to be transported across the L3 underlay network.
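For the curious, the encapsulation step can be sketched in a few lines of Python. This is a generic illustration of the VXLAN header layout from RFC 7348, not SmartFabric OS10 code; the inner frame contents and VNI value are arbitrary.

```python
import struct


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 Ethernet frame.

    Header layout: a flags byte with the I bit set (meaning the VNI field is
    valid), 3 reserved bytes, a 24-bit VNI, and 1 reserved byte. The result
    would then be carried in a UDP datagram (destination port 4789) across
    the L3 underlay network.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # First word: flags 0x08 followed by reserved bytes; second word: VNI << 8.
    header = struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)
    return header + inner_frame
```

A 14-byte Ethernet header plus payload passed through this function comes back 8 bytes longer, with the VNI recoverable from bytes 4 through 6 of the result.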

By default, all VxRail networks are configured as L2. With the configuration of this VxLAN tunnel, the L2 network is “stretched” across multiple racks with VxRail nodes. This allows for the scalability of L3 networks with the VM mobility benefits of an L2 network. For example, the nodes in a VxRail cluster can reside on any rack within the SmartFabric network, and VMs can be migrated within the same VxRail cluster to any other node without manual network configuration.

Figure 2. Overview of the VLAN and VxLAN VxRail traffic with SFS for multi-rack VxRail

This new functionality is enabled by the new L3 Fabric personality, available as of this SmartFabric OS10 release, which automates configuration of a leaf-spine fabric in a single-rack or multi-rack fabric and supports both L2 and L3 upstream connectivity. What is a fabric personality? An SFS personality is a setting that enables the functionality and supported configuration of the switch fabric.

To see how simple it is to configure the fabric and to deploy a VxRail multi-rack cluster with SFS, please see the following demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks.

Single pane for management and “day 2” operations

SFS not only automates the initial deployment (“day 1” fabric setup), but greatly simplifies the ongoing management and operations on the fabric. This is done in a familiar interface for VxRail / vSphere admins – vCenter, through the OMNI plugin, distributed as a virtual appliance.

It’s powerful! From this “VMware admin-friendly” interface you can:

  • Add a SmartFabric instance to be managed (OMNI supports multiple fabrics to be managed from the same vCenter / OMNI plugin).
  • Get visibility into the configured fabric – domain, fabric nodes, rack, switches, and so on.
  • Visualize the fabric and the configured connections between the fabric elements with a “live” diagram that allows “drill-down” to get more specific information (Figure 3).
  • Manage breakout ports and jump ports, as well as on-board additional servers or non-VxRail devices.
  • Configure L2 or L3 fabric uplinks, allowing more flexibility and support of multiple fabric topologies.
  • Create, edit, and delete VxLAN and VLAN-based networks, to customize the network setup for specific business needs.
  • Create a host-centric network inventory that provides a clear mapping between configured virtual and physical components (interfaces, switches, networks, and VMs). For instance, you can inspect virtual and physical network configuration from the same host monitoring view in vCenter (Figure 4). This is extremely useful for troubleshooting potential network connectivity issues.
  • Upgrade SmartFabric OS on the physical switches in the fabric and replace a switch that simplifies the lifecycle management of the fabric.

Figure 3. Sample view from the OMNI vCenter plugin showing a fabric topology

To see how simple it is to deploy the OMNI plugin and to get familiar with some of the options available from its interface, please see the following demo: Dell EMC OpenManage Network Integration for VMware vCenter.

OMNI also monitors the VMware virtual networks for changes (such as to portgroups in vSS and vDS VMware virtual switches) and as necessary, reconfigures the underlying physical fabric.

Figure 4. OMNI – monitor virtual and physical network configuration from a single view

Thanks to OMNI, managing the physical network for VxRail becomes much simpler, less error-prone, and can be done by the VxRail admin directly from a familiar management interface, without having to log into the console of the physical switches that are part of the fabric.

Supported topologies

This new SFS release is very flexible and supports multiple fabric topologies. Due to the limited size of this post, I will only list them by name:

  • Single-Rack – just a pair of leaf switches in a single rack, supports both L2 and L3 upstream connectivity / uplinks – the equivalent of the previous SFS functionality
  • (New) Single-Rack to Multi-Rack – starts with a pair of switches, expands to multi-rack by adding spine switches and additional racks with leaf switches
  • (New) Multi-Rack with Leaf Border – adds upstream connectivity via the pair of leaf switches; this supports both L2 or L3 uplinks
  • (New) Multi-Rack with Spine Border - adds upstream connectivity via the pair of leaf spine; this supports L3 uplinks
  • (New) Multi-Rack with Dedicated Leaf Border - adds upstream connectivity via the dedicated pair of border switches above the spine layer; this supports L3 uplinks

For detailed information on these topologies, please consult Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide.

Note that SFS for VxRail does not currently support NSX-T and VCF on VxRail.

Final thoughts

This latest version of SmartFabric Services for VxRail takes HCI network automation to the next level, solving the much bigger network complexity problem of multi-rack environments compared to the much simpler single-rack, dual-switch configuration. With SFS, customers can:

  • Reduce the CAPEX and OPEX related to HCI network infrastructure, thanks to automation (reducing over 99% of required configuration steps* when setting up a multi-rack fabric), and a reduced infrastructure footprint
  • Accelerate the deployment of essential IT infrastructure for their business initiatives
  • Reduce the risk related to the error-prone configuration of complex multi-rack, multi-cluster HCI deployments
  • Increase the availability and performance of hosted applications
  • Use a familiar management console (vSphere Client / vCenter) to drive additional automation of day 2 operations
  • Rapidly perform any necessary changes to the physical network, in an automated way, without requiring highly-skilled network personnel

Additional resources:

VxRail Support Matrix

Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide

Dell EMC Networking SmartFabric Services Deployment with VxRail

SmartFabric Services for OpenManage Network Integration User Guide Release 1.2

Demo: Dell EMC OpenManage Network Integration for VMware vCenter

Demo: Expand SmartFabric and VxRail to Multi-Rack

Demo: Dell EMC Networking SFS Deployment with VxRail - L2 Uplinks

Demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks

Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing

Twitter: @cl0udguide 

*Disclaimer: Based on internal analysis comparing SmartFabric to manual network configuration, Oct 2019.  Actual results will vary.

HCI VxRail

Join the VxRail Xpert Crew (open to Partners!)

KJ Bedard

Wed, 22 Apr 2020 19:40:51 -0000



Join the VxRail Xpert Crew (open to Partners!)

VxRail is #1 in hyperconverged systems, and the fastest growing HCI product in the industry today[1]. Join a thriving community of >800 VxRail global experts who, like you, are interested in the most relevant, critical, and timely information for Sales awareness.

What is this? A community for VxRail Xperts who are advocates in their selling circles, regardless of organizational alignment or position in their company. Although this forum is not exclusive to SEs, our primary focus is on pre-sales efforts and initiatives. Our mission is to provide timely and technical information, knowledge, and tools in an effort to cultivate consistency for architecting, sizing, and configuring any VxRail solution. This is not a forum for post-sales or customer support.

Who is eligible? Any Dell Technologies employee or Dell EMC partner who passes the VxRail Xpert Crew assessment exam. VMware partners are welcome to join if they are also Dell EMC partners. Due to the NDA nature of this community, customers are not permitted.

Why should I join?
There are many reasons to pass the assessment and join the VxRail Xpert crew, including:

  • A dedicated communication platform with other Xperts
  • 24x7 access to a Global community with ~800 active members (including ~150 partners)
  • Opportunity to provide feedback to the VxRail BU team
  • Early access to information (performance data, product roadmaps, analyst data and more...)
  • Special events for community members only
    • Intel announcements
    • Exclusive technology sessions - monthly

What are my responsibilities as a member?

  • Collaborate: Share and learn with other Xperts
  • Participate: Stay active in the community
  • Stay smart: Be current, be knowledgeable and be relevant
  • Walk the talk: Be the Xpert in your selling circles

How do I join?

  • Know your stuff - read, learn, and master VxRail technology by reviewing materials listed below
  • Test your knowledge and pass the exam
  • Follow the onboarding process to become a full-fledged member of the VxRail Xpert crew

How do I access the exam?

Exam details:

  • You will execute the exam via SABA deeplink and using your partner login.
  • Assessment details: 60 questions, 90 minutes to complete, 75% for passing score, 5 retakes allowed (there is a 24 hour wait period between retakes).
  • Be sure to follow the instructions upon completion of the exam. (The onboarding process for the Xpert Crew is not automated.)
  • Dell EMC Technology partners having access to the partner portal can access the assessment exam here: (If you have issues accessing this URL, please send email to

What should I study? (some items are gated assets requiring login)

Additional/Suggested Resources

VxRail Ordering and Licensing Guide

VxRail vCenter Server Planning Guide

VxRail Interactive Demos

[1] IDC WW Quarterly Converged Systems Tracker, Vendor Revenue (US$M) Q4 2019, March 19, 2020

KJ Bedard, Twitter - @kjbedard, Linked In -

VMware Oracle VxRail Oracle RAC VMware Cloud Foundation

Built to Scale with VCF on VxRail and Oracle 19C RAC

Vic Dery

Fri, 17 Apr 2020 05:21:03 -0000



Built to Scale with VCF on VxRail and Oracle 19C RAC

The newly released Oracle RAC on Dell EMC VxRail with VMware Cloud Foundation (VCF) Reference Architecture (RA) guides customers in building an efficient and high-performing hyperconverged infrastructure to run their OLTP workloads. Scalability was the primary goal of this RA, and performance was highlighted as the numbers were generated. As Oracle RAC scaled, throughput increased to over 1 million transactions per minute (TPM), while read IOPS showed sub-millisecond (0.64-0.70 ms) performance. The performance achieved with VxRail is a great added benefit to the core design points for Oracle RAC environments, whose primary focus is the availability and resiliency of the solution. Links to a reference architecture (“Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail”) and a solution brief (“Deploying Oracle RAC on Dell EMC VxRail”) are available here and at the end of this post.

The RAC solution with VxRail scaled out easily — you simply add a new node to join an existing VxRail cluster. The VxRail Manager provides a simple path that automatically discovers and non-disruptively adds each new node. VMware vSphere and vSAN can then rebalance resources and workloads across the cluster, creating a single resource pool for compute and storage. 

The VxRail clusters were built with eight P570F nodes; four for the VCF Management Domain and four for the Oracle RAC Workload Domain. 


Specifics on the build, including the hardware and software used, are detailed within the reference architecture. It also provides information on the testing, tools used, and results.


This graph shows the performance of TPM and Response Time when increasing the RAC node count from one to four. Notice that the average TPM increased with near-linear trendline (shown by the dotted line) as additional RAC nodes were added, while total application response time was maintained at 20 milliseconds or less.

Note: The TPM near-linear trendline is shown in the above graph (blue dotted line). As additional RAC nodes are added, performance increases, as does RAC high availability. Fully linear TPM growth (equal performance gained per node) is not achieved due to the RAC nodes’ dependency on concurrency of access, instance, network, or other factors. See the RA for additional performance-related information.

Summary of performance

Different-sized databases kept the TPM at the same level (about one million transactions) while keeping the application response time at 20ms or below. When increasing the database size, the physical read and write IOPS increased near-linearly, as reported from the Oracle AWR. This indicated that more read and write I/O requests were served by the backend storage under the same configuration. Overall, when the peak client IOPS was up to 100,000, vSAN still provided excellent storage performance, with sub-millisecond latency on reads and single-digit millisecond latency on writes.

Sidebar about Oracle licensing: While not covered in the RA, VxRail offers several facilities to control Oracle license costs and, in some cases, eliminate the need for costly licensed options. These include a broad choice of CPU core configurations, some with fewer cores and higher processing power per core, to maximize Oracle workload performance while minimizing license requirements. Costly add-on options such as encryption and compression can be provided by vSAN and are handled by VxRail. Further, vSphere hypervisor features like DRS allow Oracle VMs to be confined to licensed nodes only. 

You can speak to a Dell Technologies Oracle specialist for more details on how to control Oracle licensing costs for VMware environments. 


Oracle Database 19c on VxRail offers customers performance, scalability, reliability, and security for all their operational and analytical workloads. The Oracle RAC on VxRail test environment was first created to highlight the architecture. It also had the added benefit of showcasing the great performance VxRail delivers. If you need more performance, it is simple to adjust the configuration by adding more VxRail nodes to the cluster. If you need more storage, add more drives to meet the scale required of the database. Dell Technologies has Oracle specialists to ensure the VxRail cluster will meet the scale and performance outcomes desired for Oracle environments.

Additional Resources:

Reference Architecture - Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail

Solution Brief - Deploying Oracle RAC on Dell EMC VxRail

Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing


Special thanks to David Glynn for assisting with the reviews  


VMware Cloud Foundation on Dell EMC VxRail Integration Features Series: Part 1—Full Stack Automated LCM

Jason Marques

Fri, 03 Apr 2020 21:39:05 -0000


Read Time: 0 minutes

VMware Cloud Foundation on Dell EMC VxRail Integration Features Series: Part 1

Full Stack Automated Lifecycle Management

It’s no surprise that VMware Cloud Foundation on VxRail features numerous unique integrations with many VCF components, such as SDDC Manager and even VMware Cloud Builder. These integrations are the result of the co-engineering efforts by Dell Technologies and VMware with every release of VCF on VxRail. The following figure highlights some of the components that are part of this integration effort.

These integrations of VCF on VxRail offer customers a unique set of features in various categories, from security, to infrastructure deployment and expansion, to deep monitoring and visibility, all developed to streamline infrastructure operations.

Where do these integrations exist? The following figure outlines how they impact a customer’s Day 0 to Day 2 operations experience with VCF on VxRail.

In this series I will showcase some of these unique integration features, including some of the more nuanced ones. But for this initial post, I want to highlight one of the most popular and differentiated customer benefits that emerged from this integration work: full stack automated lifecycle management (LCM).

VxRail already delivers a differentiated LCM customer experience through its Continuously Validated States capabilities for the entire VxRail hardware and software stack. (As you may know, the VxRail stack includes the hardware and firmware of compute, network, and storage components, along with VMware ESXi, VMware vSAN, and the Dell EMC VxRail HCI System software itself, which includes VxRail Manager.)

With VCF on VxRail, VxRail Manager is natively integrated into the SDDC Manager LCM framework: SDDC Manager surfaces LCM operations in its UI and calls VxRail Manager APIs when executing LCM workflows. This integration allows SDDC Manager to leverage all of the LCM capabilities that exist natively in VxRail right out of the box. SDDC Manager can then execute SDDC software LCM AND drive native VxRail HCI system LCM, leveraging native VxRail Manager APIs and the Continuously Validated State update packages for both the VxRail software and hardware components.

All of this happens seamlessly behind the scenes when administrators use the SDDC Manager UI to kick off native SDDC Manager workflows. This means that customers don’t have to leave the SDDC Manager UI management experience at all for full stack SDDC software and VxRail HCI infrastructure LCM operations. How cool is that?! The following figure illustrates the concepts behind this effective relationship.

For more details about how this LCM experience works, check out my lightboard talk about it!

Also, if you want to get some hands-on experience performing LCM operations for the full VCF on VxRail stack, check out the VCF on VxRail Interactive Demo to see this and some of the other unique integrations!

I am already hard at work writing up the next blog post in the series. Check back soon to learn more.

Jason Marques

Twitter - @vwhippersnapper

Additional Resources

VxRail page on

VCF on VxRail Guides

VCF on VxRail Whitepaper

VCF 3.9.1 on VxRail 4.7.410 Release Notes

VxRail Videos

VCF on VxRail Interactive Demos


Take VxRail automation to the next level by leveraging APIs

Karol Boguniewicz

Mon, 30 Mar 2020 15:20:04 -0000


Read Time: 0 minutes


The Challenge

VxRail Manager, available as a part of HCI System Software, drastically simplifies the lifecycle management and operations of a single VxRail cluster. With a "single click" user experience available directly in the vCenter interface, you can perform a full upgrade of all software components of the cluster, including not only vSphere and vSAN but also the complete set of server hardware firmware and drivers, such as NICs, disk controllers, and drives. That's a simplified experience you won't find in any other VMware-based HCI solution.

But what if you need to manage not a single cluster, but a farm of dozens or hundreds of VxRail clusters? Or maybe you're using an orchestration tool to holistically automate IT infrastructure and processes. Would you still want to log in manually to each of these clusters separately and click a button to shut down a cluster, collect log or health data, or perform LCM operations?

This is where VxRail REST APIs come in handy.

The VxRail API Solution

REST APIs are very important for customers who would like to programmatically automate operations of their VxRail-based IT environment and integrate with external configuration management or cloud management tools.

In VxRail HCI System Software 4.7.300 we’ve introduced very significant improvements in this space:

  • Swagger integration – which allows for simplified consumption of the APIs and their documentation;
  • Comprehensiveness – we’ve almost doubled the number of public APIs available;
  • PowerShell integration – which allows consumption of the APIs from Microsoft PowerShell or VMware PowerCLI.

The easiest way to start using and accessing these APIs is through the web browser, thanks to the Swagger integration. Swagger is an open-source toolkit that simplifies OpenAPI development and can be launched from within the VxRail Manager virtual appliance. To access the documentation, simply open the following URL in the web browser: https://<VxM_IP>/rest/vxm/api-doc.html (where <VxM_IP> is the IP address of VxRail Manager), and you should see a page similar to the one shown below:

Figure 1. Sample view into VxRail REST APIs via Swagger

This interface is especially useful for customers leveraging orchestration or configuration management tools, who can use it to accelerate integration of VxRail clusters into their automation workflows. The VxRail API is complementary to the APIs offered by VMware.
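For scripted access outside the browser, the same endpoints can be called from any HTTP client. The following is a minimal Python sketch using only the standard library; the /v1/system path and basic-auth scheme shown here are assumptions to confirm against your appliance's api-doc.html:

```python
import base64
import json
import ssl
import urllib.request

def vxm_url(vxm_ip: str, path: str) -> str:
    # All VxRail Manager REST endpoints live under /rest/vxm
    return f"https://{vxm_ip}/rest/vxm{path}"

def basic_auth_header(user: str, password: str) -> str:
    # Standard HTTP Basic auth: base64("user:password")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def get_json(vxm_ip, path, user, password, insecure=False):
    # insecure=True skips TLS verification for appliances with self-signed certs
    ctx = ssl._create_unverified_context() if insecure else ssl.create_default_context()
    req = urllib.request.Request(
        vxm_url(vxm_ip, path),
        headers={"Authorization": basic_auth_header(user, password)},
    )
    with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
        return json.load(resp)

# Hypothetical call; confirm the endpoint in api-doc.html before relying on it:
# info = get_json("192.168.10.5", "/v1/system", "administrator@vsphere.local", "secret")
```

The same pattern generalizes to any endpoint listed in the Swagger documentation, which makes it easy to fold VxRail operations into a larger orchestration workflow.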

Would you like to see this in action? Watch the first part of the recorded demo available in the additional resources section.

PowerShell integration for Windows environments

Customers who prefer scripting in a Windows environment, using Microsoft PowerShell or VMware PowerCLI, will benefit from the VxRail.API PowerShell Modules Package. It simplifies consumption of the VxRail REST APIs from PowerShell and focuses on the physical infrastructure layer, while VMware vSphere and the solutions layered on top (such as the Software-Defined Data Center, Horizon, and so on) can be scripted using the similar interface available in VMware PowerCLI.

Figure 2. VxRail.API PowerShell Modules Package

To see that in action, check the second part of the recorded demo available in the additional resources section.

Bringing it all together

VxRail REST APIs further simplify IT operations, fostering operational freedom and a reduction in OPEX for large enterprises, service providers, and midsize enterprises. The Swagger and PowerShell integrations make them much more convenient to use. This is an area of VxRail HCI System Software that is rapidly gaining new capabilities, so be sure to check the latest advancements with every new VxRail release.  

Additional resources:

Demo: VxRail API - Overview

Demo: VxRail API - PowerShell Package

Dell EMC VxRail Appliance – API User Guide

VxRail PowerShell Package

VxRail API Cookbook

VMware API Explorer

Author: Karol Boguniewicz, Sr Principal Engineer, VxRail Tech Marketing

Twitter: @cl0udguide 


Latest enhancements to VxRail ACE

Daniel Chiu

Mon, 30 Mar 2020 11:26:20 -0000


Read Time: 0 minutes

VxRail ACE

February 4, 2020

One of the key areas of focus for VxRail ACE (Analytical Consulting Engine) is active multi-cluster management.  With ACE, users have a central point from which to manage multiple VxRail clusters more conveniently.  Updating system software across multiple VxRail clusters is one activity where ACE greatly benefits users: it is a time-consuming operation that requires careful planning and coordination.   In the initial release of ACE, users could transfer update bundles to all their clusters with ACE acting as the single control point, rather than logging onto every vCenter console to do the same activity.  That can save quite a bit of time.

On-demand pre-upgrade cluster health checks

In the latest ACE update, users can now run on-demand health checks before upgrading to find out whether a cluster is ready for a system update.  By identifying which clusters are ready and which are not, users can more effectively schedule their maintenance windows in advance, and can see which clusters require troubleshooting versus which can start the update process.  In ACE, an on-demand cluster health check is referred to as a Pre-Check.  
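As a rough illustration of how Pre-Check results can drive maintenance planning, the sketch below partitions clusters into update-ready and needs-troubleshooting groups. The run_precheck callback and the cluster names are hypothetical stand-ins for ACE's Pre-Check output, not an actual ACE API:

```python
def plan_updates(clusters, run_precheck):
    """Partition clusters into ready-to-update and needs-troubleshooting,
    based on a pre-upgrade health check (run_precheck returns True on pass)."""
    ready, blocked = [], []
    for cluster in clusters:
        (ready if run_precheck(cluster) else blocked).append(cluster)
    return ready, blocked

# Hypothetical Pre-Check outcomes standing in for real ACE results:
results = {"edge-01": True, "edge-02": False, "core-01": True}
ready, blocked = plan_updates(results, results.get)
# ready clusters can be scheduled into the maintenance window first;
# blocked clusters get troubleshooting time before their update is planned
```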

For more information about this feature, you can check out this video:

Deployment types

Another feature that came out with this update is identification of the cluster deployment type.   ACE will now display whether the cluster is a standard VxRail cluster, part of a VMware Validated Design deployment, part of a VMware Cloud Foundation on VxRail deployment (as used in Dell Technologies Data Center-as-a-Service), a 2-node vSAN cluster, or in a stretched cluster configuration.

Daniel Chiu, VxRail Technical Marketing



VCF on VxRail – More business-critical workloads welcome!

Jason Marques

Mon, 30 Mar 2020 15:11:17 -0000


Read Time: 0 minutes

New platform enhancements for stronger mobility and flexibility 

February 4, 2020

Today, Dell EMC made the newest VCF 3.9.1 on VxRail 4.7.410 release available for download to existing VCF on VxRail customers, with availability for new customers planned for February 19, 2020. Let’s dive into what’s new in this latest version.

Expand your turnkey cloud experience with additional unique VCF on VxRail integrations

This release continues the co-engineering innovation efforts of Dell EMC and VMware to provide our joint customers with better outcomes, this time in the area of security. VxRail password management for VxRail Manager accounts, such as root and mystic, as well as for ESXi, has been integrated into the SDDC Manager UI password management framework. Now the components of the full SDDC and HCI infrastructure stack can be centrally managed as one complete turnkey platform using your native VCF management tool, SDDC Manager. Figure 1 illustrates what this looks like.

Figure 1

Support for Layer 3 VxRail Stretched Cluster Configuration Automation

Building on the support for Layer 3 stretched clusters introduced in VCF 3.9 on VxRail 4.7.300, which relied on manual guidance, VCF 3.9.1 on VxRail 4.7.410 can now automate the configuration of Layer 3 VxRail stretched clusters for both NSX-V and NSX-T backed VxRail VI Workload Domains. This is accomplished using the CLI in the VCF SOS utility.

Greater management visibility and control across multiple VCF instances

For new installations, this release provides the ability to extend a common management and security model across two VCF on VxRail instance deployments by sharing a common Single Sign-On (SSO) domain between the PSCs of multiple VMware Cloud Foundation instances, so that the management and VxRail VI Workload Domains are visible in each of the instances. This is known as a Federated SSO Domain.

What does this mean exactly? Referring to Figure 2, it translates into the ability for Site B to join the SSO instance of Site A.  This allows VCF to further align with the VMware Validated Design (VVD) recommendation to share SSO domains where it makes sense, subject to the Enhanced Linked Mode 150ms RTT limitation.

This leverages a recent option made available in the VxRail first run to connect the VxRail cluster to an existing SSO domain (PSCs).  So, when you stand up the VxRail cluster for the second management domain affiliated with the second VCF instance deployed in Site B, you connect it to the SSO (PSCs) created by the first management domain of the VCF instance in Site A.

Figure 2

Application Virtual Networks – Enabling Stronger Mobility and Flexibility with VMware Cloud Foundation

One of the new features in the 3.9.1 release of VMware Cloud Foundation (VCF) is the use of Application Virtual Networks (AVNs) to completely abstract the hardware and realize the true value of a software-defined cloud computing model. Read more about it on VMware’s blog post here. One key note on this feature: it is automatically set up for new VCF 3.9.1 installations. Customers upgrading from a previous version of VCF need to engage the VMware Professional Services Organization (PSO) to configure AVNs at this time. Figure 3 shows the message existing customers will see when attempting the upgrade.

Figure 3

VxRail 4.7.410 platform enhancements

VxRail 4.7.410 brings a slew of new hardware platforms and hardware configuration enhancements that expand your ability to support even more business-critical applications.

Figure 4

Figure 5

There you have it! We hope you find these latest features beneficial. Until next time…

Jason Marques

Twitter - @vwhippersnapper

Additional Resources

VxRail page on

VCF on VxRail Guides

VCF on VxRail Whitepaper

VCF 3.9.1 on VxRail 4.7.410 Release Notes

VxRail Videos

VCF on VxRail Interactive Demos 


Announcing all-new VxRail Management Pack for vRealize Operations

Daniel Chiu

Mon, 30 Mar 2020 15:14:28 -0000


Read Time: 0 minutes

Now adding VxRail awareness to your vRealize Operations 

January 22, 2020

As the new year rolls in, the VxRail team is slowly warming up to it.  Right as we settle back in after the holiday festivities, we’re on to another release announcement.  This time, it’s an entirely new software tool: VxRail Management Pack for vRealize Operations.

For those not familiar with vRealize Operations, it’s VMware’s operations management software tool that gives customers the ability to maintain and tune their virtual application infrastructure with the aid of artificial intelligence and machine learning.  It connects to vCenter Server and collects metrics, events, configurations, and logs about the vSAN clusters and the virtual workloads running on them.   vRealize Operations also understands the topology and object relationships of the virtual application infrastructure.  With all these features, it is capable of driving intelligent remediation, ensuring configuration compliance, monitoring capacity and cost optimization, and maintaining performance optimization.  It’s an outcome-based tool designed to self-drive according to user-defined intents, powered by its AI/ML engine.

The VxRail Management Pack is an additional free-of-charge software pack that can be installed onto vRealize Operations to provide VxRail cluster awareness.  Without this Management Pack, vRealize Operations can still detect vSAN clusters but cannot discern that they are VxRail clusters.  The Management Pack consists of an adapter that collects 65 distinct VxRail events, analytics logic specific to VxRail, and three custom dashboards.  These VxRail events are translated into VxRail alerts on vRealize Operations so that users have helpful information to understand health issues along with recommended course of resolution.  With custom dashboards, users can easily go to VxRail-specific views to troubleshoot issues and make use of existing vRealize Operations capabilities in the context of VxRail clusters.  

The VxRail Management Pack is not for every VxRail user because it requires a vRealize Operations Advanced or Enterprise license.  For enterprise customers or customers who have already invested in VMware’s vRealize Operations suite, it can be an easy add-on to help manage your VxRail clusters.

To download the VxRail Management Pack, go to the VMware Solution Exchange.

Author:  Daniel Chiu, Dell EMC VxRail Technical Marketing



VxRail drives the hyperconverged evolution with the release of 4.7.410

KJ Bedard

Mon, 30 Mar 2020 15:16:22 -0000


Read Time: 0 minutes

VxRail announces new hardware and software in this latest release

January 6, 2020

VxRail recently released a new version of our software, 4.7.410, which we announced at VMworld EMEA in November. This release brings cutting-edge enhancements for networking options and edge deployments, support for the Mellanox 100GbE PCIe NIC, and two new drive types.  

Improvements and newly developed functionality for VxRail 2-node implementations provide a more user-friendly experience, now supporting both direct connect and new switched connectivity options. VxRail 2-node is increasingly popular for edge deployments, and Dell EMC continues to bolster features and functionality in support of our edge and 2-node customer base.

This release also includes improvements to VxRail networking capabilities that more closely align VxRail with VMware’s best practices for NIC port maximums and network teaming policies.   VxRail now handles network traffic more efficiently thanks to support for two additional load balancing policies. These policies determine how to route network traffic in the event of bottlenecks, resulting in better throughput on a NIC port. In addition, VxRail now supports the same three routing/teaming policies as VMware. 

Dell EMC also announced support for Fibre Channel HBAs in mid-summer 2019, and with that, the 4.7.410 release broadens capabilities by supporting external storage integration.  VxRail recognizes that an external array is connected and makes it available to vCenter for use as secondary storage.  The storage is now automatically recognized during day 1 installation operations, or on day 2 when external storage is added to expand VxRail storage capacity.

Alongside the 4.7.410 release, VxRail added a new set of hardware choices and options: the Mellanox ConnectX-5 100GbE NIC, benefitting a variety of use cases including media broadcasting; a larger 8TB 2.5” 7200 rpm HDD commonly used for video surveillance; and a 7.6TB “Value SAS” SSD. Value SAS drives offer attractive pricing (similar to SATA) with performance slightly below other SAS drives, and are great for larger read-friendly workloads. And finally, there’s big news for the VxRail E Series platforms (E560/E560F/E560N), which now all support the NVIDIA T4 GPU.  This is the first time VxRail supports GPU cards outside the V Series. The NVIDIA T4 GPU is optimized for high-performance workloads and suitable for running a combination of entry-level machine learning, VDI, and data inferencing.

These exciting new features and enhancements in the 4.7.410 release enable customers to expand the breadth of business workloads across all VxRail implementations.

Please check out these resources for more VxRail information:

VMWare EMEA Announcements 

VxRail Spec Sheet

VxRail Network Planning Guide

VxRail 4.7.x Release Notes (requires log-in)

By:  KJ Bedard - VxRail Technical Marketing Engineer


Twitter: @KJbedard


New all-NVMe VxRail platforms deliver highest levels of performance

Daniel Chiu

Mon, 30 Mar 2020 15:24:55 -0000


Read Time: 0 minutes

Two new all-NVMe VxRail platforms deliver highest levels of performance

December 11, 2019

If you have not been tuned into the VxRail announcements at VMworld Barcelona last month, this is news to you.  VxRail is adding more performance punch to the family with two new all-NVMe platforms.   The VxRail E Series 560N and P Series 580N, with the 2nd Generation Intel® Xeon® Scalable Processors, offer increased performance while enabling customers to take advantage of decreasing NVMe costs.

Balancing workload and budget requirements, the dual-socket E560N provides a cost-effective, space-efficient 1U platform for read-intensive and other complex workloads.   Configured with up to 32TB of NVMe capacity, the E560N is the first all-NVMe 1U VxRail platform.  Based on the PowerEdge R640, the E560N can run a mix of workloads including data warehouses, ecommerce, databases, and high-performance computing.  With support for NVIDIA T4 GPUs, the E560N is also equipped to run a wide range of modern cloud-based applications, including machine learning, deep learning, and virtual desktop workloads.

Built for memory-intensive, high-compute workloads, the new P580N is the first quad-socket and first all-NVMe 2U VxRail platform.  Based on the PowerEdge R840, the P580N can be configured with up to 80TB of NVMe capacity.  This platform is ideal for in-memory databases and has been certified by SAP for SAP HANA.  The P580N provides 2x the CPU of the P570/F and offers 25% more processing potential than virtual storage appliance (VSA) based 4-socket HCI platforms, which require a dedicated socket to run the VSA.

The completion of the SAP HANA certification for the P580N, which coincides with the P580N’s general availability, demonstrates the ongoing commitment to position VxRail as the HCI platform of choice for SAP HANA solutions.  The P580N provides even more memory and processing power than the SAP HANA-certified P570F platform.  An updated Validation Guide for SAP HANA on VxRail will be available in early January on the Dell EMC SAP solutions landing page for VxRail.

For more information about VxRail E560N and P580N, please check out the resources below:

VxRail Spec Sheet

All Things VxRail at

SAP HANA Certification page for VxRail

Dell EMC VxRail SAP Solutions at

Available 12/20/2019 - Dell EMC Validation Guide for SAP HANA with VxRail


Daniel Chiu

Vic Dery



Innovation with Cloud Foundation on VxRail

Jason Marques

Mon, 30 Mar 2020 15:24:55 -0000


Read Time: 0 minutes

VCF 3.9 ON VxRail 4.7.300 Improves Management, Flexibility, and Simplicity at Scale 

December, 2019

As you may already know, VxRail is the HCI foundation for the Dell Technologies Cloud Platform. With the new Dell Technologies On Demand offerings, we bring automation and financial models similar to those of public cloud to on-premises environments. VMware Cloud Foundation on Dell EMC VxRail allows customers to manage all cloud operations through a familiar set of tools, offering a consistent experience, with a single vendor support relationship from Dell EMC.

Joint engineering between VMware and Dell EMC continuously improves VMware Cloud Foundation on VxRail. This has made VxRail the first hyperconverged system fully integrated with VMware Cloud Foundation SDDC Manager and the only jointly engineered HCI system with deep VMware Cloud Foundation integration. VCF on VxRail delivers unique integrations with Cloud Foundation that offer a seamless, automated upgrade experience. Customers adopting VxRail as the HCI foundation for the Dell Technologies Cloud Platform will realize greater flexibility and simplicity when managing VMware Cloud Foundation on VxRail at scale. These benefits are further illustrated by the new features available in the latest version, VMware Cloud Foundation 3.9 on VxRail 4.7.300.

The first feature expands the ability to support global management and visibility across large, complex multi-region private and hybrid clouds. This is delivered through global multi-instance management of large-scale VCF 3.9 on VxRail 4.7.300 deployments with a single pane of glass (see figure below). Customers who have many VCF on VxRail instances deployed throughout their environment now have a common dashboard view into all of them to further simplify operations and gain insights.

Figure 1

The new features don’t stop there: VCF 3.9 on VxRail 4.7.300 provides greater networking flexibility. VMware Cloud Foundation 3.9 on VxRail 4.7.300 adds support for Dell EMC VxRail Layer 3 stretched cluster configurations, allowing customers to further scale VCF on VxRail environments for more highly available use cases in order to support mission-critical workloads. The Layer 3 support applies to both NSX-V and NSX-T backed workload domain clusters.

Another area of new network flexibility features is the ability to select the host physical network adapters (pNICs) you want to assign for NSX-T traffic on your VxRail workload domain cluster (see figure below). Users can now select the pNICs used for the NSX-T Virtual Distributed Switch (N-VDS) from the SDDC Manager UI in the Add VxRail Cluster workflow. This allows you the flexibility to choose from a set of VxRail host physical network configurations that best aligns to your desired NSX-T configuration business requirements. Do you want to deploy your VxRail clusters using the base network daughter card (NDC) ports on each VxRail host for all standard traffic but use separate PCIe NIC ports for NSX-T traffic? Go for it! Do you want to use 10GbE connections for standard traffic and 25GbE for NSX-T traffic? We got you there too! Host network configuration flexibility is now in your hands and is only available with VCF on VxRail.

Figure 2

Finally, no good VCF on VxRail conversation can go by without talking about lifecycle management. VMware Cloud Foundation 3.9 on VxRail 4.7.300 also delivers simplicity and flexibility for managing at scale, with greater control over workload domain upgrades. Customers now have the flexibility to select which clusters within a multi-cluster workload domain to upgrade, to better align with business requirements and maintenance windows. Upgrading VCF on VxRail clusters is further simplified with VxRail Smart LCM (introduced in the 4.7.300 release), which determines exactly which firmware components need to be updated on each cluster and pre-stages each node in a cluster, saving up to 20% of upgrade time (see next figure). Scheduling of these cluster upgrades is also supported. With VCF 3.9 and VxRail Smart LCM, you can streamline the upgrade process across your hybrid cloud.

Figure 3

As you can see the innovation continues with Cloud Foundation on VxRail.

Jason Marques  Twitter - @vwhippersnapper  Linked In -

Additional Resources

VxRail page on

VCF on VxRail Guides

VCF on VxRail Whitepaper

VMware release notes for VCF 3.9 on VxRail 4.7.300

VxRail Videos

VCF on VxRail Interactive Demos 


Analytical Consulting Engine (ACE)

Christine Spartichino

Mon, 30 Mar 2020 15:27:16 -0000


Read Time: 0 minutes

VxRail plays its ACE, now generally available 

November 2019

VxRail ACE (Analytical Consulting Engine), the new artificial-intelligence-infused component of the VxRail HCI System Software, was announced just a few months ago at Dell Technologies World and has been in global early access. Over 500 customers leveraged the early access program for ACE, allowing developers to collect feedback and implement enhancements prior to general availability of the product.  It is with great excitement that VxRail ACE is now generally available to all VxRail customers. By incorporating continuous integration/continuous delivery (CI/CD) using the Pivotal Platform (also known as Pivotal Cloud Foundry) container-based framework, the Dell EMC developers behind ACE have made rapid iterations to improve the offering, and customer demand has driven new features onto the roadmap. ACE is holding true to its design principles and commitment to deliver adaptive, frequent releases.

Figure 1 ACE Design Principles and Goals

VxRail ACE is a centralized data collection and analytics platform that uses machine learning to perform capacity forecasting and self-optimization, helping you keep your HCI stack operating at peak performance and ready for future workloads. In addition to some of the initial features available during early access, ACE now provides new functionality for intelligent upgrades of multiple clusters (see image below). You can now see the current software version of each cluster along with all available upgrade versions, and ACE lets you select the desired version for each VxRail cluster. You can manage at scale to standardize across all sites and clusters, with the ability to customize by cluster. This is advantageous when some sites or clusters need to remain at a specific version of VxRail software.

If you haven’t seen ACE in action yet, check out the additional links and videos below that showcase the features described in this post. For our 6,000+ VxRail customers, please visit our support site and Admin Guide to learn how to access ACE.

Christine Spartichino, Twitter - @cspartichino   Linked In -

For more information on VxRail, check out these great resources:

VxRail ACE Solution Brief

VxRail ACE Overview Demos

VxRail ACE Updates

VxRail ACE Announcement

All Things VxRail

Support (for existing customers)
