
Analytical Consulting Engine (ACE)
Mon, 17 Aug 2020 18:31:30 -0000
VxRail plays its ACE, now generally available
November 2019
VxRail ACE (Analytical Consulting Engine), the new Artificial Intelligence infused component of the VxRail HCI System Software, was announced just a few months ago at Dell Technologies World and has since been in global early access. Over 500 customers leveraged the early access program for ACE, allowing developers to collect feedback and implement enhancements prior to General Availability. It is with great excitement that VxRail ACE is now generally available to all VxRail customers. By embracing continuous integration/continuous delivery (CI/CD) on the Pivotal Platform (also known as Pivotal Cloud Foundry) container-based framework, the Dell EMC developers behind ACE have iterated rapidly to improve the offering, and customer demand has driven new features onto the roadmap. ACE is holding true to its design principles and its commitment to deliver adaptive, frequent releases.
Figure 1 ACE Design Principles and Goals
VxRail ACE is a centralized data collection and analytics platform that uses machine learning to perform capacity forecasting and self-optimization, helping you keep your HCI stack operating at peak performance and ready for future workloads. In addition to the features available during early access, ACE now provides new functionality for intelligent upgrades of multiple clusters (see image below). You can see the current software version of each cluster along with all available upgrade versions, and ACE lets you select the desired version for each VxRail cluster. You can now manage at scale, standardizing across all sites and clusters while retaining the ability to customize by cluster. This becomes advantageous when some sites or clusters need to remain at a specific version of VxRail software.
If you haven’t seen ACE in action yet, check out the additional links and videos below that showcase the features described in this post. For our 6,000+ VxRail customers, please visit our support site and Admin Guide to learn how to access ACE.
Christine Spartichino, Twitter - @cspartichino, LinkedIn - linkedin.com/in/spartichino
For more information on VxRail, check out these great resources:
Related Blog Posts

100 GbE Networking – Harness the Performance of vSAN Express Storage Architecture
Wed, 22 Mar 2023 07:04:42 -0000
For a few years, 25 GbE networking has been the mainstay of rack networking, with 100 GbE reserved for uplinks to spine or aggregation switches. 25 GbE provides a significant leap in bandwidth over 10 GbE and today carries no outstanding price premium over 10 GbE, making it a clear winner for new buildouts. But should we continue with this winning 25 GbE strategy? Is it time to look to a future of 100 GbE networking within the rack? Or is that future now?
This question stems from my last blog post, VxRail with vSAN Express Storage Architecture (ESA), where I called out VMware’s recommendation of 100 GbE for maximum performance. But just how much more performance can vSAN ESA deliver with 100 GbE networking? VxRail is fortunate to have a dedicated performance team, which stood up two six-node VxRail with vSAN ESA clusters that were identical except for the networking: one was configured with Broadcom 57414 25 GbE networking, and the other with Broadcom 57508 100 GbE networking.
When it comes to benchmark tests, there is a large variety to choose from. Some are ideal for generating headline hero numbers for marketing purposes – think quarter-mile drag racing. Others are good for helping diagnose issues. Finally, there are benchmark tests that are reflective of real-world workloads. OLTP32K is a popular one, reflective of online transaction processing with a 70/30 read-write split and a 32 KB block size, a profile consistent with the aggregated results from thousands of Live Optics workload observations across millions of servers.
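To make that profile concrete, here is a minimal sketch of how such a workload could be driven with fio from a guest VM. This is an illustration only – fio, the mount point, and the queue-depth and job-count values are my assumptions, not the tooling or parameters the VxRail performance team actually used:

```python
# Hypothetical OLTP32K-like workload: 70% random reads / 30% random writes at 32 KB.
# Assumes fio is installed in the guest and /mnt/testvol is a mounted test volume.
import subprocess

fio_cmd = [
    "fio",
    "--name=oltp32k-like",
    "--rw=randrw",              # mixed random read/write
    "--rwmixread=70",           # 70/30 read-write split
    "--bs=32k",                 # 32 KB block size
    "--ioengine=libaio",
    "--direct=1",
    "--iodepth=16",             # queue depth and job count are illustrative only
    "--numjobs=4",
    "--size=10g",
    "--runtime=300",
    "--time_based",
    "--group_reporting",
    "--directory=/mnt/testvol", # hypothetical mount point on the VM under test
]

subprocess.run(fio_cmd, check=True)
```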
One more thing before we get to the results of the VxRail Performance Team’s testing: the environment configuration. We used a storage policy of erasure coding (RAID-6) with a failure tolerance of two and compression enabled.
When VMware announced vSAN Express Storage Architecture, they published a series of blogs, all of which I encourage you to read. But as part of our 25 GbE vs 100 GbE testing, we also wanted to verify the astounding claims of RAID-5/6 with the Performance of RAID-1 using the vSAN Express Storage Architecture and vSAN 8 Compression - Express Storage Architecture. In short, forget the normal rules of storage performance; VMware threw that book out the window. We didn’t throw our copy out, at least not at first, but once our results validated their claims… out it went.
Let’s look at the data:
Figure 1. ESA: OLTP32KB 70/30 RAID6 25 GbE vs 100 GbE performance graph
Boom! A 78% increase in peak IOPS with a substantial 49% drop in latency. This is a HUGE increase in performance, and the sole difference is the use of the Broadcom 57508 100 GbE networking. Also, check out that latency ramp-up on the 25 GbE line – it’s just like hitting a wall – while the 100 GbE line stays almost flat.
But nobody runs constantly at 100%, or at least they shouldn’t. Sixty to seventy percent of the absolute maximum is typically a comfortable day-to-day peak workload, leaving some headroom for spikes or node maintenance. In that range, there is an 88% increase in IOPS with a 19 to 21% drop in latency; the smaller latency drop is attributable to the 25 GbE configuration not yet hitting its wall. As much as applications like high performance, they also need that performance delivered with consistent and predictable latency, and if that latency is low, all the better. If we focus on just latency, the 100 GbE networking enabled 350K IOPS to be delivered at 0.73 ms, while the 25 GbE networking could squeak out only 106K IOPS at 0.72 ms. That may not be the fairest of comparisons, but it does highlight how much 100 GbE networking can benefit latency-sensitive workloads.
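For readers who like to check the math, the relative gains quoted here boil down to simple percent-change arithmetic. The short sketch below applies it to the one matched-latency data point given in the text; the peak and 60-70% figures come from the performance team’s full data set, which isn’t reproduced here:

```python
# Percent-change arithmetic for the matched-latency comparison quoted above.
def pct_change(new: float, old: float) -> float:
    """Relative change from old to new, in percent."""
    return (new - old) / old * 100

iops_100g, iops_25g = 350_000, 106_000   # IOPS at ~0.7 ms latency (from the text)

print(f"IOPS gain at matched latency: {pct_change(iops_100g, iops_25g):.0f}%")  # ~230%
print(f"That is {iops_100g / iops_25g:.1f}x the IOPS at the same latency")      # ~3.3x
```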
Boom, again! This next benchmark is not reflective of real-world workloads; it is a diagnostic test that stresses the network with 100% read and 100% write workloads at a 512 KB block size. Can it find the bottleneck that 25 GbE hit in the previous benchmark?
Figure 2. ESA: 512KB RAID6 25 GbE vs 100 GbE performance graph
This testing was performed on a six-node cluster, with each node contributing one-sixth of the throughput shown in this graph. The 25 GbE cluster delivered 20,359 MB/s of random read throughput, or 3,393 MB/s per node, which is slightly above the theoretical maximum of 3,125 MB/s that 25 GbE can deliver. This is the absolute maximum that 25 GbE can deliver! In the world of HCI, the virtual machine workload is co-resident with the storage, so some of the IO is local to the workload, resulting in higher-than-theoretical network throughput. For comparison, the 100 GbE cluster achieved 48,594 MB/s of random read throughput, or 8,099 MB/s per node, out of a theoretical maximum of 12,500 MB/s.
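As a sanity check on that line-rate arithmetic, the following sketch converts the nominal link speeds to MB/s and compares them with the measured per-node throughput. The conversion (1 Gb/s ≈ 125 MB/s, ignoring protocol overhead) is my own back-of-the-envelope calculation, not output from the test harness:

```python
# Back-of-the-envelope line-rate check for the per-node throughput figures above.
def line_rate_mb_per_s(gbps: float) -> float:
    """Nominal link speed in Gb/s converted to MB/s (8 bits per byte, overhead ignored)."""
    return gbps * 1000 / 8

NODES = 6
results = [("25 GbE", 25, 20_359), ("100 GbE", 100, 48_594)]  # cluster MB/s from the text

for label, gbps, cluster_mb_s in results:
    per_node = cluster_mb_s / NODES
    ceiling = line_rate_mb_per_s(gbps)
    print(f"{label}: {per_node:,.0f} MB/s per node vs {ceiling:,.0f} MB/s line rate "
          f"({per_node / ceiling:.0%})")

# 25 GbE:  ~3,393 MB/s vs 3,125 MB/s -> saturated; local IO pushes it past 100% of line rate
# 100 GbE: ~8,099 MB/s vs 12,500 MB/s -> plenty of network headroom remains
```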
But this is just the first release of the Express Storage Architecture. In the past, VMware has delivered significant performance gains to vSAN over time, as seen in the lab-based performance analysis Harnessing the Performance of Dell EMC VxRail 7.0.100. We can only speculate on what else they have in store to improve upon this initial release.
What about costs, you ask? Street pricing can vary greatly depending on the region, so it's best to reach out to your Dell account team for local pricing information. Using US list pricing as of March 2023, I got the following:
Component | Dell PN | List price | Per port | 25 GbE | 100 GbE
--- | --- | --- | --- | --- | ---
Broadcom 57414 dual port 25 GbE | 540-BBUJ | $769 | $385 | $385 |
S5248F-ON 48 port 25 GbE | 210-APEX | $59,216 | $1,234 | $1,234 |
25 GbE Passive Copper DAC | 470-BBCX | $125 | $125 | $125 |
Broadcom 57508 dual port 100 GbE | 540-BDEF | $2,484 | $1,242 | | $1,242
S5232F-ON 32 port 100 GbE | 210-APHK | $62,475 | $1,952 | | $1,952
100 GbE Passive Copper DAC | 470-ABOX | $360 | $360 | | $360
Total per port | | | | $1,743 | $3,554
Overall, the per-port cost of the 100 GbE equipment was 2.04 times that of the 25 GbE equipment. However, this doubling of network cost provides four times the bandwidth, a 78% increase in storage performance, and a 49% reduction in latency.
If your workload is IOPS-bound or latency-sensitive and you had planned to address it by adding more VxRail nodes, consider this a wakeup call. Adding dual 100 GbE networking to the six-node cluster came at a total list cost of $42,648 for the twelve ports used. This is significantly less than the list price of a single VxRail node and a fraction of the list cost of adding enough VxRail nodes to achieve the same level of performance increase.
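If you want to reproduce the cost math, the sketch below recomputes the per-port figures from the list prices in the table, the 2.04x ratio, and the twelve-port total. The only assumption is that per-port costs are rounded to the nearest dollar before summing the twelve ports, which matches the figures quoted above:

```python
# Recomputing the per-port cost comparison from the March 2023 US list prices above.
cost_25gbe = 769 / 2 + 59_216 / 48 + 125        # dual-port NIC + 48-port switch + DAC
cost_100gbe = 2_484 / 2 + 62_475 / 32 + 360     # dual-port NIC + 32-port switch + DAC

print(f"25 GbE per port:  ${cost_25gbe:,.0f}")                  # ~$1,743
print(f"100 GbE per port: ${cost_100gbe:,.0f}")                 # ~$3,554
print(f"Cost ratio:       {cost_100gbe / cost_25gbe:.2f}x")     # ~2.04x
print(f"Twelve 100 GbE ports: ${round(cost_100gbe) * 12:,}")    # $42,648 for a six-node cluster
```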
Reach out to your networking team; they would be delighted to help deploy the 100 Gb switches your savings funded. If decision-makers need further encouragement, send them a link to VMware's vSAN 8 Total Cost of Ownership white paper.
While 25 GbE has its place in the datacenter, when it comes to deploying vSAN Express Storage Architecture, it's clear that we're moving beyond it and onto 100 GbE. The future is now 100 GbE, and we thank Broadcom for joining us on this journey.

What’s happening with SmartFabric Services and VxRail 8.0
Tue, 28 Feb 2023 23:37:47 -0000
This article describes the SmartFabric Services (SFS) Automated VxRail Switch Configuration feature in VxRail and explains why it was removed in VxRail 8.0.
VxRail 4.7 and 7.0 releases included Automated VxRail Switch Configuration. This feature was designed for SFS and was always enabled. It automatically configured VxRail networks on SmartFabric switches during VxRail deployment. However, this integration prevented SFS with VxRail from being supported in some network environments, such as with a vSAN stretched cluster or VMware Cloud Foundation.
Starting in VxRail 7.0.400, the option to manually disable Automated VxRail Switch Configuration was added to the VxRail deployment wizard, as shown below.
Figure 1 VxRail 7.0.400 deployment wizard resources page
This option is described in New Deployment Option for SmartFabric Services with VxRail, and is present in VxRail 7.0.400 and later VxRail 7.x releases. If Automated VxRail Switch Configuration is set to Disabled during VxRail deployment as recommended, SFS can be supported in other network environments.
Starting in VxRail 8.0, the Top-of-Rack (TOR) Switch section in the VxRail deployment wizard has been removed as shown below.
Figure 2 VxRail 8.0 deployment wizard resources page
Automated VxRail Switch Configuration is automatically disabled in VxRail 8.0. Disabling this feature ensures that new SFS with VxRail installations are supported in other network environments.
Disabling automated switch configuration only affects SmartFabric switch automation during VxRail deployment or when adding VxRail nodes to an existing cluster after deployment. With the feature disabled, you use the SFS UI to place VxRail node-connected ports in the correct networks, rather than relying on the automation.
You can still configure SmartFabric switches automatically after VxRail deployment by registering the VxRail vCenter Server with OpenManage Network Integration (OMNI). When registration is complete, networks created in vCenter continue to be automatically configured on the SmartFabric switches using OMNI.
The Dell Networking SmartFabric Services Deployment with VxRail 7.0.400 deployment guide still applies to VxRail 8.0 deployments. The only difference is that the Resources page of the VxRail deployment wizard will look like Figure 2 instead of Figure 1.
Resources
Dell Networking SmartFabric Services Deployment with VxRail 7.0.400
New Deployment Option for SmartFabric Services with VxRail