The latest news about VxRail releases and updates
Breaking down the barriers for VDI with VxRail and NVIDIA vGPU
Wed, 21 Apr 2021 14:28:10 -0000
Read Time: 0 minutes
Desktop transformation initiatives often lead customers to look at desktop and application virtualization. According to Gartner, “Although few organizations planned for the global circumstances of COVID-19, many will now decide to have some desktop virtualization presence to expedite business resumption.”
However, customers looking to embrace these technologies have faced several hurdles.
These hurdles have often caused desktop transformation initiatives to fail fast, but there is good news on the horizon. Dell Technologies and VMware have come together to provide customers with a superior solution stack that will allow them to get started more quickly than ever, with simple and cost-effective end-to-end desktop and application virtualization solutions using NVIDIA vGPU and powered by VxRail.
Dell Technologies VDI solutions powered by VxRail
Dell Technologies VDI solutions based on VxRail feature a superior solution stack at an exceptional total cost of ownership (TCO). The solutions are built on Dell EMC VxRail and leverage VMware Horizon 8 or Horizon Apps, with NVIDIA GPUs for those who need high-performance graphics. Wyse thin and zero clients, OptiPlex micro form factor desktops, and Dell monitors are also available as part of these solutions. Simply plug in, power up, and provision virtual desktops in less than an hour, reducing the time needed to plan, design, and scale your virtual desktop and application environment.
VxRail HCI System Software provides out-of-the-box automation and orchestration for deployment and day-to-day system-based operational tasks, reducing the overall IT OpEx required to manage the stack. You are not likely to find any build-it-yourself solution that provides this level of lifecycle management, automation, and operational simplicity.
Dell EMC VxRail and NVIDIA GPU: a powerful combination
Remote work has become the new normal, and organizations must enable their workforces to be productive anywhere while ensuring critical data remains secure.
Enterprises are turning to GPU-accelerated virtual desktop infrastructure (VDI) because GPU-enabled VDI provides workstation-like performance, allowing creative and technical professionals to collaborate on large models and access the most intensive 3D graphics applications.
Together with VMware Horizon, NVIDIA virtual GPU solutions help businesses to securely centralize all applications and data while providing users with an experience equivalent to the traditional desktop.
NVIDIA vGPU software included with the latest VMware Horizon release, which is available now, helps transform workflows so users can access data outside the confines of traditional desktops, workstations, and offices. Enterprises can seamlessly collaborate in real time, from any location, and on any device.
With NVIDIA vGPU and VMware Horizon, professional artists, designers, and engineers working from home can use new features such as 10-bit HDR and high-resolution 8K display support from their virtual workstations.
In a VDI environment powered by NVIDIA virtual GPU, the virtual GPU software is installed at the virtualization layer. The NVIDIA software creates virtual GPUs that enable every virtual machine to share a physical GPU installed on the server or allows for multiple GPUs to be allocated on a single VM to power the most demanding workloads. The NVIDIA virtualization software includes a driver for every VM. Because work that was previously done by the CPU is offloaded to the GPU, the users, even demanding engineering and creative users, have a much better experience.
As more knowledge workers are added to a server, the server eventually runs out of CPU resources. Adding an NVIDIA GPU offloads operations that would otherwise consume CPU cycles, resulting in improved user experience and performance. We used the NVIDIA nVector knowledge worker VDI workload to test user experience and performance with NVIDIA GPUs. The NVIDIA M10, T4, A40, RTX 6000/8000, and V100S, all of which are available on Dell EMC VxRail, achieve similar performance for this workload.
Customers are realizing the benefits of increased resource utilization by leveraging GPU-accelerated Dell EMC VxRail to run virtual desktops and workstations. They are also leveraging these resources to run compute workloads, such as AI or ML, when users are logged off. Customers who want to run compute workloads on the same infrastructure as their VDI might leverage a V100S to do so. For the complete list, see NVIDIA GPU cards supported on Dell EMC VxRail.
With the prevalence of graphics-intensive applications and the deployment of Windows 10 across the enterprise, adding graphics acceleration to VDI powered by NVIDIA virtual GPU technology is critical to preserving the user experience. Moreover, adding NVIDIA GRID with NVIDIA GPU to VDI deployments increases user density on each server, which means that more users can be supported with a better experience.
To learn more about measuring user experience in your own environments, contact your Dell Account Executive.
Dell Technologies Solutions: Empowering your remote workforce
Everything VxRail: Dell EMC VxRail
VDI Design Guide: VMware Horizon on VxRail and vSAN Ready Nodes
Latest VxRail release: Simpler cloud operations and more deployment options!
Simpler Cloud Operations and Even More Deployment Options Please!
Tue, 23 Feb 2021 17:31:49 -0000
Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.2.0 on VxRail 7.0.131.
This release brings support for the latest versions of VCF and Dell EMC VxRail, providing a simple and consistent operational experience for developer-ready infrastructure across core, edge, and cloud. Let’s review these new updates and enhancements.
Ability For Customers to Perform Their Own VxRail Cluster Expansion Operations in VCF on VxRail Workload Domains. Sometimes the best announcements in a new release have nothing to do with a new technical feature but are instead about new customer-driven serviceability operations. The VCF on VxRail team is happy to announce this new serviceability enhancement. Customers no longer need to purchase a professional services engagement simply to expand a single-site, layer 2-configured VxRail WLD cluster deployment by adding nodes to it. This saves time and money and gives customers the freedom to perform these operations on their own.
This aligns with the support that already exists for customers performing similar cluster expansion operations on VxRail systems deployed as standard clusters in non-VCF use cases.
Note: There are some restrictions on which cluster configurations support customer-driven expansion serviceability. Stretched VxRail cluster deployments and layer 3 VxRail cluster configurations will still require engagement with professional services, as these are more advanced deployment scenarios. Please reach out to your local Dell Technologies account team for a complete list of the cluster configurations that are supported for customer-driven expansions.
Support for Transitioning From VCF on VxRail Consolidated Architecture to VCF on VxRail Standard Architecture. Continuing the operations improvements, the VCF on VxRail team is also happy to announce this new capability. We introduced support for VCF Consolidated Architecture deployments in VCF on VxRail back in May 2020. You can read about it here. VCF Consolidated Architecture deployments provide customers a way to familiarize themselves with VCF on VxRail in their core datacenters without a significant investment in cost and infrastructure footprint. Now, with support for transitioning from VCF Consolidated Architecture to VCF Standard Architecture, customers can expand as their scale demands it in their core, edge, or distributed datacenters! Now that’s flexible!
Please reach out to your local Dell Technologies account team for details on the transition engagement process requirements.
AMD-based VxRail Platform Support in VCF 4.x Deployments. With the latest VxRail 7.0.131 HCI System Software release, ALL available AMD-based VxRail series models are now supported in VCF 4.x deployments. These models include VxRail E-Series and P-Series and support single-socket 2nd Gen AMD EPYC™ processors with 8 to 64 cores, allowing for extremely high core densities per socket.
The figure below shows the latest VxRail HW platform family.
For more info on these AMD platforms, check out the blog post my colleague David Glynn wrote here when AMD platform support was first introduced to the VxRail family last year. (Note: New 2U P-Series options have been released since that post.)
NSX-T 3.1 Federation Now Supported with VCF 4.2 on VxRail 7.0.131. NSX-T Federation provides a cloud-like operating model for network administrators by simplifying the consumption of networking and security constructs. NSX-T Federation includes centralized management, consistent networking and security policy configuration with enforcement and synchronized operational state across large scale federated NSX-T deployments. With NSX-T Federation, VCF on VxRail customers can leverage stretched networks and unified security policies across multi-region VCF on VxRail deployments, providing workload mobility and simplified disaster recovery. This initial support will be through prescriptive manual guidance that will be made available soon after VCF on VxRail solution general availability. For a detailed explanation of NSX-T Federation, check out this VMware blog post here.
The figure below depicts what the high-level architecture would look like.
VCF 4.2 on VxRail 7.0.131 Support for VMware HCI Mesh. VMware HCI Mesh is a vSAN feature that provides for “Disaggregated HCI” exclusively through software. In the context of VCF on VxRail, HCI Mesh allows an administrator to easily define a relationship between two or more vSAN clusters contained within a workload domain. It also allows a vSAN cluster to borrow capacity from other vSAN clusters, improving the agility and efficiency in an environment. This disaggregation allows the administrator to separate compute from storage. HCI Mesh uses vSAN’s native protocols for optimal efficiency and interoperability between vSAN clusters. HCI Mesh accomplishes this by using a client/server mode architecture. vCenter is used to configure the remote datastore on the client side. Various configuration options are possible that can allow for multiple clients to access the same datastore on a server. VMs can be created that utilize the storage capacity provided by the server. This can enable other common features, such as performing a vMotion of a VM from one vSAN cluster to another.
The figure below depicts this architecture.
This release continues to extend networking flexibility to further adapt to various customer environments and to reduce deployment efforts.
Customer-Defined IP Pools for NSX-T TEP IP Addresses for the Management Domain and Workload Domain Hosts. To extend networking flexibility, this release introduces NSX-T TEP IP Address Pools. This enhances the existing support for using DHCP to assign NSX-T TEP IPs. This new feature allows customers to avoid deploying and maintaining a separate DHCP server for this purpose. Admins can select to use IP Pools as part of VCF Bring Up by entering this information in the Cloud Builder template configuration file. The IP Pool will then be automatically configured during Bring Up by Cloud Builder. There is also a new option to choose DHCP or IP Pools during new workload domain deployments in the SDDC Manager.
The figure below illustrates what this looks like. Once domains are deployed, IP address blocks are managed through each domain’s respective NSX Manager.
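As a quick illustration of the kind of validation an admin might do before filling in the template, here is a minimal Python sketch that checks a candidate TEP IP pool range against its subnet. The subnet and range values are made up for the example; the actual Cloud Builder template fields vary by release.

```python
import ipaddress

def validate_tep_pool(cidr: str, start: str, end: str) -> bool:
    """Check that a candidate TEP IP pool range falls inside its subnet
    and that the start/end addresses are ordered correctly."""
    network = ipaddress.ip_network(cidr, strict=True)
    first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
    return first in network and last in network and first <= last

# Hypothetical pool for host TEP addressing
print(validate_tep_pool("172.16.30.0/24", "172.16.30.10", "172.16.30.100"))  # True
```

A check like this catches the most common template mistakes (a range outside the subnet, or a reversed range) before Bring Up fails on them.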
pNIC-Level Redundancy Configuration During VxRail First Run. Flexible network configurations are further extended with this feature in VxRail 7.0.131. It allows an administrator to configure the VxRail System VDS traffic across NDC and PCIe pNICs automatically during VxRail First Run using a new VxRail Custom NIC Profile option. Not only does this provide additional high availability network configurations for VCF on VxRail domain clusters, it also further simplifies operations by removing the need for additional Day 2 activities to reach the same host configuration outcome.
Specify the VxRail Network Port Group Binding Mode During VxRail First Run. To further accelerate and simplify VCF on VxRail deployments, VxRail 7.0.131 has introduced this new enhancement designed with VCF in mind. VCF requires all host Port Group Binding Modes be set to Ephemeral. VxRail First Run now enables admins to have this parameter configured automatically, reducing the number of manual steps needed to prep VxRail hosts for VCF on VxRail use. Admins can set this parameter using the VxRail First Run JSON configuration file or manually enter it into the UI during deployment.
The figure below illustrates an example of what this looks like in the Dell EMC VxRail Deployment Wizard UI.
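To illustrate the requirement, here is a hedged Python sketch that scans a port group list for any entry not using the ephemeral binding mode VCF expects. The JSON key names are illustrative stand-ins, not the actual VxRail First Run schema.

```python
import json

def non_ephemeral_portgroups(config_json: str) -> list:
    """Return the names of port groups whose binding mode is not ephemeral."""
    config = json.loads(config_json)
    return [pg["name"]
            for pg in config.get("portgroups", [])
            if pg.get("binding", "").lower() != "ephemeral"]

# Made-up configuration fragment with one non-compliant port group
sample = '''{"portgroups": [
    {"name": "Management Network", "binding": "ephemeral"},
    {"name": "vSAN",               "binding": "static"}
]}'''
print(non_ephemeral_portgroups(sample))  # ['vSAN']
```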
New SDDC Manager LCM Manifest Architecture. This new LCM Manifest architecture changes the way SDDC Manager handles the metadata required to enable upgrade operations as compared to the legacy architecture used up until this release.
With the legacy LCM Manifest architecture, the metadata that drives upgrade operations was largely static and could not be updated dynamically.
The newly updated LCM Manifest architecture helps address these challenges by enabling dynamic updates to LCM metadata, which opens the door to future functionality such as recalling upgrade bundles or modifying skip-level upgrade sequencing.
VCF Skip-Level Upgrades Using SDDC Manager UI and Public API. Keeping up with new releases can be challenging, and scheduling maintenance windows to perform upgrades may not be easy for every customer. The goal behind this enhancement is to give VCF on VxRail administrators the flexibility to reduce the number of stepwise upgrades needed to get to the latest SDDC Manager/VCF release if they are multiple versions behind. All required upgrade steps are now automated as a single SDDC Manager-orchestrated LCM workflow, built upon the new SDDC Manager LCM Manifest architecture. VCF skip-level upgrades allow admins to quickly and directly adopt the code version of their choice and to reduce maintenance window requirements.
Note: To take advantage of VCF skip-level upgrades for future VCF releases, customers must be at a minimum of VCF 4.2.
The figure below shows what this option looks like in the SDDC Manager UI.
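The difference between the two upgrade models can be sketched in a few lines of Python. The release list and sequencing below are simplified illustrations; real sequencing is governed by the SDDC Manager LCM Manifest.

```python
# Illustrative release train (not an official VCF version list)
RELEASES = ["4.0", "4.0.1", "4.1", "4.2", "4.3", "4.4"]

def stepwise_path(current: str, target: str) -> list:
    """Legacy model: every intermediate release must be applied in order."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    return RELEASES[i + 1: j + 1]

def skip_level_path(current: str, target: str) -> list:
    """Skip-level model: one orchestrated workflow straight to the target."""
    return [target]

print(stepwise_path("4.0", "4.4"))    # ['4.0.1', '4.1', '4.2', '4.3', '4.4']
print(skip_level_path("4.0", "4.4"))  # ['4.4']
```

Five maintenance events collapse into one, which is the point of the feature.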
Improvements to Upgrade Resiliency Through VCF Password Prechecks. Other LCM enhancements in this release come in the area of password prechecks. When performing an upgrade, VCF needs to communicate with various components to complete the required actions. To do this, the SDDC Manager needs valid credentials. If the passwords have expired or have been changed outside of VCF, the patching operation fails. To avoid any potential issues, VCF now checks that the needed credentials are valid before commencing the patching operation. These checks occur both during the execution of the pre-check validation and during an upgrade of a resource, such as ESXi, NSX-T, vCenter, or VxRail Manager. Check out what this looks like in the figure below.
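In spirit, the precheck behaves like the following Python sketch: validate the stored credential for each component up front and fail fast with the list of offenders. The component names and validator here are stand-ins, not the SDDC Manager implementation.

```python
def precheck_credentials(components: dict, validate) -> list:
    """Return the components whose stored credentials fail validation."""
    return [name for name, cred in components.items() if not validate(cred)]

# Hypothetical stored credentials; 'vCenter' was changed outside of VCF
stored = {"vCenter": "old-password",
          "NSX-T": "current-password",
          "VxRail Manager": "current-password"}

failures = precheck_credentials(stored, lambda cred: cred == "current-password")
print(failures)  # ['vCenter']
```

If the list is non-empty, the upgrade never starts, which is cheaper than failing partway through a maintenance window.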
Automated In-Place vRSLCM Upgrades. Upgrading vRSLCM in the past required the deployment of a net new vRSLCM appliance. With VCF 4.2, the SDDC Manager keeps the existing vRSLCM appliance, takes a snapshot of it, then transfers the upgrade packages directly to it and upgrades everything in place. This provides a more simplified and streamlined LCM experience.
VCF API Performance Enhancements. Administrators who use a programmatic approach will experience a quicker retrieval of information through the caching of certain information when executing API calls.
Mitigate Man-In-The-Middle Attacks. Want to prevent Man-In-The-Middle attacks on your VCF on VxRail cloud infrastructure? This release is for you. Introduced in VCF 4.2, customers can leverage SSH RSA fingerprint and SSL thumbprint enforcement capabilities that are built natively into SDDC Manager in order to verify the authenticity of cloud infrastructure components (vCenter, ESXi, and VxRail Manager). Customers can choose to enable this feature for their VCF on VxRail deployment during VCF Bring Up by filling in the affiliated parameter fields in the Cloud Builder configuration file.
An SSH RSA fingerprint comes from the host’s SSH public key, while an SSL thumbprint comes from the host’s certificate. One or more of these data points can be used to validate the authenticity of VCF on VxRail infrastructure components when they are added and configured into the environment. For the Management Domain, both SSH fingerprints and SSL thumbprints are available to use, while Workload Domains have SSH fingerprints available. See what this looks like in the figure below.
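For illustration, a thumbprint in this style can be computed from a certificate's DER bytes with a few lines of Python. The hash algorithm and input bytes here are assumptions for the example; use the values your vCenter and hosts actually present.

```python
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    """Compute a colon-separated hex digest in the style of an SSL thumbprint.
    SHA-256 is assumed here for illustration."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder bytes standing in for a real DER-encoded certificate
print(thumbprint(b"example-der-bytes")[:23])
```

The value you paste into the Cloud Builder configuration must match what the component presents, which is exactly what the enforcement check verifies.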
Natively Integrated Dell Technologies Next Gen SSO Support With SDDC Manager. Dell Technologies Next Gen SSO is a newly implemented backend service used in authenticating with Dell Technologies support repositories where VxRail update bundles are published. With the native integration that SDDC Manager has with monitoring and downloading the latest supported VxRail upgrade bundles from this depot, SDDC Manager now utilizes this new SSO service for its authentication. While this is completely transparent to customers, existing VCF on VxRail customers may need to log SDDC Manager out of their current depot connection and re-authenticate with their existing credentials to ensure future VxRail updates are accessible by SDDC Manager.
New Advanced Security Add-on for VMware Cloud Foundation License SKUs: Though not necessarily affiliated with the VCF 4.2 on VxRail 7.0.131 BOM directly, new VMware security license SKUs for Cloud Foundation are now available for customers who want to bring their own VCF licenses to VCF on VxRail deployments.
The Advanced Security Add-on for VMware Cloud Foundation now includes advanced threat protection, plus workload and endpoint security.
VMware Cloud Foundation 4.2.0 on VxRail 7.0.131 introduces support for the latest versions of the SDDC and VxRail. For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.
Well, that about covers it for this release. The innovation continues with co-engineered features coming from all layers of the VCF on VxRail stack. This further illustrates the commitment that Dell Technologies and VMware have to drive simplified turnkey customer outcomes. Until next time, feel free to check out the links below to learn more about VCF on VxRail.
Twitter - @vwhippersnapper
Deploying SAP HANA at the Rugged Edge
Mon, 14 Dec 2020 18:38:19 -0000
SAP HANA is one of those demanding workloads that has been steadfastly contained within the clean walls of the core data center. However, this time last year VxRail began to chip away at these walls and brought you SAP HANA certified configurations based on the VxRail all-flash P570F workhorse and powerful quad socket all-NVMe P580N. This year, we are once again in the giving mood and are bringing SAP HANA to the edge. Let us explain.
Dell Technologies defines the edge as “The edge exists wherever the digital world & physical world intersect. It’s where data is securely collected, generated and processed to create new value.” This is a very broad definition that extends the edge from the data center to oil rigs to mobile response centers for natural disasters. It is a tall order not only to provide compute and storage in such harsh locations, but also to provide enough of it to meet the strict and demanding needs of SAP HANA, all while not consuming a lot of physical space. After all, it is at the edge where space is at a premium.
Shrinking the amount of rack space needed was the easier of the two challenges, and our 1U E for Everything (or should that be E for Everywhere?) was a perfect fit. The all-flash E560F and all-NVMe E560N, both of which can be enhanced with Intel Optane Persistent Memory, can be thought of as the shorter sibling of our 2U P570F, packing a powerful punch with equivalent processor and memory configurations.
While the E Series fits the bill for space-constrained environments, it still needs data center-like conditions. This is not the case for the durable D560F, the tough little champion that joined the VxRail family in June of this year and is now the only SAP HANA certified ruggedized platform in the industry. Weighing in at a lightweight 28 lbs. with a short depth of 20 inches, this little fighter will run all day at 45°C, with eight-hour sprints of up to 55°C, all while enduring shock, vibration, dust, humidity, and EMI, as this little box is MIL-STD 810G and DNV-GL Maritime certified. In other words, if your holiday plans involve a trip to hot sand beaches, a ship cruise through a hurricane, or an alpine climb, and you’re bringing SAP HANA with you (we promise we won’t ask why), then the durable D560F is for you.
The best presents sometimes come in small packages. So, we won’t belabor this blog with anything more than to announce that these two little gems, the E560 and the D560, are now SAP HANA certified.
Author: David Glynn, Sr. Principal Engineer, VxRail Tech Marketing
360° View: VxRail D Series: The Toughest VxRail Yet
Video: HCI Computing at the Edge
Solution brief: Taking HCI to the Edge: Rugged Efficiency for Federal Teams
SAP Certification link: Certified and Supported SAP HANA® Hardware Directory
VxRail API—Updated List of Useful Public Resources
Fri, 20 Nov 2020 18:16:21 -0000
Well-managed companies are always looking for new ways to increase efficiency and reduce costs while maintaining excellence in the quality of their products and services. Hence, IT departments and service providers look at the cloud and Application Programming Interfaces (APIs) as the enablers for automation, driving efficiency, consistency, and cost-savings.
This blog helps you get started with VxRail API by grouping the most useful VxRail API resources available from various public sources in one place. This list of resources is updated every few months. Consider bookmarking this blog as it is a useful reference.
Before jumping into the list, it is essential to answer some of the most obvious questions:
What is VxRail API?
VxRail API is a feature of the VxRail HCI System Software that exposes management functions with a RESTful application programming interface. It is designed for ease of use by VxRail customers and ecosystem partners who want to better integrate third-party products with VxRail systems.
Why is VxRail API relevant?
VxRail API enables you to use the full power of automation and orchestration services across your data center. This extensibility enables you to build and operate infrastructure with cloud-like scale and agility. It also streamlines the integration of the infrastructure into your IT environment and processes. Instead of manually managing your environment through the user interface, the software can programmatically trigger and run repeatable operations.
More customers are embracing DevOps and Infrastructure as Code (IaC) models because they need reliable and repeatable processes to configure the underlying infrastructure resources that are required for applications. IaC uses APIs to store configurations in code, making operations repeatable and greatly reducing errors.
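As a taste of what programmatic access looks like, here is a minimal sketch using only the Python standard library. The /rest/vxm/v1/system endpoint and basic authentication reflect the public VxRail API documentation, but verify the path and auth scheme against the API reference for your release; the hostname and credentials below are placeholders.

```python
import base64
import urllib.request

def build_request(vxm_host: str, user: str, password: str) -> urllib.request.Request:
    """Build an authenticated GET request for VxRail system information."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{vxm_host}/rest/vxm/v1/system",
        headers={"Authorization": f"Basic {token}",
                 "Accept": "application/json"})

req = build_request("vxrail-manager.example.local",
                    "administrator@vsphere.local", "secret")
print(req.full_url)  # https://vxrail-manager.example.local/rest/vxm/v1/system
# In a real environment: urllib.request.urlopen(req) returns JSON system details.
```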
How can I start? Where can I find more information?
To help you navigate through all available resources, I grouped them by level of technical difficulty, starting with 101 (the simplest, explaining the basics, use cases, and value proposition), through 201, up to 301 (the most in-depth technical level).
(New!) Podcast—VxRail API podcast is part of the CI and HCI Solutions podcast series. This offering is a great option if you like to listen to technical podcasts.
The VxRail 7.0 Interactive Demo is a recent asset prepared by our team for the Dell Technologies World 2020 virtual conference. I highly recommend it. It was recorded with VxRail HCI System Software version 7.0.010, which introduced Day 1 API for VxRail cluster deployment.
Dell Technologies Support portal access is required.
This recent asset was prepared at the VMworld 2020 virtual conference and recorded with VxRail HCI System Software version 7.0.0.
I hope you find this list useful. If so, make sure that you bookmark this blog for your reference. I will update it over time to include the latest collateral.
Enjoy your Infrastructure as Code journey with VxRail API!
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
Lifecycle Management for vSAN Ready Nodes and VxRail Clusters: Part 2 – Cloud Foundation Use Cases
Tue, 17 Nov 2020 22:10:26 -0000
In my previous post I explored the customer experience of using vSphere Lifecycle Manager Images (vLCM Images) versus VxRail Manager to maintain HCI stack integrity and complete full stack software updates for standard vSphere cluster use cases. It was clear that VxRail Manager optimized operational efficiency while taking ownership of software validation for the complete cluster, removing the burden of testing and reducing overall risk during a lifecycle management event. However, common questions I frequently get are: Do those same values carry over when using VxRail as part of a VMware Cloud Foundation on VxRail (Dell Technologies Cloud Platform) deployment? Are vLCM Images even used in VCF on VxRail deployments? In this post I want to dive into answering these questions.
There are some excellent resources available on the VxRail InfoHub web portal, along with several blog posts that discuss the unique integration between SDDC Manager and VxRail Manager in the area of LCM (like this one). I suggest that you check them out if you are unfamiliar with VCF and SDDC Manager functionality, as they will help you follow along in this post.
Before we dive in, there are a few items that you should be aware of regarding SDDC Manager and vLCM. I won’t go into all of them here, but you can check out the VCF 4.1 Release Notes, vLCM Requirements and Limitations, VCF 4.1 Admin Guide, and Tanzu documentation for more details.
As in my last post, the main area of focus here is the customer experience with VMware Cloud Foundation, vRSLCM, and VxRail.
Oh, and one last important point to make before we get into the details. As of this writing, vLCM is only used when deploying VCF on server/vSAN Ready Nodes and is not used when deploying VCF on VxRail. As a result, all information covered here when comparing vLCM with VxRail Manager essentially compares the LCM experience of running VCF on servers/vSAN Ready Nodes vs VCF on VxRail.
I covered this in detail in my previous post. The requirements for VCF-based systems also remain the same, but one thing to highlight in VCF use cases is that the customer is still responsible for the installation, configuration, and ongoing updating of the hardware vendor HSM components used in the vLCM Images-based environment. SDDC Manager does not automatically deploy these components or lifecycle them for you.
VCF deployments do differ in the area of initial VCF node imaging. In VCF deployments, there are two initial node imaging options for customers.
The VIA service tool uses a PXE Boot environment to image nodes, which need to be on the same L2 domain as the appliance and reachable over an untagged VLAN (VLAN ID 0). ESXi images and VIBs can be uploaded to the Cloud Builder appliance VIA service. Hostnames and IP addresses are assigned during the imaging process. Once initial imaging is complete, and Cloud Builder has run through its automated workflow, you are left with a provisioned Management Domain. (One important consideration here regarding initial node baseline images: customers need to ensure that the hardware and software components included in these images are validated against the VCF and ESXi software and hardware BOMs that have been certified for the version of VCF that will be installed in their environment.) This default cluster within the Management Domain cannot use vLCM Images for future cluster LCM updates.
When you are creating a new VI Workload Domain in a VCF on vSAN Ready Nodes deployment, you can choose to enable vLCM Images as your method of choice for cluster updates or, alternatively, you can select vLCM Baselines. (Note: When using vLCM Baselines, firmware updates are not maintained as part of cluster lifecycle management.) If you opt to use vLCM Images, you cannot revert to using vLCM Baselines for cluster management. So, it is very important to choose wisely and understand which LCM operating model is needed prior to deploying the workload domain. Because this blog post focuses on vLCM Images, let’s review what is involved when you select this option.
To begin, it’s important to know that you cannot create a vLCM Images-based workload domain until you import an image into SDDC Manager. But you cannot import an image into SDDC Manager until you have a vLCM Images enabled cluster.
To get over this chicken and egg scenario, the administrator needs to create an empty cluster within the Management Domain where you can set up the image requirements and assign firmware and driver profiles that you have validated during the planning and preparation phase for the initial cluster build. The following figure provides an example of creating the temporary cluster needed to configure vLCM Images.
Figure 1 Creating a temporary cluster to enable vLCM Images as part of the initial setup.
When defining vLCM images, similar to when defining the initial baseline node images, customers are responsible for ensuring that these images are validated against the VCF software BOM that has been certified for the version of VCF that is installed in the environment.
When you are satisfied with the image configuration and you have defined the Driver, Firmware, and Cluster Profiles, export the required JSON, ESX ISO, and ZIP files from the vSphere UI to your local file system, as shown in the following figure.
Figure 2 Exporting Images
Next, within the vCenter UI, go to the Development Center menu and choose the API Explorer Tab. At this stage you need to run several API commands.
To do this, first select your endpoint (vCenter Server) from the drop-down, then select the vCenter-related APIs. You will then be presented with all the applicable vCenter APIs for your chosen endpoint. Expand the Cluster section and execute the GET API command for /rest/vcenter/cluster, as shown in the following figure.
Figure 3 In Developer Center: List all Clusters
This displays all the clusters managed by that vCenter and provides a variable for each cluster. Click the vcenter.cluster.summary entry for the temporary cluster (Dell-VSRN-Temp) and copy its value (Domain-c2022 in my example); you will use it in the next step.
Change the focus on the API explorer to ESX and execute a GET API command for /api/esx/settings/clusters/Domain-c2022/software.
Fill in the cluster id parameter (Domain-c2022) as the required value to run the API command (see the following figure). Once executed, click on the download json option and an additional json file downloads to your local file system.
Figure 4 Execute the Cluster software API Command
At this point, you have four files.
Finally, within SDDC Manager, select Repository then Image Management and Import Cluster Image. Here you need to import the four files mentioned above. As you import the individual files, make sure that you specify a name for the cluster image and import them in the correct order. Once the import is successful, you can now start to deploy your first vLCM Images enabled workload domain.
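The API Explorer steps above can also be scripted. The sketch below mirrors the two GET calls from this walkthrough using only the Python standard library; the vmware-api-session-id header follows the vSphere Automation API convention, and the hostname and session ID are placeholders, so verify the exact paths against your vCenter's API documentation.

```python
import json
import urllib.request

def software_url(base: str, cluster_id: str) -> str:
    """Build the cluster software-spec URL from a cluster id such as
    'Domain-c2022' obtained from the /rest/vcenter/cluster listing."""
    return f"{base}/api/esx/settings/clusters/{cluster_id}/software"

def get_json(url: str, session_id: str) -> dict:
    """Perform an authenticated GET and decode the JSON response."""
    req = urllib.request.Request(
        url, headers={"vmware-api-session-id": session_id})
    with urllib.request.urlopen(req) as resp:  # add TLS verification as needed
        return json.load(resp)

print(software_url("https://vcenter.example.local", "Domain-c2022"))
# → https://vcenter.example.local/api/esx/settings/clusters/Domain-c2022/software
# clusters = get_json("https://vcenter.example.local/rest/vcenter/cluster", sid)
```

Scripting these calls makes the export repeatable when you maintain more than one vLCM Images-enabled environment.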
VxRail key integrations with Cloud Foundation start even before any VCF on VxRail components are installed, as part of the Dell manufacturing process. At the Dell facilities, the nodes are loaded with a VxRail Continuously Validated State image that includes all pre-validated vSphere, vSAN, and hardware firmware components. This means that once VxRail nodes are racked, stacked, and powered on within your datacenter, they are ready to be used to install a new VCF instance, create new workload domains, expand existing workload domains with new clusters, or expand clusters on an existing system.
For new VCF deployments, Cloud Builder has unique integrated workflows that tailor a streamlined deployment process with VxRail, leveraging existing capabilities for VxRail cluster management operations. Once SDDC Manager is deployed using Cloud Builder, connectivity to two update bundle repositories can then be configured.
Figure 5 SDDC Manager Repository Settings
The first is the VMware repository, which is used for VMware software such as vSphere, NSX, and SDDC Manager. The second is the Dell EMC repository for the VxRail software. Once you configure and authenticate with the appropriate user account credentials in SDDC Manager, it automatically connects to the VxRail repository at Dell EMC and pulls down the next available VxRail update package. Each available VxRail update package has already been validated, tested, and certified with the version of VCF running in the customer's environment.
Figure 6 VxRail Software Bundle in SDDC Manager
The following figure summarizes the steps needed for defining initial baseline node images for VCF using vLCM Images and VCF using VxRail Manager.
Figure 7 Initial baseline node images and configuration
Although we have reviewed this in detail before, it is worth mentioning here again: ownership of this process lies with the administrator. In this model, customers take on the responsibility of validating and testing the software and driver combination of their desired state image to ensure full stack interoperability and integrity, and of ensuring that the component versions fall within the supported VCF software BOM used in their environment.
The VxRail approach is much different. The VxRail engineering teams spend thousands of test hours across multiple platforms to validate each release. The end user is given a single image to leverage, knowing that Dell Technologies has completed the heavy lifting of platform validation. As I mentioned above, SDDC Manager downloads the correct bundle based on your VCF release and marks it available within your SDDC Manager. When customers see a new image available, they are guaranteed that it is already compatible with their VCF deployment. This curated bundle management and validation is part of the turnkey experience customers gain with VCF on VxRail.
The following figure illustrates the differences in planning a cluster update for VCF with vLCM Images and VCF with VxRail.
Figure 8 Planning for a cluster update
Defining the baseline node image is vital for defining the hardware health of your cluster. Defining a target version for your system's next update is equally important. It should involve testing the specific combination of components in the desired image, in addition to the standard interoperability validation performed by the Ready Node hardware vendor when updates to server hardware firmware and drivers are released. Once the hardware baseline is known, the ESXi image must be imported into vCenter. Drivers, firmware, and Cluster Profiles must then be defined in vCenter so they are ready to be exported.
We use the same process as originally outlined for the initial setup: export the images, run the relevant API calls, and import the files into SDDC Manager. Every future update follows this same process. Additional firmware and driver profiles have to be created if new workload domains or clusters are added with different server hardware configurations. Thus, a deployment that caters to multiple hardware use cases will end up with several driver/firmware profiles that need to be managed and tested independently.
SDDC Manager is the orchestration engine, defining:
For VxRail LCM updates, SDDC Manager sends API calls directly to the VxRail Manager of each cluster being updated to initiate a cluster upgrade. From that point on, VxRail Manager takes ownership of the VxRail update execution, using the same native VxRail Manager LCM execution process used in non-VCF VxRail deployments. During LCM execution, VxRail Manager provides constant feedback to SDDC Manager throughout the process. VxRail updates these components:
To understand the full range of hardware components that are updated with each release, I urge you to check out the VxRail 7.0 Support Matrix.
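The hand-off between SDDC Manager and the per-cluster VxRail Managers can be pictured as a simple dispatch-and-poll loop. This is a hypothetical sketch of that pattern, not the actual product code; `start_upgrade` and `poll_status` stand in for the real (unspecified) API calls:

```python
# Hypothetical sketch of the orchestration pattern described above: SDDC
# Manager kicks off one upgrade per cluster through each VxRail Manager's
# API, then collects status feedback until every cluster reports a
# terminal state. The callables are stand-ins for the real API calls.

def run_cluster_updates(clusters, start_upgrade, poll_status):
    # One upgrade request per cluster; each returns a request handle.
    requests = {c: start_upgrade(c) for c in clusters}
    results = {}
    while len(results) < len(clusters):
        for cluster, request_id in requests.items():
            if cluster in results:
                continue
            # VxRail Manager reports progress back during LCM execution.
            status = poll_status(cluster, request_id)
            if status in ("COMPLETED", "FAILED"):
                results[cluster] = status
    return results

# Example with stubbed-out calls:
start = lambda c: f"req-{c}"
poll = lambda c, r: "COMPLETED"
print(run_cluster_updates(["cluster-a", "cluster-b"], start, poll))
# -> {'cluster-a': 'COMPLETED', 'cluster-b': 'COMPLETED'}
```

The key design point mirrored here is that SDDC Manager only orchestrates; the actual update execution is owned by each cluster's VxRail Manager.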
The following figure summarizes the steps required to execute cluster updates for VCF with vLCM Images and VCF with VxRail.
Figure 9 Executing a cluster update workflow
Unlike standalone vSphere cluster deployments, where vLCM Images manages images on a per-cluster basis, VMware Cloud Foundation allows you to manage all cluster images once imported and repurpose them for other clusters or workload domains. This is a definite improvement, but each new update requires you to create the image, firmware, and driver combinations in vCenter first and then import them into SDDC Manager. Of course, this is after you have repeated the planning phases and completed all the driver and firmware interoperability testing.
Also, it is important to note that if your cluster is managed by vLCM Images and you need to expand it with hardware that is not identical to the original hosts (this can happen when hardware components go end of sale, or when different nodes have different hardware or firmware requirements), you can no longer leverage vLCM Images, nor can you change back to using vLCM Baselines. Proper planning is therefore very important.
VxRail LCM supports customers' ability to grow their clusters with heterogeneous nodes over time. Different generations of servers, or servers with differing hardware characteristics, can be mixed within a cluster in accordance with application profile requirements. A single pre-validated image is made available that covers all hardware profiles. All of this is factored into each VxRail Continuously Validated State update bundle, which is applied to each individual cluster based on its current component version state.
When we piece together the bigger picture with all the LCM stages combined, it provides an excellent representation of the ease of management when VxRail is at the heart of your VCF deployment.
Figure 10 Comparing vLCM Images and VxRail LCM cluster update operations
It’s clear to see that VxRail, with its pre-validated engineered approach, can provide a differentiated customer experience when it comes to operational efficiency, during both the initial deployment phase and the continuous lifecycle management of the HCI.
While vLCM Images provides a significant improvement over manually applying updates, the planning and testing required can become quite iterative. And when newer hardware profiles are introduced over the lifespan of the system, the environment becomes more difficult to manage, introducing additional complexity.
By contrast, VxRail provides a single update file for each release that is curated and made accessible within SDDC Manager natively, with no additional administration effort required. It’s simplicity at its finest, and simplicity is at the core of the VxRail turnkey customer experience.
Dell EMC VxRail Engineering Technologist
Update to VxRail 7.0.100 and Unleash the Performance Within It
Mon, 02 Nov 2020 15:50:28 -0000|
Read Time: 0 minutes
Last week at Dell Technologies, we released VxRail 7.0.100. This release brings support for the latest versions of VMware vSphere and vSAN 7.0 Update 1. Typically, an update release includes a new feature or two, but VMware outdid themselves and crammed not only a load of new or significantly enhanced features into this update, but also some game-changing performance enhancements. As my peers at VMware have already done a fantastic job of explaining these features, I won't attempt to replicate their work; you can find links to the blogs on features that caught my attention in the reference section below. Rather, I want to draw attention to the performance gains, and ask the question: could RAID5 with compression only be the new normal?
Don't worry, I can already hear the cries of "Max performance needs RAID1, RAID5 has IO amplification and parity overhead, data reduction services have drawbacks," but bear with me a little. I'm not suggesting that RAID5 with compression only be used for all workloads; some workloads are definitely unsuitable (streams of compressed video come to mind). Rather, I'm merely suggesting that after our customers go through the painless process of updating their cluster to VxRail 7.0.100 from any of our 36 previous releases over the past two years (yes, you can leap straight from 4.5.211 to 7.0.100 in a single update, and yes, we handle the converging and decommissioning of the Platform Services Controller), they check out the reduction in storage IO latency that their existing workload is putting on their VxRail cluster, and investigate what it represents: in short, more storage performance headroom.
As customers buy VxRail clusters to support production workloads, they can't exactly load them up with a variety of benchmark workload tests to see how far they can be pushed. But at VxRail we are fortunate to have our own dedicated performance team, who have enough VxRail nodes to run a mid-sized enterprise and access to a large library of components, so they can replicate almost any VxRail configuration we sell (and a few we don't). So there is data behind my outrageous suggestion; it isn't just back-of-the-napkin mathematics. Grab a copy of the performance team's recent findings in their whitepaper, Harnessing the performance of Dell EMC VxRail 7.0.100: A lab-based performance analysis of VxRail, and skip to figure 3. There you'll find some very telling before-and-after performance latency curves, with and without data reduction services, for an RDBMS workload. Spoiler: 58% more peak IOPS and almost 40% lower latency; with compression only, this drops to a still very significant 45% more peak IOPS with 30% lower latency. (For those of you screaming "but failure domains," check out the blog Space Efficiency Using the New "Compression only" Option, where Pete Koehler explains the issue and how it no longer exists with compression only.) But what about RAID5? Skip up to figure 1, which summarizes the across-the-board performance gains for IOPS and throughput. Impressive, right? Now slide down to figure 2 to compare the throughput; in particular, compare RAID1 on 7.0 with RAID5 on 7.0 U1. The read performance is almost identical, while the gap in write performance has narrowed. Write performance on RAID5 will likely always lag RAID1 due to IO amplification, but VMware is clearly looking to narrow that gap as much as possible.
If nothing else, the whitepaper should tell you that a simple, hassle-free upgrade to VxRail 7.0.100 will unlock additional performance headroom on your vSAN cluster without any additional costs, and that the tradeoffs associated with RAID5 and data reduction services (compression only) are greatly reduced. There are opportunistic space savings to be had from compression only, but they require committing to a cluster-wide change to unlock, which is something that should not be taken lightly. The guaranteed 33% capacity savings of RAID5, however, can be unlocked per virtual machine and reverted just as easily, representing a lower risk. I opened by asking whether RAID5 with compression only could be the new normal, and I firmly believe the data indicates that this is a viable option for many more workloads.
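Where does the guaranteed 33% figure come from? It falls straight out of the storage policy math: RAID1 with FTT=1 stores two full copies of every block, while RAID5 stores three data segments plus one parity segment. A quick sketch (the 30 TB working set is a hypothetical example):

```python
# Back-of-the-napkin math behind the "guaranteed 33% capacity savings"
# claim: RAID1 FTT=1 writes two full copies (2x raw per usable TB), while
# RAID5 (3 data + 1 parity) writes 4/3 of the usable capacity.
# The 30 TB working set is a hypothetical example.

def raw_capacity_needed(usable_tb: float, multiplier: float) -> float:
    return usable_tb * multiplier

RAID1_FTT1 = 2.0        # 2x raw per usable TB
RAID5_3_PLUS_1 = 4 / 3  # ~1.33x raw per usable TB

usable = 30.0
raid1_raw = raw_capacity_needed(usable, RAID1_FTT1)      # 60.0 TB raw
raid5_raw = raw_capacity_needed(usable, RAID5_3_PLUS_1)  # 40.0 TB raw
savings = 1 - raid5_raw / raid1_raw                      # one third

print(f"RAID5 needs {raid5_raw:.0f} TB vs {raid1_raw:.0f} TB: {savings:.0%} saved")
# -> RAID5 needs 40 TB vs 60 TB: 33% saved
```

And because vSAN storage policies are assigned per virtual machine, that trade can be made (and unmade) one VM at a time, which is exactly why it carries lower risk than a cluster-wide data reduction change.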
My peers at VMware (John Nicholson, Pete Flecha (these two are the voices and brains behind the vSpeakingPodcast – definitely worth listening to), Teodora Todorova Hristov, Pete Koehler and Cedric Rajendran) have written great and in-depth blogs about these features that caught my attention:
vSAN HCI Mesh – eliminate stranded storage by enabling VMs registered to cluster A access storage from cluster B
Shared Witness for 2-Node Deployments - reduced administration time and infrastructure costs through one witness for up to sixty-four 2-node clusters
Enhanced vSAN File Services – adds SMB v2.1 and v3 support for Windows and Mac clients, and adds Kerberos authentication for existing NFS v4.1
Space Efficiency: Compression only option - For demanding workloads that cannot take advantage of deduplication. Compression only has higher throughput, lower latency, and significantly reduced impact on write performance compared to deduplication and compression. Compression only has a reduced failure domain and 7x faster data rebuild rates.
Spare Capacity Management – slack space guidance of 25% has been replaced with a calculated Operational Reserve that requires less space and decreases with scale. An additional option enables a Host rebuild reserve; the VxRail Sizing Tool reserves this by default when sizing configurations with the Add HA filter
Enhanced Durability during Maintenance Mode – data intended for a host in maintenance mode is temporarily recorded in a delta file on another host, preserving the configured FTT during maintenance mode operations
Take VMware Tanzu to the Cloud Edge with Dell Technologies Cloud Platform
Mon, 02 Nov 2020 15:50:28 -0000|
Read Time: 0 minutes
Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.1.0 on VxRail 7.0.100.
This release brings support for the latest versions of VMware Cloud Foundation and Dell EMC VxRail to the Dell Technologies Cloud Platform and provides a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new features.
Cloud Foundation 4.1 on VxRail 7.0.100 introduces support for the latest versions of the SDDC listed below:
For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.
vSphere Cluster Services (vCLS) is a new capability introduced in the vSphere 7 Update 1 release that is included as a part of VCF 4.1. It runs as a set of virtual machines deployed on top of every vSphere cluster. Its initial functionality provides foundational capabilities that are needed to create a decoupled and distributed control plane for clustering services in vSphere. vCLS ensures that cluster services like vSphere DRS and vSphere HA are available to maintain the resources and health of the workloads running in the clusters, independent of the availability of vCenter Server. The figure below shows the components that make up vCLS from the vSphere Web Client.
Not only is vSphere 7 providing modernized data services like embedded vSphere Native Pods with vSphere with Tanzu, but features like vCLS are also beginning the evolution toward modernized, distributed control planes!
VCF can now rename resource objects post creation, including the ability to rename domains, datacenters, and VxRail clusters.
The domain is managed by the SDDC Manager. As a result, you will find that there are additional options within the SDDC Manager UI that will allow you to rename these objects.
VxRail Cluster objects are managed by a given vCenter server instance. In order to change cluster names, you will need to change the name within vCenter Server. Once you do, you can go back to the SDDC Manager and after a refresh of the UI, the new cluster name will be retrieved by the SDDC Manager and shown.
In addition to the domain and VxRail cluster object rename, SDDC Manager now supports the use of a customized Datacenter object name. The enhanced VxRail VI WLD creation wizard has been updated to include an input for the Datacenter name, which is automatically imported into the SDDC Manager inventory during the VxRail VI WLD creation workflow. Note: Make sure the Datacenter name matches the one used during the VxRail Cluster First Run. The figure below shows the Datacenter input step in the enhanced VxRail VI WLD creation wizard within SDDC Manager.
Being able to customize resource object names makes VCF on VxRail more flexible in aligning with an IT organization’s naming policies.
Furthering the Dell Technologies and VMware co-engineering integration efforts for VCF on VxRail, new workflow optimizations have been introduced in VCF 4.1 that take advantage of VxRail Manager APIs for VxRail cluster host removal operations.
When the time comes for VCF on VxRail cloud administrators to remove hosts from WLD clusters and repurpose them for other domains, admins will use the SDDC Manager “Remove Host from WLD Cluster” workflow to perform this task. This remove host operation has now been fully integrated with native VxRail Manager APIs to automate removing physical VxRail hosts from a VxRail cluster as a single end-to-end automated workflow that is kicked off from the SDDC Manager UI or VCF API. This integration further simplifies and streamlines VxRail infrastructure management operations all from within common VMware SDDC management tools. The figure below illustrates the SDDC Manager sub tasks that include new VxRail API calls used by SDDC Manager as a part of the workflow.
Note: Removed VxRail nodes require reimaging prior to repurposing them into other domains. This reimaging currently requires Dell EMC support to perform.
SDDC Manager now has international language support that meets the I18N Internationalization and Localization standard. Options to select the desired language are available in the Cloud Builder UI, which installs SDDC Manager using the selected language settings. SDDC Manager will have localization support for the following languages – German, Japanese, Chinese, French, and Spanish. The figure below illustrates an example of what this would look like in the SDDC Manager UI.
New in VCF 4.1, the vRealize Suite is fully integrated into VCF. SDDC Manager deploys vRealize Suite Lifecycle Manager (vRSLCM) and creates a two-way communication channel between the two components. When deployed, vRSLCM is VCF-aware and reports back to SDDC Manager which vRealize products are installed. The installation of vRealize Suite components utilizes standardized VMware Validated Design (VVD) best-practice deployment designs leveraging Application Virtual Networks (AVNs).
Software Bundles for the vRealize Suite are all downloaded and managed through the SDDC Manager. When patches or updates become available for the vRealize Suite, lifecycle management of the vRealize Suite components is controlled from the SDDC Manager, calling on vRSLCM to execute the updates as part of SDDC Manager LCM workflows. The figure below showcases the process for enabling vRealize Suite for VCF.
VCF Remote Cluster Support enables customers to extend their VCF on VxRail operational capabilities to ROBO and Cloud Edge sites, enabling consistent operations from core to edge. Pair this with an awesome selection of VxRail hardware platform options and Dell Technologies has your Edge use cases covered. More on hardware platforms later…For a great detailed explanation on this exciting new feature check out the link to a detailed VMware blog post on the topic at the end of this post.
With previous VCF on VxRail releases, NSX-T upgrades were all-encompassing, meaning that a single update required updates to all the transport hosts as well as the NSX Edge and Manager components in one operation.
With VCF 4.1, support has been added to perform staggered NSX updates to help minimize maintenance windows. Now, an NSX upgrade can consist of three distinct parts:
Multiple NSX Edge and/or host transport clusters within the NSX-T instance can be upgraded in parallel. The administrator has the option to choose some clusters without having to choose all of them. Clusters within an NSX-T fabric can also be upgraded sequentially, one at a time. Below are some examples of how NSX-T components can be updated.
NSX-T Components can be updated in several ways. These include updating:
The figure below visually depicts these options.
These options provide Cloud admins with a ton of flexibility so they can properly plan and execute NSX-T LCM updates within their respective maintenance windows. More flexible and simpler operations. Nice!
A new ‘view-only’ role has been added to VCF 4.1. For some context, let’s talk a bit now about what happens when logging into the SDDC Manager.
First, you will provide a username and password. This information gets sent to the SDDC Manager, who then sends it to the SSO domain for verification. Once verified, the SDDC Manager can see what role the account has privilege for.
In previous versions of Cloud Foundation, the role would either be for an Administrator or it would be for an Operator.
Now, there is a third role available, called 'Viewer'. As its name suggests, this is a view-only role that has no ability to create, delete, or modify objects. Users who are assigned this role may not see certain items in the SDDC Manager UI, such as the User screen. They may also see a message saying they are unauthorized to perform certain actions.
Also new, VCF now has a local account that can be used during an SSO failure. To understand why this is needed, consider: what happens when the SSO domain is unavailable for some reason? In this case, the user would not be able to log in. To address this, administrators can now configure a VCF local account called admin@local. This account allows certain actions to be performed until the SSO domain is functional again. The VCF local account is defined in the deployment worksheet and used in the VCF bring-up process. If bring-up has already been completed and the local account was not configured, a warning banner is displayed in the SDDC Manager UI until the local account is configured.
Lastly, SDDC Manager now uses new service accounts to streamline communications between SDDC manager and the products within Cloud Foundation. These new service accounts follow VVD guidelines for pre-defined usernames and are administered through the admin user account to improve inter-VCF communications within SDDC Manager.
As described in this blog, with VCF 4.1, SDDC Manager backup-recovery workflows and APIs have been improved to add capabilities such as backup management, backup scheduling, retention policy, on-demand backup, and automatic retries on failure. The improvements also include public APIs for the third-party ecosystem and certified backup solutions from Dell PowerProtect.
VxRail engineering continues to innovate in areas that drive more value to customers, and the latest VCF on VxRail release follows through on delivering just that. New in this release, customers can use the automated VxRail First Run process to deploy VCF on VxRail nodes using 4 x 25Gbps physical port configurations to run the VxRail System vDS for system traffic such as Management, vSAN, and vMotion. The physical port configuration of the VxRail nodes would include 2 x 25Gbps NDC ports and an additional 2 x 25Gbps PCIe NIC ports.
In this 4 x 25Gbps setup, NSX-T traffic runs on the same System vDS. But what is great here (and where the flexibility comes in) is that customers can also choose to separate NSX-T traffic onto its own NSX-T vDS that uplinks to separate physical PCIe NIC ports by using SDDC Manager APIs. This ability was first introduced in the last release and can also be leveraged here to expand the flexibility of VxRail host network configurations.
The figure below illustrates the option to select the base 4 x 25Gbps port configuration during VxRail First Run.
By allowing customers to run the VxRail System vDS across the NDC NIC ports and PCIe NIC ports, customers gain an extra layer of physical NIC redundancy and high availability. This has already been supported with 10Gbps-based VxRail nodes; this release brings the same high availability option to 25Gbps-based VxRail nodes. Extra network high availability AND 25Gbps performance!? Sign me up!
Recently introduced support for the ruggedized D-Series VxRail hardware platforms (D560/D560F) continues to expand the range of VxRail hardware platforms supported in the Dell Technologies Cloud Platform.
These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage, and more importantly, the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas.
These D-Series systems are a perfect match for the latest VCF Remote Cluster features introduced in Cloud Foundation 4.1.0, enabling Cloud Foundation with Tanzu on VxRail to reach these space-constrained and challenging ROBO/Edge sites to run cloud native and traditional workloads, extending existing VCF on VxRail operations to these locations! Cool, right?!
To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.
Well, that about covers it for this release. The innovation train continues. Until next time, feel free to check out the links below to learn more about DTCP (VCF on VxRail).
Twitter - @vwhippersnapper
Building on VxRail HCI System Software: the advantages of multi-cluster active management capabilities
Tue, 29 Sep 2020 19:03:05 -0000|
Read Time: 0 minutes
The signs of autumn are all around us, from the total takeover of pumpkin-spiced everything to the beautiful fall foliage worthy of Bob Ross's inspiration. Just as autumn brings change, so too does the latest release of VxRail ACE. Or should I preface that with 'formerly known as'? I'll get to that explanation shortly.
This release introduces multi-cluster update functionality that will further streamline the lifecycle management (LCM) of your VxRail clusters at scale. With this active management feature comes a new licensing structure and role-based access control to enable the active management of your clusters.
The colors of the leaves are changing, and so is the VxRail ACE name. The brand name VxRail ACE (Analytical Consulting Engine) will no longer be used as of this release. While it was catchy and easy to say, there are two reasons for this change. First, Analytical Consulting Engine no longer describes the full value of the offering or how we intend to expand its features in the future. It has grown beyond the analytics and monitoring capabilities originally introduced in VxRail ACE a year ago and now includes several valuable LCM operations that greatly expand its scope. Second, VxRail ACE has always been part of the VxRail HCI System Software offering. Describing the functionality as part of the overall value of VxRail HCI System Software, instead of giving it its own name, simplifies the message of VxRail's value differentiation.
Going forward, the capability set (that is, analytics, monitoring, and LCM operations) will be referred to as SaaS multi-cluster management -- a more accurate description. The web portal is now referred to as MyVxRail.
Cluster update is the first active management feature offered by SaaS multi-cluster management. It builds on the existing LCM operational tools for planning cluster updates: on-demand pre-update health checks (LCM pre-check), and update bundle download and staging. Now you can initiate updates of your VxRail clusters at scale from MyVxRail. The benefits of cluster updates on MyVxRail tie closely with existing LCM operations. During the planning phase, you can run LCM pre-checks of the clusters you want to update. This tells you whether a cluster is ready for an update and pinpoints areas for remediation for clusters that are not ready. From there, you can schedule your maintenance window and, from MyVxRail, initiate the download and staging of the VxRail update bundle onto those clusters. With this release, you can now execute cluster updates for those clusters. Now that's operational efficiency!
When setting up a cluster update operation, you have the benefit of two pieces of information: a time estimate for the update, and the change data. The update time estimate helps you determine the length of the maintenance window; it is generated from telemetry gathered across the install base to provide more accurate information. The change data is the list of components that require an update to reach the target VxRail version.
Figure 1 MyVxRail Updates tab
Active management requires role-based access control so that only the appropriate individuals have permission to perform configuration changes on your VxRail clusters. You don't want just anyone with access to MyVxRail performing cluster updates. SaaS multi-cluster management leverages vCenter access control for role-based access. From MyVxRail, you can register MyVxRail with the vCenter Servers that manage your VxRail clusters. The registration process gives VxRail privileges in vCenter Server to build roles with specific SaaS multi-cluster management capabilities.
MyVxRail registers the following privileges on vCenter:
Figure 2 VxRail privileges for vCenter access control
We’ve done more to make it easier to perform cluster updates at scale. Typically, when you’re performing a single cluster update, you have to enter the root account credentials for vCenter Server, Platform Services Controller, and VxRail Manager. That’s the same process when performing it from VxRail Manager. But that process can get tedious when you have multiple clusters to update.
VxRail Infrastructure Credentials can store those credentials so you enter them once, at the initial setup of active management, and don't have to enter them again when performing a multi-cluster update. For security, the credentials are stored on each individual cluster, where MyVxRail can read them.
Big time saver! But how secure is it? More secure than hiding Halloween candy from children. For a user to perform a cluster update, the administrator needs to add the 'execute cluster update' privilege to the role assigned to that user. Root credentials can only be managed by users assigned a role that has the 'manage update credentials' privilege.
Figure 3 MyVxRail credentials manager
The last topic is licensing. While all the capabilities you have been using on MyVxRail come with the purchase of the VxRail HCI System Software license, multi-cluster update is different. This feature requires a fee-based add-on software license called 'SaaS active multi-cluster management for VxRail HCI System Software'. All VxRail nodes come with VxRail HCI System Software, so you have access to MyVxRail and the SaaS multi-cluster management features, except for cluster update. To perform an update of a cluster from MyVxRail, all nodes in the cluster must have the add-on software license.
That is a lot to consume for one release. Hopefully, unlike your Thanksgiving meal, you can stay awake for the ending. While the brand name VxRail ACE is no more, we’re continuing to deliver value-adding capabilities. Multi-cluster update is a great feature to further your use of MyVxRail for LCM of your VxRail clusters. With role-based access and VxRail infrastructure credentials, rest assured you’re benefitting from multi-cluster update without sacrificing security.
Daniel Chiu, VxRail Technical Marketing
Exploring the customer experience with lifecycle management for vSAN Ready Nodes and VxRail clusters
Thu, 24 Sep 2020 19:41:49 -0000|
Read Time: 0 minutes
The difference between VMware vSphere LCM (vLCM) and Dell EMC VxRail LCM is still a trending topic that most HCI customers and prospects want more information about. While we compared the two methods at a high level in our previous blog post, let’s dive into the more technical aspects of the LCM operations of VMware vLCM and VxRail LCM. The detailed explanation in this blog post should give you a more complete understanding of your role as an administrator for cluster lifecycle management with vLCM versus VxRail LCM.
Even though vLCM has introduced a vast improvement in automating cluster updates, lifecycle management is more than executing cluster updates. With vLCM, lifecycle management is still very much a customer-driven endeavor. By contrast, VxRail’s overarching goal for LCM is operational simplicity, leveraging Continuously Validated States to drive cluster LCM for the customer. This is a large part of why VxRail has gained over 8,600 customers since its launch in early 2016.
In this blog post, I’ll explain the four major areas of LCM:
The baseline configuration is a vital part of establishing a steady state for the life of your cluster. The baseline configuration is the current known good state of your HCI stack. In this configuration, all the component software and firmware versions are compatible with one another. Interoperability testing has validated full stack integrity for application performance and availability while also meeting security standards in place. This is the ‘happy’ state for you and your cluster. Any changes to the configuration use this baseline to know what needs to be rectified to return to the ‘happy’ state.
vLCM depends on the hardware vendor to provide a Hardware Management Services virtual machine. Dell provides this support for its Dell EMC PowerEdge servers, including vSAN ReadyNodes. I’ll use this implementation to explain the overall process. Dell EMC vSAN ReadyNodes use the OpenManage Integration for VMware vCenter (OMIVV) plugin to connect to and register with the vCenter Server.
Once the VM is deployed and registered, you need to create a credential-based profile. This profile captures two accounts: one for the out-of-band hardware interface, the iDRAC, and the other for the root credentials for the ESXi host. Future changes to the passwords require updating the profile accordingly.
With the VM connection and profile in place, vLCM uses a Catalog XML file to define the initial baseline configuration. To create the Catalog XML file, you need to install and configure the Dell Repository Manager (DRM) to build the hardware profile. Once a profile is defined to your specification, it must be exported and stored on an NFS or CIFS share. The profile is then used to populate the Repository Profile data in the OMIVV UI. If you are unsure of your configuration, refer to the vSAN Hardware Compatibility List (HCL) for the specific supported firmware versions. Once the hardware profile is created, you can associate it with the cluster profile. With the cluster profile defined, you can enable drift detection. Any future change to the Catalog XML file is done within the DRM.
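Conceptually, drift detection boils down to comparing the installed component versions against the baseline, the cluster's last known good state. A minimal illustration follows; this is not the actual DRM/OMIVV logic, and the component names and versions are made up.

```python
def detect_drift(baseline: dict, installed: dict) -> dict:
    """Return components whose installed version differs from the baseline
    (the cluster's last known good, or 'happy', state)."""
    return {
        component: {"expected": expected, "found": installed.get(component)}
        for component, expected in baseline.items()
        if installed.get(component) != expected
    }

# Hypothetical component inventory for illustration:
baseline = {"ESXi": "7.0.0", "iDRAC": "4.22", "NIC firmware": "19.5.12"}
installed = {"ESXi": "7.0.0", "iDRAC": "4.10", "NIC firmware": "19.5.12"}
drift = detect_drift(baseline, installed)  # flags only the iDRAC entry
```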
It’s important to note that vLCM was introduced in vSphere 7.0. To use vLCM, you must first update or deploy your clusters to run vSphere 7.x.
With VxRail, when the cluster arrives at the customer data center, it’s already running in a ‘happy’ state. For VxRail, the ‘happy’ state is called Continuously Validated States. The term is pluralized because VxRail defines all the ‘happy’ states that your cluster will update to over time. This means that your cluster is always running in a ‘happy’ state without you needing to research, define, and test to arrive at Continuously Validated States throughout the life of your cluster. VxRail (specifically, the VxRail engineering team) does it for you. This has been a central tenet of VxRail since the product first launched with vSphere 6.0. Since then it has helped customers transition to vSphere 6.5, 6.7, and now 7.0.
Once the VxRail cluster initialization is complete, use your Dell EMC Support credentials to configure the VxRail repository setting within vCenter. The VxRail Manager plugin for vCenter will automatically connect to the VxRail repository at Dell EMC and pull down the next available update package.
Figure 1 Defining the initial baseline configuration
Updates are a constant in IT, and VMware is constantly adding new capabilities or product/security fixes that require updating to newer versions of software. Take for example the vSphere 7.0 Update 1 release that VMware and Dell Technologies just announced. Those eye-opening features are available to you when you update to that release. You can check out just how often VMware has historically updated vSphere here: https://kb.vmware.com/s/article/2143832.
As you know, planning for a cluster update is an iterative process with inherent risk associated with it. Failing to plan diligently can cause adverse effects on your cluster, ranging from network outages and node failure to data unavailability or data loss. That said, it’s important to mitigate the risk where you can.
With vLCM, the responsibility of planning for a cluster update rests on the customers’ shoulders, including the risk. Understanding the Bill of Materials that makes up your server’s hardware profile is paramount to success. Once all the components are known, and a target version of vSphere ESXi is specified, the supported driver and firmware version needs to be investigated and documented. You must consult the VMware Compatibility Guide to find out which drivers/firmware are supported for each ESXi release.
It is important to note that although vLCM gives you the toolset to apply firmware and driver updates, it does not validate compatibility or support for each combination for you, except for the HBA Driver. This task is firmly in the customer’s domain. It is advisable to validate and test the combination in a separate test environment to ensure that no performance regression or issues are introduced into the production environment. Interoperability testing can be an extensive and expensive undertaking. Customers should create and define robust testing processes to ensure that full interoperability and compatibility is met for all components managed and upgraded by vLCM.
With Dell EMC vSAN Ready Nodes, customers can rest assured that the HCL certification and compatibility validation steps have been performed. However, the customer is still responsible for interoperability testing.
VxRail engineering has taken a unique approach to LCM. Rather than leaving the time-consuming LCM planning to already overburdened IT departments, they have drastically reduced the risk by investing over $60 million, more than 25,000 hours of testing per major release, and a team of more than 100 members into a comprehensive regression test plan. This plan is completed prior to every VxRail code release. (This is in addition to the testing and validation performed by PowerEdge, on which VxRail nodes are built.)
Dell EMC VxRail engineering performs this testing within 30 days of any new VMware release (even quicker for express patches), so that customers can continually benefit from the latest VMware software innovations and confidently address security vulnerabilities. You may have heard this called “synchronous release”.
The outcome of this effort is a single update bundle that is used to update the entire HCI stack, including the operating system, the hardware’s drivers and firmware, and management components such as VxRail Manager and vCenter. This allows VxRail to define the declarative configuration we mentioned previously (“Continuously Validated States”), allowing us to move easily from one validated state to the next with each update.
Figure 2 Planning for a cluster update
The biggest improvement with vLCM is its ability to orchestrate and automate a full stack HCI cluster update. This simplifies the update operation and brings enormous time savings. This process is showcased in a recent study performed by Principled Technologies with PowerEdge Servers with vSphere (not including vSAN).
The first step is to import the ESXi ISO via the vLCM tab in the vCenter Server UI. Once it is uploaded, select the relevant cluster and ensure that the cluster profile (created in the initial baseline configuration phase) is associated with the cluster being updated. Now you can apply the target configuration by editing the ESXi image and, from the OMIVV UI, choosing the correct firmware and driver to apply to the hardware profile. Once a compliance scan is complete, you will have the option to remediate all hosts.
If there are multiple homogeneous clusters to update, it can be as easy as executing the cluster update against the same cluster profile. However, if the next cluster has a different hardware configuration, you have to perform the above steps all over again. Customers with varying hardware and software requirements for their clusters will have to repeat many of these steps, including the planning tasks, to ensure stack integrity.
With VxRail and Continuously Validated States, updating from one configuration to another is even simpler. You can access the VxRail Manager directly within the vCenter Server UI to initiate the update. The LCM operation automatically retrieves the update bundle from the VxRail repository, runs a full stack pre-update health check, and performs the cluster update.
With VxRail, performing multi-cluster updates is as simple as performing a single-cluster update. The same LCM cluster update workflow is followed. While different hardware configurations on separate clusters will add more labor for IT staff for vSAN Ready Nodes, this doesn’t apply to VxRail. In fact, in the latest release of our SaaS multi-cluster management capability set, customers can now easily perform cluster updates at scale from our cloud-based management platform, MyVxRail.
Figure 3 Executing a cluster update
The long-term integrity of a cluster outlasts the software and hardware in it. As mentioned earlier, because new releases are made available frequently, software has a very short life span. While hardware has more staying power, it won’t outlast some of the applications running on it. New hardware platforms will emerge. New hardware devices will enter the market that will launch new workloads, such as machine learning, graphics rendering, and visualization workflows. You will need the cluster to evolve non-disruptively to deliver the application performance, availability, and diversity your end-users require.
In its current form, vLCM will struggle in long-term cluster lifecycle management. In particular, its inability to support heterogeneous nodes (nodes with different hardware configurations) in the same cluster will limit its application diversification and its ability to take advantage of new hardware platforms without impacting end-users.
VxRail LCM touts its ability to allow customers to grow non-disruptively and to scale their clusters over time. That includes adding non-identical nodes into the clusters for new applications, adding new hardware devices for new applications or more capacity, or retiring old hardware from the cluster.
Figure 4 Comparing vSphere LCM and VxRail LCM cluster update operations driven by the customer
The VMware vLCM approach empowers customers who are looking for more configuration flexibility and control. They have the option to select their own hardware components and firmware to build the cluster profile. With this freedom comes the responsibility to define the HCI stack and make investments in equipment and personnel to ensure stack integrity. vLCM supports this customer-driven approach with improvements in cluster update execution for faster outcomes.
Dell EMC VxRail LCM continues to take a more comprehensive approach to optimize operational efficiency from the point of the view of the customer. VxRail customers value its LCM capabilities because it reduces operational time and effort which can be diverted into other areas of need in IT. VxRail takes on the responsibility to drive stack integrity for the lifecycle management of the cluster with Continuously Validated States. And VxRail sustains stack integrity throughout the life of the cluster, allowing you to simply and predictably evolve with technology trends.
The Latest VxRail Platform Innovation is Now Included in Your Cloud
Tue, 18 Aug 2020 15:32:11 -0000|
Read Time: 0 minutes
Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1.1 on VxRail 7.0.010.
This release brings support for the latest version of VxRail to the Dell Technologies Cloud Platform. Let’s review what these new features are all about.
Updated VxRail Software Bill of Materials
Please check out the VCF on VxRail release notes for a full listing of the supported software BOM associated with this release. You can find the link at the bottom of the page.
VxRail Hardware Platform Updates
VxRail 7.0.010 brings new support for the ruggedized D-Series VxRail hardware platforms (D560/D560F). These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage, and, more importantly, operational simplicity, delivering the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas. To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.
Also, this release is reintroducing GPU support that was not in the initial VCF 4.0 on VxRail 7.0 release.
New and Improved VxRail First Run Experience
This release introduces a new Day 1 VxRail cluster first run workflow along with UI enhancements. The new Day 1 deployment wizard comprises 13 steps, or top-level tasks. This Day 1 workflow update was required to support new VxRail HCI System Software enhancements.
The new UI provides more flexibility for configuration data entry during deployment. These options include allowing unique hostnames for each ESX host without forcing a naming convention, allowing non-sequential IP addresses for hosts in the cluster, and supporting a geographical location ID tag, such as Rack Name or Rack Position. It provides a cleaner interface with a consistent look and feel for Information, Warnings, and Errors. There is improved validation, providing a higher level of feedback when errors are encountered or validation checks fail. And finally, the options to manually enter all the configuration parameters or upload a pre-defined configuration via a YAML or JSON file are still available too! The figure below illustrates the new first run steps and UI.
New VxRail API to Automate Day 1 VxRail First Run Cluster Creation
This feature allows for fast and consistent VxRail cluster deployments using the programmatic extensibility of a REST API. It provides administrators with an additional option for creating VxRail clusters in addition to the VxRail Manager first run UI.
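As a rough sketch of what driving first run programmatically could look like, here is a minimal Python example. The endpoint path, payload fields, and VxRail Manager address are all assumptions for illustration; consult the VxRail API documentation for the real request schema.

```python
# Illustrative sketch only: the endpoint path and payload fields are assumptions
# modeled on a generic day-1 REST workflow, not the documented VxRail API schema.
import json
import urllib.request

VXM_URL = "https://vxrail-manager.example.local"  # hypothetical address

def build_day1_payload(cluster_name: str, hosts: list) -> dict:
    """Assemble a minimal day-1 cluster-creation request body."""
    return {"cluster_name": cluster_name, "hosts": hosts}

def create_cluster_request(payload: dict) -> urllib.request.Request:
    """Prepare the POST that would kick off first-run deployment."""
    return urllib.request.Request(
        VXM_URL + "/rest/vxm/v1/cluster",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

payload = build_day1_payload(
    "edge-cluster-01",
    [{"hostname": "esx-%02d" % i, "ip": "192.168.10.%d" % (10 + i)} for i in range(6)],
)
request = create_cluster_request(payload)  # urlopen(request) would submit it
```

Note that the request is only prepared here, not sent; in practice you would also authenticate and poll the resulting deployment task for completion.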
Day 1 Support to Initially Deploy Up to Six Nodes in a VxRail Cluster During VxRail First Run
The previous maximum node deployment supported in the VxRail first run was four. Administrators who needed clusters larger than four nodes had to create the cluster with four nodes and, once that was in place, perform node expansions to reach the desired cluster size. This new feature reduces the time needed to initially create larger VxRail clusters by allowing a larger starting point of six VxRail nodes.
VxRail Host Geo-Location Tagging
This is probably one of the coolest and most underrated features in the release in my opinion. VxRail Manager now supports geographic location tags for VxRail hosts. This capability allows for important admin-defined host metadata that can assist many customers in gaining greater visibility of the physical location of the HCI infrastructure that makes up their cloud. This information is configured as “Host Settings” during VxRail first run as illustrated in the figure below.
As shown, the two values that make up the geo-location tags are Rack Name and Rack Position. These values are stored in the iDRAC of each VxRail host. You may be asking yourself, “Great! I have the ability to add additional metadata for my VxRail hosts, but what can I do with it?” Together, these values help a cloud administrator identify a VxRail host’s position within a given rack in the data center. Cloud administrators can then leverage this data to choose the VxRail host order they want displayed in the VxRail Manager vCenter plugin Physical View. The figure below illustrates what this would look like.
As data center environments grow, VxRail host expansion operations can be used to add infrastructure capacity. The VxRail “Add VxRail Hosts” automated expansion workflow has been updated to include a new Host Location step, which allows administrators to add geo-location Rack Name and Rack Position metadata for the new hosts being added to an existing VxRail cluster. The figure below shows what a host expansion operation looks like.
In this fast-paced world of digital transformation, it is not uncommon for cloud data center infrastructure to be moved within a data center after it has already been installed. This could be due to physical rack expansion design changes or infrastructure repurposing. These situations were also considered in the design of VxRail geo-location tags: there is an option to dynamically edit an existing host’s geo-location information. When this is done, VxRail Manager automatically updates the host’s iDRAC with the new values. The figure below shows what the host edit looks like.
All these geo-location management capabilities provide VCF on VxRail administrators with full stack physical to virtual infrastructure mapping that help further extend the Cloud Foundation management experience and simplify operations! And this capability is only available with the Dell Technologies Cloud Platform (VCF on VxRail)! How cool is that?!
VxRail Security Enhancements
Added Security Compliance With The Addition of FIPS 140-2 Level 1 Validated Cryptography For VxRail Manager
Cloud Foundation on VxRail offers intrinsic security built into every layer of the solution stack, from hardware silicon to storage to compute to networking to governance controls. This helps customers make security a built-in part of the platform for traditional workloads as well as container-based cloud native workloads, rather than something that is bolted on after the fact.
Building on the intrinsic security capabilities of the platform are the following new features:
VxRail Manager is now FIPS 140-2 compliant, offering built-in intrinsic encryption, meeting the high levels of security standards required by the US Department of Defense.
From VxRail 7.0.010 onward, VxRail has ‘FIPS inside’! This would entail having built-in features such as:
Disable VxRail LCM operations from vCenter
To limit administrator configuration errors caused by performing VxRail LCM operations from within vCenter rather than through SDDC Manager, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager plugin Updates screen out of the box. This forces administrators to use SDDC Manager for all LCM operations, which guarantees that the full HW/SW stack has been qualified and validated for their environment. The figure below illustrates what this looks like.
Disable VxRail Host Rename/Re-IP operations in vCenter
Continuing the theme of limiting administrator configuration errors, this feature prevents administrators from performing VxRail Host Edit operations from within vCenter that are not supported in VCF. To maintain a consistent operating experience, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager plugin Hosts screen out of the box. The figure below illustrates what this looks like.
Now those are some intrinsic security features!
Well that about covers all the new features! Thanks for taking the time to learn more about this latest release. As always, check out some of the links at the bottom of this page to access additional VCF on VxRail resources.
Twitter - @vwhippersnapper
VxRail & Intel Optane for Extreme Performance
Fri, 07 Aug 2020 15:33:49 -0000|
Read Time: 0 minutes
Enabling high performance for HCI workloads is exactly what happens when VxRail is configured with Intel Optane Persistent Memory (PMem). Optane PMem provides compute and storage performance to better serve applications and business-critical workloads. So, what is Intel Optane Persistent Memory? Persistent memory is memory that can be used as storage, providing RAM-like performance, very low latency, and high bandwidth. It’s great for applications that require or consume large amounts of memory, like SAP HANA, and it has many other use cases, as shown in Figure 1. VxRail is certified for both SAP HANA and Intel Optane PMem.
Moreover, PMem can be used as block storage where data can be written persistently; a great example is DBMS log files. A key advantage of this technology is that you can start small with a single PMem card (or module), then scale and grow as needed with the ability to add up to 12 cards. Customers can take advantage of PMem immediately because there’s no need to make major hardware or configuration changes, nor budget for a large capital expenditure.
There are a wide variety of use cases today including those you see here:
Figure 1: Intel Optane PMem Use Cases
PMem offers two very different operating modes, Memory and App Direct, and App Direct in turn can be used in two very different ways.
First, Intel Optane PMem in Memory mode is not yet supported by VxRail. This mode acts as volatile system memory and provides significantly lower cost per GB then traditional DRAM DIMMs. A follow-on update to this blog will describe this mode and test results in much more detail once it is supported.
As for App Direct mode (supported today), PMem is consumed by virtual machines either as a block storage device, known as vPMemDisk, or as byte-addressable memory, known as virtual NVDIMM. Both provide great benefit to the applications running in a virtual machine, just in very different ways. vPMemDisk can be used with any virtual machine hardware version and any guest OS. Since it’s presented as a block device, it is treated like any other virtual disk, and applications and/or data can be placed on it. The second consumption method, NVDIMM, has the advantage of being addressed in the same way as regular RAM; however, it can retain its data through reboots or power failures. This is a considerable plus for large in-memory databases like SAP HANA, where cache warm-up, or the time to load tables into memory, can be significant!
However, it’s important to note that, like any other memory module, the PMem module does not provide data redundancy. This may not be an issue for some data files on commonly used applications that can be re-created in case of a host failure. But a key principle when using PMem, either as block storage or byte addressable memory is that the applications are responsible for handling data replication to provide durability.
New data redundancy options are expected for applications that use PMem and should be well understood before deployment.
First, we’ll look at test results using PMem as a virtual disk (vPMemDisk). Our engineering team tested VxRail with PMem in App Direct mode and ran comparison tests against a VxRail all-flash system (P570F series platform). The testing simulated a typical 4K OLTP workload with a 70/30 read/write ratio. Our results achieved more than 1.8M IOPS, or 6X more than the all-flash VxRail system. That equates to 93% faster response times (lower latency) and 6X greater throughput, as shown here:
Figure 2: VxRail PMem App Direct versus VxRail all-flash
This latency difference indicates the potential to improve the performance of legacy applications by placing specific data files on a PMem module, for example, placing log files on PMem. To verify the benefit of this log acceleration use case, we ran a TPC-C benchmark comparing VxRail configured with a log file on a vPMemDisk to a VxRail all-flash vSAN, and we saw a 46% improvement in the number of transactions per minute.
Figure 3: Log file acceleration use case
For the second consumption method, we tested PMem in App Direct mode using NVDIMM. We performed tests using 1, 2, 4, 8, and 12 PMem modules. All testing has been evaluated and validated by ESG (Enterprise Strategy Group). The certified white paper has been published, as highlighted in the resources section.
Figure 4: NVDIMM device testing (vSAN not-optimized versus optimized PMem NVDIMM)
The results prove near-linear scalability as we increase the number of modules from 1 to 12. With 12 PMem modules, VxRail achieves 80 times more IOPS than when running against non-optimized vSAN (meaning VxRail all-flash vSAN with no PMem involved), and 100X for the 4K RW workload. The right half of the graphic depicts throughput results for very large 64KB IOs. With PMem optimized across 12 modules, we saw 28X higher throughput for the 64KB random read (RR) workload, and PMem is 13 times faster for the 64KB RW.
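One way to sanity-check a near-linear scaling claim like this is to compare measured throughput against the ideal of one module's throughput multiplied by the module count. The sketch below does exactly that; the IOPS figures are hypothetical placeholders for illustration, not ESG's measured results.

```python
def scaling_efficiency(per_module_iops: float, measured_iops: float, modules: int) -> float:
    """Fraction of ideal linear scaling achieved: 1.0 means perfectly linear."""
    return measured_iops / (per_module_iops * modules)

# Hypothetical figures for illustration only:
one_module = 150_000        # IOPS measured with a single PMem module
twelve_modules = 1_700_000  # IOPS measured with 12 modules
efficiency = scaling_efficiency(one_module, twelve_modules, 12)  # ~0.94, near linear
```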
What you see here is amazing performance on a single VxRail host and almost linear scalability when adding PMem!! Yes, that warrants a double bang. If you were to max out a 64-node cluster, the potential scalability is phenomenal and game changing!
So, what does all this mean? Key takeaways are:
The references and validation testing have been completed by ESG (Enterprise Strategy Group). White papers and other resources on VxRail for Extreme Performance are available via the links listed below.
By: KJ Bedard – VxRail Technical Marketing Engineer
Adding to the VxRail summer party with the release of VxRail 7.0.010
Wed, 29 Jul 2020 20:15:50 -0000|
Read Time: 0 minutes
After releasing multiple VxRail 4.7 software versions in the early summer, the VxRail 7.0 software train has just now joined the party. Like any considerate guest, VxRail 7.0.010 does not come empty handed. This new software release brings new waves of cluster deployment flexibility so you can run a wider range of application workloads on VxRail, as well as new lifecycle management enhancements for you to sit back and enjoy the party during their next cluster update.
The following capabilities expand the workload possibilities that can run on VxRail clusters:
Network card level redundancy with active/active network connections
Along with these features that increase the market opportunity for VxRail clusters, lifecycle management enhancements also arrive with VxRail 7.0.010’s entrance to the party. VxRail has strengthened its LCM pre-upgrade health check to include more ecosystem components in the VxRail stack. Already providing checks against the HCI hardware and software, VxRail is extending to ancillary components such as the vCenter Server, Secure Remote Services gateway, RecoverPoint for VMs software, and the witness host used for 2-node and stretched clusters. The LCM pre-upgrade health check performs a version compatibility check against these components before upgrading the VxRail cluster. With a stronger LCM pre-upgrade health check, you’ll have more time for summer fun.
VxRail 7.0.010 is here to keep the VxRail summer party going. These new capabilities will help our customers accelerate innovation by providing an HCI platform that delivers the infrastructure flexibility their applications require, while giving administrators the operational freedom and simplicity to fearlessly update their clusters.
Interested in learning more about VxRail 7.0.010? You can find more details in the release notes.
Daniel Chiu, VxRail Technical Marketing
Announcing VMware Cloud Foundation 4.0.1 on Dell EMC VxRail 7.0
Wed, 29 Jul 2020 13:38:33 -0000|
Read Time: 0 minutes
Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1 on VxRail 7.0.
This release offers several enhancements including vSphere with Kubernetes support for entry cloud deployments, enhanced bring up features for more extensibility and accelerated deployments, increased network configuration options, and more efficient LCM capabilities for NSX-T components. Below is the full listing of features that can be found in this release:
Twitter - @vwhippersnapper
2nd Gen AMD EPYC now available to power your favorite hyperconverged platform ;) VxRail
Mon, 27 Jul 2020 18:46:53 -0000|
Read Time: 0 minutes
Last month, Dell EMC expanded our very popular E Series (the E for Everything Series) with the introduction of the E665/F/N, our very first VxRail with an AMD processor, and what a processor it is! The 2nd Gen AMD EPYC processor came to market with a lot of industry-leading capabilities:
These industry leading capabilities enable the VxRail E665 series to deliver dual socket performance in a single socket model - and can provide up to 90% greater general-purpose CPU capacity than other VxRail models when configured with single socket processors.
So, what is the sweet spot or ideal use case for the E665? As always, it depends on many things. Unlike the D Series (our D for Durable Series) that we also launched last month, which has clear rugged use cases, the E665 and the rest of the E Series very much live up to their “Everything” name, and perform admirably in a variety of use cases.
While the 2nd Gen EPYC 64-core processors grab the headlines, there are multiple AMD processor options, including the 16-core AMD 7F52 at 3.50GHz with a max boost of 3.9GHz for applications that benefit from raw clock speed, or where application licensing is core-based. On the topic of licensing, I would be remiss if I didn’t mention VMware’s update to its per-CPU pricing earlier this year. This results in processors with more than 32 cores requiring a second VMware per-CPU license. This may make a 32-core processor an attractive option from an overall capacity and performance versus hardware and licensing cost perspective.
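The licensing arithmetic is easy to sketch: under the updated per-CPU model, each socket consumes one license per 32 cores, rounded up. A quick illustration in Python:

```python
import math

def per_cpu_licenses(sockets: int, cores_per_socket: int, cores_per_license: int = 32) -> int:
    """Licenses required under VMware's per-CPU model: each socket needs one
    license per 32 cores (the limit introduced in the 2020 pricing update)."""
    return sockets * math.ceil(cores_per_socket / cores_per_license)

# A single-socket E665 with a 64-core EPYC needs 2 licenses,
# while the same node with a 32-core part needs only 1.
licenses_64c = per_cpu_licenses(sockets=1, cores_per_socket=64)  # 2
licenses_32c = per_cpu_licenses(sockets=1, cores_per_socket=32)  # 1
```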
Speaking of overall costs, the E665 has dual 10Gb RJ45/SFP+ or dual 25Gb SFP28 base networking options, which can be further expanded with PCIe NICs, including a dual 100Gb SFP28 option. From a cost perspective, the price delta between 10Gb and 25Gb networking is minimal. This is worth considering, particularly for greenfield sites and even for brownfield sites where the networking may be upgraded in the near future. Last year, we began offering Fibre Channel cards on VxRail, which are also available on the E665. While FC connectivity may sound strange for a hyperconverged infrastructure platform, it does make sense for many of our customers who have existing SAN infrastructure, or applications (PowerMax for extremely large databases requiring SRDF) or storage needs (Isilon as a large file repository for medical files) that are more suited to SAN. While we’d prefer these SANs to be Dell EMC products, as long as the SAN is on the VMware SAN HCL, it can be connected. Providing this option enables customers to get the best both worlds have to offer.
The options don’t stop there. While the majority of VxRail nodes are sold with all-flash configurations, there are customers whose needs are met with hybrid configs, or who are looking towards all-NVMe options. The E665 can be configured with as little as 960GB to maximums of 14TB hybrid, 46TB all-flash, or 32TB all-NVMe of raw storage capacity. Memory options consist of 4, 8, or 16 RDIMMs of 16GB, 32GB or 64GB in size. Maximum memory performance, 3200 MT/s, is achieved with one DIMM per memory channel, adding a second matching DIMM reduces bandwidth slightly to 2933 MT/s.
VxRail and Dell Technologies very much recognize that the needs of our customers vary greatly. A product with a single set of options cannot meet all our various customers' different needs. Today, VxRail offers six different series, each with a different focus:
With the highly flexible configuration choices, there is a VxRail for almost every use case, and if there isn’t, there is more than likely something in the broad Dell Technologies portfolio that is.
Author: David Glynn, Sr. Principal Engineer, VxRail Tech Marketing
Healthcare Homerun for VxRail – MEDITECH Certified
Thu, 16 Jul 2020 19:14:23 -0000|
Read Time: 0 minutes
At Dell Technologies we are excited and proud to announce the VxRail HCI (Hyperconverged Infrastructure) certification for MEDITECH. Dell Technologies is #1 in the Hyperconverged Systems segment, a position held for 12 consecutive quarters1. VxRail is the only fully integrated, pre-configured, and tested hyperconverged infrastructure that simplifies and extends VMware environments. This solution helps simplify MEDITECH environments that use VMware VMs, improving performance and scalability by bringing together and optimizing multiple workloads.
With this Dell Technologies certified solution that leverages VxRail, MEDITECH environments are easier to use and have a lower risk of failure, while continuing to provide a fiscally responsible approach.
Dell EMC and MEDITECH worked closely with an approved integrator* during the certification of VxRail running the MEDITECH test harness. Testing consisted of a VxRail cluster supporting all VMs required for the MEDITECH application and providing infrastructure redundancy. MBF (MEDITECH Backup Facilitator) backups are accomplished with Dell EMC NetWorker NMMEDI in conjunction with RecoverPoint for VMs, a combination that has been tested and is backed by best-in-class implementation and a continuous focus on positive customer experience.
1 IDC WW Quarterly Converged Systems Tracker, Vendor Revenue (US$M) Q1 2020, June 18, 2020
Dell Technologies makes IT transformation real for MEDITECH environments with a data-first approach, and as a leading provider of healthcare IT infrastructure, we are uniquely positioned to offer a full breadth of solutions for MEDITECH environments. In fact, more than 60% of MEDITECH's customers deploy a Dell Technologies solution2. For these reasons, at Dell Technologies we're excited and proud to add this certification, which supports MEDITECH Expanse, 6.X, Client/Server, and MAGIC environments, to our Dell Technologies Healthcare portfolio.
*Special thanks to Teknicor for providing their best practices, assistance and lab space for this certification process.
2 HIMSS Analytics, May 2019.
Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing
Big Solutions on Dell EMC VxRail with SQL 2019 Big Data Cluster
Thu, 09 Jul 2020 19:20:04 -0000|
Read Time: 0 minutes
The amount of data and different formats organizations must manage, ingest, and analyze has been the driving force behind Microsoft SQL 2019 Big Data Clusters (BDC). SQL Server 2019 BDC enables the deployment of scalable clusters of SQL Server, Spark, and containerized HDFS (Hadoop Distributed File System) running on Kubernetes.
We recently deployed and tested SQL Server 2019 BDC on Dell EMC VxRail hyperconverged infrastructure to demonstrate how VxRail delivers the performance, scalability, and flexibility needed to bring these multiple workloads together.
The Dell EMC VxRail platform was selected for its ability to incorporate compute, storage, virtualization, and management in one platform offering. The key feature of the VxRail HCI is the integration of vSphere, vSAN, and VxRail HCI System Software for an efficient and reliable deployment and operations experience. The use of VxRail with SQL Server 2019 BDC makes it easy to unite relational data with big data.
The testing demonstrates the advantages of using VxRail with SQL Server 2019 BDC for analytic application development. This also demonstrates how Docker, Kubernetes, and the vSphere Container Storage Interface (CSI) driver accelerate the application development life cycle when they are used with VxRail. The lab environment for development and testing used four VxRail E560F nodes supported by the vSphere CSI driver. With this solution, developers can provision SQL Server BDC in containerized environments without the complexities of traditional methods for installing databases and provisioning storage.
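With the vSphere CSI driver in place, storage for BDC pools is requested through standard Kubernetes objects rather than manual LUN provisioning. As an illustrative sketch (the StorageClass name `vsan-default`, namespace `bdc`, and claim name are hypothetical, not taken from the white paper), a PersistentVolumeClaim that the CSI driver would satisfy with a dynamically provisioned vSAN-backed volume might look like:

```python
import json

def pvc_manifest(name: str, storage_class: str, size_gi: int) -> dict:
    """Build a PersistentVolumeClaim that a CSI driver (such as the vSphere
    CSI driver) can satisfy by dynamically provisioning a backing volume."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name, "namespace": "bdc"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# Example: a claim for an HDFS storage pool volume, serialized for kubectl.
claim = pvc_manifest("storage-pool-hdfs", "vsan-default", 100)
print(json.dumps(claim, indent=2))
```

The point is the separation of concerns: the developer declares capacity and access mode, and the CSI driver handles placement on vSAN underneath VxRail.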
Our white paper, Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail shows the power of implementing SQL Server 2019 BDC technologies on VxRail. Integrating SQL Server 2019 RDBMS, SQL Server BDC, MongoDB, and Oracle RDBMS helps to create a unified data analytics application. Using VxRail enhances the ability of SQL Server 2019 to scale out storage and compute clusters while embracing the virtualization techniques from VMware. This SQL Server 2019 BDC solution also benefits from the simplicity of a complete yet flexible validated Dell EMC VxRail with Kubernetes management and storage integration.
The solution demonstrates the combined value of the following technologies:
Big Data Cluster Services
This diagram shows how the pools are built. It details the benefits of Kubernetes features for container orchestration at scale, including:
This white paper, Microsoft SQL Server 2019 Big Data Cluster on Dell EMC VxRail, addresses big data storage, the tools for handling big data, and the details around testing with TPC-H. When we tested data virtualization with PolyBase, the queries were successful, running without error and returning the results that joined all four data sources.
Because data virtualization does not involve physically copying and moving the data (so that the data is available to business users in real-time), BDC simplifies and centralizes access to and analysis of the organization’s data sphere. It enables IT to manage the solution by consolidating big data and data virtualization on one platform with a proven set of tools.
Success starts with the right foundation:
SQL Server 2019 BDC is a compelling new way to utilize SQL Server to bring high-value relational data and high-volume big data together on a unified, scalable data platform. All of this can be deployed with VxRail, enabling enterprises to experience the power of PolyBase to virtualize their data stores, create data lakes, and create scalable data marts in a unified, secure environment without needing to implement slow and costly Extract, Transform, and Load (ETL) pipelines. This makes data-driven applications and analysis more responsive and productive. SQL Server 2019 BDC and Dell EMC VxRail provide a complete unified data platform to deliver intelligent applications that can help make any organization more successful.
Read the full paper to learn more about how Dell EMC VxRail with SQL 2019 Big Data Clusters can:
Additional VxRail & SQL resources:
Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing
Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications
Mon, 29 Jun 2020 14:48:57 -0000|
Read Time: 0 minutes
Many of us here at Dell Technologies regularly have conversations with customers and talk about what we refer to as the “Power of the Portfolio.” What does this mean exactly? It is essentially a reference to the fact that, as Dell Technologies, we have a robust and broad portfolio of modern IT infrastructure products and solutions across storage, networking, compute, virtualization, data protection, security, and more! At first glance, it can seem overwhelming to many. Some even say it could be considered complex to sort through. But we, as Dell Technologies, on the other hand, see it as an advantage. It allows us to solve a vast majority of our customers’ technical needs and support them as a strategic technology partner.
It is one thing to have the quality and quantity of products and tools to get the job done -- it’s another to leverage this portfolio of products to deliver on what customers want most: business outcomes.
As Dell Technologies continues to innovate, we are making the best use of the technologies we have and are developing ways to use them together seamlessly in order to deliver better business outcomes for our customers. The conversations we have are not about this product OR that product but instead they are about bringing together this set of products AND that set of products to deliver a SOLUTION giving our customers the best of everything Dell Technologies has to offer without compromise and with reduced risk.
Figure 1: Cloud Foundation on VxRail Platform Components
The Dell Technologies Cloud Platform is an example of one of these solutions. And there is no better illustration of taking advantage of the "Power of the Portfolio" than a newly published reference architecture white paper, Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications, which validates the use of the Dell EMC PowerMax system with SRDF/Metro in a Dell Technologies Cloud Platform (VMware Cloud Foundation on Dell EMC VxRail) multi-site stretched-cluster deployment configuration. This configuration provides the highest levels of application availability for customers running mission-critical workloads in their Cloud Foundation on VxRail private cloud, levels that would otherwise not be possible with the core DTCP alone.
Let’s briefly review some of the components used in the reference architecture and how they were configured and tested.
Customers commonly ask whether they can use external storage in Cloud Foundation on VxRail deployments. The answer is yes! This helps customers ease into the transition to a software-defined architecture from an operational perspective. It also helps customers leverage the investments in their existing infrastructure for the many different workloads that might still require external storage services.
External storage and Cloud Foundation have two important use cases: principal storage and supplemental storage.
At the time of writing, Cloud Foundation on VxRail supports supplemental storage use cases only. This is how external storage was used in the reference architecture solution configuration.
The Dell EMC PowerMax is the first Dell EMC hardware platform that uses an end-to-end Non-Volatile Memory Express (NVMe) architecture for customer data. NVMe is a set of standards that define a PCI Express (PCIe) interface used to efficiently access data storage volumes based on Non-Volatile Memory (NVM) media, which includes modern NAND-based flash along with higher-performing Storage Class Memory (SCM) media technologies. The NVMe-based PowerMax array fully unlocks the bandwidth, IOPS, and latency performance benefits that NVM media and multi-core CPUs offer to host-based applications—benefits that are unattainable using the previous generation of all-flash storage arrays. For a more detailed technical overview of the PowerMax Family, please check out the whitepaper Dell EMC PowerMax: Family Overview.
The following figure shows the PowerMax 2000 and PowerMax 8000 models.
Figure 2: PowerMax product family
The Symmetrix Remote Data Facility (SRDF) maintains real-time (or near real-time) copies of data on a PowerMax production storage array at one or more remote PowerMax storage arrays. SRDF has three primary applications:
In the case of this reference architecture, SRDF/Metro was used to provide enhanced levels of high availability across two availability zone sites. For a complete technical overview of SRDF, please check out this great SRDF whitepaper: Dell EMC SRDF.
Now that we are familiar with the components used in the solution, let’s discuss the details of the solution architecture that was used.
This overall solution design provides enhanced levels of flexibility and availability that extend the core capabilities of the VCF on VxRail cloud platform. The VCF on VxRail solution natively supports a stretched-cluster configuration for the management domain and a VI workload domain between two availability zones by using vSAN stretched clusters. A PowerMax SRDF/Metro with vSphere Metro Storage Cluster (vMSC) configuration is added to protect VI workload domain workloads by providing supplemental storage for the workloads running on them.
Two types of vMSC configurations are verified with stretched Cloud Foundation on VxRail: uniform and non-uniform.
The following figure shows the topology used in the reference architecture of the Cloud Foundation uniform stretched-cluster configuration with PowerMax SRDF/Metro.
Figure 3: Cloud Foundation on VxRail uniform stretched-cluster config with PowerMax SRDF/Metro
The following figure shows the topology used in the reference architecture of the Cloud Foundation on VxRail non-uniform stretched cluster configuration with PowerMax SRDF/Metro.
Figure 4: Cloud Foundation on VxRail non-uniform stretched-cluster config with PowerMax SRDF/Metro
We completed solution validation testing across the following major categories for both iSCSI and FC connected devices:
For complete details on all of the individual validation test scenarios that were performed, and the pass/fail results, check out the whitepaper.
To summarize, this white paper describes how Dell EMC engineers integrated VMware Cloud Foundation on VxRail with PowerMax SRDF/Metro and provides the design configuration steps that they took to automatically provision PowerMax storage by using the PowerMax vRO plug-in. The paper validates that the Cloud Foundation on VxRail solution functions as expected in both a PowerMax uniform vMSC configuration and a non-uniform vMSC configuration by passing all the designed test cases. This reference architecture validation demonstrates the power of the Dell Technologies portfolio to provide customers with modern cloud infrastructure technologies that deliver the highest levels of application availability for business-critical and mission-critical applications running in their private clouds.
Find the link to the white paper below along with other VCF on VxRail resources and see how you can leverage the “Power of the Portfolio” to support your business!
Twitter - @vwhippersnapper
Announcing General Availability of VCF 3.10.01 on VxRail 4.7.511
Thu, 18 Jun 2020 14:57:10 -0000|
Read Time: 0 minutes
Today (7/2), Dell Technologies is announcing General Availability of VMware Cloud Foundation 3.10.01 on VxRail 4.7.511.
We were notified about an upcoming important patch for Cloud Foundation version 3.10 from VMware, and we wanted to incorporate it in the GA version on VxRail for the best experience for our customers.
This new release introduces VCF enhancements and VxRail enhancements.
Figure 1. ESXi Cluster-Level and Parallel Upgrades
Option to disable Application Virtual Networks (AVNs) during bring-up - AVNs deploy vRealize Suite components on NSX overlay networks, and we recommend using them during bring-up. However, customers can now disable this feature, for instance, if they are not planning to use vRealize Suite components.
VMware Cloud Foundation 3.10.01 on VxRail 4.7.511 provides several features that allow existing customers to upgrade their platform more efficiently than ever before. The updated LCM capabilities offer not only more efficiency (with parallelism), but more flexibility in terms of handling the maintenance windows. With skip-level upgrade, available in this version as a professional service, it's also possible to get to this latest release much faster. This increases security, and allows customers to get the most benefit from their existing investments in the platform. New customers will benefit from the broader spectrum of hardware options, including ruggedized (D-series) and AMD-based nodes.
Blog post about VCF 4.0 on VxRail 7.0: The Dell Technologies Cloud Platform – Smaller in Size, Big on Features
Blog post about new features in VxRail 4.7.510: VxRail brings key features with the release of 4.7.510
Blog post about VCF 3.10 from VMware: Introducing VMware Cloud Foundation 3.10
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
VxRail brings key features with the release of 4.7.510
Thu, 18 Jun 2020 14:24:47 -0000|
Read Time: 0 minutes
At a high level, this release further solidifies VxRail's synchronous release commitment with vSphere of 30 days or less. The VxRail 4.7.510 release integrates and aligns with VMware by including the vSphere 6.7U3 patch release. More importantly, vSphere 6.7U3 provides the underlying support for Intel Optane persistent memory (or PMem), also offered in this release.
Intel Optane persistent memory is a non-volatile storage medium with RAM-like performance characteristics. Intel Optane PMem in a hyperconverged VxRail environment accelerates IT transformation with faster analytics (think in-memory DBMS) and cloud services.
Intel Optane PMem (in App Direct mode) provides added memory options for the E560/F/N and P570/F and is supported on version 4.7.410. Additionally, PMem will be supported on the P580N beginning with version 4.7.510 on July 14.
This technology is ideal for many use cases, including in-memory databases and block storage devices, and it's flexible and scalable, allowing you to start small with a single PMem module (card) and scale as needed. Other use cases include real-time analytics and transaction processing, journaling, massively parallel query functions, checkpoint acceleration, recovery time reduction, paging reduction, and overall application performance improvements.
New functionality enables customers to schedule and run "on demand” health checks in advance, and in lieu of the LCM upgrade. Not only does this give customers the flexibility to pro-actively troubleshoot issues, but it ensures that clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade schedules, as they can rest assured that clusters will seamlessly upgrade within a specified window. Of course, running health checks on a regular basis provides sanity in knowing that your clusters are always ready for unscheduled patches and security updates.
Finally, the VxRail 4.7.510 release introduces optimized security functionality with two-factor authentication (or 2FA) with SecurID for VxRail. 2FA allows users to log in to VxRail via the vCenter plugin when vCenter is configured for RSA 2FA. Prior to this version, the user was required to enter a username and password. The RSA Authentication Manager automatically verifies multiple prerequisites and system components to identify and authenticate users. This new functionality saves time by alleviating the username/password entry process for VxRail access. Two-factor authentication methods are often required by government agencies and large enterprises. VxRail has already incorporated enhanced security offerings including security hardening, VxRail ACLs and RBAC, KMIP-compliant key management, secure logging, and DARE; now, with the release of 4.7.510, the inclusion of 2FA further distinguishes VxRail as a market leader.
Please check out these resources for more VxRail 4.7.510 information:
By: KJ Bedard - VxRail Technical Marketing Engineer
Protecting VxRail from Power Disturbances
Fri, 12 Jun 2020 13:03:51 -0000|
Read Time: 0 minutes
Over the last few years, VxRail has evolved significantly -- becoming an ideal platform for most use cases and applications, spanning the core data center, edge locations, and the cloud. With its simplicity, scalability, and flexibility, it’s a great foundation for customers’ digital transformation initiatives, as well as high value and more demanding workloads, such as SAP HANA.
Running more business-critical workloads requires following best practices regarding data protection and availability. Dell Technologies specializes in data protection solutions and offers a portfolio of products that can fulfill even the most demanding RPO/RTO requirements from our customers. However, we are probably not giving enough attention to the other area related to this topic: protection against power disturbances and outages. Uninterruptible Power Supply (UPS) systems are at the heart of a data center’s electrical systems, and because VxRail is running critical workloads, it is a best practice to leverage a UPS to protect them and to ensure data integrity in case of unplanned power events. I want to highlight a solution from one of our partners – Eaton.
Eaton is an Advantage member of the Dell EMC Technology Connect Partner Program and the first UPS vendor who integrated their solution with VxRail. Eaton’s solution is a great example of how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers. Having integrated Eaton’s Intelligent Power Manager (IPM) software with VxRail APIs, and leveraged Eaton’s Gigabit Network Card, the solution can run on the same protected VxRail cluster. This removes the need for additional external compute infrastructure to host the power management software - just a compatible Eaton UPS is required.
The solution consists of:
The main benefits are:
How does it work?
It’s quite simple (see the figure below). What’s interesting and unique is that the IPM software, which is running on the cluster, delegates the final shutdown of the system VMs and cluster to the card in the UPS device, and the card uses VxRail APIs to execute the cluster shutdown.
Figure 1. Eaton UPS and VxRail integration explained
Protection against unplanned power events should be a part of a business continuity strategy for all customers who run their critical workloads on VxRail. This ensures data integrity by enabling automated and graceful shutdown of VxRail cluster(s). Eaton’s solution is a great example of providing such protection and how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers.
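The integration hinges on the VxRail API's cluster-shutdown capability, which the UPS network card invokes when a power event is detected. As a rough sketch (the VxRail Manager hostname is hypothetical, and the endpoint path and `dryrun` flag should be verified against the VxRail API documentation for your release), a dry-run shutdown request could be prepared like this:

```python
import json
from urllib import request

VXM_HOST = "vxm.example.local"  # hypothetical VxRail Manager address

def build_shutdown_request(host: str, dry_run: bool = True) -> request.Request:
    """Prepare a cluster-shutdown call against VxRail Manager.

    With dry_run=True the API only validates that the cluster can be
    gracefully shut down, without actually powering anything off.
    """
    url = f"https://{host}/rest/vxm/v1/cluster/shutdown"
    payload = json.dumps({"dryrun": dry_run}).encode()
    return request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_shutdown_request(VXM_HOST)
print(req.full_url)  # https://vxm.example.local/rest/vxm/v1/cluster/shutdown
```

Running a dry-run validation on a schedule, well before any power event, is a sensible complement to the automated shutdown itself.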
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
VxRail extends flexibility in Healthcare with new EHR best practices
Mon, 01 Jun 2020 13:39:38 -0000|
Read Time: 0 minutes
The healthcare industry is pressured not only to deliver as health providers, but also to make the infrastructure that operates the healthcare system secure, scalable, and simple to use, allowing healthcare providers to focus on patients. VxRail has had a great deal of success in the healthcare vertical because its core values align so closely with those demanded by the industry. With early successes in VDI (Virtual Desktop Infrastructure), healthcare IT departments expanded VxRail to more business-critical and even life-critical IT use cases, because it proved that it is highly scalable, simple to use, and has security built into everything it does.
“Best Practices for VMware vSAN with Epic on Dell EMC VxRail,” created in collaboration with our peers at VMware, highlights the considerations around a small to medium size environment, specifically for Epic. It uses a 6-node VxRail configuration to provide predictable and consistent performance, as well as Life Cycle Management (LCM) for VxRail. The VxRail node used in this best practice is the E560N, an all-NVMe solution. Balancing workload and budget requirements, the dual-socket E560N provides a cost-effective, space-efficient 1U platform. Available with up to 32TB of NVMe capacity, the E560N is the first all-NVMe 1U VxRail platform. The all-NVMe capability provides higher performance at low queue depths, making it much easier to reliably deliver very high real-world performance for a SQL Server database management system (DBMS). Running multiple healthcare applications, including EHR, while maintaining the secure, scalable, and simplified use of VxRail is possible, enabling healthcare IT departments to scale and expand infrastructure to meet the ever-growing demands of health providers and the healthcare industry.
VxRail is flexible enough to support hospital systems alongside other applications for business and even education. A great example of this flexibility can be seen in this Mercy Ships case study. The new best practices for Epic EHR, combined with the proven successes that VxRail has with VDI in the healthcare vertical, are a testament to VxRail's versatility.
Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing
Best Practices for VMware vSAN with Epic on Dell EMC VxRail - Here
Dell EMC VxRail Comprehensive Security Design - Here
See more solutions from Dell for healthcare and life sciences - Here
Customer profile Mercy Ships - Here
Top benefits to using Intel Optane NVMe for cache drives in VxRail
Wed, 20 May 2020 14:42:17 -0000|
Read Time: 0 minutes
There is a saying that “A picture paints a thousand words” but let me add that a “graph can make for an awesome picture”.
Last August we at VxRail worked with ESG on a technical validation paper that included, among other things, the recent addition of Intel Optane NVMe drives for the vSAN caching layer. Figure 3 in this paper is a graph showing the results of a throughput benchmark workload (more on benchmarks later). When I do customer briefings and the question of vSAN caching performance comes up, this is my go-to whiteboard sketch because on its own it paints a very clear picture about the benefit of using Optane drives – and also because it is easy to draw.
In the public and private cloud, predictability of performance is important, doubly so for any form of latency. This is where caching comes into play: rather than having to wait on a busy system, we just leave our data in the write cache inbox and get an acknowledgment. The inverse is also true. Like many parents, I read almost the same bedtime stories to my young kids every night; you can be sure those books remain close to hand on my bedside "read cache" table. This write and read caching greatly helps in providing performance and consistent latency. With vSAN all-flash, there is no longer any read cache, as the flash drives at the capacity layer provide enough random read access performance... just as my collection of bedtime story books has been replaced with a Kindle full of eBooks. Back to the write cache inbox where we've been dropping things off: at some point, this write cache needs to be emptied, and this is where the Intel Optane NVMe drives shine. Drawing the comparison back to my kids, I no longer drive to a library to drop off books. With a flick of my finger I can return, or in cache terms de-stage, books from my Kindle back to the town library (the capacity drives, if you will). This is a lot less disruptive to my day-to-day life: I don't need to schedule it, I don't need to stop what I'm doing, and with a bit of practice I've been able to do this mid-story. Let's look at this in actual IT terms and business benefits.
To really show off how well the Optane drives shine, we want to stress the write cache as much as possible. This is where benchmarking tools and the right knowledge of how to apply them come into play. We had ESG design and run these benchmarking workloads for us. Now let’s be clear, this test is not reflective of a real-world workload but was designed purely to stress the write cache, in particular the de-staging from cache to capacity. The workload that created my go-to whiteboard sketch was the 100% sequential 64KB workload with a 1.2TB working set per node for 75 minutes.
The graph clearly shows the benefit of the Optane drives: they keep on chugging at 2,500MB/sec of throughput the entire time without missing a beat. What's not to like about that! This is usually when the techie customer in the room will try to burst my bubble by pointing out that this unrealistic workload is in no way reflective of their environment, or most environments... which is true. A more real-world workload would be a simulated relational database workload with a 22KB block size, mixing random 8K and sequential 128K I/O, with 60% reads and 40% writes, and a 600GB per node working set, which is quite a mouthful and is shown in figure 5. The results there show a steady 8.4-8.8% increase in IOPS across the board, and a slower rise in latency resulting in a 10.5% lower response time under 80% load.
Those of you running OLTP workloads will appreciate the graph shown in figure 6 where HammerDB was used to emulate the database activity of a typical online brokerage firm. The Optane cache drives under that workload sustained a remarkable 61% more transactions per minute (TPM) and new orders per minute (NOPM). That can result in significant business improvement for an online brokerage firm who adopts Optane drives versus one who is using NAND SSDs.
When it comes to write cache, performance is not everything, write endurance is also extremely important. The vSAN spec requires that cache drives be SSD Endurance Class C (3,650 TBW) or above, and Intel Optane beats this hands down with an over tenfold margin at 41 PBW (41,984 TBW). The Intel Optane 3D XPoint architecture allows memory cells to be individually addressed in a dense, transistor-less, stackable design. This extremely high write endurance capability has let us spec a smaller sized cache drive, which in turn lets us maintain a similar VxRail node price point, enabling you the customer to get more performance for your dollar.
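Plugging in the endurance figures above shows how comfortably Optane clears the vSAN bar (a quick check of the "over tenfold" claim, using only the numbers quoted in this post):

```python
# Figures from the text: the vSAN spec's Endurance Class C floor for cache
# drives is 3,650 TBW; the Intel Optane drive is rated 41 PBW (41,984 TBW).
CLASS_C_TBW = 3_650
OPTANE_TBW = 41_984

margin = OPTANE_TBW / CLASS_C_TBW
print(f"{margin:.1f}x the Class C endurance floor")  # 11.5x
```

An 11.5x endurance margin is what makes it practical to spec a smaller cache drive without worrying about wear-out.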
What’s not to like? Typically, you get to pick any two; faster/better/cheaper. With Intel Optane drives in your VxRail you get all three; more performance and better endurance, at roughly the same cost. Wins all around!
Author: David Glynn, Sr Principal Engineer, VxRail Tech Marketing
The Dell Technologies Cloud Platform – Smaller in Size, Big on Features
Wed, 20 May 2020 13:07:08 -0000|
Read Time: 0 minutes
The Dell Technologies team is very excited to announce that May 12, 2020 marked the general availability of our latest Dell Technologies Cloud Platform release, VMware Cloud Foundation 4.0 on VxRail 7.0. There is so much to unpack in this release across all layers of the platform, from the latest features of VCF 4.0 to newly supported deployment configurations for VCF on VxRail. To help you navigate through all of the goodness, I have broken out this post into two sections: VCF 4.0 updates and new features introduced specifically to VCF on VxRail deployments. Let's jump right to it!
VMware Cloud Foundation 4.0 Updates
A lot of great information on VCF 4.0 features has already been published by VMware as part of their Modern Apps launch earlier this year. If you haven't caught up yet, check out the links to some VMware blogs at the end of this post. Some of my favorite new features include support for vSphere with Kubernetes (GAMECHANGER!), support for NSX-T in the Management Domain, and the NSX-T compatible Virtual Distributed Switch.
Now let’s dive into the items that are new to VCF on VxRail deployments, specifically ones that customers can take advantage of on top of the latest VCF 4.0 goodness.
New to VCF 4.0 on VxRail 7.0 Deployments
VCF Consolidated Architecture Four Node Deployment Support for Entry Level Cloud (available beginning May 26, 2020)
New to VCF on VxRail is support for the VCF Consolidated Architecture deployment option. Until now, VCF on VxRail required that all deployments use the VCF Standard Architecture. This was due to several factors: a major one was that NSX-T was not supported in the VCF Management Domain until this latest release. Having this capability was a prerequisite before we could support the consolidated architecture with VCF on VxRail.
Before we jump into the details of a VCF Consolidated Architecture deployment, let's review what the current VCF Standard deployment is all about.
VCF Standard Architecture Details
This deployment would consist of:
A summary of features includes:
This deployment architecture design is preferred because it provides the most flexibility, scalability, and workload isolation for customers scaling their clouds in production. However, this does require a larger initial infrastructure footprint, and thus cost, to get started.
For something that allows customers to start smaller, VMware developed a validated VCF Consolidated Architecture option. This allows for the Management domain cluster to run both the VCF management components and a customer’s general purpose server VM workloads. Since you are just using the Management Domain infrastructure to run both your management components and user workloads, your minimum infrastructure starting point consists of the four nodes required to create your Management Domain. In this model, vSphere Resource Pools are used to logically isolate cluster resources to the respective workloads running on the cluster. A single vCenter and NSX-T instance is used for all workloads running on the Management Domain cluster.
VCF Consolidated Architecture Details
A summary of features of a Consolidated Architecture deployment:
For customers to get started with an entry level cloud for general purpose VM server workloads, this option provides a smaller entry point, both in terms of required infrastructure footprint as well as cost.
With the Dell Technologies Cloud Platform, we now have you covered across your scalability spectrum, from entry level to cloud scale!
Automated and Validated Lifecycle Management Support for vSphere with Kubernetes Enabled Workload Domain Clusters
How can we support this? How does it work? And what benefits does it provide you, as a VCF on VxRail administrator, in this latest release? The answer is the unique integration that Dell Technologies and VMware have co-engineered between SDDC Manager and VxRail Manager. With these integrations, we have developed a unique set of LCM capabilities that benefit our customers tremendously. You can read more about the details in one of my previous blog posts here.
VCF 4.0 on VxRail 7.0 customers who benefit from the automated full stack LCM integration built into the platform can now include in that integration the vSphere with Kubernetes components that are part of the ESXi hypervisor! Customers are future-proofed: when the need arises, vSphere with Kubernetes enabled clusters can be lifecycle-managed automatically using fully validated VxRail LCM workflows natively integrated into the SDDC Manager management experience. Cool, right?! This means that you can now bring the same streamlined operations capabilities to your modern apps infrastructure that you already enjoy for your traditional apps! The figure below illustrates the LCM process for VCF on VxRail.
VCF on VxRail LCM Integrated Workflow
Introduction of initial support of VCF (SDDC Manager) Public APIs
VMware Cloud Foundation first introduced the concept of SDDC Manager Public APIs back in version 3.8. These APIs have expanded in subsequent releases and have been geared toward VCF deployments on Ready Nodes.
Well, we are happy to say that in this latest release, the VCF on VxRail team is offering initial support for VCF Public APIs. These will include a subset of the various APIs that are applicable to a VCF on VxRail deployment. For a full listing of the available APIs, please refer to the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.
Another new API related feature in this release is the availability of the VMware Cloud Foundation Developer Center. This provides some very handy API references and code samples built right into the SDDC Manager UI. These references are readily accessible and help our customers to better integrate their own systems and other third party systems directly into VMware Cloud Foundation on VxRail. The figure below provides a summary and a sneak peek at what this looks like.
VMware Cloud Foundation Developer Center SDDC Manager UI View
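As a sketch of what driving SDDC Manager programmatically can look like, the snippet below builds (but does not send) two typical requests: a token exchange and a workload domain listing. The hostname is made up, and the exact endpoint paths and payloads are assumptions that should be confirmed against the API Reference Guide for your release:

```python
import json
import urllib.request

# Hypothetical SDDC Manager host for illustration only.
SDDC_MANAGER = "https://sddc-manager.example.local"

def token_request(username: str, password: str) -> urllib.request.Request:
    """Build (but do not send) the POST that exchanges credentials for a token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{SDDC_MANAGER}/v1/tokens",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def domains_request(access_token: str) -> urllib.request.Request:
    """Build the GET that lists workload domains, authorized by a bearer token."""
    return urllib.request.Request(
        f"{SDDC_MANAGER}/v1/domains",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# Sending either request is a single urllib.request.urlopen(req) call.
req = domains_request("example-token")
print(req.full_url)  # https://sddc-manager.example.local/v1/domains
```

The same request objects can be fed to any HTTP tooling your automation pipeline already uses; the point is that everything the SDDC Manager UI does is reachable over plain HTTPS.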
Reduced VxRail Networking Hardware Configuration Requirements
Finally, we end our journey of new features on the hardware front. In this release, we have officially reduced the minimum VxRail node networking hardware configuration required for VCF use cases. With the introduction of vSphere 7.0 in VCF 4.0, admins can now use the vSphere Distributed Switch (VDS) for NSX-T, and the need for a separate N-VDS switch has been deprecated. So why is this important, and how does it lead to VxRail node network hardware configuration improvements?
Well, up until now, VxRail and SDDC management networks have been configured to use the VDS. And this VDS would be configured to use at least two physical NIC ports as uplinks for high availability. When introducing the use of NSX-T on VxRail, an administrator would need to create a separate N-VDS switch for the NSX-T traffic to use. This switch would require its own pair of dedicated uplinks for high availability. Thus, in VCF on VxRail environments in which NSX-T would be used, each VxRail node would require a minimum of four physical NIC ports to support the two different pairs of uplinks for each of the switches. This resulted in a higher infrastructure footprint for both the VxRail nodes and for a customer’s Top of Rack Switch infrastructure because they would need to turn on more ports on the switch to support all of these host connections. This, in turn, would come with a higher cost.
Fast forward to this release -- now we can run NSX-T traffic on the same VDS as the VxRail and SDDC Manager management traffic. And when you can share the same VDS, you can get away with reducing the number of physical uplink ports to provide high availability down to two and reduce the upfront hardware footprint and cost across the board! Win win! The following figure highlights this new feature.
NSX-T Dual pNIC Features
Well, that about sums it all up. Thanks for coming on this journey and learning about the boatload of new features in VCF 4.0 on VxRail 7.0. As always, feel free to check out the additional resources for more information. Until next time, stay safe and stay healthy out there!
Introducing VxRail 7.0.000 with vSphere 7.0 support
Tue, 28 Apr 2020 13:23:14 -0000|
Read Time: 0 minutes
The VxRail team may all be sheltering at our own homes nowadays, but that doesn’t mean we’re just binging on Netflix and Disney Plus content. We have been hard at work to deliver on our continuing commitment to provide our customers a supporting VxRail software bundle within 30 days of any vSphere release. And this time it’s for the highly touted vSphere 7.0! You can find more information about vSphere and vSAN 7.0 in the vSphere and vSAN product areas in VMware Virtual Blocks blogs.
Here’s what you need to know about VxRail 7.0.000:
Consolidated switch configuration for VxRail system traffic managed by VxRail Manager/vCenter and VM traffic by NSX-T Manager
All said, VxRail 7.0.000 is a critical release that further exemplifies our alignment with VMware’s strategy and why VxRail is the platform of choice for vSAN technology and VMware’s Software-Defined Data Center solutions.
Our commitment to a synchronous release for any vSphere release is important for users who want to benefit from the latest VMware innovations or for users who prioritize a secure platform over everything else. A case in point is the vCenter express patch that rolled out a couple of weeks ago to address a critical security vulnerability (you can find out more here). Within eight days of the express patch release, the VxRail team was able to run through all its testing and validation against all supported configurations to deliver a supported software bundle. Our $60M testing lab investment and 100+ team members dedicated to testing and quality assurance make that possible.
If you’re interested in upgrading your clusters to VxRail 7.0.000, please be sure to read the Release Notes.
Daniel Chiu, VxRail Technical Marketing
How does vSphere LCM compare with VxRail LCM?
Fri, 24 Apr 2020 14:35:44 -0000|
Read Time: 0 minutes
VMware’s announcement of vSphere 7.0 this month included a highly anticipated enhanced version of vSphere Update Manager (VUM), which is now called vSphere Lifecycle Manager (vLCM). Beyond the name change, much is intriguing: its capabilities, the customer benefits, and (what I have often been asked) the key differences between vLCM and VxRail lifecycle management. I’ll address these three main areas of interest in this post and explain why VxRail LCM still has the advantage.
At its core, vLCM shifts to a desired state configuration model that allows vSphere administrators to manage clusters by using image profiles for both server hardware and ESXi software. This new approach allows more consistency in the ESXi host image across clusters, and centralizes and simplifies managing the HCI stack. vSphere administrators can now design their own image profile that consists of the ESXi software, and the firmware and drivers for the hardware components in the hosts. They can run a check for compliance against the vSAN Hardware Compatibility List (HCL) for HBA compliance before executing the update with the image. vLCM can check for version drift that identifies differences between what’s installed on ESXi hosts versus the image profile saved on the vCenter Server. To top that off, vLCM can recommend new target versions that are compatible with the image profile. All of these are great features to simplify the operational experience of HCI LCM.
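To make the desired-state and drift ideas concrete, here is a toy sketch; the component names and version strings below are invented for illustration and are not vLCM’s actual image schema:

```python
# Toy desired-state vs. installed-state comparison. The component names and
# version strings are made up; vLCM's real image specification is richer.
desired = {
    "esxi": "7.0.0-15843807",
    "hba_firmware": "16.17.01.00",
    "nic_driver": "4.1.3",
}
installed = {
    "esxi": "7.0.0-15843807",
    "hba_firmware": "16.00.12.00",  # lags behind the desired image
    "nic_driver": "4.1.3",
}

# "Version drift" is simply the set of components whose installed version
# differs from the desired image profile saved on vCenter Server.
drift = {
    name: (installed.get(name), want)
    for name, want in desired.items()
    if installed.get(name) != want
}
print(drift)  # {'hba_firmware': ('16.00.12.00', '16.17.01.00')}
```

Remediation is then a matter of bringing each drifted component back to the desired version, which is exactly the single process flow vLCM orchestrates.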
Let’s dig deeper so you can better appreciate how these capabilities are delivered. vLCM relies on the Cluster Image Management service to allow administrators to build that desired state. At a minimum, the desired state starts with the ESXi image, which requires communication with the VMware Compatibility Guide and vSAN HCL to identify the appropriate version. To build a vCenter Server plugin that includes hardware drivers and firmware on top of the ESXi image, hardware vendors need to provide the files that fill out the rest of the desired image profile. Only when the desired state is complete with both hardware and software can capabilities such as simplified upgrades, compliance checks, version drift detection, and version recommendation benefit administrators the most. At this time, Dell and HPE have provided this addon.
vLCM Image Builder – courtesy of https://blogs.vmware.com/vsphere/2020/03/vsphere-7-features.html
While vLCM’s desired state configuration model provides a strong foundation to drive better operational efficiency in lifecycle management, there are caveats today. I’ll focus on three key differences that will best help you in differentiating vLCM from VxRail LCM:
1. Validated state vs. desired state – Desired state does not mean validated state. VxRail invests significant resources to identify a validated version set of software, drivers, and firmware (what we term a Continuously Validated State), relieving administrators of the burden of defining a desired state, testing it, and validating it. With over 100 dedicated VxRail team members, over $60 million of lab investments, and over 25,000 runtime hours to test each major release, VxRail users can rest assured when it comes to LCM of their VxRail clusters.
vLCM’s model relies heavily on its ecosystem to produce a desired state for the full stack. Hardware vendors need to provide the bits for the drivers and firmware as well as the compliance check for most of the HCI stack. Below is a snippet of the VxRail support matrix for VxRail 4.7.100 to show you some of the hardware components a VxRail Continuously Validated State delivers. Beyond the storage HBA, it is the responsibility of the hardware vendor to perform compliance checks of the remaining hardware on the server. Once compliance checks pass, users are responsible for validating the desired state.
2. Heterogeneous vs. homogeneous hosts – vCenter Server can only have one image profile per cluster. That means clusters need to have hosts that have identical hardware configurations in order to use vLCM. VxRail LCM supports a variety of mixed node configurations for use cases, such as when adding new generation servers into a cluster, or having multiple hardware configurations (that is, different node types) in the same cluster. For vSAN Ready Nodes, if an administrator has mixed node configurations, they still have the option to continue using VUM instead of vLCM -- a choice they have to make after they upgrade their cluster to vSphere 7.0.
3. Support – troubleshooting LCM issues may well involve the hardware vendor addon. Though vLCM’s desired state includes hardware and software, support is still potentially separate. The administrator would need to collect the hardware vendor addon’s logs and contact the hardware vendor separately from VMware. (It is worth noting that both Dell and HPE are VMware-certified support delivery partners. When considering your vSAN Ready Node partner, you may want to be sure that the hardware provider is also capable of delivering support for VMware.) With VxRail, a single-vendor support model by default streamlines all support calls directly to Dell Technical Support. With their in-depth VMware knowledge, Dell Technical Support resolves cases quickly; 95% of support cases are resolved without requiring coordination with VMware support.
In evaluating vLCM, I’ll refer to the LCM value tiers. There are three levels, starting from lower to higher customer value: update orchestration, configuration stability, and decision support:
Explaining the Lifecycle Management value tiers for customers
vLCM has simplified full stack LCM by automating and orchestrating hardware and software upgrades into a single process flow. The next step is configuration stability, which is not just stable code (which every HCI stack should claim), but the confidence customers have in knowing that non-disruptive LCM of their HCI requires minimal work on their part. When VxRail releases a composite bundle, VxRail customers know that it has been extensively tested against a wide range of configurations to assure uptime and performance. For most VxRail customers I’ve talked to, LCM assurance and workload continuity are the benefits they value most.
VMware has done a great job with its initial release of vLCM. vSAN Ready Node customers, especially those who use nodes from vendors like Dell that support the capability (and who can also be a support delivery partner), will certainly benefit from the improvements over VUM. Hopefully, with the differences outlined above, you will have a greater appreciation for where vLCM is in its evolution, and where VxRail continues innovating and keeping its advantage.
Daniel Chiu, VxRail Technical Marketing
SmartFabric Services for VxRail
Fri, 24 Apr 2020 13:50:14 -0000|
Read Time: 0 minutes
HCI networking made easy (again!). Now even more powerful with multi-rack support.
Network infrastructure is a critical component of HCI. In contrast to legacy 3-tier architectures, which typically have a dedicated storage and storage network, HCI architecture is more integrated and simplified. Its design allows you to share the same network infrastructure used for workload-related traffic and inter-cluster communication with the software-defined storage. Reliability and the proper setup of this network infrastructure not only determine the accessibility of the running workloads (from the external network), they also determine the performance and availability of the storage and, as a result, the whole HCI system.
Unfortunately, in most cases, setting up this critical component properly is complex and error-prone. Why? Because of the disconnect between the responsible teams. Typically, configuring a physical network requires expert networking knowledge, which is quite rare among HCI admins. The reverse is also true: network admins typically have limited knowledge of HCI systems, because this is not their area of expertise and responsibility.
The situation gets even more challenging with increasingly complex deployments that go beyond just a pair of ToR switches and beyond a single-rack system. This scenario is becoming more common as HCI becomes a mainstream architecture within the data center, thanks to its maturity, simplicity, and recognition as a perfect infrastructure foundation for digital transformation and VDI/End User Computing (EUC) initiatives. You need much more computing power and storage capacity to handle increased workload requirements.
At the same time, with the broader adoption of HCI, customers are looking for ways to connect their existing infrastructure to the same fabric, in order to simplify the migration process to the new architecture or to leverage dedicated external NAS systems, such as Isilon, to store files and application or user data.
A brief history of SmartFabric Services for VxRail
Here at Dell Technologies we recognize these challenges. That’s why we introduced SmartFabric Services (SFS) for VxRail. SFS for VxRail is built into the Dell EMC SmartFabric OS10 Enterprise Edition software that runs on the Dell EMC PowerSwitch networking portfolio. We announced the first version of SFS for VxRail at VMworld 2018. With this functionality, customers can quickly and easily deploy and automate data center fabrics for VxRail, while at the same time reducing the risk of misconfiguration.
Since that time, Dell has expanded the capabilities of SFS for VxRail. The initial release of SFS for VxRail allowed VxRail to fully configure the switch fabric to support the VxRail cluster (as part of the VxRail 4.7.0 release back in Dec 2018). The following release included automated discovery of nodes added to a VxRail cluster (as part of VxRail 4.7.100 in Jan 2019).
The new solution
This week we are excited to introduce a major new release of SFS for VxRail as a part of Dell EMC SmartFabric OS 10.5.0.5 and VxRail 4.7.410.
So, what are the main enhancements?
Figure 1. Comparison of a multi-rack VxRail deployment, without and with SFS
In order to take advantage of this solution, you need the following components:
How does the multi-rack feature work?
The multi-rack feature is enabled through the Hardware VTEP functionality in Dell EMC PowerSwitches and the automated creation of a VxLAN tunnel network across the switch fabric in multiple racks.
VxLAN (Virtual Extensible Local Area Network) is an overlay technology that allows you to extend a Layer 2 (L2) “overlay” network over a Layer 3 (L3) “underlay” network. It does this by adding a VxLAN header to the original L2 Ethernet frame and placing the encapsulated frame into an IP/UDP packet to be transported across the L3 underlay network.
By default, all VxRail networks are configured as L2. With the configuration of this VxLAN tunnel, the L2 network is “stretched” across multiple racks with VxRail nodes. This allows for the scalability of L3 networks with the VM mobility benefits of an L2 network. For example, the nodes in a VxRail cluster can reside on any rack within the SmartFabric network, and VMs can be migrated within the same VxRail cluster to any other node without manual network configuration.
Figure 2. Overview of the VLAN and VxLAN VxRail traffic with SFS for multi-rack VxRail
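To make the encapsulation concrete, here is a minimal sketch of the VxLAN header as defined in RFC 7348. The helper names and the dummy frame are mine, and SFS builds these tunnels for you automatically, so this is purely illustrative:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VxLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VxLAN header that is prepended to the original L2 frame.

    RFC 7348 layout: 8 flag bits (only the I bit set), 24 reserved bits,
    a 24-bit VxLAN Network Identifier (VNI), then 8 more reserved bits.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag: the VNI field is valid
    return struct.pack("!I", flags << 24) + struct.pack("!I", vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # The VTEP wraps the original Ethernet frame; the result then travels as
    # the payload of an ordinary IP/UDP packet across the L3 underlay.
    return vxlan_header(vni) + inner_frame

# e.g. an original frame carried on overlay segment (VNI) 5001:
packet_payload = encapsulate(b"\x00" * 60, 5001)  # 60-byte dummy frame
```

The 24-bit VNI is what lets a single fabric carry many isolated L2 segments, which is how the same VxRail networks can appear in every rack.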
This new functionality is enabled by the new L3 Fabric personality, available as of OS 10.5.0.5, that automates configuration of a leaf-spine fabric in a single-rack or multi-rack fabric and supports both L2 and L3 upstream connectivity. What is this fabric personality? An SFS personality is a setting that enables the functionality and supported configuration of the switch fabric.
To see how simple it is to configure the fabric and to deploy a VxRail multi-rack cluster with SFS, please see the following demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks.
Single pane for management and “day 2” operations
SFS not only automates the initial deployment (“day 1” fabric setup), but greatly simplifies the ongoing management and operations on the fabric. This is done in a familiar interface for VxRail / vSphere admins – vCenter, through the OMNI plugin, distributed as a virtual appliance.
It’s powerful! From this “VMware admin-friendly” interface you can:
Figure 3. Sample view from the OMNI vCenter plugin showing a fabric topology
To see how simple it is to deploy the OMNI plugin and to get familiar with some of the options available from its interface, please see the following demo: Dell EMC OpenManage Network Integration for VMware vCenter.
OMNI also monitors the VMware virtual networks for changes (such as to portgroups in vSS and vDS VMware virtual switches) and as necessary, reconfigures the underlying physical fabric.
Figure 4. OMNI – monitor virtual and physical network configuration from a single view
Thanks to OMNI, managing the physical network for VxRail becomes much simpler, less error-prone, and can be done by the VxRail admin directly from a familiar management interface, without having to log into the console of the physical switches that are part of the fabric.
This new SFS release is very flexible and supports multiple fabric topologies. Due to the limited size of this post, I will only list them by name:
For detailed information on these topologies, please consult Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide.
Note that SFS for VxRail does not currently support NSX-T or VCF on VxRail.
This latest version of SmartFabric Services for VxRail takes HCI network automation to the next level, solving the much bigger network complexity problem of a multi-rack environment compared to the much simpler single-rack, dual-switch configuration. With SFS, customers can:
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
Join the VxRail Xpert Crew (open to Partners!)
Wed, 22 Apr 2020 19:40:51 -0000|
Read Time: 0 minutes
VxRail is #1 in hyperconverged systems, and the fastest growing HCI product in the industry today. Join a thriving community of >800 VxRail global experts who, like you, are interested in the most relevant, critical, and timely information for Sales awareness.
What is this? A community for VxRail Xperts who are advocates in their selling circles, regardless of organizational alignment or position in their company. Although this forum is not exclusive to SEs, our primary focus is on pre-sales efforts and initiatives. Our mission is to provide timely technical information, knowledge, and tools to cultivate consistency in architecting, sizing, and configuring any VxRail solution. This is not a forum for post-sales or customer support.
Who is eligible? Any Dell Technologies employee or Dell EMC partner who passes the VxRail Xpert Crew assessment exam. VMware partners are welcome to join if they are also Dell EMC partners. Due to the NDA nature of this community, customers are not permitted.
Why should I join?
There are many reasons to pass the assessment and join the VxRail Xpert crew, including:
What are my responsibilities as a member?
How do I join?
How do I access the exam?
What should I study? (some items are gated assets requiring login)
Focused Reading & Learning
VxRail Technical Presentation
VxRail Network Planning Guide
VxRail Online Sizing Tool
VxRail Performance and Sizing Guide
VxRail Administration Guide
VxRail Technical Reference Deck
 IDC WW Quarterly Converged Systems Tracker, Vendor Revenue (US$M) Q4 2019, March 19, 2020
Built to Scale with VCF on VxRail and Oracle 19C RAC
Fri, 17 Apr 2020 05:21:03 -0000|
Read Time: 0 minutes
The newly released Oracle RAC on Dell EMC VxRail with VMware Cloud Foundation (VCF) Reference Architecture (RA) guides customers in building an efficient, high-performing hyperconverged infrastructure to run their OLTP workloads. Scalability was the primary goal of this RA, and performance was highlighted as the numbers were generated. As Oracle RAC scaled, throughput increased to over 1 million TPM, while read IOPS showed sub-millisecond (0.64-0.70 ms) performance. The performance achieved with VxRail is a great added benefit to the core design points for Oracle RAC environments, whose primary focus is the availability and resiliency of the solution. Links to the reference architecture (“Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail”) and a solution brief (“Deploying Oracle RAC on Dell EMC VxRail”) are available here and at the end of this post.
The RAC solution with VxRail scaled out easily — you simply add a new node to an existing VxRail cluster. VxRail Manager provides a simple path that automatically discovers and non-disruptively adds each new node. VMware vSphere and vSAN can then rebalance resources and workloads across the cluster, creating a single resource pool for compute and storage.
The VxRail clusters were built with eight P570F nodes; four for the VCF Management Domain and four for the Oracle RAC Workload Domain.
Specifics on the build, including the hardware and software used, are detailed within the reference architecture. It also provides information on the testing, tools used, and results.
This graph shows the performance of TPM and Response Time when increasing the RAC node count from one to four. Notice that the average TPM increased with near-linear trendline (shown by the dotted line) as additional RAC nodes were added, while total application response time was maintained at 20 milliseconds or less.
Note: The TPM near-linear trendline is shown in the above graph (blue dotted line). As additional RAC nodes are added, performance increases along with RAC high availability. Perfectly linear TPM growth (an equal performance gain per node) is not achieved because RAC nodes depend on concurrency of access, instance, network, and other factors. See the RA for additional performance-related information.
Different-sized databases kept TPM at the same level (about one million transactions) while keeping application response time at 20 ms or below. As the database size increased, physical read and write IOPS increased near-linearly, as reported by Oracle AWR. This indicates that more read and write I/O requests were served by the backend storage under the same configuration. Overall, even at a peak of up to 100,000 client IOPS, vSAN still provided excellent storage performance: sub-millisecond read latency and single-digit-millisecond write latency.
Sidebar about Oracle licensing: While not mentioned in the RA, VxRail offers several facilities to control Oracle licensing and, in some cases, eliminate the need for costly licensed options. These include a broad choice of CPU core configurations, some with fewer cores and higher processing power per core, to maximize the customer’s Oracle workload performance while minimizing license requirements. Costly add-on options such as encryption and compression can be provided via vSAN and are handled by VxRail. Further, vSphere hypervisor features like DRS allow Oracle VMs to be confined to licensed nodes only.
You can speak to a Dell Technologies’ Oracle specialist for more details on how to control Oracle licensing costs for VMware environments.
Oracle Database 19c on VxRail offers customers performance, scalability, reliability, and security for all their operational and analytical workloads. The Oracle RAC on VxRail test environment was first created to highlight the architecture. It also had the added benefit of showcasing the great performance VxRail delivers. If you need more performance, it is simple to adjust the configuration by adding more VxRail nodes to the cluster. If you need more storage, add more drives to meet the scale required of the database. Dell Technologies has Oracle specialists to ensure the VxRail cluster will meet the scale and performance outcomes desired for Oracle environments.
Reference Architecture - Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail
Solution Brief - Deploying Oracle RAC on Dell EMC VxRail
Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing
Special thank you to David Glynn for assisting with the reviews
VMware Cloud Foundation on VxRail Integration Features Series: Part 1—Full Stack Automated LCM
Fri, 03 Apr 2020 21:39:05 -0000|
Read Time: 0 minutes
It’s no surprise that VMware Cloud Foundation on VxRail features numerous unique integrations with many VCF components, such as SDDC Manager and even VMware Cloud Builder. These integrations are the result of the co-engineering efforts by Dell Technologies and VMware with every release of VCF on VxRail. The following figure highlights some of the components that are part of this integration effort.
These integrations of VCF on VxRail offer customers a unique set of features in various categories, from security to infrastructure deployment and expansion, to deep monitoring and visibility that have all been developed to drive infrastructure operations.
Where do these integrations exist? The following figure outlines how they impact a customer’s Day 0 to Day 2 operations experience with VCF on VxRail.
In this series I will showcase some of these unique integration features, including some of the more nuanced ones. But for this initial post, I want to highlight one of the most popular and differentiated customer benefits that emerged from this integration work: full stack automated lifecycle management (LCM).
VxRail already delivers a differentiated LCM customer experience through its Continuously Validated States capabilities for the entire VxRail hardware and software stack. (As you may know, the VxRail stack includes the hardware and firmware of compute, network, and storage components, along with VMware ESXi, VMware vSAN, and the Dell EMC VxRail HCI System software itself, which includes VxRail Manager.)
With VCF on VxRail, VxRail Manager is integrated natively into the SDDC Manager LCM management framework through the SDDC Manager UI, and through VxRail Manager APIs for LCM by SDDC Manager when executing LCM workflows. This integration allows SDDC Manager to leverage all of the LCM capabilities that natively exist in VxRail right out of the box. SDDC Manager can then execute SDDC software LCM AND drive native VxRail HCI system LCM. It does this by leveraging native VxRail Manager APIs and the continuously validated state update packages for both the VxRail software and hardware components.
All of this happens seamlessly behind the scenes when administrators use the SDDC Manager UI to kick off native SDDC Manager workflows. This means that customers don’t have to leave the SDDC Manager UI management experience at all for full stack SDDC software and VxRail HCI infrastructure LCM operations. How cool is that?! The following figure illustrates the concepts behind this effective relationship.
For more details about how this LCM experience works, check out my lightboard talk about it!
Also, if you want to get some hands on experience in walking through performing LCM operations for the full VCF on VxRail stack, check out the VCF on VxRail Interactive Demo to see this and some of the other unique integrations!
I am already hard at work writing up the next blog post in the series. Check back soon to learn more.
Twitter - @vwhippersnapper
Take VxRail automation to the next level by leveraging APIs
Mon, 30 Mar 2020 15:20:04 -0000|
Read Time: 0 minutes
VxRail Manager, available as part of VxRail HCI System Software, drastically simplifies the lifecycle management and operations of a single VxRail cluster. With a “single click” user experience available directly in the vCenter interface, you can perform a full upgrade of all software components of the cluster, including not only vSphere and vSAN but also complete server hardware firmware and drivers, such as NICs, disk controller(s), drives, and so on. That’s a simplified experience you won’t find in any other VMware-based HCI solution.
But what if you need to manage not a single cluster, but a farm of dozens or hundreds of VxRail clusters? Or maybe you’re using an orchestration tool to holistically automate your IT infrastructure and processes? Would you still need to log in manually as an operator to each of these clusters separately and click a button to shut down a cluster, collect log or health data, or perform LCM operations?
This is where VxRail REST APIs come in handy.
The VxRail API Solution
REST APIs are very important for customers who would like to programmatically automate operations of their VxRail-based IT environment and integrate with external configuration management or cloud management tools.
In VxRail HCI System Software 4.7.300 we’ve introduced very significant improvements in this space:
The easiest way to start using these APIs is through the web browser, thanks to the Swagger integration. Swagger is an open-source toolkit that simplifies OpenAPI development and can be launched from within the VxRail Manager virtual appliance. To access the documentation, simply open the following URL in the web browser: https://<VxM_IP>/rest/vxm/api-doc.html (where <VxM_IP> stands for the IP address of the VxRail Manager) and you should see a page similar to the one shown below:
Figure 1. Sample view into VxRail REST APIs via Swagger
This interface is aimed at customers who leverage orchestration or configuration management tools – they can use it to accelerate integration of VxRail clusters into their automation workflows. The VxRail API is complementary to the APIs offered by VMware.
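For scripted (rather than browser-based) access, a minimal Python sketch of building an authenticated call against VxRail Manager might look like the following. The /v1/system path and basic-auth scheme are assumptions for illustration; consult the Swagger page on your own appliance for the actual endpoints and authentication details:

```python
import base64
import urllib.request

# Illustrative sketch of preparing a GET request against a VxRail Manager
# REST endpoint. The /v1/system path is an assumption for illustration;
# browse https://<VxM_IP>/rest/vxm/api-doc.html for the real endpoint list.
def vxm_get(vxm_ip: str, path: str, user: str, password: str) -> urllib.request.Request:
    """Build an authenticated GET request against VxRail Manager."""
    url = f"https://{vxm_ip}/rest/vxm{path}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    # Pass the returned request to urllib.request.urlopen() to execute it.
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

req = vxm_get("192.0.2.10", "/v1/system", "administrator@vsphere.local", "secret")
print(req.full_url)
```

From here, the same pattern scales naturally to a loop over dozens or hundreds of clusters, which is exactly the fleet-management scenario described above.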
Would you like to see this in action? Watch the first part of the recorded demo available in the additional resources section.
PowerShell integration for Windows environments
Customers who prefer scripting in a Windows environment using Microsoft PowerShell or VMware PowerCLI will benefit from the VxRail.API PowerShell Modules Package. It simplifies consumption of the VxRail REST APIs from PowerShell and focuses on the physical infrastructure layer, while management of VMware vSphere and the solutions layered on top (such as the Software-Defined Data Center, Horizon, and so on) can be scripted using the similar interface available in VMware PowerCLI.
Figure 2. VxRail.API PowerShell Modules Package
To see that in action, check the second part of the recorded demo available in the additional resources section.
Bringing it all together
VxRail REST APIs further simplify IT operations, fostering operational freedom and a reduction in OpEx for large enterprises, service providers, and midsize enterprises. Integrations with Swagger and PowerShell make them much more convenient to use. This is an area of VxRail HCI System Software that is rapidly gaining new capabilities, so be sure to check the latest advancements with every new VxRail release.
Demo: VxRail API - Overview
Author: Karol Boguniewicz, Sr Principal Engineer, VxRail Tech Marketing
Latest enhancements to VxRail ACE
Mon, 30 Mar 2020 11:26:20 -0000|
Read Time: 0 minutes
One of the key areas of focus for VxRail ACE (Analytical Consulting Engine) is active multi-cluster management. With ACE, users have a central point to manage multiple VxRail clusters more conveniently. Performing system updates across multiple VxRail clusters is one activity where ACE greatly benefits users: it is a time-consuming operation that requires careful planning and coordination. In the initial release of ACE, users could facilitate the transfer of update bundles to all their clusters with ACE acting as the single control point, instead of logging onto every vCenter console to do the same activity. That can save quite a bit of time.
In the latest ACE update, users can now run on-demand health checks prior to upgrading to find out if their cluster is ready for a system update. By identifying which clusters are ready and which ones are not, users can more effectively schedule their maintenance windows in advance. It allows them to see which clusters require troubleshooting and which ones can start the update process. In ACE, on-demand cluster health checks are referred to as a Pre-Check.
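To make the planning idea concrete, here is a small sketch (not the ACE interface itself) of how Pre-Check results could be used to split a fleet into clusters that are ready for the update and clusters that need troubleshooting first. The cluster names and status values are hypothetical:

```python
# Illustrative sketch (not the ACE API) of pre-check-driven maintenance
# planning: clusters that pass are queued for the update, the rest are
# flagged for troubleshooting first. All data below is made up.
def plan_updates(precheck_results: dict) -> tuple:
    """Split clusters into ready-to-update and needs-troubleshooting lists."""
    ready = sorted(c for c, status in precheck_results.items() if status == "PASSED")
    blocked = sorted(c for c, status in precheck_results.items() if status != "PASSED")
    return ready, blocked

ready, blocked = plan_updates({
    "edge-cluster-01": "PASSED",
    "edge-cluster-02": "FAILED",   # e.g. an unhealthy disk found by the pre-check
    "core-cluster-01": "PASSED",
})
print(ready)    # clusters safe to schedule for the maintenance window
print(blocked)  # clusters to troubleshoot before scheduling
```

Knowing the ready/blocked split in advance is what lets administrators book maintenance windows with confidence instead of discovering failures mid-update.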
For more information about this feature, you can check out this video: https://vxrail.is/aceupdates
Another feature that came out with this update is identification of the cluster deployment type. ACE will now display whether a cluster is a standard VxRail cluster, part of a VMware Validated Design deployment, part of a VMware Cloud Foundation on VxRail deployment (as used in Dell Technologies Data Center-as-a-Service), a 2-node vSAN cluster, or in a stretched cluster configuration.
Daniel Chiu, VxRail Technical Marketing
VCF on VxRail – More business-critical workloads welcome!
Mon, 30 Mar 2020 15:11:17 -0000|
Read Time: 0 minutes
Today, Dell EMC has made the newest VCF 3.9.1 on VxRail 4.7.410 release available for download for existing VCF on VxRail customers with plans for availability for new customers coming on February 19, 2020. Let’s dive into what’s new in this latest version.
This release continues the co-engineering innovation efforts of Dell EMC and VMware to provide our joint customers with better outcomes. This time, the focus is security. VxRail password management for VxRail Manager accounts such as root and mystic, as well as ESXi accounts, has been integrated into the SDDC Manager UI password management framework. Now the components of the full SDDC and HCI infrastructure stack can be centrally managed as one complete turnkey platform using your native VCF management tool, SDDC Manager. Figure 1 illustrates what this looks like.
Building off the support for Layer 3 stretched clusters introduced in VCF 3.9 on VxRail 4.7.300 using manual guidance, VCF 3.9.1 on VxRail 4.7.410 now supports the ability to automate the configuration of Layer 3 VxRail stretched clusters for both NSX-V and NSX-T backed VxRail VI Workload Domains. This is accomplished using CLI in the VCF SOS Utility.
For new installations, this release now provides the ability to extend a common management and security model across two VCF on VxRail instance deployments by sharing a common Single Sign On (SSO) Domain between the PSCs of multiple VMware Cloud Foundation instances so that the management and the VxRail VI Workload Domains are visible in each of the instances. This is known as a Federated SSO Domain.
What does this mean exactly? Referring to Figure 2, this translates into the ability for Site B to join the SSO instance of Site A. This allows VCF to further align with the VMware Validated Design (VVD) guidance to share SSO domains where it makes sense, given the Enhanced Linked Mode 150 ms RTT limitation.
This would leverage a recent option made available in the VxRail first run to connect the VxRail cluster to an existing SSO Domain (PSCs). So, when you stand up the VxRail cluster for the second MGMT Domain that is affiliated with the second VCF instance deployed in Site B, you would connect it to the SSO (PSCs) that was created by the first MGMT domain of the VCF instance in Site A.
One of the new features in the 3.9.1 release of VMware Cloud Foundation (VCF) is use of Application Virtual Networks (AVNs) to completely abstract the hardware and realize the true value from a software-defined cloud computing model. Read more about it on VMware’s blog post here. Key note on this feature: It is automatically set up for new VCF 3.9.1 installations. Customers who are upgrading from a previous version of VCF would need to engage with the VMware Professional Services Organization (PSO) to configure AVN at this time. Figure 3 shows the message existing customers will see when attempting the upgrade.
VxRail 4.7.410 brings a slew of new hardware platforms and hardware configuration enhancements that expand your ability to support even more business-critical applications.
There you have it! We hope you find these latest features beneficial. Until next time…
Twitter - @vwhippersnapper
Announcing all-new VxRail Management Pack for vRealize Operations
Mon, 30 Mar 2020 15:14:28 -0000|
Read Time: 0 minutes
As the new year rolls in, the VxRail team is already warming up. Right as we settle back in after the holiday festivities, we’re onto another release announcement. This time, it’s an entirely new software tool: VxRail Management Pack for vRealize Operations.
For those not familiar with vRealize Operations, it’s VMware’s operations management software tool that helps customers maintain and tune their virtual application infrastructure with the aid of artificial intelligence and machine learning. It connects to vCenter Server and collects metrics, events, configurations, and logs about the vSAN clusters and the virtual workloads running on them. vRealize Operations also understands the topology and object relationships of the virtual application infrastructure. With all these features, it is capable of driving intelligent remediation, ensuring configuration compliance, monitoring capacity and cost optimization, and maintaining performance optimization. It’s an outcome-based tool designed to self-drive according to user-defined intents powered by its AI/ML engine.
The VxRail Management Pack is an additional free-of-charge software pack that can be installed onto vRealize Operations to provide VxRail cluster awareness. Without this Management Pack, vRealize Operations can still detect vSAN clusters but cannot discern that they are VxRail clusters. The Management Pack consists of an adapter that collects 65 distinct VxRail events, analytics logic specific to VxRail, and three custom dashboards. These VxRail events are translated into VxRail alerts on vRealize Operations so that users have helpful information to understand health issues along with recommended course of resolution. With custom dashboards, users can easily go to VxRail-specific views to troubleshoot issues and make use of existing vRealize Operations capabilities in the context of VxRail clusters.
The VxRail Management Pack is not for every VxRail user because it requires a vRealize Operations Advanced or Enterprise license. For enterprise customers or customers who have already invested in VMware’s vRealize Operations suite, it can be an easy add-on to help manage your VxRail clusters.
To download the VxRail Management Pack, go to VMware Solution Exchange: https://marketplace.vmware.com/vsx/.
Author: Daniel Chiu, Dell EMC VxRail Technical Marketing
VxRail drives the hyperconverged evolution with the release of 4.7.410
Mon, 30 Mar 2020 15:16:22 -0000|
Read Time: 0 minutes
January 6, 2020
VxRail recently released a new version of our software, 4.7.410, which we announced at VMworld EMEA in November. This release brings cutting-edge enhancements for networking options and edge deployments, support for the Mellanox 100GbE PCIe NIC, and two new drive types.
Improvements and newly developed functionality for VxRail 2-node implementations provide a more user-friendly experience, now supporting both direct connect and new switched connectivity options. VxRail 2-node is increasingly popular for edge deployments, and Dell EMC continues to bolster features and functionality in support of our edge and 2-node customer base.
This release also includes improvements to VxRail networking capabilities that more closely align VxRail with VMware’s best practices for NIC port maximums and network teaming policies. VxRail now handles network traffic more efficiently thanks to support for two additional load balancing policies. These new policies determine how to route network traffic in the event of bottlenecks, and the result is increased throughput on a NIC port. In addition, VxRail now supports the same three routing/teaming policies as VMware.
Dell EMC also announced support for Fibre Channel HBAs in mid-summer 2019, and with that, the 4.7.410 release broadens capabilities by supporting external storage integration. VxRail recognizes that an external array is connected and makes it available to vCenter for use as secondary storage. The storage is now automatically recognized during day 1 installation operations, or on day 2 when external storage is added to expand the storage capacity for VxRail.
In addition to the 4.7.410 release, VxRail added a new set of hardware choices and options, including the Mellanox ConnectX-5 100GbE NIC cards benefiting a variety of use cases such as media broadcasting, a larger 8TB 2.5” 7200 rpm HDD commonly used for video surveillance, and a 7.6TB “Value SAS” SSD. Value SAS drives offer attractive pricing (similar to SATA) with performance slightly below other SAS drives and are great for larger read-friendly workloads. And finally, there’s big news for the VxRail E Series platforms (E560/E560F/E560N), which all now support the NVIDIA T4 GPU. This is the first time VxRail supports GPU cards outside of the V Series. The NVIDIA T4 GPU is optimized for high-performance workloads and suitable for running a combination of entry-level machine learning, VDI, and data inferencing.
These exciting new features and enhancements in the 4.7.410 release enable customers to expand the breadth of business workloads across all VxRail implementations.
VxRail 4.7.x Release Notes (requires log-in)
By: KJ Bedard - VxRail Technical Marketing Engineer
New all-NVMe VxRail platforms deliver highest levels of performance
Mon, 30 Mar 2020 15:24:55 -0000|
Read Time: 0 minutes
December 11, 2019
If you have not been tuned into the VxRail announcements at VMworld Barcelona last month, this is news to you. VxRail is adding more performance punch to the family with two new all-NVMe platforms. The VxRail E Series 560N and P Series 580N, with the 2nd Generation Intel® Xeon® Scalable Processors, offer increased performance while enabling customers to take advantage of decreasing NVMe costs.
Balancing workload and budget requirements, the dual-socket E560N provides a cost-effective, space-efficient 1U platform for read-intensive and other complex workloads. Configured with up to 32TB of NVMe capacity, the E560N is the first all-NVMe 1U VxRail platform. Based on the PowerEdge R640, the E560N can run a mix of workloads including data warehouses, ecommerce, databases, and high-performance computing. With support for NVIDIA T4 GPUs, the E560N is also equipped to run a wide range of modern cloud-based applications, including machine learning, deep learning, and virtual desktop workloads.
Built for memory-intensive high-compute workloads, the new P580N is the first quad-socket and also the first all-NVMe 2U VxRail platform. Based on the PowerEdge R840, the P580N can be configured with up to 80TB of NVMe capacity. This platform is ideal for in-memory databases and has been certified by SAP for SAP HANA. The P580N provides 2x the CPU compared to the P570/F and offers 25% more processing potential than virtual storage appliance (VSA) based 4-socket HCI platforms, which require a dedicated socket to run the VSA.
The completion of the SAP HANA certification for the P580N, which coincides with the P580N’s general availability, demonstrates the ongoing commitment to position VxRail as the HCI platform of choice for SAP HANA solutions. The P580N provides even more memory and processing power than the SAP HANA certified P570F platform. An updated Validation Guide for SAP HANA on VxRail will be available in early January on the Dell EMC SAP solutions landing page for VxRail.
Innovation with Cloud Foundation on VxRail
Mon, 30 Mar 2020 15:24:55 -0000|
Read Time: 0 minutes
As you may already know, VxRail is the HCI foundation for the Dell Technologies Cloud Platform. With the new Dell Technologies On Demand offerings we combine the benefits of bringing automation and financial models similar to public cloud to on-premises environments. VMware Cloud Foundation on Dell EMC VxRail allows customers to manage all cloud operations through a familiar set of tools, offering a consistent experience, with a single vendor support relationship from Dell EMC.
Joint engineering between VMware and Dell EMC continuously improves VMware Cloud Foundation on VxRail. VxRail was the first hyperconverged system fully integrated with VMware Cloud Foundation SDDC Manager and remains the only jointly engineered HCI system with deep VMware Cloud Foundation integration. VCF on VxRail delivers unique integrations with Cloud Foundation that offer a seamless, automated upgrade experience. Customers adopting VxRail as the HCI foundation for the Dell Technologies Cloud Platform will realize greater flexibility and simplicity when managing VMware Cloud Foundation on VxRail at scale. These benefits are further illustrated by the new features available in the latest version, VMware Cloud Foundation 3.9 on VxRail 4.7.300.
The first feature expands the ability to support global management and visibility across large, complex multi-region private and hybrid clouds. This is delivered through global multi-instance management of large-scale VCF 3.9 on VxRail 4.7.300 deployments with a single pane of glass (see figure below). Customers who have many VCF on VxRail instances deployed throughout their environment now have a common dashboard view into all of them to further simplify operations and gain insights.
The new features don’t stop there: VMware Cloud Foundation 3.9 on VxRail 4.7.300 also provides greater networking flexibility. It adds support for Dell EMC VxRail layer 3 networking stretched cluster configurations, allowing customers to further scale VCF on VxRail environments for more highly available use cases in order to support mission-critical workloads. The layer 3 support applies to both NSX-V and NSX-T backed workload domain clusters.
Another area of new network flexibility features is the ability to select the host physical network adapters (pNICs) you want to assign for NSX-T traffic on your VxRail workload domain cluster (see figure below). Users can now select the pNICs used for the NSX-T Virtual Distributed Switch (N-VDS) from the SDDC Manager UI in the Add VxRail Cluster workflow. This allows you the flexibility to choose from a set of VxRail host physical network configurations that best aligns to your desired NSX-T configuration business requirements. Do you want to deploy your VxRail clusters using the base network daughter card (NDC) ports on each VxRail host for all standard traffic but use separate PCIe NIC ports for NSX-T traffic? Go for it! Do you want to use 10GbE connections for standard traffic and 25GbE for NSX-T traffic? We got you there too! Host network configuration flexibility is now in your hands and is only available with VCF on VxRail.
Finally, no good VCF on VxRail conversation can go by without talking about lifecycle management. VMware Cloud Foundation 3.9 on VxRail 4.7.300 also delivers simplicity and flexibility for managing at scale, with greater control over workload domain upgrades. Customers now have the flexibility to select which clusters within a multi-cluster workload domain to upgrade, in order to better align with business requirements and maintenance windows. Upgrading VCF on VxRail clusters is further simplified with VxRail Smart LCM (introduced in the 4.7.300 release), which determines exactly which firmware components need to be updated on each cluster and pre-stages each node in a cluster, saving up to 20% of upgrade time (see next figure). The scheduling of these cluster upgrades is also supported. With VCF 3.9 and VxRail Smart LCM, you can streamline the upgrade process across your hybrid cloud.
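The idea behind this targeted approach can be sketched in a few lines: compare each node's installed component versions against the target bundle and act only on what differs. This is an illustrative analogy, not Smart LCM's actual implementation, and the component names and versions below are made up:

```python
# Illustrative analogy for targeted firmware updates (not Smart LCM's real
# implementation): diff installed versions against the target bundle so only
# out-of-date components get flashed. All names/versions are hypothetical.
def firmware_delta(installed: dict, target: dict) -> dict:
    """Return only the components whose installed version differs from the target."""
    return {comp: ver for comp, ver in target.items() if installed.get(comp) != ver}

node_installed = {"BIOS": "2.4.8", "NIC": "18.8.9", "BOSS": "2.5.13"}
bundle_target = {"BIOS": "2.5.4", "NIC": "18.8.9", "BOSS": "2.6.13"}

print(firmware_delta(node_installed, bundle_target))  # only BIOS and BOSS differ
```

Skipping components that are already current is one of the reasons the per-node work, and therefore the maintenance window, shrinks.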
As you can see the innovation continues with Cloud Foundation on VxRail.
Analytical Consulting Engine (ACE)
Mon, 30 Mar 2020 15:27:16 -0000|
Read Time: 0 minutes
VxRail ACE (Analytical Consulting Engine), the new Artificial Intelligence infused component of the VxRail HCI System Software, was announced just a few months ago at Dell Technologies World and has since been in global early access. Over 500 customers leveraged the early access program for ACE, allowing developers to collect feedback and implement enhancements prior to general availability of the product. It is with great excitement that VxRail ACE is now generally available to all VxRail customers. By incorporating continuous integration/continuous delivery (CI/CD) practices on the Pivotal Platform (also known as Pivotal Cloud Foundry) container-based framework, the Dell EMC developers behind ACE have made rapid iterations to improve the offering, and customer demand has driven new features added to the roadmap. ACE is holding true to its design principles and commitment to deliver adaptive, frequent releases.
Figure 1 ACE Design Principles and Goals
VxRail ACE is a centralized data collection and analytics platform that uses machine learning capabilities to perform capacity forecasting and self-optimization, helping you keep your HCI stack operating at peak performance and ready for future workloads. In addition to some of the initial features available during early access, ACE now provides new functionality for intelligent upgrades of multiple clusters (see image below). You can now see the current software version of each cluster along with all available upgrade versions, and ACE will let you select the desired version for each VxRail cluster. You can now manage at scale to standardize across all sites and clusters, with the ability to customize by cluster. This becomes advantageous when some sites or clusters need to remain at a specific version of VxRail software.
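ACE's forecasting models are internal to the product, but the general idea of trend-based capacity forecasting can be sketched with a simple least-squares fit. This toy example is purely illustrative and is not how ACE computes its forecasts:

```python
# Toy illustration of trend-based capacity forecasting (not ACE's model):
# fit a least-squares line to daily usage samples and project how many more
# days until a capacity threshold is crossed.
def days_until_full(used_pct: list, threshold: float = 90.0) -> float:
    """Fit a line to daily usage samples; return days until threshold is hit."""
    n = len(used_pct)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(used_pct) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, used_pct)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return float("inf")  # usage flat or shrinking: no projected fill date
    intercept = y_mean - slope * x_mean
    return (threshold - intercept) / slope - (n - 1)

# Usage grows ~1% per day from 60%: 27 more days to reach the 90% threshold.
print(round(days_until_full([60.0, 61.0, 62.0, 63.0])))
```

A forecast like this is what turns raw telemetry into an actionable signal, such as "add capacity to this cluster within the next month."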
If you haven’t seen ACE in action yet, check out the additional links and videos below that showcase the features described in this post. For our 6,000+ VxRail customers, please visit our support site and Admin Guide to learn how to access ACE.