Thu, 29 Aug 2024 12:23:21 -0000
“Even more progress” sums up this latest Cloud Foundation on VxRail release—more progress in LCM enhancements, more progress with new hardware platform options, and more progress toward driving a simpler cloud operations experience for VCF on VxRail customers. This new release is based on the latest software bill of materials (BOM) featuring vSphere 8.0 U3, vCenter 8.0 U3a, vSAN 8.0 U3, and NSX 4.2.0. Read on for more on the new hardware platforms, significant lifecycle management enhancements, and networking & security updates in this release.
The latest release introduces support for new VxRail hardware platforms and enhancements to existing ones. The AMD-based VxRail 16G VE-6615 and VP-7625 platforms are among the new additions, featuring the AMD EPYC Gen 4 processor. The VE-6615, a compact 1U platform, is tailored for general-purpose workloads, while the VP-7625, a scalable 2U platform, is designed for optimized performance. Additionally, the release introduces the Intel®-based storage-optimized platform VS-760.
Furthermore, VCF 5.2 on VxRail 8.0.300 unveils the much-anticipated support for the VD-4000 hardware platform. The VD-4000 offers a cutting-edge VxRail node option specifically crafted for edge use cases. This ruggedized platform features a purpose-built smaller form factor, extending the advantages of VxRail to locations previously inaccessible due to challenging conditions, limited bandwidth, and space constraints. Supported in 3+ node cluster configurations within the workload domain, the VD-4000 integrates seamlessly with both vSAN OSA and ESA, providing a robust edge computing solution.
VCF 5.2 introduces several lifecycle management enhancements. It offers more flexible updates, better scale for operations, and faster deployment and patching.
Notably, all upgrade and patch bundles are now conveniently accessible within the SDDC Manager interface, eliminating the necessity for manual patching via the CLI. Moreover, patches can now be seamlessly applied during upgrade workflows, and newly deployed workload domains and clusters deploy with the latest patches already applied, streamlining administrative tasks and reducing operational overhead.
Let’s expand on these enhancements in more detail.
The skip-level upgrade feature enables customers running older versions of VCF to transition directly to the latest VCF 5.x release, bypassing intermediate VCF versions. Users on VCF 4.5.1 and above can seamlessly upgrade to VCF 5.2 without workload migrations or hardware swaps. This streamlined upgrade process simplifies the transition to the latest VCF version, enhancing operational efficiency and minimizing disruptions.
In VCF 5.2, the upgrade process for a workload domain now offers the flexibility to customize the Bill of Materials (BOM) to accommodate any newly released asynchronous patches. This enhancement streamlines the upgrade workflow, as only a single run is needed to align the domain with the preferred component versions. Previously, users had to conduct the upgrade and subsequently apply async patches separately. Workload domains can now feature distinct combinations of component versions tailored to specific workload requirements, with SDDC Manager conducting compatibility validations to verify that the customized BOM remains within supported configurations.
With the introduction of VCF 5.2, customers now have the option to incrementally upgrade the SDDC Manager component independently over time without updating the remaining infrastructure components within the management domain. This approach facilitates quicker adoption of new functionalities and fixes without necessitating extensive updates to NSX, vCenter Server, and ESXi hosts.
After a particular VCF BOM has been released, updating one or more of the individual components may be necessary to address reliability or security issues. These individual component updates (referred to as "async patches") are delivered separately from the VCF release BOM. Previously, customers looking to update their workload domains had to utilize the Async Patch Tool, a separate command-line interface utility. However, with VCF 5.2, the deployment of async patches can now be seamlessly applied within the familiar SDDC Manager UI. Users can conveniently select from available patches in a dropdown menu, which dynamically updates as the SDDC Manager receives the latest bundle data from online repositories or offline sources.
This feature concerns patching individual VCF on VxRail components using SDDC Manager. When applying async patches, specific instructions must be followed to update the VCF inventory using the Async Patch Tool's sync command; an out-of-sync inventory can cause issues during subsequent upgrades. Before VCF 5.2, users had to sync the inventory manually using the CLI and the Async Patch Tool. With VCF 5.2, an automatic mechanism now handles this step, simplifying the process and improving the user experience.
In scenarios where VCF deployments are unable to connect to the online depot on the Internet due to technical constraints or policy restrictions (referred to as "dark sites"), downloading the essential software bundles for infrastructure updates and deployments becomes challenging. To address this limitation, customers traditionally relied on the Offline Bundle Transfer Utility (OBTU) CLI tool to download bundles to a separate system and then transfer and import them into SDDC Manager before performing LCM operations. This method had to be repeated for each VCF instance deployed within a customer environment, potentially requiring separate downloaded copies of update bundles and separate bundle transfers.
However, with the introduction of VCF 5.2, a new configuration option within SDDC Manager offers a solution by allowing downloads from a local web server instead of the online depot. This local server, referred to as an "offline depot," involves the utilization of a customer-managed web server equipped with an enhanced version of OBTU. This approach effectively establishes a local mirror of the online depot, enabling seamless centralized access to single copies of necessary software bundles within dark site environments.
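Conceptually, the offline depot is simply a customer-managed web server that mirrors the bundles SDDC Manager would otherwise pull from the online depot. The minimal sketch below illustrates that idea with Python's standard-library HTTP server and a hypothetical bundle file name; the real depot is populated and structured by the enhanced OBTU, so the layout here is illustrative only.

```python
import http.server
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

# Stand up a minimal local "offline depot": a plain web server fronting a
# directory of previously downloaded bundle files (file name is hypothetical).
depot_dir = Path(tempfile.mkdtemp())
(depot_dir / "bundle-12345.tar").write_bytes(b"example bundle payload")

handler = partial(http.server.SimpleHTTPRequestHandler, directory=str(depot_dir))
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# SDDC Manager would be configured with this URL in place of the online
# depot; here we simply fetch a bundle the way any HTTP client would.
url = f"http://127.0.0.1:{server.server_address[1]}/bundle-12345.tar"
payload = urllib.request.urlopen(url).read()
server.shutdown()
```

Because the mirror is ordinary HTTP, a single copy of each bundle can serve every VCF instance at the dark site.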
In the past, every new workload domain and cluster was deployed with component versions matching those of the management domain BOM. Subsequently, async patches had to be manually applied to each new domain after deployment.
With the introduction of VCF 5.2, administrators can deploy new domains and clusters that match the patch levels implemented in the management domain. This advancement eliminates the need to patch a newly deployed domain, enhancing operational efficiency and scalability.
In VMware Cloud Foundation 5.2, the TKG Service in vSphere 8.0 U3 has been decoupled from vCenter Server, allowing it to be implemented independently as a core Supervisor Service. This architectural change empowers Administrators to introduce asynchronous updates to the service, ensuring alignment with upstream Kubernetes and facilitating the rapid delivery of new Kubernetes versions. This decoupling enhances agility and enables faster adoption of Kubernetes advancements within the VMware ecosystem.
In previous VMware Cloud Foundation versions, each individual Isolated Workload Domain required a dedicated NSX instance. However, with the introduction of VCF 5.2, Administrators now have the flexibility to configure multiple isolated domains utilizing a shared NSX instance. Each isolated workload domain utilizing a shared NSX instance is independently set up with its own SSO instance and Identity Provider.
Utilizing a shared NSX instance across domains offers the advantage of a unified management interface through the NSX Manager console for all NSX network components within a given topology. Additionally, a single transport zone is shared across all clusters present in either all VI workload domains or all isolated workload domains linked to the shared NSX Manager instance. This shared NSX instance also reduces the number of NSX controllers needed to manage the NSX environment effectively.
Although NSX Edges can handle traffic across multiple TEPs, in many cases only a single segment is involved when routing traffic through an Edge, so a single TEP carries the load. Customers with significant throughput demands have observed processing-capacity limits on individual Edges under these circumstances. To address this challenge and enhance traffic handling, a more granular per-flow load-sharing mechanism across the Edge TEPs is now available and recommended.
VCF 5.2 now integrates with the NSX Advanced Load Balancer (formerly Avi). This integration allows VCF Administrators to leverage the SDDC Manager to deploy an NSX Advanced Load Balancer controller cluster within deployed workload domains. It's important to highlight that utilizing the NSX Advanced Load Balancer requires an add-on license. This integration streamlines the deployment of the Avi Load Balancer Controller Cluster. The Avi controller configuration and Service Engines' deployment are managed through the Avi Admin Console.
vSAN 8 U3 leverages the robust snapshotting capabilities of ESA to enhance data protection and operational flexibility. With the Data Protection for vSAN ESA, VCF administrators now have the capability to safeguard and restore VMs in the event of accidental deletions or malicious activities like ransomware attacks. This functionality simplifies the setup by enabling the configuration of protection groups to specify which VMs require protection, the frequency of snapshots, and the retention period. Moreover, the option to make snapshots immutable provides an additional layer of defense, particularly for scenarios requiring robust ransomware protection. Furthermore, seamless integration with VMware Live Cyber Recovery (VLCR) enhances the overall ransomware protection strategy by offering comprehensive cloud-based solutions.
VCF on VxRail now supports vSAN Witness Traffic Separation (WTS) configurations as a standard feature, eliminating the need to pursue Dell exception support. The VMware and Dell Technologies documentation will be revised to align with this updated standard feature.
VMware Cloud Foundation offers support for Identity Federation through various third-party providers. The VMware Identity Service allows configuration for federation utilizing Okta and Microsoft Entra ID (introduced in VCF 5.2) with plans for additional third-party identity providers in upcoming releases. When users initiate login requests to VCF, these requests are directed to the chosen authentication service. Following authentication validation based on specified criteria, the third-party authentication service provides an access token/claim for the user's login request, enabling VCF to authorize access to its dashboards. Authenticated users can seamlessly navigate between SDDC Manager, vCenter Server, and NSX Manager, enhancing operational efficiency and user experience within the VCF environment.
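The access token/claim issued by the identity provider is typically a JSON Web Token (JWT): three base64url-encoded segments (header, claims payload, signature) joined by dots. The sketch below shows how such a token is structured and how its claims can be inspected; it builds an unsigned sample token purely for illustration, and in production the signature must always be verified before any claim is trusted.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the claims payload of a JWT *without* verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def make_sample_token(claims: dict) -> str:
    """Build an unsigned sample token just to demonstrate the structure."""
    enc = lambda obj: (
        base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")
    )
    # header . payload . (empty signature)
    return f'{enc({"alg": "none"})}.{enc(claims)}.'
```

A token carrying claims such as the user's subject and issuer is what allows VCF to authorize the login and enable seamless navigation between SDDC Manager, vCenter Server, and NSX Manager.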
In previous releases of VMware Cloud Foundation, the capability to set up a proxy server within SDDC Manager was available but lacked the option to configure authentication. The latest version of VCF now enables administrators to configure a proxy server with authentication for the download and installation of update bundles. The SDDC Manager interface can be used to establish proxy server authentication, supporting basic or NTLM authentication methods. Additionally, if your proxy server operates over HTTPS, SDDC Manager now supports configuring that security protocol level as well.
New SDDC Manager compliance APIs have been introduced. While current support is limited to PCI compliance, VMware intends to expand this support to include additional compliance standards in upcoming releases. This enhancement enables users and third-party reporting software to programmatically access configuration audit results for SDDC Manager, facilitating streamlined compliance monitoring and reporting processes.
The Aria Operations console in VMware Cloud Foundation (VCF) offers a centralized interface for managing the VCF stack, streamlining application and infrastructure management. It serves as a control plane for global inventory, lifecycle operations, and administrative tasks, enhancing operational efficiency. Integrating diagnostic functionalities from Skyline Health Diagnostics and Skyline Advisor into Aria Operations provides administrators with a unified platform to identify issues, access automated recommendations, and ensure efficient troubleshooting. Additionally, the console simplifies license key management and enhances certificate visibility, allowing IT admins to monitor certificate health, receive expiration alerts, and generate compliance reports for audit purposes.
The enhanced VCF integration with Aria Suite boosts cloud resource visibility and streamlines cloud setup with Quick Start workflows. The Dashboard and Launchpad on the Home Page simplify access to crucial information. Through guided workflows, users can efficiently establish cloud services, create cloud accounts, and align resources with business needs. Admins can customize Supervisor Namespace classes for self-service provisioning, allowing developers to deploy workloads quickly via the Cloud Consumption Interface or curated catalog items. Governance is ensured through policies like leasing and approvals as well as maintaining operational compliance and control.
HCX 4.10 enhances performance and scalability and is fully compatible with VCF 5.2. Migration capacity increased from 600 to 1,000 VMs, optimizing efficiency and reducing costs. HCX Assisted vMotion (HAV) accelerates Cross vCenter vMotion for faster migration times. Enhanced encryption options are introduced for migration and network extension services. Finally, OS-Assisted Migration deployment is streamlined, reducing solution footprint.
The VMware Private AI Foundation with NVIDIA was initially introduced as a post-GA feature in version 5.1.1; however, it is now validated and officially supported in VCF on VxRail. Within VMware Aria Automation—a part of the solution—users can access expanded catalog items, including AI Workstation, AI RAG Workstation, and Triton Inferencing Server. These new catalog items empower users to provision TKG clusters and leverage vGPU and RAG operators for deploying AI RAG applications on the TKG cluster. Additionally, Deep Learning VM catalog items now support the execution of custom cloud-init configurations for better customization.
The latest release is packed with new features, including support for multiple new hardware platforms, notable lifecycle management enhancements, networking and security updates, and substantial improvements in integration with Aria Suite. VCF 5.2 on VxRail 8.0.300 delivers a more robust, secure, integrated, and streamlined user operations experience. If you want more information beyond what was discussed here, please check out the following resources. Until next time!
Author: Karol Boguniewicz, ISG Cloud Platforms Technical Marketing
Twitter: @cl0udguide
Tue, 11 Jun 2024 18:37:28 -0000
The partnership between VMware by Broadcom, Intel, and Dell Technologies simplifies the implementation of AI infrastructure, offering customers a seamless and efficient experience. The integration of Intel Xeon Scalable CPU with Advanced Matrix Extensions (AMX) within VMware Cloud Foundation on Dell VxRail presents a comprehensive solution consisting of software and hardware components that support AI workloads throughout the infrastructure without needing additional hardware such as GPUs. Customers may choose CPUs over GPUs for their AI infrastructure to better align costs with their specific workload requirements and to reduce power consumption. This approach helps right-size the infrastructure, avoiding over-engineered, costly solutions and achieving operational cost savings while meeting performance needs.
By using the capabilities of 4th Gen Intel Xeon processors, businesses can fully harness the advantages of AI technology, leading to enhanced business results. This strategic approach broadens the reach of AI advantages by efficiently using current infrastructure resources and the powerful features of Intel Xeon processors equipped with AMX technology. Furthermore, the seamless compatibility with VMware Cloud Foundation on Dell VxRail environments enhances cost-effectiveness, accessibility, operational efficiency, and experience, ultimately reducing the Total Cost of Ownership (TCO) and expediting Time to Value (TTV). The collaborative efforts of VMware, Intel, and Dell Technologies empower customers to accelerate their AI initiatives, ultimately realizing the vision of AI Everywhere.
VMware Private AI represents a strategic architectural framework designed to unlock the business benefits of AI adoption while simultaneously addressing organizations’ critical privacy and compliance requirements. This innovative approach encompasses a comprehensive set of components and specialized expertise to deliver AI services prioritizing data privacy, regulatory compliance, and operational control. Developed collaboratively with industry partners – Intel and Dell Technologies, VMware Private AI offers a tailored solution that empowers enterprises to maintain data privacy, leverage a mix of open-source and commercial AI tools, accelerate time-to-value, and ensure robust security and governance.
At its core, the VMware Private AI with Intel infrastructure stack is built upon VMware Cloud Foundation on Dell VxRail. It leverages acceleration capabilities such as the Intel AMX instructions, delivered as part of VxRail integrated systems with Intel 4th Gen Xeon Scalable CPUs, providing an optimized infrastructure framework designed to facilitate the deployment of Private AI solutions within the enterprise. This framework empowers customers to effectively harness the advantages of AI technologies while maintaining control over the utilization and storage of their data for various GenAI use cases. Since this approach does not require GPU accelerators, it also does not require SR-IOV.
By leveraging VMware Private AI with Intel on Dell VxRail, organizations can benefit from integrating Intel’s AI software suite, Intel Xeon processors featuring embedded on-chip AI accelerator, and VMware Cloud Foundation on Dell VxRail. This collaborative solution empowers customers to develop and implement private AI models within their existing infrastructure, leading to reduced total cost of ownership and a focus on environmental sustainability. This partnership between VMware, Intel, and Dell Technologies enables the optimization of smaller, cost-effective, state-of-the-art AI models that are easier to maintain and update within shared virtual environments. Once batch AI jobs are completed, the resources can seamlessly return to the IT shared resource pool, ready to support ongoing inferencing tasks that operate continuously rather than in batches. This flexible approach supports many use cases, such as AI-assisted code generation, innovative customer service centers using natural language processing, and traditional machine learning/statistical analytics. All these applications can coexist on the same general-purpose servers running traditional and cloud-native applications, optimizing resource utilization and efficiency.
Figure 1: High-level architecture of VMware Private AI with Intel on VMware Cloud Foundation on Dell VxRail
VxRail, powered by Dell PowerEdge server platforms and VxRail HCI System Software, incorporates advanced technology to ensure the longevity and adaptability of your infrastructure while fostering deep integration within the VMware ecosystem. VxRail HCI System Software, a suite of integrated software elements that sits between infrastructure components such as vSAN and VMware Cloud Foundation, delivers a seamless and automated operational experience. By embracing Intel's 4th Gen Xeon Scalable CPU with AMX, along with memory and storage innovations, VxRail can support the latest technologies. Its versatile architecture, spanning compute, memory, storage, network, and graphics options, enables optimal performance across a diverse range of applications and workloads.
VxRail plays a crucial role in cost optimization by consolidating business-critical workloads onto a high-performing platform that excels in reliability, functionality, and performance. This consolidation enhances operational efficiency and enables IT teams to reallocate resources effectively through productivity-boosting features like streamlined deployments and automated patching and updates. Moreover, by minimizing the frequency and duration of service disruptions, VxRail is a protective barrier against revenue loss, ensuring a seamless user experience while enhancing data backup, protection, and recovery capabilities.
4th Gen Intel Xeon Scalable Processors with AMX and Intel's AI software suite
Intel AMX is an on-chip hardware AI accelerator that enables 4th and 5th Gen Intel Xeon Scalable processors to accelerate deep learning (DL) training and inferencing workloads. By leveraging Intel AMX, these processors can seamlessly transition between optimizing general computing tasks and AI workloads. The 4th and 5th Gen Intel Xeon Scalable processors offer developers the flexibility to leverage the Intel AMX instruction set for AI-specific functionality while using the processor's instruction set architecture (ISA) for non-AI tasks. Intel has further integrated the Intel oneAPI Deep Neural Network Library (oneDNN) and its oneAPI DL engine into widely used open-source AI tools such as TensorFlow, PyTorch, PaddlePaddle, and OpenVINO, enhancing the accessibility and efficiency of AI applications.
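Before placing AI workloads on a node, an operator may want to confirm that the processor actually exposes the AMX instructions. On Linux, the kernel reports them as the `amx_tile`, `amx_bf16`, and `amx_int8` CPU flags in `/proc/cpuinfo`; a minimal check might look like this (returning False on non-Linux systems or older CPUs):

```python
from pathlib import Path

def cpu_supports_amx() -> bool:
    """Return True if /proc/cpuinfo (Linux) advertises the AMX tile flag."""
    try:
        info = Path("/proc/cpuinfo").read_text()
    except OSError:
        # Not Linux, or /proc unavailable: assume no AMX support detected.
        return False
    return "amx_tile" in info
```

Frameworks built on oneDNN (TensorFlow, PyTorch, OpenVINO) dispatch to AMX automatically when these flags are present, so no application changes are required.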
Intel's AI software suite is equipped with a robust array of open-source tools and optional licensing components. It is designed to help developers run full AI pipeline workflows from data preparation through fine-tuning to inference, scale AI across multiple nodes, and deploy it on enterprise IT infrastructure. Utilizing the open oneAPI framework, the suite allows for development that is agnostic to processors and hardware accelerators. This means developers can create applications once and deploy them across different architectures, avoiding the need to maintain multiple codebases or learn specialized programming languages. Additionally, Intel's Transformer Extensions and PyTorch Extensions are tightly integrated with the widely used Hugging Face open-source libraries, offering automated optimization techniques that streamline the process of model fine-tuning and compression for efficient inference. Developers looking to leverage these tools can download the AI Tools from the AI Tools Selector.
VMware Cloud Foundation represents the next evolution of VMware’s hybrid cloud platform, building upon the industry-leading server virtualization technology of VMware vSphere. This advancement extends the core hypervisor by integrating software-defined storage, networking, and security features, providing users with flexible consumption options on-premises or in the public cloud. With the inclusion of integrated cloud management capabilities, the platform delivers a unified hybrid cloud solution that seamlessly spans private and public environments. It offers a consistent operational model leveraging familiar vSphere tools and processes, granting businesses the freedom to deploy applications across various environments without the complexity of application rewrites.
VMware Cloud Foundation on VxRail combines Dell VxRail and VMware Cloud Foundation and delivers a simple and direct path to modern apps and the hybrid cloud with one complete, automated platform. Deep integration between VxRail and VMware Cloud Foundation combines operational transparency, automation, support, and serviceability capabilities for a turnkey hybrid cloud experience. Enterprises can deploy, host, and manage traditional VMs alongside cloud-native workloads across core, edge, and cloud environments. Streamlined operations help IT rapidly provision infrastructure to developers so they can create and deploy code faster to market and drive business innovation.
VxRail is the only jointly engineered HCI system with deep VMware Cloud Foundation integration. That deep integration delivers a unique experience with cluster deployment and management from a single, familiar user interface and automated end-to-end LCM management to ensure VxRail clusters remain in continuously validated states.
Fri, 26 Apr 2024 12:20:39 -0000
Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.3.0 on VxRail 7.0.202. This new release provides several security-related enhancements, including FIPS 140-2 support, password auto-rotation support, SDDC Manager secure API authentication, data protection enhancements, and more. VxRail-specific enhancements include support for the more powerful 3rd Gen AMD EPYC™ CPUs and NVIDIA A100 GPUs (check this blog for more information about the corresponding VxRail release), and more flexible network configuration options with support for multiple System Virtual Distributed Switches (vDS).
Let’s quickly discuss the comprehensive list of the new enhancements and features:
These include updated versions of vSphere, vSAN, NSX-T, and VxRail Manager. Please refer to the VCF on VxRail Release Notes for comprehensive, up-to-date information about the release and supported software versions.
The configuration of an NSX-T Edge cluster and AVN networks is now a post-deployment process automated through SDDC Manager. This approach simplifies and accelerates the VCF on VxRail bring-up and provides more flexibility for network configuration after the initial deployment of the platform.
Figure 1: Cloud Foundation Initial Deployment – Day 2 NSX-T Edge and AVN
NSX-T Edge clusters can now be expanded and shrunk using built-in automation from within SDDC Manager. This allows VCF operators to scale the right level of resources on demand without having to size for demand up front, which results in more flexibility and better use of infrastructure resources in the platform.
Support for a two System Virtual Distributed Switch (vDS) configuration was introduced in VxRail 7.0.13x. VCF 4.3 on VxRail 7.0.202 now supports a VxRail deployed with two system vDS, offering more flexibility and choice for the network configuration of the platform. This is relevant for customers with strict requirements for separating network traffic (for instance, some customers might want to use a dedicated network fabric and vDS for vSAN). See Figure 2 below for a sample diagram of the new supported network topology:
Figure 2: Multiple System VDS Configuration Example
This new release introduces the ability to define a periodic backup schedule and backup retention policies, and to enable or disable these schedules in the SDDC Manager UI, resulting in simplified backup and recovery of the platform (see the screenshot below in Figure 3).
Figure 3: Backup Schedule
The built-in automated workflow for generating certificate signing requests (CSRs) within SDDC Manager has been further enhanced to include the option to input a Subject Alternative Name (SAN) when generating a certificate signing request. This improves security and prevents vulnerability scanners from flagging invalid certificates.
Many customers need to rotate and update passwords regularly across their infrastructure, and this can be a tedious task if not automated. VCF 4.3 provides automation to update individual supported platform component passwords or rotate all supported platform component passwords (including integrated VxRail Manager passwords) in a single workflow. This feature enhances the security and improves the productivity of the platform admins.
This new support increases the number of VCF on VxRail components that are FIPS 140-2 compliant in addition to VxRail Manager, which is already compliant with this security standard. It improves platform security and regulatory compliance with FIPS 140-2.
Token-based Auth API access is now enabled by default within VCF 4.3 for secure authentication to SDDC Manager. Access to private APIs that use Basic Auth has been restricted. This change improves platform security when interacting with the VCF API.
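In practice, a client first exchanges credentials for an access token and then presents that token as a Bearer header on subsequent API calls instead of sending Basic Auth credentials with every request. The sketch below builds (but does not send) such a request with Python's standard library; the SDDC Manager FQDN is a hypothetical placeholder, and the `/v1/tokens` path reflects the public VCF API's token endpoint as documented at the time of writing.

```python
import json
import urllib.request

SDDC_MANAGER = "sddc-manager.example.local"  # hypothetical FQDN

def build_token_request(username: str, password: str) -> urllib.request.Request:
    """Build (but do not send) the POST that exchanges credentials for a token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        url=f"https://{SDDC_MANAGER}/v1/tokens",
        data=body,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )

def bearer_header(access_token: str) -> dict:
    # Subsequent API calls carry the short-lived token, not raw credentials.
    return {"Authorization": f"Bearer {access_token}"}
```

Because tokens expire, a compromised token has a limited window of use, which is the security improvement over long-lived Basic Auth credentials.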
VCF 4.3 on VxRail 7.0.202 brings new hardware features, including support for the AMD 3rd Generation EPYC CPU platform and NVIDIA A100 GPUs.
These new hardware options provide better performance and more configuration choices. Check this blog for more information about the corresponding VxRail release.
New manual guidance for password and certificate management and backup & restore of Global Managers.
As you can see, most of the new enhancements in this release are focused on improving platform security and providing more flexibility of the network configurations. Dell Technologies and VMware continue to deliver the optimized, turnkey platform experience for customers adopting the hybrid cloud operating model. If you’d like to learn more, please check the additional resources linked below.
VMware Cloud Foundation on Dell EMC VxRail Release Notes
VxRail page on DellTechnologies.com
VCF on VxRail Interactive Demos
Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter: @cl0udguide
Fri, 26 Apr 2024 11:37:47 -0000
HCI networking made easy (again!). Now even more powerful with multi-rack support.
The Challenge
Network infrastructure is a critical component of HCI. In contrast to legacy 3-tier architectures, which typically have dedicated storage and a dedicated storage network, HCI architecture is more integrated and simplified. Its design allows you to share the same network infrastructure for workload-related traffic and for inter-cluster communication with the software-defined storage. The reliability and proper setup of this network infrastructure not only determine the accessibility of the running workloads (from the external network), they also determine the performance and availability of the storage and, as a result, of the whole HCI system.
Unfortunately, in most cases, setting up this critical component properly is complex and error-prone. Why? Because of the disconnect between the responsible teams. Typically configuring a physical network requires expert network knowledge which is quite rare among HCI admins. The reverse is also true: network admins typically have a limited knowledge of HCI systems, because this is not their area of expertise and responsibility.
The situation gets even more challenging with increasingly complex deployments, when you go beyond just a pair of ToR switches and beyond a single-rack system. This scenario is becoming more common as HCI becomes a mainstream architecture within the data center, thanks to its maturity, simplicity, and recognition as a perfect infrastructure foundation for digital transformation and VDI/End User Computing (EUC) initiatives. You need much more computing power and storage capacity to handle increased workload requirements.
At the same time, with the broader adoption of HCI, customers are looking for ways to connect their existing infrastructure to the same fabric, in order to simplify the migration process to the new architecture or to leverage dedicated external NAS systems, such as Isilon, to store files and application or user data.
A brief history of SmartFabric Services for VxRail
Here at Dell Technologies we recognize these challenges. That’s why we introduced SmartFabric Services (SFS) for VxRail. SFS for VxRail is built into Dell EMC Networking SmartFabric OS10 Enterprise Edition, the software that runs on the Dell EMC PowerSwitch networking portfolio. We announced the first version of SFS for VxRail at VMworld 2018. With this functionality, customers can quickly and easily deploy and automate data center fabrics for VxRail while reducing the risk of misconfiguration.
Since that time, Dell has expanded the capabilities of SFS for VxRail. The initial release of SFS for VxRail allowed VxRail to fully configure the switch fabric to support the VxRail cluster (as part of the VxRail 4.7.0 release back in Dec 2018). The following release included automated discovery of nodes added to a VxRail cluster (as part of VxRail 4.7.100 in Jan 2019).
The new solution
This week we are excited to introduce a major new release of SFS for VxRail as a part of Dell EMC SmartFabric OS 10.5.0.5 and VxRail 4.7.410.
So, what are the main enhancements?
Figure 1. Comparison of a multi-rack VxRail deployment, without and with SFS
Solution components
In order to take advantage of this solution, you need the following components:
How does the multi-rack feature work?
The multi-rack feature is enabled by the hardware VTEP functionality in Dell EMC PowerSwitches and the automated creation of a VxLAN tunnel network across the switch fabric in multiple racks.
VxLAN (Virtual Extensible Local Area Network) is an overlay technology that extends a Layer 2 (L2) “overlay” network over a Layer 3 (L3) “underlay” network. It does this by adding a VxLAN header to the original L2 Ethernet frame and placing it into an IP/UDP packet to be transported across the L3 underlay network.
By default, all VxRail networks are configured as L2. With the configuration of this VxLAN tunnel, the L2 network is “stretched” across multiple racks with VxRail nodes. This allows for the scalability of L3 networks with the VM mobility benefits of an L2 network. For example, the nodes in a VxRail cluster can reside on any rack within the SmartFabric network, and VMs can be migrated within the same VxRail cluster to any other node without manual network configuration.
Figure 2. Overview of the VLAN and VxLAN VxRail traffic with SFS for multi-rack VxRail
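To make the encapsulation concrete, here is a minimal Python sketch of the 8-byte VxLAN header defined in RFC 7348. This is purely illustrative (SFS builds these tunnels for you in switch hardware); the outer Ethernet/IP/UDP framing and the original L2 frame are omitted:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348.

    Byte 0: flags, with the I bit (0x08) set to mark a valid VNI.
    Bytes 1-3: reserved. Bytes 4-6: the 24-bit VNI. Byte 7: reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First 32-bit word carries the flags; second carries VNI << 8
    # so the VNI occupies bytes 4-6 and byte 7 stays reserved (zero).
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Full encapsulated packet on the wire would be:
# outer Ethernet / outer IP / UDP (dst port 4789) / VXLAN header / original L2 frame
header = vxlan_header(5001)
assert len(header) == 8
```

The VTEPs in the PowerSwitch fabric add and strip this header transparently, which is why the VxRail nodes see one contiguous L2 network regardless of which rack they sit in.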
This new functionality is enabled by the new L3 Fabric personality, available as of OS 10.5.0.5, which automates the configuration of a leaf-spine fabric in a single-rack or multi-rack deployment and supports both L2 and L3 upstream connectivity. What is a fabric personality? An SFS personality is a setting that enables the functionality and supported configuration of the switch fabric.
To see how simple it is to configure the fabric and to deploy a VxRail multi-rack cluster with SFS, please see the following demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks.
Single pane for management and “day 2” operations
SFS not only automates the initial deployment (“day 1” fabric setup), but greatly simplifies the ongoing management and operations on the fabric. This is done in a familiar interface for VxRail / vSphere admins – vCenter, through the OMNI plugin, distributed as a virtual appliance.
It’s powerful! From this “VMware admin-friendly” interface you can:
Figure 3. Sample view from the OMNI vCenter plugin showing a fabric topology
To see how simple it is to deploy the OMNI plugin and to get familiar with some of the options available from its interface, please see the following demo: Dell EMC OpenManage Network Integration for VMware vCenter.
OMNI also monitors the VMware virtual networks for changes (such as changes to port groups in vSS and vDS VMware virtual switches) and reconfigures the underlying physical fabric as necessary.
Figure 4. OMNI – monitor virtual and physical network configuration from a single view
Thanks to OMNI, managing the physical network for VxRail becomes much simpler, less error-prone, and can be done by the VxRail admin directly from a familiar management interface, without having to log into the console of the physical switches that are part of the fabric.
Supported topologies
This new SFS release is very flexible and supports multiple fabric topologies. Due to the limited size of this post, I will only list them by name:
For detailed information on these topologies, please consult Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide.
Note that SFS for VxRail does not currently support NSX-T or VCF on VxRail.
Final thoughts
This latest version of SmartFabric Services for VxRail takes HCI network automation to the next level, solving the much bigger network complexity problem of multi-rack environments, compared to the much simpler single-rack, dual-switch configuration. With SFS, customers can:
Dell EMC VxRail with SmartFabric Network Services Planning and Preparation Guide
Dell EMC Networking SmartFabric Services Deployment with VxRail
SmartFabric Services for OpenManage Network Integration User Guide Release 1.2
Demo: Dell EMC OpenManage Network Integration for VMware vCenter
Demo: Expand SmartFabric and VxRail to Multi-Rack
Demo: Dell EMC Networking SFS Deployment with VxRail - L2 Uplinks
Demo: Dell EMC Networking SFS Deployment with VxRail - L3 Uplinks
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
Twitter: @cl0udguide
*Disclaimer: Based on internal analysis comparing SmartFabric to manual network configuration, Oct 2019. Actual results will vary.
Wed, 24 Apr 2024 13:49:39 -0000
|Read Time: 0 minutes
The challenge
Over the last few years, VxRail has evolved significantly, becoming an ideal platform for most use cases and applications, spanning the core data center, edge locations, and the cloud. With its simplicity, scalability, and flexibility, it’s a great foundation for customers’ digital transformation initiatives, as well as for high-value and more demanding workloads, such as SAP HANA.
Running more business-critical workloads requires following best practices for data protection and availability. Dell Technologies specializes in data protection solutions and offers a portfolio of products that can fulfill even the most demanding RPO/RTO requirements. However, another area related to this topic often gets too little attention: protection against power disturbances and outages. Uninterruptible Power Supply (UPS) systems are at the heart of a data center’s electrical systems, and because VxRail runs critical workloads, it is a best practice to protect them with a UPS to ensure data integrity in case of unplanned power events. I want to highlight a solution from one of our partners: Eaton.
The solution
Eaton is an Advantage member of the Dell EMC Technology Connect Partner Program and the first UPS vendor to integrate its solution with VxRail. Eaton’s solution is a great example of how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers. By integrating Eaton’s Intelligent Power Manager (IPM) software with VxRail APIs and leveraging Eaton’s Gigabit Network Card, the solution can run on the same protected VxRail cluster. This removes the need for additional external compute infrastructure to host the power management software; just a compatible Eaton UPS is required.
The solution consists of:
The main benefits are:
How does it work?
It’s quite simple (see the figure below). What’s interesting and unique is that the IPM software, which is running on the cluster, delegates the final shutdown of the system VMs and cluster to the card in the UPS device, and the card uses VxRail APIs to execute the cluster shutdown.
Figure 1. Eaton UPS and VxRail integration explained
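To illustrate the final step the network card performs, here is a hedged Python sketch of preparing a call to the VxRail cluster shutdown API. The host name is a placeholder, and you should verify the exact endpoint path and payload against the API documentation for your VxRail release; the public VxRail API exposes a cluster shutdown operation with a dry-run option for validation:

```python
import json
from urllib.parse import urljoin

def build_shutdown_request(vxm_host, dry_run=True):
    """Build (url, payload) for the VxRail cluster shutdown API call.

    A dry run asks VxRail Manager to validate that a graceful shutdown
    is possible without actually shutting anything down.
    """
    url = urljoin(f"https://{vxm_host}", "/rest/vxm/v1/cluster/shutdown")
    payload = json.dumps({"dryrun": dry_run})
    return url, payload

url, payload = build_shutdown_request("vxm.example.local")
# A client such as the card in the UPS would POST this with basic auth,
# for example: requests.post(url, data=payload, auth=(user, pw), verify=ca_bundle)
```

Running a dry run first, then the real call with `dry_run=False`, mirrors the validate-then-execute pattern a power-management integration would follow.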
Summary
Protection against unplanned power events should be a part of a business continuity strategy for all customers who run their critical workloads on VxRail. This ensures data integrity by enabling automated and graceful shutdown of VxRail cluster(s). Eaton’s solution is a great example of providing such protection and how Dell Technologies partners can leverage VxRail APIs to provide additional value for our joint customers.
Eaton website: Eaton ensures connectivity and protects Dell EMC VxRail from power disturbances
Brochure: Eaton delivers advanced power management for Dell EMC VxRail systems
Blog post: Take VxRail automation to the next level by leveraging APIs
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
Twitter: @cl0udguide
Wed, 24 Apr 2024 13:49:15 -0000
The Challenge
VxRail Manager, available as part of HCI System Software, drastically simplifies the lifecycle management and operations of a single VxRail cluster. With a “single click” user experience available directly in the vCenter interface, you can perform a full upgrade of all software components of the cluster, including not only vSphere and vSAN but also server hardware firmware and drivers, such as NICs, disk controllers, and drives. That’s a simplified experience you won’t find in any other VMware-based HCI solution.
But what if you need to manage not a single cluster, but a farm of dozens or hundreds of VxRail clusters? Or maybe you’re using an orchestration tool to holistically automate IT infrastructure and processes? Would you still need to log in manually to each of these clusters separately and click a button to shut down a cluster, collect log or health data, or perform LCM operations?
This is where VxRail REST APIs come in handy.
The VxRail API Solution
REST APIs are very important for customers who would like to programmatically automate operations of their VxRail-based IT environment and integrate with external configuration management or cloud management tools.
In VxRail HCI System Software 4.7.300 we’ve introduced very significant improvements in this space:
The easiest way to start using and accessing these APIs is through the web browser, thanks to the Swagger integration. Swagger is an open-source toolkit that simplifies OpenAPI development and can be launched from within the VxRail Manager virtual appliance. To access the documentation, simply open the following URL in a web browser: https://<VxM_IP>/rest/vxm/api-doc.html (where <VxM_IP> is the IP address of the VxRail Manager). You should see a page similar to the one shown below:
Figure 1. Sample view into VxRail REST APIs via Swagger
This interface is intended for customers who leverage orchestration or configuration management tools; they can use it to accelerate the integration of VxRail clusters into their automation workflows. VxRail API is complementary to the APIs offered by VMware.
Would you like to see this in action? Watch the first part of the recorded demo available in the additional resources section.
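For readers who prefer a scripted starting point, here is a minimal Python sketch (standard library only) of calling one of these endpoints. The host name is a placeholder, and you should confirm the endpoint paths against the Swagger page of your own VxRail Manager; GET /v1/system is documented there as returning basic cluster information such as the installed version:

```python
import base64
import json
import urllib.request

VXM_HOST = "vxm.example.local"  # placeholder: your VxRail Manager IP or FQDN
BASE_URL = f"https://{VXM_HOST}/rest/vxm"

def get_system_info(user, password, context=None):
    """Call GET /v1/system to retrieve installed-version and cluster details.

    Uses HTTP basic auth. In production, pass an ssl.SSLContext loaded with
    your CA bundle via `context` instead of trusting the default store.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = urllib.request.Request(
        f"{BASE_URL}/v1/system",
        headers={"Authorization": f"Basic {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(request, context=context, timeout=30) as response:
        return json.load(response)
```

The same pattern (base URL plus versioned resource path) applies to the other endpoints listed in the Swagger UI, so this helper is easy to generalize.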
PowerShell integration for Windows environments
Customers who prefer scripting in a Windows environment using Microsoft PowerShell or VMware PowerCLI will benefit from the VxRail.API PowerShell Modules Package. It simplifies the consumption of the VxRail REST APIs from PowerShell and focuses on the physical infrastructure layer, while the VMware vSphere layer and solutions on top of it (such as Software-Defined Data Center, Horizon, etc.) can be scripted using the similar interface available in VMware PowerCLI.
Figure 2. VxRail.API PowerShell Modules Package
To see that in action, check the second part of the recorded demo available in the additional resources section.
Bringing it all together
VxRail REST APIs further simplify IT Operations, fostering operational freedom and a reduction in OPEX for large enterprises, service providers and midsize enterprises. Integrations with Swagger and PowerShell make them much more convenient to use. This is an area of VxRail HCI System Software that rapidly gains new capabilities, so please make sure to check the latest advancements with every new VxRail release.
Demo: VxRail API - Overview
Demo: VxRail API - PowerShell Package
Dell EMC VxRail Appliance – API User Guide
Author: Karol Boguniewicz, Sr Principal Engineer, VxRail Tech Marketing
Twitter: @cl0udguide
Tue, 26 Mar 2024 18:47:52 -0000
The latest VCF on VxRail release delivers GenAI-ready infrastructure, runs more demanding workloads, and is an excellent choice for supporting hardware tech refreshes and achieving higher consolidation ratios.
VMware Cloud Foundation 5.1.1 on VxRail 8.0.210 is a minor release from the perspective of versioning and new functionality but is significant in terms of support for the latest VxRail hardware platforms. This new release is based on the latest software bill of materials (BOM) featuring vSphere 8.0 U2b, vSAN 8.0 U2b, and NSX 4.1.2.3. Read on for more details…
Cloud Foundation on VxRail customers can now benefit from the latest, more scalable, and robust 16th generation hardware platforms. This includes a full spectrum of hybrid, all-flash, and all NVMe options that have been qualified to run VxRail 8.0.210 software. This is fantastic news as these new hardware options bring many technical innovations, which my colleagues discussed in detail in previous blogs.
These new hardware platforms are based on Intel® 4th Generation Xeon® Scalable processors, which increase VxRail core density per socket to 56 (112 max per node). They also come with built-in Intel® AMX accelerators (Advanced Matrix Extensions) that support AI and HPC workloads without the need for additional drivers or hardware.
VxRail on the 16th generation hardware supports deployments with either vSAN Original Storage Architecture (OSA) or vSAN Express Storage Architecture (ESA). The VP-760 and VE-660 can take advantage of vSAN ESA’s single-tier storage architecture, which enables RAID-5 resiliency and capacity with RAID-1 performance.
This table summarizes the configurations of the newly added platforms:
To learn more about the VE-660 and VP-760 platforms, please check Mike Athanasiou’s VxRail’s Latest Hardware Evolution blog. To learn more about Intel® AMX capability set, make sure to check out the VxRail and Intel® AMX, Bringing AI Everywhere blog, authored by Una O’Herlihy.
Customers who have already upgraded to VCF 5.x are familiar with the concept of the skip-level upgrade, which allows them to upgrade directly to the latest 5.x release without performing upgrades to interim versions. This significantly reduces the time required to perform the upgrade and enhances the overall upgrade experience. VCF 5.1.1 introduces so-called “N-3” upgrade support (as illustrated in the following diagram), which extends the skip-level upgrade to VCF 4.4.x. Customers can now perform a direct LCM upgrade from VCF 4.4.x, 4.5.x, 5.0.x, or 5.1.0 to VCF 5.1.1.
Starting with VCF 5.1.1, vCenter Server, ESXi, and TKG component licenses are entered using a single “VCF Solution License” key. This simplifies licensing by minimizing the number of individual component keys that require separate management. VMware NSX Networking, HCX, and VMware Aria Suite components are automatically entitled from the vCenter Server post-deployment. The single licensing key and existing keyed licenses will continue to work in parallel.
The other significant licensing change is the deprecation of VCF+ licensing, which has been replaced by the new subscription model.
VMware Cloud Foundation 5.1.1 allows deploying a new VCF instance in evaluation mode without entering license keys. An administrator has 60 days to enter licensing for the deployment, and SDDC Manager is fully functional during this period. The workflows for expanding a cluster, adding a new cluster, or creating a VI workload domain also provide an option to license later within a 60-day timeframe.
For more comprehensive information about changes in VCF licensing, please consult the VMware website.
One of the notable enhancements in VxRail 8.0.210 is the adoption of the vSphere Client remote plug-in architecture. This follows the latest vSphere architecture guidelines: local plug-ins are deprecated in vSphere 8.0 and won’t be supported in vSphere 9.0. The remote plug-in architecture integrates plug-in functionality without running it inside vCenter Server. It’s a more robust architecture that separates vCenter Server from plug-ins and provides more security, flexibility, and scalability when choosing programming frameworks and introducing new features. Starting with 8.0.210, a new VxRail Manager remote plug-in is deployed in the VxRail Manager Appliance.
VxRail 8.0.210 also includes several small features based on customer feedback that combine to improve the reliability of the LCM experience. These include:
Another group of features contributes to overall improved serviceability and visibility into the system:
With VCF 5.1.1, VMware introduces VMware Private AI Foundation with NVIDIA as Initial Access. Dell Technologies Engineering intends to validate this feature when it is generally available.
This solution aims to enable enterprise customers to adopt Generative AI capabilities more easily and securely by providing enterprises with a cost-effective, high-performance, and secure environment for delivering business value from Large Language Models (LLMs) using their private data.
The new VCF 5.1.1 on VxRail 8.0.210 release is an excellent option for customers looking for a hardware refresh, Gen AI-ready infrastructure to run more demanding workloads, or to achieve higher consolidation ratios. Additional enhancements introduced in the core VxRail functionality improve the overall LCM experience, serviceability, and visibility into the system.
Thank you for your time, and please check the additional resources if you’d like to learn more.
Author: Karol Boguniewicz
Twitter: @cl0udguide
Fri, 09 Feb 2024 16:07:26 -0000
Well-managed companies are always looking for new ways to increase efficiency and reduce costs while maintaining excellence in the quality of their products and services. Hence, IT departments and service providers look at the cloud and Application Programming Interfaces (APIs) as the enablers for automation, driving efficiency, consistency, and cost-savings.
This blog helps you get started with VxRail API by grouping the most useful VxRail API resources available from various public sources in one place. This list of resources is updated every few months. Consider bookmarking this blog as it is a useful reference.
Before jumping into the list, it is essential to answer some of the most obvious questions:
What is VxRail API?
VxRail API is a feature of the VxRail HCI System Software that exposes management functions with a RESTful application programming interface. It is designed for ease of use by VxRail customers and ecosystem partners who want to better integrate third-party products with VxRail systems. VxRail API is:
Why is VxRail API relevant?
VxRail API enables you to use the full power of automation and orchestration services across your data center. This extensibility enables you to build and operate infrastructure with cloud-like scale and agility. It also streamlines the integration of the infrastructure into your IT environment and processes. Instead of manually managing your environment through the user interface, the software can programmatically trigger and run repeatable operations.
More customers are embracing DevOps and Infrastructure as Code (IaC) models because they need reliable and repeatable processes to configure the underlying infrastructure resources that are required for applications. IaC uses APIs to store configurations in code, making operations repeatable and greatly reducing errors.
How can I start? Where can I find more information?
To help you navigate through all available resources, I grouped them by level of technical difficulty, starting with 101 (the simplest, explaining the basics, use cases, and value proposition), through 201, up to 301 (the most in-depth technical level).
101 Level
201 Level
Note: If you’re a customer, you will need to ask your Dell or partner account team to create a session for you and provide a hyperlink to access this lab.
301 Level
I hope you find this list useful. If so, make sure that you bookmark this blog for your reference. I will update it over time to include the latest collateral.
Enjoy your Infrastructure as Code journey with the VxRail API!
Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter: @cl0udguide
Thu, 30 Nov 2023 17:43:03 -0000
In the previous blog, Infrastructure as Code with VxRail made easier with Ansible Modules for Dell VxRail, I introduced the modules which enable the automation of VxRail operations through code-driven processes using Ansible and VxRail API. This approach not only streamlines IT infrastructure management but also aligns with Infrastructure as Code (IaC) principles, benefiting both technical experts and business leaders.
The corresponding demo is available on YouTube:
The previous blog laid the foundation for the continued journey, where we explore more advanced Ansible automation techniques with a focus on satellite node management in the VxRail ecosystem. I highly recommend checking out that blog before diving deeper into the topics discussed here, as it will make the concepts in this demo much easier to absorb.
VxRail satellite nodes are individual nodes designed specifically for deployment in edge environments and are managed through a centralized primary VxRail cluster. Satellite nodes do not leverage vSAN to provide storage resources and are an ideal solution for those workloads where the SLA and compute demands do not justify even the smallest of VxRail 2-node vSAN clusters.
Satellite nodes enable customers to achieve uniform and centralized operations within the data center and at the edge, ensuring consistent VxRail management throughout. This includes comprehensive, automated lifecycle management for VxRail satellite nodes, encompassing both hardware and software and significantly reducing the need for manual intervention.
To learn more about satellite nodes, please check the following blogs from my colleagues:
You can leverage the Ansible Modules for Dell VxRail to automate various VxRail operations, including more advanced use cases, like satellite node management. It’s possible today by using the provided samples available in the official repository on GitHub.
Have a look at the following demo, which leverages the latest available version of these modules at the time of recording – 2.2.0. In the demo, I discuss and demonstrate how you can perform the following operations from Ansible:
The examples used in the demo are slightly modified versions of the following samples from the modules' documentation on GitHub. If you’d like to replicate these in your environment, here are the links to the corresponding samples for your reference, which need slight modification:
In the demo, you can also observe one of the interesting features of the Ansible Modules for Dell VxRail that is shown in action but not explained explicitly. Some of the VxRail API functions are available in multiple versions: typically, a new version is introduced when new features arrive in the VxRail HCI System Software, while the previous versions are retained to provide backward compatibility. An example is “GET /vX/system”, which is used to retrieve the number of satellite nodes; this property was introduced in version 4. If you do not specify a version, the modules automatically select the latest supported version, simplifying the end-user experience.
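The version-selection behavior can be sketched in a few lines of Python. This is an illustrative model of the logic, not the modules' actual implementation, and the version list for the system endpoint is hypothetical:

```python
def pick_api_version(supported_versions, requested=None):
    """Mimic the modules' version handling: honor an explicitly requested
    API version if given, otherwise fall back to the latest version the
    target system supports."""
    versions = sorted(supported_versions)
    if requested is None:
        return versions[-1]  # auto-select the latest supported version
    if requested not in versions:
        raise ValueError(f"API version v{requested} is not supported")
    return requested

# Hypothetical version list for the system endpoint; per the text above,
# the satellite-node count property arrived in version 4.
system_versions = [1, 2, 3, 4]
assert pick_api_version(system_versions) == 4       # modules default to latest
assert pick_api_version(system_versions, 2) == 2    # an explicit pin still works
```

Defaulting to the latest version keeps playbooks short, while the explicit parameter preserves backward compatibility for scripts written against an older API revision.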
The above demo of satellite node management using Ansible was recorded in the VxRail API hands-on lab available in the Dell Technologies Demo Center. With the help of the Demo Center team, we built this lab as a self-education tool for learning the VxRail API and how it can be used to automate VxRail operations using various methods: exploring the built-in, interactive, web-based documentation, the VxRail API PowerShell Modules, the Ansible Modules for Dell VxRail, and Postman.
The hands-on lab provides a safe VxRail API sandbox, where you can easily start experimenting by following the exercises from the lab guide or trying some other use cases on your own without any concerns about making configuration changes to the VxRail system.
The lab was refreshed for the Dell Technologies World 2023 conference to leverage VxRail HCI System Software 8.0.x and the latest version of the Ansible Modules. If you’re a Dell partner, you should have access directly, and if you’re a customer who’d like to get access – please contact your Account SE from Dell or Dell Partner. The lab is available in the catalog as: “HOL-0310-01 - Scalable Virtualization, Compute, and Storage with the VxRail REST API”.
In the fast-evolving landscape of IT infrastructure, the ability to automate operations efficiently is not just a convenience but a necessity. With the Ansible Modules for Dell VxRail, we've explored how this necessity can be met, using satellite node management as an example. We encourage you to embrace the full potential of VxRail automation using the VxRail API with Ansible or other tools. If this is new to you, you can gain experience by experimenting with the hands-on lab available in the Demo Center catalog.
Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter/X: @cl0udguide
LinkedIn: https://www.linkedin.com/in/boguniewicz/
Thu, 22 Jun 2023 13:00:59 -0000
The latest release of the co-engineered turnkey hybrid cloud platform is now available, and I wanted to take this great opportunity to discuss its enhancements.
Many new features are included in this major release, including support for the latest VCF and VxRail software component versions based on the latest vSphere 8.0 U1 virtualization platform generation, and more. Read on for the details!
This is the most significant feature our customers have been waiting for. In the past, due to significant architectural changes between major VCF releases and their SDDC components (such as NSX), a migration was required to move from one major version to the next. (Moving from VCF 4.x to VCF 5.x is considered a major version upgrade.) In this release, this type of upgrade is now drastically improved.
After the SDDC Manager has been upgraded to version 5.0 (by downloading the latest SDDC Manager update bundle and performing an in-place automated SDDC Manager orchestrated LCM update operation), an administrator can follow the new built-in SDDC Manager in-place upgrade workflow. The workflow is designed to assist in upgrading the environment without needing any migrations. Domains and clusters can be upgraded sequentially or in parallel. This reduces the number and duration of maintenance windows, allowing administrators to complete an upgrade in less time. Also, VI domain skip-level upgrade support allows customers to run VCF 4.3.x or VCF 4.4.x BOMs in their domains to streamline their upgrade path to VCF 5.0, by skipping intermediary VCF 4.4.x and 4.5.x versions respectively. All this is performed automatically as part of the native SDDC Manager LCM workflows.
What does this look like from the VCF on VxRail administrator’s perspective? The administrator is first notified that a new SDDC Manager 5.0 upgrade is available. Administrators will be guided first to update their SDDC Manager instance to version 5.0. With SDDC Manager 5.0 in place, administrators can take advantage of the new enhancements which streamline the in-place upgrade process that can be used for the remaining components in the environment. These enhancements follow VMware best practices, reduce risk, and allow administrators to upgrade the full stack of the platform in a staged manner. These enhancements include:
The following image highlights part of the new upgrade experience from the SDDC Manager UI. First, on the updates tab for a given domain, we can see the availability of the upgrade from VCF 4.5.1 to VCF 5.0.0 on VxRail 8.0.100 (Note: In this example, the first upgrade bundle for the SDDC Manager 5.0 was already applied.)
When the administrator clicks View Bundles, SDDC Manager displays a high-level workflow that highlights the upgrade bundles that would be applied, and in which order.
To see the in-place upgrade in action, check out the following demo:
Now let’s dive a little deeper into the upgrade workflow steps. The following diagram depicts the end-to-end workflow for performing an in-place LCM upgrade from VCF 4.3.x/4.4.x/4.5.x to VCF 5.0 for the management domain.
The in-place upgrade workflow for the management domain consists of the following six steps:
Upgrading workload domains follows a similar workflow.
If performed manually, the in-place upgrade to VCF 5.0 on VxRail 8.0.100 from previous releases would be error-prone and time-consuming. The guided, simplified, and automated experience now provided in SDDC Manager 5.0 greatly reduces the effort and risk for customers by helping them perform this operation on their own in a fully controlled manner, providing a much better user experience and better value.
Keeping a large-scale cloud environment in a healthy, well-managed state is very important to achieve the desired service levels and increase the success rate of LCM operations. In SDDC Manager 5.0, prechecks have been further enhanced and are now context aware. But what does this mean?
Before performing the upgrade, administrators can choose to run a precheck against a specific VCF release (“Target Version”) or run a “General Upgrade Readiness” precheck. Each type of precheck allows the administrator to select the specific VCF on VxRail objects to run against: an entire domain, a VxRail cluster, or even an individual SDDC software component such as NSX or the vRealize/Aria Suite components. Running a precheck at the per-VxRail-cluster level, for instance, can be useful for large workload domains configured with multiple clusters; it can reduce planned maintenance windows by allowing components of the domain to be updated separately.
But what is the difference between the “Target Version” and “General Upgrade Readiness” precheck types? Let me explain:
The following screenshot shows what this feature looks like from the system administrator perspective in the SDDC Manager UI:
Another significant new feature I’d like to highlight is the introduction of Isolated workload domains. This has a significant impact on both the security and scalability of the platform.
In the past, VMware Cloud Foundation 4.x deployments by design have been configured to use a single SSO instance shared between the management domain and all VI workload domains (WLDs). All WLDs’ vCenter Servers are connected to each other using Enhanced Linked Mode (ELM). After a user is logged into SDDC Manager, ELM provides seamless access to all the products in the stack without being challenged to authenticate again.
VCF 5.0 on VxRail 8.0.100 deployments allow administrators to configure new workload domains using separate SSO instances. These are known as Isolated domains. This capability can be very useful, especially for Managed Service Providers (MSPs) who can allocate Isolated workload domains to different tenants with their own SSO domains for better security and separation between the tenant environments. Each Isolated SSO domain within VCF 5.0 on VxRail 8.0.100 is also configured with its own NSX instance.
As a positive side effect of this new design, the maximum number of supported domains per VCF on VxRail instance has increased to 25 (this includes the management domain and assumes all workload domains are deployed as Isolated domains). This scalability enhancement is possible because Isolated domains do not participate in ELM with the management SSO domain, so the maximum of 15 vCenter Servers per ELM instance no longer limits the total number of domains. In other words, increasing the security and separation between the workload domains can also increase the overall scalability of the VCF on VxRail cloud platform.
The following diagram illustrates how customers can increase the scalability of the VCF on VxRail platform by adding isolated domains with their dedicated SSO:
What does this new feature look like from the VCF on VxRail administrator’s perspective?
When creating a new workload domain, there’s a new option in the UI wizard allowing either to join the new WLD into the management SSO domain or create a new SSO domain:
After the SSO domain is created, its information is shown in the workload domain summary screen:
These two features should be discussed together. Beginning with VCF 5.0 on VxRail 8.0.100, enhancements to the SDDC Manager LCM service enable more granular tracking of which current and previous VxRail versions are compatible with which current and previous VCF versions. This makes VCF on VxRail more flexible by supporting different VxRail versions within a given VCF release. Admins can apply and track asynchronous VxRail patches outside of the 1:1-mapped, fully compatible VCF on VxRail release, rather than waiting for that release to become available. Information about available and supported release versions for VCF and VxRail is integrated into the SDDC Manager UI and API.
VCF 5.0 on VxRail 8.0.100 introduces the ability for each workload domain to have different versions as far back as N-2 releases, where N is the current version on the management domain. With this new flexibility, VCF on VxRail administrators are not forced to upgrade workload domain versions to match the management domain immediately. In the context of VCF 5.0 on VxRail 8.0.100, this can help admins plan upgrades over a long period of time when maintenance windows are tight.
Each VMware Cloud Foundation release introduces several new features and configuration changes to its underlying components. Update bundles contain these configuration changes to ensure an upgraded VCF on VxRail instance will function like a greenfield deployment of the same version. Configuration drift awareness allows administrators to view parameters and configuration changes as part of the upgrade. An example of configuration drift is adding a new service account or ESXi lockdown enhancement. This added visibility helps customers better understand new features and capabilities and their impact on their deployments.
The following screenshot shows how this new feature appears to the administrator of the platform:
SDDC Manager 5.0 allows administrators to run a precheck for vRealize/VMware Aria Suite component compatibility. The vRealize/Aria Suite component precheck is run before upgrading core SDDC components (NSX, vCenter Server, and ESXi) to a newer VCF target release, and can be run from VCF 4.3.x on VxRail 7.x and above. The precheck will verify if all existing vRealize/Aria Suite components will be compatible with core SDDC components of a newer VCF target release by checking them against the VMware Product Interoperability Matrix.
VCF 5.0 on VxRail 8.0.100 contains improved workflows that orchestrate the process of configuring Certificate Authorities and Certificate Signing Requests. Administrators can better manage certificates in VMware Cloud Foundation, with improved certificate upload and installation, and new workflows to ensure certificate validity, trust, and proper installation. These new workflows help to reduce configuration time and minimize configuration errors.
Supplemental storage can be used to add storage capacity to any domain or cluster within VCF, including the management domain. It is configured as a Day 2 operation. What’s new in VCF 5.0 on VxRail 8.0.100 is the support for the supplemental storage to be connected with the NVMe over TCP protocol.
Administrators can benefit from using NVMe over TCP storage in a standard Ethernet-based networking environment. NVMe over TCP can be more cost-efficient than NVMe over FC and eliminates the need to deploy and manage a Fibre Channel fabric.
VMware Cloud Foundation+ has been enhanced for the VCF 5.0 release, allowing greater scale and integrated lifecycle management. First, the scalability increased – it allows administrators to connect up to eight domains per VCF instance (including the management domain) to the VMware Cloud portal. Second, updates to the Lifecycle Notification Service within the VMware Cloud portal provide visibility of pending updates to any component within the VCF+ Inventory. Administrators can initiate updates through the VCF+ Lifecycle Management Notification Service, which connects back to the specific SDDC Manager instance to be updated. From here, administrators can use familiar SDDC Manager prechecks and workflows to update their environment.
A new VxRail hardware platform is now supported, providing customers more flexibility and choice. The single-socket VxRail P670F, a performance-focused platform, is now supported in VCF on VxRail deployments and can offer customers savings on software licensing in specific scenarios.
While not directly tied to VCF 5.0 on VxRail 8.0.100 release, VMware has also released the latest version of the VCF Async Patch Tool. This latest version now supports applying patches to VCF 5.0 on VxRail 8.0.100 environments.
VMware Cloud Foundation 5.0 on Dell VxRail 8.0.100 is a new major platform release based on the latest generation of VMware’s vSphere 8 hypervisor. It provides several exciting new capabilities, especially around automated upgrades and lifecycle management. This is the first major release that provides guided, simplified upgrades between the major releases directly in the SDDC Manager UI, offering a much better experience and more value for customers.
All of this makes the new VCF on VxRail release a more flexible and scalable hybrid cloud platform, with simpler upgrades from previous releases, and lays the foundation for even more beneficial features to come.
Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter: @cl0udguide
VMware Cloud Foundation on Dell VxRail Release Notes
VxRail page on DellTechnologies.com
VCF on VxRail Interactive Demo
Thu, 26 Jan 2023 17:32:42 -0000
|Read Time: 0 minutes
Many customers are looking at Infrastructure as Code (IaC) as a better way to automate their IT environment, which is especially relevant for those adopting DevOps. However, not many customers are aware of the capability of accelerating IaC implementation with VxRail, which we have offered for some time already—Ansible Modules for Dell VxRail.
What is it? It's the Ansible collection of modules, developed and maintained by Dell, that uses the VxRail API to automate VxRail operations from Ansible.
By the way, if you're new to the VxRail API, first watch the introductory whiteboard video available on YouTube.
Ansible Modules for Dell VxRail are well-suited for IaC use cases. They are written in such a way that all requests are idempotent and hence fault-tolerant. This means that the result of a successfully performed request is independent of the number of times it is run.
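As a generic illustration of that property (plain Python, not actual module code), an idempotent operation converges on a desired state, so re-running it is a harmless no-op:

```python
# Illustrative only: the "ensure desired state" pattern behind idempotent
# Ansible modules (not actual VxRail module code).

def ensure_host_in_cluster(cluster, host):
    """Add the host only if it is absent; report whether anything changed."""
    if host in cluster:
        return {"changed": False, "cluster": list(cluster)}
    return {"changed": True, "cluster": list(cluster) + [host]}

first = ensure_host_in_cluster(["node-01", "node-02"], "node-03")
second = ensure_host_in_cluster(first["cluster"], "node-03")
# The first run changes the state; the second run is a no-op that leaves
# the cluster in exactly the same end state.
```

Because a repeated request reports "changed: false" instead of failing or duplicating work, a playbook that was interrupted can simply be run again from the start.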
Besides that, instead of just providing a wrapper for individual API functions, we automated holistic workflows (for instance, cluster deployment, cluster expansion, LCM upgrade, and so on), so customers don't have to figure out how to monitor the operation of the asynchronous VxRail API functions. These modules provide rich functionality and are maintained by Dell; this means we're introducing new functionality over time. They are already mature—we recently released version 1.4.
Finally, we are also reducing the risk for customers willing to adopt the Ansible modules in their environment, thanks to the community support model, which allows you to interact with the global community of experts. From the implementation point of view, the architecture and end-user experience are similar to the modules we provide for Dell storage systems.
Ansible Modules for Dell VxRail are available publicly from the standard code repositories: Ansible Galaxy and GitHub. You don't need a Dell Support account to download and start using them.
The requirements for the specific version are documented in the "Prerequisites" section of the description/README file.
In general, you need a Linux-based server with the supported Ansible and Python versions. Before installing the modules, you have to install a corresponding, lightweight Python SDK library named "VxRail Ansible Utility," which is responsible for the low-level communication with the VxRail API. You must also meet the minimum version requirements for the VxRail HCI System Software on the VxRail cluster.
This is a summary of requirements for the latest available version (1.4.0) at the time of writing this blog:
Ansible Modules for Dell VxRail | VxRail HCI System Software version | Python version | Python library (VxRail Ansible Utility) version | Ansible version |
1.4.0 | 7.0.400 | 3.7.x | 1.4.0 | 2.9 and 2.10 |
You can install the SDK library by using git and pip commands. For example:
git clone https://github.com/dell/ansible-vxrail-utility.git
cd ansible-vxrail-utility/
pip install .
Then you can install the collection of modules with this command:
ansible-galaxy collection install dellemc.vxrail:1.4.0
Testing
After the successful installation, we're ready to test the modules and communication between the Ansible automation server and VxRail API.
I recommend performing that check with a simple module (and corresponding API function) such as dellemc_vxrail_getsysteminfo, using GET /system to retrieve VxRail System Information.
Let's have a look at this example (you can find the source code on GitHub):
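A minimal sketch of such a playbook, with illustrative values, looks roughly like this (the parameter names follow the pattern documented in the module docs on GitHub; verify them against your collection version):

```yaml
---
- name: Retrieve VxRail system information
  hosts: localhost
  connection: local
  vars:
    vxmip: 192.168.10.100                  # VxRail Manager IP (illustrative)
    vcadmin: administrator@vsphere.local
    vcpasswd: P@ssw0rd!                    # use Ansible Vault in real playbooks
  tasks:
    - name: Get system information (wraps GET /system)
      dellemc.vxrail.dellemc_vxrail_getsysteminfo:
        vxmip: "{{ vxmip }}"
        vcadmin: "{{ vcadmin }}"
        vcpasswd: "{{ vcpasswd }}"
      register: system_info

    - name: Show the result
      ansible.builtin.debug:
        var: system_info
```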
Note that this playbook is run on a local Ansible server (localhost), which communicates with the VxRail API running on the VxRail Manager appliance using the SDK library. In the vars section, we need to provide, at a minimum, the authentication to VxRail Manager for calling the corresponding API function. We could move these variable definitions to a separate file and include the file in the playbook with vars_files. We could also store sensitive information, such as passwords, in an encrypted file using the Ansible vault feature. However, for the simplicity of this example, we are not using this option.
After running this playbook, we should see output similar to the following example (in this case, this is the output from the older version of the module):
Now let's have a look at a bit more sophisticated, yet still easy-to-understand, example. A typical operation that many VxRail customers face at some point is cluster expansion. Let's see how to perform this operation with Ansible (the source code is available on GitHub):
In this case, I've exported the definitions of the sensitive variables, such as vcpasswd, mgt_passwd, and root_passwd, into a separate, encrypted Ansible vault file, sensitive-vars.yml, to follow the best practice of not storing them in the clear text directly in playbooks.
As you can expect, besides the authentication, we need now to provide more parameters—configuration of the newly added host—defined in the vars section. We select the new host from the pool of available hosts, using the PSNT identifier (host_psnt variable).
This is an example of an operation performed by an asynchronous API function. Cluster expansion is not something that is completed immediately but takes minutes. Therefore, the progress of the expansion is monitored in a loop until it finishes or the number of retries is passed. If you communicated with the VxRail API directly by using the URI module from your playbook, you would have to take care of such monitoring logic on your own; here, you can use the example we provide.
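The pattern described above can be sketched like this (module names, parameters, and status values are illustrative; use the actual cluster-expansion sample on GitHub as your starting point):

```yaml
---
- name: Expand a VxRail cluster with a new node
  hosts: localhost
  connection: local
  vars_files:
    - sensitive-vars.yml        # vault-encrypted: vcpasswd, mgt_passwd, root_passwd
  vars:
    vxmip: 192.168.10.100       # illustrative values
    vcadmin: administrator@vsphere.local
    host_psnt: ABCDEFG0000000   # PSNT of the node selected from the available pool
    hostname: vxrail-node-04
    mgt_ip: 192.168.10.104
  tasks:
    - name: Start cluster expansion (asynchronous VxRail API operation)
      dellemc.vxrail.dellemc_vxrail_cluster_expansion:
        vxmip: "{{ vxmip }}"
        vcadmin: "{{ vcadmin }}"
        vcpasswd: "{{ vcpasswd }}"
        host_psnt: "{{ host_psnt }}"
        hostname: "{{ hostname }}"
        mgt_ip: "{{ mgt_ip }}"
        mgt_passwd: "{{ mgt_passwd }}"
        root_passwd: "{{ root_passwd }}"
      register: expansion

    - name: Monitor the expansion request until it completes or retries run out
      dellemc.vxrail.dellemc_vxrail_getrequeststatus:
        vxmip: "{{ vxmip }}"
        vcadmin: "{{ vcadmin }}"
        vcpasswd: "{{ vcpasswd }}"
        request_id: "{{ expansion.request_id }}"
      register: status
      until: status.request_info.state == "COMPLETED"
      retries: 60
      delay: 60
```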
You can watch the operation of the cluster expansion Ansible playbook with my commentary in this demo:
Getting help
The primary source of information about the Ansible Modules for Dell VxRail is the documentation available on GitHub. There you'll find all the necessary details on all currently available modules: a quick description, supported endpoints (VxRail API functions used), required and optional parameters, return values, and the location of the log file (the modules have a built-in logging feature to simplify troubleshooting; logs are written in the /tmp directory on the Ansible automation server). The GitHub documentation also contains multiple samples showing how to use the modules, which you can easily clone and adjust as needed to the specifics of your VxRail environment.
There's also built-in documentation for the modules, accessible with the ansible-doc command.
Finally, the Dell Automation Community is a public discussion forum where you can post your questions and ask for help as needed.
I hope you now understand the Ansible Modules for Dell VxRail and how to get started. Let me quickly recap the value proposition for our customers. The modules are well-suited for IaC use cases, thanks to automating holistic workflows and idempotency. They are maintained by Dell and supported by the Dell Automation Community, which reduces risk. These modules are much easier to use than the alternative of accessing the VxRail API on your own. We provide many examples that can be adjusted to the specifics of the customer’s environment.
To learn more, see these resources:
The following links provide additional information:
Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter: @cl0udguide
Thu, 16 Jun 2022 18:06:44 -0000
|Read Time: 0 minutes
In my previous blog, Protecting VxRail from Power Disturbances, I described the first API-integrated solution that helps customers preserve data integrity on VxRail if there are unplanned power events. Today, I'm excited to introduce another solution that resulted from our close partnership with Schneider Electric (APC).
Over the last few years, VxRail has become a critical HCI system and data-center building block for over 15,000 customers who have deployed more than 220,000 nodes globally. When HCI was first introduced, it was often considered for specific workloads such as VDI or ROBO locations. However, with the evolution of hardware and software capabilities, VxRail became a catalyst in data-center modernization, deployed across various use cases from core to cloud to edge. Today, customers are deploying VxRail for mission-critical workloads because it is powerful enough to meet the most demanding requirements for performance, capacity, availability, and rich data services.
Dell Technologies is a leader in data-protection solutions and offers a portfolio of products that can fulfill even the most demanding RPO and RTO requirements from customers. In addition to using traditional data-protection solutions, it is best practice to use a UPS to protect the infrastructure and ensure data integrity if there are unplanned power events. In this blog, I want to highlight a new solution from Schneider Electric, the provider of APC Smart-UPS systems.
Schneider Electric is one of Dell Technologies’ strategic partners in the Extended Technologies Complete Program. It provides Dell Technologies with APC UPS and IT rack enclosures offering a comprehensive solution set of infrastructure hardware, monitoring, management software, and service options.
PowerChute Network Shutdown in version 4.5 seamlessly integrates with VxRail by communicating over the network with the APC UPS. If there is a power outage, PowerChute can gracefully shut down VxRail clusters using the VxRail API. As a result of this integration, PowerChute can run on the same protected VxRail cluster, saving space and reducing hardware costs.
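Because the integration goes through the public VxRail API, any client can drive the same kind of graceful shutdown. The following is a rough sketch of what building such a request might look like; the endpoint path, the dryrun flag, and the helper function are assumptions for illustration, not PowerChute's actual code:

```python
import json

# Hypothetical helper that builds a cluster-shutdown request the way a
# client of the VxRail API might; the endpoint and payload are illustrative.
def build_shutdown_request(vxm_ip, dry_run=True):
    return {
        "method": "POST",
        "url": f"https://{vxm_ip}/rest/vxm/v1/cluster/shutdown",
        "body": json.dumps({"dryrun": dry_run}),
    }

# A cautious client issues a dry run first to validate that the cluster can
# shut down cleanly, then repeats the call with dry_run=False to execute it.
req = build_shutdown_request("192.168.10.100")
```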
Solution components:
Key benefits of this solution include:
This is easiest to describe using the following diagram, which covers the steps taken in a power event and when the event is cleared:
How PowerChute Network Shutdown works with VxRail
I highly recommend watching the demo of this solution in action, which is listed in the Additional resources section at the end of this blog.
Protection against unplanned power events should be a part of a business continuity strategy for all customers who run their critical workloads on VxRail. This practice ensures data integrity by enabling automated and graceful shutdown of VxRail clusters. Customers now have more choice in providing such protection, with the new version of PowerChute Network Shutdown software for APC UPS systems integrated with VxRail API and validated with VxRail.
Website: Schneider Electric APC and Dell Technologies Alliance Website
Solution brochure: PowerChute Network Shutdown v4.5 Brochure
Solution demo video: PowerChute Network Shutdown v4.5 VxRail Technical Demo
Video: APC PowerChute Network Shutdown 4.5 and Dell VxRail Integration
Previous blog: Protecting VxRail from Power Disturbances
Author:
Karol Boguniewicz, Senior Principal Engineering Technologist, Dell Technologies
LinkedIn: Karol Boguniewicz
Twitter: @cl0udguide
Fri, 08 Apr 2022 18:14:37 -0000
|Read Time: 0 minutes
Cybersecurity and protection against ransomware attacks are among the top priorities for most customers who have successfully implemented or are going through a digital transformation. According to the ESG’s 2022 Technology Spending Intentions Survey:
The data clearly shows that this area is one of the top concerns for our customers today. They need solutions that significantly simplify increasing cybersecurity activities due to a perceived skills shortage.
It is worth reiterating the critical role that networking plays within Hyperconverged Infrastructure (HCI). In contrast to legacy three-tier architectures, which typically have a dedicated storage network and storage, HCI architecture is more integrated and simplified. Its design lets you share the same network infrastructure for workload-related traffic and intercluster communication with the software-defined storage. The accessibility of the running workloads (from the external network) depends on the reliability of this network infrastructure, and on setting it up properly. The proper setup also impacts the performance and availability of the storage and, as a result, the whole HCI system. To prevent human error, it is best to employ automated solutions to enforce configuration best practices.
VxRail as an HCI system supports VMware NSX, which provides tremendous value for increasing cybersecurity in the data center, with features like microsegmentation and AI-based behavioral analysis and prevention of threats. Although NSX is fully validated with VxRail as a part of the VMware Cloud Foundation (VCF) on VxRail platform, setting it up outside of VCF requires strong networking skills. The comprehensive capabilities of this network virtualization platform might be overwhelming for VMware vSphere administrators who are not networking experts. What if you only want to consume the security features? This scenario might present a common challenge, especially for customers who are deploying small VxRail environments with few nodes and do not require the full VCF on VxRail stack.
The great news is that VMware recognized these customer challenges and now offers a simplified method to deploy NSX for security use cases. This method fits the improved operational experience our customers are used to with VxRail. This experience is possible with a new VMware vCenter Plug-in for NSX, which we introduce in this blog.
NSX is a comprehensive virtualization platform that provides advanced networking and security capabilities that are entirely decoupled from the physical infrastructure. Implementing networking and security in software, distributed across the hosts responsible for running virtual workloads, provides significant benefits:
The networking benefits are evident for large deployments, with NSX running in almost all Fortune 100 companies and many medium scale businesses. In today’s world of widespread viruses, ransomware, and even cyber warfare, the security aspect of NSX built on top of the NSX distributed firewall (DFW) is relevant to vSphere customers, regardless of their size.
The NSX DFW is a software firewall instantiated on the vNICs of the virtual machines in the data center. Thanks to its inline position, it provides maximum filtering granularity because it can inspect the traffic coming in and going out of every virtual machine without requiring redirection of the traffic to a security appliance, as shown in the following figure. It also moves along with the virtual machine during vMotion and maintains its state.
Figure 1: Traditional firewall appliance compared to the NSX DFW
The NSX DFW state-of-the-art capabilities are configured centrally from the NSX Manager and allow implementing security policies independently of the network infrastructure. This method makes it easy to implement microsegmentation and compliance requirements without dedicating racks, servers, or subnets to a specific type of workload. With the NSX DFW, security teams can deploy advanced threat prevention capabilities such as distributed IDS/IPS, network sandboxing, and network traffic analysis/network detection and response (NTA/NDR) to protect against known and zero-day threats.
Many NSX customers who are satisfied with the networking capability of vSphere run their production environment on a VDS with VLAN-backed dvportgroups. They deploy NSX for its security features only, and do not need its advanced networking components. Until now, those customers had to migrate their virtual machines to NSX-backed dvportgroups to benefit from the NSX DFW. This migration is easy but managing networking from NSX modifies the workflow of all the teams, including those teams that are not concerned by security:
Figure 2: Traditional NSX deployment
Starting with NSX 3.2, you can run NSX security on a regular VDS, without introducing the networking components of NSX. The security team receives all the benefits of NSX DFW, and there is no impact to any other team:
Figure 3: NSX Security with vCenter Plugin
Even better, NSX can now integrate further with vCenter, thanks to a plug-in that allows you to configure NSX from the vCenter UI. This method means that NSX can be consumed as a simple security add-on for a traditional vSphere deployment.
First, we need to ensure that our VxRail environment meets the following requirements:
Running NSX in a vSphere environment consists of deploying a single NSX Manager virtual machine protected by vSphere HA. A shortcut in vCenter enables this step:
Figure 4: Deploy the NSX Manager appliance virtual machine from the NSX tab in vCenter
When the NSX Manager is up and running, it sets up a one-to-one association with vCenter and uploads the plug-in that presents the NSX UI in vCenter, as if NSX security is part of vCenter. The vCenter administrator becomes an effective NSX security administrator.
The next step, performed directly from the vCenter UI, is to enter the NSX license and select the cluster on which to install the NSX DFW binaries:
Figure 5: Select the clusters that will receive the NSX DFW binaries
After the DFW binaries are installed on the ESXi hosts, the NSX security is deployed and operational. You can exit the security configuration wizard (and configure directly from the NSX view in the vCenter UI) or let the wizard run.
After installing the NSX binaries on the ESXi hosts, the plug-in runs a wizard that guides you through the configuration of basic security rules according to VMware best practices. The wizard gives the vSphere administrator simple guidance for implementing a baseline configuration that the security team can build on later. There are three different steps in this guided workflow.
Perform the following steps, as shown in the following figure:
Figure 6: Example of group creation
Perform the following steps, as shown in the following figure:
Figure 7: Define the communication between environments using a graphical representation
After reviewing the configuration, publish the configuration to NSX:
Figure 8: Review DFW rules before exiting the wizard
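The three wizard steps (create groups, define the communication between them, review and publish) can be sketched as a small state machine. The class, group, and service names below are invented for illustration; the real wizard works on vSphere inventory objects and VM tags:

```python
class SecurityWizard:
    """Hedged sketch of the guided DFW workflow, not the actual plug-in code."""

    def __init__(self):
        self.groups = {}
        self.rules = []
        self.published = False

    def create_group(self, name, vm_names):
        """Step 1: group creation (see Figure 6)."""
        self.groups[name] = set(vm_names)

    def allow(self, src, dst, service):
        """Step 2: define communication between groups (see Figure 7)."""
        if src not in self.groups or dst not in self.groups:
            raise KeyError("define both groups before adding a rule")
        self.rules.append(
            {"src": src, "dst": dst, "service": service, "action": "ALLOW"})

    def publish(self):
        """Step 3: review the DFW rules and publish them to NSX (see Figure 8)."""
        self.published = True
        return list(self.rules)

wizard = SecurityWizard()
wizard.create_group("web", ["web-01", "web-02"])
wizard.create_group("db", ["db-01"])
wizard.allow("web", "db", "MySQL")
rules = wizard.publish()
```

Until `publish` is called, nothing reaches the data plane, which mirrors how the wizard lets you review the baseline ruleset before it takes effect.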
The full NSX UI is now available in vCenter. Select the NSX tab to access the NSX UI directly.
The new VMware vCenter Plug-in for NSX drastically simplifies the deployment and adoption of NSX with VxRail for security use cases. In the past, advanced knowledge of the network virtualization platform was required. A vSphere administrator can now deploy it easily, using an intuitive configuration wizard available directly from vCenter.
The VMware vCenter Plug-in for NSX provides the kind of simplified and optimized experience that VxRail customers are used to when managing their HCI environment. It also addresses a challenge that customers face today: improving security despite a perceived shortage of skills in this area. Because it can be configured easily and quickly, it makes the robust NSX security features accessible even to smaller HCI deployments.
VMworld 2021 Session: NET1483 - Deploy and Manage NSX-T via vCenter: A Single Console to Drive VMware SDDC
Planning Guide: Dell EMC VxRail Network Planning Guide – Physical and Logical Network Considerations and Planning
ESG Research Report: 2022 Technology Intentions Survey
Authors:
Francois Tallet, Technical Product Manager, VMware
Karol Boguniewicz, Senior Principal Engineering Technologist, Dell Technologies
Mon, 21 Sep 2020 14:08:45 -0000
Today (7/2), Dell Technologies is announcing General Availability of VMware Cloud Foundation 3.10.0.1 on VxRail 4.7.511.
Why this particular version? We were notified about an upcoming important patch for Cloud Foundation 3.10 from VMware, and we wanted to incorporate it into the GA version on VxRail for the best experience for our customers.
This new release introduces enhancements on both the VCF and the VxRail side of the platform.
Figure 1. ESXi Cluster-Level and Parallel Upgrades
Figure 2. NSX-T Cluster-Level and Parallel Upgrades
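The cluster-level parallelism shown in Figures 1 and 2 can be pictured as independent upgrade tasks running concurrently instead of one after another. The sketch below simulates this with a thread pool; the cluster names and the concurrency limit are illustrative, not SDDC Manager behavior:

```python
from concurrent.futures import ThreadPoolExecutor

def upgrade_cluster(name):
    """Placeholder for an ESXi or NSX-T cluster upgrade task."""
    return f"{name}: upgraded"

clusters = ["cluster-01", "cluster-02", "cluster-03", "cluster-04"]

# Serial LCM would consume one maintenance window per cluster; with
# cluster-level parallelism several upgrades proceed at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(upgrade_cluster, clusters))
```

The practical effect is the one the release notes emphasize: the same set of clusters fits into far fewer (or shorter) maintenance windows.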
Option to disable Application Virtual Networks (AVNs) during bring-up - AVNs deploy vRealize Suite components on NSX overlay networks and are recommended during bring-up, but customers can now disable them, for instance if they are not planning to use the vRealize Suite.
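Conceptually, this is a single opt-out switch in the bring-up parameters. The excerpt below is a hypothetical illustration; the key name `deploy_avn` is an assumption, not the actual Cloud Builder schema:

```python
# Hypothetical bring-up parameter excerpt (invented key names).
bringup_spec = {
    "sddc_id": "vxrail-mgmt-01",
    # Skip creating the NSX overlay networks that would host vRealize Suite
    # components, for customers not planning to deploy the vRealize Suite.
    "deploy_avn": False,
}
```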
VMware Cloud Foundation 3.10.0.1 on VxRail 4.7.511 provides several features that allow existing customers to upgrade their platform more efficiently than ever before. The updated LCM capabilities offer not only more efficiency (with parallelism) but also more flexibility in handling maintenance windows. With skip-level upgrade, available in this version as a professional service, it is also possible to get to this latest release much faster. This increases security and allows customers to get the most benefit from their existing investment in the platform. New customers will benefit from the broader spectrum of hardware options, including ruggedized (D-series) and AMD-based nodes.
Blog post about VCF 4.0 on VxRail 7.0: The Dell Technologies Cloud Platform – Smaller in Size, Big on Features
Press release: Dell Technologies Brings IT Infrastructure and Cloud Capabilities to Edge Environments
Blog post about new features in VxRail 4.7.510: VxRail brings key features with the release of 4.7.510
VCF on VxRail technical whitepaper
VMware Cloud Foundation 3.10 on Dell EMC VxRail Release Notes from VMware
Blog post about VCF 3.10 from VMware: Introducing VMware Cloud Foundation 3.10
Author: Karol Boguniewicz, Senior Principal Engineer, VxRail Technical Marketing
Twitter: @cl0udguide