Simplifying Security Operations for Dell HCI Platforms with NSX
Thu, 08 Sep 2022 16:58:04 -0000
Read Time: 0 minutes
Today, most companies in the IT space work to offer customers not only the best technology innovations, but also innovations that simplify their day-to-day operations.
One example of this is the new vCenter plug-in for NSX-T, introduced with vSphere 7.0 Update 3c and NSX-T 3.2. Through this new deployment method for NSX-T, management and operations users can now use NSX-T as a plug-in for vCenter, similar to how earlier versions of NSX were configured. Through wizard-assisted operations, security policies can easily be configured, deployed, and operated within vCenter.
Figure 1. The new vCenter plug-in for NSX-T simplifies security deployment and operations
For Dell HCI platforms such as VxRail, vSAN Ready Nodes, and PowerEdge servers hosting vSAN-based workloads, NSX becomes an optimal network and security engine.
Figure 2. Dell HCI platforms such as VxRail or vSAN Ready Nodes become the perfect targets for the new vCenter plug-in
The whole process is simple. It can be completed by following these steps:
- Install NSX-T Manager and provide a license key.
- Install the vCenter plug-in for NSX, the new method for configuring and operating NSX security.
- Configure the distributed firewall policies for the HCI cluster:
a. Define infrastructure services as needed (DNS, DHCP, custom…).
b. Create the environment to consume the defined infrastructure services.
- Define how the elements in the environment can communicate with each other.
- Define communication strategies for applications in the environment.
- Review and verify the defined security policies before they are published and take effect.
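For teams that want to automate alongside the wizard, the same distributed firewall policies can also be defined through the NSX-T declarative Policy REST API. The sketch below is illustrative only: the policy name, group paths, and service paths are assumptions, and the commented-out `requests.patch` call simply shows where such a payload would be sent on an NSX Manager.

```python
import json

def build_dfw_policy(name, rules):
    """Build an NSX-T Policy API payload for a distributed firewall
    security policy in the Application category."""
    return {
        "display_name": name,
        "category": "Application",
        "rules": [
            {
                "display_name": r["name"],
                "source_groups": r.get("sources", ["ANY"]),
                "destination_groups": r.get("destinations", ["ANY"]),
                "services": r.get("services", ["ANY"]),
                "action": r.get("action", "ALLOW"),
            }
            for r in rules
        ],
    }

# Hypothetical policy: allow DNS to a server group, then deny the rest
policy = build_dfw_policy(
    "hci-app-policy",
    [
        {"name": "allow-dns",
         "destinations": ["/infra/domains/default/groups/dns-servers"],
         "services": ["/infra/services/DNS"],
         "action": "ALLOW"},
        {"name": "default-deny", "action": "DROP"},
    ],
)

# The payload would typically be applied with something like:
# requests.patch(f"https://{nsx_manager}/policy/api/v1/infra/domains/"
#                "default/security-policies/hci-app-policy",
#                auth=(user, password), json=policy)
print(json.dumps(policy, indent=2))
```

Rules omitted from a dictionary default to ANY/ALLOW, mirroring how the wizard pre-fills fields you leave untouched.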
Figure 3. Defined NSX security rules can be reviewed before going live
If you want to learn more about how simple security operations can become with the new vCenter plug-in for NSX, take a look at this video.
Author: Inigo Olcoz
- VxRail Info Hub
- vSAN Ready Nodes Info Hub
- HCI Security Simplified: Protecting Dell VxRail with VMware NSX Security
- Simplifying Security Deployment and Operations for Dell HCI Platforms
- Video: Simplifying HCI Security with the New vCenter Plug-in for NSX
Related Blog Posts
100 GbE Networking – Harness the Performance of vSAN Express Storage Architecture
Wed, 22 Mar 2023 07:04:42 -0000
Read Time: 0 minutes
For a few years, 25 GbE networking has been the mainstay of rack networking, with 100 GbE reserved for uplinks to spine or aggregation switches. 25 GbE provides a significant leap in bandwidth over 10 GbE, and today carries no outstanding price premium over 10 GbE, making it a clear winner for new buildouts. But should we continue with this winning 25 GbE strategy? Is it time to look to a future of 100 GbE networking within the rack? Or is that future now?
This question stems from my last blog post, VxRail with vSAN Express Storage Architecture (ESA), where I called out VMware's recommendation of 100 GbE for maximum performance. But just how much more performance can vSAN ESA deliver with 100 GbE networking? VxRail is fortunate to have its own performance team, which stood up two six-node VxRail with vSAN ESA clusters, identical except for the networking: one configured with Broadcom 57414 25 GbE networking, and the other with Broadcom 57508 100 GbE networking.
When it comes to benchmark tests, there is a large variety to choose from. Some benchmark tests are ideal for generating headline hero numbers for marketing purposes – think quarter-mile drag racing. Others are good for helping diagnose issues. Finally, there are benchmark tests that are reflective of real-world workloads. OLTP32K is a popular one: it reflects online transaction processing with a 70/30 read-write split and a 32 KB block size, a profile consistent with the aggregated results from thousands of Live Optics workload observations across millions of servers.
One more thing before we get to the results of the VxRail performance team's testing: the environment configuration. We used a storage policy of erasure coding with a failure tolerance of two and compression enabled.
When VMware announced vSAN Express Storage Architecture, they published a series of blogs, all of which I encourage you to read. As part of our 25 GbE vs 100 GbE testing, we also wanted to verify the astounding claims in RAID-5/6 with the Performance of RAID-1 using the vSAN Express Storage Architecture and vSAN 8 Compression - Express Storage Architecture. In short: forget the normal rules of storage performance; VMware threw that book out the window. We didn't throw our copy out of the window (well, not at first), but once our results validated their claims, out it went.
Let’s look at the data.
Figure 1. ESA: OLTP32KB 70/30 RAID6 25 GbE vs 100 GbE performance graph
Boom! A 78% increase in peak IOPS with a substantial 49% drop in latency. This is a HUGE increase in performance, and the sole difference is the use of the Broadcom 57508 100 GbE networking. Also, check out the latency ramp-up on the 25 GbE line; it's just like hitting a wall, while the 100 GbE line stays almost flat.
But nobody runs constantly at 100%, or at least they shouldn't. 60 to 70% of the absolute maximum is typically a comfortable day-to-day peak, leaving some headroom for spikes or node maintenance. At that range, there is an 88% increase in IOPS with a 19 to 21% drop in latency, the smaller latency drop attributable to the 25 GbE configuration not yet hitting its wall. As much as applications like high performance, they also need that performance delivered with consistent and predictable latency, and the lower, the better. If we focus on just latency, the 100 GbE networking enabled 350K IOPS to be delivered at 0.73 ms, while the 25 GbE networking could squeak out 106K IOPS at 0.72 ms. That may not be the fairest of comparisons, but it does highlight how much 100 GbE networking can benefit latency-sensitive workloads.
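A quick back-of-the-envelope check of that matched-latency comparison, using only the figures quoted above:

```python
# IOPS delivered at roughly equal latency, as quoted in this post
iops_100gbe = 350_000  # 100 GbE: 350K IOPS at 0.73 ms
iops_25gbe = 106_000   # 25 GbE:  106K IOPS at 0.72 ms

speedup = iops_100gbe / iops_25gbe
print(f"At ~0.7 ms latency, 100 GbE delivered {speedup:.1f}x the IOPS of 25 GbE")
```

In other words, at the same sub-millisecond latency target, the 100 GbE cluster carried roughly 3.3 times the transactional load.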
Boom, again! This next benchmark is not reflective of real-world workloads; it is a diagnostic test that stresses the network with 100% read and 100% write workloads at a 512 KB block size. Can it find the bottleneck that 25 GbE hit in the previous benchmark?
Figure 2. ESA: 512KB RAID6 25 GbE vs 100 GbE performance graph
This testing was performed on a six-node cluster, with each node contributing one-sixth of the throughput shown in this graph: 20,359 MB/s of random read throughput for the 25 GbE cluster, or 3,393 MB/s per node, which is slightly above the theoretical maximum of 3,125 MB/s that 25 GbE can deliver. In other words, this is the absolute maximum that 25 GbE can deliver! In the world of HCI, the virtual machine workload is co-resident with the storage, so some of the I/O is local to the workload, resulting in slightly higher-than-theoretical throughput. For comparison, the 100 GbE cluster achieved 48,594 MB/s of random read throughput, or 8,099 MB/s per node, out of a theoretical maximum of 12,500 MB/s.
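The theoretical maximums above come straight from the link speeds. A small sketch of the arithmetic, using the node count and throughput figures quoted in this post:

```python
def line_rate_mb_s(gbits: float) -> float:
    """Theoretical line rate in MB/s: divide bits by 8 to get bytes.
    Ignores Ethernet/IP protocol overhead, so usable throughput is lower."""
    return gbits * 1000 / 8

NODES = 6  # six-node VxRail clusters in this test

for label, gbits, cluster_mb_s in (("25 GbE", 25, 20_359),
                                   ("100 GbE", 100, 48_594)):
    per_node = cluster_mb_s / NODES
    pct = per_node / line_rate_mb_s(gbits) * 100
    print(f"{label}: {per_node:,.0f} MB/s per node, "
          f"{pct:.0f}% of the {line_rate_mb_s(gbits):,.0f} MB/s line rate")
```

The 25 GbE cluster runs slightly over 100% of line rate (thanks to locally served I/O), while the 100 GbE cluster sits around 65%, with headroom to spare.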
But this is just the first release of the Express Storage Architecture. In the past, VMware has added significant gains to vSAN, as seen in the lab-based performance analysis of Harnessing the Performance of Dell EMC VxRail 7.0.100. We can only speculate on what else they have in store to improve upon this initial release.
What about costs, you ask? Street pricing can vary greatly depending on the region, so it's best to reach out to your Dell account team for local pricing information. Using US list pricing as of March 2023, I got the following:
The components compared, on a total per-port basis, were:
- 25 GbE: Broadcom 57414 dual-port 25 GbE NIC, S5248F-ON 48-port 25 GbE switch, and 25 GbE passive copper DAC
- 100 GbE: Broadcom 57508 dual-port 100 GbE NIC, S5232F-ON 32-port 100 GbE switch, and 100 GbE passive copper DAC
Overall, the per-port cost of the 100 GbE equipment was 2.04 times that of the 25 GbE equipment. However, this doubling of network cost provides four times the bandwidth, a 78% increase in storage performance, and a 49% reduction in latency.
If your workload is IOPS-bound or latency-sensitive and you had planned to address this by adding more VxRail nodes, consider this a wake-up call. Adding dual-port 100 GbE networking came at a total list cost of $42,648 for the twelve ports used. This is significantly less than the list price of a single VxRail node, and a fraction of the list cost of adding enough VxRail nodes to achieve the same performance increase.
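Using the list figures quoted above, the per-port arithmetic works out as follows (a sketch only; actual street pricing varies by region):

```python
total_list_usd = 42_648   # dual 100 GbE list cost quoted in this post
ports = 12                # 2 ports per node x 6 nodes

per_port = total_list_usd / ports
bandwidth_ratio = 100 / 25   # 4x the bandwidth of 25 GbE
cost_ratio = 2.04            # per-port cost ratio quoted in this post

print(f"${per_port:,.0f} per 100 GbE port; "
      f"{bandwidth_ratio / cost_ratio:.1f}x bandwidth per dollar vs 25 GbE")
```

Even before counting the 78% IOPS gain, 100 GbE delivers roughly twice the raw bandwidth per dollar spent.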
Reach out to your networking team; they would be delighted to help deploy the 100 GbE switches your savings funded. If decision-makers need further encouragement, send them a link to VMware's vSAN 8 Total Cost of Ownership white paper.
While 25 GbE has its place in the datacenter, when it comes to deploying vSAN Express Storage Architecture, it's clear that we're moving beyond it and onto 100 GbE. The future is now 100 GbE, and we thank Broadcom for joining us on this journey.
Learn About the Latest Major VxRail Software Release: VxRail 8.0.000
Mon, 09 Jan 2023 14:45:15 -0000
Read Time: 0 minutes
Happy New Year! I hope you had a wonderful and restful holiday, and you have come back reinvigorated. Because much like the fitness centers in January, this VxRail blog site is going to get busy. We have a few major releases in line to greet you, and there is much to learn.
First in line is the VxRail 8.0.000 software release that provides introductory support for VMware vSphere 8, which has created quite the buzz these past few months. Let’s walk through the highlights of this release.
- For VxRail users who want to be early adopters of vSphere 8, VxRail 8.0.000 provides the first upgrade path for VxRail clusters to transition to VMware’s latest vSphere software train. Only clusters with VxRail nodes based on 14th-generation or 15th-generation PowerEdge servers can upgrade to vSphere 8, because VMware has removed support for a legacy BIOS driver used by 13th-generation PowerEdge servers. Importantly, users need to upgrade their vCenter Server to version 8.0 before a cluster upgrade, and vSAN 8.0 clusters require users to upgrade their existing vSphere and vSAN licenses. In VxRail 8.0.000, VxRail Manager has been enhanced to check platform compatibility and warn users of license issues before an upgrade begins. Users should always consult the release notes to fully prepare for a major upgrade.
- VxRail 8.0.000 also provides introductory support for vSAN Express Storage Architecture (ESA), which has garnered much attention for its potential while eliciting just as much curiosity because of its newness. To level set, vSAN ESA is an optimized version of vSAN that exploits the full potential of the very latest in hardware, such as multi-core processing, faster and larger-capacity memory, and NVMe technology, to unlock new levels of performance and efficiency. You can get an in-depth look at vSAN ESA in David Glynn’s blog. It is important to note that vSAN ESA is an alternative, optional vSAN architecture. The existing architecture, now referred to as the Original Storage Architecture (OSA), is still available in vSAN 8; users choose which architecture to use when deploying a cluster.
To deploy VxRail clusters with vSAN ESA, you need to order new VxRail nodes specifically configured for it. This architecture eliminates the use of discrete cache and capacity drives: nodes require all-NVMe storage, and each drive contributes to both cache and capacity. VxRail 8.0.000 offers two platform choices, the E660N and the P670N, with either 3.2 TB or 6.4 TB TLC NVMe storage drives populating each node in the new VxRail cluster with vSAN ESA. To learn about the configuration options, see David Glynn’s blog.
- vSphere 8 support in VxRail 8.0.000 also includes the increased cache size for VxRail clusters running vSAN 8.0 OSA. The increase from 600 GB to 1.6 TB per disk group can provide a significant performance gain, and VxRail already offers cache drives that can take advantage of the larger cache size. It is easier to deploy a new cluster with the larger cache size than to expand it on an existing cluster. (For existing clusters, each node's disk groups must be rebuilt when the cache is expanded, which can be a lengthy and tedious endeavor.)
Major VMware releases like vSphere 8 often shine a light on the differentiated experience that our VxRail users enjoy. The checklist of considerations only grows when you’re looking to upgrade to a new software train. VxRail users have come to expect that VxRail provides them the necessary guardrails to guide them safely along the upgrade path to reach their destination. The 800,000 hours of test run time performed by our 100+ staff members, who are dedicated to maintaining the VxRail Continuously Validated States, is what gives our customers the confidence to move fearlessly from one software version to the next. And for customers looking to explore the potential of vSAN ESA, the partnership between VxRail and VMware engineering teams adds to why VxRail is the fastest and most effective path for users to maximize the return on their investment in VMware’s latest technologies.
If you’re interested in upgrading to VxRail 8.0.000, please read the release notes.
If you’re looking for more information about vSAN ESA and VxRail’s support for vSAN ESA, check out this blog.
Author: Daniel Chiu