
Multitenancy in MEC: When is it needed and what does it look like?
Wed, 27 Sep 2023 20:32:45 -0000
Sometimes the best solutions stem from the simplest questions. Simple questions often prompt us to think about why we do things the way that we do. For example, a customer recently asked me, “What about multi-tenancy for my MEC?”
Multitenancy and MEC are both loaded terms, so an easier way to tackle this question is to start by asking, “Should I plan to support multiple customers on my network-edge cloud infrastructure; and if so, how do I do it?”
Multitenancy is one of the critical benefits of a cloud environment, and sharing resources seems to make sense in a highly constrained environment like the edge. Despite its benefits, multitenancy also introduces significant management complexities, which come at a cost. These complexities can drive customers to consider whether the efficiencies of multitenancy justify that cost. The answer depends on both the cloud model used to deliver a service (SaaS, PaaS, IaaS, or co-location) and the customer. Moreover, in most cases, multitenancy is either innate to the solution hosted on public MEC or not worth the cost.
Starting with SaaS, multitenancy is enabled through proper handling of organization accounts and associated user accounts, something that any successful cloud-based SaaS must enable. It is therefore innate in this model. The second model where multitenancy is innate is co-location because, presumably, a successful co-location business model needs more than one customer.
For PaaS (which includes container platforms such as Kubernetes), the answer is a straightforward no. Delivering multitenancy across customer boundaries (as opposed to simply hosting multiple projects of a single enterprise) typically amounts to creating a SaaS offering on top of a PaaS platform. For more information, see the discussion of this issue on the Kubernetes project site.
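To make that distinction concrete, here is a minimal sketch of the "soft" multitenancy Kubernetes supports natively: a namespace per tenant with a resource quota and a default-deny network policy (the tenant-a names and limits are illustrative). This is adequate for separating projects or teams within one enterprise, but tenants still share the control plane and node kernels, which is why cross-customer isolation typically pushes you toward dedicated clusters or a SaaS layer instead:

```yaml
# Namespace-per-tenant "soft" multitenancy sketch; names and limits are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
# Cap the tenant's aggregate resource consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "50"
---
# Only allow ingress from pods in this same namespace, blocking other tenants.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-default-deny
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
```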
This leaves us with IaaS and reduces the question to whether a Mobile Network Operator (MNO) deploying MEC should invest in IaaS multitenancy. Alternatively, it might be sufficient for such an MNO to provide each customer with physical infrastructure and a cloud stack, which the customer then integrates into an overall multicloud IaaS operational process. To address this, we need to dig deeper into the requirements that different types of customers are likely to have for IaaS edge infrastructure.
MEC IaaS and verticals
It is important to remember that the information in this section is opinion-based and should be interpreted as such. Also, it is equally important to keep in mind that each customer is different and broad-stroke statements such as the ones below may not be valid in every situation. In short, pay attention to your customer!
Let’s start with large enterprises, which are likely to have some of the most stringent security and management policies. IT downtime and security breaches carry significant costs, and there is substantial in-house IT expertise to deploy and enforce industry best practices. For enterprises, anything that could have been moved into the cloud while meeting IT policies and application requirements presumably is already there, which means there are unlikely to be any low-hanging-fruit candidates for a network-edge migration.
A multitenant IaaS environment introduces additional hurdles to meeting IT policies, which can complicate an already difficult sales motion and business case. In short, the opportunity cost of attempting multitenant IaaS in this segment is usually not worth it; capturing network-edge business here is hard enough. This remains true even when such enterprises have remote locations with limited on-site IT expertise. While the ROI of moving compute off-prem improves in such cases, the hurdle of meeting IT policies remains, and the complications of trying to meet them are usually not justified by the cost savings of multitenancy.
Smaller enterprises (SMBs) often have a stronger case for moving compute off-prem, and their burden of IT policies is lower. These SMBs are typically looking for an easy way to achieve an outcome, so a SaaS-based solution is much more appropriate than an IaaS one, which means the IaaS multitenancy question does not arise.
To find a business justification for multitenancy in IaaS MEC, we need to look beyond enterprises to direct-to-customer application providers that take advantage of the network edge. Examples include independent software vendors (ISVs) offering SaaS outcomes to SMBs (where the tenant is now the ISV itself, not its customers) and ISVs offering consumer applications, such as emerging interactive gaming. When such applications require edge presence, the choice is often limited to public MEC: there is no on-premises option, and traditional edge co-location providers may not be able to deliver sufficient proximity to customers to meet the required KPIs. Moreover, these customers (ISVs) are typically well adapted to the cloud and are comfortable with issues such as multitenancy, provided that cloud-like shared-responsibility structures are in place and adequately formalized (a legal and contractual issue as much as a technical one).
Delivering a multitenant IaaS solution
So how does an MNO go about creating that multitenant edge? First, we need something generic, because attempting to guess what kinds of applications might run on MEC simply limits the addressable market. Second, it is important to remember that operations and management (O&M) will be your biggest headache, so anything that simplifies O&M is likely to pay for itself in spades. And third, keep in mind that ISVs are your most likely customers.
Typically, ISVs developing cloud-native applications use some flavor of Kubernetes (K8s) as the platform. K8s flavors span the gamut from public clouds (Google's GKE, Amazon's EKS) to enterprise platforms (Red Hat OpenShift, VMware Tanzu). An ideal platform would address the following:
- Efficiently address the needs of compute-intensive applications, including high-performance computing and storage-intensive workloads
- Make O&M easier in a meaningful and monetarily measurable way
- Support cloud-native applications developed for almost any of the most commonly used Kubernetes frameworks
Although that seems like a lot to ask of a platform, solutions that address all of these points do exist. One excellent example is Dell's VxRail, a VMware-based HCI platform. According to the Dell VxRail home page:
VxRail goes further to deliver more highly differentiated features and benefits based on proprietary VxRail HCI System Software. This unique combination automates deployment, provides full stack lifecycle management and facilitates critical upstream and downstream integration points that create a truly better together experience
VxRail provides an MNO with a flexible combination of compute (with GPU and DPU options for high-performance computing) and storage, which can be easily and quickly scaled in response to actual demand. Notably, with vCloud Suite, a VxRail deployment can be turned into a multitenant public cloud. For more information, see the VMware Public Cloud Service Definition.
Last but not least, in addition to VMware's Tanzu, VxRail supports Google Anthos (Running Google Anthos on VMware Cloud Foundation), Red Hat OpenShift (VxRail and OpenShift Solution Brief), and AWS EKS (Amazon EKS Anywhere on VxRail Solution Brief), delivering an all-in-one platform for a flexible public MEC deployment at the network edge.
Summary
The question of multitenancy for MEC is only relevant when considering the IaaS service model. In that case, multitenancy is not likely to be of interest when addressing most traditional enterprise customers, but it may be important when addressing the needs of ISVs providing SaaS solutions that need edge presence. To succeed in delivering a MEC platform to such ISVs, an MNO needs an underlying platform like Dell's VxRail, designed to address their diverse needs in a scalable and easily manageable fashion.
Related Blog Posts

VxRail Edge Automation Unleashed - Simplifying Satellite Node Management with Ansible
Thu, 30 Nov 2023 17:43:03 -0000
In the previous blog, Infrastructure as Code with VxRail made easier with Ansible Modules for Dell VxRail, I introduced the modules that enable automation of VxRail operations through code-driven processes using Ansible and the VxRail API. This approach not only streamlines IT infrastructure management but also aligns with Infrastructure as Code (IaC) principles, benefiting both technical experts and business leaders.
The corresponding demo is available on YouTube.
The previous blog laid the foundation for this continued journey, where we explore more advanced Ansible automation techniques with a focus on satellite node management in the VxRail ecosystem. I highly recommend checking out that blog before diving deeper into the topics discussed here, as it will make the concepts in this demo much easier to absorb.
What are the VxRail satellite nodes?
VxRail satellite nodes are individual nodes designed specifically for deployment in edge environments and are managed through a centralized primary VxRail cluster. Satellite nodes do not leverage vSAN to provide storage resources and are an ideal solution for those workloads where the SLA and compute demands do not justify even the smallest of VxRail 2-node vSAN clusters.
Satellite nodes enable customers to achieve uniform and centralized operations within the data center and at the edge, ensuring consistent VxRail management throughout. This includes comprehensive, automated lifecycle management for VxRail satellite nodes, encompassing both hardware and software and significantly reducing the need for manual intervention.
To learn more about satellite nodes, please check the following blogs from my colleagues:
- David’s introduction: Satellite nodes: Because sometimes even a 2-node cluster is too much
- Stephen’s update on enhancements: Enhancing Satellite Node Management at Scale
Automating VxRail satellite node operations using Ansible
You can leverage the Ansible Modules for Dell VxRail to automate various VxRail operations, including more advanced use cases like satellite node management. This is possible today using the provided samples available in the official repository on GitHub.
Have a look at the following demo, which leverages the latest available version of these modules at the time of recording – 2.2.0. In the demo, I discuss and demonstrate how you can perform the following operations from Ansible:
- Collecting information about the number of satellite nodes added to the primary VxRail cluster
- Adding a new satellite node to the primary VxRail cluster
- Performing lifecycle management operations – staging the upgrade bundle and executing the upgrade on managed satellite nodes
- Removing a satellite node from the primary cluster
The examples used in the demo are slightly modified versions of the following samples from the modules' documentation on GitHub. If you'd like to replicate them in your environment, here are the links to the corresponding samples for your reference (a minimal playbook sketch follows the list):
- Retrieving system information: systeminfo.yml
- Adding a new satellite node: add_satellite_node.yml
- Performing LCM operations: upgrade_host_folder.yml (both staging and upgrading as explained in the demo)
- Removing a satellite node: remove_satellite_node.yml
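For a flavor of what these playbooks look like, here is a minimal sketch modeled on the systeminfo.yml sample, assuming the dellemc.vxrail collection is installed (ansible-galaxy collection install dellemc.vxrail). Treat the module and parameter names as illustrative and verify them against the GitHub repository, since they can change between versions:

```yaml
# Minimal sketch modeled on the systeminfo.yml sample. Module and parameter
# names follow the collection's samples but may differ between versions --
# verify against the GitHub repository before use.
- name: Retrieve VxRail system information (includes satellite node count)
  hosts: localhost
  gather_facts: false
  vars:
    vxm_ip: "192.168.100.10"                  # VxRail Manager IP (placeholder)
    vc_admin: "administrator@vsphere.local"   # vCenter admin (placeholder)
    vc_password: "{{ vault_vc_password }}"    # keep secrets in Ansible Vault
  tasks:
    - name: Get system information via the VxRail API
      dellemc.vxrail.dellemc_vxrail_getsysteminfo:
        vxmip: "{{ vxm_ip }}"
        vcadmin: "{{ vc_admin }}"
        vcpasswd: "{{ vc_password }}"
        # api_version_number: 4   # optional; omit to use the latest supported
      register: system_info

    - name: Display the result
      ansible.builtin.debug:
        var: system_info
```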
In the demo, you can also observe one of the interesting features of the Ansible Modules for Dell VxRail, shown in action but not explained explicitly. You might be aware that some VxRail API functions are available in multiple versions: typically, a new version is made available when new features arrive in the VxRail HCI System Software, while the previous versions are retained for backward compatibility. An example is “GET /vX/system”, which is used here to retrieve the number of satellite nodes; this property was introduced in version 4. If you do not specify a version, the modules automatically select the latest supported one, simplifying the end-user experience.
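If you want to see the version selection explicitly, you can call the versioned endpoint directly with the generic ansible.builtin.uri module. Here is a sketch using the same placeholder variables as above; the exact response fields are documented in the VxRail API reference:

```yaml
# Pinning an API version explicitly with the generic uri module; the VxRail
# Ansible modules do the equivalent for you, defaulting to the latest
# supported version.
- name: Query the versioned system endpoint on VxRail Manager
  hosts: localhost
  gather_facts: false
  tasks:
    - name: GET /rest/vxm/v4/system
      ansible.builtin.uri:
        url: "https://{{ vxm_ip }}/rest/vxm/v4/system"
        method: GET
        user: "{{ vc_admin }}"
        password: "{{ vc_password }}"
        force_basic_auth: true
        validate_certs: false     # lab only; use trusted certs in production
        return_content: true
      register: system_v4

    - name: Show the response body (includes the satellite node count in v4)
      ansible.builtin.debug:
        var: system_v4.json
```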
How can you get more hands-on experience with automating VxRail operations programmatically?
The above demo of satellite node management with Ansible was built in the VxRail API hands-on lab, which is available in the Dell Technologies Demo Center. With the help of the Demo Center team, we built this lab as a self-education tool for learning the VxRail API and how it can be used to automate VxRail operations using various methods: exploring the built-in, interactive, web-based documentation, the VxRail API PowerShell Modules, the Ansible Modules for Dell VxRail, and Postman.
The hands-on lab provides a safe VxRail API sandbox, where you can easily start experimenting by following the exercises from the lab guide or trying some other use cases on your own without any concerns about making configuration changes to the VxRail system.
The lab was refreshed for the Dell Technologies World 2023 conference to leverage VxRail HCI System Software 8.0.x and the latest version of the Ansible Modules. If you're a Dell partner, you should have direct access; if you're a customer who'd like access, please contact your account SE from Dell or a Dell partner. The lab is available in the catalog as “HOL-0310-01 - Scalable Virtualization, Compute, and Storage with the VxRail REST API”.
Conclusion
In the fast-evolving landscape of IT infrastructure, the ability to automate operations efficiently is not just a convenience but a necessity. We've explored how the Ansible Modules for Dell VxRail can meet that necessity, using satellite node management as the example. We encourage you to embrace the full potential of VxRail automation using the VxRail API with Ansible or other tools. If this is new to you, you can gain experience by experimenting with the hands-on lab available in the Demo Center catalog.
Resources
- Previous blog: Infrastructure as Code with VxRail Made Easier with Ansible Modules for Dell VxRail
- The “master” blog containing a curated list of publicly-available educational resources about the VxRail API: VxRail API - Updated List of Useful Public Resources
- Ansible Modules for Dell VxRail on GitHub, which is the central code repository for the modules. It also contains complete product documentation and examples.
- Dell Technologies Demo Center, which includes VxRail API hands-on lab.
Author: Karol Boguniewicz, Senior Principal Engineering Technologist, VxRail Technical Marketing
Twitter/X: @cl0udguide
LinkedIn: https://www.linkedin.com/in/boguniewicz/

Learn More About the Latest Major VxRail Software Release: VxRail 7.0.480
Tue, 24 Oct 2023 15:51:48 -0000
Happy Autumn, VxRail customers! As the morning air gets chillier and the sun rises later, this blog on our latest software release – VxRail 7.0.480 – paired with your Pumpkin Spice Latte will give you the boost you need to kick start your day. It may not be as tasty as freshly made cider donuts, but this software release has significant additions to the VxRail lifecycle management experience that can surely excite everyone.
VxRail 7.0.480 provides support for VMware ESXi 7.0 Update 3o and VMware vCenter Server 7.0 Update 3o. All existing platforms that support VxRail 7.0, except those based on Dell PowerEdge 13th Generation platforms, can upgrade to VxRail 7.0.480. This includes the VxRail systems based on PowerEdge 16th Generation platforms that were released in August.
Read on for a deep dive into the VxRail Lifecycle Management (LCM) features and enhancements in this latest VxRail release. For a more comprehensive rundown of the features and enhancements in VxRail 7.0.480, see the release notes.
Improving update planning activities for unconnected clusters or clusters with limited connectivity
VxRail 7.0.450, released earlier this year, brought significant improvements to update planning activities in a major effort to streamline administrative work and increase cluster update success rates. Enhancements to the cluster pre-update health check and the introduction of the update advisor report were designed to further simplify your update planning. Because VxRail Manager automatically runs the update advisor report, inclusive of the pre-update health check, every 24 hours against the latest information, you always have an up-to-date report to determine your cluster's readiness to upgrade to the latest VxRail software version.
If you are not familiar with the LCM capabilities added in VxRail 7.0.450, you can review this blog for more information.
VxRail 7.0.450 offered a seamless path for clusters that are connected to the Dell cloud to take advantage of these new capabilities. Internet-connected clusters can automatically download LCM pre-checks and the installer metadata files, which provide the manifest information about the latest VxRail software version, from the Dell cloud. The ability to periodically scan the Dell cloud for the latest files ensures the update advisor report is always up to date to support your decision-making.
While unconnected clusters could use these features, the user experience in VxRail 7.0.450 made it more cumbersome for users to upload the latest LCM pre-checks and installer metadata files. VxRail 7.0.480 aims to improve the user experience for those who have clusters deployed in dark or remote sites that have limited network connectivity.
Starting in VxRail 7.0.480, users of unconnected clusters have an easier experience uploading the latest LCM pre-checks file to VxRail Manager. The VxRail Manager UI has been enhanced so that you no longer have to upload it via the CLI.
Knowing that some clusters are deployed in areas where network bandwidth is at a premium, the VxRail Manager UI has also been updated so that you only need to upload the installer metadata file to generate the update advisor report. In VxRail 7.0.450, users had to upload the full LCM bundle to generate the report. The difference in payload size, more than 10 GB for a full LCM bundle versus 50 KB for an installer metadata file, is a tremendous improvement for bandwidth-constrained clusters, eliminating a barrier to relying on the update advisor report as a standard cluster management practice. With VxRail 7.0.480, whether you have connected or unconnected clusters, these update planning features are easy to use and will help increase your cluster update success rates.
To accommodate these improvements, the Local Updates tab has been reorganized. There are now two sub-tabs underneath the Local Updates tab:
- The Update sub-tab represents the existing cluster update workflow where you would upload the full LCM bundle to generate the update advisor report and initiate the cluster update operation.
- The Plan and Update sub-tab is the recommended path and incorporates the enhancements in VxRail 7.0.480. Here you can upload the latest LCM pre-checks file and installer metadata file downloaded from the Dell Support website. Uploading the LCM pre-checks file is optional when creating a new report because there may not always be an updated file to apply; however, you do need to upload an installer metadata file to generate a new report from here. Once it is uploaded, VxRail Manager generates an update advisor report against that installer metadata file every 24 hours.
Figure 1. New look to the Local Updates tab
Easier record-keeping for compliance drift and update advisor reports
VxRail 7.0.480 adds new functionality that makes the compliance drift report exportable from the VxRail Manager UI and introduces a History tab for accessing past update advisor reports.
Some of you use the contents of the compliance drift report to build a larger infrastructure status report for information sharing across your organization. Making the report exportable simplifies that report-building process. When exporting the report, there is an option to group the information by host if you prefer.
Note that the compliance check functionality has moved from the Compliance tab under the Updates page to a separate page, which you can navigate to by selecting Compliance from under the VxRail section.
Figure 2. Exporting the compliance drift report
The removal of the Compliance tab comes with the introduction of the History tab on the Updates page in VxRail 7.0.480. Because VxRail Manager automatically generates a new update advisor report every 24 hours (and you have the option to generate one on demand), the update advisor report is often overwritten. To avoid the need to constantly export reports as a form of record-keeping, the new History tab stores the last 30 update advisor reports. The reports are listed in a table where you can see which target version each report was run against and when it was run. To view a full report, click the icon in the left-hand column.
Figure 3. New History tab to store the last 30 update advisor reports
Addressing cluster update challenges for larger-sized clusters
For some of you who have larger clusters, cluster updates pose challenges that may prevent you from upgrading more frequently. For example, the maintenance window required to complete a full cluster update may not fit within your normal business operations, so any cluster update activity impacts service availability. As a result, cluster updates are kept to a minimum, and nodes inevitably go without a reboot for long periods of time. While the cluster pre-update health check is an effective tool for determining cluster readiness for an upgrade, some issues may be lurking that only a node reboot can uncover. That's why some of you script your own node reboot sequence that acts as a test run for a cluster upgrade: the script reboots each node one at a time to ensure the service levels of your workloads are maintained, and if any nodes fail to reboot, you can investigate them.
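For context, a self-managed reboot sequence of that kind might look roughly like the following Ansible sketch, which drains and reboots one node at a time. It assumes the community.vmware collection, an inventory group of ESXi hosts, and placeholder vCenter credentials; it illustrates the DIY approach that the new built-in feature replaces, not the feature itself:

```yaml
# A DIY rolling reboot of the kind described above, superseded by the
# built-in node reboot sequence in VxRail 7.0.480. The esxi_hosts group,
# vCenter address, and credentials are placeholders; on vSAN clusters you
# would also set the maintenance-mode module's vSAN data-migration option.
- name: Rolling reboot of cluster nodes, one at a time
  hosts: esxi_hosts
  serial: 1                          # strictly one node at a time
  gather_facts: false
  tasks:
    - name: Enter maintenance mode, evacuating VMs to protect service levels
      community.vmware.vmware_maintenancemode:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        esxi_hostname: "{{ inventory_hostname }}"
        evacuate: true
        state: present
      delegate_to: localhost

    - name: Reboot the host
      community.vmware.vmware_host_powerstate:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        esxi_hostname: "{{ inventory_hostname }}"
        state: reboot-host
      delegate_to: localhost

    - name: Wait for the host to respond before moving to the next node
      ansible.builtin.wait_for:
        host: "{{ inventory_hostname }}"
        port: 443
        delay: 120                   # give the reboot time to actually start
        timeout: 1800
      delegate_to: localhost

    - name: Exit maintenance mode
      community.vmware.vmware_maintenancemode:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        esxi_hostname: "{{ inventory_hostname }}"
        state: absent
      delegate_to: localhost
```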
VxRail 7.0.480 introduces the node reboot sequence in the VxRail Manager UI so that you no longer have to maintain your own scripts. The new feature includes cluster-level and node-level pre-checks to ensure it is safe to perform this activity. If a node fails to reboot, you have the option to retry the reboot or skip it. Making this activity easy may also encourage more customers to perform this additional pre-check before upgrading their clusters.
Figure 4. Selecting nodes in a cluster to reboot in sequential order
Figure 5. Monitoring the node reboot sequence on the dashboard
VxRail 7.0.480 also provides the capability to split your cluster update into multiple parts. Doing so allows you to break a cluster upgrade into smaller maintenance windows and work around your business operations. Though this capability can reduce the impact of a cluster upgrade on your organization, VMware recommends completing the full upgrade within one week, given that some Day 2 operations are disabled while the cluster is partially upgraded. VxRail exposes this capability only through the VxRail API. When a cluster is in a partially upgraded state, features in the Updates tab are disabled and a banner appears alerting you to the cluster state. Cluster expansion and node removal operations are also unavailable in this scenario.
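As a rough sketch of what driving an update through the API can look like, the following POSTs an upgrade request to VxRail Manager with the generic uri module. The /rest/vxm/v1/lcm/upgrade path is the VxRail LCM upgrade endpoint, but the request body shown here, in particular the host-scoping field for a partial update, is illustrative only; consult the VxRail 7.0.480 API guide for the exact schema:

```yaml
# Illustrative only: starting an LCM upgrade through the VxRail API.
# The body schema, and especially the host-scoping field for a partial
# update, is a placeholder; see the VxRail API guide for the real format.
- name: Start a (partial) cluster update through the VxRail API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: POST the upgrade request to VxRail Manager
      ansible.builtin.uri:
        url: "https://{{ vxm_ip }}/rest/vxm/v1/lcm/upgrade"
        method: POST
        user: "{{ vc_admin }}"
        password: "{{ vc_password }}"
        force_basic_auth: true
        validate_certs: false            # lab only
        body_format: json
        body:
          bundle_file_locator: "/tmp/VXRAIL_COMPOSITE_BUNDLE.zip"  # placeholder
          target_hosts:                  # hypothetical field limiting this
            - host01.example.local       # window to a subset of nodes
            - host02.example.local
        status_code: 202                 # accepted; the upgrade runs asynchronously
      register: upgrade_request

    - name: Show the response, which can be polled for progress
      ansible.builtin.debug:
        var: upgrade_request.json
```

Keep VMware's one-week guidance in mind when scheduling the parts: once the first window starts, plan the remaining ones so the cluster does not sit in a partially upgraded state longer than necessary.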
Conclusion
The new lifecycle management capabilities in VxRail 7.0.480 are part of the continual evolution of the VxRail LCM experience. They also reflect how much we value your feedback on improving the product and our dedication to making your suggestions come to fruition. The LCM capabilities in this software release will drive more effective cluster update planning, resulting in higher cluster update success rates and more efficient IT operations. Though this blog focuses on the improvements in lifecycle management, please refer to the release notes for VxRail 7.0.480 for a complete list of features and enhancements in this release. For more information about VxRail in general, visit the Dell Technologies website.
Author: Daniel Chiu