To slice or not to slice
Wed, 25 Jan 2023 21:53:29 -0000
Network Slicing is possibly the most central feature of 5G: it holds game-changing potential, but it is also often overhyped and misunderstood. In this blog, we give a fact-based assessment and guidance on the question of “To Slice Or Not To Slice.”
Guidance for the reader:
- Entire blog – 7-minute read to understand the background and future of Network Slicing
- From “Service differentiation starts in the RAN” – 4-minute read if only interested in the future of Network Slicing
The bar was set too high
5G doesn’t only promise to enhance mobile broadband but also to support a wide range of enterprise use cases – from those requiring better reliability and lower latency to those requiring a long battery life and greater device density. From the long list of 3GPP Release 15 features, Network Slicing is the cornerstone feature for service creation. The basic idea behind this feature is the ability to subdivide a single physical network into many logical networks where each is optimized to support the requirements of the services intended to run on it.
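In 3GPP terms, each such logical network is identified by an S-NSSAI: a Slice/Service Type (SST) plus an optional Slice Differentiator (SD). The following is a minimal sketch of that idea; the SST values are the standardized ones, but the slice purposes and SD values are illustrative, not from any real deployment:

```python
from dataclasses import dataclass
from typing import Optional

# Standardized Slice/Service Type (SST) values from 3GPP TS 23.501.
SST_EMBB, SST_URLLC, SST_MIOT = 1, 2, 3

@dataclass(frozen=True)
class SNssai:
    """Single Network Slice Selection Assistance Information:
    an SST plus an optional Slice Differentiator (SD)."""
    sst: int
    sd: Optional[str] = None  # 6-hex-digit differentiator, e.g. "0A1B2C"

# One physical network, several logical slices (illustrative values only).
slices = {
    SNssai(SST_EMBB): "default mobile broadband",
    SNssai(SST_URLLC, "000001"): "factory robots (low latency, high reliability)",
    SNssai(SST_MIOT, "000002"): "utility meters (long battery life, high density)",
}

for s_nssai, purpose in slices.items():
    print(f"SST={s_nssai.sst} SD={s_nssai.sd} -> {purpose}")
```

Each slice can then be dimensioned and assured independently, even though all of them ride on the same physical network.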
We can think of Network Slicing as ordering pizza for friends. 4G gets you the classic Margherita, which is acceptable to most. Yet some would be willing to pay more for extra toppings. In this case, 5G allows you to customize the pizza where half can still be the classic Margherita, but the remaining slices can be split into four cheese, pepperoni, and Hawaiian.
It all sounds great, but why are we not seeing Network Slicing everywhere today? Let us explore some of the hurdles it has to clear before becoming more mainstream.
Slicing requires new features to work – Network equipment providers need to develop these new features, especially on the Radio Access Network (RAN), and communications service providers need to implement them. This will take time, since much of the initial industry focus has been on enhanced mobile broadband and fixed wireless access, which are the initial monetizable 5G use cases.
Slicing needs automation to be practical – While it is possible to create network slices manually at the start, doing so at scale takes too long and costs too much. An entirely new 3GPP-defined management and orchestration layer is needed for slicing orchestration and service assurance. Business Support Systems (BSS) also need new integrations and feature enhancements to support capabilities like online service ordering and SLA-based charging.
Slicing has to make money – There will come a time when we cannot live without the metaverse and Web 3.0, but that is not today. There will also come a time when factories will be run by collaborative robots and infrastructures are maintained by autonomous drones, but that is not today. The reality is that there is limited demand for custom slices since most consumer and enterprise use cases today work fine on 4G or 5G non-standalone networks. For example, YouTube and other over-the-top streaming apps implement algorithms to adapt to varying speeds and latency. Lastly, Network Slicing also comes with additional costs related to implementation, operations, and reduced overall capacity (due to resource reservation and prioritization) that must be factored into the business case.
Regulatory challenges – Net Neutrality is an essential topic in the United States and European Union. Misinterpreting differentiated services as something that violates Net Neutrality may put communications service providers under scrutiny by regulators.
It’s not all doom and gloom
5G standalone may have been slow out of the gate, but it is gaining momentum. In 2022, GSA counted 112 operators in 52 countries investing in public 5G standalone networks. Some communications service providers are even more advanced. For example, Singtel has already implemented Paragon, an orchestration platform that allows them to offer network slices and mobile edge computing for mission-critical applications on demand. Another example is Telia Finland, which uses Network Slicing to guarantee the service level for its home broadband (fixed wireless access) subscribers.
There are also a lot of ongoing and planned projects that aim to accelerate the development of enterprise use cases. Collaborations such as ARENA2036, a research campus in Germany, allow communications service providers, network equipment manufacturers, independent software vendors, system integrators, and academia to work together in developing and testing new technologies and services.
Service differentiation starts in the RAN
One of the key reasons behind this positive momentum shift in 2022 is that major network equipment providers like Nokia and Ericsson brought their Network Slicing features for the RAN to market. These features enable the reservation and prioritization of RAN resources for particular slices. According to these vendors, Network Slice capacity management is done dynamically, which means the scarce air interface resources are allocated as efficiently as possible. This has been the much-needed catalyst for the first commercial launches and pre-commercial trials across several industries: fixed wireless access (live), video streaming (live), smart city, public safety, remote TV broadcast, assisted driving, enterprise interconnectivity, and mining.
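As a rough illustration of reservation and prioritization, consider a toy allocator that first grants each slice its reserved minimum of physical resource blocks (PRBs) and then shares the remainder by priority weight. This is our own simplification with invented numbers; real RAN schedulers work per-TTI with channel-aware metrics:

```python
def allocate_prbs(total_prbs, slice_specs):
    """Grant each slice its reserved minimum (capped by demand), then hand out
    the remaining PRBs one at a time, favoring under-served high-weight slices."""
    alloc = {n: min(s["reserved"], s["demand"]) for n, s in slice_specs.items()}
    leftover = total_prbs - sum(alloc.values())
    while leftover > 0:
        wanting = [n for n, s in slice_specs.items() if alloc[n] < s["demand"]]
        if not wanting:
            break  # all demand satisfied; leftover capacity stays unused
        # Pick the slice with the highest weight per PRB already granted.
        n = max(wanting, key=lambda n: slice_specs[n]["weight"] / (alloc[n] + 1))
        alloc[n] += 1
        leftover -= 1
    return alloc

demo = {
    "urllc": {"reserved": 30, "weight": 4.0, "demand": 40},
    "embb":  {"reserved": 20, "weight": 1.0, "demand": 100},
    "iot":   {"reserved": 5,  "weight": 0.5, "demand": 10},
}
print(allocate_prbs(100, demo))  # urllc and iot saturate; embb absorbs the rest
```

Note how the reserved minimums are exactly the "reduced overall capacity" cost mentioned earlier: PRBs set aside for one slice are unavailable to the others.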
Another positive development is on the smartphone side, where the biggest mobile operating system (Android) started supporting multiple Network Slices simultaneously on the same device (from Android 12). This benefits both consumer and enterprise use cases that have more demanding requirements for speed and latency.
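Conceptually, the device maps application traffic to slices through UE Route Selection Policy (URSP) rules. The sketch below is a heavy simplification of that matching logic; real URSP rules (3GPP TS 24.526) also match on DNN, FQDN, and other traffic descriptors, and the app IDs here are made up:

```python
# Simplified URSP-style rule list: app identifier -> (SST name, SD).
# Rules are evaluated in precedence order; first match wins.
ursp_rules = [
    {"app_id": "com.example.cloudgame", "slice": ("URLLC", "000001")},
    {"app_id": "com.example.video",     "slice": ("eMBB", None)},
]
DEFAULT_SLICE = ("eMBB", None)  # fallback when no rule matches

def slice_for_app(app_id):
    for rule in ursp_rules:
        if rule["app_id"] == app_id:
            return rule["slice"]
    return DEFAULT_SLICE

print(slice_for_app("com.example.cloudgame"))  # routed onto the URLLC slice
```

The key point is that, from Android 12 on, traffic from two such apps can ride on two different slices at the same time on one device.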
These enhancements on the RAN and devices close several gaps. We can therefore expect Network Slicing to gain even more traction in 2023.
Network Slicing versus Mobile Private Network
Several hundred successful 4G and 5G Mobile Private Networks (MPN) have been deployed globally. Many have specific indoor coverage, cybersecurity, or business-critical performance requirements that can be best accomplished with dedicated network resources. The common challenges for MPN are private spectrum availability, high cost of deployment and operations, and long lead times.
Some 5G use cases can only be deployed through Network Slicing or only through MPN, but the majority can be deployed on either. In our view, the discussion should not focus too much on comparing Network Slicing to MPN but should rather be on the use case requirements, such as coverage, where Network Slicing is a natural fit for wide areas and MPN is a natural fit for deep indoor. Communications service providers should have both solutions in their toolbox, as individual enterprise customers may require both for their various use cases. Let the use case dictate the solution, similar to the approach of most network equipment providers for private wireless (4G/5G versus WiFi6/6E).
The Future of Network Slicing
In our view, the recently available slicing features and commercial/pre-commercial market deployments are clear evidence that Network Slicing is here to stay, enabling new service creation and fostering competitive differentiation. Only time will tell how successful it will be with consumer and enterprise market segments. The level of investment by governments, industry groups, communications service providers, and network equipment providers will play a major role in the success or failure of Network Slicing. At the same time, communications service providers should keep in mind other industry players like AWS and other webscale companies, who are betting big on 5G with MPN-based solutions (as Network Slicing is not an option for them).
Communications service providers must understand that Network Slicing, in most situations, is not a sellable service, but rather an enabler to support services with performance or security requirements that are significantly different from mobile broadband. Differentiation for most of the use cases will be in the RAN domain since the air interface is a constrained resource and the RAN equipment is too costly to dedicate.
While there is no harm in having the management and orchestration layer from the start, especially if CAPEX is not an issue, we still recommend first focusing on deploying the end-to-end network features Network Slicing requires and on identifying monetizable use cases that will benefit from it. Note that some use cases require additional features, such as those that lower latency and improve reliability.
The vast majority of consumer and enterprise end users are not interested in the underlying technologies; they just want the speed, latency, and reliability required by the services they use. In many cases, even discussions of speed, latency, and reliability do not interest them, as long as the services perform as expected. Communications service providers should have the capability to create and market these services by themselves or, in most instances, with the right partners. Unlike with 4G, the potential of 5G can no longer be realized by communications service providers and network equipment vendors alone.
Communications service providers should have a complete toolbox – different tools for different requirements. And the guidance is not to stand idly by, but to gain experience and form partnerships for both Network Slicing and MPN.
Deploying Network Slicing or MPN, and moving into new business models built on offering multiple tailored and assured connectivity services, are not trivial tasks. Here is how Dell can help CSPs in this transformation journey:
- Vast enterprise experience and solutions, along with Service Co-creation teams to co-develop services and go-to-market strategies.
- MPN and Edge solutions: end-to-end, platform-centric solutions with connectivity and application fabric based on a modular architectural design. Dell works with several industry-leading Independent Software Vendors (ISVs) to offer validated designs and ready-to-use platforms that are customizable and integrated by Dell's SI partners to meet the desired business outcomes.
- Engineered and automated solutions for the underlying cloud infrastructure. An open and cloud-native architecture is the industry's chosen platform for mobile networks. It has many benefits, such as better service agility and vendor independence, but it also introduces more complexity. With Dell Technologies’ engineered and automated infrastructure blocks, communications service providers can focus on creating new services.
About the author: Tomi Varonen
Principal Global Enterprise Architect
Tomi Varonen is a Telecom Network Architect in Dell’s Telecom Systems Business Unit. He is based in Finland and works on Cloud and Core Network customer cases in the EMEA region. Tomi has over 23 years of experience in the Telecom sector in various technical and sales positions. He has wide expertise in end-to-end mobile networks and enjoys creating solutions for new technology areas. Tomi has a passion for various outdoor activities with family and friends, including skiing, golf, and bicycling.
About the author: Arthur Gerona
Principal Global Enterprise Architect
Arthur is a Principal Global Enterprise Architect at Dell Technologies. He is working on the Telecom Cloud and Core area for the Asia Pacific and Japan region. He has 19 years of experience in Telecommunications, holding various roles in delivery, technical sales, product management, and field CTO. During his free time, Arthur likes to travel with his family.
Related Blog Posts
What is Happening in the Network Edge
Mon, 26 Jun 2023 10:59:44 -0000
Where is the Network Edge in Mobile Networks
The notion of ‘Edge’ can take on different meanings depending on the context, so it’s important to first define the term. Edge infrastructure can be broadly classified into two categories: Enterprise Edge and Network Edge. The former refers to infrastructure hosted by the company using the service, while the latter refers to infrastructure hosted by the Mobile Network Operator (MNO) providing the service.
This article focuses on the Network Edge, which can be located anywhere from the Radio Access Network (RAN) to next to the Core Network (CN). Network Edge sites collocated with the RAN are often referred to as Far Edge.
What is in the Network Edge
In a 5G Standalone (5G SA) Network, a Network Edge site typically contains a cloud platform that hosts a User Plane Function (UPF) to enable local breakout (LBO). It may include a suite of consumer and enterprise applications, for example, those that require lower latency or more privacy. It can also benefit the transport network when large content such as Video-on-Demand is brought closer to the end users.
Modern cloud platforms are envisioned to be open and disaggregated, enabling MNOs to rapidly onboard new applications from different Independent Software Vendors (ISVs) and thus accelerating technology adoption. These platforms are typically composed of Commercial-off-the-Shelf (COTS) hardware, multi-tenant Container-as-a-Service (CaaS) platforms, and multi-cloud Management and Orchestration solutions.
Similarly, modern applications are designed to be cloud-native to maximize service agility. By having microservices architectures and supporting containerized deployments, MNOs can rapidly adapt their services to meet changing market demands.
What contributes to Network Latency
The appeal of Network Edge or Multi-access Edge Computing (MEC) is commonly associated with lower latency or more privacy. While moving applications from beyond the CN to near the RAN does eliminate up to tens of milliseconds of delay, it is also important to understand that there are many other contributors to network latency which can be optimized. In fact, latency is added at every stage from the User Equipment (UE) to the application and back.
RAN is typically the biggest contributor to network latency and jitter, the latter being a measure of fluctuations in delay. Accordingly, 3GPP has introduced many enhancements in 5G New Radio (5G NR) to reduce latency and jitter in the air interface. There are three primary categories where latency can be reduced:
- Transmission time: reduce symbol duration with higher subcarrier spacing or with mini slots
- Waiting time: improve scheduling (optimize handshaking), simultaneous transmit/receive, and uplink/downlink switching with TDD
- Processing time: reduce UE and gNB processing and queuing with enhanced coding and modulation
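The transmission-time lever can be made concrete with a little arithmetic: 5G NR subcarrier spacing (SCS) scales as 15 × 2^mu kHz with numerology mu, so symbol and slot durations shrink accordingly. A quick sketch:

```python
def scs_khz(mu):
    """5G NR subcarrier spacing for numerology mu (3GPP TS 38.211): 15 * 2^mu kHz."""
    return 15 * 2 ** mu

def slot_ms(mu):
    """Slot duration (14 OFDM symbols) halves with each numerology step: 1 / 2^mu ms."""
    return 1.0 / 2 ** mu

def symbol_us(mu):
    """Useful OFDM symbol duration (excluding cyclic prefix) = 1 / SCS, in microseconds."""
    return 1e6 / (scs_khz(mu) * 1e3)

for mu in range(4):
    print(f"mu={mu}: SCS={scs_khz(mu)} kHz, "
          f"slot={slot_ms(mu)} ms, symbol={symbol_us(mu):.2f} us")
```

Going from 15 kHz (mu=0) to 120 kHz (mu=3) spacing cuts the slot from 1 ms to 0.125 ms, which directly shortens the transmission time per scheduling opportunity.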
Transport latency is relatively simple to understand, as it is mainly due to light propagation in optical fiber. The industry rule of thumb is 1 millisecond of round-trip latency for every 100 kilometers. The number of hops along the path also impacts latency, as every piece of transport equipment adds a bit of delay.
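This rule of thumb follows directly from the speed of light in fiber, roughly c / 1.5 ≈ 200,000 km/s. A small sketch, where the per-hop equipment delay is our own illustrative assumption rather than a standard figure:

```python
# Light in optical fiber travels at roughly 200,000 km/s,
# i.e. about 5 microseconds per kilometer one way.
FIBER_KM_PER_MS = 200.0  # one-way kilometers covered per millisecond

def fiber_rtt_ms(distance_km, hops=0, per_hop_us=50):
    """Round-trip propagation delay plus a rough per-hop equipment penalty.
    The 50 us per hop is an illustrative assumption, not a standard figure."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    equipment = 2 * hops * per_hop_us / 1000.0
    return propagation + equipment

print(fiber_rtt_ms(100))  # 1.0 -> matches the "1 ms per 100 km" rule of thumb
```

This is why moving an application 300 km closer to the user saves about 3 ms of round trip on propagation alone, before counting any hop savings.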
Typically, the CN adds less than 1 millisecond to the latency. The challenge for the CN is more about keeping latency low for mobile UEs by seamlessly changing anchors to the nearest Edge UPF through a new ‘make before break’ procedure. Also, the UPF architecture and Gi/SGi services (e.g., Deep Packet Inspection, Network Address Translation, and Content Optimization) may add a few milliseconds to the overall latency, depending on whether these functions are integrated or independent.
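The anchor-relocation idea can be sketched as picking the UPF nearest the UE's current attachment point. The site names and one-dimensional distances below are invented; real 5GC UPF selection also weighs load, DNAI policy, and more:

```python
def pick_upf(ue_position_km, upf_sites):
    """Return the edge UPF closest to the UE's current attachment point.
    Positions are km marks along a transport route -- a toy stand-in for
    real 5GC UPF selection, which also considers load and policy."""
    return min(upf_sites, key=lambda name: abs(upf_sites[name] - ue_position_km))

upfs = {"edge-west": 20.0, "edge-east": 180.0, "central": 400.0}
anchor = pick_upf(30.0, upfs)       # nearest anchor at attach time
new_anchor = pick_upf(170.0, upfs)  # after the UE has moved
# 'Make before break': the session is established on the new anchor
# before the old one is released, so the application sees no gap.
print(anchor, "->", new_anchor)
```

Without such relocation, a UE that has moved far from its original anchor would drag all its traffic back to a now-distant UPF, losing the latency benefit of the edge.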
Architectural and Business approaches for the Network Edge
The physical locations that host RAN and Network Edge functionalities are widely recognized to be some of the MNOs’ most valuable assets. Few other entities today have the real estate and associated infrastructure (e.g., power, fiber) to bring cloud capabilities this close to the end clients. Consequently, monetization of the Network Edge is an important component of most MNOs’ strategy for maximizing their investment in the mobile network and, specifically, in 5G. In almost all cases, the Network Edge monetization strategy includes making Network Edge available for Enterprise customers to use as an “Edge Cloud.” However, doing so involves making architectural and business model choices across several dimensions:
- Connectivity or Cloud: should the MNO offer a cloud service, or just the connectivity to a cloud service provided by a third party (and potentially hosted at a third party’s site)?
- aaS model: in principle, the full range of as-a-Service models is available for the MNO to offer at the network edge. This includes co-location services, Bare-Metal-as-a-Service, Infrastructure-as-a-Service (IaaS), Containers-as-a-Service (CaaS), and Platform- and Software-as-a-Service (PaaS and SaaS). Going up this value chain (from co-location to SaaS) allows the MNO to capture more of the value provided to the Enterprise. However, it also requires the MNO to take on significantly more responsibility and puts it in direct competition with well-established players in this space, such as the cloud hyperscale companies. The right mix of offerings – and it is invariably a mix – thus involves a complex set of technical and business-case tradeoffs. The end result will be different for every MNO, and how each arrives there will also be unique.
- Management framework: our industry’s initial approach to exposing the Network Edge to enterprises involved a management framework that is tightly coupled to how the MNO manages its network functions (e.g., the ETSI MEC family of standards). However, this approach comes with several drawbacks from an Enterprise point of view. As a result, a loosely coupled approach, where the Enterprise manages its Edge Cloud applications using typical cloud management solutions, appears to be gaining significant traction, with solutions such as Amazon’s Wavelength as an example. This approach, of course, has its own drawbacks, and managing the interplay between the two is an important consideration in the Network Edge (one that is intertwined with the selection of the aaS model).
- Network-as-a-Service: a unique aspect of the Network Edge is the MNO’s ability to expose network information to applications, as well as to provide those applications (highly curated) means of controlling the network. Whether and how this makes sense is again both a business-case issue – for the MNO and the Enterprise – and a technical/architectural one.
The likely end state is a complex mixture of services and go-to-market models focused on the Enterprise (B2B) segment. The exposure of operational automation, together with the 5G features designed to address this segment, makes this a potentially huge opportunity for MNOs. Navigating the complexities of this space requires a deep understanding of both what services Enterprises are looking for and how they are looking to consume them. It also requires an architectural approach that can handle the variable mix of what is needed in a way that is highly scalable.
As the long-time leader in Enterprise IT services, Dell is uniquely positioned to address this space – stay tuned for more details in an upcoming blog!
Building the Network Edge
There are several factors to consider when moving workloads from central sites to edge locations. Limited space and power are at the top of the list. Edge locations are farther from major cities and generally more exposed to the elements, which calls for a new class of denser, easier-to-service, and even ruggedized form factors. Thanks to the popularity of Open RAN and Enterprise Edge, there are already solutions on the market today that can also be used for the Network Edge. Read more in the Edge blog series: Computing on the Edge | Dell Technologies Info Hub.
Higher deployment and operating costs are another major factor. The sheer number of edge locations, combined with their poorer accessibility, makes them more expensive to build and maintain. The economics of the Network Edge thus necessitates automation and pre-integration. Dell’s answer is a newly engineered cloud-native solution with automated deployment and lifecycle management at its core. Read more on this approach: Dell Telecom MultiCloud Foundation | Dell USA.
Last is the lower cost of running applications centrally. Central sites have the advantage of pooling compute resources and sharing facilities such as power, connectivity, and cooling. It is therefore important to reduce overhead at the edge wherever possible, for example by opting for containerized over VM-based cloud platforms. Moreover, an open and disaggregated horizontal cloud platform not only allows for multitenancy at edge locations, which significantly reduces overhead, but also enables application portability across the network to maximize efficiency.
The ideal situation is where Open/Cloud RAN and the Network Edge share sites, thus splitting several of the deployment and operations costs. Due to latency requirements, the Distributed Unit (DU) must be placed within 20 kilometers of the Radio Unit (RU). Latency requirements for the mid-haul interface between the DU and the Central Unit (CU) are less stringent, so the CU can be placed roughly 80-100 kilometers from the DU. In addition, the Near-Real-Time RAN Intelligent Controller (Near-RT RIC) and the related xApps must be placed within 10 ms round-trip time. This makes it possible to collocate Network Edge sites with the CU sites and the Near-RT RIC.
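These placement budgets can be captured in a simple feasibility check; the thresholds below restate the figures above and are indicative rather than normative:

```python
def check_placement(ru_to_du_km, du_to_cu_km, ric_rtt_ms):
    """Flag placements that break the latency budgets cited above.
    Thresholds are indicative; exact budgets vary by deployment and vendor."""
    issues = []
    if ru_to_du_km > 20:
        issues.append("DU too far from RU (fronthaul limit ~20 km)")
    if du_to_cu_km > 100:
        issues.append("CU too far from DU (midhaul limit ~80-100 km)")
    if ric_rtt_ms > 10:
        issues.append("Near-RT RIC outside its ~10 ms RTT budget")
    return issues or ["placement OK"]

print(check_placement(15, 90, 2.0))    # collocating edge with the CU site works
print(check_placement(25, 120, 12.0))  # every budget violated
```

A check like this makes the collocation argument concrete: a CU site within the midhaul budget is usually also well inside the Near-RT RIC's control-loop budget.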
Future
Over the past few years, several MNOs have moved away from having 2-3 national DCs for their entire CN to deploying 5-10 regional DCs where some network functions, such as the UPF, are distributed. One example of this is AT&T’s dozen “5G Edge Zones” introduced in major metropolitan areas: AT&T Launching a Dozen 5G “Edge Zones” Across the U.S. (att.com).
This approach already suffices for the majority of “low latency” use cases, and in smaller countries even the traditional 2-3 national DCs can offer sufficiently low transport latency. However, for critical use cases with more stringent requirements, where consistently very low latency is a must, moving applications to Far Edge sites becomes a necessity, in tandem with 5G SA enhancements such as network slicing and an optimized air interface.
The challenge with consumer use cases such as cloud gaming is supporting the required Service Level (i.e., low latency) countrywide. And since enabling the network to support this requires a substantial initial investment, we are seeing the classic chicken-and-egg problem: independent software vendors opt not to develop these more demanding applications, while MNOs keep waiting for the “killer use cases” to justify the initial Network Edge investment. As a result, we expect geographically limited enterprise use cases to gain market traction first and serve as catalysts for initially limited Network Edge deployments.
For use cases where assured speeds and low latency are critical, end-to-end Network Slicing is essential. In order to adopt a new more service-oriented approach, MNOs will need Network Edge and low latency enhancements together with Network Slicing in their toolbox. For more on this approach and Network Slicing, please check out our previous blog To slice or not to slice | Dell Technologies Info Hub.
About the author: Tomi Varonen
Tomi Varonen is a Telecom Network Architect in Dell’s Telecom Systems Business Unit. He is based in Finland and works on Cloud, Core Network, and OSS/BSS customer cases in the EMEA region. Tomi has over 23 years of experience in the Telecom sector in various technical and sales positions. He has wide expertise in end-to-end mobile networks and enjoys creating solutions for new technology areas. He has a passion for various outdoor activities with family and friends, including skiing, golf, and bicycling.
About the author: Arthur Gerona
Arthur is a Principal Global Enterprise Architect at Dell Technologies. He is working on the Telecom Cloud and Core area for the Asia Pacific and Japan region. He has 19 years of experience in Telecommunications, holding various roles in delivery, technical sales, product management, and field CTO. When not working, Arthur likes to keep active and travel with his family.
About the author: Alex Reznik
Alex Reznik is a Global Principal Architect in Dell Technologies’ Telco Solutions Business organization. In this role, he is focused on helping Dell’s Telco and Enterprise partners navigate the complexities of Edge Cloud strategy and turning the potential of 5G Edge transformation into the reality of business outcomes. Alex is a recognized industry expert in the area of edge computing and a frequent speaker on the subject. He is a co-author of the book "Multi-Access Edge Computing in Action." From March 2017 through February 2021, Alex served as Chair of ETSI’s Multi-Access Edge Computing (MEC) ISG – the leading international standards group focused on enabling edge computing in access networks.
Prior to joining Dell, Alex was a Distinguished Technologist in HPE’s North American Telco organization. In this role, he was involved in various aspects of helping Tier 1 CSPs deploy state-of-the-art flexible infrastructure capable of delivering on the full promises of 5G. Prior to HPE Alex was a Senior Principal Engineer/Senior Director at InterDigital, leading the company’s research and development activities in the area of wireless internet evolution. Since joining InterDigital in 1999, he has been involved in a wide range of projects, including leadership of 3G modem ASIC architecture, design of advanced wireless security systems, coordination of standards strategy in the cognitive networks space, development of advanced IP mobility and heterogeneous access technologies and development of new content management techniques for the mobile edge.
Alex earned his B.S.E.E. Summa Cum Laude from The Cooper Union, S.M. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology, and Ph.D. in Electrical Engineering from Princeton University. He held a visiting faculty appointment at WINLAB, Rutgers University, where he collaborated on research in cognitive radio, wireless security, and future mobile Internet. He served as the Vice-Chair of the Services Working Group at the Small Cells Forum. Alex is an inventor of over 160 granted U.S. patents and has been awarded numerous awards for Innovation at InterDigital.
Defining the future of O-RAN Management with Vodafone, Amdocs, and Dell Technologies
Thu, 22 Feb 2024 13:08:00 -0000
Seizing the initiative to define the future of Open RAN management
The transformative journey of communication service provider (CSP) networks has reached a new, exciting stage. As operators increasingly adopt cloud technologies and embrace disaggregated architecture, the O-RAN Alliance is leading an expansion into the radio access network (RAN) realm. By disrupting the traditional RAN landscape, O-RAN is driving the industry towards a software-driven approach that leverages diverse software and hardware from multiple vendors to achieve the best possible outcomes. The goal is to create integrated, tested and certified solutions that deliver lower total cost of ownership (TCO) and amplified innovation.
With over 40 years of industry expertise, Amdocs is a leading provider of software and services to communications and media companies. The company offers market-leading capabilities for service providers’ operations support systems (OSS) and radio access networks (RANs), and has delivered proven solutions in network management, planning, and optimization. To meet emerging challenges, Amdocs also collaborates closely with leading industry organizations like the Telecom Infra Project and the O-RAN Alliance.
Dell Technologies is a global leader in digital transformation and infrastructure. Its products are widely utilized by global telecom operators in network and IT infrastructure, ranging from purpose-built telecom servers to cloud-native orchestration and infrastructure automation solutions. The company also offers bundled solutions developed in close collaboration with a diverse ecosystem of partners in O-Cloud and workload layers, and has extensive representation in key industry forums, including the O-RAN Alliance, Telecom Infra Project, and 3GPP.
To advance a shared vision for O-RAN management, our two companies have partnered to enable cloud transformations throughout the industry. For example, consider Amdocs Service Management and Orchestration (SMO) for O-RAN, whose capabilities include orchestration, inventory, and assurance for any managed element, including xApps and rApps.
While the Amdocs offering supports any O-Cloud, across bare metal and CaaS, when integrated with the Dell Telecom Infrastructure Automation Suite it supports deployments on Dell Technologies’ industry-leading telecom servers, as well as O-Cloud layer software provided by partner organizations. This integration enables CSPs to rapidly provision, manage, and monitor their O-Cloud infrastructure, and simplifies the lifecycle management of infrastructure nodes in a dynamic, disaggregated network. A proof of concept (PoC) showcasing this solution's capabilities is currently underway at Vodafone Group, encompassing both immediate use cases and a roadmap of forward-looking scenarios.
Bringing efficiencies to O-RAN with Service Management and Orchestration (SMO)
Service Management and Orchestration (SMO) is a key pillar in service and network orchestration, addressing specific CSP needs. By operating across multiple hierarchies, SMO efficiently manages multi-vendor, multi-technology entities with varying lifecycles. Furthermore, by focusing on cloud infrastructure, virtualized and containerized cloud-native functions (CNFs), it’s fully aligned with the industry’s developing architecture, seamlessly integrating with, and actively contributing to O-RAN standards and interfaces.
Amdocs SMO provides all the capabilities required to manage O-RAN. It supports the end-to-end lifecycle of the network, including design and onboarding, orchestration and management, inventory, and assurance processes. This approach also extends to embracing the openness and disaggregated approach of O-RAN, with support for heterogeneous multi-technology, multi-vendor networks – bringing CSPs cost efficiencies and empowering innovation.
Figure 1: Amdocs Service Management and Orchestration Solution Overview
Amdocs’ SMO supports a diverse set of use cases, from O-RAN network rollout, network slicing and O-RAN energy efficiency savings, to assurance and closed-loop operations. Furthermore, it’s instrumental in simplifying the rollout process, addressing challenges presented by the disaggregated, multi-vendor nature of O-RAN.
Post-rollout too, SMO plays a pivotal role in managing each individual network slice, ensuring RAN performance, maintaining service-level objectives, and undertaking corrective actions. This is achieved by leveraging standard FM, PM, and SQM capabilities, as well as O-RAN apps, which are deployed within both the Non-RT RIC (rApps) and the Near-RT RIC (xApps) to support different optimization use cases. Throughout, the solution fully adheres to O-RAN specifications and standards.
Streamlining with Infrastructure and O-Cloud automation
Dell Technologies Infrastructure Automation Suite helps to simplify and automate infrastructure management in disaggregated networks, allowing CSPs to seamlessly provision, manage and monitor their infrastructure. In addition to operating based on the O-RAN O2-IMS and O2-DMS APIs, the Suite provides an open, model-driven framework for a ubiquitous single point of control. This suite then serves as the unified entry and exit point for automated deployment and orchestration of multi-site and multi-vendor infrastructure, as well as streamlined day 2 lifecycle management, including updates and upgrades.
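The declarative, model-driven idea boils down to reconciling desired state against observed state and deriving the actions needed to converge. The toy model below is our own, with hypothetical site names and fields; the actual suite works through the O-RAN O2-IMS and O2-DMS APIs rather than anything like this:

```python
# Desired state: what the operator declares each site should look like.
desired = {
    "site-a": {"servers": 4, "caas": "v1.28"},
    "site-b": {"servers": 2, "caas": "v1.28"},
}
# Observed state: what telemetry reports is actually deployed.
observed = {
    "site-a": {"servers": 4, "caas": "v1.27"},
}

def reconcile(desired, observed):
    """Derive the actions needed to make observed state match desired state."""
    actions = []
    for site, want in desired.items():
        have = observed.get(site)
        if have is None:
            actions.append(f"provision {site}: {want}")
        elif have != want:
            actions.append(f"update {site}: {have} -> {want}")
    for site in observed.keys() - desired.keys():
        actions.append(f"decommission {site}")
    return actions

for action in reconcile(desired, observed):
    print(action)
```

The appeal of the declarative approach is exactly this loop: the operator edits the model, and the automation, not a human runbook, computes and executes the Day 2 changes.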
Figure 2 Dell Telecom Infrastructure Automation Suite
Dell Telecom Infrastructure Automation Suite’s open and extensible architecture serves as the driving force behind O-RAN infrastructure automation. It comprises a comprehensive set of components, including full orchestration, data-driven telemetry of cloud infrastructure, resource controllers, API adaptors, a user interface, and a single pane of glass for the complete cloud infrastructure.
Importantly, the suite, with its open declarative automation framework, also delivers support for cloud infrastructure operations, lower infrastructure total cost of ownership (TCO), accelerated time to market (TTM)/time to repair (TTR), and a modular, extensible architecture to avoid vendor lock-in.
A ground-breaking proof of concept with Vodafone
A main takeaway from our collaboration with Vodafone was that the ability to replace manual processes with zero-touch operations would represent a real game changer. To showcase this vision, Amdocs and Dell Technologies set the goal of building a proof-of-concept (PoC) that would achieve this objective. Taking an end-to-end distributed zero-touch deployment approach, we set out to build a model that significantly reduces the time to bring new sites and services online. Ultimately, Vodafone also seeks to automate the radio network rollout and validate the joint solution’s ability to manage a hybrid, multi-vendor, and disaggregated O-RAN network.
For this PoC, a joint blueprint was created, whereby Amdocs would manage SMO and system integration, Dell would oversee the O-Cloud and infrastructure (including bare metal) layers, and Radisys would provide the O-RAN CNFs. Additional software included Red Hat® OpenShift®, a hybrid cloud application platform powered by Kubernetes, as the CaaS platform, and OpenTelemetry for performance metrics in the CaaS layer.
Figure 3 Vodafone O-RAN PoC blueprint
Vodafone Proof of Concept use cases
The PoC aims to showcase the seamless integration of Amdocs SMO with Dell Telecom Infrastructure Automation Suite, enabling zero-touch deployment of a RAN site. The deployment involves transitioning infrastructure from bare metal to the cloud using a declarative approach. Once the site is deployed, Amdocs and Dell will demonstrate the end-to-end implementation through a data call. Both Amdocs SMO assurance capabilities and Dell Telecom Infrastructure Automation Suite will gather and transmit telemetry data from the infrastructure, the CaaS layer, and the RAN network functions to Amdocs SMO, facilitating real-time monitoring of alarms and events. The setup is versatile, supporting both service assurance and closed-loop automation.
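The closed-loop automation the PoC targets boils down to a simple cycle: ingest telemetry, compare it against service-level objectives, and trigger a corrective action on breach. The sketch below shows that cycle in miniature; the metric names, thresholds, and action names are illustrative assumptions, not values from the PoC.

```python
# Minimal closed-loop sketch: telemetry in, SLO check, corrective action out.
# Metrics and thresholds are hypothetical; a real loop would consume
# O1/O2 telemetry streams and act through the orchestrator.
THRESHOLDS = {"prb_utilization_pct": 85.0, "drop_rate_pct": 1.0}

def evaluate(sample: dict) -> list[str]:
    """Compare one telemetry sample to SLO thresholds; return actions."""
    actions = []
    if sample["prb_utilization_pct"] > THRESHOLDS["prb_utilization_pct"]:
        actions.append("scale-out-du")      # congestion: add capacity
    if sample["drop_rate_pct"] > THRESHOLDS["drop_rate_pct"]:
        actions.append("raise-alarm")       # quality breach: alert assurance
    return actions

sample = {"prb_utilization_pct": 91.2, "drop_rate_pct": 0.4}
print(evaluate(sample))
```

In the PoC, the "evaluate" step corresponds to SMO assurance correlating alarms and KPIs, and the resulting actions flow back down through orchestration, which is what makes the loop zero-touch.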
Roadmap to innovation
Looking ahead, Amdocs and Dell Technologies remain committed to evolving SMO and O-Cloud management in alignment with O-RAN standards, and empowering CSPs with the flexibility and agility they need for O-RAN deployment activities.
Amdocs SMO remains central to this goal, supporting a rich set of capabilities, including model-driven dynamic orchestration, service decomposition, network slicing, dynamic inventory, and closed-loop SLA assurance. Importantly, we’re also investing in specific O-RAN capabilities such as the O1, O2, R1, and A1 interfaces, as well as management of xApps/rApps and their respective ML models.
Meanwhile, Dell Telecom Infrastructure Automation Suite effectively manages the complete lifecycle of the O-Cloud, using the O2 API and RESTful APIs. Employing an open software framework with vendor-agnostic resource controllers, the Suite empowers CSPs to fully capitalize on the advantages of disaggregated infrastructure and cloud layers. It can also seamlessly configure the O-Cloud by orchestrating intricate dependencies, coordinating tasks across various infrastructure elements and cloud stacks.
Even as Amdocs and Dell Technologies solidify our positions as key players in O-RAN development, we remain equally excited to find new ways to collaborate and innovate in the ever-evolving O-RAN management landscape.