
Distribution of 5G Core to Network Edge

Gaurav Gangwal, Christina Perfetto

Thu, 25 Apr 2024 16:15:38 -0000



Thus far in our blog series, we have discussed migrating to an open cloud-native ecosystem, the 5G Core and its architecture, and how Dell Telecom Infrastructure Block for Red Hat can help simplify 5G network deployment for Red Hat® OpenShift®. Now, we would like to introduce a key use case for distributing 5G core User Plane functions from the centralized data center to the network edge.

Distributing Core Functions in 5G networks

The evolution of communication technology has brought us to the era of 5G networks, promising faster speeds, lower latency, and the ability to connect billions of devices simultaneously. However, to achieve these ambitious goals, the architecture of 5G networks needs to be more flexible, scalable, and efficient than ever before. With the advent of CUPS, or Control and User Plane Separation, in later LTE releases, the telecommunications industry had high expectations for a prototypical distributed control-user plane architecture. This development was seen as a stepping stone towards the more advanced 5G networks that were on the horizon. CUPS separates control-plane and user-plane functionality: the Control Plane (specifically the Session Management Function, or SMF) is typically centralized, while the User Plane Function (UPF) can be located alongside the Control Plane or distributed to other locations in the network as demanded by specific use cases.
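As a thought experiment, the CUPS split can be sketched in a few lines of code: a centralized control plane owns session state and IP assignment, while user-plane instances, wherever they sit, only forward traffic against state pushed down to them. The class and method names below are our own illustration, not 3GPP-defined APIs.

```python
# Minimal, illustrative sketch of CUPS: the SMF (control plane) establishes
# session state; UPF instances (user plane) only forward using that state.
# Names and structures are simplified for illustration, not 3GPP APIs.

class UPF:
    """User Plane Function: forwards packets for sessions it knows about."""
    def __init__(self, location):
        self.location = location
        self.sessions = {}          # ue_id -> assigned IP

    def install_session(self, ue_id, ue_ip):
        # Session rules are pushed down from the control plane (N4-like).
        self.sessions[ue_id] = ue_ip

    def forward(self, ue_id, payload):
        if ue_id not in self.sessions:
            raise LookupError("no session installed for this UE")
        return f"{self.sessions[ue_id]} -> data network: {payload}"

class SMF:
    """Session Management Function: centralized session control."""
    def __init__(self):
        self.next_ip = 1

    def create_session(self, ue_id, upf):
        # The SMF stays centralized; the chosen UPF may be central or edge.
        ue_ip = f"10.0.0.{self.next_ip}"
        self.next_ip += 1
        upf.install_session(ue_id, ue_ip)
        return ue_ip

smf = SMF()
edge_upf = UPF(location="edge-site-1")
ip = smf.create_session("ue-42", edge_upf)
print(edge_upf.forward("ue-42", "hello"))   # 10.0.0.1 -> data network: hello
```

The point of the separation is visible in the sketch: the same SMF can install sessions on a UPF deployed centrally or at any edge site, without the user-plane code changing.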

Understanding the need for Distributed UPF 

The UPF is a key component in 5G networks, responsible for handling user data traffic. Distributed User Plane Function (D-UPF) is an advanced network architecture that distributes UPF functionality across multiple nodes closer to the user and enables local breakout (LBO) for use cases that require lower latency or greater privacy, creating a more scalable and flexible networking environment. With D-UPF, operators can handle increasing data volumes, reduce latency, and enhance overall network performance. By distributing the UPF, operators can effectively manage the increasing data demands across different consumer and enterprise use cases in a cost-effective manner.
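To see why proximity matters for latency, consider a rough propagation-delay estimate. The distances and the roughly 5 µs/km one-way fiber propagation figure below are illustrative assumptions, not measurements of any particular network:

```python
# Back-of-the-envelope illustration of why moving the UPF to the edge cuts
# latency. Distances and the ~5 microseconds/km fiber propagation figure
# are illustrative assumptions, not measurements of a specific network.

US_PER_KM = 5.0          # one-way propagation in fiber, approximate

def round_trip_ms(distance_km):
    return 2 * distance_km * US_PER_KM / 1000.0

central = round_trip_ms(500)   # UE traffic hauled to a distant central DC
edge    = round_trip_ms(20)    # UE traffic broken out at a nearby edge site

print(f"central UPF: {central:.1f} ms RTT")   # 5.0 ms
print(f"edge UPF:    {edge:.1f} ms RTT")      # 0.2 ms
```

Propagation is only one component of end-to-end latency, but it sets a hard floor that no amount of compute can remove, which is why placement matters.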

 

Figure 1: Distributed User Plane function in 5G Core Architecture

D-UPF also plays a crucial role in enabling edge computing in 5G networks. By distributing the user plane traffic closer to the network edge, D-UPF reduces the latency associated with data transmission to and from centralized data centers. This opens opportunities for real-time applications, such as autonomous vehicles, augmented reality, and industrial automation, where low latency is critical for their proper functioning. 

Distributed UPF deployment options 

Figure 2: D-UPF deployment and functionality 

The above diagram provides an overview of the different roles D-UPF may play in a 5G architecture. For example:

  • Centralized UPF/PSA-UPF: In the simplest scenario, the UPF is centralized: the session anchor resides within the data center and handles long-term, stable IP address assignment. One example is a VoLTE/NR call, where PDU Session Anchor (PSA)-UPF traffic is steered to the IMS.
  • Intermediate UPF (I-UPF): An intermediate UPF (I-UPF) can be inserted on the user-plane path between the RAN and a PSA. Here are two possible reasons to do that:
    • If, due to mobility, the UE moves to a new RAN node and the new RAN node cannot support an N3 tunnel to the old PSA, an I-UPF is inserted; this I-UPF has an N3 interface towards the RAN and an N9 interface towards the PSA-UPF. This process of linking multiple UPFs is called UPF chaining, which involves directing user data flows through a series of UPFs, each performing specific functions.
    • You might want to deploy a UPF within the local data center/edge for a low-latency use case, to steer data traffic to a co-deployed MEC for edge services or to break out traffic to the local data network.
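The chaining described in the first sub-bullet can be sketched as a simple path-building exercise. The N3/N9 interface labels follow 3GPP naming, but the code structure is our own simplification:

```python
# Illustrative sketch of UPF chaining: user-plane traffic from the RAN
# traverses an intermediate UPF before reaching the PDU Session Anchor.
# Interface labels (N3, N9) follow 3GPP naming; the code structure itself
# is a simplification for illustration, not a 3GPP-defined API.

def build_path(ran, upf_chain):
    """Return the hop-by-hop user-plane path as (node, ingress_interface)."""
    path = [(ran, None)]
    for i, upf in enumerate(upf_chain):
        # The first UPF after the RAN terminates the N3 tunnel;
        # UPF-to-UPF hops use the N9 interface.
        path.append((upf, "N3" if i == 0 else "N9"))
    return path

# Mobility case from the text: the UE moved to a new RAN node, so an I-UPF
# is inserted between that node and the original PSA-UPF, which keeps the
# stable IP anchor.
path = build_path("gNB-new", ["I-UPF@edge", "PSA-UPF@core"])
print(path)
# [('gNB-new', None), ('I-UPF@edge', 'N3'), ('PSA-UPF@core', 'N9')]
```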

Challenges and considerations for D-UPF deployments 

Now that we have reviewed the need for D-UPF and the different deployment scenarios, let’s consider some of the obstacles you will encounter along the way. As we all know, these network functions have their own needs, especially when it comes to the amount of data being inspected, routed, and forwarded from the core to the edge, and back again. Below are four areas for consideration:

  • Resource Constraints: Edge or remote locations often have limited physical space available for deploying network equipment. The challenge lies in accommodating the necessary hardware, cooling systems, and other infrastructure within these space-constrained environments. Remote locations may also have limited or unreliable power supply infrastructure. As UPFs are extended to the edge, it becomes important to opt for infrastructure that offers high power efficiency, high density, serviceability, a ruggedized exterior, and edge-optimized form factors.
  • Performance Requirements: Low-latency infrastructure is critical to ensure real-time responsiveness and a seamless user experience when deploying core functions to the edge. Also, by processing data at the network edge with minimal latency, the need for high-bandwidth networks to transmit data to the centralized core is reduced. This helps optimize network bandwidth and lower operating costs, ultimately reducing the CSP’s reliance on centralized core infrastructure for time-sensitive operations.
  • Orchestration and Automation: Deploying and managing UPFs distributed across edge locations is a complex challenge. This includes tasks such as workload placement, resource allocation, and automated management of edge infrastructure. Choosing a horizontal telco cloud platform that supports automated distributed core deployment and provides the capability to expand and scale the compute and storage resources to accommodate the varying demands at the edge is a must.
  • Lifecycle Management and Operating Cost: Another significant factor is the increased cost of first deploying and then operating remote deployments. The large number of these locations, coupled with their limited accessibility, makes them more expensive to construct and maintain. To address this, zero-touch provisioning at network setup and sustainable lifecycle management are necessary to optimize the economics of the edge.

The Horizontal Cloud Platform: Dell Telecom Infrastructure Blocks

Figure 3: An Implementation View of Dell Telecom Infrastructure Blocks for Red Hat running the UPF

Dell Technologies is at the forefront of providing cutting-edge cloud-native solutions for the 5G telecom industry. As discussed in our previous blog, Telecom Infrastructure Blocks for Red Hat is one of those solutions, helping operators break down technology silos and empowering them to deploy a common cloud platform from Core to Edge to RAN. These are engineered systems, based on high-performance telecom edge-optimized Dell PowerEdge servers, that have been pre-validated and integrated with Red Hat OpenShift ecosystem software. This makes them a perfect solution for tackling the D-UPF challenges outlined in this blog.

  • Resource Constraints:
    • Space-efficient, modularly designed server options for telecom environments, such as the Dell PowerEdge XR8000 series servers, allow providers to mix and match components based on workload needs. They can run multiple workloads, such as CU/DU and UPF, in the same chassis.
    • Smart cooling designs support harsh edge environments and keep systems at optimal temperatures without using more energy than is needed.[1]
    • Rugged and flexible server options that are less than half the length of traditional servers and offer front or rear connectivity make installation in small enclosures at the base of cell towers easier.[2]

  • Performance Requirements:
    • Provides low-latency processing for edge computing nodes at the network edge.[3]
    • 4th Gen Intel Xeon Scalable processors with Intel vRAN Boost increase DU capacity (up to 2X in specific scenarios) and improve packet core UPF and RAN CU performance by 42%.[2]
  • Orchestration and Automation:
    • Horizontal cloud stack engineered platform based on Red Hat OpenShift allows operators to pool resources to meet changing workload requirements. This is achieved by automating server discovery, creating and maintaining a server inventory, and adding the ability to configure and reconfigure the full hardware and software stack to meet evolving workload requirements.
    • Servers leverage dynamic resource allocation to ensure that computing resources are allocated precisely where and when they are needed. This real-time optimization minimizes resource waste and maximizes network efficiency.[3]
       
  • Lifecycle Management and Operating Cost:
    • Dell Telecom Infrastructure Automation software is included as a fundamental component to automate deployment and lifecycle management.
    • Backed by a unified support model from Dell, with options that meet carrier-grade SLAs, CSPs do not have to worry about multi-vendor management for cloud infrastructure support (covering both the hardware and the cloud platform software): Dell becomes the single point of contact for supporting the telco cloud platform and works with its partners to resolve issues.
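The declarative, zero-touch pattern referenced above can be illustrated with a minimal reconcile loop: compare desired state against discovered inventory and emit the actions an automation layer would take. All names here are invented for illustration; this is not the Dell Telecom Infrastructure Automation API.

```python
# Sketch of the declarative reconcile pattern behind zero-touch
# provisioning: compare desired state against discovered inventory and
# emit the actions an automation layer would take. All names are
# illustrative; this is not any vendor's actual API.

def reconcile(desired, inventory):
    actions = []
    for site, profile in desired.items():
        current = inventory.get(site)
        if current is None:
            actions.append(("provision", site, profile))
        elif current != profile:
            actions.append(("reconfigure", site, profile))
    for site in inventory:
        if site not in desired:
            actions.append(("decommission", site, None))
    return actions

desired   = {"edge-1": "upf-small", "edge-2": "upf-large"}
inventory = {"edge-1": "upf-small", "edge-3": "upf-small"}
for action in reconcile(desired, inventory):
    print(action)
# ('provision', 'edge-2', 'upf-large')
# ('decommission', 'edge-3', None)
```

The operator only edits the desired state; the loop works out what to provision, reconfigure, or retire, which is what makes large fleets of remote sites economical to run.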

Summary

In summary, the need for D-UPF in 5G networks arises from the requirements of handling massive data volumes, improving network efficiency, reducing latency, enabling edge computing, and supporting advanced 5G services. Selecting among the possible deployment scenarios requires an infrastructure capable of meeting your changing objectives today, with the flexibility and scalability to see you through your long-term goals. For example, Dell Telecom Infrastructure Blocks for Red Hat can also serve as a telco cloud building block to host and support the deployment and management of a content delivery network (CDN) at the network edge. By implementing this engineered telco cloud platform solution from Dell, we believe CSPs will be able to streamline the process, and reduce the costs, associated with deploying and maintaining UPFs across edge locations.

To learn more about Telecom Infrastructure Blocks for Red Hat, visit our website Dell Telecom Multicloud Foundation solutions.


[1] Source: ACG Report, “The TCO Benefits of Dell’s Next-Generation Telco Servers,” February 2023.

[2] Source: Dell Technologies, “Introducing New Dell OEM PowerEdge XR Servers.”

[3] Source: Dell Technologies, “Competing in the new world of Open RAN in Telecom.”


Empowering Telecom: Samsung, Dell, and Intel Lead the Open Disaggregated Revolution

Suresh Raam

Fri, 29 Mar 2024 16:00:28 -0000


The telecom industry and Communication Service Providers (CSPs) are transitioning to a new phase with disaggregated RAN. This includes embracing open, cloud-based technologies as the foundational model to support their current and upcoming services.

Samsung, Dell, and Intel are leading this transformation by crafting solutions that integrate advanced open technologies such as virtual Radio Access Network (vRAN) and Open Radio Access Network (Open RAN). Their end-to-end delivery model includes open commercial hardware designed to facilitate a horizontally automated, multi-vendor network built on modern open and disaggregated architectures.

Understanding the importance of a seamless technology transition, Dell Technologies and Intel initiated an Early Customer Evaluation Program. This program empowers customers to leverage the latest infrastructure technologies even before they hit the market. This initiative also provides Samsung, and other strategic partners of Dell and Intel, access to cutting-edge prototype offerings to conduct early solution validation.

In the initial phase of the evaluation program, the Dell PowerEdge XR11, featuring Intel® vRAN Accelerator ACC100 adapter cards, was selected. In the second phase, the team selected Dell PowerEdge XR8000 servers, powered by 4th Gen Intel Xeon Scalable processors with Intel vRAN Boost integrated acceleration, as the foundational hardware for the open infrastructure platform. These ruggedized servers, with features such as short-depth design, NEBS certification, and optimized total cost of ownership (TCO), are ideal for vRAN and ORAN deployments.

The Early Customer Evaluation Program facilitates the integration and verification phase of Samsung’s virtual Central Unit (vCU) and virtualized Distributed Unit (vDU), well before the commercial release dates of Dell's infrastructure platforms. This emphasizes the telecom industry’s mantra, "Time is Money."

Below are some of Samsung’s recent accomplishments, made possible with the support of Dell and Intel:

  • Conducting trials with several major Operators across Asia Pacific, North America, and Europe.
  • Successfully implementing the Samsung vRAN solution on Dell Servers powered by Intel Xeon processors for Vodafone UK’s commercial network, demonstrating superior performance compared to legacy hardware-based RAN in urban areas. The test results surpassed targeted Key Performance Indicators (KPIs), including 4G and 5G downlink and uplink throughput.

Henrik Jansson, Vice President and Head of SI Business Group, Networks Business at Samsung Electronics, states, "With the backing of Dell and Intel’s Early Customer Evaluation Program, we've expedited the integration of our solutions on the latest server and processor versions, delivering greater value to operators."

Gautam Bhagra, Dell Technologies Vice President of Strategic Partnerships adds, "We are happy to support Samsung’s open Telecom initiatives, along with our partner Intel. Our Early Customer Evaluation Program has been a proven testament to the power of collaboration, enabling our partners to anticipate the integration and validation efforts, with significant improvement on time to value.”

Additionally, Cristina Rodriguez, Vice President and General Manager of Wireless Access Networking Division at Intel states, “Intel is pleased to collaborate with Dell in the Early Customer Evaluation Program for our valued strategic ecosystem allies and customers. This impactful initiative enables the ecosystem to seamlessly integrate and validate their solutions on cutting-edge computing platforms, even before these platforms hit the commercial market."


Advancing O-RAN-Based 5G Adoption Through Collaborative Excellence

Tuomo Sipila

Thu, 22 Feb 2024 19:15:40 -0000


Within the telecom industry, a robust endorsement for the O-RAN ecosystem has emerged—a testament to the sector's commitment to accelerating open RAN adoption. This commitment is encapsulated in the Open RAN Ecosystem Experience (OREX), an O-RAN service brand by DOCOMO, collaboratively developed with multiple global partners.

DOCOMO’s new open RAN service makes mobile networks a whole new experience, allowing operators to build open RAN networks with service customization in mind. By selecting OREX to deploy O-RAN networks, Communication Service Providers (CSPs) will benefit from reduced Total Cost of Ownership (TCO), lower power consumption, and improved operations. In a strategic collaboration, DOCOMO, Fujitsu, Wind River, and Dell have worked together to integrate and lifecycle-manage a specific OREX network blueprint. CSPs can benefit by leveraging the open and disaggregated advantages of the O-RAN ecosystem efficiently, without incurring expensive network design and system integration tasks.

A multi-vendor stack for open RAN

To support Cloud RAN workloads and to provide the infrastructure layer for the full stack, Fujitsu chose Dell PowerEdge XR8000 and XR5610 servers. These ruggedized servers, which are ideally suited for O-RAN and vRAN deployments, are TCO-optimized, short-depth, and NEBS-certified. Dell engineers these servers for demanding deployment environments, and they integrate seamlessly with Wind River Studio.

Wind River’s Studio Cloud container as a service (CaaS) platform, a cloud-native toolset for deploying and operating distributed clouds at scale, is the cloud software that provides the abstraction layer between the 5G workloads and the infrastructure hardware.

The collaboration with Dell enables Fujitsu’s vRAN software to be deployed on Dell's open RAN Accelerator. This inline accelerator, powered by Marvell, handles Layer 1 computations, offloading server CPU cores and eliminating the need for separate fronthaul and midhaul network interface cards (NICs).

Dell Open Telecom Ecosystems Lab (OTEL) has been selected for the full stack validation process. OTEL removes the risk from the open transformation journey that CSPs and telecom software vendors are undertaking by providing access to an advanced testing environment that can be accessed remotely using secure connectivity. OTEL established a collaborative testing environment to bring together engineering teams from DOCOMO, Fujitsu, Wind River, and Dell, to test the deployment readiness of this multivendor O-RAN blueprint. The engineering teams have cooperated to mutually agree on test scenarios that will be rigorously and repeatably tested to ensure the stability of the full stack. 

  • “By embracing this validated design, CSPs now possess an end-solution that not only simplifies deployment but also expedites time to value—a testament to the collaborative efficiency with our partners DOCOMO, Fujitsu, and Wind River." -- Gautam Bhagra, Dell Technologies, Vice President (Telecom System Business – Strategic Alliances)
  • “The combined solution, powered by Fujitsu, Dell, and Wind River results in a remarkable 30 percent reduction in power consumption, contributing to environmental sustainability while also substantially reducing TCO.” -- Toru Sekino, Fujitsu, Executive Director (vRAN)
  • “By working together with NTT DOCOMO, Fujitsu, and Dell on the OREX RAN platform, we can deliver new efficiencies and greater flexibility to CSPs . . . . A proven solution with global service providers, Wind River Studio delivers a fully cloud-native, Kubernetes and container-based architecture for the development, deployment, operations, and servicing of distributed edge networks at scale.” -- Scott Walker, Wind River Global Vice President (Telco Ecosystem and CSP Sales)
  • “The collaborative work among Dell, Fujitsu, Wind River, and DOCOMO helps all the OREX partners to expand their open RAN business opportunities by focusing on the development of their products and realizing timely end-to-end verification of integrated OREX products. Through this effort, we will provide true open RAN to CSPs across the globe.” -- Masafumi Masuda, General Manager of Radio Access Network Technology Promotion Office, Radio Access Network Design Department (NTT DOCOMO, INC.)

Defining the future of O-RAN Management with Vodafone, Amdocs, and Dell Technologies

Kamlesh Shah, Waqar Azeem, Eran Paran, Anton Palagin, Jose Carlos Mendez

Thu, 22 Feb 2024 13:08:00 -0000


Seizing the initiative to define the future of Open RAN management

The transformative journey of communication service provider (CSP) networks has reached a new, exciting stage. As operators increasingly adopt cloud technologies and embrace disaggregated architecture, the O-RAN Alliance is leading an expansion into the radio access network (RAN) realm. By disrupting the traditional RAN landscape, O-RAN is driving the industry towards a software-driven approach that leverages diverse software and hardware from multiple vendors to achieve the best possible outcomes. The goal is to create integrated, tested and certified solutions that deliver lower total cost of ownership (TCO) and amplified innovation.

With over 40 years’ industry expertise, Amdocs is a leading provider of software and services to communications and media companies. The company offers market-leading capabilities for service providers’ operations support systems (OSS) and radio access networks (RANs), and has delivered proven solutions in network management, planning, and optimization. To meet emerging challenges, Amdocs also strongly collaborates with leading industry organizations like the Telecom Infra Project and the O-RAN Alliance.

Dell Technologies is a global leader in digital transformation and infrastructure. Its products are widely utilized by global telecom operators in network and IT infrastructure, ranging from purpose-built telecom servers to cloud-native orchestration and infrastructure automation solutions. The company also offers bundled solutions developed in close collaboration with a diverse ecosystem of partners in O-Cloud and workload layers, and has extensive representation in key industry forums, including the O-RAN Alliance, Telecom Infra Project, and 3GPP.

To advance a shared vision for O-RAN management, our two companies have partnered to enable cloud transformations throughout the industry. For example, consider Amdocs Service Management and Orchestration (SMO) for O-RAN, whose capabilities include orchestration, inventory, and assurance for any managed element, including x/rApps.

While the Amdocs offering supports any O-Cloud, across bare metal and CaaS, when integrated with Dell Telecom Infrastructure Automation Suite it supports deployments on Dell Technologies’ industry-leading telecom servers, as well as O-Cloud layer software provided by partner organizations. This integration enables CSPs to rapidly provision, manage, and monitor their O-Cloud infrastructure, and simplify the lifecycle management of infrastructure nodes in a dynamic, disaggregated network. A proof of concept (PoC) showcasing this solution's capabilities is currently underway at Vodafone Group, encompassing both immediate use cases and a roadmap of forward-looking scenarios.

Bringing efficiencies to O-RAN with Service Management and Orchestration (SMO)

Service Management and Orchestration (SMO) is a key pillar in service and network orchestration, addressing specific CSP needs. By operating across multiple hierarchies, SMO efficiently manages multi-vendor, multi-technology entities with varying lifecycles. Furthermore, by focusing on cloud infrastructure, virtualized and containerized cloud-native functions (CNFs), it’s fully aligned with the industry’s developing architecture, seamlessly integrating with, and actively contributing to O-RAN standards and interfaces.

Amdocs SMO provides all the capabilities required to manage O-RAN. It supports the end-to-end lifecycle of the network, including design and onboarding, orchestration and management, inventory, and assurance processes. This approach also extends to embracing the openness and disaggregated approach of O-RAN, with support for heterogeneous multi-technology, multi-vendor networks – bringing CSPs cost efficiencies and empowering innovation.

 

Figure 1 Amdocs Service Management and Orchestration Solution Overview

Amdocs’ SMO supports a diverse set of use cases, from O-RAN network rollout, network slicing and O-RAN energy efficiency savings, to assurance and closed-loop operations. Furthermore, it’s instrumental in simplifying the rollout process, addressing challenges presented by the disaggregated, multi-vendor nature of O-RAN.

Post-rollout too, SMO plays a pivotal role managing each individual network slice, ensuring RAN performance, maintaining service-level objectives, and undertaking corrective actions. This is achieved by leveraging standard FM, PM, and SQM capabilities, as well as O-RAN apps, which are deployed within both the Non-RT RIC (rApps) and Near-RT RIC (xApps) to support different optimization use cases. Throughout, the solution fully adheres to O-RAN specifications and standards.

Streamlining with Infrastructure and O-Cloud automation

Dell Technologies Infrastructure Automation Suite helps to simplify and automate infrastructure management in disaggregated networks, allowing CSPs to seamlessly provision, manage and monitor their infrastructure. In addition to operating based on the O-RAN O2-IMS and O2-DMS APIs, the Suite provides an open, model-driven framework for a ubiquitous single point of control. This suite then serves as the unified entry and exit point for automated deployment and orchestration of multi-site and multi-vendor infrastructure, as well as streamlined day 2 lifecycle management, including updates and upgrades.

Figure 2 Dell Telecom Infrastructure Automation Suite

Dell Telecom Infrastructure Automation Suite’s open and extensible architecture serves as the driving force behind O-RAN infrastructure automation. It comprises a comprehensive set of components, including full orchestration, data-driven telemetry of the cloud infrastructure, resource controllers, API adaptors, a user interface, and a single pane of glass for the complete cloud infrastructure.

Importantly, the suite, with its open declarative automation framework, also delivers support for cloud infrastructure operations, lower infrastructure total cost of ownership (TCO), accelerated time to market (TTM)/time to repair (TTR), and a modular, extensible architecture to avoid vendor lock-in. 

A ground-breaking proof of concept with Vodafone

A main takeaway from our collaboration with Vodafone was that the ability to replace manual processes with zero-touch operations would represent a real game changer. To showcase this vision, Amdocs and Dell Technologies set the goal of building a proof-of-concept (PoC) that would achieve this objective. Taking an end-to-end distributed zero-touch deployment approach, we set out to build a model that significantly reduces the time to bring new sites and services online. Ultimately, Vodafone also seeks to automate the radio network rollout and validate the joint solution’s ability to manage a hybrid, multi-vendor, and disaggregated O-RAN network.

For this PoC, a joint blueprint was created, whereby Amdocs would manage SMO and system integration, with Dell overseeing the O-Cloud and infrastructure (including bare metal) layers, and Radisys providing O-RAN CNFs. Additional software will include Red Hat® OpenShift®, a hybrid cloud application platform powered by Kubernetes, as the CaaS platform, and OpenTelemetry for performance metrics in the CaaS.

 

Figure 3 Vodafone O-RAN PoC blueprint

Vodafone Proof of Concept use cases

The PoC aims to showcase the seamless integration of Amdocs SMO with Dell Technologies Infrastructure Automation Suite, enabling zero-touch deployment of a RAN site. The deployment involves transitioning infrastructure from bare metal to the cloud using a declarative approach. Once the site is deployed, Amdocs and Dell will demonstrate end-to-end implementation through a data call. Both Amdocs SMO assurance capabilities and Dell Technologies Infrastructure Automation Suite will gather and transmit various telemetry data from the infrastructure, the CaaS, and the RAN network functions to Amdocs SMO, facilitating real-time monitoring of alarms and events. The setup is versatile and supports service assurance and closed-loop automation.
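The closed-loop idea behind the PoC can be sketched as a simple mapping from telemetry events to corrective actions. The event fields, alarm names, and actions below are invented for illustration; they are not the Amdocs SMO or Dell APIs.

```python
# Illustrative sketch of the closed-loop pattern the PoC demonstrates:
# telemetry events from the infrastructure, CaaS, and RAN layers feed an
# assurance function that maps alarms to corrective actions. Event fields
# and actions are invented for illustration, not any vendor's actual API.

def corrective_action(event):
    if event["layer"] == "infrastructure" and event["alarm"] == "fan-failure":
        return "open-hardware-ticket"
    if event["layer"] == "caas" and event["alarm"] == "pod-crashloop":
        return "redeploy-workload"
    if event["layer"] == "ran" and event["alarm"] == "cell-down":
        return "restart-cnf"
    return "log-only"       # unknown alarms are recorded, not acted on

events = [
    {"layer": "caas", "alarm": "pod-crashloop"},
    {"layer": "ran", "alarm": "cell-down"},
    {"layer": "ran", "alarm": "high-prb-load"},
]
print([corrective_action(e) for e in events])
# ['redeploy-workload', 'restart-cnf', 'log-only']
```

In a real deployment the mapping would be policy-driven and span many more alarm types, but the shape is the same: telemetry in, automated remediation out, with a human-visible log for everything else.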

Roadmap to innovation

Looking ahead, Amdocs and Dell Technologies remain committed to evolving SMO and O-Cloud management in alignment with O-RAN standards, and empowering CSPs with the flexibility and agility they need for O-RAN deployment activities.

Amdocs SMO remains central to this goal, supporting a rich set of capabilities, including model-driven dynamic orchestration, service decomposition, network slicing, dynamic inventory and closed-loop SLA assurance. Importantly, we’re also investing in specific O-RAN capabilities such as O1, O2, R1, and A1 interfaces, as well as management of x/rApps and respective ML-models.

Meanwhile, Dell Telecom Infrastructure Automation Suite effectively manages the complete lifecycle of the O-Cloud, using the O2 API and RESTful APIs. Employing an open software framework with vendor-agnostic resource controllers, the Suite empowers CSPs to fully capitalize on the advantages of disaggregated infrastructure and cloud layers. It can also seamlessly configure the O-Cloud by orchestrating intricate dependencies, coordinating tasks across various infrastructure elements and cloud stacks.

Even as Amdocs and Dell Technologies solidify our positions as key players in O-RAN development, we remain equally excited to find new ways to collaborate and innovate in the ever-evolving O-RAN management landscape.



Telecom Cloud Core Optimized with the AMD-based PowerEdge R7615

Jillian Kaplan

Tue, 13 Feb 2024 19:09:50 -0000


The transition of telecom core functions from purpose-built hardware to commercially available servers has introduced a number of challenges, including compute density, power efficiency, and performance optimization in a cloud environment. Since this large-scale migration kicked off about ten years ago, we’ve seen massive compute densification, moving from around 12 cores per CPU to upwards of 128 cores per CPU today. Now, a few servers can provide the same compute capacity that previously required an entire rack of equipment.
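The densification claim is easy to sanity-check with rough arithmetic; the server count and dual-socket assumption below are illustrative, not vendor specifications.

```python
# Rough arithmetic behind the densification claim: a rack of older servers
# versus a handful of modern high-core-count ones. The 20-server rack and
# dual-socket assumption are illustrative, not vendor specifications.
import math

old_cores_per_server = 2 * 12    # dual-socket, ~12 cores per CPU
new_cores_per_server = 1 * 128   # single-socket, 128 cores per CPU

rack_cores = 20 * old_cores_per_server               # 480 cores in the rack
servers_needed = math.ceil(rack_cores / new_cores_per_server)

print(f"rack of older servers: {rack_cores} cores")   # 480 cores
print(f"modern servers to match: {servers_needed}")   # 4
```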

Figure 1. AMD EPYC CPUs Powered by Zen4 and Zen4c

Dell Technologies is enabling such dense telecom cloud core solutions with the introduction of support for the AMD EPYC™ 9654P and EPYC™ 9754 in the PowerEdge R7615, which will feature NEBS Level 3 certification for deployments into telecom environments. The R7615 is a 2U, single-socket 19” rackmount server and, with its upcoming NEBS certification (2Q24), will provide the densest, most power-efficient telecom cloud core solution available. Paired with the increased compute density, twelve fast DDR5 (4800 MT/s) memory channels, support for PCIe Gen5, and NVMe-based storage deliver a dense, power-efficient cloud computing platform that has no equal in the telecom space.

NEBS Level 3 certification is often required in the telecom space. Its key features are resilience to provide non-throttled performance from -5°C to +55°C and the ability to survive a catastrophic seismic event while continuing to provide critical cloud core services, delivering a highly available and fault-tolerant environment.

Figure 2. Dell PowerEdge R7615 with EDSFF Storage

The PowerEdge R7615 provides the densest telecom cloud core solution available on the market today and achieves this with a single CPU (up to 128 cores/256 threads) per server. This allows for deployments based on a single non-uniform memory access (NUMA) node, simplifying the challenges of deploying core services to a telecom cloud. For those who have had to plan deployments across two (or more) NUMA nodes, the benefit of the R7615 is crystal clear.

The NEBS-certified R7615 (certification planned for 2Q2024) with a single 128-core AMD EPYC 9654P will provide the same core/thread compute density as two NEBS-certified servers of the next-closest dual-socket solution available today. This yields a reduction in maintenance activities, an estimated 56 percent reduction in power consumption, and a 40 percent performance-per-watt improvement. This can result in an estimated savings of 1049€ ($1128) per year, given an average kWh cost of 0.26€ ($0.28), the average European cost of electricity at the time of publication. Of course, results will vary depending on many factors, including server configuration, workload, server utilization, and the price of electricity.
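Working backward from the quoted figures gives a sense of scale. This only sanity-checks the arithmetic; as noted, real savings depend on configuration, utilization, and electricity prices.

```python
# Working backward from the quoted figures: annual savings of about
# 1049 EUR at 0.26 EUR/kWh implies the energy and average power
# reduction computed below. A sanity check of the arithmetic only.

annual_savings_eur = 1049
price_per_kwh_eur = 0.26
hours_per_year = 8760

kwh_saved = annual_savings_eur / price_per_kwh_eur      # ~4035 kWh/year
avg_watts_saved = kwh_saved / hours_per_year * 1000     # ~461 W on average

print(f"{kwh_saved:.0f} kWh/year, ~{avg_watts_saved:.0f} W average reduction")
# 4035 kWh/year, ~461 W average reduction
```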

If you’re planning to attend MWC24 and are interested in finding out more about this significant advancement in telecom cloud core compute densification, stop by the AMD (2M61) and Dell Technologies (3M30) booths to see the hardware and engage in further discussions.

Read Full Blog
  • Dell Telecom Multi-Cloud Foundation
  • Dell Telecom Infrastructure Blocks for Red Hat
  • AIOps
  • Observability

Accelerate intelligent operations using AIOps for cloud native networks

Saad Sheikh Abdullah Abuzaid

Tue, 30 Jan 2024 17:07:35 -0000

|

Read Time: 0 minutes

Dell Technologies infrastructure blocks enable telco customers to adopt telco-centric AIOps to improve operations.

Communication service providers (CSPs) are racing towards fully autonomous networks and consider automation and artificial intelligence (AI) adoption in telecom networks to be of great value. According to the latest industry insights report published by TM Forum® (New-generation intelligent operations: the service-centric transformation path), most CSPs aim to achieve Level-3 automation (conditional autonomous networks) and Level-4 automation (highly autonomous networks) by 2025. There is increased interest in accelerating Level-5 automation (AI-driven automation) using AIOps solutions.

However, telco adoption of AI-centric automation is not easy primarily because most CSPs operate a geographically distributed brownfield network and manage a multi-generation fleet of infrastructure and resources. CSPs also operate at different scales which means there is no simple, “cookie-cutter” approach towards AI-driven operations (AIOps).

In addition, CSPs adopt solutions based on clearly defined, standard telecom architectures like ETSI® (European Telecommunications Standards Institute), TM Forum® (Telecom Management Forum), 3GPP® (3rd Generation Partnership Project), and O-RAN (Open RAN alliance). CSPs also source solutions that can interwork and interoperate at a global scale. Finally, CSPs expect these solutions to fully integrate into their brownfield environments.

Hence, there is a requirement to build an outcome-based solution that supports existing operations. At the same time, the solution must enable them to accelerate the adoption of the next era of operations (based on data-driven insights and artificial intelligence).

How to adopt AIOps-based, intelligent operations for networks

CSPs are working alongside many standards bodies (especially the TM Forum®) to accelerate automation towards Level-4 (full-service orchestration and automation) and Level-5 (AI-driven automation). However, there is still no clear path for applying these principles to large-scale networks. Building the right architecture starts with clear quantification of requirements.

The right AIOps solution that is designed for CSPs must align with unique telco-specific requirements on: 

  • Distributed topology maps. Telco networks have been purpose-built and deployed to deliver critical and differentiated services for many decades. These networks are not like data centers; they are a fleet of resources—such as home networking, fiber, transport, radio, core, cloud, and WAN services. Topology alignment (like A/B plane or ring) and service resilience are key requirements.
  • Multi-vendor and multi-generation. Typically, CSPs operate a brownfield network over multiple generations. Most of these systems have an extended lifetime of 10 to 15 years. So, the solution should not only be future-proof but also cater to the requirements of existing deployed solutions.
  • Data models. CSP networks, by nature, are highly protected—with network data existing in many silos. Network operations also follow a hierarchical, process-based delivery that is defined by Network Operations Centers (NOCs). In addition, data knowledge is based on tools and systems that vendors provide. 

Given that AIOps systems are already proven and prevalent in the Cloud and IT industries, these systems and solutions must be adapted to meet CSP requirements. 

AIOps in networks should strictly align with both telecom standards and network-specific needs—delivering the following capabilities:

  • Process alignment. The current operational model of a telco cloud relies heavily on the knowledge base and expertise of the operational team. So it is not just about data; the unique experience of CSP operations is equally important.
  • Data access. CSPs follow strict security and privacy requirements where customer data and information cannot be exposed. So, in order to adopt AIOps, data access models must be standardized to ensure AIOps use cases can retrieve data as per approved policies. 
  • Tuning. Because CSP-deployed networks must operate for extended periods of time, current solutions—which follow strict AI rules—cannot meet their future requirements. Therefore, AIOps systems must be adaptive. 
  • Scalability. CSPs operate at different scales starting from Tier-1 (many geographies) to Tier-2 (small scale). Therefore, telco-specific AIOps systems should offer a T-shirt sizing approach. 

Accelerate the network AIOps journey

Today, many CSPs have already deployed small-scale AIOps solutions. However, most of these solutions are not closely aligned with telco-specific requirements—resulting in many silos that are hard to manage and scale. Further, CSPs must invest heavily in time and cost to perform Life Cycle Management (LCM) of these solutions. All of this creates barriers to cloud-native transformation.

Just as telco cloud CSPs have adopted standards like ETSI®, LFN® (Linux Foundation Networking) and ORAN® (Open RAN), there is a requirement to adopt a standard architecture for the Telco Multicloud AI foundation that can smoothly integrate with brownfield networks. Below are the key capabilities of an AI-centric telco platform that can enable AIOps use cases:

  • Horizontal AI platform. The telco-centric AI platform should enable a composable platform that consists of:
    • AIOps application layer: hosting various AIOps tasks
    • Machine Learning (ML) layer: adopting specific ML models suitable for AIOps
    • Knowledge layer: integrating the NOC processes and knowledge of CSPs 
    • Data layer: resolving any data silos in networks
    • Physical layer: managing telecom networks using fully decoupled infrastructure automation
  • Distributed data ingestion. The telco-centric AI platform should ingest data from fully distributed networks—delivering both reactive (respond after event occurs) and proactive (predictive) use cases on:
    • MOP integration: Existing MOP and workflows must be integrated. 
    • Operational processes integration: Existing NOC processes must be integrated in data pipelines.
  • Cloud native MLOps and AIOps capabilities. Telcos must supplement operational in-house knowledge with ML models and find a way to tune and extend it. Different models must be integrated. A systematic integration of knowledge systems with ML models (in a use case-driven approach) is required for success in network operations.
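As a toy illustration of the reactive versus proactive distinction described above, the following Python sketch classifies a KPI stream either after a threshold breach (reactive) or from a rising trend before the breach occurs (proactive). The threshold, window size, and `classify` helper are invented for illustration and do not come from any Dell or TM Forum specification.

```python
from collections import deque

# Toy sketch of reactive vs. proactive handling of a KPI stream
# (e.g., interface packet loss). Thresholds and window size are
# illustrative assumptions only.

FAULT_THRESHOLD = 5.0      # reactive: alarm once the KPI breaches this
TREND_WINDOW = 4           # proactive: look at the recent trend

def classify(samples):
    """Return 'fault', 'degrading', or 'ok' for the latest KPI samples."""
    window = deque(samples, maxlen=TREND_WINDOW)
    latest = window[-1]
    if latest >= FAULT_THRESHOLD:
        return "fault"          # reactive: the event has already occurred
    # proactive: a strictly rising trend predicts an imminent breach
    if len(window) == TREND_WINDOW and all(
        later > earlier for earlier, later in zip(window, list(window)[1:])
    ):
        return "degrading"
    return "ok"

print(classify([0.1, 0.2, 0.1, 0.2]))   # ok
print(classify([0.5, 1.0, 2.0, 4.0]))   # degrading
print(classify([0.5, 1.0, 2.0, 6.0]))   # fault
```

A production AIOps pipeline would replace the trend check with trained ML models and feed the "degrading" signal into NOC workflows, but the reactive/proactive split is the same.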


Figure: Reference architecture for AIOps-driven telecom network

How to adopt AIOps operations using telecom infrastructure blocks

Dell Technologies has worked closely with leading cloud partners, including Wind River® and Red Hat®, to bring forward an operationally ready telco cloud platform. This platform is thoroughly tested, validated, and automated to deliver telco AIOps use cases. This platform also accelerates a CSP’s adoption of zero-touch operations while consistently aligning to telecom standards and frameworks.

The Dell Technologies Telecom Multicloud Foundation transforms network operations towards programmable infrastructure using a consistent approach to tooling and AIOps capabilities.

Because the platform supports multiple versions and offerings with various partners, CSPs can operate all such foundational infrastructure blocks as one. Through the following key capabilities, our solution can quickly transform operational models and processes and enable the agile MLOps required in a telco environment.


Figure: Solution architecture for AIOps Foundation

To support the unique CSP requirements to adopt AIOps and a cloud operating model, our Telecom Infrastructure Blocks provide the following key capabilities: 

  • Consistent platform. The first challenge is to deliver a standard and consistent platform that can integrate all layers above—abstracting the complexities of multiple technologies and components from different vendors.
  • Cloud native MLOps. AIOps use cases require cloud-type agility towards data and ML. Our current version of infrastructure blocks delivers a ready platform. In future releases, we plan to enable AI enhancements (like Red Hat OpenShift® AI) on top of this platform. This means CSPs can build, program, and manage all their ML models and capabilities in the same way they manage cloud resources.
  • Autonomous operations. Adopting data-centric architectures and ML approaches provides CSPs a smooth evolution path from their current automation approaches to an AI-centric automation that is aligned with the telco future mode of operations (FMO).
  • Data-driven architecture: The automation architecture is data-driven and distributed, so data can be tapped from edge and regional sites—enabling real-time use cases and data-driven operations.
  • Automated fault management: The FMO follows zero-touch and intent-driven networks. Our solution is fully aligned with this vision that enables all cloud platforms to use declarative workflows. The solution also enables all northbound integration towards orchestration and assurance systems.
  • Single pane for DevOps and MLOps operations: As CSPs adopt ML/AI frameworks to deliver AIOps use cases, there is an increasing requirement to integrate and operate both DevOps and MLOps as one. In addition, AIOps platform capabilities must be enabled in telco cloud platforms. Doing so provides a single management and observation platform.


Figure: DevOps and MLOps workflow using AIOps platform capabilities

Dell Technologies developed Telecom Multicloud Foundation and Telecom Infrastructure Blocks to accelerate telco cloud transformation. Our engineered and factory-integrated system delivers a consistent platform. This platform is ready to deliver telco-specific AIOps use cases that are fully aligned with telecom architectures—enabling our customers to accelerate AIOps solutions in networks.

Visit the Dell Telecom Multicloud Foundation site to learn more about our solution.

Read Full Blog
  • 5G
  • Telco Cloud
  • Dell Telecom Infrastructure Blocks for Red Hat
  • DTIB RH

Simplifying 5G Network Deployment with Dell Telecom Infrastructure Blocks for Red Hat

Gaurav Gangwal Kevin Gray Abdullah Abuzaid David Kypuros

Fri, 19 Jan 2024 15:08:05 -0000

|

Read Time: 0 minutes

 


Welcome back to our 5G Core blog series. In the second blog post of the series, we discussed the 5G Core, its architecture, and how it stands apart from its predecessors; the role of cloud-native architectures; the concept of network slicing; and how these elements come together to define the 5G network architecture.

In this third blog, we look at Dell Technologies’ and Red Hat's collaboration with their latest offering of Dell Telecom Infrastructure Blocks for Red Hat. We explore how Infrastructure Blocks streamline Communications Service Providers’ (CSPs) processes for a Telco cloud used with 5G core from initial technology onboarding at day 0/1 to day 2 life cycle management.

Helping CSPs transition to a cloud-native 5G core 

Building a cloud-native 5G core network is not easy. It requires careful planning, implementation, and expertise in cloud-native architectures. The network needs to be designed and deployed in a way that ensures high availability, resiliency, low latency, efficient resource utilization, and flawless component interoperability. CSPs may feel overwhelmed when considering the transition from legacy architectures to an open, best-of-breed cloud-native architecture. This can lead to delays in design, deployment, and life cycle management processes that stall projects and reduce a CSP’s ability to effectively deploy and manage their disaggregated cloud-native network.   

Automation plays a critical role in managing deployment and life cycle management processes.  Many projects stall or fail due to poorly defined automation strategies that make it difficult to ensure compatibility between hardware and software configurations across a large, distributed network.  This is especially true when trying to deploy and manage a cloud platform running on bare metal.   

Dell Telecom Infrastructure Blocks for Red Hat are foundational building blocks for creating a Telco cloud that is based on Red Hat OpenShift. They aim to reduce the time, cost, and risk of designing, deploying, and maintaining 5G networks using open software and industry standard infrastructure. The current release of Telecom Infrastructure Blocks for Red Hat supports the creation of management and workload clusters for 5G core network functions running Red Hat OpenShift on bare metal servers.

There are a number of challenges to building and maintaining Kubernetes clusters on bare metal to run 5G network functions:

  • Ensuring interoperability and fault tolerance in a disaggregated network is not an easy task. Deploying and managing Kubernetes clusters on bare metal requires extensive design, planning, and interoperability testing to ensure a reliable, fault tolerant, and performant system.  
  • Automating the deployment and life cycle management of hardware resources and cloud software in a bare metal environment can be complex. It involves deploying and updating a fleet of bare metal servers at scale.
  • There is a lack of pre-built software integrations specifically designed for deploying Kubernetes clusters on bare metal servers and bringing those cluster configurations to a state where they are ready to run workloads. This means that configuring and deploying Kubernetes on bare metal frequently requires more manual effort to build and maintain the automation needed to manage deployments and upgrades at scale.  This manual effort can be time-consuming and add complexity that introduces risk to the process. 
  • This lack of consistent, easy-to-manage automation to deploy and update the cloud stack to meet workload requirements also makes it harder to implement a unified cloud platform across all workloads. This leads to infrastructure silos that limit the ability to pool resources to improve infrastructure utilization rates, which in turn reduces network TCO efficiency.

These challenges are amplified when running 5G network functions, which require low latency and high reliability to meet carrier-grade service level agreements (SLAs). This collaboration between Dell and Red Hat aims to offer a comprehensive solution for CSPs that addresses the challenges associated with building and maintaining carrier-grade cloud infrastructure for 5G core network functions.

Key objectives of Dell Telecom Infrastructure Blocks for Red Hat

Implement a “shift left” approach

In software development, the term “shift left” refers to moving tasks to an earlier stage in the development or production process to reduce time to value. The shift-left approach offered with Infrastructure Blocks moves much of the testing and integration work performed by the CSP into the supply chain, prior to onboarding the new technology. This method provides CSPs with a speedy path to value by shortening the preparation and validation phase for a new network deployment. It also simplifies procurement by reducing the number of suppliers the CSP needs to work with, and simplifies support by providing one-call support for the full cloud stack. Proactive problem-solving, reduction of field touch points, risk minimization, and operational simplification are byproducts of the Infrastructure Block approach that hasten the introduction of new technology into a CSP’s network. By adopting this approach, CSPs can achieve faster rollout times and reduced operational costs. Dell does three things to help CSPs shift the technology onboarding process left:

Engineering

Telecom Infrastructure Blocks are foundational building blocks that are co-designed with Red Hat to help CSPs build and scale out their network. These building blocks are purpose-built to meet specific workload requirements.  Dell collaborates with Red Hat to maintain a roadmap of feature enhancements and perform continuous design and integration testing to accelerate the adoption of new technologies and software upgrades. The design planning and extensive interoperability testing performed by Dell simplifies the processes of building and maintaining a fault tolerant and performant cloud platform to run 5G core workloads.  

Automation

Many CSPs today rely on procedural automation that they build and maintain on their own to automate the deployment and life cycle management of their cloud platform at scale.  Procedural automation requires an understanding of the current state of their cloud stack and the maintenance of scripts or playbooks to define the steps needed to update the configuration to the desired state. When deploying Kubernetes on bare metal to support 5G core workloads, there are a number of items with dependencies that must be configured appropriately, including the following properties:

  • Cloud platform software version
  • BIOS version and settings 
  • Firmware versions for network interface cards (NICs) and other Peripheral Component Interconnect Express (PCIe) cards
  • Single root I/O virtualization (SR-IOV) / Data Plane Development Kit (DPDK) configurations
  • RAID configurations 
  • Site-specific data
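To illustrate why these dependencies matter, here is a minimal Python sketch that checks a server's reported inventory against an approved configuration profile covering items like those above. All keys and version strings are hypothetical examples, not actual Infrastructure Block data.

```python
# Minimal sketch: validate a server's reported inventory against an
# approved configuration profile. All profile keys and version strings
# below are hypothetical examples.

APPROVED_PROFILE = {
    "platform_version": "4.14",
    "bios_version": "1.9.2",
    "nic_firmware": "22.31.1014",
    "sriov_enabled": True,
    "raid_level": "RAID1",
}

def validate(inventory, profile=APPROVED_PROFILE):
    """Return the list of settings that deviate from the approved profile."""
    return [
        f"{key}: expected {expected!r}, found {inventory.get(key)!r}"
        for key, expected in profile.items()
        if inventory.get(key) != expected
    ]

drifted = validate({
    "platform_version": "4.14",
    "bios_version": "1.8.0",      # out-of-date BIOS
    "nic_firmware": "22.31.1014",
    "sriov_enabled": True,
    "raid_level": "RAID1",
})
print(drifted)  # flags only the BIOS mismatch
```

Real automation tooling performs the same kind of check against its compatibility matrix across an entire fleet before pushing any update.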

Building and maintaining these scripts and automation playbooks is no easy task. It requires an up-to-date view of the current configuration of the infrastructure, an understanding of the dependencies between hardware and software that must be met to perform an update, and people with specialized skills that include knowledge of server hardware and the tools to manage them, the cloud software, and how to write or update playbooks to execute deployments and upgrades. 

Managing this across a large, distributed network with a range of workloads that frequently require unique configurations of the cloud stack is a difficult and time-consuming process.  Also consider that, in an open ecosystem environment, there is always a new version of software, BIOS, or firmware coming and the people with the needed skill sets are in short supply, resulting in a herculean effort with mixed results.

Telecom Infrastructure Blocks include purpose-built automation software that is easy to use and maintain.  The blocks integrate with Red Hat Advanced Cluster Management and Red Hat OpenShift to automate deployment and life cycle management of the hardware and software stack used in a Telco cloud.  This software uses declarative automation to simplify the deployment and upgrade of the cloud platform hardware and software to align with approved configuration profiles.  With declarative automation, the CSP simply defines the desired state of the cloud stack, and the automation software determines the steps required to achieve the desired state and executes those steps to align the system with the approved configuration.      

This infrastructure automation software uses a declarative data model that defines the desired state of the system and its resource properties, and it maintains a record of the current state of the cloud configuration and inventory, which significantly simplifies deployment and life cycle management of the cloud stack. Infrastructure Blocks come with Topology and Orchestration Specification for Cloud Applications (TOSCA) workflows that define the configurations needed to update the system, based on the extensive design and validation testing performed by Dell and Red Hat. Dell provides regular updates that simplify the process of upgrading to the latest release. CSPs simply update a Customer Input Questionnaire, and the automation software updates the hardware and the cloud platform software and brings them to a workload-ready state with a single click.
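The declarative idea can be sketched in a few lines of Python: the operator supplies only the desired end state, and a reconciler derives the update actions from the delta with the current state. Component names and versions here are illustrative assumptions, not the actual automation software's data model.

```python
# Sketch of declarative automation: state only the desired end state;
# a reconciler derives the ordered steps to get there. Field names and
# versions are illustrative only.

def reconcile(current, desired):
    """Derive a list of update actions from the state delta."""
    actions = []
    for component, target in desired.items():
        if current.get(component) != target:
            actions.append(f"update {component} -> {target}")
    return actions

current_state = {"bios": "1.8.0", "firmware": "22.29", "openshift": "4.13"}
desired_state = {"bios": "1.9.2", "firmware": "22.29", "openshift": "4.14"}

for step in reconcile(current_state, desired_state):
    print(step)
# update bios -> 1.9.2
# update openshift -> 4.14
```

Contrast this with procedural automation, where the operator would have to write and maintain the update steps themselves for every combination of starting and target states.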

Integration

Dell ships fully integrated systems that are designed, optimized, and tested to meet the requirements of a range of telecom use cases and workloads. They include all the hardware, software, and licenses needed to build and scale out a Telco cloud. Delivering fully integrated building blocks from Dell’s factory significantly reduces the time spent configuring infrastructure onsite or in a network configuration center.  They are also backed by Dell Technologies one-call support for the full cloud stack that meets telecom SLAs.  Dell has established escalation paths with Red Hat to ensure the highest levels of support for customers.       

Streamlining Day 0 through Day 2 tasks with Telecom Infrastructure Blocks

In a typical CSP network operating model, there are four stages a CSP goes through, from initial technology onboarding through managing ongoing operations. These stages are:

  • Stage 1: Technology onboarding (Day 0)
  • Stage 2: Pre-production (Day 0)
  • Stage 3: Production (Day 1)
  • Stage 4: Operations and lifecycle management (Day 2+)

Dell Telecom Infrastructure Blocks were built to streamline each stage of the processes to reduce the time and risk of building and maintaining a Telco cloud.  We do this by proactively working with Red Hat to create an engineered system that meets telecom SLAs, includes automation that delivers zero-touch provisioning, and simplifies life cycle management through continuous design and integration testing.

 

Let's look at how Dell Telecom Infrastructure Blocks affect CSP processes from Day 0 to Day 2+.

Stage 1: Day 0 technology onboarding 

Dell and Red Hat collaborate to design an engineered system that is validated through an extensive array of test cases to ensure its reliability and performance. These test cases cover aspects such as functionality, interoperability, security, reliability, scalability, and infrastructure-specific requirements for 5G Core workloads. This testing is aimed at ensuring optimal performance across a diverse array of performance metrics and scale points, thereby guaranteeing performant system operation. 

Some of our design and validation test cases include:

Cloud Infrastructure Cluster testing:  Cloud Infrastructure Cluster testing refers to the process of testing the infrastructure components of a cloud cluster to ensure their proper functioning, performance, and scalability. It involves validating the networking, storage, compute resources, and other infrastructure elements within a cluster. These test cases include:

  • Installation and validation of Infrastructure Block plugins and automation
  • Cluster storage (Red Hat OpenShift AI) validation and testing
  • Validation of cluster network configurations
  • Verification of Container Network Interface (CNI) plugin configurations in the cluster
  • Validation of high availability configurations
  • Scalability and performance testing

Performance Benchmarking testing: Performance Benchmarking testing usually includes several steps. First, test scenarios are created to simulate real-world usage. Second, performance tests are conducted and the resulting performance data is collected and analyzed. Finally, the results are compared to established benchmarks. Some of the testing performed with Infrastructure Blocks includes:

  • CPU benchmarking 
  • Memory benchmarking 
  • Storage benchmarking 
  • Network benchmarking
  • Container as a Service (CaaS) benchmarking
  • Interoperability tests between the Infrastructure and CaaS layers
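The final benchmarking step described above—comparing collected results against established baselines—can be sketched as follows. The metric names and baseline values are invented examples, not Dell's actual benchmark criteria.

```python
# Illustrative sketch of comparing collected benchmark results against
# established baselines. Metric names and baseline values are invented
# examples only.

BASELINES = {  # minimum acceptable result per metric
    "cpu_score": 90_000,
    "memory_gbps": 300.0,
    "storage_iops": 1_000_000,
    "network_mpps": 40.0,
}

def compare(results, baselines=BASELINES):
    """Return pass/fail per metric: the measured value must meet the baseline."""
    return {metric: results.get(metric, 0) >= floor
            for metric, floor in baselines.items()}

measured = {"cpu_score": 95_500, "memory_gbps": 310.2,
            "storage_iops": 980_000, "network_mpps": 41.5}
report = compare(measured)
print(report)  # storage_iops falls short of its baseline; the rest pass
```

A failing metric at this stage sends the configuration back for tuning before it can be locked into a release, which is the point of benchmarking inside the supply chain rather than in the CSP's lab.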

At this step, we define the automation workflows and configuration templates that will be used by the automation software during the deployment. Dell works proactively with its customers and partners to understand their best practices and incorporate those into the blueprints included with every Telecom Infrastructure Block. 

This process produces an engineered system that streamlines the CSP’s reference architecture design, benchmarking, and proof of concept (POC) processes to reduce engineering costs and accelerate the onboarding of new technology.  To further streamline the design, validation, and certification process, we provide Dell Open Telecom Ecosystem Lab (OTEL), which can act as an extension of a CSP’s lab to validate workloads on Infrastructure Blocks that meet the CSP’s defined requirements.

Stage 2: Pre-production

The main objective in stage 2 is to onboard network functions onto the cloud infrastructure, define the network golden configuration, and prepare it for production deployment. Infrastructure Blocks eliminate some of the touch steps in onboarding by:

  • Delivering an integrated and validated building block direct from Dell’s factory
  • Delivering a deployment guide that simplifies onboarding 
  • Providing Customer Input Questionnaires that are configuration input templates used by the automation software to streamline deployment
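As a simplified illustration of how a Customer Input Questionnaire can drive automation, the sketch below merges site-specific answers into a deployment template. The field names are hypothetical and do not reflect the actual questionnaire format.

```python
import string

# Hypothetical sketch: merging Customer Input Questionnaire answers
# into a deployment template. All field names are invented for
# illustration.

TEMPLATE = string.Template(
    "cluster_name: $cluster_name\n"
    "ntp_server: $ntp_server\n"
    "vlan_id: $vlan_id\n"
)

ciq_answers = {
    "cluster_name": "edge-site-01",
    "ntp_server": "10.0.0.10",
    "vlan_id": "204",
}

rendered = TEMPLATE.substitute(ciq_answers)
print(rendered)
```

The point is that the operator only fills in answers; the automation software owns the template and the deployment logic, so the same questionnaire can drive many sites consistently.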

CSPs can also leverage the Red Hat test line in Dell’s Open Telecom Ecosystem Lab (OTEL) to validate and certify the CSP’s workloads on Infrastructure Blocks. OTEL can play a significant role in enabling CSPs and partners by developing custom blueprints or performing custom tests required by the CSP using Dell’s Solution Integration Platform (SIP), which is part of OTEL. SIP is an advanced automation and service integration platform developed by Dell. It supports multi-vendor system integration and life cycle management testing at scale, using industry-standard components, toolkits, and solutions such as GitOps.

Dell Services also offers tailor-made configurations to cater to specific operator needs. These are carried out at a Dell second-touch configuration facility, where configurations customized to the customer's specifications are pre-installed and then dispatched directly to the customer's location or configuration facility.

Stage 3: Production

At the production stage, Dell integrates and configures all hardware and settings to support the discovery and installation of the validated version of the cloud platform software, which eliminates the need to configure hardware on site or in a configuration center. Dell’s infrastructure automation software then deploys the validated versions of Red Hat Advanced Cluster Management and Red Hat OpenShift on the servers used in the management and workload clusters and brings those clusters to a workload-ready state. This process ensures a consistent and reliable installation that reduces the risk of configuration errors or compatibility issues. Dell's automation enables zero-touch provisioning that configures hundreds of servers at the same time, with full visibility into the health and status of the server infrastructure before and after deployment. Should the CSP need assistance with deployment, Dell's professional services team is standing by to assist: Dell ProDeploy for Telecom provides on-site support to rack, stack, and integrate servers into the network, or remote support for deployment.

Stage 4: Operations and lifecycle management

In Day 2+ operations, CSPs must sustain network performance while adapting to changes to the network over time.  This includes ensuring software and infrastructure compatibility as updates are made to the network, performing rolling updates, fault and performance management, scaling resources to meet network demands, ensuring efficient use of network resources, and adapting to technology evolution. 

Infrastructure Blocks simplify Day 2+ operations in several ways:

  • Dell works with Red Hat, its customers, and other partners to capture updates and new requirements necessary to evolve Infrastructure Blocks to support new technologies and software enhancements over time. This requires proactive collaboration to ensure continuous roadmap alignment across parties. Dell then performs extensive design and validation testing on these enhancements before integrating them into Infrastructure Blocks to deliver a resilient and performant design. This helps CSPs stay on the leading edge of the technology curve while minimizing the risk of encountering faults and performance issues in production.
  • Today, Telecom Infrastructure Blocks offers support for three releases per year. In every release, we prioritize the introduction of new capabilities, features, components, and solution enhancements. In addition, there are six patch releases per year that prioritize sub features to ensure compatibility across different releases. Long Term Support releases are provided at the end of the twelve-month release cycle, with a focus on fixing any solution defects that may arise.
  • The out-of-the box automation provided with Infrastructure Blocks ensures a consistent, carrier-grade deployment or upgrade of the hardware and cloud platform software each time. This eliminates configuration errors to further reduce issues found in production.
  • When bringing together various hardware and software components, CSPs frequently manage different release cycles to support a range of workload requirements. To address any difficulties with software compatibility and life cycle management, Dell Technologies has created a system release cadence process.  It includes testing, validating, and locking the release compatibility matrixes for all Infrastructure Block components. This helps to resolve deployment problems affecting software compatibility and Day 2+ life cycle management procedures.
  • Dell Professional Services can also provide custom integrations into a CSP’s CI/CD pipeline, providing the CSP with validated updates to the cloud infrastructure that pass directly into the CSP’s CI/CD tool chains to enhance DevOps processes.
  • In addition, Dell offers single-call, carrier grade support that meets telecom grade SLAs with guaranteed response and service restoration times for the entire cloud stack (hardware & software).
  • The declarative automation provided with Infrastructure Blocks eliminates the time spent updating scripts and playbooks to push out system updates and minimizes the risk of configuration errors that lead to fault or performance issues.   

Summary

Dell Telecom Infrastructure Blocks for Red Hat offers a streamlined and efficient way to build and manage Telco cloud infrastructure. From initial technology onboarding to Day 2+ operations, they simplify every step of the process.  This makes it easier for CSPs to transition from their vertically integrated, legacy architectures of today to an open cloud-native software platform running on industry standard hardware that delivers reliable and high-quality services to their customers.

This blog post is a collaborative effort from the staff of Dell Technologies and Red Hat.  

 


Read Full Blog
  • 5G
  • 5G Core
  • Telecommunications
  • NEF
  • Open APIs
  • Network Exposure function

Monetizing Network Exposure Through Open APIs

Gaurav Gangwal Arthur Gerona Tomi Varonen

Fri, 08 Dec 2023 19:40:56 -0000

|

Read Time: 0 minutes

Market Background

5G, especially 5G standalone, has not yet lived up to expectations. End users are not yet seeing significant differences compared to 4G, and CSPs are not yet seeing new revenue streams. To address these challenges, we have previously presented two blogs (Network Slicing and Network Edge). In this third blog, we continue to present a realistic view of how CSPs can maximize the 5G standalone experience and go beyond being mere connectivity providers. This blog focuses on exposing network capabilities (services and user/network information) that service providers and enterprises can use to enable innovative and monetizable services.

Figure 1: Monetizing Network Exposure

The success of numerous innovative mobile applications can be traced to the availability of mobile Software Development Kits (SDKs) for both the iOS and Android platforms. These SDKs provide open tools, libraries, and documentation that allow application developers to easily create mobile applications that rely upon the capabilities of existing mobile platforms (such as notifications and analytics) and device hardware (like GPS and camera). Most importantly, these two mobile platforms alone currently support over four billion users. The next step is to apply the same principles on the network side by using Open APIs that allow unified access to network capabilities for increased network exposure.

The concept of network exposure is not new. There have been a few less-than-successful attempts in the past, such as Service Capabilities Exposure Function (SCEF) and APIs for the IMS/Voice. These solutions were not able to scale sufficiently to attract a significant number of application developers. The specifications have been too complicated for anybody outside of the telecom world to understand or implement. The integration of network exposure into the 5G design is groundbreaking.  API exposure is now fundamental to 5G and is natively built into the architecture, enabling applications to seamlessly interact with the network.

Monetizing mobile networks using Open APIs relies on the implementation of communication APIs for voice, video, and messaging, as well as network APIs for location, authentication, and quality of service. By exposing these capabilities through Open APIs, CSPs can establish partnerships by facilitating the creation of tailored, high-value services for businesses, thereby enabling them to monetize 5G beyond traditional connectivity and bundled offerings. These new revenue streams are paramount as the traditional revenue streams from mobile broadband services are flat while costs continue to rise. Moreover, the deployment of a cloud-native 5G standalone network requires substantial investments, making it crucial to identify new revenue streams that can justify the business case.

 

Technical Background and Standardization

5G standalone was specified in 3GPP Release 15, and its architecture standardized the Network Exposure Function (NEF). One of the 5G core network functions, NEF allows applications to subscribe to network events and to query network information and capabilities. NEF enables an extensive set of network exposure capabilities, but it lacks the scale, agility, and simplicity that application developers require. GSMA’s Open Gateway Initiative, the CAMARA project, and TM Forum’s Open APIs all aim to address this gap.

  • GSMA’s Open Gateway Initiative achieves scale by committing CSPs to implement the common system framework in a unified manner.
  • Actual Service APIs are defined under the CAMARA project, where the work is done as an open-source project at the Linux Foundation.
  • TM Forum’s Open APIs are used in this framework for Operation, Administration, and Management (OAM).

The use case is well described in the GSMA’s Open Gateway white paper.

 

Network APIs

Open APIs and network capabilities in this new concept have much to offer. The CAMARA project has already defined 18 Service APIs, such as Quality on Demand, Device Location, Device Status, Number Verification, Simple Edge Discovery, One Time Password SMS, Carrier Billing, and SIM Swap. Three of the most popular APIs are described in more detail below:

Quality on Demand: It is easy to imagine that multiple applications can benefit from better quality (bandwidth and latency). The challenge is to address how the network can fulfill this request instantaneously and cost-effectively. Some Proofs of Concept (PoCs) demonstrate that a Quality on Demand request can trigger either a new Network Slice or a different QoS Class Identifier (QCI). For more information, see our Network Slicing blog.
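To make the developer-facing simplicity concrete, here is a minimal Python sketch of building a Quality on Demand session request. The field names are modeled loosely on the CAMARA QoD API and should be treated as assumptions rather than the normative schema; the profile name and addresses are invented for illustration.

```python
# Hypothetical sketch of a Quality on Demand (QoD) session request body.
# Field names approximate the CAMARA QoD API and are assumptions here,
# not the normative schema.
import json

def build_qod_session_request(device_ip: str, app_server_ip: str,
                              qos_profile: str, duration_s: int) -> str:
    """Build a JSON body asking the network for a temporary QoS boost."""
    payload = {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,   # e.g. a low-latency profile name
        "duration": duration_s,      # seconds the boosted session should last
    }
    # In practice this body would be POSTed to the operator's QoD endpoint.
    return json.dumps(payload)

print(build_qod_session_request("203.0.113.7", "198.51.100.10", "QOS_E", 3600))
```

The point is not the specific fields but the shape: a few lines of ordinary web-developer code, with no telecom protocol knowledge required.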

Device Location: This API verifies that the device is in a specific geographical area. The main benefits of the network-based request are that it can be used when a GPS signal is not available, and it is considered more trustworthy (location info cannot be spoofed).
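As a toy illustration of the verification logic such an API performs behind the scenes, the sketch below checks whether a network-reported device position falls inside a requested circle (center plus radius). The great-circle math is the standard haversine formula; the function names and coordinates are ours, not part of any CAMARA specification.

```python
# Toy model of Device Location verification: the network reports the
# device's position, and the API answers whether it lies within a
# requested circular area. Function names are illustrative only.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def device_in_area(device_lat, device_lon, center_lat, center_lon, radius_km):
    return haversine_km(device_lat, device_lon, center_lat, center_lon) <= radius_km

# A device reported near central Helsinki, checked against a 5 km zone:
print(device_in_area(60.17, 24.94, 60.19, 24.95, 5.0))   # inside the zone
print(device_in_area(59.33, 18.07, 60.19, 24.95, 5.0))   # Stockholm: outside
```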

Device Status: This API provides a very simple and straightforward request to determine whether the subscriber is roaming.

None of these Service APIs offer anything unique that the market has not seen before. Their intrinsic value comes from being part of a unified platform that enables a consistent way of accessing network capabilities and information, similar to how mobile SDKs became a catalyst to the thriving mobile device ecosystem we know today. Only time will tell how much value-add application developers will see from these Open APIs.

 

Use Cases and Commercial Models

The value of new features and applications is considered whenever 5G monetization is discussed. We are still in the early phase of Open APIs, but the TM Forum’s Catalyst Program and CAMARA Open API showcases can give good insights into what coming commercial deployments could look like. These programs have triggered several PoCs where the related use cases have required optimized performance (Quality on Demand), user location/roaming information, and feedback on consumer experience. In these PoCs, the service providers have been able to consume the Open APIs directly or through a hyperscaler marketplace. As an example, in one PoC, guaranteed quality of delivery was needed for a 360-degree 8K live streaming service with content monetization through APIs (with CSPs curating markets at the edge). Another PoC included an end-to-end implementation of a marketplace from which one could consume network services from multiple CSP networks (a simple hyperscaler-integrated network experience).

We can expect several commercial models for these Open APIs, because these APIs can be utilized in various ways, such as providing network/subscriber information, optimizing functionalities/features, and allocating network capacity/resources. It is yet to be determined how these Open APIs can be consumed easily. Service providers are unlikely to integrate and set up individual contracts with every other service provider in the world. Therefore, there must be a place for aggregation in order to hide the complexity behind a portal. This role can be assumed by a group of service providers or by hyperscalers who can onboard these services onto their marketplaces.


Challenges and The Road Ahead

One of the main Key Performance Indicators (KPIs) that define success for service providers is the ability to scale and achieve global reach. It is critical that there be no fragmentation and that the community work towards a unified approach. Jointly agreed-upon solutions and specifications require more time to develop; therefore, another year may pass before we start to see commercial use case launches (as forecast by Borje Ekholm, Ericsson CEO, during the Q3-2023 earnings call).

The journey to unified 5G is not easy, and it presents various challenges:

  • Technology migration poses a challenge as mobile operators need to transition from existing systems to effectively utilize the potential of the 5G NEF and Open APIs.
  • Another significant obstacle is bridging the gap between software developers and mobile operators. Developers require clear, simple, unified, and well-documented APIs to leverage the network capabilities effectively.
  • At the same time, mobile operators must ensure that exposing their network does not compromise the security of the network while ensuring that end users have full control of where and how their information is stored and used.

Some service providers have already launched platforms with a few Service APIs. Early deployments can introduce a risk of fragmentation. However, this risk is outweighed by the positive impact of testing the concept in the real world and of deriving more concrete requirements from actual user experiences with these services.

Regardless of how much commercial success these new Network and Service APIs realize in the coming years, they represent an important step towards more open, agile, and programmable networks. Dell has embraced this vision in our Telecom strategy, as reflected in our Multi-Cloud Foundation concept, Bare Metal Orchestration, and Open RAN development projects. In our vision, Open APIs are needed in all layers (Infrastructure, Network, Operations, and Services). Stay tuned for more to come from Dell about the open infrastructure ecosystem and automation (#MWC24).

 

Read Full Blog
  • Dell Telecom Multicloud Foundation

Improving Network Operations and Observability for Cloud-Native Networks

Saad Sheikh

Tue, 24 Oct 2023 14:35:04 -0000


Communication service providers (CSPs) are rapidly modernizing their networks towards cloud-native and open architectures. However, as the scale of these deployments increases, so does the ever-growing concern about operational management and complexity.

According to the latest report by TM Forum on Autonomous Networks, most CSPs still manage and operate their networks at level-2 automation. This level is where most tasks are completed using statically configured rules, limiting a CSP's ability to monetize network transformation benefits. As a result, major customers are investing in automating their operations in order to move to level-3 automation at scale (from 13 percent today to 36 percent by 2026) to achieve zero-touch, closed-loop operations through dynamic and programmable policies. Level-3 automation will also enable pathways that accelerate adoption of level-4 automation (from 4 percent today to 23 percent by 2026), which is ML- and AI-centric.

Intelligent operations offer several business benefits, including improving returns through better TCO, enhancing Time to Value (TTV) for offerings, and further improving resource efficiency. However, there is no cookie-cutter approach to improving network operations. The primary challenge is that a significant number of CSPs operate brownfield networks and manage a fleet of networks and resources. As a result, it is a complex challenge to build a reference architecture that aligns with their existing operations while simultaneously accelerating the adoption of the next era of operations.

Simplifying Network Operations and Observability

Today, many CSPs are working to identify the right solution and platform to optimize their operational models. However, these efforts usually result in bespoke solutions that are hard to manage and scale. CSPs also invest heavily, in both time and cost, to perform Life Cycle Management (LCM) of these solutions. These challenges create barriers to reaping cloud and network transformation benefits.

Dell Technologies has worked closely with leading cloud partners, including Wind River and Red Hat, to offer an operationally ready Telco Cloud platform as part of the Dell Telecom Multi-Cloud Foundation offer. This solution includes co-engineered building blocks referred to as Telecom Infrastructure Blocks, which support zero-touch operations and closed-loop automation. By automating the deployment and life-cycle management of the cloud platforms used in a telecom network, Dell’s Telco Cloud reduces operational costs while consistently meeting telco-grade SLAs.

Additionally, customers can optimize their infrastructures with a cloud platform of choice that is aligned end-to-end to workload vendor specifications and use cases, effectively transforming their operational models and processes. This solution not only streamlines telecom cloud design, deployment, and management with integrated hardware, software, and support, but also fully aligns with a telco-centric operational model.

Telecom Infrastructure Blocks will be delivered in an agile manner, with multiple releases per year to simplify life cycle management. By the end of 2023, Dell Telecom Infrastructure Blocks will support workloads for Radio Access Network and Core Network functions with:

  • Dell Telecom Infrastructure Blocks for Wind River, which will support vRAN and Open RAN workloads.
  • Dell Telecom Infrastructure Blocks for Red Hat that initially target Core Network workloads.

To support CSPs’ operational transformation that addresses optimal cost structure, telecom SLAs, and their ability to automate and orchestrate at scale, Dell Telecom Infrastructure Blocks provide the following key capabilities: 

  • Interoperability – Operating all telco cloud platforms as one abstracts all the complexities from multiple technologies and multiple components from different vendors. This allows CSPs to run and manage the entire platform together.
  • Lifecycle management – Typically, a telco network requires long life-cycle time commitments and the ability to coordinate multiple systems with different versions. Dell Telecom Infrastructure Blocks address these issues by providing configuration changes and version alignment for firmware, BIOS, CaaS software and more.
  • Closed-loop operations – Operational transformation is evolving towards zero-touch. This requires a new strategic platform that can de-couple infrastructure from application and enable smooth integration with application orchestration and assurance systems following a telco future mode of operations (FMO).

Transforming Operations Using Telecom Infrastructure Blocks

Dell Telecom Multi-Cloud Foundation provides CSPs a platform-centric solution that promises full support and alignment toward CSP level-4 automation. CSPs can flexibly transform their operations to programmable infrastructure using a consistent tooling and capabilities approach.

Through multiple versions and offers with various partners, CSPs can operate all such foundational infrastructure blocks as one through the following key capabilities:

  • Remote upgrades – This solution follows a consistent tooling and operational model, which allows operational teams to operate full telecom cloud platforms at scale and enables seamless use from central orchestration tools.
  • Operational automation – The Day2 and NOC (Network Operations Center) Telco cloud platform operations can be performed at scale in an automated manner. For CSPs, this means that whole platform can be operated as one following a true IaC (Infrastructure as code) and Programmable infrastructure principle using declarative blueprints. Similarly, patching and LCM options are possible using Dell and CaaS partner-offered tools. Use cases of automatic upgrades and CI/CD of day-n patches and upgrades are also supported as per the road map.
  • Single pane of glass – Customers often deploy different cloud stacks to optimally support different use cases (possibly running workloads from various partners), which has led to an increasing requirement to operate all cloud stacks as one using a single pane of glass. This solution supports this capability along the road map, providing the operations team with a single management and observability platform and a unified layer for NOC teams to monitor and manage.
  • Green network operations – As CSPs find ways to reduce carbon emissions, there is an increased interest in full observability and monitoring, along with actionable insights to optimize and tune cloud platforms. These areas are on the road map and will also be part of Dell Telecom Multi-Cloud Foundation solution.
  • Data-driven architecture – For data to be tapped from core to edge to Radio Access Networks (RAN), the automation architecture used in this solution is data-driven and distributed, enabling real-time use cases and data-driven operations.
  • Automated fault management – This solution is fully aligned with the future mode of operations, which follows zero-touch and intent-driven networks. This vision enables all cloud platforms to use declarative workflows and Northbound integration towards Service Management and Orchestration (SMO) and assurance systems.
  • Brownfield operations – This solution aligns with CSPs' brownfield requirements, which unlocks a range of benefits. These include the ability to integrate existing clusters, integrate existing CI/CD pipelines, and align with existing NOC tools and processes, enabling customers to operate and manage distributed and open infrastructure.
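The declarative, Infrastructure-as-Code principle behind these capabilities can be sketched as a toy reconcile loop: the operator states desired versions in a blueprint, and the automation computes the actions needed to converge a node. This is a conceptual illustration only, not Dell's actual tooling or blueprint schema.

```python
# Toy reconcile loop illustrating declarative, closed-loop operations:
# compare a desired-state blueprint against observed node state and
# emit the actions needed to converge. Component names are invented.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the install/upgrade actions needed to reach the blueprint."""
    actions = []
    for component, want in desired.items():
        have = observed.get(component)
        if have is None:
            actions.append(f"install {component} {want}")
        elif have != want:
            actions.append(f"upgrade {component} {have} -> {want}")
    return actions

blueprint = {"bios": "2.1", "firmware": "10.4", "caas": "4.14"}
node_state = {"bios": "2.1", "firmware": "10.2"}
for action in reconcile(blueprint, node_state):
    print(action)
```

Because the blueprint, not a script, is the source of truth, pushing an update means editing the desired state and letting the loop converge, which is what eliminates hand-maintained playbooks.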

 

Dell Technologies developed Telecom Multi-Cloud Foundation and Telecom Infrastructure Blocks to accelerate 5G cloud infrastructure transformation. Telecom Infrastructure Blocks for Wind River and Red Hat deliver an engineered, validated, and factory-integrated Telco Cloud platform that is performance-optimized for RAN and Core use cases. The solution is also fully aligned for CSPs looking to accelerate intelligent operations and the evolution towards level-4 autonomous networks.

To learn more about this solution, visit the Dell Telecom Multi-Cloud Foundation solutions site.

This blog is co-authored with Abdullah Abuzaid, Technical Product Manager, and Anjali Bhatia, Technical Marketing Engineer at Dell Technologies.





Read Full Blog
  • VxRail
  • multitenancy

Multitenancy in MEC: When is it needed and what does it look like?

Alex Reznik

Wed, 27 Sep 2023 20:32:45 -0000


Sometimes the best solutions stem from the simplest questions. Simple questions often prompt us to think about why we do things the way that we do. For example, a customer recently asked me, “What about multi-tenancy for my MEC?”

Multitenancy and MEC are both loaded terms, so an easier way to tackle this question is to start by asking, “Should I plan to support multiple customers on my network-edge cloud infrastructure; and if so, how do I do it?”  

Multitenancy is one of the critical benefits of a cloud environment, and sharing resources seems to make sense in a highly constrained environment like the edge. Despite its benefits, multitenancy also introduces significant management complexities, which come at a cost. These complexities can drive customers to consider whether the efficiencies of multitenancy justify the costs. The answer depends on both the cloud model used to deliver a service (SaaS, PaaS, IaaS, or co-location) and the customer. Moreover, in most cases, multitenancy is either innate to the solution hosted on public MEC or not worth the cost.

Starting with SaaS, multitenancy is enabled through proper handling of organization accounts and associated user accounts, something that any successful cloud-based SaaS must enable. It is therefore innate in this model. The second model where multitenancy is innate is co-location because, presumably, a successful co-location business model needs more than one customer.  

For PaaS (which includes container platforms such as Kubernetes), the answer to considering multitenancy is a straightforward no. Delivering multitenancy across customer boundaries (as opposed to simply hosting multiple projects of a single enterprise) typically involves creating a SaaS offering from a PaaS-based platform. For more information, see the discussion of this issue on the Kubernetes project site.

This leaves us with IaaS and reduces the question to whether a Mobile Network Operator (MNO) deploying MEC should invest in IaaS multitenancy. Conversely, it might be sufficient for such an MNO to provide each customer with physical infrastructure and a cloud stack, which the customer then integrates into an overall IaaS multicloud operational process. To address this, we need to dig deeper into the requirements that different types of customers are likely to have for IaaS Edge infrastructure.

MEC IaaS and verticals

It is important to remember that the information in this section is opinion-based and should be interpreted as such.  Also, it is equally important to keep in mind that each customer is different and broad-stroke statements such as the ones below may not be valid in every situation. In short, pay attention to your customer!  

Let’s start with large enterprises, which are likely to have some of the most stringent security and management policies. IT downtime and security breaches carry significant costs, and there is substantial in-house IT expertise to deploy and enforce industry best practices. For enterprises, anything that could have been moved into the cloud while meeting IT policies and application requirements presumably is already there. This means there is unlikely to be low-hanging fruit candidates for a network-edge migration.  

A multi-tenant IaaS environment introduces additional hurdles towards meeting IT policies which can complicate an already difficult sales motion and business case. In short, the opportunity cost of trying to do multi-tenant IaaS in this segment is usually not worth it. Capturing business for network edge here is hard enough. This remains true even when such enterprises have remote locations with limited IT expertise at such sites. While the ROI of moving compute off-prem improves in such cases, the hurdle of meeting IT policies remains, and the complications associated with trying to meet them are usually not justified by the cost savings of multitenancy.  

Smaller enterprises (SMBs) often have a stronger case for moving compute off-prem while the burden of IT policies is lower. These SMBs are likely looking for an easy way to achieve an outcome. A SaaS-based solution is much more appropriate than an IaaS one, which means multi-tenancy is out of the question.

To find a business justification for multi-tenancy in IaaS MEC, we need to look outside of enterprises and to direct-to-customer applications providers that take advantage of the network edge. Examples of these include the independent SW vendors (ISVs) of SaaS solutions which are offering outcomes to SMBs (where the tenant is now the ISV itself, not its customers), and ISV offering consumer applications, like emerging interactive gaming. When such applications require Edge presence, the choice is often limited to public MEC, as there is no on-premises and traditional edge co-location providers may not be able to deliver sufficient proximity to customer to meet the required KPIs. Moreover, the customers (ISVs) are typically well-adapted to the cloud and are comfortable with issues such as multitenancy, provided that cloud-like shared responsibility structures are in place and adequately formalized (a legal and contractual issue as much as it is a technical one).

Delivering a multitenant IaaS solution   

So how does an MNO go about creating that multitenant edge? First, we need something generic, because attempting to guess what kind of applications might run on MEC simply limits the addressable market. Second, it is important to remember that operations (O&M) and management will be your biggest headache, so anything that simplifies O&M is likely to pay for itself in spades. And third, public ISVs are most likely to be your customers.

Typically, ISVs developing cloud-native applications use some flavor of Kubernetes (K8s) as the platform. K8s flavors span the gamut from public clouds (Google's GKE, Amazon's EKS) to enterprise clouds (Red Hat OpenShift, VMware Tanzu). An ideal platform would address the following:

  • Provide a way to efficiently address the need for compute-intensive applications, including high-performance computing and storage-intensive ones
  • Make O&M easier in a meaningful and monetarily measurable way
  • Support cloud-native applications developed for any (almost any) of the most commonly used Kubernetes frameworks.  

Although that seems like a lot to ask of a platform, solutions that address all of these points do exist. One excellent example is Dell's VxRail VMware HCI platform. The definition of VxRail, according to the Dell VxRail home page, is:

VxRail goes further to deliver more highly differentiated features and benefits based on proprietary VxRail HCI System Software. This unique combination automates deployment, provides full stack lifecycle management and facilitates critical upstream and downstream integration points that create a truly better together experience

VxRail provides an MNO with a flexible combination of compute (with GPU and DPU options for high-performance computing) and storage, which can be easily and quickly scaled in response to actual demand. Notably, with vCloud Suite, a VxRail deployment can be turned into a multitenant public cloud. For more information, see the VMware Public Cloud Service Definition.

Last but not least, in addition to VMware's Tanzu, VxRail supports Google Anthos (Running Google Anthos on VMware Cloud Foundation), Red Hat OpenShift (VxRail and OpenShift Solution Brief), and AWS EKS (Amazon EKS Anywhere on VxRail Solution Brief), delivering an all-in-one platform for a flexible public MEC deployment at the network edge.

Summary

The question of multitenancy for MEC is only relevant when considering the IaaS service model. In that case, multitenancy is not likely to be of interest when addressing most traditional enterprise customers, but it may be important when addressing the needs of ISVs providing SaaS solutions that need Edge presence. To succeed in delivering a MEC platform to such ISVs, an MNO needs an underlying platform like Dell's VxRail, designed to address their diverse needs in a scalable and easily manageable fashion.

Read Full Blog
  • PowerEdge
  • Cloud RAN
  • OTEL

Dell Technologies and Nokia Pave the Way for Open Ecosystem Cloud RAN

Ken Stumpf

Tue, 26 Sep 2023 20:17:54 -0000


Introduction

The advent of 5G technology has ushered in a new era of connectivity, promising faster speeds, lower latency, and the potential to transform industries across the board. Achieving the full potential of 5G requires collaboration between those who are truly driving the open ecosystem. Dell Technologies and Nokia have come together to create an open ecosystem for 5G networks, enabling Communication Service Providers (CSPs) and Enterprises to harness the power of 5G like never before.

The need for open ecosystems in 5G

5G networks incorporate a whole new level of complexity compared to previous generations of radio networks, including multi-layer cloud infrastructure. A hybrid solution combining purpose-built 5G RAN and Cloud RAN, with a combination of centralized and distributed deployments can best serve the diverse requirements of different 5G use cases. The need for performance and cost-effectiveness implies the use of automation and targeted densification. Focused attention to security breach concerns is also essential. The Zero Trust model used by Dell enforces trust across devices, users, networks, applications, infrastructure, and data with automation and orchestration. Open ecosystems drive innovation and scalability by fostering competition, reducing vendor lock-in, and enhancing cost efficiency.

Cloud RAN collaboration

  1. Nokia has unveiled a groundbreaking concept known as anyRAN, signaling a pivotal shift in how radio access networks (RANs) are built and evolve. Dell's latest generation of XR8000 servers plays a crucial role in this integration. These servers are known for their reliability, performance, and scalability, making them an ideal choice for hosting the cloud-native components of anyRAN. By leveraging Dell's infrastructure, Nokia can ensure that the network's foundation is stable and capable of handling the demands of 5G connectivity. 
  2. Nokia and Dell Technologies are integrating and validating a solution combining Nokia’s 5G Cloud RAN software and a RAN SmartNIC L1 accelerator card with Dell’s purpose-designed telecom open infrastructure, including Dell PowerEdge servers, and Dell storage and switches.
  3. Dell Technologies and Nokia are working together to deploy research and development (R&D) and testing resources. Dell is using the Open Telecom Ecosystem Lab (OTEL) as the center for testing and validation, while Nokia focuses its work on its Nokia System Test Lab. 

Nokia’s Core Networks NFVi platforms are undergoing meticulous testing and verification on the newest generation of Dell PowerEdge servers. The testing, verification, and certification process at Dell's OTEL is an essential step in the journey to deploy containerized services in real-world telecommunications networks. By selecting the latest generation of PowerEdge servers for this testing phase, Nokia is ensuring that NFVi 4.0 supports modern containerized requirements and is adaptable to quickly evolving cloud platforms.

Figure 1. PowerEdge XR8000

Ongoing evolution of the collaboration

Dell Technologies and Nokia have achieved significant progress since the initial announcement of this collaboration at MWC 2023. Both companies have continued to work closely to refine their offerings, address challenges, and push the boundaries of 5G technology. 

Both companies hold high hopes for this collaboration, as they see it as the correct path to reduce deployment complexity, streamline processes, and expedite innovation, as demonstrated by the following quotes from leadership at Dell and Nokia: 

“The ongoing evolution of this partnership underscores the commitment of Dell and Nokia to staying at the forefront of the 5G revolution.” Gautam Bhagra, Vice President, Strategic Business Development, Dell Telecom Systems Business.

“Our strategic collaboration with Dell is an important component of our innovative anyRAN approach, which brings communications service providers and enterprises full freedom of choice to mix and match purpose-built and cloud-based RAN solutions. Together we translate our collaborative advantage into a competitive advantage for our customers, in a dynamic technology landscape, where 5G meets Cloud," Pasi Toivanen, SVP and Head of Partner Cloud RAN Solutions, Mobile Networks, Nokia.

Additional resources

Dell Technologies 5G Technologies - Telecommunication Solutions

Nokia AirScale Cloud RAN

Read Full Blog
  • 5G Core
  • Telco Cloud
  • Dell Telecom Infrastructure Blocks for Red Hat
  • 5GC
  • UPF
  • SMF
  • DTIB

The 5G Core Network Demystified

Gaurav Gangwal Kevin Gray

Thu, 17 Aug 2023 19:29:23 -0000


In the first blog of this 5G Core series, we looked at the concept of cloud-native design, its applications in the 5G network, the benefits and how Dell and Red Hat are simplifying the deployment and management of cloud-native 5G networks.

With this second blog post we aim to demystify the 5G Core network, its architecture, and how it stands apart from its predecessors. We will delve into the core network functions, the role of Cloud-Native architecture, the concept of network slicing, and how these elements come together to define the 5G Network Architecture.

The essence of 5G Core

5G Core, often abbreviated as 5GC, is the heart of the 5G network. It is the control center that governs all the protocols, network interfaces, and services that make the 5G system function seamlessly. The 5G Core is the brainchild of 3GPP (3rd Generation Partnership Project), a standards organization whose specifications cover cellular telecommunications technologies, including radio access, core network and service capabilities, which provide a complete system description for mobile telecommunications.

The 5G Core is not just an upgrade from the 4G core network; it is a radical transformation designed to revolutionize the mobile network landscape. It is built to serve a broader audience, extending its reach to all industry sectors and to time-critical applications such as autonomous driving. The 5G Core is responsible for managing a wide variety of functions within the mobile network that make it possible for users to communicate. These functions include mobility management, authentication, authorization, data management, policy management, and quality of service (QoS) for end users.

5G Network Architecture: What You Need to Know

5G was built from the ground up, with network functions divided by service. As a result, this architecture1 is also known as the 5G core Service-Based Architecture (SBA). The 5G core is a network of interconnected services, as illustrated in the figure below.

[Figure: the 5G core Service-Based Architecture]

3GPP defines the 5G Core Network as a decomposed network architecture with a Service-Based Architecture (SBA), where each 5G Network Function (NF) can subscribe to and register for services from other NFs, using HTTP/2 as the baseline communication protocol.

A second architectural concept in 5G is to decrease dependencies between the Access Network (AN) and the Core Network (CN) by employing a unified, access-agnostic core network with a common interface between the Access Network and Core Network that integrates diverse 3GPP and non-3GPP access types.

In addition, the 5G core decouples the user plane (UP, or data plane) from the control plane (CP). This function, known as CUPS2 (Control and User Plane Separation), was first introduced in 3GPP Release 14. An important characteristic of this function is that, in case of a traffic peak, you can dynamically scale the CP functions without affecting user plane operations. It also allows deployment of UP functions (UPFs) closer to the RAN and User Equipment (UE) to support use cases like Ultra-Reliable Low-Latency Communication (URLLC) and to achieve benefits in both CapEx and OpEx.

5G Core Network Functions and What They Do

The 5G Core Network is composed of various network functions, each serving a unique purpose. These functions communicate internally and externally over well-defined standard interfaces, making the 5G network highly flexible and agile. Let's take a closer look at some3 of the critical 5G Core Network functions:

User Plane Function (UPF)

The User Plane Function is a critical component of the 5G core network architecture. It manages user data during the data transmission process. The UPF serves as the connection point between the RAN and the data network: it takes user data from the RAN and performs a variety of functions such as packet inspection, traffic routing, packet processing, and QoS enforcement before delivering it to the Data Network or Internet. This function allows the data plane to be shifted closer to the network edge, resulting in faster data rates and lower latencies. The UPF combines the user traffic transport functions previously performed by the Serving Gateway (S-GW) and Packet Data Network Gateway (P-GW) in the 4G Evolved Packet Core (EPC).

[Figure: UPF interfaces and reference points]

UPF Interfaces/reference points with employed protocols:

  • N3 (GTP-U): Interface between the RAN (gNB) and the UPF
  • N9 (GTP-U): Interface between two UPFs (that is, between an Intermediate UPF (I-UPF) and the PDU Session Anchor UPF)
  • N6 (plain IP): Interface between the UPF and the Data Network (DN)
  • N4 (PFCP): Interface between the Session Management Function (SMF) and the UPF
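To make the N4 relationship concrete, the sketch below models, in simplified form, the rules an SMF might install on a UPF for one PDU session: Packet Detection Rules (PDRs) that match uplink and downlink traffic, and Forwarding Action Rules (FARs) that say what to do with it. The field names are illustrative stand-ins for the PFCP information elements of 3GPP TS 29.244, not a real PFCP encoding.

```python
# Simplified sketch of the rules an SMF might install on a UPF over N4.
# Field names are illustrative; the real PFCP IEs are defined in 3GPP TS 29.244.

def build_session_rules(ue_ip: str, teid: int, gnb_ip: str) -> dict:
    """Build uplink/downlink rules for one PDU session (illustrative only)."""
    return {
        "pdrs": [
            # Uplink: match GTP-U traffic from the gNB on N3, strip the tunnel.
            {"pdr_id": 1, "source_interface": "access",
             "ue_ip": ue_ip, "teid": teid,
             "outer_header_removal": True, "far_id": 1},
            # Downlink: match plain IP traffic for the UE arriving on N6.
            {"pdr_id": 2, "source_interface": "core",
             "ue_ip": ue_ip, "outer_header_removal": False, "far_id": 2},
        ],
        "fars": [
            # Uplink: forward decapsulated packets toward the data network.
            {"far_id": 1, "action": "forward", "destination": "core"},
            # Downlink: re-encapsulate in GTP-U toward the serving gNB.
            {"far_id": 2, "action": "forward", "destination": "access",
             "outer_header_creation": {"teid": teid, "ipv4": gnb_ip}},
        ],
    }

rules = build_session_rules("10.45.0.7", teid=0x1001, gnb_ip="192.0.2.10")
```

The essential point the sketch captures is the split of responsibilities: the SMF only computes and pushes these rules; the UPF applies them to every packet.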

Session Management Function (SMF)

The Session Management Function (SMF) is a crucial element of the 5G Core Network, responsible for establishing, maintaining, and terminating network sessions for User Equipment (UE). The SMF carries out these tasks using network protocols such as the Packet Forwarding Control Protocol (PFCP) and its service-based interface (Nsmf).

SMF communicates with other network functions like the Policy Control Function (PCF), Access and Mobility Management Function (AMF), and the UPF to ensure seamless data flow, effective policy enforcement, and efficient use of network resources. It also plays a significant role in handling Quality of Service (QoS) parameters, routing information, and charging characteristics for individual network sessions.

The SMF takes on control plane functionality from the serving gateway control plane (SGW-C) and packet gateway control plane (PGW-C), in addition to providing the session management functionality of the 4G Mobility Management Entity (MME).

Access and Mobility Management Function (AMF)

The Access and Mobility Management Function (AMF) oversees the management of connections and mobility. It receives policy control, session-related, and authentication information from the end devices and passes the session information to the PCF, SMF and other network functions. In the 4G/EPC network, the corresponding network element to the AMF is the Mobility Management Entity. While the MME's functionality has been decomposed in the 5G core network, the AMF retains some of these roles, focusing primarily on connection and mobility management, and forwarding session management messages to the SMF.

Additionally, the AMF retrieves subscription information and supports the short message service (SMS). It identifies a network slice using the Single Network Slice Selection Assistance Information (S-NSSAI), which includes the Slice/Service Type (SST) and Slice Differentiator (SD). The AMF's operations enable the management of registration, reachability, connection, and mobility of UEs, making it an essential component of the 5G Core Network.

Policy Control Function (PCF)

The Policy Control Function (PCF) provides the framework for creating policies to be consumed by the other control plane network functions. These policies can cover aspects like QoS, subscriber spending/usage monitoring, network slicing management, and management of subscribers, applications, and network resources. The PCF serves as the policy decision point in the 5G network, much like the PCRF (Policy and Charging Rules Function) in the 4G/EPC network. It communicates with other network elements such as the AMF, SMF, and Unified Data Management (UDM) to acquire critical information and make sound policy decisions.

Unified Data Management (UDM) and Unified Data Repository (UDR)

The Unified Data Management (UDM) and Unified Data Repository (UDR) are critical components of the 5G core network. The UDM maintains subscriber data, policies, and other associated information, while the UDR stores this data. Together they handle the data management responsibilities that were previously carried out by the HSS (Home Subscriber Server) in the 4G EPC. Compared to the HSS, the UDM and UDR provide greater flexibility and efficiency, supporting the enhanced capabilities of the 5G network.

Network Exposure Function (NEF)

The Network Exposure Function (NEF) is another key component of the 5G core network. It enables network operators to securely expose network functionality and interfaces at a granular level, creating a bridge between the 5G core network and external applications (e.g., internal exposure/re-exposure, edge computing). The NEF also provides a means for Application Functions (AFs) to securely provide information to the 3GPP network (e.g., expected UE behavior).

The NEF northbound interface sits between the NEF and the AF. It specifies RESTful APIs that allow the AF to access the services and capabilities provided by 3GPP network entities and securely exposed by the NEF. On the southbound side, the NEF communicates with individual 5G network functions over 3GPP-defined interfaces, such as the N29 interface between the NEF and the Session Management Function (SMF) and the N30 interface between the NEF and the Policy Control Function (PCF).

By opening the network's capabilities to third-party applications, NEF enables a seamless connection between network capabilities and business requirements, optimizing network resource allocation and enhancing the overall business experience.

Network Repository Function (NRF)

The Network Repository Function (NRF) is a critical component of the service-based architecture in the 5G core network, serving as a centralized repository for all NF instances. It is in charge of managing the lifecycle of NF profiles: registering new profiles, updating existing ones, and deregistering those that are no longer in use. The NRF offers a standards-based API for 5G NF registration and discovery.

Technically, NRF operates by storing data about all Network Function (NF) instances, including their supported functionalities, services, and capacities. When a new NF instance is instantiated, it registers with the NRF, providing all the necessary details. Subsequently, any NF that needs to communicate with another NF can query the NRF for the target NF's instance details. Upon receiving this query, the NRF responds with the most suitable NF instance information based on the requested service and capacity.
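Registration and discovery happen over RESTful HTTP/2 APIs. The sketch below forms (but does not send) the two requests an NF might make toward the NRF; the URL paths follow the pattern of the 3GPP TS 29.510 Nnrf services, while the NRF address and the pared-down profile fields are illustrative assumptions.

```python
import json
import uuid
from urllib.parse import urlencode

NRF_ROOT = "https://nrf.example.com"  # hypothetical NRF address


def registration_request(nf_type: str, service_names: list[str]):
    """Form (method, url, body) for NF registration, TS 29.510 style."""
    nf_instance_id = str(uuid.uuid4())
    profile = {
        "nfInstanceId": nf_instance_id,
        "nfType": nf_type,
        "nfStatus": "REGISTERED",
        # Pared-down service list; real NFProfiles carry many more fields.
        "nfServices": [{"serviceName": s} for s in service_names],
    }
    url = f"{NRF_ROOT}/nnrf-nfm/v1/nf-instances/{nf_instance_id}"
    return "PUT", url, json.dumps(profile)


def discovery_request(target_nf: str, requester_nf: str):
    """Form the discovery query an NF would send to find a target NF type."""
    query = urlencode({"target-nf-type": target_nf,
                       "requester-nf-type": requester_nf})
    return "GET", f"{NRF_ROOT}/nnrf-disc/v1/nf-instances?{query}"


method, url, body = registration_request("SMF", ["nsmf-pdusession"])
disc_method, disc_url = discovery_request("SMF", "AMF")
```

An AMF needing session management would issue the discovery query above and then pick an SMF instance from the returned candidates.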

How Does the 5G Core Differ from Previous Generations?  

The primary architectural distinction between the 5G Core and the 4G EPC is that the 5G Core makes use of the Service-Based Architecture (SBA), with cloud-native, flexible configurations of loosely coupled, independent NFs deployed as containerized microservices. The microservices-based architecture gives NFs the ability to scale and upgrade independently of each other, which is a significant benefit to CSPs. The 4G EPC, on the other hand, employs a flat architecture for efficient data handling, with network components deployed as physical network elements in most cases; the interfaces between core network elements were specified as point-to-point, ran proprietary protocols, and did not scale.

[Figure: 5G Core SBA compared with the 4G EPC]

Another significant distinction between 5G Core and EPC is the formation of the control plane (CP). The control plane functionality is more intelligently shared between Access and Mobility Management Functions (AMF) and Session Management Functions (SMF) in the 5G Core than the MME and SGW/PGW in the 4G/EPC. This separation allows for more efficient scaling of network resources and improved network performance. 

In addition to the design and functional updates, business priorities have shifted with 5G. With the 5GC, CSPs are moving away from proprietary, vertically integrated systems toward cloud-native and open source-based platforms like Red Hat OpenShift Container Platform running on industry-standard hardware. Improving responsiveness while cutting operating expenses will be the primary focus for CSPs going forward with the 5G Core.

Key distinctions between the 4G LTE and 5G QoS models 

The key distinctions between 4G LTE and 5G QoS models primarily lie in their approach to quality-of-service enforcement and their level of complexity. In 4G LTE, QoS is enforced at the EPS bearer level (S5/S8 + E-RAB) with each bearer assigned an EPS bearer ID. On the other hand, 5G QoS is a more flexible approach that enforces QoS at the QoS flow level. Each QoS flow is identified by a QoS Flow ID (QFI).  

[Figure: 4G LTE and 5G QoS models]


Furthermore, the process of ensuring end-to-end QoS for a Packet Data Unit (PDU) session in 5G involves packet classification, user plane marking, and mapping to radio resources. Data Packets are classified into QoS flows by UPF using Packet Detection Rules (PDRs) for downlink and QoS rules for uplink.

5G leverages the Service Data Adaptation Protocol (SDAP) for mapping between a QoS flow from the 5G core network and a data radio bearer (DRB). This level of control and adaptability gives 5G an improved QoS model compared to 4G networks.
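The downlink classification step described above can be sketched as an ordered rule match, where each rule maps a traffic filter to a QoS Flow ID (QFI). This is a simplified model of the Packet Detection Rules a real UPF applies; the filter fields and port numbers below are purely illustrative.

```python
# Illustrative downlink classifier: map packets to QoS flows by filter match.
# Real UPFs apply Packet Detection Rules (PDRs); this is a simplified model.

QOS_RULES = [
    # (QFI, matcher) pairs, checked in priority order.
    (1, lambda pkt: pkt["dst_port"] == 5060),                    # e.g. signalling
    (2, lambda pkt: pkt["protocol"] == "udp"
                    and pkt["dst_port"] in range(16384, 32768)),  # e.g. media
]
DEFAULT_QFI = 9  # fall back to the default QoS flow


def classify(pkt: dict) -> int:
    """Return the QoS Flow ID (QFI) for a downlink packet."""
    for qfi, match in QOS_RULES:
        if match(pkt):
            return qfi
    return DEFAULT_QFI
```

All packets assigned the same QFI receive the same forwarding treatment, which is what lets 5G enforce QoS per flow rather than per bearer.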

The Power of Cloud-Native Architecture in 5G Core 

One of the standout features of the 5G Core is its cloud-native architecture. This architecture allows the 5G core network to be built with microservices that can be reused for supporting other network functions. The 5G core leverages technologies like microservices, containers, orchestration, CI/CD pipelines, APIs, and service meshes, making it more agile and flexible. 

With Cloud-Native architecture, 5G Core can be easily deployed and operated, offering a cost-effective solution that complies with regulatory requirements and supports a wide range of use cases. The adherence to cloud-native principles is of utmost importance as it allows for the independent scaling of components and their dynamic placement based on service demands and resource availability. This architecture also allows for network slicing, which enables the creation of end-to-end virtual networks on top of a shared infrastructure.    



Network Slicing: Enabling a Range of 5G Services

Network Slicing is considered one of the key features of 5G by 3GPP. A network slice can be viewed as a logical end-to-end network that can be dynamically created. A UE may access multiple slices over the same gNB. Within a network slice, UEs can create PDU sessions to different gateways via Data Network Names (DNNs). This architecture allows operators to provide custom Quality of Service (QoS) for different services and/or customers under an agreed-upon Service-Level Agreement (SLA).

The Network Slice Selection Function (NSSF) plays a vital role in the network slicing architecture of the 5G Core. It selects the appropriate network slice for a device based on the Network Slice Selection Assistance Information (NSSAI) specified by the device. When a device sends a registration request, it includes the NSSAI, indicating its network slice preference. The NSSF uses this information to determine which network slice best meets the device's requirements and assigns the device to that slice accordingly. This ability to customize network slices based on specific needs is a defining feature of 5G network slicing, enabling a single physical network infrastructure to cater to a diverse range of services with contrasting QoS requirements. To learn more about network slicing, check out the blog post To slice or not to slice | Dell Technologies Info Hub.
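The NSSF's matching step can be illustrated with a toy selection function: intersect the S-NSSAIs a device requests with the slices the network supports. The SST values 1/2/3 correspond to the 3GPP-standardized eMBB/URLLC/mIoT service types; the data layout and slice names below are hypothetical.

```python
# Toy slice selection: intersect requested S-NSSAIs with supported slices.
# SST 1/2/3 are the 3GPP-standardized eMBB/URLLC/mIoT service types.

SUPPORTED_SLICES = {
    (1, None): "embb-default",        # (SST, SD) -> internal slice name
    (2, "0x000001"): "urllc-factory",
}


def select_slices(requested_nssai: list) -> dict:
    """Map each requested S-NSSAI to a supported slice, skipping unknown ones."""
    allowed = {}
    for sst, sd in requested_nssai:
        slice_name = SUPPORTED_SLICES.get((sst, sd))
        if slice_name is not None:
            allowed[(sst, sd)] = slice_name
    return allowed


# A UE requesting eMBB and mIoT gets only the slice this network supports.
allowed = select_slices([(1, None), (3, None)])
```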

Next steps

To learn how Dell and Red Hat are helping CSPs in their cloud-native journey, see the blog Cloud-native or Bust: Telco Cloud Platforms and 5G Core Migration on Info Hub. In the next blog of the 5G Core series, we will explore the collaboration between Dell Technologies and Red Hat to simplify operator processes, starting from the initial technology onboarding all the way to Day 2 operations. The focus is on deploying a telco cloud that supports 5G core network functions using Dell Telecom Infrastructure Blocks for Red Hat.  

To learn more about Telecom Infrastructure Blocks for Red Hat, visit our website: Dell Telecom Multi-Cloud Foundation solutions.



1 The 5G Architecture shown here is the simplified version, there are other 5G NFs like UDSF, SCP, BSF, SEPP, NWDAF, N3IWF etc. not shown here.

2 CUPS is a pre-5G technology (5G Standalone (SA) was introduced in 3GPP Rel-15). 5G SA offers more innovation with the ability to change anchors (SSC Mode 3), daisy chain UPFs, and connect to multiple UPFs.

3 The Network Functions mentioned in this section are a subset of the standardized NFs in 5G Core network.  

Authored by:


Gaurav Gangwal

Senior Principal Engineer – Technical Marketing, Product Management

About the author:

Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an Engineering degree in Electronics and Telecommunications and has worked in the telecommunications industry for more than 14 years. He currently resides in Bangalore, India.


Kevin Gray

Senior Consultant, Product Marketing – Product Marketing

About the author:

Kevin Gray leads marketing for Dell Technologies Telecom Systems Business Foundations solutions. He has more than 25 years of experience in telecommunications and enterprise IT sectors. His most recent roles include leading marketing teams for Dell’s telecommunications, enterprise solutions and hybrid cloud businesses. He received his Bachelor of Science in Electrical Engineering from the University of Massachusetts in Amherst and his MBA from Bentley University.  He was born and raised in the Boston area and is a die-hard Boston sports fan.



Read Full Blog
  • Intel
  • PowerEdge
  • Open RAN
  • 5G
  • Edge
  • vRAN

Dell Shifts vRAN into High Gear on PowerEdge with Intel vRAN Boost

Mike Moore

Thu, 17 Aug 2023 18:33:05 -0000

|

Read Time: 0 minutes

What has past

Mobile World Congress 2023 was an important event for both Dell Technologies and Intel that marked a true foundational turning point for vRAN viability. At this event, Intel launched its  4th Gen Intel Xeon Scalable processor, with Intel vRAN Boost, and Dell announced two new ruggedized server platforms, the PowerEdge XR5610 and XR8000, with support for vRAN Boost CPU SKUs.  

The features and capabilities of the PowerEdge XR5610 and XR8000 have been highlighted in previous blogs and both have been available to order since May 2023. These new ruggedized servers have been evaluated and adopted as Cloud RAN reference platforms by NEPs such as Samsung and Ericsson. Short-depth, NEBS certified and TCO-optimized, these servers are purpose-built for the demanding deployment environments of Mobile Operators and are now married to the Intel vRAN Boost processor to provide a powerful and efficient alternative to classical appliance options.

What is now

Starting August 16, 2023, the 4th Gen Intel Xeon Scalable processor with Intel vRAN Boost is available to order with the PowerEdge XR5610 and XR8000. These two critical pieces of the vRAN puzzle have been brought together and are now available to order from our PowerEdge XR Rugged Servers page with the following CPU SKUs.

CPU SKU   Cores   Base Freq.   TDP
6433N     32      2.0 GHz      205 W
5423N     20      2.1 GHz      145 W
6423N     28      2.0 GHz      195 W
6403N     24      1.9 GHz      185 W

Table 1. Intel vRAN Boost SKUs available today from Dell

Additional details on these new CPU SKUs and all 4th Gen Intel® Xeon® Scalable processors can be found on the Intel Ark Site.

These processors, with Intel vRAN Boost, integrate key acceleration blocks for 4G and 5G Radio Layer 1 processing into the CPU. These include: 

  • 5G Low Density Parity Check (LDPC) encoder/decoder
  • 4G Turbo encoder/decoder
  • Rate match/dematch
  • Hybrid Automatic Repeat Request (HARQ) with access to DDR memory for buffer management
  • Fast Fourier Transform (FFT) block providing DFT/iDFT for the 5G Sounding Reference Signal (SRS)
  • Queue Manager (QMGR)
  • DMA subsystem

One of the most interesting features of the vRAN Boost CPU is how this acceleration block is accessed by software. Although it is integrated on-chip with the CPU, the vRAN Boost block still presents itself to the Cores/OS as a PCIe device. The genius of this approach is in software compatibility. Virtual Distributed Unit (vDU) applications written for the previous generation HW will access the new vRAN Boost block using the same standardized, open APIs that were developed for the previous generation product. This creates a platform that can support past, present (and possibly future) generations of Intel’s vRAN optimized HW with the same software image.

What is to come

Prior to vRAN Boost, the reference architecture for the vDU was a 3rd Gen Intel Xeon Scalable processor paired with a FEC/LDPC accelerator, such as the Intel vRAN Accelerator ACC100 Adapter, and most of today's vRAN deployments use this configuration. While the ACC100 meets the L1 acceleration needs of vRAN, it does so at a price: the space of an HHHL PCIe card and an additional 54 W of power consumed (and cooled). In addition, occupying a PCIe slot reduces I/O expansion options and limits the ability to scale in-chassis due to slot count; both constraints are alleviated with vRAN Boost.

With the new Intel vRAN Boost processors’ fully integrated acceleration, Intel has taken a huge step in closing the performance gap with purpose-built hardware, while remaining true to the “Open” in O-RAN.

Intel says that the new Intel vRAN Boost processor delivers up to 2x capacity and roughly 20% compute power savings compared to its previous generation processor with ACC100 external acceleration. At the Cell Site, where every watt is counted, operators are constantly exploring opportunities to reduce both power consumption and the associated "cooling tax" of keeping the HW in its operational range, typically within a sealed environment.

Dell and Intel have worked together to provide early access Intel vRAN Boost provisioned XR5610s and XR8000s to multiple partners and customers for integration, evaluation, and proof-of-concepts.  One early evaluator, Deutsche Telekom, states:

“Deutsche Telekom recently conducted a performance evaluation of Dell’s PowerEdge XR5610 server, based on Intel’s 4th Gen Intel Xeon Scalable processor with Intel vRAN Boost. Testing under selected scenarios concluded a 2x capacity gain, using approximately 20% less power versus the previous generation. We aim to leverage these significant performance gains on our journey to vRAN.” 

-- Petr Ledl, Vice President of Network Trials and Integration Lab and Access Disaggregation Chief Architect, Deutsche Telekom AG

With such a solid industry foundation of the telecom-optimized PowerEdge XR5610s/XR8000s, and 4th Gen Intel Xeon Scalable processors with Intel vRAN Boost, expect to see accelerated deployments of open, vRAN-based infrastructure solutions.

Read Full Blog

A Bachelor’s Guide to Network Automation

David Han

Mon, 24 Jul 2023 20:18:24 -0000

|

Read Time: 0 minutes

As a bachelor, I have a deep appreciation for how automation can simplify life. My dishwasher automatically washes my dishes for me while I’m at work. My DVR automatically records sports games for me while I’m out with friends. My smart thermostat automatically takes care of regulating my energy use when I’m on vacation. In each case, automating everyday tasks like these allows me to focus on more important things, like writing this blog!

 

It’s the same with automating telecom network infrastructure. There are so many little things that Communications Service Providers (CSPs) need to do just to keep running, from updating software to deploying and decommissioning hardware. All those little things take up a lot of time (and work), particularly when you’re talking about CSPs that may have millions of subscribers and tens of thousands of network nodes. This is time that could be spent on more important things like creating new revenue-generating services.

 

Is your network too high maintenance? 

A telecommunications data center is, of course, more complex than your average apartment (although sometimes no less crowded). And the appliances are a bit more complicated; I can use a fork when something gets stuck in my toaster, but no such luck when retrieving a packet stuck in a gateway. With 5G networks, the complexity rises exponentially. Now, you have multiple architectures (3G, 4G, 5G) that need to share services, hardware from multiple vendors that need to be configured consistently, cloud-native functions that are running on virtualized servers, etc. It makes programming your entire home theatre entertainment systems look simple by comparison.

 

Historically, CSPs have managed all their network infrastructure manually, using multiple management tools for each individual vendor platform. Manual configurations can lead to human errors, cause configuration drift, and decrease the efficiency of the network. And this raises an interesting question: If automation can simplify my life as a bachelor, can it also simplify the lifecycle management of a network? The answer is a resounding YES.

 

Automating network infrastructures matters a lot

Of course, I’m leading up to Dell Technologies’ Bare Metal Orchestrator. Since its initial launch, Bare Metal Orchestrator has helped CSPs around the globe simplify their network management through infrastructure automation and service orchestration. In independent studies, Bare Metal Orchestrator has been shown to reduce a CSP’s operational expenses between 39 and 57 percent and improve network ROI between 88 and 255 percent. (You can download the complete research paper here.) That’s a lot of extra money and time that can be re-invested in things that make you more money, like new 5G services.

 

Bare Metal Orchestrator replaces the independent management tools that CSPs are using today by unifying infrastructure management on a single platform. Dell 16G Server? Cisco router? HPE server? Bare Metal Orchestrator automates and orchestrates them all. And it supports the industry’s leading cloud platforms such as VMware, Red Hat, and Wind River. 

 

Yes, you can have a network and a life

One of the biggest challenges facing CSPs in their 5G transformation is scale. As 5G networks grow larger and network functions become more disaggregated, there’s more software and hardware to manage. And the larger the network grows, the bigger the network management problem becomes.

 

Bare Metal Orchestrator is built for massive scale. We’re talking about automating and orchestrating tens of thousands of hardware nodes across multiple geographies, multiple platforms, and multiple domains of the network (e.g., core, edge, RAN). And we’re building more features into it all the time. Just recently, we’ve expanded Bare Metal Orchestrator with new features, including auto-discovery/inventory of third-party devices, configuration drift detection and reporting, identity access management, and features that allow you to automate existing (brownfield) network deployments with no service disruption.

 

You don’t have to be a bachelor to know that complexity, whether in life or in your network, is no fun. CSPs certainly have a lot on their plates already without juggling multiple management tools, trying to identify configuration drifts across tens of thousands of nodes, and worrying about whether their latest network deployments will meet their SLAs or melt down under pressure. 

 

We’re removing the network infrastructure barriers so that CSPs can unlock their true potential and embrace the future without fear. If you’re ready to experience that kind of freedom, ask about our free trial of Bare Metal Orchestrator and see what you’ve been missing.

Read Full Blog
  • edge
  • Telecom
  • 5G
  • Dell Telecom Multi-Cloud Foundation
  • 5G Core

What is Happening in the Network Edge

Alex Reznik Tomi Varonen Arthur Gerona

Mon, 26 Jun 2023 10:59:44 -0000

|

Read Time: 0 minutes


Where is the Network Edge in Mobile Networks

The notion of ‘Edge’ can take on different meanings depending on the context, so it’s important to first define what we mean by Network Edge. This term can be broadly classified into two categories: Enterprise Edge and Network Edge. The former refers to when the infrastructure is hosted by the company using the service, while the latter refers to when the infrastructure is hosted by the Mobile Network Operator (MNO) providing the service.

This article focuses on the Network Edge, which can be located anywhere from the Radio Access Network (RAN) to next to the Core Network (CN). Network Edge sites collocated with the RAN are often referred to as Far Edge.

What is in the Network Edge

In a 5G Standalone (5G SA) Network, a Network Edge site typically contains a cloud platform that hosts a User Plane Function (UPF) to enable local breakout (LBO). It may include a suite of consumer and enterprise applications, for example, those that require lower latency or more privacy. It can also benefit the transport network when large content such as Video-on-Demand is brought closer to the end users.

Modern cloud platforms are envisioned to be open and disaggregated to enable MNOs to rapidly onboard new applications from different Independent Software Vendors (ISVs), thus accelerating technology adoption. These modern cloud platforms are typically composed of Commercial-off-the-Shelf (COTS) hardware, multi-tenant Container-as-a-Service (CaaS) platforms, and multi-cloud Management and Orchestration solutions.

Similarly, modern applications are designed to be cloud-native to maximize service agility. By having microservices architectures and supporting containerized deployments, MNOs can rapidly adapt their services to meet changing market demands.

What contributes to Network Latency

The appeal of Network Edge or Multi-access Edge Computing (MEC) is commonly associated with lower latency or more privacy. While moving applications from beyond the CN to near the RAN does eliminate up to tens of milliseconds of delay, it is also important to understand that there are many other contributors to network latency which can be optimized. In fact, latency is added at every stage from the User Equipment (UE) to the application and back.

RAN is typically the biggest contributor to network latency and jitter, the latter being a measure of fluctuations in delay. Accordingly, 3GPP has introduced many enhancements in 5G New Radio (5G NR) to reduce latency and jitter in the air interface. There are three primary categories where latency can be reduced:

  • Transmission time: reduce symbol duration with higher subcarrier spacing or with mini slots
  • Waiting time: improve scheduling (optimize handshaking), simultaneous transmit/receive, and uplink/downlink switching with TDD
  • Processing time: reduce UE and gNB processing and queuing with enhanced coding and modulation
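The first bullet follows directly from 5G NR numerology: subcarrier spacing is 15 × 2^µ kHz, slot duration is 1/2^µ ms, and (with normal cyclic prefix) each slot carries 14 OFDM symbols, so higher numerologies shorten both the symbol and the scheduling interval. A quick sketch of the arithmetic:

```python
# 5G NR numerology: SCS = 15 * 2^mu kHz, slot duration = 1 / 2^mu ms,
# with 14 OFDM symbols per slot (normal cyclic prefix).

def slot_duration_ms(mu: int) -> float:
    """Slot duration in milliseconds for numerology mu."""
    return 1.0 / (2 ** mu)


def symbol_duration_us(mu: int) -> float:
    """Approximate OFDM symbol duration in microseconds for numerology mu."""
    return slot_duration_ms(mu) * 1000.0 / 14.0


for mu in range(4):
    scs_khz = 15 * 2 ** mu
    print(f"mu={mu}: SCS={scs_khz} kHz, "
          f"slot={slot_duration_ms(mu):.3f} ms, "
          f"symbol~{symbol_duration_us(mu):.1f} us")
```

Moving from 15 kHz to 120 kHz subcarrier spacing cuts the slot from 1 ms to 125 µs, which is one reason mmWave deployments can offer much tighter air-interface latency.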

Transport latency is relatively simple to understand as it is mainly due to light propagation in optical fiber. The industry rule of thumb is 1 millisecond round trip latency for every 100 kilometers. The number of hops along the path also impacts latency as every transport equipment adds a bit of delay.
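That rule of thumb turns into a quick budget estimate, sketched below. The 1 ms of round-trip latency per 100 km comes from the rule above, while the per-hop equipment delay (about 10 µs one-way) is an illustrative assumption, not a vendor figure.

```python
# Rough fiber transport latency budget using the 1 ms RTT per 100 km rule.
# Per-hop delay is an illustrative assumption; real equipment varies.

MS_RTT_PER_100KM = 1.0
HOP_DELAY_MS = 0.01  # assumed ~10 us per transport hop, one-way


def transport_rtt_ms(distance_km: float, hops: int) -> float:
    """Estimate round-trip transport latency over fiber."""
    propagation = distance_km / 100.0 * MS_RTT_PER_100KM
    equipment = 2 * hops * HOP_DELAY_MS  # traversed in both directions
    return propagation + equipment


# Example: an edge site 40 km away through 4 hops vs a core DC 400 km away.
edge_rtt = transport_rtt_ms(40, 4)    # ~0.48 ms
core_rtt = transport_rtt_ms(400, 10)  # ~4.2 ms
```

Even this rough model shows why moving the UPF from a central data center to an edge site recovers several milliseconds of round-trip budget.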

Typically, CN adds less than 1 millisecond to the latency. The challenge for the CN is more about keeping the latency low for mobile UEs, by seamlessly changing anchors to the nearest Edge UPF through a new procedure called ‘make before break’. Also, the UPF architecture and Gi/SGi services (e.g., Deep Packet Inspection, Network Address Translation, and Content Optimization) may add a few additional milliseconds to the overall latency, depending on whether these functions are integrated or independent.

Architectural and Business approaches for the Network Edge

The physical locations that host RAN and Network Edge functionalities are widely recognized to be some of the MNOs’ most valuable assets. Few other entities today have the real estate and associated infrastructure (e.g., power, fiber) to bring cloud capabilities this close to the end clients. Consequently, monetization of the Network Edge is an important component of most MNOs’ strategy for maximizing their investment in the mobile network and, specifically, in 5G. In almost all cases, the Network Edge monetization strategy includes making Network Edge available for Enterprise customers to use as an “Edge Cloud.” However, doing so involves making architectural and business model choices across several dimensions:

  • Connectivity or Cloud: should the MNO offer a cloud service, or just the connectivity to a cloud service provided by a third party (and potentially hosted at a third party’s site)?
  • aaS model: in principle, the full range of as-a-Service models is available for the MNO to offer at the network edge. This includes co-location services, Bare-Metal-as-a-Service, Infrastructure-as-a-Service (IaaS), Containers-as-a-Service (CaaS), and Platform and Software-as-a-Service (PaaS and SaaS). Going up this value chain (up being from co-lo to SaaS) allows the MNO to capture more of the value provided to the Enterprise. However, it also requires the MNO to take on significantly more responsibility and puts it in direct competition with well-established players in this space – e.g., the cloud hyperscale companies. The right mix of offerings – and it is invariably a mix – thus involves a complex set of technical and business case tradeoffs. The end result will be different for every MNO, and how each arrives there will also be unique.
  • Management framework: our industry’s initial approach to exposing the Network Edge to enterprises involved a management framework tightly coupled to how the MNO manages its network functions (e.g., the ETSI MEC family of standards). However, this approach comes with several drawbacks from an Enterprise point of view. As a result, a loosely coupled approach, where the Enterprise manages its Edge Cloud applications using typical cloud management solutions, appears to be gaining significant traction, with solutions such as Amazon’s Wavelength as an example. This approach, of course, has its own drawbacks, and managing the interplay between the two is an important consideration for the Network Edge (and one that is intertwined with the selection of aaS model).
  • Network-as-a-Service: a unique aspect of the Network Edge is the MNO’s ability to expose network information to applications, as well as the ability to provide those applications (highly curated) means of controlling the network. How and whether this makes sense is again both a business case issue – for the MNO and the Enterprise – and a technical/architectural one.

Certainly, the likely end state is a complex mixture of services and go-to-market models focused on the Enterprise (B2B) segment. The exposure of operational automation and the 5G features designed to address this segment make this a huge opportunity for MNOs. Navigating the complexities of this space requires a deep understanding of both what services Enterprises are looking for and how they are looking to consume them. It also requires an architectural approach that can handle the variable mix of what is needed in a way that is highly scalable.

As the long-time leader in Enterprise IT services, Dell is uniquely positioned to address this space – stay tuned for more details in an upcoming blog!

Building the Network Edge

There are several factors to consider when moving workloads from central sites to edge locations. Limited space and power are at the top of the list. Edge locations are farther from major cities and generally more exposed to the elements, requiring a new class of denser, easier-to-service, and even ruggedized form factors. Thanks to the popularity of Open RAN and Enterprise Edge, there are already solutions in the market today that can also be used for the Network Edge. Read more in our Edge blog series: Computing on the Edge | Dell Technologies Info Hub

Higher deployment and operating costs are another major factor. The sheer number of edge locations, combined with their limited accessibility, makes them more expensive to build and maintain. The economics of the Network Edge thus necessitate automation and pre-integration. Dell’s solution is a newly engineered cloud-native solution with automated deployment and lifecycle management at its core. More on this novel approach here: Dell Telecom MultiCloud Foundation | Dell USA.

Last is the lower cost of running applications centrally. Central sites have the advantage of pooling compute and sharing facilities such as power, connectivity, and cooling. It is therefore important to reduce overhead at the edge wherever possible, such as opting for containerized over VM-based cloud platforms. Moreover, an open and disaggregated horizontal cloud platform not only allows for multitenancy at edge locations, which significantly reduces overhead, but also enables application portability across the network to maximize efficiency.

The ideal situation is one where Open/Cloud RAN and the Network Edge share sites, thus splitting several of the deployment and operations costs. Due to latency requirements, the Distributed Unit (DU) must be placed within 20 kilometers of the Radio Unit (RU). Latency requirements for the mid-haul interface between the DU and Central Unit (CU) are less stringent, and the CU can be placed roughly 80-100 kilometers from the DU. In addition, the Near-Real Time RAN Intelligent Controller (Near-RT RIC) and the related xApps must be placed within a 10 ms round-trip time (RTT) of the RAN. This makes it possible to collocate Network Edge sites with the CU sites and Near-RT RIC.
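Those rules of thumb can be combined into a rough placement-feasibility check for a candidate site layout. The sketch below is illustrative only: the function names are assumptions, and the fiber-distance-to-RTT conversion reuses the ~1 ms RTT per 100 km rule of thumb discussed earlier.

```python
def rtt_ms(fiber_km: float) -> float:
    """Rough round-trip latency over fiber: ~1 ms per 100 km (rule of thumb)."""
    return fiber_km / 100.0

def placement_ok(ru_du_km: float, du_cu_km: float, cu_ric_km: float) -> bool:
    """Check a candidate layout against the placement rules of thumb.

    ru_du_km  -- fronthaul fiber distance, RU to DU (limit ~20 km)
    du_cu_km  -- mid-haul fiber distance, DU to CU (limit ~100 km)
    cu_ric_km -- fiber distance from the CU site to the Near-RT RIC
    """
    if ru_du_km > 20:      # fronthaul distance limit
        return False
    if du_cu_km > 100:     # mid-haul distance limit
        return False
    # Near-RT RIC must sit within a 10 ms RTT of the RAN (here: of the DU).
    return rtt_ms(du_cu_km + cu_ric_km) <= 10.0

# A CU site 90 km from the DU, with the RIC collocated at the CU:
print(placement_ok(15, 90, 0))  # True
```

Note how loose the 10 ms RIC budget is compared to the fronthaul and mid-haul distance limits, which is why collocating the Near-RT RIC with the CU sites is an easy fit.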

Future

What has happened over the past few years is that several MNOs have already moved away from having 2-3 national DCs for their entire CN to deploying 5-10 regional DCs where some network functions, such as the UPF, are distributed. One example of this is AT&T’s dozen “5G Edge Zones” introduced in major metropolitan areas: AT&T Launching a Dozen 5G “Edge Zones” Across the U.S. (att.com).

This approach already suffices for the majority of “low latency” use cases, and in smaller countries even the traditional 2-3 national DCs can offer sufficiently low transport latency. However, for critical use cases with more stringent latency requirements, where consistently very low latency is a must, moving the applications to Far Edge sites becomes a necessity, in tandem with 5G SA enhancements such as network slicing and an optimized air interface.

The challenge with consumer use cases such as cloud gaming is supporting the required Service Level (i.e., low latency) country wide. And since enabling the network to support this requires a substantial initial investment, we are seeing the classic chicken and egg problem where independent software vendors opt not to develop these more demanding applications while MNOs keep waiting for these “killer use cases” to justify the initial investment for the Network Edge. As a result, we expect geographically limited enterprise use cases to gain market traction first and serve as catalysts for initially limited Network Edge deployments.

For use cases where assured speeds and low latency are critical, end-to-end Network Slicing is essential. To adopt a new, more service-oriented approach, MNOs will need the Network Edge and low-latency enhancements, together with Network Slicing, in their toolbox. For more on this approach and Network Slicing, please check out our previous blog To slice or not to slice | Dell Technologies Info Hub.

 

 

About the author: Tomi Varonen 

Tomi Varonen is a Telecom Network Architect in Dell’s Telecom Systems Business Unit. He is based in Finland and works on Cloud, Core Network, and OSS&BSS customer cases in the EMEA region. Tomi has over 23 years of experience in the Telecom sector in various technical and sales positions, with wide expertise in end-to-end mobile networks, and enjoys creating solutions for new technology areas. He has a passion for various outdoor activities with family and friends, including skiing, golf, and bicycling.

 

About the author: Arthur Gerona 

Arthur is a Principal Global Enterprise Architect at Dell Technologies. He works in the Telecom Cloud and Core area for the Asia Pacific and Japan region. He has 19 years of experience in telecommunications, holding various roles in delivery, technical sales, product management, and field CTO. When not working, Arthur likes to keep active and travel with his family.


About the author: Alex Reznik

Alex Reznik is a Global Principal Architect in Dell Technologies Telco Solutions Business organization. In this role, he is focused on helping Dell’s Telco and Enterprise partners navigate the complexities of Edge Cloud strategy and turning the potential of 5G Edge transformation into the reality of business outcomes. Alex is a recognized industry expert in the area of edge computing and a frequent speaker on the subject. He is a co-author of the book "Multi-Access Edge Computing in Action." From March 2017 through February 2021, Alex served as Chair of ETSI’s Multi-Access Edge Computing (MEC) ISG – the leading international standards group focused on enabling edge computing in access networks.

Prior to joining Dell, Alex was a Distinguished Technologist in HPE’s North American Telco organization.   In this role, he was involved in various aspects of helping Tier 1 CSPs deploy state-of-the-art flexible infrastructure capable of delivering on the full promises of 5G.  Prior to HPE Alex was a Senior Principal Engineer/Senior Director at InterDigital, leading the company’s research and development activities in the area of wireless internet evolution.  Since joining InterDigital in 1999, he has been involved in a wide range of projects, including leadership of 3G modem ASIC architecture, design of advanced wireless security systems, coordination of standards strategy in the cognitive networks space, development of advanced IP mobility and heterogeneous access technologies and development of new content management techniques for the mobile edge. 

Alex earned his B.S.E.E. Summa Cum Laude from The Cooper Union, S.M. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology, and Ph.D. in Electrical Engineering from Princeton University. He held a visiting faculty appointment at WINLAB, Rutgers University, where he collaborated on research in cognitive radio, wireless security, and future mobile Internet.   He served as the Vice-Chair of the Services Working Group at the Small Cells Forum.  Alex is an inventor of over 160 granted U.S. patents and has been awarded numerous awards for Innovation at InterDigital.


Cloud-native or Bust: Telco Cloud Platforms and 5G Core Migration

Gaurav Gangwal Kevin Gray Gaurav Gangwal Kevin Gray

Thu, 25 Apr 2024 16:23:22 -0000

|

Read Time: 0 minutes

Breaking down barriers with an open, disaggregated, and cloud-native 5G Core

As 5G network rollouts accelerate, communication service providers (CSPs) around the world are shifting away from purpose-built, vertically integrated solutions in favor of open, disaggregated, and cloud-native architectures running containerized network functions. This allows them to take advantage of modern DevSecOps practices and an emerging ecosystem of telecom hardware and software suppliers delivering cloud-native solutions based on open APIs, open-source software, and industry-standard hardware to boost innovation, streamline network operations, and reduce costs.

To take advantage of the benefits of cloud-native architectures, many CSPs are moving their 5G Core network functions onto commercially available cloud-native application platforms like Red Hat OpenShift, the industry's leading enterprise Kubernetes platform. However, building an open, disaggregated telco cloud for 5G Core is not easy, and it comes with its own set of challenges that need to be tackled before large-scale deployments.

In a disaggregated network, the system integration and support tasks become the CSP's responsibility. To achieve their objectives for 5G, CSPs must:

  • Accelerate the introduction and management of new technologies by simplifying and streamlining processes from Day 0 network design and integration tasks, to Day 1 deployment, and Day 2 lifecycle management and operations.
  • Break down digital silos by deploying a horizontal cloud platform to reduce CapEx and OpEx while lowering power consumption.
  • Deploy architectures and technologies that consistently meet strict telecom service level agreements (SLAs).

Digging into the challenges of deploying 5G Core network functions on cloud infrastructure

This will be a five-part blog series that addresses the challenges when deploying 5G Core network functions on a telco cloud. 

  • In this first blog, we will highlight CSPs’ key challenges as they migrate to an open, disaggregated, and cloud-native ecosystem; 
  • The next blog will explore the 3GPP 5G Core network architecture and its components;  
  • The third blog in the series will discuss how Dell Technologies is working with Red Hat to streamline operator processes from initial technology onboarding through Day 2 operations when deploying a telco cloud to support core network functions;
  • The fourth blog will focus on distributing User Plane functionality from the centralized data center to the network edge, so operators can create a more scalable and flexible 5G network environment;
  • The final blog in the series will discuss how Dell is integrating Intel technology that consistently meets CSP SLAs for 5G Core network functions.

Accelerating the introduction and simplifying the management of 5G Core network functions on cloud infrastructure

Cloud native architectures offer the potential to achieve superior performance, agility, flexibility, and scalability, resulting in easily updated, scaled, and maintained Core network functions with improved network performance and lower operational costs. Nevertheless, operating 5G Core network functions on a telco cloud can be difficult due to new challenges operators face in integration, deployment, lifecycle management, and developing and maintaining the right skill sets.

Different integration and validation requirements

Open multi-vendor cloud-native architectures require the CSP to take on more ownership of design, integration, validation, and management of many complex components, such as compute, storage, networking hardware, the virtualization software, and the 5G Core workload that runs on top. This increases the complexity of deployment and lifecycle management processes while requiring investment in development of new skill sets.

Complexity of the deployment process demands automation

5G Core deployment on a telco cloud platform can be a complex process that requires integrating multiple systems and components into a unified whole, with automated deployment from the hardware up through the Core network functions. This complexity creates the need for automation that not only streamlines processes but also ensures a consistent deployment or upgrade each time, aligned with established configuration best practices. Many CSPs lack deployment experience with automation and cloud-native tools, making this a difficult task.

Lifecycle management and orchestration of a disaggregated 5G Core

The size and complexity of the 5G Core can make lifecycle management and orchestration challenging. Every change to one of its components starts a new validation cycle and increases the risk of introducing security vulnerabilities and configuration issues into the environment.

Lack of cloud-native skills and experience

Managing a telco cloud requires a different set of skills and expertise than operating traditional network environments. CSPs often need to hire additional staff and invest in cloud-native training and development to obtain the skills and experience to put cloud-native principles into practice as they build, deploy, and manage cloud-native applications and services.

Breaking down vertical silos with a horizontal, 5G telco cloud platform

In recent years, many CSPs embarked on a journey away from vertically integrated, proprietary appliances to virtualized network functions (VNFs). One of the goals when adopting network functions virtualization was to obtain greater freedom in selecting hardware and software components from multiple suppliers, making services more cost-effective and scalable. However, CSPs often experienced difficulties designing, integrating, and testing their individual stacks, resulting in higher integration costs, interoperability issues, and regression-testing delays that led to less efficient operations.

Despite efforts to move to virtualized network functions, silos of vertically integrated cloud deployments can emerge when virtual network function suppliers define their own cloud stacks to simplify meeting the requirements for each workload. These vertical silos prevent CSPs from pooling resources, which reduces infrastructure utilization rates and increases power consumption. They also increase the complexity of lifecycle management, as each layer of the stack in each silo needs to be validated whenever a change to a component of the stack is made.
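A toy calculation illustrates the pooling argument (all numbers here are assumed, purely for illustration): each vertical silo must be sized for its own peak load, while a horizontal platform only needs capacity for the combined peak, which is lower whenever the silos' peaks do not coincide.

```python
# Assumed per-silo peak demand, in servers, for three workload silos:
peaks = {"vendor_a": 40, "vendor_b": 35, "vendor_c": 25}

# Siloed deployment: every silo is provisioned for its own peak.
siloed_capacity = sum(peaks.values())   # 100 servers

# Pooled deployment: size for the combined peak, which is below the
# sum of individual peaks because the peaks occur at different times
# (80 is an assumed measured combined peak, not derived here).
combined_peak = 80

servers_saved = siloed_capacity - combined_peak   # 20 servers freed by pooling
print(f"Siloed: {siloed_capacity}, pooled: {combined_peak}, "
      f"saved: {servers_saved}")
```

The saved capacity translates directly into lower CapEx and power consumption, which is the core economic case for the horizontal platform described below.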

Vertically integrated 5G Core stack on a telco cloud

CSPs are now looking to implement a horizontal platform that can provide a common cloud infrastructure to help break down these silos to lower costs, reduce power consumption, improve operational efficiency, and minimize complexity allowing CSPs to adopt cloud native infrastructure from the core to the radio access network (RAN).

Horizontal platform for 5G telco workloads

Maintaining compliance with telecom industry SLAs

Creating and managing a geographically dispersed telco cloud based on a broad range of suppliers while consistently adhering to CSP SLAs takes a lot of effort, resources, and time and can introduce new complications and risks. To meet these SLAs and accelerate the introduction of new technologies, CSPs will need a novel approach to working with vendors that reduces integration and deployment times and costs while simplifying ongoing operations. This will include developing a tighter relationship with their supply base to offload integration tasks while maintaining the flexibility provided by an open telecom ecosystem. As an example, Vodafone recently introduced a paper outlining their vision for a new operating model to improve systems integration with their supply base to help achieve these objectives. It will also include following a proven path in enterprise IT by adopting engineered systems, similar to the converged and hyperconverged systems used by IT today, that have been optimized for telecom use cases to simplify deployment, scaling, and management.

Short-term total cost of ownership (TCO)

When it comes to optimizing short-term TCO, there are several options available to CSPs. One such option is to work closely with vendors to reduce integration and deployment times and costs while simplifying ongoing operations. This approach can help CSPs leverage the expertise of vendors who specialize in the software and hardware components required for a disaggregated telco cloud. By working with skilled vendors, CSPs can reduce the risk of validating and integrating components themselves, which can lead to cost savings in the short term.

Another option that CSPs can consider is to adopt a phased approach to implementation. This involves deploying disaggregated telco cloud technologies in stages, starting with the most critical components and gradually expanding to include additional components over time. This approach can help to mitigate the initial costs associated with disaggregated telco cloud adoption while still realizing the benefits of increased flexibility, scalability, and cost efficiency.

CSPs can also take advantage of initiatives like Vodafone's new operating model for improving systems integrations with their supply base. This model aims to simplify the process of integrating components from multiple vendors by providing a standardized framework for testing and validation. By adopting frameworks like this, CSPs can reduce the time and costs associated with integrating components from multiple vendors, which can help to optimize short-term TCO.

Although implementing a disaggregated telco cloud can require increased investment in the short term, there are several options available to CSPs for optimizing short-term TCO. Whether it's working closely with trusted vendors, adopting a phased approach, or leveraging standardized frameworks, CSPs can take steps to reduce costs and maximize the benefits of a disaggregated telco cloud.

Your partners for simplifying telco cloud platform design, deployment, and operations 

Dell and Red Hat are leading experts in cloud-native technology used in building 5G networks and are working together to simplify their deployment and management for CSPs. Dell Telecom Infrastructure Blocks for Red Hat is a solution that combines Dell's hardware and software with Red Hat OpenShift, providing a pre-integrated and validated solution for deploying and managing 5G Core workloads. This offering enables CSPs to quickly launch and scale 5G networks to meet market demand for new services while minimizing the complexity and risk associated with deploying cloud-native infrastructure.

Next steps

In the next blog, we will dive deeper into the service-based architecture of the 5G Core and how it was developed to support cloud-native principles. To learn more about how Dell Technologies and Red Hat are partnering to simplify the deployment and management of a telco cloud platform built to support 5G Core workloads, see the ACG Research Industry Directions Brief: Extending the Value of Open Cloud Foundations to the 5G Network Core with Telecom Infrastructure Blocks for Red Hat.


Authored by:


Gaurav Gangwal

Senior Principal Engineer – Technical Marketing, Product Management

About the author:

Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an Engineering degree in Electronics and Telecommunications and has worked in the telecommunications industry for more than 14 years. He currently resides in Bangalore, India.



Kevin Gray

Senior Consultant, Product Marketing – Product Marketing

About the author:

Kevin Gray leads marketing for Dell Technologies Telecom Systems Business Foundations solutions. He has more than 25 years of experience in telecommunications and enterprise IT sectors. His most recent roles include leading marketing teams for Dell’s telecommunications, enterprise solutions and hybrid cloud businesses. He received his Bachelor of Science in Electrical Engineering from the University of Massachusetts in Amherst and his MBA from Bentley University.  He was born and raised in the Boston area and is a die-hard Boston sports fan.



Dell Technologies and Samsung collaborate to bring innovative Open RAN solutions to CSPs

Scott Heinlein Scott Heinlein

Mon, 27 Feb 2023 07:00:00 -0000

|

Read Time: 0 minutes

Open RAN promises to enable communication service providers (CSPs) with choice and flexibility by opening up the interfaces of the RAN system to enable multi-vendor solutions. However, opening up RAN interfaces creates integration challenges that must be solved. This process includes fully integrating and testing multi-vendor solutions, while gaining the CSP's trust that the solution will provide the reliability their customers have come to expect. Simplifying this process can help accelerate the adoption of Open RAN.

Dell Technologies and Samsung are collaborating to solve Open RAN challenges and support seamless multi-vendor solution integration. Samsung will work alongside Dell to integrate Samsung’s virtualized centralized unit (vCU) and distributed unit (vDU) software with Dell PowerEdge XR8000 and XR5610 servers, which are purpose-built for RAN environments and provide the performance and power consumption characteristics required in RAN deployments.

The companies will also offer a flexible model for joint customer engagements and deliver post-sales support to customers. 

“Network operators are on the journey of transforming to open technologies, but they need help validating and testing the various solutions for their networks,” said Andrew Vaz, vice president of product management, Dell Technologies Telecom Systems Business. “Together with Samsung, our aim is to provide validated, price performant RAN solutions that network operators can confidently deploy in their networks.”

“We constantly strive to deliver products and solutions that meet the exceptional standards of global network operators, keeping flexibility, reliability and performance top of mind,” said Wook Heo, vice president, head of business operation, networks business, Samsung Electronics. “We have a robust ecosystem of partners and we look forward to continue working together with Dell to drive innovation to the next level, helping operators scale their open and virtualized networks.”

Open RAN promotes multi-vendor technologies to give CSPs more choice and flexibility. Collaborations like Dell and Samsung will help the industry overcome current Open RAN integration challenges to propel Open RAN forward.

 


Dell and Nokia collaborate to accelerate cloud RAN adoption

Marco Castanheira Marco Castanheira

Mon, 27 Feb 2023 07:00:00 -0000

|

Read Time: 0 minutes

The adoption of cloud RAN architectures has been slowed by concerns about performance and implementation challenges. To address these issues, the industry has moved toward the development of hardware acceleration solutions and the deployment of denser compute platforms to address the cloud RAN cost-performance challenge. Pre-integration of network architecture and validation of use case requirements needs to be done in advance, so solutions are simple and efficient to deploy.

To accelerate cloud RAN adoption, Dell Technologies and Nokia have formed an agreement to integrate, interoperability test, and validate a solution combining Nokia 5G Cloud RAN software and Nokia Cloud RAN SmartNIC (Network Interface Card) in-line Layer 1 (L1) accelerator hardware with Dell open infrastructure, including Dell PowerEdge servers.

To achieve these results, Dell and Nokia will both deploy R&D and testing resources. Dell will utilize its Open Telecom Ecosystem Lab as the center for testing and validation, while Nokia will focus its work on its Open Cloud RAN E2E System Test Lab. The companies will be sharing engineering and R&D resources to jointly complete the scope of the collaboration and take the resulting solution to market.

Additionally, Dell Technologies and Nokia have achieved a Layer 3 end-to-end data call running Nokia Cloud RAN software and the Nokia Cloud RAN SmartNIC in-line L1 acceleration on Dell PowerEdge servers. Dell and Nokia will establish joint marketing and sales strategies and a mutual co-sell agreement to promote and deliver the resulting solution to prospective customers.

 “The combination of Nokia’s Cloud RAN software and Cloud RAN SmartNIC with Dell’s purpose-designed telecom open infrastructure and Dell PowerEdge servers unlocks more flexibility, choice and helps our customers choose competitive Cloud RAN solutions that prioritize performance and energy efficiency. By pooling our resources and expertise together we can create more compelling and integrated solutions that accelerate the adoption of open technologies and bring innovative solutions to the market faster. Nokia’s approach in collaborating with our best-in-class partners is delivering competitive advantage to organizations embracing Cloud RAN,” commented Tommi Uitto, president of Mobile Networks, Nokia.

 “It’s critical to collaborate with partners such as Nokia to enable telecom network disaggregation and accelerate the adoption of open network architectures,” said Dennis Hoffman, senior vice president and general manager, Dell Technologies Telecom Systems Business. “Integrating Nokia’s Cloud RAN software and Cloud RAN SmartNIC with Dell infrastructure that is purpose built for telecom networks, will give network operators another choice to realize the value of open technologies and quickly bring innovative and revenue generating solutions to market.”    

Collaborations like Dell and Nokia will help the industry adopt cloud-native solutions. Dell is eager to work alongside RAN leaders such as Nokia to bring open and innovative RAN solutions to the market.


Dell’s PowerEdge XR7620 for Telecom/Edge Compute

Mike Moore Mike Moore

Fri, 07 Jul 2023 15:03:04 -0000

|

Read Time: 0 minutes

The XR7620 is an Edge-optimized, short-depth, dual-socket server, purpose-built and compact, offering acceleration-focused solutions for the Edge. Similar to the other new PowerEdge XR servers reviewed in this blog series (the XR4000, XR8000, and XR5610), the XR7620 is a ruggedized design built to tolerate dusty environments, extreme temperatures, and humidity, and is NEBS Level 3, GR-3108 Class 1, and MIL-STD-810G certified.

Figure 1. PowerEdge XR7620 Server








XR7620 is intended to be a generational improvement over the previous PowerEdge XR2 and XE2420 servers, with similar base features and the newest components, including:

  • A CPU upgrade to the recently announced Intel 4th Generation Xeon Scalable processor, with up to 32 cores
  • 2x the memory bandwidth with the upgrade from DDR4 to DDR5
  • Higher-performance I/O capabilities with the upgrade from PCIe Gen 4 to PCIe Gen 5, with 5 x PCIe slots
  • Enhanced storage capabilities with up to 8 x NVMe drives, BOSS support, and HW-based NVMe RAID
  • Dense acceleration capabilities at the edge, where the XR7620 excels, with support for up to 2 x double-width (DW) accelerators at up to 300W each, or 4 x single-width (SW) accelerators at up to 150W each
  • A filtered bezel for work in dusty environments

Targeted workloads include Digital Manufacturing applications such as machine aggregation, VDI, AI inferencing, OT/IT translation, industrial automation, ROBO, and military applications where a rugged design is required. In the Retail vertical, the XR7620 is designed for applications such as warehouse operations, POS aggregation, inventory management, robotics, and AI inferencing.

For additional details on the XR7620’s performance, see the tech notes on the server’s machine learning (ML) capabilities.

The XR7620 shares the ruggedized design features of the previously reviewed XR servers, and its strength lies in its ability to bring dense acceleration capabilities to the Edge. But instead of repeating the same features and capabilities highlighted in previous blogs, I would like to discuss a few other PowerEdge features that have special significance at the Edge. These are in the areas of:

  • Security
  • Cooling
  • Management

Security

Security is a core tenet and the common foundation of the entire PowerEdge portfolio. Dell designs PowerEdge with security in mind at every phase of the server lifecycle, starting before the server build with a Secure Supply Chain, extending to the delivered servers with Secure Lifecycle Management and Silicon Root of Trust, and then securing what is created and stored by the server with Data Protection.

Figure 2. Dell's Cyber Resilient Architecture

This is a Zero Trust security approach that assumes least-privilege access permissions and requires validation at every access/implementation point, with features such as Identity and Access Management (IAM) and Multi-Factor Authentication (MFA).

Especially at the Edge, where servers are not typically deployed in a “lights out” environment, the ability to detect and respond to any tampering or intrusion is critical. Dell’s silicon-based platform Root of Trust creates a secure boot environment to ensure that firmware comes from a trusted, untampered source. PowerEdge can also lock down a system configuration, detect any changes in firmware versions or configuration, and, on detection, initiate a rollback to the last known good environment.

Cooling

Figure 3. Intelligent Cooling Designs

As covered in a previous blog, optimized thermal performance is critical in the design of resilient, ruggedized Edge servers.  The PowerEdge XR servers are designed with balanced, cooling-efficient airflow and comprehensive thermal management that optimize airflow while minimizing fan speeds and reducing server power consumption. XR servers have a cooling design that allows them to operate between -5°C and 55°C, and Dell engineers are currently working on solutions to extend that operational range even further.

All PowerEdge XR servers are designed with multiple dual counter-rotating fans (essentially two fans in one housing) and support for N+1 fan redundancy.   While fan failure is only “evaluated” for NEBS certification, to certify as a GR-3108 Class 1 device the server must continue to operate with a single fan failure, at a maximum of 40°C, for a minimum of four hours.

Management

All Dell PowerEdge servers share a common, three-tier approach to system management, in the form of the Integrated Dell Remote Access Controller (iDRAC), OpenManage Enterprise (OME), and CloudIQ.  These three tiers build upon Dell’s approach to system management: a unified, simple, automated, and secure solution.  This approach scales from the management of a single server at the iDRAC Baseboard Management Controller (BMC) console, to managing thousands of servers simultaneously with OME, to leveraging intelligent infrastructure insights and predictive analytics to maximize server productivity with CloudIQ.

Conclusion

The XR7620 is a valuable addition to the PowerEdge XR portfolio, providing dense compute, storage, and I/O capabilities in a short-depth, ruggedized form factor for environmentally challenging deployments.  But far and away, the XR7620’s best capability is a design that brings a dense GPU acceleration environment to the edge while continuing to meet the performance requirements of NEBS Level 3, a combination that has previously not been an option.

Dell’s focus on security, cooling, and management creates a solution that can be efficiently and confidently deployed and maintained in the challenging environment that is today’s Edge.

Author information

In closing out this blog series, I would like to thank you for taking your valuable time to review my thoughts on Design for the Edge. To continue these discussions, connect with me here:

Mike Moore, Telecom Solutions Marketing Consultant at Dell Technologies

LinkedIn | Twitter


Dell’s PowerEdge XR5610 for Telecom/Edge Compute

Mike Moore

Tue, 25 Apr 2023 16:58:19 -0000


In June 2021, Dell announced the PowerEdge XR11 server, Dell’s first design created for the requirements of Telecom Edge environments.   A 1U, short-depth, ruggedized, NEBS Level 3 compliant server, the XR11 has been successfully deployed in multiple O-RAN compliant commercial networks, including DISH Network and Vodafone.

Dell has followed the success of the XR11 with a generational improvement: the PowerEdge XR5610.

Like its predecessor, the XR5610 is a short-depth ruggedized, single socket, 1U monolithic server, purpose-built for the Edge and Telecom workloads.  Its rugged design also accommodates military and defense deployments, retail AI including video monitoring, IoT device aggregation, and PoS analytics.

Figure 1. PowerEdge XR5610 1U Server

Improvements to the XR5610 include:

  • A CPU upgrade to the recently announced 4th Generation Xeon Scalable processor, with up to 32 cores
  • Support for the new Intel vRAN Boost variant, which embeds a vRAN accelerator in the CPU
  • A doubling of memory bandwidth with the upgrade from DDR4 to DDR5
  • Higher-performance I/O capabilities with the upgrade from PCIe Gen4 to Gen5
  • Dry contact inputs, common in remote environments, to gain insight into edge enclosure conditions such as door-open alarms, moisture detection, and more
  • Support for multiple accelerators, such as GPUs and O-RAN L1 accelerators, and storage options including SAS, SATA, or NVMe

Topics where the XR5610 delivers at the Edge are:

  • Form factor and deployability
  • Environment and rugged design
  • Efficient power options

Form factor and deployability

The monolithic chassis design of the XR5610 is a traditional, short-depth form factor that fills certain deployment cases more efficiently than the XR8000.  This form factor will often be preferred for limited or single-server edge deployments, or for planned long-term installations with few anticipated upgrades.

Figure 2. Site support cabinet

The XR5610 is compatible with much of today’s Edge infrastructure.   These servers are designed with a short-depth, “400mm class” form factor, compatible with most existing Telecom site support cabinets, and offer flexible power supply options and dynamic power management to efficiently use limited resources at the edge.

This 400mm Class server fits well within the commonly deployed edge enclosure depths of 600mm.  With front maintenance capabilities, the XR5610 can be installed in Edge Cloud racks, and provide sufficient front clearance for power and network cabling, without creating a difficult-to-maintain cabling design or potentially one that obstructs airflow.

Environment and rugged design

While the XR5610 is designed to meet the environmental requirements of NEBS Level 3 and GR-3108 Class 1 for deployment into the Telecom Edge, Dell also wanted to create a platform with uses and applications outside the Telecom sector.  The PowerEdge XR5610 is also designed as a ruggedized compute platform for both military and maritime environments, and is tested to MIL-STD and maritime specifications, including shock, vibration, altitude, sand, and dust. This wider vision for the deployment potential of the XR5610 creates a computing platform that can exist comfortably in an O-RAN Edge Cloud environment without being restricted to Telecom-only use.

A smart filtered bezel option is also available so the XR5610 can work in dusty environments and send an alert when a filter should be replaced. This saves maintenance costs because technicians can be called out on an as-needed basis, and customers don’t have to be concerned with over-temperature alarms caused by prematurely clogged filters.

Efficient power options

The XR5610 supports two PSU slots that can accommodate multiple power capacities, with both 120/240 VAC and -48 VDC input power.

Dell has worked with our power supply vendors to create an efficient range of Power Supply Units (PSUs), from 800W to 1800W.  This allows the customer to select a PSU that most closely matches the input power available at the facility and the power draw of the server, reducing power wasted in the voltage conversion process.

Conclusion

The Dell PowerEdge XR servers, in particular the XR5610 and XR8000, are providing a new Infrastructure Hardware Foundation that allows Wireless Operators to transition away from traditional, purpose-built, classical BBU appliances, decoupling HW and SW to an open, virtualized RAN that gives operators the choice to create innovative, best-in-class solutions from a multi-vendor ecosystem.


Dell’s PowerEdge XR8000 for Telecom/Edge Compute

Mike Moore

Fri, 31 Mar 2023 17:38:53 -0000


The design goals of a Telecom-inspired Edge server are not only to complement existing installations, such as traditional Baseband Units (BBUs), all the way out to the cell site, but eventually to replace those purpose-built proprietary platforms with a cloud-based and open solution.  The new Dell Technologies PowerEdge XR8000 achieves this goal in terms of form factor, operations, and environmental specifications.

Figure 1. XR8000 2U Chassis

The XR8000 is composed of a 2U, short-depth, 400mm class chassis with options for 1U or 2U half-width, hot-swappable compute sleds, with up to four nodes per chassis.  The XR8000 supports three sled configurations designed for flexible deployments: 4 x 1U sleds, 2 x 1U and 1 x 2U sleds, or 2 x 2U sleds.

The chassis also supports two PSU slots that can accommodate up to five power capacities, with both 120/240 VAC and -48 VDC input power supported.

The 1U and 2U Compute Sleds are based on Intel’s 4th Generation Xeon Scalable processors, up to 32 cores, with support for both Sapphire Rapids SP and Edge Enhanced (EE) Intel® vRAN Boost processors.  Both sled types have 8 x RDIMM slots and support for 2 x M.2 NVMe boot devices with optional RAID1 support, 2 optional 25GbE LAN-on-Motherboard (LoM) ports, and 8 dry contact sensors through an RJ-45 connector.

The 1U Compute Sled adds support for one x16 FHHL (Full Height, Half Length) slot (PCIe Gen4 or Gen5).

Figure 2. XR8610t 1U Compute Sled

The 2U Compute Sled builds upon the foundation of the 1U Sled and adds support for an additional two x16 FHHL slots.

These 2 Sled configurations can create both dense compute and dense I/O configurations.  The 2U Sled also provides the ability to accommodate GPU-optimized workloads.

Figure 3. XR8620t 2U Compute Sled

This sledded architecture is designed for deployment into traditional Edge and Cell Site environments, complementing or replacing current hardware and allowing for the reuse of existing infrastructure.  Design features that make this platform ideal for Edge deployments include:

  • Improved Thermal Performance
  • Efficient maintenance operations
  • Reduced power cabling
  • Simplified generational upgrades

Let’s take a look at each one of these.

Improved thermal performance

The XR8000 is designed for NEBS Level 3 compliance, which specifies an operational temperature range of -5°C to +55°C.  However, creating a server that operates efficiently through this whole temperature range can require some “padding” on either side.  Dell has designed the XR8000 to operate both below -5°C and above +55°C, creating a server that operates comfortably and efficiently across the NEBS Level 3 range.

On the low side of the temperature scale, as discussed in the sixth blog in this series, commercial-grade components are typically not specified to operate below 0°C.   New to Dell PowerEdge design is the XR8000 sled pre-heater controller, which, on a cold start where the temperature is below -5°C, internally warms the server up to the specified starting temperature before applying power to the rest of the server.
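Conceptually, the pre-heat behavior is a gate in front of main power-on. The sketch below is purely illustrative (the actual XR8000 controller logic is not public); only the -5°C starting threshold comes from the text above:

```python
# Illustrative cold-start gate: hold off main power-on until the sled has
# been pre-heated to its rated starting temperature. Hypothetical sketch,
# not Dell's actual controller implementation.

START_THRESHOLD_C = -5.0  # NEBS Level 3 lower operating bound

def ready_to_power_on(internal_temp_c: float) -> bool:
    """True once the sled is warm enough to safely apply main power."""
    return internal_temp_c >= START_THRESHOLD_C

print(ready_to_power_on(-20.0))  # False: keep pre-heating
print(ready_to_power_on(-3.0))   # True: apply main power
```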

On the high side of the temperature scale, Dell is introducing new, advanced heat sink technologies to allow for extended operations above +55°C. Another advantage of this new class of heat sinks will be in power savings, as at more nominal operating temperatures the sled’s cooling fans will not have to spin at as high a rate to dissipate the equivalent amount of heat, consuming fewer watts per fan.
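The power saving from slower fans follows from the fan affinity laws, under which fan power scales roughly with the cube of rotational speed. A quick sketch, with illustrative numbers rather than measured XR8000 values:

```python
# Fan affinity law sketch: power scales roughly with the cube of speed,
# so modest reductions in fan speed yield large power savings.
# Values are illustrative, not measured XR8000 figures.

def relative_fan_power(speed_fraction: float) -> float:
    """Fan power as a fraction of full-speed power (P ~ N^3)."""
    return speed_fraction ** 3

# Running at 70% of full speed draws only about a third of full-speed power.
print(round(relative_fan_power(0.7), 2))  # 0.34
```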

Efficient maintenance operations

Figure 4. XR8000 Front View.   All front maintenance

Figure 5. XR8000 Rear View.  Nothing to see here

In many Cell Site deployments, access to the back of the server is not possible without pulling the entire server.  This is typical, for example, in dedicated Site Support Cabinets, with no rear access, or in Concrete Huts where racks of equipment are located close to the wall, allowing no rear aisle for maintenance.   

Maintenance procedures at a Cell Site are intended to be fast and simple.  The area where a Cell Site enclosure sits is not a controlled environment.  Sometimes there will be a roof over the enclosure to reduce solar load, but more often than not it is exposed to everything Mother Nature has to offer.  So FRU (Field Replaceable Unit) maintenance needs to be simple and fast, and to quickly bring the system back into full service.  For the XR8000, the two basic FRUs are the Compute Sleds and the PSUs.  Simple and fast procedures not only restore service more quickly, but the shorter maintenance cycle also allows more sites to be serviced by the same technicians, saving both time and money.

Reduced power cabling

Up to four compute sleds are supported in the XR8000 chassis, supplied by two 60mm PSUs.  A traditional rackmount equivalent would be either 4U of single-socket or 2U-4U of dual-socket servers.  Assuming redundant PSUs for each server, that means four to eight PSUs for equivalent compute capacity, and four to eight more power cables.   This consolidation of PSUs and cables not only reduces the cost of the installation due to fewer PSUs, but also reduces the cabling, clutter, and Power Distribution Unit (PDU) ports used in the installation.
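The consolidation can be counted directly; the sketch below simply restates the paragraph's assumptions (four nodes, a redundant PSU pair per traditional server, one power cable per PSU):

```python
# Counting PSUs and power cables for four compute nodes: four traditional
# servers with redundant PSUs versus one XR8000 chassis with two shared PSUs.
# Assumes one power cable per PSU.

traditional_servers = 4
psus_per_server = 2  # redundant pair
traditional_psus = traditional_servers * psus_per_server  # 8 PSUs, 8 cables

xr8000_psus = 2  # two chassis PSUs feed all four sleds

print(traditional_psus, xr8000_psus)  # 8 vs 2: six fewer PSUs and cables
```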

Simplified generational upgrades

With the release of Intel’s new 4th Generation Xeon Scalable processor, a server in 2023 can execute the equivalent workload of multiple servers from only 10 years ago.  It can be expected that not only will processor efficiency continue to improve, but greater capabilities and performance in peripherals, including GPUs, DPUs/IPUs, and application-specific accelerators, will continue this processing densification trend.  The XR8000 chassis is designed to accommodate multiple generations of future compute sleds, enabling fast and efficient upgrades while keeping any service disruptions to a minimum.

Conclusion

It is said that imitation is the sincerest form of flattery.  In this respect, our customers have asked, and Dell has delivered: the XR8000 is designed in a compact and efficient form factor with maintenance procedures similar to those found in the existing, deployed RAN infrastructure.

Building upon a classical BBU architecture, the XR8000 adopts an all-front maintenance approach with a 1U and 2U sledded design that makes server/PSU installation and upgrades quick and efficient.


Dell’s PowerEdge XR4000 for Telecom/Edge Compute

Mike Moore

Wed, 08 Feb 2023 18:04:19 -0000


Compute capabilities are increasingly migrating away from the centralized data center and being deployed closer to the end user—where data is created, processed, and analyzed in order to generate rapid insights and new value.

Dell Technologies is committed to building infrastructure that can withstand unpredictable and challenging deployment environments. In October 2022, Dell announced the PowerEdge XR4000, a high-performance server, based on the Intel® Xeon® D Processor that is designed and optimized for edge use cases.

Figure 1. Dell PowerEdge XR4000 “rackable” (left) and “stackable” (right) Edge Optimized Servers

The PowerEdge XR4000 is designed from the ground up with the specifications to withstand rugged and harsh deployment environments for multiple industry verticals. This includes a server/chassis designed to the foundational requirements of GR-63-CORE (including -5°C to +55°C operation) and GR-1089-CORE for NEBS Level 3 and GR-3108 Class 1 certification.  Designed beyond the NEBS requirements of Telecom, the XR4000 also meets MIL-STD specifications for defense applications, marine specifications for shipboard deployments, and environmental requirements for installations in the power industry.

The XR4000 marks a continuation of Dell Technologies’ commitment to creating platforms that can withstand the unpredictable and often challenging deployment environments encountered at the edge, as compute capabilities increasingly migrate away from the centralized data center and closer to the end user, at the network edge or on-premises.

Attention to a wide range of deployment environments creates a platform that can be reliably deployed from the Data Center to the Cell Site, to the Desktop, and anywhere in between.   Its rugged design makes the XR4000 an attractive option to deploy at the Industrial Edge, on the Manufacturing Floor, with the power and expandability to support a wide range of computing requirements, including AI/Analytics with bleeding-edge GPU-based acceleration.

The XR4000 is also an extremely short-depth platform, measuring only 342.5mm (13.48 inches) in depth, which makes it deployable in a wide variety of locations.  With a focus on deployments, the XR4000 supports not only EIA-310 compatible 19” rack mounting rails; the “stackable” version also supports common, industry-standard VESA/DIN rail mounts, with built-in latches that allow chassis to be mounted on top of each other, leveraging a single VESA/DIN mount.

Additionally, both chassis types offer an optional lockable, intelligent filtered bezel to prevent unwanted access to the sleds and PSUs, with filter monitoring that creates a system alert when the filter needs to be changed.  Blocking airborne contaminants, as discussed in a previous blog, is key to extending the life of a server by reducing contaminant build-up that can lead to reduced cooling performance, greater energy costs, corrosion, and outage-inducing shorts.

The modular design of the XR4000, along with the short-depth Compute Sled design creates an easily scalable solution.  Maintenance procedures are simplified with an all-front-facing, sled-based design.  

Conclusion

Specifying and deploying Edge compute very often involves selecting a server solution outside of the more traditional data center choices.  The XR4000 addresses the challenges of moving compute to the Edge with a compact, NEBS-compliant, and ruggedized approach, with sled-based servers, all-front access, reversible airflow, and flexible mounting options, to provide ease of maintenance and upgrades, reduce server downtime, and improve TCO.


To slice or not to slice

Arthur Gerona and Tomi Varonen

Wed, 25 Jan 2023 21:53:29 -0000


Network Slicing is possibly the most central feature of 5G – it has game-changing potential, but at the same time it is often overhyped and misunderstood. In this blog, we will give a fact-based assessment and guidance on the question of “To Slice Or Not To Slice.”

Guidance for the reader:

  • Entire blog – 7-minute read to understand the background and future of Network Slicing
  • From “Service differentiation starts in the RAN” – 4-minute read if only interested in the future of Network Slicing

The bar was set too high

5G doesn’t only promise to enhance mobile broadband but also to support a wide range of enterprise use cases – from those requiring better reliability and lower latency to those requiring a long battery life and greater device density. From the long list of 3GPP Release 15 features, Network Slicing is the cornerstone feature for service creation. The basic idea behind this feature is the ability to subdivide a single physical network into many logical networks where each is optimized to support the requirements of the services intended to run on it.

We can think of Network Slicing as ordering pizza for friends. 4G gets you the classic Margherita, which is acceptable to most. Yet some would be willing to pay more for extra toppings. In this case, 5G allows you to customize the pizza where half can still be the classic Margherita, but the remaining slices can be split into four cheese, pepperoni, and Hawaiian.

It all sounds great, but why are we not seeing Network Slicing everywhere today? Let us explore some of the hurdles it has to clear before becoming more mainstream.

Slicing requires new features to work – Network equipment providers need to develop these new features, especially in the Radio Access Network (RAN), and communications service providers need to implement them. This will take time, since much of the initial industry focus has been on enhanced mobile broadband and fixed wireless access, which are the initial monetizable 5G use cases.

Slicing needs automation to be practical – While it is possible to create network slices manually at the start, doing so at scale takes too long and costs too much. An entirely new 3GPP-defined management and orchestration layer is needed for slicing orchestration and service assurance. Business Support Systems (BSS) also need new integrations and feature enhancements to support capabilities like online service ordering and SLA-based charging.

Slicing has to make money – There will come a time when we cannot live without the metaverse and web 3.0, but that is not today. There will also come a time when factories will be run by collaborative robots and infrastructures are maintained by autonomous drones, but that is not today. The reality is that there is limited demand for custom slices since most consumer and enterprise use cases today work fine on 4G or 5G non-standalone networks. For example, YouTube and other over-the-top streaming apps implement algorithms to adapt to varying speeds and latency. Lastly, Network Slicing also comes with additional costs related to implementation, operations, and reduced overall capacity (due to resource reservation and prioritization) that must be factored into the business case.

Regulatory challenges – Net Neutrality is an essential topic in the United States and European Union. Misinterpreting differentiated services as something that violates Net Neutrality may put communications service providers under scrutiny by regulators.

It’s not all doom and gloom

5G standalone may have been slow out of the gate, but it is gaining momentum. In 2022, GSA counted 112 operators in 52 countries investing in public 5G standalone networks. Some communications service providers are even more advanced. For example, Singtel has already implemented Paragon, an orchestration platform that allows them to offer network slices and mobile edge computing for mission-critical applications on demand. Another example is Telia Finland which uses Network Slicing to guarantee the service level for its home broadband (fixed wireless access) subscribers.

There are also a lot of ongoing and planned projects that aim to accelerate the development of enterprise use cases. Collaborations such as ARENA2036, a research campus in Germany, allow communications service providers, network equipment manufacturers, independent software vendors, system integrators, and academia to work together in developing and testing new technologies and services.

Service differentiation starts in the RAN

One of the key reasons behind this positive momentum shift in 2022 is the major network equipment providers like Nokia and Ericsson bringing to the market their Network Slicing features for the RAN. These features enable the reservation and prioritization of RAN resources to particular slices. According to these vendors, Network Slice capacity management is done dynamically, which means the scarce air interface resources are allocated as efficiently as possible. This has been the much-needed catalyst for the first commercial launches and pre-commercial trials across several industries: fixed wireless access (live), video streaming (live), smart city, public safety, remote TV broadcast, assisted driving, enterprise interconnectivity, and mining.

Another positive development is related to smartphones where the biggest mobile operating system (Android OS) started supporting multiple Network Slices simultaneously on the same device (from Android 12). This is beneficial to both consumer and enterprise use cases that have more demanding requirements for speed and latency.

These enhancements on the RAN and devices close several gaps. We can therefore expect Network Slicing to gain even more traction in 2023.

Network Slicing versus Mobile Private Network

 Several hundred successful 4G and 5G Mobile Private Networks (MPN) have been deployed globally. Many have specific indoor coverage, cybersecurity, or business-critical performance requirements that can be best accomplished with dedicated network resources. The common challenges for MPN are private spectrum availability, high cost of deployment and operations, and long lead times.

A few 5G use cases can be deployed only through Network Slicing or only through MPN, but the majority can be deployed on either. In our view, the discussion should not focus too much on comparing Network Slicing to MPN but rather on the use case requirements, such as coverage, where Network Slicing is a natural fit for wide areas and MPN is a natural fit for deep indoor coverage. Communications service providers should have both solutions in their toolbox, as individual enterprise customers may require both for their various use cases. Let the use case dictate the solution, similar to the approach of most network equipment providers for private wireless (4G/5G versus WiFi 6/6E).

The Future of Network Slicing

In our view, the evidence from recently available slicing features and commercial/pre-commercial market deployments is clear enough to conclude that Network Slicing is here to stay, enabling new service creation and fostering competitive differentiation. Only time will tell how successful it will be in the consumer and enterprise market segments. The level of investment by governments, industry groups, communications service providers, and network equipment providers will play a major role in the success or failure of Network Slicing. At the same time, communications service providers should keep in mind other industry players, like AWS and other webscale companies, who are betting big on 5G with MPN-based solutions (as Network Slicing is not an option for them).

Communications service providers must understand that Network Slicing, in most situations, is not a sellable service, but rather an enabler to support services with performance or security requirements that are significantly different from mobile broadband. Differentiation for most of the use cases will be in the RAN domain since the air interface is a constrained resource and the RAN equipment is too costly to dedicate.

While there is no harm in having the management and orchestration layer from the start, especially if CAPEX is not an issue, it is still recommended to first focus on deploying the end-to-end network features Network Slicing requires and on identifying monetizable use cases that will benefit from it. Note that some use cases require additional features, such as those that lower latency and improve reliability.

The vast majority of consumer and enterprise end users are not interested in the underlying technologies, but rather just want to achieve the speed, latency, and reliability they need for the services they enjoy or need. And in many cases even discussions on speed, latency and reliability do not interest them as long as the services are performing as expected. Communications service providers should have the capability to create and market the services by themselves or, in most instances, with the right partners. Unlike 4G, the potential of 5G can no longer be realized just by the communications service providers and network equipment vendors.

Communications service providers should have a complete toolbox – different tools for different requirements. And the guidance is not to stand idly, but to gain experience and form partnerships for both Network Slicing and MPN.

Deploying Network Slicing or MPN, and moving into new business models that offer multiple tailored and assured connectivity services, are not trivial tasks. Here is how Dell can help CSPs in this transformation journey:

  • With its vast enterprise experience and solutions, as well as its Service Co-creation teams to co-develop services and go-to-market strategies.
  • MPN and Edge solutions: end-to-end, platform-centric solutions with connectivity and an application fabric based on a modular architectural design. Dell works with several industry-leading Independent Software Vendors (ISVs) to offer validated designs and ready-to-use platforms that are customizable and integrated by Dell's SI partners to meet the desired business outcomes.
  • Engineered and automated solutions of the underlying Cloud infrastructure. Open and cloud-native architecture is the industry-chosen platform for mobile networks. It has many benefits like better service agility and being vendor agnostic, but it also introduces more complexity. With Dell Technologies’ engineered and automated infrastructure blocks, communications service providers can focus on creating new services.



About the author: Tomi Varonen

Principal Global Enterprise Architect

        

Tomi Varonen is a Telecom Network Architect in Dell’s Telecom Systems Business Unit. He is based in Finland and working with the Cloud and Core Network customer cases in the EMEA region. Tomi has over 23 years of experience in the Telecom sector in various technical and sales positions.  He has wide expertise in end-to-end mobile networks and enjoys creating solutions for new technology areas. Tomi has a passion for various outdoor activities with family and friends including skiing, golf, and bicycling.

About the author: Arthur Gerona

Principal Global Enterprise Architect

       

Arthur is a Principal Global Enterprise Architect at Dell Technologies. He is working on the Telecom Cloud and Core area for the Asia Pacific and Japan region. He has 19 years of experience in Telecommunications, holding various roles in delivery, technical sales, product management, and field CTO. During his free time, Arthur likes to travel with his family.


Computing on the Edge–Other Design Considerations for the Edge: Part 2

Mike Moore

Thu, 02 Feb 2023 15:30:52 -0000


The previous blog discussed the physical aspects of a ruggedized chassis, including short depth and serviceability. The overarching theme being that of creating a durable server, in a form factor that can be deployed in a wide range of Telecom and Edge compute environments.

This blog will focus on the inside of the server, specifically design aspects that cover efficient and long-term, reliable server operations. This blog covers the following topics:

  • Optimal Thermal Performance
  • Power Efficiency
  • Contaminant Protections and Smart Bezel Design

Optimal thermal performance

Figure 1. Serial Heat Rise Example from Front to Back

Certainly, one of the greatest challenges of Edge Server Design is architecture and layout. It is extremely challenging to optimize airflow such that heat is efficiently dissipated over the entire operational temperature and humidity range.   

In an Edge server, there are the same compute, storage, memory, and networking demands as for a traditional data center server. However, designers have roughly 30 percent less real estate to work with—and even less space in some of the sledded server architectures, such as Dell’s new PowerEdge XR4000 server.

These design restrictions typically result in components being placed much closer together on the motherboard, concentrating heat generation in a smaller area.  Smart component placement (which keeps pre-heated air from passing over other sensitive components), advanced heat sinks, high-performance fans, and air channels that direct air through the server are all critical to creating server designs that can tolerate temperature extremes without developing excessive hotspots.

These designs are repeatedly simulated and optimized using a Computational Fluid Dynamics (CFD) application.  Hot spots are identified and mitigated until a design is created that maintains all active components within their specified operating temperatures over the entire operational range of the server. For example, for NEBS Level 3 this range is -5°C to +55°C, as discussed in the third blog of this series.

Bringing together these server performance requirements, thermal dissipation challenges, component selection, and effective airflow simulations, while involving considerable engineering and applied science, is very much an art form.  A well-designed server is remarkable not only in its performance but in the efficiency and elegance of its layout.  Perhaps that’s a little overboard, but I can’t help but admire an efficient server layout and consider all the design iterations, time, engineering efforts, and simulations that went into its creation.

Power efficiency

Figure 2. Example 80PLUS Platinum efficiency curve

Having high-efficiency Power Supply Unit (PSU) options that support multiple voltages (both AC and DC) and multiple PSU capacities allows for the optimal conversion of input power (110VAC, 220VAC, -48VDC) to server-consumable voltages (12VDC, 5VDC).

Power supplies operate most efficiently within a specific utilization range. PSUs are generally rated under a voluntary certification called 80PLUS, which grades power conversion efficiency; the minimum rating requires 80 percent conversion efficiency. The flip side of an 80 percent efficiency rating is that 20 percent of the input power is wasted as heat. The maximum PSU efficiency rating is currently around 96 percent, and, of course, the higher the efficiency, the higher the price of the PSU. With electricity costs increasing globally, properly dimensioning the PSU can result in significant TCO savings.

Ensuring that a server vendor has multiple PSU options that provide optimal PSU efficiencies, over the performance range of the server can save hundreds to thousands of dollars in inefficient power conversion over the lifetime of the server.  If you also consider that the power conversion loss represents generated heat, the potential savings in cooling costs are even greater. 
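To see why this matters over a server's lifetime, consider a rough back-of-the-envelope calculation. The load, service life, and electricity price below are illustrative assumptions, not figures from this blog:

```python
# Back-of-the-envelope sketch (illustrative numbers, not from the blog):
# compare lifetime conversion losses of two PSU efficiency ratings.

def conversion_loss_cost(load_w, efficiency, hours, price_per_kwh):
    """Cost of the power lost as heat in the PSU over `hours` of operation."""
    input_w = load_w / efficiency   # power drawn from the feed
    loss_w = input_w - load_w       # dissipated as heat in the PSU
    return loss_w * hours / 1000 * price_per_kwh

LOAD_W = 400          # assumed steady server load at the PSU output
HOURS = 5 * 365 * 24  # assumed five-year service life
PRICE = 0.15          # assumed USD per kWh

basic = conversion_loss_cost(LOAD_W, 0.80, HOURS, PRICE)     # 80% baseline
platinum = conversion_loss_cost(LOAD_W, 0.96, HOURS, PRICE)  # ~96% PSU

print(f"80% PSU loss cost:  ${basic:,.0f}")
print(f"96% PSU loss cost:  ${platinum:,.0f}")
print(f"Savings over 5 yrs: ${basic - platinum:,.0f}")
```

Even at this modest load, the difference lands in the hundreds of dollars per server, before counting the extra cooling cost of the wasted heat.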


Contaminant protection and Smart Bezel design

GR-63-CORE specifies three types of airborne contaminants that need to be addressed: particulate, organic vapors, and reactive gases. Organic vapors and reactive gases can lead to rapid corrosion, especially where copper or silver components are exposed in the server. With the density of server components on a motherboard increasing from generation to generation and the size of the components decreasing, corrosion becomes an increasingly complex issue to resolve.

Particulate contaminants, ranging from salt crystals on the fine side to common dust and metallic particles like zinc whiskers on the coarse side, can cause corrosion but can also result in leakage, eventual electrical arcing, and sudden failures. Dust build-up within a server reduces the efficiency of heat dissipation, and dust can absorb humidity, causing shorts and resulting failures.

Hybrid outdoor cabinet solutions may become more common as operators look toward reducing energy costs. These combine Air Ventilation (AV), Active Cooling (AC), and Heat Exchangers (HEX). Depending on the region, AV+AC (warmer climates) or AV+HEX (cooler climates) can efficiently evacuate heat from an enclosure, falling back on AC or HEX only when AV cannot sufficiently cool the cabinet. However, exposure to outside air brings a whole new set of design challenges and increases the risk of corrosion.

Conformal coating is one protection method that combats corrosive contaminants in hostile environments. This is a thin layer of non-conductive material applied to the electronics in a server that acts as a barrier to corrosive elements. The layer and the material used (typically an acrylic) are thin enough that application does not impede heat conduction, and conformal coatings can also help against dust build-up. This is not yet a common practice in servers, due to the cost and complexity of applying the coating to the multiple modules (motherboard, DIMMs, PCIe cards, and more) that compose a modern server. However, the tradeoff of coating a server compared to the savings of using AV may make the practice more common in the future.

Using a filtered bezel is a common option for dust. These filters block dust from entering the server, but the dust accumulates in the filter itself. Eventually, the accumulated dust reduces airflow through the server, which can cause components to run hotter or force the fans to spin at a higher rate, consuming more electricity.

Periodically replacing filters is critical, but how often and when? Smart Filter Bezels can be an effective answer to this question. These bezels notify operations when a filter needs to be swapped, saving the time spent on unnecessary periodic checks and enabling a rapid response before over-temperature alarms start arriving from the server.
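The underlying heuristic is simple enough to sketch: if fan duty cycle climbs while inlet air temperature stays roughly flat, restricted airflow (a clogging filter) is the likely cause rather than a hotter environment. The thresholds and readings below are invented for illustration and are not Dell's actual bezel logic:

```python
def filter_needs_service(samples, pwm_rise_pct=25.0, temp_rise_c=2.0):
    """Flag a clogged filter when fan duty cycle has risen significantly
    while inlet air temperature has stayed roughly constant.

    `samples` is a chronological list of (fan_pwm_percent, inlet_temp_c)
    readings, e.g. polled from the server's management controller.
    Thresholds here are illustrative, not from any Dell product.
    """
    (pwm_then, temp_then), (pwm_now, temp_now) = samples[0], samples[-1]
    pwm_rise = pwm_now - pwm_then
    temp_rise = temp_now - temp_then
    # Rising fans with a stable inlet point at restricted airflow,
    # not a hotter environment.
    return pwm_rise >= pwm_rise_pct and temp_rise <= temp_rise_c

clogging = [(40.0, 24.5), (52.0, 24.8), (68.0, 25.0)]  # fans ramp, air steady
hot_day  = [(40.0, 24.5), (55.0, 31.0), (70.0, 35.5)]  # fans ramp with inlet temp

print(filter_needs_service(clogging))  # True
print(filter_needs_service(hot_day))   # False
```

The second case shows why the inlet-temperature guard matters: on a hot day the fans also ramp, but swapping the filter would not help.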

Conclusion

The last two blogs in this series covered a few of the design aspects that should be considered when designing a compute solution for the edge that is powerful, compact, ruggedized, environmentally tolerant and power efficient. These designs need to be flexible, deployable into existing environments, often short-depth, and operate reliably with a minimum of physical maintenance for multiple years.


Accelerating the Journey towards Autonomous Telecom Networks

Saad Sheikh and Arthur Gerona

Fri, 06 Jan 2023 14:29:40 -0000

|

Read Time: 0 minutes

How Dell Technologies is supporting communications service providers accelerate automation


Communications service providers (CSPs) are on a journey of digital transformation that gives them the ability to offer new innovative services and a better customer experience in an open, agile, and cost-effective manner. Recent developments in 5G, Edge, and Radio Access Network disaggregation, and, most importantly, the pandemic, have all proven to be catalysts that accelerated this digital transformation. However, all these advancements in telecom come with their own set of challenges: new architectures and solutions have made the modern network considerably more complex and difficult to manage.

In response, CSPs are evaluating new ways of managing their complex networks using automation and artificial intelligence. The ability to fully orchestrate the operation of digital platforms is vital for touchless operations and consistent delivery of services. Almost every CSP is working on this today. However, the standard automation architecture and tools can't be directly applied by CSPs as all these solutions need to adhere to strict telecom requirements and specifications such as those defined by enhanced Telecom Operations Map (eTOM), Telecom Management Forum (TM Forum), European Telecommunications Standards Institute (ETSI), 3rd Generation Partnership Project (3GPP), etc.  CSPs also need to operate many telecom solutions including legacy physical network functions (PNF), virtual network functions (VNF), and the latest 5G era containerized network functions (CNF).   


Removing barriers with telecom automation

Although many CSPs have built cloud platforms, only a handful have achieved their automation targets. So, what do you do when there is no ready-made industry-standard automation solution? You build one. And that’s exactly what Dell Technologies did with the recent launch of its Dell Telecom Multi-Cloud Foundation. Dell Telecom Multi-Cloud Foundation automates the deployment and life-cycle management of the cloud platforms used in a telecom network to reduce operational costs while consistently meeting telco-grade SLAs. It also supports the leading cloud platforms offering operators the flexibility of choosing the platform that best meets their needs based on workload requirements and cost-to-serve. It streamlines telecom cloud design, deployment, and management with integrated hardware, software, and support.


The solution includes Dell Telecom Infrastructure Blocks. Telecom Infrastructure Blocks are engineered systems that provide foundational building blocks that include all the hardware, software and licenses to build and scale out cloud infrastructure for a defined telecom use case. 

Telecom Infrastructure Block releases will be delivered in an agile manner with multiple releases per year to simplify lifecycle management. In 2023, Dell Telecom Infrastructure Blocks will support workloads for Radio Access Network and Core network functions with:

  • Dell Telecom Infrastructure Blocks for Wind River, which will support vRAN and Open RAN workloads.
  • Dell Telecom Infrastructure Blocks for Red Hat, which will target core network workloads (planned).

The primary goal of Telecom Multi-Cloud Foundation with Telecom Infrastructure Blocks is to deliver telco cloud platforms that are engineered for scaled deployments, providing three core capabilities:

 

  • Integration: All components of the platform, including compute, storage, networking, ancillaries like accelerators, cloud CaaS software, and management tools, are integrated in Dell's factories.
  • Validation: A solution engineered and validated by our cloud partners and already proven to work in the field. The engineering and validation process includes detailed test cases across both functional and non-functional aspects of the platform.
  • Automation: A solution that is fully automated and can seamlessly integrate with a telco's existing orchestration and inventory systems.


Dell Technologies Telecom Multi-Cloud Foundation meets Telco automation requirements

 

Dell Technologies Multi-Cloud Foundation provides communications service providers with a platform-centric solution based on open Application Programming Interfaces (APIs) and consistent tools. This means the platform can deliver outcomes based on a unique use case and workload and then scale out deployments using an API-based approach.

Dell Telecom Multi-Cloud Foundation enables telco-grade automation through the following key capabilities:


  • An open API and workflow approach: All the capabilities of the platform are available as declarative APIs so there is no need to manage each infrastructure component independently, rather open APIs and workflows are triggered via northbound orchestration systems. This capability not only automates deployment but also Day 2 operations and life-cycle management.
  • Scalable architecture: The automation architecture is fully distributed and federated, so it can scale to hundreds of thousands of sites.
  • Data-Driven architecture: The automation architecture is data-driven and distributed so data can be tapped from edge and regional sites enabling real-time use cases and data-driven automation.
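The declarative pattern described in these bullets can be made concrete with a small sketch: the operator submits desired state, and a reconciler computes the actions needed, rather than imperatively scripting each infrastructure component. All names and fields below are hypothetical illustrations, not Dell's actual API:

```python
# Hypothetical sketch of declarative, API-driven infrastructure:
# submit desired state; a reconciler derives the actions. Site names,
# profiles, and fields are invented and are not Dell's actual API.

desired = {"site-001": {"profile": "vran", "servers": 3},
           "site-002": {"profile": "core", "servers": 6}}

observed = {"site-001": {"profile": "vran", "servers": 2}}

def reconcile(desired, observed):
    """Return the actions needed to move observed state to desired state."""
    actions = []
    for site, spec in desired.items():
        have = observed.get(site)
        if have is None:
            actions.append(("deploy", site, spec))   # site not yet built
        elif have != spec:
            actions.append(("update", site, spec))   # drift from desired
    return actions

for action in reconcile(desired, observed):
    print(action)
```

The point of the pattern is that the same desired-state document drives Day 1 deployment and Day 2 drift correction, which is what lets a northbound orchestrator trigger everything through one API surface.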


Automation use cases with Dell Technologies Telecom Multi-Cloud Foundation

Telecom Automation is not just about Day 0 (design) and Day 1 (deployment) but should also cover Day 2 (operations and lifecycle management). Dell Telecom Multi-Cloud Foundation supports the following use cases:

 

  • Automated deployment: It includes a fully automated deployment of the cloud infrastructure based on customer specifications.
  • O-Cloud as Code: It employs declarative automation using infrastructure data, which includes site data, networking, resources, and credentials to automate tasks independent of the workflow. This de-coupling is crucial to orchestrate the platform.  

 

  • Operational fulfillment: Integration with Wind River Studio Conductor delivers a full set of operational tools that provide a single management and observation platform for the operations team. This helps create a unified layer for Network Operations Center (NOC) teams to monitor and manage the platform.
  • Staging: The platform is staged in Dell’s factory to reduce the time spent deploying and configuring the system on-site and can be tuned in the field using the built-in automation to meet any unique operator specifications. 

 

Dell Technologies developed Dell Telecom Multi-Cloud Foundation and Dell Telecom Infrastructure Blocks to accelerate 5G cloud infrastructure transformation. Our current release of Telecom Infrastructure Blocks for Wind River delivers an engineered and factory-integrated system that comes with a fully automated deployment model for CSPs looking to build resilient and high-performance RAN.

 

To learn more about our solution, please visit the Dell Telecom Multi-Cloud Foundation solutions site.


About the Author: Saad Sheikh

Saad Sheikh is APJ's Lead Systems Architect in Telecom Systems Business at Dell Technologies. In his current role, he is responsible for driving Telecom Cloud, Automation, and NGOPS transformations in APJ supporting partners, NEPs, and customers to accelerate Network Transformation for 5G, Open RAN, Core, and Edge using Dell’s products and capabilities. He is an industry leader with over 20 years of experience in Telco industry holding roles in Telco, System Integrators, Consulting businesses, and with Telecom vendors where he has worked on E2E Telecoms systems (RAN, Transport, Core, Networks), Cloud platforms, Automation, Orchestration, and Intelligent Networking. As part of Dell CTO team, he represents Dell in Linux Foundation, TMforum, GSMA, and TIP.


Computing on the Edge: Other Design Considerations for the Edge – Part 1

Mike Moore

Fri, 13 Jan 2023 19:46:50 -0000

|

Read Time: 0 minutes

In past blogs, the requirements for NEBS Level 3 certification were addressed, along with the even higher demands that come with Outside Plant (OSP) installation requirements. Now, additional design factors need to be weighed to create a hardware solution that will not only survive the environment at the edge but also provide a platform that can be effectively deployed there.

Ruggedized Chassis Design

The first design consideration that we’ll cover for an Edge Server is the Ruggedized Chassis.  This is certainly a chassis that can stand up to the demands of Seismic Zone 4 testing and can also withstand impacts, drops, and vibration, right? 

Not necessarily.

While earthquakes are violent, demanding, but relatively short-duration events, the shock and vibration profile can differ significantly when the server is taken out from under the cell tower. We are talking beyond the base of the tower, to edge environments that might be encountered in Private Wireless or Multi-Access Edge Compute (MEC) deployments. Some vibration and shock impacts are tested in GR-63-CORE, under the test criteria for Transportation and Packaging, but ruggedized designs need to go beyond this level of testing.

Figure 1. Portable Edge Compute Platforms

Consider, for example, the need for ruggedized servers in mining or military environments, where compute setups can be more temporary in nature and often involve portable cases, such as Pelican cases. These cases are subject to environmental stresses and can require ruggedized rails and upgraded mounting brackets on the chassis for those rails. For longer-lasting deployments, enclosures can be less than ideal, requiring everything demanded of a GR-3108 Class 2 device and perhaps some additional considerations.

Dell Technologies also tests our ruggedized (XR-series) servers to MIL-STD-810 and Marine testing specifications. In general, MIL-STD-810 temperature requirements are aligned with GR-63-CORE on the high side but test operationally down to -57°C (-70°F) on the low side, reflecting some extreme parts of the world where the military is expected to operate. MIL-STD-810 also covers land, sea, and air deployments, which means the non-operational (shipping) criteria are much more in-depth, as are acceleration, shock, and vibration. The criteria include scenarios such as crash survivability, where the server can be exposed to up to 40 Gs of acceleration. Of course, this tests not only the server but also the enclosure and mounting rails used in testing.

So why have I detoured into MIL-STD and Marine testing? For one, the extreme "dynamic" testing requirements not seen in NEBS are interesting in their own right. Second, creating a server that can survive MIL-STD and Marine environments is complementary to NEBS and results in an even more durable product with applications beyond the cellular network.

Server Form Factor

Figure 2. Typical Short-Depth Cell Site Enclosure

Another key factor in chassis design for the edge is the form factor. This involves understanding the physical deployment scenarios and legacy environments, leading to a server form factor that can be installed in existing enclosures without the need for major infrastructure improvements. For servers, 19-inch rackmount or 2-post mounting is common, with 1U or 2U heights. But the key driver in chassis design for compatibility with legacy telecom environments is short depth.

Server depth is not covered by NEBS; instead, supplemental documentation created by the telecoms, and typically reflected in RFPs, defines the depth required for installation into legacy environments. For instance, AT&T's Network Equipment Power, Grounding, Environmental, and Physical Design Requirements document states that "newer technology" deployed to a 2-post rack, which certainly applies to deployments like vRAN and MEC, "shall not" exceed 24 inches (609 mm) in depth. This disqualifies most traditional rackmount servers.

The key is deployment flexibility. Edge compute should be mountable anywhere and adapt to the constraints of the deployment environment. For instance, in a space-constrained location, front-access maintenance is a necessary design requirement; often these servers are installed close to a wall or mounted in a cabinet with no rear access. In addition, supporting reversible airflow allows the server to adapt to whatever cooling infrastructure (if any) is already installed.

Conclusion

While NEBS requirements focus on Environmental and Electrical Testing, ultimately the design needs to consider the target deployment environment and meet the installation requirements of the targeted edge locations.  


How Dell Telecom Infrastructure Blocks are Simplifying 5G RAN Cloud Transformation

Gaurav Gangwal

Thu, 08 Dec 2022 20:01:48 -0000

|

Read Time: 0 minutes

5G is a technology that is transforming industry, society, and how we communicate and live in ways we've yet to imagine, and Communication Service Providers (CSPs) are at the heart of this transformation. Although 5G builds on existing 4G infrastructure, 5G networks deployed at scale will require a complete redesign of communication infrastructure. This transformation is underway: more than 220 operators in more than 85 countries have already launched services, and they have realized that operational agility and an accelerated deployment model are must-haves in such a decentralized, cloud-native landscape to meet customer demands for innovative capabilities, services, and digital experiences across both telecom and vertical industries. This is accompanied by the promise of cloud-native architectures and open, flexible deployments, which enable CSPs to scale and adopt new data-driven architectures in an open ecosystem. While the initial deployments of 5G are based on the Virtualized Radio Access Network (vRAN), which offers CSPs enhanced operational efficiency and the flexibility to fulfill the needs of 5G customers, Open RAN expands on vRAN's design concepts and goals and is widely considered the future. O-RAN disaggregates the network, giving operators more flexibility in how their networks are built along with the benefits of interoperability. The trade-off for that flexibility is typically increased operational complexity, which incurs the additional costs of continuously testing, validating, and integrating 5G RAN system components that are now provided by a diverse set of suppliers.

Another aspect of this growing complexity is the need for denser networks. Although powerful, new 5G antennas and RAN gear required to attain maximum bandwidth cover substantially less distance than 4G macro cells operating at lower frequencies. This means similar coverage requires more 5G hardware and supporting software. Adding the essential gear for 5G networks can dramatically raise operational costs, but the hardware is only a portion of these costs. The expenses of maintaining a network include the time and money spent on configuration changes, testing, monitoring, repairs, and upgrades.

For most nationwide operators, Edge and RAN cell sites are widely deployed and geographically dispersed across the nation. As network densification increases, it becomes impractical to manually onboard thousands of servers across multiple sites. CSPs need a strategy for incorporating greater automation into their network and continuing service operations, to ensure robust connectivity, manage expanding network complexity, and preserve cost efficiencies without the need for a complete "rip and replace".

As CSPs migrate to an edge-computing architecture, a new set of requirements emerges. As workloads move closer to the network's edge, CSPs must still maintain ultra-high availability, often five to six nines. Legacy technology is incapable of attaining this degree of availability.
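As a quick reference for what "five to six nines" means in practice, the snippet below converts an availability target into the downtime it permits per year:

```python
# Translate "N nines" of availability into allowed downtime per year.
def downtime_per_year(nines: int) -> float:
    """Allowed downtime in minutes per year for a given number of nines."""
    availability = 1 - 10 ** -nines        # e.g. 5 nines -> 0.99999
    return (1 - availability) * 365 * 24 * 60

for n in (3, 5, 6):
    print(f"{n} nines -> {downtime_per_year(n):8.2f} min/year")
```

Five nines allows only about five minutes of downtime per year, and six nines barely half a minute, which is why live upgrades and autonomous local control planes matter so much at the edge.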

Scalability matters too, specifically the ability to scale down to a single node with a small footprint at the edge. When a single network reaches tens of thousands of cell sites, you simply cannot afford a significant physical footprint with many servers; hence the need for a new architecture that can scale both up and down. As applications become more real-time, ultra-low latency at the edge is required. CSPs need built-in lifecycle management to perform live software upgrades and manage this environment. Finally, CSPs are demanding more and more open-source software for their networks. Wind River Studio addresses each of these network issues.

Wind River Studio Cloud Platform, which is the StarlingX project with commercial support, provides a production-grade distributed Kubernetes solution for managing edge cloud infrastructure. In addition to the Kubernetes-based Wind River Studio Cloud Platform, Studio also provides orchestration (Wind River Studio Conductor) and analytics (Wind River Studio Analytics) capabilities so operators can deploy and manage their intelligent 5G edge networks globally.

Mobile Network Operators who adopt vRAN and Open RAN must integrate cloud platform software on optimized and tuned hardware to create a cloud platform for vRAN and Open RAN applications. Dell and Wind River have worked together to create a fully engineered, pre-integrated solution designed to streamline 5G vRAN and Open RAN design, deployment, and lifecycle management.  Dell Telecom Infrastructure Blocks for Wind River integrate Dell Bare Metal Orchestrator (BMO) and Wind River Studio on Dell PowerEdge servers to provide factory-integrated building blocks for deploying ultra-low latency, vRAN and Open RAN networks with centralized, zero-touch provisioning and management capabilities.


Key Advantages:

  • Reduces the complexity of integration and lifecycle management in a highly distributed, disaggregated network, lowering operating costs, reducing the time to deploy new services, and accelerating innovation.
  • Dell's comprehensive, factory-integrated solution simplifies supply chain management by reducing the number of components and suppliers needed to build out the network. It also optimizes back-haul by preloading all software needed for day-0 and day-1 automation.
  • It has been thoroughly tested and includes design guidance for building and scaling out a network that provides low latency, redundancy, and High Availability (HA) for carrier-grade RAN Applications.
  • Simplified support with Dell providing single contact support for the whole stack including all hardware and software from Dell and Wind River with an option for carrier-grade support. 
  • Reduces the total cost of ownership (TCO) for CSPs by deploying a fully integrated, validated, and production-ready vRAN/O-RAN cloud infrastructure solution with a smaller footprint, low latency, and operational simplicity.



Wind River Studio Cloud Platform Architecture

Wind River Studio Cloud Platform in its Distributed Cloud configuration supports an edge computing solution by providing central management and orchestration for a geographically distributed network of cloud platform systems, with easy installation and support for complete Zero Touch Provisioning of the entire cloud, from the Central Region to all the sub-clouds.

The architecture features a synchronized distributed control plane for reduced latency, with an autonomous control plane such that all sub-cloud local services remain operational even during a loss of northbound connectivity to the Central Region (where the distributed cloud system controller cluster is located). This is important because Studio Cloud Platform can scale horizontally or vertically, independent of the main cloud in the regional data center (RDC) or national data center (NDC).



Cell sites, or sub-clouds, are geographically dispersed edge sites of varying sizes. Dell Telecom Infrastructure Blocks for Wind River cell site installations can be All-in-One Simplex (AIO-SX), AIO Duplex (AIO-DX), or AIO-DX plus workers. For a typical AIO-SX deployment, at least one server is needed in a sub-cloud. Sub-clouds are set up at remote worker sites managed by Bare Metal Orchestrator.

  • AIO-SX (All-In-One Simplex): A complete hyper-converged cloud with no HA. An ultra-low cloud platform overhead of 2 physical cores, 64 GB of memory, and 500 GB of disk is required to run the cloud, while the rest of the CPU cores, memory, and disk are used for the applications.
  • AIO-DX (All-In-One Duplex): The same as AIO-SX except that it runs on 2 servers to provide High Availability (HA) of up to six nines.
  • AIO-DX + Workers: Two controller nodes plus a set of worker nodes (starting small and growing as workload demands increase).
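The platform overhead quoted above translates directly into how much of a node is left for workloads. A small sizing sketch; the server totals are assumed example values, not a Dell spec:

```python
# Sketch: usable application capacity on an AIO-SX node after the
# platform overhead quoted above (2 cores, 64 GB RAM, 500 GB disk).
# The server totals below are assumed example values, not a Dell spec.

PLATFORM = {"cores": 2, "ram_gb": 64, "disk_gb": 500}

def app_capacity(total):
    """Subtract the cloud-platform overhead from a server's totals."""
    return {k: total[k] - PLATFORM[k] for k in PLATFORM}

server = {"cores": 32, "ram_gb": 256, "disk_gb": 1920}  # hypothetical node
print(app_capacity(server))
```

This kind of arithmetic is what makes single-node AIO-SX viable at a cell site: nearly all of the node remains available to the RAN workload.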

The Central Site at the RDC is deployed as a standard cluster across three Dell PowerEdge R750 servers, two of which are controller nodes and one of which is a worker node. The Central Site, also known as the system controller, provides orchestration and synchronization services for up to 1,000 distributed sub-clouds, or cell sites. The various controllers in the system are Controller-0, Controller-1, and Worker-0 through Worker-n; to implement AIO-DX, both Controller-0 and Controller-1 are required.

Wind River Studio Conductor runs in the National Data Center (NDC) as an orchestrator and infrastructure automation manager. It integrates with Dell's Bare Metal Orchestrator (BMO) to provide complete end-to-end automation for the full hardware and software stack. Additionally, it provides a centralized point of control for managing and automating application deployment in an environment that is large-scale and distributed.

Studio Conductor receives information from Bare Metal Orchestrator as new cell sites come online. Studio Conductor instructs the system controller (CaaS manager) to install, bootstrap, and deploy Studio Cloud Platform at the cell sites. It supports blueprint modeling based on TOSCA (Topology and Orchestration Specification for Cloud Applications); blueprints are policies that enable orchestration modeling. Studio Conductor uses blueprints to map services to all distributed clouds and determine the right place to deploy. It also includes built-in secret storage to securely store password keys internally, reducing threat opportunities.
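A TOSCA blueprint is a declarative document describing topology and properties. The sketch below models that shape as a plain Python dict and shows the kind of lookup an orchestrator might perform before placing a sub-cloud; the node type and property names are invented for illustration and are not taken from Wind River's actual profiles:

```python
# Minimal illustration of the structure of a TOSCA-style blueprint,
# modeled as a Python dict. The node type and property names are
# invented for this sketch, not taken from Wind River's profiles.
blueprint = {
    "tosca_definitions_version": "tosca_simple_yaml_1_3",
    "topology_template": {
        "node_templates": {
            "cell_site_cloud": {
                "type": "example.nodes.SubCloud",   # hypothetical type
                "properties": {
                    "deployment": "aio-sx",
                    "oam_subnet": "10.10.20.0/24",
                },
            }
        }
    },
}

def deployment_model(bp):
    """Pull the deployment model out of the blueprint, as an
    orchestrator might before choosing where to place the sub-cloud."""
    node = bp["topology_template"]["node_templates"]["cell_site_cloud"]
    return node["properties"]["deployment"]

print(deployment_model(blueprint))  # aio-sx
```

Because the blueprint is data rather than a script, the same document can be validated, versioned, and mapped onto many distributed clouds, which is what makes blueprint-driven placement practical at 1,000-site scale.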

Studio Conductor can adapt and integrate with existing orchestration solutions. The plug-in architecture allows it to accommodate new and old technologies, so it can easily be extended to accommodate evolving requirements.

Wind River Studio Analytics is an integrated data collection, monitoring, analysis, and reporting tool used to optimize distributed network operations. Studio Analytics specifically solves a unique use case for the distributed edge: it provides visibility and operational insights into the Studio Cloud Platform from a Kubernetes and application workload perspective. Studio Analytics has a built-in alerting system with the ability to integrate with several third-party monitoring systems. It uses technology from Elastic.co as a foundation to take data reliably and securely from any source and format, then search, analyze, and visualize it in real time. Studio Analytics also uses Elastic's Kibana product as an open user interface to visually display the data in a dashboard.

Dell Telecom Multi-Cloud Foundation with Telecom Infrastructure Blocks provides a validated, automated, and factory-integrated engineered system that paves the way from zero-touch deployment of 5G telco cloud infrastructure to the operation and lifecycle management of vRAN and Open RAN sites, all of which contributes to a high-performing network that lessens the cost, time, complexity, and risk of deploying and maintaining a telco cloud for the delivery of 5G services.

To learn more about our solution, please visit the Dell Telecom Multi-Cloud Foundation solutions site.


Authored by: 
Gaurav Gangwal
Senior Principal Engineer – Technical Marketing, Product Management

About the author:

Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an engineering degree in electronics and telecommunications and has worked in the telecommunications industry for about 14 years. He currently resides in Bangalore, India.


 


Computing on the Edge: Outdoor Deployment Classes

Mike Moore

Fri, 02 Dec 2022 20:21:29 -0000

|

Read Time: 0 minutes

Ultimately, all the testing involved with GR-63-CORE and GR-1089-CORE is intended to qualify hardware designs with the environmental, electrical, and safety qualities that allow for installations from the Central Office all the way out to the Cell Site. For deployments at the Cell Site, it turns out that NEBS Level 3 is really only the start: the minimum environmental threshold for a controlled environment at the cell site.

This is where GR-3108-CORE comes into scope. GR-3108-CORE, Generic Requirements for Network Equipment in Outside Plant (OSP), defines the environmental tolerances for equipment deployed throughout a telecom network, from the Central Office, up the tower at the Cell Site, and out to the customer premises.

Figure 1. GR-3108-CORE Equipment Classes

The four Classes of equipment defined in GR-3108-CORE are:

Class 1: Equipment in Controlled or Protected Environments 

Class 2: Protected Equipment in Outside Environments

Class 3: Protected Equipment in Severe Outside Environments

Class 4: Products in Unprotected Environment directly exposed to the weather



The primary drivers of these classes include:

  • Thermal, including Cold Start and Hot Start
  • Temperature and Humidity Cycling
  • Salt Fog Exposure
  • Closure and Housing Requirements

Class 1: Equipment in Controlled or Protected Environments

Figure 2. Typical OSP Enclosure and Concrete Hut

The OSP for Class 1 enclosures includes Controlled Environmental Vaults, huts and cabinets with active heating and cooling, and telecom closets in in-building, on-building, or residential locations. The requirements for Class 1 installations are very much in line with NEBS Level 3 specifications, the recurring theme for these enclosures being that there is some active means of environmental control. The methods of maintaining a controlled environment are not specified, but the method used must maintain operating temperatures between -5°C (23°F) and 50°C (122°F) and humidity levels between 5 and 85 percent.

Other expectations for Class 1 enclosures include performing initial, cold, or hot startup throughout the entire temperature range and continued operation if a single fan failure occurs, but with a lower upper-temperature threshold.

Class 2: Protected Equipment in Outside Environments

Figure 3. Example Class 2 Protected Enclosures

The internal enclosures or spaces of a Class 2 OSP have an extended temperature range of -40°C (-40°F) to 65°C (149°F), with humidity levels the same as for Class 1. Typically, while these OSPs continue to protect the hardware from the outside elements, environmental controls are less capable and often involve the use of cooling fans, heat exchangers, and raised fins to dissipate heat. Besides outdoor enclosures, Class 2 environments can also include customer premises locations such as garages, attics, or uncontrolled warehouses.

For hardware designers creating Carrier Grade Servers, this is where it’s particularly important to pay attention to the components being used when targeting a Class 2 deployment environment. Many manufacturers provide specifications for the maximum temperature at which an IC component will operate. Typically, the maximum temperature is a die temperature, and the method of heat evacuation is left to the HW designers, in the form of heat sinks, fans for airflow, and other measures.

However, for those attempting compliance with Class 2, the lower temperature range also becomes important because many ICs are not tested for operation below 0°C. IC temperature grades generally come in Commercial (0°C/32°F to 70°C/158°F), Industrial (-40°C/-40°F to 85°C/185°F), Military (-55°C/-67°F to 125°C/257°F), and Automotive grades.

So the specs for Commercial grade ICs may not accommodate the requirements of even a Class 1 OSP. What do designers do? Sometimes you’ll see an asterisk on the server spec sheet, indicating that the device can run at the lower temperature range but not start there. In this case, the design can start at 0°C and provide sufficient heat to keep the ICs warm down through -5°C (23°F).

Designers may also consider including some pre-heater or enclosure heater to bring the device up to 0°C before a startup is allowed or incur the added expense of extended temperature parts.
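The gated-start behavior described above can be sketched in a few lines. This is a hypothetical illustration, not production firmware: `read_temp_c` and `enable_heater` stand in for whatever sensor and pre-heater interfaces the platform actually provides.

```python
import time

# GR-3108-CORE Class 1 lower operating limit and the typical lower limit
# of commercial-grade ICs (both in degrees C). Commercial parts are often
# rated only down to 0C, so startup is gated there and a pre-heater
# closes the gap down to -5C.
CLASS1_MIN_OPERATING_C = -5.0
COMMERCIAL_IC_MIN_START_C = 0.0

def wait_for_safe_start(read_temp_c, enable_heater, poll_s=1.0, max_polls=600):
    """Block startup until the enclosure is warm enough for commercial ICs.

    read_temp_c() and enable_heater(bool) are hypothetical callables for
    the platform's temperature sensor and pre-heater control.
    """
    for _ in range(max_polls):
        temp = read_temp_c()
        if temp >= COMMERCIAL_IC_MIN_START_C:
            enable_heater(False)   # warm enough: heater off, allow boot
            return True
        if temp >= CLASS1_MIN_OPERATING_C:
            enable_heater(True)    # within Class 1 range: pre-heat first
        time.sleep(poll_s)
    return False                   # never reached a safe start temperature
```

The extended-temperature-parts alternative trades this control logic (and the heater hardware) for more expensive components.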

Class 3: Protected Equipment in Severe Outside Environments

Severe is certainly the theme for Class 3 OSPs. In these environments, while an enclosure protects the device from direct sunlight and rain, it may not be sealed from other outside stresses like hot, cold, and humidity extremes, dust and other airborne contaminants, salt fog, and so on. Temperatures range from -40°C (-40°F) to 70°C (158°F) and humidity levels from 5% to 95%, with a reduced 65°C upper limit under single fan failure. Indoor hostile environments, such as boiler rooms, furnace spaces, and attics, also exist that would require Class 3 designed solutions.

Figure 4. Protected Severe Cabinet
Class 4: Products in Unprotected Environment directly exposed to the weather

Figure 5. Class 4 Radio Units

This class of equipment is intended for outdoor deployments, with full exposure to sun, rain, wind, and all the environmental challenges found, for example, at the top of a Cell Tower. For Telecom, Class 4 certification would typically be the domain of Antennas and Remote Radio Heads. These units, mounted on towers, buildings, street lamps, and other places, are fully exposed to the entire spectrum of environmental challenges. Class 4 devices get a bit of a break on the upper temperature limit, -40°C (-40°F) to 46°C (115°F), because direct sunlight adds to the thermal load, but they must tolerate up to 100% humidity due to exposure to rain.
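The four operating envelopes can be collected into a small lookup table for a quick first-pass site check. This is an illustrative sketch based only on the ranges quoted in this post; the 5% lower humidity bound for Class 4 is an assumption, and the full GR-3108-CORE criteria (cold/hot start, salt fog, temperature and humidity cycling, closures) are not modeled.

```python
# Steady-state operating envelopes for the four GR-3108-CORE classes as
# summarized in this post: (min C, max C) and (min %, max %) relative
# humidity. The 5% lower humidity bound for Class 4 is an assumption.
OSP_CLASSES = {
    1: {"temp": (-5, 50),  "humidity": (5, 85)},
    2: {"temp": (-40, 65), "humidity": (5, 85)},
    3: {"temp": (-40, 70), "humidity": (5, 95)},
    4: {"temp": (-40, 46), "humidity": (5, 100)},
}

def covers(osp_class, site_min_c, site_max_c, site_max_rh):
    """True if a device certified to this class tolerates the site's
    expected temperature and humidity extremes (steady-state only)."""
    t_lo, t_hi = OSP_CLASSES[osp_class]["temp"]
    _, h_hi = OSP_CLASSES[osp_class]["humidity"]
    return t_lo <= site_min_c and site_max_c <= t_hi and site_max_rh <= h_hi
```

For example, a Class 1 (NEBS Level 3 equivalent) server covers a climate-controlled hut, but an unconditioned cabinet that can swing from -40°C to 60°C at 90% humidity needs at least a Class 3 design.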

Conclusion

For Carrier Grade Servers, Class 1 (NEBS Level 3 equivalent) is the most common target for designers creating compute, storage, and networking platforms for Telecom consumption. Class 2 servers are also achievable, and demand for them may increase as Edge Computing and O-RAN/Cloud RAN deployments become more common. Moving beyond Class 2 will require specialty, more purpose-defined designs.

  • edge
  • NEBS

Computing on the Edge: NEBS Criteria Levels

Mike Moore Mike Moore

Tue, 15 Nov 2022 14:43:44 -0000

|

Read Time: 0 minutes

In our previous blogs, we’ve explored the types of tests involved to successfully pass the criteria of GR-63-CORE, Physical Protection, and GR-1089-CORE, Electromagnetic Compatibility and Electrical Safety. The goal of successfully completing these tests is to create Carrier Grade, NEBS compliant equipment. However, outside of highlighting the set of documents that compose NEBS, nothing has been mentioned of the NEBS levels and the requirements to achieve each level. NEBS levels are defined in Special Report SR-3580.

Figure 1. NEBS Certification Levels

So, what NEBS level do equipment manufacturers aim to achieve? NEBS Level 3 compliance is expected in most Telecom environments outside of a traditional data center.

At first, I created Figure 1 as a pyramid, not inverted, with Level 1 as the base and Level 3 as the peak. However, I reorganized the graphic because Level 1 isn’t really a foundation; it is a minimum acceptable level. Let’s dive into what is required to achieve each NEBS certification level.

NEBS Level 1

NEBS Level 1 is the lowest level of NEBS certification. It provides the minimum level of environmental hardening and stresses safety criteria to minimize hazards for installation and maintenance personnel.

This level is the minimum acceptable level of NEBS environmental compatibility required to preclude hazards and degradation of the network facility and hazards to personnel.

This level includes the following tests:

  • Fire resistance
  • Radiated radiofrequency (RF)
  • Electrical safety
  • Bonding or grounding

Level 1 criteria do not assess Temperature/Humidity, Seismic, ESD, or Corrosion.

Operability, enhanced resilience, and environmental tolerances are assessed in Levels 2 and 3.

NEBS Level 2


Figure 2. Map of Seismic Potential in the US

NEBS Level 2 assesses some environmental criteria, but the target deployment is a “normal” environment, such as data center installations where temperature and humidity are well controlled. These environments typically experience limited impacts from EMI, ESD, and EFTs, and have some protection from lightning, surges, and power faults. There is also some Seismic Testing performed on the EUT, but only to Zone 2. While there is no direct correlation between seismic zones and earthquake intensity, in the United States Zone 2 generally covers the Rocky Mountains, much of the West, and parts of the Southeast and Northeast regions.

NEBS Level 2 certification may be sufficient for some Central Office (CO) installations but is not sufficient for deployment to Far Edge or Cell Site Enclosures which can be exposed to environmental and electromagnetic extremes, or in regions covered by seismic zones 3 or 4.

NEBS Level 3

Figure 3. Level 3 criteria

NEBS Level 3 certification is the highest level of NEBS Certification and is the level expected by most North American telecom and network providers when specifying equipment requirements for installation into controlled environments.

Level 3 is required to provide maximum assurance of equipment operability within the network facility environment. 

Level 3 criteria are also suited for equipment applications that demand minimal service interruptions over the equipment’s life.

Full NEBS Level 3 certification can take from three to six months to complete. This includes prepping and delivering the hardware to the lab, test scheduling, performance, analysis of test results, and the production of the final report. If a failure occurs, systems can be redesigned for retesting.  

Conclusion

While NEBS Level 3 certification describes environmental, electrical, electromagnetic, and safety specifications, it is only the minimum required for deployment into a controlled telecom network environment; these specifications are only the beginning for outdoor deployments. The next blog in this series will explore more of these specifications, such as GR-3108-CORE, Generic Requirements for Network Equipment in Outside Plant (OSP). Stay tuned.


  • Wind River

Accelerate Telecom Cloud Deployments with Dell Telecom Infrastructure Blocks

Gaurav Gangwal Gaurav Gangwal

Mon, 31 Oct 2022 16:48:10 -0000

|

Read Time: 0 minutes

During MWC Vegas, Dell Technologies announced Dell’s first Telecom Infrastructure Blocks, co-engineered with our partner Wind River, to help communication service providers (CSPs) reduce complexity, accelerate network deployments, and simplify life cycle management of 5G network infrastructure. The first use cases will focus on infrastructure for virtual Radio Access Network (vRAN) and Open RAN workloads.

Deploying and supporting open, virtualized, and cloud-native 5G RANs is one of the key requirements to accelerate 5G adoption. The number of options available in 5G RAN design makes it imperative that infrastructures supporting them are flexible, fully automated for distributed operations, and maximally efficient in terms of power, cost, the resources they consume, and the performance they deliver.

Dell Telecom Infrastructure Blocks for Wind River are designed and fully engineered to provide a turnkey experience, with fully integrated hardware and software stacks from Dell and Wind River that are RAN workload-ready and aligned with workload requirements. This means the engineered system, once delivered, will be ready for RAN network functions onboarding through a simple and standard workflow, avoiding the integration and lifecycle management complexities normally expected from a fully disaggregated network deployment.

The Dell Telecom Infrastructure Blocks for Wind River are a part of the Dell Technologies Multi-Cloud Foundation, a telecom cloud designed specifically to assist CSPs in providing network services on a large scale by lowering the cost, time, complexity, and risk of deploying and maintaining a distributed telco-cloud. Dell Telecom Infrastructure Blocks for Wind River consist of:

  • Dell hardware that has been validated and optimized for RAN
  • Dell Bare Metal Orchestrator and a Bare Metal Orchestrator Module (a combination of a Bare Metal Orchestrator plug-in and a Wind River Conductor integration plug-in)
  • Wind River Studio, which is comprised of:
    • Wind River Conductor
    • Wind River Cloud Platform
    • Wind River Analytics


How do Dell Telecom Infrastructure Blocks for Wind River make infrastructure design, delivery, and lifecycle management of a telecom cloud better and easier?

From technology onboarding to Day 2+ operations for CSPs, Dell Telecom Infrastructure Blocks streamline the processes for technology acquisition, design, and management. We have broken down these processes into four stages. Let us examine how Dell Telecom Infrastructure Blocks for Wind River can impact each stage of this journey.

Stage 1: Technology onboarding | Faster Time to Market

The first stage is the Technology onboarding, where Dell Technologies works with Wind River in Dell’s Solution Engineering Lab to develop the engineered system. Together we design, validate, build, and run a broad range of test cases to create an optimized engineered system for 5G RAN vCU/vDU and Telecom Multi-Cloud Foundation Management clusters. During this stage, we conduct extensive solution testing with Wind River performing more than 650 test cases. This includes validating functionality, interoperability, security, scalability, high availability, and test cases specific to the workload’s infrastructure requirements to ensure this system operates flawlessly across a range of scale and performance points.  

We also launched our OTEL Lab (Open Telecom Ecosystem Lab) to allow telecom ecosystem suppliers (ISVs) to integrate or certify their workload applications on Dell infrastructure including Telecom Infrastructure Blocks. Customers and partners working in OTEL can fine-tune the Infrastructure Block to a given CSP’s needs, marrying the efficiency of Infrastructure Block development with the nuances presented in meeting a CSP’s specific requirements.   

Continuous improvement in the design of Infrastructure Blocks is enabled by ongoing feedback on the process throughout the life of the solution which can further streamline the design, validation, and certification. This extensive process produces an engineered system that streamlines the operator’s reference architecture design, benchmarking, proof of concept, and end-to-end validation processes to reduce engineering costs and accelerate the onboarding of new technology.

All hardware and software required for this Engineered system are integrated in Dell’s factory and sold and supported as a single system to simplify procurement, reduce configuration time, and streamline product support.

This "shift left" in the design, development, validation, and integration of the stacks means readiness testing and integration are finished sooner in the development cycle than they would have been with more traditional and segregated development and test processes. For CSPs, this method speeds up time to value by reducing the time needed to prepare and validate a new solution for deployment.

Now we go from Technology onboarding to the second phase, pre-production.

Stage 2: Pre-production | Accelerated onboarding

From Dell’s Solution Engineering Labs, the engineered system moves into the CSP’s pre-production environment where the golden configuration is defined. Rather than receiving a collection of disaggregated components (infrastructure, cloud stacks, automation, and so on), CSPs start with a factory-integrated, engineered system that can be quickly deployed in their pre-production test lab. At this stage, customers leverage best practices, design guidance, and lessons learned to create a fully validated stack for their workload. The next step is to pre-stage the Telco Cloud stack, including the workload, and start preparing for Day 1 and Day 2 by integrating with the customer CI/CD pipeline and defining and agreeing on the life-cycle management process to support the first office application deployment.

Stage 3: Production | Automation enables faster deployment

Advancing the flow, deployment into production is accelerated by:

  1. Factory integration that reduces procurement, installation, and integration time on-site.
  2. Embedded automation that reduces time spent configuring hardware or software. This includes validating configurations and streamlining processes with Customer Information Questionnaires (CIQs). CIQs are YAML files that list credentials, management networks, storage details, physical locations, and other relevant data needed to set the telco cloud stack at different physical locations for CSPs.
  3. Streamlining support with a unified single call carrier-grade support model for the full cloud stack.  
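As a rough illustration of how a CIQ can be sanity-checked before deployment, the sketch below validates a parsed CIQ represented as a plain Python dict standing in for the loaded YAML. The field names and rules here are hypothetical, for illustration only, and do not reflect the actual Dell CIQ schema.

```python
# Hypothetical CIQ contents after YAML parsing. The field names and
# rules are illustrative only, not the actual Dell CIQ schema.
REQUIRED_FIELDS = {"site_name", "physical_location",
                   "management_network", "storage", "credentials"}

def validate_ciq(ciq):
    """Return a list of problems found in a parsed CIQ; empty means valid."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - ciq.keys())]
    vlan = ciq.get("management_network", {}).get("vlan")
    if vlan is not None and not 1 <= vlan <= 4094:
        problems.append("management_network.vlan out of 802.1Q range")
    return problems

example = {
    "site_name": "cell-site-042",
    "physical_location": "Austin, TX",
    "management_network": {"subnet": "10.0.10.0/24", "vlan": 110},
    "storage": {"pool": "local-ssd"},
    "credentials": {"bmc_user": "admin"},
}
```

Validating every site's questionnaire this way before the automation consumes it is one way such tooling catches configuration errors early rather than at deployment time.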

Automating deployment eliminates manual configuration errors to accelerate product delivery. Should the CSP need assistance with deployment, Dell's professional services team is standing by to assist. Dell provides on-site services to rack, stack, and integrate servers into their network.

Stage 4: Day 2+ Operations | Performance, lifecycle management, and support

Day 2+ operations are simplified in several ways. First, the automation provided, combined with the extensive validation testing Dell and Wind River perform, ensures a consistent, telco-grade deployment or upgrade each time. This streamlines daily fault, configuration, performance, and security management in the fully distributed cloud. In addition, Dell Bare Metal Orchestrator automates the detection of configuration drift and its remediation, and Wind River Studio Analytics utilizes machine learning to proactively detect issues before they become a problem.

Second, Dell’s Solutions Engineering lab validates all new feature enhancements to the software and hardware, including updates, upgrades, bug fixes, and security patches. Once we have updated the engineered system, we push it via Dell’s CI/CD pipeline to the Dell factory and OTEL Lab. We can also push the update to the CSP's CI/CD pipeline using integrations set up by Dell Services to reduce the testing our customers perform in their labs.

We complement all this by providing unified, single-call support for the entire cloud stack with options for carrier-grade SLAs for service response and restoration times.   


Proprietary appliance-based networks are being replaced by best-of-breed, multivendor cloud networks as CSPs adapt their network designs for 5G RAN. As CSPs adopt disaggregated, cloud-native architectures, Dell Technologies is ready to lend a helping hand. With Dell Telecom Multi-Cloud Foundation, we provide an automated, validated, and continuously integrated foundation for deploying and managing disaggregated, cloud-native telecom networks. 


Ready to talk?  Request a callback.
To learn more about our solution, please visit the Dell Telecom Multi-Cloud Foundation solutions site.


Authored by: 
Gaurav Gangwal
Senior Principal Engineer – Technical Marketing, Product Management

About the author:

Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an engineering degree in electronics and telecommunications and has worked in the telecommunications industry for about 14 years. He currently resides in Bangalore, India.


 


 


Telecom Innovations: Breaking Down the Barriers to DevSecOps

Saad Sheikh Saad Sheikh

Fri, 02 Sep 2022 15:16:44 -0000

|

Read Time: 0 minutes

DevOps, the fusion of software development with IT operations, has been a best practice among development and IT teams for quite some time. More recently, the need to integrate security within the DevOps process has made DevSecOps the new gold standard for software development and operations. This may seem like a great idea on paper, but what happens when the developers, security architects, and network ops teams are not part of the same company? Telecom networks are typically developed by multiple suppliers.

In many cases, telecom software is developed by external vendors in a walled fashion where Communication Service Providers (CSPs) have little visibility into the development process. 

The need to adhere to strict telecom standards and models such as the Enhanced Telecom Operations Map (eTOM) and European Telecommunications Standards Institute (ETSI) also compounds the complexity of DevSecOps in telecom. The third barrier is managing a single DevSecOps pipeline while juggling multiple generations of network equipment and configurations.

Removing barriers with open telecommunications

What happens when there is no unified environment to support DevSecOps processes? You build one. That’s what Dell Technologies did with the recent launch of its Open Telecom Ecosystem Lab (OTEL). With OTEL, telecom operators and software and technology partners can work together using an end-to-end systems approach that spans seamlessly across vendor, lab, staging, and production environments. 


OTEL provides everything that CSPs and vendors need to support DevSecOps processes with the new Solutions Integration Platform (SIP) including:

  • Continuous integration across environments
  • Continuous deployment of all new software releases in a controlled manner
  • Continuous testing to ensure that updates/changes are mostly (80+ percent) automated
  • A closed-loop system where pipeline decisions are driven by real-time data insights 

A holistic approach to integration, deployment, and testing

In the last few years, there has been a big push to incorporate continuous integration/deployment (CI/CD) pipelines in the telecommunications industry. This push has been met with resistance because of the following challenges: 

  • Walled software development
  • Multi-generation network technology
  • Stringent requirements around performance, reliability, and security

Telecom operators’ enterprise customers also have limited involvement in software development despite a deep interest in the functionality and outcomes of that software. For the operators, becoming a part of the software development process can mean getting services to market sooner with a finished product that meets the needs of end users.

One of the primary goals of OTEL is to deliver telecom innovation as a platform, providing three core capabilities: 

  • Integrated software development: Although telecom software vendors will ultimately define and control this process, OTEL offers them a unified packaging template and test specifications that can be shared easily across CSP and partner ecosystems.
  • Lab and staging environment: Once the software is validated and security-hardened, it can be deployed in the OTEL lab and pre-deployment environments to identify and fix potential issues before deployment in the production network.
  • Replicated pre/production environment: OTEL can replicate the production environment to ensure seamless integration between all components.

Addressing the telco security challenges 

Telecom Networks are critical infrastructure and have unique security requirements driven by service needs and SLAs, strong regulations and geographical laws, and cyber and data privacy concerns. For 5G and cloud solutions, which involve many vendors, it is important to build a zero trust security architecture that can be validated and tested in an automated, CI/CD-driven approach. It is also important to enable security mechanisms that can automate security tests across each layer of the network. These include:

  • Telecom network layer security
  • Service layer security 
  • End point security
  • Data platforms and closed-loop automation

Integrating both the functional and non-functional requirements of telecom networks, including security, reliability, and performance, is the unique challenge Dell is addressing through its state-of-the-art OTEL. By reducing the complexity of telecom software development and ensuring better integration and collaboration, OTEL is giving CSPs and their partners the agility and security they need to deliver the next generation of 5G and edge solutions.

To learn more about OTEL and how you can take advantage of OTEL’s state-of-the-art lab environment, contact Dell at Open Telecom Ecosystem Labs (OTEL).

Author information

Saad Sheikh is the APJ Lead Systems Architect for Orchestration and NextGen Ops in Dell’s Telecom Systems Business (TSB). In this role he is responsible for supporting partners, NEPs, and customers in simplifying and accelerating network transformation to open and disaggregated infrastructures and solutions (5G, edge computing, core, and cloud platforms) using Dell’s products and capabilities, which are based on multi-cloud, data-driven, ML/AI-supported, and open ways to build next-generation operational capabilities. In addition, as part of the Dell CTO team, he represents Dell in the Linux Foundation, TM Forum, GSMA, ETSI, ONAP, and TIP. He has more than 20 years of industry experience across telcos, system integrators, consulting businesses, and telecom vendors, where he has worked on E2E Telecom systems (RAN, Transport, Core, Networks), cloud platforms, automation and orchestration, and intelligent networking.


  • containers
  • telecom

Bandwidth Guarantees for Telecom Services using SR-IOV and Containers

John Williams John Williams

Mon, 12 Dec 2022 19:14:38 -0000

|

Read Time: 0 minutes

With the emergence of Container-native Virtualization (CNV), or the ability to run and manage virtual machines alongside container workloads, Single Root I/O Virtualization (SR-IOV) takes on an important role in the Communications Industry. Most telecom services require guarantees of capacity, such as the number of simultaneous TCP connections, concurrent voice calls, or other similar metrics. Each telecom service capacity requirement can be translated into the amount of upload/download data that must be handled and the maximum amount of time that can pass before a service is deemed non-operational. These bounds of data and time must be met end-to-end as a telecom service is delivered. SR-IOV technology plays a crucial role in meeting these requirements.

With SR-IOV available to workloads and VMs, Telecom customers can divide the bandwidth provided by a physical PCIe device (NIC) into virtual functions (VFs), or virtual NICs. This allows virtual NICs with dedicated bandwidth to be assigned to individual workloads or VMs, ensuring SLAs can be fulfilled.

In the illustration above, say we have a 100Gb NIC that is shared among workloads and VMs on a single hardware server. The bandwidth on a single interface is typically shared among the workloads and VMs, as shown for interface 1. If one workload or VM is extremely bandwidth hungry, it could consume a large portion of the bandwidth, say 50%, leaving the other workloads or VMs to share the remaining 50%, which could impact the SLAs the Telco customer has under contract.

To ensure this doesn’t happen, the SR-IOV specification allows the PCIe NIC to be sliced into virtual NICs, or VFs, as shown with interface 2 above. By slicing the NIC interface into VFs, one can specify the bandwidth per VF. For example, 30Gb of bandwidth could be specified for each of VF1 and VF2 for the workloads, while VF3–5 could divide the remaining bandwidth evenly, or perhaps be given only 5Gb each, leaving 25Gb for future VMs or workloads. By specifying the bandwidth at the VF level, Telco companies can guarantee bandwidth for workloads or VMs, thus meeting the SLAs with their customers.
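The bookkeeping behind such a split is simple to sketch. The helper below only checks the arithmetic of a proposed allocation; on Linux hosts the actual per-VF cap is typically applied with `ip link set <pf> vf <n> max_tx_rate <rate>`, or handled for you by an orchestration layer.

```python
def allocate_vfs(nic_capacity_gbps, requested_gbps):
    """Check a proposed per-VF bandwidth split against the physical NIC
    and return the headroom left for future VFs; raise if the requests
    oversubscribe the physical function."""
    total = sum(requested_gbps)
    if total > nic_capacity_gbps:
        raise ValueError(
            f"oversubscribed: {total} > {nic_capacity_gbps} Gbps requested")
    return nic_capacity_gbps - total

# The split described above: two 30Gb workload VFs plus three 5Gb VFs
# on a 100Gb physical NIC, leaving headroom for future VMs or workloads.
headroom = allocate_vfs(100, [30, 30, 5, 5, 5])
```

Rejecting an oversubscribed split up front is what lets the per-VF limits function as guarantees rather than best-effort shares.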

While this high-level description illustrates the mechanics of the two aspects, SR-IOV for workloads and SR-IOV for VMs, Dell Technologies has a white paper, SR-IOV Enablement for Container Pods in OpenShift 4.3 Ready Stack, which provides the step-by-step details for enabling this technology.
