Dell Technologies industry experts post their thoughts about our Communication Service Providers Solutions.
Cloud-native or Bust: Telco Cloud Platforms and 5G Core Migration
Wed, 24 May 2023 12:45:00 -0000
As 5G network rollouts accelerate, communication service providers (CSPs) around the world are shifting away from purpose-built, vertically integrated solutions in favor of open, disaggregated, and cloud-native architectures running containerized network functions. This allows them to take advantage of modern DevSecOps practices and an emerging ecosystem of telecom hardware and software suppliers delivering cloud-native solutions based on open APIs, open-source software, and industry-standard hardware to boost innovation, streamline network operations, and reduce costs.
To take advantage of the benefits of cloud-native architectures, many CSPs are moving their 5G Core network functions onto commercially available cloud-native application platforms like Red Hat OpenShift, the industry's leading enterprise Kubernetes platform. However, building an open, disaggregated telco cloud for 5G Core is not easy, and it comes with its own set of challenges that must be tackled before large-scale deployments.
In a disaggregated network, system integration and support tasks become the CSP's responsibility. To achieve their objectives for 5G, they must:
Accelerate the introduction and management of new technologies by simplifying and streamlining processes from Day 0 network design and integration tasks, to Day 1 deployment, and Day 2 lifecycle management and operations.
Break down digital silos to deploy a horizontal cloud platform that reduces CapEx and OpEx while lowering power consumption.
Deploy architectures and technologies that consistently meet strict telecom service level agreements (SLAs).
This five-part blog series addresses the challenges of deploying 5G Core network functions on a telco cloud.
In this first blog, we will highlight CSPs’ key challenges as they migrate to an open, disaggregated, and cloud-native ecosystem;
The next blog will explore the 3GPP 5G Core network architecture and its components;
The third blog in the series will discuss how Dell Technologies is working with Red Hat to streamline operator processes from initial technology onboarding through Day 2 operations when deploying a telco cloud to support core network functions;
The fourth blog will focus on how Dell Telecom Infrastructure Blocks for Red Hat can help CSPs move to a horizontal cloud platform to reduce costs and power consumption;
The final blog in the series will discuss how Dell is integrating Intel technology that consistently meets CSP SLAs for 5G Core network functions.
Cloud native architectures offer the potential to achieve superior performance, agility, flexibility, and scalability, resulting in easily updated, scaled, and maintained Core network functions with improved network performance and lower operational costs. Nevertheless, operating 5G Core network functions on a telco cloud can be difficult due to new challenges operators face in integration, deployment, lifecycle management, and developing and maintaining the right skill sets.
Open multi-vendor cloud-native architectures require the CSP to take on more ownership of design, integration, validation, and management of many complex components, such as compute, storage, networking hardware, the virtualization software, and the 5G Core workload that runs on top. This increases the complexity of deployment and lifecycle management processes while requiring investment in development of new skill sets.
5G Core deployment on a telco cloud platform can be a complex process that requires integrating multiple systems and components into a unified whole, with automated deployment from the hardware up through the Core network functions. This complexity creates the need for automation that not only streamlines processes but also ensures a consistent deployment or upgrade each time, aligned with established configuration best practices. Many CSPs lack deployment experience with automation and cloud-native tools, making this a difficult task.
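To illustrate the kind of consistency checking such automation performs, here is a minimal Python sketch. All configuration keys and values are hypothetical illustrations, not Dell or Red Hat APIs; it compares a declared desired state against the observed state of a node and blocks a rollout on drift:

```python
# Hypothetical sketch: detect configuration drift between the desired
# (declarative) state and the observed state of a cluster node before
# allowing an automated deployment or upgrade to proceed.

def find_drift(desired: dict, observed: dict) -> dict:
    """Return the keys whose observed values differ from the desired ones."""
    drift = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"desired": want, "observed": have}
    return drift

desired_state = {
    "bios_version": "2.19.0",
    "nic_firmware": "22.31.6",
    "kernel_hugepages": "1G",
}
observed_state = {
    "bios_version": "2.19.0",
    "nic_firmware": "22.00.4",   # out-of-date firmware = drift
    "kernel_hugepages": "1G",
}

drift = find_drift(desired_state, observed_state)
if drift:
    print(f"Blocking rollout, drift detected: {sorted(drift)}")
```

The same check runs identically on every node, which is what makes each deployment or upgrade repeatable rather than hand-tuned.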
The size and complexity of the 5G Core can make lifecycle management and orchestration challenging. Every component update starts a new validation cycle and increases the risk of introducing security vulnerabilities and configuration issues into the environment.
Managing a telco cloud requires a different set of skills and expertise than operating traditional network environments. CSPs often need to hire additional staff and invest in cloud-native training and development to gain the skills and experience to put cloud-native principles into practice as they build, deploy, and manage cloud-native applications and services.
In recent years, many CSPs embarked on a journey away from vertically integrated, proprietary appliances to virtualized network functions (VNFs). One of the goals when adopting network functions virtualization was to obtain greater freedom in selecting hardware and software components from multiple suppliers, making services more cost-effective and scalable. However, CSPs often experienced difficulties in designing, integrating, and testing their individual stacks, resulting in higher integration costs, interoperability issues, and regression-testing delays that led to less efficient operations.
Despite efforts to move to virtualized network functions, silos of vertically integrated cloud deployments can emerge when virtual network function suppliers define their own cloud stack to simplify meeting the requirements for each workload. These vertical silos prevent CSPs from pooling resources, which reduces infrastructure utilization rates and increases power consumption. They also increase the complexity of lifecycle management, because each layer of the stack in each silo must be revalidated whenever a component of the stack changes.
Vertically integrated 5G Core stack on a telco cloud
CSPs are now looking to implement a horizontal platform that can provide a common cloud infrastructure to help break down these silos to lower costs, reduce power consumption, improve operational efficiency, and minimize complexity, allowing CSPs to adopt cloud-native infrastructure from the core to the radio access network (RAN).
Horizontal platform for 5G telco workloads
Creating and managing a geographically dispersed telco cloud based on a broad range of suppliers while consistently adhering to CSP SLAs takes significant effort, resources, and time, and can introduce new complications and risks. To meet these SLAs and accelerate the introduction of new technologies, CSPs will need a novel approach when working with vendors that reduces integration and deployment times and costs while simplifying ongoing operations. This will include developing a tighter relationship with their supply base to offload integration tasks while maintaining the flexibility provided by an open telecom ecosystem. As an example, Vodafone recently published a paper outlining its vision for a new operating model to improve systems integration with its supply base to help achieve these objectives. It would also include following a proven path in enterprise IT by adopting engineered systems, similar to the converged and hyperconverged systems used by IT today, that have been optimized for telecom use cases to simplify deployment, scaling, and management.
When it comes to optimizing short-term TCO, there are several options available to CSPs. One such option is to work closely with vendors to reduce integration and deployment times and costs while simplifying ongoing operations. This approach can help CSPs leverage the expertise of vendors who specialize in the software and hardware components required for a disaggregated telco cloud. By working with skilled vendors, CSPs can reduce the risk of validating and integrating components themselves, which can lead to cost savings in the short term.
Another option that CSPs can consider is to adopt a phased approach to implementation. This involves deploying disaggregated telco cloud technologies in stages, starting with the most critical components and gradually expanding to include additional components over time. This approach can help to mitigate the initial costs associated with disaggregated telco cloud adoption while still realizing the benefits of increased flexibility, scalability, and cost efficiency.
CSPs can also take advantage of initiatives like Vodafone's new operating model for improving systems integrations with their supply base. This model aims to simplify the process of integrating components from multiple vendors by providing a standardized framework for testing and validation. By adopting frameworks like this, CSPs can reduce the time and costs associated with integrating components from multiple vendors, which can help to optimize short-term TCO.
Although implementing a disaggregated telco cloud can require increased investment in the short term, there are several options available to CSPs for optimizing short-term TCO. Whether it's working closely with trusted vendors, adopting a phased approach, or leveraging standardized frameworks, CSPs can take steps to reduce costs and maximize the benefits of a disaggregated telco cloud.
Dell and Red Hat are leading experts in cloud-native technology used in building 5G networks and are working together to simplify their deployment and management for CSPs. Dell Telecom Infrastructure Blocks for Red Hat is a solution that combines Dell's hardware and software with Red Hat OpenShift, providing a pre-integrated and validated solution for deploying and managing 5G Core workloads. This offering enables CSPs to quickly launch and scale 5G networks to meet market demand for new services while minimizing the complexity and risk associated with deploying cloud-native infrastructure.
In the next blog, we will dive deeper into the service-based architecture of the 5G Core and how it was developed to support cloud-native principles. To learn more about how Dell Technologies and Red Hat are partnering to simplify the deployment and management of a telco cloud platform built to support 5G Core workloads, see the ACG Research Industry Directions Brief: Extending the Value of Open Cloud Foundations to the 5G Network Core with Telecom Infrastructure Blocks for Red Hat.
About the author:
Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an Engineering degree in Electronics and Telecommunications and has worked in the telecommunications industry for more than 14 years. He currently resides in Bangalore, India.
About the author:
Kevin Gray leads marketing for Dell Technologies Telecom Systems Business Foundations solutions. He has more than 25 years of experience in telecommunications and enterprise IT sectors. His most recent roles include leading marketing teams for Dell’s telecommunications, enterprise solutions and hybrid cloud businesses. He received his Bachelor of Science in Electrical Engineering from the University of Massachusetts in Amherst and his MBA from Bentley University. He was born and raised in the Boston area and is a die-hard Boston sports fan.
Dell Technologies and Samsung collaborate to bring innovative Open RAN solutions to CSPs
Mon, 27 Feb 2023 07:00:00 -0000
Open RAN promises to enable communication service providers (CSPs) with choice and flexibility by opening up the interfaces of the RAN system to enable multi-vendor solutions. However, opening up RAN interfaces creates integration challenges that must be solved. This process includes fully integrating and testing multi-vendor solutions, while gaining the CSP's trust that the solution will provide the reliability their customers have come to expect. Simplifying this process can help accelerate the adoption of Open RAN.
Dell Technologies and Samsung are collaborating to solve Open RAN challenges and support seamless multi-vendor solution integration. Samsung will work alongside Dell to integrate Samsung’s virtualized centralized unit (vCU) and distributed unit (vDU) software with Dell PowerEdge XR8000 and XR5610 servers, which are purpose-built for RAN environments and provide the performance and power consumption characteristics required in RAN deployments.
The companies will also offer a flexible model for joint customer engagements and deliver post-sales support to customers.
“Network operators are on the journey of transforming to open technologies, but they need help validating and testing the various solutions for their networks,” said Andrew Vaz, vice president of product management, Dell Technologies Telecom Systems Business. “Together with Samsung, our aim is to provide validated, price performant RAN solutions that network operators can confidently deploy in their networks.”
“We constantly strive to deliver products and solutions that meet the exceptional standards of global network operators, keeping flexibility, reliability and performance top of mind,” said Wook Heo, vice president, head of business operation, networks business, Samsung Electronics. “We have a robust ecosystem of partners and we look forward to continue working together with Dell to drive innovation to the next level, helping operators scale their open and virtualized networks.”
Open RAN promotes multi-vendor technologies to give CSPs more choice and flexibility. Collaborations like Dell and Samsung will help the industry overcome current Open RAN integration challenges to propel Open RAN forward.
Dell and Nokia collaborate to accelerate cloud RAN adoption
Mon, 27 Feb 2023 07:00:00 -0000
The adoption of cloud RAN architectures has been slowed by concerns about performance and implementation challenges. To address the cloud RAN cost-performance challenge, the industry has moved toward developing hardware acceleration solutions and deploying denser compute platforms. Pre-integration of the network architecture and validation of use case requirements need to be done in advance, so solutions are simple and efficient to deploy.
To accelerate cloud RAN adoption, Dell Technologies and Nokia have formed an agreement to integrate, interoperability-test, and validate a solution combining Nokia 5G Cloud RAN software and Nokia Cloud RAN SmartNIC (Network Interface Card) in-line Layer 1 (L1) accelerator hardware with Dell open infrastructure, including Dell PowerEdge servers.
To achieve these results, Dell and Nokia will both deploy R&D and testing resources. Dell will utilize its Open Telecom Ecosystem Lab as the center for testing and validation, while Nokia will focus its work on its Open Cloud RAN E2E System Test Lab. The companies will be sharing engineering and R&D resources to jointly complete the scope of the collaboration and take the resulting solution to market.
Additionally, Dell Technologies and Nokia have achieved a Layer 3 end-to-end data call running Nokia Cloud RAN software and the Nokia Cloud RAN SmartNIC In-Line L1 acceleration on Dell PowerEdge servers. Dell and Nokia will establish joint marketing and sales strategies and a mutual co-sell agreement to promote and deliver the resulting solution to prospective customers.
“The combination of Nokia’s Cloud RAN software and Cloud RAN SmartNIC with Dell’s purpose-designed telecom open infrastructure and Dell PowerEdge servers unlocks more flexibility, choice and helps our customers choose competitive Cloud RAN solutions that prioritize performance and energy efficiency. By pooling our resources and expertise together we can create more compelling and integrated solutions that accelerate the adoption of open technologies and bring innovative solutions to the market faster. Nokia’s approach in collaborating with our best-in-class partners is delivering competitive advantage to organizations embracing Cloud RAN,” commented Tommi Uitto, president of Mobile Networks, Nokia.
“It’s critical to collaborate with partners such as Nokia to enable telecom network disaggregation and accelerate the adoption of open network architectures,” said Dennis Hoffman, senior vice president and general manager, Dell Technologies Telecom Systems Business. “Integrating Nokia’s Cloud RAN software and Cloud RAN SmartNIC with Dell infrastructure that is purpose built for telecom networks, will give network operators another choice to realize the value of open technologies and quickly bring innovative and revenue generating solutions to market.”
Collaborations like Dell and Nokia will help the industry adopt cloud-native solutions. Dell is eager to work alongside RAN leaders such as Nokia to bring open and innovative RAN solutions to the market.
Dell’s PowerEdge XR7620 for Telecom/Edge Compute
Mon, 01 May 2023 15:39:36 -0000
The XR7620 is an Edge-optimized, short-depth, dual-socket server, purpose-built and compact, offering acceleration-focused solutions for the Edge. Similar to the other new PowerEdge XR servers reviewed in this blog series (the XR4000, XR8000, and XR5610), the XR7620 is a ruggedized design built to tolerate dusty environments, extreme temperatures, and humidity, and is NEBS Level 3, GR-3108 Class 1, and MIL-STD-810G certified.
Figure 1. PowerEdge XR7620 Server
The XR7620 is intended to be a generational improvement over the previous PowerEdge XR2 and XE2420 servers, pairing similar base features with the newest components.
Targeted workloads include Digital Manufacturing workloads for machine aggregation, VDI, AI inferencing, OT/IT translation, industrial automation, ROBO, and military applications where a rugged design is required. In the Retail vertical, the XR7620 is designed for such applications as warehouse operations, POS aggregation, inventory management, robotics and AI inferencing.
For additional details on the XR7620’s performance, see the tech notes on the server’s machine learning (ML) capabilities.
The XR7620 shares the ruggedized design features of the previously reviewed XR servers, and its strength lies in its ability to bring dense acceleration capabilities to the Edge. Rather than repeat the features and capabilities highlighted in previous blogs, I would like to discuss a few other PowerEdge features that have special significance at the Edge: security, cooling, and systems management.
Security is a core tenet and the common foundation of the entire PowerEdge portfolio. Dell designs PowerEdge with security in mind at every phase of the server lifecycle: before the server is built, with a secure supply chain; through delivery, with secure lifecycle management and a silicon Root of Trust; and in operation, protecting what the server creates and stores with data protection.
Figure 2. Dell's Cyber Resilient Architecture
This is a Zero Trust security approach that assumes least-privilege access permissions and requires validation at every access and implementation point, with features such as Identity and Access Management (IAM) and Multi-Factor Authentication (MFA).
Especially at the Edge, where servers are typically deployed outside a physically secured data center, the ability to detect and respond to any tampering or intrusion is critical. Dell’s silicon-based platform Root of Trust creates a secure boot environment to ensure that firmware comes from a trusted, untampered source. PowerEdge can also lock down a system configuration, detect any changes in firmware versions or configuration, and, on detection, initiate a rollback to the last known good environment.
Figure 3. Intelligent Cooling Designs
As covered in a previous blog, optimized thermal performance is critical in the design of resilient, ruggedized Edge servers. The PowerEdge XR servers are designed with balanced, cooling-efficient airflow and comprehensive thermal management that provide optimized airflow while minimizing fan speeds and reducing server power consumption. XR servers have a cooling design that allows them to operate between -5°C and 55°C, and Dell engineers are currently working on solutions to extend that operational range even further.
All PowerEdge XR servers are designed with multiple dual counter-rotating fans (essentially two fans in one housing) and support N+1 fan redundancy. While NEBS certification only “evaluates” fan failure, to certify as a GR-3108 Class 1 device the server must continue to operate with a single fan failed, at a maximum of 40°C, for a minimum of four hours.
All Dell PowerEdge servers share a common, three-tier approach to systems management, in the form of the Integrated Dell Remote Access Controller (iDRAC), OpenManage Enterprise (OME), and CloudIQ. These three tiers build upon Dell’s approach to systems management as a unified, simple, automated, and secure solution. It scales from managing a single server at the iDRAC Baseboard Management Controller (BMC) console, to managing thousands of servers simultaneously with OME, to leveraging intelligent infrastructure insights and predictive analytics to maximize server productivity with CloudIQ.
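As a rough illustration of how single-server data rolls up into fleet-level visibility, the sketch below summarizes health across several servers from DMTF Redfish-style status payloads, the standard interface that BMCs such as iDRAC expose. The payloads shown are illustrative samples, not captured iDRAC output:

```python
# Hypothetical sketch: summarizing the health of a fleet of servers from
# DMTF Redfish-style payloads. The shape follows the standard Redfish
# Status object ({"Status": {"Health": "OK", ...}}); IDs are made up.

def fleet_health(systems: list[dict]) -> dict:
    """Count systems by their reported Redfish health state."""
    summary = {}
    for system in systems:
        health = system.get("Status", {}).get("Health", "Unknown")
        summary[health] = summary.get(health, 0) + 1
    return summary

payloads = [
    {"Id": "edge-01", "Status": {"Health": "OK"}},
    {"Id": "edge-02", "Status": {"Health": "Warning"}},
    {"Id": "edge-03", "Status": {"Health": "OK"}},
]
print(fleet_health(payloads))  # {'OK': 2, 'Warning': 1}
```

In practice the payloads would come from each server's management endpoint; the aggregation step is what lets one operator watch thousands of edge sites at once.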
The XR7620 is a valuable addition to the PowerEdge XR portfolio, providing dense compute, storage, and I/O capabilities in a short-depth, ruggedized form factor for environmentally challenging deployments. But far and away the XR7620’s best capability is a design that brings a dense GPU acceleration environment to the edge while continuing to meet the requirements of NEBS Level 3, an option that has previously not been available.
Dell’s focus on security, cooling, and management creates a solution that can be efficiently and confidently deployed and maintained in the challenging environment that is today’s Edge.
In closing out this blog series, I would like to thank you for taking your valuable time to review my thoughts on Design for the Edge. To continue these discussions, connect with me here:
Mike Moore, Telecom Solutions Marketing Consultant at Dell Technologies
Dell’s PowerEdge XR5610 for Telecom/Edge Compute
Tue, 25 Apr 2023 16:58:19 -0000
In June 2021, Dell announced the PowerEdge XR11 Server. This was Dell’s first design created for the requirements of Telecom Edge Environments. A 1U, short-depth, ruggedized, NEBS Level 3 compliant server, the XR11 has been successfully deployed in multiple O-RAN compliant commercial networks, including DISH Networks and Vodafone.
Dell has followed on the success of the XR11, with a generational improvement in the introduction of the PowerEdge XR5610.
Like its predecessor, the XR5610 is a short-depth ruggedized, single socket, 1U monolithic server, purpose-built for the Edge and Telecom workloads. Its rugged design also accommodates military and defense deployments, retail AI including video monitoring, IoT device aggregation, and PoS analytics.
Figure 1. PowerEdge XR5610 1U Server
Improvements to the XR5610 include:
Topics where the XR5610 delivers at the Edge are:
Form factor and deployability
The monolithic chassis design of the XR5610 is a traditional, short-depth form factor that fills certain deployment cases more efficiently than the XR8000. This form factor will often be preferred for limited or single-server edge deployments, or for planned long-term installations with few anticipated upgrades.
Figure 2. Site support cabinet
The XR5610 is compatible with much of today’s Edge infrastructure. These servers are designed with a short-depth, “400mm class” form factor, compatible with most existing Telecom Site Support Cabinets, with flexible Power Supply options and dynamic power management to efficiently use limited resources at the edge.
This 400mm Class server fits well within the commonly deployed edge enclosure depths of 600mm. With front maintenance capabilities, the XR5610 can be installed in Edge Cloud racks, and provide sufficient front clearance for power and network cabling, without creating a difficult-to-maintain cabling design or potentially one that obstructs airflow.
Environment and rugged design
While the XR5610 is designed to meet the environmental requirements of NEBS Level 3 and GR-3108 Class 1 for deployment into the Telecom Edge, Dell also wanted to create a platform with uses and applications outside the Telecom Sector. The PowerEdge XR5610 is also designed as a ruggedized compute platform for both military and maritime environments, tested to MIL-STD and Maritime specifications, including shock, vibration, altitude, sand, and dust. This wider vision for the deployment potential of the XR5610 creates a computing platform that can exist comfortably in an O-RAN Edge Cloud environment without being restricted to Telecom-only use.
A smart filtered bezel option is also available so the XR5610 can work in dusty environments and send an alert when a filter should be replaced. This saves maintenance costs because technicians can be called out on an as-needed basis, and customers don’t have to be concerned with over-temperature alarms caused by prematurely clogged filters.
Efficient power options
The XR5610 supports two PSU slots that can accommodate multiple power capacities, with both 120/240 VAC and -48 VDC input power.
Dell has worked with our power supply vendors to create an efficient range of Power Supply Units (PSUs), from 800W to 1800W. This allows the customer to select a PSU that most closely matches the power available at the facility and the power draw of the server, reducing power wasted in the voltage conversion process.
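A back-of-the-envelope sketch shows why matching PSU capacity to server draw matters. The efficiency curve below is illustrative only (typical conversion efficiency peaks near mid-load and falls at very low load fractions), not measured Dell PSU data:

```python
# Hypothetical sketch: estimating wall power for a given server draw and
# PSU capacity. An oversized PSU runs at a low load fraction, where
# conversion efficiency drops, so some input power is wasted as heat.
# These efficiency points are illustrative, NOT measured Dell figures.

ILLUSTRATIVE_EFFICIENCY = {0.1: 0.88, 0.2: 0.92, 0.5: 0.94, 1.0: 0.91}

def wall_power(server_draw_w: float, psu_capacity_w: float) -> float:
    """Estimate input (wall) power by interpolating the efficiency curve."""
    frac = min(server_draw_w / psu_capacity_w, 1.0)
    points = sorted(ILLUSTRATIVE_EFFICIENCY.items())
    for (f0, e0), (f1, e1) in zip(points, points[1:]):
        if f0 <= frac <= f1:
            # linear interpolation between the two surrounding points
            eff = e0 + (e1 - e0) * (frac - f0) / (f1 - f0)
            break
    else:
        # below the first point: clamp to the lowest-load efficiency
        eff = points[0][1]
    return server_draw_w / eff

draw = 400.0  # watts of server load
right_sized = wall_power(draw, 800.0)   # ~50% load fraction
oversized = wall_power(draw, 1800.0)    # ~22% load fraction
print(f"right-sized PSU: {right_sized:.0f} W in, oversized: {oversized:.0f} W in")
```

Even a few watts saved per conversion adds up across thousands of edge sites running continuously.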
Conclusion
The Dell PowerEdge XR servers, in particular the XR5610 and XR8000, provide a new infrastructure hardware foundation that allows wireless operators to transition away from traditional, purpose-built, classical BBU appliances, decoupling hardware and software in an open, virtualized RAN that gives operators the choice to create innovative, best-in-class solutions from a multi-vendor ecosystem.
Dell’s PowerEdge XR8000 for Telecom/Edge Compute
Fri, 31 Mar 2023 17:38:53 -0000
The design goals of a Telecom-inspired Edge server are not only to complement existing installations, such as traditional Baseband Units (BBUs), all the way out to the cell site, but to eventually replace purpose-built proprietary platforms with a cloud-based and open solution. The new Dell Technologies PowerEdge XR8000 achieves this goal in terms of form factor, operations, and environmental specifications.
Figure 1. XR8000 2U Chassis
The XR8000 is composed of a 2U, short-depth, 400mm-class chassis with options to choose 1U or 2U half-width, hot-swappable Compute Sleds, with up to four nodes per chassis. The XR8000 supports three sled configurations designed for flexible deployments: 4 x 1U sleds, 2 x 1U and 1 x 2U sleds, or 2 x 2U sleds.
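The three supported configurations follow from simple bay arithmetic. As a sketch (the bay model here is my own simplification, not a Dell specification), a fully populated chassis uses one half-width bay per 1U sled and two stacked bays per 2U sled:

```python
# Hypothetical sketch: enumerating XR8000 sled combinations. The 2U chassis
# offers four half-width 1U bays; a 1U sled occupies one bay and a 2U sled
# occupies two, so a full chassis satisfies (1U sleds) + 2*(2U sleds) == 4.

CHASSIS_BAYS = 4

def full_configurations() -> list[tuple[int, int]]:
    """All (1U-sled count, 2U-sled count) pairs that fill the chassis."""
    configs = []
    for two_u in range(CHASSIS_BAYS // 2 + 1):
        one_u = CHASSIS_BAYS - 2 * two_u
        configs.append((one_u, two_u))
    return configs

for one_u, two_u in full_configurations():
    print(f"{one_u} x 1U sled(s) + {two_u} x 2U sled(s)")
# Yields exactly the three configurations described above:
# 4 x 1U, 2 x 1U + 1 x 2U, and 2 x 2U.
```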
The chassis also supports two PSU slots that can accommodate up to five power capacities, with both 120/240 VAC and -48 VDC input power supported.
The 1U and 2U Compute Sleds are based on Intel’s 4th Generation Xeon Scalable Processors with up to 32 cores, with support for both Sapphire Rapids SP and Edge Enhanced (EE) Intel® vRAN Boost processors. Both sled types have 8 x RDIMM slots, support for 2 x M.2 NVMe boot devices with optional RAID1, two optional 25GbE LAN-on-Motherboard (LoM) ports, and 8 dry contact sensors through an RJ-45 connector.
The 1U Compute Sled adds support for one x16 FHHL (Full Height, Half Length) slot (PCIe Gen4 or Gen5).
Figure 2. XR8610t 1U Compute Sled
The 2U Compute Sled builds upon the foundation of the 1U Sled and adds support for an additional two x16 FHHL slots.
These two sled types can create both dense compute and dense I/O configurations. The 2U sled also provides the ability to accommodate GPU-optimized workloads.
This sledded architecture is designed for deployment into traditional Edge and Cell Site environments, complementing or replacing current hardware and allowing for the reuse of existing infrastructure. Design features that make this platform ideal for Edge deployments include extended temperature operation, all-front maintenance, consolidated power, and headroom for future sleds.
Figure 3. XR8620t 2U Compute Sled
Let’s take a look at each one of these.
The XR8000 is designed for NEBS Level 3 compliance, which specifies an operational temperature range of -5°C to +55°C. However, creating a server that operates efficiently through this whole temperature range can require some “padding” on either side. Dell has designed the XR8000 to operate both below -5°C and above +55°C, creating a server that operates comfortably and efficiently across the NEBS Level 3 range.
On the low side of the temperature scale, as discussed in the sixth blog in this series, commercial-grade components are typically not specified to operate below 0°C. New to Dell PowerEdge design is the XR8000 sled pre-heater controller: on a cold start where the temperature is below -5°C, it internally warms the server to the specified starting temperature before applying power to the rest of the server.
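The pre-heater behavior can be sketched as a simple power-on decision. The function and threshold below are illustrative, based only on the -5°C figure quoted in this blog, not on Dell firmware internals:

```python
# Hypothetical sketch of the cold-start decision described above: below
# -5 C, run the pre-heater before applying main power to the sled.
# The threshold comes from the NEBS Level 3 range quoted in this blog;
# the function name and return values are illustrative, not a Dell API.

MIN_START_TEMP_C = -5.0

def cold_start_action(ambient_c: float) -> str:
    """Decide what a sled controller should do at power-on."""
    if ambient_c < MIN_START_TEMP_C:
        return "preheat"   # warm the sled until it reaches the threshold
    return "power_on"      # safe to power the rest of the server

print(cold_start_action(-20.0))  # preheat
print(cold_start_action(10.0))   # power_on
```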
On the high side of the temperature scale, Dell is introducing new, advanced heat sink technologies to allow for extended operations above +55°C. Another advantage of this new class of heat sinks is power savings: at more nominal operating temperatures, the sled’s cooling fans do not have to spin as fast to dissipate the equivalent amount of heat, consuming fewer watts per fan.
Figure 4. XR8000 Front View. All Front Maintenance
Figure 5. XR8000 Rear View. Nothing to see here.
In many Cell Site deployments, access to the back of the server is not possible without pulling the entire server. This is typical, for example, in dedicated Site Support Cabinets, with no rear access, or in Concrete Huts where racks of equipment are located close to the wall, allowing no rear aisle for maintenance.
Maintenance procedures at a Cell Site are intended to be fast and simple. The area where a Cell Site Enclosure sits is not a controlled environment. Sometimes there will be a roof over the enclosure to reduce solar load, but more often than not it is exposed to everything Mother Nature has to offer. So FRU (Field Replaceable Unit) maintenance needs to be simple and fast, and to bring the system back into full service quickly. For the XR8000, the two basic FRUs are the Compute Sleds and the PSUs. Simple, fast procedures not only restore service more quickly, but the shorter maintenance cycle also allows more sites to be serviced by the same technicians, saving both time and money.
Up to four Compute Sleds are supported in the XR8000 chassis, supplied by two 60mm PSUs. A traditional rackmount equivalent would be either 4U of single-socket or 2U-4U of dual-socket servers. Assuming redundant PSUs for each server, that means four to eight PSUs for equivalent compute capacity, and four to eight more power cables. This consolidation of PSUs and cables not only reduces the cost of the installation, due to fewer PSUs, but also reduces the cabling, clutter, and Power Distribution Unit (PDU) ports used in the installation.
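The consolidation arithmetic can be made concrete with a trivial sketch (the counts simply mirror the comparison above; no Dell-specific data beyond it):

```python
# Hypothetical sketch: PSU count for the traditional rackmount case versus
# the sledded XR8000 chassis, using the figures quoted in this blog.

def psu_count(servers: int, psus_per_server: int) -> int:
    """Total PSUs (and hence power cables) for a group of servers."""
    return servers * psus_per_server

traditional = psu_count(4, 2)   # four 1U servers with redundant PSUs
sledded = 2                     # one XR8000 chassis, two shared PSUs
print(f"traditional: {traditional} PSUs, sledded: {sledded} PSUs")
```

That is six fewer PSUs, cables, and PDU ports per four nodes of compute at every site.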
With the release of Intel’s new 4th Generation Xeon Scalable Processor, a server in 2023 can do the work of multiple servers from only 10 years ago. Not only can processor efficiency be expected to continue improving, but greater capabilities and performance in peripherals, including GPUs, DPUs/IPUs, and application-specific accelerators, will continue this processing densification trend. The XR8000 chassis is designed to accommodate multiple generations of future Compute Sleds, enabling fast and efficient upgrades while keeping any service disruptions to a minimum.
It is said that imitation is the sincerest form of flattery. In this spirit, our customers have requested, and Dell has delivered, the XR8000, designed in a compact and efficient form factor with maintenance procedures similar to those found in existing, deployed RAN infrastructure.
Building upon a classical BBU architecture, the XR8000 adopts an all-front maintenance approach with a 1U and 2U sledded design that makes server and PSU installation and upgrades quick and efficient.
Dell’s PowerEdge XR4000 for Telecom/Edge Compute
Wed, 08 Feb 2023 18:04:19 -0000
|Read Time: 0 minutes
Compute capabilities are increasingly migrating away from the centralized data center and being deployed closer to the end user—where data is created, processed, and analyzed in order to generate rapid insights and new value.
Dell Technologies is committed to building infrastructure that can withstand unpredictable and challenging deployment environments. In October 2022, Dell announced the PowerEdge XR4000, a high-performance server, based on the Intel® Xeon® D Processor that is designed and optimized for edge use cases.
Figure 1. Dell PowerEdge XR4000 “rackable” (left) and “stackable” (right) Edge Optimized Servers
The PowerEdge XR4000 is designed from the ground up to withstand rugged and harsh deployment environments across multiple industry verticals. This includes a server/chassis designed to the foundational requirements of GR-63-CORE (including -5C to +55C operation) and GR-1089-CORE for NEBS Level 3, as well as GR-3108 Class 1 certification. Designed beyond the NEBS requirements of Telecom, the XR4000 also meets MIL-STD specifications for defense applications, marine specifications for shipboard deployments, and environmental requirements for installations in the power industry.
The XR4000 marks a continuation of Dell Technologies’ commitment to creating platforms that can withstand the unpredictable and often challenging deployment environments encountered at the edge, as compute capabilities increasingly migrate away from the centralized data center and are deployed closer to the end user, at the network edge or on-premises.
Attention to a wide range of deployment environments creates a platform that can be reliably deployed from the data center to the Cell Site, to the desktop, and anywhere in between. Its rugged design makes the XR4000 an attractive option at the Industrial Edge or on the manufacturing floor, with the power and expandability to support a wide range of computing requirements, including AI/Analytics with bleeding-edge GPU-based acceleration.
The XR4000 is also an extremely short-depth platform, measuring only 342.5mm (13.48 inches) deep, which makes it deployable in a wide variety of locations. With a focus on deployment flexibility, the XR4000 supports EIA-310-compatible 19” rack mounting rails, and the “stackable” version also supports common, industry-standard VESA/DIN rail mounts, with built-in latches that allow chassis to be mounted on top of each other while leveraging a single VESA/DIN mount.
Additionally, both Chassis types have the option to include a lockable intelligent filtered bezel, to prevent unwanted access to the Sleds and PSUs, with filter monitoring which will create a system alert when the filter needs to be changed. Blocking airborne contaminants, as discussed in a previous blog, is key to extending the life of a server by reducing contaminant build-up that can lead to reduced cooling performance, greater energy costs, corrosion and outage-inducing shorts.
The modular design of the XR4000, along with the short-depth Compute Sled design creates an easily scalable solution. Maintenance procedures are simplified with an all-front-facing, sled-based design.
Specifying and deploying Edge Compute very often involves selecting a server solution outside of the more traditional data center choices. The XR4000 addresses the challenges of moving compute to the Edge with a compact, NEBS-compliant, and ruggedized approach: sled-based servers with all-front access, reversible airflow, and flexible mounting options that ease maintenance and upgrades, reduce server downtime, and improve TCO.
To slice or not to slice
Wed, 25 Jan 2023 21:53:29 -0000
|Read Time: 0 minutes
Network Slicing is possibly the most central feature of 5G: it has game-changing potential, but it is also often overhyped and misunderstood. In this blog, we give a fact-based assessment and guidance on the question of “To Slice Or Not To Slice.”
Guidance for the reader:
5G doesn’t only promise to enhance mobile broadband but also to support a wide range of enterprise use cases – from those requiring better reliability and lower latency to those requiring a long battery life and greater device density. From the long list of 3GPP Release 15 features, Network Slicing is the cornerstone feature for service creation. The basic idea behind this feature is the ability to subdivide a single physical network into many logical networks where each is optimized to support the requirements of the services intended to run on it.
We can think of Network Slicing as ordering pizza for friends. 4G gets you the classic Margherita, which is acceptable to most. Yet some would be willing to pay more for extra toppings. In this case, 5G allows you to customize the pizza where half can still be the classic Margherita, but the remaining slices can be split into four cheese, pepperoni, and Hawaiian.
It all sounds great, but why are we not seeing Network Slicing everywhere today? Let us explore some of the hurdles it has to clear before becoming more mainstream.
Slicing requires new features to work – Network equipment providers need to develop these new features, especially on the Radio Access Network (RAN), and communications service providers need to implement them. This will take time since much of the initial industry focus has been on enhanced mobile broadband and fixed wireless access, which are the initial monetizable 5G use cases.
Slicing needs automation to be practical – While it is possible to create network slices manually at the start, doing so at scale takes too long and costs too much. An entirely new 3GPP-defined management and orchestration layer is needed for slicing orchestration and service assurance. Business Support Systems (BSS) also need new integrations and feature enhancements to support capabilities like online service ordering and SLA-based charging.
Slicing has to make money – There will come a time when we cannot live without the metaverse and web 3.0, but that is not today. There will also come a time when factories will be run by collaborative robots and infrastructure is maintained by autonomous drones, but that is not today. The reality is that there is limited demand for custom slices since most consumer and enterprise use cases today work fine on 4G or 5G non-standalone networks. For example, YouTube and other over-the-top streaming apps implement algorithms to adapt to varying speeds and latency. Lastly, Network Slicing also comes with additional costs related to implementation, operations, and reduced overall capacity (due to resource reservation and prioritization) that must be factored into the business case.
Regulatory challenges – Net Neutrality is an essential topic in the United States and European Union. Misinterpreting differentiated services as something that violates Net Neutrality may put communications service providers under scrutiny by regulators.
5G standalone may have been slow out of the gate, but it is gaining momentum. In 2022, GSA counted 112 operators in 52 countries investing in public 5G standalone networks. Some communications service providers are even more advanced. For example, Singtel has already implemented Paragon, an orchestration platform that allows them to offer network slices and mobile edge computing for mission-critical applications on demand. Another example is Telia Finland which uses Network Slicing to guarantee the service level for its home broadband (fixed wireless access) subscribers.
There are also many ongoing and planned projects that aim to accelerate the development of enterprise use cases. Collaborations such as ARENA2036, a research campus in Germany, allow communications service providers, network equipment manufacturers, independent software vendors, system integrators, and academia to work together in developing and testing new technologies and services.
One of the key reasons behind this positive momentum shift in 2022 is the major network equipment providers like Nokia and Ericsson bringing to the market their Network Slicing features for the RAN. These features enable the reservation and prioritization of RAN resources to particular slices. According to these vendors, Network Slice capacity management is done dynamically, which means the scarce air interface resources are allocated as efficiently as possible. This has been the much-needed catalyst for the first commercial launches and pre-commercial trials across several industries: fixed wireless access (live), video streaming (live), smart city, public safety, remote TV broadcast, assisted driving, enterprise interconnectivity, and mining.
Another positive development is related to smartphones where the biggest mobile operating system (Android OS) started supporting multiple Network Slices simultaneously on the same device (from Android 12). This is beneficial to both consumer and enterprise use cases that have more demanding requirements for speed and latency.
These enhancements on the RAN and devices close several gaps. We can therefore expect Network Slicing to gain even more traction in 2023.
Several hundred successful 4G and 5G Mobile Private Networks (MPN) have been deployed globally. Many have specific indoor coverage, cybersecurity, or business-critical performance requirements that can be best accomplished with dedicated network resources. The common challenges for MPN are private spectrum availability, high cost of deployment and operations, and long lead times.
Some 5G use cases can be deployed only through Network Slicing or only through MPN, but the majority can be deployed on either. In our view, the discussion should not focus too much on comparing Network Slicing to MPN, but rather on the use case requirements, such as coverage, where Network Slicing is a natural fit for wide areas and MPN is a natural fit for deep indoor. Communications service providers should have both solutions in their toolbox, as individual enterprise customers may require both for their various use cases. Let the use case dictate the solution, similar to the approach of most network equipment providers for private wireless (4G/5G versus WiFi6/6E).
In our view, the recently available slicing features and the commercial and pre-commercial market deployments provide clear evidence that Network Slicing is here to stay, enabling new service creation and fostering competitive differentiation. Only time will tell how successful it will be with consumer and enterprise market segments. The level of investment by governments, industry groups, communications service providers, and network equipment providers will play a major role in the success or failure of Network Slicing. At the same time, communications service providers should keep in mind other industry players like AWS and other webscale companies, who are betting big on 5G with MPN-based solutions (as Network Slicing is not an option for them).
Communications service providers must understand that Network Slicing, in most situations, is not a sellable service, but rather an enabler to support services with performance or security requirements that are significantly different from mobile broadband. Differentiation for most of the use cases will be in the RAN domain since the air interface is a constrained resource and the RAN equipment is too costly to dedicate.
While there is no harm in having the management and orchestration layer from the start, especially if CAPEX is not an issue, we recommend first focusing on deploying the end-to-end network features that Network Slicing requires and on identifying monetizable use cases that will benefit from it. Note that some use cases require additional features, such as those that lower latency and improve reliability.
The vast majority of consumer and enterprise end users are not interested in the underlying technologies, but rather just want to achieve the speed, latency, and reliability they need for the services they enjoy or need. And in many cases even discussions on speed, latency and reliability do not interest them as long as the services are performing as expected. Communications service providers should have the capability to create and market the services by themselves or, in most instances, with the right partners. Unlike 4G, the potential of 5G can no longer be realized just by the communications service providers and network equipment vendors.
Communications service providers should have a complete toolbox – different tools for different requirements. And the guidance is not to stand idly, but to gain experience and form partnerships for both Network Slicing and MPN.
Deploying Network Slicing or MPN, and moving into new business models that offer multiple tailored and assured connectivity services, are not trivial tasks. Here is how Dell can help CSPs in this transformation journey:
About the author: Tomi Varonen
Principal Global Enterprise Architect
Tomi Varonen is a Telecom Network Architect in Dell’s Telecom Systems Business Unit. He is based in Finland and working with the Cloud and Core Network customer cases in the EMEA region. Tomi has over 23 years of experience in the Telecom sector in various technical and sales positions. He has wide expertise in end-to-end mobile networks and enjoys creating solutions for new technology areas. Tomi has a passion for various outdoor activities with family and friends including skiing, golf, and bicycling.
About the author: Arthur Gerona
Principal Global Enterprise Architect
Arthur is a Principal Global Enterprise Architect at Dell Technologies. He is working on the Telecom Cloud and Core area for the Asia Pacific and Japan region. He has 19 years of experience in Telecommunications, holding various roles in delivery, technical sales, product management, and field CTO. During his free time, Arthur likes to travel with his family.
Computing on the Edge–Other Design Considerations for the Edge: Part 2
Thu, 02 Feb 2023 15:30:52 -0000
|Read Time: 0 minutes
The previous blog discussed the physical aspects of a ruggedized chassis, including short depth and serviceability. The overarching theme was creating a durable server in a form factor that can be deployed across a wide range of Telecom and Edge compute environments.
This blog will focus on the inside of the server, specifically design aspects that cover efficient and long-term, reliable server operations. This blog covers the following topics:
Certainly, one of the greatest challenges of Edge Server Design is architecture and layout. It is extremely challenging to optimize airflow such that heat is efficiently dissipated over the entire operational temperature and humidity range.
In an Edge Server, there are still the same compute, storage, memory, and networking demands as in a traditional data center server. However, designers have roughly 30 percent less real estate to work with, and even less space in some of the sledded server architectures, such as Dell’s new PowerEdge XR4000 Server.
These design restrictions typically result in components being placed much closer together on the motherboard, concentrating heat generation in a smaller area. Smart component placement that keeps pre-heated air from passing over other sensitive components, advanced heat sinks, high-performance fans, and air channels that direct air through the server are all critical to creating server designs that can tolerate temperature extremes without developing excessive hotspots.
These designs are repeatedly simulated and optimized using a Computational Fluid Dynamics (CFD) application. Hot spots are identified and mitigated until a design is created that maintains all active components within their specified operating temperatures, over the entire operational range of the overall server. For example, for NEBS Level 3 this would range from -5C to +55C, as discussed in the third blog of this series.
Bringing together these server performance requirements, thermal dissipation challenges, component selection, and effective airflow simulations involves considerable engineering and applied science, but it is also very much an art form. A well-designed server is remarkable not only in its performance but in the efficiency and elegance of its layout. Perhaps that’s a little overboard, but I can’t help but admire an efficient server layout and consider all the design iterations, time, engineering effort, and simulations that went into its creation.
Having high-efficiency Power Supply Unit (PSU) options that support multiple voltages (both AC and DC) and multiple PSU capacities allows for the optimal conversion of input power (110VAC, 220VAC, -48VDC) to server-consumable voltages (12VDC, 5VDC).
Power supplies operate most efficiently within a certain utilization range. PSUs are generally rated under a voluntary certification called 80 Plus, which grades power conversion efficiency; the minimum rating requires 80 percent conversion efficiency. The flip side of an 80 percent efficiency rating is that 20 percent of the input power is wasted as heat. The maximum PSU efficiency rating is currently around 96 percent, and the higher the efficiency, the higher the price of the PSU. With electricity costs increasing globally, correctly dimensioning the PSU can result in significant TCO savings.
Ensuring that a server vendor has multiple PSU options that provide optimal PSU efficiency over the performance range of the server can save hundreds to thousands of dollars in power conversion losses over the lifetime of the server. If you also consider that the power conversion loss represents generated heat, the potential savings in cooling costs are even greater.
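The savings claim above is easy to sanity-check with a rough model. The load, tariff, and lifetime figures below are illustrative assumptions; only the 80 percent and roughly 96 percent efficiency endpoints come from the text.

```python
# Rough model of the cost of power-conversion loss over a server's lifetime.
# Load, electricity price, and lifetime are illustrative assumptions.

def conversion_loss_cost(load_watts: float, efficiency: float,
                         price_per_kwh: float, years: float) -> float:
    """Cost of the heat wasted converting wall power to DC at the PSU."""
    input_watts = load_watts / efficiency   # power drawn from the wall
    loss_watts = input_watts - load_watts   # wasted as heat in the PSU
    hours = years * 365 * 24                # always-on telecom duty cycle
    return loss_watts / 1000 * hours * price_per_kwh

# Compare the 80 Plus baseline (80%) against a ~96% efficient PSU
# for an assumed 500 W server load at $0.15/kWh over 5 years.
baseline = conversion_loss_cost(500, 0.80, 0.15, 5)   # ~$821 in losses
high_eff = conversion_loss_cost(500, 0.96, 0.15, 5)   # ~$137 in losses
print(f"80% PSU loss cost:  ${baseline:,.0f}")
print(f"96% PSU loss cost:  ${high_eff:,.0f}")
print(f"Lifetime savings:   ${baseline - high_eff:,.0f}")
```

Even under these modest assumptions the difference lands in the hundreds of dollars per server, before counting the cooling cost of the extra heat.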
GR-63-Core specifies three types of airborne contaminants that need to be addressed: particulate, organic vapors, and reactive gases. Organic vapors and reactive gases can lead to rapid corrosion, especially where copper or silver components are exposed in the server. With the density of server components on a motherboard increasing from generation to generation and the size of the components decreasing, corrosion becomes an increasingly complex issue to resolve.
Particulate contaminants, ranging from salt crystals on the fine side to common dust and metallic particles like zinc whiskers on the coarse side, can cause corrosion but can also result in leakage, eventual electrical arcing, and sudden failures. Common dust build-up within a server can reduce the efficiency of heat dissipation, and dust can absorb humidity, causing shorts and resulting failures.
Hybrid outdoor cabinet solutions may become more common as operators look toward reducing energy costs. These involve a combination of Air Ventilation (AV), Active Cooling (AC), and Heat Exchangers (HEX). Depending on the region, AV+AC (warmer climates) or AV+HEX (cooler climates) can be used to efficiently evacuate heat from an enclosure, falling back on AC or HEX only when AV cannot sufficiently cool the cabinet. However, exposure to outside air brings a whole new set of design challenges and increases the risk of corrosion.
One protection method is a Conformal Coating, which combats corrosive contaminants in hostile environments. This is a thin layer of non-conductive material (typically an acrylic) applied to the electronics in a server that acts as a barrier to corrosive elements; it is thin enough that its application does not impede heat conduction. Conformal Coatings can also help against dust build-up. This is not a common practice in servers, due to the complexity and cost of applying the coating to the multiple modules (motherboard, DIMMs, PCIe cards, and more) that compose a modern server. However, the tradeoff of coating a server compared to the savings of using AV may make this practice more common in the future.
Using a filtered bezel is a common option for dust. These filters block dust from entering the server, but the dust accumulates in the filter itself. Eventually, the accumulated dust reduces airflow through the server, which can cause components to run hotter or cause the fans to spin at a higher rate, consuming more electricity.
Periodically replacing filters is critical, but how often and when? Smart Filter Bezels can be an effective answer to this question. These bezels notify operations when a filter needs to be swapped, saving the time spent on unnecessary periodic checks and avoiding scrambles when over-temperature alarms suddenly arrive from the server.
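As a hypothetical sketch of how such filter monitoring could work (this is not Dell's actual implementation, and all names and thresholds are invented for illustration): compare how hard the fans must work to hold the same thermal setpoint against a clean-filter baseline, and alert once the increase crosses a threshold.

```python
# Hypothetical filter-monitoring logic for a smart filtered bezel.
# Thresholds and signal choice (fan PWM at a fixed thermal setpoint)
# are illustrative assumptions, not a vendor specification.

def filter_needs_service(baseline_fan_pwm: float,
                         current_fan_pwm: float,
                         threshold_pct: float = 25.0) -> bool:
    """Flag the filter when fans must work notably harder than the
    clean-filter baseline to hold the same component temperatures."""
    increase = (current_fan_pwm - baseline_fan_pwm) / baseline_fan_pwm * 100
    return increase >= threshold_pct

# Clean-filter baseline: fans at 40% PWM. After months of dust loading,
# fans run at 55% for the same thermals: a 37.5% increase, so alert.
print(filter_needs_service(40.0, 55.0))  # True
print(filter_needs_service(40.0, 45.0))  # False (only a 12.5% increase)
```

The point of the design is that the alert fires on measured restriction rather than a fixed calendar schedule, so clean sites are not visited needlessly and dusty sites are not visited too late.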
The last two blogs in this series covered a few of the design aspects that should be considered when designing a compute solution for the edge that is powerful, compact, ruggedized, environmentally tolerant and power efficient. These designs need to be flexible, deployable into existing environments, often short-depth, and operate reliably with a minimum of physical maintenance for multiple years.
Accelerating the Journey towards Autonomous Telecom Networks
Fri, 06 Jan 2023 14:29:40 -0000
|Read Time: 0 minutes
Communications service providers (CSPs) are on a journey of digital transformation that gives them the ability to offer new innovative services and a better customer experience in an open, agile, and cost-effective manner. Recent developments in 5G, Edge, Radio Access Network disaggregation, and, most importantly, the pandemic have all proven to be catalysts that accelerated this digital transformation. However, all these advancements in telecom come with their own set of challenges. New architectures and solutions have made the modern network considerably more complex and difficult to manage.
In response, CSPs are evaluating new ways of managing their complex networks using automation and artificial intelligence. The ability to fully orchestrate the operation of digital platforms is vital for touchless operations and consistent delivery of services. Almost every CSP is working on this today. However, the standard automation architecture and tools can't be directly applied by CSPs as all these solutions need to adhere to strict telecom requirements and specifications such as those defined by enhanced Telecom Operations Map (eTOM), Telecom Management Forum (TM Forum), European Telecommunications Standards Institute (ETSI), 3rd Generation Partnership Project (3GPP), etc. CSPs also need to operate many telecom solutions including legacy physical network functions (PNF), virtual network functions (VNF), and the latest 5G era containerized network functions (CNF).
Removing barriers with telecom automation
Although many CSPs have built cloud platforms, only a handful have achieved their automation targets. So, what do you do when there is no ready-made industry-standard automation solution? You build one. And that’s exactly what Dell Technologies did with the recent launch of its Dell Telecom Multi-Cloud Foundation. Dell Telecom Multi-Cloud Foundation automates the deployment and life-cycle management of the cloud platforms used in a telecom network to reduce operational costs while consistently meeting telco-grade SLAs. It also supports the leading cloud platforms offering operators the flexibility of choosing the platform that best meets their needs based on workload requirements and cost-to-serve. It streamlines telecom cloud design, deployment, and management with integrated hardware, software, and support.
The solution includes Dell Telecom Infrastructure Blocks. Telecom Infrastructure Blocks are engineered systems that provide foundational building blocks that include all the hardware, software and licenses to build and scale out cloud infrastructure for a defined telecom use case.
Telecom Infrastructure Block releases will be delivered in an agile manner with multiple releases per year to simplify lifecycle management. In 2023, Dell Telecom Infrastructure Blocks will support workloads for Radio Access Network and Core network functions with:
Dell Telecom Infrastructure Blocks for Red Hat will target core network workloads (planned). The primary goal of Telecom Multi-Cloud Foundation with Telecom Infrastructure Blocks is to deliver telco cloud platforms that are engineered for scaled deployments, providing three core capabilities:
Dell Technologies Telecom Multi-Cloud Foundation meets Telco automation requirements
Dell Technologies Multi-Cloud Foundation provides communications service providers with a platform-centric solution based on open Application Programming interfaces (APIs) and consistent tools. This means the platform can deliver outcomes based on a unique use case and workload and then scale out deployments using an API-based approach.
Dell Telecom Multi-Cloud Foundation enables telco-grade automation through the following key capabilities:
Automation use cases with Dell Technologies Telecom Multi-Cloud Foundation
Telecom Automation is not just about Day 0 (design) and Day 1 (deployment) but should also cover Day 2 (operations and lifecycle management). Dell Telecom Multi-Cloud Foundation supports the following use cases:
Dell Technologies developed Dell Telecom Multi-Cloud Foundation and Dell Telecom Infrastructure Blocks to accelerate 5G cloud infrastructure transformation. Our current release of Telecom Infrastructure Blocks for Wind River delivers an engineered and factory-integrated system that comes with a fully automated deployment model for CSPs looking to build resilient and high-performance RAN.
To learn more about our solution, please visit the Dell Telecom Multi-Cloud Foundation solutions site.
About the Author: Saad Sheikh
Saad Sheikh is APJ's Lead Systems Architect in Telecom Systems Business at Dell Technologies. In his current role, he is responsible for driving Telecom Cloud, Automation, and NGOPS transformations in APJ supporting partners, NEPs, and customers to accelerate Network Transformation for 5G, Open RAN, Core, and Edge using Dell’s products and capabilities. He is an industry leader with over 20 years of experience in Telco industry holding roles in Telco, System Integrators, Consulting businesses, and with Telecom vendors where he has worked on E2E Telecoms systems (RAN, Transport, Core, Networks), Cloud platforms, Automation, Orchestration, and Intelligent Networking. As part of Dell CTO team, he represents Dell in Linux Foundation, TMforum, GSMA, and TIP.
Computing on the Edge: Other Design Considerations for the Edge – Part 1
Fri, 13 Jan 2023 19:46:50 -0000
|Read Time: 0 minutes
In past blogs, we addressed the requirements for NEBS Level 3 certification, with even higher demands depending on the Outside Plant (OSP) installation requirements. Now, additional design factors need to be considered to create a hardware solution that will not only survive the environment at the edge, but also provide a platform that can be effectively deployed there.
The first design consideration that we’ll cover for an Edge Server is the Ruggedized Chassis. This is certainly a chassis that can stand up to the demands of Seismic Zone 4 testing and can also withstand impacts, drops, and vibration, right?
Not necessarily.
While earthquakes are violent, demanding, but relatively short-duration events, the shock and vibration profile can differ significantly when the server is taken out from under the Cell Tower, beyond the base of the tower and into edge environments that might be encountered in Private Wireless or Multi-Access Edge Compute (MEC) deployments. Some vibration and shock impacts are tested in GR-63-CORE under the test criteria for Transportation and Packaging, but ruggedized designs need to go beyond this level of testing.
Figure 1. Portable Edge Compute Platforms
Consider, for example, the need for ruggedized servers in mining or military environments, where setting up compute can be more temporary in nature and often involves portable cases, such as Pelican cases. These cases are subject to environmental stresses and can require ruggedized rails and upgraded mounting brackets on the chassis for those rails. For longer-lasting deployments, enclosures can be less than ideal, demanding everything required of a GR-3108 Class 2 device and perhaps more.
Dell Technologies also tests our ruggedized (XR-series) servers to MIL-STD-810 and marine testing specifications. In general, MIL-STD-810 temperature requirements align with GR-63-CORE on the high side but test operation down to -57C (-70F) on the low side, reflecting some extreme parts of the world where the military is expected to operate. MIL-STD-810 also covers land, sea, and air deployments, which means the non-operational (shipping) criteria are much more in-depth, as are the acceleration, shock, and vibration criteria. These include scenarios such as crash survivability, where the server can be exposed to up to 40Gs of acceleration. Of course, this tests not only the server, but also the enclosure and mounting rails used in testing.
So why have I detoured onto MIL-STD and Marine testing? For one, it’s interesting in the extreme “dynamic” testing requirements that are not seen in NEBS. Secondly, creating a server that is survivable in MIL-STD and Marine environments is only complementary to NEBS and creates an even more durable product that has applications beyond the Cellular Network.
Figure 2. Typical Short Depth Cell Site Enclosure
Another key factor in chassis design for the edge is the form factor. This involves understanding the physical deployment scenarios and legacy environments, leading to a server form factor that can be installed in existing enclosures without the need for major infrastructure improvements. For servers, 19-inch rackmount or 2-post mounting is common, with 1U or 2U heights. But the key driver in chassis design for compatibility with legacy telecom environments is short depth.
Server depth is not something covered by NEBS; instead, supplemental documentation created by the Telecoms, and typically reflected in RFPs, defines the depth required for installation into legacy environments. For instance, AT&T’s Network Equipment Power, Grounding, Environmental, and Physical Design Requirements document states that “newer technology” deployed to a 2-post rack, which certainly applies to deployments like vRAN and MEC, “shall not” exceed 24 inches (609mm) in depth. This disqualifies most traditional rackmount servers.
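The depth constraint above amounts to a simple compliance check. The XR4000 depth comes from the earlier section of this post; the traditional-server depth is an illustrative assumption.

```python
# Check candidate server depths against the 24 in (609 mm) limit cited
# from AT&T's requirements for "newer technology" on a 2-post rack.
MAX_DEPTH_MM = 609

candidates = {
    "XR4000 (stackable)": 342.5,    # depth stated earlier in this post
    "typical 1U rackmount": 730.0,  # illustrative traditional server depth
}

for name, depth_mm in candidates.items():
    verdict = "fits" if depth_mm <= MAX_DEPTH_MM else "too deep"
    print(f"{name}: {depth_mm} mm -> {verdict}")
```

This is why short depth, rather than rack units, is the binding constraint for legacy telecom enclosures.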
The key is deployment flexibility. Edge Compute should be mountable anywhere, adapting to the constraints of the deployment environment. For instance, in a space-constrained location, all-front maintenance is a necessary design requirement, as these servers are often installed close to a wall or mounted in a cabinet with no rear access. In addition, supporting reversible airflow allows the server to adapt to the cooling infrastructure (if any) already installed.
While NEBS requirements focus on Environmental and Electrical Testing, ultimately the design needs to consider the target deployment environment and meet the installation requirements of the targeted edge locations.
How Dell Telecom Infrastructure Blocks are Simplifying 5G RAN Cloud Transformation
Thu, 08 Dec 2022 20:01:48 -0000
|Read Time: 0 minutes
5G is a technology that is transforming industry, society, and how we communicate and live in ways we've yet to imagine. Communication Service Providers (CSPs) are at the heart of this technological transformation. Although 5G builds on existing 4G infrastructure, 5G networks deployed at scale will require a complete redesign of communication infrastructure. This 5G network transformation is well underway: more than 220 operators in more than 85 countries have already launched services, and they have learned that operational agility and an accelerated deployment model are must-haves in such a decentralized, cloud-native landscape to meet customer demands for innovative capabilities, services, and digital experiences, for both telecom and vertical industries. This goes hand in hand with the promise of cloud-native architectures and open, flexible deployments, which enable CSPs to scale and to build new data-driven architectures in an open ecosystem. While the initial deployments of 5G are based on the Virtualized Radio Access Network (vRAN), which offers CSPs enhanced operational efficiency and the flexibility to meet the needs of 5G customers, Open RAN expands on vRAN's design concepts and goals and is widely considered the future. Although O-RAN disaggregates the network, giving operators more flexibility in how their networks are built along with the benefits of interoperability, the trade-off for that flexibility is typically increased operational complexity: the additional cost of continuously testing, validating, and integrating 5G RAN system components that are now provided by a diverse set of suppliers.
Another aspect of this growing complexity is the need for denser networks. Although powerful, new 5G antennas and RAN gear required to attain maximum bandwidth cover substantially less distance than 4G macro cells operating at lower frequencies. This means similar coverage requires more 5G hardware and supporting software. Adding the essential gear for 5G networks can dramatically raise operational costs, but the hardware is only a portion of these costs. The expenses of maintaining a network include the time and money spent on configuration changes, testing, monitoring, repairs, and upgrades.
For most nationwide operators, Edge and RAN cell sites are widely deployed and geographically dispersed across the nation. As network densification increases, it becomes impractical to manually onboard thousands of servers across multiple sites. CSPs need a strategy for incorporating greater automation into their networks and ongoing service operations to ensure robust connectivity, manage expanding network complexity, and preserve cost efficiencies without resorting to a complete "rip and replace" strategy.
As CSPs migrate to an edge-computing architecture, a new set of requirements emerges. As workloads move closer to the network's edge, CSPs must still maintain ultra-high availability, often five to six nines (99.999 to 99.9999 percent uptime). Legacy technology is incapable of attaining this degree of availability.
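To put "five to six nines" in concrete terms, the implied downtime budget can be computed directly. This quick calculation is illustrative and not from the original post:

```python
# Annual downtime budget implied by an availability target.
# "Five nines" = 99.999% uptime; "six nines" = 99.9999%.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # average year, including leap years

def downtime_minutes_per_year(availability: float) -> float:
    """Return the maximum allowed downtime (minutes/year) for a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for label, a in [("five nines", 0.99999), ("six nines", 0.999999)]:
    print(f"{label}: {downtime_minutes_per_year(a):.2f} minutes/year")
```

Five nines allows roughly 5.3 minutes of unplanned downtime per year; six nines allows barely half a minute, which is why legacy platforms struggle to meet the target at distributed edge scale.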
Scalability matters too, specifically the ability to scale down to a single node with a small footprint at the edge. When a single network reaches tens of thousands of cell sites, you simply cannot afford a significant physical footprint with many servers at each one; hence the need for a new architecture that can scale both up and down. As applications become more real-time, ultra-low latency at the edge is required. CSPs need built-in lifecycle management to perform live software upgrades and manage this environment. Finally, CSPs are demanding more and more open-source software for their networks. Wind River Studio addresses each of these network issues.
Wind River Studio Cloud Platform, which is the StarlingX project with commercial support, provides a production-grade distributed Kubernetes solution for managing edge cloud infrastructure. In addition to the Kubernetes-based Wind River Studio Cloud Platform, Studio also provides orchestration (Wind River Studio Conductor) and analytics (Wind River Studio Analytics) capabilities so operators can deploy and manage their intelligent 5G edge networks globally.
Mobile Network Operators who adopt vRAN and Open RAN must integrate cloud platform software on optimized and tuned hardware to create a cloud platform for vRAN and Open RAN applications. Dell and Wind River have worked together to create a fully engineered, pre-integrated solution designed to streamline 5G vRAN and Open RAN design, deployment, and lifecycle management. Dell Telecom Infrastructure Blocks for Wind River integrate Dell Bare Metal Orchestrator (BMO) and Wind River Studio on Dell PowerEdge servers to provide factory-integrated building blocks for deploying ultra-low latency, vRAN and Open RAN networks with centralized, zero-touch provisioning and management capabilities.
Key Advantages:
Wind River Studio Cloud Platform's Distributed Cloud configuration supports an edge computing solution by providing central management and orchestration for a geographically distributed network of cloud platform systems. Installation is straightforward, with support for complete Zero Touch Provisioning of the entire cloud, from the Central Region to all the sub-clouds.
The architecture features a synchronized distributed control plane for reduced latency, with an autonomous control plane at each sub-cloud so that all local services remain operational even during loss of northbound connectivity to the Central Region (the location of the distributed cloud's system controller cluster). This matters because Studio Cloud Platform can scale horizontally or vertically, independently of the main cloud in the regional data center (RDC) or national data center (NDC).
Cell sites, or sub-clouds, are geographically dispersed edge sites of varying sizes. Dell Telecom Infrastructure Blocks for Wind River cell site installations can be All-in-One Simplex (AIO-SX), AIO Duplex (AIO-DX), or AIO-DX plus workers. A typical AIO-SX deployment needs at least one server in a sub-cloud. Sub-clouds are set up at remote worker sites running Bare Metal Orchestrator.
The Central Site at the RDC is deployed as a standard cluster across three Dell PowerEdge R750 servers, two of which are controller nodes and one of which is a worker node. The Central Site, also known as the system controller, provides orchestration and synchronization services for up to 1,000 distributed sub-clouds, or cell sites. The various nodes in the system are Controller-0, Controller-1, and Worker-0 through Worker-n. To implement AIO-DX, both Controller-0 and Controller-1 are required.
Wind River Studio Conductor runs in the National Data Center (NDC) as an orchestrator and infrastructure automation manager. It integrates with Dell's Bare Metal Orchestrator (BMO) to provide complete end-to-end automation for the full hardware and software stack. Additionally, it provides a centralized point of control for managing and automating application deployment in an environment that is large-scale and distributed.
Studio Conductor receives information from Bare Metal Orchestrator as new cell sites come online. Studio Conductor instructs the system controller (CaaS manager) to install, bootstrap, and deploy Studio Cloud Platform at the cell sites. It supports TOSCA (Topology and Orchestration Specification for Cloud Applications) based blueprint modeling. (Blueprints are policies that enable orchestration modeling.) Studio Conductor uses blueprints to map services to all distributed clouds and determine the right place to deploy. It also includes built-in secret storage to securely store password keys internally, reducing threat opportunities.
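To make the blueprint idea concrete, here is a minimal, purely illustrative sketch of policy-driven placement: a blueprint's placement constraints are matched against the labels of registered sub-clouds to select deployment targets. The data model and every name below are hypothetical; this is not Wind River Studio's actual TOSCA implementation:

```python
# Illustrative sketch of blueprint-driven placement: select the sub-clouds
# whose labels satisfy every constraint in the blueprint's placement policy.
# Hypothetical data model, not the Studio Conductor API.

subclouds = [
    {"name": "cell-site-001", "labels": {"type": "AIO-SX", "region": "west"}},
    {"name": "cell-site-002", "labels": {"type": "AIO-DX", "region": "east"}},
    {"name": "cell-site-003", "labels": {"type": "AIO-SX", "region": "east"}},
]

blueprint = {
    "service": "vDU",
    "placement": {"type": "AIO-SX", "region": "east"},  # policy constraints
}

def select_targets(blueprint: dict, subclouds: list) -> list:
    """Return names of sub-clouds whose labels satisfy every placement constraint."""
    wanted = blueprint["placement"]
    return [sc["name"] for sc in subclouds
            if all(sc["labels"].get(k) == v for k, v in wanted.items())]

print(select_targets(blueprint, subclouds))  # ['cell-site-003']
```

The real system expresses these policies in TOSCA blueprints rather than Python dictionaries, but the matching principle, constraints against site attributes, is the same.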
Studio Conductor can adapt and integrate with existing orchestration solutions. The plug-in architecture allows it to accommodate new and old technologies, so it can easily be extended to accommodate evolving requirements.
Wind River Studio Analytics is an integrated data collection, monitoring, analysis, and reporting tool used to optimize distributed network operations. Studio Analytics specifically solves a unique use case for the distributed edge. It provides visibility and operational insights into the Studio Cloud Platform from a Kubernetes and application workload perspective. Studio Analytics has a built-in alerting system with the ability to integrate with several third-party monitoring systems. Studio Analytics uses technology from Elastic.co as a foundation to reliably and securely ingest data from any source and format, then search, analyze, and visualize it in real time. Studio Analytics also uses Elastic's Kibana product as an open user interface to display the data visually in a dashboard.
Dell Telecom Multi-Cloud Foundation Infrastructure Blocks provide a validated, automated, factory-integrated engineered system that paves the way from zero-touch deployment of 5G telco cloud infrastructure through to the operation and lifecycle management of vRAN and Open RAN sites. All of this contributes to a high-performing network that reduces the cost, time, complexity, and risk of deploying and maintaining a telco cloud for the delivery of 5G services.
To learn more about our solution, please visit the Dell Telecom Multi-Cloud Foundation solutions site.
About the author:
Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an engineering degree in electronics and telecommunications and has worked in the telecommunications industry for about 14 years. He currently resides in Bangalore, India.
Computing on the Edge: Outdoor Deployment Classes
Fri, 02 Dec 2022 20:21:29 -0000
|Read Time: 0 minutes
Ultimately, all the testing involved with GR-63-CORE and GR-1089-CORE is intended to qualify hardware designs that have the environmental, electrical, and safety qualities that allow for installations from the Central Office all the way out to the cell site. For deployments at the cell site, it turns out that NEBS Level 3 is really only the start: the minimum environmental threshold for a controlled cell site environment.
This is where GR-3108-CORE comes into scope. GR-3108-CORE, Generic Requirements for Network Equipment in Outside Plant (OSP), defines the environmental tolerances for equipment deployed throughout a telecom network: from the central office, up the tower at the cell site, and out to the customer premises.
Figure 1. GR-3108-CORE Equipment Classes

The four Classes of equipment defined in GR-3108-CORE are:
Class 1: Equipment in Controlled or Protected Environments
Class 2: Protected Equipment in Outside Environments
Class 3: Protected Equipment in Severe Outside Environments
Class 4: Products in Unprotected Environment directly exposed to the weather
The primary drivers of these classes include:
Figure 2. Typical OSP Enclosure and Concrete Hut

The OSP for Class 1 enclosures includes Controlled Environmental Vaults, huts and cabinets with active heating and cooling, and telecom closets in-building/on-building or residential locations. The requirements for Class 1 installations are very much in line with NEBS Level 3 specifications, the recurring theme for these enclosures being that there is some active means of environmental control. The methods of maintaining a controlled environment are not specified, but the method used must maintain the defined operating temperatures between -5°C (23°F) and 50°C (122°F) and humidity levels between 5 and 85 percent.
Other expectations for Class 1 enclosures include performing an initial, cold, or hot startup throughout the entire temperature range and continued operation after a single fan failure, albeit at a lower upper-temperature threshold.
Figure 3. Example Class 2 Protected Enclosures

The internal enclosures or spaces of a Class 2 OSP have an extended temperature range of -40°C (-40°F) to 65°C (149°F), with humidity levels the same as for Class 1. Typically, while these OSPs continue to protect the hardware from the outside elements, environmental controls are less capable and often involve the use of cooling fans, heat exchangers, and raised fins to dissipate heat. Besides outdoor enclosures, Class 2 environments can also include customer premise locations such as garages, attics, or uncontrolled warehouses.
For hardware designers creating Carrier Grade Servers, this is where it's particularly important to pay attention to the components being used when targeting a Class 2 deployment environment. Many manufacturers provide specifications for the maximum temperature at which an IC component will operate. Typically, this maximum is a die temperature, and the method of heat evacuation is left to the hardware designers, in the form of heat sinks, fans for airflow, and so on.
However, for those attempting compliance with Class 2, the lower temperature range also becomes important because many ICs are not tested for operation below 0°C. The IC temperature grades generally come in commercial (0°C/32°F to 70°C/158°F), Industrial (-40°C/-40°F to 85°C/185°F), Military (-55°C/-67°F to 125°C/257°F), and Automotive grades.
So the specs for Commercial grade ICs may not accommodate the requirements of even a Class 1 OSP. What do designers do? Sometimes you'll see an asterisk on the server spec sheet, indicating that the device can run at the lower temperature range but not start there. In this case, the design can start at 0°C and then provide sufficient heat to keep the ICs warm as the ambient drops through -5°C (23°F).
Designers may also consider including some pre-heater or enclosure heater to bring the device up to 0°C before a startup is allowed or incur the added expense of extended temperature parts.
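The cold-start gating described above can be sketched as a simple control rule: hold off boot, and run the pre-heater, until the enclosure reaches the ICs' rated startup temperature. This is a minimal illustrative sketch assuming a Commercial-grade 0°C startup floor and the Class 1 ambient low of -5°C; the function names and the one-degree-per-step heater model are hypothetical:

```python
# Illustrative cold-start gating: commercial-grade ICs are typically rated
# to start at 0 C, while a Class 1 OSP can reach -5 C, so the design
# pre-heats the enclosure before permitting boot. Hypothetical sketch.

COMMERCIAL_MIN_START_C = 0.0   # typical Commercial IC grade startup floor
CLASS1_MIN_AMBIENT_C = -5.0    # Class 1 lower operating bound (per GR-3108-CORE)

def startup_allowed(enclosure_temp_c: float,
                    min_start_c: float = COMMERCIAL_MIN_START_C) -> bool:
    """Permit boot only once the enclosure is at or above the rated startup temperature."""
    return enclosure_temp_c >= min_start_c

def preheat_steps(start_c: float, step_c: float = 1.0) -> int:
    """Number of heater steps (step_c degrees each) needed before boot is permitted."""
    steps = 0
    t = start_c
    while not startup_allowed(t):
        t += step_c
        steps += 1
    return steps

# Cold morning at a Class 1 site: an enclosure at -5 C needs 5 one-degree steps.
print(preheat_steps(CLASS1_MIN_AMBIENT_C))  # 5
```

The alternative, as noted above, is to pay for extended-temperature (Industrial grade) parts and skip the pre-heat stage entirely.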
Severe is certainly the theme for Class 3 OSPs. In these environments, while the device sits inside an enclosure that protects it from direct sunlight and rain, the enclosure may not be sealed against other outside stresses such as hot, cold, and humidity extremes, dust and other airborne contaminants, salt fog, and so on. Temperatures range from -40°C (-40°F) to 70°C (158°F) and humidity levels from 5% to 95%, with a single-fan-failure requirement of 65°C. Indoor hostile environments, such as boiler rooms, furnace spaces, and attics, also exist that would require Class 3 designed solutions.

Figure 4. Protected Server Cabinet
Figure 5. Class 4 Radio Units

This class of equipment is intended for outdoor deployments, with full exposure to sun, rain, wind, and all the environmental challenges found, for example, at the top of a cell tower. For telecom, Class 4 certification would typically be the domain of antennas and Remote Radio Heads. These units, mounted on towers, buildings, street lamps, and other locations, are fully exposed to the entire spectrum of environmental challenges. Class 4 devices get a bit of a break on temperature, -40°C (-40°F) to 46°C (115°F), due to direct exposure to sunlight, but must tolerate 100% humidity due to exposure to rain.
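For quick reference, the four environmental envelopes above can be collected into a small lookup table. The sketch below encodes the temperature and humidity ranges as quoted in this post (the Class 4 humidity floor is an assumption here; consult GR-3108-CORE for the normative values):

```python
# GR-3108-CORE operating envelopes as quoted in this post (deg C, % relative
# humidity). The Class 4 humidity lower bound is assumed; the spec is normative.
ENVELOPES = {
    1: {"temp": (-5, 50),  "humidity": (5, 85)},    # controlled/protected
    2: {"temp": (-40, 65), "humidity": (5, 85)},    # protected outside
    3: {"temp": (-40, 70), "humidity": (5, 95)},    # severe outside
    4: {"temp": (-40, 46), "humidity": (5, 100)},   # unprotected, weather-exposed
}

def within_class(osp_class: int, temp_c: float, rh_pct: float) -> bool:
    """Check whether a measured condition falls inside a class's operating envelope."""
    env = ENVELOPES[osp_class]
    t_lo, t_hi = env["temp"]
    h_lo, h_hi = env["humidity"]
    return t_lo <= temp_c <= t_hi and h_lo <= rh_pct <= h_hi

print(within_class(1, -10, 50))  # False: below the Class 1 temperature floor
print(within_class(2, -10, 50))  # True: inside the Class 2 envelope
```

A check like this makes the practical point of the classes obvious: a -10°C morning that is routine for a Class 2 cabinet is already outside what a Class 1 (NEBS Level 3 equivalent) server is designed to survive.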
For Carrier Grade Servers, Class 1 (NEBS Level 3 equivalent) is the most common target of designers creating compute, storage, and networking platforms for telecom consumption. Class 2 servers are also achievable, and demand for them may increase as Edge Computing and O-RAN/Cloud RAN deployments become more common. Moving beyond Class 2 will require specialty, more purpose-defined designs.
Computing on the Edge: NEBS Criteria Levels
Tue, 15 Nov 2022 14:43:44 -0000
|Read Time: 0 minutes
In our previous blogs, we’ve explored the type of tests involved to successfully pass the criteria of GR-63-CORE, Physical Protections, GR-1089-CORE, Electromagnetic Compatibility, and Electrical Safety. The goal of successfully completing these tests is to create Carrier Grade, NEBS compliant equipment. However, outside of highlighting the set of documents that compose NEBS, nothing is mentioned of the NEBS levels and the requirements to achieve each level. NEBS levels are defined in Special Report, SR-3580.
Figure 1. NEBS Certification Levels

NEBS Level 3 compliance is expected in most telecom environments outside of a traditional data center. So, what NEBS level do equipment manufacturers aim to achieve?
At first, I created Figure 1 as a pyramid, not inverted, with Level 1 as the base and Level 3 as the peak. However, I reorganized the graphic because Level 1 isn’t really a foundation, it is a minimum acceptable level. Let’s dive into what is required to achieve each NEBS certification level.
NEBS Level 1 is the lowest level of NEBS certification. It provides the minimum level of environmental hardening and stresses safety criteria to minimize hazards for installation and maintenance administrators.
This level is the minimum acceptable level of NEBS environmental compatibility required to preclude hazards and degradation of the network facility and hazards to personnel.
This level includes the following tests:
Level 1 criteria do not assess Temperature/Humidity, Seismic, ESD, or Corrosion.
Operability, enhanced resilience, and environmental tolerances are assessed in Levels 2 and 3.
Figure 2. Map of Seismic Potential in the US
NEBS Level 2 assesses some environmental criteria, but the target deployment is a "normal" environment, such as a data center installation where temperature and humidity are well controlled. These environments typically experience limited impacts from EMI, ESD, and EFTs, and have some protection from lightning, surges, and power faults. There is also some seismic testing performed on the EUT, but only to Zone 2. While there is no direct correlation between seismic zones and earthquake intensity, in the United States Zone 2 generally covers the Rocky Mountains, much of the West, and parts of the Southeast and Northeast regions.
NEBS Level 2 certification may be sufficient for some Central Office (CO) installations but is not sufficient for deployment to Far Edge or Cell Site Enclosures which can be exposed to environmental and electromagnetic extremes, or in regions covered by seismic zones 3 or 4.
Figure 3. Level 3 criteria
NEBS Level 3 certification is the highest level of NEBS certification and is the level expected by most North American telecom and network providers when specifying equipment requirements for installation into controlled environments.
Level 3 is required to provide maximum assurance of equipment operability within the network facility environment.
Level 3 criteria are also suited for equipment applications that demand minimal service interruptions over the equipment’s life.
Full NEBS Level 3 certification can take from three to six months to complete. This includes prepping and delivering the hardware to the lab, test scheduling, performance, analysis of test results, and the production of the final report. If a failure occurs, systems can be redesigned for retesting.
Conclusion
While the environmental, electrical, electromagnetic, and safety specifications described by NEBS Level 3 certification are the minimum required for deployment into a controlled telecom network environment, these specifications are only the beginning for outdoor deployments. The next blog in this series will explore more of these specifications, such as GR-3108-CORE, Generic Requirements for Network Equipment in Outside Plant (OSP). Stay tuned.
Accelerate Telecom Cloud Deployments with Dell Telecom Infrastructure Blocks
Mon, 31 Oct 2022 16:48:10 -0000
|Read Time: 0 minutes
During MWC Vegas, Dell Technologies announced Dell’s first Telecom Infrastructure Blocks co-engineered with our partner Wind River to help communication service providers (CSPs) reduce complexity, accelerate network deployments, and simplify life cycle management of 5G network infrastructure. Their first use cases will be focused on infrastructure for virtual Radio Access Network (vRAN) and Open RAN workloads.
Deploying and supporting open, virtualized, and cloud-native 5G RANs is one of the key requirements to accelerate 5G adoption. The number of options available in 5G RAN design makes it imperative that infrastructures supporting them are flexible, fully automated for distributed operations, and maximally efficient in terms of power, cost, the resources they consume, and the performance they deliver.
Dell Telecom Infrastructure Blocks for Wind River are designed and fully engineered to provide a turnkey experience with fully integrated hardware and software stacks from Dell and Wind River that are RAN workload-ready and aligned with workload requirements. This means the engineered system, once delivered, will be ready for RAN network functions onboarding through a simple and standard workflow avoiding any integration and lifecycle management complexities normally expected from a fully disaggregated network deployment.
The Dell Telecom Infrastructure Blocks for Wind River are a part of the Dell Technologies Multi-Cloud Foundation, a telecom cloud designed specifically to assist CSPs in providing network services on a large scale by lowering the cost, time, complexity, and risk of deploying and maintaining a distributed telco cloud. Dell Telecom Infrastructure Blocks for Wind River comprise:
From technology onboarding to Day 2+ operations, Dell Telecom Infrastructure Blocks streamline CSPs' processes for technology acquisition, design, and management. We have broken these processes down into four stages. Let us examine how Dell Telecom Infrastructure Blocks for Wind River can impact each stage of this journey.
The first stage is the Technology onboarding, where Dell Technologies works with Wind River in Dell’s Solution Engineering Lab to develop the engineered system. Together we design, validate, build, and run a broad range of test cases to create an optimized engineered system for 5G RAN vCU/vDU and Telecom Multi-Cloud Foundation Management clusters. During this stage, we conduct extensive solution testing with Wind River performing more than 650 test cases. This includes validating functionality, interoperability, security, scalability, high availability, and test cases specific to the workload’s infrastructure requirements to ensure this system operates flawlessly across a range of scale and performance points.
We also launched our OTEL Lab (Open Telecom Ecosystem Lab) to allow telecom ecosystem suppliers (ISVs) to integrate or certify their workload applications on Dell infrastructure including Telecom Infrastructure Blocks. Customers and partners working in OTEL can fine-tune the Infrastructure Block to a given CSP’s needs, marrying the efficiency of Infrastructure Block development with the nuances presented in meeting a CSP’s specific requirements.
Continuous improvement in the design of Infrastructure Blocks is enabled by ongoing feedback on the process throughout the life of the solution which can further streamline the design, validation, and certification. This extensive process produces an engineered system that streamlines the operator’s reference architecture design, benchmarking, proof of concept, and end-to-end validation processes to reduce engineering costs and accelerate the onboarding of new technology.
All hardware and software required for this Engineered system are integrated in Dell’s factory and sold and supported as a single system to simplify procurement, reduce configuration time, and streamline product support.
This "shift left" in the design, development, validation, and integration of the stacks means readiness testing and integration are finished sooner in the development cycle than they would have been with more traditional and segregated development and test processes. For CSPs, this method speeds up time to value by reducing the time needed to prepare and validate a new solution for deployment.
Now we go from Technology onboarding to the second phase, pre-production.
From Dell's Solution Engineering Labs, the engineered system moves into the CSP's pre-production environment, where the golden configuration is defined. Rather than receiving a collection of disaggregated components (infrastructure, cloud stacks, automation, and so on), CSPs start with a factory-integrated engineered system that can be quickly deployed in their pre-production test lab. At this stage, customers leverage best practices, design guidance, and lessons learned to create a fully validated stack for their workload. The next step is to pre-stage the telco cloud stack, including the workload, and to prepare for Day 1 and Day 2 by integrating with the customer's CI/CD pipeline and defining and agreeing on the lifecycle management process to support the first office application deployment.
Advancing the flow, deployment into production is accelerated by automation: automating deployment eliminates manual configuration errors and speeds product delivery. Should the CSP need assistance with deployment, Dell's professional services team is standing by to assist, and Dell provides on-site services to rack, stack, and integrate servers into the network.
Day 2+ operations are simplified in several ways. First, the automation provided, combined with the extensive validation testing Dell and Wind River perform, ensures a consistent, telco-grade deployment or upgrade each time. This streamlines daily fault, configuration, performance, and security management in the fully distributed cloud. In addition, Dell Bare Metal Orchestrator automates the detection and remediation of configuration drift. And Wind River Studio Analytics uses machine learning to detect issues proactively, before they become a problem.
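Configuration-drift detection of the kind described works by comparing each site's live configuration against a "golden" desired state and flagging divergences for remediation. The sketch below illustrates only the principle; it is not Bare Metal Orchestrator's implementation, and the setting names are hypothetical:

```python
# Illustrative drift detection: diff a site's reported configuration against
# the golden configuration and report every setting that diverged.
# Hypothetical data; not the Bare Metal Orchestrator API.

golden = {
    "bios.profile": "telco-performance",
    "nic.sriov_vfs": 8,
    "os.kernel": "rt",
}

def detect_drift(golden: dict, actual: dict) -> dict:
    """Return {setting: (expected, found)} for every setting that drifted."""
    return {k: (v, actual.get(k))
            for k, v in golden.items()
            if actual.get(k) != v}

# A site whose SR-IOV VF count was changed out-of-band:
site = {"bios.profile": "telco-performance", "nic.sriov_vfs": 4, "os.kernel": "rt"}
print(detect_drift(golden, site))  # {'nic.sriov_vfs': (8, 4)}
```

In a real system the remediation step would then re-apply the golden value automatically rather than just reporting it.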
Second, Dell’s Solutions Engineering lab validates all-new feature enhancements to the software and hardware including new updates, upgrades, bug fixes, and security patches. Once we have updated the engineered system, we push it via Dell CI/CD pipeline to Dell factory and OTEL Lab. We can also push the update to the CSP's CI/CD pipeline using integrations set up by Dell Services to reduce the testing our customers perform in their labs.
We complement all this by providing unified, single-call support for the entire cloud stack with options for carrier-grade SLAs for service response and restoration times.
Proprietary appliance-based networks are being replaced by best-of-breed, multivendor cloud networks as CSPs adapt their network designs for 5G RAN. As CSPs adopt disaggregated, cloud-native architectures, Dell Technologies is ready to lend a helping hand. With Dell Telecom Multi-Cloud Foundation, we provide an automated, validated, and continuously integrated foundation for deploying and managing disaggregated, cloud-native telecom networks.
Ready to talk? Request a callback.
To learn more about our solution, please visit the Dell Telecom Multi-Cloud Foundation solutions site
About the author:
Gaurav Gangwal works in Dell's Telecom Systems Business (TSB) as a Technical Marketing Engineer on the Product Management team. He is currently focused on 5G products and solutions for RAN, Edge, and Core. Prior to joining Dell in July 2022, he worked for AT&T for over ten years and previously with Viavi, Alcatel-Lucent, and Nokia. Gaurav has an engineering degree in electronics and telecommunications and has worked in the telecommunications industry for about 14 years. He currently resides in Bangalore, India.
Telecom Innovations: Breaking Down the Barriers to DevSecOps
Fri, 02 Sep 2022 15:16:44 -0000
|Read Time: 0 minutes
DevOps, the fusion of software development with IT operations, has been a best practice among development and IT teams for quite some time. More recently, the need to integrate security into the DevOps process has made DevSecOps the new gold standard for software development and operations. This may seem like a great idea on paper, but what happens when the developers, security architects, and network ops teams are not part of the same company? Telecom networks are typically developed by multiple suppliers.
In many cases, telecom software is developed by external vendors in a walled fashion where Communication Service Providers (CSPs) have little visibility into the development process.
The need to adhere to strict telecom standards and models such as the Enhanced Telecom Operations Map (eTOM) and European Telecommunications Standards Institute (ETSI) specifications also compounds the complexity of DevSecOps in telecom. The third barrier is managing a single DevSecOps pipeline while juggling multiple generations of network equipment and configurations.
What happens when there is no unified environment to support DevSecOps processes? You build one. That’s what Dell Technologies did with the recent launch of its Open Telecom Ecosystem Lab (OTEL). With OTEL, telecom operators and software and technology partners can work together using an end-to-end systems approach that spans seamlessly across vendor, lab, staging, and production environments.
OTEL provides everything that CSPs and vendors need to support DevSecOps processes with the new Solutions Integration Platform (SIP) including:
In the last few years, there has been a big push to incorporate continuous integration/deployment (CI/CD) pipelines in the telecommunications industry. This push has been met with resistance because of the following challenges:
Telecom operators' enterprise customers also have limited involvement in software development despite a deep interest in the functionality and outcomes of that software. For the operators, becoming part of the software development process can mean getting services to market sooner with a finished product that meets the needs of end users.
One of the primary goals of OTEL is to deliver telecom innovation as a platform, providing three core capabilities:
Telecom networks are critical infrastructure and have unique security requirements driven by service needs and SLAs, strong regulations and geographical laws, and cyber and data privacy obligations. For 5G and cloud solutions, which involve many vendors, it is important to build a zero trust security architecture that can be validated and tested in an automated, CI/CD-driven approach. It is also important to enable security mechanisms that can automate security tests across each layer of the network. These include:
Integrating both the functional and non-functional requirements of telecom networks, including security, reliability, and performance, is the unique challenge Dell is addressing through its state-of-the-art OTEL. By reducing the complexity of telecom software development and ensuring better integration and collaboration, OTEL gives CSPs and their partners the agility and security they need to deliver the next generation of 5G and edge solutions.
To learn more about OTEL and how you can take advantage of OTEL's state-of-the-art lab environment, contact Dell at Open Telecom Ecosystem Labs (OTEL).
Saad Sheikh is the APJ Lead Systems Architect for Orchestration and NextGen Ops in Dell's Telecom Systems Business (TSB). In this role, he is responsible for supporting partners, NEPs, and customers in simplifying and accelerating network transformation toward open, disaggregated infrastructures and solutions (5G, edge computing, core, and cloud platforms) using Dell's products and capabilities, which build next-generation operational capabilities in multi-cloud, data-driven, ML/AI-supported, and open ways. In addition, as part of the Dell CTO team, he represents Dell in the Linux Foundation, TM Forum, GSMA, ETSI, ONAP, and TIP. He has more than 20 years of industry experience with telcos, system integrators, consulting businesses, and telecom vendors, where he has worked on end-to-end telecom systems (RAN, transport, core, and networks), cloud platforms, automation and orchestration, and intelligent networking.
Bandwidth Guarantees for Telecom Services using SR-IOV and Containers
Mon, 12 Dec 2022 19:14:38 -0000
|Read Time: 0 minutes
With the emergence of Container-native Virtualization (CNV), or the ability to run and manage virtual machines alongside container workloads, Single Root I/O Virtualization (SR-IOV) takes on an important role in the communications industry. Most telecom services require guarantees of capacity, for example a number of simultaneous TCP connections, concurrent voice calls, or similar metrics. Each capacity requirement can be translated into the amount of upload/download data that must be handled and the maximum amount of time that can pass before a service is deemed non-operational. These bounds on data and time must be met end-to-end as a telecom service is delivered, and SR-IOV technology plays a crucial role in meeting these requirements.
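As a concrete, hypothetical illustration of translating a capacity requirement into a bandwidth bound, the sketch below sizes the link needed for a given number of concurrent G.711 voice calls. The per-call rate of roughly 87.2 kb/s (64 kb/s codec payload plus RTP/UDP/IP/Ethernet overhead) is a common planning figure, not something specified in this post; the call count is likewise an assumed example value.

```shell
#!/bin/sh
# Hypothetical sizing: concurrent G.711 calls -> required bandwidth per direction.
# ~87.2 kb/s per call is a common planning figure; rounded up to 88 for headroom.
calls=5000
kbps_per_call=88

# Integer arithmetic: total kb/s divided by 1000 gives Mb/s.
required_mbps=$(( calls * kbps_per_call / 1000 ))
echo "${calls} calls need about ${required_mbps} Mb/s per direction"
```

The same arithmetic works in reverse: given a VF with a fixed bandwidth allocation, dividing by the per-call rate yields the maximum call capacity that the VF can honor under the SLA.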
With SR-IOV available to workloads and VMs, telecom customers can divide the bandwidth of a physical PCIe device (NIC) into virtual functions (VFs), or virtual NICs. Virtual NICs with dedicated bandwidth can then be assigned to individual workloads or VMs, ensuring that SLAs can be fulfilled.
In the illustration above, say we have a 100 Gb NIC that is shared among workloads and VMs on a single hardware server. The bandwidth of a single interface is typically shared among the workloads and VMs, as shown for interface 1. If one workload or VM is extremely bandwidth hungry, it could consume a large portion of the bandwidth, say 50 percent, leaving the other workloads or VMs to share the remaining 50 percent, which could impact the SLAs the telco has under contract with its customers.
To ensure this doesn’t happen, the SR-IOV specification allows the PCIe NIC to be sliced into virtual NICs, or VFs, as shown for interface 2 above. By slicing the NIC interface into VFs, one can specify the bandwidth per VF. For example, 30 Gb of bandwidth could be assigned to each of VF1 and VF2 for the workloads, while VF3–VF5 could divide the remaining bandwidth evenly, or perhaps receive only 5 Gb each, leaving 25 Gb for future workloads or VMs. By specifying bandwidth at the VF level, telcos can guarantee bandwidth for workloads and VMs, thus meeting the SLAs with their customers.
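On a Linux host, the slicing described above maps to two steps: creating the VFs through sysfs and setting per-VF bandwidth with iproute2. The sketch below assumes a hypothetical SR-IOV-capable interface named `ens1f0` and the example allocation from the paragraph above; rates are given to `ip link` in Mb/s, and these commands require root and a NIC/driver with SR-IOV support, so they are shown as a configuration sketch rather than a runnable script.

```shell
# Create 5 virtual functions on the physical NIC
# (interface name ens1f0 is an assumption; check your own with `ip link`)
echo 5 > /sys/class/net/ens1f0/device/sriov_numvfs

# Cap VF0 and VF1 at 30 Gb/s each (max_tx_rate takes Mb/s)
ip link set dev ens1f0 vf 0 max_tx_rate 30000
ip link set dev ens1f0 vf 1 max_tx_rate 30000

# Give VF2-VF4 a 5 Gb/s cap each
for vf in 2 3 4; do
  ip link set dev ens1f0 vf "$vf" max_tx_rate 5000
done

# On NICs whose drivers support it, min_tx_rate reserves a floor rather than a cap:
# ip link set dev ens1f0 vf 0 min_tx_rate 30000

# Verify the per-VF settings
ip link show ens1f0
```

Note that `max_tx_rate` enforces a ceiling on transmit bandwidth; a true guarantee also needs `min_tx_rate` where the driver supports it, and receive-side rate handling is NIC-specific.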
While this high-level description of the mechanics illustrates how to enable the two aspects, SR-IOV for workloads and SR-IOV for VMs, Dell Technologies has a white paper, SR-IOV Enablement for Container Pods in OpenShift 4.3 Ready Stack, which provides the step-by-step details for enabling this technology.