

O-RAN: Paving the Path for Innovation

Hemant Rawat

Fri, 12 Mar 2021 16:29:21 -0000


Radio Access Network (RAN) implementations in mobile networks have traditionally been vendor-specific and implemented with proprietary technology. They do follow the specifications laid out by 3GPP, ITU, IEEE, and other standards bodies, but RAN is complex (as governed by the laws of physics) and meeting the performance KPIs is challenging. So, to meet those KPIs, traditional RAN vendors created proprietary solutions that combine unique software and interfaces with purpose-built hardware designed for sometimes-harsh RAN environments.

The Baseband Unit (BBU), the key component in RAN, is typically designed internally as a “black box”. BBU implementations vary from vendor to vendor. The BBU connects to a proprietary Remote Radio Unit (RRU) through a vendor-specific implementation of the Common Public Radio Interface (CPRI) protocol (see diagram below).

The current implementation of CPRI poses limitations for next-generation RAN architectures. Its fixed RRU-BBU mapping prevents a virtualized BBU architecture; it requires constant message transfers, which complicates the use of Ethernet-based packet technologies; its scaling depends on system bandwidth and antenna count rather than on user data rate; and it imposes stringent latency requirements.
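The scaling limitation can be made concrete with a back-of-the-envelope calculation (a sketch with illustrative numbers, not figures from this post): because CPRI carries time-domain I/Q samples, its line rate is fixed by sample rate, bit width, and antenna count, and it does not drop when user traffic is light.

```python
# Sketch (assumed helper and example numbers): approximate CPRI fronthaul rate.
# CPRI carries 2 components (I and Q) per sample, per antenna, with line-coding
# overhead (16/15 for 8B/10B-free options is used here as a simplification).
def cpri_rate_gbps(sample_rate_msps, iq_bit_width, antennas, coding_overhead=16/15):
    """Approximate CPRI rate: 2 (I+Q) x sample rate x bit width x antennas x overhead."""
    bits_per_s = 2 * sample_rate_msps * 1e6 * iq_bit_width * antennas * coding_overhead
    return bits_per_s / 1e9

# A 100 MHz carrier sampled at 122.88 Msps with 15-bit I/Q:
print(round(cpri_rate_gbps(122.88, 15, 4), 1))   # → 15.7  (4 antennas)
print(round(cpri_rate_gbps(122.88, 15, 64), 1))  # → 251.7 (64 antennas, same user load)
```

The 64-antenna case needs 16x the fronthaul capacity of the 4-antenna case even if both serve identical user traffic, which is why antenna-count scaling is a problem for massive MIMO.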

RAN is essential in enabling the three main use cases that 5G promises:

  • Enhanced Mobile Broadband (eMBB), with peak data rates of 10-20 Gbps (UL/DL)
  • Ultra-Reliable Low Latency Communications (URLLC), which specifies less than one millisecond of air-interface latency
  • Massive Machine Type Communications (mMTC), which must support more than one million devices per square kilometer
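The three targets above can be captured as simple constants (a sketch; the helper and the sample deployment figures are assumptions, while the thresholds come from the ITU targets quoted above):

```python
# ITU/3GPP 5G use-case targets quoted in the text:
EMBB_PEAK_DL_GBPS = 20.0           # eMBB peak data rate (upper end of 10-20 Gbps)
URLLC_LATENCY_MS = 1.0             # URLLC air-interface latency bound
MMTC_DEVICES_PER_KM2 = 1_000_000   # mMTC connection-density target

def meets_5g_targets(peak_dl_gbps, air_latency_ms, devices_per_km2):
    """Return which of the three 5G use-case targets a candidate deployment meets."""
    return {
        "eMBB": peak_dl_gbps >= EMBB_PEAK_DL_GBPS,
        "URLLC": air_latency_ms < URLLC_LATENCY_MS,
        "mMTC": devices_per_km2 >= MMTC_DEVICES_PER_KM2,
    }

# A hypothetical dense-urban site:
print(meets_5g_targets(peak_dl_gbps=20, air_latency_ms=0.5, devices_per_km2=1_200_000))
```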


To realize the above use cases, the BBU needs to be disaggregated into various functional splits so that site-specific (backhaul, spectrum, etc.) and use case-specific (latency, jitter, throughput, etc.) requirements can be met.

O-RAN is an initiative to open the 5G RAN ecosystem by standardizing the interfaces. This enables the decoupling of hardware and software, which provides flexible deployment options to telcos and drives innovation across the ecosystem. 

The O-RAN Alliance seeks to achieve 5G’s true potential through various working groups, including one focused on the fronthaul interface between the radio unit (RU) and distributed unit (DU), as shown below:

The O-RAN Alliance is committed to providing an open, software-driven intelligent RAN. New interfaces are introduced in support of this vision, such as Split Option 7-2x, which creates a single split point that supports variable data rates and latencies. The 7-2x option scales on "streams," which allows telcos to plan for scalable bandwidth transport using a higher number of antennas while pooling their existing fiber resources across multiple sites. The open interface also simplifies design, because fewer user-specific parameters are used at the 7-2x split, and improves coordination gains because of the lower-layer split.
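The difference from antenna-scaled CPRI transport can be sketched numerically (an illustration with assumed helper and numbers, not figures from this post): with a streams-based split, the fronthaul carries frequency-domain data per spatial stream, so the required rate tracks the served data rate rather than the raw antenna count.

```python
# Sketch (simplified model, 10% framing overhead assumed): fronthaul rate for a
# streams-based split such as 7-2x is proportional to the spatial streams served.
def split_7_2x_rate_gbps(streams, per_stream_gbps, overhead=1.1):
    """Approximate fronthaul rate when transport scales on streams, not antennas."""
    return streams * per_stream_gbps * overhead

# A 64-antenna massive-MIMO panel serving 8 spatial streams at ~1.5 Gbps each
# needs fronthaul sized for the 8 streams, not the 64 antennas:
print(round(split_7_2x_rate_gbps(streams=8, per_stream_gbps=1.5), 1))  # → 13.2
```

This is why the 7-2x split lets operators plan transport capacity around traffic rather than antenna hardware.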

Operators can choose the appropriate split based on bandwidth and latency between the DU and RU. This architecture encourages pooling; multiple RUs can be served through a single DU in a ring-based packet fronthaul architecture, providing a cloud-like deployment that also increases cost efficiency by reducing power consumption.

A choice of fronthaul options gives operators the flexibility to design their 5G network based on user requirements and site-specific constraints (e.g., power, space, connectivity, coverage, capacity), whether they choose O-RAN’s Split Option 7-2, SCF’s Option 6, or 3GPP’s Option 2. In addition to selecting the appropriate functional split, operators can also mix and match software and hardware components in their fronthaul architecture from best-of-breed vendors. The figure below depicts the implementation for realizing a URLLC/mMTC use case with strict latency and jitter requirements. A different implementation may be warranted for the eMBB/macro site use case, where DU+RU or CU+DU combinations will be deployed.


O-RAN enables the radio portion of the network to run on x86 servers with cloud-based software in a multivendor environment. This environment uses programmable, open interfaces that enable intelligent radio control through artificial intelligence (AI). This will enable the deployment of edge computing resources closer to the radio (RU/DU/CU), delivering reduced latency to customers and reduced long-haul transport costs. 

There will likely be some teething issues around radio performance, which need to be addressed collectively by standards bodies, vendors, and operators. Other areas for discussion and consideration include brownfield deployments where 2G/3G/4G radios co-exist within a single RAN; higher-order MIMO with multiple carriers per sector; Dynamic Spectrum Sharing (DSS); 5G NR carrier aggregation; synchronization between remote radio heads (RRH) and BBU; and the weight of antennas.

Why run O-RAN on Dell EMC?

  • Dell Technologies is a member of the O-RAN Alliance and we play an active role in numerous working groups.
  • The Dell EMC PowerEdge portfolio of servers is a proven solution deployed in RANs worldwide:
    1. Dell EMC PowerEdge XE2420 can accommodate up to four accelerators and supports up to 92TB of storage.
    2. Dell EMC PowerEdge R640 runs CU/DU functions. It offers a high degree of computing density in a 1U two-socket rack server form factor.
    3. Dell EMC PowerEdge R740 is a 2U server that offers a high degree of expandability for networking bandwidth and hardware acceleration.
    4. Dell PowerEdge XR2 rack server is a 1U ruggedized server that can be used in locations where additional environmental constraints are present.
  • Select Dell EMC PowerEdge servers are carrier-grade, supporting extended-temperature operation with special thermal tables and fan-control algorithms.
  • Dell EMC PowerEdge servers support all virtualization technologies.
  • The Dell EMC PowerEdge family has demonstrated the benefits of wireless protocol offload onto field-programmable gate arrays (FPGAs).
  • Dell EMC PowerEdge servers include a hardware-based root of trust with strong security checks embedded via integrated Dell Remote Access Controller (iDRAC).

O-RAN must be a collaborative effort among all players contributing towards interoperability, integration, testing, and radio optimization (e.g., interference, intermodulation, beamforming, beam steering, number of bands). Automation using real-time and non-real-time radio intelligent controller (RT-RIC/NRT-RIC) capabilities will also be a key enabler. These efforts will build enough confidence among telcos for the mass adoption of O-RAN technology in all possible deployment scenarios, including dense urban deployments. Dell Technologies is committed to helping Telco partners adopt O-RAN for all of these deployments. 



Bandwidth Guarantees for Telecom Services using SR-IOV and Containers

John Williams

Fri, 12 Mar 2021 16:29:21 -0000


With the emergence of Container-native Virtualization (CNV), or the ability to run and manage virtual machines alongside container workloads, Single Root I/O Virtualization (SR-IOV) takes on an important role in the communications industry. Most telecom services require guarantees of capacity, e.g., the number of simultaneous TCP connections, concurrent voice calls, or similar metrics. Each telecom service capacity requirement can be translated into the amount of upload/download data that must be handled, and the maximum amount of time that can pass before a service is deemed non-operational. These bounds on data and time must be met end-to-end as a telecom service is delivered. SR-IOV technology plays a crucial role in meeting these requirements.

With SR-IOV available to workloads and VMs, telecom customers can divide the bandwidth provided by a physical PCIe device (NIC) into virtual functions (VFs), or virtual NICs. Virtual NICs with dedicated bandwidth can then be assigned to individual workloads or VMs, ensuring that SLAs can be met.

In the illustration above, say we have a 100 Gb/s NIC that is shared among workloads and VMs on a single hardware server. The bandwidth on a single interface is typically shared among the workloads and VMs, as shown for interface 1. If one workload or VM is extremely bandwidth hungry, it could consume a large portion of the bandwidth, say 50%, leaving the other workloads or VMs to share the remaining 50%, which could impact the SLAs the telco has under contract with its customers.

To ensure this doesn’t happen, the SR-IOV specification allows the PCIe NIC to be sliced into virtual NICs, or VFs, as shown with interface 2 above. By slicing the NIC interface into VFs, one can specify the bandwidth per VF. For example, 30 Gb/s could be allocated to VF1 and VF2 for the workloads, while VF3–VF5 could share the remaining bandwidth evenly, or perhaps be given only 5 Gb/s each, leaving 25 Gb/s for future VMs or workloads. By specifying bandwidth at the VF level, telcos can guarantee bandwidth for workloads or VMs, thus meeting the SLAs with their customers.
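The allocation above can be sketched as a simple capacity check (the helper name and allocation plan are illustrative assumptions, not from the white paper; on Linux the actual per-VF cap is applied with `ip link set dev <pf> vf <n> max_tx_rate <Mb/s>`):

```python
# Sketch: slice a 100 Gb/s physical NIC (PF) into VFs with per-VF rate caps,
# refusing any plan whose caps oversubscribe the physical link.
PF_CAPACITY_GBPS = 100

def allocate_vfs(requests_gbps):
    """Map VF name -> guaranteed rate in Gb/s; raise if requests exceed the PF."""
    if sum(requests_gbps) > PF_CAPACITY_GBPS:
        raise ValueError("VF rate caps exceed physical NIC capacity")
    return {f"vf{i}": rate for i, rate in enumerate(requests_gbps)}

# Two 30 Gb/s workload VFs plus three 5 Gb/s VM VFs, as in the example above:
plan = allocate_vfs([30, 30, 5, 5, 5])
print(plan)
print("headroom:", PF_CAPACITY_GBPS - sum(plan.values()), "Gb/s")  # → 25 Gb/s
```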

While this high-level description of the mechanics illustrates how to enable both aspects (SR-IOV for container workloads and SR-IOV for VMs), Dell Technologies has a white paper, SR-IOV Enablement for Container Pods in OpenShift 4.3 Ready Stack, which provides the step-by-step details for enabling this technology.
