AI/ML strategy: data-driven improvements in O-RU efficiency
The scope for applying AI/ML techniques to network management and operational optimization is vast. The focus here is on features that minimize TCO by reducing energy consumption and its associated costs. AI/ML is critical to energy-efficient network optimization because of the wide range of applicable techniques and their impact on the network's ability to handle live traffic while ESFs are active. In current state-of-the-art network architectures, these techniques are implemented at the higher layers of the O-RAN stack, in RAN intelligent controllers (RICs), which have the best visibility into E2E network behavior.
Deriving optimal policies from the vast amounts of data generated by modern networks would be a daunting task if operators had to craft them manually. Proven AI/ML techniques, however, can assimilate seemingly disparate data points and recommend an optimal network policy with respect to predetermined objectives. Network policy objectives often compete, yielding a multi-dimensional optimization problem whose search space is a high-dimensional hyperspace with no closed-form solution. The large dimensionality and degrees of freedom, the sizeable amounts of data, and the need for scalability motivate the use of computationally efficient AI/ML techniques for discerning patterns in the data.
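One common way to handle such competing objectives is to scalarize them into a weighted score and search over candidate policies subject to hard KPI constraints. The sketch below illustrates this idea; the policy names, metric values, weights, and latency bound are invented for illustration and do not come from a real network.

```python
# Hypothetical sketch: scalarizing a multi-objective network policy search.
# All candidate policies, weights, and the latency bound are illustrative.

def score(policy, weights):
    """Weighted sum of normalized objectives: higher is better."""
    return sum(weights[k] * policy[k] for k in weights)

def best_policy(candidates, weights, max_latency_ms=20.0):
    """Pick the highest-scoring policy that respects a hard latency constraint."""
    feasible = [p for p in candidates if p["latency_ms"] <= max_latency_ms]
    return max(feasible, key=lambda p: score(p, weights))

candidates = [
    {"name": "deep_sleep",  "energy_saving": 0.60, "throughput": 0.70, "latency_ms": 25.0},
    {"name": "micro_sleep", "energy_saving": 0.25, "throughput": 0.95, "latency_ms": 8.0},
    {"name": "always_on",   "energy_saving": 0.00, "throughput": 1.00, "latency_ms": 5.0},
]
weights = {"energy_saving": 0.5, "throughput": 0.5}

chosen = best_policy(candidates, weights)
print(chosen["name"])  # micro_sleep: deep_sleep violates the latency bound
```

In practice the candidate set is far too large to enumerate, which is where learned policies replace this brute-force search; the scalarization-plus-constraints structure, however, carries over.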
Methods such as ASM can leverage E2E AI/ML techniques and innovations in O-DU/O-RU design to reduce energy consumption. For the real-time actuation of these solutions, evaluation methods anchored on "Digital Twins" [7] of the operational network can be immensely valuable. A network Digital Twin can emulate the implications of the possible ESF actuation actions, enabling risk assessment and performance-impact evaluation before any change is made to the live network. For ASM specifically, the actual control strategy is carried out as described in the following sections.
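The twin-gated actuation pattern described above can be sketched as follows: each candidate ESF action is run through a simplified twin model of the cell, and only actions whose predicted KPI impact stays within bounds are allowed through. The twin model, action names, and thresholds here are toy assumptions, not a real twin implementation.

```python
# Hypothetical sketch of twin-gated ESF actuation: candidate actions are
# evaluated against a simplified "digital twin" model before being applied.
# The toy twin model, action set, and KPI threshold are illustrative.

def twin_predict_drop_rate(load, capacity_fraction):
    """Toy twin: drops appear once offered load exceeds residual capacity."""
    return max(0.0, load - capacity_fraction)

def safe_actions(actions, current_load, max_drop_rate=0.01):
    """Keep only ESF actions whose predicted KPI impact stays within bounds."""
    return [a for a in actions
            if twin_predict_drop_rate(current_load, a["capacity"]) <= max_drop_rate]

actions = [
    {"name": "carrier_shutdown", "capacity": 0.50, "energy_saving": 0.40},
    {"name": "symbol_blanking",  "capacity": 0.90, "energy_saving": 0.10},
]
# At 60% load, shutting a carrier (50% residual capacity) would cause drops.
print([a["name"] for a in safe_actions(actions, current_load=0.60)])
# ['symbol_blanking']
```

A production twin would of course model far richer dynamics (mobility, interference, neighbor load), but the gating structure is the same: simulate first, actuate only what passes.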
To incorporate state-of-the-art AI/ML techniques that determine when the O-DU/O-RU can be put into a lower energy consumption state, one must first establish how aggressively an energy-efficiency policy may be pursued without compromising other network KPIs. The role of an intelligent controller for the O-DU/O-RU is then to determine which power consumption (sleep) state is optimal relative to the current network state, enabling higher levels of energy efficiency.
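The trade-off between sleep depth and KPI risk can be made concrete with a simple policy sketch: predicted load maps to a sleep state, and a single "aggressiveness" knob shifts the thresholds. The state names, thresholds, and knob semantics below are illustrative assumptions, not O-RAN-defined values.

```python
# Hypothetical sketch: mapping predicted load to a sleep state, with an
# "aggressiveness" knob trading energy savings against KPI risk.
# State names and thresholds are illustrative, not standardized values.

SLEEP_STATES = ["active", "micro_sleep", "light_sleep", "deep_sleep"]

def select_sleep_state(predicted_load, aggressiveness=1.0):
    """Deeper sleep at lower load; higher aggressiveness raises the
    thresholds so deeper states are entered at higher loads."""
    deep, light, micro = (t * aggressiveness for t in (0.05, 0.20, 0.50))
    if predicted_load < deep:
        return "deep_sleep"
    if predicted_load < light:
        return "light_sleep"
    if predicted_load < micro:
        return "micro_sleep"
    return "active"

print(select_sleep_state(0.10))                      # light_sleep
print(select_sleep_state(0.10, aggressiveness=0.4))  # micro_sleep (more cautious)
```

A learned controller would replace the fixed thresholds with a policy trained on observed KPI impact, but the operator-facing knob that bounds its aggressiveness remains a key design element.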
Furthermore, since the actions taken by one O-DU/O-RU also affect the load experienced by neighboring units, such decisions should be made at a level with full visibility across the network cluster. This makes the RIC the ideal entity for these decisions, allowing it to combine learnings from the energy consumption characteristics of not just a single O-RU but a multitude of units in the RF neighborhood.
However, given the timescale differences between ESFs, a longer control loop through the RIC is not always the ideal decision-making approach, and some micro-sleep ESFs may best be determined within the O-RU itself. To that end, Figure 2 presents an AI/ML-driven sleep control architecture incorporating these design principles.
Consideration of the different timescales, and the need to make some decisions with low latency, leads to a dual-tiered (potentially multi-tiered, depending on the complexity of the implemented ESFs) architecture for the sleep control engine. The proposed solution includes the following within the control engines:
- A macro-sleep control engine that actuates longer-timescale ESFs based on predicted traffic trends
- A micro-sleep control engine that actuates short-timescale ESFs with minimal latency, close to the O-RU
Each engine is responsible for the micro- or macro-level behavior of the O-DU/O-RU, so that persistent traffic patterns over any timescale can be appropriately exploited. In doing so, the engines are aided by highly accurate traffic prediction engines that leverage both temporal and spatial correlations in traffic demand, as shown in Figure 2. Because the two sleep control engines activate ESFs on different timescales, their actions must be synchronized based on the time granularity of the ESFs and the predicted traffic.
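One simple way to keep the two engines synchronized is a bounding contract: the macro engine sets the maximum sleep depth allowed for an epoch from predicted traffic, and the micro engine makes fast per-slot decisions that never exceed that bound. The sketch below illustrates this contract; the timescales, state depths, and toy decision rules are assumptions for illustration only.

```python
# Hypothetical sketch of the dual-tiered control idea: a macro engine bounds
# the sleep depth per long epoch from predicted traffic, and a micro engine
# makes fast per-slot decisions capped by that bound.
# Depths, thresholds, and decision rules are illustrative assumptions.

DEPTHS = {"active": 0, "micro_sleep": 1, "deep_sleep": 2}

def macro_decision(predicted_epoch_load):
    """Slow loop (e.g. RIC timescale): bound sleep depth for the whole epoch."""
    if predicted_epoch_load < 0.1:
        return "deep_sleep"
    if predicted_epoch_load < 0.5:
        return "micro_sleep"
    return "active"

def micro_decision(slot_load, macro_bound):
    """Fast loop (e.g. within the O-RU): per-slot choice, capped by the bound."""
    wanted = "micro_sleep" if slot_load == 0.0 else "active"
    return wanted if DEPTHS[wanted] <= DEPTHS[macro_bound] else macro_bound

bound = macro_decision(predicted_epoch_load=0.3)   # micro_sleep allowed this epoch
print([micro_decision(load, bound) for load in (0.0, 0.4, 0.0)])
# ['micro_sleep', 'active', 'micro_sleep']
```

The contract keeps the fast loop safe even when the slow loop's prediction is stale: the micro engine can always fall back to "active" within the bound, but can never sleep deeper than the macro engine has authorized.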
Furthermore, conflicts may arise between modules that optimize different aspects of the network through data-driven approaches. For example, a throughput maximization module and a network energy savings module may conflict over the adaptive capacity level required for a given base station. Such conflicts can often be resolved by adding adequate constraints to the operational optimization strategy, or by setting action precedence according to a global optimization policy. Dell's network policy design is flexible enough to accommodate such changes in line with the network operator's operational objectives, without requiring major re-designs.
Actuation of the ESFs may affect network KPIs such as latency, throughput, drop rates, and handover performance. RAN vendors must recognize that every customer and network has unique deployment subtleties and network optimization goals, as captured in the network intent. O-RAN creates opportunities to optimize and harmonize these objectives while delivering TCO gains during systems integration and benchmarking.