
Publication on "SAMUS: Slice-Aware Machine Learning-based Ultra Reliable Scheduling" to be presented at the IEEE flagship conference ICC 2021

5G network slicing is particularly important for the critical infrastructure communications researched at the CNI, especially in the context of so-called "mixed-critical" services, where multiple service types such as Ultra-Reliable Low-Latency Communication (URLLC) and Enhanced Mobile Broadband (eMBB) are envisioned to share a single physical communication network. To balance the needs of low-latency slices and demanding high-bitrate best-effort slices, the data-driven scheduler SAMUS was developed and evaluated at the CNI. Based on Machine Learning, it effectively minimizes latency for critical infrastructure slices while providing the maximum possible data rate for the other participants in the network.


The developed system consists of two major components: the SAMUS scheduler itself as a prototype system and, additionally, a 5G Resource Grid Simulation (5G-RGS) framework to evaluate the SAMUS prototype based on state-of-the-art 5G Release 16 numerology (cf. Fig. 1). Using internal and external data sources, such as historical channel conditions and historical data traffic demands per User Equipment (UE), the SAMUS scheduler pre-allocates resources for the critical URLLC slices. As a result, UEs in URLLC slices can send data without first requesting resources from the scheduler, effectively reducing the scheduling latency to almost 0 ms.
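The pre-allocation idea can be illustrated with a minimal sketch. This is not the authors' implementation: the class and parameter names are invented for illustration, and a simple moving average stands in for the machine-learning predictor. The principle is the same: predict each URLLC UE's demand from its history, reserve that many resource blocks as a configured grant, and hand the remainder to the best-effort slices.

```python
from collections import deque

TOTAL_RBS = 100    # resource blocks per slot (assumed grid size)
HISTORY_LEN = 8    # number of past slots used for prediction

class PredictivePreallocator:
    """Illustrative configured-grant pre-allocation for URLLC UEs."""

    def __init__(self, urllc_ues):
        # per-UE history of recent demands, in resource blocks
        self.history = {ue: deque(maxlen=HISTORY_LEN) for ue in urllc_ues}

    def observe(self, ue, demand_rbs):
        """Record the demand a URLLC UE actually had in the last slot."""
        self.history[ue].append(demand_rbs)

    def predict(self, ue):
        """Moving-average demand prediction (stand-in for the ML model)."""
        h = self.history[ue]
        return round(sum(h) / len(h)) if h else 0

    def allocate(self):
        """Pre-allocate grants; leftover RBs go to the best-effort slices."""
        grants = {ue: self.predict(ue) for ue in self.history}
        best_effort_rbs = max(TOTAL_RBS - sum(grants.values()), 0)
        return grants, best_effort_rbs
```

Because the grants are issued before any scheduling request arrives, a URLLC UE can transmit immediately in its reserved blocks, which is what removes the request/grant round trip from the latency budget.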


Figure 1: Overview of the developed system comprising inputs, outputs, and modules

While a static allocation of resources could achieve the same low latencies, many resources are wasted for the best-effort slices if the statically assigned resources are not utilized by the critical slices. The respective trade-offs of the different strategies are depicted in Fig. 2.

As can be seen, the best trade-off is achieved by the SAMUS system, which utilizes so-called Configured Grants based on data-driven prediction methods. By reducing the scheduling latency, a large margin for the remaining end-to-end latency components is provided, while resource waste is still reduced, thus maintaining high spectral efficiency.
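The core of this trade-off can be made concrete with a toy calculation (the numbers are invented for illustration and are not results from the paper). A static scheme must reserve for the peak demand of the critical slice in every slot; a prediction-based grant can track the actual demand, so fewer reserved-but-unused blocks are withheld from the best-effort slices.

```python
# Per-slot demand of one URLLC slice, in resource blocks (illustrative)
actual_demand = [3, 8, 2, 6, 4]

# Static scheme: always reserve the peak demand
static_grant = [max(actual_demand)] * len(actual_demand)

# Prediction-based configured grant tracking the demand (assumed values)
adaptive_grant = [4, 7, 3, 6, 5]

def wasted_rbs(grants, demand):
    """RBs reserved for the critical slice but left unused, i.e.
    capacity withheld from the best-effort slices."""
    return sum(max(g - d, 0) for g, d in zip(grants, demand))

static_waste = wasted_rbs(static_grant, actual_demand)
adaptive_waste = wasted_rbs(adaptive_grant, actual_demand)
```

Both schemes give the critical slice immediate access to its reserved blocks, but the adaptive grant wastes far fewer of them, which is the spectral-efficiency advantage Fig. 2 illustrates.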


Figure 2: Average best-effort data rates versus mean and standard deviation of mission-critical slice latencies, compared for the different evaluated modes. The arrows indicate margins for the remaining end-to-end latency components.

The paper containing the results has been accepted for presentation at the upcoming 2021 IEEE International Conference on Communications (ICC) in June.