This ITU-R draft report is not complete or agreed upon at this time. Therefore, all of the text below is subject to change. AI is expected to play a significant role in future IMT systems.
International Mobile Telecommunications (IMT) systems are mobile broadband systems, including IMT-2000 (3G), IMT-Advanced (4G) and IMT-2020 (5G, whose RIT/SRIT radio interface specifications are contained in Recommendation ITU-R M.2150, previously known as IMT-2020.SPECS).
IMT-2000 provides access, by means of one or more radio links, to a wide range of telecommunication services supported by the fixed telecommunication networks (e.g. PSTN/Internet) and other services specific to mobile users. Since the year 2000, IMT-2000 has been continuously enhanced, and Recommendation ITU-R M.1457, which provides the detailed radio interface specifications of IMT-2000, has been updated accordingly. New features and technologies were introduced to IMT-2000 which enhanced its capabilities.
IMT-Advanced is a mobile system that includes new capabilities of IMT going far beyond those of IMT-2000, as well as capabilities for high-quality multimedia applications within a wide range of services and platforms, providing a significant improvement in the performance and quality of current services. IMT-Advanced systems can work in low- to high-mobility conditions and over a wide range of data rates, in accordance with user and service demands in multiple user environments. Such systems provide access to a wide range of telecommunication services, including advanced mobile services, supported by mobile and fixed networks, which are generally packet-based. Recommendation ITU-R M.2012 provides the detailed radio interface specifications of IMT-Advanced.
ITU-R studied technology trends in preparation for the development of IMT-Advanced and IMT-2020, and the results were documented in Reports ITU-R M.2038 and ITU-R M.2320, respectively.
Since the approval of Report ITU-R M.2320 in 2014, there have been significant advances in IMT technologies and the deployment of IMT systems. The capabilities of IMT systems are being continuously enhanced in line with user trends and technology developments. IMT-2020 systems include new capabilities of IMT that go beyond those of IMT-2000 and IMT-Advanced and make IMT-2020 more efficient, faster, more flexible and more reliable when providing diverse services in the intended usage scenarios, including enhanced Mobile Broadband (eMBB), ultra-reliable low-latency communication (URLLC) and massive machine-type communication (mMTC).
This Report provides information on the technology trends of terrestrial IMT systems, considering the time-frame 2023-2030 and beyond. The technologies described in this Report are collections of possible technology enablers which may be applied in the future. This Report does not preclude the adoption of other technologies that exist or may appear in the future; newly emerging technologies are expected.
This Report provides a broad view of future technical aspects of terrestrial IMT systems considering the time frame up to 2030 and beyond, characterized with respect to key attributes and alignment with relevant driving factors. It includes information on technical and operational characteristics of terrestrial IMT systems, including the evolution of IMT through advances in technology and spectrally efficient techniques, and their deployment.
New services and application trends:
The development of IMT systems for 2030 and beyond calls for a thorough reconsideration of several types of interactions. The roles of modularity and complementarity of new technological solutions become increasingly important in the development of increasingly complex systems. The use of data and algorithms, such as AI, will play an important role, and technological complementarities are required to ensure that technology innovations complement each other. This is particularly important as IMT for 2030 and beyond can be seen as a pervasive general-purpose system, rather than simply an enabling technology, resulting in complex technical dependencies.
The role of the users of new services and applications is important in the technology development of IMT for 2030 and beyond: users will need access to the services, the required devices and the knowledge to use them, and attention must also be paid to non-users and the potential reasons for their exclusion. Users' opportunities to actively participate as experimenters and developers will increase through a deeper understanding of technologies and skills, allowing them to shape the technologies for personalized needs.
Key new services and application trends for IMT for 2030 and beyond can be summarized as follows:
– Networks support enabling services that help to steer communities and countries towards reaching the UN SDGs
– Customization of user experience will increase with the help of user-centric resource orchestration models
– Localized demand–supply–consumption models will become prominent at a global level
– Community-driven networks and public–private partnerships (PPP) will bring about new models for future service provisioning
– Networks will have a strong role in various vertical and industrial contexts
– Market entry barriers will be lowered by the decoupling of technology platforms, making it possible for multiple entities to contribute to innovations
– Empowering citizens as knowledge producers, users and developers will contribute to a process of human-centred innovation, contributing to pluralism and increased diversity
– Privacy will be strongly influenced by the increased platform data economy or sharing economy, emergence of intelligent assistants (AI), connected living in smart cities, transhumanism, and digital twins
– Monitoring and steering of the circular economy will be possible, helping to create a better understanding of the sustainable data economy
– Sharing- and circular-economy-based co-creation will promote sustainable interaction also with existing resources and processes
– The development of products and technologies that innovate to zero, for example zero-waste and zero-emission technologies, will be promoted
– Immersive digital realities will facilitate novel ways of learning, understanding, and memorizing in several fields of science.
The role of IMT for 2030 and beyond will be to connect a multitude of devices, processes and humans to a global information grid in a cognitive fashion, offering new opportunities for various verticals. Considering their different development cycles, many of the potential advances and vertical transformations will continue to occur in the beyond-2030 era. The trend towards higher data rates will continue going towards 2030, leading to peak data rates approaching the Tbit/s regime indoors, which will require large available bandwidths, giving rise to (sub-)THz communications. On the other hand, a large portion of the verticals' data traffic will be measurement-based or actuation-related small data, which in many cases requires extremely low latency in rapid control loops, necessitating short over-the-air latencies to allow time for computation and decision making. At the same time, the reliability requirements of many vertical applications will be stringent. Industrial devices and processes, future haptic applications and multi-stream holographic applications require timing synchronization, setting tight requirements for transmission jitter. In the future, there will be use cases that require extreme performance, as well as new combinations of requirements that do not fall into the three categories of IMT-2020: eMBB, URLLC and massive machine-type communication (mMTC). Some of these use cases will require wide coverage, whereas others are confined to small areas.
The three usage scenarios described for IMT-2020, i.e. eMBB, mMTC and URLLC, will remain important, and new use cases and applications should all be taken into account for the continuing evolution, especially those driving technology development and reflecting future requirements.
Services and trend opportunities:
– Holographic Communications
Holographic displays are the next evolution in multimedia experience delivering 3D images from one or multiple sources to one or multiple destinations, providing an immersive 3D experience for the end user. Interactive holographic capability in the network will require a combination of very high data rates and ultra-low latency. The former arises because a hologram consists of multiple 3D images, while the latter is rooted in the fact that parallax is added so that the user can interact with the image, which also changes with the viewer’s position.
Holographic communication provides a real-time three-dimensional representation of people, things and their surroundings in a remote scenario. It requires at least an order of magnitude higher transmission rate and powerful 3D display capability.
– Tactile and Haptic Internet Applications
Advanced robotics scenarios in manufacturing need a maximum latency target in a communication link of 100 microseconds (µs), and round-trip reaction times of 1 millisecond (ms). Human operators can monitor the remote machines by VR or holographic-type communications, and are aided by tactile sensors, which could also involve actuation and control via kinaesthetic feedback.
Through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication and coordination, autonomous driving can result in a large reduction of road accidents and traffic jams. Latency in the order of a few ms will likely be needed for collision avoidance and remote driving.
Tele-diagnosis, remote surgery and telerehabilitation are just some of the many potential applications in healthcare. Tele-diagnostic tools and medical expertise/consultation could be available anywhere and anytime, regardless of the location of the patient and the medical practitioner. Remote and robotic surgery is an application where a surgeon receives real-time audio-visual feeds of the patient being operated on in a remote location. The technical requirements for haptic internet capability cannot be fully provided by current systems.
– Network and Computing Convergence
Mobile edge computing (MEC) will be deployed as part of 5G networks, and this architecture will continue into IMT networks for 2030. When a client requests a low-latency service, the network may direct the request to the nearest edge computing site. For computation-intensive applications, and owing to the need for load balancing, multiple edge computing sites may be involved, and their computing resources must be utilized in a coordinated manner. Augmented reality/virtual reality (AR/VR) rendering, autonomous driving and holographic-type communications are all candidates for edge cloud coordination.
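The site-selection logic described above can be sketched in a few lines. This is an illustrative toy model only, not part of the Report: the site names, latencies and capacities are hypothetical, and a real MEC orchestrator would use far richer state.

```python
# Illustrative sketch (hypothetical values): choosing an edge computing site
# for a latency-sensitive request while balancing load across sites.

def select_edge_site(sites, max_latency_ms):
    """Pick the least-loaded site whose latency meets the service bound."""
    eligible = [s for s in sites if s["latency_ms"] <= max_latency_ms]
    if not eligible:
        return None  # no site can meet the latency target
    # Coordinate resources: prefer the site with the lowest utilization,
    # breaking ties by lower latency.
    return min(eligible,
               key=lambda s: (s["load"] / s["capacity"], s["latency_ms"]))

sites = [
    {"name": "metro-edge-1", "latency_ms": 2,  "load": 90, "capacity": 100},
    {"name": "metro-edge-2", "latency_ms": 4,  "load": 30, "capacity": 100},
    {"name": "regional-dc",  "latency_ms": 15, "load": 10, "capacity": 1000},
]

chosen = select_edge_site(sites, max_latency_ms=5)
print(chosen["name"])  # metro-edge-1 is overloaded, so load balancing picks metro-edge-2
```

The point of the sketch is the coordination step: the nearest site is not always chosen, because utilization across the eligible sites is taken into account.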
– Extremely High Rate Information Access
Access points in metro stations, shopping malls and other public places may provide information-access kiosks. The data rates for these kiosks could be up to 1 Tbit/s, providing fibre-like speeds. They could also serve the backhaul needs of millimetre-wave (mmWave) small cells. Co-existence with contemporaneous cellular services, as well as security, seem to be the major issues requiring further attention in this direction.
– Connectivity for Everything
Scenarios include real-time monitoring of buildings, cities, the environment, cars and transportation, roads, critical infrastructure, water and power, etc. The internet of bio-things, through smart wearable devices and intra-body communication achieved via implanted sensors, will drive the need for connectivity much beyond mMTC.
It is anticipated that private networks, application- or vertical-specific networks, mini and micro enterprise networks and IoT/sensor networks, based on multiple radio technologies, will increase in number in the coming years. Interoperability is one of the most significant challenges in such a ubiquitous connectivity/computing environment (smart environments), where different products, processes, applications, use cases and organizations are connected. Interactions among telecommunications networks, computers and other peripheral devices have been of interest since the earliest distributed computing systems.
– XR – Interactive immersive experience
The interactive immersive experience use case will have the ability to seamlessly blend virtual and real-world environments and offer new multi-sensory experiences to users. This use case will enable users to interact with avatars of other remotely located users and to flexibly manipulate objects from representations of real and/or virtual environments with a high degree of realism. The implications of this use case are expected to be immense, given its wide-ranging applicability to the social, entertainment, gaming, industry and business sectors.
X-Reality (XR), encompassing virtual reality (VR), augmented reality (AR) and mixed reality (MR), is expected to provide higher resolution, a larger field of view (FoV), higher frame rates and lower motion-to-photon (MTP) latency, which all translate into higher demands on transmission data rate and end-to-end latency.
Key challenges in supporting interactive experiences in the network include the synchronized transport of multiple modalities of flows (e.g. visual media, audio, haptics) to and from different devices in a collaborative group serving the same XR application. Another important consideration is supporting real-time adaptations in the network relative to user movements and actions, to ensure that interactions with other users and objects appear highly realistic in terms of placement and responsivity. Enabling spatial interactions will also require fast accessibility and ease of integration of content containing up-to-date and accurate representations of real/virtual environments from different content sources.
– Multidimensional sensing
Sensing based on measuring and analysing wireless signals will open opportunities for high-precision positioning, ultra-high-resolution imaging, mapping and environment reconstruction, gesture and motion recognition, which will demand high sensing resolution, accuracy, and detection rate.
– Digital Twin
A digital twin is a digital replica of entities in the physical world. It demands real-time, high-accuracy sensing to ensure fidelity, as well as low latency and a high data transmission rate to guarantee real-time interaction between the virtual and physical worlds.
A digital twin network is a dynamic replica of the physical network over its full life cycle. It should be capable of generating perceptive and cognitive intelligence based on the collection of historical and online network data. It should be capable of continuously seeking the optimal state of the physical network in advance, and of enforcing management operations accordingly. The digital twin enables the network to be self-boosting, self-evolving and self-optimizing by verifying new functionalities, services and optimization features before deployment. Sensing and learning are the two fundamental functions that fuse the physical and cyber worlds.
– Mixed Reality and Virtual Presence
Enabling efficient machine-type communication (MTC) continues to be an important driver for IMT for 2030 and beyond. Allowing machines and devices to communicate with each other without direct human involvement is a major driver behind the Internet of Things (IoT) and the future digitalization of economies and society. MTC encompasses critical MTC (cMTC) and massive MTC (mMTC). The former targets mission-critical connectivity with stringent requirements on key performance indicators (KPIs) such as reliability, latency, dependability and synchronization accuracy, whereas the latter addresses the connectivity needs of a massive number of potentially low-rate, low-energy simple devices, where connection density and energy efficiency are the most important KPIs.
For 2030 and beyond, data markets will become an increasingly important technology area connecting data suppliers and customers. The data generated by widely distributed machine-type devices (MTDs) will have enormous business and societal value. The value-added services of data marketplaces will be empowered by emerging technologies such as artificial intelligence (AI) and distributed ledger technology (DLT), while adding new data-centric KPIs such as the age of information, privacy and localization accuracy.
– Proliferation of intelligence
Real-time distributed learning, joint inference among a proliferation of intelligent devices, and collaboration between intelligent robots demand a rethinking of communication system and network design.
– Global Seamless Coverage
To connect the unconnected and provide continuous, high-quality mobile broadband service in various areas, the interconnection of terrestrial and non-terrestrial networks is expected to facilitate the provision of such services.
Technology Drivers for future technology trends towards 2030 and beyond:
The continuing evolution of IMT systems, and the underlying technologies, must be guided by the imperative to satisfy fundamental needs, and contextualized in terms of how they can help society, the end users, and value creation and delivery. These necessities and key driving factors are:
– Societal goals – Future technologies should contribute further to the achievement of the UN Sustainable Development Goals (SDGs), including environmental sustainability, trust and inclusion, efficient delivery of health care, reduction in poverty and inequality, improvements in public safety and privacy, support for ageing populations, and managing expanding urbanization.
– Market expectations – New technologies should enable significant and novel capabilities, supporting radically new and differentiated services, opening up greater market opportunities
– Operational necessities – The need to manage complexity, drive efficiency, and reduce costs, with end to end automation and visibility, is also an imperative as a motivation and driving factor
Key considerations for IMT Systems for 2030 and beyond include:
– Sustainability/Energy efficiency
Energy efficiency has long been one important design target for both the network and the terminal. While improving energy efficiency, the total energy consumption should also be kept as low as possible for sustainable development. Power-efficient technology solutions are needed both in backhaul and local access to make use of small-scale renewable energy sources.
– Peak Data Rate/Guaranteed Data Rate
The peak data rate of the future system should be greatly increased in order to support extremely high bandwidth services, such as extremely immersive XR and holographic communication.
Guaranteed data rate usually refers to the achievable data rate at the edge of the coverage area. The future system should guarantee the user experience regardless of user location and network traffic conditions.
– Latency
Services with real-time and precise control usually have high demands on low communication latency, such as the air-interface delay, end-to-end latency and round-trip latency.
– Jitter
Jitter usually refers to the degree of latency variation. Some future services, such as time-sensitive industrial automation applications, may require jitter close to zero.
– Sensing resolution and accuracy
Sensing based services, including traditional positioning and new functions such as imaging and mapping, will be widely integrated with future smart services, including indoor and outdoor scenarios. Very high accuracy and resolution will be needed to support a better service experience.
– Connection density
Connection density refers to the number of connected or accessible devices per unit area. It is an important indicator of the ability of mobile networks to support large-scale terminal deployments. With the popularity of the Internet of Things (IoT) and the diversification of terminal access in specific applications, such as industrial automation and personal health care, the mobile system needs the ability to support ultra-large numbers of connections.
– Coverage and full connectivity
The future network should be able to provide global coverage and full connectivity through wireless and wired, terrestrial and non-terrestrial coverage with a heterogeneous multi-layer architecture. The full-connectivity network should support intelligent scheduling of connectivity according to application requirements and network status, to improve resource efficiency and service experience. It will extend the provision of quality-guaranteed services, such as MBB, massive IoT and high-precision navigation services, from indoor to outdoor, from urban to rural areas and from terrestrial to non-terrestrial spaces.
– Mobility
Mobility refers to the maximum speed supported under a specific Quality of Service (QoS) requirement. The future system will not only support terminals on land, including high-speed trains, but will also provide services to terminals in high-speed airplanes, drones and so on.
– Spectrum utilization
With new services and applications towards 2030 and beyond, more spectrum is required to accommodate the explosive growth of mobile data traffic. Further study is needed on novel usage of the low and mid bands, and on the extension to much higher frequency bands with much broader channel bandwidth. The smart utilization of multiple bands and the improvement of spectrum efficiency through advanced technologies are essential to achieve high throughput in limited bandwidth.
– Simplified user-centric network
With huge numbers of new services and scenarios towards 2030 and beyond, the network is required to satisfy diversified demands and personalized performance. The soft network should be designed as a fully service-based and natively cloud-based radio access network, which can guarantee QoS and provide a consistent user experience. The lite network should be constructed as a globally unified access network with a simple architecture and powerful capabilities of robust signalling control, accurate network services and efficient transmission, through converged communication protocols and access technologies with plug-and-play, on-demand deployment. A user-centric network is required to enable a fully distributed/decentralized network, mitigating single points of failure, as well as to enable user-controlled data ownership, which is critical to the next-generation network.
– Native AI
The future mobile system will have stronger capabilities and support more diversified services, which will inevitably increase the complexity of the network. Artificial Intelligence (AI) reasoning will be embedded everywhere in the future network including physical layer design, radio resource management, network security, and application enhancement, as well as network architecture, which results in a multi-layer deep integrated intelligent network design. Meanwhile the future network can also support distributed AI as a service for larger scale intelligence.
– Resilience, security, privacy and safety
The future network supports more advanced system resilience for reliable operation and service provision; security to provide confidentiality, integrity and availability; privacy with self-sovereign data; and safety regarding the impact on human beings and the environment, etc.
The roles of trust, security and privacy are somewhat interconnected, but they are different facets of future networks. Both inherited and novel security threats in future networks need to be addressed. The diversity and volume of novel IoT and other networked devices and their control systems will continue to pose significant security and privacy risks and additional threat vectors as we move from IMT-2020 to beyond. IMT for 2030 and beyond needs to support embedded end-to-end trust, such that the resulting level of information security in the networks is significantly better than in state-of-the-art networks. Trust modelling, trust policies and trust mechanisms need to be defined.
Security algorithms can use machine learning to identify attacks and respond to them. Continuous deep learning at the packet/byte level is needed, applying machine learning to enforce policies and to detect, contain, mitigate and prevent threats or active attacks. While IMT-2020 is still largely device/network specific, future networks envisage far more immersive engagement with the network.
Conventional trust, security and privacy solutions may not be directly applicable to specific machine-type communication scenarios owing to the lack of humans in the loop, massive deployments, diverse requirements and the wide range of deployment scenarios. This motivates the design of resource-efficient unsupervised solutions to be exploited by MTDs, e.g. based on distributed ledger technology (DLT).
– Dynamically controllable radio environment
The ability to dynamically change the characteristics of the radio propagation environment and create favourable channel conditions can support higher-data-rate communication and improve coverage.
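One way such a controllable environment is often envisaged is via a reconfigurable surface whose elements apply phase shifts so that all reflected paths add coherently at the receiver. The following toy sketch (not from the Report; all channel values are hypothetical) shows the co-phasing computation and the resulting gain over an uncontrolled surface:

```python
# Illustrative sketch (hypothetical values): a reconfigurable surface with N
# elements sets each element's phase shift so that the cascaded
# transmitter->element->receiver paths add coherently at the receiver.
import cmath

def cophase(h_tx, h_rx):
    """Choose unit-modulus element phases that align all cascaded paths."""
    # Each cascaded path has phase arg(h_tx[n]) + arg(h_rx[n]); the element
    # applies the negative of that so every path arrives with zero phase.
    return [-(cmath.phase(a) + cmath.phase(b)) for a, b in zip(h_tx, h_rx)]

def combined_amplitude(h_tx, h_rx, phases):
    """Magnitude of the sum of all phase-shifted cascaded paths."""
    return abs(sum(a * b * cmath.exp(1j * p)
                   for a, b, p in zip(h_tx, h_rx, phases)))

# Hypothetical per-element channels (complex gains).
h_tx = [cmath.exp(1j * 0.3), 0.8 * cmath.exp(-1j * 1.1), 1.2 * cmath.exp(1j * 2.0)]
h_rx = [cmath.exp(-1j * 0.7), 0.9 * cmath.exp(1j * 0.4), 0.5 * cmath.exp(1j * 1.5)]

aligned = combined_amplitude(h_tx, h_rx, cophase(h_tx, h_rx))
passive = combined_amplitude(h_tx, h_rx, [0.0] * 3)
print(aligned >= passive)  # True: coherent combining never does worse
```

With co-phasing, the combined amplitude equals the sum of the path magnitudes, which is the best any unit-modulus phase configuration can achieve for a single receiver.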
Emerging Technology trends and enablers:
Technologies to use AI in communications:
The great success of artificial intelligence (AI) in image, video and audio signal processing, data mining and knowledge discovery has made it possible to shift wireless communication to an intelligent paradigm in a similar manner, i.e. learning from the wireless big data, which has yet to be fully exploited, to design new and efficient architectures, protocols, schemes and algorithms for the future communication system. In turn, with the wide deployment of base stations, edge servers and intelligent devices, the mobile network will provide a new and powerful platform for the ubiquitous data collection, storage, exchange and computing needed for future mobile/distributed/collaborative machine learning. For the future communication system, an emerging and transformative move will be providing access to AI for everyone, every business and every service, anywhere and anytime. AI will be a cornerstone of the design of the future communication system, creating intelligence everywhere. One of the main differences of the future communication system compared to IMT-2020 is that it will use mobile technologies to enable the proliferation of AI and use the radio networks to augment ubiquitous, distributed machine learning. Furthermore, AI ethics issues, which exist in all AI-based systems and applications, have been raised and discussed in the wireless community from different aspects; future IMT technology would therefore require fairness and robustness in order to avoid AI ethics issues to a certain level.
AI native new air interface:
Applying tools from artificial intelligence (AI) and machine learning (ML), including its subset deep learning (DL), in wireless communications has gained a lot of traction in recent years. This trend has in large part been motivated by the significant increase in system complexity in the IMT-2020 radio access network (RAN) and its evolution over previous wireless technology generations. Deep neural networks allow the characterization of specific or even unknown channel and network environments, i.e. the traffic, the interference and user behaviours, and then adapt the radio signalling to those environments. With learning, the system can optimize user signalling, power consumption and end-to-end connectivity, and smartly coordinate multi-user access to radio resources, thus optimizing data- and control-plane signalling and improving overall system performance.
The most challenging issue in air interface design is sensing the communication environment, i.e. the estimation and prediction of propagation channels. To this end, the traditional air interface devotes much effort to pilot design and channel estimation. Now, with machine learning, and especially the black-box modelling capability and hyper-parameterization of a deep neural network, the unknowns of the underlying channel can be properly learned, provided that sufficient data are available. Thus, a physical channel can be reconstructed rather than just estimated. With transfer learning, the learned model can be transferred to adjacent nodes. This opens a new way to air interface design. Several components in the transceiver chain are expected to be implemented through AI/ML-based algorithms, including, at a minimum, beamforming and beam management on the transmitter side, and channel estimation, symbol detection and/or decoding on the receiver side. Therefore, there will be a heavy focus on redesigning the physical layer of the communication protocol stack using AI. On the other hand, the implementation issues related to the periodic updating of the deep learning models used in various blocks of the physical layer must be addressed.
In addition, radio resource management and resource allocation can also be implemented via AI/ML-based methods. In a multi-user environment, with reinforcement learning, base stations and user equipment could automatically coordinate channel access and resource allocation based on the signals they respectively receive. Each node calculates its reward for each transmission and adjusts its power, beam direction and other signalling to accomplish distributed interference coordination and improve system capacity. The following are some potential usages:
– For the QoE bottleneck of the last-mile radio link, RAN AI is expected to expose radio-channel prediction capabilities for upper-layer adaptation, for example available bandwidth and predicted latency, by taking into account multi-user radio channel fluctuation, traffic patterns, cell load variation, etc. This interaction could be based on a subscribed request from the upper layer and would only be triggered when a predefined threshold is satisfied.
– The optimization of radio resource allocation to meet the requirements of highly demanding applications, such as cloud-based interactive applications, which require low latency and high throughput. The optimization will take into account multi-dimensional metrics, for example application-level traffic patterns (i.e. video frame-level I/B/P frame distribution), transport-layer congestion control, lower-layer buffer status and QoS profiles (e.g. bandwidth, latency).
– For the randomness and uncertainty of traffic distribution in the vehicular network, deep-reinforcement-learning-based adaptive exploration approaches can be used for resource allocation, including offline training, online distributed learning methods, etc.
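The reinforcement-learning coordination described above can be reduced to a minimal toy example. This sketch is illustrative only (the per-channel success probabilities and learning parameters are hypothetical, and a real system would involve multiple agents, beams and power levels): a single node learns, from per-transmission rewards, which of two channels to access.

```python
# Toy sketch of reinforcement-learning-based channel access (illustrative,
# not a specification): a node uses epsilon-greedy Q-learning to discover
# which of two channels yields the higher transmission success rate.
import random

random.seed(0)
SUCCESS_PROB = [0.2, 0.8]   # hypothetical per-channel success probability
q = [0.0, 0.0]              # estimated value (expected reward) per channel
alpha, epsilon = 0.1, 0.1   # learning rate and exploration rate

for step in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    ch = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    # Reward of 1 for a successful transmission, 0 otherwise.
    reward = 1.0 if random.random() < SUCCESS_PROB[ch] else 0.0
    # Incremental value update towards the observed reward.
    q[ch] += alpha * (reward - q[ch])

print(q.index(max(q)))  # the node converges on the better channel
```

In a distributed multi-node setting, each node would run a similar loop on its own observations, which is how the interference coordination sketched in the paragraph above would emerge without central control.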
Machine learning techniques can be used for symbol detection and/or decoding. While demodulation/decoding in the presence of Gaussian noise or interference by classical means has been studied for many decades, and optimal solutions are available in many cases, ML could be useful in scenarios where either the interference/noise situation does not conform to the assumptions of the optimal theory, or where the optimal solutions are too complex. Meanwhile, IMT for 2030 and beyond will likely utilize even shorter codewords than IMT-2020, with low-resolution hardware (which inherently introduces non-linearity that is difficult to handle with classical methods). ML could play a major role, from symbol detection to precoding, beam selection and antenna selection.
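A minimal instance of learned symbol detection is sketched below. This is an illustrative toy (not from the Report): the constellation, noise level and sample counts are hypothetical, and the "model" is simply a nearest-class-mean detector trained on labelled received samples rather than a deep network.

```python
# Minimal sketch of a learned symbol detector (illustrative, hypothetical
# parameters): class means are estimated from labelled noisy training
# samples, then unseen symbols are detected by the nearest learned mean.
import random

random.seed(1)
SYMBOLS = [-1.0, 1.0]  # BPSK constellation points

def channel(sym):
    """Additive-noise channel (hypothetical noise level)."""
    return sym + random.gauss(0.0, 0.3)

# "Training": estimate each class mean from labelled received samples,
# so the detector learns the channel's effect instead of assuming it.
train = [(s, channel(s)) for s in SYMBOLS for _ in range(200)]
means = {s: sum(r for t, r in train if t == s) / 200 for s in SYMBOLS}

def detect(r):
    """Detect by nearest learned class mean."""
    return min(SYMBOLS, key=lambda s: abs(r - means[s]))

test_syms = [random.choice(SYMBOLS) for _ in range(1000)]
correct = sum(detect(channel(s)) == s for s in test_syms)
print(correct / 1000 > 0.95)  # True: low symbol error rate at this noise level
```

The same data-driven structure, with the class means replaced by a trained neural network, is what allows learned detectors to cope with non-Gaussian noise or hardware non-linearities where the classical optimal detector no longer applies.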
Another promising area for ML is the estimation and prediction of propagation channels. Previous generations, including IMT-2020, have mostly exploited channel state information (CSI) at the receiver, while CSI at the transmitter was mostly based on coarsely quantized feedback of received signal quality and/or beam directions. In systems with an even larger number of antenna elements, wider bandwidths and a higher degree of time variation, the performance loss of these techniques is non-negligible. ML may be a promising approach to overcome such limitations.
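Channel prediction can be illustrated with the simplest possible learned predictor. The sketch below is a toy (not from the Report): the channel model h[n] = cos(0.2n) is hypothetical, and a linear predictor fitted by least squares stands in for the neural predictors discussed above.

```python
# Illustrative sketch (hypothetical channel model): predicting a
# time-varying channel gain from its past two samples with a linear
# predictor fitted by least squares on observed data.
import math

h = [math.cos(0.2 * n) for n in range(50)]  # toy time-varying channel gain

# Fit h[n] ~ a*h[n-1] + b*h[n-2] on samples 2..39 via the 2x2 normal equations.
s11 = sum(h[n-1] * h[n-1] for n in range(2, 40))
s12 = sum(h[n-1] * h[n-2] for n in range(2, 40))
s22 = sum(h[n-2] * h[n-2] for n in range(2, 40))
t1 = sum(h[n] * h[n-1] for n in range(2, 40))
t2 = sum(h[n] * h[n-2] for n in range(2, 40))
det = s11 * s22 - s12 * s12
a = (t1 * s22 - t2 * s12) / det
b = (t2 * s11 - t1 * s12) / det

# Predict the held-out samples 40..49 and measure the worst-case error.
err = max(abs(h[n] - (a * h[n-1] + b * h[n-2])) for n in range(40, 50))
print(err < 1e-6)  # True: a sinusoidal channel is exactly linearly predictable
```

Real channels are of course not pure sinusoids, which is precisely why more expressive learned models are of interest; the sketch only shows the prediction-from-past-observations structure that such models share.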
The MAC layer is a major application area of AI, where many problems that have legacy solutions can be addressed with AI-based methods using supervised learning, data collection and ML model deployment options. Next-generation MAC algorithms will need to take into account the AI used in various layers of the network, especially the physical layer, because of the need to update deployed machine learning models, collect data for supervised learning tasks and enable reinforcement learning on different blocks of the network.
AI techniques can be used to target one or more wireless domains, including non-real-time (non-RT) network orchestration and management, such as configuration of antenna parameters, and near-real-time (near-RT) network operation, such as load balancing and mobility robustness optimization. Each wireless domain involves different sets of physical and virtual components, families of parameters including key performance indicators (KPIs), underlying complexities, and time constraints for updates. Hence, there is a need to consider tailored AI solutions for different classes of the RAN and their associated problems. There already exists a rich body of research and practical demonstrations of the potential benefits of AI for Wireless, including significant network energy savings.
With progress in machine learning and information theory, the ultimate air interface may be able to perform automatic semantic communication. There are many open fundamental problems in this direction for the wireless community. For example, learning algorithms usually rely heavily on wireless data, which may be hard to obtain or subject to privacy constraints. One approach to this problem is to learn from both practical wireless data and statistical models.
Questions related to the optimal ML algorithms under given conditions, the required amount of training data, the transferability of parameters to different environments, and the improvement of explainability will be major topics of research in the foreseeable future. There will be various phases in the development of AI for Wireless, and it is imperative to ensure that the increased integration of the technology comes with minimum disruption to the rollout and operation of wireless systems and services. In the short and medium terms, AI models may be targeted at the optimisation of specific features within the RAN for IMT-2020 and its evolution, such as network operation and management functionalities. In the longer term, AI may be used to enable new features over legacy wireless systems.
AI-Native radio network:
Future IMT-systems are required to support extremely reliable and performance-guaranteed services. They will introduce a multi-dimensional network topology, which will make network management and operation more difficult and introduce more challenging problems. To address these problems, future IMT-systems will adopt AI technologies for automated and intelligent networking services. Consequently, to support computation-intensive tasks in AI applications, they will evolve into an AI-native network architecture.
At the highest level, an AI-native radio network should be designed and implemented by AI as an intelligent radio network that can automatically optimize and adjust itself according to specific requirements, objectives or commands, or to changes in the environment. Research topics include the higher-layer protocols, network architecture and networking technologies enabling such an intelligent radio network.
RAN optimization is one of the problems that is difficult to solve due to the complexity of its mathematical formulation. The deep reinforcement learning paradigm can enable zero-touch optimization of RAN elements with minimal hand-crafted engineering. In addition, radio network architecture design is often a challenging task that can be automated with AI; methods such as graph representation learning could be utilized to simplify the problem and enable automated network architecture design.
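A toy illustration of the zero-touch idea (the single-state bandit formulation and all success probabilities are invented for illustration): a tabular reinforcement learner discovering the best of three channels purely from observed rewards:

```python
import numpy as np

# Sketch: epsilon-greedy Q-learning over channel assignments. One state,
# three actions; the per-channel success rates are hidden from the learner.

rng = np.random.default_rng(2)
SUCCESS_PROB = np.array([0.2, 0.5, 0.9])  # assumed, unknown to the agent

q = np.zeros(3)          # action-value estimate per channel
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # epsilon-greedy exploration over channel assignments
    a = rng.integers(3) if rng.random() < epsilon else int(np.argmax(q))
    reward = float(rng.random() < SUCCESS_PROB[a])   # 1 if transmission succeeds
    q[a] += alpha * (reward - q[a])                  # incremental value update

best = int(np.argmax(q))  # channel the agent has learned to prefer
```

Deep RL for real RAN optimization replaces the table with a neural network and the single state with rich network observations, but the explore/update loop is the same.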
Various use cases of AI-empowered network automation have been proposed, including fault recovery/root cause analysis, AI-based energy optimization, optimal scheduling, and network planning. Key training challenges have been identified: lack of bounding performance, lack of explainability, uncertainty in generalization, and lack of interoperability to realize full network automation. Four types of analytics can be distinguished for future AI-native networks: descriptive analytics, diagnostic analytics, predictive analytics, and prescriptive analytics. The key to successful network automation in an AI-native network architecture is how to collect rich and reliable network data, which is typically not available to parties other than network operators.
In general, an overall network architecture consists of four tiers of entities: UE, BS, core network, and application server. Application of AI can be categorized into three levels as shown in Figure 1: 1) local AI, 2) joint AI, and 3) E2E AI.
The future RAN will be able to perceive and adapt to complex and dynamic environments by monitoring and tracking conditions in the radio network while diagnosing and restoring any RAN issues in an automated fashion. To achieve autonomy over its full life cycle management, at least the following novel networking technologies need to be considered: 1) efficient and intelligent network telemetry technologies that leverage AI to apply management operations based on a collection of historical and live network data; 2) automated network management and orchestration technologies that continuously seek the optimal state of the RAN and enforce management operations accordingly; 3) technologies that automatically perform life cycle management operations, adjust configurations on radio network elements, and optimize new services and features during and after deployment; and 4) AI-based assistance, in particular for aspects such as forecasting, root cause analysis, anomaly detection and intent translation.
More specifically, transporting large quantities of data will burden each network interface. Besides, data sensed from the radio environment sometimes lack corresponding labels. Intelligent data perception, e.g., utilizing generative adversarial networks (GANs) to generate the required data so as to simulate real data, will avoid transferring large amounts of data over interfaces and protect data privacy to a certain degree. To further this vision of zero-touch network management, an open network data set and an open ecosystem need to be established.
It is also possible to introduce user feedback into the decision-making process of the network, to improve the decision-making of AI algorithms and help the machine better understand user preferences and make decisions that users prefer.
In future IMT-systems, more computation nodes will be required to support highly computation-intensive services. Thus, computation nodes will be pervasive from core to edge and from network to device. To cope with this trend, the control and user planes of the network for future IMT-systems need to be redesigned, and emerging technologies such as programmable switches and distributed/federated learning need to be aggressively adopted.
To support services in multiple application scenarios, an intelligent network is needed. In the AI-Native Radio Network, AI no longer merely optimizes the wireless resources of the radio network; rather, it is an intelligent system integrated with the radio network that can supply capabilities on demand.
In order to realize the intelligence of the radio network, new sensing and AI functions need to be supported. Through the data sensing function, end-to-end collection, processing and storage of network data can be realized, while the AI function can call and subscribe to these data on demand and provide capability support according to different application scenarios. In this way, AI capabilities can be utilized and supported more efficiently and globally.
The AI system in the AI-Native Radio Network is distributed across different network functions. AI algorithms running on different functions, or AI models trained on different functions, are all components of this distributed AI system and together form an organic whole. Under the control or coordination of a unified AI control center, each component of the distributed AI system independently completes its assigned tasks, interacts with other components, and reports measurements to the control center. The distributed AI system should be an end-to-end solution.
Edge AI is to be considered one of the key enablers for future IMT-systems, especially for sensing-communication-computing-control. In addition, a distributed deep learning architecture is to be considered for realizing URLLC in future IMT-systems. Thus, the RAN can be flexibly and adaptively optimized with the aid of AI to guarantee QoS, leading to the following topics of interest: 1) adaptive RAN slicing architecture and the corresponding distributed intelligence architecture, 2) knowledge-assisted learning architecture and methods, and 3) fast training/federated learning methods.
In addition, Self-Synthesising Networks automate the actual design process, or large parts thereof. Whilst the actual invention of engineering principles may still be done by human researchers or in combination with AI, the system design, prototyping and standards development are now largely executed by machines. Given that the two phases of systems design and standardisation take years, it is hoped that the introduction of self-synthesising networking principles will accelerate feature development in telecoms by 5-8 years.
Radio network for AI:
The radio network will migrate from an over-the-top model towards the AI era. Wireless networks should consider the AI applications and paradigms that require the exchange of large amounts of data, machine learning models, and inference data between different entities in the network. Long-term platform technologies must be found to better support AI services, which will greatly impact the design of the future radio network, i.e. the radio network for AI. Distributed and collaborative machine learning is required to balance the computing/communication load, improve efficiency, and comply with local data-governance and data-privacy requirements. Therefore, data-split and model-split approaches will be major focuses of future research. The impacts on future network design are threefold:
– Shift from downlink-centric radio to uplink-centric radio: Unlike existing downlink-centric radio, which usually supports heavier traffic and better QoS for downlinks, AI requires more frequent model and data exchanges between a base station and the different users it serves. The uplinks should be reconsidered in network design to attain balanced, efficient and robust distributed machine learning.
– Shift from the core network to the deep edge: The locality of data and the computing/communication needed for deep machine learning pose major challenges to the end-to-end delay. To mitigate this, new networks and the corresponding protocols should be designed. One such research direction is to place the major learning processes and threads close to the edge, forming a deep edge that can greatly reduce the system delay.
– Shift from cloudification to machine learning: Due to the distributive nature of data and computing power, the communication and computing procedures of a machine learning algorithm often take place across the whole network from the cloud to the edge and the devices. Therefore, traditional cloudification should also be reconsidered to be application-centric, i.e., to meet the specific needs of the more general distributed machine learning applications with proper deployment of computing and communication resources.
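The data-split approach above can be sketched with federated averaging, in which each edge node trains locally and only model parameters travel over the network (the linear model, node count and learning rates are illustrative assumptions):

```python
import numpy as np

# Sketch of federated averaging (FedAvg): each node fits a shared linear model
# on its own data; a server averages the models. Raw data never leaves a node.

rng = np.random.default_rng(3)
TRUE_W = np.array([2.0, -1.0, 0.5])  # assumed ground-truth model

def make_node_data(n=200):
    X = rng.standard_normal((n, 3))
    return X, X @ TRUE_W + 0.01 * rng.standard_normal(n)

nodes = [make_node_data() for _ in range(4)]   # four edge nodes
w = np.zeros(3)                                # global model at the server

for round_ in range(50):
    local_ws = []
    for X, y in nodes:                # each node trains locally on its own data
        wl = w.copy()
        for _ in range(5):            # a few local gradient steps
            grad = 2 * X.T @ (X @ wl - y) / len(y)
            wl -= 0.05 * grad
        local_ws.append(wl)
    w = np.mean(local_ws, axis=0)     # server aggregates model updates only

err = np.linalg.norm(w - TRUE_W)
```

Only the parameter vectors cross the interfaces while the samples stay local, which is the property motivating the uplink-centric and deep-edge shifts listed above.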
In addition, future data-intensive, real-time applications require distributed ML/AI solutions deployed on the edge-cloud continuum, known as EdgeAI or Edge Intelligence. These solutions support augmenting human decision processes, developing autonomous systems from small devices to complete factories, optimising network performance, and marshalling the billions of IoT devices expected to be interacting in the 2030s. Distributed ML/AI has become an inseparable part of wireless networks, and increasing volumes of heterogeneous streaming data will require more advanced computing paradigms. Since heterogeneous IoT devices are not as reliable as high-performance, centralised servers, distributed and self-organising schemes are mandatory to provide strong robustness against device and link failures. The current open questions in fulfilling the requirements of true Edge Intelligence include data and resource distribution, distributed and online model training, and inference on those models across multiple heterogeneous devices, locations, and domains of varying context-awareness. The future network architecture is expected to provide native support for radio-based sensing and, through versatile connectivity, accommodate ultra-dense sensor and actuator networks, enabling hyper-local and real-time sensing, communication, and interaction with the intelligent edge-cloud continuum.
Explainable AI for RAN:
Automation principles were introduced into the telecommunications architecture as early as 2008. Despite the swath of algorithmic ML/AI/SON frameworks available, uptake was not as widespread as expected. An important reason for this was that, whilst the developed automation frameworks outperformed any other operational approach, they exhibited occasional and unexplainable outages which operators could not accept. Since the proposal of the concept of wireless AI, there has been widespread concern about how to harmonize the relationship between existing communication mechanisms and so-called "black-box" AI (machine learning, or even deep learning) models. It has been strongly encouraged that existing expert wireless knowledge be fused into the design of AI models to improve their performance and interpretability, for example, AI-based MIMO channel estimation achieving significant performance gains.
In the context of telecoms, explainable AI (XAI) enables the creation of Trusted Networks which are trusted by both consumers and operators. Individual building blocks in the network are still embodied through machine learning (e.g., regression) or deep learning (e.g., CNNs, RNNs or GANs), but the overall interaction between these automated components is supervised through XAI. It is typically enabled through a fairly deterministic but human-influenceable decision tree which trades levels of trust against performance through planning optimization approaches. Given the high level of automation at the radio interface, RAN, core and transport networks, XAI will play an instrumental role in 5G and 6G in ensuring end-to-end trusted operation of the networks.
Furthermore, it should be pointed out that solutions for integrating existing communication mechanisms and AI models should go beyond simple "one plus one" splices, for example, AI-based channel state information (CSI) feedback improving the CSI reconstruction accuracy. It can be anticipated that the exploitation of expert knowledge will be one of the determining factors in wireless AI model design. We can even envision that the ultimate goal of wireless AI will be to develop models specifically designed around the distinguishing characteristics of data from wireless networks, just like the specialized models in computer vision and natural language processing.
Technologies for integrated sensing and communication:
Wireless sensing, including object detection, ranging, positioning, tracking and imaging, has long been a separate technology developed in parallel with mobile communication systems. Positioning is the only sensing service that mobile communication systems up to IMT-2020 could offer. Departing from the traditional approach of designing wireless networks solely for communication purposes, IMT for 2030 and beyond will consider an integrated sensing and communication (ISAC) system from the outset. In the future communication system, enabled by the potential use of very high frequency bands (e.g. from mmWave and THz up to visible light), wider bandwidths, denser deployment of large antenna arrays, reconfigurable intelligent surfaces (RIS), artificial intelligence (AI) and collaboration between communication nodes/devices, sensing will become a new function integrated with the communication system to enable new services and solutions with a higher degree of accuracy in aspects such as ranging, Doppler and angular estimation, as well as positioning.
In the ISAC system, the sensing and communication functions will mutually benefit within the integrated system. On one hand, the communication system can assist the sensing service. It can exploit radio wave transmission, reflection, and scattering to sense and better understand the physical world, also known as "network as a sensor". On the other hand, sensing results can be used to assist communication access and management, such as more accurate beamforming, better interference management, faster beam failure recovery, and lower overhead for tracking channel state information, improving the quality of service and efficiency of the communication system. This is known as "sensing-assisted communication". Moreover, as a foundational feature for 6G, sensing can be seen as a "new channel" linking the physical world to the digital world. Real-time sensing combined with AI technologies is thus essential to realize the concept of the digital twin.
In general, the interaction level between communication and sensing systems can be classified as: (a) co-existence, where sensing and communication operate on physically separated hardware, use the same or different spectrum resources, and do not share any information, treating each other as interference; (b) cooperation, where the two systems operate on physically separated hardware while information can be shared between them (e.g. prior knowledge of sensing/communication could be shared to reduce inter-system interference or, in some cases, enhance the other system); and (c) integrated design, where the two systems are designed to behave as a single system with information sharing and joint design in spectrum usage, hardware, wireless resource management, air interface, and signal transmission and processing. The focus of ISAC in future IMT is on (c).
In the integrated design, the technology development of ISAC can be divided into different stages ranging from loosely coupled to fully integrated. As a starting point, the communication and sensing systems share resources such as spectrum and hardware, and communication and sensing can be implemented as one system serving two traffic forms simultaneously. The key research issues at this stage are efficient scheduling and coordination algorithms between sensing and communication modules to minimize their mutual interference. As a further step, communication and sensing will work together to improve the performance of one single system. Signal processing techniques in the time, frequency and spatial domains can be jointly designed to serve both sensing and communication. Potential directions at this stage are air interface design based on a joint waveform, a unified beamforming scheme, etc., which are essential to improve the efficiency of the ISAC system. Towards the mature stage of ISAC, communication and sensing will be completely coordinated and will collaborate in all possible dimensions, including spectrum, hardware, signalling, protocol and networking, achieving mutual promotion and benefit. Further combined with technologies such as AI, network cooperation and multi-node cooperative sensing, the ISAC system will offer benefits in enhanced mutual performance and in the overall cost, size and power consumption of the whole system.
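A minimal numerical sketch of a joint waveform (the QPSK data sequence, sample-level delay model and all parameters are assumptions for illustration): the same transmission carries data while its echo is matched-filtered to estimate target delay:

```python
import numpy as np

# Sketch: one waveform serving both functions. A random QPSK-modulated
# sequence carries data; the echo of the same transmission is correlated
# against the known transmit signal to estimate round-trip delay (range).

rng = np.random.default_rng(4)
N = 1024
bits = rng.integers(0, 4, N)
tx = np.exp(1j * np.pi / 2 * bits)           # QPSK communication waveform

TRUE_DELAY = 37                              # assumed round-trip delay, samples
echo = np.zeros(2 * N, dtype=complex)
echo[TRUE_DELAY:TRUE_DELAY + N] = 0.3 * tx   # attenuated reflection
echo += 0.05 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# Matched-filter (cross-correlation) ranging using the known transmit signal;
# the peak location gives the delay, hence range = delay * c / (2 * fs).
corr = np.abs(np.correlate(echo, tx, mode="valid"))
est_delay = int(np.argmax(corr))
```

In a real ISAC design the waveform, pilot structure and beams would be jointly optimized for both functions; this sketch only shows that a data-bearing signal already supports correlation-based ranging.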
The ISAC capabilities enable many new services which the mobile operators can offer, including but not limited to very high accuracy positioning, tracking, imaging (e.g. for biomedical and security applications), simultaneous localization and mapping, pollution or natural disaster monitoring, gesture and activity recognition, flaw and materials detection. These capabilities will in turn enable application scenarios in future consumer and vertical applications in all kinds of business such as context-aware immersive human-centric communications, industrial automation (Industry 4.0), connected automated vehicles and transportation, energy, healthcare/e-health and so on.
The technology enablers include transceivers building on new RF spectrum at the high-frequency range; the RIS, which allows mobile operators to shape and control the electromagnetic response of the environment; advanced beam-space processing to track users and objects; and passive tag (e.g. RFID tag) aided sensing to improve object identification accuracy and efficiency. Equally important, ML/AI algorithms will exploit large datasets to provide new sensing services and improve communication. At the same time, communication and sensing services need to share the available hardware and waveforms while fusing information from distinct sources of measurements in the network deployment area. Research challenges remain in areas such as system-level design and evaluation methodologies to characterize the fundamental trade-offs of the two functions in the integrated system, solutions to deal with the increased sensitivity to hardware imperfections, joint waveform design and optimization, etc.
Technologies to support convergence of communication and computing architecture:
From emerging use cases for IMT towards 2030 and beyond, such as digital twins, cyber-physical systems, mixed reality and industrial/service robots, a number of technology trends can be observed. One trend is towards processing data at the network edge, close to the data source, for real-time response, low data transport cost, energy efficiency, and privacy protection. Here, edge computing is a distinguished form of cloud computing that moves part of the service-specific processing and data storage from the central cloud to edge network nodes that are physically and logically close to the data providers and end users. Among the expected benefits of edge-computing deployment in current networks are performance improvements, traffic optimization, and new ultra-low-latency services. Edge intelligence in IMT for 2030 and beyond will significantly contribute to all these aspects. Pervasive compute with seamless task mobility can be enabled by evolved container formats based on portable code and associated system interfaces. This will allow the platform to dynamically schedule workloads on nodes regardless of varying hardware and system software setups. As a result, several optimizations can be performed with limited overhead, such as moving computations close to a data source or consumer. This can be useful for having computational tasks follow mobile users, opportunistically offloading workloads from the device to preserve device energy, and moving computations for an optimized cost/performance/power trade-off.
Another trend is towards scaling out device computing capability beyond its physical limitations for advanced application computing workloads. Future applications, such as truly immersive XR, mobile holograms and digital replicas, require extensive computation capabilities to deliver real-time immersive user experiences. However, it would be challenging to meet such computational requirements solely with mobile devices. In order to overcome the limits of the computing power of mobile devices, split computing makes use of reachable computing resources over the network. These computing resources could be available on various network entities, e.g., mobile devices, BSs, MEC servers and cloud servers. With split computing, mobile devices can effectively achieve higher performance while extending their battery life, as devices offload heavy computation tasks to computation resources available in the network. Additionally, a third trend is that the ubiquity of AI requires ubiquitous computing and data resources.
These new technology trends bring new technology challenges in scalability, dynamic workload distribution, and data collection/management/sharing. One challenge is scalability. In today's cloud computing, computing resources are often centralized in a few national or regional data centers, and the centralized service discovery and orchestration mechanisms used there have full visibility of the computing resources and services in those data centers. When computing resources and services become more widely distributed, the centralized approach no longer scales; a more scalable approach is needed for widely distributed computing resources.
Another challenge is dynamic computing workload distribution. Today's workload distribution between devices and the cloud is based on a client-server model with a fixed workload partition between the client and the cloud. The fixed workload partition is application specific, is pre-determined in the application development phase, and assumes that there are always sufficient computing resources in the cloud to fulfil the server-side workload. As computing resources become distributed, a scheme is needed that allows dynamic scaling-out of device computing based on conditions such as workload requirements and communication and computing resource availability. To minimize the impact on applications, the dynamic computing scaling scheme should be enabled as an IMT system capability with minimal dependency on applications.
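A minimal decision sketch for dynamic workload partitioning (the latency model and every number are invented for illustration): compare the estimated on-device latency with the estimated transfer-plus-edge latency and pick the smaller:

```python
# Sketch: decide where to run a task based on a first-order latency estimate.
# All latency models and figures below are illustrative assumptions only.

def offload_decision(task_flops, input_bytes,
                     device_flops_s, edge_flops_s, uplink_bytes_s):
    """Return 'device' or 'edge', whichever gives the lower estimated latency."""
    t_device = task_flops / device_flops_s
    t_edge = input_bytes / uplink_bytes_s + task_flops / edge_flops_s
    return "device" if t_device <= t_edge else "edge"

# A heavy task with a small input favours the edge...
a = offload_decision(task_flops=1e12, input_bytes=1e5,
                     device_flops_s=1e10, edge_flops_s=1e12, uplink_bytes_s=1e7)
# ...while a light task with a bulky input stays on the device.
b = offload_decision(task_flops=1e8, input_bytes=1e8,
                     device_flops_s=1e10, edge_flops_s=1e12, uplink_bytes_s=1e7)
```

A system-level scheme would additionally account for device energy, edge load and result-download time, and would re-evaluate the split as conditions change.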
A third challenge is data collection, synchronization, processing, management and sharing. With the widespread application of AI in society and industry, a systematic approach to collecting, processing, managing and sharing data to facilitate AI/machine learning becomes very important. Split computing also requires synchronization of a large amount of data, context, and the program itself among network entities. The conventional data management functions in cellular networks focus on managing subscription information and policies. In IMT-2020, a network data analytics function (NWDAF) was added to the specifications, through which network functions' measurement data can be collected and used for analytics. Future IMT towards 2030 and beyond is anticipated to have further diversification of data sources, types and consumption. Therefore, it is expected that data plane functions will be part of the IMT system function from the beginning and can support full-blown data services to devices, network functions and applications. Finally, a fourth challenge is low-power and low-latency wireless communication. To support extreme services on a lightweight device such as AR glasses, the device needs low-latency wireless communication with low device power consumption.
To address the above-mentioned challenges, computing services and data services are expected to become an integral component of the future IMT system. Pervasive/ubiquitous computing and data services can be enabled alongside the ubiquitous connectivity as integral services of the IMT system. Dynamic computing workload distribution can be inherently supported as an IMT system capability. Applications can use the IMT system’s workload distribution and scaling capability to achieve optimized performance. Data plane services in the IMT system such as data collection, processing, management and sharing can be enabled to support AI needs in air interface, cellular network and applications.
Technologies for integrated access and sidelink communications:
Short-range device-to-device (D2D) wireless communication with extremely high throughput, ultra-accurate positioning and low latency will be a very important communication paradigm for future communication systems. On the one hand, many new applications, such as ultimate immersive cloud XR, holographic display, the tactile internet, remote motion control, integrated aerial and vehicle communication, and the sidelink-enhanced industrial internet of things (SL-IIoT), which need either Tbit/s throughput or sub-ms latency with low-power wireless links, will mature in the next decade, and the wireless communication distances for these D2D applications are comparatively short. On the other hand, to satisfy these requirements, extremely wide bandwidth technologies with short propagation distance, such as THz technology, optical wireless technology, ultra-accurate sidelink positioning, and enhanced terminal power reduction, may be potential candidates. Therefore, how to integrate these short-range D2D applications and the related sidelink technologies into the cellular system needs to be considered in future communication systems.
The above sidelink may by nature significantly increase system capacity. THz and optical wireless links normally have very narrow beams and short transmission distances; therefore, the spectrum or channel can easily be reused by other sidelinks, which increases system capacity. Meanwhile, a dynamically self-organized short-range network, such as a mesh network, may also resolve the bottleneck of previous cellular systems, in which all resources on the Uu interface were managed centrally by the base station. However, D2D or multi-hop short-range mesh networks risk slow convergence and large signalling overhead due to frequent node movement. Therefore, an integrated design of short-range and cellular communication may help the sidelink achieve optimized system-level performance. How to increase integration efficiency, and how such a system may co-exist with other systems in the same spectrum, deserve further research.
Radio on THz:
In this technology, a UE is connected to its peripheral devices using terahertz broadband radio: the peripheral devices receive/transmit data signals with the UE in the THz band, and also receive/transmit data signals at different (lower) frequencies (operated in, e.g., the millimetre-wave bands and the sub-6 GHz band) connecting to BSs, and then connect to the APs located at the BS. Here, the peripheral devices play a mediating role between the UE and the BS with its AP.
Generally, terahertz radio has been investigated for use in fixed and long-range radio applications such as wireless backhaul. However, it is expected that terahertz radio could also be applied in short-range use cases such as these.
In the IMT for 2030 and beyond networks, UEs themselves will also need to evolve to meet individual users’ high-communications performance demands. For example, while UEs have gradually evolved in terms of weight and shape, their capabilities as radio devices have not significantly changed since mobile phones first appeared 40 years ago.
To achieve the exchange of information with quality and quantity high enough to meet individual users' diverse demands, UEs present significant limitations in terms of their size, which limits the number of integrated antennae and the maximum transmission power. It is not practical to increase the size of UEs to alleviate constraints such as the number of antennae, and the performance of uplink communication from UEs to BSs is vastly inferior to that of downlink communication from BSs.
Therefore, cooperation techniques between the various peripheral devices that communicate with UEs are introduced. Specifically, through such cooperation, it would be possible to overcome the constraints of a single user device, such as transmission power and the number of integrated antennae.
For example, peripheral devices around UEs, such as PCs, watches, glasses (smart glasses), or self-driving cars, can become wireless devices and cooperate with one another, making it possible to overcome the transmission power constraints of a single user terminal and to virtually overcome limitations in the number of antennae. When riding in a car with a UE, the antenna on the car can also be used virtually as the UE's antenna to improve communications performance.
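A back-of-envelope sketch of the potential gain (assuming idealized, perfectly synchronized coherent combining): n cooperating transmitters contribute 10 log10(n) dB of aggregated power and a further 10 log10(n) dB of beamforming gain:

```python
import math

# Sketch: ideal uplink link-budget improvement from n cooperating devices.
# Assumes perfect synchronization and coherent combining; real systems lose
# some of this to phase error, sharing overhead and unequal device power.

def coherent_combining_gain_db(n_devices):
    """Ideal array gain of n coherently combined transmitters: 20*log10(n),
    i.e. 10*log10(n) from aggregated transmit power plus 10*log10(n) of
    beamforming gain at the receiver."""
    return 20 * math.log10(n_devices)

g4 = coherent_combining_gain_db(4)  # e.g. a UE plus three peripherals
```

Four ideally cooperating devices thus improve the uplink budget by roughly 12 dB over a single UE, which illustrates why device cooperation can narrow the uplink/downlink performance gap noted above.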
Here, communication between a UE and its peripheral devices requires short-range but extremely wideband signal transmission. Since the capabilities required for wireless signal processing are limited in small devices such as watches and glasses, complex wireless signal processing should be avoided in such devices. Therefore, it is expected that the technology described above will be introduced.
Technologies to efficiently utilize spectrum:
It is expected that the spectrum for future IMT systems will continue to follow the mixed use of high, medium, and low frequency bands as in IMT-2020, but with potentially larger bandwidths and higher operating frequencies in different bands, i.e. using a mix of centimetre, millimetre, and terahertz waves. Bands below 6 GHz, millimetre waves, and terahertz spectrum resources can be utilized jointly to provide various wireless links with different bandwidths and beam-propagation characteristics that satisfy the wide range of service requirements of future IMT systems. It is also envisioned that the diverse future use cases, and their differing system requirements for DL and UL transmission services, can be better met by exploiting the propagation and bandwidth characteristics of different frequency bands.
Spectrum utilization can be further enhanced by efficiently managing resources through technologies such as advanced carrier aggregation (CA) and distributed cell deployments (cell-free/distributed MIMO). By enabling devices to simultaneously and flexibly connect to a set of carriers, offered by a set of nodes according to availability and need, higher bandwidths can be achieved; this yields higher data rates and steers the usage of available bands towards the best efficiency. In distributed MIMO, a set of network nodes acts as one cell-less system that enables high-density deployment and spectral reuse. This allows for efficient antenna and transport solutions, which can utilize spectrum resources more efficiently through central coordination.
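As a toy illustration of the bandwidth benefit of carrier aggregation, the following sketch sums ideal per-carrier Shannon capacities a device could reach by connecting to several carriers at once. The carrier bandwidths and SNR values are illustrative assumptions, not figures from this report.

```python
import math

def shannon_rate_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of one carrier: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical component carriers seen by an aggregation-capable device:
# (bandwidth in Hz, linear SNR on that carrier)
carriers = [
    (100e6, 100.0),  # 100 MHz low-band carrier, SNR 20 dB
    (400e6, 10.0),   # 400 MHz mmWave carrier, SNR 10 dB
    (1e9, 3.16),     # 1 GHz sub-THz carrier, SNR ~5 dB
]

# With ideal carrier aggregation, the achievable rate is the sum of
# the per-carrier capacities rather than that of any single carrier.
aggregate = sum(shannon_rate_bps(b, s) for b, s in carriers)
single = shannon_rate_bps(*carriers[0])
print(f"single carrier: {single / 1e9:.2f} Gbit/s")
print(f"aggregated:     {aggregate / 1e9:.2f} Gbit/s")
```

The point of the sketch is only that aggregation lets the device exploit bandwidth wherever it is available, trading per-band SNR against width.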
In addition to the above, there are also alternative explorations for spectrum utilization improvements.
Spectrum sharing technologies:
Spectrum sharing refers to two or more radio systems operating in the same frequency band. Fundamentally, two forms of spectrum sharing exist: 1) horizontal spectrum sharing between systems with the same level of access rights to the spectrum, and 2) vertical spectrum sharing between systems with different levels of access rights to the spectrum. Vertical and horizontal spectrum sharing are not mutually exclusive. Current IMT-2020 systems involve various combinations of horizontal and vertical spectrum sharing through different techniques for interference management, and the same is expected in IMT for 2030 and beyond. Spectrum sharing in specific areas, such as remote areas where spectrum may be unused or underused, allows interference-management problems to be resolved more quickly and can provide, for example, more bandwidth for backhaul links, leading to more energy-efficient operation.
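The two sharing forms can be illustrated with a toy arbiter, purely hypothetical: the system names, priority tiers, and equal-split rule below are illustrative assumptions. Vertical sharing is resolved by access-right priority; horizontal sharing splits a band among equal-priority systems.

```python
from dataclasses import dataclass

@dataclass
class SpectrumRequest:
    system: str
    band: str
    priority: int  # higher = stronger access rights (vertical sharing tier)

def grant(requests):
    """Toy arbiter: vertical sharing -> the highest-priority tier wins the
    band; horizontal sharing -> systems in that tier share it equally."""
    grants = {}
    by_band = {}
    for r in requests:
        by_band.setdefault(r.band, []).append(r)
    for band, reqs in by_band.items():
        top = max(r.priority for r in reqs)
        winners = [r.system for r in reqs if r.priority == top]
        share = 1.0 / len(winners)  # equal horizontal split within the tier
        for w in winners:
            grants[(w, band)] = share
    return grants

g = grant([
    SpectrumRequest("incumbent", "3.5GHz", priority=2),  # primary user
    SpectrumRequest("imt_a", "3.5GHz", priority=1),      # secondary users
    SpectrumRequest("imt_b", "3.5GHz", priority=1),
])
print(g)  # while the incumbent is active, secondaries receive nothing
```

A real sharing framework adds sensing, databases, and interference limits; the sketch only shows how the two sharing dimensions compose.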
[Work on intelligent spectrum-management technologies that enable opportunistic and intelligent spectrum sharing is now necessary to guarantee the continued development of future wireless network services and applications. This applies to intelligent database-driven spectrum sharing, smart spectrum sensing, intelligent software-defined radios, and reconfigurable radio networks. All of these are expected to play an important role in addressing the demand for next-generation gigabit wireless services while enabling broadband connectivity and digital inclusion in underserved areas.]
New medium access control (MAC) designs based on spectrum sensing or spectrum sharing have been considered. To evolve future IMT systems with a dynamic spectrum-sharing nature, the entire radio resource control (RRC) and radio access network (RAN) layer 2 (L2) frameworks need to be redesigned. Dynamic spectrum requests from each subsystem are expected to arise from various computing needs in devices, implying that the future MAC must be redesigned for the convergence of computing and communication. An edge-cloud computing architecture for the new MAC is necessary, in which the trade-off between computing and communication must be carefully considered. Massive training-data upload in the uplink also requires new MACs to achieve accurate spectrum sharing.
A key aspect is the centralization or decentralization of spectrum-sharing control; this decision depends on the type of application and the network environment.
Enabling the transition from IMT-2020 to IMT systems for 2030 and beyond will require a smooth migration from one technology to the other while maintaining optimum use of spectrum resources. IMT for 2030 and beyond should facilitate co-existence between the two technologies so that a network operator can divide spectrum between them, balancing the bandwidth allotted to each according to user demand while utilizing both simultaneously.
Technologies for broader frequency spectrum:
By extending the higher frequency spectrum from “millimetre waves” to “terahertz waves”, a drastically wider bandwidth can be used compared to IMT-2020. For this reason, studies have begun on the possibility of achieving “extreme high data rate and high capacity” communication exceeding the IMT-2020 peak data rate requirements. Currently, radio waves up to about 300 GHz are considered to be within the scope of IMT for 2030 and beyond. However, unlike millimetre waves, terahertz waves propagate along essentially straight paths and cannot propagate over long distances. Thus, it is necessary to carry out technical studies on terahertz waves to identify their radio propagation characteristics and establish propagation models, as well as to study how to utilize these waves in various network configurations.
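The shorter range of terahertz waves follows directly from free-space path loss, which grows with the square of frequency. A minimal sketch comparing representative bands; the 100 m distance and the chosen frequencies are illustrative, not requirements from this report.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Path loss over 100 m at representative IMT frequency bands:
for f in (3.5e9, 28e9, 300e9):
    print(f"{f / 1e9:6.1f} GHz: {fspl_db(100.0, f):6.1f} dB")
```

Going from 3.5 GHz to 300 GHz adds roughly 39 dB of free-space loss at the same distance, before molecular absorption and blockage are even considered, which is why beamforming gain and dense deployments are essential at these frequencies.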
Regarding device technology, it is necessary to implement, at low cost and low power consumption, a digital signal processing circuit capable of supporting wider bandwidths, a digital-to-analog converter, and an analog-to-digital converter. Additionally, antennas, filters, amplifiers, mixers, and local oscillators operating in high frequency bands must be developed to be compatible with massive MIMO’s multiple antenna elements. RF (radio frequency) circuits must also be enhanced for higher performance and higher integration in high frequency bands exceeding 100 GHz.
The radio access technologies for such high frequency bands and the current IMT bands share common technical issues regarding coverage and power efficiency. Here, single-carrier signal waveforms are preferred over OFDM signals as a radio technology due to their power efficiency. As radio technologies, including integrated access and backhaul, are applied to a wider range of areas, the importance of power-efficient radio technologies such as single carrier may increase.
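The power-efficiency argument for single-carrier waveforms is usually stated in terms of peak-to-average power ratio (PAPR): an OFDM signal sums many subcarriers, producing large envelope peaks that force amplifier back-off. A minimal numeric sketch; the QPSK alphabet and 256-subcarrier count are illustrative assumptions.

```python
import cmath
import math
import random

random.seed(0)

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

N = 256
# Random unit-power QPSK symbols for N subcarriers
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / math.sqrt(2)
        for _ in range(N)]

# OFDM time-domain signal: inverse DFT of the subcarrier symbols
ofdm = [sum(qpsk[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
        for n in range(N)]

# Single-carrier signal: the same QPSK symbols sent one per sample
print(f"single-carrier PAPR: {papr_db(qpsk):.1f} dB")  # constant envelope: 0 dB
print(f"OFDM PAPR:           {papr_db(ofdm):.1f} dB")  # many dB higher
```

The constant-envelope single-carrier signal lets the power amplifier run near saturation, whereas the OFDM peaks require back-off that directly costs coverage and efficiency at high frequencies.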
Technologies to enhance energy efficiency and low power consumption:
Until now, humans have been the principal data consumers, but in the future the main consumers of data will gradually shift from humans to smart machines equipped with AI. In line with this, a paradigm shift is expected from the smartphone-dominated era to a multi-device era in which various types of terminals become the norm: not only wearables, skin patches, bio-implants, and exoskeletons integrated with advanced man-machine interfaces such as gesture, haptic, and brain sensors, but also cars, UAVs, and robots equipped with AI, while smartphones remain in use. This diversification of terminals will allow new verticals to emerge and prosper.
Future IMT systems are expected to support about a trillion devices, mainly driven by the surge in demand for IoT devices that cover a wide variety of applications such as smart cities, smart industry, and smart homes. A key category is power-constrained devices that are meant to be left in place for very long periods of time, stretching into several years. This may be because the devices are inaccessible, or it is difficult or expensive to reach them once installed. These devices may perform a wide range of functions such as asset tracking, supply chain logistics and infrastructure monitoring. Such devices may also include the category of Internet-of-Tags, which involves tracking, sensing or actuation functions. The need to improve energy efficiency has given rise to the field of energy-efficient communications, or green communications.
Low energy consumption can be considered from both the user device’s and the network’s perspectives. Technological advances in AI/ML, molecular, backscatter, and visible-light communications, fog/edge computing, and metamaterials/metasurfaces aim at lowering device and network power consumption. Efficient low-overhead communications are appealing for saving overhead-related energy at the devices, for example by using channel state information (CSI)-limited/free schemes instead of training-based instantaneous CSI. On the other hand, network densification, distributed antenna deployments, and moving/flying transmitters can shorten communication distances, lowering the energy consumed for communication on the device side and reducing RF pollution in general. Reconfigurable antennas and rotating antennas are also promising technologies in this regard.
Wireless charging technologies:
Support of energy harvesting from wireless signals can eliminate an IoT device’s need to draw power from its battery for downlink signal detection and processing, which enables “zero-energy” radio operations. It is also possible to utilize natural energy sources for energy harvesting to meet this requirement, e.g., solar power, etc.
Wireless charging through RF wireless energy transfer (WET) has emerged as a promising charging technology. WET is currently being considered, analyzed and tested as a nascent stand-alone technology, and its wide integration into mainstream wireless systems can be envisioned in the coming years. However, increasing the end-to-end efficiency, supporting mobility at least at pedestrian speed, facilitating ubiquitous power accessibility within the network coverage area, resolving the safety and health issues of WET systems, compliance with regulations, and enabling seamless integration with wireless communications are the main challenges ahead. Energy beamforming is among the most appealing techniques to enable WET as an efficient solution for powering future IoT networks. Energy beamforming also allows transmitted signals to adapt to the propagation environment, thus optimizing wireless energy delivery.
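Energy beamforming can be sketched with maximum-ratio transmission (MRT): the transmitter phase-aligns its antennas to the channel so the radiated fields add coherently at the energy harvester. The 16-antenna beacon and random channel model below are illustrative assumptions.

```python
import math
import random

random.seed(1)

def mrt_weights(h):
    """Maximum-ratio transmission: conjugate the channel per antenna so the
    transmitted fields add coherently at the energy harvester."""
    norm = math.sqrt(sum(abs(x) ** 2 for x in h))
    return [x.conjugate() / norm for x in h]

M = 16  # transmit antennas at a hypothetical power beacon
# Random complex channel gains from each antenna to the device
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) / math.sqrt(2)
     for _ in range(M)]

w = mrt_weights(h)
# Received power with beamforming: |h . w|^2 == ||h||^2 (coherent array gain)
p_beam = abs(sum(hi * wi for hi, wi in zip(h, w))) ** 2
# Baseline: the same total transmit power sent from a single antenna
p_single = abs(h[0]) ** 2
print(f"beamforming gain over single antenna: {p_beam / p_single:.1f}x")
```

The coherent gain scales with the number of antennas, which is why energy beamforming is central to making the end-to-end efficiency of WET practical.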
Backscattering technology is an alternative approach to low-power, low-cost communication. A device can send information by modulating and reflecting wireless signals received from ambient sources, without the need for power-hungry transceivers, amplifiers, and other traditional communication modules; extremely low-power, low-cost communication can thus be achieved. Such a device can harvest the energy of ambient wireless signals and/or other energy sources for its communication, thereby achieving nearly zero-power communication. Ambient backscatter communication (AmBC) refers to a backscattering communication system that exploits ambient RF signals to transmit information bits without active RF transmission. The main challenges for these backscattering technologies include interference between backscattered signals and source signals, and limited communication range and data rates. Therefore, the techniques to be developed for backscattering communication include modulation and channel coding, signal-detection algorithms, interference-coordination techniques, combinations with MIMO technology, multi-user access approaches, etc.
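The reflect-or-absorb principle of backscatter modulation, paired with a simple energy-detection receiver, can be sketched as follows. The carrier model, reflection coefficient, and noise level are illustrative assumptions; a real AmBC link must also handle the interference and range issues just described.

```python
import math
import random

random.seed(2)

def backscatter(bits, spb=200, ambient_amp=1.0, reflect=0.5, noise=0.05):
    """Tag sends bits by switching its antenna load: reflect (bit 1) or
    absorb (bit 0) the ambient carrier. No active RF transmission occurs."""
    rx = []
    for b in bits:
        for _ in range(spb):
            ambient = ambient_amp * math.cos(random.uniform(0, 2 * math.pi))
            reflected = reflect * ambient if b else 0.0
            rx.append(ambient + reflected + random.gauss(0, noise))
    return rx

def energy_detect(rx, spb=200):
    """Receiver: compare per-bit average energy against a midpoint threshold."""
    energies = [sum(s * s for s in rx[i:i + spb]) / spb
                for i in range(0, len(rx), spb)]
    thr = (max(energies) + min(energies)) / 2
    return [1 if e > thr else 0 for e in energies]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = energy_detect(backscatter(bits))
print(decoded)
```

Reflected symbols raise the received energy relative to absorbed ones, so the receiver needs only an energy comparison rather than coherent demodulation, which is what keeps the tag near zero power.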
On-demand access technologies:
Another alternative approach to low-power communication is an on-demand passive device that uses the energy of received signals to trigger wake-up of the receive chain. The on-demand passive device stays in sleep mode with zero power consumption; when data arrives, the network sends a wake-up signal that wakes the receiver and turns on the transceiver, switching the device to the connected state. A zero-energy passive device for triggering UE wake-up would be particularly useful for machine-type communication, wearable devices, health devices, and general mobile phones. To support such zero-power passive wake-up devices, the next-generation wireless system needs to design the network and control signalling for on-demand access with UE wake-up.
The UE can support on-demand network access based on backscattering technology to minimize power consumption. The challenge of on-demand network access with a passive wake-up device is receiver sensitivity, which limits the coverage of the passive wake-up device. To accommodate the low receiver sensitivity of the front-end passive device, wake-up signals need to be transmitted at a much higher density compared with traditional base-station deployments. This requirement not only makes it difficult to achieve blanket coverage for on-demand network access but also increases the network energy consumption of tracking UEs with front-end passive wake-up devices. Low receiver sensitivity would thus hinder the coverage and development of next-generation wireless networks.
The on-demand access technologies for improving energy efficiency rely on a front-end wake-up device with zero or low power consumption that triggers the wake-up of the UE receiver in next-generation wireless technologies. The UE receiver and transmitter circuits would be in a sleep state, in which most of the hardware, such as the ASIC, DSP, controller, and memory, is turned off and the software is in standby. The front-end wake-up device would be used mainly for monitoring and receiving wake-up signals in an active or passive way. A low-power active device, e.g., a low-voltage tuned RF (TRF) wake-up receiver with passive RF gain and high-gain envelope detection, used as the front-end wake-up device could extend the receiver sensitivity. Once the wake-up signal is detected, the wake-up device activates the hardware and initializes the software from the sleep state to the active state.
If an ultra-low-power simplified receiver technology is used to continuously monitor the wake-up signal, power consumption can be dramatically reduced and the battery life of IoT devices can be extended significantly, while low paging latency is guaranteed. To meet the power consumption budget, the ultra-low-power receiver may not require a digital receiver, may digitize the RF signal directly, or may even be a passive envelope detector, pursuing simple schemes for modulation, e.g., on-off keying (OOK) or frequency-shift keying (FSK), and coding, e.g., Manchester coding.
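The OOK-plus-Manchester scheme just mentioned can be sketched with a simple encoder/decoder. The wake-up bit pattern and the 1 → [1, 0] chip convention below are illustrative assumptions; Manchester polarity conventions differ between standards.

```python
def manchester_encode(bits):
    """Manchester coding: 1 -> [1, 0], 0 -> [0, 1]. A transition occurs in
    every bit period, so a passive envelope detector can recover symbol
    timing without a precise local clock."""
    chips = []
    for b in bits:
        chips += [1, 0] if b else [0, 1]
    return chips

def manchester_decode(chips):
    out = []
    for i in range(0, len(chips), 2):
        pair = (chips[i], chips[i + 1])
        if pair == (1, 0):
            out.append(1)
        elif pair == (0, 1):
            out.append(0)
        else:
            raise ValueError("invalid Manchester pair (no mid-bit transition)")
    return out

wake_id = [1, 0, 1, 1, 0, 1, 0, 0]      # hypothetical device wake-up pattern
ook_chips = manchester_encode(wake_id)  # each chip keys the OOK carrier on/off
assert manchester_decode(ook_chips) == wake_id
print(ook_chips)
```

The guaranteed mid-bit transition is what lets a crude envelope detector stay synchronized, keeping the monitoring receiver within a near-zero power budget.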
The system designs of next-generation wireless technologies need to take the processing time of hardware activation and software initialization into consideration for on-demand access technologies to achieve optimum UE energy efficiency. Next-generation cellular technologies would not only operate standalone but would also be integrated with legacy technologies in the mobile phone for multi-technology multi-connectivity.
Technologies to natively support real-time services/communications:
There are two essential technology components that support real-time communications and realize extreme low latency.
One is accurate time and frequency information shared across the terrestrial network. In particular, when network nodes are equipped with compact atomic clocks, their high holdover performance can dramatically reduce synchronization iterations over the local network. The high frequency accuracy obtained from the atomic clocks also reduces the frequency offset between Tx and Rx, leading to a low BER, particularly at high carrier frequencies. Collecting the time differences among node clocks enables the estimation of a more stable and robust time using a maximum-likelihood method, and the result can be delivered back to each node for self-correction. Wireless space-time synchronization, where clocks are synchronized at the picosecond level together with the determination of positions, is another method on which a low-latency communication protocol can be built, with a capability for autonomous and distributed operation. Such a synchronized network supports schedule management for edge processing in mobile backhauls. The common time and frequency can be made traceable to standard time or frequency by linking one node to a precision time/frequency source.
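Under Gaussian measurement noise, the maximum-likelihood estimate of node clock offsets from collected pairwise time differences reduces to a least-squares problem, and for a fully connected measurement graph it has a closed form. A small sketch; the node count, true offsets, and noise level are illustrative assumptions.

```python
import random

random.seed(3)

# True clock offsets of 5 nodes (seconds); node 0 serves as the reference.
true_offsets = [0.0, 2e-9, -1.5e-9, 3e-9, 0.5e-9]
n = len(true_offsets)

# Measured pairwise time differences d_ij = t_i - t_j + noise, for i < j
meas = {}
for i in range(n):
    for j in range(i + 1, n):
        meas[(i, j)] = true_offsets[i] - true_offsets[j] + random.gauss(0, 0.1e-9)

def estimate(meas, n):
    """ML (= least-squares) offsets for a fully connected graph: the normal
    equations give t_i = (1/n) * sum_j d_ij (with d_ji = -d_ij), up to a
    common shift, which we remove by re-referencing to node 0."""
    est = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            if i == j:
                continue
            s += meas[(i, j)] if i < j else -meas[(j, i)]
        est.append(s / n)
    return [e - est[0] for e in est]

est = estimate(meas, n)
for i, (t, e) in enumerate(zip(true_offsets, est)):
    print(f"node {i}: true {t * 1e9:+.2f} ns, estimated {e * 1e9:+.2f} ns")
```

Because every offset estimate averages several independent measurements, the residual error falls below that of any single pairwise measurement, which is the stability gain the collection-and-correction loop described above relies on.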
Another enabler is fine-grained and proactive just-in-time radio access, which incorporates extremely short transmission time intervals (TTIs) for scheduling, reducing buffering and channel-access delay. The benefit of these two technologies can be further enhanced by time-sensitive communication protocols, which enable the prioritization of latency-sensitive or mission-critical traffic, leading to real-time communications. Resource management can be supported by leveraging application-domain information about the predictability of actual resource requirements, considering the context and traffic characteristics. Periodic transmissions can be pre-scheduled within given and precise time boundaries, while AI and ML tools can be used in scheduling algorithms. Resource allocation for real-time communications may also span a multi-dimensional solution space comprising multi-RAT, multi-link, etc., managed by a dedicated real-time management function that is aware of resource needs, availability and the surrounding environment.
Status: This draft report is scheduled to be completed at the next ITU-R WP 5D meeting and if so will be submitted to ITU-R SG 5 for approval in November 2022.