This ITU-R draft report is not complete or agreed at this time; all of the text below is therefore subject to change. Artificial intelligence is, however, expected to play a significant role in future IMT systems.
International Mobile Telecommunications (IMT) systems are mobile broadband systems comprising IMT-2000 (3G), IMT-Advanced (4G) and IMT-2020 (5G; the IMT-2020 radio interface technologies (RITs/SRITs) are specified in Recommendation ITU-R M.2150, previously known as the IMT-2020 specifications).
IMT-2000 provides access by means of one or more radio links to a wide range of telecommunications services supported by the fixed telecommunications networks (e.g. PSTN/Internet) and other services specific to mobile users. Since the year 2000, IMT-2000 has been continuously enhanced, and Recommendation ITU-R M.1457, which provides the detailed radio interface specifications of IMT-2000, has been updated accordingly. New features and technologies have been introduced to IMT-2000 that enhance its capabilities.
IMT-Advanced is a mobile system that includes new capabilities of IMT going far beyond those of IMT-2000. It also has capabilities for high-quality multimedia applications within a wide range of services and platforms, providing a significant improvement in the performance and quality of current services. IMT-Advanced systems can work in low to high mobility conditions and over a wide range of data rates, in accordance with user and service demands in multiple user environments. Such systems provide access to a wide range of telecommunication services, including advanced mobile services, supported by mobile and fixed networks, which are generally packet-based. Recommendation ITU-R M.2012 provides the detailed radio interface specifications of IMT-Advanced.
ITU-R studied the technology trends in preparation for the development of IMT-Advanced and IMT-2020, and the results were documented in Reports ITU-R M.2038 and ITU-R M.2320, respectively.
Since the approval of Report ITU-R M.2320 in 2014, there have been significant advances in IMT technologies and the deployment of IMT systems. The capabilities of IMT systems are being continuously enhanced in line with user trends and technology developments. IMT-2020 systems include new capabilities of IMT that go beyond those of IMT-2000 and IMT-Advanced and make IMT-2020 more efficient, faster, more flexible and more reliable when providing diverse services in the intended usage scenarios, including enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC) and massive machine-type communication (mMTC).
This Report provides information on the technology trends of terrestrial IMT systems considering the time-frame 2023-2030 and beyond. The technologies described in this Report are collections of possible technology enablers which may be applied in the future. This Report does not preclude the adoption of any other existing or future technologies; newly emerging technologies are expected to appear over this time-frame.
This Report provides a broad view of future technical aspects of terrestrial IMT systems considering the time frame up to 2030 and beyond, characterized with respect to key attributes and alignment with relevant driving factors. It includes information on technical and operational characteristics of terrestrial IMT systems, including the evolution of IMT through advances in technology and spectrally efficient techniques, and their deployment.
New services and application trends:
The development of IMT systems for 2030 and beyond calls for a thorough reconsideration of several types of interactions. The roles of modularity and complementarity of new technological solutions become increasingly important in the development of increasingly complex systems. The use of data and algorithms, such as AI, will play an important role, and technological complementarities are required to ensure that the technology innovations complement each other. This is particularly important as the role of IMT for 2030 and beyond can be seen as a pervasive general-purpose system, instead of simply an enabling technology, resulting in complex technical dependencies.
The role of the users of new services and applications is important in the technology development for IMT for 2030 and beyond: users will need access to the services, the required devices, and the knowledge to use them, and consideration should also be given to non-users and the potential reasons for their exclusion. Users' opportunities to participate actively as experimenters and developers will increase through a deeper understanding of technologies and skills, allowing them to shape the technologies for personalized needs.
Key new services and application trends for IMT for 2030 and beyond can be summarized as follows:
– Networks will support enabling services that help steer communities and countries towards reaching the UN SDGs
– Customization of user experience will increase with the help of user-centric resource orchestration models
– Localized demand–supply–consumption models will become prominent at a global level
– Community-driven networks and public–private partnerships (PPP) will bring about new models for future service provisioning
– Networks will have a strong role in various vertical and industrial contexts
– Market entry barriers will be lowered by the decoupling of technology platforms, making it possible for multiple entities to contribute to innovations
– Empowering citizens as knowledge producers, users and developers will contribute to a process of human-centred innovation, contributing to pluralism and increased diversity
– Privacy will be strongly influenced by the increased platform data economy or sharing economy, emergence of intelligent assistants (AI), connected living in smart cities, transhumanism, and digital twins
– Monitoring and steering of circular economy will be possible, helping to create better understanding of sustainable data economy
– Sharing- and circular economy-based co-creation will enable the promotion of sustainable interaction with existing resources and processes
– Development of products and technologies that "innovate to zero", for example zero-waste and zero-emission technologies, will be promoted
– Immersive digital realities will facilitate novel ways of learning, understanding, and memorizing in several fields of science.
The role of IMT for 2030 and beyond will be to connect a multitude of devices and processes, as well as humans, to a global information grid in a cognitive fashion, offering new opportunities for various verticals. Considering their different development cycles, the full range of potential advances and vertical transformations will continue to occur in the beyond-2030 era. The trend towards higher data rates will continue towards 2030, leading to peak data rates approaching the Tbit/s regime indoors, which will require large available bandwidths and give rise to (sub-)THz communications. On the other hand, a large portion of the verticals' data traffic will be measurement-based or actuation-related small data, which in many cases requires extremely low latency in rapid control loops, necessitating short over-the-air latencies to allow time for computation and decision making. At the same time, the reliability requirements of many vertical applications will be stringent. Industrial devices and processes, future haptic applications and multi-stream holographic applications require timing synchronization, setting tight requirements on transmission jitter. In the future, there will be use cases that require extreme performance, as well as new combinations of requirements that do not fall into the three categories of IMT-2020: eMBB, URLLC and massive machine-type communication (mMTC). Some of these use cases will require wide coverage, whereas others are confined to small areas.
The three usage scenarios of IMT-2020, i.e. eMBB, mMTC and URLLC, will remain important, and new use cases and applications should all be taken into account in the continuing evolution, especially those driving technology development and reflecting future requirements.
Services and trend opportunities:
– Holographic Communications
Holographic displays are the next evolution in multimedia experience delivering 3D images from one or multiple sources to one or multiple destinations, providing an immersive 3D experience for the end user. Interactive holographic capability in the network will require a combination of very high data rates and ultra-low latency. The former arises because a hologram consists of multiple 3D images, while the latter is rooted in the fact that parallax is added so that the user can interact with the image, which also changes with the viewer’s position.
Holographic communication provides real-time three-dimensional representation of people, things, and their surroundings in a remote scenario. It requires at least an order of magnitude higher transmission rate and powerful 3D display capability.
– Tactile and Haptic Internet Applications
Advanced robotics scenarios in manufacturing need a maximum latency target in a communication link of 100 microseconds (µs), and round-trip reaction times of 1 millisecond (ms). Human operators can monitor the remote machines by VR or holographic-type communications, and are aided by tactile sensors, which could also involve actuation and control via kinaesthetic feedback.
Through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication and coordination, autonomous driving can result in a large reduction of road accidents and traffic jams. Latency in the order of a few ms will likely be needed for collision avoidance and remote driving.
Tele-diagnosis, remote surgery and telerehabilitation are just some of the many potential applications in healthcare. Tele-diagnostic tools and medical expertise/consultation could be available anywhere and anytime, regardless of the location of the patient and the medical practitioner. Remote and robotic surgery is an application in which a surgeon receives real-time audio-visual feeds of the patient who is being operated upon at a remote location. The technical requirements for haptic internet capability cannot be fully provided by current systems.
– Network and Computing Convergence
Mobile edge computing (MEC) will be deployed as part of IMT-2020 (5G) networks, and this architecture will continue to evolve towards IMT systems for 2030. When a client requests a low-latency service, the network may direct the request to the nearest edge computing site. For computation-intensive applications, and due to the need for load balancing, multiple edge computing sites may be involved, but the computing resources must be utilized in a coordinated manner. Augmented reality/virtual reality (AR/VR) rendering, autonomous driving and holographic-type communications are all candidates for edge cloud coordination.
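As a toy illustration of the coordinated site selection described above (the cost model and site names are assumptions for this sketch, not part of this Report), an edge orchestrator might weigh network latency against current site load when directing a request:

```python
# Hypothetical sketch of edge-site selection: combine network latency with
# current load so a heavily loaded nearby site can lose to a lightly loaded
# regional site. The weighting below is an illustrative assumption.
def pick_site(sites, w_load=0.5):
    # sites: list of (name, latency_ms, load_fraction in [0, 1])
    # cost = latency + penalty proportional to load (scaled to ms-like units)
    return min(sites, key=lambda s: s[1] + w_load * 100 * s[2])[0]

# busy metro edge (90% load) vs. lightly loaded regional edge
sites = [("metro-edge", 5.0, 0.9), ("regional-edge", 12.0, 0.2)]
print(pick_site(sites))  # the regional site wins despite higher latency
```

A real orchestrator would of course use measured telemetry and per-application QoS targets rather than a fixed linear cost.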
– Extremely High Rate Information Access
Access points in metro stations, shopping malls, and other public places may provide information access kiosks. The data rates for these kiosks could be up to 1 Tbit/s, providing fibre-like speeds. They could also serve the backhaul needs of millimetre-wave (mmWave) small cells. Co-existence with contemporaneous cellular services, as well as security, appear to be the major issues requiring further attention in this direction.
– Connectivity for Everything
Scenarios include real-time monitoring of buildings, cities, the environment, cars and transportation, roads, critical infrastructure, water and power, etc. The internet of bio-things, through smart wearable devices and intra-body communications achieved via implanted sensors, will drive the need for connectivity much beyond mMTC.
It is anticipated that private networks, application- or vertical-specific networks, mini- and micro-enterprise networks, and IoT/sensor networks will increase in number in the coming years, based on multiple radio technologies. Interoperability is one of the most significant challenges in such a ubiquitous connectivity/compute environment (smart environments), where different products, processes, applications, use cases and organizations are connected. Interactions among telecommunications networks, computers, and other peripheral devices have been of interest since the earliest distributed computing systems.
– XR – Interactive immersive experience
The interactive immersive experience use case will have the ability to seamlessly blend virtual and real-world environments and offer new multi-sensory experiences to users. This use case will enable the users to interact with avatars of other remotely located users and flexibly manipulate objects from representations of real and/or virtual environments with high degree of realism. The implications of this use case are expected to be immense, given its wide-ranging applicability to social, entertainment, gaming, industry, and business sectors.
X-Reality (XR), such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), is expected to provide higher resolution, a larger field of view (FoV), higher frame rates, and lower motion-to-photon (MTP) latency, all of which translate into higher demands on transmission data rate and end-to-end latency.
Key challenges in supporting interactive experiences in the network include the synchronized transport of multiple modalities of flows (e.g. visual media, audio, haptics) to and from different devices in a collaborative group serving the same XR application. Another important consideration is supporting real-time adaptation in the network relative to user movements and actions, to ensure that interactions with other users and objects appear highly realistic in terms of placement and responsivity. Enabling spatial interactions will also require fast accessibility and ease of integration of content containing up-to-date and accurate representations of real/virtual environments from different content sources.
– Multidimensional sensing
Sensing based on measuring and analysing wireless signals will open opportunities for high-precision positioning, ultra-high-resolution imaging, mapping and environment reconstruction, gesture and motion recognition, which will demand high sensing resolution, accuracy, and detection rate.
– Digital Twin
A digital twin is a digital replica of entities in the physical world. It demands real-time, high-accuracy sensing to ensure fidelity, as well as low latency and high data transmission rates to guarantee real-time interaction between the virtual and physical worlds.
A digital twin network is a dynamic replica of the physical network over its full life cycle. It should be capable of generating perceptive and cognitive intelligence based on the collection of historical and on-line network data, of continuously seeking the optimal state of the physical network in advance, and of enforcing management operations accordingly. Digital twins enable network self-boosting, self-evolution and self-optimization by verifying new functionalities, services and optimization features before deployment. Sensing and learning are the two fundamental functions for fusing the physical and cyber worlds.
– Machine type communication and data markets
Enabling efficient machine type communication (MTC) continues to be an important driver for IMT for 2030 and beyond. MTC, which allows machines and devices to communicate with each other without direct human involvement, is a major driver behind the Internet of Things (IoT) and the future digitalization of economies and society. MTC encompasses critical MTC (cMTC) and massive MTC (mMTC). The former targets mission-critical connectivity with stringent requirements on key performance indicators (KPIs) such as reliability, latency, dependability, and synchronization accuracy. The latter addresses the connectivity needs of a massive number of potentially low-rate, low-energy simple devices, for which connection density and energy efficiency are the most important KPIs.
For 2030 and beyond, data markets will become an increasingly important technology area connecting data suppliers and customers. The data generated by widely distributed machine-type devices (MTDs) will have enormous business and societal value. The value-added services of data marketplaces will be empowered by emerging technologies such as artificial intelligence (AI) and distributed ledger technology (DLT), while adding new data-centric KPIs such as the age of information, privacy and localization accuracy.
– Proliferation of intelligence
Real-time distributed learning, joint inference among a proliferation of intelligent devices, and collaboration between intelligent robots demand a re-thinking of communication system and network design.
– Global Seamless Coverage
In order to connect the unconnected and provide continuously high-quality mobile broadband service in various areas, the interconnection of terrestrial and non-terrestrial networks is expected to facilitate the provision of such services.
Technology Drivers for future technology trends towards 2030 and beyond:
The continuing evolution of IMT systems, and the underlying technologies, must be guided by the imperative to satisfy fundamental needs, contextualized in terms of how they can help society, end users, and value creation and delivery. These necessities and key driving factors are:
– Societal goals – Future technologies should contribute further to meeting the UN Sustainable Development Goals (SDGs), including environmental sustainability, trust and inclusion, efficient delivery of health care, reduction of poverty and inequality, improvements in public safety and privacy, support for ageing populations, and managing expanding urbanization.
– Market expectations – New technologies should enable significant and novel capabilities, supporting radically new and differentiated services, opening up greater market opportunities
– Operational necessities – The need to manage complexity, drive efficiency, and reduce costs, with end to end automation and visibility, is also an imperative as a motivation and driving factor
Key considerations for IMT Systems for 2030 and beyond include:
– Sustainability/Energy efficiency
Energy efficiency has long been an important design target for both the network and terminals. While improving energy efficiency, total energy consumption should also be kept as low as possible for sustainable development. Power-efficient technology solutions are needed in both backhaul and local access to make use of small-scale renewable energy sources.
– Peak Data Rate/Guaranteed Data Rate
The peak data rate of the future system should be greatly increased in order to support extremely high-bandwidth services such as extremely immersive XR and holographic communication.
Guaranteed data rate usually refers to the achievable data rate at the edge of the coverage area. The future system should guarantee the user experience regardless of user location and network traffic conditions.
– Latency
Services with real-time and precise control usually place high demands on low communication latency, such as air interface delay, end-to-end latency, and round-trip latency.
– Jitter
Jitter usually refers to the degree of latency variation. Some future services, such as time-sensitive industrial automation applications, may require jitter close to zero.
– Sensing resolution and accuracy
Sensing based services, including traditional positioning and new functions such as imaging and mapping, will be widely integrated with future smart services, including indoor and outdoor scenarios. Very high accuracy and resolution will be needed to support a better service experience.
– Connection density
Connection density refers to the number of connected or accessible devices per unit area. It is an important indicator of the ability of mobile networks to support large-scale terminal deployments. With the popularity of the Internet of Things (IoT) and the diversification of terminal access in specific applications such as industrial automation and personal healthcare, the mobile system needs the ability to support ultra-large numbers of connections.
– Coverage and full connectivity
The future network should be able to provide global coverage and full connectivity through wireless and wired, terrestrial and non-terrestrial coverage with a heterogeneous multi-layer architecture. The full connectivity network should support intelligent scheduling of connectivity according to application requirements and network status, to improve resource efficiency and service experience. It will extend the provision of quality-guaranteed services, such as MBB, massive IoT and high-precision navigation services, from outdoor to indoor, from urban to rural areas, and from terrestrial to non-terrestrial spaces.
– Mobility
Mobility refers to the maximum speed supported under a specific quality of service (QoS) requirement. The future system will not only support terminals on land, including high-speed trains, but will also provide services to terminals in high-speed airplanes, drones and so on.
– Spectrum utilization
With new services and applications towards 2030 and beyond, more spectrum is required to accommodate the explosive growth in mobile data traffic. Further study is needed on novel usage of low and mid bands, and on extension to much higher frequency bands with much broader channel bandwidths. The smart utilization of multiple bands and improvement of spectrum efficiency through advanced technologies are essential to achieve high throughput in limited bandwidth.
– Simplified user-centric network
With huge numbers of new services and scenarios towards 2030 and beyond, the network will be required to satisfy diversified demands and personalized performance. The soft network should be designed as a fully service-based, cloud-native radio access network that can guarantee QoS and provide a consistent user experience. The lite network should be constructed as a globally unified access network with a simple architecture and powerful capabilities for robust signalling control, accurate network services and efficient transmission, through converged communication protocols and access technologies with plug-and-play, on-demand deployment. A user-centric network is required to enable a fully distributed/decentralized network that mitigates single points of failure, as well as user-controlled data ownership, which is critical to the next-generation network.
– Native AI
The future mobile system will have stronger capabilities and support more diversified services, which will inevitably increase the complexity of the network. Artificial Intelligence (AI) reasoning will be embedded everywhere in the future network including physical layer design, radio resource management, network security, and application enhancement, as well as network architecture, which results in a multi-layer deep integrated intelligent network design. Meanwhile the future network can also support distributed AI as a service for larger scale intelligence.
– Security, privacy and resilience
The future network should support more advanced system resilience for reliable operation and service provision; security providing confidentiality, integrity and availability; privacy with self-sovereign data; and safety regarding impacts on human beings and the environment.
The roles of trust, security and privacy are somewhat interconnected, but different, facets of future networks. Inherited and novel security threats in future networks need to be addressed. The diversity and volume of novel IoT and other networked devices and their control systems will continue to pose significant security and privacy risks and additional threat vectors as we move from IMT-2020 to beyond. IMT for 2030 and beyond needs to support embedded end-to-end trust, such that the resulting level of information security in the networks is significantly better than in state-of-the-art networks. Trust modelling, trust policies and trust mechanisms need to be defined.
Security algorithms can use machine learning to identify attacks and respond to them. Continuous deep learning at the packet/byte level is needed, applying machine learning to enforce policies and to detect, contain, mitigate, and prevent threats or active attacks. While IMT-2020 is still largely device/network specific, future networks envisage far more immersive engagement with the network.
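As a minimal stand-in for the ML-based attack detection described above (this is an illustrative sketch, not a mechanism defined by this Report), a detector can learn a statistical baseline of normal per-flow traffic and flag observations that deviate far from it:

```python
import math

# Toy anomaly detector (illustrative assumption): learn a running baseline of
# per-flow packet rates with Welford's online algorithm, then flag a new
# observation whose z-score exceeds a threshold as a potential attack.
class RateAnomalyDetector:
    def __init__(self, threshold=4.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # Welford running statistics
        self.threshold = threshold

    def observe(self, rate):
        # incorporate one benign observation into the baseline
        self.n += 1
        d = rate - self.mean
        self.mean += d / self.n
        self.m2 += d * (rate - self.mean)

    def is_anomalous(self, rate):
        if self.n < 2:
            return False  # not enough data for a baseline yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(rate - self.mean) / std > self.threshold

det = RateAnomalyDetector()
for r in [100, 101, 99, 98, 102, 100, 97, 103, 101, 99]:
    det.observe(r)          # baseline: ~100 packets/s
print(det.is_anomalous(500))  # a flood-like burst is flagged
```

A production system would of course use richer features and learned models rather than a single z-score, but the structure, learning normality online and scoring deviations, is the same.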
Conventional trust, security and privacy solutions may not be directly applicable to specific machine type communication scenarios owing to their lack of humans in the loop, massive deployment, diverse requirements, and wide range of deployment scenarios. This motivates the design of resource-efficient unsupervised solutions to be exploited by MTD, e.g., based on distributed ledger technology (DLT).
– Dynamically controllable radio environment
The ability to dynamically change the characteristics of the radio propagation environment and create favourable channel conditions would support higher-data-rate communication and improve coverage.
Emerging Technology trends and enablers:
Technologies to use AI in communications:
The great success of artificial intelligence (AI) in image, video and audio signal processing, data mining and knowledge discovery, etc., has made it possible to shift wireless communication to an intelligent paradigm in a similar manner, i.e., learning from wireless big data, which has yet to be fully exploited, to design new and efficient architectures, protocols, schemes and algorithms for the future communication system. In turn, with the wide deployment of base stations, edge servers and intelligent devices, the mobile network will provide a new and powerful platform for ubiquitous data collection, storage, exchange and computing, which are needed for future mobile/distributed/collaborative machine learning. For the future communication system, an emerging and transformative move will be providing access to AI for everyone, every business and every service, anywhere and anytime. AI will be a key design tool of the future communication system, and it will be the cornerstone for creating intelligence everywhere. One of the main differences of the future communication system compared to IMT-2020 is that it will use mobile technologies to enable the proliferation of AI, and use the radio networks to augment ubiquitous, distributed machine learning. Furthermore, AI ethics issues, which exist in all AI-based systems and applications, have been raised and discussed in the wireless community from different aspects. Future IMT technology will therefore need to address fairness and robustness in order to mitigate AI ethics issues to a certain level.
AI-native new air interface:
Applying tools from artificial intelligence (AI) and machine learning (ML), including its subset deep learning (DL), in wireless communications has gained a lot of traction in recent years. This trend has in large part been motivated by the significant increase in system complexity in the IMT-2020 radio access network (RAN) and its evolution over previous wireless technology generations. Deep neural networks allow the characterization of specific or even unknown channel and network environments, i.e., the traffic, the interference and user behaviours, and then adapt the radio signalling to those environments. With learning, the system can optimize user signalling, power consumption and end-to-end connectivity, and smartly coordinate multi-user access to radio resources, thus optimizing data and control plane signalling and improving overall system performance.
The most challenging issue in air interface design is sensing the communication environment, i.e., the estimation and prediction of propagation channels. To this end, traditional air interface design devotes much effort to pilot design and channel estimation. Now, with machine learning, and especially the black-box modelling capability and hyper-parameterization of deep neural networks, the unknowns of the underlying channel could be properly learned, provided that sufficient data is available. Thus, a physical channel can be reconstructed rather than merely estimated. With transfer learning, the learned model can be transferred to adjacent nodes. This opens a new way of designing the air interface. Several components in the transceiver chain are expected to be implemented through AI/ML-based algorithms, including, at a minimum, beamforming and beam management on the transmitter side, and channel estimation, symbol detection and/or decoding on the receiver side. There will therefore be a heavy focus on redesigning the physical layer of the communication protocol stack using AI. On the other hand, the implementation issues related to the periodic updating of deep learning models used in various blocks of the physical layer must be addressed.
In addition, radio resource management or resource allocation can also be implemented via AI/ML based methods. In a multi-user environment, with reinforcement learning, base stations and user equipment could automatically coordinate the channel access and resource allocation based on the signals they respectively received. Each of the nodes calculates its reward for each transmission, and adjusts its power, beam direction and other signalling to accomplish the distributed interference coordination and improve the system capacity. Following are some potential usages:
– To address the QoE bottleneck of the last-mile radio link, RAN AI is expected to expose radio-channel prediction capabilities (for example, available bandwidth and predicted latency) for upper-layer adaptation, taking into account multi-user radio channel fluctuation, traffic patterns, cell-load variation, etc. This interaction could be based on a subscription request from the upper layer and would be triggered only when a predefined threshold is met.
– The optimization of radio resource allocation to meet the requirements of highly demanding applications, such as cloud-based interactive applications requiring low latency and high throughput. The optimization will take into account multi-dimensional metrics, for example application-level traffic patterns (i.e., the video frame-level distribution of I/B/P frames), transport-layer congestion control, lower-layer buffer status, and QoS profiles (e.g. bandwidth, latency).
– To handle the randomness and uncertainty of traffic distribution in vehicular networks, deep-reinforcement-learning-based adaptive exploration approaches can be used for resource allocation, including offline training, online distributed learning methods, etc.
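The distributed coordination described above, where each node adjusts its transmission choices from the reward it observes, can be sketched in miniature (all parameters here are illustrative assumptions, not values from this Report) as independent Q-learning nodes contending for shared channels:

```python
import random

# Toy sketch of distributed channel access via independent Q-learning:
# each node learns which of the shared channels to transmit on, receiving
# reward 1 for a collision-free slot and 0 otherwise. No central scheduler.
class Node:
    def __init__(self, n_channels, eps=0.1, lr=0.3):
        self.q = [0.0] * n_channels  # learned per-channel action values
        self.eps, self.lr = eps, lr

    def choose(self):
        if random.random() < self.eps:            # explore occasionally
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, ch, reward):
        # move the channel's value estimate towards the observed reward
        self.q[ch] += self.lr * (reward - self.q[ch])

def simulate(n_nodes=4, n_channels=4, slots=3000, seed=0):
    random.seed(seed)
    nodes = [Node(n_channels) for _ in range(n_nodes)]
    successes = 0
    for _ in range(slots):
        picks = [n.choose() for n in nodes]
        for node, ch in zip(nodes, picks):
            reward = 1.0 if picks.count(ch) == 1 else 0.0  # collision check
            node.update(ch, reward)
            successes += reward
    return successes / (slots * n_nodes)  # fraction of collision-free sends
```

With 4 nodes and 4 channels, uncoordinated random access would succeed only about 42% of the time; the learning nodes settle onto distinct channels and do substantially better, which is the essence of the distributed interference coordination described above.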
Machine learning techniques can be used for symbol detection and/or decoding. While demodulation/decoding in the presence of Gaussian noise or interference by classical means has been studied for many decades, and optimal solutions are available in many cases, ML could be useful in scenarios where either the interference/noise situation does not conform to the assumptions of the optimal theory, or the optimal solutions are too complex. Meanwhile, IMT for 2030 and beyond will likely utilize even shorter codewords than IMT-2020, with low-resolution hardware that inherently introduces non-linearities which are difficult to handle with classical methods. ML could play a major role in symbol detection, precoding, beam selection, and antenna selection.
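A minimal sketch of this idea (the distortion model and parameters are assumptions for illustration, not from this Report): instead of slicing against the ideal constellation, a data-driven detector learns its decision regions from pilot symbols that have passed through an unknown hardware non-linearity, exactly the regime where classical Gaussian-noise assumptions break down.

```python
import cmath, random

# Toy learned QPSK detector: estimate per-symbol centroids from labelled
# pilots observed through an unknown non-linear distortion, then detect by
# nearest centroid. The distortion below is a hypothetical stand-in for
# low-resolution / non-linear hardware.
QPSK = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]

def distort(x):
    # assumed hardware impairment: amplitude compression plus phase rotation
    return x / (1 + 0.5 * abs(x)) * cmath.exp(0.2j)

def noisy(x, rng, sigma=0.05):
    return x + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))

def learn_centroids(n_pilots=400, seed=1):
    rng = random.Random(seed)
    sums, counts = [0j] * 4, [0] * 4
    for _ in range(n_pilots):
        k = rng.randrange(4)                  # transmitted pilot index (known)
        y = noisy(distort(QPSK[k]), rng)      # received pilot sample
        sums[k] += y
        counts[k] += 1
    return [s / c for s, c in zip(sums, counts)]  # learned decision centroids

def detect(y, centroids):
    # nearest-centroid decision, using learned (distorted) constellation points
    return min(range(4), key=lambda k: abs(y - centroids[k]))
```

The same learn-from-pilots structure generalizes to neural detectors; the point of the sketch is only that the decision regions are learned from data rather than derived from an assumed channel model.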
Another promising area for ML is the estimation and prediction of propagation channels. Previous generations, including IMT-2020, have mostly exploited channel state information (CSI) at the receiver, while CSI at the transmitter was mostly based on coarsely quantized feedback of received signal quality and/or beam directions. In systems with even larger numbers of antenna elements, wider bandwidths, and a higher degree of time variation, the performance loss of these techniques is non-negligible. ML may be a promising approach to overcoming such limitations.
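The channel-prediction idea can be illustrated with a deliberately simple learned model (the two-path fading channel and its Doppler frequencies below are toy assumptions, not from this Report): fit a linear predictor of the next channel sample from its recent history by least squares, so the transmitter can act on predicted rather than stale CSI.

```python
import math

# Toy channel predictor: learn, by least squares, coefficients w such that
# h(t) ≈ w1*h(t-1) + ... + w4*h(t-4) for an assumed two-path fading gain.
def channel(t):
    # hypothetical two-path fading gain (illustrative Doppler frequencies)
    return 0.8 * math.cos(0.3 * t) + 0.3 * math.cos(0.7 * t + 1.0)

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for the normal equations
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_predictor(order=4, t0=10, t1=400):
    # accumulate normal equations A w = b minimizing sum_t (w.hist(t) - h(t))^2
    A = [[0.0] * order for _ in range(order)]
    b = [0.0] * order
    for t in range(t0, t1):
        hist = [channel(t - k - 1) for k in range(order)]
        for i in range(order):
            b[i] += hist[i] * channel(t)
            for j in range(order):
                A[i][j] += hist[i] * hist[j]
    return solve(A, b)

def predict(w, t):
    return sum(wi * channel(t - k - 1) for k, wi in enumerate(w))
```

For this noiseless two-sinusoid toy channel an order-4 linear predictor is exact, so the fitted model predicts future samples essentially perfectly; a real channel predictor would replace both the linear model and the synthetic channel with learned models trained on measured CSI.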
The MAC layer is a major application area for AI, where many problems that have legacy solutions can be addressed with AI-based methods using supervised learning, data collection and ML model deployment options. Next-generation MAC algorithms will need to account for the AI used in various layers of the network, especially the physical layer, because of the need to update deployed machine learning models, collect data for supervised learning tasks, and enable reinforcement learning in different blocks of the network.
AI techniques can be used to target one or more wireless domains, including non-real-time (non-RT) network orchestration and management, such as configuration of antenna parameters, and near-real-time (near-RT) network operation, such as load balancing and mobility robustness optimization. Each wireless domain involves different sets of physical and virtual components, families of parameters including key performance indicators (KPIs), underlying complexities, and time constraints for updates. Hence, AI solutions need to be tailored to the different classes of the RAN and their associated problems. There already exists a rich body of research and practical demonstrations of the potential benefits of AI for wireless, including significant network energy savings.
With progress in machine learning and information theory, the ultimate air interface may eventually perform automatic semantic communications. There are many open fundamental problems in this direction for the wireless community. For example, learning algorithms usually rely heavily on wireless data, which may be hard to obtain or must be protected under privacy constraints. One way to address this is to learn from both practical wireless data and statistical models.
Questions related to the optimal ML algorithms under given conditions, the required amount of training data, the transferability of parameters to different environments, and the improvement of explainability will be major topics of research for the foreseeable future. There will be various phases in the development of AI for wireless, and it is imperative to ensure that the increased integration of the technology comes with minimum disruption to the rollout and operation of wireless systems and services. In the short and medium term, AI models may be targeted at optimising specific features within the RAN for IMT-2020 and its evolution, such as network operation and management functionalities. In the longer term, AI may be used to enable new features over legacy wireless systems.
AI-Native radio network:
Future IMT systems are required to support extremely reliable and performance-guaranteed services. They will introduce a multi-dimensional network topology, which will make network management and operation more difficult and introduce more challenging problems. To address these problems, future IMT systems will adopt AI technologies for automated and intelligent networking services. Consequently, to support the computation-intensive tasks of AI applications, the network will evolve into an AI-native architecture.
At its highest level, the AI-native radio network should be designed and implemented by AI as an intelligent radio network that can automatically optimize and adjust itself according to specific requirements, objectives or commands, or to changes in the environment. Research topics include the higher-layer protocols, network architecture and networking technologies enabling such an intelligent radio network.
RAN optimization is a problem that is difficult to solve due to the complexity of its mathematical formulation. The deep reinforcement learning paradigm can enable zero-touch optimization of RAN elements with minimal hand-crafted engineering. In addition, radio network architecture design is often a challenging task that can be automated with the use of AI; methods such as graph representation learning could be utilized to simplify and automate the design problem.
Various use cases of AI-empowered network automation have been proposed, including fault recovery/root cause analysis, AI-based energy optimization, optimal scheduling, and network planning. Key training challenges have been identified: lack of bounding performance, lack of explainability, uncertainty in generalization, and lack of the interoperability needed to realize full network automation. Four types of analytics can be distinguished for future AI-native networks: descriptive, diagnostic, predictive, and prescriptive analytics. The key to successful network automation in an AI-native network architecture is collecting rich and reliable network data, which is typically not open to parties other than network operators.
In general, an overall network architecture consists of four tiers of entities: UE, BS, core network, and application server. The application of AI can be categorized into three levels as shown in Figure 1: 1) local AI, 2) joint AI, and 3) E2E AI.
The future RAN will be able to perceive and adapt to complex and dynamic environments by monitoring and tracking conditions in the radio network while diagnosing and restoring any RAN issues in an automated fashion. To achieve autonomy over its full life cycle management, at least the following novel networking technologies need to be considered: 1) efficient and intelligent network telemetry technologies that leverage AI to apply management operations based on a collection of historical and live network data, 2) automated network management and orchestration technologies that continuously seek the optimal state of the RAN and enforce management operations accordingly, 3) automated life cycle management operations that adjust configurations on radio network elements and optimize new services and features during and after deployment, and 4) AI-based assistance, in particular for aspects such as forecasting, root cause analysis, anomaly detection and intent translation.
More specifically, transporting large quantities of data will burden each network interface. Besides, data sensed from the radio environment sometimes lack corresponding labels. Intelligent data perception, e.g., utilizing generative adversarial networks (GANs) to generate the required data so as to simulate real data, will avoid transferring large amounts of data over interfaces and will protect data privacy to a certain degree. To further this vision of zero-touch network management, an open network data set and an open eco-system need to be established.
It is also possible to introduce user feedback into the network's decision-making process, improving the decisions of AI algorithms and helping the machine better understand user preferences so as to make decisions better aligned with them.
In future IMT systems, more computation nodes will be required to support highly computation-intensive services. Thus, computation nodes will be pervasive from core to edge and from network to device. To cope with this trend, the control and user planes of the network for future IMT systems need to be redesigned, and emerging technologies such as programmable switches and distributed/federated learning need to be aggressively adopted.
To support services in multiple application scenarios, an intelligent network is needed. In the AI-native radio network, AI no longer merely optimizes the wireless resources of the network; rather, it is an intelligent system integrated with the radio network that can supply capabilities on demand.
To realize the intelligence of the radio network, new sensing and AI functions need to be supported. Through the data sensing function, end-to-end collection, processing and storage of network data can be realized, while the AI function can call and subscribe to these data on demand and provide capability support according to different application scenarios. In this way, AI capabilities can be utilized and supported more efficiently and globally.
The AI system in an AI-native radio network is distributed across different network functions. AI algorithms running on different functions, and AI models trained on different functions, are all components of this distributed AI system and form an organic whole. Under the control or coordination of a unified AI control centre, each component of the distributed AI system independently completes its assigned tasks, interacts with other components, and reports measurements to the control centre. The distributed AI system should be an end-to-end solution.
Edge AI is considered one of the key enablers for future IMT systems, especially for sensing-communication-computing-control. Likewise, a distributed deep learning architecture is being considered for realizing URLLC in future IMT systems. The RAN can thus be flexibly and adaptively optimized with the aid of AI to guarantee QoS, leading to the following topics of interest: 1) adaptive RAN slicing architecture and the corresponding distributed intelligence architecture, 2) knowledge-assisted learning architectures and methods, and 3) fast training/federated learning methods.
In addition, self-synthesising networks automate the actual design process, or large parts thereof. Whilst the actual invention of engineering principles may still be done by human researchers, or in combination with AI, the system design, prototyping and standards development would largely be executed by machines. Given that the two phases of system design and standardisation take years, it is hoped that the introduction of self-synthesising networking principles will accelerate feature development in telecoms by 5-8 years.
Radio network for AI:
The radio network will migrate from supporting over-the-top services towards the AI era. Wireless networks should consider AI applications and paradigms that require the exchange of large amounts of data, machine learning models, and inference data between different entities in the networks. Long-term platform technologies must be found to better support AI services, which will greatly impact the design of the future radio network, i.e. the radio network for AI. Distributed and collaborative machine learning is required to fully balance the computing/communication load, improve efficiency, and comply with local data governance and data privacy requirements. Therefore, data-split and model-split approaches will be major focuses of future research. The impacts on future network design are threefold:
– Shift from downlink-centric radio to uplink-centric radio: unlike the existing downlink-centric radio, which usually supports heavier traffic and better QoS on the downlink, AI requires more frequent model and data exchanges between a base station and the different users it serves. The uplink should be reconsidered in network design to attain balanced, efficient and robust distributed machine learning.
– Shift from the core network to the deep edge: the locality of data and the computing/communication needed for deep machine learning pose big challenges to end-to-end delay. To mitigate this, new networks and the corresponding protocols should be designed. One such research direction is to place the major learning processes and threads close to the edge, forming a deep edge that can greatly reduce system delay.
– Shift from cloudification to machine learning: due to the distributed nature of data and computing power, the communication and computing procedures of a machine learning algorithm often take place across the whole network, from the cloud to the edge and the devices. Therefore, traditional cloudification should also be reconsidered to be application-centric, i.e. to meet the specific needs of more general distributed machine learning applications with proper deployment of computing and communication resources.
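The data-split/model-split direction above can be sketched with a toy federated-averaging round, in which clients train on private data and only model weights cross the (uplink) interface; the scalar model and data are purely illustrative.

```python
def local_sgd(w, data, lr=0.1, steps=20):
    """One client's local training: fit a scalar model y ~ w*x by gradient steps
    on its private data, which never leaves the client."""
    for _ in range(steps):
        g = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * g
    return w

def federated_round(w_global, clients):
    """Each client trains locally; only model weights are sent uplink and
    averaged, weighted by local dataset size (federated averaging)."""
    updates = [(local_sgd(w_global, d), len(d)) for d in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Two clients whose private data follow the same underlying model y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)],
           [(0.5, 1.5), (3.0, 9.0), (1.0, 3.0)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
```

Note the communication pattern: per round, each client uploads one weight rather than its raw data, which is exactly why uplink capacity and deep-edge placement become the design bottlenecks discussed above.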
In addition, future data-intensive, real-time applications require distributed ML/AI solutions deployed on the edge-cloud continuum, known as EdgeAI or edge intelligence. These solutions support augmenting human decision processes, developing autonomous systems from small devices to complete factories, optimising network performance, and marshalling the billions of IoT devices expected to be interacting in the 2030s. Distributed ML/AI has become an inseparable part of wireless networks, and increasing volumes of heterogeneous streaming data will require more advanced computing paradigms. Since heterogeneous IoT devices are not as reliable as high-performance centralised servers, distributed and self-organising schemes are mandatory to provide strong robustness against device and link failures. The current open questions in fulfilling the requirements of true edge intelligence include data and resource distribution, distributed and online model training, and inference on those models across multiple heterogeneous devices, locations, and domains of varying context-awareness. The future network architecture is expected to provide native support for radio-based sensing and, through versatile connectivity, accommodate ultra-dense sensor and actuator networks, enabling hyper-local and real-time sensing, communication, and interaction across the intelligent edge-cloud continuum.
Explainable AI for RAN:
Automation principles were introduced into the telecommunications architecture as early as 2008. Despite a swath of algorithmic ML/AI/SON frameworks being available, uptake was not as widespread as expected. An important reason was that, whilst the developed automation frameworks outperformed any other operational approach, they exhibited occasional and unexplainable outages which operators could not accept. Since the proposal of the concept of wireless AI, there has been widespread concern about how to harmonize the relationship between existing communication mechanisms and so-called "black-box" AI (machine learning, or even deep learning) models. It has been strongly encouraged that existing expert wireless knowledge be fused into the design of AI models to improve their performance and interpretability; for example, AI-based MIMO channel estimation has achieved significant performance gains.
In the context of telecoms, explainable AI (XAI) enables the creation of trusted networks which are trusted both by consumers and operators. Individual building blocks in the network are still embodied through machine learning (e.g., regression) or deep learning (e.g., CNNs, RNNs or GANs), but the overall interaction between these automated components is supervised through XAI. It is typically enabled through a fairly deterministic but human-influenceable decision tree which trades levels of trust against performance through planning optimization approaches. Given the high level of automation at the radio interface, RAN, core and transport networks, XAI will play an instrumental role in 5G and 6G in ensuring end-to-end trusted operation of the networks.
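A hedged sketch of the supervisory idea: a small, human-readable decision tree that vets a black-box model's proposed RAN action before it is applied. The thresholds, field names and actions are hypothetical, not taken from any standard.

```python
def supervise(proposal, kpi):
    """A human-readable decision tree that vets a black-box model's proposed
    action against explicit trust rules before it touches the live network.
    Every rejection carries an explanation an operator can audit."""
    if kpi["drop_rate"] > 0.05:
        return ("reject", "drop rate above safety threshold")
    if proposal["tx_power_delta_db"] > 3.0:
        return ("reject", "power step too aggressive for trusted operation")
    if kpi["load"] > 0.9 and proposal["action"] == "sleep_cell":
        return ("reject", "cannot sleep a cell under high load")
    return ("accept", "within trusted operating envelope")

# A black-box energy-saving model proposes sleeping a cell while load is high.
decision, reason = supervise(
    {"action": "sleep_cell", "tx_power_delta_db": 1.0},
    {"drop_rate": 0.01, "load": 0.95})
```

The trade-off noted above appears directly: tightening these rules raises trust but may veto high-performing (yet opaque) decisions.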
Furthermore, it should be pointed out that solutions for integrating existing communication mechanisms and AI models should go beyond simple "one plus one" splices; for example, AI-based channel state information (CSI) feedback improves CSI reconstruction accuracy. It can be anticipated that the exploitation of expert knowledge will be one of the determinant factors in wireless AI model design. We can even envision that the ultimate goal of wireless AI will be to develop a class of models specifically designed around the distinguishing characteristics of data from wireless networks, just like the models in computer vision and natural language processing.
Technologies for integrated sensing and communication:
Wireless sensing, including object detection, ranging, positioning, tracking and imaging, has long been a separate technology developed in parallel with mobile communication systems. Positioning is the only sensing service that mobile communication systems up to IMT-2020 could offer. Departing from the traditional approach of designing wireless networks solely for communication purposes, IMT for 2030 and beyond will consider an integrated sensing and communication (ISAC) system from the outset. In future communication systems, enabled by the potential use of very high frequency bands (e.g. from mmWave and THz up to visible light), wider bandwidths, denser deployment of large antenna arrays, reconfigurable intelligent surfaces (RIS), artificial intelligence (AI) and collaboration between communication nodes/devices, sensing will become a new function integrated with the communication system, enabling new services and solutions with a higher degree of accuracy in aspects such as ranging, Doppler and angular estimation, as well as positioning.
In the ISAC system, the sensing and communication functions will mutually benefit within the integrated system. On one hand, the communication system can assist sensing services: it can exploit radio wave transmission, reflection, and scattering to sense and better understand the physical world, also known as "network as a sensor". On the other hand, sensing results can be used to assist communication access or management, such as more accurate beamforming, better interference management, faster beam failure recovery, and lower overhead for tracking channel state information, improving the quality of service and efficiency of the communication system. This is known as "sensing-assisted communication". Moreover, as a foundational feature for 6G, sensing can be seen as a "new channel" linking the physical world to the digital world. Real-time sensing combined with AI technologies is thus essential to realizing the concept of the digital twin.
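As a toy illustration of sensing-assisted communication, assuming the network has sensed a UE's position, beam selection can collapse from a full beam sweep to a direct angle-to-beam mapping (the grid-of-beams geometry below is invented for illustration):

```python
import math

def select_beam(ue_xy, bs_xy=(0.0, 0.0), num_beams=16):
    """Pick the beam whose pointing direction covers the sensed UE angle,
    instead of sweeping and measuring all candidate beams."""
    angle = math.atan2(ue_xy[1] - bs_xy[1], ue_xy[0] - bs_xy[0]) % (2 * math.pi)
    beam_width = 2 * math.pi / num_beams   # uniform grid of beams
    return int(angle // beam_width)

beam = select_beam((10.0, 5.0))   # UE position obtained from sensing
```

In a real system the sensed position would be uncertain, so one would test a small neighbourhood of beams around this estimate rather than a single index; the overhead saving relative to a full sweep is the point being illustrated.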
In general, the interaction level between communication and sensing systems can be classified as: (a) co-existence, where sensing and communication operate on physically separated hardware, use the same or different spectrum resources and do not share any information, treating each other as interference; (b) cooperation, where the two systems operate on physically separated hardware while information can be shared between them (e.g. prior knowledge of sensing/communication could be shared to reduce inter-system interference or, in some cases, enhance the other system); and (c) integrated design, where the two systems are designed to behave as a single system with information sharing and joint design in spectrum usage, hardware, wireless resource management, air interface, and signal transmission and processing. The focus of ISAC in future IMT is on (c).
In the integrated design, the technology development of ISAC can be divided into different stages, ranging from loosely coupled to fully integrated. As a starting point, the communication and sensing systems share resources such as spectrum and hardware, and communication and sensing can be implemented as one system serving two traffic forms simultaneously. The key research issues at this stage are efficient scheduling and coordination algorithms between sensing and communication modules to minimize mutual interference. As a step further, communication and sensing will work together to improve the performance of a single system: signal processing in the time, frequency and spatial domains can be jointly designed to serve both sensing and communication. Potential directions at this stage are air interface design based on joint waveforms, unified beamforming schemes, etc., which are essential to improve the efficiency of the ISAC system. Towards the mature stage of ISAC, communication and sensing will be fully coordinated in all possible dimensions, including spectrum, hardware, signalling, protocol and networking, achieving mutual promotion and benefit. Further combined with technologies such as AI, network cooperation and multi-node cooperative sensing, the ISAC system will bring benefits in mutual performance enhancement and in the overall cost, size and power consumption of the whole system.
The ISAC capabilities enable many new services which the mobile operators can offer, including but not limited to very high accuracy positioning, tracking, imaging (e.g. for biomedical and security applications), simultaneous localization and mapping, pollution or natural disaster monitoring, gesture and activity recognition, flaw and materials detection. These capabilities will in turn enable application scenarios in future consumer and vertical applications in all kinds of business such as context-aware immersive human-centric communications, industrial automation (Industry 4.0), connected automated vehicles and transportation, energy, healthcare/e-health and so on.
The technology enablers include transceivers building on new RF spectrum at the high-frequency range, the RIS that allows mobile operators to shape and control the electromagnetic response of the environment, and advanced beam-space processing to track users and objects, passive tag (e.g. RFID tag) aided sensing to improve object identification accuracy and efficiency. Equally important, ML/AI algorithms will exploit large datasets to provide new sensing services and improve the communication. And yet communication and sensing services need to share available hardware and waveforms, while fusing information from distinct sources of measurements in the network deployment area. Research challenges remain in areas such as system level design and evaluation methodologies to characterize the fundamental trade-offs of the two functions in the integrated system, the solutions to deal with the increased sensitivity to hardware imperfections, joint waveform design and optimization, etc.
Technologies to support convergence of communication and computing architecture:
A number of technology trends can be observed from emerging IMT towards 2030 and beyond use cases such as digital twins, cyber-physical systems, mixed reality, and industrial/service robots. One trend is towards processing data at the network edge, close to the data source, for real-time response, low data transport cost, energy efficiency, and privacy protection. Here, edge computing is a distinguished form of cloud computing that moves part of the service-specific processing and data storage from the central cloud to edge network nodes that are physically and logically close to the data providers and end users. Among the expected benefits of edge-computing deployment in current networks are performance improvements, traffic optimization, and new ultra-low-latency services. Edge intelligence in IMT for 2030 and beyond will significantly contribute to all these aspects. Pervasive compute with seamless task mobility can be enabled by evolved container formats based on portable code and associated system interfaces. This will allow the platform to dynamically schedule workloads on nodes regardless of varying hardware and system software setups. As a result, several optimizations can be performed with limited overhead, such as moving computations close to a data source or consumer. This can be useful for having computational tasks follow mobile users, opportunistically offloading workloads from the device to preserve device energy, and moving computations for an optimized cost/performance/power trade-off.
Another trend is towards scaling out device computing capability beyond its physical limitations for advanced application computing workloads. Future applications, such as truly immersive XR, mobile holograms and digital replica require extensive computation capabilities to deliver real-time immersive user experience. However, it would be challenging to meet such computational requirements solely with mobile devices. In order to overcome the limits of the computing power of mobile devices, split computing makes use of reachable computing resources over the network. These computing resources could be available on various entities of networks, e.g., mobile devices, BSs, MEC servers and cloud servers. With split computing, mobile devices can effectively achieve higher performance even as they extend their battery life, as devices offload heavy computation tasks to computation resources available in the network. Additionally, a third trend is that the ubiquity of AI needs ubiquitous computing and data resources.
These new technology trends bring new technology challenges in scalability, dynamic workload distribution, and data collection/management/sharing. One challenge is scalability. In today's cloud computing, computing resources are often centralized in a few national or regional data centres, and the centralized service discovery and orchestration mechanisms used there have full visibility of the computing resources and services in those data centres. When computing resources and services become more widely distributed, the centralized approach no longer scales; a more scalable approach is needed for widely distributed computing resources.
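One commonly used decentralized alternative to a central registry, shown here only as an illustrative sketch, is a consistent-hash ring over the distributed computing nodes: each service name maps deterministically to a node, and adding or removing a node remaps only a small fraction of services. The node and service names are hypothetical.

```python
import bisect
import hashlib

class ServiceRing:
    """Consistent-hash ring: each node owns arcs of the hash space, so adding
    or removing a node remaps only a small fraction of service placements,
    with no central registry holding global state."""
    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes
        self._ring = []               # sorted (hash, node) points
        for n in nodes:
            self.add(n)

    @staticmethod
    def _h(key):
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def add(self, node):
        # Virtual nodes smooth the load distribution across the ring.
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._h(f"{node}#{i}"), node))

    def lookup(self, service):
        h = self._h(service)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]     # first node clockwise from the key

ring = ServiceRing(["edge-1", "edge-2", "edge-3"])
owner = ring.lookup("video-analytics")
```

Any node can run the same lookup independently and reach the same answer, which is the scalability property the centralized approach lacks.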
Another challenge is dynamic computing workload distribution. Today's workload distribution between devices and the cloud is based on a client-server model with a fixed workload partition between the client and the cloud. The fixed workload partition is application-specific, is pre-determined in the application development phase, and assumes that there are always sufficient computing resources in the cloud to fulfil the server-side workload. As computing resources become distributed, a scheme is needed that allows dynamic scaling out of device computing based on conditions such as workload requirements and communication and computing resource availability. To minimize the impact on applications, the dynamic computing scaling scheme should be enabled as an IMT system capability with minimal dependency on applications.
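A minimal model of such dynamic workload partitioning, under strongly simplifying assumptions (known per-layer compute costs and activation sizes, a single uplink): pick the split point that minimizes device compute plus upload plus edge compute latency. All numbers below are invented for illustration.

```python
def choose_partition(layers_flops, device_flops, edge_flops,
                     uplink_bps, layer_out_bits):
    """Pick how many layers run on-device (the split point) to minimize
    end-to-end latency: local compute + upload of the split tensor + edge
    compute. layer_out_bits[k] is the payload if we split after k layers."""
    total = sum(layers_flops)
    best = (float("inf"), 0)
    for k in range(len(layers_flops) + 1):        # k layers on the device
        local = sum(layers_flops[:k]) / device_flops
        upload = (layer_out_bits[k] / uplink_bps) if k < len(layers_flops) else 0.0
        remote = (total - sum(layers_flops[:k])) / edge_flops
        best = min(best, (local + upload + remote, k))
    return best                                    # (latency_s, layers_on_device)

latency, split = choose_partition(
    layers_flops=[0.1e9, 2e9, 1e9],     # per-layer compute cost (FLOPs)
    device_flops=1e9, edge_flops=20e9,  # hypothetical device/edge speeds
    uplink_bps=10e6,                    # hypothetical uplink rate
    layer_out_bits=[8e6, 1e6, 0.5e6])   # activation sizes at each split point
```

Because uplink rate and edge load vary at run time, re-evaluating this choice dynamically (rather than fixing it at development time) is precisely the capability argued for above.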
A third challenge is data collection, synchronization, processing, management and sharing. With the widespread application of AI in society and industry, a systematic approach to collecting, processing, managing and sharing data to facilitate AI/machine learning becomes very important. Split computing also requires synchronization of a large amount of data, context, and the program itself among network entities. The conventional data management functions in cellular networks focus on managing subscription information and policies. In IMT-2020, a network data analytics function (NWDAF) was added to the specifications, through which network functions' measurement data can be collected and used for analytics. Future IMT towards 2030 and beyond is anticipated to have further diversification of data sources, types and consumption. Therefore, it is expected that data plane functions will be part of the IMT system functions from the beginning and will support full-blown data services to devices, network functions and applications. Finally, a fourth challenge is low-power and low-latency wireless communication. To support extreme services on a lightweight device such as AR glasses, the device needs low-latency wireless communication with low device power consumption.
To address the above-mentioned challenges, computing services and data services are expected to become an integral component of the future IMT system. Pervasive/ubiquitous computing and data services can be enabled alongside the ubiquitous connectivity as integral services of the IMT system. Dynamic computing workload distribution can be inherently supported as an IMT system capability. Applications can use the IMT system’s workload distribution and scaling capability to achieve optimized performance. Data plane services in the IMT system such as data collection, processing, management and sharing can be enabled to support AI needs in air interface, cellular network and applications.
Technologies for integrated access and superlink communications:
Short-range device-to-device (D2D) wireless communication with extremely high throughput, ultra-accurate positioning and low latency will be a very important communication paradigm in the future communication system. On the one hand, many new applications, such as ultimate immersive cloud XR, holographic display, the tactile internet, remote motion control, integrated aerial and vehicle communication, and sidelink-enhanced industrial internet of things (SL-IIoT), which need either Tbit/s throughput or sub-ms latency and low-power-consumption wireless links, will mature in the next decade, and the wireless communication distances for these D2D applications are comparatively short. On the other hand, to satisfy these wireless requirements, extremely wide bandwidth technologies with short propagation distance, such as THz technology, optical wireless technology, ultra-accurate sidelink positioning technology, and enhanced terminal power reduction technology, may be potential candidates. Therefore, how to integrate these short-range D2D applications and the related sidelink technologies into the cellular system needs to be considered in the future communication system.
Such sidelinks may by nature significantly increase system capacity. THz and optical wireless links normally have very narrow beams and short transmission distances; therefore, the spectrum or channel can easily be reused by other sidelinks, which increases system capacity. Meanwhile, a dynamic self-organized short-range network, such as a mesh network, may also solve the bottleneck of previous cellular systems, in which all the resources on the Uu interface were managed centrally by the base station. However, a D2D or multi-hop short-range mesh network risks slow convergence and large signalling overhead due to frequent movement of nodes. Therefore, an integrated design of short-range and cellular communication may help the sidelink achieve optimized system-level performance. How to increase the integration efficiency, and how such a system may co-exist with other systems on the same spectrum, deserve further research.
Radio on THz:
In this technology, a UE is connected to its peripheral devices using terahertz broadband radio: the peripheral devices transmit and receive data signals with the UE in the THz band, and also transmit and receive data signals at different (lower) frequencies towards BSs (operating in, e.g., the millimetre wave bands and the sub-6 GHz band), connecting to the APs located at the BS. Here the peripheral devices mediate between a UE and the BS with its AP.
Generally, terahertz radio has been investigated for use in fixed and long-range radio applications such as wireless backhaul. However, it is expected that terahertz radio could also be applied in such short-range use cases.
In the IMT for 2030 and beyond networks, UEs themselves will also need to evolve to meet individual users’ high-communications performance demands. For example, while UEs have gradually evolved in terms of weight and shape, their capabilities as radio devices have not significantly changed since mobile phones first appeared 40 years ago.
To achieve the exchange of information of sufficiently high quality and quantity to meet individual users' diverse demands, UEs present significant limitations in terms of their size, which limits the number of integrated antennas and the maximum transmission power. It is not practical to increase the size of UEs to alleviate constraints such as the number of antennas, and the performance of uplink communication from UEs to BSs is vastly inferior to that of downlink communication from BSs.
Therefore, a cooperation technique between various peripheral devices that communicate with UEs is introduced. Specifically, through such cooperation, it would be possible to solve issues arising from the constraints of a single user device, such as transmission power and the number of integrated antennas.
For example, peripheral devices around UEs, such as PCs, watches, smart glasses, or self-driving cars, can become wireless devices and cooperate with one another, making it possible to overcome the transmission power constraints of a single user terminal and to virtually overcome limitations in the number of antennas. When riding in a car with a UE, the antenna on the car can also be used virtually as the UE's antenna to improve communications performance.
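Under the idealized assumption of perfect maximum-ratio combining across the cooperating devices' antennas, the benefit can be quantified with a one-line SNR calculation (the per-device SNRs below are purely illustrative):

```python
import math

def combined_snr_db(snrs_db):
    """Ideal maximum-ratio combining across cooperating devices' antennas:
    the post-combining SNR is the sum of the per-branch linear SNRs."""
    linear = sum(10 ** (s / 10) for s in snrs_db)
    return 10 * math.log10(linear)

# UE alone vs. UE + watch + car antenna cooperating (illustrative numbers).
alone = combined_snr_db([3.0])
coop = combined_snr_db([3.0, 0.0, 6.0])
```

Real cooperation would pay synchronization and THz-link overheads between the devices, so the achievable gain would be smaller than this ideal bound.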
Here, communication between a UE and its peripheral devices requires short-range but extremely wideband signal transmission. Since the capabilities required for wireless signal processing are limited in small devices such as watches and glasses, complex wireless signal processing should be avoided in such devices. Therefore, the terahertz-based approach described at the beginning of this section is expected to be introduced.
Technologies to efficiently utilize spectrum:
It is expected that the spectrum for future IMT systems will continue to follow the mixed use of high, medium, and low-frequency bands as in the IMT-2020 system, but with potentially larger bandwidths and higher operating frequencies in different bands, i.e. using a mix of centimetre, millimetre, and terahertz waves. Bands below 6 GHz, millimetre waves, and terahertz spectrum resources can be utilized jointly to provide various wireless links of different bandwidths and beam-propagation characteristics that satisfy the wide range of service requirements of future IMT systems. It is also envisioned that diverse future use cases, with their differing system requirements for DL and UL transmission services, can be better met by exploiting the propagation and bandwidth characteristics of different frequency bands.
Spectrum utilization can be further enhanced by efficiently managing resources through technologies such as advanced carrier aggregation (CA) and distributed cell deployments (cell-free/distributed MIMO). By enabling devices to simultaneously and flexibly connect to a set of carriers, offered across a set of nodes according to availability and need, higher bandwidths can be achieved. This yields higher rates, and the usage of available bands can be steered towards best efficiency. In distributed MIMO, a set of network nodes acts as one cell-less system that enables high-density deployment and spectral reuse. This allows for efficient antenna and transport solutions, which can more efficiently utilize spectrum resources through central coordination.
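As a rough illustration of why aggregating carriers raises achievable rates, the sketch below (illustrative Python; all bandwidths and SNR figures are hypothetical) sums the Shannon-capacity contribution of each component carrier a device is connected to:

```python
import math

def shannon_rate_mbps(bandwidth_mhz, snr_db):
    """Shannon capacity of one component carrier, in Mbit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

def aggregate_rate_mbps(carriers):
    """Sum the per-carrier rates a device achieves when it aggregates
    several component carriers (possibly served from different nodes)."""
    return sum(shannon_rate_mbps(bw, snr) for bw, snr in carriers)

# Hypothetical set: a low-band anchor carrier plus two mid/high-band
# carriers with progressively wider bandwidths and lower SNR.
carriers = [(20, 20), (100, 15), (400, 10)]  # (bandwidth MHz, SNR dB)
print(round(aggregate_rate_mbps(carriers), 1))  # ~2019.7 Mbit/s
```

The wideband, lower-SNR carriers dominate the aggregate, which is why steering devices onto whatever high-band spectrum is momentarily available pays off.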
In addition to the above, there are also alternative explorations for spectrum utilization improvements.
Spectrum sharing technologies:
Spectrum sharing refers to two or more radio systems operating in the same frequency band. Fundamentally, two forms of spectrum sharing exist: 1) horizontal spectrum sharing between systems with the same level of access rights to the spectrum, and 2) vertical spectrum sharing between systems with different levels of access rights to the spectrum. Vertical and horizontal spectrum sharing are not mutually exclusive. Current IMT-2020 systems involve various combinations of horizontal and vertical spectrum sharing through different techniques for interference management. The same is expected in IMT for 2030 and beyond. Spectrum sharing in specific areas, such as remote areas where spectrum may be unused or underused, will allow quicker resolution of interference management problems and can, for example, free more bandwidth for backhaul links, leading to more energy efficient operation.
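The vertical-sharing idea can be sketched as a toy channel allocator (illustrative Python; channel counts and user names are invented): the primary system, holding higher access rights, is served first, and secondary systems opportunistically take whatever remains:

```python
def assign_channels(channels, primary_demand, secondary_demands):
    """Toy vertical spectrum sharing: the primary (higher access rights)
    takes channels first; secondaries opportunistically use the rest."""
    free = list(channels)
    primary = [free.pop(0) for _ in range(min(primary_demand, len(free)))]
    allocation = {"primary": primary}
    for user, demand in secondary_demands.items():
        # Each secondary gets at most what is still unoccupied.
        allocation[user] = [free.pop(0) for _ in range(min(demand, len(free)))]
    return allocation

channels = list(range(8))  # 8 shareable channels in the band
alloc = assign_channels(channels, primary_demand=3,
                        secondary_demands={"sec_a": 2, "sec_b": 4})
print(alloc)  # sec_b asks for 4 but only 3 channels remain
```

Horizontal sharing between peers would instead require a coexistence protocol (sensing, listen-before-talk, or a coordination database) rather than this strict priority order.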
[Work on intelligent spectrum management technologies that enable opportunistic and intelligent spectrum sharing is now necessary to guarantee the continued development of future wireless network services and applications. This applies to intelligent database-driven spectrum sharing, smart spectrum sensing, intelligent software defined radios, and reconfigurable radio networks. All of these are expected to play an important role in addressing demand for next generation gigabit wireless services while enabling broadband connectivity and digital inclusion in underserved areas.]
New medium access control (MAC) designs based on spectrum sensing or spectrum sharing have been considered. In order to evolve future IMT systems with a dynamic spectrum sharing nature, the entire radio resource control (RRC) and radio access network (RAN) layer 2 (L2) frameworks need to be redesigned. Dynamic spectrum requests from each subsystem are expected to arise from diverse computing needs in devices, implying that the future MAC must be redesigned for computing and communication convergence. An edge-cloud computation architecture for the new MAC is necessary, in which the trade-off between computing and communication must be carefully considered. Massive training-data uploads in the uplink also require new MACs to achieve accurate spectrum sharing.
A key aspect is the centralization or decentralization of spectrum sharing control; this decision depends on the type of application and the network environment.
Enabling the transition from IMT-2020 to IMT systems for 2030 and beyond will require a smooth migration from one technology to the other while maintaining optimum use of spectrum resources. IMT for 2030 and beyond should facilitate co-existence between the two technologies, allowing a network operator to divide spectrum between them and balance the bandwidth allotted to each according to user demand while utilizing both technologies simultaneously.
Technologies for broader frequency spectrum:
By extending the higher frequency spectrum from “millimetre waves” to “terahertz waves,” a drastically wider bandwidth can be used compared to IMT-2020. For this reason, studies have begun on the possibility of achieving “extreme high data rate and high capacity” communication exceeding the IMT-2020 peak data rate requirements. Currently, radio waves up to about 300 GHz are considered to be within the scope of IMT for 2030 and beyond. However, unlike millimetre waves, terahertz waves propagate essentially along line-of-sight paths and cannot travel long distances. Thus, it is necessary to carry out technical studies on terahertz waves to identify their radio propagation characteristics and establish a propagation model, as well as to study how to utilize these waves in various network configurations.
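The range problem can be made concrete with the free-space path loss (Friis) formula; the sketch below (illustrative Python; the 100 m distance and the example bands are arbitrary choices) shows roughly how much more path loss a 300 GHz link suffers than sub-6 GHz or millimetre wave links at the same distance:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(freq_hz, dist_m):
    """Free-space path loss: FSPL = 20*log10(4*pi*d*f/c) in dB."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

# Illustrative comparison at a 100 m link distance.
for f_ghz in (3.5, 28, 300):
    print(f"{f_ghz:6.1f} GHz: {fspl_db(f_ghz * 1e9, 100):.1f} dB")
```

At 100 m the 300 GHz link loses roughly 39 dB more than a 3.5 GHz link purely from the frequency term, which is why high-gain beamforming and dense deployments are assumed prerequisites for terahertz access.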
Regarding device technology, it is necessary to implement a digital signal processing circuit capable of supporting wider bandwidths, a digital to analog converter, and an analog to digital converter at low cost and low power consumption. Additionally, antennas, filters, amplifiers, mixers, and local oscillators that operate in high frequency bands must be developed to be compatible with massive MIMO’s multiple antenna elements. RF (Radio Frequency) circuits must also be enhanced for higher performance and higher integration in high frequency bands exceeding 100 GHz.
The radio access technologies for such high frequency bands and the current IMT bands share common technical issues regarding coverage and power efficiency. Here, single-carrier waveforms are preferred over OFDM signals due to their power efficiency. As radio technologies, including integrated access and backhaul, are applied to a wider range of areas, the importance of power-efficient radio technologies such as single carrier may increase.
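The power-efficiency argument for single carrier comes down to peak-to-average power ratio (PAPR): a constant-envelope single-carrier signal lets the power amplifier run near saturation, while an OFDM symbol, being a sum of many subcarriers, exhibits large peaks that force back-off. A minimal sketch (illustrative Python with a naive IDFT; the subcarrier count and random seed are arbitrary):

```python
import cmath
import math
import random

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def ofdm_symbol(symbols, n):
    """Naive IDFT: one OFDM symbol is the superposition of n subcarriers."""
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

random.seed(0)
n = 64
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / math.sqrt(2)
        for _ in range(n)]

# A plain QPSK stream has constant envelope -> PAPR of 0 dB.
print(f"single-carrier QPSK PAPR: {papr_db(qpsk):.1f} dB")
# Superposing the same symbols on 64 subcarriers creates large peaks.
print(f"OFDM ({n} subcarriers) PAPR: {papr_db(ofdm_symbol(qpsk, n)):.1f} dB")
```

The several-dB PAPR gap translates directly into amplifier back-off, and hence into the coverage and power-efficiency concerns noted above.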
Technologies to enhance energy efficiency and low power consumption:
Up until now, humans have been the principal data consumers, but in the future the main agents of data consumption will gradually shift from humans to smart machines equipped with AI. In line with this, a paradigm shift is expected from the smartphone-dominated era to a multi-device era in which various types of terminals become the norm, with smartphones still around us: not only wearables, skin patches, bio-implants, and exoskeletons integrated with advanced man-machine interfaces such as gesture, haptic, and brain sensors, but also cars, UAVs, and robots equipped with AI. This diversification of terminals will allow new verticals to emerge and prosper.
Future IMT systems are expected to support about a trillion devices, mainly driven by the surge in demand for IoT devices that cover a wide variety of applications such as smart cities, smart industry, and smart homes. A key category is power-constrained devices that are meant to be left in place for very long periods of time, stretching into several years. This may be because the devices are inaccessible, or it is difficult or expensive to reach them once installed. These devices may perform a wide range of functions such as asset tracking, supply chain logistics and infrastructure monitoring. Such devices may also include the category of Internet-of-Tags, which involves tracking, sensing or actuation functions. The need to improve energy efficiency has given rise to the field of energy-efficient communications, or green communications.
Low energy consumption issues can be considered from both the user device and the network perspectives. Technological advances in AI/ML, molecular, backscatter and visible light communications, fog/edge computing, and metamaterials/metasurfaces aim at lowering device and network power consumption. Efficient low-overhead communications are appealing to save overhead-related energy at the devices, e.g., by using channel state information (CSI)-limited/free schemes instead of training-based instantaneous CSI. On the other hand, network densification, distributed antenna deployments, and moving/flying transmitters can shorten communication distances, lowering the devices’ communication energy consumption and RF pollution in general. Reconfigurable antennas and rotating antennas are also promising technologies in this regard.
Wireless charging technologies:
Support of energy harvesting from wireless signals can eliminate an IoT device’s need to draw power from its battery for downlink signal detection and processing, which enables “zero-energy” radio operations. It is also possible to utilize natural energy sources for energy harvesting to meet this requirement, e.g., solar power, etc.
Wireless charging through RF wireless energy transfer (WET) has emerged as a promising charging technology. WET is currently being considered, analyzed and tested as a nascent stand-alone technology, and its wide integration into mainstream wireless systems can be envisioned in the coming years. However, increasing the end-to-end efficiency, supporting mobility at least at pedestrian speed, facilitating ubiquitous power accessibility within the network coverage area, resolving the safety and health issues of WET systems, complying with regulations, and enabling seamless integration with wireless communications are the main challenges ahead. Energy beamforming is among the most appealing techniques for making WET an efficient solution for powering future IoT networks. Energy beamforming also allows transmitted signals to adapt to the propagation environment, thus optimizing wireless energy delivery.
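A back-of-the-envelope view of why energy beamforming matters (illustrative Python based on the Friis equation; transmit power, frequency, distance, antenna counts and rectifier efficiency are all assumed values, and the n-fold array gain is an idealization of coherent beamforming):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def harvested_power_uw(p_tx_w, n_antennas, g_rx, freq_hz, dist_m, eff=0.5):
    """Friis-based sketch of RF energy delivery: coherent energy
    beamforming from n antennas is modeled as an n-fold transmit array
    gain; `eff` models the rectifier's RF-to-DC conversion efficiency.
    Returns harvested DC power in microwatts."""
    lam = C / freq_hz
    p_rx = p_tx_w * n_antennas * g_rx * (lam / (4 * math.pi * dist_m)) ** 2
    return p_rx * eff * 1e6

# 1 W total transmit power at 915 MHz over a 5 m link.
for n in (1, 16, 64):
    print(f"{n:3d} antennas -> {harvested_power_uw(1.0, n, 1.0, 915e6, 5.0):.1f} uW")
```

Even under these idealized assumptions only tens to hundreds of microwatts arrive at 5 m, which is why beamforming gain, short ranges and ultra-low-power receivers all have to work together for practical WET.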
Backscattering technology is an alternative approach for low power and low-cost communication. A device can send information by modulating and reflecting wireless signals received from ambient sources, without the need for power hungry transceivers, amplifiers, and other traditional communication modules, achieving extremely low power consumption and low cost. Such a device can harvest the energy of ambient wireless signals and/or other energy sources for its communication, and therefore achieve nearly zero power communication. Ambient backscatter communication (AmBC) refers to a backscattering communication system that exploits ambient RF signals to transmit information bits without active RF transmission. The main challenges for these backscattering technologies include interference between backscattered signals and source signals, and limited communication range and data rates. The techniques that must therefore be developed for backscattering communication include modulation and channel coding, signal detection algorithms, interference coordination techniques, combinations with MIMO technology, multi-user access approaches, etc.
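The basic tag/reader interaction can be sketched as follows (illustrative, noiseless Python model; the reflection gain and detection threshold are invented): the tag toggles between reflecting and absorbing the ambient carrier, and the reader performs non-coherent energy detection:

```python
import random

def backscatter_transmit(bits, ambient, reflect_gain=0.6):
    """Tag side: reflect the ambient carrier for a '1' (by switching the
    antenna impedance), absorb it for a '0'. No active RF chain is used."""
    return [s * reflect_gain if b else 0.0 for b, s in zip(bits, ambient)]

def backscatter_receive(reflected, threshold=0.1):
    """Reader side: non-coherent energy detection of the reflected
    component; no carrier recovery or mixing is required."""
    return [1 if abs(r) ** 2 > threshold else 0 for r in reflected]

random.seed(1)
ambient = [random.uniform(0.8, 1.2) for _ in range(8)]  # ambient RF envelope
bits = [1, 0, 1, 1, 0, 0, 1, 0]
rx = backscatter_receive(backscatter_transmit(bits, ambient))
print(rx)  # [1, 0, 1, 1, 0, 0, 1, 0]
```

In a real AmBC system the reader also receives the strong direct ambient signal, so the interference cancellation and detection problems listed above are precisely what this toy model leaves out.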
On-demand access technologies:
Another approach to low power communication is an on-demand passive device that uses the energy of received signals to trigger the wake-up of the receiving chain. The on-demand passive device stays in sleep mode with zero power consumption; when data arrives, the network sends a wake-up signal that wakes the receiver and turns on the transceiver, switching the device to Connected status. A zero-energy passive device for triggering UE wake-up would be particularly useful for machine-type communication, wearable devices, health devices, and general mobile phones. To support a zero-power passive wake-up device in the UE, the next generation wireless system needs to design the network and control signalling for on-demand access with such a device.
The UE can support on-demand network access based on backscattering technology to minimize power consumption. The challenge of on-demand network access with a passive wake-up device is receiver sensitivity, which limits the coverage of the passive wake-up device. To accommodate the low receiver sensitivity of the front-end passive device, the wake-up signals need to be transmitted at much higher density than in traditional base station deployments. This requirement not only makes blanket coverage for on-demand network access difficult to accomplish but also increases the network energy consumption of tracking UEs with front-end passive wake-up devices. The low receiver sensitivity would hinder the coverage and development of the next generation wireless network.
The on-demand access technologies for improving energy efficiency rely on a front-end wake-up device with zero or low power consumption to trigger the wake-up of the UE receiver in next generation wireless technologies. The UE receiver and transmitter circuits would be in a sleeping state, in which most of the hardware, such as the ASIC, DSP, controller, and memory, is turned off and the software is in standby. The front-end wake-up device would be used mainly for monitoring and receiving wake-up signals in an active or passive way. A low-power active device, e.g. a low-voltage tuned RF (TRF) wake-up receiver with passive RF gain and high-gain envelope detection, used as the front-end wake-up device could extend the receiver sensitivity. Once a wake-up signal is detected, the wake-up device activates the hardware and initializes the software from the sleeping state to the active state.
If an ultra-low power simplified receiver is used to continuously monitor the wake-up signal, power consumption can be dramatically reduced and the battery life of IoT devices can be extended significantly while low paging latency is guaranteed. To meet the power consumption budget, the ultra-low power receiver may not include a digital receiver, may not digitize the RF signal directly, or may even be a passive envelope detector, instead pursuing simple modulation schemes, e.g., on-off keying (OOK) or frequency-shift keying (FSK), and simple coding, e.g., Manchester coding.
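Manchester coding pairs naturally with OOK because every bit carries a mid-bit transition that a crude envelope detector can track without a precise local clock. A minimal encode/decode sketch (illustrative Python; the wake-up bit pattern is invented):

```python
def manchester_encode(bits):
    """Manchester coding over OOK chips: 1 -> (on, off), 0 -> (off, on).
    The guaranteed mid-bit transition gives the envelope detector a
    timing reference and keeps the signal DC-balanced."""
    chips = []
    for b in bits:
        chips += [1, 0] if b else [0, 1]
    return chips

def manchester_decode(chips):
    """Pair the OOK chips back up and read the transition direction."""
    return [1 if chips[i] == 1 else 0 for i in range(0, len(chips), 2)]

wakeup_id = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical wake-up pattern
ook_chips = manchester_encode(wakeup_id)
print(manchester_decode(ook_chips) == wakeup_id)  # True
```

A real wake-up receiver would additionally correlate the decoded bits against its assigned identifier before powering up the main transceiver.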
The system design of next generation wireless technologies needs to take the processing time of hardware activation and software initialization into consideration for on-demand access technologies to achieve optimum UE energy efficiency. Next generation cellular technologies would not only operate standalone but would also be integrated with legacy technologies in the mobile phone for multi-technology multi-connectivity.
Technologies to natively support real-time services/communications:
There are two essential technology components that support real-time communications and realize extreme low latency.
One is accurate time and frequency information shared in the terrestrial network. Especially when network nodes are equipped with compact atomic clocks, their high holdover performance can dramatically reduce synchronization iterations over the local network. The high frequency accuracy obtained from the atomic clocks also allows reducing the frequency offset between Tx and Rx, leading to a low BER, particularly at high carrier frequencies. Collecting the time differences among node clocks enables the estimation of a more stable and robust common time using a maximum likelihood method, and the result can be delivered back to each node for self-correction. Wireless space-time synchronization, where clocks are synchronized at the picosecond level together with the determination of positions, is another method on which a low latency communication protocol can be built, with a capability for autonomous and distributed operation. Such a synchronized network supports schedule management in edge processing in mobile backhauls. The common time and frequency can be made traceable to the standard time or frequency by linking one node to a precision time/frequency source.
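Under the usual assumption of independent Gaussian measurement errors, the maximum likelihood fusion of clock-offset measurements reduces to an inverse-variance weighted mean; the sketch below (illustrative Python; all offsets and variances are invented) shows how a noisy link is automatically down-weighted:

```python
def ml_time_estimate(offsets, variances):
    """Maximum-likelihood fusion of clock-offset measurements: under
    independent Gaussian errors the ML estimate of the common time is
    the inverse-variance weighted mean. Each node can then self-correct
    by the returned estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * o for w, o in zip(weights, offsets)) / sum(weights)

# Hypothetical offsets (ns) of four node clocks vs. a reference; the
# fourth link is much noisier, so its measurement barely contributes.
offsets = [12.0, 9.0, 11.0, 40.0]
variances = [1.0, 1.0, 1.0, 100.0]
print(round(ml_time_estimate(offsets, variances), 2))  # ~10.76 ns
```

The naive unweighted mean of these offsets would be 18 ns, dragged off by the noisy link; the ML estimate stays near the consensus of the three precise clocks.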
Another enabler is fine-grained and proactive just-in-time radio access, which incorporates extremely short transmission time intervals (TTIs) for scheduling, reducing buffering and channel access delay. The benefit of these two technologies can be further enhanced by time-sensitive communications protocols, which enable the prioritization of latency-sensitive or mission-critical traffic, leading to real-time communications. Resource management can be supported by leveraging application-domain information about the predictability of actual resource requirements, considering the context and traffic characteristics. Periodic transmissions can be pre-scheduled within given and precise time boundaries, while AI and ML tools can be used in scheduling algorithms. Resource allocation for real-time communications may also span a multi-dimensional solution space comprising multi-RAT, multi-link, etc., managed by a dedicated real-time management function that is aware of resource needs, availability and the surrounding environment.
Status: This draft report is scheduled to be completed at the next ITU-R WP 5D meeting and if so will be submitted to ITU-R SG 5 for approval in November 2022.
South Korea’s LG Electronics showcased so-called “6G” technology during the 2021 Korea Science and Technology Exhibition, recently held at KINTEX in Ilsan, South Korea. Specifically, LG unveiled a power amplifier for 6G, jointly developed with Germany’s Fraunhofer Institute.
Author’s Note: This news comes despite there being no definition of 6G from either ITU-R or 3GPP. Further, 5G standards are in their infancy, with only the RAN standardized in ITU-R M.2150 (but not the frequency arrangements for terrestrial IMT in a yet to be completed revision of M.1036).
The power amplifier is the same one that was used by LG during its 6G demo in Berlin in August 2021. At that time, LG Electronics said it has successfully demonstrated the transmission and reception of wireless “6G” terahertz (THz) data over 100 meters in an outdoor setting. That was impressive because THz transmission is short-range and experiences power loss during transmission and reception between antennas. This is where the power amplifier proved crucial, as it was able to generate a stable signal across ultra-wideband frequencies.
The power amplifier is capable of generating stable signal output up to 15 dBm in the frequency range of 155 to 175 GHz. LG noted that it was also successful in demonstrating adaptive beamforming technology, which alters the signal’s direction in accordance with changes to the channel and receiver position, as well as high-gain antenna switching, which combines the output signals of multiple power amplifiers and transmits them to specific antennas. LG also introduced an FDR full-duplex system that allows simultaneous transmission and reception across the same frequency band.
At the Korea exhibition, LG Electronics and Keysight Technologies (a global wireless communication test and measurement equipment manufacturer) also demonstrated ‘Adaptive beamforming’ technology that converts beam directions according to channel changes and receiver positions.
In 2019, LG established the LG-KAIST 6G Research Center in partnership with the Korea Advanced Institute of Science and Technology (KAIST). LG and KAIST had previously partnered with U.S.-based test and measurement firm Keysight Technologies with the aim of carrying out research on future 6G technologies.
Under the terms of the agreement, the three partners will cooperate in developing technologies related to terahertz frequencies, widely seen as a key frequency band for 6G communications, which have not yet been standardized. The partners aim to complete 6G research by 2024.
LG previously said that 6G is expected to be commercialized in 2029. LG also noted that future 6G technologies will provide faster data speed, lower latency and higher reliability than 5G, and will be able to bring the concept of Ambient Internet of Everything (AIoE), which provides enhanced connected experience to users.
The government of South Korea previously said it aims to launch a pilot project for not-yet-standardized 6G mobile services in 2026. The Korean government expects 6G services could be commercially available in Korea between 2028 and 2030.
The Korean government’s strategy for 6G consists of preemptive development of next-generation technologies, securing standard and high value-added patents, and laying R&D and industry foundations. The government selected five major areas for the pilot project: digital healthcare, immersive content, self-driving cars, smart cities and smart factories.
LG expects 6G communication to be commercialized in 2029, with talks for standardization beginning in 2025. “6G will be a key component of Ambient Internet of Everything, the emerging technology that aims to improve living and business environments by making them more sensitive, adaptive, autonomous and personalized to consumers’ needs by recognizing human presence and preferences,” LG said.
According to market research dynamo Omdia, 2022 will be rife with regulatory activity that will impact the telecommunications market for years to come.
“As technology evolves, regulation will become more important than ever in the TMT industry,” said Sarah McBride, senior analyst for regulation at Omdia.
Omdia identified several trends it says will be “at the heart of regulatory activity” next year, including spectrum licensing, fiber networks, the digital divide and 6G (even though 5G spectrum has not been standardized by ITU-R in a revision to M.1036).
Regarding the digital divide (between the broadband haves and have nots), Omdia says “governments should learn from the pandemic and recognize the need for these broadband services to be affordable to all.”
The Omdia analysts say that governments must define a “comprehensive national digital strategy that includes providing state-aid tools to improve broadband availability and affordability.”
Such a strategy should go beyond deployment to “ensure citizens can use connectivity transformatively to bring about innovation and growth.” Doing so will encourage more deployment and investment, writes Omdia.
However, to avoid too much government intervention, Omdia also stresses the need for cooperation by service providers.
“Experience shows that market-led development, not a reliance on government intervention, is the most effective model for effective allocation of resources. However, economic viability is lower in some rural and sparsely populated areas than in populous areas,” Omdia said. The firm recommends that network operators collaborate by sharing infrastructure to reduce deployment costs and create shared wireless networks to “remove the need for regulators to set ambitious coverage obligations as part of spectrum licenses or universal service obligations.”
According to Omdia’s tracker for 5G networks, more than 150 5G networks have been launched around the world to date, which the research firm says will continue to drive demand for more spectrum.
“5G will profoundly affect society because of its ultrafast speeds, low latency, and high reliability, which enable digital transformation and support new use cases,” writes Omdia.
Regulators need to effectively manage spectrum allocation, “allowing access to the right amount of internationally harmonized spectrum (e.g., 700MHz, 3.6GHz, and 26GHz bands in the EU) in a timely manner to keep costs down.”
As operators continue to build out their 5G networks, Omdia tells policymakers it’s important to plan ahead on 6G standards, given the role these networks will play in the digital economy and the danger posed by a lack of cohesion.
Specifically, the firm warns against further splintering the telecom and Internet ecosystem, or what it calls “the splinternet.”
“It is especially important that regulators and policymakers prepare for future network generations by ensuring agreement is reached on 6G standards. A fragmentation of standards must be avoided to prevent any further separation of the telecoms and internet ecosystem, a ‘splinternet’,” writes Omdia.
Acknowledging that plans for 6G are in their infancy, Omdia further tells policymakers to begin identifying appropriate spectrum bands, though it notes that such plans “will need to be balanced with the need to release spectrum for 5G.”
Part of the rush to deploy high-speed internet everywhere includes a migration to fiber, whether through new builds or upgrades of existing cable networks. Omdia says that as network operators migrate to fiber, regulators should focus on promoting competition, pricing strategies and raising awareness amongst consumers about fiber access.
The firm further states that regulators should include fiber access in wholesale obligations, “once sufficient fiber coverage is reached.”
It’s important for network operators to collaborate with regulators on network upgrade plans and give wholesale customers advance warning to avoid disruption.
“Operators need to give their wholesale customers a sufficient notice period when withdrawing copper networks. This includes providing formal notifications that outline the timeframes involved, the replacement products on offer, and the new price terms,” writes Omdia.
In a separate report titled 2022 Trends to Watch: Global 5G, Omdia says that 5G network rollouts are still in the early stages, especially in developing regions.
“But there are compelling reasons for telcos to commit to 5G so they can differentiate around an improved network experience, as well as realize network efficiencies and lower operating costs. Moreover, 5G’s enhancements over 4G – most noticeably speed and latency – will come to be appreciated by consumers more next year as an increasing number of data-intensive services and applications become popular in the mass market,” the research firm said.
“A surprise to many next year may be the rapid emergence of satellite to augment telcos’ terrestrial network coverage,” Omdia observed.
“A key driver for hybrid satellite-cellular deployments is the need for ubiquitous high-speed data coverage, something which telcos can greatly benefit from if their rivals’ 5G network coverage remains patchy.”
Major telcos including BT, Deutsche Telekom, Telecom Italia and Verizon signed significant deals with satellite internet providers in 2021 to offer a hybrid approach to targeted residential, enterprise and industrial markets.
Omdia believes that the likely success of these satellite internet initiatives could jump-start a flurry of new activity in this area in 2022.
“Although most end users aren’t rushing to buy 5G, the quality of their network experience in terms of reliability, speed, and coverage is increasingly important to them. As such, 5G offers telcos a better opportunity than 4G to differentiate, especially for ones that can claim they offer the best-in-market network experience,” Omdia said.
Omdia thinks that partnership strategies will be even more important for telco 5G success in 2022.
“How good telcos are at partnering, whether for content, service, or technology development, will increasingly define how successful they are in consumer, enterprise, and industrial markets. Because of its enhanced capabilities over 4G, 5G enables telcos to offer much more, and they will have to partner effectively to capitalize on this.”
“Except for 5G MEC (really ?), the ecosystem and markets for advanced 5G technologies are still in their infancy. However, 5G front-runners are already launching them, placing them in a strong position to gain a first-mover advantage when the market is ready to adopt them,” Omdia said.
Nokia said it is working with Reliance Jio, Bharti Airtel and Vodafone Idea (Vi) in 5G field trials in advance of India’s repeatedly delayed 5G spectrum auction, now scheduled for April-May 2022. India’s big three telcos are using government spectrum to conduct their 5G field trials.
Nokia’s chief strategy and technology officer Nishant Batra said Wednesday at India Mobile Congress that his company expects to bring in advanced 5G solutions (?):
“We have active engagements with Bharti, VI, and Jio for 5G field trials and have made several public announcements on the milestones achieved. 5G will open up new possibilities that will have a massive impact on society, industry and consumers in India…in the coming years and beyond the 5G era,” said Batra. “By the end of 2023, we expect to see Release 18 of 3GPP (see the actual 3GPP timeline below), or as we prefer to call it: 5G-Advanced. This version of the 5G evolution will develop 5G to its fullest capabilities and is an important stepping-stone to the new interactive use cases we will see on a large scale in the coming 6G era,” he added.
3GPP Timeline for Release 17 and 18
Telcos so far have highlighted that 5G use cases (what are they without URLLC?) can bring transformation in healthcare, agriculture, and education-related areas. Batra also gave a glimpse of how human-technology interfaces will change in the future, and how the internet used via smartphones will become “less exciting.”
“By 2030, we expect two of the biggest drivers of network evolution will be human augmentation and digital/physical fusion. Consumer broadband will still be the biggest service, and video will still drive the bulk of Internet traffic,” Batra opined. India is one of the biggest data consumers. “Practically speaking, digital/physical fusion means that by 2030 every physical thing that makes sense to connect digitally will be connected to the Internet,” he added. Nokia, which has its “Conscious Factory” in Chennai, is betting big on machine learning, automation, cloud services and automation. “As enterprises, governments and networks invest in their digital transformations, Nokia is well-positioned to provide the critical networking solutions they need,” Batra said.
It is premature to start thinking of 5G-Advanced when so many of the needed 5G ITU-R recommendations/standards and 3GPP specs are incomplete. These include: URLLC in the RAN and updates to ITU-R M.2150 for URLLC (to meet the 5G minimum performance requirements specified in ITU-R M.2410), implementation specs/agreements for 5G SA Core network (cloud native/microservices/containers, etc or otherwise), frequency arrangements for terrestrial 5G (ITU-R M.1036 revision to include mmWave frequencies approved at WRC’19), network management, security, roaming, interworking with WiFi 6E, etc.
The UK-India Future Networks Initiative (UKI-FNI) is a £1.4 million project, led by the University of East Anglia in collaboration with other UK and Indian universities. Its objective is to build the capability, capacity, and relationships between the two countries in telecoms diversification technologies and research for 5G and beyond. The project will explore hardware and software solutions for future digital networks, as well as develop a joint UK/India vision for 5G and Beyond 5G. The development of Open Radio Access Networks (OpenRAN) will be a key part of the project.
The project is funded by the UK Engineering & Physical Sciences Research Council (EPSRC).
The 5G/6G Innovation Centre (5G/6GIC) at the University of Surrey in the UK will play a key role in a project to examine advanced technologies for future digital telecoms networks. The 5G/6GIC will work with the University of East Anglia (project lead), University College London and the University of Southampton in the UK; and the Indian Institute of Technology (IIT) Delhi and the Indian Institute of Science (IISc) in Bangalore.
The 5G vision of the Centre includes:
- Indoors and outdoors
- Dense urban centres with capacity challenges
- Sparse rural locations where coverage is the main challenge
- Places with existing infrastructure, and areas where there is none
India has an excellent research and innovation base in networking systems software and has the complex testbeds required for proving new technologies. Indeed, under a previous £20 million EPSRC initiative led in the UK by Prof Parr (the India-UK Advanced Technology Centre), the team collaborated for more than 10 years with partners across India – an experience that will be leveraged in the UKI-FNI project.
Prof Parr said: “To those of us who have access to telecommunications services and the Internet, it comes as no surprise how reliant we are on voice, data and web services for email, video conferencing and file sharing, as well as social media for business and personal needs. This has been much more visible during the Covid pandemic. For the telecoms service providers there are important considerations in providing all these systems across regions and nations, including performance, cyber security, energy efficiency, scalability and operational costs for maintenance and upgrades.”
“The consideration on costs is attracting increasing attention when we consider the limited number of global vendors who manufacture and supply the systems over which our data flows across the national and international networks.”
There is a global push to explore innovations that will deliver the infrastructure, systems and services for next-generation mobile communication networks. Part of this drive is coming from network operators who are seeking solutions to reduce the costs for network components by aiming to remove dependence and lock-in to a small group of telecom original equipment manufacturers.
A leading idea is that the 5G infrastructure should be far more demand/user/device centric with the agility to marshal network/spectrum resources to deliver “always sufficient” data rate and low latency to give the users the perception of infinite capacity. This offers a route to much higher-performing networks and a far more predictable quality of experience that is essential for an infrastructure that is to support an expanding digital economy and connected society.
Sanjeev K Varshney, Head of International Cooperation at the DST, said: “The announcement of the India-UK partnership to develop newer research opportunities in future telecom networks is very timely and we look forward to developing new bilateral collaboration in this and other emerging areas of mutual interest.”
Rebecca Fairbairn, Director UKRI India, said: “UKRI India, in collaboration with our partner funders in India, is delighted to announce a drive towards a new Indo-UK research and innovation partnership on future telecom networks.
“Bringing together both our countries’ scientists, engineers, and innovators we will jointly develop new knowledge and high-impact research and innovation in line with our shared 2030 India-UK roadmap.”
Professor Gerard Parr, Principal Investigator for UKI-FNI, University of East Anglia, said: “There are many benefits to be accrued from the UKI-FNI project as we explore new innovative solutions in hardware, software and protocols.
“Ultimately, we will develop a roadmap for a much larger, mutually beneficial and longer-term collaboration between India and the UK in the important digital telecoms sector.”
From emerging use cases for IMT towards 2030 and beyond, such as digital twins, cyber-physical systems, mixed reality, and industrial/service robots, the following technology trends can be observed:
There is a need to process data at the network edge for real-time response, low transport cost, and privacy protection.
There is a need to scale out device computing capability beyond its physical limitations for advanced application computing workloads.
The ubiquity of AI needs ubiquitous computing and data resources.
These new technology trends raise new technical issues in scalability, dynamic workload distribution, and data collection/management/sharing:
Scalability – In today’s cloud computing, computing resources are often centralized in a few national or regional data centers. The centralized service discovery and orchestration mechanisms used there have full visibility of the computing resources and services in the data centers. When computing resources and services become more widely distributed, the centralized approach is no longer scalable; a more scalable approach is needed for widely distributed computing resources.
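One commonly discussed alternative to a single centralized registry is hierarchical, federated discovery, in which each edge domain resolves services locally and escalates only on a miss, so no single registry needs global visibility. The sketch below is purely illustrative: the class, service names and endpoints are assumptions, not any standardized IMT mechanism.

```python
class Registry:
    """One discovery domain (e.g., an edge site) with a local service table
    and an optional parent for escalation - an illustrative federation
    scheme, not a standardized IMT mechanism."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.services = {}  # service name -> endpoint

    def register(self, service, endpoint):
        self.services[service] = endpoint

    def discover(self, service):
        """Resolve locally first; escalate to the parent only on a miss."""
        if service in self.services:
            return self.services[service]
        if self.parent is not None:
            return self.parent.discover(service)
        return None

# A two-level federation: an edge site under a national core registry.
core = Registry("national-core")
edge = Registry("edge-site-1", parent=core)
core.register("video-analytics", "10.0.0.5:8443")
edge.register("cache", "192.168.1.9:80")

assert edge.discover("cache") == "192.168.1.9:80"           # local hit
assert edge.discover("video-analytics") == "10.0.0.5:8443"  # escalated to core
```

The point of the pattern is that each domain only tracks its own resources, so registry state grows with the domain, not with the whole network.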
Dynamic computing workload distribution – Today’s workload distribution between devices and the cloud is based on a client-server model with a fixed workload partition between the client and the cloud. The fixed workload partition is application specific and is pre-determined in the application development phase. Such a fixed workload partition is based on the assumption that there are always sufficient computing resources in the cloud to fulfil the server-side workload. Moving forward, as computing resources become distributed, the assumption of unlimited server-side computing resources will likely no longer hold, so there needs to be a scheme that allows dynamic scaling out of device computing based on conditions such as workload requirements and communication and computing resource availability. To minimize the impact on applications, the dynamic computing scaling scheme should be enabled as an IMT system capability with minimal dependency on applications.
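As a toy illustration of such dynamic workload distribution, the sketch below picks the execution site (device, edge or cloud) with the lowest estimated completion time from current transfer and compute estimates. The class, the numbers and the latency model are all illustrative assumptions, not part of any IMT specification.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """A candidate execution site (device, edge or cloud) - illustrative only."""
    name: str
    available_gflops: float  # compute capacity currently free at this site
    uplink_mbps: float       # data rate from the device to this site (inf = on-device)

def estimated_latency_ms(site: Site, workload_gflop: float, input_mb: float) -> float:
    """Toy cost model: input-transfer time plus compute time, in milliseconds."""
    transfer_s = 0.0 if site.uplink_mbps == float("inf") else (input_mb * 8) / site.uplink_mbps
    compute_s = workload_gflop / site.available_gflops
    return (transfer_s + compute_s) * 1000

def choose_site(sites, workload_gflop, input_mb):
    """Dynamically pick the site with the lowest estimated completion time."""
    return min(sites, key=lambda s: estimated_latency_ms(s, workload_gflop, input_mb))

sites = [
    Site("device", available_gflops=5,    uplink_mbps=float("inf")),
    Site("edge",   available_gflops=200,  uplink_mbps=100),
    Site("cloud",  available_gflops=2000, uplink_mbps=20),
]
best = choose_site(sites, workload_gflop=50, input_mb=4)  # the edge wins here
```

A heavy workload with a small input lands on the edge; shrink the workload and the decision flips back to the device, which is exactly the run-time adaptivity the paragraph above argues a fixed client-server partition cannot provide.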
Data collection, processing, management and sharing – With the widespread application of AI in society and industry, a systematic approach to collecting, processing, managing and sharing data to facilitate AI/machine learning becomes very important. The conventional data management functions in cellular networks focus on managing subscription information and policies. In IMT-2020, driven by the use of AI tools for network optimization and automation, a network data analytics function (NWDAF) was added to the specifications, through which network functions’ measurement data can be collected and used for analytics. Future IMT towards 2030 and beyond is anticipated to further diversify data sources, types and consumption, so it is expected that data plane functions will be part of the IMT system from the beginning and will support full-blown data services to devices, network functions and applications.
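The NWDAF pattern of collecting network-function measurements and exposing derived analytics can be caricatured in a few lines. The actual NWDAF services are defined in 3GPP TS 23.288; the class and method names below are invented for illustration only.

```python
from collections import defaultdict
from statistics import mean

class AnalyticsFunction:
    """Toy NWDAF-style collector: network functions report measurements,
    and consumers request a derived analytic. The interfaces here are
    illustrative, not the 3GPP service-based APIs."""

    def __init__(self):
        self._samples = defaultdict(list)  # (nf_id, metric) -> list of values

    def report(self, nf_id: str, metric: str, value: float):
        """A network function pushes one measurement sample."""
        self._samples[(nf_id, metric)].append(value)

    def analytics(self, metric: str) -> dict:
        """Return the average of each NF's reported values for one metric."""
        return {nf: mean(vals) for (nf, m), vals in self._samples.items() if m == metric}

nwdaf = AnalyticsFunction()
nwdaf.report("upf-1", "load", 0.60)
nwdaf.report("upf-1", "load", 0.80)
nwdaf.report("upf-2", "load", 0.30)
summary = nwdaf.analytics("load")  # per-NF average load, e.g. upf-1 around 0.7
```

An orchestrator could consume such a summary to, say, steer sessions away from the most loaded user plane function; the paragraph above argues that in future IMT systems this data plane should be a first-class system function rather than a bolt-on.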
To address the above-mentioned challenges, computing services and data services are expected to become an integral component of the future IMT system. Ubiquitous computing and data services can be enabled alongside the ubiquitous connectivity as integral services of the IMT system. Dynamic computing workload distribution can be inherently supported as an IMT system capability. Applications can use the IMT system’s workload distribution and scaling capability to achieve optimized performance. Data plane services in the IMT system such as data collection, processing, management and sharing can be enabled to support AI needs in air interface, cellular network and applications.
Source: Intel contribution to ITU-R WP 5D, “Further development of working document towards preliminary draft new Report on future technology trends,” September 21, 2021
No organization or standards/spec-writing body has detailed anything real related to “6G.” All the 6G claims from telecom equipment vendors and network operators are pure propaganda/hype. There is no consensus on what 6G will be, nor is there any effort to standardize “5G Advanced.” Hence, there is no basis whatsoever to talk about standardized 5G Advanced or 6G anytime soon.
Yes, we know 3GPP is working on Release 18, which will have many new features and functions, but its Release 16 (frozen one year ago) is not complete, at least not for the URLLC 5G NR specification and performance testing. Don’t talk about “5G Advanced” or “6G” when the key use case (URLLC) for 5G is not complete. Nor is the implementation specified for the “5G core” or 5G advanced functions, e.g. network slicing, as we’ve stated many, many times.
This article examines what’s real: the important ongoing work by ITU-R (the official standards body for cellular communications and frequencies) on the vision, goals and objectives for what may become 6G. Or maybe not?
ITU-R WP 5D Efforts on IMT Vision for 2030 (which will include “6G”):
ITU-R Working Party 5D (WP 5D) started developing a new draft Recommendation, “IMT Vision for 2030 and beyond,” at its March 2021 meeting. This Recommendation might help drive industry and administrations to encourage further development of IMT for 2030 and beyond.
This Recommendation will define the framework and overall objectives of the future development of IMT for 2030 and beyond, including the role that IMT could play to better serve the needs of the future society, for both developed and developing countries.
For the development of this draft new Recommendation, WP 5D would like to invite the views of External Organizations on the IMT Vision for 2030 and beyond, including but not limited to user and application trends, evolution of IMT, usage scenarios, capabilities, and framework and objectives.
WP 5D will also develop a new draft Report ITU-R M.[IMT.FUTURE TECHNOLOGY TRENDS] which focuses on the following aspects:
“This Report provides a broad view of future technical aspects of terrestrial IMT systems considering the time frame up to 2030 and beyond. It includes information on technical and operational characteristics of terrestrial IMT systems, including the evolution of IMT through advances in technology and spectrally-efficient techniques, and their deployment.”
For the development of these reports, WP 5D invites the views of External Organizations on future technology trends for terrestrial IMT systems, including but not limited to the motivation on driving factors such as new use cases, applications, capabilities, technology trends and enablers. These technical inputs are intended for the timeframe towards 2030 and beyond and are proposed to be significantly advanced and different from that of IMT-2020.
Related documents: ITU Recommendations, Reports, Documents and Handbook:
Recommendation ITU-R M.1645 – Framework and overall objectives of the future development of IMT‑2000 and systems beyond IMT‑2000
Recommendation ITU-R M.2083 – IMT Vision – “Framework and overall objectives of the future development of IMT for 2020 and beyond”
Recommendation ITU-R M.1457 – Detailed specifications of the terrestrial radio interfaces of International Mobile Telecommunications-2000 (IMT-2000)
Recommendation ITU-R M.2012 – Detailed specifications of the terrestrial radio interfaces of International Mobile Telecommunications Advanced (IMT-Advanced)
Recommendation ITU-R M.2150 – Detailed specifications of the terrestrial radio interfaces of International Mobile Telecommunications-2020 (IMT-2020)
Report ITU-R M.2243 – Assessment of the global mobile broadband deployments and forecasts for International Mobile Telecommunications
Report ITU-R M.2320 – Future technology trends of terrestrial IMT systems
Report ITU-R M.2370 – IMT Traffic estimates for the years 2020 to 2030
Report ITU-R M.2376 – Technical feasibility of IMT in bands above 6 GHz
Report ITU-R M.2134 – Requirements related to technical performance for IMT‑Advanced radio interface(s)
Report ITU-R M.2410 – Minimum requirements related to technical performance for IMT-2020 radio interface(s)
Report ITU-R M.2441 – Emerging usage of the terrestrial component of International Mobile Telecommunication (IMT)
Report ITU-R M.[IMT.FUTURE TECHNOLOGY TRENDS TOWARDS 2030 AND BEYOND] – Future technology trends of terrestrial IMT systems towards 2030 and beyond
Key objectives of the Vision towards IMT for 2030 and beyond:
Focus on the continued need for increased coverage, increased capacity and extremely high user data rates;
Focus on the continued need for lower latency and both high and low speeds of movement of mobile terminals;
Fully support the development of a Ubiquitous Intelligent Mobile Society;
Focus on tackling societal challenges identified in UN Sustainable Development Goals (SDGs), in particular to meet the needs of Industry, Innovation and Infrastructure;
Consider what future heterogeneous mobile broadband networks can offer to society and the economy through the applications and services they support;
Target the changing global scenario of how we work and how we stay safe during societal challenges such as the COVID-19 pandemic and global climate change;
Focus on delivering on digital inclusion and connecting the rural and remote communities.
The four key pillars of the vision:
Any future technology should help in the development of a Ubiquitous Intelligent Mobile Connected Society (whatever that means is TBD).
Any future technology should support technologies that can help bridge the digital divide.
Any future technology should support technologies that can personalize/localize services.
Any future technology should support the connectivity/compute technologies that can address issues of real-world data ownership sensitivities.
Brief text for each of the pillars is as below:
1. Development of a Ubiquitous Intelligent Mobile Connected Society:
It is anticipated that public/private/enterprise networks, specialized networks (application/vertical specific), and IoT/sensor networks will grow in number in the coming years and could be based on multiple radio access technologies. Interoperability is one of the most significant challenges in enabling a ubiquitous intelligent connected/compute environment in which different networks, processes, applications, use cases and organizations are connected. This spans applications with very high bandwidth requirements, such as holographic communications and digital twins, down to use cases with extremely low bandwidth requirements, such as sensors.
2. Support technologies that can bridge the digital divide: This is a very important consideration for any future technology development.
Future networks/technologies should support affordability as a key parameter and, to that end, support technologies such as:
Highly composable networks/architectures to address issues of cost and affordability.
Dynamic Spectrum Sharing technologies which can lower the cost of initial spectrum purchase.
Heterogeneous device types to bring costs down without compromising high-end usage scenarios.
Energy efficiency to enable affordability and sustainability.
3. Support technologies that can personalize/localize services.
As home network capabilities and edge device/network capabilities are enhanced, there is an opportunity to personalize services like never before. It is important that personalization of services (focused on individuals, homes, apartments, and small/medium enterprises) is a key focus area.
4. Support technologies that can mimic real-world data ownership and hierarchies.
Personal data protection is becoming important, and as nations focus on data protection and management it is important that any future network/technology takes into account the intrinsic data hierarchies and management aspects. Data ownership granularity spans personal data, enterprise or group data, organizational data, and data considered a national asset (data that is not allowed to leave geographic boundaries).
External Organizations will be invited to contribute to this work item via contributions to future ITU-R WP 5D meetings in 2021 and 2022.
Source: ITU-R WP 5D
Addendum from Leo Lehmann, Chairman ITU-T SG13:
ITU-T ran the Focus Group on Network 2030, which concluded in July 2020. This Focus Group studied the capabilities of networks for the year 2030 and beyond. Those networks are expected to support novel forward-looking scenarios, such as holographic-type communications, extremely fast response in critical situations and the high-precision communication demands of emerging market verticals.
It produced a remarkable white paper, “Network 2030 – A Blueprint of Technology, Applications and Market Drivers Towards the Year 2030 and Beyond” (May 2019).
Even though the studies focused only on non-radio-related aspects, the given use cases might be very important for further discussion of how they might be supported by corresponding spectrum requirements (whatever the “G”).
The Alliance for Telecommunications Industry Solutions (ATIS) has announced election results for the Next G Alliance and its Steering Group, as well as the launch of work on a 6G Roadmap.
Andre Fuetsch, Executive Vice President & Chief Technology Officer, AT&T, has been named chair of the Next G Alliance executive governing body, the Full Member Group (FMG). Jan Söderström, Ericsson’s Head of Technology Office Silicon Valley, has been named FMG vice chair. Among its many roles, the FMG sets the overall strategy and direction for the Next G Alliance as well as its organizational policies. Both the chair and vice chair serve a two-year term.
Three co-chairs have also been named for the Next G Alliance Steering Group (SG). The SG is composed of technology leaders and experts who will identify key North American R&D needs, standards strategies and market readiness policies to achieve the goals established by the Next G Alliance. The SG co-chairs are: AT&T Assistant Vice President – Standards & Industry Alliances Brian Daly; Head of North American Standardization at Nokia, Devaki Chandramouli; and VMware Director, Edge & AI Ecosystems, Telco Cloud Business Unit, Benoit Pelletier.
Setting the stage for the eventual commercialization of 6G, the work of the Next G Alliance will influence and encompass the full lifecycle of research and development, manufacturing, standardization and market readiness. As an initial priority, a 6G Roadmap Working Group has been launched. The National 6G Roadmap being developed will act as a foundation for future outputs, delivering a common vision and destination point for achieving North American 6G wireless leadership. It will define what is needed in terms of research needs, technology developments, service and application enablers, policies and government actions and market priorities.
In addition to the 6G Roadmap Working Group, the Next G Alliance will simultaneously launch a “Green G” Working Group focused on achieving energy efficiency by reducing power consumption and assessing how to achieve a sustainable ecosystem with emerging technologies. The Working Group will evaluate the environmental impact of a broad range of sources including water and materials consumption as well as the use of renewable or ambient energy.
“While innovation frequently occurs in response to market needs, long-term technology leadership takes strategic foresight and critical stakeholders committed to reaching the desired future state,” said Susan M. Miller, President and CEO, ATIS. “With its leadership set and work on both sustainability and the 6G Roadmap launched, the Next G Alliance is well positioned to create a national vision for the next decade.”
Thus far, the Next G Alliance has united 45 of the leading information and communications companies in a shared commitment to advance the evolution of 5G, chart the future of 6G technology and put North America at the forefront of wireless technology leadership for the next decade and beyond. The membership spans infrastructure, semiconductors and device vendors; operators; hyper-scalers and other organizations, including those in the area of research.
If your company is interested in joining, contact ATIS Membership Director Rich Moran.
Learn more about the Next G Alliance at: https://nextgalliance.org/
As a leading technology and solutions development organization, the Alliance for Telecommunications Industry Solutions (ATIS) brings together the top global ICT companies to advance the industry’s business priorities. ATIS’ 150 member companies are currently working to address 6G, 5G, robocall mitigation, IoT, Smart Cities, artificial intelligence-enabled networks, distributed ledger/blockchain technology, cybersecurity, emergency services, quality of service, billing support, operations, and much more. These priorities follow a fast-track development lifecycle – from design and innovation through standards, specifications, requirements, business use cases, software toolkits, open source solutions, and interoperability testing.
ATIS is accredited by the American National Standards Institute (ANSI). ATIS is the North American Organizational Partner for the 3rd Generation Partnership Project (3GPP), a founding Partner of the oneM2M global initiative, a member of the International Telecommunication Union (ITU), as well as a member of the Inter-American Telecommunication Commission (CITEL). For more information, visit www.atis.org. Follow ATIS on Twitter and on LinkedIn.
We think it’s very premature to start an INDEPENDENT group to plan the future of 6G networks for North America. That’s because 5G standards and specs are not even close to being finished. The standardization work on 6G hasn’t started in earnest yet. There’s only an ITU-R draft report on “Technology Trends of terrestrial IMT systems towards 2030 and beyond,” which is scheduled to be completed in July 2022.
Regarding 5G standards and specs being incomplete, revision 6 of Recommendation ITU-R M.1036, specifying frequency arrangements for the terrestrial component of IMT (including 5G/IMT-2020), has not yet been agreed upon in ITU-R WP 5D. It should include all the WRC-19 recommended frequencies for 5G/IMT-2020, especially mmWave.
Another example is that 3GPP Release 16 URLLC in the RAN [Physical Layer Enhancements for NR Ultra-Reliable and Low Latency Communication (URLLC)] has not been completed, despite that release being frozen last July.
3GPP Release 16 5G NR-URLLC in the RAN spec status as of March 25, 2021:
- RP-191584 5G NR Physical Layer Enhancements for Ultra-Reliable and Low Latency Communication (URLLC) [UID=830074 and CODE=NR_L1enh_URLLC] was 37% complete. It is scheduled for completion June 12, 2022.
- RP-190726 Performance part: Physical Layer Enhancements for NR Ultra-Reliable and Low Latency Communication (URLLC) spec was 0% complete and hasn’t been updated since 2019.
- RP-200472 revised NR performance requirement enhancement [UID=840094 CODE=NR_perf_enh] was 0% complete.
Note also that there are no ITU-T recommendations/standards that specify implementation of IMT-2020/5G non-radio aspects. All the work is being done in 3GPP and at a reference architecture level that does NOT specify detailed implementation. That applies to 3GPP specs on the 5G core network, network slicing, and other highly touted 5G features.
Hence, there will surely be many different implementations of 5G “cloud native” core networks, network slicing, virtualization, security, etc.
We think any 6G technology aspects and specification work should be done in ITU-R WP5D for the RAN and 3GPP for the RAN and Core network.
Verizon today announced a deal with Deloitte to collaborate on 5G mobile edge computing services for manufacturing and retail businesses and ultimately expand to other industry verticals. The companies plan to create transformational solutions to serve client-specific needs using Deloitte’s industry and solution engineering expertise combined with Verizon’s advanced mobile and private enterprise wireless networks, 5G Edge MEC platform, IoT, Software Defined-Wide Area Network (SD-WAN), and VNS Application Edge capabilities.
Verizon and Deloitte are collaborating on innovative solutions to transform manufacturers into “real-time enterprises” with real-time intelligence and business agility by integrating next-gen technologies including 5G, MEC, computer vision and AI with cloud and advanced networking. The companies are co-developing a smart factory solution at Verizon’s Customer Technology Center in Richardson, TX that will utilize computer vision and sensor-based detection coupled with MEC to identify and predict quality defects on the assembly line and automatically alert plant engineering and management in near real-time.
The companies will also introduce an integrated network and application edge compute environment for next generation application functionality and performance that reduces the need for manual quality inspection, avoids lost productivity, reduces production waste, and ultimately lowers the cost of raw materials and improves plant efficiency. The combination of SD-WAN and VNS Application Edge will bring together software defined controls, application awareness, and application lifecycle management to deliver on-demand network transformation and edge application deployment and management.
“By bringing together Verizon’s 5G and MEC prowess with Deloitte’s deep industry expertise and track record in system integration with large enterprises on smart factories, we plan to deliver cutting-edge solutions that will close the gap between digital business operations and legacy manufacturing environments and unlock the value of the end-to-end digital enterprise,” said Tami Erwin, CEO of Verizon Business. “This collaboration is part of Verizon’s broader strategy to align with enterprises, startups, universities and government to explore how 5G and MEC can disrupt and transform nearly every industry.”
“In our recently published Deloitte Advanced Wireless Adoption study, over 85% of US executives surveyed indicated that advanced wireless is a force multiplier that will unlock the full potential of edge computing, AI, Cloud, IoT, and data analytics. Our collaboration with Verizon combines Deloitte’s business transformation expertise with advanced wireless and MEC technology to deliver game changing solutions,” said Ajit Prabhu, US Ecosystems & Alliances Strategy Officer and 5G/Edge Computing Commercialization leader, Deloitte Consulting LLP.
The #1 U.S. wireless telco still plans to reach an additional two cities with its mobile edge computing (MEC) network, ending the year with availability in 10 cities.
Verizon is also working with Microsoft Azure on private 5G MEC, Amazon Web Services (AWS) on consumer-oriented 5G MEC, IBM on IoT, Samsung and Corning on in-building 5G radios, Apple, major sporting leagues, and other organizations — all in an effort to explore and develop new use cases for 5G.
The MEC activities follow a flurry of announcements last week, when Verizon expanded its low-band 5G network to reach up to 230 million people, said its millimeter-wave 5G network is now live in parts of 61 U.S. cities, revealed an on-premises private 4G LTE service for enterprises, expanded a partnership with SAP, inked a multi-year deal with Walgreens Boots Alliance, and launched an IoT services platform.
Separately, Verizon CTO Kyle Malady said that there’s currently no clear reason to move beyond 5G. “I really don’t know what the hell 6G is,” he said. Neither does anyone else; see the Opinion below.
“We just put 5G in. And I think there’s a lot of development still to come on that one.”
Verizon, AT&T, Apple, Google and a wide range of other companies have already teamed under ATIS’ “Next G Alliance” that seeks to unite US industry, government and academia around 6G efforts.
Opinion on “6G”:
Talk of “6G” is preposterous at this time, since we don’t even have an approved 5G RAN/IMT-2020 RIT spec or standard that meets the 5G URLLC performance requirements in ITU-R M.2410. Despite numerous 3GPP Release 16 specs, we don’t have a standard for 5G core network implementation, 5G security, 5G network management, 5G network slicing, etc.
At its 34th meeting (19-26 February 2020), ITU‑R Working Party (WP) 5D decided to start study on future technology trends for the future evolution of IMT. A preliminary draft new Report ITU-R M.[IMT.FUTURE TECHNOLOGY TRENDS] will be developed and will consider related information from various external organizations and country/regional research programs.
The scope of the new report ITU-R M.[IMT.FUTURE TECHNOLOGY TRENDS] focuses on the following aspects:
“This Report provides a broad view of future technical aspects of terrestrial IMT systems considering the time frame up to 2030 and beyond. It includes information on technical and operational characteristics of terrestrial IMT systems, including the evolution of IMT through advances in technology and spectrally-efficient techniques, and their deployment.”
In a Sept 27, 2020 ITU-R WP5D contribution, China stated:
IMT technology needs to show sustainable vitality in the perspective of technical development. There are emerging services and applications, and their further development towards 2030 and beyond will impose higher requirements on the IMT system. It motivates the introduction of new IMT technical features, e.g., very high spectrum up to Terahertz, native artificial intelligence (AI), integrated sensing and communications, integrated terrestrial and non-terrestrial networks, blockchain and quantum computing for multi-lateral trustworthiness architecture, etc., which were not emphasised in Report ITU-R M.2320-0 considering the time-frame for 2015-2020. IMT technology continues to develop and it is necessary for ITU to provide a broad view of future technical aspects of IMT systems considering 2030 and beyond.
And suggested topics to be covered in this new IMT.FUTURE TECHNOLOGY TRENDS Report:
IMT technology trends and enablers for the time up to 2030 and beyond:
Technologies for a further enhanced radio interface, including advanced modulation, coding and multiple access schemes, E-MIMO (Extreme MIMO), Co-frequency Co-time Full Duplex (CCFD) communications, and multiple physical dimension transmission
Technologies for Terahertz communication and optical wireless communication
Technologies for native AI based communication
Technologies for integrated sensing and communication
Technologies for integrated terrestrial and non-terrestrial communications
Technologies for integrated access and super sidelink communications
Technologies for high energy efficiency and low energy consumption
Technologies for native security, privacy, and trust
Technologies for efficient spectrum utilization
Editor’s Note: The next meeting of ITU-R WP 5D is March 1-12, 2021 (e-meeting).
This author is truly astounded by all the buzz about 6G when neither 3GPP nor ITU-R WP 5D (nor ITU-T) has completed its 5G specs. However, work is progressing in ITU-R WP 5D on the evolution of IMT over the next ten years, with a report scheduled to be completed in June 2022.
Future Technology Trends for the evolution of IMT towards 2030 and beyond:
Considering the successful accomplishments by ITU-R for the evolution of IMT-2000, IMT-Advanced and IMT-2020, similar actions are proposed for the evolution of IMT towards 2030 and beyond. The approach taken for the evolution from IMT-Advanced towards IMT-2020 was to start with the work on Report ITU-R M.2320, “Future technology trends of terrestrial IMT systems” (approved in 2014). As noted above, at its 34th meeting (19-26 February 2020) WP 5D decided to start the corresponding study for the future evolution of IMT, to be captured in the preliminary draft new Report ITU-R M.[IMT.FUTURE TECHNOLOGY TRENDS], which will consider related information from various external organizations and country/regional research programs. As quoted above, that Report will provide a broad view of future technical aspects of terrestrial IMT systems, considering the time frame up to 2030 and beyond.
For the development of this report, WP 5D invites the views of External Organizations on future technology trends for terrestrial IMT systems, including but not limited to the motivation on driving factors such as new use cases, applications, capabilities, technology trends and enablers. These technical inputs are intended for the timeframe towards 2030 and beyond and are proposed to be significantly advanced and different from that of IMT-2020.
A few potential aspects of the new report (subject to change based on inputs from external organizations):
- Motivation on driving factors for future technology trends towards 2030 and beyond
- Driving factors in the design of future IMT technology
- Technology Trends and Enablers
- Technologies to enhance the radio interface
- Technologies to enhance radio network performance and precision
- Technologies for native AI based communication
- Technologies to enhance service coverage
- Technologies to enhance privacy and security
- Technologies for integrated sensing and communication
- Technologies for integrated terrestrial and non-terrestrial communications
- Technologies for integrated access and super sidelink communications
- Technologies to enhance adaptability and sustainability
- Technologies for efficient spectrum utilization
- Terminal technologies
- Technologies to support a wide range of new use cases and applications
- Summary and Conclusion
- Acronyms, Terminology, Abbreviations
WP 5D plans to complete this study at the 41st WP 5D meeting in June 2022.