AI in Networks Market
SoftBank’s Transformer AI model boosts 5G AI-RAN uplink throughput by 30%, compared to a baseline model without AI
SoftBank has developed its own Transformer-based AI model for wireless signal processing. SoftBank used the model to improve uplink channel interpolation, a signal processing technique in which the network estimates the characteristics and current state of a signal’s channel from limited measurements. Enabling this type of intelligence in a network contributes to faster, more stable communication, according to SoftBank. The Japanese wireless network operator successfully increased uplink throughput by approximately 20% compared to a conventional signal processing method (the baseline method). In the latest demonstration, the new Transformer-based architecture ran on GPUs and was tested in a live Over-the-Air (OTA) wireless environment. In addition to confirming real-time operation, the results showed further throughput gains and ultra-low latency.
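For readers unfamiliar with channel interpolation, the sketch below is an illustrative assumption, not SoftBank’s method: it shows the kind of non-AI baseline such demos compare against, where the receiver measures the channel only at known pilot subcarriers and linearly interpolates everywhere else. The channel model, pilot spacing, and noise level are all hypothetical.

```python
import numpy as np

num_subcarriers = 64
pilot_positions = np.arange(0, num_subcarriers, 8)   # pilots on every 8th subcarrier

# Hypothetical "true" frequency-selective channel (simple two-path model)
freqs = np.arange(num_subcarriers)
true_channel = np.exp(-2j * np.pi * freqs * 0.01) + 0.5 * np.exp(-2j * np.pi * freqs * 0.05)

# Noisy channel estimates are available only at the pilot positions
rng = np.random.default_rng(0)
pilot_estimates = true_channel[pilot_positions] + 0.05 * (
    rng.standard_normal(len(pilot_positions)) + 1j * rng.standard_normal(len(pilot_positions))
)

# Baseline (non-AI): linearly interpolate real and imaginary parts across all subcarriers
interp_real = np.interp(freqs, pilot_positions, pilot_estimates.real)
interp_imag = np.interp(freqs, pilot_positions, pilot_estimates.imag)
estimated_channel = interp_real + 1j * interp_imag

mse = np.mean(np.abs(estimated_channel - true_channel) ** 2)
print(f"Baseline interpolation MSE: {mse:.4f}")
```

An AI model replaces the linear-interpolation step with a learned mapping, which is where the throughput gains reported above come from.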
Editor’s note: A Transformer model is a type of neural network architecture that emerged in 2017. It excels at interpreting streams of sequential data and underpins large language models (LLMs). Transformer models have also achieved elite performance in other fields of artificial intelligence (AI), including computer vision, speech recognition and time series forecasting. They can be lightweight, efficient, and versatile – capable of natural language processing (NLP), image recognition and, as this SoftBank demo shows, wireless signal processing.
Significant throughput improvement:
- Uplink channel interpolation using the new architecture improved uplink throughput by approximately 8% compared to the conventional CNN model. Compared to the baseline method without AI, this represents an approximately 30% increase in throughput, showing that the continuous evolution of AI models leads to enhanced communication quality in real-world environments.
Higher AI performance with ultra-low latency:
- While real-time 5G communication requires processing in under 1 millisecond, this demonstration with the Transformer achieved an average processing time of approximately 338 microseconds, an ultra-low latency that is about 26% faster than the convolutional neural network (CNN) [1.] based approach. Generally, AI model processing speeds decrease as performance increases. This achievement overcomes the technically difficult challenge of simultaneously achieving higher AI performance and lower latency. Editor’s note: Perhaps this can overcome the performance limitations in ITU-R M.2150 for URLLC in the RAN, which is based on an incomplete 3GPP Release 16 specification.
Note 1. CNN-based approaches to achieving low latency focus on optimizing model architecture, computation, and hardware to accelerate inference, especially in real-time applications. Rather than relying on a single technique, the best results are often achieved through a combination of methods.
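As a sanity check, the reported figures are internally consistent: an ~8% gain over the CNN on top of the CNN’s ~20% gain over the non-AI baseline compounds to roughly 30%, and, assuming “26% faster” means 26% lower processing time, the CNN’s implied latency is about 457 microseconds. Both models fit the 1 ms budget:

```python
# Throughput: the gains compound multiplicatively
cnn_gain_over_baseline = 1.20       # ~20% uplink gain from the earlier CNN demo
transformer_gain_over_cnn = 1.08    # ~8% further gain over the CNN

combined_gain = cnn_gain_over_baseline * transformer_gain_over_cnn
print(f"Combined gain over non-AI baseline: {(combined_gain - 1) * 100:.1f}%")  # ~29.6%, i.e. ~30%

# Latency: "26% faster" read as 26% lower processing time
transformer_latency_us = 338
cnn_latency_us = transformer_latency_us / (1 - 0.26)
print(f"Implied CNN latency: {cnn_latency_us:.0f} microseconds")  # ~457

# Both comfortably under the 1 ms (1000 microsecond) real-time budget
assert transformer_latency_us < 1000 and cnn_latency_us < 1000
```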
Using the new architecture, SoftBank conducted a simulation of “Sounding Reference Signal (SRS) prediction,” a process required for base stations to assign optimal radio waves (beams) to terminals. Previous research using a simpler Multilayer Perceptron (MLP) AI model for SRS prediction confirmed a maximum downlink throughput improvement of about 13% for a terminal moving at 80 km/h.
In the new simulation with the Transformer-based architecture, the downlink throughput for a terminal moving at 80 km/h improved by up to approximately 29%, and by up to approximately 31% for a terminal moving at 40 km/h. This confirms that enhancing the AI model more than doubled the throughput improvement rate (see Figure 1). This is a crucial achievement that will lead to a dramatic improvement in communication speeds, directly impacting the user experience.
The most significant technical challenge for the practical application of “AI for RAN” is to further improve communication quality using high-performance AI models while operating under the real-time processing constraint of less than one millisecond. SoftBank addressed this by developing a lightweight and highly efficient Transformer-based architecture that focuses only on essential processes, achieving both low latency and maximum AI performance. The important features are:
(1) Grasps overall wireless signal correlations
By leveraging the “Self-Attention” mechanism, a key feature of Transformers, the architecture can grasp wide-ranging correlations in wireless signals across frequency and time (e.g., complex signal patterns caused by radio wave reflection and interference). This allows it to maintain high AI performance while remaining lightweight. Convolution focuses on a part of the input, while Self-Attention captures the relationships of the entire input (see Figure 2).
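A toy sketch of scaled dot-product self-attention (generic, not SoftBank’s architecture) makes the contrast concrete: every output position is a weighted mix of all input positions, whereas a convolution would only mix a local window. The dimensions and random weights here are arbitrary.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention: each output position is a
    softmax-weighted combination of ALL input positions."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (seq, seq) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over every position
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 12, 8          # e.g. 12 time/frequency samples, 8 features each
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                  # (12, 8): each output mixes all 12 inputs
```

The (seq, seq) weight matrix is exactly the “wide-ranging correlation” structure described above; a convolution kernel would instead be zero outside its local window.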
(2) Preserves physical information of wireless signals
While it is common to normalize input data to stabilize learning in AI models, the architecture features a proprietary design that uses the raw amplitude of wireless signals without normalization. This ensures that crucial physical information indicating communication quality is not lost, significantly improving the performance of tasks like channel estimation.
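The design point can be illustrated with a minimal example (an assumption-laden sketch, not SoftBank’s code): two copies of the same waveform at very different received power levels become indistinguishable once each is standardized, so the amplitude information, which reflects path loss and signal strength, is lost.

```python
import numpy as np

rng = np.random.default_rng(1)
waveform = rng.standard_normal(256)

strong_signal = 1.0 * waveform    # terminal close to the base station
weak_signal = 0.01 * waveform     # terminal far away: 40 dB weaker

def standardize(x):
    """Common per-sample normalization: zero mean, unit variance."""
    return (x - x.mean()) / x.std()

# After standard normalization the two inputs are indistinguishable:
print(np.allclose(standardize(strong_signal), standardize(weak_signal)))  # True

# Feeding raw amplitudes instead preserves the ~100x power difference:
print(strong_signal.std() / weak_signal.std())
```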
(3) Versatility for various tasks
The architecture has a versatile, unified design. By making only minor changes to its output layer, it can be adapted to handle a variety of different tasks, including channel interpolation/estimation, SRS prediction, and signal demodulation. This reduces the time and cost associated with developing separate AI models for each task.
The demonstration results show that high-performance AI models like Transformers, and the GPUs that run them, are indispensable for achieving the high communication performance required in the 5G-Advanced and 6G eras. Furthermore, an AI-RAN that controls the RAN on GPUs allows for continuous performance upgrades through software updates as more advanced AI models emerge, even after the hardware has been deployed. This will enable telecommunication carriers to improve the efficiency of their capital expenditures and maximize value.
Moving forward, SoftBank will accelerate the commercialization of the technologies validated in this demonstration. By further improving communication quality and advancing networks with AI-RAN, SoftBank will contribute to innovation in future communication infrastructure. The Japan-based conglomerate strongly endorsed AI-RAN at MWC 2025.
References:
https://www.softbank.jp/en/corp/news/press/sbkk/2025/20250821_02/
https://www.telecoms.com/5g-6g/softbank-claims-its-ai-ran-tech-boosts-throughput-by-30-
https://www.telecoms.com/ai/softbank-makes-mwc-25-all-about-ai-ran
https://www.ibm.com/think/topics/transformer-model
https://www.itu.int/rec/R-REC-M.2150/en
Softbank developing autonomous AI agents; an AI model that can predict and capture human cognition
Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025
Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined
Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
OpenAI announces new open weight, open source GPT models which Orange will deploy
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030
An RtBrick survey of 200 senior telecom decision makers in the U.S., UK, and Australia finds that network operator leaders are failing to make key decisions and lack the motivation to change. The report exposes urgent warnings from telco engineers that their networks are on a five-year collision course with AI and streaming traffic. It finds that 93% of respondents report a lack of support from leadership to deploy disaggregated network equipment. Key findings:
- Risk-averse leadership and a lack of skills are the top factors that are choking progress.
- Majority are stuck in early planning, while AT&T, Deutsche Telekom, and Comcast lead large-scale disaggregation rollouts.
- Operators anticipate higher broadband prices but fear customer backlash if service quality can’t match the price.
- Organizations require more support from leadership to deploy disaggregation (93%).
- Complexity around operational transformation (42%), such as redesigning architectures and workflows.
- Critical shortage of specialist skills/staff (38%) to manage disaggregated systems.
The survey finds that almost nine in ten operators (87%) expect customers to demand higher broadband speeds by 2030, while 79% say their customers expect costs to increase, suggesting customers will pay more for faster service. Yet half of all leaders (49%) admit they lack complete confidence in delivering services at a viable cost. Eighty-four percent say customer expectations for faster, cheaper broadband are already outpacing their networks, while 81% concede their current architectures are not well-suited to handling future increases in bandwidth demand, suggesting they may struggle with the next wave of AI and streaming traffic.
“Senior leaders, engineers, and support staff inside operators have made their feelings clear: the bottleneck isn’t capacity, it’s decision-making,” said Pravin S Bhandarkar, CEO and Founder of RtBrick. “Disaggregated networks are no longer an experiment. They’re the foundation for the agility, scalability, and transparency operators need to thrive in an AI-driven, streaming-heavy future,” he added, noting the survey’s figure on respondents’ intent to deploy disaggregation.
However, execution continues to trail ambition. Only one in twenty leaders has confirmed they’re “in deployment” today, while 49% remain stuck in early-stage “exploration”, and 38% are still “in planning”. Meanwhile, big-name operators such as AT&T, Deutsche Telekom, and Comcast are charging ahead and already actively deploying disaggregation at scale, demonstrating faster rollouts, greater operational control, and true vendor flexibility. Here’s a snapshot of those activities:
- AT&T has deployed an open, disaggregated routing network in its core, powered by DriveNets Network Cloud software on white-box bare metal switches and routers from Taiwanese ODMs, according to Israel-based DriveNets. DriveNets utilizes a Distributed Disaggregated Chassis (DDC) architecture, in which a cluster of bare metal switches acts as a single routing entity. That architecture has enabled AT&T to accelerate 5G and fiber rollouts and improve network scalability and performance. It has made 1.6Tb/s transport a reality on AT&T’s live network.
- Deutsche Telekom has deployed a disaggregated broadband network using routing software from RtBrick running on bare-metal switch hardware to provide high-speed internet connectivity. They’re also actively promoting Open BNG solutions as part of this initiative.
- Comcast uses network cloud software from DriveNets and white-box hardware to disaggregate their core network, aiming to increase efficiency and enable new services through a self-healing and consumable network. This also includes the use of disaggregated, pluggable optics from multiple vendors.
Nearly every leader surveyed also claims their organization is “using” or “planning to use” AI in network operations, including for planning, optimization, and fault resolution. However, more than nine in ten (93%) say they cannot unlock AI’s full value without richer, real-time network data. This requires a more open, modular, software-driven architecture, enabled by network disaggregation.
“Telco leaders see AI as a powerful asset that can enhance network performance,” said Zara Squarey, Research Manager at Vanson Bourne. “However, the data shows that without support from leadership, specialized expertise, and modern architectures that open up real-time data, disaggregation deployments may risk further delays.”
When asked what benefits they expect disaggregation to deliver, operators focused on outcomes that could deliver the following benefits:
- 54% increased operational automation
- 54% enhanced supply chain resilience
- 51% improved energy efficiency
- 48% lower purchase and operational costs
- 33% reduced vendor lock-in
Transformation priorities align with those goals, with automation and agility (57%) ranked first, followed by vendor flexibility (55%), supply chain security (51%), energy usage and sustainability (47%) and cost efficiency (46%).
About the research:
The ‘State of Disaggregation’ research was independently conducted by Vanson Bourne in June 2025 and commissioned by RtBrick to identify the primary drivers and barriers to disaggregated network rollouts. The findings are based on responses from 200 telecom decision makers across the U.S., UK, and Australia, representing operations, engineering, and design/Research and Development at organizations with 100 to 5,000 or more employees.
References:
https://www.rtbrick.com/state-of-disaggregation-report-2
https://drivenets.com/blog/disaggregation-is-driving-the-future-of-atts-ip-transport-today/
Disaggregation of network equipment – advantages and issues to consider
OpenAI announces new open weight, open source GPT models which Orange will deploy
Overview:
OpenAI today introduced two new open-weight, open-source GPT models (gpt-oss-120b and gpt-oss-20b) designed to deliver top-tier performance at a lower cost. Available under the flexible Apache 2.0 license, these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware. They were trained using a mix of reinforcement learning and techniques informed by OpenAI’s most advanced internal models, including o3 and other frontier systems.
These two new AI models require much less compute power to run, with the gpt-oss-20b version able to run on just 16 GB of memory. The smaller memory footprint and lower compute requirement enable OpenAI’s models to run in a wider variety of environments, including at the network edge. The open weights mean those using the models can fine-tune the model parameters and customize them for specific tasks.
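A rough back-of-envelope calculation suggests why a model of this size fits in 16 GB, assuming roughly 4-bit (MXFP4) weight quantization as OpenAI describes; the parameter count and byte figures below are estimates for illustration, not OpenAI’s published numbers:

```python
# Assumption: ~21B total parameters for gpt-oss-20b, quantized to ~4 bits
# per weight plus a small allowance for quantization scale metadata.
params = 21e9
bits_per_weight = 4.25

weight_bytes = params * bits_per_weight / 8
print(f"Weights: ~{weight_bytes / 1e9:.1f} GB")   # ~11.2 GB

# That leaves several GB of headroom for activations and the KV cache
# within a 16 GB memory budget.
```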
OpenAI has been working with early partner companies, including AI Sweden, Orange, and Snowflake, to learn about real-world applications of its open models, from hosting them on-premises for data security to fine-tuning them on specialized datasets. “We’re excited to provide these best-in-class open models to empower everyone—from individual developers to large enterprises to governments—to run and customize AI on their own infrastructure,” the company said. “Coupled with the models available in our API, developers can choose the performance, cost, and latency they need to power AI workflows.”
In lockstep with OpenAI, France’s Orange today announced plans to deploy the new OpenAI models in its regional cloud data centers as well as small on-premises servers and edge sites to meet demand for sovereign AI solutions. Orange’s deep AI engineering talent enables it to customize and distill the OpenAI models for specific tasks, effectively creating smaller sub-models for particular use-cases, while ensuring the protection of all sensitive data used in these customized models. This process facilitates innovative use-cases in network operations and will enable Orange to build on its existing suite of ‘Live Intelligence’ AI solutions for enterprises, as well as utilizing it for its own operational needs to improve efficiency, and drive cost savings.
AI will also be used to improve the quality and resilience of Orange’s networks, for example by enabling Orange to more easily explore and diagnose complex network issues. This can be achieved with trusted AI models that operate entirely within Orange sovereign data centers, where Orange has complete control over the use of sensitive network data. This ability to create customized, secure, and sovereign AI models for network use cases is a key enabler in Orange’s mission to achieve higher levels of automation across all of its networks.
Steve Jarrett, Orange’s Chief AI Officer, noted the decision to use state-of-the-art open-weight models will allow it to drive “new use cases to address sensitive enterprise needs, help manage our networks, enable innovating customer care solutions including African regional languages, and much more.”
Performance of the new OpenAI models:
gpt-oss-120b outperforms OpenAI o3‑mini and matches or exceeds OpenAI o4-mini on competition coding (Codeforces), general problem solving (MMLU and HLE) and tool calling (TauBench). It furthermore does even better than o4-mini on health-related queries (HealthBench) and competition mathematics (AIME 2024 & 2025). gpt-oss-20b matches or exceeds OpenAI o3‑mini on these same evals, despite its small size, even outperforming it on competition mathematics and health.
Sovereign AI Market Forecasts:
Open-weight and open-source AI models play a significant role in enabling and shaping the development of Sovereign AI: a nation’s (or organization’s) ability to control its own AI development and deployment, including data, infrastructure, and talent, to meet its specific needs and regulations. It is about ensuring strategic autonomy in artificial intelligence, enabling countries to leverage AI for their own economic, social, and security interests while adhering to their own values and regulations.
Bank of America’s financial analysts recently forecast the sovereign AI market segment could become a “$50 billion a year opportunity, accounting for 10%–15% of the global $450–$500 billion AI infrastructure market.”
BofA analysts said, “Sovereign AI nicely complements commercial cloud investments with a focus on training and inference of LLMs in local culture, language and needs,” and could mitigate challenges such as “limited power availability for data centers in US” and trade restrictions with China.
References:
https://openai.com/index/introducing-gpt-oss/
https://newsroom.orange.com/orange-and-openai-collaborate-on-trusted-responsible-and-inclusive-ai/
https://finance.yahoo.com/news/nvidia-amd-targets-raised-bofa-162314196.html
Open AI raises $8.3B and is valued at $300B; AI speculative mania rivals Dot-com bubble
OpenAI partners with G42 to build giant data center for Stargate UAE project
Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC
Nvidia’s networking solutions give it an edge over competitive AI chip makers
Nvidia’s networking equipment and module sales accounted for $12.9 billion of its $115.1 billion in data center revenue in its prior fiscal year. Composed of its NVLink, InfiniBand, and Ethernet solutions, Nvidia’s networking products (from its Mellanox acquisition) are what allow its GPU chips to communicate with each other, let servers talk to each other inside massive data centers, and ultimately ensure end users can connect to it all to run AI applications.
“The most important part in building a supercomputer is the infrastructure. The most important part is how you connect those computing engines together to form that larger unit of computing,” explained Gilad Shainer, senior vice president of networking at Nvidia.
In Q1-2025, networking made up $4.9 billion of Nvidia’s $39.1 billion in data center revenue. And it’ll continue to grow as customers continue to build out their AI capacity, whether that’s at research universities or massive data centers.
“It is the most underappreciated part of Nvidia’s business, by orders of magnitude,” Deepwater Asset Management managing partner Gene Munster told Yahoo Finance. “Basically, networking doesn’t get the attention because it’s 11% of revenue. But it’s growing like a rocket ship.” “[Nvidia is a] very different business without networking,” Munster explained. “The output that the people who are buying all the Nvidia chips [are] desiring wouldn’t happen if it wasn’t for their networking.”
Nvidia senior vice president of networking Kevin Deierling says the company has to work across three different types of networks:
- NVLink technology connects GPUs to each other within a server or multiple servers inside of a tall, cabinet-like server rack, allowing them to communicate and boost overall performance.
- InfiniBand connects multiple server nodes across data centers to form what is essentially a massive AI computer.
- Ethernet provides front-end network connectivity for storage and system management.
Note: Industry groups also have their own competing networking technologies including UALink, which is meant to go head-to-head with NVLink, explained Forrester analyst Alvin Nguyen.
“Those three networks are all required to build a giant AI-scale, or even a moderately sized enterprise-scale, AI computer,” Deierling explained. Low latency is key as longer transit times for data going to/from GPUs slows the entire operation, delaying other processes and impacting the overall efficiency of an entire data center.
Nvidia CEO Jensen Huang presents a Grace Blackwell NVLink72 as he delivers a keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada on January 6, 2025. Photo by PATRICK T. FALLON/AFP via Getty Images
As companies continue to develop larger AI models and autonomous and semi-autonomous agentic AI capabilities that can perform tasks for users, making sure those GPUs work in lockstep with each other becomes increasingly important.
The AI industry is in the midst of a broad reordering around the idea of inferencing, which requires more powerful data center systems to run AI models. “I think there’s still a misperception that inferencing is trivial and easy,” Deierling said.
“It turns out that it’s starting to look more and more like training as we get to [an] agentic workflow. So all of these networks are important. Having them together, tightly coupled to the CPU, the GPU, and the DPU [data processing unit], all of that is vitally important to make inferencing a good experience.”
Competitor AI chip makers, like AMD, are looking to grab more market share from Nvidia, and cloud giants like Amazon, Google, and Microsoft continue to design and develop their own AI chips. However, none of them have the low-latency, high-speed connectivity solutions provided by Nvidia (again, think Mellanox).
References:
https://www.nvidia.com/en-us/networking/
Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections
Nvidia enters Data Center Ethernet market with its Spectrum-X networking platform
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
Does AI change the business case for cloud networking?
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA
Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system
On Saturday, Huawei Technologies displayed an advanced AI computing system in China, as the Chinese technology giant seeks to capture market share in the country’s growing artificial intelligence sector. Huawei’s CloudMatrix 384 system made its first public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company’s booth.
The Huawei CloudMatrix 384 is a high-density AI computing system featuring 384 Huawei Ascend 910C chips, designed to rival Nvidia’s GB200 NVL72 (more below). The AI system employs a “supernode” architecture with high-speed internal chip interconnects. The system is built with optical links for low-latency, high-bandwidth communication. Huawei has also integrated the CloudMatrix 384 into its cloud platform. The system has drawn close attention from the global AI community since Huawei first announced it in April.
The CloudMatrix 384 resides on the super-node Ascend platform and uses high-speed bus interconnection, resulting in low-latency linkage between 384 Ascend NPUs. Huawei says that “compared to traditional AI clusters that often stack servers, storage, network technology, and other resources, Huawei CloudMatrix has a super-organized setup. As a result, it also reduces the chance of facing failures at times of large-scale training.”

Huawei staff at its WAIC booth declined to comment when asked to introduce the CloudMatrix 384 system. A spokesperson for Huawei did not respond to questions. However, Huawei says that “early reports revealed that the CloudMatrix 384 can offer 300 PFLOPs of dense BF16 computing. That’s double of Nvidia GB200 NVL72 AI tech system. It also excels in terms of memory capacity (3.6x) and bandwidth (2.1x).” Indeed, industry analysts view the CloudMatrix 384 as a direct competitor to Nvidia’s GB200 NVL72, the U.S. GPU chipmaker’s most advanced system-level product currently available in the market.
One industry expert has said the CloudMatrix 384 system rivals Nvidia’s most advanced offerings. Dylan Patel, founder of semiconductor research group SemiAnalysis, said in an April article that Huawei now had AI system capabilities that could beat Nvidia’s AI system. The CloudMatrix 384 incorporates 384 of Huawei’s latest 910C chips and outperforms Nvidia’s GB200 NVL72 on some metrics, which uses 72 B200 chips, according to SemiAnalysis. The performance stems from Huawei’s system design capabilities, which compensate for weaker individual chip performance through the use of more chips and system-level innovations, SemiAnalysis said.
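Taking the figures quoted above at face value (Huawei’s claim that 300 dense BF16 PFLOPs is double the GB200 NVL72 implies ~150 PFLOPs for the Nvidia system; Nvidia’s published figures differ, so treat this purely as an illustration), the per-chip gap SemiAnalysis describes is easy to see:

```python
# Article's figures: 300 PFLOPs across 384 Ascend 910C chips,
# claimed to be double an NVL72 system of 72 B200 chips.
cloudmatrix_pflops, cloudmatrix_chips = 300, 384
nvl72_pflops, nvl72_chips = 300 / 2, 72

per_ascend = cloudmatrix_pflops / cloudmatrix_chips   # ~0.78 PFLOPs per 910C
per_b200 = nvl72_pflops / nvl72_chips                 # ~2.08 PFLOPs per B200

# System-level lead despite a large per-chip deficit:
print(f"Per-chip ratio (B200 / 910C): {per_b200 / per_ascend:.1f}x")  # ~2.7x
```

This is the "more chips plus system-level innovation" trade-off in miniature: roughly 5.3x the chip count overcomes a ~2.7x per-chip disadvantage.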
Huawei has become widely regarded as China’s most promising domestic supplier of chips essential for AI development, even though the company faces U.S. export restrictions. Nvidia CEO Jensen Huang told Bloomberg in May that Huawei had been “moving quite fast” and named the CloudMatrix as an example.
Huawei says the system uses “supernode” architecture that allows the chips to interconnect at super-high speeds and in June, Huawei Cloud CEO Zhang Pingan said the CloudMatrix 384 system was operational on Huawei’s cloud platform.
According to Huawei, the Ascend AI chip-based CloudMatrix 384 offers three important benefits:
- Ultra-large bandwidth
- Ultra-Low Latency
- Ultra-Strong Performance
These three benefits can help enterprises achieve better AI training as well as stable inference (reasoning) performance for models, while also supporting long-term reliability.
References:
https://www.huaweicentral.com/huawei-launches-cloudmatrix-384-ai-chip-cluster-against-nvidia-nvl72/
https://semianalysis.com/2025/04/16/huawei-ai-cloudmatrix-384-chinas-answer-to-nvidia-gb200-nvl72/
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era
FT: Nvidia invested $1bn in AI start-ups in 2024
Gen AI eroding critical thinking skills; AI threatens more telecom job losses
Two alarming research studies this year have drawn attention to the damage that Gen AI agents like ChatGPT are doing to our brains:
The first study, published in February, by Microsoft and Carnegie Mellon University, surveyed 319 knowledge workers and concluded that “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skills for independent problem-solving.”
An MIT study divided participants into three essay-writing groups. One group had access to Gen AI and another to Internet search engines while the third group had access to neither. This “brain” group, as MIT’s researchers called it, outperformed the others on measures of cognitive ability. By contrast, participants in the group using a Gen AI large language model (LLM) did the worst. “Brain connectivity systematically scaled down with the amount of external support,” said the report’s authors.
Across the 20 companies regularly tracked by Light Reading, headcount fell by 51,700 last year. Since 2015, it has dropped by more than 476,600, more than a quarter of the previous total.
Source: Light Reading
………………………………………………………………………………………………………………………………………………
Doing More with Less:
- In 2015, Verizon generated sales of $131.6 billion with a workforce of 177,700 employees. Last year, it made $134.8 billion with fewer than 100,000. Revenues per employee, accordingly, have risen from about $741,000 to more than $1.35 million over this period.
- AT&T made nearly $868,000 per employee last year, compared with less than $522,000 in 2015.
- Deutsche Telekom, buoyed by its T-Mobile US business, has grown its revenue per employee from about $356,000 to more than $677,000 over the same time period.
- Orange’s revenue per employee has risen from $298,000 to $368,000.
Significant workforce reductions have happened at all those companies, especially AT&T, which finished last year with 141,000 employees – about half the number it had in 2015!
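The revenue-per-employee arithmetic above checks out; in the sketch below the 2024 Verizon headcount, stated above only as “fewer than 100,000,” is assumed to be 99,600 for illustration:

```python
def revenue_per_employee(revenue_usd, employees):
    return revenue_usd / employees

# 2015: $131.6B revenue, 177,700 employees (figures from the text)
verizon_2015 = revenue_per_employee(131.6e9, 177_700)
print(f"Verizon 2015: ${verizon_2015:,.0f} per employee")   # ~$740,574

# 2024: $134.8B revenue; headcount of 99,600 is an assumed round figure
verizon_2024 = revenue_per_employee(134.8e9, 99_600)
print(f"Verizon 2024: ${verizon_2024:,.0f} per employee")   # ~$1.35M
```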
Not to be outdone, headcount at network equipment companies is also shrinking. Ericsson, Europe’s biggest 5G vendor, cut 6,000 jobs, or 6% of its workforce, last year and has slashed 13,000 jobs since 2023. Nokia’s headcount fell from 86,700 in 2023 to 75,600 at the end of last year. The latest message from Börje Ekholm, Ericsson’s CEO, is that AI will help the company operate with an even smaller workforce in future. “We also see and expect big benefits from the use of AI, and that is one reason why we expect restructuring costs to remain elevated during the year,” he said on this week’s earnings call with analysts.
………………………………………………………………………………………………………………………………………………
Other Voices:
Light Reading’s Iain Morris wrote, “An erosion of brainpower and ceding of tasks to AI would entail a loss of control as people are taken out of the mix. If AI can substitute for a junior coder, as experts say it can, the entry-level job for programming will vanish with inevitable consequences for the entire profession. And as AI assumes responsibility for the jobs once done by humans, a shrinking pool of individuals will understand how networks function.”
“If you can’t understand how the AI is making that decision, and why it is making that decision, we could end up with scenarios where when something goes wrong, we simply just can’t understand it,” said Nik Willetts, the CEO of a standards group called the TM Forum, during a recent conversation with Light Reading. “It is a bit of an extreme to just assume no one understands how it works,” he added. “It is a risk, though.”
………………………………………………………………………………………………………………………………………………
References:
AI spending is surging; companies accelerate AI adoption, but job cuts loom large
Verizon and AT&T cut 5,100 more jobs with a combined 214,350 fewer employees than 2015
Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors
Nokia (like Ericsson) announces fresh wave of job cuts; Ericsson lays off 240 more in China
Deutsche Telekom exec: AI poses massive challenges for telecom industry
Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth
Ericsson’s second-quarter results were not impressive, with YoY organic sales growth of +2% for the company and +3% for its network division (its largest). Its $14 billion AT&T OpenRAN deal, announced in December 2023, helped lift the Swedish vendor’s share of the global RAN market by +1.4 percentage points in 2024 to 25.7%, according to new research from analyst company Omdia (owned by Informa). As a result of its AT&T contract, the U.S. accounted for a stunning 44% of Ericsson’s second-quarter sales, while the North American market delivered a 10% YoY increase in organic revenues to SEK19.8bn ($2.05bn). Sales dropped in all other regions of the world! The charts below depict that very well.
Ericsson’s attention is now shifting to a few core markets that Ekholm has identified as strategic priorities, among them the U.S., India, Japan and the UK. All, unsurprisingly, already make up Ericsson’s top five countries by sales, although their contribution minus the US came to just 15% of turnover for the recent second quarter. “We are already very strong in North America, but we can do more in India and Japan,” said Ekholm. “We see those as critically important for the long-term success.”
Opportunities: With telco investment in RAN equipment having declined by 12.5% (or $5 billion) last year, the Swedish equipment vendor has few other obvious growth opportunities. Ericsson’s Enterprise division, which is supposed to be the long-term driver of sales growth, is still very small: its second-quarter revenues stood at just SEK5.5bn ($570m), and even after currency effects are taken into account, its sales shrank by 6% YoY.
On Tuesday’s earnings call, Ericsson CEO Börje Ekholm said that the RAN equipment sector, while currently stable, isn’t offering any prospects of exciting near-term growth. For longer-term growth the industry needs “new monetization opportunities.” Those could come from the ongoing modest growth in 5G-enabled fixed wireless access (FWA) deployments, from 5G standalone (SA) deployments that enable mobile network operators to offer “differentiated solutions,” and from network APIs (that ultra-hyped market is not yet generating meaningful revenues for anyone).
Cost Cutting Continues: Ericsson also has continued to be aggressive about cost reduction, eliminating thousands of jobs since it completed its Vonage takeover. “Over the last year, we have reduced our total number of employees by about 6% or 6,000,” said Ekholm on his routine call with analysts about financial results. “We also see and expect big benefits from the use of AI and that is one reason why we expect restructuring costs to remain elevated during the year.”
Use of AI: Ericsson sees AI as an opportunity to enable network automation and new industry revenue opportunities. The company is now using AI as an aid in network design – a move that could have negative ramifications for staff involved in research and development. Ericsson is already using AI for coding and “other parts of internal operations to drive efficiency… We see some benefits now. And it’s going to impact how the network is operated – think of fully autonomous, intent-based networks that will require AI as a fundamental component. That’s one of the reasons why we invested in an AI factory,” noted the CEO, referencing the consortium-based investment in a Swedish AI Factory that was announced in late May. At the time, Ericsson noted that it planned to “leverage its data science expertise to develop and deploy state-of-the-art AI models – improving performance and efficiency and enhancing customer experience.”
Ericsson is also building AI capability into the products sold to customers. “I usually use the example of link adaptation,” said Per Narvinger, the head of Ericsson’s mobile networks business group, on a call with Light Reading, referring to what he says is probably one of the most optimized algorithms in telecom. “That’s how much you get out of the spectrum, and when we have rewritten link adaptation, and used AI functionality on an AI model, we see we can get a gain of 10%.”
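Link adaptation maps measured channel quality (typically SINR) to the most aggressive modulation-and-coding scheme (MCS) the link can sustain. The sketch below illustrates the general idea Narvinger describes: a predictor smooths noisy SINR reports so the scheduler can pick a higher-rate MCS than a worst-case rule would. The thresholds, table, and exponentially weighted "model" are illustrative assumptions for this sketch, not Ericsson's actual algorithm (which the company has not published).

```python
# Illustrative link-adaptation sketch: pick the highest-rate MCS whose SINR
# threshold is met. A learned predictor (here a stand-in moving average)
# smooths noisy SINR reports before the MCS decision.
# All thresholds and values are hypothetical, for illustration only.

MCS_TABLE = [  # (required SINR in dB, spectral efficiency in bits/s/Hz)
    (0.0, 0.5), (5.0, 1.0), (10.0, 2.0), (15.0, 3.5), (20.0, 5.0),
]

def select_mcs(predicted_sinr_db: float) -> float:
    """Return the spectral efficiency of the best MCS supported at this SINR."""
    best = MCS_TABLE[0][1]
    for threshold, efficiency in MCS_TABLE:
        if predicted_sinr_db >= threshold:
            best = efficiency
    return best

def predict_sinr(history: list[float]) -> float:
    """Stand-in for an AI model: exponentially weighted average of reports."""
    estimate = history[0]
    for sample in history[1:]:
        estimate = 0.7 * estimate + 0.3 * sample
    return estimate

reports = [14.2, 15.1, 16.0, 15.6, 15.9]        # noisy SINR reports (dB)
efficiency = select_mcs(predict_sinr(reports))  # bits/s/Hz actually scheduled
print(efficiency)
```

A smarter predictor that tracks the channel more tightly lets the scheduler sit closer to the true channel capacity, which is the kind of single-digit-percentage spectral-efficiency gain the quote refers to.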
Ericsson hopes that AI will boost consumer and business demand for 5G connectivity. New form factors such as smart glasses and AR headsets will need lower-latency connections with improved support for the uplink, it has repeatedly argued. But analysts are skeptical, and Ericsson itself thinks Europe is ill-equipped for more advanced 5G services.
“We’re still very early in AI, in [understanding] how applications are going to start running, but I think it’s going to be a key driver of our business going forward, both on traffic, on the way we operate networks, and the way we run Ericsson,” Ekholm said.
Europe Disappoints: In much of Europe, Ericsson and Nokia have been frustrated by some government and telco unwillingness to adopt the European Union’s “5G toolbox” recommendations and evict Chinese vendors. “I think what we have seen in terms of implementation is quite varied, to be honest,” said Narvinger. Rather than banning Huawei outright, Germany’s government has introduced legislation that allows operators to use most of its RAN products if they find a substitute for part of Huawei’s management system by 2029. Opponents have criticized that move, arguing it does not address the security threat posed by Huawei’s RAN software. Nevertheless, Ericsson clearly eyes an opportunity to serve European demand for military communications, an area where the use of Chinese vendors would be unthinkable.
“It is realistic to say that a large part of the increased defense spending in Europe will most likely be allocated to connectivity because that is a critical part of a modern defense force,” said Ekholm. “I think this is a very good opportunity for western vendors because it would be far-fetched to think they will go with high-risk vendors.” Ericsson is also targeting related demand for mission-critical services needed by first responders.
5G SA and Mobile Core Networks: Ekholm noted that 5G SA deployments are still few and far between – only a quarter of mobile operators have any kind of 5G SA deployment in place right now, with the most notable being in the US, India and China. “Two things need to happen,” for greater 5G SA uptake, stated the CEO.
- “One is mid-band [spectrum] coverage… there’s still very low build out coverage in, for example, Europe, where it’s probably less than half the population covered… Europe is clearly behind on that” compared with the U.S., China and India.
- “The second is that [network operators] need to upgrade their mobile core [platforms]... Those two things will have to happen to take full advantage of the capabilities of the [5G] network,” noted Ekholm, who said the arrival of new devices, such as AI glasses, that require ultra low latency connections and “very high uplink performance” is starting to drive interest. “We’re also seeing a lot of network slicing opportunities,” he added, to deliver dedicated network resources to, for example, police forces, sports and entertainment stadiums “to guarantee uplink streams… consumers are willing to pay for these things. So I’m rather encouraged by the service innovation that’s starting to happen on 5G SA and… that’s going to drive the need for more radio coverage [for] mid-band and for core [systems].”
Ericsson’s Summary – Looking Ahead:
- Continue to strengthen competitive position
- Strong customer engagement for differentiated connectivity
- New use cases to monetize network investments taking shape
- Expect RAN market to remain broadly stable
- Structurally improving the business through rigorous cost management
- Continue to invest in technology leadership
………………………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.telecomtv.com/content/5g/ericsson-ceo-waxes-lyrical-on-potential-of-5g-sa-ai-53441/
https://www.lightreading.com/5g/ericsson-targets-big-huawei-free-places-ai-and-nato-as-profits-soar
Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation
Agentic AI and the Future of Communications for Autonomous Vehicle (V2X)
by Prashant Vajpayee (bio below), edited by Alan J Weissberger
Abstract:
Autonomous vehicles increasingly depend on Vehicle-to-Everything (V2X) communications, but 5G networks face challenges such as latency, coverage gaps, high infrastructure costs, and security risks. To overcome these limitations, this article explores alternative protocols like DSRC, VANETs, ISAC, PLC, and Federated Learning, which offer decentralized, low-latency communication solutions.
Of critical importance for this approach is Agentic AI—a distributed intelligence model based on the Observe, Orient, Decide, and Act (OODA) loop—that enhances adaptability, collaboration, and security across the V2X stack. Together, these technologies lay the groundwork for a resilient, scalable, and secure next-generation Intelligent Transportation System (ITS).
Problems with 5G for V2X Communications:
There are several problems with using 5G for V2X communications, which is why the 5G NR (New Radio) V2X specification, developed by the 3rd Generation Partnership Project (3GPP) in Release 16, hasn’t been widely implemented. Here are a few of them:
- Variable latency: Although 5G promises sub-millisecond latency, realistic deployments often exhibit 10 to 50 milliseconds of delay, especially when the V2X server is hosted in a cloud environment. Multi-hop routing, network slicing, and handover delays add further latency. This makes 5G unsuitable for ultra-reliable low-latency communication (URLLC) in critical scenarios [1, 2].
- Coverage Gaps & Handover Issues: 5G network availability is a problem in rural and remote areas. Furthermore, for fast-moving vehicles, switching between 5G cells can cause communication delays and connectivity failures [3, 4].
- Infrastructure and Cost Constraints: Full 5G deployment requires dense small-cell infrastructure, which is costly and logistically complex, especially in developing regions and along highways.
- Spectrum Congestion and Interference: In shared-spectrum scenarios, other services can interfere with 5G transmissions, degrading V2X reliability.
- Security and Trust Issues: The centralized nature of 5G architectures leaves them vulnerable to single points of failure, a serious cybersecurity risk for autonomous systems.
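To see why tens of milliseconds of cloud-hosted delay disqualifies 5G from URLLC duty, it helps to sum an end-to-end latency budget against a safety deadline. The component values below are illustrative assumptions chosen to fall within the 10–50 ms range cited above; they are not measurements:

```python
# Illustrative end-to-end latency budget for a cloud-hosted V2X safety message.
# All component values are assumptions, consistent with the 10-50 ms range
# cited in the text, not measured figures.
budget_ms = {
    "radio_uplink": 4.0,        # NR air interface: uplink grant + transmission
    "core_and_slicing": 6.0,    # core traversal, slice selection
    "backhaul_to_cloud": 12.0,  # transport to a regional cloud V2X server
    "server_processing": 5.0,   # message validation + decision logic
    "radio_downlink": 3.0,      # delivery to nearby vehicles
}

total_ms = sum(budget_ms.values())
URLLC_TARGET_MS = 10.0  # a commonly cited URLLC one-way latency target

print(f"total: {total_ms} ms vs URLLC target: {URLLC_TARGET_MS} ms")
print("meets target" if total_ms <= URLLC_TARGET_MS else "misses target")
```

Even with optimistic per-hop numbers, the backhaul and server-processing legs alone exceed a 10 ms budget, which is why the alternatives below push computation toward the vehicle and roadside edge.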
Alternative Communications Protocols as a Solution for V2X (when integrated with Agentic AI):
The following list of alternative protocols offers a potential remedy for the above 5G shortcomings when integrated with Agentic AI.
While these alternatives reduce dependency on centralized infrastructure and provide greater fault tolerance, they also introduce complexity. As autonomous vehicles (AVs) become increasingly prevalent, Vehicle-to-Everything (V2X) communication is emerging as the digital nervous system of intelligent transportation systems. Given the deployment and reliability challenges associated with 5G, the industry is shifting toward alternative networking solutions—where Agentic AI is being introduced as a cognitive layer that renders these ecosystems adaptive, secure, and resilient.
The following use cases show how Agentic AI can bring efficiency:
- Cognitive Autonomy: Each vehicle or roadside unit (RSU) operates an AI agent capable of observing, orienting, deciding, and acting (OODA) without continuous reliance on cloud supervision. This autonomy enables real-time decision-making for scenarios such as rerouting, merging, or hazard avoidance—even in disconnected environments [12].
- Multi-Agent Collaboration: AI agents negotiate and coordinate with one another using standardized protocols (e.g., MCP, A2A), enabling guidance on optimal vehicle spacing, intersection management, and dynamic traffic control—without the need for centralized orchestration [13].
- Embedded Security Intelligence: While multiple agents collaborate, dedicated security agents monitor system activities for anomalies, enforce access control policies, and quarantine threats at the edge. As Forbes notes, “Agentic AI demands agentic security,” emphasizing the importance of embedding trust and resilience into every decision node [14].
- Protocol-Agnostic Adaptability: Agentic AI can dynamically switch among various communication protocols—including DSRC, VANETs, ISAC, or PLC—based on real-time evaluations of signal quality, latency, and network congestion. Agents equipped with cognitive capabilities enhance system robustness against 5G performance limitations or outages.
- Federated Learning and Self-Improvement: Vehicles independently train machine learning models locally and transmit only model updates—preserving data privacy, minimizing bandwidth usage, and improving processing efficiency.
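The OODA-driven, protocol-agnostic behavior in the use cases above can be sketched as a simple agent loop: observe per-link metrics, orient by scoring each candidate protocol, decide on the best link, act by switching. The metric values, the scoring weights, and the protocol set are hypothetical; a real agent would measure and learn these online:

```python
# Minimal OODA (Observe-Orient-Decide-Act) sketch of a protocol-agnostic
# V2X agent choosing among candidate links. Metrics and weights are
# hypothetical stand-ins for live measurements and learned policies.

def observe() -> dict[str, dict[str, float]]:
    """Observe: gather current per-protocol link metrics (stubbed here)."""
    return {
        "5G":    {"latency_ms": 35.0, "loss": 0.01},
        "DSRC":  {"latency_ms": 8.0,  "loss": 0.03},
        "VANET": {"latency_ms": 15.0, "loss": 0.05},
    }

def orient(metrics: dict) -> dict[str, float]:
    """Orient: score each protocol; lower cost is better."""
    return {proto: m["latency_ms"] + 200.0 * m["loss"]
            for proto, m in metrics.items()}

def decide(scores: dict[str, float]) -> str:
    """Decide: choose the lowest-cost protocol."""
    return min(scores, key=scores.get)

def act(protocol: str) -> str:
    """Act: switch the V2X stack to the chosen link (stubbed)."""
    return f"switched to {protocol}"

scores = orient(observe())
print(act(decide(scores)))  # DSRC wins: 8 + 200*0.03 = 14, vs 37 for 5G
```

Run continuously, this loop is what lets an agent ride out a 5G outage by falling back to DSRC or a VANET link without central coordination.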
The figure below illustrates the proposed architectural framework for secure Agentic AI enablement within V2X communications, leveraging alternative communication protocols and the OODA (Observe–Orient–Decide–Act) cognitive model.
Conclusions:
With the integration of an intelligent Agentic AI layer into V2X systems, autonomous, adaptive, and efficient decision-making emerges from the seamless collaboration of distributed intelligent components.
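The federated-learning pattern referenced earlier (vehicles train locally and share only model updates, never raw data) reduces, in its simplest form, to federated averaging of parameter vectors. A minimal sketch with a toy one-parameter model, in which the data, model (y = w·x), and learning rate are all illustrative assumptions:

```python
# Minimal federated-averaging sketch: each vehicle takes one local
# gradient step on its own private data and uploads only the updated
# weight; a server averages the uploads. Data and model are toy choices.

def local_update(w: float, data: list[tuple[float, float]], lr: float = 0.1) -> float:
    """One gradient-descent step on squared error for the model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(weights: list[float]) -> float:
    """Server aggregates client updates by simple averaging."""
    return sum(weights) / len(weights)

global_w = 0.0
vehicle_data = [
    [(1.0, 2.1), (2.0, 3.9)],  # vehicle A's private samples (x, y)
    [(1.0, 1.9), (3.0, 6.2)],  # vehicle B's private samples
]

for _ in range(20):  # communication rounds
    updates = [local_update(global_w, d) for d in vehicle_data]
    global_w = federated_average(updates)

print(round(global_w, 2))  # converges near the shared slope of ~2
```

Only the scalar weight crosses the network each round, which is the privacy and bandwidth advantage the V2X literature cites for this approach.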
Numerous examples highlight the potential of Agentic AI to deliver significant business value.
- TechCrunch reports that Amazon’s R&D division is actively developing an Agentic AI framework to automate warehouse operations through robotics [15]. A similar architecture can be extended to autonomous vehicles (AVs) to enhance both communication and cybersecurity capabilities.
- Forbes emphasizes that “Agentic AI demands agentic security,” underscoring the need for every action—whether executed by human or machine—to undergo rigorous review and validation from a security perspective [16]. Forbes notes, “Agentic AI represents the next evolution in AI—a major transition from traditional models that simply respond to human prompts.” By combining Agentic AI with alternative networking protocols, robust V2X ecosystems can be developed—capable of maintaining resilience despite connectivity losses or infrastructure gaps, enforcing strong cyber defense, and exhibiting intelligence that learns, adapts, and acts autonomously [19].
- Business Insider highlights the scalability of Agentic AI, referencing how Qualtrics has implemented continuous feedback loops to retrain its AI agents dynamically [17]. This feedback-driven approach is equally applicable in the mobility domain, where it can support real-time coordination, dynamic rerouting, and adaptive decision-making.
- Multi-agent systems are also advancing rapidly. As Amazon outlines its vision for deploying “multi-talented assistants” capable of operating independently in complex environments, the trajectory of Agentic AI becomes even more evident [18].
References:
- Coll-Perales, B., Lucas-Estañ, M. C., Shimizu, T., Gozalvez, J., Higuchi, T., Avedisov, S., … & Sepulcre, M. (2022). End-to-end V2X latency modeling and analysis in 5G networks. IEEE Transactions on Vehicular Technology, 72(4), 5094-5109.
- Horta, J., Siller, M., & Villarreal-Reyes, S. (2025). Cross-layer latency analysis for 5G NR in V2X communications. PloS one, 20(1), e0313772.
- Cellular V2X Communications Towards 5G- Available at “pdf”
- Al Harthi, F. R. A., Touzene, A., Alzidi, N., & Al Salti, F. (2025, July). Intelligent Handover Decision-Making for Vehicle-to-Everything (V2X) 5G Networks. In Telecom (Vol. 6, No. 3, p. 47). MDPI.
- DSRC Safety Modem, Available at- “https://www.nxp.com/products/wireless-connectivity/dsrc-safety-modem:DSRC-MODEM”
- VANETs and V2X Communication, Available at- “https://www.sanfoundry.com/vanets-and-v2x-communication/#“
- Yu, K., Feng, Z., Li, D., & Yu, J. (2023). Secure-ISAC: Secure V2X communication: An integrated sensing and communication perspective. arXiv preprint arXiv:2312.01720.
- Study on integrated sensing and communication (ISAC) for C-V2X application, Available at- “https://5gaa.org/content/uploads/2025/05/wi-isac-i-tr-v.1.0-may-2025.pdf“
- Ramasamy, D. (2023). Possible hardware architectures for power line communication in automotive v2g applications. Journal of The Institution of Engineers (India): Series B, 104(3), 813-819.
- Xu, K., Zhou, S., & Li, G. Y. (2024). Federated reinforcement learning for resource allocation in V2X networks. IEEE Journal of Selected Topics in Signal Processing.
- Asad, M., Shaukat, S., Nakazato, J., Javanmardi, E., & Tsukada, M. (2025). Federated learning for secure and efficient vehicular communications in open RAN. Cluster Computing, 28(3), 1-12.
- Bryant, D. J. (2006). Rethinking OODA: Toward a modern cognitive framework of command decision making. Military Psychology, 18(3), 183-206.
- Agentic AI Communication Protocols: The Backbone of Autonomous Multi-Agent Systems, Available at- “https://datasciencedojo.com/blog/agentic-ai-communication-protocols/”
- Agentic AI And The Future Of Communications Networks, Available at- “https://www.forbes.com/councils/forbestechcouncil/2025/05/27/agentic-ai-and-the-future-of-communications-networks/”
- Amazon launches new R&D group focused on agentic AI and robotics, Available at- “Amazon launches new R&D group focused on agentic AI and robotics”
- Securing Identities For The Agentic AI Landscape, Available at “https://www.forbes.com/councils/forbestechcouncil/2025/07/03/securing-identities-for-the-agentic-ai-landscape/”
- Qualtrics’ president of product has a vision for agentic AI in the workplace: ‘We’re going to operate in a multiagent world’, Available at- “https://www.businessinsider.com/agentic-ai-improve-qualtrics-company-customer-communication-data-collection-2025-5”
- Amazon’s R&D lab forms new agentic AI group, Available at- “https://www.cnbc.com/2025/06/04/amazons-rd-lab-forms-new-agentic-ai-group.html”
- Agentic AI: The Next Frontier In Autonomous Work, Available at- “https://www.forbes.com/councils/forbestechcouncil/2025/06/27/agentic-ai-the-next-frontier-in-autonomous-work/”
About the Author:
Prashant Vajpayee is a Senior Product Manager and researcher in AI and cybersecurity, with expertise in enterprise data integration, cyber risk modeling, and intelligent transportation systems. With a foundation in strategic leadership and innovation, he has led transformative initiatives at Salesforce and advanced research focused on cyber risk quantification and resilience across critical infrastructure, including Transportation 5.0 and global supply chain. His work empowers organizations to implement secure, scalable, and ethically grounded digital ecosystems. Through his writing, Prashant seeks to demystify complex cybersecurity as well as AI challenges and share actionable insights with technologists, researchers, and industry leaders.
Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined
AI RAN [1.] is projected to account for approximately a third of the RAN market by 2029, according to a recent AI RAN Advanced Research Report published by the Dell’Oro Group. In the near term, the focus within the AI RAN segment will center on Distributed-RAN (D-RAN), single-purpose deployments, and 5G.
“Near-term priorities are more about efficiency gains than new revenue streams,” said Stefan Pongratz, Vice President at Dell’Oro Group. “There is strong consensus that AI RAN can improve the user experience, enhance performance, reduce power consumption, and play a critical role in the broader automation journey. Unsurprisingly, however, there is greater skepticism about AI’s ability to reverse the flat revenue trajectory that has defined operators throughout the 4G and 5G cycles,” continued Pongratz.
Note 1. AI RAN integrates AI and machine learning (ML) across various aspects of the RAN domain. The AI RAN scope in this report is aligned with the greater industry vision. While the broader AI RAN vision includes services and infrastructure, the projections in this report focus on the RAN equipment market.
Additional highlights from the July 2025 AI RAN Advanced Research Report:
- The base case is built on the assumption that AI RAN is not a growth vehicle. But it is a crucial technology/tool for operators to adopt. Over time, operators will incorporate more virtualization, intelligence, automation, and O-RAN into their RAN roadmaps.
- This initial AI RAN report forecasts the AI RAN market based on location, tenancy, technology, and region.
- The existing RAN radio and baseband suppliers are well-positioned in the initial AI-RAN phase, driven primarily by AI-for-RAN upgrades leveraging the existing hardware. Per Dell’Oro Group’s regular RAN coverage, the top 5 RAN suppliers contributed around 95 percent of the 2024 RAN revenue.
- AI RAN is projected to account for around a third of total RAN revenue by 2029.
In the first quarter of 2025, Dell’Oro said the top five RAN suppliers based on revenues outside of China are Ericsson, Nokia, Huawei, Samsung and ZTE. In terms of worldwide revenue, the ranking changes to Huawei, Ericsson, Nokia, ZTE and Samsung.
About the Report: Dell’Oro Group’s AI RAN Advanced Research Report includes a 5-year forecast for AI RAN by location, tenancy, technology, and region. Contact: [email protected]
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Author’s Note: Nvidia’s Aerial Research portfolio already contains a host of AI-powered tools designed to augment wireless network simulations. It is also collaborating with T-Mobile and Cisco to develop AI RAN solutions to support future 6G applications. The GPU king is also working with some of those top five RAN suppliers, Nokia and Ericsson, on an AI-RAN Innovation Center. Unveiled last October, the project aims to bring together cloud-based RAN and AI development and push beyond applications that focus solely on improving efficiencies.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
The one-year-old AI-RAN Alliance has now increased its membership to over 100, up from around 84 in May. However, there are not many telco members, with only Vodafone joining since May. The other telco members are: Turkcell, Boost Mobile, Globe, Indosat Ooredoo Hutchison (Indonesia), Korea Telecom, LG Uplus, SK Telecom, T-Mobile US and SoftBank. This limited telco presence could reflect ongoing skepticism about the goals of AI-RAN, including hopes for new revenue opportunities through network slicing, as well as hosting and monetizing enterprise AI workloads at the edge.
Francisco Martín Pignatelli, head of open RAN at Vodafone, hardly sounded enthusiastic in his statement in the AI-RAN Alliance press release. “Vodafone is committed to using AI to optimize and enhance the performance of our radio access networks. Running AI and RAN workloads on shared infrastructure boosts efficiency, while integrating AI and generative applications over RAN enables new real-time capabilities at the network edge,” he added.
Perhaps the most popular AI RAN scenario is “AI on RAN,” which runs AI services on RAN infrastructure at the network edge in a bid to support and benefit from new services, such as AI inferencing.
“We are thrilled by the extraordinary growth of the AI-RAN Alliance,” said Alex Jinsung Choi, Chair of the AI-RAN Alliance and Principal Fellow at SoftBank Corp.’s Research Institute of Advanced Technology. “This milestone underscores the global momentum behind advancing AI for RAN, AI and RAN, and AI on RAN. Our members are pioneering how artificial intelligence can be deeply embedded into radio access networks — from foundational research to real-world deployment — to create intelligent, adaptive, and efficient wireless systems.”
Choi recently suggested that now is the time to “revisit all our value propositions and then think about what should be changed or what should be built” to be able to address issues including market saturation and the “decoupling” between revenue growth and rising TCO. He also cited self-driving vehicles and mobile robots, where low latency is critical, as specific use cases where AI-RAN will be useful for running enterprise workloads.
About the AI-RAN Alliance:
The AI-RAN Alliance is a global consortium accelerating the integration of artificial intelligence into Radio Access Networks. Established in 2024, the Alliance unites leading companies, researchers, and technologists to advance open, practical approaches for building AI-native wireless networks. The Alliance focuses on enabling experimentation, sharing knowledge, and real-world performance to support the next generation of mobile infrastructure. For more information, visit: https://ai-ran.org
References:
https://www.delloro.com/advanced-research-report/ai-ran/
https://www.delloro.com/news/ai-ran-to-top-10-billion-by-2029/
Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum
AI RAN Alliance selects Alex Choi as Chairman
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
The case for and against AI-RAN technology using Nvidia or AMD GPUs
ZTE’s AI infrastructure and AI-powered terminals revealed at MWC Shanghai
ZTE Corporation unveiled a full range of AI initiatives under the theme “Catalyzing Intelligent Innovation” at MWC Shanghai 2025. Those innovations include AI + networks, AI applications, and AI-powered terminals. During several demonstrations, ZTE showcased its key advancements in AI phones and smart homes. Leveraging its underlying capabilities, the company is committed to providing full-stack solutions—from infrastructure to application ecosystems—for operators, enterprises, and consumers, co-creating an era of AI for all.
ZTE’s Chief Development Officer Cui Li outlined the vendor’s roadmap for building intelligent infrastructure and accelerating artificial intelligence (AI) adoption across industries during a keynote session at MWC Shanghai 2025. During her speech, Cui highlighted the growing influence of large AI models and the critical role of foundational infrastructure. “No matter how AI technology evolves in the future, the focus will remain on efficient infrastructure, optimized algorithms and practical applications,” she said. The Chinese vendor is deploying modular, prefabricated data center units and AI-based power management, which she said reduce energy use and cooling loads by more than 10%. These developments are aimed at delivering flexible, sustainable capacity to meet growing AI demands, the ZTE executive said.
ZTE is also advancing “AI-native” networks that shift from traditional architectures to heterogeneous computing platforms, with embedded AI capabilities. This, Cui said, marks a shift from AI as a support tool to autonomous agents shaping operations. Ms. Cui emphasized the role of high-quality, secure data and efficient algorithms in building more capable AI. “Data is like fertile ‘soil’. Its volume, purity and security decide how well AI as a plant can grow,” she said. “Every digital application — including AI — depends on efficient and green infrastructure,” she said.
ZTE is heavily investing in AI-native network architecture and high-efficiency computing:
- AI-native networks – ZTE is redesigning telecom infrastructure with embedded intelligence, modular data centers and AI-driven energy systems to meet escalating AI compute demands.
- Smarter models, better data – With advanced training methods and tools, ZTE is pushing the boundaries of model accuracy and real-world performance.
- Edge-to-core deployment – ZTE is integrating AI across consumer, home and industry use cases, delivering over 100 applied solutions across 18 verticals under its “AI for All” strategy.
ZTE has rolled out a full range of innovative solutions for network intelligence upgrades.
- AIR RAN solution: deeply integrates AI to improve energy efficiency, maintenance efficiency, and user experience, driving 5G’s transition towards value creation
- AIR Net solution: a high-level autonomous network solution encompassing three engines to advance network operations towards “Agentic Operations”
- AI-optical campus solution: addresses network pain points in various scenarios for higher operational efficiency in cities
- HI-NET solution: a high-performance, highly intelligent transport network solution enabling “terminal-edge-network-computing” synergy, with multiple groundbreaking innovations including the industry’s first integrated sensing-communication-computing CPE, full-band OTNs, the highest-density 800G intelligent switches, and the world’s leading AI-native routers
Through technological innovations in wireless and wired networks, ZTE is building an energy-efficient, wide-coverage, and intelligent network infrastructure that meets current business needs and lays the groundwork for future AI-driven applications, positioning operators as first movers in digital transformation.
In the home terminal market, ZTE AI Home establishes a family-centric vDC and employs MoE-based AI agents to deliver personalized services for each household member. Supported by an AI network, home-based computing power, AI screens, and AI companion robots, ZTE AI Home ensures a seamless and engaging experience—providing 24/7 all-around, warm-hearted care for every family member. The product highlights include:
- AI FTTR: Serving as a thoughtful life assistant, it is equipped with a household knowledge base to proactively understand and optimize daily routines for every family member.
- AI Wi-Fi 7: Featuring the industry’s first omnidirectional antenna and smart roaming solution, it ensures high-speed and stable connectivity.
- Smart display: Acting like an exclusive personal trainer, it leverages precise semantic parsing technology to tailor personalized services for users.
- AI flexible screen & cloud PC: Multi-screen interactions cater to diverse needs for home entertainment and mobile office, creating a new paradigm for smart homes.
- AI companion robot: Backed by smart emotion recognition and bionic interaction systems, the robot safeguards children’s healthy growth with emotionally intelligent connections.
ZTE will anchor its product strategy on “Connectivity + Computing.” Collaborating with industry partners, the company is committed to driving industrial transformation, and achieving computing and AI for all, thereby contributing to a smarter, more connected world.
References: