Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
An increasing focus on deploying AI into radio access networks (RANs) was among the key findings of Nvidia's third annual "State of AI in Telecommunications" survey, which polled more than 450 telecom professionals worldwide. More than a third of respondents indicated they're investing or planning to invest in AI-RAN. The survey revealed continued momentum for AI adoption, including growth in generative AI use cases, and showed how the technology is helping operators optimize customer experiences and increase employee productivity. The percentage of network operators planning to use open source tools rose from 28% in 2023 to 40% in 2025. AvidThink Founder and Principal Roy Chua said one of the biggest challenges network operators will face when using open source models is vetting the outputs they get during training.
Of the telecommunications professionals surveyed, almost all stated that their company is actively deploying or assessing AI projects. Here are some top insights on impact and use cases:
- 84% said AI is helping to increase their company’s annual revenue
- 77% said AI helped reduce annual operating costs
- 60% said increased employee productivity was their biggest benefit from AI
- 44% said they're investing in AI for customer experience optimization, making it the No. 1 area of AI investment in telecommunications
- 40% said they’re deploying AI into their network planning and operations, including RAN
The percentage of respondents who indicated they will build AI solutions in-house rose from 27% in 2024 to 37% this year. "Telcos are really looking to do more of this work themselves," Nvidia's Global Head of Business Development for Telco Chris Penrose [1] said. "They're seeing the importance of them taking control and ownership of becoming an AI center of excellence, of doing more of the training of their own resources."
With respect to AI inferencing, Penrose said, "We've got 14 publicly announced telcos that are doing this today, and we've got an equally big funnel." He noted that the AI skills gap remains the biggest hurdle for operators: being an AI scientist doesn't necessarily make someone a generative AI or agentic AI specialist. And to attract the right talent, operators need to show they have the infrastructure, such as GPUs and data center facilities, that lets top-tier employees do their best work.
Note 1. Penrose represented AT&T’s IoT business for years at various industry trade shows and events before leaving the company in 2020.
Rather than in the large data centers that process AI large language models (LLMs), AI inferencing could be done more quickly at smaller "edge" facilities that are closer to end users. That's where telecom operators might step in. "Telcos are in a unique position," Penrose told Light Reading. He explained that many countries want to ensure their AI data and operations remain within their own borders, so telcos can be "the trusted providers of [AI] infrastructure in their nations."
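To make the latency argument concrete, here is a rough back-of-the-envelope sketch in Python. The distances, the ~5 µs/km fiber propagation figure and the 30 ms model execution time are illustrative assumptions, not measurements from any operator or vendor.

```python
# Rough, illustrative latency budget for an AI inference request.
# All numbers are assumptions for the sake of the sketch.

FIBER_DELAY_US_PER_KM = 5.0   # ~5 microseconds per km, one way, in optical fiber

def round_trip_propagation_ms(distance_km: float) -> float:
    """One-way fiber propagation delay, doubled for the round trip, in ms."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000.0

# Hypothetical scenarios: a distant hyperscale region vs. a metro edge site.
scenarios = {
    "regional cloud (~1,500 km away)": 1500,
    "metro edge site (~50 km away)": 50,
}

GPU_INFERENCE_MS = 30.0  # assumed model execution time, identical in both cases

for name, km in scenarios.items():
    network_ms = round_trip_propagation_ms(km)
    total_ms = network_ms + GPU_INFERENCE_MS
    print(f"{name}: network {network_ms:.1f} ms + compute {GPU_INFERENCE_MS:.0f} ms = {total_ms:.1f} ms")
```

The sketch's takeaway: moving compute closer to users only pays off when the model's own execution time is small relative to the network round trip, which is essentially the reservation analyst Dean Bubley raises further down.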
“We’ll call it AI RAN-ready infrastructure. You can make money on it today. You can use it for your own operations. You can use it to go drive some services into the market. … Ultimately your network itself becomes a key anchor workload,” Penrose said.
Nvidia proposes that network operators can not only run their own AI workloads on Nvidia GPUs but also sell those inferencing services to third parties at a profit. "We've got lots of indications that many [telcos] are having success, and have not only deployed their first [AI compute] clusters, but are making reinvestments to deploy additional compute in their markets," Penrose added.
Nvidia specifically pointed to AI inferencing announcements by Singtel, Swisscom, Telenor, Indosat and SoftBank.
Other vendors are hoping for similar sales. “I think this vision of edge computing becoming AI inferencing at the end of the network is massive for us,” HPE boss Antonio Neri said last year, in discussing HPE’s $14 billion bid for Juniper Networks.
That comes after multi-access edge computing (MEC) has failed to live up to its potential, partly because it requires a 5G standalone (SA) core network and few of those have been commercially deployed. Disillusionment with edge computing is evident among hyperscalers and network operators alike. For example, Cox folded its edge computing business into its private networks operation, AT&T no longer discusses the edge computing locations it was building with Microsoft and Google, and Verizon has admitted to edge computing "miscalculations."
Will AI inferencing be the savior for MEC? The jury is still out. However, Nvidia said that 40% of its revenues already come from AI inferencing. Presumably that inferencing is happening in larger data centers, with results delivered to users over existing networks. In other words, a significant amount of inferencing is being done today without the additional facilities, distributed at the network's edge, that could enable speedier, low-latency AI services.
"The idea that AI inferencing is going to be all about low-latency connections, and hence stuff like AI RAN and MEC and assorted other edge computing concepts, doesn't seem to be a really good fit with the current main direction of AI applications and models," argued Disruptive Wireless analyst Dean Bubley in a LinkedIn post.
References:
https://blogs.nvidia.com/blog/ai-telcos-survey-2025/
State of AI in Telecommunications
https://www.fierce-network.com/premium/whitepaper/edge-computing-powered-global-ai-inference
https://www.fierce-network.com/cloud/are-ai-services-telcos-magic-revenue-bullet
Another telco has joined the AI-RAN club. Nokia highlighted its AI-RAN advances with T-Mobile US and KDDI on the eve of the MWC25 show and now the Finnish vendor has added Indosat Ooredoo Hutchison (IOH) to that list. The Indonesian network operator has announced it is working with Nokia and Nvidia to deploy what it calls “a unified accelerated computing infrastructure for hosting both AI and RAN workloads.”
The three companies have agreed to develop, test and deploy an AI-RAN solution with an initial focus on managing AI inference workloads using Nvidia’s AI Aerial system and then, later on, to integrate radio access network (RAN) workloads on the same platform.
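As a purely conceptual illustration of what hosting RAN and AI workloads on shared accelerated infrastructure could look like, the short Python sketch below time-shares a GPU pool between latency-critical RAN jobs and best-effort inference jobs. The class and job names are hypothetical; they are not Nvidia AI Aerial APIs and do not describe IOH's actual design.

```python
# Conceptual sketch: a shared GPU pool that always dispatches RAN signal
# processing ahead of best-effort AI inference jobs. Names are hypothetical.

from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Job:
    priority: int                         # 0 = RAN (latency-critical), 1 = AI inference
    name: str = field(compare=False)
    gpu_ms: float = field(compare=False)  # estimated GPU time needed

class SharedGpuPool:
    def __init__(self) -> None:
        self._queue: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self._queue, job)

    def run(self) -> None:
        # Drain the queue: RAN jobs always run before inference jobs.
        while self._queue:
            job = heapq.heappop(self._queue)
            kind = "RAN" if job.priority == 0 else "AI inference"
            print(f"running {kind} job '{job.name}' ({job.gpu_ms} ms of GPU time)")

pool = SharedGpuPool()
pool.submit(Job(1, "chatbot-token-generation", 40.0))
pool.submit(Job(0, "uplink-channel-estimation", 0.5))
pool.submit(Job(1, "video-analytics-frame", 15.0))
pool.run()  # the RAN job is dispatched first, then the two inference jobs
```

The design choice captured here, strict priority for RAN processing with inference soaking up the remaining GPU cycles, is one way to think about monetizing spare RAN compute capacity without putting the radio workload at risk.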
To support their efforts, IOH, Nokia and Nvidia will collaborate with leading Indonesian universities and research institutions to drive AI-RAN development.
“This collaboration will support academic programs to foster AI innovation in telecom applications, and provide hands-on opportunities for students and researchers to contribute to next-generation AI-powered networks. By engaging with academia, the companies aim to accelerate breakthroughs in AI-driven network optimization, spectral efficiency and energy consumption,” noted IOH in this announcement.
https://www.telecomtv.com/content/telcos-and-ai-channel/what-s-up-with-ioh-ai-ran-telef-nica-airspan-52501/
Nvidia could succeed where open RAN has mostly failed. In the early days of the O-RAN Alliance, the technology was heralded as a way for operators to break big vendors’ lock on expensive RAN components. But today most agree that open RAN hasn’t done much to upend the global RAN order – Ericsson, Nokia, Huawei and Samsung still sit at the top of the market.
“The concept of open and interoperable interfaces will live on in some form of incarnation, but the original vision is no longer viable,” wrote Chetan Sharma, an independent analyst, on social media.
https://www.lightreading.com/ai-machine-learning/nvidia-the-vendor-that-must-not-be-named
In the telco realm, some operators are using AI to improve employee productivity, while others are using it to improve network operations, reducing OpEx through automation. Nvidia has found itself at the epicenter of many of these efforts.
SoftBank and T-Mobile are working with Nvidia to move to an infrastructure that can support both RAN and AI. Nvidia is also partnering with T-Mobile, SoftBank, Ericsson and Nokia on software-defined RAN for mobile switching offices.
Nvidia is also working with Indosat Ooredoo Hutchison on building an AI factory, a data center specialized for AI processing, using an Nvidia reference architecture. Some 14 companies worldwide have built AI factories with Nvidia, said Ronnie Vasishta, Nvidia's telecom chief.
Nvidia is partnering with Verizon on its AI Connect initiative to deliver on skyrocketing enterprise requirements for bandwidth and connectivity to support AI.
For telcos seeking to enhance customer experience, Nvidia can support automatic speech recognition and help build “digital humans” with realistic faces.
https://www.fierce-network.com/cloud/ai-underhyped-says-nvidias-telecom-chief