Orange, Nokia, Nvidia, and Intel debate: ASICs vs. GPUs vs. General-Purpose CPUs for RAN Baseband Processing

For Orange CTO Laurent Leboucher, the main attraction of AI today lies in its potential to improve the efficiency of 5G radio access networks (RANs). That helps explain Orange’s recent collaboration with Nokia and Nvidia. Orange already deploys Nokia’s purpose-built 5G network equipment and software at mobile sites in France and other markets. Until recently, it had little obvious need for Nvidia, the U.S. chipmaking giant best known for the graphics processing units (GPUs) used to train large language models. But Nokia and Nvidia became closely aligned last October, when Nvidia took a 3% stake in Nokia as part of a $1 billion investment. Nokia is now developing AI RAN software designed to run on GPUs.

Leboucher’s interest is driven in part by concerns over the cost of custom silicon — the application-specific integrated circuits (ASICs) used in purpose-built 5G networks. “It creates an opportunity to bring a general-purpose chipset instead of an ASIC implementation,” he told Light Reading at last week’s FutureNet World event in London. “I think we could, at some point, benefit from the economies of scale of new chipsets. That could be Nvidia.”

The rationale is much easier to understand than arguments about 5G for autonomous vehicles. Chip manufacturing is already expensive, and both Nokia and Ericsson expect component costs to rise further this year amid relentless AI demand. At the same time, the RAN market remains relatively small and has contracted. According to market research firm Omdia, telco spending fell from $45 billion in 2022 to $35 billion last year and is expected to stay at that level. In that context, it is increasingly difficult to justify designing high-cost chips with limited reuse outside telecom.

Image Credit: Orange

Last year, Nvidia spent about $18.5 billion on research and development, generated nearly $216 billion in revenue, and reported a gross margin of more than 70%. Its financial strength is not in question. If telecom operators can use its GPUs for RAN software, they may face less pressure to secure the long-term economics of 5G and 6G development. That alone could be enough to support the case for Nvidia. The counterarguments are cost and power consumption. By design, custom silicon is optimized for a specific workload and will always outperform a more general-purpose processor at that task. An Nvidia GPU in the RAN could therefore be seen as excessive — like using a crop duster to water a hanging basket.

Leboucher believes that Nokia and Nvidia are developing something far more compact than a typical data-center deployment. “It is not a Blackwell GPU,” he said, referring to Nvidia’s current hyperscaler-class product line. “I have an understanding it’s something which is a little bit smaller.” One of the first GPU-based products is expected to come on a card that Orange can insert into an existing Nokia AirScale chassis.

He is also interested in replacing traditional RAN algorithms with AI to improve spectral efficiency and overall performance. Through trials with Nokia and Nvidia, Orange wants to determine whether a GPU is actually required to capture the full benefit. “We can completely rethink the way we are doing algorithms today, using AI for the radio Layer 1,” he said, referring to the most compute-intensive part of the RAN software stack. Some of the “AI-RAN” narrative still sounds “a little bit like science fiction,” Leboucher admitted. “But I think there are some very interesting ideas behind that. We want to understand where we are.”

This is not the first time the industry has debated a shift from ASICs to general-purpose processors for RAN equipment. Alongside its purpose-built 5G portfolio, Ericsson already offers cloud RAN products based on Intel CPUs. Samsung is now focused on Intel-based virtual RAN and has recently predicted the end of purpose-built 5G. Even so, cloud and virtual RAN still account for only a small share of live 5G deployments. Huawei and Ericsson, the two largest RAN vendors, remain committed to custom silicon development.

Nvidia’s entry into the market has clearly given Leboucher and his team more to evaluate as RAN technology becomes more sophisticated. “We are introducing new requirements for radio networks, typically for beamforming, and we have to consider the need for quite powerful chipsets,” he said. “Whether the best way to keep going is using ASICs or a general-purpose architecture – I think this is a good time to ask the question. Before, it was too early.”

The answer could shape Orange’s next major RAN decisions. The operator is preparing for what Leboucher describes as a “refresh” of RAN equipment across several countries ahead of the expected 6G launch in 2030. For the first time, he said, Orange will include cloud RAN as a “major option” in its request for proposal.

The concern around Intel as an alternative to Nvidia is its still-fragile financial position. Before December, Intel had been trying to spin off its network and edge group (NEX), which develops RAN chips. Those plans were later shelved, but the company’s net loss widened to about $4.3 billion in the most recent first quarter, from $887 million a year earlier, while revenue rose only 7% year over year to $13.6 billion. Cristina Rodriguez, who had led NEX, left this month to join Coherent, and Intel has not yet named a successor. Despite those results, the market reaction was euphoric. “The shares jumped 28% in after-hours trading, taking Intel firmly into meme-stock territory,” said Radio Free Mobile analyst Richard Windsor in a blog published after results came out on April 23. “I say meme-stock because there is no other way to describe it when the shares are on a 2026 PER [price-to-earnings ratio] of 137x, and its technology looks obsolete.”

Orange places significant value on separating hardware from software, allowing the same RAN software to run across multiple hardware platforms. Ericsson and Samsung both say the virtual RAN software they have built for Intel CPUs could, with relatively modest changes, be ported to AMD silicon using the same x86 architecture or to Arm-based CPUs.

By contrast, Layer 1 code written for Nvidia GPUs and the CUDA software stack would not be portable to other platforms, according to Ericsson. “I think the main challenge we see with that is we are trying very hard to keep our stack portable, to give hardware options,” Michael Begley, Ericsson’s head of RAN compute, told Light Reading at MWC Barcelona this year. “If you go all in on one, it’s great, but you’re all in on one, and you can’t offer those other options to the operators or the ecosystem.”
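Ericsson’s portability argument can be illustrated with a conceptual sketch. This is a hypothetical interface, not any vendor’s actual Layer 1 code: a routine written against a hardware-abstraction layer can dispatch to whichever backend is present, while one written directly against a single vendor’s API cannot be swapped out.

```python
# Conceptual sketch of the portability argument -- hypothetical interfaces,
# not any vendor's actual Layer 1 code.

class L1Backend:
    """Abstract compute backend for a Layer 1 kernel (e.g., channel estimation)."""
    def channel_estimate(self, samples):
        raise NotImplementedError

class CpuBackend(L1Backend):
    def channel_estimate(self, samples):
        # Portable reference implementation (plain Python stands in for x86/Arm SIMD).
        return [s * 0.5 for s in samples]

class GpuBackend(L1Backend):
    def channel_estimate(self, samples):
        # In a real stack this would invoke a vendor-specific GPU kernel;
        # code written directly against that API is what creates lock-in.
        return [s * 0.5 for s in samples]

def run_layer1(backend: L1Backend, samples):
    # The RAN software sees only the abstract interface, so the hardware
    # underneath can be swapped -- the "capacity to swap" Leboucher wants.
    return backend.channel_estimate(samples)

print(run_layer1(CpuBackend(), [1.0, 2.0]))  # same result on either backend
print(run_layer1(GpuBackend(), [1.0, 2.0]))
```

The design choice at stake is exactly this indirection: the abstraction layer costs some performance versus coding straight to one vendor’s kernels, which is the trade-off Ericsson and Orange both describe.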

Leboucher acknowledges that risk. “The risk of lock-in exists, definitely,” he said. “We really want to stay open. At the same time, we know that benefiting from a very, very large-scale general-purpose architecture should improve the TCO [total cost of ownership]. At the end of the day, it will be a trade-off. But we would welcome an architecture where we have the capacity at some point to decide to swap if we need to swap.”

Nokia’s hope is that much of the Layer 1 software written for Nvidia GPUs will eventually be deployable on other GPU platforms. But Nvidia’s near-monopoly in that segment leaves the industry with few alternatives for now. There is also optimism inside Nokia that GPU-based code could later be adapted for capable CPUs, although Ericsson’s comments suggest that would be much harder. For telecom executives, the choices made over the next couple of years may be pivotal as 6G approaches.

………………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/5g/orange-weighs-nvidia-against-intel-for-5g-chips-ahead-of-new-rfp

RAN Silicon Rethink- Part II; vRAN and General-Purpose Compute

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Big Tech AI spending binge results in massive job cuts!

Executive Summary:

The tech industry is undergoing a massive structural realignment. Hyperscalers, Software as a Service (SaaS) vendors, and telecom network and equipment providers are aggressively slashing workforces to reallocate capital toward massive AI infrastructure investments. Alphabet, Meta, Amazon, and Microsoft are projected to spend a collective $674 billion in 2026—over double their 2024 levels. Most of that spending is AI-related.

From the referenced WSJ article:

“Tech companies are in effect playing a game of chicken with each other on capital-spending plans. They are shelling out as much as they can—more than their rivals, they hope—on AI chips and data centers that could put them in the lead in a race they feel they can’t afford to lose. That in turn is heightening competition over who can use AI to help do more with a lot less, freeing up money to spend on expensive chips.”

Hyperscalers such as Microsoft and Meta Platforms (Meta) are the latest to significantly reduce their workforces as they scale AI-driven operations. Meta is reportedly reducing its headcount by approximately 8,000, while Microsoft has initiated a “voluntary retirement program” (aka a buyout) targeting 7% of its U.S. workforce—a strategic move to trim payroll before resorting to involuntary layoffs.

This trend is industry-wide: Oracle and Snap have executed significant reductions, while Block announced plans to cut 40% of its staff (over 4,000 employees).  March 2026 represented a two-year peak in tech industry contraction, with Layoffs.fyi reporting 45,800 tech job reductions.

…………………………………………………………………………………………..

Source:  Layoffs.fyi
……………………………………………………………………………………………………………………

The AI Transformation Narrative vs. Financial Reality:

Executive leadership is framing these cuts as a strategic pivot toward an AI-native future where automated workflows replace legacy human-centric processes. While CEOs like Block’s Jack Dorsey insist these decisions aren’t driven by distress, a “game of chicken” is unfolding in capital planning.

Companies are locked in an escalating race to secure AI silicon (GPUs), High Bandwidth Memory (HBM) and expand Data Center footprints, creating a massive drain on liquidity.  This heightens the pressure to achieve “doing more with less”—using AI to automate internal functions and free up the capital necessary for expensive infrastructure. However, in many cases, these cuts are simply corrective measures for pandemic-era overhiring or efforts to normalize efficiency metrics:

  • Oracle: Annual revenue per employee remains significantly below industry leaders like Microsoft.
  • Snap: Headcount remains 65% above pre-COVID levels despite consistent operating losses.

Strategic Risks and “Off-Balance-Sheet” Engineering:

While slashing headcounts improves Revenue Per Employee (RPE)—a key KPI for Wall Street—it introduces significant long-term risks:

  • Talent Attrition & Brain Drain: Aggressive layoffs degrade morale and may drive elite engineering talent toward startups, potentially creating new competitors.
  • Governance & Safety: Reducing human oversight during AI deployment could lead to safety and business model integration failures.
  • Regulatory & Public Backlash: The “AI as a job killer” narrative is fueling community opposition to massive data center builds, complicating infrastructure rollouts.

The CAPEX Burden:

The financial strain is becoming evident even for “Deep Pocket” firms. Alphabet, Meta, Amazon, and Microsoft are projected to spend $674 billion in CAPEX this year—more than double their 2022 spend.

  • Amazon is projected to be cash-flow negative this year.
  • Meta’s CAPEX is set to exceed 50% of its annual revenue, with its debt-to-equity ratio climbing to 39% (up from 8% five years ago).
  • Some firms are reportedly utilizing “off-balance-sheet financial wizardry” to maintain their AI compute growth without alarming debt markets.

Verdict of the Market?

Markets are sending mixed signals. While analysts are obsessed with efficiency metrics (questions about efficiency on earnings calls have tripled in two years), they are becoming “skittish” regarding unbridled spending. Tesla (TSLA), for instance, saw a 4% stock dip after raising its spending target to $25 billion.

Ultimately, tech giants—who already average $2M in annual revenue per employee—are betting that further workforce reductions will juice efficiency and fund the AI arms race. The trade-off remains whether these “leaner” organizations can maintain the innovation and safety standards required to lead the next technological cycle.

………………………………………………………………………………………………………..

The telecom sector is particularly vulnerable, as AI-native “zero-touch” operations begin to replace legacy roles permanently.

  • Network Operators: BT has announced plans to replace up to 10,000 roles with AI by 2030, specifically targeting network management and customer service.
  • Network Equipment Vendors: Equipment giants Ericsson and Nokia have collectively shed over 36,000 roles in recent years, pivoting from traditional hardware to AI-optimized software and networking.
  • Integrators: Accenture and IBM are utilizing AI to automate junior-level coding and back-office HR tasks, signaling that AI reskilling is now a prerequisite for workforce retention.

Strategic Outlook – Monetization and the “RPE” Battle:   

For both MNOs and tech giants, the coming years are about monetization. Investors have shifted from cheering bold AI visions to demanding tangible results, with a heavy focus on Revenue Per Employee (RPE)—a metric that workforce reductions are designed to “juice.”
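The mechanics of that “juicing” are simple arithmetic. A short sketch, using purely illustrative figures (not any company’s reported numbers), shows how a headcount cut improves RPE even when revenue is flat:

```python
def revenue_per_employee(revenue_usd: float, headcount: int) -> float:
    """RPE: total annual revenue divided by headcount."""
    return revenue_usd / headcount

# Illustrative numbers only -- not any company's actual figures.
revenue = 200e9  # $200B annual revenue, held flat
before = revenue_per_employee(revenue, 100_000)  # $2.0M per employee
after = revenue_per_employee(revenue, 92_000)    # after an 8% workforce cut

print(f"RPE before: ${before/1e6:.2f}M, after: ${after/1e6:.2f}M")
# The metric improves ~8.7% with no change in the underlying business --
# which is why RPE alone says nothing about innovation capacity.
```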

This “Great Realignment” is a high-stakes gamble, in this author’s opinion. The firms that successfully bridge the gap between massive infrastructure investments and scalable, profitable AI-native services will lead the next generation of global technology. Those that fail to balance efficiency with talent retention may find themselves outpaced by leaner, AI-native startups born from the very talent they have released.

……………………………………………………………………………………………………………….

References:

https://www.wsj.com/tech/ai/the-ai-splurge-is-costing-big-tech-its-workforce-34a88e68

AI spending boom accelerates: Big tech to invest an aggregate of $400 billion in 2025; much more in 2026!

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

AI spending is surging; companies accelerate AI adoption, but job cuts loom large

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Canalys & Gartner: AI investments drive growth in cloud infrastructure spending

STL Partners webinar: Agentic AI needed for RAN autonomy & efficiency

Yesterday, an STL Partners webinar titled “Turning autonomy into margin: Agentic AI and the autonomous RAN” suggested agentic AI is the missing layer that can turn RAN autonomy from a technical goal into a direct profit margin booster. The presenters argued that operators should prioritize autonomy use cases by business impact, not just by how much automation coverage they add, and that the right roadmap can move autonomy from an engineering KPI to a commercial advantage.

The central message was that autonomy only matters if it improves economics (see poll results below). The webinar revealed that network operators need a dual-axis framework that combines the usual autonomous-network maturity view with a value-creation lens, so they can focus on the capabilities that scale into measurable business outcomes.

Agentic AI is presented as the practical enabler for moving beyond human-in-the-loop operations. In this framing, agents help orchestrate tasks, make decisions, and coordinate network actions in ways that support more closed-loop automation than traditional workflows can deliver.

The results of an “actuality” poll on RAN autonomy revealed that controlling costs and reliability were most important, with new revenue growth through APIs and sensing cited by only 10.87% of respondents. Similarly, results of an “aspirations” poll for RAN autonomy were fairly evenly spread between reducing costs and optimizing the customer experience, with just 13.21% citing new revenue growth.

Source: STL Partners

Terje Jensen, SVP, global business security officer and head of network and cloud technology strategy at Telenor, said that he had expected to see network operators’ aspirations shift more clearly towards improving customer experience and even revenue generation, not just efficiency.

Darwin Janz, strategic technology planner at SaskTel, also thought network operators’ ambitions would be higher, but he noted that they still struggle to identify concrete, monetizable use cases. Without that, there’s a real risk of building technical solutions in search of a problem, rather than starting from clear enterprise needs and value, Darwin noted. “We really need to see those use cases and enterprise customer needs,” he added.

……………………………………………………………………………………………………………………….

The webinar was built around four practical questions:

  1. Which use cases create real commercial impact?
  2. How to shift from autonomy as an engineering metric to a margin driver?
  3. Where does agentic AI add value today?
  4. What data, orchestration, and organizational foundations are needed to scale beyond pilots?

For network operators, the implication is that autonomous RAN strategy should be tied to P&L outcomes such as lower operating cost, better resource utilization, and faster optimization cycles. The webinar’s message is that autonomy becomes strategically important only when it is deployed in a way that compounds across the network and business.

…………………………………………………………………………………………………………………..

References:

https://www.lightreading.com/network-automation/telcos-showing-limited-aspiration-for-ran-autonomy-benefit

The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core

Nokia to showcase agentic AI network slicing; Ericsson partners with Ookla to measure 5G network slicing performance


Nokia’s AI Applications Study: “Physical AI” may require RAN redesign to support high‑volume, low‑latency uplink traffic

According to Nokia, AI-generated traffic in most mobile networks is at an early stage, with application maturity and adoption by consumers and enterprises only at the start of a broader AI super cycle. The Finland-based company analyzed more than 50 AI applications and reached three conclusions: higher uplink traffic, overall data growth, and increasing sensitivity to delay in conversational services such as chat and voice. The mobile network industry is also moving toward “AI-RAN” or “6G-native” architectures that embed AI into the network, transforming radio sites into “robotic” nodes capable of edge inference and handling these new demands.

Do those findings require a structural change in Radio Access Network (RAN) design? Let’s take a fresh look…

Mobile networks traditionally support a heterogeneous mix of traffic, ranging from high-throughput video streaming to low-bandwidth, delay-tolerant messaging. Network operators typically address escalating capacity demands through infrastructure expansion and overprovisioning, relying on best-effort delivery—a model that has proven remarkably resilient. However, capacity alone is insufficient for new use cases.

The transition from circuit-switched voice to packet-switched IP traffic (voice/video/data) required a redesign to accommodate variable packet sizes instead of predictable, continuous voice patterns. The proliferation of Internet of Things (IoT) devices introduced requirements for massive machine-type communications (mMTC), driving the development of LTE-M and NB-IoT to optimize for deep indoor penetration and power efficiency. Conversely, consumer web-based services and video streaming scale seamlessly by adding RAN and core capacity. Existing AI applications, such as generative AI chatbots, follow this model, making current RAN architectures adequate for the present load.

A paradigm shift is emerging with Physical AI [1.], which enables machines like autonomous vehicles and robots to interact with the environment in real time. Unlike traditional video streaming, these applications cannot leverage buffering to absorb network jitter. In Physical AI, high-definition video frames and sensor data must arrive within stringent time-to-live (TTL) constraints to remain actionable. This shifts the focus from average throughput to consistent low latency. Maintaining this strict QoS, particularly in the uplink, requires abandoning best-effort, overprovisioned models in favor of guaranteed scheduling, which necessitates substantial reserved capacity or specialized AI-RAN functionalities.

Note 1. Physical AI combines sensors, perception, decision-making, and actuators so machines can understand their environment and take physical (real world) action. Physical AI is used by robots, vehicles, drones, industrial machines, and smart infrastructure that generate and consume real-time sensor, video, and control traffic. These systems need tight coupling between low latency, high reliability, and continuous feedback loops because decisions in software immediately affect physical motion or control. Physical AI is different from typical generative AI because the output is not text or images; it is real-world action. That makes network performance critical, especially for uplink-heavy, latency-sensitive traffic where delays can affect safety, control accuracy, and operational efficiency.

“Physical AI introduces the possibility that large-volume uplink video with strict latency requirements will become a meaningful part of mobile traffic, creating both a design challenge and a monetization opportunity,” says Harish Viswanathan, head of the Radio Systems Research Group at Nokia.

Image Credit: Techslang

Delivering uplink video with sub‑20 ms end-to-end latency can require provisioning three to four times the average uplink capacity. While this level of redundancy is manageable for low-bandwidth services such as voice or control signaling, it becomes prohibitively expensive when supporting high-throughput video streams.
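A back-of-the-envelope model suggests where a figure of that order can come from (a deliberate simplification, not Nokia’s methodology): if every video frame must finish transmitting within a per-frame latency budget, the link must be sized for the frame’s burst rate rather than the stream’s average rate.

```python
def uplink_headroom(frame_rate_fps: float, latency_budget_s: float) -> float:
    """
    Ratio of required link capacity to average stream bitrate when each
    frame must finish transmitting within latency_budget_s of capture.
      required capacity = frame_bits / budget
      average bitrate   = frame_bits * frame_rate
      headroom          = 1 / (frame_rate * budget)   # frame size cancels out
    Simplified model: ignores queuing, retransmissions, and scheduler delay.
    """
    return 1.0 / (frame_rate_fps * latency_budget_s)

# 30 fps uplink video with roughly 8-10 ms of transmission budget per frame
print(uplink_headroom(30, 0.010))  # ~3.3x average capacity
print(uplink_headroom(30, 0.008))  # ~4.2x average capacity
```

Under these assumed numbers the headroom lands in the 3x–4x range the text cites; tightening the budget or raising the frame rate pushes it higher, which is why overprovisioning scales so poorly.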

As device densities increase, the required headroom for reserved capacity grows disproportionately, significantly constraining network scalability and driving up cost per bit. This makes Physical AI traffic—characterized by real-time sensor and video inputs for machine analysis—fundamentally different from conventional services, and unsuited to existing best‑effort transport models.  From a Nokia blog post:

“Physical AI will rely on low latency videos to enable real-time control. While the machines or robots will perform most functions locally, there will be situations where they need to rely on more powerful models or human operators to provide remote control via the network. For example, driverless taxis may require remote assistance in unexpected scenarios; service robots may need guidance in complex environments; drones may depend on real‑time video analysis at the point of delivery; and field workers using AR may require timely visual instructions. In all these cases, the network must deliver fresh video information with low and predictable latency.”

To address these challenges, telecom operators are expected to adopt a multi‑layer approach encompassing network architecture, traffic management, and service monetization.

At the Application layer, not all traffic requires identical latency treatment. When video or sensor data is processed by AI rather than consumed by humans, only semantically relevant information may need immediate uplink transmission. This emerging paradigm, known as semantic communication, allows for significant data reduction while preserving information integrity within latency‑critical loops.

Within the network domain, established mechanisms such as Quality of Service (QoS) and network slicing remain essential. QoS enables prioritization of specific traffic classes, while slicing supports logically isolated virtual networks with guaranteed service-level attributes—latency, jitter, bandwidth, and reliability.
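A minimal sketch of the slicing idea follows (hypothetical parameters, not 3GPP-defined data structures): each slice carries guaranteed service-level attributes, and a new flow is admitted only if its slice can still honor them.

```python
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    """A logically isolated slice with guaranteed service-level attributes."""
    name: str
    max_latency_ms: float    # guaranteed one-way latency bound
    capacity_mbps: float     # reserved bandwidth for this slice
    allocated_mbps: float = 0.0

    def admit(self, flow_mbps: float, flow_latency_ms: float) -> bool:
        """Admit a flow only if the slice meets its latency need and has
        unallocated reserved capacity left for it."""
        if flow_latency_ms < self.max_latency_ms:
            return False  # this slice cannot guarantee a tighter bound
        if self.allocated_mbps + flow_mbps > self.capacity_mbps:
            return False  # no headroom left in the reservation
        self.allocated_mbps += flow_mbps
        return True

physical_ai = NetworkSlice("physical-ai-uplink", max_latency_ms=20, capacity_mbps=400)
best_effort = NetworkSlice("best-effort", max_latency_ms=200, capacity_mbps=1000)

print(physical_ai.admit(100, flow_latency_ms=20))  # True: fits the guarantee
print(best_effort.admit(100, flow_latency_ms=20))  # False: slice too slow
```

The admission check is the economic point: guaranteed slices turn latency into an explicit, finite resource that can be priced, which connects directly to the service-tier discussion below.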

At the service and business model level, supporting low-latency, bandwidth-intensive applications reshapes network economics. Operators must evolve beyond best‑effort pricing structures toward differentiated service tiers or performance-based charging models aligned with enterprise and industrial use cases.

For the RAN, Physical AI underscores the need for greater programmability and elasticity. Future RAN designs will depend on dynamic resource allocation, real-time traffic classification, and AI-driven orchestration to balance throughput, latency, and reliability at scale.

As Physical AI deployments expand—from autonomous mobility to precision manufacturing and tele‑robotics—managing high‑volume, low‑latency uplink traffic will become a defining capability for next‑generation network strategy and differentiation. Unlike conventional mobile data, Physical AI cannot rely on buffering to manage traffic spikes. The requirement for continuous video and sensor data to arrive within strict time limits to inform real-time actions makes traditional “best-effort” network approaches inefficient and costly.

Reasons for RAN Redesign:
  • Uplink-Centric Demand: Physical AI shifts the network requirement from downlink-heavy (human consumption) to uplink-heavy (machine-generated) traffic.
  • Strict Latency & Throughput: Maintaining consistent low latency (e.g., around 20 milliseconds) for high-volume video uploads can require 3x to 4x more capacity than average, making overprovisioning unsustainable.
  • Need for Programmable Architectures: To support this, RAN must move toward more flexible, AI-native architectures that prioritize critical data and provide deterministic, rather than best-effort, performance.
  • Semantic Communication: To reduce data volume while maintaining performance, the RAN will need to adopt semantic communication—transmitting only the essential data needed for the AI to make decisions.

………………………………………………………………………………………………………………………………………………………..

References:

https://www.nokia.com/asset/215147/

https://www.nokia.com/blog/physical-ai-redefining-ran-and-telco-monetization/

https://telcomagazine.com/news/nokia-report-points-to-ai-driven-shift-in-mobile-traffic

What Is Physical AI?

Arm Holdings unveils “Physical AI” business unit to focus on robotics and automotive

Is the “far edge” a bridge to far to cross for AI inferencing? What about “Distributed AI Grids”?

The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core

Ericsson and Intel collaborate to accelerate AI-Native 6G; other AI-Native 6G advancements at MWC 2026

NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

AI-RAN Reality Check: hype vs hesitation, shaky business case, no specific definition, no standards?

Is the “far edge” a bridge to far to cross for AI inferencing? What about “Distributed AI Grids”?

How Far is the Far Edge?

As major telcos size up distributed edge sites for a possible AI inferencing model, they are trying to determine how far out in their networks is the right place to invest in AI computing capacity. According to Light Reading, the “far edge” is a divisive option for inferencing. Omdia, owned by Informa, defines the far edge to include radio access network (RAN) cell sites, aggregation hubs, exchange offices, optical line terminal (OLT) nodes, and Tier 2 metro hubs.

Many telcos are struggling to define how far the edge is from customer premises and how to serve various use cases with compute and intelligence. It seems that a 5G SA core with network slicing would be mandatory to support multiple unique use cases, each with different QoS requirements.

According to Omdia’s Telco Edge Computing Survey last year, just 15% of telcos ranked the network far edge as the top location for where most AI inferencing will take place, while even fewer (11%) said the network near edge (which includes central offices, headend sites, and large telco data centers) would be the main spot. The results showed AI inferencing is expected to be handled mostly on the end devices themselves and at the enterprise edge (e.g., office, campus, or manufacturing sites).

Kerem Arsal, Omdia senior principal analyst for telco enterprise and wholesale, predicted in a research note that this year will see telcos split into camps of “believers” and “doubters” of the far edge.

Image Credit:  Sphere

…………………………………………………………………………………………………………………………………………………………………………………………………………………..

AT&T VP Yigal Elbaz, speaking at the recent New Street Research and BCG Global Connectivity Leaders Conference, expressed a cautious view on AI compute at the “far edge,” questioning how far the edge truly needs to extend to serve specific use cases effectively. He said the following (Source: Light Reading):

“The proliferation of compute and high-performing compute across the nation, in all metros is just happening, with a software layer on top of this [and] with the tools that developers need. So, I am not sure that there’s much value in extending that compute all the way to the far edge just to save another millisecond or two milliseconds of latency.”

He added that AT&T’s fiber and wireless networks can provide the “deterministic experience” needed for new use cases and help them to “intelligently connect to the right model that they use, the context or the infrastructure that they need because that’s going to be heavily distributed across the US.”

“There’s no doubt that AI is going to be embedded into wireless networks, and we’re going to call it AI-native and combine the physical space with the intelligence of the network. This is all true,” said Elbaz.

………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Distributed AI Grids:

At this year’s Nvidia GTC event, AT&T was cited as a lead collaborator in the development of distributed AI grids—a geographically dispersed, interconnected fabric designed for high-performance AI infrastructure. In partnership with Cisco and Nvidia, AT&T is architecting an enterprise IoT AI grid focused on localized inference. By moving the compute layer to the network edge—potentially via On-Premises Edge (oPE)—the architecture aims to minimize backhaul latency and process workloads at the data source. Current Proof of Concept (PoC) deployments include a public safety framework and an edge AI-powered video intelligence pilot for site security. Similarly, Comcast is trialing Nvidia GPU-accelerated edge nodes to support deterministic, low-latency AI applications.
For the Cisco AI Grid with Nvidia architecture used by AT&T and Comcast, the interconnect strategy moves beyond standard backhaul to a specialized, deterministic fabric designed for distributed AI inference.

AI Grid Interconnect Stack: The architecture leverages a multi-layer protocol approach to ensure low-latency, secure communication between edge nodes and the core:
  • Ethernet with RDMA (RoCE): The foundation is built on Nvidia Spectrum-X Ethernet, which utilizes RDMA over Converged Ethernet (RoCE). This allows for direct memory access between edge GPUs (e.g., Nvidia RTX PRO 6000 Blackwell Server Edition) and the network core, bypassing CPU overhead to achieve near-line-rate performance.
  • Scale-Across Networking: Using Nvidia Spectrum-XGS, the architecture extends standard RoCE to scale across geographically distributed sites. This creates a unified “AI Factory Grid” where remote edge nodes function as a single, programmable compute substrate.
  • Silicon One Routing: Cisco’s Silicon One-based routing is utilized for AI-optimized traffic management, providing the high-speed, high-density throughput required for token-intensive inference workloads.
  • Zero Trust & Secure Pathways: The interconnect includes a Zero Trust security layer embedded directly into the fabric. It utilizes localized traffic breakout and policy-enforced pathways to ensure that sensitive IoT and video data (such as public safety feeds) remain within the customer’s secure domain at the network edge.
  • Orchestration Control Plane: A workload-aware control plane manages these protocols to intelligently route tasks based on real-time KPIs (latency, cost-per-token, and data sovereignty), ensuring that “mission-critical” inference happens at the optimal node.
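The orchestration layer described above can be illustrated with a small sketch. This is a hypothetical model of KPI-driven node selection, assuming a control plane that routes each inference task to the cheapest node satisfying a latency SLA and any data-sovereignty constraint; the class names, fields, and thresholds are invented for illustration and are not Cisco or Nvidia APIs.

```python
# Hypothetical sketch of a workload-aware control plane's node-selection logic:
# route each inference task on real-time KPIs (latency, cost-per-token, data
# sovereignty). All names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    region: str             # data-sovereignty domain the node sits in
    latency_ms: float       # measured round-trip latency to the data source
    cost_per_mtoken: float  # $ per million inference tokens

def select_node(nodes, max_latency_ms, required_region=None):
    """Return the cheapest node that meets the latency SLA and, when data
    sovereignty applies, sits in the required region; None if none qualify."""
    candidates = [
        n for n in nodes
        if n.latency_ms <= max_latency_ms
        and (required_region is None or n.region == required_region)
    ]
    return min(candidates, key=lambda n: n.cost_per_mtoken, default=None)

nodes = [
    EdgeNode("core-dc", "us-east", 48.0, 0.9),
    EdgeNode("metro-edge", "us-east", 12.0, 2.1),
    EdgeNode("on-prem", "us-east", 3.0, 4.0),
]

# A public-safety video feed: tight latency budget, data must stay in-region.
print(select_node(nodes, max_latency_ms=10.0, required_region="us-east").name)  # on-prem
```

The trade-off the sketch captures is the one in the bullet: "mission-critical" inference lands on the expensive on-premises node only when its KPIs demand it; everything else drains to cheaper, more distant compute.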
Focusing specifically on interoperability, the primary concern with a single-vendor AI Grid is the risk of architectural silos that could undermine years of industry progress toward Open RAN and multi-vendor environments. Key interoperability risks for carriers include:
  • Proprietary Software Lock-in: Integrating network functions into a proprietary ecosystem (like Nvidia’s CUDA or AI Aerial) can create a “subscription trap,” where software is inseparable from specific hardware, making it nearly impossible to swap vendors without a total architectural overhaul.
  • Data Fragmentation: Deploying AI across a distributed grid often leads to fragmented data sets across legacy and new multi-vendor platforms, which can result in inaccurate AI models and increased operational complexity.
  • Standardization Lag: While industry bodies like the GSMA are pushing for Open Telco AI standards, the rapid deployment of proprietary AI systems often outpaces these frameworks, leading to entrenched, incompatible systems that require significantly more resources to reconcile later.
  • Integration with Legacy Systems: Modern “agentic AI” and AI-native stacks often struggle to orchestrate processes across siloed legacy infrastructure, creating rigid operational environments that prevent the seamless flow of data needed for automated network troubleshooting.

Bottom Line: While the AI Grid may offer a more viable roadmap than AI-RAN, there is insufficient industry discourse regarding the strategic risks of a global, geographically distributed computing platform—as Nvidia defines it—reliant on a single-vendor hardware stack. Although Nvidia currently maintains undisputed market dominance, historical precedents such as Intel serve as a cautionary tale; long-term dominance is never guaranteed, and even market leaders face potential obsolescence. Furthermore, Nvidia’s practice of providing capital injections to entities that subsequently re-invest those funds back into Nvidia’s own ecosystem raises significant concerns regarding market sustainability and long-term financial health.

……………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.lightreading.com/ai-machine-learning/at-t-cto-casts-doubt-on-ai-compute-at-the-far-edge

https://www.lightreading.com/5g/nvidia-lines-up-ai-grid-as-orange-cto-echoes-the-ai-ran-doubts

Edge AI Computing Explained: Key Concepts and Industry Use Cases

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

Nvidia’s networking solutions give it an edge over competitive AI chip makers

Using AI, DeepSig Advances Open, Intelligent Baseband RAN Architectures

Using advanced AI techniques, DeepSig has reportedly managed to eliminate a mobile network’s pilot signal, thereby removing signaling overhead without degrading overall performance. Founded in 2016, the U.S.-based startup occupies a leading position at the intersection of artificial intelligence (AI) and the radio access network (RAN), developing data-driven models that could supplant traditional, human-engineered signal processing algorithms.

This work has become especially relevant as the telecom industry moves toward open and software-defined RAN architectures. DeepSig is now a visible contributor to OCUDU (Open Centralized Unit Distributed Unit), an open-source initiative announced by the Linux Foundation in collaboration with the U.S. Department of Defense and its FutureG ecosystem partners to accelerate open CU/DU development for 5G and early 6G systems. OCUDU is intended to establish a carrier-grade reference platform for baseband software, with support for AI-based algorithms and solutions embedded in the RAN compute stack.

As AI becomes a central theme across the telecom ecosystem, DeepSig has rapidly moved from relative obscurity to prominence through collaborations with major industry and government stakeholders. OCUDU, announced ahead of MWC Barcelona 2026, aims to introduce open-source software elements into the RAN baseband domain, an area historically dominated by proprietary offerings from Ericsson, Nokia, and Samsung. By lowering barriers to entry, the program seeks to foster innovation and enable smaller players like DeepSig to participate more freely in the U.S. baseband ecosystem.

Image Credit:  DeepSig

DeepSig was identified, alongside Ireland-based Software Radio Systems (SRS), as one of two startups selected to deliver OCUDU’s initial software stack. “The National Spectrum Consortium had an RFQ for developing an open-source stack,” explained Jim Shea, DeepSig’s CEO. “SRS already had a capable baseline, but it needed to be elevated to carrier-grade—adding new features and strengthening reliability,” he added.

Meanwhile, major vendors Ericsson and Nokia were named “premier members” of the new OCUDU Ecosystem Foundation. While both could, in principle, leverage the platform to integrate third-party components into their baseband systems, industry observers remain skeptical that these incumbents will fully embrace open-source alternatives over their established proprietary stacks. In comments at MWC, Nokia CEO Justin Hotard characterized OCUDU as a welcome ecosystem evolution to accelerate innovation but clarified that “not everything necessarily needs to be open source.”

Driven in part by DoD interests, OCUDU reflects broader U.S. government ambitions to ensure that 5G and future 6G networks remain open to domestic innovation, particularly for defense and mission-critical use cases. For vendors like Ericsson and Nokia—who view defense markets as increasingly strategic—this alignment could bring both opportunity and complexity.

DeepSig’s trajectory extends beyond OCUDU. The company’s technology originated from research by Tim O’Shea, now CTO, during his tenure at Virginia Tech, where he explored deep learning’s application to wireless signal processing. “You can apply deep learning to enhance the way communication systems operate by replacing many of the traditional algorithms,” said Jim Shea. While these methods do not circumvent theoretical limits such as Shannon’s Law, small efficiency gains can yield substantial operational and economic benefits for cost-sensitive mobile operators.
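The pilot overhead DeepSig reportedly eliminates can be seen in the classical algorithm a learned model would replace: known pilot symbols consume air-time purely so the receiver can estimate the channel by least squares. A minimal sketch, assuming an illustrative single-tap flat-fading model (values and noise level are invented for demonstration, not from DeepSig):

```python
# Minimal sketch of classical pilot-aided least-squares channel estimation --
# the kind of hand-engineered algorithm (and pilot overhead) that a learned
# estimator could replace. Illustrative single-tap flat-fading model.
import numpy as np

rng = np.random.default_rng(0)

h = 0.8 * np.exp(1j * 0.5)                       # true (unknown) channel coefficient
pilots = np.array([1, -1, 1, 1], dtype=complex)  # known pilot symbols (pure overhead)
noise = 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
rx = h * pilots + noise                          # what the receiver observes

# Least-squares estimate: h_hat = (p^H y) / (p^H p)
h_hat = np.vdot(pilots, rx) / np.vdot(pilots, pilots)
print(abs(h_hat - h))  # small estimation error, bought with 4 pilot slots
```

Every pilot slot here is capacity not carrying user data; a model that infers the channel directly from data-bearing symbols would reclaim that overhead, which is the efficiency gain the article describes.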

As DeepSig and peers continue to redefine how intelligence is integrated into the RAN, their work signals a shift toward AI-native architectures—where machine learning, rather than handcrafted algorithms, becomes the foundation for next-generation network optimization.

 

References:

https://www.lightreading.com/5g/small-deepsig-is-at-heart-of-ai-ran-challenge-to-ericsson-nokia

Accelerating 5G vRAN, AI-RAN, and 6G on OCUDU, “the Linux of RAN”

AI-RAN Reality Check: hype vs hesitation, shaky business case, no specific definition, no standards?

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

InterDigital led consortium to advance wireless spectrum coexistence & sharing

Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

AI-RAN Reality Check: hype vs hesitation, shaky business case, no specific definition, no standards?

Introduction:

The narrative surrounding “AI-RAN” — a term thrust into the spotlight by Nvidia — may have left many believing that boatloads of GPUs are already powering baseband compute in RAN equipment across the world’s seven million mobile sites. In truth, the reality is far more nascent.

Among major RAN vendors, Nokia stands alone in adapting baseband software for GPU acceleration. Yet even Nokia does not anticipate commercial readiness until late 2026, as confirmed by its Chief Technology Officer, Pallavi Mahajan, during the company’s MWC press conference earlier this year. For now, no operator has announced a commercial deployment — despite the buzz around trials.

Early Movers, Limited Momentum:

Much of the current AI-RAN activity centers on two operators: T-Mobile US and Japan’s SoftBank. At MWC, T-Mobile’s Executive Vice President of Innovation and ex-CTO, John Saw, acknowledged the limited availability of deployable solutions, quipping that he hoped Nokia would deliver an AI-RAN product within the year. Nokia CEO Justin Hotard quickly assured him that such a milestone was indeed on track.

Still, the debut of a GPU-based RAN stack does not imply an imminent large-scale rollout. Without tangible network performance or cost advantages over existing virtualized or disaggregated RAN approaches, operators are unlikely to move past controlled trials.

SoftBank, while often positioned as an AI-RAN pioneer, remains cautious. As Ryuji Wakikawa, Vice President of its Advanced Technology Division, outlined last year, the operator aims to deploy only a handful of AI-RAN sites over the next fiscal cycle. Transitioning from testing to carrying live commercial traffic, he emphasized, demands a significant maturity leap in quality and feature completeness.

Beyond Hype: Limited Commercial Engagement:

Elsewhere, Indonesia’s Indosat Ooredoo Hutchison (IOH) was heralded in 2025 as the first operator in Southeast Asia pursuing AI-RAN. More than a year later, authoritative sources indicate IOH’s work remains confined to its research facility in Surabaya, with no near-term plans for GPU investment at cell sites until measurable value is demonstrated.

The challenge for Nokia — and for GPU-backed AI-RAN broadly — is convincing operators that general-purpose accelerators offer sufficient performance or efficiency gains for most RAN workloads. T-Mobile and SoftBank continue evaluating both Nokia and Ericsson, whose AI-RAN philosophies diverge sharply. Nokia is developing GPU-based baseband software, while Ericsson maintains its focus on custom silicon and CPU architectures.

Divergent Architectures and Use Cases:

Ericsson contends that no core RAN performance enhancements intrinsically require GPUs. Its ongoing collaboration with Nvidia leverages the latter’s Grace CPU technology rather than its GPU portfolio, reserving GPU acceleration only for compute-intensive functions like forward error correction (FEC).

If Ericsson’s premise holds, GPUs in the RAN become justifiable only when supporting AI inference workloads. Even then, inference at every radio site remains improbable. A more incremental strategy — deploying GPUs selectively at edge locations where AI workloads justify their cost — may prove more practical.

This modular approach aligns with existing virtual RAN deployments based on Intel CPUs, which already include native FEC acceleration. “It is an off-the-shelf card that you can slide right into an HPE or Dell or Supermicro server,” said Alok Shah, the vice president of network strategy for Samsung Networks. “That gets you the edge functionality you are looking for.”

Rethinking the Economic Case for AI RAN:

Initially, Nvidia positioned GPUs for AI-RAN as viable only if broadly utilized for AI inference across the RAN. Following its strategic alignment with Nokia, however, the company has softened its stance — now suggesting that appropriately sized, power-efficient GPUs could make sense even when dedicated solely to baseband computation.

For now, the global RAN landscape remains far from GPU-saturated. AI-RAN remains an exploratory frontier — one testing not only the technical feasibility of GPUs at the edge, but also the business-case rationale for re-architecting a trillion-dollar telecom infrastructure around them.

The AI models suitable for RAN environments must be compact and efficient, far slimmer than those that drive data center-scale AI. There’s no room for the massive, parameter-heavy neural networks that justify a GPU’s cost or energy appetite. In that light, a GPU looks less like a breakthrough and more like a mismatch — a chainsaw brought to a task better handled with a sharp pair of scissors.

Evaluating the Case for AI-RAN Acceleration:

The central question is whether GPUs can deliver meaningful benefits over custom silicon or conventional CPUs for RAN compute. Ericsson’s engineers argue that AI models deployed at the RAN must remain relatively lightweight, with far fewer parameters than those used in large-scale data centers. Excessive model complexity could introduce signaling delays unacceptable in real-time radio environments. In this context, deploying a GPU for such workloads might seem disproportionate — a high-powered tool for a low-demand task.

The most compelling defense of GPU-based RAN acceleration came from Ronnie Vasishta, Nvidia’s Senior Vice President for Telecom, who told Light Reading last summer, “The world is developing on Nvidia.” His point underscores a shift in semiconductor economics: the cost and risk of building dedicated silicon for a mature and shrinking RAN market make general-purpose processors — supported by large-volume ecosystems — increasingly attractive alternatives.

Intel’s difficulties further illustrate this dynamic. Despite $53 billion in revenue during 2025, the former microprocessor king barely broke even, following a $19 billion loss the previous year. A major restructuring cut its headcount by nearly 24,000, and its planned spinoff of the Network and Edge division — serving telecom infrastructure customers — was ultimately abandoned in December. Nvidia, the world’s most valuable company, may be eager to step into that space — but the economic logic seems upside down. Wireless network operators are looking to reduce costs, not import data center economics into the RAN.

Ecosystem or Echo Chamber?

Nvidia’s Aerial platform and CUDA-based software ecosystem do present a compelling story: open infrastructure, modular APIs, and space for smaller developers to innovate alongside giants like Nokia. On paper, it’s an alluring image of democratized RAN software. In practice, it ties the industry even more tightly to a vertically integrated, proprietary ecosystem.

Nokia appears comfortable with that trade-off. CTO Pallavi Mahajan’s recent blog post framed AI-RAN as a vehicle for “software speed and innovation.” She added, “Nokia’s AI-RAN initiative begins with a simple observation: AI is changing not only how networks are operated, but also the nature of the traffic they carry. AI workloads have already reached massive scale, with mobile devices accounting for more than half of AI interactions. Large language model interactions introduce richer uplink flows and burstier patterns as devices continuously send context to models.”

Indeed, that may be true someday. But for now, most wireless network operators need stable, cost-efficient networks, not AI-driven complexity or GPU-level power draw.

Image Credit: Nokia

Conclusions:

The uncomfortable truth is that AI-RAN feels more like a vendor-driven experiment than an operator-driven demand. Until someone proves that GPUs in the RAN deliver a measurable payoff — in performance, cost, or operational simplicity — the whole concept risks joining the long list of telecom “game-changers” that never made it past the trial stage.  The hype cycle is predictable; the economics are not. Unless that equation changes, the real intelligence may be knowing when not to deploy AI RAN.

………………………………………………………………………………………………………………

In a Substack post today, Sebastian Barros writes: What Does AI-RAN Even Mean?

Despite the crazy hype, there is no definition of AI-RAN. Today it is, at best, a vibe: a dangerous situation for an industry that runs on strict standards, which in this case are completely absent.

The AI-RAN hype is crazy right now. But despite the endless talk and vendor announcements, there is no actual technical definition of what it even means. As wild as it sounds for an industry built on strict 3GPP and O-RAN standards (strictly speaking, those are specifications, not standards), AI-RAN is currently just a vendor interpretation designed to move hardware. Moreover, telecom was using AI in the RAN before it was even cool; in fact, we were among the first industries to use neural networks in signal processing, back in the 1980s.

The problem is that treating AI-RAN as a marketing narrative rather than a rigid standard actively stalls progress. When the definition of AI-RAN is as different as night and day depending on which OEM you ask, it becomes impossible for any Telco to accurately model TCO or make solid CAPEX decisions.

Editor Notes:

  • ITU-R’s IMT-2030 framework (ITU-R Recommendation M.2160-0 for IMT-2030) calls for an AI-native new air interface and AI-enhanced radio networks, but does not mention Nokia’s AI RAN.
  • 3GPP Release 18 and later have study/work items on AI/ML for RAN functions such as energy saving, load balancing, mobility optimization, and AI/ML on the RAN air interface, but again no specifics have been discussed let alone agreed upon.
  • 3GPP Release 19 continues and expands this work, reportedly building on Release 18’s normative work and adding new AI/ML-based use cases for NG-RAN. In other words, 3GPP does have AI-RAN-related specs in progress and some normative features, but they are distributed across multiple RAN work items rather than packaged as one standalone “AI RAN” specification.
  • The AI-RAN Alliance “is dedicated to driving the enhancement of RAN performance and capability with AI.” However, it has not yet produced any implementable specifications for AI RAN. And only four carriers are “executive members”: Vodafone, T-Mobile, SK Telecom, and SoftBank (which is a conglomerate).

In Japan, NTT Docomo holds the largest cellular market share, with KDDI second, followed by SoftBank and the rapidly expanding Rakuten Mobile.

References:

https://www.lightreading.com/5g/ai-ran-lots-of-talk-little-action-no-guarantees

https://www.nokia.com/blog/ai-ran-bringing-software-speed-innovation-into-the-radio-network/

https://ai-ran.org/

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Ericsson and Forschungszentrum Jülich MoU for neuromorphic computing use in 5G and 6G

Ericsson and major European research center Forschungszentrum Jülich are collaborating to develop technologies for the continued evolution of 5G and for the future introduction of 6G (IMT-2030) networks. The organizations signed a Memorandum of Understanding (MoU) on March 24, 2026. The project aims to leverage JUPITER, Europe’s first “exascale” supercomputer, to design and test new artificial intelligence solutions for the complex demands of 6G. The partnership will explore AI models and methods to enhance Ericsson’s core network, network management, and Radio Access Network (RAN).

Important objectives include exploring ultra-efficient, “brain-inspired” computing approaches like neuromorphic computing [1] to handle intense network tasks and strengthen Europe’s digital infrastructure. Modern mobile networks rely heavily on Massive MIMO, a technology where many devices communicate simultaneously via numerous antennas. By exploring novel system architecture approaches like neuromorphic computing, researchers aim to speed up optimization and reduce energy use versus classical methods.

Note 1. Neuromorphic computing is a brain-inspired engineering approach that mimics biological neural networks using analog or digital electronic circuits. It combines memory and processing in one place—similar to neurons and synapses—to achieve extreme energy efficiency, speed, and learning capabilities, moving beyond the limitations of traditional computing architecture. Unlike traditional AI that uses continuous data, neuromorphic systems use “spikes”—discrete events in time—to mimic how neurons communicate. Such systems only consume significant power when processing data (“spiking”), making them ideal for ultra-low-power edge computing, unlike traditional computers that are always on. They can process complex, real-world data (like vision or touch) much faster and with far less power than traditional computers.
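The spiking, event-driven behavior described in Note 1 can be sketched with the simplest neuromorphic building block, a leaky integrate-and-fire (LIF) neuron. This is a toy illustration with invented parameters, not code from the Ericsson/Jülich project:

```python
# Toy leaky integrate-and-fire (LIF) neuron, the basic unit of the spiking,
# event-driven computation described in Note 1. Parameters are illustrative.
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate input each step, leak charge, emit a spike (1) and reset
    when the membrane potential crosses the threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # integrate with leak (memory and compute share one state)
        if v >= threshold:
            spikes.append(1)      # discrete event: power is spent only here
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.6, 0.6, 0.0, 0.0, 0.6, 0.6]))  # → [0, 1, 0, 0, 0, 1]
```

Note how the neuron stays silent (and, in hardware, nearly powerless) until accumulated input crosses the threshold, which is the energy-efficiency property that makes such circuits attractive for edge network tasks.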

…………………………………………………………………………………………………………………………………………………………………………………………..

The alliance will study operational strategies like heat recovery to boost energy efficiency in HPC and cloud deployments. The collaboration involves systematic benchmarking of AI methods – including the application of neuromorphic AI – across Ericsson products to assess execution speed, scalability to large datasets, information retention, and storage efficiency.  In addition, the partnership will provide insights into the feasibility of cloud strategies based on concepts from the EuroHPC ecosystem, which is establishing a world-class supercomputing infrastructure.

Professor Laurens Kuipers, a member of the Executive Board of Forschungszentrum Jülich, said: “This collaboration has the potential to make a significant contribution to a more sustainable digital future. By combining our excellence in high-performance computing and our research into novel, neuro-inspired computing approaches with Ericsson’s expertise in telecommunications, we aim to develop more energy-efficient network solutions and strengthen a sovereign European digital infrastructure.”

Image Credit: Forschungszentrum Jülich / Kurt Steinhausen

……………………………………………………………………………………………………………………………………….

Nicole Dinion, Head of Architecture and Technology, Cloud Software and Services, Ericsson said: “The future of mobile networks is deeply intertwined with AI and the need for unparalleled energy efficiency. Our collaboration with Forschungszentrum Jülich, for years a global leader in supercomputing and applied physics, combines their research and computing power with our expertise in all domains of telecoms technology. We will explore architectures that define the next generation of telecommunication.”

The collaboration covers several areas of research:

  • AI methods for Ericsson products across the full portfolio: systematic benchmarking of approaches to assess execution speed, scalability to large datasets, information retention, and storage efficiency. Where security and commercial conditions permit, the teams may also use JUPITER for large-scale model training, leveraging its compute resources.
  • Energy-efficient computing for AI inference at the radio and edge: developing and prototyping highly efficient solutions for tasks such as radio channel estimation and Massive MIMO – a key technology in modern mobile networks, in which many devices communicate simultaneously via numerous antennas. This includes exploring novel system architecture approaches like neuromorphic computing (e.g., memristors) to speed up optimization and reduce energy use versus classical methods.
  • HPC and cloud architectures and operations for AI: researching and implementing Modular Supercomputing Architecture (MSA) concepts from exascale work at Forschungszentrum Jülich – in particular, at the Jülich Supercomputing Centre (JSC) – and studying operational strategies, such as heat recovery, to boost energy efficiency in HPC and cloud deployments.

The collaboration will provide insights into the feasibility of cloud strategies based on concepts from the EuroHPC ecosystem, which is establishing a world-class supercomputing infrastructure with leading European centers such as the JSC.

ABOUT FORSCHUNGSZENTRUM JÜLICH:

Shaping change: This is what drives us at Forschungszentrum Jülich. As a member of the Helmholtz Association with more than 7,000 employees, we conduct research into the possibilities of a digitized society, a climate-friendly energy system, and a resource-efficient economy. We combine natural, life, and engineering sciences in the fields of information, energy, and the bioeconomy with specialist expertise in simulation and data science. www.fz-juelich.de

 

References:

https://www.ericsson.com/en/press-releases/2026/3/ericsson-and-forschungszentrum-julich-to-develop-advanced-ai-for-6g

https://www.ericsson.com/en/blog/2026/1/ai-future-will-be-defined-by-the-intelligent-digital-fabric

https://www.ibm.com/think/topics/neuromorphic-computing

China vs U.S.: Race to Generate Power for AI Data Centers as Electricity Demand Soars

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Expose: AI is more than a bubble; it’s a data center debt bomb

Sovereign AI infrastructure for telecom companies: implementation and challenges

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Custom AI Chips: Powering the next wave of Intelligent Computing

Groq and Nvidia in non-exclusive AI Inference technology licensing agreement; top Groq execs joining Nvidia

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

New Telco Opportunity – AI at the Edge:

At MWC 2026 last week, there was a flurry of claims that “AI at the Edge” would transform the telecom industry. One of many examples is an article titled “The AI edge boom is giving telecom a new strategic role.” In that piece, Jeff Aaron, vice president of product and solutions marketing at Hewlett Packard Enterprise (HPE), spoke with theCUBE’s John Furrier at MWC Barcelona, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed telecom edge AI and why networking is becoming a strategic foundation for data-centric services. Aaron said:

“A big reason for [reignited interest in routing] is AI workloads. They’re moving everywhere now. They have to move to the edge.  For them to move to the edge, you’ve got to get them outside of the factory and to all the locations. We’re right in the core of that, and it’s super exciting.”

As AI expands to the edge, data will need to move not only to local compute, but also between many distributed edge sites, making routing paramount. There are four ways AI infrastructure is scaling — inside data centers and across distributed edge locations, according to Aaron.

“There’s scale-out, scale-across, scale-up, and on-ramp. Two are within the data center — scale-out and scale-up — but scale-across and edge on-ramp basically mean you got to figure out how to connect to those areas, and those are just networking,” he added.

Scale-across refers to connecting distributed data centers and edge locations, while edge on-ramp brings remote sites such as factories or branch locations into the network to access AI services. Supporting those distributed environments creates an opportunity for HPE to bring networking and compute together into a more integrated infrastructure stack. At MWC 2026 Barcelona, those trends are clearly coming into focus, according to Aaron.

“Data is moving everywhere right now, and the network is back. The network isn’t just plumbing. The network is how you build a value-added service using an AI workload as a telco infrastructure,” he added.

Telecom carriers are now urgently trying to move from being “dumb data pipes” to becoming “AI performance platforms” by leveraging their geographically distributed infrastructure to host AI closer to the end user.  They urgently want to pivot from selling just bandwidth and connectivity to selling outcomes and intelligence with a heavy focus on industrial and enterprise-specific edge deployments.  They are considering the following services and business models:

  • Infrastructure as a Service (IaaS) & GPUaaS: Offering raw computing power, specifically GPUs, from edge data centers to enterprises that need low-latency processing without building their own facilities.
  • Sovereign AI Clouds: Providing AI services that guarantee data remains within national borders, appealing to government and highly regulated sectors like finance and healthcare.
  • API Monetization: Exposing real-time network data (e.g., location intelligence, predictive network quality, fraud risk scoring) via APIs that enterprises pay to integrate into their own applications.
  • Outcome-Based Pricing: Charging for specific business results, such as a “guaranteed video call quality” or “fraud loss reduction share,” rather than just data usage.
  • AI-as-a-Service (AIaaS): Bundling pre-trained models or specialized AI agents (e.g., for customer service or industrial monitoring) with connectivity.
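The API-monetization model above can be made concrete with a small sketch. Everything here is hypothetical: the score formula, field names, and payload shape are invented for illustration (real telco API exposure would typically go through frameworks such as the CAMARA project), but it shows the basic idea of folding raw network KPIs into a billable, enterprise-facing signal.

```python
# Hypothetical sketch of "API monetization": a telco exposes a predictive
# network-quality score that an enterprise application pays to query.
# Scoring weights, field names, and payload shape are invented for illustration.
def network_quality_score(latency_ms, packet_loss_pct, cell_load_pct):
    """Fold raw network KPIs into a 0-100 quality score an API could return."""
    score = 100.0
    score -= min(latency_ms, 100) * 0.4      # latency penalty, capped at 40 pts
    score -= min(packet_loss_pct, 10) * 4.0  # packet-loss penalty, capped at 40 pts
    score -= min(cell_load_pct, 100) * 0.2   # congestion penalty, capped at 20 pts
    return round(max(score, 0.0), 1)

def api_response(device_id, kpis):
    """Shape the score as a JSON-style payload a metered API might serve."""
    score = network_quality_score(**kpis)
    return {
        "deviceId": device_id,
        "qualityScore": score,
        "recommendation": "ok" if score >= 70 else "degraded",
    }

print(api_response("dev-42", {"latency_ms": 20, "packet_loss_pct": 0.5, "cell_load_pct": 30}))
```

An enterprise integrating such a score into, say, a video-calling app is paying for network intelligence rather than bandwidth, which is exactly the pivot the carriers describe.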

Major Carrier AI Edge Deployment Plans:

  • AT&T:
    • Launched Connected AI for Manufacturing in March 2026, which unifies 5G, IoT, and generative AI to provide real-time fault detection (claiming a 70% reduction in waste).
    • Deploying “Edge Zones” in major U.S. cities (Detroit, LA, Dallas) to allow developers to run low-latency, cloud-based software locally.
    • Partnering with AWS to link fiber and 5G directly into AWS environments for distributed AI workloads.
  • Verizon:
    • Unveiled Verizon AI Connect, a suite of products designed to manage resource-intensive AI workloads for hyperscalers like Google Cloud and Meta.
    • Trialing V2X (Vehicle-to-Everything) platforms to provide carmakers with standardized APIs for low-latency edge processing in autonomous driving.
    • Collaborating with NVIDIA to integrate GPUs into private 5G networks for on-premise AI inferencing in robotics and AR.
  • SK Telecom (SKT):
    • Announced an “AI Native” strategy at MWC 2026, including a roadmap for AI-RAN (Radio Access Network) that uses GPUs to optimize network performance and host user AI apps simultaneously.
    • Building a Manufacturing AI Cloud powered by over 2,000 NVIDIA RTX GPUs to support digital twin simulations and robotics.
    • Expanding AI Data Centers (AIDC) across South Korea and Southeast Asia (Vietnam, Malaysia) using energy-optimized LNG-powered facilities.
  • Orange & Deutsche Telekom:
    • Deploying AI-powered planning tools to cut fiber rollout costs and optimize site power consumption by up to 33% using AI “Deep Sleep” modes.
    • Focusing on Sovereign AI strategies to ensure data governance for European enterprise customers.
  • Vodafone:
    • Utilizing AI/ML applications for daily power reduction at 5G sites and testing autonomous network healing via AI agents.
  • BT:
    • Offers 5G-connected VR for manufacturing design teams (e.g., Hyperbat) to collaborate on 3D models in real-time.  
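The "Deep Sleep" power savings cited above (up to 33% at Orange and Deutsche Telekom) rest on a simple idea: power down spare radio capacity in hours where predicted traffic is low. A toy sketch, with an entirely hypothetical traffic profile and made-up power figures:

```python
# Toy model of an AI "Deep Sleep" scheduler: put a site into a low-power
# state during hours where predicted load falls below a threshold.
# The hourly load profile and wattages are hypothetical assumptions.

HOURLY_LOAD = [0.05, 0.03, 0.02, 0.02, 0.03, 0.08,   # 00:00-05:00
               0.25, 0.55, 0.70, 0.65, 0.60, 0.62,   # 06:00-11:00
               0.68, 0.66, 0.63, 0.67, 0.75, 0.85,   # 12:00-17:00
               0.90, 0.80, 0.60, 0.40, 0.20, 0.10]   # 18:00-23:00

ACTIVE_W, SLEEP_W = 1000.0, 350.0   # site draw awake vs. asleep (made up)

def daily_savings_pct(load_by_hour, sleep_threshold=0.10):
    """Percent of daily energy saved by sleeping below-threshold hours."""
    baseline = ACTIVE_W * len(load_by_hour)
    actual = sum(SLEEP_W if load < sleep_threshold else ACTIVE_W
                 for load in load_by_hour)
    return 100.0 * (1 - actual / baseline)
```

Whole-site sleep over six night hours saves about 16% in this toy model; real deployments get closer to the quoted 33% by predicting load per cell and sleeping individual carriers and antenna branches rather than entire sites.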
……………………………………………………………………………………………………………..
Summary of Emerging AI Edge Products:
Product Category        | Primary Target  | Key Value Proposition
AI-RAN                  | Industry 4.0    | Seamless, ultra-low latency for robotics and sensing.
Connected AI Platforms  | Manufacturing   | Real-time predictive maintenance and waste reduction.
AI-as-a-Service (AIaaS) | Developers/SMBs | Access to GPU power and pre-trained models via telco edge nodes.
Network Slicing APIs    | App Developers  | Programmatic control over bandwidth for AR/VR and gaming.

…………………………………………………………………………………………………………………………………………………………………………………………..

A Dissenting View of “AI at the Edge”:

The market for AI within the global telecommunications sector is valued at $6.69 billion in 2026, growing at a compound annual growth rate (CAGR) of 41.9% from 2025. The broader edge AI market—including hardware, software, and services—is forecast to reach $29.98 billion in 2026, according to The Business Research Company. We think those estimates are far too high.
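For context, the quoted figures imply the following arithmetic. The base numbers come from the forecasts cited above; the three-year projection horizon is our arbitrary choice, used only to show what sustaining such a growth rate would mean:

```python
# Back out the implied 2025 base and project forward from the quoted
# 2026 figures, using simple compound-growth arithmetic.

def compound(value: float, rate: float, years: int) -> float:
    """Grow a value at a constant CAGR (negative years to go backwards)."""
    return value * (1 + rate) ** years

telecom_ai_2026 = 6.69   # $B, telecom-sector AI market in 2026
cagr = 0.419             # 41.9% compound annual growth rate
implied_2025 = compound(telecom_ai_2026, cagr, -1)   # ~= $4.71B

edge_ai_2026 = 29.98     # $B, broader edge AI market in 2026
# If the 41.9% rate held for three more years (a big "if", in our view):
edge_ai_2029 = compound(edge_ai_2026, cagr, 3)       # ~= $85.7B
```

The sketch shows why we are skeptical: sustaining a roughly 42% CAGR would nearly triple the edge AI market in three years, far faster than any recent telecom spending category has grown.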

The market research firm states:

………………………………………………………………………………………………………

Author’s Opinion:

Unless telcos change their corporate culture and slow the footprint growth of cloud service providers/hyperscalers, we think AI at the edge will be yet another telco monetization failure, joining their failures to monetize 4G LTE apps, the telco cloud, 5G, multi-access edge computing (MEC), OpenRAN, LPWANs, and other telecom technologies that never lived up to their promise and potential.

That’s largely because telcos are very weak at developing IT platforms, compute services, and killer applications, and at rapidly executing new services (e.g., 5G services require a 5G SA core network, which telcos were very slow to deploy). Telecom execs themselves cite cultural and speed-of-change issues: the industry is not organized like a software company, so it struggles to iterate products at AI/cloud pace. Telcos also historically struggle with software, and managing distributed GPU clusters is vastly different from managing cell towers.

After telcos spent billions on 5G with little or no ROI, investors are skeptical of the increased capex required for AI-grade edge servers, which telcos must maintain. Those servers will be expensive (especially if they contain clusters of Nvidia GPUs) and consume a lot of power, a critical issue at the edge of the carrier’s network.
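A back-of-envelope total-cost-of-ownership check illustrates the investor concern. Every number below is a hypothetical assumption chosen for illustration, not sourced pricing:

```python
# Toy payback calculation for a GPU-equipped edge site. All inputs are
# hypothetical assumptions; the point is the shape of the calculation
# investors will run, not the specific figures.

def edge_site_payback_years(capex: float, power_kw: float,
                            kwh_price: float, annual_revenue: float,
                            opex_other: float) -> float:
    """Years to recoup capex from annual revenue minus running costs."""
    annual_power_cost = power_kw * 24 * 365 * kwh_price
    annual_margin = annual_revenue - annual_power_cost - opex_other
    if annual_margin <= 0:
        return float("inf")   # the site never pays back
    return capex / annual_margin

# Hypothetical: $400k GPU cluster, 15 kW continuous draw, $0.15/kWh,
# $120k/yr in edge-service revenue, $40k/yr other opex.
years = edge_site_payback_years(400_000, 15.0, 0.15, 120_000, 40_000)
```

At these made-up numbers the site takes over six years to pay back, longer than a GPU hardware generation, which is exactly the mismatch between capex cycles and revenue that the skeptics point to.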

Many network operators frame AI/edge as “network optimization” or “utilizing underused sites,” not as building monetizable AI platforms with APIs, SDKs, and ecosystems. This mirrors 5G, where huge RAN/core builds were not matched by a clear product and platform strategy, leaving value to OTTs and hyperscalers, which are extending their control planes and protocol stacks to the network edge (local zones, operator co-lo, on-premises stacks).

Telcos risk becoming “dumb pipes” for AI traffic if they can’t provide a superior developer ecosystem. If they only sell space, power, and connectivity, the cloud service providers will continue to own the developer and AI value chain. Analysts warn that edge is a “right to participate, not a right to win.” Value accrues to whoever owns the AI platform, tools, marketplace, and pricing power, not to the entity that provides connectivity, PoPs, or cell towers.

Data fragmentation and weak “intelligence” layer:

  • AI monetization depends on high‑quality, cross‑domain data, but telco data is fragmented across OSS, BSS, probes, and partner systems; without unification, it is hard to expose compelling network/edge intelligence services.

  • Analysts emphasize that failure here reduces telcos to generic GPU landlords, while higher‑margin offers (real‑time quality, fraud, identity, mobility/context APIs) remain unrealized.

Narrow internal focus on cost savings:

  • Many operators’ early AI focus is inward (Opex reduction in assurance, planning, customer care) rather than building external, revenue‑generating products, echoing how early 5G was justified mainly on cost/efficiency.

  • Commentators warn that if AI/edge remains a “network efficiency” play, the commercial upside will go to cloud/AI natives that turn similar capabilities into products sold to enterprises.

What analysts say telcos must do differently:

  • Build “Sovereign AI factories” and edge AI clouds: GPU‑enabled sites with cloud‑like developer experience (APIs, self‑service portals, metering, SLAs) and clear sovereign/regional guarantees.

  • Combine differentiated connectivity with AI services (latency‑backed SLAs, AI‑on‑RAN, domain‑specific models for verticals) and use modern, flexible commercial models instead of just selling bandwidth or colocation.

Conclusions:

In summary, the main risk for telcos is failing to transition from owning and maintaining network infrastructure to owning and operating AI platforms and products at software-industry speed. AI at the edge is less a new service or product and more an architectural upgrade. Telcos can benefit in two ways:

  1.  Internal cost reduction: If telcos use it to lower their own costs (fraud prevention, risk management, predictive maintenance, fault isolation, self-healing networks, etc.), it’s an automatic win but won’t increase the top line.
  2.  Revenue from new AI-edge services, e.g. Verizon uses edge-based video analytics in warehouses to improve inventory turnover by up to 40%. If telcos expect to charge a massive premium for “AI-enabled 5G,” they face the same monetization wall that has doomed them for the past 20 years!

References:

https://siliconangle.com/2026/03/04/telecom-edge-ai-makes-networking-strategic-mwc26/

https://www.nvidia.com/en-us/lp/ai/the-blueprint-for-ai-success-ebook/

How telcos can monetize AI beyond connectivity

https://www.thebusinessresearchcompany.com/report/generative-artificial-intelligence-ai-in-telecom-global-market-report

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAGR forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

AT&T is strategically re-architecting its infrastructure for the AI era through high-capacity network modernization and deep integration with hyperscale cloud providers.

In addition to its almost six-year-old deal to run its 5G SA core network in Microsoft Azure’s cloud, AT&T announced at MWC 2026 that it’s now working with Amazon Web Services (AWS) to extend 5G and fiber connectivity from business customer locations directly into AWS environments, creating secure, resilient, and reliable premises‑to‑cloud architectures for AI workloads. The collaboration is designed to reduce network complexity and latency while supporting real‑time analytics, machine learning, and agentic AI use cases.

This collaboration continues a long-standing relationship between AT&T and AWS and follows recent news outlining broader efforts to modernize the nation’s connectivity infrastructure by providing high-capacity fiber to AWS data centers, migrating AT&T workloads to AWS cloud capabilities, and exploring emerging satellite technologies.

AWS Interconnect – last mile embeds AT&T‑delivered connectivity directly into AWS workflows. It is designed to let customers provision and manage last‑mile connectivity within the AWS environment, and it lays the foundation for AI agents that monitor and manage the AI experience from the user to the cloud. This streamlined, self‑managed approach helps enterprises reduce network complexity while maintaining control of their extended enterprise network, allowing businesses to move faster as they scale AI.

High level illustration of the planned AWS Interconnect – last mile architecture, showing how resilient interconnections and AT&T Fiber and fixed wireless access are intended to simplify private connectivity from customer locations into AWS environments. 

Diagram Source: AT&T

………………………………………………………………………………………………………

“AI does not just need more compute; it needs flatter networks and faster connections,” said Shawn Hakl, SVP & Head of Product, AT&T Business. “By bringing high‑capacity connectivity closer to cloud platforms, integrating the management of the networks directly into the cloud provisioning process and engineering for resiliency at the metro level, AT&T is helping enterprises streamline their networks, improve performance, security, and scale AI with confidence.”

AT&T says it is building an AI‑ready network (?) designed to scale performance through continued network investment, including capacity growth to 1.6 Tbps across key metro and long‑haul routes.

AT&T also announced it would work with Nvidia, Microsoft and MicroAI through its Connected AI platform for “smart manufacturing.”

………………………………………………………………………………………………………………..

Finally, AT&T described AT&T Geo Modeler, which better predicts connectivity for emerging technologies like autonomous vehicles, drones, and robotics.

The Geo Modeler is an AI-powered simulation tool that helps predict, in near real time, how a wireless network will perform in the real world. Inspired by the video games AT&T scientist Velin Kounev played with his family growing up, the virtual model and simulation is “essentially like a giant video game of the United States” that, infused with AI tools, gives engineers a clearer picture of where potential weak spots may appear. Issues can then be addressed earlier and fixes rolled out faster.
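AT&T has not published Geo Modeler’s internals, but the core idea of sweeping a virtual map for coverage weak spots can be illustrated with the standard free-space path loss formula. The tower parameters below (a 3.5 GHz carrier at 43 dBm, a -100 dBm usability threshold) are arbitrary assumptions for the sketch:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula: d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def covered(tx_dbm: float, distance_km: float, freq_mhz: float,
            rx_threshold_dbm: float = -100.0) -> bool:
    """Is received power above the usable threshold at this distance?"""
    return tx_dbm - fspl_db(distance_km, freq_mhz) >= rx_threshold_dbm

# Toy "map": sweep a line of points away from one tower and find where
# coverage runs out -- a 1-D stand-in for the grid sweep over terrain
# data that a real simulator performs.
edge_km = max(d / 10 for d in range(1, 1000)
              if covered(43.0, d / 10, 3500.0))
```

A production simulator replaces free-space loss with terrain-aware propagation models, 3D building data, and learned corrections, then runs the sweep over the whole country, which is what makes the “giant video game” framing apt.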

“The Geo Modeler helps us see how the real world will shape coverage before we build, so we can deliver connectivity that’s ready for what’s next,” said AT&T scientist Velin Kounev.

Matt Harden, VP of Connected Solutions at AT&T, agrees. “The Geo Modeler is a foundational capability for the connected mobility era,” he said. “By marrying advanced geospatial simulation with AI-driven network orchestration, we can deliver predictable, high-performance connectivity that adapts with the environment. Whether it’s a hurricane, a packed stadium, or a city corridor full of autonomous vehicles, we will be prepared.”

References:

https://about.att.com/story/2026/aws-collaboration-scalable-business-ai.html

https://about.att.com/blogs/2026/150-years-of-connection.html

https://about.att.com/blogs/2025/geo-modeler.html

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T’s convergence strategy is working as per its 3Q 2025 earnings report

Progress report: Moving AT&T’s 5G core network to Microsoft Azure Hybrid Cloud platform

AT&T 5G SA Core Network to run on Microsoft Azure cloud platform

 
