Analysis: Amazon <- Globalstar - a strategic move for D2D and spectrum parity

Overview:

Amazon said today that it will acquire Globalstar in an $11.57 billion deal, bolstering its fledgling satellite internet business as it tries to catch up with Elon Musk’s Starlink.

Amazon is accelerating its Project Kuiper deployment, aiming to launch approximately 3,200 Low Earth Orbit (LEO) satellites by 2029. To meet regulatory milestones, nearly 50% of the constellation must be operational by the July deadline, with commercial satellite broadband services slated for a soft launch later this year.
The acquisition of Globalstar augments Amazon’s Direct-to-Device (D2D) connectivity offerings. Globalstar’s current architecture is optimized for low-bandwidth, high-reliability mobile links that bypass traditional terrestrial RAN infrastructure. This capability is vital for ubiquitous emergency services and IoT connectivity in non-terrestrial network (NTN) white spaces. Through this deal, Amazon expects to operationalize its own D2D offerings by 2028. It should be noted that 3GPP is the ONLY SDO actively developing NTN standards; ITU-R and ETSI largely transpose 3GPP’s NTN specifications rather than producing their own.
“There are billions of customers out there living, traveling, and operating in places beyond the reach of existing networks, and we started Amazon Leo to help bridge that divide,” said Panos Panay, Senior Vice President of Devices & Services, Amazon.
“By combining Globalstar’s proven expertise and strong foundation with Amazon’s customer-obsession and innovation, customers can expect faster, more reliable service in more places—keeping them connected to the people and things that matter most. We’re excited to support Apple users through the Leo D2D system, and look forward to working with mobile network partners to help extend coverage to every corner of the planet.”
Image credit: Amazon
The Competitive Landscape: Starlink vs. Kuiper:

SpaceX’s Starlink currently maintains a significant lead with over 9 million global subscribers. While Starlink’s core business remains high-throughput fixed wireless via proprietary user terminals, it is aggressively pursuing D2D through spectrum-sharing partnerships with Mobile Network Operators (MNOs) like T-Mobile.
Industry analysts suggest that acquiring Globalstar is a “spectrum play.” Armand Musey of Summit Ridge Group noted that the deal allows Amazon to secure a critical spectrum position and potentially leapfrog Starlink in D2D deployment timelines. Furthermore, Amazon’s proposed data center constellation is engineered for a massive scaling of network capacity, intended to exceed current LEO benchmarks.
“Amazon has been falling behind Starlink on satellite broadband. Acquiring Globalstar allows them to catch up on their D2D spectrum position, and leap ahead on D2D deployment,” said Armand Musey, president & founder of Summit Ridge Group.
Amazon Leo’s proposed data center constellation would dwarf Starlink’s current network by several orders of magnitude.
The Apple-Globalstar Ecosystem:

Crucially, Globalstar’s existing partnership with Apple remains intact. Globalstar currently provides the L-band connectivity powering Apple’s Emergency SOS and Find My features. Amazon has confirmed it will honor these agreements, maintaining the 2024 framework where Apple invested $1.5 billion for a 20% equity stake to expand the constellation to 54 satellites.  See References below.
Market Consolidation and Valuations:

The move follows a broader trend of sector consolidation as players seek the scale required to compete with SpaceX’s vertical integration and launch frequency.
  • Deal Metrics: Amazon’s acquisition values Globalstar at approximately $10.8 billion ($90/share), representing a 31% premium over the pre-announcement close.
  • Regulatory Path: The merger is expected to close in 2025, pending FCC approval and the achievement of specific deployment KPIs. FCC Chair Brendan Carr indicated the agency remains “open-minded” regarding the consolidation.

Author’s Opinion & Analysis (aided by perplexity.ai):

Amazon’s Globalstar acquisition is a strong strategic move for D2D, but it is more a spectrum-and-regulatory shortcut than a pure technology leap. The telecom significance is that Amazon is buying not just satellites, but licensed Mobile Satellite Service (MSS) spectrum, operational know-how, and an immediate path into direct-to-device connectivity that would otherwise take years to assemble.

From a telecom perspective, the key asset is spectrum parity. Globalstar holds licensed MSS spectrum in the L/S-band ranges used for satellite mobile services, and that spectrum is hard to replicate because the FCC has previously rejected or constrained new entrants in those bands. That makes the deal valuable less as a fleet expansion play and more as a way to secure a legally usable radio layer for D2D.

Amazon’s stated plan is to combine Globalstar’s spectrum and MSS operations with Amazon Leo to deliver D2D services beginning in 2028, with claims of higher spectrum efficiency than legacy direct-to-cell systems. In telecom terms, that implies Amazon wants to move from “coverage extension” into a more integrated NTN architecture that can support voice, text, and eventually data services at scale.

Against Starlink, this is a defensive and offensive move at once. Starlink already has a lead in satellite scale and has commercialized carrier partnerships like T-Mobile’s direct-to-cell offering, so Amazon’s problem has been less launch capacity than spectrum and service readiness. Buying Globalstar narrows that gap by giving Amazon a ready-made regulatory and spectrum base instead of forcing it to negotiate every D2D pathway from scratch.

Against carriers, the move is more nuanced. Amazon is not simply disintermediating mobile operators; its own materials describe D2D as a way to help MNOs extend voice, text, and data beyond terrestrial reach. That suggests a wholesale or partner model, but the long-term competitive risk is obvious: if Amazon owns the satellite layer and the device/service stack, carriers may become optional distribution partners rather than network gatekeepers.

The phrase “spectrum parity” is the real strategic clue. In telecom, constellation size matters, but spectrum rights determine whether a constellation can actually deliver service with usable link budgets, device compatibility, and regulatory clearance. Globalstar’s spectrum therefore acts like a license to compete, not just a frequency block.

This also helps explain why the deal is strategically defensive for Amazon. Without Globalstar, Amazon would face a slower, less certain path through band planning, interference disputes, and NTN-specific regulatory work, especially in crowded MSS allocations. In that sense, the acquisition is a classic telecom play: buy scarce spectrum, then scale the network around it.

The biggest near-term risk to this deal is regulatory. The transaction will need FCC and likely antitrust review, and Amazon will also have to navigate the Apple/Globalstar relationship because Globalstar powers Apple’s Emergency SOS service. That creates both transition risk and potential bargaining leverage for Apple, which could complicate service continuity and deal terms.

Technically, D2D is still constrained by small link budgets, handset antenna limits, and the need to prioritize messaging and emergency services before richer data use cases. Even if Amazon claims better spectrum efficiency, the first commercially meaningful services will likely remain low-throughput, coverage-oriented offerings rather than full terrestrial substitutes. So the real competition is not “satellite internet for phones” in the consumer broadband sense, but who controls the premium coverage layer for dead zones, emergency service, enterprise continuity, and carrier augmentation.
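The link-budget constraint is easy to make concrete. The sketch below computes free-space path loss and a Shannon-capacity upper bound for a smartphone-to-LEO uplink; all parameter values (handset EIRP, satellite antenna gain, slant range, bandwidth, noise figure) are illustrative assumptions, not Globalstar or Kuiper figures:

```python
import math

def fspl_db(d_km: float, f_ghz: float) -> float:
    """Free-space path loss in dB for distance in km, frequency in GHz."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

def shannon_bps(bw_hz: float, snr_db: float) -> float:
    """Shannon capacity upper bound for the given bandwidth and SNR."""
    return bw_hz * math.log2(1 + 10 ** (snr_db / 10))

# Illustrative assumptions (NOT Globalstar/Kuiper figures):
EIRP_DBM = 23.0          # typical smartphone transmit EIRP
SAT_RX_GAIN_DBI = 30.0   # assumed satellite receive antenna gain
SLANT_RANGE_KM = 550.0   # LEO altitude, satellite at zenith
FREQ_GHZ = 2.0           # S-band, roughly the MSS range at issue
BW_HZ = 1e6              # 1 MHz channel
NOISE_FIGURE_DB = 5.0

path_loss = fspl_db(SLANT_RANGE_KM, FREQ_GHZ)
rx_power_dbm = EIRP_DBM - path_loss + SAT_RX_GAIN_DBI
noise_dbm = -174 + 10 * math.log10(BW_HZ) + NOISE_FIGURE_DB  # thermal noise + NF
snr_db = rx_power_dbm - noise_dbm
print(f"path loss {path_loss:.1f} dB, SNR {snr_db:.1f} dB, "
      f"Shannon bound {shannon_bps(BW_HZ, snr_db) / 1e6:.2f} Mbit/s")
```

Even with these fairly generous assumptions, the upper bound lands in the low single-digit Mbit/s range for an entire 1 MHz channel shared among users, which is why early D2D services are messaging-first.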

In conclusion, Amazon is making a category-defining infrastructure purchase, not just a corporate acquisition. If approved, it gives Amazon a credible D2D spectrum position, reduces its regulatory latency, and turns Amazon Leo into a more complete and highly competitive NTN platform and D2D service provider.

……………………………………………………………………………………………………………………….

References:

https://www.aboutamazon.com/news/company-news/amazon-globalstar-apple

https://www.reuters.com/business/media-telecom/amazon-signs-1157-billion-deal-satellite-firm-globalstar-challenge-starlink-2026-04-14/

Amazon Leo (formerly Project Kuiper) unveils satellite broadband for enterprises; Competitive analysis with Starlink

Blue Origin announces TeraWave – satellite internet rival for Starlink and Amazon Leo

NBN selects Amazon Project Kuiper over Starlink for LEO satellite internet service in Australia

Amazon launches first Project Kuiper satellites in direct competition with SpaceX/Starlink

Emergency SOS: Apple iPhones to be able to send/receive texts via Globalstar LEO satellites in November

FCC proposes regulatory framework for space-mobile network operator collaboration

AT&T deal with AST SpaceMobile to provide wireless service from space

Starlink Direct to Cell service (via Entel) is coming to Chile and Peru by end of 2024

Starlink’s Direct to Cell service for existing LTE phones “wherever you can see the sky”

Anthropic’s Project Glasswing aims to reshape IT cybersecurity

Backgrounder:

Late last year, Anthropic said that state-sponsored Chinese hackers had used its artificial intelligence (AI) technology in an effort to infiltrate the computer systems of roughly 30 companies and government agencies around the world. The company said it was the first reported case of a cyberattack in which AI technologies had gathered sensitive information with limited help from human operators.

As Anthropic and its chief rival, OpenAI, prepare to release new and more powerful AI systems, cybersecurity experts are increasingly vocal in their warnings that AI is fundamentally changing cybersecurity.  AI technology could allow hackers to identify security holes in computer systems far faster than in the past, vastly raising the stakes in the decades-long fight between hackers and the security experts guarding computer networks.  As hackers deploy AI to break and steal, security experts are also leaning on AI to spot flaws in their systems — including some that had gone unnoticed for decades.

“This is the most change in the cyber environment, ever,” said Francis deSouza, the chief operating officer and president of security products at Google Cloud. “You have to fight AI with AI.”

Hackers have used AI chatbots to draft phishing emails and ransom notes, cybersecurity experts said. Others have used AI to parse large quantities of stolen data and determine what information might be valuable. Without help from AI, attackers could sometimes break into computer networks within minutes, Mr. deSouza said, but with the help of AI, breaches can take just seconds. Some hackers specialize in breaking into systems and then selling off their access to other attackers. Those handoffs used to take as much as eight hours, as hackers negotiated the sales and passed along the compromised entry points, deSouza added. Now that process has accelerated to about 20 seconds, he said, with hackers sometimes using AI agents to speed up the process.

Some experts argue that the guardrails added by companies like Anthropic and OpenAI can actually provide an advantage to malicious attackers. Guardrails could cause an AI chatbot to deny help to a user trying to defend a system from an attack, they argue, but persistent hackers could be more diligent about finding vulnerabilities — and keeping those tricks to themselves.

In February, Anthropic said it had used its AI technologies to find over 500 so-called zero-day vulnerabilities — security holes that were unknown to software makers — in various pieces of commonly used open source software. The next month, a researcher at Anthropic revealed that he had used AI to find a serious security vulnerability in the core of the Linux operating system, which powers much of the internet and is used in computer servers, cloud computing services, Android phones and Teslas. The bug had existed, apparently undiscovered, since 2003.

Project Glasswing Overview:

Anthropic has announced Project Glasswing – a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks – in an effort to secure the world’s most critical software.

The fast-growing private AI company has found that AI models (like its own Claude) have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Its Mythos Preview language model has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.

Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

The Project Glasswing partners will use Mythos Preview as part of their defensive security work. Anthropic will share what it learns so the entire IT industry can benefit. It has also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems.

Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

Project Glasswing Core Objectives:
  • Give Defenders a Head Start: The initiative aims to use Mythos’s capabilities to find and fix zero-day vulnerabilities in critical codebases before they can be discovered by malicious actors.
  • Secure Critical Infrastructure: Partners use the model to scan first-party systems and open-source software that underpin global banking, energy, and logistics networks.
  • Modernize Defense Practices: Anthropic is collaborating with partners to evolve security workflows, such as patching and disclosure processes, to match the “machine speed” of AI-driven vulnerability discovery.
Claude Mythos Capabilities:
The Glasswing initiative was formed after Anthropic researchers observed that the Mythos model had reached a threshold where its reasoning and coding skills surpassed all but the most skilled human security researchers.
  • Zero-Day Discovery: In early testing, the model autonomously found thousands of high-severity vulnerabilities, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg code that had been scanned by automated tools millions of times without detection.
  • Performance Benchmarks: Mythos Preview scored 83% on the CyberGym cybersecurity benchmark, significantly outperforming previous models like Claude Opus.

 

References:

https://www.anthropic.com/glasswing

https://www.nytimes.com/2026/04/06/technology/ai-cybersecurity-hackers.html

Anthropic Glasswing: AI Vulnerability Detection Has Crossed a Threshold

Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

New Linux Foundation white paper: How to integrate AI applications with telecom networks using standardized CAMARA APIs and the Model Context Protocol (MCP)

US Mobile’s new bundle combines its multi-network mobile service with Starlink residential internet

MVNO US Mobile has announced a partnership with Starlink to offer customers a bundle that combines its prepaid wireless service with home internet from the SpaceX-owned LEO satellite internet provider. Ahmed Khattak, CEO of US Mobile, announced the partnership on Reddit, saying the Starlink One service will be offered without data caps. Khattak stated the Starlink bundle will be offered with US Mobile’s unlimited Standard or Premium plans, which can access all three major networks, meaning customers only need to deal with one bill, one app and “one company that actually picks up the phone.”

“I won’t tease numbers too hard, but imagine a plan for less than $50 a month that spans every major network in the United States, extends across Canada and Mexico, includes internet from space at home,” Khattak wrote. US Mobile has MVNO deals in place with AT&T, Verizon and T-Mobile US and uses a platform which gives customers the ability to switch between networks. This “terrestrial and celestial” unification allows customers to manage their home and mobile connectivity under a single bill and app.

US Mobile and SpaceX have joined forces to redefine convergence. | Image by US Mobile

Details on the exact cost of the bundled tier and Starlink equipment were not available. Wave7 Research analyst Jeff Moore told Mobile World Live that Starlink started offering its home broadband service last month in 120 T-Mobile Boost retail stores as part of a pilot program. “If Starlink is working to sell home Internet via Boost and providing mobile connectivity via US Mobile, then Starlink is probably having conversations with other MVNOs about options for becoming channels for internet sales and for mobile satellite connectivity,” he explained.

Meanwhile, Khattak stated he expects similar deals will follow with additional satellite broadband providers such as Amazon Leo. “The endgame is Global Multi-Orbit Convergence: every major terrestrial network on the ground, every major LEO constellation in the sky, stitched together into a single plan that follows you anywhere on earth,” Khattak added.

The mobile portion of the bundle leverages US Mobile’s “unification layer,” which provides dynamic access to all three major US networks.
  • Dynamic Network Switching: Users can access Warp (Verizon), Dark Star (AT&T), and Light Speed (T-Mobile).
  • Automatic Handover: While US Mobile previously required manual “Teleporting” between networks, the new Multi-Network Add-on allows phones to automatically switch to the strongest available signal or a backup network if the primary one fails.
  • Unified Account: Both the Starlink satellite session and terrestrial cellular lines are managed via a single “unification layer,” which CEO Ahmed Khattak describes as a software infrastructure that’s been a decade in the making.
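The automatic handover behavior described above can be sketched in a few lines. This is a hypothetical illustration, not US Mobile’s actual implementation; the network names mirror US Mobile’s branding, and the measurement fields and hysteresis threshold are assumed values:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkStatus:
    name: str         # e.g. "Warp", "Dark Star", "Light Speed"
    rsrp_dbm: float   # measured signal strength (RSRP)
    reachable: bool   # did the last connectivity probe succeed?

def pick_network(candidates: list[NetworkStatus],
                 current: Optional[str] = None,
                 hysteresis_db: float = 3.0) -> Optional[str]:
    """Select the strongest reachable network, with hysteresis so the
    device does not flap between two networks of similar strength."""
    usable = [n for n in candidates if n.reachable]
    if not usable:
        return None  # no network available at all
    best = max(usable, key=lambda n: n.rsrp_dbm)
    if current:
        cur = next((n for n in usable if n.name == current), None)
        # Stay put unless the best alternative is clearly stronger.
        if cur and best.rsrp_dbm - cur.rsrp_dbm < hysteresis_db:
            return current
    return best.name

measurements = [
    NetworkStatus("Warp", -95.0, True),         # Verizon
    NetworkStatus("Dark Star", -93.0, True),    # AT&T
    NetworkStatus("Light Speed", -80.0, False), # T-Mobile, probe failed
]
print(pick_network(measurements, current="Warp"))  # stays on "Warp": 2 dB gap < hysteresis
```

The hysteresis check is the interesting design choice: without it, a phone sitting between two towers of near-equal strength would "teleport" back and forth on every measurement cycle.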
Plan Limitations:
  • Introductory Pricing: Most Starlink discounts revert to standard pricing (an increase of roughly $20/month) after the first six months.
  • Availability: The bundle is not available in certain areas subject to Starlink congestion pricing.
  • Hardware Requirements: To use dynamic network switching, the device must support multiple active eSIMs.

AT&T recently launched OneConnect, a cellular and fiber bundle providing one mobile line and fiber internet for $90 per month. T-Mobile’s MVNO Mint Mobile countered with a wireless and 5G internet bundle starting at $45 per month.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

References:

US MVNO teams with Starlink on home, wireless bundle

https://www.phonearena.com/news/us-mobile-starlink_id179545

Direct-to-Device (D2D) satellite network comparison: Starlink V2 (Starlink Mobile) vs “Satellite Connect Europe”

Blue Origin announces TeraWave – satellite internet rival for Starlink and Amazon Leo

Amazon Leo (formerly Project Kuiper) unveils satellite broadband for enterprises; Competitive analysis with Starlink

Starlink doubles subscriber base; expands to 42 new countries, territories & markets

Elon Musk: Starlink could become a global mobile carrier; 2 year timeframe for new smartphones

KDDI unveils AU Starlink direct-to-cell satellite service

GEO satellite internet from HughesNet and Viasat can’t compete with LEO Starlink in speed or latency

Nokia’s AI Applications Study: “Physical AI” may require RAN redesign to support high‑volume, low‑latency uplink traffic

According to Nokia, AI-generated traffic in most mobile networks is at an early stage, with application maturity and adoption by consumers and enterprises only at the start of a broader AI super cycle. The Finland-based company analyzed more than 50 AI applications and came to three conclusions: higher uplink traffic, overall data growth, and increasing sensitivity to delay in conversational services such as chat and voice. The mobile network industry is also moving toward “AI-RAN” or “6G-native” architectures that embed AI into the network, transforming radio sites into “robotic” nodes capable of edge inference and of handling these new demands.

–>Do those findings require a structural change in Radio Access Network (RAN) design?  Let’s take a fresh look…..

Mobile networks traditionally support a heterogeneous mix of traffic, ranging from high-throughput video streaming to low-bandwidth, delay-tolerant messaging. Network operators typically address escalating capacity demands through infrastructure expansion and overprovisioning, relying on best-effort delivery—a model that has proven remarkably resilient. However, capacity alone is insufficient for new use cases.

The transition from circuit-switched voice to packet-switched IP traffic (voice/video/data) required a redesign to accommodate variable packet sizes instead of predictable, continuous voice patterns. The proliferation of Internet of Things (IoT) devices introduced requirements for massive machine-type communications (mMTC), driving the development of LTE-M and NB-IoT to optimize for deep indoor penetration and power efficiency. Conversely, consumer web-based services and video streaming scale seamlessly by adding RAN and core capacity. Existing AI applications, such as generative AI chatbots, follow this model, making current RAN architectures adequate for the present load.

A paradigm shift is emerging with Physical AI [1.], which enables machines like autonomous vehicles and robots to interact with the environment in real time. Unlike traditional video streaming, these applications cannot leverage buffering to absorb network jitter. In Physical AI, high-definition video frames and sensor data must arrive within stringent time-to-live (TTL) constraints to remain actionable. This shifts the focus from average throughput to consistent low latency. Maintaining this strict QoS, particularly in the uplink, requires abandoning best-effort, overprovisioned models in favor of guaranteed scheduling, which necessitates substantial reserved capacity or specialized AI-RAN functionalities.

Note 1. Physical AI combines sensors, perception, decision-making, and actuators so machines can understand their environment and take physical (real world) action. Physical AI is used by robots, vehicles, drones, industrial machines, and smart infrastructure that generate and consume real-time sensor, video, and control traffic. These systems need tight coupling between low latency, high reliability, and continuous feedback loops because decisions in software immediately affect physical motion or control. Physical AI is different from typical generative AI because the output is not text or images; it is real-world action. That makes network performance critical, especially for uplink-heavy, latency-sensitive traffic where delays can affect safety, control accuracy, and operational efficiency.

“Physical AI introduces the possibility that large-volume uplink video with strict latency requirements will become a meaningful part of mobile traffic, creating both a design challenge and a monetization opportunity,” says Harish Viswanathan, Head of the Radio Systems Research Group at Nokia.

Image Credit: Techslang

Delivering uplink video with sub‑20 ms end-to-end latency can require provisioning three to four times the average uplink capacity. While this level of redundancy is manageable for low-bandwidth services such as voice or control signaling, it becomes prohibitively expensive when supporting high-throughput video streams.
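Nokia’s three-to-four-times figure can be reproduced with a back-of-envelope simulation. The sketch below assumes a variable-bitrate video source with log-normally distributed frame sizes (an assumed traffic model, not Nokia’s) and asks how much capacity is needed so that even 99th-percentile frames meet a 20 ms uplink deadline without buffering:

```python
import random
import statistics

random.seed(42)
FPS = 30               # frames per second from the camera
DEADLINE_S = 0.020     # each frame must be uplinked within 20 ms
MEAN_RATE_BPS = 10e6   # long-run average uplink rate of the stream

# Assumed VBR model: frame sizes are log-normal around the mean
# (I-frames make the tail heavy). Illustrative only.
mean_frame_bits = MEAN_RATE_BPS / FPS
frames = [random.lognormvariate(0, 0.6) * mean_frame_bits for _ in range(10_000)]
scale = mean_frame_bits / statistics.mean(frames)  # renormalize the average
frames = [f * scale for f in frames]

# With no buffering, a frame of size B bits needs B / DEADLINE_S of
# instantaneous uplink capacity. Provision for the 99th percentile.
required = sorted(f / DEADLINE_S for f in frames)
p99 = required[int(0.99 * len(required))]
avg_required = statistics.mean(required)
print(f"avg {avg_required / 1e6:.1f} Mbit/s, p99 {p99 / 1e6:.1f} Mbit/s, "
      f"ratio {p99 / avg_required:.1f}x")
```

Under these assumptions the 99th-percentile requirement comes out roughly three to four times the average, matching the headroom Nokia describes. Buffering would erase that gap, but Physical AI, by definition, cannot buffer.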

As device densities increase, the required headroom for reserved capacity grows disproportionately, significantly constraining network scalability and driving up cost per bit. This makes Physical AI traffic—characterized by real-time sensor and video inputs for machine analysis—fundamentally different from conventional services, and unsuited to existing best‑effort transport models.  From a Nokia blog post:

“Physical AI will rely on low latency videos to enable real-time control. While the machines or robots will perform most functions locally, there will be situations where they need to rely on more powerful models or human operators to provide remote control via the network. For example, driverless taxis may require remote assistance in unexpected scenarios; service robots may need guidance in complex environments; drones may depend on real‑time video analysis at the point of delivery; and field workers using AR may require timely visual instructions. In all these cases, the network must deliver fresh video information with low and predictable latency.”

To address these challenges, telecom operators are expected to adopt a multi‑layer approach encompassing network architecture, traffic management, and service monetization.

At the Application layer, not all traffic requires identical latency treatment. When video or sensor data is processed by AI rather than consumed by humans, only semantically relevant information may need immediate uplink transmission. This emerging paradigm, known as semantic communication, allows for significant data reduction while preserving information integrity within latency‑critical loops.
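A minimal sketch of the semantic-communication idea: transmit a frame on the uplink only when its content has changed enough to matter to the receiving AI. The scalar "feature" below is a stand-in for whatever a real perception model would compute (e.g., fraction of changed pixels or an embedding distance); the threshold is an assumed tuning parameter:

```python
def semantic_uplink_filter(frames, threshold=0.15):
    """Yield (index, feature) only for frames whose 'semantic' content
    differs from the last transmitted frame by more than threshold."""
    last = None
    for i, feature in enumerate(frames):
        if last is None or abs(feature - last) > threshold:
            last = feature
            yield i, feature  # transmit on the uplink
        # else: suppress the frame, saving uplink capacity

# A slowly changing scene with one sudden event at frame 5:
scene = [0.10, 0.11, 0.12, 0.12, 0.13, 0.60, 0.61, 0.62]
sent = list(semantic_uplink_filter(scene))
print(f"transmitted {len(sent)} of {len(scene)} frames")  # 2 of 8
```

The payoff is that the latency-critical loop only carries the frames that change the AI’s decision, which is exactly the data reduction the semantic-communication paradigm promises.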

Within the network domain, established mechanisms such as Quality of Service (QoS) and network slicing remain essential. QoS enables prioritization of specific traffic classes, while slicing supports logically isolated virtual networks with guaranteed service-level attributes—latency, jitter, bandwidth, and reliability.
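As a toy illustration of slice selection, the sketch below maps a flow’s latency and bandwidth needs onto a small, hypothetical slice catalogue; the slice names and attribute values are invented for illustration, not taken from any 3GPP table:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SliceProfile:
    name: str
    max_latency_ms: float      # packet delay budget the slice guarantees
    min_bandwidth_mbps: float  # bandwidth the slice guarantees per flow

# Hypothetical slice catalogue an operator might expose:
SLICES = [
    SliceProfile("physical-ai-uplink", max_latency_ms=20, min_bandwidth_mbps=25),
    SliceProfile("conversational", max_latency_ms=100, min_bandwidth_mbps=1),
    SliceProfile("best-effort", max_latency_ms=float("inf"), min_bandwidth_mbps=0),
]

def select_slice(latency_req_ms: float, bw_req_mbps: float) -> Optional[SliceProfile]:
    """Admit the flow to the least-capable slice that still meets its
    requirements, keeping premium capacity for flows that truly need it."""
    feasible = [s for s in SLICES
                if s.max_latency_ms <= latency_req_ms
                and s.min_bandwidth_mbps >= bw_req_mbps]
    if not feasible:
        return None  # reject: no slice can guarantee this flow
    return max(feasible, key=lambda s: s.max_latency_ms)

print(select_slice(20, 10).name)           # teleoperation video -> "physical-ai-uplink"
print(select_slice(float("inf"), 0).name)  # background sync -> "best-effort"
```

Admission control is the other half of the story: when no slice can honor the request, the flow is rejected rather than silently degraded, which is what separates guaranteed service from best effort.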

At the service and business model level, supporting low-latency, bandwidth-intensive applications reshapes network economics. Operators must evolve beyond best‑effort pricing structures toward differentiated service tiers or performance-based charging models aligned with enterprise and industrial use cases.

For the RAN, Physical AI underscores the need for greater programmability and elasticity. Future RAN designs will depend on dynamic resource allocation, real-time traffic classification, and AI-driven orchestration to balance throughput, latency, and reliability at scale.

As Physical AI deployments expand—from autonomous mobility to precision manufacturing and tele‑robotics—managing high‑volume, low‑latency uplink traffic will become a defining capability for next‑generation network strategy and differentiation. Unlike conventional mobile data, Physical AI cannot rely on buffering to manage traffic spikes. The requirement for continuous video and sensor data to arrive within strict time limits to inform real-time actions makes traditional “best-effort” network approaches inefficient and costly.

Reasons for RAN Redesign:
  • Uplink-Centric Demand: Physical AI shifts the network requirement from downlink-heavy (human consumption) to uplink-heavy (machine-generated) traffic.
  • Strict Latency & Throughput: Maintaining consistent low latency (e.g., around 20 milliseconds) for high-volume video uploads can require 3x to 4x more capacity than average, making overprovisioning unsustainable.
  • Need for Programmable Architectures: To support this, RAN must move toward more flexible, AI-native architectures that prioritize critical data and provide deterministic, rather than best-effort, performance.
  • Semantic Communication: To reduce data volume while maintaining performance, the RAN will need to adopt semantic communication—transmitting only the essential data needed for the AI to make decisions.

………………………………………………………………………………………………………………………………………………………..

References:

https://www.nokia.com/asset/215147/

https://www.nokia.com/blog/physical-ai-redefining-ran-and-telco-monetization/

https://telcomagazine.com/news/nokia-report-points-to-ai-driven-shift-in-mobile-traffic

What Is Physical AI?

Arm Holdings unveils “Physical AI” business unit to focus on robotics and automotive

Is the “far edge” a bridge too far to cross for AI inferencing? What about “Distributed AI Grids”?

The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core

Ericsson and Intel collaborate to accelerate AI-Native 6G; other AI-Native 6G advancements at MWC 2026

NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

AI-RAN Reality Check: hype vs hesitation, shaky business case, no specific definition, no standards?

Analysis: Nvidia’s $2 billion investment in Marvell; NVLink Fusion ecosystem & RAN vendor silicon strategy

NVIDIA just announced a $2 billion investment in custom silicon developer Marvell Technology (NASDAQ:MRVL). This comes right on the heels of its $2 billion investments in Lumentum, Coherent, and $1 billion in Nokia.

  • NVIDIA is also deepening its relationship with Marvell within its NVLink Fusion ecosystem. NVLink is NVIDIA’s proprietary scale-up networking system. Scale-up refers to connecting computing components within a rack rather than between racks.
  • NVLink Fusion essentially allows customers to connect non-NVIDIA components to NVIDIA components within the same rack. Thus, customers can mix and match technologies from different vendors when they make a purchase. However, each platform does need to have at least one NVIDIA component.
  • NVLink Fusion competes with the UALink consortium, of which NVIDIA is not a member. Key NVIDIA competitors like Broadcom (NASDAQ:AVGO) and Advanced Micro Devices (NASDAQ:AMD) back UALink. Companies in this group have the same goal as NVIDIA does for NVLink Fusion: to allow customers to easily connect their devices together within racks.
  • UALink’s goal is to reduce NVIDIA’s power by providing an alternative to NVLink Fusion. One of the key benefits to data center operators, which buy AI chips, is avoiding vendor lock-in. By being able to source components from a wide range of companies, there is greater competitive pressure, and thus more room to negotiate. Building AI infrastructure solely on NVLink grants NVIDIA massive bargaining power.

Photo Credit: Marvell Technology

Marvell has been a member of both NVLink and UALink, one of the few major chip companies that can make this claim. Now, NVIDIA is more formally recognizing Marvell’s place within NVLink, potentially expanding its ability to win customers. Meanwhile, Marvell strengthens its standing in the AI market.  From Marvell’s perspective, the deal has significant benefits. Even though Marvell was already a part of NVLink Fusion, the company’s place within this ecosystem is now elevated. Not all companies in NVLink Fusion have received a multi-billion-dollar investment from NVIDIA or their own dedicated announcement.

These factors suggest that NVIDIA is particularly confident in Marvell’s solutions and that it will put in more effort to sell them to customers. NVIDIA now has 2 billion more reasons to do just that. This is particularly noteworthy, as MediaTek and Alchip Technologies are also in NVLink Fusion, and compete with Marvell in custom silicon.

In fact, Alchip has been a source of considerable volatility in Marvell shares recently, as some investors believed the firm would siphon off much of the custom chip business Marvell has built with Amazon. However, Marvell’s last earnings report helped to significantly quell those fears. Additionally, the deal adds $2 billion to Marvell’s balance sheet, meaningful financial flexibility for a company that ended last quarter with cash and equivalents of just $2.64 billion.

Samsung’s Current ASIC Strategy in Purpose-Built RAN:
  • Legacy and Specialized Support: Samsung continues to sell and support its traditional Baseband Units (BBUs), which are powered by proprietary silicon developed in partnership with companies like Marvell. These ASICs are still used for high-density, performance-critical deployments where operators have not yet moved to standard CPUs.
  • Hardware Acceleration: In non-vRAN scenarios, custom ASICs handle the most computationally intensive Layer 1 (L1) tasks, such as beamforming for Massive MIMO and FEC (Forward Error Correction).
  • Phased-Out Trajectory: Samsung executives have acknowledged that the era of proprietary hardware is likely nearing its end. Alok Shah, VP of Network Strategy at Samsung, has noted that while they still provide purpose-built BBUs, it is only a “matter of time” before vRAN becomes the universal standard.

……………………………………………………………………………………………………………………………………………………….

Is the “far edge” a bridge too far for AI inferencing? What about “Distributed AI Grids”?

How Far is the Far Edge?

As major telcos size up distributed edge sites for a possible AI inferencing model, they are trying to determine how far out in their networks is the right place to invest in AI computing capacity. According to Light Reading, the “far edge” is a divisive option for inferencing. Omdia (owned by Informa) defines the far edge to include radio access network (RAN) cell sites, aggregation hubs, exchange offices, optical line terminal (OLT) nodes, and Tier 2 metro hubs.

Many telcos are struggling to define how far the edge is from customer premises and how to serve various use cases with compute and intelligence. It seems that a 5G SA core with network slicing would be mandatory to support multiple unique use cases, each with different QoS requirements.

According to Omdia’s Telco Edge Computing Survey last year, just 15% of telcos ranked the network far edge as the top location for where most AI inferencing will take place, while even fewer (11%) said the network near edge (which includes central offices, headend sites and large telco data centers) would be the main spot. The results showed AI inferencing is expected to be handled mostly on the end devices themselves and at the enterprise edge (e.g., offices, campuses or manufacturing sites).

Kerem Arsal, Omdia senior principal analyst for telco enterprise and wholesale, predicted in a research note that this year will see telcos split into camps of “believers” and “doubters” of the far edge.

Image Credit:  Sphere

…………………………………………………………………………………………………………………………………………………………………………………………………………………..

AT&T VP Yigal Elbaz, speaking at the recent New Street Research and BCG Global Connectivity Leaders Conference, expressed a cautious view on AI compute at the “far edge,” questioning how far the edge truly needs to extend to serve specific use cases effectively. He said the following (source: Light Reading):

“The proliferation of compute and high-performing compute across the nation, in all metros is just happening, with a software layer on top of this [and] with the tools that developers need. So, I am not sure that there’s much value in extending that compute all the way to the far edge just to save another millisecond or two milliseconds of latency.”

He added that AT&T’s fiber and wireless networks can provide the “deterministic experience” needed for new use cases and help them “intelligently connect to the right model that they use, the context or the infrastructure that they need because that’s going to be heavily distributed across the US.”

“There’s no doubt that AI is going to be embedded into wireless networks, and we’re going to call it AI-native and combine the physical space with the intelligence of the network. This is all true,” said Elbaz.

………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Distributed AI Grids:

At this year’s Nvidia GTC event, AT&T was cited as a lead collaborator in the development of distributed AI grids—a geographically dispersed, interconnected fabric designed for high-performance AI infrastructure. In partnership with Cisco and Nvidia, AT&T is architecting an enterprise IoT AI grid focused on localized inference. By moving the compute layer to the network edge—potentially via On-Premises Edge (oPE)—the architecture aims to minimize backhaul latency and process workloads at the data source. Current Proof of Concept (PoC) deployments include a public safety framework and an edge AI-powered video intelligence pilot for site security. Similarly, Comcast is trialing Nvidia GPU-accelerated edge nodes to support deterministic, low-latency AI applications.
For the Cisco AI Grid with Nvidia architecture used by AT&T and Comcast, the interconnect strategy moves beyond standard backhaul to a specialized, deterministic fabric designed for distributed AI inference. The AI Grid interconnect stack leverages a multi-layer protocol approach to ensure low-latency, secure communication between edge nodes and the core:
  • Ethernet with RDMA (RoCE): The foundation is built on Nvidia Spectrum-X Ethernet, which utilizes RDMA over Converged Ethernet (RoCE). This allows for direct memory access between edge GPUs (e.g., Nvidia RTX PRO 6000 Blackwell Server Edition) and the network core, bypassing CPU overhead to achieve near-line-rate performance.
  • Scale-Across Networking: Using Nvidia Spectrum-XGS, the architecture extends standard RoCE to scale across geographically distributed sites. This creates a unified “AI Factory Grid” where remote edge nodes function as a single, programmable compute substrate.
  • Silicon One Routing: Cisco’s Silicon One-based routing is utilized for AI-optimized traffic management, providing the high-speed, high-density throughput required for token-intensive inference workloads.
  • Zero Trust & Secure Pathways: The interconnect includes a Zero Trust security layer embedded directly into the fabric. It utilizes localized traffic breakout and policy-enforced pathways to ensure that sensitive IoT and video data (such as public safety feeds) remain within the customer’s secure domain at the network edge.
  • Orchestration Control Plane: A workload-aware control plane manages these protocols to intelligently route tasks based on real-time KPIs (latency, cost-per-token, and data sovereignty), ensuring that “mission-critical” inference happens at the optimal node.
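As a rough illustration of what such a KPI-driven placement decision might look like, here is a minimal Python sketch. The node names, KPI values, and weights are invented for illustration; this is not Cisco’s or Nvidia’s actual control-plane API.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    latency_ms: float     # measured round-trip latency to the data source
    cost_per_mtok: float  # cost per million inference tokens
    in_region: bool       # satisfies the workload's data-sovereignty constraint

def pick_node(nodes, max_latency_ms, weights=(0.7, 0.3)):
    """Filter by hard constraints (sovereignty, latency), then rank by weighted KPIs."""
    eligible = [n for n in nodes if n.in_region and n.latency_ms <= max_latency_ms]
    if not eligible:
        return None
    w_lat, w_cost = weights
    return min(eligible, key=lambda n: w_lat * n.latency_ms + w_cost * n.cost_per_mtok)

nodes = [
    EdgeNode("cell-site-gpu", 4.0, 9.0, True),
    EdgeNode("metro-hub", 12.0, 4.0, True),
    EdgeNode("cloud-region", 45.0, 2.5, False),  # cheapest, but out of region
]
print(pick_node(nodes, max_latency_ms=20.0).name)  # cell-site-gpu
```

A real orchestrator would refresh these KPIs from telemetry in real time; the point is simply that placement is a constrained optimization over latency, cost-per-token, and sovereignty, exactly the inputs the bullet above names.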
Focusing specifically on interoperability, the primary concern with a single-vendor AI Grid is the risk of architectural silos that could undermine years of industry progress toward Open RAN and multi-vendor environments. Key interoperability risks for carriers include:
  • Proprietary Software Lock-in: Integrating network functions into a proprietary ecosystem (like Nvidia’s CUDA or AI Aerial) can create a “subscription trap,” where software is inseparable from specific hardware, making it nearly impossible to swap vendors without a total architectural overhaul.
  • Data Fragmentation: Deploying AI across a distributed grid often leads to fragmented data sets across legacy and new multi-vendor platforms, which can result in inaccurate AI models and increased operational complexity.
  • Standardization Lag: While industry bodies like the GSMA are pushing for Open Telco AI standards, the rapid deployment of proprietary AI systems often outpaces these frameworks, leading to entrenched, incompatible systems that require significantly more resources to reconcile later.
  • Integration with Legacy Systems: Modern “agentic AI” and AI-native stacks often struggle to orchestrate processes across siloed legacy infrastructure, creating rigid operational environments that prevent the seamless flow of data needed for automated network troubleshooting.

Bottom Line: While the AI Grid may offer a more viable roadmap than AI-RAN, there is insufficient industry discourse regarding the strategic risks of a global, geographically distributed computing platform—as Nvidia defines it—reliant on a single-vendor hardware stack. Although Nvidia currently maintains undisputed market dominance, historical precedents such as Intel serve as a cautionary tale; long-term dominance is never guaranteed, and even market leaders face potential obsolescence. Furthermore, Nvidia’s practice of providing capital injections to entities that subsequently re-invest those funds back into Nvidia’s own ecosystem raises significant concerns regarding market sustainability and long-term financial health.

……………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.lightreading.com/ai-machine-learning/at-t-cto-casts-doubt-on-ai-compute-at-the-far-edge

https://www.lightreading.com/5g/nvidia-lines-up-ai-grid-as-orange-cto-echoes-the-ai-ran-doubts

Edge AI Computing Explained: Key Concepts and Industry Use Cases

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

Nvidia’s networking solutions give it an edge over competitive AI chip makers

IDC Survey of Networking Leaders: Enterprise AI progress stalls despite ambitious goals

New IDC research released in April 2026 highlights a growing disconnect between ambitious enterprise AI goals and the reality of their technical execution.  The 2026 IDC AI in Networking Special Report (LinkedIn Video hyperlink) [1.] found that organizations expecting to move from early and selective AI use for business and IT initiatives to more advanced deployments largely haven’t. The result is a widening gap between intent and execution that is becoming harder to ignore.  This widening gap in AI execution is driven by a mismatch between ambitious goals and the realities of legacy infrastructure, which cannot handle the data demands for production-grade models.

Despite high expectations, many organizations have seen their AI progress stall over the last 18 months, with “select use” adopters failing to advance to more “substantial” deployments. A critical shortage of AI-experienced personnel, combined with lagging security and governance controls, has caused widespread “pilot paralysis” across most enterprises. To overcome this, organizations are shifting toward “AI factories” to create a repeatable, governed pipeline for deploying AI.

Note 1. IDC’s 2026 AI in Networking Special Report is a report driven by a worldwide survey of 500+ enterprise network executives and experts. The report covers both the impact and plans for supporting AI workloads across the network and using AI-powered networking solutions. The focus of this research is comprehensive, covering datacenters, cloud services, multi-cloud environments, network core and edge, and network management.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Mark Leary, IDC research director, Network Observability and Automation:

“Many solution suppliers are prioritizing a platform approach to the challenges associated with moving AI workloads into production. This survey of networking leaders highlights the shift in preference from platforms to best-in-class solutions when supporting AI workloads across their networks. As certain functional requirements intensify, as IT staff experience and expertise build, and as platforms fall short in delivering expected advantages, IT organizations are more willing to take on the added responsibilities associated with assembling their own mix of best-in-class solutions. For the supplier, the challenge is to avoid developing and delivering a platform that is classified as a jack-of-all-trades and master of none.”

“Agentic AI is to have a profound effect on the network infrastructure and on networking staff. Two years ago, AI assistants were labeled leading edge when they offered natural language processing for operator interactions and network management guidance driven by technical manual content. How things have changed! Agentic AI is no longer just a passive informer and instructor but an active intelligent virtual network engineer. Agents gather and process comprehensive network data, develop deep and precise insights, and determine and, increasingly, execute needed network management actions. Whether fixing a network problem, activating a network service, optimizing a network configuration, or responding to a developing network condition, agentic AI solutions are proving more and more useful across the entire network and the entire set of tasks required to engineer and operate the network.”

While this IDC Survey Spotlight offers only an overview of responses relating to agentic AI, detailed results are available by geographic region, select country, company size, major vertical industries, respondent role, and the AI maturity level of the respondent’s organization.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Organizations are pursuing AI in networking across two categories:

1.] Supporting AI workloads across network infrastructure and

2.] Applying AI to network operations. 

But in both cases, progress is constrained by persistent challenges. “2026 is when organizations find out if AI in networking delivers real operational impact—or remains stuck in pilot mode,” Leary said in the referenced LinkedIn Video.

Source: IDC

……………………………………………………………………………………………………………………………

Security remains the top concern among enterprises, both as a barrier to deployment and a primary use case for AI itself. “You have to fight AI with AI from a network security perspective,” said Brandon Butler, senior research manager at IDC. “There’s a realization that nefarious actors are leveraging AI themselves. The pressure is already on the network. The question now is whether organizations can keep up with what AI is demanding of their infrastructure,” he added.

Integration with existing systems and a shortage of skilled talent follow close behind. “Most folks don’t feel their staff can fully evaluate and select the right solutions,” Leary said. As a result, many organizations are turning outward for help:

  • 81% say they are increasing spending on managed service providers (MSP) to support AI initiatives.
  • 89% of data centers expect to increase bandwidth by at least 11% within the next year, driven by AI workloads.
  • That demand extends beyond individual facilities, with 91% expecting similar growth in inter-data center connectivity, highlighting the strain on distributed architectures.
  • Nearly half of respondents (46%) prefer AI systems that can both determine and execute network actions autonomously.
  • Another 41% favor a guided approach, while 13% prefer no AI involvement.

Cloud environments are seeing sharper increases in AI use. Organizations anticipate an average 49% rise in bandwidth for cloud connectivity over the next year. “The cloud is almost always involved,” Leary said. “The biggest group mixes one cloud platform with one or more data centers.”

Beyond the data center and cloud, the network edge is emerging as the next major growth area. Today, 27% of organizations have deployed AI workloads at the edge, and 54% plan to do so within two years. Butler said: “Folks who are leveraging AI more extensively are already pushing workloads to the edge. We see this as a leading indicator of where the market is going.”

“Two years in a row, the largest group said they want AI to both determine and execute actions. It was honestly surprising,” he added.

Enterprise edge bandwidth is projected to grow by an average of 51% in the next year. As AI becomes more distributed, network teams will need to manage greater complexity across environments while maintaining performance and security.

…………………………………………………………………………………………………………………………………………………………………………….

When assessing expected ROI from AI in networking, IDC survey respondents focused on elevating IT capabilities, with 31% prioritizing superior service levels and 30% focusing on operational efficiency. These outcomes ranked above worker productivity and revenue, suggesting that leaders are strategically utilizing AI to enhance foundational operational workflows. Notably, reducing operating costs ranked seventh, suggesting a focus on strategic value rather than immediate expense reduction.

Source: IDC

……………………………………………………………………………………………………

IDC Research identified specific applications—from automated configuration validation to AI-enhanced threat response—as catalysts for measurable performance gains and the organizational trust essential for broader implementation. For network executives, this phased approach represents the most strategic methodology for achieving long-term operational objectives.

“It doesn’t have to be handing the keys of your kingdom to AI to really get some benefits from these AI tools,” Butler concluded.

……………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.linkedin.com/posts/brandon-butler-29761a3_idc-recently-published-our-second-annual-activity-7429576183614320640-p5PA/

https://www.networkworld.com/article/4152655/ai-for-it-stalls-as-network-complexity-rises.html

Inside TM Forum’s Catalyst project “Living Networks – Phase III”

TM Forum’s [1.] Catalyst project “Living Networks – Phase III” brings together a broad ecosystem of communications service providers and technology innovators to advance autonomous, resilient, and energy-efficient network operations. It will be showcased at DTW Ignite 2026, taking place June 23–25 in Copenhagen.

Note 1. TM Forum is a global alliance of 800+ organizations across the connectivity ecosystem. Members include the top 10 Communication Service Providers, top three hyperscalers, leading Network Equipment Providers, and a wide range of vendors, consultancies, and system integrators.

Image Credit: TM Forum

………………………………………………………………………………………………..

“Living Networks – Phase III” builds on earlier Catalyst work focused on intent-based automation and traffic resilience. According to TM Forum and project materials, Phase III advances that foundation toward governed, adaptive intelligence. It introduces a cloud-native, Kubernetes-based architecture with stronger data governance to help networks predict failures, optimize resources, and support programmable, platform-based business models such as Connectivity-as-a-Service.

The project is designed to help operators improve resilience, reduce operational effort, and lower energy consumption. It also enables them to scale autonomous operations safely across increasingly complex multi-domain environments.

Digital Global Systems (DGS), a company using AI/ML to monitor and optimize radio spectrum, is collaborating with a distinguished group of global Catalyst participants, including Beyond Now, Chunghwa Telecom, Globetom, Infosim, Julius-Maximilians-Universität Würzburg, MTN Nigeria, MTN South Africa, NTT Group, Orange, Telekomunikasi Indonesia International, Seacom, and Telecom Italia. TM Forum identifies several of these operators as project champions, underscoring the depth of CSP engagement behind the initiative.

The Catalyst project aligns closely with DGS’s mission to bring AI-powered intelligence to complex communications environments. DGS develops real-time RF Awareness and spectrum optimization technologies that help operators detect issues early, improve reliability, and make communications infrastructure more resilient and efficient.

“Telecom networks are becoming too dynamic and too essential to be managed with yesterday’s operating models,” said Armando Montalvo, CTO of Digital Global Systems. “By participating in Living Networks – Phase III with leading CSPs and technology innovators from around the world, DGS is helping advance a future in which networks become more autonomous, more resilient, and more responsive to both operational demands and business opportunities. This kind of collaboration is exactly what the industry needs to move from automation experiments to real, scalable transformation.”

“Living Networks – Phase III demonstrates what becomes possible when CSPs, research institutions, and specialized technology providers work together around a common vision for autonomous networks,” said Dr. David Hock, Director of Research at Infosim. “The collaboration within this Catalyst is especially powerful because it connects innovation with practical operational outcomes, helping the industry move toward more trusted, scalable, and intelligent network automation.”

The Catalyst focuses on a set of pressing industry challenges, including rising service demands, sustainability pressures, operational complexity, and the business impact of network outages. By combining AI, digital twins, multi-domain orchestration, and stronger governance over data and automation workflows, the team aims to show how operators can reduce mean time to repair and improve SLA performance. It also highlights how operators can create new monetization opportunities across partner ecosystems.

DGS said the project is another example of how collaboration across the telecom ecosystem can accelerate innovation beyond what any one company can achieve alone. At DTW Ignite in Copenhagen this June, the team will demonstrate how communications networks can evolve from static infrastructure into adaptive, intelligent platforms. These platforms will support the next generation of digital services.

About Digital Global Systems (DGS):

Digital Global Systems (DGS) delivers AI-driven RF awareness and spectrum optimization solutions that power resilient communications for governments, industries, and communities worldwide. With more than 725 issued and pending patents, DGS helps nations and enterprises rebuild stronger, smarter, and more connected.

References:

www.digitalglobalsystems.com

https://www.globenewswire.com/news-release/2026/04/06/3268465/0/en/DGS-Joins-Global-CSP-and-Technology-Leaders-in-TM-Forum-Catalyst-Living-Networks-Phase-III-at-DTW-Ignite-2026.html

https://www.globenewswire.com/news-release/2026/03/16/3256345/0/en/Digital-Global-Systems-Open-Letter-Why-Edge-RF-Awareness-Is-Essential-for-the-M2M-Era.html

Deloitte and TM Forum: How AI could revitalize the ailing telecom industry?

GSMA, ETSI, IEEE, ITU & TM Forum: AI Telco Troubleshooting Challenge + TelecomGPT: a dedicated LLM for telecom applications

Broadband Forum new work areas to enable broadband services & apps

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

TM Forum Survey: Communications Service Providers Struggle with Business Case for NFV & Digital Transformation

 

Will 2026 be the “Year of the AI Ontology” for telecoms?

Overview:

For the telecommunications industry, many pundits say 2026 will be the year of “AI Ontology [1.],” primarily because a standardized knowledge plane is now seen as the “ultimate driver” for reaching higher levels of network autonomy. Industry experts from companies like Telstra and Amdocs emphasize that for agentic AI to move from isolated pilots to enterprise-scale operations, it requires a structured, explainable, and typed world model—an ontology—to unify data across fragmented systems.

Note 1. An ontology in AI is a formal, machine-readable framework that defines the concepts, properties, and relationships within a specific domain to enable knowledge sharing, reasoning, and semantic understanding. It structures data into a network of “things” (classes) rather than just files, acting as a “Rosetta stone” that allows AI systems to understand context, infer conclusions, and act on data.
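To make the definition concrete, here is a toy sketch of such a machine-readable framework in Python. The telecom class names are invented for illustration, and production ontologies use RDF/OWL tooling rather than plain Python sets; the point is only to show classes, properties, and the kind of inference an ontology enables.

```python
# Toy ontology as subject-predicate-object triples (hypothetical telecom classes).
triples = {
    ("5GCell", "subClassOf", "Cell"),
    ("Cell", "subClassOf", "NetworkElement"),
    ("Cell", "hasProperty", "location"),
    ("NetworkElement", "hasProperty", "operationalState"),
}

def superclasses(cls):
    """Follow subClassOf edges transitively so an agent can reason over the hierarchy."""
    out, frontier = set(), {cls}
    while frontier:
        frontier = {o for (s, p, o) in triples
                    if p == "subClassOf" and s in frontier} - out
        out |= frontier
    return out

def properties(cls):
    """Inferred properties: a class inherits every property of its superclasses."""
    classes = {cls} | superclasses(cls)
    return {o for (s, p, o) in triples if p == "hasProperty" and s in classes}

print(sorted(properties("5GCell")))  # ['location', 'operationalState']
```

Nothing in the data says a 5GCell has an operationalState directly; the system infers it from the class hierarchy, which is the “reasoning and semantic understanding” the definition above refers to.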

…………………………………………………………………………………………………

Several network providers are adopting a “standardized, ontology-driven knowledge plane” to enable agentic AI to operate across traditionally siloed network systems. This shift in 2026 is driven by the need for Level 4 and 5 network autonomy, where agents require a common language to reason about network states and business intents.

1.  Mark Sanders, Telstra’s chief architect, talked about the emergence of a structured, explainable knowledge plane that removes silo barriers between agents, freeing them up to become the workhorses of network automation. “We think for the autonomous network to reach level four or five is going to require a standardized, ontology-driven approach on the knowledge plane,” said Sanders at a recent Ericsson conference, touting this approach as the ultimate driver in next-level autonomous networks.

2.  For BT, agentic AI is already yielding tangible results in IT service desks, especially as organizations shift from assistance to execution, according to Girish Mahajan, senior leader for mobile AI data/automation. In particular, AI agents have reduced trouble ticket resolution times. “It has reduced the time of the manual effort, and it has also increased efficiency of the service desk,” he said. However, the same autonomy that drives value also introduces unpredictability.

“The outcome of agentic AI is something unpredictable because it’s continuously adapting during execution,” he said, adding a call for better design principles. “We need reflection-based architecture, and we need better AI/human collaboration. AI agents should learn from their actions and should work along with humans in their day-to-day.”

3. For Vodafone, work has revolved around lighthouse projects: small-scale efforts to demonstrate the value of a larger business use case.

“It’s quite a mundane use case around energy cost recovery. So obviously, energy is a huge operational expense for our industry,” said Simon Norton, digital/OSS engineering director, Vodafone Group. “It’s very complex, especially when you’re working in that multi-market environment, to manually compare line by line with energy bills against your own data sets.”

Vodafone’s AI agents, therefore, have been automatically ingesting bills and comparing them to identify any tariff anomalies.

“It’s mundane but actually super valuable,” said Norton, who stressed operators should find a project with a clear value proposition and get it out into production quickly. “You build the credibility, you start to get the funding into the system, and it buys you the time to work on that longer-term strategy.”
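As a rough sketch of the kind of line-by-line reconciliation Norton describes, the fragment below compares invoice lines against contracted tariffs. All site names, rates, and amounts are invented; this is not Vodafone’s actual pipeline, which would also ingest and parse the bills themselves.

```python
# Minimal sketch of line-by-line tariff reconciliation (illustrative numbers only).
contracted = {"site-A": 0.21, "site-B": 0.19}  # agreed price per kWh, per site
invoice = [                                     # already-parsed billing lines
    {"site": "site-A", "kwh": 1000, "charged": 210.00},
    {"site": "site-B", "kwh": 2000, "charged": 418.00},  # contract implies 380.00
]

def find_anomalies(invoice, contracted, tolerance=0.01):
    """Flag any line whose charge deviates from the contracted rate by > tolerance."""
    anomalies = []
    for line in invoice:
        expected = round(contracted[line["site"]] * line["kwh"], 2)
        if abs(line["charged"] - expected) > tolerance * expected:
            anomalies.append((line["site"], line["charged"], expected))
    return anomalies

print(find_anomalies(invoice, contracted))  # [('site-B', 418.0, 380.0)]
```

The mundane part is exactly this arithmetic; the value of the agentic layer is doing it automatically across every market and every bill, month after month.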

………………………………………………………………………………………..

The Role of Agentic AI Improvements:
Improvements in agentic AI are acting as the primary catalyst for this ontological shift:
  • From Assistant to Doer: AI is evolving from a “helper” that provides insights to a “doer” that autonomously observes, decides, and executes actions within governed boundaries.
  • Multi-Agent Orchestration: 2026 will see the rise of coordinated multi-agent ecosystems. These systems require an ontology to ensure that a “planner agent” can accurately break down goals for specialized “worker agents” without semantic confusion.
  • Intent-Based Orchestration: To ensure network stability, telcos are adopting intent-based orchestration layers. These layers use ontologies to provide the deterministic, model-driven framework necessary to ground agent actions in real-world business intent.
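The planner/worker decomposition above can be sketched minimally as follows, with a shared task vocabulary standing in for the ontology so workers cannot misinterpret what the planner asks for. All names and behaviors are hypothetical.

```python
# Hypothetical planner/worker split for multi-agent orchestration (names invented).
WORKERS = {
    "diagnose":  lambda target: f"root cause analysis run on {target}",
    "remediate": lambda target: f"remediation applied to {target}",
}

def planner(intent, target):
    """Break a high-level intent into typed sub-tasks drawn from the shared
    vocabulary; each sub-task is dispatched to its specialized worker agent."""
    plan = ["diagnose", "remediate"] if intent == "restore-service" else []
    return [WORKERS[task](target) for task in plan]

print(planner("restore-service", "link-42"))
```

In a real deployment the vocabulary would come from the ontology-driven knowledge plane, and each worker would be a governed agent rather than a lambda, but the semantic contract between planner and workers is the same idea.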
Strategic Impact for 2026:
  • Network Autonomy: CSPs are aiming for TM Forum Level 3 or 4 autonomy by late 2026, using agents to turn intent into outcomes in live networks.
  • Operational Leverage: Rather than massive headcount cuts, agentic AI is providing “operational leverage,” allowing teams to manage growing network complexity with the same workforce.
  • Measurable ROI: Investments are focusing on high-impact areas like autonomous incident handling (30-40% cost reduction) and predictive maintenance (up to 40% fewer outages).
2026 as the Year of “AI Ontology”:
  • Structured Knowledge Plane: Operators are shifting toward a standardized, ontology-driven knowledge plane to remove silo barriers between agents. This allows multiple specialized agents to collaborate on “broader, bigger outcomes” like root cause analysis across billing, CRM, and network systems.
  • Enabling Agentic Autonomy: While 2025 focused on “agentic AI” as a buzzword, 2026 is about the foundational infrastructure—specifically graph-based data systems and digital twins—that gives agents the “executable semantics” they need to plan and act safely.
  • Unified Truth for Agents: Without a central ontology, horizontal AI platforms often suffer from “agent drift,” where different agents interpret the same business logic (e.g., “unlimited plan”) differently, leading to billing and provisioning errors.

Ericsson’s View:

Hassan Iftikhar, Ericsson’s head of product domain data & analytics, called for better hyperscaler collaboration on scale, foundational cloud, and AI capabilities.

“The AI tooling, the security framework, we use those to industrialize and put agents into production… It’s pretty much an ecosystem that works together,” he said. At the panel, the data head revealed the vendor’s role in the agentic ecosystem through the use case of one operator needing help with catalog management, as well as scarce developer skills.

“They wanted to take the pain out of product configuration. So we designed a multi-agentic system where it basically helps product managers and marketers to configure and publish new instances through natural language. So very complex catalog engineering, which can take weeks, is reduced to hours where you can search for reuse and launch.”

Iftikhar also described an OSS tool that helps one operator’s engineers diagnose and resolve issues within their operational instances, resulting in an agent that was seemingly too autonomous for the client.

“We put this use case together, basically taking an intent from an operations engineer, such as data diagnostics, and into it, we built the ability to take remediation actions automatically. What we sort of decided from that was a bit of a step too far to just throw that to an operations department for it to autonomously take steps. So we actually had to go in and build guardrails to limit that capability to a human oversight capability.”

“I think what we learned is that we have to sort of build that confidence in the team step by step before we can actually go to fully autonomous operation. Our learning from adjusting that use case was to be practical and adapt very quickly to what the business really needs.”

…………………………………………………………………………………

References:

https://www.sdxcentral.com/analysis/has-telco-already-faced-the-year-of-ai-agents/

The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core

Telecom operators investing in Agentic AI while Self Organizing Network AI market set for rapid growth

Nokia to showcase agentic AI network slicing; Ericsson partners with Ookla to measure 5G network slicing performance

T-Mobile US announces new broadband wireless and fiber targets, 5G-A with agentic AI and live voice call translation

Ericsson integrates Agentic AI into its NetCloud platform for self healing and autonomous 5G private networks

Agentic AI and the Future of Communications for Autonomous Vehicles (V2X)

AWS to deploy AI inference chips from Cerebras in its data centers; Anapurna Labs/Amazon in-house AI silicon products

 

Using AI, DeepSig Advances Open, Intelligent Baseband RAN Architectures

Using advanced AI techniques, DeepSig has reportedly managed to eliminate a mobile network’s pilot signal, thereby removing signaling overhead without degrading overall performance. Founded in 2016, the U.S.-based startup occupies a leading position at the intersection of artificial intelligence (AI) and the radio access network (RAN), developing data-driven models that could supplant traditional, human-engineered signal processing algorithms.

This work has become especially relevant as the telecom industry moves toward open and software-defined RAN architectures. DeepSig is now a visible contributor to OCUDU (Open Centralized Unit Distributed Unit), an open-source initiative announced by the Linux Foundation in collaboration with the U.S. Department of Defense and its FutureG ecosystem partners to accelerate open CU/DU development for 5G and early 6G systems. OCUDU is intended to establish a carrier-grade reference platform for baseband software, with support for AI-based algorithms and solutions embedded in the RAN compute stack.

As AI becomes a central theme across the telecom ecosystem, DeepSig has rapidly moved from relative obscurity to prominence through collaborations with major industry and government stakeholders. OCUDU, announced ahead of MWC Barcelona 2026, aims to introduce open-source software elements into the RAN baseband domain, an area historically dominated by proprietary offerings from Ericsson, Nokia, and Samsung. By lowering barriers to entry, the initiative seeks to foster innovation and enable smaller players like DeepSig to participate more freely in the U.S. baseband ecosystem.

Image Credit:  DeepSig

DeepSig was identified, alongside Ireland-based Software Radio Systems (SRS), as one of two startups selected to deliver OCUDU’s initial software stack. “The National Spectrum Consortium had an RFQ for developing an open-source stack,” explained Jim Shea, DeepSig’s CEO. “SRS already had a capable baseline, but it needed to be elevated to carrier-grade—adding new features and strengthening reliability,” he added.

Meanwhile, major vendors Ericsson and Nokia were named “premier members” of the new OCUDU Ecosystem Foundation. While both could, in principle, leverage the platform to integrate third-party components into their baseband systems, industry observers remain skeptical that these incumbents will fully embrace open-source alternatives over their established proprietary stacks. In comments at MWC, Nokia CEO Justin Hotard characterized OCUDU as a welcome ecosystem evolution to accelerate innovation but clarified that “not everything necessarily needs to be open source.”

Driven in part by DoD interests, OCUDU reflects broader U.S. government ambitions to ensure that 5G and future 6G networks remain open to domestic innovation, particularly for defense and mission-critical use cases. For vendors like Ericsson and Nokia—who view defense markets as increasingly strategic—this alignment could bring both opportunity and complexity.

DeepSig’s trajectory extends beyond OCUDU. The company’s technology originated from research by Tim O’Shea, now CTO, during his tenure at Virginia Tech, where he explored deep learning’s application to wireless signal processing. “You can apply deep learning to enhance the way communication systems operate by replacing many of the traditional algorithms,” said Jim Shea. While these methods do not circumvent theoretical limits such as Shannon’s Law, small efficiency gains can yield substantial operational and economic benefits for cost-sensitive mobile operators.
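Shea's point about theoretical limits can be made concrete with a small calculation: Shannon capacity bounds the achievable rate, so learned algorithms cannot exceed it, but recovering even a few percent of signaling overhead across a whole carrier still adds up. The numbers below are illustrative, not operator data:

```python
# Minimal sketch of the Shannon-limit point: C = B * log2(1 + SNR) bounds
# the link rate regardless of the algorithm, but shaving overhead recovers
# real throughput within that bound. Illustrative numbers only.
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 20e6               # 20 MHz carrier
snr = 10 ** (15 / 10)  # 15 dB SNR in linear terms
cap = shannon_capacity_bps(B, snr)   # ~100 Mbps upper bound

# Eliminating, say, 5% of signaling overhead frees that share of the carrier:
extra = 0.05 * cap
```

For a cost-sensitive operator, those marginal bits per hertz translate directly into capacity that would otherwise require new spectrum or densification, which is the economic argument for AI-native signal processing.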

As DeepSig and peers continue to redefine how intelligence is integrated into the RAN, their work signals a shift toward AI-native architectures—where machine learning, rather than handcrafted algorithms, becomes the foundation for next-generation network optimization.

 

References:

https://www.lightreading.com/5g/small-deepsig-is-at-heart-of-ai-ran-challenge-to-ericsson-nokia

Accelerating 5G vRAN, AI-RAN, and 6G on OCUDU, “the Linux of RAN”

AI-RAN Reality Check: hype vs hesitation, shaky business case, no specific definition, no standards?

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAGR forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

InterDigital led consortium to advance wireless spectrum coexistence & sharing

Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse
