Analysis: AT&T’s $250B network investment to advance U.S. connectivity

Rapid adoption of artificial intelligence (AI), cloud computing and connected IoT devices has prompted telecom operators to invest heavily in fiber and 5G networks. In line with that movement, AT&T announced it will spend more than $250 billion over five years to expand its U.S. network and make deals that boost wireless and fiber connectivity.

“Today, we’re committing more than $250 billion to increase U.S. connectivity competitiveness and expand access to AT&T’s leading fiber and wireless networks – the best way to get on the internet,” said John Stankey, Chairman and CEO of AT&T. “Current Federal telecommunications policy is as strong as I’ve seen in my career, making our commitment to invest possible. We look forward to serving American communities and businesses for the next 150 years.”

Ubiquitous networks that provide reliable, always-on connectivity are the critical conduits that make Artificial Intelligence, autonomous technologies, cloud computing, and data-heavy digital services possible. AT&T’s investment will expand future-ready fiber and wireless services, modernize critical infrastructure, and strengthen network resilience and security to support communities and the economy for decades to come, including:

  • Accelerating the deployment of fiber, 5G home internet, wireless and satellite across urban, suburban, and rural America.
    • AT&T’s satellite collaboration with AST SpaceMobile will extend coverage into remote areas.
  • Strengthening FirstNet®, Built with AT&T – the nation’s first and only network built with and for first responders – and modernizing vital infrastructure for public safety and resilience.
    • With AT&T Dynamic Defense, we deliver the only network connectivity with comprehensive built-in security controls.
  • Laying the groundwork for the next wave of American technological leadership through smart infrastructure and network optimization.
    • AT&T’s Wi-Fi Personalization provides a tailored home experience that matches our customers’ daily habits, and AT&T Turbo Live allows customers to boost their data experience at live events to get the reliable connection they want, even in crowded venues.

AT&T says it will continue investing in technologies that advance and protect the connected economy, including:

  • Scaling network security and AI-driven threat intelligence.
  • Enabling the next wave of American invention across industries by opening up our network to allow new entrants to innovate and supply telecommunications equipment.
  • Strengthening collaboration with public-sector partners to support national resilience and first responders.
  • Supporting America’s leadership in global technology and innovation.

With this commitment, AT&T says it will keep building the network Americans rely on, whether delivered by fiber, wireless, or satellite, so more people and businesses have access to fast, reliable connectivity. It’s the foundation for what’s next, from remote care to autonomous vehicles to AI, and it will help keep America connected for the next 150 years.

AT&T store, building exterior, Fifth Avenue, New York City, New York, USA.  Photo by: Plexi Images/GHI/Universal Images Group via Getty Images

…………………………………………………………………………………………………………………

Comment and Analysis:

The spending push comes alongside federal broadband initiatives created under the 2021 infrastructure law, including the $42.5 billion Broadband Equity, Access, and Deployment (BEAD) Program. However, the rollout of funding has faced delays due to a combination of implementation challenges and policy changes under the Trump administration. AT&T has secured the largest share of BEAD funding for fiber build‑outs, winning about $1.06 billion, according to New Street Research.
………………………………………………………………………………………………………………………………………………………………………………..
Fiber broadband has become a key battleground between carriers and cable providers as they compete for home internet customers:
  • Comcast is defending its subscriber base while undergoing strategic changes. The company on Tuesday began a $5.9 million network‑expansion project in Greater Hartford and Middletown, set to finish later this year.
  • Verizon has accelerated its fixed‑broadband expansion after completing its acquisition of Frontier Communications earlier this year and is rolling out limited‑time discounted bundles to attract customers.
Investment Comparison (2026 Forecasts):
Feature                | AT&T                                          | Verizon                                                     | T-Mobile
Headline Commitment    | $250 Billion (5-Year Total)                   | $16.0–$16.5 Billion (Annual)                                | ~$10 Billion (Annual)
Estimated Annual Capex | $23–$24 Billion                               | $16.0–$16.5 Billion                                         | ~$10 Billion
Key Strategic Focus    | Aggressive fiber-to-the-home (FTTH) and 5G/6G | Network “densification,” software, and Frontier integration | 5G Advanced features and rural expansion via BEAD
Spending Trend         | Increasing: doubling previous capex levels    | Decreasing: down from $17B in 2025 to improve margins       | Disciplined: focusing on cash generation over heavy builds
Strategic Divergence:
  • AT&T’s “All-In” Approach: AT&T is significantly outspending its rivals to “build something more valuable tomorrow”. Its $250 billion figure reflects a broad “inclusive spend” that covers fiber expansion, 5G upgrades, and recent spectrum acquisitions like the $23 billion EchoStar deal.
  • Verizon’s Fiscally Responsible Pivot: Under new CEO Dan Schulman, Verizon is reducing its capex for 2026. The company is transitioning from a “coverage” phase to a “densification” and software-focused phase, as its C-band deployment is now 90% complete. Verizon is prioritizing free cash flow and dividend sustainability over aggressive new builds.
  • T-Mobile’s Capital Efficiency: T-Mobile is maintaining the lowest capex among the “Big Three,” focusing instead on shareholder returns (with an authorized $14.6 billion for 2026). Its growth strategy has shifted toward upselling customers to higher-rate plans (“more for more”) and leveraging government funding, like the BEAD program, for rural coverage rather than pure internal spending.
Market Implications:
  • Analysts at Recon Analytics note that AT&T’s proposed annual spend ($50B if divided evenly, though actual capex guidance is closer to $24B) is roughly 3x Verizon and 5x T-Mobile.
  • While AT&T bets on long-term infrastructure dominance, the high debt load ($118.4B) remains a risk compared to Verizon’s clearer deleveraging path.
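The arithmetic behind that comparison is straightforward. A minimal sketch, using only the figures quoted above (the evenly divided $50B/year is a headline number, not AT&T’s actual capex guidance):

```python
# Back-of-envelope check of the capex comparison above.
# All figures are from the article text; the $50B/year is the pledge divided
# evenly over five years, not AT&T's published guidance.
att_pledge_total = 250.0                      # $B over five years
att_evenly_divided = att_pledge_total / 5     # $50B/year if spread evenly
att_guidance = 23.5                           # $B/year, midpoint of $23B-$24B guidance
verizon = 16.25                               # $B/year, midpoint of $16.0B-$16.5B
tmobile = 10.0                                # $B/year, ~$10B

print(att_evenly_divided / verizon)           # ~3.1x Verizon
print(att_evenly_divided / tmobile)           # 5.0x T-Mobile
print(att_guidance / att_evenly_divided)      # guidance is ~47% of the headline run rate
```

The gap between the last two numbers is the crux of the skeptics’ argument: the “3x/5x” multiples only hold if AT&T actually spends at the headline rate rather than at its stated guidance.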

………………………………………………………………………………………………………………………………………………………………………

Details Lacking:

AT&T’s $250 billion spend announcement through 2030 lacks granular details on several fronts, making it more of a high-level commitment than a fully specified plan.

  • AT&T reported capital investment of $22B for full-year 2025, and its outlook for the 2026-2028 period puts capital investment at $23B-to-$24B per annum. That is about half the annual sum implied by the new pledge ($250B over five years works out to $50B/year if spread evenly).
  • AT&T did not state how much of the $250B would be spent on network infrastructure build-outs vs. deals with other companies (e.g., AST SpaceMobile) vs. new hires. The AT&T press release (see Reference #1 below) says the telco will be recruiting and training new technicians to build and maintain those networks. The plan includes “hiring thousands of technicians in 2026 alone.”
  • More importantly, no specific network coverage targets were announced, nor any new technologies to be deployed, e.g. 5G Advanced, 6G, 50G PON, etc.

Coverage Targets:

The announcement targets “unmatched coverage for more than 100 million customers” across fiber and wireless networks in urban, suburban, and rural areas, but provides no maps, timelines, or metrics like gigabit availability percentages or specific unserved locations.

Technologies Deployed:

AT&T highlights accelerating fiber broadband, 5G wireless and home internet, satellite via AST SpaceMobile partnership for remote areas, FirstNet modernization, and AI-driven security like Dynamic Defense, without naming new equipment vendors, spectrum bands beyond past deals, or deployment schedules.  No mention of new technologies.

Spending Breakdown:

No explicit allocation is given for infrastructure capex versus partnerships (e.g., AST SpaceMobile collaboration or the prior $23B EchoStar spectrum purchase), hiring (thousands of technicians in 2026 alone), or training within its ~110,000 U.S. workforce; the total is framed as a multi-year pledge dependent on favorable tax/regulatory conditions.

AT&T’s press release did not mention its $23 billion spectrum deal with EchoStar, which has yet to close. That $23B is surely included in the total spend. There will likely be other similar lines in its spreadsheet that will enable AT&T to get to the magic $250 billion mark.

References:

https://about.att.com/story/2026/att-announces-250-billion-commitment.html

https://www.telecoms.com/operator-ecosystem/at-t-s-250-billion-investment-pledge-not-as-big-as-it-sounds

https://www.reuters.com/business/media-telecom/att-invest-250-billion-over-five-years-us-boost-infrastructure-2026-03-10/

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

AT&T’s convergence strategy is working as per its 3Q 2025 earnings report

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

AT&T to buy spectrum licenses from EchoStar for $23 billion

T-Mobile’s new CEO Srini Gopalan faces fierce competition from AT&T, Verizon and MVNOs

 

Semtech LoRa® PHY technology enables Amazon Sidewalk to expand while supporting fixed and mobile IoT endpoints

Introduction:

Semtech Corporation, a leading provider of high-performance semiconductor, Internet of Things (IoT) systems and cloud connectivity service solutions, is the creator and primary owner of the intellectual property (IP) for LoRa® technology. Semtech provides the physical-layer chips (PHY transceivers) used in LoRaWAN – the very popular Low Power Wide Area Network (LPWAN) technology for IoT endpoints.

The Camarillo, CA-based company last week announced that LoRa® technology will continue to serve as the core radio modulation for Amazon Sidewalk across all markets in this year’s Sidewalk international expansion. Sidewalk’s global expansion officially begins in Canada and Mexico, with further expansion to other international regions scheduled for later in 2026. The network is projected to reach over 30 new countries by year’s end.

Amazon Sidewalk is increasingly viewed as a commercial success in terms of infrastructure deployment and technical capability, transitioning from a niche smart home feature to a broad, LoRa-based Low Power Wide Area Network (LPWAN). While it faced initial skepticism regarding privacy and adoption, the network now boasts massive, passive coverage of over 95% of the U.S. population and is undergoing rapid international expansion.

 

Architectural role of LoRa in Sidewalk:

LoRa is the de facto LPWAN wireless platform for IoT. Semtech’s LoRa chipsets connect sensors to the cloud and enable real-time communication of data and analytics that can be utilized to enhance efficiency and productivity. LoRa devices enable smart IoT applications that solve some of the biggest challenges facing our planet: energy management, natural resource reduction, pollution control, and infrastructure efficiency.

Amazon Sidewalk aggregates spectrum in unlicensed bands and combines multiple physical layers, with Semtech’s LoRa modulation providing the long‑range, low‑power tier for neighborhood‑scale coverage beyond home Wi‑Fi and short‑range Personal Area Networks (PANs). By using LoRa alone as the core wide‑area PHY, Sidewalk evolves from a home‑centric LAN into a geographically distributed WAN that can support both fixed and mobile IoT endpoints across dense residential environments.

Network scale and coverage:

Sidewalk already covers roughly 95% of the U.S. population, making it one of the largest license‑free, consumer‑facing LPWA deployments, and the 2026 roadmap extends the footprint into Canada and Mexico first, followed by additional international markets later in the year.  This expansion effectively turns Sidewalk into a multi‑continent overlay network, leveraging existing consumer premises equipment and LoRa‑enabled endpoints to provide persistent connectivity without requiring dedicated operator‑grade RAN build‑outs.

Technology differentiation vs other LPWAN options:

NB-IoT (included in ITU-R M.2150 IMT 2020 RIT/SRIT standard) holds the largest LPWAN share at roughly 54%–58% of total LPWAN connections,  due to massive adoption in China which accounts for approximately 84% of all global NB-IoT connections. Outside of China, LoRaWAN is the clear market leader with a 41% share of connections. As of late 2025, there are over 125 million LoRaWAN end devices deployed globally, growing at a 25% annual rate. It is the preferred choice for private IoT networks, specifically in smart buildings, agriculture, and industrial asset tracking.
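A quick illustrative calculation shows why the global and ex-China rankings diverge so sharply. The total below is a made-up round number used only to demonstrate the effect of the China concentration quoted above, not a market figure:

```python
# Illustrative only: 'total' is a hypothetical round figure, not market data.
total = 1000.0                       # hypothetical global LPWAN connections (millions)
nbiot = 0.56 * total                 # NB-IoT at ~56% of global LPWAN (mid of 54%-58%)
nbiot_china = 0.84 * nbiot           # ~84% of NB-IoT connections sit in China
nbiot_ex_china = nbiot - nbiot_china
ex_china_total = total - nbiot_china

# Outside China, NB-IoT's share of the remaining connections collapses:
print(round(nbiot_ex_china / ex_china_total, 2))   # ~0.17, i.e. ~17% ex-China
```

With NB-IoT at only ~17% of ex-China connections under these assumptions, LoRaWAN’s quoted 41% ex-China share is consistent with it being the clear leader outside China.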

LoRa’s combination of long range, ultra‑low power operation, and mature ecosystem (silicon, gateways, and cloud stacks) gives Sidewalk a differentiated profile relative to alternatives such as narrowband cellular IoT and other unlicensed LPWAN modulation methods.  For Amazon, anchoring Sidewalk on LoRa reduces RF and protocol fragmentation on the end‑device side while preserving flexibility to layer higher‑level Sidewalk services and security on top of the underlying LoRa/LoRaWAN protocol stack.

Market and ecosystem context:

Amazon Sidewalk now sits alongside large industrial and enterprise LoRaWAN networks, reinforcing LoRa’s position as the leading low‑power wide‑area connectivity technology in unlicensed spectrum. The LoRaWAN IoT connectivity market is forecast to grow from about 10.7 billion USD in 2025 to 44.8 billion USD by 2030 (33.1% CAGR), while LoRaWAN deployments have surpassed 125 million devices globally with a 25% CAGR, signaling a robust runway for Sidewalk‑class Massive IoT use cases.

Implications for device and service design:

For device OEMs and service providers, Amazon’s decision effectively de‑risks LoRa as a long‑term connectivity bet for consumer and prosumer IoT, given Sidewalk’s trajectory to tens of millions of active devices worldwide.  Vendors integrating LoRa‑based designs can now target both traditional LoRaWAN operator networks and the Sidewalk ecosystem, enabling common hardware platforms to support smart home, safety, environmental monitoring, and asset‑tracking applications at neighborhood and city scale.

LoRa Enables Sidewalk’s Technical Evolution:

Chirp spread spectrum (CSS) modulation in LoRa technology provides the technical foundation enabling Amazon Sidewalk’s new capabilities:

  • Enhanced Network Density: LoRa multi-spreading factor capability optimizes longer range and shorter time-on-air, supporting higher device concentrations in urban environments while maintaining reliable connectivity.
  • Location-Based Services: A location-accuracy service that combines Wi-Fi, Bluetooth Low Energy (BLE) and GPS enables a new class of location-aware devices that don’t need expensive cellular solutions for asset-tracking applications.
  • Hub-Less Deployments: Used both for out-of-band diagnostics and as a signaling radio for battery-powered cameras, LoRa lowers the need for hubs/repeaters, reducing infrastructure complexity for consumers while extending effective coverage areas.
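The range-versus-throughput trade-off behind the multi-spreading-factor point above can be seen in Semtech’s nominal bit-rate formula for LoRa, Rb = SF × (BW / 2^SF) × CR. A minimal sketch, assuming a 125 kHz channel and 4/5 coding rate (typical LoRaWAN settings):

```python
# Each +1 in spreading factor (SF) doubles symbol duration (the 2**SF term),
# buying link budget/range at the cost of throughput.
def lora_bit_rate(sf, bw_hz=125_000, coding_rate=4 / 5):
    """Nominal LoRa bit rate in bit/s: Rb = SF * (BW / 2**SF) * CR."""
    return sf * (bw_hz / 2 ** sf) * coding_rate

for sf in (7, 9, 12):
    print(sf, round(lora_bit_rate(sf)))   # SF7 -> 5469 bit/s ... SF12 -> 293 bit/s
```

This is why a single Sidewalk bridge can trade speed for reach per device: distant, battery-powered sensors run at high SF and a few hundred bit/s, while nearby devices use low SF for several kbit/s on the same channel.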

Proven Heritage of LoRa in Massive IoT Networks:

Semtech’s LoRa technology has been deployed by more than 170 major mobile network operators globally, with over 500 million connected devices across smart cities, utilities, logistics, unmanned aircraft systems, and industrial applications. This proven deployment heritage provides the technical foundation and ecosystem maturity required for Amazon Sidewalk’s global expansion.

The technology’s long-range capability, extending connectivity up to several kilometers from Sidewalk bridge devices, combined with its ability to penetrate buildings and operate in dense urban environments, makes it uniquely suited for neighborhood-scale networks. LoRa provides license-free, long-range connectivity that consumers can rely on for years of battery-powered operation.

Building on CES 2026 Momentum:

Ring showcased its expanded product portfolio using LoRa at CES 2026, introducing comprehensive sensor families for security, safety and home automation. These products join the growing network of devices powered on Sidewalk, including water leak and freeze detection sensors, wearable devices and environmental monitoring solutions, all leveraging the connectivity advantages of LoRa.

The Sidewalk network’s architecture—combining LoRa for long-range communication with Bluetooth Low Energy for device setup—creates a robust, resilient IoT infrastructure that can scale to support millions of devices while maintaining the ultra-low power consumption critical for battery-operated sensors and cameras.

…………………………………………………………………………………………………………………………………………………………………..

About Semtech:

Semtech Corporation (Nasdaq: SMTC) is a leading provider of high-performance semiconductor, IoT systems and cloud connectivity service solutions dedicated to delivering high-quality technology solutions that enable a smarter, more connected and sustainable planet. Our global teams are committed to empowering solution architects and application developers to develop breakthrough products for the infrastructure, industrial and consumer markets.


Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

New Telco Opportunity – AI at the Edge:

At MWC 2026 last week, there was a flurry of claims that “AI at the Edge” would transform the telecom industry. One of many examples is an article titled “The AI edge boom is giving telecom a new strategic role.” In that piece, Jeff Aaron, vice president of product and solutions marketing at Hewlett Packard Enterprise (HPE), spoke with theCUBE’s John Furrier at MWC Barcelona during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed telecom edge AI and why networking is becoming a strategic foundation for data-centric services. Aaron said:

“A big reason for [reignited interest in routing] is AI workloads. They’re moving everywhere now. They have to move to the edge.  For them to move to the edge, you’ve got to get them outside of the factory and to all the locations. We’re right in the core of that, and it’s super exciting.”

As AI expands to the edge, data will need to move not only to local compute, but also between many distributed edge sites, making routing paramount. There are four ways AI infrastructure is scaling — inside data centers and across distributed edge locations, according to Aaron.

“There’s scale-out, scale-across, scale-up, and on-ramp. Two are within the data center — scale-out and scale-up — but scale-across and edge on-ramp basically mean you got to figure out how to connect to those areas, and those are just networking,” he added.

Scale-across refers to connecting distributed data centers and edge locations, while edge on-ramp brings remote sites such as factories or branch locations into the network to access AI services. Supporting those distributed environments creates an opportunity for HPE to bring networking and compute together into a more integrated infrastructure stack. At MWC 2026 Barcelona, those trends are clearly coming into focus, according to Aaron.

“Data is moving everywhere right now, and the network is back. The network isn’t just plumbing. The network is how you build a value-added service using an AI workload as a telco infrastructure,” he added.

Telecom carriers are now urgently trying to move from being “dumb data pipes” to becoming “AI performance platforms” by leveraging their geographically distributed infrastructure to host AI closer to the end user.  They urgently want to pivot from selling just bandwidth and connectivity to selling outcomes and intelligence with a heavy focus on industrial and enterprise-specific edge deployments.  They are considering the following services and business models:

  • Infrastructure as a Service (IaaS) & GPUaaS: Offering raw computing power, specifically GPUs, from edge data centers to enterprises that need low-latency processing without building their own facilities.
  • Sovereign AI Clouds: Providing AI services that guarantee data remains within national borders, appealing to government and highly regulated sectors like finance and healthcare.
  • API Monetization: Exposing real-time network data (e.g., location intelligence, predictive network quality, fraud risk scoring) via APIs that enterprises pay to integrate into their own applications.
  • Outcome-Based Pricing: Charging for specific business results, such as a “guaranteed video call quality” or “fraud loss reduction share,” rather than just data usage.
  • AI-as-a-Service (AIaaS): Bundling pre-trained models or specialized AI agents (e.g., for customer service or industrial monitoring) with connectivity.

Major Carrier AI Edge Deployment Plans:

  • AT&T:
    • Launched Connected AI for Manufacturing in March 2026, which unifies 5G, IoT, and generative AI to provide real-time fault detection (claiming a 70% reduction in waste).
    • Deploying “Edge Zones” in major U.S. cities (Detroit, LA, Dallas) to allow developers to run low-latency, cloud-based software locally.
    • Partnering with AWS to link fiber and 5G directly into AWS environments for distributed AI workloads.
  • Verizon:
    • Unveiled Verizon AI Connect, a suite of products designed to manage resource-intensive AI workloads for hyperscalers like Google Cloud and Meta.
    • Trialing V2X (Vehicle-to-Everything) platforms to provide carmakers with standardized APIs for low-latency edge processing in autonomous driving.
    • Collaborating with NVIDIA to integrate GPUs into private 5G networks for on-premise AI inferencing in robotics and AR.
  • SK Telecom (SKT):
    • Announced an “AI Native” strategy at MWC 2026, including a roadmap for AI-RAN (Radio Access Network) that uses GPUs to optimize network performance and host user AI apps simultaneously.
    • Building a Manufacturing AI Cloud powered by over 2,000 NVIDIA RTX GPUs to support digital twin simulations and robotics.
    • Expanding AI Data Centers (AIDC) across South Korea and Southeast Asia (Vietnam, Malaysia) using energy-optimized LNG-powered facilities.
  • Orange & Deutsche Telekom:
    • Deploying AI-powered planning tools to cut fiber rollout costs and optimize site power consumption by up to 33% using AI “Deep Sleep” modes.
    • Focusing on Sovereign AI strategies to ensure data governance for European enterprise customers.
  • Vodafone:
    • Utilizing AI/ML applications for daily power reduction at 5G sites and testing autonomous network healing via AI agents.
  • BT:
    • Offers 5G-connected VR for manufacturing design teams (e.g., Hyperbat) to collaborate on 3D models in real-time.  
……………………………………………………………………………………………………………..
Summary of Emerging AI Edge Products:
Product Category        | Primary Target  | Key Value Proposition
AI-RAN                  | Industry 4.0    | Seamless, ultra-low latency for robotics and sensing
Connected AI Platforms  | Manufacturing   | Real-time predictive maintenance and waste reduction
AI-as-a-Service (AIaaS) | Developers/SMBs | Access to GPU power and pre-trained models via telco edge nodes
Network Slicing APIs    | App Developers  | Programmatic control over bandwidth for AR/VR and gaming

…………………………………………………………………………………………………………………………………………………………………………………………..

A Dissenting View of “AI at the Edge”:

The global market for AI within the telecommunications sector is valued at $6.69 billion in 2026, growing at a compound annual growth rate (CAGR) of 41.9% from 2025. The broader edge AI market—including hardware, software, and services—is forecast to reach $29.98 billion in 2026, according to The Business Research Company. We think those estimates are way too high.

The market research firm states:

………………………………………………………………………………………………………

Author’s Opinion:

Unless telcos change their corporate culture and slow the footprint growth of cloud service providers/hyperscalers, we think that AI at the Edge will be yet another telco monetization failure – just like their failures to monetize 4G LTE apps, the telco cloud, 5G, multi-access edge computing (MEC), OpenRAN, LPWANs and other telecom technologies that never lived up to their promise and potential.

That’s largely because telcos are very weak at developing IT platforms, compute services, and killer applications, and at rapidly executing new services (e.g., 5G services require a 5G SA core network, which telcos were very slow to deploy). Telecom execs themselves cite cultural and speed‑of‑change issues: the industry is not organized like a software company, so it struggles to iterate products at AI/cloud pace. Telcos also historically struggle with software – managing distributed GPU clusters is vastly different from managing cell towers.

After spending billions on 5G with very little or no ROI, investors are skeptical of the increased capex required for AI-grade edge servers, which must be maintained by telcos. Those servers will be expensive (especially if they contain clusters of Nvidia GPUs) and consume a lot of power – a critical issue at the edge of the carrier’s network.

Many network operators frame AI/edge as “network optimization” or “utilizing underused sites,” not as building monetizable AI platforms with APIs, SDKs, and ecosystems. This mirrors 5G, where huge RAN/core builds were not matched by a clear product and platform strategy, leaving value to OTTs and hyperscalers, which are extending their control planes and protocol stacks to the network edge (local zones, operator co‑lo, on‑premises stacks).

Telcos risk becoming “dumb pipes” for AI traffic if they can’t provide a superior developer ecosystem.  If they only sell space/power/connectivity, the cloud service providers will continue to own the developer and AI value chain.  Analysts warn that edge is a “right to participate, not a right to win.”  As such, value accrues to whoever owns the AI platform, tools, marketplace, and pricing power, not the entity that provides connectivity, PoP or cell towers.

Data fragmentation and weak “intelligence” layer:

  • AI monetization depends on high‑quality, cross‑domain data, but telco data is fragmented across OSS, BSS, probes, and partner systems; without unification, it is hard to expose compelling network/edge intelligence services.

  • Analysts emphasize that failure here reduces telcos to generic GPU landlords, while higher‑margin offers (real‑time quality, fraud, identity, mobility/context APIs) remain unrealized.

Narrow internal focus on cost savings:

  • Many operators’ early AI focus is inward (Opex reduction in assurance, planning, customer care) rather than building external, revenue‑generating products, echoing how early 5G was justified mainly on cost/efficiency.

  • Commentators warn that if AI/edge remains a “network efficiency” play, the commercial upside will go to cloud/AI natives that turn similar capabilities into products sold to enterprises.

What analysts say telcos must do differently:

  • Build “Sovereign AI factories” and edge AI clouds: GPU‑enabled sites with cloud‑like developer experience (APIs, self‑service portals, metering, SLAs) and clear sovereign/regional guarantees.

  • Combine differentiated connectivity with AI services (latency‑backed SLAs, AI‑on‑RAN, domain‑specific models for verticals) and use modern, flexible commercial models instead of just selling bandwidth or colocation.

Conclusions:

In summary, the main challenge for telcos is to successfully transition from owning and maintaining network infrastructure to owning and operating AI platforms and products at software-industry speed. AI at the edge is less a new service or product and more an architectural upgrade. The two ways telcos can benefit are:

  1.  Internal cost reduction: If telcos use it to lower their own costs (fraud prevention, risk management, predictive maintenance, fault isolation, self-healing networks, etc.), it’s an automatic win but won’t increase the top line.
  2.  Revenue from new AI-edge services, e.g. Verizon uses edge-based video analytics in warehouses to improve inventory turnover by up to 40%. If telcos expect to charge a massive premium for “AI-enabled 5G,” they face the same monetization wall that has doomed them for the past 20 years!

References:

https://siliconangle.com/2026/03/04/telecom-edge-ai-makes-networking-strategic-mwc26/

https://www.nvidia.com/en-us/lp/ai/the-blueprint-for-ai-success-ebook/

How telcos can monetize AI beyond connectivity

https://www.thebusinessresearchcompany.com/report/generative-artificial-intelligence-ai-in-telecom-global-market-report

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

AT&T is strategically re-architecting its infrastructure for the AI era through high-capacity network modernization and deep integration with hyperscale cloud providers.

In addition to its almost six-year-old deal to run its 5G SA core network in Microsoft Azure’s cloud, AT&T announced at MWC 2026 that it’s now working with Amazon Web Services (AWS) to extend 5G and fiber connectivity from business customers and locations directly into AWS environments, creating secure, resilient and reliable premises‑to‑cloud architectures for AI workloads. The collaboration is designed to reduce network complexity and latency while supporting real‑time analytics, machine learning, and agentic AI use cases.

This collaboration continues a long-standing relationship between AT&T and AWS and follows recent news outlining broader efforts to modernize the nation’s connectivity infrastructure by providing high-capacity fiber to AWS data centers, migrating AT&T workloads to AWS cloud capabilities, and exploring emerging satellite technologies.

AWS Interconnect – last mile embeds AT&T‑delivered connectivity directly into AWS workflows. It is designed to enable customers to provision and manage last‑mile connectivity within the AWS environment, and it lays the foundation for AI agents that monitor and manage the AI experience from the user to the cloud. This streamlined, self‑managed approach helps enterprises reduce network complexity while maintaining control of their extended enterprise network, allowing businesses to move faster as they scale AI.

High level illustration of the planned AWS Interconnect – last mile architecture, showing how resilient interconnections and AT&T Fiber and fixed wireless access are intended to simplify private connectivity from customer locations into AWS environments. 

Diagram Source: AT&T

………………………………………………………………………………………………………

“AI does not just need more compute; it needs flatter networks and faster connections,” said Shawn Hakl, SVP & Head of Product, AT&T Business. “By bringing high‑capacity connectivity closer to cloud platforms, integrating the management of the networks directly into the cloud provisioning process and engineering for resiliency at the metro level, AT&T is helping enterprises streamline their networks, improve performance, security, and scale AI with confidence.”

AT&T says it is building an AI‑ready network (?) designed to scale performance through continued network investment, including capacity growth of up to 1.6 Tbps across key metro and long‑haul routes.

AT&T also announced it would work with Nvidia, Microsoft and MicroAI through its Connected AI platform for “smart manufacturing.”

………………………………………………………………………………………………………………..

Finally, AT&T described AT&T Geo Modeler, which is designed to better predict connectivity for emerging technologies like autonomous vehicles, drones, and robotics.

The Geo Modeler is an AI-powered simulation tool that helps predict, in near real time, how a wireless network will perform in the real world. Inspired by the video games AT&T scientist Velin Kounev played with his family growing up, the virtual model and simulation is “essentially like a giant video game of the United States” that, infused with AI tools, gives engineers a clearer picture of where potential weak spots may appear, so issues can be addressed earlier and fixes can roll out faster. In essence, it creates virtual models, similar to the way video games are designed and developed.
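The Geo Modeler itself is proprietary, but the core idea – simulate radio propagation over a map and flag weak spots before building – can be sketched with a toy grid model. The log-distance path-loss exponent, transmit power, and coverage threshold below are illustrative assumptions, not AT&T parameters:

```python
import math

def path_loss_db(d_m, f_mhz=1900.0, exponent=3.5, d0_m=1.0):
    """Log-distance path loss: free-space loss at reference distance d0,
    plus a distance term with an environment-specific exponent."""
    fspl_d0 = 32.45 + 20 * math.log10(d0_m / 1000) + 20 * math.log10(f_mhz)
    return fspl_d0 + 10 * exponent * math.log10(d_m / d0_m)

def coverage_map(sites, grid, tx_power_dbm=43.0, threshold_dbm=-110.0):
    """Mark each grid point as a weak spot if even the strongest site's
    predicted received power falls below the coverage threshold."""
    weak = []
    for (x, y) in grid:
        best = max(tx_power_dbm - path_loss_db(max(math.hypot(x - sx, y - sy), 1.0))
                   for (sx, sy) in sites)
        if best < threshold_dbm:
            weak.append((x, y))
    return weak

# One site at the origin; sample a 10 km line east of it every 500 m.
sites = [(0.0, 0.0)]
grid = [(d, 0.0) for d in range(500, 10001, 500)]
print(coverage_map(sites, grid))  # grid points predicted below threshold
```

With these assumed numbers, points roughly 2 km and beyond fall below the threshold – exactly the kind of weak-spot prediction the article describes, just at video-game-of-the-USA scale.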

“The Geo Modeler helps us see how the real world will shape coverage before we build, so we can deliver connectivity that’s ready for what’s next,” said AT&T scientist Velin Kounev.

Matt Harden, VP of Connected Solutions at AT&T, agrees. “The Geo Modeler is a foundational capability for the connected mobility era,” he said. “By marrying advanced geospatial simulation with AI-driven network orchestration, we can deliver predictable, high-performance connectivity that adapts with the environment. Whether it’s a hurricane, a packed stadium, or a city corridor full of autonomous vehicles, we will be prepared.”

References:

https://about.att.com/story/2026/aws-collaboration-scalable-business-ai.html

https://about.att.com/blogs/2026/150-years-of-connection.html

https://about.att.com/blogs/2025/geo-modeler.html

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T’s convergence strategy is working as per its 3Q 2025 earnings report

Progress report: Moving AT&T’s 5G core network to Microsoft Azure Hybrid Cloud platform

AT&T 5G SA Core Network to run on Microsoft Azure cloud platform

 

Direct-to-Device (D2D) satellite network comparison: Starlink V2 (Starlink Mobile) vs “Satellite Connect Europe”

Executive Summary:

1.  Starlink is preparing a new Direct-to-Device (D2D) constellation to provide satellite fill-in services and has rebranded its V2 D2D service as Starlink Mobile.  The rebrand coincides with the introduction of its next-generation V2 satellites, which aim to provide 5G-like broadband speeds (up to 150 Mbit/s) directly to unmodified smartphones.  With 650 direct-to-cell Starlink satellites active – part of a constellation of almost 10,000 Starlink satellites of various kinds – the roaming service now offers connectivity in 32 countries across six continents. Today, Starlink V1 D2D has 10 million active users a month – and the company expects to top 25 million by the end of 2026.

Where Starlink V1 delivers text and what SpaceX’s Nicolls described as “light data” (meaning data only for selected apps), Starlink V2 (Starlink Mobile) will deliver what was called “terrestrial-like connectivity.”  In good conditions, “it should look and feel like you’re connected to a high-performing 5G terrestrial network.”  To make that happen, V2 will need both new frequencies – the same globally-licensed S-band Starlink will use for emergency alerts – and new, much larger satellites.

Image Credit: ZUMA Press Inc/Alamy Stock Photo

………………………………………………………………………………………………………………

2. European operators have launched “Satellite Connect Europe” to offer wholesale D2D services to mobile carriers.  Satellite Connect Europe is a joint venture between AST SpaceMobile and Vodafone that will primarily use AST SpaceMobile satellites to offer direct-to-device (D2D) services in Europe. The venture is building a dedicated, sovereign European constellation, with plans to establish an operations center in Germany.

Five major mobile network operator groups will deploy D2D satellite mobile broadband services across Europe. The agreements cover CK Hutchison, Orange, Sunrise, Telefonica and Vodafone, with customer trials scheduled to start this summer (2026).  The service is expected to launch around the end of 2026, with demonstrations planned in Romania before then.

Role of 3GPP NTN specifications:

Both of these initiatives are dependent on 3GPP-based non‑terrestrial networking (NTN) specs, introduced primarily in Release 17 and enhanced in Release 18 to enable direct satellite-to-device connectivity using 5G NR (new radio) and IoT (NB-IoT/eMTC) protocols. 3GPP detailed NTN specs include TR 38.821 (architecture), TS 38.101-5 (user equipment radio performance), and TS 38.104 (base station requirements), supporting LEO/GEO orbits and S/Ka-band spectrum.

  • 3GPP Release 17 introduced NR‑NTN and IoT‑NTN profiles, defining waveform adaptations, timing and Doppler compensation, mobility procedures, and MSS band mappings so that satellite and terrestrial RANs interoperate under a single 5G system architecture.  These NTN specs will be submitted to ITU-R WP 4B for rubber stamping as ITU-R recommendations (official standards).

  • Both the Starlink and Satellite Connect Europe/AST initiatives map their radio interfaces and mobility behavior to these NTN specifications over time, which should let future 5G devices with NTN support hand over natively between cell towers and satellites without custom stacks.

These two D2D initiatives differ in radio design, spectrum, and integration model with the mobile network operators that provide the actual end-point connections, as follows:

Starlink D2D technical details:

  • Starlink’s Direct‑to‑Cell satellites use software‑defined radios and large phased‑array antennas so each LEO satellite behaves like a moving LTE/NR macro cell in space.

  • Unlike standard Starlink Ku/Ka user terminals, the D2D layer transmits and receives in allocated terrestrial/mobile bands (roughly 800–2000 MHz) to talk directly to 3GPP LTE/NR chipsets in unmodified handsets, using TDD LTE initially.

  • The payload compensates for fast LEO motion (~550 km altitude, ~7.5 km/s) with Doppler pre‑correction and timing advance logic in the satellite SDR so that ordinary UE modems still see acceptable frequency and timing error.

  • Onboard beamforming and beam‑hopping allow very narrow spot beams and dynamic power control, which is critical to protect terrestrial networks sharing IMT spectrum and to deliver enough link budget for small handset antennas at long slant ranges.

  • Backhaul from the D2D layer uses Starlink’s existing Ku/Ka links and optical inter‑satellite links into the ground segment, so D2D traffic can be routed either to the MNO’s core via gateways or across the Starlink mesh to another region.
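To put rough numbers on the bullets above (Doppler pre-correction, timing advance, and link budget), here is a back-of-the-envelope sketch. The 2 GHz carrier frequency is an assumed mid-band value for illustration; altitude and velocity are the figures quoted above:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(f_c_hz, v_radial_mps):
    """Worst-case Doppler shift the satellite SDR must pre-correct."""
    return f_c_hz * v_radial_mps / C

def one_way_delay_ms(slant_range_km):
    """Propagation delay the timing-advance logic must absorb."""
    return slant_range_km * 1e3 / C * 1e3

def fspl_db(d_km, f_mhz):
    """Free-space path loss (Friis): 32.45 + 20log10(d_km) + 20log10(f_MHz)."""
    return 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

# ~550 km LEO at ~7.5 km/s, 2 GHz carrier (assumed band for illustration).
print(f"max Doppler:  {doppler_shift_hz(2e9, 7.5e3)/1e3:.0f} kHz")
print(f"zenith delay: {one_way_delay_ms(550):.2f} ms one-way")
# Link-budget penalty of a long slant range near the horizon vs. zenith:
print(f"extra path loss at 1500 km slant range: "
      f"{fspl_db(1500, 2000) - fspl_db(550, 2000):.1f} dB")
```

The tens-of-kHz Doppler shift and millisecond-scale delay are far outside what a terrestrial UE modem expects, which is why the satellite payload must pre-correct them; the ~9 dB extra loss at low elevation angles is why very narrow, high-gain spot beams are needed to reach small handset antennas.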

Service model and 3GPP spec alignment:

  • Starlink positions Direct‑to‑Cell as a “fill‑in” layer: SMS/low‑rate data first, then higher‑rate NR‑NTN services as 3GPP Release 17+ NTN features become available in commercial chipsets.

  • The network integrates at the EPC/5GC interface so MNOs can advertise satellite coverage as just another PLMN/RA, letting devices roam seamlessly between terrestrial eNB/gNBs and the Starlink NTN cells, subject to roaming and spectrum agreements.

Satellite Connect Europe D2D technology:

  • Satellite Connect Europe is a wholesale platform that exposes AST SpaceMobile’s LEO D2D satellite RAN to European MNOs, with ground stations in multiple EU markets providing regional gateways, traffic anchoring, and regulatory control within European jurisdiction.

  • AST’s constellation uses very large phased arrays in LEO to form direct 4G/5G broadband links to standard smartphones, targeting multi‑Mbps throughput per device over IMT and MSS spectrum, again without any handset hardware or software changes.

  • The ground segment is designed so that radio resource control, data handling, lawful intercept, and policy enforcement for European traffic all sit under EU‑based operational control, which is a key differentiator versus non‑European satellite operators.

  • Integration work with operators such as Telefónica and Orange focuses on core‑network interconnect, mobility management between terrestrial 4G/5G sites and satellite cells, and using D2D mainly for rural coverage and resilience in outages or disasters.

Side-by-side comparison:

  • Primary spectrum – Starlink D2D: mobile mid‑bands (LTE/NR IMT), with Ku/Ka for backhaul. Satellite Connect Europe/AST: IMT + MSS bands exposed via AST’s LEO payloads.
  • Device support – Starlink D2D: standard LTE/NR phones, starting with LTE TDD. Satellite Connect Europe/AST: standard 4G/5G smartphones with broadband‑class links.
  • Constellation role – Starlink D2D: global fill‑in layer on top of the existing Starlink mesh. Satellite Connect Europe/AST: European‑focused wholesale access to the AST constellation.
  • Control plane – Starlink D2D: SpaceX‑operated RAN with MNO integration at the core level. Satellite Connect Europe/AST: EU‑based ground stations with MNO‑first governance and policy.
  • Standards trajectory – Starlink D2D: migrating from LTE to full NR‑NTN as device support matures. Satellite Connect Europe/AST: positioned explicitly as 4G/5G D2D aligned with NTN evolution.

……………………………………………………………………………………………………………………………………………………………..

Addendum:  Starlink deal with Deutsche Telekom:

In a partnership with Starlink, Deutsche Telekom will bring mobile communications to areas where network expansion is particularly challenging, for example due to nature conservation requirements or demanding topography.

“We provide our customers with the best mobile network. And we continue to invest heavily in expanding our infrastructure,” said Abdu Mudesir, Board Member for Product and Technology at Deutsche Telekom. “At the same time, there are regions where expansion is especially complex due to topographical conditions or official constraints. We want to ensure reliable connectivity for our customers in those areas as well. That is why we are strategically complementing our network with satellite-to-mobile connectivity. For us, it is clear: connectivity creates security and trust. And we deliver. Everywhere.”

“We’re so pleased to bring reliable satellite-to-mobile connectivity to millions of people across 10 countries in partnership with Deutsche Telekom,” said Stephanie Bednarek, VP of Starlink Sales. “This agreement will be the first-of-its-kind in Europe to launch Starlink’s V2 next-generation technology that will expand on data, voice and messaging by providing broadband directly to mobile phones.”

……………………………………………………………………………………………………………………………………………………………..

References:

https://www.3gpp.org/technologies/ntn-overview

https://itbrief.co.uk/story/satellite-connect-europe-seals-five-mno-trial-deals

https://www.telekom.com/en/media/media-information/archive/telekom-and-starlink-satellite-to-mobile-for-europe-1103000

https://www.lightreading.com/satellite/at-mwc-spacex-execs-tout-starlink-v2-and-a-key-carrier-partner-for-it

Non-Terrestrial Networks (NTNs): market, specifications & standards in 3GPP and ITU-R

ITU-R recommendation IMT-2020-SAT.SPECS from ITU-R WP 5B to be based on 3GPP 5G NR-NTN and IoT-NTN (from Release 17 & 18)

Starlink doubles subscriber base; expands to 42 new countries, territories & markets

Elon Musk: Starlink could become a global mobile carrier; 2 year timeframe for new smartphones

Amazon Leo (formerly Project Kuiper) unveils satellite broadband for enterprises; Competitive analysis with Starlink

Blue Origin announces TeraWave – satellite internet rival for Starlink and Amazon Leo

From LPWAN to Hybrid Networks: Satellite and NTN as Enablers of Enterprise IoT – Part 2

Keysight Technologies Demonstrates 3GPP Rel-19 NR-NTN Connectivity in Band n252

Telecoms.com’s survey: 5G NTNs to highlight service reliability and network redundancy

China ITU filing to put ~200K satellites in low earth orbit while FCC authorizes 7.5K additional Starlink LEO satellites

NBN selects Amazon Project Kuiper over Starlink for LEO satellite internet service in Australia

GEO satellite internet from HughesNet and Viasat can’t compete with LEO Starlink in speed or latency

 

Huawei unveils AI Centric Network roadmap, U6 GHz products, 5G Advanced strategy and SuperPoD cluster computing platforms

Absent from all the MWC 2026 6G AI alliance announcements, Huawei instead released a series of all-scenario U6 GHz products to help carriers unlock the full potential of 5G Advanced (5G-A) and set the stage for a seamless transition to 6G.  Huawei also showcased its SuperPoD cluster for the first time outside China, which it created to offer “a new option for the intelligent world.”

  • The all-scenario U6 GHz products and solutions Huawei released today use innovative technologies to create a high-capacity, low-latency, optimal-experience backbone designed for mobile AI applications.
  • There are already 70 million 5G-A users globally, and 5G-A is increasingly being adopted by carriers at scale. In China, Huawei has helped carriers deliver contiguous 5G-A coverage across 270 cities and launch 5G-A packages that monetize experience in over 30 provinces.

The company also launched enhanced AI-Centric Network solutions [1.] that will help carriers prepare for the agentic era by enabling intelligent services, networks, and network elements (NEs). The company plans to build more AI-centric networks and computing backbones to help carriers and industry customers seize opportunities in the AI era.

Note 1. Huawei’s AI-Centric Network roadmap is designed to integrate intelligence directly into 5G-Advanced (5G-A) infrastructure and accelerate the transition toward Level-4 Autonomous Networks. The company plans to work with global carriers (where it’s not blacklisted) on large-scale 5G-A deployment, use high uplink to address surging consumer and industry demand for mobile AI applications, and use the U6 GHz band to unlock the full value of spectrum and pave the way for a smooth evolution to 6G.

Photo Credit: Huawei

…………………………………………………………………………………………………………………………………………………………………………….

Three-Layer Intelligence in AI-Centric Networks: Accelerating the Agentic Era:

As mobile network operators transition toward AI-native 5G-Advanced and early 6G architectures, Huawei is positioning its AI-Centric Network portfolio as the blueprint for next-generation intelligent networks. By embedding intelligence across service, network, and network element (NE) layers, Huawei aims to establish the foundation for fully agentic, autonomously managed infrastructures.

  • Service Layer: Focuses on multi-agent collaboration platforms to transform core carrier services—such as voice and home broadband—into intelligent service platforms.
  • Network Layer: Aims to evolve from single-scenario automation to end-to-end single-domain network autonomy. Huawei officially launched AUTINOps, an AI-native intelligent operations solution designed to replace traditional manual O&M with predictive, preventive “digital employees”.
  • Network Element (NE) Layer: Utilizes AI to optimize algorithms for RANs (Radio Access Networks) and core networks, improving spectral efficiency and service awareness.

At the Service layer, Huawei is enabling carriers to operationalize multi-agent collaboration frameworks that embed domain-specific intelligence into key service categories: voice, broadband, and digital experience monetization. These AI agents dynamically manage customer experience and lifecycle value, supporting the transformation of core connectivity services into intelligent, context-aware digital offerings.

At the Network layer, the company’s Autonomous Driving Network Level 4 (ADN L4) initiative focuses on single-scenario automation, delivering measurable improvements in O&M efficiency, service quality, and monetization agility. By the close of 2025, ADN single-scenario deployments were active across more than 130 commercial telecom networks. The next phase targets end-to-end, single-domain autonomy across transport, access, and core networks—an essential step toward zero-touch O&M and intent-driven orchestration in 5G-A and 6G environments.

At the Network Element layer, Huawei is jointly advancing AI-driven innovation across RAN, WAN, and core domains. This includes algorithmic optimization for intelligent RAN scheduling, service-aware traffic identification in WANs, and unified intent modeling across B2C and B2H use cases. Such capabilities enhance spectral and energy efficiency, enable predictive resilience, and provide fine-grained service awareness—all foundational for AI-native air interface and network control in 6G.

Computing Backbone with SuperPoD Clusters:

Supporting this vision, Huawei is introducing its next-generation SuperPoD and cluster computing platforms, designed as high-performance compute backbones for distributed AI model training and inference within telecom and enterprise domains. Featuring the proprietary UnifiedBus interconnect and system-level architecture innovations, the Atlas 950, TaiShan 950, and Atlas 850E SuperPoDs, along with the TaiShan 200–500 servers, deliver ultra-low latency and high throughput optimized for trillion-parameter AI models and real-time agentic operations.

Aligned with its open innovation strategy, Huawei continues to expand an open, collaborative computing ecosystem, supporting open-source frameworks and open-access platforms to accelerate the deployment of intelligent, AI-driven digital infrastructure worldwide.

Intelligent Transformation Across Industry Domains:

At MWC Barcelona 2026, Huawei is highlighting 115 end-to-end industrial intelligence showcases across verticals, underscoring its role in helping enterprises adopt AI-centric operational models. Through the SHAPE 2.0 Partner Framework, 22 co-developed AI and digital infrastructure solutions will demonstrate how vertical industries—from manufacturing and energy to transportation and healthcare—can harness 5G-A and AI integration to deliver measurable business outcomes.

Toward 5G-A Commercialization and 6G Evolution:

With large-scale 5G-Advanced rollouts accelerating, Huawei is collaborating with global carriers and ecosystem partners to realize level-4 autonomous networks and establish the architectural bridge to 6G. Central to this evolution is the convergence of AI, connectivity, and computing—enabling networks that can self-learn, self-optimize, and autonomously orchestrate service intent. These AI-Centric Network initiatives and SuperPoD-based computing backbones form the foundation for value-driven, intelligent networks built for the agentic era.

5G-Advanced and Infrastructure Innovations:

Huawei’s 5G-A strategy, branded as GigaUplink, focuses on delivering the high-uplink capacity and low latency required for mobile AI applications:

  • U6 GHz Spectrum: Launched a comprehensive portfolio of all-scenario U6 GHz products to unlock 5G-A’s full potential and provide a smooth evolution path to 6G.
  • Agentic Core: Introduced the Agentic Core solution, which integrates intelligence natively into the core network to support ubiquitous AI agent access across devices.
  • All-Optical Target Network: Proposed an AI-centric optical roadmap featuring dual strategies: “AI for networks” (optimizing operations) and “networks for AI” (supporting AI workloads with ultra-low latency benchmarks of 1-5ms).

………………………………………………………………………………………………………………………………………………………..

References:

https://www.huawei.com/en/news/2026/3/mwc-ai-centric-network

https://carrier.huawei.com/en/minisite/events/mwc2026/

NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU

Omdia on resurgence of Huawei: #1 RAN vendor in 3 out of 5 regions; RAN market has bottomed

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

Huawei Cloud Review and Global Sales Partner Policies for 2026

Huawei’s Electric Vehicle Charging Technology & Top 10 Charging Trends

Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

 

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

Overview:

AT&T and Ericsson have completed a milestone Cloud RAN test by successfully demonstrating Ericsson’s AI-native Link Adaptation [1.] on a Cloud RAN stack powered by Intel Xeon 6 SoC.  The test showed how artificial intelligence (AI) can improve spectral efficiency and network responsiveness in real-world conditions.  Conducted over AT&T’s licensed frequency bands, the experiment was the first to use portable Ericsson RAN software running on Intel’s new Xeon 6 system-on-chip (SoC) platform—an architecture designed for high-performance, cloud-native processing of RAN workloads. Engineered specifically for network and edge deployments, Intel Xeon 6 SoC delivers breakthrough AI RAN performance with built-in acceleration. Integrated Intel Advanced Vector Extensions (AVX) and Intel Advanced Matrix Extensions (AMX) technologies eliminate the need for discrete accelerators while maximizing capacity, efficiency, and TCO.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Note 1. AI-native Link Adaptation dynamically adjusts to changes in signal quality and interference, boosting RAN performance on purpose-built and cloud-based infrastructure alike.

Other Notes:

  • vRAN: A radio access network (RAN) in which the baseband processing functions run as software on general-purpose processors (mostly from Intel) instead of on dedicated hardware at the cell site. In vRAN, the functional split defines how baseband processing is divided between centralized processors and the radio unit at the site, and that split drives fronthaul bandwidth, latency, and cost.

  • Cloud RAN: An evolution of vRAN where those same RAN functions are re-architected as cloud‑native microservices/containers with CI/CD (Continuous Integration and either Continuous Delivery or Continuous Deployment), automation, and orchestrators, optimized for elastic scaling across distributed cloud infrastructure.
  • Ericsson Cloud RAN is a cloud native software solution that handles compute functionality in the RAN. It virtualizes RAN functions on Commercial Off The Shelf (COTS) hardware, decoupling software from hardware to enable more flexible, scalable, and efficient network deployments.
  • According to Dell’Oro Group, Cloud RAN (often encompassing vRAN) accounted for approximately 5% to 10% of the total global Radio Access Network (RAN) market revenues in 2025.  In early 2026, Dell’Oro revised Cloud RAN projections downward. While virtualization remains a “key pillar” for the long term, short-term adoption is being slowed by performance, power, and cost-parity challenges when compared to purpose-built hardware.
  • The total RAN market stabilized in late 2025 after losing approximately 20% of its value between 2022 and 2024. Market concentration reached a 10-year high in 2025, with the top five vendors (Huawei, Ericsson, Nokia, ZTE, and Samsung) capturing 96% of the revenue.
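The functional-split tradeoff mentioned in the vRAN note above can be made concrete with a rough fronthaul dimensioning sketch. The carrier parameters (100 MHz NR, 273 PRBs at 30 kHz SCS, 122.88 Msps) are standard values, but the overhead factors and bit widths are illustrative assumptions:

```python
def split8_fronthaul_gbps(sample_rate_msps, iq_bits, ant_ports, overhead=16/15):
    """Low split (CPRI-style time-domain IQ): bandwidth scales with antenna ports."""
    return sample_rate_msps * 1e6 * 2 * iq_bits * ant_ports * overhead / 1e9

def split72_fronthaul_gbps(prbs, symbols_per_sec, iq_bits, layers, overhead=1.1):
    """Higher split (O-RAN 7.2x frequency-domain IQ): scales with spatial layers."""
    return prbs * 12 * symbols_per_sec * 2 * iq_bits * layers * overhead / 1e9

# 100 MHz NR carrier, 30 kHz SCS: 273 PRBs, 28,000 OFDM symbols/s,
# a 64T64R radio carrying 4 spatial layers, uncompressed 16-bit IQ.
lo = split8_fronthaul_gbps(122.88, 16, 64)
hi = split72_fronthaul_gbps(273, 28_000, 16, 4)
print(f"split 8:   {lo:.0f} Gbps")   # hundreds of Gbps per cell – impractical
print(f"split 7.2: {hi:.0f} Gbps")   # tens of Gbps – feasible on 25/100G links
```

The two-orders-of-magnitude gap is why the choice of split drives fronthaul bandwidth, latency, and cost, and why massive-MIMO vRAN deployments favor higher splits (with IQ compression shrinking the number further in practice).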

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Image Credit: Ericsson

In this proof-of-concept setup, Ericsson’s disaggregated and containerized RAN software operated within AT&T’s target Cloud RAN configuration, built on open, commercial off-the-shelf hardware. The test advanced from basic call functionality to validation of feature-rich network behavior in a cloud computing environment. Ericsson’s AI-native Link Adaptation is a learning algorithm that continuously assesses channel state and interference to determine the optimal modulation and coding scheme for each transmission interval. By generating real-time predictions of link quality, the AI model dynamically adjusts data rates to maximize throughput and spectral efficiency.

Early results were promising. Throughput gains reached up to 20% compared with conventional rule-based link adaptation approaches, alongside measurable improvements in spectral efficiency. Ericsson and Intel also used the trial to benchmark various AI inference models, demonstrating performance scalability and energy efficiency on general-purpose compute nodes rather than proprietary hardware accelerators. This suggests a more pragmatic path for deploying AI workloads across distributed RAN architectures.

Beyond the immediate performance improvements, the trial illustrates how open RAN architectures can accelerate innovation. By decoupling RAN software from vendor-specific hardware, AT&T can integrate AI capabilities and update network functions more quickly, avoiding the constraints of lock-in. The portability demonstrated here—running production-grade Ericsson RAN software on Intel Xeon 6 silicon—marks an industry first.

For AT&T, the achievement represents more than a lab milestone. It provides a technical template for scaling AI-native RAN functions into its cloud infrastructure, pointing to a future where machine learning operates natively within radio environments to fine-tune performance in real time. As operators continue balancing cost, flexibility, and efficiency, AI-optimized Cloud RAN deployments could become the next competitive frontier in 5G—and eventually, 6G—network evolution.

………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Quotes:

Rob Soni, Vice President, RAN Technology at AT&T, says: “AT&T is leading the charge toward an open, intelligent, and scalable network future by advancing Open RAN and Cloud RAN with AI-native capabilities at their core. This demo highlights how AI capabilities, powered by our next-generation Cloud RAN platform, can be deployed seamlessly to drive innovation and deliver superior customer experiences.”

Mårten Lerner, Head of Networks Strategy and Product Management, Business Area Networks at Ericsson, says: “Together with AT&T and Intel, Ericsson is demonstrating how our domain expertise combined with AI-native RAN software can drive transformative advancements in both Cloud RAN and purpose-built deployments. Our industry-leading AI-native Link Adaptation serves as the first proof point on this journey. With a hardware-agnostic RAN software stack, Ericsson is committed to offering maximum flexibility and enabling all our customers to benefit from future innovations – regardless of their chosen underlying hardware. This milestone underscores Ericsson’s commitment to helping operators advance their networks by deploying AI functionality across the RAN stack.”

Cristina Rodriguez, VP and GM, Network and Edge at Intel, says: “This successful collaboration with AT&T and Ericsson showcases the power of Intel Xeon 6 SoC to enable and accelerate AI workloads in Cloud RAN environments. Xeon 6 SoC is architected to handle the demanding compute requirements of AI-native network functions, delivering the performance and efficiency operators need to unlock the full potential of intelligent networks. By providing a flexible, standards-based platform, Intel Xeon 6 enables service providers like AT&T to deploy innovative AI capabilities while maintaining the openness and choice that drive industry innovation.”

………………………………………………………………………………………………………………………………………………………………………………………………………………………….

AI-Native Link Adaptation vs. Traditional Methods:

Traditional link adaptation in RAN relies on deterministic, rule-based algorithms that select the Modulation and Coding Scheme (MCS) from predefined lookup tables. These methods primarily use instantaneous Channel Quality Indicator (CQI) reports or estimated Signal-to-Interference-plus-Noise Ratio (SINR) thresholds, often adjusted via Outer Loop Link Adaptation (OLLA) based on ACK/NACK feedback from the UE. This reactive approach applies conservative margins to account for channel estimation errors, prediction lag, and varying interference, which can lead to suboptimal throughput—either underutilizing the link with low MCS or triggering excess HARQ retransmissions with overly aggressive selections.
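The outer-loop mechanism described above is simple enough to sketch. The HARQ step sizes and the toy MCS lookup table below are illustrative, not from any vendor implementation:

```python
def olla_step(offset_db, ack, target_bler=0.10, delta_up=0.5):
    """Nudge the SINR back-off on each HARQ result; choosing
    delta_down = delta_up * target / (1 - target) makes the long-run
    NACK rate converge to the BLER target."""
    delta_down = delta_up * target_bler / (1 - target_bler)
    return offset_db - delta_down if ack else offset_db + delta_up

# Toy MCS lookup: (minimum adjusted SINR in dB, MCS index).
MCS_TABLE = [(-5.0, 0), (0.0, 5), (5.0, 10), (10.0, 15), (15.0, 20), (20.0, 25)]

def select_mcs(reported_sinr_db, offset_db):
    """Pick the highest MCS whose threshold the backed-off SINR clears."""
    adjusted = reported_sinr_db - offset_db
    mcs = 0
    for threshold, index in MCS_TABLE:
        if adjusted >= threshold:
            mcs = index
    return mcs

print(select_mcs(10.2, 0.0))              # 15: aggressive pick pre-NACK
offset = olla_step(0.0, ack=False)        # NACK -> widen the margin
print(select_mcs(10.2, offset))           # 10: conservative pick post-NACK
```

This is exactly the reactive behavior the article critiques: the loop only widens or narrows a margin after errors occur, which is where the conservative-margin throughput loss comes from.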

AI-native Link Adaptation shifts to a predictive, model-driven paradigm using machine learning (typically lightweight neural networks or time-series models) trained on historical channel data. Rather than static thresholds, the AI processes sequences of CQI, beam metrics, mobility patterns, and interference traces to forecast the probable channel state for the next transmission time interval (TTI). This enables precise MCS selection that hugs the Shannon capacity limit more closely, minimizing BLER while maximizing spectral efficiency in dynamic scenarios like high-mobility NLOS or bursty interference.
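A minimal stand-in for the predictive side is one-step extrapolation of a short SINR window, with the forecast mapped to a Shannon spectral-efficiency bound. The real models and feature sets are far richer; this only illustrates the react-vs-predict distinction:

```python
import math

def predict_next_sinr(history_db):
    """One-step least-squares linear extrapolation over a short SINR window –
    a toy stand-in for the time-series models described in the article."""
    n = len(history_db)
    x_mean = (n - 1) / 2
    y_mean = sum(history_db) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history_db))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den if den else 0.0
    return y_mean + slope * (n - x_mean)  # forecast for the next TTI

def shannon_se(sinr_db):
    """Spectral-efficiency bound (bit/s/Hz) the MCS choice tries to approach."""
    return math.log2(1 + 10 ** (sinr_db / 10))

fading = [12.0, 11.0, 10.0, 9.0]       # SINR trending down (e.g., UE moving away)
forecast = predict_next_sinr(fading)   # 8.0 dB: rate backed off *before* NACKs
print(f"forecast {forecast:.1f} dB -> <= {shannon_se(forecast):.2f} bit/s/Hz")
```

Where the rule-based loop waits for NACKs, the predictor lowers the rate in advance of the fade, which is the mechanism behind the throughput and BLER gains claimed in the trial.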

Key differences include:

  • Decision mechanism – Traditional: lookup tables, SINR thresholds, OLLA offsets. AI-native: real-time inference from ML models.
  • Channel handling – Traditional: reactive (past CQI/SINR). AI-native: predictive (time-series forecasting).
  • Adaptation speed – Traditional: step-wise, with feedback lag. AI-native: continuous, sub-TTI granularity.
  • Performance gains – Traditional: baseline (0% reference). AI-native: up to 20% throughput, 10% spectral efficiency.
  • Compute needs – Traditional: low (fixed arithmetic). AI-native: moderate (edge inference on COTS like Xeon 6).
  • Limitations – Traditional: struggles with non-stationary channels. AI-native: requires training data and retraining overhead.

In practice, as shown in the AT&T/Ericsson trials, AI-native methods exploit patterns invisible to heuristics – like correlated fading in massive MIMO – delivering consistent gains across diverse propagation environments. This positions AI-native link adaptation as a foundational element for Cloud RAN evolution.

References:

Ericsson and Intel collaborate to accelerate AI-Native 6G; other AI-Native 6G advancements at MWC 2026

Ericsson and Intel at MWC 2026:

Building on milestones in Cloud RAN, 5G Core, and open network innovation, Ericsson and Intel are showcasing joint technology advancements at the Mobile World Congress (MWC) 2026 in Barcelona this week. Demonstrations can be experienced at the Ericsson Pavilion (Hall 2), the Intel Booth (Hall 3, Stand 3E31), and across partner event spaces, highlighting the companies’ shared progress in enabling the next era of AI-driven networks.

The two companies are strengthening their long-standing technology partnership to accelerate ecosystem readiness for AI-native 6G networks and use cases. The expanded collaboration spans next-generation mobile connectivity, cloud infrastructure, and compute acceleration — with a focus on AI-driven RAN and packet core evolution, platform-level security, and scalable cloud-native architectures designed to shorten time-to-market for advanced network solutions.

“6G is not merely an iteration of mobile technology; it will serve as the foundational infrastructure distributing AI across devices, the edge, and the cloud,” said Börje Ekholm, President and CEO of Ericsson. “With our deep history in network innovation and global-scale operator deployments, Ericsson is uniquely positioned to drive practical 6G integration from research to commercialization.”

Lip-Bu Tan, CEO of Intel, added: “Intel’s vision is to lead the industry in unifying RAN, Core, and edge AI to enable seamless deployment of AI-native 6G environments. Together with Ericsson, we are proving that next-generation connectivity can be open, energy-efficient, secure, and intelligent. With future Ericsson Silicon built on Intel’s most advanced process technologies, coupled with Intel Xeon-powered AI-RAN ready Cloud RAN and collaborative multi-year research efforts, we are delivering the performance, efficiency, and supply assurance demanded by leading operators worldwide.”

As 6G transitions from research to commercialization, the industry must align around a mature, standards-based ecosystem. The Ericsson–Intel collaboration aims to accelerate development of high-performance, energy-efficient compute architectures optimized for both AI for Networks and Networks for AI.

AI-native 6G will fuse intelligent, programmable network functions with distributed compute and real-time sensing, bringing processing power closer to the network edge and enabling ultra-responsive, adaptive services. This convergence will enhance network efficiency, agility, and service intelligence across future deployments.

About Ericsson:

Ericsson’s high-performing networks provide connectivity for billions of people every day. For 150 years, we’ve been pioneers in creating technology for communication. We offer mobile communication and connectivity solutions for service providers and enterprises. Together with our customers and partners, we make the digital world of tomorrow a reality.

About Intel:

Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better.

…………………………………………………………………………………………………………………………………………………………

Related AI-Native 6G Announcements at MWC 2026:

In addition to the Ericsson-Intel collaboration, several vendors and operators announced AI-native 6G advancements or related demos at MWC Barcelona 2026. These initiatives emphasize AI-RAN integration, software-defined architectures, and ecosystem partnerships to bridge 5G-A to 6G.

NVIDIA Multi-Partner Commitment: NVIDIA rallied operators and vendors including Booz Allen, BT Group, Cisco, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, and T-Mobile to build open, secure AI-native 6G platforms. The focus is on software-defined wireless with AI embedded in RAN, edge, and core for integrated sensing, communications, and interoperability.

Nokia AI-RAN: Nokia highlighted new partnerships with Dell, Quanta, Red Hat, SuperMicro, NVIDIA, and operators like T-Mobile, Indosat Ooredoo Hutchison, BT, Elisa, NTT DOCOMO, and Vodafone for AI-RAN trials paving the way to cognitive 6G networks. Live demos at Nokia’s Hall 3 Booth 3B20 included Southeast Asia’s first AI-RAN Layer 3 5G call on shared GPU infrastructure and vision AI for immersive services.

T-Mobile & Deutsche Telekom Hub: T-Mobile US and (major shareholder) Deutsche Telekom launched a joint 6G Innovation Hub targeting AI-native autonomous networks, secure sensing/positioning, and connectivity-compute convergence for Physical AI. It builds on agentic AI proofs like network-integrated translation, emphasizing “kinetic tokens” for real-time physical world control.

ZTE GigaMIMO 6G Prototype: ZTE unveiled the world’s first 6G prototype with 2000+ U6G-band antenna elements (GigaMIMO), powered by AI algorithms for 10x capacity over 5G-A, 30% spectral efficiency gains, and AI-driven immersive services. Booth 3F30 demos integrate AI across connectivity, computing, and devices for “AI serves AI” networks.

Qualcomm Agentic AI RAN: Qualcomm announced AI-native RAN management services in its Dragonwing suite for autonomous 6G-grade networks, plus new Open RAN AI features for performance optimization. CEO Cristiano Amon’s keynote focused on “Architecting 6G for the AI Era,” with device-to-data-center transformations.

Huawei U6GHz for 6G Path: Huawei released all-scenario U6GHz products (macro/micro sites, microwave) with AI-centric solutions for 5G-A capacity (100 Gbps downlink) and low-latency AI apps, enabling smooth 6G evolution. It emphasizes hyper-resolution MU-MIMO and multi-band coordination for indoor/outdoor AI experiences.

Summary Chart:

| Vendor/Operator | Key Focus | Partners/Demos | Booth/Location |
|---|---|---|---|
| NVIDIA | Open AI-native platforms | Multiple operators/vendors | MWC general |
| Nokia | AI-RAN trials & cognitive networks | NVIDIA, T-Mobile, IOH et al. | Hall 3, 3B20 |
| T-Mobile/DT | Physical AI hub | Joint R&D | Announced pre-MWC |
| ZTE | GigaMIMO 6G prototype | China Mobile, Qualcomm | Hall 3, 3F30 |
| Qualcomm | Agentic RAN automation | Open RAN ecosystem | Keynote & demos |
| Huawei | U6GHz AI-centric evolution | Carrier-focused | MWC showcase |

…………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.prnewswire.com/news-releases/ericsson-and-intel-collaborate-to-accelerate-the-path-to-commercial-ai-native-6g-302700703.html

NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

SKT 6G ATHENA White Paper: a mid-to-long term network evolution strategy for the AI era

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D

SK Telecom, DOCOMO, NTT and Nokia develop 6G AI-native air interface

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU

Executive Summary:
NVIDIA today announced a strategic collaboration with a global coalition of industry leaders — including Nokia, Ericsson, T-Mobile, and Deutsche Telekom — to architect the next generation of AI-native wireless infrastructure.
As we’ve noted many times, 6G/IMT 2030 will be AI-native and software-defined, enabling wireless networks to advance at the pace of innovation. 6G networks, built on AI-RAN architecture, will continuously evolve through software, enabling real-time intelligence and rapid advancement. This transformation opens the door for a diverse ecosystem of participants — from global operators and technology providers to startups, researchers and developers — all contributing through open and programmable platforms.  This initiative focuses on transitioning legacy architectures toward software-defined, open, and secure 6G platforms. By embedding AI across the Radio Access Network (RAN), edge, and core, the coalition aims to transform traditional connectivity into a robust fabric for physical AI, supporting the massive scaling of autonomous systems and sensors.
–> Of course, realizing this vision will depend on 3GPP specifying an AI-native, secure 6G core network, without which no 6G features, including security, can be realized. Also, ITU-R WP 5D must unambiguously specify an AI-native RAN interface in its forthcoming IMT-2030 RIT/SRITs, expected in late 2030.
Key Objectives of this alliance:
  • AI-RAN Integration: Shifting from fixed-function hardware to AI-RAN architecture to turn networks into programmable AI infrastructure.
  • Architectural Resilience: Implementing open and trusted principles to ensure interoperability, supply-chain security, and rapid innovation cycles.
  • Integrated Sensing & Communication: Leveraging AI-native platforms to enable real-time intelligence and decision-making at the network edge.
  • Scalability: Addressing the complexity of 6G to support billions of autonomous endpoints that demand higher security and lower latency than current architectures can provide.

The NVIDIA AI Aerial platform is a software-defined, cloud-native framework for building, training, and deploying AI-native 5G and 6G wireless networks. It transitions traditional fixed-function hardware to a programmable, multi-tenant infrastructure that runs both Radio Access Network (RAN) and AI workloads simultaneously on NVIDIA-accelerated computing.

Image Credit: NVIDIA

Quotes:

“AI is driving the largest infrastructure buildout in history, and telecommunications is the next frontier,” stated Jensen Huang, founder and CEO of NVIDIA. “By building AI-RAN, we are transforming global telecom networks into a ubiquitous AI fabric.”

Allison Kirkby, chief executive of BT Group, said: “Connectivity is the backbone of economic growth, and with this collaboration, we’re helping lay the foundations for a future ecosystem that is intelligent, sustainable and secure. By building on open and trustworthy AI native platforms, we can simplify future technologies like 6G, ensuring they build upon the strengths of today’s 5G networks while still unlocking powerful new capabilities at scale.”

Tim Höttges, CEO of Deutsche Telekom AG, said: “Best network, best customer experience — that remains our promise. With an open, intelligent and trusted 6G infrastructure, we are laying the foundation for the era of physical AI and unlocking new value for our customers, for industry and for society.”

Arielle Roth, Assistant Secretary of Commerce for Communications and Information, and Administrator at the National Telecommunications and Information Administration, said: “America’s 6G leadership will be critical to our nation’s economic prosperity, national security and global competitiveness. Today’s announcement demonstrates that the United States and our allies and partners around the world are leading in this next-generation technology. We look forward to the next steps from this international industry coalition as they advance and implement their shared 6G vision.”

Jung Jai-hun, president and CEO of SK Telecom, said: “SKT is evolving telco infrastructure to serve as the foundation for the AI era, where connectivity serves as a platform for intelligence and innovation. Together, we can build open, trusted infrastructure that drives a global ecosystem of AI innovation.”

Hideyuki Tsukuda, executive vice president and chief technology officer of SoftBank Corp., said: “AI-native 6G will transform wireless networks into secure, software-defined infrastructure that supports the next wave of global innovation. SoftBank Corp. is driving this innovation with NVIDIA by advancing open and trusted platforms that enable interoperability, resilience and continuous evolution at scale.”

Srini Gopalan, CEO of T-Mobile, said: “We’re at a pivotal moment. In the U.S., we’ve laid the foundation with 5G Advanced and AI-native networks where intelligence lives inside the network. As 6G becomes the backbone of the AI era, telecom will serve as the nervous system of the digital economy, enabling autonomous systems and intelligent industries at scale and unlocking new value for customers and businesses alike. T-Mobile is proud to help define what’s next through deep ecosystem collaboration and sustained innovation.”

……………………………………………………………………………………………………………………………………………

Linux Foundation launches OCUDU:

Separately, the Linux Foundation (LF) today announced the formation of the Open Centralized Unit Distributed Unit (OCUDU) Ecosystem Foundation, an open collaboration hub dedicated to building, scaling, and sustaining the OCUDU technical project assets and leveraging them to establish a foundational reference platform for the RAN, including AI-based algorithms and solutions. The OCUDU Ecosystem Foundation provides a critical mechanism for industry vendors to guide OCUDU development in support of 5G and early AI-native 6G services.

The OCUDU Ecosystem Foundation brings together an ecosystem spanning enterprises, telecom operators, cloud providers, equipment vendors, and research institutions to co-develop and integrate critical components required for 5G and early 6G deployments. This community-driven model complements global standards from 3GPP and the O-RAN Alliance, as well as industry alliances like the AI-RAN Alliance. This global effort ensures that innovation, transparency, and interoperability remain at the core of software-defined RAN evolution.

“By aligning global efforts under the Linux Foundation, we’re building an open, trusted, and secure open source platform to power the next decade of wireless innovation,” said Arpit Joshipura, general manager, Networking, Edge and IoT, at the Linux Foundation. “The OCUDU Ecosystem Foundation represents a key step forward in open source RAN, specifically for CU and DU.” 

“This initiative brings the best of the open source model to one of the most critical layers of future wireless: the foundation for an interoperable, software-defined radio access network,” said Dr. Tom Rondeau, principal director for FutureG. “By shifting the maintenance of these common components to a collaborative, open-source project, under neutral governance at the Linux Foundation, we enable our industry partners to focus their resources on the innovative and monetizable technologies that are most effective for the nation. We are building a foundation that enables shared success and accelerates progress for the entire ecosystem. We are looking forward to seeing this approach provide a vital platform for strengthening our relationships and collaboration with our allies and international partners.”

“The key to driving innovation in wireless is to leverage a broad ecosystem of experts in networking, radio software, and emerging AI technologies,” said Joe Kochan, CEO of NSC. “What started with a competitive proposal process to elicit the best technology solutions from among NSC’s large and diverse membership is now expanding under the Linux Foundation, and NSC is proud to continue partnering with both LF and the FutureG team to advance OCUDU development efforts and build the next generation of wireless capabilities.”

References:

https://nvidianews.nvidia.com/news/nvidia-and-global-telecom-leaders-commit-to-build-6g-on-open-and-secure-ai-native-platforms

https://ocudu.org/news/linux-foundation-announces-ocudu-ecosystem-foundation-to-accelerate-open-source-ai-ran-innovation/

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

SKT 6G ATHENA White Paper: a mid-to-long term network evolution strategy for the AI era

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D

SK Telecom, DOCOMO, NTT and Nokia develop 6G AI-native air interface

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

 

Intel and AI chip startup SambaNova partner; SN50 AI inferencing chip max speed said to be 5X faster than competitive AI chips

Intel and AI chip startup SambaNova have entered into a multi-year strategic collaboration to deploy high-performance, cost-efficient AI inference solutions [1.] tailored for AI-native firms, enterprises, and government sectors. This global initiative leverages Intel® Xeon® infrastructure, with Intel Capital further signaling commitment through participation in SambaNova’s $350M Series E financing round.  The collaboration will give customers a powerful alternative to GPU‑centric solutions, offering optimized performance for leading open‑source models with predictable throughput and total cost of ownership. Founded in 2017, the Palo Alto, CA company specializes in AI chips and software. SambaNova’s Chairman is Lip-Bu Tan, who is also the CEO of Intel!

Note 1. AI inferencing is the process of using a trained AI model to make real-time predictions, decisions, or generate content from new, previously unseen data. It transforms inputs (a query, image, sensor reading) into useful results (a sentence, classification, alert). Unlike training, inference is about executing prompts against a fixed model, often requiring low latency (speed) and high efficiency. AI inference chips have attracted intense investor interest following a wave of deal-making around rivals to Nvidia, as AI companies seek faster and more efficient hardware. See References below for more information.
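The training/inference distinction in Note 1 can be made concrete with a toy example. The classifier weights below are assumed to have been learned offline; the inference call merely applies them to unseen input, with no learning involved:

```python
import math

# Weights of a hypothetical already-trained spam classifier.
# Training produced these numbers offline; inference just uses them.
WEIGHTS = {"free": 1.9, "winner": 2.3, "meeting": -1.4, "invoice": -0.8}
BIAS = -0.5

def infer(text):
    """One inference call: new input -> useful result (label, probability)."""
    score = BIAS + sum(WEIGHTS.get(w, 0.0) for w in text.lower().split())
    p = 1 / (1 + math.exp(-score))          # logistic squashing
    return ("spam" if p > 0.5 else "ham", p)

label, p = infer("free winner prize")
print(label, round(p, 3))
```

Production inference (an LLM answering a prompt) is the same pattern at vastly larger scale, which is why latency and cost-per-query, not training throughput, dominate the hardware requirements discussed here.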

………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

For high-scale AI workloads, the integration of Intel CPUs with SambaNova’s specialized AI platform was said to offer a high-efficiency rack-level inference alternative. This partnership serves as a strategic bridge as Intel scales its independent GPU-based offerings. Intel remains fully committed to its internal GPU roadmap, continuing aggressive investment across architecture, software, and systems. This collaboration enhances Intel’s edge-to-cloud strategy without altering its competitive trajectory in the GPU market. By combining Xeon processors, Intel networking, and SambaNova systems, the two companies are positioned to capture a significant share of the multi-billion-dollar inference market through heterogeneous data center architectures.

As part of the collaboration, Intel plans to make a strategic investment in SambaNova to accelerate the rollout of an Intel‑powered AI cloud. The collaboration is expected to span three key areas:

  • AI Cloud Expansion – Scaling SambaNova’s vertically integrated AI cloud, built on Intel Xeon‑based infrastructure and optimized for large language and multimodal models. The platform will deliver low‑latency, high‑throughput AI services, supported by reference architectures, deployment blueprints, and partnerships with system integrators and software vendors.
  • Integrated AI Infrastructure – Combining SambaNova’s systems with Intel’s CPUs, accelerators, and networking technologies to power scalable, production‑ready inference for reasoning, code generation, multimodal applications, and agentic workflows.
  • Go‑to‑Market Execution – Joint co‑selling and co‑marketing through Intel’s global enterprise, cloud, and partner channels to accelerate adoption across the AI ecosystem.

Together, SambaNova and Intel aim to shape the next generation of heterogeneous AI data centers — integrating Intel Xeon processors, Intel GPUs, Intel networking and storage, and SambaNova systems — to unlock a multi‑billion‑dollar inference market opportunity.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………

SambaNova also announced its SN50 AI chip, which boasts a max speed that’s 5X faster than competitive chips, according to the company.

Image Credit: SambaNova

Positioned as the most efficient chip for agentic AI, the SN50 chip offers enterprises a 3X lower total cost of ownership – a powerful foundation to scale fast inference and bring autonomous AI agents into full production. The SN50 will be shipping to customers later this year.  To quickly scale and distribute SN50, SambaNova is collaborating with Intel, and has obtained $350 million in strategic Series E financing to expand manufacturing and cloud capacity.

“AI is no longer a contest to build the biggest model,” said Rodrigo Liang, co‑founder and CEO of SambaNova. “With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”

“Customers are asking for more choice and more efficient ways to scale AI,” said Kevork Kechichian, EVP, General Manager, Data Center Group, Intel. “By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”

The SN50 delivers five times more compute per accelerator and four times more network bandwidth than the previous generation. It links up to 256 accelerators over a multi‑terabyte‑per‑second interconnect, cutting time‑to‑first‑token and supporting larger batch sizes. The result: Enterprises can deploy bigger, longer‑context AI models with higher throughput and responsiveness — while keeping performance high and costs and latency under control.
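Why more per-accelerator compute and interconnect bandwidth cut time-to-first-token can be seen with a back-of-envelope model. Every number below is an assumption for illustration, not a vendor figure: prefill compute shrinks as accelerators are added, while a crude all-reduce communication term grows slowly with the cluster size.

```python
import math

def ttft_seconds(prompt_tokens, params_b, n_accel, tflops_per_accel,
                 comm_overhead_s=0.002):
    """Back-of-envelope prefill time-to-first-token (all numbers assumed).
    Uses the ~2 FLOPs per parameter per token rule of thumb for dense
    transformers, plus a communication cost growing with log2(N)."""
    flops = 2 * params_b * 1e9 * prompt_tokens
    compute_s = flops / (n_accel * tflops_per_accel * 1e12)
    comm_s = comm_overhead_s * math.log2(n_accel) if n_accel > 1 else 0.0
    return compute_s + comm_s

# Assumed 405B-parameter model, 8K-token prompt, 300 TFLOPS per accelerator:
for n in (8, 64, 256):
    print(n, "accelerators ->", round(ttft_seconds(8192, 405, n, 300), 3), "s")
```

The sketch also shows why interconnect matters: as N grows, the communication term becomes the floor on TTFT, which is what a multi-terabyte-per-second fabric is meant to push down.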

“AI is moving from a software story to an infrastructure story,” said Landon Downs, co-founder and managing partner at Cambium Capital. “SN50 is engineered for the real-world latency and economic requirements that will determine who successfully deploys agentic AI at scale.”

Built on SambaNova’s Reconfigurable Data Unit (RDU) architecture, SN50 delivers:

  • Instant AI Experiences – Ultra‑low latency delivers real‑time responsiveness for next‑gen enterprise apps like voice assistants.
  • Unmatched Scale and Concurrency – Power thousands of simultaneous AI sessions with consistent high performance.
  • Breakthrough Model Capacity – Three‑tier memory architecture unlocks 10T+ parameter models and 10M+ context lengths for deeper reasoning and richer outputs.
  • Maximum Efficiency at Scale – Higher hardware utilization lowers cost‑per‑token, driving greater performance and ROI.
  • Smarter Memory, Smarter Efficiency – Resident multi‑model memory and agentic caching optimize the three‑tier architecture, cutting infrastructure costs for enterprise‑scale AI deployments.
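To see why a tiered memory hierarchy matters at the scales claimed above, a rough sizing exercise helps. The model shapes below (layer count, head dimensions, precisions) are assumptions for illustration, not SambaNova specifications:

```python
def model_weight_tb(params_t, bytes_per_param=1):
    """Weight storage in TB, e.g. 1 byte/param for FP8 weights."""
    return params_t * 1e12 * bytes_per_param / 1e12

def kv_cache_gb(context_tokens, layers, kv_heads, head_dim, bytes_per_val=2):
    """KV-cache size in GB for one sequence; factor of 2 covers keys + values."""
    return 2 * context_tokens * layers * kv_heads * head_dim * bytes_per_val / 1e9

# A 10T-parameter model needs ~10 TB just for FP8 weights...
print(model_weight_tb(10), "TB of weights")
# ...and an assumed 80-layer model with 8 KV heads of dim 128 at FP16
# needs ~3.3 TB of KV cache for a single 10M-token context:
print(round(kv_cache_gb(10_000_000, 80, 8, 128), 1), "GB of KV cache")
```

Neither figure fits in on-package memory alone, which is the motivation for spilling weights and cache across the three tiers with agentic caching deciding what stays hot.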

“The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale. By delivering both high performance and high throughput with a chip that uses existing power and is air cooled, SambaNova is changing the game,” said Peter Rutten, Research Vice-President Performance Intensive Computing at analyst firm IDC.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………

SoftBank Corp. will be the first customer to deploy SN50 within its next‑generation AI data centers in Japan. The deployment will power low‑latency inference services for sovereign and enterprise customers across Asia‑Pacific, supporting both open‑source and proprietary frontier models with aggressive latency and throughput requirements.

“With SN50, we are building an AI inference fabric for Japan that can serve our customers and partners with the speed, resiliency and sovereignty they expect from SoftBank,” said Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division of the Technology Unit at SoftBank Corp. “By standardizing on SN50, we gain the ability to deliver world‑class AI services on our own terms — with the performance of the best GPU clusters, but with far better economics and control.”

The SN50 deployment deepens SambaNova’s existing relationship with SoftBank Corp., which already hosts SambaCloud to provide ultra‑fast inference for developers in the region. By anchoring its newest clusters on SN50, SoftBank positions SambaNova as the inference backbone for its sovereign AI initiatives and future large‑scale agentic services.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………

References:

https://newsroom.intel.com/data-center/intel-and-sambanova-planning-multi-year-collaboration-for-xeon-based-ai-inference

https://sambanova.ai/press/sambanova-unveils-fastest-chip-for-agentic-ai-collaborates-with-intel-and-raises-350m

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

Groq and Nvidia in non-exclusive AI Inference technology licensing agreement; top Groq execs joining Nvidia

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Custom AI Chips: Powering the next wave of Intelligent Computing

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

OpenAI and Broadcom in $10B deal to make custom AI chips

Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
