Huawei unveils AI Centric Network roadmap, U6 GHz products, 5G Advanced strategy and SuperPoD cluster computing platforms
Absent from the MWC 2026 6G AI alliance announcements, Huawei instead released a series of all-scenario U6 GHz products to help carriers unlock the full potential of 5G Advanced (5G-A) and set the stage for a seamless transition to 6G. Huawei also showcased its SuperPoD cluster for the first time outside China, a platform it created to offer “a new option for the intelligent world.”
- The all-scenario U6 GHz products and solutions Huawei released today use innovative technologies to create a high-capacity, low-latency, optimal-experience backbone designed for mobile AI applications.
- There are already 70 million 5G-A users globally, and 5G-A is increasingly being adopted by carriers at scale. In China, Huawei has helped carriers deliver contiguous 5G-A coverage across 270 cities and launch 5G-A packages that monetize experience in over 30 provinces.
The company also launched enhanced AI-Centric Network solutions [1.] that will help carriers prepare for the agentic era by enabling intelligent services, networks, and network elements (NEs). The company plans to build more AI-centric networks and computing backbones that will help carriers and industry customers seize opportunities from the AI era.
Note 1. Huawei’s AI-Centric Network roadmap is designed to integrate intelligence directly into 5G-Advanced (5G-A) infrastructure and accelerate the transition toward Level-4 Autonomous Networks. The company plans to work with global carriers (where it’s not blacklisted) on large-scale 5G-A deployment, use high uplink to address surging consumer and industry demand for mobile AI applications, and use the U6 GHz band to unlock the full value of spectrum and pave the way for smooth evolution to 6G.

Photo Credit: Huawei
…………………………………………………………………………………………………………………………………………………………………………….
Three-Layer Intelligence in AI-Centric Networks: Accelerating the Agentic Era:
As mobile network operators transition toward AI-native 5G-Advanced and early 6G architectures, Huawei is positioning its AI-Centric Network portfolio as the blueprint for next-generation intelligent networks. By embedding intelligence across service, network, and network element (NE) layers, Huawei aims to establish the foundation for fully agentic, autonomously managed infrastructures.
- Service Layer: Focuses on multi-agent collaboration platforms to transform core carrier services—such as voice and home broadband—into intelligent service platforms.
- Network Layer: Aims to evolve from single-scenario automation to end-to-end single-domain network autonomy. Huawei officially launched AUTINOps, an AI-native intelligent operations solution designed to replace traditional manual O&M with predictive, preventive “digital employees”.
- Network Element (NE) Layer: Utilizes AI to optimize algorithms for RANs (Radio Access Networks) and core networks, improving spectral efficiency and service awareness.
At the Service layer, Huawei is enabling carriers to operationalize multi-agent collaboration frameworks that embed domain-specific intelligence into key service categories: voice, broadband, and digital experience monetization. These AI agents dynamically manage customer experience and lifecycle value, supporting the transformation of core connectivity services into intelligent, context-aware digital offerings.
At the Network layer, the company’s Autonomous Driving Network Level 4 (ADN L4) initiative focuses on single-scenario automation, delivering measurable improvements in O&M efficiency, service quality, and monetization agility. By the close of 2025, ADN single-scenario deployments were active across more than 130 commercial telecom networks. The next phase targets end-to-end, single-domain autonomy across transport, access, and core networks—an essential step toward zero-touch O&M and intent-driven orchestration in 5G-A and 6G environments.
At the Network Element layer, Huawei is jointly advancing AI-driven innovation across RAN, WAN, and core domains. This includes algorithmic optimization for intelligent RAN scheduling, service-aware traffic identification in WANs, and unified intent modeling across B2C and B2H use cases. Such capabilities enhance spectral and energy efficiency, enable predictive resilience, and provide fine-grained service awareness—all foundational for AI-native air interface and network control in 6G.
Computing Backbone with SuperPoD Clusters:
Supporting this vision, Huawei is introducing its next-generation SuperPoD and cluster computing platforms, designed as high-performance compute backbones for distributed AI model training and inference within telecom and enterprise domains. Featuring the proprietary UnifiedBus interconnect and system-level architecture innovations, the Atlas 950, TaiShan 950, and Atlas 850E SuperPoDs, along with the TaiShan 200–500 servers, deliver ultra-low latency and high throughput optimized for trillion-parameter AI models and real-time agentic operations.
Aligned with its open innovation strategy, Huawei continues to expand an open, collaborative computing ecosystem, supporting open-source frameworks and open-access platforms to accelerate the deployment of intelligent, AI-driven digital infrastructure worldwide.
Intelligent Transformation Across Industry Domains:
At MWC Barcelona 2026, Huawei is highlighting 115 end-to-end industrial intelligence showcases across verticals, underscoring its role in helping enterprises adopt AI-centric operational models. Through the SHAPE 2.0 Partner Framework, 22 co-developed AI and digital infrastructure solutions will demonstrate how vertical industries—from manufacturing and energy to transportation and healthcare—can harness 5G-A and AI integration to deliver measurable business outcomes.
Toward 5G-A Commercialization and 6G Evolution:
With large-scale 5G-Advanced rollouts accelerating, Huawei is collaborating with global carriers and ecosystem partners to realize level-4 autonomous networks and establish the architectural bridge to 6G. Central to this evolution is the convergence of AI, connectivity, and computing—enabling networks that can self-learn, self-optimize, and autonomously orchestrate service intent. These AI-Centric Network initiatives and SuperPoD-based computing backbones form the foundation for value-driven, intelligent networks built for the agentic era.
5G-Advanced and Infrastructure Innovations:
Huawei’s 5G-A strategy, branded as GigaUplink, focuses on delivering the high-uplink capacity and low latency required for mobile AI applications:
- U6 GHz Spectrum: Launched a comprehensive portfolio of all-scenario U6 GHz products to unlock 5G-A’s full potential and provide a smooth evolution path to 6G.
- Agentic Core: Introduced the Agentic Core solution, which integrates intelligence natively into the core network to support ubiquitous AI agent access across devices.
- All-Optical Target Network: Proposed an AI-centric optical roadmap featuring dual strategies: “AI for networks” (optimizing operations) and “networks for AI” (supporting AI workloads with ultra-low latency benchmarks of 1-5ms).
………………………………………………………………………………………………………………………………………………………..
References:
https://www.huawei.com/en/news/2026/3/mwc-ai-centric-network
https://carrier.huawei.com/en/minisite/events/mwc2026/
NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU
Omdia on resurgence of Huawei: #1 RAN vendor in 3 out of 5 regions; RAN market has bottomed
Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market
Huawei Cloud Review and Global Sales Partner Policies for 2026
Huawei’s Electric Vehicle Charging Technology & Top 10 Charging Trends
Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project
Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC
Overview:
AT&T and Ericsson have completed a milestone Cloud RAN test by successfully demonstrating Ericsson’s AI-native Link Adaptation [1.] on a Cloud RAN stack powered by Intel Xeon 6 SoC. The test showed how artificial intelligence (AI) can improve spectral efficiency and network responsiveness in real-world conditions. Conducted over AT&T’s licensed frequency bands, the experiment was the first to use portable Ericsson RAN software running on Intel’s new Xeon 6 system-on-chip (SoC) platform—an architecture designed for high-performance, cloud-native processing of RAN workloads. Engineered specifically for network and edge deployments, Intel Xeon 6 SoC delivers breakthrough AI RAN performance with built-in acceleration. Integrated Intel Advanced Vector Extensions (AVX) and Intel Advanced Matrix Extensions (AMX) technologies eliminate the need for discrete accelerators while maximizing capacity and efficiency and optimizing TCO.
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..
Note 1. AI-native Link Adaptation dynamically adjusts to changes in signal quality and interference, boosting RAN performance on purpose-built and cloud-based infrastructure alike.
Other Notes:
- vRAN: A radio access network (RAN) in which the baseband processing functions run as software on general-purpose processors (mostly from Intel) instead of on dedicated hardware at the cell site. In vRAN, the functional split defines how baseband processing is divided between centralized processors and the radio unit at the site, and that split drives fronthaul bandwidth, latency, and cost.
- Cloud RAN: An evolution of vRAN where those same RAN functions are re-architected as cloud‑native microservices/containers with CI/CD (Continuous Integration and either Continuous Delivery or Continuous Deployment), automation, and orchestrators, optimized for elastic scaling across distributed cloud infrastructure.
- Ericsson Cloud RAN is a cloud native software solution that handles compute functionality in the RAN. It virtualizes RAN functions on Commercial Off The Shelf (COTS) hardware, decoupling software from hardware to enable more flexible, scalable, and efficient network deployments.
- According to Dell’Oro Group, Cloud RAN (often encompassing vRAN) accounted for approximately 5% to 10% of the total global Radio Access Network (RAN) market revenues in 2025. In early 2026, Dell’Oro revised Cloud RAN projections downward. While virtualization remains a “key pillar” for the long term, short-term adoption is being slowed by performance, power, and cost-parity challenges when compared to purpose-built hardware.
- The total RAN market stabilized in late 2025 after losing approximately 20% of its value between 2022 and 2024. Market concentration reached a 10-year high in 2025, with the top five vendors (Huawei, Ericsson, Nokia, ZTE, and Samsung) capturing 96% of the revenue.
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Image Credit: Ericsson
In this proof-of-concept setup, Ericsson’s disaggregated and containerized RAN software operated within AT&T’s target Cloud RAN configuration, built on open, commercial off-the-shelf hardware. The test advanced from basic call functionality to validation of feature-rich network behavior in a cloud computing environment. Ericsson’s AI-native Link Adaptation is a learning algorithm that continuously assesses channel state and interference to determine the optimal modulation and coding scheme for each transmission interval. By generating real-time predictions of link quality, the AI model dynamically adjusts data rates to maximize throughput and spectral efficiency.
Early results were promising. Throughput gains reached up to 20% compared with conventional rule-based link adaptation approaches, alongside measurable improvements in spectral efficiency. Ericsson and Intel also used the trial to benchmark various AI inference models, demonstrating performance scalability and energy efficiency on general-purpose compute nodes rather than proprietary hardware accelerators. This suggests a more pragmatic path for deploying AI workloads across distributed RAN architectures.
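The predictive loop described above can be sketched in a few lines. This is an illustrative simplification, not Ericsson’s actual model: it forecasts the next-interval SINR from recent channel reports with simple exponential smoothing, then picks the highest-rate entry from a hypothetical four-row MCS table whose SINR thresholds are invented for the example.

```python
# Sketch of predictive link adaptation: forecast channel quality, then select
# the most aggressive MCS the forecast supports. The MCS table and smoothing
# forecaster are hypothetical stand-ins for a trained ML model.

# Hypothetical MCS table: (name, min required SINR in dB, spectral eff. bit/s/Hz)
MCS_TABLE = [
    ("QPSK 1/2",    2.0, 1.0),
    ("16QAM 1/2",   8.0, 2.0),
    ("64QAM 2/3",  14.0, 4.0),
    ("256QAM 3/4", 20.0, 6.0),
]

def forecast_sinr(history_db, alpha=0.5):
    """Forecast next-TTI SINR via exponential smoothing of recent reports."""
    est = history_db[0]
    for s in history_db[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

def select_mcs(predicted_sinr_db):
    """Pick the highest-rate MCS whose SINR requirement the forecast meets."""
    best = MCS_TABLE[0]  # most robust entry is the fallback
    for mcs in MCS_TABLE:
        if predicted_sinr_db >= mcs[1]:
            best = mcs
    return best

if __name__ == "__main__":
    history = [9.5, 10.2, 11.0, 12.1, 13.4]  # improving channel
    pred = forecast_sinr(history)
    name, _, eff = select_mcs(pred)
    print(f"predicted SINR {pred:.1f} dB -> {name} ({eff} bit/s/Hz)")
```

A production system would replace `forecast_sinr` with a learned time-series model fed by CQI, beam metrics, and interference traces, as the article describes.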
Beyond the immediate performance improvements, the trial illustrates how open RAN architectures can accelerate innovation. By decoupling RAN software from vendor-specific hardware, AT&T can integrate AI capabilities and update network functions more quickly, avoiding the constraints of lock-in. The portability demonstrated here—running production-grade Ericsson RAN software on Intel Xeon 6 silicon—marks an industry first.
For AT&T, the achievement represents more than a lab milestone. It provides a technical template for scaling AI-native RAN functions into its cloud infrastructure, pointing to a future where machine learning operates natively within radio environments to fine-tune performance in real time. As operators continue balancing cost, flexibility, and efficiency, AI-optimized Cloud RAN deployments could become the next competitive frontier in 5G—and eventually, 6G—network evolution.
………………………………………………………………………………………………………………………………………………………………………………………………………………………..
Quotes:
Rob Soni, Vice President, RAN Technology at AT&T, says: “AT&T is leading the charge toward an open, intelligent, and scalable network future by advancing Open RAN and Cloud RAN with AI-native capabilities at their core. This demo highlights how AI capabilities, powered by our next-generation Cloud RAN platform, can be deployed seamlessly to drive innovation and deliver superior customer experiences.”
Mårten Lerner, Head of Networks Strategy and Product Management, Business Area Networks at Ericsson, says: “Together with AT&T and Intel, Ericsson is demonstrating how our domain expertise combined with AI-native RAN software can drive transformative advancements in both Cloud RAN and purpose-built deployments. Our industry-leading AI-native Link Adaptation serves as the first proof point on this journey. With a hardware-agnostic RAN software stack, Ericsson is committed to offering maximum flexibility and enabling all our customers to benefit from future innovations – regardless of their chosen underlying hardware. This milestone underscores Ericsson’s commitment to helping operators advance their networks by deploying AI functionality across the RAN stack.”
Cristina Rodriguez, VP and GM, Network and Edge at Intel, says: “This successful collaboration with AT&T and Ericsson showcases the power of Intel Xeon 6 SoC to enable and accelerate AI workloads in Cloud RAN environments. Xeon 6 SoC is architected to handle the demanding compute requirements of AI-native network functions, delivering the performance and efficiency operators need to unlock the full potential of intelligent networks. By providing a flexible, standards-based platform, Intel Xeon 6 enables service providers like AT&T to deploy innovative AI capabilities while maintaining the openness and choice that drive industry innovation.”
………………………………………………………………………………………………………………………………………………………………………………………………………………………….
AI-Native Link Adaptation vs. Traditional Methods:
Traditional link adaptation in RAN relies on deterministic, rule-based algorithms that select the Modulation and Coding Scheme (MCS) from predefined lookup tables. These methods primarily use instantaneous Channel Quality Indicator (CQI) reports or estimated Signal-to-Interference-plus-Noise Ratio (SINR) thresholds, often adjusted via Outer Loop Link Adaptation (OLLA) based on ACK/NACK feedback from the UE. This reactive approach applies conservative margins to account for channel estimation errors, prediction lag, and varying interference, which can lead to suboptimal throughput—either underutilizing the link with low MCS or triggering excess HARQ retransmissions with overly aggressive selections.
AI-native Link Adaptation shifts to a predictive, model-driven paradigm using machine learning (typically lightweight neural networks or time-series models) trained on historical channel data. Rather than static thresholds, the AI processes sequences of CQI, beam metrics, mobility patterns, and interference traces to forecast the probable channel state for the next transmission time interval (TTI). This enables precise MCS selection that hugs the Shannon capacity limit more closely, minimizing BLER while maximizing spectral efficiency in dynamic scenarios like high-mobility NLOS or bursty interference.
Key differences include:
| Aspect | Traditional (Rule-Based) | AI-Native (ML-Based) |
|---|---|---|
| Decision Mechanism | Lookup tables, SINR thresholds, OLLA offsets | Real-time inference from ML models |
| Channel Handling | Reactive (past CQI/SINR) | Predictive (time-series forecasting) |
| Adaptation Speed | Step-wise, with feedback lag | Continuous, sub-TTI granularity |
| Performance Gains | Baseline (0% reference) | Up to 20% throughput, 10% spectral efficiency |
| Compute Needs | Low (fixed arithmetic) | Moderate (edge inference on COTS like Xeon 6) |
| Limitations | Struggles with non-stationary channels | Requires training data, retraining overhead |
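For contrast, the rule-based baseline in the table above can be sketched as a classic OLLA controller: a fixed-step loop that nudges an SINR offset after each ACK/NACK so the long-run BLER converges to a target. The 10% BLER target and step sizes are common textbook values, not taken from any particular vendor stack.

```python
# Minimal sketch of traditional Outer Loop Link Adaptation (OLLA).
# Asymmetric steps (small up on ACK, large down on NACK) make the
# ACK/NACK ratio settle at the configured BLER target.

class Olla:
    def __init__(self, bler_target=0.10, step_up_db=0.01):
        self.step_up = step_up_db
        # Down-step sized so that equilibrium occurs exactly at the BLER target.
        self.step_down = step_up_db * (1 - bler_target) / bler_target
        self.offset_db = 0.0

    def update(self, ack: bool):
        if ack:
            self.offset_db += self.step_up    # channel better than assumed
        else:
            self.offset_db -= self.step_down  # back off after a failed transmission

    def effective_sinr(self, reported_sinr_db: float) -> float:
        """SINR used for MCS lookup = reported SINR + learned offset."""
        return reported_sinr_db + self.offset_db

if __name__ == "__main__":
    olla = Olla()
    # 9 ACKs and 1 NACK per 10 transmissions ~ 10% BLER: offset holds near 0.
    for _ in range(100):
        for ack in [True] * 9 + [False]:
            olla.update(ack)
    print(f"offset after 1000 TTIs: {olla.offset_db:+.3f} dB")
```

The reactive character is visible here: the offset only moves after feedback arrives, which is exactly the lag the predictive, ML-based approach removes.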
Analysis: Rakuten Mobile and Intel partnership to embed AI directly into vRAN
RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN
vRAN market disappoints – just like OpenRAN and mobile 5G
Nokia and Eolo deploy 5G SA mmWave “Cloud RAN” network
Ericsson and Google Cloud expand partnership with Cloud RAN solution
Ericsson and O2 Telefónica demo Europe’s 1st Cloud RAN 5G mmWave FWA use case
Cloud RAN with Google Distributed Cloud Edge; Strategy: host network functions of other vendors on Google Cloud
Ericsson and Intel collaborate to accelerate AI-Native 6G; other AI-Native 6G advancements at MWC 2026
Ericsson and Intel at MWC 2026:
Building on milestones in Cloud RAN, 5G Core, and open network innovation, Ericsson and Intel are showcasing joint technology advancements at the Mobile World Congress (MWC) 2026 in Barcelona this week. Demonstrations can be experienced at the Ericsson Pavilion (Hall 2), Intel Booth (Hall 3, Stand 3E31), and across partner event spaces, highlighting the companies’ shared progress in enabling the next era of AI-driven networks.
The two companies are strengthening their long-standing technology partnership to accelerate ecosystem readiness for AI-native 6G networks and use cases. The expanded collaboration spans next-generation mobile connectivity, cloud infrastructure, and compute acceleration — with a focus on AI-driven RAN and packet core evolution, platform-level security, and scalable cloud-native architectures designed to shorten time-to-market for advanced network solutions.
“6G is not merely an iteration of mobile technology; it will serve as the foundational infrastructure distributing AI across devices, the edge, and the cloud,” said Börje Ekholm, President and CEO of Ericsson. “With our deep history in network innovation and global-scale operator deployments, Ericsson is uniquely positioned to drive practical 6G integration from research to commercialization.”
Lip-Bu Tan, CEO of Intel, added: “Intel’s vision is to lead the industry in unifying RAN, Core, and edge AI to enable seamless deployment of AI-native 6G environments. Together with Ericsson, we are proving that next-generation connectivity can be open, energy-efficient, secure, and intelligent. With future Ericsson Silicon built on Intel’s most advanced process technologies, coupled with Intel Xeon-powered AI-RAN ready Cloud RAN and collaborative multi-year research efforts, we are delivering the performance, efficiency, and supply assurance demanded by leading operators worldwide.”
As 6G transitions from research to commercialization, the industry must align around a mature, standards-based ecosystem. The Ericsson–Intel collaboration aims to accelerate development of high-performance, energy-efficient compute architectures optimized for both AI for Networks and Networks for AI.
AI-native 6G will fuse intelligent, programmable network functions with distributed compute and real-time sensing, bringing processing power closer to the network edge and enabling ultra-responsive, adaptive services. This convergence will enhance network efficiency, agility, and service intelligence across future deployments.
About Ericsson:
Ericsson’s high-performing networks provide connectivity for billions of people every day. For 150 years, we’ve been pioneers in creating technology for communication. We offer mobile communication and connectivity solutions for service providers and enterprises. Together with our customers and partners, we make the digital world of tomorrow a reality.
About Intel:
Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better.
…………………………………………………………………………………………………………………………………………………………
Related AI-Native 6G Announcements at MWC 2026:
In addition to the Ericsson-Intel collaboration, several vendors and operators announced AI-native 6G advancements or related demos at MWC Barcelona 2026. These initiatives emphasize AI-RAN integration, software-defined architectures, and ecosystem partnerships to bridge 5G-A to 6G.
NVIDIA Multi-Partner Commitment: NVIDIA rallied operators and vendors including Booz Allen, BT Group, Cisco, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, and T-Mobile to build open, secure AI-native 6G platforms. The focus is on software-defined wireless with AI embedded in RAN, edge, and core for integrated sensing, communications, and interoperability.
Nokia AI-RAN: Nokia highlighted new partnerships with Dell, Quanta, Red Hat, SuperMicro, NVIDIA, and operators like T-Mobile, Indosat Ooredoo Hutchison, BT, Elisa, NTT DOCOMO, and Vodafone for AI-RAN trials paving the way to cognitive 6G networks. Live demos at Nokia’s Hall 3 Booth 3B20 included Southeast Asia’s first AI-RAN Layer 3 5G call on shared GPU infrastructure and vision AI for immersive services.
T-Mobile & Deutsche Telekom Hub: T-Mobile US and (major shareholder) Deutsche Telekom launched a joint 6G Innovation Hub targeting AI-native autonomous networks, secure sensing/positioning, and connectivity-compute convergence for Physical AI. It builds on agentic AI proofs like network-integrated translation, emphasizing “kinetic tokens” for real-time physical world control.
ZTE GigaMIMO 6G Prototype: ZTE unveiled the world’s first 6G prototype with 2000+ U6G-band antenna elements (GigaMIMO), powered by AI algorithms for 10x capacity over 5G-A, 30% spectral efficiency gains, and AI-driven immersive services. Booth 3F30 demos integrate AI across connectivity, computing, and devices for “AI serves AI” networks.
Qualcomm Agentic AI RAN: Qualcomm announced AI-native RAN management services in its Dragonwing suite for autonomous 6G-grade networks, plus new Open RAN AI features for performance optimization. CEO Cristiano Amon’s keynote focused on “Architecting 6G for the AI Era,” with device-to-data-center transformations.
Huawei U6GHz for 6G Path:
Huawei released all-scenario U6GHz products (macro/micro sites, microwave) with AI-centric solutions for 5G-A capacity (100 Gbps downlink) and low-latency AI apps, enabling smooth 6G evolution. Emphasizes hyper-resolution MU-MIMO and multi-band coordination for indoor/outdoor AI experiences.
Summary Chart:
| Vendor/Operator | Key Focus | Partners/Demos | Booth/Location |
|---|---|---|---|
| NVIDIA | Open AI-native platforms | Multiple operators/vendors | MWC general |
| Nokia | AI-RAN trials & cognitive networks | NVIDIA, T-Mobile, IOH et al. | Hall 3, 3B20 |
| T-Mobile/DT | Physical AI hub | Joint R&D | Announced pre-MWC |
| ZTE | GigaMIMO 6G prototype | China Mobile, Qualcomm | Hall 3, 3F30 |
| Qualcomm | Agentic RAN automation | Open RAN ecosystem | Keynote & demos |
| Huawei | U6GHz AI-centric evolution | Carrier-focused | MWC showcase |
…………………………………………………………………………………………………………………………………………………………………………………….
References:
NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU
Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)
SKT 6G ATHENA White Paper: a mid-to-long term network evolution strategy for the AI era
Dell’Oro: RAN Market Stabilized in 2025 with 1% CAGR forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks
Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D
SK Telecom, DOCOMO, NTT and Nokia develop 6G AI-native air interface
Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos
Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN
Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN
RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN
NVIDIA and global telecom leaders to build 6G on open and secure AI-native platforms + Linux Foundation launches OCUDU
- AI-RAN Integration: Shifting from fixed-function hardware to AI-RAN architecture to turn networks into programmable AI infrastructure.
- Architectural Resilience: Implementing open and trusted principles to ensure interoperability, supply-chain security, and rapid innovation cycles.
- Integrated Sensing & Communication: Leveraging AI-native platforms to enable real-time intelligence and decision-making at the network edge.
- Scalability: Addressing the complexity of 6G to support billions of autonomous endpoints that demand higher security and lower latency than current architectures can provide.
The NVIDIA AI Aerial platform is a software-defined, cloud-native framework for building, training, and deploying AI-native 5G and 6G wireless networks. It transitions traditional fixed-function hardware to a programmable, multi-tenant infrastructure that runs both Radio Access Network (RAN) and AI workloads simultaneously on NVIDIA-accelerated computing.

Image Credit: NVIDIA
Quotes:
“AI is driving the largest infrastructure buildout in history, and telecommunications is the next frontier,” stated Jensen Huang, founder and CEO of NVIDIA. “By building AI-RAN, we are transforming global telecom networks into a ubiquitous AI fabric.”
Allison Kirkby, chief executive of BT Group, said: “Connectivity is the backbone of economic growth, and with this collaboration, we’re helping lay the foundations for a future ecosystem that is intelligent, sustainable and secure. By building on open and trustworthy AI native platforms, we can simplify future technologies like 6G, ensuring they build upon the strengths of today’s 5G networks while still unlocking powerful new capabilities at scale.”
Tim Höttges, CEO of Deutsche Telekom AG, said: “Best network, best customer experience — that remains our promise. With an open, intelligent and trusted 6G infrastructure, we are laying the foundation for the era of physical AI and unlocking new value for our customers, for industry and for society.”
Arielle Roth, Assistant Secretary of Commerce for Communications and Information, and Administrator at the National Telecommunications and Information Administration, said: “America’s 6G leadership will be critical to our nation’s economic prosperity, national security and global competitiveness. Today’s announcement demonstrates that the United States and our allies and partners around the world are leading in this next-generation technology. We look forward to the next steps from this international industry coalition as they advance and implement their shared 6G vision.”
Jung Jai-hun, president and CEO of SK Telecom, said: “SKT is evolving telco infrastructure to serve as the foundation for the AI era, where connectivity serves as a platform for intelligence and innovation. Together, we can build open, trusted infrastructure that drives a global ecosystem of AI innovation.”
Hideyuki Tsukuda, executive vice president and chief technology officer of SoftBank Corp., said: “AI-native 6G will transform wireless networks into secure, software-defined infrastructure that supports the next wave of global innovation. SoftBank Corp. is driving this innovation with NVIDIA by advancing open and trusted platforms that enable interoperability, resilience and continuous evolution at scale.”
Srini Gopalan, CEO of T-Mobile, said: “We’re at a pivotal moment. In the U.S., we’ve laid the foundation with 5G Advanced and AI-native networks where intelligence lives inside the network. As 6G becomes the backbone of the AI era, telecom will serve as the nervous system of the digital economy, enabling autonomous systems and intelligent industries at scale and unlocking new value for customers and businesses alike. T-Mobile is proud to help define what’s next through deep ecosystem collaboration and sustained innovation.”
……………………………………………………………………………………………………………………………………………
Linux Foundation launches OCUDU:
Separately, the Linux Foundation (LF) today announced the formation of the Open Centralized Unit Distributed Unit (OCUDU) Ecosystem Foundation, an open collaboration hub dedicated to building, scaling, and sustaining the OCUDU technical project assets and leveraging them to establish a foundational reference platform for RAN, including AI-based algorithms and solutions. The OCUDU Ecosystem Foundation provides a critical mechanism for industry vendors to guide OCUDU development in support of 5G and early AI-native 6G services.
The OCUDU Ecosystem Foundation brings together enterprises, telecom operators, cloud providers, equipment vendors, and research institutions to co-develop and integrate critical components required for 5G and early 6G deployments. This community-driven model complements global standards from 3GPP and the O-RAN Alliance, as well as industry alliances like the AI-RAN Alliance, ensuring that innovation, transparency, and interoperability remain at the core of global software-defined RAN evolution.
“By aligning global efforts under the Linux Foundation, we’re building an open, trusted, and secure open source platform to power the next decade of wireless innovation,” said Arpit Joshipura, general manager, Networking, Edge and IoT, at the Linux Foundation. “The OCUDU Ecosystem Foundation represents a key step forward in open source RAN, specifically for CU and DU.”
“This initiative brings the best of the open source model to one of the most critical layers of future wireless: the foundation for an interoperable, software-defined radio access network,” said Dr. Tom Rondeau, principal director for FutureG. “By shifting the maintenance of these common components to a collaborative, open-source project, under neutral governance at the Linux Foundation, we enable our industry partners to focus their resources on the innovative and monetizable technologies that are most effective for the nation. We are building a foundation that enables shared success and accelerates progress for the entire ecosystem. We are looking forward to seeing this approach provide a vital platform for strengthening our relationships and collaboration with our allies and international partners.”
“The key to driving innovation in wireless is to leverage a broad ecosystem of experts in networking, radio software, and emerging AI technologies,” said Joe Kochan, CEO of NSC. “What started with a competitive proposal process to elicit the best technology solutions from among NSC’s large and diverse membership is now expanding under the Linux Foundation, and NSC is proud to continue partnering with both LF and the FutureG team to advance OCUDU development efforts and build the next generation of wireless capabilities.”
References:
Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)
SKT 6G ATHENA White Paper: a mid-to-long term network evolution strategy for the AI era
Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks
Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D
SK Telecom, DOCOMO, NTT and Nokia develop 6G AI-native air interface
Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos
Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN
Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN
RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN
Intel and AI chip startup SambaNova partner; SN50 AI inferencing chip max speed said to be 5X faster than competitive AI chips
Intel and AI chip startup SambaNova have entered into a multi-year strategic collaboration to deploy high-performance, cost-efficient AI inference solutions [1.] tailored for AI-native firms, enterprises, and government sectors. This global initiative leverages Intel® Xeon® infrastructure, with Intel Capital further signaling commitment through participation in SambaNova’s $350M Series E financing round. The collaboration will give customers a powerful alternative to GPU‑centric solutions, offering optimized performance for leading open‑source models with predictable throughput and total cost of ownership. Founded in 2017, the Palo Alto, CA company specializes in AI chips and software. SambaNova’s Chairman is Lip-Bu Tan, who is also the CEO of Intel!
Note 1. AI inferencing is the process of using a trained AI model to make real-time predictions, decisions, or generated content from new, previously unseen data. It transforms inputs (a query, an image, a sensor reading) into useful results (a sentence, a classification, an alert). Unlike training, which builds the model, inference is about executing it on demand, often with strict requirements for low latency (speed) and high efficiency. AI inference chips have attracted intense investor interest following a wave of deal making around rivals to Nvidia, as AI companies seek faster and more efficient hardware. See References below for more information.
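To make the training/inference distinction concrete, here is a toy sketch (illustrative only, not any vendor's stack): the "trained" model is just a set of frozen weights, and inference applies them to a new input under a latency budget.

```python
import time

# Toy sketch of inference: the model is already trained (frozen weights);
# inference simply applies it to new, previously unseen input.
WEIGHTS = [0.8, -0.4, 0.2]  # parameters produced by prior training
BIAS = 0.1

def infer(features):
    """Turn a new input (e.g., sensor readings) into a useful result (an alert)."""
    score = BIAS + sum(w * f for w, f in zip(WEIGHTS, features))
    return "alert" if score > 0.5 else "ok"

# Inference is judged on latency and efficiency, not on learning.
start = time.perf_counter()
result = infer([1.2, 0.3, -0.5])  # score = 0.84 -> "alert"
latency_ms = (time.perf_counter() - start) * 1000
```

In production the model would be a large neural network running on specialized silicon, but the shape of the workload is the same: fixed weights, fresh input, and a hard latency target.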
………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
For high-scale AI workloads, the integration of Intel CPUs with SambaNova’s specialized AI platform was said to offer a high-efficiency rack-level inference alternative. This partnership serves as a strategic bridge as Intel scales its independent GPU-based offerings. Intel remains fully committed to its internal GPU roadmap, continuing aggressive investment across architecture, software, and systems. This collaboration enhances Intel’s edge-to-cloud strategy without altering its competitive trajectory in the GPU market. By combining Xeon processors, Intel networking, and SambaNova systems, the two companies are positioned to capture a significant share of the multi-billion-dollar inference market through heterogeneous data center architectures.
As part of the collaboration, Intel plans to make a strategic investment in SambaNova to accelerate the rollout of an Intel‑powered AI cloud. The collaboration is expected to span three key areas:
- AI Cloud Expansion – Scaling SambaNova’s vertically integrated AI cloud, built on Intel Xeon‑based infrastructure and optimized for large language and multimodal models. The platform will deliver low‑latency, high‑throughput AI services, supported by reference architectures, deployment blueprints, and partnerships with system integrators and software vendors.
- Integrated AI Infrastructure – Combining SambaNova’s systems with Intel’s CPUs, accelerators, and networking technologies to power scalable, production‑ready inference for reasoning, code generation, multimodal applications, and agentic workflows.
- Go‑to‑Market Execution – Joint co‑selling and co‑marketing through Intel’s global enterprise, cloud, and partner channels to accelerate adoption across the AI ecosystem.
Together, SambaNova and Intel aim to shape the next generation of heterogeneous AI data centers — integrating Intel Xeon processors, Intel GPUs, Intel networking and storage, and SambaNova systems — to unlock a multi‑billion‑dollar inference market opportunity.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………
SambaNova also announced its SN50 AI chip, which boasts a max speed that’s 5X faster than competitive chips, according to the company.

Image Credit: SambaNova
Positioned as the most efficient chip for agentic AI, the SN50 chip offers enterprises a 3X lower total cost of ownership – a powerful foundation to scale fast inference and bring autonomous AI agents into full production. The SN50 will be shipping to customers later this year. To quickly scale and distribute SN50, SambaNova is collaborating with Intel, and has obtained $350 million in strategic Series E financing to expand manufacturing and cloud capacity.
“AI is no longer a contest to build the biggest model,” said Rodrigo Liang, co‑founder and CEO of SambaNova. “With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”
“Customers are asking for more choice and more efficient ways to scale AI,” said Kevork Kechichian, EVP, General Manager, Data Center Group, Intel. “By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”
The SN50 delivers five times more compute per accelerator and four times more network bandwidth than the previous generation. It links up to 256 accelerators over a multi‑terabyte‑per‑second interconnect, cutting time‑to‑first‑token and supporting larger batch sizes. The result: Enterprises can deploy bigger, longer‑context AI models with higher throughput and responsiveness — while keeping performance high and costs and latency under control.
“AI is moving from a software story to an infrastructure story,” said Landon Downs, co-founder and managing partner at Cambium Capital. “SN50 is engineered for the real-world latency and economic requirements that will determine who successfully deploys agentic AI at scale.”
Built on SambaNova’s Reconfigurable Data Unit (RDU) architecture, SN50 delivers:
- Instant AI Experiences – Ultra‑low latency delivers real‑time responsiveness for next‑gen enterprise apps like voice assistants.
- Unmatched Scale and Concurrency – Power thousands of simultaneous AI sessions with consistent high performance.
- Breakthrough Model Capacity – Three‑tier memory architecture unlocks 10T+ parameter models and 10M+ context lengths for deeper reasoning and richer outputs.
- Maximum Efficiency at Scale – Higher hardware utilization lowers cost‑per‑token, driving greater performance and ROI.
- Smarter Memory, Smarter Efficiency – Resident multi‑model memory and agentic caching optimize the three‑tier architecture, cutting infrastructure costs for enterprise‑scale AI deployments.
“The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale. By delivering both high performance and high throughput with a chip that uses existing power and is air cooled, SambaNova is changing the game,” said Peter Rutten, Research Vice President, Performance Intensive Computing, at analyst firm IDC.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………
SoftBank Corp. will be the first customer to deploy SN50 within its next‑generation AI data centers in Japan. The deployment will power low‑latency inference services for sovereign and enterprise customers across Asia‑Pacific, supporting both open‑source and proprietary frontier models with aggressive latency and throughput requirements.
“With SN50, we are building an AI inference fabric for Japan that can serve our customers and partners with the speed, resiliency and sovereignty they expect from SoftBank,” said Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division of the Technology Unit at SoftBank Corp. “By standardizing on SN50, we gain the ability to deliver world‑class AI services on our own terms — with the performance of the best GPU clusters, but with far better economics and control.”
The SN50 deployment deepens SambaNova’s existing relationship with SoftBank Corp., which already hosts SambaCloud to provide ultra‑fast inference for developers in the region. By anchoring its newest clusters on SN50, SoftBank positions SambaNova as the inference backbone for its sovereign AI initiatives and future large‑scale agentic services.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………
References:
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
CES 2025: Intel announces edge compute processors with AI inferencing capabilities
Groq and Nvidia in non-exclusive AI Inference technology licensing agreement; top Groq execs joining Nvidia
Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation
Custom AI Chips: Powering the next wave of Intelligent Computing
RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN
OpenAI and Broadcom in $10B deal to make custom AI chips
Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
Nscale pitches “Sovereign AI” to telecom operators to provide AI-as-a-service (AIaaS)
Nscale [1.], headquartered in London, UK, is suggesting that telecom networks host “Sovereign AI” infrastructure, to ensure that data remains within regional borders while driving efficiency and automation. The company is collaborating with Nokia to accelerate global AI infrastructure deployment and is showcasing these solutions at MWC 2026. The company is partnering with telecom operators to transform their existing national fiber and edge sites into high-performance AI data centers. They aim to leverage telco assets to deliver GPU-powered AI-as-a-Service (AIaaS), optimize their 5G networks, and support AI-driven analytics.
Note 1. Nscale is building the advanced infrastructure, systems, and solutions that enable practitioners, enterprises, and governments across the globe to create, deploy, and scale their most transformative AI systems. Nscale’s AI Compute offering provides on-demand access to high-performance GPUs, enabling businesses and developers to execute complex computational tasks like AI model training and data analysis without upfront investment in expensive hardware. Nscale is building its own high-density data centers with direct liquid cooling to support these initiatives.
Nscale says they are “empowering telecommunications providers to deliver a range of AI services and solutions which help support network optimization and network performance monitoring, alongside improving customer experience with AI-powered automation tools. With our scalable GPU infrastructure and AI expertise, our telco customers can provide industry-leading AI-as-a-service (AIaaS), scale for 5G and benefit from artificial intelligence.”
Last week at the UK Telecoms Innovation Network (UKTIN)’s AI & Advanced Connectivity: State of AI panel, Nscale’s Simon Rowell spoke about the importance of building infrastructure that is resilient and able to adapt over time. Technologies evolve, but what matters is whether the underlying infrastructure can accommodate that change. Across telco networks and digital services, the fundamentals remain consistent: efficiency, automation, productivity, and resilience. Nscale is focused on building flexible AI infrastructure that can support real services as requirements change.
UK Telecoms Innovation Network Panel Session State of AI in UK Telecoms. Photo Credit: Nscale
…………………………………………………………………………………………………………………………………………………………………
Nokia and Nscale are collaborating to accelerate the development of AI-ready data center infrastructure across Europe and globally. As part of this partnership, Nokia serves as the preferred networking partner for Nscale, providing IP, optical networking, and data center switching technology to support high-performance AI clusters. Key aspects of the collaboration include:
- Infrastructure Build-out: Nokia is supplying its 7220 IXR and 7750 SR platforms to support Nscale’s AI-ready data centers, including a key project in Stavanger, Norway, and a 50 MW AI Campus in Loughton, U.K.
- Strategic Investment: Nokia is an investor in Nscale’s Series B funding round, supporting the company’s expansion and the deployment of up to 300,000 GPUs.
- Technology & Innovation: The partnership focuses on co-developing networking stacks for AI clusters, utilizing Nokia’s Ethernet-based data center fabric for low-latency, high-performance computing.
- Sustainability Focus: The collaboration emphasizes energy-efficient cooling and 100% renewable energy for data center operations.
David Power CTO at Nscale said, “Our mission is to redefine the boundaries of AI and High-Performance Computing through innovative, sustainable solutions. Nokia’s data center fabric enables us to scale our GPU clusters while maintaining the reliability and performance needed to serve our customers with cutting-edge AI services. The flexibility of Nokia’s solution ensures we can bring advanced AI capabilities to market faster.”
……………………………………………………………………………………………………………………………………………………………………..
References:
Sovereign AI infrastructure for telecom companies: implementation and challenges
Nokia in major pivot from traditional telecom to AI, cloud infrastructure, data center networking and 6G
Nokia selects Intel’s Justin Hotard as new CEO to increase growth in IP networking and data center connections
Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)
Private 5G networks move to include automation, autonomous systems, edge computing & AI operations
Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)
China vs U.S.: Race to Generate Power for AI Data Centers as Electricity Demand Soars
The International Energy Agency (IEA) forecasts that over the next five years, global demand for electricity will grow roughly 50% faster than it did during the previous decade – and more than twice as fast as energy demand overall. That tremendous increase is driven largely by power-hungry AI data centers, along with electric cars and buses, electric-powered industrial machines, and electric heating of homes.
Global AI growth will be contingent on generating more power for data centers:
- Global data center power demand is now expected to rise to a record 1,596 terawatt-hours by 2035 – a +255% increase from 2025 levels.
- The U.S. is set to remain the leader in energy consumption with a +144% surge in demand over this period, to 430 terawatt-hours.
- China’s demand is projected to rise +255%, to 397 terawatt-hours.
- European demand is expected to surge +303%, to 274 terawatt-hours.
- New data centers coming online between now and 2030 will need more than 600 terawatt-hours of electricity. This is enough to power ~60 million homes.
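As a sanity check, the 2035 totals and growth percentages above imply the following 2025 baselines (a +255% increase means a 3.55x multiple); a short Python calculation:

```python
def implied_2025_baseline(twh_2035, pct_increase):
    """Back out the 2025 level (TWh) from a 2035 total and its % increase."""
    return twh_2035 / (1 + pct_increase / 100)

global_2025 = implied_2025_baseline(1596, 255)  # ~450 TWh worldwide
us_2025 = implied_2025_baseline(430, 144)       # ~176 TWh in the U.S.
china_2025 = implied_2025_baseline(397, 255)    # ~112 TWh in China
europe_2025 = implied_2025_baseline(274, 303)   # ~68 TWh in Europe

# 600 TWh spread across ~60 million homes works out to ~10,000 kWh per
# home per year, consistent with typical U.S. household consumption.
kwh_per_home_per_year = 600 * 1e9 / 60e6
```

The implied baselines are mutually consistent, which lends credibility to the headline forecast.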
Power for AI Data Centers: China vs U.S.:
China is currently ahead of the United States in generating and building out power infrastructure to support AI data centers, a phenomenon sometimes described by industry observers as an “electron gap.”
China’s rapid, centralized expansion of electricity generation—including both massive renewable projects and traditional, dispatchable power—has created a significant capacity advantage in the race to support AI workloads, which are increasingly limited by energy availability rather than just chip access.
Key factors in China’s power advantage for AI include:
- Massive Generation Growth: Between 2010 and 2024, China’s power production increased by more than the rest of the world combined. In 2024 alone, China added 543 gigawatts of power capacity—more than the total capacity added by the U.S. in its entire history.
- Significant Surplus Capacity: By 2030, China is projected to have roughly 400 gigawatts of spare power capacity, which is triple the expected power demand of the global data center fleet at that time.
- “Eastern Data, Western Computing” Initiative: China is actively shifting energy-intensive data centers to its resource-rich western regions (like Inner Mongolia) while powering them with surplus renewable energy, such as wind and solar.
- Lower Costs and Faster Buildouts: Data centers in China can pay less than half the rates for electricity that American data centers do. Furthermore, projects in China can move from planning to operation in months, compared to years in the U.S., due to faster permitting and fewer regulatory hurdles.
Conclusions:
While the U.S. currently leads in advanced AI chips and model development, it is facing a severe “energy bottleneck” for new data centers, with some requiring over a gigawatt of power. U.S. power demand has remained relatively flat for 20 years, resulting in a lag in building new capacity, whereas China has traditionally built power infrastructure in anticipation of high demand. Morgan Stanley has forecast that U.S. data centers could face a 44-gigawatt electricity shortfall in the next three years.
Despite China’s advantage in energy, U.S. export controls on high-end AI chips (such as Nvidia’s GPUs) have acted as a significant constraint on China’s actual AI compute power. This has led to a situation where the U.S. has the best “brains” (chips) but limited power to run them, while China has the “muscle” (energy) but limited access to top-tier AI brains.
However, the rapid improvements in Chinese AI models (such as DeepSeek), which are more energy-efficient and optimized for lower-tier hardware, may help mitigate this constraint.
References:
https://www.iea.org/reports/electricity-2026
https://x.com/KobeissiLetter/status/2023437717888250284
Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)
Analysis: Ethernet gains on InfiniBand in data center connectivity market; White Box/ODM vendors top choice for AI hyperscalers
Fiber Optic Boost: Corning and Meta in multiyear $6 billion deal to accelerate U.S data center buildout
How will fiber and equipment vendors meet the increased demand for fiber optics in 2026 due to AI data center buildouts?
Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers
Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections
Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators
Analysis: Rakuten Mobile and Intel partnership to embed AI directly into vRAN
Today, Rakuten Mobile and Intel announced a partnership to embed Artificial Intelligence (AI) directly into the virtualized Radio Access Network (vRAN) stack. While vRAN currently represents a small percentage of the total RAN market (Dell’Oro Group recently forecast vRAN to account for 5% to 10% of the total RAN market by 2026), this partnership could boost that percentage, as it addresses key adoption hurdles: performance, power, and AI integration. Key areas of innovation include:
- Enhanced Wireless Spectral Efficiency: Optimizing spectrum utilization for superior network performance and capacity.
- Automated RAN Operations: Streamlining network management and reducing operational complexities through intelligent automation.
- Optimized Resource Allocation: Dynamically allocating network resources for maximum efficiency and subscriber experience.
- Increased Energy Efficiency: Significantly reducing power consumption in the RAN, contributing to sustainable network operations.
The partnership essentially aims to make vRAN superior in performance and TCO (Total Cost of Ownership) compared to traditional, proprietary, purpose built RAN hardware.
“We are incredibly excited to expand our collaboration with Intel to pioneer truly AI-native RAN architectures,” said Sharad Sriwastawa, co-CEO and CTO, Rakuten Mobile. “Together, we are validating transformative AI-driven innovations that will not only shape but define the future of mobile networks. This partnership showcases how intelligent RAN can be achieved through the seamless and efficient integration of AI workloads directly within existing vRAN software stacks, delivering unparalleled performance and efficiency.”
Rakuten Mobile and Intel are engaged in rigorous testing and validation of cutting-edge RAN AI use cases across Layer 1, Layer 2, and comprehensive RAN operation and network platform management. A core objective is the seamless integration of AI directly into the RAN stack, meticulously addressing integration challenges while upholding carrier-grade reliability and stringent latency requirements.
Utilizing Intel FlexRAN reference software, the Intel vRAN AI Development Kit, and a robust suite of AI tools and libraries, Rakuten Mobile is collaboratively training, optimizing, and deploying sophisticated AI models specifically tailored for demanding RAN workloads. This collaborative effort is designed to realize ultra-low, real-time AI latency on Intel Xeon 6 SoCs, capitalizing on their built-in AI acceleration capabilities, including AVX512/VNNI and AMX.
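The AVX512/VNNI and AMX instructions mentioned above accelerate low-precision integer matrix math. The NumPy sketch below illustrates the operation class only (it is not FlexRAN or the vRAN AI Development Kit): activations and weights are quantized to int8, multiplied with int32 accumulation (the pattern AMX tiles compute in hardware), then dequantized back to float.

```python
import numpy as np

rng = np.random.default_rng(0)
x_f32 = rng.standard_normal((1, 64)).astype(np.float32)   # activations
w_f32 = rng.standard_normal((64, 32)).astype(np.float32)  # trained weights

# Symmetric per-tensor quantization to int8.
x_scale = float(np.abs(x_f32).max()) / 127
w_scale = float(np.abs(w_f32).max()) / 127
x_i8 = np.round(x_f32 / x_scale).astype(np.int8)
w_i8 = np.round(w_f32 / w_scale).astype(np.int8)

# Integer matmul with int32 accumulation, then dequantize to float.
y_i32 = x_i8.astype(np.int32) @ w_i8.astype(np.int32)
y_f32 = y_i32.astype(np.float32) * (x_scale * w_scale)

# The low-precision result closely tracks the full-precision matmul.
max_err = float(np.abs(y_f32 - x_f32 @ w_f32).max())
```

Running this class of operation in int8 instead of float32 is what lets a general-purpose Xeon core approach accelerator-style throughput for small AI models embedded in the RAN stack.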
“AI is transforming how networks are built and operated,” said Kevork Kechichian, Executive Vice President and General Manager of the Data Center Group, Intel Corporation. “Together with Rakuten, we are demonstrating how AI benefits can be achieved in vRAN. Intel Xeon processors power the majority of commercial vRAN deployments worldwide, and this transformation momentum continues to accelerate. Intel is providing AI-ready Xeon platforms that allow operators like Rakuten to design AI-ready infrastructure from the ground up, with built-in acceleration capabilities.”
Rakuten says they are “poised to unlock new levels of RAN performance, efficiency, and automation by embedding AI directly into the RAN software stack, this AI-native evolution represents the future of cloud-native, AI-powered RAN – inherently software-upgradable and built on open, general-purpose computing platforms. Additionally, the extended collaboration between Rakuten Mobile and Intel marks a significant step toward realizing the vision of autonomous, self-optimizing networks and powerfully reinforces both companies’ commitment to open, programmable, and intelligent RAN infrastructure worldwide.”
……………………………………………………………………………………………………………………………………………………………………..
Why this partnership matters:
- AI-Native Efficiency & Performance: The collaboration focuses on integrating AI to improve network performance and energy efficiency, which is a major pain point for operators. By embedding AI directly into the vRAN stack, they are enhancing wireless spectral efficiency, reducing power consumption, and automating RAN operations.
- Leveraging High-Performance Hardware: The initiative utilizes Intel® Xeon® 6 processors with built-in vRAN Boost. This eliminates the need for external, power-hungry accelerator cards, offering up to 2.4x more capacity and 70% better performance-per-watt.
- Validation of Large-Scale Commercial Viability: Rakuten Mobile operates the world’s first fully virtualized, cloud-native network. Its continued collaboration with Intel to make the vRAN AI-native provides a proven blueprint for other operators, reducing the perceived risk of adopting vRAN, particularly in brownfield (existing) networks.
- Acceleration of Open RAN Ecosystem: The collaboration supports the broader push towards Open RAN, which is expected to see a significant rise in market share, doubling between 2022 and 2026.
………………………………………………………………………………………………………………………………………………………………
vRAN market outlook:
- Market Share Shift: Omdia forecasts that vRAN’s share of the RAN baseband subsector will reach 20% by 2028. That’s a significant jump from its current low single-digit percentage.
- Explosive CAGR: The global vRAN market is projected to grow from approximately $16.6 billion in 2024 to nearly $80 billion by 2033, representing a 19.5% CAGR.
- Small Cell Dominance: By the end of 2026, it is estimated that 77% of all vRAN implementations will be on small cell architectures, a key area where Rakuten and Intel have demonstrated success.
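The quoted forecast figures are internally consistent: compounding $16.6 billion at a 19.5% CAGR over the nine years from 2024 to 2033 lands near the quoted ~$80 billion.

```python
def project(value_bn, cagr, years):
    """Compound a market size (in $B) forward at a constant annual growth rate."""
    return value_bn * (1 + cagr) ** years

# $16.6B in 2024 at 19.5% CAGR -> ~$82B by 2033, close to the quoted ~$80B.
vran_2033_bn = project(16.6, 0.195, 2033 - 2024)
```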
References:
https://corp.mobile.rakuten.co.jp/english/news/press/2026/0210_01/
Virtual RAN gets a boost from Samsung demo using Intel’s Grand Rapids/Xeon Series 6 SoC
RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN
vRAN market disappoints – just like OpenRAN and mobile 5G
LightCounting: Open RAN/vRAN market is pausing and regrouping
Dell’Oro: Private 5G ecosystem is evolving; vRAN gaining momentum; skepticism increasing
https://www.mordorintelligence.com/industry-reports/virtualized-ran-vran-market
https://www.grandviewresearch.com/industry-analysis/virtualized-radio-access-network-market-report
Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators
Executive Summary:
In a February 6, 2026 CNBC interview with Scott Wapner, Nvidia CEO Jensen Huang [1.] characterized the current AI build‑out as “the largest infrastructure buildout in human history,” driven by exceptionally high demand for compute from hyperscalers and AI companies. “Through the roof” is how he described AI infrastructure spending. It’s a “once-in-a-generation infrastructure buildout,” specifically highlighting that demand for Nvidia’s Blackwell chips and the upcoming Vera Rubin platform is “sky-high.” He emphasized that the shift from experimental AI to AI as a fundamental utility has reached a definitive inflection point for every major industry.
Huang forecasts that a roughly 7- to 8-year AI investment cycle lies ahead, with capital intensity justified because deployed AI infrastructure is already generating rising cash flows for operators. He maintains that the widely cited ~$660 billion AI data center capex pipeline is sustainable, on the grounds that GPUs and surrounding systems are revenue‑generating assets, not speculative overbuild. In his view, as long as customers can monetize AI workloads profitably, they will “keep multiplying their investments,” which underpins continued multi‑year GPU demand, including for prior‑generation parts that remain fully leased.
Note 1. Being the undisputed leader of AI hardware (GPU chips and networking equipment via its Mellanox acquisition), Nvidia MUST ALWAYS MAKE POSITIVE REMARKS AND FORECASTS related to the AI build out boom. Reader discretion is advised regarding Huang’s extremely bullish, “all-in on AI” remarks.

Huang reiterated that AI will “fundamentally change how we compute everything,” shifting data centers from general‑purpose CPU‑centric architectures to accelerated computing built around GPUs and dense networking. He emphasizes Nvidia’s positioning as a full‑stack infrastructure and computing platform provider—chips, systems, networking, and software—rather than a standalone chip vendor. He accurately stated that Nvidia designs “all components of AI infrastructure” so that system‑level optimization (GPU, NIC, interconnect, software stack) can deliver performance gains that outpace what is possible with a single chip under a slowing Moore’s Law. The installed base is presented as productive: even six‑year‑old A100‑class GPUs are described as fully utilized through leasing, underscoring persistent elasticity of AI compute demand across generations.
AI Poster Children – OpenAI and Anthropic:
Huang praised OpenAI and Anthropic, the two leading artificial intelligence labs, which both use Nvidia chips through cloud providers. Nvidia invested $10 billion in Anthropic last year, and Huang said earlier this week that the chipmaker will invest heavily in OpenAI’s next fundraising round.
“Anthropic is making great money. Open AI is making great money,” Huang said. “If they could have twice as much compute, the revenues would go up four times as much.”
He said that all the graphics processing units that Nvidia has sold in the past — even six-year old chips such as the A100 — are currently being rented, reflecting sustained demand for AI computing power.
“To the extent that people continue to pay for the AI and the AI companies are able to generate a profit from that, they’re going to keep on doubling, doubling, doubling, doubling,” Huang said.
Economics, utilization, and returns:
On economics, Huang’s central claim is that AI capex converts into recurring, growing revenue streams for cloud providers and AI platforms, which differentiates this cycle from prior overbuilds. He highlights very high utilization: GPUs from multiple generations remain in service, with cloud operators effectively turning them into yield‑bearing infrastructure.
This utilization and monetization profile underlies his view that the capex “arms race” is rational: when AI services are profitable, incremental racks of GPUs, network fabric, and storage can be modeled as NPV‑positive infrastructure projects rather than speculative capacity. He implies that concerns about a near‑term capex cliff are misplaced so long as end‑market AI adoption continues to inflect.
Competitive and geopolitical context:
Huang acknowledges intensifying global competition in AI chips and infrastructure, including from Chinese vendors such as Huawei, especially under U.S. export controls that have reduced Nvidia’s China revenue share to roughly half of pre‑control levels. He frames Nvidia’s strategy as maintaining an innovation lead so that developers worldwide depend on its leading‑edge AI platforms, which he sees as key to U.S. leadership in the AI race.
He also ties AI infrastructure to national‑scale priorities in energy and industrial policy, suggesting that AI data centers are becoming a foundational layer of economic productivity, analogous to past buildouts in electricity and the internet.
Implications for hyperscalers and chips:
Hyperscalers (and also Nvidia customers) Meta, Amazon, Google/Alphabet, and Microsoft recently stated that they plan to dramatically increase spending on AI infrastructure in the years ahead. In total, these hyperscalers could spend $660 billion on capital expenditures in 2026 [2.], with much of that spending going toward buying Nvidia’s chips. Huang’s message to them is that AI data centers are evolving into “AI factories” where each gigawatt of capacity represents tens of billions of dollars of investment spanning land, compute, and networking. He suggests that the hyperscaler industry—roughly a $2.5 trillion sector with about $500 billion in annual capex transitioning from CPU to GPU‑centric generative AI—still has substantial room to run.
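A back-of-envelope calculation shows what those figures imply for physical buildout. The $660 billion capex projection is from the reporting above; the cost-per-gigawatt scenarios are assumptions chosen to fall within Huang's "tens of billions per gigawatt" characterization:

```python
# Back-of-envelope: implied AI "factory" capacity from projected capex.
# $660B is the 2026 hyperscaler capex figure cited in the article;
# the cost-per-GW scenarios are illustrative assumptions.

projected_capex_2026 = 660e9  # USD, combined hyperscaler capex

for cost_per_gw in (20e9, 30e9, 50e9):  # assumed $/GW scenarios
    gw = projected_capex_2026 / cost_per_gw
    print(f"At ${cost_per_gw / 1e9:.0f}B per GW: ~{gw:.0f} GW of capacity")
```

Even at the high end of the assumed cost range, a single year of hyperscaler capex would fund on the order of a dozen gigawatts of AI data center capacity, which gives a sense of scale for the "AI factory" framing.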
Note 2. An understated point is that while these hyperscalers are spending hundreds of billions of dollars on AI data centers and Nvidia chips/equipment, they are simultaneously laying off tens of thousands of employees. For example, Amazon recently announced 16,000 job cuts this year after 14,000 layoffs last October.
From a chip‑level perspective, he argues that Nvidia’s competitive moat stems from tightly integrated hardware, networking, and software ecosystems rather than any single component, positioning the company as the systems architect of AI infrastructure rather than just a merchant GPU vendor.
References:
Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)
Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers
Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers
Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation
Qualcomm is a strong believer in Edge AI as an enabler of faster, more secure, and energy-efficient processing directly on devices—rather than the cloud—unlocking real-time intelligence for industries like robotics and smart cities.
In support of that vision, the fabless SoC company announced the official launch of its Qualcomm AI Program for Innovators (QAIPI) 2026 – APAC, a regional startup incubation initiative that supports startups across Japan, Singapore, and South Korea in advancing the development and commercialization of innovative edge AI solutions.
Building on Qualcomm’s commitment to edge AI innovation, the second edition of QAIPI-APAC invites startups to develop intelligent solutions across a broad range of edge-AI applications using Qualcomm Dragonwing™ and Snapdragon® platforms, together with the new Arduino® UNO Q development board, strengthening their pathway toward global commercialization.
Startups gain comprehensive support and resources, including access to Qualcomm Dragonwing™ and Snapdragon® platforms, the Arduino® UNO Q development board, technical guidance and mentorship, a grant of up to US$10,000, and eligibility for up to US$5,000 in patent filing incentives, accelerating AI product development and deployment.
Applications are open now through April 30, 2026 and will be evaluated based on innovation, technical feasibility, potential societal impact, and commercial relevance. The program will be implemented in two phases: an application phase, open to eligible startups incorporated and registered in Japan, Singapore, or South Korea, followed by a mentorship phase in which shortlisted startups receive one-on-one guidance, online training, technical support, and access to Qualcomm-powered hardware platforms and development kits for product development, along with the shortlist grant and patent filing incentive noted above. At the conclusion of the program, shortlisted startups may be invited to showcase their innovations at a signature Demo Day in late 2026, engaging with industry leaders, investors, and potential collaborators across the APAC innovation ecosystem.
Comment and Analysis:
Qualcomm is a strong believer in Edge AI—the practice of running AI models directly on devices (smartphones, cars, IoT, PCs) rather than in the cloud—because they view it as the next major technological paradigm shift, overcoming limitations inherent in cloud computing. Despite the challenges of power consumption and processing limitations, Qualcomm’s strategy hinges on specialized, heterogeneous computing rather than relying solely on RISC-based CPU cores.
Key elements of Qualcomm’s Edge AI solutions:
- Qualcomm® AI Engine: This combines specialized hardware, including the Hexagon NPU (Neural Processing Unit), Adreno GPU, and CPU. The NPU is specifically designed to handle high-performance, complex AI workloads (like Generative AI) far more efficiently than a generic CPU.
- Custom Oryon CPU: The latest Snapdragon X Elite platform features customized cores that provide high performance while outperforming traditional x86 solutions in power efficiency for everyday tasks.
- Specialization Saves Power: By using specialized AI engines (NPUs) rather than general-purpose CPU/GPU cores, Qualcomm can run inference tasks at a fraction of the power cost.
- Lower Overall Energy: Doing AI at the edge can save total energy by avoiding the round trip of sending data over the network to a power-hungry data center and back.
- Intelligent Efficiency: The Snapdragon 8 Elite, for example, saw a 27% reduction in power consumption while increasing AI performance significantly.
- Instant Responsiveness (Low Latency): For autonomous vehicles or industrial robotics, a few milliseconds of latency to the cloud can be catastrophic. Edge AI provides real-time, instantaneous analysis.
- Privacy and Security: Data never leaves the device. This is crucial for privacy-conscious users (biometrics) and compliance (GDPR), which is a major advantage over cloud-based AI.
- Offline Capability: Edge devices, such as agricultural sensors or smart home devices in remote areas, continue to function without internet connectivity.
- Diversification: With the smartphone market maturing, Qualcomm sees the “Connected Intelligent Edge” as a huge growth opportunity, extending their reach into automotive, IoT, and PCs.
- “Ecosystem of You”: Qualcomm aims to connect billions of devices, making AI personal and context-aware, rather than generic.
- Qualcomm AI Hub: This makes it easier for developers to deploy optimized models on Snapdragon devices.
- Model Optimization: They specialize in making AI models smaller and more efficient (using quantization and specialized AI inference) to run on devices without requiring massive, cloud-sized computing power.
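The model optimization point above can be illustrated with a minimal sketch of post-training int8 quantization, the kind of model shrinking that makes on-device inference practical. This is a generic illustration of the technique, not Qualcomm AI Hub code; the array sizes and quantization scheme (symmetric, per-tensor) are assumptions chosen for simplicity:

```python
# Minimal sketch of symmetric post-training int8 quantization,
# a generic illustration of edge-oriented model shrinking.
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 values plus one scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 + scale."""
    return q.astype(np.float32) * scale

# Hypothetical weight tensor standing in for one layer of a model.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print(f"Size: {w.nbytes} B float32 -> {q.nbytes} B int8 (4x smaller)")
print(f"Max reconstruction error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

The 4x memory reduction (and the corresponding bandwidth and energy savings during inference) comes at the cost of a bounded rounding error of at most half the scale factor per weight, which is the basic trade-off behind running large models on NPU-equipped edge devices.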
References:
Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices
Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market
Private 5G networks move to include automation, autonomous systems, edge computing & AI operations
Nvidia’s networking solutions give it an edge over competitive AI chip makers
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
CES 2025: Intel announces edge compute processors with AI inferencing capabilities
Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

