China vs U.S.: Race to Generate Power for AI Data Centers as Electricity Demand Soars

The International Energy Agency (IEA) forecasts that over the next five years, global electricity demand will grow roughly 50% faster than it did during the previous decade – and more than twice as fast as overall energy demand. That tremendous increase is driven by power-hungry AI data centers, along with electric cars and buses, electrified industrial machinery, and electric home heating.

Global AI growth will be contingent on generating more power for data centers:

  • Global data center power demand is now expected to rise to a record 1,596 terawatt-hours by 2035, a 255% increase from 2025 levels.
  • The U.S. is set to remain the leader in energy consumption, with demand surging 144% over this period to 430 terawatt-hours.
  • China’s demand is projected to rise 255%, to 397 terawatt-hours.
  • European demand is expected to surge 303%, to 274 terawatt-hours.
  • New data centers coming online between now and 2030 will need more than 600 terawatt-hours of electricity a year – enough to power ~60 million homes.
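
As a quick consistency check on these projections, the short sketch below (assuming the quoted growth rates are measured against 2025 annual demand) back-computes the implied 2025 baselines and verifies the homes-equivalent figure:

```python
# Back-of-the-envelope check of the figures above (all values in TWh).
# Assumption: the quoted growth rates are relative to 2025 annual demand.
projections_2035 = {           # region: (2035 demand in TWh, % growth vs 2025)
    "Global": (1596, 255),
    "U.S.":   (430, 144),
    "China":  (397, 255),
    "Europe": (274, 303),
}

for region, (twh_2035, pct_growth) in projections_2035.items():
    implied_2025 = twh_2035 / (1 + pct_growth / 100)
    print(f"{region:7s} implied 2025 baseline: {implied_2025:6.0f} TWh")

# Sanity check on the "~60 million homes" claim: 600 TWh/year spread over
# 60 million homes is about 10,000 kWh per home per year, in line with
# typical U.S. household consumption.
print(f"{600e12 / 60e6 / 1e3:,.0f} kWh per home per year")
```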

 

Power for AI Data Centers: China vs U.S.:

China is currently ahead of the United States in generating and building out power infrastructure to support AI data centers, a phenomenon sometimes described by industry observers as an “electron gap.”

China’s rapid, centralized expansion of electricity generation—including both massive renewable projects and traditional, dispatchable power—has created a significant capacity advantage in the race to support AI workloads, which are increasingly limited by energy availability rather than just chip access.

Key factors in China’s power advantage for AI include:

Massive Generation Growth: Between 2010 and 2024, China’s power production increased by more than that of the rest of the world combined. In 2024 alone, China added 543 gigawatts of power capacity—more than the total capacity added by the U.S. in its entire history.

Significant Surplus Capacity: By 2030, China is projected to have roughly 400 gigawatts of spare power capacity, which is triple the expected power demand of the global data center fleet at that time.

“Eastern Data, Western Computing” Initiative: China is actively shifting energy-intensive data centers to its resource-rich western regions (like Inner Mongolia) while powering them with surplus renewable energy, such as wind and solar.

Lower Costs and Faster Buildouts: Data centers in China can pay less than half the electricity rates that American data centers do. Furthermore, projects in China can move from planning to operation in months rather than years, thanks to faster permitting and fewer regulatory hurdles than in the U.S.

Conclusions:

While the U.S. currently leads in advanced AI chips and model development, it is facing a severe “energy bottleneck” for new data centers, with some requiring over a gigawatt of power. U.S. power demand has remained relatively flat for 20 years, resulting in a lag in building new capacity, whereas China has traditionally built power infrastructure in anticipation of high demand. Morgan Stanley has forecast that U.S. data centers could face a 44-gigawatt electricity shortfall in the next three years.
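
To put Morgan Stanley's figure in perspective, a gigawatt of continuous draw corresponds to 8.76 terawatt-hours per year, so a 44-gigawatt gap is comparable in scale to the entire 2035 U.S. data-center demand forecast cited above. A one-line check:

```python
# 44 GW of continuous draw, expressed as annual energy.
shortfall_gw = 44
hours_per_year = 8_760
# ~385 TWh/year, versus the 430 TWh/year 2035 U.S. data-center forecast above.
print(f"{shortfall_gw} GW continuous = {shortfall_gw * hours_per_year / 1_000:.0f} TWh/year")
```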

Despite China’s advantage in energy, U.S. export controls on high-end AI chips (such as Nvidia’s GPUs) have acted as a significant constraint on China’s actual AI compute power. This has led to a situation where the U.S. has the best “brains” (chips) but limited power to run them, while China has the “muscle” (energy) but limited access to top-tier AI brains.

However, the rapid improvements in Chinese AI models (such as DeepSeek), which are more energy-efficient and optimized for lower-tier hardware, may help mitigate this constraint.

References:

https://www.bloomberg.com/news/newsletters/2026-02-14/ai-battle-turbocharged-by-50-power-demand-surge-new-economy

https://www.iea.org/reports/electricity-2026

https://x.com/KobeissiLetter/status/2023437717888250284

How will the United States and China power the AI race?

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Ethernet gains on InfiniBand in data center connectivity market; White Box/ODM vendors top choice for AI hyperscalers

Fiber Optic Boost: Corning and Meta in multiyear $6 billion deal to accelerate U.S. data center buildout

How will fiber and equipment vendors meet the increased demand for fiber optics in 2026 due to AI data center buildouts?

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

 

Analysis: Rakuten Mobile and Intel partnership to embed AI directly into vRAN

Today, Rakuten Mobile and Intel announced a partnership to embed Artificial Intelligence (AI) directly into the virtualized Radio Access Network (vRAN) stack. While vRAN currently represents a small percentage of the total RAN market (Dell’Oro Group recently forecast vRAN to account for 5% to 10% of the total RAN market by 2026), this partnership could boost that percentage, as it addresses key adoption hurdles—performance, power, and AI integration. Key areas of innovation include:

  • Enhanced Wireless Spectral Efficiency: Optimizing spectrum utilization for superior network performance and capacity.
  • Automated RAN Operations: Streamlining network management and reducing operational complexities through intelligent automation.
  • Optimized Resource Allocation: Dynamically allocating network resources for maximum efficiency and subscriber experience.
  • Increased Energy Efficiency: Significantly reducing power consumption in the RAN, contributing to sustainable network operations.

The partnership essentially aims to make vRAN superior in performance and TCO (Total Cost of Ownership) to traditional, proprietary, purpose-built RAN hardware.

“We are incredibly excited to expand our collaboration with Intel to pioneer truly AI-native RAN architectures,” said Sharad Sriwastawa, co-CEO and CTO, Rakuten Mobile. “Together, we are validating transformative AI-driven innovations that will not only shape but define the future of mobile networks. This partnership showcases how intelligent RAN can be achieved through the seamless and efficient integration of AI workloads directly within existing vRAN software stacks, delivering unparalleled performance and efficiency.”

Rakuten Mobile and Intel are engaged in rigorous testing and validation of cutting-edge RAN AI use cases across Layer 1, Layer 2, and comprehensive RAN operation and network platform management. A core objective is the seamless integration of AI directly into the RAN stack, meticulously addressing integration challenges while upholding carrier-grade reliability and stringent latency requirements.

Utilizing Intel FlexRAN reference software, the Intel vRAN AI Development Kit, and a robust suite of AI tools and libraries, Rakuten Mobile is collaboratively training, optimizing, and deploying sophisticated AI models specifically tailored for demanding RAN workloads. This collaborative effort is designed to realize ultra-low, real-time AI latency on Intel Xeon 6 SoCs, capitalizing on their built-in AI acceleration capabilities, including AVX512/VNNI and AMX.
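
As a concrete illustration of what relying on those built-in extensions can look like in practice, the sketch below checks whether a Linux host exposes the AVX512/VNNI and AMX features named above before scheduling AI work on it. The /proc/cpuinfo flag names are assumptions based on how recent Linux kernels report these features; this is an illustrative sketch, not Intel or Rakuten tooling:

```python
# Minimal sketch: verify that a Linux host's CPU exposes the instruction-set
# extensions mentioned above (AVX-512 VNNI and AMX) before placing AI-inference
# workloads on it. Flag names are as reported by recent Linux kernels.

REQUIRED_FLAGS = {"avx512_vnni", "amx_tile", "amx_int8", "amx_bf16"}

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

missing = REQUIRED_FLAGS - cpu_flags()
if missing:
    print("CPU lacks AI acceleration features:", ", ".join(sorted(missing)))
else:
    print("AVX-512 VNNI and AMX available; AI-on-vRAN workloads can use them.")
```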

“AI is transforming how networks are built and operated,” said Kevork Kechichian, Executive Vice President and General Manager of the Data Center Group, Intel Corporation. “Together with Rakuten, we are demonstrating how AI benefits can be achieved in vRAN. Intel Xeon processors power the majority of commercial vRAN deployments worldwide, and this transformation momentum continues to accelerate. Intel is providing AI-ready Xeon platforms that allow operators like Rakuten to design AI-ready infrastructure from the ground up, with built-in acceleration capabilities.”

Rakuten says it is “poised to unlock new levels of RAN performance, efficiency, and automation by embedding AI directly into the RAN software stack.” The company calls this AI-native evolution “the future of cloud-native, AI-powered RAN – inherently software-upgradable and built on open, general-purpose computing platforms,” adding that “the extended collaboration between Rakuten Mobile and Intel marks a significant step toward realizing the vision of autonomous, self-optimizing networks and powerfully reinforces both companies’ commitment to open, programmable, and intelligent RAN infrastructure worldwide.”

……………………………………………………………………………………………………………………………………………………………………..

Here is why this partnership might boost the vRAN market:
  • AI-Native Efficiency & Performance: The collaboration focuses on integrating AI to improve network performance and energy efficiency, which is a major pain point for operators. By embedding AI directly into the vRAN stack, they are enhancing wireless spectral efficiency, reducing power consumption, and automating RAN operations.
  • Leveraging High-Performance Hardware: The initiative utilizes Intel® Xeon® 6 processors with built-in vRAN Boost. This eliminates the need for external, power-hungry accelerator cards, offering up to 2.4x more capacity and 70% better performance-per-watt (see the quick arithmetic after this list).
  • Validation of Large-Scale Commercial Viability: Rakuten Mobile operates the world’s first fully virtualized, cloud-native network. Its continued collaboration with Intel to make the vRAN AI-native provides a proven blueprint for other operators, reducing the perceived risk of adopting vRAN, particularly in brownfield (existing) networks.
  • Acceleration of Open RAN Ecosystem: The collaboration supports the broader push towards Open RAN, which is expected to see a significant rise in market share, doubling between 2022 and 2026.
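
A note on the two Intel headline numbers in the hardware bullet above: read together, they imply that absolute power draw still rises, since power equals delivered capacity divided by performance-per-watt. A quick check:

```python
# If a platform delivers 2.4x the capacity at 1.7x the performance-per-watt,
# its absolute power draw rises, because power = capacity / (perf-per-watt).
capacity_gain = 2.4
perf_per_watt_gain = 1.7
power_ratio = capacity_gain / perf_per_watt_gain
print(f"Relative power draw: {power_ratio:.2f}x for 2.4x capacity")  # ~1.41x
```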

………………………………………………………………………………………………………………………………………………………………

vRAN Market Outlook (2026–2033):
Market analysts expect 2026 to be a “pivotal year” for early real-world deployments of these intelligent architectures. While the base RAN market is stagnant, the virtualized segment is projected for aggressive growth:
  • Market Share Shift: Omdia forecasts that vRAN’s share of the RAN baseband subsector will reach 20% by 2028. That’s a significant jump from its current low single-digit percentage.
  • Explosive CAGR: The global vRAN market is projected to grow from approximately $16.6 billion in 2024 to nearly $80 billion by 2033, representing a 19.5% CAGR.
  • Small Cell Dominance: By the end of 2026, it is estimated that 77% of all vRAN implementations will be on small cell architectures, a key area where Rakuten and Intel have demonstrated success.
Despite these gains, vRAN still faces “performance parity” challenges with traditional RAN in high-capacity macro environments, which may temper the speed of total market replacement in the near term.
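
The quoted market figures are easy to cross-check. A minimal sketch, taking the 2024 and 2033 values above at face value:

```python
# Quick check that the quoted market size and CAGR are mutually consistent.
start_value, end_value = 16.6, 80.0   # US$ billions, 2024 -> 2033
years = 2033 - 2024                   # 9 compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")    # ~19.1%, close to the quoted 19.5%

# Conversely, compounding $16.6B at 19.5% for 9 years:
print(f"Projected 2033 value: ${start_value * 1.195 ** years:.1f}B")  # ~$82.5B
```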
………………………………………………………………………………………………………………………………………………………………

References:

https://corp.mobile.rakuten.co.jp/english/news/press/2026/0210_01/

Virtual RAN gets a boost from Samsung demo using Intel’s Grand Rapids/Xeon Series 6 SoC

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

vRAN market disappoints – just like OpenRAN and mobile 5G

LightCounting: Open RAN/vRAN market is pausing and regrouping

Dell’Oro: Private 5G ecosystem is evolving; vRAN gaining momentum; skepticism increasing

https://www.mordorintelligence.com/industry-reports/virtualized-ran-vran-market

https://www.grandviewresearch.com/industry-analysis/virtualized-radio-access-network-market-report

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Executive Summary:

In a February 6, 2026 CNBC interview with Scott Wapner, Nvidia CEO Jensen Huang [1.] characterized the current AI build‑out as “the largest infrastructure buildout in human history,” driven by exceptionally high demand for compute from hyperscalers and AI companies. “Through the roof” is how he described AI infrastructure spending. It’s a “once-in-a-generation infrastructure buildout,” he said, specifically highlighting that demand for Nvidia’s Blackwell chips and the upcoming Vera Rubin platform is “sky-high.” He emphasized that the shift from experimental AI to AI as a fundamental utility has reached a definitive inflection point for every major industry.

Huang forecasts that a roughly 7- to 8-year AI investment cycle lies ahead, with capital intensity justified because deployed AI infrastructure is already generating rising cash flows for operators. He maintains that the widely cited ~$660 billion AI data center capex pipeline is sustainable, on the grounds that GPUs and surrounding systems are revenue‑generating assets, not speculative overbuild. In his view, as long as customers can monetize AI workloads profitably, they will “keep multiplying their investments,” which underpins continued multi‑year GPU demand, including for prior‑generation parts that remain fully leased.

Note 1.  Being the undisputed leader of AI hardware (GPU chips and networking equipment via its Mellanox acquisition), Nvidia MUST ALWAYS MAKE POSITIVE REMARKS AND FORECASTS related to the AI build out boom.  Reader discretion is advised regarding Huang’s extremely bullish, “all-in on AI” remarks.

Huang reiterated that AI will “fundamentally change how we compute everything,” shifting data centers from general‑purpose CPU‑centric architectures to accelerated computing built around GPUs and dense networking. He emphasizes Nvidia’s positioning as a full‑stack infrastructure and computing platform provider—chips, systems, networking, and software—rather than a standalone chip vendor. He accurately stated that Nvidia designs “all components of AI infrastructure” so that system‑level optimization (GPU, NIC, interconnect, software stack) can deliver performance gains that outpace what is possible with a single chip under a slowing Moore’s Law. The installed base is presented as productive: even six‑year‑old A100‑class GPUs are described as fully utilized through leasing, underscoring persistent elasticity of AI compute demand across generations.

AI Poster Children – OpenAI and Anthropic:

Huang praised OpenAI and Anthropic, the two leading artificial intelligence labs, which both use Nvidia chips through cloud providers. Nvidia invested $10 billion in Anthropic last year, and Huang said earlier this week that the chipmaker will invest heavily in OpenAI’s next fundraising round.

“Anthropic is making great money. Open AI is making great money,” Huang said. “If they could have twice as much compute, the revenues would go up four times as much.”

He said that all the graphics processing units that Nvidia has sold in the past — even six-year-old chips such as the A100 — are currently being rented, reflecting sustained demand for AI computing power.

“To the extent that people continue to pay for the AI and the AI companies are able to generate a profit from that, they’re going to keep on doubling, doubling, doubling, doubling,” Huang said.

Economics, utilization, and returns:

On economics, Huang’s central claim is that AI capex converts into recurring, growing revenue streams for cloud providers and AI platforms, which differentiates this cycle from prior overbuilds. He highlights very high utilization: GPUs from multiple generations remain in service, with cloud operators effectively turning them into yield‑bearing infrastructure.

This utilization and monetization profile underlies his view that the capex “arms race” is rational: when AI services are profitable, incremental racks of GPUs, network fabric, and storage can be modeled as NPV‑positive infrastructure projects rather than speculative capacity. He implies that concerns about a near‑term capex cliff are misplaced so long as end‑market AI adoption continues to inflect.
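
As a sketch of that framing (with entirely hypothetical numbers, not figures from Huang or Nvidia), a GPU rack can be modeled as a simple discounted-cash-flow project:

```python
# Illustrative NPV framing for a GPU-rack buildout, per the paragraph above:
# upfront capex plus a depreciating rental-revenue stream. All numbers are
# hypothetical and chosen only to show the shape of the calculation.
def npv(capex: float, annual_cash_flows: list[float], discount_rate: float) -> float:
    return -capex + sum(cf / (1 + discount_rate) ** (t + 1)
                        for t, cf in enumerate(annual_cash_flows))

capex = 4.0e6                                      # hypothetical $4M rack (HW + power + network)
cash_flows = [1.8e6 * 0.85 ** t for t in range(6)] # rental revenue decaying 15%/yr over 6 years
print(f"NPV at 12% discount rate: ${npv(capex, cash_flows, 0.12) / 1e6:.2f}M")
# Positive NPV here is the condition under which the "rational arms race" holds;
# lower utilization or pricing and the same math yields overbuild.
```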

Competitive and geopolitical context:

Huang acknowledges intensifying global competition in AI chips and infrastructure, including from Chinese vendors such as Huawei, especially under U.S. export controls that have reduced Nvidia’s China revenue share to roughly half of pre‑control levels. He frames Nvidia’s strategy as maintaining an innovation lead so that developers worldwide depend on its leading‑edge AI platforms, which he sees as key to U.S. leadership in the AI race.

He also ties AI infrastructure to national‑scale priorities in energy and industrial policy, suggesting that AI data centers are becoming a foundational layer of economic productivity, analogous to past buildouts in electricity and the internet.

Implications for hyperscalers and chips:

Hyperscalers (and also Nvidia customers) Meta, Amazon, Google/Alphabet, and Microsoft recently stated that they plan to dramatically increase spending on AI infrastructure in the years ahead. In total, these hyperscalers could spend $660 billion on capital expenditures in 2026 [2.], with much of that spending going toward buying Nvidia’s chips. Huang’s message to them is that AI data centers are evolving into “AI factories” where each gigawatt of capacity represents tens of billions of dollars of investment spanning land, compute, and networking. He suggests that the hyperscaler industry—roughly a $2.5 trillion sector with about $500 billion in annual capex transitioning from CPU to GPU‑centric generative AI—still has substantial room to run.

Note 2. An understated point is that while these hyperscalers are spending hundreds of billions of dollars on AI data centers and Nvidia chips/equipment, they are simultaneously laying off tens of thousands of employees. For example, Amazon recently announced 16,000 job cuts this year after 14,000 layoffs last October.

From a chip‑level perspective, he argues that Nvidia’s competitive moat stems from tightly integrated hardware, networking, and software ecosystems rather than any single component, positioning the company as the systems architect of AI infrastructure rather than just a merchant GPU vendor.

References:

https://www.cnbc.com/2026/02/06/nvidia-rises-7percent-as-ceo-says-660-billion-capex-buildout-is-sustainable.html

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers

 

 

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Qualcomm is a strong believer in Edge AI as an enabler of faster, more secure, and energy-efficient processing directly on devices—rather than the cloud—unlocking real-time intelligence for industries like robotics and smart cities.

In support of that vision, the fabless SoC company announced the official launch of its Qualcomm AI Program for Innovators (QAIPI) 2026 – APAC, a regional startup incubation initiative that supports startups across Japan, Singapore, and South Korea in advancing the development and commercialization of innovative edge AI solutions.

Building on Qualcomm’s commitment to edge AI innovation, the second edition of QAIPI-APAC invites startups to develop intelligent solutions across a broad range of edge-AI applications using Qualcomm Dragonwing™ and Snapdragon® platforms, together with the new Arduino® UNO Q development board, strengthening their pathway toward global commercialization.

Startups gain comprehensive support and resources, including access to Qualcomm Dragonwing™ and Snapdragon® platforms, the Arduino® UNO Q development board, technical guidance and mentorship, a grant of up to US$10,000, and eligibility for up to US$5,000 in patent filing incentives, accelerating AI product development and deployment.

Applications are open now through April 30, 2026 and will be evaluated based on innovation, technical feasibility, potential societal impact, and commercial relevance. The program will be implemented in two phases. The application phase is open to eligible startups incorporated and registered in Japan, Singapore, or South Korea. Shortlisted startups will enter the mentorship phase, receiving one-on-one guidance, online training, technical support, and access to Qualcomm-powered hardware platforms and development kits for product development. They will also receive a shortlist grant of up to US$10,000 and may be eligible for a patent filing incentive of up to US$5,000. At the conclusion of the program, shortlisted startups may be invited to showcase their innovations at a signature Demo Day in late 2026, engaging with industry leaders, investors, and potential collaborators across the APAC innovation ecosystem.

Comment and Analysis:

Qualcomm is a strong believer in Edge AI—the practice of running AI models directly on devices (smartphones, cars, IoT, PCs) rather than in the cloud—because it views edge AI as the next major technological paradigm shift, overcoming limitations inherent in cloud computing. Despite the challenges of power consumption and processing limitations, Qualcomm’s strategy hinges on specialized, heterogeneous computing rather than relying solely on RISC-based CPU cores.

Key Issues for Qualcomm’s Edge AI solutions:

1.  The “Heterogeneous” Solution to Processing Limits
While it is true that standard CPU cores (even RISC-based ones) are inefficient for AI, Qualcomm does not rely on them alone for AI workloads. Instead, it uses a heterogeneous architecture:
  • Qualcomm® AI Engine: This combines specialized hardware, including the Hexagon NPU (Neural Processing Unit), Adreno GPU, and CPU. The NPU is specifically designed to handle high-performance, complex AI workloads (like Generative AI) far more efficiently than a generic CPU.
  • Custom Oryon CPU: The latest Snapdragon X Elite platform features customized cores that provide high performance while outperforming traditional x86 solutions in power efficiency for everyday tasks.
2. Overcoming Power Consumption (Performance/Watt)
Qualcomm focuses on “Performance per Watt” rather than raw power.
  • Specialization Saves Power: By using specialized AI engines (NPUs) rather than general-purpose CPU/GPU cores, Qualcomm can run inference tasks at a fraction of the power cost.
  • Lower Overall Energy: Doing AI at the edge can save total energy by avoiding the need to send data to a power-hungry data center, which requires network infrastructure, and then sending it back.
  • Intelligent Efficiency: The Snapdragon 8 Elite, for example, saw a 27% reduction in power consumption while increasing AI performance significantly.
3. Critical Advantages of Edge over Cloud
Qualcomm believes edge is essential because cloud AI cannot solve certain critical problems:
  • Instant Responsiveness (Low Latency): For autonomous vehicles or industrial robotics, a few milliseconds of latency to the cloud can be catastrophic. Edge AI provides real-time, instantaneous analysis.
  • Privacy and Security: Data never leaves the device. This is crucial for privacy-conscious users (biometrics) and compliance (GDPR), which is a major advantage over cloud-based AI.
  • Offline Capability: Edge devices, such as agricultural sensors or smart home devices in remote areas, continue to function without internet connectivity.
4. Market Expansion and Economic Drivers
  • Diversification: With the smartphone market maturing, Qualcomm sees the “Connected Intelligent Edge” as a huge growth opportunity, extending their reach into automotive, IoT, and PCs.
  • “Ecosystem of You”: Qualcomm aims to connect billions of devices, making AI personal and context-aware, rather than generic.
5. Bridging the Gap: Software & Model Optimization
Qualcomm is not just providing hardware; they are simplifying the deployment of AI:
  • Qualcomm AI Hub: This makes it easier for developers to deploy optimized models on Snapdragon devices.
  • Model Optimization: They specialize in making AI models smaller and more efficient (using quantization and specialized AI inference) to run on devices without requiring massive, cloud-sized computing power (a minimal quantization example follows this section).
In summary, Qualcomm believes in Edge AI because they are building highly specialized hardware designed to excel within tight power and thermal constraints.
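
As a minimal illustration of the quantization idea mentioned under “Model Optimization” above (a generic sketch, not Qualcomm AI Hub's actual pipeline), the following maps float32 weights to int8 and measures the reconstruction error:

```python
# Generic post-training quantization sketch: map float32 weights to int8 so a
# model fits tight edge memory/power budgets. Illustrative only.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization of a float tensor to int8."""
    scale = np.abs(weights).max() / 127.0               # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()
print(f"4x memory reduction, max reconstruction error: {error:.4f}")
```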
……………………………………………………………………………………………………………………………………………………………………………

References:

https://www.prnewswire.com/apac/news-releases/qualcomm-ai-program-for-innovators-2026–apac-officially-kicks-off—empowering-startups-across-japan-singapore-and-south-korea-to-lead-the-ai-innovation-302676025.html

Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Nvidia’s networking solutions give it an edge over competitive AI chip makers

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

 

Analysis: SpaceX FCC filing to launch up to 1M LEO satellites for solar powered AI data centers in space

SpaceX has applied to the Federal Communications Commission (FCC) for permission to launch up to 1 million LEO satellites for a new solar-powered AI data center system in space. The private company, 40% owned by Elon Musk, envisions an orbital data center system with “unprecedented computing capacity” needed to run large-scale AI inference and applications for billions of users, according to SpaceX’s filing, submitted late on Friday.

Data centers are the physical backbone of artificial intelligence, requiring massive amounts of power. “By directly harnessing near-constant solar power with little operating or maintenance costs, these satellites will achieve transformative cost and energy efficiency while significantly reducing the environmental impact associated with terrestrial data centers,” the FCC filing said. Musk would need the telecom regulator’s approval to move forward.

Credit: Blueee/Alamy Stock Photo

The proposed new satellites would operate in “narrow orbital shells” of up to 50 kilometers each, at altitudes between 500 kilometers and 2,000 kilometers, and at 30-degree and “sun-synchronous” orbit inclinations to capture power from the sun. The system is designed to be interconnected via optical links with existing Starlink broadband satellites, which would transmit data traffic back to ground stations on Earth.

SpaceX’s request bets heavily on the reduced costs of Starship, the company’s next-generation reusable rocket under development. Starship has test-launched 11 times since 2023. Musk expects the rocket, which is crucial for expanding Starlink with more powerful satellites, to put its first payloads into orbit this year.

“Fortunately, the development of fully reusable launch vehicles like Starship that can deploy millions of tons of mass per year to orbit when launching at rate, means on-orbit processing capacity can reach unprecedented scale and speed compared to terrestrial buildouts, with significantly reduced environmental impact,” SpaceX said.

SpaceX is positioning orbital AI compute as the definitive solution to the terrestrial capacity crunch, arguing that space-based infrastructure represents the most efficient path for scaling next-generation workloads. As ground-based data centers face increasing grid density constraints and power delivery limitations, SpaceX intends to leverage high-availability solar irradiation to bypass Earth’s energy bottlenecks. The company’s technical rationale hinges on several key architectural advantages:
  • Energy Density & Sustainability: By tapping into “near-constant solar power,” SpaceX aims to utilize a fraction of the Sun’s output—noting that even a millionth of its energy exceeds current civilizational demand by four orders of magnitude.
  • Thermal Management: To address the cooling requirements of high-density AI clusters, these satellites will utilize radiative heat dissipation, eliminating the water-intensive cooling loops required by terrestrial facilities.
  • Opex & Scalability: The financial viability of this orbital layer is tethered to the Starship launch platform. SpaceX anticipates that the radical reduction in $/kg launch costs provided by a fully reusable heavy-lift vehicle will enable rapid scaling and ensure that, within years, the lowest LCOA (Levelized Cost of AI) will be achieved in orbit.
The transition to orbital AI compute introduces a fundamental shift in network topology, moving processing from terrestrial hubs to a decentralized, space-based edge layer. The latency implications are characterized by three primary architectural factors:
  • Vacuum-Speed Data Transmission: In a vacuum, light propagates roughly 50% faster than through terrestrial fiber optic cables. By utilizing Starlink’s optical inter-satellite links (OISLs)—a “petabit” laser mesh—data can bypass terrestrial bottlenecks and subsea cables. This potentially reduces intercontinental latency for AI inference to under 50ms, surpassing many long-haul terrestrial routes.
  • Edge-Native Processing & Data Gravity: Current workflows require downlinking massive raw datasets (e.g., Synthetic Aperture Radar imagery) for terrestrial processing, a process that can take hours. Shifting to orbital edge computing allows for “in-situ” AI inference, processing data onboard to deliver actionable insights in minutes rather than hours. This “Space Cloud” architecture eliminates the need to route raw data back to the Earth’s internet backbone, reducing data transmission volumes by up to 90%.
  • LEO Proximity vs. Terrestrial Hops: While terrestrial fiber remains the “gold standard” for short-range latency (typically 1–10ms), it is often hindered by inefficient routing and multiple hops. SpaceX’s LEO constellation, operating at altitudes between 340km and 614km, currently delivers median peak-hour latencies of ~26ms in the US. Future orbital configurations may feature clusters at varying 50km intervals to optimize for specific workload and latency tiers.
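
The propagation-speed claims above are straightforward to verify. A minimal sketch, assuming a typical fiber group index of ~1.47, a great-circle route, and ignoring switching and queuing delays:

```python
# Illustrative propagation-delay comparison: vacuum (laser OISL) vs. silica fiber.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBER_INDEX = 1.47               # typical effective group index of silica fiber

def one_way_ms(distance_km: float, medium_speed_km_s: float) -> float:
    return distance_km / medium_speed_km_s * 1_000

route_km = 5_600                 # roughly New York -> London, great circle
fiber_ms = one_way_ms(route_km, C_VACUUM_KM_S / FIBER_INDEX)
vacuum_ms = one_way_ms(route_km, C_VACUUM_KM_S)

print(f"Fiber:  {fiber_ms:5.1f} ms one-way")   # ~27.5 ms
print(f"Vacuum: {vacuum_ms:5.1f} ms one-way")  # ~18.7 ms, i.e., ~47% faster
# A LEO laser-mesh path adds up/downlink hops (~550 km each) but can still
# compete with fiber on long routes because the inter-satellite segments run
# at vacuum speed, consistent with the "roughly 50% faster" figure above.
```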

………………………………………………………………………………………………………………………………………………………………………………………

The SpaceX FCC filing on Friday follows an exclusive report by Reuters that Elon Musk is considering merging SpaceX with his xAI (Grok chatbot) company ahead of an IPO later this year. Under the proposed merger, shares of xAI would be exchanged for shares in SpaceX. Two entities have been set up in Nevada to facilitate the transaction, Reuters said.  Musk also runs electric automaker Tesla, tunnel company The Boring Co. and neurotechnology company Neuralink.

………………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.reuters.com/business/aerospace-defense/spacex-seeks-fcc-nod-solar-powered-satellite-data-centers-ai-2026-01-31/

https://www.lightreading.com/satellite/spacex-seeks-fcc-approval-for-mega-ai-data-center-constellation

https://www.reuters.com/world/musks-spacex-merger-talks-with-xai-ahead-planned-ipo-source-says-2026-01-29/

Google’s Project Suncatcher: a moonshot project to power ML/AI compute from space

Blue Origin announces TeraWave – satellite internet rival for Starlink and Amazon Leo

China ITU filing to put ~200K satellites in low earth orbit while FCC authorizes 7.5K additional Starlink LEO satellites

Amazon Leo (formerly Project Kuiper) unveils satellite broadband for enterprises; Competitive analysis with Starlink

Telecoms.com’s survey: 5G NTNs to highlight service reliability and network redundancy

 

Huge significance of EchoStar’s AWS-4 spectrum sale to SpaceX

U.S. BEAD overhaul to benefit Starlink/SpaceX at the expense of fiber broadband providers

Telstra selects SpaceX’s Starlink to bring Satellite-to-Mobile text messaging to its customers in Australia

SpaceX launches first set of Starlink satellites with direct-to-cell capabilities

AST SpaceMobile to deliver U.S. nationwide LEO satellite services in 2026

GEO satellite internet from HughesNet and Viasat can’t compete with LEO Starlink in speed or latency

How will fiber and equipment vendors meet the increased demand for fiber optics in 2026 due to AI data center buildouts?

Subsea cable systems: the new high-capacity, high-resilience backbone of the AI-driven global network

Fiber Optic Boost: Corning and Meta in multiyear $6 billion deal to accelerate U.S. data center buildout

Corning Incorporated and Meta Platforms, Inc. (previously known as Facebook) have entered a multiyear agreement valued at up to $6 billion. This strategic collaboration aims to accelerate the deployment of cutting-edge data center infrastructure within the U.S. to bolster Meta’s advanced applications, technologies, and ambitious artificial intelligence initiatives.   The agreement specifies that Corning will furnish Meta with its latest advancements in optical fiber, cable, and comprehensive connectivity solutions. As part of this commitment, Corning plans to significantly scale its manufacturing capabilities across its North Carolina facilities.

A key element of this expansion is a substantial capacity increase at Corning’s fiber optic cable manufacturing plant in Hickory, NC, for which Meta will serve as the foundational anchor customer. The construction and operation of these data centers — critical infrastructure that, in Meta’s words, supports its technologies and moves it “toward personalized superintelligence” — necessitate robust server and hardware systems designed to facilitate information transfer and connectivity with minimal latency. Fiber optic cabling is a cornerstone component for enabling this high-speed, near real-time connectivity, powering applications from sophisticated wearable technology like the Ray-Ban Meta AI glasses to the global connectivity services utilized by billions of individuals and enterprises.

“This long-term partnership with Meta reflects Corning’s commitment to develop, innovate, and manufacture the critical technologies that power next-generation data centers here in the U.S.,” said Wendell P. Weeks, Chairman and Chief Executive Officer, Corning Incorporated. “The investment will expand our manufacturing footprint in North Carolina, support an increase in Corning’s employment levels in the state by 15 to 20 percent, and help sustain a highly skilled workforce of more than 5,000 — including the scientists, engineers, and production teams at two of the world’s largest optical fiber and cable manufacturing facilities. Together with Meta, we’re strengthening domestic supply chains and helping ensure that advanced data centers are built using U.S. innovation and advanced manufacturing.”

Meta is expanding its commitment to build industry-leading data centers in the U.S. and to source advanced technology made domestically.  Here are two quotes from them:

  1. “Building the most advanced data centers in the U.S. requires world-class partners and American manufacturing,” said Joel Kaplan, Chief Global Affairs Officer at Meta. “We’re proud to partner with Corning – a company with deep expertise in optical connectivity and commitment to domestic manufacturing – for the high-performance fiber optic cables our AI infrastructure needs. This collaboration will help create good-paying, skilled U.S. jobs, strengthen local economies, and help secure the U.S. lead in the global AI race.”
  2. “As digital tools and generative AI continue to transform our economy — in fields like healthcare, finance, agriculture, and more — the demand for fiber connectivity will continue to grow. By supporting American companies like Corning and building and operating data centers in America, we’re helping ensure that our nation maintains its competitive edge in the digital economy and the global race for AI leadership.”

Key elements of the agreement:

  • Multiyear, up to $6 billion commitment.
  • Corning to supply latest generation optical fiber, cable and connectivity products designed to meet the density and scale demands of advanced AI data centers.
  • New optical cable manufacturing facility in Hickory, North Carolina, in addition to expanded production capacity across Corning’s North Carolina operations.
  • Agreement supports Corning’s projected employment growth in North Carolina by 15 to 20 percent, sustaining a skilled workforce of more than 5,000 employees in the state, including thousands of jobs tied to two of the world’s largest optical fiber and cable manufacturing facilities.

…………………………………………………………………………………………………………………………………………………………….

Comment and Analysis:

Corning’s “up to $6 billion” Meta agreement is essentially a long‑term, anchor‑tenant bet that AI‑era data centers will be fundamentally more fiber‑intensive than legacy cloud data centers, with Corning positioning itself as the default U.S. optical plant for Meta’s buildout through ~2030. In practice, this deal is a long‑term, take‑or‑pay‑style capacity lock that de‑risks Corning’s capex while giving Meta priority access to scarce, high‑performance, data‑center‑grade fiber and cabling.

AI data centers are becoming the new FTTH in the sense that hyperscale AI buildouts are now the primary structural driver of incremental fiber demand, design innovation, and capex prioritization—but with far higher fiber intensity per site and far tighter performance constraints than residential access ever imposed.

Why “AI Data Centers are the new FTTH” for fiber optic vendors:

For fiber‑optic vendors, AI data centers now play the role that FTTH did in the 2005–2015 cycle: the anchor use case that justifies new glass, cable, and connectivity capacity.

  • AI‑optimized data centers need 2–4× more fiber cabling than traditional hyperscale facilities, and in some designs more than 10×, driven by massively parallel GPU fabrics and east–west traffic.

  • U.S. hyperscale capacity is expected to triple by 2029, forcing roughly a 2× increase in fiber route miles and a 2.3× increase in total fiber miles, a demand shock comparable to or larger than the early FTTH boom but concentrated in fewer, much larger customers.

  • This is already reshaping product roadmaps toward ultra‑high‑fiber‑count (UHFC) cable, bend‑insensitive fiber, and very‑small‑form‑factor connectors to handle hundreds to thousands of fibers per rack and per duct.

In other words, where FTTH once dictated volume and economies of scale, AI data centers now dictate density, performance, and margin mix.

Carrier‑infrastructure: from access to fabric:

From a carrier perspective, the “new FTTH” analogy is about what drives long‑haul and metro planning: instead of last‑mile penetration, it’s AI fabric connectivity and east–west inter‑DC routes.

  • Each new hyperscale/AI data center is modeled to require on the order of 135 new fiber route miles just to reach three core network interconnection points, plus additional miles for new long‑haul routes and capacity upgrades.

  • An FBA‑commissioned study projects U.S. data centers alone will need on the order of 214 million additional fiber miles by 2029, nearly doubling the installed base from ~160M to ~373M fiber miles; that is the new “build everywhere” narrative operators once used for FTTH.

  • Carriers now plan backbone routes, ILAs, and regional rings around dense clusters of AI campuses, treating them as primary traffic gravity wells rather than as just a handful of peering sites at the edge of a consumer broadband network.
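
A quick consistency check on the fiber-mile figures quoted in this and the preceding section (the ~2.3× growth claim and the ~160M-to-~373M projection), taking the numbers at face value:

```python
# Consistency check on the fiber-mile projections cited above (figures in
# millions of fiber miles, as quoted in the text).
installed_base = 160      # ~current U.S. installed base (millions of fiber miles)
additional_by_2029 = 214  # FBA-commissioned projection for data centers alone

projected_total = installed_base + additional_by_2029
growth_factor = projected_total / installed_base

print(f"Projected 2029 base: {projected_total}M fiber miles")  # 374M, ~the quoted ~373M
print(f"Growth factor: {growth_factor:.2f}x")                  # ~2.34x, matching the ~2.3x claim
```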

The strategic shift: FTTH made the access network fiber‑rich; AI makes the entire cloud and transport fabric fiber‑hungry.

Strategic implications:

  • AI is now the dominant incremental fiber use case: residential fiber adds subscribers; AI adds orders of magnitude more fibers per site and per route.

  • Network economics are moving from passing more homes to feeding more GPUs: route miles, fiber counts, and connector density are being dimensioned to training clusters and inference fabrics, not household penetration curves.

  • Policy and investment narratives should treat AI inter‑DC and campus fiber as “national infrastructure” on par with last‑mile FTTH, given the scale of projected doubling in route miles and more than doubling in fiber miles by 2029.

In summary, the next decade of fiber innovation and capex will be written less in curb‑side PON and more in ultra‑dense, AI‑centric data centers with internal fiber‑optic fabrics and interconnects.

……………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.corning.com/worldwide/en/about-us/news-events/news-releases/2026/01/corning-and-meta-announce-multiyear-up-to-6-billion-agreement-to-accelerate-us-data-center-buildout.html

Meta Announces Up to $6 Billion Agreement With Corning to Support US Manufacturing

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

Hyper Scale Mega Data Centers: Time is NOW for Fiber Optics to the Compute Server

China’s open source AI models to capture a larger share of 2026 global AI market

Overview of AI Models – China vs U.S.:

Chinese AI models have advanced rapidly and are now contesting with the U.S. for global market leadership. Alibaba’s Qwen-Image-2512 is emerging as a top-performing, free, open-source model capable of high-fidelity human, landscape, and text rendering. Other key competitive models include Zhipu AI’s GLM-Image (trained on domestic chips), ByteDance’s Seedream 4.0, and UNIMO-G.

Today, Alibaba-backed Moonshot AI released an upgrade of its flagship AI model, heating up a domestic arms race ahead of an expected rollout by Chinese AI hotshot DeepSeek. The latest iteration of Moonshot’s Kimi can process text, images, and videos simultaneously from a single prompt, the company said in a statement, aligning with a trend toward so-called omni models pioneered by industry leaders like OpenAI and Alphabet Inc.’s Google.

Moonshot AI Kimi website. Photographer: Raul Ariano/Bloomberg

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Chinese AI models are rapidly narrowing the gap with Western counterparts in quality and accessibility. That shift is forcing U.S. AI leaders like Alphabet’s Google, Microsoft’s Copilot, OpenAI, and Anthropic to fight harder to maintain their technological lead in AI. That’s despite their humongous spending on AI data centers, related AI models, and infrastructure.

In early 2025, investors seized on DeepSeek’s purportedly lean $5.6 million LM training bill as a sign that Nvidia’s high-end GPUs were already a relic and that U.S. hyperscalers had overspent on AI infrastructure. Instead, the opposite dynamic played out: as models became more capable and more efficient, usage exploded, proving out a classic Jevons paradox and validating the massive data-center build-outs by Microsoft, Amazon, and Google.

The real competitive threat from DeepSeek and its peers is now coming from a different direction. Many Chinese foundation models are released as “open source” or “open weight” AI models which makes them effectively free to download, easy to modify, and cheap to run at scale. By contrast, most leading U.S. players keep tight control over their systems, restricting access to paid APIs and higher-priced subscriptions that protect margins but limit diffusion.

That strategic divergence is visible in how these systems are actually used. U.S. models such as Google’s Gemini, Anthropic’s Claude, and OpenAI’s GPT series still dominate frontier benchmarks [1.] and high‑stakes reasoning tasks. But according to a recently published report by OpenRouter, a third-party AI model aggregator, and venture capital firm Andreessen Horowitz, Chinese open-source models have captured roughly 30% of the “working” AI market. They are especially strong in coding support and roleplay-style assistants—where developers and enterprises optimize for cost efficiency, local customization, and deployment freedom rather than raw leaderboard scores.

Note 1. A frontier benchmark for AI models is a high-difficulty evaluation designed to test the absolute limits of artificial intelligence in complex, often unsolved, reasoning tasks. FrontierMath, for example, is a prominent benchmark focusing on expert-level mathematics, requiring AI to solve hundreds of unpublished problems that challenge, rather than merely measure, current capabilities.

China’s open playbook:

China’s more permissive stance on model weights is not just a pricing strategy — it’s an acceleration strategy. Opening weights turns the broader developer community into an extension of the R&D pipeline, allowing users to inspect internals, pressure‑test safety, and push incremental improvements upstream.

As Kyle Miller at Georgetown’s Center for Security and Emerging Technology argues, China is effectively trading away some proprietary control to gain speed and breadth: by letting capability diffuse across the ecosystem, it can partially offset the difficulty of going head‑to‑head with tightly controlled U.S. champions like OpenAI and Anthropic. That diffusion logic is particularly potent in a system where state planners, big tech platforms, and startups are all incentivized to show visible progress in AI.

Even so, the performance gap has not vanished. Estimates compiled by Epoch AI suggest that Chinese models, on average, trail leading U.S. releases by about seven months. The window briefly narrowed during DeepSeek’s R1 launch in early 2025, when it looked like Chinese labs might have structurally compressed the lag; since then, the gap has widened again as U.S. firms have pushed ahead at the frontier.

Capital, chips, and the power problem:

The reason the U.S. lead has held is massive AI infrastructure spending. Consensus forecasts put capital expenditure by largely American hyperscalers at roughly $400 billion in 2025 and more than $520 billion in 2026, according to Goldman Sachs Research. By comparison, UBS analysts estimate that China’s major internet platforms collectively spent only about $57 billion last year—a fraction of U.S. outlays, even if headline Chinese policy rhetoric around AI is more aggressive.

But sustaining that level of investment runs into a physical constraint that can’t be hand‑waved away: electricity. The newest data-center designs draw more than a gigawatt of power each—about the output of a nuclear reactor—turning grid capacity into a strategic bottleneck. China now generates more than twice as much power as the U.S., and its centralized planning system can more readily steer incremental capacity toward AI clusters than America’s fragmented, heavily regulated electricity market.

That asymmetry is already shaping how some on Wall Street frame the race. Christopher Wood, global head of equity strategy at Jefferies, recently reiterated that China’s combination of open‑source models and abundant cheap power makes it a structurally formidable AI competitor. In his view, the “DeepSeek moment” of early last year remains a warning that markets have largely chosen to ignore as they rotate back into U.S. AI mega‑caps.

A fragile U.S. AI advantage:

For now, U.S. companies still control the most important chokepoint in the stack: advanced AI accelerators. Access to Nvidia’s cutting‑edge GPUs remains a decisive advantage. Yesterday, Microsoft announced the Maia 200 chip – its first silicon and system platform optimized specifically for AI inference. The chip was designed for efficiency, both in its ability to deliver tokens per dollar and in performance per watt of power used.

“Maia 200 can deliver 30% better performance per dollar than the latest generation hardware in our fleet today,” Microsoft EVP for Cloud and AI Scott Guthrie wrote in a blog post.

Image Credit: Microsoft

……………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Leading Chinese AI research labs have struggled to match training results using only domestically designed silicon. DeepSeek, which is developing the successor to its flagship model and is widely expected to release it around Lunar New Year, reportedly experimented with chips from Huawei and other local vendors before concluding that performance was inadequate and turning to Nvidia GPUs for at least part of the training run.

That reliance underscores the limits of China’s current self‑reliance push—but it also shouldn’t be comforting to U.S. strategists. Chinese firms are actively working around the hardware gap, not waiting for it to close. DeepSeek’s latest research focuses on training larger models with fewer chips through more efficient memory design, an incremental but important reminder that architectural innovation can partially offset disadvantages in raw compute.

From a technology‑editorial perspective, the underlying story is not simply “China versus the U.S.” at the model frontier. It is a clash between two AI industrial strategies: an American approach that concentrates capital, compute, and control in a handful of tightly integrated platforms, and a Chinese approach that leans on open weights, diffusion, and state‑backed infrastructure to pull the broader ecosystem forward.

The question for 2026 is whether U.S. AI firms’ lead in capability and chips can keep outrunning China’s advantages in openness and power—or whether the market will again wait for a shock like DeepSeek to re‑price that risk.

DeepSeek and Other Chinese AI Models:

DeepSeek published research this month outlining a method of training larger models using fewer chips through a more efficient memory design. “We view DeepSeek’s architecture as a new, promising engineering solution that could enable continued model scaling without a proportional increase in GPU capacity,” wrote UBS analyst Timothy Arcuri.

Export controls haven’t prevented Chinese companies from training advanced models, but challenges emerge when the models are deployed at scale. Zhipu AI, which released its open-weight GLM 4.7 model in December, said this month it was rationing sales of its coding product to 20% of previous capacity after demand from users overwhelmed its servers.

Moonshot, Zhipu AI, and MiniMax Group Inc. are among a handful of front-runners in a hotly contested battle among Chinese large language model makers, which at one point was dubbed the “War of One Hundred Models.”

“I don’t see compute constraints limiting [Chinese companies’] ability to make models that are better and compete near the U.S. frontier,” Georgetown’s Miller says. “I would say compute constraints hit on the wider ecosystem level when it comes to deployment.”

Gaining access to Nvidia AI chips:

U.S. President Donald Trump’s plan to allow Nvidia to sell its H200 chips to China could be pivotal. Alibaba Group and ByteDance, TikTok’s parent company, have privately indicated interest in ordering more than 200,000 units each, Bloomberg reported.  The H200 outperforms any Chinese-produced AI chip, with a roughly 32% processing-power advantage over Huawei’s Ascend 910C.

With access to Nvidia AI chips, Chinese labs could build AI-training supercomputers as capable as American ones, at roughly 50% higher cost, according to the Institute for Progress. Subsidies from the Chinese government could cover that differential, leveling the playing field, the institute says.

Conclusions:

A combination of open-source innovation and loosened chip controls could create a cheaper, more capable Chinese AI ecosystem. The possibility is emerging just as OpenAI and Anthropic consider public stock listings (IPOs) and U.S. hyperscalers such as Microsoft and Meta Platforms face pressure to justify heavy spending.

The risk for U.S. AI leaders is no longer theoretical; China’s open‑weight, low‑cost model ecosystem is already eroding the moat that Google, OpenAI, and Anthropic thought they were building. By prioritizing diffusion over tight control, Chinese firms are seeding a broad developer base, compressing iteration cycles, and normalizing expectations that powerful models should be cheap—or effectively free—to adapt and deploy.

U.S. AI leaders could face pressure on pricing and profit margins from China AI competitors while having to deal with AI infrastructure costs and power constraints. Their AI advantage remains real, but fragile—highly exposed to regulatory shifts, export controls, and any breakthrough in China’s workarounds on hardware and training efficiency. The uncomfortable prospect for U.S. AI incumbents is that they could win the race for the best models and still lose ground in the market if China’s diffusion‑driven strategy defines how AI is actually consumed at scale.

…………………………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.barrons.com/articles/deepseek-ai-gemini-chatgpt-stocks-ccde892c

https://blogs.microsoft.com/blog/2026/01/26/maia-200-the-ai-accelerator-built-for-inference/

https://www.bloomberg.com/news/articles/2026-01-27/china-s-moonshot-unveils-new-ai-model-ahead-of-deepseek-release

https://www.scmp.com/tech/tech-trends/article/3335602/chinas-open-source-models-make-30-global-ai-usage-led-qwen-and-deepseek

China gaining on U.S. in AI technology arms race- silicon, models and research

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Goldman Sachs: Big 3 China telecom operators are the biggest beneficiaries of China’s AI boom via DeepSeek models; China Mobile’s ‘AI+NETWORK’ strategy

Bloomberg: China Lures Billionaires Into Race to Catch U.S. in AI


Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

Executive Summary:

AI integration in 6G specifications (3GPP) and standards (ITU-R IMT 2030) highlights a strategic shift in the telecom industry towards AI-native networks, with telecom industry heavyweights like Huawei, Samsung, Ericsson, and Nokia actively developing foundational technologies. Unlike 5G, where AI and machine learning were limited to applications or add-on features layered over the existing architecture, 6G will incorporate AI from the onset, with an “AI-native” approach in which intelligence allows the network to be smart, agile, and able to learn and adapt to changing network dynamics.

This transformation is necessary because future 6G networks will be too complex for human operators to manage, requiring AI-empowered and learning-driven networks that can facilitate zero-touch network management through capabilities including learning, reasoning, and decision-making.

Key Developments and Analysis:
  • AI-Native Networks: The industry consensus is that 6G will be “AI-native,” meaning artificial intelligence will be built directly into the core functions of network control, resource management, and service orchestration. This moves AI from an optimization layer in 5G to a foundational element in 6G.

AI Native Image Courtesy of Ericsson

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

  • Company Initiatives:
    • Huawei is focused on making AI a native element of the network architecture (AI-native 6G) rather than an overlay technology, integrating communication, sensing, computing, and intelligence. This vision, called “Connected Intelligence,” involves two aspects: AI for 6G (network automation) and 6G for AI (AI as a Service, AIaaS).  More in Huawei Research Areas below.
    • Samsung is a major proponent of AI-RAN (Radio Access Network) technology. The company hosted a summit in November 2025 to showcase working AI-RAN technology that autonomously optimizes network performance and is conducting joint research with SK Telecom (SKT) on AI-supported RAN. Samsung sees vRAN (virtualized RAN) as a key enabler for “AI-native, 6G-ready networks”.
    • Ericsson emphasizes the necessity of a strong 5G Standalone (5G SA) foundation for an AI future, using AI to manage and automate current networks in preparation for 6G’s demands. Ericsson is also integrating agentic AI into its platforms for more autonomous network management.
    • Nokia is deepening its AI push, licensing software to expand AI use in mobile networks and preparing for early field trials in 2026 by porting baseband software to platforms like NVIDIA’s, which opens the door for more advanced AI use cases.
  • Industry Analysis and Trends:
    • Standardization: 2026 is crucial as formal 6G specification work begins in earnest within 3GPP with Release 21. In WP5D, the IMT 2030 RIT/SRIT standardization work will commence at the February 2027 meeting with the final deadline for submissions at the February 2029 meeting.  More in the ITU-R WP5D section below. 
    • The AI-RAN Alliance is an industry initiative (not a traditional SDO) focused on accelerating real-world AI applications and integration within the RAN. It works alongside SDOs, providing industry insights and pushing for rapid validation and testing of AI-RAN technologies, with a specific focus on leveraging accelerated computing.
    • Automation and Efficiency: AI-native algorithms in 6G are expected to deliver extreme spectrum and energy efficiency, significantly reducing operational costs for telcos while improving reliability and performance.
    • Monetization Challenges: Despite the technological promise, analysts caution that 6G remains largely theoretical for now. Some operators are stalling on full 5G SA deployment, waiting to move to 6G-ready cores later in the decade, leading to concerns that 5G SA might become an “odd generation.”
    • Infrastructure Constraints: The physical demands of AI infrastructure, particularly energy consumption and construction timelines, are becoming operational realities that may bound the pace of AI growth in 2026, regardless of software advancements. 
    • ITU-R Working Party (WP) 5D is making AI a native and foundational element of the 6G (IMT-2030) system, rather than the “add-on” or “overlay” status it had in 5G (IMT 2020). This shift is being achieved through the definition of specific AI capabilities and requirements that future 6G technologies must inherently support. In particular:
  • Defining AI as a Core Capability: The Recommendation ITU-R M.2160 (“Framework and overall objectives of the future development of IMT for 2030 and Beyond”) officially defines “Artificial Intelligence and Communication” as one of the six major usage scenarios and an overarching design principle for IMT-2030.
  • Integrating AI into the Radio Interface: WP 5D is actively developing technical performance requirements (TPRs) and evaluation criteria for proposed 6G radio interface technologies (RITs) that inherently incorporate AI/Machine Learning (ML). This includes work on:
    • AI-enabled air interface design: This involves the physical layer, potentially moving towards AI-native physical (PHY) layers that can dynamically adapt waveforms and network parameters in real-time, rather than relying on predefined, static configurations.
    • AI-driven resource management: AI/ML algorithms will be crucial for real-time optimization of spectral and energy efficiency, managing complex traffic, and ensuring Quality of Service (QoS). A toy sketch of this idea appears after this list.
  • Enabling AI-Driven Services: The framework for IMT-2030 is designed to support the full lifecycle of AI components, from data collection and model training to deployment and performance monitoring, enabling new AI-driven services and applications directly within the network infrastructure.
  • Establishing a Formal Timeline: WP 5D has established a clear timeline for 6G standardization, with specific stages for vision, requirements, evaluation methodology, and specifications. This structured approach ensures that all proposed RITs/SRITs are evaluated against the new AI-native requirements, promoting global alignment and preventing AI from becoming a fragmented, proprietary solution.
    • Stage 1 (Vision): Completed in June 2023.
    • Stage 2 (Requirements & Evaluation): Targeted for completion in 2026.
    • Stage 3 (Specifications): Expected by the end of 2030.
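
To illustrate the kind of AI-driven resource management described above, here is a deliberately simple Python sketch. The predictor, parameter values, and allocation rule are invented for illustration only and are not drawn from any ITU-R or 3GPP document:

```python
# Toy illustration of ML-driven radio resource management (hypothetical and
# not from any ITU-R or 3GPP specification): forecast each cell's load from
# recent observations, then re-split a shared spectrum budget accordingly.

TOTAL_PRBS = 273  # physical resource blocks on a 100 MHz NR carrier (30 kHz SCS)

class CellLoadPredictor:
    """Exponentially weighted moving-average load forecast for one cell."""
    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = 0.0

    def update(self, observed_load: float) -> float:
        self.estimate = self.alpha * observed_load + (1 - self.alpha) * self.estimate
        return self.estimate

def allocate_prbs(forecast: dict) -> dict:
    """Split the PRB budget across cells in proportion to predicted load."""
    total = sum(forecast.values()) or 1.0   # guard against division by zero
    return {cell: round(TOTAL_PRBS * load / total) for cell, load in forecast.items()}

# Example: three cells report offered load (arbitrary units) each scheduling epoch.
predictors = {c: CellLoadPredictor() for c in ("cell-A", "cell-B", "cell-C")}
for observed in ({"cell-A": 10, "cell-B": 40, "cell-C": 5},
                 {"cell-A": 12, "cell-B": 55, "cell-C": 4}):
    forecast = {c: predictors[c].update(v) for c, v in observed.items()}
    print(allocate_prbs(forecast))
```

A real AI-native scheduler would replace the moving average with a learned model and run inside the RAN's control loop; the point here is only the forecast-then-allocate pattern.
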
6G, as envisioned in the ITU-R’s IMT-2030 framework, is being designed from the ground up as an “AI-native” system. 
  • Purpose: AI is integral to the entire network lifecycle, from initial design and deployment to autonomous operation and service creation.
  • Integration Level: Intelligence is embedded across all layers of the network stack, including the physical layer (air interface), control plane, and data plane.
  • Scope: AI enables core functionalities such as real-time self-optimization, self-healing capabilities, and dynamic resource allocation, rather than static, predefined configurations.
  • Outcome: The creation of a fully cognitive, self-managing, and highly adaptable “intelligence fabric” capable of supporting advanced use cases like real-time holographic communication, digital twins, and autonomous systems with ultra-low latency. 
Comparing AI as an overlay in 5G (IMT-2020) vs AI native mode in 6G (IMT-2030):

| Feature | 5G (IMT-2020) | 6G (IMT-2030) |
|---|---|---|
| AI Role | Optimization tool (overlay) | Foundational and native element |
| Network Operation | Manual configuration with AI assistance | Autonomous and self-managing |
| Air Interface | Human-designed with some ML optimization | AI/ML-designed and managed |
| Complexity Management | Relies on standard protocols | Manages complexity through embedded AI/ML |
| Services Supported | Enhanced mobile broadband, basic IoT | Integrated AI & Communication, sensing, holographic comms |

By embedding AI into the fundamental design principles and technical requirements of IMT-2030, ITU-R WP 5D is ensuring that 6G is an AI-native network capable of self-management, self-optimization, and supporting a vast ecosystem of AI applications, a significant shift from the supplementary role AI played in 5G.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..
Huawei’s Research Areas and Activities:
  • Agentic-AI Core (A-Core): Huawei unveiled a blueprint for a 6G core network (which will be specified by 3GPP and NOT ITU) where services are managed by specialized AI agents using a large-scale network AI model called “NetGPT”. This allows the network to program, update, and execute its own control procedures automatically without human intervention, based on natural language instructions.
  • Network Architecture Redesign: Huawei proposes the NET4AI system architecture, a service-oriented design that moves beyond the 5G service-based architecture. It introduces a dedicated data plane (DP) to handle the massive volume of data generated by AI and sensing services, enabling flexible and efficient many-to-many data flow for distributed learning and inference.
  • Integrated Sensing and Communication (ISAC): A core pillar of Huawei’s 6G work is the native integration of sensing with communication. This allows the network to use radio waves for high-resolution sensing, localization, and imaging, creating a “digital twin” of the physical world. The large volume of data collected from sensing then serves as a source for AI model training and real-time environmental monitoring.
  • Distributed Machine Learning: Huawei researches deep-edge architecture to enable massive, distributed, and collaborative machine learning (ML). This includes the development of frameworks like a two-level learning architecture that combines federated learning (FL) and split learning (SL) to optimize computing resources and ensure data privacy by keeping raw data local to devices. A minimal federated-learning sketch appears after this list.
  • AI as a Service (AIaaS): The 6G network is designed to provide AI capabilities as a service, allowing the training and inference of large AI models to be distributed across the network (edge and cloud). This offers low-latency performance and access to rich data for AI-driven applications like collaborative robotics and autonomous driving.
  • Energy Efficiency and Sustainability: The company is researching how native AI capabilities can improve overall energy efficiency by up to 100 times compared to 5G. This involves smart energy control, dynamic resource scaling, and optimizing communication paths for lower power consumption.
  • Standardization and White Papers: Huawei is actively contributing to global 6G discussions and standardization bodies like the ITU-R, sharing its vision through publications such as the book 6G: The Next Horizon – From Connected People and Things to Connected Intelligence and various technical white papers. The goal is to define the technical specifications and use cases for 6G that will drive industry-wide innovation by around 2030. 
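
To make the federated-learning approach mentioned above concrete, the following minimal FedAvg sketch shows the core idea of averaging locally trained weights without pooling raw data. This is illustrative only; it is not Huawei's framework, and the data, model, and hyperparameters are invented:

```python
# Minimal federated-averaging (FedAvg) sketch: each device fits a local
# linear model on its private data, and only model weights -- never raw
# data -- are sent to the aggregator.

import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.01, epochs=20):
    """A few steps of local gradient descent on one device's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w = w - lr * grad
    return w

# Three devices, each holding private samples of the same underlying relation.
true_w = np.array([3.0, -2.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                              # federation rounds
    local_ws = [local_train(global_w.copy(), X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)         # server averages the weights

print(global_w)  # converges toward [3, -2] without pooling any raw data
```

Split learning (the other half of the two-level architecture) would instead cut the model between device and edge server; the privacy property is the same: raw data stays local.
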
In summary, the telecom industry is laying the critical groundwork for an AI-native 6G era through research, standards setting, and strategic investments in AI-powered network solutions, even as commercial deployment remains several years away. Decisions must be made on spectrum use (especially in the FR3 range of 7-24 GHz), silicon roadmaps, and network architectures, all of which will have lasting impact.
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.ericsson.com/en/reports-and-papers/white-papers/ai-native

Roles of 3GPP and ITU-R WP 5D in the IMT 2030/6G standards process

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

ITU-R WP 5D Timeline for submission, evaluation process & consensus building for IMT-2030 (6G) RITs/SRITs

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Should Peak Data Rates be specified for 5G (IMT 2020) and 6G (IMT 2030) networks?

GSMA Vision 2040 study identifies spectrum needs during the peak 6G era of 2035–2040

Highlights and Summary of the 2025 Brooklyn 6G Summit

NGMN: 6G Key Messages from a network operator point of view

Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

Nokia Bell Labs and KDDI Research partner for 6G energy efficiency and network resiliency

Deutsche Telekom: successful completion of the 6G-TakeOff project with “3D networks”

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework

KT and LG Electronics to cooperate on 6G technologies and standards, especially full-duplex communications

Highlights of Nokia’s Smart Factory in Oulu, Finland for 5G and 6G innovation

Nokia sees new types of 6G connected devices facilitated by a “3 layer technology stack”

Rakuten Symphony exec: “5G is a failure; breaking the bank; to the extent 6G may not be affordable”

India’s TRAI releases Recommendations on use of Tera Hertz Spectrum for 6G

New ITU report in progress: Technical feasibility of IMT in bands above 100 GHz (92 GHz and 400 GHz)

 

Telecom operators investing in Agentic AI while Self Organizing Network AI market set for rapid growth

Telecom companies are planning to use Agentic AI [1.] for customer experience and network automation. A recent RADCOM survey shows 71% of network operators plan to deploy agentic AI in 2026, while 14% have already begun, prioritizing areas that directly influence trust and customer satisfaction: security and fraud prevention (57%) and customer service and support (56%).  The top use cases are automated customer complaint resolution and autonomous fault resolution.

Operators are betting on agentic AI to remove friction before customers feel it, with the highest-value use cases reflecting this shift, including:

  • 57% – automated customer complaint resolution
  • 54% – autonomous fault resolution before it impacts service
  • 52% – predicting experience to prevent churn

This technology is shifting networks from simply detecting issues to preventing them before customers notice. In contact centers, 2026 is expected to see a rise in human and AI agent collaboration to improve efficiency and customer service.

Note 1.  Agentic AI refers to autonomous artificial intelligence systems that can perceive, reason, plan, and act independently to achieve complex goals with minimal human intervention. Going beyond simple command-response interactions, these systems manage multi-step tasks, use a variety of tools, and adapt to new information to deliver proactive automation in dynamic environments. Such intelligent agents function like digital coworkers, coordinating internally and with other systems to execute sophisticated workflows.
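
To make the definition above concrete, here is a schematic perceive / plan / act loop in Python. This is a generic illustration, not any operator's or vendor's implementation; the event names and remediation steps are invented:

```python
# Schematic perceive -> plan -> act loop behind the "agentic AI" definition
# in Note 1. Events and playbooks are hypothetical.

def perceive(network_events):
    """Pull the next observation from telemetry (here, a simple queue)."""
    return network_events.pop(0) if network_events else None

def plan(event):
    """Map an observation to a multi-step remediation plan."""
    playbooks = {
        "link_degraded": ["reroute_traffic", "open_ticket"],
        "auth_fraud_spike": ["throttle_endpoint", "alert_security"],
    }
    return playbooks.get(event, ["escalate_to_human"])

def act(step):
    print(f"executing: {step}")   # in practice: call OSS/BSS or RAN APIs

events = ["link_degraded", "auth_fraud_spike"]
while (event := perceive(events)) is not None:
    for step in plan(event):      # the agent executes the whole plan autonomously
        act(step)
```

The "agentic" part is that the loop carries a multi-step plan through to completion on its own, rather than answering a single query and stopping.
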

……………………………………………………………………………………………………………………………………………………………………………………………

ResearchAndMarkets.com has just published a “Self-Organizing Network Artificial Intelligence (AI) Global Market Report 2025.” The market research firm forecasts that the self-organizing network AI market [2.] will expand from $5.19 billion in 2024 to $6.18 billion in 2025, at a CAGR of 19.2%. This surge is driven by the integration of machine learning and AI in telecom networks, investment in smart network management, the expansion of 5G, increasing automation demands, and growing demand for features such as self-healing, self-optimization, and predictive maintenance. Opportunities include AI-driven radio resource management (RRM) and predictive maintenance, and Asia-Pacific emerges as the fastest-growing region, boosting telecom innovations amid global trade shifts.

Note 2.  Self-organizing network AI leverages software, hardware, and services to dynamically optimize and manage telecom networks, applicable across various network types and deployment modes. The market encompasses a broad range of solutions, from network optimization software to AI-driven planning products, underscoring its expansive potential.

Looking further ahead, the market is expected to reach $12.32 billion by 2029, with a CAGR of 18.8%. Key drivers during this period include heightened demand for automation, increased 5G deployments, and growing network densification, accompanied by rising data traffic and subscriber numbers. Trends such as AI-driven network automation advancements, machine learning integration for real-time optimization, and the rise of generative AI for analytics are reshaping the landscape.
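
As a quick arithmetic check, the quoted figures fit the standard compound-annual-growth-rate formula; the one-year 2024-to-2025 step computes to roughly 19.1%, matching the report's 19.2% after rounding of the dollar figures:

```python
# Sanity check on the quoted market figures: CAGR = (end/start)**(1/years) - 1.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"{cagr(5.19, 6.18, 1):.1%}")   # 2024 -> 2025: ~19.1% (report: 19.2%)
print(f"{cagr(6.18, 12.32, 4):.1%}")  # 2025 -> 2029: ~18.8% (report: 18.8%)
```
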

The expansion of 5G networks plays a pivotal role in propelling this growth. These networks, characterized by high-speed data and ultra-low latency, significantly enhance the capabilities of self-organizing network AI. The integration facilitates real-time data processing, supporting automation, optimization, and predictive maintenance, thereby improving service quality and user experience. A notable development in 2023 saw UK outdoor 5G coverage rise to 85-93%, reflecting growing demand and technological advancement.

Huawei Technologies and other major tech companies are pioneering innovative solutions like AI-driven radio resource management (RRM), which optimizes network performance and enhances user experience. These solutions rely on AI and machine learning for dynamic spectrum and network resource management. For instance, Huawei’s AI Core Network, introduced at MWC 2025, marks a substantial leap in intelligent telecommunications, integrating AI into core systems for seamless connectivity and real-time decision-making.

Strategic acquisitions are also shaping the market, exemplified by Amdocs Limited acquiring TEOCO Corporation in 2023 to bolster its network optimization and analytics capabilities. This acquisition aims to enhance end-to-end network intelligence and operational efficiency.

Leading players in the market include Huawei, Cisco Systems Inc., Qualcomm Incorporated, and many others, driving innovation and competition. Europe held the largest market share in 2024, with Asia-Pacific poised to be the fastest-growing region through the forecast period.

References:

Operator Priorities for 2026 and Beyond: Data, Automation, Customer Experience

https://uk.finance.yahoo.com/news/self-organizing-network-artificial-intelligence-105400706.html

Ericsson integrates agentic AI into its NetCloud platform for self healing and autonomous 5G private network

Agentic AI and the Future of Communications for Autonomous Vehicles (V2X)

IDC Report: Telecom Operators Turn to AI to Boost EBITDA Margins

Omdia: How telcos will evolve in the AI era

Palo Alto Networks and Google Cloud expand partnership with advanced AI infrastructure and cloud security

Sovereign AI infrastructure for telecom companies: implementation and challenges

Sovereign AI infrastructure refers to the domestic capability of a nation or an organization to own and control the entire technology stack for artificial intelligence (AI) systems within its own borders, subject to local laws and governance. This includes the physical data centers, specialized hardware (like GPUs), software, data, and skilled workforce.  Sovereign AI infrastructure involves a full “stack” designed to ensure national control and reduce reliance on foreign providers. A few key features:

  • Data Governance: Policies and technical controls (e.g., data localization, encryption) to ensure that sensitive data used for training and inference remains within the jurisdiction; a toy policy-guard sketch follows this list.
  • Local Models: Development and hosting of proprietary or locally tailored AI models and software frameworks that align with national values, languages, and ethical standards.
  • Workforce Development: Investing in domestic talent, including data scientists, engineers, and legal experts, to build and maintain the local AI ecosystem.
  • Regulatory Framework: A comprehensive legal and ethical framework for AI development and deployment that ensures compliance with national laws and standards.
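
As a concrete (and purely hypothetical) illustration of the data-localization controls mentioned above, the sketch below shows an application-level residency guard. Real deployments would enforce this at the storage, network, and key-management layers; the region names and record fields are invented:

```python
# Illustrative data-residency guard of the kind sovereign AI policies imply
# (hypothetical policy values and region names, for illustration only).

ALLOWED_REGIONS = {"de-frankfurt", "de-berlin"}   # jurisdiction-approved sites

def store_training_record(record: dict, target_region: str) -> None:
    """Refuse to persist data anywhere outside the approved jurisdiction."""
    if target_region not in ALLOWED_REGIONS:
        raise PermissionError(f"data localization policy blocks {target_region}")
    # ... encrypt at rest and write within the approved region ...
    print(f"stored in {target_region} (encrypted)")

store_training_record({"subscriber_id": "x1", "kpi": 0.97}, "de-frankfurt")
```
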

Why It’s Important – The pursuit of sovereign AI infrastructure is driven by several strategic considerations for both governments and private enterprises:

  • National Security: To ensure that critical systems in defense, intelligence, and public infrastructure are not dependent on potentially adversarial foreign technologies or subject to extraterritorial access laws (like the U.S. CLOUD Act).
  • Economic Competitiveness: To foster a domestic tech industry, create high-skilled jobs, protect intellectual property, and capture the significant economic benefits of AI-driven growth.
  • Data Privacy and Compliance: To comply with stringent local data protection regulations (e.g., GDPR in the EU) and build public trust by ensuring citizen data is handled securely and according to local laws.
  • Cultural Preservation: To train AI models on local datasets and languages, preserving cultural nuances and avoiding bias found in generalized, globally trained models.

Image Credit: Nvidia

………………………………………………………………………………………………………………………………………………………………………………………………………..

Governments around the world are starting to build sovereign AI infrastructure, and a new report from Morningstar DBRS argues that major telecommunications companies are uniquely positioned to benefit from that shift.  Here are a few takeaways from the report:

  • Sovereign AI funding opens a new growth path for telcos – Governments investing in domestic AI infrastructure are increasingly turning to operators, whose network and regulatory strengths position them to capture a large share of this emerging market.
  • Telcos’ capabilities align with sovereignty needs – Their expertise in large-scale networks, local presence, and established government relationships give them an edge over hyperscalers for sensitive, sovereignty-focused AI projects.
  • Early adopters gain advantage – Operators in Canada and Europe are already moving into sovereign AI, positioning themselves to secure higher-margin enterprise and government workloads as national AI buildouts accelerate.

Infrastructure advantages provide a strategic head start for telecommunications companies. Telcos currently manage extensive data centers, fiber optic networks, and computing infrastructure nationwide. Leveraging these established physical assets can significantly reduce the barriers to implementing sovereign AI solutions, contrasting favorably with the greenfield development required by other entities.
The sophisticated data governance expertise within telcos is well-suited for the stringent requirements of sovereign AI. Their decades of experience managing and processing massive datasets have resulted in mature data handling practices directly applicable to the data infrastructure demands of secure, sovereign AI systems.
Furthermore, existing edge computing capabilities offer a distinct competitive advantage. Telecom networks facilitate localized AI processing near data sources while adhering to data residency requirements—a crucial combination for sovereign AI deployments.  This translates to “embedding AI within their network fabric for both optimization and distributed inference,” enabling AI consumption that offers lower latency, reduced cost, and applicability for high-sensitivity use cases in sectors like government and national security.
The opportunity to integrate AI workloads with emerging 5G and 6G infrastructures creates additional strategic value. Sovereign AI represents a pivotal opportunity for telecom operators to position themselves as central players in national AI strategies, evolving their role beyond primary connectivity provisioning.
……………………………………………………………………………………………………………………………………………………………………………….
Implementing sovereign AI presents substantial challenges despite its strategic potential. Key bottlenecks and technical complexities include:
  • Infrastructure Demands: Building robust domestic AI ecosystems requires specialized expertise spanning hardware, software, data governance, and policy.
  • Resource Constraints: Dr. Matt Hasan, CEO at aiRESULTS and a former AT&T executive, highlights specific bottlenecks:
    • Compute Density at Scale.
    • Spectrum Allocation amidst political pressures.
    • Energy Demand exceeding existing grid capacity.
  • Intensified Reliability Requirements: Sovereign AI implementation places heightened demands on telecom providers for system uptime, reliability, quality, and data privacy. This necessitates a focus on efficient power consumption, resilient routing and backups, robust encryption, and comprehensive cybersecurity measures.
  • Supply Chain Vulnerabilities: Geopolitical tensions introduce risks to the supply of critical components such as GPUs and specialized chips, underscoring the interconnected nature of global hardware supply chains.
  • Technology Churn: The rapid evolution of AI technology mandates continuous investment and technical agility to ensure sovereign deployments remain current.
Competitive landscape dynamics:
  • The interplay between global hyperscalers and regional telecom operators is expected to shift.
  • Hasan predicts a collaborative model, with regional telcos leveraging their position as sovereign partners through joint ventures, rather than an outright displacement of hyperscalers.
Ultimately, the objective of sovereign AI is strategic resilience, not complete digital isolation. Nations must judiciously balance sovereignty goals with the advantages of global technological collaboration. For telecom operators, adeptly managing these complexities and investment demands will define sovereign AI’s realization as a viable growth opportunity.
…………………………………………………………………………………………………………………………………………………………………………….

References:

Telcos Across Five Continents Are Building NVIDIA-Powered Sovereign AI Infrastructure

https://dbrs.morningstar.com/research/468155/telecoms-are-well-placed-to-benefit-from-sovereign-ai-infrastructure-plans

How “sovereign AI” could shape telecom

https://www.rcrwireless.com/20251202/ai/sovereign-ai-telcos

Subsea cable systems: the new high-capacity, high-resilience backbone of the AI-driven global network

Analysis: OpenAI and Deutsche Telekom launch multi-year AI collaboration

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Omdia: How telcos will evolve in the AI era

OpenAI announces new open weight, open source GPT models which Orange will deploy

Expose: AI is more than a bubble; it’s a data center debt bomb

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

Custom AI Chips: Powering the next wave of Intelligent Computing

AI spending boom accelerates: Big tech to invest an aggregate of $400 billion in 2025; much more in 2026!

IBM and Groq Partner to Accelerate Enterprise AI Inference Capabilities

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

 
