Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Executive Summary:

In a February 6, 2026 CNBC interview with Scott Wapner, Nvidia CEO Jensen Huang [1.] characterized the current AI build‑out as “the largest infrastructure buildout in human history,” driven by exceptionally high demand for compute from hyperscalers and AI companies. He described AI infrastructure spending as “through the roof” and called this a “once-in-a-generation infrastructure buildout,” specifically highlighting that demand for Nvidia’s Blackwell chips and the upcoming Vera Rubin platform is “sky-high.” He emphasized that the shift from experimental AI to AI as a fundamental utility has reached a definitive inflection point for every major industry.

Huang forecasts that a roughly 7‑ to 8‑year AI investment cycle lies ahead, with capital intensity justified because deployed AI infrastructure is already generating rising cash flows for operators. He maintains that the widely cited ~$660 billion AI data center capex pipeline is sustainable, on the grounds that GPUs and surrounding systems are revenue‑generating assets, not speculative overbuild. In his view, as long as customers can monetize AI workloads profitably, they will “keep multiplying their investments,” which underpins continued multi‑year GPU demand, including for prior‑generation parts that remain fully leased.

Note 1.  As the undisputed leader in AI hardware (GPU chips and networking equipment via its Mellanox acquisition), Nvidia has every incentive to make only positive remarks and forecasts about the AI build‑out boom.  Reader discretion is advised regarding Huang’s extremely bullish, “all-in on AI” remarks.

Huang reiterated that AI will “fundamentally change how we compute everything,” shifting data centers from general‑purpose CPU‑centric architectures to accelerated computing built around GPUs and dense networking. He emphasized Nvidia’s positioning as a full‑stack infrastructure and computing platform provider—chips, systems, networking, and software—rather than a standalone chip vendor.  He stated that Nvidia designs “all components of AI infrastructure” so that system‑level optimization (GPU, NIC, interconnect, software stack) can deliver performance gains that outpace what is possible with a single chip under a slowing Moore’s Law. The installed base is presented as productive: even six‑year‑old A100‑class GPUs are described as fully utilized through leasing, underscoring persistent elasticity of AI compute demand across generations.

AI Poster Children – OpenAI and Anthropic:

Huang praised OpenAI and Anthropic, the two leading artificial intelligence labs, which both use Nvidia chips through cloud providers. Nvidia invested $10 billion in Anthropic last year, and Huang said earlier this week that the chipmaker will invest heavily in OpenAI’s next fundraising round.

“Anthropic is making great money. Open AI is making great money,” Huang said. “If they could have twice as much compute, the revenues would go up four times as much.”

He said that all the graphics processing units that Nvidia has sold in the past — even six-year-old chips such as the A100 — are currently being rented, reflecting sustained demand for AI computing power.

“To the extent that people continue to pay for the AI and the AI companies are able to generate a profit from that, they’re going to keep on doubling, doubling, doubling, doubling,” Huang said.

Economics, utilization, and returns:

On economics, Huang’s central claim is that AI capex converts into recurring, growing revenue streams for cloud providers and AI platforms, which differentiates this cycle from prior overbuilds. He highlights very high utilization: GPUs from multiple generations remain in service, with cloud operators effectively turning them into yield‑bearing infrastructure.

This utilization and monetization profile underlies his view that the capex “arms race” is rational: when AI services are profitable, incremental racks of GPUs, network fabric, and storage can be modeled as NPV‑positive infrastructure projects rather than speculative capacity. He implies that concerns about a near‑term capex cliff are misplaced so long as end‑market AI adoption continues to inflect.
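To make the “NPV‑positive infrastructure” framing concrete, here is a minimal sketch with purely illustrative inputs; the rack cost, rental rate, utilization, opex share, useful life, and discount rate are assumptions for illustration, not figures from the interview:

```python
# Minimal NPV sketch of a leased GPU rack as a revenue-generating asset.
# All inputs are illustrative assumptions, not figures from the interview.

def npv(cash_flows, discount_rate):
    """Net present value of annual cash flows (index 0 = upfront outlay)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

rack_capex = 3_000_000          # assumed cost of a 72-GPU rack-scale system, USD
gpu_hours_per_year = 72 * 8760  # 72 GPUs x hours per year
rental_rate = 2.50              # assumed blended $/GPU-hour
utilization = 0.85              # assumed share of hours actually rented
opex_share = 0.35               # assumed power/cooling/ops as a share of revenue
life_years = 5                  # assumed useful life before refresh
discount = 0.10                 # assumed cost of capital

annual_net = gpu_hours_per_year * rental_rate * utilization * (1 - opex_share)
cash_flows = [-rack_capex] + [annual_net] * life_years

print(f"Annual net cash flow: ${annual_net:,.0f}")
print(f"NPV at {discount:.0%} over {life_years} years: ${npv(cash_flows, discount):,.0f}")
```

Under these assumptions the rack clears its cost of capital, but lower utilization or rental rates quickly flip the sign — which is exactly why Huang’s argument leans so heavily on sustained monetization.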

Competitive and geopolitical context:

Huang acknowledges intensifying global competition in AI chips and infrastructure, including from Chinese vendors such as Huawei, especially under U.S. export controls that have reduced Nvidia’s China revenue share to roughly half of pre‑control levels. He frames Nvidia’s strategy as maintaining an innovation lead so that developers worldwide depend on its leading‑edge AI platforms, which he sees as key to U.S. leadership in the AI race.

He also ties AI infrastructure to national‑scale priorities in energy and industrial policy, suggesting that AI data centers are becoming a foundational layer of economic productivity, analogous to past buildouts in electricity and the internet.

Implications for hyperscalers and chips:

Hyperscalers (and Nvidia customers) Meta, Amazon, Google/Alphabet, and Microsoft recently stated that they plan to dramatically increase spending on AI infrastructure in the years ahead. In total, these hyperscalers could spend $660 billion on capital expenditures in 2026 [2.], with much of that spending going toward buying Nvidia’s chips. Huang’s message to them is that AI data centers are evolving into “AI factories” where each gigawatt of capacity represents tens of billions of dollars of investment spanning land, compute, and networking. He suggests that the hyperscaler industry—roughly a $2.5 trillion sector with about $500 billion in annual capex transitioning from CPU to GPU‑centric generative AI—still has substantial room to run.
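As a back‑of‑the‑envelope check on the scale those figures imply, the sketch below divides the cited 2026 capex pipeline by an assumed all‑in cost per gigawatt (the article only says “tens of billions of dollars” per GW; the $40B figure is an illustrative assumption, not a cited number):

```python
# Back-of-the-envelope scale check. Only the $660B capex figure is cited above;
# the cost-per-gigawatt is an illustrative assumption ("tens of billions").
total_capex_2026 = 660e9      # cited hyperscaler capex pipeline for 2026, USD
capex_per_gw = 40e9           # assumed all-in cost per GW (land, compute, networking)

implied_gw = total_capex_2026 / capex_per_gw
print(f"Implied AI-factory capacity funded in 2026: ~{implied_gw:.1f} GW "
      f"at an assumed ${capex_per_gw / 1e9:.0f}B per GW")
```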

Note 2.  An understated point is that while these hyperscalers are spending hundreds of billions of dollars on AI data centers and Nvidia chips/equipment, they are simultaneously laying off tens of thousands of employees.  For example, Amazon recently announced 16,000 job cuts this year after 14,000 layoffs last October.

From a chip‑level perspective, he argues that Nvidia’s competitive moat stems from tightly integrated hardware, networking, and software ecosystems rather than any single component, positioning the company as the systems architect of AI infrastructure rather than just a merchant GPU vendor.

References:

https://www.cnbc.com/2026/02/06/nvidia-rises-7percent-as-ceo-says-660-billion-capex-buildout-is-sustainable.html

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers

 

 

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Qualcomm is a strong believer in Edge AI as an enabler of faster, more secure, and energy-efficient processing directly on devices—rather than the cloud—unlocking real-time intelligence for industries like robotics and smart cities.

In support of that vision, the fabless SoC company announced the official launch of its Qualcomm AI Program for Innovators (QAIPI) 2026 – APAC, a regional startup incubation initiative that supports startups across Japan, Singapore, and South Korea in advancing the development and commercialization of innovative edge AI solutions.

Building on Qualcomm’s commitment to edge AI innovation, the second edition of QAIPI-APAC invites startups to develop intelligent solutions across a broad range of edge-AI applications using Qualcomm Dragonwing™ and Snapdragon® platforms, together with the new Arduino® UNO Q development board, strengthening their pathway toward global commercialization.

Startups gain comprehensive support and resources, including access to Qualcomm Dragonwing™ and Snapdragon® platforms, the Arduino® UNO Q development board, technical guidance and mentorship, a grant of up to US$10,000, and eligibility for up to US$5,000 in patent filing incentives, accelerating AI product development and deployment.

Applications are open now through April 30, 2026 and will be evaluated based on innovation, technical feasibility, potential societal impact, and commercial relevance. The program will be implemented in two phases. The application phase is open to eligible startups incorporated and registered in Japan, Singapore, or South Korea. Shortlisted startups will enter the mentorship phase, receiving one-on-one guidance, online training, technical support, and access to Qualcomm-powered hardware platforms and development kits for product development. They will also receive a shortlist grant of up to US$10,000 and may be eligible for a patent filing incentive of up to US$5,000. At the conclusion of the program, shortlisted startups may be invited to showcase their innovations at a signature Demo Day in late 2026, engaging with industry leaders, investors, and potential collaborators across the APAC innovation ecosystem.

Comment and Analysis:

Qualcomm is a strong believer in Edge AI—the practice of running AI models directly on devices (smartphones, cars, IoT, PCs) rather than in the cloud—because they view it as the next major technological paradigm shift, overcoming limitations inherent in cloud computing. Despite the challenges of power consumption and processing limitations, Qualcomm’s strategy hinges on specialized, heterogeneous computing rather than relying solely on RISC-based CPU cores.

Key Issues for Qualcomm’s Edge AI solutions:

1.  The “Heterogeneous” Solution to Processing Limits
While it is true that standard CPU cores (even RISC-based ones) are inefficient for AI, Qualcomm does not use them for AI workloads. Instead, they use a heterogeneous architecture:
  • Qualcomm® AI Engine: This combines specialized hardware, including the Hexagon NPU (Neural Processing Unit), Adreno GPU, and CPU. The NPU is specifically designed to handle high-performance, complex AI workloads (like Generative AI) far more efficiently than a generic CPU.
  • Custom Oryon CPU: The latest Snapdragon X Elite platform features customized cores that provide high performance while outperforming traditional x86 solutions in power efficiency for everyday tasks.
2. Overcoming Power Consumption (Performance/Watt)
Qualcomm focuses on “Performance per Watt” rather than raw power.
  • Specialization Saves Power: By using specialized AI engines (NPUs) rather than general-purpose CPU/GPU cores, Qualcomm can run inference tasks at a fraction of the power cost.
  • Lower Overall Energy: Doing AI at the edge can save total energy by avoiding the need to send data to a power-hungry data center, which requires network infrastructure, and then sending it back.
  • Intelligent Efficiency: The Snapdragon 8 Elite, for example, saw a 27% reduction in power consumption while increasing AI performance significantly.
3. Critical Advantages of Edge over Cloud
Qualcomm believes edge is essential because cloud AI cannot solve certain critical problems:
  • Instant Responsiveness (Low Latency): For autonomous vehicles or industrial robotics, a few milliseconds of latency to the cloud can be catastrophic. Edge AI provides real-time, instantaneous analysis.
  • Privacy and Security: Data never leaves the device. This is crucial for privacy-conscious users (biometrics) and compliance (GDPR), which is a major advantage over cloud-based AI.
  • Offline Capability: Edge devices, such as agricultural sensors or smart home devices in remote areas, continue to function without internet connectivity.
4. Market Expansion and Economic Drivers
  • Diversification: With the smartphone market maturing, Qualcomm sees the “Connected Intelligent Edge” as a huge growth opportunity, extending their reach into automotive, IoT, and PCs.
  • “Ecosystem of You”: Qualcomm aims to connect billions of devices, making AI personal and context-aware, rather than generic.
5. Bridging the Gap: Software & Model Optimization
Qualcomm is not just providing hardware; they are simplifying the deployment of AI:
  • Qualcomm AI Hub: This makes it easier for developers to deploy optimized models on Snapdragon devices.
  • Model Optimization: They specialize in making AI models smaller and more efficient (using quantization and specialized AI inference) to run on devices without requiring massive, cloud-sized computing power.
In summary, Qualcomm believes in Edge AI because they are building highly specialized hardware designed to excel within tight power and thermal constraints.
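As a concrete illustration of the model‑optimization point above, the sketch below shows generic symmetric int8 post‑training weight quantization — a textbook example of the technique, not Qualcomm AI Hub or Snapdragon SDK code:

```python
# Minimal sketch of symmetric int8 post-training weight quantization -- the
# kind of model-shrinking step referenced above. Generic illustration only,
# not Qualcomm AI Hub or SDK code.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0          # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # toy layer weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"Memory: {w.nbytes / 1024:.0f} KiB float32 -> {q.nbytes / 1024:.0f} KiB int8")
print(f"Mean abs quantization error: {np.abs(w - w_hat).mean():.5f}")
```

The 4x memory reduction (and the corresponding drop in memory bandwidth and energy per inference) is the basic mechanism that lets large models fit within edge power and thermal budgets.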
……………………………………………………………………………………………………………………………………………………………………………

References:

https://www.prnewswire.com/apac/news-releases/qualcomm-ai-program-for-innovators-2026–apac-officially-kicks-off—empowering-startups-across-japan-singapore-and-south-korea-to-lead-the-ai-innovation-302676025.html

Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Nvidia’s networking solutions give it an edge over competitive AI chip makers

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

 

Analysis: SpaceX FCC filing to launch up to 1M LEO satellites for solar powered AI data centers in space

SpaceX has applied to the Federal Communications Commission (FCC) for permission to launch up to 1 million LEO satellites for a new solar-powered AI data center system in space.  The private company, 40% owned by Elon Musk, envisions an orbital data center system with “unprecedented computing capacity” needed to run large-scale AI inference and applications for billions of users, according to SpaceX’s filing, submitted late on Friday.

Data centers are the physical backbone of artificial intelligence, requiring massive amounts of power. “By directly harnessing near-constant solar power with little operating or maintenance costs, these satellites will achieve transformative cost and energy efficiency while significantly reducing the environmental impact associated with terrestrial data centers,” the FCC filing said. Musk would need the telecom regulator’s approval to move forward.

Credit: Blueee/Alamy Stock Photo

The proposed new satellites would operate in “narrow orbital shells” of up to 50 kilometers each, at altitudes between 500 kilometers and 2,000 kilometers, and at 30‑degree and “sun-synchronous orbit” inclinations to capture power from the sun. The system is designed to be interconnected via optical links with existing Starlink broadband satellites, which would transmit data traffic back to Earth ground stations.

SpaceX’s request bets heavily on reduced costs of Starship, the company’s next-generation reusable rocket under development.  Starship has test-launched 11 times since 2023. Musk expects the rocket, which is crucial for expanding Starlink with more powerful satellites, to put its first payloads into orbit this year.

“Fortunately, the development of fully reusable launch vehicles like Starship that can deploy millions of tons of mass per year to orbit when launching at rate, means on-orbit processing capacity can reach unprecedented scale and speed compared to terrestrial buildouts, with significantly reduced environmental impact,” SpaceX said.

SpaceX is positioning orbital AI compute as the definitive solution to the terrestrial capacity crunch, arguing that space-based infrastructure represents the most efficient path for scaling next-generation workloads. As ground-based data centers face increasing grid density constraints and power delivery limitations, SpaceX intends to leverage high-availability solar irradiation to bypass Earth’s energy bottlenecks. The company’s technical rationale hinges on several key architectural advantages:
  • Energy Density & Sustainability: By tapping into “near-constant solar power,” SpaceX aims to utilize a fraction of the Sun’s output—noting that even a millionth of its energy exceeds current civilizational demand by four orders of magnitude.
  • Thermal Management: To address the cooling requirements of high-density AI clusters, these satellites will utilize radiative heat dissipation, eliminating the water-intensive cooling loops required by terrestrial facilities.
  • Opex & Scalability: The financial viability of this orbital layer is tethered to the Starship launch platform. SpaceX anticipates that the radical reduction in $/kg launch costs provided by a fully reusable heavy-lift vehicle will enable rapid scaling and ensure that, within years, the lowest LCOA (Levelized Cost of AI) will be achieved in orbit.
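The thermal‑management point above can be roughly sized with the Stefan–Boltzmann law. In the sketch below, the per‑cluster IT load, radiator emissivity, and radiator temperature are illustrative assumptions; the FCC filing is not quoted on these numbers:

```python
# Rough radiator sizing for radiative heat rejection in orbit. Uses the
# Stefan-Boltzmann law; the IT load, emissivity, and radiator temperature are
# illustrative assumptions, not figures from the SpaceX filing.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4

it_load_w = 1_000_000    # assumed 1 MW of compute per satellite cluster
emissivity = 0.9         # assumed high-emissivity radiator coating
t_radiator = 320.0       # assumed radiator temperature, K (~47 C)
t_sink = 4.0             # deep-space background temperature, K

flux = emissivity * SIGMA * (t_radiator**4 - t_sink**4)   # W per m^2, one side
area_m2 = it_load_w / flux

print(f"Radiative flux: {flux:.0f} W/m^2")
print(f"Radiator area to reject 1 MW: ~{area_m2:,.0f} m^2 (single-sided)")
```

Even under these fairly generous assumptions, rejecting a megawatt of heat needs on the order of a couple of thousand square meters of radiator, suggesting radiator area (not just solar array area) would be a major design driver for orbital compute platforms.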
The transition to orbital AI compute introduces a fundamental shift in network topology, moving processing from terrestrial hubs to a decentralized, space-based edge layer. The latency implications are characterized by three primary architectural factors:
  • Vacuum-Speed Data Transmission: In a vacuum, light propagates roughly 50% faster than through terrestrial fiber optic cables. By utilizing Starlink’s optical inter-satellite links (OISLs)—a “petabit” laser mesh—data can bypass terrestrial bottlenecks and subsea cables. This potentially reduces intercontinental latency for AI inference to under 50ms, surpassing many long-haul terrestrial routes.
  • Edge-Native Processing & Data Gravity: Current workflows require downlinking massive raw datasets (e.g., Synthetic Aperture Radar imagery) for terrestrial processing, a process that can take hours. Shifting to orbital edge computing allows for “in-situ” AI inference, processing data onboard to deliver actionable insights in minutes rather than hours. This “Space Cloud” architecture eliminates the need to route raw data back to the Earth’s internet backbone, reducing data transmission volumes by up to 90%.
  • LEO Proximity vs. Terrestrial Hops: While terrestrial fiber remains the “gold standard” for short-range latency (typically 1–10ms), it is often hindered by inefficient routing and multiple hops. SpaceX’s LEO constellation, operating at altitudes between 340km and 614km, currently delivers median peak-hour latencies of ~26ms in the US. Future orbital configurations may feature clusters at varying 50km intervals to optimize for specific workload and latency tiers.
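A quick propagation‑delay comparison illustrates the vacuum‑speed point above. The route length, detour factors, and fiber refractive index below are illustrative assumptions, and switching and queuing delays are ignored:

```python
# Propagation-delay comparison: terrestrial fiber vs. a vacuum optical path.
# Route length and detour factors are illustrative; only straight-line physics.

C = 299_792.458          # speed of light in vacuum, km/s
N_FIBER = 1.468          # typical refractive index of silica fiber

def one_way_ms(distance_km: float, speed_km_s: float) -> float:
    return distance_km / speed_km_s * 1000

route_km = 10_800        # rough New York-to-Tokyo great-circle distance, km
fiber_ms = one_way_ms(route_km * 1.3, C / N_FIBER)   # assume ~30% routing detour
laser_ms = one_way_ms(route_km * 1.1, C)             # LEO laser mesh, modest detour

print(f"Terrestrial fiber (with detour): {fiber_ms:.1f} ms one way")
print(f"LEO optical mesh (vacuum):       {laser_ms:.1f} ms one way")
print(f"Vacuum vs in-fiber light speed:  {N_FIBER:.3f}x (~{(N_FIBER - 1) * 100:.0f}% faster)")
```

The ~47% speed advantage of light in vacuum, plus straighter paths, is what makes the sub‑50 ms intercontinental figure plausible on long routes, even though short terrestrial hops remain faster.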

………………………………………………………………………………………………………………………………………………………………………………………

The SpaceX FCC filing on Friday follows an exclusive report by Reuters that Elon Musk is considering merging SpaceX with his xAI (Grok chatbot) company ahead of an IPO later this year. Under the proposed merger, shares of xAI would be exchanged for shares in SpaceX. Two entities have been set up in Nevada to facilitate the transaction, Reuters said.  Musk also runs electric automaker Tesla, tunnel company The Boring Co. and neurotechnology company Neuralink.

………………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.reuters.com/business/aerospace-defense/spacex-seeks-fcc-nod-solar-powered-satellite-data-centers-ai-2026-01-31/

https://www.lightreading.com/satellite/spacex-seeks-fcc-approval-for-mega-ai-data-center-constellation

https://www.reuters.com/world/musks-spacex-merger-talks-with-xai-ahead-planned-ipo-source-says-2026-01-29/

Google’s Project Suncatcher: a moonshot project to power ML/AI compute from space

Blue Origin announces TeraWave – satellite internet rival for Starlink and Amazon Leo

China ITU filing to put ~200K satellites in low earth orbit while FCC authorizes 7.5K additional Starlink LEO satellites

Amazon Leo (formerly Project Kuiper) unveils satellite broadband for enterprises; Competitive analysis with Starlink

Telecoms.com’s survey: 5G NTNs to highlight service reliability and network redundancy

 

Huge significance of EchoStar’s AWS-4 spectrum sale to SpaceX

U.S. BEAD overhaul to benefit Starlink/SpaceX at the expense of fiber broadband providers

Telstra selects SpaceX’s Starlink to bring Satellite-to-Mobile text messaging to its customers in Australia

SpaceX launches first set of Starlink satellites with direct-to-cell capabilities

AST SpaceMobile to deliver U.S. nationwide LEO satellite services in 2026

GEO satellite internet from HughesNet and Viasat can’t compete with LEO Starlink in speed or latency

How will fiber and equipment vendors meet the increased demand for fiber optics in 2026 due to AI data center buildouts?

Subsea cable systems: the new high-capacity, high-resilience backbone of the AI-driven global network

Fiber Optic Boost: Corning and Meta in multiyear $6 billion deal to accelerate U.S data center buildout

Corning Incorporated and Meta Platforms, Inc. (previously known as Facebook) have entered a multiyear agreement valued at up to $6 billion. This strategic collaboration aims to accelerate the deployment of cutting-edge data center infrastructure within the U.S. to bolster Meta’s advanced applications, technologies, and ambitious artificial intelligence initiatives.   The agreement specifies that Corning will furnish Meta with its latest advancements in optical fiber, cable, and comprehensive connectivity solutions. As part of this commitment, Corning plans to significantly scale its manufacturing capabilities across its North Carolina facilities.

A key element of this expansion is a substantial capacity increase at its fiber optic cable manufacturing plant in Hickory, NC, for which Meta will serve as the foundational anchor customer.  The construction and operation of these data centers — critical infrastructure that supports Meta’s technologies and its push toward personalized superintelligence — requires robust server and hardware systems designed to facilitate information transfer and connectivity with minimal latency. Fiber optic cabling is a cornerstone component for enabling this high-speed, near real-time connectivity, powering applications from sophisticated wearable technology like the Ray-Ban Meta AI glasses to the global connectivity services utilized by billions of individuals and enterprises.

“This long-term partnership with Meta reflects Corning’s commitment to develop, innovate, and manufacture the critical technologies that power next-generation data centers here in the U.S.,” said Wendell P. Weeks, Chairman and Chief Executive Officer, Corning Incorporated. “The investment will expand our manufacturing footprint in North Carolina, support an increase in Corning’s employment levels in the state by 15 to 20 percent, and help sustain a highly skilled workforce of more than 5,000 — including the scientists, engineers, and production teams at two of the world’s largest optical fiber and cable manufacturing facilities. Together with Meta, we’re strengthening domestic supply chains and helping ensure that advanced data centers are built using U.S. innovation and advanced manufacturing.”

Meta is expanding its commitment to build industry-leading data centers in the U.S. and to source advanced technology made domestically.  Here are two quotes from them:

  1. “Building the most advanced data centers in the U.S. requires world-class partners and American manufacturing,” said Joel Kaplan, Chief Global Affairs Officer at Meta. “We’re proud to partner with Corning – a company with deep expertise in optical connectivity and commitment to domestic manufacturing – for the high-performance fiber optic cables our AI infrastructure needs. This collaboration will help create good-paying, skilled U.S. jobs, strengthen local economies, and help secure the U.S. lead in the global AI race.”
  2. “As digital tools and generative AI continue to transform our economy — in fields like healthcare, finance, agriculture, and more — the demand for fiber connectivity will continue to grow. By supporting American companies like Corning and building and operating data centers in America, we’re helping ensure that our nation maintains its competitive edge in the digital economy and the global race for AI leadership.”

Key elements of the agreement:

  • Multiyear, up to $6 billion commitment.
  • Corning to supply latest generation optical fiber, cable and connectivity products designed to meet the density and scale demands of advanced AI data centers.
  • New optical cable manufacturing facility in Hickory, North Carolina, in addition to expanded production capacity across Corning’s North Carolina operations.
  • Agreement supports Corning’s projected employment growth in North Carolina by 15 to 20 percent, sustaining a skilled workforce of more than 5,000 employees in the state, including thousands of jobs tied to two of the world’s largest optical fiber and cable manufacturing facilities.

…………………………………………………………………………………………………………………………………………………………….

Comment and Analysis:

Corning’s “up to $6 billion” Meta agreement is essentially a long‑term, anchor‑tenant bet that AI‑era data centers will be fundamentally more fiber‑intensive than legacy cloud data centers, with Corning positioning itself as the default U.S. optical plant for Meta’s buildout through ~2030.  In practice, this deal is a long‑term, take‑or‑pay‑style capacity lock that de‑risks Corning’s capex while giving Meta priority access to scarce, high‑performance, data‑center‑grade fiber and cabling.

AI data centers are becoming the new FTTH in the sense that hyperscale AI buildouts are now the primary structural driver of incremental fiber demand, design innovation, and capex prioritization—but with far higher fiber intensity per site and far tighter performance constraints than residential access ever imposed.

Why “AI Data Centers are the new FTTH” for fiber optic vendors:

For fiber‑optic vendors, AI data centers now play the role that FTTH did in the 2005–2015 cycle: the anchor use case that justifies new glass, cable, and connectivity capacity.

  • AI‑optimized data centers need 2–4× more fiber cabling than traditional hyperscalers, and in some designs more than 10×, driven by massively parallel GPU fabrics and east–west traffic.

  • U.S. hyperscale capacity is expected to triple by 2029, forcing roughly a 2× increase in fiber route miles and a 2.3× increase in total fiber miles, a demand shock comparable to or larger than the early FTTH boom but concentrated in fewer, much larger customers.

  • This is already reshaping product roadmaps toward ultra‑high‑fiber‑count (UHFC) cable, bend‑insensitive fiber, and very‑small‑form‑factor connectors to handle hundreds to thousands of fibers per rack and per duct.

In other words, where FTTH once dictated volume and economies of scale, AI data centers now dictate density, performance, and margin mix.

Carrier‑infrastructure: from access to fabric:

From a carrier perspective, the “new FTTH” analogy is about what drives long‑haul and metro planning: instead of last‑mile penetration, it’s AI fabric connectivity and east–west inter‑DC routes.

  • Each new hyperscale/AI data center is modeled to require on the order of 135 new fiber route miles just to reach three core network interconnection points, plus additional miles for new long‑haul routes and capacity upgrades.

  • An FBA‑commissioned study projects U.S. data centers alone will need on the order of 214 million additional fiber miles by 2029, nearly doubling the installed base from ~160M to ~373M fiber miles; that is the new “build everywhere” narrative operators once used for FTTH.

  • Carriers now plan backbone routes, ILAs, and regional rings around dense clusters of AI campuses, treating them as primary traffic gravity wells rather than as just a handful of peering sites at the edge of a consumer broadband network.
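A quick arithmetic check shows the cited figures hang together (apart from a small rounding difference):

```python
# Consistency check on the FBA-commissioned fiber-mile projections cited above.

installed_today = 160e6       # ~160M fiber miles installed today (cited)
additional_by_2029 = 214e6    # projected additional fiber miles by 2029 (cited)
projected_2029 = installed_today + additional_by_2029

growth = projected_2029 / installed_today
print(f"Projected installed base by 2029: {projected_2029 / 1e6:.0f}M fiber miles "
      f"(article cites ~373M)")
print(f"Growth vs today: {growth:.2f}x total fiber miles")
```

The ~2.3x growth in total fiber miles matches the figure quoted earlier for the hyperscale-driven demand shock.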

The strategic shift: FTTH made the access network fiber‑rich; AI makes the entire cloud and transport fabric fiber‑hungry.

Strategic implications:

  • AI is now the dominant incremental fiber use case: residential fiber adds subscribers; AI adds orders of magnitude more fibers per site and per route.

  • Network economics are moving from passing more homes to feeding more GPUs: route miles, fiber counts, and connector density are being dimensioned to training clusters and inference fabrics, not household penetration curves.

  • Policy and investment narratives should treat AI inter‑DC and campus fiber as “national infrastructure” on par with last‑mile FTTH, given the scale of projected doubling in route miles and more than doubling in fiber miles by 2029.

In summary, the next decade of fiber innovation and capex will be written less in curb‑side PON and more in ultra‑dense, AI‑centric data centers with internal optical fiber fabrics and interconnects.

……………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.corning.com/worldwide/en/about-us/news-events/news-releases/2026/01/corning-and-meta-announce-multiyear-up-to-6-billion-agreement-to-accelerate-us-data-center-buildout.html

Meta Announces Up to $6 Billion Agreement With Corning to Support US Manufacturing

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

Hyper Scale Mega Data Centers: Time is NOW for Fiber Optics to the Compute Server

China’s open source AI models to capture a larger share of 2026 global AI market

Overview of AI Models – China vs U.S. :

Chinese AI models, including language models (LMs) and image generators, have advanced rapidly and are now contesting global market leadership with the U.S.  Alibaba’s Qwen-Image-2512 is emerging as a top-performing, free, open-source model capable of high-fidelity human, landscape, and text rendering. Other key, competitive models include Zhipu AI’s GLM-Image (trained on domestic chips), ByteDance’s Seedream 4.0, and UNIMO-G.

Today, Alibaba-backed Moonshot AI released an upgrade of its flagship AI model, heating up a domestic arms race ahead of an expected rollout by Chinese AI hotshot DeepSeek. The latest iteration of Moonshot’s Kimi can process text, images, and videos simultaneously from a single prompt, the company said in a statement, aligning with a trend toward so-called omni models pioneered by industry leaders like OpenAI and Alphabet Inc.’s Google.

Moonshot AI Kimi website. Photographer: Raul Ariano/Bloomberg

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Chinese AI models are rapidly narrowing the gap with Western counterparts in quality and accessibility.  That shift is forcing U.S. AI leaders like Alphabet’s Google, Microsoft’s Copilot, OpenAI, and Anthropic to fight harder to maintain their technological lead in AI, despite their enormous spending on AI data centers, related AI models, and infrastructure.

In early 2025, investors seized on DeepSeek’s purportedly lean $5.6 million LM training bill as a sign that Nvidia’s high-end GPUs were already a relic and that U.S. hyperscalers had overspent on AI infrastructure. Instead, the opposite dynamic played out: as models became more capable and more efficient, usage exploded, proving out a classic Jevons’ Paradox and validating the massive data-center build-outs by Microsoft, Amazon, and Google.

The real competitive threat from DeepSeek and its peers is now coming from a different direction. Many Chinese foundation models are released as “open source” or “open weight” AI models which makes them effectively free to download, easy to modify, and cheap to run at scale. By contrast, most leading U.S. players keep tight control over their systems, restricting access to paid APIs and higher-priced subscriptions that protect margins but limit diffusion.

That strategic divergence is visible in how these systems are actually used. U.S. models such as Google’s Gemini, Anthropic’s Claude, and OpenAI’s GPT series still dominate frontier benchmarks [1′] and high‑stakes reasoning tasks. However, according to a recently published report by OpenRouter, a third-party AI model aggregator, and venture capital firm Andreessen Horowitz, Chinese open-source models have captured roughly 30% of the “working” AI market. They are especially strong in coding support and roleplay-style assistants—where developers and enterprises optimize for cost efficiency, local customization, and deployment freedom rather than raw leaderboard scores.

Note 1. A frontier benchmark for AI models is a high-difficulty evaluation designed to test the absolute limits of artificial intelligence in complex, often unsolved, reasoning tasks. FrontierMath, for example, is a prominent benchmark focusing on expert-level mathematics, requiring AI to solve hundreds of unpublished problems that challenge, rather than merely measure, current capabilities.

China’s open playbook:

China’s more permissive stance on model weights is not just a pricing strategy — it’s an acceleration strategy. Opening weights turns the broader developer community into an extension of the R&D pipeline, allowing users to inspect internals, pressure‑test safety, and push incremental improvements upstream.

As Kyle Miller at Georgetown’s Center for Security and Emerging Technology argues, China is effectively trading away some proprietary control to gain speed and breadth: by letting capability diffuse across the ecosystem, it can partially offset the difficulty of going head‑to‑head with tightly controlled U.S. champions like OpenAI and Anthropic. That diffusion logic is particularly potent in a system where state planners, big tech platforms, and startups are all incentivized to show visible progress in AI.

Even so, the performance gap has not vanished. Estimates compiled by Epoch AI suggest that Chinese models, on average, trail leading U.S. releases by about seven months. The window briefly narrowed during DeepSeek’s R1 launch in early 2025, when it looked like Chinese labs might have structurally compressed the lag; since then, the gap has widened again as U.S. firms have pushed ahead at the frontier.

Capital, chips, and the power problem:

The reason the U.S. lead has held is massive AI infrastructure spending. Consensus forecasts put capital expenditure by largely American hyperscalers at roughly $400 billion in 2025 and more than $520 billion in 2026, according to Goldman Sachs Research. By comparison, UBS analysts estimate that China’s major internet platforms collectively spent only about $57 billion last year—a fraction of U.S. outlays, even if headline Chinese policy rhetoric around AI is more aggressive.

But sustaining that level of investment runs into a physical constraint that can’t be hand‑waved away: electricity. The newest data-center designs draw more than a gigawatt of power each—about the output of a nuclear reactor—turning grid capacity into a strategic bottleneck. China now generates more than twice as much power as the U.S., and its centralized planning system can more readily steer incremental capacity toward AI clusters than America’s fragmented, heavily regulated electricity market.

That asymmetry is already shaping how some on Wall Street frame the race. Christopher Wood, global head of equity strategy at Jefferies, recently reiterated that China’s combination of open‑source models and abundant cheap power makes it a structurally formidable AI competitor. In his view, the “DeepSeek moment” of early last year remains a warning that markets have largely chosen to ignore as they rotate back into U.S. AI mega‑caps.

A fragile U.S. AI advantage:

For now, U.S. companies still control the most important chokepoint in the stack: advanced AI accelerators. Access to Nvidia’s cutting‑edge GPUs remains a decisive advantage.  Yesterday, Microsoft announced the Maia 200 chip – its first silicon and system platform optimized specifically for AI inference.  The chip was designed for efficiency, both in tokens delivered per dollar and in performance per watt of power used.

“Maia 200 can deliver 30% better performance per dollar than the latest generation hardware in our fleet today,” Microsoft EVP for Cloud and AI Scott Guthrie wrote in a blog post.

Image Credit: Microsoft

……………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Leading Chinese AI research labs have struggled to match training results using only domestically designed silicon. DeepSeek, which is developing the successor to its flagship model and is widely expected to release it around Lunar New Year, reportedly experimented with chips from Huawei and other local vendors before concluding that performance was inadequate and turning to Nvidia GPUs for at least part of the training run.

That reliance underscores the limits of China’s current self‑reliance push—but it also shouldn’t be comforting to U.S. strategists. Chinese firms are actively working around the hardware gap, not waiting for it to close. DeepSeek’s latest research focuses on training larger models with fewer chips through more efficient memory design, an incremental but important reminder that architectural innovation can partially offset disadvantages in raw compute.

From a technology‑editorial perspective, the underlying story is not simply “China versus the U.S.” at the model frontier. It is a clash between two AI industrial strategies: an American approach that concentrates capital, compute, and control in a handful of tightly integrated platforms, and a Chinese approach that leans on open weights, diffusion, and state‑backed infrastructure to pull the broader ecosystem forward.

The question for 2026 is whether U.S. AI firms’ lead in capability and chips can keep outrunning China’s advantages in openness and power—or whether the market will again wait for a shock like DeepSeek to re‑price that risk.

DeepSeek and Other Chinese AI Models:

DeepSeek published research this month outlining a method of training larger models using fewer chips through a more efficient memory design. “We view DeepSeek’s architecture as a new, promising engineering solution that could enable continued model scaling without a proportional increase in GPU capacity,” wrote UBS analyst Timothy Arcuri.

Export controls haven’t prevented Chinese companies from training advanced models, but challenges emerge when the models are deployed at scale. Zhipu AI, which released its open-weight GLM 4.7 model in December, said this month it was rationing sales of its coding product to 20% of previous capacity after demand from users overwhelmed its servers.

Moonshot, Zhipu AI and MiniMax Group Inc are among a handful of AI LM front-runners in a hotly contested battle among Chinese large language model makers, which at one point was dubbed the “War of One Hundred Models.”

“I don’t see compute constraints limiting [Chinese companies’] ability to make models that are better and compete near the U.S. frontier,” Georgetown’s Miller says. “I would say compute constraints hit on the wider ecosystem level when it comes to deployment.”

Gaining access to Nvidia AI chips:

U.S. President Donald Trump’s plan to allow Nvidia to sell its H200 chips to China could be pivotal. Alibaba Group and ByteDance, TikTok’s parent company, have privately indicated interest in ordering more than 200,000 units each, Bloomberg reported.  The H200 outperforms any Chinese-produced AI chip, with a roughly 32% processing-power advantage over Huawei’s Ascend 910C.

With access to Nvidia AI chips, Chinese labs could build AI-training supercomputers as capable as American ones, but at roughly 50% higher cost than U.S.-built systems, according to the Institute for Progress. Subsidies from the Chinese government could cover that differential, leveling the playing field, the institute says.
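Taken together, the cited ratios imply roughly the following; the costs below are normalized placeholders, and only the ~32% performance gap and ~50% cost premium come from the article:

```python
# Illustrative cost-per-compute comparison using the ratios cited above.
# Absolute costs are normalized placeholders, not real prices.

h200_perf = 1.32        # relative processing power (cited ~32% H200 advantage)
ascend_perf = 1.00
us_cluster_cost = 1.00  # normalized cost of a U.S.-built training cluster
cn_cluster_cost = 1.50  # equal capability at ~50% higher cost (cited)

print(f"Ascend 910C chips needed to match one H200: ~{h200_perf / ascend_perf:.2f}")
print(f"Cost premium for an equal-capability Chinese cluster: "
      f"{cn_cluster_cost / us_cluster_cost - 1:.0%}")
print(f"Subsidy share needed to erase that premium: "
      f"{(cn_cluster_cost - us_cluster_cost) / cn_cluster_cost:.0%}")
```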

Conclusions:

A combination of open-source innovation and loosened chip controls could create a cheaper, more capable Chinese AI ecosystem. The possibility is emerging just as OpenAI and Anthropic consider public stock listings (IPOs) and U.S. hyperscalers such as Microsoft and Meta Platforms face pressure to justify heavy spending.

The risk for U.S. AI leaders is no longer theoretical; China’s open‑weight, low‑cost model ecosystem is already eroding the moat that Google, OpenAI, and Anthropic thought they were building. By prioritizing diffusion over tight control, Chinese firms are seeding a broad developer base, compressing iteration cycles, and normalizing expectations that powerful models should be cheap—or effectively free—to adapt and deploy.

U.S. AI leaders could face pressure on pricing and profit margins from China AI competitors while having to deal with AI infrastructure costs and power constraints. Their AI advantage remains real, but fragile—highly exposed to regulatory shifts, export controls, and any breakthrough in China’s workarounds on hardware and training efficiency. The uncomfortable prospect for U.S. AI incumbents is that they could win the race for the best models and still lose ground in the market if China’s diffusion‑driven strategy defines how AI is actually consumed at scale.

…………………………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.barrons.com/articles/deepseek-ai-gemini-chatgpt-stocks-ccde892c

https://blogs.microsoft.com/blog/2026/01/26/maia-200-the-ai-accelerator-built-for-inference/

https://www.bloomberg.com/news/articles/2026-01-27/china-s-moonshot-unveils-new-ai-model-ahead-of-deepseek-release

https://www.scmp.com/tech/tech-trends/article/3335602/chinas-open-source-models-make-30-global-ai-usage-led-qwen-and-deepseek

China gaining on U.S. in AI technology arms race- silicon, models and research

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Goldman Sachs: Big 3 China telecom operators are the biggest beneficiaries of China’s AI boom via DeepSeek models; China Mobile’s ‘AI+NETWORK’ strategy

Bloomberg: China Lures Billionaires Into Race to Catch U.S. in AI

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

Executive Summary:

AI integration in 6G specifications (3GPP) and standards (ITU-R IMT 2030) highlights a strategic shift in the telecom industry towards AI-native networks, with telecom industry heavyweights like Huawei, Samsung, Ericsson, and Nokia actively developing foundational technologies. Unlike 5G, where AI and machine learning were limited to applications or add-on features layered over the existing architecture, 6G will incorporate AI from the onset with an “AI native” approach in which intelligence allows the network to be smart, agile, and able to learn and adapt to changing network dynamics.

This transformation is necessary because future 6G networks will be too complex for human operators to manage, requiring AI-empowered and learning-driven networks that can facilitate zero-touch network management through capabilities including learning, reasoning, and decision-making.

Key Developments and Analysis:
  • AI-Native Networks: The industry consensus is that 6G will be “AI-native,” meaning artificial intelligence will be built directly into the core functions of network control, resource management, and service orchestration. This moves AI from an optimization layer in 5G to a foundational element in 6G.

AI Native Image Courtesy of Ericsson

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

  • Company Initiatives:
    • Huawei is focused on making AI a native element of the network architecture (AI-native 6G) rather than an overlay technology, integrating communication, sensing, computing, and intelligence. This vision, called “Connected Intelligence,” involves two aspects: AI for 6G (network automation) and 6G for AI (AI as a Service, AIaaS).  More in Huawei Research Areas below.
    • Samsung is a major proponent of AI-RAN (Radio Access Network) technology. The company hosted a summit in November 2025 to showcase working AI-RAN technology that autonomously optimizes network performance and is conducting joint research with SK Telecom (SKT) on AI-supported RAN. Samsung sees vRAN (virtualized RAN) as a key enabler for “AI-native, 6G-ready networks”.
    • Ericsson emphasizes the necessity of a strong 5G Standalone (5G SA) foundation for an AI future, using AI to manage and automate current networks in preparation for 6G’s demands. Ericsson is also integrating agentic AI into its platforms for more autonomous network management.
    • Nokia is deepening its AI push, licensing software to expand AI use in mobile networks and preparing for early field trials in 2026 by porting baseband software to platforms like NVIDIA’s, which opens the door for more advanced AI use cases.
  • Industry Analysis and Trends:
    • Standardization: 2026 is crucial as formal 6G specification work begins in earnest within 3GPP with Release 21. In WP5D, the IMT 2030 RIT/SRIT standardization work will commence at the February 2027 meeting with the final deadline for submissions at the February 2029 meeting.  More in the ITU-R WP5D section below. 
    • The AI-RAN Alliance is an industry initiative (not a traditional SDO) focused on accelerating real-world AI applications and integration within the RAN. It works alongside SDOs, providing industry insights and pushing for rapid validation and testing of AI-RAN technologies, with a specific focus on leveraging accelerated computing.
    • Automation and Efficiency: AI-native algorithms in 6G are expected to deliver extreme spectrum and energy efficiency, significantly reducing operational costs for telcos while improving reliability and performance.
    • Monetization Challenges: Despite the technological promise, analysts caution that 6G remains largely theoretical for now. Some operators are stalling on full 5G SA deployment, waiting to move to 6G-ready cores later in the decade, leading to concerns that 5G SA might become an “odd generation.”
    • Infrastructure Constraints: The physical demands of AI infrastructure, particularly energy consumption and construction timelines, are becoming operational realities that may bound the pace of AI growth in 2026, regardless of software advancements. 
    • ITU-R Working Party (WP) 5D is making AI a native and foundational element of the 6G (IMT-2030) system, rather than the “add-on” or “overlay” status it had in 5G (IMT 2020). This shift is being achieved through the definition of specific AI capabilities and requirements that future 6G technologies must inherently support. In particular:
  • Defining AI as a Core Capability: The Recommendation ITU-R M.2160 (“Framework and overall objectives of the future development of IMT for 2030 and Beyond”) officially defines “Artificial Intelligence and Communication” as one of the six major usage scenarios and an overarching design principle for IMT-2030.
  • Integrating AI into the Radio Interface: WP 5D is actively developing technical performance requirements (TPRs) and evaluation criteria for proposed 6G radio interface technologies (RITs) that inherently incorporate AI/Machine Learning (ML). This includes work on:
    • AI-enabled air interface design: This involves the physical layer, potentially moving towards AI-native physical (PHY) layers that can dynamically adapt waveforms and network parameters in real-time, rather than relying on predefined, static configurations.
    • AI-driven resource management: AI/ML algorithms will be crucial for real-time optimization of spectral and energy efficiency, managing complex traffic, and ensuring Quality of Service (QoS).
  • Enabling AI-Driven Services: The framework for IMT-2030 is designed to support the full lifecycle of AI components, from data collection and model training to deployment and performance monitoring, enabling new AI-driven services and applications directly within the network infrastructure.
  • Establishing a Formal Timeline: WP 5D has established a clear timeline for 6G standardization, with specific stages for vision, requirements, evaluation methodology, and specifications. This structured approach ensures that all proposed RITs/SRITs are evaluated against the new AI-native requirements, promoting global alignment and preventing AI from becoming a fragmented, proprietary solution.
    • Stage 1 (Vision): Completed in June 2023.
    • Stage 2 (Requirements & Evaluation): Targeted for completion in 2026.
    • Stage 3 (Specifications): Expected by the end of 2030.
6G, as envisioned in the ITU-R’s IMT-2030 framework, is being designed from the ground up as an “AI-native” system. 
  • Purpose: AI is integral to the entire network lifecycle, from initial design and deployment to autonomous operation and service creation.
  • Integration Level: Intelligence is embedded across all layers of the network stack, including the physical layer (air interface), control plane, and data plane.
  • Scope: AI enables core functionalities such as real-time self-optimization, self-healing capabilities, and dynamic resource allocation, rather than static, predefined configurations.
  • Outcome: The creation of a fully cognitive, self-managing, and highly adaptable “intelligence fabric” capable of supporting advanced use cases like real-time holographic communication, digital twins, and autonomous systems with ultra-low latency. 
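To make “self‑optimization” less abstract, here is a deliberately toy closed‑loop sketch in which an agent tunes a single radio parameter against a synthetic utility function. The objective and numbers are invented for illustration and do not come from any 3GPP or ITU‑R specification:

```python
# Toy closed-loop "self-optimization" sketch: an agent nudges one radio
# parameter (transmit power) to maximize a synthetic utility that trades
# throughput against energy. Purely illustrative; not from any specification.
import math

def utility(tx_power_dbm: float) -> float:
    """Synthetic objective: diminishing throughput gain minus an energy penalty."""
    throughput = math.log1p(tx_power_dbm)          # diminishing returns with power
    energy_penalty = 0.02 * tx_power_dbm ** 1.5    # superlinear energy cost
    return throughput - energy_penalty

power = 20.0                                       # starting transmit power, dBm
step = 1.0
for _ in range(50):                                # hill-climbing control loop
    up, down = utility(power + step), utility(power - step)
    if up > utility(power):
        power += step
    elif down > utility(power):
        power -= step
    else:
        step /= 2                                  # shrink step near the optimum

print(f"Converged transmit power: {power:.2f} dBm, utility {utility(power):.3f}")
```

In an AI-native network, loops of this kind (with learned rather than hand-written objectives) would run continuously and autonomously across many parameters, rather than being applied as an after-the-fact optimization tool.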
Comparing AI as an overlay in 5G (IMT 2020) vs AI native mode in 6G (IMT 2030):
Feature | 5G (IMT-2020) | 6G (IMT-2030)
AI Role | Optimization tool (overlay) | Foundational and native element
Network Operation | Manual configuration with AI assistance | Autonomous and self-managing
Air Interface | Human-designed with some ML optimization | AI/ML-designed and managed
Complexity Management | Relies on standard protocols | Manages complexity through embedded AI/ML
Services Supported | Enhanced mobile broadband, basic IoT | Integrated AI & Communication, sensing, holographic comms

By embedding AI into the fundamental design principles and technical requirements of IMT-2030, ITU-R WP 5D is ensuring that 6G is an AI-native network capable of self-management, self-optimization, and supporting a vast ecosystem of AI applications, a significant shift from the supplementary role AI played in 5G.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..
Huawei’s Research Areas and Activities:
  • Agentic-AI Core (A-Core): Huawei unveiled a blueprint for a 6G core network (which will be specified by 3GPP and NOT ITU) where services are managed by specialized AI agents using a large-scale network AI model called “NetGPT”. This allows the network to program, update, and execute its own control procedures automatically without human intervention, based on natural language instructions.
  • Network Architecture Redesign: Huawei proposes the NET4AI system architecture, a service-oriented design that moves beyond the 5G service-based architecture. It introduces a dedicated data plane (DP) to handle the massive volume of data generated by AI and sensing services, enabling flexible and efficient many-to-many data flow for distributed learning and inference.
  • Integrated Sensing and Communication (ISAC): A core pillar of Huawei’s 6G work is the native integration of sensing with communication. This allows the network to use radio waves for high-resolution sensing, localization, and imaging, creating a “digital twin” of the physical world. The large volume of data collected from sensing then serves as a source for AI model training and real-time environmental monitoring.
  • Distributed Machine Learning: Huawei researches deep-edge architecture to enable massive, distributed, and collaborative machine learning (ML). This includes the development of frameworks like a two-level learning architecture that combines federated learning (FL) and split learning (SL) to optimize computing resources and ensure data privacy by keeping raw data local to devices.
  • AI as a Service (AIaaS): The 6G network is designed to provide AI capabilities as a service, allowing the training and inference of large AI models to be distributed across the network (edge and cloud). This offers low-latency performance and access to rich data for AI-driven applications like collaborative robotics and autonomous driving.
  • Energy Efficiency and Sustainability: The company is researching how native AI capabilities can improve overall energy efficiency by up to 100 times compared to 5G. This involves smart energy control, dynamic resource scaling, and optimizing communication paths for lower power consumption.
  • Standardization and White Papers: Huawei is actively contributing to global 6G discussions and standardization bodies like the ITU-R, sharing its vision through publications such as the book 6G: The Next Horizon – From Connected People and Things to Connected Intelligence and various technical white papers. The goal is to define the technical specifications and use cases for 6G that will drive industry-wide innovation by around 2030. 
In summary, the telecom industry is laying the critical groundwork for an AI-native 6G era through research, standard setting, and strategic investments in AI-powered network solutions, even as commercial deployment remains several years away. Decisions must be made now on spectrum use (especially in the FR3 range of 7-24 GHz), silicon roadmaps, and network architectures, all of which will have lasting impact.
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.ericsson.com/en/reports-and-papers/white-papers/ai-native

Roles of 3GPP and ITU-R WP 5D in the IMT 2030/6G standards process

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

ITU-R WP 5D Timeline for submission, evaluation process & consensus building for IMT-2030 (6G) RITs/SRITs

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Should Peak Data Rates be specified for 5G (IMT 2020) and 6G (IMT 2030) networks?

GSMA Vision 2040 study identifies spectrum needs during the peak 6G era of 2035–2040

Highlights and Summary of the 2025 Brooklyn 6G Summit

NGMN: 6G Key Messages from a network operator point of view

Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

Nokia Bell Labs and KDDI Research partner for 6G energy efficiency and network resiliency

Deutsche Telekom: successful completion of the 6G-TakeOff project with “3D networks”

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework

KT and LG Electronics to cooperate on 6G technologies and standards, especially full-duplex communications

Highlights of Nokia’s Smart Factory in Oulu, Finland for 5G and 6G innovation

Nokia sees new types of 6G connected devices facilitated by a “3 layer technology stack”

Rakuten Symphony exec: “5G is a failure; breaking the bank; to the extent 6G may not be affordable”

India’s TRAI releases Recommendations on use of Tera Hertz Spectrum for 6G

New ITU report in progress: Technical feasibility of IMT in bands above 100 GHz (92 GHz and 400 GHz)

 

Telecom operators investing in Agentic AI while Self Organizing Network AI market set for rapid growth

Telecom companies are planning to use Agentic AI [1.] for customer experience and network automation. A recent RADCOM survey shows 71% of network operators plan to deploy agentic AI in 2026, while 14% have already begun, prioritizing areas that directly influence trust and customer satisfaction: security and fraud prevention (57%) and customer service and support (56%).  The top use cases are automated customer complaint resolution and autonomous fault resolution.

Operators are betting on agentic AI to remove friction before customers feel it, with the highest-value use cases reflecting this shift, including:

  • 57% – automated customer complaint resolution
  • 54% – autonomous fault resolution before it impacts service
  • 52% – predicting experience to prevent churn

This technology is shifting networks from simply detecting issues to preventing them before customers notice. In contact centers, 2026 is expected to see a rise in human and AI agent collaboration to improve efficiency and customer service.

Note 1.  Agentic AI refers to autonomous artificial intelligence systems that can perceive, reason, plan, and act independently to achieve complex goals with minimal human intervention. Going beyond simple command-response interactions, these systems manage multi-step tasks, use various tools, and adapt to new information to deliver proactive automation in dynamic environments. Such intelligent agents function like digital coworkers, coordinating internally and with other systems to execute sophisticated workflows.
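To make the perceive–reason–plan–act pattern in Note 1 concrete, here is a minimal, hypothetical sketch of an agentic loop for autonomous fault resolution. All class, function, and KPI names are invented for illustration and do not correspond to any operator's or vendor's actual system; a real deployment would call an LLM-based planner and the operator's OSS/BSS APIs rather than the toy rules below.

```python
# Minimal sketch of an agentic perceive-reason-plan-act loop for autonomous
# fault resolution. All names are hypothetical; a real deployment would use
# an LLM/planner and the operator's OSS/BSS APIs instead of these toy rules.
from dataclasses import dataclass

@dataclass
class Alarm:
    cell_id: str
    kpi: str
    value: float
    threshold: float

def perceive(telemetry: list[dict]) -> list[Alarm]:
    """Turn raw telemetry into alarms (the perception step)."""
    return [Alarm(t["cell_id"], t["kpi"], t["value"], t["threshold"])
            for t in telemetry if t["value"] > t["threshold"]]

def plan(alarm: Alarm) -> list[str]:
    """Tiny rule-based stand-in for the agent's reasoning/planning step."""
    if alarm.kpi == "packet_loss":
        return ["reroute_traffic", "restart_line_card", "open_ticket_if_unresolved"]
    if alarm.kpi == "prb_utilization":
        return ["add_carrier", "rebalance_load"]
    return ["open_ticket_if_unresolved"]

def act(action: str, alarm: Alarm) -> bool:
    """Execute one remediation action; here we only log it."""
    print(f"[{alarm.cell_id}] executing: {action}")
    return action != "open_ticket_if_unresolved"  # pretend other actions succeed

def agent_loop(telemetry: list[dict]) -> None:
    for alarm in perceive(telemetry):
        for action in plan(alarm):
            if act(action, alarm):
                break  # stop at the first action that resolves the fault

agent_loop([{"cell_id": "gNB-17", "kpi": "packet_loss", "value": 3.2, "threshold": 1.0}])
```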

……………………………………………………………………………………………………………………………………………………………………………………………

ResearchAndMarkets.com has just published a “Self-Organizing Network Artificial Intelligence (AI) Global Market Report 2025.” The market research firm forecasts that the self-organizing network AI market [2.] will expand from $5.19 billion in 2024 to $6.18 billion in 2025, a CAGR of 19.2%. This surge is driven by the integration of machine learning and AI in telecom networks, investment in smart network management, and growing demand for self-healing, self-optimization, and predictive maintenance capabilities, along with the expansion of 5G, increasing automation demands, and AI integration for network optimization. Opportunities include AI-driven radio resource management (RRM) and predictive maintenance. Asia-Pacific emerges as the fastest-growing region, boosting telecom innovation amid global trade shifts.

Note 2.  Self-organizing network AI leverages software, hardware, and services to dynamically optimize and manage telecom networks, applicable across various network types and deployment modes. The market encompasses a broad range of solutions, from network optimization software to AI-driven planning products, underscoring its expansive potential.

Looking further ahead, the market is expected to reach $12.32 billion by 2029, with a CAGR of 18.8%. Key drivers during this period include heightened demand for automation, increased 5G deployments, and growing network densification, accompanied by rising data traffic and subscriber numbers. Trends such as AI-driven network automation advancements, machine learning integration for real-time optimization, and the rise of generative AI for analytics are reshaping the landscape.
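As a quick sanity check on the figures above, the snippet below recomputes the implied compound annual growth rates from the reported market sizes ($5.19B in 2024, $6.18B in 2025, $12.32B in 2029); the cagr helper function is ours, not from the report.

```python
# Recompute the growth rates quoted above from the reported market sizes.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

print(f"2024 -> 2025: {cagr(5.19, 6.18, 1):.1%}")   # ~19.1%, vs. the 19.2% quoted
print(f"2025 -> 2029: {cagr(6.18, 12.32, 4):.1%}")  # ~18.8%, matching the forecast
```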

The expansion of 5G networks plays a pivotal role in propelling this growth. These networks, characterized by high-speed data and ultra-low latency, significantly enhance the capabilities of self-organizing network AI. The integration facilitates real-time data processing, supporting automation, optimization, and predictive maintenance, thereby improving service quality and user experience. A notable development in 2023 saw UK outdoor 5G coverage rise to 85-93%, reflecting growing demand and technological advancement.

Huawei Technologies and other major tech companies are pioneering innovative solutions like AI-driven radio resource management (RRM), which optimizes network performance and enhances user experience. These solutions rely on AI and machine learning for dynamic spectrum and network resource management. For instance, Huawei’s AI Core Network, introduced at MWC 2025, marks a substantial leap in intelligent telecommunications, integrating AI into core systems for seamless connectivity and real-time decision-making.

Strategic acquisitions are also shaping the market, exemplified by Amdocs Limited acquiring TEOCO Corporation in 2023 to bolster its network optimization and analytics capabilities. This acquisition aims to enhance end-to-end network intelligence and operational efficiency.

Leading players in the market include Huawei, Cisco Systems Inc., Qualcomm Incorporated, and many others, driving innovation and competition. Europe held the largest market share in 2024, with Asia-Pacific poised to be the fastest-growing region through the forecast period.

References:

Operator Priorities for 2026 and Beyond: Data, Automation, Customer Experience

https://uk.finance.yahoo.com/news/self-organizing-network-artificial-intelligence-105400706.html

Ericsson integrates agentic AI into its NetCloud platform for self healing and autonomous 5G private network

Agentic AI and the Future of Communications for Autonomous Vehicles (V2X)

IDC Report: Telecom Operators Turn to AI to Boost EBITDA Margins

Omdia: How telcos will evolve in the AI era

Palo Alto Networks and Google Cloud expand partnership with advanced AI infrastructure and cloud security

Sovereign AI infrastructure for telecom companies: implementation and challenges

Sovereign AI infrastructure refers to the capability of a nation or an organization to own and control the entire artificial intelligence (AI) technology stack within its own borders, subject to local laws and governance. This includes the physical data centers, specialized hardware (like GPUs), software, data, and skilled workforce. The full “stack” is designed to ensure national control and reduce reliance on foreign providers. A few key features:

  • Data Sovereignty and Security: Policies and technical controls (e.g., data localization, encryption) to ensure that sensitive data used for training and inference remains within the jurisdiction (a minimal sketch of such a check follows this list).
  • Local Models and Frameworks: Development and hosting of proprietary or locally tailored AI models and software frameworks that align with national values, languages, and ethical standards.
  • Workforce Development: Investing in domestic talent, including data scientists, engineers, and legal experts, to build and maintain the local AI ecosystem.
  • Regulatory Framework: A comprehensive legal and ethical framework for AI development and deployment that ensures compliance with national laws and standards.
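As a minimal sketch of the data-localization control in the first item above, the snippet below refuses to schedule an AI workload whose data or compute would leave the governing jurisdiction. The region names, policy fields, and compliance rule are invented purely for illustration.

```python
# Toy data-localization check: refuse to schedule an AI workload whose data
# or compute would leave the governing jurisdiction. Regions and policy
# fields are illustrative only.
ALLOWED_REGIONS = {"de-fra-1", "de-ber-1"}  # hypothetical in-country regions

def is_compliant(job: dict) -> bool:
    """A job is compliant only if every data source and the target cluster
    sit inside the allowed jurisdiction and the data is encrypted at rest."""
    in_jurisdiction = (job["compute_region"] in ALLOWED_REGIONS and
                       all(r in ALLOWED_REGIONS for r in job["data_regions"]))
    return in_jurisdiction and job.get("encrypted_at_rest", False)

job = {"compute_region": "de-fra-1",
       "data_regions": ["de-fra-1", "de-ber-1"],
       "encrypted_at_rest": True}
print("schedule" if is_compliant(job) else "reject")
```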

Why It’s Important – The pursuit of sovereign AI infrastructure is driven by several strategic considerations for both governments and private enterprises:

  • National Security: To ensure that critical systems in defense, intelligence, and public infrastructure are not dependent on potentially adversarial foreign technologies or subject to extraterritorial access laws (like the U.S. CLOUD Act).
  • Economic Competitiveness: To foster a domestic tech industry, create high-skilled jobs, protect intellectual property, and capture the significant economic benefits of AI-driven growth.
  • Data Privacy and Compliance: To comply with stringent local data protection regulations (e.g., GDPR in the EU) and build public trust by ensuring citizen data is handled securely and according to local laws.
  • Cultural Preservation: To train AI models on local datasets and languages, preserving cultural nuances and avoiding bias found in generalized, globally trained models.

Image Credit: Nvidia

………………………………………………………………………………………………………………………………………………………………………………………………………..

Governments around the world are starting to build sovereign AI infrastructure, and according to a new report from Morningstar DBRS, major telecommunications companies are uniquely positioned to benefit from that shift. Here are a few takeaways from the report:

  • Sovereign AI funding opens a new growth path for telcos – Governments investing in domestic AI infrastructure are increasingly turning to operators, whose network and regulatory strengths position them to capture a large share of this emerging market.
  • Telcos’ capabilities align with sovereignty needs – Their expertise in large-scale networks, local presence, and established government relationships give them an edge over hyperscalers for sensitive, sovereignty-focused AI projects.
  • Early adopters gain advantage – Operators in Canada and Europe are already moving into sovereign AI, positioning themselves to secure higher-margin enterprise and government workloads as national AI buildouts accelerate.
Infrastructure advantages provide a strategic head start for telecommunications companies. Telcos currently manage extensive data centers, fiber optic networks, and computing infrastructure nationwide. Leveraging these established physical assets can significantly reduce the barriers to implementing sovereign AI solutions, contrasting favorably with the greenfield development required by other entities. 
The sophisticated data governance expertise within telcos is well-suited for the stringent requirements of sovereign AI. Their decades of experience managing and processing massive datasets have resulted in mature data handling practices directly applicable to the data infrastructure demands of secure, sovereign AI systems.
Furthermore, existing edge computing capabilities offer a distinct competitive advantage. Telecom networks facilitate localized AI processing near data sources while adhering to data residency requirements—a crucial combination for sovereign AI deployments.  This translates to “embedding AI within their network fabric for both optimization and distributed inference,” enabling AI consumption that offers lower latency, reduced cost, and applicability for high-sensitivity use cases in sectors like government and national security.
The opportunity to integrate AI workloads with emerging 5G and 6G infrastructures creates additional strategic value. Sovereign AI represents a pivotal opportunity for telecom operators to position themselves as central players in national AI strategies, evolving their role beyond primary connectivity provisioning.
……………………………………………………………………………………………………………………………………………………………………………….
Implementing sovereign AI presents substantial challenges despite its strategic potential. Key bottlenecks and technical complexities include:
  • Infrastructure Demands: Building robust domestic AI ecosystems requires specialized expertise spanning hardware, software, data governance, and policy.
  • Resource Constraints: Dr. Matt Hasan, CEO at aiRESULTS and a former AT&T executive, highlights specific bottlenecks:
    • Compute Density at Scale.
    • Spectrum Allocation amidst political pressures.
    • Energy Demand exceeding existing grid capacity.
  • Intensified Reliability Requirements: Sovereign AI implementation places heightened demands on telecom providers for system uptime, reliability, quality, and data privacy. This necessitates a focus on efficient power consumption, resilient routing and backups, robust encryption, and comprehensive cybersecurity measures.
  • Supply Chain Vulnerabilities: Geopolitical tensions introduce risks to the supply of critical components such as GPUs and specialized chips, underscoring the interconnected nature of global hardware supply chains.
  • Technology Evolution: The rapid evolution of AI technology mandates continuous investment and technical agility to ensure sovereign deployments remain current.
Competitive landscape dynamics:
  • The interplay between global hyperscalers and regional telecom operators is expected to shift.
  • Hasan predicts a collaborative model, with regional telcos leveraging their position as sovereign partners through joint ventures, rather than an outright displacement of hyperscalers.
Ultimately, the objective of sovereign AI is strategic resilience, not complete digital isolation. Nations must judiciously balance sovereignty goals with the advantages of global technological collaboration. For telecom operators, adeptly managing these complexities and investment demands will define sovereign AI’s realization as a viable growth opportunity.
…………………………………………………………………………………………………………………………………………………………………………….

References:

Telcos Across Five Continents Are Building NVIDIA-Powered Sovereign AI Infrastructure

https://dbrs.morningstar.com/research/468155/telecoms-are-well-placed-to-benefit-from-sovereign-ai-infrastructure-plans

How “sovereign AI” could shape telecom

https://www.rcrwireless.com/20251202/ai/sovereign-ai-telcos

Subsea cable systems: the new high-capacity, high-resilience backbone of the AI-driven global network

Analysis: OpenAI and Deutsche Telekom launch multi-year AI collaboration

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Omdia: How telcos will evolve in the AI era

OpenAI announces new open weight, open source GPT models which Orange will deploy

Expose: AI is more than a bubble; it’s a data center debt bomb

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

Custom AI Chips: Powering the next wave of Intelligent Computing

AI spending boom accelerates: Big tech to invest an aggregate of $400 billion in 2025; much more in 2026!

IBM and Groq Partner to Accelerate Enterprise AI Inference Capabilities

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

 

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

To date, the main benefit of AI for telecom has been headcount reduction via layoffs. Light Reading’s Iain Morris wrote, “Telecom operators and vendors, nevertheless, are already using AI as the excuse for thousands of job cuts made and promised. So far, those cuts have not brought any improvement in the sector’s fortunes. Meanwhile, ceding basic but essential skills to systems that hardly anyone understands seems incredibly risky.” Some say that will change with 6G/IMT 2030, but that’s a long way off. Others point to AI RAN, but that has not gained any real market traction with wireless telcos.

As Gen AI development accelerates, robust wireless and fiber optic network infrastructure will be essential to accommodate the substantial data and communication volume generated by AI systems. Initially, the existing network ecosystem—encompassing wireless, wireline, broadband, and satellite services—will absorb this traffic load. However, the expanding requirements of AI are anticipated to drive the future emergence of entirely new network architectures and communication paradigms.

To be sure, AI needs massive, fast, reliable connectivity to function, driving demand for low-latency optical networks and 6G/IMT 2030. AI, in turn, will optimize those networks, improving efficiency, security, and resource management and enabling new services like real-time AR/VR, ultimately boosting telecom revenue and innovation across the entire digital ecosystem.

Source: Pitinan Piyavatin/Alamy Stock Photo

……………………………………………………………………………………………………………………………………………………………………..

Key emerging and evolving network types and technologies include:
  • AI Backend Scale-Out and Scale-Up Networks: These are specialized, private networks within and across data centers designed to connect numerous GPUs and enable them to function as one massive compute resource. They utilize technologies like:
    • InfiniBand: A long-standing high-bandwidth, low-latency technology that has become a top choice for connecting GPU clusters in AI training environments.
    • Optimized Ethernet: Ethernet is gaining ground for AI workloads through the development of enhanced, open standards via the Ultra Ethernet Consortium (UEC). These enhancements aim to provide lossless, low-latency fabrics that can match or exceed InfiniBand’s performance at scale.
    • High-Speed Optics: The use of 400 Gbps and 800 Gbps (and soon 1.6 Tbps) optical interconnects is critical for meeting the massive bandwidth and power requirements within and between AI data centers (a rough sizing sketch follows this list).
  • Edge AI Networking: As AI inferencing (generating responses from AI models) moves closer to the end-user or device (e.g., in autonomous vehicles, smart hospitals, or factories), specialized edge networks are needed. These networks must ensure low latency and localized processing to enable real-time responses.
  • AI-Native 6G Networks: The upcoming sixth-generation (6G) wireless networks are being designed with AI integration as a core principle, rather than an add-on. 
    • These networks are expected to be fully automated and self-evolving, using AI to optimize resource allocation, predict issues, and enhance security autonomously.
    • They will support extremely high data rates (up to 1 Tbps), ultra-low latency (around 1 ms), and new technologies like AI-RAN (Radio Access Network) that integrate AI capabilities directly into the network infrastructure.
    • More in next section below.
  • Self-Evolving Networks: The ultimate goal is the development of “self-evolving networks” where AI agents manage and optimize the network infrastructure autonomously, adapting to new demands and challenges without human intervention. 
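As a rough, back-of-the-envelope illustration of why the 400/800 Gbps (and 1.6 Tbps) optics matter for scale-out fabrics, the snippet below computes the aggregate bandwidth of a hypothetical GPU cluster; the cluster size and NIC count are arbitrary assumptions, not figures from this article.

```python
# Rough aggregate bandwidth of a hypothetical AI scale-out fabric.
# Assumptions (not from the article): 1,024 GPUs, one NIC per GPU.
GPUS = 1024
NICS_PER_GPU = 1

for link_gbps in (400, 800, 1600):  # the optics generations named above
    total_tbps = GPUS * NICS_PER_GPU * link_gbps / 1000
    print(f"{link_gbps} Gbps links -> {total_tbps:.0f} Tbps aggregate")
```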

……………………………………………………………………………………………………………………………………………………………………..

In IMT 2030/6G networks, AI will shift from being an “add-on” optimization tool (as in 5G) to a native, foundational component of the entire network architecture. This deep integration will enable the network to be self-organizing, highly efficient, and capable of supporting advanced AI applications as a service. Native AI for IMT-2030 (6G) means building AI directly into the network’s core architecture, making it AI-first and pervasive, rather than adding AI as an overlay. This enables self-optimizing, intelligent networks that can autonomously manage resources, provide ubiquitous AI services, and offer seamless, context-aware experiences with minimal human intervention, fundamentally transforming both network operations and user applications by 2030.

Core Concepts of Native AI in IMT-2030 (6G):
  • AI-Native Architecture: AI isn’t just an application; it’s a foundational, intrinsic component throughout the entire system, from the radio interface (RAN) to the core.
  • Ubiquitous Intelligence: Embedding AI everywhere, enabling distributed intelligence for AI model training, inference, and deployment directly within the network infrastructure, extending to the network edge.
  • Autonomous Operations: AI handles complex tasks like network optimization, resource allocation, and automated maintenance (O&M) in real-time, reducing reliance on manual intervention.
  • AI-as-a-Service (AIaaS): The network transforms into a unified platform providing both communication and AI capabilities, making AI accessible for various applications.
  • Intelligent Processing: AI drives functions across the air interface, resource management, and control planes for highly efficient operations.
  • Data-Driven Automation: Leverages big data and real-time analytics to predict issues, optimize performance, and automate complex decision-making.
  • Seamless User Experience: Moves beyond touchscreens to AI-driven interactions, offering more natural and contextual computing.
AI for Network Management and Optimization (“AI-Empowered Networks”):
AI and Machine Learning (ML) will be intrinsically embedded within the network’s functions to enhance performance, reliability, and efficiency in ways that conventional, rule-based algorithms cannot. 
  • Autonomous Operations: AI will enable self-monitoring, self-optimization, and self-healing networks, drastically reducing the need for human intervention in operation and maintenance (O&M).
  • Dynamic Resource Management: ML algorithms will analyze massive amounts of network data in real-time to predict traffic patterns and user demands, dynamically allocating bandwidth, power, and computing resources to ensure optimal performance and energy efficiency (a simple forecast-driven allocation sketch follows this list).
  • AI-Native Air Interface: AI/ML models will replace traditional, manually engineered signal processing blocks in the physical layer (e.g., channel estimation, beam management) to adapt dynamically to complex and time-varying wireless environments, improving spectral efficiency.
  • Enhanced Security: AI will be critical for real-time threat detection and automated incident response across the hyper-connected 6G ecosystem, identifying anomalies and mitigating security risks that are not well understood by current systems.
  • Digital Twins: AI will power the creation and management of real-time digital twins (virtual replicas) of the physical network, allowing for sophisticated simulations and testing of network changes before real-world deployment. 
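To illustrate the dynamic resource management item above, here is a deliberately simple sketch in which a forecast of upcoming traffic (a moving average stands in for the ML predictor) drives a bandwidth-allocation decision; the samples, headroom factor, and allocation granularity are invented.

```python
# Toy forecast-driven resource allocation: a moving average stands in for
# the ML traffic predictor; the allocation rule and numbers are invented.
import numpy as np

traffic_gbps = np.array([42, 45, 51, 60, 72, 80, 78, 85])  # recent load samples

def forecast_next(history, window=4):
    """Stand-in predictor: mean of the last `window` samples."""
    return float(np.mean(history[-window:]))

def allocate(predicted_gbps, headroom=1.25, unit_gbps=25):
    """Provision capacity in 25 Gbps units with 25% headroom over the forecast."""
    units = int(np.ceil(predicted_gbps * headroom / unit_gbps))
    return units * unit_gbps

pred = forecast_next(traffic_gbps)
print(f"forecast ~{pred:.0f} Gbps -> allocate {allocate(pred)} Gbps")
```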
Network as an Enabler of AI Services (“Network-Enabled AI” or “AI as a Service”):
The 6G network itself will serve as a platform for pervasive, distributed AI, bringing compute power closer to the end-users and devices.
  • Pervasive Edge AI: AI model training and inference will be distributed throughout the network, from the cloud to the edge (devices, base stations), reducing latency and enabling real-time, localized decision-making for applications like autonomous driving and industrial automation.
  • Support for Advanced Use Cases: The massive data rates (up to 1 Tbps), ultra-low latency, and high reliability enabled by AI in 6G will facilitate new applications such as holographic communication, remote robotic surgery with haptic feedback, and collaborative robotics that were not feasible with 5G.
  • Federated Learning: The network will support distributed machine learning techniques, such as federated learning, which allow AI models to be trained on local data across various devices without the need to centralize sensitive user data, thus ensuring data privacy and security (a minimal sketch follows this list).
  • Integrated Sensing and Communication (ISAC): AI will process the rich environmental data gathered through 6G’s new sensing capabilities (e.g., precise positioning, motion detection, environmental monitoring), allowing the network to interact with and understand the physical world in a holistic manner for applications like smart city management or augmented reality. 
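The federated learning item above can be illustrated with a minimal federated-averaging (FedAvg-style) sketch: each “device” runs a few gradient steps on its own private data and only the model parameters, never the raw data, are averaged centrally. The data and linear model here are synthetic toys.

```python
# Minimal federated averaging (FedAvg) sketch: devices train locally on
# private data; only model weights are shared and averaged by the server.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])                 # hidden "ground truth"

def local_data(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)  # private measurements
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    """A few local gradient-descent steps on the device's own data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

devices = [local_data() for _ in range(5)]     # 5 devices; data never leaves them
w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(w_global.copy(), X, y) for X, y in devices]
    w_global = np.mean(local_ws, axis=0)       # server averages weights only

print("learned:", np.round(w_global, 2), "target:", true_w)
```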

……………………………………………………………………………………………………………………………………………………………………..

AI‑native air interface and RAN:

IMT‑2030 explicitly expects a new AI‑native air interface that uses AI/ML models for core PHY/MAC functions such as channel estimation, symbol detection/decoding, beam management, interference handling, and CSI feedback. This enables adaptive waveforms and link control that react in real time to channel and traffic conditions, going beyond deterministic algorithms in 5G‑Advanced.
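To show what such an AI/ML model would replace or augment, the toy sketch below performs classical pilot-based least-squares channel estimation for a single flat-fading coefficient; in an AI-native air interface, a learned estimator or denoiser would stand in for (or refine) this block. All parameters are illustrative.

```python
# Toy pilot-based least-squares (LS) channel estimation for a flat-fading
# channel. In an AI-native air interface, a learned model would replace or
# refine this block (e.g., denoising the LS estimate). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_pilots = 16
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)      # unknown channel coefficient
pilots = np.exp(1j * 2 * np.pi * rng.random(n_pilots))   # known unit-power pilot symbols
noise = 0.1 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
rx = h * pilots + noise                                   # received pilot observations

h_ls = np.vdot(pilots, rx) / np.vdot(pilots, pilots)      # least-squares estimate
print("true h      =", np.round(h, 3))
print("LS estimate =", np.round(h_ls, 3))
```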

At the RAN level, IMT‑2030 envisions “native‑AI enabled” architectures that are simpler but more intelligent, with data‑driven operation and distributed learning across gNBs, edge nodes, and devices. AI/ML will be applied end‑to‑end for resource allocation, mobility, energy optimization, and fault management, effectively turning the RAN into a self‑optimizing, self‑healing system.

Integrated AI and communication services:

The framework defines “Artificial Intelligence and Communication” (often phrased as Integrated AI and Communication) as a specific usage scenario where the network provides AI compute, model hosting, and inference as a service. Example use cases include IMT‑2030‑assisted automated driving, cooperative medical robotics, digital twins, and offloading heavy computation from devices to edge/cloud via the 6G network.

To support this, IMT‑2030 includes “applicable AI‑related capabilities” such as distributed data processing, distributed learning, AI model execution and inference, and AI‑aware scheduling as native capabilities of the system. Computing and data services (not just connectivity) are treated as integral IMT‑2030 components, especially at the edge for low‑latency, energy‑efficient AI workloads.

System intelligence and new use cases:

AI is central to several new IMT‑2030 usage scenarios beyond classic eMBB/mMTC/URLLC, including Immersive Communication, Integrated Sensing and Communication, and Integrated AI and Communication. In integrated sensing, AI fuses multi‑dimensional radio sensing data (position, motion, environment, even human behavior) to provide contextual awareness for applications like smart cities, industrial control, and XR.

Embedding intelligence across air interface, edge, and cloud is seen as necessary to manage 6G complexity and enable “Intelligence of Everything,” including real‑time digital twins and AIGC‑driven services. The vision is for the 6G/IMT‑2030 network to act as a distributed neural system that tightly couples communication, sensing, and computing.

IMT 2030 Goals:

  • To create self-healing, self-optimizing networks that can adapt to diverse demands.
  • To enable new AI-driven applications, from intelligent digital twins to advanced immersive experiences.
  • To build a truly intelligent communication fabric that supports a hyper-connected, AI-enhanced world.

Summary of AI’s roles in IMT‑2030, by dimension:

  • Air interface: AI‑native PHY/MAC for channel estimation, decoding, beamforming, and interference control.
  • RAN/core architecture: Native‑AI enabled, data‑driven, self‑optimizing/self‑healing network functions.
  • Compute and data services: Built‑in edge/cloud compute for AI training, inference, and data processing.
  • Usage scenarios: A dedicated “Integrated AI and Communication” scenario plus AI‑rich sensing and immersive use cases.
  • Applications and ecosystems: Support for digital twins, automated driving, robotics, AIGC, and industrial automation.

In summary, AI in IMT‑2030 is both an internal engine for network intelligence and an exported capability the network offers to verticals, making 6G effectively AI‑native end‑to‑end.

………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/ai-machine-learning/the-lessons-of-pluribus-for-telecom-s-genai-fans

https://www.ericsson.com/en/reports-and-papers/white-papers/ai-native

https://www.5gamericas.org/wp-content/uploads/2024/08/ITUs-IMT-2030-Vision_Id.pdf

ITU-R WP 5D Timeline for submission, evaluation process & consensus building for IMT-2030 (6G) RITs/SRITs

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework

Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D

NTT DOCOMO successful outdoor trial of AI-driven wireless interface with 3 partners

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Draft new ITU-R recommendation (not yet approved): M.[IMT.FRAMEWORK FOR 2030 AND BEYOND]

 

Analysis: OpenAI and Deutsche Telekom launch multi-year AI collaboration

Deutsche Telekom (DT) has formalized a strategic, multi-year collaboration with OpenAI to integrate advanced artificial intelligence (AI) solutions across its internal operations and customer engagement platforms. The partnership aims to co-develop “simple, personal, and multi-lingual AI experiences” focused on enhancing communication and productivity. Initial pilot programs are slated for deployment in Q1 2026. AI will also play a larger role in customer care, internal copilots, and network operations as the Group advances toward more autonomous, self-healing networks. DT plans a company-wide rollout of ChatGPT Enterprise, leveraging AI to streamline core functions including:

  • Customer Care: Deploying sophisticated virtual assistants to manage billing inquiries, service outages, plan modifications, roaming support, and device troubleshooting [1].
  • Internal Operations: Utilizing AI copilots to increase internal efficiency.
  • Network Management: Optimizing core network provisioning and operations.
This collaboration underscores DT’s long-standing strategic imperative to establish itself as a leader in European cloud and AI infrastructure, emphasizing digital sovereignty. Some historical initiatives supporting this strategy include:
  • Sovereign Cloud (2021): DT’s T-Systems division partnered with Google Cloud to offer sovereign cloud services.
  • T Cloud Suite (Early 2025): The launch of a comprehensive suite providing sovereign public, private, and AI cloud options leveraging hybrid infrastructure.
  • Industrial AI Cloud (Early 2025): A collaboration with Nvidia to build a dedicated industrial AI data center in Munich, scheduled for Q1 2026 operations.

The integration of OpenAI technology strategically positions DT to offer a comprehensive value proposition to enterprise clients, combining connectivity, data center capabilities, and specialized AI software under a sovereign framework, according to Recon Analytics Founder Roger Entner.  “There are not that many AI data centers in Europe and in Germany,” Entner explained, noting this leaves the door open for operators like DT to fill in the gap. “In the U.S. you have a ton of data centers that you can do AI. Therefore, it doesn’t make sense for a network operator to have also a data center. They tried to compete with hyperscalers, and it failed. And the scale in the U.S. is a lot bigger than in Europe.”
OpenAI and Deutsche Telekom collaborate. © Deutsche Telekom
…………………………………………………………………………………………………………………………………………………………….
Tekonyx President and Chief Research Officer Sid Nag suggests the integration could extend to employing ChatGPT-based coding tools for developing proprietary Operational Support Systems (OSS) and Business Support Systems (BSS).   He anticipates the partnership will generate new revenue streams through offerings including:
  • Edge AI compute services for enterprises.
  • Vertical AI solutions tailored for healthcare, retail, and manufacturing sectors.
  • Integrated private 5G and AI bundles for industrial logistical hubs.

“Telcos – if they execute – will have a big play in the edge inferencing space as well as providing hosting and colo services that can host domain specific SLMs that need to be run closer to the user data,” he said. “Furthermore, telcos will play a role in connectivity services across Neocloud providers such as CoreWeave, Lambda Labs, Digital Ocean, Vast.AI etc. OpenAI does not want to lose the opportunity to partner with telcos so they are striking early,” Nag added.

Other Voices:

  • Roger Entner notes the model is highly applicable to European incumbents (e.g., Orange, Telefonica) due to the relative scarcity of existing AI data centers in the region, allowing operators to fill a critical infrastructure gap.  Conversely, the model is less viable for U.S. operators, where hyperscalers already dominate the extensive data center market.
  • AvidThink Founder and colleague Roy Chua cautions that while DT presents a robust “reference blueprint,” replicating this strategy requires significant scale, substantial financial investment, and regulatory alignment—factors not easily accessible to all network operators.
  • Futurum Group VP and Practice Lead Nick Patience told Fierce Network, “This deal elevates DT from being a user of AI to being a co-developer, which is pretty significant. DT is one of the few operators building a full-stack AI story. This is an example of OpenAI treating telcos as high-scale distribution and data channels – customer care, billing, network telemetry, national reach and government relationships. This suggests OpenAI is deliberately building an operator channel in key regions (U.S., Korea, EU) but still in partnership with existing cloud and infra providers rather than displacing them.”
………………………………………………………………………………………………………………………………………………………….
OpenAI’s Telco Deals:

OpenAI has established significant partnerships with several telecom network providers and related technology companies to integrate AI into network operations, enhance customer experience, and develop new AI-native platforms. Those deals and collaborations include:

  • T-Mobile: T-Mobile has a multi-year agreement with OpenAI and is actively testing the integration of AI (specifically IntentCX) into its business operations for customer service improvements. T-Mobile is also collaborating with Nokia and Nvidia on AI-RAN (Radio Access Network) technologies for 6G innovation.
  • SK Telecom (SKT): SK Telecom has an in-house AI company and collaborates with OpenAI and other AI leaders like Anthropic to enhance its AI capabilities, build sovereign AI infrastructure, and explore new services for its customers in South Korea and globally. They are also reportedly integrating Perplexity into their offerings.
  • Deutsche Telekom (DT): DT is partnering with OpenAI to offer ChatGPT Enterprise across its business to help teams work more effectively, improve customer service, and automate network operations.
  • Circles: This global telco technology company and OpenAI announced a strategic global collaboration to build a fully AI-native telco SaaS platform, which will first launch in Singapore. The platform aims to revolutionize the consumer experience and drive operational efficiencies for telcos worldwide.
  • Rakuten: Rakuten and OpenAI launched a strategic partnership to develop AI tools and a platform aimed at leveraging Rakuten’s Open RAN expertise to revolutionize the use of AI in telecommunications.
  • Orange: Orange is working with OpenAI to drive new use cases for enterprise needs, manage networks, and enable innovative customer care solutions, including those that support African regional languages.
  • Indian Telecoms (Reliance Jio, Airtel): Telecom providers in India are integrating AI tools from companies like Google and Perplexity into their mobile subscriptions, providing millions of users access to advanced intelligence resources.
  • Nokia & Nvidia: In a broader industry collaboration, Nvidia invested $1 billion in Nokia to add Nvidia-powered AI-RAN products to Nokia’s portfolio, enabling telecom service providers to launch AI-native 5G-Advanced and 6G networks. This partnership also includes T-Mobile US for testing.

Conclusions:

With more than 261 million mobile customers globally, Deutsche Telekom provides a strong foundation to bring AI into everyday use at scale. The new collaboration marks the next step in Deutsche Telekom’s AI journey – moving from early pilots to large-scale products that make AI useful for everyone.

………………………………………………………………………………………………………………………………………………………………….

References:

Deutsche Telekom: successful completion of the 6G-TakeOff project with “3D networks”

Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent

Deutsche Telekom offers 5G mmWave for industrial customers in Germany on 5G SA network

Deutsche Telekom migrates IP-based voice telephony platform to the cloud

Open AI raises $8.3B and is valued at $300B; AI speculative mania rivals Dot-com bubble

OpenAI and Broadcom in $10B deal to make custom AI chips

Custom AI Chips: Powering the next wave of Intelligent Computing

OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

OpenAI announces new open weight, open source GPT models which Orange will deploy

OpenAI partners with G42 to build giant data center for Stargate UAE project

Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC
