AI infrastructure spending boom: a path towards AGI or speculative bubble?

by Rahul Sharma, Indxx with Alan J Weissberger, IEEE Techblog

Introduction:

The ongoing wave of artificial intelligence (AI) infrastructure investment by U.S. mega-cap tech firms marks one of the largest corporate spending cycles in history. Aggregate annual AI investments, mostly for cloud-resident mega-data centers, are expected to exceed $400 billion in 2025 and potentially surpass $500 billion by 2026. The scale of this buildout rivals that of past industrial revolutions, from the railroads to the internet era.[1]

At its core, this spending surge represents a strategic arms race for computational dominance. Meta, Alphabet, Amazon and Microsoft are racing to secure leadership in artificial intelligence capabilities, a contest in which data, energy, and compute capacity are the new determinants of market power.

AI Spending & Debt Financing:

Leading technology firms are racing to secure dominance in compute capacity — the new cornerstone of digital power:

  • Meta plans to spend $72 billion on AI infrastructure in 2025.
  • Alphabet (Google) has expanded its capex guidance to $91–93 billion.[3]
  • Microsoft and Amazon are doubling data center capacity, while AWS will drive most of Amazon’s $125 billion 2026 investment.[4]
  • Even Apple, typically conservative in R&D, has accelerated AI infrastructure spending.

Their capex is shown in the chart below:

Analysts estimate that AI could add up to 0.5% to U.S. GDP annually over the next several years. Reflecting this optimism, Morgan Stanley forecasts $2.9 trillion in AI-related investments between 2025 and 2028. The scale of commitment from Big Tech is reshaping expectations across financial markets, enterprise strategies, and public policy, marking one of the most intense capital spending cycles in corporate history.[2]

Meanwhile, OpenAI’s trillion-dollar partnerships with Nvidia, Oracle, and Broadcom have redefined the scale of ambition, turning compute infrastructure into a strategic asset comparable to energy independence or semiconductor sovereignty.[5]

Growth Engine or Speculative Bubble?

As Big Tech pours hundreds of billions of dollars into AI infrastructure, analysts and investors remain divided: some view it as a rational, long-term investment cycle, while others warn of a speculative bubble. Uncertainty persists, especially around Meta’s long-term monetization of AGI-related efforts.[8]

Some analysts view this huge AI spending as a necessary step towards achieving Artificial General Intelligence (AGI) – an unrealized type of AI that possesses human-level cognitive abilities, allowing it to understand, learn, and adapt to any intellectual task a human can. Unlike narrow AI, which is designed for specific functions like playing chess or image recognition, AGI could apply its knowledge to a wide range of different situations and problems without needing to be explicitly programmed for each one.

Other analysts believe this is a speculative bubble, fueled by debt that can never be repaid. Tech sector valuations have soared to dot-com era levels – and, based on price-to-sales ratios, are well beyond them. Some of AI’s biggest proponents acknowledge the fact that valuations are overinflated, including OpenAI chairman Bret Taylor: “AI will transform the economy… and create huge amounts of economic value in the future,” Taylor told The Verge. “I think we’re also in a bubble, and a lot of people will lose a lot of money,” he added.

Here are a few AI bubble points and charts:

  • AI-related capex is projected to consume up to 94% of operating cash flows by 2026, according to Bank of America.[6]
  • Over $75 billion in AI-linked corporate bonds have been issued in just two months — a signal of mounting leverage. Still, strong revenue growth from AI services (particularly cloud and enterprise AI) keeps optimism alive.[7]
  • Meta, Google, Microsoft, Amazon and xAI (Elon Musk’s company) are all using off-balance-sheet debt vehicles, including special-purpose vehicles (SPVs), to fund part of their AI investments. A slowdown in AI demand could render the debt tied to these SPVs worthless, potentially triggering another financial crisis.
  • Alphabet’s (Google’s parent company) CEO Sundar Pichai sees “elements of irrationality” in the current scale of AI investing, which exceeds even the excessive investment of the dot-com/fiber-optic build-out boom of the late 1990s. If the AI bubble bursts, Pichai said, no company will be immune, including Alphabet, despite its breakthrough Gemini technology fueling gains in the company’s stock price.

…………………………………………………………………………………………………………………..

From Infrastructure to Intelligence:

Executives justify the massive spend by citing acute compute shortages and exponential demand growth:

  • Microsoft’s CFO Amy Hood admitted, “We’ve been short on capacity for many quarters” and confirmed that the company will increase its spending on GPUs and CPUs in 2026 to meet surging demand.
  • Amazon’s Andy Jassy noted that “every new tranche of capacity is immediately monetized”, underscoring strong and sustained demand for AI and cloud services.
  • Google reported billions in quarterly AI revenue, offering early evidence of commercial payoff.

Macro Ripple Effects – Industrializing Intelligence:

AI data centers have become the factories of the digital age, fueling demand for:

  • Semiconductors, especially GPUs (Nvidia, AMD, Broadcom)
  • Cloud and networking infrastructure (Oracle, Cisco)
  • Energy and advanced cooling systems for AI data centers (Vertiv, Schneider Electric, Johnson Controls, and other specialists such as LiquidStack and Green Revolution Cooling).

Leading Providers of Energy and Cooling Systems for AI Data Centers:

  • Vertiv (critical power & cooling infrastructure): Offers full-stack solutions with air and liquid cooling, power distribution units (PDUs), and monitoring systems, including the AI-ready Vertiv 360AI portfolio.
  • Schneider Electric (energy management & automation): Provides integrated power and thermal management via its EcoStruxure platform, specializing in modular and liquid cooling solutions for HPC and AI applications.
  • Johnson Controls (HVAC & building solutions): Offers integrated, energy-efficient solutions from design to maintenance, including Silent-Aire cooling and YORK chillers, with a focus on large-scale operations.
  • Eaton (power management): Specializes in electrical distribution systems, uninterruptible power supplies (UPS), and switchgear, which are crucial for reliable energy delivery to high-density AI racks.

These companies focus heavily on innovative liquid cooling technologies, which are essential for managing the extreme heat generated by high-density AI servers and GPUs: 
  • LiquidStack: A leader in two-phase and modular immersion cooling and direct-to-chip systems, trusted by large cloud and hardware providers.
  • Green Revolution Cooling (GRC): Pioneers in single-phase immersion cooling solutions that help simplify thermal management and improve energy efficiency.
  • Iceotope: Focuses on chassis-level precision liquid cooling, delivering dielectric fluid directly to components for maximum efficiency and reduced operational costs.
  • Asetek: Specializes in direct-to-chip (D2C) liquid cooling solutions and rack-level Coolant Distribution Units (CDUs) for high-performance computing.
  • CoolIT Systems: Known for its custom direct liquid cooling technologies, working closely with server OEMs (Original Equipment Manufacturers) to integrate cold plates and CDUs for AI and HPC workloads. 

This new AI ecosystem is reshaping global supply chains, but it is also straining local energy and water resources. For example, Meta’s massive data center in Georgia has already triggered environmental concerns over energy and water usage.

Global Spending Outlook:

  • According to UBS, global AI capex will reach $423 billion in 2025, $571 billion in 2026, and $1.3 trillion by 2030, growing at a 25% CAGR over the period 2025–2030.
  • Compute demand is outpacing expectations: Google’s Gemini has seen a 130-fold rise in AI token usage over the past 18 months, and Meta’s infrastructure needs are expanding sharply.[9]
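As a quick sanity check on those UBS figures (the dollar amounts come from the article; the compounding arithmetic is ours), a few lines of Python confirm that the stated 25% CAGR is consistent with the 2030 target:

```python
# Back-of-envelope check of the UBS AI-capex trajectory cited above.
# Figures from the article: $423B (2025), $571B (2026), $1.3T (2030), 25% CAGR.

base_2025 = 423  # $B
cagr = 0.25

# Compounding 2025 -> 2030 (5 years) at 25%:
implied_2030 = base_2025 * (1 + cagr) ** 5
print(f"Implied 2030 capex: ${implied_2030:,.0f}B")  # ~$1,291B, consistent with ~$1.3T

# Note: the 2025 -> 2026 step implies faster near-term growth than the average CAGR:
growth_2026 = 571 / 423 - 1
print(f"Implied 2025->2026 growth: {growth_2026:.0%}")  # ~35%, above the 25% average
```

In other words, the forecast front-loads growth (roughly 35% in year one) and averages out to 25% per year across the window.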

Conclusions:

The AI infrastructure boom reflects a bold, forward-looking strategy by Big Tech, built on the belief that compute capacity will define the next decade’s leaders. If Artificial General Intelligence (AGI) or large-scale AI monetization unfolds as expected, today’s investments will be seen as visionary and transformative. If demand disappoints, much of this capital could be written down, as the bubble skeptics warn. Either way, the AI era is well underway, and the race for computational dominance is reshaping the future of global markets and innovation.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Footnotes:

[1] https://www.investing.com/news/stock-market-news/ai-capex-to-exceed-half-a-trillion-in-2026-ubs-4343520?utm_medium=feed&utm_source=yahoo&utm_campaign=yahoo-www

[2] https://www.venturepulsemag.com/2025/08/01/big-techs-400-billion-ai-bet-the-race-thats-reshaping-global-technology/#:~:text=Big%20Tech’s%20$400%20Billion%20AI%20Bet:%20The%20Race%20That’s%20Reshaping%20Global%20Technology,-3%20months%20ago&text=The%20world’s%20largest%20technology%20companies,enterprise%20strategy%2C%20and%20public%20policy.

[3] https://www.businessinsider.com/big-tech-capex-spending-ai-earnings-2025-10?

[4] https://www.investing.com/analysis/meta-plunged-12-amazon-jumped-11–same-ai-race-different-economics-200669410

[5] https://www.cnbc.com/2025/10/15/a-guide-to-1-trillion-worth-of-ai-deals-between-openai-nvidia.html

[6] https://finance.yahoo.com/news/bank-america-just-issued-stark-152422714.html

[7] https://news.futunn.com/en/post/64706046/from-cash-rich-to-collective-debt-how-does-wall-street?level=1&data_ticket=1763038546393561

[8] https://www.businessinsider.com/big-tech-capex-spending-ai-earnings-2025-10?

[9] https://finance.yahoo.com/news/ai-capex-exceed-half-trillion-093015889.html

……………………………………………………………………………………………………………………………………………………………………………………………………………………………

About the Author:

Rahul Sharma is President & Co-Chief Executive Officer at Indxx, a provider of end-to-end indexing services, data and technology products. He has been instrumental in leading the firm’s growth since 2011. Rahul manages Indxx’s Sales, Client Engagement, Marketing and Branding teams while also helping to set the firm’s overall strategic objectives and vision.

Rahul holds a BS from Boston College and an MBA with Beta Gamma Sigma honors from Georgetown University’s McDonough School of Business.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………

References:

Curmudgeon/Sperandeo: New AI Era Thinking and Circular Financing Deals

Expose: AI is more than a bubble; it’s a data center debt bomb

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

AI spending boom accelerates: Big tech to invest an aggregate of $400 billion in 2025; much more in 2026!

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

FT: Scale of AI private company valuations dwarfs dot-com boom

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

AI Data Center Boom Carries Huge Default and Demand Risks

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Nokia in major pivot from traditional telecom to AI, cloud infrastructure, data center networking and 6G

Reuters: US Department of Energy forms $1 billion AI supercomputer partnership with AMD

………………………………………………………………………………………………………………………………………………………………………….

 

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

According to Dell’Oro VP Stefan Pongratz, Nokia has outlined a clear plan to arrest its declining RAN revenue share (see chart below), with NVIDIA now a central pillar of that strategy. The partnership is designed to deliver AI RAN [1.] while meeting wireless network operators’ near-term constraints and concerns on performance, power, and TCO (Total Cost of Ownership). IEEE Techblog has noted in many past blog posts that telcos have serious doubts about AI RAN, which suggests many will not buy into that new RAN architecture.

This is especially relevant considering the monumental failure of multi-vendor Open RAN, which was promoted as a game changer but has dismally failed to attain that vision.

Note 1.  AI RAN is a mobile RAN architecture where AI and machine learning are embedded into the RAN software and underlying compute platform to optimize how the network is planned, configured, and operated.  It is being pushed by NVIDIA to get its GPUs into 5G, 5G Advanced and 6G base stations and other wireless network equipment in the RAN.

……………………………………………………………………………………………………………………………………………………..

Nokia aims to use its collaboration with NVIDIA (which invested $1B in the Finland-based company) to stabilize its RAN market share in the near term and create a platform for long-term growth in AI-native 5G-Advanced and 6G networks. The timing—following a dense cadence of disclosures at NVIDIA’s GPU Technology Conference and Nokia’s Capital Markets Day—makes this an ideal time to reassess the scope of the joint announcements, the RAN implications, and Nokia’s broader competitive posture in an increasingly concentrated market.

Both companies share a belief that telecom networks will evolve from best-effort connectivity into a distributed compute fabric underpinning autonomous machines, self-driving vehicles, humanoids, and industrial digital twins. From that perspective, the RAN becomes an “AI grid” that executes and orchestrates AI workloads at the edge, enabling massive numbers of latency-sensitive, bandwidth-intensive AI use cases.

Unlike prior attempts to penetrate the RAN market with its GPUs, NVIDIA is now taking a more pragmatic approach, explicitly targeting parity with incumbent, purpose-built RAN equipment based on performance, power, and TCO rather than leading with speculative multi-tenant or new-revenue narratives. Nokia, acutely aware of wireless telco risk tolerance, is positioning the solution so that the ROI must be justifiable on a pure RAN basis, with additional AI and edge-compute upside treated as optional rather than foundational.

A quick recap of NVIDIA’s entry into RAN: based on the announcement and subsequent discussions, our understanding is that NVIDIA will invest $1B in Nokia and that NVIDIA-powered AI-RAN products will be incorporated into Nokia’s RAN portfolio starting in 2027 (with trials beginning in 2026). While RAN compute—which represents less than half of the $30B+ RAN market—is immaterial relative to NVIDIA’s $4 trillion-plus market cap, the potential upside becomes more meaningful when viewed in the context of NVIDIA’s broader telecom ambitions and its $165B in trailing-twelve-month revenue.

With a deployed base of more than 1 million BTS, Nokia is prioritizing three migration vectors to GPU/AI-RAN, in order of expected impact:

  • Purpose-built D-RAN [2.], by inserting a new card into existing AirScale slots.

  • D-RAN vRAN [3.], using COTS servers at the cell site.

  • Cloud RAN [4.] or vRAN, using centralized COTS infrastructure.

This approach aligns with wireless network operators’ desire to sweat existing AirScale assets while minimizing operational disruption.

Note 2.  Purpose-built D-RAN is a distributed RAN architecture where the baseband processing runs on dedicated, vendor-specific hardware at or very close to the cell site, rather than on generic COTS servers. It is “purpose-built” because the silicon, boards, and software stack are tightly integrated and optimized for RAN performance, power efficiency, and footprint, not general-purpose compute.

Note 3. vRAN or virtual RAN is a technology that virtualizes the functions of a cellular network’s radio access network, moving them from dedicated hardware to software running on general-purpose servers. This approach makes mobile networks more flexible, scalable, and cost-efficient by replacing proprietary hardware with software on common-off-the-shelf (COTS) hardware.

Note 4. Cloud RAN (C-RAN) is a centralized cellular network architecture that uses cloud computing to virtualize and process radio access network (RAN) functions. This architecture centralizes baseband units in a “BBU hotel,” allowing for more flexible and scalable network management, efficient resource allocation, and improved network performance. It allows operators to pool resources, adjust capacity based on demand, and support new services, which is a key enabler for 5G networks.

………………………………………………………………………………………………………………………………………………

In the purpose-built D-RAN model, the Distributed Unit, and often the higher-layer functions, are physically collocated with the radio unit at the site, making each site a largely self-contained RAN node. This contrasts with Cloud RAN or vRAN, where baseband functions are centralized or virtualized on shared cloud infrastructure, and with cloud/AI-RAN approaches that rely on GPUs or other general-purpose accelerators instead of custom RAN hardware.

The macro-RAN market (baseband plus radio) is roughly a $30 billion annual opportunity, with on the order of 1–2 million macro sites shipped per year. In that context, operators have limited appetite to pay more than $10,000 for a GPU per sector, even if software-led benefits accumulate over time. That is why NVIDIA is signaling GPU pricing in line with ARC-Compact but at roughly double the capacity, and why Nokia is targeting 48–50% gross margins in Mobile Infrastructure by 2028, slightly above the current run-rate.
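A hedged back-of-envelope, using the market figures above plus one assumption of our own (three sectors per typical macro site, which is not from the article), illustrates why the per-sector GPU price ceiling is binding:

```python
# Rough, illustrative math on why ~$10k per sector is a hard ceiling.
# Inputs from the article: ~$30B/yr macro-RAN market, 1-2M macro sites/yr,
# and (from the earlier recap) RAN compute is less than half of that market.
# The 3-sectors-per-site figure is a typical macro-site assumption, not from the article.

ran_market = 30e9            # $/yr, baseband + radio
compute_share = 0.5          # upper bound: compute is "less than half" of the market
sites_per_year = 1.5e6       # midpoint of the 1-2M range
sectors_per_site = 3         # assumed typical macro configuration

compute_budget = ran_market * compute_share
budget_per_sector = compute_budget / (sites_per_year * sectors_per_site)
print(f"Implied compute budget: ~${budget_per_sector:,.0f} per sector")
```

Under these assumptions the implied compute budget is only a few thousand dollars per sector, well under the $10,000 ceiling, which is why GPU-based RAN compute must be priced against far cheaper purpose-built silicon rather than against data-center GPU economics.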

If the TCO and performance-per-watt gap versus custom silicon continues to narrow, the partnership could materially influence AI-RAN and Cloud-RAN trajectories while also supporting Nokia’s margin expansion goals. AI-RAN was already expected to scale to roughly one-third of the RAN market by 2029; Nokia’s decision to lean harder into GPUs amplifies this structural shift without fundamentally changing the long-term 6G direction.

In the near term, GPU-enabled D-RAN using empty AirScale slots is expected to dominate deployments, reflecting operators’ preference for incremental, site-level upgrades. At the same time, the Nokia-NVIDIA partnership is not expected to meaningfully alter the overall Cloud RAN vs. D-RAN mix, Open RAN adoption (slow or non-existent), or the trajectory of multi-tenant RAN, which remain more dependent on network operator architecture and commercial decisions than on a single vendor–silicon alignment.

Nokia plans to remain disciplined and focus on areas where it can differentiate and unlock value, particularly through software and faster innovation cycles via its recently announced partnership with NVIDIA. The company sees meaningful opportunities to capture incremental share in North America, Europe, India, and select APAC markets. It is already off to a solid start: we estimate that Nokia’s 1Q25–3Q25 RAN revenue share outside North America improved slightly relative to 2024. Following this stabilization phase, Nokia is betting that its investments will pay off and that it will be well-positioned to lead with AI-native networks and 6G.

Nokia’s objective is clear: stabilize RAN in the short term, then grow by leading in AI-native networks and 6G over the longer horizon. Success now hinges on Nokia’s ability to operationalize the GPU-based RAN roadmap at scale and on NVIDIA’s ability to deliver carrier-grade economics and performance—turning the AI-RAN narrative into production-grade, repeatable deployments.


References:

Nokia and NVIDIA Take on RAN

Nokia in major pivot from traditional telecom to AI, cloud infrastructure, data center networking and 6G

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

Indosat Ooredoo Hutchison, Nokia and Nvidia AI-RAN research center in Indonesia amongst telco skepticism

Nvidia pays $1 billion for a stake in Nokia to collaborate on AI networking solutions

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

AI RAN Alliance selects Alex Choi as Chairman

Expose: AI is more than a bubble; it’s a data center debt bomb

GSMA, ETSI, IEEE, ITU & TM Forum: AI Telco Troubleshooting Challenge + TelecomGPT: a dedicated LLM for telecom applications

The GSMA — along with ETSI, IEEE GenAINet, the ITU, and TM Forum — today opened an innovation challenge calling on telco operators, AI researchers, and startups to build large-language models (LLMs) capable of root-cause analysis (RCA) for telecom network faults.  The AI Telco Troubleshooting Challenge is supported by Huawei, InterDigital, NextGCloud, RelationalAI, xFlowResearch and technical advisors from AT&T.

The new competition invites teams to submit AI models in three categories: Generalization to New Faults will assess the best performing LLMs for RCA; Small Models at the Edge will evaluate lightweight edge-deployable models; and Explainability/Reasoning will focus on the AI systems that clearly explain their reasoning. Additional categories will include securing edge-cloud deployments and enabling AI services for application developers.

The goal is to deliver AI tools that help operators automatically identify, diagnose, and (eventually) remediate network problems — potentially reducing both downtime and operational costs. This marks a concrete step toward turning “telco-AI” from pilot projects into operational infrastructure.

As telecom networks scale (5G, 5G-Advanced, edge, IoT), faults and failures become costlier. Automating fault detection and troubleshooting with AI could significantly boost network resilience, reduce manual labor, and enable faster recovery from outages.

“Large Language Models have become instrumental in the pursuit of autonomous, resilient and adaptive networks,” said Prof. Merouane Debbah, General Chair of IEEE GenAINet ETI. “Through this challenge, we are tackling core research and engineering challenges, such as generalisation to unseen network faults, interpretability and edge-efficient AI, that are vital for making AI-native telecom infrastructures a reality. IEEE GenAINet ETI is proud to support this initiative, which serves as a testbed for future-ready innovations across the global telco ecosystem.”

“ITU’s global AI challenges connect innovators with computing resources, datasets, and expert mentors to nurture AI innovation ecosystems worldwide,” said Seizo Onoe, Director of the ITU Telecommunication Standardization Bureau. “Crowdsourcing new solutions and creating conditions for them to scale, our challenges boost business by helping innovations achieve meaningful impact.”

“The future of telecoms depends on the autonomation of network resiliency – shifting from static infrastructure to AI-driven, context-aware, self-optimising networks. TM Forum’s AI-Native Blueprint provides the architectural foundation to make this reality, and the AI Telco Troubleshooting Challenge aligns perfectly to support the industry in moving beyond isolated pilots to production-grade resilient autonomation,” said Guy Lupo, AI and Data Mission lead at TM Forum.

The initiative builds on recent breakthroughs in applying AI to network operations, leveraging curated datasets such as TeleLogs and benchmarking frameworks developed by GSMA and its partners under the GSMA Open-Telco LLM Benchmarks community, which includes a leaderboard that highlights how various LLMs perform on telco-specific use cases.

“Network faults cost operators millions annually and root cause analysis is a critical pain point for operators,” said Louis Powell, Director of AI Technologies at GSMA. “By harnessing AI models capable of reasoning and diagnosing unseen faults, the industry can dramatically improve reliability and reduce operational costs. Through this challenge, we aim to accelerate the development of LLMs that combine reasoning, efficiency and scalability.”

“We are encouraged by the upside of this challenge after our team at AT&T fine-tuned a 4-billion-parameter small language model that topped all other evaluated models on the GSMA Open-Telco LLM Benchmarks (TeleLogs RCA task), including frontier models such as GPT-5, Claude Sonnet 4.5 and Grok-4,” said Andy Markus, Chief Data Officer at AT&T. “This challenge has the right mix of an important business problem and a technical opportunity, and we welcome the industry’s collaboration to take it to the next level.”

The AI Telco Troubleshooting Challenge opens for submissions on 28 November and closes on 1 February 2026, with the winners announced at a dedicated prize-giving session at MWC26 Barcelona.

…………………………………………………………………………………………………………………………………………………………………………

Separately, the GSMA Foundry and Khalifa University announced a strategic collaboration to develop “TelecomGPT,” a dedicated LLM for telecom applications, plus an Open-Telco Knowledge Graph based on 3GPP specifications.

  • These assets are intended to help the industry overcome limitations of general-purpose LLMs, which often struggle with telecom-specific technical contexts.

  • The plan: make TelecomGPT and related knowledge tools available for operators, vendors and researchers to accelerate AI-driven telco innovations.

Why it matters: A specialized “telco-native” LLM could improve automation, operations, R&D and standardization efforts — for example, helping operators configure networks, analyze logs, or build AI-powered services. It represents a shift toward embedding AI more deeply into core telecom infrastructure and operations.

…………………………………………………………………………………………………………………………………………………………………………………..

About GSMA
The GSMA is a global organization unifying the mobile ecosystem to discover, develop and deliver innovation foundational to positive business environments and societal change. Our vision is to unlock the full power of connectivity so that people, industry, and society thrive. Representing mobile operators and organizations across the mobile ecosystem and adjacent industries, the GSMA delivers for its members across three broad pillars: Connectivity for Good, Industry Services and Solutions, and Outreach. This activity includes advancing policy, tackling today’s biggest societal challenges, underpinning the technology and interoperability that make mobile work, and providing the world’s largest platform to convene the mobile ecosystem at the MWC and M360 series of events.

We invite you to find out more at gsma.com

About ETSI

ETSI is one of only three bodies officially recognized by the European Union as a European Standards Organization (ESO). It is an independent, not-for-profit body dedicated to ICT standardisation. With over 900 member organizations from more than 60 countries across five continents, ETSI offers an open and inclusive environment for members representing large and small private companies, research institutions, academia, governments, and public organizations. ETSI supports the timely development, ratification, and testing of globally applicable standards for ICT‑enabled systems, applications, and services across all sectors of industry and society. More on: etsi.org

About IEEE GenAINet

The aim of the IEEE Large Generative AI Models in Telecom Emerging Technology Initiative (GenAINet ETI) is to create a dynamic platform of research and innovation for academics, researchers, and industry leaders to advance the research on large generative AI in Telecom, through collaborative efforts across various disciplines, including mathematics, information theory, wireless communications, signal processing, networking, artificial intelligence, and more. More on: https://genainet.committees.comsoc.org

About ITU

The International Telecommunication Union (ITU) is the United Nations agency for digital technologies, driving innovation for people and the planet with 194 Member States and a membership of over 1,000 companies, universities, civil society, and international and regional organizations. Established in 1865, ITU coordinates the global use of the radio spectrum and satellite orbits, establishes international technology standards, drives universal connectivity and digital services, and is helping to make sure everyone benefits from sustainable digital transformation, including the most remote communities. From artificial intelligence (AI) to quantum, from satellites and submarine cables to advanced mobile and wireless broadband networks, ITU is committed to connecting the world and beyond. Learn more: www.itu.int

About TM Forum

TM Forum is an alliance of over 800 organizations spanning the global connectivity ecosystem, including the world’s top ten Communication Service Providers (CSPs), top three hyperscalers and Network Equipment Providers (NEPs), vendors, consultancies and system integrators, large and small. We provide a place for our Members to collaborate, innovate, and deliver lasting change. Together, we are building a sustainable future for the industry in connectivity and beyond. To find out more, visit: www.tmforum.org

References:

The AI Telco Troubleshooting Challenge Launches to Transform Network Reliability

AI Telco Troubleshooting Challenge global launch webinar

https://www.prnewswire.com/il/news-releases/gsma-foundry-and-khalifa-university-to-accelerate-ai-innovation-with-the-development-of-telecomgpt-302625362.html

GSMA Vision 2040 study identifies spectrum needs during the peak 6G era of 2035–2040

Gartner: Gen AI nearing trough of disillusionment; GSMA survey of network operator use of AI

 

NTT DOCOMO successful outdoor trial of AI-driven wireless interface with 3 partners

NTT DOCOMO has successfully completed the world’s first outdoor field trial of real-time transceiver systems leveraging artificial intelligence (AI)-driven wireless technology, a critical advancement for sixth-generation (6G) mobile communications (aka IMT-2030).

Conducted in collaboration with parent company NTT, Inc. (NTT), Nokia Bell Labs, and SK Telecom Co., Ltd., the field trials were held across three sites in Yokosuka City, Kanagawa Prefecture. The results showed that applying AI improved system throughput (transmission speed) by up to 100% over conventional, non-AI methods under identical environmental conditions, effectively doubling communication speeds.

Wireless communication quality can be compromised by fluctuations in radio propagation environments, leading to unstable connections. To mitigate this challenge, the partners developed “AI-AI technology,” which applies AI to both the transmitting and receiving ends of the wireless interface. This system dynamically optimizes modulation and demodulation schemes based on prevailing radio conditions, facilitating stable communication across diverse use cases. The efficacy of this technology had previously been confirmed in indoor environments.

The recent field trials aimed to verify the technology’s stable performance in complex outdoor settings, where radio conditions are subject to greater variability from factors such as temperature, weather, and physical obstructions.

Source: Pitinan Piyavatin/Alamy Stock Photo

This innovative AI wireless technology was evaluated across three distinct outdoor courses with varying propagation conditions, including the presence of obstacles and terminal mobility:

  • Course 1: A public road featuring gentle curves, with a test vehicle traveling up to 40 km/h.
  • Course 2: An environment with partial signal obstructions.
  • Course 3: A road with minimal obstructions, with a test vehicle traveling up to 60 km/h.

In all test scenarios, the technology demonstrated its ability to compensate for signal degradation, confirming enhanced communication speeds. Specifically, in the highly complex propagation conditions of Course 1, the AI-AI technology yielded an average throughput improvement of 18% and a maximum increase of 100% compared to traditional methods.

These findings enable higher-speed data transmission for users and allow network operators to enhance spectrum efficiency and deliver superior quality of service (QoS). The successful outdoor validation marks a significant milestone toward the practical implementation of 6G systems, which promise a combination of high wireless transmission efficiency and reduced power consumption.  NTT DOCOMO remains committed to refining this technology under a wide range of conditions and accelerating R&D efforts toward 6G realization, while simultaneously collaborating with global partners on 6G standardization (in 3GPP and ITU-R WP5D) and deployment.

This new technology will be featured at the NTT R&D FORUM 2025 hosted by NTT, scheduled from November 19–21 and November 25–26, 2025.

…………………………………………………………………………………………………………………………………………………………………………………….

These three AI-wireless field trials represent the latest joint effort stemming from the collaborative AI research partnership of DOCOMO, parent NTT, Nokia Bell Labs, and SK Telecom Co, which was established at Mobile World Congress (MWC) in February 2024.

NTT Docomo has forged additional 6G alliances with a range of partners, including Ericsson, domestic Japanese suppliers Fujitsu and NEC, and testing specialists Keysight Technologies and Rohde & Schwarz.

This collaboration highlights the extensive international cooperation in 6G development involving Japanese, Korean, and Western corporations. This contrasts sharply with 6G development initiatives in the People’s Republic of China, which remain predominantly insular and confined almost exclusively to domestic Chinese entities.

This year has seen an increase in partnerships among Korean and Japanese operators. Earlier this month, KDDI‘s research partnership with Nokia Bell Labs was announced, focusing on achieving 6G energy efficiency and enhanced network resilience. Samsung and SoftBank entered into a memorandum of understanding (MoU) last month to co-develop prospective next-generation technologies, encompassing 6G, AI-driven Radio Access Networks (AI RAN), and Large Telecom Models (LTMs).

In a separate MoU signed in March, KT‘s and Samsung’s collaboration was formalized to jointly advance 6G antenna technology. Additionally, KT has maintained a separate research engagement with Nokia centered on semantic communications research.

………………………………………………………………………………………………………………………………………………………………………………………….

About NTT DOCOMO:

NTT DOCOMO, Japan’s leading mobile operator with over 91 million subscribers, is one of the global leaders in 3G, 4G and 5G mobile network technologies.
Under the slogan “Bridging Worlds for Wonder & Happiness,” DOCOMO is actively collaborating with global partners to expand its business scope from mobile services to comprehensive solutions, aiming to deliver unsurpassed value and drive innovation in technology and communications, ultimately to support positive change and advancement in global society.

………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.docomo.ne.jp/english/info/media_center/pr/2025/1117_00.html

https://www.docomo.ne.jp/english/

https://www.lightreading.com/6g/ntt-docomo-doubles-6g-throughput-in-ai-trials

NTT Docomo will use its wireless technology to enter the metaverse

IDC Report: Telecom Operators Turn to AI to Boost EBITDA Margins

IDC Report: With Telecom Services Spending Growing Less than 2% Annually, Operators Turn to AI to Boost EBITDA Margins, November 6, 2025:

Worldwide spending on telecommunication and pay TV services will reach $1,532 billion in 2025, a 1.7% year-on-year increase, according to the International Data Corporation (IDC) Worldwide Semiannual Telecom Services Tracker. The latest forecast is slightly more optimistic than the one published earlier this year, assuming 0.1 percentage point higher growth in total market value.

“The regional dynamics remain mixed, with inflationary effects, competition, and Average Revenue per User (ARPU) trends playing a central role in shaping market trajectories,” said Kresimir Alic, research director, Worldwide Telecom Services at IDC.

Global telecom operators are strategically adopting AI to drive significant business improvements across several key areas. The integration of AI technology is enhancing network operations, refining customer service interactions, and strengthening fraud-prevention mechanisms that reduce losses, reinforce customer trust, and support regulatory compliance. With AI accelerating time-to-market for new services, telcos can better monetize emerging technologies like 5G and edge computing.

“In the longer term, as AI continues to evolve, it will be increasingly recognized not as a mere technological enhancement, but as a strategic enabler poised to drive sustainable growth for telecommunications operators,” according to the report. This strategic adoption is accelerating time-to-market for new services, enabling better monetization of technologies like 5G and edge computing (which requires a 5G SA core network). It represents cautious optimism for a global connectivity services market that has been stagnant for many years.

Key areas of AI adoption and expected improvements include:
  • Network Planning and Operations: AI is heavily used to optimize network performance and manage the complexity of modern networks, including 5G and future 6G technologies. This involves:
    • Predictive Maintenance: Anticipating hardware failures and network issues to ensure uninterrupted service and reduce downtime.
    • Automation and Orchestration: Automating complex tasks and managing physical, virtual, and containerized network functions.
    • Energy Efficiency: Making intelligent choices about radio access network (RAN) energy consumption and resource allocation to increase efficiency.
  • Customer Experience (CX) and Service: Enhancing customer engagement and service is a top priority. This is achieved through:
    • Personalized Services: Analyzing customer behavior and preferences to offer tailored products and marketing campaigns.
    • Intelligent Virtual Assistants/Chatbots: Automating customer interactions and improving self-service capabilities.
    • Churn Reduction: Using AI to predict customer churn and implement retention strategies.
  • Business Efficiency and Productivity: Operators are focused on driving agility and productivity across the organization. This includes:
    • Employee Productivity: Streamlining workflows and automating tasks using generative AI (GenAI) and agentic AI.
    • Cost Reduction: Driving efficiency in operations to lower overall costs.
    • Fraud Prevention: Deploying AI-enhanced systems to detect and mitigate fraud, protecting revenue streams and customer trust.
  • New Revenue Opportunities: AI is seen as a cornerstone for developing new services, such as AI-as-a-Service, and monetizing existing network assets like 5G capabilities. 
Overall, AI is moving from pilot projects to full-scale deployment, becoming a strategic engine for transformation across the entire telecom value chain. North American operators are leading the charge, and investments in AI infrastructure and solutions are expected to grow significantly, reaching an estimated $65 billion by 2029. 
……………………………………………………………………………………………………………………………………………………………………..
Telecom Services Revenue Comparison and Growth Rates:
The report stated that worldwide spending on telecommunication and pay TV services is projected to reach $1,532 billion in 2025, a 1.7% year-over-year (YoY) increase. The breakdown by telecom service type confirms that established trends remain intact, despite adjustments to overall market forecasts. IDC forecasts only 1% YoY growth for the Americas and Asia Pacific, as shown in this table:
Global Regional Services Revenue and Year-on-Year Growth (revenues in $B)

Global Region    2024 Revenue    2025 Revenue    25/24 Growth
Americas         $568            $574            1.0%
Asia/Pacific     $476            $481            1.0%
EMEA             $462            $477            3.2%
Grand Total      $1,507          $1,532          1.7%

Source: IDC Worldwide Semiannual Services Tracker – 1H 2025
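The table's growth column can be recomputed directly from the revenue figures. Note that because the quoted revenues are rounded to whole $B, a recomputation can differ slightly in the last decimal from IDC's published rates, which are presumably derived from unrounded data:

```python
# Recomputing year-over-year growth from the rounded $B revenue figures above.
def yoy_growth(prev: float, curr: float) -> float:
    """Year-over-year growth in percent."""
    return (curr - prev) / prev * 100

print(round(yoy_growth(1507, 1532), 1))  # Grand Total: 1.7
print(round(yoy_growth(462, 477), 1))    # EMEA: 3.2
```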

Mobile continues to dominate, driven by rising data consumption and the expansion of M2M applications, which are offsetting declines in traditional voice and messaging revenues.

Fixed data services are also expected to grow steadily, fueled by increasing demand for high-bandwidth connectivity.

In summary, IDC projects that the global connectivity services market will grow at a compound annual rate of 1.5% over the next five years, maintaining a cautiously optimistic outlook. As highlighted by recent IMF forecasts, the overall market environment is expected to be less stimulating than in previous years, shaped by rising protectionism and persistent economic uncertainty in key regions. While declining inflation may ease cost pressures, it is also likely to reduce the inflation-driven boost to telecom service spending seen in recent cycles. Political instability in areas such as Eastern Europe and the Middle East adds further complexity to the growth landscape. Most notably, saturation in mature telecom markets continues to be the primary constraint on expansion, limiting upside potential in traditional service segments.
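To put the 1.5% compound annual growth rate in concrete terms, extrapolating the 2025 base forward five years (an illustrative back-of-the-envelope projection, not an IDC figure) gives:

```python
# Illustrative extrapolation of the 1.5% CAGR from the 2025 base of $1,532B.
base_2025 = 1532        # market size in $B, from the article
cagr = 0.015            # 1.5% compound annual growth rate

projected_2030 = base_2025 * (1 + cagr) ** 5
print(round(projected_2030))  # ≈ $1,650B
```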

………………………………………………………………………………………………………………………………………………………….

About IDC Trackers:

IDC Tracker products provide accurate and timely market size, vendor share, and forecasts for hundreds of technology markets from more than 100 countries around the globe. Using proprietary tools and research processes, IDC’s Trackers are updated on a semiannual, quarterly, and monthly basis. Tracker results are delivered to clients in user-friendly Excel deliverables and online query tools.

For more information about IDC’s Worldwide Semiannual Telecom Services Tracker, please contact Kathy Nagamine at 650-350-6423 or [email protected].

About IDC:

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,000 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 100 countries. IDC’s analysis and insight help IT professionals, business executives, and the investment community make fact-based technology decisions and achieve their key business objectives. Founded in 1964, IDC is a wholly owned subsidiary of International Data Group (IDG), the world’s leading media, data and marketing services company that activates and engages the most influential technology buyers. To learn more about IDC, please visit www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn.

………………………………………………………………………………………………………………………………………………………….

References:

https://my.idc.com/getdoc.jsp?containerId=prUS53913925&

https://my.idc.com/getdoc.jsp?containerId=prEUR253369525

https://my.idc.com/getdoc.jsp?containerId=US53765725

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver years before IMT 2030 RIT submissions to ITU-R WP5D

Nokia and the test and measurement firm Rohde & Schwarz have created and successfully tested a “6G” radio receiver that uses AI technologies to overcome one of the biggest anticipated challenges of 6G network rollouts: the coverage limitations inherent in 6G’s higher-frequency spectrum.

–>This is truly astonishing, as ITU-R WP5D doesn’t even plan to evaluate 6G RIT/SRITs until February 2027, when the first submissions are invited to be presented.

Nokia Bell Labs developed the receiver and validated it using 6G test equipment and methodologies from Rohde & Schwarz. The two companies will unveil a proof-of-concept receiver at the Brooklyn 6G Summit on November 6, 2025.  Nokia says, “the machine learning capabilities in the receiver greatly boost uplink distance, enhancing coverage for future 6G networks. This will help operators roll out 6G over their existing 5G footprints, reducing deployment costs and accelerating time to market.”

Image Credit: Rohde & Schwarz

Nokia Bell Labs and Rohde & Schwarz have tested this new AI receiver under real world conditions, achieving uplink distance improvements over today’s receiver technologies ranging from 10% to 25%. The testbed comprises an R&S SMW200A vector signal generator, used for uplink signal generation and channel emulation. On the receive side, the newly launched FSWX signal and spectrum analyzer from Rohde & Schwarz is employed to perform the AI inference for Nokia’s AI receiver. In addition to enhancing coverage, the AI technology also demonstrates improved throughput and power efficiency, multiplying the benefits it will provide in the 6G era.

“One of the key issues facing future 6G deployments is the coverage limitations inherent in 6G’s higher-frequency spectrum. Typically, we would need to build denser networks with more cell sites to overcome this problem. By boosting the coverage of 6G receivers, however, AI technology will help us build 6G infrastructure over current 5G footprints,” said Peter Vetter, President, Core Research, Bell Labs, Nokia.

“Rohde & Schwarz is excited to collaborate with Nokia in pioneering AI-driven 6G receiver technology. Leveraging more than 90 years of experience in test and measurement, we’re uniquely positioned to support the development of next-generation wireless, allowing us to evaluate and refine AI algorithms at this crucial pre-standardization stage. This partnership builds on our long history of innovation and demonstrates our commitment to shaping the future of 6G,” said Michael Fischlein, VP, Spectrum & Network Analyzers, EMC and Antenna Test, Rohde & Schwarz.

…………………………………………………………………………………………………………………………………………………………………………………………

Last month, Nokia teamed up with rival kit vendor Ericsson to work on video coding standardization in preparation for 6G. The project, which also involved Berlin’s Fraunhofer Heinrich Hertz Institute (HHI), demonstrated a new video codec that the partners claim has higher compression efficiency than the current standards (H.264/AVC, H.265/HEVC, and H.266/VVC) without significantly increasing complexity. Its wider aim, we were told at the time, is to strengthen Europe’s role in next-generation standardization.

…………………………………………………………………………………………………………………………………………………………………………………………

About Nokia:

At Nokia, we create technology that helps the world act together.

As a B2B technology innovation leader, we are pioneering networks that sense, think and act by leveraging our work across mobile, fixed and cloud networks. In addition, we create value with intellectual property and long-term research, led by the award-winning Nokia Bell Labs, which is celebrating 100 years of innovation.

With truly open architectures that seamlessly integrate into any ecosystem, our high-performance networks create new opportunities for monetization and scale. Service providers, enterprises and partners worldwide trust Nokia to deliver secure, reliable and sustainable networks today – and work with us to create the digital services and applications of the future.

About Rohde & Schwarz:

Rohde & Schwarz is striving for a safer and connected world with its Test & Measurement, Technology Systems and Networks & Cybersecurity Divisions. For over 90 years, the global technology group has pushed technical boundaries with developments in cutting-edge technologies. The company’s leading-edge products and solutions empower industrial, regulatory and government customers to attain technological and digital sovereignty. The privately owned, Munich-based company can act independently, long-term and sustainably. Rohde & Schwarz generated a net revenue of EUR 3.16 billion in the 2024/2025 fiscal year (July to June). On June 30, 2025, Rohde & Schwarz had more than 15,000 employees worldwide.

 

References:

https://www.nokia.com/newsroom/nokia-and-rohde–schwarz-collaborate-on-ai-powered-6g-receiver-to-cut-costs-accelerate-time-to-market/

https://www.rohde-schwarz.com/de/unternehmen/news-und-presse/all-news/nokia-and-rohde-schwarz-collaborate-on-ai-powered-6g-receiver-to-cut-costs-accelerate-time-to-market-pressemitteilungen-detailseite_229356-1593925.html

ITU-R WP 5D Timeline for submission, evaluation process & consensus building for IMT-2030 (6G) RITs/SRITs

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Nvidia pays $1 billion for a stake in Nokia to collaborate on AI networking solutions

Highlights of Nokia’s Smart Factory in Oulu, Finland for 5G and 6G innovation

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

NGMN: 6G Key Messages from a network operator point of view

Nvidia pays $1 billion for a stake in Nokia to collaborate on AI networking solutions

This is not only astonishing but unheard of: the world’s largest and most popular fabless semiconductor company, Nvidia, taking a $1 billion stake in Nokia, a telecom equipment maker reinventing itself as a data center connectivity company.

Indeed, GPU king Nvidia will pay $1 billion for a 2.9% stake in Nokia as part of a deal focused on AI and data centers, the Finnish telecom equipment maker said on Tuesday, as its shares hit their highest level in nearly a decade on hopes that AI will lift revenue and profits. The nonexclusive partnership and the investment will make Nvidia the second-largest shareholder in Nokia. Nokia said it will issue 166,389,351 new shares for Nvidia, which the U.S. company will subscribe to at $6.01 per share.
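The reported deal terms are internally consistent, which a quick cross-check confirms: the new shares times the subscription price lands on the $1 billion headline figure, and the 2.9% post-issue stake implies Nokia's total share count:

```python
# Cross-checking the reported Nvidia-Nokia deal terms against each other.
new_shares = 166_389_351
price_usd = 6.01

investment = new_shares * price_usd
print(f"${investment / 1e9:.3f}B")           # ≈ $1.000B

total_shares = new_shares / 0.029            # shares implied by a 2.9% stake
print(f"≈{total_shares / 1e9:.2f}B shares")  # ≈ 5.74B post-issue
```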

Nokia said the companies will collaborate on artificial intelligence networking solutions and explore opportunities to include its data center communications products in Nvidia’s future AI infrastructure plans. Nokia and its Swedish rival Ericsson both make networking equipment for connectivity inside (intra-) data centers and between (inter-) data centers and have been benefiting from increased AI use.

Summary:

  • NVIDIA and Nokia to establish a strategic partnership to enable accelerated development and deployment of next generation AI native mobile networks and AI networking infrastructure.
  • NVIDIA introduces NVIDIA Arc Aerial RAN Computer, a 6G-ready telecommunications computing platform.
  • Nokia to expand its global access portfolio with new AI-RAN product based on NVIDIA platform.
  • T-Mobile U.S. is working with Nokia and NVIDIA to integrate AI-RAN technologies into its 6G development process.
  • Collaboration enables new AI services and improved consumer experiences to support explosive growth in mobile AI traffic.
  • Dell Technologies provides PowerEdge servers to power new AI-RAN solution.
  • Partnership marks turning point for the industry, paving the way to AI-native 6G by taking AI-RAN to innovation and commercialization at a global scale.

In some respects, this new partnership competes with Nvidia’s own data center connectivity solutions from its Mellanox Technologies division, which it acquired for $6.9 billion in 2019. Meanwhile, Nokia now claims to have worked on a redesign to ensure its RAN software is compatible with Nvidia’s compute unified device architecture (CUDA) platform, meaning it can run on Nvidia’s GPUs. Nvidia has also modified its hardware offering, creating capacity cards that will slot directly into Nokia’s existing AirScale baseband units at mobile sites.

Having dethroned Intel several years ago, Nvidia now has a near-monopoly in supplying GPU chips for data centers and has partnered with companies ranging from OpenAI to Microsoft. AMD is a distant second but is gaining ground in the data center GPU space, as is ARM Ltd with its RISC CPU cores. Capital expenditure on data center infrastructure is expected to exceed $1.7 trillion by 2030, according to consulting firm McKinsey, largely because of the expansion of AI.

Nvidia CEO Jensen Huang said the deal would help make the U.S. the center of the next revolution in 6G. “Thank you for helping the United States bring telecommunication technology back to America,” Huang said in a speech in Washington, addressing Nokia CEO Justin Hotard (ex-Intel). “The key thing here is it’s American technology delivering the base capability, which is the accelerated computing stack from Nvidia, now purpose-built for mobile,” Hotard told Reuters in an interview. “Jensen and I have been talking for a little bit and I love the pace at which Nvidia moves,” Hotard said. “It’s a pace that I aspire for us to move at Nokia.” He expects the new equipment to start contributing to revenue from 2027 as it goes into commercial deployment, first with 5G, followed by 6G after 2030.

Nvidia has been on a spending spree in recent weeks. The company in September pledged to invest $5 billion in beleaguered chip maker Intel. The investment pairs the world’s most valuable company, which has been a darling of the AI boom, with a chip maker that has almost completely fallen out of the AI conversation.

Later that month, Nvidia said it planned to invest up to $100 billion in OpenAI over an unspecified period that will likely span at least a few years. The partnership includes plans for an enormous data-center build-out and will allow OpenAI to build and deploy at least 10 gigawatts of Nvidia systems.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Tech Details:

Nokia uses Marvell Physical Layer (Layer 1) baseband chips for many of its products. Among other things, this ensured Nokia had a single software stack for all its virtual and purpose-built RAN products. Pallavi Mahajan, Nokia’s recently appointed chief technology and AI officer, told Light Reading that their software could easily adapt to run on Nvidia’s GPUs: “We built a hardware abstraction layer so that whether you are on Marvell, whether you are on any of the x86 servers or whether you are on GPUs, the abstraction takes away from that complexity, and the software is still the same.”

Fully independent software has been something of a Holy Grail for the entire industry. It would have ramifications for the whole market and its economics. Yet Nokia has conceivably been able to minimize the effort required to put its Layer 1 and specific higher-layer functions on a GPU. “There are going to be pieces of the software that are going to leverage on the accelerated compute,” said Mahajan. “That’s where we will bring in the CUDA integration pieces. But it’s not the entire software,” she added.  The appeal of Nvidia as an alternative was largely to be found in “the programmability pieces that you don’t have on any other general merchant silicon,” said Mahajan. “There’s also this entire ecosystem, the CUDA ecosystem, that comes in.” One of Nvidia’s most eye-catching recent moves is the decision to “open source” Aerial, its own CUDA-based RAN software framework, so that other developers can tinker, she says. “What it now enables is the entire ecosystem to go out and contribute.”
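The hardware abstraction layer Mahajan describes can be sketched minimally: the upper-layer RAN software codes against one interface, and per-silicon backends hide whether Layer 1 runs on Marvell ASICs, x86 servers, or Nvidia GPUs. All class and method names below are hypothetical illustrations, not Nokia's actual APIs:

```python
# Minimal sketch of a hardware abstraction layer (HAL) for Layer 1 offload.
# Backend names and methods are hypothetical, for illustration only.
from abc import ABC, abstractmethod

class L1Accelerator(ABC):
    """Uniform interface the upper-layer RAN software codes against."""
    @abstractmethod
    def run_fec(self, codeword: bytes) -> bytes: ...

class MarvellBackend(L1Accelerator):
    def run_fec(self, codeword: bytes) -> bytes:
        # would dispatch to the Marvell baseband ASIC driver
        return codeword

class CudaBackend(L1Accelerator):
    def run_fec(self, codeword: bytes) -> bytes:
        # would launch a GPU kernel via a CUDA-based framework
        return codeword

def decode(hal: L1Accelerator, data: bytes) -> bytes:
    # identical upper-layer code regardless of the silicon underneath
    return hal.run_fec(data)
```

This is exactly why, as Mahajan notes, only the pieces behind the interface need CUDA integration while "the software is still the same" above it.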

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Quotes:

“Telecommunications is a critical national infrastructure — the digital nervous system of our economy and security,” said Jensen Huang, founder and CEO of NVIDIA. “Built on NVIDIA CUDA and AI, AI-RAN will revolutionize telecommunications — a generational platform shift that empowers the United States to regain global leadership in this vital infrastructure technology. Together with Nokia and America’s telecom ecosystem, we’re igniting this revolution, equipping operators to build intelligent, adaptive networks that will define the next generation of global connectivity.”

“The next leap in telecom isn’t just from 5G to 6G — it’s a fundamental redesign of the network to deliver AI-powered connectivity, capable of processing intelligence from the data center all the way to the edge. Our partnership with NVIDIA, and their investment in Nokia, will accelerate AI-RAN innovation to put an AI data center into everyone’s pocket,” said Justin Hotard, President and CEO of Nokia. “We’re proud to drive this industry transformation with NVIDIA, Dell Technologies, and T-Mobile U.S., our first AI-RAN deployments in T-Mobile’s network will ensure America leads in the advanced connectivity that AI needs.”

……………………………………………………………………………………………………………………………………………………………………………………

Editor’s Notes:

1.  In more advanced 5G networks, Physical Layer functions have demanded the support of custom silicon, or “accelerators.”  A technique known as “lookaside,” favored by Ericsson and Samsung, uses an accelerator for only a single problematic Layer 1 task – forward error correction – and keeps everything else on the CPU. Nokia prefers the “inline” approach, which puts the whole of Layer 1 on the accelerator.

2. The huge AI-RAN push that Nvidia started with the formation of the AI-RAN Alliance in early 2024 has not met with an enthusiastic telco response so far. Results from Nokia as well as Ericsson show wireless network operators are spending less on 5G rollouts than they were in the early 2020s. Telco numbers indicate the appetite for smartphone and other mobile data services has not produced any sales growth. As companies prioritize efficiency above all else, baseband units that include Marvell and Nvidia cards may seem too expensive.
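The lookaside/inline distinction in Editor's Note 1 can be sketched as two pipeline shapes. All class and function names below are illustrative stand-ins, not vendor code:

```python
# Toy contrast between the two Layer 1 acceleration models.
# Lookaside (Ericsson, Samsung): only forward error correction (FEC) is
# offloaded. Inline (Nokia): the entire Layer 1 runs on the accelerator.

class Cpu:
    def channel_estimate(self, x): return x
    def equalize(self, x): return x
    def demap(self, x): return x

class Accelerator:
    def fec_decode(self, x): return x      # the one lookaside offload
    def full_layer1(self, x): return x     # entire pipeline, inline style

def layer1_lookaside(samples, cpu: Cpu, accel: Accelerator):
    x = cpu.channel_estimate(samples)
    x = cpu.equalize(x)
    x = accel.fec_decode(x)   # only FEC leaves the CPU
    return cpu.demap(x)

def layer1_inline(samples, accel: Accelerator):
    return accel.full_layer1(samples)  # CPU never touches Layer 1
```

The trade-off follows from the shapes: lookaside keeps most software portable across CPUs at the cost of a round trip per offload, while inline maximizes use of the accelerator but couples the whole Layer 1 to that specific hardware.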

……………………………………………………………………………………………………………………………………………………………………………………….

Other Opinions and Quotes:

Nvidia chips are likely to be more expensive, said Mads Rosendal, analyst at Danske Bank Credit Research, but the proposed partnership would be mutually beneficial, given Nvidia’s large share in the U.S. data center market.

“This is a strong endorsement of Nokia’s capabilities,” said PP Foresight analyst Paolo Pescatore. “Next-generation networks, such as 6G, will play a significant role in enabling new AI-powered experiences,” he added.

Iain Morris, International Editor, Light Reading: “Layer 1 control software runs on ARM RISC CPU cores in both Marvell and Nvidia technologies. The bigger differences tend to be in the hardware acceleration “kernels,” or central components, which have some unique demands. Yet Nokia has been working to put as much as it possibly can into a bucket of common software. Regardless, if Nvidia is effectively paying for all this with its $1 billion investment, the risks for Nokia may be small… Nokia’s customers will in future have an AI-RAN choice that limits or even shrinks the floorspace for Marvell. The development also points to more subtle changes in Nokia’s thinking. The message earlier this year was that Nokia did not require a GPU to implement AI for RAN, whereby machine-generated algorithms help to improve network performance and efficiency. Marvell had that covered because it had incorporated AI and machine-learning technologies into the baseband chips used by Nokia.”

“If you start doing inline, you typically get much more locked into the hardware,” said Per Narvinger, the president of Ericsson’s mobile networks business group, on a recent analyst call. During its own trials with Nvidia, Ericsson said it was effectively able to redeploy virtual RAN software written for Intel’s x86 CPUs on the Grace CPU with minimal changes, leaving the GPU only as a possible option for the FEC accelerator.  Putting the entire Layer 1 on a GPU would mean “you probably also get more tightly into that specific implementation,” said Narvinger. “Where does it really benefit from having that kind of parallel compute system?”

………………………………………………………………………………………………………………………………………………….

Separately, Nokia and Nvidia will partner with T-Mobile U.S. to develop and test AI RAN technologies for developing 6G, Nokia said in its press release.  Trials are expected to begin in 2026, focused on field validation of performance and efficiency gains for customers.

References:

https://nvidianews.nvidia.com/news/nvidia-nokia-ai-telecommunications

https://www.reuters.com/world/europe/nvidia-make-1-billion-investment-finlands-nokia-2025-10-28/

https://www.lightreading.com/5g/nvidia-takes-1b-stake-in-nokia-which-promises-5g-and-6g-overhaul

https://www.wsj.com/business/telecom/nvidia-takes-1-billion-stake-in-nokia-69f75bb6

Highlights of Nokia’s Smart Factory in Oulu, Finland for 5G and 6G innovation

Nokia & Deutsche Bahn deploy world’s first 1900 MHz 5G radio network meeting FRMCS requirements

Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?

Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions

Verizon partners with Nokia to deploy large private 5G network in the UK

184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers

As of October, over 184,000 global tech jobs were cut in 2025, according to a report from Silicon Valley Business Journal. Of those, 50,184 were directly related to the implementation of artificial intelligence (AI) and automation tools by businesses. Silicon Valley’s AI boom has been pummeling headcounts across major companies in the region and globally. U.S. companies accounted for about 123,000 of the layoffs.
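The "~27.3%" in the headline follows directly from the two totals quoted above (using 184,000 as the base, though the article says "over 184,000", so the true share is slightly lower):

```python
# Deriving the headline's ~27.3% AI-related share of 2025 tech layoffs.
total_cuts = 184_000   # approximate global tech layoffs, 2025 year to date
ai_related = 50_184    # cuts directly tied to AI/automation adoption

share = ai_related / total_cuts * 100
print(f"{share:.1f}%")  # ≈ 27.3%
```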

These are the 10 tech companies with the most significant mass layoffs since January 2025:

  • Intel: 33,900 layoffs. The company has cited the need to reduce costs and restructure its organization after years of technical and financial setbacks.
  • Microsoft: 19,215 layoffs. The tech giant has conducted multiple rounds of cuts throughout the year across various departments as it prioritizes AI investments.
  • TCS: 12,000 layoffs. As a major IT firm, Tata Consultancy Services’ cuts largely affected mid-level and senior positions, which are becoming redundant due to AI and evolving client demands.
  • Accenture: 11,000 layoffs. The consulting company reduced its headcount as it shifts toward greater automation and AI-driven services.
  • Panasonic: 10,000 layoffs. The Japanese manufacturer announced these job cuts as part of a strategy to improve efficiency and focus on core business areas.
  • IBM: 9,000 layoffs as part of a restructuring effort to shift some roles to India and align the workforce with areas like AI and hybrid cloud. The layoffs were reportedly concentrated in certain teams, including the Cloud Classic division, and impacted locations such as Raleigh, New York, Dallas, and California. 
  • Amazon: 5,555 layoffs. Cuts have impacted various areas, including the Amazon Web Services (AWS) cloud unit and the consumer retail business.
  • Salesforce: 5,000 layoffs. Many of these cuts impacted the customer service division, where AI agents now handle a significant portion of client interactions.
  • STMicro: 5,000 cuts over the next three years, including 2,800 job cuts announced earlier this year. Around 2,000 employees will leave the Franco-Italian chipmaker through attrition, bringing the total with voluntary departures to 5,000, CEO Jean-Marc Chery said at a June 4th event in Paris hosted by BNP Paribas.
  • Meta: 3,720 layoffs. The company has made multiple rounds of cuts targeting “low-performers” and positions within its AI and virtual reality divisions.  More details below.

……………………………………………………………………………………………………………………………………………………………………..

Image Credit: simplehappyart via Getty Images

……………………………………………………………………………………………………………………………………………………………………..

Contradicting its CEO's earlier statement that the company would not cut jobs in favor of AI, Cisco announced layoffs of 221 employees in the San Francisco Bay Area in August, affecting roles in Milpitas and San Francisco, despite strong financial results. The cuts, which included software engineering roles, are part of the company's broader strategy to streamline operations and focus on AI.

Only days after revealing a partnership with OpenAI, semiconductor maker Broadcom is cutting hundreds of staff in Palo Alto. The cuts follow its 2023 acquisition of VMware, which was accompanied by thousands of job cuts as part of a multiyear restructuring effort. Current reports indicate that the company is eliminating additional positions across its sales and account management teams.
“The wave of tech layoffs in 2025 keeps growing — and Broadcom has once again become one of the biggest names in the mix,” said RationalFX analyst Alan Cohen in a statement. “The broader industry climate isn’t helping: a squeeze from tariffs, trade tensions, and weakening demand has forced tech giants to slash costs just when AI automation was supposed to create new jobs — instead, it’s replacing more of them,” Cohen added.
………………………………………………………………………………………………………………………………………………………………..
Meta followed suit on October 22nd, announcing 600 job cuts within its AI division, part of the widening wave of tech layoffs tied to automation and artificial intelligence. In total, 700 additional jobs were cut by Meta after the report was published: 600 from its AI division and 100 from its risk review organization. That group is largely staffed by employees responsible for making sure Meta's products abide by an agreement with the Federal Trade Commission as well as privacy rules set by regulatory bodies worldwide.
………………………………………………………………………………………………………………………………………………………………..

About two-thirds of all job cuts (roughly 123,000 positions) came from U.S.-based companies, with the remainder spread mainly across Ireland, India and Japan. The report compiles data from WARN notices, TrueUp, TechCrunch and Layoffs.fyi through October 21st.

Several trends are driving the ongoing reduction in tech jobs:
  • Shift to AI and automation: Many companies are restructuring their workforce to focus on AI-centric growth and are automating tasks previously done by human workers, particularly in customer service and quality assurance.
  • Economic headwinds: Ongoing economic uncertainty, inflation, and higher interest rates are prompting tech companies to cut costs and streamline operations.
  • Market corrections: Following a period of rapid over-hiring, many tech companies are now “right-sizing” their staff to become leaner and more efficient.

References:

https://www.bizjournals.com/sanjose/news/2025/10/22/tech-layoffs-ai-automation-broadcom-meta-intel.html

Report: Broadcom Announces Further Job Cuts as Global Tech Layoffs Approach 185,000 in 2025

The Tech Industry’s Workforce Crisis: 166,387 layoffs so far in 2025, projected to reach 235K by the end of the year

https://www.linkedin.com/posts/edmund-ho-1277b2125_180k-job-cuts-biggest-tech-company-layoffs-activity-7381201561152184320-N5ah/

Tech layoffs continue unabated: pink slip season in hard-hit SF Bay Area

HPE cost reduction campaign with more layoffs; 250 AI PoC trials or deployments

High Tech Layoffs Explained: The End of the Free Money Party

Massive layoffs and cost cutting will decimate Intel’s already tiny 5G network business

Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors

Telecom layoffs continue unabated as AT&T leads the pack – a growth engine with only 1% YoY growth?

Cisco restructuring plan will result in ~4100 layoffs; focus on security and cloud based products

Cloud Computing Giants Growth Slows; Recession Looms, Layoffs Begin

IBM and Groq Partner to Accelerate Enterprise AI Inference Capabilities

IBM and Groq [1.] today announced a strategic market and technology partnership designed to give clients immediate access to Groq's inference technology, GroqCloud, on watsonx Orchestrate, providing high-speed AI inference at a cost that helps accelerate agentic AI deployment. As part of the partnership, Groq and IBM plan to integrate and enhance Red Hat's open source vLLM technology with Groq's LPU architecture. IBM Granite models are also planned to be supported on GroqCloud for IBM clients.

………………………………………………………………………………………………………………………………………………….

Note 1. Groq is a privately held company founded by Jonathan Ross in 2016. As a startup, its ownership is distributed among its founders, employees, and a variety of venture capital and institutional investors, including BlackRock Private Equity Partners. Groq developed the LPU and GroqCloud to make compute faster and more affordable. The company says it is trusted by over two million developers and teams worldwide and is a core part of the American AI Stack.

NOTE that Grok, a conversational AI assistant developed by Elon Musk's xAI, is a completely different entity.

………………………………………………………………………………………………………………………………………………….

Enterprises moving AI agents from pilot to production still face challenges with speed, cost, and reliability, especially in mission-critical sectors like healthcare, finance, government, retail, and manufacturing. This partnership combines Groq’s inference speed, cost efficiency, and access to the latest open-source models with IBM’s agentic AI orchestration to deliver the infrastructure needed to help enterprises scale.

Powered by its custom LPU, GroqCloud delivers inference that is over 5X faster and more cost-efficient than traditional GPU systems. The result is consistently low latency and dependable performance, even as workloads scale globally. That is especially valuable for agentic AI in regulated industries.

For example, IBM's healthcare clients receive thousands of complex patient questions simultaneously. With Groq, IBM's AI agents can analyze information in real time and deliver accurate answers immediately, enhancing customer experiences and allowing organizations to make faster, smarter decisions.

This technology is also being applied in non-regulated industries. IBM clients across retail and consumer packaged goods are using Groq for HR agents to help enhance automation of HR processes and increase employee productivity.

“Many large enterprise organizations have a range of options with AI inferencing when they’re experimenting, but when they want to go into production, they must ensure complex workflows can be deployed successfully to ensure high-quality experiences,” said Rob Thomas, SVP, Software and Chief Commercial Officer at IBM. “Our partnership with Groq underscores IBM’s commitment to providing clients with the most advanced technologies to achieve AI deployment and drive business value.”

“With Groq’s speed and IBM’s enterprise expertise, we’re making agentic AI real for business. Together, we’re enabling organizations to unlock the full potential of AI-driven responses with the performance needed to scale,” said Jonathan Ross, CEO & Founder at Groq. “Beyond speed and resilience, this partnership is about transforming how enterprises work with AI, moving from experimentation to enterprise-wide adoption with confidence, and opening the door to new patterns where AI can act instantly and learn continuously.”

IBM will offer access to GroqCloud starting immediately, and the joint teams will focus on delivering the following capabilities to IBM clients:

  • High speed and high-performance inference that unlocks the full potential of AI models and agentic AI, powering use cases such as customer care, employee support and productivity enhancement.
  • Security and privacy-focused AI deployment designed to support the most stringent regulatory and security requirements, enabling effective execution of complex workflows.
  • Seamless integration with IBM's agentic product, watsonx Orchestrate, providing clients flexibility to adopt purpose-built agentic patterns tailored to diverse use cases.

The partnership also plans to integrate and enhance Red Hat's open source vLLM technology with Groq's LPU architecture to offer different approaches to common AI challenges developers face during inference. The solution is expected to enable watsonx to leverage these capabilities in a familiar way and let customers stay in their preferred tools while accelerating inference with GroqCloud. The integration will address key AI developer needs, including inference orchestration, load balancing, and hardware acceleration, ultimately streamlining the inference process.

Together, IBM and Groq aim to provide enhanced access to enterprise AI that is fast, intelligent, and built for real-world impact.

References:

https://www.prnewswire.com/news-releases/ibm-and-groq-partner-to-accelerate-enterprise-ai-deployment-with-speed-and-scale-302588893.html

FT: Scale of AI private company valuations dwarfs dot-com boom

AI adoption to accelerate growth in the $215 billion Data Center market

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

 

FT: Scale of AI private company valuations dwarfs dot-com boom

The Financial Times reports that ten loss-making artificial intelligence (AI) start-ups have gained close to $1 trillion in private market valuation in the past 12 months, fuelling fears about a bubble in private markets that is much greater than the dot-com bubble at the end of the 20th century. OpenAI leads the pack with a $500 billion valuation, but Anthropic and xAI have also seen their values march higher amid a mad scramble to buy into emerging AI companies. Smaller firms building AI applications have also surged, while more established businesses, like Databricks, have soared after embracing the technology.

U.S. venture capitalists (VCs) have poured $161 billion into artificial intelligence startups this year, roughly two-thirds of all venture spending, according to PitchBook, even as the technology's commercial payoff remains elusive. VCs are on track to spend well over $200 billion on AI companies this year.

Most of that money has gone to just 10 companies, including OpenAI, Anthropic, Databricks, xAI, Perplexity, Scale AI, and Figure AI, whose combined valuations have swelled by nearly $1 trillion, Financial Times calculations show. Those AI start-ups are all burning cash, with no profits forecast for many years.

Start-ups with about $5 million in annual recurring revenue (ARR), a metric fast-growing young businesses use to provide a snapshot of their revenue run-rate, are seeking valuations of more than $500 million, according to a senior Silicon Valley venture capitalist.

Valuing unproven businesses at 100 times their revenue or more dwarfs the excesses of 2021, he added: “Even during peak Zirp [zero-interest rate policies], these would have been $250mn-$300mn valuations.”
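For concreteness, the multiples being described can be laid out in a few lines of Python; the figures are the ones quoted above, and the peak-ZIRP range is the venture capitalist's estimate:

```python
# Implied revenue multiples from the figures quoted above.
arr = 5_000_000                 # ~$5M annual recurring revenue
valuation_sought = 500_000_000  # valuations now being sought
zirp_low, zirp_high = 250_000_000, 300_000_000  # peak-ZIRP range per the VC quoted

print(f"Current multiple: {valuation_sought / arr:.0f}x ARR")  # 100x
print(f"Peak-ZIRP multiple: {zirp_low / arr:.0f}x to {zirp_high / arr:.0f}x ARR")  # 50x to 60x
```

In other words, today's asking prices are roughly double what the same revenue commanded at the top of the zero-interest-rate era.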

“The market is investing as if all these companies are outliers. That’s generally not the way it works out,” he said. VCs typically expect to lose money on most of their bets, but see one or two pay the rest off many times over.

“There will be casualties. Just like there always will be, just like there always is in the tech industry,” said Marc Benioff, co-founder and chief executive of Salesforce, which has invested heavily in AI. He estimates $1 trillion of investment in AI might be wasted, but that the technology will ultimately yield 10 times that in new value.

“The only way we know how to build great technology is to throw as much against the wall as possible, see what sticks, and then focus on the winners,” he added.

“Of course there’s a bubble,” said Hemant Taneja, chief executive of General Catalyst, which raised an $8 billion fund last year and has backed Anthropic and Mistral. “Bubbles align capital and talent around new trends. There’s always some destruction, but they also produce lasting innovation.”

Venture investors have weathered cycles of boom and bust before — from the dot-com crash in 2000 to the software downturn in 2022 — but the current wave of AI funding is unprecedented. In 2000, VCs invested $10.5 billion in internet startups; in 2021, they deployed $135 billion into software firms. This year, they are on pace to exceed $200 billion in AI. “We’ve gone from the doldrums to full-on FOMO,” said one investment executive.
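A quick Python sketch puts the three funding cycles cited above side by side (nominal dollars, not inflation-adjusted):

```python
# Peak VC deployment in three cycles, per the Financial Times figures above.
# Amounts in USD billions, nominal (not inflation-adjusted).
cycles = {
    "2000 internet startups": 10.5,
    "2021 software firms": 135.0,
    "2025 AI companies (on pace)": 200.0,
}

base = cycles["2000 internet startups"]
for name, amount in cycles.items():
    print(f"{name}: ${amount:.1f}B, {amount / base:.0f}x the 2000 peak")
```

On those numbers, this year's AI deployment is roughly 19 times the dot-com-era peak, which is why even veterans of prior cycles call the current wave unprecedented.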

OpenAI and its start-up peers are competing with Meta, Google, Microsoft, Amazon, IBM, and others in a hugely capital-intensive race to train ever-better models, meaning the path to profitability is also likely to be longer than for previous generations of start-ups.

Backers are betting that AI will open multi-trillion-dollar markets, from automated coding to AI friends and companionship, even as the revenue multiples described above test credulity.

The enthusiasm has spilled into public markets. Shares of Nvidia, AMD, Broadcom, and Oracle have collectively gained hundreds of billions in market value from their ties to OpenAI. But those gains could unwind quickly if questions about OpenAI's mounting losses and financial sustainability persist.

Sebastian Mallaby, author of The Power Law, summed it up beautifully:

“The logic among investors is simple: if we get AGI (Artificial General Intelligence, which would match or exceed human thinking), it’s all worth it. If we don’t, it isn’t… It comes down to these articles of faith about Sam’s (Sam Altman of OpenAI) ability to work it out.”

References:

https://www.ft.com/content/59baba74-c039-4fa7-9d63-b14f8b2bb9e2

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

AI Data Center Boom Carries Huge Default and Demand Risks

Canalys & Gartner: AI investments drive growth in cloud infrastructure spending

 
