Custom AI Chips: Powering the Next Wave of Intelligent Computing

by the Indxx team of researchers with Alan J. Weissberger

The Market for AI Related Semiconductors:

Several market research firms and banks forecast that revenue from AI-related semiconductors will grow at about 18% annually over the next few years—five times faster than non-AI semiconductor market segments.

  • IDC forecasts that global AI hardware spending, including chip demand, will grow at an annual rate of 18%.
  • Morgan Stanley analysts predict that AI-related semiconductor revenue at one company, Taiwan Semiconductor (TSMC), will grow at an 18% annual rate.
  • Infosys notes that data center semiconductor sales are projected to grow at an 18% CAGR.
  • MarketResearch.biz and the IEEE IRDS predict an 18% annual growth rate for AI accelerator chips.
  • Citi also forecasts aggregate chip sales for potential AI workloads to grow at a CAGR of 18% through 2030. 

AI-focused chips are expected to represent nearly 20% of global semiconductor demand in 2025, contributing approximately $67 billion in revenue [1]. The global AI chip market is projected to reach $40.79 billion in 2025 [2] and continue expanding rapidly toward $165 billion by 2030.
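As a quick sanity check, the compound annual growth rate implied by those 2025 and 2030 market-size figures can be computed directly (a simple arithmetic sketch using the figures quoted above):

```python
# Implied CAGR from the market sizes quoted above:
# $40.79B in 2025 growing to $165B by 2030 (5 years).
start, end, years = 40.79, 165.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 32% per year
```

Note that the implied rate for this narrower "AI chip market" figure is well above the 18% quoted for AI-related semiconductors overall, consistent with the two figures covering different market definitions.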

…………………………………………………………………………………………………………………………………………………

Types of AI Custom Chips:

Artificial intelligence is advancing at a speed that traditional computing hardware can no longer keep pace with. To meet the demands of massive AI models, lower latency, and higher computing efficiency, companies are increasingly turning to custom AI chips: purpose-built processors optimized for neural networks, training, and inference workloads.

These AI chips range from Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs) to Neural Processing Units (NPUs) and Google’s Tensor Processing Units (TPUs). They are optimized for core AI tasks such as matrix multiplications and convolutions, delivering far higher performance-per-watt than CPUs or GPUs. This efficiency is key as AI workloads grow exponentially with the rise of Large Language Models (LLMs) and generative AI.
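To illustrate why matrix multiplication dominates these workloads, here is a minimal sketch (the layer sizes are assumed, transformer-like values, not figures from this article):

```python
import numpy as np

# Illustrative only: the FLOP cost of one dense layer's matrix multiply.
# An (m x k) @ (k x n) multiply costs about 2*m*k*n floating-point ops,
# which is why AI accelerators devote most of their silicon to matmul units.
batch, d_in, d_out = 32, 4096, 4096   # assumed sizes, for illustration
flops = 2 * batch * d_in * d_out
print(f"One layer, one forward pass: {flops / 1e9:.1f} GFLOPs")

# Sanity-check the shape with NumPy
x = np.random.rand(batch, d_in).astype(np.float32)
w = np.random.rand(d_in, d_out).astype(np.float32)
y = x @ w
assert y.shape == (batch, d_out)
```

A modern LLM stacks thousands of such multiplies per token, which is why performance-per-watt on matmul, rather than general-purpose throughput, is the figure of merit for these chips.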

OpenAI – Broadcom Deal:

Perhaps the biggest custom AI chip effort is OpenAI's partnership with Broadcom, a multi-year, multi-billion-dollar deal announced in October 2025. In this arrangement, OpenAI will design the hardware and Broadcom will develop and deploy the custom chips, integrating AI model knowledge directly into the silicon for efficiency.

Here’s a summary of the partnership:

  • OpenAI designs its own AI processors and systems, embedding its AI insights directly into the hardware. Broadcom develops and deploys these custom chips and the surrounding infrastructure, using its Ethernet networking solutions to scale the systems.
  • Massive Scale: The agreement covers 10 gigawatts (GW) of AI compute, with deployments expected over four years, potentially extending to 2029.
  • Cost Savings: This custom silicon strategy aims to significantly reduce costs compared to off-the-shelf Nvidia or AMD chips, potentially saving 30-40% on large-scale deployments.
  • Strategic Goal: The collaboration allows OpenAI to build tailored hardware to meet the intense demands of developing frontier AI models and products, reducing reliance on other chip vendors.
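The savings claim above is easy to put in dollar terms. A rough illustration follows; the $50B off-the-shelf baseline is an assumed figure for a deployment of this scale, not a number from the article:

```python
# Hypothetical illustration of the 30-40% savings claim above.
# The $50B off-the-shelf baseline is an assumed figure, not from the article.
baseline_billion = 50.0
savings = {pct: baseline_billion * pct for pct in (0.30, 0.40)}
for pct, amount in savings.items():
    print(f"{pct:.0%} saving on a ${baseline_billion:.0f}B buildout: ${amount:.0f}B")
```

Even at the low end of the claimed range, custom silicon at 10 GW scale would save billions of dollars versus off-the-shelf accelerators.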

AI Silicon Market Share of Key Players:

NVIDIA, with its extremely popular AI GPUs and CUDA software ecosystem, is expected to maintain its market leadership. It currently holds an estimated 86% share of the AI GPU market segment according to one source [2]. Other estimates put NVIDIA's AI chip market share between 80% and 92%.

AMD holds a smaller, but growing, AI chip market share, with estimates placing its discrete GPU market share around 4% to 7% in early to mid-2025. AMD is projected to grow its AI chip division significantly, aiming for a double-digit share with products like the MI300X.

Intel accounts for approximately 1% of the discrete GPU market share, but is focused on expanding its presence in the AI training accelerator market with its Gaudi 3 platform, where it aims for an 8.7% share by the end of 2025.

Big tech companies, including Google, Meta, Amazon, and Apple, are designing their own custom AI silicon to reduce costs, accelerate performance, and scale AI across industries. Yet nearly all rely on TSMC for manufacturing, thanks to its leadership in advanced chip fabrication technology [3].

For example, Google recently announced Ironwood, its 7th-generation Tensor Processing Unit (TPU), a major AI chip for LLM training and inference. It offers 4x the performance of its predecessor (Trillium) and massive scalability for demanding AI workloads like Gemini, challenging Nvidia's dominance by efficiently powering complex AI at scale for Google Cloud and major partners such as Meta.

Ironwood is significantly faster, with claims of over 4x improvement in training and inference compared to the previous-generation Trillium (6th-gen) TPU. It supports superpods of up to 9,216 interconnected chips, enabling huge computational power for cutting-edge models, and is optimized for high-volume, low-latency AI inference, handling complex reasoning models and real-time chatbots efficiently.

Other AI Silicon Facts and Figures:

  • Edge AI chips are forecast to reach $13.5 billion in 2025, driven by IoT and smartphone integration.
  • AI accelerators based on ASIC designs are expected to grow by 34% year-over-year in 2025.
  • Automotive AI chips are set to surpass $6.3 billion in 2025, thanks to advancements in autonomous driving.
  • Google’s TPU v5p reached 30% faster matrix math throughput in benchmark tests.
  • U.S.-based AI chip startups raised over $5.1 billion in venture capital in the first half of 2025 alone.

Conclusions:

Custom silicon is now essential for deploying AI in real-world applications such as automation, robotics, healthcare, finance, and mobility. As AI expands across every sector, these purpose-built chips are becoming the true backbone of modern computing—driving a hardware race that is just as important as advances in software.

Links for Notes:

1.  https://www.mckinsey.com/industries/semiconductors/our-insights/artificial-intelligence-hardware-%20new-opportunities-for-semiconductor-companies/pt-PT

2. https://sqmagazine.co.uk/ai-chip-statistics/

3. https://www.ibm.com/think/news/custom-chips-ai-future

References:

AI infrastructure spending boom: a path towards AGI or speculative bubble?

OpenAI and Broadcom in $10B deal to make custom AI chips

Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems

Expose: AI is more than a bubble; it’s a data center debt bomb

China gaining on U.S. in AI technology arms race- silicon, models and research