Cisco’s Silicon One G300 as the dominant AI networking fabric, competing with Broadcom’s Tomahawk 6 series

On February 10, 2026, Cisco announced the Silicon One G300, a 102.4 Tbps Ethernet switch silicon that the company claims can power gigawatt-scale AI clusters for training, inference, and real-time agentic workloads while maximizing GPU utilization.

The G300 offers Intelligent Collective Networking, which combines an industry-leading fully shared packet buffer, path-based load balancing, and proactive network telemetry to improve performance and profitability for large-scale data centers. It efficiently absorbs bursty AI traffic, responds faster to link failures, and prevents the packet drops that can stall jobs, ensuring reliable data delivery even over long distances. With Intelligent Collective Networking, Cisco says it can deliver 33% higher network utilization and a 28% reduction in job completion time versus simulated non-optimized path selection, making AI data centers more profitable with more tokens generated per GPU-hour.

The Cisco Silicon One G300 is also highly programmable, enabling equipment to be upgraded for new network functionality even after it has been deployed. This lets Silicon One-based products support emerging use cases and play multiple network roles, protecting long-term infrastructure investments. And with security fused into the hardware, customers can embrace holistic, at-speed security to keep clusters up and running.

The Cisco Silicon One G300 will power new Cisco N9000 and Cisco 8000 systems that push the frontier of AI networking in the data center. The systems feature innovative liquid cooling and support high-density optics to achieve new efficiency benchmarks and ensure customers get the most out of their GPU investments. In addition, the company enhanced Nexus One to make it easier for enterprises to operate their AI networks — on-premises or in the cloud — removing the complexity that can hold organizations back from scaling AI data centers.

“We are spearheading performance, manageability, and security in AI networking by innovating across the full stack – from silicon to systems and software,” said Jeetu Patel, President and Chief Product Officer, Cisco. “We’re building the foundation for the future of infrastructure, supporting every type of customer—from hyperscalers to enterprises—as they shift to AI-powered workloads.”

“As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself. It’s not just about faster GPUs – the network must deliver scalable bandwidth and reliable, congestion-free data movement,” said Martin Lund, Executive Vice President of Cisco’s Common Hardware Group. “Cisco Silicon One G300, powering our new Cisco N9000 and Cisco 8000 systems, delivers high-performance, programmable, and deterministic networking – enabling every customer to fully utilize their compute and scale AI securely and reliably in production.”

The networking industry reaction to Cisco’s newest ASIC has been largely positive, with industry analysts and partners highlighting its role in reclaiming Cisco’s dominance in the AI infrastructure market. For example, Brendan Burke of Futurium thinks Cisco’s Silicon One G300 could be the backbone of Agentic AI Inference. His take: “Cisco’s latest announcements represent a calculated move to assert dominance in the AI networking fabric by attacking the specific bottlenecks of GPU cluster efficiency. As AI workloads shift toward agentic inference, where autonomous agents continuously interact across distributed environments, the network must handle unpredictable traffic patterns, unlike the structured flows of traditional training. Cisco is leveraging its vertical integration strategy to address the reliability and power constraints that plague these massive clusters. By emphasizing programmable silicon and rigorous optic qualification, Cisco aims to decouple network lifespan from rapid GPU innovation cycles, ensuring infrastructure can adapt to new traffic steering algorithms without hardware replacements. The G300 is a bid to make Ethernet the undisputed standard for AI back-end networks.”

Key Performance Indicators:
  • Industry-Leading Specs: Market analysts have noted that the G300’s 102.4 Tbps switching capacity sets a new benchmark for AI scale-out and scale-across networking.
  • Efficiency Gains: Initial simulations showing a 28% reduction in job completion time (JCT) and a 33% increase in network utilization have been cited as major differentiators for large-scale AI clusters.
  • Sustainability Focus: The shift toward liquid-cooled systems for the G300, which offers 70% greater energy efficiency per bit, is being viewed as a critical move for sustainable AI growth.
Strategic & Market Impact:
  • Competitive Positioning: Experts from HyperFRAME Research suggest that the new silicon signals a “new confidence” from Cisco, positioning them as the “Apple of infrastructure” by tightly integrating hardware and software.
  • AI Infrastructure Pivot: Financial analysts at Seeking Alpha have upgraded Cisco’s outlook, viewing the company no longer as just a legacy hardware firm but as a central player in the AI revolution.
  • Partner Confidence: Major partners, such as Shanghai Lichan Technology, have expressed excitement about the Nexus 9100 Series powered by this silicon, specifically for its ability to simplify and scale AI deployments.
Critical Observations:
  • Nvidia & Broadcom Competition: While the G300 is seen as a strong challenger to Nvidia’s Spectrum-X and Broadcom’s Tomahawk/Jericho lines, some observers note that Cisco still faces a steep climb to regain market share lost to these competitors in recent years.
  • Complexity Concerns: Some industry veterans have pointed out that while the silicon is “hyperscale ready,” the success of these ASICs in the enterprise will depend on Cisco’s ability to maintain operational simplicity through tools like the Nexus Dashboard.

……………………………………………………………………………………………………………………………………………………………………………………………

Cisco’s Silicon One G300 and Broadcom’s latest Tomahawk 6 series both offer a top-tier 102.4 Tbps switching capacity, with the primary differentiators lying in each company’s unique approach to congestion management and network programmability.
Technical Spec Comparison:

  Spec                     Cisco Silicon One G300                           Broadcom Tomahawk 6 (BCM78910 Series)
  Bandwidth                102.4 Tbps                                       102.4 Tbps
  Manufacturing Process    TSMC 3nm                                         3nm technology
  SerDes Lanes & Speed     512 lanes at 200 Gbps per link                   512 lanes at 200 Gbps per link, or 1,024 lanes at 100 Gbps
  Port Configuration       Up to 64 x 1.6TbE or 512 x 200GbE ports          Up to 64 x 1.6TbE or 512 x 200GbE ports
  Target AI Cluster Size   Supports deployments of up to 128,000 GPUs       Supports over 100,000 XPUs (accelerators)

(Sources: TechPowerUp, The Register, X, Broadcom)
Key Feature Differences:
  • Congestion Management: Cisco differentiates its G300 with an “Intelligent Collective Networking” approach featuring a fully shared packet buffer and a load-balancing agent that communicates across all G300s in the network to build a global map of congestion. Broadcom’s Tomahawk series also includes smart congestion control and global load balancing, though Cisco claims its implementation achieves higher network utilization (33% better).
  • Programmability: Cisco emphasizes P4 programmability, allowing customers to update network functionality even after deployment.
  • Ecosystem & Integration: Broadcom operates primarily in the merchant silicon market, with its chips used by various partners such as HPE Juniper Networking. Cisco uses its own silicon to power its Nexus 9000 and 8000 Series switches, tightly integrating hardware with software management platforms like Nexus One for a unified solution.
  • Cooling Solutions: The Cisco G300 is designed to support high-density optics and is offered in new systems that include liquid-cooled options, providing 70% greater energy efficiency per bit compared to previous generations.
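The “global map of congestion” idea described above can be sketched in a few lines: switches publish per-path utilization via telemetry, and each new flowlet is steered onto the currently least-loaded path. This is an illustrative sketch of the general technique, not Cisco’s or Broadcom’s actual algorithm; the class, method, and path names below are hypothetical.

```python
import random

class CongestionMap:
    """Hypothetical global view of path utilization, aggregated from switch telemetry."""

    def __init__(self, paths):
        # Start with every path assumed idle (utilization 0.0).
        self.load = {p: 0.0 for p in paths}

    def report(self, path, utilization):
        # Telemetry update from a switch: current utilization of one path, 0.0-1.0.
        self.load[path] = utilization

    def pick_path(self):
        # Steer the next flowlet onto the least-congested path;
        # break ties randomly to avoid all senders herding onto one link.
        best = min(self.load.values())
        candidates = [p for p, u in self.load.items() if u == best]
        return random.choice(candidates)

fabric = CongestionMap(paths=["spine1", "spine2", "spine3"])
fabric.report("spine1", 0.80)   # spine1 is running hot
fabric.report("spine2", 0.20)
fabric.report("spine3", 0.55)
print(fabric.pick_path())       # -> spine2, the least-loaded path
```

The key design point is that path selection uses fabric-wide state rather than each switch’s local queue depths, which is what lets the network avoid hotspots that local-only load balancing cannot see.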

………………………………………………………………………………………………………………………………………………………………………………

References:

https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2026/m02/cisco-announces-new-silicon-one-g300.html

https://blogs.cisco.com/sp/cisco-silicon-one-g300-the-next-wave-of-ai-innovation

Will Cisco’s Silicon One G300 Be the Backbone of Agentic Inference?

Analysis: Ethernet gains on InfiniBand in data center connectivity market; White Box/ODM vendors top choice for AI hyperscalers

Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Nvidia enters Data Center Ethernet market with its Spectrum-X networking platform

Will AI clusters be interconnected via Infiniband or Ethernet: NVIDIA doesn’t care, but Broadcom sure does!

Custom AI Chips: Powering the next wave of Intelligent Computing

by the Indxx team of market researchers, with Alan J. Weissberger

The Market for AI Related Semiconductors:

Several market research firms and banks forecast that revenue from AI-related semiconductors will grow at about 18% annually over the next few years—five times faster than non-AI semiconductor market segments.

  • IDC forecasts that global AI hardware spending, including chip demand, will grow at an annual rate of 18%.
  • Morgan Stanley analysts predict 18% annual growth in AI-related semiconductors for one specific company, Taiwan Semiconductor (TSMC).
  • Infosys notes that data center semiconductor sales are projected to grow at an 18% CAGR.
  • MarketResearch.biz and the IEEE IRDS predict an 18% annual growth rate for AI accelerator chips.
  • Citi also forecasts aggregate chip sales for potential AI workloads to grow at a CAGR of 18% through 2030. 

AI-focused chips are expected to represent nearly 20% of global semiconductor demand in 2025, contributing approximately $67 billion in revenue [1]. The global AI chip market is projected to reach $40.79 billion in 2025 [2] and continue expanding rapidly toward $165 billion by 2030.
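As a quick sanity check on these projections (a back-of-the-envelope calculation, not a figure from the cited sources), compound growth can be computed directly:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growing from `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

def grow(value, rate, years):
    """Value after compounding `rate` annually for `years` years."""
    return value * (1 + rate) ** years

# An 18% CAGR compounds to roughly 2.3x over five years
# (consistent with the rule of 72: 72 / 18 = 4 years to double).
print(round(grow(1.0, 0.18, 5), 2))            # ~2.29x

# Implied CAGR of the $40.79B (2025) -> $165B (2030) projection:
print(round(cagr(40.79, 165.0, 5) * 100, 1))   # roughly 32% per year
```

Note that the $40.79B-to-$165B projection implies a growth rate well above the ~18% consensus for AI semiconductors overall, reflecting how widely the market-sizing definitions differ between sources.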

…………………………………………………………………………………………………………………………………………………

Types of AI Custom Chips:

Artificial intelligence is advancing at a speed that traditional computing hardware can no longer keep pace with. To meet the demands of massive AI models, lower latency, and higher computing efficiency, companies are increasingly turning to custom AI chips: purpose-built processors optimized for neural networks, training, and inference workloads.

These AI chips range from Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs) to Neural Processing Units (NPUs) and Google’s Tensor Processing Units (TPUs). They are optimized for core AI tasks like matrix multiplications and convolutions, delivering far higher performance-per-watt than CPUs or GPUs. This efficiency is key as AI workloads grow exponentially with the rise of Large Language Models (LLMs) and generative AI.
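To see why matrix multiplication dominates these workloads, consider the raw arithmetic count of a single dense layer. The layer size below is an illustrative assumption, not a figure for any specific chip or model:

```python
def matmul_flops(m, k, n):
    """Floating-point operations for an (m x k) x (k x n) matrix product:
    each of the m*n output elements needs k multiplies and k adds."""
    return 2 * m * k * n

# One token passing through a single hypothetical 4096x4096 dense layer:
print(matmul_flops(1, 4096, 4096))  # 33,554,432 FLOPs for one layer, one token
```

Multiplied across dozens of layers and billions of tokens, this is the workload that ASICs, NPUs, and TPUs are built to execute at high performance-per-watt, by dedicating silicon to exactly these multiply-accumulate patterns.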

OpenAI – Broadcom Deal:

Perhaps the biggest custom AI chip design effort is OpenAI’s partnership with Broadcom, a multi-year, multi-billion-dollar deal announced in October 2025. In this arrangement, OpenAI designs the hardware and Broadcom develops the custom chips, integrating AI model knowledge directly into the silicon for efficiency.

Here’s a summary of the partnership:

  • OpenAI designs its own AI processors and systems, embedding its AI insights directly into the hardware. Broadcom develops and deploys these custom chips and the surrounding infrastructure, using its Ethernet networking solutions to scale the systems.
  • Massive Scale: The agreement covers 10 gigawatts (GW) of AI compute, with deployments expected over four years, potentially extending to 2029.
  • Cost Savings: This custom silicon strategy aims to significantly reduce costs compared to off-the-shelf Nvidia or AMD chips, potentially saving 30-40% on large-scale deployments.
  • Strategic Goal: The collaboration allows OpenAI to build tailored hardware to meet the intense demands of developing frontier AI models and products, reducing reliance on other chip vendors.

AI Silicon Market Share of Key Players:

  • Nvidia, with its extremely popular AI GPUs and CUDA software ecosystem, is expected to maintain its market leadership. It currently holds an estimated 86% share of the AI GPU market segment according to one source [2]; others put Nvidia’s AI chip market share between 80% and 92%.
  • AMD holds a smaller, but growing, AI chip market share, with estimates placing its discrete GPU market share around 4% to 7% in early to mid-2025. AMD is projected to grow its AI chip division significantly, aiming for a double-digit share with products like the MI300X. In response to the extraordinary demand for advanced AI processors, AMD’s Chief Executive Officer, Dr. Lisa Su, presented a strategic initiative to the Board of Directors: pivot the company’s core operational focus toward artificial intelligence. Dr. Su articulated the view that the “insatiable demand for compute” represented a sustained market trend. AMD’s strategic reorientation has yielded significant financial returns: its market capitalization has nearly quadrupled, surpassing $350 billion [1]. Furthermore, the company has executed high-profile agreements, securing major contracts to provide cutting-edge silicon to key industry players, including OpenAI and Oracle.
  • Intel accounts for approximately 1% of the discrete GPU market share, but is focused on expanding its presence in the AI training accelerator market with its Gaudi 3 platform, where it aims for an 8.7% share by the end of 2025.  The former microprocessor king has recently invested heavily in both its design and manufacturing businesses and is courting customers for its advanced data-center processors.
  • Qualcomm, which is best known for designing chips for mobile devices and cars, announced in October that it would launch two new AI accelerator chips. The company said the new AI200 and AI250 are distinguished by their very high memory capabilities and energy efficiency.

Big Tech Custom AI chips vs Nvidia AI GPUs:

Big tech companies, including Google, Meta, Amazon, and Apple, are designing their own custom AI silicon to reduce costs, accelerate performance, and scale AI across industries. Yet nearly all rely on TSMC for manufacturing, thanks to its leadership in advanced chip fabrication technology [3].

  • Google recently announced Ironwood, its 7th-generation Tensor Processing Unit (TPU), a major AI chip for LLM training and inference. Google claims over 4x improvement in training and inference performance compared to the previous-generation Trillium (6th-gen) TPU. Ironwood scales to super-pods of up to 9,216 interconnected chips, enabling huge computational power for cutting-edge models, and it is optimized for high-volume, low-latency AI inference, efficiently handling complex reasoning models and real-time chatbots. By powering demanding workloads like Gemini for Google Cloud and major partners such as Meta, Ironwood challenges Nvidia’s dominance.
  • Meta is in advanced talks to purchase and rent large quantities of Google’s custom AI chips (TPUs), starting with cloud rentals in 2026 and moving to direct purchases for data centers in 2027, a significant move to diversify beyond Nvidia and challenge the AI hardware market. This multi-billion dollar deal could reshape AI infrastructure by giving Meta access to Google’s specialized silicon for workloads like AI model inference, signaling a major shift in big tech’s chip strategy, notes this TechRadar article. 
  • According to a Wall Street Journal report published on December 2, 2025, Amazon’s new Trainium3 custom AI chip challenges Nvidia’s market position by providing a more affordable option for AI development. Amazon said Trainium3, produced by AWS’s Annapurna Labs custom-chip design business, is four times as fast as its previous generation of AI chips and can reduce the cost of training and operating AI models by up to 50% compared with systems that use equivalent graphics processing units (GPUs). AWS acquired Israeli startup Annapurna Labs in 2015 and began designing chips to power AWS’s data-center servers, including network security chips, central processing units, and later its AI processor series, known as Inferentia and Trainium. “The main advantage at the end of the day is price performance,” said Ron Diamant, an AWS vice president and the chief architect of the Trainium chips. He added that his main goal is giving customers more options for different computing workloads. “I don’t see us trying to replace Nvidia,” Diamant said.
  • Interestingly, many of the biggest buyers of Amazon’s chips are also Nvidia customers. Chief among them is Anthropic, which AWS said in late October is using more than one million Trainium2 chips to build and deploy its Claude AI model. Nvidia announced a month later that it was investing $10 billion in Anthropic as part of a massive deal to sell the AI firm computing power generated by its chips.

Image Credit: Emil Lendof/WSJ, iStock

Other AI Silicon Facts and Figures:

  • Edge AI chips are forecast to reach $13.5 billion in 2025, driven by IoT and smartphone integration.
  • AI accelerators based on ASIC designs are expected to grow by 34% year-over-year in 2025.
  • Automotive AI chips are set to surpass $6.3 billion in 2025, thanks to advancements in autonomous driving.
  • Google’s TPU v5p reached 30% faster matrix math throughput in benchmark tests.
  • U.S.-based AI chip startups raised over $5.1 billion in venture capital in the first half of 2025 alone.

Conclusions:

Custom silicon is now essential for deploying AI in real-world applications such as automation, robotics, healthcare, finance, and mobility. As AI expands across every sector, these purpose-built chips are becoming the true backbone of modern computing, driving a hardware race that is just as important as advances in software. More and more AI firms are seeking to diversify their suppliers by buying chips and other hardware from companies other than Nvidia. Cloud providers gain advantages like cost-effectiveness, specialization, lower power consumption, and strategic independence from developing their own in-house AI silicon. By developing their own chips, hyperscalers can create a vertically integrated AI stack (hardware, software, and cloud services) optimized for their specific internal workloads and cloud platforms. This allows them to tailor performance precisely to their needs, potentially achieving better total cost of ownership (TCO) than general-purpose Nvidia GPUs.

However, Nvidia is convinced it will retain a huge lead in selling AI silicon. In a post on X, Nvidia wrote that it was “delighted by Google’s success with its TPUs,” before adding that Nvidia “is a generation ahead of the industry—it’s the only platform that runs every AI model and does it everywhere computing is done.” The company said its chips offer “greater performance, versatility, and fungibility” than more narrowly tailored custom chips made by Google and AWS.

The race is far from over, but we can surely expect to see more competition in the AI silicon arena.

………………………………………………………………………………………………………………………………………………………………………………….

Links for Notes:

1. https://www.mckinsey.com/industries/semiconductors/our-insights/artificial-intelligence-hardware-new-opportunities-for-semiconductor-companies/pt-PT

2. https://sqmagazine.co.uk/ai-chip-statistics/

3. https://www.ibm.com/think/news/custom-chips-ai-future

References:

https://www.wsj.com/tech/ai/amazons-custom-chips-pose-another-threat-to-nvidia-8aa19f5b

https://www.techradar.com/pro/meta-and-google-could-be-about-to-sign-a-mega-ai-chip-deal-and-it-could-change-everything-in-the-tech-space

https://www.wsj.com/tech/ai/nvidia-ai-chips-competitors-amd-broadcom-google-amazon-6729c65a

AI infrastructure spending boom: a path towards AGI or speculative bubble?

OpenAI and Broadcom in $10B deal to make custom AI chips

Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems

Expose: AI is more than a bubble; it’s a data center debt bomb

China gaining on U.S. in AI technology arms race- silicon, models and research