Cisco’s Silicon One G300 as the dominant AI networking fabric, competing with Broadcom’s Tomahawk 6 series

On February 10, 2026, Cisco announced the Silicon One G300, a 102.4 Tbps Ethernet switch silicon that the company claims can power gigawatt-scale AI clusters for training, inference, and real-time agentic workloads while maximizing GPU utilization with a 28% improvement in job completion time. The G300 offers Intelligent Collective Networking, which combines an industry-leading fully shared packet buffer, path-based load balancing, and proactive network telemetry to deliver better performance and profitability for large-scale data centers. It efficiently absorbs bursty AI traffic, responds faster to link failures, and prevents the packet drops that can stall jobs, ensuring reliable data delivery even over long distances. With Intelligent Collective Networking, Cisco claims 33% higher network utilization and a 28% reduction in job completion time versus simulated non-optimized path selection, making AI data centers more profitable with more tokens generated per GPU-hour. The Cisco Silicon One G300 is also highly programmable, so equipment can be upgraded with new network functionality even after it has been deployed. This enables Silicon One-based products to support emerging use cases and play multiple network roles, protecting long-term infrastructure investments. And with security fused into the hardware, customers can embrace holistic, at-speed security to keep clusters up and running.
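To put those headline numbers in context, the sketch below shows in Python how a 28% reduction in job completion time and a 33% gain in utilization would flow through to throughput and tokens per GPU-hour. The baseline throughput figure and the assumption that token output scales linearly with network utilization are illustrative simplifications, not Cisco data.

```python
# Back-of-the-envelope sketch: how the claimed gains could translate into output
# per GPU-hour. Baseline figures are illustrative assumptions, not Cisco data.

JCT_REDUCTION = 0.28        # claimed reduction in job completion time
UTILIZATION_GAIN = 0.33     # claimed increase in network utilization

# Hypothetical baseline: tokens generated per GPU-hour before optimization.
BASELINE_TOKENS_PER_GPU_HOUR = 1_000_000

def throughput_speedup(jct_reduction: float) -> float:
    """Finishing a job in (1 - r) of the time is a 1 / (1 - r) throughput gain."""
    return 1.0 / (1.0 - jct_reduction)

def tokens_per_gpu_hour(baseline: float, utilization_gain: float) -> float:
    """Assume token output scales linearly with utilization (a simplification)."""
    return baseline * (1.0 + utilization_gain)

if __name__ == "__main__":
    print(f"Throughput speedup from 28% lower JCT: {throughput_speedup(JCT_REDUCTION):.2f}x")
    print(f"Tokens per GPU-hour: {BASELINE_TOKENS_PER_GPU_HOUR:,} -> "
          f"{tokens_per_gpu_hour(BASELINE_TOKENS_PER_GPU_HOUR, UTILIZATION_GAIN):,.0f}")
```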

The Cisco Silicon One G300 will power new Cisco N9000 and Cisco 8000 systems that push the frontier of AI networking in the data center. The systems feature innovative liquid cooling and support high-density optics to achieve new efficiency benchmarks and ensure customers get the most out of their GPU investments. In addition, the company enhanced Nexus One to make it easier for enterprises to operate their AI networks — on-premises or in the cloud — removing the complexity that can hold organizations back from scaling AI data centers.

“We are spearheading performance, manageability, and security in AI networking by innovating across the full stack – from silicon to systems and software,” said Jeetu Patel, President and Chief Product Officer, Cisco. “We’re building the foundation for the future of infrastructure, supporting every type of customer—from hyperscalers to enterprises—as they shift to AI-powered workloads.”

“As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself. It’s not just about faster GPUs – the network must deliver scalable bandwidth and reliable, congestion-free data movement,” said Martin Lund, Executive Vice President of Cisco’s Common Hardware Group. “Cisco Silicon One G300, powering our new Cisco N9000 and Cisco 8000 systems, delivers high-performance, programmable, and deterministic networking – enabling every customer to fully utilize their compute and scale AI securely and reliably in production.”

The networking industry’s reaction to Cisco’s newest ASIC has been largely positive, with analysts and partners highlighting its role in helping Cisco reclaim a dominant position in the AI infrastructure market. For example, Brendan Burke of Futurium thinks Cisco’s Silicon One G300 could be the backbone of Agentic AI Inference. His take: “Cisco’s latest announcements represent a calculated move to assert dominance in the AI networking fabric by attacking the specific bottlenecks of GPU cluster efficiency. As AI workloads shift toward agentic inference, where autonomous agents continuously interact across distributed environments, the network must handle unpredictable traffic patterns, unlike the structured flows of traditional training. Cisco is leveraging its vertical integration strategy to address the reliability and power constraints that plague these massive clusters. By emphasizing programmable silicon and rigorous optic qualification, Cisco aims to decouple network lifespan from rapid GPU innovation cycles, ensuring infrastructure can adapt to new traffic steering algorithms without hardware replacements. The G300 is a bid to make Ethernet the undisputed standard for AI back-end networks.”

Key Performance Indicators:
  • Industry-Leading Specs: Market analysts have noted that the G300’s 102.4 Tbps switching capacity sets a new benchmark for AI scale-out and scale-across networking.
  • Efficiency Gains: Initial simulations showing a 28% reduction in job completion time (JCT) and a 33% increase in network utilization have been cited as major differentiators for large-scale AI clusters.
  • Sustainability Focus: The shift toward liquid-cooled systems for the G300, which offers 70% greater energy efficiency per bit, is being viewed as a critical move for sustainable AI growth.
Strategic & Market Impact:
  • Competitive Positioning: Experts from HyperFRAME Research suggest that the new silicon signals a “new confidence” from Cisco, positioning them as the “Apple of infrastructure” by tightly integrating hardware and software.
  • AI Infrastructure Pivot: Financial analysts at Seeking Alpha have upgraded Cisco’s outlook, viewing the company no longer as just a legacy hardware firm but as a central player in the AI revolution.
  • Partner Confidence: Major partners, such as Shanghai Lichan Technology, have expressed excitement about the Nexus 9100 Series powered by this silicon, specifically for its ability to simplify and scale AI deployments.
Critical Observations:
  • Nvidia & Broadcom Competition: While the G300 is seen as a strong challenger to Nvidia’s Spectrum-X and Broadcom’s Tomahawk/Jericho lines, some observers note that Cisco still faces a steep climb to regain market share lost to these competitors in recent years.
  • Complexity Concerns: Some industry veterans have pointed out that while the silicon is “hyperscale ready,” the success of these ASICs in the enterprise will depend on Cisco’s ability to maintain operational simplicity through tools like the Nexus Dashboard.

……………………………………………………………………………………………………………………………………………………………………………………………

Cisco’s Silicon One G300 and Broadcom’s latest Tomahawk 6 series both offer a top-tier 102.4 Tbps switching capacity, with the primary differentiators lying in each company’s unique approach to congestion management and network programmability.
Technical Spec. Comparison (Cisco Silicon One G300 vs. Broadcom Tomahawk 6, BCM78910 Series):
  • Bandwidth: 102.4 Tbps (per TechPowerUp) vs. 102.4 Tbps (per Broadcom)
  • Manufacturing Process: TSMC 3nm (per X) vs. 3nm technology (per Broadcom)
  • SerDes Lanes & Speed: 512 lanes at 200 Gbps per link (per The Register) vs. 512 lanes at 200 Gbps per link, or 1024 lanes at 100 Gbps (per Broadcom)
  • Port Configuration: up to 64 x 1.6TbE ports or 512 x 200GbE ports (per The Register) vs. up to 64 x 1.6TbE ports or 512 x 200GbE ports (per Broadcom)
  • Target AI Cluster Size: supports deployments of up to 128,000 GPUs (per The Register) vs. over 100,000 XPUs/accelerators (per Broadcom)
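As a quick sanity check, the port and SerDes figures in the spec comparison above are internally consistent with the 102.4 Tbps headline capacity: 512 lanes x 200 Gbps, 64 x 1.6 TbE ports, and Tomahawk 6’s alternative 1024 lanes x 100 Gbps all work out to the same total. The short Python snippet below makes the arithmetic explicit.

```python
# Sanity-check the spec comparison: SerDes lane counts, port configurations, and
# the headline switching capacity should all describe the same 102.4 Tbps.

TOTAL_CAPACITY_TBPS = 102.4

configs = [
    ("512 x 200G SerDes lanes",                 512 * 200 / 1000),
    ("64 x 1.6TbE ports",                       64 * 1.6),
    ("512 x 200GbE ports",                      512 * 200 / 1000),
    ("1024 x 100G SerDes lanes (Tomahawk 6)",   1024 * 100 / 1000),
]

for label, tbps in configs:
    assert abs(tbps - TOTAL_CAPACITY_TBPS) < 1e-6, label
    print(f"{label}: {tbps:.1f} Tbps")
```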
Key Feature Differences:
  • Congestion Management: Cisco differentiates the G300 with its “Intelligent Collective Networking” approach, featuring a fully shared packet buffer and a load-balancing agent that communicates across all G300s in the network to build a global map of congestion (a simplified illustration of this idea follows after this list). Broadcom’s Tomahawk series also includes smart congestion control and global load balancing, though Cisco claims its implementation achieves 33% higher network utilization.
  • Programmability: Cisco emphasizes P4 programmability, allowing customers to update network functionality even after deployment.
  • Ecosystem & Integration: Broadcom operates primarily in the merchant silicon market, with its chips used by various partners such as HPE Juniper Networking. Cisco uses its own silicon to power its Nexus 9000 and 8000 Series switches, tightly integrating hardware with software management platforms like Nexus One for a unified solution.
  • Cooling Solutions: The Cisco G300 is designed to support high-density optics and is offered in new systems that include liquid-cooled options, providing 70% greater energy efficiency per bit compared to previous generations.
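Below is a minimal Python sketch of the path-based load-balancing idea described in the Congestion Management bullet: flows are steered onto the least-congested candidate path using a globally shared congestion map. The topology, data structures, and bottleneck metric are hypothetical illustrations of the concept, not Cisco’s actual Intelligent Collective Networking implementation.

```python
# Minimal sketch of path-based load balancing over a global congestion map.
# The map, paths, and cost metric are hypothetical; they only illustrate the
# selection logic, not Cisco's Intelligent Collective Networking code.

from dataclasses import dataclass

@dataclass
class Path:
    path_id: str
    hops: list[str]          # switch IDs along the path

# Global congestion map: per-switch buffer occupancy (0.0 = idle, 1.0 = full),
# as if shared via telemetry among all switches in the fabric.
congestion_map: dict[str, float] = {
    "leaf1": 0.20, "spine1": 0.85, "spine2": 0.30, "leaf4": 0.10,
}

def path_cost(path: Path, cmap: dict[str, float]) -> float:
    """Cost of a path = worst congestion along it (bottleneck metric)."""
    return max(cmap.get(hop, 0.0) for hop in path.hops)

def pick_path(candidates: list[Path], cmap: dict[str, float]) -> Path:
    """Steer a new flow onto the least-congested candidate path."""
    return min(candidates, key=lambda p: path_cost(p, cmap))

if __name__ == "__main__":
    paths = [Path("via-spine1", ["leaf1", "spine1", "leaf4"]),
             Path("via-spine2", ["leaf1", "spine2", "leaf4"])]
    best = pick_path(paths, congestion_map)
    print(f"Chosen path: {best.path_id}")   # expects via-spine2, since spine1 is hot
```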

………………………………………………………………………………………………………………………………………………………………………………

References:

https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2026/m02/cisco-announces-new-silicon-one-g300.html

https://blogs.cisco.com/sp/cisco-silicon-one-g300-the-next-wave-of-ai-innovation

Will Cisco’s Silicon One G300 Be the Backbone of Agentic Inference?

Analysis: Ethernet gains on InfiniBand in data center connectivity market; White Box/ODM vendors top choice for AI hyperscalers

Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Nvidia enters Data Center Ethernet market with its Spectrum-X networking platform

Will AI clusters be interconnected via Infiniband or Ethernet: NVIDIA doesn’t care, but Broadcom sure does!
