Hyperscaler network equipment design
Hyperscaler design of networking equipment with ODM partners
Networking equipment for hyperscalers like Google, Amazon, Microsoft, Oracle, Meta, and others is a mix of in‑house engineering and partnerships with specialized vendors. These companies operate at such massive scale that they design their own switches, routers, and interconnects, but they rely on Original Design Manufacturers (ODMs) and network silicon providers to build them.
In‑House Networking Design:
Hyperscalers have dedicated hardware teams that create custom network gear to meet their unique performance, latency, and power‑efficiency needs.
- Google – Designs its own Jupiter data center fabric and the Andromeda network virtualization stack, plus custom top‑of‑rack (ToR) and spine switches. Uses merchant silicon from Broadcom, Intel (Barefoot Tofino), and others, but with Google‑built control planes and software (a fabric‑sizing sketch follows this list).
- Amazon (AWS) – Builds custom switches and routers; its HPC networking relies on the in‑house Scalable Reliable Datagram (SRD) protocol and the Elastic Fabric Adapter (EFA). Uses in‑house firmware and network operating systems, often on ODM‑built hardware.
- Microsoft (Azure) – Designs OCP‑compliant switches and developed the SONiC network OS, contributing both to the Open Compute Project. Uses merchant silicon from Broadcom, Marvell, and Mellanox/NVIDIA.
- Oracle Cloud Infrastructure (OCI) – Designs its own high‑performance RDMA‑enabled network for HPC and AI workloads, with custom switches built by ODM partners.
- Meta – Designs Wedge, Backpack, and Minipack switches under OCP, manufactured by ODMs.
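The merchant-silicon-plus-in-house-software pattern above shows up most clearly in how fabrics are sized. Below is a minimal Python sketch of the sizing arithmetic for a two-tier leaf-spine (folded Clos) fabric, the basic building block of hyperscale fabrics; the 64-port radix and the 1:1 oversubscription target are assumptions chosen for the example, not figures from any specific hyperscaler design.

```python
# Illustrative sizing of a two-tier leaf-spine (folded Clos) fabric.
# Port counts and the 1:1 oversubscription target are assumptions for this
# sketch, not figures from any vendor or hyperscaler design.

def size_leaf_spine(leaf_ports: int, uplinks_per_leaf: int, spine_ports: int):
    """Return (max_leaves, max_servers, spines_needed) for a non-blocking fabric."""
    downlinks_per_leaf = leaf_ports - uplinks_per_leaf
    # Non-blocking (1:1) operation needs uplink capacity >= downlink capacity.
    if uplinks_per_leaf < downlinks_per_leaf:
        raise ValueError("fabric would be oversubscribed")
    # Each leaf spreads its uplinks across the spines, one link per spine.
    spines_needed = uplinks_per_leaf
    # Every spine port terminates one leaf uplink, so spine radix caps the leaf count.
    max_leaves = spine_ports
    max_servers = max_leaves * downlinks_per_leaf
    return max_leaves, max_servers, spines_needed

# Example: 64-port leaf switches split 32 down / 32 up, 64-port spines.
leaves, servers, spines = size_leaf_spine(leaf_ports=64, uplinks_per_leaf=32, spine_ports=64)
print(f"{leaves} leaves, {spines} spines, up to {servers} servers at 1:1")
```

Scaling beyond what a single spine layer can reach is done by adding a further tier of spine blocks or pods, which is essentially what multi‑tier Clos fabrics such as Jupiter do.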
Manufacturing & ODM Partners:
While the hyperscalers’ network equipment designs are proprietary, the physical manufacturing is typically outsourced to ODMs that specialize in hyperscale networking gear:
| ODM / OEM | Builds for | Notes |
|---|---|---|
| Quanta Cloud Technology (QCT) | AWS, Azure, Oracle, Meta | Custom ToR/spine switches, OCP gear |
| WiWynn | Microsoft, Meta | OCP-compliant switches and racks |
| Celestica | Multiple hyperscalers | High-end switches, optical modules |
| Accton / Edgecore | Google, Meta, others | White-box switches for OCP |
| Foxconn / Hon Hai | AWS, Google | Large-scale manufacturing |
| Delta Networks | Multiple CSPs | Optical and Ethernet gear |
Network Silicon & Optics Suppliers:
Even though most hyperscalers design the chassis and racks themselves, they typically use merchant silicon and optics from the following suppliers (a port-count arithmetic sketch follows this list):
- Broadcom – Tomahawk, Trident, Jericho switch ASICs
- Marvell – Prestera switch chips, OCTEON DPUs
- NVIDIA (Mellanox acquisition) – Spectrum Ethernet, InfiniBand for AI/HPC
- Intel (Barefoot acquisition) – Tofino programmable switch ASICs
- Cisco Silicon One – Used selectively in hyperscale builds
- Coherent optics & transceivers – From II‑VI (Coherent), Lumentum, InnoLight, etc.
Hyperscaler Networking Supply Chain Map:
| Layer | Key Players | Role | Example Hyperscaler Relationships |
|---|---|---|---|
| Network Silicon (ASICs / DPUs) | Broadcom (Tomahawk, Jericho), Marvell (Prestera, OCTEON), NVIDIA/Mellanox (Spectrum, InfiniBand), Intel (Barefoot Tofino), Cisco (Silicon One) | Core packet switching, programmability, congestion control | Google (Broadcom, Intel), AWS (Broadcom, Marvell), Microsoft (Broadcom, Marvell, NVIDIA), Oracle (Broadcom, NVIDIA) |
| Optics & Interconnects | Coherent (II‑VI), Lumentum, InnoLight, Source Photonics, Broadcom (optical PHYs) | 400G/800G transceivers, co‑packaged optics, DWDM modules | All hyperscalers source from multiple vendors for redundancy |
| ODM / Manufacturing | Quanta Cloud Technology (QCT), WiWynn, Celestica, Accton/Edgecore, Foxconn, Delta Networks | Build hyperscaler‑designed switches, routers, and chassis | AWS (QCT, Foxconn), Google (Accton, QCT), Microsoft (WiWynn, Celestica), Meta (Accton, WiWynn), Oracle (QCT, Celestica) |
| Network OS & Control Plane | In‑house NOS (Google proprietary, AWS custom OS, Microsoft SONiC, Oracle custom), OCP software | Routing, telemetry, automation, SDN control | Google (Jupiter fabric OS), AWS (custom SRD/EFA stack), Microsoft (SONiC), Oracle (OCI NOS) |
| Integration & Deployment | Hyperscaler internal engineering teams | Rack integration, cabling, fabric topology, automation pipelines | All hyperscalers do this in‑house for security and scale |
Design Flow:
- Chip Vendors → supply merchant silicon to ODMs or directly to hyperscaler design teams.
- Hyperscaler Hardware Teams → design chassis, PCB layouts, thermal systems, and specify optics.
- ODMs → manufacture to spec, often in Asia, with hyperscaler QA oversight.
- Optics Vendors → deliver transceivers and cables, often qualified by hyperscaler labs.
- In‑House NOS → loaded onto hardware, integrated into hyperscaler’s SDN fabric.
- Deployment → rolled out globally across data centers, typically in multi‑tier Clos or AI‑optimized topologies (see the ECMP sketch below).
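Once a fabric is deployed, east-west traffic is spread over the parallel Clos paths with per-flow ECMP hashing. The Python sketch below illustrates only the idea: a flow's 5-tuple is hashed to pick one of the equal-cost uplinks. Real ASICs hash in hardware with vendor-specific functions, and the addresses and uplink count here are made up for the example.

```python
# Simplified per-flow ECMP path selection, as used to spread traffic across
# the parallel uplinks of a Clos fabric. Hardware uses vendor-specific hash
# functions; this is only an illustration of the idea.

import hashlib

def ecmp_uplink(src_ip, dst_ip, proto, src_port, dst_port, num_uplinks):
    """Deterministically map a flow's 5-tuple to one uplink index."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

# Two flows between the same hosts can hash to different spines,
# which is what spreads east-west traffic across the fabric.
print(ecmp_uplink("10.0.1.5", "10.0.9.7", "tcp", 40001, 443, num_uplinks=32))
print(ecmp_uplink("10.0.1.5", "10.0.9.7", "tcp", 40002, 443, num_uplinks=32))
```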
Major Trends:
- Disaggregation – Hyperscalers separate hardware from software, running their own Network Operating System (NOS), such as SONiC or Google’s proprietary OS, on ODM‑built “white‑box” or “bare‑metal” switches (see the configuration sketch after this list).
- AI‑Optimized Fabrics – New designs focus on ultra‑low latency, congestion control, and massive east‑west bandwidth for GPU clusters.
- Optical Integration – 800G+ transceivers are becoming standard for AI and HPC workloads, with co‑packaged optics emerging as the next step.
- AI Cluster Networking – NVIDIA InfiniBand and 800G Ethernet fabrics are now common for GPU pods.
- Co‑Packaged Optics – Moving optics closer to the ASIC to reduce power and latency.
- Open Compute Project Influence – Many designs are OCP‑compliant but with proprietary tweaks.
- Multi‑Vendor Strategy – Hyperscalers dual‑source ASICs and optics to avoid supply chain risk.
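To illustrate the disaggregation point above: when the NOS is decoupled from the hardware, switch configuration becomes plain data that can target any ODM's box. The sketch below loosely mimics a SONiC-style port table; the field names and values are simplified assumptions for the example, not an authoritative SONiC schema.

```python
# Disaggregation in miniature: port configuration as data, independent of the
# ODM that built the switch. Loosely modeled on a SONiC-style "PORT" table;
# field names and values are simplified assumptions, not the exact schema.

white_box_ports = {
    "Ethernet0": {"lanes": "0,1,2,3,4,5,6,7", "speed": "400000", "admin_status": "up"},
    "Ethernet8": {"lanes": "8,9,10,11,12,13,14,15", "speed": "400000", "admin_status": "up"},
}

def total_capacity_gbps(ports: dict) -> float:
    """Sum the configured speeds of admin-up ports (speeds given in Mb/s)."""
    return sum(int(p["speed"]) for p in ports.values() if p["admin_status"] == "up") / 1000

print(total_capacity_gbps(white_box_ports))  # 800.0
```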