Will the wave of AI-generated user-to/from-network traffic increase spectacularly, as Cisco and Nokia predict?

Network operators are bracing for a wave of AI traffic, based partly on an RtBrick survey as well as forecasts from Cisco and Nokia, but that wave hasn’t arrived yet. Today’s heavy AI traffic is east-west, flowing within cloud-resident AI data centers and across AI data center interconnects.

1.  Cisco believes that AI inference agents will soon engage “continuously” with end users, keeping traffic levels consistently high. The company has stated that AI will greatly increase network traffic, citing a shift toward new, more demanding traffic patterns driven by “agentic AI” and other applications. This perspective is a core part of Cisco’s business strategy, which is focused on selling the modernized infrastructure needed to handle the coming surge. Cisco identified three stages of AI-driven traffic growth, each with different network demands (a toy comparison of the first two patterns follows the list below):

  • Spiky generative AI traffic: Today’s generative AI models produce “spiky” traffic that jumps when a user submits a query and then returns to a low baseline. Current networks are largely handling this traffic without issues.
  • Persistent “agentic” AI traffic: The next phase will involve AI agents that constantly interact with end-users and other agents. Cisco CEO Chuck Robbins has stated that this will drive traffic “beyond the peaks of current chatbot interaction” and keep network levels “consistently high”.
  • Edge-based AI: A third wave of “physical AI” will require more computing and networking at the edge of the network to accommodate specialized use cases like industrial IoT. 
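To make the contrast between the first two stages concrete, here is a minimal, purely illustrative Python sketch (not Cisco data) comparing a spiky chatbot-style load with a persistently high agentic-style load; all the numbers are arbitrary assumptions.

```python
# Toy model of the two traffic patterns above: spiky chatbot queries vs.
# agents in constant interaction. Values are invented for illustration only.
import random

random.seed(0)
HOURS = 24

# Spiky pattern: low baseline, occasional query bursts.
chatbot = [5 + (80 if random.random() < 0.15 else 0) for _ in range(HOURS)]
# Agentic pattern: a consistently high floor with modest variation.
agentic = [60 + random.randint(0, 20) for _ in range(HOURS)]

for name, series in (("chatbot", chatbot), ("agentic", agentic)):
    avg, peak = sum(series) / HOURS, max(series)
    print(f"{name}: avg={avg:.0f}, peak={peak}, peak-to-avg={peak / avg:.1f}")
```

The point of the sketch is simply that agentic traffic raises the average (baseline) load even if its peaks are no higher than today’s query bursts.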

“As we move towards agentic AI and the demand for inferencing expands to the enterprise and end user networking environments, traffic on the network will reach unprecedented levels,” Cisco CEO Chuck Robbins said on the company’s recent earnings call. “Network traffic will not only increase beyond the peaks of current chatbot interaction, but will remain consistently high with agents in constant interaction.”

2. Nokia recently predicted that both direct and indirect AI traffic on mobile networks will grow at a faster pace than regular, non-AI traffic.

  • Direct AI traffic: This is generated by users or systems directly interacting with AI services and applications. Consumer examples: Using generative AI tools, interacting with AI-powered gaming, or experiencing extended reality (XR) environments. Enterprise examples: Employing predictive maintenance, autonomous operations, video and image analytics, or enhanced customer interactions.
  • Indirect AI traffic: This occurs when AI algorithms are used to influence user engagement with existing services, thereby increasing overall traffic. Examples: AI-driven personalized recommendations for video content on social media, streaming platforms, and online marketplaces, which can lead to longer user sessions and higher bandwidth consumption. 

The Finland-based network equipment vendor warned that the AI wave could bring “a potential surge in uplink data traffic that could overwhelm our current network infrastructure if we’re not prepared,” noting that the rise of hybrid on-device and cloud tools will require much more than the 5-15 Mbps uplink available on today’s networks. Nokia’s Global Network Traffic 2030 report forecasts that overall traffic could grow to 5 to 9 times current levels by 2033, with AI traffic expected to hit 1,088 exabytes (EB) per month by that date. In other words, overall traffic grows 5x in the low-end scenario and 9x in the high-end scenario.
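For a sense of what those multipliers imply, the small Python sketch below converts them into compound annual growth rates; the 10-year window (2023 baseline to 2033) is an assumption for illustration, not a figure from Nokia’s report.

```python
# Implied compound annual growth rate (CAGR) for a given overall traffic
# multiple over an assumed 10-year horizon (2023 -> 2033).
def cagr(multiple: float, years: int) -> float:
    return multiple ** (1 / years) - 1

for multiple in (5, 9):
    print(f"{multiple}x over 10 years ≈ {cagr(multiple, 10):.1%} per year")
# 5x over 10 years ≈ 17.5% per year
# 9x over 10 years ≈ 24.6% per year
```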

To manage this anticipated traffic surge, Nokia advocates for radical changes to existing network infrastructure.

  • Cognitive networks: The company states that networks must become “cognitive,” leveraging AI and machine learning (ML) to handle the growing data demand.
  • Network-as-Code: As part of its Technology Strategy 2030, Nokia promotes a framework for more flexible and scalable networks that leverage AI and APIs.
  • 6G preparation: Nokia Bell Labs is already conducting research and field tests to prepare for 6G networks around 2030, with a focus on delivering the capacity needed for AI and other emerging technologies.
  • Optimizing the broadband edge: The company also highlights the need to empower the broadband network edge to handle the demands of AI applications, which require low latency and high reliability. 

Nokia’s Global Network Traffic 2030 report didn’t mention agentic AI, which refers to AI systems designed to autonomously perceive, reason, and act in their environment to achieve complex goals with less human oversight. Unlike generative AI, which focuses on creating content, agentic AI specializes in workflow automation and independent problem-solving by making decisions, adapting plans, and executing tasks over extended periods to meet long-term objectives.

3.  Ericsson did point to traffic increases stemming from the use of AI-based assistants in its 2024 Mobility Report. In particular, it predicted the majority of that traffic would be related to consumer video AI assistants rather than text-based applications and, outside the consumer realm, forecast increased traffic from “AI agents interacting with drones and droids.” “Accelerated consumer uptake of GenAI will cause a steady increase of traffic in addition to the baseline increase,” Ericsson said of its traffic growth scenario.

…………………………………………………………………………………………………………………………………………………………………………………..

Dissenting Views:

1.  Dean Bubley, founder of UK-based Disruptive Analysis, isn’t a proponent of huge AI traffic growth. “Many in the telecom industry and vendor community are trying to talk up AI as driving future access network traffic and therefore demand for investment, spectrum etc., but there is no evidence of this at present,” he told Fierce Network.

Bubley argues that AI agents won’t really create much traffic on access networks to homes or businesses. Instead, he said, they will drive traffic “inside corporate networks, and inside and between data centers on backbone networks and inside the cloud.” “There might be a bit more uplink traffic if video/images are sent to the cloud for AI purposes, but again that’s hypothetical,” he said.

2.  In a LinkedIn post, Ookla analyst Mike Dano said he was a bit suspicious about “Cisco predicting a big jump in network traffic due to AI agents constantly wandering around the Internet and doing things.”  While almost all of the comments agreed with Dano, it still is an open question whether the AI traffic Armageddon will actually materialize.

……………………………………………………………………………………………………………………………………………………………………………………….

References:

RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030

https://www.fierce-network.com/cloud/will-ai-agents-really-raise-network-traffic-baseline

Q4FY25-Earnings-Slides.pdf

https://onestore.nokia.com/asset/213660

https://www.linkedin.com/posts/mikedano_it-looks-like-cisco-is-predicting-a-big-jump-activity-7363223007152017408-JiVS/

Multi-vendor Open RAN stalls as EchoStar/Dish shuts down its 5G network, leaving Mavenir in the lurch

Last week’s announcement that EchoStar/Dish Network will sell $23 billion worth of spectrum licenses to AT&T was very bad news for Mavenir. As a result of that deal, Dish Network’s 5G Open RAN network, running partly on Mavenir’s software, is to be decommissioned. Dish Network had been constructing a fourth nationwide U.S. mobile network with new Open RAN suppliers – one of the only true multi-vendor Open RAN deployments worldwide.


Echostar’s decision to shut down its 5G network marks a very sad end for the world’s largest multivendor open RAN and will have ramifications for the entire industry. “If you look at all the initiatives, what the US government did or in general, they are the only ones who actually spent a good chunk of money to really support open RAN architecture,” said Pardeep Kohli, the CEO of Mavenir, one of the vendors involved in the Dish Network project. “So now the question is where do you go from here?”

As part of its original 5G network plans, Dish revealed it would host its 5G core – the part that will survive the spectrum sale – in the AWS public cloud. The hyperscaler’s data facilities have also hosted Mavenir RAN software installed on servers known as central units (CUs).

Where open RAN comes in is at the fronthaul interface between Mavenir’s distributed unit (DU) software and radios provided by Japan’s Fujitsu. Mavenir’s ability to connect its software to another company’s radios validates its claim to be an open RAN vendor, says Kohli. While other suppliers boast compatibility with open RAN specifications, commercial deployments pairing vendors over this interface remain rare.

Mavenir has evidently been frustrated by the continued dominance of Huawei, Ericsson and Nokia, whose combined RAN market share grew from 75.1% in 2023 to 77.5% last year, according to research from Omdia, an Informa company. Dish Network alone would not have made a sufficient difference for Mavenir and other open RAN players, according to Kohli. “It helped us come this far,” he said. “Now it’s up to how far other people want to take it.” A retreat from open RAN would, he thinks, be a “bad outcome for all the western operators,” leaving them dependent on a Nordic duopoly in countries where Chinese vendors are now banned.

“If they (telcos) don’t support it (multi-vendor OpenRAN), and other people are not supporting it, we are back to a Chinese world and a non-Chinese world,” he said. “In the non-Chinese world, you have Ericsson and Nokia, and in the Chinese world, it’s Huawei and ZTE. And that’s going to be a pretty bad outcome if that’s where it ends up.”

…………………………………………………………………………………………………………………………………………………………………

Open RAN outside the U.S.:

Outside the U.S., the situation is no better for OpenRAN. Only Japan’s Rakuten and Germany’s 1&1 have attempted to build a “greenfield” Open RAN from scratch. As well as reporting billions of dollars in losses on network deployment, Rakuten has struggled to attract customers. It owns the RAN software it has deployed but counts only 1&1 as a significant customer. And Rakuten’s original 4G rollout was not based on the industry’s open RAN specifications, according to critics. “They were not pure,” said Mavenir’s Kohli.

Plagued by delays and other problems, 1&1’s rollout has been a further bad advert for Open RAN. For the greenfield operators, the issue is not the maturity of open RAN technology. Rather, it is the investment and effort needed to build any kind of new nationwide telecom network in a country that already has infrastructure options. And the biggest brownfield operators, despite professing support for open RAN, have not backed any of the new entrants.

RAN Market Concentration:

  • Stefan Pongratz, an analyst with Dell’Oro, found that five of the six regions he tracks are today classed as “highly concentrated,” with an HHI score of more than 2,500 (see the short HHI sketch after this list). “This suggests that the supplier diversity element of the open RAN vision is fading,” wrote Pongratz in a recent blog.
  • A study from Omdia (owned by Informa), shows the combined RAN market share of Huawei, Ericsson and Nokia grew from 75.1% in 2023 to 77.5% last year. The only significant alternative to the European and Chinese vendors is Samsung, and its market share has shrunk from 6.1% to 4.8% over this period.
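As a quick refresher, the Herfindahl-Hirschman Index (HHI) that Pongratz cites is simply the sum of squared market shares. A minimal Python sketch is below; the vendor shares are invented for illustration, not Dell’Oro or Omdia figures.

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percent).
# A score above 2,500 is conventionally treated as "highly concentrated."
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

example_region = [45, 30, 15, 6, 4]  # hypothetical five-vendor RAN market
print(hhi(example_region))           # 3202 -> above the 2,500 threshold
```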

Concentration would seem to be especially high in the U.S., where Ericsson now boasts a RAN market share of more than 50% and where it generates about 44% of its sales (the revenue contribution of India, Ericsson’s second-biggest market, was just 4% in the recent second quarter). That’s partly because smaller regional operators previously ordered to replace Huawei in their networks spent a chunk of the government’s “rip and replace” funds on Ericsson rather than open RAN, says Kohli. Ironically, though, Ericsson owes much of the recent growth in its U.S. market share to what has been sold as an open RAN deal with AT&T but is effectively single vendor [1.]. Under that contract, it is replacing Nokia at a third of AT&T’s sites, having already been the supplier for the other two thirds.

Note 1. In December 2023, AT&T awarded Ericsson a multi-year, $14 billion Open RAN contract to serve as the foundation for its open network deployment, with a goal of having 70% of its wireless traffic on open platforms by late 2026. That large, single-vendor award for the core infrastructure was criticized for potentially undermining the goal of Open RAN, which was to encourage competition among multiple network equipment and software providers. AT&T’s claim of a multi-vendor network turned out to be just a smokescreen. Fujitsu/1Finity supplied the third-party radios used in AT&T’s first Open RAN call with Ericsson.

Indeed, AT&T’s open RAN claims have been difficult to take seriously, especially since it identified Mavenir as a third supplier of radio units, behind Ericsson and Japan’s Fujitsu, just a few months before Mavenir quit the radio unit market. Mavenir stopped manufacturing and distributing Open RAN radios in June 2025 as part of a financial restructuring and a shift to a software-focused business model. 

…………………………………………………………………………………………………………………….

Notably, Kohli describes EchoStar/Dish Network as the only U.S. player that was spending “a good chunk of money to really support open RAN architecture.”

Ultimately, he thinks the big U.S. telcos may come to regret their heavier reliance on the RAN gear giants. “It may look great for AT&T and Verizon today, but they’ll be funding this whole thing as a proprietary solution going forward because, really, there’s no incentive for anybody else to come in,” he said.

…………………………………………………………………………………………………………………….

References:

https://www.lightreading.com/open-ran/echostar-rout-leaves-its-open-ran-vendors-high-and-dry

https://www.lightreading.com/open-ran/mavenir-ceo-warns-of-ericsson-and-nokia-duopoly-as-open-ran-stalls

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T to deploy Fujitsu and Mavenir radios in crowded urban areas

Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Mavenir and NEC deploy Massive MIMO on Orange’s 5G SA network in France

Spark New Zealand completes 5G SA core network trials with AWS and Mavenir software

Mavenir at MWC 2022: Nokia and Ericsson are not serious OpenRAN vendors

Ericsson expresses concerns about O-RAN Alliance and Open RAN performance vs. costs

Nokia and Mavenir to build 4G/5G public and private network for FSG in Australia

 

Hyperscaler design of networking equipment with ODM partners

Networking equipment for hyperscalers like Google, Amazon, Microsoft, Oracle, Meta, and others is a mix of in-house engineering and partnerships with specialized vendors. These companies operate at such massive scale that they design their own switches, routers, and interconnects, but they rely on Original Design Manufacturers (ODMs) and network silicon providers to build them.

In‑House Networking Design:

Hyperscalers have dedicated hardware teams that create custom network gear to meet their unique performance, latency, and power‑efficiency needs.

  • Google – Designs its own Jupiter and Andromeda data center network fabrics, plus custom top‑of‑rack (ToR) and spine switches. Uses merchant silicon from Broadcom, Intel (Barefoot Tofino), and others, but with Google‑built control planes and software.
  • Amazon (AWS) – Builds custom switches and routers for its Scalable Reliable Datagram (SRD) and Elastic Fabric Adapter (EFA) HPC networks. Uses in‑house firmware and network operating systems, often on ODM‑built hardware.
  • Microsoft (Azure) – Designs OCP‑compliant switches (e.g., SONiC network OS) and contributes to the Open Compute Project. Uses merchant silicon from Broadcom, Marvell, and Mellanox/NVIDIA.
  • Oracle Cloud Infrastructure (OCI) – Designs its own high‑performance RDMA‑enabled network for HPC and AI workloads, with custom switches built by ODM partners.
  • Meta – Designs Wedge, Backpack, and Minipack switches under OCP, manufactured by ODMs.

Manufacturing & ODM Partners:

While the hyperscalers’ network equipment designs are proprietary, the physical manufacturing is typically outsourced to ODMs that specialize in hyperscale networking gear:

ODM / OEM | Builds for | Notes
Quanta Cloud Technology (QCT) | AWS, Azure, Oracle, Meta | Custom ToR/spine switches, OCP gear
WiWynn | Microsoft, Meta | OCP‑compliant switches and racks
Celestica | Multiple hyperscalers | High‑end switches, optical modules
Accton / Edgecore | Google, Meta, others | White‑box switches for OCP
Foxconn / Hon Hai | AWS, Google | Large‑scale manufacturing
Delta Networks | Multiple CSPs | Optical and Ethernet gear

Network Silicon & Optics Suppliers:

Even though most hyperscalers design the chassis and racks, they often use merchant silicon and optics from:

  • Broadcom – Tomahawk, Trident, Jericho switch ASICs
  • Marvell – Prestera switch chips, OCTEON DPUs
  • NVIDIA (Mellanox acquisition) – Spectrum Ethernet, InfiniBand for AI/HPC
  • Intel (Barefoot acquisition) – Tofino programmable switch ASICs
  • Cisco Silicon One – Used selectively in hyperscale builds
  • Coherent optics & transceivers – From II‑VI (Coherent), Lumentum, InnoLight, etc.

Hyperscaler Networking Supply Chain Map:

Layer | Key Players | Role | Example Hyperscaler Relationships
Network Silicon (ASICs / DPUs) | Broadcom (Tomahawk, Jericho), Marvell (Prestera, OCTEON), NVIDIA/Mellanox (Spectrum, InfiniBand), Intel (Barefoot Tofino), Cisco (Silicon One) | Core packet switching, programmability, congestion control | Google (Broadcom, Intel), AWS (Broadcom, Marvell), Microsoft (Broadcom, Marvell, NVIDIA), Oracle (Broadcom, NVIDIA)
Optics & Interconnects | Coherent (II‑VI), Lumentum, InnoLight, Source Photonics, Broadcom (optical PHYs) | 400G/800G transceivers, co‑packaged optics, DWDM modules | All hyperscalers source from multiple vendors for redundancy
ODM / Manufacturing | Quanta Cloud Technology (QCT), WiWynn, Celestica, Accton/Edgecore, Foxconn, Delta Networks | Build hyperscaler‑designed switches, routers, and chassis | AWS (QCT, Foxconn), Google (Accton, QCT), Microsoft (WiWynn, Celestica), Meta (Accton, WiWynn), Oracle (QCT, Celestica)
Network OS & Control Plane | In‑house NOS (Google proprietary, AWS custom OS, Microsoft SONiC, Oracle custom), OCP software | Routing, telemetry, automation, SDN control | Google (Jupiter fabric OS), AWS (custom SRD/EFA stack), Microsoft (SONiC), Oracle (OCI NOS)
Integration & Deployment | Hyperscaler internal engineering teams | Rack integration, cabling, fabric topology, automation pipelines | All hyperscalers do this in‑house for security and scale

Design Flow:

  1. Chip Vendors → supply merchant silicon to ODMs or directly to hyperscaler design teams.
  2. Hyperscaler Hardware Teams → design chassis, PCB layouts, thermal systems, and specify optics.
  3. ODMs → manufacture to spec, often in Asia, with hyperscaler QA oversight.
  4. Optics Vendors → deliver transceivers and cables, often qualified by hyperscaler labs.
  5. In‑House NOS → loaded onto hardware, integrated into hyperscaler’s SDN fabric.
  6. Deployment → rolled out in data centers globally, often in multi‑tier Clos or AI‑optimized topologies (a rough leaf-spine capacity sketch follows this list).
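As a back-of-the-envelope illustration of the multi-tier Clos topologies mentioned in step 6, the Python sketch below estimates how many server-facing ports a simple two-tier leaf-spine fabric supports. The switch radix and oversubscription ratio are illustrative assumptions, not any specific hyperscaler’s design.

```python
# Two-tier leaf-spine (folded Clos) capacity estimate.
# radix: ports per switch; oversubscription: downlink-to-uplink bandwidth ratio.
def leaf_spine_capacity(radix: int, oversubscription: float = 1.0) -> dict:
    uplinks_per_leaf = int(radix / (1 + oversubscription))  # leaf ports toward spines
    server_ports_per_leaf = radix - uplinks_per_leaf        # leaf ports toward servers
    max_leaves = radix                                      # one spine port per leaf
    return {
        "server_ports_per_leaf": server_ports_per_leaf,
        "max_leaves": max_leaves,
        "max_server_ports": server_ports_per_leaf * max_leaves,
    }

print(leaf_spine_capacity(radix=64, oversubscription=1.0))
# {'server_ports_per_leaf': 32, 'max_leaves': 64, 'max_server_ports': 2048}
```

Scaling beyond that per-pod limit is what drives the multi-tier designs and the enormous east-west bandwidth discussed in the trends below.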

 Major Trends:

  • Disaggregation – Hyperscalers separate hardware from software, running their own Network Operating System (NOS) (e.g., SONiC, Google’s proprietary OS) on ODM‑built “white‑box” or “bare metal” switches (a simplified configuration sketch follows this list).
  • AI‑Optimized Fabrics – New designs focus on ultra‑low latency, congestion control, and massive east‑west bandwidth for GPU clusters.
  • Optical Integration – Co‑packaged optics and 800G+ transceivers are becoming standard for AI and HPC workloads.
  • AI Cluster Networking – NVIDIA InfiniBand and 800G Ethernet fabrics are now common for GPU pods.
  • Co‑Packaged Optics – Moving optics closer to the ASIC to reduce power and latency.
  • Open Compute Project Influence – Many designs are OCP‑compliant but with proprietary tweaks.
  • Multi‑Vendor Strategy – Hyperscalers dual‑source ASICs and optics to avoid supply chain risk.
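To give a flavor of what disaggregation looks like in practice, here is a heavily simplified, SONiC-style port table expressed as a Python dict. The field names mirror the conventions of SONiC’s config_db, but the values and the switch itself are invented for illustration; this is a sketch, not a real device configuration.

```python
# Simplified, SONiC-style declarative port configuration. On a real white-box
# switch running SONiC this kind of data lives in config_db; here it is just a
# plain dict to show the separation of hardware (ports/lanes) from NOS policy.
port_table = {
    "Ethernet0": {"lanes": "0,1,2,3", "speed": "100000", "mtu": "9100", "admin_status": "up"},
    "Ethernet4": {"lanes": "4,5,6,7", "speed": "100000", "mtu": "9100", "admin_status": "up"},
}

def admin_up_ports(ports: dict) -> list[str]:
    """Return the names of ports configured administratively up."""
    return [name for name, cfg in ports.items() if cfg["admin_status"] == "up"]

print(admin_up_ports(port_table))  # ['Ethernet0', 'Ethernet4']
```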

References:

How it works: hyperscaler compute server in-house design process with ODM partners

Cloud-resident high-performance compute servers at hyperscale cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Oracle Cloud Infrastructure (OCI), Meta, and others use a mix of custom in-house designs and ODM (Original Design Manufacturer)-built hardware.

In‑House Design Teams:

  • Amazon (AWS) – Designs its own Nitro System–based servers, including custom motherboards, networking cards, and security chips. AWS also develops Graviton (Arm‑based) and Trainium/Inferentia (AI) processors. HPC instances use Elastic Fabric Adapter (EFA) for low‑latency interconnects.
  • Google (GCP) – Builds custom server boards and racks for its data centers, plus TPUs (Tensor Processing Units) for AI workloads, and uses high‑speed interconnects like Google’s Jupiter network for HPC clusters.
  • Microsoft Azure – Designs Azure‑optimized servers and AI accelerators, often in collaboration with partners, and contributes designs to the Open Compute Project (OCP).  It integrates InfiniBand and/or 400 Gbps Ethernet for HPC interconnects.
  • Oracle – Designs bare‑metal HPC shapes with AMD EPYC, Intel Xeon, and NVIDIA GPUs, plus RDMA cluster networking for microsecond latency.
  • Meta – Designs its compute servers, especially for AI workloads, by working closely with ODM partners like Quanta Computer, Wiwynn, and Foxconn.

Manufacturing Partners (ODMs/OEMs):

While the hyperscalers’ compute server designs are proprietary, the physical manufacturing is typically outsourced to Original Design Manufacturers (ODMs) that specialize in hyperscale data center gear, as per these tables:

ODM / OEM | Known for | Cloud Customers
Quanta Cloud Technology (QCT) | Custom rack servers, HPC nodes | AWS, Azure, Oracle
WiWynn | OCP‑compliant HPC servers | Microsoft, Meta
Inventec | HPC and AI‑optimized servers | AWS, GCP
Foxconn / Hon Hai | Large‑scale server manufacturing | Google, AWS
Celestica | HPC and networking gear | Multiple hyperscalers
Supermicro | GPU‑dense HPC systems | AWS, Oracle, Azure

ODM / OEM | Role in Hyperscale Cloud
Quanta Cloud Technology (QCT) | Major supplier for AWS, Azure, and others; builds custom rack servers and storage nodes.
WiWynn | Spun off from Wistron; manufactures OCP‑compliant servers for Microsoft and Facebook/Meta.
Inventec | Supplies compute and storage servers for AWS and other CSPs.
Foxconn / Hon Hai | Builds cloud server hardware for multiple providers, including Google.
Delta / Celestica | Provides specialized server and networking gear for hyperscale data centers.
Supermicro | Supplies both standard and custom AI‑optimized servers to cloud and enterprise customers.

The global server market is expected to reach $380 billion by 2028. Image credit: Alamy

……………………………………………………………………………………………………………………………………………………………..

Here’s a supply chain relationship map for cloud‑resident high‑performance compute (HPC) servers used by the major hyperscalers:

Hyperscale HPC Server Design & Manufacturing Landscape:

Cloud Provider | In‑House Design Focus | Key Manufacturing / ODM Partners | Notable HPC Hardware Features
Amazon Web Services (AWS) | Custom Nitro boards, Graviton CPUs, Trainium/Inferentia AI chips, EFA networking | Quanta Cloud Technology (QCT), Inventec, Foxconn | Arm‑based HPC nodes, GPU clusters (NVIDIA H100/A100), ultra‑low‑latency RDMA
Google Cloud Platform (GCP) | Custom server boards, TPU accelerators, Jupiter network fabric | Quanta, Inventec, Foxconn | TPU pods, GPU supernodes, liquid‑cooled racks
Microsoft Azure | OCP‑compliant HPC designs, Maia AI chip, Cobalt CPU, InfiniBand networking | WiWynn, QCT, Celestica | Cray‑based HPC clusters, GPU/FPGA acceleration
Oracle Cloud Infrastructure (OCI) | Bare‑metal HPC shapes, RDMA cluster networking | QCT, Supermicro | AMD EPYC/Intel Xeon nodes, NVIDIA GPU dense racks
Meta (for AI/HPC research) | OCP‑based AI/HPC servers | WiWynn, QCT | AI Research SuperCluster, liquid cooling
Alibaba Cloud / Tencent Cloud | Custom AI/HPC boards, Arm CPUs | Inspur, Sugon, QCT | GPU/FPGA acceleration, high‑bandwidth fabrics

Meta’s ODM Collaboration Model:

  • Quanta Computer: Meta has partnered with Quanta for final assembly of its next-gen AI servers. Quanta is responsible for building up to 6,000 racks of the Santa Barbara servers, which feature advanced cooling and power delivery systems.
  • Wiwynn & Foxconn: These ODMs also play key roles in Meta’s infrastructure. Wiwynn reportedly earns more than half its revenue from Meta, while Foxconn handles system assembly for NVIDIA’s NVL 72 servers, which Meta may also utilize.
  • Broadcom Partnership: For chip supply, Meta collaborates with Broadcom to integrate custom ASICs into its server designs.

Hyperscaler/ODM Collaboration Process:

  1. Design Phase – Hyperscalers’ hardware teams define the architecture: CPU/GPU choice, interconnect, cooling, power density.
  2. ODM Manufacturing – Partners like Quanta, WiWynn, Inventec, Foxconn, Celestica, and Supermicro build the servers to spec.
  3. Integration & Deployment – Systems are tested, integrated into racks, and deployed in hyperscale data centers.
  4. Optimization – Providers fine‑tune firmware, drivers, and orchestration for HPC workloads (e.g., CFD, genomics, AI training).

Industry Trends:

  • Open Compute Project (OCP) – Many designs are shared in the OCP community, allowing ODMs to build interoperable, cost‑optimized hardware at scale and speeding up deployment.
  • Vertical integration – Hyperscalers increasingly design custom silicon (e.g., AWS Graviton, Google TPU, Microsoft Maia AI chip) to optimize performance and cost and to reduce dependency on third‑party CPUs/GPUs. See Specialized HPC Components below.
  • AI‑optimized racks – New designs focus on high‑density GPU clusters, liquid cooling, and ultra‑low‑latency networking for AI workloads.
  • Liquid cooling – Increasingly common for dense GPU/CPU HPC racks.

Specialized HPC Components:

  • CPUs – AMD EPYC, Intel Xeon Scalable, AWS Graviton (Arm), custom Google CPUs.
  • GPUs / Accelerators – NVIDIA H100/A100, AMD Instinct, Google TPU, AWS Trainium.
  • Networking – Mellanox/NVIDIA InfiniBand, AWS EFA, Oracle RDMA cluster networking.
  • Storage – Parallel file systems like Lustre, BeeGFS, IBM Spectrum Scale for HPC workloads

References:

ODM Sales Soar as Hyperscalers and Cloud Providers Go Direct

The future of US hyperscale data centers | McKinsey

100MW+ Wholesale Colocation Deals: Inside the Hyperscaler Surge

https://www.datacenterknowledge.com/servers/foxconn-on-track-to-become-the-world-s-largest-server-vendor-omdia

Hyperscaler design of networking equipment with ODM partners – IEEE ComSoc Technology Blog

Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers

 

 

 

 

 
