Hyperscaler design of networking equipment with ODM partners

Networking equipment for hyperscalers like Google, Amazon, Microsoft, Oracle, Meta, and others is a mix of in‑house engineering and partnerships with specialized vendors. These companies operate at such massive scale that they design their own switches, routers, and interconnects, but rely on Original Design Manufacturers (ODMs) and network silicon providers to build them.

In‑House Networking Design:

Hyperscalers have dedicated hardware teams that create custom network gear to meet their unique performance, latency, and power‑efficiency needs.

  • Google – Designs its own Jupiter and Andromeda data center network fabrics, plus custom top‑of‑rack (ToR) and spine switches. Uses merchant silicon from Broadcom, Intel (Barefoot Tofino), and others, but with Google‑built control planes and software.
  • Amazon (AWS) – Builds custom switches and routers for its Scalable Reliable Datagram (SRD) and Elastic Fabric Adapter (EFA) HPC networks. Uses in‑house firmware and network operating systems, often on ODM‑built hardware.
  • Microsoft (Azure) – Designs OCP‑compliant switches (e.g., SONiC network OS) and contributes to the Open Compute Project. Uses merchant silicon from Broadcom, Marvell, and Mellanox/NVIDIA.
  • Oracle Cloud Infrastructure (OCI) – Designs its own high‑performance RDMA‑enabled network for HPC and AI workloads, with custom switches built by ODM partners.
  • Meta – Designs Wedge, Backpack, and Minipack switches under OCP, manufactured by ODMs.

Manufacturing & ODM Partners:

While hyperscalers' network equipment designs are proprietary, the physical manufacturing is typically outsourced to ODMs that specialize in hyperscale networking gear:

ODM / OEM | Builds for | Notes
Quanta Cloud Technology (QCT) | AWS, Azure, Oracle, Meta | Custom ToR/spine switches, OCP gear
WiWynn | Microsoft, Meta | OCP‑compliant switches and racks
Celestica | Multiple hyperscalers | High‑end switches, optical modules
Accton / Edgecore | Google, Meta, others | White‑box switches for OCP
Foxconn / Hon Hai | AWS, Google | Large‑scale manufacturing
Delta Networks | Multiple CSPs | Optical and Ethernet gear

Network Silicon & Optics Suppliers:

Even though most hyperscalers design the chassis and racks, they often use merchant silicon and optics from:

  • Broadcom – Tomahawk, Trident, Jericho switch ASICs
  • Marvell – Prestera switch chips, OCTEON DPUs
  • NVIDIA (Mellanox acquisition) – Spectrum Ethernet, InfiniBand for AI/HPC
  • Intel (Barefoot acquisition) – Tofino programmable switch ASICs
  • Cisco Silicon One – Used selectively in hyperscale builds
  • Coherent optics & transceivers – From II‑VI (Coherent), Lumentum, InnoLight, etc.

Hyperscaler Networking Supply Chain Map:

Layer | Key Players | Role | Example Hyperscaler Relationships
Network Silicon (ASICs / DPUs) | Broadcom (Tomahawk, Jericho), Marvell (Prestera, OCTEON), NVIDIA/Mellanox (Spectrum, InfiniBand), Intel (Barefoot Tofino), Cisco (Silicon One) | Core packet switching, programmability, congestion control | Google (Broadcom, Intel), AWS (Broadcom, Marvell), Microsoft (Broadcom, Marvell, NVIDIA), Oracle (Broadcom, NVIDIA)
Optics & Interconnects | Coherent (II‑VI), Lumentum, InnoLight, Source Photonics, Broadcom (optical PHYs) | 400G/800G transceivers, co‑packaged optics, DWDM modules | All hyperscalers source from multiple vendors for redundancy
ODM / Manufacturing | Quanta Cloud Technology (QCT), WiWynn, Celestica, Accton/Edgecore, Foxconn, Delta Networks | Build hyperscaler‑designed switches, routers, and chassis | AWS (QCT, Foxconn), Google (Accton, QCT), Microsoft (WiWynn, Celestica), Meta (Accton, WiWynn), Oracle (QCT, Celestica)
Network OS & Control Plane | In‑house NOS (Google proprietary, AWS custom OS, Microsoft SONiC, Oracle custom), OCP software | Routing, telemetry, automation, SDN control | Google (Jupiter fabric OS), AWS (custom SRD/EFA stack), Microsoft (SONiC), Oracle (OCI NOS)
Integration & Deployment | Hyperscaler internal engineering teams | Rack integration, cabling, fabric topology, automation pipelines | All hyperscalers do this in‑house for security and scale

Design Flow:

  1. Chip Vendors → supply merchant silicon to ODMs or directly to hyperscaler design teams.
  2. Hyperscaler Hardware Teams → design chassis, PCB layouts, thermal systems, and specify optics.
  3. ODMs → manufacture to spec, often in Asia, with hyperscaler QA oversight.
  4. Optics Vendors → deliver transceivers and cables, often qualified by hyperscaler labs.
  5. In‑House NOS → loaded onto hardware, integrated into hyperscaler’s SDN fabric.
  6. Deployment → rolled out in data centers globally, often in multi‑tier Clos or AI‑optimized topologies.
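The multi‑tier Clos topologies mentioned in step 6 can be sized with simple arithmetic. Below is a minimal Python sketch of a non‑oversubscribed two‑tier (leaf‑spine) Clos fabric; the port counts are illustrative assumptions, not any particular hyperscaler's design.

```python
# Back-of-the-envelope sizing for a non-oversubscribed (1:1) two-tier
# leaf-spine Clos fabric. All port counts are illustrative assumptions.

def clos_fabric(leaf_ports: int, spine_ports: int) -> dict:
    """Size a 1:1 leaf-spine fabric.

    Each leaf splits its ports evenly: half down to servers, half up
    to the spines (one uplink per spine), so uplink capacity matches
    downlink capacity and the fabric is non-blocking at the leaf.
    """
    uplinks_per_leaf = leaf_ports // 2      # one uplink per spine
    downlinks_per_leaf = leaf_ports - uplinks_per_leaf
    num_spines = uplinks_per_leaf
    num_leaves = spine_ports                # each spine gives one port per leaf
    return {
        "spines": num_spines,
        "leaves": num_leaves,
        "server_ports": num_leaves * downlinks_per_leaf,
    }

# Example: 64-port leaf and spine ASICs (e.g., a 51.2 Tb/s chip broken
# out as 64 x 800G -- an assumption for illustration only).
print(clos_fabric(64, 64))  # {'spines': 32, 'leaves': 64, 'server_ports': 2048}
```

Hyperscale fabrics add more tiers (pods of leaf‑spine blocks joined by a super‑spine) but the same per‑tier port accounting applies.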

Major Trends:

  • Disaggregation – Hyperscalers separate hardware from software, running their own network operating system (NOS), e.g., SONiC or Google's proprietary OS, on ODM‑built "white‑box" or "bare‑metal" switches.
  • AI‑Optimized Fabrics – New designs focus on ultra‑low latency, congestion control, and massive east‑west bandwidth for GPU clusters; NVIDIA InfiniBand and 800G Ethernet fabrics are now common for GPU pods.
  • Co‑Packaged Optics – Moving optics closer to the switch ASIC to reduce power and latency, with 800G+ transceivers becoming standard for AI and HPC workloads.
  • Open Compute Project Influence – Many designs are OCP‑compliant but with proprietary tweaks.
  • Multi‑Vendor Strategy – Hyperscalers dual‑source ASICs and optics to avoid supply chain risk.


How it works: hyperscaler compute server in‑house design process with ODM partners

Cloud‑resident high‑performance compute servers used by hyperscale cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Oracle Cloud Infrastructure (OCI), Meta, and others use a mix of custom in‑house designs and ODM (Original Design Manufacturer)‑built hardware.

In‑House Design Teams:

  • Amazon (AWS) – Designs its own Nitro System–based servers, including custom motherboards, networking cards, and security chips. AWS also develops Graviton (Arm‑based) and Trainium/Inferentia (AI) processors. HPC instances use Elastic Fabric Adapter (EFA) for low‑latency interconnects.
  • Google (GCP) – Builds custom HPC server boards and racks for its data centers, plus TPUs (Tensor Processing Units) for AI workloads, and uses high‑speed interconnects like Google's Jupiter network for HPC clusters.
  • Microsoft Azure – Designs Azure‑optimized servers and AI accelerators, often in collaboration with partners, and contributes designs to the Open Compute Project (OCP).  It integrates InfiniBand and/or 400 Gbps Ethernet for HPC interconnects.
  • Oracle – Designs bare‑metal HPC shapes with AMD EPYC, Intel Xeon, and NVIDIA GPUs, plus RDMA cluster networking for microsecond latency.
  • Meta – Designs its compute servers, especially for AI workloads, by working closely with ODM partners like Quanta Computer, Wiwynn, and Foxconn.

Manufacturing Partners (ODMs/OEMs):

While hyperscaler compute server designs are proprietary, the physical manufacturing is typically outsourced to Original Design Manufacturers (ODMs) that specialize in hyperscale data center gear, as per this table:

ODM / OEM | Known for | Cloud Customers
Quanta Cloud Technology (QCT) | Custom rack servers, storage nodes, and HPC nodes | AWS, Azure, Oracle
WiWynn (spun off from Wistron) | OCP‑compliant HPC servers | Microsoft, Meta
Inventec | HPC and AI‑optimized compute and storage servers | AWS, GCP
Foxconn / Hon Hai | Large‑scale server manufacturing | Google, AWS, other providers
Celestica / Delta | Specialized HPC server and networking gear | Multiple hyperscalers
Supermicro | GPU‑dense, AI‑optimized HPC systems (standard and custom) | AWS, Oracle, Azure

The global server market is expected to reach $380 billion by 2028.

……………………………………………………………………………………………………………………………………………………………..

Here’s a supply chain relationship map for cloud‑resident high‑performance compute (HPC) servers used by the major hyperscalers:

Hyperscale HPC Server Design & Manufacturing Landscape:

Cloud Provider | In‑House Design Focus | Key Manufacturing / ODM Partners | Notable HPC Hardware Features
Amazon Web Services (AWS) | Custom Nitro boards, Graviton CPUs, Trainium/Inferentia AI chips, EFA networking | Quanta Cloud Technology (QCT), Inventec, Foxconn | Arm‑based HPC nodes, GPU clusters (NVIDIA H100/A100), ultra‑low‑latency RDMA
Google Cloud Platform (GCP) | Custom server boards, TPU accelerators, Jupiter network fabric | Quanta, Inventec, Foxconn | TPU pods, GPU supernodes, liquid‑cooled racks
Microsoft Azure | OCP‑compliant HPC designs, Maia AI chip, Cobalt CPU, InfiniBand networking | WiWynn, QCT, Celestica | Cray‑based HPC clusters, GPU/FPGA acceleration
Oracle Cloud Infrastructure (OCI) | Bare‑metal HPC shapes, RDMA cluster networking | QCT, Supermicro | AMD EPYC/Intel Xeon nodes, NVIDIA GPU‑dense racks
Meta (for AI/HPC research) | OCP‑based AI/HPC servers | WiWynn, QCT | AI Research SuperCluster, liquid cooling
Alibaba Cloud / Tencent Cloud | Custom AI/HPC boards, Arm CPUs | Inspur, Sugon, QCT | GPU/FPGA acceleration, high‑bandwidth fabrics

Meta’s ODM Collaboration Model:

  • Quanta Computer: Meta has partnered with Quanta for final assembly of its next-gen AI servers. Quanta is responsible for building up to 6,000 racks of the Santa Barbara servers, which feature advanced cooling and power delivery systems.
  • Wiwynn & Foxconn: These ODMs also play key roles in Meta's infrastructure. Wiwynn reportedly earns more than half its revenue from Meta, while Foxconn handles system assembly for NVIDIA's NVL72 servers, which Meta may also utilize.
  • Broadcom Partnership: For chip supply, Meta collaborates with Broadcom to integrate custom ASICs into its server designs.

Hyperscaler/ODM Collaboration Process:

  1. Design Phase – Hyperscalers’ hardware teams define the architecture: CPU/GPU choice, interconnect, cooling, power density.
  2. ODM Manufacturing – Partners like Quanta, WiWynn, Inventec, Foxconn, Celestica, and Supermicro build the servers to spec.
  3. Integration & Deployment – Systems are tested, integrated into racks, and deployed in hyperscale data centers.
  4. Optimization – Providers fine‑tune firmware, drivers, and orchestration for HPC workloads (e.g., CFD, genomics, AI training).

Industry Trends:

  • Open Compute Project (OCP) – Many designs are shared in the OCP community, allowing ODMs to build interoperable, cost‑optimized hardware at scale and speeding up deployment.
  • Vertical Integration – Hyperscalers increasingly design custom silicon (e.g., AWS Graviton, Google TPU, Microsoft Maia AI chip) to optimize performance and cost and to reduce dependency on third‑party CPUs/GPUs (see Specialized HPC Components below).
  • AI‑Optimized Racks – New designs focus on high‑density GPU clusters, liquid cooling, and ultra‑low‑latency networking for AI workloads.
  • Liquid Cooling – Increasingly common for dense GPU/CPU HPC racks.

Specialized HPC Components:

  • CPUs – AMD EPYC, Intel Xeon Scalable, AWS Graviton (Arm), custom Google CPUs.
  • GPUs / Accelerators – NVIDIA H100/A100, AMD Instinct, Google TPU, AWS Trainium.
  • Networking – Mellanox/NVIDIA InfiniBand, AWS EFA, Oracle RDMA cluster networking.
  • Storage – Parallel file systems like Lustre, BeeGFS, and IBM Spectrum Scale for HPC workloads.

References:

ODM Sales Soar as Hyperscalers and Cloud Providers Go Direct

The future of US hyperscale data centers | McKinsey

100MW+ Wholesale Colocation Deals: Inside the Hyperscaler Surge

https://www.datacenterknowledge.com/servers/foxconn-on-track-to-become-the-world-s-largest-server-vendor-omdia

Hyperscaler design of networking equipment with ODM partners – IEEE ComSoc Technology Blog

Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers


Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Both telecom and enterprise networks are being reshaped by the bandwidth and latency demands of AI.  Network operators that fail to modernize their architectures risk falling behind.  Why?  AI workloads are network killers: they demand massive east-west traffic, ultra-low latency, and predictable throughput.

  • Real-time observability is becoming non-negotiable, as enterprises need to detect and fix issues before they impact AI model training or inference.
  • Self-driving networks are moving from concept to reality, with AI not just monitoring but actively remediating problems.
  • The competitive race is now about who can integrate AI into networking most seamlessly — and HPE/Juniper’s Mist AI, Cisco’s assurance stack, and Nvidia’s AI fabrics are three different but converging approaches.

Cisco, HPE/Juniper, and Nvidia are designing AI-optimized networking equipment, with a focus on real-time observability, lower latency and increased data center performance for AI workloads.  Here’s a capsule summary:

Cisco: AI-Ready Infrastructure:

  • Cisco is embedding AI telemetry and analytics into its Silicon One chips, Nexus 9000 switches, and Catalyst campus gear.
  • The focus is on real-time observability via its ThousandEyes platform and AI-driven assurance in DNA Center, aiming to optimize both enterprise and AI/ML workloads.
  • Cisco is also pushing AI-native data center fabrics to handle GPU-heavy clusters for training and inference.

HPE + Juniper: AI-Native Networking Push:

  • Following its $13.4B acquisition of Juniper Networks, HPE has merged Juniper’s Mist AI platform with its own Aruba portfolio to create AI-native, “self-driving” networks.
  • Key upgrades include:

    – Agentic AI troubleshooting that uses generative AI workflows to pinpoint and fix issues across wired, wireless, WAN, and data center domains.

    – Marvis AI Assistant with enhanced conversational capabilities — IT teams can now ask open-ended questions like "Why is the Orlando site slow?" and get contextual, actionable answers.

    – Large Experience Model (LEM) with Marvis Minis — digital twins that simulate user experiences to predict and prevent performance issues before they occur.

    – Apstra integration for data center automation, enabling autonomous service provisioning and cross-domain observability.

Nvidia: AI Networking at Compute Scale:

  • Nvidia's Spectrum-X Ethernet platform and Quantum-2 InfiniBand (both from the Mellanox acquisition) are designed for AI supercomputing fabrics, delivering ultra-low latency and congestion control for GPU clusters.
  • In partnership with HPE, Nvidia is integrating NVIDIA AI Enterprise and Blackwell architecture GPUs into HPE Private Cloud AI, enabling enterprises to deploy AI workloads with optimized networking and compute together.
  • Nvidia’s BlueField DPUs offload networking, storage, and security tasks from CPUs, freeing resources for AI processing.

………………………………………………………………………………………………………………………………………………………..

Here’s a side-by-side comparison of how Cisco, HPE/Juniper, and Nvidia are approaching AI‑optimized enterprise networking — so you can see where they align and where they differentiate:

Feature / Focus Area | Cisco | HPE / Juniper | Nvidia
Core AI Networking Vision | AI‑ready infrastructure with embedded analytics and assurance for enterprise + AI workloads | AI‑native, "self‑driving" networks across campus, WAN, and data center | High‑performance fabrics purpose‑built for AI supercomputing
Key Platforms | Silicon One chips, Nexus 9000 switches, Catalyst campus gear, ThousandEyes, DNA Center | Mist AI platform, Marvis AI Assistant, Marvis Minis, Apstra automation | Spectrum‑X Ethernet, Quantum‑2 InfiniBand, BlueField DPUs
AI Integration | AI‑driven assurance, predictive analytics, real‑time telemetry | Generative AI for troubleshooting, conversational AI for IT ops, digital twin simulations | AI‑optimized networking stack tightly coupled with GPU compute
Observability | End‑to‑end visibility via ThousandEyes + DNA Center | Cross‑domain observability (wired, wireless, WAN, DC) with proactive issue detection | Telemetry and congestion control for GPU clusters
Automation | Policy‑driven automation in campus and data center fabrics | Autonomous provisioning, AI‑driven remediation, intent‑based networking | Offloading networking/storage/security tasks to DPUs for automation
Target Workloads | Enterprise IT, hybrid cloud, AI/ML inference & training | Enterprise IT, edge, hybrid cloud, AI/ML workloads | AI training & inference at hyperscale, HPC, large‑scale data centers
Differentiator | Strong enterprise install base + integrated assurance stack | Deep AI‑native operations with user experience simulation | Ultra‑low latency, high‑throughput fabrics for GPU‑dense environments

Key Takeaways:

  • Cisco is strongest in enterprise observability and broad infrastructure integration.
  • HPE/Juniper is leaning into AI‑native operations with a heavy focus on automation and user experience simulation.
  • Nvidia is laser‑focused on AI supercomputing performance, building the networking layer to match its GPU dominance.

Conclusions:

  • Cisco leverages its market leadership, customer base and strategic partnerships to integrate AI with existing enterprise networks.
  • HPE/Juniper challenges rivals with an AI-native, experience-first network management platform. 
  • Nvidia aims to dominate the full-stack AI infrastructure, including networking.

Omdia on resurgence of Huawei: #1 RAN vendor in 3 out of 5 regions; RAN market has bottomed

Market research firm Omdia (owned by Informa) says Huawei remains the number one RAN vendor in three out of five large geographical regions.  Far from being fatally weakened by U.S. government sanctions, Huawei today looks as big and strong as ever. Its sales last year were the second highest in its history and only 4% less than it made in 2020, before those sanctions took effect. In three out of the five global regions studied by Omdia – Asia and Oceania, the Middle East and Africa, and Latin America and the Caribbean – Huawei was the leading RAN vendor. While third in Europe, it was absent from the top three only in North America where it is banned.

Spain's Telefónica remains a big Huawei customer in Brazil and Germany, despite telling then U.S. Under Secretary of State Keith Krach in 2020 that it would soon have "clean networks" in those markets. Deutsche Telekom and Vodafone, two other European telco giants, are also still heavy users of Huawei.  Ericsson and Nokia have noted Europe's inability to kick out Huawei while alerting investors to "aggressive" competition from Chinese vendors in some regions.

“A few years ago, we were all talking about high-risk vendors in Europe and I think, as it looks right now, that is not an opportunity,” said Börje Ekholm, Ericsson’s CEO, on a call with analysts last month. The substitution of the Nordic vendors for Huawei has not gone as far as they would have hoped.   Ekholm warned analysts one year ago about “sharply increased competition from Chinese vendors in Europe and Latin America” and said there was a risk of losing contracts. “I am sure we’ll lose some, but we do it because it is right for the overall gross margin in the company. Don’t expect us to be the most aggressive in the market.”

There are few signs of European telcos replacing one of the Nordic vendors with Huawei, or of big market share losses by Ericsson and Nokia to Chinese rivals. Nokia’s RAN market share outside China did not materially change between the first and second quarters, says Remy Pascal, a principal analyst with Omdia (quarterly figures are not disclosed but Nokia held 17.6% of the RAN market including China last year). Huawei appears to have overtaken it because of gains at the expense of other vendors and a larger revenue contribution from Huawei-friendly emerging markets in the second quarter. Seasonality and the timing of revenue recognition were also factors, says Pascal.

Huawei is still highly regarded by chief technology officers for the quality of its products. It was a pioneer in the development of 5G equipment for time division duplex (TDD) technology, where uplink and downlink communications occupy the same frequency channel, and in massive MIMO, an antenna-rich system for boosting signal strength. It beat Ericsson and Nokia to the commercialization of power amplifiers based on gallium nitride, an efficient alternative to silicon, according to Earl Lum, the founder of EJL Wireless Research.

Sanctions  have not held back Huawei’s technology as much as analysts had expected. While the company was cut off from the foundries capable of manufacturing the most advanced silicon, it managed to obtain good-enough 7-nanometer chips in China for its latest smartphones, spurring its resurgence in that market. Network products remain less dependent on access to cutting-edge chips, and sales in that sector do not appear to have suffered outside markets that have imposed restrictions.

Alternatives to Huawei’s dominance have not materialized in a RAN sector that was already short of options. Besides evicting Huawei from telco networks, U.S. authorities hoped “Open RAN” would give rise to American developers of RAN products. That has failed badly. 

  • Mavenir, arguably the best Open RAN hope the U.S. had, became emblematic of the Open RAN market gloom after it recently withdrew from the market for radio units as part of a debt restructuring.  Although the company has sold its Open RAN software to DISH Network and Vodafone, it has not achieved the market penetration it initially targeted. Mavenir has faced significant financial challenges that led to a restructuring in 2025, significant layoffs, and a major shift in strategy away from developing its own hardware.
  • Parallel Wireless makes Open RAN software and also provides Open RAN software-defined radios (SDRs) as part of its hardware ecosystem, focusing on disaggregating the radio access network stack to allow operators flexibility and reduced total cost of ownership. Their offerings include a hardware-agnostic 5G Standalone (SA) software stack and the Open RAN Aggregator software, which manages and converges multi-vendor RAN interfaces toward the core network.

Stefan Pongratz of Dell'Oro Group forecasts annual revenues from multi-vendor RAN deployments – where telcos combine vendors instead of buying from a single big supplier – will have reached an upper limit of $3 billion by 2029, giving multi-vendor RAN less than 10% of the total RAN market by that date.  He says five of six tracked regions are now classed as "highly concentrated," with a Herfindahl-Hirschman Index (HHI) score of more than 2,500. "This suggests that the supplier diversity element of the open RAN vision is fading," Stefan added.
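The HHI cited above is simply the sum of squared market shares, expressed in percentage points. A minimal sketch, using hypothetical shares rather than Dell'Oro's data:

```python
# Herfindahl-Hirschman Index: the sum of squared market shares,
# with shares in percentage points (0-100). Scores above 2,500 are
# conventionally classed as "highly concentrated".

def hhi(shares_pct):
    return sum(s * s for s in shares_pct)

# Hypothetical regional RAN shares for illustration (not Dell'Oro data):
print(hhi([40, 25, 20, 10, 5]))  # 2750 -> highly concentrated
```

A pure monopoly scores 10,000 (100 squared); ten equal competitors score 1,000, which shows how quickly a five-vendor market crosses the 2,500 threshold.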

Preliminary data from Dell'Oro indicate that Open RAN revenues grew year-over-year (Y/Y) in 2Q25 and were nearly flat Y/Y in the first half, supported by easier comparisons, stronger capex tied to existing Open RAN deployments, and increased activity among early majority adopters.

Open RAN used to mean alternatives to Ericsson and Nokia. Today, it looks synonymous with the top 5 RAN vendors (Huawei, Ericsson, Nokia, ZTE, and Samsung). In such an environment of extreme market concentration and failed U.S. sanctions, the appeal of Huawei’s RAN technology is still very much intact.

……………………………………………………………………………………………………………………………………………………………………….

Omdia's historical data shows that RAN sales fell by $5 billion, to $40 billion, in 2023, and by the same amount again last year. In 2025, it is guiding for low single-digit percentage growth outside China, implying the RAN market has bottomed out.  This stabilization suggests the market may be transitioning into a phase of flat-to-modest growth, though risks such as operator capex constraints and uneven regional demand remain.  However, vendor concentration in the RAN market remains extreme.

…………………………………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/5g/huawei-overtakes-nokia-outside-china-as-open-ran-stabilizes-

Open RAN is Stabilizing

Omdia: Huawei increases global RAN market share due to China hegemony

Malaysia’s U Mobile signs MoU’s with Huawei and ZTE for 5G network rollout

Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Network equipment vendors increase R&D; shift focus as 0% RAN market growth forecast for next 5 years!

vRAN market disappoints – just like OpenRAN and mobile 5G

Mobile Experts: Open RAN market drops 83% in 2024 as legacy carriers prefer single vendor solutions

Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

 

AT&T to buy spectrum licenses from EchoStar for $23 billion

Executive Summary:

Embattled EchoStar Corp, parent company of Dish Network [1.], has agreed to sell spectrum licenses to AT&T Inc. for about $23 billion in a deal that will help the company stay out of bankruptcy and fend off regulatory concerns about its airwave use. The sale will expand AT&T’s wireless network and add about 50 MHz of low-band and mid-band spectrum in an all-cash transaction, the Dallas, Texas-based telecommunications company said in a statement on Tuesday. The deal is expected to close by mid-2026, pending regulatory approval.

Note 1. Dish Network is one of only two U.S. wireless telcos that have commercially deployed a 5G SA core network on Amazon's AWS public cloud.  The EchoStar subsidiary has also deployed 5G Open RAN.

……………………………………………………………………………………………………………………………………………………………………………………………

Key Takeaways:

  • $23 billion acquisition will add an average of approximately 50 MHz of low-band and mid-band spectrum to AT&T’s holdings – covering virtually every market across the U.S. and positioning AT&T to maintain long-term leadership in advanced connectivity across 5G and fiber
  • Transaction powers improved and capital-efficient long-term growth by accelerating the Company’s ability to add converged subscribers with both 5G wireless and home internet services in more places
  • Leading AT&T network will enable continued EchoStar participation in wireless industry through long-term wholesale network services agreement

 

AT&T said the acquisition of approximately 30 MHz of mid-band spectrum and 20 MHz of low-band spectrum will strengthen the company's ability to deliver 5G and fiber services across the US. EchoStar will operate in the US market as a hybrid mobile network operator under its Boost brand, the company said in the statement. AT&T will be its primary network partner for wireless service.

AT&T has been spending heavily to expand its fiber-optic network across the country and previously said it would use cash savings from Trump's tax and spending bill to accelerate those plans. In May, it agreed to buy the consumer fiber operations of Lumen Technologies Inc. for $5.75 billion, expanding its fast broadband service in major cities like Denver and Las Vegas. AT&T intends to finance the EchoStar deal with a combination of cash on hand and borrowings. Jefferies Financial Group Inc. advised AT&T on the EchoStar acquisition.

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Backgrounder:

Federal regulators have been pushing EchoStar to sell some of its airwaves after concerns it had failed to put valuable slices of wireless spectrum to use, Bloomberg reported in July. The FCC launched an investigation in May into whether EchoStar was meeting its obligations for its wireless and satellite spectrum rights. The company skipped bond payments and considered filing for bankruptcy, saying the probe had stymied its ability to make decisions about its 5G network.  In a June meeting, first reported by Bloomberg, Trump urged EchoStar Chairman Charlie Ergen and FCC Chairman Brendan Carr to cut a deal to resolve the dispute. EchoStar shopped the assets to other would-be buyers, including Elon Musk’s Starlink, Bloomberg earlier reported.

The purchase price is $9 billion more than EchoStar paid for the spectrum and $5 billion more than the appraised value used in securitizing the assets, New Street Research’s Philip Burnett said in a research note Tuesday. While $1.5 billion shy of New Street’s valuation, he said the sale price was “nevertheless a great mark on value.”

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Quotes:

EchoStar CEO Ergen described the sale and related agreement to work with AT&T as “critical steps toward resolving the FCC’s spectrum utilization concerns.”

FCC spokesperson Katie Gorscak said “We appreciate the productive and ongoing discussions with the EchoStar team. The FCC will continue to focus on ensuring the beneficial use of scarce spectrum resources.”

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

References:

https://finance.yahoo.com/news/t-buy-echostar-spectrum-licenses-160830548.html

https://about.att.com/story/2025/echostar.html

FCC to investigate Dish Network’s compliance with federal requirements to build a nationwide 5G network

FCC approves EchoStar/Dish request to extend timeline for its 5G buildout

New FCC Chairman Carr Seen Clarifying Space Rules and Streamlining Approvals Process

Dish Network & Nokia: world’s first 5G SA core network deployed on public cloud (AWS)

Dish Network to FCC on its “game changing” OpenRAN deployment

DISH Wireless Awarded $50 Million NTIA Grant for 5G Open RAN Center (ORCID)

A 1959 re-invention of Frequency Modulated Continuous Wave (FMCW) radar

by Jeffrey Pawlan,  Chairman of IEEE SCV Life Members Affinity Group, with Alan J Weissberger (member of IEEE SCV Life Members Affinity Group)

Frequency Modulated Continuous Wave (FMCW) radar is currently a hot topic in the IEEE Microwave Theory and Technology Society.  It is a type of radar system that measures distance and velocity by continuously transmitting a radio wave whose frequency changes over time (a "chirp" signal). Analyzing the frequency difference between the transmitted and received signals yields the distance to a target, while the Doppler shift of the return yields the target's speed.
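The range and velocity relationships just described can be written down directly. Here is a minimal sketch of the standard FMCW equations; the chirp parameters below are illustrative assumptions, not values from this article:

```python
# Standard FMCW relationships, per the description above. For a linear
# chirp of bandwidth B swept over T_chirp, the round-trip delay appears
# as a beat frequency f_b between transmitted and received signals:
#     R = c * f_b * T_chirp / (2 * B)
# and the Doppler shift f_d gives the radial velocity:
#     v = f_d * c / (2 * f_c)      (f_c = carrier frequency)
# Parameter values below are illustrative, not taken from the article.

C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
    return C * f_beat_hz * chirp_s / (2.0 * bandwidth_hz)

def fmcw_velocity(f_doppler_hz: float, carrier_hz: float) -> float:
    return f_doppler_hz * C / (2.0 * carrier_hz)

# A 77 GHz automotive-style chirp: 4 GHz sweep in 40 microseconds.
r = fmcw_range(f_beat_hz=2.0e6, chirp_s=40e-6, bandwidth_hz=4.0e9)
v = fmcw_velocity(f_doppler_hz=5.13e3, carrier_hz=77e9)
print(round(r, 3), round(v, 2))  # 3.0 9.99  (meters, m/s)
```

In a real receiver the beat and Doppler frequencies are extracted from an FFT of the mixed (transmit x receive) signal; the arithmetic above is the final step.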

Below is a short summary of my 1959 invention related to FMCW, but first some historical background and applications:

  • The earliest known use of FMCW was in the 1920s for ionospheric research.
  • The first patent for an FMCW radar was filed in 1928 for altitude measurement from an aircraft: Bentley, J. O., "Airplane Altitude Indicating System," U.S. Patent No. 2,011,392, issued August 13, 1935; application filed August 10, 1928.
  • Most theoretical work was published between the late 1940s and early 1960s: Luck, D. G. C., Frequency Modulated Radar, New York, McGraw-Hill, 1949; J. R. Klauder, A. C. Price, S. Darlington, and W. J. Albersheim, "The Theory and Design of Chirp Radars," Bell Syst. Tech. J. 39 (4), 1960, pp. 745–808.
  • Applications include: radar altimetry, radar navigation, vehicle collision warning systems, level indication in tanks, aviation (altimetry, weather), industrial automation (distance, speed, level sensing), and healthcare (vital sign monitoring).
  • Its low power, compact size, and cost-effectiveness suit short-range applications, enabling features like blind-spot detection, drone navigation, and non-contact medical measurements.

Advantages of FMCW Radar:

  • mm-wave FMCW radars offer high-resolution distance measurement (a resolution of 2 cm is easily achieved over 20-30 meters)
  • Measures target range and velocity simultaneously
  • FMCW provides faster measurement updates than pulsed radar systems, because FMCW mm-wave radars transmit continuously
  • Functions well in many types of weather and atmospheric conditions, such as heavy rain, humidity, fog, and dust
  • Immune to effects from temperature differences or high temperatures
  • Better electrical and radiation safety
  • FMCW radars offer good range compared to non-radio technologies such as visible or infrared light or ultrasonic waves, due to superior signal propagation
  • Can be mounted invisibly (behind a radome)
  • Can penetrate a variety of materials; hence, FMCW radar can be used for measurement or detection of concealed or covered targets
  • Better at detecting tangential motion than Doppler-based systems

My 1959 re-invention of FMCW radar:

Surplus stores were common in large cities like Los Angeles (where I lived) from 1950 to 1970. Those stores purchased World War II radios and parts from the U.S. government at a very low cost. The stores then sold those items to hobbyists and collectors.  That’s how microwave radio parts were acquired – mostly by hobbyists.

In most cases, one complete X-band radar unit and one partial unit were purchased. Each unit contained a reflex klystron [1.] for the receiver’s local oscillator. The magnetron in the transmitter was not used because it could have generated a dangerous level of RF power. Pieces of WR90 waveguide were used to construct a low-power (20 milliwatt) transmitter with one klystron and a receiver with the other. These klystrons were type 723A/B.

…………………………………………………………………………………………………………………………………………………………………….

Note 1. A klystron is a specialized linear-beam vacuum tube, invented in 1937, which is used as an amplifier for high radio frequencies, from UHF up into the microwave range. A reflex klystron is a low-power vacuum tube oscillator that generates microwave oscillations by using a single resonant cavity and a repeller electrode to reflect an electron beam back through the cavity. The electron beam is velocity-modulated by the cavity, causing electrons to bunch together and produce a microwave RF output as they return through the cavity gap. It consists of an electron gun, a single cavity that functions as both a “buncher” and “catcher,” and a repeller that pushes the electrons back.

…………………………………………………………………………………………………………………………………………………………………….

Frequency Modulation (FM) was used to modulate the signal. This was achieved by adjusting the negative voltage on the klystron tube’s repeller, which is a negatively charged electrode located at the far end of the tube from the electron gun. Its function is to create a high negative potential that repels the electron beam, causing it to reverse direction and pass back through the resonant cavity, a process necessary for the device to oscillate and generate microwave energy.

The repeller ran at approximately -105 V at a very low current. I used a vacuum tube radio “B” battery to provide the repeller power. I did not use AC-operated power supplies, because they produced too much ripple, which caused an unacceptable amount of hum in the signal. A transformer was placed in series with the repeller voltage to couple in the modulating signal.

In my amateur radio experimentation, I used audio from my microphone amplifier. I cannot explain how my brain got the idea that I might be able to make a radar by using triangular wave modulation. I confirmed that idea by looking at the resulting frequency difference of the return microwave signal bouncing off of a nearby wall. It worked!

I modulated the klystron with a triangular wave and then determined the distance to the wall by comparing the timing of the reflected signal with that of the transmitted signal. The triangular wave varied the frequency of the signal over time, so the round-trip delay could be read from the point on the triangle waveform at which the reflection arrived. Since I was so young, I did not have access to the previously published papers, patents, and books, so I could not copy or reference them. Therefore, I did not invent FMCW radar, but I claim to have independently re-invented it.
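The triangular-wave scheme described above has a useful property that a short sketch can illustrate: for a moving target, the Doppler shift subtracts from the range-induced beat frequency on the rising half of the triangle and adds to it on the falling half, so the two halves let you solve for range and velocity separately. This is the generic textbook formulation (all parameter names are illustrative), not a reconstruction of the 1959 setup.

```python
# Triangular-wave FMCW: up- and down-sweep beat frequencies separate
# range from Doppler. Illustrative sketch, not the 1959 hardware.
C = 3e8  # speed of light, m/s

def range_and_velocity(f_up_hz, f_down_hz, bandwidth_hz, sweep_time_s, carrier_hz):
    """Recover target range and radial velocity from the beat frequencies
    measured on the rising and falling halves of the triangle.
    Convention: an approaching target lowers the up-sweep beat and
    raises the down-sweep beat."""
    f_range = (f_up_hz + f_down_hz) / 2.0     # range-only component
    f_doppler = (f_down_hz - f_up_hz) / 2.0   # Doppler component
    rng = f_range * C * sweep_time_s / (2.0 * bandwidth_hz)
    vel = f_doppler * C / (2.0 * carrier_hz)  # v = f_d * wavelength / 2
    return rng, vel
```

For a stationary wall, as in the experiment, the up- and down-sweep beats are equal and the recovered velocity is zero; only the range term survives.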

In 1959, I was a subscriber to Microwaves magazine. I penned a short letter to that publication describing what I had done and included a few diagrams. They had no idea that I was only 14 years old. They wrote back a typed letter stating they would like me to submit a full article on this new radar system. The problem was that even though I was excellent at reading technical books, I was absolutely terrible at expressing myself in writing. So I did not reply to them. I saved that letter from the editor of Microwaves magazine for decades, but threw it out the last time I moved.

………………………………………………………………………………………………………………………………………………………………………………………………….

Postscript:

At 79 years old, I recently designed a unique antenna feedhorn for 47 GHz.  I submitted a 12-page write-up to the Microwave Journal which will publish it in 2026.

References:

https://pawlan.com/about.html

https://data.cresis.ku.edu/education/tutorial_2006/fmcw_6-28-06_1pm.pdf

https://www.everythingrf.com/community/what-is-a-fmcw-radar

Nokia introduces new Wavence microwave solutions to extend 5G reach in both urban and rural environments

SoftBank’s Transformer AI model boosts 5G AI-RAN uplink throughput by 30%, compared to a baseline model without AI

SoftBank has developed its own Transformer-based AI model that can be used for wireless signal processing. SoftBank used its Transformer model to improve uplink channel interpolation, a signal processing technique in which the network essentially makes an educated guess about the characteristics and current state of a signal’s channel. Enabling this type of intelligence in a network contributes to faster, more stable communication, according to SoftBank. The Japanese wireless network operator successfully increased uplink throughput by approximately 20% compared to a conventional signal processing method (the baseline method). In the latest demonstration, the new Transformer-based architecture was run on GPUs and tested in a live Over-the-Air (OTA) wireless environment. In addition to confirming real-time operation, the results showed further throughput gains and achieved ultra-low latency.

Editor’s note: A Transformer model is a type of neural network architecture that emerged in 2017. It excels at interpreting streams of sequential data and underpins large language models (LLMs). Transformer models have also achieved elite performance in other fields of artificial intelligence (AI), including computer vision, speech recognition and time series forecasting. Transformer models are lightweight, efficient, and versatile – capable of natural language processing (NLP), image recognition and, as in this SoftBank demo, wireless signal processing.

Significant throughput improvement:

  • Uplink channel interpolation using the new architecture improved uplink throughput by approximately 8% compared to the conventional CNN model. Compared to the baseline method without AI, this represents an approximately 30% increase in throughput, proving that the continuous evolution of AI models leads to enhanced communication quality in real-world environments.

Higher AI performance with ultra-low latency:

  • While real-time 5G communication requires processing in under 1 millisecond, this demonstration with the Transformer achieved an average processing time of approximately 338 microseconds, an ultra-low latency that is about 26% faster than the convolutional neural network (CNN) [1.] based approach. Generally, AI model processing speeds decrease as performance increases. This achievement overcomes the technically difficult challenge of simultaneously achieving higher AI performance and lower latency. Editor’s note: Perhaps this can overcome the performance limitations in ITU-R M.2150 for URLLC in the RAN, which is based on an uncompleted 3GPP Release 16 specification.
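As a quick sanity check on those figures (simple arithmetic on the press-release numbers, assuming “26% faster” means 26% less processing time):

```python
# Back-of-envelope check of the reported AI-RAN latency figures.
transformer_us = 338.0                  # reported average processing time
cnn_us = transformer_us / (1 - 0.26)    # "26% faster" implies CNN ~457 microseconds
budget_us = 1000.0                      # real-time 5G budget: under 1 millisecond

print(f"implied CNN latency ~ {cnn_us:.0f} us")
print(f"headroom vs 1 ms budget: {budget_us - transformer_us:.0f} us")
```

Both approaches fit inside the 1 ms budget, but the Transformer leaves roughly twice the headroom, which is the point of the demonstration.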

Note 1. CNN-based approaches to achieving low latency focus on optimizing model architecture, computation, and hardware to accelerate inference, especially in real-time applications. Rather than relying on a single technique, the best results are often achieved through a combination of methods. 

Using the new architecture, SoftBank conducted a simulation of “Sounding Reference Signal (SRS) prediction,” a process required for base stations to assign optimal radio waves (beams) to terminals. Previous research using a simpler Multilayer Perceptron (MLP) AI model for SRS prediction confirmed a maximum downlink throughput improvement of about 13% for a terminal moving at 80 km/h.

In the new simulation with the Transformer-based architecture, the downlink throughput for a terminal moving at 80 km/h improved by up to approximately 29%, and by up to approximately 31% for a terminal moving at 40 km/h. This confirms that enhancing the AI model more than doubled the throughput improvement rate (see Figure 1). This is a crucial achievement that will lead to a dramatic improvement in communication speeds, directly impacting the user experience.

The most significant technical challenge for the practical application of “AI for RAN” is to further improve communication quality using high-performance AI models while operating under the real-time processing constraint of less than one millisecond. SoftBank addressed this by developing a lightweight and highly efficient Transformer-based architecture that focuses only on essential processes, achieving both low latency and maximum AI performance. The important features are:

(1) Grasps overall wireless signal correlations
By leveraging the “Self-Attention” mechanism, a key feature of Transformers, the architecture can grasp wide-ranging correlations in wireless signals across frequency and time (e.g., complex signal patterns caused by radio wave reflection and interference). This allows it to maintain high AI performance while remaining lightweight. Convolution focuses on a part of the input, while Self-Attention captures the relationships of the entire input (see Figure 2).

(2) Preserves physical information of wireless signals
While it is common to normalize input data to stabilize learning in AI models, the architecture features a proprietary design that uses the raw amplitude of wireless signals without normalization. This ensures that crucial physical information indicating communication quality is not lost, significantly improving the performance of tasks like channel estimation.

(3) Versatility for various tasks
The architecture has a versatile, unified design. By making only minor changes to its output layer, it can be adapted to handle a variety of different tasks, including channel interpolation/estimation, SRS prediction, and signal demodulation. This reduces the time and cost associated with developing separate AI models for each task.
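The Self-Attention mechanism highlighted in item (1) above can be sketched in a few lines of NumPy. This is generic scaled dot-product attention applied to a made-up sequence of per-subcarrier channel features; it is not SoftBank’s proprietary architecture, and all shapes and names are illustrative.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention (Vaswani et al., 2017).
    x: (seq_len, d) sequence, e.g. channel estimates across subcarriers.
    Each output position is a weighted mix of ALL positions -- unlike a
    convolution, which only sees a local window."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (seq, seq) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # global mixing of the sequence

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((16, d))                      # 16 subcarriers, 8 features each
out = self_attention(x, *(rng.standard_normal((d, d)) for _ in range(3)))
print(out.shape)
```

A convolution with kernel width k would mix only k neighboring subcarriers per output; here every output row is a weighted combination of all 16 positions, which is the “wide-ranging correlation” property described above.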

The demonstration results show that high-performance AI models like Transformer and the GPUs that run them are indispensable for achieving the high communication performance required in the 5G-Advanced and 6G eras. Furthermore, an AI-RAN that controls the RAN on GPUs allows for continuous performance upgrades through software updates as more advanced AI models emerge, even after the hardware has been deployed. This will enable telecommunication carriers to improve the efficiency of their capital expenditures and maximize value.

Moving forward, SoftBank will accelerate the commercialization of the technologies validated in this demonstration. By further improving communication quality and advancing networks with AI-RAN, SoftBank will contribute to innovation in future communication infrastructure. The Japan-based conglomerate strongly endorsed AI-RAN at MWC 2025.

References:

https://www.softbank.jp/en/corp/news/press/sbkk/2025/20250821_02/

https://www.telecoms.com/5g-6g/softbank-claims-its-ai-ran-tech-boosts-throughput-by-30-

https://www.telecoms.com/ai/softbank-makes-mwc-25-all-about-ai-ran

https://www.ibm.com/think/topics/transformer-model

https://www.itu.int/rec/R-REC-M.2150/en

Softbank developing autonomous AI agents; an AI model that can predict and capture human cognition

Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

OpenAI announces new open weight, open source GPT models which Orange will deploy

Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Following two years of steep declines, initial estimates by Dell’Oro Group reveal that total RAN revenues—including baseband, radio hardware, and software, excluding services—advanced for a third consecutive quarter outside of China in 2Q 2025.

“Our initial assessment confirms that the narrative we’ve been discussing for some time is now coming to fruition. Market conditions have continued to stabilize, resulting in growth for three consecutive quarters outside of China,” said Stefan Pongratz, Vice President of RAN market research at the Dell’Oro Group. “However, broader market sentiment remains subdued, and a rapid rebound is not anticipated. The industry acknowledges that short-term fluctuations are unlikely to alter the market’s generally flat long-term trajectory,” Pongratz added.

Additional highlights from the 2Q 2025 RAN report:

  • Growth in Europe, as well as the Middle East and Africa, nearly offset declines in the Caribbean and Latin America, as well as the Asia Pacific region.
  • RAN vendor dynamics are gradually shifting, driven by three major trends: the strong are getting stronger, laggards are not improving, and the market is becoming increasingly divided.
  • Ericsson and Huawei each accounted for more than 60 percent of the 1H25 market in North America and China, respectively.
  • The top 5 RAN suppliers, based on worldwide revenues for the trailing four quarters, are Huawei, Ericsson, Nokia, ZTE, and Samsung.
  • The short-term outlook remains unchanged, with total RAN expected to stabilize in 2025.

For sure, RAN is not a growth market (+1% CAGR between 2000 and 2023). However, underneath that flattish topline over time, RAN revenues fluctuate significantly as new spectrum/technologies become available. After a massive RAN surge between 2017 and 2021, RAN revenues declined sharply in 2023 and the fundamental question now is fairly straightforward – how will the slowdown in mobile data traffic impact the RAN market over the next five years? The constantly changing and increasingly demanding end-user expectations in combination with the search for growth present opportunities and challenges for incumbent RAN suppliers and new entrants.

………………………………………………………………………………………………………………………………………………………………………………………………

Huawei’s ability to sustain growth during a period of industry volatility can be attributed to several key factors:

  • Strong Presence in China: Huawei maintains a commanding position in its home market, which remains one of the largest and most competitive globally. Despite external pressures and restrictions, its domestic strength provides stability and scale.
  • Expanding Global Footprint: Growth in regions such as Europe, the Middle East, and Africa helped Huawei offset weaker performance in Asia Pacific, the Caribbean, and Latin America. These markets have been central to Huawei’s strategy of diversifying its global presence.
  • Technological Advancements in 5G: Huawei has continued to invest heavily in 5G RAN innovation, leveraging advanced radio hardware, AI-driven network optimization, and energy-efficient base stations. These capabilities strengthen its competitive edge in delivering cost-effective and high-performance solutions.
  • Resilient Business Strategy: Despite global challenges, including regulatory restrictions in certain markets, Huawei has adapted by strengthening local partnerships, investing in regional ecosystems, and optimizing supply chain resilience.

………………………………………………………………………………………………………………………………………………………………………………………………….

According to a recent Omdia report, Ericsson is the top RAN vendor in both business performance and portfolio strength in 2025, thanks in part to its energy-efficient products, comprehensive support across radio technologies, and Open RAN–ready offerings.

Ericsson also continues expanding its enterprise solutions, with integrated strategies that include private 5G, Cradlepoint, and cloud-native cores. In India, Ericsson signed a multi-billion-dollar 4G/5G equipment deal with Bharti Airtel to enhance network coverage using Open RAN-ready solutions.

Nokia is actively replacing Huawei in key European deployments—securing a major Open RAN contract to supply Deutsche Telekom across 3,000 German sites. In the U.S., Nokia signed a multi-year deal with AT&T to provide cloud-based voice core and 5G network automation solutions powered by AI/ML. Nokia is gaining ground in Europe and the U.S. through modernization and automation contracts. Samsung is leveraging Open RAN partnerships for a comeback, and overall vendor competition is shaped by technology shifts toward cloud-native, AI-enabled, and multi-vendor architectures.

Samsung is stepping up in the Open RAN ecosystem — as illustrated by a successful joint demonstration between Samsung, Vodafone, and AMD showcasing a full Open RAN voice call using AMD processors and Samsung’s O-RAN vRAN software. Despite its RAN equipment revenues falling 25% in 2024, Samsung remains well positioned in Europe and Africa, particularly in Vodafone tenders for replacing Huawei, which may drive recovery through expanded vRAN/Open RAN adoption.

In summary, the global RAN market is stabilizing after a steep downturn in 2024. Huawei holds steady in core markets like China and parts of Europe, while Ericsson leads globally on portfolio strength and new deals — particularly Open RAN and enterprise solutions.

………………………………………………………………………………………………………………………………………………………………………………………………………

References:

RAN Market Grows Outside of China, According to Dell’Oro Group

Mobile Radio Access Network (RAN)

https://telecomlead.com/telecom-equipment/huawei-achieves-growth-in-global-ran-market-amid-industry-stabilization-122275

https://www.ericsson.com/en/ran/omdia-2025

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

Omdia: Huawei increases global RAN market share due to China hegemony

Network equipment vendors increase R&D; shift focus as 0% RAN market growth forecast for next 5 years!

vRAN market disappoints – just like OpenRAN and mobile 5G

Mobile Experts: Open RAN market drops 83% in 2024 as legacy carriers prefer single vendor solutions

Dell’Oro: Global RAN Market to Drop 21% between 2021 and 2029

Ericsson CEO’s strong statements on 5G SA, WRC 27, and AI in networks

At the Technology Policy Institute Forum in Aspen, Colorado this week, Ericsson CEO Börje Ekholm made many comments about “The Future of Wireless & Global Connectivity.”  To begin with, he said it’s super critical for western nations, including the U.S., to increase their 5G Stand Alone (SA) network deployments.  5G network operators need 5G SA to take full advantage of the platform to support apps and services that are optimized for low latency, higher uplink prioritization and network slicing. “It’s hard to monetize something you don’t have,” Ekholm said. “The network has to be built for 5G SA.”

Ekholm’s 5G SA comments echo those of Magnus Ewerbring, Ericsson’s chief technology officer – Asia Pacific, who strongly asserted that 5G SA is the way for wireless network operators to monetize and differentiate their 5G networks. 

Ekholm noted that China has prioritized 5G SA and has more than 4 million base stations deployed, estimating that this represents about ten times what’s been deployed in the U.S. China has been able to monetize that by supporting advanced robotics and automation in tens of thousands of factories. China is “highly competitive,” has “enormous scale, domestically,” and has made 5G SA a priority, the Ericsson CEO said. Western countries need to take China’s 5G SA efforts “seriously” and invest more in their wireless infrastructure, as it’s a competitive imperative.

Status of 5G SA network deployments:

A recent Heavy Reading (now part of Omdia) operator survey found that 35% of respondents said they have deployed 5G SA, with 20% expecting to be live by year-end. Some 41% cited “new or better services” as the primary driver for 5G core investment.

After a very slow start during the past five years, Téral Research says the migration to 5G SA has increased.  Of the total 354 commercially available 5G public networks reported at the end of 1Q25, 74 are 5G SA –  up from 49 one year ago.  This growth is being driven by the success of fixed wireless access (FWA), a wider range of 5G SA-compatible devices, and the rise of voice over new radio (VoNR). Téral is also seeing increased adoption of private cloud for SA core deployments, with data sovereignty concerns shaping CSP strategies. Network slicing, which requires 5G SA, is moving from theory to practice—now extending to critical use cases like military applications.

3GPP URLLC specifications are still not finalized and approved:

It should be noted that the 3GPP specifications for URLLC (Ultra-Reliable Low-Latency Communication) in the 5G SA core network and 5G NR access network are not considered 100% completed or finalized. URLLC relies on both the 5G NR (Radio Access Network) and the 5G Core network to achieve its goals. URLLC is vital for various industrial applications requiring real-time control and automation, such as the Industrial Internet of Things (IIoT), virtual reality, and autonomous vehicles. 

3GPP Release 16 introduced significant enhancements for URLLC in the 5G New Radio (NR) access and 5G Core network. While Release 16 was “frozen” in July 2020, work on URLLC enhancements, particularly in the Radio Access Network (RAN), was not fully completed. These enhancements are crucial for 3GPP NR to meet the ITU-R M.2410 minimum performance requirements for URLLC: ultra-high reliability and ultra-low latency.

3GPP Technical Specifications (TS) and Technical Reports (TR) become “official” standards when transposed into corresponding publications of a 3GPP Organizational Partner (such as ETSI) or of a standards body (such as ITU-R) for which a Partner acts as publisher (ATIS, in the case of ITU-R). Once a Release is frozen (see the definition in TR 21.900) and all work items are completed, 3GPP specifications are officially transposed and published by the Organizational Partners as part of their standards series.

………………………………………………………………………………………………………………………………………………………………………..

Ekholm also said that strong western representation at ITU-R’s WRC-27 “is critically important.”  That’s because licensed spectrum is likewise critical for the next generation of automation, self-driving vehicles and AI applications that will require a “truly reliable” and low-latency network, he added without mentioning the incomplete 3GPP URLLC specs.

“AI is the most fundamental technology we’ve seen so far,” he said. Ericsson has already been able to generate a 10% boost in spectrum efficiency using AI tools. While AI will no doubt erase some jobs, he’s also optimistic it will create new ones. Like so many analysts, Ekholm expects Gen AI to drive more traffic and new capabilities. “The criticality of the connectivity layer will become even more important,” he added.

References:

https://www.lightreading.com/5g/ericsson-ceo-calls-for-bigger-push-toward-5g-sa

https://www.tpiaspenforum.tech/agenda

Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth

Ookla: Uneven 5G deployment in Europe, 5G SA remains sluggish; Ofcom: 28% of UK connections on 5G with only 2% 5G SA

Ookla: Europe severely lagging in 5G SA deployments and performance

Téral Research: 5G SA core network deployments accelerate after a very slow start

Vision of 5G SA core on public cloud fails; replaced by private or hybrid cloud?

Latest Ericsson Mobility Report talks up 5G SA networks and FWA

3GPP Release 16 5G NR Enhancements for URLLC in the RAN & URLLC in the 5G Core network
Lumen deploys 400G on a routed optical network to meet AI & cloud bandwidth demands

Lumen is actively expanding its 400G optical network to support growing demands for high-bandwidth services, particularly for AI and cloud applications. This expansion includes deploying 400G connectivity in key markets and enhancing its Ultra-Low Loss (ULL) fiber network, the largest in North America. Lumen has deployed 400G in over a dozen markets, enabling faster speeds for accessing cloud services and third-party applications, according to SDxCentral. The initial rollout includes major markets like Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Los Angeles, Minneapolis, New York City, Phoenix, and Seattle.  

Lumen’s 400G network provides faster speeds for accessing third-party applications, cloud on-ramps, and various on-demand services. 400G connectivity is available through Lumen’s Ethernet On-Demand, Internet On-Demand, E-Line, E-LAN, and E-Access services. Jeff Ary, VP of product management at Lumen, explained that this combination of services offers customers flexibility in selecting how they consume the new capabilities, including through Lumen’s network-as-a-service (NaaS) model.

“We have customers that still want a one-year term in a fixed amount that they pay monthly. We have others that are really liking the NaaS, where they can turn it up, turn it down. But this supports all those customers, whether they want fixed bandwidth speed, fixed monthly amount, or if they want fixed and be able to increase and decrease through our NaaS portfolio, it’ll support that also,” Ary said.  Ary explained that the move builds on Lumen’s agreement last year with Corning to secure 10% of that vendor’s global fiber production capacity over the next two years. “That helped us optimize how we run our optical network,” Ary said, and helped push Lumen’s 400G network reach to more than 90,000 route miles. “That enabled us to put our layer-two network on top of that, to have up to 400-gig on our Ethernet network, which then, of course, helps our IP network as well,” he added.

Lumen is working with multiple vendors for its 400G deployment, including Cisco for the routed optical network, Juniper for the routers in the data centers, Ciena for long-haul optical transport and Corning for the fiber itself. Lumen is also using pluggable optic modules, and as demand increases the company can change out the optical pluggable modules from 400G to 800G or 1.6T, as needed.  Lumen said in February that it would use Ciena’s WaveLogic 6 Extreme (WL6e) 1.6 Tb/s coherent transceiver to support increased demand for running AI workloads, and in May Lumen announced it is working with Corning for a fiber buildout in western North Carolina to expand network capacity in light of  the AI boom.

Dave Ward, CTO and product officer for Lumen, told Light Reading that the wireline network service provider is delivering Internet Protocol (IP) and Ethernet services built on a routed optical network, “taking advantage of all the bandwidth and capacity we have in our fiber network,” instead of using a hub and spoke model where a centralized hub acts as the network core. Lumen is providing the network to transport their data to locations where they want to train AI or develop inference workloads, said Ward.  While Lumen’s network already connects to over 2,200 data centers, this launch provides a 400G upgrade and integration with Lumen Digital, Lumen’s on-demand IP and Ethernet services, according to Ward. 

“With a routed optical network, we get orders of magnitude improvement on capacity that we can now route across our fiber wherever we have fiber available, and it’s two to three orders of magnitude lower cost to deliver a bit,” Ward said. “We’re really trying to build that cloud core and really make that accessible.  It’s really building out those cloud core pieces, and lowering friction and having bandwidth, latency and redundancy engineered paths for our customers,” he added.

“We are partnering not only with the data center operators, but also with the hyperscalers to improve the speeds and access to all of those locations where our customers have their workloads and where they want their workloads and data to be,” said Ward.

Lumen’s overall network provides connectivity to the major cloud providers and 163,000 on-net customer locations. By 2028, Lumen plans to extend its network to 47 million intercity fiber miles.

References:

https://www.sdxcentral.com/news/lumen-lights-400g-connections-to-support-ai-naas-demand/

https://www.lightreading.com/data-centers/lumen-cranks-up-data-center-interconnect-to-400g-for-ai-boom

https://www.lumen.com/en-us/solutions/use-case/artificial-intelligence.html

Lumen and Ciena Transmit 1.2 Tbps Wavelength Service Across 3,050 Kilometers

Analysts weigh in: AT&T in talks to buy Lumen’s consumer fiber unit – Bloomberg

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Microsoft choses Lumen’s fiber based Private Connectivity Fabric℠ to expand Microsoft Cloud network capacity in the AI era

Lumen, Google and Microsoft create ExaSwitch™ – a new on-demand, optical networking ecosystem

ACSI report: AT&T, Lumen and Google Fiber top ranked in fiber network customer satisfaction
