OneLayer Raises $28M Series A funding to transform private 5G networks with enhanced security

Private network security startup OneLayer has scored $28 million in Series A funding led by Maor Investments, with participation from McRock Capital, Chevron Technology Ventures and existing investors Viola Ventures, Grove Ventures and Koch Disruptive Technologies. The round brings OneLayer's total funding to more than $43 million.

The Boston, MA-headquartered firm says it is the leader in private LTE/5G OT management and Zero Trust security. The startup’s major rivals include Fortinet, Juniper Networks and Trend Micro.

“Private 5G represents a massive market opportunity, with enterprise demand far outpacing the industry’s ability to deliver simple, secure solutions,” said Ido Hart, Partner at Maor. “OneLayer solves this by translating cellular networks into enterprise IT language, eliminating the need for specialized cellular expertise. At its core, OneLayer is building the essential infrastructure layer that will make private 5G as ubiquitous as WiFi or any LAN.”

OneLayer’s mission is to secure and manage all private cellular network devices with enterprise-grade capabilities while eliminating the need for specialized cellular expertise. The company’s platform transforms private cellular environments from operational silos into integrated enterprise networks, providing complete device visibility, security, and control through familiar IT management tools.

The funding round was highly competitive and oversubscribed, reflecting strong investor confidence in OneLayer’s vision and momentum. OneLayer has demonstrated significant market traction across multiple verticals, including utilities, manufacturing, mining, airports, and critical infrastructure, with deployments spanning North America, Europe and Latin America. This includes major expansions at leading Fortune 100 manufacturers, with enterprises scaling from initial pilots to multi-site deployments spanning numerous facilities.

“Enterprises are deploying private cellular networks at unprecedented scale, but they’re struggling with device visibility, security gaps, and operational complexity that didn’t exist with traditional IT infrastructure,” said Dave Mor, CEO and Co-Founder of OneLayer. “The demand for solutions that bridge this gap is explosive; customers need to manage thousands of cellular-connected devices with the same confidence and control they have over their existing IT assets. Private LTE/5G is fast becoming a dominant enterprise network, and organizations need to manage and secure it within their current network setup. We established OneLayer to bridge this gap and create one layer of security and one layer of asset management across all organization networks.”

“OneLayer enables secure private 5G for industrial automation and autonomy – the foundation Industrial AI needs to scale,” said Scott MacDonald, Co-founder & Managing Partner of McRock Capital. “Private cellular networks represent the next wave of digital transformation, but connectivity without security is a house of cards. OneLayer provides the missing layer of trust that enterprises need to adopt private 5G with confidence.”

“OneLayer’s software platform holds promise to improve private cellular network reliability and integration for industrial companies, offering better visibility, cost efficiency and security,” said Jim Gable, Vice President of Innovation within Chevron’s Technology, Projects and Execution division and President of Technology Ventures at Chevron. “This is the latest investment from our Core Venture Fund, which focuses on high-growth startups and breakthrough technologies that have the potential to improve Chevron’s core businesses, as well as create new opportunities for growth. We welcome OneLayer to the portfolio.”

“Since coming out of stealth mode in 2022, OneLayer has gone from strength to strength, raising more than $43 million to date,” SNS Telecom & IT 5G research director, Asad Khan, told Fierce.

OneLayer’s platform has proven essential for organizations deploying private LTE and 5G networks at scale. Notable deployments include:

  • Southern Linc’s regional LTE network spanning 122,000 square miles across Alabama, Georgia, and southeastern Mississippi, supporting diverse devices from grid control systems to mission-critical push-to-talk services.
  • Evergy’s private LTE network, built to eventually cover their 1.7 million customers in Kansas and Missouri; currently supporting thousands of devices, including Internet of Things (IoT) sensors, smart meters, OT and other cellular devices.
  • Latin America mining operations where OneLayer enabled automated processes, improved operational efficiency, and secure deployment of new production sites through comprehensive device discovery, classification, and network segmentation.

OneLayer’s comprehensive partner ecosystem spans the entire private cellular value chain, from cellular leaders Nokia and Ericsson to leading cellular router manufacturers Digi, Cradlepoint, and MultiTech; CMDBs ServiceNow and SolarWinds; channel partners and integrators World Wide Technology (WWT), Burns & McDonnell (B&M), Anterix, and Future Technologies; and leading cybersecurity vendors such as Palo Alto Networks, Fortinet, Check Point, Claroty, and more.

Adding to the strong revenues from its expanding customer base, the company will use the Series A funding to:

  • Accelerate go-to-market initiatives in response to surging enterprise demand
  • Expand product capabilities to provide customers a platform that solves their critical operational pains
  • Scale geographic expansion beyond North America following successful entry into Latin American markets, and further expand European operations

OneLayer’s growth reflects the broader market transformation as private cellular networks evolve from carrier-grade infrastructure to enterprise-managed networks. With partnerships across major 5G vendors and proven success in critical verticals and leading enterprises, OneLayer has become the de facto solution for managing and securing the rapidly expanding private cellular market.

About OneLayer 
OneLayer provides advanced asset management, operational intelligence, and Zero Trust security for private LTE and 5G networks. Its technology empowers enterprises to manage their cellular networks seamlessly without the need for cellular expertise. For more information, visit www.onelayer.com. Media Contact:  [email protected]

References:

https://www.prnewswire.com/il/news-releases/onelayer-raises-28m-series-a-to-transform-private-5g-networks-for-enterprise-use-302586220.html

https://www.fierce-network.com/wireless/private-network-security-startup-onelayer-grabs-28-million-funding

 

FT: Scale of AI private company valuations dwarfs dot-com boom

The Financial Times reports that ten loss-making artificial intelligence (AI) start-ups have gained close to $1 trillion in private market valuation in the past 12 months, fuelling fears about a bubble in private markets that is much greater than the dot-com bubble at the end of the 20th century. OpenAI leads the pack with a $500 billion valuation, but Anthropic and xAI have also seen their values march higher amid a mad scramble to buy into emerging AI companies. Smaller firms building AI applications have also surged, while more established businesses, like Databricks, have soared after embracing the technology.

U.S. venture capitalists (VCs) have poured $161 billion into artificial intelligence startups this year — roughly two-thirds of all venture spending, according to PitchBook — even as the technology’s commercial payoff remains elusive. VCs are on track to spend well over $200bn on AI companies this year.

Most of that money has gone to just 10 companies, including OpenAI, Anthropic, Databricks, xAI, Perplexity, Scale AI, and Figure AI, whose combined valuations have swelled by nearly $1 trillion, Financial Times calculations show. Those AI start-ups are all burning cash, with no profits forecast for many years.

Start-ups with about $5mn in annual recurring revenue, a metric used by fast-growing young businesses to provide a snapshot of their earnings, are seeking valuations of more than $500mn, according to a senior Silicon Valley venture capitalist.

Valuing unproven businesses at 100 times their earnings or more dwarfs the excesses of 2021, he added: “Even during peak Zirp [zero-interest rate policies], these would have been $250mn-$300mn valuations.”

“The market is investing as if all these companies are outliers. That’s generally not the way it works out,” he said. VCs typically expect to lose money on most of their bets, but see one or two pay the rest off many times over.

“There will be casualties. Just like there always will be, just like there always is in the tech industry,” said Marc Benioff, co-founder and chief executive of Salesforce, which has invested heavily in AI. He estimates $1tn of investment on AI might be wasted, but that the technology will ultimately yield 10 times that in new value.

“The only way we know how to build great technology is to throw as much against the wall as possible, see what sticks, and then focus on the winners,” he added.

“Of course there’s a bubble,” said Hemant Taneja, chief executive of General Catalyst, which raised an $8 billion fund last year and has backed Anthropic and Mistral. “Bubbles align capital and talent around new trends. There’s always some destruction, but they also produce lasting innovation.”

Venture investors have weathered cycles of boom and bust before — from the dot-com crash in 2000 to the software downturn in 2022 — but the current wave of AI funding is unprecedented. In 2000, VCs invested $10.5 billion in internet startups; in 2021, they deployed $135 billion into software firms. This year, they are on pace to exceed $200 billion in AI. “We’ve gone from the doldrums to full-on FOMO,” said one investment executive.

OpenAI and its start-up peers are competing with Meta, Google, Microsoft, Amazon, IBM, and others in a hugely capital-intensive race to train ever-better models, meaning the path to profitability is also likely to be longer than for previous generations of start-ups.

Backers are betting that AI will open multi-trillion-dollar markets, from automated coding to AI friends or companionship. Yet some valuations are testing credulity: the roughly 100-times-revenue multiples being sought by startups with about $5 million in annual recurring revenue surpass even the excesses of 2021. “The market is behaving as if every company will be an outlier,” the Silicon Valley investor said. “That’s rarely how it works.”
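For readers who want to check the multiple arithmetic behind those figures, here is a quick, purely illustrative sketch (using the numbers reported above; `revenue_multiple` is a hypothetical helper, not anyone's model):

```python
# Illustrative revenue-multiple arithmetic from the figures reported above.
# Not investment advice; the helper function name is our own.

def revenue_multiple(valuation: float, arr: float) -> float:
    """Valuation expressed as a multiple of annual recurring revenue (ARR)."""
    return valuation / arr

# A startup with ~$5M ARR seeking a ~$500M valuation:
current = revenue_multiple(500e6, 5e6)      # 100x revenue
# Versus the ~$250M-$300M valuations cited for the 2021 "peak ZIRP" era:
zirp_low = revenue_multiple(250e6, 5e6)     # 50x revenue
zirp_high = revenue_multiple(300e6, 5e6)    # 60x revenue

print(f"Today: {current:.0f}x; peak-ZIRP range: {zirp_low:.0f}x-{zirp_high:.0f}x")
```

That is, today's asks are roughly double the multiples seen at the top of the 2021 cycle.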

The enthusiasm has spilled into public markets. Shares of Nvidia, AMD, Broadcom, and Oracle have collectively gained hundreds of billions in market value from their ties to OpenAI. But those gains could unwind quickly if questions about OpenAI’s mounting losses and financial sustainability persist.

Sebastian Mallaby, author of The Power Law, summed it up beautifully:

“The logic among investors is simple: if we get AGI (Artificial General Intelligence, which would match or exceed human thinking), it’s all worth it. If we don’t, it isn’t... It comes down to these articles of faith about Sam’s (Sam Altman of OpenAI) ability to work it out.”

References:

https://www.ft.com/content/59baba74-c039-4fa7-9d63-b14f8b2bb9e2

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

AI Data Center Boom Carries Huge Default and Demand Risks

Canalys & Gartner: AI investments drive growth in cloud infrastructure spending

 

Ericsson’s revenue drops, profits soar; deal with Vodafone and partnership with Export Development Canada look promising

Ericsson’s 3rd quarter 2025 results, released today, showed a 9% drop in revenues, to 56.2 billion Swedish kronor (US$5.9 billion), compared with the same period last year, while its gross margin rose two percentage points, to 47.6%. U.S. sales fell 17% year-over-year, to about SEK22.5 billion ($2.4 billion), after an especially busy period in 2024. The only region where Ericsson realized any growth was Northeast Asia, and that was due to Japan’s new 5G rollout.

At Ericsson’s big mobile networks unit, sales fell 11% year-over-year, to SEK35.4 billion ($3.7 billion), while the decline on a constant-currency basis was just 4%. The division’s operating income also slid by 6%, to SEK7.1 billion ($740 million).

Sales were much better at the company’s cloud software and services group, responsible for the development of Ericsson’s core network software as well as its business and operational support systems. Reported sales rose 3%, to SEK15.3 billion ($1.6 billion), while Ericsson put the organic improvement at 9%. More importantly, it swung from an operating loss of SEK400 million ($42 million) a year earlier to a profit of SEK1.7 billion ($180 million).

Net income soared by an astonishing 191%, to SEK11.3 billion ($1.2 billion).  That sharp increase in net income was due to Ericsson’s recent sale of iconectiv, a provider of number-portability and data-exchange services, to a private equity firm. The deal landed Ericsson a capital gain of SEK7.6 billion ($800 million) that flattered its profits at the operating income level. In Stockholm, Ericsson’s share price soared more than 14% in mid-morning trade, although it remained almost 2% below its level at the start of the year.

CEO Börje Ekholm said on today’s earnings call: “The margin expansion reflects actions we’ve taken over the last years to increase operational excellence and efficiency, including the work we’ve done on our cost base. Over the last year, we’ve reduced our headcount by some 6,000, leveraging new ways of working, and that of course includes AI.”

Since the end of 2022, the year Ericsson acquired VoIP software developer Vonage for $6.2 billion, headcount has fallen by more than 15,600, to just 89,898 at the end of June, the company revealed in its latest earnings report.

The Vonage business suffered a 17% drop in sales, to SEK3.2 billion ($330 million), and saw its loss widen by 50%, to SEK600 million ($63 million). It is where Ericsson believes it can monetize the network application programming interfaces (APIs) that will link software apps to networks and hopefully revitalize the 5G market.  However, that’s not happening yet.

“The geopolitical situation has required us to shift resources a bit politically. As we went through that transition, we duplicated a large part of the R&D spend. We don’t need to have that anymore as we have relocated R&D,” said Ekholm. “We are not going to jeopardize technology leadership and if we feel there is any risk – and that is a risk I don’t see today – then we would of course need to reassess.”

After years of growth, R&D spending fell by 10% year-over-year for the first nine months of 2025, to SEK35.8 billion ($3.8 billion), prompting concern among analysts that Ericsson could lose competitiveness versus Chinese rivals.

AI is now being used to refine the algorithms that are fed into Ericsson’s software products, said Per Narvinger, the head of Ericsson’s mobile networks business group, on a call with Light Reading.  No indication was given if that would reduce headcount any further.

Ericsson hopes the new 5G contract it announced with Vodafone earlier today will boost sales in Europe, where underinvestment in midband 5G coverage and the “standalone” variant of 5G have been constant bugbears for the company. After the rollout of “non-standalone” 5G, which maintains the 4G core, operators just continued to sell a “4G plus” service, Ekholm said.

“It was the established business model of most operators around the world, so it became very natural to take that step and then use 5G almost as a marketing icon on the phone, but, in reality, it didn’t give the extra capabilities,” he added. Standalone features such as low latency and network slicing will be critical in future apps, Ekholm correctly said, arguing that 6G will necessitate edge cloud and AI investments that have also not yet happened.

In summing up, Ericsson said “Increased uncertainty remains on the outlook, both in terms of potential for further tariff changes as well as in the broader macroeconomic environment.”

Looking ahead:
  • Continue to invest in technology leadership to strengthen competitive position
  • Future-proofed Open RAN-ready portfolio
  • New use cases to monetize network investments taking shape
  • AI applications becoming a key driver for network investments
  • Structurally improving the business through rigorous cost management

……………………………………………………………………………………………………………………………………………………….

Separately,

Ericsson today announced the signing of a US$3 billion partnership agreement with Export Development Canada (EDC) to expand investment in Canadian research and development, deepen domestic supply chains, and accelerate next-generation technologies including 5G, Cloud RAN, AI, and quantum innovation.

Börje Ekholm, President and CEO, Ericsson, says: “Canada is one of Ericsson’s most important hubs for global research and development, and this partnership with Export Development Canada will allow us to scale that leadership even further. By strengthening our collaboration with Canadian businesses, universities and government partners, we can accelerate breakthroughs in 5G, quantum, and Cloud RAN that will drive growth, create opportunities, and reinforce Canada’s position as a global leader in next generation networks.”

With more than 3,100 employees nationwide and R&D centres in Ottawa, Montreal, and Toronto, Ericsson Canada is at the heart of the company’s global innovation footprint. Canadian teams are driving advancements in 5G, 5G Advanced, and 6G, while also contributing to new research in quantum communications and AI-powered network management.

The three-year partnership will enable Ericsson to expand its Canadian-led innovation and global projects with the support of financial and insurance solutions from EDC. By reinforcing Ericsson’s Canadian supply chain and connecting the company with innovative domestic businesses, the agreement will also amplify Ericsson’s ability to bring Canadian technology to the world, strengthen competitiveness, and create new opportunities for Canadian companies within Ericsson’s global network of partners.

Across all wireless network equipment vendors, annual sales of RAN products fell from $45 billion in 2022 to $35 billion last year, according to Omdia, a Light Reading sister company. Market research firms Omdia and Dell’Oro have encouragingly guided for a more stable market this year.

Most wireless network providers have seen no incentive to spend more on 5G when their returns to date have been so disappointing. And there is skepticism about the business case for low latency services and network slicing. Telcos increasingly sell large bundles of gigabytes to their customers and have struggled to monetize other features.

References:

https://www.ericsson.com/4a8fc0/assets/local/investors/documents/financial-reports-and-filings/interim-reports-archive/2025/9month25-en.pdf

https://www.ericsson.com/4a90a6/assets/local/investors/documents/financial-reports-and-filings/interim-reports-archive/2025/9month25-ceo-slides.pdf

https://www.lightreading.com/5g/ericsson-says-world-is-flat-amid-us-gloom-and-keeps-cutting

https://www.lightreading.com/open-ran/vodafone-spring-6-lands-with-a-whimper-for-ericsson-and-samsung

https://www.ericsson.com/en/press-releases/6/2025/ericsson-edc-advance-canadas-technology-leadership

Ericsson integrates agentic AI into its NetCloud platform for self healing and autonomous 5G private networks

Ericsson CEO’s strong statements on 5G SA, WRC 27, and AI in networks

Ericsson completes Aduna joint venture with 12 telcos to drive network API adoption

Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth

Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation

Ericsson’s sales rose for the first time in 8 quarters; mobile networks need an AI boost

Beyon partners with Ericsson to build energy-efficient wireless networks in Bahrain

Latest Ericsson Mobility Report talks up 5G SA networks and FWA

Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework

 

 

OCP 2025 Meta keynote: Scaling the AI Infrastructure to Data Center Regions

At the OCP Global Summit 2025 in San Jose, CA, Meta detailed its strategy for scaling AI infrastructure to regional data center deployments, emphasizing open, collaborative, and highly scalable designs to support growing AI workloads. The October 14th keynote by Meta’s VP of Data Center Infrastructure, Dan Rabinovitsj, covered strategies for deploying and operating AI at scale across data center regions, highlighting innovations for building AI-ready data centers with a focus on open hardware, power innovation, and the challenges of next-generation AI infrastructure.

Initiatives discussed included: new Ethernet standards for AI clusters, integration of the Ultra Ethernet Consortium standard, Meta’s vision for open networking hardware, AMD’s “Helios” rack-scale AI platform, MSI’s integrated OCP solutions, next-gen liquid cooling, and solutions for distributed and edge AI.

Rabinovitsj highlighted Meta’s contributions to open standards and hardware innovations, including the Open Rack Wide standard and advanced networking concepts for AI clusters.

Meta also announced several new milestones for data center networking:

  • The evolution of Disaggregated Scheduled Fabric (DSF) to support scale-out interconnect for large AI clusters that span entire data center buildings.
  • A new Non-Scheduled Fabric (NSF) architecture based entirely on shallow-buffer, disaggregated Ethernet switches that will support Meta’s largest AI clusters, like Prometheus.
  • The addition of Minipack3N, based on NVIDIA’s Ethernet Spectrum-4 ASIC, to Meta’s portfolio of 51 Tbps OCP switches that use OCP’s SAI and Meta’s FBOSS software stack.
  • The launch of the Ethernet for Scale-Up Networking (ESUN) initiative, focused on making Ethernet suitable for connecting high-performance processors, or GPUs, within a single rack by emphasizing requirements like low latency, high bandwidth, and lossless transfers. Meta has been working with other large-scale data center operators and leading Ethernet vendors to advance the use of Ethernet for scale-up networking (specifically, the high-performance interconnects required for next-generation AI accelerator architectures).

OCP Summit 2025: The Open Future of Networking Hardware for AI

Key hardware projects discussed by Meta included:
  • Open Rack Wide (ORW) standard: Meta introduced the ORW specification, a new open standard for double-wide equipment racks designed to meet the extreme power, cooling, and serviceability demands of next-generation AI systems. AMD, a partner of Meta, showcased its “Helios” rack-scale platform built to be compliant with this new standard.
  • Networking fabrics for AI clusters: Meta detailed its networking architecture, revealing the following innovations:
    • Disaggregated Scheduled Fabric (DSF): An updated version of DSF was discussed (see below), which now provides non-blocking interconnects for clusters of up to 18,432 XPUs (AI processors), enabling communication between a larger number of GPUs.  
    • Non-Scheduled Fabric (NSF): Meta unveiled NSF, a new fabric for its largest AI clusters, which runs on shallow-buffer, disaggregated Ethernet switches to reduce latency. NSF is planned for Meta’s upcoming multi-gigawatt “Prometheus” clusters. See next section below for details.
  • FBNIC: Meta announced FBNIC, a network ASIC of its own design.
  • 51T switches: Meta revealed new 51T network switches, which utilize Broadcom and Cisco ASICs.
  • Next-generation optical connections: For faster and higher-capacity optical interconnections, Meta discussed its adoption of 2x400G FR4-LITE and 400G/2x400G DR4 optics for its 400G and 800G connectivity.
  • Sustainable hardware: As part of its 2030 net-zero goals, Meta presented a new AI-powered methodology for tracking and estimating the carbon emissions of its IT hardware. The methodology will be open-sourced for the wider industry.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Deep Dive into DSF and NSF:

1. Disaggregated Scheduled Fabric (DSF):
DSF is designed to provide a highly efficient, lossless, and scalable network. First introduced at OCP in 2024, Meta announced a major upgrade to its design. 
  • Non-blocking scale: An updated, two-stage architecture for DSF can now support a non-blocking fabric for up to 18,432 XPUs (AI processors). This allows all-to-all communication between a significantly larger number of GPUs without performance degradation.
  • Proactive congestion avoidance: DSF uses a Virtual Output Queue (VOQ)-based system to manage traffic flow. By scheduling traffic between endpoints, it proactively avoids congestion before it occurs, which improves bandwidth delivery and overall network efficiency.
  • Open and standardized: The fabric is built on open standards like the OCP-SAI (Switch Abstraction Interface) and is managed by Meta’s own network operating system, FBOSS. This vendor-agnostic approach allows Meta to use components from different suppliers and avoid vendor lock-in.
  • Optimal load balancing: Traffic is “sprayed” across all available links and switches, ensuring an equal load and smooth performance for bandwidth-intensive workloads like AI training. 
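Meta has not published DSF's scheduling code, but the two mechanisms described above can be illustrated with a toy sketch (all names hypothetical): per-destination Virtual Output Queues, which prevent one congested endpoint from blocking traffic headed elsewhere, and per-packet "spraying" across all fabric links. A credit-granting step stands in for DSF's endpoint scheduling.

```python
# Toy sketch of a VOQ-based, scheduled fabric (hypothetical, not Meta's code).
# - One queue per destination endpoint avoids head-of-line blocking.
# - Packets are "sprayed" round-robin across links to equalize load.
from collections import defaultdict, deque
from itertools import count

class ToyScheduledFabric:
    def __init__(self, num_links: int):
        self.voqs = defaultdict(deque)  # one Virtual Output Queue per destination
        self.links = num_links
        self._spray = count()           # round-robin link counter for spraying

    def enqueue(self, dst: int, packet: str) -> None:
        self.voqs[dst].append(packet)

    def schedule(self, credits: dict) -> list:
        """Dequeue only toward destinations that granted credits (proactive
        congestion avoidance), spraying each packet across links round-robin."""
        sent = []
        for dst, n in credits.items():
            for _ in range(min(n, len(self.voqs[dst]))):
                pkt = self.voqs[dst].popleft()
                sent.append((pkt, next(self._spray) % self.links))
        return sent

fabric = ToyScheduledFabric(num_links=4)
for i in range(6):
    fabric.enqueue(dst=i % 2, packet=f"p{i}")
# Destination 0 grants 3 credits; destination 1 is congested and grants none:
out = fabric.schedule({0: 3, 1: 0})
print(out)  # packets for dst 0 flow, sprayed across distinct links; dst 1 waits
```

The key property: traffic toward the congested destination 1 simply waits in its own queue; it never blocks the link from carrying destination 0's packets.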
2. Non-Scheduled Fabric (NSF):
Meta unveiled NSF as a new fabric specifically for its most massive AI installations, including the multi-gigawatt “Prometheus” cluster scheduled for 2026.
  • Low latency: Unlike DSF, which relies on scheduling, NSF operates on shallow-buffer, disaggregated Ethernet switches. This reduces round-trip latency, making it ideal for the most latency-sensitive AI workloads.
  • Adaptive routing: The NSF architecture is a three-tier fabric that supports adaptive routing for effective load-balancing. This helps minimize congestion and ensure optimal utilization of GPUs, which is critical for maximizing performance in Meta’s largest AI factories.
  • Disaggregated design: Like DSF, NSF is built on a disaggregated design. This allows Meta to scale its network by using interchangeable, industry-standard components instead of a single vendor’s closed system.
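The adaptive-routing idea above contrasts with conventional flow hashing, which pins a flow to one fixed path. A minimal sketch of the per-packet decision, under the simplifying assumption that link load is measured by current queue depth (the function is our own illustration, not Meta's NSF code):

```python
# Toy sketch of adaptive routing (hypothetical, not Meta's NSF implementation):
# each packet is steered to the least-loaded candidate next-hop link,
# where "load" is approximated here by the link's current queue depth.

def adaptive_next_hop(link_queue_depths: list) -> int:
    """Return the index of the least-loaded candidate link (first on ties)."""
    return min(range(len(link_queue_depths)), key=lambda i: link_queue_depths[i])

# Four candidate uplinks with current queue depths (in packets):
depths = [12, 3, 7, 3]
print(adaptive_next_hop(depths))  # picks link 1, the first least-loaded link
```

Spreading each packet to the least-loaded link keeps hot spots from forming, which is the congestion-minimizing behavior the NSF description attributes to adaptive routing.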
3. A dual-fabric strategy for the future:
Meta’s decision to pursue both DSF and NSF reflects its strategy for tackling the diverse and growing networking challenges posed by modern AI.
  • DSF: Provides a high-efficiency, highly scalable network for its large, but still modular, AI clusters.
  • NSF: Is optimized for the extreme demands of its largest, gigawatt-scale “AI factories” like Prometheus, where low latency and robust adaptive routing are paramount. 
This parallel, dual-fabric strategy allows Meta to build and operate AI infrastructure with unprecedented scale, performance, and flexibility, using open standards to accelerate innovation and reduce costs. 

Image Credit: Meta

………………………………………………………………………………………………………………………………………………………..

References:

OCP Summit 2025: The Open Future of Networking Hardware for AI

https://www.opencompute.org/blog/introducing-esun-advancing-ethernet-for-scale-up-ai-infrastructure-at-ocp

Networking at the Heart of AI — @Scale: Networking 2025 Recap

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

AI Data Center Boom Carries Huge Default and Demand Risks

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Qualcomm to acquire Alphawave Semi for $2.4 billion; says its high-speed wired tech will accelerate AI data center expansion

Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems

Data Center Networking Market to grow at a CAGR of 6.22% during 2022-2027 to reach $35.6 billion by 2027

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

In a blog published today (October 8th), Yigal Elbaz, AT&T’s senior VP and network CTO, announced that AT&T’s 5G Standalone (SA) network is now deployed nationwide, marking an important milestone many years in the making. Elbaz described the 5G SA nationwide deployment as “another bold leap” in wireless connectivity and said the operator is moving customers onto the network “in select areas every day.”

In fact, that “bold leap” was expected to be realized years ago! In 2021, AT&T partnered with Microsoft to offload its mobile 5G Standalone (SA) core network and other network cloud operations to Microsoft Azure, with Microsoft acquiring AT&T’s Network Cloud technology, software, and network operations team in the process. The goal was for Microsoft to manage the software development and deployment of AT&T’s network functions on Azure, allowing AT&T to accelerate innovation, improve efficiency, and reduce operating costs. The move was a strategic win for Microsoft’s Azure for Operators division, which integrated AT&T’s technology and offered it to other telecom companies.

AT&T said it has millions of customers already on its 5G SA network and is expanding availability to more customers as device support and provisioning allow. Elbaz elaborated:

5G Standalone networks have now reached a level of maturity that enables our nationwide expansion. This growth is powered by an open and virtualized network, which enables us to scale efficiently and foster collaboration within an open ecosystem of partners. By embracing this open and virtualized network architecture, we are not only modernizing our infrastructure but also unlocking significant advantages for our customers and partners. This approach not only accelerates our ability to roll out new technologies like 5G Standalone but also helps ensure our customers benefit from a network that is robust, innovative, and designed with their needs in mind.

With 5G Standalone now nationwide, we’ve set the stage for the next wave of innovation, creativity, and connection. I couldn’t be prouder of our teams who made this possible, and we’ll continue to scale 5G Standalone over time and set the stage for next generation applications and services.

Compatible 5G SA smartphones include models released in the last several years starting with Apple’s iPhone 13, Samsung’s Galaxy S21 and Google’s Pixel 8.

AT&T also said its 5G Reduced Capability (RedCap) network, which uses the 5G SA core and supports the new Apple Watch Series 11, Apple Watch Ultra 3, and Apple Watch SE 3, has been expanded to 250 million POPs (people covered). AT&T 5G RedCap customers can look forward to a growing portfolio of devices, Elbaz said.

……………………………………………………………………………………………………………………………………………………………………………….

Separately, Verizon is closing in on completing its 5G SA upgrade. The operator says its 5G SA is deployed nationwide but there are some places where it is still in the process of being rolled out. Although the deployment is not 100% complete, “the vast majority” of 5G SA capable phones will connect to Verizon’s network in “the vast majority of places,” according to an operator spokesperson.

Verizon has launched two network slicing services based on the 5G SA network. In April, the operator launched Frontline, a network slice for first responders that is available across the country. It also offers Enhanced Video Calling, which provides a network slice for better video communications on iPhones.

T-Mobile US launched 5G SA in 2020 and has since rolled out 5G Advanced nationwide. It also offers two network slicing propositions: T-Priority for first responders and SuperMobile for enterprise customers. Both AT&T and Verizon have implemented cloud-native 5G core networks, while T-Mobile’s implementation is more traditional. At Mobile World Congress earlier this year, T-Mobile announced a telco cloud strategy for its core and edge networks based on Red Hat (owned by IBM).

A recent Heavy Reading (now part of Omdia) survey found 5G SA is poised to scale rapidly. Heavy Reading analyst Gabriel Brown noted that the results show “a critical mass is building behind 5G SA that will unlock innovation in the wider mobile network services ecosystem.”

“This matters when it comes to layering in new services because a cloud native deployment allows operators to be more agile and deploy services faster,” Brown added.

References:

https://about.att.com/blogs/2025/5g-standalone-nationwide.html

https://about.att.com/blogs/2025/5g-redcap.html

https://www.lightreading.com/5g/at-t-verizon-chase-t-mobile-with-nationwide-5g-sa

AT&T 5G SA Core Network to run on Microsoft Azure cloud platform

Téral Research: 5G SA core network deployments accelerate after a very slow start

Building and Operating a Cloud Native 5G SA Core Network

Ookla: Europe severely lagging in 5G SA deployments and performance

Vision of 5G SA core on public cloud fails; replaced by private or hybrid cloud?

GSA: More 5G SA devices, but commercial 5G SA deployments lag

GSA 5G SA Core Network Update Report

Latest Ericsson Mobility Report talks up 5G SA networks and FWA

Global 5G Market Snapshot; Dell’Oro and GSA Updates on 5G SA networks and devices

Dell’Oro: Mobile Core Network market has lowest growth rate since 4Q 2017

5G SA networks (real 5G) remain conspicuous by their absence

 

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

Tech firms are spending hundreds of billions of dollars on advanced AI chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines. The final bill may run into the trillions. The financing is coming from venture capital, debt and, lately, some more unconventional arrangements that have raised concerns among top industry executives and financial asset management firms.

At Italian Tech Week in Turin on October 3, 2025, Amazon founder Jeff Bezos said of artificial intelligence: “This is a kind of industrial bubble, as opposed to financial bubbles.” He distinguished it from “bad” financial or housing bubbles, which cause real harm. His comparison of the current AI boom to a historical industrial bubble suggests that, while speculative, it is rooted in real, transformative technology.

“It can even be good, because when the dust settles and you see who are the winners, societies benefit from those investors,” Bezos said. “That is what is going to happen here too. This is real, the benefits to society from AI are going to be gigantic.”

He noted that during bubbles everything, both good and bad investments, gets funded. “Investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas,” Bezos said of the AI industry. “And that’s also probably happening today,” he added.

  • A “good” kind of bubble: He explained that during industrial bubbles, excessive funding flows to both good and bad ideas, making it hard for investors to distinguish between them. However, the influx of capital spurs significant innovation and infrastructure development that ultimately benefits society once the bubble bursts and the strongest companies survive.
  • Echoes of the dot-com era: Bezos drew a parallel to the dot-com boom of the 1990s, where many internet companies failed, but the underlying infrastructure—like fiber-optic cable—endured and led to the creation of companies like Amazon.
  • Gigantic benefits: Despite the market frothiness, Bezos reiterated that AI is “real” and its benefits to society “are going to be gigantic.”
Bezos is not the only high-profile figure to express caution about the AI boom:
  • Sam Altman (OpenAI): The OpenAI CEO has stated that he believes “investors as a whole are overexcited about AI.” In August, he told reporters the AI market was in a bubble. When bubbles happen, “smart people get overexcited about a kernel of truth,” Altman warned, drawing parallels with the dot-com boom. Still, he said his personal belief is that “on the whole, this would be a huge net win for the economy.”
  • David Solomon (Goldman Sachs): Also speaking at Italian Tech Week, the Goldman Sachs CEO warned that a lot of capital deployed in AI would not deliver returns and that a market “drawdown” could occur.
  • Mark Zuckerberg (Meta): The Meta CEO has acknowledged that the rapid development of, and surging investment in, AI stands to form a bubble, potentially outpacing practical productivity and returns and risking a market crash. Even so, he would rather “misspend a couple hundred billion dollars” on AI development than be late to the technology.
  • Morgan Stanley Wealth Management’s chief investment officer, Lisa Shalett, warned that the AI stock boom was showing “cracks” and was likely closer to its end than its beginning. The firm cited concerns over negative free cash flow growth among major AI players and increasing speculative investment. Shalett highlighted that free cash flow growth for the major cloud providers, or “hyperscalers,” has turned negative. This is viewed as a key signal of the AI capital expenditure cycle’s maturity. Some analysts estimate this growth could shrink by about 16% over the next year.
Image Credit:  Dreamstime.com  © Skypixel
………………………………………………………………………………………………………………………………………………………………………………………
Bezos’s remarks come as some analysts express growing fears of an impending AI market crash.
  • Underlying technology is real: Unlike purely speculative bubbles, the AI boom is driven by a fundamental technology shift with real-world applications that will survive any market correction.
  • Historical context: Some analysts believe the current AI bubble is on a much larger scale than the dot-com bubble due to the massive influx of investment.
  • Significant spending: The level of business spending on AI is already at historic levels and is fueling economic growth, which could cause a broader economic slowdown if it were to crash.
  • Potential for disruption: The AI industry faces risks such as diminishing returns for costly advanced models, increased competition, and infrastructure limitations related to power consumption. 

Ian Harnett argues that the current bubble may be approaching its “endgame.” He wrote in the Financial Times:

“The dramatic rise in AI capital expenditure by so-called hyperscalers of the technology and the stock concentration in US equities are classic peak bubble signals. But history shows that a bust triggered by this over-investment may hold the key to the positive long-run potential of AI.

Until recently, the missing ingredient was the rapid build-out of physical capital. This is now firmly in place, echoing the capex boom seen in the late-1990s bubble in telecommunications, media and technology stocks. That scaling of the internet and mobile telephony was central to sustaining ‘blue sky’ earnings expectations and extreme valuations, but it also led to the TMT bust.”

Today’s AI capital expenditure (capex) is increasingly being funded by debt, marking a notable shift from previous reliance on cash reserves. While tech giants initially used their substantial cash flows for AI infrastructure, their massive and escalating spending has led them to increasingly rely on external financing to cover costs.

This is especially true of Oracle, which will have to increase its capex by almost $100 billion over the next two years for its deal to build out AI data centers for OpenAI. That’s an annualized growth rate of some 47%, even though Oracle’s free cash flow has already fallen into negative territory for the first time since 1990. According to a recent note from KeyBanc Capital Markets, Oracle may need to borrow $25 billion annually over the next four years. This comes at a time when Oracle is already carrying substantial debt and is highly leveraged. As of the end of August, the company had around $82 billion in long-term debt, with a debt-to-equity ratio of roughly 450%. By comparison, Alphabet, the parent company of Google, reported a ratio of 11.5%, while Microsoft’s stood at about 33%. In July, Moody’s revised Oracle’s credit outlook to negative from stable, while affirming its Baa2 senior unsecured rating. The negative outlook reflects the risks of Oracle’s significant expansion into AI infrastructure, which is expected to lead to elevated leverage and negative free cash flow due to high capital expenditures. Caveat emptor!

References:

https://fortune.com/2025/10/04/jeff-bezos-amazon-openai-sam-altman-ai-bubble-tech-stocks-investing/

https://www.ft.com/content/c7b9453e-f528-4fc3-9bbd-3dbd369041be

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

AI Data Center Boom Carries Huge Default and Demand Risks

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030

https://fortune.com/2025/09/19/zuckerberg-ai-bubble-definitely-possibility-sam-altman-collapse/

https://finance.yahoo.com/news/why-fears-trillion-dollar-ai-130008034.html

Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

With sales of Nvidia AI chips restricted in China, Huawei Technologies Inc. plans to make about 600,000 of its 910C Ascend chips next year, roughly double this year’s output, people familiar with the matter told Bloomberg. The China tech behemoth will increase its Ascend product line in 2026 to as many as 1.6 million dies – the basic silicon component that’s packaged as a chip.

Huawei had struggled to get those products to potential customers for much of 2025 because of U.S. sanctions. Yet if Huawei and its partner Semiconductor Manufacturing International Corp. (SMIC) can hit that ambitious AI chip manufacturing target, it would mark a degree of self-sufficiency that removes some of the bottlenecks that have hindered its AI business.

The projections for 2025 and 2026 include dies that Huawei has in inventory, as well as internal estimates of yields or the rate of failure during production, the people said. Shares in SMIC and rival chipmaker Hua Hong Semiconductor Ltd. gained more than 4% in Hong Kong Tuesday, while the broader market stayed largely unchanged.

Huawei Ascend branding at a trade show in China. Photographer: Ying Tang/Getty Images

Chinese AI companies from Alibaba Group Holding Ltd. to DeepSeek need millions of AI chips to develop and operate AI services. Nvidia alone was estimated to have sold a million H20 chips in 2024.

What Bloomberg Economics Says:

Huawei’s reported plan to double AI-chip output over the next year suggests China is making real progress in working around US export controls. Yet the plan also exposes the limitations imposed by US controls: Node development remains stuck at 7 nanometers, and Huawei will continue to rely on stockpiles of foreign high-bandwidth memory amid a lack of domestic production.

From Beijing’s perspective, Huawei’s production expansion represents another move in an ongoing back-and-forth with the West over semiconductor access and self-sufficiency. The priority remains accelerating indigenization of critical technologies while steadily pushing back against Western controls.

– Michael Deng, analyst

While Huawei’s new AI silicon promises massive performance gains, it has several shortcomings, especially the lack of a developer community comparable to Nvidia’s CUDA ecosystem. A Chinese tech executive said Nvidia’s biggest advantage wasn’t its advanced chips but the ecosystem built around CUDA, its parallel computing architecture and programming model. The executive called for the creation of a Chinese version of CUDA that can be used worldwide.

Huawei is also playing catch-up by progressively going open source. It announced last month that its CANN toolkit for Ascend AI training, its Mind development environment, and its Pangu models would all be open sourced by year-end.

Huawei chairman Eric Xu said in an interview the company had given the “ecosystem issue” a great deal of thought and regarded the transition to open source as a long-term project. “Why keep it hidden? If it’s widely used, an ecosystem will emerge; if it’s used less, the ecosystem will disappear,” he said.

………………………………………………………………………………………………………………………………………………………………………

At its customer event in Shanghai last month, Huawei revealed that it planned to spend 15 billion Chinese yuan (US$2.1 billion) annually over the next five years on ecosystem development and open source computing.

Xu announced a series of new Ascend chips – the 950, 960 and 970 – to be rolled out over the next three years.  He foreshadowed a new series of massive Atlas SuperPoD clusters – each one a single logical machine made up of multiple physical devices that can work together – and also announced Huawei’s unified bus interconnect protocol, which allows customers to stitch together compute power across multiple data centers. 

Xu acknowledged that Huawei’s single Ascend chips could not match Nvidia’s, but said the SuperPoDs were currently the world’s most powerful and would remain so “for years to come.” The scale of the SuperPoD architecture, however, points to its other shortcoming: the power consumption of these giant compute arrays.

………………………………………………………………………………………………………………………………………………………………………….

Separately, OpenAI has made huge memory chip agreements with South Korea’s SK Hynix and Samsung, the world’s two biggest semiconductor memory manufacturers. The partnership, aimed at locking up HBM (High Bandwidth Memory) [1.] chip supply for the $400 billion Stargate AI infrastructure project, is estimated to be worth more than 100 trillion Korean won (US$71.3 billion) for the Korean chipmakers over the next four years. The two companies say they are targeting 900,000 DRAM wafer starts per month – more than double the current global HBM capacity.

Note 1. HBM is a specialized type of DRAM that uses a unique 3D vertical stacking architecture and Through-Silicon Via (TSV) technology to achieve significantly higher bandwidth and performance than traditional, flat DRAM configurations. HBM uses standard DRAM “dies” stacked vertically, connected by TSVs, to create a more densely packed, high-performance memory solution for demanding applications like AI and high-performance computing.
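
The bandwidth gain from that stacked, wide interface is easy to see with rough arithmetic. The sketch below uses typical published HBM3 and DDR5 figures (assumptions, not from this article) to compare peak bandwidth:

```python
# Rough peak-bandwidth arithmetic: one HBM3 stack vs. one DDR5 DIMM channel.
# All figures are typical published values, used here only as assumptions.

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: (pins x bits/s per pin) / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

hbm3 = peak_bandwidth_gbs(1024, 6.4)  # 1024-bit TSV-connected interface, 6.4 Gb/s per pin
ddr5 = peak_bandwidth_gbs(64, 4.8)    # 64-bit DIMM channel at 4800 MT/s

print(f"HBM3 stack:   {hbm3:.1f} GB/s")  # 819.2 GB/s
print(f"DDR5 channel: {ddr5:.1f} GB/s")  # 38.4 GB/s
print(f"ratio:        {hbm3 / ddr5:.0f}x")
```

The wide-but-modest-per-pin bus is what vertical stacking and TSVs buy: on these assumed figures, over 20x the bandwidth of a conventional DRAM channel per stack.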

…………………………………………………………………………………………………………………………………………………………………………….

“These partnerships will focus on increasing the supply of advanced memory chips essential for next-generation AI and expanding data center capacity in Korea, positioning Samsung and SK as key contributors to global AI infrastructure and supporting Korea’s ambition to become a top-three global AI nation,” OpenAI said.

The announcement followed a meeting between President Lee Jae-myung, Samsung Electronics Executive Chairman Jay Y. Lee, SK Chairman Chey Tae-won, and OpenAI CEO Sam Altman at the Presidential Office in Seoul.

Through these partnerships, Samsung Electronics and SK hynix plan to scale up production of advanced memory chips, targeting 900,000 DRAM wafer starts per month with an accelerated capacity rollout, critical for powering OpenAI’s advanced AI models.

OpenAI also signed a series of agreements today to explore developing next-generation AI data centers in Korea. These include a Memorandum of Understanding (MoU) with the Korean Ministry of Science and ICT (MSIT) specifically to evaluate opportunities for building AI data centers outside the Seoul Metropolitan Area, supporting balanced regional economic growth and job creation across the country.

The agreements signed today also include a separate partnership with SK Telecom to explore building an AI data center in Korea, as well as an agreement with Samsung C&T, Samsung Heavy Industries, and Samsung SDS to assess opportunities for additional data center capacity in the country.

References:

https://www.bloomberg.com/news/articles/2025-09-29/huawei-to-double-output-of-top-ai-chip-as-nvidia-wavers-in-china

https://www.lightreading.com/ai-machine-learning/huawei-sets-itself-as-china-s-go-to-for-ai-tech

https://openai.com/index/samsung-and-sk-join-stargate/

OpenAI orders $71B in Korean memory chips

AI Data Center Boom Carries Huge Default and Demand Risks

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system

China gaining on U.S. in AI technology arms race- silicon, models and research

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

Despite U.S. sanctions, Huawei has come “roaring back,” due to massive China government support and policies

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

IEEE Techblog has called attention to the many challenges and risks inherent in the current mega-spending boom for AI infrastructure (building data centers, obtaining power/electricity, cooling, maintenance, fiber optic networking, etc.). In particular, these two recent blog posts:

AI Data Center Boom Carries Huge Default and Demand Risks and

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

This article focuses on the tremendous debt that OpenAI, Oracle and newer AI cloud companies will have to obtain and the huge hurdles they face to pay back the money being spent to build out their AI infrastructure. While the major hyperscalers (Amazon, Microsoft, Google and Meta) are in good financial shape and won’t need to take on much debt, a new wave of heavily leveraged firms is emerging, one that could reshape the current AI boom.

OpenAI, for example, is set to take borrowing and large-scale contracts to an unprecedented level. The company is planning a vast network of data centers expected to cost at least $1 trillion over the coming years. As part of this effort, it signed a $300 billion, five-year contract this month under which Oracle “is to set up AI computing infrastructure and lease it to OpenAI.” In other words, OpenAI agreed to pay Oracle $300 billion over five years for Oracle to build out new AI data centers. Where will OpenAI get that money? It will be burning billions in cash and won’t be profitable until 2029 at the earliest.

To fulfill its side of the deal, Oracle will need to invest heavily in infrastructure before receiving full payment—requiring significant borrowing. According to a recent note from KeyBanc Capital Markets, Oracle may need to borrow $25 billion annually over the next four years.  This comes at a time when Oracle is already carrying substantial debt and is highly leveraged. As of the end of August, the company had around $82 billion in long-term debt, with a debt-to-equity ratio of roughly 450%. By comparison, Alphabet—the parent company of Google—reported a ratio of 11.5%, while Microsoft’s stood at about 33%.

Companies like Oracle and other less-capitalized AI players such as CoreWeave have little choice but to take on more debt if they want to compete at the highest level. Nebius Group, another Nasdaq-listed AI cloud provider similar to CoreWeave, struck a $19.4 billion deal in September to provide AI computing services to Microsoft. The company announced it would finance the necessary capital expenditures “through a combination of its cash flow and debt secured against the contract.”

………………………………………………………………………………………………………………………………………………………………………………………………

Sidebar – Stock market investors seem to love debt and risk:

CoreWeave’s shares have more than tripled since its IPO in March, while Nebius stock jumped nearly 50% after announcing its deal with Microsoft. Not to be outdone, Oracle’s stock surged 40% in a single day after the company disclosed a major boost in projected revenue from OpenAI in its infrastructure deal—even though the initiative will require years of heavy spending by Oracle.

What’s so amazing to this author is that OpenAI selected Oracle for the AI infrastructure it will use, even though Oracle is not a major cloud service provider and is certainly not a hyperscaler. For Q1 2025, it held about a 3% market share, placing it #5 among global cloud service providers.

…………………………………………………………………………………………………………………………………………………………………………………………………

Data Center Compute Server & Storage Room;  iStock Photo credit: Andrey Semenov

……………………………………………………………………………………………………………………………………………….

Among other new AI Cloud players:

  • CyrusOne secured nearly $12 billion in financing (much in debt) for AI / data center expansion. Around $7.9 billion of that is for new data center / AI digital infrastructure projects in the U.S.
  • SoftBank / “Stargate” initiative: The Stargate project (OpenAI + Oracle + SoftBank + MGX, etc.) is being structured with major debt. The plan is huge—around $500 billion in AI infrastructure and supercomputers, and financing is expected to be ~70% debt, ~10% equity among the sources.
  • xAI (Elon Musk’s AI firm):  xAI raised $10 billion in combined debt + equity. Specifically ~$5 billion in secured notes / term loans (debt), with the remainder in equity. The money is intended to build out its AI infrastructure (e.g. GPU facilities / data centers).

There’s growing skepticism about whether these companies can meet their massive contract obligations and repay their debts. Multiple recent studies suggest AI adoption isn’t advancing as quickly as supporters claim. One study found that only 3% of consumers are paying for AI services. Forecasts projecting trillions of dollars in annual spending on AI data centers within a few years appear overly optimistic.

OpenAI’s position, despite the hype, seems very shaky. D.A. Davidson analyst Gil Luria estimates the company would need to generate over $300 billion in annual revenue by 2030 to justify the spending implied in its Oracle deal—a steep climb from its current run rate of about $12 billion. OpenAI has financial backing from SoftBank and Nvidia, with Nvidia pledging up to $100 billion, but even that may not be enough.  “A vast majority of Oracle’s data center capacity is now promised to one customer, OpenAI, who itself does not have the capital to afford its many obligations,” Luria said.
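
Luria’s $300 billion figure implies an extraordinary growth rate. A back-of-the-envelope check, assuming the ~$12 billion run rate as a 2025 baseline and the 2030 target cited above (five compounding years):

```python
# Implied compound annual growth rate (CAGR) for OpenAI revenue:
# ~$12B run rate (assumed 2025 baseline) -> $300B (2030), i.e. 5 compounding years.
baseline, target, years = 12e9, 300e9, 5

cagr = (target / baseline) ** (1 / years) - 1
print(f"required growth: {cagr:.0%} per year")  # ~90% per year, sustained for 5 years
```

Sustaining roughly 90% annual revenue growth for five straight years is the scale of the climb Luria is describing.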

Oracle could try to limit risk by pacing its spending with revenue received from OpenAI.  Nonetheless, Moody’s flagged “significant” risks in a recent note, citing the huge costs of equipment, land, and electricity. “Whether these will be financed through traditional debt, leases or highly engineered financing vehicles, the overall growth in balance sheet obligations will also be extremely large,” Moody’s warned. In July (two months before the OpenAI deal), it gave Oracle a negative credit outlook.

There’s a real possibility that things go smoothly. Oracle may handle its contracts and debt well, as it has in the past. CoreWeave, Nebius, and others might even pioneer new financial models that help accelerate AI development.

It’s very likely that some of today’s massive AI infrastructure deals will be delayed, renegotiated, or reassigned if AI demand doesn’t grow as fast as AI spending. Legal experts say contracts could be transferred. For example, if OpenAI can’t make the promised payments, Oracle might lease the infrastructure to a more financially stable company, assuming the terms allow it.

Such a shift wouldn’t necessarily doom Oracle or its debt-heavy peers. But it would be a major test for an emerging financial model for AI—one that’s starting to look increasingly speculative.  Yes, even bubbly!

………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.wsj.com/tech/ai/debt-is-fueling-the-next-wave-of-the-ai-boom-278d0e04

https://www.crn.com/news/cloud/2025/cloud-market-share-q1-2025-aws-dips-microsoft-and-google-show-growth

AI Data Center Boom Carries Huge Default and Demand Risks

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Should Peak Data Rates be specified for 5G (IMT 2020) and 6G (IMT 2030) networks?

Peak Data Rate [1.] is one of the most visible attributes of IMT (International Mobile Telecommunications) cellular networks, e.g. 3G, 4G and 5G. As a result, it gets significant attention from analysts and reporters, creating high expectations among IMT end users that may never be realized in commercially deployed IMT networks.

For example, the peak data rates specified by the ITU-R M.2410 report for IMT-2020 (5G) have not been realized in any 5G production networks under typical conditions. The ITU-R’s 20 Gbps downlink and 10 Gbps uplink targets are theoretical maximums, achievable only in a controlled test environment with ideal conditions. Please refer to the chart below.

……………………………………………………………………………………………………………………………………………………………………..

Note 1. Peak data rate is the theoretical maximum [achievable] data rate under ideal conditions, which is the received data bits assuming error-free conditions assignable to a single mobile station, when all assignable radio resources for the corresponding link direction are utilized (i.e. excluding radio resources that are used for physical layer synchronization, reference signals or pilots, guard bands and guard times).
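
For context, the 20 Gbps downlink target can be roughly reproduced from the 3GPP TS 38.306 approximate peak-rate formula. The sketch below assumes a best-case FR2 configuration (four MIMO layers, 256-QAM, 400 MHz carriers at 120 kHz subcarrier spacing); the parameter values are illustrative, not taken from this article:

```python
# 3GPP TS 38.306 approximate peak data rate for a single NR carrier (bits/s).
# Best-case FR2 parameter values below are illustrative assumptions.

def nr_peak_rate(layers: int, mod_bits: int, n_prb: int, mu: int,
                 overhead: float, scaling: float = 1.0) -> float:
    r_max = 948 / 1024                   # maximum LDPC code rate
    symbol_time = 1e-3 / (14 * 2 ** mu)  # average OFDM symbol duration at numerology mu
    return layers * mod_bits * scaling * r_max * (n_prb * 12 / symbol_time) * (1 - overhead)

# One 400 MHz FR2 carrier: numerology mu=3 (120 kHz SCS), 264 PRBs,
# 4 MIMO layers, 256-QAM (8 bits/symbol), FR2 downlink overhead 0.18.
per_carrier = nr_peak_rate(layers=4, mod_bits=8, n_prb=264, mu=3, overhead=0.18)

print(f"one carrier:  {per_carrier / 1e9:.1f} Gbps")      # ~8.6 Gbps
print(f"two carriers: {2 * per_carrier / 1e9:.1f} Gbps")  # ~17.2 Gbps, near the 20 Gbps target
```

Even this idealized arithmetic needs two aggregated mmWave carriers, error-free transmission and all radio resources assigned to a single device to approach the ITU-R target, which is why it shows up only in controlled tests.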

………………………………………………………………………………………………………………………………………………………………………

5G services are deployed across three main frequency ranges and the speed capability varies dramatically for each.

  • Low-band (below 1 GHz): Offers wide coverage but only a modest speed improvement over 4G, typically delivering a few hundred Mbps at best.
  • Mid-band (1–6 GHz): Provides a balance of speed and coverage, with peak speeds sometimes reaching 1 Gbps, though typical average speeds are much lower.
  • High-band (millimeter wave or mmWave): The only band capable of reaching multi-gigabit speeds. However, its signal range is very short and it is easily blocked by physical objects, limiting its availability to dense urban areas and specific venues. 5G mmWave base station power consumption is also very high, which further limits coverage.
Several factors are critical for pushing the boundaries of 5G downlink speeds in live networks:
  • mmWave spectrum: Higher-band millimeter wave spectrum offers massive bandwidth, enabling multi-gigabit speeds. However, its use is limited to dense urban areas and specific venues due to its short range.
  • Carrier aggregation: Combining multiple frequency bands (e.g., mmWave with mid-band) significantly increases the total available bandwidth and is crucial for achieving the highest download speeds.
  • 5G Advanced (Release 18): New developments in 5G-Advanced technology (also known as 5.5G) enable even higher performance; Telstra’s 2025 speed record, for example, utilized 5G Advanced software.
  • Equipment and device capabilities: Peak speeds require cutting-edge network hardware from vendors like Ericsson, Nokia, and Samsung, as well as the latest mobile devices powered by advanced modems from companies like Qualcomm and MediaTek.

The gap between what IMT-2020 (5G) technology can deliver on paper and what is actually realized in commercial 5G networks has widened over the past few years [2.]. Here’s a summary of the speed differences:

ITU-R specification vs. reality in commercial networks:

  • Peak data rate: ITU-R spec 20 Gbps (downlink) / 10 Gbps (uplink). In reality, these rates are reached only in isolated demonstrations, typically using high-band mmWave technology.
  • User experienced data rate: ITU-R spec 100 Mbps (downlink) / 50 Mbps (uplink). In reality, typical averages for many users are lower, around 15 to 50 Mbps on the uplink, particularly on low- and mid-band deployments; mmWave is higher, but its range is limited.

Note 2. The gap is even greater for 5G latency! The minimum user plane latency requirements in ITU-R M.2410 are:
– 4 ms for eMBB
– 1 ms for URLLC
These assume unloaded conditions (a single user) and small IP packets (e.g. 0 byte payload + IP header), for both downlink and uplink.

The minimum requirement for control plane latency is 20 ms; proponents are encouraged to consider lower control plane latency, e.g. 10 ms.

However, the average latency experienced in deployed commercial 5G networks is higher, typically ranging between 5 and 20 milliseconds, depending on the network architecture, spectrum, and use case. One reason is that the 3GPP Release 16 specs for 5G-NR URLLC enhancements in the RAN and core network were never completed.

5G mmWave spectrum has the potential for the lowest latency, but its limited range and line-of-sight requirements restrict deployments to dense urban areas. Therefore, most 5G users connect via mid-band or low-band, which have higher latency.

……………………………………………………………………………………………………………………………………………………………….

For that reason, several companies (Apple, Nokia, TELECOM ITALIA, Deutsche Telekom, SK Telecom, Spark NZ, AT&T) have proposed not to define IMT-2030 peak data rate requirement values in ITU-R M.[IMT-2030.TECH PERF REQ] nor to maintain the IMT-2020 (5G) peak data rate numbers from the ITU-R M.2410 report.

Author’s Note: The IMT-2030 performance requirements in ITU-R M.[IMT-2030.TECH PERF REQ] are to be evaluated according to the criteria defined in Report ITU-R M.[IMT‑2030.EVAL] and Report ITU-R M.[IMT-2030.SUBMISSION] for the development of IMT-2030 recommendations within ITU-R WP5D.

……………………………………………………………………………………………………………………………………………………………………………….

Addendum – Measurements of top 5G network speeds:

  • In the first half of 2025, Ookla said e& in the United Arab Emirates was the world’s fastest 5G network, noting a median upload speed of 52.21 Mbps. Networks in other top-performing markets such as South Korea, Qatar, and Brazil also see median upload speeds well above 20 Mbps.
  • U.S. performance: In the U.S., major carriers are in a close race. In mid-2024, Opensignal found Verizon with the fastest 5G upload speed at 21.2 Mbps, with T-Mobile close behind. However, as of early 2025, a separate Opensignal report credited T-Mobile with the fastest overall upload experience, at 17.9 Mbps, though that figure includes both 4G and 5G connections.
  • European performance: Speeds vary across Europe. Ookla reported that in the first half of 2025, Magenta Telekom in Austria achieved a median 5G upload speed of 35.67 Mbps, while Three in the U.K. recorded a median of 13.07 Mbps.
  • Rural vs. urban divide: Average 5G uplink speeds are often higher in urban areas where mid-band spectrum is more prevalent. However, as of mid-2023, Opensignal noted that the rural-urban gap for 5G upload speeds in the U.S. was narrowing due to increased rural investment.
  • Dependence on network type: Whether a network uses 5G standalone (SA) or non-standalone (NSA) architecture impacts speeds. In early 2025, an analysis in the U.K. showed that while 5G SA had lower latency, 5G NSA still had a slightly higher proportion of high-speed uplink connections. 

…………………………………………………………………………………………………………………………………………………………

References:

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2020/Documents/S01-1_Requirements%20for%20IMT-2020_Rev.pdf

https://www.itu.int/pub/r-rep-m.2410-2017

https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2410-2017-PDF-E.pdf

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

3GPP Release 16 5G NR Enhancements for URLLC in the RAN & URLLC in the 5G Core network

 

IMT-2030 Technical Performance Requirements (TPR) from ITU-R WP5D

Key Objectives of WG Technology Aspects at ITU-R WP 5D meeting June 24-July 3, 2025

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

ITU-R: IMT-2030 (6G) Backgrounder and Envisioned Capabilities

Draft new ITU-R recommendation (not yet approved): M.[IMT.FRAMEWORK FOR 2030 AND BEYOND]

ITU-R M.2150-1 (5G RAN standard) will include 3GPP Release 17 enhancements; future revisions by 2025

 

 
