Challengers & Leaders in Gartner’s Magic Quadrant for 4G and 5G Private Mobile Network Services?

Here’s the chart from the Gartner report:

This author is astonished and perplexed that Gartner lumped service providers, network equipment makers and systems integrators into the same set of leaders and challengers. That’s like comparing apples to oranges to pineapples!

Steve Saunders strongly criticized the report, noting that “Five companies lead the private 5G network (equipment/software) market: Ericsson, Huawei, Nokia, Samsung, and ZTE. Challengers include Cisco, IBM, Juniper, and Mavenir (though none of them make their own 5G chips or have the same level of 5G juju as the 5G Big 5).   Gartner’s new Magic Quadrant omits six (!) of these vendors, including Huawei, whose technology underlies literally hundreds of private 5G networks.”

…………………………………………………………………………………………………………………………….

Inclusion and Exclusion Criteria:

All of the following criteria had to be met by 15 March 2024 (the cut-off date) for providers to be included in this Magic Quadrant assessment:
  • At least 20 direct, deployed commercial contracts or 20 direct, deployed commercial sites (excluding POCs) for 4G and 5G private mobile network services managed by the vendor, where it is the prime contractor with the enterprise (end user)
  • At least 25% of commercial contracts (excluding POCs) for 4G and 5G private mobile network services managed by the vendor, where it is the prime contractor with the enterprise (end user). If the vendor has fewer than 25% direct contracts, it must have at least 50 direct contracts (excluding POCs).
  • Commercial contracts (excluding POCs) in two or more regions where the vendor is the prime contractor for 4G and 5G private mobile network services provided by the vendor. Regions are defined as follows:
    • North America
    • Latin America
    • Western Europe
    • Eastern Europe
    • Eurasia
    • Greater China
    • Emerging Asia/Pacific
    • Mature Asia/Pacific
    • Middle East and North Africa
    • Sub-Saharan Africa
    • Other
  • Two or more commercial contracts (excluding POCs) won in the last 12 months where the vendor is the prime contractor for 4G and 5G private mobile network services managed by the vendor
  • Provide the following capabilities defined in this Magic Quadrant as prime contractor or through a third party:
    • Network end-to-end sourcing
    • Network design
    • Implementation and integration
    • Service management and support

……………………………………………………………………………………………..

Vendors were assessed based on the following:
  • Scope of the offering, and planned or ongoing investments in the following segments in terms of capabilities covered in this Magic Quadrant and specific offerings per segment, preintegrated functions with ecosystem partners, productized versus project-based offering for each segment, and geographical availability of each of the offerings (global/regional/local):
    • Dedicated/stand-alone
    • Hybrid PMN
    • PMN with core network slicing
    • Campus and level of integration with WLAN solutions
    • PMN for industrial sites including OT security capabilities and compliance
    • Multisite, including management capabilities to provide a centralized life cycle management experience for all included sites
    • PMN offering for small and midsize businesses
  • PMN-related acquisitions or strategic partnerships to add capabilities to the PMN offering
  • Radio planning and site survey capabilities
  • Modularity of the offer
  • Flexibility to offer an open partner ecosystem for core PMN elements (radio, core network, monitoring and life cycle management, edge/cloud computing infrastructure stack, SIM management)
  • Public network integration (private/public handover features)
  • Service management options (self-service, co-managed, fully managed service)
  • API capabilities
  • Bundling capabilities with other related prepackaged technologies and services such as IoT, MEC, managed mobility, cloud, security, industry-edge application
Business Model:
  • The design, logic and execution of the vendor’s business proposition to achieve continued success
  • The value proposition, revenue models, customer segmentation, distribution channels, etc.
  • Appropriate use of build/buy/partner options to maximize profitability
  • Management of customization costs
  • Use of automation to improve cost-efficiency
Vendors were also assessed based on the following:
  • Scope of the spectrum offered: regulated and industrial spectrum (CBRS type)
  • Proof of concept models
  • Flexibility in offering capex and opex models
  • Flexibility to bring your own partners

References:

https://www.gartner.com/doc/reprints?id=1-2J9ZQDL4&ct=241105&st=sb

https://www.fierce-network.com/wireless/op-ed-gartner-biffs-its-new-4g5g-magic-quadrant

https://www.cisco.com/c/en/us/solutions/private-5g-networks.html

https://www.celona.io/the-state-of-private-wireless

SNS Telecom & IT: Private 5G and 4G LTE cellular networks for the global defense sector is a $1.5B opportunity

SNS Telecom & IT: Private 5G Network market annual spending will be $3.5 Billion by 2027

Highlights of GSA report on Private Mobile Network Market – 3Q2024

Dell’Oro: 4G and 5G FWA revenue grew 7% in 2024; MRFR: FWA worth $182.27B by 2032

According to Dell’Oro Group, Fixed Wireless Access (FWA) has surged in recent years to support both residential and enterprise connectivity due to its ease of deployment, along with the more widespread availability of 4G LTE and 5G Sub-6 GHz networks. Preliminary findings suggest total FWA revenues, including RAN equipment, residential CPE, and enterprise router and gateway revenue, remain on track to advance 7% in 2024, driven largely by residential subscriber growth in North America and India, as well as growing branch office connectivity globally.

“Initially viewed as a way to monetize under-utilized spectrum, FWA has grown to become a major tool for connecting homes and businesses with broadband,” said Jeff Heynen, Vice President with the Dell’Oro Group. “What started in the U.S. is now expanding to India, Southeast Asia, Europe, and the Middle East, as mobile operators continue to expand their 5G-based FWA offerings to both residential and enterprise customers,” added Heynen.

Additional highlights from the Fixed Wireless Access Infrastructure and CPE Advanced Research Report:

  • Total FWA equipment revenue for the 2023-2027 period has been revised upward by 17 percent, reflecting continued positive subscriber growth in North America and India.
  • Long-term subscriber growth is expected to occur in emerging markets in Southeast Asia and MEA, due to upgrades to existing 3G and LTE networks and a need to connect subscribers economically.
  • The Satellite Broadband market will also be a key enabler of broadband connectivity in emerging markets, as well as rural markets where existing infrastructure either doesn’t exist or is cost-prohibitive to deploy. Subscriber growth will generally come from LEO-based providers including Starlink, OneWeb, and Project Kuiper.

About the Report

The Dell’Oro Group Fixed Wireless Access Infrastructure and CPE Report includes 5-year market forecasts for FWA CPE (Residential and Enterprise) and RAN infrastructure, segmented by technology, including 802.11/Other, 4G LTE, CBRS, 5G sub-6GHz, 5G mmWave, and 60GHz technologies. The report also includes regional subscriber forecasts for FWA and satellite broadband technologies, as well as Residential Gateway forecasts for satellite broadband deployments. To purchase this report, please contact us by email at [email protected].

In a related Dell’Oro post, Stefan Pongratz wrote that dedicated FWA RAN investment will remain below $1B:

The market opportunity for DSL and fiber replacements or alternative solutions is vast. According to the ITU and Ericsson’s Mobility Report, approximately 35% of the world’s two billion households remain underserved, lacking broadband connectivity. Beyond these unconnected households, FWA technologies can also address the needs of secondary homes and small businesses. With nearly half of 5G operators supporting 5G FWA (GSA), fixed wireless is already a mature technology, boosting both the RAN and the broadband markets.

Despite these advancements, the fundamental economics driving FWA are not expected to shift significantly in 2025. While technological improvements are expanding the TAM, the business case remains constrained by the mobile network’s capacity and the ROI of dedicated FWA RAN deployments. Operators continue refining their targets, but the existing mobile network infrastructure offers the most favorable RAN economics.

Although operators are gradually increasing their investments in dedicated RAN solutions for high-traffic areas, mobile networks are expected to maintain dominance in the near term. According to our latest FWA report, which covers the broader FWA ecosystem—including 3GPP and non-3GPP RAN and devices—dedicated FWA RAN investments are projected to stay below $1 billion in 2025.

…………………………………………………………………………………………………………………

Separately, MRFR says the FWA market will be worth $182.27 Billion by 2032. Here’s a chart of 5G Fixed Wireless Access Market Growth:

FWA Growth Drivers:

Rising Demand for High-Speed Internet: With the increasing reliance on digital infrastructure and applications, there is a surging demand for high-speed and reliable internet connectivity. 5G FWA solutions offer ultra-fast broadband to underserved and remote areas, addressing connectivity gaps effectively.

Growing Adoption of IoT and Advanced Technologies: The proliferation of IoT devices and the need for seamless connectivity are driving the adoption of 5G FWA solutions. Additionally, advancements in mmWave technology enhance bandwidth efficiency, boosting market adoption.

Cost-Effective Alternative to Fiber Networks: 5G FWA provides a cost-efficient and rapid deployment option compared to traditional fiber-based internet, making it an attractive solution for internet service providers and enterprises.

References:

Fixed Wireless Access Equipment Spend to Exceed $48 B Over the Next Five Years, According to Dell’Oro Group

https://www.delloro.com/what-to-expect-from-ran-in-2025/

https://www.marketresearchfuture.com/reports/5g-fixed-wireless-access-market-7561

https://tech.einnews.com/pr_news/776058765/5g-fixed-wireless-access-market-worth-182-27-billion-by-2032-exclusive-report-by-mrfr

Latest Ericsson Mobility Report talks up 5G SA networks (?) and FWA (!)

Fiber and Fixed Wireless Access are the fastest growing fixed broadband technologies in the OECD

Ericsson: Over 300 million Fixed Wireless Access (FWA) connections by 2028

WiFi 7: Backgrounder and CES 2025 Announcements

Backgrounder:

Wi-Fi 7, also known as IEEE 802.11be-2024 [1.], is the latest generation of Wi-Fi technology, offering significantly faster speeds, increased network capacity, and lower latency than previous versions like Wi-Fi 6, by utilizing features such as wider 320 MHz channels, Multi-Link Operation (MLO), and 4K-QAM modulation across all frequency bands (2.4 GHz, 5 GHz, and 6 GHz).  Wi-Fi 7 is designed to use huge swaths of unlicensed spectrum in the 6 GHz band, first made available in the Wi-Fi 6E standard, to deliver a maximum data rate of up to 46 Gbps.

Note 1. The Wi-Fi Alliance began certifying Wi-Fi 7 devices in January 2024. The IEEE approved the IEEE 802.11be standard on September 26, 2024. The standard supports at least one mode of operation capable of a maximum throughput of at least 30 Gbps, as measured at the MAC data service access point (SAP), with carrier frequency operation between 1 and 7.250 GHz, while ensuring backward compatibility and coexistence with legacy IEEE Std 802.11 compliant devices operating in the 2.4 GHz, 5 GHz, and 6 GHz bands.

………………………………………………………………………………………………………………………………………………………………………………………………..

The role of 6 GHz Wi-Fi in delivering connectivity is changing and growing. A recent report from OpenSignal found that smartphone users spend 77% to 88% of their screen-on time connected to Wi-Fi. Further, the latest generations of Wi-Fi (largely due to the support of 320 MHz channels and critical features like Multi-Link Operation) are increasingly reliable and deterministic, making them viable options for advanced applications like extended reality in both the home and the enterprise.

New features:

  • 320 MHz channels: Double the bandwidth compared to Wi-Fi 6E.
  • Multi-Link Operation (MLO): Allows devices to connect using multiple channels across different bands simultaneously.
  • 4K-QAM (4096-QAM) modulation: Enables more data to be transmitted per signal.
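
Those features multiply together to give the headline rate. As a sanity check on the 46 Gbps figure, here is a minimal Python sketch of the 802.11be PHY rate arithmetic (data subcarriers × bits per subcarrier × coding rate × spatial streams ÷ OFDM symbol time). The constants below (3,920 data subcarriers in a 320 MHz channel, 12 bits per subcarrier for 4096-QAM, rate-5/6 coding, 13.6 µs symbols including a 0.8 µs guard interval) follow the published 802.11be numerology; treat the script as an illustration, not a substitute for the standard.

```python
# Back-of-the-envelope 802.11be (Wi-Fi 7) peak PHY data-rate estimate.

def wifi7_phy_rate_gbps(spatial_streams: int,
                        data_subcarriers: int = 3920,   # 320 MHz channel (4 x 996-tone RUs)
                        bits_per_subcarrier: int = 12,  # 4096-QAM ("4K-QAM")
                        coding_rate: float = 5 / 6,     # highest 802.11be coding rate
                        symbol_time_s: float = 13.6e-6  # 12.8 us symbol + 0.8 us guard interval
                        ) -> float:
    """Peak PHY rate in Gbps for a given number of spatial streams."""
    bits_per_ofdm_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return spatial_streams * bits_per_ofdm_symbol / symbol_time_s / 1e9

print(f" 1 stream:  {wifi7_phy_rate_gbps(1):.2f} Gbps")   # ~2.88 Gbps per stream
print(f"16 streams: {wifi7_phy_rate_gbps(16):.1f} Gbps")  # ~46 Gbps, the headline maximum
```

Two spatial streams on the same 320 MHz channel work out to roughly 5.8 Gbps, which matches the “typical Wi-Fi 7 laptop” figure Intel cites below.
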
CES 2025 WiFi 7 Announcements:

1.  TP-Link unveiled the Deco BE68 Whole Home Mesh Wi-Fi 7 solution, which it claims delivers speeds of up to 14 Gbps, covering 8,100 sq. ft. and supporting up to 200 connected devices. “Featuring 10G, 2.5G, and 1G ports, it ensures fast, reliable wired connections. With Deco Mesh technology, the system delivers seamless coverage and uninterrupted performance for streaming, gaming, and more,” stated the company.

TP-Link also announced an outdoor mesh system to address the increasing demand for outdoor Wi-Fi connectivity. The Deco BE65-Outdoor and Deco BE25-Outdoor nodes are equipped with weatherproof, waterproof and dustproof enclosures. Combined with the Deco indoor models, they form a cohesive, reliable indoor-outdoor mesh network that lets users move seamlessly between the two environments.

2.  Intel’s new laptop chips (Intel Core Ultra Series 2) are all equipped with Wi-Fi 7 capabilities integrated into the silicon, as Intel has made Wi-Fi 7 its standard choice. On its website, the company explained that a “typical” Wi-Fi 7 laptop has a potential maximum data rate of almost 5.8 Gbps. “This is 2.4X faster than the 2.4 Gbps possible with Wi-Fi 6/6E and could easily enable high quality 8K video streaming or reduce a massive 15 GB file download to roughly 25 seconds vs. the one minute it would take with the best legacy Wi-Fi technology,” Intel added.

3. ASUS: New Wi-Fi 7 Router Lineup

ASUS unveiled a range of new networking products at CES 2025, including the ASUS RT-BE58 Go travel router and ASUS 5G-Go mobile router – both recipients of the CES 2025 Innovation Award – alongside the ROG Rapture GT-BE19000AI gaming router and the ZenWiFi Outdoor series for home Wi-Fi setups.

  • The RT-BE58 Go is a dual-band, Wi-Fi 7-capable mobile router that supports three use cases: 4G/5G mobile tethering, public Wi-Fi hotspot (WISP), and conventional home router. It also supports VPN services from up to 30 providers and subscription-free Trend Micro security for online protection, while AiMesh compatibility allows the router to be paired with other ASUS routers to provide wider signal coverage.
  • The ROG Rapture GT-BE19000AI is the latest iteration of the GT-BE19000 router released last year, this time with an NPU onboard coupled with a CPU and MCU. This tri-core combination enables features like ROG AI Game Booster and Adaptive QoS 2.0 to reduce network latency by up to 34% for supported games, plus 46% power savings through its AI Power Saving mode, which saves power based on usage patterns. Additional features include advanced ad and tracker blocking, network insights, and RF scanning.

References:

https://standards.ieee.org/ieee/802.11be/7516/

https://en.wikipedia.org/wiki/Wi-Fi_7

https://www.mathworks.com/help/wlan/ug/overview-of-wifi-7-or-ieee-802-11-be.html

[CES 2025] ASUS Presents New Wi-Fi 7 Router Lineup

Google, MediaTek team up; a new Wi-Fi HaLow chip; Wi-Fi 7 becomes standard — Top Wi-Fi news from CES 2025

WiFi 7 and the controversy over 6 GHz unlicensed vs licensed spectrum

Telstra selects SpaceX’s Starlink to bring Satellite-to-Mobile text messaging to its customers in Australia

Australia’s Telstra currently works with SpaceX’s Starlink to provide low-Earth orbit (LEO) satellite home and small business Internet services.  Today, the company announced it will be adding direct-to-device (D2D) text messaging services for customers in Australia.  We wrote about that in this IEEE Techblog post. Telstra’s new D2D service is currently in the testing phase and not yet available commercially. Telstra forecasts it will be available from most outdoor areas on mainland Australia and Tasmania where there is a direct line of sight to the sky.

Telstra already has the largest and most reliable mobile network in Australia, covering 99.7% of the Australian population over an area of 3 million square kilometres, which is more than 1 million square kilometres greater than its nearest competitor. But Australia’s landmass is vast and there will always be large areas where mobile and fixed networks do not reach, and this is where satellite technology will play a complementary role to its existing networks.  As satellite technology continues to evolve to support voice, data and IoT, Telstra plans to explore opportunities for the commercial launch of those new services.

Telstra previously teamed up with satellite provider Eutelsat OneWeb to deliver low-Earth orbit (LEO) mobile backhaul to customers in Australia. The telco said the D2D text messaging service with Starlink will provide improved coverage to customers in regional and remote areas, and noted it has invested $11.8 billion in its mobile network in Australia over the past seven years.

T-Mobile, AT&T and Verizon are all working on satellite-based text messaging services. Many D2D providers such as Starlink have promised text messaging services initially, with plans to add more bandwidth-heavy applications, including voice and video, at a later date.  “The first Starlink satellite direct to cell phone constellation is now complete,” SpaceX’s Elon Musk wrote on social media in December 2024. That’s good news for T-Mobile, which plans to launch a D2D service with Starlink in the near future.  Verizon and AT&T are working with satellite provider AST SpaceMobile to develop their own D2D services.

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

What is Satellite-to-Mobile technology?

Satellite-to-Mobile is one of the most exciting areas in the whole telco space. It creates a future where outdoor connectivity for basic services, starting with text messages and eventually voice and low rates of data, may be possible from some of Australia’s most remote locations. You may also hear it referred to as Direct to Handset or DTH technology.

What makes this technology so interesting is that many people won’t need to buy a specific compatible phone to send an SMS over Satellite-to-Mobile, as it will take advantage of technology already inside modern smartphones.

Satellite-to-Mobile will complement Telstra’s existing land-based mobile network, offering basic connectivity where people have never had it before. This technology will continue to mature: it will initially support sending and receiving text messages and, in the longer term, voice and low speed data to smartphones across Australia when outdoors with a clear line of sight to the sky. Just as mobile networks didn’t replace fibre networks, it’s important to realise there is a considerable difference between the carrying capacity of satellite versus mobile technology.

Who will benefit most from Satellite-to-Mobile technology?

Satellite-to-Mobile is most relevant to people in regional and remote areas of the country that are outside their carrier’s mobile coverage footprint.

Currently, Satellite-to-Mobile technology allows users to send a message only.

This is currently really a “just-in-case” connectivity layer that allows a person to make contact for help or let someone know they are OK when they are outside their own carrier’s mobile coverage footprint.

……………………………………………………………………………………………………………………………………………………………

References:

https://www.telstra.com.au/internet/starlink

https://www.telstra.com.au/exchange/telstra-to-bring-spacex-s-starlink-satellite-to-mobile-technolog

https://www.lightreading.com/satellite/telstra-taps-starlink-for-d2d-satellite-messaging-service

https://www.lightreading.com/satellite/amazon-d2d-offerings-are-in-development-

Telstra partners with Starlink for home phone service and LEO satellite broadband services

AT&T deal with AST SpaceMobile to provide wireless service from space

AST SpaceMobile: “5G” Connectivity from Space to Everyday Smartphones

AST SpaceMobile achieves 4G LTE download speeds >10 Mbps during test in Hawaii

AST SpaceMobile completes 1st ever LEO satellite voice call using AT&T spectrum and unmodified Samsung and Apple smartphones

AST SpaceMobile Deploys Largest-Ever LEO Satellite Communications Array

vRAN market disappoints – just like OpenRAN and mobile 5G

Most wireless network operators are not convinced virtual RAN (vRAN) [1.] is worth the effort to deploy. Omdia, an analyst company owned by Informa, put vRAN’s share of the total market for RAN baseband products at just 10% in 2023. It is growing slowly, with 20% market share forecast by 2028, but it is far from being the default RAN architectural choice.

Among the highly touted benefits of virtualization is the ability for RAN developers to exploit the much bigger economies of scale found in the mainstream IT market. “General-purpose technology will eventually have so much investment in it that it will outpace custom silicon,” said Sachin Katti, the general manager of Intel’s network and edge group, during a previous Light Reading interview.

Note 1. The key feature of vRAN is the virtualization of RAN functions, allowing operators to perform baseband operations on standard servers instead of dedicated hardware.  The Asia Pacific region is currently leading in vRAN adoption due to rapid 5G deployment in countries like China, South Korea, and Japan. Samsung has established a strong presence as a supplier of vRAN equipment and software.

The whole market for RAN products generated revenues of just $40 billion in 2023. Intel alone made $54.2 billion in sales that same year.  Yet Huawei, Ericsson and Nokia, the big players in RAN base station technology, have continued to miniaturize and advance their custom chips. Nokia boasts 5-nanometer chips in its latest products and last year lured Derek Urbaniak, a highly regarded semiconductor expert, from Ericsson in a sign it wants to play an even bigger role in custom chip development.

Ericsson collaborates closely with Intel on virtual RAN, and yet it has repeatedly insisted its application-specific integrated circuits (ASICs) perform better than Intel’s CPUs in 5G. One year ago, Michael Begley, Ericsson’s head of RAN compute, told Light Reading that “purpose-built hardware will continue to be the most energy-efficient and compact hardware for radio site deployments going forward.”

Intel previously suffered delays when moving to smaller designs and there is gloominess about its prospects, as noted in several IEEE Techblog posts like this one and this one. Intel suffered a $17 billion loss for the quarter ending in September, after reporting a small $300 million profit a year before. Sales fell 6% year-over-year, to $13.3 billion, over this same period.

Unfortunately, for telcos eyeing virtualization, Intel is all they really have. Its dominance of the small market for virtual RAN has not been weakened in the last couple of years, leaving operators with no viable alternatives. This was made apparent in a recent blog post by Ericsson, which listed Intel as the only commercial-grade chip solution for virtual RAN. AMD was at the “active engagement” stage, said Ericsson last November. Processors based on the blueprints of ARM, a UK-based chip designer that licenses its designs, were not even mentioned.

The same economies-of-scale case for virtual RAN is now being made about Nvidia and its graphics processing units (GPUs), which Nvidia boss Jensen Huang seems eager to pitch as a kind of general-purpose AI successor to more humdrum CPUs. If the RAN market is too small, and its developers must ride in the slipstream of a much bigger market, Nvidia and its burgeoning ecosystem may seem a safer bet than Intel. And the GPU maker already has a RAN pitch, including a lineup of Arm-based CPUs to host some of the RAN software.

Semiconductor-related economies of scale should not be the sole benefit of a virtual RAN. “With a lot of the work that’s been done around orchestration, you can deploy new software to hundreds of sites in a couple of hours in a way that was not feasible before,” said Alok Shah of Samsung Electronics. Architecturally, virtualization should allow an operator to host its RAN on the same cloud-computing infrastructure used for other telco and IT workloads. With a purpose-built RAN, an operator would be using multiple infrastructure platforms.

In telecom markets without much fiber or fronthaul infrastructure there is unlikely to be much centralization of RAN compute. This necessitates the deployment of servers at mast sites, where it is hard to see them being used for anything but the RAN. Even if a company wanted to host other applications at a mobile site, the processing power of Sapphire Rapids, the latest Intel generation, is fully consumed by the functions of the virtual distributed unit (vDU), according to Shah. “I would say the vDU function is kind of swallowing up the whole server,” he said.

Indeed, for all the talk of total cost of ownership (TCO) savings, some deployments of Sapphire Rapids have even had to feature two servers at a site to support a full 5G service, according to Paul Miller, the chief technology officer of Wind River, which provides the cloud-computing platform for Samsung’s virtual RAN in Verizon’s network.  Miller expects that to change with Granite Rapids, the forthcoming successor technology to Sapphire Rapids. “It’s going to be a bit of a sea change for the network from a TCO perspective – that you may be able to get things that took two servers previously, like low-band and mid-band 5G, onto a single server,” he said.

Samsung’s Shah is hopeful Granite Rapids will even free up compute capacity for other types of applications. “We’ll have to see how that plays out, but the opportunity is there, I think, in the future, as we get to that next generation of compute.” In the absence of many alternative processor platforms, especially for telcos rejecting the inline virtual RAN approach, Intel will be under pressure to make sure the journey for Granite Rapids is less turbulent than it sounds.

Another challenge is mobile backhaul, which is expected to limit the growth of the vRAN industry. Backhaul connectivity is widely used in wireless networks to transfer a signal from a remote cell site to the core network (typically at the edge of the Internet). The two main methods of mobile backhaul implementation are fiber-based and wireless point-to-point backhaul.

The pace of data delivery suffers in small cell networks with poor mobile network connectivity, and data management is becoming more and more important as small cells are employed for network connectivity. Increased data traffic across small cells, which raises questions about data security, is mostly to blame for poor data management. vRAN solutions promise improved network resiliency and utilization, faster network routing, and better-optimized network architecture to meet the diverse 5G requirements of enterprise customers.

References:

https://www.lightreading.com/5g/virtual-ran-still-seems-to-be-not-worth-the-effort

https://www.ericsson.com/en/blog/north-america/2024/open-ran-progress-report

https://www.sdxcentral.com/5g/ran/definitions/vran/

https://www.businessresearchinsights.com/market-reports/virtualized-radio-access-network-vran-market-106129

https://www.globalgrowthinsights.com/market-reports/virtualized-radio-access-network-vran-market-100486

LightCounting: Open RAN/vRAN market is pausing and regrouping

Dell’Oro: Private 5G ecosystem is evolving; vRAN gaining momentum; skepticism increasing

Huawei CTO Says No to Open RAN and Virtualized RAN

Heavy Reading: How network operators will deploy Open RAN and cloud native vRAN

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

At CES 2025 today, Intel unveiled the new Intel® Core™ Ultra (Series 2) processors, designed to revolutionize mobile computing for businesses, creators and enthusiast gamers. Intel said “the new processors feature cutting-edge AI enhancements, increased efficiency and performance improvements.”

“Intel Core Ultra processors are setting new benchmarks for mobile AI and graphics, once again demonstrating the superior performance and efficiency of the x86 architecture as we shape the future of personal computing,” said Michelle Johnston Holthaus, interim co-CEO of Intel and CEO of Intel Products. “The strength of our AI PC product innovation, combined with the breadth and scale of our hardware and software ecosystem across all segments of the market, is empowering users with a better experience in the traditional ways we use PCs for productivity, creation and communication, while opening up completely new capabilities with over 400 AI features. And Intel is only going to continue bolstering its AI PC product portfolio in 2025 and beyond as we sample our lead Intel 18A product to customers now ahead of volume production in the second half of 2025.”

Intel also announced new edge computing processors, designed to provide scalability and superior performance across diverse use cases. Intel Core Ultra processors were said to deliver remarkable power efficiency, making them ideal for AI workloads at the edge, with performance gains that surpass competing products in critical metrics like media processing and AI analytics. Those edge processors are targeted at compute servers running in hospitals, retail stores, factory floors and other “edge” locations that sit between big data centers and end-user devices. Such locations are becoming increasingly important to telecom network operators hoping to sell AI capabilities, private wireless networks, security offerings and other services to those enterprise locations.

Intel edge products launching today at CES include:

  • Intel® Core™ Ultra 200S/H/U series processors (code-named Arrow Lake).
  • Intel® Core™ 200S/H series processors (code-named Bartlett Lake S and Raptor Lake H Refresh).
  • Intel® Core™ 100U series processors (code-named Raptor Lake U Refresh).
  • Intel® Core™ 3 processor and Intel® Processor (code-named Twin Lake).

“Intel has been powering the edge for decades,” said Michael Masci, VP of product management in Intel’s edge computing group, during a media presentation last week.  According to Masci, AI is beginning to expand the edge opportunity through inferencing [1.].  “Companies want more local compute. AI inference at the edge is the next major hotbed for AI innovation and implementation,” he added.

Note 1. Inferencing in AI refers to the process where a trained AI model makes predictions or decisions on new data, as opposed to the training phase where the model learns from stored datasets. It’s essentially AI’s ability to apply learned knowledge to fresh inputs in real time. Edge computing plays a critical role in inferencing because it brings compute closer to users. That lowers latency (much faster AI responses) and can also reduce bandwidth costs and help ensure privacy and security.
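
To make the training/inference distinction concrete, here is a toy Python sketch (invented purely for illustration; it is not Intel or Nvidia code). A tiny linear model is fitted once, standing in for the expensive data-center training phase, and then applied repeatedly to fresh inputs, which is the lightweight inference step that edge processors target.

```python
import numpy as np

# --- Training phase (done once, typically in a big data center) ---
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))               # 1,000 samples, 4 features
true_w = np.array([2.0, -1.0, 0.5, 3.0])           # "ground truth" for the toy data
y_train = X_train @ true_w + rng.normal(scale=0.1, size=1000)

# Fit weights by least squares -- the costly, compute-heavy part in real AI.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# --- Inference phase (runs repeatedly at the edge on new data) ---
def infer(x_new: np.ndarray) -> float:
    """Apply the already-trained weights to a fresh input: one cheap dot product."""
    return float(x_new @ w)

print(infer(np.array([1.0, 0.0, 0.0, 0.0])))       # ~2.0, per the learned weight
```

The asymmetry is the point: training touches the whole dataset many times, while each inference is a single cheap pass over new input, which is why it can live on a modest edge box close to the user.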

Editor’s Note: Intel’s edge compute business – the one pursuing AI inferencing – is in its Client Computing Group (CCG) business unit. Intel’s chips for telecom operators reside inside its NEX business unit.

Intel’s Masci specifically called out Nvidia’s GPU chips, claiming Intel’s new silicon lineup supports up to 5.8x faster performance and better usage per watt.  Indeed, Intel claims its “Core™ Ultra 7 processor uses about one-third fewer TOPS (Trillions of Operations Per Second) than Nvidia’s Jetson AGX Orin, but beats its competitor with media performance that is up to 5.6 times faster, video analytics performance that is up to 3.4x faster and performance per watt per dollar up to 8.2x better.”
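
Taking Intel’s quoted numbers at face value, the interesting metric is throughput per TOPS rather than raw TOPS. This short calculation (an illustration using only the figures above) shows the implied efficiency gap:

```python
# Sanity-check of Intel's CES claim, using only the numbers quoted above.
intel_relative_tops = 2 / 3   # "about one-third fewer TOPS" than Jetson AGX Orin
media_speedup = 5.6           # Intel's claimed media-performance multiple

print(f"Implied media performance per TOPS: {media_speedup / intel_relative_tops:.1f}x")  # ~8.4x
```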

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

However, Nvidia has been using inference in its AI chips for quite some time. Company officials last month confirmed that 40% of Nvidia’s revenues come from AI inference, rather than AI training efforts in big data centers.  Colette Kress, Nvidia Executive Vice President and Chief Financial Officer, said, “Our architecture allows an end-to-end scaling approach for them to do whatever they need to in the world of accelerated computing and AI. And we’re a very strong candidate to help them, not only with that infrastructure, but also with the software.”

“Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost could be as low as possible, but you also need the latency to be low,” explained Nvidia CEO Jensen Huang during his company’s recent quarterly conference call.

“Our hopes and dreams is that someday, the world does a ton of inference. And that’s when AI has really succeeded, right? It’s when every single company is doing inference inside their companies for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding, of course. And so we hope that every company is doing inference 24/7.”

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Sadly for its many fans (including this author), Intel continues to struggle in both data center processors and AI/GPU chips. The Wall Street Journal recently reported that “Intel’s perennial also-ran, AMD, actually eclipsed Intel’s revenue for chips that go into data centers. This is a stunning reversal: In 2022, Intel’s data-center revenue was three times that of AMD.”

Even worse for Intel, more and more of the chips that go into data centers are GPUs and Intel has minuscule market share of these high-end chips. GPUs are used for training and delivering AI.  The WSJ notes that many of the companies spending the most on building out new data centers are switching to chips that have nothing to do with Intel’s proprietary architecture, known as x86, and are instead using a combination of a competing architecture from ARM and their own custom chip designs.  For example, more than half of the CPUs Amazon has installed in its data centers over the past two years were its own custom chips based on ARM’s architecture, Dave Brown, Amazon vice president of compute and networking services, said recently.

This displacement of Intel is being repeated all across the big providers and users of cloud computing services. Microsoft and Google have also built their own custom, ARM-based CPUs for their respective clouds. In every case, companies are moving in this direction because of the kind of customization, speed and efficiency that custom silicon supports.

References:

https://www.intel.com/content/www/us/en/newsroom/news/2025-ces-client-computing-news.html#gs.j0qbu4

https://www.intel.com/content/www/us/en/newsroom/news/2025-ces-client-computing-news.html#gs.j0qdhd

https://seekingalpha.com/article/4741811-nvidia-corporation-nvda-ubs-global-technology-conference-transcript

https://www.wsj.com/tech/intel-microchip-competitors-challenges-562a42e3

https://www.lightreading.com/the-edge-network/intel-desperate-for-an-edge-over-nvidia-with-ai-inferencing

Massive layoffs and cost cutting will decimate Intel’s already tiny 5G network business

WSJ: China’s Telecom Carriers to Phase Out Foreign Chips; Intel & AMD will lose out

The case for and against AI-RAN technology using Nvidia or AMD GPUs

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

FT: Nvidia invested $1bn in AI start-ups in 2024

AI winner Nvidia faces competition with new super chip delayed

AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

A growing portion of the billions of dollars being spent on AI data centers will go to the suppliers of networking chips, lasers, and switches that integrate thousands of GPUs and conventional micro-processors into a single AI computer cluster. AI can’t advance without advanced networks, says Nvidia’s networking chief Gilad Shainer. “The network is the most important element because it determines the way the data center will behave.”

Networking chips now account for just 5% to 10% of all AI chip spending, said Broadcom CEO Hock Tan. As the size of AI server clusters hits 500,000 or a million processors, Tan expects that networking will become 15% to 20% of a data center’s chip budget. A data center with a million or more processors will cost $100 billion to build.
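
A rough sizing exercise based on Tan’s percentages: the article gives the $100 billion build cost but not the chip budget, so the sketch below assumes, purely for illustration, that chips account for half of the build cost.

```python
# Networking-chip slice of an AI cluster's chip budget, per Hock Tan's figures.
build_cost_usd = 100e9                  # 1M-processor data center build cost
chip_budget_usd = build_cost_usd * 0.5  # ASSUMPTION: chips are half the build cost

for label, lo, hi in [("today (5-10% of chip spend)", 0.05, 0.10),
                      ("at ~1M-processor scale (15-20%)", 0.15, 0.20)]:
    print(f"Networking chips {label}: "
          f"${chip_budget_usd * lo / 1e9:.1f}B to ${chip_budget_usd * hi / 1e9:.1f}B")
```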

The firms building the biggest AI clusters are the hyperscalers, led by Alphabet’s Google, Amazon.com, Facebook parent Meta Platforms, and Microsoft. Not far behind are Oracle, xAI, Alibaba Group Holding, and ByteDance. Earlier this month, Bloomberg reported that capex for those four hyperscalers would exceed $200 billion this year, making the year-over-year increase as much as 50%. Goldman Sachs estimates that AI data center spending will rise another 35% to 40% in 2025.  Morgan Stanley expects Amazon and Microsoft to lead the pack with $96.4bn and $89.9bn of capex respectively, while Google and Meta will follow at $62.6bn and $52.3bn.

AI compute server architectures began scaling in recent years for two reasons.

1.] High end processor chips from Intel neared the end of speed gains made possible by shrinking a chip’s transistors.

2.] Computer scientists at companies such as Google and OpenAI built AI models that performed amazing feats by finding connections within large volumes of training material.

As the parameters of these large language models (LLMs) grew to millions, billions, and then trillions, they began translating languages, doing college homework, handling customer support, and designing cancer drugs. But training an AI LLM is a huge task, as it calculates across billions of data points, rolls those results into new calculations, then repeats. Even with Nvidia accelerator chips to speed up those calculations, the workload has to be distributed across thousands of Nvidia processors and run for weeks.

To keep up with the distributed computing challenge, AI data centers all have two networks:

  1. The “front end” network, which sends and receives data to/from external users—like the networks of every enterprise data center or cloud-computing center. It’s placed on the network’s outward-facing front end or boundary and typically includes equipment like high end routers, web servers, DNS servers, application servers, load balancers, firewalls, and other devices which connect to the public internet, IP-MPLS VPNs and private lines.
  2. A “back end” network that connects every AI processor (GPUs and conventional MPUs) and memory chip with every other processor within the AI data center. “It’s just a supercomputer made of many small processors,” says Ram Velaga, Broadcom’s chief of core switching silicon. “All of these processors have to talk to each other as if they are directly connected.”  AI’s back-end networks need high bandwidth switches and network connections. Delays and congestion are expensive when each Nvidia compute node costs as much as $400,000. Idle processors waste money. Back-end networks carry huge volumes of data. When thousands of processors are exchanging results, the data crossing one of these networks in a second can equal all of the internet traffic in America.

Nvidia became one of today’s largest vendors of network gear via its acquisition of Israel-based Mellanox in 2020 for $6.9 billion. CEO Jensen Huang and his colleagues realized early on that AI workloads would exceed a single box. They started using InfiniBand—a network designed for scientific supercomputers—supplied by Mellanox. InfiniBand became the standard for AI back-end networks.

While most AI dollars still go to Nvidia GPU accelerator chips, back-end networks are important enough that Nvidia has large networking sales. In the September quarter, those network sales grew 20%, to $3.1 billion. However, Ethernet is now challenging InfiniBand’s lock on AI networks.  Fortunately for Nvidia, its Mellanox subsidiary also makes high speed Ethernet hardware modules. For example, xAI uses Nvidia Ethernet products in its record-size Colossus system.

While current versions of Ethernet lack InfiniBand’s tools for memory and traffic management, those are now being added in a version called Ultra Ethernet [1.]. Many hyperscalers think Ethernet will outperform InfiniBand, as clusters scale to hundreds of thousands of processors. Another attraction is that Ethernet has many competing suppliers.  “All the largest guys—with an exception of Microsoft—have moved over to Ethernet,” says an anonymous network industry executive. “And even Microsoft has said that by summer of next year, they’ll move over to Ethernet, too.”

Note 1.  Primary goals and mission of Ultra Ethernet Consortium (UEC):  Deliver a complete architecture that optimizes Ethernet for high performance AI and HPC networking, exceeding the performance of today’s specialized technologies. UEC specifically focuses on functionality, performance, TCO, and developer and end-user friendliness, while minimizing changes to only those required and maintaining Ethernet interoperability. Additional goals: Improved bandwidth, latency, tail latency, and scale, matching tomorrow’s workloads and compute architectures. Backwards compatibility to widely-deployed APIs and definition of new APIs that are better optimized to future workloads and compute architectures.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Ethernet back-end networks offer a big opportunity for Arista Networks, which builds switches using Broadcom chips. In the past two years, AI data centers became an important business for Arista.  AI provides sales to Arista switch rivals Cisco and Juniper Networks (soon to be a part of Hewlett Packard Enterprise), but those companies aren’t as established among hyperscalers. Analysts expect Arista to get more than $1 billion from AI sales next year and predict that the total market for back-end switches could reach $15 billion in a few years. Three of the five big hyperscale operators are using Arista Ethernet switches in back-end networks, and the other two are testing them. Arista CEO Jayshree Ullal (a former SCU EECS grad student of this author/x-adjunct Professor) says that back-end network sales seem to pull along more orders for front-end gear, too.

The network chips used for AI switching are feats of engineering that rival AI processor chips. Cisco makes its own custom Ethernet switching chips, but some 80% of the chips used in other Ethernet switches comes from Broadcom, with the rest supplied mainly by Marvell. These switch chips now move 51 terabits of data a second; it’s the same amount of data that a person would consume by watching videos for 200 days straight. Next year, switching speeds will double.

The other important parts of a network are connections between computing nodes and cables. As the processor count rises, connections increase at a faster rate. A 25,000-processor cluster needs 75,000 interconnects. A million processors will need 10 million interconnects.  More of those connections will be fiber optic, instead of copper or coax.  As networks speed up, copper’s reach shrinks. So, expanding clusters have to “scale-out” by linking their racks with optics. “Once you move beyond a few tens of thousand, or 100,000, processors, you cannot connect anything with copper—you have to connect them with optics,” Velaga says.
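
The two data points above make the superlinear scaling explicit: links per processor more than triples between the two cluster sizes, which is why interconnects, and increasingly optics, claim a growing share of cluster cost. A quick check in Python:

```python
# Interconnects grow faster than processor count, per the article's figures.
clusters = {25_000: 75_000, 1_000_000: 10_000_000}  # processors -> interconnects

for procs, links in clusters.items():
    print(f"{procs:>9,} processors -> {links:>10,} interconnects "
          f"({links / procs:.0f} per processor)")
```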

AI processing chips (GPUs) exchange data at about 10 times the rate of a general-purpose processor chip. Copper has been the preferred conduit because it’s reliable and requires no extra power. At current network speeds, copper works well at lengths of up to five meters. So, hyperscalers have tried to “scale-up” within copper’s reach by packing as many processors as they can within each shelf, and rack of shelves.

Back-end connections now run at 400 gigabits per second, which is equal to a day and a half of video viewing. Broadcom’s Velaga says network speeds will rise to 800 gigabits in 2025, and 1.6 terabits in 2026.

Nvidia, Broadcom, and Marvell sell optical interface products, with Marvell enjoying a strong lead in 800-gigabit interconnects. A number of companies supply lasers for optical interconnects, including Coherent, Lumentum Holdings, Applied Optoelectronics, and Chinese vendors Innolight and Eoptolink. They will all battle for the AI data center over the next few years.

A 500,000-processor cluster needs at least 750 megawatts, enough to power 500,000 homes. When AI models scale to a million or more processors, they will require gigawatts of power and have to span more than one physical data center, says Velaga.
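
Dividing those figures out gives roughly 1.5 kW per processor. Extrapolating linearly (an assumption, since the article doesn’t give a per-node figure) to the million-processor clusters Velaga describes lands in gigawatt territory:

```python
# Power arithmetic from the article: 750 MW for a 500,000-processor cluster.
cluster_power_w = 750e6
processors = 500_000

per_node_kw = cluster_power_w / processors / 1e3
print(f"{per_node_kw:.1f} kW per processor")               # -> 1.5 kW

million_cluster_gw = per_node_kw * 1_000_000 / 1e6         # kW -> GW
print(f"{million_cluster_gw:.1f} GW for a 1,000,000-processor cluster")
```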

The opportunity for optical connections reaches beyond the AI data center. That’s because there isn’t enough power.  In September, Marvell, Lumentum, and Coherent demonstrated optical links for data centers as far apart as 300 miles. Nvidia’s next-generation networks will be ready to run a single AI workload across remote locations.

Some worry that AI performance will stop improving as processor counts scale. Nvidia’s Jensen Huang dismissed those concerns on his last conference call, saying that clusters of 100,000 processors or more will just be table stakes with Nvidia’s next generation of chips.  Broadcom’s Velaga says he is grateful: “Jensen (Nvidia CEO) has created this massive opportunity for all of us.”

References:

https://www.barrons.com/articles/ai-networking-nvidia-cisco-broadcom-arista-bce88c76?mod=hp_WIND_B_1_1  (PAYWALL)

https://www.msn.com/en-us/news/technology/networking-companies-ride-the-ai-wave-it-isn-t-just-nvidia/ar-AA1wJXGa?ocid=BingNewsSerp

https://www.datacenterdynamics.com/en/news/morgan-stanley-hyperscaler-capex-to-reach-300bn-in-2025/

https://ultraethernet.org/ultra-ethernet-specification-update/

Will AI clusters be interconnected via Infiniband or Ethernet: NVIDIA doesn’t care, but Broadcom sure does!

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Canalys & Gartner: AI investments drive growth in cloud infrastructure spending

AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms

AI wave stimulates big tech spending and strong profits, but for how long?

Markets and Markets: Global AI in Networks market worth $10.9 billion in 2024; projected to reach $46.8 billion by 2029

Using a distributed synchronized fabric for parallel computing workloads- Part I

Using a distributed synchronized fabric for parallel computing workloads- Part II

U.S. federal appeals court says FCC’s net neutrality/open internet rules are “unlawful”

The U.S. Court of Appeals for the Sixth Circuit today struck down the Federal Communications Commission’s (FCC’s) hard-fought and long-debated net neutrality/open internet rules. The FCC had sought to reinstate a sweeping policy established under President Obama that was designed to treat internet service as an essential public service, similar to a water or power utility.  The court ruled that broadband communications, including broadband delivered via mobile networks, is classified as an “information” service rather than a more heavily regulated “telecommunications” service. That important distinction means the FCC lacks the authority to impose the current set of rules, the court’s three-judge panel found.

The court said a recent U.S. Supreme Court ruling had removed a judicial framework that allowed courts to interpret rules with deference to the federal agency that created them. The 6th Circuit said the FCC did not have the statutory authority to change the classification of broadband internet to a telecommunications service. That role rests with Congress. The case was brought by the Ohio Telecom Association, a trade organization representing internet service providers (ISPs).

The negative ruling arrives about six months after the Sixth Circuit court stayed the FCC’s network neutrality rules, which aimed to resurrect broadband regulation under a more heavily regulated Title II/telecommunications service classification. The FCC, under current Chairwoman Jessica Rosenworcel, passed that set of rules in April 2024.

Backgrounder:  Under the net neutrality rules, internet service providers (ISPs) would have been subjected to greater regulation. A Republican-led commission repealed the rules in 2017 during President-elect Donald Trump’s first term in office.  Early last year, the FCC — then back under Democrat control — voted to formalize a national standard for internet service to prevent the blocking or slowing of information delivered over broadband internet lines. The core principle of open internet meant that internet service providers couldn’t discriminate among content suppliers. The order also would have given the FCC increased oversight to demand that internet providers respond to service outages or security breaches involving consumers’ data. The FCC cited national security, saying increased oversight was necessary for the commission to effectively crack down on foreign-owned companies that were deemed to be security threats.

“Using ‘the traditional tools of statutory construction’ … we hold that Broadband Internet Service Providers offer only an ‘information service’ … and therefore, the FCC lacks the statutory authority to impose its desired net-neutrality policies through the ‘telecommunications service’ provision of the Communications Act,” the Sixth Circuit court, based in Cincinnati, OH, said in its ruling.

“We conclude that Broadband Internet Service Providers at the very least offer consumers the ‘capability’ of ‘retrieving’ ‘information via telecommunications’,” the Sixth Circuit explained. “Accordingly, the FCC’s contrary conclusion is unlawful.”

The decision is another blow to an FCC that had fought for the rules to be reinstated under Rosenworcel and the outgoing Biden administration. The decision is a win for broadband service providers along with several organizations, including ACA Connects, CTIA, NCTA and USTelecom, that had argued against the rules, holding that the market has been thriving under a “light-touch regulatory framework.”

In a statement about the Sixth Circuit Court’s decision, FCC chairwoman Rosenworcel said:

“Consumers across the country have told us again and again that they want an internet that is fast, open, and fair. With this decision it is clear that Congress now needs to heed their call, take up the charge for net neutrality, and put open internet principles in federal law.”

However, the ruling is likely just the start of a larger wave of deregulatory shifts that’s expected to occur in 2025 and beyond under the new Trump administration.  Incoming FCC Chairman Brendan Carr (see quote below) is the senior Republican on the five-member commission and has championed many of Trump’s causes. One of the authors of the Project 2025 policy paper, he has outlined plans to remove regulations conservatives consider overbearing or outdated. He will also wrestle with looming budget crunches and court rulings that threaten to erode the federal agency’s overall authority.

Some reactions to Thursday’s ruling:

1.  “We hope that today’s decision will allow for a refocused conversation about effective ways to achieve national goals with respect to broadband access,” Mike Romano, EVP of NTCA – The Rural Broadband Association, said in a statement.

2.  MoffettNathanson analyst Craig Moffett noted in an emailed statement to investors that the broadband market has been concerned that if the FCC had the authority to impose Title II reclassification, it could open the door for the Commission to impose broadband price regulation. “That risk is now put to bed,” he said.  The decision was of little surprise: the FCC’s ability to start enforcing the rules had been delayed after the court put them on hold amid a review of the precedent set by the US Supreme Court’s decision last June (in Loper Bright) to strike down the decades-old Chevron deference doctrine. That decision stands to limit the power and authority of federal agencies, such as the FCC, in interpreting certain laws that are considered ambiguous. Notably, Chevron has played a significant role over the years in establishing the FCC’s authority to set and enforce network neutrality regulations.  “Indeed, the reason we and others stopped worrying about Title II was because it was clear that the judicial principle of Chevron Deference wasn’t going to survive much longer,” Moffett wrote.

3. Free Press, a long-time network neutrality advocate, argued that the court wrongly rejected the FCC’s jurisdiction on the matter. “Beyond being a disappointing outcome, today’s 6th Circuit opinion is just plainly wrong at every level of analysis,” Matt Wood, VP of policy and general counsel at Free Press, said in a statement. “Today’s decision will let the incoming Trump FCC abdicate its responsibility to protect internet users against unscrupulous business practices … It’s rich to think of Donald Trump and Elon Musk’s hand-picked FCC chairman characterizing light-touch broadband rules as heavy-handed regulation, while scheming to force carriage of viewpoints favorable to Trump on the nation’s broadcast airwaves and social media sites.”

4.  Statement from Tully Center for Free Speech on Net Neutrality Ruling:

“The circuit court effectively shoots down the FCC’s 2024 net neutrality order in this latest volley in this decades-long regulatory debate. The decision noted the protracted back-and-forth regulatory history of whether the internet is akin to a telephone company or a telecommunications system for legal regulatory purposes.  The decision, which applies contemporary legal standards for administrative regulatory schemes, pretty much says that the FCC should not have a tight hand in regulating broadband internet services. It is a complicated matter and we will see how this ultimately affects consumers and the free flow of data and information.” – Roy Gutterman, Syracuse University
………………………………………………………………………………………………………………………………………………………………………………………..

As expected, FCC Commissioner and incoming FCC Chairman Brendan Carr (Republican) cheered news of the ruling by stating, “Over the past four years, the Biden Administration has worked to expand the government’s control over every feature of the Internet ecosystem. You can see it in the Biden Administration’s efforts to pressure social media companies into censoring the free speech rights of everyday Americans. You can see it in the Biden Administration’s demand that the FCC adopt ‘digital equity’ rules for the Internet—sweeping regulations that give the Commission nearly limitless powers over the Internet. And you can see it in the Biden Administration’s decision to impose so-called ‘net neutrality’ rules by applying Title II or utility-style regulations to the Internet.”

“I am pleased that the appellate court invalidated President Biden’s Internet power grab by striking down these unlawful Title II regulations. But the work to unwind the Biden Administration’s regulatory overreach will continue. I welcome the chance to advance a policy agenda that will deliver great results for the American people,” Carr added.

References:

https://eu-assets.contentstack.com/v3/assets/blt23eb5bbc4124baa6/bltb1088e1c699455aa/6776d788595b363497e9faba/US_court_of_appeals_-_6th_circuit_-_net_neutrality.pdf

https://www.fcc.gov/document/chairwoman-rosenworcel-sixth-circuit-court-net-neutrality-decision

https://docs.fcc.gov/public/attachments/DOC-408580A1.pdf

https://www.lightreading.com/regulatory-politics/sixth-circuit-shoots-down-fcc-s-net-neutrality-rules

https://www.latimes.com/entertainment-arts/business/story/2025-01-02/fccs-net-neutrality-rules-struck-down-blow-to-president-biden

FCC restores net neutrality order, but court challenges loom large

Analysis: FCC attempt to restore Net Neutrality & U.S. standards for broadband reliability, security, and consumer protection

FCC Draft Net Neutrality Order reclassifies broadband access; leaves 5G network slicing unresolved

FT: Nvidia invested $1bn in AI start-ups in 2024

Nvidia invested $1bn in artificial intelligence companies in 2024, emerging as a crucial backer of start-ups that use the company’s graphics processing units (GPUs). The king of AI semiconductors, which surpassed a $3tn market capitalization in June due to huge demand for its high-performing GPUs, has invested significantly in some of its own customers.

According to corporate filings and Dealroom research, Nvidia spent a total of $1bn across 50 start-up funding rounds and several corporate deals in 2024, up from 39 start-up rounds and $872mn in spending in 2023. The vast majority of deals were with “core AI” companies with high computing infrastructure demands, which in some cases were also buyers of its own chips. Tech companies have spent tens of billions of dollars on Nvidia’s chips over the past two years, since the debut of ChatGPT kick-started an unprecedented surge of investment in AI. Nvidia’s uptick in deals comes after it amassed a $9bn war chest of cash as its GPUs became one of the world’s hottest commodities.

The company’s shares rose more than 170% in 2024, as it and other tech giants helped power the S&P 500 index to its best two-year run this century. Nvidia’s $1bn worth of investments in “non-affiliated entities” in the first nine months of last year includes deals made by both its venture and corporate investment arms.

According to company filings, that sum was 15% more than in 2023 and more than 10 times as much as it invested in 2022. Some of Nvidia’s largest customers, such as Microsoft, Amazon and Google, are actively working to reduce their reliance on its GPUs by developing their own custom chips. Such a development could make smaller AI companies a more important generator of revenues for Nvidia in the future.

“Right now Nvidia wants there to be more competition and it makes sense for them to have these new players in the mix,” said a fund manager with a stake in a number of companies Nvidia has invested in.

In 2024, Nvidia struck more deals than Microsoft and Amazon, although Google remains far more active, according to Dealroom. Such prolific dealmaking has raised concerns about Nvidia’s grip over the AI industry, at a time when it is facing heightened antitrust scrutiny in the US, Europe and China. Bill Kovacic, former chair of the US Federal Trade Commission, said competition watchdogs were “keen” to investigate a “dominant enterprise making these big investments” to see if buying company stakes was aimed at “achieving exclusivity”, although he said investments in a customer base could prove beneficial. Nvidia strongly rejects the idea that it connects funding with any requirement to use its technology.

The company said it was “working to grow our ecosystem, support great companies and enhance our platform for everyone. We compete and win on merit, independent of any investments we make.” It added: “Every company should be free to make independent technological choices that best suit their needs and strategies.”

The Santa Clara-based company’s most recent start-up deal was a strategic investment in Elon Musk’s xAI. Other significant 2024 investments included participation in funding rounds for OpenAI, Cohere, Mistral and Perplexity, some of the most prominent AI model providers.

Nvidia also has a start-up incubator, Inception, which separately has helped the early evolution of thousands of fledgling companies. The Inception program offers start-ups “preferred pricing” on hardware, as well as cloud credits from Nvidia’s partners.

There has also been an uptick in Nvidia’s acquisitions, including a takeover of Run:ai, an Israeli AI workload management platform. The deal closed this week after coming under scrutiny from the EU’s antitrust regulator, which ultimately cleared the transaction. The US Department of Justice was also looking at the deal, according to Politico. Nvidia also bought AI software groups Nebulon, OctoAI, Brev.dev, Shoreline.io and Deci. Collectively, it made more acquisitions in 2024 than in the previous four years combined, according to Dealroom.

The company is investing widely, pouring millions of dollars into AI groups involved in medical technology, search engines, gaming, drones, chips, traffic management, logistics, data storage and generation, natural language processing and humanoid robots. Its portfolio includes a number of start-ups whose valuations have soared to billions of dollars. CoreWeave, an AI cloud computing service provider and significant purchaser of Nvidia chips, is preparing to float early this year at a valuation as high as $35bn — increasing from about $7bn a year ago.

Nvidia invested $100mn in CoreWeave in early 2023, and participated in a $1bn equity fundraising round by the company in May. Another start-up, Applied Digital, was facing a plunging share price in 2024, with revenue misses and considerable debt obligations, before a group of investors led by Nvidia provided $160mn of equity capital in September, prompting a 65% surge in its share price.

“Nvidia is using their massive market cap and huge cash flow to keep purchasers alive,” said Nate Koppikar, a short seller at Orso Partners. “If Applied Digital had died, that’s [a large volume] of sales that would have died with it.”

Neocloud groups such as CoreWeave, Crusoe and Lambda Labs have acquired tens of thousands of Nvidia’s high-performance GPUs, which are crucial for developing generative AI models. Those Nvidia AI chips are now also being used as collateral for huge loans. The frenzied dealmaking has shone a light on a rampant GPU economy in Silicon Valley that is increasingly supported by deep-pocketed financiers in New York. However, its rapid growth has raised concerns about the potential for riskier lending, circular financing and Nvidia’s chokehold on the AI market.

References:

https://www.ft.com/content/f8acce90-9c4d-4433-b189-e79cad29f74e

https://www.ft.com/content/41bfacb8-4d1e-4f25-bc60-75bf557f1f21

AI cloud start-up Vultr valued at $3.5B; Hyperscalers gorge on Nvidia GPUs while AI semiconductor market booms

The case for and against AI-RAN technology using Nvidia or AMD GPUs

Nvidia is proposing a new approach to telco networks dubbed “AI radio access network (AI-RAN).”  The GPU king says: “Traditional CPU or ASIC-based RAN systems are designed only for RAN use and cannot process AI traffic today. AI-RAN enables a common GPU-based infrastructure that can run both wireless and AI workloads concurrently, turning networks from single-purpose to multi-purpose infrastructures and turning sites from cost-centers to revenue sources. With a strategic investment in the right kind of technology, telcos can leap forward to become the AI grid that facilitates the creation, distribution, and consumption of AI across industries, consumers, and enterprises. This moment in time presents a massive opportunity for telcos to build a fabric for AI training (creation) and AI inferencing (distribution) by repurposing their central and distributed infrastructures.”

One of the first principles of AI-RAN technology is the ability to run RAN and AI workloads concurrently without compromising carrier-grade performance. This multi-tenancy can be either in time or space: dividing the resources based on time of day or based on percentage of compute. This also implies the need for an orchestrator that can provision, de-provision, or shift workloads seamlessly based on available capacity.
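
As a thought experiment, the sketch below shows how such an orchestrator might admit workloads: RAN jobs are always provisioned first, preempting AI jobs if necessary, while AI jobs only fill spare GPU capacity. This is a minimal illustration, not Nvidia’s or SoftBank’s actual orchestrator; all class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str          # "ran" or "ai"
    gpu_share: float   # fraction of total GPU compute requested (0.0-1.0)

class AiRanOrchestrator:
    """Toy admission controller: RAN workloads are provisioned first and may
    preempt AI jobs; AI jobs only ever fill spare GPU capacity."""

    def __init__(self, total_capacity: float = 1.0):
        self.total_capacity = total_capacity
        self.admitted = []  # list of admitted Workload objects

    def used(self) -> float:
        return sum(w.gpu_share for w in self.admitted)

    def submit(self, w: Workload) -> bool:
        if w.kind == "ran":
            # Carrier-grade traffic must never be starved: de-provision
            # AI jobs until the RAN workload fits (or nothing is left).
            while self.used() + w.gpu_share > self.total_capacity:
                ai_jobs = [x for x in self.admitted if x.kind == "ai"]
                if not ai_jobs:
                    return False  # over capacity even with no AI jobs
                self.admitted.remove(ai_jobs[0])
        elif self.used() + w.gpu_share > self.total_capacity:
            return False  # AI jobs are best-effort tenants
        self.admitted.append(w)
        return True

# Busy-hour example: RAN takes 70% of the GPU, inferencing fills the rest.
orch = AiRanOrchestrator()
print(orch.submit(Workload("5g-layer1", "ran", 0.7)))      # True
print(orch.submit(Workload("llm-inference", "ai", 0.3)))   # True
print(orch.submit(Workload("model-training", "ai", 0.2)))  # False: no spare capacity
```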

Image Credit:  Pitinan Piyavatin/Alamy Stock Photo

ARC-1, an appliance Nvidia showed off earlier this year, comes with a Grace Blackwell “superchip” that would replace either a traditional vendor’s application-specific integrated circuit (ASIC) or an Intel processor. Ericsson and Nokia are exploring the possibilities with Nvidia.  Developing RAN software for use with Nvidia’s chips means acquiring competency in CUDA (compute unified device architecture), Nvidia’s parallel computing platform and programming model. “They do have to reprofile into CUDA,” said Soma Velayutham, the general manager of Nvidia’s AI and telecom business, during a recent interview with Light Reading. “That is an effort.”
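
To illustrate what “reprofiling into CUDA” can involve, here is a minimal sketch of moving a Layer 1-style routine, a simplified least-squares channel estimate, from CPU NumPy code onto a CUDA GPU. It uses the open-source CuPy library purely as an illustrative stand-in; a production stack would use NVIDIA’s Aerial libraries instead, and the simulation below is entirely synthetic.

```python
import numpy as np

try:
    import cupy as xp   # CUDA-backed, near drop-in replacement for NumPy
except ImportError:
    xp = np             # fall back to CPU so the sketch still runs anywhere

def ls_channel_estimate(rx_pilots, tx_pilots):
    """Least-squares channel estimate H = Y / X over known pilot symbols.
    With CuPy, this elementwise divide executes as a CUDA kernel on the GPU."""
    return rx_pilots / tx_pilots

# Simulate QPSK pilot symbols sent through a random fading channel plus noise.
n = 1024
tx = xp.exp(1j * (np.pi / 4) * (2 * xp.random.randint(0, 4, n) + 1))
h = (xp.random.randn(n) + 1j * xp.random.randn(n)) / np.sqrt(2.0)
rx = h * tx + 0.01 * (xp.random.randn(n) + 1j * xp.random.randn(n))

h_hat = ls_channel_estimate(rx, tx)
print(float(xp.abs(h_hat - h).mean()))  # mean estimation error (small)
```

The point of the sketch is that the numerical algorithm barely changes; the effort Velayutham describes lies in porting, profiling and hardening such kernels so they meet the RAN’s hard real-time deadlines on GPU hardware.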

Proof of Concept:

SoftBank has turned the AI-RAN vision into reality, with its successful outdoor field trial in Fujisawa City, Kanagawa, Japan, where NVIDIA-accelerated hardware and NVIDIA Aerial software served as the technical foundation.  That achievement marks multiple steps forward for AI-RAN commercialization and provides real proof points addressing industry requirements on technology feasibility, performance, and monetization:

  • World’s first outdoor 5G AI-RAN field trial running on an NVIDIA-accelerated computing platform. This is an end-to-end solution based on full-stack, virtual 5G RAN software integrated with a 5G core.
  • Carrier-grade virtual RAN performance achieved.
  • AI and RAN multi-tenancy and orchestration achieved.
  • Energy efficiency and economic benefits validated compared to existing benchmarks.
  • A new solution to unlock AI marketplace integrated on an AI-RAN infrastructure.
  • Real-world AI applications showcased, running on an AI-RAN network.

Above all, SoftBank aims to commercially release its own AI-RAN product for worldwide deployment in 2026. To help other mobile network operators get started on their AI-RAN journey now, SoftBank also plans to offer a reference kit comprising the hardware and software elements required to trial AI-RAN quickly and easily.

SoftBank developed its AI-RAN solution by integrating hardware and software components from NVIDIA and ecosystem partners and hardening them to meet carrier-grade requirements. The result is a full 5G vRAN stack that is 100% software-defined, running on NVIDIA GH200 (CPU+GPU), NVIDIA BlueField-3 (NIC/DPU), and Spectrum-X for fronthaul and backhaul networking. It integrates with 20 radio units and a 5G core network and connects 100 mobile UEs.

The core software stack includes the following components:

  • SoftBank-developed and optimized 5G RAN Layer 1 functions such as channel mapping, channel estimation, modulation, and forward error correction, using NVIDIA Aerial CUDA-Accelerated-RAN libraries
  • Fujitsu software for Layer 2 functions
  • Red Hat’s OpenShift Container Platform (OCP) as the container virtualization layer, enabling different types of applications to run on the same underlying GPU computing infrastructure
  • A SoftBank-developed E2E AI and RAN orchestrator, to enable seamless provisioning of RAN and AI workloads based on demand and available capacity

AI marketplace solution integrated with SoftBank AI-RAN.  Image Credit: Nvidia

The underlying hardware is the NVIDIA GH200 Grace Hopper Superchip, which can be used in various configurations from distributed to centralized RAN scenarios. This implementation uses multiple GH200 servers in a single rack, serving AI and RAN workloads concurrently, for an aggregated-RAN scenario. This is comparable to deploying multiple traditional RAN base stations.

In this pilot, each GH200 server was able to process 20 5G cells using 100 MHz of bandwidth when used in RAN-only mode. For each cell, 1.3 Gbps of peak downlink throughput was achieved in ideal conditions, and 816 Mbps was demonstrated with carrier-grade availability in the outdoor deployment.
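
As a back-of-the-envelope check, those per-cell figures imply the following aggregate downlink throughput per GH200 server, assuming all 20 cells run concurrently at the quoted rates:

```python
cells = 20
peak_per_cell_gbps = 1.3      # ideal conditions, per 100 MHz cell
field_per_cell_gbps = 0.816   # outdoor, carrier-grade availability

print(f"Peak aggregate:  {cells * peak_per_cell_gbps:.1f} Gbps")   # 26.0 Gbps
print(f"Field aggregate: {cells * field_per_cell_gbps:.2f} Gbps")  # 16.32 Gbps
```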

……………………………………………………………………………………………………………………………………..

Could AMD GPUs be an alternative to Nvidia for AI-RAN?

NScale, a UK business with a GPU-as-a-service offering, certainly values AMD as an AI alternative to Nvidia. “AMD’s approach is quite interesting,” said David Power, NScale’s chief technology officer. “They have a very open software ecosystem. They integrate very well with common frameworks.” So far, though, AMD has said nothing publicly about any AI-RAN strategy.

The other telco concern is about those promised revenues. Nvidia insists it was conservative in estimating that a telco could realize $5 in inferencing revenues for every $1 invested in AI-RAN, but the numbers have met with a fair degree of skepticism in the wider market. Nvidia says the advantage of doing AI inferencing at the edge is that latency, the time a signal takes to travel across the network, would be much lower than with inferencing in the cloud. But the same case was previously made for hosting other applications at the edge, and they have not taken off.

Even if AI changes that, it is unclear whether telcos would benefit. Sales generated by the applications available on the mobile Internet have gone largely to hyperscalers and other software developers, leaving telcos with a dwindling stream of connectivity revenues. Expect AI-RAN to be a big topic for 2025 as operators carefully weigh their options.  Many telcos remain unconvinced there is a valid economic case for AI-RAN, especially since GPUs consume a lot of power (they are perceived as “energy hogs”).
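
Nvidia’s $5-for-$1 inferencing claim is central to that economic debate. The toy calculation below, using entirely hypothetical inputs (none of these figures come from Nvidia, SoftBank, or the articles cited here), shows how sensitive the revenue multiple is to GPU utilization and pricing, which is precisely where the skepticism lies:

```python
# Toy payback check for the "$5 per $1 invested" AI-RAN inferencing claim.
# Every input below is a hypothetical assumption for illustration only.
gpus = 1000
capex = 30_000_000          # $ invested (assumes ~$30k per GPU, all-in)
lifetime_years = 5
utilization = 0.30          # share of GPU-hours actually sold for inferencing
price_per_gpu_hour = 2.50   # $ assumed market rate

hours_per_year = 8760
revenue = gpus * hours_per_year * utilization * price_per_gpu_hour * lifetime_years
print(f"Inferencing revenue per $1 of capex: ${revenue / capex:.2f}")
# ~$1.10 with these assumptions; reaching $5 requires much higher
# utilization and/or pricing, which is exactly what skeptics question.
```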

References:

AI-RAN Goes Live and Unlocks a New AI Opportunity for Telcos

https://www.lightreading.com/ai-machine-learning/2025-preview-ai-ran-would-be-a-paradigm-shift

Nvidia bid to reshape 5G needs Ericsson and Nokia buy-in

Softbank goes radio gaga about Nvidia in nervy days for Ericsson

T-Mobile emerging as Nvidia’s big AI cheerleader

AI cloud start-up Vultr valued at $3.5B; Hyperscalers gorge on Nvidia GPUs while AI semiconductor market booms

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Nvidia enters Data Center Ethernet market with its Spectrum-X networking platform

FT: New benchmarks for Gen AI models; Neocloud groups leverage Nvidia chips to borrow >$11B

Will AI clusters be interconnected via Infiniband or Ethernet: NVIDIA doesn’t care, but Broadcom sure does!
