GEO satellite internet from HughesNet and Viasat can’t compete with LEO Starlink in speed or latency
GEO satellite internet providers deliver reliable connectivity across large land masses, but their distance from Earth makes it hard to offer low-latency, high-speed satellite Internet service. HughesNet and Viasat operate stationary satellites 22,000 miles above Earth, whereas LEO satellite operators such as Starlink have satellites orbiting a mere 340 miles above Earth. GEO satellites are also less ubiquitous than LEO satellites – GEO operators have far fewer satellites in their constellations.
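To see why that altitude difference matters so much, here is a minimal back-of-the-envelope sketch of the speed-of-light floor on round-trip (bent-pipe) latency, using the altitudes cited above. It ignores processing, queuing, and terrestrial backhaul delays, which is why the real-world medians reported below run higher:

```python
# Speed-of-light lower bound on satellite round-trip latency.
# Real-world figures are higher due to processing, queuing, and backhaul.
C_MILES_PER_SEC = 186_282  # speed of light in vacuum

def round_trip_ms(altitude_miles: float) -> float:
    # User -> satellite -> gateway, then back: four traversals of the
    # altitude (assumes the satellite is directly overhead, the best case).
    return 4 * altitude_miles / C_MILES_PER_SEC * 1000

print(f"GEO (22,000 mi): ~{round_trip_ms(22_000):.0f} ms")  # ~472 ms
print(f"LEO (340 mi):    ~{round_trip_ms(340):.0f} ms")     # ~7 ms
```

Physics alone puts GEO hundreds of milliseconds behind LEO before any network overhead is counted.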
According to Ookla, GEO satellite providers HughesNet and Viasat can’t compete with Starlink when it comes to latency and download speeds. HughesNet and Viasat are best-known for providing consistent coverage across large land masses. But because they operate in geostationary orbit rather than low-Earth orbit (LEO) and because they have fewer satellites in their constellations, they struggle with speed limitations and latency, making it difficult for them to compete with LEO providers such as SpaceX’s Starlink.
HughesNet and Viasat each have three satellites in their fleets delivering fixed broadband service. Viasat plans to launch its Viasat-3 F2 satellite later this year and the Viasat-3 F3 in 2026. In addition, Viasat owns a fleet of satellites from its May 2023 acquisition of Inmarsat, which are used primarily in maritime and mission-critical applications.
The challenges facing these GEO satellite providers have become more pronounced over the past few years, particularly as Starlink has moved aggressively into the U.S. market with promotions such as its recent offer to provide free equipment to new customers in states where it has excess capacity.
“HughesNet and Viasat are losing subscribers at a rapid rate thanks to competition from LEO satellite provider Starlink with its lower latency and faster download speeds,” according to Sue Marek, editorial director and analyst with Ookla.
Ookla’s Key Takeaways:
- HughesNet saw its median multi-server latency improve from 1019 milliseconds (ms) in Q1 2022 to 683 ms in Q1 2025. Viasat’s median latency increased slightly over that period, from 676 ms in Q1 2022 to 684 ms in Q1 2025. But neither is remotely close to matching Starlink, which had a median latency of just 45 ms in Q1 2025.
- HughesNet more than doubled its median download speeds from 20.87 Mbps in Q1 2022 to 47.79 Mbps in Q1 2025 while Viasat increased its median download speeds from 25.18 Mbps to 49.12 Mbps during that same time period.
- Upload speeds are another area where GEO satellite constellations struggle to compete with Starlink and other low-Earth orbit systems. HughesNet has increased its median upload speeds from 2.87 Mbps in Q1 2022 to 4.44 Mbps in Q1 2025 but that is still far lower than Starlink, which has a median upload speed of 14.84 Mbps in Q1 2025. Viasat saw its median upload speeds decline over that same time period from 3.06 Mbps in Q1 2022 to 1.08 Mbps in Q1 2025.
Meanwhile, Starlink has nearly 8,000 satellites in low-Earth orbit as part of its mega-constellation, according to Space.com. Starlink’s median download speed, according to data from Ookla’s Speedtest users, almost doubled from 53.95 Mbps in Q3 2022 to 104.71 Mbps in Q1 2025. That latest median download speed is also more than twice that of either HughesNet or Viasat.
In addition to network performance, Starlink has made strides in the U.S. market with promotions and distribution of free equipment to “new customers in states where it has excess capacity,” said Marek. In May, Starlink offered its Standard Kit, priced at $349, for free to consumers in select areas who agree to a one-year service commitment. But consumers in “high demand” areas would still need to pay a one-time, upfront “demand surcharge” of $100, the company said.
Starlink is making headway teaming up with terrestrial service providers on direct-to-device (D2D) services, which connect smartphones and mobile devices directly to satellite networks in areas of spotty wireless service. Canada’s Rogers Communications launched a beta D2D service this week that initially supports text messaging via Starlink LEO satellites. The Canadian operator is also working with Lynk Global in a multi-vendor approach to D2D. Starlink announced this week that it has over 500,000 customers across Canada.
T-Mobile’s D2D service, T-Satellite with Starlink, will be commercially available later this month and will include SMS texting as well as MMS picture messaging and short audio clips. In October, T-Satellite will add a data service to its Starlink-based satellite offering.
T-Mobile also announced it would accelerate the launch of T-Satellite in areas impacted by the recent flooding in central Texas. During a number of recent natural disasters, Starlink has offered free services and/or satellite equipment kits to affected communities.
Starlink is providing Mini Kits, which support 50 gigabyte and unlimited roaming data subscriptions, for search and rescue efforts in central Texas, in addition to one month of free service to customers in the areas impacted by recent flooding. In January, the satellite operator offered about a month of free service to new customers and a one-month service credit to existing customers in areas affected by the Los Angeles wildfires.
Starlink could face increasing competition from Project Kuiper, Amazon’s LEO satellite broadband service, as it ramps up deployment of a planned LEO constellation of over 3,000 satellites. However, Project Kuiper is far behind the pace needed to meet the FCC’s deadline of having more than 1,600 LEO satellites in orbit by the summer of 2026. Since its initial launch in April, Amazon has put only 78 satellites in orbit, according to CNBC. Starlink, by contrast, has launched over 2,300 satellites in the past year alone.
References:
https://www.ookla.com/articles/hughesnet-viasat-performance-2025
https://www.space.com/space-exploration/launches-spacecraft/spacex-starlink-15-2-b1093-vsfs-ocisly
KDDI unveils AU Starlink direct-to-cell satellite service
Telstra selects SpaceX’s Starlink to bring Satellite-to-Mobile text messaging to its customers in Australia
One NZ launches commercial Satellite TXT service using Starlink LEO satellites
SpaceX launches first set of Starlink satellites with direct-to-cell capabilities
FCC: More competition for Starlink; freeing up spectrum for satellite broadband service
U.S. BEAD overhaul to benefit Starlink/SpaceX at the expense of fiber broadband providers
Starlink’s Direct to Cell service for existing LTE phones “wherever you can see the sky”
ABI Research and CCS Insight: Strong growth for satellite to mobile device connectivity (messaging and broadband internet access)
AST SpaceMobile completes 1st ever LEO satellite voice call using AT&T spectrum and unmodified Samsung and Apple smartphones
Ookla: Uneven 5G deployment in Europe, 5G SA remains sluggish; Ofcom: 28% of UK connections on 5G with only 2% 5G SA
According to Ookla, Europe is a “two-speed” 5G competitiveness landscape, with some countries surging ahead and others falling well behind. In Q2 2025, Nordic and southern European countries maintained a substantial lead in 5G availability, helped by recent 700 MHz band deployments in countries such as Sweden and Italy. By contrast, 5G availability in central and western European laggards such as Belgium, the UK and Hungary remains less than half that of the 5G pacesetters, says the study. On average, mobile subscribers in the EU spent 44.5% of their time connected to 5G networks in Q2 2025, up from 32.8% a year earlier.
The deployment and adoption of 5G SA in Europe remain sluggish, increasing slowly from a very low base and further widening the region’s gap with North America and Asia. Spain stands out as a clear leader in 5G SA deployment, reaching an 8% Speedtest® sample share compared with the EU average of just 1.3% as of Q2 2025. This progress has been driven by Spain’s proactive use of EU recovery funds to subsidize 5G SA rollouts in underserved areas, with a particular focus on bridging the rural-urban digital divide. However, the U.S. and China are still far ahead, with 5G SA sample shares above 20% and 80% respectively, reflecting a much greater pace of coverage and adoption in those markets.
Northern Europe Maintains 5G Availability Lead – Speedtest Intelligence Q2 2025:
Fragmented 5G Availability across Europe is driven by a complex mix of national policies on spectrum assignment and broader economic factors, rather than by simple geographic or demographic differences. 5G Availability is more strongly correlated with policy-driven factors such as spectrum allocation timelines and costs, coverage obligations, subsidy mechanisms, and regulations for infrastructure sharing and permitting, than with structural factors like urbanization rates or the number of operators. This indicates that 5G competitiveness is shaped less by technology gaps or inherent market imbalances and more by effective policy execution.
Northern Europe Maintains 5G Availability Lead; Other Countries Lag:
Fragmentation remains a persistent theme, shaping stark 5G deployment asymmetries that cannot be explained by geography or demographics alone. Northern and Southern European countries such as Denmark (83.9%), Sweden (77.8%), and Greece (76.4%) are disproportionately represented among the countries with the highest 5G Availability in Q2 2025, with coverage rates up to twice as high as those in Western and Eastern countries like the United Kingdom (45.2%), Hungary (29.9%), and Belgium (11.9%).
Low-band deployment and DSS use continue to lift 5G availability in lagging countries:
Recent advances in 5G Availability have been driven by low-band deployments and the use of dynamic spectrum sharing (DSS), raising the average proportion of time spent on 5G networks in the EU from 32.8% in Q2 2024 to 44.5% in Q2 2025. The pace of coverage growth, and the corresponding increase in 5G usage, has primarily reflected each country’s starting point. Lagging countries like Latvia, Poland, and Slovenia have seen double-digit gains in 5G Availability from a low base. By contrast, leading countries such as Switzerland and Denmark, where 5G coverage is now nearly ubiquitous, have shifted their focus to targeted capacity upgrades through site densification and mid-band expansion.
About Ookla:
Ookla, a global leader in connectivity intelligence, brings together the trusted expertise of Speedtest®, Downdetector®, Ekahau®, and RootMetrics® to deliver unmatched network and connectivity insights. By combining multi-source data with industry-leading expertise, we transform network performance metrics into strategic, actionable insights. Our solutions empower service providers, enterprises, and governments with the critical data and insights needed to optimize networks, enhance digital experiences, and help close the digital divide. At the same time, we amplify the real-world experiences of individuals and businesses that rely on connectivity to work, learn, and communicate. From measuring and analyzing connectivity to driving industry innovation, Ookla helps the world stay connected.
Ookla is a division of Ziff Davis, a vertically focused digital media and internet company whose portfolio includes leading brands in technology, entertainment, shopping, health, cybersecurity, and martech.
……………………………………………………………………………………………………………………………………………………………………………………
A Mobile Matters report from UK communications regulator Ofcom discusses 5G’s share of network connections in the UK. Ofcom’s analysis – based on crowdsourced data collected by Opensignal and covering the period October 2024 to March 2025 – showed that 28% of connections were on 5G, with 71% still on 4G, 0.7% on 3G and a holdout 0.2% on 2G. In terms of mobile network operators, BT-owned EE had the highest proportion of network connections on 5G, at 32%, while Vodafone had the lowest, at 24%. O2, now the mobile arm of Virgin Media O2, had the lowest share of 4G connections (68%) and the highest proportion on 3G (3%).
5G standalone vs 5G non-standalone performance:
• 5G standalone (SA) accounted for 2% of all 5G connection attempts in the six months to March 2025. UK MNOs have started to offer 5G SA but its use is currently low.
• Standalone 5G’s average response time (latency) was about 15% lower (better) than 5G NSA’s. However, Ofcom’s analysis also indicated that 5G SA had a lower average connection success rate (95.9%) than 5G NSA (97.6%), although this was slightly higher than 4G’s.
• 5G SA provided significantly higher download speeds than 5G NSA. Seventy per cent of 5G SA download speed measurements were at 100 Mbit/s or higher, compared to 46% for 5G NSA, and 2 MB, 5 MB and 10 MB file download times were, on average, about 45% faster on 5G SA than over 5G NSA.
• The picture was more mixed for uploads. While 5G NSA had a higher proportion of low-speed connections (18% of 5G NSA upload speeds provided less than 2 Mbit/s, compared to 10% on 5G SA), it also had a slightly higher share of higher-speed connections (30% of 5G NSA uploads were 20 Mbit/s or higher vs. 28% on 5G SA).
References:
https://www.ookla.com/articles/europe-5g-q2-2025
Ookla: Europe severely lagging in 5G SA deployments and performance
Téral Research: 5G SA core network deployments accelerate after a very slow start
Softbank developing autonomous AI agents; an AI model that can predict and capture human cognition
Speaking at a customer event Wednesday in Tokyo, SoftBank Chairman and CEO Masayoshi Son said his company is developing “the world’s first” artificial intelligence (AI) agent system that can autonomously perform complex tasks, so that human programmers will no longer be needed. “The AI agents will think for themselves and improve on their own…so the era of humans doing the programming is coming to an end,” he said.
SoftBank estimated it needed to create around 1,000 agents per employee – a large number because “employees have complex thought processes. The agents will be active 24 hours a day, 365 days a year and will interact with each other.” Son estimates the agents will be at least four times as productive and four times as efficient as humans, and would cost around 40 Japanese yen (US$0.27) per agent per month. At that rate, the billion-agent plan would cost SoftBank about $3.2 billion annually, as the back-of-the-envelope check below shows.
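A minimal sketch verifying Son’s arithmetic; the exchange rate is an assumption implied by 40 yen ≈ US$0.27:

```python
# Back-of-the-envelope check of SoftBank's billion-agent cost estimate.
agents = 1_000_000_000       # Son's stated 1 billion agent target
yen_per_agent_month = 40     # quoted cost per agent per month
jpy_per_usd = 148            # assumed rate implied by 40 JPY ~= US$0.27

annual_cost_usd = agents * yen_per_agent_month * 12 / jpy_per_usd
print(f"~${annual_cost_usd / 1e9:.1f}B per year")  # ~$3.2B per year
```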
“For 40 yen per agent per month, the agent will independently memorize, negotiate and conduct learning. So with these actions being taken, it’s incredibly cheap,” Son said. “I’m excited to see how the AI agents will interact with one another and advance given tasks,” he added, noting that the AI agents, to achieve their goals, will “self-evolve and self-replicate” to execute subtasks.
Unlike generative AI, which needs human commands to carry out tasks, an AI agent performs tasks on its own by designing workflows with data available to it. It is expected to enhance productivity at companies by helping their decision-making and problem-solving.
While the CEO’s intent is clear, details of just how and when SoftBank will build this giant AI workforce are scarce. Son admitted the 1 billion target would be “challenging” and that the company had not yet developed the necessary software to support the huge numbers of agents. He said his team needed to build a toolkit for creating more agents and an operating system to orchestrate and coordinate them. Son, one of the world’s most ardent AI evangelists, is betting the company’s future on the technology.
According to Son, the capabilities of AI agents have already surpassed those of PhD holders in advanced fields including physics, mathematics and chemistry. “There are no questions it can’t comprehend. We’re almost at a stage where there are hardly any limitations,” he enthused. Son acknowledged the problem of AI hallucinations, but dismissed it as “a temporary and minor issue.” He said the development of huge AI data centers, such as the $500 billion Stargate project, would enable exponential growth in computing power and AI capabilities.
Softbank Group Corp. Chairman and CEO Masayoshi Son (L) and OpenAI CEO Sam Altman at an event on July 16, 2025. (Kyodo)
The project comes as SoftBank Group and OpenAI, the developer of chatbot ChatGPT, said in February they had agreed to establish a joint venture to promote AI services for corporations. Wednesday’s event included a cameo appearance from Sam Altman, CEO of SoftBank partner OpenAI, who said he was confident about the future of AI because the scaling law would exist “for a long time” and costs were continually going down. “I think the first era of AI, the…ChatGPT initial era was about an AI that you could ask anything and it could tell you all these things,” Altman said.
“Now as these (AI) agents roll out, AI can do things for you…You can ask the computer to do something in natural language, a sort of vaguely defined complex task, and it can understand you and execute it for you,” Altman said. “The productivity and potential that it unlocks for the world is quite huge.”
……………………………………………………………………………………………………………………………………………..
According to the NY Times, an international team of scientists believes that AI systems can help them understand how the human mind works. They have created a ChatGPT-like system that can play the part of a human in a psychological experiment and behave as if it has a human mind. Details about the system, known as Centaur, were published on Wednesday in the journal Nature. Dr. Marcel Binz, a cognitive scientist at Helmholtz Munich, a German research center, is the lead author of the new AI study.
References:
https://english.kyodonews.net/articles/-/57396#google_vignette
https://www.lightreading.com/ai-machine-learning/softbank-aims-for-1-billion-ai-agents-this-year
https://www.nytimes.com/2025/07/02/science/ai-psychology-mind.html
https://www.nature.com/articles/s41586-025-09215-4
AI spending is surging; companies accelerate AI adoption, but job cuts loom large
Big Tech and VCs invest hundreds of billions in AI while salaries of AI experts reach the stratosphere
Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth
Agentic AI and the Future of Communications for Autonomous Vehicle (V2X)
Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined
Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions
Deloitte and TM Forum : How AI could revitalize the ailing telecom industry?
McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector
ZTE’s AI infrastructure and AI-powered terminals revealed at MWC Shanghai
Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation
Big tech firms target data infrastructure software companies to increase AI competitiveness
SK Group and AWS to build Korea’s largest AI data center in Ulsan
OpenAI partners with G42 to build giant data center for Stargate UAE project
Nile launches a Generative AI engine (NXI) to proactively detect and resolve enterprise network issues
AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics
Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth
Ericsson’s second-quarter results were not impressive, with YoY organic sales growth of +2% for the company and +3% for its networks division (its largest). Its $14 billion AT&T OpenRAN deal, announced in December 2023, helped lift the Swedish vendor’s share of the global RAN market by 1.4 percentage points in 2024, to 25.7%, according to new research from analyst company Omdia (owned by Informa). As a result of the AT&T contract, the U.S. accounted for a stunning 44% of Ericsson’s second-quarter sales, with North American organic revenues up 10% YoY to SEK19.8bn ($2.05bn). Sales dropped in all other regions of the world.
Ericsson’s attention is now shifting to a few core markets that CEO Börje Ekholm has identified as strategic priorities, among them the U.S., India, Japan and the UK. All, unsurprisingly, are already among Ericsson’s top five countries by sales, although excluding the U.S. their combined contribution came to just 15% of turnover in the recent second quarter. “We are already very strong in North America, but we can do more in India and Japan,” said Ekholm. “We see those as critically important for the long-term success.”
Opportunities: With telco investment in RAN equipment down 12.5% (or $5 billion) last year, the Swedish equipment vendor has had few other obvious growth opportunities. Ericsson’s Enterprise division, which is supposed to be the long-term provider of sales growth, is still very small – its second-quarter revenues stood at just SEK5.5bn ($570m), and even once currency effects are taken into account, its sales shrank by 6% YoY.
On Tuesday’s earnings call, Ekholm said that the RAN equipment sector, while stable currently, isn’t offering any prospects of exciting near-term growth. For longer-term growth the industry needs “new monetization opportunities,” which could come from the ongoing modest growth in 5G-enabled fixed wireless access (FWA) deployments, from 5G standalone (SA) deployments that enable mobile network operators to offer “differentiated solutions,” and from network APIs (that ultra-hyped market is not generating meaningful revenues for anyone yet).
Cost Cutting Continues: Ericsson also has continued to be aggressive about cost reduction, eliminating thousands of jobs since it completed its Vonage takeover. “Over the last year, we have reduced our total number of employees by about 6% or 6,000,” said Ekholm on his routine call with analysts about financial results. “We also see and expect big benefits from the use of AI and that is one reason why we expect restructuring costs to remain elevated during the year.”
Use of AI: Ericsson sees AI as an opportunity to enable network automation and new industry revenue opportunities. The company is now using AI as an aid in network design – a move that could have negative ramifications for staff involved in research and development. Ericsson is already using AI for coding and “other parts of internal operations to drive efficiency… We see some benefits now. And it’s going to impact how the network is operated – think of fully autonomous, intent-based networks that will require AI as a fundamental component. That’s one of the reasons why we invested in an AI factory,” noted the CEO, referencing the consortium-based investment in a Swedish AI Factory that was announced in late May. At the time, Ericsson noted that it planned to “leverage its data science expertise to develop and deploy state-of-the-art AI models – improving performance and efficiency and enhancing customer experience.”
Ericsson is also building AI capability into the products sold to customers. “I usually use the example of link adaptation,” said Per Narvinger, the head of Ericsson’s mobile networks business group, on a call with Light Reading, referring to what he says is probably one of the most optimized algorithms in telecom. “That’s how much you get out of the spectrum, and when we have rewritten link adaptation, and used AI functionality on an AI model, we see we can get a gain of 10%.”
Ericsson hopes that AI will boost consumer and business demand for 5G connectivity. New form factors such as smart glasses and AR headsets will need lower-latency connections with improved support for the uplink, it has repeatedly argued. But analysts are skeptical, and Ericsson thinks Europe is ill-equipped for more advanced 5G services.
“We’re still very early in AI, in [understanding] how applications are going to start running, but I think it’s going to be a key driver of our business going forward, both on traffic, on the way we operate networks, and the way we run Ericsson,” Ekholm said.
Europe Disappoints: In much of Europe, Ericsson and Nokia have been frustrated by some government and telco unwillingness to adopt the European Union’s “5G toolbox” recommendations and evict Chinese vendors. “I think what we have seen in terms of implementation is quite varied, to be honest,” said Narvinger. Rather than banning Huawei outright, Germany’s government has introduced legislation that allows operators to use most of its RAN products if they find a substitute for part of Huawei’s management system by 2029. Opponents have criticized that move, arguing it does not address the security threat posed by Huawei’s RAN software. Nevertheless, Ericsson clearly eyes an opportunity to serve European demand for military communications, an area where the use of Chinese vendors would be unthinkable.
“It is realistic to say that a large part of the increased defense spending in Europe will most likely be allocated to connectivity because that is a critical part of a modern defense force,” said Ekholm. “I think this is a very good opportunity for western vendors because it would be far-fetched to think they will go with high-risk vendors.” Ericsson is also targeting related demand for mission-critical services needed by first responders.
5G SA and Mobile Core Networks: Ekholm noted that 5G SA deployments are still few and far between – only a quarter of mobile operators have any kind of 5G SA deployment in place right now, with the most notable being in the US, India and China. “Two things need to happen” for greater 5G SA uptake, stated the CEO.
- “One is mid-band [spectrum] coverage… there’s still very low build out coverage in, for example, Europe, where it’s probably less than half the population covered… Europe is clearly behind on that“ compared with the U.S., China and India.
- “The second is that [network operators] need to upgrade their mobile core [platforms]... Those two things will have to happen to take full advantage of the capabilities of the [5G] network,” noted Ekholm, who said the arrival of new devices, such as AI glasses, that require ultra low latency connections and “very high uplink performance” is starting to drive interest. “We’re also seeing a lot of network slicing opportunities,” he added, to deliver dedicated network resources to, for example, police forces, sports and entertainment stadiums “to guarantee uplink streams… consumers are willing to pay for these things. So I’m rather encouraged by the service innovation that’s starting to happen on 5G SA and… that’s going to drive the need for more radio coverage [for] mid-band and for core [systems].”
Ericsson’s Summary – Looking Ahead:
- Continue to strengthen competitive position
- Strong customer engagement for differentiated connectivity
- New use cases to monetize network investments taking shape
- Expect RAN market to remain broadly stable
- Structurally improving the business through rigorous cost management
- Continue to invest in technology leadership
………………………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.telecomtv.com/content/5g/ericsson-ceo-waxes-lyrical-on-potential-of-5g-sa-ai-53441/
https://www.lightreading.com/5g/ericsson-targets-big-huawei-free-places-ai-and-nato-as-profits-soar
Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation
Agentic AI and the Future of Communications for Autonomous Vehicle (V2X)
by Prashant Vajpayee (bio below), edited by Alan J Weissberger
Abstract:
Autonomous vehicles increasingly depend on Vehicle-to-Everything (V2X) communications, but 5G networks face challenges such as latency, coverage gaps, high infrastructure costs, and security risks. To overcome these limitations, this article explores alternative protocols like DSRC, VANETs, ISAC, PLC, and Federated Learning, which offer decentralized, low-latency communication solutions.
Of critical importance for this approach is Agentic AI—a distributed intelligence model based on the Observe, Orient, Decide, and Act (OODA) loop—that enhances adaptability, collaboration, and security across the V2X stack. Together, these technologies lay the groundwork for a resilient, scalable, and secure next-generation Intelligent Transportation System (ITS).
Problems with 5G for V2X Communications:
There are several problems with using 5G for V2X communications, which is why the 5G NR (New Radio) V2X specification, developed by the 3rd Generation Partnership Project (3GPP) in Release 16, hasn’t been widely implemented. Here are a few of them:
- Variable latency: Although 5G promises sub-millisecond latency, real deployments often exhibit 10 to 50 milliseconds of delay, particularly when the V2X server is hosted in a cloud environment. Multi-hop routing, network slicing, and handover delays add further latency, making 5G unsuitable for ultra-reliable low-latency communication (URLLC) in critical scenarios [1, 2].
- Coverage Gaps & Handover Issues: 5G availability is a problem in rural and remote areas. Furthermore, for fast-moving vehicles, switching between 5G cells can cause communication delays and connectivity failures [3, 4].
- Infrastructure and Cost Constraints: Full 5G deployment requires dense small-cell infrastructure, which is a cost burden and logistically complex, especially in developing regions and along highways.
- Spectrum Congestion and Interference: In shared-spectrum scenarios, other services can interfere with the 5G network, degrading V2X reliability.
- Security and Trust Issues: The centralized nature of 5G architectures leaves them vulnerable to single points of failure, a serious cybersecurity risk for autonomous systems.
Alternative Communications Protocols as a Solution for V2X (when integrated with Agentic AI):
The following list of alternative protocols offers a potential remedy for the above 5G shortcomings when integrated with Agentic AI.
[Table: alternative V2X communication protocols – DSRC, VANETs, ISAC, PLC, and Federated Learning – and the 5G shortcomings each addresses.]
While these alternatives reduce dependency on centralized infrastructure and provide greater fault tolerance, they also introduce complexity. As autonomous vehicles (AVs) become increasingly prevalent, Vehicle-to-Everything (V2X) communication is emerging as the digital nervous system of intelligent transportation systems. Given the deployment and reliability challenges associated with 5G, the industry is shifting toward alternative networking solutions—where Agentic AI is being introduced as a cognitive layer that renders these ecosystems adaptive, secure, and resilient.
The following use cases show how Agentic AI can bring efficiency:
- Cognitive Autonomy: Each vehicle or roadside unit (RSU) operates an AI agent capable of observing, orienting, deciding, and acting (OODA) without continuous reliance on cloud supervision. This autonomy enables real-time decision-making for scenarios such as rerouting, merging, or hazard avoidance—even in disconnected environments [12].
- Multi-Agent Collaboration: AI agents negotiate and coordinate with one another using standardized protocols (e.g., MCP, A2A), enabling guidance on optimal vehicle spacing, intersection management, and dynamic traffic control—without the need for centralized orchestration [13].
- Embedded Security Intelligence: While multiple agents collaborate, dedicated security agents monitor system activities for anomalies, enforce access control policies, and quarantine threats at the edge. As Forbes notes, “Agentic AI demands agentic security,” emphasizing the importance of embedding trust and resilience into every decision node [14].
- Protocol-Agnostic Adaptability: Agentic AI can dynamically switch among various communication protocols—including DSRC, VANETs, ISAC, or PLC—based on real-time evaluations of signal quality, latency, and network congestion (see the sketch after this list). Agents equipped with cognitive capabilities enhance system robustness against 5G performance limitations or outages.
- Federated Learning and Self-Improvement: Vehicles independently train machine learning models locally and transmit only model updates—preserving data privacy, minimizing bandwidth usage, and improving processing efficiency.
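To make the protocol-agnostic adaptability point concrete, here is a minimal, hypothetical sketch of the kind of link-selection logic an OODA-style agent might run. The metric names, weights, and sample readings are illustrative assumptions, not part of any V2X standard:

```python
from dataclasses import dataclass

@dataclass
class LinkStatus:
    protocol: str          # e.g., "DSRC", "VANET", "ISAC", "PLC"
    latency_ms: float      # observed round-trip latency
    signal_quality: float  # normalized 0..1 (1 = excellent)
    congestion: float      # normalized 0..1 (1 = saturated)

def score(link: LinkStatus) -> float:
    # Illustrative weighting: favor signal quality, penalize latency and
    # congestion. A real agent would learn or tune these weights.
    return link.signal_quality - 0.01 * link.latency_ms - 0.5 * link.congestion

def select_protocol(links: list[LinkStatus]) -> str:
    # Decide/Act step: pick the best-scoring link now; the agent re-runs
    # this loop as fresh observations arrive (Observe/Orient).
    return max(links, key=score).protocol

links = [
    LinkStatus("DSRC",  latency_ms=8,  signal_quality=0.9, congestion=0.2),
    LinkStatus("VANET", latency_ms=15, signal_quality=0.7, congestion=0.4),
    LinkStatus("PLC",   latency_ms=30, signal_quality=0.8, congestion=0.1),
]
print(select_protocol(links))  # -> "DSRC" under these sample readings
```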
The figure below illustrates the proposed architectural framework for secure Agentic AI enablement within V2X communications, leveraging alternative communication protocols and the OODA (Observe–Orient–Decide–Act) cognitive model.
Conclusions:
With the integration of an intelligent Agentic AI layer into V2X systems, autonomous, adaptive, and efficient decision-making emerges from seamless collaboration of the distributed intelligent components.
Numerous examples highlight the potential of Agentic AI to deliver significant business value.
- TechCrunch reports that Amazon’s R&D division is actively developing an Agentic AI framework to automate warehouse operations through robotics [15]. A similar architecture can be extended to autonomous vehicles (AVs) to enhance both communication and cybersecurity capabilities.
- Forbes emphasizes that “Agentic AI demands agentic security,” underscoring the need for every action—whether executed by human or machine—to undergo rigorous review and validation from a security perspective [16]. Forbes notes, “Agentic AI represents the next evolution in AI—a major transition from traditional models that simply respond to human prompts.” By combining Agentic AI with alternative networking protocols, robust V2X ecosystems can be developed—capable of maintaining resilience despite connectivity losses or infrastructure gaps, enforcing strong cyber defense, and exhibiting intelligence that learns, adapts, and acts autonomously [19].
- Business Insider highlights the scalability of Agentic AI, referencing how Qualtrics has implemented continuous feedback loops to retrain its AI agents dynamically [17]. This feedback-driven approach is equally applicable in the mobility domain, where it can support real-time coordination, dynamic rerouting, and adaptive decision-making.
- Multi-agent systems are also advancing rapidly. As Amazon outlines its vision for deploying “multi-talented assistants” capable of operating independently in complex environments, the trajectory of Agentic AI becomes even more evident [18].
References:
1. Coll-Perales, B., Lucas-Estañ, M. C., Shimizu, T., Gozalvez, J., Higuchi, T., Avedisov, S., … & Sepulcre, M. (2022). End-to-end V2X latency modeling and analysis in 5G networks. IEEE Transactions on Vehicular Technology, 72(4), 5094-5109.
2. Horta, J., Siller, M., & Villarreal-Reyes, S. (2025). Cross-layer latency analysis for 5G NR in V2X communications. PLoS ONE, 20(1), e0313772.
3. Cellular V2X Communications Towards 5G. (PDF)
4. Al Harthi, F. R. A., Touzene, A., Alzidi, N., & Al Salti, F. (2025, July). Intelligent handover decision-making for Vehicle-to-Everything (V2X) 5G networks. In Telecom (Vol. 6, No. 3, p. 47). MDPI.
5. DSRC Safety Modem. https://www.nxp.com/products/wireless-connectivity/dsrc-safety-modem:DSRC-MODEM
6. VANETs and V2X Communication. https://www.sanfoundry.com/vanets-and-v2x-communication/
7. Yu, K., Feng, Z., Li, D., & Yu, J. (2023). Secure-ISAC: Secure V2X communication: An integrated sensing and communication perspective. arXiv preprint arXiv:2312.01720.
8. Study on integrated sensing and communication (ISAC) for C-V2X application. https://5gaa.org/content/uploads/2025/05/wi-isac-i-tr-v.1.0-may-2025.pdf
9. Ramasamy, D. (2023). Possible hardware architectures for power line communication in automotive V2G applications. Journal of The Institution of Engineers (India): Series B, 104(3), 813-819.
10. Xu, K., Zhou, S., & Li, G. Y. (2024). Federated reinforcement learning for resource allocation in V2X networks. IEEE Journal of Selected Topics in Signal Processing.
11. Asad, M., Shaukat, S., Nakazato, J., Javanmardi, E., & Tsukada, M. (2025). Federated learning for secure and efficient vehicular communications in open RAN. Cluster Computing, 28(3), 1-12.
12. Bryant, D. J. (2006). Rethinking OODA: Toward a modern cognitive framework of command decision making. Military Psychology, 18(3), 183-206.
13. Agentic AI Communication Protocols: The Backbone of Autonomous Multi-Agent Systems. https://datasciencedojo.com/blog/agentic-ai-communication-protocols/
14. Agentic AI And The Future Of Communications Networks. https://www.forbes.com/councils/forbestechcouncil/2025/05/27/agentic-ai-and-the-future-of-communications-networks/
15. Amazon launches new R&D group focused on agentic AI and robotics. TechCrunch.
16. Securing Identities For The Agentic AI Landscape. https://www.forbes.com/councils/forbestechcouncil/2025/07/03/securing-identities-for-the-agentic-ai-landscape/
17. Qualtrics’ president of product has a vision for agentic AI in the workplace: “We’re going to operate in a multiagent world”. https://www.businessinsider.com/agentic-ai-improve-qualtrics-company-customer-communication-data-collection-2025-5
18. Amazon’s R&D lab forms new agentic AI group. https://www.cnbc.com/2025/06/04/amazons-rd-lab-forms-new-agentic-ai-group.html
19. Agentic AI: The Next Frontier In Autonomous Work. https://www.forbes.com/councils/forbestechcouncil/2025/06/27/agentic-ai-the-next-frontier-in-autonomous-work/
About the Author:
Prashant Vajpayee is a Senior Product Manager and researcher in AI and cybersecurity, with expertise in enterprise data integration, cyber risk modeling, and intelligent transportation systems. With a foundation in strategic leadership and innovation, he has led transformative initiatives at Salesforce and advanced research focused on cyber risk quantification and resilience across critical infrastructure, including Transportation 5.0 and global supply chain. His work empowers organizations to implement secure, scalable, and ethically grounded digital ecosystems. Through his writing, Prashant seeks to demystify complex cybersecurity as well as AI challenges and share actionable insights with technologists, researchers, and industry leaders.
Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined
AI RAN [1.] is projected to account for approximately a third of the RAN market by 2029, according to a recent AI RAN Advanced Research Report published by the Dell’Oro Group. In the near term, the focus within the AI RAN segment will center on Distributed-RAN (D-RAN), single-purpose deployments, and 5G.
“Near-term priorities are more about efficiency gains than new revenue streams,” said Stefan Pongratz, Vice President at Dell’Oro Group. “There is strong consensus that AI RAN can improve the user experience, enhance performance, reduce power consumption, and play a critical role in the broader automation journey. Unsurprisingly, however, there is greater skepticism about AI’s ability to reverse the flat revenue trajectory that has defined operators throughout the 4G and 5G cycles,” continued Pongratz.
Note 1. AI RAN integrates AI and machine learning (ML) across various aspects of the RAN domain. The AI RAN scope in this report is aligned with the greater industry vision. While the broader AI RAN vision includes services and infrastructure, the projections in this report focus on the RAN equipment market.
Additional highlights from the July 2025 AI RAN Advanced Research Report:
- The base case is built on the assumption that AI RAN is not a growth vehicle. But it is a crucial technology/tool for operators to adopt. Over time, operators will incorporate more virtualization, intelligence, automation, and O-RAN into their RAN roadmaps.
- This initial AI RAN report forecasts the AI RAN market based on location, tenancy, technology, and region.
- The existing RAN radio and baseband suppliers are well-positioned in the initial AI-RAN phase, driven primarily by AI-for-RAN upgrades leveraging the existing hardware. Per Dell’Oro Group’s regular RAN coverage, the top 5 RAN suppliers contributed around 95 percent of the 2024 RAN revenue.
- AI RAN is projected to account for around a third of total RAN revenue by 2029.
In the first quarter of 2025, Dell’Oro said the top five RAN suppliers based on revenues outside of China are Ericsson, Nokia, Huawei, Samsung and ZTE. In terms of worldwide revenue, the ranking changes to Huawei, Ericsson, Nokia, ZTE and Samsung.
About the Report: Dell’Oro Group’s AI RAN Advanced Research Report includes a 5-year forecast for AI RAN by location, tenancy, technology, and region. Contact: [email protected]
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Author’s Note: Nvidia’s Aerial Research portfolio already contains a host of AI-powered tools designed to augment wireless network simulations. It is also collaborating with T-Mobile and Cisco to develop AI RAN solutions to support future 6G applications. The GPU king is also working with some of those top five RAN suppliers, Nokia and Ericsson, on an AI-RAN Innovation Center. Unveiled last October, the project aims to bring together cloud-based RAN and AI development and push beyond applications that focus solely on improving efficiencies.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
The one-year-old AI-RAN Alliance has now increased its membership to over 100, up from around 84 in May. However, telco members remain few, with only Vodafone joining since May. The other telco members are: Turkcell, Boost Mobile, Globe, Indosat Ooredoo Hutchison (Indonesia), Korea Telecom, LG Uplus, SK Telecom, T-Mobile US and SoftBank. This limited telco presence could reflect ongoing skepticism about the goals of AI RAN, including hopes for new revenue opportunities through network slicing, as well as hosting and monetizing enterprise AI workloads at the edge.
Francisco Martín Pignatelli, head of open RAN at Vodafone, hardly sounded enthusiastic in his statement in the AI-RAN Alliance press release. “Vodafone is committed to using AI to optimize and enhance the performance of our radio access networks. Running AI and RAN workloads on shared infrastructure boosts efficiency, while integrating AI and generative applications over RAN enables new real-time capabilities at the network edge,” he added.
Perhaps the most popular AI RAN scenario is “AI on RAN,” which enables AI services on the RAN at the network edge in a bid to support and benefit from new services, such as AI inferencing.
“We are thrilled by the extraordinary growth of the AI-RAN Alliance,” said Alex Jinsung Choi, Chair of the AI-RAN Alliance and Principal Fellow at SoftBank Corp.’s Research Institute of Advanced Technology. “This milestone underscores the global momentum behind advancing AI for RAN, AI and RAN, and AI on RAN. Our members are pioneering how artificial intelligence can be deeply embedded into radio access networks — from foundational research to real-world deployment — to create intelligent, adaptive, and efficient wireless systems.”
Choi recently suggested that now is the time to “revisit all our value propositions and then think about what should be changed or what should be built” to be able to address issues including market saturation and the “decoupling” between revenue growth and rising TCO. He also cited self-driving vehicles and mobile robots, where low latency is critical, as specific use cases where AI-RAN will be useful for running enterprise workloads.
About the AI-RAN Alliance:
The AI-RAN Alliance is a global consortium accelerating the integration of artificial intelligence into Radio Access Networks. Established in 2024, the Alliance unites leading companies, researchers, and technologists to advance open, practical approaches for building AI-native wireless networks. The Alliance focuses on enabling experimentation, knowledge sharing, and real-world performance testing to support the next generation of mobile infrastructure. For more information, visit: https://ai-ran.org
References:
https://www.delloro.com/advanced-research-report/ai-ran/
https://www.delloro.com/news/ai-ran-to-top-10-billion-by-2029/
Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum
AI RAN Alliance selects Alex Choi as Chairman
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
The case for and against AI-RAN technology using Nvidia or AMD GPUs
AI spending is surging; companies accelerate AI adoption, but job cuts loom large
The global AI market is experiencing significant growth. Companies are adopting AI at an accelerated rate, with 72% reporting adoption in at least one business function in 2024, a significant increase from 55% in 2023, according to S&P Global. This growth is driven by various factors, including the potential for enhanced productivity, improved efficiency, and increased innovation across industries.
Global spending on AI is projected to reach $632 billion by 2028, according to the IDC Worldwide AI and Generative AI Spending Guide. The guide covers 16 impacted technology categories: hardware (IaaS, server, and storage); software (AI applications [content workflow and management applications, CRM applications, ERM applications], AI application development and deployment, AI platforms [AI life-cycle software, computer vision AI tools, conversational AI tools, intelligent knowledge discovery software], and AI system infrastructure software); and services (business services and IT services).
Grand View Research estimates that the global AI market, encompassing hardware, software, and services, will grow to over $1.8 trillion by 2030, compounding annually at 37%.
Barron’s says AI spending is surging, but certain job types are at risk, according to CIOs. On Wednesday, two major U.S. investment banks released reports, based on surveys of chief information officers (CIOs) at corporations, that suggest rising spending plans for AI infrastructure.
- Morgan Stanley’s technology team said AI tops the priority list for projects that will see the largest spending increase, adding that 60% of CIOs expect to have AI projects in production by year end. Military spending for AI applications by NATO members is projected to exceed $112 billion by 2030, assuming a 4% AI investment allocation rate.
- Piper Sandler analyst James Fish noted 93% of CIOs plan to increase spending on AI infrastructure this year with 48% saying they will increase spending significantly by more than 25% versus last year. Piper Sandler said that is good news for the major cloud computing vendors—including Microsoft Azure, Oracle Cloud, Amazon.com’s Amazon Web Services, and Google Cloud by Alphabet.
- More than half the CIOs in Piper Sandler’s survey admitted the rise of AI made certain jobs more vulnerable for headcount reduction. The job categories most at risk for cuts are (in order): IT administration, sales, customer support, and IT help desks.
Much more on the AI-related job losses discussion appears below.
Executives’ confidence in AI execution has jumped from 53% to 71% in the past year, driven by $246 billion in infrastructure investment and demonstrable business results. Another article from the same date notes the introduction of “AI for Citizens” by Mistral, aimed at empowering public institutions with AI capabilities for their citizens.
This strong growth in the AI market is driven by several factors:
- Technological advancements: Improvements in machine learning algorithms, computational power, and the development of new frameworks like deep learning and neural networks are enabling more sophisticated AI applications.
- Data availability: The abundance of digital data from various sources (social media, IoT devices, sensors) provides vast training datasets for AI models, according to LinkedIn.
- Increasing investments: Significant investments from major technology companies, governments, and research institutions are fueling AI research and development.
- Cloud computing: The growth of cloud platforms like AWS, Azure, and Google Cloud provides scalable infrastructure and tools for developing and deploying AI applications, making AI accessible to a wider range of businesses.
- Competitive advantages: Businesses are leveraging AI/ML to gain a competitive edge by enhancing product development, optimizing operations, and making data-driven decisions.
……………………………………………………………………………………………………………………………………………………………………
- Some sources predict that AI could replace the equivalent of 300 million full-time jobs globally, with a significant impact on tasks performed by white-collar workers in areas like finance, law, and consulting.
- Entry-level positions are particularly vulnerable, with some experts suggesting that AI could cannibalize half of all entry-level white-collar roles within five years.
- Sectors like manufacturing and customer service are also facing potential job losses due to the automation capabilities of AI and robotics.
- A recent survey found that 41% of companies plan to reduce their workforce by 2030 due to AI, according to the World Economic Forum.
- BT CEO Allison Kirkby hinted at mass job losses due to AI. She told the Financial Times last month that her predecessor’s plan to eliminate up to 45,000 jobs by 2030 “did not reflect the full potential of AI.” In fact, she thinks AI may be able to help her shed a further 10,000 or so jobs by the end of the decade.
- Microsoft announced last week that it will lay off about 9,000 employees across different teams in its global workforce.
- “Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.,” Ford Motor CEO Jim Farley said in an interview last week with author Walter Isaacson at the Aspen Ideas Festival. “AI will leave a lot of white-collar people behind.”
- Amazon CEO Andy Jassy wrote in a note to employees in June that he expected the company’s overall corporate workforce to be smaller in the coming years because of the “once-in-a-lifetime” AI technology. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” Jassy said.
- Technology-related factors such as automation drove 20,000 job cuts among U.S.-based employers in the first half of the year, outplacement firm Challenger, Gray & Christmas said in a recent report. “We do see companies using the term ‘technological update’ more often than we have over the past decade, so our suspicion is that some of the AI job cuts that are likely happening are falling into that category,” Andy Challenger, a senior vice president at the Chicago, Illinois-based outplacement firm, told CFO Dive. In some cases, companies may avoid directly tying their layoffs to AI because they “don’t want press on it,” he said.
- In private, CEOs have spent months whispering about how their businesses could likely be run with a fraction of the current staff. Technologies including automation software, AI and robots are being rolled out to make operations as lean and efficient as possible.
- Four in 10 employers anticipate reducing their workforce where AI can automate tasks, according to World Economic Forum survey findings unveiled in January.
The long-term impact of AI on employment is still being debated, with some experts predicting that AI will also create new jobs and boost productivity, offsetting some of the losses. However, reports and analysis indicate that workers need to prepare for significant changes in the job market and develop new skills to adapt to the evolving demands of an AI-driven economy.
References:
https://my.idc.com/getdoc.jsp?containerId=IDC_P33198
https://www.grandviewresearch.com/press-release/global-artificial-intelligence-ai-market
https://www.barrons.com/articles/ai-jobs-survey-cios-619d2a5e?mod=hp_FEEDS_1_TECHNOLOGY_3
https://www.hrdive.com/news/ai-driven-job-cuts-underreported-challenger/752526/
https://www.lightreading.com/ai-machine-learning/telcos-are-cutting-jobs-but-not-because-of-ai
https://www.wsj.com/tech/ai/ai-white-collar-job-loss-b9856259
HPE cost reduction campaign with more layoffs; 250 AI PoC trials or deployments
AT&T and Verizon cut jobs another 6% last year; AI investments continue to increase
Verizon and AT&T cut 5,100 more jobs with a combined 214,350 fewer employees than 2015
AI adoption to accelerate growth in the $215 billion Data Center market
Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors
Nokia (like Ericsson) announces fresh wave of job cuts; Ericsson lays off 240 more in China
AI wave stimulates big tech spending and strong profits, but for how long?
Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers
By Omkar Ashok Bhalekar with Ajay Lotan Thakur
As demand for data keeps rising, driven by generative AI, real-time analytics, 8K streaming, and edge computing, data centers face an escalating dilemma: how to maintain performance without overheating. Traditional air-cooled server rooms that were once adequate for straightforward web hosting and storage are being stretched to their thermal limits by modern compute-intensive workloads. While the world’s digital backbone burns hot, innovators are diving deep, down to the ocean floor. Say hello to immersion cooling and undersea data farms, two technologies poised to revolutionize how the world stores and processes data.
Heat Is the Silent Killer of the Internet – In every data center, heat is the unobtrusive enemy. When racks of high-performance GPUs, CPUs, and ASICs all operate at the same time, they generate massive amounts of heat. The old approach of gigantic HVAC systems and chilled-air manifolds is reaching its technological and environmental limits.
In the majority of installations, 35-40% of total energy consumption is spent simply cooling the hardware rather than running it. As model sizes and inference loads explode (think ChatGPT, DALL·E, or Tesla FSD), traditional cooling infrastructures simply aren’t up to the task without costly upgrades or environmental degradation. This is why a paradigm shift is underway.
Liquid cooling is not an option everywhere due to lack of infrastructure, expense, and geography, so we must still rely on every player in the ecosystem to up the ante on energy efficiency. The burden crosses multiple domains: chip manufacturers need to deliver far greater performance per watt through advanced semiconductor design, and software developers need to write code that is fundamentally low-power by optimizing algorithms and reducing computational overhead.
Along with these basic improvements, memory manufacturers are designing low-power solutions, system manufacturers are making more power-efficient delivery networks, and cloud operators are making their data center operations more efficient while increasing the use of renewable energy sources. As Microsoft Chief Environmental Officer Lucas Joppa said, “We need to think about sustainability not as a constraint, but as an innovative driver that pushes us to build more efficient systems across every layer of the stack of technology.”
However, despite these multifaceted efficiency gains, thermal management remains a significant bottleneck with a deep and profound impact on overall system performance and energy consumption. Ineffective cooling can force processors to throttle their performance, undercutting the gains from better chips and optimized software. This becomes a self-perpetuating loop in which wasteful thermal management counteracts efficiency gains elsewhere in the system.
In this blogpost, we will address the cooling aspect of energy consumption, considering how future thermal management technology can be a multiplier of efficiency across the entire computing infrastructure. We will explore how proper cooling strategies not only reduce direct energy consumption from cooling components themselves but also enable other components of the system to operate at their maximum efficiency levels.
What Is Immersion Cooling?
Immersion cooling cools servers by submerging them in carefully designed, non-conductive fluids (typically dielectric liquids) that transfer heat much more efficiently than air. Immersion liquids are harmless to electronics; in fact, they allow direct liquid contact cooling with no risk of short-circuiting or corrosion.
Two general types exist:
- Single-phase immersion, where the fluid remains liquid and carries heat away by convection.
- Two-phase immersion, where the fluid boils at a low temperature and the vapor condenses back to liquid in a closed loop.
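The claim that liquids move heat far better than air is easy to sanity-check with the volumetric heat equation Q = ρ·c_p·ΔT. The sketch below uses representative property values, assumed for dry air at about 20 °C and a mineral-oil-like single-phase dielectric fluid, not measurements of any particular product.

```python
# Back-of-the-envelope: heat carried per unit volume of coolant,
# Q = rho * c_p * dT. Property values are representative assumptions.

coolants = {
    "air":              {"rho": 1.2,   "c_p": 1005.0},  # kg/m^3, J/(kg*K)
    "dielectric fluid": {"rho": 850.0, "c_p": 1670.0},
}

delta_t = 10.0  # assumed temperature rise across the rack, in kelvin

for name, p in coolants.items():
    q_per_m3 = p["rho"] * p["c_p"] * delta_t  # joules removed per m^3
    print(f"{name:>16}: {q_per_m3 / 1000:,.0f} kJ per cubic metre")

ratio = (850.0 * 1670.0) / (1.2 * 1005.0)
print(f"fluid advantage: ~{ratio:,.0f}x more heat per unit volume")
```

Under these assumptions the fluid carries on the order of a thousand times more heat per unit volume than air, which is why a quiet tank can replace a roaring wall of CRAC units.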
According to Vertiv's research, liquid cooling in high-density data centers improves the energy efficiency of both IT and facility systems compared to air cooling. In their fully optimized study, the introduction of liquid cooling produced a 10.2% reduction in total data center power and a more than 15% improvement in Total Usage Effectiveness (TUE).
Total Usage Effectiveness is calculated with the formula below:
TUE = ITUE x PUE, where ITUE = (total energy into the IT equipment) / (total energy into the compute components), and PUE is Power Usage Effectiveness, i.e. (total facility energy) / (total energy into the IT equipment).
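To make the arithmetic concrete, here is a minimal sketch of the TUE calculation. The kWh figures are invented for illustration and are not from the Vertiv study.

```python
# Minimal sketch of the TUE arithmetic defined above.
# Example energy values are illustrative assumptions only.

def tue(facility_kwh: float, it_kwh: float, compute_kwh: float) -> float:
    pue = facility_kwh / it_kwh   # Power Usage Effectiveness
    itue = it_kwh / compute_kwh   # energy into IT / energy into compute
    return itue * pue             # TUE = ITUE x PUE

# Air-cooled site: big chiller overhead, power-hungry server fans.
print(f"air:       TUE = {tue(1_500_000, 1_000_000, 800_000):.2f}")
# Immersion-cooled site: less facility overhead, fans largely eliminated.
print(f"immersion: TUE = {tue(1_200_000, 1_000_000, 900_000):.2f}")
```

Note that TUE collapses to (total facility energy) / (energy into the compute components), so it rewards savings both in the facility plant and inside the server chassis.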
Reimagining Data Centers Underwater
Imagine shipping an entire data center in a steel capsule and sinking it to the ocean floor. That’s no longer sci-fi.
Microsoft’s Project Natick demonstrated the concept by deploying a sealed underwater data center off the Orkney Islands, powered entirely by renewable energy and cooled by the surrounding seawater. Over its two-year lifespan, the submerged facility showed:
- A server failure rate 1/8th that of land-based centers.
- No need for on-site human intervention.
- Efficient, passive cooling by natural sea currents.
Why underwater? Seawater is a vast, naturally replenished heat sink, and the subsea environment is inherently less prone to temperature fluctuations, dust, vibration, and power surges. Most coastal metropolises, which are among the biggest consumers of cloud services, are within 100 miles of a viable deployment site, which would dramatically reduce latency.
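The latency argument is simple physics. Assuming light in fiber travels at roughly two-thirds of c (about 200 km per millisecond), a site 100 miles (~160 km) offshore adds well under a millisecond each way:

```python
# Rough one-way propagation delay to a coastal undersea site, assuming
# signals in fiber cover ~200 km per millisecond. Illustrative only.

distance_km = 160.0      # ~100 miles from metro to deployment site
speed_km_per_ms = 200.0  # assumed propagation speed in fiber

one_way_ms = distance_km / speed_km_per_ms
print(f"one-way: {one_way_ms:.2f} ms, round trip: {2 * one_way_ms:.2f} ms")
```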
Why This Tech Matters Now
Data centers already account for about 2-3% of the world's electricity, and with the rapid growth in AI and metaverse workloads, that figure will grow. Generative inference and AI training workloads can consume up to 10x the power per rack of regular server workloads, putting cooling gear and sustainability goals under tremendous pressure. Legacy air cooling is reaching its thermal and density thresholds, making immersion cooling a critical enabler of future scalability. According to Submer, a Barcelona-based immersion cooling company, immersion cooling can reduce the energy consumed by cooling systems by up to 95% and enable higher rack density, providing a path to sustainable data center growth under AI-driven demand.
Advantages & Challenges
Immersion and submerged data centers possess several key advantages:
- Sustainability – Lower energy consumption and lower carbon footprints are paramount as ESG (Environmental, Social, Governance) goals become business necessities.
- Scalability & Efficiency – Immersion allows more density per square foot, reducing real estate and overhead facility expenses.
- Reliability – Liquid-cooled and underwater systems suffer fewer mechanical failures thanks to less thermal stress, fewer moving parts, and less oxidation.
- Security & Autonomy – Underwater encased pods or autonomous liquid systems are difficult to hack and can be remotely monitored and updated, ideal for zero-trust environments.
While immersion cooling and submerged data centers have clear advantages, they also come with challenges and limitations:
- Maintenance and Accessibility Challenges – Both options make hardware maintenance complex. Immersion cooling requires careful removal and cleaning of components going into and out of dielectric liquids, whereas underwater data centers offer extremely limited physical access: entire modules must be retrieved for repairs, which translates into longer downtimes.
- High Initial Costs and Deployment Complexity – Building immersion tanks or underwater enclosures requires significant capital investment in specially designed equipment, infrastructure, and deployment techniques. Underwater data centers additionally demand marine engineering, watertight modules, and intricate site preparation.
- Environmental and Regulatory Concerns – Both approaches raise environmental and compliance questions. Immersion systems must contend with fluid-waste disposal regulations, while underwater data centers require marine environmental impact assessments, permits, and ongoing ecosystem-protection measures.
- Technology Maturity and Operational Risks – Both are immature technologies with minimal historical data on long-term performance and reliability. Potential problems include coolant leaks in immersion systems and physical damage or biofouling in underwater installations, making large-scale adoption uncertain.
Industry Momentum
Various companies are leading the charge:
- GRC (Green Revolution Cooling) and Submer offer immersion solutions to hyperscalers and enterprises.
- Iceotope delivers precision liquid cooling for HPC.
- Alibaba, Google, and Meta are testing immersion cooling at scale to support AI and ML clusters.
- Microsoft researched the commercial viability of off-grid, modular underwater data centers with Project Natick.
Hyperscalers are starting to design entire zones of their new data centers specifically for liquid-cooled GPU pods, while smaller edge data centers are adopting immersion tech to run quietly and efficiently in urban environments.
The Future of Data Centers: Autonomous, Sealed, and Everywhere
Looking ahead, the trend is clear: data centers are becoming more intelligent, compact, and environmentally integrated. We're entering an era where:
- AI-based DCIM software predicts and prevents failures in real time.
- Edge nodes with immersion cooling can be sited almost anywhere: smart factories, offshore oil rigs.
- Entire data centers might be built as prefabricated modules and dropped into oceans, deserts, or even space.
The general principle? Compute must not be limited by land, heat, or humans.
Final Thoughts
In the fight to enable the digital future, air is a luxury. Whether immersed in liquid or bolted to the seafloor, data centers are learning to cool smarter, not harder.
Underwater installations and liquid cooling are no longer out-there ideas; they're lifelines to a scalable, sustainable web.
So tomorrow's "Cloud" won't be in the sky; it will hum quietly under the sea.
References
- https://news.microsoft.com/source/features/sustainability/project-natick-underwater-datacenter/
- https://www.researchgate.net/publication/381537233_Advancement_of_Liquid_Immersion_Cooling_for_Data_Centers
- https://en.wikipedia.org/wiki/Project_Natick
- https://theliquidgrid.com/underwater-data-centers/
- https://www.sunbirddcim.com/glossary/submerged-server-cooling
- https://www.vertiv.com/en-us/solutions/learn-about/liquid-cooling-options-for-data-centers/
- https://submer.com/immersion-cooling/
About Author:
Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in data center architecture, manufacturing infrastructure, and sustainable solutions. With extensive experience designing resilient industrial networks and building smart factories and AI data centers with scalable networks, Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.
Google Fiber and Nokia demo network slicing for home broadband in GFiber Labs
Network slicing has previously been restricted to 5G Stand Alone (SA) networks, which the IEEE Techblog regularly covers (see References below). However, network slicing software may also have a place in the home broadband network, as a demo from Google Fiber and Nokia shows. Google Fiber says that network slicing "gives us the ability to carve up a customer's home network into different 'lanes,' each optimized for a specific use." In a GFiber Labs demo, gaming was used as the test scenario.
Google Fiber placed two gaming consoles next to each other and simulated network congestion, which drove the game’s latency up to 90 milliseconds. Unsurprisingly, “it was stalling, pixelating…a really ill experience for the end user,” said Nick Saporito, Google Fiber’s head of product. “This was a foundational test and it worked,” he added.
In the long term, Google Fiber says, this could truly change how home internet works, especially when it's driven by the customer. Today's one-size-fits-all connections treat all traffic the same, but not everyone uses the internet the same way: gamers care about latency, remote workers need video stability, home businesses rely on solid uptime and security, and emerging applications (AI, VR, etc.) may require next-level performance. Network slicing could be how home connections level up.
Network slicing opens the door to something new: the ability for customers to tailor their connection to the categories of Internet use that matter most in their home. It's not about prioritizing traffic behind the scenes; it's about giving you more control, more flexibility, and more ways to get the performance you need, when you need it. And with GFiber, it will always be in service of giving customers more control, without compromising our commitment to an open, unrestricted internet.
There’s also potential for something called “transactional slices.” These would spin up automatically, just for a few seconds, to keep things like financial logins secure. For example, connecting you directly to a service like your bank without routing traffic across the broader internet. You wouldn’t even notice it happening, but it could add meaningful peace of mind.
Network slicing is the next logical step in how we think about GFiber service — especially our lifestyle products like Core, Home, and Edge, built to meet the needs of customers’ unique internet lifestyles. Those products are designed to better match the way people live and work. Network slicing takes that a step further: adding real-time customization and control at the network level.
While we're very excited about the possibilities here, there are a few things that have to happen before we roll out network slicing across our network. Automation is a key piece of the puzzle. We'll be diving deeper with Nokia later this year to explore how we can bring some of these ideas to life. This kind of innovation is exactly what GFiber Labs was built for and we're excited about potentially leveling up the GFiber customer experience — again.
When considering how to implement network slicing on a wider scale, Saporito noted two key challenges. First, “a lot” of network automation is required to ensure a seamless experience. Google Fiber currently has a “mini-app” that lives on the router to help on the automation front, so that a technician doesn’t have to log onto the router and manually configure the settings.
Another challenge is determining how to effectively sell network slicing capabilities to customers. Given how prevalent multi-gig internet has become, Google Fiber is thinking about whether it makes sense to give customers more “ISP-like controls over their pipe,” Saporito said, rather than just providing a one-size-fits-all product.
“Much like you can put your car in sport or comfort mode, maybe our customers could go to the GFiber app and put their internet in gaming mode, for example, and then all their gaming traffic is special handled by network slicing,” he explained. “Those are ways that we’re kind of thinking about how we would productize it.”
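Neither Google Fiber nor Nokia has published an API for this, but the "gaming mode" idea is easy to sketch. Everything below (class names, ports, latency budgets) is hypothetical, imagining how a router-side agent might map a customer-selected mode to a slice policy:

```python
# Hypothetical sketch of the "gaming mode" concept described above.
# All names, ports, and thresholds are invented for illustration;
# this is not Google Fiber's or Nokia's implementation.

from dataclasses import dataclass

@dataclass
class SlicePolicy:
    name: str
    max_latency_ms: int       # target latency budget for the slice
    min_bandwidth_mbps: int   # floor reserved for matching traffic
    match_ports: tuple        # crude traffic classifier for the sketch

MODES = {
    "gaming":  SlicePolicy("low-latency", 20, 100, (3074, 27015)),
    "work":    SlicePolicy("video-stable", 50, 25, (443,)),
    "default": SlicePolicy("best-effort", 200, 0, ()),
}

def apply_mode(mode: str) -> SlicePolicy:
    """Pretend to push the chosen policy to the router's slice agent."""
    policy = MODES.get(mode, MODES["default"])
    print(f"slice '{policy.name}': latency <= {policy.max_latency_ms} ms, "
          f"reserve {policy.min_bandwidth_mbps} Mbps, "
          f"ports {policy.match_ports}")
    return policy

apply_mode("gaming")  # the customer taps "gaming mode" in the app
```

The point of the sketch is the division of labor: the customer picks an intent, and automation, the challenge Saporito flags below, translates it into slice configuration with no manual router work.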
But widespread adoption of broadband network slicing is still a ways away, according to Dell’Oro Group VP Jeff Heynen, as most ISPs and equipment providers are still in the proof-of-concept phase. “That being said, if you look down the road and you don’t expect downstream bandwidth consumption to grow as quickly as it historically has, then network slicing could be a way to help ISPs charge more for their service or, less likely, charge for specific slices,” Heynen said.
Aside from improving gaming or AI applications, one interesting use case for slicing is to provide additional security around financial transactions, Heynen noted. An operator could create a slice on a “per-transaction basis,” complementing a more standard encryption method like SSL.
“You could imagine an ISP differentiating themselves from their competition by highlighting that they have the most secure broadband network, for example,” he added. Saporito similarly noted the value of a so-called “transactional slice.” Though Google Fiber has yet to demo the concept, the idea is to create a temporary slice that would work when a customer logs onto their bank account. “We could create an automatic slice in the background to where that banking traffic is going directly to the financial institution’s back-end, versus traversing the transport network,” he said. “The customer wouldn’t even really notice it.”
https://fiber.google.com/blog/2025/06/network-slicing-demo.html
https://www.fierce-network.com/broadband/google-fiber-puts-nokia-network-slicing-technology-test
Téral Research: 5G SA core network deployments accelerate after a very slow start
5G network slicing progress report with a look ahead to 2025
ABI Research: 5G Network Slicing Market Slows; T-Mobile says “it’s time to unleash Network Slicing”
Is 5G network slicing dead before arrival? Replaced by private 5G?
5G Network Slicing Tutorial + Ericsson releases 5G RAN slicing software
Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions
Indonesian network operator Indosat Ooredoo Hutchison has deployed Nokia Energy Efficiency (part of the company's Autonomous Networks portfolio, described below) to reduce energy demand and carbon dioxide emissions across its RAN using AI. Nokia's energy control system uses AI and machine-learning algorithms to analyze real-time traffic patterns, enabling the operator to automatically adjust or shut down idle and unused radio equipment during periods of low network demand.
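Nokia has not published the internals of this system, but the basic rule it describes, powering radios down when predicted demand is low, can be sketched as follows. The traffic profile and threshold are invented for illustration:

```python
# Toy illustration of AI-driven RAN energy saving: switch off capacity
# carriers when predicted utilization stays under a safe threshold.
# This is NOT Nokia's algorithm; all values are invented assumptions.

hourly_load = [0.62, 0.48, 0.31, 0.12, 0.08, 0.07, 0.09, 0.25,
               0.55, 0.71, 0.80, 0.83]   # predicted utilization per hour

SLEEP_THRESHOLD = 0.15  # assumed floor below which extra carriers sleep

for hour, load in enumerate(hourly_load):
    if load < SLEEP_THRESHOLD:
        action = "sleep capacity carriers"
    else:
        action = "all carriers active"
    print(f"hour {hour:02d}: load {load:.0%} -> {action}")
```

In a real deployment the "predicted utilization" would come from the ML traffic models mentioned above, and the control loop would also have to guarantee no impact on coverage or customer experience before sleeping any hardware.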
The multi-vendor, AI-driven energy management solution can reduce energy costs and carbon footprint with no negative impact on network performance or customer experience. It can be rolled out in a matter of weeks.
Indosat is aiming to transform itself from a conventional telecom operator into an AI TechCo—powered by intelligent technologies, cloud-based platforms, and a commitment to sustainability. By embedding automation and intelligence into network operations, Indosat is unlocking new levels of efficiency, agility, and environmental responsibility across its infrastructure.
Earlier this year Indosat claimed to be the first operator to deploy AI-RAN in Indonesia, in a deal involving the integration of Nokia’s 5G cloud RAN solution with Nvidia’s Aerial platform. The Memorandum of Understanding (MoU) between the three firms included the development, testing, and deployment of AI-RAN, with an initial focus on transferring AI inferencing workloads on the AI Aerial, then the integration of RAN workloads on the same platform.
“As data consumption continues to grow, so does our responsibility to manage resources wisely. This collaboration reflects Indosat’s unwavering commitment to environmental stewardship and sustainable innovation, using AI to not only optimize performance, but also reduce emissions and energy use across our network,” said Desmond Cheung, Director and Chief Technology Officer at Indosat Ooredoo Hutchison.
Indosat was the first operator in Southeast Asia to achieve ISO 50001 certification for energy management—underscoring its pledge to minimize environmental impact through operational excellence. The collaboration with Nokia builds upon a successful pilot project, in which the AI-powered solution demonstrated its ability to reduce energy consumption in live network conditions.
Following the pilot project, Nokia deployed its Energy Efficiency solution across the entire Nokia RAN footprint in Indonesia, including Sumatra, Kalimantan, and Central and East Java.
“We are very pleased to be helping Indosat deliver on its commitments to sustainability and environmental responsibility, establishing its position both locally and internationally. Nokia Energy Efficiency reflects the important R&D investments that Nokia continues to make to help our customers optimize energy savings and network performance simultaneously,” said Henrique Vale, VP for Cloud and Network Services APAC at Nokia.
Nokia’s Autonomous Networks portfolio, including its Autonomous Networks Fabric solution, utilizes Agentic AI to deliver advanced security, analytics, and operations capabilities that provide operators with a holistic, real-time view of the network so they can reduce costs, accelerate time-to-value, and deliver the best customer experience.
Autonomous Networks Fabric is a unifying intelligence layer that weaves together observability, analytics, security, and automation across every network domain; allowing a network to behave as one adaptive system, regardless of vendor, architecture, or deployment model.
References: