Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

IEEE Techblog has called attention to the many challenges and risks inherent in the current mega-spending boom for AI infrastructure (building data centers, obtaining power/electricity, cooling, maintenance, fiber optic networking, etc.). In particular, see these two recent blog posts:

AI Data Center Boom Carries Huge Default and Demand Risks and

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

This article focuses on the tremendous debt that OpenAI, Oracle and newer AI cloud companies will have to obtain and the huge hurdles they face to pay back the money being spent to build out their AI infrastructure. While the major hyperscalers (Amazon, Microsoft, Google and Meta) are in good financial shape and won’t need to take on much debt, a new wave of heavily leveraged firms is emerging, one that could reshape the current AI boom.

OpenAI, for example, is set to take borrowing and large-scale contracts to an unprecedented new level. OpenAI is planning a vast network of data centers expected to cost at least $1 trillion over the coming years. As part of this effort, the company signed a $300 billion, five-year contract this month under which Oracle “is to set up AI computing infrastructure and lease it to OpenAI.” In other words, OpenAI agreed to pay Oracle $300 billion over five years for the latter company to build out new AI data centers. Where will OpenAI get that money? It will be burning billions in cash and won’t be profitable until 2029 at the earliest.

To fulfill its side of the deal, Oracle will need to invest heavily in infrastructure before receiving full payment—requiring significant borrowing. According to a recent note from KeyBanc Capital Markets, Oracle may need to borrow $25 billion annually over the next four years.  This comes at a time when Oracle is already carrying substantial debt and is highly leveraged. As of the end of August, the company had around $82 billion in long-term debt, with a debt-to-equity ratio of roughly 450%. By comparison, Alphabet—the parent company of Google—reported a ratio of 11.5%, while Microsoft’s stood at about 33%.
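For context, the debt-to-equity ratio is total debt divided by shareholders' equity, so the figures above imply a comparatively thin equity base behind that borrowing. A quick illustrative calculation (our own arithmetic, using rounded numbers from the article, not a statement from Oracle):

```python
# Illustrative only: what a ~450% debt-to-equity ratio implies about Oracle's
# equity base, given roughly $82B of long-term debt (figures rounded from the article).
def implied_equity(debt: float, debt_to_equity_pct: float) -> float:
    """Shareholders' equity implied by a given debt level and D/E ratio (in percent)."""
    return debt / (debt_to_equity_pct / 100)

print(f"${implied_equity(82e9, 450) / 1e9:.0f}B")  # ~ $18B of equity supporting ~$82B of debt
```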

Companies like Oracle and other less-capitalized AI players such as CoreWeave have little choice but to take on more debt if they want to compete at the highest level. Nebius Group, another Nasdaq-listed AI cloud provider similar to CoreWeave, struck a $19.4 billion deal in September to provide AI computing services to Microsoft. The company announced it would finance the necessary capital expenditures “through a combination of its cash flow and debt secured against the contract.”

………………………………………………………………………………………………………………………………………………………………………………………………

Sidebar – Stock market investors seem to love debt and risk:

CoreWeave’s shares have more than tripled since its IPO in March, while Nebius stock jumped nearly 50% after announcing its deal with Microsoft. Not to be outdone, Oracle’s stock surged 40% in a single day after the company disclosed a major boost in projected revenue from OpenAI in its infrastructure deal—even though the initiative will require years of heavy spending by Oracle.

–>What’s so amazing to this author is that OpenAI selected Oracle for the AI infrastructure it will use, even though the latter is NOT a major cloud service provider and is certainly not a hyperscaler.  For Q1 2025, it held about 3% market share, placing it #5 among global cloud service providers.

…………………………………………………………………………………………………………………………………………………………………………………………………

Data Center Compute Server & Storage Room;  iStock Photo credit: Andrey Semenov

……………………………………………………………………………………………………………………………………………….

Among other new AI Cloud players:

  • CyrusOne secured nearly $12 billion in financing (much of it debt) for AI / data center expansion. Around $7.9 billion of that is for new data center / AI digital infrastructure projects in the U.S.
  • SoftBank / “Stargate” initiative: The Stargate project (OpenAI + Oracle + SoftBank + MGX, etc.) is being structured with major debt. The plan is huge: around $500 billion in AI infrastructure and supercomputers, with financing expected to be roughly 70% debt and 10% equity among the sources.
  • xAI (Elon Musk’s AI firm): xAI raised $10 billion in combined debt and equity, roughly $5 billion of it in secured notes / term loans (debt), with the remainder in equity. The money is intended to build out its AI infrastructure (e.g., GPU facilities / data centers).

There’s growing skepticism about whether these companies can meet their massive contract obligations and repay their debts. Multiple recent studies suggest AI adoption isn’t advancing as quickly as supporters claim. One study found that only 3% of consumers are paying for AI services. Forecasts projecting trillions of dollars in annual spending on AI data centers within a few years appear overly optimistic.

OpenAI’s position, despite the hype, seems very shaky. D.A. Davidson analyst Gil Luria estimates the company would need to generate over $300 billion in annual revenue by 2030 to justify the spending implied in its Oracle deal—a steep climb from its current run rate of about $12 billion. OpenAI has financial backing from SoftBank and Nvidia, with Nvidia pledging up to $100 billion, but even that may not be enough.  “A vast majority of Oracle’s data center capacity is now promised to one customer, OpenAI, who itself does not have the capital to afford its many obligations,” Luria said.
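As a rough back-of-envelope check (our own illustrative arithmetic, assuming a 2025 baseline and five years of growth to 2030, not a figure from the article), that revenue target implies a compound annual growth rate of roughly 90%:

```python
# Back-of-envelope: the compound annual growth rate OpenAI would need to reach
# ~$300B in revenue by 2030 from a ~$12B run rate (assumed 2025 baseline).
def required_cagr(current: float, target: float, years: int) -> float:
    """Compound annual growth rate needed to go from `current` to `target` in `years`."""
    return (target / current) ** (1 / years) - 1

print(f"{required_cagr(12e9, 300e9, 5):.0%}")  # ~ 90% growth per year, sustained for five years
```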

Oracle could try to limit risk by pacing its spending with revenue received from OpenAI.  Nonetheless, Moody’s flagged “significant” risks in a recent note, citing the huge costs of equipment, land, and electricity. “Whether these will be financed through traditional debt, leases or highly engineered financing vehicles, the overall growth in balance sheet obligations will also be extremely large,” Moody’s warned. In July (two months before the OpenAI deal), it gave Oracle a negative credit outlook.

There’s a real possibility that things go smoothly. Oracle may handle its contracts and debt well, as it has in the past. CoreWeave, Nebius, and others might even pioneer new financial models that help accelerate AI development.

It’s very likely that some of today’s massive AI infrastructure deals will be delayed, renegotiated, or reassigned if AI demand doesn’t grow as fast as AI spending. Legal experts say contracts could be transferred. For example, if OpenAI can’t make the promised payments, Oracle might lease the infrastructure to a more financially stable company, assuming the terms allow it.

Such a shift wouldn’t necessarily doom Oracle or its debt-heavy peers. But it would be a major test for an emerging financial model for AI—one that’s starting to look increasingly speculative.  Yes, even bubbly!

………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.wsj.com/tech/ai/debt-is-fueling-the-next-wave-of-the-ai-boom-278d0e04

https://www.crn.com/news/cloud/2025/cloud-market-share-q1-2025-aws-dips-microsoft-and-google-show-growth

AI Data Center Boom Carries Huge Default and Demand Risks

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

Verizon has established a 6G Innovation Forum with a group of companies to drive innovation and enable the 6G era. Verizon’s future-forward initiative is uniting key players across the technology ecosystem, including leading network vendors Ericsson, Samsung Electronics, and Nokia, and device and chipset innovators Meta and Qualcomm Technologies, Inc., in the early stages of development to define 6G together by identifying potential new use cases, devices and network technology. The forum aims to establish an open, diversified, and resilient 6G ecosystem and develop foundational 6G technologies while ensuring global alignment.

This effort underscores Verizon’s commitment to drive the collaborative evolution of connectivity and deliver transformative experiences for consumers and enterprises. Verizon’s networks form the backbone of the emerging Artificial Intelligence economy, delivering the infrastructure and expertise essential for businesses to fully harness AI’s potential. For over a decade, Verizon has integrated AI into its operations to optimize network performance and infrastructure, a commitment that will continue with the evolution of 6G. This will accelerate Verizon’s AI Connect strategy and intelligent edge capabilities, enabling businesses to manage real-time AI workloads at scale by leveraging Verizon’s comprehensive suite of solutions with its award-winning network.

The forum will move beyond theoretical discussions and rapidly progress toward tangible 6G advancements and the realization of potential new and innovative use cases. Key areas of focus will include:

  • Unlocking the full potential of 6G by testing new spectrum bands and bandwidths.
  • Fostering a globally harmonized 6G landscape by actively working with global standards bodies like 3GPP to ensure that the forum’s work aligns with mainstream 6G development and promotes interoperability across the industry.
  • Allowing forum partners to test and refine 6G technologies in a real-world environment by establishing dedicated Verizon 6G Labs, starting in Los Angeles, to serve as hubs for collaborative research, prototyping, and early lab and field trials.

“Verizon is consistently at the forefront of network innovation. We were the first in the world to turn up 5G and continue to enhance our best, most reliable and fastest 5G network in ways that open the door to possibilities far beyond what we can imagine today,” said Joe Russo, EVP & President, Global Networks and Technology at Verizon. “5G Advanced lays the foundation for the 6G future – whether that’s new wearables, AI experiences, or entirely new use cases we haven’t even thought of yet, and that’s what excites me the most. With the best team in the industry, we will build the future of these solutions with our partners. We’re already building a network designed for the next era – one that will transform how we live, work and play.”

Yago Tenorio, the chief technology officer of Verizon, told Light Reading he wants the Forum to identify and refine 6G use cases before technology details are agreed upon by 3GPP and ITU-R WP5D. Smart glasses combined with artificial intelligence (AI) have arguably emerged as the prime candidate to succeed the smartphone as a mass-market 6G consumer gadget; the sort of smart glasses shown off last week could become popular in a future 6G scenario.

“One example of why this forum matters is that if you go to the standards today there is a lot of talk about uplink capacity with eight antennas in the device,” Tenorio said. “I don’t have any problem with that. It’s going to be very useful for FWA [fixed wireless access] and maybe useful for some smartphones, some classes of devices. But can you imagine a wearable with eight antennas? I mean, it’s difficult enough to have two,” he added.

Comment and Analysis:

It seems there are way too many 6G forums and consortiums that overlap and could generate conflicting specifications. The two main bodies are ITU-R and 3GPP.

  • ITU-R WP5D sets the formal requirements for terrestrial international mobile telecommunications (IMT) and is working on the framework for IMT-2030 (the official designation for 6G). This framework, outlined in the ITU-R’s IMT-2030 vision and Recommendation ITU-R M.2160, includes key aspects like technology trends, usage scenarios, and performance capabilities for the next generation of mobile networks. WP5D also develops the minimum technical performance requirements (TPRs) for IMT-2030 (“6G”), which will be specified in Report ITU-R M.[IMT-2030.TECH PERF REQ]. In February 2025, WP5D discussed a draft document on these requirements, and the next step is to detail the specific values for key metrics like peak data rate and spectral efficiency, with candidates for the radio interfaces to be submitted by early 2029 and finalized around mid-2030.
  • 3GPP creates cellular specifications which are submitted to ITU-R WP5D by ATIS as contributions directed towards Radio Interface Technologies (RITs) and Sets of Radio Interface Technologies (SRITs). 3GPP began its 6G study work in 2024. It is working toward a first-phase 6G specification to be completed in Release 21 by late 2028, which will be submitted for consideration as the IMT-2030 RIT/SRIT standard. Note that ONLY 3GPP defines the 5G and 6G core network specifications. There is no serious work in ITU-T for the non-radio aspects of 5G or 6G.

Summary of 6G Forums:

North America:
  • Next G Alliance: An initiative within the Alliance for Telecommunications Industry Solutions (ATIS) to advance North American leadership in wireless technology. It includes working groups focused on creating a 6G roadmap, defining applications and use cases, and addressing spectrum issues.
  • AI-RAN Alliance: This group brings together technology and telecom leaders to integrate artificial intelligence (AI) directly into radio access network (RAN) technology to improve network performance, efficiency, and resource utilization in the lead-up to 6G.
  • Verizon 6G Innovation Forum: Established in September 2025, this consortium unites companies such as Ericsson, Nokia, Samsung, Meta, and Qualcomm to develop the 6G ecosystem, identify use cases, and define foundational technologies.
  • Brooklyn 6G Summit (B6GS): An annual flagship event hosted by Nokia and NYU, bringing together vendors, academia, and operators to discuss 6G research and innovation. 
Europe:
  • 6G Smart Networks and Services Industry Association (6G-IA): A European-based group that represents the private sector and collaborates with the European Commission on 6G research initiatives. It oversees projects like Hexa-X and Hexa-X-II, which have helped define the 6G vision.
  • 6G Flagship (Finland): Based at the University of Oulu, this is one of the world’s first 6G research programs. It leads multiple national and international projects, working to develop the components, tools, and test network for a 6G-enabled digital world.
  • one6G: This non-profit association works to accelerate the adoption of next-generation wireless technologies by supporting global 6G research and standardization. 
Asia:
  • China IMT-2030 (6G) Promotion Group, established in 2019 by the Ministry of Industry and Information Technology (MIIT) to coordinate government, academia, and industry efforts in promoting 6G research, development, and international cooperation. The group focuses on defining technical standards, exploring new applications like integrated sensing and non-terrestrial networks, and aims for 6G commercialization around 2030.
  • 6G Forum (Korea): An organization working to lead and promote the evolution of wireless technology beyond 5G and into 6G, encouraging collaboration between industries, government, and academia.
  • Bharat 6G Alliance (India): A partnership between Indian companies, academia, and research organizations to accelerate the country’s innovation and collaboration in 6G.
  • XG Mobile Promotion Forum (Japan): This group, which has a memorandum of understanding with the Next G Alliance, focuses on advancing the 5G and 6G ecosystem. 
Other notable efforts:
  • IEEE Future Networks: This IEEE initiative includes a Testbed Working Group that collaborates with existing 5G testbeds to accelerate the development of next-generation networks, including 6G.
  • Research initiatives: Numerous specific projects and academic consortia worldwide are also driving focused research on various aspects of 6G, such as integrating AI into networks or developing specific components.
  • See References below for more collaborative efforts directed at 6G.

……………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.verizon.com/about/news/verizon-leads-future-wireless-development-new-industry-6g-forum

https://www.lightreading.com/6g/verizon-cto-worries-whether-6g-will-measure-up-in-the-us

Verizon launches 6G forum; it’s all about the use cases, CTO says

https://www.ericsson.com/en/6g/spectrum

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework

ITU-R: IMT-2030 (6G) Backgrounder and Envisioned Capabilities

ITU-R WP5D invites IMT-2030 RIT/SRIT contributions

NGMN issues ITU-R framework for IMT-2030 vs ITU-R WP5D Timeline for RIT/SRIT Standardization

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028

Multi-vendor Open RAN stalls as EchoStar/Dish shuts down its 5G network, leaving Mavenir in the lurch

Last week’s announcement that EchoStar/Dish Network will sell $23 billion worth of spectrum licenses to AT&T was very bad news for Mavenir. As a result of that deal, Dish Network’s 5G Open RAN network, running partly on Mavenir’s software, is to be decommissioned. Dish Network had been constructing a fourth nationwide U.S. mobile network with new Open RAN suppliers – one of the only true multi-vendor Open RAN deployments worldwide.

Credit: Kristoffer Tripplaar/Alamy Stock Photo

Echostar’s decision to shut down its 5G network marks a very sad end for the world’s largest multivendor open RAN and will have ramifications for the entire industry. “If you look at all the initiatives, what the US government did or in general, they are the only ones who actually spent a good chunk of money to really support open RAN architecture,” said Pardeep Kohli, the CEO of Mavenir, one of the vendors involved in the Dish Network project. “So now the question is where do you go from here?”

As part of its original set of updates on 5G network plans, Dish revealed it would host its 5G core – the part that will survive the spectrum sale – in the public cloud of AWS. And the hyperscaler’s data facilities have also been used for RAN software from Mavenir installed on servers known as central units.

Open RAN comes into play at the fronthaul interface between Mavenir’s distributed unit (DU) software and radios provided by Japan’s Fujitsu. Its ability to connect its software to another company’s radios validates Mavenir’s claim to be an open RAN vendor, says Kohli. While other suppliers boast compatibility with open RAN specifications, commercial deployments pairing vendors over this interface remain rare.

Mavenir has evidently been frustrated by the continued dominance of Huawei, Ericsson and Nokia, whose combined RAN market share grew from 75.1% in 2023 to 77.5% last year, according to research from Omdia, an Informa company. Dish Network alone would not have made a sufficient difference for Mavenir and other open RAN players, according to Kohli. “It helped us come this far,” he said. “Now it’s up to how far other people want to take it.” A retreat from open RAN would, he thinks, be a “bad outcome for all the western operators,” leaving them dependent on a Nordic duopoly in countries where Chinese vendors are now banned.

“If they (telcos) don’t support it (multi-vendor OpenRAN), and other people are not supporting it, we are back to a Chinese world and a non-Chinese world,” he said. “In the non-Chinese world, you have Ericsson and Nokia, and in the Chinese world, it’s Huawei and ZTE. And that’s going to be a pretty bad outcome if that’s where it ends up.”

…………………………………………………………………………………………………………………………………………………………………

Open RAN outside the U.S.:

Outside the U.S., the situation is no better for OpenRAN. Only Japan’s Rakuten and Germany’s 1&1 have attempted to build a “greenfield” Open RAN from scratch. As well as reporting billions of dollars in losses on network deployment, Rakuten has struggled to attract customers. It owns the RAN software it has deployed but counts only 1&1 as a significant customer. And Rakuten’s original 4G rollout was not based on the industry’s open RAN specifications, according to critics. “They were not pure,” said Mavenir’s Kohli.

Plagued by delays and other problems, 1&1’s rollout has been a further bad advert for Open RAN. For the greenfield operators, the issue is not the maturity of open RAN technology. Rather, it is the investment and effort needed to build any kind of new nationwide telecom network in a country that already has infrastructure options. And the biggest brownfield operators, despite professing support for open RAN, have not backed any of the new entrants.

RAN Market Concentration:

  • Stefan Pongratz, an analyst with Dell’Oro, found that five of six regions he tracks are today classed as “highly concentrated,” with an HHI score of more than 2,500 (a short calculation sketch follows this list). “This suggests that the supplier diversity element of the open RAN vision is fading,” wrote Pongratz in a recent blog.
  • A study from Omdia (owned by Informa) shows the combined RAN market share of Huawei, Ericsson and Nokia grew from 75.1% in 2023 to 77.5% last year. The only significant alternative to the European and Chinese vendors is Samsung, and its market share has shrunk from 6.1% to 4.8% over this period.
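For readers unfamiliar with the Herfindahl-Hirschman Index (HHI) that Pongratz cites, it is simply the sum of squared market shares expressed in percentage points; a minimal sketch with hypothetical shares (not the actual Dell’Oro or Omdia figures):

```python
# Illustrative sketch (assumed shares, not the reported data): the HHI is the sum
# of squared market shares in percentage points; above 2,500 a market is usually
# classed as "highly concentrated."
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Hypothetical regional RAN shares summing to 100%.
example_shares = [50, 25, 15, 5, 5]
print(hhi(example_shares))  # 3400 -> well above the 2,500 "highly concentrated" threshold
```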

Concentration would seem to be especially high in the U.S., where Ericsson now boasts a RAN market share of more than 50% and generates about 44% of its sales (the revenue contribution of India, Ericsson’s second-biggest market, was just 4% for the recent second quarter).  That’s partly because smaller regional operators previously ordered to replace Huawei in their networks spent a chunk of the government’s “rip and replace” funds on Ericsson rather than open RAN, says Kohli. Ironically, though, Ericsson owes much of the recent growth in its U.S. market share to what has been sold as an open RAN single vendor deal with AT&T [1.]. Under that contract, it is replacing Nokia at a third of AT&T’s sites, having already been the supplier for the other two thirds.

Note 1. In December 2023, AT&T awarded Ericsson a multi-year, $14 billion Open RAN contract to serve as the foundation for its open network deployment, with a goal of having 70% of its wireless traffic on open platforms by late 2026. That large, single-vendor award for the core infrastructure was criticized for potentially undermining the goal of Open RAN, which was to encourage competition among multiple network equipment and software providers. AT&T’s claim of a multi-vendor network turned out to be just a smokescreen. Fujitsu/1Finity supplied third-party radios used in AT&T’s first Open RAN call with Ericsson.

Indeed, AT&T’s open RAN claims have been difficult to take seriously, especially since it identified Mavenir as a third supplier of radio units, behind Ericsson and Japan’s Fujitsu, just a few months before Mavenir quit the radio unit market. Mavenir stopped manufacturing and distributing Open RAN radios in June 2025 as part of a financial restructuring and a shift to a software-focused business model. 

…………………………………………………………………………………………………………………….

As noted above, Kohli describes EchoStar/Dish Network as the only U.S. player that was spending “a good chunk of money to really support open RAN architecture.”

Ultimately, he thinks the big U.S. telcos may come to regret their heavier reliance on the RAN gear giants. “It may look great for AT&T and Verizon today, but they’ll be funding this whole thing as a proprietary solution going forward because, really, there’s no incentive for anybody else to come in,” he said.

…………………………………………………………………………………………………………………….

References:

https://www.lightreading.com/open-ran/echostar-rout-leaves-its-open-ran-vendors-high-and-dry

https://www.lightreading.com/open-ran/mavenir-ceo-warns-of-ericsson-and-nokia-duopoly-as-open-ran-stalls

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T to deploy Fujitsu and Mavenir radios in crowded urban areas

Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Mavenir and NEC deploy Massive MIMO on Orange’s 5G SA network in France

Spark New Zealand completes 5G SA core network trials with AWS and Mavenir software

Mavenir at MWC 2022: Nokia and Ericsson are not serious OpenRAN vendors

Ericsson expresses concerns about O-RAN Alliance and Open RAN performance vs. costs

Nokia and Mavenir to build 4G/5G public and private network for FSG in Australia

 

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Both telecom and enterprise networks are being reshaped by the bandwidth and latency demands of AI. Network operators that fail to modernize their architectures risk falling behind. Why? AI workloads are network killers: they demand massive east-west traffic, ultra-low latency, and predictable throughput.

  • Real-time observability is becoming non-negotiable, as enterprises need to detect and fix issues before they impact AI model training or inference.
  • Self-driving networks are moving from concept to reality, with AI not just monitoring but actively remediating problems.
  • The competitive race is now about who can integrate AI into networking most seamlessly — and HPE/Juniper’s Mist AI, Cisco’s assurance stack, and Nvidia’s AI fabrics are three different but converging approaches.

Cisco, HPE/Juniper, and Nvidia are designing AI-optimized networking equipment, with a focus on real-time observability, lower latency and increased data center performance for AI workloads.  Here’s a capsule summary:

Cisco: AI-Ready Infrastructure:

  • Cisco is embedding AI telemetry and analytics into its Silicon One chips, Nexus 9000 switches, and Catalyst campus gear.
  • The focus is on real-time observability via its ThousandEyes platform and AI-driven assurance in DNA Center, aiming to optimize both enterprise and AI/ML workloads.
  • Cisco is also pushing AI-native data center fabrics to handle GPU-heavy clusters for training and inference.
  • Cisco claims “exceptional momentum” and leadership in AI: >$800M in AI infrastructure orders taken from web-scale customers in Q4, bringing the FY25 total to over $2B.
  • Cisco Nexus switches are now fully integrated with NVIDIA’s Spectrum-X architecture to deliver high-speed networking for AI clusters.

HPE + Juniper: AI-Native Networking Push:

  • Following its $13.4B acquisition of Juniper Networks, HPE has merged Juniper’s Mist AI platform with its own Aruba portfolio to create AI-native, “self-driving” networks.
  • Key upgrades include:

-Agentic AI troubleshooting that uses generative AI workflows to pinpoint and fix issues across wired, wireless, WAN, and data center domains.

-Marvis AI Assistant with enhanced conversational capabilities — IT teams can now ask open-ended questions like “Why is the Orlando site slow?” and get contextual, actionable answers.

-Large Experience Model (LEM) with Marvis Minis — digital twins that simulate user experiences to predict and prevent performance issues before they occur.

-Apstra integration for data center automation, enabling autonomous service provisioning and cross-domain observability

Nvidia: AI Networking at Compute Scale

  • Nvidia’s Spectrum-X Ethernet platform and Quantum-2 InfiniBand (both from the Mellanox acquisition) are designed for AI supercomputing fabrics, delivering ultra-low latency and congestion control for GPU clusters.
  • In partnership with HPE, Nvidia is integrating NVIDIA AI Enterprise and Blackwell architecture GPUs into HPE Private Cloud AI, enabling enterprises to deploy AI workloads with optimized networking and compute together.
  • Nvidia’s BlueField DPUs offload networking, storage, and security tasks from CPUs, freeing resources for AI processing.

………………………………………………………………………………………………………………………………………………………..

Here’s a side-by-side comparison of how Cisco, HPE/Juniper, and Nvidia are approaching AI‑optimized enterprise networking — so you can see where they align and where they differentiate:

| Feature / Focus Area | Cisco | HPE / Juniper | Nvidia |
|---|---|---|---|
| Core AI Networking Vision | AI-ready infrastructure with embedded analytics and assurance for enterprise + AI workloads | AI-native, "self-driving" networks across campus, WAN, and data center | High-performance fabrics purpose-built for AI supercomputing |
| Key Platforms | Silicon One chips, Nexus 9000 switches, Catalyst campus gear, ThousandEyes, DNA Center | Mist AI platform, Marvis AI Assistant, Marvis Minis, Apstra automation | Spectrum-X Ethernet, Quantum-2 InfiniBand, BlueField DPUs |
| AI Integration | AI-driven assurance, predictive analytics, real-time telemetry | Generative AI for troubleshooting, conversational AI for IT ops, digital twin simulations | AI-optimized networking stack tightly coupled with GPU compute |
| Observability | End-to-end visibility via ThousandEyes + DNA Center | Cross-domain observability (wired, wireless, WAN, DC) with proactive issue detection | Telemetry and congestion control for GPU clusters |
| Automation | Policy-driven automation in campus and data center fabrics | Autonomous provisioning, AI-driven remediation, intent-based networking | Offloading networking/storage/security tasks to DPUs for automation |
| Target Workloads | Enterprise IT, hybrid cloud, AI/ML inference & training | Enterprise IT, edge, hybrid cloud, AI/ML workloads | AI training & inference at hyperscale, HPC, large-scale data centers |
| Differentiator | Strong enterprise install base + integrated assurance stack | Deep AI-native operations with user experience simulation | Ultra-low latency, high-throughput fabrics for GPU-dense environments |

Key Takeaways:

  • Cisco is strongest in enterprise observability and broad infrastructure integration.
  • HPE/Juniper is leaning into AI‑native operations with a heavy focus on automation and user experience simulation.
  • Nvidia is laser‑focused on AI supercomputing performance, building the networking layer to match its GPU dominance.

Conclusions:

  • Cisco leverages its market leadership, customer base and strategic partnerships to integrate AI with existing enterprise networks.
  • HPE/Juniper challenges rivals with an AI-native, experience-first network management platform. 
  • Nvidia aims to dominate the full-stack AI infrastructure, including networking.

Muon Space in deal with Hubble Network to deploy world’s first satellite-powered Bluetooth network

Muon Space, a  provider of end-to-end space systems specializing in mission-optimized satellite constellations, today announced its most capable satellite platform, MuSat XL, a high-performance 500 kg-class spacecraft designed for the most demanding next-generation low Earth orbit (LEO) missions. Muon also announced its first customer for the XL Platform: Hubble Network, a Seattle-based space-tech pioneer building the world’s first satellite-powered Bluetooth network.  IEEE Techblog reported Hubble Network’s first Bluetooth to space satellite connection in this post.

The XL Platform adds a dramatically expanded capability tier to the flight-proven Halo™ stack, delivering more power, agility, and integration flexibility while preserving the speed, scalability and cost-effectiveness needed for constellation deployment. Optimized for Earth observation (EO) and telecommunications missions supporting commercial and national security customers that require multi-payload operations, extreme data throughput, high-performance inter-satellite networking, and cutting-edge attitude control and pointing, the XL Platform sets a new industry benchmark for mission performance and value. “XL is more than a bigger bus – it’s a true enabler for customers pushing the boundaries of what’s possible in orbit, like Hubble,” said Jonny Dyer, CEO of Muon Space. “Their transformative BLE technology represents the future of space-based services and we are ecstatic to enable their mission with the XL Platform and our Halo stack.”

The Muon Space XL platform combines exceptional payload power, precise pointing, and high-bandwidth networking to enable advanced space capabilities across defense, disaster response, and commercial missions.

Enhancing Global BLE Coverage:

In 2024, Hubble became the first company to establish a Bluetooth connection directly to a satellite, fueling global IoT growth. Using MuSat XL, it will deploy a next-generation BLE payload featuring a phased-array antenna and a receiver 20 times more powerful than its CubeSat predecessor, enabling BLE detection at 30 times lower power and direct connectivity for ultra-low-cost, energy-efficient devices worldwide. MuSat XL’s large payload accommodation, multi-kW power system, and cutting-edge networking and communications capabilities are key enablers for advanced services like Hubble’s.

“Muon’s platform gives us the scale and power to build a true Bluetooth layer around the Earth,” said Alex Haro, Co-Founder and CEO of Hubble Network.

The first two MuSat XL satellites will provide a 12-hour global revisit time, with a scalable design for faster coverage. Hubble’s BLE Finding Network supports critical applications in logistics, infrastructure, defense, and consumer technology.

A Next Generation Multi-Mission Satellite Platform:

MuSat XL is built for operators who need real capability – more power, larger apertures, more flexibility, and more agility – and with the speed to orbit and reliability that Muon has already demonstrated with its other platforms in orbit since 2023. Built on the foundation of Muon’s heritage 200 kg MuSat architecture, MuSat XL is a 500 kg-class bus that extends the Halo technology stack’s performance envelope to enable high-impact, real-time missions.

Key capabilities include:

  • 1 kW+ orbit average payload power – Supporting advanced sensors, phased arrays, and edge computing applications.
  • Seamless, internet-standards based, high bandwidth, low latency communications, and optical crosslink networking – Extremely high volume downlink (>5 TB / day) and near real-time communications for time-sensitive operations critical for defense, disaster response, and dynamic tasking.
  • Flexible onboard interface, network, compute – Muon’s PayloadCore architecture enables rapid hardware/software integration of payloads and deployment of cloud-like workflows to onboard network, storage, and compute.
  • Precise, stable, and agile pointing – Attitude control architected for the rigorous needs of next-generation EO and RF payloads.

In the competitive small satellite market, MuSat XL offers standout advantages in payload volume, power availability, and integration flexibility – making it a versatile backbone for advanced sensors, communications systems, and compute-intensive applications. The platform is built for scale: modular, manufacturable, and fully integrated with Muon’s vertically developed stack, from custom instrument design to full mission operations via the Halo technology stack.

Muon designed MuSat XL to deliver exceptional performance without added complexity. Early adopters like Hubble signal a broader trend in the industry: embracing platforms that offer operational autonomy, speed, and mission longevity at commercial scale.

About Muon Space:

Founded in 2021, Muon Space is an end-to-end space systems company that designs, builds, and operates mission-optimized satellite constellations to deliver critical data and enable real-time compute and decision-making in space. Its proprietary technology stack, Halo™, integrates advanced spacecraft platforms, robust payload integration and management, and a powerful software-defined orchestration layer to enable high-performance capabilities at unprecedented speed – from concept to orbit. With state-of-the-art production facilities in Silicon Valley and a growing track record of commercial and national security customers, Muon Space is redefining how critical Earth intelligence is delivered from space.  Muon Space employs a team of more than 150 engineers and scientists, including industry experts from Skybox, NASA, SpaceX, and others.  SOURCE: Muon Space

About Hubble Network:

Founded in 2021, Hubble is creating the world’s first satellite-powered Bluetooth network, enabling global connectivity without reliance on cellular infrastructure. The Hubble platform makes it easy to transmit low-bandwidth data from any Bluetooth-enabled device, with no infrastructure required. Their global BLE network is live and expanding rapidly, delivering real-time visibility across supply chains, fleets, and facilities.  Visit www.hubble.com for more information.

References:

https://www.muonspace.com/

https://www.prnewswire.com/news-releases/muon-space-unveils-xl-satellite-platform-announces-hubble-network-as-first-customer-302523719.html

https://www.satellitetoday.com/government-military/2025/05/16/muon-space-advances-to-stage-ii-on-nro-contract-for-commercial-electro-optical-imagery/

https://www.satellitetoday.com/manufacturing/2025/06/12/muon-space-expands-series-b-and-buys-propulsion-startup-in-a-bid-to-scale-production/

Hubble Network Makes Earth-to-Space Bluetooth Satellite Connection; Life360 Global Location Tracking Network


Emerging Cybersecurity Risks in Modern Manufacturing Factory Networks

By Omkar Ashok Bhalekar with Ajay Lotan Thakur

Introduction

With the advent of new Industry 5.0 standards and ongoing advancements in Industry 4.0, the manufacturing landscape faces a revolutionary challenge, one that not only demands sustainable use of environmental resources but also compels constant changes to industrial security postures to tackle modern threats. Technologies such as the Internet of Things (IoT) in manufacturing, private 4G/5G, cloud-hosted applications, edge computing, and real-time streaming telemetry are fueling smart factories and making them more productive.

Although this evolution facilitates industrial automation, innovation and high productivity, it also greatly expands the exposure footprint for cyberattacks. Industrial cybersecurity is essential for mission-critical manufacturing operations; it is a key cornerstone to safeguard factories and avoid major downtime.

With the rapid amalgamation of IT and OT (Operational Technology), a hack or a data breach can cause operational disruptions like line down situations, halt in production lines, theft or loss of critical data, and huge financial damage to an organization.

Industrial Networking

Why does modern manufacturing demand cybersecurity? Below are a few reasons why it is essential:

  • Convergence of IT and OT: Industrial control systems (ICS) which used to be isolated or air-gapped are now all inter-connected and hence vulnerable to breaches.
  • Enlarged Attack Surface: Every device or component in the factory which is on the network is susceptible to threats and attacks.
  • Financial Loss: Cyberattacks such as WannaCry or targeted Blue Screen of Death (BSOD) incidents can cost millions of dollars per minute and result in a complete shutdown of operations.
  • Disruptions in Logistics Network: Supply chain can be greatly disarrayed due to hacks or cyberattacks causing essential parts shortage.
  • Legislative Compliance: Strict laws and regulations such as CISA, NIST, and ISA/IEC 62443 are proving crucial, mandating frameworks to safeguard industries.

It is important to understand and adapt to the changing trends in the cybersecurity domain, especially when so much is at risk. History shows that industry learns from past mistakes: advancement happens at a fast pace, but external threats will ultimately limit that advancement if they are not taken into account.

This attitude of adaptability needs to become an integral part of the mindset and practices of the cybersecurity sphere, and it should not be limited to industrial security; such practices can scale across other technological fields. Moreover, securing industries does not just mean physical security; it also opens avenues for cybersecurity experts to learn and innovate in applications and software such as Manufacturing Execution Systems (MES), which are crucial for critical operations.

Greatest Cyberattacks in Manufacturing of All Time:

Familiarizing ourselves with the different categories of attacks, and the scale at which they have historically hampered the manufacturing domain, is pivotal. In this section we highlight some real-world cybersecurity incidents.

Ransomware (Colonial Pipeline, 2021; WannaCry, 2017):

The Colonial Pipeline attack brought the U.S. East Coast to a standstill through an extreme fuel and gasoline shortage after attackers compromised employee credentials.

Cause: The root cause was compromised VPN account credentials. A VPN account that had not been used for a long time and lacked multi-factor authentication (MFA) was breached after its credentials appeared in a password leak on the dark web. The ransomware group “DarkSide” exploited this entry point to gain access to Colonial Pipeline’s IT systems. They did not initially penetrate operational technology systems, but the interdependence of IT and OT systems caused operational impacts. Once inside, the attackers escalated privileges and exfiltrated 100 GB of data within two hours. Ransomware was then deployed to encrypt critical business systems, and Colonial Pipeline proactively shut down the pipeline, fearing lateral movement into OT networks.

Effect: The pipeline, which supplies nearly 45% of the fuel to the U.S. East Coast, was shut down for six days. Mass fuel shortages occurred across several U.S. states, leading to public panic and fuel hoarding. Colonial Pipeline paid a $4.4 million ransom, of which approximately $2.3 million was later recovered by the FBI. The incident led to a Presidential Executive Order on Cybersecurity and heightened regulations around critical infrastructure cybersecurity, and it exposed how business IT network vulnerabilities can lead to real-world critical infrastructure impacts, even without OT being directly targeted.

Industrial Sabotage (Stuxnet, 2009):

This unprecedented software worm hijacked an entire critical facility and sabotaged its machines, rendering them defunct.

Cause: Nation-state-developed malware specifically targeting Industrial Control Systems (ICS), with an unprecedented level of sophistication. Stuxnet was developed jointly by the U.S. (NSA) and Israel (Unit 8200) under operation “Olympic Games.” The target was Iran’s uranium enrichment program at the Natanz nuclear facility. The worm was introduced via USB drives into the air-gapped network and exploited four zero-day vulnerabilities in Windows, unprecedented at the time. It specifically targeted Siemens Step7 software running on Windows, which controls Siemens S7-300 PLCs. Stuxnet identified systems controlling the centrifuges used for uranium enrichment, then reprogrammed the PLCs to intermittently change the rotational speed of the centrifuges, causing mechanical stress and failure, while reporting normal operations to operators. It used rootkits at both the Windows and PLC level to remain stealthy.

Effect: The worm destroyed approximately 1,000 IR-1 centrifuges (roughly 10% of Iran’s enrichment capacity) and set back Iran’s nuclear program by one to two years. It introduced a new era of cyberwarfare in which malware caused physical destruction, and it raised global awareness of the vulnerabilities in industrial control systems (ICS). Iran responded by accelerating its cyber capabilities, forming the Iranian Cyber Army. ICS/SCADA security became a top global priority, especially in the energy and defense sectors.

Upgrade Spoofing (SolarWinds Orion supply chain attack, 2020):

Attackers injected malicious code into Orion software updates, which were then downloaded and installed by thousands of customer organizations.

Cause: Compromise of the SolarWinds build environment, leading to a supply chain attack. Attackers known as Cozy Bear, linked to Russia’s foreign intelligence service, gained access to SolarWinds’ development pipeline. Malicious code was inserted into Orion Platform updates released between March and June 2020. Customers who downloaded the updates installed malware known as SUNBURST, which created a backdoor in Orion’s signed DLLs. Over 18,000 customers were potentially affected, including roughly 100 high-value targets. After the initial exploit, the attackers used manual lateral movement, privilege escalation, and custom command-and-control (C2) infrastructure to exfiltrate data.

Effect: The breach included major U.S. government agencies: DHS, DoE, DoJ, Treasury, the State Department, and more. It also affected top corporations including Cisco, Intel, Microsoft, and FireEye; FireEye discovered the breach after noticing unusual two-factor authentication activity. The attack exposed critical supply chain vulnerabilities and demonstrated how a single point of compromise could lead to nationwide espionage. It prompted the creation of Cybersecurity Executive Order 14028, Zero Trust mandates, and widespread adoption of Software Bill of Materials (SBOM) practices.

Spyware (Pegasus, 2016-2021):

Cause: Zero-click and zero-day exploits leveraged by NSO Group’s Pegasus spyware, which is sold to governments. Pegasus can infect phones without any user interaction (zero-click exploits), abusing vulnerabilities in WhatsApp, iMessage, and browsers such as Safari on iOS, as well as zero-day flaws on Android devices. It is delivered via SMS, WhatsApp messages, or silent push notifications. Once installed, it provides complete surveillance capability: access to the microphone, camera, GPS, calls, photos, texts, and encrypted apps. The zero-click iOS exploit ForcedEntry allowed complete compromise of an iPhone. The malware is extremely stealthy, often removing itself after execution, and bypassed Apple’s BlastDoor sandbox and Android’s hardened security modules.

Effect: Pegasus was used by multiple governments to surveil activists, journalists, lawyers, opposition leaders, and even heads of state. The 2021 Pegasus Project, led by Amnesty International and Forbidden Stories, revealed a leaked list of 50,000 potential targets. Phones of high-profile individuals, including international journalists, the French president, and Indian opposition figures, were allegedly targeted, triggering legal and political fallout. NSO Group was blacklisted by the U.S. Department of Commerce, Apple filed a lawsuit against NSO Group in 2021, and the revelations renewed debates over the ethics and regulation of commercial spyware.

Other common types of attacks:

Phishing and Smishing: These attacks deliver links, emails, or text messages that appear legitimate but are crafted by bad actors for financial gain or identity theft.

Social Engineering: Shoulder surfing may sound funny, but time and again even the most expert security personnel have been outsmarted and suffered data or credential leaks. Rather than relying on technical vulnerabilities, this type of attack targets human psychology to gain access to or break into systems. The attacker manipulates people into revealing confidential information using techniques such as reconnaissance, engagement, baiting, or offering quid pro quo services.

Security Runbook for Manufacturing Industries:

To ensure ongoing enhancement of industrial security postures and preserve critical manufacturing operations, the following are 11 security procedures and tactics that, based on established frameworks, provide 360-degree protection:

A. Incident Handling Tactics (First Line of Defense): Teams should continuously improve incident response with the help of documentation and response tooling. Coordination between teams, communication, root cause analysis, and reference documentation are the keys to successful incident response.

B. Zero Trust Principles (never trust, always verify): Use strong device management tools to ensure all end devices are in compliance, enforced via trusted certificates, network access control (NAC), and policy. Perform regular and random checks on users’ data access patterns and assign role-based policies that limit full access to critical resources.

C. Secure Communication and Data Protection: Use endpoint or cloud-based secure sessions with IPsec VPN tunnels so that all traffic can be controlled and monitored. Encrypt all user data with data protection and recovery software such as BitLocker.

D. Secure IT Infrastructure: Harden network equipment such as switches, routers, and wireless access points with 802.1X (dot1x), port security, and EAP-TLS or PEAP. Implement edge-based monitoring to detect anomalies, and build redundant network infrastructure to minimize mean time to repair (MTTR).

E. Physical Security: Locks, badge readers or biometric systems for all critical rooms and network cabinets are a must. A security operations center (SOC) can help monitor internal theft or sabotage incidents.

F. North-South and East-West Traffic Isolation: Safety traffic and external traffic can be rate limited using firewalls or edge compute devices. 100% isolation is wishful thinking, so measures need to be taken to constantly monitor for any security gaps.

G. Industrial Hardware for Industrial Applications: Use industrial-grade, IP67- or IP68-rated network equipment to avoid breakdowns due to environmental factors. Localized industrial firewalls can provide the desired granularity at the edge, reducing the need to strictly follow the Purdue model.

H. Next-Generation Firewalls with Application-Level Visibility: Incorporate stateful, application-aware firewalls, which provide more control over zones and policies and can differentiate applications by their behavioral characteristics. Deploy tools that perform deep packet inspection and function as intrusion prevention/detection platforms (IPS/IDS).

I. Threat and Traffic Analyzers: Network traffic analyzers can provide Layer 1 through Layer 7 security monitoring by detecting and responding to malicious traffic patterns. Self-healing networks combine automation with monitoring tools that detect traffic anomalies and automatically correct non-compliance (a simple anomaly-flagging sketch follows after item K below).

J. Information Security and Software Management: Companies must maintain a repository of trusted certificates, software, and releases, and push regular patches for critical bugs. Keep constant track of release notes and CVEs (Common Vulnerabilities and Exposures) for all vendor software.

K. Idiot-Proofing (How NOT to Get Hacked): Regular training familiarizes employees with cyberattacks and jargon like cryptojacking and honeynets, creating awareness. Encourage employees and workers to voice their opinions, and provide a platform to resolve their questions about security threats.
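As a concrete illustration of item I above, here is a minimal, self-contained sketch (our own example with assumed data, not any vendor’s product) of flagging traffic anomalies against a rolling statistical baseline:

```python
# Minimal illustration: flag intervals whose byte count deviates sharply from a
# rolling baseline -- the kind of pattern detection a traffic analyzer automates.
from statistics import mean, stdev

def find_anomalies(byte_counts, window=12, threshold=3.0):
    """Return indices of intervals that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` intervals."""
    anomalies = []
    for i in range(window, len(byte_counts)):
        baseline = byte_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(byte_counts[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Hypothetical per-minute byte counts with one suspicious spike at the end.
samples = [1000, 1020, 990, 1010, 1005, 995, 1015, 1000, 1008, 992, 1003, 998, 25000]
print(find_anomalies(samples))  # [12]
```

A production traffic analyzer would of course operate on live flow or telemetry data and feed its alerts into the incident-handling process described in item A.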

Current Industry Perspective and Software Response

In response to the escalating tide of cyberattacks in manufacturing, from the Triton malware striking industrial safety systems to LockerGoga shutting down production at Norsk Hydro, there has been a sea change in how the software industry supports operational resilience. Security companies are combining cutting-edge threat detection with ICS/SCADA systems, delivering purpose-built solutions such as zero-trust network access, behavior-based anomaly detection, and encrypted machine-to-machine communications. Companies such as Siemens and Claroty are leading the way, treating security as something designed in rather than bolted on as an afterthought. A prime example is Dragos’ OT-specific threat intelligence and incident response offering, which has become a focal point in the fight against nation-state attacks and ransomware operations targeting critical infrastructure.

Bridging the Divide Between IT and OT: A Two-Way Street

With the intensification of OT and IT convergence, perimeter-based defense is no longer sufficient. Manufacturers are embracing emerging strategies such as Cybersecurity Mesh Architecture (CSMA) and applying IT-centric philosophies such as DevSecOps within the OT environment to foster secure-by-default deployment habits. The trend also brings attention to IEC 62443 conformance and NIST-based risk assessment frameworks tailored to manufacturing. With legacy PLCs now networked and exposed to internet-borne threats, companies are adopting micro-segmentation, secure remote access, and real-time monitoring solutions that unify security across both environments. Schneider Electric, for example, is helping manufacturers securely link IT/OT systems with scalable cybersecurity programs.

Conclusion

In a nutshell, modern manufacturing is no longer just about fast, scalable input-output systems; it is an ecosystem in which cybersecurity and manufacturing must harmonize, and just as healthcare is considered critical to people, secure factories are considered critical to the economy. Three decades of cyberattacks on critical infrastructure such as pipelines, nuclear plants, and power grids not only warrant the world’s attention but also call for regulatory standards that every entity in manufacturing must follow.

As mankind keeps making progress and sprinting toward the next industrial revolution, it is an absolute exigency to make industrial cybersecurity a keystone of upcoming critical manufacturing facilities and a strong foundation for operational excellence. Now is the right time to buy into industrial security; the leaders who choose to be “Cyberfacturers” will survive to tell the tale, and the rest may just serve as stark reminders of what happens when pace outruns security.


About Author:

Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in Data center architecture, Manufacturing infrastructure, and Sustainable solutions with extensive experience in designing resilient industrial networks and building smart factories and AI data centers with scalable networks. He is also the author of the Book Autonomous and Predictive Networks: The future of Networking in the Age of AI and co-author of Quantum Ops – Bridging Quantum Computing & IT Operations. Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.

Countdown to Q-day: How modern-day Quantum and AI collusion could lead to The Death of Encryption

By Omkar Ashok Bhalekar with Ajay Lotan Thakur

Behind the quiet corridors of research laboratories and the whir of supercomputer data centers, a stealth revolution is gathering force, one with the potential to reshape the very building blocks of cybersecurity. At its heart are qubits, the building blocks of quantum computing, and the accelerant force of generative AI. Combined, they form a double-edged sword capable of breaking today’s encryption and opening the door to an era of both vast opportunity and unprecedented danger.

Modern Cryptography is Fragile

Modern computer security relies on the assumed intractability of certain mathematical problems. RSA encryption, introduced in 1977 by Rivest, Shamir, and Adleman, relies on the principle that factoring a 2048-bit number into primes is computationally infeasible for classical computers (RSA paper, 1978). Diffie-Hellman key exchange, described by Whitfield Diffie and Martin Hellman in 1976, offers secure key agreement over an insecure channel based on the discrete logarithm problem (Diffie-Hellman paper, 1976). Elliptic-Curve Cryptography (ECC), described independently in 1985 by Victor Miller and Neal Koblitz and based on the hardness of elliptic curve discrete logarithms, remains resistant to brute-force attacks while using smaller key sizes for the same level of security (Koblitz ECC paper, 1987).
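As a quick refresher on what is at stake mathematically (standard textbook material, not tied to any particular implementation), the classic finite-field Diffie-Hellman exchange looks like this:

```latex
\text{Public parameters: a large prime } p \text{ and generator } g. \\
\text{Alice picks secret } a \text{ and sends } A = g^{a} \bmod p; \quad
\text{Bob picks secret } b \text{ and sends } B = g^{b} \bmod p. \\
\text{Shared key: } K = B^{a} \bmod p = A^{b} \bmod p = g^{ab} \bmod p. \\
\text{An eavesdropper sees } (p, g, A, B) \text{ and must solve the discrete logarithm problem to recover } a \text{ or } b.
```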

But quantum computing flips the script. Thanks to algorithms like Shor’s Algorithm, a sufficiently powerful quantum computer could factor large numbers exponentially faster than classical computers, rendering RSA and ECC utterly useless. Meanwhile, Grover’s Algorithm gives attackers a quadratic speedup against symmetric-key systems like AES.

What would take classical computers centuries or millennia, quantum computers could boil down to days or even hours at the right scale. In fact, experts reckon that cracking RSA-2048 with Shor’s Algorithm could take just 20 million physical qubits, and that estimate keeps shrinking each year.
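To put rough numbers on that asymmetry, the commonly quoted complexity estimates (standard textbook figures, not from the article) compare as follows:

```latex
\text{Classical factoring (GNFS):}\quad
\exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\Big)
\quad\text{(sub-exponential in the size of } N\text{)} \\
\text{Shor's algorithm:}\quad O\!\big((\log N)^{3}\big)
\quad\text{(polynomial in the bit length of } N\text{)} \\
\text{Grover's algorithm ($k$-bit key search):}\quad O\!\big(2^{k/2}\big)
\quad\text{(so AES-256 retains roughly 128 bits of effective security)}
```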

Generative AI adds fuel to the fire

While quantum computing threatens to undermine encryption itself, generative AI is playing a role that is no less consequential. By mass-producing malware, phishing emails, and synthetic identities, generative AI models, such as large language models and diffusion-based image synthesizers, are lowering the bar for sophisticated cyberattacks.

Worse still, generative AI can be used to model and probe vulnerabilities in cryptographic implementations, including post-quantum cryptography. It can help train reinforcement-learning agents that optimize side-channel attacks, or profile quantum circuits to uncover new behaviors.

With quantum computing on the horizon, generative AI is both a sophisticated research tool and a weaponization risk. On one hand, security researchers use generative AI to generate, examine, and predict vulnerabilities in cryptographic systems, informing the development of quantum-resistant algorithms. On the other, malicious actors exploit it to automate the production of complex attack vectors such as advanced malware, phishing campaigns, and synthetic identities, radically lowering the barrier to high-impact cyberattacks. This dual use shortens the timeline for adversaries to exploit breached or transitional cryptographic infrastructures and narrows the window in which defenders can deploy effective quantum-safe security solutions.

Real-World Implications

The impact of broken cryptography is real, and it puts the foundations of everyday life at risk:

1. Online Banking (TLS/HTTPS)

When you visit your bank’s website, the “https” in the address bar signifies encrypted communication over TLS (Transport Layer Security). Most TLS deployments rely on RSA or elliptic-curve keys to establish session keys. A quantum attack on those exchanges would let an eavesdropper recover the session keys and read the traffic, including sensitive banking data.
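
As a small, hedged illustration (standard library only; "example.com" is a placeholder host, substitute any site such as your bank’s domain), the snippet below shows how to inspect which TLS version and cipher suite your client actually negotiates. Modern TLS 1.3 connections use elliptic-curve Diffie-Hellman for the key exchange, which is precisely the math Shor’s Algorithm threatens.

# Minimal sketch: inspect the negotiated TLS version and cipher suite of a server.
import socket, ssl

host = "example.com"               # hypothetical host; substitute your bank's domain
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        name, version, bits = tls.cipher()   # negotiated cipher suite details
        print(tls.version(), name, f"{bits}-bit session key")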

2. Cryptocurrencies

Bitcoin, Ethereum, and other cryptocurrencies use ECDSA (Elliptic Curve Digital Signature Algorithm) to sign transactions. If quantum computers can break ECDSA, an attacker could forge signatures and steal digital assets. Researchers have already modeled scenarios in which a quantum computer derives private keys from public keys exposed on the blockchain, enabling theft or even the rewriting of transaction history.
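
The sketch below (illustrative only, using a recent version of the third-party “cryptography” package; the transaction payload is invented) shows an ECDSA signature on the secp256k1 curve used by Bitcoin and Ethereum. Verification needs only the public key, so anyone able to derive the private key from that public key, as Shor’s Algorithm would allow, can forge signatures.

# Illustrative ECDSA signing/verification on secp256k1 (third-party "cryptography" package).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

tx = b"send 1.0 BTC to address X"          # stand-in for a transaction payload
signature = private_key.sign(tx, ec.ECDSA(hashes.SHA256()))
public_key.verify(signature, tx, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("signature verified with the public key alone")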

3. Government Secrets and Intelligence Archives

National security agencies worldwide rely heavily on encryption algorithms such as RSA and AES to protect sensitive information, including secret communications, intelligence briefs, and critical infrastructure data. Of these, AES-256 remains secure even against quantum computers: it is a symmetric-key cipher, and Grover’s algorithm offers only a quadratic speedup against it, so brute-force attacks remain prohibitively expensive in time and resources. By contrast, asymmetric algorithms such as RSA and ECC, which underpin most public key infrastructures, are fundamentally vulnerable to quantum attacks that solve the hard mathematical problems their security rests on.
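
The back-of-the-envelope arithmetic below illustrates that asymmetry: under Grover’s quadratic speedup, a k-bit symmetric key costs roughly 2^(k/2) quantum search iterations instead of 2^k classical guesses, so the effective strength is halved. That is why AES-256 (≈128 bits effective) is still considered quantum-resistant while the asymmetric schemes fall outright.

# Rough illustration of Grover's quadratic speedup against symmetric keys.
for key_bits in (128, 256):
    effective = key_bits // 2
    print(f"AES-{key_bits}: ~2^{key_bits} classical guesses vs "
          f"~2^{effective} Grover iterations (effective strength ≈ {effective} bits)")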

This disparity opens a huge security gap. Information that is well protected today may not be tomorrow, once sufficiently powerful quantum computers become available, a scenario often called the “harvest now, decrypt later” threat. Intelligence agencies and adversaries alike could be quietly hoarding encrypted communications, confident that quantum technology will eventually be able to decrypt the stockpile. The Snowden disclosures put this threat in the limelight by revealing that the NSA captures and retains vast amounts of global internet traffic, including diplomatic cables, military orders, and personal communications. These repositories of encrypted data, unreadable as they stand now, are an unseen vulnerability: when Q-Day arrives, the point at which practical quantum computers can defeat RSA and ECC, the confidentiality of decades’ worth of sensitive communications could be irretrievably lost.

Such a compromise would have catastrophic consequences for national security and geopolitical stability, exposing classified negotiations, intelligence operations, and war plans to adversaries. This specter has compelled governments and security agencies to accelerate the transition to post-quantum cryptography standards and to explore quantum-resistant encryption schemes that can safeguard the confidentiality and integrity of information in the quantum era.

Arms Race Toward Post-Quantum Cryptography

In response, organizations like NIST are leading the development of post-quantum cryptographic standards, selecting algorithms believed to be quantum resistant. But migration is glacial. Retrofitting billions of devices and services with new cryptographic foundations is a logistical nightmare: not merely software updates, but hardware upgrades, re-certifications, interoperability testing, and compatibility testing across worldwide networks and critical infrastructure systems, all while minimizing downtime and security exposure.

Building a quantum computer large enough to factor RSA-2048 is an enormous undertaking; it is estimated to require millions of physical qubits with very low error rates. Today’s most advanced machines offer at most on the order of a thousand noisy physical qubits, and their error rates are still too high to sustain long computations. However, with continued progress in quantum error correction, materials research, and qubit coherence times, specialists warn that practical quantum decryption capability may arrive sooner than most organizations are prepared for.

This transition period, when old and new environments coexist, is where the danger is greatest. Attackers can use generative AI to hunt for hybrid environments that still employ legacy encryption: automating the identification of outdated crypto implementations, producing targeted exploits at scale, and choreographing multi-step attacks that overwhelm conventional security monitoring and patching mechanisms.

Preparing for the Convergence

To defend against this coming storm, security strategy must evolve:

  • Inventory Cryptographic Assets: firms must take stock of where and how encryption is used across their environments.
  • Adopt Crypto-Agility: systems should be designed to switch between encryption algorithms without a full redesign (see the sketch after this list).
  • Stress-Test Against Quantum Threats: use AI tools to probe encryption schemes against quantum-style attacks.
  • Adopt PQC and Zero-Trust Models: shift toward quantum-resistant cryptography and architectures that assume breach as the default state.
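
Below is a minimal crypto-agility sketch, not any vendor’s API: the algorithm names, registry, and MacProvider class are invented for illustration. The point is that application code calls a stable interface while the algorithm is chosen by configuration, so swapping in a post-quantum primitive later is a config change rather than a redesign.

# Minimal crypto-agility sketch (illustrative registry; standard library only).
import hashlib, hmac, os

HASHERS = {"sha256": hashlib.sha256, "sha3_256": hashlib.sha3_256}

class MacProvider:
    """MAC provider selected by name; new algorithms register here, callers don't change."""
    def __init__(self, algorithm: str, key: bytes):
        self._hasher = HASHERS[algorithm]      # fail early if the algorithm is unknown
        self._key = key

    def tag(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, self._hasher).digest()

key = os.urandom(32)
provider = MacProvider("sha3_256", key)        # flip to "sha256" (or a future PQC MAC) via config
print(provider.tag(b"quantum-safe by configuration").hex())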

In Summary

Quantum computing is not just a looming threat; it is the countdown to a new cryptographic arms race. Generative AI has already reshaped the cyber threat landscape, and combined with quantum power it becomes a force multiplier. This is a two-front challenge that demands more than incremental adjustment; it requires a paradigm shift in cybersecurity.

Panic will not help us. Preparation will.

Abbreviations

RSA – Rivest, Shamir, and Adleman
ECC – Elliptic-Curve Cryptography
AES – Advanced Encryption Standard
TLS – Transport Layer Security
HTTPS – Hypertext Transfer Protocol Secure
ECDSA – Elliptic Curve Digital Signature Algorithm
NSA – National Security Agency
NIST – National Institute of Standards and Technology
PQC – Post-Quantum Cryptography

***Google’s Gemini is used in this post to paraphrase some sentences to add more context. ***

About Author:

Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in data center architecture, manufacturing infrastructure, and sustainable solutions, with extensive experience in designing resilient industrial networks and building smart factories and AI data centers with scalable networks. He is also the author of the book Autonomous and Predictive Networks: The Future of Networking in the Age of AI and co-author of Quantum Ops – Bridging Quantum Computing & IT Operations. Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.

Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers

By Omkar Ashok Bhalekar with Ajay Lotan Thakur

As demand for data keeps rising, driven by generative AI, real-time analytics, 8K streaming, and edge computing, data centers face an escalating dilemma: how to maintain performance without overheating. Traditional air-cooled server rooms, once adequate for straightforward web hosting and storage, are being pushed to their thermal limits by modern compute-intensive workloads. While the world’s digital backbone runs hot, innovators are diving deep, all the way to the ocean floor. Say hello to immersion cooling and undersea data farms, two technologies poised to revolutionize how the world stores and processes data.

Heat Is the Silent Killer of the Internet – In every data center, heat is the unobtrusive enemy. When racks of high-performance GPUs, CPUs, and ASICs all operate at once, they generate massive amounts of heat. The old approach of gigantic HVAC systems and chilled-air manifolds is reaching its technological and environmental limits.

In the majority of installations, 35–40% of total energy consumption goes to simply cooling the hardware rather than running it. As model sizes and inference loads explode (think ChatGPT, DALL·E, or Tesla FSD), traditional cooling infrastructures simply aren’t up to the task without costly upgrades or environmental degradation. This is why a paradigm shift is under way.
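
To put that cooling share in perspective, the rough arithmetic below (illustrative only, and ignoring lighting and other non-IT overheads) converts a 35–40% cooling fraction into the facility’s implied PUE, i.e. total facility energy divided by IT energy.

# Rough, illustrative arithmetic: cooling fraction of total energy -> implied PUE.
for cooling_fraction in (0.35, 0.40):
    pue = 1 / (1 - cooling_fraction)   # treats everything that is not cooling as IT load
    print(f"cooling = {cooling_fraction:.0%} of total -> PUE ≈ {pue:.2f}")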

Liquid cooling is not an option everywhere, owing to gaps in infrastructure, cost, and geography, so every player in the ecosystem must still step up on energy efficiency. The burden crosses multiple domains: chip manufacturers need to deliver far greater performance per watt through advanced semiconductor design, and software developers need to write fundamentally low-power software by optimizing algorithms and reducing computational overhead.

Along with these basic improvements, memory manufacturers are designing low-power solutions, system manufacturers are building more efficient power-delivery networks, and cloud operators are making their data center operations more efficient while increasing the use of renewable energy sources. As Microsoft Chief Environmental Officer Lucas Joppa said, “We need to think about sustainability not as a constraint, but as an innovative driver that pushes us to build more efficient systems across every layer of the stack of technology.”

However, despite these multifaceted efficiency gains, thermal management remains a significant bottleneck with a profound impact on overall system performance and energy consumption. Ineffective cooling forces processors to throttle, which cancels out the benefit of better chips and optimized software. The result is a self-perpetuating loop in which wasteful thermal management counteracts efficiency gains made elsewhere in the system.

In this blog post, we address the cooling side of energy consumption and consider how future thermal-management technology can be an efficiency multiplier across the entire computing infrastructure. We explore how proper cooling strategies not only reduce the direct energy consumed by the cooling equipment itself but also let the other components of the system operate at their maximum efficiency.

What Is Immersion Cooling?

Immersion cooling cools servers by submerging them in carefully designed, non-conductive fluids (typically dielectric liquids) that transfer heat much more efficiently than air. Immersion liquids are harmless to electronics; in fact, they allow direct liquid contact cooling with no risk of short-circuiting or corrosion.

Two general types exist:

  • Single-phase immersion, with the fluid remaining liquid and transferring heat by convection.
  • Two-phase immersion, wherein the fluid boils at a low temperature, carries heat away as vapor, and condenses back into liquid in a closed loop.

According to Vertiv’s research, in high-density data centers, liquid cooling improves the energy efficiency of IT and facility systems compared to air cooling. In their fully optimized study, the introduction of liquid cooling created a 10.2% reduction in total data center power and a more than 15% improvement in Total Usage Effectiveness (TUE).

Total Usage Effectiveness is calculated using the formula below:

TUE = ITUE × PUE, where ITUE = Total Energy Into the IT Equipment / Total Energy Into the Compute Components, and PUE = Power Usage Effectiveness.
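
A short worked example of that formula follows; the energy figures are made up but plausible, chosen only to show how the three metrics relate.

# Worked example of the TUE formula with illustrative numbers.
facility_energy = 1_500.0   # kW drawn by the whole data center
it_energy       = 1_000.0   # kW delivered to IT equipment
compute_energy  =   800.0   # kW reaching the actual compute components (CPUs/GPUs/memory)

pue  = facility_energy / it_energy    # Power Usage Effectiveness
itue = it_energy / compute_energy     # IT Usage Effectiveness (fans, PSU losses, etc.)
tue  = itue * pue                     # Total Usage Effectiveness
print(f"PUE={pue:.2f}, ITUE={itue:.2f}, TUE={tue:.2f}")   # PUE=1.50, ITUE=1.25, TUE=1.88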

Reimagining Data Centers Underwater
Imagine shipping an entire data center in a steel capsule and sinking it to the ocean floor. That’s no longer sci-fi.

Microsoft’s Project Natick demonstrated the concept by deploying a sealed underwater data center off the Orkney Islands, powered entirely by renewable energy and cooled by the surrounding seawater. Over its two-year lifespan, the submerged facility showed:

  • A server failure rate 1/8th that of land-based centers.
  • No need for on-site human intervention.
  • Efficient, passive cooling by natural sea currents.

Why underwater? Seawater is a vast, readily available heat sink, and the subsea environment is naturally less prone to temperature fluctuations, dust, vibration, and power surges. Most coastal metropolises, which are among the biggest consumers of cloud services, are within about 100 miles of a viable deployment site, which would dramatically reduce latency.

Why This Tech Matters Now

Data centers already account for about 2–3% of the world’s electricity, and with the rapid growth of AI and metaverse workloads, that figure will climb. AI training and generative inference workloads can consume up to 10x the power per rack of conventional server workloads, putting tremendous pressure on cooling gear and sustainability goals. Legacy air-cooling technologies are hitting thermal and density thresholds, making immersion cooling a critical enabler of future scalability. According to Submer, a Barcelona-based immersion cooling company, immersion cooling can cut the energy consumed by cooling systems by up to 95% and enable higher rack density, providing a path to sustainable data center growth under AI-driven demand.

Advantages & Challenges

Immersion and submerged data centers possess several key advantages:

  • Sustainability – Lower energy consumption and lower carbon footprints are paramount as ESG (Environmental, Social, Governance) goals become business necessities.
  • Scalability & Efficiency – Immersion allows more density per square foot, reducing real estate and overhead facility expenses.
  • Reliability – Liquid-cooled and underwater systems suffer fewer mechanical failures thanks to reduced thermal stress, fewer moving parts, and less oxidation.
  • Security & Autonomy – Sealed underwater pods and autonomous liquid-cooled systems are difficult to tamper with and can be remotely monitored and updated, making them well suited to zero-trust environments.

While immersion cooling and submerged data centers have clear advantages, they also come with challenges and limitations:

  • Maintenance and Accessibility Challenges – Both options make hardware maintenance complex. Immersion cooling requires careful removal of components from, and cleaning of, dielectric fluids, whereas underwater data centers offer extremely limited physical access; entire modules may have to be retrieved for repair, which means longer downtimes.
  • High Initial Costs and Deployment Complexity – Construction of immersion tanks or underwater enclosures involves significant capital investment in specially designed equipment, infrastructure, and deployment techniques. Underwater data centers are also accompanied by marine engineering, watertight modules, and intricate site preparation.
  • Environmental and Regulatory Concerns – Both approaches involve environmental issues and regulatory adherence. Immersion systems struggle with fluid waste disposal regulations, while underwater data centers have marine environmental impact assessments, permits, and ongoing ecosystem protection mechanisms.
  • Technology Maturity and Operational Risks – These are immature technologies with minimal historical data on long-term performance and reliability. Potential problems include leakage of liquids in immersion cooling or damage and biofouling in underwater installation, leading to uncertain large-scale adoption.

Industry Momentum

Various companies are leading the charge:

  • GRC (Green Revolution Cooling) and Submer offer immersion cooling solutions to hyperscalers and enterprises.
  • Iceotope delivers precision liquid cooling for HPC, while Alibaba, Google, and Meta are testing immersion cooling at scale to support AI and ML clusters.
  • Microsoft, through Project Natick, has researched the commercial viability of underwater data centers as off-grid, modular units.

Hyperscalers are starting to design entire zones of their new data centers specifically for liquid-cooled GPU pods, while smaller edge data centers are adopting immersion tech to run quietly and efficiently in urban environments.

The Future of Data Centers: Autonomous, Sealed, and Everywhere

Looking ahead, the trend is clear: data centers are becoming more intelligent, compact, and environmentally integrated. We’re entering an era where:

  • AI-based DCIM software predicts and prevents failures in real time.
  • Edge nodes with immersion cooling can be located anywhere: smart factories, offshore oil rigs.
  • Entire data centers might be built as prefabricated modules and dropped into oceans, deserts, or even space.

The general principle? Compute must not be limited by land, heat, or humans.

Final Thoughts

In the fight to enable the digital future, air is a luxury. Immersed in liquid or bolted to the seafloor, data centers are shifting to cool smarter, not harder.

Underwater installations and liquid cooling are no longer far-fetched ideas; they’re lifelines to a scalable, sustainable web.

So, tomorrow’s “Cloud” won’t be in the sky, it will hum quietly under the sea.

About Author:
Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in Data center architecture, Manufacturing infrastructure, and Sustainable solutions. With extensive experience in designing resilient industrial networks and building smart factories and AI data centers with scalable networks, Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.

Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions

Indonesian network operator Indosat Ooredoo Hutchison has deployed Nokia Energy Efficiency (part of the company’s Autonomous Networks portfolio, described below) to reduce energy demand and carbon dioxide emissions across its RAN using AI. Nokia’s energy control system uses AI and machine learning algorithms to analyze real-time traffic patterns, enabling the operator to automatically adjust or shut down idle and unused radio equipment during periods of low network demand.

The multi-vendor, AI-driven energy management solution can reduce energy costs and carbon footprint with no negative impact on network performance or customer experience. It can be rolled out in a matter of weeks.
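
To illustrate the general idea described above, here is a deliberately simplified, hypothetical sketch; the threshold, safety margin, and cell names are invented, and this is in no way Nokia’s actual algorithm. It merely shows the shape of the decision: compare forecast traffic per cell against a low-demand threshold and flag radios as candidates for sleep, with a margin to protect quality of service.

# Simplified, hypothetical sketch of traffic-based radio sleep selection (not Nokia's algorithm).
from typing import Dict, List

def radios_to_sleep(forecast_prb_load: Dict[str, float],
                    threshold: float = 0.15,
                    safety_margin: float = 0.05) -> List[str]:
    """Return cells whose predicted load is comfortably below the sleep threshold."""
    return [cell for cell, load in forecast_prb_load.items()
            if load + safety_margin < threshold]

# Hypothetical per-cell load forecasts (fraction of PRB utilization) for an overnight window.
overnight_forecast = {"cell-jakarta-017": 0.04, "cell-surabaya-122": 0.22,
                      "cell-bandung-031": 0.07}
print(radios_to_sleep(overnight_forecast))   # -> ['cell-jakarta-017', 'cell-bandung-031']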

Indosat is aiming to transform itself from a conventional telecom operator into an AI TechCo—powered by intelligent technologies, cloud-based platforms, and a commitment to sustainability. By embedding automation and intelligence into network operations, Indosat is unlocking new levels of efficiency, agility, and environmental responsibility across its infrastructure.

Earlier this year Indosat claimed to be the first operator to deploy AI-RAN in Indonesia, in a deal involving the integration of Nokia’s 5G cloud RAN solution with Nvidia’s Aerial platform. The Memorandum of Understanding (MoU) between the three firms covered the development, testing, and deployment of AI-RAN, with an initial focus on moving AI inferencing workloads onto the AI Aerial platform, followed by the integration of RAN workloads on the same platform.

“As data consumption continues to grow, so does our responsibility to manage resources wisely. This collaboration reflects Indosat’s unwavering commitment to environmental stewardship and sustainable innovation, using AI to not only optimize performance, but also reduce emissions and energy use across our network,” said Desmond Cheung, Director and Chief Technology Officer at Indosat Ooredoo Hutchison.

Indosat was the first operator in Southeast Asia to achieve ISO 50001 certification for energy management—underscoring its pledge to minimize environmental impact through operational excellence. The collaboration with Nokia builds upon a successful pilot project, in which the AI-powered solution demonstrated its ability to reduce energy consumption in live network conditions.

Following the pilot project, Nokia deployed its Energy Efficiency solution across its entire Nokia RAN footprint in Indonesia, including Sumatra, Kalimantan, and Central and East Java.

“We are very pleased to be helping Indosat deliver on its commitments to sustainability and environmental responsibility, establishing its position both locally and internationally. Nokia Energy Efficiency reflects the important R&D investments that Nokia continues to make to help our customers optimize energy savings and network performance simultaneously,” said Henrique Vale, VP for Cloud and Network Services APAC at Nokia.

Nokia’s Autonomous Networks portfolio, including its Autonomous Networks Fabric solution, utilizes Agentic AI to deliver advanced security, analytics, and operations capabilities that provide operators with a holistic, real-time view of the network so they can reduce costs, accelerate time-to-value, and deliver the best customer experience.

Autonomous Networks Fabric is a unifying intelligence layer that weaves together observability, analytics, security, and automation across every network domain; allowing a network to behave as one adaptive system, regardless of vendor, architecture, or deployment model.

References:

https://www.nokia.com/newsroom/indosat-ooredoo-hutchison-and-nokia-partner-to-reduce-energy-demand-and-support-ai-powered-sustainable-operations/

https://www.telecoms.com/ai/nokia-to-supply-indosat-ooredoo-hutchison-with-ai-powered-energy-efficient-ran-software

Analysts weigh in: AT&T in talks to buy Lumen’s consumer fiber unit – Bloomberg

Bloomberg News reports that AT&T is in exclusive talks to acquire Lumen Technologies’ consumer fiber operations in a deal that could value the unit at more than $5.5 billion, citing people with knowledge of the matter who asked not to be identified discussing confidential information. The terms of the unfinalized deal could still change, or the talks could collapse, according to the report.

“If the rumored price is correct, it is a great deal for AT&T,” wrote the financial analysts at New Street Research in a note to investors. “The value per [fiber] location at $5.5 billion would be about $1,300 which compares to Frontier at $2,400, Ziply at $3,800, and Metronet at $4,700,” the analysts continued.
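
The quick arithmetic behind that comparison is sketched below; the implied location count is derived from the analysts’ quoted per-location figure rather than any disclosed number, so treat it as illustrative.

# Illustrative arithmetic behind the per-location comparison (derived, not disclosed, figures).
deal_value = 5.5e9                 # rumored value of Lumen's consumer fiber unit
value_per_location = 1_300         # New Street Research's estimate for this deal
implied_locations = deal_value / value_per_location
print(f"implied fiber locations ≈ {implied_locations / 1e6:.1f} million")

comparables = {"Frontier": 2_400, "Ziply": 3_800, "Metronet": 4_700}
for name, per_loc in comparables.items():
    print(f"{name}: ${per_loc:,} per location -> {per_loc / value_per_location:.1f}x the Lumen price")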

The potential move to offload Lumen’s consumer fiber business, which provides high-speed internet services to residential customers, comes as Lumen focuses on the AI boom in its business segment for growth while grappling with the rapid decline of its legacy business.  Lumen initiated the process to sell its consumer fiber operations, Reuters reported in December.  “We’re looking at all possible arrangements,” Lumen CFO Chris Stansbury said during the company’s quarterly conference call, according to a Seeking Alpha transcript.   “Ultimately, that consumer asset was going to sit in the space where the market was going to consolidate and at that point of consolidation, we were not going to be a consolidator,” Stansbury said.
Bundling of fiber-to-the-home and wireless gives large providers lower churn and more pricing strength, Stansbury said, adding that the asset has garnered “a great deal of interest.” Any transaction is likely to help Lumen lighten its debt load, he added.
…………………………………………………………………………………………………………………………
Lumen’s mass market business served 2.6 million residential and small business customers at the end of the third quarter of 2024. Roughly 1 million of them were on fiber connections, while the rest were on the operator’s copper network.  The fiber-optic based network provider has over 1,700 wire centers across its total network, with consumer fiber available in about 400 of them.
“For Lumen, a sale at $5.5 billion would be disappointing,” the New Street analysts wrote. “The rumored range was $6-9 billion. Most clients seemed to focus on the low end of that range, anticipating perhaps $6 billion for a sale of just the fiber asset.”
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Sidebar – Lots of fiber deals:
Verizon is pursuing Frontier Communications for $20 billion; Canada’s BCE is hoping to acquire Ziply Fiber for $5 billion; and T-Mobile and KKR are seeking to buy Metronet.  Earlier this month Crown Castle sold its small cell business to the EQT Active Core Infrastructure fund for $4.25 billion and its fiber business to Zayo for $4.25 billion.
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
AT&T has been investing in its high-speed fiber internet offerings to help drive faster subscriber and revenue growth. Earlier this month, it forecast first-quarter adjusted profit in line with analysts’ estimates.  If AT&T does close a purchase of Lumen’s fiber business, the deal would solidify AT&T’s position as the nation’s largest fiber network operator.
……………………………………………………………………………………………………………………………………..
References:

https://www.bloomberg.com/news/articles/2025-03-25/at-t-said-in-talks-to-buy-lumen-s-consumer-fiber-unit?embedded-checkout=true  (paywall)

https://www.reuters.com/markets/deals/att-talks-buy-lumens-consumer-fiber-unit-bloomberg-news-reports-2025-03-25/

https://www.lightreading.com/fttx/is-at-t-getting-a-screaming-deal-on-lumen-s-fiber-

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Microsoft choses Lumen’s fiber based Private Connectivity Fabric℠ to expand Microsoft Cloud network capacity in the AI era

Lumen, Google and Microsoft create ExaSwitch™ – a new on-demand, optical networking ecosystem

ACSI report: AT&T, Lumen and Google Fiber top ranked in fiber network customer satisfaction

Lumen to provide mission-critical communications services to the U.S. Department of Defense

AT&T sets 1.6 Tbps long distance speed record on its white box based fiber optic network
