Multi-vendor Open RAN stalls as EchoStar/Dish shuts down its 5G network, leaving Mavenir in the lurch

Last week’s announcement that EchoStar/Dish Network will sell $23 billion worth of spectrum licenses to AT&T was very bad news for Mavenir. As a result of that deal, Dish Network’s 5G Open RAN network, which runs partly on Mavenir’s software, is to be decommissioned. Dish Network had been constructing a fourth nationwide U.S. mobile network with new Open RAN suppliers – one of the only true multi-vendor Open RANs worldwide.

Credit: Kristoffer Tripplaar/Alamy Stock Photo

EchoStar’s decision to shut down its 5G network marks a very sad end for the world’s largest multi-vendor Open RAN and will have ramifications for the entire industry. “If you look at all the initiatives, what the US government did or in general, they are the only ones who actually spent a good chunk of money to really support open RAN architecture,” said Pardeep Kohli, the CEO of Mavenir, one of the vendors involved in the Dish Network project. “So now the question is where do you go from here?”

In its original updates on 5G network plans, Dish revealed it would host its 5G core – the part that will survive the spectrum sale – in the AWS public cloud. The hyperscaler’s data facilities have also hosted Mavenir RAN software running on servers known as central units (CUs).

The open RAN element is the fronthaul interface between Mavenir’s distributed unit (DU) software and radios provided by Japan’s Fujitsu. The ability to connect its software to another company’s radios validates Mavenir’s claim to be an open RAN vendor, says Kohli. While other suppliers boast compatibility with open RAN specifications, commercial deployments pairing vendors over this interface remain rare.

Mavenir has evidently been frustrated by the continued dominance of Huawei, Ericsson and Nokia, whose combined RAN market share grew from 75.1% in 2023 to 77.5% last year, according to research from Omdia, an Informa company. Dish Network alone would not have made a sufficient difference for Mavenir and other open RAN players, according to Kohli. “It helped us come this far,” he said. “Now it’s up to how far other people want to take it.” A retreat from open RAN would, he thinks, be a “bad outcome for all the western operators,” leaving them dependent on a Nordic duopoly in countries where Chinese vendors are now banned.

“If they (telcos) don’t support it (multi-vendor OpenRAN), and other people are not supporting it, we are back to a Chinese world and a non-Chinese world,” he said. “In the non-Chinese world, you have Ericsson and Nokia, and in the Chinese world, it’s Huawei and ZTE. And that’s going to be a pretty bad outcome if that’s where it ends up.”

…………………………………………………………………………………………………………………………………………………………………

Open RAN Outside the U.S.:

Outside the U.S., the situation is no better for Open RAN. Only Japan’s Rakuten and Germany’s 1&1 have attempted to build a “greenfield” Open RAN from scratch. As well as reporting billions of dollars in losses on network deployment, Rakuten has struggled to attract customers. It owns the RAN software it has deployed but counts only 1&1 as a significant customer. And Rakuten’s original 4G rollout was not based on the industry’s open RAN specifications, according to critics. “They were not pure,” said Mavenir’s Kohli.

Plagued by delays and other problems, 1&1’s rollout has been a further bad advert for Open RAN. For the greenfield operators, the issue is not the maturity of open RAN technology. Rather, it is the investment and effort needed to build any kind of new nationwide telecom network in a country that already has infrastructure options. And the biggest brownfield operators, despite professing support for open RAN, have not backed any of the new entrants.

RAN Market Concentration:

  • Stefan Pongratz, an analyst with Dell’Oro, found that five of the six regions he tracks are today classed as “highly concentrated,” with a Herfindahl-Hirschman Index (HHI) score of more than 2,500. “This suggests that the supplier diversity element of the open RAN vision is fading,” wrote Pongratz in a recent blog.
  • A study from Omdia (owned by Informa) shows the combined RAN market share of Huawei, Ericsson and Nokia grew from 75.1% in 2023 to 77.5% last year. The only significant alternative to the European and Chinese vendors is Samsung, whose market share shrank from 6.1% to 4.8% over this period.
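For context, the Herfindahl-Hirschman Index is computed by squaring each supplier’s market share (in percentage points) and summing the results. A minimal sketch with hypothetical shares, just to show the arithmetic behind the 2,500 threshold:

```python
# Herfindahl-Hirschman Index (HHI): the sum of squared market shares, each
# share in percentage points. Antitrust guidelines class markets above 2,500
# as "highly concentrated" -- the threshold Dell'Oro applies to RAN regions.
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

# Hypothetical four-vendor regional RAN market (illustrative, not Dell'Oro data):
print(hhi([40, 30, 20, 10]))  # 3000 -> well above the 2,500 line
```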

Concentration appears especially high in the U.S., where Ericsson now boasts a RAN market share of more than 50% and which generates about 44% of Ericsson’s sales (India, Ericsson’s second-biggest market, contributed just 4% of revenues in the recent second quarter). That is partly because smaller regional operators, previously ordered to replace Huawei in their networks, spent a chunk of the government’s “rip and replace” funds on Ericsson rather than open RAN, says Kohli. Ironically, though, Ericsson owes much of the recent growth in its U.S. market share to a single-vendor deal with AT&T that has been marketed as open RAN [1]. Under that contract, it is replacing Nokia at a third of AT&T’s sites, having already been the supplier for the other two thirds.

Note 1. In December 2023, AT&T awarded Ericsson a multi-year, $14 billion Open RAN contract to serve as the foundation for its open network deployment, with a goal of having 70% of its wireless traffic on open platforms by late 2026. That large, single-vendor award for the core infrastructure was criticized for potentially undermining the goal of Open RAN, which was to encourage competition among multiple network equipment and software providers. AT&T’s claim of a multi-vendor network turned out to be just a smokescreen. Fujitsu/1Finity supplied the third-party radios used in AT&T’s first Open RAN call with Ericsson.

Indeed, AT&T’s open RAN claims have been difficult to take seriously, especially since it identified Mavenir as a third supplier of radio units, behind Ericsson and Japan’s Fujitsu, just a few months before Mavenir quit the radio unit market. Mavenir stopped manufacturing and distributing Open RAN radios in June 2025 as part of a financial restructuring and a shift to a software-focused business model. 

…………………………………………………………………………………………………………………….

As noted above, Kohli describes EchoStar/Dish Network as the only U.S. player that was spending “a good chunk of money to really support open RAN architecture.”

Ultimately, he thinks the big U.S. telcos may come to regret their heavier reliance on the RAN gear giants. “It may look great for AT&T and Verizon today, but they’ll be funding this whole thing as a proprietary solution going forward because, really, there’s no incentive for anybody else to come in,” he said.

…………………………………………………………………………………………………………………….

References:

https://www.lightreading.com/open-ran/echostar-rout-leaves-its-open-ran-vendors-high-and-dry

https://www.lightreading.com/open-ran/mavenir-ceo-warns-of-ericsson-and-nokia-duopoly-as-open-ran-stalls

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T to deploy Fujitsu and Mavenir radios in crowded urban areas

Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Mavenir and NEC deploy Massive MIMO on Orange’s 5G SA network in France

Spark New Zealand completes 5G SA core network trials with AWS and Mavenir software

Mavenir at MWC 2022: Nokia and Ericsson are not serious OpenRAN vendors

Ericsson expresses concerns about O-RAN Alliance and Open RAN performance vs. costs

Nokia and Mavenir to build 4G/5G public and private network for FSG in Australia

 

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Both telecom and enterprise networks are being reshaped by the bandwidth and latency demands of AI. Network operators that fail to modernize their architectures risk falling behind. Why? AI workloads are network killers: they demand massive east-west traffic, ultra-low latency, and predictable throughput.

  • Real-time observability is becoming non-negotiable, as enterprises need to detect and fix issues before they impact AI model training or inference.
  • Self-driving networks are moving from concept to reality, with AI not just monitoring but actively remediating problems.
  • The competitive race is now about who can integrate AI into networking most seamlessly — and HPE/Juniper’s Mist AI, Cisco’s assurance stack, and Nvidia’s AI fabrics are three different but converging approaches.

Cisco, HPE/Juniper, and Nvidia are designing AI-optimized networking equipment, with a focus on real-time observability, lower latency and increased data center performance for AI workloads.  Here’s a capsule summary:

Cisco: AI-Ready Infrastructure:

  • Cisco is embedding AI telemetry and analytics into its Silicon One chips, Nexus 9000 switches, and Catalyst campus gear.
  • The focus is on real-time observability via its ThousandEyes platform and AI-driven assurance in DNA Center, aiming to optimize both enterprise and AI/ML workloads.
  • Cisco is also pushing AI-native data center fabrics to handle GPU-heavy clusters for training and inference.
  • Cisco claims “exceptional momentum” and leadership in AI: >$800M in AI infrastructure orders taken from web-scale customers in Q4, bringing the FY25 total to over $2B.
  • Cisco Nexus switches are now integrated with Nvidia’s Spectrum-X architecture to deliver high-speed networking for AI clusters.

HPE + Juniper: AI-Native Networking Push:

  • Following its $13.4B acquisition of Juniper Networks, HPE has merged Juniper’s Mist AI platform with its own Aruba portfolio to create AI-native, “self-driving” networks.
  • Key upgrades include:

- Agentic AI troubleshooting that uses generative AI workflows to pinpoint and fix issues across wired, wireless, WAN, and data center domains.

- Marvis AI Assistant with enhanced conversational capabilities — IT teams can now ask open-ended questions like “Why is the Orlando site slow?” and get contextual, actionable answers.

- Large Experience Model (LEM) with Marvis Minis — digital twins that simulate user experiences to predict and prevent performance issues before they occur.

- Apstra integration for data center automation, enabling autonomous service provisioning and cross-domain observability.

Nvidia: AI Networking at Compute Scale:

  • Nvidia’s Spectrum-X Ethernet platform and Quantum-2 InfiniBand (both from the Mellanox acquisition) are designed for AI supercomputing fabrics, delivering ultra-low latency and congestion control for GPU clusters.
  • In partnership with HPE, Nvidia is integrating NVIDIA AI Enterprise and Blackwell architecture GPUs into HPE Private Cloud AI, enabling enterprises to deploy AI workloads with optimized networking and compute together.
  • Nvidia’s BlueField DPUs offload networking, storage, and security tasks from CPUs, freeing resources for AI processing.

………………………………………………………………………………………………………………………………………………………..

Here’s a side-by-side comparison of how Cisco, HPE/Juniper, and Nvidia are approaching AI‑optimized enterprise networking — so you can see where they align and where they differentiate:

Core AI Networking Vision
  • Cisco: AI-ready infrastructure with embedded analytics and assurance for enterprise + AI workloads
  • HPE/Juniper: AI-native, “self-driving” networks across campus, WAN, and data center
  • Nvidia: High-performance fabrics purpose-built for AI supercomputing

Key Platforms
  • Cisco: Silicon One chips, Nexus 9000 switches, Catalyst campus gear, ThousandEyes, DNA Center
  • HPE/Juniper: Mist AI platform, Marvis AI Assistant, Marvis Minis, Apstra automation
  • Nvidia: Spectrum-X Ethernet, Quantum-2 InfiniBand, BlueField DPUs

AI Integration
  • Cisco: AI-driven assurance, predictive analytics, real-time telemetry
  • HPE/Juniper: Generative AI for troubleshooting, conversational AI for IT ops, digital twin simulations
  • Nvidia: AI-optimized networking stack tightly coupled with GPU compute

Observability
  • Cisco: End-to-end visibility via ThousandEyes + DNA Center
  • HPE/Juniper: Cross-domain observability (wired, wireless, WAN, DC) with proactive issue detection
  • Nvidia: Telemetry and congestion control for GPU clusters

Automation
  • Cisco: Policy-driven automation in campus and data center fabrics
  • HPE/Juniper: Autonomous provisioning, AI-driven remediation, intent-based networking
  • Nvidia: Offloading networking/storage/security tasks to DPUs for automation

Target Workloads
  • Cisco: Enterprise IT, hybrid cloud, AI/ML inference & training
  • HPE/Juniper: Enterprise IT, edge, hybrid cloud, AI/ML workloads
  • Nvidia: AI training & inference at hyperscale, HPC, large-scale data centers

Differentiator
  • Cisco: Strong enterprise install base + integrated assurance stack
  • HPE/Juniper: Deep AI-native operations with user experience simulation
  • Nvidia: Ultra-low latency, high-throughput fabrics for GPU-dense environments

Key Takeaways:

  • Cisco is strongest in enterprise observability and broad infrastructure integration.
  • HPE/Juniper is leaning into AI‑native operations with a heavy focus on automation and user experience simulation.
  • Nvidia is laser‑focused on AI supercomputing performance, building the networking layer to match its GPU dominance.
Conclusions:
  • Cisco leverages its market leadership, customer base and strategic partnerships to integrate AI with existing enterprise networks.
  • HPE/Juniper challenges rivals with an AI-native, experience-first network management platform. 
  • Nvidia aims to dominate the full-stack AI infrastructure, including networking.

Muon Space in deal with Hubble Network to deploy world’s first satellite-powered Bluetooth network

Muon Space, a provider of end-to-end space systems specializing in mission-optimized satellite constellations, today announced its most capable satellite platform, MuSat XL, a high-performance 500 kg-class spacecraft designed for the most demanding next-generation low Earth orbit (LEO) missions. Muon also announced its first customer for the XL Platform: Hubble Network, a Seattle-based space-tech pioneer building the world’s first satellite-powered Bluetooth network. IEEE Techblog reported Hubble Network’s first Bluetooth-to-space satellite connection in this post.

The XL Platform adds a dramatically expanded capability tier to the flight-proven Halo™ stack, delivering more power, agility, and integration flexibility while preserving the speed, scalability, and cost-effectiveness needed for constellation deployment. Optimized for Earth observation (EO) and telecommunications missions for commercial and national security customers that require multi-payload operations, extreme data throughput, high-performance inter-satellite networking, and cutting-edge attitude control and pointing, the XL Platform sets a new industry benchmark for mission performance and value. “XL is more than a bigger bus – it’s a true enabler for customers pushing the boundaries of what’s possible in orbit, like Hubble,” said Jonny Dyer, CEO of Muon Space. “Their transformative BLE technology represents the future of space-based services and we are ecstatic to enable their mission with the XL Platform and our Halo stack.”

The Muon Space XL platform combines exceptional payload power, precise pointing, and high-bandwidth networking to enable advanced space capabilities across defense, disaster response, and commercial missions.

Enhancing Global BLE Coverage:

In 2024, Hubble became the first company to establish a Bluetooth connection directly to a satellite, fueling global IoT growth. Using MuSat XL, it will deploy a next-generation BLE payload featuring a phased-array antenna and a receiver 20 times more powerful than its CubeSat predecessor, enabling BLE detection at 30 times lower power and direct connectivity for ultra-low-cost, energy-efficient devices worldwide. MuSat XL’s large payload accommodation, multi-kW power system, and cutting-edge networking and communications capabilities are key enablers for advanced services like Hubble’s.
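To put those multipliers in familiar link-budget terms, here is the simple conversion to decibels (arithmetic on the article’s figures only, not Hubble specifications):

```python
import math

# Convert the quoted linear factors into dB, the usual link-budget scale.
receiver_gain_db = 10 * math.log10(20)  # "20x more powerful" receiver ≈ +13.0 dB
device_power_db = 10 * math.log10(30)   # "30x lower power" device ≈ 14.8 dB less
print(f"receiver: +{receiver_gain_db:.1f} dB, device: -{device_power_db:.1f} dB")
```

On this arithmetic, the new payload buys roughly 13 dB of sensitivity, which is what lets an ordinary BLE chip close the link from orbit while transmitting at a small fraction of its former power.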

“Muon’s platform gives us the scale and power to build a true Bluetooth layer around the Earth,” said Alex Haro, Co-Founder and CEO of Hubble Network.

The first two MuSat XL satellites will provide a 12-hour global revisit time, with a scalable design for faster coverage. Hubble’s BLE Finding Network supports critical applications in logistics, infrastructure, defense, and consumer technology.

A Next Generation Multi-Mission Satellite Platform:

MuSat XL is built for operators who need real capability – more power, larger apertures, more flexibility, and more agility – and with the speed to orbit and reliability that Muon has already demonstrated with its other platforms in orbit since 2023. Built on the foundation of Muon’s heritage 200 kg MuSat architecture, MuSat XL is a 500 kg-class bus that extends the Halo technology stack’s performance envelope to enable high-impact, real-time missions.

Key capabilities include:

  • 1 kW+ orbit average payload power – Supporting advanced sensors, phased arrays, and edge computing applications.
  • Seamless, internet-standards-based, high-bandwidth, low-latency communications and optical crosslink networking – extremely high-volume downlink (>5 TB/day) and near real-time communications for time-sensitive operations critical for defense, disaster response, and dynamic tasking.
  • Flexible onboard interface, network, compute – Muon’s PayloadCore architecture enables rapid hardware/software integration of payloads and deployment of cloud-like workflows to onboard network, storage, and compute.
  • Precise, stable, and agile pointing – Attitude control architected for the rigorous needs of next-generation EO and RF payloads.

In the competitive small satellite market, MuSat XL offers standout advantages in payload volume, power availability, and integration flexibility – making it a versatile backbone for advanced sensors, communications systems, and compute-intensive applications. The platform is built for scale: modular, manufacturable, and fully integrated with Muon’s vertically developed stack, from custom instrument design to full mission operations via the Halo technology stack.

Muon designed MuSat XL to deliver exceptional performance without added complexity. Early adopters like Hubble signal a broader trend in the industry: embracing platforms that offer operational autonomy, speed, and mission longevity at commercial scale.

About Muon Space:

Founded in 2021, Muon Space is an end-to-end space systems company that designs, builds, and operates mission-optimized satellite constellations to deliver critical data and enable real-time compute and decision-making in space. Its proprietary technology stack, Halo™, integrates advanced spacecraft platforms, robust payload integration and management, and a powerful software-defined orchestration layer to enable high-performance capabilities at unprecedented speed – from concept to orbit. With state-of-the-art production facilities in Silicon Valley and a growing track record of commercial and national security customers, Muon Space is redefining how critical Earth intelligence is delivered from space.  Muon Space employs a team of more than 150 engineers and scientists, including industry experts from Skybox, NASA, SpaceX, and others.  SOURCE: Muon Space

About Hubble Network:

Founded in 2021, Hubble is creating the world’s first satellite-powered Bluetooth network, enabling global connectivity without reliance on cellular infrastructure. The Hubble platform makes it easy to transmit low-bandwidth data from any Bluetooth-enabled device, with no infrastructure required. Their global BLE network is live and expanding rapidly, delivering real-time visibility across supply chains, fleets, and facilities.  Visit www.hubble.com for more information.

References:

https://www.muonspace.com/

https://www.prnewswire.com/news-releases/muon-space-unveils-xl-satellite-platform-announces-hubble-network-as-first-customer-302523719.html

https://www.satellitetoday.com/government-military/2025/05/16/muon-space-advances-to-stage-ii-on-nro-contract-for-commercial-electro-optical-imagery/

https://www.satellitetoday.com/manufacturing/2025/06/12/muon-space-expands-series-b-and-buys-propulsion-startup-in-a-bid-to-scale-production/

Hubble Network Makes Earth-to-Space Bluetooth Satellite Connection; Life360 Global Location Tracking Network

WiFi 7: Backgrounder and CES 2025 Announcements

Emerging Cybersecurity Risks in Modern Manufacturing Factory Networks

By Omkar Ashok Bhalekar with Ajay Lotan Thakur

Introduction

With the advent of new Industry 5.0 standards and ongoing Industry 4.0 advancements, the manufacturing landscape faces a revolutionary challenge: one that demands not only sustainable use of environmental resources but also constant updates to industrial security postures to tackle modern threats. Technologies such as the Internet of Things (IoT) in manufacturing, private 4G/5G, cloud-hosted applications, edge computing, and real-time streaming telemetry are effectively fueling smart factories and making them more productive.

Although this evolution facilitates industrial automation, innovation, and high productivity, it also greatly enlarges the exposure footprint for cyberattacks. Industrial cybersecurity is essential for mission-critical manufacturing operations; it is a key cornerstone in safeguarding factories and avoiding major downtime.

With the rapid amalgamation of IT and OT (Operational Technology), a hack or a data breach can cause operational disruptions like line down situations, halt in production lines, theft or loss of critical data, and huge financial damage to an organization.

Industrial Networking

Why does modern manufacturing demand cybersecurity? Below are a few reasons why cybersecurity is essential in modern manufacturing:

  • Convergence of IT and OT: Industrial control systems (ICS) that used to be isolated or air-gapped are now interconnected and hence vulnerable to breaches.
  • Enlarged Attack Surface: Every networked device or component in the factory is susceptible to threats and attacks.
  • Financial Loss: Cyberattacks such as WannaCry or targeted Blue Screen of Death (BSOD) incidents can cost millions of dollars per minute and result in a complete shutdown of operations.
  • Disruptions in Logistics Networks: Supply chains can be thrown into disarray by hacks or cyberattacks causing essential parts shortages.
  • Legislative Compliance: Strict regulations and frameworks, such as those from CISA and NIST and the ISA/IEC 62443 standard, are proving crucial and mandate safeguards for industry.

It is important to understand and adapt to changing trends in the cybersecurity domain, especially with so many significant factors at risk. History teaches that we advance fastest when we learn from past mistakes; advancing at pace while ignoring external threats invites setbacks.

This attitude of adaptability needs to become an integral part of the mindset and practices of the cybersecurity sphere, and it should not be limited to industrial security; such practices can scale across other technological fields. Moreover, securing industries does not just mean physical security: it also opens avenues for cybersecurity experts to learn and innovate in applications and software, such as the Manufacturing Execution Systems (MES) that are crucial for critical operations.

The Greatest Cyberattacks in Manufacturing of All Time:

It is pivotal to understand the different categories of attacks, and their scales, that have historically hampered the manufacturing domain. In this section we highlight some real-world cybersecurity incidents.

Ransomware (Colonial Pipeline, 2021):

This attack brought the U.S. East Coast to a near standstill through an extreme shortage of fuel and gasoline, after attackers compromised employee credentials.

Cause: The root cause was compromised VPN account credentials. A VPN account that had not been used for a long time and lacked multi-factor authentication (MFA) was breached; its credentials were part of a password leak on the dark web. The ransomware group DarkSide exploited this entry point to gain access to Colonial Pipeline’s IT systems. The attackers did not initially penetrate operational technology systems, but the interdependence of IT and OT systems caused operational impacts. Once inside, they escalated privileges and exfiltrated 100 GB of data within two hours, then deployed ransomware to encrypt critical business systems. Colonial Pipeline proactively shut down the pipeline, fearing lateral movement into OT networks.

Effect: The pipeline, which supplies nearly 45% of the fuel to the U.S. East Coast, was shut down for six days. Mass fuel shortages occurred across several U.S. states, leading to public panic and fuel hoarding. Colonial Pipeline paid a $4.4 million ransom, of which approximately $2.3 million was later recovered by the FBI. The incident led to a Presidential Executive Order on Cybersecurity and heightened regulation of critical infrastructure, and it exposed how business IT network vulnerabilities can have real-world critical infrastructure impacts even when OT is not directly targeted.

Industrial Sabotage (Stuxnet, 2009):

This unprecedented and novel software worm hijacked an entire critical facility and sabotaged its machines, rendering them defunct.

Cause: Nation-state-developed malware specifically targeting industrial control systems (ICS), with an unprecedented level of sophistication. Stuxnet was reportedly developed jointly by the U.S. (NSA) and Israel (Unit 8200) under operation “Olympic Games”. The target was Iran’s uranium enrichment program at the Natanz nuclear facility. The worm was introduced via USB drives into the air-gapped network and exploited four zero-day vulnerabilities in Windows, unprecedented at the time. It specifically targeted Siemens Step7 software running on Windows, which controls Siemens S7-300 PLCs. Stuxnet identified systems controlling the centrifuges used for uranium enrichment, then reprogrammed the PLCs to intermittently change the centrifuges’ rotational speed, causing mechanical stress and failure, while reporting normal operations to operators. It used rootkits at both the Windows and PLC level to remain stealthy.

Effect: The worm destroyed approximately 1,000 IR-1 centrifuges (~10% of Iran’s enrichment capacity) and set back Iran’s nuclear program by 1-2 years. It introduced a new era of cyberwarfare in which malware caused physical destruction, and it raised global awareness of the vulnerabilities in industrial control systems. Iran responded by accelerating its cyber capabilities, forming the Iranian Cyber Army. ICS/SCADA security became a top global priority, especially in the energy and defense sectors.

Update Spoofing (SolarWinds Orion supply chain attack, 2020):

Attackers injected malicious code into Orion software updates, which were then installed by thousands of organizations.

Cause: Compromise of the SolarWinds build environment, leading to a supply chain attack. The attackers, known as Cozy Bear and linked to Russia’s foreign intelligence service, gained access to SolarWinds’ development pipeline. Malicious code was inserted into Orion Platform updates released between March and June 2020; customers who downloaded the updates installed malware known as SUNBURST, which created a backdoor in Orion’s signed DLLs. Over 18,000 customers were potentially affected, including roughly 100 high-value targets. After the initial exploit, attackers used manual lateral movement, privilege escalation, and custom command-and-control (C2) infrastructure to exfiltrate data.

Effect: The breach reached major U.S. government agencies (DHS, DoE, DoJ, Treasury, the State Department, and more) and affected top corporations including Cisco, Intel, Microsoft, and FireEye. FireEye discovered the breach after noticing unusual two-factor authentication activity. The incident exposed critical supply chain vulnerabilities, demonstrated how a single point of compromise could enable nationwide espionage, and prompted Cybersecurity Executive Order 14028, Zero Trust mandates, and widespread adoption of Software Bill of Materials (SBOM) practices.

Spyware (Pegasus, 2016-2021):

Cause: Zero-click and zero-day exploits leveraged by NSO Group’s Pegasus spyware, sold to governments. Pegasus can infect phones without any user interaction via so-called zero-click exploits. It abuses vulnerabilities in WhatsApp, iMessage, and browsers such as Safari on iOS, as well as zero-days on Android devices, and is delivered via SMS, WhatsApp messages, or silent push notifications. Once installed, it provides complete surveillance capability: access to the microphone, camera, GPS, calls, photos, texts, and encrypted apps. The zero-click iOS exploit ForcedEntry allowed complete compromise of an iPhone. The malware is extremely stealthy, often removing itself after execution, and has bypassed Apple’s BlastDoor sandbox and Android’s hardened security modules.

Effect: Used by multiple governments to surveil activists, journalists, lawyers, opposition leaders, and even heads of state. The 2021 Pegasus Project, led by Amnesty International and Forbidden Stories, revealed a leaked list of 50,000 potential targets. Phones of high-profile individuals, including international journalists, the French president, and Indian opposition figures, were allegedly targeted, triggering legal and political fallout. NSO Group was blacklisted by the U.S. Department of Commerce, Apple filed a lawsuit against NSO Group in 2021, and debates over the ethics and regulation of commercial spyware were renewed.

Other common types of attacks:

Phishing and Smishing: These attacks deliver links, emails, or text messages that appear legitimate but are crafted by bad actors for financial gain or identity theft.

Social Engineering: Shoulder surfing may sound funny, but it is a tale as old as time: even the most expert security personnel have been outsmarted into leaking data or credentials. Rather than relying on technical vulnerabilities, this attack targets human psychology to gain access or break into systems. The attacker manipulates people into revealing confidential information using techniques such as reconnaissance, engagement, baiting, or quid pro quo offers.

Security Runbook for Manufacturing Industries:

To ensure ongoing enhancement of industrial security postures and preserve critical manufacturing operations, here are 11 security procedures and tactics, based on established frameworks, that provide 360-degree protection:

A. Incident Handling Tactics (First Line of Defense): Teams should continuously improve incident response with the help of documentation and response playbooks. Coordination between teams, communication, root cause analysis, and reference documentation are the keys to successful incident response.

B. Zero Trust Principles (Never trust, always verify): Use strong device management tools to ensure all end devices are in compliance (trusted certificates, NAC, enforcement policies). Perform regular and random checks on users’ data access patterns and assign role-based policies limiting access to critical resources.

C. Secure Communication and Data Protection: Use endpoint- or cloud-based secure sessions with IPsec VPN tunnels so that all traffic can be controlled and monitored. All user data must be encrypted, using data protection and recovery software such as BitLocker.

D. Secure IT Infrastructure: Harden network equipment such as switches, routers, and wireless access points with 802.1X port security and EAP-TLS or PEAP. Implement edge-based monitoring solutions to detect anomalies, and build redundant network infrastructure to minimize mean time to repair (MTTR).

E. Physical Security: Locks, badge readers, or biometric systems for all critical rooms and network cabinets are a must. A security operations center (SOC) can help monitor internal theft or sabotage incidents.

F. North-South and East-West Traffic Isolation: Safety traffic and external traffic can be rate-limited using firewalls or edge compute devices. 100% isolation is wishful thinking, so take measures to constantly monitor for security gaps.

G. Industrial Hardware for Industrial Applications: Use industrial-grade, IP67- or IP68-rated network equipment to avoid breakdowns due to environmental factors. Localized industrial firewalls can provide the desired granularity at the edge, reducing the need to follow the full Purdue model.

H. Next-Generation Firewalls with Application-Level Visibility: Incorporate stateful, application-aware firewalls, which provide more control over zones and policies and can differentiate applications’ behavioral characteristics. Deploy tools that perform deep packet inspection and function as intrusion prevention/detection platforms (IPS/IDS).

I. Threat and Traffic Analyzers: Network traffic analyzers can provide Layer 1 through Layer 7 security monitoring by detecting and responding to malicious traffic patterns. Self-healing networks pair automation with monitoring tools that detect traffic anomalies and remediate non-compliance; a minimal sketch of the idea follows this list.

J. Information Security and Software Management: Companies must maintain a repository of trusted certificates, software, and releases, and push regular patches for critical bugs. Keep constant track of release notes and Common Vulnerabilities and Exposures (CVEs) for all vendor software.

K. Idiot-Proofing (How NOT to Get Hacked): Regular training that familiarizes employees with cyberattacks and jargon like cryptojacking and honeynets helps create awareness. Encourage and provide a platform for employees and workers to voice their opinions and resolve their queries regarding security threats.
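To make item I concrete, the sketch below shows the simplest form of baseline anomaly detection, assuming per-device traffic counters are already being collected (e.g., from flow records). All names and thresholds are illustrative; production analyzers use far richer features and models:

```python
# Flag a device whose current traffic deviates sharply from its own baseline.
from statistics import mean, stdev

def is_anomalous(history_bytes, current_bytes, sigma=3.0):
    """True if current traffic exceeds the baseline mean by `sigma` std devs."""
    if len(history_bytes) < 10:  # not enough baseline to judge
        return False
    mu, sd = mean(history_bytes), stdev(history_bytes)
    return current_bytes > mu + sigma * max(sd, 1.0)

baseline = [1200, 1150, 1300, 1250, 1100, 1220, 1180, 1260, 1240, 1210]
print(is_anomalous(baseline, 1350))  # False: within normal variation
print(is_anomalous(baseline, 9800))  # True: possible exfiltration or scan
```

A self-healing network would pair this detection step with an automated response, such as quarantining the offending port via the NAC system, followed by human review.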

Current Industry Perspective and Software Response

In response to the escalating tide of cyberattacks in manufacturing, from the Triton malware striking industrial safety controls to LockerGoga shutting down production at Norsk Hydro, there has been a sea change in how the software industry supports operational resilience. Security companies are combining cutting-edge threat detection with ICS/SCADA systems, delivering purpose-built solutions such as zero-trust network access, behavior-based anomaly detection, and encrypted machine-to-machine communications. Companies such as Siemens and Claroty are leading the way, treating security as a design principle rather than an afterthought. A prime example is Dragos’ OT-specific threat intelligence and incident response offerings, which have become focal points in the fight against nation-state attacks and ransomware operations targeting critical infrastructure.

Bridging the Divide between IT and OT: A Two-Way Street

With the intensification of IT/OT convergence, perimeter-based defense is no longer sufficient. Manufacturers are embracing emerging strategies such as Cybersecurity Mesh Architecture (CSMA) and applying IT-centric philosophies such as DevSecOps within the OT environment to foster secure-by-default deployment habits. The trend also brings attention to IEC 62443 conformity and NIST-based risk assessment frameworks tailored to manufacturing. With legacy PLCs now networked and exposed to internet-borne threats, companies are embracing micro-segmentation, secure remote access, and real-time monitoring solutions that unify security across both environments. Schneider Electric, for example, is helping manufacturers securely link IT/OT systems with scalable cybersecurity programs.

Conclusion

In a nutshell, modern manufacturing is no longer just about fast, scalable input-output systems; it is an ecosystem in which cybersecurity and manufacturing must harmonize. Just as healthcare is considered critical to humans, secure factories are considered quintessential to manufacturing. Three decades of cyberattacks on critical infrastructure such as pipelines, nuclear plants, and power grids not only warrant the world’s attention but also call for regulatory standards that every entity in manufacturing must follow.

As mankind keeps making progress and sprinting toward the next industrial revolution, it is an absolute exigency to make industrial cybersecurity a keystone of upcoming critical manufacturing facilities and a strong foundation for operational excellence. Now is the time to invest in industrial security: the leaders who choose to be “cyberfacturers” will survive to tell the tale, and the rest may serve as stark reminders of what happens when pace outruns security.


About Author:

Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in Data center architecture, Manufacturing infrastructure, and Sustainable solutions with extensive experience in designing resilient industrial networks and building smart factories and AI data centers with scalable networks. He is also the author of the Book Autonomous and Predictive Networks: The future of Networking in the Age of AI and co-author of Quantum Ops – Bridging Quantum Computing & IT Operations. Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.

Countdown to Q-day: How modern-day Quantum and AI collusion could lead to The Death of Encryption

By Omkar Ashok Bhalekar with Ajay Lotan Thakur

Behind the quiet corridors of research laboratories and the whir of supercomputer data centers, a stealth revolution is gathering force, one with the potential to reshape the very foundations of cybersecurity. At its heart are qubits, the building blocks of quantum computing, and the accelerant force of generative AI. Combined, they form a double-edged sword capable of breaking today’s encryption and opening the door to an era of both vast opportunity and unprecedented danger.

Modern Cryptography is Fragile

Modern computer security relies on the presumed intractability of certain mathematical problems. RSA encryption, introduced in 1977 by Rivest, Shamir, and Adleman, relies on the principle that factoring a 2048-bit number into primes is computationally infeasible for classical computers (RSA paper, 1978). Diffie-Hellman key exchange, described by Whitfield Diffie and Martin Hellman in 1976, offers secure key exchange over an insecure channel based on the discrete logarithm problem (Diffie-Hellman paper, 1976). Elliptic-Curve Cryptography (ECC), proposed independently in 1985 by Victor Miller and Neal Koblitz, rests on the hardness of elliptic curve discrete logarithms and resists brute-force attacks with smaller key sizes for the same level of security (Koblitz ECC paper, 1987).
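To see concretely why factoring is the whole game for RSA, here is a toy sketch with textbook-sized primes (illustrative only; it assumes Python 3.8+ for the modular-inverse form of pow, and real RSA uses ~1024-bit primes plus padding):

```python
# Toy RSA: security rests entirely on the secrecy of the factors p and q.
# Anyone who can factor n can recompute the private key d, which is exactly
# what Shor's algorithm would do to a 2048-bit modulus.
p, q = 61, 53              # secret primes (far too small to be safe)
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: requires knowing p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, derived from the factors
msg = 42
cipher = pow(msg, e, n)    # encrypt with the public key (e, n)
print(pow(cipher, d, n))   # 42 -- decrypted with the recovered key
```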

But quantum computing flips the script. Thanks to algorithms like Shor’s, a sufficiently powerful quantum computer could factor large numbers exponentially faster than classical machines, rendering RSA and ECC utterly useless. Meanwhile, Grover’s algorithm gives attackers a quadratic speedup against symmetric-key systems like AES.

What would take classical computers centuries or millennia, quantum computers could boil down to days or even hours at the right scale. In fact, experts reckon that cracking RSA-2048 with Shor’s algorithm could take on the order of 20 million physical qubits, a requirement that keeps shrinking each year.

Generative AI adds fuel to the fire

While quantum computing threatens to undermine encryption itself, generative AI is playing an equally insidious and no less revolutionary role. By mass-producing malware, phishing emails, and synthetic identities, generative AI models (large language models and diffusion-based image synthesizers, for example) are lowering the bar for sophisticated cyberattacks.

Even worse, generative AI can be applied to model and experiment with vulnerabilities in implementations of cryptography, including post-quantum cryptography. It can be employed to assist with training reinforcement learning agents that optimize attacks against side channels or profile quantum circuits to uncover new behaviors.

With quantum computing on the horizon, generative AI is both a sophisticated research tool and a weaponization risk. On one hand, security researchers use generative AI to produce, examine, and predict vulnerabilities in cryptographic systems, informing the development of post-quantum-resistant algorithms. On the other, malicious actors exploit its ability to automate the production of complex attack vectors (advanced malware, phishing campaigns, and synthetic identities), radically lowering the barrier to high-impact cyberattacks. This dual use shortens the timeline for adversaries to exploit breached or transitional cryptographic infrastructures, shrinking the window in which defenders can deploy effective quantum-safe security solutions.

Real-World Implications

The impact of broken cryptography is real, and it puts the foundations of everyday life at risk:

1. Online Banking (TLS/HTTPS)

When you use your bank’s website, the “https” in the address bar signifies encrypted communication over TLS (Transport Layer Security). Most TLS implementations rely on RSA or ECC keys to securely exchange session keys. A quantum attack on those key exchanges would let an attacker decrypt intercepted traffic, including sensitive banking data.

2. Cryptocurrencies

Bitcoin, Ethereum, and other cryptocurrencies use ECDSA (Elliptic Curve Digital Signature Algorithm) for signing transactions. If quantum computers can crack ECDSA, a hacker would be able to forge signatures and steal digital assets. In fact, scientists have already performed simulations in which a quantum computer might be able to extract private keys from public blockchain data, enabling theft or rewriting the history of transactions.

3. Government Secrets and Intelligence Archives

National security agencies around the world rely heavily on encryption algorithms such as RSA and AES to protect sensitive information, including secret messages, intelligence briefs, and critical infrastructure data. Of these, AES-256 remains secure even in the presence of quantum computing: as a symmetric-key cipher it faces only the quadratic speedup of Grover’s algorithm, so brute-force attacks remain gigantic in terms of resources and time. Conversely, asymmetric algorithms like RSA and ECC, which underpin the majority of public key infrastructures, are fundamentally vulnerable to quantum attacks that solve the hard mathematical problems they rely on for security.
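The asymmetry is easy to quantify with the standard rule of thumb (a back-of-the-envelope sketch, not a precise resource estimate):

```python
# Grover's algorithm searches a 2**k keyspace in roughly 2**(k/2) queries,
# effectively halving a symmetric cipher's key length. Shor's algorithm, by
# contrast, breaks RSA/ECC outright rather than merely speeding up search.
def grover_effective_bits(key_bits):
    return key_bits // 2

for k in (128, 256):
    print(f"AES-{k}: ~2^{grover_effective_bits(k)} quantum operations")
# AES-128: ~2^64 (uncomfortable); AES-256: ~2^128 (still far out of reach)
```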

That disparity creates a huge security gap. Information that is well protected today might not be in the future, once sufficiently powerful quantum computers become accessible, a scenario referred to as the “harvest now, decrypt later” threat. Intelligence agencies and adversaries alike could be quietly hoarding encrypted communications, confident that quantum technology will eventually decrypt the stockpile. The Snowden disclosures placed this threat in the limelight by revealing that the NSA captures and keeps vast amounts of global internet traffic, including diplomatic cables, military orders, and personal communications. These repositories of encrypted data, unreadable as they stand now, are an unseen vulnerability; when Q-Day arrives, i.e. the advent of practical quantum computers that can defeat RSA and ECC, the confidentiality of decades’ worth of sensitive communications could be irretrievably lost.

Such a compromise would have apocalyptic consequences for national security and geopolitical stability, exposing classified negotiations, intelligence operations, and war plans to adversaries. That specter has compelled governments and security entities to accelerate the transition to post-quantum cryptography standards and explore quantum-resistant encryption schemes to safeguard the confidentiality and integrity of information in the quantum era.

Arms Race Toward Post-Quantum Cryptography

In response, organizations like NIST are leading the development of post-quantum cryptographic standards, selecting algorithms believed to be quantum resistant. But migration is glacial. Retrofitting billions of devices and services with new cryptographic foundations is a logistical nightmare: not merely software updates but hardware upgrades, re-certifications, and interoperability and compatibility testing across worldwide networks and critical infrastructure systems, all while minimizing downtime and security exposure.

Building a quantum computer large enough to factor RSA-2048 is an enormous task: it is estimated to require millions of physical qubits, with error rates low enough to sustain thousands of error-corrected logical qubits. Today’s high-end quantum machines offer from roughly a hundred to just over a thousand noisy physical qubits, and their error rates are too high to support complex computations over long periods. However, with continued progress in quantum error correction, materials research, and qubit coherence times, specialists warn that effective quantum decryption capability may arrive sooner than most organizations are prepared for.

This transition window, when old and new environments coexist, is where the danger is greatest. Attackers can use generative AI to hunt for hybrid environments in which legacy encryption is still employed, automating the identification of old crypto implementations, producing targeted exploits en masse, and choreographing multi-step attacks that overwhelm conventional security monitoring and patching mechanisms.

Preparing for the Convergence

To defend against this coming storm, security strategy must evolve:

  • Inventory Cryptographic Assets: Firms must take stock of where and how encryption is used across their environments.
  • Adopt Crypto-Agility: Design systems so they can switch between encryption algorithms without a full redesign (see the sketch after this list).
  • Stress-Test Against Quantum Threats: Use AI tools to stress-test encryption schemes against quantum-style attacks.
  • Adopt PQC and Zero-Trust Models: Shift toward quantum-resistant cryptography and architectures that treat breach as the default assumption.
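Here is a minimal sketch of what crypto-agility looks like in code, using a registry pattern; the suite names are real NIST identifiers but the API is invented for illustration, not any specific library’s:

```python
# Crypto-agility: callers bind to a role ("kem", "sig"), never to a concrete
# algorithm, so migrating to a post-quantum scheme is a registry change,
# not an application rewrite.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Suite:
    name: str
    keygen: Callable[[], object]  # placeholder for real key-generation logic

REGISTRY: Dict[str, Suite] = {}

def register(role: str, suite: Suite) -> None:
    REGISTRY[role] = suite

def active(role: str) -> Suite:
    return REGISTRY[role]

# Yesterday: classical key establishment.
register("kem", Suite("rsa-2048-oaep", keygen=lambda: "classical keypair"))
# Today: flip the registration to ML-KEM (FIPS 203); callers are untouched.
register("kem", Suite("ml-kem-768", keygen=lambda: "post-quantum keypair"))
print(active("kem").name)  # ml-kem-768
```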

In Summary

Quantum computing is not just a looming threat; it has started the countdown in a new cryptographic arms race. Generative AI has already reshaped the cyber threat landscape, and in conjunction with quantum power it is a force multiplier. This two-front challenge requires more than incremental adjustment; it requires a cybersecurity paradigm shift.

Panic will not help us. Preparation will.

Abbreviations

RSA – Rivest, Shamir, and Adleman
ECC – Elliptic-Curve Cryptography
AES – Advanced Encryption Standard
TLS – Transport Layer Security
HTTPS – Hypertext Transfer Protocol Secure
ECDSA – Elliptic Curve Digital Signature Algorithm
NSA – National Security Agency
NIST – National Institute of Standards and Technology
PQC – Post-Quantum Cryptography


***Google’s Gemini is used in this post to paraphrase some sentences to add more context. ***

About Author:

Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in Data center architecture, Manufacturing infrastructure, and Sustainable solutions with extensive experience in designing resilient industrial networks and building smart factories and AI data centers with scalable networks. He is also the author of the Book Autonomous and Predictive Networks: The future of Networking in the Age of AI and co-author of Quantum Ops – Bridging Quantum Computing & IT Operations. Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.

Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers

By Omkar Ashok Bhalekar with Ajay Lotan Thakur

As demand for data keeps rising, driven by generative AI, real-time analytics, 8K streaming, and edge computing, data centers face an escalating dilemma: how to maintain performance without overheating. Traditional air-cooled server rooms, once adequate for straightforward web hosting and storage, are being stretched to their thermal limits by modern compute-intensive workloads. While the world’s digital backbone runs hot, innovators are diving deep, all the way to the ocean floor. Say hello to immersion cooling and undersea data farms, two technologies poised to revolutionize how the world stores and processes data.

Heat Is the Silent Killer of the Internet – In every data center, heat is the unobtrusive enemy. When racks of high-performance GPUs, CPUs, and ASICs all operate at once, they generate massive amounts of heat. The old approach, gigantic HVAC systems and chilled-air manifolds, is reaching its technological and environmental limits.

In the majority of installations, 35-40% of total energy consumption is spent simply cooling the hardware rather than running it. As model sizes and inference loads explode (think ChatGPT, DALL·E, or Tesla FSD), traditional cooling infrastructures simply aren’t up to the task without costly upgrades or environmental degradation. Hence the paradigm shift.

Liquid cooling is not an option everywhere, owing to gaps in infrastructure, expense, and geography, so every player in the ecosystem must still up the ante on energy efficiency. The burden crosses multiple domains: chip manufacturers need to deliver far greater performance per watt through advanced semiconductor design, and software developers need to write fundamentally low-power code by optimizing algorithms and reducing computational overhead.

Along with these basic improvements, memory manufacturers are designing low-power solutions, system manufacturers are making more power-efficient delivery networks, and cloud operators are making their data center operations more efficient while increasing the use of renewable energy sources. As Microsoft Chief Environmental Officer Lucas Joppa said, “We need to think about sustainability not as a constraint, but as an innovative driver that pushes us to build more efficient systems across every layer of the stack of technology.”

However, despite these multifaceted efficiency gains, thermal management remains a significant bottleneck with a profound impact on overall system performance and energy consumption. Ineffective cooling can force processors to throttle, undoing the benefits of better chips and optimized software. The result is a self-defeating loop in which wasteful thermal management counteracts efficiency gains made elsewhere in the system.

In this blogpost, we will address the cooling aspect of energy consumption, considering how future thermal management technology can be a multiplier of efficiency across the entire computing infrastructure. We will explore how proper cooling strategies not only reduce direct energy consumption from cooling components themselves but also enable other components of the system to operate at their maximum efficiency levels.

What Is Immersion Cooling?

Immersion cooling cools servers by submerging them in carefully designed, non-conductive fluids (typically dielectric liquids) that transfer heat much more efficiently than air. Immersion liquids are harmless to electronics; in fact, they allow direct liquid contact cooling with no risk of short-circuiting or corrosion.

Two general types exist:

  • Single-phase immersion, with the fluid remaining liquid and transferring heat by convection.
  • Two-phase immersion, wherein the fluid boils at a low temperature, carrying heat away as vapor that condenses back to liquid in a closed loop.

According to Vertiv’s research, in high-density data centers, liquid cooling improves the energy efficiency of IT and facility systems compared to air cooling. In their fully optimized study, the introduction of liquid cooling created a 10.2% reduction in total data center power and a more than 15% improvement in Total Usage Effectiveness (TUE).

Total Usage Effectiveness is calculated using the formula below:

TUE = ITUE × PUE

where ITUE = total energy into the IT equipment / total energy into the compute components, and PUE = Power Usage Effectiveness.
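A quick worked example with invented numbers (not Vertiv’s measurements) shows how the two overheads multiply:

```python
# PUE captures facility overhead; ITUE captures overhead inside the IT gear
# (fans, VRMs, power supplies). TUE multiplies them into one figure.
facility_kw = 1300.0   # total facility draw
it_kw = 1000.0         # power delivered to the IT equipment
compute_kw = 870.0     # power that reaches CPUs/GPUs/memory

pue = facility_kw / it_kw      # 1.30
itue = it_kw / compute_kw      # ~1.15
tue = itue * pue               # ~1.49 kW drawn per kW of useful compute
print(f"PUE={pue:.2f}  ITUE={itue:.2f}  TUE={tue:.2f}")
```

Liquid cooling attacks both factors at once: less fan power inside the servers lowers ITUE, while less chiller power at the facility level lowers PUE.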

Reimagining Data Centers Underwater
Imagine shipping an entire data center in a steel capsule and sinking it to the ocean floor. That’s no longer sci-fi.

Microsoft’s Project Natick demonstrated the concept by deploying a sealed underwater data center off the Orkney Islands, powered entirely by renewable energy and cooled by the surrounding seawater. Over its two-year lifespan, the submerged facility showed:

  • A server failure rate 1/8th that of land-based centers.
  • No need for on-site human intervention.
  • Efficient, passive cooling by natural sea currents.

Why underwater? Seawater is a vast, readily available heat sink, and underwater environments are naturally less prone to temperature fluctuations, dust, vibration, and power surges. Most coastal metropolises, which are the biggest consumers of cloud services, lie within 100 miles of a viable deployment site, which would dramatically reduce latency.

Why This Tech Matters Now

Data centers already account for about 2–3% of the world’s electricity, and with the rapid growth in AI and metaverse workloads, that figure will grow. Generative inference workloads and AI training models consume up to 10x the power per rack of regular server workloads, putting tremendous pressure on cooling gear and sustainability goals. Legacy air-cooling technologies are reaching thermal and density thresholds, making immersion cooling a critical solution for future scalability. According to Submer, a Barcelona-based immersion cooling company, immersion cooling can reduce the energy consumed by cooling systems by up to 95% and enable higher rack density, providing a path to sustainable growth in data centers under AI-driven demand.
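Combining two figures quoted in this post, that roughly 40% of site energy goes to cooling and that immersion can cut cooling energy by up to 95%, gives a rough upper bound on the prize (illustrative arithmetic, not vendor data):

```python
total_kw = 1000.0                # hypothetical site load
cooling_kw = 0.40 * total_kw     # ~400 kW spent on cooling today
saved_kw = 0.95 * cooling_kw     # up to 380 kW avoided with immersion
print(f"new site load: {total_kw - saved_kw:.0f} kW")  # ~620 kW, a ~38% cut
```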

Advantages & Challenges

Immersion and submerged data centers possess several key advantages:

  • Sustainability – Lower energy consumption and lower carbon footprints are paramount as ESG (Environmental, Social, Governance) goals become business necessities.
  • Scalability & Efficiency – Immersion allows more density per square foot, reducing real estate and overhead facility expenses.
  • Reliability – Liquid-cooled and underwater systems have fewer mechanical failures including less thermal stress, fewer moving parts, and less oxidation.
  • Security & Autonomy – Underwater encased pods or autonomous liquid systems are difficult to hack and can be remotely monitored and updated, ideal for zero-trust environments.

While immersion cooling and submerged data centers have clear advantages, they also come with challenges and limitations –

  • Maintenance and Accessibility Challenges – Both options make hardware maintenance complex. Immersion cooling requires careful removal and cleaning of components going into and out of dielectric liquids, whereas underwater data centers offer extremely limited physical access, with entire modules having to be retrieved for repair, translating into longer downtimes.
  • High Initial Costs and Deployment Complexity – Construction of immersion tanks or underwater enclosures involves significant capital investment in specially designed equipment, infrastructure, and deployment techniques. Underwater data centers are also accompanied by marine engineering, watertight modules, and intricate site preparation.
  • Environmental and Regulatory Concerns – Both approaches involve environmental issues and regulatory adherence. Immersion systems struggle with fluid waste disposal regulations, while underwater data centers have marine environmental impact assessments, permits, and ongoing ecosystem protection mechanisms.
  • Technology Maturity and Operational Risks – These are immature technologies with minimal historical data on long-term performance and reliability. Potential problems include leakage of liquids in immersion cooling or damage and biofouling in underwater installation, leading to uncertain large-scale adoption.

Industry Momentum

Various companies are leading the charge:

  • GRC (Green Revolution Cooling) and Submer offer immersion cooling solutions to hyperscalers and enterprises.
  • Iceotope offers precision liquid cooling for HPC, while Alibaba, Google, and Meta are testing immersion cooling at scale to support AI and ML clusters.
  • Microsoft, through Project Natick, has researched the commercial viability of underwater data centers as off-grid, modular units.

Hyperscalers are starting to design entire zones of their new data centers specifically for liquid-cooled GPU pods, while smaller edge data centers are adopting immersion tech to run quietly and efficiently in urban environments.

The Future of Data Centers: Autonomous, Sealed, and Everywhere

Looking ahead, the trend is clear: data centers are becoming more intelligent, compact, and environmentally integrated. We’re entering an era where:

  • AI-based DCIM software predicts and prevents failures in real time.
  • Edge nodes with immersion cooling can be located almost anywhere, from smart factories to offshore oil rigs.
  • Entire data centers might be built as prefabricated modules and inserted into oceans, deserts, or even space.

The general principle? Compute must not be limited by land, heat, or humans.

Final Thoughts

In the race to enable the digital future, air cooling is a luxury. Whether immersed in liquid or bolted to the seafloor, data centers are learning to cool smarter, not harder.

Underwater installations and liquid cooling are no longer far-fetched ideas; they are lifelines to a scalable, sustainable web.

So tomorrow’s “Cloud” won’t be in the sky; it will hum quietly under the sea.

About Author:
Omkar Bhalekar is a senior network engineer and technology enthusiast specializing in data center architecture, manufacturing infrastructure, and sustainable solutions. With extensive experience designing resilient industrial networks, smart factories, and scalable AI data center networks, Omkar writes to simplify complex technical topics for engineers, researchers, and industry leaders.

Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions

Indonesian network operator Indosat Ooredoo Hutchison has deployed Nokia Energy Efficiency (part of the company’s Autonomous Networks portfolio – described below) to reduce energy demand and carbon dioxide emissions across its RAN using AI. Nokia’s energy control system uses AI and machine learning algorithms to analyze real-time traffic patterns, enabling the operator to automatically adjust or shut down idle and unused radio equipment during periods of low network demand.

The multi-vendor, AI-driven energy management solution can reduce energy costs and carbon footprint with no negative impact on network performance or customer experience. It can be rolled out in a matter of weeks.
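Nokia has not published the internals of its Energy Efficiency solution, so the following Python sketch is purely illustrative of the general technique: forecast per-cell load, then put excess capacity carriers to sleep when demand is low. All class names, fields, and thresholds here are hypothetical.

```python
# Illustrative sketch only: a simplified, threshold-based version of the
# kind of policy an AI-driven RAN energy controller might apply. This is
# NOT Nokia's actual algorithm; names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    capacity_carriers: int   # secondary capacity carriers that may sleep
    load_forecast: float     # predicted utilization for the next hour, 0.0-1.0

def carriers_to_sleep(cell: Cell, low_load: float = 0.15) -> int:
    """Decide how many capacity carriers to power down for one cell.

    Coverage carriers are never touched, so basic service is preserved;
    only excess capacity layers sleep when forecast load is low.
    """
    if cell.load_forecast >= low_load:
        return 0
    # Sleep deeper the further below the threshold the forecast sits.
    headroom = 1.0 - cell.load_forecast / low_load
    return round(cell.capacity_carriers * headroom)

# Example: a sector with 3 capacity carriers forecast at 3% load overnight.
night_cell = Cell("site42-sector1", capacity_carriers=3, load_forecast=0.03)
print(carriers_to_sleep(night_cell))  # -> 2 carriers powered down
```

A production system would drive decisions from continuously retrained traffic forecasts and wake carriers preemptively as load returns, which is what allows savings with no impact on customer experience.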

Indosat is aiming to transform itself from a conventional telecom operator into an AI TechCo—powered by intelligent technologies, cloud-based platforms, and a commitment to sustainability. By embedding automation and intelligence into network operations, Indosat is unlocking new levels of efficiency, agility, and environmental responsibility across its infrastructure.

Earlier this year Indosat claimed to be the first operator to deploy AI-RAN in Indonesia, in a deal involving the integration of Nokia’s 5G cloud RAN solution with Nvidia’s Aerial platform. The Memorandum of Understanding (MoU) between the three firms covered the development, testing, and deployment of AI-RAN, with an initial focus on moving AI inferencing workloads onto the AI Aerial platform, followed by the integration of RAN workloads on the same platform.

“As data consumption continues to grow, so does our responsibility to manage resources wisely. This collaboration reflects Indosat’s unwavering commitment to environmental stewardship and sustainable innovation, using AI to not only optimize performance, but also reduce emissions and energy use across our network,” said Desmond Cheung, Director and Chief Technology Officer at Indosat Ooredoo Hutchison.

Indosat was the first operator in Southeast Asia to achieve ISO 50001 certification for energy management—underscoring its pledge to minimize environmental impact through operational excellence. The collaboration with Nokia builds upon a successful pilot project, in which the AI-powered solution demonstrated its ability to reduce energy consumption in live network conditions.

Following the pilot project, Nokia deployed its Energy Efficiency solution across its entire RAN footprint in Indonesia, including Sumatra, Kalimantan, and Central and East Java.

“We are very pleased to be helping Indosat deliver on its commitments to sustainability and environmental responsibility, establishing its position both locally and internationally. Nokia Energy Efficiency reflects the important R&D investments that Nokia continues to make to help our customers optimize energy savings and network performance simultaneously,” said Henrique Vale, VP for Cloud and Network Services APAC at Nokia.

Nokia’s Autonomous Networks portfolio, including its Autonomous Networks Fabric solution, utilizes Agentic AI to deliver advanced security, analytics, and operations capabilities that provide operators with a holistic, real-time view of the network so they can reduce costs, accelerate time-to-value, and deliver the best customer experience.

Autonomous Networks Fabric is a unifying intelligence layer that weaves together observability, analytics, security, and automation across every network domain; allowing a network to behave as one adaptive system, regardless of vendor, architecture, or deployment model.

References:

https://www.nokia.com/newsroom/indosat-ooredoo-hutchison-and-nokia-partner-to-reduce-energy-demand-and-support-ai-powered-sustainable-operations/

https://www.telecoms.com/ai/nokia-to-supply-indosat-ooredoo-hutchison-with-ai-powered-energy-efficient-ran-software

Analysts weigh in: AT&T in talks to buy Lumen’s consumer fiber unit – Bloomberg

Bloomberg News reports that AT&T is in exclusive talks to acquire Lumen Technologies’ consumer fiber operations, in a deal that could value the unit at more than $5.5 billion, citing people with knowledge of the matter who asked not to be identified discussing confidential information. The terms of the deal have not been finalized and could still change, and the talks might yet collapse, according to the report.

“If the rumored price is correct, it is a great deal for AT&T,” wrote the financial analysts at New Street Research in a note to investors. “The value per [fiber] location at $5.5 billion would be about $1,300 which compares to Frontier at $2,400, Ziply at $3,800, and Metronet at $4,700,” the analysts continued.
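As a sanity check on those numbers, here is a minimal sketch of the implied math; the roughly 4.2 million location count is our inference from the quoted figures, not a number from the note itself.

```python
# Implied deal math from the New Street Research note: dividing the
# rumored $5.5B price by ~$1,300 per fiber location implies the unit
# passes roughly 4.2 million locations. Comparator figures are the
# per-location values quoted in the note.

DEAL_PRICE = 5.5e9
PER_LOCATION = {"Lumen (rumored)": 1_300, "Frontier": 2_400,
                "Ziply": 3_800, "Metronet": 4_700}

implied_locations = DEAL_PRICE / PER_LOCATION["Lumen (rumored)"]
print(f"Implied fiber locations: {implied_locations / 1e6:.1f} million")

for name, value in PER_LOCATION.items():
    print(f"{name}: ${value:,} per location")
```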

The potential sale of Lumen’s consumer fiber business, which provides high-speed internet service to residential customers, comes as Lumen focuses on the AI boom among business customers for growth while grappling with the rapid decline of its legacy business. Lumen initiated the process to sell its consumer fiber operations, Reuters reported in December. “We’re looking at all possible arrangements,” Lumen CFO Chris Stansbury said during the company’s quarterly conference call, according to a Seeking Alpha transcript. “Ultimately, that consumer asset was going to sit in the space where the market was going to consolidate, and at that point of consolidation, we were not going to be a consolidator,” Stansbury said.
Bundling fiber-to-the-home with wireless gives large providers lower churn and more pricing strength, Stansbury said, adding that the asset has garnered “a great deal of interest.” Any transaction would likely help Lumen lighten its debt load, he added.
…………………………………………………………………………………………………………………………
Lumen’s mass market business served 2.6 million residential and small business customers at the end of the third quarter of 2024. Roughly 1 million of them were on fiber connections, while the rest were on the operator’s copper network.  The fiber-optic based network provider has over 1,700 wire centers across its total network, with consumer fiber available in about 400 of them.
“For Lumen, a sale at $5.5 billion would be disappointing,” the New Street analysts wrote. “The rumored range was $6-9 billion. Most clients seemed to focus on the low end of that range, anticipating perhaps $6 billion for a sale of just the fiber asset.”
Source: Panther Media GmbH/Alamy Stock Photo
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Sidebar – Lots of fiber deals:
Verizon is pursuing Frontier Communications for $20 billion; Canada’s BCE is hoping to acquire Ziply Fiber for $5 billion; and T-Mobile and KKR are seeking to buy Metronet.  Earlier this month Crown Castle sold its small cell business to the EQT Active Core Infrastructure fund for $4.25 billion and its fiber business to Zayo for $4.25 billion.
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
AT&T has been investing in its high-speed fiber internet offerings to drive faster subscriber and revenue growth. Earlier this month, it forecast first-quarter adjusted profit in line with analysts’ estimates. If AT&T does close a purchase of Lumen’s consumer fiber business, the deal would solidify AT&T’s position as the nation’s largest fiber network operator.
……………………………………………………………………………………………………………………………………..
References:

https://www.bloomberg.com/news/articles/2025-03-25/at-t-said-in-talks-to-buy-lumen-s-consumer-fiber-unit?embedded-checkout=true  (paywall)

https://www.reuters.com/markets/deals/att-talks-buy-lumens-consumer-fiber-unit-bloomberg-news-reports-2025-03-25/

https://www.lightreading.com/fttx/is-at-t-getting-a-screaming-deal-on-lumen-s-fiber-

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Microsoft choses Lumen’s fiber based Private Connectivity Fabric℠ to expand Microsoft Cloud network capacity in the AI era

Lumen, Google and Microsoft create ExaSwitch™ – a new on-demand, optical networking ecosystem

ACSI report: AT&T, Lumen and Google Fiber top ranked in fiber network customer satisfaction

Lumen to provide mission-critical communications services to the U.S. Department of Defense

AT&T sets 1.6 Tbps long distance speed record on its white box based fiber optic network

WiFi 7: Backgrounder and CES 2025 Announcements

Backgrounder:

Wi-Fi 7, also known as IEEE 802.11be-2024 [1.], is the latest generation of Wi-Fi technology. It offers significantly faster speeds, greater network capacity, and lower latency than previous generations such as Wi-Fi 6 by using features like wider 320 MHz channels, Multi-Link Operation (MLO), and 4K-QAM modulation across all three frequency bands (2.4 GHz, 5 GHz, and 6 GHz). Wi-Fi 7 is designed to use the huge swath of unlicensed spectrum in the 6 GHz band, first made available with Wi-Fi 6E, to deliver a maximum data rate of up to 46 Gbps.

Note 1. The Wi-Fi Alliance began certifying Wi-Fi 7 devices in January 2024. The IEEE approved the IEEE 802.11be standard on September 26, 2024. The standard supports at least one mode of operation capable of a maximum throughput of at least 30 Gbps, as measured at the MAC data service access point (SAP), with carrier frequency operation between 1 and 7.250 GHz, while ensuring backward compatibility and coexistence with legacy IEEE Std 802.11 compliant devices operating in the 2.4 GHz, 5 GHz, and 6 GHz bands.

………………………………………………………………………………………………………………………………………………………………………………………………..

The role of 6 GHz Wi-Fi in delivering connectivity is changing and growing. A recent report from OpenSignal found that smartphone users spend 77% to 88% of their screen-on time connected to Wi-Fi. Further, the latest generations of Wi-Fi (largely thanks to 320 MHz channel support and critical features like Multi-Link Operation) are increasingly reliable and deterministic, making them viable options for advanced applications like extended reality in both the home and the enterprise.

New features:

  • 320 MHz channels: Double the bandwidth of Wi-Fi 6E.
  • Multi-Link Operation (MLO): Lets devices connect over multiple channels across different bands simultaneously.
  • 4K-QAM modulation: Encodes more data per transmitted symbol (12 bits, vs. 10 bits for Wi-Fi 6’s 1024-QAM).
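
The 46 Gbps headline figure can be reproduced from standard 802.11be PHY parameters. The following Python sketch assumes the maximum configuration (16 spatial streams, one 320 MHz channel, 4096-QAM at the highest coding rate); the same formula with 2 streams reproduces the ~5.8 Gbps “typical laptop” figure Intel cites in the CES section below.

```python
# Sanity-check of Wi-Fi 7's headline 46 Gbps figure from 802.11be PHY
# parameters. Subcarrier counts and timings are standard 802.11be values.

DATA_SUBCARRIERS_320MHZ = 3920   # 4 x 980 data tones (four 80 MHz segments)
BITS_PER_SYMBOL_4K_QAM = 12      # 4096-QAM carries 12 bits per subcarrier
CODING_RATE = 5 / 6              # highest LDPC coding rate (MCS 13)
SYMBOL_DURATION_S = 13.6e-6      # 12.8 us OFDM symbol + 0.8 us guard interval

def phy_rate_gbps(spatial_streams: int) -> float:
    """Peak PHY data rate for a 320 MHz, 4096-QAM, rate-5/6 link."""
    bits_per_symbol = (DATA_SUBCARRIERS_320MHZ
                       * BITS_PER_SYMBOL_4K_QAM
                       * CODING_RATE)
    return spatial_streams * bits_per_symbol / SYMBOL_DURATION_S / 1e9

print(f"{phy_rate_gbps(16):.1f} Gbps")  # 16 streams -> ~46.1 Gbps
print(f"{phy_rate_gbps(2):.2f} Gbps")   # 2-stream laptop -> ~5.76 Gbps
```
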
CES 2025 WiFi 7 Announcements:

1.  TP-Link unveiled the Deco BE68 Whole Home Mesh Wi-Fi 7 system, which it claims delivers speeds of up to 14 Gbps, covers 8,100 sq. ft., and supports up to 200 connected devices. “Featuring 10G, 2.5G, and 1G ports, it ensures fast, reliable wired connections. With Deco Mesh technology, the system delivers seamless coverage and uninterrupted performance for streaming, gaming, and more,” stated the company.

TP-Link also announced an outdoor mesh system to address the growing demand for outdoor Wi-Fi connectivity. The Deco BE65-Outdoor and Deco BE25-Outdoor nodes are equipped with weatherproof, waterproof, and dustproof enclosures. Combined with the Deco indoor models, they form a cohesive, reliable indoor-outdoor mesh network that lets a user move seamlessly between the two environments.

2.  Intel’s latest laptop processors (Intel Core Ultra Series 2) are all equipped with Wi-Fi 7 capabilities integrated into the silicon; Intel has made Wi-Fi 7 its standard choice. On its website, the company explains that a “typical” Wi-Fi 7 laptop has a potential maximum data rate of almost 5.8 Gbps. “This is 2.4X faster than the 2.4 Gbps possible with Wi-Fi 6/6E and could easily enable high quality 8K video streaming or reduce a massive 15 GB file download to roughly 25 seconds vs. the one minute it would take with the best legacy Wi-Fi technology,” Intel added.
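At raw link rates, Intel’s arithmetic roughly checks out, as the quick sketch below shows; real transfers add protocol overhead, which is presumably why Intel quotes ~25 seconds rather than the raw ~21.

```python
# Quick check of Intel's download example: a 15 GB file at Wi-Fi 7 vs.
# Wi-Fi 6/6E laptop link rates, ignoring protocol overhead.

FILE_GB = 15
for label, rate_gbps in [("Wi-Fi 7 (2x2, 320 MHz)", 5.8),
                         ("Wi-Fi 6/6E (2x2, 160 MHz)", 2.4)]:
    seconds = FILE_GB * 8 / rate_gbps   # gigabytes -> gigabits, then divide
    print(f"{label}: {seconds:.0f} s at raw PHY rate")
# -> ~21 s vs. ~50 s, consistent with "roughly 25 seconds vs. one minute"
```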

3. ASUS  New Wi-Fi 7 Router Lineup

ASUS unveiled a range of new networking products at CES 2025, including the ASUS RT-BE58 Go travel router and ASUS 5G-Go mobile router – both recipients of the CES 2025 Innovation Award – alongside the ROG Rapture GT-BE19000AI gaming router and the ZenWiFi Outdoor series for home Wi-Fi setups.

  • The RT-BE58 Go is a dual-band, Wi-Fi 7-capable mobile router that supports three use cases: 4G/5G mobile tethering, public Wi-Fi hotspot (WISP), and conventional home router. It also supports VPNs from up to 30 service providers and subscription-free Trend Micro security for online protection, while AiMesh compatibility allows the router to be paired with other ASUS routers for wider signal coverage.
  • The ROG Rapture GT-BE19000AI is an updated iteration of last year’s GT-BE19000 router, this time with an onboard NPU coupled with a CPU and MCU. This tri-core combination enables features like ROG AI Game Booster and Adaptive QoS 2.0 to cut network latency by up to 34% in supported games, plus 46% power savings through an AI Power Saving mode that adapts to usage patterns. Additional features include advanced ad and tracker blocking, network insights, and RF scanning.

References:

https://standards.ieee.org/ieee/802.11be/7516/

https://en.wikipedia.org/wiki/Wi-Fi_7

https://www.mathworks.com/help/wlan/ug/overview-of-wifi-7-or-ieee-802-11-be.html

[CES 2025] ASUS Presents New Wi-Fi 7 Router Lineup

Google, MediaTek team up; a new Wi-Fi HaLow chip; Wi-Fi 7 becomes standard — Top Wi-Fi news from CES 2025

WiFi 7 and the controversy over 6 GHz unlicensed vs licensed spectrum

Highlights of GSA report on Private Mobile Network Market – 3Q2024

According to GSA, the private mobile network market (PMNM) continued to grow in 3Q2024, as the number of unique customer references for deployments reached 1,603. The market is being driven by sectors like manufacturing, education, and mining, which use these networks for enhanced data, security and mobility needs.

On average, 71% of references included in the GSA database are non-public and unique to this database, submitted by members of the GSA Private Mobile Networks Special Interest Group (SIG). This number can be higher for certain industries, with more than 80% of sectors such as military and defense, maritime and power plants not visible in the public domain. The referenced SIG includes 16 companies: 450Alliance, 5G-ACIA, AI-Link, Airspan, Celona, Dell, Ericsson, GSMA, JMA Wireless, Keysight Technologies, Mavenir, Nokia, OnGo Alliance, OneLayer, PrivateLTEand5G.com and TCCA. GSA would like to thank its members 450Alliance, Airspan, Celona, Ericsson, Keysight Technologies, Mavenir, Nokia and OneLayer for sharing general information about their network deployments to enable this report and data set to be produced. New data has resulted in a significant uplift in this update.

Other PMNM highlights in the 3rd quarter 2024 include:

• There are 80 countries around the world with at least one private mobile network.

• Of the top 10 reporting countries, the United States reported growth of 24%, followed by the United Kingdom at 11%, Sweden at 9%, and Japan and Australia at 5% each. Finland and the Republic of Korea each grew by 4%.

• Seaports and oil and gas were the fastest-growing industries, up 9%. Manufacturing, education and academic research and mining remain the top three sectors for customer references, although this does not represent the actual size and scale of deployments, which vary by user type.

• There is typically a strong, positive correlation between the number of private mobile network references and countries with dedicated spectrum. Private mobile networks are so far concentrated in high- and upper-middle-income regions, with the United States, Germany, the United Kingdom, China and Japan having the most references. It is sometimes reported that China has a very high number of networks, reaching up to 30,000, but GSA believes a large portion of these use the public network and therefore do not meet its definition.

Image Credit: GSA

Notes:

The definition of a private mobile network used in this report is a 3GPP-based 4G LTE or 5G network intended for the sole use of private entities, such as enterprises, industries and governments. These networks can use only physical elements, RAN or core, or a combination of physical and virtual elements — for example hosted by a public land mobile network — but at a minimum a dedicated network core must be implemented. The definition includes MulteFire and the Future Railway Mobile Communication System. The network must use spectrum defined in 3GPP and be generally intended for business-critical or mission-critical operational needs; where it is possible to identify commercial value, the database includes contracts worth more than €50,000 and between €50,000 and €100,000, to filter out small demonstration network deployments. Private mobile networks are usually not offered to the general public, although GSA’s analysis does include the following: educational institutions that provide mobile broadband to student homes; private fixed wireless access networks deployed by communities for homes and businesses; and city or town networks that use local licenses to provide wireless services in libraries or public places (possibly offering Wi-Fi with 3GPP wireless backhaul), which are not an extension of the public network.

Non-3GPP networks such as those using Wi-Fi, TETRA, P25, WiMAX, Sigfox, LoRa and proprietary technologies are excluded from the data set. Network implementations using solely network slices from public networks or placement of virtual networking functions on a router are also excluded. Where identifiable, extensions of the public network (such as one or two extra sites deployed at a location, as opposed to dedicated private networks) are excluded. These items may be described in the press as a type of private network.

References:

PMN December 2024 Summary

SNS Telecom & IT: Private 5G and 4G LTE cellular networks for the global defense sector is a $1.5B opportunity

SNS Telecom & IT: $6 Billion Private LTE/5G Market Shines Through Wireless Industry’s Gloom

SNS Telecom & IT: Private 5G Network market annual spending will be $3.5 Billion by 2027

Dell’Oro: Private RAN revenue declines slightly, but still doing relatively better than public RAN and WLAN markets

Pente Networks, MosoLabs and Alliance Corp collaborate for Private Cellular Network in a Box

HPE Aruba Launches “Cloud Native” Private 5G Network with 4G/5G Small Cell Radios
