The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core

By Pavan Madduri with Ajay Lotan Thakur

The telecom industry wants autonomous, self-healing networks, but nobody is looking at the GPU bill. Running Agentic AI 24/7 “just in case” will bankrupt your IT department and ruin your ESG goals. The only way to survive the autonomous era is ruthless, event-driven orchestration that scales cognitive compute to absolute zero.

Introduction: The Compute Crisis Nobody is Talking About

Everyone in telecom right now is obsessed with “self-healing” autonomous networks. The vendor pitch sounds amazing. Just drop in some Agentic AI, let it watch your data plane, and watch it fix anomalies without a human ever touching a keyboard. But there’s a massive trap hiding underneath all that hype, and enterprise architects are completely ignoring it. It comes down to the raw physics of AI compute.

Unlike your standard microservices, which just run deterministic, compiled code on cheap CPU cycles, Agentic AI needs massive foundation models. To actually reason through a network failure, these models have to load gigabytes of weights into VRAM and generate tokens. You need dedicated GPUs for this. We aren’t talking about cheap, stateless API calls here. These are the most expensive, power-hungry workloads in your entire datacenter.

If a telco tries to run an autonomous core the old-fashioned way by keeping high-end GPU nodes spinning 24/7 just in case a BGP route flaps, their cloud bill is going to wipe out any operational savings the AI was supposed to deliver.

The reality is that autonomy is no longer just a software problem. It’s a financial one. The telcos that actually win will not be the ones with the smartest AI. They will be the ones who figure out how to build a strict “scale-to-zero” environment. They need to spin up that expensive cognitive compute exactly when it is needed, and kill it the exact second the job is done.

Why Traditional Autoscaling is Broken for AI

When platform engineers first see the compute costs of running these AI agents, their first instinct is usually just to slap standard Kubernetes Horizontal Pod Autoscaling (HPA) on the cluster and call it a day. But standard HPA was built for stateless web servers, not massive cognitive engines. If you try to use it for Agentic AI in a telecom core, you’re going to fail for two big reasons.

The Cold-Start Penalty: Traditional autoscaling is entirely reactive. It sits around waiting for CPU to hit 80% before it decides to scale up. In telecom, SLAs are measured in milliseconds, sometimes less. If you wait for an anomaly to spike your CPU, then provision a new GPU node, pull a massive AI container image, and load the model weights into VRAM, you are talking about minutes of delay. By the time your AI agent actually wakes up to fix the problem, you have already breached your SLA.

CPU Utilization is a Liar: For AI workloads, standard hardware metrics are completely misleading. A GPU could be pegged at 90% utilization just thinking through a minor log warning, while a massive, critical network failure is stuck waiting in the queue. If your scaling logic is tied to hardware metrics instead of the actual severity of the event queue, you are just going to burn budget scaling blindly.

We have to abandon reactive resource metrics entirely and move to event-driven orchestration.
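To make the contrast concrete, here is a minimal Python sketch of the two scaling rules. The function names and numbers are illustrative, not from any real product: the first mimics the classic HPA proportional rule, the second the queue-depth rule KEDA-style scalers use.

```python
import math

def hpa_desired_replicas(current: int, cpu_now: float, cpu_target: float) -> int:
    """Classic HPA rule: scale proportionally to CPU pressure.
    It is blind to *why* the CPU is busy."""
    return max(1, math.ceil(current * cpu_now / cpu_target))

def queue_desired_replicas(queue_depth: int, events_per_pod: int) -> int:
    """Queue-driven rule: size the fleet from the work actually waiting.
    Zero events means zero pods -- true scale-to-zero."""
    return math.ceil(queue_depth / events_per_pod)

# A GPU pegged at 90% on a trivial log warning keeps HPA scaling up...
assert hpa_desired_replicas(current=4, cpu_now=0.90, cpu_target=0.80) == 5
# ...while the queue-driven rule sizes to the backlog, and idles to zero.
assert queue_desired_replicas(queue_depth=12, events_per_pod=4) == 3
assert queue_desired_replicas(queue_depth=0, events_per_pod=4) == 0
```

The point of the sketch: the HPA rule can never return zero, and it scales on a signal (CPU) that says nothing about event severity; the queue rule scales on the backlog itself.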

The Fix: Event-Driven Orchestration

If standard HPA is broken for this, what is the fix? You have to completely decouple the infrastructure from the workload using strict, event-driven orchestration.

Instead of keeping baseline infrastructure running just to maintain a state, you treat cognitive compute as 100% ephemeral. You don’t scale based on how hard the CPU is working. You scale based on the exact depth and severity of the anomaly queue.

To actually build this, architects need purpose-built event-driven scalers like KEDA (Kubernetes Event-driven Autoscaling). KEDA lets your cluster completely bypass those reactive hardware metrics and listen directly to the network’s data plane.

But how do you avoid the cold-start latency of booting a fresh GPU pod? KEDA solves this by reacting to the event queue length itself rather than waiting for an existing pod’s CPU to max out. By the time a traditional HPA notices a CPU spike, the system is already overwhelmed. (To solve this exact issue in production, I open-sourced a custom KEDA scaler specifically designed to scrape and react to native GPU metrics, allowing the orchestrator to scale cognitive workloads preemptively. You can view the architecture on [GitHub].)

KEDA intercepts the telemetry trigger at the source. When paired with a warm pool of paused GPU nodes and pre-pulled container images, KEDA can scale a pod from zero to active in milliseconds. The infrastructure is anticipating the load based on the queue, not reacting to the stress of it.

Here is what the workflow actually looks like when you do it right:

  1. The Trigger: Telemetry picks up a severe anomaly, like a sudden 5G slice degradation, and pushes an event straight to a message broker like Kafka.
  2. The Scale-Up: KEDA intercepts that exact metric and instantly provisions a dedicated, GPU-backed AI pod from a warm standby pool.
  3. The Execution: The Agentic AI loads into VRAM, figures out the blast radius of the anomaly, and executes a fix, usually by reconciling state through a GitOps controller.
  4. The Kill Switch: The absolute millisecond that the event queue clears and the network is stable, the orchestrator aggressively terminates the pod and gives the GPU back to the node pool.
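The four-step loop can be sketched as a toy Python simulation. Everything here is hypothetical (class names, pool sizes, the two-events-per-pod ratio); the point is the lifecycle: GPUs leave the warm pool only while there is work in the queue, and return the moment it drains.

```python
import math
from dataclasses import dataclass

@dataclass
class GpuPool:
    """Warm standby pool: paused GPU nodes with images pre-pulled (hypothetical)."""
    warm: int = 2     # paused nodes ready to activate
    active: int = 0   # nodes currently reasoning

    def scale_up(self, pods: int) -> None:
        grabbed = min(pods, self.warm)   # Step 2: pull pods from the warm pool
        self.warm -= grabbed
        self.active += grabbed

    def kill(self) -> None:
        self.warm += self.active         # Step 4: release every GPU immediately
        self.active = 0

def handle_events(pool: GpuPool, events: list, events_per_pod: int = 2) -> None:
    pool.scale_up(math.ceil(len(events) / events_per_pod))  # Step 2: scale on queue depth
    events.clear()                                          # Step 3: agent remediates, queue drains
    pool.kill()                                             # Step 4: queue empty -> terminate pods

pool = GpuPool()
handle_events(pool, ["5g-slice-degradation", "bgp-route-flap"])
assert pool.active == 0 and pool.warm == 2  # GPUs held only during reasoning
```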

You only pay the premium GPU tax during moments of active reasoning. The 24/7 idle tax is gone.
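A back-of-envelope calculation shows why this matters. All numbers below are hypothetical assumptions (GPU price, duty cycle), not quoted rates:

```python
GPU_HOUR = 4.00            # hypothetical on-demand price for a GPU node, USD/hour
HOURS_PER_MONTH = 730

# 24/7 "just in case": pay for every hour, whether the agent is reasoning or idle.
always_on = GPU_HOUR * HOURS_PER_MONTH

# Event-driven: suppose anomalies keep the agent reasoning ~20 min/day (hypothetical).
event_driven = GPU_HOUR * (20 / 60) * 30

assert always_on == 2920.0
assert round(event_driven) == 40
assert round(always_on / event_driven) == 73  # ~73x cheaper under these assumptions
```

Change the duty cycle and the ratio moves, but as long as anomalies are rare events rather than a steady stream, the always-on model pays mostly for idle silicon.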

Architecting the Scale-to-Zero Core

To make this scale-to-zero dream a reality, you have to fundamentally change how you handle network observability. The biggest mistake I see architects make is tightly coupling their monitoring tools with their AI execution layer. If your observability stack is running on the same hardware as your AI engine, you are literally wasting premium GPU compute just to watch logs.

You need a strict, physical separation of concerns:

The Watchers (The Lightweight Control Plane):
Your network data plane needs to be monitored by lightweight, CPU-efficient edge collectors like Prometheus or OpenTelemetry. These sit right at the edge, continuously eating millions of telemetry data points and BGP state changes. Because they don’t do any complex reasoning, they run incredibly cheap on standard CPU nodes.

The Thinkers (The Heavyweight Execution Plane):
Your expensive AI models are completely isolated in a separate, GPU-backed node pool that literally defaults to zero instances.

When the Watchers spot an anomaly, they don’t try to fix it. They just fire an alert to KEDA. KEDA then wakes up the Thinkers, spinning up the exact number of GPU pods needed to handle that specific blast radius. By decoupling the watchers from the thinkers, you guarantee that not a single cycle of GPU compute is wasted on baseline monitoring.
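The Watcher/Thinker split can be sketched in a few lines of Python. The severity scale, queue, and function names are all illustrative stand-ins (a real deployment would publish to Kafka and let KEDA read the topic lag):

```python
import math

alerts: list = []   # stand-in for a Kafka topic of severe-anomaly events

def watcher(metric: dict, threshold: int = 8) -> None:
    """Watcher: a cheap CPU-side filter. It never reasons about the fix;
    it only decides whether an event is severe enough to publish."""
    if metric["severity"] >= threshold:   # hypothetical 0-10 severity scale
        alerts.append(metric)

def thinkers_needed(events_per_pod: int = 2) -> int:
    """The orchestrator's view: GPU pods to wake, derived purely from queue depth."""
    return math.ceil(len(alerts) / events_per_pod)

watcher({"slice": "urllc-7", "severity": 9})  # severe -> published
watcher({"slice": "embb-2", "severity": 3})   # routine -> dropped at the edge
assert thinkers_needed() == 1                 # wake exactly one Thinker
```

Note that the Watcher path touches no GPU at all: baseline monitoring stays on cheap CPU nodes, and expensive compute is summoned only by the queue.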

The Bottom Line

Autonomous telecom networks are going to happen. But trying to brute-force the infrastructure provisioning is a fast track to bankrupting your IT department. The smartest Agentic AI in the world is useless if you can’t afford the cloud bill to run it.

Furthermore, this isn’t just about protecting the IT budget. Running idle GPUs 24/7 creates a massive, unnecessary carbon footprint. By enforcing a scale-to-zero architecture, telcos can drastically reduce the energy consumption of their autonomous networks, turning a massive ESG liability into a sustainable operational model.

Autonomy is no longer just a software engineering problem. It is an infrastructure balancing act. If Agentic AI is going to survive in the telecom core, we have to ditch legacy threshold scaling and embrace strict, event-driven orchestration.

Tools like KEDA give us the ability to build networks that are both cognitively brilliant and financially ruthless. We can spin up massive intelligence at the exact millisecond of failure and scale right back to zero the moment the network is healed.

About the Author

Pavan Madduri is a Cloud-Native Architect, CNCF Golden Kubestronaut, and active IEEE researcher specializing in enterprise infrastructure automation, Agentic SREs, and Kubernetes networking. He designs scalable, zero-trust cloud environments and frequently writes about the intersection of AI governance and cloud-native infrastructure.

Connect with Pavan Madduri on [LinkedIn].

Disclaimer: The author acknowledges the use of AI-assisted tools for structural formatting, language refinement, and copyediting during the drafting of this article. The core architectural concepts, technical opinions, and engineering strategies remain entirely original.

ABI Research: mobile network spending to fall 29% in 2026-31

According to ABI Research, global mobile network infrastructure spending is projected to peak at ~$92 billion in 2026–2027 before falling 29% to $65 billion by 2031. This decline reflects the maturation of 5G deployments and a shift in operator focus toward 6G, with reduced demand for traditional Radio Access Network (RAN) equipment.

“5G deployments have seen significant growth over the years, with industry estimates placing the current number of launched 5G networks at over 350 globally,” said Matthias Foo, Principal Analyst at ABI Research. “By the end of 2025, global 5G population coverage is expected to reach 60%, driven in part by rapid deployments in India, where more than 500,000 5G Base Transceiver Stations have been installed within three years.”

As 5G rollouts mature, RAN equipment vendors are beginning to report slower growth. Even as advanced deployments such as 5G-Advanced emerge in markets like the United States, China, and Saudi Arabia, overall infrastructure demand is stabilizing following years of rapid expansion.

Recent financial results from major vendors reinforce this trend.

  • Ericsson reported flat RAN growth in 2025 and expects a similar outlook for 2026.
  • Nokia also posted flat performance in its Mobile Networks business.
  • ZTE reported a 5.9% Year-on-Year decline in its Carriers’ Networks segment in the first half of 2025.

Following the release of their respective financial reports, both Ericsson and Nokia said they expect the RAN market to be more or less flat this year. Nokia is focusing on data center networking, while Ericsson is concentrating on mission-critical communications, defense, and enterprise networking.

ABI says some near-term growth is still expected in 2026, supported by ongoing deployments in markets such as Malaysia, India, Argentina, Peru, and Vietnam.

Open RAN adoption is forecast to grow at a 26.5% CAGR through 2031, accounting for approximately 23% of the installed base. However, despite high-profile announcements from operators and vendors, the market is still expected to remain largely dominated by incumbent suppliers rather than the new entrants initially expected.

These findings are from ABI Research’s Indoor, Outdoor, and IoT Network Infrastructure market data report, part of its 5G, 6G & Open RAN research service. The report provides detailed forecasts, market share analysis, and insights into key infrastructure investment trends.

Research Highlights:

  • mMIMO market tracker across regions and by configurations.
  • DAS revenue forecasts by region, technology, and verticals.
  • Small cell market tracker for both indoor and outdoor infrastructure.

………………………………………………………………………………………………

Dell’Oro Group is slightly less pessimistic than ABI Research. In January, it forecast that global RAN revenues will grow at a 1% CAGR for the remainder of the 2020s, supported by ongoing 5G investments. Stefan Pongratz said at the time that downside risks still outweigh the upside potential, the most notable being slowing data growth.

References:

https://www.abiresearch.com/press/mobile-network-spending-to-peak-at-us92-billion-by-2027-as-5g-buildouts-wind-down-ahead-of-6g#

https://www.telecoms.com/telecoms-infrastructure/mobile-network-spending-to-fall-29-in-2026-31-says-abi-research

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAGR forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

RAN Silicon Rethink- Part II; vRAN and General-Purpose Compute

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Omdia on resurgence of Huawei: #1 RAN vendor in 3 out of 5 regions; RAN market has bottomed

Omdia: Huawei increases global RAN market share due to China hegemony

Dell’Oro Group: RAN Market Grows Outside of China in 2Q 2025

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

 

Australia’s NBN and Nokia demonstrate multi-generation optical technologies concurrently over existing FTTP infrastructure

NBN Co, in collaboration with Nokia, has successfully conducted a laboratory demonstration of multiple generations of optical access and coherent transmission technologies operating concurrently over its existing Fiber‑to‑the‑Premises (FTTP) network. The technical trial validates the long‑term scalability of NBN Co’s national full‑fibre infrastructure and its capacity to accommodate the sustained growth of residential, enterprise, and industrial data demand anticipated over the coming decades.

The “Supercharging Fibre” trial, presented at the Broadband Forum Spring Member Meeting—held in Australia for the first time and hosted by NBN Co—demonstrated aggregate transmission rates exceeding 230 Gbit/s using multiple optical technologies over a single physical fiber link in a controlled laboratory environment. The experimental setup also established a pathway toward achieving terabit‑class capacities in future trials through the evolution of optical modulation formats and channel aggregation techniques.

A key outcome of the trial was the successful integration of coherent optical transmission with multiple generations of passive optical network (PON) technologies—GPON, XGS‑PON, and 50G‑PON—operating simultaneously over the same fiber infrastructure currently in service across Australia. Coherent optics, traditionally deployed within metropolitan, core, and data center interconnect networks, employ advanced modulation and digital signal processing to deliver extended reach, low latency, and high spectral efficiency. Their introduction into the access network domain represents a significant step toward the convergence of access and transport technologies, offering an efficient route to enhanced capacity and service flexibility without extensive physical network replacement.

The demonstration (see illustration below) underscores the technical viability of leveraging existing passive optical infrastructure to support future bandwidth requirements driven by the proliferation of cloud computing, immersive digital experiences, artificial intelligence applications, and industrial IoT systems. The results further illustrate the potential of FTTP systems to evolve into a highly scalable, future‑ready broadband platform capable of sustaining national connectivity objectives.

Image Credit:  Perplexity.ai

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

By 31 December 2025, more than 1 million customers had transitioned from copper‑based services to high‑speed full‑fiber connections, positioning FTTP as NBN Co’s dominant fixed‑line technology at approximately 35% of total connections. The company achieved its commitment to enable 10 million premises, representing about 90% of the NBN fixed‑line footprint, to order multi‑gigabit‑capable wholesale broadband services. Ongoing upgrade activities encompass over 228,000 premises, as part of an initiative to extend full‑fiber access to 95% of the remaining ~622,000 copper‑served locations by 2030.

These developments reflect NBN Co’s strategic focus on access network modernization and underscore the continuing evolution of optical access technologies toward achieving the performance, flexibility, and resilience required to support Australia’s transition to a digital and cloud‑centric economy.

About NBN Co.:

NBN Co. was established in 2009 by the Commonwealth of Australia as a Government Business Enterprise (GBE) with a clear direction – to design, build and operate a wholesale broadband access network for Australia.

And we’ve done just that – creating a network that criss-crosses a country, and allowing internet retailers to provide reasonably priced broadband services to consumers and businesses.

The network is the digital backbone of Australia and is constantly evolving to keep communities and businesses connected and our nation productive.

 

References:

https://www.nbnco.com.au/corporate-information/media-centre/media-statements/nbn-superchargingfibre-trial

https://www.nbnco.com.au/corporate-information/about-nbn-co

https://www.broadband-forum.org/events/spring-2026-member-meeting/

Dell’Oro: Optical Transport Systems market +15% year-over-year in 3Q2025 driven by Cloud Service Providers

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

Point Topic: FTTP broadband subs to reach 1.12bn by 2030 in 29 largest markets

Nokia and Hong Kong Broadband Network Ltd deploy 25G PON

Nokia’s launches symmetrical 25G PON modem

Google Fiber planning 20 Gig symmetrical service via Nokia’s 25G-PON system

Ericsson and Forschungszentrum Jülich MoU for neuromorphic computing use in 5G and 6G

Ericsson and major European research center Forschungszentrum Jülich are collaborating to develop technologies for the continued evolution of 5G and for the future introduction of 6G (IMT 2030) networks. The organizations signed a Memorandum of Understanding (MoU) on March 24, 2026. The project aims to leverage JUPITER, Europe’s first “exascale” supercomputer, to design and test new artificial intelligence solutions for the complex demands of 6G. The partnership will explore AI models and methods to enhance Ericsson’s core network, network management, and Radio Access Network (RAN).

Important objectives include exploring ultra-efficient, “brain-inspired” computing approaches like neuromorphic computing [1.] to handle intense network tasks and strengthen Europe’s digital infrastructure.  Modern mobile networks rely heavily on Massive MIMO, a technology where many devices communicate simultaneously via numerous antennas. By exploring novel system architecture approaches like neuromorphic computing, researchers aim to speed up optimization and reduce energy use versus classical methods.

Note 1. Neuromorphic computing is a brain-inspired engineering approach that mimics biological neural networks using analog or digital electronic circuits. It combines memory and processing in one place—similar to neurons and synapses—to achieve extreme energy efficiency, speed, and learning capabilities, moving beyond the limitations of traditional computing architecture. Unlike traditional AI that uses continuous data, neuromorphic systems use “spikes”—discrete events in time—to mimic how neurons communicate. Such systems only consume significant power when processing data (“spiking”), making them ideal for ultra-low-power edge computing, unlike traditional computers that are always on. They can process complex, real-world data (like vision or touch) much faster and with far less power than traditional computers.
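The spike-driven behavior described in the note can be illustrated with a textbook leaky integrate-and-fire neuron. This is a minimal sketch with arbitrary constants, not Jülich's or Ericsson's model:

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    (leak), accumulates input, and emits a discrete spike only when it
    crosses threshold -- no work happens between events."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x         # leaky integration of incoming signal
        if v >= threshold:
            spikes.append(1)     # discrete event ("spike")
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# Quiet input: the neuron stays silent, consuming essentially nothing.
assert lif_spikes([0.0, 0.0, 0.0]) == [0, 0, 0]
# Bursty input: sparse spikes, only when accumulated input crosses threshold.
assert lif_spikes([0.6, 0.6, 0.0, 1.2]) == [0, 1, 0, 1]
```

The sparsity is the point: power is spent only on the time steps that actually spike, which is the property that makes neuromorphic hardware attractive for always-listening edge workloads.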

…………………………………………………………………………………………………………………………………………………………………………………………..

The alliance will study operational strategies like heat recovery to boost energy efficiency in HPC and cloud deployments. The collaboration involves systematic benchmarking of AI methods – including the application of neuromorphic AI – across Ericsson products to assess execution speed, scalability to large datasets, information retention, and storage efficiency.  In addition, the partnership will provide insights into the feasibility of cloud strategies based on concepts from the EuroHPC ecosystem, which is establishing a world-class supercomputing infrastructure.

Professor Laurens Kuipers, a member of the Executive Board of Forschungszentrum Jülich, said: “This collaboration has the potential to make a significant contribution to a more sustainable digital future. By combining our excellence in high-performance computing and our research into novel, neuro-inspired computing approaches with Ericsson’s expertise in telecommunications, we aim to develop more energy-efficient network solutions and strengthen a sovereign European digital infrastructure.”

Image Credit: Forschungszentrum Jülich / Kurt Steinhausen

……………………………………………………………………………………………………………………………………….

Nicole Dinion, Head of Architecture and Technology, Cloud Software and Services, Ericsson said: “The future of mobile networks is deeply intertwined with AI and the need for unparalleled energy efficiency. Our collaboration with Forschungszentrum Jülich, for years a global leader in supercomputing and applied physics, combines their research and computing power with our expertise in all domains of telecoms technology. We will explore architectures that define the next generation of telecommunication.”

The collaboration covers several areas of research:

  • AI methods for Ericsson products across the full portfolio: systematic benchmarking of approaches to assess execution speed, scalability to large datasets, information retention, and storage efficiency. Where security and commercial conditions permit, the teams may also use JUPITER for large-scale model training, leveraging its compute resources.
  • Energy-efficient computing for AI inference at the radio and edge: developing and prototyping highly efficient solutions for tasks such as radio channel estimation and Massive MIMO – a key technology in modern mobile networks, in which many devices communicate simultaneously via numerous antennas. This includes exploring novel system architecture approaches like neuromorphic computing (e.g., memristors) to speed up optimization and reduce energy use versus classical methods.
  • HPC and cloud architectures and operations for AI: researching and implementing Modular Supercomputing Architecture (MSA) concepts from exascale work at Forschungszentrum Jülich – in particular, at the Jülich Supercomputing Centre (JSC) – and studying operational strategies, such as heat recovery, to boost energy efficiency in HPC and cloud deployments.

The collaboration will provide insights into the feasibility of cloud strategies based on concepts from the EuroHPC ecosystem, which is establishing a world-class supercomputing infrastructure with leading European centers such as the JSC.

ABOUT FORSCHUNGSZENTRUM JÜLICH:

Shaping change: This is what drives us at Forschungszentrum Jülich. As a member of the Helmholtz Association with more than 7,000 employees, we conduct research into the possibilities of a digitized society, a climate-friendly energy system, and a resource-efficient economy. We combine natural, life, and engineering sciences in the fields of information, energy, and the bioeconomy with specialist expertise in simulation and data science. www.fz-juelich.de

 

References:

https://www.ericsson.com/en/press-releases/2026/3/ericsson-and-forschungszentrum-julich-to-develop-advanced-ai-for-6g

https://www.ericsson.com/en/blog/2026/1/ai-future-will-be-defined-by-the-intelligent-digital-fabric

https://www.ibm.com/think/topics/neuromorphic-computing

China vs U.S.: Race to Generate Power for AI Data Centers as Electricity Demand Soars

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Expose: AI is more than a bubble; it’s a data center debt bomb

Sovereign AI infrastructure for telecom companies: implementation and challenges

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Custom AI Chips: Powering the next wave of Intelligent Computing

Groq and Nvidia in non-exclusive AI Inference technology licensing agreement; top Groq execs joining Nvidia

 

 

 

Analysis and Impact of Blockbuster FCC ban on foreign made WiFi routers

On March 23rd, the Federal Communications Commission (FCC) updated its Covered List to prohibit the sale of new foreign-made consumer-grade WiFi routers in the U.S. The FCC’s Covered List identifies communications equipment and services deemed to pose an unacceptable risk to the national security of the U.S. or the safety and security of U.S. persons. The decision follows a determination by an Executive Branch interagency body, which concluded those devices pose unacceptable risks to U.S. national security and the safety of its citizens. The new restriction applies strictly to new foreign-made router models, meaning retailers can continue marketing previously approved units and consumers can operate their existing equipment without interruption.

Impact:

TP-Link, Netgear, and Asus are currently among the top-selling Wi-Fi router brands in the U.S. consumer market.  Estimates for early 2026 indicate that TP-Link alone holds approximately 35% of the U.S. consumer router market share, while Netgear and Asus collectively account for another 25%. The TP-Link Archer AXE75 is frequently rated the best router for most users due to its Wi-Fi 6E speed and reasonable price.

AXE5400 Tri-Band Gigabit Wi-Fi 6E Router

…………………………………………………………………………………………………………………………………

Linksys and Ubiquiti are American-based companies, but their hardware is produced by contract manufacturers overseas in locations like China, Vietnam, and Taiwan. Similarly, Amazon eero and Google Nest mesh routers are not made in the U.S.

Hence, these companies’ ability to sell new WiFi router models in the U.S. now faces strict regulatory hurdles.

Quotes:

FCC Chairman Brendan Carr said: “I welcome this Executive Branch national security determination, and I am pleased that the FCC has now added foreign-produced routers, which were found to pose an unacceptable national security risk, to the FCC’s Covered List. Following President Trump’s leadership, the FCC will continue to do our part in making sure that US cyberspace, critical infrastructure, and supply chains are safe and secure.”

Bogdan Botezatu, director of Threat Research at cybersecurity firm Bitdefender, says this ban is a step to harden the cybersecurity readiness of U.S. households, given ongoing geopolitical tensions. “Consumer routers sit at the edge of every home network, which makes them an attractive target and a strategic risk if compromised at scale,” he says. Asked whether he thinks the risk is real, Botezatu says the risk is real, though there’s no easy way to prove intent. “[Internet of Things] devices, including routers, are a weak point across the internet.”

“Virtually all (WiFi) routers are made outside the United States, including those produced by US-based companies like TP-Link, which manufactures its products in Vietnam,” a spokesperson from TP-Link tells WIRED. “It appears that the entire router industry will be impacted by the FCC’s announcement concerning new devices not previously authorized by the FCC.”

Important Implications:
  • Reduced Product Availability: New, high-performance routers manufactured outside the U.S. will not receive the necessary approval to be imported or sold, restricting future consumer choices.
  • Higher Costs: This ruling has the potential to significantly disrupt the U.S. consumer router market, likely resulting in increased prices for consumers as companies grapple with new regulatory requirements.
  • Shift in Manufacturing: Router manufacturers, including those targeting the U.S. market, will likely need to shift production to the U.S. to satisfy security concerns and bypass the ban, says PC Magazine.
  • Security Focus: The ban targets vulnerabilities in foreign hardware and firmware.
  • No Impact on Existing Devices: Consumers can continue to use routers they currently own.

References:

https://www.fcc.gov/faqs-recent-updates-fcc-covered-list-regarding-routers-produced-foreign-countries

https://www.wired.com/story/us-government-foreign-made-router-ban-explained/

U.S. Weighs Ban on Chinese made TP-Link router and China Telecom

China backed Volt Typhoon has “pre-positioned” malware to disrupt U.S. critical infrastructure networks “on a scale greater than ever before”

WSJ: T-Mobile hacked by cyber-espionage group linked to Chinese Intelligence agency

Trump and FCC crack down on China telecoms; supply chain security at risk

RAN Silicon Rethink- Part II; vRAN and General-Purpose Compute

Overview:

The global Radio Access Network (RAN) market has experienced a significant decline, dropping by nearly $10 billion in annual product revenue between 2022 and 2024, from roughly $45 billion to about $35 billion by the end of last year (source: Omdia).

  • As the IEEE Techblog previously reported, Nokia is gradually moving away from its long-held reliance on custom RAN baseband (BBU) silicon from Marvell [1.] as it pivots to use Nvidia’s GPUs, as part of the latter’s $1B investment in Nokia in October 2025.

Note 1. Nokia uses Marvell RAN silicon in its 5G ReefShark portfolio. The companies collaborate to develop custom OCTEON SoC (System-on-a-Chip) and Infrastructure Processors, which are used to boost 5G AirScale base station performance.

  • Samsung has long partnered with Marvell Technology on purpose-built 5G baseband silicon. However, rising development costs and a contracting market for proprietary RAN hardware are reshaping that strategy. The economic case for new, custom RAN chipsets is becoming weaker as operators accelerate network virtualization.
  • In sharp contrast, Ericsson continues to defend its investment in proprietary silicon architectures while maintaining a flexible approach for operators that prefer virtualized or cloud RAN implementations running on standard central processing units (CPUs). At present, those solutions rely exclusively on Intel processors, though Ericsson notes its software is being engineered with portability in mind to support future hardware diversity.

Samsung’s Silicon Strategy:

Among RAN equipment vendors accessible to operators across North America and much of Europe, Samsung now stands as the principal alternative to the two Nordic RAN equipment suppliers, following the exclusion of Huawei and ZTE from many Western markets.

The South Korean conglomerate has become the global frontrunner in virtualized RAN (vRAN) deployments. Whereas custom silicon once dominated RAN infrastructure design, Samsung’s strategy has notably inverted that paradigm: vRAN is now its mainstream offering, and purpose-built hardware has moved to the periphery.

By the close of last year, Samsung reported supporting approximately 53,000 vRAN sites worldwide — a significant share of which lies within Verizon’s U.S. footprint. The company also disclosed major European developments, including Vodafone’s planned rollout across Germany and other markets, which will rely entirely on vRAN technology. For Samsung, discussions of bespoke, purpose-built 5G infrastructure have become increasingly rare.

According to Alok Shah, Vice President of Network Strategy at Samsung Networks, this transition reflects both the rising cost of developing custom silicon and the performance enhancements achieved by general-purpose CPU platforms.

“We’re still selling our purpose-built BBUs to a number of customers, but I do believe that it’s a matter of time,” Shah told Light Reading during MWC Barcelona, when asked if Samsung envisions an eventual phaseout of its proprietary baseband hardware portfolio.

Virtualized RAN Gains Momentum:

Transitioning to virtualized RAN (vRAN) allows network equipment vendors to capitalize on the scale economies of commercial data-center silicon. Samsung has established commercial vRAN contracts with Verizon and Vodafone, reflecting growing operator confidence in software-defined architectures.

“Virtual RAN performance has reached parity,” Shah said. “I know not all of our competitors feel that way, but that’s certainly how we feel. And the cost of building that modem is pretty high, even for a company like Samsung that’s really good at semiconductors,” he added.

Intel’s Granite Rapids Xeon platform exemplifies this shift to vRAN. The processor’s increased core density enables operators to cut hardware footprints; in many configurations, a single server can now support workloads that previously required two. Several network operators have confirmed this performance improvement during field evaluations.
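
As a rough illustration of the consolidation arithmetic, the sketch below estimates server counts from per-core cell capacity. The per-core capacity and fleet size are hypothetical placeholders, not vendor specifications; only the general "one server instead of two" outcome comes from the text above.

```python
import math

def servers_needed(total_cells: int, cells_per_core: float, cores_per_server: int) -> int:
    """Ceiling of total cells divided by per-server cell capacity."""
    cells_per_server = cells_per_core * cores_per_server
    return math.ceil(total_cells / cells_per_server)

# Hypothetical pool of 120 cells, 0.5 cells handled per core (illustrative only).
old = servers_needed(120, 0.5, 32)  # older 32-core part
new = servers_needed(120, 0.5, 72)  # 72-core Granite Rapids-class part
print(old, new)  # higher core density roughly halves the hardware footprint
```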

Samsung and Ericsson continue to explore additional CPU suppliers. AMD’s latest multicore x86 processors offer up to 84 cores, compared with 72 in Intel’s Granite Rapids. However, offloading Forward Error Correction (FEC)—one of the most compute-intensive RAN processes—remains a challenge. Intel’s vRAN Boost feature integrates a dedicated hardware accelerator for FEC, while AMD currently lacks a direct equivalent.

Samsung has also evaluated Arm-based platforms, which increasingly support efficient software migration from x86. Nvidia’s Grace CPU, built on Arm architecture, has emerged as a potential candidate, especially when paired with its GPUs for selective Layer 1 acceleration.

Samsung’s roadmap aligns with a gradual and selective introduction of GPU acceleration. The company demonstrated GPU-based beamforming optimization during MWC, illustrating how AI can refine radio energy targeting. However, Samsung executives maintain that the latest Intel CPUs also provide sufficient capacity to host AI inference workloads directly. “Granite Rapids has plenty of capacity to support AI algorithms on-platform,” noted Shah.

While Nokia is building a GPU-compatible Layer 1 to accelerate computationally intensive baseband functions—including FEC—Samsung’s approach is more incremental and narrowly scoped, focusing on targeted AI for RAN optimization rather than complete GPU offload. GPUs may ultimately support AI-at-the-edge applications—so-called “AI and RAN”—where telecom operators leverage deployed GPUs for latency-sensitive inference services.

The degree to which such applications will reside within RAN sites remains uncertain. Some operators suggest that edge inference may instead remain within core network clusters that can meet latency requirements more efficiently.

Samsung’s architecture already supports GPU integration through commercial off-the-shelf (COTS) servers from manufacturers such as HPE, Dell, and Supermicro—aligning with broader cloud-native RAN trends. “It’s an off-the-shelf card that can be integrated directly into standard servers,” said Shah.

For now, Intel remains Samsung’s primary compute partner for commercial vRAN products. “We haven’t had an instance where customers are pushing for a second platform—it’s primarily a matter of commercial interest,” Shah added. The direction is clear: Samsung, like other leading vendors, is prioritizing scalable, general-purpose compute over bespoke 5G silicon as vRAN deployment accelerates.

……………………………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/5g/samsung-eyes-death-of-purpose-built-5g-but-has-no-ai-ran-fears

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Marvell shrinking share of the RAN custom silicon market & acquisition of XConn Technologies for AI data center connectivity

Intel FlexRAN™ gets boost from AT&T; faces competition from Marvell, Qualcomm, and EdgeQ for Open RAN silicon

Analysis: Nokia and Marvell partnership to develop 5G RAN silicon technology + other Nokia moves

Open Cosmos introduces global space-based LEO satellite service for IoT monitoring

Founded in 2015, UK headquartered Open Cosmos has introduced a new integrated satellite service that combines broadband, earth observation, and IoT capabilities to help organizations monitor critical infrastructure, protect environmental assets, and respond more rapidly to events. The company says the offering is unique in combining global IoT connectivity with real-time Earth observation data to deliver contextual intelligence for governments and institutions.

The service is built on Open Cosmos’ multi-layer satellite architecture, which the company describes as a trilogy of secure broadband connectivity, Earth observation, and IoT. The constellation includes the newly launched Connected Cosmos Low Earth Orbit (LEO) connectivity backbone [1] and the Open Constellation Earth observation layer [2]. Each satellite carries an IoT payload, integrating functions that are typically deployed as separate systems.

Note 1. Connected Cosmos is a new LEO constellation providing sovereign and secure communications for businesses and government bodies worldwide. It ensures that critical data remains secure, trusted, and immediately usable—even when terrestrial infrastructure is compromised. It uses optical inter-satellite links to route data between satellites, physically bypassing subsea cables. Built to withstand interference from jamming and cyberattacks, it is designed to operate through a contested orbital environment in support of modern critical operations.

Note 2. The Open Constellation is a mutualized satellite infrastructure created to enable organizations to share the data generated by satellites for improved access to information about our planet. Using this shared capacity reduces overall costs and increases access to better-quality, more frequent data. With more satellites in orbit, more areas can be covered more frequently, giving partners of the Open Constellation greater global coverage.

Open Cosmos Ecosystem:

Image Credit: Open Cosmos 

…………………………………………………………………………………………………………………………………………………………………………

The company says this approach is intended to “address the traditionally siloed nature of space-based data services, dramatically accelerating data delivery times and maximizing operational awareness, which will monitor environmental change and support disaster response across the globe – even in the most remote regions.”

Open Cosmos says the result is faster detection of events and a better understanding of what is happening on the ground. Potential applications include monitoring widely distributed assets, overseeing critical infrastructure such as energy, utility, and rail networks, protecting oceans, tracking wildfires, and observing offshore conditions. In this model, imagery and sensor data are combined so that users can not only see that a change has occurred, but also understand the context behind it.

“Our mission at Open Cosmos has always been focused on solving real world issues through space-based services,” said Danielle Edwards, VP for IoT at Open Cosmos. “This is an essential and critical technology service for governments, enterprises and institutions across the globe, helping to monitor and solve real world problems, with the innovative use of technology in space.

“Our existing Earth observation satellites already carry IoT payloads, so we have the experience to integrate further through our ConnectedCosmos LEO constellation, with each satellite being designed and made to carry IoT capabilities. Our aim is to provide a multitude of payload types within a single constellation to give our customers a completely bespoke and unique service.

“We won’t be just providing the data from a sensor; we will provide the visual imagery to explain why that data is changing. As demand for global monitoring and connected infrastructure continues to grow, our integrated approach represents a new model for space-enabled intelligence.”

At MWC earlier this month, Carlos Zamora, VP of Satcom Solutions at Open Cosmos, said the company is not positioning the LEO broadband service as a direct-to-device play.

Zamora elaborated:

“First of all, we’re not going direct to device with the broadband. We’re not here to compete with Starlink or Kuiper or all of these systems – we’re not here to bring internet to the masses. We’re here to bring global, secure connectivity to governments, commercial [customers] and actually anyone that is worried about their data resiliency and sovereignty. But we do have IoT capabilities that commercial and other customers could use. So the architecture is also fundamentally different. What we’re selling is a network – not just a link in space, but actually a network. And I think what makes the difference beyond just connectivity, which is already a differentiator, is the fact that we can start fusing all of our offerings together. This is not just about moving bits from one place to another; it is giving you the possibility of accessing a space infrastructure that can give you access to real-time Earth observation, to real-time computing capabilities in orbit, and basically creating a network of assets that can increase your situational awareness and give you access to a global intelligence backbone.”

Open Cosmos is effectively positioning the platform as a secure, multi-sensor space infrastructure layer rather than a consumer broadband network. The focus is on government, enterprise, and institutional customers that need connectivity, resilience, and situational awareness tied to Earth observation and IoT data.

………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.open-cosmos.com/

https://www.open-cosmos.com/leo-satellite-network-connectivity

https://www.open-cosmos.com/news/open-cosmos-earth-observation-iot-real-time-data

https://www.telecoms.com/satellite/open-cosmos-launches-earth-observation-and-iot-satellite-service

Enterprise IoT and the Transformation of UK Telecom Business Models – Part 1

From LPWAN to Hybrid Networks: Satellite and NTN as Enablers of Enterprise IoT – Part 2

Semtech LoRa® PHY technology enables Amazon Sidewalk to expand while supporting fixed and mobile IoT endpoints

ITU-R recommendation IMT-2020-SAT.SPECS from ITU-R WP 5B to be based on 3GPP 5G NR-NTN and IoT-NTN (from Release 17 & 18)

CEA-Leti RF Chip Enables Ultralow-Power IoT Connectivity For Remote Devices Via Astrocast’s Nanosatellite Network

Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Introduction:

Across regions from Germany to Mexico, users of artificial intelligence (AI) are less concerned about being replaced by AI than by its propensity to make major mistakes, according to one of the largest global surveys to date on real-world AI usage and perception. These mistakes, known as “AI hallucinations,” are fabricated answers presented confidently as fact, rather than errors stemming from outdated information.

The study, conducted by Anthropic using its Claude chatbot, analyzed interviews with more than 80,000 users across 159 countries. The result is one of the most detailed global portraits yet of how AI is being deployed — and how users perceive its risks, benefits, and societal implications.

AI Hallucinations Outrank Job Displacement as Top Concern:

When asked what worries them most about AI, 27% of users cited AI chatbot errors described as “AI hallucinations,” while 22% pointed to job displacement and the loss of human autonomy. About 16% expressed concern that AI could weaken people’s capacity for critical thinking.

Image Credit: JOIST AI

“The AI hallucinations were a disaster. I lost so many hours of work,” said an entrepreneur from Germany. Another participant, a military worker in Mexico, noted the importance of domain knowledge in spotting AI’s flaws: “When I notice AI errors it’s because I’m well versed in the topic . . . but I wouldn’t know if the topic was alien to me, would I?”

An AI Interviewer for Global Insights:

The responses were collected in 70 languages using a novel feedback system that allowed Claude to act as both interviewer and analyst. The platform evaluated qualitative answers, categorizing responses to reveal common themes and linguistic nuances across regions.

“Beyond its scale and linguistic diversity, the project aimed to collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products,” said Deep Ganguli, who leads Anthropic’s societal impacts team and oversaw the research initiative.

Productivity and Personal Growth Drive AI Adoption:

While data quality and reliability drew criticism, the survey also underscored widespread acknowledgment of AI’s positive impact on productivity. Thirty-two percent of respondents said that AI tools had meaningfully improved their output at work.

An entrepreneur in the United Arab Emirates explained, “I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people — I don’t wait for anyone anymore.” Participants from Colombia, Japan, and the United States described similar gains, emphasizing how AI helps them free up time for family, hobbies, and creative exploration.

In total, nearly one in five users (19%) said AI had fallen short of their expectations. Yet usage patterns demonstrate remarkable versatility: respondents reported employing AI as a productivity assistant, educational tutor, design partner, creative collaborator, or even an emotional support companion.

A vivid example came from a soldier in Ukraine, who wrote, “In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life — my AI friends.”

Regional and Economic Divides in AI Optimism:

Regional variation was pronounced. Saffron Huang, the lead researcher on the project, found that respondents in South America, Africa, and across South and Southeast Asia expressed more optimism than users in Europe, the United States, or East Asia.

“The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure,” said Huang. She added that this optimism might reflect a sample skew toward early adopters in developing markets — individuals inclined to view new technologies as opportunities rather than threats.

“They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries,” she said.

According to Anthropic’s researchers, AI’s limited visibility in daily workflows across lower-income economies may explain the difference. “If AI hasn’t visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist,” the team wrote in a companion blog post.

Next Steps: Measuring AI’s Real-World Impact:

Anthropic plans to extend its Claude Interviewer research framework into longitudinal studies that track how AI affects users’ lives over time. “The goal is to better measure both the improvements and the harms — and to use those insights to make systemic refinements,” said Ganguli.

The company’s approach — embedding feedback collection directly into an AI platform — represents an emerging model for data-driven, iterative AI development. By combining self-reported user experience data with large-scale text analytics, Anthropic aims to better understand how its models interact with human needs and constraints.

Industry and Research Community Respond:

The study has drawn attention across the AI community for its unprecedented reach and innovative methodology. Nickey Skarstad, director of product at language-learning company Duolingo, praised the work’s ambition. On LinkedIn, she wrote: “For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we’ve never had access to before.”

Still, several researchers remain cautious about overinterpreting the results. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, expressed reservations on X, saying he was “sceptical” about calling the study a new form of science due to potential selection bias and limitations in survey design. “A human qualitative researcher would take time to build trust with their participants, hold the space for reflection, introspection, contradictions — that’s the whole point of it,” he wrote.

Methodological caveats extend to demographics. Almost half of the survey’s respondents were based in North America or Western Europe, while regions such as Central Asia contributed only a few hundred participants.

Ilan Strauss, an economist and director of the AI Disclosures Project, described the initiative as “an excellent piece of work,” but urged careful interpretation. He noted that the absence of reported confidence intervals — standard practice in survey-based research — makes it difficult to measure uncertainty. Self-reported productivity gains, he added, are inherently prone to bias.

A Global Mirror for Human-AI Relations:

Despite these caveats, the Claude Interviewer study illustrates a broader shift in the relationship between humans and AI systems. As AI technologies proliferate across regions and industries, they are becoming both instruments of empowerment and sources of anxiety — mirroring social, economic, and cultural dynamics in striking ways.

While western economies debate AI-driven labor disruption and ethical alignment, many in emerging markets frame AI as a means of upward mobility and creative expansion. This duality — between apprehension and aspiration — may shape not only AI adoption patterns but also future research and regulatory directions across global contexts.

References:

https://www.ft.com/content/e074d3a9-7fd8-447d-ac0a-e0de756ac5c5?syn-25a6b1a6=1 (PAYWALL)

https://www.joist.ai/post/ai-hallucinations-what-they-are-and-why-it-matters

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Alphabet’s 2026 capex forecast soars; Gemini 3 AI model is a huge success

Analysis & Economic Implications of AI adoption in China

China’s open source AI models to capture a larger share of 2026 global AI market

AWS to deploy AI inference chips from Cerebras in its data centers; Anapurna Labs/Amazon in-house AI silicon products

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments


Telco investments in mobile core networks surge 83% in 2025-Q4, but what about ROI?

According to new data from market research firm Omdia (owned by Informa), Q4 2025 investments in 5G SA core networks surged 83% year-over-year. For OEMs, this uptick marks a break from the stagnant 5G Standalone (SA) momentum of recent years. Omdia identified North America and EMEA as the primary growth engines for the quarter. “The surge in 5G core investment underscores CSPs’ strategic focus on enabling new revenue streams and digital transformation,” said Roberto Kompany, Principal Analyst, Mobile Infrastructure, at Omdia, in a statement. “This momentum is reflected in AT&T’s nationwide 5G SA and RedCap deployment and Verizon’s launch of a new enterprise-grade fixed wireless access (FWA) slice,” he said.

Ookla and Omdia recently noted accelerating 5G SA adoption in Europe, but the region continues to trail global leaders due to its low baseline. Spain remains a standout exception. Telefónica recently achieved a domestic milestone by deploying 5G SA in-building coverage via a Vantage Towers DAS, and has partnered with Airbus Helicopters to integrate 5G SA into manned and unmanned rotary-wing platforms for the Spanish armed forces. Despite broader deployments in the UK and Germany, a significant performance gap remains.

The GCC region (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE) currently delivers median 5G SA download speeds up to five times faster than European averages. This disparity highlights a capability gap, rather than a coverage gap, between mature and emerging markets. The industry footprint is expanding, with Omdia reporting 88 commercial 5G SA deployments to date, a notable increase from the 72 reported by Dell’Oro in late 2025.

…………………………………………………………………………………………………………………………………………………………………………………………….

While Dell’Oro confirms the 5G SA Core market growth, it emphasized that subscriber migration and active utilization, rather than just “flags in the ground,” are the true long-term drivers of infrastructure spend. For the first time, the 5G segment accounted for a 50 percent share of the total Mobile Core Network (MCN) market.

“In 2025, the MCN market recorded its highest year-over-year revenue growth rate since 2014,” stated Dave Bolan, Research Director at Dell’Oro Group. “This was driven by record-setting growth rates in all market segments: 4G MCN (highest since 2019), 5G MCN (highest since 2022), and Voice Core (highest since 2007). 4G MCN gains came from Caribbean and Latin America (CALA) and Europe, Middle East, Africa (EMEA) regions; 5G MCN from all regions; and Voice Core, primarily from Asia Pacific and EMEA regions.

“5G MCNs led the way in 2025 growth, as 5G Standalone (5G SA) networks reached an inflection point and moved towards mass market appeal, as more 5G SA networks expand in population coverage in urban, suburban, and rural areas. Voice Core was the next major contributor to growth in 2025, driven by planned 3G MCN shutdowns, which required upgrades from Circuit Switched Core to IMS Core, and IMS Core modernization to a cloud-native IMS Core for VoNR in 5G SA networks. Meanwhile, 4G MCNs expanded due to subscriber growth in Africa and South America,” added Bolan.

Looking ahead, Omdia forecasts sustained double-digit growth for 5G Core investments through 2026, fueled by the requirement for nationwide service parity and increased network capacity. This outlook favors the leading 5G Core vendors—Huawei, Ericsson, and Nokia—who currently maintain the highest market shares.

……………………………………………………………………………………………………………………………………………………………………………………………

ROI for 5G SA Core Networks?

The return on investment (ROI) for 5G Standalone (SA) core networks is at a critical inflection point. While the initial years were marked by complaints about slow momentum, 2025 and 2026 have seen a shift from pilot testing to an execution-driven phase with measurable, albeit varied, returns. In the 2025–2026 market, enterprise ROI for 5G SA is primarily driven by three high-growth segments: Private 5G Networks, RedCap IoT, and Network Slicing. While public 5G consumer returns remain steady, these B2B use cases are where Mobile Network Operators (MNOs) are finding the most immediate “killer applications.”

ROI Drivers in 2026:
  • Operational Efficiency: 5G SA cores are cloud-native, allowing for microservices that can be deployed in hours rather than days. This reduces long-term operational costs (OpEx) by automating network functions and improving energy efficiency per gigabyte transmitted.
  • New Revenue Streams: Unlike 5G Non-Standalone (NSA), the SA core enables Network Slicing and Ultra-Reliable Low-Latency Communications (URLLC). These are essential for high-margin B2B services like industrial robotics, emergency services, and “SuperMobile” slicing for enterprises.
  • Monetization of “Capability”: In regions like the GCC (Gulf Cooperation Council), 5G SA delivers speeds up to five times faster than European averages, allowing operators to charge for performance-based tiers rather than just data volume.
  • Consumer Benefits: Early data from the UK indicates that 5G SA can extend device battery life by 11% to 22% due to its unified control plane, creating a tangible value proposition for premium consumer plans.
Current Market Challenges:
  • The “Value Perception Gap”: Despite nationwide rollouts, some operators (like AT&T in late 2025) saw mobile service revenue grow by only 3.4%, barely outpacing inflation.
  • Regional Disparity: ROI is strongest in North America and China, where industrial policy and sovereign wealth have accelerated deployment. In contrast, Europe faces a “regulatory quagmire” and higher costs for removing legacy equipment, slowing its path to profitability.
  • The 6G Factor: Some operators are hesitant to invest billions in a full 5G SA overhaul if the technology is viewed as a “transitional” generation that may be superseded by 6G-ready cores in the late 2020s.
Strategic Outlook for 2026:
Market research from the Dell’Oro Group projects the 5G Mobile Core Network market to grow at a 12% CAGR through 2030, reaching historic highs in 2026. For most operators, the consensus is that 5G SA is a strategic necessity to maintain competitiveness, even if the short-term financial returns are uneven.
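
The compound arithmetic behind that projection is straightforward. In the sketch below, only the 12% CAGR comes from the text; the base value of 1.0 is a normalized placeholder, not a reported market size.

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# Growth multiple over five years (e.g., 2025 -> 2030) at a 12% CAGR.
multiple = project(1.0, 0.12, 5)
print(round(multiple, 2))  # a 12% CAGR compounds to roughly 1.76x over five years
```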
In his February 2026 newsletter, Stéphane Téral wrote, “2026 points to a more mixed environment—RAN slightly down, 5G Core continuing to grow—against a backdrop of uncertain capex and an accelerating shift toward opex and software-driven models.”
…………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.telecoms.com/5g-6g/telcos-spend-more-on-the-core-as-5g-sa-picks-up

https://www.linkedin.com/pulse/february-newsletter-4q25-fy25-wireless-infrastructure-update-ug9ec/

Dell’Oro: Mobile Core Networks +15% in 2025; Ookla: Global Reality Check on 5G SA and 5G Advanced in 2026

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

Téral Research: 5G SA core network deployments accelerate after a very slow start

Analysts: Telco CAPEX crash looks to continue: mobile core network, RAN, and optical all expected to decline

Building and Operating a Cloud Native 5G SA Core Network

MCN Market Roared Back in 2025 With 15 Percent Growth, According to Dell’Oro Group

Analysis of Airspan Networks & Atika Alliance: Resilient, Multi-Domain 5G Mission Critical Connectivity for the Defense Industry

Airspan Networks Holdings LLC (“Airspan”) and ATIKA Venture, S.L. (“Atika”) have entered into a strategic collaboration to advance resilient, multi-domain 5G communications for defense and security operations. The initiative focuses on developing interoperable, deployable network systems optimized for mission-critical connectivity across terrestrial and airborne domains.

The cooperation framework covers both commercial and technical engagements, with initial activities centered in Spain and expansion potential across Europe. The partnership unites Airspan’s portfolio in Open RAN (O-RAN), 5G, and commercial Air-to-Ground (ATG) communications with Atika’s capabilities in tactical 5G deployments, AI-driven network analytics, and secure 5G core integration for defense-grade environments.

Joint programs will address the convergence of deployable 5G infrastructure and mobile ad hoc network (MANET) systems under a unified network orchestration and control layer. The combined architecture aims to provide secure, high-throughput connectivity in dynamic and contested electromagnetic environments. Technical priorities include rapid network deployment, automated resilience management, AI-assisted spectrum optimization, and end-to-end encryption aligned with defense mission profiles.

Image Credit:  Aviat Networks

“Airspan has a strong history of solving advanced connectivity challenges, including low-latency, high-mobility communications through our Air-to-Ground In-Motion 5G platform,” stated Glenn Laxdal, CEO of Airspan. “Through this collaboration with Atika, we aim to adapt our commercial-grade 5G and O-RAN technologies to defense use cases that demand operational resilience and interoperability across domains. Atika’s deep experience in defense communications, combined with their expertise in AI-enabled network intelligence and secure 5G core technologies, represents a substantial complement to our portfolio.”

“The operational landscape increasingly depends on adaptable, intelligent, and sovereign networks,” said Ana Rodríguez Quirós, Managing Director of Atika. “Our partnership with Airspan strengthens our ability to support multi-domain 5G for defense users, extending connectivity beyond satellite and traditional radio systems. Building on our collaboration with the Spanish Army, this alliance demonstrates how advanced 5G network architectures can directly enhance mission readiness, mobility, and overall operational effectiveness.”

About Airspan:

Headquartered in Plano, Texas, Airspan Networks Holdings LLC is an innovative U.S.-based provider of wireless network solutions with a global presence, focused on delivering carrier-grade 5G and advanced wireless connectivity. Airspan’s portfolio spans three core solution areas – in-building, outdoor, and air-to-ground – and includes market-leading products for DAS, Open RAN, and small cells across both public and private network settings. Airspan supports mobile network operators, neutral-host providers, enterprises, public-sector organizations, and other service providers in building reliable, scalable wireless networks that enhance coverage and capacity while enabling fast, efficient deployment.

Visit our website at https://airspan.com/

About Atika:

Atika is a Spanish technology company specializing in advanced tactical communications and deployable 5G networks for defense and security. Its technology focuses on federated architectures, multi-domain connectivity, and network intelligence capabilities designed for real operational environments.

……………………………………………………………………………………………………………………………………………………….

Requirements and Analysis:

1.] Resilient, mission-critical 5G connectivity (URLLC that meets the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation), combined with:

2.] A unified network orchestration and control layer (the 5G Service-Based Architecture, which depends on implementation of 3GPP Release 17 and 18 specifications).

1.  Enhancements to the 5G NR Physical Layer (PHY) to support Ultra-Reliable Low-Latency Communications (URLLC) in the Radio Access Network (RAN). Basic URLLC support was established in Release 15, but when 3GPP Release 16 was frozen in July 2020, the URLLC RAN enhancements had not been completed or performance tested. Hence, the ITU-R M.2150 standard for IMT-2020 RIT/SRIT initially did not meet the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation.

The most significant PHY-layer optimizations were finalized in Release 16 (Phase 2) and Release 17 (Phase 3), with more to come in Release 18 as described below.

a] Release 16 (The “IIoT and URLLC” Phase):
This release introduced foundational PHY improvements to reach “six nines” (99.9999%) reliability. Key features included:

  • New DCI Formats: Compact Downlink Control Information (DCI) formats (e.g., Format 0_2 and 1_2) were added to reduce signaling overhead and improve robustness.
  • Sub-slot HARQ-ACK Feedback: Enabled faster feedback by allowing multiple HARQ-ACK transmissions within a single slot.
  • PUSCH Repetition Type B: Introduced to allow even finer-grained (mini-slot based) repetitions for low-latency uplink, enabling transmissions to cross slot boundaries.
  • Intra-UE Prioritization: Standardized the ability for a device to prioritize a high-priority (URLLC) transmission over a lower-priority (eMBB) one if they overlap in time.
  • Multi-TRP (CoMP): Enhanced support for Transmission and Reception Points (TRPs) to provide spatial diversity, ensuring communication continues if one path is blocked.
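The latency gains from mini-slot transmission and PUSCH Repetition Type B come down to simple NR timing arithmetic. The sketch below (a minimal illustration, assuming the standard NR numerology of 14 OFDM symbols per slot and a slot duration of 1 ms / 2^mu) shows why back-to-back 2-symbol repetitions that may cross slot boundaries finish far faster than slot-aligned retransmissions:

```python
# Illustrative sketch of NR timing behind the Rel-16 URLLC features above.
# Assumes standard NR numerology: slot = 1 ms / 2^mu, 14 OFDM symbols per slot.

def slot_duration_ms(mu: int) -> float:
    """Slot duration for subcarrier spacing 15 * 2^mu kHz."""
    return 1.0 / (2 ** mu)

def symbol_duration_us(mu: int) -> float:
    """Approximate OFDM symbol duration (ignoring per-symbol CP differences)."""
    return slot_duration_ms(mu) * 1000.0 / 14

def repetition_type_b_span_us(mu: int, symbols_per_rep: int, repetitions: int) -> float:
    """Total air-time of back-to-back mini-slot repetitions. Type B allows the
    span to cross slot boundaries, so no per-slot rounding is applied."""
    return symbol_duration_us(mu) * symbols_per_rep * repetitions

# At 30 kHz SCS (mu=1) a slot is 0.5 ms, so a 2-symbol mini-slot repeated
# 4 times completes in well under a single slot.
span = repetition_type_b_span_us(mu=1, symbols_per_rep=2, repetitions=4)
print(f"{span:.1f} us")  # ~285.7 us, versus 500 us for one full slot
```

The function and parameter names here are illustrative, not 3GPP terminology; the arithmetic is the standard numerology relationship.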

b] Release 17 (The “Further Enhanced URLLC” Phase):
Completed in 2022, this release focused on consolidating these features and extending them to more complex scenarios:

  • URLLC in Unlicensed Spectrum (NR-U): Adapted URLLC PHY procedures for unlicensed bands, addressing regulatory constraints like Listen-Before-Talk (LBT).
  • Improved HARQ-ACK and CSI Reporting: Introduced more efficient and robust feedback mechanisms for better link adaptation.
  • Enhanced Multi-TRP for UL: Further optimized uplink transmissions using multiple TRPs for increased reliability.
Summary of Implemented Rel-17 RAN Enhancements:
  • Feedback Reliability: Improved HARQ-ACK and Channel State Information (CSI) reporting to ensure the network can adapt to rapid channel changes.
  • Traffic Prioritization: Intra-UE prioritization allows URLLC data to “pre-empt” or take priority over standard mobile broadband (eMBB) data within the same device.
  • Power Savings: New mechanisms like Paging Early Indication (PEI) allow URLLC-capable sensors to remain in low-power states longer without sacrificing the ability to wake up instantly for critical data.
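The intra-UE prioritization behavior described above can be sketched as a small scheduling rule: when two uplink transmissions overlap in time on the same device, the higher-priority (URLLC) one is sent and the lower-priority (eMBB) one is dropped or deferred. This is a minimal illustration only; the class and function names are invented for the example, not taken from the 3GPP specifications:

```python
# Illustrative sketch of intra-UE prioritization: a URLLC grant pre-empts an
# overlapping eMBB grant on the same device. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class UlGrant:
    name: str
    start_us: float   # scheduled start time
    dur_us: float     # transmission duration
    priority: int     # higher value = higher priority (URLLC > eMBB)

    def overlaps(self, other: "UlGrant") -> bool:
        return (self.start_us < other.start_us + other.dur_us
                and other.start_us < self.start_us + self.dur_us)

def resolve(grants: list[UlGrant]) -> list[UlGrant]:
    """Keep every grant that is not overlapped by a higher-priority one."""
    return [g for g in grants
            if not any(o.priority > g.priority and o.overlaps(g)
                       for o in grants)]

embb = UlGrant("eMBB PUSCH", start_us=0, dur_us=500, priority=0)
urllc = UlGrant("URLLC PUSCH", start_us=100, dur_us=70, priority=1)
print([g.name for g in resolve([embb, urllc])])  # ['URLLC PUSCH']
```

In the real RAN the dropped eMBB transmission is recovered via HARQ retransmission rather than simply discarded; the sketch only shows the prioritization decision itself.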
c] Current Status:
While the core functional specifications for URLLC in the RAN are considered “complete” as of Release 17, the ecosystem continues to evolve into 3GPP Release 18 (5G-Advanced), which looks at further specialized enhancements for Extended Reality (XR) and Artificial Intelligence (AI).
Modem and Chipset Comparison (Device Side): 5G chipsets/modems and their Rel-17 URLLC features:
  • Qualcomm: World’s first 5G Advanced-ready modem. Supports enhanced HARQ-ACK and CSI feedback for reliability, and AI-based beam management to maintain stable URLLC links.
  • MediaTek M90: Conforms to Rel-17 standards and aligns with Rel-18 5G-Advanced. Implements Rel-17 Paging Early Indication (PEI) to reduce power while maintaining low-latency readiness.
  • Samsung Exynos Modem 5300: While primary documentation emphasizes Rel-16, Samsung achieved 1024-QAM (defined in Rel-17) in partnership with Qualcomm. Supports ultra-low latency via FR2 and EN-DC.
Network infrastructure implementation often takes the form of software-defined upgrades to existing massive MIMO and base station hardware.
  • Ericsson: Enabled “Time-Critical Communication” as a software upgrade on its RAN. Its Rel-17 implementation focuses on Hybrid Automatic Repeat Request (HARQ-ACK) enhancements, intra-UE multiplexing, and time-synchronization for Industrial IoT (IIoT).
  • Nokia: Updated its AirScale portfolio to support Rel-17 features, specifically targeting Time-Sensitive Communications (TSC) and deterministic networking for private factory environments.
  • Huawei: Has integrated Rel-17 URLLC enhancements as part of its “5.5G” (5G-Advanced) marketing, focusing on achieving sub-10ms latency for wide-area industrial control and 1ms for local-area automation.

2.  3GPP has specified a unified management and orchestration framework for 5G systems, primarily developed by working group SA5 (Management, Orchestration, and Charging). Starting from Release 15, 3GPP introduced a Service-Based Management Architecture (SBMA), which acts as a unified layer to manage and orchestrate 5G networks, including the Core, RAN, and end-to-end network slices.

Key aspects of the 3GPP unified 5G orchestration and control layer include:
  • Service-Based Management Architecture (SBMA): Instead of legacy, vendor-specific interfaces, 3GPP adopted a service-oriented approach. This architecture uses Management Services (MnS), which provide standardized interfaces for both management and orchestration, facilitating multi-vendor interoperability.
  • End-to-End Slice Management: The 3GPP standards (notably TS 28.530/531/532/533) define a common approach to manage the entire lifecycle of a 5G network slice (creation, activation, supervision, and termination) across RAN, Core, and Transport domains.
  • Network Automation (NWDAF): The Network Data Analytics Function (NWDAF), introduced in Release 15, is a key component for automated control. It collects network data, analyzes it, and feeds back insights to assist in policy management (PCF) and slice selection (NSSF).
  • Intent-Driven Management: 3GPP is enhancing its standards to support intent-driven management, enabling operators to manage network resources based on high-level desired outcomes rather than low-level configuration, which is crucial for autonomous networks.
  • AI/ML Management: Recent releases (18/19) focus on a unified, domain-independent AI/ML management and orchestration framework that supports the full lifecycle of AI/ML models within the 5G system.

The latest 3GPP release with finalized specifications for Service-Based Management Architecture (SBMA) is Release 18 (Rel-18), which was functionally frozen in early 2024. Rel-18 includes enhanced study items (FS_eSBMA) focused on supporting management for 5G standalone (SA) and non-standalone (NSA) scenarios and management of Management Functions.
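The end-to-end slice lifecycle that the TS 28.530-series specifications standardize (creation, activation, supervision, termination) can be pictured as a small state machine. The sketch below is a minimal illustration under that assumption; the state and action names are chosen for readability and are not the normative terms or interfaces from the specifications:

```python
# Illustrative state machine for the network-slice lifecycle phases named in
# the TS 28.530-series specs. States, actions, and transitions are simplified.
ALLOWED = {
    "created":    {"activate": "active"},
    "active":     {"supervise": "active",      # supervision keeps slice active
                   "deactivate": "created",
                   "terminate": "terminated"},
    "terminated": {},                          # terminal state
}

class NetworkSlice:
    def __init__(self, slice_id: str):
        self.slice_id = slice_id
        self.state = "created"   # lifecycle begins after the creation phase

    def apply(self, action: str) -> str:
        try:
            self.state = ALLOWED[self.state][action]
        except KeyError:
            raise ValueError(f"{action!r} not allowed in state {self.state!r}")
        return self.state

s = NetworkSlice("embb-slice-01")
s.apply("activate")
s.apply("supervise")          # monitoring while the slice serves traffic
print(s.apply("terminate"))   # prints: terminated
```

A real implementation would drive these transitions through the standardized Management Services (MnS) interfaces rather than local method calls; the point here is only the lifecycle ordering the standards define.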

…………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.businesswire.com/news/home/20260319340548/en/Airspan-Networks-and-Atika-Form-Alliance-to-Advance-Resilient-Multi-Domain-5G-Connectivity-for-Defense

SNS Telecom & IT: Mission-Critical Networks a $9.2 Billion Market

3GPP Release 16 5G NR Enhancements for URLLC in the RAN & URLLC in the 5G Core network

3GPP Release 16 Update: 5G Phase 2 (including URLLC) to be completed in June 2020; Mission Critical apps extended

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

https://www.3gpp.org/news-events/3gpp-news/sa5-5g

Revolutionizing 5G Mission Critical Transport Networks (Part 2)

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2020/Documents/S01-1_Requirements%20for%20IMT-2020_Rev.pdf

 
