Dell’Oro: Global RAN market stable (again) in 1Q 2026; top 5 RAN vendors are unchanged

A recently published report from Dell’Oro Group indicates that the stable trends shaping the Radio Access Network (RAN) market in 2025 extended into the first quarter of 2026. Worldwide RAN revenue, excluding services, increased at a low-single-digit year-over-year rate in 1Q 2026, marking the fifth consecutive quarter where the market remained within a relatively narrow range (-4 to +4% year-over-year). However, market fundamentals remain constrained by slower mobile broadband growth.

“This positive start does not alter the fundamentals shaping the growth prospects of this market,” said Stefan Pongratz, Vice President for RAN market research at the Dell’Oro Group. “We attribute the improved conditions primarily to a favorable regional mix and easier comparisons in markets that experienced sharp declines. Meanwhile, RAN remains growth-constrained, and operators are increasingly preparing for a slower mobile broadband growth environment,” Pongratz added.

Additional highlights from the 1Q 2026 RAN report:

  • Growth in EMEA and APAC offset weaker activity in North America.
  • Revenue rankings were unchanged in 1Q 2026. Based on trailing four-quarter worldwide revenue, the top five RAN suppliers are Huawei, Ericsson, Nokia, ZTE, and Samsung [1].
  • Regional imbalances continue to shape the market recovery trajectory, with APAC excluding China improving while North America and China remain under pressure.

Note 1. There were no significant quarter-to-quarter market share shifts among these five vendors, whose ranking remains the same.

About the Report

Dell’Oro Group’s RAN Quarterly Report offers a complete overview of the RAN industry, with tables covering manufacturers’ and market revenue for multiple RAN segments including 5G NR Sub-7 GHz, 5G NR mmWave, LTE, Macro BTS, small cells, Massive MIMO, and Cloud RAN. The report also tracks the RAN market by region and includes a four-quarter outlook. To purchase this report, please contact us by email at [email protected]

References:

Worldwide RAN Market Remained Stable in 1Q 2026, According to Dell’Oro Group

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAGR forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

ABI Research: mobile network spending to fall 29% from 2026-to-2031

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Multi-vendor Open RAN stalls as Echostar/Dish shuts down its 5G network, leaving Mavenir in the lurch

The enterprise network stack is collapsing; AI’s impact; comparison with “Batch Pipelines Break AI Agents”

by Shashi Kiran with Alan J Weissberger, ScD

Abstract:

This article presents the primary author’s point of view on networking technology and market evolution, as he has experienced it directly with his customers at Nile, where he serves as Chief Marketing Officer (CMO). A key theme is overlaying the impact of AI and its implications for network and network security architecture on a new network stack. We focus specifically on the diverse complexity and heterogeneity of the LAN, while drawing inferences to other areas in the broader enterprise network.

The article draws no information from other publications or references, except for the security breach data points derived from IDC, Gartner, and market surveys.  Hence, the References listed at the end of the piece are from related IEEE Techblog posts and Nile press releases chosen by this website’s content manager.

Definitions:

The enterprise network stack is much more than a protocol stack. It is the layered architecture of physical infrastructure, forwarding devices, control protocols, management systems, and security enforcement functions that interconnect users, endpoints, workloads, and cloud services across campus, branch, WAN, data center, and cloud domains. It typically includes access, distribution, core, and edge segments, along with overlay, orchestration, telemetry, identity, and policy planes that govern how traffic is admitted, routed, segmented, monitored, and secured.

A useful way to think about the stack is in terms of planes:

  • Data plane: forwards packets, enforces QoS, and applies access-control functions close to the traffic path.

  • Control plane: discovers topology and capabilities, computes paths, and reacts to failures.

  • Management plane: handles configuration, monitoring, troubleshooting, reporting, and performance management.

  • Security stack: includes firewalls, IDS/IPS, secure web gateways, threat intelligence, and related inspection or enforcement tools.

At the device level, the stack typically includes physical media and network hardware such as cabling, Wi-Fi, NICs, switches, routers, gateways, servers, and dedicated security appliances. At higher layers, it includes protocols and services for addressing, routing, transport, application connectivity, identity, and policy enforcement, often mapped loosely to OSI/TCP-IP concepts rather than a strict textbook stack.

In an enterprise environment, the network stack extends across LAN, WAN, data center, cloud, and security domains, so “the stack” is less a single product and more an integrated system of infrastructure, software, telemetry, and policy. That is why discussions of enterprise architecture usually separate forwarding, orchestration, assurance, and security functions even when they are delivered in a unified platform.

Structural Limits of the Enterprise Network Stack:

The enterprise network stack is approaching a structural inflection point and may be nearing a “breaking point.” What is failing is structural and architectural, not incremental. The enterprise network stack was architected for a world that no longer exists, and most of the pain organizations feel today is the cost of pretending otherwise. The interesting question isn’t whether it breaks, but when, and along which seams. Here’s why:

The network stack most enterprises still run was designed around five assumptions that were partly true in 2010 but mostly false in 2026. Users sit at desks on managed devices. Applications live in a corporate data center. Traffic flows north-south through a perimeter. Identity equals a user with a session. Trust derives from network location. Every one of those is gone. Users are hybrid, apps are SaaS and multi-cloud, traffic is increasingly east-west and machine-driven, identity now includes non-human agents acting with delegated authority, and zero trust has formally retired the idea that being inside the network means anything.

So, the enterprise stack isn’t failing because any single piece is bad. Rather, it’s failing because the architecture it was based on no longer matches the workload, the threat model, or the operational reality it’s asked to serve. AI is the forcing function, but the cracks were already there. The choice in front of most enterprises isn’t whether to rebuild but whether to do it deliberately or by accident. Will reinvention and self-disruption be intentional or forced?

Today, many enterprise environments represent layered extensions of legacy architectures rather than cohesive designs. AI acts as an accelerant, exposing pre-existing architectural limitations. The resulting fragmentation increases operational complexity, reduces agility, and amplifies security risk.

Complexity is a Primary Risk Vector:

Complexity has evolved from an operational burden into a primary source of systemic risk. Modern network environments often exceed the capacity for deterministic human understanding, creating conditions where failures and vulnerabilities emerge at the intersections between systems rather than within individual components.

Empirical evidence suggests that many successful breaches exploit misconfigurations and integration gaps rather than novel vulnerabilities. In this context, complexity itself becomes the effective attack surface.

This challenge is particularly acute in the LAN, which often retains legacy architectural elements, heterogeneous device ecosystems, and fragmented management models. Combined with constrained IT resources, this environment can become a disproportionate source of exposure.

Reducing complexity—through architectural simplification, integrated control planes, and automation—is therefore not merely an operational objective but a core security strategy. In AI-driven environments, simplicity directly contributes to resilience and risk reduction.

An Architectural Reset is Needed:

An architectural reset is increasingly necessary. While incremental upgrades remain feasible, their marginal returns are diminishing relative to the growing mismatch between legacy designs and emerging requirements. Many organizations continue to extend existing architectures due to cost constraints or perceived transition risks. However, this approach often compounds technical debt and increases long-term exposure. The more fundamental question is not whether incremental evolution is possible, but whether it represents effective capital allocation in the context of AI-driven workloads and threat models.

Forward-looking architectures are converging around several principles: AI-native workload support, identity-centric security, zero-trust enforcement, and tightly integrated operational models. Organizations that proactively redefine their network architectures around these principles are more likely to achieve sustainable performance, security, and operational efficiency gains.

Here are two conceptual architectural constructs for a unified, secure fabric with AI orchestration, autonomous operation, and service delivery, which replaces the fragmented network stack and operations of the traditional/legacy network. The first illustration is more functional; the second is a more theoretical stack.

Security and the Network Fabric:

Security is neither fully “moving into” nor “remaining outside of” the network fabric; rather, it is being restructured across distinct functional planes, including identity, policy, enforcement, and detection.

Historically, network-centric security relied on in-path inspection mechanisms (e.g., firewalls, intrusion prevention systems, and proxies). This model proved difficult to scale due to encryption, cloud decentralization, and traffic patterns that bypass centralized inspection points.

In contemporary architectures, the network fabric is evolving into a high-performance enforcement plane. Policy definition and decision-making are increasingly centralized in identity and control-plane systems, while enforcement is distributed across the network and applied at line rate to identity-associated flows.

This separation of concerns improves scalability and composability. Identity-centric policy models define “who can do what,” while the network enforces those decisions efficiently and locally. The result is a more adaptable and performant security architecture.
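This separation of concerns can be illustrated with a minimal sketch of a policy decision point (PDP) and policy enforcement point (PEP) split. The identities, service names, and policy table below are hypothetical, and a real fabric would enforce at line rate in hardware rather than in Python; the sketch only shows the division of labor — central, identity-centric decisions and local, per-flow enforcement.

```python
# Toy PDP/PEP split: identity-centric policy is defined centrally,
# while the fabric enforces it locally on each identity-tagged flow.
# Identities and service names here are illustrative only.
POLICY = {
    # "who can do what": identity -> set of allowed destination services
    "iot-camera-117": {"video-ingest"},
    "agent-payroll":  {"hr-db", "payments-api"},
}

def decide(identity: str, service: str) -> bool:
    """Central decision: evaluated against identity, not network location."""
    return service in POLICY.get(identity, set())

def enforce(flow: dict) -> str:
    """Distributed enforcement: applied to each flow close to the traffic path."""
    return "permit" if decide(flow["identity"], flow["service"]) else "deny"

p1 = enforce({"identity": "iot-camera-117", "service": "video-ingest"})
p2 = enforce({"identity": "iot-camera-117", "service": "hr-db"})
print(p1, p2)  # permit deny
```

The design choice to keep `POLICY` out of the enforcement function mirrors the architectural point: policy can change centrally without touching the distributed enforcement logic.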

However, the effectiveness of this approach depends on architectural discipline. Designs that treat the fabric as one component within a broader, identity-driven security framework tend to reduce complexity. Conversely, attempts to re-centralize security entirely within the network risk recreating earlier limitations in a more complex form.

AI’s Impact on Telecommunications Networks:

Artificial intelligence (AI) is influencing telecom network architectures along two orthogonal dimensions:

1.] AI introduces a new class of workloads that impose stringent and atypical requirements on network infrastructure.

AI workloads fundamentally challenge legacy network design assumptions. Traditional enterprise networks were optimized for north–south traffic patterns, human-driven interactions, and best-effort delivery models. In contrast, AI workloads generate predominantly east–west traffic, operate at machine timescales, and exhibit low tolerance for latency, jitter, and packet loss. Simultaneously, AI-enabled control and management planes enable higher degrees of automation and operational efficiency, particularly in campus and branch environments where autonomous operations are beginning to reduce manual intervention.

2.] AI is increasingly being embedded within the network itself, enhancing operations, optimization, fault diagnosis/recovery and security functions. The interaction between these roles is driving many of the architectural shifts observed today. Today, wide-area networks (WANs) must interconnect AI-intensive data center environments with distributed enterprise domains, effectively bridging heterogeneous traffic models and service requirements.

AI-Driven Changes in Traffic and Risk:

AI is reshaping both the structure of network traffic and its associated risk profile. From a traffic perspective, flows are becoming increasingly east–west, bursty, and machine-generated, with reduced visibility due to encryption and abstraction layers. From a security standpoint, AI introduces new classes of actors (e.g., non-human identities and autonomous agents), as well as new attack vectors, including adversarial AI and data exfiltration via model interactions.

These shifts are tightly coupled. The same properties that define AI-driven traffic—distribution, dynamism, and opacity—also complicate detection and enforcement. As a result, security architectures are evolving toward:

  • Identity-centric models that extend zero-trust principles to non-human entities.

  • Data loss prevention mechanisms adapted to AI-generated and AI-consumed data flows.

  • Fine-grained segmentation within network fabrics, subject to latency constraints.

  • Increased reliance on AI-driven detection and response systems to counter AI-enabled threats.

Importantly, these dynamics vary across network domains (LAN, WAN, and data center/cloud), requiring domain-specific adaptations while maintaining consistent policy frameworks.

Alignment with “Why Batch Pipelines Break AI Agents: The Case For Streaming-First Network Operations:”

The key points made in this article are highly consistent with the above-referenced IEEE Techblog post written by Shazia Hasnie, Ph.D. Both articles treat AI as an architectural forcing function: Shazia’s article focuses on the data/telemetry layer, while this post extends the same logic to the broader enterprise network stack. The core claim in both pieces is that legacy architectures were built for human-operated, latency-tolerant workflows, not autonomous AI systems. In Shazia’s article, batch pipelines fail because they deliver stale, incomplete, and inconsistent context to AI agents. Here, the same mismatch appears at the network level, where legacy enterprise designs were optimized for north–south traffic, perimeter trust, and static operational assumptions. Both arguments are fundamentally about architectural mismatch rather than isolated product shortcomings.

A particularly strong point of overlap is the emphasis on real-time context. Shazia’s article argues that AI agents require continuous data freshness and an ordered event stream to function safely, while this piece frames AI networking as a shift toward machine-timescale traffic, streaming telemetry, and identity-aware enforcement. In both cases, the network is no longer just a transport layer; it becomes part of the control loop that determines whether AI decisions are accurate and timely.

The failure modes are also similar. Shazia identifies five failure modes of batch-to-agent mismatch: stale data, memory gaps, delete blindness, schema fragility, and coordination failure. While not using that taxonomy explicitly, we share the same underlying diagnosis by arguing that complexity, fragmentation, and legacy operational models are now the primary sources of risk. Our discussion of east–west traffic, non-human identities, zero trust, and observability mirrors Shazia’s broader point that autonomous systems fail when their surrounding infrastructure cannot preserve state, sequence, and policy consistency.

These two articles work well together because they address different layers of the same transition. The first article is mainly about the data plane of AI operations—how telemetry, event streams, and agent inputs must move from batch to streaming to avoid operational failure. This article is about the network and security architecture around that data plane—how the enterprise stack, LAN, WAN, and fabric must evolve to support AI-native workloads and enforcement.  Hence, the reader can consider the two articles companion pieces.

…………………………………………………………………………………………………………………………………………………………………………………………

About the Author:

Shashi Kiran has nearly 30 years of experience in network, security and cloud technologies, primarily as an operator and executive in public and private B2B companies, where he has held global product management and marketing positions. He’s adopting a protopian view of AI, while being both fascinated and frightened by it at the same time.

Shashi is currently the CMO at Nile, whose network architecture aligns with what AI-era networks require: identity-centric control, embedded security, and autonomous operations.  He previously held executive roles at Cisco, Check Point Software, Broadcom and other venture backed startups, and is based in San Jose, CA. He can be reached at http://www.linkedin.com/in/skiran

…………………………………………………………………………………………………………………………………………………………………………………………

References:

Why Batch Pipelines Break AI Agents: The Case For Streaming-First Network Operations

Nile launches a Generative AI engine (NXI) to proactively detect and resolve enterprise network issues

Fiber Optic Networks & Subsea Cable Systems as the foundation for AI and Cloud services

Dell’Oro: Bright Future for Campus Network As A Service (NaaS) and Public Cloud Managed LAN

Cisco Plus: Network as a Service includes computing and storage too

https://nilesecure.com/press-releases/networking-and-security-in-higher-ed

https://nilesecure.com/press-releases/nile-powers-black-hat-mea-2025-with-zero-reported-incidents


Why Batch Pipelines Break AI Agents: The Case For Streaming-First Network Operations

By Shazia Hasnie, Ph.D. assisted by IEEE Techblog editor Sridhar Talari Rajagopal

Abstract:

The adoption of AI agents in network operations has exposed a critical architectural gap. Most enterprise data pipelines were designed for dashboards and reporting, not autonomous decision-making. When AI agents consume data from batch-oriented pipelines, five distinct failure modes emerge: stale data, memory gaps, delete blindness, schema fragility, and coordination failure. This article examines each failure mode, explains the underlying mechanism, and proposes architectural remedies grounded in streaming-first design principles. It also connects each technical failure to measurable business outcomes—extended downtime, recurring incidents, compliance exposure, silent decision degradation, and cascading impact. The result is both a diagnostic framework for I&O leaders and a financial argument for treating streaming data infrastructure as the prerequisite for autonomous operations.

Introduction: The Data Foundation Gap

Artificial intelligence is reshaping network operations. AI agents promise to detect anomalies, diagnose root causes, and execute remediation faster than human engineers. The industry has focused attention on models, GPUs, and orchestration frameworks. The data layer remains largely unexamined.

This is a critical oversight. Most enterprise data pipelines were built for human consumers. They serve dashboards, weekly reports, and historical analysis. Humans tolerate latency. Humans bring context. Humans notice when something looks wrong.

AI agents require something fundamentally different. They need real-time context. They need historical state. They need accurate representations of current reality. When these requirements are not met, agents do not complain. They act—on incomplete information, with incorrect assumptions, producing wrong outcomes.

The gap between what batch pipelines deliver and what agents require creates failure modes that most teams do not see until an agent makes the wrong decision. Recent analysis has identified the economic dimensions of this gap [1], while industry resources have begun documenting the specific failure patterns that arise when batch processing meets autonomous agents [6]. This article extends that work by identifying five distinct failure modes and proposing a streaming-first architectural response.

FIVE FAILURE MODES: ANATOMY OF BATCH-TO-AGENT MISMATCH

The following five failure modes represent the specific ways batch data pipelines undermine autonomous network operations. Each is examined through its mechanism—how the batch pipeline architecture produces the failure—its operational consequence, and the streaming-first architectural remedy that eliminates it. Together, they form a diagnostic taxonomy for any I&O team evaluating whether their data foundation is ready for Agentic AI.

Failure Mode 1: Stale Data

Mechanism: Batch telemetry pipelines poll, collect, and process data in cycles. Data is extracted on a schedule, transformed in bulk, and loaded into a destination—a warehouse, data lake, time-series database, or feature store that holds a static, point-in-time snapshot of the source. Between cycles, the pipeline holds no current state. An AI agent that spins up between cycles receives a snapshot of the past.

Consequence: The agent diagnoses an outage using telemetry from five minutes ago. The network state has changed during that interval. Routes have shifted. Traffic has been redirected. Thus, the agent’s diagnosis is based on a reality that no longer exists. Remediation actions applied to a past state can worsen the current incident. The agent becomes a liability rather than an asset. Industry documentation confirms that AI agents require continuous data freshness to function correctly [5].

Architectural Remedy: Streaming telemetry replaces cyclical polling with continuous event push. Data flows from source to consumer in real time, ingested directly into the streaming platform’s durable event log [2]. The agent consumes from a live stream, not a stale snapshot. Context acquisition takes milliseconds. The cognitive loop remains intact. This is not an add-on to the batch pipeline. It is a structural replacement of the ingestion layer.
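The difference between the two ingestion models can be sketched in a few lines of Python. This is a toy in-memory simulation (the `EventLog` class stands in for a durable streaming log such as a Kafka topic, and the interface name and refresh interval are invented), but it shows how a batch snapshot and a streaming read diverge the moment state changes mid-cycle.

```python
import time
from collections import deque

class EventLog:
    """Toy stand-in for a durable streaming log (e.g., a Kafka topic)."""
    def __init__(self):
        self.events = deque()

    def publish(self, event):
        self.events.append((time.time(), event))

    def latest(self):
        """Streaming consumer: sees the most recent event immediately."""
        return self.events[-1][1] if self.events else None

class BatchSnapshot:
    """Toy batch pipeline: refreshes its snapshot only every `interval` seconds."""
    def __init__(self, log, interval=300):
        self.log, self.interval = log, interval
        self.snapshot, self.last_refresh = None, 0.0

    def read(self, now):
        if now - self.last_refresh >= self.interval:
            self.snapshot, self.last_refresh = self.log.latest(), now
        return self.snapshot  # between refreshes, this may be minutes old

log = EventLog()
log.publish({"link": "ge-0/0/1", "state": "up"})
batch = BatchSnapshot(log, interval=300)
t0 = time.time()
batch.read(t0)                                       # snapshot taken: link is up
log.publish({"link": "ge-0/0/1", "state": "down"})   # state changes mid-cycle
stale = batch.read(t0 + 60)                          # batch agent still sees "up"
fresh = log.latest()                                 # streaming agent sees "down"
print(stale["state"], fresh["state"])                # up down
```

An agent diagnosing from `stale` would act on a link it believes is up; an agent consuming the live log sees the failure as it happens.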

Failure Mode 2: Memory Gap

Mechanism: Batch pipelines deliver windows of data—the last hour, the last day, the last processing cycle. They do not preserve the sequence of events that led to the current moment. Historical context is stripped away with each new extract. The pipeline knows what happened. It does not know what happened before.

Consequence: An agent responding to an interface flap cannot answer the most basic diagnostic question: has this happened before? It cannot correlate the current event with the three similar events that occurred in the preceding 24 hours. It cannot detect the pattern that would reveal a degrading optical module. Every incident appears isolated. Pattern recognition—the core value proposition of AI-driven operations—is structurally impossible. The distinction between streaming and batch architectures for these use cases has been well-documented [4].

Architectural Remedy: A durable event log with configurable retention serves as the agent’s memory [2]. Unlike a batch window, which discards history with each new extract, the event log preserves the ordered sequence of all events within the retention period. The agent seeks backward in the log on startup and replays the preceding window of telemetry. Pattern detection across time becomes native to the architecture. This is not a separate cache layered on top. It is the storage layer itself—immutable, ordered, and built for event replay from any offset.
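A minimal sketch of replay-based pattern detection, assuming a simplified ordered log of `(offset, event)` pairs (the event fields, interface names, and flap threshold are illustrative, not any particular platform's schema):

```python
from collections import Counter

# Ordered event log with retention; each entry is (offset, event).
# Field names and interface identifiers are hypothetical.
event_log = [
    (0, {"ts": 100, "type": "link_flap", "if": "ge-0/0/1"}),
    (1, {"ts": 220, "type": "link_up",   "if": "ge-0/0/2"}),
    (2, {"ts": 400, "type": "link_flap", "if": "ge-0/0/1"}),
    (3, {"ts": 610, "type": "link_flap", "if": "ge-0/0/1"}),
]

def replay_window(log, since_ts):
    """On startup, the agent seeks backward and replays the retained window."""
    return [event for _, event in log if event["ts"] >= since_ts]

def recurring_flaps(events, threshold=3):
    """Pattern detection: interfaces that flapped `threshold` or more times."""
    counts = Counter(e["if"] for e in events if e["type"] == "link_flap")
    return [iface for iface, n in counts.items() if n >= threshold]

history = replay_window(event_log, since_ts=0)
flapping = recurring_flaps(history)
print(flapping)  # ['ge-0/0/1'] -- a possible degrading optical module
```

With only the latest batch window, the third flap would look like an isolated event; replaying the ordered log is what makes the recurrence visible.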

Failure Mode 3: Delete Blindness

Mechanism: A batch pipeline’s Extract, Transform, Load (ETL) processes compare snapshots of source data. They do not watch the database transaction log. They identify what exists at two points in time and process the difference. When a record is deleted from the source system, the pipeline has no way of distinguishing between a row that was deleted and a row that was simply omitted due to extraction error, filtering logic, or schema mismatch. The absence of a row is not an event. It is a gap. Batch pipelines are not designed to interpret gaps as meaningful signals. The record simply vanishes from the next extract. The downstream consumer—an AI agent or any other system—has no way of knowing the record ever existed.

Consequence: The agent queries the downstream data store and finds no record for a deactivated account, a revoked certificate, or a cancelled change order. It cannot distinguish between “never existed” and “was deleted,” so it treats the absence as neutral.

The agent makes decisions on ghosts—data that no longer exists in source systems. In access control scenarios, this is not an operational error. It is a security incident. This specific failure mode has been identified in analyses of batch processing limitations for AI agents [6].

Architectural Remedy: Change data capture (CDC), implemented through Kafka Connect with Debezium connectors, reads the database transaction log directly [2], [8]. Debezium provides CDC source connectors for MySQL, PostgreSQL, MongoDB, SQL Server, and other databases — capturing inserts, updates, and deletes as discrete events with explicit operation types by tailing the database’s native transaction log. Nothing is invisible to the pipeline. The streaming architecture knows not only what exists but what ceased to exist. This is not an ETL workaround with soft-delete flags. It is a structural capability of the integration layer, converting database changes into first-class events the moment they occur.
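A hedged sketch of what a downstream consumer of CDC events does differently: the event shape below loosely follows Debezium's change-event envelope (`op` is `"c"` for create, `"u"` for update, `"d"` for delete, with `before`/`after` images), but the key and payload fields are invented for illustration.

```python
# Simplified Debezium-style change events; keys and payloads are hypothetical.
change_events = [
    {"op": "c", "key": "cert-42", "before": None,
     "after": {"status": "valid"}},
    {"op": "u", "key": "cert-42", "before": {"status": "pending"},
     "after": {"status": "valid"}},
    {"op": "d", "key": "cert-42", "before": {"status": "valid"},
     "after": None},
]

def apply_changes(state, events):
    """Materialize current state; deletes arrive as explicit events,
    not as silent gaps between snapshots."""
    for ev in events:
        if ev["op"] == "d":
            state.pop(ev["key"], None)   # the agent *sees* the revocation
        else:
            state[ev["key"]] = ev["after"]
    return state

state = apply_changes({}, change_events)
print("cert-42" in state)  # False: revoked, not "never existed"
```

Because the delete is a first-class event, a consumer can also log or alert on it at the moment it occurs, which a snapshot-diffing ETL job structurally cannot do.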

Failure Mode 4: Schema Fragility

Mechanism: Source database schemas change over time. Columns are renamed, added, deprecated, or re-typed. Batch pipelines are configured for a specific schema at extraction time. When the source schema changes, the pipeline responds in one of two ways. It fails silently and drops the affected field from every subsequent extract. Or it fails loudly and stops processing entirely.

Silent failure is the more dangerous outcome. The pipeline continues delivering data. The consumer has no indication that a critical field is missing.

Consequence: The agent continues operating without a critical data input. It makes decisions with incomplete information. It has no awareness that its reasoning is compromised. The wrong decisions accumulate. By the time the missing field is discovered—often through an operational failure rather than a monitoring alert—the cost of remediation includes auditing and correcting every decision made during the degradation window.

Architectural Remedy: A schema registry with compatibility enforcement validates schema changes before they propagate to downstream consumers [2]. Streaming platforms can enforce backward and forward compatibility rules at the producer level. A breaking schema change is rejected before any data is published. The pipeline fails loudly and immediately. This is not a documentation standard or a code review checklist. It is a structural governance layer embedded in the streaming architecture itself, preventing silent field loss at the point of ingestion.
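A toy version of the compatibility gate makes the "fail loudly at publish time" behavior concrete. Real registries (e.g., Confluent Schema Registry) enforce richer compatibility rules over typed schemas; this sketch checks only field removal, the specific cause of silent field loss described above, and the field names are invented.

```python
def backward_compatible(old_fields, new_fields):
    """Reject a producer schema that drops fields existing consumers read.
    Only field removal is checked here; real registries also validate
    type changes, defaults, and forward compatibility."""
    removed = set(old_fields) - set(new_fields)
    return (len(removed) == 0, sorted(removed))

old = ["device_id", "if_name", "rx_errors", "tx_errors"]
new = ["device_id", "if_name", "rx_errors"]   # tx_errors quietly dropped

ok, removed = backward_compatible(old, new)
if not ok:
    # Loud, immediate failure at the producer -- no data is published,
    # so no agent ever consumes a stream missing a critical field.
    print(f"schema change rejected; removed fields: {removed}")
```

The point of placing the check at the producer is that the breaking change is stopped before any downstream consumer can be silently starved of a field.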

Failure Mode 5: Coordination Failure

Mechanism: When multiple AI agents operate on batch-derived data, each agent consumes a separate, potentially inconsistent snapshot. Agent A receives data from the 10:00 AM extract. Agent B receives data from the 10:15 AM extract. The extracts differ. Each agent holds a different version of reality. There is no shared, ordered log of events that all agents consume.

Consequence: Two agents respond to the same cascading failure. Agent A identifies a BGP routing issue and begins rerouting traffic. Agent B identifies a DNS resolution failure and begins modifying name server configurations. Neither agent knows the other acted. The redundant changes compete. The conflicting configurations create new instability. The original incident expands rather than resolves. What began as a single point of failure becomes a cascade that erodes trust in autonomous operations.

Architectural Remedy: A shared, ordered event log serves as a single source of truth for all agents in the system. Every agent consumes from the same log. Actions taken by one agent are published back to the log as events, immediately visible to all others [7]. Coordination becomes native to the architecture.

Visibility alone, however, does not prevent conflicting actions. Two agents may observe the same anomaly and both initiate remediation before either’s action becomes visible on the log. In practice, this is addressed through complementary mechanisms layered on the same event-driven model: action intent events that signal an agent is about to act, giving others a window to defer; idempotency keys that prevent duplicate remediation from causing harm; and lightweight leases for resources that should only be modified by one agent at a time. These mechanisms do not require a central coordinator. They are published to the same log, consumed by the same agents, and enforced through the same ordered stream.

This is not a separate orchestration layer or message bus bolted onto the side. It is the core of the streaming platform—a unified, ordered, multi-consumer event stream that provides both the shared state and the coordination primitives that eliminate the inconsistent snapshots batch architectures produce by default.
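The coordination mechanisms above can be sketched with a toy in-memory log. This single-threaded simulation ignores the race window the text discusses (in a real deployment, intents, leases, and actions are all events on the same durable, ordered stream), and the agent names and resource identifiers are hypothetical; it only shows how publishing an action-intent event lets a second agent defer instead of acting blindly.

```python
import itertools

shared_log = []              # single ordered log that all agents consume
_offset = itertools.count()  # monotonically increasing offsets

def publish(event):
    shared_log.append((next(_offset), event))

def lease_holder(resource):
    """Replay the log: the first agent to publish intent holds the lease."""
    for _, ev in shared_log:
        if ev["type"] == "action_intent" and ev["resource"] == resource:
            return ev["agent"]
    return None

def try_remediate(agent, resource):
    """Publish intent before acting; defer if another agent already did."""
    holder = lease_holder(resource)
    if holder and holder != agent:
        return f"{agent}: deferring, {holder} already acting on {resource}"
    publish({"type": "action_intent", "agent": agent, "resource": resource})
    return f"{agent}: remediating {resource}"

r1 = try_remediate("agent-A", "bgp/edge-router-1")
r2 = try_remediate("agent-B", "bgp/edge-router-1")  # sees A's intent, defers
print(r1)
print(r2)
```

In the cascading-failure scenario above, Agent B would replay the log, see Agent A's intent event on the routing resource, and defer rather than pushing conflicting DNS changes on top of an in-flight remediation.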

Batch-to-Streaming Reference Architecture — Five Failure Modes and Their Architectural Remedies

THE UNIFIED DIAGNOSTIC FRAMEWORK

The five failure modes translate into a practical audit that I&O leaders can apply to their own infrastructure. Each question corresponds to a specific architectural requirement.

The Five-Question Audit

  1. Can the data pipeline deliver real-time context to an agent the moment it wakes up? If not, the system is vulnerable to stale data failures.
  2. Can the agent access the preceding window of telemetry to detect patterns across events? If not, the system is vulnerable to memory gap failures.
  3. Does the pipeline capture deletes as explicit events with operation types? If not, the system is vulnerable to delete blindness.
  4. Does the pipeline detect schema changes before they propagate to downstream consumers? If not, the system is vulnerable to schema fragility.
  5. Do all agents share a single, ordered view of events with visibility into each other’s actions? If not, the system is vulnerable to coordination failure.

A negative answer to any one of these questions signals a data foundation that is not ready for autonomous operations. The model is not the bottleneck. The GPUs are not the bottleneck. The telemetry pipeline is.
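The five-question audit can be expressed as a small checklist function. The question keys and the sample answers below are illustrative assumptions, not a standard schema; the mapping of each "no" to a failure mode follows the audit above.

```python
# Each audit question maps to the failure mode a "no" answer exposes.
FAILURE_MODES = {
    "real_time_context":       "stale data",
    "replayable_history":      "memory gap",
    "explicit_deletes":        "delete blindness",
    "schema_change_detection": "schema fragility",
    "shared_ordered_view":     "coordination failure",
}

def audit(answers):
    """Return the failure modes exposed by every 'no' (or missing) answer."""
    return [mode for q, mode in FAILURE_MODES.items() if not answers.get(q, False)]

# Hypothetical self-assessment for one I&O team:
answers = {
    "real_time_context":       False,  # pipeline still polls in 5-minute cycles
    "replayable_history":      True,
    "explicit_deletes":        False,  # ETL diffs snapshots; no CDC
    "schema_change_detection": True,
    "shared_ordered_view":     True,
}
gaps = audit(answers)
print(gaps)  # ['stale data', 'delete blindness']
```

Any non-empty result signals a data foundation that is not yet ready for autonomous operations.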

THE MIGRATION PATH: FROM BATCH TO STREAMING-FIRST

Adopting a streaming-first architecture does not require abandoning existing batch investments overnight. For most organizations, the transition follows a coexistence model: streaming pipelines are introduced alongside batch pipelines, not as an immediate replacement.

The practical starting point is to identify the highest-value agent—the one whose decisions carry the greatest operational or financial consequence—and convert its data pipeline first. This agent is typically the one where stale data, memory gaps, or coordination failures have produced measurable incidents. Converting this single pipeline to streaming telemetry with a durable event log delivers a targeted operational improvement while the rest of the batch estate continues to function.

From there, adoption expands incrementally. Each additional agent is migrated as operational experience with the streaming platform grows. Teams develop competence in offset management, schema governance through the registry, and backpressure handling while batch pipelines continue to serve lower-priority consumers. The streaming and batch estates coexist for a transition period measured in months, not days.

This incremental approach also reveals where streaming delivers the greatest marginal benefit. Not every data flow requires real-time treatment. Dashboards fed by hourly batch extracts may serve their purpose indefinitely. The streaming investment should be directed at the pipelines that feed autonomous agents—the flows where the five failure modes carry real operational consequence. The goal is not to stream everything. It is to stream the right things first.

THE BUSINESS IMPACT: FROM TECHNICAL FAILURE TO FINANCIAL CONSEQUENCE

Technical failures in the data pipeline do not remain technical. They cascade into business outcomes that appear on budget reviews, SLA reports, and board presentations. Each failure mode carries a distinct financial consequence.

Stale Data → Extended Downtime
An agent diagnosing from stale telemetry makes incorrect decisions. Remediation applied to a past state can worsen the current incident. Mean Time to Resolution increases. For revenue-generating services, every minute of extended downtime translates to lost revenue and SLA penalty accrual.

Consider an illustrative model: a Tier-1 service provider processes $50M in customer transactions per hour, and a five-minute stale-data-induced misdiagnosis extends an outage by 15 minutes. That extension alone represents $12.5M in direct revenue loss, before counting SLA penalties, regulatory scrutiny, or reputational harm. If even a fraction of such incidents is eliminated by replacing the batch pipeline feeding the diagnostic agent with a streaming backbone, the infrastructure investment can be recovered in a single avoided outage.
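The arithmetic behind the illustrative model is straightforward and worth making explicit, since it is the template readers can reuse with their own revenue figures:

```python
# Back-of-envelope downtime cost from the illustrative numbers above:
# $50M/hour in customer transactions, outage extended by 15 minutes.
revenue_per_hour = 50_000_000
extra_outage_minutes = 15

direct_loss = revenue_per_hour * extra_outage_minutes / 60
print(f"${direct_loss / 1e6:.1f}M")  # prints $12.5M -- before SLA penalties
```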

Memory Gap → Recurring Incidents
An agent without historical context cannot recognize chronic conditions. A flapping interface, a memory leak, or a degrading optical module triggers the same alert repeatedly. Each occurrence consumes GPU inference cycles. Each occurrence generates a ticket. Each occurrence may require human escalation. The cumulative cost of a single undiagnosed chronic issue, multiplied across an enterprise network over a year, represents operational expenditure that a stateful agent could eliminate.
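A stateful agent can recognize a chronic condition only if it can see the preceding window of events. The sketch below is a minimal in-memory stand-in for replaying a durable event log; the class name, window size, and threshold are illustrative choices, not values from any specific product.

```python
from collections import deque

class FlapDetector:
    """Flag an interface as chronically flapping if it bounces more than
    `threshold` times within the trailing `window_s` seconds. A stateless
    agent that sees one alert at a time cannot make this distinction."""
    def __init__(self, window_s=300, threshold=3):
        self.window_s, self.threshold = window_s, threshold
        self.events = deque()  # timestamps of link-down events

    def observe(self, ts):
        self.events.append(ts)
        # Drop events that have aged out of the trailing window.
        while self.events and ts - self.events[0] > self.window_s:
            self.events.popleft()
        return "chronic-flap" if len(self.events) > self.threshold else "isolated"

det = FlapDetector()
for t in (0, 60, 120, 180, 240):       # five bounces in four minutes
    verdict = det.observe(t)
print(verdict)  # chronic-flap
```

In production the deque would be rebuilt by replaying the event log's retention window on agent start-up, which is exactly what the durable log makes possible.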

Delete Blindness → Compliance and Security Exposure
An agent acting on deleted records makes authorization decisions based on invalid state. A deactivated account granted access. A revoked certificate treated as valid. In regulated industries, these errors are compliance violations with defined financial penalties and reporting obligations. The cost of a single access control error caused by ghost data can exceed the annual cost of the streaming infrastructure that would have prevented it.

Schema Fragility → Silent Decision Degradation
When a batch pipeline drops a critical field, the agent does not fail loudly. It continues operating with incomplete inputs. Decisions degrade silently. The cost includes not only the direct operational impact but the effort of auditing and correcting every decision made during the degradation window. Silent failure multiplies eventual remediation cost.
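The kind of compatibility check a schema registry performs can be approximated in a few lines. This sketch only checks that no required field disappears between schema versions (a real registry, such as Confluent's, enforces richer rules covering types and defaults); the field names are invented for illustration.

```python
def backward_compatible(old_schema, new_schema):
    """Crude backward-compatibility check: every field consumers already
    rely on must still be present, unless it carried a default value."""
    missing = [
        name for name, spec in old_schema.items()
        if name not in new_schema and "default" not in spec
    ]
    return (len(missing) == 0, missing)

old = {"if_name": {"type": "string"}, "rx_errors": {"type": "long"}}
new = {"if_name": {"type": "string"}}          # rx_errors silently dropped

ok, missing = backward_compatible(old, new)
print(ok, missing)  # False ['rx_errors']
```

The point of the check is where it runs: at publish time, before the change reaches a consumer, rather than after an agent has silently degraded.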

Coordination Failure → Cascading Impact
When multiple agents act on inconsistent views of reality, they create new problems. Redundant changes compete. Conflicting configurations destabilize the environment. The original incident expands. The cost includes extended resolution time, additional engineering effort, and eroded trust in autonomous operations. Organizational credibility is a balance sheet item that coordination failure depletes.

The Aggregated View
Taken together, the five failure modes represent a predictable drain on AI investment returns. An organization that deploys expensive GPU infrastructure, fine-tunes capable models, and implements event-driven orchestration [3]—but feeds all of it with a batch data pipeline—has built an autonomous operations capability on a foundation that guarantees suboptimal outcomes. The streaming backbone is not an incremental cost. It is the insurance policy that protects the returns on every other AI infrastructure investment.

CONCLUSION: STREAMING-FIRST AS THE ARCHITECTURAL PREREQUISITE

The five failure modes share a common root cause. Batch data pipelines were designed for human consumers who tolerate latency, bring context, and notice anomalies. AI agents tolerate nothing. They act on what they receive.

Each failure mode is addressable within a unified streaming data architecture. Streaming telemetry solves stale data by replacing cyclical polling with continuous event push. Durable event logs solve memory gaps by preserving the sequence of events with configurable retention, allowing agents to replay history and detect patterns across time. Change data capture—a structural component of the streaming architecture implemented through Kafka Connect and Debezium—solves delete blindness by reading database transaction logs directly, capturing inserts, updates, and deletes as discrete events with explicit operation types. A schema registry with compatibility enforcement solves schema fragility by validating schema changes before they propagate downstream, catching breaking changes at the source rather than discovering them after agent failure. A shared, ordered event log solves coordination failure by serving as a single source of truth that all agents consume, ensuring every agent operates on the same reality with visibility into every other agent’s actions—complemented by intent events, idempotency keys, and lightweight leases that prevent conflicting actions without a central coordinator.
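The idempotency-key mechanism mentioned above can be sketched very simply: every remediation carries a key derived from (incident, action), so when two agents consuming the same ordered log both decide to act, only the first action wins. In a real deployment the claimed-keys set would live in a compacted Kafka topic or a shared store, not in process memory, and all names here are my own.

```python
# Minimal idempotency-key deduplication sketch (assumption: a shared,
# ordered view of claimed keys -- here simulated with a local set).

claimed = set()

def try_execute(incident_id, action):
    key = (incident_id, action)
    if key in claimed:
        return "skipped-duplicate"
    claimed.add(key)           # record the intent before acting
    return "executed"

print(try_execute("INC-42", "restart-pod"))   # executed
print(try_execute("INC-42", "restart-pod"))   # skipped-duplicate (second agent)
```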

These are not disparate tools. They are structural elements of a single streaming data architecture. Apache Kafka provides the durable, shared event log at the core. Kafka Connect provides the integration framework for change data capture, ingesting database changes as first-class events. Schema Registry provides the compatibility governance layer. Together, they form a complete data foundation where stale data, memory gaps, delete blindness, schema fragility, and coordination failure are eliminated by design—not patched after the fact.
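Debezium-style change events carry an explicit operation type: "c" (create), "u" (update), "d" (delete). The sketch below shows how a consumer distinguishes a delete from an update, which is exactly the distinction a batch snapshot diff loses. The envelope is simplified from Debezium's actual format and the record values are invented.

```python
# Simplified CDC envelope: op type plus before/after row images.
events = [
    {"op": "c", "before": None, "after": {"id": 7, "status": "active"}},
    {"op": "u", "before": {"id": 7, "status": "active"},
                "after": {"id": 7, "status": "suspended"}},
    {"op": "d", "before": {"id": 7, "status": "suspended"}, "after": None},
]

state = {}
for ev in events:
    if ev["op"] == "d":
        state.pop(ev["before"]["id"], None)   # explicit tombstone handling
    else:
        state[ev["after"]["id"]] = ev["after"]

print(state)  # {} -- record 7 is gone, not lingering as ghost data
```

Because the delete arrives as a first-class event, the agent's materialized state converges to the database's true state instead of retaining a ghost record.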

These architectural components eliminate the data-layer failure modes. But real-time data also enables real-time action—and that speed demands an execution-layer governance framework. Policy-as-code engines ensure that agent decisions, even when based on perfect context and full state, are validated against operational guardrails before they become cluster changes. The streaming backbone delivers the context. The policy layer ensures that context is acted upon safely.

This streaming architecture is not an end in itself. It is the data foundation upon which event-driven network operations can be built. While the streaming backbone eliminates the data-layer failure modes, organizations that pair it with event-driven compute unlock an additional dimension of efficiency. When a telemetry event flows through the event log and an anomaly is detected, that same stream can trigger the Kubernetes Event-driven Autoscaling (KEDA) of inference workloads [3]—spinning up the right-sized model at the right moment, on the right context. The streaming backbone delivers the context. Event-driven orchestration delivers the compute. Together, they close the loop from detection to inference, ensuring the agent has both the data and the compute it needs without the waste of always-on infrastructure.

The barrier is not technology. Each of these architectural components is proven, open-source, and deployed in production environments today. The barrier is architectural awareness. Organizations that invest in a streaming-first data architecture will deploy AI agents that deliver on their promise. Organizations that do not will discover these failure modes in production—after the wrong decision is already made.

The streaming data architecture is not a performance upgrade for Agentic AI. It is the architectural prerequisite.

REFERENCES

[1] P. Madduri and A. L. Thakur, “The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core,” IEEE ComSoc Technology Blog, April 2026. [Online]. Available: https://techblog.comsoc.org/2026/03/30/the-financial-trap-of-autonomous-networks-scaling-agentic-ai-in-the-telecom-core/

[2] Apache Software Foundation, “Apache Kafka Documentation.” [Online].
Available: https://kafka.apache.org/42/getting-started/introduction/

[3] Cloud Native Computing Foundation, “KEDA: Kubernetes Event-driven Autoscaling.” [Online]. Available: https://keda.sh/

[4] Streamkap, “Streaming ETL vs. Batch ETL: A Decision Framework.” [Online].
Available: https://streamkap.com/resources-and-guides/streaming-etl-vs-batch-etl

[5] Streamkap, “Real-Time vs Batch Data for AI Agents: Why Freshness Matters.” [Online]. Available: https://streamkap.com/resources-and-guides/real-time-vs-batch-data-for-agents

[6] Streamkap, “Why AI Agents Can’t Use Batch Data.” [Online]. Available: https://streamkap.com/resources-and-guides/why-agents-cant-use-batch-data

[7] Redpanda, “Building safe, multi-agent AI systems in Redpanda Agentic Data Plane.” [Online]. Available: https://www.redpanda.com/blog/adp-governed-multi-agent-ai-cloud

[8] Debezium Community, “Debezium: Open-Source Change Data Capture,” Debezium Documentation. [Online]. Available: https://debezium.io/

ABOUT THE AUTHOR

Shazia Hasnie, Ph.D., is VP, Product Strategy and Innovation at Cuber AI, focused on Agentic Network Operations, AI-driven automation, and streaming data architectures. Her work explores the intersection of autonomous systems, cloud-native infrastructure, and the economic models that make AI operations sustainable at scale.

linkedin.com/in/shaziahasnie/

Ookla on the Global D2D Market

Direct-to-device (D2D) satellite connectivity is emerging as a practical extension of non-terrestrial networks (NTNs), enabling standard smartphones to communicate directly with satellite systems without specialized user equipment. Within the 3GPP ecosystem, NTN capabilities were standardized (3GPP specs become standards by being rubber stamped by ETSI and ITU-R) beginning with 3GPP Release 17, establishing a framework for satellite-terrestrial interoperability and expanding the potential reach of mobile broadband beyond the footprint of terrestrial radio access networks.

D2D services could reduce persistent coverage gaps, especially in rural, maritime, and other underserved environments where terrestrial deployment is constrained by economics or geography. However, commercially available services today remain limited, with most deployments focused on messaging and other low-throughput applications rather than full mobile broadband.

From a market perspective, D2D and NTN have broad implications for mobile network operators (MNOs), satellite operators, equipment vendors, and regulators. That strategic importance helps explain why companies such as Apple, Amazon, SpaceX, and AST SpaceMobile are investing in this segment, alongside broader ecosystem activity around 3GPP-based NTN architectures.

Image Credit: Ookla

Ookla® has contributed to the discussion with a high-resolution poster showing global Speedtest® usage data for D2D services, along with a detailed market study on the D2D landscape. The analysis is based on Android devices that register with D2D-capable satellite systems from Starlink, Skylo, and Lynk, providing an early empirical view of how NTN-based connectivity is being used in practice.



Looking ahead, continued investment in next-generation satellite constellations, coupled with expanded spectrum access, is expected to enhance D2D performance and capacity. Key players—including Starlink, AST SpaceMobile, and Amazon’s Project Kuiper—are targeting higher data rates and broader service capabilities, with the objective of extending beyond narrowband messaging to support more data-intensive applications.

For MNOs, the evolution of D2D introduces potential shifts in network planning and capital allocation, particularly at the margins of coverage. Satellite-based augmentation could reduce the economic rationale for terrestrial infrastructure deployment in sparsely populated areas, with downstream implications for tower companies and certain segments of the radio access network (RAN) supply chain.

From a policy perspective, D2D also has the potential to reshape universal service frameworks and coverage obligations. Regulators seeking to expand connectivity may increasingly incorporate NTN-based solutions into their policy toolkits, prompting a reassessment of long-standing assumptions regarding the role of terrestrial infrastructure in achieving nationwide coverage.  In that sense, D2D is not just a satellite story.  It is becoming a broader telecom architecture shaped by 3GPP specifications and the convergence of terrestrial and non-terrestrial mobile networks.

Merry-go-round of dog chasing his tail: relationship between U.S. hyperscalers and private Gen AI companies

1.  Hyperscalers’ earnings growth this quarter was boosted by an unusually large contribution from “other income,” which consisted largely of mark-ups of their equity stakes in private Gen AI companies.  For example:

  • Nearly half of Alphabet’s (Google) record $62.6 billion profit—about $28.7 billion—did not come from search ads, cloud services or any of its products at all. It came from Alphabet updating the value of the equity it owns in private AI companies, primarily Anthropic.  Alphabet held a 14% stake prior to the announcement of an additional $40 billion commitment last week.
  • Amazon’s earnings release stated that first-quarter net income “includes pre-tax gains of $16.8 billion included in non-operating income from our investments in Anthropic”—more than half of Amazon’s pre-tax income (or profit) for the quarter.
  • Alphabet and Amazon generated “other income” totaling $53 billion in Q1 2026, which accounted for nearly 60% of those two companies’ total net income in Q1 and 34% of the total $155 billion in income this quarter. Of this $53 billion in “other income,” $49 billion was explicitly due to equity stakes in private AI companies.
  • Microsoft reported “only” $942mn of other income in the first three months of the year, but this line item has now made $7.2bn over the past nine months.
  • Under U.S. accounting rules, publicly traded firms must adjust and report the assessed value of their private equity holdings every quarter. Because private AI start-ups like Anthropic experienced meteoric valuation updates (e.g., Anthropic climbing to an estimated $380 billion), both Alphabet and Amazon were required to record those massive “on-paper” gains directly to their bottom-line net income.
  • When the AI bubble finally bursts (and it will) the private AI companies assessed market value will collapse, resulting in “impairment write-downs” and huge earnings declines for the hyperscalers, e.g. Amazon, Google/Alphabet, Microsoft, FB/Meta, and Oracle.

2. Now here’s the merry-go-round/ dog chasing its tail relationship:

Not only have private investments and increasingly engorged funding rounds become a meaningful driver of the hyperscalers’ aggregate earnings, but the money the hyperscalers have pumped into the likes of Anthropic and OpenAI has allowed those private AI companies to sign huge computing deals with Alphabet’s Google Cloud, Microsoft’s Azure and Amazon Web Services (AWS).  OpenAI and Anthropic now make up about half of the entire cloud computing order books at Oracle, Alphabet, Amazon and Microsoft! 

Indeed, AI startups have loaded up hyperscalers with unprecedented long-term financial commitments.

–>OpenAI and Anthropic make up over $1 trillion of the estimated $2 trillion cumulative revenue backlog currently held by major cloud service providers!

  • OpenAI to Microsoft Azure: Internal documents show OpenAI’s massive server rentals have generated more than $23 billion in direct cloud spending for Microsoft.
  • Anthropic to Google Cloud: Anthropic signed a contract committing to spend $200 billion over five years on Google’s cloud infrastructure and TPU chips.
  • Anthropic to AWS: In tandem with a fresh $5 billion investment from Amazon, Anthropic committed to spend over $100 billion over the next decade on AWS technologies.

Image Generated by Chat GPT

……………………………………………………………………………………………………………………………………………………………

3. Because hyperscalers report their overall cloud results as broad aggregates, the exact percentage of current quarter revenue generated purely by AI startups varies by provider. However, recent financial disclosures and analyst tracking pinpoint the enormous impact of these startups on current revenues and future order books:
-Google Cloud:
  • Backlog percentage: Over 40%. Anthropic‘s $200 billion multi-year commitment accounts for nearly half of Google Cloud’s total disclosed $240 billion revenue backlog.
  • Current revenue share: An estimated 12% to 15% of its current $20 billion quarterly revenue run-rate is driven directly by AI infrastructure consumption from startups (both frontier labs and over 40 mid-tier AI companies built on Google Cloud Vertex AI).

-Microsoft Azure:
  • Current revenue share: An estimated 15% to 18%. Microsoft’s annualized AI revenue run-rate hit $37 billion. A massive chunk of Azure’s overall 40% growth rate is anchored directly by OpenAI’s compute demands and the commercialization of OpenAI-tied products.

-Amazon Web Services (AWS):
  • Current revenue share: An estimated 6% to 8%. While AWS has the largest overall cloud scale ($150 billion annual run rate), its revenue is traditionally diversified across enterprise SaaS and retail. However, Anthropic’s new $100 billion infrastructure commitment means AWS’s revenue mix is aggressively shifting toward AI startups. [1, 2, 3, 4]

–>This is another sign of just how incestuously codependent the big tech industry is with astronomically valued private AI start-ups.

…………………………………………………………………………………………………………………………………………………………..

4. Another example of this codependency is Oracle and OpenAI’s massive, debt-fueled financial loop. In September 2025, the two companies signed a staggering five-year, $300 billion cloud-computing contract. This single deal radically transformed both companies’ financial profiles, inextricably binding their survival together.

The deal functioned as an aggressive narrative magnifier for both companies:
    • For Oracle: The $300 billion contract instantly added to Oracle’s Remaining Performance Obligations (RPO), which skyrocketed 359% to $455 billion. This accounting metric allowed Oracle to position itself as a dominant “hyperscaler,” pushing its market cap upward.
    • For OpenAI: The contract allowed OpenAI to claim it had secured the long-term compute capacity needed to achieve Artificial General Intelligence (AGI). This backed up its massive valuations, enabling OpenAI to close a historic $122 billion funding round in March 2026 at an $852 billion valuation.  

The financial codependency between the two entities is asymmetrical and high-risk:  
  • Oracle is a Financial Proxy for OpenAI: If OpenAI faces a “credit event” or cash crunch, Oracle’s stock directly plummets. Critics note that Oracle signed a contract with a startup that historically burns far more cash than it takes in, making OpenAI’s ability to actually pay the $300 billion highly volatile.
  • The Debt Spiral: To physically fulfill OpenAI’s compute demands, Oracle has gone on a massive, debt-fueled construction spree. Oracle raised $18 billion in bonds in late 2025 and an additional $30 billion in early 2026. Its capital expenditures have eclipsed operating cash flows, leading to deeply negative free cash flow and over $134 billion in total corporate debt.
The scale of this relationship has triggered systemic friction on Wall Street:
    • Project Finance Bottlenecks: Major commercial banks have struggled to syndicate the massive multi-billion-dollar construction loans Oracle needs to build out the required data centers (such as its 4.5-gigawatt capacity goals).
    • Bank Limits: The sheer volume of debt concentrated around this single enterprise relationship has pushed several Wall Street institutions against their regulatory exposure limits for a single corporate partnership.

Ultimately, critics view the partnership as a circular loop: Oracle borrows tens of billions of dollars to build data centers for OpenAI, hoping OpenAI can continuously raise venture capital from the market to pay Oracle back, while Oracle uses OpenAI’s paper contracts to justify its skyrocketing stock value to its own investors.

……………………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.ft.com/content/be97df0a-76b1-4cb0-9ba4-d1117d8d1450
https://fortune.com/2026/04/30/google-amazon-ai-profits-anthropic-stake-bubble-earnings-2026/
https://finance.yahoo.com/sectors/technology/articles/google-amazon-biggest-profit-driver-170449859.html

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Expose: AI is more than a bubble; it’s a data center debt bomb

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

Open AI raises $8.3B and is valued at $300B; AI speculative mania rivals Dot-com bubble

China’s open source AI models to capture a larger share of 2026 global AI market

OpenAI and Broadcom in $10B deal to make custom AI chips

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight

Analyst firms wide forecasts for the LEO satellite direct-to-device (D2D) market

LEO satellite direct-to-device (D2D) technology looks promising. Telecom analyst firms see D2D as a fast-growing but still early-stage market, with forecasts ranging from roughly 22% to 49% revenue CAGR depending on scope and whether they are measuring total D2D services or smartphone satellite D2D specifically. That growth has yet to materialize, however: T-Mobile chief Srini Gopalan said the service so far has generated “a lot less usage” than anticipated.

The most common near-term view is that basic D2D will add modest operator revenue at first, but the long-term market could become multi-billion-dollar as broadband and richer services mature.  Here are a few analyst forecasts:

  • MarketsandMarkets projects the D2D market to rise from USD 0.57 billion in 2025 to USD 2.64 billion by 2030, a 35.6% CAGR.
  • Mordor Intelligence projects the direct-to-device satellite connectivity market to grow from USD 4.08 billion in 2025 to USD 13.80 billion by 2031, a 22.37% CAGR.
  • Omdia forecasts smartphone satellite D2D revenue to reach USD 11.99 billion by 2030, with a 49.4% revenue CAGR from 2026 to 2030.
  • Counterpoint Research expects 46% of all smartphones shipped by 2030 to be D2D-capable. That implies D2D is moving from a niche satellite feature toward a mainstream handset capability, driven by chipset integration and broader device support.
  • Juniper Research thinks the number of monthly active users will top 150 million by 2031. The analyst firm suggests a temporary access model, similar to roaming or travel eSIMs, where consumers purchase access in a particular area for a set period.  Juniper thinks connectivity alone won’t be enough to attract consumers. It believes operators will have to bundle the satellite service into rewards programs or roaming access.
  • Analysys Mason expects operators launching D2D in 2026 to see about a 1% annual revenue uplift from basic services alone, with much larger upside once broadband D2D becomes available.
  • TelecomTV reports a similar view from analyst Brad Grivner, who says D2D could give MNOs around a 1% annual revenue uplift and also improve retention and upsell opportunities.
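The CAGR figures above can be sanity-checked against the standard formula CAGR = (end/start)^(1/years) − 1. The quick check below is my own arithmetic, with the periods inferred from the stated years; the results land within rounding of the published figures, the small differences presumably reflecting the firms' exact period conventions.

```python
def cagr(start, end, years):
    """Compound annual growth rate from start value to end value."""
    return (end / start) ** (1 / years) - 1

# MarketsandMarkets: USD 0.57B (2025) -> USD 2.64B (2030), published 35.6%
print(f"{cagr(0.57, 2.64, 5):.1%}")
# Mordor Intelligence: USD 4.08B (2025) -> USD 13.80B (2031), published 22.37%
print(f"{cagr(4.08, 13.80, 6):.1%}")
```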

The spread in forecasts mostly reflects different definitions of the market, different start dates, and whether the analyst counts only current narrowband services or also future broadband D2D. In practical terms, the consensus is that D2D will start as a coverage and messaging feature, then evolve into a broader connectivity platform as device support and satellite capacity scale.

Analysts consistently point to 3GPP NTN standardization (rubber stamped by ETSI and ITU-R), more satellite-ready smartphones, and large-scale LEO deployments as the main catalysts. They also emphasize emergency messaging, rural coverage, IoT, industrial connectivity, and enterprise resilience as the first meaningful demand pools.  D2D market growth is being driven by a mix of coverage gaps, new device support, and expanding enterprise use cases. The strongest themes across analyst and industry reports are universal connectivity, IoT demand, LEO satellite buildout, and 3GPP NTN standardization.

Image Credit: Digital Regulation Platform

…………………………………………………………………………………………………………………………………………………………..

Main D2D growth drivers:

  • Coverage expansion. Analysts say D2D is filling a major gap in rural, remote, maritime, and disaster-prone areas where terrestrial networks are weak or unavailable.

  • 3GPP NTN standards. Standardized non-terrestrial networking is making satellite connectivity more practical for mainstream devices and accelerating ecosystem adoption.

  • LEO constellation growth. More low-Earth-orbit satellites, along with falling launch costs and better satellite economics, are increasing capacity and improving latency.

  • Smartphone integration. As more phones become satellite-capable, D2D can move beyond niche emergency features into broader consumer usage.

  • Enterprise IoT demand. Logistics, mining, agriculture, utilities, and energy firms want reliable connectivity for remote assets, monitoring, and worker safety.

  • Disaster resilience. Climate-related outages and emergency-response needs are pushing governments and operators toward backup connectivity solutions.

  • Carrier-satellite partnerships. Cooperation between MNOs and satellite operators is speeding commercialization and helping services reach scale.

The D2D market is still starting with messaging, emergency connectivity, and narrowband IoT, but analysts expect growth to broaden as device support and satellite capacity improve. In short, D2D grows fastest where it solves a clear pain point: no coverage, weak resilience, or expensive remote connectivity.

…………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.lightreading.com/satellite/making-the-most-of-satellite-d2d

Satellite direct-to-device services

Ookla: D2D satellite connectivity surged 24.5% during last 9 months; Starlink’s footprint expansion leads the way

Ookla: Starlink a viable competitor for hybrid 5G/NTN services due to network performance improvements and larger coverage area

GSA: 5G Non Terrestrial Networks, 5G SA and 5G Advanced gain momentum

Analysis: Amazon <- Globalstar – a strategic move for D2D and spectrum parity

Direct-to-Device (D2D) satellite network comparison: Starlink V2 (Starlink Mobile) vs “Satellite Connect Europe”

Deutsche Telekom selects Iridium for NB-IoT direct-to-device (D2D) connectivity

Standards are the key requirement for telco/satellite integration: D2D and satellite-based mobile backhaul

MTN Consulting: Satellite network operators to focus on Direct-to-device (D2D), Internet of Things (IoT), and cloud-based services

Nvidia strategic partnership with IREN targets 5 GW AI infrastructure buildout + $2.1B investment option

Nvidia has announced a strategic partnership with cloud AI data center operator IREN [1.] to deploy up to 5 gigawatts (5 GW) of AI infrastructure, driven by a $3.4 billion services contract and a $2.1 billion investment option for Nvidia. The collaboration aims to secure critical, high-density data center capacity for AI workloads while accelerating IREN’s transition into a major AI infrastructure provider.  The expansion targets up to 5 GW of NVIDIA DSX-aligned AI infrastructure across IREN’s global pipeline, with the roadmap centered on the 2 GW Sweetwater campus in Texas, positioned to be the flagship deployment of NVIDIA’s DSX factory architecture. This integrated model combines NVIDIA’s reference designs with IREN’s core competencies in utility-scale power procurement, site development, and full-stack GPU cloud operations.

Note 1. IREN’s metamorphosis from specialized Bitcoin mining to high-performance computing (HPC) mirrors the trajectory of Tier-1 AI cloud providers like CoreWeave. With an operational fleet of 23,000 GPUs and a secured 3 GW power portfolio in renewable-heavy regions, IREN is rapidly scaling its North American footprint.

“AI factories are becoming foundational infrastructure for the global economy,” said Jensen Huang, founder and CEO of Nvidia. “Deploying these systems at scale requires deep integration across the full stack — compute, networking, software, power and operations. IREN brings the scale and infrastructure expertise to help accelerate the buildout of next-generation AI infrastructure globally. Together, we are building for the age of AI,” he added.

“This partnership combines NVIDIA’s AI systems and architecture leadership with IREN’s expertise across power, land, data centers, GPU deployment and infrastructure operations,” said Daniel Roberts, cofounder and co-CEO of IREN. “Together, we believe we can accelerate deployment of AI infrastructure and expand access to compute for AI-native and enterprise customers globally.”

This partnership follows a massive $9.7B agreement with Microsoft for sovereign GPU cloud services—leveraging GB300 Blackwell systems—and a $5.8B hardware procurement through Dell. Despite the scale of the Microsoft deal, leadership indicates it utilizes only ~10% of IREN’s projected capacity.
……………………………………………………………………………………………………………………………………….
Upshot:
Nvidia’s agreement with IREN introduces a unique structural alignment: Nvidia acts as both an upstream provider and an anchor tenant/stakeholder. By securing long-dated options over direct equity, Nvidia mitigates balance sheet volatility while ensuring preferential access to critical, grid-connected capacity in a supply-constrained market.
……………………………………………………………………………………………………………………………………….

References:

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Expose: AI is more than a bubble; it’s a data center debt bomb

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

NTT’s IOWN is (finally) evolving to an All Photonics Network (APN); Physics based AI for enterprise OT

Like South Korea’s major telcos, Japan’s NTT has revised its mid-term business strategy to center on AI infrastructure, data centers, and “value domains.” This shift follows a slowdown in its traditional telecom “cash cow” business and aims to reorient the group toward higher-growth areas.  The company is prioritizing AI-related services, overseas data centers, and its vision for an IOWN-based [1.] connectivity platform built for GPU, network, and power-heavy workloads.

Note 1. IOWN is NTT's Innovative Optical and Wireless Network initiative, with a photonics-based optical network at its core. An All-Photonics Network (APN) is NTT's vision for a next-generation network that uses laser-generated light, rather than electronic conversion, to move data across compute, storage, and transport layers. It is NTT's bet on a much faster, lower-latency, and more energy-efficient network architecture for AI, data centers, and advanced telecom services.

–> An all-optical network was promised by many new-age telcos in the late 1990s and early 2000s, but it has never seen the light of day (no pun intended).

Image Credit: NTT

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Benefits of an all photonics network:

Today, continuous, high-volume AI data flows across clouds, data centers and edge environments rely on stable, low-latency pathways. Yet networks that rely on optical to electrical to optical (OEO) conversion cannot provide this consistently. Even small variations in routing, buffering and electrical switching reduce the predictability that AI needs. Adding bandwidth can delay the symptoms but doesn’t address the architectural challenges these networks face as data movement intensifies.

At the leading edge of this shift is the All-Photonics Network (APN), developed by the IOWN Global Forum. It is both an architectural breakthrough and a practical step toward rearchitecting how data moves, designed for a world where AI is changing the rules entirely. The APN introduces a new way of architecting and operationalizing photonic transport so organizations can use it without having to manage the underlying optical engineering. Instead of relying on electrical conversions at every stage, it extends optical communication to the transport layers that connect sites, regions and data centers. That results in far more consistent network performance: it reduces jitter significantly and improves throughput by avoiding repeated processing overhead.

The IOWN Global Forum outlines a future where optical-first infrastructure delivers (see image below):

  • Up to 100 times lower power consumption
  • More than 125 times greater transmission capacity
  • A reduction in end-to-end latency by as much as 200 times
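To make the scale of those targets concrete, the sketch below applies the three headline multipliers to a hypothetical electronic-network baseline. The baseline figures (1 kW, 100 Gbps, 10 ms) are illustrative placeholders of my choosing, not NTT or IOWN Global Forum data.

```python
# Illustrative arithmetic for the IOWN Global Forum targets quoted above.
# The baseline numbers passed in below are hypothetical, not NTT data.

IOWN_TARGETS = {
    "power_efficiency_gain": 100,   # up to 100x lower power consumption
    "capacity_gain": 125,           # 125x greater transmission capacity
    "latency_reduction": 200,       # end-to-end latency cut to 1/200th
}

def apply_targets(baseline_power_w, baseline_capacity_gbps, baseline_latency_ms):
    """Scale a hypothetical electronics-based baseline by the IOWN targets."""
    return {
        "power_w": baseline_power_w / IOWN_TARGETS["power_efficiency_gain"],
        "capacity_gbps": baseline_capacity_gbps * IOWN_TARGETS["capacity_gain"],
        "latency_ms": baseline_latency_ms / IOWN_TARGETS["latency_reduction"],
    }

projected = apply_targets(baseline_power_w=1000.0,
                          baseline_capacity_gbps=100.0,
                          baseline_latency_ms=10.0)
print(projected)
# {'power_w': 10.0, 'capacity_gbps': 12500.0, 'latency_ms': 0.05}
```

In other words, a link that today burns a kilowatt at 10 ms latency would, if the 2030 targets are met, deliver 125x the capacity at 10 W and 50 microseconds.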

NTT wants to combine AI with IOWN’s photonics-based networking to better support AI-era compute, data center, and transport demands.  AIOWN is meant to solve the bottlenecks created by AI workloads, where power, latency, and bandwidth are becoming as important as raw compute.

NTT is positioning it as infrastructure for the AI era, not just as a telecom upgrade, so it sits at the center of the company’s broader shift toward AI infrastructure and data centers. Instead of relying mainly on conventional electronic networking, the pure optical IOWN aims to connect data centers and networks with photonics-based transport that can reduce energy use and improve performance. That makes it especially relevant for GPU clusters, AI cloud environments, and high-capacity backbone links.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………

NTT says the traditional telecom environment is getting tougher, with stronger competition and rising traffic demands pressuring its core business. In response, it is shifting emphasis to three growth areas: AI services for corporate clients, global data center expansion, and adjacent financial services, while also reframing its network layer for the AI era through IOWN.

The “value domains” framing is essentially NTT’s way of saying it wants to move up the stack into higher-margin, customer-specific businesses rather than remain mostly a utility-like connectivity provider. In practice, that means selling integrated AI, data center, and industry solutions where NTT can capture more of the economic value than in wholesale telecom alone. NTT believes telecom cash flows will grow more slowly than AI infrastructure demand, and it is likely correct. AIOWN is especially important because it ties together compute, networks, and power, which are becoming the real bottlenecks in AI deployments. The strategy also aligns with NTT’s broader enterprise AI positioning, where it can monetize infrastructure and services together rather than betting only on model development.

Key Features and Evolution of APN:
  • Commercial Evolution (APN1.0 to APN2.0): NTT launched APN1.0 in March 2023, offering dedicated wavelength services with 1/200th the latency of conventional networks. Evolution includes the introduction of Open APN (Open All-Photonic Network) standards for interoperability.
  • Performance Targets (2030): The APN aims to achieve 100x higher power efficiency, 125x greater capacity, and 1/200th the end-to-end latency of traditional, electronics-based networks.
  • Photonics-Electronics Convergence (PEC): By using light instead of electricity in network devices and servers, the APN eliminates costly, slow optical-electrical-optical conversions.
  • Service Expansion: APN services are expanding to support high-demand applications like 5G/6G mobile fronthaul, remote medical services, remote construction, and AI video analysis.
Implementation Progress:
    • 2025 Milestones: NTT utilized APN for the Expo 2025 Osaka to connect pavilions and demonstrated 1Tbps-class optical paths at OFC2025.
    • 2026 Developments: At MWC Barcelona 2026, NTT showcased APN-facilitated AI video analysis, in-network computing, and improved AI inference processing.
    • Open Standardization: NTT is collaborating with partners (e.g., IOWN Global Forum) to develop open specifications for multi-vendor interoperability. [1, 2, 3]

The APN is key to creating a “data-centric” infrastructure where distributed data centers can function as one integrated system. NTT says the APN acts as the bridge that brings optical performance into practical use now, while preparing organizations for deeper photonic integration as the technology matures.  NTT Group, the parent company of NTT DATA, plays a key role in helping to move optical technologies from niche use cases into the mainstream.

…………………………………………………………………………………………………………………………………………………………………………

Most Operational Technology (OT) environments remain stuck with legacy systems, creating a gap between modern enterprise capabilities and industrial operations. NTT is addressing this enterprise OT gap by deploying private 5G networks and edge computing, allowing modernization without full system overhauls. This approach utilizes physics-based AI to provide secure, real-time insights on-premises, overcoming challenges in visibility and standardization.

…………………………………………………………………………………………………………………………………………………………………………

References:

https://uk.nttdata.com/insights/blog/when-networks-hit-the-speed-of-light-why-photonics-is-the-next-big-shift

The All-Photonics Network Enables the Next-Generation Digital Economy

https://www.rd.ntt/e/research/JN202203_17536.html

https://www.nttdata.com/global/en/insights/focus/2025/039

https://www.enterprisetimes.co.uk/2026/05/08/ntts-edge-strategy-overcomes-ot-stagnation/

NTT’s IOWN provides ultra low latency and energy efficiency in Japan and Hong Kong

NTT pins growth on IOWN (Innovative Optical and Wireless Network)

Sony and NTT (with IOWN) collaborate on remote broadcast production platform

NTT to offer optical technology-based next-generation network services under IOWN initiative; 6G to follow

NTT to launch 25 Gbps FTTH service in Tokyo starting March 2026

NTT DOCOMO successful outdoor trial of AI-driven wireless interface with 3 partners

 


Optus and Ericsson achieve 180MHz across 2.3GHz and 3.5GHz bands using carrier aggregation on a live 5G SA network

Australian telco Optus has demonstrated advanced 5G NR carrier aggregation (5G NR-CA) performance on its 5G standalone (SA) network by implementing four-component carrier aggregation (4CC CA) across low-, mid-, and upper-mid-band spectrum. Using Ericsson 5G SA network equipment and software, the configuration aggregates FDD bands at 900 MHz (Band n8) and 2.1 GHz (Band n1) with TDD bands at 2.3 GHz (Band n40) and 3.5 GHz (Band n78).

This combined Optus’ unique pair of mid-band TDD spectrum holdings at 2.3GHz and 3.5GHz, achieving a record 180MHz of aggregated TDD spectrum. In particular:

  • Four-Component Carrier aggregation enabled 220MHz downlink bandwidth, leveraging spectrum across four different bands of 900MHz, 2.1GHz, 2.3GHz and 3.5GHz
  • Two-Component Carrier uplink aggregation combined one Frequency Division Duplex (FDD) band from 900MHz and 2.1GHz with one TDD band from 2.3GHz and 3.5GHz
  • Achieved peak speeds of 3.4Gbps (downlink) and 200Mbps (uplink) in a live network site with commercial devices, including the Samsung Galaxy S26 Ultra
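The bandwidth arithmetic behind those bullets can be checked with a short sketch. The per-carrier channel widths below are illustrative assumptions consistent with the stated totals (220 MHz downlink, of which 180 MHz is TDD); Optus’ actual per-band channel widths were not disclosed in the announcement.

```python
# Back-of-the-envelope accounting for the aggregated bandwidth figures above.
# Per-carrier widths (MHz) are ASSUMPTIONS chosen to match the stated totals;
# the announcement only gives the 220 MHz downlink and 180 MHz TDD totals.

carriers = {
    "n8  (900 MHz, FDD)":  20,   # assumed
    "n1  (2.1 GHz, FDD)":  20,   # assumed
    "n40 (2.3 GHz, TDD)":  80,   # assumed
    "n78 (3.5 GHz, TDD)": 100,   # assumed
}

tdd_mhz = sum(bw for band, bw in carriers.items() if "TDD" in band)
total_mhz = sum(carriers.values())

print(f"TDD mid-band aggregation: {tdd_mhz} MHz")   # 180 MHz (the record claimed)
print(f"4CC downlink bandwidth:   {total_mhz} MHz") # 220 MHz
```

The 40 MHz difference between the 4CC downlink total and the TDD aggregation is what the two FDD carriers contribute, however it is actually split between 900 MHz and 2.1 GHz.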

The demonstration aligns with 3GPP Release 16 and Release 17 5G NR-CA enhancements (TS 38.300, TS 38.101-1/2), which extend carrier aggregation capabilities across heterogeneous duplex modes (FDD+TDD) and multiple frequency ranges within FR1. The downlink configuration leverages cross-band scheduling and advanced MIMO layers (likely up to 4×4 or higher per component carrier, depending on band support) to maximize spectral efficiency across aggregated carriers.

On the uplink, Optus and Ericsson reported 200 Mbps throughput using two-component carrier aggregation (2CC CA), combining FDD (n8/n1) and TDD (n40/n78) spectrum. This implementation is consistent with 3GPP Release 16 uplink enhancements, including uplink carrier aggregation and transmit (Tx) switching (TS 38.213), which enables efficient utilization of UE power resources across multiple uplink carriers, particularly in mixed duplex scenarios.
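As a rough plausibility check on the downlink figure, the approximate peak data-rate formula from 3GPP TS 38.306 can be applied to an assumed version of this 4CC configuration. The per-carrier PRB counts, subcarrier spacings, MIMO layers, and modulation orders below are my assumptions, not disclosed values; the formula also ignores the TDD downlink duty cycle, so it yields a theoretical ceiling well above the 3.4 Gbps measured on a live network with commercial devices.

```python
# Sketch of the approximate peak data-rate formula from 3GPP TS 38.306,
# applied to an ASSUMED version of the Optus 4CC downlink configuration.
# Per-carrier parameters are illustrative; the result is a theoretical
# ceiling (no TDD duty cycle, no device limits), so it exceeds the
# 3.4 Gbps measured in the live-network demonstration.

R_MAX = 948 / 1024  # maximum LDPC code rate

def carrier_rate_bps(layers, mod_bits, scs_khz, n_prb, overhead=0.14):
    """Per-carrier term of the TS 38.306 approximate data-rate formula."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]   # numerology from SCS
    symbol_time = 1e-3 / (14 * 2**mu)              # average OFDM symbol duration
    return (layers * mod_bits * R_MAX
            * (n_prb * 12) / symbol_time * (1 - overhead))

# Assumed 4CC downlink: two FDD carriers (15 kHz SCS) + two TDD carriers (30 kHz)
config = [
    dict(layers=2, mod_bits=8, scs_khz=15, n_prb=106),  # n8,  ~20 MHz (assumed)
    dict(layers=4, mod_bits=8, scs_khz=15, n_prb=106),  # n1,  ~20 MHz (assumed)
    dict(layers=4, mod_bits=8, scs_khz=30, n_prb=217),  # n40, ~80 MHz (assumed)
    dict(layers=4, mod_bits=8, scs_khz=30, n_prb=273),  # n78, ~100 MHz (assumed)
]

peak_gbps = sum(carrier_rate_bps(**c) for c in config) / 1e9
print(f"Theoretical 4CC peak: {peak_gbps:.1f} Gbps")  # ≈ 4.9 Gbps
```

Discounting the two TDD carriers by a typical downlink slot ratio brings the ceiling down toward the reported 3.4 Gbps, which suggests the demonstration was operating close to the practical limit of the configuration.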

All results were achieved on a live commercial 5G SA network at Optus’ Sydney campus using commercial off-the-shelf (COTS) user equipment, including the Samsung Galaxy S26 Ultra. This indicates full compliance with 3GPP-defined UE capability signaling (TS 38.306) and the availability of device-side support for complex NR-CA band combinations, including inter-band and cross-duplex aggregation.

“This achievement demonstrates how we are translating cutting-edge 5G technology into meaningful benefits for customers in real-world environments. Through our ongoing collaboration with Ericsson, we are unlocking greater capacity and performance across our 5G network, enabling faster speeds and more reliable connectivity,” said Optus CTO Sri Amirthalingam. “This milestone marks an important step in our network evolution towards 5G Advanced, reinforcing our commitment to remain at the forefront of innovation and to deliver tangible value for our customers.”

Ludvig Landgren, head of Ericsson Australia and New Zealand operations, said: “Optus continues to demonstrate strong leadership in adopting advanced 5G capabilities, and this milestone highlights the strength of our partnership. By expanding and combining multiple spectrum assets with Ericsson technology, we are helping Optus deliver meaningful performance improvements that translate directly into better everyday experiences for their customers.”

………………………………………………………………………………………………………………………………………………..

From a broader industry perspective, these results build on ongoing 5G NR-CA advancements. T-Mobile US has demonstrated approximately 6 Gbps downlink throughput using six aggregated carriers in FR1, as well as 550 Mbps uplink throughput leveraging uplink Tx switching across sub-6 GHz bands. In Europe, Vodafone and MediaTek achieved 277 Mbps uplink throughput using NR uplink CA, while Elisa, Ericsson, and MediaTek demonstrated 12CC aggregation reaching 8 Gbps downlink—highlighting the scalability of NR-CA as defined in 3GPP Release 17 and evolving into Release 18 (5G-Advanced).

Within Australia, Telstra has deployed Ericsson’s automated carrier aggregation (CA) optimization solution across more than 50 live 5G Advanced sites, leveraging dynamic CA configuration and traffic-aware scheduling—capabilities aligned with 3GPP Release 18 objectives for AI-assisted RAN optimization.

A notable aspect of the Optus/Ericsson demonstration is the aggregation of 180 MHz of mid-band spectrum across n40 (2.3 GHz) and n78 (3.5 GHz). While not a headline peak-rate milestone, this represents a first in terms of inter-band mid-band NR-CA deployment at this bandwidth scale. Mid-band aggregation is particularly significant within the 3.3–4.2 GHz “golden band” range defined in global 5G spectrum harmonization efforts, as it offers an optimal balance between coverage and capacity.

Operationally, this configuration is expected to deliver immediate gains in high-traffic scenarios—such as dense urban environments, transport hubs, and large venues—by increasing available cell throughput and improving user-level quality of service (QoS). Furthermore, the expanded mid-band capacity directly benefits fixed wireless access (FWA) deployments, where sustained throughput and cell-edge performance are critical. Because the demonstrated CA combinations are already supported by commercial UE categories, deployment can proceed without requiring new device classes, accelerating time-to-impact.

Ericsson was recently selected to modernize and expand SoftBank’s core networks, as well as accelerate the Japanese giant’s 5G SA adoption. Expanding on a previous 5G SA deal centered around its radio access network (RAN) products, Ericsson is providing SoftBank with its Core Networks’ portfolio, including a dual-mode 5G Core solution running on Ericsson’s Cloud Native Infrastructure Solution (CNIS).

……………………………………………………………………………………………………………………………..

References:

https://www.ericsson.com/en/press-releases/7/2026/optus-and-ericsson-achieve-world-first-180mhz-across-2-3ghz-and-3-5ghz-5g-standalone-carrier-aggregation-on-live-network-using-commercial-devices-boosting-5g-customer-experience

https://www.telecoms.com/5g-6g/optus-and-ericsson-use-carrier-aggregation-to-notch-up-3-4-gbps-on-a-live-5g-sa-network

https://www.sdxcentral.com/news/ericsson-and-optus-claim-5g-sa-world-first/

https://www.ericsson.com/en/press-releases/7/2026/optus-and-ericsson-trial-ai-to-boost-5g-downlink

https://www.nokia.com/mobile-networks/ran/carrier-aggregation/5g-carrier-aggregation-explained/

China Unicom-Beijing and Huawei build “5.5G network” using 3 component carrier aggregation (3CC)

Nokia, BT Group & Qualcomm achieve enhanced 5G SA downlink speeds using 5G Carrier Aggregation with 5 Component Carriers

Finland’s Elisa, Ericsson and Qualcomm test uplink carrier aggregation on 5G SA network

T-Mobile US, Ericsson, and Qualcomm test 5G carrier aggregation with 6 component carriers

Ericsson and MediaTek set new 5G uplink speed record using Uplink Carrier Aggregation

BT tests 4CC Carrier Aggregation over a standalone 5G network using Nokia equipment

T-Mobile US achieves speeds over 3 Gbps using 5G Carrier Aggregation on its 5G SA network

 

Extreme Networks deploys Wi‑Fi 7 (IEEE 802.11be) at University of Florida’s “Swamp”

Executive Summary:

Extreme Networks, Inc. today announced the deployment of the first Wi‑Fi 7 network in a collegiate stadium at the University of Florida’s Ben Hill Griffin Stadium, also known as “The Swamp.”  The deployment is engineered to support peak densities approaching 90,000 concurrent users, with an emphasis on low-latency, high-throughput connectivity under extreme load conditions. Client devices associate rapidly via optimized authentication and roaming mechanisms, while high-efficiency scheduling enables uninterrupted uplink/downlink performance for real-time video streaming, social media sharing, and in-venue digital services such as mobile ordering.  Wi‑Fi 7 is based on the IEEE 802.11be standard, which was designed to improve ultra-dense venue wireless network performance.

Wi‑Fi 7 (IEEE 802.11be) improves the stadium fan experience by increasing capacity, lowering latency, and making the radio layer more resilient in dense, interference-prone environments. The most relevant features are Multi-Link Operation (MLO) for simultaneous multi-band transmission, 320 MHz channels in 6 GHz, 4K-QAM, preamble puncturing, and enhanced OFDMA/MU-MIMO scheduling. These features collectively improve spectral efficiency, reduce contention, and sustain deterministic performance in ultra-dense environments. The result is a carrier-grade WLAN fabric that transforms “The Swamp” into a high-capacity, low-latency connectivity domain, establishing a new benchmark for large public venues.

This wireless infrastructure aligns with the University of Florida’s broader stadium modernization program, which includes physical upgrades such as expanded concourses, optimized ingress/egress flows, premium seating enhancements, and next-generation audiovisual systems. The converged digital and physical redesign enables tighter integration between network intelligence and venue operations.

Image Credit: University of Florida

“On game day, The Swamp transforms into one of the most electrifying and densely connected environments in college sports,” said Matt Vincent, Assistant Athletics Director, Information Technology at the University of Florida. “As we continue to invest in the fan experience at Ben Hill Griffin Stadium, adding Wi-Fi 7 allows us to significantly increase capacity while enabling smarter, real-time connectivity that helps everything run smoothly at peak demand. The NIaaS model from Extreme Networks also provides the flexibility to scale as needed without significant upfront investment, allowing our IT team to operate more efficiently while delivering a consistently high-quality digital experience for every fan.”

A New Era of Fan Connectivity:

The new Wi‑Fi 7 (IEEE 802.11be) network from Extreme will deliver:

  • Ultra-fast speeds enabling seamless 4K/8K video streaming, instant social sharing, and real-time stats access.
  • Lower latency for responsive mobile experiences, including in-seat ordering and interactive apps.
  • Improved device capacity supporting tens of thousands of concurrent connections without performance degradation.
  • Consistent coverage across seating bowls, concourses, suites, and outdoor areas.

Key Wi‑Fi 7 (IEEE 802.11be PHY) functions:

  • 320 MHz channels: Double the maximum Wi‑Fi channel width versus Wi‑Fi 6/6E, which increases potential throughput in 6 GHz.
  • 4K-QAM: Packs more bits into each symbol, improving efficiency when signal conditions are good and devices are close to APs, as they often are in under-seat stadium designs.
  • Puncturing: Lets the AP use the clean portion of a wide channel even if part of it is affected by interference, instead of discarding the whole channel.
  • Multi-RU and enhanced OFDMA: Improves how airtime is split among many clients, which is critical when large numbers of fans are active simultaneously.
  • Better MU-MIMO: Helps the AP serve multiple users in parallel, supporting more concurrent sessions without as much contention.
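The first two features in that list are where the headline Wi‑Fi 7 speed numbers come from. The sketch below shows the standard PHY-rate arithmetic — streams x data subcarriers x bits per symbol x code rate, divided by the OFDM symbol time — using the maximum 802.11be configuration (16 spatial streams, 320 MHz, 4096-QAM), with Wi‑Fi 6 at its 160 MHz / 1024-QAM maximum for comparison. Real stadium APs and phones support far fewer streams, so these are ceilings, not expected per-user speeds.

```python
# How the Wi-Fi 7 headline PHY rates follow from the features listed above:
# rate = streams * data_subcarriers * bits_per_symbol * code_rate / symbol_time

def phy_rate_gbps(streams, data_subcarriers, qam_bits, code_rate=5/6,
                  symbol_us=12.8, gi_us=0.8):
    """Peak PHY rate for a full-bandwidth transmission (shortest guard interval)."""
    symbol_time = (symbol_us + gi_us) * 1e-6       # 12.8 us symbol + 0.8 us GI
    bits_per_symbol = streams * data_subcarriers * qam_bits * code_rate
    return bits_per_symbol / symbol_time / 1e9

# Wi-Fi 6 (802.11ax) maximum: 8 streams, 160 MHz (1960 data subcarriers), 1024-QAM
wifi6 = phy_rate_gbps(streams=8, data_subcarriers=1960, qam_bits=10)
# Wi-Fi 7 (802.11be) maximum: 16 streams, 320 MHz (3920 data subcarriers), 4096-QAM
wifi7 = phy_rate_gbps(streams=16, data_subcarriers=3920, qam_bits=12)

print(f"Wi-Fi 6 max PHY rate: {wifi6:.1f} Gbps")   # ~9.6 Gbps
print(f"Wi-Fi 7 max PHY rate: {wifi7:.1f} Gbps")   # ~46.1 Gbps
```

Doubling the channel width doubles the data subcarriers, doubling the streams doubles the parallelism, and moving from 10 to 12 bits per symbol adds another 20% — which is how Wi‑Fi 7’s theoretical maximum lands near 46 Gbps versus Wi‑Fi 6’s 9.6 Gbps.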

Transforming Stadium Operations:

For fans, the visible benefits are faster onboarding, smoother streaming, and more reliable mobile ordering and payments. For operators, the same network supports staff communications, POS systems, video surveillance, and IoT devices such as sensors and digital signage. Analytics from the WLAN can also reveal crowd flow, dwell time, and concession demand, which helps optimize staffing and sponsorship placement.

Beyond fan-facing services, the Wi‑Fi 7 network underpins mission-critical operational workflows. High-reliability connectivity supports real-time staff communications, accelerates point-of-sale (POS) transaction processing with reduced latency and higher transaction concurrency, and enables high-definition video surveillance integrated with AI/ML-based analytics for threat detection and crowd safety.

The network also functions as an IoT aggregation layer, supporting smart sensors, digital signage, environmental monitoring, and automated control systems via secure segmentation and policy enforcement. Through advanced analytics platforms such as Extreme Analytics, operators gain granular, real-time visibility into user behavior and network performance, including crowd flow dynamics, dwell time distributions, application usage patterns, and concession demand signals.

These data-driven insights enable closed-loop optimization of venue operations, from dynamic staffing and queue management to targeted digital engagement and monetization strategies, including context-aware advertising and sponsorship activation. In aggregate, the deployment represents a shift toward an intent-driven, analytics-centric stadium architecture where connectivity, operations, and revenue generation are tightly coupled.

About Extreme Networks:

Extreme Networks, Inc. (EXTR) is a leader in AI-powered cloud networking, focused on delivering simple and secure solutions that help businesses address challenges and enable connections among devices, applications, and users. We push the boundaries of technology, leveraging the powers of artificial intelligence, analytics, and automation. Tens of thousands of customers globally trust our AI-driven cloud networking solutions and industry-leading support to enable businesses to drive value, foster innovation, and overcome extreme challenges.

References:

https://www.businesswire.com/news/home/20260506829623/en/Extreme-Powers-First-Ever-College-Stadium-WiFi-7-Deployment-at-University-of-Floridas-The-Swamp

Research & Markets: WiFi 6E and WiFi 7 Chipset Market Report; Independent Analysis

Wireless Broadband Alliance Report: WiFi 7, converged Wi-Fi and 5G, AI/Cognitive networks, and OpenRoaming

WiFi 7: Backgrounder and CES 2025 Announcements

WiFi 7 and the controversy over 6 GHz unlicensed vs licensed spectrum

Qualcomm FastConnect 7800 combining WiFi 7 and Bluetooth in single chip

MediaTek to expand chipset portfolio to include WiFi7, smart homes, STBs, telematics and IoT

 
