The enterprise network stack is collapsing; AI’s impact; comparison with “Batch Pipelines Break AI Agents”

by Shashi Kiran with Alan J Weissberger, ScD

Definitions:

The enterprise network stack is much more than a protocol stack. It is the layered architecture of physical infrastructure, forwarding devices, control protocols, management systems, and security enforcement functions that interconnect users, endpoints, workloads, and cloud services across campus, branch, WAN, data center, and cloud domains. It typically includes access, distribution, core, and edge segments, along with overlay, orchestration, telemetry, identity, and policy planes that govern how traffic is admitted, routed, segmented, monitored, and secured.

A useful way to think about the stack is in terms of planes:

  • Data plane: forwards packets, enforces QoS, and applies access-control functions close to the traffic path.

  • Control plane: discovers topology and capabilities, computes paths, and reacts to failures.

  • Management plane: handles configuration, monitoring, troubleshooting, reporting, and performance management.

  • Security stack: includes firewalls, IDS/IPS, secure web gateways, threat intelligence, and related inspection or enforcement tools.

At the device level, the stack typically includes physical media and network hardware such as cabling, Wi-Fi, NICs, switches, routers, gateways, servers, and dedicated security appliances. At higher layers, it includes protocols and services for addressing, routing, transport, application connectivity, identity, and policy enforcement, often mapped loosely to OSI/TCP-IP concepts rather than a strict textbook stack.

In an enterprise environment, the network stack extends across LAN, WAN, data center, cloud, and security domains, so “the stack” is less a single product and more an integrated system of infrastructure, software, telemetry, and policy. That is why discussions of enterprise architecture usually separate forwarding, orchestration, assurance, and security functions even when they are delivered in a unified platform.

Structural Limits of the Enterprise Network Stack:

The enterprise network stack is approaching a structural inflection point, and may already be at a breaking point. What's failing is structural and architectural, not incremental. The enterprise network stack was architected for a world that no longer exists, and most of the pain organizations feel today is the cost of pretending otherwise. The interesting question isn't whether it breaks, but when, and along which seams. Here's why:

The network stack most enterprises still run was designed around five assumptions that were partly true in 2010 but mostly false in 2026. Users sit at desks on managed devices. Applications live in a corporate data center. Traffic flows north-south through a perimeter. Identity equals a user with a session. Trust derives from network location. Every one of those is gone. Users are hybrid, apps are SaaS and multi-cloud, traffic is increasingly east-west and machine-driven, identity now includes non-human agents acting with delegated authority, and zero trust has formally retired the idea that being inside the network means anything.

So, the enterprise stack isn’t failing because any single piece is bad. Rather, it’s failing because the architecture it was based on no longer matches the workload, the threat model, or the operational reality it’s asked to serve. AI is the forcing function, but the cracks were already there. The choice in front of most enterprises isn’t whether to rebuild but whether to do it deliberately or by accident. Will reinvention and self-disruption be intentional or forced?

Today, many enterprise environments represent layered extensions of legacy architectures rather than cohesive designs. AI acts as an accelerant, exposing pre-existing architectural limitations. The resulting fragmentation increases operational complexity, reduces agility, and amplifies security risk.

Complexity is a Primary Risk Vector:

Complexity has evolved from an operational burden into a primary source of systemic risk. Modern network environments often exceed the capacity for deterministic human understanding, creating conditions where failures and vulnerabilities emerge at the intersections between systems rather than within individual components.

Empirical evidence suggests that many successful breaches exploit misconfigurations and integration gaps rather than novel vulnerabilities. In this context, complexity itself becomes the effective attack surface.

This challenge is particularly acute in the LAN, which often retains legacy architectural elements, heterogeneous device ecosystems, and fragmented management models. Combined with constrained IT resources, this environment can become a disproportionate source of exposure.

Reducing complexity—through architectural simplification, integrated control planes, and automation—is therefore not merely an operational objective but a core security strategy. In AI-driven environments, simplicity directly contributes to resilience and risk reduction.

An Architectural Reset is Needed:

An architectural reset is increasingly necessary. While incremental upgrades remain feasible, their marginal returns are diminishing relative to the growing mismatch between legacy designs and emerging requirements. Many organizations continue to extend existing architectures due to cost constraints or perceived transition risks. However, this approach often compounds technical debt and increases long-term exposure. The more fundamental question is not whether incremental evolution is possible, but whether it represents effective capital allocation in the context of AI-driven workloads and threat models.

Forward-looking architectures are converging around several principles: AI-native workload support, identity-centric security, zero-trust enforcement, and tightly integrated operational models. Organizations that proactively redefine their network architectures around these principles are more likely to achieve sustainable performance, security, and operational efficiency gains.

Security and the Network Fabric:

Security is neither fully “moving into” nor “remaining outside of” the network fabric; rather, it is being restructured across distinct functional planes, including identity, policy, enforcement, and detection.

Historically, network-centric security relied on in-path inspection mechanisms (e.g., firewalls, intrusion prevention systems, and proxies). This model proved difficult to scale due to encryption, cloud decentralization, and traffic patterns that bypass centralized inspection points.

In contemporary architectures, the network fabric is evolving into a high-performance enforcement plane. Policy definition and decision-making are increasingly centralized in identity and control-plane systems, while enforcement is distributed across the network and applied at line rate to identity-associated flows.

This separation of concerns improves scalability and composability. Identity-centric policy models define “who can do what,” while the network enforces those decisions efficiently and locally. The result is a more adaptable and performant security architecture.
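The split between centralized policy definition and distributed enforcement can be sketched in a few lines. This is an illustrative model only, with hypothetical identities, resources, and class names, not any vendor's API: a central policy plane compiles identity-based rules into a flat decision table, and each enforcement point in the fabric applies that table locally with a simple lookup.

```python
# Sketch of identity-centric policy with distributed enforcement.
# All names here are illustrative, not a real product API.

# Central policy plane: "who can do what", keyed by identity, not IP address.
POLICY = {
    "svc-payments":  {"db-orders:read", "db-orders:write"},
    "agent-copilot": {"db-orders:read"},           # a non-human identity
    "user-alice":    {"db-orders:read", "wiki:write"},
}

def compile_decisions(policy):
    """Flatten policy into (identity, resource_action) pairs that
    enforcement points can check locally at line rate."""
    return {(ident, act) for ident, acts in policy.items() for act in acts}

class EnforcementPoint:
    """A fabric node holding only the compiled decision table; it never
    needs the full policy logic, just a constant-time local lookup."""
    def __init__(self, decisions):
        self.decisions = decisions

    def admit(self, identity, resource_action):
        return (identity, resource_action) in self.decisions

# Distribute the same compiled table to every fabric node.
edge = EnforcementPoint(compile_decisions(POLICY))
print(edge.admit("agent-copilot", "db-orders:read"))   # True
print(edge.admit("agent-copilot", "db-orders:write"))  # False
```

The design point is that the expensive part (deciding policy) happens once, centrally, while the hot path (enforcing it) is a cheap lookup that can run wherever traffic enters the fabric.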

However, the effectiveness of this approach depends on architectural discipline. Designs that treat the fabric as one component within a broader, identity-driven security framework tend to reduce complexity. Conversely, attempts to re-centralize security entirely within the network risk recreating earlier limitations in a more complex form.

AI’s Impact on Telecommunications Networks:

Artificial intelligence (AI) is influencing telecom network architectures along two orthogonal dimensions:

1. AI introduces a new class of workloads that impose stringent and atypical requirements on network infrastructure.

AI workloads fundamentally challenge legacy network design assumptions. Traditional enterprise networks were optimized for north–south traffic patterns, human-driven interactions, and best-effort delivery models. In contrast, AI workloads generate predominantly east–west traffic, operate at machine timescales, and exhibit low tolerance for latency, jitter, and packet loss. Simultaneously, AI-enabled control and management planes enable higher degrees of automation and operational efficiency, particularly in campus and branch environments where autonomous operations are beginning to reduce manual intervention.

2. AI is increasingly being embedded within the network itself, enhancing operations, optimization, fault diagnosis/recovery, and security functions. The interaction between these roles is driving many of the architectural shifts observed today. Wide-area networks (WANs) must now interconnect AI-intensive data center environments with distributed enterprise domains, effectively bridging heterogeneous traffic models and service requirements.

AI-Driven Changes in Traffic and Risk:

AI is reshaping both the structure of network traffic and its associated risk profile. From a traffic perspective, flows are becoming increasingly east–west, bursty, and machine-generated, with reduced visibility due to encryption and abstraction layers. From a security standpoint, AI introduces new classes of actors (e.g., non-human identities and autonomous agents), as well as new attack vectors, including adversarial AI and data exfiltration via model interactions.

These shifts are tightly coupled. The same properties that define AI-driven traffic—distribution, dynamism, and opacity—also complicate detection and enforcement. As a result, security architectures are evolving toward:

  • Identity-centric models that extend zero-trust principles to non-human entities.

  • Data loss prevention mechanisms adapted to AI-generated and AI-consumed data flows.

  • Fine-grained segmentation within network fabrics, subject to latency constraints.

  • Increased reliance on AI-driven detection and response systems to counter AI-enabled threats.

Importantly, these dynamics vary across network domains (LAN, WAN, and data center/cloud), requiring domain-specific adaptations while maintaining consistent policy frameworks.

Alignment with “Why Batch Pipelines Break AI Agents: The Case For Streaming-First Network Operations:”

The key points made in this article are highly consistent with the IEEE Techblog post referenced above, written by Shazia Hasnie, Ph.D. Both articles treat AI as an architectural forcing function: Shazia's article focuses on the data/telemetry layer, while this post extends the same logic to the broader enterprise network stack. The core claim in both pieces is that legacy architectures were built for human-operated, latency-tolerant workflows, not autonomous AI systems. In Shazia's article, batch pipelines fail because they deliver stale, incomplete, and inconsistent context to AI agents. Here, the same mismatch appears at the network level, where legacy enterprise designs were optimized for north–south traffic, perimeter trust, and static operational assumptions. Both arguments are fundamentally about architectural mismatch rather than isolated product shortcomings.

A particularly strong point of overlap is the emphasis on real-time context. Shazia’s article argues that AI agents require continuous data freshness and an ordered event stream to function safely, while this piece frames AI networking as a shift toward machine-timescale traffic, streaming telemetry, and identity-aware enforcement. In both cases, the network is no longer just a transport layer; it becomes part of the control loop that determines whether AI decisions are accurate and timely.

The failure models are also similar.  Shazia identifies five failure modes of batch-to-agent mismatch: stale data, memory gaps, delete blindness, schema fragility, and coordination failure. While not using that taxonomy explicitly, we share the same underlying diagnosis by arguing that complexity, fragmentation, and legacy operational models are now the primary sources of risk. Our discussion of east–west traffic, non-human identities, zero trust, and observability mirrors Shazia’s broader point that autonomous systems fail when their surrounding infrastructure cannot preserve state, sequence, and policy consistency.

These two articles work well together because they address different layers of the same transition. The first article is mainly about the data plane of AI operations—how telemetry, event streams, and agent inputs must move from batch to streaming to avoid operational failure. This article is about the network and security architecture around that data plane—how the enterprise stack, LAN, WAN, and fabric must evolve to support AI-native workloads and enforcement.  Hence, the reader can consider the two articles companion pieces.

…………………………………………………………………………………………………………………………………………………………………………………………

About the Author:

Shashi Kiran is the Chief GTM Officer & CMO at Nile.

…………………………………………………………………………………………………………………………………………………………………………………………

References:

Why Batch Pipelines Break AI Agents: The Case For Streaming-First Network Operations

Ookla on the Global D2D Market

Direct-to-device (D2D) satellite connectivity is emerging as a practical extension of non-terrestrial networks (NTNs), enabling standard smartphones to communicate directly with satellite systems without specialized user equipment. Within the 3GPP ecosystem, NTN capabilities were standardized (3GPP specs become standards by being rubber stamped by ETSI and ITU-R) beginning with 3GPP Release 17, establishing a framework for satellite-terrestrial interoperability and expanding the potential reach of mobile broadband beyond the footprint of terrestrial radio access networks.

D2D services could reduce persistent coverage gaps, especially in rural, maritime, and other underserved environments where terrestrial deployment is constrained by economics or geography. However, commercially available services today remain limited, with most deployments focused on messaging and other low-throughput applications rather than full mobile broadband.

From a market perspective, D2D and NTN have broad implications for mobile network operators (MNOs), satellite operators, equipment vendors, and regulators. That strategic importance helps explain why companies such as Apple, Amazon, SpaceX, and AST SpaceMobile are investing in this segment, alongside broader ecosystem activity around 3GPP-based NTN architectures.

Image Credit: Ookla

Ookla® has contributed to the discussion with a high-resolution poster showing global Speedtest® usage data for D2D services, along with a detailed market study on the D2D landscape. The analysis is based on Android devices that register with D2D-capable satellite systems from Starlink, Skylo, and Lynk, providing an early empirical view of how NTN-based connectivity is being used in practice.

Looking ahead, continued investment in larger, next-generation satellite constellations, coupled with expanded spectrum access, should improve D2D capacity, coverage, and service robustness. Key players, including Starlink, AST SpaceMobile, and Amazon's Project Kuiper, are targeting higher data rates and broader service capabilities, with the objective of extending beyond narrowband messaging to support more data-intensive applications, and with 3GPP NTN providing the standardization path for broader ecosystem scale-up.

For MNOs, the evolution of D2D introduces potential shifts in network planning and capital allocation, particularly at the margins of coverage. Satellite-based augmentation could reduce the economic rationale for terrestrial infrastructure deployment in sparsely populated areas, with downstream implications for tower companies and certain segments of the radio access network (RAN) supply chain.

From a policy perspective, D2D also has the potential to reshape universal service frameworks and coverage obligations. Regulators seeking to expand connectivity may increasingly incorporate NTN-based solutions into their policy toolkits, prompting a reassessment of long-standing assumptions regarding the role of terrestrial infrastructure in achieving nationwide coverage.  In that sense, D2D is not just a satellite story.  It is becoming a broader telecom architecture shaped by 3GPP specifications and the convergence of terrestrial and non-terrestrial mobile networks.

Merry-go-round of dog chasing his tail: relationship between U.S. hyperscalers and private Gen AI companies

1.  Hyperscalers’ earnings growth this quarter was boosted by an unusually large contribution from “other income,” which was actually mark-ups of their equity stakes in private Gen AI companies.  For example:

  • Nearly half of Alphabet’s (Google) record $62.6 billion profit—about $28.7 billion—did not come from search ads, cloud services or any of its products at all. It came from Alphabet updating the value of the equity it owns in private AI companies, primarily Anthropic. Alphabet held a 14% stake prior to the announcement of an additional $40 billion commitment last week.
  • Amazon’s earnings release stated that first-quarter net income “includes pre-tax gains of $16.8 billion included in non-operating income from our investments in Anthropic”—more than half of Amazon’s pre-tax income (or profit) for the quarter.
  • Alphabet and Amazon generated “other income” totaling $53 billion in Q1 2026, which accounted for nearly 60% of those two companies’ total net income in Q1 and 34% of the total $155 billion in income this quarter. Of this $53 billion in “other income,” $49 billion was explicitly due to equity stakes in private AI companies.
  • Microsoft reported “only” $942 million of other income in the first three months of the year, but this line item has now made $7.2 billion over the past nine months.
  • Under U.S. accounting rules, publicly traded firms must adjust and report the assessed value of their private equity holdings every quarter. Because private AI start-ups like Anthropic experienced meteoric valuation updates (e.g., Anthropic climbing to an estimated $380 billion), both Alphabet and Amazon were required to record those massive “on-paper” gains directly to their bottom-line net income.
  • When the AI bubble finally bursts (and it will) the private AI companies assessed market value will collapse, resulting in “impairment write-downs” and huge earnings declines for the hyperscalers, e.g. Amazon, Google/Alphabet, Microsoft, FB/Meta, and Oracle.
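The percentages in these bullets follow directly from the stated dollar figures; a quick check (all values in billions, taken from the bullets above):

```python
# Arithmetic behind the "other income" bullets (all figures in $ billions,
# as reported in the text above).
other_income_total = 53    # Alphabet + Amazon "other income", Q1 2026
all_company_income = 155   # total quarterly income cited in the text
ai_equity_portion  = 49    # portion explicitly tied to AI equity stakes

# Share of the total $155B quarterly income: ~34%, matching the bullet.
print(f"{other_income_total / all_company_income:.0%}")

# Share of "other income" that was AI equity mark-ups: ~92%.
print(f"{ai_equity_portion / other_income_total:.0%}")
```

In other words, AI equity mark-ups alone account for roughly a third of the quarter's combined income, which is the crux of the bubble concern raised in the final bullet.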

2. Now here’s the merry-go-round, dog-chasing-its-tail relationship:

Not only have private investments and increasingly engorged funding rounds become a meaningful driver of the hyperscalers’ aggregate earnings, but the money the hyperscalers have pumped into the likes of Anthropic and OpenAI has allowed those private AI companies to sign huge computing deals with Alphabet’s Google Cloud, Microsoft’s Azure and Amazon Web Services (AWS).  OpenAI and Anthropic now make up about half of the entire cloud computing order books at Oracle, Alphabet, Amazon and Microsoft! 

Indeed, AI startups have loaded up hyperscalers with unprecedented long-term financial commitments.

–>OpenAI and Anthropic make up over $1 trillion of the estimated $2 trillion cumulative revenue backlog currently held by major cloud service providers!

  • OpenAI to Microsoft Azure: Internal documents show OpenAI’s massive server rentals have generated more than $23 billion in direct cloud spending for Microsoft.
  • Anthropic to Google Cloud: Anthropic signed a contract committing to spend $200 billion over five years on Google’s cloud infrastructure and TPU chips.
  • Anthropic to AWS: In tandem with a fresh $5 billion investment from Amazon, Anthropic committed to spend over $100 billion over the next decade on AWS technologies.

Image Generated by Chat GPT

……………………………………………………………………………………………………………………………………………………………

3. Because hyperscalers report their overall cloud results as broad aggregates, the exact percentage of current quarter revenue generated purely by AI startups varies by provider. However, recent financial disclosures and analyst tracking pinpoint the enormous impact of these startups on current revenues and future order books:
-Google Cloud:
    • Backlog Percentage: Over 40%. Anthropic‘s $200 billion Multi-Year Commitment accounts for nearly half of Google Cloud’s total disclosed $240 billion revenue backlog.
    • Current Revenue Share: Estimated 12% to 15% of its current $20 billion quarterly revenue run-rate is driven directly by AI infrastructure consumption from startups (both frontier labs and over 40 mid-tier AI companies built on Google Cloud Vertex AI).

-Microsoft Azure:
    • Current Revenue Share: Estimated 15% to 18%. Microsoft’s annualized AI revenue run-rate hit $37 billion. A massive chunk of Azure’s overall 40% growth rate is anchored directly by OpenAI’s compute demands and the commercialization of OpenAI-tied products.

-Amazon Web Services (AWS):
  • Current Revenue Share: Estimated 6% to 8%. While AWS has the largest overall cloud scale ($150 billion annual run rate), its revenue is traditionally diversified across enterprise SaaS and retail. However, Anthropic’s new $100 billion infrastructure commitment means AWS’s revenue mix is aggressively shifting toward AI startups.

–>This is another sign of just how incestuously codependent the big tech industry is with astronomically valued private AI start-ups.

…………………………………………………………………………………………………………………………………………………………..

4. Another example of this codependency is Oracle and OpenAI’s massive, debt-fueled financial loop. In September 2025, the two companies signed a staggering five-year, $300 billion cloud-computing contract. This single deal radically transformed both companies’ financial profiles, inextricably binding their fates together.

The deal functioned as an aggressive narrative magnifier for both companies:
    • For Oracle: The $300 billion contract instantly added to Oracle’s Remaining Performance Obligations (RPO), which skyrocketed 359% to $455 billion. This accounting metric allowed Oracle to position itself as a dominant “hyperscaler,” pushing its market cap upward.
    • For OpenAI: The contract allowed OpenAI to claim it had secured the long-term compute capacity needed to achieve Artificial General Intelligence (AGI). This backed up its massive valuations, enabling OpenAI to close a historic $122 billion funding round in March 2026 at an $852 billion valuation.  
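The 359% RPO jump implies a pre-deal backlog of roughly $99 billion. This is an inference from the stated figures, not a separately disclosed number, and the total increase (~$356B) slightly exceeds the $300B OpenAI contract, the remainder presumably being other bookings:

```python
# Back-of-envelope check of Oracle's RPO jump (reported figures from the
# text; the implied pre-deal RPO is inferred, not separately disclosed).
rpo_after  = 455.0   # $B, RPO reported after the OpenAI deal
growth_pct = 359     # reported percentage increase

rpo_before = rpo_after / (1 + growth_pct / 100)
print(f"implied prior RPO: ${rpo_before:.0f}B")          # ~ $99B
print(f"total RPO increase: ${rpo_after - rpo_before:.0f}B")  # ~ $356B, of which $300B is the OpenAI contract
```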

The financial codependency between the two entities is asymmetrical and high-risk:  
  • Oracle is a Financial Proxy for OpenAI: If OpenAI faces a “credit event” or cash crunch, Oracle’s stock directly plummets. Critics note that Oracle signed a contract with a startup that historically burns far more cash than it takes in, making OpenAI’s ability to actually pay the $300 billion highly volatile.
  • The Debt Spiral: To physically fulfill OpenAI’s compute demands, Oracle has gone on a massive, debt-fueled construction spree. Oracle raised $18 billion in bonds in late 2025 and an additional $30 billion in early 2026. Its capital expenditures have eclipsed operating cash flows, leading to deeply negative free cash flow and over $134 billion in total corporate debt.
The scale of this relationship has triggered systemic friction on Wall Street:
    • Project Finance Bottlenecks: Major commercial banks have struggled to syndicate the massive multi-billion-dollar construction loans Oracle needs to build out the required data centers (such as its 4.5-gigawatt capacity goals).
    • Bank Limits: The sheer volume of debt concentrated around this single enterprise relationship has pushed several Wall Street institutions against their regulatory exposure limits for a single corporate partnership.

Ultimately, critics view the partnership as a circular loop: Oracle borrows tens of billions of dollars to build data centers for OpenAI, hoping OpenAI can continuously raise venture capital from the market to pay Oracle back, while Oracle uses OpenAI’s paper contracts to justify its skyrocketing stock value to its own investors.

……………………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.ft.com/content/be97df0a-76b1-4cb0-9ba4-d1117d8d1450
https://fortune.com/2026/04/30/google-amazon-ai-profits-anthropic-stake-bubble-earnings-2026/
https://finance.yahoo.com/sectors/technology/articles/google-amazon-biggest-profit-driver-170449859.html

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Expose: AI is more than a bubble; it’s a data center debt bomb

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

Open AI raises $8.3B and is valued at $300B; AI speculative mania rivals Dot-com bubble

China’s open source AI models to capture a larger share of 2026 global AI market

OpenAI and Broadcom in $10B deal to make custom AI chips

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight


Analyst firms wide forecasts for the LEO satellite direct-to-device (D2D) market

LEO satellite direct-to-device (D2D) technology looks promising. Telecom analyst firms see D2D as a fast-growing but still early-stage market, with forecasts ranging from roughly 22% to 49% revenue CAGR, depending on scope and whether they are measuring total D2D services or smartphone satellite D2D specifically. But that growth hasn’t materialized yet: T-Mobile chief Srini Gopalan said the service so far has generated “a lot less usage” than anticipated.

The most common near-term view is that basic D2D will add modest operator revenue at first, but the long-term market could become multi-billion-dollar as broadband and richer services mature.  Here are a few analyst forecasts:

  • MarketsandMarkets projects the D2D market to rise from USD 0.57 billion in 2025 to USD 2.64 billion by 2030, a 35.6% CAGR.
  • Mordor Intelligence projects the direct-to-device satellite connectivity market from USD 4.08 billion in 2025 to USD 13.80 billion by 2031, a 22.37% CAGR.
  • Omdia forecasts smartphone satellite D2D revenue to reach USD 11.99 billion by 2030, with a 49.4% revenue CAGR from 2026 to 2030.
  • Counterpoint Research expects 46% of all smartphones shipped by 2030 to be D2D-capable. That implies D2D is moving from a niche satellite feature toward a mainstream handset capability, driven by chipset integration and broader device support.
  • Juniper Research thinks the number of monthly active users will top 150 million by 2031. The analyst firm suggests a temporary access model, similar to roaming or travel eSIMs, where consumers purchase access in a particular area for a set period.  Juniper thinks connectivity alone won’t be enough to attract consumers. It believes operators will have to bundle the satellite service into rewards programs or roaming access.
  • Analysys Mason expects operators launching D2D in 2026 to see about a 1% annual revenue uplift from basic services alone, with much larger upside once broadband D2D becomes available.
  • TelecomTV reports a similar view from Analyst Brad Grivner, who says D2D could give MNOs around a 1% annual revenue uplift and also improve retention and upsell opportunities.

The spread in forecasts mostly reflects different definitions of the market, different start dates, and whether the analyst counts only current narrowband services or also future broadband D2D. In practical terms, the consensus is that D2D will start as a coverage and messaging feature, then evolve into a broader connectivity platform as device support and satellite capacity scale.

Analysts consistently point to 3GPP NTN standardization, more satellite-ready smartphones, and large-scale LEO deployments as the main catalysts. They also emphasize emergency messaging, rural coverage, IoT, industrial connectivity, and enterprise resilience as the first meaningful demand pools.  D2D market growth is being driven by a mix of coverage gaps, new device support, and expanding enterprise use cases. The strongest themes across analyst and industry reports are universal connectivity, IoT demand, LEO satellite buildout, and 3GPP NTN standardization.

Image Credit: Digital Regulation Platform

…………………………………………………………………………………………………………………………………………………………..

Main D2D growth drivers:

  • Coverage expansion. Analysts say D2D is filling a major gap in rural, remote, maritime, and disaster-prone areas where terrestrial networks are weak or unavailable.

  • 3GPP NTN standards. Standardized non-terrestrial networking is making satellite connectivity more practical for mainstream devices and accelerating ecosystem adoption.

  • LEO constellation growth. More low-Earth-orbit satellites, along with falling launch costs and better satellite economics, are increasing capacity and improving latency.

  • Smartphone integration. As more phones become satellite-capable, D2D can move beyond niche emergency features into broader consumer usage.

  • Enterprise IoT demand. Logistics, mining, agriculture, utilities, and energy firms want reliable connectivity for remote assets, monitoring, and worker safety.

  • Disaster resilience. Climate-related outages and emergency-response needs are pushing governments and operators toward backup connectivity solutions.

  • Carrier-satellite partnerships. Cooperation between MNOs and satellite operators is speeding commercialization and helping services reach scale.

The D2D market is still starting with messaging, emergency connectivity, and narrowband IoT, but analysts expect growth to broaden as device support and satellite capacity improve. In short, D2D grows fastest where it solves a clear pain point: no coverage, weak resilience, or expensive remote connectivity.

…………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.lightreading.com/satellite/making-the-most-of-satellite-d2d

Satellite direct-to-device services

Ookla: D2D satellite connectivity surged 24.5% during last 9 months; Starlink’s footprint expansion leads the way

Ookla: Starlink a viable competitor for hybrid 5G/NTN services due to network performance improvements and larger coverage area

GSA: 5G Non Terrestrial Networks, 5G SA and 5G Advanced gain momentum

Analysis: Amazon <- Globalstar – a strategic move for D2D and spectrum parity

Direct-to-Device (D2D) satellite network comparison: Starlink V2 (Starlink Mobile) vs “Satellite Connect Europe”

Deutsche Telekom selects Iridium for NB-IoT direct-to-device (D2D) connectivity

Standards are the key requirement for telco/satellite integration: D2D and satellite-based mobile backhaul

MTN Consulting: Satellite network operators to focus on Direct-to-device (D2D), Internet of Things (IoT), and cloud-based services

Nvidia strategic partnership with IREN targets 5 GW AI infrastructure buildout + $2.1B investment option

Nvidia has announced a strategic partnership with cloud AI data center operator IREN [1.] to deploy up to 5 gigawatts (5 GW) of AI infrastructure, driven by a $3.4 billion services contract and a $2.1 billion investment option for Nvidia. This collaboration aims to secure critical, high-density data center capacity for AI workloads while accelerating IREN’s transition into a major AI infrastructure provider. The expansion targets up to 5 GW of NVIDIA DSX-aligned AI infrastructure across IREN’s global pipeline. The roadmap centers on the 2 GW Sweetwater campus in Texas, positioned to be the flagship deployment of NVIDIA’s DSX factory architecture. This integrated model combines NVIDIA’s reference designs with IREN’s core competencies in utility-scale power procurement, site development, and full-stack GPU cloud operations.

Note 1. IREN’s metamorphosis from specialized mining to high-performance computing (HPC) mirrors the trajectory of Tier-1 AI Cloud providers like CoreWeave. With an operational fleet of 23,000 GPUs and a 3GW secured power portfolio in renewable-heavy regions, IREN is rapidly scaling its North American footprint. 

“AI factories are becoming foundational infrastructure for the global economy,” said Jensen Huang, founder and CEO of Nvidia. “Deploying these systems at scale requires deep integration across the full stack — compute, networking, software, power and operations. IREN brings the scale and infrastructure expertise to help accelerate the buildout of next-generation AI infrastructure globally. Together, we are building for the age of AI,” he added.

“This partnership combines NVIDIA’s AI systems and architecture leadership with IREN’s expertise across power, land, data centers, GPU deployment and infrastructure operations,” said Daniel Roberts, cofounder and co-CEO of IREN. “Together, we believe we can accelerate deployment of AI infrastructure and expand access to compute for AI-native and enterprise customers globally.”

This partnership follows a massive $9.7B agreement with Microsoft for sovereign GPU cloud services—leveraging GB300 Blackwell systems—and a $5.8B hardware procurement through Dell. Despite the scale of the Microsoft deal, leadership indicates it utilizes only ~10% of IREN’s projected capacity.
……………………………………………………………………………………………………………………………………….
Upshot:
Nvidia’s agreement with IREN introduces a unique structural alignment: Nvidia acts as both an upstream provider and an anchor tenant/stakeholder. By securing long-dated options over direct equity, Nvidia mitigates balance sheet volatility while ensuring preferential access to critical, grid-connected capacity in a supply-constrained market.
……………………………………………………………………………………………………………………………………….

References:

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Expose: AI is more than a bubble; it’s a data center debt bomb

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

NTT’s IOWN is (finally) evolving to an All Photonics Network (APN); Physics based AI for enterprise OT

Like most major telcos, Japan’s NTT has revised its mid-term business strategy to center on AI infrastructure, data centers, and “value domains.” This shift follows a slowdown in its traditional telecoms “cash cow” business and aims to reorient the group toward higher growth areas.  The company is prioritizing AI-related services, overseas data centers, and its vision for an IOWN [1.] based connectivity platform built for GPU, network, and power-heavy workloads.

Note 1.  IOWN is NTT’s Innovative Optical and Wireless Network initiative, with a photonics-based optical network at its core.  An All-Photonics Network (APN) is NTT’s vision for a next-generation network that uses laser-generated light, rather than electronic conversion, to move data across compute, storage, and transport layers. It is NTT’s bet on a much faster, lower-latency, and more energy-efficient network architecture for AI, data centers, and advanced telecom services.

–> The all-optical network was promised by many new-age telcos in the late 1990s to early 2000s, but it has never seen the light of day (no pun intended).

Image Credit: NTT

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Benefits of an all photonics network:

Today, continuous, high-volume AI data flows across clouds, data centers and edge environments rely on stable, low-latency pathways. Yet networks that rely on optical to electrical to optical (OEO) conversion cannot provide this consistently. Even small variations in routing, buffering and electrical switching reduce the predictability that AI needs. Adding bandwidth can delay the symptoms but doesn’t address the architectural challenges these networks face as data movement intensifies.

At the leading edge of this shift is the All-Photonics Network (APN), developed by the IOWN Global Forum. It’s an architectural breakthrough and a practical step toward rearchitecting how data moves, designed for a world where AI is changing the rules entirely.  The APN introduces a new way of architecting and operationalizing photonic transport so organizations can use it without having to manage the underlying optical engineering. Instead of relying on electrical conversions at every stage, it extends optical communication to the transport layers that connect sites, regions and data centers. That results in far more consistent network performance. It reduces jitter significantly and improves throughput by avoiding repeated processing overhead.

The IOWN Global Forum outlines a future where optical-first infrastructure delivers (see image below):

  • Up to 100 times lower power consumption
  • 125 times greater transmission capacity
  • Up to 200 times lower end-to-end latency

NTT wants to combine AI with IOWN’s photonics-based networking to better support AI-era compute, data center, and transport demands.  AIOWN is meant to solve the bottlenecks created by AI workloads, where power, latency, and bandwidth are becoming as important as raw compute.

NTT is positioning it as infrastructure for the AI era, not just as a telecom upgrade, so it sits at the center of the company’s broader shift toward AI infrastructure and data centers. Instead of relying mainly on conventional electronic networking, the pure optical IOWN aims to connect data centers and networks with photonics-based transport that can reduce energy use and improve performance. That makes it especially relevant for GPU clusters, AI cloud environments, and high-capacity backbone links.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………

NTT says the traditional telecom environment is getting tougher, with stronger competition and rising traffic demands pressuring its core business. In response, it is shifting emphasis to three growth areas: AI services for corporate clients, global data center expansion, and adjacent financial services, while also reframing its network layer for the AI era through IOWN.

The “value domains” framing is basically NTT’s way of saying it wants to move up the stack into higher-margin, customer-specific businesses rather than remain mostly a utility-like connectivity provider. In practice, that means selling integrated AI, data center, and industry solutions where NTT can capture more of the economic value than in wholesale telecom alone.  NTT believes telecom cash flows will grow more slowly than AI infrastructure demand, and it is likely correct.  AIOWN is especially important because it ties together compute, networks, and power, which are becoming the real bottlenecks in AI deployments. The strategy also aligns with NTT’s broader enterprise AI positioning, where it can monetize infrastructure and services together rather than betting only on model development.

Key Features and Evolution of APN:
  • Commercial Evolution (APN1.0 to APN2.0): NTT launched APN1.0 in March 2023, offering dedicated wavelength services with 1/200th the latency of conventional networks. Evolution includes the introduction of Open APN (Open All-Photonic Network) standards for interoperability.
  • Performance Targets (2030): The APN aims to achieve 100× higher power efficiency, 125× greater capacity, and 1/200 the end-to-end latency compared to traditional, electronics-based networks.
  • Photonics-Electronics Convergence (PEC): By using light instead of electricity in network devices and servers, the APN eliminates costly, slow optical-electrical-optical conversions.
  • Service Expansion: APN services are expanding to support high-demand applications like 5G/6G mobile fronthaul, remote medical services, remote construction, and AI video analysis.
Implementation Progress:
    • 2025 Milestones: NTT utilized APN for the Expo 2025 Osaka to connect pavilions and demonstrated 1Tbps-class optical paths at OFC2025.
    • 2026 Developments: At MWC Barcelona 2026, NTT showcased APN-facilitated AI video analysis, in-network computing, and improved AI inference processing.
    • Open Standardization: NTT is collaborating with partners (e.g., IOWN Global Forum) to develop open specifications for multi-vendor interoperability.

The APN is key to creating a “data-centric” infrastructure where distributed data centers can function as one integrated system. NTT says the APN acts as the bridge that brings optical performance into practical use now, while preparing organizations for deeper photonic integration as the technology matures.  NTT Group, the parent company of NTT DATA, plays a key role in helping to move optical technologies from niche use cases into the mainstream.

…………………………………………………………………………………………………………………………………………………………………………

Most Operational Technology (OT) environments remain stuck with legacy systems, creating a gap between modern enterprise capabilities and industrial operations. NTT is addressing this gap by deploying private 5G networks and edge computing, allowing modernization without full system overhauls. This approach uses physics-based AI to provide secure, real-time insights on-premises, overcoming challenges in visibility and standardization.

…………………………………………………………………………………………………………………………………………………………………………

References:

https://uk.nttdata.com/insights/blog/when-networks-hit-the-speed-of-light-why-photonics-is-the-next-big-shift

The All-Photonics Network Enables the Next-Generation Digital Economy

https://www.rd.ntt/e/research/JN202203_17536.html

https://www.nttdata.com/global/en/insights/focus/2025/039

https://www.enterprisetimes.co.uk/2026/05/08/ntts-edge-strategy-overcomes-ot-stagnation/

NTT’s IOWN provides ultra low latency and energy efficiency in Japan and Hong Kong

NTT pins growth on IOWN (Innovative Optical and Wireless Network)

Sony and NTT (with IOWN) collaborate on remote broadcast production platform

NTT to offer optical technology-based next-generation network services under IOWN initiative; 6G to follow

NTT to launch 25 Gbps FTTH service in Tokyo starting March 2026

NTT DOCOMO successful outdoor trial of AI-driven wireless interface with 3 partners

 


Optus and Ericsson achieve 180MHz across 2.3GHz and 3.5GHz bands using carrier aggregation on a live 5G SA network

Australian telco Optus has demonstrated advanced 5G NR carrier aggregation (5G NR-CA) performance on its 5G standalone (SA) network by implementing four-component carrier aggregation (4CC CA) across low-, mid-, and upper-mid-band spectrum. Using Ericsson 5G SA network equipment and software, the configuration aggregates FDD bands at 900 MHz (Band n8) and 2.1 GHz (Band n1) with TDD bands at 2.3 GHz (Band n40) and 3.5 GHz (Band n78).

This combined Optus’ two mid-band TDD spectrum holdings at 2.3GHz and 3.5GHz, achieving a record 180MHz of TDD spectrum aggregation. In particular:

  • Four-Component Carrier aggregation enabled 220MHz downlink bandwidth, leveraging spectrum across four different bands of 900MHz, 2.1GHz, 2.3GHz and 3.5GHz
  • Two-Component Carrier uplink aggregation combined one Frequency Division Duplex (FDD) band from 900MHz and 2.1GHz with one TDD band from 2.3GHz and 3.5GHz
  • Achieved peak speeds of 3.4Gbps (downlink) and 200Mbps (uplink) in a live network site with commercial devices, including the Samsung Galaxy S26 Ultra
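A quick sanity check on how these figures decompose: the per-carrier channel widths below are assumptions for illustration (the announcement gives only the 220MHz downlink and 180MHz TDD totals), but they show the arithmetic behind the headline numbers:

```python
# Back-of-the-envelope decomposition of the Optus 4CC downlink aggregation.
# Per-carrier widths are ASSUMED for illustration; only the 220 MHz and
# 180 MHz totals come from the announcement.
carriers_mhz = {
    "n8 (900 MHz, FDD)": 10,
    "n1 (2.1 GHz, FDD)": 30,
    "n40 (2.3 GHz, TDD)": 80,
    "n78 (3.5 GHz, TDD)": 100,
}

total_dl_mhz = sum(carriers_mhz.values())                        # 220 MHz downlink
tdd_mhz = sum(w for b, w in carriers_mhz.items() if "TDD" in b)  # 180 MHz record

# End-to-end spectral efficiency implied by the 3.4 Gbps peak rate
spectral_eff = 3400 / total_dl_mhz  # ~15.5 bit/s/Hz across the aggregate

print(total_dl_mhz, tdd_mhz, round(spectral_eff, 1))
```

The implied ~15.5 bit/s/Hz is plausible for a clean live-network cell with 4×4 MIMO and high-order modulation, which is consistent with the MIMO-layer discussion below.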

The demonstration aligns with 3GPP Release 16 and Release 17 5G NR-CA enhancements (TS 38.300, TS 38.101-1/2), which extend carrier aggregation capabilities across heterogeneous duplex modes (FDD+TDD) and multiple frequency ranges within FR1. The downlink configuration leverages cross-band scheduling and advanced MIMO layers (likely up to 4×4 or higher per component carrier, depending on band support) to maximize spectral efficiency across aggregated carriers.

On the uplink, Optus and Ericsson reported 200 Mbps throughput using two-component carrier aggregation (2CC CA), combining FDD (n8/n1) and TDD (n40/n78) spectrum. This implementation is consistent with 3GPP Release 16 uplink enhancements, including uplink carrier aggregation and transmit (Tx) switching (TS 38.213), which enables efficient utilization of UE power resources across multiple uplink carriers, particularly in mixed duplex scenarios.

All results were achieved on a live commercial 5G SA network at Optus’ Sydney campus using commercial off-the-shelf (COTS) user equipment, including the Samsung Galaxy S26 Ultra. This indicates full compliance with 3GPP-defined UE capability signaling (TS 38.306) and the availability of device-side support for complex NR-CA band combinations, including inter-band and cross-duplex aggregation.

“This achievement demonstrates how we are translating cutting-edge 5G technology into meaningful benefits for customers in real-world environments. Through our ongoing collaboration with Ericsson, we are unlocking greater capacity and performance across our 5G network, enabling faster speeds and more reliable connectivity,” said Optus CTO Sri Amirthalingam. “This milestone marks an important step in our network evolution towards 5G Advanced, reinforcing our commitment to remain at the forefront of innovation and to deliver tangible value for our customers.”

Ludvig Landgren, head of Ericsson Australia and New Zealand operations said: “Optus continues to demonstrate strong leadership in adopting advanced 5G capabilities, and this milestone highlights the strength of our partnership. By expanding and combining multiple spectrum assets with Ericsson technology, we are helping Optus deliver meaningful performance improvements that translate directly into better everyday experiences for their customers.”

………………………………………………………………………………………………………………………………………………..

From a broader industry perspective, these results build on ongoing  5G NR-CA advancements. T-Mobile US has demonstrated approximately 6 Gbps downlink throughput using six aggregated carriers in FR1, as well as 550 Mbps uplink throughput leveraging uplink Tx switching across sub-6 GHz bands. In Europe, Vodafone and MediaTek achieved 277 Mbps uplink throughput using NR uplink CA, while Elisa, Ericsson, and MediaTek demonstrated 12CC aggregation reaching 8 Gbps downlink—highlighting the scalability of NR-CA as defined in 3GPP Release 17 and evolving into Release 18 (5G-Advanced).

Within Australia, Telstra has deployed Ericsson’s automated carrier aggregation (CA) optimization solution across more than 50 live 5G Advanced sites, leveraging dynamic CA configuration and traffic-aware scheduling—capabilities aligned with 3GPP Release 18 objectives for AI-assisted RAN optimization.

A notable aspect of the Optus/Ericsson demonstration is the aggregation of 180 MHz of mid-band spectrum across n40 (2.3 GHz) and n78 (3.5 GHz). While not a headline peak-rate milestone, this represents a first for inter-band mid-band NR-CA at this aggregate bandwidth. Mid-band aggregation is particularly significant within the 3.3–4.2 GHz “golden band” range defined in global 5G spectrum harmonization efforts, as it offers an optimal balance between coverage and capacity.

Operationally, this configuration is expected to deliver immediate gains in high-traffic scenarios—such as dense urban environments, transport hubs, and large venues—by increasing available cell throughput and improving user-level quality of service (QoS). Furthermore, the expanded mid-band capacity directly benefits fixed wireless access (FWA) deployments, where sustained throughput and cell-edge performance are critical. Because the demonstrated CA combinations are already supported by commercial UE categories, deployment can proceed without requiring new device classes, accelerating time-to-impact.

Ericsson was recently selected to modernize and expand SoftBank’s core networks, as well as accelerate the Japanese giant’s 5G SA adoption. Expanding on a previous 5G SA deal centered around its radio access network (RAN) products, Ericsson is providing SoftBank with its Core Networks’ portfolio, including a dual-mode 5G Core solution running on Ericsson’s Cloud Native Infrastructure Solution (CNIS).

……………………………………………………………………………………………………………………………..

References:

https://www.ericsson.com/en/press-releases/7/2026/optus-and-ericsson-achieve-world-first-180mhz-across-2-3ghz-and-3-5ghz-5g-standalone-carrier-aggregation-on-live-network-using-commercial-devices-boosting-5g-customer-experience

https://www.telecoms.com/5g-6g/optus-and-ericsson-use-carrier-aggregation-to-notch-up-3-4-gbps-on-a-live-5g-sa-network

https://www.sdxcentral.com/news/ericsson-and-optus-claim-5g-sa-world-first/

https://www.ericsson.com/en/press-releases/7/2026/optus-and-ericsson-trial-ai-to-boost-5g-downlink

https://www.nokia.com/mobile-networks/ran/carrier-aggregation/5g-carrier-aggregation-explained/

China Unicom-Beijing and Huawei build “5.5G network” using 3 component carrier aggregation (3CC)

Nokia, BT Group & Qualcomm achieve enhanced 5G SA downlink speeds using 5G Carrier Aggregation with 5 Component Carriers

Finland’s Elisa, Ericsson and Qualcomm test uplink carrier aggregation on 5G SA network

T-Mobile US, Ericsson, and Qualcomm test 5G carrier aggregation with 6 component carriers

Ericsson and MediaTek set new 5G uplink speed record using Uplink Carrier Aggregation

BT tests 4CC Carrier Aggregation over a standalone 5G network using Nokia equipment

T-Mobile US achieves speeds over 3 Gbps using 5G Carrier Aggregation on its 5G SA network

 

Extreme Networks deploys Wi‑Fi 7 (IEEE 802.11be) at University of Florida’s “Swamp”

Executive Summary:

Extreme Networks, Inc. today announced the deployment of the first Wi‑Fi 7 network in a collegiate stadium at the University of Florida’s Ben Hill Griffin Stadium, also known as “The Swamp.”  The deployment is engineered to support peak densities approaching 90,000 concurrent users, with an emphasis on low-latency, high-throughput connectivity under extreme load conditions. Client devices associate rapidly via optimized authentication and roaming mechanisms, while high-efficiency scheduling enables uninterrupted uplink/downlink performance for real-time video streaming, social media sharing, and in-venue digital services such as mobile ordering.  Wi‑Fi 7 is based on the IEEE 802.11be standard, which was designed to improve ultra-dense venue wireless network performance.

Wi‑Fi 7 (IEEE 802.11be) improves the stadium fan experience by increasing capacity, lowering latency, and making the radio layer more resilient in dense, interference-prone environments. The most relevant features are Multi-Link Operation (MLO) for simultaneous multi-band transmission, 320 MHz channels in 6 GHz, 4K-QAM, puncturing, and enhanced OFDMA/MU-MIMO scheduling.  These features collectively improve spectral efficiency, reduce contention, and sustain deterministic performance in ultra-dense environments. The result is a carrier-grade WLAN fabric that transforms “The Swamp” into a high-capacity, low-latency connectivity domain, establishing a new benchmark for large public venues.

This wireless infrastructure aligns with the University of Florida’s broader stadium modernization program, which includes physical upgrades such as expanded concourses, optimized ingress/egress flows, premium seating enhancements, and next-generation audiovisual systems. The converged digital and physical redesign enables tighter integration between network intelligence and venue operations.

Image Credit: University of Florida

“On game day, The Swamp transforms into one of the most electrifying and densely connected environments in college sports,” said Matt Vincent, Assistant Athletics Director, Information Technology at the University of Florida. “As we continue to invest in the fan experience at Ben Hill Griffin Stadium, adding Wi-Fi 7 allows us to significantly increase capacity while enabling smarter, real-time connectivity that helps everything run smoothly at peak demand. The NIaaS model from Extreme Networks also provides the flexibility to scale as needed without significant upfront investment, allowing our IT team to operate more efficiently while delivering a consistently high-quality digital experience for every fan.”

A New Era of Fan Connectivity:

The new Wi‑Fi 7 (IEEE 802.11be) network from Extreme will deliver:

  • Ultra-fast speeds enabling seamless 4K/8K video streaming, instant social sharing, and real-time stats access.
  • Lower latency for responsive mobile experiences, including in-seat ordering and interactive apps.
  • Improved device capacity supporting tens of thousands of concurrent connections without performance degradation.
  • Consistent coverage across seating bowls, concourses, suites, and outdoor areas.

Key Wi‑Fi 7 (IEEE 802.11be PHY) functions:

  • 320 MHz channels: Double the maximum Wi‑Fi channel width versus Wi‑Fi 6/6E, which increases potential throughput in 6 GHz.
  • 4K-QAM: Packs more bits into each symbol, improving efficiency when signal conditions are good and devices are close to APs, as they often are in under-seat stadium designs.
  • Puncturing: Lets the AP use the clean portion of a wide channel even if part of it is affected by interference, instead of discarding the whole channel.
  • Multi-RU and enhanced OFDMA: Improves how airtime is split among many clients, which is critical when large numbers of fans are active simultaneously.
  • Better MU-MIMO: Helps the AP serve multiple users in parallel, supporting more concurrent sessions without as much contention.
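To see how the 320 MHz channels and 4K-QAM compound, here is a sketch of the standard OFDM data-rate formula applied to an 802.11be (EHT) 320 MHz channel; the subcarrier count, symbol duration, and guard interval are the published EHT PHY parameters, and the widely cited ~46 Gbps theoretical maximum for Wi‑Fi 7 falls out directly:

```python
# Theoretical Wi-Fi 7 (EHT) PHY rate from the standard OFDM rate formula:
#   rate = data_subcarriers * bits_per_symbol * coding_rate / symbol_duration
data_subcarriers = 3920   # data tones in a 320 MHz EHT PPDU
bits_per_symbol = 12      # 4096-QAM (4K-QAM)
coding_rate = 5 / 6       # highest LDPC coding rate
symbol_us = 12.8 + 0.8    # OFDM symbol + shortest guard interval (microseconds)

per_stream_mbps = data_subcarriers * bits_per_symbol * coding_rate / symbol_us
max_rate_gbps = per_stream_mbps * 16 / 1000  # with the maximum 16 spatial streams

print(round(per_stream_mbps), round(max_rate_gbps, 1))  # ~2882 Mbps/stream, ~46.1 Gbps
```

Real stadium deployments see a small fraction of this ceiling per client, of course; the point is that the extra channel width, denser constellation, and added spatial streams multiply together, which is where the per-venue capacity gain comes from.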

Transforming Stadium Operations:

For fans, the visible benefits are faster onboarding, smoother streaming, and more reliable mobile ordering and payments. For operators, the same network supports staff communications, POS systems, video surveillance, and IoT devices such as sensors and digital signage. Analytics from the WLAN can also reveal crowd flow, dwell time, and concession demand, which helps optimize staffing and sponsorship placement.

Beyond fan-facing services, the Wi‑Fi 7 network underpins mission-critical operational workflows. High-reliability connectivity supports real-time staff communications, accelerates point-of-sale (POS) transaction processing with reduced latency and higher transaction concurrency, and enables high-definition video surveillance integrated with AI/ML-based analytics for threat detection and crowd safety.

The network also functions as an IoT aggregation layer, supporting smart sensors, digital signage, environmental monitoring, and automated control systems via secure segmentation and policy enforcement. Through advanced analytics platforms such as Extreme Analytics, operators gain granular, real-time visibility into user behavior and network performance, including crowd flow dynamics, dwell time distributions, application usage patterns, and concession demand signals.

These data-driven insights enable closed-loop optimization of venue operations, from dynamic staffing and queue management to targeted digital engagement and monetization strategies, including context-aware advertising and sponsorship activation. In aggregate, the deployment represents a shift toward an intent-driven, analytics-centric stadium architecture where connectivity, operations, and revenue generation are tightly coupled.

About Extreme Networks:

Extreme Networks, Inc. (EXTR) is a leader in AI-powered cloud networking, focused on delivering simple and secure solutions that help businesses address challenges and enable connections among devices, applications, and users. We push the boundaries of technology, leveraging the powers of artificial intelligence, analytics, and automation. Tens of thousands of customers globally trust our AI-driven cloud networking solutions and industry-leading support to enable businesses to drive value, foster innovation, and overcome extreme challenges.

References:

https://www.businesswire.com/news/home/20260506829623/en/Extreme-Powers-First-Ever-College-Stadium-WiFi-7-Deployment-at-University-of-Floridas-The-Swamp

Research & Markets: WiFi 6E and WiFi 7 Chipset Market Report; Independent Analysis

Wireless Broadband Alliance Report: WiFi 7, converged Wi-Fi and 5G, AI/Cognitive networks, and OpenRoaming

WiFi 7: Backgrounder and CES 2025 Announcements

WiFi 7 and the controversy over 6 GHz unlicensed vs licensed spectrum

Qualcomm FastConnect 7800 combining WiFi 7 and Bluetooth in single chip

MediaTek to expand chipset portfolio to include WiFi7, smart homes, STBs, telematics and IoT

 

Lumen to acquire Alkira to accelerate its push into multi-cloud and data center interconnect services

Lumen Technologies today announced it would buy cloud networking platform company Alkira for $475 million in cash.  The transaction is expected to close in the third quarter of 2026, subject to customary regulatory approvals and closing conditions.  Alkira is a cloud-native, carrier-agnostic networking platform that enables enterprises to design, deploy, and operate connectivity and network services across hybrid and multi-cloud environments. The acquisition is expected to accelerate Lumen’s push into cloud-to-cloud (AKA multi-cloud) and data center interconnect services and expand its total addressable market to about $70 billion through Alkira’s global footprint and cloud-native platform.

Alkira serves enterprise customers across financial services, retail, technology, healthcare, and manufacturing around the world. Customers use Alkira to manage connectivity across AWS, Azure, Google Cloud, and other environments through a single platform built for enterprise-grade security and compliance.  Lumen said the acquisition will “accelerate its vision with a single control plane that orchestrates connectivity beyond our network – across data centers, multiple clouds, partner ecosystems, and on-premises environments – in one unified system.”

The Programmable Network Imperative:

AI is reshaping how enterprises operate and how their networks must perform. More than half of internet traffic today is automated traffic generated by software systems rather than human users. Networks have to be big enough, fast enough, intelligent enough, and secure enough to keep up. Yet many enterprise networks remain static, manually configured, and fragmented across providers. Lumen is working to define a new category of enterprise networking: one built on world-class physical infrastructure, a programmable network, and a connected ecosystem of clouds, applications, and partners.

Quotes:

Kate Johnson, CEO of Lumen Technologies said:

“For decades, networking ran in the background. Today, it’s the nervous system, determining how fast you can move, how much you spend, and whether your AI investments produce value.  With Alkira, Lumen will pair the trusted network for AI with a cloud-native control plane, which will give customers a programmable network designed for the AI era. It’s what the market needs, and it’s what we’re building at Lumen.”

Lumen President and CFO Chris Stansbury said:

“Strategic revenue now represents more than half of our business revenue, and we are pleased with increasing customer interest in our programmable network solutions. The pending Alkira acquisition reflects a disciplined and opportunistic capital allocation strategy that supports our path to revenue growth outlined at Investor Day, while remaining on track to meet full-year guidance.”

In a letter to its customers, Amir Khan, Co-Founder & CEO, Alkira wrote, in part:

Why we’re excited to join forces with Lumen:

“Looking to the future, we’re thrilled about the powerful combination of Alkira’s on-demand network infrastructure and Lumen’s fiber and AI-ready platform. Lumen brings extensive enterprise reach and a richly connected ecosystem of clouds, applications, and partners. Together, our network infrastructure-as-a-service paired with Lumen’s Connectivity Fabric will deliver the market a connectivity solution purpose-built for the AI era. After close and our planned integration with Lumen, Alkira customers will benefit from deeper integration with Lumen’s network and dramatically broader reach. In the meantime, our priority is unwavering stability and continuity for every customer.”

………………………………………………………………………………………………………………………………………………………………………………………………..

Here are more details (courtesy of Reuters):
  • According to Lumen, the deal is unlikely to have a near-term impact on margins, but is expected to boost earnings as the digital platform grows, while improving long-term free cash flow and lowering buildout costs and risk.
  • “The acquisition of Alkira substantially completes the digital platform that we had to build. It accelerates it, it is capex that we do not have to invest now,” CFO Chris Stansbury told Reuters in an interview.
  • Lumen reported revenue of $2.9 billion for the first quarter ended March 31, above analysts’ average estimate of $2.83 billion, according to data compiled by LSEG.
  • “We had a very strong quarter on private connectivity fabric (PCF), because we lit up some State of California business,” Stansbury said, adding that PCF growth was in the mid-single digits and Lumen’s digital offerings were a “big piece” of it.
  • The company’s quarterly adjusted loss came in at 47 cents per share, compared with expectations of a 13-cent per share loss.
  • Lumen raised its annual free cash flow forecast to a range of $1.9 billion to $2.1 billion, from an earlier projection of $1.2 billion to $1.4 billion, as its auditors determined that $729 million of the cash inflows associated with the sale of its consumer fiber operations to AT&T should be classified as operating cash flows.
  • In February, Lumen was selected to expand Anthropic’s fiber network across North America, contributing to its nearly $13 billion in total PCF contracts.

………………………………………………………………………………………………………………………………………………………………………

About Lumen Technologies:

Over the past 30 years, Lumen Technologies (formerly CenturyLink) has transformed from a regional telco into a global enterprise networking provider through major acquisitions. Key acquisitions include Level 3 Communications ($25B, 2017), Qwest Communications (2011), Embarq (2009), Global Crossing (2011), and Savvis (2011), along with smaller technology firms like NetAura and Cognilytics.
Major Acquisitions (CenturyLink/Lumen Era):
    • Level 3 Communications (2017): A transformative $25 billion deal that brought a massive global fiber network, significantly boosting enterprise capabilities.
    • Qwest Communications International (2011): A major acquisition, adding extensive fiber conduit and expanding the network in the western U.S.
    • Global Crossing (2011): Provided a substantial international footprint and a global network.
    • Savvis (2011): Expanded the company’s capabilities in cloud computing and data center hosting.
    • Embarq (2009): The former landline operations of Sprint Nextel, which included metro Ethernet and various data networking services.
    • Centel (1993): An early, foundational acquisition of landline operations. 

Smaller/Strategic Acquisitions:
    • Alkira (2026): Planned acquisition to extend leadership in programmable networking.
    • NetAura (2016): Focused on security services and government customers.
    • Cognilytics (2014): A predictive analytics company.
    • DataGardens, Inc. (2014): A Disaster Recovery as-a-Service (DRaaS) provider.

These acquisitions have significantly broadened Lumen’s fiber assets and shifted the company’s focus away from residential services toward enterprise and AI-driven networking.

………………………………………………………………………………………………………………………………………………………………………

References:

https://ir.lumen.com/news-section/lumen-to-acquire-alkira/default.aspx

https://www.alkira.com/press-release/lumen-to-acquire-alkira-establishing-the-control-plane-for-cloud-connectivity/

https://ir.lumen.com/news-section/news/news-details/2026/Lumen-Technologies-Reports-Solid-First-Quarter-2026-Results/default.aspx

https://www.reuters.com/business/media-telecom/lumen-beats-quarterly-revenue-estimates-acquire-alkira-475-million-2026-05-05/

Lumen launches Multi-Cloud Gateway (MCGW) and expands metro fiber network after selling consumer FTTH business to AT&T

Lumen: “We’re Building the Backbone for the AI Economy” – NaaS platform to be available to more customers

Lumen deploys 400G on a routed optical network to meet AI & cloud bandwidth demands

Lumen and Ciena Transmit 1.2 Tbps Wavelength Service Across 3,050 Kilometers

Analysts weigh in: AT&T in talks to buy Lumen’s consumer fiber unit – Bloomberg

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Microsoft chooses Lumen’s fiber-based Private Connectivity Fabric℠ to expand Microsoft Cloud network capacity in the AI era

Lumen, Google and Microsoft create ExaSwitch™ – a new on-demand, optical networking ecosystem

Dell’Oro: Optical Transport market to hit $17B by 2027; Lumen Technologies 400G wavelength market

Ookla: Starlink a viable competitor for hybrid 5G/NTN services due to network performance improvements and larger coverage area

SpaceX’s Starlink low-Earth orbit (LEO) satellite constellation, which provides high-speed internet service, is increasingly positioning itself as a scalable broadband access platform within the global telecom ecosystem. It now has growing relevance for both retail and enterprise connectivity use cases.

Network performance improvements (see below) have occurred alongside substantial subscriber growth. Starlink’s global user base expanded from approximately 4.6 million at the end of 2024 to over 10 million by early 2026, underscoring the LEO satellite platform’s ability to scale capacity while maintaining service quality.

This evolution is exemplified by T-Mobile’s “SuperBroadband” offering, which integrates 5G fixed wireless access (FWA) with Starlink satellite connectivity to deliver hybrid terrestrial–non-terrestrial network (NTN) solutions for business customers. The viability of such architectures is directly dependent on sustained improvements in satellite network throughput, latency, and service consistency.

Ookla Speedtest® data for the second half of 2025 indicates significant year-over-year improvements in Starlink’s performance across key network metrics. Median download speeds exceeded 100 Mbps in 49 states, compared to 23 states in 2H 2024, reflecting both increased system capacity and improved spectral efficiency. Performance gains were also observed across the lower quartile of users: 25th percentile download speeds improved in 48 states, with the number of states below 50 Mbps declining from eleven to two (Alaska and Florida). This shift indicates not only higher peak throughput but also improved quality of experience (QoE) consistency across the subscriber base.
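
The Ookla figures above are distribution statistics: medians, 25th-percentile (lower-quartile) speeds, and the share of tests clearing a threshold. A minimal Python sketch, using hypothetical per-test speeds rather than actual Ookla measurements, shows how such metrics are computed:

```python
import statistics

# Hypothetical per-test download speeds (Mbps) for one state --
# illustrative values only, not actual Ookla data.
samples = [35, 48, 62, 75, 88, 95, 104, 118, 132, 150, 167, 190]

median_mbps = statistics.median(samples)
p25_mbps = statistics.quantiles(samples, n=4)[0]  # 25th percentile (Q1)
share_at_100 = sum(s >= 100 for s in samples) / len(samples)

print(f"median = {median_mbps:.1f} Mbps, "
      f"25th percentile = {p25_mbps:.2f} Mbps, "
      f"share >= 100 Mbps = {share_at_100:.0%}")
```

Tracking the 25th percentile alongside the median is what distinguishes improved worst-case consistency (QoE across the subscriber base) from gains driven only by the fastest users.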

Latency performance has also trended positively, driven by both constellation densification and architectural enhancements. While Starlink continues to target ~20 ms median latency, the number of states with median multi-server latency below 40 ms increased from one to ten between 2H 2024 and 1H 2025. By 2H 2025, top-performing regions—including New Jersey, Colorado, Arizona, and Washington, D.C.—achieved median latencies of approximately 37 ms, approaching parity with certain terrestrial broadband deployments and enabling latency-sensitive applications.

There has been a rapid expansion of the Starlink constellation and ongoing satellite technology upgrades. As of February 2026, the constellation exceeded 10,000 satellites in orbit, materially increasing aggregate network capacity and reducing cell congestion through greater spatial reuse. The deployment of Generation 3 (V3) satellites—featuring an order-of-magnitude increase (~10×) in downlink capacity relative to prior generations—has further enhanced throughput. Concurrently, upgrades to inter-satellite laser links have enabled more efficient space-based routing, reducing dependency on terrestrial gateway infrastructure, minimizing bottlenecks, and improving end-to-end latency performance.

Uplink performance has also improved materially, with 22 states achieving median upload speeds ≥20 Mbps in 2H 2025, compared to zero states in the prior-year period. This threshold is aligned with the FCC’s current broadband definition, underscoring Starlink’s increasing capability to meet regulatory benchmarks for two-way broadband services. Nebraska, New Jersey, and Minnesota recorded the largest gains, with Nebraska leading overall at 24.94 Mbps median upload throughput.

However, performance gains remain uneven across certain geographies. States including Connecticut, Hawaii, and New Hampshire exhibited relatively modest uplink improvements, suggesting localized constraints related to capacity allocation, gateway distribution, or demand density. These variances highlight the continued importance of targeted constellation scaling and ground segment optimization to ensure uniform service quality.

In Q4, 44.7% of Starlink’s user base achieved the FCC’s 100/20 Mbps broadband benchmark, signaling the provider’s transition from a niche rural solution to a high-performance market disruptor. By scaling its LEO constellation to over 10,000 nodes and deploying higher-throughput payloads, Starlink has successfully optimized spectral efficiency and reduced latency, maintaining QoS even as its global subscriber base scaled to 10 million.

While the U.S. remains Starlink’s primary market, the competitive landscape is shifting. Amazon’s Project Kuiper faces significant deployment headwinds; despite an FCC mandate to orbit 1,618 satellites by July 2026, the company has only deployed roughly 240 units and has petitioned for a two-year extension due to launch capacity constraints. Starlink’s market penetration places legacy GEO operators like Hughesnet and Viasat at a strategic disadvantage. Although these incumbents are leveraging aggressive pricing and CPE (Customer Premises Equipment) refreshes to stem churn, the inherent latency limitations of GEO architecture continue to pose a significant structural barrier to competing with LEO-based performance.

Overall, the data indicates that Starlink is transitioning from a niche rural broadband solution toward a more robust, high-capacity access network capable of supporting hybrid 5G/NTN architectures and enterprise-grade connectivity services.

…………………………………………………………………………………………………………………………………………………………………………………………………………….

Addendum – LEO vs GEO satellite internet:

The technical architectures of Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO) systems are fundamentally defined by their orbital altitude, which dictates their latency, link budget, and network complexity.

  • Orbital Mechanics and Altitude:
    • GEO satellites reside at a fixed altitude of approximately 35,786 km. Their orbital period matches Earth’s rotation, so they appear stationary from the ground, which allows for simple, fixed-point antenna installations.
    • LEO satellites operate at significantly lower altitudes, typically between 160 km and 2,000 km. Because they are closer to Earth, they must travel at much higher velocities (approx. 28,000 km/h) to maintain orbit, completing a full revolution in about 90–128 minutes.

  • Latency and Propagation Delay:
    • GEO: The extreme distance results in a high propagation delay, with a typical round-trip time (RTT) of 500–600 ms. This is unsuitable for real-time applications like VoIP, gaming, or high-frequency trading.
    • LEO: Proximity to Earth reduces latency to 20–50 ms, making the performance comparable to terrestrial fiber.

  • Link Budget and Power Requirements:
    • GEO: High path loss over 36,000 km requires high-power Traveling Wave Tube Amplifiers (TWTAs) and large, high-gain satellite antennas to maintain signal integrity. However, the terminal transmit power required for low-bitrate applications can actually be lower than LEO due to the stable, optimized architecture of legacy GEO MSS (Mobile Satellite Service) systems.
    • LEO: Lower path loss enables the use of lower-power RF systems. However, the rapid movement requires complex phased array antennas at the user terminal to electronically track satellites and manage seamless handoffs between nodes in the constellation.

  • Network Resilience and Capacity:
    • GEO: A single satellite can cover up to 42% of the Earth’s surface, but capacity is centralized; a single point of failure can impact an entire region.
    • LEO: Resilience is achieved through distributed constellations of thousands of satellites. These systems often utilize Intersatellite Links (ISLs)—optical or RF mesh networks in space—to route data between satellites, reducing the need for local ground gateways.
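
The altitude figures above drive the latency and link-budget differences directly. A minimal Python sketch, assuming circular orbits, a satellite directly overhead, and an illustrative Ku-band downlink at 12 GHz, reproduces the orbital periods, minimum bent-pipe round-trip times, and the free-space path loss gap between the two architectures:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3      # mean Earth radius, m
C = 299_792_458.0      # speed of light, m/s

def orbital_period_minutes(altitude_m: float) -> float:
    """Kepler's third law, T = 2*pi*sqrt(a^3/mu), for a circular orbit."""
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60.0

def bent_pipe_rtt_ms(altitude_m: float) -> float:
    """Minimum propagation-only RTT for user -> satellite -> gateway ->
    satellite -> user, assuming the satellite is directly overhead
    (four altitude-length legs); real RTTs add queuing and processing."""
    return 4.0 * altitude_m / C * 1000.0

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

F_KU = 12e9  # illustrative Ku-band downlink frequency (assumed, not sourced)
for label, alt_m in [("LEO (550 km)", 550e3), ("GEO (35,786 km)", 35_786e3)]:
    print(f"{label}: period ~ {orbital_period_minutes(alt_m):.1f} min, "
          f"min RTT ~ {bent_pipe_rtt_ms(alt_m):.1f} ms, "
          f"FSPL @ 12 GHz ~ {fspl_db(alt_m, F_KU):.1f} dB")
```

The roughly 477 ms propagation floor is why GEO RTTs cannot drop much below half a second, while the ~36 dB path-loss advantage at LEO altitude is what permits lower-power RF front ends at the user terminal.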
Comparison Summary

Feature            LEO Architecture                          GEO Architecture
Altitude           160–2,000 km                              ~35,786 km
Latency (RTT)      20–50 ms                                  500–600 ms
Coverage           Regional/global via large constellation   ~1/3 of Earth per satellite
Terminal Type      Advanced tracking / phased array          Fixed parabolic dish
Operational Life   ~5 years (due to atmospheric drag)        ~15 years

…………………………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.ookla.com/articles/starlink-hits-new-us-highs

Ookla: D2D satellite connectivity surged 24.5% during last 9 months; Starlink’s footprint expansion leads the way

US Mobile’s new bundle combines its multi-network mobile service with Starlink residential internet

Starlink doubles subscriber base; expands to 42 new countries, territories & markets

Elon Musk: Starlink could become a global mobile carrier; 2 year timeframe for new smartphones

Direct-to-Device (D2D) satellite network comparison: Starlink V2 (Starlink Mobile) vs “Satellite Connect Europe”

Blue Origin announces TeraWave – satellite internet rival for Starlink and Amazon Leo

Amazon Leo (formerly Project Kuiper) unveils satellite broadband for enterprises; Competitive analysis with Starlink

China ITU filing to put ~200K satellites in low earth orbit while FCC authorizes 7.5K additional Starlink LEO satellites

GEO satellite internet from HughesNet and Viasat can’t compete with LEO Starlink in speed or latency
