Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Introduction:

Across regions from Germany to Mexico, users of artificial intelligence (AI) are less concerned about being replaced by AI than by its propensity to make major mistakes, according to one of the largest global surveys to date on real-world AI usage and perception. These mistakes, known as “AI hallucinations,” are essentially fabricated answers presented as fact, rather than responses that are merely based on outdated information.

The study, conducted by Anthropic using its Claude chatbot, analyzed interviews with more than 80,000 users across 159 countries. The result is one of the most detailed global portraits yet of how AI is being deployed — and how users perceive its risks, benefits, and societal implications.

AI Hallucinations Outrank Job Displacement as Top Concern:

When asked what worries them most about AI, 27% of users cited AI chatbot errors described as “AI hallucinations,” while 22% pointed to job displacement and the loss of human autonomy. About 16% expressed concern that AI could weaken people’s capacity for critical thinking.

Image Credit: JOIST AI

“The AI hallucinations were a disaster. I lost so many hours of work,” said an entrepreneur from Germany. Another participant, a military worker in Mexico, noted the importance of domain knowledge in spotting AI’s flaws: “When I notice AI errors it’s because I’m well versed in the topic . . . but I wouldn’t know if the topic was alien to me, would I?”

An AI Interviewer for Global Insights:

The responses were collected in 70 languages using a novel feedback system that allowed Claude to act as both interviewer and analyst. The platform evaluated qualitative answers, categorizing responses to reveal common themes and linguistic nuances across regions.

“Beyond its scale and linguistic diversity, the project aimed to collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products,” said Deep Ganguli, who leads Anthropic’s societal impacts team and oversaw the research initiative.

Productivity and Personal Growth Drive AI Adoption:

While data quality and reliability drew criticism, the survey also underscored widespread acknowledgment of AI’s positive impact on productivity. Thirty-two percent of respondents said that AI tools had meaningfully improved their output at work.

An entrepreneur in the United Arab Emirates explained, “I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people — I don’t wait for anyone anymore.” Participants from Colombia, Japan, and the United States described similar gains, emphasizing how AI helps them free up time for family, hobbies, and creative exploration.

In total, nearly one in five users (19%) said AI had fallen short of their expectations. Yet usage patterns demonstrate remarkable versatility: respondents reported employing AI as a productivity assistant, educational tutor, design partner, creative collaborator, or even an emotional support companion.

A vivid example came from a soldier in Ukraine, who wrote, “In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life — my AI friends.”

Regional and Economic Divides in AI Optimism:

Regional variation was pronounced. Saffron Huang, the lead researcher on the project, found that respondents in South America, Africa, and across South and Southeast Asia expressed more optimism than users in Europe, the United States, or East Asia.

“The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure,” said Huang. She added that this optimism might reflect a sample skew toward early adopters in developing markets — individuals inclined to view new technologies as opportunities rather than threats.

“They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries,” she said.

According to Anthropic’s researchers, AI’s limited visibility in daily workflows across lower-income economies may explain the difference. “If AI hasn’t visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist,” the team wrote in a companion blog post.

Next Steps: Measuring AI’s Real-World Impact:

Anthropic plans to extend its Claude Interviewer research framework into longitudinal studies that track how AI affects users’ lives over time. “The goal is to better measure both the improvements and the harms — and to use those insights to make systemic refinements,” said Ganguli.

The company’s approach — embedding feedback collection directly into an AI platform — represents an emerging model for data-driven, iterative AI development. By combining self-reported user experience data with large-scale text analytics, Anthropic aims to better understand how its models interact with human needs and constraints.

Industry and Research Community Respond:

The study has drawn attention across the AI community for its unprecedented reach and innovative methodology. Nickey Skarstad, director of product at language-learning company Duolingo, praised the work’s ambition. On LinkedIn, she wrote: “For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we’ve never had access to before.”

Still, several researchers remain cautious about overinterpreting the results. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, expressed reservations on X, saying he was “sceptical” about calling the study a new form of science due to potential selection bias and limitations in survey design. “A human qualitative researcher would take time to build trust with their participants, hold the space for reflection, introspection, contradictions — that’s the whole point of it,” he wrote.

Methodological caveats extend to demographics. Almost half of the survey’s respondents were based in North America or Western Europe, while regions such as Central Asia had only several hundred participants.

Ilan Strauss, an economist and director of the AI Disclosures Project, described the initiative as “an excellent piece of work,” but urged careful interpretation. He noted that the absence of reported confidence intervals — standard practice in survey-based research — makes it difficult to measure uncertainty. Self-reported productivity gains, he added, are inherently prone to bias.

A Global Mirror for Human-AI Relations:

Despite these caveats, the Claude Interviewer study illustrates a broader shift in the relationship between humans and AI systems. As AI technologies proliferate across regions and industries, they are becoming both instruments of empowerment and sources of anxiety — mirroring social, economic, and cultural dynamics in striking ways.

While western economies debate AI-driven labor disruption and ethical alignment, many in emerging markets frame AI as a means of upward mobility and creative expansion. This duality — between apprehension and aspiration — may shape not only AI adoption patterns but also future research and regulatory directions across global contexts.

References:

https://www.ft.com/content/e074d3a9-7fd8-447d-ac0a-e0de756ac5c5?syn-25a6b1a6=1 (PAYWALL)

https://www.joist.ai/post/ai-hallucinations-what-they-are-and-why-it-matters

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Alphabet’s 2026 capex forecast soars; Gemini 3 AI model is a huge success

Analysis & Economic Implications of AI adoption in China

China’s open source AI models to capture a larger share of 2026 global AI market

AWS to deploy AI inference chips from Cerebras in its data centers; Annapurna Labs/Amazon in-house AI silicon products

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

Telco investments in mobile core networks surge 83% in 2025-Q4, but what about ROI?

According to new data from market research firm Omdia (owned by Informa), investments in 5G SA Core networks surged 83% year-over-year in Q4 2025. For OEMs, this uptick suggests a break from the stagnant 5G Standalone (SA) momentum of recent years. Omdia identified North America and EMEA as the primary growth engines for the quarter. “The surge in 5G core investment underscores CSPs’ strategic focus on enabling new revenue streams and digital transformation,” said Roberto Kompany, Principal Analyst, Mobile Infrastructure at Omdia, in a statement. “This momentum is reflected in AT&T’s nationwide 5G SA and RedCap deployment and Verizon’s launch of a new enterprise-grade fixed wireless access (FWA) slice,” he said.

Ookla and Omdia recently noted accelerating 5G SA adoption in Europe, but the region continues to trail global leaders due to its low baseline. Spain remains a standout exception. Telefónica recently achieved a domestic milestone by deploying 5G SA in-building coverage via a Vantage Towers DAS, and has partnered with Airbus Helicopters to integrate 5G SA into manned and unmanned rotary-wing platforms for the Spanish armed forces. Despite broader deployments in the UK and Germany, a significant performance gap remains.

The GCC region (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE) currently delivers median 5G SA download speeds up to five times faster than European averages. This disparity between mature and emerging markets highlights a capability gap rather than a coverage issue. The industry footprint is expanding, with Omdia reporting 88 commercial 5G SA deployments to date—a notable increase from the 72 reported by Dell’Oro in late 2025.

…………………………………………………………………………………………………………………………………………………………………………………………….

While Dell’Oro confirms the 5G SA Core market growth, it emphasized that subscriber migration and active utilization, rather than just “flags in the ground,” are the true long-term drivers for infrastructure spend.  For the first time, the 5G Mobile Core Network (MCN) market accounted for 50 percent share of the total MCN market.

“In 2025, the MCN market recorded its highest year-over-year revenue growth rate since 2014,” stated Dave Bolan, Research Director at Dell’Oro Group. “This was driven by record-setting growth rates in all market segments: 4G MCN (highest since 2019), 5G MCN (highest since 2022), and Voice Core (highest since 2007). 4G MCN gains came from Caribbean and Latin America (CALA) and Europe, Middle East, Africa (EMEA) regions; 5G MCN from all regions; and Voice Core, primarily from Asia Pacific and EMEA regions.

“5G MCNs led the way in 2025 growth, as 5G Standalone (5G SA) networks reached an inflection point and moved towards mass market appeal, as more 5G SA networks expand in population coverage in urban, suburban, and rural areas. Voice Core was the next major contributor to growth in 2025, driven by planned 3G MCN shutdowns, which required upgrades from Circuit Switched Core to IMS Core, and IMS Core modernization to a cloud-native IMS Core for VoNR in 5G SA networks. Meanwhile, 4G MCNs expanded due to subscriber growth in Africa and South America,” added Bolan.

Looking ahead, Omdia forecasts sustained double-digit growth for 5G Core investments through 2026, fueled by the requirement for nationwide service parity and increased network capacity. This outlook favors the leading 5G Core vendors—Huawei, Ericsson, and Nokia—who currently maintain the highest market shares.

……………………………………………………………………………………………………………………………………………………………………………………………

ROI for 5G SA Core Networks?

The return on investment (ROI) for 5G Standalone (SA) core networks is currently at a critical inflection point. While initial years were marked by “bemoaning” slow momentum, 2025 and 2026 have seen a shift from pilot testing to an execution-driven phase with measurable, albeit varied, returns. In the 2025–2026 market, enterprise ROI for 5G SA is primarily driven by three high-growth segments: Private 5G Networks, RedCap IoT, and Network Slicing. While public 5G consumer returns remain steady, these B2B use cases are where Mobile Network Operators (MNOs) are finding the most immediate “killer applications.”

ROI Drivers in 2026:
  • Operational Efficiency: 5G SA cores are cloud-native, allowing for microservices that can be deployed in hours rather than days. This reduces long-term operational costs (OpEx) by automating network functions and improving energy efficiency per gigabyte transmitted.
  • New Revenue Streams: Unlike 5G Non-Standalone (NSA), the SA core enables Network Slicing and Ultra-Reliable Low-Latency Communications (URLLC). These are essential for high-margin B2B services like industrial robotics, emergency services, and “SuperMobile” slicing for enterprises.
  • Monetization of “Capability”: In regions like the GCC (Gulf Cooperation Council), 5G SA delivers speeds up to five times faster than European averages, allowing operators to charge for performance-based tiers rather than just data volume.
  • Consumer Benefits: Early data from the UK indicates that 5G SA can extend device battery life by 11% to 22% due to its unified control plane, creating a tangible value proposition for premium consumer plans.
Current Market Challenges:
  • The “Value Perception Gap”: Despite nationwide rollouts, some operators (like AT&T in late 2025) saw mobile service revenue grow by only 3.4%, barely outpacing inflation.
  • Regional Disparity: ROI is strongest in North America and China, where industrial policy and sovereign wealth have accelerated deployment. In contrast, Europe faces a “regulatory quagmire” and higher costs for removing legacy equipment, slowing its path to profitability.
  • The 6G Factor: Some operators are hesitant to invest billions in a full 5G SA overhaul if the technology is viewed as a “transitional” generation that may be superseded by 6G-ready cores in the late 2020s.
Strategic Outlook for 2026:
Market research from the Dell’Oro Group projects the 5G Mobile Core Network market to grow at a 12% CAGR through 2030, reaching historic highs in 2026. For most operators, the consensus is that 5G SA is a strategic necessity to maintain competitiveness, even if the short-term financial returns are uneven.
In his February 2026 newsletter, Stéphane Téral wrote, “2026 points to a more mixed environment—RAN slightly down, 5G Core continuing to grow—against a backdrop of uncertain capex and an accelerating shift toward opex and software-driven models.”
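Dell’Oro’s 12% CAGR projection compounds straightforwardly. The sketch below uses a hypothetical $10B base-year figure (not a number from the article — only the 12% growth rate is) to show the mechanics.

```python
# Compound annual growth: value_n = base * (1 + CAGR)^n.
# The $10B base-year market size is a hypothetical placeholder, NOT a
# Dell'Oro figure; only the 12% CAGR comes from the article.

def project(base: float, cagr: float, years: int) -> float:
    """Project a market size forward at a constant compound growth rate."""
    return base * (1.0 + cagr) ** years

base_2025 = 10.0  # hypothetical $10B market in 2025
for year in range(2026, 2031):
    size = project(base_2025, 0.12, year - 2025)
    print(f"{year}: ${size:.2f}B")
```

At 12% per year, the hypothetical market grows roughly 76% over the five years to 2030, since 1.12^5 ≈ 1.76.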
…………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.telecoms.com/5g-6g/telcos-spend-more-on-the-core-as-5g-sa-picks-up

https://www.linkedin.com/pulse/february-newsletter-4q25-fy25-wireless-infrastructure-update-ug9ec/

Dell’Oro: Mobile Core Networks +15% in 2025; Ookla: Global Reality Check on 5G SA and 5G Advanced in 2026

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

Téral Research: 5G SA core network deployments accelerate after a very slow start

Analysts: Telco CAPEX crash looks to continue: mobile core network, RAN, and optical all expected to decline

Building and Operating a Cloud Native 5G SA Core Network

MCN Market Roared Back in 2025 With 15 Percent Growth, According to Dell’Oro Group

Analysis of Airspan Networks & Atika Alliance: Resilient, Multi-Domain 5G Mission Critical Connectivity for the Defense Industry

Airspan Networks Holdings LLC (“Airspan”) and ATIKA Venture, S.L. (“Atika”) have entered into a strategic collaboration to advance resilient, multi-domain 5G communications for defense and security operations. The initiative focuses on developing interoperable, deployable network systems optimized for mission-critical connectivity across terrestrial and airborne domains.

The cooperation framework covers both commercial and technical engagements, with initial activities centered in Spain and expansion potential across Europe. The partnership unites Airspan’s portfolio in Open RAN (O-RAN), 5G, and commercial Air-to-Ground (ATG) communications with Atika’s capabilities in tactical 5G deployments, AI-driven network analytics, and secure 5G core integration for defense-grade environments.

Joint programs will address the convergence of deployable 5G infrastructure and mobile ad hoc network (MANET) systems under a unified network orchestration and control layer. The combined architecture aims to provide secure, high-throughput connectivity in dynamic and contested electromagnetic environments. Technical priorities include rapid network deployment, automated resilience management, AI-assisted spectrum optimization, and end-to-end encryption aligned with defense mission profiles.

Image Credit:  Aviat Networks

“Airspan has a strong history of solving advanced connectivity challenges, including low-latency, high-mobility communications through our Air-to-Ground In-Motion 5G platform,” stated Glenn Laxdal, CEO of Airspan. “Through this collaboration with Atika, we aim to adapt our commercial-grade 5G and O-RAN technologies to defense use cases that demand operational resilience and interoperability across domains. Atika’s deep experience in defense communications, combined with their expertise in AI-enabled network intelligence and secure 5G core technologies, represents a substantial complement to our portfolio.”

“The operational landscape increasingly depends on adaptable, intelligent, and sovereign networks,” said Ana Rodríguez Quirós, Managing Director of Atika. “Our partnership with Airspan strengthens our ability to support multi-domain 5G for defense users, extending connectivity beyond satellite and traditional radio systems. Building on our collaboration with the Spanish Army, this alliance demonstrates how advanced 5G network architectures can directly enhance mission readiness, mobility, and overall operational effectiveness.”

About Airspan:

Headquartered in Plano, Texas, Airspan Networks Holdings LLC is an innovative U.S.-based provider of wireless network solutions with a global presence, focused on delivering carrier-grade 5G and advanced wireless connectivity. Airspan’s portfolio spans three core solution areas – in-building, outdoor, and air-to-ground – and includes market-leading products for DAS, Open RAN, and small cells across both public and private network settings. Airspan supports mobile network operators, neutral-host providers, enterprises, public-sector organizations, and other service providers in building reliable, scalable wireless networks that enhance coverage and capacity while enabling fast, efficient deployment.

Visit our website at https://airspan.com/

About Atika:

Atika is a Spanish technology company specializing in advanced tactical communications and deployable 5G networks for defense and security. Its technology focuses on federated architectures, multi-domain connectivity, and network intelligence capabilities designed for real operational environments.

……………………………………………………………………………………………………………………………………………………….

Requirements and Analysis:

1.] Resilient, mission-critical 5G connectivity: URLLC that meets the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation.

2.] Unified network orchestration and control layer: the 5G Service-Based Architecture depends on implementation of the 3GPP Release 17 and 18 specifications.

1.  Enhancements to the 5G NR Physical Layer (PHY) to support Ultra-Reliable Low-Latency Communications (URLLC) in the Radio Access Network (RAN). Basic URLLC support was established in Release 15, but when 3GPP Release 16 was frozen in July 2020, the URLLC-in-the-RAN enhancements had not been completed or performance tested. Hence, the ITU-R M.2150 standard for IMT-2020 RIT/SRITs initially did not meet the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation.

The most significant PHY-layer optimizations were finalized in Release 16 (Phase 2) and Release 17 (Phase 3), with more to come in Release 18 as described below.

a] Release 16 (The “IIoT and URLLC” Phase):
This release introduced foundational PHY improvements to reach “six nines” (99.9999%) reliability. Key features included:

  • New DCI Formats: Compact Downlink Control Information (DCI) formats (e.g., Format 0_2 and 1_2) were added to reduce signaling overhead and improve robustness.
  • Sub-slot HARQ-ACK Feedback: Enabled faster feedback by allowing multiple HARQ-ACK transmissions within a single slot.
  • PUSCH Repetition Type B: Introduced to allow even finer-grained (mini-slot based) repetitions for low-latency uplink, enabling transmissions to cross slot boundaries.
  • Intra-UE Prioritization: Standardized the ability for a device to prioritize a high-priority (URLLC) transmission over a lower-priority (eMBB) one if they overlap in time.
  • Multi-TRP (CoMP): Enhanced support for Transmission and Reception Points (TRPs) to provide spatial diversity, ensuring communication continues if one path is blocked.
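To see why repetition-based features such as PUSCH Repetition Type B matter for “six nines” reliability, consider blind repetitions of the same transport block. This is an illustrative calculation, not a 3GPP figure: it assumes independent error events per transmission and an assumed per-transmission success probability.

```python
# Illustrative only: if each (re)transmission succeeds independently with
# probability p, then k blind repetitions fail only when ALL k fail.
# The p = 0.99 value is an assumption for illustration, not a 3GPP number.

def reliability_with_repetitions(p: float, k: int) -> float:
    """Probability that at least one of k independent repetitions succeeds."""
    return 1.0 - (1.0 - p) ** k

for k in (1, 2, 3):
    print(k, reliability_with_repetitions(0.99, k))
```

With p = 0.99, three repetitions already reach 1 − 0.01³ = 99.9999%, the “six nines” target — which is why mini-slot repetitions that can cross slot boundaries, rather than waiting for the next full slot, are so valuable when reliability must be achieved within a tight latency budget.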

b] Release 17 (The “Further Enhanced URLLC” Phase):
Completed in 2022, this release focused on consolidating these features and extending them to more complex scenarios:

  • URLLC in Unlicensed Spectrum (NR-U): Adapted URLLC PHY procedures for unlicensed bands, addressing regulatory constraints like Listen-Before-Talk (LBT).
  • Improved HARQ-ACK and CSI Reporting: Introduced more efficient and robust feedback mechanisms for better link adaptation.
  • Enhanced Multi-TRP for UL: Further optimized uplink transmissions using multiple TRPs for increased reliability.
Summary of Implemented Rel-17 RAN Enhancements:
  • Feedback Reliability: Improved HARQ-ACK and Channel State Information (CSI) reporting to ensure the network can adapt to rapid channel changes.
  • Traffic Prioritization: Intra-UE prioritization allows URLLC data to “pre-empt” or take priority over standard mobile broadband (eMBB) data within the same device.
  • Power Savings: New mechanisms like Paging Early Indication (PEI) allow URLLC-capable sensors to remain in low-power states longer without sacrificing the ability to wake up instantly for critical data.
c] Current Status:
While the core functional specifications for URLLC in the RAN are considered “complete” as of Release 17, the ecosystem continues to evolve into 3GPP Release 18 (5G-Advanced), which looks at further specialized enhancements for Extended Reality (XR) and Artificial Intelligence (AI).
Modem and Chipset Comparison (Device Side):

5G chipsets/modems and their Rel-17 URLLC features:

  • Qualcomm: World’s first 5G Advanced-ready modem. Supports enhanced HARQ-ACK and CSI feedback for reliability, and AI-based beam management to maintain stable URLLC links.
  • MediaTek (M90): Conforms to Rel-17 standards and aligns with Rel-18 5G-Advanced. Implements Rel-17 Paging Early Indication (PEI) to reduce power while maintaining low-latency readiness.
  • Samsung (Exynos Modem 5300): While primary documentation emphasizes Rel-16, Samsung achieved 1024 QAM (defined in Rel-17) in partnership with Qualcomm. Supports ultra-low latency via FR2 and EN-DC.
Network infrastructure implementation often takes the form of software-defined upgrades to existing massive MIMO and base station hardware.
  • Ericsson: Enabled “Time-Critical Communication” as a software upgrade on its RAN. Its Rel-17 implementation focuses on Hybrid Automatic Repeat Request (HARQ-ACK) enhancements, intra-UE multiplexing, and time-synchronization for Industrial IoT (IIoT).
  • Nokia: Updated its AirScale portfolio to support Rel-17 features, specifically targeting Time-Sensitive Communications (TSC) and deterministic networking for private factory environments.
  • Huawei: Has integrated Rel-17 URLLC enhancements as part of its “5.5G” (5G-Advanced) marketing, focusing on achieving sub-10ms latency for wide-area industrial control and 1ms for local-area automation.

2.  3GPP has specified a unified management and orchestration framework for 5G systems, primarily developed by working group SA5 (Management, Orchestration, and Charging). Starting from Release 15, 3GPP introduced a Service-Based Management Architecture (SBMA), which acts as a unified layer to manage and orchestrate 5G networks, including the Core, RAN, and end-to-end network slices.

Key aspects of the 3GPP unified 5G orchestration and control layer include:
  • Service-Based Management Architecture (SBMA): Instead of legacy, vendor-specific interfaces, 3GPP adopted a service-oriented approach. This architecture uses Management Services (MnS), which provide standardized interfaces for both management and orchestration, facilitating multi-vendor interoperability.
  • End-to-End Slice Management: The 3GPP standards (notably TS 28.530/531/532/533) define a common approach to manage the entire lifecycle of a 5G network slice (creation, activation, supervision, and termination) across RAN, Core, and Transport domains.
  • Network Automation (NWDAF): The Network Data Analytics Function (NWDAF), introduced in Release 15, is a key component for automated control. It collects network data, analyzes it, and feeds back insights to assist in policy management (PCF) and slice selection (NSSF).
  • Intent-Driven Management: 3GPP is enhancing its standards to support intent-driven management, enabling operators to manage network resources based on high-level desired outcomes rather than low-level configuration, which is crucial for autonomous networks.
  • AI/ML Management: Recent releases (18/19) focus on a unified, domain-independent AI/ML management and orchestration framework that supports the full lifecycle of AI/ML models within the 5G system.
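The slice lifecycle that TS 28.530/531/532/533 manage (creation, activation, supervision, termination) can be pictured as a small state machine. The class, state, and method names below are hypothetical illustrations of the concept, not 3GPP-defined APIs.

```python
# Hypothetical sketch of a network-slice lifecycle state machine, following
# the phases summarized above (creation, activation, supervision,
# termination). Names are illustrative, not from the 3GPP specifications.

class NetworkSliceInstance:
    TRANSITIONS = {
        "created": {"activate": "active"},
        "active": {"supervise": "active", "terminate": "terminated"},
        "terminated": {},
    }

    def __init__(self, slice_id: str):
        self.slice_id = slice_id
        self.state = "created"

    def apply(self, action: str) -> str:
        """Apply a lifecycle action; reject transitions the current state forbids."""
        allowed = self.TRANSITIONS[self.state]
        if action not in allowed:
            raise ValueError(f"{action!r} not allowed in state {self.state!r}")
        self.state = allowed[action]
        return self.state

nsi = NetworkSliceInstance("embb-slice-01")
nsi.apply("activate")
nsi.apply("supervise")   # supervision leaves the slice operational
nsi.apply("terminate")
print(nsi.state)         # terminated
```

In the standardized architecture, each transition would be driven through a Management Service (MnS) interface rather than a local method call, which is what makes multi-vendor orchestration possible.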

The latest 3GPP release with finalized specifications for Service-Based Management Architecture (SBMA) is Release 18 (Rel-18), which was functionally frozen in early 2024. Rel-18 includes enhanced study items (FS_eSBMA) focused on supporting management for 5G standalone (SA) and non-standalone (NSA) scenarios and management of Management Functions.

…………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.businesswire.com/news/home/20260319340548/en/Airspan-Networks-and-Atika-Form-Alliance-to-Advance-Resilient-Multi-Domain-5G-Connectivity-for-Defense

SNS Telecom & IT: Mission-Critical Networks a $9.2 Billion Market

3GPP Release 16 5G NR Enhancements for URLLC in the RAN & URLLC in the 5G Core network

3GPP Release 16 Update: 5G Phase 2 (including URLLC) to be completed in June 2020; Mission Critical apps extended

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

https://www.3gpp.org/news-events/3gpp-news/sa5-5g

Revolutionizing 5G Mission Critical Transport Networks (Part 2)

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2020/Documents/S01-1_Requirements%20for%20IMT-2020_Rev.pdf

IMT-2030 (“6G”) Minimum Technology Performance Requirements for Radio Interface Technologies

At its February 2026 meeting in Geneva, ITU-R WP 5D reached agreement on the technical performance requirements for IMT-2030, also known as 6G. Formal approval is expected to follow when the parent group, ITU-R Study Group 5, meets in December 2026.

At its February 2026 meeting, the WP 5D WG Technology Aspects/SWG Radio Aspects discussed all 16 contributions related to that document. It was clarified that these requirements are to be evaluated according to the criteria defined in Reports ITU-R M.[IMT-2030.EVAL] and M.[IMT-2030.SUBMISSION], and that they are used only for the development of IMT-2030 radio interface technologies (RITs/SRITs).

IMPORTANT: As noted many times, 3GPP will specify the 6G Core network and 6G Architecture which will have their own performance requirements.  See References below.

The working party’s draft new report, “Minimum requirements related to technical performance for IMT‑2030 radio interface(s),” outlines 20 technical performance requirements (TPRs). Seven of them are new and specific to 6G. These IMT-2030 technical performance requirements will serve as unified criteria to evaluate the candidate 6G radio interfaces (RITs/SRITs).

Image Credit:  ITU-R

…………………………………………………………………………………………………….

The IMT-2030 Usage Scenarios:

The full set of requirements is based on six proposed usage scenarios for 6G networks:

  • Immersive communication (IC)
  • Hyper reliable and low‑latency communication (HRLLC)
  • Massive communication (MC)
  • Ubiquitous connectivity (UC)
  • Artificial intelligence (AI) and communication (AIAC)
  • Integrated sensing and communication (ISAC)

The IMT-2030 framework:

The newly defined 6G requirements build on the IMT‑2030 framework that ITU first published in December 2023 as a globally harmonized foundation for next‑generation connectivity (Recommendation ITU‑R M.2160). This recommendation also defines the overarching principles for future network design, notably:

  • Sustainability.
  • Security and resilience.
  • Connecting the unconnected.
  • Ubiquitous intelligence.

ITU – the United Nations agency for digital technologies – aims for the 6th generation of mobile communications (6G) to enable affordable, resilient, energy‑efficient networks for health, education, agriculture and disaster response. Advanced networks also present a way to close the persistent digital divide that today leaves many people in low-income countries behind.

This work to date provides a unified technical foundation to evaluate the candidate radio interfaces for IMT-2030 and guide the evolution of global 6G research and standardization.

Groundwork for future resilience:

IMT‑2030 lays the groundwork for affordable, high‑quality connectivity to remote and underserved communities. By setting globally harmonized performance requirements, it aims to ensure access for everyone, make communication systems more resilient, support sustainability and implement energy‑efficient technologies. ITU aims for innovative 6G services to deliver broad social and economic benefits.

The 20 requirements set out in the new draft report are meant to provide a consistent basis for specification and evaluation. While the requirements establish minimum performance levels, they do not restrict implementation approaches or guarantee real-world deployment performance.

They reflect ongoing global research and technology activities and should pave the way for concrete IMT-2030 evaluation guidelines, the next step in ITU’s global standardization process for 6G.

Accordingly, the IMT-2030 draft report has been submitted for approval to ITU‑R Study Group 5, responsible for terrestrial radiocommunication services, at a meeting scheduled for 1 December 2026.

Until then, the draft remains available exclusively to ITU‑R members directly involved in its finalization and approval. You need a TIES login account to access ITU documents.

………………………………………………………………………………………………………………………..

About ITU-R Study Group 5:

ITU-R Study Group 5 is responsible for Terrestrial Services, including Fixed Wireless, Mobile (land, maritime and aeronautical), radiodetermination service as well as amateur and amateur-satellite services and the development of international standards, regulation and guidelines for these systems. The group’s work encompasses a wide range of topics, including spectrum management, network architecture, and radio interface technologies.

About ITU-R Working Party 5D:

ITU-R Working Party 5D is responsible for the development and harmonization of international standards for International Mobile Telecommunications (IMT) systems, including the latest IMT-2030 (6G) technology. The working party’s efforts ensure interoperability and global compatibility for wireless communication systems.

Further information on IMT‑2030 and related activities is available on the portal for IMT towards 2030 and beyond.

………………………………………………………………………………………………………..

References:

IMT-2030: Technical requirements for the 6G future

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/Pages/default.aspx

Roles of 3GPP and ITU-R WP 5D in the IMT 2030/6G standards process

ITU-R M.[IMT-2030.EVAL] & ITU-R M.[IMT-2030.SUBMISSION] reports: Evaluation & Submission Guidelines for 6G RIT/SRITs (6G)

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Development of “IMT Vision for 2030 and beyond” from ITU-R WP 5D

 

 

 

AWS to deploy AI inference chips from Cerebras in its data centers; Anapurna Labs/Amazon in-house AI silicon products

Amazon Web Services (AWS) announced it plans to integrate AI processors from Cerebras Systems [1.]  into its data centers, signaling growing confidence in the AI-focused semiconductor startup. Under a new multiyear partnership announced Friday, AWS will deploy Cerebras’s Wafer-Scale Engine (WSE) to accelerate inference workloads—the stage of AI operations where models generate responses to user queries. Financial details of the agreement were not disclosed.

Note 1.  Founded in 2015 and headquartered in Sunnyvale, CA, Cerebras claims to have the world’s fastest AI inference and training platform.

The collaboration reflects a significant realignment in compute infrastructure strategies across the AI ecosystem. While initial industry focus centered on model training, the rapid expansion of deployed AI services is driving demand for optimized inference performance. Traditional GPUs, though unmatched for training, can be suboptimal for inference scenarios that require ultra-low latency and high throughput. Cloud and AI platform providers are therefore diversifying their silicon portfolios to better match workload profiles and to scale capacity efficiently.

AWS, the world’s largest cloud infrastructure provider, has traditionally relied on its in-house semiconductor division, Annapurna Labs, for custom chip design. Annapurna’s Trainium processors compete with GPUs from major suppliers such as Nvidia and AMD, offering cost and performance advantages for AI training workloads. The new partnership introduces Cerebras technology into AWS infrastructure, where it will work alongside Trainium to enhance large-scale inference capabilities.

Cerebras, best known for its wafer-scale architecture, markets its WSE processors as a high-speed inference platform capable of executing the decode phase of generative AI processing—where text, images, or other outputs are generated—at up to 25 times the speed of conventional GPU solutions. The company, valued at approximately $23 billion following a $1 billion funding round in February, has attracted backing from Fidelity, Benchmark, Tiger Global, Atreides, and Coatue.

The Cerebras deal underscores a major shift in the market for computing power. Image Credit: Rebecca Lewington/Cerebras Systems/Reuters

The AWS collaboration follows Cerebras’s major compute partnership with OpenAI, which reportedly involves deploying up to 750 MW of computing capacity powered by its chips. AWS and Cerebras will position their joint offering as a premium cloud inference solution, targeting enterprise AI developers requiring high-performance and scalable compute.

“The scale of AI demand is shifting from model creation to global deployment,” said Andrew Feldman, CEO of Cerebras. “Working with AWS aligns our technology with the industry’s largest cloud, giving us reach to a broad enterprise and developer base. If you want slow inference, there will be cheaper ways to go,” Feldman said. “But if you want fast tokens, if speed matters to you, if you’re doing coding or agentic work, not only are we the absolute fastest, but we intend to set the bar. We’re in this to win it.”

AWS and Cerebras will support both aggregated and disaggregated configurations. Disaggregated serving, which runs the prefill and decode phases on separate hardware, is ideal for large, stable workloads. Most customers, however, run a mix of workloads with different prefill/decode ratios, for which the traditional aggregated approach remains the better fit. The start-up expects most customers will want access to both, with the ability to route each workload to whichever configuration serves it best.
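The routing idea can be sketched as follows; the job fields, threshold, and decision rule are illustrative assumptions, not an AWS or Cerebras interface:

```python
# Sketch of routing inference jobs between aggregated and disaggregated
# serving pools based on workload shape. Purely illustrative: the fields
# and the decision rule are assumptions, not a real AWS/Cerebras API.
from dataclasses import dataclass

@dataclass
class InferenceJob:
    prefill_tokens: int   # prompt tokens processed before generation starts
    decode_tokens: int    # tokens generated during the decode phase
    steady_state: bool    # large, stable workload vs. bursty mixed traffic

def choose_pool(job: InferenceJob) -> str:
    # Disaggregated serving splits prefill and decode onto separate hardware,
    # which pays off for large, predictable, decode-heavy workloads. Bursty
    # traffic with varying prefill/decode ratios is easier to serve aggregated.
    if job.steady_state and job.decode_tokens >= job.prefill_tokens:
        return "disaggregated"
    return "aggregated"

print(choose_pool(InferenceJob(prefill_tokens=100, decode_tokens=500, steady_state=True)))
```

In practice a scheduler would weigh many more signals (queue depth, latency SLOs, hardware availability), but the prefill/decode ratio is the axis the companies highlight.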

The move intensifies competition in the inference silicon segment, where Nvidia faces growing pressure from purpose-built processor architectures such as Cerebras’s WSE and other emerging alternatives. Nvidia, which recently announced a $20 billion licensing deal with Groq and plans to unveil a new inference-optimized platform, remains the dominant supplier but now contends with an accelerating wave of specialization across the AI compute stack.

AWS vice president and Annapurna Labs co-founder Nafea Bshara emphasized the company’s goal of offering flexible performance tiers. “Our job is to push the speed and lower the price,” he said, noting that AWS will continue to offer cost-optimized Trainium-only options alongside high-performance Cerebras-Trainium configurations.

………………………………………………………………………………………………………………………………………………………………………………………………….

Amazon’s Internally Designed AI Silicon:

Amazon has built a fairly broad internal AI-oriented silicon portfolio through Annapurna Labs, primarily for AWS:

  • Inferentia (Inferentia, Inferentia2) – Custom machine learning accelerators designed for high-throughput, low-cost inference at cloud scale. These power many AWS inference instances and are positioned as an alternative to Nvidia GPUs for production model serving.

  • Trainium (Trainium, Trainium2, Trainium3) – AI training accelerators optimized for large-scale model training (including frontier and foundation models), with Trainium2 and Trainium3 as newer generations offering materially higher performance and better $/compute than the first generation. These are central to projects such as the Rainier supercomputer for Anthropic.

  • Graviton (Graviton, Graviton2/3/4) – Arm-based general-purpose CPUs used heavily across EC2, increasingly in AI-adjacent roles (pre/post-processing, orchestration, model-serving microservices) and as part of cost-optimized AI stacks, even though they are not dedicated accelerators.

  • Nitro system – While not an AI accelerator per se, the Nitro family (offload cards and system) is an internally developed data-plane and virtualization offload architecture that underpins EC2 and works in tandem with Graviton, Inferentia, and Trainium to free CPU cycles and improve I/O for AI/ML workloads.

All of these are designed and iterated internally by Annapurna Labs for exclusive use in AWS data centers, then exposed to customers via AWS services rather than as standalone merchant silicon.

Amazon’s Annapurna Labs is an internal chip design group that has become a core strategic asset for AWS, especially for custom data center and AI silicon.

Origins and acquisition:

  • Annapurna Labs is an Israeli chip design startup founded in 2011 by semiconductor veterans of Intel and Broadcom, including Avigdor Willenz and Nafea Bshara.

  • “When we talked with market sources and consulted with experts in the fields of data and servers, at that time only Amazon had a holistic vision and the ability to execute on a large scale,” recalls Bshara about the start of the relationship with Amazon. “We were prepared to build the technology and at the same time were open to working with startups. From there we began a journey together with many meetings and shared thinking, among others with James Hamilton (formerly a database product architect at Microsoft, now an AWS SVP), and within six months we found ourselves inside Amazon.”

  • Amazon began working with the company around 2013 and acquired it in 2015 for an estimated $350–$400 million.

  • Before the deal, Annapurna was in stealth, focusing on low‑power networking and server chips to improve data center efficiency.

Role inside Amazon and AWS:

  • Post‑acquisition, Annapurna was folded into AWS as a specialist microelectronics and custom silicon group, designing chips to reduce cost and power per unit of compute.

  • The group underpins several key AWS technologies: the Nitro system for offloading virtualization and I/O, Arm‑based Graviton CPUs for general compute, and Trainium and Inferentia accelerators for AI training and inference.

  • These chips let AWS optimize performance per watt and per dollar versus x86 servers and third‑party accelerators, improving margins and competitive pricing.

Key products and architectures:

  • Nitro: A combination of custom hardware and software that offloads storage, networking, and security functions from the host CPU, increasing tenant isolation and freeing CPU cycles for workloads.

  • Graviton: A family of Arm‑based server CPUs, first launched in 2018; Graviton is now used by most AWS customers for general cloud infrastructure workloads due to better price‑performance and energy efficiency.

  • Inferentia and Trainium: Custom accelerators designed by Annapurna for machine learning inference (Inferentia) and training (Trainium), intended to reduce AWS’s dependence on high‑priced Nvidia GPUs for AI workloads.

Strategic importance and AI focus:

  • Annapurna’s work is central to Amazon’s strategy of vertical integration in the cloud: owning the silicon stack as much as the software and services.

  • The group designs chips that power Amazon’s AI infrastructure, including systems used both by internal teams and external customers such as Anthropic, for which AWS is the primary cloud and silicon provider.

  • Amazon and Anthropic are collaborating on “Project Rainier,” a massive supercomputer built around hundreds of thousands of Annapurna‑designed Trainium2 chips, targeting more than five times the compute used to train current frontier models.

Organization, footprint, and industry impact:

  • Annapurna Labs maintains a significant presence in Israel, employing hundreds of engineers focused on advanced AI and networking processors for AWS.

  • It also operates major engineering hubs such as an Austin, Texas lab where advanced semiconductors and AI systems are designed and tested.

  • Analysts often describe the acquisition as one of Amazon’s most successful, arguing that Annapurna’s custom silicon is a “secret sauce” that helps AWS compete with Microsoft, Google, and others on performance, cost, and energy efficiency.

…………………………………………………………………………………………………………………………………………………………..

References:

https://www.cerebras.ai/company

https://www.cerebras.ai/blog/cerebras-is-coming-to-aws

https://www.wsj.com/tech/amazon-announces-inference-chips-deal-with-cerebras-109ecd31

https://www.marketwatch.com/story/how-the-ceo-of-this-upstart-nvidia-rival-hopes-to-seize-on-the-lucrative-market-for-ai-chips-d5ccdab0

https://en.globes.co.il/en/article-nafea-bshara-the-israeli-behind-amazons-graviton-chip-1001420744

Intel and AI chip startup SambaNova partner; SN50 AI inferencing chip max speed said to be 5X faster than competitive AI chips

Custom AI Chips: Powering the next wave of Intelligent Computing

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

OpenAI and Broadcom in $10B deal to make custom AI chips

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

2026 Consumer Electronics Show Preview: smartphones, AI in devices/appliances and advanced semiconductor chips

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Google announces Gemini: it’s most powerful AI model, powered by TPU chips

 

Analysis: Equinix’s “Distributed AI Hub” vs competitive global carrier neutral offerings

Backgrounder:

As AI workloads undergo geographic decentralization across a fragmented hybrid-cloud ecosystem, enterprises face significant headwinds in maintaining deterministic performance, data sovereignty, and OpEx predictability. As AI training and autonomous agent workloads drive demand for high-bandwidth, low-latency multi-cloud architectures, the focus shifts to alleviating pressure on the backbone and access networks through densified, software-defined connectivity. Central cloud infrastructure is teetering under the weight of spiraling workloads and is being distributed east-west into regional backwaters in search of power, and north-south into metro centers and enterprise premises in an urgent quest to actually put AI to work. Enterprises are suddenly stitching together training in one cloud, inference in another, and agents at the edge, all without breaking performance budgets. That is why networks and high-speed connectivity matter more than ever.

………………………………………………………………………………………………………………………………………………………………………………………

Equinix Carrier Neutral Hubs:

Equinix is positioning its carrier-neutral interconnection hubs as the strategic solution to mitigate these challenges. By optimizing last-mile backhaul and orchestrating distributed infrastructure, the platform enables localized inference at the edge. In January 2026, Equinix announced a last-mile access service (Equinix Fabric Intelligence), and yesterday (March 11, 2026) the company announced its Distributed AI Hub, which provides a single, unified framework for enterprises to connect, secure and simplify their increasingly complex and distributed AI ecosystems.

The Hub is a neutral location that allows enterprises to discover, connect to and consume AI infrastructure providers—including model companies, GPU clouds, data platforms, network and security services, and AI frameworks—all through private, low-latency connectivity at Equinix’s 280 high performance data centers.

“Enterprises are racing to deploy agentic AI but are finding that their existing infrastructure was never designed for the complexities of distributed intelligence,” said Mary Johnston Turner, Research Vice President, Digital Infrastructure Strategies at IDC. “By 2027, IDC expects 80% of enterprises will deploy distributed edge infrastructure to improve the latency and responsiveness of AI applications. Enterprises will need solutions like Equinix’s Distributed AI Hub to enable them to unify these disparate systems.”

To realize the full potential of agentic AI, enterprises must converge inherently distributed workflows—spanning model training and inferencing workloads dispersed across public clouds, private data centers, edge nodes, and an expanding set of specialized “neocloud” platforms. Each environment brings distinct latency, performance, and data sovereignty constraints. This operational fragmentation can impede innovation velocity, complicate governance, and make it exceedingly difficult to execute AI workloads in proximity to the data sources that drive them, thereby diminishing both business impact and user experience.

Equinix is addressing this challenge with the launch of the Distributed AI Hub, an evolution of its global digital infrastructure platform. The Hub provides a unified, vendor-neutral framework that federates data, compute, cloud access, and AI ecosystem partners across geographically distributed domains. It allows enterprises to deploy and orchestrate AI workloads where they achieve optimal performance—without re-architecting applications or migrating data across incompatible environments. Through consistent governance, secure interconnection, and high-performance data mobility, the Hub simplifies how organizations connect models, replicate datasets, execute inferencing, and manage multi-environment AI operations. Unlike hyperscaler AI marketplaces that prioritize vertically integrated ecosystems, the Equinix Distributed AI Hub is open by design, enabling customers to assemble best-of-breed AI stacks tailored to workload and compliance requirements.
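A toy sketch of the placement idea described above: filter candidate sites by a data-sovereignty constraint, then pick the lowest-latency survivor. The site list, field names, and selection rule are hypothetical illustrations, not an Equinix API:

```python
# Toy sketch of "run the workload where it achieves optimal performance":
# enforce a data-sovereignty region first, then minimize latency.
# Site data and field names are hypothetical, not an Equinix interface.
sites = [
    {"name": "metro-a", "region": "EU", "latency_ms": 4},
    {"name": "metro-b", "region": "EU", "latency_ms": 9},
    {"name": "metro-c", "region": "US", "latency_ms": 2},
]

def place_workload(sites: list[dict], required_region: str) -> str:
    """Return the lowest-latency site that satisfies the sovereignty constraint."""
    eligible = [s for s in sites if s["region"] == required_region]
    if not eligible:
        raise ValueError(f"no site satisfies sovereignty region {required_region}")
    return min(eligible, key=lambda s: s["latency_ms"])["name"]

print(place_workload(sites, "EU"))  # → metro-a (metro-c is faster but out of region)
```

The point of the sketch is the ordering of constraints: governance (where the data may live) is a hard filter, while performance is optimized only among compliant locations.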

“AI isn’t centralized—but the right infrastructure can make it run as seamlessly as if it were,” said Jon Lin, Chief Business Officer at Equinix. “Equinix is the neutral ground where AI, cloud and networking infrastructure converge. We are providing enterprises the freedom to build and scale AI wherever their data, partners, and teams already live, while running inference close to the data and users that depend on it, without the operational drag that comes from stitching together complex, distributed systems. With our Distributed AI Hub, we’re giving customers a simpler, smarter, and far more connected way to run and scale their AI today. We are building one of the most expansive and neutral AI ecosystems.”

Image Credit: Equinix

…………………………………………………………………………………………………………………………………………………………………………….

The Hub’s first major integration is with Palo Alto Networks, extending AI-driven security into the distributed enterprise. The collaboration combines Equinix’s global interconnection fabric and distributed data infrastructure with Palo Alto Networks Prisma AIRS, delivering real-time protection for autonomous agents and model interactions across external data sources and tools. This integration gives enterprises unified visibility and policy control across the entire AI lifecycle—from data ingestion to inference execution—irrespective of deployment location. Furthermore, Prisma AIRS will be natively available through Equinix Network Edge, enabling centralized management of AI-centric security services at the digital edge, closer to users, clouds, and critical workloads.

“The conversation around distributed AI is finally getting real,” said Lloyd Taylor, CTO/CISO, at Alembic. “It’s more than compute and data, it’s controlling where the data lives and how the compute runs. Equinix is framing that problem the right way, by bringing placement, governance, and predictable performance into the same architecture with the Distributed AI Hub. This is what makes distributed AI viable at enterprise scale.”

The Distributed AI Hub is available globally at 280 Equinix data center locations, enabling enterprises to deploy consistent AI infrastructure patterns worldwide. Equinix will be participating at NVIDIA GTC—located at Booth 1030—and will be previewing the Hub.

About Equinix:

Equinix, Inc. (Nasdaq: EQIX) shortens the path to boundless connectivity anywhere in the world. Its digital infrastructure, data center footprint and interconnected ecosystems empower innovations that enhance our work, life and planet. Equinix connects economies, countries, organizations and communities, delivering seamless digital experiences and cutting-edge AI—quickly, efficiently and everywhere.

……………………………………………………………………………………………………………………………………………………………………………….

Competitive Analysis (Source: Perplexity.ai):

Equinix is the largest and most mature carrier‑neutral interconnection hub globally, but it faces serious competition at several layers of the stack.

Global carrier‑neutral players:

Major global and multi‑regional competitors offering carrier‑neutral colocation and interconnection include:

  • Digital Realty (PlatformDIGITAL, Interconnection Fabric, strong global footprint, direct cloud on‑ramps).

  • NTT Global Data Centers.

  • CyrusOne, QTS, GDS, Telehouse/KDDI, CoreSite, Flexential, Cologix and others in specific metros/regions.

Selected ecosystem comparison:

| Provider | Positioning vs Equinix | Geographic strength | Interconnection focus |
|---|---|---|---|
| Digital Realty | Closest global rival in scale and cloud access. | North America, Europe, APAC. | PlatformDIGITAL, interconnection fabric, “data gravity” narrative. |
| NTT GDC | Large carrier‑neutral platform, often telco‑adjacent. | Strong in Japan and APAC, expanding globally. | Cloud on‑ramps, network‑dense campuses in key metros. |
| CyrusOne | Hyperscale and enterprise colocation, carrier‑neutral. | North America and Europe. | High‑density interconnection, hyperscale campuses. |
| CoreSite | Cloud‑ and network‑dense US metros. | US only, key peering hubs. | Open Cloud Exchange for multi‑cloud connectivity. |
| Cologix / Flexential / phoenixNAP | Regional network‑neutral interconnection platforms. | Primarily North America, secondary/edge markets. | Dense carrier mix, regional cloud and IX connectivity. |

How Equinix is differentiated:

Analysts typically see Equinix’s moat in: dense metro ecosystems, breadth of on‑net networks and clouds, and the maturity of its software‑defined interconnection (Fabric) and edge services, rather than in being the only carrier‑neutral hub. Its main strategic challenge is staying ahead of peers like Digital Realty, NTT, and CyrusOne as they build similar fabrics around large, carrier‑neutral campuses and hyperscale‑adjacent deployments.

…………………………………………………………………………………………………………………………………………………………………………………………………

References:

https://newsroom.equinix.com/2026-03-11-Equinix-Unveils-the-Distributed-AI-Hub-to-Simplify-and-Secure-Enterprise-AI-Infrastructure

Agents of chaos – Equinix proposes metro fix for the new AI sprawl

Orange Telco Cloud to use Equinix Bare Metal to deliver virtual services with <10 ms latency

Equinix Partners with Nokia to Increase 5G and Edge Ecosystem Innovation

Equinix and Vodafone to Build Digital Subsea Cable Hub in Genoa, Italy

Equinix to deploy Nokia’s IP/MPLS network infrastructure for its global data center interconnection services

Synergy Research: Strong demand for Colocation with Equinix, Digital Realty and NTT top providers

CoreSite Enables 50G Multi-cloud Networking with Enhanced Virtual Connections to Oracle Cloud Infrastructure FastConnect

Arrcus MCN solution now part of CoreSite’s Open Cloud Exchange®

Global Data Center Colocation Market Size forecast = $131.8 Billion by 2030 at a 14.2% CAGR

Initiatives and Analysis: Nokia focuses on data centers as its top growth market

AWS deployed in Digital Realty Data Centers at 100Gbps & for Bell Canada’s 5G Edge Computing

TMR: Data Center Networking Market sees shift to user-centric & data-oriented business + CoreSite DC Tour

Analysis: AT&T’s $250B network investment to advance U.S. connectivity

Rapid adoption of artificial intelligence (AI), cloud computing and IoT connected devices has prompted telecom operators to invest heavily in fiber and 5G networks. In line with that trend, AT&T announced it will spend more than $250 billion over five years in the U.S. to expand its network and make deals to boost wireless and fiber connectivity.

“Today, we’re committing more than $250 billion to increase U.S. connectivity competitiveness and expand access to AT&T’s leading fiber and wireless networks – the best way to get on the internet,” said John Stankey, Chairman and CEO of AT&T. “Current Federal telecommunications policy is as strong as I’ve seen in my career, making our commitment to invest possible. We look forward to serving American communities and businesses for the next 150 years.”

Ubiquitous networks that provide reliable, always-on connectivity are the critical conduits that make Artificial Intelligence, autonomous technologies, cloud computing, and data-heavy digital services possible. AT&T’s investment will expand future-ready fiber and wireless services, modernize critical infrastructure, and strengthen network resilience and security to support communities and the economy for decades to come, including:

  • Accelerating the deployment of fiber, 5G home internet, wireless and satellite across urban, suburban, and rural America.
    • AT&T’s satellite collaboration with AST SpaceMobile will extend coverage into remote areas.
  • Strengthening FirstNet®, Built with AT&T – the nation’s first and only network built with and for first responders – and modernizing vital infrastructure for public safety and resilience
    • With AT&T Dynamic Defense, we deliver the only network connectivity with comprehensive built-in security controls.
  • Laying the groundwork for the next wave of American technological leadership through smart infrastructure and network optimization.
    • AT&T’s Wi-Fi Personalization provides a tailored home experience that matches our customers’ daily habits, and AT&T Turbo Live allows customers to boost their data experience at live events to get the reliable connection they want, even in crowded venues.

AT&T says they will continue investing in technologies that advance and protect the connected economy, including:

  • Scaling network security and AI-driven threat intelligence.
  • Enabling the next wave of American invention across industries by opening up our network to allow new entrants to innovate and supply telecommunications equipment.
  • Strengthening collaboration with public-sector partners to support national resilience and first responders.
  • Supporting America’s leadership in global technology and innovation.

With this commitment, AT&T says it will keep building the network Americans rely on, whether delivered by fiber, wireless, or satellite, so more people and businesses have access to fast, reliable connectivity. It’s the foundation for what’s next, from remote care, to autonomous vehicles to AI, and it will help keep America connected for the next 150 years.

AT&T store, building exterior, Fifth Avenue, New York City, New York, USA.  Photo by: Plexi Images/GHI/Universal Images Group via Getty Images

…………………………………………………………………………………………………………………

Comment and Analysis:

The spending push comes alongside federal broadband initiatives created under the 2021 infrastructure law, including the $42.5 billion Broadband Equity, Access, and Deployment (BEAD) Program. However, the rollout of funding has faced delays due to a combination of implementation challenges and policy changes under the Trump administration. AT&T has secured the largest share of BEAD funding for fiber build‑outs, winning about $1.06 billion, according to New Street Research.
………………………………………………………………………………………………………………………………………………………………………………..
Fiber broadband has become a key battleground between carriers and cable providers as they compete for home internet customers:
  • Comcast is defending its subscriber base while undergoing strategic changes. The company on Tuesday began a $5.9 million network‑expansion project in Greater Hartford and Middletown, set to finish later this year.
  • Verizon has accelerated its fixed‑broadband expansion after completing its acquisition of Frontier Communications earlier this year and is rolling out limited‑time discounted bundles to attract customers.
Investment Comparison (2026 Forecasts):

| Feature | AT&T | Verizon | T-Mobile |
|---|---|---|---|
| Headline Commitment | $250 Billion (5-Year Total) | $16.0–$16.5 Billion (Annual) | ~$10 Billion (Annual) |
| Estimated Annual Capex | $23–$24 Billion | $16.0–$16.5 Billion | ~$10 Billion |
| Key Strategic Focus | Aggressive fiber-to-the-home (FTTH) and 5G/6G | Network “densification,” software, and Frontier integration | 5G Advanced features and rural expansion via BEAD |
| Spending Trend | Increasing: doubling previous capex levels | Decreasing: down from $17B in 2025 to improve margins | Disciplined: focusing on cash generation over heavy builds |

Strategic Divergence:
  • AT&T’s “All-In” Approach: AT&T is significantly outspending its rivals to “build something more valuable tomorrow”. Its $250 billion figure reflects a broad “inclusive spend” that covers fiber expansion, 5G upgrades, and recent spectrum acquisitions like the $23 billion EchoStar deal.
  • Verizon’s Fiscally Responsible Pivot: Under new CEO Dan Schulman, Verizon is reducing its capex for 2026. The company is transitioning from a “coverage” phase to a “densification” and software-focused phase, as its C-band deployment is now 90% complete. Verizon is prioritizing free cash flow and dividend sustainability over aggressive new builds.
  • T-Mobile’s Capital Efficiency: T-Mobile is maintaining the lowest capex among the “Big Three,” focusing instead on shareholder returns (with an authorized $14.6 billion for 2026). Its growth strategy has shifted toward upselling customers to higher-rate plans (“more for more”) and leveraging government funding, like the BEAD program, for rural coverage rather than pure internal spending.
Market Implications:
  • Analysts at Recon Analytics note that AT&T’s proposed annual spend ($50B if divided evenly, though actual capex guidance is closer to $24B) is roughly 3x Verizon and 5x T-Mobile.
  • While AT&T bets on long-term infrastructure dominance, the high debt load ($118.4B) remains a risk compared to Verizon’s clearer deleveraging path.

………………………………………………………………………………………………………………………………………………………………………

Details Lacking:

AT&T’s $250 billion spend announcement through 2030 lacks granular details on several fronts, making it more of a high-level commitment than a fully specified plan.

  • AT&T reported capital investment of $22B for full-year 2025, and its outlook for the 2026-2028 period puts capital investment at $23B-to-$24B per annum. That is only about half the annualized sum implied by the new commitment ($250B over five years works out to $50B per year).
  • AT&T did not state how much of the $250B to be spent would be on network infrastructure build-outs vs deals with other companies (e.g. AST Space Mobile) vs money spent on new hires. The AT&T press release (see Reference #1 below) says the telco will be recruiting and training new technicians to build and maintain those networks. The plan includes “hiring thousands of technicians in 2026 alone.”
  • More importantly, there were no network coverage targets announced or new technologies to be deployed, e.g. 5G Advanced, 6G, 50G PON, etc.
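The capex gap noted in the first bullet above is simple arithmetic, sketched here with the figures from the announcement and AT&T’s published guidance:

```python
# AT&T's announced commitment vs. its stated capital-investment guidance.
total_commitment_b = 250.0   # $B over five years, per the announcement
years = 5
annualized_b = total_commitment_b / years   # $50B/year if spread evenly
capex_outlook_b = 23.5                      # midpoint of the $23B-$24B guidance
share = capex_outlook_b / annualized_b      # guidance covers only ~47% of the run rate
print(annualized_b, round(share, 2))  # → 50.0 0.47
```

That roughly 2x gap is why the remainder presumably comes from items beyond network capex, such as spectrum deals, partnerships, and hiring.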

Coverage Targets:

The announcement targets “unmatched coverage for more than 100 million customers” across fiber and wireless networks in urban, suburban, and rural areas, but provides no maps, timelines, or metrics like gigabit availability percentages or specific unserved locations.

Technologies Deployed:

AT&T highlights accelerating fiber broadband, 5G wireless and home internet, satellite via AST SpaceMobile partnership for remote areas, FirstNet modernization, and AI-driven security like Dynamic Defense, without naming new equipment vendors, spectrum bands beyond past deals, or deployment schedules.  No mention of new technologies.

Spending Breakdown:

No explicit allocation is given for infrastructure capex versus partnerships (e.g., AST SpaceMobile collaboration or the prior $23B EchoStar spectrum purchase), hiring (thousands of technicians in 2026 alone), or training within its ~110,000 U.S. workforce; the total is framed as a multi-year pledge dependent on favorable tax/regulatory conditions.

AT&T’s press release did not mention its $23 billion spectrum deal with EchoStar, which has yet to close. That $23B is surely included in the total spend. There will likely be other similar lines in its spreadsheet that will enable AT&T to get to the magic $250 billion mark.

References:

https://about.att.com/story/2026/att-announces-250-billion-commitment.html

https://www.telecoms.com/operator-ecosystem/at-t-s-250-billion-investment-pledge-not-as-big-as-it-sounds

https://www.reuters.com/business/media-telecom/att-invest-250-billion-over-five-years-us-boost-infrastructure-2026-03-10/

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

AT&T’s convergence strategy is working as per its 3Q 2025 earnings report

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

AT&T to buy spectrum licenses from EchoStar for $23 billion

T-Mobile’s new CEO Srini Gopalan faces fierce competition from AT&T, Verizon and MVNOs

 

Semtech LoRa® PHY technology enables Amazon Sidewalk to expand while supporting fixed and mobile IoT endpoints

Introduction:

Semtech Corporation, a leading provider of high-performance semiconductor, Internet of Things (IoT) systems and cloud connectivity service solutions, created and is the primary owner of the intellectual property (IP) behind LoRa® technology. The company supplies the Physical layer chips (PHY transceivers) used in LoRaWAN – the very popular Low Power Wide Area Network (LPWAN) technology for IoT endpoints.

The Camarillo, CA based company last week announced that LoRa® technology will continue to serve as the core radio modulation for Amazon Sidewalk across all markets in this year’s Sidewalk international expansion. Sidewalk’s global expansion officially begins in Canada and Mexico, with further expansion to other international regions scheduled for later in 2026. The network is projected to expand to over 30 new countries by year’s end.

Amazon Sidewalk is increasingly viewed as a commercial success in terms of infrastructure deployment and technical capability, transitioning from a niche smart home feature to a broad, LoRa-based Low Power Wide Area Network (LPWAN). While it faced initial skepticism regarding privacy and adoption, the network now boasts massive, passive coverage of over 95% of the U.S. population and is undergoing rapid international expansion.

 

Architectural role of LoRa in Sidewalk:

LoRa is the de facto wireless platform of LPWANs for IoT. Semtech’s LoRa chipsets connect sensors to the Cloud and enable real-time communication of data and analytics that can be utilized to enhance efficiency and productivity. LoRa devices enable smart IoT applications that solve some of the biggest challenges facing our planet: energy management, natural resource reduction, pollution control, and infrastructure efficiency.

Amazon Sidewalk aggregates spectrum in unlicensed bands and combines multiple physical layers, with Semtech’s LoRa modulation providing the long‑range, low‑power tier for neighborhood‑scale coverage beyond home Wi‑Fi and short‑range Personal Area Networks (PANs). By using LoRa alone as the core wide‑area PHY, Sidewalk evolves from a home‑centric LAN into a geographically distributed WAN that can support both fixed and mobile IoT endpoints across dense residential environments.

Network scale and coverage:

Sidewalk already covers roughly 95% of the U.S. population, making it one of the largest license‑free, consumer‑facing LPWA deployments, and the 2026 roadmap extends the footprint into Canada and Mexico first, followed by additional international markets later in the year.  This expansion effectively turns Sidewalk into a multi‑continent overlay network, leveraging existing consumer premises equipment and LoRa‑enabled endpoints to provide persistent connectivity without requiring dedicated operator‑grade RAN build‑outs.

Technology differentiation vs other LPWAN options:

NB-IoT (included in ITU-R M.2150 IMT 2020 RIT/SRIT standard) holds the largest LPWAN share at roughly 54%–58% of total LPWAN connections,  due to massive adoption in China which accounts for approximately 84% of all global NB-IoT connections. Outside of China, LoRaWAN is the clear market leader with a 41% share of connections. As of late 2025, there are over 125 million LoRaWAN end devices deployed globally, growing at a 25% annual rate. It is the preferred choice for private IoT networks, specifically in smart buildings, agriculture, and industrial asset tracking.

LoRa’s combination of long range, ultra‑low power operation, and mature ecosystem (silicon, gateways, and cloud stacks) gives Sidewalk a differentiated profile relative to alternatives such as narrowband cellular IoT and other unlicensed LPWAN modulation methods.  For Amazon, anchoring Sidewalk on LoRa reduces RF and protocol fragmentation on the end‑device side while preserving flexibility to layer higher‑level Sidewalk services and security on top of the underlying LoRa/LoRaWAN protocol stack.

Market and ecosystem context:

Amazon Sidewalk now sits alongside large industrial and enterprise LoRaWAN networks, reinforcing LoRa’s position as the leading low‑power wide‑area connectivity technology in unlicensed spectrum. The LoRaWAN IoT connectivity market is forecast to grow from about 10.7 billion USD in 2025 to 44.8 billion USD by 2030 (33.1% CAGR), while LoRaWAN deployments have surpassed 125 million devices globally with a 25% CAGR, signaling a robust runway for Sidewalk‑class Massive IoT use cases.
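The forecast figures above are internally consistent, which is easy to verify by compounding the stated CAGR over the forecast window:

```python
# Consistency check of the market forecast cited above: $10.7B (2025)
# compounded at a 33.1% CAGR for five years should land near the quoted
# $44.8B figure for 2030.
base_2025 = 10.7   # $B, LoRaWAN IoT connectivity market
cagr = 0.331
projected_2030 = base_2025 * (1 + cagr) ** 5
print(f"Projected 2030 market: ${projected_2030:.1f}B")   # ~ $44.7B
```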

Implications for device and service design:

For device OEMs and service providers, Amazon’s decision effectively de‑risks LoRa as a long‑term connectivity bet for consumer and prosumer IoT, given Sidewalk’s trajectory to tens of millions of active devices worldwide.  Vendors integrating LoRa‑based designs can now target both traditional LoRaWAN operator networks and the Sidewalk ecosystem, enabling common hardware platforms to support smart home, safety, environmental monitoring, and asset‑tracking applications at neighborhood and city scale.

LoRa Enables Sidewalk’s Technical Evolution:

Chirp spread spectrum (CSS) modulation in LoRa technology provides the technical foundation enabling Amazon Sidewalk’s new capabilities:

  • Enhanced Network Density: LoRa multi-spreading factor capability optimizes longer range and shorter time-on-air, supporting higher device concentrations in urban environments while maintaining reliable connectivity.
  • Location-Based Services: Unique location accuracy service that combines the power of Wi-Fi, Bluetooth Low Energy (BLE) and GPS enables a new class of location aware devices that don’t need expensive cellular solutions for asset tracking applications.
  • Hub-Less Deployments: Utilized for both out-of-band-diagnostics as well as signaling radio for battery-powered cameras, LoRa lowers the need for hubs/repeaters, reducing infrastructure complexity for consumers while extending effective coverage areas.
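The range-vs-airtime tradeoff behind the first bullet comes straight from the CSS math: a LoRa symbol lasts 2^SF / BW seconds, so each step up in spreading factor doubles time-on-air in exchange for sensitivity and range. A quick sketch at the common 125 kHz bandwidth:

```python
# LoRa CSS symbol duration: T_sym = 2**SF / BW. Higher spreading factors
# buy sensitivity/range at the cost of longer time-on-air; the network
# balances this per device to support dense urban deployments.
BW_HZ = 125_000   # common LoRa bandwidth in the unlicensed sub-GHz bands

for sf in range(7, 13):
    t_sym_ms = (2 ** sf) / BW_HZ * 1000
    print(f"SF{sf}: symbol time = {t_sym_ms:.3f} ms")
```

At SF7 a symbol lasts about 1 ms; at SF12 it lasts nearly 33 ms, which is why the network assigns nearby devices short spreading factors to keep airtime low.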

Proven Heritage of LoRa in Massive IoT Networks:

Semtech’s LoRa technology has been deployed by more than 170 major mobile network operators globally, with over 500 million connected devices across smart cities, utilities, logistics, unmanned aircraft systems, and industrial applications. This proven deployment heritage provides the technical foundation and ecosystem maturity required for Amazon Sidewalk’s global expansion.

The technology’s long-range capability, extending connectivity up to several kilometers from Sidewalk bridge devices, combined with its ability to penetrate buildings and operate in dense urban environments makes it uniquely suited for neighborhood-scale networks. LoRa provides free, long-range connectivity that consumers can rely on for years of battery-powered operation.
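The “years of battery-powered operation” claim can be sanity-checked with a back-of-the-envelope duty-cycle estimate. All parameters below are illustrative assumptions, not Sidewalk or Semtech specifications, and the model is optimistic: it ignores receive windows, BLE activity, and battery self-discharge:

```python
# Illustrative battery-life estimate for a LoRa sensor. All parameters
# are assumptions for the sketch, not Sidewalk specifications, and the
# model ignores RX windows, BLE, and self-discharge (so it's an upper bound).
uplinks_per_day = 24          # one reading per hour
tx_time_s = 0.10              # ~100 ms airtime per uplink
tx_current_ma = 45.0          # typical sub-GHz TX current
sleep_current_ma = 0.002      # 2 uA sleep current
battery_mah = 1000.0          # e.g., a small primary cell

tx_mah_per_day = uplinks_per_day * tx_time_s / 3600 * tx_current_ma
sleep_mah_per_day = 24 * sleep_current_ma
years = battery_mah / (tx_mah_per_day + sleep_mah_per_day) / 365
print(f"Estimated battery life (upper bound): {years:.1f} years")
```

Even with real-world overheads cutting this severalfold, the sleep-dominated current budget is what makes multi-year operation credible.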

Building on CES 2026 Momentum:

Ring showcased its expanded product portfolio using LoRa at CES 2026, introducing comprehensive sensor families for security, safety and home automation. These products join the growing network of devices powered on Sidewalk, including water leak and freeze detection sensors, wearable devices and environmental monitoring solutions, all leveraging the connectivity advantages of LoRa.

The Sidewalk network’s architecture—combining LoRa for long-range communication with Bluetooth Low Energy for device setup—creates a robust, resilient IoT infrastructure that can scale to support millions of devices while maintaining the ultra-low power consumption critical for battery-operated sensors and cameras.

…………………………………………………………………………………………………………………………………………………………………..

About Semtech:

Semtech Corporation (Nasdaq: SMTC) is a leading provider of high-performance semiconductor, IoT systems and cloud connectivity service solutions dedicated to delivering high-quality technology solutions that enable a smarter, more connected and sustainable planet. Our global teams are committed to empowering solution architects and application developers to develop breakthrough products for the infrastructure, industrial and consumer markets.


Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

New Telco Opportunity – AI at the Edge:

At MWC 2026 last week, there was a flurry of claims that “AI at the Edge” would transform the telecom industry. One of many examples is an article titled “The AI edge boom is giving telecom a new strategic role.” In that piece, Jeff Aaron, vice president of product and solutions marketing at Hewlett Packard Enterprise (HPE), spoke with theCUBE’s John Furrier at MWC Barcelona during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed telecom edge AI and why networking is becoming a strategic foundation for data-centric services. Aaron said:

“A big reason for [reignited interest in routing] is AI workloads. They’re moving everywhere now. They have to move to the edge.  For them to move to the edge, you’ve got to get them outside of the factory and to all the locations. We’re right in the core of that, and it’s super exciting.”

As AI expands to the edge, data will need to move not only to local compute, but also between many distributed edge sites, making routing paramount. There are four ways AI infrastructure is scaling — inside data centers and across distributed edge locations, according to Aaron.

“There’s scale-out, scale-across, scale-up, and on-ramp. Two are within the data center — scale-out and scale-up — but scale-across and edge on-ramp basically mean you got to figure out how to connect to those areas, and those are just networking,” he added.

Scale-across refers to connecting distributed data centers and edge locations, while edge on-ramp brings remote sites such as factories or branch locations into the network to access AI services. Supporting those distributed environments creates an opportunity for HPE to bring networking and compute together into a more integrated infrastructure stack. At MWC 2026 Barcelona, those trends are clearly coming into focus, according to Aaron.

“Data is moving everywhere right now, and the network is back. The network isn’t just plumbing. The network is how you build a value-added service using an AI workload as a telco infrastructure,” he added.

Telecom carriers are now urgently trying to move from being “dumb data pipes” to becoming “AI performance platforms” by leveraging their geographically distributed infrastructure to host AI closer to the end user. They want to pivot from selling just bandwidth and connectivity to selling outcomes and intelligence, with a heavy focus on industrial and enterprise-specific edge deployments. They are considering the following services and business models:

  • Infrastructure as a Service (IaaS) & GPUaaS: Offering raw computing power, specifically GPUs, from edge data centers to enterprises that need low-latency processing without building their own facilities.
  • Sovereign AI Clouds: Providing AI services that guarantee data remains within national borders, appealing to government and highly regulated sectors like finance and healthcare.
  • API Monetization: Exposing real-time network data (e.g., location intelligence, predictive network quality, fraud risk scoring) via APIs that enterprises pay to integrate into their own applications.
  • Outcome-Based Pricing: Charging for specific business results, such as a “guaranteed video call quality” or “fraud loss reduction share,” rather than just data usage.
  • AI-as-a-Service (AIaaS): Bundling pre-trained models or specialized AI agents (e.g., for customer service or industrial monitoring) with connectivity
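As a concrete (and entirely hypothetical) illustration of the API monetization and outcome-based models above, consider a telco exposing a per-call network-quality score to enterprise applications. The endpoint shape, field names, and pricing below are invented for this sketch, not any carrier’s actual API:

```python
# Hypothetical sketch of the "API Monetization" model: a telco exposes a
# network-quality score that an enterprise app queries before starting a
# latency-sensitive session. All names, fields, and prices are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class QualityScore:
    cell_id: str
    predicted_latency_ms: float
    fraud_risk: float          # 0.0 (low) .. 1.0 (high)

def bill_for_call(score: QualityScore, per_call_usd: float = 0.002) -> float:
    """Usage-based billing: the enterprise pays per scored API call."""
    return per_call_usd

score = QualityScore(cell_id="gNB-1234", predicted_latency_ms=18.5, fraud_risk=0.03)
ok_to_start = score.predicted_latency_ms < 25 and score.fraud_risk < 0.5
print(f"Start session: {ok_to_start}, billed ${bill_for_call(score):.3f}")
```

The point of the model: the carrier monetizes network intelligence per transaction, instead of (or on top of) flat connectivity fees.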

Major Carrier AI Edge Deployment Plans:

  • AT&T:
    • Launched Connected AI for Manufacturing in March 2026, which unifies 5G, IoT, and generative AI to provide real-time fault detection (claiming a 70% reduction in waste).
    • Deploying “Edge Zones” in major U.S. cities (Detroit, LA, Dallas) to allow developers to run low-latency, cloud-based software locally.
    • Partnering with AWS to link fiber and 5G directly into AWS environments for distributed AI workloads.
  • Verizon:
    • Unveiled Verizon AI Connect, a suite of products designed to manage resource-intensive AI workloads for hyperscalers like Google Cloud and Meta.
    • Trialing V2X (Vehicle-to-Everything) platforms to provide carmakers with standardized APIs for low-latency edge processing in autonomous driving.
    • Collaborating with NVIDIA to integrate GPUs into private 5G networks for on-premise AI inferencing in robotics and AR.
  • SK Telecom (SKT):
    • Announced an “AI Native” strategy at MWC 2026, including a roadmap for AI-RAN (Radio Access Network) that uses GPUs to optimize network performance and host user AI apps simultaneously.
    • Building a Manufacturing AI Cloud powered by over 2,000 NVIDIA RTX GPUs to support digital twin simulations and robotics.
    • Expanding AI Data Centers (AIDC) across South Korea and Southeast Asia (Vietnam, Malaysia) using energy-optimized LNG-powered facilities.
  • Orange & Deutsche Telekom:
    • Deploying AI-powered planning tools to cut fiber rollout costs and optimize site power consumption by up to 33% using AI “Deep Sleep” modes.
    • Focusing on Sovereign AI strategies to ensure data governance for European enterprise customers.
  • Vodafone:
    • Utilizing AI/ML applications for daily power reduction at 5G sites and testing autonomous network healing via AI agents
  • BT:
    • Offers 5G-connected VR for manufacturing design teams (e.g., Hyperbat) to collaborate on 3D models in real-time.  
……………………………………………………………………………………………………………..
Summary of Emerging AI Edge Products:
Product Category | Primary Target | Key Value Proposition
AI-RAN | Industry 4.0 | Seamless, ultra-low latency for robotics and sensing.
Connected AI Platforms | Manufacturing | Real-time predictive maintenance and waste reduction.
AI-as-a-Service (AIaaS) | Developers/SMBs | Access to GPU power and pre-trained models via telco edge nodes.
Network Slicing APIs | App Developers | Programmatic control over bandwidth for AR/VR and gaming.

…………………………………………………………………………………………………………………………………………………………………………………………..

A Dissenting View of “AI at the Edge”:

The market for AI within the global telecommunications sector is valued at $6.69 billion in 2026, growing at a compound annual growth rate (CAGR) of 41.9% from 2025. The broader edge AI market—including hardware, software, and services—is forecast to reach $29.98 billion in 2026, according to The Business Research Company. We think those estimates are way too high.
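For reference, the quoted 2026 figure and growth rate imply a 2025 base that can be backed out directly:

```python
# The article quotes $6.69B for 2026 with 41.9% growth from 2025;
# back out the implied 2025 base of the telecom AI market.
value_2026 = 6.69            # $B
growth = 0.419
implied_2025 = value_2026 / (1 + growth)
print(f"Implied 2025 base: ${implied_2025:.2f}B")   # ~ $4.71B
```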

The market research firm states:

………………………………………………………………………………………………………

Author’s Opinion:

Unless telcos change their corporate culture while slowing the footprint growth of cloud service providers/hyperscalers, we think that AI at the Edge will be yet another telco monetization failure, just like their failures to monetize 4G LTE apps, the telco cloud, 5G, multi-access edge computing (MEC), OpenRAN, LPWANs and other telecom technologies that never lived up to their promise and potential.

That’s largely because telcos are very weak at developing IT platforms, compute services, and killer applications, and at rapidly executing new services (e.g. 5G services require a 5G SA core network, which telcos were very slow to deploy). Telecom execs themselves cite cultural and speed‑of‑change issues: the industry is not organized like a software company, so it struggles to iterate products at AI/cloud pace. Also, telcos historically struggle with software: managing distributed GPU clusters is vastly different from managing cell towers.

After spending billions on 5G with very little or no ROI, investors are skeptical of the increased capex required for AI-grade edge servers, which must be maintained by telcos. Those servers will be expensive (especially if they contain clusters of Nvidia GPUs) and consume a lot of power, which is a critical issue at the edge of the carrier’s network.

Many network operators frame AI/edge as “network optimization” or “utilizing underused sites,” not as building monetizable AI platforms with APIs, SDKs, and ecosystems. This mirrors 5G, where huge RAN/core builds were not matched by a clear product and platform strategy, leaving value to OTTs and hyperscalers, which are extending their control planes and protocol stacks to the network edge (local zones, operator co‑lo, on‑premises stacks).

Telcos risk becoming “dumb pipes” for AI traffic if they can’t provide a superior developer ecosystem.  If they only sell space/power/connectivity, the cloud service providers will continue to own the developer and AI value chain.  Analysts warn that edge is a “right to participate, not a right to win.”  As such, value accrues to whoever owns the AI platform, tools, marketplace, and pricing power, not the entity that provides connectivity, PoP or cell towers.

Data fragmentation and weak “intelligence” layer:

  • AI monetization depends on high‑quality, cross‑domain data, but telco data is fragmented across OSS, BSS, probes, and partner systems; without unification, it is hard to expose compelling network/edge intelligence services.

  • Analysts emphasize that failure here reduces telcos to generic GPU landlords, while higher‑margin offers (real‑time quality, fraud, identity, mobility/context APIs) remain unrealized.

Narrow internal focus on cost savings:

  • Many operators’ early AI focus is inward (Opex reduction in assurance, planning, customer care) rather than building external, revenue‑generating products, echoing how early 5G was justified mainly on cost/efficiency.

  • Commentators warn that if AI/edge remains a “network efficiency” play, the commercial upside will go to cloud/AI natives that turn similar capabilities into products sold to enterprises.

What analysts say telcos must do differently:

  • Build “Sovereign AI factories” and edge AI clouds: GPU‑enabled sites with cloud‑like developer experience (APIs, self‑service portals, metering, SLAs) and clear sovereign/regional guarantees.

  • Combine differentiated connectivity with AI services (latency‑backed SLAs, AI‑on‑RAN, domain‑specific models for verticals) and use modern, flexible commercial models instead of just selling bandwidth or colocation.

Conclusions:

In summary, the main challenge for telcos is to successfully transition from owning and maintaining network infrastructure to owning and operating AI platforms and products at software-industry speed. AI at the edge is less a new service or product and more an architectural upgrade. The two ways telcos can benefit are:

  1.  Internal cost reduction: If telcos use it to lower their own costs (fraud prevention, risk management, predictive maintenance, fault isolation, self-healing networks, etc.), it’s an automatic win but won’t increase the top line.
  2.  Revenue from new AI-Edge services, e.g. Verizon uses edge-based video analytics in warehouses to improve inventory turnover by up to 40%. If telcos expect to charge a massive premium for “AI-enabled 5G,” they face the same monetization wall that has doomed them for the past 20 years!

References:

https://siliconangle.com/2026/03/04/telecom-edge-ai-makes-networking-strategic-mwc26/

https://www.nvidia.com/en-us/lp/ai/the-blueprint-for-ai-success-ebook/

How telcos can monetize AI beyond connectivity

https://www.thebusinessresearchcompany.com/report/generative-artificial-intelligence-ai-in-telecom-global-market-report

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAG forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

AT&T is strategically re-architecting its infrastructure for the AI era through high-capacity network modernization and deep integration with hyperscale cloud providers.

In addition to its almost six-year-old deal to run its 5G SA core network in Microsoft Azure’s cloud, AT&T announced at MWC 2026 that it is now working with Amazon Web Services (AWS) to extend 5G and fiber connectivity from business customers and locations directly into AWS environments, creating secure, resilient and reliable premises‑to‑cloud architectures for AI workloads. The collaboration is designed to reduce network complexity and latency while supporting real‑time analytics, machine learning, and agentic AI use cases.

This collaboration continues a long-standing relationship between AT&T and AWS and follows recent news outlining broader efforts to modernize the nation’s connectivity infrastructure by providing high-capacity fiber to AWS data centers, migrating AT&T workloads to AWS cloud capabilities and exploring emerging satellite technologies.

AWS Interconnect – last mile embeds AT&T‑delivered connectivity directly into AWS workflows. It is designed to enable customers to provision and manage last‑mile connectivity within the AWS environment, and it lays the foundation for AI agents to monitor and manage the AI experience from the user to the cloud. This streamlined, self‑managed approach helps enterprises reduce network complexity while maintaining control of their extended enterprise network, allowing businesses to move faster as they scale AI.

High level illustration of the planned AWS Interconnect – last mile architecture, showing how resilient interconnections and AT&T Fiber and fixed wireless access are intended to simplify private connectivity from customer locations into AWS environments. 

Diagram Source: AT&T

………………………………………………………………………………………………………

“AI does not just need more compute; it needs flatter networks and faster connections,” said Shawn Hakl, SVP & Head of Product, AT&T Business. “By bringing high‑capacity connectivity closer to cloud platforms, integrating the management of the networks directly into the cloud provisioning process and engineering for resiliency at the metro level, AT&T is helping enterprises streamline their networks, improve performance, security, and scale AI with confidence.”

AT&T says it is building an AI‑ready network (?) designed to scale performance through continued network investment, including growing capacity up to 1.6 Tbps across key metro and long‑haul routes.

AT&T also announced it would work with Nvidia, Microsoft and MicroAI through its Connected AI platform for “smart manufacturing.”

………………………………………………………………………………………………………………..

Finally, AT&T described AT&T Geo Modeler, which can better predict connectivity for emerging technologies like autonomous vehicles, drones, and robotics.

The Geo Modeler is an AI-powered simulation tool that helps predict, in near real time, how a wireless network will perform in the real world. Inspired by the video games its creator, AT&T scientist Velin Kounev, played with his family growing up, the virtual model and simulation is “essentially like a giant video game of the United States” that, infused with AI tools, gives engineers a clearer picture of where potential weak spots may appear. Issues can then be addressed earlier and fixes rolled out faster. In essence, it creates virtual models, similar to the way video games are designed and developed.

“The Geo Modeler helps us see how the real world will shape coverage before we build, so we can deliver connectivity that’s ready for what’s next,” said AT&T scientist Velin Kounev.

Matt Harden, VP of Connected Solutions at AT&T, agrees. “The Geo Modeler is a foundational capability for the connected mobility era,” he said. “By marrying advanced geospatial simulation with AI-driven network orchestration, we can deliver predictable, high-performance connectivity that adapts with the environment. Whether it’s a hurricane, a packed stadium, or a city corridor full of autonomous vehicles, we will be prepared.”
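To make the idea concrete: tools like this boil down to evaluating radio propagation models over geospatial data to flag weak spots before build-out. The sketch below is NOT AT&T’s Geo Modeler — just a textbook log-distance path-loss model with assumed parameters, showing the kind of weak-spot flagging such a simulator automates:

```python
# Minimal coverage-prediction sketch: a textbook log-distance path-loss
# model evaluated along a radial to flag weak spots. All parameters are
# assumptions for illustration; this is NOT AT&T's Geo Modeler.
import math

TX_POWER_DBM = 43.0     # assumed macro-cell EIRP
PL0_DB = 32.4           # free-space loss at 1 m reference, ~2 GHz
EXPONENT = 3.5          # assumed urban path-loss exponent
THRESHOLD_DBM = -110.0  # assumed minimum usable signal

def rx_power_dbm(distance_m: float) -> float:
    """Received power under the log-distance path-loss model."""
    path_loss = PL0_DB + 10 * EXPONENT * math.log10(max(distance_m, 1.0))
    return TX_POWER_DBM - path_loss

# Scan distances out to 5 km and report where coverage drops below threshold.
for d in (100, 500, 1000, 2000, 5000):
    p = rx_power_dbm(d)
    status = "OK" if p >= THRESHOLD_DBM else "WEAK"
    print(f"{d:>5} m: {p:6.1f} dBm  {status}")
```

A production tool replaces this one-line formula with terrain, clutter, and AI-calibrated propagation data over a 3D model, but the output — a map of predicted weak spots to fix before deployment — is the same in spirit.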

References:

https://about.att.com/story/2026/aws-collaboration-scalable-business-ai.html

https://about.att.com/blogs/2026/150-years-of-connection.html

https://about.att.com/blogs/2025/geo-modeler.html

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T’s convergence strategy is working as per its 3Q 2025 earnings report

Progress report: Moving AT&T’s 5G core network to Microsoft Azure Hybrid Cloud platform

AT&T 5G SA Core Network to run on Microsoft Azure cloud platform

 
