STL Partners webinar: Agentic AI needed for RAN autonomy & efficiency

Yesterday, an STL Partners webinar titled “Turning autonomy into margin: Agentic AI and the autonomous RAN” suggested that agentic AI is the missing layer that can turn RAN autonomy from a technical goal into a direct profit-margin booster. The webinar argued that operators should prioritize autonomy use cases by business impact, not just by how much automation coverage they add, and that the right roadmap can move autonomy from an engineering KPI to a commercial advantage.

The central message was that autonomy only matters if it improves economics (see poll results below). The presenters argued that network operators need a dual-axis framework that combines the usual autonomous-network maturity view with a value-creation lens, so they can focus on the capabilities that scale into measurable business outcomes.

Agentic AI is presented as the practical enabler for moving beyond human-in-the-loop operations. In this framing, agents help orchestrate tasks, make decisions, and coordinate network actions in ways that support more closed-loop automation than traditional workflows can deliver.
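To make the closed-loop idea concrete, here is a minimal illustrative sketch (not from the webinar; all function names, KPIs, and thresholds are hypothetical) of an agent that observes a cell-level KPI and decides a network action without a human in the loop:

```python
# Illustrative closed-loop sketch: observe a KPI -> decide -> act.
# All names and thresholds below are hypothetical, for illustration only.

def decide_action(prb_utilization: float) -> str:
    """Map an observed KPI (physical resource block utilization) to an action."""
    if prb_utilization > 0.85:
        return "activate_capacity_carrier"   # add capacity before congestion
    if prb_utilization < 0.15:
        return "sleep_capacity_carrier"      # save energy at low load
    return "no_op"

def closed_loop(kpi_stream):
    """The observe->decide->act loop a human operator would otherwise drive."""
    return [decide_action(u) for u in kpi_stream]

actions = closed_loop([0.92, 0.50, 0.10])
print(actions)  # ['activate_capacity_carrier', 'no_op', 'sleep_capacity_carrier']
```

In a real deployment the decision step would be a learned policy or LLM-driven agent rather than fixed thresholds, but the loop structure — and the removal of the human approval step — is the point being made.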

The results of an “actuality” poll on RAN autonomy revealed that controlling costs and reliability were most important, with enabling new revenue growth through APIs and sensing cited by only 10.87% of respondents. Similarly, the results of an “aspirations” poll for RAN autonomy were fairly evenly spread between reducing costs and optimizing the customer experience, with just 13.21% citing new revenue growth.

Source: STL Partners

Terje Jensen, SVP, global business security officer and head of network and cloud technology strategy at Telenor, said that he had expected to see network operators’ aspirations shift more clearly towards improving customer experience and even revenue generation, not just efficiency.

Darwin Janz, strategic technology planner at SaskTel, also thought network operators’ ambitions would be higher, but he noted that they still struggle to identify concrete, monetizable use cases. Without those, there is a real risk of building technical solutions in search of a problem, rather than starting from clear enterprise needs and value, Janz noted. “We really need to see those use cases and enterprise customer needs,” he added.

……………………………………………………………………………………………………………………….

The webinar was built around four practical questions:

  1. Which use cases create real commercial impact?
  2. How can operators shift from autonomy as an engineering metric to autonomy as a margin driver?
  3. Where does agentic AI add value today?
  4. What data, orchestration, and organizational foundations are needed to scale beyond pilots?

For network operators, the implication is that autonomous RAN strategy should be tied to P&L outcomes such as lower operating cost, better resource utilization, and faster optimization cycles. The webinar’s message is that autonomy becomes strategically important only when it is deployed in a way that compounds across the network and business.

…………………………………………………………………………………………………………………..

References:

https://www.lightreading.com/network-automation/telcos-showing-limited-aspiration-for-ran-autonomy-benefit

The Financial Trap of Autonomous Networks: Scaling Agentic AI in the Telecom Core

Nokia to showcase agentic AI network slicing; Ericsson partners with Ookla to measure 5G network slicing performance

 

 

South Korea’s top 3 telcos reinvent themselves as “AI Companies;” growth strategies revealed

Overview:

South Korea’s telecommunications industry is rapidly shifting its center of gravity to AI, with SK Telecom, KT and LG Uplus all declaring their transformation into AI companies. Industry officials describe this as a restructuring process.

  • SK Telecom is pushing a full-stack AI strategy spanning infrastructure, models and services.
  • KT is accelerating a B2B-focused push to become an “AX” platform company.
  • LG Uplus is positioning itself as an AI software company through its ixi-O agent, stressing safety and security. Industry officials say the next test is profitability.
Photo credit: Shutterstock
…………………………………………………………………………………………………………………………………………
Here’s a summary of the AI strategies of the three South Korean telcos:
1. SK Telecom – Pioneering Sovereign AI and Full-Stack Infrastructure:

Ryu Jong-heon, SKT’s CEO, wrote in a letter sent to shareholders ahead of last month’s annual general meeting, “If our AI business so far was about incubating various areas, we will now focus more on businesses where SKT can be competitive and secure sustainability in AI competition that is expanding without limit.”

SK Telecom (SKT) is prioritizing a “Sovereign AI” strategy, designed to offer localized, secure AI infrastructure that mitigates reliance on external hyper-scalers. By integrating AI Data Centers (AIDC) with industry-specific applications and their proprietary A.X K1 model—a 500B parameter hyper-scale LLM—SKT aims to deliver an end-to-end “Sovereign AI Package.”
To fortify its AI full-stack, SKT is leveraging a robust partnership ecosystem:
  • Next-Gen Compute: Strategic collaboration with Arm and Rebellions for AI CPU/NPU innovation.
  • Infrastructure & Power: Agreements with Supermicro and Schneider Electric to optimize AIDC efficiency and server density.
  • Model Scaling: With A.X K1 outperforming benchmarks like DeepSeek V3.1, SKT plans to transition to multimodal capabilities and trillion-parameter scaling to secure market dominance across B2B and B2C segments.

2. KT Corporation – Transitioning to an AX Platform Operator:

Under the leadership of CEO Yun-young Park, KT is accelerating its AX (AI Transformation) strategy with a sharp focus on the B2B sector. Following a structural reorganization that established the AX Future Technology Institute and the AX Business Division, KT is positioning itself as a platform enabler rather than a mere solution provider. Despite perceived lags in proprietary model development (e.g., the mi-deum LLM), KT is pursuing a pragmatic “practical gains” strategy. By partnering with Microsoft, KT is adopting a “detour” approach to rapidly integrate global-standard AI capabilities into its existing corporate customer base. CEO Yun-young Park explained, “If AI services are actors on a theatre stage, we are an AX platform company that builds that stage.”

3. LG Uplus – Moving to AI-Driven Software and Security:

LG Uplus, led by CEO Beom-sik Hong, is leveraging security and reliability as its primary competitive differentiators. The company is transitioning into an AI-centric software (SW) company, focusing on high-margin service architectures over raw infrastructure. The cornerstone of this strategy is ixi-O, a voice AI agent. The upcoming ixi-O Pro will feature advanced behavioral analytics, including tone and emotional state detection, to provide proactive customer engagement.  Hong stated, “We will become an AI-centred software (SW) company that leads solutions in telecommunications and AX technology,” signaling a two-track global expansion strategy involving both service exports and technology stack licensing.

……………………………………………………………………………………………………………………………………………………………………………………….
Market Outlook: The Race for Monetization:
As the “Three Firms, Three Strategies” AI era unfolds, the industry focus has shifted from experimental incubation to sustainable monetization. An industry official noted, “The key is how to graft telecommunications network technology built up so far onto AI services. All three telcos have finished setting specific roadmaps. Now is the time to prove it with results.” Many believe that the Korean network operator that successfully bridges the gap between massive CAPEX in AI infrastructure and scalable, profitable AI-native services will ultimately define the next generation of telecommunications.
…………………………………………………………………………………………………………………………………

References:

https://www.digitaltoday.co.kr/en/view/49093/koreas-top-three-telecoms-bet-future-on-ai-shift-from-networks

SKT 6G ATHENA White Paper: a mid-to-long term network evolution strategy for the AI era

SK Group and AWS to build Korea’s largest AI data center in Ulsan

South Korea has 30 million 5G users, but did not meet expectations; KT and SKT AI initiatives

McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector

WSJ: 5G in South Korea has not lived up to expectations

South Korea government fines mobile carriers $25M for exaggerating 5G speeds; KT says 5G vision not met

KT and LG Electronics to cooperate on 6G technologies and standards, especially full-duplex communications

SK Telecom (SKT) and Nokia to work on AI assisted “fiber sensing”

SKT Develops Technology for Integration of Heterogeneous Quantum Cryptography Communication Networks

SKT with Global Telcos to Expand Metaverse Platform in US, Europe and Southeast Asia

South Korean telcos to double 5G network bandwidth with massive MIMO; Private 5G

Omdia: ARPU declining or flat for South Korean 5G network operators

3 South Korean mobile operators to share 5G networks in remote areas

LG U+ first to deploy 600G backbone network in Korea with Ciena’s ROADM equipment

 

Anthropic’s Project Glasswing aims to reshape IT cybersecurity

Backgrounder:

Late last year, Anthropic said that state-sponsored Chinese hackers had used its artificial intelligence (AI) technology in an effort to infiltrate the computer systems of roughly 30 companies and government agencies around the world. The company said it was the first reported case of a cyberattack in which AI technologies had gathered sensitive information with limited help from human operators.

As Anthropic and its chief rival, OpenAI, prepare to release new and more powerful AI systems, cybersecurity experts are increasingly vocal in their warnings that AI is fundamentally changing cybersecurity.  AI technology could allow hackers to identify security holes in computer systems far faster than in the past, vastly raising the stakes in the decades-long fight between hackers and the security experts guarding computer networks.  As hackers deploy AI to break and steal, security experts are also leaning on AI to spot flaws in their systems — including some that had gone unnoticed for decades.

“This is the most change in the cyber environment, ever,” said Francis deSouza, the chief operating officer and president of security products at Google Cloud. “You have to fight AI with AI.”

Hackers have used AI chatbots to draft phishing emails and ransom notes, cybersecurity experts said. Others have used AI to parse large quantities of stolen data and determine what information might be valuable. Without help from AI, attackers could sometimes break into computer networks within minutes, Mr. deSouza said, but with the help of AI, breaches can take just seconds. Some hackers specialize in breaking into systems and then selling off their access to other attackers. Those handoffs used to take as much as eight hours, as hackers negotiated the sales and passed along the compromised entry points, deSouza added. Now that process has accelerated to about 20 seconds, he said, with hackers sometimes using AI agents to speed up the process.

Some experts argue that the guardrails added by companies like Anthropic and OpenAI can actually provide an advantage to malicious attackers. Guardrails could cause an AI chatbot to deny help to a user trying to defend a system from an attack, they argue, but persistent hackers could be more diligent about finding vulnerabilities — and keeping those tricks to themselves.

In February, Anthropic said it had used its AI technologies to find over 500 so-called zero-day vulnerabilities — security holes that were unknown to software makers — in various pieces of commonly used open source software. The next month, a researcher at Anthropic revealed that he had used AI to find a serious security vulnerability in the core of the Linux operating system — software that powers much of the internet and is used in computer servers, cloud computing services, Android phones and Teslas. The bug had existed, apparently undiscovered, since 2003.

Project Glasswing Overview:

Anthropic has announced Project Glasswing – a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks – in an effort to secure the world’s most critical software.

The fast-growing private AI company has found that AI models (like its own Claude) have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Its Mythos Preview language model has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.

Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

The Project Glasswing partners will use Mythos Preview as part of their defensive security work. Anthropic will share what it learns so the entire IT industry can benefit. It has also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure, so they can use the model to scan and secure both first-party and open-source systems.

Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

Project Glasswing Core Objectives:
  • Give Defenders a Head Start: The initiative aims to use Mythos’s capabilities to find and fix zero-day vulnerabilities in critical codebases before they can be discovered by malicious actors.
  • Secure Critical Infrastructure: Partners use the model to scan first-party systems and open-source software that underpin global banking, energy, and logistics networks.
  • Modernize Defense Practices: Anthropic is collaborating with partners to evolve security workflows, such as patching and disclosure processes, to match the “machine speed” of AI-driven vulnerability discovery.
Claude Mythos Capabilities:
The Glasswing initiative was formed after Anthropic researchers observed that the Mythos model had reached a threshold where its reasoning and coding skills surpassed all but the most skilled human security researchers.
  • Zero-Day Discovery: In early testing, the model autonomously found thousands of high-severity vulnerabilities, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg code that had been scanned by automated tools millions of times without detection.
  • Performance Benchmarks: Mythos Preview scored 83% on the CyberGym cybersecurity benchmark, significantly outperforming previous models like Claude Opus.

 

References:

https://www.anthropic.com/glasswing

https://www.nytimes.com/2026/04/06/technology/ai-cybersecurity-hackers.html

Anthropic Glasswing: AI Vulnerability Detection Has Crossed a Threshold

Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

New Linux Foundation white paper: How to integrate AI applications with telecom networks using standardized CAMARA APIs and the Model Context Protocol (MCP)

IDC Survey of Networking Leaders: Enterprise AI progress stalls despite ambitious goals

New IDC research released in April 2026 highlights a growing disconnect between ambitious enterprise AI goals and the reality of their technical execution. The 2026 IDC AI in Networking Special Report (LinkedIn Video hyperlink) [1.] found that organizations expecting to move from early, selective AI use for business and IT initiatives to more advanced deployments largely haven’t. The result is a widening gap between intent and execution that is becoming harder to ignore — driven by a mismatch between ambitious goals and the realities of legacy infrastructure, which cannot handle the data demands of production-grade models.

Despite high expectations, many organizations have seen their AI progress stall over the last 18 months, with “select use” adopters failing to advance to more “substantial” deployments. A critical shortage of specialized, AI-experienced personnel, combined with lagging security and governance controls, has caused widespread “pilot paralysis” across most enterprises. To overcome this, organizations are shifting toward “AI factories” to create a repeatable, governed pipeline for deploying AI.

Note 1. IDC’s 2026 AI in Networking Special Report is a report driven by a worldwide survey of 500+ enterprise network executives and experts. The report covers both the impact and plans for supporting AI workloads across the network and using AI-powered networking solutions. The focus of this research is comprehensive, covering datacenters, cloud services, multi-cloud environments, network core and edge, and network management.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Mark Leary, IDC research director, Network Observability and Automation:

“Many solution suppliers are prioritizing a platform approach to the challenges associated with moving AI workloads into production. This survey of networking leaders highlights the shift in preference from platforms to best-in-class solutions when supporting AI workloads across their networks. As certain functional requirements intensify, as IT staff experience and expertise build, and as platforms fall short in delivering expected advantages, IT organizations are more willing to take on the added responsibilities associated with assembling their own mix of best-in-class solutions. For the supplier, the challenge is to avoid developing and delivering a platform that is classified as a jack-of-all-trades and master of none.”

“Agentic AI is to have a profound effect on the network infrastructure and on networking staff. Two years ago, AI assistants were labeled leading edge when they offered natural language processing for operator interactions and network management guidance driven by technical manual content. How things have changed! Agentic AI is no longer just a passive informer and instructor but an active intelligent virtual network engineer. Agents gather and process comprehensive network data, develop deep and precise insights, and determine and, increasingly, execute needed network management actions. Whether fixing a network problem, activating a network service, optimizing a network configuration, or responding to a developing network condition, agentic AI solutions are proving more and more useful across the entire network and the entire set of tasks required to engineer and operate the network.”
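The gather → insight → action pattern Leary describes can be sketched as a simple event-driven remediation loop. This is an illustrative toy, not an IDC or vendor implementation; the alert conditions, thresholds, and playbook step names are all hypothetical:

```python
# Toy agentic-remediation sketch: raw telemetry -> named condition -> playbook.
# Hypothetical conditions and playbook steps, for illustration only.

PLAYBOOKS = {
    "link_flap": ["collect_interface_counters", "disable_flapping_port", "open_ticket"],
    "high_latency": ["trace_path", "shift_traffic_to_backup", "open_ticket"],
}

def diagnose(event: dict) -> str:
    """Turn raw telemetry into a named condition (the 'insight' step)."""
    if event.get("crc_errors", 0) > 100:
        return "link_flap"
    if event.get("p95_latency_ms", 0) > 50:
        return "high_latency"
    return "healthy"

def remediate(event: dict) -> list:
    """Determine the ordered actions an agent would then execute."""
    condition = diagnose(event)
    return PLAYBOOKS.get(condition, [])

print(remediate({"crc_errors": 250}))
# ['collect_interface_counters', 'disable_flapping_port', 'open_ticket']
```

A production agent would replace the hard-coded `diagnose` rules with model-driven reasoning over live telemetry, and the final step — whether the agent executes the playbook itself or hands it to a human — is exactly the autonomy-preference question the survey probes below.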

While this IDC Survey Spotlight offers only an overview of responses relating to agentic AI, detailed results are available by geographic region, select country, company size, major vertical industries, respondent role, and the AI maturity level of the respondent’s organization.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Organizations are pursuing AI in networking across two categories:

1.] Supporting AI workloads across network infrastructure and

2.] Applying AI to network operations. 

But in both cases, progress is constrained by persistent challenges. “2026 is when organizations find out if AI in networking delivers real operational impact—or remains stuck in pilot mode,” Leary said in the referenced LinkedIn Video.

Source: IDC

……………………………………………………………………………………………………………………………

Security remains the top concern among enterprises, both as a barrier to deployment and a primary use case for AI itself. “You have to fight AI with AI from a network security perspective,” said Brandon Butler, senior research manager at IDC. “There’s a realization that nefarious actors are leveraging AI themselves. The pressure is already on the network. The question now is whether organizations can keep up with what AI is demanding of their infrastructure,” he added.

Integration with existing systems and a shortage of skilled talent follow close behind. “Most folks don’t feel their staff can fully evaluate and select the right solutions,” Leary said. As a result, many organizations are turning outward for help:

  • 81% say they are increasing spending on managed service providers (MSP) to support AI initiatives.
  • 89% of data centers expect to increase bandwidth by at least 11% within the next year, driven by AI workloads.
  • That demand extends beyond individual facilities, with 91% expecting similar growth in inter-data center connectivity, highlighting the strain on distributed architectures.
  • Nearly half of respondents (46%) prefer AI systems that can both determine and execute network actions autonomously.
  • Another 41% favor a guided approach, while 13% prefer no AI involvement.

Cloud environments are seeing sharper increases in AI use. Organizations anticipate an average 49% rise in bandwidth for cloud connectivity over the next year. “The cloud is almost always involved,” Leary said. “The biggest group mixes one cloud platform with one or more data centers.”

Beyond the data center and cloud, the network edge is emerging as the next major growth area. Today, 27% of organizations have deployed AI workloads at the edge, and 54% plan to do so within two years. Butler said: “Folks who are leveraging AI more extensively are already pushing workloads to the edge. We see this as a leading indicator of where the market is going.”

“Two years in a row, the largest group said they want AI to both determine and execute actions. It was honestly surprising,” he added.

Enterprise edge bandwidth is projected to grow by an average of 51% in the next year. As AI becomes more distributed, network teams will need to manage greater complexity across environments while maintaining performance and security.

…………………………………………………………………………………………………………………………………………………………………………….

When assessing expected ROI from AI in networking, IDC survey respondents focused on elevating IT capabilities, with 31% prioritizing superior service levels and 30% focusing on operational efficiency. These outcomes ranked above worker productivity and revenue, suggesting that leaders are strategically utilizing AI to enhance foundational operational workflows. Notably, reducing operating costs ranked seventh, suggesting a focus on strategic value rather than immediate expense reduction.

Source: IDC

……………………………………………………………………………………………………

IDC Research identified specific applications—from automated configuration validation to AI-enhanced threat response—as catalysts for measurable performance gains and the organizational trust essential for broader implementation. For network executives, this phased approach represents the most strategic methodology for achieving long-term operational objectives.

“It doesn’t have to be handing the keys of your kingdom to AI to really get some benefits from these AI tools,” Butler concluded.

……………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.linkedin.com/posts/brandon-butler-29761a3_idc-recently-published-our-second-annual-activity-7429576183614320640-p5PA/

https://www.networkworld.com/article/4152655/ai-for-it-stalls-as-network-complexity-rises.html

Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Introduction:

Across regions from Germany to Mexico, users of artificial intelligence (AI) are less concerned about being replaced by AI than by its propensity to make major mistakes, according to one of the largest global surveys to date on real-world AI usage and perception. These mistakes, known as “AI hallucinations,” are essentially fabricated answers presented as fact — made-up stories rather than responses merely based on outdated information.

The study, conducted by Anthropic using its Claude chatbot, analyzed interviews with more than 80,000 users across 159 countries. The result is one of the most detailed global portraits yet of how AI is being deployed — and how users perceive its risks, benefits, and societal implications.

AI Hallucinations Outrank Job Displacement as Top Concern:

When asked what worries them most about AI, 27% of users cited AI chatbot errors described as “AI hallucinations,” while 22% pointed to job displacement and the loss of human autonomy. About 16% expressed concern that AI could weaken people’s capacity for critical thinking.

Image Credit: JOIST AI

“The AI hallucinations were a disaster. I lost so many hours of work,” said an entrepreneur from Germany. Another participant, a military worker in Mexico, noted the importance of domain knowledge in spotting AI’s flaws: “When I notice AI errors it’s because I’m well versed in the topic . . . but I wouldn’t know if the topic was alien to me, would I?”

An AI Interviewer for Global Insights:

The responses were collected in 70 languages using a novel feedback system that allowed Claude to act as both interviewer and analyst. The platform evaluated qualitative answers, categorizing responses to reveal common themes and linguistic nuances across regions.
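The categorize-and-count step can be illustrated with a drastically simplified stand-in. The real system used Claude to interpret free-text answers; this keyword-based version (all theme names and keywords are invented for illustration) just shows the shape of the analysis — bucket each qualitative answer into a theme, then tally:

```python
# Simplified stand-in for the LLM-as-analyst step: bucket free-text survey
# answers into themes by keyword, then tally. Theme names and keywords are
# invented for illustration; the real system used an LLM, not keywords.
from collections import Counter

THEMES = {
    "hallucination": ["made up", "wrong answer", "hallucin"],
    "job_displacement": ["my job", "replace", "unemploy"],
    "productivity": ["faster", "saves time", "productive"],
}

def categorize(answer: str) -> str:
    """Assign a free-text answer to the first matching theme."""
    text = answer.lower()
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            return theme
    return "other"

answers = [
    "It made up citations and I lost hours of work",
    "I worry it will replace junior staff",
    "Honestly it makes me much faster at drafts",
]
print(Counter(categorize(a) for a in answers))
```

Swapping the keyword matcher for an LLM call is what lets the approach scale across 70 languages and tens of thousands of open-ended answers while still producing countable themes like the percentages reported below.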

“Beyond its scale and linguistic diversity, the project aimed to collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products,” said Deep Ganguli, who leads Anthropic’s societal impacts team and oversaw the research initiative.

Productivity and Personal Growth Drive AI Adoption:

While data quality and reliability drew criticism, the survey also underscored widespread acknowledgment of AI’s positive impact on productivity. Thirty-two percent of respondents said that AI tools had meaningfully improved their output at work.

An entrepreneur in the United Arab Emirates explained, “I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people — I don’t wait for anyone anymore.” Participants from Colombia, Japan, and the United States described similar gains, emphasizing how AI helps them free up time for family, hobbies, and creative exploration.

In total, nearly one in five users (19%) said AI had fallen short of their expectations. Yet usage patterns demonstrate remarkable versatility: respondents reported employing AI as a productivity assistant, educational tutor, design partner, creative collaborator, or even an emotional support companion.

A vivid example came from a soldier in Ukraine, who wrote, “In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life — my AI friends.”

Regional and Economic Divides in AI Optimism:

Regional variation was pronounced. Saffron Huang, the lead researcher on the project, found that respondents in South America, Africa, and across South and Southeast Asia expressed more optimism than users in Europe, the United States, or East Asia.

“The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure,” said Huang. She added that this optimism might reflect a sample skew toward early adopters in developing markets — individuals inclined to view new technologies as opportunities rather than threats.

“They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries,” she said.

According to Anthropic’s researchers, AI’s limited visibility in daily workflows across lower-income economies may explain the difference. “If AI hasn’t visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist,” the team wrote in a companion blog post.

Next Steps: Measuring AI’s Real-World Impact:

Anthropic plans to extend its Claude Interviewer research framework into longitudinal studies that track how AI affects users’ lives over time. “The goal is to better measure both the improvements and the harms — and to use those insights to make systemic refinements,” said Ganguli.

The company’s approach — embedding feedback collection directly into an AI platform — represents an emerging model for data-driven, iterative AI development. By combining self-reported user experience data with large-scale text analytics, Anthropic aims to better understand how its models interact with human needs and constraints.

Industry and Research Community Respond:

The study has drawn attention across the AI community for its unprecedented reach and innovative methodology. Nickey Skarstad, director of product at language-learning company Duolingo, praised the work’s ambition. On LinkedIn, she wrote: “For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we’ve never had access to before.”

Still, several researchers remain cautious about overinterpreting the results. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, expressed reservations on X, saying he was “sceptical” about calling the study a new form of science due to potential selection bias and limitations in survey design. “A human qualitative researcher would take time to build trust with their participants, hold the space for reflection, introspection, contradictions — that’s the whole point of it,” he wrote.

Methodological caveats extend to demographics. Almost half of the survey’s respondents were based in North America or Western Europe, while regions such as Central Asia had just a few hundred participants.

Ilan Strauss, an economist and director of the AI Disclosures Project, described the initiative as “an excellent piece of work,” but urged careful interpretation. He noted that the absence of reported confidence intervals — standard practice in survey-based research — makes it difficult to measure uncertainty. Self-reported productivity gains, he added, are inherently prone to bias.

A Global Mirror for Human-AI Relations:

Despite these caveats, the Claude Interviewer study illustrates a broader shift in the relationship between humans and AI systems. As AI technologies proliferate across regions and industries, they are becoming both instruments of empowerment and sources of anxiety — mirroring social, economic, and cultural dynamics in striking ways.

While Western economies debate AI-driven labor disruption and ethical alignment, many in emerging markets frame AI as a means of upward mobility and creative expansion. This duality — between apprehension and aspiration — may shape not only AI adoption patterns but also future research and regulatory directions across global contexts.

References:

https://www.ft.com/content/e074d3a9-7fd8-447d-ac0a-e0de756ac5c5?syn-25a6b1a6=1 (PAYWALL)

https://www.joist.ai/post/ai-hallucinations-what-they-are-and-why-it-matters

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Alphabet’s 2026 capex forecast soars; Gemini 3 AI model is a huge success

Analysis & Economic Implications of AI adoption in China

China’s open source AI models to capture a larger share of 2026 global AI market

AWS to deploy AI inference chips from Cerebras in its data centers; Anapurna Labs/Amazon in-house AI silicon products

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

 

 

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

New Telco Opportunity – AI at the Edge:

At MWC 2026 last week, there was a flurry of claims that “AI at the Edge” would transform the telecom industry.  One of many examples is an article titled “The AI edge boom is giving telecom a new strategic role,” in which Jeff Aaron, vice president of product and solutions marketing at Hewlett Packard Enterprise (HPE), spoke with John Furrier of theCUBE, SiliconANGLE Media’s livestreaming studio, during an exclusive broadcast at MWC Barcelona.  They discussed telecom edge AI and why networking is becoming a strategic foundation for data-centric services.  Aaron said:

“A big reason for [reignited interest in routing] is AI workloads. They’re moving everywhere now. They have to move to the edge.  For them to move to the edge, you’ve got to get them outside of the factory and to all the locations. We’re right in the core of that, and it’s super exciting.”

As AI expands to the edge, data will need to move not only to local compute, but also between many distributed edge sites, making routing paramount.  According to Aaron, AI infrastructure is scaling in four ways: two inside data centers and two across distributed edge locations.

“There’s scale-out, scale-across, scale-up, and on-ramp. Two are within the data center — scale-out and scale-up — but scale-across and edge on-ramp basically mean you got to figure out how to connect to those areas, and those are just networking,” he added.

Scale-across refers to connecting distributed data centers and edge locations, while edge on-ramp brings remote sites such as factories or branch locations into the network to access AI services. Supporting those distributed environments creates an opportunity for HPE to bring networking and compute together into a more integrated infrastructure stack. At MWC 2026 Barcelona, those trends are clearly coming into focus, according to Aaron.

“Data is moving everywhere right now, and the network is back. The network isn’t just plumbing. The network is how you build a value-added service using an AI workload as a telco infrastructure,” he added.

Telecom carriers are now urgently trying to move from being “dumb data pipes” to becoming “AI performance platforms” by leveraging their geographically distributed infrastructure to host AI closer to the end user.  They want to pivot from selling just bandwidth and connectivity to selling outcomes and intelligence, with a heavy focus on industrial and enterprise-specific edge deployments.  They are considering the following services and business models:

  • Infrastructure as a Service (IaaS) & GPUaaS: Offering raw computing power, specifically GPUs, from edge data centers to enterprises that need low-latency processing without building their own facilities.
  • Sovereign AI Clouds: Providing AI services that guarantee data remains within national borders, appealing to government and highly regulated sectors like finance and healthcare.
  • API Monetization: Exposing real-time network data (e.g., location intelligence, predictive network quality, fraud risk scoring) via APIs that enterprises pay to integrate into their own applications.
  • Outcome-Based Pricing: Charging for specific business results, such as a “guaranteed video call quality” or “fraud loss reduction share,” rather than just data usage.
  • AI-as-a-Service (AIaaS): Bundling pre-trained models or specialized AI agents (e.g., for customer service or industrial monitoring) with connectivity.
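The outcome-based pricing idea in the list above can be made concrete with a toy metering sketch. The SLA threshold, premium rate, and credit below are hypothetical numbers chosen purely for illustration, not any operator’s actual tariff:

```python
# Illustrative sketch of outcome-based pricing: the operator bills a premium
# only for metering intervals where a latency SLA was actually met, and
# issues a credit otherwise. All rates/thresholds are hypothetical.

def bill_interval(measured_latency_ms: float,
                  sla_latency_ms: float = 20.0,
                  premium_rate: float = 1.50,
                  credit: float = 0.40) -> float:
    """Return the charge (or negative credit) for one metering interval."""
    return premium_rate if measured_latency_ms <= sla_latency_ms else -credit

def monthly_bill(latency_samples: list[float]) -> float:
    """Sum per-interval charges over a billing period."""
    return round(sum(bill_interval(s) for s in latency_samples), 2)

# Example: three intervals met the 20 ms SLA, one missed it.
print(monthly_bill([12.0, 18.5, 25.0, 9.9]))  # 3*1.50 - 0.40 = 4.1
```

The point of the sketch is that revenue tracks delivered outcomes (met SLAs) rather than bytes carried, which is the structural difference from per-GB pricing.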

Major Carrier AI Edge Deployment Plans:

  • AT&T:
    • Launched Connected AI for Manufacturing in March 2026, which unifies 5G, IoT, and generative AI to provide real-time fault detection (claiming a 70% reduction in waste).
    • Deploying “Edge Zones” in major U.S. cities (Detroit, LA, Dallas) to allow developers to run low-latency, cloud-based software locally.
    • Partnering with AWS to link fiber and 5G directly into AWS environments for distributed AI workloads.
  • Verizon:
    • Unveiled Verizon AI Connect, a suite of products designed to manage resource-intensive AI workloads for hyperscalers like Google Cloud and Meta.
    • Trialing V2X (Vehicle-to-Everything) platforms to provide carmakers with standardized APIs for low-latency edge processing in autonomous driving.
    • Collaborating with NVIDIA to integrate GPUs into private 5G networks for on-premise AI inferencing in robotics and AR.
  • SK Telecom (SKT):
    • Announced an “AI Native” strategy at MWC 2026, including a roadmap for AI-RAN (Radio Access Network) that uses GPUs to optimize network performance and host user AI apps simultaneously.
    • Building a Manufacturing AI Cloud powered by over 2,000 NVIDIA RTX GPUs to support digital twin simulations and robotics.
    • Expanding AI Data Centers (AIDC) across South Korea and Southeast Asia (Vietnam, Malaysia) using energy-optimized LNG-powered facilities.
  • Orange & Deutsche Telekom:
    • Deploying AI-powered planning tools to cut fiber rollout costs and optimize site power consumption by up to 33% using AI “Deep Sleep” modes.
    • Focusing on Sovereign AI strategies to ensure data governance for European enterprise customers.
  • Vodafone:
    • Utilizing AI/ML applications for daily power reduction at 5G sites and testing autonomous network healing via AI agents
  • BT:
    • Offers 5G-connected VR for manufacturing design teams (e.g., Hyperbat) to collaborate on 3D models in real-time.  
……………………………………………………………………………………………………………..
Summary of Emerging AI Edge Products:
Product Category | Primary Target | Key Value Proposition
AI-RAN | Industry 4.0 | Seamless, ultra-low latency for robotics and sensing.
Connected AI Platforms | Manufacturing | Real-time predictive maintenance and waste reduction.
AI-as-a-Service (AIaaS) | Developers/SMBs | Access to GPU power and pre-trained models via telco edge nodes.
Network Slicing APIs | App Developers | Programmatic control over bandwidth for AR/VR and gaming.

…………………………………………………………………………………………………………………………………………………………………………………………..

A Dissenting View of “AI at the Edge”:

The market for AI within the global telecommunications sector is valued at $6.69 billion in 2026, growing at a compound annual growth rate (CAGR) of 41.9% from 2025.  The broader edge AI market—including hardware, software, and services—is forecast to reach $29.98 billion in 2026, according to The Business Research Company.  We think those estimates are way too high.

The market research firm states:

………………………………………………………………………………………………………

Author’s Opinion:

Unless telcos change their corporate culture and slow the footprint growth of cloud service providers/hyperscalers, we think that AI at the Edge will be yet another telco monetization failure, just like their failures to monetize 4G LTE apps, the telco cloud, 5G, multi-access edge computing (MEC), OpenRAN, LPWANs, and other telecom technologies that never lived up to their promise and potential.

That’s largely because telcos are very weak at developing IT platforms, compute services, and killer applications, and at rapidly executing new services (e.g., 5G services require a 5G SA core network, which telcos were very slow to deploy).  Telecom execs themselves cite cultural and speed‑of‑change issues: the industry is not organized like a software company, so it struggles to iterate products at AI/cloud pace.  Telcos also historically struggle with software, and managing distributed GPU clusters is vastly different from managing cell towers.

After spending billions on 5G with very little or no ROI, investors are skeptical of the increased capex required for AI-grade edge servers, which telcos must maintain.  Those servers will be expensive (especially if they contain clusters of Nvidia GPUs) and consume a lot of power, which is a critical issue at the edge of the carrier’s network.

Many network operators frame AI/edge as “network optimization” or “utilizing underused sites,” not as building monetizable AI platforms with APIs, SDKs, and ecosystems.  This mirrors 5G, where huge RAN/core builds were not matched by a clear product and platform strategy, leaving value to OTTs and hyperscalers, which are extending their control planes and protocol stacks to the network edge (local zones, operator co‑lo, on‑premises stacks).

Telcos risk becoming “dumb pipes” for AI traffic if they can’t provide a superior developer ecosystem.  If they only sell space, power, and connectivity, the cloud service providers will continue to own the developer and AI value chain.  Analysts warn that edge is a “right to participate, not a right to win.”  As such, value accrues to whoever owns the AI platform, tools, marketplace, and pricing power, not the entity that provides connectivity, PoPs, or cell towers.

Data fragmentation and weak “intelligence” layer:

  • AI monetization depends on high‑quality, cross‑domain data, but telco data is fragmented across OSS, BSS, probes, and partner systems; without unification, it is hard to expose compelling network/edge intelligence services.

  • Analysts emphasize that failure here reduces telcos to generic GPU landlords, while higher‑margin offers (real‑time quality, fraud, identity, mobility/context APIs) remain unrealized.

Narrow internal focus on cost savings:

  • Many operators’ early AI focus is inward (Opex reduction in assurance, planning, customer care) rather than building external, revenue‑generating products, echoing how early 5G was justified mainly on cost/efficiency.

  • Commentators warn that if AI/edge remains a “network efficiency” play, the commercial upside will go to cloud/AI natives that turn similar capabilities into products sold to enterprises.

What analysts say telcos must do differently:

  • Build “Sovereign AI factories” and edge AI clouds: GPU‑enabled sites with cloud‑like developer experience (APIs, self‑service portals, metering, SLAs) and clear sovereign/regional guarantees.

  • Combine differentiated connectivity with AI services (latency‑backed SLAs, AI‑on‑RAN, domain‑specific models for verticals) and use modern, flexible commercial models instead of just selling bandwidth or colocation.

Conclusions:

In summary, the main risk for telcos is failing to transition from owning and maintaining network infrastructure to owning and operating AI platforms and products at software industry speed.  AI at the edge is less a new service or product and more an architectural upgrade.  The two ways telcos can benefit are:

  1.  Internal cost reduction: If telcos use it to lower their own costs (fraud prevention, risk management, predictive maintenance, fault isolation, self-healing networks, etc.), it’s an automatic win but won’t increase the top line.
  2.  Revenue from new AI-Edge services, e.g., Verizon uses edge-based video analytics in warehouses to improve inventory turnover by up to 40%.  If they expect to charge a massive premium for “AI-enabled 5G,” they face the same monetization wall that has doomed them for the past 20 years!

References:

https://siliconangle.com/2026/03/04/telecom-edge-ai-makes-networking-strategic-mwc26/

https://www.nvidia.com/en-us/lp/ai/the-blueprint-for-ai-success-ebook/

How telcos can monetize AI beyond connectivity

https://www.thebusinessresearchcompany.com/report/generative-artificial-intelligence-ai-in-telecom-global-market-report

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Dell’Oro: RAN Market Stabilized in 2025 with 1% CAGR forecast over next 5 years; Opinion on AI RAN, 5G Advanced, 6G RAN/Core risks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

AT&T and AWS to deliver last mile connectivity for AI workloads; AT&T Geo Modeler™ AI simulation tool

AT&T is strategically re-architecting its infrastructure for the AI era through high-capacity network modernization and deep integration with hyperscale cloud providers.

In addition to its almost six-year-old deal to run its 5G SA core network in Microsoft Azure’s cloud, AT&T announced at MWC 2026 that it’s now working with Amazon Web Services (AWS) to extend 5G and fiber connectivity from business customers and locations directly into AWS environments, creating secure, resilient and reliable premises‑to‑cloud architectures for AI workloads. The collaboration is designed to reduce network complexity and latency while supporting real‑time analytics, machine learning, and agentic AI use cases.

This collaboration continues a long-standing relationship between AT&T and AWS and follows recent news outlining broader efforts to modernize the nation’s connectivity infrastructure by providing high-capacity fiber to AWS data centers, migrate AT&T workloads to AWS cloud capabilities and explore emerging satellite technologies.

AWS Interconnect – last mile embeds AT&T‑delivered connectivity directly into AWS workflows. It is designed to let customers provision and manage last‑mile connectivity within the AWS environment, and it lays the foundation for AI agents that monitor and manage the AI experience from the user to the cloud. This streamlined, self‑managed approach helps enterprises reduce network complexity while maintaining control of their extended enterprise network, allowing businesses to move faster as they scale AI.

High level illustration of the planned AWS Interconnect – last mile architecture, showing how resilient interconnections and AT&T Fiber and fixed wireless access are intended to simplify private connectivity from customer locations into AWS environments. 

Diagram Source: AT&T

………………………………………………………………………………………………………

“AI does not just need more compute; it needs flatter networks and faster connections,” said Shawn Hakl, SVP & Head of Product, AT&T Business. “By bringing high‑capacity connectivity closer to cloud platforms, integrating the management of the networks directly into the cloud provisioning process and engineering for resiliency at the metro level, AT&T is helping enterprises streamline their networks, improve performance, security, and scale AI with confidence.”

AT&T says it is building an AI‑ready network (?) designed to scale performance through continued network investment, including capacity growth of up to 1.6 Tbps across key metro and long‑haul routes.

AT&T also announced it would work with Nvidia, Microsoft and MicroAI through its Connected AI platform for “smart manufacturing.”

………………………………………………………………………………………………………………..

Finally, AT&T described AT&T Geo Modeler, which it says can better predict connectivity for emerging technologies like autonomous vehicles, drones, and robotics.

The Geo Modeler is an AI-powered simulation tool that helps predict, in near real time, how a wireless network will perform in the real world. Inspired by the video games its creator, AT&T scientist Velin Kounev, played with his family growing up, the virtual model and simulation is “essentially like a giant video game of the United States” that, infused with AI tools, gives engineers a clearer picture of where potential weak spots may appear, so issues can be addressed earlier and fixes can roll out faster.

“The Geo Modeler helps us see how the real world will shape coverage before we build, so we can deliver connectivity that’s ready for what’s next,” said AT&T scientist Velin Kounev.

Matt Harden, VP of Connected Solutions at AT&T, agrees. “The Geo Modeler is a foundational capability for the connected mobility era,” he said. “By marrying advanced geospatial simulation with AI-driven network orchestration, we can deliver predictable, high-performance connectivity that adapts with the environment. Whether it’s a hurricane, a packed stadium, or a city corridor full of autonomous vehicles, we will be prepared.”
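To see what coverage prediction means at its most basic, the Geo Modeler’s job can be caricatured with the standard free-space path loss (FSPL) formula evaluated at ring distances around a single tower. Real tools model terrain, clutter, and interference with AI; the transmit power, frequency, and signal threshold below are illustrative assumptions only:

```python
import math

# Crude caricature of coverage prediction: free-space path loss at ring
# distances from one tower. Parameters are assumptions for illustration.

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Standard free-space path loss formula, in dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def covered(distance_km: float, tx_power_dbm: float = 46.0,
            freq_mhz: float = 1900.0, min_signal_dbm: float = -75.0) -> bool:
    """True if predicted received power clears the (assumed) threshold."""
    return tx_power_dbm - fspl_db(distance_km, freq_mhz) >= min_signal_dbm

rings = [0.5, 1, 2, 5, 10, 20, 50]  # km from the tower
print([d for d in rings if covered(d)])  # distant rings fall out of coverage
```

Running this kind of calculation over millions of grid points, with terrain-aware loss models in place of FSPL, is essentially what a “giant video game of the United States” amounts to computationally.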

References:

https://about.att.com/story/2026/aws-collaboration-scalable-business-ai.html

https://about.att.com/blogs/2026/150-years-of-connection.html

https://about.att.com/blogs/2025/geo-modeler.html

AT&T and Ericsson boost Cloud RAN performance with AI-native software running on Intel Xeon 6 SoC

AT&T deploys nationwide 5G SA while Verizon lags and T-Mobile leads

AT&T to buy spectrum licenses from EchoStar for $23 billion

AT&T’s convergence strategy is working as per its 3Q 2025 earnings report

Progress report: Moving AT&T’s 5G core network to Microsoft Azure Hybrid Cloud platform

AT&T 5G SA Core Network to run on Microsoft Azure cloud platform

 

Analysis & Economic Implications of AI adoption in China

Executive Summary:

Visible signs of artificial intelligence adoption in China are everywhere. Consumers interact seamlessly with chatbots, livestream hosts promote algorithmically selected products, and recommendation engines exhibit an almost anticipatory understanding of user preferences.  Yet, beyond these consumer-facing applications, a deeper and potentially more consequential transformation is unfolding. Across China’s retail and services sectors, AI is shifting from demand generation to cost optimization. Enterprises are deploying machine learning in logistics, inventory management, customer service, and fulfillment operations to reduce inefficiencies as revenue growth slows and pricing power tightens.

Highlights:

  • Chinese companies are increasingly using AI to control operational costs and improve efficiency in a low-growth economic environment.

  • AI is being deployed in logistics, inventory management, and customer service to reduce expenses rather than primarily drive demand.

  • This shift towards AI for cost reduction is leading to steadier cash flow and improved operating margins for consumer companies.

China’s Consumer Sector: AI Powers Efficiency Over Growth:

As China’s economy adjusts to structural deceleration—marked by subdued household confidence, persistent real-estate overhang, and maturing market saturation—consumer companies face an unfamiliar imperative: prioritize resilience over expansion. With pricing power eroded and cost inflation persistent, traditional growth levers have lost potency. Leading platforms are responding by reorienting AI investments toward operational efficiency, transforming algorithms from engagement engines into margin-defense mechanisms. For investors, this evolution signals a new phase of earnings potential—one where incremental productivity gains could prove more durable than cyclical demand recovery.

“In a low-growth environment, incremental efficiency gains matter more than top-line expansion,” notes Zhao Ming, senior analyst for China internet companies at Hongyuan Capital. “AI has become a strategic lever for margin preservation.”

China’s consumer sector entered 2026 navigating familiar structural headwinds: cautious household sentiment, a fading property-wealth effect, and fierce price competition. Unlike in previous cycles, companies are finding it increasingly difficult to pass rising costs on to consumers. The result has been a strategic realignment. Where past growth phases emphasized volume and engagement, today’s market is rewarding operational discipline. That shift has sharpened the appeal of AI—not as a marketing showcase, but as a core instrument of productivity and cost control.

“In a slower-growth environment, leading Chinese consumer companies are using AI primarily to improve productivity and reduce operating costs rather than to drive incremental demand,” McKinsey said in a recent analysis of AI adoption across China’s retail and services sectors.

From Growth Catalyst to Cost Lever:

The center of gravity for AI investment has shifted from customer-facing innovation to operational optimization. E-commerce platforms and logistics operators have been among the earliest to integrate AI into mission-critical workflows. Demand-forecasting models are helping warehouses fine-tune inventory levels and reduce exposure to slow-moving goods. Routing algorithms are compressing last-mile delivery times and cutting fuel consumption. Automated customer-service systems are deflecting an ever-larger share of inquiries typically handled by human agents.

On their own, each of these applications may appear incremental. Taken together, they represent a meaningful improvement in margin resilience at a time when top-line expansion remains constrained. In an environment where minor percentage-point gains in efficiency can significantly affect earnings quality, AI is emerging as a quiet but potent differentiator.
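As a minimal illustration of the demand-forecasting step described above, simple exponential smoothing over daily order counts can set a next-day stock target. The smoothing constant, sample data, and safety buffer are assumptions for the sketch; production systems use far richer models:

```python
import math

# Sketch of demand forecasting for inventory: simple exponential smoothing
# over daily orders, then a stock target with a safety buffer. All numbers
# here are illustrative assumptions.

def smooth_forecast(history: list[float], alpha: float = 0.5) -> float:
    """One-step-ahead forecast via simple exponential smoothing."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def stock_target(history: list[float], safety: float = 1.2) -> int:
    """Round the forecast up with a safety buffer to avoid stockouts."""
    return math.ceil(smooth_forecast(history) * safety)

daily_orders = [100, 120, 90, 110, 130]
print(stock_target(daily_orders))  # forecast 117.5 units -> stock 141
```

Even this trivial model captures the economics in the text: slightly better forecasts shrink both slow-moving inventory and stockout losses, and the gain compounds across thousands of SKUs and warehouses.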

Logistics as a Testbed for Scalable Efficiency:

The operational impact of AI is most visible in the logistics ecosystem, a sector that remains one of the largest cost centers in China’s consumer economy. Machine-learning systems are now proficient at forecasting order density by neighborhood and time of day, enabling fulfillment centers to position inventory closer to anticipated demand. In dense urban markets, adaptive algorithms continually adjust delivery routes in response to evolving conditions—from traffic and weather to cancellations and reorders—reducing both transit times and redundancy.

For investors, the value proposition is compelling: logistics efficiency scales. Once AI models are trained and stress-tested, they can be deployed across regions at low incremental cost, generating operating leverage even in periods of stagnant demand. Crucially, incumbents benefit from data scale. Years of transaction and delivery records translate into more accurate predictive models, reinforcing competitive moats and raising barriers to entry. This dynamic is reshaping industry structure even as consumer-facing platform features converge toward commoditization.

AI Extends Gains to Physical Retail:

Beyond e-commerce, brick-and-mortar retail—long considered a laggard in China’s digital transformation—is also seeing measurable efficiency dividends. Smart shelving, computer-vision inventory systems, and automated stock monitoring are cutting labor intensity while increasing inventory turnover. Grocery and convenience chains now rely on AI to optimize product assortments at the store level, calibrating selections to localized consumption patterns instead of applying national averages. The effect is twofold: reduced waste and fewer markdowns, both of which have historically weighed on profitability. The outcomes may not register as eye-catching innovation, but they align closely with investor priorities—stabler cash flows and predictable margins.

Labor Efficiency as a Strategic Imperative:

AI-enhanced customer service represents another underappreciated margin driver. Major consumer platforms report that routine customer interactions—order tracking, returns, product troubleshooting—are now predominantly handled through automated systems. This transition is particularly relevant in a labor market where wage growth continues to outpace consumption. Limiting headcount growth while maintaining response times and service quality has become a key operational goal.

“AI doesn’t replace customer service,” says Li Wenyuan, chief technology officer at retail software firm Qimeng Tech. “It filters it, so humans deal only with the expensive problems.” That filtering function is transforming customer operations from cost centers into scalable service platforms, balancing efficiency with user satisfaction.
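Li’s “filtering” point can be sketched as a trivial rule-based triage step that lets automation handle routine intents and escalates the expensive ones to humans. Real systems use ML intent classifiers; the intents and keywords here are made up for illustration:

```python
# Toy triage router: routine intents go to automation, costly or sensitive
# ones go to a human agent. Keyword sets are invented for illustration.

ROUTINE = {"track", "tracking", "return", "refund", "password"}
ESCALATE = {"fraud", "legal", "outage", "complaint"}

def route(message: str) -> str:
    """Return 'bot' for routine requests, 'human' for everything else."""
    words = set(message.lower().split())
    if words & ESCALATE:
        return "human"
    if words & ROUTINE:
        return "bot"
    return "human"  # default to a person when unsure

print(route("where is my tracking number"))   # bot
print(route("possible fraud on my account"))  # human
```

The design choice worth noting is the default: ambiguous messages fall through to a human, which is what keeps the automation a filter rather than a replacement.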

Economic Implications:

For investors, the impact of China’s second-wave AI adoption will likely manifest less in headline growth metrics and more in incremental financial performance indicators. Key areas to watch include:

  • Operating margin expansion driven by process automation

  • Reduced fulfillment and logistics costs as a share of revenue

  • Improved capital-expenditure efficiency through data-driven asset utilization

The first chapter of China’s AI consumer story was about differentiation—using algorithms to personalize experiences, boost engagement, and drive sales. The next chapter is about discipline. As growth normalizes, companies are deploying AI to do more with less: compress costs, stabilize earnings, and build leaner, more adaptive operating models. In a market where scale alone no longer guarantees profitability, AI has become not just a tool for innovation—but a mechanism for survival.

References:

https://www.barrons.com/articles/china-ai-boom-commerce-warehouses-b1ad55f1

China’s open source AI models to capture a larger share of 2026 global AI market

China’s telecom industry rapid growth in 2025 eludes Nokia and Ericsson as sales collapse

China ITU filing to put ~200K satellites in low earth orbit while FCC authorizes 7.5K additional Starlink LEO satellites

China gaining on U.S. in AI technology arms race- silicon, models and research

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Bloomberg: China Lures Billionaires Into Race to Catch U.S. in AI

 

 

 

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Executive Summary:

In a February 6, 2026 CNBC interview with Scott Wapner, Nvidia CEO Jensen Huang [1.] characterized the current AI build‑out as “the largest infrastructure buildout in human history,” driven by exceptionally high demand for compute from hyperscalers and AI companies. “Through the roof” is how he described AI infrastructure spending.  He called it a “once-in-a-generation infrastructure buildout,” specifically highlighting that demand for Nvidia’s Blackwell chips and the upcoming Vera Rubin platform is “sky-high.” He emphasized that the shift from experimental AI to AI as a fundamental utility has reached a definitive inflection point for every major industry.

Huang forecasts that a roughly 7‑ to 8‑year AI investment cycle lies ahead, with the capital intensity justified because deployed AI infrastructure is already generating rising cash flows for operators.  He maintains that the widely cited ~$660 billion AI data center capex pipeline is sustainable, on the grounds that GPUs and surrounding systems are revenue‑generating assets, not speculative overbuild. In his view, as long as customers can monetize AI workloads profitably, they will “keep multiplying their investments,” which underpins continued multi‑year GPU demand, including for prior‑generation parts that remain fully leased.

Note 1.  Being the undisputed leader of AI hardware (GPU chips and networking equipment via its Mellanox acquisition), Nvidia MUST ALWAYS MAKE POSITIVE REMARKS AND FORECASTS related to the AI build out boom.  Reader discretion is advised regarding Huang’s extremely bullish, “all-in on AI” remarks.

Huang reiterated that AI will “fundamentally change how we compute everything,” shifting data centers from general‑purpose CPU‑centric architectures to accelerated computing built around GPUs and dense networking. He emphasized Nvidia’s positioning as a full‑stack infrastructure and computing platform provider—chips, systems, networking, and software—rather than a standalone chip vendor.  He accurately stated that Nvidia designs “all components of AI infrastructure” so that system‑level optimization (GPU, NIC, interconnect, software stack) can deliver performance gains that outpace what is possible with a single chip under a slowing Moore’s Law. The installed base is presented as productive: even six‑year‑old A100‑class GPUs are described as fully utilized through leasing, underscoring the persistent elasticity of AI compute demand across generations.

AI Poster Children – OpenAI and Anthropic:

Huang praised OpenAI and Anthropic, the two leading artificial intelligence labs, which both use Nvidia chips through cloud providers. Nvidia invested $10 billion in Anthropic last year, and Huang said earlier this week that the chipmaker will invest heavily in OpenAI’s next fundraising round.

“Anthropic is making great money. Open AI is making great money,” Huang said. “If they could have twice as much compute, the revenues would go up four times as much.”

He said that all the graphics processing units that Nvidia has sold in the past — even six-year old chips such as the A100 — are currently being rented, reflecting sustained demand for AI computing power.

“To the extent that people continue to pay for the AI and the AI companies are able to generate a profit from that, they’re going to keep on doubling, doubling, doubling, doubling,” Huang said.

Economics, utilization, and returns:

On economics, Huang’s central claim is that AI capex converts into recurring, growing revenue streams for cloud providers and AI platforms, which differentiates this cycle from prior overbuilds. He highlights very high utilization: GPUs from multiple generations remain in service, with cloud operators effectively turning them into yield‑bearing infrastructure.

This utilization and monetization profile underlies his view that the capex “arms race” is rational: when AI services are profitable, incremental racks of GPUs, network fabric, and storage can be modeled as NPV‑positive infrastructure projects rather than speculative capacity. He implies that concerns about a near‑term capex cliff are misplaced so long as end‑market AI adoption continues to inflect.

Competitive and geopolitical context:

Huang acknowledges intensifying global competition in AI chips and infrastructure, including from Chinese vendors such as Huawei, especially under U.S. export controls that have reduced Nvidia’s China revenue share to roughly half of pre‑control levels. He frames Nvidia’s strategy as maintaining an innovation lead so that developers worldwide depend on its leading‑edge AI platforms, which he sees as key to U.S. leadership in the AI race.

He also ties AI infrastructure to national‑scale priorities in energy and industrial policy, suggesting that AI data centers are becoming a foundational layer of economic productivity, analogous to past buildouts in electricity and the internet.

Implications for hyperscalers and chips:

Hyperscalers (and Nvidia customers) Meta, Amazon, Google/Alphabet and Microsoft recently stated that they plan to dramatically increase spending on AI infrastructure in the years ahead. In total, these hyperscalers could spend $660 billion on capital expenditures in 2026 [2.], with much of that spending going toward buying Nvidia’s chips. Huang’s message to them is that AI data centers are evolving into “AI factories,” where each gigawatt of capacity represents tens of billions of dollars of investment spanning land, compute, and networking. He suggests that the hyperscaler industry—roughly a $2.5 trillion sector with about $500 billion in annual capex transitioning from CPU‑ to GPU‑centric generative AI—still has substantial room to run.

Note 2: An understated point is that while these hyperscalers are spending hundreds of billions of dollars on AI data centers and Nvidia chips/equipment, they are simultaneously laying off tens of thousands of employees. For example, Amazon recently announced 16,000 job cuts this year, after 14,000 layoffs last October.

From a chip‑level perspective, he argues that Nvidia’s competitive moat stems from tightly integrated hardware, networking, and software ecosystems rather than any single component, positioning the company as the systems architect of AI infrastructure rather than just a merchant GPU vendor.

References:

https://www.cnbc.com/2026/02/06/nvidia-rises-7percent-as-ceo-says-660-billion-capex-buildout-is-sustainable.html

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers
Analysis: Edge AI and Qualcomm’s AI Program for Innovators 2026 – APAC for startups to lead in AI innovation

Qualcomm is a strong believer in Edge AI as an enabler of faster, more secure, and energy-efficient processing directly on devices—rather than the cloud—unlocking real-time intelligence for industries like robotics and smart cities.

In support of that vision, the fabless SoC company announced the official launch of its Qualcomm AI Program for Innovators (QAIPI) 2026 – APAC, a regional startup incubation initiative that supports startups across Japan, Singapore, and South Korea in advancing the development and commercialization of innovative edge AI solutions.

Building on Qualcomm’s commitment to edge AI innovation, the second edition of QAIPI-APAC invites startups to develop intelligent solutions across a broad range of edge-AI applications using Qualcomm Dragonwing™ and Snapdragon® platforms, together with the new Arduino® UNO Q development board, strengthening their pathway toward global commercialization.

Startups gain comprehensive support and resources, including access to Qualcomm Dragonwing™ and Snapdragon® platforms, the Arduino® UNO Q development board, technical guidance and mentorship, a grant of up to US$10,000, and eligibility for up to US$5,000 in patent filing incentives, accelerating AI product development and deployment.

Applications are open now through April 30, 2026 and will be evaluated based on innovation, technical feasibility, potential societal impact, and commercial relevance. The program will be implemented in two phases. The application phase is open to eligible startups incorporated and registered in Japan, Singapore, or South Korea.

Shortlisted startups will enter the mentorship phase, receiving one-on-one guidance, online training, technical support, and access to Qualcomm-powered hardware platforms and development kits for product development. They will also receive a shortlist grant of up to US$10,000 and may be eligible for a patent filing incentive of up to US$5,000.

At the conclusion of the program, shortlisted startups may be invited to showcase their innovations at a signature Demo Day in late 2026, engaging with industry leaders, investors, and potential collaborators across the APAC innovation ecosystem.

Comment and Analysis:

Qualcomm is a strong believer in Edge AI—the practice of running AI models directly on devices (smartphones, cars, IoT, PCs) rather than in the cloud—because it views edge AI as the next major technological paradigm shift, overcoming limitations inherent in cloud computing. Despite the challenges of power consumption and processing limits, Qualcomm’s strategy hinges on specialized, heterogeneous computing rather than relying solely on RISC-based CPU cores.

Key Issues for Qualcomm’s Edge AI solutions:

1.  The “Heterogeneous” Solution to Processing Limits
While it is true that standard CPU cores (even RISC-based ones) are inefficient for AI workloads, Qualcomm does not rely on them alone. Instead, it uses a heterogeneous architecture:
  • Qualcomm® AI Engine: This combines specialized hardware, including the Hexagon NPU (Neural Processing Unit), Adreno GPU, and CPU. The NPU is specifically designed to handle high-performance, complex AI workloads (like Generative AI) far more efficiently than a generic CPU.
  • Custom Oryon CPU: The latest Snapdragon X Elite platform features customized cores that provide high performance while outperforming traditional x86 solutions in power efficiency for everyday tasks.
2. Overcoming Power Consumption (Performance/Watt)
Qualcomm focuses on “Performance per Watt” rather than raw power.
  • Specialization Saves Power: By using specialized AI engines (NPUs) rather than general-purpose CPU/GPU cores, Qualcomm can run inference tasks at a fraction of the power cost.
  • Lower Overall Energy: Doing AI at the edge can save total energy by avoiding the need to send data to a power-hungry data center, which requires network infrastructure, and then sending it back.
  • Intelligent Efficiency: The Snapdragon 8 Elite, for example, saw a 27% reduction in power consumption while increasing AI performance significantly.
3. Critical Advantages of Edge over Cloud
Qualcomm believes edge is essential because cloud AI cannot solve certain critical problems:
  • Instant Responsiveness (Low Latency): For autonomous vehicles or industrial robotics, a few milliseconds of latency to the cloud can be catastrophic. Edge AI provides real-time, instantaneous analysis.
  • Privacy and Security: Data never leaves the device. This is crucial for privacy-conscious users (biometrics) and compliance (GDPR), which is a major advantage over cloud-based AI.
  • Offline Capability: Edge devices, such as agricultural sensors or smart home devices in remote areas, continue to function without internet connectivity.
4. Market Expansion and Economic Drivers
  • Diversification: With the smartphone market maturing, Qualcomm sees the “Connected Intelligent Edge” as a huge growth opportunity, extending their reach into automotive, IoT, and PCs.
  • “Ecosystem of You”: Qualcomm aims to connect billions of devices, making AI personal and context-aware, rather than generic.
5. Bridging the Gap: Software & Model Optimization
Qualcomm is not just providing hardware; they are simplifying the deployment of AI:
  • Qualcomm AI Hub: This makes it easier for developers to deploy optimized models on Snapdragon devices.
  • Model Optimization: They specialize in making AI models smaller and more efficient (using quantization and specialized AI inference) to run on devices without requiring massive, cloud-sized computing power.
In summary, Qualcomm believes in Edge AI because they are building highly specialized hardware designed to excel within tight power and thermal constraints.
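As a concrete illustration of the quantization step mentioned above, the sketch below applies symmetric per-tensor int8 quantization to a small list of weights in plain Python. This is the generic technique, shown for illustration only; it is not Qualcomm AI Hub code, and the example weights and scale scheme are assumptions:

```python
# Minimal symmetric int8 weight-quantization sketch (generic technique,
# not Qualcomm AI Hub code). Floats are mapped to [-127, 127] with a
# single per-tensor scale, then dequantized to measure the error.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0                      # one scale for the whole tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate floats."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9, -0.33]        # hypothetical weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print("int8:", q)
print(f"scale={scale:.5f}, max reconstruction error={max_err:.5f}")
```

The payoff is the 4x memory reduction (8-bit integers instead of 32-bit floats) and cheaper integer arithmetic on an NPU, at the cost of a bounded rounding error of at most half the scale per weight.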
……………………………………………………………………………………………………………………………………………………………………………

References:

https://www.prnewswire.com/apac/news-releases/qualcomm-ai-program-for-innovators-2026–apac-officially-kicks-off—empowering-startups-across-japan-singapore-and-south-korea-to-lead-the-ai-innovation-302676025.html

Qualcomm CEO: AI will become pervasive, at the edge, and run on Snapdragon SoC devices

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

Private 5G networks move to include automation, autonomous systems, edge computing & AI operations

Nvidia’s networking solutions give it an edge over competitive AI chip makers

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

CES 2025: Intel announces edge compute processors with AI inferencing capabilities

Qualcomm CEO: expect “pre-commercial” 6G devices by 2028