China gaining on U.S. in AI technology arms race: silicon, models and research

Introduction:

According to the Wall Street Journal, the U.S. maintains its early lead in AI technology, with Silicon Valley home to the most popular AI models and the most powerful AI chips (from Santa Clara-based Nvidia and AMD). However, China has shown a willingness to spend whatever it takes to take the lead in AI models and silicon.

The rising popularity of DeepSeek, the Chinese AI startup, has buoyed Beijing’s hopes that it can become more self-sufficient. Huawei has published several papers this year detailing how its researchers used its homegrown AI chips to build large language models without relying on American technology.

“China is obviously making progress in hardening its AI and computing ecosystem,” said Michael Frank, founder of think tank Seldon Strategies.

AI Silicon:

Morgan Stanley analysts forecast that 82% of China’s AI chips will come from domestic makers by 2027, up from 34% in 2024. China’s government has played an important role, funding new chip initiatives and other projects. In July, the local government in Shenzhen, where Huawei is based, said it was raising around $700 million to invest in strengthening an “independent and controllable” semiconductor supply chain.

During a meeting with President Xi Jinping in February, Huawei Chief Executive Officer Ren Zhengfei told Xi about “Project Spare Tire,” an effort by Huawei and 2,000 other enterprises to help China’s semiconductor sector achieve a self-sufficiency rate of 70% by 2028, according to people familiar with the meeting.

……………………………………………………………………………………………………………………………………………

AI Models:

Prodded by Beijing, Chinese financial institutions, state-owned companies and government agencies have rushed to deploy Chinese-made AI models, including DeepSeek [1.] and Alibaba’s Qwen. That has fueled demand for homegrown AI technologies and fostered domestic supply chains.

Note 1. DeepSeek’s V3 large language model matched many performance benchmarks of rival AI programs developed in the U.S. at a fraction of the cost. DeepSeek’s open-weight models have been integrated into many hospitals in China for various medical applications.

In recent weeks, a flurry of Chinese companies have flooded the market with open-source AI models, many of which claim to surpass DeepSeek’s performance in certain use cases. Open-source models are freely accessible for modification and deployment.

The Chinese government is actively supporting AI development through funding and policy initiatives, including promoting the use of Chinese-made AI models in various sectors. 

Meanwhile, OpenAI’s CEO Sam Altman said his company had pushed back the release of its open-source AI model indefinitely for further safety testing.

AI Research:

China has taken a commanding lead in the exploding field of artificial intelligence (AI) research, despite U.S. restrictions on exporting key computing chips to its rival, finds a new report.

The analysis of the proprietary Dimensions database, released yesterday, finds that the number of AI-related research papers has grown from fewer than 8,500 published in 2000 to more than 57,000 in 2024. In 2000, China-based scholars produced just 671 AI papers, but in 2024 their 23,695 AI-related publications topped the combined output of the United States (6,378), the United Kingdom (2,747), and the European Union (10,055), which together totaled 19,180.

“U.S. influence in AI research is declining, with China now dominating,” Daniel Hook, CEO of Digital Science, which owns the Dimensions database, writes in the report DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI.

In 2024, China’s researchers filed 35,423 AI-related patent applications, more than 13 times the 2,678 patents filed in total by the U.S., the U.K., Canada, Japan, and South Korea.

References:

https://www.wsj.com/tech/ai/how-china-is-girding-for-an-ai-battle-with-the-u-s-5b23af51

https://www.science.org/content/article/china-tops-world-artificial-intelligence-publications-database-analysis-reveals#

Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Goldman Sachs: Big 3 China telecom operators are the biggest beneficiaries of China’s AI boom via DeepSeek models; China Mobile’s ‘AI+NETWORK’ strategy

Gen AI eroding critical thinking skills; AI threatens more telecom job losses

Softbank developing autonomous AI agents; an AI model that can predict and capture human cognition

AI spending is surging; companies accelerate AI adoption, but job cuts loom large

Big Tech and VCs invest hundreds of billions in AI while salaries of AI experts reach the stratosphere

ZTE’s AI infrastructure and AI-powered terminals revealed at MWC Shanghai

Deloitte and TM Forum : How AI could revitalize the ailing telecom industry?

 

Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system

On Saturday, Huawei Technologies displayed an advanced AI computing system in China, as the Chinese technology giant seeks to capture market share in the country’s growing artificial intelligence sector.  Huawei’s CloudMatrix 384 system made its first public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company’s booth.

The Huawei CloudMatrix 384 is a high-density AI computing system featuring 384 Huawei Ascend 910C chips, designed to rival Nvidia’s GB200 NVL72 (more below).  The AI system employs a “supernode” architecture with high-speed internal chip interconnects. The system is built with optical links for low-latency, high-bandwidth communication. Huawei has also integrated the CloudMatrix 384 into its cloud platform. The system has drawn close attention from the global AI community since Huawei first announced it in April.

The CloudMatrix 384 resides on the super-node Ascend platform and uses high-speed bus interconnection, resulting in low-latency linkage between 384 Ascend NPUs. Huawei says that “compared to traditional AI clusters that often stack servers, storage, network technology, and other resources, Huawei CloudMatrix has a super-organized setup. As a result, it also reduces the chance of facing failures at times of large-scale training.”

Attendees visit a Huawei booth during the World Artificial Intelligence Conference in Shanghai, China July 26, 2025.
Photo Credit: REUTERS/Go Nakamura

 

Huawei staff at its WAIC booth declined to comment when asked to introduce the CloudMatrix 384 system. A spokesperson for Huawei did not respond to questions.  However, Huawei says that “early reports revealed that the CloudMatrix 384 can offer 300 PFLOPs of dense BF16 computing. That’s double of Nvidia GB200 NVL72 AI tech system. It also excels in terms of memory capacity (3.6x) and bandwidth (2.1x).”  Indeed, industry analysts view the CloudMatrix 384 as a direct competitor to Nvidia’s GB200 NVL72, the U.S. GPU chipmaker’s most advanced system-level product currently available in the market.

One industry expert has said the CloudMatrix 384 system rivals Nvidia’s most advanced offerings. Dylan Patel, founder of semiconductor research group SemiAnalysis, said in an April article that Huawei now had AI system capabilities that could beat Nvidia’s. The CloudMatrix 384 incorporates 384 of Huawei’s latest 910C chips and outperforms Nvidia’s GB200 NVL72, which uses 72 B200 chips, on some metrics, according to SemiAnalysis. The performance stems from Huawei’s system design capabilities, which compensate for weaker individual chip performance through the use of more chips and system-level innovations, SemiAnalysis said.
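To see what that system-level trade-off implies, here is a minimal back-of-the-envelope sketch based only on the figures quoted above (300 PFLOPs of dense BF16 compute for the CloudMatrix 384, and the claim that this is double the GB200 NVL72, implying roughly 150 PFLOPs for the Nvidia system). These are the article’s claimed numbers, not independently verified measurements.

```python
# Rough per-chip arithmetic implied by the figures quoted above.
systems = {
    "CloudMatrix 384 (384 x Ascend 910C)": {"chips": 384, "pflops_bf16": 300.0},
    "GB200 NVL72 (72 x B200)":             {"chips": 72,  "pflops_bf16": 300.0 / 2},  # implied by the "double" claim
}

for name, s in systems.items():
    per_chip = s["pflops_bf16"] / s["chips"]
    print(f"{name}: {s['pflops_bf16']:.0f} PFLOPs total, ~{per_chip:.2f} PFLOPs per chip")

# On these numbers, each B200 delivers roughly 2.7x the dense BF16 throughput of
# an Ascend 910C, while Huawei wins at the system level by packing 5.3x as many
# chips into one "supernode" -- the trade-off SemiAnalysis describes.
```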

Huawei has become widely regarded as China’s most promising domestic supplier of chips essential for AI development, even though the company faces U.S. export restrictions. Nvidia CEO Jensen Huang told Bloomberg in May that Huawei had been “moving quite fast” and named the CloudMatrix as an example.

Huawei says the system uses a “supernode” architecture that allows the chips to interconnect at super-high speeds. In June, Huawei Cloud CEO Zhang Pingan said the CloudMatrix 384 system was operational on Huawei’s cloud platform.

According to Huawei, the Ascend AI chip-based CloudMatrix 384 offers three important benefits:

  • Ultra-large bandwidth
  • Ultra-low latency
  • Ultra-strong performance

These three benefits can help enterprises achieve better AI training as well as stable reasoning performance for models, along with long-term reliability.

References:

https://www.huaweicentral.com/huawei-launches-cloudmatrix-384-ai-chip-cluster-against-nvidia-nvl72/

https://www.reuters.com/world/china/huawei-shows-off-ai-computing-system-rival-nvidias-top-product-2025-07-26/

https://semianalysis.com/2025/04/16/huawei-ai-cloudmatrix-384-chinas-answer-to-nvidia-gb200-nvl72/

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era

FT: Nvidia invested $1bn in AI start-ups in 2024

Gen AI eroding critical thinking skills; AI threatens more telecom job losses

Two alarming research studies this year have drawn attention to the damage that Gen AI agents like ChatGPT are doing to our brains:

The first study, published in February, by Microsoft and Carnegie Mellon University, surveyed 319 knowledge workers and concluded that “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skills for independent problem-solving.”

An MIT study divided participants into three essay-writing groups. One group had access to Gen AI and another to Internet search engines while the third group had access to neither. This “brain” group, as MIT’s researchers called it, outperformed the others on measures of cognitive ability. By contrast, participants in the group using a Gen AI large language model (LLM) did the worst. “Brain connectivity systematically scaled down with the amount of external support,” said the report’s authors.

Across the 20 companies regularly tracked by Light Reading, headcount fell by 51,700 last year. Since 2015, it has dropped by more than 476,600, more than a quarter of the previous total.

Source:  Light Reading

………………………………………………………………………………………………………………………………………………

Doing More with Less:

  • In 2015, Verizon generated sales of $131.6 billion with a workforce of 177,700 employees. Last year, it made $134.8 billion with fewer than 100,000. Revenues per employee, accordingly, have risen from about $741,000 to more than $1.35 million over this period.
  • AT&T made nearly $868,000 per employee last year, compared with less than $522,000 in 2015.
  • Deutsche Telekom, buoyed by its T-Mobile US business, has grown its revenue per employee from about $356,000 to more than $677,000 over the same time period.
  • Orange’s revenue per employee has risen from $298,000 to $368,000.

Significant workforce reductions have happened at all those companies, especially AT&T, which finished last year with 141,000 employees – about half the number it had in 2015!
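As a quick sanity check of the revenue-per-employee arithmetic above, here is a minimal Python sketch using the Verizon figures quoted by Light Reading; the 2024 headcount is taken as 100,000, consistent with the “fewer than 100,000” wording.

```python
# Revenue per employee = annual revenue / headcount, using the figures cited above.
verizon = {
    "2015": (131.6e9, 177_700),   # revenue in US$, employees
    "2024": (134.8e9, 100_000),   # "fewer than 100,000" employees, rounded
}
for year, (revenue, employees) in verizon.items():
    print(f"Verizon {year}: ~${revenue / employees:,.0f} revenue per employee")
# -> roughly $741,000 in 2015 and about $1.35 million in 2024, matching the text.
```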

Not to be outdone, headcount at network equipment companies is also shrinking. Ericsson, Europe’s biggest 5G vendor, cut 6,000 jobs or 6% of its workforce last year and has slashed 13,000 jobs since 2023. Nokia’s headcount fell from 86,700 in 2023 to 75,600 at the end of last year. The latest message from Börje Ekholm, Ericsson’s CEO, is that AI will help the company operate with an even smaller workforce in future. “We also see and expect big benefits from the use of AI, and that is one reason why we expect restructuring costs to remain elevated during the year,” he said on this week’s earnings call with analysts.

………………………………………………………………………………………………………………………………………………

Other Voices:

Light Reading’s Iain Morris wrote, “An erosion of brainpower and ceding of tasks to AI would entail a loss of control as people are taken out of the mix. If AI can substitute for a junior coder, as experts say it can, the entry-level job for programming will vanish with inevitable consequences for the entire profession. And as AI assumes responsibility for the jobs once done by humans, a shrinking pool of individuals will understand how networks function.”

“If you can’t understand how the AI is making that decision, and why it is making that decision, we could end up with scenarios where when something goes wrong, we simply just can’t understand it,” said Nik Willetts, the CEO of a standards group called the TM Forum, during a recent conversation with Light Reading. “It is a bit of an extreme to just assume no one understands how it works,” he added. “It is a risk, though.”

………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/ai-machine-learning/as-ai-plans-to-make-us-stupid-telco-jobs-keep-disappearing

https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

AI spending is surging; companies accelerate AI adoption, but job cuts loom large

Verizon and AT&T cut 5,100 more jobs with a combined 214,350 fewer employees than 2015

Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors

Nokia (like Ericsson) announces fresh wave of job cuts; Ericsson lays off 240 more in China

Deutsche Telekom exec: AI poses massive challenges for telecom industry

Softbank developing autonomous AI agents; an AI model that can predict and capture human cognition

Speaking at a customer event Wednesday in Tokyo, Softbank Chairman and CEO Masayoshi Son said his company is developing “the world’s first” artificial intelligence (AI) agent system that can autonomously perform complex tasks. Human programmers will no longer be needed. “The AI agents will think for themselves and improve on their own…so the era of humans doing the programming is coming to an end,” he said.

Softbank estimated it needed to create around 1,000 agents per person – a large number because “employees have complex thought processes.  The agents will be active 24 hours a day, 365 days a year and will interact with each other.”  Son estimates the agents will be at least four times as productive and four times as efficient as humans, and would cost around 40 Japanese yen (US$0.27) per agent per month. At that rate, the billion-agent plan would cost SoftBank $3.2 billion annually.
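The $3.2 billion figure follows directly from the quoted price point, as the short check below shows; the yen-to-dollar rate of roughly 150 is an assumption inferred from the US$0.27-per-agent figure.

```python
# Annual cost of 1 billion agents at 40 yen per agent per month.
agents = 1_000_000_000
yen_per_agent_per_month = 40
yen_per_usd = 150          # assumed rate, consistent with 40 yen ~= US$0.27

annual_yen = agents * yen_per_agent_per_month * 12
annual_usd = annual_yen / yen_per_usd
print(f"{annual_yen / 1e12:.2f} trillion yen per year ~= ${annual_usd / 1e9:.1f} billion")
# -> 0.48 trillion yen per year ~= $3.2 billion, matching the article's figure.
```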

“For 40 yen per agent per month, the agent will independently memorize, negotiate and conduct learning. So with these actions being taken, it’s incredibly cheap,” Son said. “I’m excited to see how the AI agents will interact with one another and advance given tasks,” Son added that the AI agents, to achieve the goals, will “self-evolve and self-replicate” to execute subtasks.

Unlike generative AI, which needs human commands to carry out tasks, an AI agent performs tasks on its own by designing workflows with data available to it. It is expected to enhance productivity at companies by helping their decision-making and problem-solving.

While the CEO’s intent is clear, details of just how and when SoftBank will build this giant AI workforce are scarce. Son admitted the 1 billion target would be “challenging” and that the company had not yet developed the necessary software to support the huge numbers of agents. He said his team needed to build a toolkit for creating more agents and an operating system to orchestrate and coordinate them. Son, one of the world’s most ardent AI evangelists, is betting the company’s future on the technology.

According to Son, the capabilities of AI agents have already surpassed those of PhD-holders in advanced fields including physics, mathematics and chemistry. “There are no questions it can’t comprehend. We’re almost at a stage where there are hardly any limitations,” he enthused. Son acknowledged the problem of AI hallucinations, but dismissed it as “a temporary and minor issue.”  Son said the development of huge AI data centers, such as the $500 billion Stargate project, would enable exponential growth in computing power and AI capabilities.

Softbank Group Corp. Chairman and CEO Masayoshi Son (L) and OpenAI CEO Sam Altman at an event on July 16, 2025. (Kyodo)

The project comes as SoftBank Group and OpenAI, the developer of chatbot ChatGPT, said in February they had agreed to establish a joint venture to promote AI services for corporations. Wednesday’s event included a cameo appearance from Sam Altman, CEO of SoftBank partner OpenAI, who said he was confident about the future of AI because the scaling law would exist “for a long time” and that cost was continually going down. “I think the first era of AI, the…ChatGPT initial era was about an AI that you could ask anything and it could tell you all these things,” Altman said.

“Now as these (AI) agents roll out, AI can do things for you…You can ask the computer to do something in natural language, a sort of vaguely defined complex task, and it can understand you and execute it for you,” Altman said. “The productivity and potential that it unlocks for the world is quite huge.”

……………………………………………………………………………………………………………………………………………..

According to the NY Times, an international team of scientists believes that AI systems can help them understand how the human mind works. They have created a ChatGPT-like system that can play the part of a human in a psychological experiment and behave as if it has a human mind. Details about the system, known as Centaur, were published on Wednesday in the journal Nature. Dr. Marcel Binz, a cognitive scientist at Helmholtz Munich, a German research center, is the author of the new AI study.

The researchers gathered a range of studies to train Meta’s LLaMA LLM: some that they had carried out themselves, and others that were conducted by other groups. In one study, human volunteers played a game in which they steered a spaceship in search of treasure. In another, they memorized lists of words. In yet another, they played a pair of slot machines with different payouts and figured out how to win as much money as possible. All told, 160 experiments were chosen for LLaMA to train on, including over 10 million responses from more than 60,000 volunteers.

References:

https://english.kyodonews.net/articles/-/57396#google_vignette

https://www.lightreading.com/ai-machine-learning/softbank-aims-for-1-billion-ai-agents-this-year

https://www.nytimes.com/2025/07/02/science/ai-psychology-mind.html

https://www.nature.com/articles/s41586-025-09215-4

AI spending is surging; companies accelerate AI adoption, but job cuts loom large

Big Tech and VCs invest hundreds of billions in AI while salaries of AI experts reach the stratosphere

Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth

Agentic AI and the Future of Communications for Autonomous Vehicle (V2X)

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions

Deloitte and TM Forum : How AI could revitalize the ailing telecom industry?

McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector

ZTE’s AI infrastructure and AI-powered terminals revealed at MWC Shanghai

Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation

Big tech firms target data infrastructure software companies to increase AI competitiveness

SK Group and AWS to build Korea’s largest AI data center in Ulsan

OpenAI partners with G42 to build giant data center for Stargate UAE project

Nile launches a Generative AI engine (NXI) to proactively detect and resolve enterprise network issues

AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics

 

Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth

Ericsson’s second-quarter results were not impressive, with YoY organic sales growth of +2% for the company and +3% for its network division (its largest). Its $14 billion AT&T OpenRAN deal, announced in December 2023, helped lift the Swedish vendor’s share of the global RAN market by 1.4 percentage points in 2024 to 25.7%, according to new research from analyst company Omdia (owned by Informa). As a result of its AT&T contract, the U.S. accounted for a stunning 44% of Ericsson’s second-quarter sales, while the North American market delivered a 10% YoY increase in organic revenues to SEK19.8bn ($2.05bn). Sales dropped in all other regions of the world.

Ericsson’s attention is now shifting to a few core markets that Ekholm has identified as strategic priorities, among them the U.S., India, Japan and the UK. All, unsurprisingly, already make up Ericsson’s top five countries by sales, although their contribution minus the US came to just 15% of turnover for the recent second quarter. “We are already very strong in North America, but we can do more in India and Japan,” said Ekholm. “We see those as critically important for the long-term success.”

Opportunities: With telco investment in RAN equipment having declined by 12.5% (or $5 billion) last year, the Swedish equipment vendor has had few other obvious growth opportunities. Ericsson’s Enterprise division, which is supposed to be the long-term provider of sales growth for Ericsson, is still very small – its second-quarter revenues stood at just SEK5.5bn ($570m), and even once currency effects are taken into account, its sales shrank by 6% YoY.

On Tuesday’s earnings call, Ericsson CEO Börje Ekholm said that the RAN equipment sector, while stable currently, isn’t offering any prospects of exciting near-term growth. For longer-term growth the industry needs “new monetization opportunities,” and those could come from the ongoing modest growth in 5G-enabled fixed wireless access (FWA) deployments, from 5G standalone (SA) deployments that enable mobile network operators to offer “differentiated solutions,” and from network APIs (that ultra-hyped market is not generating meaningful revenues for anyone yet).

Cost Cutting Continues: Ericsson also has continued to be aggressive about cost reduction, eliminating thousands of jobs since it completed its Vonage takeover. “Over the last year, we have reduced our total number of employees by about 6% or 6,000,” said Ekholm on his routine call with analysts about financial results. “We also see and expect big benefits from the use of AI and that is one reason why we expect restructuring costs to remain elevated during the year.”

Use of AI: Ericsson sees AI as an opportunity to enable network automation and new industry revenue opportunities. The company is now using AI as an aid in network design – a move that could have negative ramifications for staff involved in research and development. Ericsson is already using AI for coding and “other parts of internal operations to drive efficiency… We see some benefits now. And it’s going to impact how the network is operated – think of fully autonomous, intent-based networks that will require AI as a fundamental component. That’s one of the reasons why we invested in an AI factory,” noted the CEO, referencing the consortium-based investment in a Swedish AI Factory that was announced in late May. At the time, Ericsson noted that it planned to “leverage its data science expertise to develop and deploy state-of-the-art AI models – improving performance and efficiency and enhancing customer experience.”

Ericsson is also building AI capability into the products sold to customers. “I usually use the example of link adaptation,” said Per Narvinger, the head of Ericsson’s mobile networks business group, on a call with Light Reading, referring to what he says is probably one of the most optimized algorithms in telecom. “That’s how much you get out of the spectrum, and when we have rewritten link adaptation, and used AI functionality on an AI model, we see we can get a gain of 10%.”

Ericsson hopes that AI will boost consumer and business demand for 5G connectivity. New form factors such as smart glasses and AR headsets will need lower-latency connections with improved support for the uplink, it has repeatedly argued. But analysts are skeptical, and Ericsson thinks Europe is ill-equipped for more advanced 5G services.

“We’re still very early in AI, in [understanding] how applications are going to start running, but I think it’s going to be a key driver of our business going forward, both on traffic, on the way we operate networks, and the way we run Ericsson,” Ekholm said.

Europe Disappoints: In much of Europe, Ericsson and Nokia have been frustrated by some government and telco unwillingness to adopt the European Union’s “5G toolbox” recommendations and evict Chinese vendors. “I think what we have seen in terms of implementation is quite varied, to be honest,” said Narvinger. Rather than banning Huawei outright, Germany’s government has introduced legislation that allows operators to use most of its RAN products if they find a substitute for part of Huawei’s management system by 2029. Opponents have criticized that move, arguing it does not address the security threat posed by Huawei’s RAN software. Nevertheless, Ericsson clearly eyes an opportunity to serve European demand for military communications, an area where the use of Chinese vendors would be unthinkable.

“It is realistic to say that a large part of the increased defense spending in Europe will most likely be allocated to connectivity because that is a critical part of a modern defense force,” said Ekholm. “I think this is a very good opportunity for western vendors because it would be far-fetched to think they will go with high-risk vendors.” Ericsson is also targeting related demand for mission-critical services needed by first responders.

5G SA and Mobile Core Networks:  Ekholm noted that 5G SA deployments are still few and far between – only a quarter of mobile operators have any kind of 5G SA deployment in place right now, with the most notable being in the US, India and China.  “Two things need to happen,” for greater 5G SA uptake, stated the CEO.

  • One is mid-band [spectrum] coverage… there’s still very low build-out coverage in, for example, Europe, where it’s probably less than half the population covered… Europe is clearly behind on that” compared with the U.S., China and India.
  • The second is that [network operators] need to upgrade their mobile core [platforms]... Those two things will have to happen to take full advantage of the capabilities of the [5G] network,” noted Ekholm, who said the arrival of new devices, such as AI glasses, that require ultra low latency connections and “very high uplink performance” is starting to drive interest. “We’re also seeing a lot of network slicing opportunities,” he added, to deliver dedicated network resources to, for example, police forces, sports and entertainment stadiums “to guarantee uplink streams… consumers are willing to pay for these things. So I’m rather encouraged by the service innovation that’s starting to happen on 5G SA and… that’s going to drive the need for more radio coverage [for] mid-band and for core [systems].”

Ericsson’s Summary – Looking Ahead:

  • Continue to strengthen competitive position
  • Strong customer engagement for differentiated connectivity
  • New use cases to monetize network investments taking shape
  • Expect RAN market to remain broadly stable
  • Structurally improving the business through rigorous cost management
  • Continue to invest in technology leadership

………………………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.ericsson.com/4a033f/assets/local/investors/documents/financial-reports-and-filings/interim-reports-archive/2025/6month25-en.pdf

https://www.ericsson.com/4a033f/assets/local/investors/documents/financial-reports-and-filings/interim-reports-archive/2025/6month25-ceo-slides.pdf

https://www.telecomtv.com/content/5g/ericsson-ceo-waxes-lyrical-on-potential-of-5g-sa-ai-53441/

https://www.lightreading.com/5g/ericsson-targets-big-huawei-free-places-ai-and-nato-as-profits-soar

Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation

 

Agentic AI and the Future of Communications for Autonomous Vehicle (V2X)

by Prashant Vajpayee (bio below), edited by Alan J Weissberger

Abstract:

Autonomous vehicles increasingly depend on Vehicle-to-Everything (V2X) communications, but 5G networks face challenges such as latency, coverage gaps, high infrastructure costs, and security risks. To overcome these limitations, this article explores alternative protocols like DSRC, VANETs, ISAC, PLC, and Federated Learning, which offer decentralized, low-latency communication solutions.

Of critical importance for this approach is Agentic AI—a distributed intelligence model based on the Observe, Orient, Decide, and Act (OODA) loop—that enhances adaptability, collaboration, and security across the V2X stack. Together, these technologies lay the groundwork for a resilient, scalable, and secure next-generation Intelligent Transportation System (ITS).

Problems with 5G for V2X Communications:

There are several problems with using 5G for V2X communications, which is why the 5G NR (New Radio) V2X specification, developed by the 3rd Generation Partnership Project (3GPP) in Release 16, hasn’t been widely implemented.  Here are a few of them:

  • Variable latency: Although 5G promises sub-millisecond latency, realistic deployments often see delays of 10 to 50 milliseconds, especially when the V2X server is hosted in a cloud environment. Multi-hop routing, network slicing, and handover delays add further latency, making 5G unsuitable for ultra-reliable low-latency communication (URLLC) in critical scenarios [1, 2].
  • Coverage gaps and handover issues: 5G availability remains a problem in rural and remote areas, and for fast-moving vehicles, handovers between cells can cause communication delays and connectivity failures [3, 4].
  • Infrastructure and cost constraints: Full 5G deployment requires dense small-cell infrastructure, which is costly and logistically complex, especially in developing regions and along highways.
  • Spectrum congestion and interference: In shared-spectrum scenarios, other services can interfere with the 5G network, degrading V2X reliability.
  • Security and trust issues: The centralized nature of 5G architectures leaves them vulnerable to single points of failure, a cybersecurity risk for autonomous systems.

Alternative Communications Protocols as a Solution for V2X (when integrated with Agentic AI):

The following list of alternative protocols offers a potential remedy for the above 5G shortcomings when integrated with Agentic AI.

Alternate protocols, their use cases, and benefits:

  • DSRC (Dedicated Short-Range Communications) – Use case: a low-latency, Wi-Fi-like safety messaging system that lets vehicles talk to each other and to traffic lights or signs. Benefits: fast and reliable for safety alerts like crash warnings or red-light violations, even when there is no cellular network available [5].
  • VANETs (Vehicular Ad Hoc Networks) – Use case: vehicles form a temporary network with nearby cars and roadside units for decentralized peer-to-peer communication. Benefits: effective for local, peer-to-peer communication without needing towers or internet; ideal in tunnels or remote rural areas [6].
  • ISAC (Integrated Sensing and Communication) – Use case: combines radar/LiDAR sensing with data exchange in one system. Benefits: lets vehicles sense and communicate at the same time; useful for automated parking, intersection safety, and hazard detection [7, 8].
  • PLC (Power Line Communication) – Use case: uses Electric Vehicle (EV) charging cables to send data between the car and the grid. Benefits: enables smart charging and energy sharing (V2G); vehicles can even send power back to the grid during peak hours [9].
  • Federated Learning – Use case: vehicles train AI models locally and share only the model updates, without raw data. Benefits: preserves privacy and efficiency; cars learn from each other without sending sensitive data to the cloud [10, 11].
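The Federated Learning entry above can be made concrete with a small sketch. The following Python example is a minimal, hypothetical illustration of the core idea – vehicles train locally and share only model updates (here, gradient-based weight deltas for a toy linear model), never raw sensor data – and is not drawn from any of the cited papers.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.01):
    """One step of local training on a vehicle; only a weight delta leaves the car."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)   # gradient of a toy linear model
    return -lr * grad                                # the update is shared, the data is not

def federated_round(global_weights, vehicles):
    """FedAvg-style aggregation of the vehicles' updates into a new global model."""
    deltas = [local_update(global_weights, data) for data in vehicles]
    return global_weights + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
vehicles = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, vehicles)
print("global model after 10 rounds:", weights)
```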

While these alternatives reduce dependency on centralized infrastructure and provide greater fault tolerance, they also introduce complexity. As autonomous vehicles (AVs) become increasingly prevalent, Vehicle-to-Everything (V2X) communication is emerging as the digital nervous system of intelligent transportation systems. Given the deployment and reliability challenges associated with 5G, the industry is shifting toward alternative networking solutions—where Agentic AI is being introduced as a cognitive layer that renders these ecosystems adaptive, secure, and resilient.

The following use cases show how Agentic AI can bring efficiency:

  • Cognitive Autonomy: Each vehicle or roadside unit (RSU) operates an AI agent capable of observing, orienting, deciding, and acting (OODA) without continuous reliance on cloud supervision. This autonomy enables real-time decision-making for scenarios such as rerouting, merging, or hazard avoidance—even in disconnected environments [12].
  • Multi-Agent Collaboration: AI agents negotiate and coordinate with one another using standardized protocols (e.g., MCP, A2A), enabling guidance on optimal vehicle spacing, intersection management, and dynamic traffic control—without the need for centralized orchestration [13].
  • Embedded Security Intelligence: While multiple agents collaborate, dedicated security agents monitor system activities for anomalies, enforce access control policies, and quarantine threats at the edge. As Forbes notes, “Agentic AI demands agentic security,” emphasizing the importance of embedding trust and resilience into every decision node [14].
  • Protocol-Agnostic Adaptability: Agentic AI can dynamically switch among various communication protocols—including DSRC, VANETs, ISAC, or PLC—based on real-time evaluations of signal quality, latency, and network congestion (see the sketch after this list). Agents equipped with cognitive capabilities enhance system robustness against 5G performance limitations or outages.
  • Federated Learning and Self-Improvement: Vehicles independently train machine learning models locally and transmit only model updates—preserving data privacy, minimizing bandwidth usage, and improving processing efficiency.
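To illustrate the protocol-agnostic OODA behavior described in the list above, here is a minimal, hypothetical Python sketch of an agent that observes simulated link metrics, orients by filtering links against latency and quality constraints, decides on the best available protocol, and acts by sending a safety message. The protocol list, thresholds, and metrics are illustrative assumptions, not values taken from any V2X standard or the cited references.

```python
import random

PROTOCOLS = ["5G", "DSRC", "VANET", "ISAC", "PLC"]

def observe():
    """Gather (simulated) per-protocol latency in ms and signal quality in [0, 1]."""
    return {p: {"latency_ms": random.uniform(5, 60),
                "quality": random.uniform(0.3, 1.0)} for p in PROTOCOLS}

def orient(metrics, max_latency_ms=20, min_quality=0.6):
    """Keep only links that currently satisfy the safety-message constraints."""
    return {p: m for p, m in metrics.items()
            if m["latency_ms"] <= max_latency_ms and m["quality"] >= min_quality}

def decide(candidates, fallback="VANET"):
    """Pick the lowest-latency viable link; fall back to ad hoc networking if none qualify."""
    if not candidates:
        return fallback
    return min(candidates, key=lambda p: candidates[p]["latency_ms"])

def act(protocol, message):
    print(f"sending '{message}' over {protocol}")

# One OODA cycle for a hazard warning
act(decide(orient(observe())), "hazard ahead: braking")
```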

The figure below illustrates the proposed architectural framework for secure Agentic AI enablement within V2X communications, leveraging alternative communication protocols and the OODA (Observe–Orient–Decide–Act) cognitive model.

Conclusions:

With the integration of an intelligent Agentic AI layer into V2X systems, autonomous, adaptive, and efficient decision-making emerges from seamless collaboration of the distributed intelligent components.

Numerous examples highlight the potential of Agentic AI to deliver significant business value.

  • TechCrunch reports that Amazon’s R&D division is actively developing an Agentic AI framework to automate warehouse operations through robotics [15]. A similar architecture can be extended to autonomous vehicles (AVs) to enhance both communication and cybersecurity capabilities.
  • Forbes emphasizes that “Agentic AI demands agentic security,” underscoring the need for every action—whether executed by human or machine—to undergo rigorous review and validation from a security perspective [16].  Forbes notes, “Agentic AI represents the next evolution in AI—a major transition from traditional models that simply respond to human prompts.” By combining Agentic AI with alternative networking protocols, robust V2X ecosystems can be developed—capable of maintaining resilience despite connectivity losses or infrastructure gaps, enforcing strong cyber defense, and exhibiting intelligence that learns, adapts, and acts autonomously [19].
  • Business Insider highlights the scalability of Agentic AI, referencing how Qualtrics has implemented continuous feedback loops to retrain its AI agents dynamically [17]. This feedback-driven approach is equally applicable in the mobility domain, where it can support real-time coordination, dynamic rerouting, and adaptive decision-making.
  • Multi-agent systems are also advancing rapidly. As Amazon outlines its vision for deploying “multi-talented assistants” capable of operating independently in complex environments, the trajectory of Agentic AI becomes even more evident [18].

References:

    1. Coll-Perales, B., Lucas-Estañ, M. C., Shimizu, T., Gozalvez, J., Higuchi, T., Avedisov, S., … & Sepulcre, M. (2022). End-to-end V2X latency modeling and analysis in 5G networks. IEEE Transactions on Vehicular Technology, 72(4), 5094-5109.
    2. Horta, J., Siller, M., & Villarreal-Reyes, S. (2025). Cross-layer latency analysis for 5G NR in V2X communications. PloS one, 20(1), e0313772.
    3. Cellular V2X Communications Towards 5G- Available at “pdf”
    4. Al Harthi, F. R. A., Touzene, A., Alzidi, N., & Al Salti, F. (2025, July). Intelligent Handover Decision-Making for Vehicle-to-Everything (V2X) 5G Networks. In Telecom (Vol. 6, No. 3, p. 47). MDPI.
    5. DSRC Safety Modem, Available at- “https://www.nxp.com/products/wireless-connectivity/dsrc-safety-modem:DSRC-MODEM”
    6. VANETs and V2X Communication, Available at- “https://www.sanfoundry.com/vanets-and-v2x-communication/#“
    7. Yu, K., Feng, Z., Li, D., & Yu, J. (2023). Secure-ISAC: Secure V2X communication: An integrated sensing and communication perspective. arXiv preprint arXiv:2312.01720.
    8. Study on integrated sensing and communication (ISAC) for C-V2X application, Available at- “https://5gaa.org/content/uploads/2025/05/wi-isac-i-tr-v.1.0-may-2025.pdf“
    9. Ramasamy, D. (2023). Possible hardware architectures for power line communication in automotive v2g applications. Journal of The Institution of Engineers (India): Series B, 104(3), 813-819.
    10. Xu, K., Zhou, S., & Li, G. Y. (2024). Federated reinforcement learning for resource allocation in V2X networks. IEEE Journal of Selected Topics in Signal Processing.
    11. Asad, M., Shaukat, S., Nakazato, J., Javanmardi, E., & Tsukada, M. (2025). Federated learning for secure and efficient vehicular communications in open RAN. Cluster Computing, 28(3), 1-12.
    12. Bryant, D. J. (2006). Rethinking OODA: Toward a modern cognitive framework of command decision making. Military Psychology, 18(3), 183-206.
    13. Agentic AI Communication Protocols: The Backbone of Autonomous Multi-Agent Systems, Available at- “https://datasciencedojo.com/blog/agentic-ai-communication-protocols/”
    14. Agentic AI And The Future Of Communications Networks, Available at- “https://www.forbes.com/councils/forbestechcouncil/2025/05/27/agentic-ai-and-the-future-of-communications-networks/”
    15. Amazon launches new R&D group focused on agentic AI and robotics, Available at- “Amazon launches new R&D group focused on agentic AI and robotics”
    16. Securing Identities For The Agentic AI Landscape, Available at “https://www.forbes.com/councils/forbestechcouncil/2025/07/03/securing-identities-for-the-agentic-ai-landscape/”
    17. Qualtrics’ president of product has a vision for agentic AI in the workplace: ‘We’re going to operate in a multiagent world’, Available at- “https://www.businessinsider.com/agentic-ai-improve-qualtrics-company-customer-communication-data-collection-2025-5”
    18. Amazon’s R&D lab forms new agentic AI group, Available at- “https://www.cnbc.com/2025/06/04/amazons-rd-lab-forms-new-agentic-ai-group.html”
    19. Agentic AI: The Next Frontier In Autonomous Work, Available at- “https://www.forbes.com/councils/forbestechcouncil/2025/06/27/agentic-ai-the-next-frontier-in-autonomous-work/”

About the Author:

Prashant Vajpayee is a Senior Product Manager and researcher in AI and cybersecurity, with expertise in enterprise data integration, cyber risk modeling, and intelligent transportation systems. With a foundation in strategic leadership and innovation, he has led transformative initiatives at Salesforce and advanced research focused on cyber risk quantification and resilience across critical infrastructure, including Transportation 5.0 and global supply chain. His work empowers organizations to implement secure, scalable, and ethically grounded digital ecosystems. Through his writing, Prashant seeks to demystify complex cybersecurity as well as AI challenges and share actionable insights with technologists, researchers, and industry leaders.

Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined

AI RAN [1.] is projected to account for approximately a third of the RAN market by 2029, according to a recent AI RAN Advanced Research Report published by the Dell’Oro Group.  In the near term, the focus within the AI RAN segment will center on Distributed-RAN (D-RAN), single-purpose deployments, and 5G.

“Near-term priorities are more about efficiency gains than new revenue streams,” said Stefan Pongratz, Vice President at Dell’Oro Group. “There is strong consensus that AI RAN can improve the user experience, enhance performance, reduce power consumption, and play a critical role in the broader automation journey. Unsurprisingly, however, there is greater skepticism about AI’s ability to reverse the flat revenue trajectory that has defined operators throughout the 4G and 5G cycles,” continued Pongratz.

Note 1. AI RAN integrates AI and machine learning (ML) across various aspects of the RAN domain. The AI RAN scope in this report is aligned with the greater industry vision. While the broader AI RAN vision includes services and infrastructure, the projections in this report focus on the RAN equipment market.

Additional highlights from the July 2025 AI RAN Advanced Research Report:

  • The base case is built on the assumption that AI RAN is not a growth vehicle. But it is a crucial technology/tool for operators to adopt. Over time, operators will incorporate more virtualization, intelligence, automation, and O-RAN into their RAN roadmaps.
  • This initial AI RAN report forecasts the AI RAN market based on location, tenancy, technology, and region.
  • The existing RAN radio and baseband suppliers are well-positioned in the initial AI-RAN phase, driven primarily by AI-for-RAN upgrades leveraging the existing hardware. Per Dell’Oro Group’s regular RAN coverage, the top 5 RAN suppliers contributed around 95 percent of the 2024 RAN revenue.
  • AI RAN is projected to account for around a third of total RAN revenue by 2029.

In the first quarter of 2025, Dell’Oro said the top five RAN suppliers based on revenues outside of China are Ericsson, Nokia, Huawei, Samsung and ZTE. In terms of worldwide revenue, the ranking changes to Huawei, Ericsson, Nokia, ZTE and Samsung. 

About the Report: Dell’Oro Group’s AI RAN Advanced Research Report includes a 5-year forecast for AI RAN by location, tenancy, technology, and region. Contact: [email protected]

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Author’s Note:   Nvidia’s Aerial Research portfolio already contains a host of AI-powered tools designed to augment wireless network simulations. It is also collaborating with T-Mobile and Cisco to develop AI RAN solutions to support future 6G applications.  The GPU king is also working with some of those top five RAN suppliers, Nokia and Ericsson, on an AI-RAN Innovation Center. Unveiled last October, the project aims to bring together cloud-based RAN and AI development and push beyond applications that focus solely on improving efficiencies.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

The one-year-old AI RAN Alliance has now increased its membership to over 100, up from around 84 in May. However, there are still not many telco members, with only Vodafone joining since May. The other telco members are: Turkcell, Boost Mobile, Globe, Indosat Ooredoo Hutchison (Indonesia), Korea Telecom, LG UPlus, SK Telecom, T-Mobile US and Softbank. This limited telco presence could reflect the ongoing skepticism about the goals of AI-RAN, including hopes for new revenue opportunities through network slicing, as well as hosting and monetizing enterprise AI workloads at the edge.

Francisco Martín Pignatelli, head of open RAN at Vodafone, hardly sounded enthusiastic in his statement in the AI-RAN Alliance press release. “Vodafone is committed to using AI to optimize and enhance the performance of our radio access networks. Running AI and RAN workloads on shared infrastructure boosts efficiency, while integrating AI and generative applications over RAN enables new real-time capabilities at the network edge,” he added.

Perhaps the most popular AI RAN scenario is “AI on RAN,” which enables AI services on the RAN at the network edge in a bid to support and benefit from new services, such as AI inferencing.

“We are thrilled by the extraordinary growth of the AI-RAN Alliance,” said Alex Jinsung Choi, Chair of the AI-RAN Alliance and Principal Fellow at SoftBank Corp.’s Research Institute of Advanced Technology. “This milestone underscores the global momentum behind advancing AI for RAN, AI and RAN, and AI on RAN. Our members are pioneering how artificial intelligence can be deeply embedded into radio access networks — from foundational research to real-world deployment — to create intelligent, adaptive, and efficient wireless systems.”

Choi recently suggested that now is the time to “revisit all our value propositions and then think about what should be changed or what should be built” to be able to address issues including market saturation and the “decoupling” between revenue growth and rising TCO.  He also cited self-driving vehicles and mobile robots, where low latency is critical, as specific use cases where AI-RAN will be useful for running enterprise workloads.

About the AI-RAN Alliance:

The AI-RAN Alliance is a global consortium accelerating the integration of artificial intelligence into Radio Access Networks. Established in 2024, the Alliance unites leading companies, researchers, and technologists to advance open, practical approaches for building AI-native wireless networks. The Alliance focuses on enabling experimentation, sharing knowledge, and real-world performance to support the next generation of mobile infrastructure. For more information, visit: https://ai-ran.org

References:

https://www.delloro.com/advanced-research-report/ai-ran/

https://www.delloro.com/news/ai-ran-to-top-10-billion-by-2029/

https://www.lightreading.com/ai-machine-learning/vodafone-swells-ai-ran-alliance-ranks-but-skepticism-remains

https://www.businesswire.com/news/home/20250709519466/en/AI-RAN-Alliance-Surpasses-100-Members-in-First-Year-of-Rapid-Growth

Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum

AI RAN Alliance selects Alex Choi as Chairman

Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?

Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent

The case for and against AI-RAN technology using Nvidia or AMD GPUs

 

AI spending is surging; companies accelerate AI adoption, but job cuts loom large

The global AI market is experiencing significant growth. Companies are adopting AI at an accelerated rate, with 72% reporting adoption in at least one business function in 2024, a significant increase from 55% in 2023, according to S&P Global. This growth is being driven by various factors, including the potential for enhanced productivity, improved efficiency, and increased innovation across industries.

Global spending on AI is projected to reach $632 billion by 2028, according to the IDC Worldwide AI and Generative AI Spending Guide. 16 technologies will be impacted: hardware (IaaS, server, and storage), software (AI applications [content workflow and management applications, CRM applications, ERM applications], AI application development and deployment, AI platforms [AI life-cycle software, computer vision AI tools, conversational AI tools, intelligent knowledge discovery software], and AI system infrastructure software), and services (business services and IT services).

Grand View Research estimates that the global AI market, encompassing hardware, software, and services, will grow to over $1.8 trillion by 2030, compounding annually at 37%.

Barron’s says AI spending is surging, but certain job types are at risk according to CIOs.  On Wednesday, two major U.S. investment banks released reports based on surveys of chief information officers, or CIOs, at corporations that suggest rising spending plans for AI infrastructure.

  • Morgan Stanley’s technology team said AI tops the priority list for projects that will see the largest spending increase, adding that 60% of CIOs expect to have AI projects in production by year end.  Military spending for AI applications by NATO members is projected to exceed $112 billion by 2030, assuming a 4% AI investment allocation rate. 
  • Piper Sandler analyst James Fish noted 93% of CIOs plan to increase spending on AI infrastructure this year with 48% saying they will increase spending significantly by more than 25% versus last year.  Piper Sandler said that is good news for the major cloud computing vendors—including Microsoft Azure, Oracle Cloud, Amazon.com’s Amazon Web Services, and Google Cloud by Alphabet.
  • More than half the CIOs in Piper Sandler’s survey admitted the rise of AI made certain jobs more vulnerable for headcount reduction. The job categories most at risk for cuts are (in order): IT administration, sales, customer support, and IT help desks. 

–> Much more discussion of AI-related job losses below.

Executives’ confidence in AI execution has jumped from 53% to 71% in the past year, driven by $246 billion in infrastructure investment and demonstrable business results. Another article from the same date notes the introduction of “AI for Citizens” by Mistral, aimed at empowering public institutions with AI capabilities for their citizens.

This strong growth in the AI market is driven by several factors:

  • Technological advancements: Improvements in machine learning algorithms, computational power, and the development of new frameworks like deep learning and neural networks are enabling more sophisticated AI applications.
  • Data availability: The abundance of digital data from various sources (social media, IoT devices, sensors) provides vast training datasets for AI models, according to LinkedIn.
  • Increasing investments: Significant investments from major technology companies, governments, and research institutions are fueling AI research and development.
  • Cloud computing: The growth of cloud platforms like AWS, Azure, and Google Cloud provides scalable infrastructure and tools for developing and deploying AI applications, making AI accessible to a wider range of businesses.
  • Competitive advantages: Businesses are leveraging AI/ML to gain a competitive edge by enhancing product development, optimizing operations, and making data-driven decisions. 

……………………………………………………………………………………………………………………………………………………………………

Potential job cuts due to AI loom large:
  • Some sources predict that AI could replace the equivalent of 300 million full-time jobs globally, with a significant impact on tasks performed by white-collar workers in areas like finance, law, and consulting.
  • Entry-level positions are particularly vulnerable, with some experts suggesting that AI could cannibalize half of all entry-level white-collar roles within five years.
  • Sectors like manufacturing and customer service are also facing potential job losses due to the automation capabilities of AI and robotics.
  • A recent survey found that 41% of companies plan to reduce their workforce by 2030 due to AI, according to the World Economic Forum. 
  • BT CEO Allison Kirkby hinted at mass job losses due to AI. She told the Financial Times last month that her predecessor’s plan to eliminate up to 45,000 jobs by 2030 “did not reflect the full potential of AI.” In fact, she thinks AI may be able to help her shed a further 10,000 or so jobs by the end of the decade.
  • Microsoft announced last week that it will lay off about 9,000 employees across different teams in its global workforce.
  • “Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.,” Ford Motor CEO  Jim Farley said in an interview last week with author Walter Isaacson at the Aspen Ideas Festival. “AI will leave a lot of white-collar people behind.”
  • Amazon CEO Andy Jassy wrote in a note to employees in June that he expected the company’s overall corporate workforce to be smaller in the coming years because of the “once-in-a-lifetime” AI technology. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” Jassy said.
  • Technology-related factors such as automation drove 20,000 job cuts among U.S.-based employers in the first half of the year,  outplacement firm Challenger, Gray & Christmas said in a recent report. “We do see companies using the term ‘technological update’ more often than we have over the past decade, so our suspicion is that some of the AI job cuts that are likely happening are falling into that category,” Andy Challenger, a senior vice president at the Chicago, Illinois-based outplacement firm, told CFO Dive. In some cases, companies may avoid directly tying their layoffs to AI because they “don’t want press on it,” he said.
  • In private, CEOs have spent months whispering about how their businesses could likely be run with a fraction of the current staff. Technologies including automation software, AI and robots are being rolled out to make operations as lean and efficient as possible.
  • Four in 10 employers anticipate reducing their workforce where AI can automate tasks, according to World Economic Forum survey findings unveiled in January.

The long-term impact of AI on employment is still being debated, with some experts predicting that AI will also create new jobs and boost productivity, offsetting some of the losses. However, reports and analysis indicate that workers need to prepare for significant changes in the job market and develop new skills to adapt to the evolving demands of an AI-driven economy.

References:

https://my.idc.com/getdoc.jsp?containerId=IDC_P33198

https://www.grandviewresearch.com/press-release/global-artificial-intelligence-ai-market

https://www.barrons.com/articles/ai-jobs-survey-cios-619d2a5e?mod=hp_FEEDS_1_TECHNOLOGY_3

https://www.hrdive.com/news/ai-driven-job-cuts-underreported-challenger/752526/

https://www.lightreading.com/ai-machine-learning/telcos-are-cutting-jobs-but-not-because-of-ai

https://www.wsj.com/tech/ai/ai-white-collar-job-loss-b9856259

HPE cost reduction campaign with more layoffs; 250 AI PoC trials or deployments

AT&T and Verizon cut jobs another 6% last year; AI investments continue to increase

Verizon and AT&T cut 5,100 more jobs with a combined 214,350 fewer employees than 2015

AI adoption to accelerate growth in the $215 billion Data Center market

Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors

Nokia (like Ericsson) announces fresh wave of job cuts; Ericsson lays off 240 more in China

AI wave stimulates big tech spending and strong profits, but for how long?

Big Tech and VCs invest hundreds of billions in AI while salaries of AI experts reach the stratosphere

Introduction:

Two and a half years after OpenAI set off the generative artificial intelligence (AI) race with the release of ChatGPT, big tech companies are accelerating their AI spending, pumping hundreds of billions of dollars into their frantic effort to create systems that can mimic or even exceed the abilities of the human brain. The areas of super huge AI spending are data centers, salaries for experts, and VC investments. Meanwhile, the UAE is building one of the world’s largest AI data centers, while Softbank CEO Masayoshi Son believes that AI will surpass human-level cognitive abilities (Artificial General Intelligence, or AGI) within a few years, and that Artificial Super Intelligence (ASI) will surpass human intelligence by a factor of 10,000 within the next 10 years.

AI Data Center Build-out Boom:

The tech industry’s giants are building AI data centers that can cost more than $100 billion and will consume more electricity than a million American homes. Meta, Microsoft, Amazon and Google have told investors that they expect to spend a combined $320 billion on infrastructure costs this year. Much of that will go toward building new data centers — more than twice what they spent two years ago.

As OpenAI and its partners build a roughly $60 billion data center complex for A.I. in Texas and another in the Middle East, Meta is erecting a facility in Louisiana that will be twice as large. Amazon is going even bigger with a new campus in Indiana. Amazon’s partner, the A.I. start-up Anthropic, says it could eventually use all 30 of the data centers on this 1,200-acre campus to train a single A.I. system. Even if Anthropic’s progress stops, Amazon says that it will use those 30 data centers to deliver A.I. services to customers.

Amazon is building a data center complex in New Carlisle, Ind., for its work with the A.I. company Anthropic. (Photo credit: AJ Mast for The New York Times)

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Stargate UAE:

OpenAI is partnering with United Arab Emirates firm G42 and others to build a huge artificial-intelligence data center in Abu Dhabi, UAE.  The project, called Stargate UAE, is part of a broader push by the U.A.E. to become one of the world’s biggest funders of AI companies and infrastructure—and a hub for AI jobs.  The Stargate project is led by G42, an AI firm controlled by Sheikh Tahnoon bin Zayed al Nahyan, the U.A.E. national-security adviser and brother of the president. As part of the deal, an enhanced version of ChatGPT would be available for free nationwide, OpenAI said.

The first 200-megawatt chunk of the data center is due to be completed by the end of 2026, while the remainder of the project hasn’t been finalized. The buildings’ construction will be funded by G42, and the data center will be operated by OpenAI and tech company Oracle, G42 said. Other partners include global tech investor SoftBank, AI/GPU chip maker Nvidia and network-equipment company Cisco.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Softbank and ASI:

Not wanting to be left behind, SoftBank, led by CEO Masayoshi Son, has made massive investments in AI and has a bold vision for the future of AI development. Son has expressed a strong belief that Artificial Super Intelligence (ASI), surpassing human intelligence by a factor of 10,000, will emerge within the next 10 years. For example, SoftBank has:

  • Made significant investments in OpenAI, with planned investments reaching approximately $33.2 billion. Son considers OpenAI a key partner in realizing SoftBank’s ASI vision.
  • Acquired chip designer Ampere Computing for $6.5 billion to strengthen its AI computing capabilities.
  • Invested in the Stargate Project alongside OpenAI, Oracle, and MGX. Stargate aims to build large AI-focused data centers in the U.S., with a planned investment of up to $500 billion.

Son predicts that AI will surpass human-level cognitive abilities (Artificial General Intelligence, or AGI) within a few years. He then anticipates that a much more advanced form of AI, ASI, will be 10,000 times smarter than humans within a decade. He believes this progress is driven by advancements in models like OpenAI’s o1, which can “think” for longer before responding.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

Super High Salaries for AI Researchers:

Salaries for AI experts have gone through the roof. OpenAI, Google DeepMind, Anthropic, Meta, and NVIDIA are paying AI researchers base salaries of over $300,000, plus bonuses and stock options. Other companies, such as Netflix, Amazon, and Tesla, are also heavily invested in AI and offer competitive compensation packages.

Meta has been offering compensation packages worth as much as $100 million per person. The owner of Facebook made more than 45 offers to researchers at OpenAI alone, according to a person familiar with these approaches. Meta CTO Andrew Bosworth implied that only a few people, for very senior leadership roles, may have been offered that kind of money, but clarified that “the actual terms of the offer” weren’t a “sign-on bonus. It’s all these different things.” Tech companies typically deliver the biggest chunk of such pay to senior leaders in restricted stock unit (RSU) grants, dependent on either tenure or performance metrics. A four-year total pay package worth about $100 million for a very senior leader is not inconceivable for Meta: most of Meta’s named officers, including Bosworth, have earned total compensation of between $20 million and nearly $24 million per year for years.
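
As a rough sanity check on that claim, the short sketch below uses only the figures quoted above and shows that a four-year package near the top of the reported annual range already approaches $100 million.

```python
# Simple check of the claim above: Meta's named officers have earned roughly
# $20M-$24M per year, so a four-year package for a very senior AI leader only
# needs to run slightly above that range to reach ~$100M in total.

annual_comp_range = (20e6, 24e6)          # reported range for named officers, $/year
four_year_low, four_year_high = (4 * x for x in annual_comp_range)

print(f"Four years at the reported range: ${four_year_low/1e6:.0f}M-${four_year_high/1e6:.0f}M")
# -> $80M-$96M, so ~$100M over four years is plausible for a top leader
```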

Meta CEO Mark Zuckerberg on Monday announced the company’s new artificial intelligence organization, Meta Superintelligence Labs, to employees, according to an internal post reviewed by The Information. The organization includes Meta’s existing AI teams, including its Fundamental AI Research lab, as well as “a new lab focused on developing the next generation of our models,” Zuckerberg said in the post. Scale AI CEO Alexandr Wang has joined Meta as its Chief AI Officer and will partner with former GitHub CEO Nat Friedman to lead the organization. Friedman will lead Meta’s work on AI products and applied research.

“I’m excited about the progress we have planned for Llama 4.1 and 4.2,” Zuckerberg said in the post. “In parallel, we’re going to start research on our next generation models to get to the frontier in the next year or so,” he added.

On Thursday, researcher Lucas Beyer confirmed he was leaving OpenAI to join Meta, along with the two others who led OpenAI’s Zurich office. He tweeted: “1) yes, we will be joining Meta. 2) no, we did not get 100M sign-on, that’s fake news.” (Beyer politely declined to comment further on his new role to TechCrunch.) Beyer’s expertise is in computer vision AI. That aligns with what Meta is pursuing: entertainment AI rather than productivity AI, Bosworth reportedly said in an internal meeting. Meta already has a stake in the ground in that area with its Quest VR headsets and its Ray-Ban and Oakley AI glasses.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

VC investments in AI are off the charts:

Venture capitalists are sharply increasing their AI spending. U.S. investment in A.I. companies rose to $65 billion in the first quarter, up 33% from the previous quarter and up 550% from the quarter before ChatGPT came out in 2022, according to data from PitchBook, which tracks the industry.
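
For readers who want to unpack those percentages, the minimal sketch below (which assumes the reported PitchBook figures are exact) backs out the implied investment levels for the prior quarter and for the quarter before ChatGPT’s release.

```python
# Back-of-the-envelope check of the PitchBook figures cited above, taking them as exact:
# $65B invested in U.S. AI companies in the first quarter, up 33% from the prior
# quarter and up 550% from the quarter before ChatGPT's release.

latest_quarter = 65.0  # $ billions, as reported

prior_quarter = latest_quarter / 1.33        # "up 33%" -> divide by 1.33
pre_chatgpt_quarter = latest_quarter / 6.50  # "up 550%" -> 6.5x the old base

print(f"Implied prior-quarter investment: ~${prior_quarter:.1f}B")               # ~ $48.9B
print(f"Implied pre-ChatGPT quarterly investment: ~${pre_chatgpt_quarter:.1f}B")  # ~ $10.0B
```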

This astounding VC spending, critics argue, comes with a huge risk. A.I. is arguably more expensive than anything the tech industry has tried to build, and there is no guarantee it will live up to its potential. But the bigger risk, many executives believe, is not spending enough to keep pace with rivals.

“The thinking from the big C.E.O.s is that they can’t afford to be wrong by doing too little, but they can afford to be wrong by doing too much,” said Jordan Jacobs, a partner with the venture capital firm Radical Ventures.  “Everyone is deeply afraid of being left behind,” said Chris V. Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies.

Indeed, a significant driver of investment has been a fear of missing out on the next big thing, leading to VCs pouring billions into AI startups at “nosebleed valuations” without clear business models or immediate paths to profitability.

Conclusions:

Big tech companies and VCs acknowledge that they may be overestimating A.I.’s potential. Developing and implementing AI systems, especially large language models (LLMs), is incredibly expensive due to hardware (GPUs), software, and expertise requirements. One of the chief concerns is that revenue for many AI companies isn’t matching the pace of investment. Even major players like OpenAI reportedly face significant cash burn problems.  But even if the technology falls short, many executives and investors believe, the investments they’re making now will be worth it.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.nytimes.com/2025/06/27/technology/ai-spending-openai-amazon-meta.html

Meta is offering multimillion-dollar pay for AI researchers, but not $100M ‘signing bonuses’

https://www.theinformation.com/briefings/meta-announces-new-superintelligence-lab

OpenAI partners with G42 to build giant data center for Stargate UAE project

AI adoption to accelerate growth in the $215 billion Data Center market

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

 

Deloitte and TM Forum: How AI could revitalize the ailing telecom industry?

IEEE Techblog readers are well aware of the dire state of the global telecommunications industry.  In particular:

  • According to Deloitte, the global telecommunications industry is expected to have revenues of about US$1.53 trillion in 2024, up about 3% over the prior year. Both in 2024 and out to 2028, growth is expected to be higher in Asia Pacific and in Europe, the Middle East, and Africa, with growth in the Americas running at around 1% annually (a quick arithmetic check follows this list).
  • Telco sales were less than $1.8 trillion in 2022 vs. $1.9 trillion in 2012, according to Light Reading. Collective investments of about $1 trillion over a five-year period had brought a lousy return of less than 1%.
  • Last year (2024), spending on radio access network infrastructure fell by $5 billion, more than 12% of the total, according to analyst firm Omdia, imperilling the kit vendors on which telcos rely.
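
The back-of-the-envelope check below uses only the Deloitte figures quoted in the first bullet above to recover the implied 2023 baseline; the 2028 line is purely illustrative and assumes the 3% growth rate simply continued, which Deloitte does not forecast.

```python
# Quick arithmetic check on the Deloitte figures cited in the list above.
# Inputs are the reported numbers; the 2028 line is illustrative only.

revenue_2024 = 1.53   # US$ trillions, as reported
growth_2024 = 0.03    # "up about 3% over the prior year"

implied_2023 = revenue_2024 / (1 + growth_2024)
illustrative_2028 = revenue_2024 * (1 + growth_2024) ** 4

print(f"Implied 2023 revenue: ~US${implied_2023:.2f} trillion")                       # ~1.49
print(f"2028 revenue if 3% growth simply continued: ~US${illustrative_2028:.2f} trillion")  # ~1.72
```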

Deloitte believes generative (gen) AI will have a huge impact on telecom network providers:

Telcos are using gen AI to reduce costs, become more efficient, and offer new services. Some are building new gen AI data centers to sell training and inference to others. What role does connectivity play in these data centers?

There is a gen AI gold rush expected over the next five years. Spending estimates range from hundreds of billions to over a trillion dollars on the physical layer required for gen AI: chips, data centers, and electricity. Close to another hundred billion US dollars will likely be spent on the software and services layer. Telcos should focus on the opportunity to participate by connecting all of those different pieces of hardware and software. And shouldn’t telcos, whose business is all about connectivity, be able to profit in some way?

There are gen AI markets for connectivity: Inside the data centers there are miles of mainly copper (and some fiber) cables transmitting data from board to board and rack to rack. Serving this market is worth billions of dollars in 2025, but much of this connectivity is supplied by data center operators and chipmakers and has never been provided by telcos.

There are also massive, long-haul fiber networks ranging from tens to thousands of miles long. These connect (for example) a hyperscaler’s data centers across a region or continent, or even stretch along the seabed, connecting data centers across continents. Sometimes these new fiber networks are being built to support sovereign AI—that is, the need to keep all the AI data inside a given country or region.

Historically, those fiber networks were massive expenditures, built by only the largest telcos or (in the undersea case) built by consortia of telcos, to spread the cost across many players. In 2025, it looks like some of the major gen AI players are building at least some of this connection capacity, but largely on their own or with companies that are specialists in long-haul fiber.

Telcos may want to think about how they can remain a relevant player in this part of the connectivity space, rather than ceding it to the gen AI behemoths. For context, it is estimated that big tech players will spend over US$100 billion on network capex between 2024 and 2030, representing 5% to 10% of their total capex in that period, up from only about 4% to 5% historically.

The opportunities could be greater in connecting billions of consumers and enterprises. Telcos already serve these large markets, and as consumers and businesses start sending larger amounts of data over wireline and wireless networks, that growth might translate into higher revenues. A recent research report suggests that direct gen AI data traffic could reach exabyte levels by 2033.

The immediate challenge is that many gen AI use cases for both consumer and enterprise markets are not exactly bandwidth hogs: In 2025, they tend to be text-based (so small file sizes), and users may expect answers in seconds rather than milliseconds, which can limit how telcos can monetize the traffic. Users will likely pay a premium for ultra-low latency, but if latency isn’t an issue, they are unlikely to pay a premium.


A longer-term challenge is on-device edge computing. Even if users start doing a lot more with creating, consuming, and sharing gen AI video in real time (requiring much larger file transfers and lower latency), the majority of devices (smartphones, PCs, wearables, and Internet of Things (IoT) devices in factories and ports) are expected to soon have onboard gen AI processing chips. These gen AI accelerators, combined with emerging small language models, may mean that network connectivity is less of an issue. Instead of a consumer recording a video, sending the raw footage to the cloud for AI processing, and the cloud sending it back, the image could be enhanced or altered locally, with less need for high-speed or low-latency connectivity.
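
To make that trade-off concrete, here is a minimal, hypothetical sketch of how a device might route a gen AI task either to its own accelerator or over the network to the cloud. All names, sizes, and thresholds are assumptions for illustration, not figures from Deloitte.

```python
from dataclasses import dataclass

# Hypothetical illustration of the edge-vs-cloud trade-off described above.
# All names, sizes, and thresholds are assumptions for this sketch.

@dataclass
class GenAITask:
    payload_mb: float         # size of the input (e.g., a video clip)
    latency_budget_ms: float  # how quickly the user needs a result
    fits_on_device: bool      # can a small on-device model handle it?

def route_task(task: GenAITask, uplink_mbps: float = 50.0,
               cloud_compute_ms: float = 200.0, device_compute_ms: float = 600.0) -> str:
    """Return 'device' or 'cloud' for a single gen AI request."""
    # Time to ship the payload to the cloud plus cloud-side processing (downlink ignored).
    transfer_ms = task.payload_mb * 8 / uplink_mbps * 1000
    cloud_total_ms = transfer_ms + cloud_compute_ms

    if task.fits_on_device and device_compute_ms <= task.latency_budget_ms:
        return "device"   # no network traffic at all, so nothing for the telco to monetize
    return "cloud"        # traffic (and possibly a low-latency premium) for the network

# Example: a 25 MB video clip, a 1-second budget, and a capable on-device model
print(route_task(GenAITask(payload_mb=25, latency_budget_ms=1000, fits_on_device=True)))  # -> device
```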

Of course, small models might not work well, or the chips on consumer and enterprise edge devices might not be powerful enough, or might be too power-hungry, leading to unacceptably short battery life. In that case, telcos may be lifted by a wave of gen AI usage. But that’s unlikely to happen in 2025, or even 2026.

Another potential source of gen AI monetization is what’s being called AI Radio Access Network (AI-RAN). At the top of every cell tower are a set of radios and antennas, along with one or more powerful processors that control them. In 2024, a consortium (the AI-RAN Alliance) was formed to explore the idea of adding the same kind of generative AI chips found in data centers or enterprise edge servers (a mix of GPUs and CPUs) to every tower. The idea is that these chips could run the RAN, help make it more open, flexible, and responsive, dynamically configure the network in real time, and perform gen AI inference or training as a service with any spare capacity left over, generating incremental revenues. A number of original equipment manufacturers (OEMs, including ones that currently account for over 95% of RAN sales), telcos, and chip companies are part of the alliance. Some expect AI-RAN to be a logical successor to Open RAN, built on top of it, and perhaps even what 6G turns out to be.
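
To illustrate the AI-RAN concept, the sketch below shows one plausible scheme in which the RAN workload always gets priority on a cell site’s GPU pool and whatever capacity is left over is sold as gen AI inference. All function names and numbers are illustrative assumptions, not AI-RAN Alliance specifications.

```python
# Minimal sketch of the AI-RAN idea described above: the RAN workload always gets
# priority on the cell site's GPU pool, and any leftover capacity is offered as
# gen AI inference-as-a-service. All numbers and names are illustrative assumptions.

def allocate_site_gpus(total_tflops: float, ran_load_tflops: float,
                       inference_jobs: list[float]) -> dict:
    """Return the RAN allocation and which queued inference jobs fit in the remainder."""
    ran_alloc = min(ran_load_tflops, total_tflops)   # RAN is served first
    spare = total_tflops - ran_alloc

    accepted = []
    for job in sorted(inference_jobs):               # pack smaller jobs first
        if job <= spare:
            accepted.append(job)
            spare -= job

    return {"ran_tflops": ran_alloc, "inference_jobs": accepted, "idle_tflops": spare}

# Example: a 100-TFLOPS site at an off-peak moment (RAN needs 35) with queued inference jobs
print(allocate_site_gpus(100, 35, [40, 25, 10, 50]))
# -> {'ran_tflops': 35, 'inference_jobs': [10, 25], 'idle_tflops': 30}
```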

…………………………………………………………………………………………………………………………………………………………………………….

The TM Forum has three broad “AI initiatives,” which are part of its overarching “Industry Missions.” These missions aim to change the future of global connectivity, with AI being a critical component.

The three broad “AI initiatives” (or “Industry Missions” where AI plays a central role) are:

  1. AI and Data Innovation: This mission focuses on the safe and widespread adoption of AI and data at scale within the telecommunications industry. It aims to help telcos accelerate, de-risk, and reduce the costs of applying AI technologies to cut operational expenses and drive revenue growth. This includes developing best practices, standards, data architectures, ontologies, and APIs.

  2. Autonomous Networks: This initiative is about unlocking the power of seamless end-to-end autonomous operations in telecommunications networks. AI is a fundamental technology for achieving higher levels of network automation, moving towards zero-touch, zero-wait, and zero-trouble operations.

  3. Composable IT and Ecosystems: While not solely an “AI initiative,” this mission focuses on simpler IT operations and partnering via AI-ready composable software. AI plays a significant role in enabling more agile and efficient IT systems that can adapt and integrate within dynamic ecosystems. It’s based on the TM Forum’s Open Digital Architecture (ODA). Eighteen big telcos are now running on ODA while the same number of vendors are described by the TM Forum as “ready” to adopt it.

These initiatives are supported by various programs, tools, and resources, including:

  • AI Operations (AIOps): Focusing on deploying and managing AI at scale, re-engineering operational processes to support AI, and governing AI operations.
  • Responsible AI: Addressing ethical considerations, risk management, and governance frameworks for AI.
  • Generative AI Maturity Interactive Tool (GAMIT): To help organizations assess their readiness to exploit the power of GenAI.
  • AI Readiness Check (AIRC): An online tool for members to identify gaps in their AI adoption journey across key business dimensions.
  • AI for Everyone (AI4X): A pillar focused on democratizing AI across all business functions within an organization.

Under the leadership of CEO Nik Willetts, a rejuvenated, AI-wielding TM Forum now underpins what many telcos do in business and operational support systems, the essential IT plumbing. The TM Forum rates network automation on a 0-to-5 scale, analogous to the car industry’s driving-automation levels, where 0 means completely manual and 5 heralds the end of human intervention. Many telcos are on track to reach Level 4 in specific areas this year, said Willetts. China Mobile has already realized an 80% reduction in major faults, saving 3,000 person-years of effort and 4,000 kilowatt hours of energy each year, thanks to automation.
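
For reference, that autonomy scale can be summarized in code. Only the endpoints (0 = fully manual, 5 = no human intervention) come from the article; the intermediate descriptions are an approximate paraphrase of how TM Forum commonly characterizes its autonomous network levels, not official wording.

```python
from enum import IntEnum

# Rough paraphrase of the 0-to-5 autonomy scale mentioned above (modeled on the car
# industry's driving-automation levels). Only the endpoints come from the text; the
# intermediate wording is an approximation, not TM Forum's official definitions.

class AutonomousNetworkLevel(IntEnum):
    L0_MANUAL = 0        # fully manual operation and maintenance
    L1_ASSISTED = 1      # tooling assists human operators
    L2_PARTIAL = 2       # some closed loops, humans still drive most decisions
    L3_CONDITIONAL = 3   # autonomous in defined domains, humans handle exceptions
    L4_HIGH = 4          # autonomous across most domains and conditions
    L5_FULL = 5          # no human intervention required

# Per the article, many telcos are targeting Level 4 in specific areas this year.
print(AutonomousNetworkLevel.L4_HIGH.name, AutonomousNetworkLevel.L4_HIGH.value)  # -> L4_HIGH 4
```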

Outside of China, telcos and telco vendors are leaning heavily on technologies mainly developed by just a few U.S. companies to implement AI. A person remains in the loop for critical decision-making, but the justifications for taking any decision are increasingly provided by systems built on the core underlying technologies from those same few companies.   As IEEE Techblog has noted, AI is still hallucinating – throwing up nonsense or falsehoods – just as domain-specific experts are being threatened by it.

Agentic AI substitutes interacting software programs for junior technicians, the people who would otherwise become tomorrow’s decision-makers. If Level 4 automation renders them superfluous, where will the future decision-makers come from?

Caroline Chappell, an independent consultant with years of expertise in the telecom industry, says there is now talk of what the AI pundits call “learning world models,” more sophisticated AI that grows to understand its environment much as a baby does. When mature, it could come up with completely different approaches to the design of telecom networks and technologies. At this stage, it may be impossible for almost anyone to understand what AI is doing, she said.

 

 

References:

https://www.deloitte.com/us/en/insights/industry/technology/technology-media-telecom-outlooks/telecommunications-industry-outlook-2025.html

https://www.lightreading.com/ai-machine-learning/escape-from-ai-proves-impossible-at-tm-forum-bash-in-new-code-red-

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector

 

 

 
