AI in Networks Market
Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system
On Saturday, Huawei Technologies displayed an advanced AI computing system in China, as the Chinese technology giant seeks to capture market share in the country's growing artificial intelligence sector. Huawei's CloudMatrix 384 system made its public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth.
The Huawei CloudMatrix 384 is a high-density AI computing system featuring 384 Huawei Ascend 910C chips, designed to rival Nvidia’s GB200 NVL72 (more below). The AI system employs a “supernode” architecture with high-speed internal chip interconnects. The system is built with optical links for low-latency, high-bandwidth communication. Huawei has also integrated the CloudMatrix 384 into its cloud platform. The system has drawn close attention from the global AI community since Huawei first announced it in April.
The CloudMatrix 384 is built on the supernode Ascend platform and uses high-speed bus interconnects to provide low-latency links among its 384 Ascend NPUs. Huawei says that "compared to traditional AI clusters that often stack servers, storage, network technology, and other resources, Huawei CloudMatrix has a super-organized setup. As a result, it also reduces the chance of facing failures at times of large-scale training."

Huawei staff at its WAIC booth declined to comment when asked to introduce the CloudMatrix 384 system, and a company spokesperson did not respond to questions. However, early reports indicate that the CloudMatrix 384 can deliver 300 PFLOPS of dense BF16 compute, roughly double that of Nvidia's GB200 NVL72 system, and that it also leads on memory capacity (3.6x) and memory bandwidth (2.1x). Indeed, industry analysts view the CloudMatrix 384 as a direct competitor to Nvidia's GB200 NVL72, the U.S. GPU chipmaker's most advanced system-level product currently available in the market.
One industry expert has said the CloudMatrix 384 system rivals Nvidia's most advanced offerings. Dylan Patel, founder of semiconductor research group SemiAnalysis, said in an April article that Huawei now has AI system capabilities that could beat Nvidia's. The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips and outperforms Nvidia's GB200 NVL72, which uses 72 B200 chips, on some metrics, according to SemiAnalysis. The performance stems from Huawei's system design capabilities, which compensate for weaker individual chip performance through the use of more chips and system-level innovations, SemiAnalysis said.
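To illustrate the system-level scaling argument, here is a minimal back-of-the-envelope sketch in Python. The per-chip dense BF16 figures are assumptions taken from the SemiAnalysis comparison cited above (roughly 0.78 PFLOPS per Ascend 910C and 2.5 PFLOPS per B200) and are not official vendor specifications.

```python
# Back-of-the-envelope comparison of system-level dense BF16 compute.
# Per-chip figures are assumptions drawn from the SemiAnalysis comparison
# cited above; they are illustrative, not official vendor specifications.

ASCEND_910C_PFLOPS = 0.78   # assumed dense BF16 PFLOPS per Ascend 910C
NVIDIA_B200_PFLOPS = 2.5    # assumed dense BF16 PFLOPS per B200

cloudmatrix_total = 384 * ASCEND_910C_PFLOPS   # ~300 PFLOPS for CloudMatrix 384
gb200_nvl72_total = 72 * NVIDIA_B200_PFLOPS    # ~180 PFLOPS for GB200 NVL72

print(f"CloudMatrix 384: ~{cloudmatrix_total:.0f} PFLOPS dense BF16")
print(f"GB200 NVL72:     ~{gb200_nvl72_total:.0f} PFLOPS dense BF16")
print(f"System-level ratio: {cloudmatrix_total / gb200_nvl72_total:.2f}x")
print(f"Per-chip ratio:     {ASCEND_910C_PFLOPS / NVIDIA_B200_PFLOPS:.2f}x")
```

Even with a per-chip ratio well below 1x, the full system comes out ahead under these assumed figures, which is the "more chips plus system-level design" trade-off SemiAnalysis describes.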
Huawei has become widely regarded as China’s most promising domestic supplier of chips essential for AI development, even though the company faces U.S. export restrictions. Nvidia CEO Jensen Huang told Bloomberg in May that Huawei had been “moving quite fast” and named the CloudMatrix as an example.
Huawei says the system uses a "supernode" architecture that allows the chips to interconnect at super-high speeds. In June, Huawei Cloud CEO Zhang Pingan said the CloudMatrix 384 system was operational on Huawei's cloud platform.
According to Huawei, the Ascend AI chip-based CloudMatrix 384 offers three important benefits:
- Ultra-large bandwidth
- Ultra-low latency
- Ultra-strong performance
These three benefits can help enterprises achieve better AI training and stable inference performance for their models, while also supporting long-term reliability.
References:
https://www.huaweicentral.com/huawei-launches-cloudmatrix-384-ai-chip-cluster-against-nvidia-nvl72/
https://semianalysis.com/2025/04/16/huawei-ai-cloudmatrix-384-chinas-answer-to-nvidia-gb200-nvl72/
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era
FT: Nvidia invested $1bn in AI start-ups in 2024
Gen AI eroding critical thinking skills; AI threatens more telecom job losses
Two alarming research studies this year have drawn attention to the damage that Gen AI agents like ChatGPT are doing to our brains:
The first study, published in February, by Microsoft and Carnegie Mellon University, surveyed 319 knowledge workers and concluded that “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skills for independent problem-solving.”
An MIT study divided participants into three essay-writing groups. One group had access to Gen AI and another to Internet search engines while the third group had access to neither. This “brain” group, as MIT’s researchers called it, outperformed the others on measures of cognitive ability. By contrast, participants in the group using a Gen AI large language model (LLM) did the worst. “Brain connectivity systematically scaled down with the amount of external support,” said the report’s authors.
Across the 20 companies regularly tracked by Light Reading, headcount fell by 51,700 last year. Since 2015, it has dropped by more than 476,600, more than a quarter of the previous total.
Source: Light Reading
………………………………………………………………………………………………………………………………………………
Doing More with Less:
- In 2015, Verizon generated sales of $131.6 billion with a workforce of 177,700 employees. Last year, it made $134.8 billion with fewer than 100,000. Revenues per employee, accordingly, have risen from about $741,000 to more than $1.35 million over this period.
- AT&T made nearly $868,000 per employee last year, compared with less than $522,000 in 2015.
- Deutsche Telekom, buoyed by its T-Mobile US business, has grown its revenue per employee from about $356,000 to more than $677,000 over the same time period.
- Orange’s revenue per employee has risen from $298,000 to $368,000.
Significant workforce reductions have happened at all of those companies, especially AT&T, which finished last year with 141,000 employees – about half the number it had in 2015!
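The revenue-per-employee figures in the bullets above are simple ratios of annual revenue to year-end headcount; a minimal sketch using the quoted Verizon numbers (and an assumed 2024 headcount of about 99,600, consistent with "fewer than 100,000") shows how they are derived.

```python
# Revenue per employee = annual revenue / year-end headcount.
# Revenue and 2015 headcount are from the bullets above; the 2024 headcount
# (~99,600) is an assumption consistent with "fewer than 100,000."

def revenue_per_employee(revenue_usd: float, headcount: int) -> float:
    return revenue_usd / headcount

verizon_2015 = revenue_per_employee(131.6e9, 177_700)  # ~$741,000
verizon_2024 = revenue_per_employee(134.8e9, 99_600)   # ~$1.35 million

print(f"Verizon 2015: ${verizon_2015:,.0f} per employee")
print(f"Verizon 2024: ${verizon_2024:,.0f} per employee")
print(f"Increase: {verizon_2024 / verizon_2015 - 1:.0%}")
```

The same calculation reproduces the AT&T, Deutsche Telekom and Orange figures above when their respective revenues and headcounts are plugged in.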
Not to be outdone, headcount at network equipment companies is also shrinking. Ericsson, Europe's biggest 5G vendor, cut 6,000 jobs, or 6% of its workforce, last year and has slashed 13,000 jobs since 2023. Nokia's headcount fell from 86,700 in 2023 to 75,600 at the end of last year. The latest message from Börje Ekholm, Ericsson's CEO, is that AI will help the company operate with an even smaller workforce in the future. "We also see and expect big benefits from the use of AI, and that is one reason why we expect restructuring costs to remain elevated during the year," he said on this week's earnings call with analysts.
………………………………………………………………………………………………………………………………………………
Other Voices:
Light Reading's Iain Morris wrote, "An erosion of brainpower and ceding of tasks to AI would entail a loss of control as people are taken out of the mix. If AI can substitute for a junior coder, as experts say it can, the entry-level job for programming will vanish with inevitable consequences for the entire profession. And as AI assumes responsibility for the jobs once done by humans, a shrinking pool of individuals will understand how networks function."
“If you can’t understand how the AI is making that decision, and why it is making that decision, we could end up with scenarios where when something goes wrong, we simply just can’t understand it,” said Nik Willetts, the CEO of a standards group called the TM Forum, during a recent conversation with Light Reading. “It is a bit of an extreme to just assume no one understands how it works,” he added. “It is a risk, though.”
………………………………………………………………………………………………………………………………………………
References:
AI spending is surging; companies accelerate AI adoption, but job cuts loom large
Verizon and AT&T cut 5,100 more jobs with a combined 214,350 fewer employees than 2015
Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors
Nokia (like Ericsson) announces fresh wave of job cuts; Ericsson lays off 240 more in China
Deutsche Telekom exec: AI poses massive challenges for telecom industry
Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth
Ericsson's second-quarter results were not impressive, with YoY organic sales growth of +2% for the company and +3% for its Networks division (its largest). Its $14 billion AT&T OpenRAN deal, announced in December 2023, helped lift the Swedish vendor's share of the global RAN market by +1.4 percentage points in 2024 to 25.7%, according to new research from analyst company Omdia (owned by Informa). As a result of the AT&T contract, the U.S. accounted for a stunning 44% of Ericsson's second-quarter sales, and North American organic revenues rose 10% YoY to SEK19.8bn ($2.05bn). Sales dropped in all other regions of the world! The charts below depict that very well.
Ericsson’s attention is now shifting to a few core markets that Ekholm has identified as strategic priorities, among them the U.S., India, Japan and the UK. All, unsurprisingly, already make up Ericsson’s top five countries by sales, although their contribution minus the US came to just 15% of turnover for the recent second quarter. “We are already very strong in North America, but we can do more in India and Japan,” said Ekholm. “We see those as critically important for the long-term success.”
Opportunities: With telco investment in RAN equipment having declined by 12.5% (about $5 billion) last year, the Swedish equipment vendor has had few other obvious growth opportunities. Ericsson's Enterprise division, which is supposed to be the long-term provider of sales growth, is still very small – its second-quarter revenues stood at just SEK5.5bn ($570m), and even once currency effects are taken into account, its sales shrank by 6% YoY.
On Tuesday's earnings call, Ericsson CEO Börje Ekholm said that the RAN equipment sector, while currently stable, isn't offering any prospects of exciting near-term growth. For longer-term growth the industry needs "new monetization opportunities," and those could come from the ongoing modest growth in 5G-enabled fixed wireless access (FWA) deployments, from 5G standalone (SA) deployments that enable mobile network operators to offer "differentiated solutions," and from network APIs (that ultra-hyped market is not yet generating meaningful revenues for anyone).
Cost Cutting Continues: Ericsson also has continued to be aggressive about cost reduction, eliminating thousands of jobs since it completed its Vonage takeover. “Over the last year, we have reduced our total number of employees by about 6% or 6,000,” said Ekholm on his routine call with analysts about financial results. “We also see and expect big benefits from the use of AI and that is one reason why we expect restructuring costs to remain elevated during the year.”
Use of AI: Ericsson sees AI as an opportunity to enable network automation and new industry revenue opportunities. The company is now using AI as an aid in network design – a move that could have negative ramifications for staff involved in research and development. Ericsson is already using AI for coding and "other parts of internal operations to drive efficiency… We see some benefits now. And it's going to impact how the network is operated – think of fully autonomous, intent-based networks that will require AI as a fundamental component. That's one of the reasons why we invested in an AI factory," noted the CEO, referencing the consortium-based investment in a Swedish AI Factory that was announced in late May. At the time, Ericsson noted that it planned to "leverage its data science expertise to develop and deploy state-of-the-art AI models – improving performance and efficiency and enhancing customer experience."
Ericsson is also building AI capability into the products sold to customers. “I usually use the example of link adaptation,” said Per Narvinger, the head of Ericsson’s mobile networks business group, on a call with Light Reading, referring to what he says is probably one of the most optimized algorithms in telecom. “That’s how much you get out of the spectrum, and when we have rewritten link adaptation, and used AI functionality on an AI model, we see we can get a gain of 10%.”
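To make the link-adaptation example concrete, here is a minimal, hypothetical sketch contrasting a static SINR-threshold table with a learned predictor of achievable spectral efficiency. The thresholds, feature set, synthetic training data, and regression model are illustrative assumptions only and do not represent Ericsson's implementation.

```python
# Hypothetical sketch: table-driven link adaptation vs. a learned predictor
# of achievable spectral efficiency. All thresholds, features, and training
# data are invented for illustration; this is not Ericsson's design.
from sklearn.linear_model import Ridge
import numpy as np

SINR_THRESHOLDS_DB = [-5, 0, 5, 10, 15, 20, 25]  # assumed MCS switch points

def table_link_adaptation(sinr_db: float) -> int:
    """Classic approach: map reported SINR to an MCS index via fixed thresholds."""
    return sum(sinr_db >= t for t in SINR_THRESHOLDS_DB)

# Learned approach: predict spectral efficiency from richer radio context.
rng = np.random.default_rng(0)
sinr = rng.uniform(-5, 25, size=1000)        # dB
speed = rng.uniform(0, 30, size=1000)        # UE speed, m/s
harq_fail = rng.uniform(0, 0.3, size=1000)   # recent HARQ failure rate
X = np.column_stack([sinr, speed, harq_fail])
y = np.clip(0.2 * sinr + 1.0 - 2.0 * harq_fail, 0.1, None)  # synthetic bits/s/Hz

model = Ridge(alpha=1.0).fit(X, y)

def learned_link_adaptation(sinr_db: float, speed_ms: float, harq_rate: float) -> float:
    """Predict achievable spectral efficiency, from which an MCS can be chosen."""
    return float(model.predict(np.array([[sinr_db, speed_ms, harq_rate]]))[0])

print(table_link_adaptation(12.0))              # static MCS index: 4
print(learned_link_adaptation(12.0, 0.5, 0.1))  # predicted efficiency, ~3.2 bits/s/Hz
```

The design point is that a learned mapping can exploit context a fixed table ignores, which is where a gain on the order of the quoted 10% would have to come from; the specific numbers here are synthetic.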
Ericsson hopes that AI will boost consumer and business demand for 5G connectivity. New form factors such as smart glasses and AR headsets will need lower-latency connections with improved support for the uplink, it has repeatedly argued. But analysts are skeptical, and Ericsson itself thinks Europe is ill-equipped for more advanced 5G services.
“We’re still very early in AI, in [understanding] how applications are going to start running, but I think it’s going to be a key driver of our business going forward, both on traffic, on the way we operate networks, and the way we run Ericsson,” Ekholm said.
Europe Disappoints: In much of Europe, Ericsson and Nokia have been frustrated by some government and telco unwillingness to adopt the European Union’s “5G toolbox” recommendations and evict Chinese vendors. “I think what we have seen in terms of implementation is quite varied, to be honest,” said Narvinger. Rather than banning Huawei outright, Germany’s government has introduced legislation that allows operators to use most of its RAN products if they find a substitute for part of Huawei’s management system by 2029. Opponents have criticized that move, arguing it does not address the security threat posed by Huawei’s RAN software. Nevertheless, Ericsson clearly eyes an opportunity to serve European demand for military communications, an area where the use of Chinese vendors would be unthinkable.
“It is realistic to say that a large part of the increased defense spending in Europe will most likely be allocated to connectivity because that is a critical part of a modern defense force,” said Ekholm. “I think this is a very good opportunity for western vendors because it would be far-fetched to think they will go with high-risk vendors.” Ericsson is also targeting related demand for mission-critical services needed by first responders.
5G SA and Mobile Core Networks: Ekholm noted that 5G SA deployments are still few and far between – only a quarter of mobile operators have any kind of 5G SA deployment in place right now, with the most notable being in the US, India and China. “Two things need to happen,” for greater 5G SA uptake, stated the CEO.
- “One is mid-band [spectrum] coverage… there’s still very low build out coverage in, for example, Europe, where it’s probably less than half the population covered… Europe is clearly behind on that“ compared with the U.S., China and India.
- “The second is that [network operators] need to upgrade their mobile core [platforms]... Those two things will have to happen to take full advantage of the capabilities of the [5G] network,” noted Ekholm, who said the arrival of new devices, such as AI glasses, that require ultra low latency connections and “very high uplink performance” is starting to drive interest. “We’re also seeing a lot of network slicing opportunities,” he added, to deliver dedicated network resources to, for example, police forces, sports and entertainment stadiums “to guarantee uplink streams… consumers are willing to pay for these things. So I’m rather encouraged by the service innovation that’s starting to happen on 5G SA and… that’s going to drive the need for more radio coverage [for] mid-band and for core [systems].”
Ericsson's Summary – Looking Ahead:
- Continue to strengthen competitive position
- Strong customer engagement for differentiated connectivity
- New use cases to monetize network investments taking shape
- Expect RAN market to remain broadly stable
- Structurally improving the business through rigorous cost management
- Continue to invest in technology leadership
………………………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.telecomtv.com/content/5g/ericsson-ceo-waxes-lyrical-on-potential-of-5g-sa-ai-53441/
https://www.lightreading.com/5g/ericsson-targets-big-huawei-free-places-ai-and-nato-as-profits-soar
Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation
Agentic AI and the Future of Communications for Autonomous Vehicles (V2X)
by Prashant Vajpayee (bio below), edited by Alan J Weissberger
Abstract:
Autonomous vehicles increasingly depend on Vehicle-to-Everything (V2X) communications, but 5G networks face challenges such as latency, coverage gaps, high infrastructure costs, and security risks. To overcome these limitations, this article explores alternative protocols like DSRC, VANETs, ISAC, PLC, and Federated Learning, which offer decentralized, low-latency communication solutions.
Of critical importance for this approach is Agentic AI—a distributed intelligence model based on the Observe, Orient, Decide, and Act (OODA) loop—that enhances adaptability, collaboration, and security across the V2X stack. Together, these technologies lay the groundwork for a resilient, scalable, and secure next-generation Intelligent Transportation System (ITS).
Problems with 5G for V2X Communications:
There are several problems with using 5G for V2X communications, which is why the 5G NR (New Radio) V2X specification, developed by the 3rd Generation Partnership Project (3GPP) in Release 16, hasn’t been widely implemented. Here are a few of them:
- Variable latency: Although 5G promises sub-millisecond latency, real-world deployments often exhibit 10 to 50 milliseconds of delay, especially when the V2X server is hosted in a cloud environment. Multi-hop routing, network slicing, and handover delays add further latency, making 5G unsuitable for ultra-reliable low-latency communication (URLLC) in critical scenarios [1, 2].
- Coverage Gaps & Handover Issues: 5G availability remains a problem in rural and remote areas. In addition, for fast-moving vehicles, handovers between 5G cells can cause communication delays and connectivity failures [3, 4].
- Infrastructure and Cost Constraints: Full 5G deployment requires dense small-cell infrastructure, which is costly and logistically complex, especially in developing regions and along highways.
- Spectrum Congestion and Interference: In shared-spectrum scenarios, other services can interfere with the 5G network, degrading V2X reliability.
- Security and Trust Issues: The centralized nature of 5G architectures leaves them vulnerable to single points of failure, a serious cybersecurity risk for autonomous systems.
Alternative Communications Protocols as a Solution for V2X (when integrated with Agentic AI):
Alternative protocols such as DSRC, VANETs, ISAC, PLC, and Federated Learning offer a potential remedy for the above 5G shortcomings when integrated with Agentic AI.
While these alternatives reduce dependency on centralized infrastructure and provide greater fault tolerance, they also introduce complexity. As autonomous vehicles (AVs) become increasingly prevalent, Vehicle-to-Everything (V2X) communication is emerging as the digital nervous system of intelligent transportation systems. Given the deployment and reliability challenges associated with 5G, the industry is shifting toward alternative networking solutions—where Agentic AI is being introduced as a cognitive layer that renders these ecosystems adaptive, secure, and resilient.
The following use cases show how Agentic AI can bring efficiency:
- Cognitive Autonomy: Each vehicle or roadside unit (RSU) operates an AI agent capable of observing, orienting, deciding, and acting (OODA) without continuous reliance on cloud supervision. This autonomy enables real-time decision-making for scenarios such as rerouting, merging, or hazard avoidance—even in disconnected environments [12].
- Multi-Agent Collaboration: AI agents negotiate and coordinate with one another using standardized protocols (e.g., MCP, A2A), enabling guidance on optimal vehicle spacing, intersection management, and dynamic traffic control—without the need for centralized orchestration [13].
- Embedded Security Intelligence: While multiple agents collaborate, dedicated security agents monitor system activities for anomalies, enforce access control policies, and quarantine threats at the edge. As Forbes notes, “Agentic AI demands agentic security,” emphasizing the importance of embedding trust and resilience into every decision node [14].
- Protocol-Agnostic Adaptability: Agentic AI can dynamically switch among various communication protocols—including DSRC, VANETs, ISAC, or PLC—based on real-time evaluations of signal quality, latency, and network congestion (a minimal sketch follows this list). Agents equipped with cognitive capabilities enhance system robustness against 5G performance limitations or outages.
- Federated Learning and Self-Improvement: Vehicles independently train machine learning models locally and transmit only model updates—preserving data privacy, minimizing bandwidth usage, and improving processing efficiency.
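As a minimal illustration of the "Cognitive Autonomy" and "Protocol-Agnostic Adaptability" items above, the following Python sketch runs a single OODA iteration that scores the available links and switches protocols when the current one degrades. The protocol names, metrics, and scoring weights are illustrative assumptions, not a standardized V2X API.

```python
# Minimal OODA-loop sketch for protocol-agnostic V2X link selection.
# Protocol names, metrics, and scoring weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LinkObservation:
    protocol: str          # e.g., "5G", "DSRC", "VANET", "ISAC", "PLC"
    latency_ms: float
    signal_quality: float  # 0.0 (unusable) .. 1.0 (excellent)
    congestion: float      # 0.0 (idle) .. 1.0 (saturated)

def orient(obs: LinkObservation) -> float:
    """Score a link: prefer low latency, strong signal, low congestion."""
    return obs.signal_quality - 0.01 * obs.latency_ms - 0.5 * obs.congestion

def decide(observations: list[LinkObservation], current: str,
           switch_margin: float = 0.1) -> str:
    """Switch only if the best alternative beats the current link by a margin."""
    scores = {o.protocol: orient(o) for o in observations}
    best = max(scores, key=scores.get)
    if best != current and scores[best] > scores.get(current, float("-inf")) + switch_margin:
        return best
    return current

def act(protocol: str) -> None:
    print(f"Transmitting safety messages over {protocol}")

# One Observe -> Orient -> Decide -> Act iteration with synthetic observations.
observations = [
    LinkObservation("5G",   latency_ms=45.0, signal_quality=0.6, congestion=0.7),
    LinkObservation("DSRC", latency_ms=8.0,  signal_quality=0.8, congestion=0.2),
    LinkObservation("PLC",  latency_ms=20.0, signal_quality=0.5, congestion=0.1),
]
act(decide(observations, current="5G"))  # DSRC wins: lower latency, less congestion
```

In a real deployment the Observe step would pull measurements from the radio stack and the Act step would reconfigure the transmitter; the loop above only shows the decision logic.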
The figure below illustrates the proposed architectural framework for secure Agentic AI enablement within V2X communications, leveraging alternative communication protocols and the OODA (Observe–Orient–Decide–Act) cognitive model.
Conclusions:
With the integration of an intelligent Agentic AI layer into V2X systems, autonomous, adaptive, and efficient decision-making emerges from seamless collaboration of the distributed intelligent components.
Numerous examples highlight the potential of Agentic AI to deliver significant business value.
- TechCrunch reports that Amazon’s R&D division is actively developing an Agentic AI framework to automate warehouse operations through robotics [15]. A similar architecture can be extended to autonomous vehicles (AVs) to enhance both communication and cybersecurity capabilities.
- Forbes emphasizes that “Agentic AI demands agentic security,” underscoring the need for every action—whether executed by human or machine—to undergo rigorous review and validation from a security perspective [16]. Forbes notes, “Agentic AI represents the next evolution in AI—a major transition from traditional models that simply respond to human prompts.” By combining Agentic AI with alternative networking protocols, robust V2X ecosystems can be developed—capable of maintaining resilience despite connectivity losses or infrastructure gaps, enforcing strong cyber defense, and exhibiting intelligence that learns, adapts, and acts autonomously [19].
- Business Insider highlights the scalability of Agentic AI, referencing how Qualtrics has implemented continuous feedback loops to retrain its AI agents dynamically [17]. This feedback-driven approach is equally applicable in the mobility domain, where it can support real-time coordination, dynamic rerouting, and adaptive decision-making.
- Multi-agent systems are also advancing rapidly. As Amazon outlines its vision for deploying “multi-talented assistants” capable of operating independently in complex environments, the trajectory of Agentic AI becomes even more evident [18].
References:
- Coll-Perales, B., Lucas-Estañ, M. C., Shimizu, T., Gozalvez, J., Higuchi, T., Avedisov, S., … & Sepulcre, M. (2022). End-to-end V2X latency modeling and analysis in 5G networks. IEEE Transactions on Vehicular Technology, 72(4), 5094-5109.
- Horta, J., Siller, M., & Villarreal-Reyes, S. (2025). Cross-layer latency analysis for 5G NR in V2X communications. PloS one, 20(1), e0313772.
- Cellular V2X Communications Towards 5G. Available at: "pdf"
- Al Harthi, F. R. A., Touzene, A., Alzidi, N., & Al Salti, F. (2025, July). Intelligent Handover Decision-Making for Vehicle-to-Everything (V2X) 5G Networks. In Telecom (Vol. 6, No. 3, p. 47). MDPI.
- DSRC Safety Modem. Available at: https://www.nxp.com/products/wireless-connectivity/dsrc-safety-modem:DSRC-MODEM
- VANETs and V2X Communication. Available at: https://www.sanfoundry.com/vanets-and-v2x-communication/#
- Yu, K., Feng, Z., Li, D., & Yu, J. (2023). Secure-ISAC: Secure V2X communication: An integrated sensing and communication perspective. arXiv preprint arXiv:2312.01720.
- Study on integrated sensing and communication (ISAC) for C-V2X application. Available at: https://5gaa.org/content/uploads/2025/05/wi-isac-i-tr-v.1.0-may-2025.pdf
- Ramasamy, D. (2023). Possible hardware architectures for power line communication in automotive v2g applications. Journal of The Institution of Engineers (India): Series B, 104(3), 813-819.
- Xu, K., Zhou, S., & Li, G. Y. (2024). Federated reinforcement learning for resource allocation in V2X networks. IEEE Journal of Selected Topics in Signal Processing.
- Asad, M., Shaukat, S., Nakazato, J., Javanmardi, E., & Tsukada, M. (2025). Federated learning for secure and efficient vehicular communications in open RAN. Cluster Computing, 28(3), 1-12.
- Bryant, D. J. (2006). Rethinking OODA: Toward a modern cognitive framework of command decision making. Military Psychology, 18(3), 183-206.
- Agentic AI Communication Protocols: The Backbone of Autonomous Multi-Agent Systems. Available at: https://datasciencedojo.com/blog/agentic-ai-communication-protocols/
- Agentic AI And The Future Of Communications Networks. Available at: https://www.forbes.com/councils/forbestechcouncil/2025/05/27/agentic-ai-and-the-future-of-communications-networks/
- Amazon launches new R&D group focused on agentic AI and robotics, TechCrunch.
- Securing Identities For The Agentic AI Landscape. Available at: https://www.forbes.com/councils/forbestechcouncil/2025/07/03/securing-identities-for-the-agentic-ai-landscape/
- Qualtrics' president of product has a vision for agentic AI in the workplace: 'We're going to operate in a multiagent world'. Available at: https://www.businessinsider.com/agentic-ai-improve-qualtrics-company-customer-communication-data-collection-2025-5
- Amazon's R&D lab forms new agentic AI group. Available at: https://www.cnbc.com/2025/06/04/amazons-rd-lab-forms-new-agentic-ai-group.html
- Agentic AI: The Next Frontier In Autonomous Work. Available at: https://www.forbes.com/councils/forbestechcouncil/2025/06/27/agentic-ai-the-next-frontier-in-autonomous-work/
About the Author:
Prashant Vajpayee is a Senior Product Manager and researcher in AI and cybersecurity, with expertise in enterprise data integration, cyber risk modeling, and intelligent transportation systems. With a foundation in strategic leadership and innovation, he has led transformative initiatives at Salesforce and advanced research focused on cyber risk quantification and resilience across critical infrastructure, including Transportation 5.0 and the global supply chain. His work empowers organizations to implement secure, scalable, and ethically grounded digital ecosystems. Through his writing, Prashant seeks to demystify complex cybersecurity and AI challenges and share actionable insights with technologists, researchers, and industry leaders.
Dell’Oro: AI RAN to account for 1/3 of RAN market by 2029; AI RAN Alliance membership increases but few telcos have joined
AI RAN [1.] is projected to account for approximately a third of the RAN market by 2029, according to a recent AI RAN Advanced Research Report published by the Dell’Oro Group. In the near term, the focus within the AI RAN segment will center on Distributed-RAN (D-RAN), single-purpose deployments, and 5G.
“Near-term priorities are more about efficiency gains than new revenue streams,” said Stefan Pongratz, Vice President at Dell’Oro Group. “There is strong consensus that AI RAN can improve the user experience, enhance performance, reduce power consumption, and play a critical role in the broader automation journey. Unsurprisingly, however, there is greater skepticism about AI’s ability to reverse the flat revenue trajectory that has defined operators throughout the 4G and 5G cycles,” continued Pongratz.
Note 1. AI RAN integrates AI and machine learning (ML) across various aspects of the RAN domain. The AI RAN scope in this report is aligned with the greater industry vision. While the broader AI RAN vision includes services and infrastructure, the projections in this report focus on the RAN equipment market.
Additional highlights from the July 2025 AI RAN Advanced Research Report:
- The base case is built on the assumption that AI RAN is not a growth vehicle. But it is a crucial technology/tool for operators to adopt. Over time, operators will incorporate more virtualization, intelligence, automation, and O-RAN into their RAN roadmaps.
- This initial AI RAN report forecasts the AI RAN market based on location, tenancy, technology, and region.
- The existing RAN radio and baseband suppliers are well-positioned in the initial AI-RAN phase, driven primarily by AI-for-RAN upgrades leveraging the existing hardware. Per Dell’Oro Group’s regular RAN coverage, the top 5 RAN suppliers contributed around 95 percent of the 2024 RAN revenue.
- AI RAN is projected to account for around a third of total RAN revenue by 2029.
In the first quarter of 2025, Dell’Oro said the top five RAN suppliers based on revenues outside of China are Ericsson, Nokia, Huawei, Samsung and ZTE. In terms of worldwide revenue, the ranking changes to Huawei, Ericsson, Nokia, ZTE and Samsung.
About the Report: Dell’Oro Group’s AI RAN Advanced Research Report includes a 5-year forecast for AI RAN by location, tenancy, technology, and region. Contact: [email protected]
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Author’s Note: Nvidia’s Aerial Research portfolio already contains a host of AI-powered tools designed to augment wireless network simulations. It is also collaborating with T-Mobile and Cisco to develop AI RAN solutions to support future 6G applications. The GPU king is also working with some of those top five RAN suppliers, Nokia and Ericsson, on an AI-RAN Innovation Center. Unveiled last October, the project aims to bring together cloud-based RAN and AI development and push beyond applications that focus solely on improving efficiencies.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
The one-year-old AI-RAN Alliance has now increased its membership to over 100, up from around 84 in May. However, there are still few telco members, with only Vodafone joining since May. The other telco members are Turkcell, Boost Mobile, Globe, Indosat Ooredoo Hutchison (Indonesia), Korea Telecom, LG UPlus, SK Telecom, T-Mobile US and SoftBank. This limited telco presence could reflect ongoing skepticism about the goals of AI-RAN, including hopes for new revenue opportunities through network slicing, as well as hosting and monetizing enterprise AI workloads at the edge.
Francisco Martín Pignatelli, head of open RAN at Vodafone, hardly sounded enthusiastic in his statement in the AI-RAN Alliance press release. “Vodafone is committed to using AI to optimize and enhance the performance of our radio access networks. Running AI and RAN workloads on shared infrastructure boosts efficiency, while integrating AI and generative applications over RAN enables new real-time capabilities at the network edge,” he added.
Perhaps the most popular AI RAN scenario is "AI on RAN," which enables AI services on the RAN at the network edge in a bid to support and benefit from new services, such as AI inferencing.
“We are thrilled by the extraordinary growth of the AI-RAN Alliance,” said Alex Jinsung Choi, Chair of the AI-RAN Alliance and Principal Fellow at SoftBank Corp.’s Research Institute of Advanced Technology. “This milestone underscores the global momentum behind advancing AI for RAN, AI and RAN, and AI on RAN. Our members are pioneering how artificial intelligence can be deeply embedded into radio access networks — from foundational research to real-world deployment — to create intelligent, adaptive, and efficient wireless systems.”
Choi recently suggested that now is the time to “revisit all our value propositions and then think about what should be changed or what should be built” to be able to address issues including market saturation and the “decoupling” between revenue growth and rising TCO. He also cited self-driving vehicles and mobile robots, where low latency is critical, as specific use cases where AI-RAN will be useful for running enterprise workloads.
About the AI-RAN Alliance:
The AI-RAN Alliance is a global consortium accelerating the integration of artificial intelligence into Radio Access Networks. Established in 2024, the Alliance unites leading companies, researchers, and technologists to advance open, practical approaches for building AI-native wireless networks. The Alliance focuses on enabling experimentation, sharing knowledge, and real-world performance to support the next generation of mobile infrastructure. For more information, visit: https://ai-ran.org
References:
https://www.delloro.com/advanced-research-report/ai-ran/
https://www.delloro.com/news/ai-ran-to-top-10-billion-by-2029/
Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum
AI RAN Alliance selects Alex Choi as Chairman
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
The case for and against AI-RAN technology using Nvidia or AMD GPUs
ZTE’s AI infrastructure and AI-powered terminals revealed at MWC Shanghai
ZTE Corporation unveiled a full range of AI initiatives under the theme “Catalyzing Intelligent Innovation” at MWC Shanghai 2025. Those innovations include AI + networks, AI applications, and AI-powered terminals. During several demonstrations, ZTE showcased its key advancements in AI phones and smart homes. Leveraging its underlying capabilities, the company is committed to providing full-stack solutions—from infrastructure to application ecosystems—for operators, enterprises, and consumers, co-creating an era of AI for all.
ZTE’s Chief Development Officer Cui Li outlined the vendor’s roadmap for building intelligent infrastructure and accelerating artificial intelligence (AI) adoption across industries during a keynote session at MWC Shanghai 2025. During her speech, Cui highlighted the growing influence of large AI models and the critical role of foundational infrastructure. “No matter how AI technology evolves in the future, the focus will remain on efficient infrastructure, optimized algorithms and practical applications,” she said. The Chinese vendor is deploying modular, prefabricated data center units and AI-based power management, which she said reduce energy use and cooling loads by more than 10%. These developments are aimed at delivering flexible, sustainable capacity to meet growing AI demands, the ZTE executive said.
ZTE is also advancing “AI-native” networks that shift from traditional architectures to heterogeneous computing platforms, with embedded AI capabilities. This, Cui said, marks a shift from AI as a support tool to autonomous agents shaping operations. Ms. Cui emphasized the role of high-quality, secure data and efficient algorithms in building more capable AI. “Data is like fertile ‘soil’. Its volume, purity and security decide how well AI as a plant can grow,” she said. “Every digital application — including AI — depends on efficient and green infrastructure,” she said.
ZTE is heavily investing in AI-native network architecture and high-efficiency computing:
- AI-native networks – ZTE is redesigning telecom infrastructure with embedded intelligence, modular data centers and AI-driven energy systems to meet escalating AI compute demands.
- Smarter models, better data – With advanced training methods and tools, ZTE is pushing the boundaries of model accuracy and real-world performance.
- Edge-to-core deployment – ZTE is integrating AI across consumer, home and industry use cases, delivering over 100 applied solutions across 18 verticals under its “AI for All” strategy.
ZTE has rolled out a full range of innovative solutions for network intelligence upgrades:
- AIR RAN solution: deeply integrating AI to fully improve energy efficiency, maintenance efficiency, and user experience, driving the transition towards value creation of 5G
- AIR Net solution: a high-level autonomous network solution that encompasses three engines to advance network operations towards "Agentic Operations"
- AI-optical campus solution: addressing network pain points in various scenarios for higher operational efficiency in cities
- HI-NET solution: a high-performance and highly intelligent transport network solution enabling "terminal-edge-network-computing" synergy with multiple groundbreaking innovations, including the industry's first integrated sensing-communication-computing CPE, full-band OTNs, highest-density 800G intelligent switches, and the world's leading AI-native routers
Through technological innovations in wireless and wired networks, ZTE is building an energy-efficient, wide-coverage, and intelligent network infrastructure that meets current business needs and lays the groundwork for future AI-driven applications, positioning operators as first movers in digital transformation.
In the home terminal market, ZTE AI Home establishes a family-centric vDC and employs MoE-based AI agents to deliver personalized services for each household member. Supported by an AI network, home-based computing power, AI screens, and AI companion robots, ZTE AI Home ensures a seamless and engaging experience—providing 24/7 all-around, warm-hearted care for every family member. The product highlights include:
- AI FTTR: Serving as a thoughtful life assistant, it is equipped with a household knowledge base to proactively understand and optimize daily routines for every family member.
- AI Wi-Fi 7: Featuring the industry's first omnidirectional antenna and smart roaming solution, it ensures high-speed and stable connectivity.
- Smart display: It acts like an exclusive personal trainer, leveraging precise semantic parsing technology to tailor personalized services for users.
- AI flexible screen & cloud PC: Multi-screen interactions cater to diverse needs for home entertainment and mobile office, creating a new paradigm for smart homes.
- AI companion robot: Backed by smart emotion recognition and bionic interaction systems, the robot safeguards children's healthy growth with emotionally intelligent connections.
ZTE will anchor its product strategy on “Connectivity + Computing.” Collaborating with industry partners, the company is committed to driving industrial transformation, and achieving computing and AI for all, thereby contributing to a smarter, more connected world.
References:
ZTE reports H1-2024 revenue of RMB 62.49 billion (+2.9% YoY) and net profit of RMB 5.73 billion (+4.8% YoY)
ZTE reports higher earnings & revenue in 1Q-2024; wins 2023 climate leadership award
Malaysia’s U Mobile signs MoU’s with Huawei and ZTE for 5G network rollout
China Mobile & ZTE use digital twin technology with 5G-Advanced on high-speed railway in China
Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum
Dell’Oro: RAN market still declining with Huawei, Ericsson, Nokia, ZTE and Samsung top vendors
Dell’Oro: Global RAN Market to Drop 21% between 2021 and 2029
Nile launches a Generative AI engine (NXI) to proactively detect and resolve enterprise network issues
Nile is a private, venture-funded technology company specializing in AI-driven network and security infrastructure services for enterprises and government organizations. Nile has pioneered the use of AI and machine learning in enterprise networking. Its latest generative AI capability, Nile Experience Intelligence (NXI), proactively resolves network issues before they impact users or IT teams, automating fault detection, root cause analysis, and remediation at scale. This approach reduces manual intervention, eliminates alert fatigue, and ensures high performance and uptime by autonomously managing networks.
Significant Innovations Include:
- Automated site surveys and network design using AI and machine learning
- Digital twins for simulating and optimizing network operations
- Edge-to-cloud zero-trust security built into all service components
- Closed-loop automation for continuous optimization without human intervention
Today, the company announced the launch of Nile Experience Intelligence (NXI), a novel generative AI capability designed to proactively resolve network issues before they impact IT teams, users, IoT devices, or the performance standards defined by Nile’s Network-as-a-Service (NaaS) guarantee. As a core component of the Nile Access Service [1.], NXI uniquely enables Nile to take advantage of its comprehensive, built-in AI automation capabilities. NXI allows Nile to autonomously monitor every customer deployment at scale, identifying performance anomalies and network degradations that impact reliability and user experience. While others market their offerings as NaaS, only the Nile Access Service with NXI delivers a financially backed performance guarantee—an unmatched industry standard.
………………………………………………………………………………………………………………………………………………………………
Note 1. Nile Access Service is a campus Network-as-a-Service (NaaS) platform that delivers both wired and wireless LAN connectivity with integrated Zero Trust Networking (ZTN), automated lifecycle management, and a unique industry-first performance guarantee. The service is built on a vertically integrated stack of hardware, software, and cloud-based management, leveraging continuous monitoring, analytics, and AI-powered automation to simplify deployment, automate maintenance, and optimize network performance.
………………………………………………………………………………………………………………………………………………………………………………………………….
“Traditional networking and NaaS offerings based on service packs rely on IT organizations to write rules that are static and reactive, which requires continuous management. Nile and NXI flipped that approach by using generative AI to anticipate and resolve issues across our entire install base, before users or IT teams are even aware of them,” said Suresh Katukam, Chief Product Officer at Nile. “With NXI, instead of providing recommendations and asking customers to write rules that involve manual interaction—we’re enabling autonomous operations that provide a superior and uninterrupted user experience.”
Key capabilities include:
- Proactive Fault Detection and Root Cause Analysis: predictive modeling-based data analysis of billions of daily events, enabling proactive insights across Nile’s entire customer install base.
- Large Scale Automated Remediation: leveraging the power of generative AI and large language models (LLMs), NXI automatically validates and implements resolutions without manual intervention, virtually eliminating customer-generated trouble tickets.
- Eliminate Alert Fatigue: NXI eliminates alert overload by shifting focus from notifications to autonomous, actionable resolution, ensuring performance and uptime without IT intervention.
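A conceptual sketch of what such a closed detection-and-remediation loop can look like is shown below. The event fields, statistical threshold, and remediation actions are invented for illustration and do not represent Nile's NXI internals, which are not publicly documented.

```python
# Hypothetical closed-loop anomaly detection and remediation sketch.
# Event fields, thresholds, and actions are invented for illustration only;
# this is not Nile's NXI implementation.
from statistics import mean, pstdev

def is_anomalous(latency_samples_ms: list[float], z_threshold: float = 3.0) -> bool:
    """Flag an anomaly when the latest sample deviates strongly from the baseline."""
    baseline, latest = latency_samples_ms[:-1], latency_samples_ms[-1]
    mu, sigma = mean(baseline), pstdev(baseline) or 1e-9
    return abs(latest - mu) / sigma > z_threshold

def root_cause(event: dict) -> str:
    """Toy root-cause step: pick the most likely cause from event context."""
    if event["ap_cpu_pct"] > 90:
        return "access_point_overload"
    if event["dhcp_failures"] > 0:
        return "dhcp_exhaustion"
    return "unknown"

REMEDIATIONS = {
    "access_point_overload": "steer_clients_to_neighbor_ap",
    "dhcp_exhaustion": "expand_dhcp_scope",
}

def closed_loop(event: dict) -> str:
    """Detect -> diagnose -> remediate without waiting on a human operator."""
    if not is_anomalous(event["latency_ms"]):
        return "no_action"
    return REMEDIATIONS.get(root_cause(event), "open_ticket_for_review")

event = {"latency_ms": [5, 6, 5, 7, 6, 48], "ap_cpu_pct": 95, "dhcp_failures": 0}
print(closed_loop(event))  # -> steer_clients_to_neighbor_ap
```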
Unlike rules-based systems dependent on human-configured logic and manual maintenance, NXI is:
- Generative AI and self-learning powered, eliminating the need for static, manually created rules that are prone to human error and require ongoing maintenance.
- Designed for scale, NXI already processes terabytes of data daily and effortlessly scales to manage thousands of networks simultaneously.
- Built on Nile’s standardized architecture, enabling consistent AI-driven optimization across all customer networks at scale.
- Closed-loop automated, no dashboards or recommended actions for customers to interpret, and no waiting on manual intervention.
Katukam added, “NXI is a game-changer for Nile. It enables us to stay ahead of user experience and continuously fine-tune the network to meet evolving needs. This is what true autonomous networking looks like—proactive, intelligent, and performance-guaranteed.”
From improved connectivity to consistent performance, Nile customers are already seeing the impact of NXI. For more information about NXI and Nile’s secure Network as a Service platform, visit www.nilesecure.com.
About Nile:
Nile is leading a fundamental shift in the networking industry, challenging decades-old conventions to deliver a radically new approach. By eliminating complexity and rethinking how networks are built, consumed, and operated, Nile is pioneering a new category designed for a modern, service-driven era. With a relentless focus on simplicity, security, reliability, and performance, Nile empowers organizations to move beyond the limitations of legacy infrastructure and embrace a future where networking is effortless, predictable, and fully aligned with their digital ambitions.
Nile is recognized as a disruptor in the enterprise networking market, offering a modern alternative to traditional vendors like Cisco and HPE. Its model enables organizations to reduce total cost of ownership by more than 60% and reclaim IT resources while providing superior connectivity. Major customers include Stanford University, Pitney Bowes, and Carta.
The company has received several industry accolades, including the CRN Tech Innovators Award (2024) and recognition in Gartner's Peer Insights Voice of the Customer report. Nile has raised over $300 million in funding, with a significant $175 million Series C round in 2023 to fuel expansion.
References:
https://nilesecure.com/company/about-us
Does AI change the business case for cloud networking?
Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections
Qualcomm to acquire Alphawave Semi for $2.4 billion; says its high-speed wired tech will accelerate AI data center expansion
AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics
McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector
A new report from McKinsey & Company offers a wide range of options for telecom network operators looking to enter the market for AI services. One high-level conclusion is that strategy inertia and decision paralysis might be the most dangerous threats. That is largely based on telcos' failure to monetize past emerging technologies such as smartphones and mobile apps, cloud networking, and 5G SA (the true 5G). For example, global mobile data traffic rose 60% per year from 2010 to 2023, while the global telecom industry's revenues rose just 1% per year over that same period.
“Operators could provide the backbone for today’s AI economy to reignite growth. But success will hinge on effectively navigating complex market dynamics, uncertain demand, and rising competition….Not every path will suit every telco; some may be too risky for certain operators right now. However, the most significant risk may come from inaction, as telcos face the possibility of missing out on their fair share of growth from this latest technological disruption.”
McKinsey predicts that global data center demand could rise as high as 298 gigawatts by 2030, from just 55 gigawatts in 2023. Fiber connections to AI-infused data centers could generate up to $50 billion globally in sales for fiber facilities-based carriers.
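For perspective, the compound annual growth rates implied by those figures can be computed directly; this minimal sketch uses the endpoints quoted above.

```python
# Implied compound annual growth rates (CAGR) from the figures quoted above.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Global data center demand: 55 GW (2023) -> up to 298 GW (2030), per McKinsey.
print(f"Data center demand CAGR: {cagr(55, 298, 7):.1%}")  # ~27% per year

# Mobile data traffic grew ~60%/yr from 2010 to 2023 while telecom revenue grew ~1%/yr.
traffic_multiple = 1.60 ** 13
revenue_multiple = 1.01 ** 13
print(f"Traffic grew ~{traffic_multiple:,.0f}x while revenue grew ~{revenue_multiple:.2f}x")
```

That gap (traffic up roughly 450x over the period against revenue up barely 14%) is the monetization failure the report warns telcos not to repeat.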
Pathways to growth – Exploring four strategic options:
- Connecting new data centers with fiber
- Enabling high-performance cloud access with intelligent network services
- Turning unused space and power into revenue
- Building a new GPU as a Service business.
"Our research suggests that the addressable GPUaaS [GPU-as-a-service] market addressed by telcos could range from $35 billion to $70 billion by 2030 globally." Verizon (with its AI Connect service, described below), Indosat Ooredoo Hutchison (IOH), Singtel and SoftBank in Asia have launched their own GPUaaS offerings.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Recent AI developments in the telecom sector include:
- The AI-RAN Alliance, which promises to allow wireless network operators to add AI to their radio access networks (RANs) and then sell AI computing capabilities to enterprises and other customers at the network edge. Nvidia is leading this industrial initiative. Telecom operators in the alliance include T-Mobile and SoftBank, as well as Boost Mobile, Globe, Indosat Ooredoo Hutchison, Korea Telecom, LG UPlus, SK Telecom and Turkcell.
- Verizon’s new AI Connect product, which includes Vultr’s GPU-as-a-service (GPUaaS) offering. GPU-as-a-service is a cloud computing model that allows businesses to rent access to powerful graphics processing units (GPUs) for AI and machine learning workloads without having to purchase and maintain that expensive hardware themselves. Verizon also has agreements with Google Cloud and Meta to provide network infrastructure for their AI workloads, demonstrating a focus on supporting the broader AI economy.
- Orange views AI as a critical growth driver. They are developing “AI factories” (data centers optimized for AI workloads) and providing an “AI platform layer” called Live Intelligence to help enterprises build generative AI systems. They also offer a generative AI assistant for contact centers in partnership with Microsoft.
- Lumen Technologies continues to build fiber connections intended to carry AI traffic.
- British Telecom (BT) has launched intelligent network services and is working with partners like Fortinet to integrate AI for enhanced security and network management.
- Telus (Canada) has built its own AI platform called “Fuel iX” to boost employee productivity and generate new revenue. They are also commercializing Fuel iX and building sovereign AI infrastructure.
- Telefónica: Their “Next Best Action AI Brain” uses an in-house Kernel platform to revolutionize customer interactions with precise, contextually relevant recommendations.
- Bharti Airtel (India): Launched India’s first anti-spam network, an AI-powered system that processes billions of calls and messages daily to identify and block spammers.
- e& (formerly Etisalat in UAE): Has launched the “Autonomous Store Experience (EASE),” which uses smart gates, AI-powered cameras, robotics, and smart shelves for a frictionless shopping experience.
- SK Telecom (Korea): Unveiled a strategy to implement an “AI Infrastructure Superhighway” and is actively involved in AI-RAN (AI in Radio Access Networks) development, including their AITRAS solution.
- Vodafone: Sees AI as a transformative force, with initiatives in network optimization, customer experience (e.g., their TOBi chatbot handling over 45 million interactions per month), and even supporting neurodiverse staff.
- Deutsche Telekom: Deploys AI across various facets of its operations
……………………………………………………………………………………………………………………………………………………………………..
A recent report from DCD indicates that new AI models that can reason may require massive, expensive data centers, and such data centers may be out of reach for even the largest telecom operators. Across optical data center interconnects, data centers are already communicating with each other for multi-cluster training runs. "What we see is that, in the largest data centers in the world, there's actually a data center and another data center and another data center," says one executive quoted in the report. "Then the interesting discussion becomes – do I need 100 meters? Do I need 500 meters? Do I need a kilometer interconnect between data centers?"
……………………………………………………………………………………………………………………………………………………………………..
References:
https://www.datacenterdynamics.com/en/analysis/nvidias-networking-vision-for-training-and-inference/
https://opentools.ai/news/inaction-on-ai-a-critical-misstep-for-telecos-says-mckinsey
Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Telecom and AI Status in the EU
Major technology companies form AI-Enabled Information and Communication Technology (ICT) Workforce Consortium
AI RAN Alliance selects Alex Choi as Chairman
AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions
AI sparks huge increase in U.S. energy consumption and is straining the power grid; transmission/distribution as a major problem
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
NEC’s new AI technology for robotics & RAN optimization designed to improve performance
MTN Consulting: Generative AI hype grips telecom industry; telco CAPEX decreases while vendor revenue plummets
Amdocs and NVIDIA to Accelerate Adoption of Generative AI for $1.7 Trillion Telecom Industry
SK Telecom and Deutsche Telekom to Jointly Develop Telco-specific Large Language Models (LLMs)
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
Damage from U.S. Export Controls and the Trade War with China:
The U.S. big tech sector especially needs to know what the rules of the trade game will be going forward, instead of the on-again/off-again Trump tariffs and trade war with China, which include 145% tariffs and export controls on AI chips from Nvidia, AMD, and other U.S. semiconductor companies.
The latest export restrictions on Nvidia's H20 AI chips are a case in point. Nvidia said it would record a $5.5 billion charge on its quarterly earnings after it disclosed that the U.S. will now require a license for exporting the company's H20 processors to China and other countries. The U.S. government told the chip maker on April 14th that the new license requirement would be in place "indefinitely."
Nvidia designed the H20 chip to comply with existing U.S. export controls that limit sales of advanced AI processors to Chinese customers. That meant the chip’s capabilities were significantly degraded; Morgan Stanley analyst Joe Moore estimates the H20’s performance is about 75% below that of Nvidia’s H100 family. The Commerce Department said it was issuing new export-licensing requirements covering H20 chips and AMD’s MI308 AI processors.
Big Chinese cloud companies like Tencent, ByteDance (TikTok's parent), Alibaba, Baidu, and iFlytek have been left scrambling for domestic alternatives to the H20, the primary AI chip that Nvidia had until recently been allowed to sell freely into the Chinese market. Some analysts suggest that bulk H20 orders to build a stockpile were a response to concerns about future U.S. export restrictions and a race to secure limited supplies of Nvidia chips. The estimate is that there is about a 90-day supply of H20 chips, but it is uncertain what China's big tech companies will use when that runs out.
The inability to sell even a low-performance chip into the Chinese market shows how the trade war will hurt Nvidia’s business. The AI chip king is now caught between the world’s two superpowers as they jockey to take the lead in AI development.
Nvidia CEO Jensen Huang "flew to China to do damage control and make sure China/Xi knows Nvidia wants/needs China to maintain its global ironclad grip on the AI Revolution," Wedbush analysts note. The markets and tech world are tired of "deal progress" talk from the White House and want deals actually being inked so they can plan their future strategy. The analysts think this is a critical week ahead to get some trade deals on the board, because Wall Street has stopped caring about words and comments around "deal progress."
- Baidu is developing its own AI chips called Kunlun. It recently placed an order for 1,600 of Huawei’s Ascend 910B AI chips for 200 servers. This order was made in anticipation of further U.S. export restrictions on AI chips.
- Alibaba (T-Head) has developed AI chips like the Hanguang 800 inference chip, used to accelerate its e-commerce platform and other services.
- Cambricon Technologies: Designs various types of semiconductors, including those for training AI models and running AI applications on devices.
- Biren Technology: Designs general-purpose GPUs and software development platforms for AI training and inference, with products like the BR100 series.
- Moore Threads: Develops GPUs designed for training large AI models, with data center products like the MTT KUAE.
- Horizon Robotics: Focuses on AI chips for smart driving, including the Sunrise and Journey series, collaborating with automotive companies.
- Enflame Technology: Designs chips for data centers, specializing in AI training and inference.
"With Nvidia's H20 and other advanced GPUs restricted, domestic alternatives like Huawei's Ascend series are gaining traction," said Doug O'Laughlin, an industry analyst at independent semiconductor research company SemiAnalysis. "While there are still gaps in software maturity and overall ecosystem readiness, hardware performance is closing in fast," O'Laughlin added. According to the SemiAnalysis report, Huawei's Ascend chip shows how U.S. export controls have failed to stop firms like Huawei from accessing critical foreign tools and sub-components needed for advanced GPUs. "While Huawei's Ascend chip can be fabricated at SMIC, this is a global chip that has HBM from Korea, primary wafer production from TSMC, and is fabricated by 10s of billions of wafer fabrication equipment from the US, Netherlands, and Japan," the report stated.
Huawei’s New AI Chip May Dominate in China:
Huawei Technologies plans to begin mass shipments of its advanced 910C artificial intelligence chip to Chinese customers as early as next month, according to Reuters, and some shipments have already been made, people familiar with the matter said. Huawei’s 910C, a graphics processing unit (GPU), represents an architectural evolution rather than a technological breakthrough, according to sources familiar with its design. It achieves performance comparable to Nvidia’s H100 chip by combining two 910B processors into a single package through advanced integration techniques, they said. As a result, it has double the computing power and memory capacity of the 910B, along with incremental improvements such as enhanced support for diverse AI workloads.
Conclusions:
The U.S. Commerce Department’s latest export curbs on Nvidia’s H20 “will mean that Huawei’s Ascend 910C GPU will now become the hardware of choice for (Chinese) AI model developers and for deploying inference capacity,” said Paul Triolo, a partner at consulting firm Albright Stonebridge Group.
The markets, the tech world, and the global economy urgently need U.S.–China trade negotiations in some form to start as soon as possible, Wedbush analysts say in a research note today. The analysts expect minimal or no guidance from tech companies during this earnings season as they are “playing darts blindfolded.”
References:
https://qz.com/china-six-tigers-ai-startup-zhipu-moonshot-minimax-01ai-1851768509#
https://www.huaweicloud.com/intl/en-us/
Goldman Sachs: Big 3 China telecom operators are the biggest beneficiaries of China’s AI boom via DeepSeek models; China Mobile’s ‘AI+NETWORK’ strategy
Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
FT: Nvidia invested $1bn in AI start-ups in 2024
Omdia: Huawei increases global RAN market share due to China hegemony
Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era
Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA
Nvidia’s annual AI developers conference (GTC) used to be a relatively modest affair, drawing about 9,000 people in its last year before the Covid outbreak. But the event, now unofficially dubbed “AI Woodstock,” is expected to draw more than 25,000 in-person attendees!
Nvidia’s Blackwell AI chips, the main showcase of last year’s GTC (GPU Technology Conference), have only recently started shipping in high volume following delays related to the mass production of their complicated design. Blackwell is expected to be the main anchor of Nvidia’s AI business through next year. Analysts expect Nvidia Chief Executive Jensen Huang to showcase a revved-up version of that family called Blackwell Ultra at his keynote address on Tuesday.
March 18th Update: The next Blackwell Ultra NVL72 chips, which have one-and-a-half times more memory and twice the bandwidth, will be used to accelerate the building of AI agents, physical AI, and reasoning models, Huang said. Blackwell Ultra will be available in the second half of this year. The Rubin AI chip is expected to launch in late 2026, and Rubin Ultra will take the stage in 2027.
Nvidia watchers are especially eager to hear more about the next generation of AI chips, called Rubin, which Nvidia has only teased in prior events. Ross Seymore of Deutsche Bank expects the Rubin family to show “very impressive performance improvements” over Blackwell. Atif Malik of Citigroup notes that Blackwell delivered 30 times faster performance than the company’s previous generation on AI inferencing, which is when trained AI models generate output. “We don’t rule out Rubin seeing similar improvement,” Malik wrote in a note to clients this month.
Rubin products aren’t expected to start shipping until next year. But much is already expected of the lineup; analysts forecast Nvidia’s data-center business will hit about $237 billion in revenue for the fiscal year ending in January of 2027, more than double its current size. The same segment is expected to eclipse $300 billion in annual revenue two years later, according to consensus estimates from Visible Alpha. That would imply an average annual growth rate of 30% over the next four years, for a business that has already exploded more than sevenfold over the last two.
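As a rough sanity check on those figures, the short Python sketch below computes the implied revenue multiple and compound annual growth rate. The ~$115 billion starting figure is an assumption inferred from “more than double its current size” relative to the $237 billion forecast, not a number taken from the analysts.

# Back-of-the-envelope check of the growth figures cited above (illustrative only).
# Assumption: current (FY2025) data-center revenue of roughly $115B, inferred from
# "more than double its current size" relative to the $237B forecast.
fy2025 = 115.0   # $B, assumed current annual revenue
fy2027 = 237.0   # $B, analyst forecast for the fiscal year ending January 2027
fy2029 = 300.0   # $B, consensus estimate two years later

print(f"FY2025 -> FY2027 multiple: {fy2027 / fy2025:.1f}x")   # ~2.1x, i.e. "more than double"
cagr = (fy2029 / fy2025) ** (1 / 4) - 1                       # four fiscal years of growth
print(f"Implied four-year CAGR to FY2029: {cagr:.0%}")        # ~27%, close to the ~30% cited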
Nvidia has also been haunted by worries about competition from in-house chips designed by its biggest customers, such as Amazon and Google. Another concern has been the efficiency breakthroughs claimed by Chinese AI startup DeepSeek, which would seemingly lessen the need for the types of AI chip clusters that Nvidia sells for top dollar.
…………………………………………………………………………………………………………………………………………………………………………………………………………………….
Telecom Sessions of Interest:
Wednesday Mar 19 | 2:00 PM – 2:40 PM PDT
Delivering Real Business Outcomes With AI in Telecom [S73438]
In this session, executives from three leading telcos will share their unique journeys of embedding AI into their organizations. They’ll discuss how AI is driving measurable value across critical areas such as network optimization, customer experience, operational efficiency, and revenue growth. Gain insights into the challenges and lessons learned, key strategies for successful AI implementation, and the transformative potential of AI in addressing evolving industry demands.
Thursday Mar 20 | 11:00 AM – 11:40 AM PDT
AI-RAN in Action [S72987]
Thursday Mar 20 | 9:00 AM – 9:40 AM PDT
How Indonesia Delivered a Telco-led Sovereign AI Platform for 270M Users [S73440]
Thursday Mar 20 | 3:00 PM – 3:40 PM PDT
Driving 6G Development With Advanced Simulation Tools [S72994]
Thursday Mar 20 | 2:00 PM – 2:40 PM PDT
Thursday Mar 20 | 4:00 PM – 4:40 PM PDT
Pushing Spectral Efficiency Limits on CUDA-accelerated 5G/6G RAN [S72990]
Thursday Mar 20 | 4:00 PM – 4:40 PM PDT
Enable AI-Native Networking for Telcos with Kubernetes [S72993]
Monday Mar 17 | 3:00 PM – 4:45 PM PDT
Automate 5G Network Configurations With NVIDIA AI LLM Agents and Kinetica Accelerated Database [DLIT72350]
Learn how to create AI agents using LangGraph and NVIDIA NIM to automate 5G network configurations. You’ll deploy LLM agents to monitor real-time network quality of service (QoS) and dynamically respond to congestion by creating new network slices. LLM agents will process logs to detect when QoS falls below a threshold, then automatically trigger a new slice for the affected user equipment. Using graph-based models, the agents understand the network configuration, identifying impacted elements. This ensures efficient, AI-driven adjustments that consider the overall network architecture.
We’ll use the Open Air Interface 5G lab to simulate the 5G network, demonstrating how AI can be integrated into real-world telecom environments. You’ll also gain practical knowledge on using Python with LangGraph and NVIDIA AI endpoints to develop and deploy LLM agents that automate complex network tasks.
Prerequisite: Python programming.
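For a feel of the pattern this lab teaches, here is a minimal, illustrative Python sketch (not the course code): a LangGraph state machine that checks a QoS reading and, when throughput falls below a threshold, asks an LLM served via NVIDIA AI endpoints whether to request a new network slice. The model name, the 50 Mbps threshold, and the create_network_slice() helper are assumptions added for illustration.

# Minimal sketch (not the course code): a LangGraph agent that checks QoS and asks
# an NVIDIA-hosted LLM whether to request a new 5G network slice.
# Assumptions: model name, threshold, and create_network_slice() are illustrative.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_nvidia_ai_endpoints import ChatNVIDIA  # requires NVIDIA_API_KEY in the environment

QOS_THRESHOLD_MBPS = 50.0  # assumed acceptable throughput floor

class NetState(TypedDict):
    ue_id: str              # user equipment identifier
    throughput_mbps: float  # latest measured throughput
    decision: str           # LLM verdict

llm = ChatNVIDIA(model="meta/llama-3.1-8b-instruct")  # assumed model name

def check_qos(state: NetState) -> NetState:
    # In the lab this step would parse live logs from the Open Air Interface testbed.
    return state

def decide(state: NetState) -> NetState:
    # Ask the LLM for a verdict only after QoS has been flagged as degraded.
    prompt = (
        f"UE {state['ue_id']} reports {state['throughput_mbps']} Mbps, below the "
        f"{QOS_THRESHOLD_MBPS} Mbps target. Answer CREATE_SLICE or NO_ACTION."
    )
    state["decision"] = llm.invoke(prompt).content.strip()
    return state

def create_network_slice(ue_id: str) -> None:
    # Hypothetical helper; a real agent would call the 5G core's slice-management API here.
    print(f"[demo] requesting new slice for {ue_id}")

def act(state: NetState) -> NetState:
    if "CREATE_SLICE" in state["decision"]:
        create_network_slice(state["ue_id"])
    return state

def route(state: NetState) -> str:
    # Skip the LLM entirely when QoS is within target.
    return "decide" if state["throughput_mbps"] < QOS_THRESHOLD_MBPS else END

graph = StateGraph(NetState)
graph.add_node("check_qos", check_qos)
graph.add_node("decide", decide)
graph.add_node("act", act)
graph.set_entry_point("check_qos")
graph.add_conditional_edges("check_qos", route)
graph.add_edge("decide", "act")
graph.add_edge("act", END)

app = graph.compile()
print(app.invoke({"ue_id": "ue-001", "throughput_mbps": 12.0, "decision": ""}))

A real deployment would feed check_qos() from live network logs and replace the placeholder helper with calls into the operator’s slice-management interface, which is the workflow the course walks through on the Open Air Interface 5G lab.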
………………………………………………………………………………………………………………………………………………………………………………………………………..
References: