AI in Networks Market
ZTE’s AI infrastructure and AI-powered terminals revealed at MWC Shanghai
ZTE Corporation unveiled a full range of AI initiatives under the theme “Catalyzing Intelligent Innovation” at MWC Shanghai 2025. Those innovations include AI + networks, AI applications, and AI-powered terminals. During several demonstrations, ZTE showcased its key advancements in AI phones and smart homes. Leveraging its underlying capabilities, the company is committed to providing full-stack solutions—from infrastructure to application ecosystems—for operators, enterprises, and consumers, co-creating an era of AI for all.
ZTE’s Chief Development Officer Cui Li outlined the vendor’s roadmap for building intelligent infrastructure and accelerating artificial intelligence (AI) adoption across industries during a keynote session at MWC Shanghai 2025. During her speech, Cui highlighted the growing influence of large AI models and the critical role of foundational infrastructure. “No matter how AI technology evolves in the future, the focus will remain on efficient infrastructure, optimized algorithms and practical applications,” she said. The Chinese vendor is deploying modular, prefabricated data center units and AI-based power management, which she said reduce energy use and cooling loads by more than 10%. These developments are aimed at delivering flexible, sustainable capacity to meet growing AI demands, the ZTE executive said.
ZTE is also advancing “AI-native” networks that shift from traditional architectures to heterogeneous computing platforms, with embedded AI capabilities. This, Cui said, marks a shift from AI as a support tool to autonomous agents shaping operations. Ms. Cui emphasized the role of high-quality, secure data and efficient algorithms in building more capable AI. “Data is like fertile ‘soil’. Its volume, purity and security decide how well AI as a plant can grow,” she said. “Every digital application — including AI — depends on efficient and green infrastructure,” she said.
ZTE is heavily investing in AI-native network architecture and high-efficiency computing:
- AI-native networks – ZTE is redesigning telecom infrastructure with embedded intelligence, modular data centers and AI-driven energy systems to meet escalating AI compute demands.
- Smarter models, better data – With advanced training methods and tools, ZTE is pushing the boundaries of model accuracy and real-world performance.
- Edge-to-core deployment – ZTE is integrating AI across consumer, home and industry use cases, delivering over 100 applied solutions across 18 verticals under its “AI for All” strategy.
ZTE has rolled out a full range of innovative solutions for network intelligence upgrades.
- AIR RAN solution: deeply integrating AI to fully improve energy efficiency, maintenance efficiency, and user experience, driving 5G’s transition toward value creation
- AIR Net solution: a high-level autonomous network solution that encompasses three engines to advance network operations toward “Agentic Operations”
- AI-optical campus solution: addressing network pain points in various scenarios for higher operational efficiency in cities
- HI-NET solution: a high-performance and highly intelligent transport network solution enabling “terminal-edge-network-computing” synergy with multiple innovations, including the industry’s first integrated sensing-communication-computing CPE, full-band OTNs, the highest-density 800G intelligent switches, and what ZTE calls the world’s leading AI-native routers
Through technological innovations in wireless and wired networks, ZTE is building an energy-efficient, wide-coverage, and intelligent network infrastructure that meets current business needs and lays the groundwork for future AI-driven applications, positioning operators as first movers in digital transformation.
In the home terminal market, ZTE AI Home establishes a family-centric vDC (virtual data center) and employs Mixture-of-Experts (MoE)-based AI agents to deliver personalized services for each household member. Supported by an AI network, home-based computing power, AI screens, and AI companion robots, ZTE AI Home ensures a seamless and engaging experience, providing 24/7 all-around, warm-hearted care for every family member. The product highlights include:
- AI FTTR: serving as a thoughtful life assistant, it is equipped with a household knowledge base to proactively understand and optimize daily routines for every family member.
- AI Wi-Fi 7: featuring the industry’s first omnidirectional antenna and smart roaming solution, it ensures high-speed and stable connectivity.
- Smart display: acting like an exclusive personal trainer, it leverages precise semantic parsing technology to tailor personalized services for users.
- AI flexible screen & cloud PC: multi-screen interactions cater to diverse needs for home entertainment and mobile office, creating a new paradigm for smart homes.
- AI companion robot: backed by smart emotion recognition and bionic interaction systems, the robot safeguards children’s healthy growth with emotionally intelligent connections.
ZTE will anchor its product strategy on “Connectivity + Computing.” Collaborating with industry partners, the company is committed to driving industrial transformation, and achieving computing and AI for all, thereby contributing to a smarter, more connected world.
References:
ZTE reports H1-2024 revenue of RMB 62.49 billion (+2.9% YoY) and net profit of RMB 5.73 billion (+4.8% YoY)
ZTE reports higher earnings & revenue in 1Q-2024; wins 2023 climate leadership award
Malaysia’s U Mobile signs MoUs with Huawei and ZTE for 5G network rollout
China Mobile & ZTE use digital twin technology with 5G-Advanced on high-speed railway in China
Dell’Oro: RAN revenue growth in 1Q2025; AI RAN is a conundrum
Dell’Oro: RAN market still declining with Huawei, Ericsson, Nokia, ZTE and Samsung top vendors
Dell’Oro: Global RAN Market to Drop 21% between 2021 and 2029
Nile launches a Generative AI engine (NXI) to proactively detect and resolve enterprise network issues
Nile is a private, venture-funded technology company specializing in AI-driven network and security infrastructure services for enterprises and government organizations. Nile has pioneered the use of AI and machine learning in enterprise networking. Its latest generative AI capability, Nile Experience Intelligence (NXI), proactively resolves network issues before they impact users or IT teams, automating fault detection, root cause analysis, and remediation at scale. This approach reduces manual intervention, eliminates alert fatigue, and ensures high performance and uptime by autonomously managing networks.
Significant Innovations Include:
- Automated site surveys and network design using AI and machine learning
- Digital twins for simulating and optimizing network operations
- Edge-to-cloud zero-trust security built into all service components
- Closed-loop automation for continuous optimization without human intervention
Today, the company announced the launch of Nile Experience Intelligence (NXI), a novel generative AI capability designed to proactively resolve network issues before they impact IT teams, users, IoT devices, or the performance standards defined by Nile’s Network-as-a-Service (NaaS) guarantee. As a core component of the Nile Access Service [1], NXI uniquely enables Nile to take advantage of its comprehensive, built-in AI automation capabilities. NXI allows Nile to autonomously monitor every customer deployment at scale, identifying performance anomalies and network degradations that impact reliability and user experience. While others market their offerings as NaaS, only the Nile Access Service with NXI delivers a financially backed performance guarantee, an unmatched industry standard.
………………………………………………………………………………………………………………………………………………………………
Note 1. Nile Access Service is a campus Network-as-a-Service (NaaS) platform that delivers both wired and wireless LAN connectivity with integrated Zero Trust Networking (ZTN), automated lifecycle management, and a unique industry-first performance guarantee. The service is built on a vertically integrated stack of hardware, software, and cloud-based management, leveraging continuous monitoring, analytics, and AI-powered automation to simplify deployment, automate maintenance, and optimize network performance.
………………………………………………………………………………………………………………………………………………………………………………………………….
“Traditional networking and NaaS offerings based on service packs rely on IT organizations to write rules that are static and reactive, which requires continuous management. Nile and NXI flipped that approach by using generative AI to anticipate and resolve issues across our entire install base, before users or IT teams are even aware of them,” said Suresh Katukam, Chief Product Officer at Nile. “With NXI, instead of providing recommendations and asking customers to write rules that involve manual interaction—we’re enabling autonomous operations that provide a superior and uninterrupted user experience.”
Key capabilities include:
- Proactive Fault Detection and Root Cause Analysis: predictive modeling-based data analysis of billions of daily events, enabling proactive insights across Nile’s entire customer install base.
- Large Scale Automated Remediation: leveraging the power of generative AI and large language models (LLMs), NXI automatically validates and implements resolutions without manual intervention, virtually eliminating customer-generated trouble tickets.
- Eliminate Alert Fatigue: NXI eliminates alert overload by shifting focus from notifications to autonomous, actionable resolution, ensuring performance and uptime without IT intervention.
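Nile has not published NXI’s internals, but the closed-loop pattern described in these capabilities (detect an anomaly, diagnose it, remediate it automatically instead of raising a ticket) can be illustrated with a minimal sketch. Everything below (the statistical detector, the 2.5-sigma threshold, and the remediation action) is a hypothetical stand-in, not Nile’s implementation:

```python
from statistics import mean, stdev

def detect_anomalies(latency_samples_ms, threshold_sigma=2.5):
    """Flag samples that deviate from the baseline (toy detector; NXI's real
    predictive models over billions of events are far more sophisticated)."""
    mu, sigma = mean(latency_samples_ms), stdev(latency_samples_ms)
    return [i for i, v in enumerate(latency_samples_ms)
            if sigma > 0 and abs(v - mu) > threshold_sigma * sigma]

def remediate(site, anomaly_indices, action_log):
    """Closed loop: apply a fix and record it; no alert reaches the IT team."""
    if anomaly_indices:
        action_log.append(
            f"{site}: auto-remediated {len(anomaly_indices)} anomaly event(s)")
    return action_log

# Simulated telemetry for one customer site: a latency spike at index 5.
samples = [10.1, 10.3, 9.9, 10.0, 10.2, 55.0, 10.1, 9.8, 10.0, 10.2]
log = remediate("site-042", detect_anomalies(samples), [])
```

The point of the pattern is the last line: the output is an action log, not a queue of alerts for a human to triage.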
Unlike rules-based systems dependent on human-configured logic and manual maintenance, NXI is:
- Generative AI and self-learning powered, eliminating the need for static, manually created rules that are prone to human error and require ongoing maintenance.
- Designed for scale, NXI already processes terabytes of data daily and effortlessly scales to manage thousands of networks simultaneously.
- Built on Nile’s standardized architecture, enabling consistent AI-driven optimization across all customer networks at scale.
- Closed-loop automated, no dashboards or recommended actions for customers to interpret, and no waiting on manual intervention.
Katukam added, “NXI is a game-changer for Nile. It enables us to stay ahead of user experience and continuously fine-tune the network to meet evolving needs. This is what true autonomous networking looks like—proactive, intelligent, and performance-guaranteed.”
From improved connectivity to consistent performance, Nile customers are already seeing the impact of NXI. For more information about NXI and Nile’s secure Network as a Service platform, visit www.nilesecure.com.
About Nile:
Nile is leading a fundamental shift in the networking industry, challenging decades-old conventions to deliver a radically new approach. By eliminating complexity and rethinking how networks are built, consumed, and operated, Nile is pioneering a new category designed for a modern, service-driven era. With a relentless focus on simplicity, security, reliability, and performance, Nile empowers organizations to move beyond the limitations of legacy infrastructure and embrace a future where networking is effortless, predictable, and fully aligned with their digital ambitions.
Nile is recognized as a disruptor in the enterprise networking market, offering a modern alternative to traditional vendors like Cisco and HPE. Its model enables organizations to reduce total cost of ownership by more than 60% and reclaim IT resources while providing superior connectivity. Major customers include Stanford University, Pitney Bowes, and Carta.
The company has received several industry accolades, including the CRN Tech Innovators Award (2024) and recognition in Gartner’s Peer Insights Voice of the Customer report. Nile has raised over $300 million in funding, including a significant $175 million Series C round in 2023 to fuel expansion.
References:
https://nilesecure.com/company/about-us
Does AI change the business case for cloud networking?
Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections
Qualcomm to acquire Alphawave Semi for $2.4 billion; says its high-speed wired tech will accelerate AI data center expansion
AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics
McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector
A new report from McKinsey & Company offers a wide range of options for telecom network operators looking to enter the market for AI services. One high-level conclusion is that strategy inertia and decision paralysis might be the most dangerous threats. That is largely based on telcos’ failure to monetize past emerging technologies such as smartphones and mobile apps, cloud networking, and 5G SA (the true 5G). For example, global mobile data traffic rose 60% per year from 2010 to 2023, while global telecom industry revenues rose just 1% over the same period.
“Operators could provide the backbone for today’s AI economy to reignite growth. But success will hinge on effectively navigating complex market dynamics, uncertain demand, and rising competition….Not every path will suit every telco; some may be too risky for certain operators right now. However, the most significant risk may come from inaction, as telcos face the possibility of missing out on their fair share of growth from this latest technological disruption.”
McKinsey predicts that global data center demand could rise as high as 298 gigawatts by 2030, from just 55 gigawatts in 2023. Fiber connections to AI-infused data centers could generate up to $50 billion globally in sales for facilities-based fiber carriers.
Pathways to growth – exploring four strategic options:
- Connecting new data centers with fiber
- Enabling high-performance cloud access with intelligent network services
- Turning unused space and power into revenue
- Building a new GPU-as-a-Service business
“Our research suggests that the addressable GPUaaS [GPU-as-a-service] market addressed by telcos could range from $35 billion to $70 billion by 2030 globally.” Verizon’s AI Connect service (described below), Indosat Ooredoo Hutchison (IOH), Singtel, and SoftBank in Asia have launched their own GPUaaS offerings.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Recent AI developments in the telecom sector include:
- The AI-RAN Alliance, which promises to allow wireless network operators to add AI to their radio access networks (RANs) and then sell AI computing capabilities to enterprises and other customers at the network edge. Nvidia is leading this industrial initiative. Telecom operators in the alliance include T-Mobile and SoftBank, as well as Boost Mobile, Globe, Indosat Ooredoo Hutchison, Korea Telecom, LG UPlus, SK Telecom and Turkcell.
- Verizon’s new AI Connect product, which includes Vultr’s GPU-as-a-service (GPUaaS) offering. GPU-as-a-service is a cloud computing model that allows businesses to rent access to powerful graphics processing units (GPUs) for AI and machine learning workloads without having to purchase and maintain that expensive hardware themselves. Verizon also has agreements with Google Cloud and Meta to provide network infrastructure for their AI workloads, demonstrating a focus on supporting the broader AI economy.
- Orange views AI as a critical growth driver. They are developing “AI factories” (data centers optimized for AI workloads) and providing an “AI platform layer” called Live Intelligence to help enterprises build generative AI systems. They also offer a generative AI assistant for contact centers in partnership with Microsoft.
- Lumen Technologies continues to build fiber connections intended to carry AI traffic.
- British Telecom (BT) has launched intelligent network services and is working with partners like Fortinet to integrate AI for enhanced security and network management.
- Telus (Canada) has built its own AI platform called “Fuel iX” to boost employee productivity and generate new revenue. They are also commercializing Fuel iX and building sovereign AI infrastructure.
- Telefónica: Their “Next Best Action AI Brain” uses an in-house Kernel platform to revolutionize customer interactions with precise, contextually relevant recommendations.
- Bharti Airtel (India): Launched India’s first anti-spam network, an AI-powered system that processes billions of calls and messages daily to identify and block spammers.
- e& (formerly Etisalat in UAE): Has launched the “Autonomous Store Experience (EASE),” which uses smart gates, AI-powered cameras, robotics, and smart shelves for a frictionless shopping experience.
- SK Telecom (Korea): Unveiled a strategy to implement an “AI Infrastructure Superhighway” and is actively involved in AI-RAN (AI in Radio Access Networks) development, including their AITRAS solution.
- Vodafone: Sees AI as a transformative force, with initiatives in network optimization, customer experience (e.g., their TOBi chatbot handling over 45 million interactions per month), and even supporting neurodiverse staff.
- Deutsche Telekom: Deploys AI across various facets of its operations
……………………………………………………………………………………………………………………………………………………………………..
A recent report from DCD indicates that new AI models capable of reasoning may require massive, expensive data centers that could be out of reach for even the largest telecom operators. Over optical data center interconnects, data centers are already communicating with each other for multi-cluster training runs. “What we see is that, in the largest data centers in the world, there’s actually a data center and another data center and another data center,” one source quoted in the report says. “Then the interesting discussion becomes – do I need 100 meters? Do I need 500 meters? Do I need a kilometer interconnect between data centers?”
……………………………………………………………………………………………………………………………………………………………………..
References:
https://www.datacenterdynamics.com/en/analysis/nvidias-networking-vision-for-training-and-inference/
https://opentools.ai/news/inaction-on-ai-a-critical-misstep-for-telecos-says-mckinsey
Bain & Co, McKinsey & Co, AWS suggest how telcos can use and adapt Generative AI
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Telecom and AI Status in the EU
Major technology companies form AI-Enabled Information and Communication Technology (ICT) Workforce Consortium
AI RAN Alliance selects Alex Choi as Chairman
AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions
AI sparks huge increase in U.S. energy consumption and is straining the power grid; transmission/distribution as a major problem
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
NEC’s new AI technology for robotics & RAN optimization designed to improve performance
MTN Consulting: Generative AI hype grips telecom industry; telco CAPEX decreases while vendor revenue plummets
Amdocs and NVIDIA to Accelerate Adoption of Generative AI for $1.7 Trillion Telecom Industry
SK Telecom and Deutsche Telekom to Jointly Develop Telco-specific Large Language Models (LLMs)
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
Damage of U.S. Export Controls and Trade War with China:
The U.S. big tech sector especially needs to know what the rules of the trade game will be going forward, rather than facing the on-again/off-again Trump tariffs and trade war with China, which has included 145% tariffs and export controls on AI chips from Nvidia, AMD, and other U.S. semiconductor companies.
The latest export restrictions on Nvidia’s H20 AI chips are a case in point. Nvidia said it would record a $5.5 billion charge on its quarterly earnings after disclosing that the U.S. will now require a license for exporting the company’s H20 processors to China and other countries. The U.S. government told the chip maker on April 14th that the new license requirement would be in place “indefinitely.”
Nvidia designed the H20 chip to comply with existing U.S. export controls that limit sales of advanced AI processors to Chinese customers. That meant the chip’s capabilities were significantly degraded; Morgan Stanley analyst Joe Moore estimates the H20’s performance is about 75% below that of Nvidia’s H100 family. The Commerce Department said it was issuing new export-licensing requirements covering H20 chips and AMD’s MI308 AI processors.
Big Chinese cloud companies like Tencent, ByteDance (TikTok’s parent), Alibaba, Baidu, and iFlytek have been left scrambling for domestic alternatives to the H20, the primary AI chip that Nvidia had until recently been allowed to sell freely into the Chinese market. Some analysts suggest that bulk H20 orders to build a stockpile were a response to concerns about future U.S. export restrictions and a race to secure limited supplies of Nvidia chips. The estimate is that there is a 90-day supply of H20 chips, but it is uncertain what China’s big tech companies will use when that runs out.
The inability to sell even a low-performance chip into the Chinese market shows how the trade war will hurt Nvidia’s business. The AI chip king is now caught between the world’s two superpowers as they jockey to take the lead in AI development.
Nvidia CEO Jensen Huang “flew to China to do damage control and make sure China/Xi knows Nvidia wants/needs China to maintain its global ironclad grip on the AI Revolution,” the analysts note. The markets and tech world are tired of “deal progress” talks from the White House and want deals starting to be inked so they can plan their future strategy. The analysts think this is a critical week ahead to get some trade deals on the board, because Wall Street has stopped caring about words and comments around “deal progress.”
- Baidu is developing its own AI chips called Kunlun. It recently placed an order for 1,600 of Huawei’s Ascend 910B AI chips for 200 servers. This order was made in anticipation of further U.S. export restrictions on AI chips.
- Alibaba (T-Head) has developed AI chips like the Hanguang 800 inference chip, used to accelerate its e-commerce platform and other services.
- Cambricon Technologies: Designs various types of semiconductors, including those for training AI models and running AI applications on devices.
- Biren Technology: Designs general-purpose GPUs and software development platforms for AI training and inference, with products like the BR100 series.
- Moore Threads: Develops GPUs designed for training large AI models, with data center products like the MTT KUAE.
- Horizon Robotics: Focuses on AI chips for smart driving, including the Sunrise and Journey series, collaborating with automotive companies.
- Enflame Technology: Designs chips for data centers, specializing in AI training and inference.
“With Nvidia’s H20 and other advanced GPUs restricted, domestic alternatives like Huawei’s Ascend series are gaining traction,” said Doug O’Laughlin, an industry analyst at independent semiconductor research company SemiAnalysis. “While there are still gaps in software maturity and overall ecosystem readiness, hardware performance is closing in fast,” O’Laughlin added. According to the SemiAnalysis report, Huawei’s Ascend chip shows how China’s export controls have failed to stop firms like Huawei from accessing critical foreign tools and sub-components needed for advanced GPUs. “While Huawei’s Ascend chip can be fabricated at SMIC, this is a global chip that has HBM from Korea, primary wafer production from TSMC, and is fabricated by 10s of billions of wafer fabrication equipment from the US, Netherlands, and Japan,” the report stated.
Huawei’s New AI Chip May Dominate in China:
Huawei Technologies plans to begin mass shipments of its advanced 910C artificial intelligence chip to Chinese customers as early as next month, according to Reuters. Some shipments have already been made, people familiar with the matter said. Huawei’s 910C, a graphics processing unit (GPU), represents an architectural evolution rather than a technological breakthrough, according to one of the two people and a third source familiar with its design. It achieves performance comparable to Nvidia’s H100 chip by combining two 910B processors into a single package through advanced integration techniques, they said. That means it has double the computing power and memory capacity of the 910B and it also has incremental improvements, including enhanced support for diverse AI workload data.
Conclusions:
The U.S. Commerce Department’s latest export curbs on Nvidia’s H20 “will mean that Huawei’s Ascend 910C GPU will now become the hardware of choice for (Chinese) AI model developers and for deploying inference capacity,” said Paul Triolo, a partner at consulting firm Albright Stonebridge Group.
The markets, tech world and the global economy urgently need U.S. – China trade negotiations in some form to start as soon as possible, Wedbush analysts say in a research note today. The analysts expect minimal or no guidance from tech companies during this earnings season as they are “playing darts blindfolded.”
References:
https://qz.com/china-six-tigers-ai-startup-zhipu-moonshot-minimax-01ai-1851768509#
https://www.huaweicloud.com/intl/en-us/
Goldman Sachs: Big 3 China telecom operators are the biggest beneficiaries of China’s AI boom via DeepSeek models; China Mobile’s ‘AI+NETWORK’ strategy
Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
FT: Nvidia invested $1bn in AI start-ups in 2024
Omdia: Huawei increases global RAN market share due to China hegemony
Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era
Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA
Nvidia’s annual AI developers conference (GTC) used to be a relatively modest affair, drawing about 9,000 people in its last year before the Covid outbreak. But the event now unofficially dubbed “AI Woodstock” is expected to bring more than 25,000 in-person attendees!
Nvidia’s Blackwell AI chips, the main showcase of last year’s GTC (GPU Technology Conference), have only recently started shipping in high volume following delays related to the mass production of their complicated design. Blackwell is expected to be the main anchor of Nvidia’s AI business through next year. Analysts expect Nvidia Chief Executive Jensen Huang to showcase a revved-up version of that family called Blackwell Ultra at his keynote address on Tuesday.
March 18th Update: The next Blackwell Ultra NVL72 chips, which have one-and-a-half times more memory and two times more bandwidth than their Blackwell predecessors, will be used to accelerate building AI agents, physical AI, and reasoning models, Huang said. Blackwell Ultra will be available in the second half of this year. The Rubin AI chip is expected to launch in late 2026, with Rubin Ultra to follow in 2027.
Nvidia watchers are especially eager to hear more about the next generation of AI chips called Rubin, which Nvidia has only teased in prior events. Ross Seymore of Deutsche Bank expects the Rubin family to show “very impressive performance improvements” over Blackwell. Atif Malik of Citigroup notes that Blackwell provided 30 times faster performance than the company’s previous generation on AI inferencing, which is when trained AI models generate output. “We don’t rule out Rubin seeing similar improvement,” Malik wrote in a note to clients this month.
Rubin products aren’t expected to start shipping until next year. But much is already expected of the lineup; analysts forecast Nvidia’s data-center business will hit about $237 billion in revenue for the fiscal year ending in January of 2027, more than double its current size. The same segment is expected to eclipse $300 billion in annual revenue two years later, according to consensus estimates from Visible Alpha. That would imply an average annual growth rate of 30% over the next four years, for a business that has already exploded more than sevenfold over the last two.
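As a quick sanity check on those consensus figures, the implied growth rate can be worked out from the numbers cited above. The current-size base is not stated directly, so it is inferred here from the “more than double” remark (an assumption, purely illustrative):

```python
# Consensus figures cited above: ~$237B data-center revenue in FY2027
# ("more than double its current size") and >$300B two years later (FY2029).
current = 237 / 2.05        # inferred current annual run rate in $B (assumption)
target, years = 300, 4      # >$300B roughly four fiscal years out
cagr = (target / current) ** (1 / years) - 1
print(f"Implied average annual growth: {cagr:.0%}")
# ~27% under these assumptions; the cited 30% figure implies a slightly
# smaller starting base, so the two estimates are broadly consistent.
```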
Nvidia has also been haunted by worries about competition with in-house chips designed by its biggest customers like Amazon and Google. Another concern has been the efficiency breakthroughs claimed by Chinese AI startup DeepSeek, which would seemingly lessen the need for the types of AI chip clusters that Nvidia sells for top dollar.
…………………………………………………………………………………………………………………………………………………………………………………………………………………….
Telecom Sessions of Interest:
Wednesday Mar 19 | 2:00 PM – 2:40 PM
Delivering Real Business Outcomes With AI in Telecom [S73438]
In this session, executives from three leading telcos will share their unique journeys of embedding AI into their organizations. They’ll discuss how AI is driving measurable value across critical areas such as network optimization, customer experience, operational efficiency, and revenue growth. Gain insights into the challenges and lessons learned, key strategies for successful AI implementation, and the transformative potential of AI in addressing evolving industry demands.
Thursday Mar 20 | 11:00 AM – 11:40 AM PDT
AI-RAN in Action [S72987]
Thursday Mar 20 | 9:00 AM – 9:40 AM PDT
How Indonesia Delivered a Telco-led Sovereign AI Platform for 270M Users [S73440]
Thursday Mar 20 | 3:00 PM – 3:40 PM PDT
Driving 6G Development With Advanced Simulation Tools [S72994]
Thursday Mar 20 | 4:00 PM – 4:40 PM PDT
Pushing Spectral Efficiency Limits on CUDA-accelerated 5G/6G RAN [S72990]
Thursday Mar 20 | 4:00 PM – 4:40 PM PDT
Enable AI-Native Networking for Telcos with Kubernetes [S72993]
Monday Mar 17 | 3:00 PM – 4:45 PM PDT
Automate 5G Network Configurations With NVIDIA AI LLM Agents and Kinetica Accelerated Database [DLIT72350]
Learn how to create AI agents using LangGraph and NVIDIA NIM to automate 5G network configurations. You’ll deploy LLM agents to monitor real-time network quality of service (QoS) and dynamically respond to congestion by creating new network slices. LLM agents will process logs to detect when QoS falls below a threshold, then automatically trigger a new slice for the affected user equipment. Using graph-based models, the agents understand the network configuration, identifying impacted elements. This ensures efficient, AI-driven adjustments that consider the overall network architecture.
We’ll use the Open Air Interface 5G lab to simulate the 5G network, demonstrating how AI can be integrated into real-world telecom environments. You’ll also gain practical knowledge on using Python with LangGraph and NVIDIA AI endpoints to develop and deploy LLM agents that automate complex network tasks.
Prerequisite: Python programming.
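The workshop itself uses LangGraph, NVIDIA NIM, and an Open Air Interface 5G lab; as a dependency-free sketch of the same control loop (monitor QoS logs, detect degradation, trigger a new slice), the agent logic boils down to something like the following. The log format, threshold, and slice-creation call are all hypothetical stand-ins for the lab environment:

```python
QOS_THRESHOLD_MBPS = 50.0   # hypothetical minimum acceptable throughput

def parse_qos_logs(log_lines):
    """Extract (ue_id, throughput_mbps) pairs from simplified log lines."""
    readings = []
    for line in log_lines:
        ue_id, mbps = line.split()
        readings.append((ue_id, float(mbps)))
    return readings

def plan_slices(readings, threshold=QOS_THRESHOLD_MBPS):
    """Decide, as the LLM agent would, which UEs need a new network slice."""
    return [ue for ue, mbps in readings if mbps < threshold]

def create_slice(ue_id):
    """Stand-in for the real slice-creation call against the 5G core."""
    return {"ue": ue_id, "slice": f"slice-{ue_id}", "status": "active"}

# Simulated QoS telemetry: ue-002 has dropped below the threshold.
logs = ["ue-001 92.4", "ue-002 31.7", "ue-003 55.0"]
actions = [create_slice(ue) for ue in plan_slices(parse_qos_logs(logs))]
```

In the actual workshop, the `plan_slices` decision is made by an LLM agent reasoning over a graph model of the network rather than a fixed threshold, but the detect-decide-act loop is the same.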
………………………………………………………………………………………………………………………………………………………………………………………………………..
References:
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
The case for and against AI-RAN technology using Nvidia or AMD GPUs
FT: Nvidia invested $1bn in AI start-ups in 2024
Quartet launches “Open Telecom AI Platform” with multiple AI layers and domains
At Mobile World Congress 2025, Jio Platforms (JPL), AMD, Cisco, and Nokia announced the Open Telecom AI Platform, a new project designed to pioneer the use of AI across all network domains. It aims to provide a centralized intelligence layer that can integrate AI and automation into every layer of network operations.
The AI platform will be large language model (LLM) agnostic and use open APIs to optimize functionality and capabilities. By collectively harnessing agentic AI and using LLMs, domain-specific SLMs and machine learning techniques, the Telecom AI Platform is intended to enable end-to-end intelligence for network management and operations. The founding quartet of companies said that by combining shared elements, the platform provides improvements across network security and efficiency alongside a reduction in total cost of ownership. The companies each bring their specific expertise to the consortium across domains including RAN, routing, AI compute and security.
Jio Platforms will be the initial customer and the platform’s first adopter.
“Think about this platform as multi-layer, multi-domain. Each of these domains, or each of these layers, will have their own agentic AI capability. By harnessing agentic AI across all telco layers, we are building a multimodal, multidomain orchestrated workflow platform that redefines efficiency, intelligence, and security for the telecom industry,” said Mathew Oommen, group CEO, Reliance Jio.
“In collaboration with AMD, Cisco, and Nokia, Jio is advancing the Open Telecom AI Platform to transform networks into self-optimising, customer-aware ecosystems. This initiative goes beyond automation – it’s about enabling AI-driven, autonomous networks that adapt in real time, enhance user experiences, and create new service and revenue opportunities across the digital ecosystem,” he added.
On top of Jio Platforms’ agentic AI workflow manager sits an AI orchestrator that will work with whatever LLM is deemed best. “Whichever LLM is the right LLM, this orchestrator will leverage it through an API framework,” Oommen explained. He said that Jio Platforms could have its first product set sometime this year.
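The orchestration idea Oommen describes, a model-agnostic layer that routes work to whichever LLM or domain SLM fits the task via an API framework, can be sketched in a few lines of Python. Every class and backend name here is a hypothetical illustration, not the platform’s actual interface:

```python
# Minimal sketch of an LLM-agnostic orchestrator: model backends register
# behind a common API, and the orchestrator routes each task to whichever
# backend is deemed right for that domain. All names are hypothetical.
class Orchestrator:
    def __init__(self):
        self.backends = {}

    def register(self, name, handler):
        """Expose a model backend (general LLM, domain SLM, ...) under a common API."""
        self.backends[name] = handler

    def dispatch(self, task, backend_name):
        """Route a task to the selected backend through the shared interface."""
        return self.backends[backend_name](task)

orch = Orchestrator()
orch.register("general-llm", lambda task: f"general-llm handled: {task}")
orch.register("ran-slm", lambda task: f"ran-slm handled: {task}")
print(orch.dispatch("diagnose cell congestion", "ran-slm"))
```

The point of the indirection is that swapping in a different “right LLM” later only changes what is registered, not the callers.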
Under the terms of the agreement, AMD will provide high-performance computing solutions, including EPYC CPUs, Instinct GPUs, DPUs, and adaptive computing technologies. Cisco will contribute networking, security, and AI analytics solutions, including Cisco Agile Services Networking, AI Defense, Splunk Analytics, and Data Center Networking. Nokia will bring expertise in wireless and fixed broadband, core networks, IP, and optical transport. Finally, Jio Platforms Limited (JPL) will be the platform’s lead organizer and first adopter, providing the initial deployment and reference model for global telecom operators.
The consortium intends to share the Telecom AI Platform’s results with network operators beyond Jio.
“We don’t want to take a few years to create something. I will tell you a little secret, and the secret is Reliance Jio has decided to look at markets outside of India. As part of this, we will not only leverage it for Jio, we will figure out how to democratize this platform for the rest of the world. Because unlike a physical box, this is going to be a lot of virtual functions and capabilities.”
AMD represents a lower-cost alternative to Intel and Nvidia in central processing units (CPUs) and graphics processing units (GPUs), respectively. For AMD, a place in a potentially successful telco platform would be a major win. Intel, its arch-rival in CPUs, has a substantial lead in telecom projects (e.g., cloud RAN and Open RAN), having invested heavily in 5G and other telecom technologies.
AMD’s participation suggests that this JPL-led group is looking for hardware that can handle AI workloads at a much lower cost than NVIDIA GPUs.
“AMD is proud to collaborate with Jio Platforms Limited, Cisco, and Nokia to power the next generation of AI-driven telecom infrastructure,” said Lisa Su, chair and CEO, AMD. “By leveraging our broad portfolio of high-performance CPUs, GPUs, and adaptive computing solutions, service providers will be able to create more secure, efficient, and scalable networks. Together we can bring the transformational benefits of AI to both operators and users and enable innovative services that will shape the future of communications and connectivity.”
Jio will surely be keeping a close eye on the cost of rolling out this reference architecture when the time comes, and optimizing it to ensure the telco AI platform is financially viable.
“Nokia possesses trusted technology leadership in multiple domains, including RAN, core, fixed broadband, IP and optical transport. We are delighted to bring this broad expertise to the table in service of today’s important announcement,” said Pekka Lundmark, President and CEO at Nokia. “The Telecom AI Platform will help Jio to optimise and monetise their network investments through enhanced performance, security, operational efficiency, automation and greatly improved customer experience, all via the immense power of artificial intelligence. I am proud that Nokia is contributing to this work.”
Cisco chairman and CEO Chuck Robbins said: “This collaboration with Jio Platforms Limited, AMD and Nokia harnesses the expertise of industry leaders to revolutionise networks with AI.
“Cisco is proud of the role we play here with integrated solutions from across our stack including Cisco Agile Services Networking, Data Center Networking, Compute, AI Defence, and Splunk Analytics. We look forward to seeing how the Telecom AI Platform will boost efficiency, enhance security, and unlock new revenue streams for service provider customers.”
If all goes well, the Open Telecom AI Platform could offer an alternative to Nvidia’s AI infrastructure, and give telcos in lower-ARPU markets a more cost-effective means of imbuing their network operations with the power of AI.
References:
https://www.telecoms.com/ai/jio-s-new-ai-club-could-offer-a-cheaper-route-into-telco-ai
Does AI change the business case for cloud networking?
For several years now, the big cloud service providers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud – have tried to get wireless network operators to run their 5G SA core network, edge computing and various distributed applications on their cloud platforms. For example, Amazon’s AWS public cloud, Microsoft’s Azure for Operators, and Google’s Anthos for Telecom were intended to get network operators to run their core network functions in a hyperscaler cloud.
AWS had early success with Dish Network’s 5G SA core network which has all its functions running in Amazon’s cloud with fully automated network deployment and operations.
Conversely, AT&T has yet to commercially deploy its 5G SA Core network on the Microsoft Azure public cloud. Also, users on AT&T’s network have experienced difficulties accessing Microsoft 365 and Azure services. Those incidents were often traced to changes within the network’s managed environment. As a result, Microsoft has drastically reduced its early telecom ambitions.
Several pundits now say that AI will significantly strengthen the business case for cloud networking by enabling more efficient resource management, advanced predictive analytics, improved security, and automation, ultimately leading to cost savings, better performance, and faster innovation for businesses utilizing cloud infrastructure.
“AI is already a significant traffic driver, and AI traffic growth is accelerating,” wrote analyst Brian Washburn in a market research report for Omdia (owned by Informa). “As AI traffic adds to and substitutes conventional applications, conventional traffic year-over-year growth slows. Omdia forecasts that in 2026–30, global conventional (non-AI) traffic will be about 18% CAGR [compound annual growth rate].”
Omdia forecasts 2031 as “the crossover point where global AI network traffic exceeds conventional traffic.”
Markets & Markets forecasts the global cloud AI market (which includes cloud AI networking) will grow at a CAGR of 32.4% from 2024 to 2029.
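A quick way to sanity-check such forecasts is to compound the growth rate. The snippet below uses the 32.4% CAGR figure cited above; the arithmetic is illustrative only:

```python
def project(value, cagr, years):
    """Compound a starting value at a constant annual growth rate (CAGR)."""
    return value * (1 + cagr) ** years

# 32.4% CAGR over 2024-2029 (5 years) implies roughly 4x market growth
growth_factor = project(1.0, 0.324, 5)
print(round(growth_factor, 2))
```

So a market growing at that rate would roughly quadruple over the five-year forecast window.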
AI is said to enhance cloud networking in these ways:
- Optimized resource allocation: AI algorithms can analyze real-time data to dynamically adjust cloud resources like compute power and storage based on demand, minimizing unnecessary costs.
- Predictive maintenance: By analyzing network patterns, AI can identify potential issues before they occur, allowing for proactive maintenance and preventing downtime.
- Enhanced security: AI can detect and respond to cyber threats in real time through anomaly detection and behavioral analysis, improving overall network security.
- Intelligent routing: AI can optimize network traffic flow by dynamically routing data packets to the most efficient paths, improving network performance.
- Automated network management: AI can automate routine network management tasks, freeing up IT staff to focus on more strategic initiatives.
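Several of these points (predictive maintenance, enhanced security) rest on the same primitive: flagging samples that deviate sharply from normal behavior. Here is a toy z-score sketch of that primitive, not any vendor’s implementation; the sample data and threshold are invented:

```python
import statistics

def flag_anomalies(samples, z_threshold=2.0):
    """Return indices of samples far from the mean, in standard deviations."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return []  # all samples identical: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > z_threshold]

traffic = [100, 102, 98, 101, 99, 100, 480, 97]  # Mbps samples; 480 is a spike
print(flag_anomalies(traffic))
```

Production systems use far richer models (seasonality, multivariate behavior, learned baselines), but the detect-then-act shape is the same.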
The pitch is that AI will enable businesses to leverage the full potential of cloud networking by providing a more intelligent, adaptable, and cost-effective solution. Well, that remains to be seen. Google’s new global industry lead for telecom, Angelo Libertucci, told Light Reading:
“Now enter AI,” he continued. “With AI … I really have a power to do some amazing things, like enrich customer experiences, automate my network, feed the network data into my customer experience virtual agents. There’s a lot I can do with AI. It changes the business case that we’ve been running.”
“Before AI, the business case was maybe based on certain criteria. With AI, it changes the criteria. And it helps accelerate that move [to the cloud and to the edge],” he explained. “So, I think that work is ongoing, and with AI it’ll actually be accelerated. But we still have work to do with both the carriers and, especially, the network equipment manufacturers.”
Google Cloud last week announced several new AI-focused agreements with companies such as Amdocs, Bell Canada, Deutsche Telekom, Telus and Vodafone Italy.
As IEEE Techblog reported here last week, Deutsche Telekom is using Google Cloud’s Gemini 2.0 in Vertex AI to develop a network AI agent called RAN Guardian. That AI agent can “analyze network behavior, detect performance issues, and implement corrective actions to improve network reliability and customer experience,” according to the companies.
And, of course, there’s all the buzz over AI RAN and we plan to cover expected MWC 2025 announcements in that space next week.
https://www.lightreading.com/cloud/google-cloud-doubles-down-on-mwc
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search
Deutsche Telekom and Google Cloud partner on “RAN Guardian” AI agent
Deutsche Telekom and Google Cloud today announced a new partnership to improve Radio Access Network (RAN) operations through the development of a network AI agent. Built using Gemini 2.0 in Vertex AI from Google Cloud, the agent can analyze network behavior, detect performance issues, and implement corrective actions to improve network reliability, reduce operational costs, and enhance customer experiences.
Deutsche Telekom says that as telecom networks become increasingly complex, traditional rule-based automation falls short in addressing real-time challenges. The solution is to use Agentic AI which leverages large language models (LLMs) and advanced reasoning frameworks to create intelligent agents that can think, reason, act, and learn independently.
The RAN Guardian agent, which has been tested and verified at Deutsche Telekom, collaborates in a human-like manner, detecting network anomalies and executing self-healing actions to optimize RAN performance. It will be exhibited at next week’s Mobile World Congress (MWC) in Barcelona, Spain.
This cooperative initiative appears to be a first step towards building autonomous and self-healing networks.
In addition to Gemini 2.0 in Vertex AI, the RAN Guardian also uses Cloud Run, BigQuery, and Firestore to help deliver:
- Autonomous RAN performance monitoring: The RAN Guardian will continuously analyze key network parameters in real time to predict and detect anomalies.
- AI-driven issue classification and routing: The agent will identify and prioritize network degradations based on multiple data sources, including network monitoring data, inventory data, performance data, and coverage data.
- Proactive network optimization: The agent will also recommend or autonomously implement corrective actions, including resource reallocation and configuration adjustments.
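The three capabilities listed above form a monitor, classify, act loop. Below is a deliberately simplified, vendor-neutral sketch of that loop; the real agent uses Gemini 2.0 in Vertex AI for its reasoning, and the KPI names, thresholds, and action labels here are hypothetical:

```python
from enum import Enum

class Severity(Enum):
    OK = 0
    DEGRADED = 1
    CRITICAL = 2

def classify(kpis):
    """Rule-of-thumb stand-in for the agent's AI-driven issue classification."""
    if kpis["drop_rate"] > 0.05:          # hypothetical call-drop threshold
        return Severity.CRITICAL
    if kpis["prb_utilization"] > 0.9:     # hypothetical resource-block load limit
        return Severity.DEGRADED
    return Severity.OK

def recommend_action(severity):
    """Map severity to a corrective action the agent could recommend or take."""
    return {
        Severity.OK: "none",
        Severity.DEGRADED: "reallocate-resources",
        Severity.CRITICAL: "adjust-configuration",
    }[severity]

cell_kpis = {"drop_rate": 0.08, "prb_utilization": 0.7}
print(recommend_action(classify(cell_kpis)))
```

The interesting part of the real agent is that the `classify` step is replaced by LLM reasoning over multiple data sources (monitoring, inventory, performance, coverage) rather than fixed thresholds.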
“By combining Deutsche Telekom’s deep telecom expertise with Google Cloud’s cutting-edge AI capabilities, we’re building the next generation of intelligent networks,” said Angelo Libertucci, Global Industry Lead, Telecommunications, Google Cloud. “This means fewer disruptions, faster speeds, and an overall enhanced mobile experience for Deutsche Telekom’s customers.”
“Traditional network management approaches are no longer sufficient to meet the demands of 5G and beyond. We are pioneering AI agents for networks, working with key partners like Google Cloud to unlock a new level of intelligence and automation in RAN operations as a step towards autonomous, self-healing networks,” said Abdu Mudesir, Group CTO, Deutsche Telekom.
Mr. Mudesir and Google Cloud’s Muninder Sambi will discuss the role of AI agents in the future of network operations at MWC next week.
References:
https://www.telecoms.com/ai/deutsche-telekom-and-google-cloud-team-up-on-ai-agent-for-ran-operations
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
The case for and against AI-RAN technology using Nvidia or AMD GPUs
AI RAN Alliance selects Alex Choi as Chairman
AI sparks huge increase in U.S. energy consumption and is straining the power grid; transmission/distribution as a major problem
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
An increasing focus on deploying AI into radio access networks (RANs) was among the key findings of NVIDIA’s third annual “State of AI in Telecommunications” survey, which polled more than 450 telecom professionals worldwide; more than a third of respondents indicated they are investing or planning to invest in AI-RAN. The survey revealed continued momentum for AI adoption, including growth in generative AI use cases, and showed how the technology is helping optimize customer experiences and increase employee productivity. The percentage of network operators planning to use open source tools increased from 28% in 2023 to 40% in 2025. AvidThink founder and principal Roy Chua said one of the biggest challenges network operators will face when using open source models is vetting the outputs they get during training.
Of the telecommunications professionals surveyed, almost all stated that their company is actively deploying or assessing AI projects. Here are some top insights on impact and use cases:
- 84% said AI is helping to increase their company’s annual revenue
- 77% said AI helped reduce annual operating costs
- 60% said increased employee productivity was their biggest benefit from AI
- 44% said they’re investing in AI for customer experience optimization, which is the No. 1 area of investment for AI in telecommunications
- 40% said they’re deploying AI into their network planning and operations, including RAN
The percentage of respondents who indicated they will build AI solutions in-house rose from 27% in 2024 to 37% this year. “Telcos are really looking to do more of this work themselves,” Nvidia’s Global Head of Business Development for Telco Chris Penrose [1.] said. “They’re seeing the importance of them taking control and ownership of becoming an AI center of excellence, of doing more of the training of their own resources.”
With respect to AI inferencing, Penrose said, “We’ve got 14 publicly announced telcos that are doing this today, and we’ve got an equally big funnel.” He noted that the AI skills gap remains the biggest hurdle for operators: just because someone is an AI scientist doesn’t mean they are also a generative AI or agentic AI scientist. And to attract the right talent, operators need to demonstrate that they have the infrastructure (GPUs, data center facilities and so on) that will let top-tier employees do their best work.
Note 1. Penrose represented AT&T’s IoT business for years at various industry trade shows and events before leaving the company in 2020.
Rather than the large data centers processing AI Large Language Models (LLMs), AI inferencing could be done more quickly at smaller “edge” facilities that are closer to end users. That’s where telecom operators might step in. “Telcos are in a unique position,” Penrose told Light Reading. He explained that many countries want to ensure that their AI data and operations remain inside the boundaries of that country. Thus, telcos can be “the trusted providers of [AI] infrastructure in their nations.”
“We’ll call it AI RAN-ready infrastructure. You can make money on it today. You can use it for your own operations. You can use it to go drive some services into the market. … Ultimately your network itself becomes a key anchor workload,” Penrose said.
Source: Skorzewiak/Alamy Stock Photo
Nvidia proposes that network operators can not only run their own AI workloads on Nvidia GPUs, they can also sell those inferencing services to third parties and make a profit by doing so. “We’ve got lots of indications that many [telcos] are having success, and have not only deployed their first [AI compute] clusters, but are making reinvestments to deploy additional compute in their markets,” Penrose added.
Nvidia specifically pointed to AI inferencing announcements by Singtel, Swisscom, Telenor, Indosat and SoftBank.
Other vendors are hoping for similar sales. “I think this vision of edge computing becoming AI inferencing at the end of the network is massive for us,” HPE boss Antonio Neri said last year, in discussing HPE’s $14 billion bid for Juniper Networks.
That comes after multi-access edge computing (MEC) has not lived up to its potential, partly because MEC requires a 5G SA core network and few of those have been commercially deployed. Edge computing disillusionment is clear among hyperscalers and network operators alike. For example, Cox folded its edge computing business into its private networks operation. AT&T no longer discusses the edge computing locations it was building with Microsoft and Google. And Verizon has admitted to edge computing “miscalculations.”
Will AI inferencing be the savior for MEC? The jury is out on that topic. However, Nvidia said that 40% of its revenues already come from AI inferencing. Presumably that inferencing is happening in larger data centers and then delivered to nearby users. Meaning, a significant amount of inferencing is being done today without additional facilities, distributed at a network’s edge, that could enable speedier, low-latency AI services.
“The idea that AI inferencing is going to be all about low-latency connections, and hence stuff like AI RAN and MEC and assorted other edge computing concepts, doesn’t seem to be a really good fit with the current main direction of AI applications and models,” argued Disruptive Wireless analyst Dean Bubley in a LinkedIn post.
References:
https://blogs.nvidia.com/blog/ai-telcos-survey-2025/
State of AI in Telecommunications
https://www.fierce-network.com/premium/whitepaper/edge-computing-powered-global-ai-inference
https://www.fierce-network.com/cloud/are-ai-services-telcos-magic-revenue-bullet
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Ericsson’s sales rose for the first time in 8 quarters; mobile networks need an AI boost
AI RAN Alliance selects Alex Choi as Chairman
Markets and Markets: Global AI in Networks market worth $10.9 billion in 2024; projected to reach $46.8 billion by 2029
AI sparks huge increase in U.S. energy consumption and is straining the power grid; transmission/distribution as a major problem
Tata Consultancy Services: Critical role of Gen AI in 5G; 5G private networks and enterprise use cases
Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems
It’s no surprise to IEEE Techblog readers that Cisco’s networking business (still its biggest unit, generating nearly half its total sales) reported just under $6.9 billion in revenue for the three-month period ending in January (Cisco’s second fiscal quarter). That was down 3% compared with the same quarter the year before. For the first half of its fiscal year, networking sales dropped 14% year-over-year, to about $13.6 billion.
However, total second-quarter revenues grew 9% year-over-year, to just less than $14 billion, boosted by the Splunk (security company) acquisition in March 2024. Thanks to that deal, Cisco’s security revenues more than doubled for the first half, to about $4.1 billion. But net income fell 8%, to roughly $2.4 billion, due partly to higher costs for research and development, as well as sales and marketing expenses.
Cisco previously groused about an “inventory correction” as networking customers digested stock they had already bought, but that surely is not the case now, as customers (ISPs, telcos, and enterprise and government end users) have worked off that inventory. Cisco CFO Scott Herren now says: “Is the demand that we’re seeing today a function of extended lead times like we saw a couple of years ago? That’s not the case. Our lead times are not extending.”
Currently, Cisco firmly believes that Ethernet connectivity sales to owners of AI data centers is an “emerging opportunity.” That refers to Cisco’s data center switching solutions for “web-scale” and enterprise customer intra-data center communications. The company’s AI strategy is described here.
Image Courtesy of Cisco Systems
………………………………………………………………………………………………………………………………………
AI investments “will lead to our networking equipment being combined with Nvidia GPUs, and that’s how we’ll accomplish that in the future,” CEO Chuck Robbins told industry analysts on a call to discuss second-quarter results, according to a Motley Fool transcript. “There’s so much change going on right now from a technology perspective that there’s both excitement about the opportunity, and candidly, there’s a little bit of fear of slowing down too much and letting your competition get too much ahead of you. So, we saw solid demand,” he said.
However, Cisco will face mighty competition in that space.
- Nokia is targeting the same opportunity and last month said it would spend an additional €100 million (US$104 million) on its Internet Protocol unit annually with the goal of generating another €1 billion ($1.04 billion) in data center revenues by 2028.
- Arista Networks is another rival in this market, selling high performance Ethernet switches to cloud service providers like Microsoft.
- Nvidia, whose $7 billion acquisition of Mellanox in 2019 gave it effective control of InfiniBand, an alternative to Ethernet that had represented the main option for connecting GPU clusters when analysts published research on the topic in August 2023. Just as important, the Mellanox division of Nvidia also is a leader in Ethernet connectivity within data centers as described in this IEEE Techblog post.
- Juniper Networks (being acquired by HPE) is also focusing on networking the AI data center, as per a white paper you can download after filling out this form.
During the Q & A, Robbins elaborated: “On the $700 million in AI orders, it’s a combination of systems, silicon, optics, and optical systems. And I think if you break it down, it’s about half is in silicon and systems. And it continues to accelerate. And I’d say the teams have done a great job on the silicon front. We’ve invested heavily in more resources there. The team is running parallel development efforts for multiple chips that are staggered in their time frames. They’ve worked hard. They were increasing the yield, which is a positive thing. And so, we feel good about it, but it’s a combination of all those things that we’re selling to the customers.”
…………………………………………………………………………………………………………………………………………………………………………………………
Enterprise AI:
“What we’re seeing on the enterprise side relative to AI is it’s still — customers are still in the very early days, and they all realize they need to figure out exactly what their use cases are. We’re starting to see some spending though on specific AI-driven infrastructure. And we think as we get AI pods out there — we got Hyperfabric coming. We got AI defense coming.
We have Hypershield in the market. And we got this new DPU switch, they are all going to be a part of the infrastructure to support these AI applications. So, we’re beginning to see it happen, but I think it’s also really important to understand that as the enterprises leverage their private data, their proprietary data, and they’ll do some training on that and then they’ll run inference obviously against that. We believe that opportunity is an order of magnitude higher than what we’ve seen in training today. We’re going to continue to innovate and build capabilities to put ourselves in a better position to be a real beneficiary as this continues to accelerate. But as of today, we feel like we’re in pretty good shape.”
“If you look at AI defense with the AI Summit that we did recently, there’s — I think there’s about 20-some-odd customers who are interested in going to proof of concept with us right now on it. We had almost half the Fortune 100 there for that event. So, I feel good about where we are. It will turn into greater demand as we just continue to scale these products.”
Telco use of AI Edge Applications:
“We see some of the European network operators are looking at delivering AI as a service,” said Robbins. “We see a lot of them planning for AI edge applications that are sitting at the edge of their networks that they’re managing for customers.”
………………………………………………………………………………………………………………………………………
Cisco raised its guidance and now expects revenues for the full year of between $56 billion and $56.5 billion, up from its earlier range of $55.3 billion to $56.3 billion.
………………………………………………………………………………………………………………………………………
References:
https://www.cisco.com/site/uk/en/solutions/artificial-intelligence/index.html
https://www.juniper.net/content/dam/www/assets/white-papers/us/en/networking-the-ai-data-center.pdf
Nokia selects Intel’s Justin Hotard as new CEO to increase growth in IP networking and data center connections
Initiatives and Analysis: Nokia focuses on data centers as its top growth market
Nvidia enters Data Center Ethernet market with its Spectrum-X networking platform