ITU-R M.[IMT-2030.EVAL] & ITU-R M.[IMT-2030.SUBMISSION] reports: Evaluation & Submission Guidelines for 6G RIT/SRITs

Backgrounder:

As stated for years in IEEE Techblog posts, ITU-R Working Party 5D (WP 5D) is responsible for all International Mobile Telecommunications (IMT) terrestrial radio interface technology (RIT/SRIT) reports and standards, e.g. 3G, 4G, 5G (IMT 2020) and 6G (IMT 2030).

5D developed the minimum technical performance requirements and the evaluation criteria for IMT 2020 (5G) and will now do the same for IMT 2030 (6G), along with the other reports and standards described in this article.

While any ITU member can propose IMT 2030 RIT/SRIT candidate standards, they are expected to come principally from 3GPP, which contributes its specs to 5D via ATIS.

Standards for the non-radio aspects of 5G (e.g. core network, security, network slicing, etc) and 6G were supposed to be promulgated by ITU-T, but 3GPP (which develops those specifications) years ago decided NOT to liaise their specs with ITU-T.

–>Please see References at the bottom of this article for more information.

…………………………………………………………………………………………………………………………………………….

ITU-R M.[IMT-2030.EVAL] – 6G RIT/SRIT Evaluation Criteria:

The WP 5D Working Group (WG) Technology Aspects/Sub-Working Group (SWG) Evaluation is working on a report that will provide guidelines on the procedure, the methodology, and the criteria (technical, spectrum, and service) to be used in evaluating the candidate IMT-2030 radio interface technologies (RITs) or sets of RITs (SRITs) for a number of test environments. These test environments are chosen to closely simulate the more stringent radio operating environments.

The evaluation procedure is designed in such a way that the overall performance of the candidate RITs/SRITs may be fairly and equally assessed on a technical basis. It ensures that the overall IMT-2030 objectives are met. This Report provides, for proponents, developers of candidate RITs/SRITs and independent evaluation groups, the common evaluation methodology and evaluation configurations to evaluate the candidate RITs/SRITs and system aspects impacting the radio performance.

–>This report is scheduled to be finalized at the WP 5D Meeting No. 52 (Geneva, 27 May-5 June 2026).

………………………………………………………………………………………………………………………………………………

ITU-R M.[IMT-2030.SUBMISSION] – 6G RIT/SRIT Submission Guidelines:

The draft new 5D Report ITU-R M.[IMT-2030.SUBMISSION], originating from the 5D July 2025 meeting, defines the submission guidelines, templates, and evaluation methodology for 6G Radio Interface Technologies (RITs/SRITs). The report focuses on enabling technology proposals for IMT-2030 which are to be submitted from February 2027 to February 2029 for 5D evaluation and approval.

Key Aspects of the Draft Report [IMT-2030.SUBMISSION]:
  • Submission & Evaluation Guidelines: The report serves as the official guide for submitting candidate Radio Interface Technologies (RITs) or Sets of Radio Interface Technologies (SRITs) for IMT-2030.
  • Structure: It is modeled after earlier reports like M.2411 (for 5G), defining the evaluation criteria, procedures, and templates for 6G technologies.
  • Technical Requirements: It outlines minimum performance requirements (MPRs) for 6G, including advanced capabilities like artificial intelligence, energy efficiency, and joint requirements.
  • Timeline: The report is central to the 2027-2030 timeline, aiming for the first submissions at the 54th WP 5D meeting (Feb 2027) and final submission by early 2029.
  • Context: It aligns with the ITU-R M.2160 framework (the “6G Vision”), which encompasses six usage scenarios: immersive communication, hyper-reliable low-latency communication, massive communication, ubiquitous connectivity, AI-integrated communication, and integrated sensing and communication.
–>This report is critical for 3GPP to align its Release 20 and 21 (6G) specifications with the requirements defined by 5D. Other standards organizations and national bodies, e.g. ETSI and those in China and Korea, may also submit IMT 2030 RIT/SRIT candidate standards, as they did for IMT 2020.
………………………………………………………………………………………………………………………………………………

WP 5D Workplan for IMT 2030 RIT/SRITs:

As previously noted, 5D will accept and evaluate IMT 2030 candidate RIT/SRIT submissions starting at the 54th meeting of WP 5D, currently planned for February 2027. The final deadline for submissions is 12 calendar days prior to the start of the 59th meeting of WP 5D in February 2029. The evaluation of the proposed RITs/SRITs by the independent evaluation groups and the consensus-building process will be performed throughout this two-year period and thereafter. Subsequent calendar schedules will be decided according to the submissions of proposals to 5D.

WP 5D meetings in 2030 will focus on the final stages of evaluating, adopting, and approving 6G technology submissions, aiming for approval of the final IMT-2030 recommendation in late 2030.  The 5D tentative meeting schedule for 2030:

  • Meeting No. 62 (February 2030): (1) finalize Addendum 6 to the Circular Letter, taking into account the draft new Report ITU-R M.[IMT-2030.OUTCOME]; (2) review and update the work plan, if necessary.
  • Meeting No. 63 (June 2030): develop and finalize Addendum 7 to the Circular Letter, taking into account completion of the draft new Recommendation ITU-R M.[IMT-2030.SPECS].
  • Meeting No. 64 (October 2030): finalize standards before potential approval by ITU-R SG 5 in November 2030 or early 2031.

References:

ITU-R WP 5D Meeting Reports (TIES access required)

https://www.itu.int/en/events/Pages/Calendar-Events.aspx?sector=ITU-R&group=R23-WP5D

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/pages/default.aspx

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/Pages/submission-eval.aspx

https://www.itu.int/wrc-27/

Roles of 3GPP and ITU-R WP 5D in the IMT 2030/6G standards process

 

ITU-R WP 5D Timeline for submission, evaluation process & consensus building for IMT-2030 (6G) RITs/SRITs

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

ITU-R: IMT-2030 (6G) Backgrounder and Envisioned Capabilities

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework

ITU-R WP5D invites IMT-2030 RIT/SRIT contributions

NGMN issues ITU-R framework for IMT-2030 vs ITU-R WP5D Timeline for RIT/SRIT Standardization

IMT-2030 Technical Performance Requirements (TPR) from ITU-R WP5D

Should Peak Data Rates be specified for 5G (IMT 2020) and 6G (IMT 2030) networks?

 

China vs U.S.: Race to Generate Power for AI Data Centers as Electricity Demand Soars

The International Energy Agency (IEA) forecasts that over the next five years, global demand for power (electricity) will grow roughly 50% faster than it did during the previous decade – and more than twice as fast as energy demand overall. That tremendous increase is driven largely by power-hungry AI data centers, along with electric cars and buses, electric-powered industrial machines, and electric heating of homes.

Global AI growth will be contingent on generating more power for data centers:

  • Global data center power demand is now expected to rise to a record 1,596 terawatt-hours by 2035 – a 255% increase from 2025 levels.
  • The U.S. is set to remain the leader in energy consumption, with demand surging 144% over this period, to 430 terawatt-hours.
  • China’s demand is projected to rise 255%, to 397 terawatt-hours.
  • European demand is expected to surge 303%, to 274 terawatt-hours.
  • New data centers coming online between now and 2030 will need more than 600 terawatt-hours of electricity. This is enough to power ~60 million homes.
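For readers who want to sanity-check these projections, here is a short back-of-the-envelope calculation in Python. The 2035 figures and percentage increases come from the bullets above; the implied 2025 baselines and the gigawatt conversion are derived here, not taken from the IEA report.

```python
# Back-of-the-envelope check of the data center power projections above.
# 2035 values and % increases are from the article; 2025 baselines are
# derived here and are therefore approximations.
HOURS_PER_YEAR = 8760  # converts TWh/year into average GW of continuous draw

projections = {          # region: (2035 demand in TWh, % increase vs 2025)
    "Global": (1596, 255),
    "U.S.":   (430, 144),
    "China":  (397, 255),
    "Europe": (274, 303),
}

for region, (twh_2035, pct) in projections.items():
    baseline_2025 = twh_2035 / (1 + pct / 100)
    avg_gw = twh_2035 * 1000 / HOURS_PER_YEAR   # 1 TWh = 1,000 GWh
    print(f"{region:7s} implied 2025 baseline ~{baseline_2025:.0f} TWh; "
          f"2035 demand ~{avg_gw:.0f} GW average continuous power")
```

By this arithmetic, the global 2035 figure corresponds to roughly 180 GW of average continuous draw, against an implied 2025 global baseline of about 450 TWh.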

 

Power for AI Data Centers: China vs U.S.:

China is currently ahead of the United States in generating and building out power infrastructure to support AI data centers, a phenomenon sometimes described by industry observers as an “electron gap.”

China’s rapid, centralized expansion of electricity generation—including both massive renewable projects and traditional, dispatchable power—has created a significant capacity advantage in the race to support AI workloads, which are increasingly limited by energy availability rather than just chip access.

Key factors in China’s power advantage for AI include:

Massive Generation Growth: Between 2010 and 2024, China’s power production increased by more than the rest of the world combined. In 2024 alone, China added 543 gigawatts of power capacity—more than the total capacity added by the U.S. in its entire history.

Significant Surplus Capacity: By 2030, China is projected to have roughly 400 gigawatts of spare power capacity, which is triple the expected power demand of the global data center fleet at that time.

“Eastern Data, Western Computing” Initiative: China is actively shifting energy-intensive data centers to its resource-rich western regions (like Inner Mongolia) while powering them with surplus renewable energy, such as wind and solar.

Lower Costs and Faster Buildouts: Data centers in China can pay less than half the electricity rates that American data centers do. Furthermore, projects in China can move from planning to operation in months rather than the years typical in the U.S., owing to faster permitting and fewer regulatory hurdles.

Conclusions:

While the U.S. currently leads in advanced AI chips and model development, it is facing a severe “energy bottleneck” for new data centers, with some requiring over a gigawatt of power. U.S. power demand has remained relatively flat for 20 years, resulting in a lag in building new capacity, whereas China has traditionally built power infrastructure in anticipation of high demand. Morgan Stanley has forecast that U.S. data centers could face a 44-gigawatt electricity shortfall in the next three years.

Despite China’s advantage in energy, U.S. export controls on high-end AI chips (such as Nvidia’s GPUs) have acted as a significant constraint on China’s actual AI compute power. This has led to a situation where the U.S. has the best “brains” (chips) but limited power to run them, while China has the “muscle” (energy) but limited access to top-tier AI brains.

However, the rapid improvements in Chinese AI models (such as DeepSeek), which are more energy-efficient and optimized for lower-tier hardware, may help mitigate this constraint.

References:

https://www.bloomberg.com/news/newsletters/2026-02-14/ai-battle-turbocharged-by-50-power-demand-surge-new-economy

https://www.iea.org/reports/electricity-2026

https://x.com/KobeissiLetter/status/2023437717888250284

How will the United States and China power the AI race?

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Ethernet gains on InfiniBand in data center connectivity market; White Box/ODM vendors top choice for AI hyperscalers

Fiber Optic Boost: Corning and Meta in multiyear $6 billion deal to accelerate U.S data center buildout

How will fiber and equipment vendors meet the increased demand for fiber optics in 2026 due to AI data center buildouts?

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

 

Analysis & Economic Implications of AI adoption in China

Executive Summary:

Visible signs of artificial intelligence adoption in China are everywhere. Consumers interact seamlessly with chatbots, livestream hosts promote algorithmically selected products, and recommendation engines exhibit an almost anticipatory understanding of user preferences.  Yet, beyond these consumer-facing applications, a deeper and potentially more consequential transformation is unfolding. Across China’s retail and services sectors, AI is shifting from demand generation to cost optimization. Enterprises are deploying machine learning in logistics, inventory management, customer service, and fulfillment operations to reduce inefficiencies as revenue growth slows and pricing power tightens.

Highlights:

  • Chinese companies are increasingly using AI to control operational costs and improve efficiency in a low-growth economic environment.

  • AI is being deployed in logistics, inventory management, and customer service to reduce expenses rather than primarily drive demand.

  • This shift towards AI for cost reduction is leading to steadier cash flow and improved operating margins for consumer companies.

China’s Consumer Sector: AI Powers Efficiency Over Growth:

As China’s economy adjusts to structural deceleration—marked by subdued household confidence, persistent real-estate overhang, and maturing market saturation—consumer companies face an unfamiliar imperative: prioritize resilience over expansion. With pricing power eroded and cost inflation persistent, traditional growth levers have lost potency. Leading platforms are responding by reorienting AI investments toward operational efficiency, transforming algorithms from engagement engines into margin-defense mechanisms. For investors, this evolution signals a new phase of earnings potential—one where incremental productivity gains could prove more durable than cyclical demand recovery.

“In a low-growth environment, incremental efficiency gains matter more than top-line expansion,” notes Zhao Ming, senior analyst for China internet companies at Hongyuan Capital. “AI has become a strategic lever for margin preservation.”

China’s consumer sector entered 2026 navigating familiar structural headwinds: cautious household sentiment, a fading property-wealth effect, and fierce price competition. Unlike in previous cycles, companies are finding it increasingly difficult to pass rising costs on to consumers. The result has been a strategic realignment. Where past growth phases emphasized volume and engagement, today’s market is rewarding operational discipline. That shift has sharpened the appeal of AI—not as a marketing showcase, but as a core instrument of productivity and cost control.

“In a slower-growth environment, leading Chinese consumer companies are using AI primarily to improve productivity and reduce operating costs rather than to drive incremental demand,” McKinsey said in a recent analysis of AI adoption across China’s retail and services sectors.

From Growth Catalyst to Cost Lever:

The center of gravity for AI investment has shifted from customer-facing innovation to operational optimization. E-commerce platforms and logistics operators have been among the earliest to integrate AI into mission-critical workflows. Demand-forecasting models are helping warehouses fine-tune inventory levels and reduce exposure to slow-moving goods. Routing algorithms are compressing last-mile delivery times and cutting fuel consumption. Automated customer-service systems are deflecting an ever-larger share of inquiries typically handled by human agents.

On their own, each of these applications may appear incremental. Taken together, they represent a meaningful improvement in margin resilience at a time when top-line expansion remains constrained. In an environment where minor percentage-point gains in efficiency can significantly affect earnings quality, AI is emerging as a quiet but potent differentiator.

Logistics as a Testbed for Scalable Efficiency:

The operational impact of AI is most visible in the logistics ecosystem, a sector that remains one of the largest cost centers in China’s consumer economy. Machine-learning systems are now proficient at forecasting order density by neighborhood and time of day, enabling fulfillment centers to position inventory closer to anticipated demand. In dense urban markets, adaptive algorithms continually adjust delivery routes in response to evolving conditions—from traffic and weather to cancellations and reorders—reducing both transit times and redundancy.
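The article does not describe any specific operator’s model, but the pattern it sketches – forecasting order density by neighborhood and time of day – can be illustrated with a minimal Python example. The zones and counts below are invented for illustration; production systems would add features such as weather, promotions, and seasonality.

```python
# Minimal sketch of order-density forecasting per (zone, hour-of-day).
# Data and zone names are invented; this shows only the basic pattern.
from collections import defaultdict
from statistics import mean

# Hypothetical delivery history: (zone, hour_of_day, orders_observed)
history = [
    ("riverside", 19, 120), ("riverside", 19, 135), ("riverside", 19, 128),
    ("riverside", 9, 40),   ("riverside", 9, 35),
    ("old-town", 19, 80),   ("old-town", 19, 95),
]

buckets = defaultdict(list)
for zone, hour, orders in history:
    buckets[(zone, hour)].append(orders)

def forecast(zone: str, hour: int) -> float:
    """Expected orders: historical mean for this (zone, hour) bucket."""
    observations = buckets.get((zone, hour), [])
    return mean(observations) if observations else 0.0

# Pre-position inventory against the forecast evening peak in each zone.
for zone in ("riverside", "old-town"):
    print(f"{zone}: expect ~{forecast(zone, 19):.0f} orders at 19:00")
```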

For investors, the value proposition is compelling: logistics efficiency scales. Once AI models are trained and stress-tested, they can be deployed across regions at low incremental cost, generating operating leverage even in periods of stagnant demand. Crucially, incumbents benefit from data scale. Years of transaction and delivery records translate into more accurate predictive models, reinforcing competitive moats and raising barriers to entry. This dynamic is reshaping industry structure even as consumer-facing platform features converge toward commoditization.

AI Extends Gains to Physical Retail:

Beyond e-commerce, brick-and-mortar retail—long considered a laggard in China’s digital transformation—is also seeing measurable efficiency dividends. Smart shelving, computer-vision inventory systems, and automated stock monitoring are cutting labor intensity while increasing inventory turnover. Grocery and convenience chains now rely on AI to optimize product assortments at the store level, calibrating selections to localized consumption patterns instead of applying national averages. The effect is twofold: reduced waste and fewer markdowns, both of which have historically weighed on profitability. The outcomes may not register as eye-catching innovation, but they align closely with investor priorities—stabler cash flows and predictable margins.

Labor Efficiency as a Strategic Imperative:

AI-enhanced customer service represents another underappreciated margin driver. Major consumer platforms report that routine customer interactions—order tracking, returns, product troubleshooting—are now predominantly handled through automated systems. This transition is particularly relevant in a labor market where wage growth continues to outpace consumption. Limiting headcount growth while maintaining response times and service quality has become a key operational goal.

“AI doesn’t replace customer service,” says Li Wenyuan, chief technology officer at retail software firm Qimeng Tech. “It filters it, so humans deal only with the expensive problems.” That filtering function is transforming customer operations from cost centers into scalable service platforms, balancing efficiency with user satisfaction.

Economic Implications:

For investors, the impact of China’s second-wave AI adoption will likely manifest less in headline growth metrics and more in incremental financial performance indicators. Key areas to watch include:

  • Operating margin expansion driven by process automation

  • Reduced fulfillment and logistics costs as a share of revenue

  • Improved capital-expenditure efficiency through data-driven asset utilization

The first chapter of China’s AI consumer story was about differentiation—using algorithms to personalize experiences, boost engagement, and drive sales. The next chapter is about discipline. As growth normalizes, companies are deploying AI to do more with less: compress costs, stabilize earnings, and build leaner, more adaptive operating models. In a market where scale alone no longer guarantees profitability, AI has become not just a tool for innovation—but a mechanism for survival.

References:

https://www.barrons.com/articles/china-ai-boom-commerce-warehouses-b1ad55f1

China’s open source AI models to capture a larger share of 2026 global AI market

China’s telecom industry rapid growth in 2025 eludes Nokia and Ericsson as sales collapse

China ITU filing to put ~200K satellites in low earth orbit while FCC authorizes 7.5K additional Starlink LEO satellites

China gaining on U.S. in AI technology arms race- silicon, models and research

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Bloomberg: China Lures Billionaires Into Race to Catch U.S. in AI

 

 

 

Analysis & Evaluation: HomeOfficeIQ™ within SmartHome™on the Calix Platform

Calix, Inc. today announced the launch of HomeOfficeIQ™, an advanced value-added service integrated within SmartHome™ on the Calix Platform. Designed to help broadband service providers strengthen subscriber retention and unlock new revenue opportunities, HomeOfficeIQ introduces intelligent cellular network failover functionality managed through CommandIQ®. This capability ensures uninterrupted connectivity for remote workers and households during network outages, extending resilience beyond the limits of the broadband infrastructure itself.

Through the CommandIQ app, subscribers can precisely manage device- or network-level prioritization to maintain business continuity, while ProtectIQ® continues to safeguard home networks against security threats and ExperienceIQ® enforces content and application controls for a consistent, secure online environment.

As hybrid work becomes a fixture of the connected home, resilient and secure broadband is now indispensable. According to Calix Market Insights, 37% of residential internet subscribers regularly use their home connections for work, and more than one in three who switched providers cited enhanced security as a primary driver. This shift highlights subscribers’ growing expectation for reliable connectivity that protects sensitive applications and workflows. For service providers, this translates into a clear opportunity to capture higher average revenue per user (ARPU)—with 35% of respondents reporting employer reimbursement for home internet service, underscoring its professional value.

Ben Foster, president and chief executive officer at Twin Valley, said: “Calix is giving us a powerful way to differentiate our residential experiences and create real value for our customers. Adding HomeOfficeIQ to our upcoming lifestyle-based offers builds on what SmartHome already delivers and reinforces why customers continue choosing Twin Valley. By keeping customers connected when it matters most, Calix is helping Twin Valley strengthen loyalty, support higher ARPU, and drive long-term, sustainable growth for our business.”

Shane Eleniak, chief product officer at Calix, said: “HomeOfficeIQ reflects our commitment to helping service providers deliver secure, connected experiences for their subscribers—even during unavoidable outages. Now, HomeOfficeIQ will offer our customers a simple, new way to strengthen their residential offerings. With this launch, Calix continues to extend SmartHome innovation on the Calix Platform, helping providers further differentiate from competitors, build the trust that drives retention and ARPU, and create long-term value for their businesses and communities.”

According to Calix, the HomeOfficeIQ capabilities and enhancements for SmartHome enable service providers to deliver:

  • Safe, secure connections during unavoidable network outages. HomeOfficeIQ quickly restores connectivity to critical and prioritized devices during outages, while keeping ProtectIQ active and applying the content controls of ExperienceIQ. Together, these ensure a safe, secure experience—helping safeguard video meetings, cloud-based applications, and other time-sensitive activities. HomeOfficeIQ is fully compatible with the award-winning Calix GigaSpire® portfolio.
  • Personalized network controls over SSIDs or IoT devices for subscribers. When a cellular hotspot is activated via HomeOfficeIQ, households can easily select and prioritize either multiple SSIDs or multiple devices. This helps ensure essential activities like work, school, or telehealth stay connected—giving households meaningful control over their network for the moments that matter most (a conceptual sketch follows this list).
  • Built-in CommandIQ promotions that boost engagement and drive adoption. Integrated directly into CommandIQ, promo and announcement tiles help providers promote new offers, share service updates, and educate users, increasing engagement and accelerating SmartHome adoption. CommandIQ recently earned the TMC 2025 Cybersecurity Excellence Award for advanced subscriber protection.
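Calix has not published the internals of HomeOfficeIQ or the CommandIQ API, so the sketch below is only a conceptual illustration of the prioritization behavior described above: when broadband fails over to a bandwidth-constrained cellular hotspot, the highest-priority devices are admitted first. Every name, field, and number here is hypothetical.

```python
# Conceptual sketch of device prioritization during cellular failover,
# per the description above. This is NOT the CommandIQ API; all names,
# fields, and bandwidth figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    priority: int        # lower number = higher priority
    est_kbps: int        # rough bandwidth the device typically needs

def admit_on_failover(devices: list[Device], cellular_budget_kbps: int) -> list[str]:
    """Admit highest-priority devices until the cellular budget is spent."""
    admitted, used = [], 0
    for dev in sorted(devices, key=lambda d: d.priority):
        if used + dev.est_kbps <= cellular_budget_kbps:
            admitted.append(dev.name)
            used += dev.est_kbps
    return admitted

home = [
    Device("work-laptop", priority=1, est_kbps=4000),      # video meetings
    Device("telehealth-tablet", priority=2, est_kbps=2500),
    Device("smart-tv", priority=9, est_kbps=8000),         # deprioritized
]

print(admit_on_failover(home, cellular_budget_kbps=8000))
# -> ['work-laptop', 'telehealth-tablet']; the TV waits for broadband to return.
```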

………………………………………………………………………………………………………………………………….

About Calix:

Calix provides broadband cloud software, access systems, and managed services for ISPs and Broadband Service Providers (BSPs) that offer broadband access and other bundled services. Here is how the company describes itself:

Calix, Inc. (NYSE: CALX)—Calix is an appliance-based platform, cloud and managed services company. Broadband experience providers leverage Calix’s broadband platform, cloud and managed services to simplify their operations, subscriber engagement and services; innovate for their consumer, business and municipal subscribers; and grow their value for members, investors and the communities they serve.

Our end-to-end platform and managed services democratize the use of data—enabling our customers of any size to operate efficiently, acquire subscribers and deliver exceptional experiences. Calix is dedicated to driving continuous improvement in partnership with our growing ecosystem to support the transformation of our customers and their communities.

…………………………………………………………………………………………………………………………………………………………..

Author’s Assessment of Calix Deliverables:

  • Broadband Platform and Cloud:

    • Calix Cloud delivers analytics, automation, and service intelligence to simplify operations, improve service agility, and drive revenue growth for service providers.

    • Engagement Cloud, Service Cloud, Operations Cloud, and related modules provide subscriber insights, marketing automation, and network/service visibility on a common platform.

  • Access systems and premises gear

    • Broadband access nodes and fiber-to-the-home (FTTH) optics support PON-based and fiber-based broadband architectures for residential and business services.

    • GigaSpire-branded residential gateways and Wi‑Fi systems provide managed in‑home connectivity as CPE tightly integrated with the Calix platform and clouds.

  • Managed services portfolio (SmartLife/experience services)

    • SmartHome delivers managed residential Wi‑Fi, security (ProtectIQ), parental/content controls (ExperienceIQ), and new offers such as HomeOfficeIQ with cellular failover for work-from-home reliability.

    • SmartBiz targets small business connectivity, offering managed Wi‑Fi and value‑added services tailored for business workflows and higher service tiers.

    • SmartTown extends secure Wi‑Fi beyond the home into community and public spaces as a managed Wi‑Fi fabric for municipalities and regional BSPs.

Core market segments:

  • Regional and rural broadband service providers (BSPs/ISPs)

    • Primary customers are Tier 2/3 and regional providers looking to differentiate on managed Wi‑Fi, subscriber experience, and ARPU growth rather than raw bandwidth alone.

  • Residential broadband and smart home

    • Focus on households that rely on broadband for hybrid work, streaming, gaming, and family connectivity, where secure, managed Wi‑Fi and application-aware services drive perceived value.

  • Small business and community connectivity

    • SmartBiz and SmartTown position Calix with providers serving SMBs, local enterprises, and municipalities that need managed wireless coverage and simple operations at scale.

  • Growth aligned to government-funded fiber buildouts

    • Calix highlights substantial revenue opportunity tied to U.S. BEAD and related broadband programs, leveraging its platform and systems as BSPs scale new FTTH networks and experience-based services. Their 10G PON solution is used by over 225 BSPs.

……………………………………………………………………………………………………………………………………………………

References:

https://www.calix.com/press-release/2026/02/calix-launches-homeofficeiq.html

https://www.businesswire.com/news/home/20260212390035/en/Calix-Launches-HomeOfficeIQ-So-Service-Providers-Can-Keep-Home-Networks-Securely-Connectedand-Drive-ARPU-GrowthEven-During-Unavoidable-Outages

https://www.businesswire.com/news/home/20231016892716/en/Calix-HomeOfficeIQ-Is-the-Latest-SmartHome-Managed-Service-To-Enable-Broadband-Providers-To-Raise-the-Bar-for-Subscriber-Experiences-and-Expand-Their-Markets

https://www.calix.com/blog/2026/01/latency-in-internet.html

Calix and Corning Weigh In: When Will Broadband Wireline Spending Increase?

Calix touts GigaSpire as smart home solution for ISPs

ZTE sees demand for fixed broadband and smart home solutions while 5G lags

MediaTek to expand chipset portfolio to include WiFi7, smart homes, STBs, telematics and IoT

 

Cisco’s Silicon One G300 as the dominant AI networking fabric, competing with Broadcom’s Tomahawk 6 series

On February 10, 2026, Cisco announced the Silicon One G300, a 102.4 Tbps Ethernet switch silicon, claiming it can power gigawatt-scale AI clusters for training, inference, and real-time agentic workloads while maximizing GPU utilization with a 28% improvement in job completion time. The G300 is said to offer Intelligent Collective Networking, which combines an industry-leading fully shared packet buffer, path-based load balancing, and proactive network telemetry to deliver better performance and profitability for large-scale data centers. It efficiently absorbs bursty AI traffic, responds faster to link failures, and prevents packet drops that can stall jobs, ensuring reliable data delivery even over long distances. With Intelligent Collective Networking, Cisco says it can deliver 33% increased network utilization and a 28% reduction in job completion time versus simulated non-optimized path selection, making AI data centers more profitable with more tokens generated per GPU-hour.

The Cisco Silicon One G300 is also highly programmable, enabling equipment to be upgraded with new network functionality even after it has been deployed. This enables Silicon One-based products to support emerging use cases and play multiple network roles, protecting long-term infrastructure investments. And with security fused into the hardware, customers can embrace holistic, at-speed security to keep clusters up and running.
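Cisco’s quoted percentages translate into throughput terms with simple arithmetic; the snippet below shows the conversion. The 28% and 33% figures are from the announcement; the tokens-per-GPU-hour interpretation is our extrapolation, not a Cisco benchmark.

```python
# Converting Cisco's quoted gains into throughput terms. The input
# percentages are Cisco's; the conversion and interpretation are ours.
jct_reduction = 0.28   # 28% lower job completion time (vs. simulated baseline)
util_gain = 0.33       # 33% higher network utilization

throughput_multiplier = 1 / (1 - jct_reduction)
print(f"Jobs per GPU-hour: {throughput_multiplier:.2f}x")  # ~1.39x

# If token output scales with completed work per GPU-hour, the same
# cluster would generate roughly 39% more tokens per GPU-hour under
# these assumptions; the 33% utilization gain is one driver of that.
```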

The Cisco Silicon One G300 will power new Cisco N9000 and Cisco 8000 systems that push the frontier of AI networking in the data center. The systems feature innovative liquid cooling and support high-density optics to achieve new efficiency benchmarks and ensure customers get the most out of their GPU investments. In addition, the company enhanced Nexus One to make it easier for enterprises to operate their AI networks — on-premises or in the cloud — removing the complexity that can hold organizations back from scaling AI data centers.

“We are spearheading performance, manageability, and security in AI networking by innovating across the full stack – from silicon to systems and software,” said Jeetu Patel, President and Chief Product Officer, Cisco. “We’re building the foundation for the future of infrastructure, supporting every type of customer—from hyperscalers to enterprises—as they shift to AI-powered workloads.”

“As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself. It’s not just about faster GPUs – the network must deliver scalable bandwidth and reliable, congestion-free data movement,” said Martin Lund, Executive Vice President of Cisco’s Common Hardware Group. “Cisco Silicon One G300, powering our new Cisco N9000 and Cisco 8000 systems, delivers high-performance, programmable, and deterministic networking – enabling every customer to fully utilize their compute and scale AI securely and reliably in production.”

The networking industry reaction to Cisco’s newest ASIC has been largely positive, with industry analysts and partners highlighting its role in reclaiming Cisco’s dominance in the AI infrastructure market. For example, Brendan Burke of Futurium thinks Cisco’s Silicon One G300 could be the backbone of Agentic AI Inference.  His take: “Cisco’s latest announcements represent a calculated move to assert dominance in the AI networking fabric by attacking the specific bottlenecks of GPU cluster efficiency. As AI workloads shift toward agentic inference, where autonomous agents continuously interact across distributed environments, the network must handle unpredictable traffic patterns, unlike the structured flows of traditional training. Cisco is leveraging its vertical integration strategy to address the reliability and power constraints that plague these massive clusters. By emphasizing programmable silicon and rigorous optic qualification, Cisco aims to decouple network lifespan from rapid GPU innovation cycles, ensuring infrastructure can adapt to new traffic steering algorithms without hardware replacements. The G300 is a bid to make Ethernet the undisputed standard for AI back-end networks.”

Key Performance Indicators:
  • Industry-Leading Specs: Market analysts have noted that the G300’s 102.4 Tbps switching capacity sets a new benchmark for AI scale-out and scale-across networking.
  • Efficiency Gains: Initial simulations showing a 28% reduction in job completion time (JCT) and a 33% increase in network utilization have been cited as major differentiators for large-scale AI clusters.
  • Sustainability Focus: The shift toward liquid-cooled systems for the G300, which offers 70% greater energy efficiency per bit, is being viewed as a critical move for sustainable AI growth.
Strategic & Market Impact:
  • Competitive Positioning: Experts from HyperFRAME Research suggest that the new silicon signals a “new confidence” from Cisco, positioning them as the “Apple of infrastructure” by tightly integrating hardware and software.
  • AI Infrastructure Pivot: Financial analysts at Seeking Alpha have upgraded Cisco’s outlook, viewing the company no longer as just a legacy hardware firm but as a central player in the AI revolution.
  • Partner Confidence: Major partners, such as Shanghai Lichan Technology, have expressed excitement about the Nexus 9100 Series powered by this silicon, specifically for its ability to simplify and scale AI deployments.
Critical Observations:
  • Nvidia & Broadcom Competition: While the G300 is seen as a strong challenger to Nvidia’s Spectrum-X and Broadcom’s Tomahawk/Jericho lines, some observers note that Cisco still faces a steep climb to regain market share lost to these competitors in recent years.
  • Complexity Concerns: Some industry veterans have pointed out that while the silicon is “hyperscale ready,” the success of these ASICs in the enterprise will depend on Cisco’s ability to maintain operational simplicity through tools like the Nexus Dashboard.

……………………………………………………………………………………………………………………………………………………………………………………………

Cisco’s Silicon One G300 and Broadcom’s latest Tomahawk 6 series both offer a top-tier 102.4 Tbps switching capacity, with the primary differentiators lying in each company’s unique approach to congestion management and network programmability.
Technical Spec. Comparison:

| Spec | Cisco Silicon One G300 | Broadcom Tomahawk 6 (BCM78910 Series) |
| --- | --- | --- |
| Bandwidth | 102.4 Tbps | 102.4 Tbps |
| Manufacturing process | TSMC 3nm | 3nm technology |
| SerDes lanes & speed | 512 lanes at 200 Gbps per link | 512 lanes at 200 Gbps per link, or 1,024 lanes at 100G |
| Port configuration | Up to 64 x 1.6TbE ports or 512 x 200GbE ports | Up to 64 x 1.6TbE ports or 512 x 200GbE ports |
| Target AI cluster size | Supports deployments of up to 128,000 GPUs | Supports over 100,000 XPUs (accelerators) |

(Sources: TechPowerUp, The Register, X, Broadcom.)
Key Feature Differences:
  • Congestion Management: Cisco differentiates the G300 with its “Intelligent Collective Networking” approach, featuring a fully shared packet buffer and a load-balancing agent that communicates across all G300s in the network to build a global map of congestion. Broadcom’s Tomahawk series also includes smart congestion control and global load balancing, though Cisco claims its implementation achieves 33% higher network utilization (see the conceptual sketch after this list).
  • Programmability: Cisco emphasizes P4 programmability, allowing customers to update network functionality even after deployment.
  • Ecosystem & Integration: Broadcom operates primarily in the merchant silicon market, with its chips used by various partners such as HPE Juniper Networking. Cisco uses its own silicon to power its Nexus 9000 and 8000 Series switches, tightly integrating hardware with software management platforms like Nexus One for a unified solution.
  • Cooling Solutions: The Cisco G300 is designed to support high-density optics and is offered in new systems that include liquid-cooled options, providing 70% greater energy efficiency per bit compared to previous generations.
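Neither vendor publishes its load-balancing algorithm, but the general idea of path-based load balancing driven by congestion telemetry can be sketched as follows. The telemetry format, weighting rule, and path names are illustrative assumptions, not either vendor’s implementation.

```python
# Conceptual sketch of path-based load balancing over congestion
# telemetry. The telemetry values, weighting, and names below are
# illustrative assumptions, not Cisco's or Broadcom's implementation.
import random

# Hypothetical per-path congestion (0.0 = idle, 1.0 = saturated), as
# might be assembled from telemetry reports across the fabric.
paths = {"spine-1": 0.82, "spine-2": 0.35, "spine-3": 0.50}

def pick_path(congestion: dict[str, float]) -> str:
    """Weighted random choice that favors less-congested paths."""
    weights = {p: max(1e-6, 1.0 - c) for p, c in congestion.items()}
    r = random.uniform(0, sum(weights.values()))
    for path, w in weights.items():
        r -= w
        if r <= 0:
            return path
    return path  # floating-point edge case: fall back to last path

# Place 10,000 new flowlets and observe the resulting spread.
counts = {p: 0 for p in paths}
for _ in range(10_000):
    counts[pick_path(paths)] += 1
print(counts)  # spine-2 and spine-3 should attract most of the traffic
```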

………………………………………………………………………………………………………………………………………………………………………………

References:

https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2026/m02/cisco-announces-new-silicon-one-g300.html

https://blogs.cisco.com/sp/cisco-silicon-one-g300-the-next-wave-of-ai-innovation

Will Cisco’s Silicon One G300 Be the Backbone of Agentic Inference?

Analysis: Ethernet gains on InfiniBand in data center connectivity market; White Box/ODM vendors top choice for AI hyperscalers

Cisco CEO sees great potential in AI data center connectivity, silicon, optics, and optical systems

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Nvidia enters Data Center Ethernet market with its Spectrum-X networking platform

Will AI clusters be interconnected via Infiniband or Ethernet: NVIDIA doesn’t care, but Broadcom sure does!

T-Mobile US announces new broadband wireless and fiber targets, 5G-A with agentic AI and live voice call translation

T-Mobile US (the “Un-carrier”) today announced a new target of 15 million 5G broadband customers by 2030, a 25% increase from its previous target of 12 million by the end of 2028, driven by increased spectral efficiency, better CPE technology, expanded eligibility (including business customers with complementary usage profiles), and broadened product offerings to continue to meet evolving customer needs. T-Mobile is also leveraging its scale and nationwide 5G Advanced network to expand into new growth areas, including advertising, financial services, and long-term opportunities in edge and physical AI. The top-rated U.S. wireless telco also expects between 3 and 4 million T-Fiber customers by 2030.

“T-Mobile is raising the bar on what customers, stockholders, and the industry can expect from the Un-carrier. T-Mobile has an unmatched combination of the Best Network, Best Value, and Best Customer Experiences — hallmarks of our unique Un-carrier differentiation — paired with our industry-leading portfolio of assets,” said T-Mobile CEO Srini Gopalan.

“This is why customers bring their connectivity relationship to T-Mobile. Looking ahead, we see an extraordinary runway to further expand this differentiation — through sustained momentum in network perception, digital and AI-driven transformation, and our future-forward innovation in areas like 6G and advanced AI. With this foundation, I’m confident that the future has never been brighter.”


The Un-carrier also plans to launch real-time and agentic AI services directly within its 5G-Advanced (5G-A) network by the end of 2026. This initiative, which began with a beta program in early 2026 for postpaid customers, allows AI-driven features to function natively within the network, meaning users do not need to download specific apps or upgrade their hardware. The 5G-A offering will include live voice call translation in over 50 languages. By integrating AI directly into the 5G-A infrastructure (RAN, core network, and management layers), T-Mobile is enabling features that work on any eligible device, not just smartphones.

New 5G-A Agentic AI Highlights:

  • The initial application is a “Live Translation” feature for voice calls, allowing for real-time translation in over 50 languages.
  • “Agentic” AI and Automation: The network will use AI to enhance operational efficiency, including predictive optimization and dynamic resource allocation.
  • The 5G-Advanced deployment also supports increased data speeds (up to 6.3 Gbps in tests), low-latency applications like XR and cloud gaming, and enhanced location services.
  • The forthcoming capability will permit features to be active with only one participant needing to be on the 5G-A network.
  • Infrastructure Partners: T-Mobile is collaborating with partners including NVIDIA, Ericsson, and Nokia to build an AI-RAN (Radio Access Network) framework. This move is part of a broader strategy to transition from 5G to 5G-Advanced, with a focus on delivering “intent-driven” AI services and laying the groundwork for 6G (IMT 2030).

……………………………………………………………………………………………………………………………………….

References:

https://www.t-mobile.com/how-mobile-works/innovation/5g-advanced

https://www.t-mobile.com/news/business/t-mobile-capital-markets-day-update-feb-2026

https://investor.t-mobile.com/events-and-presentations/events/event-details/2026/T-Mobile-Q4-2025-Earnings-Call-and-Capital-Markets-Day-Update-2026-yRJC80TMnI/default.aspx

https://www.t-mobile.com/benefits/live-translation

https://www.usatoday.com/story/tech/columnist/2026/02/11/t-mobile-real-time-phone-call-translation/88605297007/

 

Analysis: Rakuten Mobile and Intel partnership to embed AI directly into vRAN

Today, Rakuten Mobile and Intel announced a partnership to embed Artificial Intelligence (AI) directly into the virtualized Radio Access Network (vRAN) stack. While vRAN currently represents a small percentage of the total RAN market (Dell’Oro Group recently forecast vRAN to account for 5% to 10% of the total RAN market by 2026), this partnership could boost that percentage, as it addresses key adoption hurdles: performance, power, and AI integration. Key areas of innovation include:

  • Enhanced Wireless Spectral Efficiency: Optimizing spectrum utilization for superior network performance and capacity.
  • Automated RAN Operations: Streamlining network management and reducing operational complexities through intelligent automation.
  • Optimized Resource Allocation: Dynamically allocating network resources for maximum efficiency and subscriber experience.
  • Increased Energy Efficiency: Significantly reducing power consumption in the RAN, contributing to sustainable network operations.

The partnership essentially aims to make vRAN superior in performance and TCO (Total Cost of Ownership) to traditional, proprietary, purpose-built RAN hardware.

“We are incredibly excited to expand our collaboration with Intel to pioneer truly AI-native RAN architectures,” said Sharad Sriwastawa, co-CEO and CTO, Rakuten Mobile. “Together, we are validating transformative AI-driven innovations that will not only shape but define the future of mobile networks. This partnership showcases how intelligent RAN can be achieved through the seamless and efficient integration of AI workloads directly within existing vRAN software stacks, delivering unparalleled performance and efficiency.”

Rakuten Mobile and Intel are engaged in rigorous testing and validation of cutting-edge RAN AI use cases across Layer 1, Layer 2, and comprehensive RAN operation and network platform management. A core objective is the seamless integration of AI directly into the RAN stack, meticulously addressing integration challenges while upholding carrier-grade reliability and stringent latency requirements.

Utilizing Intel FlexRAN reference software, the Intel vRAN AI Development Kit, and a robust suite of AI tools and libraries, Rakuten Mobile is collaboratively training, optimizing, and deploying sophisticated AI models specifically tailored for demanding RAN workloads. This collaborative effort is designed to achieve ultra-low, real-time AI latency on Intel Xeon 6 SoCs, capitalizing on their built-in AI acceleration capabilities, including AVX512/VNNI and AMX.
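Intel’s FlexRAN kernels are not reproduced here, but the class of arithmetic that AVX512/VNNI and AMX accelerate (low-precision integer matrix math for inference) can be illustrated with a small NumPy example. The matrix sizes and the use of NumPy are our choices for illustration only.

```python
# Illustration of int8 quantized inference arithmetic -- the class of
# workload that AVX512/VNNI and AMX accelerate on Xeon. This NumPy toy
# shows the numerics only; FlexRAN's actual kernels are not shown here.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64)).astype(np.float32)   # activation vector
w = rng.standard_normal((64, 32)).astype(np.float32)  # layer weights

def quantize(t: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric int8 quantization: t ~= q * scale."""
    scale = float(np.abs(t).max()) / 127.0
    return np.round(t / scale).astype(np.int8), scale

qx, sx = quantize(x)
qw, sw = quantize(w)

# Integer matmul with int32 accumulation (what VNNI/AMX do in hardware),
# then dequantize back to float.
y_int = qx.astype(np.int32) @ qw.astype(np.int32)
y_deq = y_int * (sx * sw)

y_ref = x @ w
err = np.abs(y_deq - y_ref).max() / np.abs(y_ref).max()
print(f"max relative error from int8 path: {err:.3%}")  # typically a few %
```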

“AI is transforming how networks are built and operated,” said Kevork Kechichian, Executive Vice President and General Manager of the Data Center Group, Intel Corporation. “Together with Rakuten, we are demonstrating how AI benefits can be achieved in vRAN. Intel Xeon processors power the majority of commercial vRAN deployments worldwide, and this transformation momentum continues to accelerate. Intel is providing AI-ready Xeon platforms that allow operators like Rakuten to design AI-ready infrastructure from the ground up, with built-in acceleration capabilities.”

Rakuten says they are “poised to unlock new levels of RAN performance, efficiency, and automation by embedding AI directly into the RAN software stack, this AI-native evolution represents the future of cloud-native, AI-powered RAN – inherently software-upgradable and built on open, general-purpose computing platforms. Additionally, the extended collaboration between Rakuten Mobile and Intel marks a significant step toward realizing the vision of autonomous, self-optimizing networks and powerfully reinforces both companies’ commitment to open, programmable, and intelligent RAN infrastructure worldwide.”

……………………………………………………………………………………………………………………………………………………………………..

Here is why this partnership might boost the vRAN market:
  • AI-Native Efficiency & Performance: The collaboration focuses on integrating AI to improve network performance and energy efficiency, which is a major pain point for operators. By embedding AI directly into the vRAN stack, they are enhancing wireless spectral efficiency, reducing power consumption, and automating RAN operations.
  • Leveraging High-Performance Hardware: The initiative utilizes Intel® Xeon® 6 processors with built-in vRAN Boost. This eliminates the need for external, power-hungry accelerator cards, offering up to 2.4x more capacity and 70% better performance-per-watt.
  • Validation of Large-Scale Commercial Viability: Rakuten Mobile operates the world’s first fully virtualized, cloud-native network. Its continued collaboration with Intel to make the vRAN AI-native provides a proven blueprint for other operators, reducing the perceived risk of adopting vRAN, particularly in brownfield (existing) networks.
  • Acceleration of Open RAN Ecosystem: The collaboration supports the broader push towards Open RAN, which is expected to see a significant rise in market share, doubling between 2022 and 2026.

………………………………………………………………………………………………………………………………………………………………

vRAN Market Outlook (2026–2033):
Market analysts expect 2026 to be a “pivotal year” for early real-world deployments of these intelligent architectures. While the base RAN market is stagnant, the virtualized segment is projected for aggressive growth:
  • Market Share Shift: Omdia forecasts that vRAN’s share of the RAN baseband subsector will reach 20% by 2028. That’s a significant jump from its current low single-digit percentage.
  • Explosive CAGR: The global vRAN market is projected to grow from approximately $16.6 billion in 2024 to nearly $80 billion by 2033, a 19.5% CAGR (cross-checked in the sketch below).
  • Small Cell Dominance: By the end of 2026, it is estimated that 77% of all vRAN implementations will be on small cell architectures, a key area where Rakuten and Intel have demonstrated success.
Despite these gains, vRAN still faces “performance parity” challenges with traditional RAN in high-capacity macro environments, which may temper the speed of total market replacement in the near term.
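The forecast’s endpoints and quoted growth rate can be cross-checked with two lines of arithmetic; the implied rate works out to roughly 19%, consistent with the cited 19.5% CAGR.

```python
# Cross-checking the vRAN forecast quoted above: does $16.6B (2024) to
# ~$80B (2033) really imply a ~19.5% CAGR? Nine compounding years.
start, end, years = 16.6, 80.0, 2033 - 2024

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")                           # ~19.1%
print(f"$16.6B at 19.5% for 9 years: ${start * 1.195**years:.1f}B")  # ~$82B
```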
………………………………………………………………………………………………………………………………………………………………

References:

https://corp.mobile.rakuten.co.jp/english/news/press/2026/0210_01/

Virtual RAN gets a boost from Samsung demo using Intel’s Grand Rapids/Xeon Series 6 SoC

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

vRAN market disappoints – just like OpenRAN and mobile 5G

LightCounting: Open RAN/vRAN market is pausing and regrouping

Dell’Oro: Private 5G ecosystem is evolving; vRAN gaining momentum; skepticism increasing

https://www.mordorintelligence.com/industry-reports/virtualized-ran-vran-market

https://www.grandviewresearch.com/industry-analysis/virtualized-radio-access-network-market-report

Virtualization’s role in 5G Advanced (3GPP Release 18) and a proposed new hardware architecture

Disclaimer:  The author used Google Gemini to provide research contained in this article.

In a February 9, 2026 article, Ji-Yun Seol, Executive VP and Head of Product Strategy, Networks Business at Samsung, says: “The evolution from 5G to 5G-Advanced and 6G hinges on three interconnected pillars: virtualization for flexible networks, AI integration across all network layers, and automation towards autonomous networks.”

As the IEEE Techblog has extensively covered both AI RAN and the use of AI in 6G (IMT 2030), this post focuses on the role of virtualization in 5G Advanced.

In 3GPP Release 18 (5G-Advanced), virtualization is the foundational technology that enables several “software-defined” breakthroughs. 3GPP Release 18 components have already been submitted to ITU-R WP 5D for inclusion in the next revision of ITU-R M.2150. Any remaining technical issues, and the final decision on publication of ITU-R M.2150-3, are expected to be resolved during the WP 5D meeting concluding in February 2026.

3GPP Rel 18 features that depend most heavily on a virtualized, cloud-native architecture include:

1. AI-Enhanced Radio Access Network (RAN)
Release 18 is the first to integrate AI/ML directly into the air interface. This requires a virtualized environment to:
  • Host AI Models: Run complex machine learning algorithms for channel state information (CSI) feedback, beam management, and positioning (a toy CSI-compression sketch follows this list).
  • Automate Optimization: Enable “zero-touch” operations where the network dynamically adjusts power and resource allocation based on predictive traffic patterns.
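Rel-18 studies learned autoencoders for CSI feedback compression; as a stand-in for a trained model, the sketch below uses a truncated SVD to convey the same idea of reporting a channel matrix through a few coefficients. The dimensions and method are illustrative, not from any 3GPP specification.

```python
# Toy illustration of compressed CSI feedback. Rel-18 studies learned
# autoencoders for this; a truncated SVD is substituted here to show
# the core idea: feed back a few coefficients, reconstruct at the gNB.
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8))  # toy channel

U, s, Vh = np.linalg.svd(H, full_matrices=False)
k = 2                                    # feed back only the 2 strongest modes
H_hat = (U[:, :k] * s[:k]) @ Vh[:k, :]   # gNB-side reconstruction

nmse = np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2
full = H.size                            # 256 complex values if uncompressed
sent = k * (32 + 8 + 1)                  # ~82 values: u_i, v_i, sigma_i per mode
print(f"feedback: {sent}/{full} values, reconstruction NMSE = {nmse:.2f}")
# A random matrix compresses poorly; real channels are far more
# structured (near low-rank), so a few modes capture most of the energy.
```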
2. Advanced Network Slicing

While slicing existed in earlier releases, 5G-Advanced introduces more sophisticated, automated management. Virtualization is critical for:

  • Dynamic Resource Partitioning: Using Cloud-native Network Functions (CNFs) to create dedicated virtual networks on demand for specific use cases like public safety or industrial automation.

  • SLA Assurance: Automatically scaling virtual resources to guarantee the ultra-low latency required for high-bandwidth applications like XR (Extended Reality).
3. Split-Processing for Extended Reality (XR)

To support lightweight headsets, 5G-Advanced relies on split-rendering.

  • Edge Cloud Dependency: Virtualization allows heavy graphical processing to be moved from the headset to a virtualized Edge Cloud. This requires a highly agile, virtualized edge infrastructure to maintain the near-zero delay needed for immersive experiences.
4. Integrated Network Security
Release 18 introduces features specifically addressing the security impact of virtualization.
  • Infrastructure Visibility: New protocols provide the 3GPP layer with direct visibility into the underlying virtualized platform to detect vulnerabilities in the software-defined infrastructure.
5. Automated Management & Orchestration (Self-Configuration)

Virtualization enables “self-organizing networks” (SON) where network entities can self-configure.

  • Lifecycle Management: Standardized solutions in Rel-18 allow for the automated downloading, activation, and testing of software across virtualized network functions (VNFs).
………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Summary of 3GPP Rel 18 Features vs Virtualization:

| Feature | Primary Virtualization Dependency |
| --- | --- |
| AI/ML for RAN | Hosting and training models on COTS hardware |
| Edge-Based XR | Offloading computation to virtualized edge nodes |
| Automated Slicing | Rapid instantiation of CNFs for specific “slices” |
| Net Energy Saving | Software-driven power-down of virtual resources |

………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

On the hardware side, traditional telecommunications infrastructure was defined by a tight coupling of network functions to proprietary, purpose-built hardware—resulting in siloed environments where routers, baseband units, and security appliances existed as distinct physical appliances. While providing reliable performance, this monolithic model introduced limitations in scalability, creating high demands for space, power, and capital expenditure for functional upgrades.

Virtualization transforms this paradigm by decoupling network functions from dedicated hardware, deploying them as software-defined workloads on commercial off-the-shelf (COTS) servers. This shift toward general-purpose compute platforms drives operational efficiency, enhances flexibility, and enables AI readiness. The industry adoption followed a staged evolution: starting with the virtualization of core networks—migrating packet gateways and subscriber databases to standard servers—followed by Virtualized RAN (vRAN), which disaggregates baseband processing from radio hardware to operate as cloud-native software.

In 5G-Advanced (Release 18), the hardware shifts from proprietary “black boxes” to a disaggregated architecture of General-Purpose Processors (GPPs) and Specialized Accelerators.

The physical infrastructure required to run these virtualized functions generally falls into three categories:

1. Telco-Grade Edge Servers

Virtual Network Functions (VNFs) and Cloud-native Functions (CNFs) run on Commercial Off-The-Shelf (COTS) servers designed for high-density environments.

  • Processors: Typically Intel Xeon Scalable or AMD EPYC processors with high CPU core counts (up to 48+ cores) to handle parallelized workloads.
  • Memory: Large-scale deployments require 384GB to over 1TB of DDR4/DDR5 RAM to support multiple network “slices” simultaneously.
  • Form Factor: Short-depth chassis (300mm to 600mm) to fit into standard telco racks or outdoor cabinets at the network edge.
2. Layer 1 (PHY) Hardware Accelerators
Because general-purpose CPUs struggle with the intensive signal-processing math required by 5G-Advanced’s physical layer (L1), specialized accelerator cards are added to the servers.
  • Inline vs. Lookaside (contrasted in the sketch after this list):
    • Lookaside: The CPU sends specific tasks (like Forward Error Correction) to the card and gets them back.
    • Inline: The entire L1 data flow passes through the accelerator, reducing the load on the CPU and improving power efficiency.
  • Chips: These cards use FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), or GPUs.
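The lookaside/inline distinction can be made concrete with stub code: in the lookaside model the CPU orchestrates the Layer 1 chain and round-trips only the FEC block to the card, while in the inline model the whole chain runs on the accelerator. Every function below is an illustrative stand-in, not a real driver API.

```python
# Contrasting lookaside vs. inline L1 acceleration, as described above.
# Every function here is an illustrative stub, not a real driver API.

def cpu_channel_estimate(x): return f"est({x})"
def cpu_equalize(x):         return f"eq({x})"
def cpu_mac_interface(x):    return f"mac({x})"

class Accelerator:
    """Stand-in for an FPGA/ASIC/GPU L1 card."""
    def fec_decode(self, x): return f"fec({x})"       # lookaside: one task
    def process_l1(self, x): return f"l1_chain({x})"  # inline: whole chain

def lookaside_pipeline(samples, accel):
    """CPU runs the L1 chain; only FEC round-trips to the card."""
    x = cpu_channel_estimate(samples)   # on the CPU
    x = cpu_equalize(x)                 # on the CPU
    x = accel.fec_decode(x)             # offloaded, then returned
    return cpu_mac_interface(x)

def inline_pipeline(samples, accel):
    """The entire L1 flow passes through the accelerator; CPU stays free."""
    return cpu_mac_interface(accel.process_l1(samples))

card = Accelerator()
print(lookaside_pipeline("iq", card))  # mac(fec(eq(est(iq))))
print(inline_pipeline("iq", card))     # mac(l1_chain(iq))
```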
3. AI-Specific Infrastructure
As Release 18 introduces AI/ML directly into the radio interface, the hardware must support high-performance inferencing.
  • GPU Integration: Platforms like NVIDIA Aerial use GPUs to accelerate both 5G signal processing and AI workloads on the same hardware.
  • DPUs (Data Processing Units): Used to offload networking and security tasks, ensuring that data moves between the radio and the virtualized core with sub-microsecond precision.
Summary of Hardware Component Functions:

| Hardware Component | Function in 5G-Advanced |
| --- | --- |
| COTS Servers | Host virtualized core and RAN software (vCU, vDU) |
| L1 Accelerators | Handle compute-heavy signal processing (beamforming, MIMO) |
| SmartNICs / DPUs | Manage high-speed data transfer and timing synchronization |
| GPUs | Power the AI/ML models for network optimization and XR rendering |

…………………………………………………………………………………………………………………………………………………………………..

References:

https://www.3gpp.org/specifications-technologies/releases/release-18

Samsung: Turning legacy infrastructure into AI-ready networks

Dell’Oro: Analysis of the Nokia-NVIDIA-partnership on AI RAN

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

 

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Executive Summary:

In a February 6, 2026 CNBC interview with Scott Wapner, Nvidia CEO Jensen Huang [1.] characterized the current AI build-out as “the largest infrastructure buildout in human history,” driven by exceptionally high demand for compute from hyperscalers and AI companies. “Through the roof” is how he described AI infrastructure spending. It’s a “once-in-a-generation infrastructure buildout,” he said, specifically highlighting that demand for Nvidia’s Blackwell chips and the upcoming Vera Rubin platform is “sky-high.” He emphasized that the shift from experimental AI to AI as a fundamental utility has reached a definitive inflection point for every major industry.

Huang forecasts that a roughly 7-to-8-year AI investment cycle lies ahead, with the capital intensity justified because deployed AI infrastructure is already generating rising cash flows for operators. He maintains that the widely cited ~$660 billion AI data center capex pipeline is sustainable, on the grounds that GPUs and the surrounding systems are revenue-generating assets, not speculative overbuild. In his view, as long as customers can monetize AI workloads profitably, they will “keep multiplying their investments,” which underpins continued multi-year GPU demand, including for prior-generation parts that remain fully leased.

Note 1. As the undisputed leader in AI hardware (GPU chips, plus networking equipment via its Mellanox acquisition), Nvidia MUST ALWAYS MAKE POSITIVE REMARKS AND FORECASTS related to the AI build-out boom. Reader discretion is advised regarding Huang’s extremely bullish, “all-in on AI” remarks.

Huang reiterated that AI will “fundamentally change how we compute everything,” shifting data centers from general-purpose, CPU-centric architectures to accelerated computing built around GPUs and dense networking. He emphasized Nvidia’s positioning as a full-stack infrastructure and computing platform provider (chips, systems, networking, and software) rather than a standalone chip vendor. He accurately stated that Nvidia designs “all components of AI infrastructure” so that system-level optimization (GPU, NIC, interconnect, software stack) can deliver performance gains that outpace what is possible with a single chip under a slowing Moore’s Law. The installed base is presented as productive: even six-year-old A100-class GPUs are described as fully utilized through leasing, underscoring persistent elasticity of AI compute demand across generations.

AI Poster Children – OpenAI and Anthropic:

Huang praised OpenAI and Anthropic, the two leading artificial intelligence labs, which both use Nvidia chips through cloud providers. Nvidia invested $10 billion in Anthropic last year, and Huang said earlier this week that the chipmaker will invest heavily in OpenAI’s next fundraising round.

“Anthropic is making great money. OpenAI is making great money,” Huang said. “If they could have twice as much compute, the revenues would go up four times as much.”

He said that all the graphics processing units that Nvidia has sold in the past, even six-year-old chips such as the A100, are currently being rented, reflecting sustained demand for AI computing power.

“To the extent that people continue to pay for the AI and the AI companies are able to generate a profit from that, they’re going to keep on doubling, doubling, doubling, doubling,” Huang said.

Economics, utilization, and returns:

On economics, Huang’s central claim is that AI capex converts into recurring, growing revenue streams for cloud providers and AI platforms, which differentiates this cycle from prior overbuilds. He highlights very high utilization: GPUs from multiple generations remain in service, with cloud operators effectively turning them into yield‑bearing infrastructure.

This utilization and monetization profile underlies his view that the capex “arms race” is rational: when AI services are profitable, incremental racks of GPUs, network fabric, and storage can be modeled as NPV‑positive infrastructure projects rather than speculative capacity. He implies that concerns about a near‑term capex cliff are misplaced so long as end‑market AI adoption continues to inflect.
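
To illustrate what modeling incremental GPU racks “as NPV-positive infrastructure projects” looks like, here is a minimal Python sketch of the discounted cash flow arithmetic. Every figure in it (rack capex, lease revenue, opex, discount rate, service life) is an invented assumption for illustration, not a number from the interview.

```python
# Minimal NPV sketch for a hypothetical GPU rack, under invented assumptions:
# none of these figures come from the interview or any vendor.

CAPEX = 3_000_000                  # assumed up-front cost of one GPU rack, USD
ANNUAL_LEASE_REVENUE = 1_200_000   # assumed yearly rental income, USD
ANNUAL_OPEX = 300_000              # assumed power, cooling, and operations cost, USD
DISCOUNT_RATE = 0.10               # assumed cost of capital
YEARS = 5                          # assumed useful service life

def npv(capex: float, net_cash_flow: float, rate: float, years: int) -> float:
    """Discount each year's net cash flow to the present, minus the
    up-front investment."""
    return sum(net_cash_flow / (1 + rate) ** t for t in range(1, years + 1)) - capex

value = npv(CAPEX, ANNUAL_LEASE_REVENUE - ANNUAL_OPEX, DISCOUNT_RATE, YEARS)
print(f"NPV over {YEARS} years: ${value:,.0f}")
# Positive NPV (~$412k under these assumptions) is the sense in which a rack
# is an "infrastructure project" rather than speculative capacity; if lease
# revenue or utilization drops, the same math can flip negative.
```

If utilization or lease rates fall, the formula flips negative, which is precisely the bear case Huang is arguing against.
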

Competitive and geopolitical context:

Huang acknowledges intensifying global competition in AI chips and infrastructure, including from Chinese vendors such as Huawei, especially under U.S. export controls that have reduced Nvidia’s China revenue share to roughly half of pre‑control levels. He frames Nvidia’s strategy as maintaining an innovation lead so that developers worldwide depend on its leading‑edge AI platforms, which he sees as key to U.S. leadership in the AI race.

He also ties AI infrastructure to national‑scale priorities in energy and industrial policy, suggesting that AI data centers are becoming a foundational layer of economic productivity, analogous to past buildouts in electricity and the internet.

Implications for hyperscalers and chips:

Hyperscalers (and major Nvidia customers) Meta, Amazon, Google/Alphabet, and Microsoft recently stated that they plan to dramatically increase spending on AI infrastructure in the years ahead. In total, these hyperscalers could spend $660 billion on capital expenditures in 2026 [2.], with much of that spending going toward buying Nvidia’s chips. Huang’s message to them is that AI data centers are evolving into “AI factories” where each gigawatt of capacity represents tens of billions of dollars of investment spanning land, compute, and networking. He suggests that the hyperscaler industry (roughly a $2.5 trillion sector with about $500 billion in annual capex transitioning from CPU-centric to GPU-centric generative AI) still has substantial room to run.

Note 2. An understated point is that while these hyperscalers are spending hundreds of billions of dollars on AI data centers and Nvidia chips/equipment, they are simultaneously laying off tens of thousands of employees. For example, Amazon recently announced 16,000 job cuts this year, after 14,000 layoffs last October.
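
As a back-of-envelope check on the “AI factories” framing two paragraphs above, the snippet below converts the reported 2026 capex figure into gigawatt-equivalents. The per-gigawatt cost is an invented assumption chosen only to be consistent with the article’s “tens of billions of dollars” per gigawatt; it is not a figure from the interview.

```python
# Back-of-envelope: how many gigawatts of "AI factory" capacity the reported
# 2026 hyperscaler capex could fund. The per-GW cost is an illustrative
# assumption, not a figure from the interview.

TOTAL_CAPEX_2026 = 660e9      # reported combined hyperscaler capex, USD
ASSUMED_COST_PER_GW = 30e9    # illustrative all-in cost per gigawatt, USD

gigawatts = TOTAL_CAPEX_2026 / ASSUMED_COST_PER_GW
print(f"Implied capacity: ~{gigawatts:.0f} GW")   # ~22 GW at these assumptions
```
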

From a chip‑level perspective, he argues that Nvidia’s competitive moat stems from tightly integrated hardware, networking, and software ecosystems rather than any single component, positioning the company as the systems architect of AI infrastructure rather than just a merchant GPU vendor.

References:

https://www.cnbc.com/2026/02/06/nvidia-rises-7percent-as-ceo-says-660-billion-capex-buildout-is-sustainable.html

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers

SNS Telecom & IT: Mission-Critical Networks a $9.2 Billion Market

For nearly a century, the critical communications industry has relied on narrowband LMR networks for mission-critical voice and low-speed data services. Over time, these systems have evolved from relatively basic analog radios to digital communications technologies, such as APCO P25 and TETRA, to provide superior voice quality, end-to-end encryption, and other advanced features. However, due to their inherent bandwidth and design limitations, even the most sophisticated digital LMR networks are unable to support mobile broadband and data-driven critical IoT applications that have become vital for public safety, defense, utilities, transportation, oil and gas, mining, and other segments of the critical communications industry.

The 3GPP-defined LTE and 5G NR air interfaces have emerged as the leading radio access technology candidates to fill this void. Over the last decade, a plethora of fully dedicated, hybrid commercial-private, and secure MVNO-based 3GPP networks have been deployed to deliver critical communications broadband capabilities – in addition to the use of commercial mobile operator networks – for application scenarios as diverse as PTT group communications, multimedia messaging, high-definition video surveillance, BVLOS (Beyond Visual Line-of-Sight) operation of drones, situational awareness, untethered AR/VR/MR, collaborative mobile robots, AGVs (Automated Guided Vehicles), and automation in IIoT (Industrial IoT) environments. These networks range from nationwide PPDR (Public Protection & Disaster Relief) broadband platforms such as the United States’ FirstNet, South Korea’s Safe-Net, Saudi Arabia’s mission-critical broadband network, Great Britain’s ESN, France’s RRF, Sweden’s SWEN, and Finland’s VIRVE 2 public safety broadband service to defense sector 5G programs for the adoption of tactical cellular systems and permanent private 5G networks at military bases, regional cellular networks covering the service footprint of utility companies, FRMCS (Future Railway Mobile Communication System)-ready networks for train-to-ground communications, and NPNs (Non-Public Networks) for localized wireless connectivity in settings such as airports, maritime ports, oil and gas production facilities, power plants, substations, offshore wind farms, remote mining sites, factories, and warehouses.

Historically, most critical communications user organizations have viewed LTE and 5G NR as complementary technologies, used primarily to augment existing voice-centric LMR networks with broadband capabilities. This perception has changed with the commercial availability of 3GPP standards-compliant MCX (Mission-Critical PTT, Video & Data), QPP (QoS, Priority & Preemption), HPUE (High-Power User Equipment), IOPS (Isolated Operation for Public Safety), URLLC (Ultra-Reliable, Low-Latency Communications), TSC (Time-Sensitive Communications), and related service enablers. LTE and 5G networks have gained recognition as an all-inclusive critical communications platform and are nearing the point where they can fully replace legacy LMR systems with a future-proof transition path, supplemented by additional 5G features, such as 5G MBS/5MBS (5G Multicast-Broadcast Services) for MCX services in high-density environments, 5G NR sidelink for off-network communications, VMRs (Vehicle-Mounted Relays), MWAB (Mobile gNB With Wireless Access Backhauling), satellite NTN (Non-Terrestrial Network) integration, and support for lower 5G NR bandwidths in dedicated frequency bands for PPDR, utilities, and railways.

SNS Telecom & IT’s LTE & 5G for Critical Communications: 2025 – 2030 research publication estimates that global investments in mission-critical 3GPP networks and associated applications reached $5.4 billion in 2025. Driven by public safety broadband, defense communications, smart grid modernization, FRMCS, and IIoT initiatives, the market is expected to grow at a CAGR of approximately 19% over the next three years, reaching more than $9.2 billion by the end of 2028. Looking ahead to 2030, the industry will be underpinned by operational deployments ranging from sub-1 GHz wide area networks for national-scale MCX services, utility communications, and GSM-R replacement to systems operating in mid-band spectrum such as Band n101 (1.9 GHz) and Band n79 (4.4-5 GHz), as well as mmWave (Millimeter Wave) frequencies for specialized applications.
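
As a quick sanity check on the forecast arithmetic, the snippet below compounds the stated 2025 baseline at the stated growth rate; only the $5.4 billion base and the ~19% CAGR from the SNS projection above are used.

```python
# Sanity check on the SNS Telecom & IT forecast: $5.4B in 2025 compounding
# at ~19% CAGR for three years should land near the $9.2B cited for 2028.

value = 5.4      # 2025 baseline: global mission-critical 3GPP investment, USD billions
cagr = 0.19      # stated compound annual growth rate (approximate)

for year in range(2026, 2029):
    value *= 1 + cagr
    print(f"{year}: ${value:.1f}B")

# Prints ~$9.1B for 2028; a CAGR closer to 19.4% reproduces the "more than
# $9.2 billion" figure exactly, consistent with the rate being "approximately 19%".
```
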


Image Credit: SNS Telecom & IT

…………………………………………………………………………………………………………….

About SNS Telecom & IT

SNS Telecom & IT is a global market intelligence and consulting firm with a primary focus on the telecommunications and information technology industries. Developed by in-house subject matter experts, our market intelligence and research reports provide unique insights on both established and emerging technologies. Our areas of coverage include but are not limited to 6G, 5G, LTE, Open RAN, vRAN, small cells, mobile core, xHaul transport, network automation, mobile operator services, FWA, neutral host networks, private 4G/5G cellular networks, public safety broadband, critical communications, MCX, IIoT, V2X communications, and vertical applications.

References:

https://www.snstelecom.com/lte-for-critical-communications

SNS Telecom & IT: Private LTE & 5G Network Ecosystem – CAGR 22% from 2025-2030

SNS Telecom & IT: Private 5G and 4G LTE cellular networks for the global defense sector are a $1.5B opportunity

Fiber Optic Networks & Subsea Cable Systems as the foundation for AI and Cloud services

Emerging Cybersecurity Risks in Modern Manufacturing Factory Networks

RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030
