RAN Silicon Rethink – Part II: vRAN and General-Purpose Compute

The global Radio Access Network (RAN) market has experienced a significant decline, dropping by nearly $10 billion in annual product revenue between 2022 and 2024, from roughly $45 billion to about $35 billion by the end of last year (source: Omdia).

As the IEEE Techblog previously reported, Nokia is gradually moving away from its long-held reliance on custom RAN baseband (BBU) silicon from Marvell as it pivots to Nvidia’s GPUs, following the latter’s $1B investment in Nokia in October 2025. Samsung has long partnered with Marvell Technology on purpose-built 5G baseband silicon. However, rising development costs and a contracting market for proprietary RAN hardware are reshaping that strategy. The economic case for new, custom RAN chipsets is becoming weaker as operators accelerate network virtualization.

In sharp contrast, Ericsson continues to defend its investment in proprietary silicon architectures while maintaining a flexible approach for operators that prefer virtualized or cloud RAN implementations running on standard central processing units (CPUs). At present, those solutions rely exclusively on Intel processors, though Ericsson notes its software is being engineered with portability in mind to support future hardware diversity.

Among RAN equipment vendors accessible to operators across North America and much of Europe, Samsung now stands as the principal alternative to the Nordic suppliers, following the exclusion of Huawei and ZTE from many Western markets.

The South Korean conglomerate has become the global frontrunner in virtualized RAN (vRAN) deployments. Whereas custom silicon once dominated RAN infrastructure design, Samsung’s strategy has notably inverted that paradigm: vRAN is now its mainstream offering, and purpose-built hardware has moved to the periphery.

By the close of last year, Samsung reported supporting approximately 53,000 vRAN sites worldwide — a significant share of which lies within Verizon’s U.S. footprint. The company also disclosed major European developments, including Vodafone’s planned rollout across Germany and other markets, which will rely entirely on vRAN technology. For Samsung, discussions of bespoke, purpose-built 5G infrastructure have become increasingly rare.

According to Alok Shah, Vice President of Network Strategy at Samsung Networks, this transition reflects both the rising cost of developing custom silicon and the performance enhancements achieved by general-purpose CPU platforms.

“We’re still selling our purpose-built BBUs to a number of customers, but I do believe that it’s a matter of time,” Shah told Light Reading during MWC Barcelona, when asked if Samsung envisions an eventual phaseout of its proprietary baseband hardware portfolio.

Virtualized RAN Gains Momentum:

Transitioning to virtualized RAN (vRAN) allows network equipment vendors to capitalize on the scale economies of commercial data-center silicon. Samsung has established commercial vRAN contracts with Verizon and Vodafone, reflecting growing operator confidence in software-defined architectures.

“Virtual RAN performance has reached parity,” Shah said. “I know not all of our competitors feel that way, but that’s certainly how we feel. And the cost of building that modem is pretty high, even for a company like Samsung that’s really good at semiconductors,” he added.

Intel’s Granite Rapids Xeon platform exemplifies this shift to vRAN. The processor’s increased core density enables operators to cut hardware footprints; in many configurations, a single server can now support workloads that previously required two. Several network operators have confirmed this performance improvement during field evaluations.

Samsung and Ericsson continue to explore additional CPU suppliers. AMD’s latest multicore x86 processors offer up to 84 cores, compared with 72 in Intel’s Granite Rapids. However, offloading Forward Error Correction (FEC)—one of the most compute-intensive RAN processes—remains a challenge. Intel’s vRAN Boost feature integrates a dedicated hardware accelerator for FEC, while AMD currently lacks a direct equivalent.
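To illustrate why FEC is singled out for hardware acceleration, the toy sketch below implements a hard-decision bit-flipping decoder over a small parity-check matrix. This is only an illustrative assumption about the class of algorithm involved, not Intel's vRAN Boost or any vendor's actual implementation: real 5G NR LDPC decoding runs soft-decision belief propagation over far larger matrices at line rate, and it is exactly this iterative check-and-flip structure, repeated millions of times per second, that makes FEC expensive on general-purpose cores.

```python
import numpy as np

# Toy parity-check matrix for a 6-bit code (illustrative only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(received, max_iters=10):
    """Hard-decision bit-flipping decoding: iterate until all
    parity checks pass or the iteration budget is exhausted."""
    r = received.copy()
    for _ in range(max_iters):
        syndrome = H.dot(r) % 2          # which parity checks fail
        if not syndrome.any():
            return r                     # valid codeword found
        # Count how many failed checks each bit participates in,
        # then flip the bit(s) involved in the most failures.
        fails = syndrome.dot(H)
        r = (r + (fails == fails.max())) % 2
    return r

codeword = np.zeros(6, dtype=int)        # all-zero word satisfies H
noisy = codeword.copy()
noisy[2] ^= 1                            # inject a single bit error
print(bit_flip_decode(noisy))            # -> [0 0 0 0 0 0]
```

Even this miniature example needs a matrix-vector product per iteration per codeword; scaled to 5G NR block lengths and cell throughputs, the arithmetic volume explains why a dedicated accelerator (or a GPU) is attractive and why AMD's lack of an on-die equivalent matters to vRAN vendors.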

Samsung has also evaluated Arm-based platforms, which increasingly support efficient software migration from x86. Nvidia’s Grace CPU, built on Arm architecture, has emerged as a potential candidate, especially when paired with its GPUs for selective Layer 1 acceleration.

Samsung’s roadmap aligns with a gradual and selective introduction of GPU acceleration. The company demonstrated GPU-based beamforming optimization during MWC, illustrating how AI can refine radio energy targeting. However, Samsung executives maintain that the latest Intel CPUs also provide sufficient capacity to host AI inference workloads directly. “Granite Rapids has plenty of capacity to support AI algorithms on-platform,” noted Shah.

While Nokia is building a GPU-compatible Layer 1 to accelerate computationally intensive baseband functions—including FEC—Samsung’s approach appears incrementally narrower, focusing on targeted AI for RAN optimization rather than complete GPU offload. GPUs may ultimately support AI at the Edge applications—so-called AI and RAN—where telecom operators leverage deployed GPUs for latency-sensitive inference services.

The degree to which such applications will reside within RAN sites remains uncertain. Some operators suggest that edge inference may instead remain within core network clusters that can meet latency requirements more efficiently.

Samsung’s architecture already supports GPU integration through commercial off-the-shelf (COTS) servers from manufacturers such as HPE, Dell, and Supermicro—aligning with broader cloud-native RAN trends. “It’s an off-the-shelf card that can be integrated directly into standard servers,” said Shah.

For now, Intel remains Samsung’s primary compute partner for commercial vRAN products. “We haven’t had an instance where customers are pushing for a second platform—it’s primarily a matter of commercial interest,” Shah added. The direction is clear: Samsung, like other leading vendors, is prioritizing scalable, general-purpose compute over bespoke 5G silicon as vRAN deployment accelerates.

References:

https://www.lightreading.com/5g/samsung-eyes-death-of-purpose-built-5g-but-has-no-ai-ran-fears

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Marvell shrinking share of the RAN custom silicon market & acquisition of XConn Technologies for AI data center connectivity

Intel FlexRAN™ gets boost from AT&T; faces competition from Marvell, Qualcomm, and EdgeQ for Open RAN silicon

Analysis: Nokia and Marvell partnership to develop 5G RAN silicon technology + other Nokia moves

Open Cosmos introduces global space-based LEO satellite service for IoT monitoring

Founded in 2015 and headquartered in the UK, Open Cosmos has introduced a new integrated satellite service that combines broadband, Earth observation, and IoT capabilities to help organizations monitor critical infrastructure, protect environmental assets, and respond more rapidly to events. The company says the offering is unique in combining global IoT connectivity with real-time Earth observation data to deliver contextual intelligence for governments and institutions.

The service is built on Open Cosmos’ multi-layer satellite architecture, which the company describes as a trilogy of secure broadband connectivity, Earth observation, and IoT. The constellation includes the newly launched Connected Cosmos Low Earth Orbit (LEO) connectivity backbone [1] and the Open Constellation Earth observation layer [2]. Each satellite carries an IoT payload, integrating functions that are typically deployed as separate systems.

Note 1. Connected Cosmos is a new LEO constellation providing sovereign, secure communications for businesses and government bodies worldwide. It ensures that critical data remains secure, trusted, and immediately usable, even when terrestrial infrastructure is compromised. It uses optical inter-satellite links to route data between satellites, physically bypassing subsea cables. Built to withstand jamming and cyber attacks, it is designed to cut through a contested orbital environment for modern critical operations.

Note 2. The Open Constellation is a mutualized satellite infrastructure, created to enable organizations to share the data generated by satellites for improved access to information about our planet. Using this shared capacity reduces overall costs and increases access to better-quality, more frequent data. With more satellites in orbit, more areas can be covered more frequently, giving partners of the Open Constellation greater global coverage.

Open Cosmos Ecosystem:

Image Credit: Open Cosmos 

…………………………………………………………………………………………………………………………………………………………………………

The company says this approach is intended to “address the traditionally siloed nature of space-based data services, dramatically accelerating data delivery times and maximizing operational awareness, which will monitor environmental change and support disaster response across the globe – even in the most remote regions.”

Open Cosmos says the result is faster detection of events and a better understanding of what is happening on the ground. Potential applications include monitoring widely distributed assets, overseeing critical infrastructure such as energy, utility, and rail networks, protecting oceans, tracking wildfires, and observing offshore conditions. In this model, imagery and sensor data are combined so that users can not only see that a change has occurred, but also understand the context behind it.

“Our mission at Open Cosmos has always been focused on solving real world issues through space-based services,” said Danielle Edwards, VP for IoT at Open Cosmos. “This is an essential and critical technology service for governments, enterprises and institutions across the globe, helping to monitor and solve real world problems, with the innovative use of technology in space.

“Our existing Earth observation satellites already carry IoT payloads, so we have the experience to integrate further through our ConnectedCosmos LEO constellation, with each satellite being designed and made to carry IoT capabilities. Our aim is to provide a multitude of payload types within a single constellation to give our customers a completely bespoke and unique service.

“We won’t be just providing the data from a sensor; we will provide the visual imagery to explain why that data is changing. As demand for global monitoring and connected infrastructure continues to grow, our integrated approach represents a new model for space-enabled intelligence.”

At MWC earlier this month, Carlos Zamora, VP of Satcom Solutions at Open Cosmos, said the company is not positioning the LEO broadband service as a direct-to-device play.

Zamora elaborated:

“First of all, we’re not going direct to device with the broadband. We’re not here to compete with Starlink or Kuiper or all of these systems – we’re not here to bring internet to the masses. We’re here to bring global, secure connectivity to governments, commercial [customers] and actually anyone that is worried about their data resiliency and sovereignty. But we do have IoT capabilities that commercial and other customers could use. So the architecture is also fundamentally different. What we’re selling is a network – not a link in space, but actually a network. And I think what makes the difference beyond just connectivity, which is already a differentiator, is the fact that we can start fusing all of our offerings together. This is not just about moving bits from one place to another; it is giving you the possibility of accessing a space infrastructure that can give you access to real-time Earth observation, to real-time computing capabilities in orbit, and basically creating a network of assets that can increase your situational awareness and give you access to a global intelligence backbone.”

Open Cosmos is effectively positioning the platform as a secure, multi-sensor space infrastructure layer rather than a consumer broadband network. The focus is on government, enterprise, and institutional customers that need connectivity, resilience, and situational awareness tied to Earth observation and IoT data.

………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.open-cosmos.com/

https://www.open-cosmos.com/leo-satellite-network-connectivity

https://www.open-cosmos.com/news/open-cosmos-earth-observation-iot-real-time-data

https://www.telecoms.com/satellite/open-cosmos-launches-earth-observation-and-iot-satellite-service

Enterprise IoT and the Transformation of UK Telecom Business Models – Part 1

From LPWAN to Hybrid Networks: Satellite and NTN as Enablers of Enterprise IoT – Part 2

Semtech LoRa® PHY technology enables Amazon Sidewalk to expand while supporting fixed and mobile IoT endpoints

ITU-R recommendation IMT-2020-SAT.SPECS from ITU-R WP 5B to be based on 3GPP 5G NR-NTN and IoT-NTN (from Release 17 & 18)

CEA-Leti RF Chip Enables Ultralow-Power IoT Connectivity For Remote Devices Via Astrocast’s Nanosatellite Network

Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Introduction:

Across regions from Germany to Mexico, users of artificial intelligence (AI) are less concerned about being replaced by AI than by its propensity to make major mistakes, according to one of the largest global surveys to date on real-world AI usage and perception. These mistakes, known as “AI hallucinations,” are essentially fabricated responses presented as fact, rather than answers based on outdated information.

The study, conducted by Anthropic using its Claude chatbot, analyzed interviews with more than 80,000 users across 159 countries. The result is one of the most detailed global portraits yet of how AI is being deployed — and how users perceive its risks, benefits, and societal implications.

AI Hallucinations Outrank Job Displacement as Top Concern:

When asked what worries them most about AI, 27% of users cited AI chatbot errors described as “AI hallucinations,” while 22% pointed to job displacement and the loss of human autonomy. About 16% expressed concern that AI could weaken people’s capacity for critical thinking.

Image Credit: JOIST AI

“The AI hallucinations were a disaster. I lost so many hours of work,” said an entrepreneur from Germany. Another participant, a military worker in Mexico, noted the importance of domain knowledge in spotting AI’s flaws: “When I notice AI errors it’s because I’m well versed in the topic . . . but I wouldn’t know if the topic was alien to me, would I?”

An AI Interviewer for Global Insights:

The responses were collected in 70 languages using a novel feedback system that allowed Claude to act as both interviewer and analyst. The platform evaluated qualitative answers, categorizing responses to reveal common themes and linguistic nuances across regions.
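The sketch below suggests, in miniature, the kind of thematic coding such a pipeline performs on free-text answers. Everything here is a hypothetical illustration: the theme names, keyword lists, and `tag_themes` function are assumptions for demonstration, not Anthropic's actual taxonomy or method, which reportedly uses the model itself rather than keyword matching.

```python
from collections import Counter

# Illustrative theme taxonomy (assumed, not Anthropic's real categories).
THEMES = {
    "hallucinations": ["hallucination", "made up", "wrong answer", "errors"],
    "job displacement": ["job", "replace", "unemployment"],
    "critical thinking": ["critical thinking", "lazy", "dependence"],
}

def tag_themes(answer: str) -> list[str]:
    """Assign every theme whose keywords appear in the answer."""
    text = answer.lower()
    return [t for t, kws in THEMES.items() if any(k in text for k in kws)]

responses = [
    "The hallucinations cost me hours of rework.",
    "I worry AI will replace my job.",
]
counts = Counter(t for r in responses for t in tag_themes(r))
print(counts.most_common())
```

Aggregating tags this way is what lets per-respondent qualitative answers roll up into the percentage figures (27% hallucinations, 22% job displacement, and so on) that the study reports.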

“Beyond its scale and linguistic diversity, the project aimed to collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products,” said Deep Ganguli, who leads Anthropic’s societal impacts team and oversaw the research initiative.

Productivity and Personal Growth Drive AI Adoption:

While data quality and reliability drew criticism, the survey also underscored widespread acknowledgment of AI’s positive impact on productivity. Thirty-two percent of respondents said that AI tools had meaningfully improved their output at work.

An entrepreneur in the United Arab Emirates explained, “I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people — I don’t wait for anyone anymore.” Participants from Colombia, Japan, and the United States described similar gains, emphasizing how AI helps them free up time for family, hobbies, and creative exploration.

In total, nearly one in five users (19%) said AI had fallen short of their expectations. Yet usage patterns demonstrate remarkable versatility: respondents reported employing AI as a productivity assistant, educational tutor, design partner, creative collaborator, or even an emotional support companion.

A vivid example came from a soldier in Ukraine, who wrote, “In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life — my AI friends.”

Regional and Economic Divides in AI Optimism:

Regional variation was pronounced. Saffron Huang, the lead researcher on the project, found that respondents in South America, Africa, and across South and Southeast Asia expressed more optimism than users in Europe, the United States, or East Asia.

“The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure,” said Huang. She added that this optimism might reflect a sample skew toward early adopters in developing markets — individuals inclined to view new technologies as opportunities rather than threats.

“They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries,” she said.

According to Anthropic’s researchers, AI’s limited visibility in daily workflows across lower-income economies may explain the difference. “If AI hasn’t visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist,” the team wrote in a companion blog post.

Next Steps: Measuring AI’s Real-World Impact:

Anthropic plans to extend its Claude Interviewer research framework into longitudinal studies that track how AI affects users’ lives over time. “The goal is to better measure both the improvements and the harms — and to use those insights to make systemic refinements,” said Ganguli.

The company’s approach — embedding feedback collection directly into an AI platform — represents an emerging model for data-driven, iterative AI development. By combining self-reported user experience data with large-scale text analytics, Anthropic aims to better understand how its models interact with human needs and constraints.

Industry and Research Community Respond:

The study has drawn attention across the AI community for its unprecedented reach and innovative methodology. Nickey Skarstad, director of product at language-learning company Duolingo, praised the work’s ambition. On LinkedIn, she wrote: “For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we’ve never had access to before.”

Still, several researchers remain cautious about overinterpreting the results. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, expressed reservations on X, saying he was “sceptical” about calling the study a new form of science due to potential selection bias and limitations in survey design. “A human qualitative researcher would take time to build trust with their participants, hold the space for reflection, introspection, contradictions — that’s the whole point of it,” he wrote.

Methodological caveats extend to demographics. Almost half of the survey’s respondents were based in North America or Western Europe, while regions such as Central Asia had only several hundred participants.

Ilan Strauss, an economist and director of the AI Disclosures Project, described the initiative as “an excellent piece of work,” but urged careful interpretation. He noted that the absence of reported confidence intervals — standard practice in survey-based research — makes it difficult to measure uncertainty. Self-reported productivity gains, he added, are inherently prone to bias.

A Global Mirror for Human-AI Relations:

Despite these caveats, the Claude Interviewer study illustrates a broader shift in the relationship between humans and AI systems. As AI technologies proliferate across regions and industries, they are becoming both instruments of empowerment and sources of anxiety — mirroring social, economic, and cultural dynamics in striking ways.

While western economies debate AI-driven labor disruption and ethical alignment, many in emerging markets frame AI as a means of upward mobility and creative expansion. This duality — between apprehension and aspiration — may shape not only AI adoption patterns but also future research and regulatory directions across global contexts.

References:

https://www.ft.com/content/e074d3a9-7fd8-447d-ac0a-e0de756ac5c5?syn-25a6b1a6=1 (PAYWALL)

https://www.joist.ai/post/ai-hallucinations-what-they-are-and-why-it-matters

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Alphabet’s 2026 capex forecast soars; Gemini 3 AI model is a huge success

Analysis & Economic Implications of AI adoption in China

China’s open source AI models to capture a larger share of 2026 global AI market

AWS to deploy AI inference chips from Cerebras in its data centers; Anapurna Labs/Amazon in-house AI silicon products

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

 

 

Telco investments in mobile core networks surge 83% in 2025-Q4, but what about ROI?

According to new data from market research firm Omdia (owned by Informa), Q4 2025 investments in 5G SA core networks surged 83% year-over-year. For OEMs, this uptick suggests a pivot away from the stagnant 5G Standalone (SA) momentum of recent years. Omdia identified North America and EMEA as the primary growth engines for the quarter. “The surge in 5G core investment underscores CSPs’ strategic focus on enabling new revenue streams and digital transformation,” said Roberto Kompany, Principal Analyst for Mobile Infrastructure at Omdia, in a statement. “This momentum is reflected in AT&T’s nationwide 5G SA and RedCap deployment and Verizon’s launch of a new enterprise-grade fixed wireless access (FWA) slice,” he said.

Ookla and Omdia recently noted accelerating 5G SA adoption in Europe, but the region continues to trail global leaders due to its low baseline. Spain remains a standout exception. Telefónica recently achieved a domestic milestone by deploying 5G SA in-building coverage via a Vantage Towers DAS, and has partnered with Airbus Helicopters to integrate 5G SA into manned and unmanned rotary-wing platforms for the Spanish armed forces. Despite broader deployments in the UK and Germany, a significant performance gap remains.

The GCC region (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE) currently delivers median 5G SA download speeds up to five times faster than European averages. This disparity highlights a capability gap, rather than a coverage issue, between mature and emerging markets. The industry footprint is expanding, with Omdia reporting 88 commercial 5G SA deployments to date—a notable increase from the 72 reported by Dell’Oro in late 2025.

…………………………………………………………………………………………………………………………………………………………………………………………….

While Dell’Oro confirms the 5G SA Core market growth, it emphasizes that subscriber migration and active utilization, rather than just “flags in the ground,” are the true long-term drivers of infrastructure spend. For the first time, the 5G Mobile Core Network (MCN) market accounted for a 50 percent share of the total MCN market.

“In 2025, the MCN market recorded its highest year-over-year revenue growth rate since 2014,” stated Dave Bolan, Research Director at Dell’Oro Group. “This was driven by record-setting growth rates in all market segments: 4G MCN (highest since 2019), 5G MCN (highest since 2022), and Voice Core (highest since 2007). 4G MCN gains came from Caribbean and Latin America (CALA) and Europe, Middle East, Africa (EMEA) regions; 5G MCN from all regions; and Voice Core, primarily from Asia Pacific and EMEA regions.

“5G MCNs led the way in 2025 growth, as 5G Standalone (5G SA) networks reached an inflection point and moved towards mass market appeal, as more 5G SA networks expand in population coverage in urban, suburban, and rural areas. Voice Core was the next major contributor to growth in 2025, driven by planned 3G MCN shutdowns, which required upgrades from Circuit Switched Core to IMS Core, and IMS Core modernization to a cloud-native IMS Core for VoNR in 5G SA networks. Meanwhile, 4G MCNs expanded due to subscriber growth in Africa and South America,” added Bolan.

Looking ahead, Omdia forecasts sustained double-digit growth for 5G Core investments through 2026, fueled by the requirement for nationwide service parity and increased network capacity. This outlook favors the leading 5G Core vendors—Huawei, Ericsson, and Nokia—who currently maintain the highest market shares.

……………………………………………………………………………………………………………………………………………………………………………………………

ROI for 5G SA Core Networks?

The return on investment (ROI) for 5G Standalone (SA) core networks is currently at a critical inflection point. While initial years were marked by complaints about slow momentum, 2025 and 2026 have seen a shift from pilot testing to an execution-driven phase with measurable, albeit varied, returns. In the 2025–2026 market, enterprise ROI for 5G SA is primarily driven by three high-growth segments: private 5G networks, RedCap IoT, and network slicing. While public 5G consumer returns remain steady, these B2B use cases are where Mobile Network Operators (MNOs) are finding the most immediate “killer applications.”

ROI Drivers in 2026:
  • Operational Efficiency: 5G SA cores are cloud-native, allowing for microservices that can be deployed in hours rather than days. This reduces long-term operational costs (OpEx) by automating network functions and improving energy efficiency per gigabyte transmitted.
  • New Revenue Streams: Unlike 5G Non-Standalone (NSA), the SA core enables Network Slicing and Ultra-Reliable Low-Latency Communications (URLLC). These are essential for high-margin B2B services like industrial robotics, emergency services, and “SuperMobile” slicing for enterprises.
  • Monetization of “Capability”: In regions like the GCC (Gulf Cooperation Council), 5G SA delivers speeds up to five times faster than European averages, allowing operators to charge for performance-based tiers rather than just data volume.
  • Consumer Benefits: Early data from the UK indicates that 5G SA can extend device battery life by 11% to 22% due to its unified control plane, creating a tangible value proposition for premium consumer plans.
Current Market Challenges:
  • The “Value Perception Gap”: Despite nationwide rollouts, some operators (like AT&T in late 2025) saw mobile service revenue grow by only 3.4%, barely outpacing inflation.
  • Regional Disparity: ROI is strongest in North America and China, where industrial policy and sovereign wealth have accelerated deployment. In contrast, Europe faces a “regulatory quagmire” and higher costs for removing legacy equipment, slowing its path to profitability.
  • The 6G Factor: Some operators are hesitant to invest billions in a full 5G SA overhaul if the technology is viewed as a “transitional” generation that may be superseded by 6G-ready cores in the late 2020s.
Strategic Outlook for 2026:

Market research from the Dell’Oro Group projects the 5G Mobile Core Network market to grow at a 12% CAGR through 2030, reaching historic highs in 2026. For most operators, the consensus is that 5G SA is a strategic necessity to maintain competitiveness, even if the short-term financial returns are uneven.

In his February 2026 newsletter, Stéphane Téral wrote, “2026 points to a more mixed environment—RAN slightly down, 5G Core continuing to grow—against a backdrop of uncertain capex and an accelerating shift toward opex and software-driven models.”
…………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.telecoms.com/5g-6g/telcos-spend-more-on-the-core-as-5g-sa-picks-up

https://www.linkedin.com/pulse/february-newsletter-4q25-fy25-wireless-infrastructure-update-ug9ec/

Dell’Oro: Mobile Core Networks +15% in 2025; Ookla: Global Reality Check on 5G SA and 5G Advanced in 2026

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

Téral Research: 5G SA core network deployments accelerate after a very slow start

Analysts: Telco CAPEX crash looks to continue: mobile core network, RAN, and optical all expected to decline

Building and Operating a Cloud Native 5G SA Core Network

MCN Market Roared Back in 2025 With 15 Percent Growth, According to Dell’Oro Group

Analysis of Airspan Networks & Atika Alliance: Resilient, Multi-Domain 5G Mission Critical Connectivity for the Defense Industry

Airspan Networks Holdings LLC (“Airspan”) and ATIKA Venture, S.L. (“Atika”) have entered into a strategic collaboration to advance resilient, multi-domain 5G communications for defense and security operations. The initiative focuses on developing interoperable, deployable network systems optimized for mission-critical connectivity across terrestrial and airborne domains.

The cooperation framework covers both commercial and technical engagements, with initial activities centered in Spain and expansion potential across Europe. The partnership unites Airspan’s portfolio in Open RAN (O-RAN), 5G, and commercial Air-to-Ground (ATG) communications with Atika’s capabilities in tactical 5G deployments, AI-driven network analytics, and secure 5G core integration for defense-grade environments.

Joint programs will address the convergence of deployable 5G infrastructure and mobile ad hoc network (MANET) systems under a unified network orchestration and control layer. The combined architecture aims to provide secure, high-throughput connectivity in dynamic and contested electromagnetic environments. Technical priorities include rapid network deployment, automated resilience management, AI-assisted spectrum optimization, and end-to-end encryption aligned with defense mission profiles.

Image Credit:  Aviat Networks

“Airspan has a strong history of solving advanced connectivity challenges, including low-latency, high-mobility communications through our Air-to-Ground In-Motion 5G platform,” stated Glenn Laxdal, CEO of Airspan. “Through this collaboration with Atika, we aim to adapt our commercial-grade 5G and O-RAN technologies to defense use cases that demand operational resilience and interoperability across domains. Atika’s deep experience in defense communications, combined with their expertise in AI-enabled network intelligence and secure 5G core technologies, represents a substantial complement to our portfolio.”

“The operational landscape increasingly depends on adaptable, intelligent, and sovereign networks,” said Ana Rodríguez Quirós, Managing Director of Atika. “Our partnership with Airspan strengthens our ability to support multi-domain 5G for defense users, extending connectivity beyond satellite and traditional radio systems. Building on our collaboration with the Spanish Army, this alliance demonstrates how advanced 5G network architectures can directly enhance mission readiness, mobility, and overall operational effectiveness.”

About Airspan:

Headquartered in Plano, Texas, Airspan Networks Holdings LLC is an innovative U.S.-based provider of wireless network solutions with a global presence, focused on delivering carrier-grade 5G and advanced wireless connectivity. Airspan’s portfolio spans three core solution areas – in-building, outdoor, and air-to-ground – and includes market-leading products for DAS, Open RAN, and small cells across both public and private network settings. Airspan supports mobile network operators, neutral-host providers, enterprises, public-sector organizations, and other service providers in building reliable, scalable wireless networks that enhance coverage and capacity while enabling fast, efficient deployment.

Visit our website at https://airspan.com/

About Atika:

Atika is a Spanish technology company specializing in advanced tactical communications and deployable 5G networks for defense and security. Its technology focuses on federated architectures, multi-domain connectivity, and network intelligence capabilities designed for real operational environments.

……………………………………………………………………………………………………………………………………………………….

Requirements and Analysis:

1.] Resilient, mission-critical 5G connectivity (URLLC that meets the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation).

2.] Unified network orchestration and control layer (the 5G Service-Based Architecture depends on implementation of 3GPP Release 17 and 18 specifications).

1.  Enhancements to the 5G NR Physical Layer (PHY) to support Ultra-Reliable Low-Latency Communications (URLLC) in the Radio Access Network (RAN). Basic URLLC support was established in Release 15, but when 3GPP Release 16 was frozen in July 2020, the URLLC RAN enhancements had not been completed or performance tested. Hence, the ITU-R M.2150 standard for IMT-2020 RIT/SRITs initially did not meet the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation.

The most significant PHY-layer optimizations were finalized in Release 16 (Phase 2) and Release 17 (Phase 3), with more to come in Release 18 as described below.

a] Release 16 (The “IIoT and URLLC” Phase):
This release introduced foundational PHY improvements to reach “six nines” (99.9999%) reliability. Key features included:

  • New DCI Formats: Compact Downlink Control Information (DCI) formats (e.g., Format 0_2 and 1_2) were added to reduce signaling overhead and improve robustness.
  • Sub-slot HARQ-ACK Feedback: Enabled faster feedback by allowing multiple HARQ-ACK transmissions within a single slot.
  • PUSCH Repetition Type B: Introduced to allow even finer-grained (mini-slot based) repetitions for low-latency uplink, enabling transmissions to cross slot boundaries.
  • Intra-UE Prioritization: Standardized the ability for a device to prioritize a high-priority (URLLC) transmission over a lower-priority (eMBB) one if they overlap in time.
  • Multi-TRP (CoMP): Enhanced support for Transmission and Reception Points (TRPs) to provide spatial diversity, ensuring communication continues if one path is blocked.
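As a back-of-the-envelope illustration of the repetition features above: if each independent transmission attempt fails with block error rate (BLER) p, then K repetitions all fail together with probability p^K. A minimal Python sketch (the 1% per-attempt BLER is a hypothetical operating point, not a figure from the specifications):

```python
def residual_error(bler: float, repetitions: int) -> float:
    """Residual failure probability after K independent repetitions:
    the transmission is lost only if every attempt fails, so
    per-attempt error probabilities combine multiplicatively."""
    return bler ** repetitions

# Hypothetical per-attempt BLER of 1% (1e-2):
for k in (1, 2, 3):
    print(f"{k} repetition(s): residual error ~ {residual_error(1e-2, k):.0e}")
```

At a hypothetical 1% BLER, three repetitions push the residual error into the 1e-6 ("six nines") regime, at the cost of added latency, which is why Rel-16 pairs repetitions with mini-slot (Type B) scheduling.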

b] Release 17 (The “Further Enhanced URLLC” Phase):
Completed in 2022, this release focused on consolidating these features and extending them to more complex scenarios:

  • URLLC in Unlicensed Spectrum (NR-U): Adapted URLLC PHY procedures for unlicensed bands, addressing regulatory constraints like Listen-Before-Talk (LBT).
  • Improved HARQ-ACK and CSI Reporting: Introduced more efficient and robust feedback mechanisms for better link adaptation.
  • Enhanced Multi-TRP for UL: Further optimized uplink transmissions using multiple TRPs for increased reliability.
Summary of Implemented Rel-17 RAN Enhancements:
  • Feedback Reliability: Improved HARQ-ACK and Channel State Information (CSI) reporting to ensure the network can adapt to rapid channel changes.
  • Traffic Prioritization: Intra-UE prioritization allows URLLC data to “pre-empt” or take priority over standard mobile broadband (eMBB) data within the same device.
  • Power Savings: New mechanisms like Paging Early Indication (PEI) allow URLLC-capable sensors to remain in low-power states longer without sacrificing the ability to wake up instantly for critical data.
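The intra-UE prioritization rule described above can be sketched as a simple scheduling decision: when a high-priority URLLC grant overlaps a lower-priority eMBB grant in time, the URLLC transmission wins. A toy model, with class names and priority encodings invented for illustration rather than taken from 3GPP signaling:

```python
from dataclasses import dataclass

URLLC, EMBB = 1, 0  # higher value = higher priority (illustrative encoding)

@dataclass
class Grant:
    name: str
    priority: int
    start: float  # ms
    end: float    # ms

def overlaps(a: Grant, b: Grant) -> bool:
    """True if the two grants overlap in time."""
    return a.start < b.end and b.start < a.end

def select_transmission(a: Grant, b: Grant) -> Grant:
    """On a time overlap, transmit the higher-priority grant
    (the lower-priority one is pre-empted); otherwise there is
    no conflict and the earlier grant simply goes first."""
    if overlaps(a, b):
        return a if a.priority >= b.priority else b
    return a if a.start <= b.start else b

embb = Grant("eMBB data", EMBB, start=0.0, end=1.0)
urllc = Grant("URLLC alarm", URLLC, start=0.5, end=0.75)
print(select_transmission(embb, urllc).name)  # URLLC alarm
```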
c] Current Status:
While the core functional specifications for URLLC in the RAN are considered “complete” as of Release 17, the ecosystem continues to evolve into 3GPP Release 18 (5G-Advanced), which looks at further specialized enhancements for Extended Reality (XR) and Artificial Intelligence (AI).
Modem and Chipset Comparison (Device Side):

The following 5G chipsets/modems implement Rel-17 URLLC features:

  • Qualcomm: World’s first 5G Advanced-ready modem. Supports enhanced HARQ-ACK and CSI feedback for reliability, and AI-based beam management to maintain stable URLLC links.
  • MediaTek M90: Conforms to Rel-17 standards and aligns with Rel-18 5G-Advanced. Implements Rel-17 Paging Early Indication (PEI) to reduce power while maintaining low-latency readiness.
  • Samsung Exynos Modem 5300: While primary documentation emphasizes Rel-16, Samsung achieved 1024 QAM (defined in Rel-17) in partnership with Qualcomm. Supports ultra-low latency via FR2 and EN-DC.
Network infrastructure implementation often takes the form of software-defined upgrades to existing massive MIMO and base station hardware.
  • Ericsson: Enabled “Time-Critical Communication” as a software upgrade on its RAN. Its Rel-17 implementation focuses on hybrid automatic repeat request acknowledgement (HARQ-ACK) enhancements, intra-UE multiplexing, and time synchronization for Industrial IoT (IIoT).
  • Nokia: Updated its AirScale portfolio to support Rel-17 features, specifically targeting Time-Sensitive Communications (TSC) and deterministic networking for private factory environments.
  • Huawei: Has integrated Rel-17 URLLC enhancements as part of its “5.5G” (5G-Advanced) marketing, focusing on achieving sub-10ms latency for wide-area industrial control and 1ms for local-area automation.

2.  3GPP has specified a unified management and orchestration framework for 5G systems, primarily developed by working group SA5 (Management, Orchestration, and Charging). Starting from Release 15, 3GPP introduced a Service-Based Management Architecture (SBMA), which acts as a unified layer to manage and orchestrate 5G networks, including the Core, RAN, and end-to-end network slices.

Key aspects of the 3GPP unified 5G orchestration and control layer include:
  • Service-Based Management Architecture (SBMA): Instead of legacy, vendor-specific interfaces, 3GPP adopted a service-oriented approach. This architecture uses Management Services (MnS), which provide standardized interfaces for both management and orchestration, facilitating multi-vendor interoperability.
  • End-to-End Slice Management: The 3GPP standards (notably TS 28.530/531/532/533) define a common approach to manage the entire lifecycle of a 5G network slice (creation, activation, supervision, and termination) across RAN, Core, and Transport domains.
  • Network Automation (NWDAF): The Network Data Analytics Function (NWDAF), introduced in Release 15, is a key component for automated control. It collects network data, analyzes it, and feeds back insights to assist in policy management (PCF) and slice selection (NSSF).
  • Intent-Driven Management: 3GPP is enhancing its standards to support intent-driven management, enabling operators to manage network resources based on high-level desired outcomes rather than low-level configuration, which is crucial for autonomous networks.
  • AI/ML Management: Recent releases (18/19) focus on a unified, domain-independent AI/ML management and orchestration framework that supports the full lifecycle of AI/ML models within the 5G system.
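The slice lifecycle phases standardized in TS 28.530 (creation, activation, supervision, termination) can be pictured as a small state machine. The sketch below is illustrative only; the class and method names are invented for this example and are not 3GPP Management Service interfaces:

```python
from enum import Enum, auto

class SliceState(Enum):
    CREATED = auto()
    ACTIVE = auto()
    TERMINATED = auto()

# Allowed lifecycle transitions, loosely following the TS 28.530 phases
# (creation -> activation -> supervision while active -> termination):
_TRANSITIONS = {
    ("create", None): SliceState.CREATED,
    ("activate", SliceState.CREATED): SliceState.ACTIVE,
    ("terminate", SliceState.ACTIVE): SliceState.TERMINATED,
}

class NetworkSlice:
    def __init__(self, slice_id: str):
        self.slice_id = slice_id
        self.state = _TRANSITIONS[("create", None)]

    def _apply(self, op: str) -> None:
        key = (op, self.state)
        if key not in _TRANSITIONS:
            raise ValueError(f"{op!r} not allowed in state {self.state.name}")
        self.state = _TRANSITIONS[key]

    def activate(self) -> None:
        self._apply("activate")

    def terminate(self) -> None:
        self._apply("terminate")

    def supervise(self) -> str:
        # Supervision (e.g. KPI monitoring) only applies to an active slice.
        if self.state is not SliceState.ACTIVE:
            raise ValueError("slice is not active")
        return f"monitoring KPIs for slice {self.slice_id}"

s = NetworkSlice("embb-01")
s.activate()
print(s.supervise())   # monitoring KPIs for slice embb-01
s.terminate()
print(s.state.name)    # TERMINATED
```

Encoding the allowed transitions in a table, rather than ad-hoc if/else logic, mirrors how a standardized lifecycle makes invalid operations explicit errors instead of silent state corruption.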

The latest 3GPP release with finalized specifications for Service-Based Management Architecture (SBMA) is Release 18 (Rel-18), which was functionally frozen in early 2024. Rel-18 includes enhanced study items (FS_eSBMA) focused on supporting management for 5G standalone (SA) and non-standalone (NSA) scenarios and management of Management Functions.

…………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.businesswire.com/news/home/20260319340548/en/Airspan-Networks-and-Atika-Form-Alliance-to-Advance-Resilient-Multi-Domain-5G-Connectivity-for-Defense

SNS Telecom & IT: Mission-Critical Networks a $9.2 Billion Market

3GPP Release 16 5G NR Enhancements for URLLC in the RAN & URLLC in the 5G Core network

3GPP Release 16 Update: 5G Phase 2 (including URLLC) to be completed in June 2020; Mission Critical apps extended

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

https://www.3gpp.org/news-events/3gpp-news/sa5-5g

Revolutionizing 5G Mission Critical Transport Networks (Part 2)

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2020/Documents/S01-1_Requirements%20for%20IMT-2020_Rev.pdf

 

IMT-2030 (“6G”) Minimum Technology Performance Requirements for Radio Interface Technologies

At its February 2026 meeting in Geneva, ITU-R WP 5D reached agreement on the technical performance requirements for IMT-2030, also known as 6G.  Formal approval is expected to follow when the parent ITU-R Study Group 5 meets in December 2026.

At that meeting, the WP 5D WG Technology Aspects/SWG Radio Aspects discussed all 16 contributions related to the document.  It was clarified that these requirements are to be evaluated according to the criteria defined in Reports ITU-R M.[IMT-2030.EVAL] and M.[IMT-2030.SUBMISSION], and are used only for the development of IMT-2030 radio interface technologies (RITs/SRITs).

IMPORTANT: As noted many times, 3GPP will specify the 6G Core network and 6G Architecture which will have their own performance requirements.  See References below.

The working party’s draft new report, “Minimum requirements related to technical performance for IMT‑2030 radio interface(s),” outlines 20 technical performance requirements (TPRs). Seven of them are new and specific to 6G performance. These IMT-2030 technical performance requirements will be used as unified requirements to evaluate the 6G radio interfaces (RITs/SRITs).

Image Credit:  ITU-R

…………………………………………………………………………………………………….

The IMT-2030 Usage Scenarios:

The full set of requirements is based on six proposed usage scenarios for 6G networks:

  • Immersive communication (IC)
  • Hyper reliable and low‑latency communication (HRLLC)
  • Massive communication (MC)
  • Ubiquitous connectivity (UC)
  • Artificial intelligence (AI) and communication (AIAC)
  • Integrated sensing and communication (ISAC)

The IMT-2030 framework:

The newly defined 6G requirements build on the IMT‑2030 framework that ITU first published in December 2023 as a globally harmonized foundation for next‑generation connectivity (Recommendation ITU‑R M.2160). This recommendation also defines the overarching principles for future network design, notably:

  • Sustainability.
  • Security and resilience.
  • Connecting the unconnected.
  • Ubiquitous intelligence.

ITU – the United Nations agency for digital technologies – aims for the 6th generation of mobile communications (6G) to enable affordable, resilient, energy‑efficient networks for health, education, agriculture and disaster response. Advanced networks also present a way to close the persistent digital divide that today leaves many people in low-income countries behind.

This work to date provides a unified technical foundation to evaluate the candidate radio interfaces for IMT-2030 and guide the evolution of global 6G research and standardization.

Groundwork for future resilience:

IMT‑2030 lays the groundwork for affordable, high‑quality connectivity to remote and underserved communities. By setting globally harmonized performance requirements, it aims to ensure access for everyone, make communication systems more resilient, support sustainability and implement energy‑efficient technologies. ITU aims for innovative 6G services to deliver broad social and economic benefits.

The 20 requirements set out in the new draft report are meant to provide a consistent basis for specification and evaluation. While the requirements establish minimum performance levels, they do not restrict implementation approaches or guarantee real-world deployment performance.

They reflect ongoing global research and technology activities and should pave the way for concrete IMT-2030 evaluation guidelines, the next step in ITU’s global standardization process for 6G.

Accordingly, the IMT-2030 draft report has been submitted for approval to ITU‑R Study Group 5, responsible for terrestrial radiocommunication services, at a meeting scheduled for 1 December 2026.

Until then, the draft remains available exclusively to ITU‑R members directly involved in its finalization and approval. You need a TIES login account to access ITU documents.

………………………………………………………………………………………………………………………..

About ITU-R Study Group 5:

ITU-R Study Group 5 is responsible for Terrestrial Services, including Fixed Wireless, Mobile (land, maritime and aeronautical), radiodetermination service as well as amateur and amateur-satellite services and the development of international standards, regulation and guidelines for these systems. The group’s work encompasses a wide range of topics, including spectrum management, network architecture, and radio interface technologies.

About ITU-R Working Party 5D:

ITU-R Working Party 5D is responsible for the development and harmonization of international standards for International Mobile Telecommunications (IMT) systems, including the latest IMT-2030 (6G) technology. The working party’s efforts ensure interoperability and global compatibility for wireless communication systems.

Further information on IMT‑2030 and related activities is available on the portal for IMT towards 2030 and beyond.

………………………………………………………………………………………………………..

References:

IMT-2030: Technical requirements for the 6G future

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/Pages/default.aspx

Roles of 3GPP and ITU-R WP 5D in the IMT 2030/6G standards process

ITU-R M.[IMT-2030.EVAL] & ITU-R M.[IMT-2030.SUBMISSION] reports: Evaluation & Submission Guidelines for 6G RIT/SRITs (6G)

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Development of “IMT Vision for 2030 and beyond” from ITU-R WP 5D


Part II: Outcomes from the IEEE–ITU Sustainable Climate Symposium

IEEE–International Telecommunication Union (ITU) Symposium on Achieving a Sustainable Climate – Part II

by Marta Koch, IEEE Europe Member & PhD Researcher & Teaching Facilitator, Imperial College London with Alan J Weissberger, IEEE Techblog Content Manager

Editor’s Note: This is the second of a two-part article summarizing this ITU-IEEE Symposium. Part I is here.

Why AI Matters for Sustainable Telecommunications:

The IEEE–ITU Symposium underscored that developing AI‑enabled sustainable telecommunications networks represents a fundamentally multidisciplinary challenge situated at the intersection of communications engineering, energy systems, computer science, climate science, and public policy. Delivering meaningful climate outcomes through digital technologies requires not only progress in algorithms, architectures, and network optimization, but also institutional frameworks that enable responsible, interoperable, and scalable deployment across diverse operational contexts.

A systems-level view of telecommunications sustainability is needed, one that moves beyond traditional performance metrics to networks that are intelligent, adaptive, and energy‑efficient by design. Building on ITU analyses positioning AI, advanced connectivity, and digital platforms as key enablers of environmental action, participants also highlighted the importance of understanding their environmental trade‑offs.

Machine Learning for Climate‑Aware Network Optimization:

Machine learning (ML) is emerging as a strategic enabler of climate‑aligned energy management across telecom networks. ML techniques now underpin network‑wide energy optimisation, demand and renewable generation forecasting, power–communications coordination, and climate services such as early warning and adaptive planning. In resource‑constrained or climate‑vulnerable contexts, ensuring model robustness, transparency, and alignment with sustainability objectives is essential. Research priorities include energy‑ and carbon‑aware model design, integration of grid and resilience metrics, and standardised evaluation methods for sustainability‑critical ML applications.

Use Cases for Energy‑Efficient Operations via AI:

Important AI applications include traffic prediction, adaptive resource management, energy‑aware RAN optimisation, and predictive network sleep modes. Cross‑layer and multi‑timescale optimisation can deliver substantial energy savings without compromising service quality.
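As a deliberately simplified illustration of predictive network sleep modes: a cell can enter a low-power state whenever forecast traffic falls below a threshold. Real deployments use far richer models; here the forecast is a naive moving average and all numbers are hypothetical:

```python
def moving_average_forecast(history, window=3):
    """Naive traffic forecast: mean of the last `window` load samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def sleep_decision(history, sleep_threshold=10.0):
    """Sleep the cell if forecast load (arbitrary units) is below threshold."""
    return moving_average_forecast(history) < sleep_threshold

night = [12.0, 6.0, 4.0, 2.0]   # traffic tailing off overnight
day = [40.0, 55.0, 60.0, 70.0]  # busy-hour ramp-up

print(sleep_decision(night))  # True  -> enter sleep mode
print(sleep_decision(day))    # False -> stay active
```

In practice the forecaster would be an ML model trained per cell, and the decision would also weigh wake-up latency against the quality-of-service targets discussed above.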

Network Resilience Under Climate Stress:

With climate‑related disruptions increasing globally, AI‑enabled predictive maintenance, self‑healing architectures, and climate‑aware planning have become core to resilient network operations. These approaches align with UN‑led initiatives on climate services and disaster early warning systems.

Power–Communications Interdependencies:

Participants highlighted the coupling between power and communications systems, emphasising cascading‑failure scenarios and the potential of AI‑enabled digital twins for joint optimisation. These perspectives align with ITU frameworks on digital public infrastructure and smart sustainable cities, which stress interoperability across physical and digital systems.

Sustainable AI and Hardware–Software Co‑Design:

Effective climate action depends on co‑optimising physical and digital infrastructure—from data centres and energy systems to ML models and orchestration layers. Sustainable network intelligence requires energy‑efficient algorithms, hardware‑aware deployment, and system‑level governance. The approach aligns with ITU’s Green Digital Action initiative and related efforts by ISO, IEC, UNEP, and WMO to advance standards‑driven, science‑informed digital sustainability.

Digital Public Infrastructure and Climate‑Resilient Digitalization:

Digital Public Infrastructure (DPI)—open and interoperable systems for identity, payments, data exchange, and connectivity—was highlighted as foundational for inclusive, climate‑resilient digital transformation. Effective DPI design requires governance, risk management, and safeguards, as emphasised by UNDP and the UN Office for Digital and Emerging Technologies.

IEEE Technology Assessment Tool:

The symposium introduced an IEEE envisioning proof‑of‑concept tool to support sustainable network planning through systematic assessment of digital and energy technologies, evaluating trade‑offs across performance, sustainability, and resilience.

Importance of International Standards:

A central outcome of the symposium was recognition of the critical role of international standardization in translating technological innovation into practical, climate‑relevant impact. As telecommunications networks become increasingly software‑defined, AI‑driven, and interconnected with energy and physical infrastructure systems, standards provide the technical and governance foundations essential for interoperability, data integrity, trustworthiness, and long‑term sustainability. Presentations from global standards organizations highlighted the importance of harmonized frameworks that can minimize market fragmentation, facilitate cross‑border interoperability, and incorporate environmental and resilience criteria directly into network design, operation, and lifecycle management.

Standards were identified as key to scalable, trustworthy AI deployment, with interoperability and data governance central to ITU‑T Study Group 5’s agenda.

Sessions also reinforced the importance of equitable access—advancing AI‑assisted network planning and cost‑efficient deployment in climate‑vulnerable regions to balance sustainability, affordability, and inclusion.

The symposium further emphasized the need for a system‑level approach, recognizing that telecommunications networks operate as integral components within broader energy, transport, and urban infrastructure ecosystems. In this context, AI and machine learning increasingly serve as coordinating layers across hardware, software, and physical assets, enabling cross‑domain optimization. Standardization plays a crucial enabling role by aligning interfaces, performance metrics, and assessment methodologies across sectors, thereby supporting coherent operation of digital and physical systems under conditions of resource constraint, geopolitical uncertainty, and climate stress.

Implications for IEEE Communications Society:

For IEEE Communications Society (ComSoc) members, discussions highlighted a dual responsibility and opportunity. There is a responsibility to ensure future communications networks are designed to minimize environmental impact, maintain resilience under climate extremes, and promote equitable access to essential connectivity and data sharing.
Simultaneously, there is an opportunity for researchers and practitioners to contribute technical evidence, performance models, and quantitative metrics that inform and advance international standardization.

By maintaining sustained collaboration among research institutions, industry stakeholders, standards bodies, and policy entities—and engaging with the broader frameworks of global climate and sustainable‑development governance—the telecom community can play a defining role in enabling energy‑efficient, climate‑aware, and resilient digital infrastructure worldwide.

…………………………………………………………………………………………………………………………………………………………………………

References:

[1] M. Koch and UN Climate Technology Centre and Network (UN CTCN), “Maximizing Emerging Trends in Locally-Led AI Solutions for Climate Action,” SDG Knowledge Hub, International Institute for Sustainable Development, 2025.
https://sdg.iisd.org/commentary/guest-articles/maximizing-emerging-trends-in-locally-led-ai-solutions-for-climate-action/

[2] M. Koch, “Stakeholder asset-mapping of climate technology infrastructures,” Nature Reviews Earth & Environment, 2025.
DOI: 10.1038/s43017-025-00737-z

[3] World Meteorological Organization, Early Warnings for All: Executive Action Plan 2023–2027, WMO, Geneva, 2023.
https://wmo.int/media/magazine-article/overview-of-early-warnings-all-executive-action-plan-2023-2027

[4] United Nations Environment Programme, Global Climate Risk Assessment Framework, UNEP, Nairobi, 2023.
https://www.unepfi.org/themes/climate-change/2023-climate-risk-landscape/

[5] ITU, WMO, UNEP, and UNFCCC, Global Initiative on Resilience to Natural Hazards through AI Solutions, United Nations, Geneva. https://www.itu.int/en/ITU-T/extcoop/ai4resilience/Pages/default.aspx

[6] ITU-T Study Group 5, Work Programme on Environment, Climate Action, Circular Economy and Electromagnetic Fields, International Telecommunication Union, Geneva.
https://www.itu.int/en/ITU-T/studygroups/2022-2024/05/

[7] International Telecommunication Union – Telecommunication Standardization Sector, Building Digital Public Infrastructure for Cities and Communities, ITU, Geneva, 2025.
https://www.itu.int/dms_pub/itu-t/opb/tut/T-TUT-SMARTCITY-2025-9-PDF-E.pdf

[8] International Telecommunication Union – Telecommunication Standardization Sector, Frontier Technologies to Protect the Environment and Tackle Climate Change (T-TUT-ICT-2020-02), ITU, Geneva, 2020.
https://www.itu.int/dms_pub/itu-t/opb/tut/T-TUT-ICT-2020-02-PDF-E.pdf

[9] International Telecommunication Union – Telecommunication Standardization Sector, Smart Sustainable Cities and Digital Infrastructure Frameworks, ITU, Geneva, 2025.
https://www.itu.int/dms_pub/itu-t/opb/tut/T-TUT-SMARTCITY-2025-6-PDF-E.pdf

[10] International Telecommunication Union, Green Digital Action, ITU, Geneva.
https://www.itu.int/initiatives/green-digital-action/

[11] World Bank Group, Digital Public Infrastructure and Development: A World Bank Group Approach, Washington, DC, 2025.
https://openknowledge.worldbank.org/entities/publication/cca2963e-27bf-4dbb-aa5a-24a0ffc92ed9

[12] United Nations Office for Digital and Emerging Technologies and United Nations Development Programme, DPI Safeguards Initiative. https://www.dpi-safeguards.org

……………………………………………………………………………………..

About Marta Koch:

Marta Koch is an IEEE member, PhD Researcher and Teaching Facilitator at Imperial College London, Research Associate at the Oxford Computational Political Science Group at the University of Oxford and Research Consultant at UNOPS. She has been nominated as research delegate to UN Climate Change (UNFCCC), UNEP, UNDESA, UNIDO and ITU meetings.

Part I: Outcomes from the IEEE–ITU Sustainable Climate Symposium

IEEE–International Telecommunication Union (ITU) Symposium: Achieving a Sustainable Climate 2025 Outcomes: Capitalizing on AI for Energy-Efficient and Climate Resilient Telecommunications Networks

By Marta Koch, IEEE Europe Member & PhD Researcher & Teaching Facilitator, Imperial College London with Alan J Weissberger, IEEE Techblog Content Manager

Editor’s Note: This is the first of a two part article summarizing this ITU-IEEE Symposium.  The second article is here.

Introduction:

Telecommunications networks are increasingly recognized as critical infrastructure for both economic development and societal resilience. As climate change accelerates and energy systems undergo rapid transformation, the telecoms sector faces a dual challenge: 1.] reducing its own environmental footprint, and 2.] ensuring reliable connectivity under growing physical, climatic, and systemic stress.

These two themes were the focus of the IEEE–International Telecommunication Union (ITU) Symposium on Achieving a Sustainable Climate, which was held in December 2025 at the ITU headquarters in Geneva.

The symposium convened researchers, industry leaders, standards bodies, and United Nations agencies to examine how digital transformation, artificial intelligence (AI), and emerging ICT solutions can support the energy transition and climate mitigation and adaptation, and the governance and standardisation developments needed to effectively and sustainably leverage this technology globally.

As an Imperial College London researcher and IEEE member, I attended the symposium as part of ongoing work at the intersection of telecommunications, artificial intelligence, and climate action, with a focus on the governance, design, and deployment of AI-enabled systems for climate mitigation and adaptation, as well as the environmental and systems-level sustainability of AI-driven digital infrastructure.

Organization and Collaboration:

The symposium was co-organized by the ITU Telecommunication Standardization Bureau and ITU-T Study Group 5, which focuses on environment, climate action, circular economy, and electromagnetic fields. This collaboration underscored the central role of international standardization in shaping sustainable, climate-resilient ICT systems and provided a strong standards-oriented framework for discussions on AI deployment, energy efficiency, and network resilience [6].

Symposium photo courtesy of the ITU

……………………………………………………………………………………………………………………………………….

Key Discussion Themes:

Across plenary sessions, thematic panels and case studies, several cross-cutting issues emerged:

  • Expanding role of AI and machine learning (ML) in enabling more energy-efficient, resilient, and inclusive telecommunications networks.
  • The role of the ICT sector in accelerating decarbonisation and strengthening climate adaptation, particularly in support of the global energy transition
  • Interactions between physical and digital infrastructure systems, including electrification and communications, as enablers of circular economy models
  • Digital and AI standardisation as foundations for sustainable, climate-resilient development and place- and people-based outcomes
  • Intersections between decarbonisation, electrification, circularity, digital access, and equity
  • Public–private collaboration models supporting climate finance, eco-design, and scalable deployment in climate-vulnerable and developing regions.

International Policy Governance Perspectives at the Symposium:

The symposium featured strong representation from international organisations, grounding technical discussions in policy, standards, finance, and real-world deployment realities across the ICT, energy, and climate domains.
ITU delegates Tomas Lamanauskas, Seizo Onoe, Bilel Jamoussi, and Dominique Würges emphasized the importance of aligning global mandates with local needs in sustainable ICT ecosystems.

The following are essential to both decarbonization and resilient digital infrastructure: robust standards, interoperability, and AI governance frameworks (particularly those addressing environmental sustainability, circular economy principles, and responsible management of electromagnetic fields). That message was consistent with the opening plenary’s framing of international policy, eco-design, and circularity as foundational for practical deployment.

Energy and electrification perspectives were discussed by Dario Liguti of the United Nations Economic Commission for Europe and Norela Constantinescu of the International Renewable Energy Agency. They highlighted the global energy transition focus on both progress and persistent gaps in decarbonization and electrification. Coordinated planning between energy systems and telecommunications can significantly improve resilience, system efficiency, and equity for climate-adaptive services.

Industrial deployment and logistics viewpoints were provided by Luca Longo of the United Nations Industrial Development Organization and Yaxuan Chen of the Universal Postal Union. They described how integrated ICT and energy solutions could enhance operational outcomes, sustainability, and service delivery across industrial and sectoral contexts. Cross-sector collaboration was identified as a critical enabler of scalable impact.

Standards alignment was discussed by Matthew Doherty of the International Electrotechnical Commission and Noelia García Nebra of the International Organization for Standardization. They reinforced that international standards frameworks are essential for translating research and innovation into deployable, interoperable solutions. This theme resonated strongly with the standards session’s emphasis on practical tools to support sustainable, climate-resilient outcomes across markets and regions.

Financing and digital innovation perspectives were contributed by Seth Ayers of the World Bank, who highlighted how digital and AI-enabled approaches can help unlock finance, de-risk investment, and expand access to sustainable energy and connectivity solutions in underserved and marginalised contexts, supporting climate resilience and inclusive growth.

Disaster risk reduction and emergency management perspectives were contributed by Yuji Maeda of NTT, Inc., who highlighted how advanced aerial technologies and environmental sensing can be used to mitigate the impacts of extreme natural events. He shared ground-breaking research at NTT in Japan demonstrating the world’s first drone designed to act as a “flying lightning rod,” an invention selected by TIME Magazine as one of the Best Inventions of 2025. The drone uses a protective Faraday cage and a conductive tether to deliberately trigger and safely redirect lightning strikes away from critical infrastructure, illustrating the potential for drone-enabled systems to improve emergency response, infrastructure protection, and climate resilience.

Innovation diffusion was addressed by Heather Jacobs of WIPO GREEN, who underscored the importance of technology transfer, matchmaking platforms, and collaboration mechanisms in scaling affordable and climate-relevant digital and energy technologies. Her remarks highlighted the symposium’s focus on public–private partnerships and global deployment pathways.

A European Green Digital Coalition case study was presented by Ilias Iakovidis of the European Commission Directorate-General for Communications Networks, Content and Technology. He highlighted the development and deployment of a scientific methodology to assess the Net Carbon Impact of ICT solutions. His contribution demonstrated how digitalisation’s sustainability benefits can be quantified and scaled through coordinated industry engagement, financial sector alignment, and evidence-based deployment guidelines.

The growing Global Initiative on Resilience to Natural Hazards through AI Solutions was presented by Elena Xoplaki, Vice-Chair of the ITU, WMO, and UNEP Global Initiative on Resilience to Natural Hazards. She explained how AI, data integration, and resilient telecommunications networks underpin multi-hazard early warning systems and climate risk reduction efforts worldwide [5].

……………………………………………………………………………………………………………………………………….

Part II of this report, listing all references, is here.

About Marta Koch:

Marta Koch is an IEEE member, PhD Researcher and Teaching Facilitator at Imperial College London, Research Associate at the Oxford Computational Political Science Group at the University of Oxford and Research Consultant at UNOPS. She has been nominated as research delegate to UN Climate Change (UNFCCC), UNEP, UNDESA, UNIDO and ITU meetings.

Her research and consultancy work focuses on digital and AI governance, development and deployment for climate action and sustainable development, with particular emphasis on climate technology digital and physical infrastructures and the sustainability of AI and digitalisation. Her research has been funded by the United Nations, Natural Environment Research Council (NERC) and the UK Science & Technology Network (STN) under the Foreign, Commonwealth & Development Office and the Department for Science, Innovation & Technology, and endorsed by the UNESCO International Decade of Sciences for Sustainable Development.

AWS to deploy AI inference chips from Cerebras in its data centers; Annapurna Labs/Amazon in-house AI silicon products

Amazon Web Services (AWS) announced it plans to integrate AI processors from Cerebras Systems [1] into its data centers, signaling growing confidence in the AI-focused semiconductor startup. Under a new multiyear partnership announced Friday, AWS will deploy Cerebras’s Wafer-Scale Engine (WSE) to accelerate inference workloads—the stage of AI operations where models generate responses to user queries. Financial details of the agreement were not disclosed.

Note 1.  Founded in 2015 and headquartered in Sunnyvale, CA, Cerebras claims to have the world’s fastest AI inference and training platform.

The collaboration reflects a significant realignment in compute infrastructure strategies across the AI ecosystem. While initial industry focus centered on model training, the rapid expansion of deployed AI services is driving demand for optimized inference performance. Traditional GPUs, though unmatched for training, can be suboptimal for inference scenarios that require ultra-low latency and high throughput. Cloud and AI platform providers are therefore diversifying their silicon portfolios to better match workload profiles and to scale capacity efficiently.

AWS, the world’s largest cloud infrastructure provider, has traditionally relied on its in-house semiconductor division, Annapurna Labs, for custom chip design. Annapurna’s Trainium processors compete with GPUs from major suppliers such as Nvidia and AMD, offering cost and performance advantages for AI training workloads. The new partnership introduces Cerebras technology into AWS infrastructure, where it will work alongside Trainium to enhance large-scale inference capabilities.

Cerebras, best known for its wafer-scale architecture, markets its WSE processors as a high-speed inference platform capable of executing the decode phase of generative AI processing—where text, images, or other outputs are generated—at up to 25 times the speed of conventional GPU solutions. The company, valued at approximately $23 billion following a $1 billion funding round in February, has attracted backing from Fidelity, Benchmark, Tiger Global, Atreides, and Coatue.
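To see why decode speed dominates the user-perceived latency of long generations, consider a simple back-of-the-envelope model: total time is one parallel prefill pass over the prompt plus sequential decoding of output tokens. The token rates below are illustrative round numbers for the sake of the arithmetic, not measured Cerebras or GPU benchmarks.

```python
# Illustrative latency model for generative inference. Total time is the
# one-shot prefill over the prompt plus sequential decode of output tokens.
# All rates are hypothetical assumptions, not vendor figures.

def generation_latency_s(prompt_tokens, output_tokens,
                         prefill_tok_per_s, decode_tok_per_s):
    prefill = prompt_tokens / prefill_tok_per_s   # parallel prompt processing
    decode = output_tokens / decode_tok_per_s     # token-by-token generation
    return prefill + decode

# A 1,000-token prompt producing a 2,000-token answer:
baseline = generation_latency_s(1_000, 2_000,
                                prefill_tok_per_s=10_000, decode_tok_per_s=100)
fast = generation_latency_s(1_000, 2_000,
                            prefill_tok_per_s=10_000, decode_tok_per_s=2_500)

print(f"baseline decode: {baseline:.1f} s")    # 0.1 + 20.0 = 20.1 s
print(f"25x faster decode: {fast:.1f} s")      # 0.1 + 0.8 = 0.9 s
```

Even with identical prefill speed, a 25x decode speedup cuts the end-to-end time of a long generation by more than 20x, which is why decode-phase throughput is the headline metric in these announcements.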

The Cerebras deal underscores a major shift in the market for computing power. Image Credit: Rebecca Lewington/Cerebras Systems/Reuters

The AWS collaboration follows Cerebras’s major compute partnership with OpenAI, which reportedly involves deploying up to 750 MW of computing capacity powered by its chips. AWS and Cerebras will position their joint offering as a premium cloud inference solution, targeting enterprise AI developers requiring high-performance and scalable compute.

“The scale of AI demand is shifting from model creation to global deployment,” said Andrew Feldman, CEO of Cerebras. “Working with AWS aligns our technology with the industry’s largest cloud, giving us reach to a broad enterprise and developer base. If you want slow inference, there will be cheaper ways to go,” Feldman said. “But if you want fast tokens, if speed matters to you, if you’re doing coding or agentic work, not only are we the absolute fastest, but we intend to set the bar. We’re in this to win it.”

AWS and Cerebras will support both aggregated and disaggregated configurations. Disaggregated serving is ideal for large, stable workloads, while most customers run a mix of workloads with different prefill/decode ratios, for which the traditional aggregated approach remains the better fit. The start-up expects most customers will want access to both, with the ability to route workloads to whichever configuration serves them best.
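A router along these lines might classify each request by its prefill/decode ratio. The sketch below is a minimal illustration of that idea; the pool names, threshold, and request fields are invented for this example and do not reflect any actual AWS or Cerebras control plane.

```python
# Hypothetical request router: send decode-heavy, steady traffic to a
# disaggregated pool (prefill and decode scaled on separate hardware) and
# mixed/prefill-heavy traffic to a traditional aggregated pool.
# Pool names and the threshold are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class InferenceRequest:
    prompt_tokens: int      # tokens processed in the prefill phase
    expected_output: int    # tokens expected from the decode phase

def choose_pool(req: InferenceRequest, decode_heavy_threshold: float = 4.0) -> str:
    """Return which (hypothetical) serving pool should handle this request."""
    ratio = req.expected_output / max(req.prompt_tokens, 1)
    if ratio >= decode_heavy_threshold:
        # Decode-dominated workloads benefit from disaggregated serving.
        return "disaggregated"
    # Mixed or prefill-heavy traffic stays on the aggregated pool.
    return "aggregated"

print(choose_pool(InferenceRequest(prompt_tokens=8_000, expected_output=500)))   # aggregated
print(choose_pool(InferenceRequest(prompt_tokens=200, expected_output=4_000)))   # disaggregated
```

The design choice mirrors the article’s point: no single configuration wins for all traffic, so the value is in routing, not in picking one topology.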

The move intensifies competition in the inference silicon segment, where Nvidia faces growing pressure from purpose-built processor architectures such as Cerebras’s WSE and other emerging alternatives. Nvidia, which recently announced a $20 billion licensing deal with Groq and plans to unveil a new inference-optimized platform, remains the dominant supplier but now contends with an accelerating wave of specialization across the AI compute stack.

AWS vice president and Annapurna Labs co-founder Nafea Bshara emphasized the company’s goal of offering flexible performance tiers. “Our job is to push the speed and lower the price,” he said, noting that AWS will continue to offer cost-optimized Trainium-only options alongside high-performance Cerebras-Trainium configurations.

………………………………………………………………………………………………………………………………………………………………………………………………….

Amazon’s Internally Designed AI Silicon:

Amazon has built a fairly broad internal AI-oriented silicon portfolio through Annapurna Labs, primarily for AWS:

  • Inferentia (Inferentia, Inferentia2) – Custom machine learning accelerators designed for high-throughput, low-cost inference at cloud scale. These power many AWS inference instances and are positioned as an alternative to Nvidia GPUs for production model serving.

  • Trainium (Trainium, Trainium2, Trainium3) – AI training accelerators optimized for large-scale model training (including frontier and foundation models), with Trainium2 and Trainium3 as newer generations offering materially higher performance and better $/compute than the first generation. These are central to projects such as the Rainier supercomputer for Anthropic.

  • Graviton (Graviton, Graviton2/3/4) – Arm-based general-purpose CPUs used heavily across EC2, increasingly in AI-adjacent roles (pre/post-processing, orchestration, model-serving microservices) and as part of cost-optimized AI stacks, even though they are not dedicated accelerators.

  • Nitro system – While not an AI accelerator per se, the Nitro family (offload cards and system) is an internally developed data-plane and virtualization offload architecture that underpins EC2 and works in tandem with Graviton, Inferentia, and Trainium to free CPU cycles and improve I/O for AI/ML workloads.

All of these are designed and iterated internally by Annapurna Labs for exclusive use in AWS data centers, then exposed to customers via AWS services rather than as standalone merchant silicon.

Amazon’s Annapurna Labs is an internal chip design group that has become a core strategic asset for AWS, especially for custom data center and AI silicon.

Origins and acquisition:

  • Annapurna Labs is an Israeli chip design startup founded in 2011 by semiconductor veterans of Intel and Broadcom, including Avigdor Willenz and Nafea Bshara.

  • “When we talked with market sources and consulted with experts in the fields of data and servers, at that time only Amazon had a holistic vision and the ability to execute on a large scale,” recalls Bshara about the start of the relationship with Amazon. “We were prepared to build the technology and at the same time were open to working with startups. From there we began a journey together with many meetings and shared thinking, among others with James Hamilton (formerly Microsoft’s database product architect, later an AWS SVP), and from there within six months we found ourselves inside Amazon.”
  • Amazon began working with the company around 2013 and acquired it in 2015 for an estimated $350–$400 million.

  • Before the deal, Annapurna was in stealth, focusing on low‑power networking and server chips to improve data center efficiency.

Role inside Amazon and AWS:

  • Post‑acquisition, Annapurna was folded into AWS as a specialist microelectronics and custom silicon group, designing chips to reduce cost and power per unit of compute.

  • The group underpins several key AWS technologies: the Nitro system for offloading virtualization and I/O, Arm‑based Graviton CPUs for general compute, and Trainium and Inferentia accelerators for AI training and inference.

  • These chips let AWS optimize performance per watt and per dollar versus x86 servers and third‑party accelerators, improving margins and competitive pricing.

Key products and architectures:

  • Nitro: A combination of custom hardware and software that offloads storage, networking, and security functions from the host CPU, increasing tenant isolation and freeing CPU cycles for workloads.

  • Graviton: A family of Arm‑based server CPUs; by 2018 Graviton was widely adopted on AWS and is now used by most AWS customers for general cloud infrastructure workloads due to better price‑performance and energy efficiency.

  • Inferentia and Trainium: Custom accelerators designed by Annapurna for machine learning inference (Inferentia) and training (Trainium), intended to reduce AWS’s dependence on high‑priced Nvidia GPUs for AI workloads.

Strategic importance and AI focus:

  • Annapurna’s work is central to Amazon’s strategy of vertical integration in the cloud: owning the silicon stack as much as the software and services.

  • The group designs chips that power Amazon’s AI infrastructure, including systems used both by internal teams and external customers such as Anthropic, for which AWS is the primary cloud and silicon provider.

  • Amazon and Anthropic are collaborating on “Project Rainier,” a massive supercomputer built around hundreds of thousands of Annapurna‑designed Trainium2 chips, targeting more than five times the compute used to train current frontier models.

Organization, footprint, and industry impact:

  • Annapurna Labs maintains a significant presence in Israel, employing hundreds of engineers focused on advanced AI and networking processors for AWS.

  • It also operates major engineering hubs such as an Austin, Texas lab where advanced semiconductors and AI systems are designed and tested.

  • Analysts often describe the acquisition as one of Amazon’s most successful, arguing that Annapurna’s custom silicon is a “secret sauce” that helps AWS compete with Microsoft, Google, and others on performance, cost, and energy efficiency.

…………………………………………………………………………………………………………………………………………………………..

References:

https://www.cerebras.ai/company

https://www.cerebras.ai/blog/cerebras-is-coming-to-aws

https://www.wsj.com/tech/amazon-announces-inference-chips-deal-with-cerebras-109ecd31

https://www.marketwatch.com/story/how-the-ceo-of-this-upstart-nvidia-rival-hopes-to-seize-on-the-lucrative-market-for-ai-chips-d5ccdab0

https://en.globes.co.il/en/article-nafea-bshara-the-israeli-behind-amazons-graviton-chip-1001420744


Analysis: Equinix’s “Distributed AI Hub” vs competitive global carrier neutral offerings

Backgrounder:

As AI workloads decentralize geographically across a fragmented hybrid-cloud ecosystem, enterprises face significant headwinds in maintaining deterministic performance, data sovereignty, and OpEx predictability. As AI training and autonomous agent workloads drive demand for high-bandwidth, low-latency multi-cloud architectures, the focus shifts to relieving pressure on backbone and access networks through densified, software-defined connectivity. Central cloud infrastructure is straining under spiraling workloads: capacity is being distributed east-west into regional sites in search of power, and north-south into metro centers and enterprise premises in an urgent quest to actually put AI to work. Enterprises are suddenly stitching together training in one cloud, inference in another, and agents at the edge, all without breaking performance budgets. That is why networks and high-speed connectivity matter more than ever.

………………………………………………………………………………………………………………………………………………………………………………………

Equinix Carrier Neutral Hubs:

Equinix is positioning its carrier-neutral interconnection hubs as the strategic solution to mitigate these challenges. By optimizing last-mile backhaul and orchestrating distributed infrastructure, the platform enables localized inference at the edge. In January 2026, Equinix announced a last-mile access service (Equinix Fabric Intelligence), and yesterday (March 11, 2026) the company announced its Distributed AI Hub, which provides a single, unified framework for enterprises to connect, secure and simplify their increasingly complex and distributed AI ecosystems.

The Hub is a neutral location that allows enterprises to discover, connect to and consume AI infrastructure providers—including model companies, GPU clouds, data platforms, network and security services, and AI frameworks—all through private, low-latency connectivity at Equinix’s 280 high performance data centers.

“Enterprises are racing to deploy agentic AI but are finding that their existing infrastructure was never designed for the complexities of distributed intelligence,” said Mary Johnston Turner, Research Vice President, Digital Infrastructure Strategies at IDC. “By 2027, IDC expects 80% of enterprises will deploy distributed edge infrastructure to improve the latency and responsiveness of AI applications. Enterprises will need solutions like Equinix’s Distributed AI Hub to enable them to unify these disparate systems.”

To realize the full potential of agentic AI, enterprises must converge inherently distributed workflows—spanning model training and inferencing workloads dispersed across public clouds, private data centers, edge nodes, and an expanding set of specialized “neocloud” platforms. Each environment brings distinct latency, performance, and data sovereignty constraints. This operational fragmentation can impede innovation velocity, complicate governance, and make it exceedingly difficult to execute AI workloads in proximity to the data sources that drive them, thereby diminishing both business impact and user experience.

Equinix is addressing this challenge with the launch of the Distributed AI Hub, an evolution of its global digital infrastructure platform. The Hub provides a unified, vendor-neutral framework that federates data, compute, cloud access, and AI ecosystem partners across geographically distributed domains. It allows enterprises to deploy and orchestrate AI workloads where they achieve optimal performance—without re-architecting applications or migrating data across incompatible environments. Through consistent governance, secure interconnection, and high-performance data mobility, the Hub simplifies how organizations connect models, replicate datasets, execute inferencing, and manage multi-environment AI operations. Unlike hyperscaler AI marketplaces that prioritize vertically integrated ecosystems, the Equinix Distributed AI Hub is open by design, enabling customers to assemble best-of-breed AI stacks tailored to workload and compliance requirements.

“AI isn’t centralized—but the right infrastructure can make it run as seamlessly as if it were,” said Jon Lin, Chief Business Officer at Equinix. “Equinix is the neutral ground where AI, cloud and networking infrastructure converge. We are providing enterprises the freedom to build and scale AI wherever their data, partners, and teams already live, while running inference close to the data and users that depend on it, without the operational drag that comes from stitching together complex, distributed systems. With our Distributed AI Hub, we’re giving customers a simpler, smarter, and far more connected way to run and scale their AI today. We are building one of the most expansive and neutral AI ecosystems.”

Image Credit: Equinix

…………………………………………………………………………………………………………………………………………………………………………….

The Hub’s first major integration is with Palo Alto Networks, extending AI-driven security into the distributed enterprise. The collaboration combines Equinix’s global interconnection fabric and distributed data infrastructure with Palo Alto Networks Prisma AIRS, delivering real-time protection for autonomous agents and model interactions across external data sources and tools. This integration gives enterprises unified visibility and policy control across the entire AI lifecycle—from data ingestion to inference execution—irrespective of deployment location. Furthermore, Prisma AIRS will be natively available through Equinix Network Edge, enabling centralized management of AI-centric security services at the digital edge, closer to users, clouds, and critical workloads.

“The conversation around distributed AI is finally getting real,” said Lloyd Taylor, CTO/CISO, at Alembic. “It’s more than compute and data, it’s controlling where the data lives and how the compute runs. Equinix is framing that problem the right way, by bringing placement, governance, and predictable performance into the same architecture with the Distributed AI Hub. This is what makes distributed AI viable at enterprise scale.”

The Distributed AI Hub is available globally at 280 Equinix data center locations, enabling enterprises to deploy consistent AI infrastructure patterns worldwide. Equinix will be participating at NVIDIA GTC—located at Booth 1030—and will be previewing the Hub.

About Equinix:

Equinix, Inc. (Nasdaq: EQIX) shortens the path to boundless connectivity anywhere in the world. Its digital infrastructure, data center footprint and interconnected ecosystems empower innovations that enhance our work, life and planet. Equinix connects economies, countries, organizations and communities, delivering seamless digital experiences and cutting-edge AI—quickly, efficiently and everywhere.

……………………………………………………………………………………………………………………………………………………………………………….

Competitive Analysis (Source: Perplexity.ai):

Equinix is the largest and most mature carrier‑neutral interconnection hub globally, but it faces serious competition at several layers of the stack.

Global carrier‑neutral players:

Major global and multi‑regional competitors offering carrier‑neutral colocation and interconnection include:

  • Digital Realty (PlatformDIGITAL, Interconnection Fabric, strong global footprint, direct cloud on‑ramps).

  • NTT Global Data Centers.

  • CyrusOne, QTS, GDS, Telehouse/KDDI, CoreSite, Flexential, Cologix and others in specific metros/regions.

Selected ecosystem comparison:

Provider | Positioning vs Equinix | Geographic strength | Interconnection focus
Digital Realty | Closest global rival in scale and cloud access | North America, Europe, APAC | PlatformDIGITAL, interconnection fabric, “data gravity” narrative
NTT GDC | Large carrier‑neutral platform, often telco‑adjacent | Strong in Japan and APAC, expanding globally | Cloud on‑ramps, network‑dense campuses in key metros
CyrusOne | Hyperscale and enterprise colocation, carrier‑neutral | North America and Europe | High‑density interconnection, hyperscale campuses
CoreSite | Cloud‑ and network‑dense US metros | US only, key peering hubs | Open Cloud Exchange for multi‑cloud connectivity
Cologix / Flexential / phoenixNAP | Regional network‑neutral interconnection platforms | Primarily North America, secondary/edge markets | Dense carrier mix, regional cloud and IX connectivity

How Equinix is differentiated:

Analysts typically see Equinix’s moat in: dense metro ecosystems, breadth of on‑net networks and clouds, and the maturity of its software‑defined interconnection (Fabric) and edge services, rather than in being the only carrier‑neutral hub. Its main strategic challenge is staying ahead of peers like Digital Realty, NTT, and CyrusOne as they build similar fabrics around large, carrier‑neutral campuses and hyperscale‑adjacent deployments.

…………………………………………………………………………………………………………………………………………………………………………………………………

References:

https://newsroom.equinix.com/2026-03-11-Equinix-Unveils-the-Distributed-AI-Hub-to-Simplify-and-Secure-Enterprise-AI-Infrastructure

