Australia’s NBN and Nokia demonstrate multi-generation optical technologies concurrently over existing FTTP infrastructure

NBN Co, in collaboration with Nokia, has successfully conducted a laboratory demonstration of multiple generations of optical access and coherent transmission technologies operating concurrently over its existing Fiber‑to‑the‑Premises (FTTP) network. The technical trial validates the long‑term scalability of NBN Co’s national full‑fibre infrastructure and its capacity to accommodate the sustained growth of residential, enterprise, and industrial data demand anticipated over the coming decades.

The “Supercharging Fibre” trial, presented at the Broadband Forum Spring Member Meeting—held in Australia for the first time and hosted by NBN Co—demonstrated aggregate transmission rates exceeding 230 Gbit/s using multiple optical technologies over a single physical fiber link in a controlled laboratory environment. The experimental setup also established a pathway toward achieving terabit‑class capacities in future trials through the evolution of optical modulation formats and channel aggregation techniques.

A key outcome of the trial was the successful integration of coherent optical transmission with multiple generations of passive optical network (PON) technologies—GPON, XGS‑PON, and 50G‑PON—operating simultaneously over the same fiber infrastructure currently in service across Australia. Coherent optics, traditionally deployed within metropolitan, core, and data center interconnect networks, employ advanced modulation and digital signal processing to deliver extended reach, low latency, and high spectral efficiency. Their introduction into the access network domain represents a significant step toward the convergence of access and transport technologies, offering an efficient route to enhanced capacity and service flexibility without extensive physical network replacement.
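To make the coexistence claim concrete, here is a minimal sketch in which each technology occupies its own wavelength on the shared fiber, so the aggregate is simply the sum of per-channel rates. The channel list and rates below are assumptions for illustration, not NBN Co's actual trial configuration.

```python
# Illustrative only: approximate aggregate capacity when several PON
# generations and a coherent channel share one fiber on separate
# wavelengths. Rates are assumed values, not trial measurements.

channels_gbps = {
    "GPON (downstream)": 2.5,          # legacy residential PON
    "XGS-PON (downstream)": 10.0,      # current multi-gigabit tier
    "50G-PON (downstream)": 50.0,      # next-generation PON
    "Coherent point-to-point": 200.0,  # hypothetical coherent overlay
}

aggregate = sum(channels_gbps.values())
for name, rate in channels_gbps.items():
    print(f"{name:26s} {rate:6.1f} Gbit/s")
print(f"{'Aggregate over one fiber':26s} {aggregate:6.1f} Gbit/s")
```

With these assumed rates the aggregate lands in the same ballpark as the trial's 230+ Gbit/s result; the real figure depends on the actual wavelength plan.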

The demonstration (see illustration below) underscores the technical viability of leveraging existing passive optical infrastructure to support future bandwidth requirements driven by the proliferation of cloud computing, immersive digital experiences, artificial intelligence applications, and industrial IoT systems. The results further illustrate the potential of FTTP systems to evolve into a highly scalable, future‑ready broadband platform capable of sustaining national connectivity objectives.

Image Credit:  Perplexity.ai

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..

By 31 December 2025, more than 1 million customers had transitioned from copper‑based services to high‑speed full‑fiber connections, positioning FTTP as NBN Co’s dominant fixed‑line technology at approximately 35% of total connections. The company achieved its commitment to enable 10 million premises, representing about 90% of the NBN fixed‑line footprint, to order multi‑gigabit‑capable wholesale broadband services. Ongoing upgrade activities encompass over 228,000 premises, as part of an initiative to extend full‑fiber access to 95% of the remaining ~622,000 copper‑served locations by 2030.
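A quick arithmetic check of the figures above, treating the quoted round numbers as exact:

```python
# Back-of-the-envelope check of the NBN figures quoted above.
# 10 million multi-gigabit-capable premises ~= 90% of the footprint.

multi_gig_premises = 10_000_000
share_of_footprint = 0.90

total_footprint = multi_gig_premises / share_of_footprint
print(f"Implied fixed-line footprint: ~{total_footprint:,.0f} premises")

# 95% of the remaining ~622,000 copper-served locations by 2030:
copper_remaining = 622_000
upgrade_target = 0.95 * copper_remaining
print(f"Copper premises targeted for fibre by 2030: ~{upgrade_target:,.0f}")
```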

These developments reflect NBN Co’s strategic focus on access network modernization and underscore the continuing evolution of optical access technologies toward achieving the performance, flexibility, and resilience required to support Australia’s transition to a digital and cloud‑centric economy.

About NBN Co.:

NBN Co. was established in 2009 by the Commonwealth of Australia as a Government Business Enterprise (GBE) with a clear direction – to design, build and operate a wholesale broadband access network for Australia.

And we’ve done just that – creating a network that criss-crosses a country, and allowing internet retailers to provide reasonably priced broadband services to consumers and businesses.

The network is the digital backbone of Australia and is constantly evolving to keep communities and businesses connected and our nation productive.

 

References:

https://www.nbnco.com.au/corporate-information/media-centre/media-statements/nbn-superchargingfibre-trial

https://www.nbnco.com.au/corporate-information/about-nbn-co

https://www.broadband-forum.org/events/spring-2026-member-meeting/

Dell’Oro: Optical Transport Systems market +15% year-over-year in 3Q2025 driven by Cloud Service Providers

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

Point Topic: FTTP broadband subs to reach 1.12bn by 2030 in 29 largest markets

Nokia and Hong Kong Broadband Network Ltd deploy 25G PON

Nokia launches symmetrical 25G PON modem

Google Fiber planning 20 Gig symmetrical service via Nokia’s 25G-PON system

Ericsson and Forschungszentrum Jülich MoU for neuromorphic computing use in 5G and 6G

Ericsson and major European research center Forschungszentrum Jülich are collaborating to develop technologies for the continued evolution of 5G and for the future introduction of 6G (IMT 2030) networks. The organizations signed a Memorandum of Understanding (MoU) on March 24, 2026. The project aims to leverage JUPITER, Europe’s first “exascale” supercomputer, to design and test new artificial intelligence solutions for the complex demands of 6G. The partnership will explore AI models and methods to enhance Ericsson’s core network, network management, and Radio Access Network (RAN).

Important objectives include exploring ultra-efficient, “brain-inspired” computing approaches like neuromorphic computing [1.] to handle intense network tasks and strengthen Europe’s digital infrastructure.  Modern mobile networks rely heavily on Massive MIMO, a technology where many devices communicate simultaneously via numerous antennas. By exploring novel system architecture approaches like neuromorphic computing, researchers aim to speed up optimization and reduce energy use versus classical methods.

Note 1. Neuromorphic computing is a brain-inspired engineering approach that mimics biological neural networks using analog or digital electronic circuits. It combines memory and processing in one place—similar to neurons and synapses—to achieve extreme energy efficiency, speed, and learning capabilities, moving beyond the limitations of traditional computing architecture. Unlike traditional AI that uses continuous data, neuromorphic systems use “spikes”—discrete events in time—to mimic how neurons communicate. Such systems only consume significant power when processing data (“spiking”), making them ideal for ultra-low-power edge computing, unlike traditional computers that are always on. They can process complex, real-world data (like vision or touch) much faster and with far less power than traditional computers.
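A minimal leaky integrate-and-fire (LIF) neuron illustrates the spiking behavior described in Note 1. The threshold, leak, and input values are arbitrary illustrative choices, not parameters from any Ericsson or Jülich system.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the textbook building
# block of spiking (neuromorphic) systems. All parameters are invented
# for illustration.

def lif_run(inputs, threshold=1.0, leak=0.9):
    """Integrate weighted input each step; emit a spike (1) on threshold
    crossing, then reset the membrane potential. Returns the spike train."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of input current
        if v >= threshold:        # event-driven: output only when spiking
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.9, 0.3]))  # -> [0, 0, 1, 0, 0, 1]
```

The key property is that output (and hence power draw in a hardware realization) occurs only on the sparse spike events, which is the efficiency argument Note 1 makes.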

…………………………………………………………………………………………………………………………………………………………………………………………..

The alliance will study operational strategies like heat recovery to boost energy efficiency in HPC and cloud deployments. The collaboration involves systematic benchmarking of AI methods – including the application of neuromorphic AI – across Ericsson products to assess execution speed, scalability to large datasets, information retention, and storage efficiency.  In addition, the partnership will provide insights into the feasibility of cloud strategies based on concepts from the EuroHPC ecosystem, which is establishing a world-class supercomputing infrastructure.

Professor Laurens Kuipers, a member of the Executive Board of Forschungszentrum Jülich, said: “This collaboration has the potential to make a significant contribution to a more sustainable digital future. By combining our excellence in high-performance computing and our research into novel, neuro-inspired computing approaches with Ericsson’s expertise in telecommunications, we aim to develop more energy-efficient network solutions and strengthen a sovereign European digital infrastructure.”

Image Credit: Forschungszentrum Jülich / Kurt Steinhausen

……………………………………………………………………………………………………………………………………….

Nicole Dinion, Head of Architecture and Technology, Cloud Software and Services, Ericsson said: “The future of mobile networks is deeply intertwined with AI and the need for unparalleled energy efficiency. Our collaboration with Forschungszentrum Jülich, for years a global leader in supercomputing and applied physics, combines their research and computing power with our expertise in all domains of telecoms technology. We will explore architectures that define the next generation of telecommunication.”

The collaboration covers several areas of research:

  • AI methods for Ericsson products across the full portfolio: systematic benchmarking of approaches to assess execution speed, scalability to large datasets, information retention, and storage efficiency. Where security and commercial conditions permit, the teams may also use JUPITER for large-scale model training, leveraging its compute resources.
  • Energy-efficient computing for AI inference at the radio and edge: developing and prototyping highly efficient solutions for tasks such as radio channel estimation and Massive MIMO – a key technology in modern mobile networks, in which many devices communicate simultaneously via numerous antennas. This includes exploring novel system architecture approaches like neuromorphic computing (e.g., memristors) to speed up optimization and reduce energy use versus classical methods.
  • HPC and cloud architectures and operations for AI: researching and implementing Modular Supercomputing Architecture (MSA) concepts from exascale work at Forschungszentrum Jülich – in particular, at the Jülich Supercomputing Centre (JSC) – and studying operational strategies, such as heat recovery, to boost energy efficiency in HPC and cloud deployments.
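As a toy example of the per-antenna arithmetic that Massive MIMO baseband must repeat at enormous scale, here is a hypothetical scalar least-squares channel estimate from known pilot symbols. Real estimators operate on complex OFDM resource grids across many antennas; this single-tap, noise-free sketch only illustrates the principle.

```python
# Toy pilot-based channel estimation (illustrative only). A known pilot
# sequence p is transmitted; the receiver observes y = h * p and solves
# for the scalar channel gain h by least squares.

def estimate_channel(pilots, received):
    """Least-squares estimate of a scalar channel gain h."""
    num = sum(p * y for p, y in zip(pilots, received))
    den = sum(p * p for p in pilots)
    return num / den

true_h = 0.75
pilots = [1.0, -1.0, 1.0, 1.0]           # known training symbols
received = [true_h * p for p in pilots]  # noise-free for clarity

print(estimate_channel(pilots, received))  # recovers 0.75
```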

The collaboration will provide insights into the feasibility of cloud strategies based on concepts from the EuroHPC ecosystem, which is establishing a world-class supercomputing infrastructure with leading European centers such as the JSC.

ABOUT FORSCHUNGSZENTRUM JÜLICH:

Shaping change: This is what drives us at Forschungszentrum Jülich. As a member of the Helmholtz Association with more than 7,000 employees, we conduct research into the possibilities of a digitized society, a climate-friendly energy system, and a resource-efficient economy. We combine natural, life, and engineering sciences in the fields of information, energy, and the bioeconomy with specialist expertise in simulation and data science. www.fz-juelich.de

 

References:

https://www.ericsson.com/en/press-releases/2026/3/ericsson-and-forschungszentrum-julich-to-develop-advanced-ai-for-6g

https://www.ericsson.com/en/blog/2026/1/ai-future-will-be-defined-by-the-intelligent-digital-fabric

https://www.ibm.com/think/topics/neuromorphic-computing

China vs U.S.: Race to Generate Power for AI Data Centers as Electricity Demand Soars

AI infrastructure spending boom: a path towards AGI or speculative bubble?

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Expose: AI is more than a bubble; it’s a data center debt bomb

Sovereign AI infrastructure for telecom companies: implementation and challenges

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Custom AI Chips: Powering the next wave of Intelligent Computing

Groq and Nvidia in non-exclusive AI Inference technology licensing agreement; top Groq execs joining Nvidia


Analysis and Impact of Blockbuster FCC ban on foreign made WiFi routers

On March 23rd, the Federal Communications Commission (FCC) updated its Covered List to prohibit the sale of new foreign-made consumer-grade Wi-Fi routers in the U.S. The FCC’s Covered List is a list of communications equipment and services deemed to pose an unacceptable risk to the national security of the U.S. or the safety and security of U.S. persons. This FCC decision follows a determination by an Executive Branch interagency body, which concluded those devices pose unacceptable risks to U.S. national security and the safety of its citizens. The new restriction applies strictly to new foreign-made router models, meaning retailers can continue marketing previously approved units and consumers can operate their existing equipment without interruption.

Impact:

TP-Link, Netgear, and Asus are currently among the top-selling Wi-Fi router brands in the U.S. consumer market.  Estimates for early 2026 indicate that TP-Link alone holds approximately 35% of the U.S. consumer router market share, while Netgear and Asus collectively account for another 25%. The TP-Link Archer AXE75 is frequently rated the best router for most users due to its Wi-Fi 6E speed and reasonable price.

AXE5400 Tri-Band Gigabit Wi-Fi 6E Router

…………………………………………………………………………………………………………………………………

Linksys and Ubiquiti are American-based companies, but their hardware is produced by contract manufacturers overseas in locations such as China, Vietnam, and Taiwan. Similarly, Amazon eero and Google Nest mesh routers are not made in the U.S.

–>Hence, these companies’ ability to sell new Wi-Fi router models in the U.S. now faces strict regulatory hurdles.

Quotes:

FCC Chairman Brendan Carr said: “I welcome this Executive Branch national security determination, and I am pleased that the FCC has now added foreign-produced routers, which were found to pose an unacceptable national security risk, to the FCC’s Covered List. Following President Trump’s leadership, the FCC will continue to do our part in making sure that US cyberspace, critical infrastructure, and supply chains are safe and secure.”

Bogdan Botezatu, director of Threat Research at cybersecurity firm Bitdefender, says this ban is a step to harden the cybersecurity readiness of U.S. households, given ongoing geopolitical tensions. “Consumer routers sit at the edge of every home network, which makes them an attractive target and a strategic risk if compromised at scale,” he says. Asked whether he thinks the risk is real, Botezatu says the risk is real, though there’s no easy way to prove intent. “[Internet of Things] devices, including routers, are a weak point across the internet.”

“Virtually all Wi-Fi routers are made outside the United States, including those produced by US-based companies like TP-Link, which manufactures its products in Vietnam,” a spokesperson from TP-Link tells WIRED. “It appears that the entire router industry will be impacted by the FCC’s announcement concerning new devices not previously authorized by the FCC.”

Important Implications:
  • Reduced Product Availability: New, high-performance routers manufactured outside the U.S. will not receive the necessary approval to be imported or sold, restricting future consumer choices.
  • Higher Costs: “This ruling has the potential to significantly disrupt the U.S. consumer router market,” likely resulting in increased prices for consumers as companies grapple with new regulatory requirements.
  • Shift in Manufacturing: Router manufacturers, including those targeting the U.S. market, will likely need to shift production to the U.S. to satisfy security concerns and bypass the ban, says PC Magazine.
  • Security Focus: The ban targets vulnerabilities in foreign hardware and firmware.
  • No Impact on Existing Devices: Consumers can continue to use routers they currently own.

References:

https://www.fcc.gov/faqs-recent-updates-fcc-covered-list-regarding-routers-produced-foreign-countries

https://www.wired.com/story/us-government-foreign-made-router-ban-explained/

U.S. Weighs Ban on Chinese made TP-Link router and China Telecom

China backed Volt Typhoon has “pre-positioned” malware to disrupt U.S. critical infrastructure networks “on a scale greater than ever before”

WSJ: T-Mobile hacked by cyber-espionage group linked to Chinese Intelligence agency

Trump and FCC crack down on China telecoms; supply chain security at risk

RAN Silicon Rethink- Part II; vRAN and General-Purpose Compute

Overview:

The global Radio Access Network (RAN) market has experienced a significant decline, dropping by nearly $10 billion in annual product revenue between 2022 and 2024, from roughly $45 billion to about $35 billion by the end of last year (source: Omdia).

  • As the IEEE Techblog previously reported, Nokia is gradually moving away from its long-held reliance on custom RAN baseband (BBU) silicon from Marvell [1.] as it pivots to use Nvidia’s GPUs, as part of the latter’s $1B investment in Nokia in October 2025.

Note 1. Nokia uses Marvell RAN silicon in its 5G ReefShark portfolio. The companies collaborate to develop custom OCTEON SoC (System-on-a-Chip) and Infrastructure Processors, which are used to boost 5G AirScale base station performance.

  • Samsung has long partnered with Marvell Technology on purpose-built 5G baseband silicon. However, rising development costs and a contracting market for proprietary RAN hardware are reshaping that strategy. The economic case for new, custom RAN chipsets is becoming weaker as operators accelerate network virtualization.
  • In sharp contrast, Ericsson continues to defend its investment in proprietary silicon architectures while maintaining a flexible approach for operators that prefer virtualized or cloud RAN implementations running on standard central processing units (CPUs). At present, those solutions rely exclusively on Intel processors, though Ericsson notes its software is being engineered with portability in mind to support future hardware diversity.

Samsung’s Silicon Strategy:

Among RAN equipment vendors accessible to operators across North America and much of Europe, Samsung now stands as the principal alternative to the two Nordic RAN equipment suppliers, following the exclusion of Huawei and ZTE from many Western markets.

The South Korean conglomerate has become the global frontrunner in virtualized RAN (vRAN) deployments. Whereas custom silicon once dominated RAN infrastructure design, Samsung’s strategy has notably inverted that paradigm: vRAN is now its mainstream offering, and purpose-built hardware has moved to the periphery.

By the close of last year, Samsung reported supporting approximately 53,000 vRAN sites worldwide — a significant share of which lies within Verizon’s U.S. footprint. The company also disclosed major European developments, including Vodafone’s planned rollout across Germany and other markets, which will rely entirely on vRAN technology. For Samsung, discussions of bespoke, purpose-built 5G infrastructure have become increasingly rare.

According to Alok Shah, Vice President of Network Strategy at Samsung Networks, this transition reflects both the rising cost of developing custom silicon and the performance enhancements achieved by general-purpose CPU platforms.

“We’re still selling our purpose-built BBUs to a number of customers, but I do believe that it’s a matter of time,” Shah told Light Reading during MWC Barcelona, when asked if Samsung envisions an eventual phaseout of its proprietary baseband hardware portfolio.

Virtualized RAN Gains Momentum:

Transitioning to virtualized RAN (vRAN) allows network equipment vendors to capitalize on the scale economies of commercial data-center silicon. Samsung has established commercial vRAN contracts with Verizon and Vodafone, reflecting growing operator confidence in software-defined architectures.

“Virtual RAN performance has reached parity,” Shah said. “I know not all of our competitors feel that way, but that’s certainly how we feel. And the cost of building that modem is pretty high, even for a company like Samsung that’s really good at semiconductors,” he added.

Intel’s Granite Rapids Xeon platform exemplifies this shift to vRAN. The processor’s increased core density enables operators to cut hardware footprints; in many configurations, a single server can now support workloads that previously required two. Several network operators have confirmed this performance improvement during field evaluations.

Samsung and Ericsson continue to explore additional CPU suppliers. AMD’s latest multicore x86 processors offer up to 84 cores, compared with 72 in Intel’s Granite Rapids. However, offloading Forward Error Correction (FEC)—one of the most compute-intensive RAN processes—remains a challenge. Intel’s vRAN Boost feature integrates a dedicated hardware accelerator for FEC, while AMD currently lacks a direct equivalent.
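To see why FEC is singled out for hardware offload, consider even the tiny classical Hamming(7,4) code: it already requires per-bit XOR work on every block, and the LDPC codes used in 5G iterate over vastly larger blocks at line rate. The sketch below is illustrative only; 5G NR uses LDPC and polar codes, not Hamming codes.

```python
# Toy Hamming(7,4) encoder/decoder showing the per-block bitwise work
# that FEC entails. Real 5G LDPC decoders iterate over far larger
# blocks at line rate, which is why vendors add hardware accelerators.

def hamming74_encode(d):           # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):          # corrects any single-bit error
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                          # inject a single-bit channel error
print(hamming74_correct(code))        # -> [1, 0, 1, 1]
```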

Samsung has also evaluated Arm-based platforms, which increasingly support efficient software migration from x86. Nvidia’s Grace CPU, built on Arm architecture, has emerged as a potential candidate, especially when paired with its GPUs for selective Layer 1 acceleration.

Samsung’s roadmap aligns with a gradual and selective introduction of GPU acceleration. The company demonstrated GPU-based beamforming optimization during MWC, illustrating how AI can refine radio energy targeting. However, Samsung executives maintain that the latest Intel CPUs also provide sufficient capacity to host AI inference workloads directly. “Granite Rapids has plenty of capacity to support AI algorithms on-platform,” noted Shah.

While Nokia is building a GPU-compatible Layer 1 to accelerate computationally intensive baseband functions—including FEC—Samsung’s approach appears incrementally narrower, focusing on targeted AI for RAN optimization rather than complete GPU offload. GPUs may ultimately support AI at the Edge applications—so-called AI and RAN—where telecom operators leverage deployed GPUs for latency-sensitive inference services.

The degree to which such applications will reside within RAN sites remains uncertain. Some operators suggest that edge inference may instead remain within core network clusters that can meet latency requirements more efficiently.

Samsung’s architecture already supports GPU integration through commercial off-the-shelf (COTS) servers from manufacturers such as HPE, Dell, and Supermicro—aligning with broader cloud-native RAN trends. “It’s an off-the-shelf card that can be integrated directly into standard servers,” said Shah.

For now, Intel remains Samsung’s primary compute partner for commercial vRAN products. “We haven’t had an instance where customers are pushing for a second platform—it’s primarily a matter of commercial interest,” Shah added. The direction is clear: Samsung, like other leading vendors, is prioritizing scalable, general-purpose compute over bespoke 5G silicon as vRAN deployment accelerates.

……………………………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/5g/samsung-eyes-death-of-purpose-built-5g-but-has-no-ai-ran-fears

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Ericsson goes with custom silicon (rather than Nvidia GPUs) for AI RAN

Marvell shrinking share of the RAN custom silicon market & acquisition of XConn Technologies for AI data center connectivity

Intel FlexRAN™ gets boost from AT&T; faces competition from Marvel, Qualcomm, and EdgeQ for Open RAN silicon

Analysis: Nokia and Marvell partnership to develop 5G RAN silicon technology + other Nokia moves

Open Cosmos introduces global space-based LEO satellite service for IoT monitoring

Founded in 2015, UK-headquartered Open Cosmos has introduced a new integrated satellite service that combines broadband, Earth observation, and IoT capabilities to help organizations monitor critical infrastructure, protect environmental assets, and respond more rapidly to events. The company says the offering is unique in combining global IoT connectivity with real-time Earth observation data to deliver contextual intelligence for governments and institutions.

The service is built on Open Cosmos’ multi-layer satellite architecture, which the company describes as a trilogy of secure broadband connectivity, Earth observation, and IoT. The constellation includes the newly launched Connected Cosmos Low Earth Orbit (LEO) connectivity backbone [1.] and the Open Constellation Earth observation layer [2.]. Each satellite carries an IoT payload, integrating functions that are typically deployed as separate systems.

Note 1.  Connected Cosmos is a new LEO constellation providing sovereign and secure communications for businesses and government bodies worldwide.  It ensures that critical data remains secure, trusted, and immediately usable—even when terrestrial infrastructure is compromised. It uses Optical Inter-Satellite-Links to route data between satellites, physically bypassing subsea cables.  Built to withstand interference from jamming and cyber attacks, it’s designed to cut through a contested orbital field for modern critical operations.

Note 2. The Open Constellation is a mutualized satellite infrastructure, created to enable organizations to share the data generated by satellites for improved access to information on our planet. Using this shared capacity reduces overall costs and increases access to better quality, more frequent data. With more satellites in orbit, more areas can be covered more frequently, giving partners of the Open Constellation a greater global coverage.
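The inter-satellite routing described in Note 1 amounts to a shortest-path computation over a graph of optical links. Below is a toy sketch with an invented topology and made-up latencies, not Open Cosmos data.

```python
import heapq

# Toy route computation over optical inter-satellite links (ISLs):
# shortest path by latency between ground gateways, bypassing subsea
# cables entirely. Topology and latencies are invented for illustration.

isl_ms = {  # link -> one-way latency in milliseconds (assumed values)
    ("gw-sydney", "sat-A"): 3, ("sat-A", "sat-B"): 9,
    ("sat-B", "sat-C"): 9, ("sat-C", "gw-london"): 3,
    ("sat-A", "sat-D"): 14, ("sat-D", "gw-london"): 4,
}

graph = {}
for (a, b), ms in isl_ms.items():           # links are bidirectional
    graph.setdefault(a, []).append((b, ms))
    graph.setdefault(b, []).append((a, ms))

def shortest_latency(src, dst):
    """Dijkstra over the ISL graph; returns total latency in ms."""
    best, heap = {}, [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in best:
            continue
        best[node] = cost
        if node == dst:
            return cost
        for nbr, ms in graph[node]:
            if nbr not in best:
                heapq.heappush(heap, (cost + ms, nbr))
    return None

print(shortest_latency("gw-sydney", "gw-london"))  # -> 21
```

In this invented topology the four-hop path through sat-A/sat-D (21 ms) beats the longer chain through sat-B/sat-C (24 ms); a real constellation recomputes such routes continuously as satellites move.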

Open Cosmos Ecosystem:

Image Credit: Open Cosmos 

…………………………………………………………………………………………………………………………………………………………………………

The company says this approach is intended to “address the traditionally siloed nature of space-based data services, dramatically accelerating data delivery times and maximizing operational awareness, which will monitor environmental change and support disaster response across the globe – even in the most remote regions.”

Open Cosmos says the result is faster detection of events and a better understanding of what is happening on the ground. Potential applications include monitoring widely distributed assets, overseeing critical infrastructure such as energy, utility, and rail networks, protecting oceans, tracking wildfires, and observing offshore conditions. In this model, imagery and sensor data are combined so that users can not only see that a change has occurred, but also understand the context behind it.

“Our mission at Open Cosmos has always been focused on solving real world issues through space-based services,” said Danielle Edwards, VP for IoT at Open Cosmos. “This is an essential and critical technology service for governments, enterprises and institutions across the globe, helping to monitor and solve real world problems, with the innovative use of technology in space.

“Our existing Earth observation satellites already carry IoT payloads, so we have the experience to integrate further through our ConnectedCosmos LEO constellation, with each satellite being designed and made to carry IoT capabilities. Our aim is to provide a multitude of payload types within a single constellation to give our customers a completely bespoke and unique service.

“We won’t be just providing the data from a sensor; we will provide the visual imagery to explain why that data is changing. As demand for global monitoring and connected infrastructure continues to grow, our integrated approach represents a new model for space-enabled intelligence.”

At MWC earlier this month, Carlos Zamora, VP of Satcom Solutions at Open Cosmos, said the company is not positioning the LEO broadband service as a direct-to-device play.

Zamora elaborated:

“First of all we’re not going direct to device with the broadband. We’re not here to compete with Starlink or Kuiper or all of these systems – we’re not here to bring internet to the masses. We’re here to bring global secure connectivity to governments, commercial [customers] and actually anyone that is worried about their data resiliency and sovereignty. But we do have IoT capabilities that commercial and other customers could use. So the architecture is also fundamentally different. What we’re selling is a network, not a link in space, but actually a network. And I think what makes the difference beyond just connectivity, which is already a differentiator, is the fact that we can start fusing all of our offerings together. And this is not just about moving bits from one place to another, it is giving you the possibility of accessing a space infrastructure that can give you access to real time Earth observation, to real time computing capabilities in orbit, and basically creating a network of assets that can increase your situational awareness and give you access to a global intelligence backbone.”

Open Cosmos is effectively positioning the platform as a secure, multi-sensor space infrastructure layer rather than a consumer broadband network. The focus is on government, enterprise, and institutional customers that need connectivity, resilience, and situational awareness tied to Earth observation and IoT data.

………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.open-cosmos.com/

https://www.open-cosmos.com/leo-satellite-network-connectivity

https://www.open-cosmos.com/news/open-cosmos-earth-observation-iot-real-time-data

https://www.telecoms.com/satellite/open-cosmos-launches-earth-observation-and-iot-satellite-service

Enterprise IoT and the Transformation of UK Telecom Business Models – Part 1

From LPWAN to Hybrid Networks: Satellite and NTN as Enablers of Enterprise IoT – Part 2

Semtech LoRa® PHY technology enables Amazon Sidewalk to expand while supporting fixed and mobile IoT endpoints

ITU-R recommendation IMT-2020-SAT.SPECS from ITU-R WP 5B to be based on 3GPP 5G NR-NTN and IoT-NTN (from Release 17 & 18)

CEA-Leti RF Chip Enables Ultralow-Power IoT Connectivity For Remote Devices Via Astrocast’s Nanosatellite Network

Anthropic Claude Users Reveal AI Hallucinations as their Top Concern

Introduction:

Across regions from Germany to Mexico, users of artificial intelligence (AI) are less concerned about being replaced by AI than by its propensity to make major mistakes, according to one of the largest global surveys to date on real-world AI usage and perception. These mistakes, known as “AI hallucinations,” are essentially fabricated answers presented as fact, rather than errors stemming from outdated information.

The study, conducted by Anthropic using its Claude chatbot, analyzed interviews with more than 80,000 users across 159 countries. The result is one of the most detailed global portraits yet of how AI is being deployed — and how users perceive its risks, benefits, and societal implications.

AI Hallucinations Outrank Job Displacement as Top Concern:

When asked what worries them most about AI, 27% of users cited AI chatbot errors described as “AI hallucinations,” while 22% pointed to job displacement and the loss of human autonomy. About 16% expressed concern that AI could weaken people’s capacity for critical thinking.

Image Credit: JOIST AI

“The AI hallucinations were a disaster. I lost so many hours of work,” said an entrepreneur from Germany. Another participant, a military worker in Mexico, noted the importance of domain knowledge in spotting AI’s flaws: “When I notice AI errors it’s because I’m well versed in the topic . . . but I wouldn’t know if the topic was alien to me, would I?”

An AI Interviewer for Global Insights:

The responses were collected in 70 languages using a novel feedback system that allowed Claude to act as both interviewer and analyst. The platform evaluated qualitative answers, categorizing responses to reveal common themes and linguistic nuances across regions.
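In highly simplified form, the tallying step amounts to mapping free-text answers to themes and computing shares. Anthropic used Claude itself for the categorization; the keyword matching below is a crude stand-in, and the themes and sample answers are invented for illustration.

```python
from collections import Counter

# Crude sketch of survey-response categorization: map each free-text
# answer to a theme, then tally theme shares. Keywords and answers are
# invented; the real study used an LLM, not keyword matching.

themes = {
    "hallucination": ["made up", "invented", "hallucinat"],
    "job_displacement": ["my job", "replace workers", "unemploy"],
    "critical_thinking": ["stop thinking", "critical thinking"],
}

def categorize(answer):
    text = answer.lower()
    for theme, keys in themes.items():
        if any(k in text for k in keys):
            return theme
    return "other"

answers = [
    "It made up citations that don't exist",
    "I worry it will replace workers like me",
    "People will stop thinking for themselves",
    "The model hallucinated an entire API",
]
counts = Counter(categorize(a) for a in answers)
for theme, n in counts.items():
    print(f"{theme}: {100 * n / len(answers):.0f}%")
```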

“Beyond its scale and linguistic diversity, the project aimed to collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products,” said Deep Ganguli, who leads Anthropic’s societal impacts team and oversaw the research initiative.

Productivity and Personal Growth Drive AI Adoption:

While data quality and reliability drew criticism, the survey also underscored widespread acknowledgment of AI’s positive impact on productivity. Thirty-two percent of respondents said that AI tools had meaningfully improved their output at work.

An entrepreneur in the United Arab Emirates explained, “I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people — I don’t wait for anyone anymore.” Participants from Colombia, Japan, and the United States described similar gains, emphasizing how AI helps them free up time for family, hobbies, and creative exploration.

In total, nearly one in five users (19%) said AI had fallen short of their expectations. Yet usage patterns demonstrate remarkable versatility: respondents reported employing AI as a productivity assistant, educational tutor, design partner, creative collaborator, or even an emotional support companion.

A vivid example came from a soldier in Ukraine, who wrote, “In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life — my AI friends.”

Regional and Economic Divides in AI Optimism:

Regional variation was pronounced. Saffron Huang, the lead researcher on the project, found that respondents in South America, Africa, and across South and Southeast Asia expressed more optimism than users in Europe, the United States, or East Asia.

“The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure,” said Huang. She added that this optimism might reflect a sample skew toward early adopters in developing markets — individuals inclined to view new technologies as opportunities rather than threats.

“They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries,” she said.

According to Anthropic’s researchers, AI’s limited visibility in daily workflows across lower-income economies may explain the difference. “If AI hasn’t visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist,” the team wrote in a companion blog post.

Next Steps: Measuring AI’s Real-World Impact:

Anthropic plans to extend its Claude Interviewer research framework into longitudinal studies that track how AI affects users’ lives over time. “The goal is to better measure both the improvements and the harms — and to use those insights to make systemic refinements,” said Ganguli.

The company’s approach — embedding feedback collection directly into an AI platform — represents an emerging model for data-driven, iterative AI development. By combining self-reported user experience data with large-scale text analytics, Anthropic aims to better understand how its models interact with human needs and constraints.

Industry and Research Community Respond:

The study has drawn attention across the AI community for its unprecedented reach and innovative methodology. Nickey Skarstad, director of product at language-learning company Duolingo, praised the work’s ambition. On LinkedIn, she wrote: “For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we’ve never had access to before.”

Still, several researchers remain cautious about overinterpreting the results. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, expressed reservations on X, saying he was “sceptical” about calling the study a new form of science due to potential selection bias and limitations in survey design. “A human qualitative researcher would take time to build trust with their participants, hold the space for reflection, introspection, contradictions — that’s the whole point of it,” he wrote.

Methodological caveats extend to demographics. Almost half of the survey’s respondents were based in North America or Western Europe, while regions such as Central Asia contributed only a few hundred participants.

Ilan Strauss, an economist and director of the AI Disclosures Project, described the initiative as “an excellent piece of work,” but urged careful interpretation. He noted that the absence of reported confidence intervals — standard practice in survey-based research — makes it difficult to measure uncertainty. Self-reported productivity gains, he added, are inherently prone to bias.

A Global Mirror for Human-AI Relations:

Despite these caveats, the Claude Interviewer study illustrates a broader shift in the relationship between humans and AI systems. As AI technologies proliferate across regions and industries, they are becoming both instruments of empowerment and sources of anxiety — mirroring social, economic, and cultural dynamics in striking ways.

While western economies debate AI-driven labor disruption and ethical alignment, many in emerging markets frame AI as a means of upward mobility and creative expansion. This duality — between apprehension and aspiration — may shape not only AI adoption patterns but also future research and regulatory directions across global contexts.

References:

https://www.ft.com/content/e074d3a9-7fd8-447d-ac0a-e0de756ac5c5?syn-25a6b1a6=1 (PAYWALL)

https://www.joist.ai/post/ai-hallucinations-what-they-are-and-why-it-matters

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

Nvidia CEO Huang: AI is the largest infrastructure buildout in human history; AI Data Center CAPEX will generate new revenue streams for operators

Alphabet’s 2026 capex forecast soars; Gemini 3 AI model is a huge success

Analysis & Economic Implications of AI adoption in China

China’s open source AI models to capture a larger share of 2026 global AI market

AWS to deploy AI inference chips from Cerebras in its data centers; Annapurna Labs/Amazon in-house AI silicon products

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Market research firms Omdia and Dell’Oro: impact of 6G and AI investments on telcos

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

Telco investments in mobile core networks surge 83% in 2025-Q4, but what about ROI?

According to new data from market research firm Omdia (owned by Informa), investments in 5G Standalone (SA) core networks surged 83% year-over-year in 2025-Q4. For OEMs, the uptick suggests a break from the stagnant 5G SA momentum of recent years. Omdia identified North America and EMEA as the primary growth engines for the quarter. “The surge in 5G core investment underscores CSPs’ strategic focus on enabling new revenue streams and digital transformation,” said Roberto Kompany, Principal Analyst, Mobile Infrastructure at Omdia, in a statement. “This momentum is reflected in AT&T’s nationwide 5G SA and RedCap deployment and Verizon’s launch of a new enterprise-grade fixed wireless access (FWA) slice,” he said.

Ookla and Omdia recently noted accelerating 5G SA adoption in Europe, but the region continues to trail global leaders due to its low baseline. Spain remains a standout exception. Telefónica recently achieved a domestic milestone by deploying 5G SA in-building coverage via a Vantage Towers DAS, and has partnered with Airbus Helicopters to integrate 5G SA into manned and unmanned rotary-wing platforms for the Spanish armed forces. Despite broader deployments in the UK and Germany, a significant performance gap remains.

The GCC region (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE) currently delivers median 5G SA download speeds up to five times faster than European averages. This disparity highlights a capability gap, rather than a coverage gap, between mature and emerging markets. The industry footprint is expanding, with Omdia reporting 88 commercial 5G SA deployments to date—a notable increase from the 72 reported by Dell’Oro in late 2025.

…………………………………………………………………………………………………………………………………………………………………………………………….

While Dell’Oro confirms the 5G SA core market’s growth, it emphasizes that subscriber migration and active utilization, rather than just “flags in the ground,” are the true long-term drivers of infrastructure spend. For the first time, 5G accounted for a 50 percent share of the total Mobile Core Network (MCN) market.

“In 2025, the MCN market recorded its highest year-over-year revenue growth rate since 2014,” stated Dave Bolan, Research Director at Dell’Oro Group. “This was driven by record-setting growth rates in all market segments: 4G MCN (highest since 2019), 5G MCN (highest since 2022), and Voice Core (highest since 2007). 4G MCN gains came from Caribbean and Latin America (CALA) and Europe, Middle East, Africa (EMEA) regions; 5G MCN from all regions; and Voice Core, primarily from Asia Pacific and EMEA regions.

“5G MCNs led the way in 2025 growth, as 5G Standalone (5G SA) networks reached an inflection point and moved towards mass market appeal, as more 5G SA networks expand in population coverage in urban, suburban, and rural areas. Voice Core was the next major contributor to growth in 2025, driven by planned 3G MCN shutdowns, which required upgrades from Circuit Switched Core to IMS Core, and IMS Core modernization to a cloud-native IMS Core for VoNR in 5G SA networks. Meanwhile, 4G MCNs expanded due to subscriber growth in Africa and South America,” added Bolan.

Looking ahead, Omdia forecasts sustained double-digit growth for 5G Core investments through 2026, fueled by the requirement for nationwide service parity and increased network capacity. This outlook favors the leading 5G Core vendors—Huawei, Ericsson, and Nokia—who currently maintain the highest market shares.

……………………………………………………………………………………………………………………………………………………………………………………………

ROI for 5G SA Core Networks?

The return on investment (ROI) for 5G Standalone (SA) core networks is currently at a critical inflection point. While the initial years were marked by complaints about slow momentum, 2025 and 2026 have seen a shift from pilot testing to an execution-driven phase with measurable, albeit varied, returns. In the 2025–2026 market, enterprise ROI for 5G SA is primarily driven by three high-growth segments: Private 5G Networks, RedCap IoT, and Network Slicing. While public 5G consumer returns remain steady, these B2B use cases are where Mobile Network Operators (MNOs) are finding the most immediate “killer applications.”

ROI Drivers in 2026:
  • Operational Efficiency: 5G SA cores are cloud-native, allowing for microservices that can be deployed in hours rather than days. This reduces long-term operational costs (OpEx) by automating network functions and improving energy efficiency per gigabyte transmitted.
  • New Revenue Streams: Unlike 5G Non-Standalone (NSA), the SA core enables Network Slicing and Ultra-Reliable Low-Latency Communications (URLLC). These are essential for high-margin B2B services like industrial robotics, emergency services, and “SuperMobile” slicing for enterprises.
  • Monetization of “Capability”: In regions like the GCC (Gulf Cooperation Council), 5G SA delivers speeds up to five times faster than European averages, allowing operators to charge for performance-based tiers rather than just data volume.
  • Consumer Benefits: Early data from the UK indicates that 5G SA can extend device battery life by 11% to 22% due to its unified control plane, creating a tangible value proposition for premium consumer plans.
Current Market Challenges:
  • The “Value Perception Gap”: Despite nationwide rollouts, some operators (like AT&T in late 2025) saw mobile service revenue grow by only 3.4%, barely outpacing inflation.
  • Regional Disparity: ROI is strongest in North America and China, where industrial policy and sovereign wealth have accelerated deployment. In contrast, Europe faces a “regulatory quagmire” and higher costs for removing legacy equipment, slowing its path to profitability.
  • The 6G Factor: Some operators are hesitant to invest billions in a full 5G SA overhaul if the technology is viewed as a “transitional” generation that may be superseded by 6G-ready cores in the late 2020s.
Strategic Outlook for 2026:
Market research from the Dell’Oro Group projects the 5G Mobile Core Network market to grow at a 12% CAGR through 2030, reaching historic highs in 2026. For most operators, the consensus is that 5G SA is a strategic necessity to maintain competitiveness, even if the short-term financial returns are uneven.
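As a quick sanity check on what a 12% CAGR implies, compound growth follows base × (1 + r)^n. The sketch below uses a placeholder $10 billion base figure, which is an assumption for illustration and not a number from the Dell’Oro report:

```python
def project_market(base: float, cagr: float, years: int) -> list[float]:
    """Project a market size forward at a constant compound annual growth rate.

    Returns a list of (years + 1) values: the base year plus each projected year.
    """
    return [base * (1 + cagr) ** n for n in range(years + 1)]

# Placeholder $10B base compounding at 12% CAGR over five years (e.g. 2025-2030).
sizes = project_market(10.0, 0.12, 5)
```

At 12% per year, a market roughly grows by three-quarters over five years (1.12^5 ≈ 1.76), which is why Dell’Oro can project historic highs well before 2030.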
In his February 2026 Newsletter, Stephane Teral wrote, “2026 points to a more mixed environment—RAN slightly down, 5G Core continuing to grow—against a backdrop of uncertain capex and an accelerating shift toward opex and software-driven models.”
…………………………………………………………………………………………………………………………………………………………………………………………….

References:

https://www.telecoms.com/5g-6g/telcos-spend-more-on-the-core-as-5g-sa-picks-up

https://www.linkedin.com/pulse/february-newsletter-4q25-fy25-wireless-infrastructure-update-ug9ec/

Dell’Oro: Mobile Core Networks +15% in 2025; Ookla: Global Reality Check on 5G SA and 5G Advanced in 2026

Dell’Oro: RAN market stable, Mobile Core Network market +14% Y/Y with 72 5G SA core networks deployed

Téral Research: 5G SA core network deployments accelerate after a very slow start

Analysts: Telco CAPEX crash looks to continue: mobile core network, RAN, and optical all expected to decline

Building and Operating a Cloud Native 5G SA Core Network

MCN Market Roared Back in 2025 With 15 Percent Growth, According to Dell’Oro Group

Analysis of Airspan Networks & Atika Alliance: Resilient, Multi-Domain 5G Mission Critical Connectivity for the Defense Industry

Airspan Networks Holdings LLC (“Airspan”) and ATIKA Venture, S.L. (“Atika”) have entered into a strategic collaboration to advance resilient, multi-domain 5G communications for defense and security operations. The initiative focuses on developing interoperable, deployable network systems optimized for mission-critical connectivity across terrestrial and airborne domains.

The cooperation framework covers both commercial and technical engagements, with initial activities centered in Spain and expansion potential across Europe. The partnership unites Airspan’s portfolio in Open RAN (O-RAN), 5G, and commercial Air-to-Ground (ATG) communications with Atika’s capabilities in tactical 5G deployments, AI-driven network analytics, and secure 5G core integration for defense-grade environments.

Joint programs will address the convergence of deployable 5G infrastructure and mobile ad hoc network (MANET) systems under a unified network orchestration and control layer. The combined architecture aims to provide secure, high-throughput connectivity in dynamic and contested electromagnetic environments. Technical priorities include rapid network deployment, automated resilience management, AI-assisted spectrum optimization, and end-to-end encryption aligned with defense mission profiles.

Image Credit:  Aviat Networks

“Airspan has a strong history of solving advanced connectivity challenges, including low-latency, high-mobility communications through our Air-to-Ground In-Motion 5G platform,” stated Glenn Laxdal, CEO of Airspan. “Through this collaboration with Atika, we aim to adapt our commercial-grade 5G and O-RAN technologies to defense use cases that demand operational resilience and interoperability across domains. Atika’s deep experience in defense communications, combined with their expertise in AI-enabled network intelligence and secure 5G core technologies, represents a substantial complement to our portfolio.”

“The operational landscape increasingly depends on adaptable, intelligent, and sovereign networks,” said Ana Rodríguez Quirós, Managing Director of Atika. “Our partnership with Airspan strengthens our ability to support multi-domain 5G for defense users, extending connectivity beyond satellite and traditional radio systems. Building on our collaboration with the Spanish Army, this alliance demonstrates how advanced 5G network architectures can directly enhance mission readiness, mobility, and overall operational effectiveness.”

About Airspan:

Headquartered in Plano, Texas, Airspan Networks Holdings LLC is an innovative U.S.-based provider of wireless network solutions with a global presence, focused on delivering carrier-grade 5G and advanced wireless connectivity. Airspan’s portfolio spans three core solution areas – in-building, outdoor, and air-to-ground – and includes market-leading products for DAS, Open RAN, and small cells across both public and private network settings. Airspan supports mobile network operators, neutral-host providers, enterprises, public-sector organizations, and other service providers in building reliable, scalable wireless networks that enhance coverage and capacity while enabling fast, efficient deployment.

Visit our website at https://airspan.com/

About Atika:

Atika is a Spanish technology company specializing in advanced tactical communications and deployable 5G networks for defense and security. Its technology focuses on federated architectures, multi-domain connectivity, and network intelligence capabilities designed for real operational environments.

……………………………………………………………………………………………………………………………………………………….

Requirements and Analysis:

1.] Resilient, mission-critical 5G connectivity: URLLC that meets the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation.

2.] Unified network orchestration and control layer: the 5G Service-Based Architecture, which depends on implementation of 3GPP Release 17 and 18 specifications.

1.  Enhancements to the 5G NR Physical Layer (PHY) to support Ultra-Reliable Low-Latency Communications (URLLC) in the Radio Access Network (RAN). Basic URLLC support was established in Release 15, but when 3GPP Release 16 was frozen in July 2020, the RAN enhancements for URLLC had not been completed or performance tested. Hence, the ITU-R M.2150 standard for the IMT-2020 RIT/SRIT initially did not meet the ITU-R M.2410 Technical Performance Requirements for IMT-2020 recommendation.

The most significant PHY-layer optimizations were finalized in Release 16 (Phase 2) and Release 17 (Phase 3), with more to come in Release 18 as described below.

a] Release 16 (The “IIoT and URLLC” Phase):
This release introduced foundational PHY improvements to reach “six nines” (99.9999%) reliability. Key features included:

  • New DCI Formats: Compact Downlink Control Information (DCI) formats (e.g., Format 0_2 and 1_2) were added to reduce signaling overhead and improve robustness.
  • Sub-slot HARQ-ACK Feedback: Enabled faster feedback by allowing multiple HARQ-ACK transmissions within a single slot.
  • PUSCH Repetition Type B: Introduced to allow even finer-grained (mini-slot based) repetitions for low-latency uplink, enabling transmissions to cross slot boundaries.
  • Intra-UE Prioritization: Standardized the ability for a device to prioritize a high-priority (URLLC) transmission over a lower-priority (eMBB) one if they overlap in time.
  • Multi-TRP (CoMP): Enhanced support for Transmission and Reception Points (TRPs) to provide spatial diversity, ensuring communication continues if one path is blocked.
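The arithmetic behind repetition-based reliability features such as PUSCH Repetition Type B can be sketched simply: if each transmission attempt fails independently with block error rate p, the residual error after k repetitions is p^k. The 10% per-attempt BLER below is an assumed example value, not a figure from the 3GPP specifications:

```python
def residual_error(bler: float, repetitions: int) -> float:
    """Residual error probability after k independent transmission attempts,
    each failing with block error rate `bler`."""
    return bler ** repetitions

def repetitions_needed(bler: float, target_error: float, tol: float = 1e-9) -> int:
    """Smallest k such that bler**k <= target_error. A tiny relative
    tolerance absorbs floating-point rounding at exact boundaries."""
    k = 1
    while bler ** k > target_error * (1.0 + tol):
        k += 1
    return k

# With an assumed 10% per-attempt BLER, a "six nines" residual error
# budget of 1e-6 requires 6 repetitions (0.1**6 = 1e-6).
k = repetitions_needed(0.1, 1e-6)
```

This independence assumption is optimistic (fades are correlated in practice), which is why Release 16 pairs repetition with spatial diversity via multi-TRP.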

b] Release 17 (The “Further Enhanced URLLC” Phase):
Completed in 2022, this release focused on consolidating these features and extending them to more complex scenarios:

  • URLLC in Unlicensed Spectrum (NR-U): Adapted URLLC PHY procedures for unlicensed bands, addressing regulatory constraints like Listen-Before-Talk (LBT).
  • Improved HARQ-ACK and CSI Reporting: Introduced more efficient and robust feedback mechanisms for better link adaptation.
  • Enhanced Multi-TRP for UL: Further optimized uplink transmissions using multiple TRPs for increased reliability.
Summary of Implemented Rel-17 RAN Enhancements:
  • Feedback Reliability: Improved HARQ-ACK and Channel State Information (CSI) reporting to ensure the network can adapt to rapid channel changes.
  • Traffic Prioritization: Intra-UE prioritization allows URLLC data to “pre-empt” or take priority over standard mobile broadband (eMBB) data within the same device.
  • Power Savings: New mechanisms like Paging Early Indication (PEI) allow URLLC-capable sensors to remain in low-power states longer without sacrificing the ability to wake up instantly for critical data.
c] Current Status:
While the core functional specifications for URLLC in the RAN are considered “complete” as of Release 17, the ecosystem continues to evolve into 3GPP Release 18 (5G-Advanced), which looks at further specialized enhancements for Extended Reality (XR) and Artificial Intelligence (AI).
Modem and Chipset Comparison (Device Side):

5G chipsets/modems:
  • Qualcomm: World’s first 5G Advanced-ready modem. Supports enhanced HARQ-ACK and CSI feedback for reliability, and AI-based beam management to maintain stable URLLC links.
  • MediaTek M90: Conforms to Rel-17 standards and aligns with Rel-18 5G-Advanced. Implements Rel-17 Paging Early Indication (PEI) to reduce power while maintaining low-latency readiness.
  • Samsung Exynos Modem 5300: While primary documentation emphasizes Rel-16, Samsung achieved 1024 QAM (defined in Rel-17) in partnership with Qualcomm. Supports ultra-low latency via FR2 and EN-DC.
Network infrastructure implementation often takes the form of software-defined upgrades to existing massive MIMO and base station hardware.
  • Ericsson: Enabled “Time-Critical Communication” as a software upgrade on its RAN. Its Rel-17 implementation focuses on Hybrid Automatic Repeat Request (HARQ-ACK) enhancements, intra-UE multiplexing, and time-synchronization for Industrial IoT (IIoT).
  • Nokia: Updated its AirScale portfolio to support Rel-17 features, specifically targeting Time-Sensitive Communications (TSC) and deterministic networking for private factory environments.
  • Huawei: Has integrated Rel-17 URLLC enhancements as part of its “5.5G” (5G-Advanced) marketing, focusing on achieving sub-10ms latency for wide-area industrial control and 1ms for local-area automation.

2.  3GPP has specified a unified management and orchestration framework for 5G systems, primarily developed by working group SA5 (Management, Orchestration, and Charging). Starting from Release 15, 3GPP introduced a Service-Based Management Architecture (SBMA), which acts as a unified layer to manage and orchestrate 5G networks, including the Core, RAN, and end-to-end network slices.

Key aspects of the 3GPP unified 5G orchestration and control layer include:
  • Service-Based Management Architecture (SBMA): Instead of legacy, vendor-specific interfaces, 3GPP adopted a service-oriented approach. This architecture uses Management Services (MnS), which provide standardized interfaces for both management and orchestration, facilitating multi-vendor interoperability.
  • End-to-End Slice Management: The 3GPP standards (notably TS 28.530/531/532/533) define a common approach to manage the entire lifecycle of a 5G network slice (creation, activation, supervision, and termination) across RAN, Core, and Transport domains.
  • Network Automation (NWDAF): The Network Data Analytics Function (NWDAF), introduced in Release 15, is a key component for automated control. It collects network data, analyzes it, and feeds back insights to assist in policy management (PCF) and slice selection (NSSF).
  • Intent-Driven Management: 3GPP is enhancing its standards to support intent-driven management, enabling operators to manage network resources based on high-level desired outcomes rather than low-level configuration, which is crucial for autonomous networks.
  • AI/ML Management: Recent releases (18/19) focus on a unified, domain-independent AI/ML management and orchestration framework that supports the full lifecycle of AI/ML models within the 5G system.

The latest 3GPP release with finalized specifications for Service-Based Management Architecture (SBMA) is Release 18 (Rel-18), which was functionally frozen in early 2024. Rel-18 includes enhanced study items (FS_eSBMA) focused on supporting management for 5G standalone (SA) and non-standalone (NSA) scenarios and management of Management Functions.
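The slice lifecycle phases named above (creation, activation, supervision, termination) can be modeled as a simple state machine. The sketch below is an illustrative abstraction and not an implementation of the standardized 3GPP Management Services (MnS) interfaces:

```python
# Illustrative state machine for the network-slice lifecycle phases
# described in the 3GPP TS 28.530-series. This is an abstraction for
# exposition, not an implementation of the standardized MnS interfaces.

ALLOWED = {
    "preparation": {"creation"},
    "creation": {"activation", "termination"},
    "activation": {"supervision", "termination"},
    "supervision": {"activation", "termination"},  # e.g. modify, then re-activate
    "termination": set(),                          # terminal state
}

class NetworkSlice:
    def __init__(self, slice_id: str):
        self.slice_id = slice_id
        self.state = "preparation"

    def transition(self, new_state: str) -> None:
        """Move to new_state if the lifecycle allows it, else raise ValueError."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

In a real SBMA deployment each transition would be driven by a Management Service call and supervised via NWDAF analytics rather than a local method.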

…………………………………………………………………………………………………………………………………………………………………………..

References:

https://www.businesswire.com/news/home/20260319340548/en/Airspan-Networks-and-Atika-Form-Alliance-to-Advance-Resilient-Multi-Domain-5G-Connectivity-for-Defense

SNS Telecom & IT: Mission-Critical Networks a $9.2 Billion Market

3GPP Release 16 5G NR Enhancements for URLLC in the RAN & URLLC in the 5G Core network

3GPP Release 16 Update: 5G Phase 2 (including URLLC) to be completed in June 2020; Mission Critical apps extended

Huawei, Qualcomm, Samsung, and Ericsson Leading Patent Race in $15 Billion 5G Licensing Market

https://www.3gpp.org/news-events/3gpp-news/sa5-5g

Revolutionizing 5G Mission Critical Transport Networks (Part 2)

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2020/Documents/S01-1_Requirements%20for%20IMT-2020_Rev.pdf

IMT-2030 (“6G”) Minimum Technology Performance Requirements for Radio Interface Technologies

At its February 2026 meeting in Geneva, ITU-R WP 5D reached agreement on the technical performance requirements for IMT-2030, also known as 6G. Formal approval is expected to follow when the parent ITU-R Study Group 5 meets in December 2026.

At their February 2026 meeting, the WP 5D WG Technology Aspects/SWG Radio Aspects discussed all 16 contributions related to that document. It was clarified that these requirements are to be evaluated according to the criteria defined in Reports ITU-R M.[IMT 2030.EVAL] and M.[IMT 2030.SUBMISSION], and that they are used only for the development of IMT-2030 radio interface technologies (RIT/SRITs).

IMPORTANT: As noted many times, 3GPP will specify the 6G Core network and 6G Architecture, which will have their own performance requirements. See References below.

The working party’s draft new report, “Minimum requirements related to technical performance for IMT‑2030 radio interface(s),” outlines 20 technical performance requirements (TPRs). Seven of them are new, defined specifically to characterize 6G performance. These IMT-2030 technical performance requirements will serve as unified criteria to evaluate the candidate 6G radio interfaces (RITs/SRITs).

Image Credit:  ITU-R

…………………………………………………………………………………………………….

The IMT-2030 Usage Scenarios:

The full set of requirements is based on six proposed usage scenarios for 6G networks:

  • Immersive communication (IC)
  • Hyper reliable and low‑latency communication (HRLLC)
  • Massive communication (MC)
  • Ubiquitous connectivity (UC)
  • Artificial intelligence (AI) and communication (AIAC)
  • Integrated sensing and communication (ISAC)

The IMT-2030 framework:

The newly defined 6G requirements build on the IMT‑2030 framework that ITU first published in December 2023 as a globally harmonized foundation for next‑generation connectivity (Recommendation ITU‑R M.2160). This recommendation also defines the overarching principles for future network design, notably:

  • Sustainability.
  • Security and resilience.
  • Connecting the unconnected.
  • Ubiquitous intelligence.

ITU – the United Nations agency for digital technologies – aims for the 6th generation of mobile communications (6G) to enable affordable, resilient, energy‑efficient networks for health, education, agriculture and disaster response. Advanced networks also present a way to close the persistent digital divide that today leaves many people in low-income countries behind.

This work to date provides a unified technical foundation to evaluate the candidate radio interfaces for IMT-2030 and guide the evolution of global 6G research and standardization.

Groundwork for future resilience:

IMT‑2030 lays the groundwork for affordable, high‑quality connectivity to remote and underserved communities. By setting globally harmonized performance requirements, it aims to ensure access for everyone, make communication systems more resilient, support sustainability and implement energy‑efficient technologies. ITU aims for innovative 6G services to deliver broad social and economic benefits.

The 20 requirements set out in the new draft report are meant to provide a consistent basis for specification and evaluation. While the requirements establish minimum performance levels, they do not restrict implementation approaches or guarantee real-world deployment performance.

They reflect ongoing global research and technology activities and should pave the way for concrete IMT-2030 evaluation guidelines, the next step in ITU’s global standardization process for 6G.

Accordingly, the IMT-2030 draft report has been submitted for approval to ITU‑R Study Group 5, responsible for terrestrial radiocommunication services, at a meeting scheduled for 1 December.

Until then, the draft remains available exclusively to ITU‑R members directly involved in its finalization and approval. You need a TIES login account to access ITU documents.

………………………………………………………………………………………………………………………..

About ITU-R Study Group 5:

ITU-R Study Group 5 is responsible for Terrestrial Services, including Fixed Wireless, Mobile (land, maritime and aeronautical), radiodetermination service as well as amateur and amateur-satellite services and the development of international standards, regulation and guidelines for these systems. The group’s work encompasses a wide range of topics, including spectrum management, network architecture, and radio interface technologies.

About ITU-R Working Party 5D:

ITU-R Working Party 5D is responsible for the development and harmonization of international standards for International Mobile Telecommunications (IMT) systems, including the latest IMT-2030 (6G) technology. The working party’s efforts ensure interoperability and global compatibility for wireless communication systems.

Further information on IMT‑2030 and related activities is available on the portal for IMT towards 2030 and beyond.

………………………………………………………………………………………………………..

References:

IMT-2030: Technical requirements for the 6G future

https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/Pages/default.aspx

Roles of 3GPP and ITU-R WP 5D in the IMT 2030/6G standards process

ITU-R M.[IMT-2030.EVAL] & ITU-R M.[IMT-2030.SUBMISSION] reports: Evaluation & Submission Guidelines for 6G RIT/SRITs (6G)

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

Comparing AI Native mode in 6G (IMT 2030) vs AI Overlay/Add-On status in 5G (IMT 2020)

AI wireless and fiber optic network technologies; IMT 2030 “native AI” concept

Verizon’s 6G Innovation Forum joins a crowded list of 6G efforts that may conflict with 3GPP and ITU-R IMT-2030 work

ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

Development of “IMT Vision for 2030 and beyond” from ITU-R WP 5D

AWS to deploy AI inference chips from Cerebras in its data centers; Annapurna Labs/Amazon in-house AI silicon products

Amazon Web Services (AWS) announced it plans to integrate AI processors from Cerebras Systems [1.]  into its data centers, signaling growing confidence in the AI-focused semiconductor startup. Under a new multiyear partnership announced Friday, AWS will deploy Cerebras’s Wafer-Scale Engine (WSE) to accelerate inference workloads—the stage of AI operations where models generate responses to user queries. Financial details of the agreement were not disclosed.

Note 1.  Founded in 2015 and headquartered in Sunnyvale, CA, Cerebras claims to have the world’s fastest AI inference and training platform.

The collaboration reflects a significant realignment in compute infrastructure strategies across the AI ecosystem. While initial industry focus centered on model training, the rapid expansion of deployed AI services is driving demand for optimized inference performance. Traditional GPUs, though unmatched for training, can be suboptimal for inference scenarios that require ultra-low latency and high throughput. Cloud and AI platform providers are therefore diversifying their silicon portfolios to better match workload profiles and to scale capacity efficiently.

AWS, the world’s largest cloud infrastructure provider, has traditionally relied on its in-house semiconductor division, Annapurna Labs, for custom chip design. Annapurna’s Trainium processors compete with GPUs from major suppliers such as Nvidia and AMD, offering cost and performance advantages for AI training workloads. The new partnership introduces Cerebras technology into AWS infrastructure, where it will work alongside Trainium to enhance large-scale inference capabilities.

Cerebras, best known for its wafer-scale architecture, markets its WSE processors as a high-speed inference platform capable of executing the decode phase of generative AI processing—where text, images, or other outputs are generated—at up to 25 times the speed of conventional GPU solutions. The company, valued at approximately $23 billion following a $1 billion funding round in February, has attracted backing from Fidelity, Benchmark, Tiger Global, Atreides, and Coatue.
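To put the claimed speed-up in concrete terms, here is a rough back-of-the-envelope calculation; the baseline token rate is purely illustrative, not a measured figure from either vendor:

```python
# Illustrative only: assume a GPU baseline of 100 output tokens/second.
gpu_tokens_per_s = 100
wse_tokens_per_s = gpu_tokens_per_s * 25  # the claimed up-to-25x decode speed-up

response_tokens = 1_000  # a moderately long generated answer
gpu_latency_s = response_tokens / gpu_tokens_per_s  # time to generate on the GPU baseline
wse_latency_s = response_tokens / wse_tokens_per_s  # time to generate at the claimed WSE rate

print(f"GPU: {gpu_latency_s:.1f}s, WSE: {wse_latency_s:.1f}s")  # GPU: 10.0s, WSE: 0.4s
```

At these assumed rates, a response that takes ten seconds to stream from the baseline would complete in under half a second—the kind of gap that matters for interactive coding and agentic workloads.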

The Cerebras deal underscores a major shift in the market for computing power. Image Credit: Rebecca Lewington/Cerebras Systems/Reuters

The AWS collaboration follows Cerebras’s major compute partnership with OpenAI, which reportedly involves deploying up to 750 MW of computing capacity powered by its chips. AWS and Cerebras will position their joint offering as a premium cloud inference solution, targeting enterprise AI developers requiring high-performance and scalable compute.

“The scale of AI demand is shifting from model creation to global deployment,” said Andrew Feldman, CEO of Cerebras. “Working with AWS aligns our technology with the industry’s largest cloud, giving us reach to a broad enterprise and developer base. If you want slow inference, there will be cheaper ways to go,” Feldman said. “But if you want fast tokens, if speed matters to you, if you’re doing coding or agentic work, not only are we the absolute fastest, but we intend to set the bar. We’re in this to win it.”

AWS and Cerebras will support both aggregated and disaggregated configurations. A disaggregated configuration, which runs the prefill and decode phases on separate hardware pools, is ideal for large, stable workloads; the traditional aggregated approach remains the better fit for most customers, who run a mix of workloads with different prefill/decode ratios. The start-up expects most customers will want access to both, with the ability to route workloads to whichever configuration serves them best.
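As an illustration of routing logic of this kind—not AWS’s or Cerebras’s actual scheduler—the following sketch picks a configuration from a workload’s size and stability; the class, function, and threshold are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    prefill_tokens: int  # tokens processed while ingesting prompts
    decode_tokens: int   # tokens generated in responses
    stable: bool         # roughly constant traffic profile over time

def choose_config(w: Workload, min_tokens: int = 100_000) -> str:
    """Route large, stable workloads to a 'disaggregated' setup
    (dedicated prefill and decode pools); everything else defaults
    to 'aggregated' (prefill and decode share the same hardware)."""
    total = w.prefill_tokens + w.decode_tokens
    if w.stable and total >= min_tokens:
        return "disaggregated"
    return "aggregated"

# A large, steady serving workload lands on the disaggregated pool:
print(choose_config(Workload(500_000, 200_000, stable=True)))   # disaggregated
# A small, bursty workload stays on the aggregated pool:
print(choose_config(Workload(1_000, 500, stable=False)))        # aggregated
```

A real router would also weigh latency targets and current pool utilization, but the core decision—matching each workload’s prefill/decode profile to the configuration that serves it best—is the one described above.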

The move intensifies competition in the inference silicon segment, where Nvidia faces growing pressure from purpose-built processor architectures such as Cerebras’s WSE and other emerging alternatives. Nvidia, which recently announced a $20 billion licensing deal with Groq and plans to unveil a new inference-optimized platform, remains the dominant supplier but now contends with an accelerating wave of specialization across the AI compute stack.

AWS vice president and Annapurna Labs co-founder Nafea Bshara emphasized the company’s goal of offering flexible performance tiers. “Our job is to push the speed and lower the price,” he said, noting that AWS will continue to offer cost-optimized Trainium-only options alongside high-performance Cerebras-Trainium configurations.

………………………………………………………………………………………………………………………………………………………………………………………………….

Amazon’s Internally Designed AI Silicon:

Amazon has built a fairly broad internal AI-oriented silicon portfolio through Annapurna Labs, primarily for AWS:

  • Inferentia (Inferentia, Inferentia2) – Custom machine learning accelerators designed for high-throughput, low-cost inference at cloud scale. These power many AWS inference instances and are positioned as an alternative to Nvidia GPUs for production model serving.

  • Trainium (Trainium, Trainium2, Trainium3) – AI training accelerators optimized for large-scale model training (including frontier and foundation models), with Trainium2 and Trainium3 as newer generations offering materially higher performance and better $/compute than the first generation. These are central to projects such as the Rainier supercomputer for Anthropic.

  • Graviton (Graviton, Graviton2/3/4) – Arm-based general-purpose CPUs used heavily across EC2, increasingly in AI-adjacent roles (pre/post-processing, orchestration, model-serving microservices) and as part of cost-optimized AI stacks, even though they are not dedicated accelerators.

  • Nitro system – While not an AI accelerator per se, the Nitro family (offload cards and system) is an internally developed data-plane and virtualization offload architecture that underpins EC2 and works in tandem with Graviton, Inferentia, and Trainium to free CPU cycles and improve I/O for AI/ML workloads.

All of these are designed and iterated internally by Annapurna Labs for exclusive use in AWS data centers, then exposed to customers via AWS services rather than as standalone merchant silicon.

Amazon’s Annapurna Labs is an internal chip design group that has become a core strategic asset for AWS, especially for custom data center and AI silicon.

Origins and acquisition:

  • Annapurna Labs was founded in Israel in 2011 by semiconductor veterans from Intel and Broadcom, including Avigdor Willenz and Nafea Bshara.

  • “When we talked with market sources and consulted with experts in the fields of data and servers, at that time only Amazon had a holistic vision and the ability to execute on a large scale,” Bshara recalls of the start of the relationship with Amazon. “We were prepared to build the technology and at the same time were open to working with startups. From there we began a journey together with many meetings and shared thinking, among others with James Hamilton (a former Microsoft database product architect, later an AWS SVP), and within six months we found ourselves inside Amazon.”

  • Amazon began working with the company around 2013 and acquired it in 2015 for an estimated $350–$400 million.

  • Before the deal, Annapurna was in stealth, focusing on low‑power networking and server chips to improve data center efficiency.

Role inside Amazon and AWS:

  • Post‑acquisition, Annapurna was folded into AWS as a specialist microelectronics and custom silicon group, designing chips to reduce cost and power per unit of compute.

  • The group underpins several key AWS technologies: the Nitro system for offloading virtualization and I/O, Arm‑based Graviton CPUs for general compute, and Trainium and Inferentia accelerators for AI training and inference.

  • These chips let AWS optimize performance per watt and per dollar versus x86 servers and third‑party accelerators, improving margins and competitive pricing.

Key products and architectures:

  • Nitro: A combination of custom hardware and software that offloads storage, networking, and security functions from the host CPU, increasing tenant isolation and freeing CPU cycles for workloads.

  • Graviton: A family of Arm‑based server CPUs introduced in 2018; Graviton has since been widely adopted on AWS and is now used by most AWS customers for general cloud infrastructure workloads due to better price‑performance and energy efficiency.

  • Inferentia and Trainium: Custom accelerators designed by Annapurna for machine learning inference (Inferentia) and training (Trainium), intended to reduce AWS’s dependence on high‑priced Nvidia GPUs for AI workloads.

Strategic importance and AI focus:

  • Annapurna’s work is central to Amazon’s strategy of vertical integration in the cloud: owning the silicon stack as much as the software and services.

  • The group designs chips that power Amazon’s AI infrastructure, including systems used both by internal teams and external customers such as Anthropic, for which AWS is the primary cloud and silicon provider.

  • Amazon and Anthropic are collaborating on “Project Rainier,” a massive supercomputer built around hundreds of thousands of Annapurna‑designed Trainium2 chips, targeting more than five times the compute used to train current frontier models.

Organization, footprint, and industry impact:

  • Annapurna Labs maintains a significant presence in Israel, employing hundreds of engineers focused on advanced AI and networking processors for AWS.

  • It also operates major engineering hubs such as an Austin, Texas lab where advanced semiconductors and AI systems are designed and tested.

  • Analysts often describe the acquisition as one of Amazon’s most successful, arguing that Annapurna’s custom silicon is a “secret sauce” that helps AWS compete with Microsoft, Google, and others on performance, cost, and energy efficiency.

…………………………………………………………………………………………………………………………………………………………..

References:

https://www.cerebras.ai/company

https://www.cerebras.ai/blog/cerebras-is-coming-to-aws

https://www.wsj.com/tech/amazon-announces-inference-chips-deal-with-cerebras-109ecd31

https://www.marketwatch.com/story/how-the-ceo-of-this-upstart-nvidia-rival-hopes-to-seize-on-the-lucrative-market-for-ai-chips-d5ccdab0

https://en.globes.co.il/en/article-nafea-bshara-the-israeli-behind-amazons-graviton-chip-1001420744

Intel and AI chip startup SambaNova partner; SN50 AI inferencing chip max speed said to be 5X faster than competitive AI chips

Custom AI Chips: Powering the next wave of Intelligent Computing

RAN silicon rethink – from purpose built products & ASICs to general purpose processors or GPUs for vRAN & AI RAN

Will “AI at the Edge” transform telecom or be yet another telco monetization failure?

Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

OpenAI and Broadcom in $10B deal to make custom AI chips

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

2026 Consumer Electronics Show Preview: smartphones, AI in devices/appliances and advanced semiconductor chips

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Google announces Gemini: its most powerful AI model, powered by TPU chips
