AI in Telecom
SK Telecom forms AI CIC in-house company to pursue internal AI innovation
SK Telecom (SKT) is establishing an in-house independent company (CIC) that consolidates its artificial intelligence (AI) capabilities. Through AI CIC, SK Telecom plans to invest approximately 5 trillion won (US$3.5 billion) in AI over the next five years and achieve annual AI sales of over 5 trillion won by 2030.
On September 25th, SK Telecom CEO Ryu Young-sang held a town hall meeting for all employees at the SKT Tower Supex Hall in Jung-gu, Seoul, announcing the launch of AI CIC to pursue rapid AI innovation. Ryu will concurrently serve as the CEO of AI CIC. SK Telecom plans to unveil detailed organizational restructuring plans for AI CIC at the end of October this year.
“We are launching AI CIC, a streamlined organizational structure, and will simultaneously pursue internal AI innovation, including internal systems, organizational culture, and enhancing employees’ AI capabilities. We will grow AI CIC to be the main driver of SK’s AI business and, furthermore, the core that leads the AI business for the entire SK Group. The AI CIC will establish itself as South Korea’s leading AI business operator in all fields of AI, including services, platforms, AI data centers and proprietary foundation models,” Ryu said.
The newly established AI CIC will be responsible for all the company’s AI-related functions and businesses. It is expected that SK Telecom’s business will be divided into mobile network operations (MNO) and AI, with AI CIC consolidating related businesses to enhance operational efficiency. Furthermore, AI CIC will actively participate in government-led AI projects, contributing to the establishment of a government-driven AI ecosystem. SKT said that reorganizing its services under one umbrella will “drive AI innovation that enhances business productivity and efficiency.”
“Through this (AI CIC), we will play a central role in building a domestic AI-related ecosystem and become a company that contributes to the success of the national AI strategy,” Ryu said.
By integrating and consolidating its dispersed AI technology assets, SKT plans to strengthen the role of its “AI platform,” which supports AI technology and operations across the entire SK Group, including SKT. It will also pursue a strategy of securing a flexible “AI model” to respond to the diverse AI needs of the government, industry, and private sectors.
In addition, SKT will accelerate R&D in future growth areas such as digital twins and robotics, and expand domestic and international partnerships based on its AI full-stack capabilities.
Ryu Young-sang, CEO of SK Telecom, unveils the plans for the AI CIC
CEO Ryu said, “Over the past three years of transformation into an AI company, SK Telecom has secured various achievements, such as 10 million Adot (AI-enabled) subscribers, selection of an independent AI foundation model, the launch of the Ulsan AI DC, and global partnerships, and has laid the foundation for future leaps forward. We will achieve another AI innovation centered on the AI CIC to restore the trust of customers and the market and advance into a global AI company.”
………………………………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.businesskorea.co.kr/news/articleView.html?idxno=253124
SKT-Samsung Electronics to Optimize 5G Base Station Performance using AI
SK Telecom unveils plans for AI Infrastructure at SK AI Summit 2024
SK Telecom (SKT) and Nokia to work on AI assisted “fiber sensing”
SK Telecom and Singtel partner to develop next-generation telco technologies using AI
SK Telecom, DOCOMO, NTT and Nokia develop 6G AI-native air interface
South Korea has 30 million 5G users, but did not meet expectations; KT and SKT AI initiatives
Qualcomm CEO: expect “pre-commercial” 6G devices by 2028
During his keynote speech at the 2025 Snapdragon Summit in Maui, Qualcomm CEO Cristiano Amon said:
“We have been very busy working on the next generation of connectivity…which is 6G. It is designed to be the connection between the cloud and edge devices. The difference between 5G and 6G, besides increasing the speeds, increasing broadband, increasing the amount of data, is that it’s also a network that has the intelligence to have perception and sensor data. We’re going to have completely new use cases for this network of intelligence — connecting the edge and the cloud.”
“We have been working on this (6G) for a while, and it’s sooner than you think. We are ready to have pre-commercial devices with 6G as early as 2028. And when we get that, we’re going to have context aware intelligence at scale.”
…………………………………………………………………………………………………………………………………………………………………………………………..
Analysis: Let’s examine that statement, in light of the ITU-R IMT 2030 recommendations not scheduled to be completed until the end of 2030:
“Pre-commercial devices” are not meant for general consumers, while “as early as” leaves open the possibility that those 6G devices might not be available until after 2028.
…………………………………………………………………………………………………………………………………………………………………………………………..
Looking ahead at the future of devices, Amon noted that 6G would play a key role in the evolution of AI technology, with AI models becoming hybrid. This means a combination of cloud and edge devices (user interfaces, sensors, etc.). According to Qualcomm, 6G will make this happen. Amon envisions a future where AI agents are a crucial part of our daily lives, upending the way we currently use our connected devices. He firmly believes that smartphones, laptops, cars, smart glasses, earbuds, and more will have a direct line of communication with these AI agents, facilitated by 6G connectivity.
Opinion: This sounds very much like the hype around 5G ushering in a whole new set of ultra-low-latency applications, which never happened (because the 3GPP specs for URLLC were not completed in June 2020 when Release 16 was frozen). Also, very few mobile operators have deployed a 5G SA core, without which there are no advanced 5G features, such as network slicing and enhanced security.
Separately, Nokia Bell Labs has said that in the coming 6G era, “new man-machine interfaces” controlled by voice and gesture input will gradually replace more traditional inputs, like typing on touchscreens. That’s easy to read as conjecture, but we’ll have to see if that really happens when the first commercial 6G networks are deployed in late 2030 to early 2031.
We’re sure to see faster network speeds and higher data volumes with 6G, along with AI in more devices, but standardized 6G is still at least five years away from commercial reality.
References:
https://www.androidauthority.com/qualcomm-6g-2028-3600781/
https://www.nokia.com/6g/6g-explained/
ITU-R WP5D IMT 2030 Submission & Evaluation Guidelines vs 6G specs in 3GPP Release 20 & 21
ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology
ITU-R: IMT-2030 (6G) Backgrounder and Envisioned Capabilities
Ericsson and e& (UAE) sign MoU for 6G collaboration vs ITU-R IMT-2030 framework
Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases
6th Digital China Summit: China to expand its 5G network; 6G R&D via the IMT-2030 (6G) Promotion Group
MediaTek overtakes Qualcomm in 5G smartphone chip market
Lumen: “We’re Building the Backbone for the AI Economy” – NaaS platform to be available to more customers
“Lumen is determined to lead the transformation of our industry to meet the demands of the AI economy,” said Lumen Technologies CEO Kate Johnson. “With ubiquitous reach and a digital-first platform, we are positioned to deliver next-gen connectivity, power enterprise innovation, and secure our own growth. This is how we build the trusted network for AI and deliver exceptional value to our customers and shareholders.”
Highlights included keynote remarks from Johnson, who outlined the three pillars of the company’s strategy:
- Building the backbone for the AI economy with a physical network designed for scale, speed, and security – delivering connectivity anywhere and for everything customers want to do.
- Cloudifying and agentifying telecom to reduce complexity and simplify the network for customers as an intelligent, on-demand, consumption-based digital platform.
- Creating a connected ecosystem with partnerships that extend Lumen’s reach, accelerate customer-first, AI-driven innovation, and unlock new opportunities across industries.
Johnson noted how Lumen’s growth is powered by a set of unique enablers that turn the company’s network into a true digital platform. With near-term product launches like self-service digital portal Lumen Connect, a universal Fabric Port, and new innovations in development that extend intelligence into the network edge, Lumen is making connectivity programmable and effortless. Combined with the company’s Network-as-a-Service business model and a connected ecosystem of data centers, hyper-scalers and technology partners, these enablers give customers the speed, security, and simplicity they need to thrive in the AI economy.
Lumen Technologies CEO Kate Johnson spotlights the company’s bold strategy, financial progress, and early look at product roadmap to reimagine digital networking for the AI economy at a gathering of industry analysts.
………………………………………………………………………………………………………………………………………………………………………………………………………
Chief Financial Officer Chris Stansbury said 2026 is expected to mark an inflection point as new digital revenues, growth in IP and Wavelengths, and long-term hyper-scaler contracts begin to outpace legacy declines – setting up what he called a “trampoline moment” for expansion. Lumen projects business segment revenue growth in 2028 and a return to overall top-line growth in 2029, establishing a clear path from stabilization to value creation.
With a strengthened balance sheet and greater financial freedom, executives highlighted the bold investment in the company’s three strategic pillars, each designed to accelerate innovation and position Lumen for long-term industry leadership.
Lumen’s strategy begins with the physical network, which carries a significant portion of the world’s internet traffic. With construction underway coast-to-coast, the company is executing a multi-billion-dollar program to expand its intercity and metro fiber backbone:
- Adding 34 million new fiber miles by the end of 2028 for a total of 47 million intercity and metro miles.
- Connecting data centers, clouds, edge, and enterprise locations in any combination.
- Delivering 400G today, with plans to scale to 1.6 Tb/s in the future.
Lumen’s substantial investments to expand high-speed connectivity ensure customers have the network scale, speed, and reliability to confidently innovate and grow without constraints.
The rise of AI is driving unprecedented demands for a new, Cloud 2.0 architecture with distributed, low-latency, high-bandwidth networks that can move and process massive amounts of data across multi-cloud, edge, and enterprise locations. Lumen is meeting this challenge by cloudifying and agentifying telecom, turning its expansive fiber footprint into a programmable digital platform that strips away the complexity of legacy networking.
Lumen plans to make its network-as-a-service (NaaS) platform [1.] available to more customers, regardless of their existing internet connection. At the company’s Analyst Forum, Lumen said the NaaS platform will include new innovations like Lumen Fabric Port (Q4 2025), Lumen Multi-Cloud Gateway (Q4 2025), and Lumen Connect (Q1 2026). Together, these technologies digitize the entire service lifecycle, so customers can provision, manage, and scale thousands of services across thousands of locations within minutes.
Note 1. Network as a Service (NaaS) is a cloud-based model that allows businesses to rent networking services from a provider on a subscription or pay-per-use basis, instead of building and maintaining their own network infrastructure. NaaS provides scalable and flexible network capabilities, shifting the cost from a capital expense (CapEx) to an operational expense (OpEx). NaaS functions by using a virtualized, software-defined network, meaning the network capabilities are abstracted from the physical hardware. Businesses access and manage their network resources through a web-based interface or portal, and the NaaS provider manages the underlying infrastructure, including hardware, software, updates, and troubleshooting.
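The CapEx-to-OpEx shift described in the note above can be illustrated with a minimal sketch. The class, rates, and figures below are purely hypothetical assumptions for illustration, not Lumen pricing or APIs; the point is that in a pay-per-use NaaS model, cost scales with actual consumption rather than with infrastructure built up front.

```python
# Illustrative sketch of a pay-per-use NaaS cost model (OpEx).
# All names and rates are assumptions, not any provider's real pricing.
from dataclasses import dataclass

@dataclass
class NaaSService:
    name: str
    bandwidth_gbps: int
    price_per_gbps_hour: float  # assumed pay-per-use rate

    def monthly_cost(self, hours_active: float) -> float:
        """OpEx for one month: pay only for the hours the service is up."""
        return self.bandwidth_gbps * self.price_per_gbps_hour * hours_active

# A bursty workload that only needs a 400G link 100 hours per month
burst_link = NaaSService("dc-interconnect", bandwidth_gbps=400,
                         price_per_gbps_hour=0.02)
print(round(burst_link.monthly_cost(hours_active=100), 2))  # 800.0
```

Under a CapEx model, the same link would cost the same whether it carried traffic for 100 hours or sat idle; the consumption model is what makes the spend elastic.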
Lumen CTO Dave Ward unveiled “Project Berkeley,” a network interface device that essentially expands the company’s NaaS services, like on-demand internet, Ethernet and IP VPN, to off-net sites using any access type. Those access types can be 5G, fiber, copper, fixed wireless access, satellite and more. Project Berkeley leverages digital twin technology, which lets Lumen have “a full replicate understanding of exactly what’s going on in this device running out of our cloud.”
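The digital-twin idea behind Project Berkeley — a cloud-side replica that mirrors the state of the on-prem device — can be sketched as follows. All class and field names here are illustrative assumptions, not Lumen's implementation; the sketch only shows the pattern of syncing device telemetry into a replica and reasoning about the replica instead of the box.

```python
# Hypothetical sketch of a device digital twin: the cloud keeps a
# synchronized replica of the edge device's state. Names are illustrative.

class EdgeDevice:
    def __init__(self):
        self.state = {"access": "fixed-wireless", "service": "ip-vpn",
                      "link_up": True}

    def report(self):
        # Telemetry snapshot sent up to the cloud
        return dict(self.state)

class CloudTwin:
    def __init__(self):
        self.replica = {}

    def sync(self, snapshot):
        # Twin now mirrors the device's last reported state
        self.replica.update(snapshot)

    def diagnose(self):
        # Operators query the twin, not the physical device
        return "healthy" if self.replica.get("link_up") else "investigate access link"

device, twin = EdgeDevice(), CloudTwin()
twin.sync(device.report())
print(twin.diagnose())  # healthy
```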
Ward said on the company’s website:
“Lumen is taking the network out of its hardware box and transforming it into a true digital platform. By cloudifying our fiber assets into software and disrupting cloud economics, we’re giving customers the ability to turn up services within minutes, scale as their AI workloads demand, and innovate at cloud speed. This is what the future of digital networking should deliver.”
Lumen has been growing its NaaS platform for some time. It launched its first offering in 2023 and now counts over 1,000 enterprise NaaS customers. The company now plans to bring its connectivity products to over 10 million off-net buildings, said Ward. The device will also allow hyper-scalers to integrate and sell these products in their respective marketplaces.
In closing the Analyst session, CEO Johnson underscored that Lumen’s strategies are the foundation of the company’s momentum today – transforming the industry with innovation to fuel growth, strengthening financial performance, and positioning the company as a critical enabler in the digital economy.
“We’re thrilled by the energy and engagement we’ve seen from the analyst community. The discussions around how Lumen is delivering an expansive network, digital platform, connected ecosystem and winning culture to meet the exponential enterprise demands of AI demonstrate the urgent need for innovation in our industry, and we’re proud to be at the forefront of that conversation.”
About Lumen Technologies:
Lumen is unleashing the world’s digital potential. We ignite business growth by connecting people, data, and applications – quickly, securely, and effortlessly. As the trusted network for AI, Lumen uses the scale of our network to help companies realize AI’s full potential. From metro connectivity to long-haul data transport to our edge cloud, security, managed service, and digital platform capabilities, we meet our customers’ needs today and as they build for tomorrow.
For news and insights visit news.lumen.com, LinkedIn: /lumentechnologies, X: lumentechco, Facebook: /lumentechnologies, Instagram: @lumentechnologies and YouTube: /lumentechnologies. Lumen and Lumen Technologies are registered trademarks of Lumen Technologies LLC in the United States. Lumen Technologies LLC is a wholly owned affiliate of Lumen Technologies, Inc.
References:
For a replay of the webcast, visit Lumen’s investor website
https://www.fierce-network.com/broadband/lumen-says-its-taking-its-naas-new-level
Lumen deploys 400G on a routed optical network to meet AI & cloud bandwidth demands
Dell’Oro: Bright Future for Campus Network As A Service (NaaS) and Public Cloud Managed LAN
NaaS emerges as challenger to legacy network models; likely to grow rapidly along with SD WAN market
Verizon and WiPro in Network-as-a-Service (NaaS) partnership
ABI Research: Network-as-a-Service market to be over $150 billion by 2030
Cisco Plus: Network as a Service includes computing and storage too
Gartner: changes in WAN requirements, SD-WAN/SASE assumptions and magic quadrant for network services
Ciena to acquire Nubis Communications for high performance optical and electrical interconnects to support AI workloads
New Ciena Acquisition:
Today, Ciena announced it will acquire Nubis Communications, a privately held electronics startup headquartered in New Providence, New Jersey, for $270 million. Nubis specializes in high-performance, ultra-compact, low-power optical and electrical interconnects tailored to support AI workloads. The acquisition will give Ciena access to technology that supports a wider range of data center use cases. It is expected to close during Ciena’s fiscal fourth quarter.
Nubis’ solutions complement Ciena’s existing high-speed interconnects portfolio and will enable new capabilities to support growing AI workloads by significantly increasing scale up and scale out capacity and density inside the data center. The Nubis portfolio includes two key technologies:
- Co-Packaged Optics (CPO) / Near Packaged Optics (NPO): Nubis’ compact, high-density optical modules deliver ultra-fast data transfer using light instead of traditional electrical signals. Supporting up to 6.4 Tb/s full-duplex bandwidth, these modules are optimized for low-latency, low-power operation – making them ideal for scaling AI systems. Combined with Ciena’s high-speed SerDes, Nubis’ optical engines enable differentiated CPO solutions to address high-performance connectivity needs inside and between racks.
- Electrical ACC: Nubis’ advanced analog electronics enable Active Copper Cables (ACC) to support high-speed data transmission, allowing data to travel up to 4 meters at speeds of 200 Gb/s per lane. This low-power, low-latency solution helps customers connect more AI accelerators across racks without the limitations of traditional copper or DSP-based solutions.
Nubis has developed two products to increase bandwidth and reduce latency within and between data center racks:
- XT Optical Engines is a series of optical modules that support up to 6.4 Tbps of full-duplex bandwidth while using light instead of traditional electrical signals.
- Nitro Linear Redriver aims to improve the performance of all the copper cables that are wired into the data center. Bloomberg has predicted copper usage in North American data centers could increase by 1.1-2.4 million tons by 2030 as “AI demands mount.”
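A quick back-of-the-envelope check ties the two headline figures above together. The lane arithmetic below is our own illustrative assumption (Nubis has not disclosed its internal lane architecture here); it simply shows that a 6.4 Tb/s engine is consistent with an aggregate of 200 Gb/s lanes.

```python
# Assumed lane math, not Nubis' published architecture: how many
# 200 Gb/s lanes would aggregate to a 6.4 Tb/s (per direction) engine?
lane_rate_gbps = 200      # per-lane rate cited for the ACC / SerDes
engine_bw_tbps = 6.4      # XT Optical Engine full-duplex bandwidth
lanes = (engine_bw_tbps * 1000) / lane_rate_gbps
print(int(lanes))  # 32
```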
“The acquisition of Nubis represents a significant step forward in Ciena’s strategy to address the rapidly growing demand for scalable, high-performance connectivity inside the data center, driven by the explosive growth of AI-related traffic,” said David Rothenstein, Chief Strategy Officer at Ciena. “With ownership of these key technologies for a wider range of use cases inside the data center, we are expanding our competitive advantage by advancing development of differentiated solutions, reducing development costs, and driving long-term efficiency and profitability. Nitro also supports up to 4m of reach for 200G per lane active copper cables, far beyond the limits of passive copper and legacy analog solutions. This is a game-changer for AI infrastructure, where short-reach, high-bandwidth copper is preferred for cost and latency reasons,” Rothenstein added.
“The Nubis team is thrilled to join Ciena and enhance its industry-leading portfolio with our breakthrough interconnect technologies,” said Dan Harding, CEO of Nubis. “Together, we will advance Ciena’s data center strategy by delivering reliable, high-quality, and high-performance interconnect solutions to support the next generation of AI workloads.”
Dell’Oro VP Jimmy Yu said Nubis is probably “one of [Ciena’s] most forward-looking” acquisitions, since the company is assembling the pieces it thinks are necessary to support future data center networking. “This acquisition aligns well with Ciena’s overall strategy to expand into the data center market, and it likely played a role in their decision to exit future investments in broadband PON,” Yu said.
……………………………………………………………………………………………………………………………………………………………..
Ciena Cutting Back on Residential Broadband Access investments to focus on AI and Coherent Optics:
The Nubis takeover comes shortly after Ciena announced it will reduce investment in residential broadband access (e.g. 25G PON) to focus more on AI applications and its coherent optics business. Ciena CEO Gary Smith said on the company’s Q3 2025 earnings call:
“Folks are more concentrated on 10-gig and driving that out, and there’s a good market for that. As we looked at our overall portfolio and our investments in [25-gig], we see so much opportunity in these different AI workloads that we want to continue to really make sure we’re heavily invested in that….To be clear, we will continue to sell and support our existing broadband access products. However, we will be limiting our forward investments only to strategic areas such as DCOM [1.].”
Note 1. DCOM refers to Ciena’s data center out-of-band management solution, which involves replacing bulky legacy hardware like copper cabling and console servers with passive optical network (PON) technology.
Dell’Oro Group’s Jimmy Yu thinks Ciena’s move to re-allocate R&D dollars makes sense, so that the company is not spread too thin and does not miss out on the biggest opportunity sitting in front of it. “My guess is that to address the future of AI workloads and AI data center interconnect, Ciena will need to not only maintain their cadence on launching new high performance coherent optics like the WaveLogic 6e for long distance 1.6 Tbps connections, but also optical devices for shorter distances like 800 ZR/ZR+ plugs and even shorter distances that take them inside the data center,” Yu explained.
Ciena considers the WaveLogic series its bread-and-butter for coherent optics. The company in Q3 gained 11 new customers for its WaveLogic 6 Extreme product, bringing its total customer tally to 60. Companies deploying WaveLogic 6 include operators such as Arelion, Lumen and Telstra, which are upgrading their networks to support demand from cloud customers.
Supplemental Materials:
In conjunction with this announcement, Ciena has posted to the Events and Presentations page of the Investor Relations section of its website a recorded transaction overview presentation and accompanying transcript.
About Ciena:
Ciena is the global leader in high-speed connectivity. We build the world’s most adaptive networks to support exponential growth in bandwidth demand. By harnessing the power of our networking systems, components, automation software, and services, Ciena revolutionizes data transmission and network management. With unparalleled expertise and innovation, we empower our customers, partners, and communities to thrive in the AI era. For updates on Ciena, follow us on LinkedIn and X, or visit the Ciena Insights webpage and Ciena website.
About Nubis Communications:
Nubis says it innovates across photonics, electronics, packaging and manufacturing to create optics significantly more dense, scalable and lower power than existing solutions, breaking the I/O wall in data centers and enabling more advanced compute, AI and machine learning. The startup has raised over $50 million in funding from investors including Ericsson and Marvell Technology co-founders Weili Dai and Sehat Sutardja.
Nubis has just over 50 employees including a seasoned executive team. Founder Peter Winzer previously led fiber optic transmission research at Nokia’s Bell Labs, while CEO Dan Harding spent over 15 years at Broadcom.
References:
https://www.nubis-inc.com/about-us/
https://www.nubis-inc.com/products/
https://www.fierce-network.com/broadband/ciena-ramps-data-center-focus-new-270m-deal
https://www.fierce-network.com/broadband/ciena-pulls-back-broadband-focus-more-ai
AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics
Lumen and Ciena Transmit 1.2 Tbps Wavelength Service Across 3,050 Kilometers
Ciena CEO sees huge increase in AI generated network traffic growth while others expect a slowdown
Summit Broadband deploys 400G using Ciena’s WaveLogic 5 Extreme
DriveNets and Ciena Complete Joint Testing of 400G ZR/ZR+ optics for Network Cloud Platform
Ciena acquires 2 privately held companies: Tibit Communications and Benu Networks
Ericsson integrates agentic AI into its NetCloud platform for self healing and autonomous 5G private networks
Ericsson is integrating agentic AI into its NetCloud platform to create self-healing and autonomous 5G private (enterprise) networks. This initiative upgrades the existing NetCloud Assistant (ANA), a generative AI tool, into a strategic partner capable of managing complex workflows and orchestrating multiple AI agents. The agentic upgrade aims to simplify private 5G adoption by reducing deployment complexity and the need for specialized administration. The new agentic architecture allows the system to interpret high-level instructions and autonomously assign tasks to a team of specialized AI agents.
Key AI features include:
- Agentic organizational hierarchy: ANA will be supported by multiple orchestrator and functional AI agents capable of planning and executing (with administrator direction). Orchestrator agents will be deployed in phases, starting with a troubleshooting agent planned in Q4 2025, followed by configuration, deployment, and policy agents planned in 2026. These orchestrators will connect with task, process, knowledge, and decision agents within an integrated agentic framework.
- Automated troubleshooting: ANA’s troubleshooting orchestrator will include automated workflows that address the top issues identified by Ericsson support teams, partners, and customers, such as offline devices and poor signal quality. Planned to launch in Q4 2025, this feature is expected to reduce downtime and customer support cases by over 20 percent.
- Multi-modal content generation: ANA can now generate dynamic graphs to visually represent trends and complex query results involving multiple data points.
- Explainable AI: ANA displays real-time process feedback, revealing steps taken by AI agents in order to enhance transparency and trust.
- Expanded AIOps insights: NetCloud AIOps will be expanded to provide isolation and correlation of fault, performance, configuration, and accounting anomalies for Wireless WAN and NetCloud SASE. For Ericsson Private 5G, NetCloud is expected to provide service health analytics including KPI monitoring and user equipment connectivity diagnostics. Planned availability Q4 2025.
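The orchestrator/functional-agent hierarchy described above can be sketched as a small delegation pattern. Every class, role, and workflow name below is an illustrative assumption, not an Ericsson NetCloud API; the sketch only shows the shape of an orchestrator planning a workflow and handing steps to specialized agents.

```python
# Illustrative sketch of an agentic hierarchy: one orchestrator delegates
# to knowledge / decision / task agents. Names are assumptions, not
# Ericsson NetCloud components.

class FunctionalAgent:
    def __init__(self, role):
        self.role = role

    def run(self, task):
        # A real agent would invoke a model or tooling; here we trace the step
        return f"{self.role} handled: {task}"

class TroubleshootingOrchestrator:
    """Plans a workflow and delegates steps to specialized agents."""
    def __init__(self):
        self.agents = {r: FunctionalAgent(r)
                       for r in ("knowledge", "decision", "task")}

    def handle(self, issue):
        return [
            self.agents["knowledge"].run(f"look up known causes of '{issue}'"),
            self.agents["decision"].run("pick most likely remediation"),
            self.agents["task"].run("apply fix and verify device is back online"),
        ]

for line in TroubleshootingOrchestrator().handle("offline device"):
    print(line)
```

The transparency point in the "Explainable AI" feature maps to the traced steps each agent returns: the administrator sees what was done and why.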


Manish Tiwari, Head of Enterprise 5G, Ericsson Enterprise Wireless Solutions, adds: “With the integration of Ericsson Private 5G into the NetCloud platform, we’re taking a major step forward in making enterprise connectivity smarter, simpler, and adaptive. By building on powerful AI foundations, seamless lifecycle management, and the ability to scale securely across sites, we are providing flexibility to further accelerate digital transformation across industries. This is about more than connectivity: it is about giving enterprises the business-critical foundation they need to run IT and OT systems with confidence and unlock the next wave of innovation for their businesses.”
Pankaj Malhotra, Head of WWAN & Security, Ericsson Enterprise Wireless Solutions, says: “By introducing agentic AI into NetCloud, we’re enabling enterprises to simplify deployment and operations while also improving reliability, performance, and user experience. More importantly, it lays the foundation for our vision of fully autonomous, self-optimizing 5G enterprise networks, that can power the next generation of enterprise innovation.”
Agentic AI and the Future of Communications for Autonomous Vehicle (V2X)
Ericsson completes Aduna joint venture with 12 telcos to drive network API adoption
Ericsson reports ~flat 2Q-2025 results; sees potential for 5G SA and AI to drive growth
Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation
Ericsson’s sales rose for the first time in 8 quarters; mobile networks need an AI boost
Highlights of Nokia’s Smart Factory in Oulu, Finland for 5G and 6G innovation
Nokia has opened a Smart Factory in Oulu, Finland, for 5G/6G design, manufacturing, and testing, integrating AI technologies and Industry 4.0 applications. It brings ~3,000 staff under one roof and is positioned as Europe’s flagship site for radio access network (RAN) innovation.
The Oulu campus will initially focus on 5G, including standardization, systems-on-chip, 5G radio hardware and software, and patents. The Oulu Factory, part of the new campus, will handle New Product Introduction (NPI) for Nokia’s 5G radio and baseband products. The new campus strengthens Oulu’s ecosystem as a global testbed for resilient and secure networks for both civilian and defense applications.
At the Oulu “Home of Radio” campus, Nokia’s research and innovation underpin high-quality, tested, world-class products ready for customers across markets. Nokia’s experts will continue to foster innovation, from Massive MIMO radios like Osprey and Habrok to next-generation 6G solutions, creating secure, high-performance, future-proof connectivity.
Sustainability is integral to the facility. Renewable energy is used throughout the site, with surplus energy used to heat 20,000 households in Oulu. The on-site energy station is one of the world’s largest CO2-based district heating and cooling plants.
Active 6G proof-of-concept trials will be conducted using the ~7 GHz band and challenging propagation scenarios.
“Our teams in Oulu are shaping the future of 5G and 6G, developing our most advanced radio networks. Oulu has a unique ecosystem that integrates Nokia’s R&D and smart manufacturing with an ecosystem of partners – including universities, start-ups and NATO’s DIANA test center. Oulu embodies our culture of innovation, and the new campus will be essential to advancing the connectivity necessary to power the AI supercycle,” said Justin Hotard, President and CEO of Nokia.
Nokia Oulu Facts:
- Around 3,000 employees and 40 nationalities working on the campus.
- The Oulu campus covers the entire product lifecycle, from R&D to manufacturing and testing.
- The building’s overall footprint is 55,000 square metres, including manufacturing, R&D and office space.
- Green campus: all purchased energy is green, and all surplus energy generated is fed back into the district heating system to heat 20,000 local households.
- The campus boasts a 100% waste utilization rate and 99% avoidance of CO2 emissions.
- Construction started in the second half of 2022, with the first employees moving into the facility in the first half of this year.
- YIT constructed the site and Arkkitehtitoimisto ALA were the architects.
References:
https://www.sdxcentral.com/analysis/behind-the-scenes-at-nokias-new-home-of-radio/
Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?
Nokia’s Bell Labs to use adapted 4G and 5G access technologies for Indian space missions
Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions
Verizon partners with Nokia to deploy large private 5G network in the UK
Nokia selects Intel’s Justin Hotard as new CEO to increase growth in IP networking and data center connections
Nokia sees new types of 6G connected devices facilitated by a “3 layer technology stack”
Nokia and Eolo deploy 5G SA mmWave “Cloud RAN” network
Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?
Network operators are bracing themselves for a wave of AI traffic, based partly on an RtBrick survey as well as forecasts by Cisco and Nokia, but that wave hasn’t arrived yet. The heavy AI traffic today is East-West, within cloud-resident AI data centers and over AI data center interconnects.
1. Cisco believes that AI inference agents will soon engage “continuously” with end-users, keeping traffic levels consistently high. The company has stated that AI will greatly increase network traffic, citing a shift toward new, more demanding traffic patterns driven by “agentic AI” and other applications. This perspective is a core part of Cisco’s business strategy, which is focused on selling the modernized infrastructure needed to handle the coming surge. Cisco identifies three stages of AI-driven traffic growth, each with different network demands:
- Spikey generative AI traffic: today’s generative AI models produce traffic that surges when a user submits a query, then returns to a low baseline. Current networks are largely handling this traffic without issues.
- Persistent “agentic” AI traffic: The next phase will involve AI agents that constantly interact with end-users and other agents. Cisco CEO Chuck Robbins has stated that this will drive traffic “beyond the peaks of current chatbot interaction” and keep network levels “consistently high”.
- Edge-based AI: A third wave of “physical AI” will require more computing and networking at the edge of the network to accommodate specialized use cases like industrial IoT.
“As we move towards agentic AI and the demand for inferencing expands to the enterprise and end user networking environments, traffic on the network will reach unprecedented levels,” Cisco CEO Chuck Robbins said on the company’s recent earnings call. “Network traffic will not only increase beyond the peaks of current chatbot interaction, but will remain consistently high with agents in constant interaction.”
2. Nokia recently predicted that both direct and indirect AI traffic on mobile networks will grow at a faster pace than regular, non-AI traffic.
- Direct AI traffic: This is generated by users or systems directly interacting with AI services and applications. Consumer examples: Using generative AI tools, interacting with AI-powered gaming, or experiencing extended reality (XR) environments. Enterprise examples: Employing predictive maintenance, autonomous operations, video and image analytics, or enhanced customer interactions.
- Indirect AI traffic: This occurs when AI algorithms are used to influence user engagement with existing services, thereby increasing overall traffic. Examples: AI-driven personalized recommendations for video content on social media, streaming platforms, and online marketplaces, which can lead to longer user sessions and higher bandwidth consumption.
The Finland-based network equipment vendor warned that the AI wave could bring “a potential surge in uplink data traffic that could overwhelm our current network infrastructure if we’re not prepared,” noting that the rise of hybrid on-device and cloud tools will require much more than the 5-15 Mbps uplink available on today’s networks. Nokia’s Global Network Traffic 2030 report forecasts that overall traffic could grow to 5 to 9 times current levels by 2033, with AI traffic expected to hit 1,088 exabytes (EB) per month by 2033. In other words, overall traffic grows 5x in the best-case scenario and 9x in the worst case.
To manage this anticipated traffic surge, Nokia advocates for radical changes to existing network infrastructure.
- Cognitive networks: The company states that networks must become “cognitive,” leveraging AI and machine learning (ML) to handle the growing data demand.
- Network-as-Code: As part of its Technology Strategy 2030, Nokia promotes a framework for more flexible and scalable networks that leverage AI and APIs.
- 6G preparation: Nokia Bell Labs is already conducting research and field tests to prepare for 6G networks around 2030, with a focus on delivering the capacity needed for AI and other emerging technologies.
- Optimizing the broadband edge: The company also highlights the need to empower the broadband network edge to handle the demands of AI applications, which require low latency and high reliability.
Nokia’s Global Network Traffic 2030 report didn’t mention agentic AI, which are artificial intelligence systems designed to autonomously perceive, reason, and act in their environment to achieve complex goals with less human oversight. Unlike generative AI, which focuses on creating content, agentic AI specializes in workflow automation and independent problem-solving by making decisions, adapting plans, and executing tasks over extended periods to meet long-term objectives.
3. Ericsson did point to traffic increases stemming from the use of AI-based assistants in its 2024 Mobility Report. In particular, it predicted the majority of traffic would be related to the use of consumer video AI assistants, rather than text-based applications, and – outside the consumer realm – forecast increased traffic from “AI agents interacting with drones and droids.” “Accelerated consumer uptake of GenAI will cause a steady increase of traffic in addition to the baseline increase,” Ericsson said of its traffic growth scenario.
…………………………………………………………………………………………………………………………………………………………………………………..
Dissenting Views:
1. UK-based Disruptive Analysis founder Dean Bubley isn’t a proponent of huge AI traffic growth. “Many in the telecom industry and vendor community are trying to talk up AI as driving future access network traffic and therefore demand for investment, spectrum etc., but there is no evidence of this at present,” he told Fierce Network.
Bubley argues that AI agents won’t really create much traffic on access networks to homes or businesses. Instead, he said, they will drive traffic “inside corporate networks, and inside and between data centers on backbone networks and inside the cloud.” “There might be a bit more uplink traffic if video/images are sent to the cloud for AI purposes, but again that’s hypothetical,” he said.
2. In a LinkedIn post, Ookla analyst Mike Dano said he was a bit suspicious about “Cisco predicting a big jump in network traffic due to AI agents constantly wandering around the Internet and doing things.” While almost all of the comments agreed with Dano, it still is an open question whether the AI traffic Armageddon will actually materialize.
……………………………………………………………………………………………………………………………………………………………………………………….
References:
RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030
https://www.fierce-network.com/cloud/will-ai-agents-really-raise-network-traffic-baseline
Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers
Both telecom and enterprise networks are being reshaped by the bandwidth and latency demands of AI, and network operators that fail to modernize their architectures risk falling behind. Why? AI workloads are network killers: they demand massive east-west traffic, ultra-low latency, and predictable throughput.
- Real-time observability is becoming non-negotiable, as enterprises need to detect and fix issues before they impact AI model training or inference.
- Self-driving networks are moving from concept to reality, with AI not just monitoring but actively remediating problems.
- The competitive race is now about who can integrate AI into networking most seamlessly — and HPE/Juniper’s Mist AI, Cisco’s assurance stack, and Nvidia’s AI fabrics are three different but converging approaches.
Cisco, HPE/Juniper, and Nvidia are designing AI-optimized networking equipment, with a focus on real-time observability, lower latency and increased data center performance for AI workloads. Here’s a capsule summary:
Cisco: AI-Ready Infrastructure:
- Cisco is embedding AI telemetry and analytics into its Silicon One chips, Nexus 9000 switches, and Catalyst campus gear.
- The focus is on real-time observability via its ThousandEyes platform and AI-driven assurance in DNA Center, aiming to optimize both enterprise and AI/ML workloads.
- Cisco is also pushing AI-native data center fabrics to handle GPU-heavy clusters for training and inference.
- Cisco claims “exceptional momentum” and leadership in AI: >$800M in AI infrastructure orders taken from web-scale customers in Q4, bringing the FY25 total to over $2B.
- Cisco Nexus switches are now fully integrated with NVIDIA’s Spectrum-X architecture to deliver high-speed networking for AI clusters.
HPE + Juniper: AI-Native Networking Push:
- Following its $13.4B acquisition of Juniper Networks, HPE has merged Juniper’s Mist AI platform with its own Aruba portfolio to create AI-native, “self-driving” networks.
- Key upgrades include:
- Agentic AI troubleshooting that uses generative AI workflows to pinpoint and fix issues across wired, wireless, WAN, and data center domains.
- Marvis AI Assistant with enhanced conversational capabilities — IT teams can now ask open-ended questions like “Why is the Orlando site slow?” and get contextual, actionable answers.
- Large Experience Model (LEM) with Marvis Minis — digital twins that simulate user experiences to predict and prevent performance issues before they occur.
- Apstra integration for data center automation, enabling autonomous service provisioning and cross-domain observability.
Nvidia: AI Networking at Compute Scale
- Nvidia’s Spectrum-X Ethernet platform and Quantum-2 InfiniBand (both from Mellanox acquisition) are designed for AI supercomputing fabrics, delivering ultra-low latency and congestion control for GPU clusters.
- In partnership with HPE, Nvidia is integrating NVIDIA AI Enterprise and Blackwell architecture GPUs into HPE Private Cloud AI, enabling enterprises to deploy AI workloads with optimized networking and compute together.
- Nvidia’s BlueField DPUs offload networking, storage, and security tasks from CPUs, freeing resources for AI processing.
………………………………………………………………………………………………………………………………………………………..
Here’s a side-by-side comparison of how Cisco, HPE/Juniper, and Nvidia are approaching AI‑optimized enterprise networking — so you can see where they align and where they differentiate:
Feature / Focus Area | Cisco | HPE / Juniper | Nvidia |
---|---|---|---|
Core AI Networking Vision | AI‑ready infrastructure with embedded analytics and assurance for enterprise + AI workloads | AI‑native, “self‑driving” networks across campus, WAN, and data center | High‑performance fabrics purpose‑built for AI supercomputing |
Key Platforms | Silicon One chips, Nexus 9000 switches, Catalyst campus gear, ThousandEyes, DNA Center | Mist AI platform, Marvis AI Assistant, Marvis Minis, Apstra automation | Spectrum‑X Ethernet, Quantum‑2 InfiniBand, BlueField DPUs |
AI Integration | AI‑driven assurance, predictive analytics, real‑time telemetry | Generative AI for troubleshooting, conversational AI for IT ops, digital twin simulations | AI‑optimized networking stack tightly coupled with GPU compute |
Observability | End‑to‑end visibility via ThousandEyes + DNA Center | Cross‑domain observability (wired, wireless, WAN, DC) with proactive issue detection | Telemetry and congestion control for GPU clusters |
Automation | Policy‑driven automation in campus and data center fabrics | Autonomous provisioning, AI‑driven remediation, intent‑based networking | Offloading networking/storage/security tasks to DPUs for automation |
Target Workloads | Enterprise IT, hybrid cloud, AI/ML inference & training | Enterprise IT, edge, hybrid cloud, AI/ML workloads | AI training & inference at hyperscale, HPC, large‑scale data centers |
Differentiator | Strong enterprise install base + integrated assurance stack | Deep AI‑native operations with user experience simulation | Ultra‑low latency, high‑throughput fabrics for GPU‑dense environments |
Key Takeaways:
- Cisco is strongest in enterprise observability and broad infrastructure integration.
- HPE/Juniper is leaning into AI‑native operations with a heavy focus on automation and user experience simulation.
- Nvidia is laser‑focused on AI supercomputing performance, building the networking layer to match its GPU dominance.
- Cisco leverages its market leadership, customer base and strategic partnerships to integrate AI with existing enterprise networks.
- HPE/Juniper challenges rivals with an AI-native, experience-first network management platform.
- Nvidia aims to dominate the full-stack AI infrastructure, including networking.
SoftBank’s Transformer AI model boosts 5G AI-RAN uplink throughput by 30%, compared to a baseline model without AI
SoftBank has developed its own Transformer-based AI model for wireless signal processing. SoftBank used the model to improve uplink channel interpolation, a signal processing technique in which the network essentially makes an educated guess about the characteristics and current state of a signal’s channel. Enabling this type of intelligence in a network contributes to faster, more stable communication, according to SoftBank. The Japanese wireless network operator successfully increased uplink throughput by approximately 20% compared to a conventional signal processing method (the baseline method). In the latest demonstration, the new Transformer-based architecture was run on GPUs and tested in a live over-the-air (OTA) wireless environment. In addition to confirming real-time operation, the results showed further throughput gains and achieved ultra-low latency.
Editor’s note: A Transformer model is a type of neural network architecture that emerged in 2017. It excels at interpreting streams of sequential data and underpins large language models (LLMs). Transformer models have also achieved elite performance in other fields of artificial intelligence (AI), including computer vision, speech recognition and time series forecasting. Transformer models can be lightweight, efficient, and versatile – capable of natural language processing (NLP), image recognition and wireless signal processing, as this SoftBank demo shows.
Significant throughput improvement:
- Uplink channel interpolation using the new architecture improved uplink throughput by approximately 8% compared to the conventional CNN model. Compared to the baseline method without AI, this represents an approximately 30% increase in throughput, proving that the continuous evolution of AI models leads to enhanced communication quality in real-world environments.
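The quoted gains compound. A rough arithmetic check, with the non-AI baseline normalized to 1.0 (illustrative numbers taken from the percentages reported above, not SoftBank’s raw measurements), shows why roughly +20% (CNN over baseline) followed by +8% (Transformer over CNN) lands at about +30% overall:

```python
# Rough consistency check of SoftBank's reported uplink throughput gains.
# The non-AI baseline is normalized to 1.0; percentages are those quoted above.
baseline = 1.0
cnn = baseline * 1.20        # conventional CNN model: ~20% over the non-AI baseline
transformer = cnn * 1.08     # Transformer architecture: ~8% over the CNN model
gain = (transformer / baseline - 1) * 100
print(f"Transformer gain over non-AI baseline: {gain:.1f}%")  # ~29.6%, i.e. ~30%
```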
Higher AI performance with ultra-low latency:
- While real-time 5G communication requires processing in under 1 millisecond, this demonstration with the Transformer achieved an average processing time of approximately 338 microseconds, an ultra-low latency that is about 26% faster than the convolutional neural network (CNN) [1.] based approach. Generally, AI model processing speeds decrease as performance increases; this achievement overcomes the technically difficult challenge of simultaneously achieving higher AI performance and lower latency. Editor’s note: Perhaps this can overcome the performance limitations in ITU-R M.2150 for URLLC in the RAN, which is based on an uncompleted 3GPP Release 16 specification.
Note 1. CNN-based approaches to achieving low latency focus on optimizing model architecture, computation, and hardware to accelerate inference, especially in real-time applications. Rather than relying on a single technique, the best results are often achieved through a combination of methods.
Using the new architecture, SoftBank conducted a simulation of “Sounding Reference Signal (SRS) prediction,” a process required for base stations to assign optimal radio waves (beams) to terminals. Previous research using a simpler Multilayer Perceptron (MLP) AI model for SRS prediction confirmed a maximum downlink throughput improvement of about 13% for a terminal moving at 80 km/h.
In the new simulation with the Transformer-based architecture, the downlink throughput for a terminal moving at 80 km/h improved by up to approximately 29%, and by up to approximately 31% for a terminal moving at 40 km/h. This confirms that enhancing the AI model more than doubled the throughput improvement rate (see Figure 1). This is a crucial achievement that will lead to a dramatic improvement in communication speeds, directly impacting the user experience.
The most significant technical challenge for the practical application of “AI for RAN” is to further improve communication quality using high-performance AI models while operating under the real-time processing constraint of less than one millisecond. SoftBank addressed this by developing a lightweight and highly efficient Transformer-based architecture that focuses only on essential processes, achieving both low latency and maximum AI performance. The important features are:
(1) Grasps overall wireless signal correlations
By leveraging the “Self-Attention” mechanism, a key feature of Transformers, the architecture can grasp wide-ranging correlations in wireless signals across frequency and time (e.g., complex signal patterns caused by radio wave reflection and interference). This allows it to maintain high AI performance while remaining lightweight. Convolution focuses on a part of the input, while Self-Attention captures the relationships of the entire input (see Figure 2).
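The contrast between a convolution’s local window and self-attention’s global view can be sketched in a few lines of NumPy. This is a minimal single-head illustration with made-up dimensions, not SoftBank’s actual architecture: every output row is a weighted mix of all input positions, which is what lets the model capture wide-ranging frequency/time correlations.

```python
import numpy as np

# Minimal single-head self-attention sketch (illustrative dimensions only).
# Unlike a convolution, whose output at each position depends on a local
# window, each output row here is a weighted combination of ALL positions.
def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over all positions
    return w @ v                                     # global mixing of the input

rng = np.random.default_rng(0)
seq_len, d_model = 12, 8          # e.g. 12 time/frequency samples of a channel
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)                  # (12, 8): same length, global receptive field
```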
(2) Preserves physical information of wireless signals
While it is common to normalize input data to stabilize learning in AI models, the architecture features a proprietary design that uses the raw amplitude of wireless signals without normalization. This ensures that crucial physical information indicating communication quality is not lost, significantly improving the performance of tasks like channel estimation.
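A toy example shows what standard per-sample normalization throws away. Two received signals that differ only in absolute amplitude (for example, due to path loss) become indistinguishable after standardization, which is exactly the physical information SoftBank’s design preserves. The numbers are hypothetical, not from the demo:

```python
import numpy as np

# Per-sample standardization erases absolute amplitude -- which, for a
# wireless signal, encodes physical information such as received power.
def standardize(x):
    return (x - x.mean()) / x.std()

strong = np.array([4.0, 8.0, 6.0])   # hypothetical high-amplitude signal
weak = strong / 100.0                # same shape, 40 dB weaker in power

# After standardization the two signals are identical -- the amplitude
# difference (and thus channel-quality information) is gone:
print(np.allclose(standardize(strong), standardize(weak)))  # True
```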
(3) Versatility for various tasks
The architecture has a versatile, unified design. By making only minor changes to its output layer, it can be adapted to handle a variety of different tasks, including channel interpolation/estimation, SRS prediction, and signal demodulation. This reduces the time and cost associated with developing separate AI models for each task.
The demonstration results show that high-performance AI models like Transformer and the GPUs that run them are indispensable for achieving the high communication performance required in the 5G-Advanced and 6G eras. Furthermore, an AI-RAN that controls the RAN on GPUs allows for continuous performance upgrades through software updates as more advanced AI models emerge, even after the hardware has been deployed. This will enable telecommunication carriers to improve the efficiency of their capital expenditures and maximize value.
Moving forward, SoftBank will accelerate the commercialization of the technologies validated in this demonstration. By further improving communication quality and advancing networks with AI-RAN, SoftBank will contribute to innovation in future communication infrastructure. The Japan-based conglomerate strongly endorsed AI-RAN at MWC 2025.
References:
https://www.softbank.jp/en/corp/news/press/sbkk/2025/20250821_02/
https://www.telecoms.com/5g-6g/softbank-claims-its-ai-ran-tech-boosts-throughput-by-30-
https://www.telecoms.com/ai/softbank-makes-mwc-25-all-about-ai-ran
https://www.ibm.com/think/topics/transformer-model
https://www.itu.int/rec/R-REC-M.2150/en
RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030
A RtBrick survey of 200 senior telecom decision makers in the U.S., UK, and Australia finds that network operator leaders are failing to make key decisions and lack the motivation to change. The report relays urgent warnings from telco engineers that their networks are on a five-year collision course with AI and streaming traffic, and it finds that 93% of respondents report a lack of support from leadership to deploy disaggregated network equipment. Key findings:
- Risk-averse leadership and a lack of skills are the top factors that are choking progress.
- The majority of operators are stuck in early planning, while AT&T, Deutsche Telekom, and Comcast lead large-scale disaggregation rollouts.
- Operators anticipate higher broadband prices but fear customer backlash if service quality can’t match the price.
- Organizations require more support from leadership to deploy disaggregation (93%).
- Complexity around operational transformation (42%), such as redesigning architectures and workflows.
- Critical shortage of specialist skills/staff (38%) to manage disaggregated systems.
The survey finds that almost nine in ten operators (87%) expect customers to demand higher broadband speeds by 2030, while 79% say their customers expect costs to increase, suggesting customers will pay more for those speeds. Yet half of all leaders (49%) admit they lack complete confidence in delivering services at a viable cost. Eighty-four percent say customer expectations for faster, cheaper broadband are already outpacing their networks, while 81% concede their current architectures are not well suited to handling future increases in bandwidth demand, suggesting they may struggle with the next wave of AI and streaming traffic.
“Senior leaders, engineers, and support staff inside operators have made their feelings clear: the bottleneck isn’t capacity, it’s decision-making,” said Pravin S Bhandarkar, CEO and Founder of RtBrick. “Disaggregated networks are no longer an experiment. They’re the foundation for the agility, scalability, and transparency operators need to thrive in an AI-driven, streaming-heavy future,” he added, noting operators’ stated intent to deploy disaggregation.
However, execution continues to trail ambition. Only one in twenty leaders has confirmed they’re “in deployment” today, while 49% remain stuck in early-stage “exploration”, and 38% are still “in planning”. Meanwhile, big-name operators such as AT&T, Deutsche Telekom, and Comcast are charging ahead and already actively deploying disaggregation at scale, demonstrating faster rollouts, greater operational control, and true vendor flexibility. Here’s a snapshot of those activities:
- AT&T has deployed an open, disaggregated routing network in their core, powered by DriveNets Network Cloud software on white-box bare metal switches and routers from Taiwanese ODMs, according to Israel based DriveNets. DriveNets utilizes a Distributed Disaggregated Chassis (DDC) architecture, where a cluster of bare metal switches act as a single routing entity. That architecture has enabled AT&T to accelerate 5G and fiber rollouts and improve network scalability and performance. It has made 1.6Tb/s transport a reality on AT&T’s live network.
- Deutsche Telekom has deployed a disaggregated broadband network using routing software from RtBrick running on bare-metal switch hardware to provide high-speed internet connectivity. They’re also actively promoting Open BNG solutions as part of this initiative.
- Comcast uses network cloud software from DriveNets and white-box hardware to disaggregate their core network, aiming to increase efficiency and enable new services through a self-healing and consumable network. This also includes the use of disaggregated, pluggable optics from multiple vendors.
Nearly every leader surveyed also claims their organization is “using” or “planning to use” AI in network operations, including for planning, optimization, and fault resolution. However, nine in ten (93%) say they cannot unlock AI’s full value without richer, real-time network data. This requires more open, modular, software-driven architecture, enabled by network disaggregation.
“Telco leaders see AI as a powerful asset that can enhance network performance,” said Zara Squarey, Research Manager at Vanson Bourne. “However, the data shows that without support from leadership, specialized expertise, and modern architectures that open up real-time data, disaggregation deployments may risk further delays.”
When asked what benefits they expect disaggregation to deliver, operators focused on outcomes that could deliver the following benefits:
- 54% increased operational automation
- 54% enhanced supply chain resilience
- 51% improved energy efficiency
- 48% lower purchase and operational costs
- 33% reduced vendor lock-in
Transformation priorities align with those goals: automation and agility (57%) ranked first, followed by vendor flexibility (55%), supply chain security (51%), energy usage and sustainability (47%) and cost efficiency (46%).
About the research:
The ‘State of Disaggregation’ research was independently conducted by Vanson Bourne in June 2025 and commissioned by RtBrick to identify the primary drivers and barriers to disaggregated network rollouts. The findings are based on responses from 200 telecom decision makers across the U.S., UK, and Australia, representing operations, engineering, and design/Research and Development at organizations with 100 to 5,000 or more employees.
References:
https://www.rtbrick.com/state-of-disaggregation-report-2
https://drivenets.com/blog/disaggregation-is-driving-the-future-of-atts-ip-transport-today/
Disaggregation of network equipment – advantages and issues to consider