Deloitte and TM Forum: How AI could revitalize the ailing telecom industry

IEEE Techblog readers are well aware of the dire state of the global telecommunications industry.  In particular:

  • According to Deloitte, the global telecommunications industry is expected to have revenues of about US$1.53 trillion in 2024, up about 3% over the prior year. Both in 2024 and out to 2028, growth is expected to be higher in Asia Pacific and in Europe, the Middle East, and Africa, with growth in the Americas at around 1% annually.
  • Telco sales were less than $1.8 trillion in 2022 vs. $1.9 trillion in 2012, according to Light Reading. Collective investments of about $1 trillion over a five-year period had brought a lousy return of less than 1%.
  • Last year (2024), spending on radio access network infrastructure fell by $5 billion, more than 12% of the total, according to analyst firm Omdia, imperilling the kit vendors on which telcos rely.

Deloitte believes generative (gen) AI will have a huge impact on telecom network providers:

Telcos are using gen AI to reduce costs, become more efficient, and offer new services. Some are building new gen AI data centers to sell training and inference to others. What role does connectivity play in these data centers?

There is a gen AI gold rush expected over the next five years. Spending estimates range from hundreds of billions to over a trillion dollars on the physical layer required for gen AI: chips, data centers, and electricity. Close to another hundred billion US dollars will likely be spent on the software and services layer. Telcos should focus on the opportunity to participate by connecting all of those different pieces of hardware and software. And shouldn’t telcos, whose business is all about connectivity, be able to profit in some way?

There are gen AI markets for connectivity: Inside the data centers there are miles of mainly copper (and some fiber) cables for transmitting data from board to board and rack to rack. Serving this market is worth billions in 2025, but much of this connectivity is provided by data center operators and chipmakers and has never been provided by telcos.

There are also massive, long-haul fiber networks ranging from tens to thousands of miles long. These connect (for example) a hyperscaler’s data centers across a region or continent, or even stretch along the seabed, connecting data centers across continents. Sometimes these new fiber networks are being built to support sovereign AI—that is, the need to keep all the AI data inside a given country or region.

Historically, those fiber networks were massive expenditures, built by only the largest telcos or (in the undersea case) built by consortia of telcos, to spread the cost across many players. In 2025, it looks like some of the major gen AI players are building at least some of this connection capacity, but largely on their own or with companies that are specialists in long-haul fiber.

Telcos may want to think about how they can continue to be a relevant player in this part of the connectivity space, rather than just ceding it to the gen AI behemoths. For context, it is estimated that big tech players will spend over US$100 billion on network capex between 2024 and 2030, representing 5% to 10% of their total capex in that period, up from only about 4% to 5% of capex spent on networks historically.

The opportunities could be greater in connecting billions of consumers and enterprises. Telcos already serve these large markets, and as consumers and businesses start sending larger amounts of data over wireline and wireless networks, that growth might translate to higher revenues. A recent research report suggests that direct gen AI data traffic could reach exabyte levels by 2033.

The immediate challenge is that many gen AI use cases for both consumer and enterprise markets are not exactly bandwidth hogs: In 2025, they tend to be text-based (so small file sizes) and users may expect answers in seconds rather than milliseconds, which can limit how telcos can monetize the traffic. Users will likely pay a premium for ultra-low latency, but if latency isn’t an issue, they are unlikely to pay one.

A longer-term challenge is on-device edge computing. Even if users start doing a lot more with creating, consuming, and sharing gen AI video in real time (requiring much larger file transmission and lower latency), the majority of devices (smartphones, PCs, wearables, or Internet of Things (IoT) devices in factories and ports) are expected to soon have onboard gen AI processing chips. These gen AI accelerators, combined with emerging smaller language models, may mean that network connectivity is less of an issue. Instead of a consumer recording a video, sending the raw footage to the cloud for AI processing, and then the cloud sending it back, the content could be enhanced or altered locally, with less need for high-speed or low-latency connectivity.

Of course, small models might not work well. The chips on consumer and enterprise edge devices might not be powerful enough, or might be too power inefficient, leaving unacceptably short battery life. In that case, telcos may be lifted by a wave of gen AI usage. But that’s unlikely to happen in 2025, or even 2026.

Another potential source of gen AI monetization is what’s being called AI Radio Access Network (RAN). At the top of every cell tower are a bunch of radios and antennas, along with one or more powerful processors for controlling them. In 2024, a consortium (the AI-RAN Alliance) was formed to look at the idea of adding the same kind of generative AI chips found in data centers or enterprise edge servers (a mix of GPUs and CPUs) to every tower. The idea is that they could run the RAN, help make it more open, flexible, and responsive, dynamically configure the network in real time, and perform gen AI inference or training as a service with any extra capacity left over, generating incremental revenues. At this time, a number of original equipment manufacturers (OEMs, including ones who currently account for over 95% of RAN sales), telcos, and chip companies are part of the alliance. Some expect AI RAN to be a logical successor to Open RAN, to be built on top of it, and possibly even to be what 6G turns out to be.
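
As a purely illustrative sketch of that spare-capacity idea (hypothetical numbers and function names; the AI-RAN Alliance has not published such an interface), a site-level scheduler might partition GPU time between RAN processing and monetizable inference roughly like this:

```python
# Hypothetical illustration of AI-RAN capacity sharing at one cell site.
# Neither the figures nor the interface come from the AI-RAN Alliance.

def sellable_inference_capacity(total_gpu_tflops: float,
                                ran_load_fraction: float,
                                headroom_fraction: float = 0.2) -> float:
    """Return the GPU capacity (TFLOPS) that could be offered for gen AI
    inference after reserving enough for the RAN plus a safety headroom."""
    reserved = total_gpu_tflops * min(1.0, ran_load_fraction + headroom_fraction)
    return max(0.0, total_gpu_tflops - reserved)

# A lightly loaded site overnight vs. a busy site at peak hour.
print(sellable_inference_capacity(1000.0, ran_load_fraction=0.15))  # 650.0 TFLOPS spare
print(sellable_inference_capacity(1000.0, ran_load_fraction=0.85))  # 0.0 TFLOPS spare
```

The commercial question for telcos is whether that off-peak spare capacity is large and predictable enough to sell as an inference service.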

…………………………………………………………………………………………………………………………………………………………………………….

The TM Forum has three broad “AI initiatives,” which are part of their overarching “Industry Missions.” These missions aim to change the future of global connectivity, with AI being a critical component.

The three broad “AI initiatives” (or “Industry Missions” where AI plays a central role) are:

  1. AI and Data Innovation: This mission focuses on the safe and widespread adoption of AI and data at scale within the telecommunications industry. It aims to help telcos accelerate, de-risk, and reduce the costs of applying AI technologies to cut operational expenses and drive revenue growth. This includes developing best practices, standards, data architectures, ontologies, and APIs.

  2. Autonomous Networks: This initiative is about unlocking the power of seamless end-to-end autonomous operations in telecommunications networks. AI is a fundamental technology for achieving higher levels of network automation, moving towards zero-touch, zero-wait, and zero-trouble operations.

  3. Composable IT and Ecosystems: While not solely an “AI initiative,” this mission focuses on simpler IT operations and partnering via AI-ready composable software. AI plays a significant role in enabling more agile and efficient IT systems that can adapt and integrate within dynamic ecosystems. It’s based on the TM Forum’s Open Digital Architecture (ODA). Eighteen big telcos are now running on ODA while the same number of vendors are described by the TM Forum as “ready” to adopt it.

These initiatives are supported by various programs, tools, and resources, including:

  • AI Operations (AIOps): Focusing on deploying and managing AI at scale, re-engineering operational processes to support AI, and governing AI operations.
  • Responsible AI: Addressing ethical considerations, risk management, and governance frameworks for AI.
  • Generative AI Maturity Interactive Tool (GAMIT): To help organizations assess their readiness to exploit the power of GenAI.
  • AI Readiness Check (AIRC): An online tool for members to identify gaps in their AI adoption journey across key business dimensions.
  • AI for Everyone (AI4X): A pillar focused on democratizing AI across all business functions within an organization.

Under the leadership of CEO Nik Willetts, a rejuvenated, AI-wielding TM Forum now underpins what many telcos do in business and operational support systems, the essential IT plumbing.  The TM Forum rates automation using the same five-level system as the car industry, where 0 means completely manual and 5 heralds the end of human intervention. Many telcos are on track for Level 4 in specific areas this year, said Willetts. China Mobile has already realized an 80% reduction in major faults, saving 3,000 person years of effort and 4,000 kilowatt hours of energy each year, thanks to automation.

Outside of China, telcos and telco vendors are leaning heavily on technologies mainly developed by just a few U.S. companies to implement AI. A person remains in the loop for critical decision-making, but the justifications for taking any decision are increasingly provided by systems built on the core underlying technologies from those same few companies.   As IEEE Techblog has noted, AI is still hallucinating – throwing up nonsense or falsehoods – just as domain-specific experts are being threatened by it.

Agentic AI substitutes interacting software programs for the junior technicians who would otherwise grow into senior decision-makers. If AI Level 4 renders them superfluous, where do the future decision-makers come from?

Caroline Chappell, an independent consultant with years of expertise in the telecom industry, says there is now talk of what the AI pundits call “learning world models,” more sophisticated AI that grows to understand its environment much as a baby does. When mature, it could come up with completely different approaches to the design of telecom networks and technologies. At this stage, it may be impossible for almost anyone to understand what AI is doing, she said.

 

 

References:

https://www.deloitte.com/us/en/insights/industry/technology/technology-media-telecom-outlooks/telecommunications-industry-outlook-2025.html

https://www.lightreading.com/ai-machine-learning/escape-from-ai-proves-impossible-at-tm-forum-bash-in-new-code-red-

Sources: AI is Getting Smarter, but Hallucinations Are Getting Worse

McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector

 

 

 

Ericsson revamps its OSS/BSS with AI using Amazon Bedrock as a foundation

At this week’s TM Forum-organized Digital Transformation World (DTW) event in Copenhagen, Ericsson has given its operations and business support systems (OSS/BSS) portfolio a complete AI makeover. The revamp aims to improve operational efficiency, boost business growth, and elevate customer experiences. It includes a Gen-AI Lab, where telcos can try out their latest OSS/BSS-related ideas; a Telco Agentic AI Studio, where developers are invited to come and build generative AI products for telcos; and a range of Ericsson’s own Telco IT AI apps. Underpinning all this is the Telco IT AI Engine, which handles various tasks to do with OSS/BSS orchestration.

Ericsson is investing to enable CSPs to make a real impact with AI, intent and automation. AI is now embedded throughout the portfolio, and the other updates range across five critical, interlinked areas of a CSP’s operational transformation, with each area of evolution based on a clear rationale and vision for the value it generates. Ericsson cites several benefits for telcos:

  • Data – Make your data more useful. Introducing the Telco DataOps Platform. An evolution of the existing Ericsson Mediation, the platform enables unified data collection, processing, management, and governance, removing silos and complexity to make data more useful across the whole business and to fuel effective AI that runs business and operations more smoothly.
  • Cloud and IT – Stay ahead of the business. Introducing the Ericsson Intelligent IT Suite. A holistic end-to-end approach supporting OSS/BSS evolution, designed for telco scale to accelerate delivery, streamline operations, and empower teams with the tools to unlock value from day one and beyond. It enables CSPs to embrace innovative, transformative approaches that deliver real-time business agility and impact to stay ahead of business demands in rapidly evolving OSS/BSS landscapes.
  • Monetization – Make sure you get paid. Introducing Ericsson Charging and Billing Evolved. A cloud-native monetization platform that enables real-time charging and billing for multi-sided business models. It is powered by cutting-edge AI capabilities that make it easy to accelerate partner-led growth, launch and monetize enterprise services efficiently, and capture revenue across all business lines at scale.
  • Service Orchestration – Deliver as fast as you can sell. Upgraded Ericsson Service Orchestration and Assurance with Agentic AI: uses AI and intent to automatically set up and manage services based on a CSP’s business goals, providing a robust engine for transforming to autonomous networks. It empowers CSPs to cut out manual steps and provides the infrastructure to launch and scale differentiated connectivity services.
  • Core Commerce – Be easy to buy from. AI-enabled core commerce streamlines selling with intelligent offer creation. Key capabilities include efficient offering design through a Gen-AI capable product configuration assistant and guided selling using an intelligent, telco-specific CPQ for seamless ‘Quote to Cash’ processes, supported by a CRM-agnostic approach. CSPs can launch tailored enterprise solutions faster and co-create offers with partners, all while delivering seamless omni-channel experiences.

Grameenphone, a Bangladesh telco with more than 80 million subscribers, is an Ericsson OSS/BSS customer. “They can’t do massive investments in areas that aren’t going to give a return,” said Jason Keane, the head of Ericsson’s business and operational support systems portfolio, who noted the low average revenue per user (ARPU) in the Bangladeshi telecom market. The technologies developed by Ericsson are helping Grameenphone’s subscribers with top-ups, bill payments and operations issues.

“What they’re saying is we want to enable our customers to have a fast, seamless experience, where AI can help in some of the interaction flows between external systems. AI itself isn’t free. You’ve got to pay for your consumption, and it can add up if you don’t use it correctly.”

To date, very few companies have seen financial benefits from AI in either higher sales or lower costs. The ROI just isn’t there. If organizations end up spending more on AI systems than they would on manual effort to achieve the same results, money will be wasted. Another issue is the poor quality of telco data, which can’t be effectively used to train AI agents.

Ericsson’s booth at the DTW Ignite 2025 event in Copenhagen

………………………………………………………………………………………………………………………………………………………..

Ericsson appears to be heavily reliant on Amazon Web Services (AWS) for the technologies it is advertising at DTW this week. Amazon Bedrock, a managed AWS service for building generative AI applications on top of foundation models, is the foundation of the Gen-AI Lab and the Telco Agentic AI Studio. “We had to pick one, right?” said Keane. “I picked Amazon. It’s a good provider, and this is the model I do my development against.”
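
Ericsson has not published the internals of the Gen-AI Lab or the Telco Agentic AI Studio, but building on Bedrock essentially means invoking AWS-hosted foundation models through the SDK. A minimal sketch of such a call (Python with boto3; the model ID and prompt are illustrative placeholders, not Ericsson’s implementation):

```python
# Minimal, hypothetical example of calling a foundation model via Amazon Bedrock.
# This is not Ericsson's code; the model ID and prompt are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the open trouble tickets for cell site HK-1234."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

An agentic studio would layer telco data sources, tool calls and guardrails on top of calls like this, which is also where Keane’s warning about paying for consumption comes in: every invocation is metered.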

Regarding AI’s threat to the jobs of OSS/BSS workers, Light Reading’s Iain Morris wrote:

“Wider adoption by telcos of Ericsson’s latest technologies, and similar offerings from rivals, might be a big negative for many telco operations employees.  At most immediate risk are the junior technicians or programmers dealing with basic code that can be easily handled by AI. But the senior programmers had to start somewhere, and even they don’t look safe. AI enthusiasts dream of what the TM Forum calls the fully autonomous network, when people are out of the loop and the operation is run almost entirely by machines.”

Ericsson has realized that its OSS and BSS tools need to address the requirements of network operators that already, or will in the near future, adopt cloud-native processes, run cloud-based horizontal IT platforms, and make extensive use of AI to automate back-office processes and introduce autonomous network operations. The goal is to reduce manual intervention and the time to address problems while also introducing greater agility (as long as the right foundations are in place).

Mats Karlsson, Head of Solution Area Business and Operations Support Systems, Ericsson says: “What we are unveiling today illustrates a transformative step into industrializing Business and Operations Support Systems for the autonomous age. Using AI and automation, as well as our decades of knowledge and experience in our people, technology, processes – we get results. These changes will ensure we empower CSPs to unlock value precisely when and where it can be captured. We operate in a complex industry, one which is evidently in need of a focus on no nonsense OSS/BSS. These changes, and our commitment to continuous evolution for innovation, will help simplify it where possible, ensuring that CSPs can get on with their key goals of building better, more efficient services for their customers while securing existing revenue and striving for new revenue opportunities.”

Ahmad Latif Ali, Associate Vice President, EMEA Telecommunications Insights at IDC says: “Our recent research, featured in the IDC InfoBrief “Mapping the OSS/BSS Transformation Journey: Accelerate Innovation and Commercial Success,” highlights recurring challenges organizations faced in transformation initiatives, particularly the complex and often simultaneous evolution of systems, processes, and organizational structures. Ericsson’s continuous evolution of OSS/BSS addresses these key, interlinked transformation challenges head-on, paving the way for automation powered by advanced AI capabilities. This approach creates effective pathways to modernize OSS/BSS and supports meaningful progress across the transformation journey.”

References:

https://www.ericsson.com/en/news/2025/6/evolved-ericsson-ossbss-portfolio-to-ignite-csp-business-and-operational-transformation

https://www.lightreading.com/oss-bss-cx/ericsson-goes-mad-for-ai-amid-fears-about-jobs-and-big-tech-power

https://www.telecomtv.com/content/telcos-and-ai-channel/ericsson-revamps-its-oss-bss-for-the-ai-era-53236/

McKinsey: AI infrastructure opportunity for telcos? AI developments in the telecom sector

Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA

Quartet launches “Open Telecom AI Platform” with multiple AI layers and domains

Goldman Sachs: Big 3 China telecom operators are the biggest beneficiaries of China’s AI boom via DeepSeek models; China Mobile’s ‘AI+NETWORK’ strategy

Generative AI in telecom; ChatGPT as a manager? ChatGPT vs Google Search

Allied Market Research: Global AI in telecom market forecast to reach $38.8 billion by 2031 with CAGR of 41.4% (from 2022 to 2031)

The case for and against AI in telecommunications; record quarter for AI venture funding and M&A deals

 

 

SK Group and AWS to build Korea’s largest AI data center in Ulsan

Amazon Web Services (AWS) is partnering with the SK Group to build South Korea’s largest AI data center. The two companies are expected to launch the project later this month and will hold a groundbreaking ceremony for the 100MW facility in August, according to state news service Yonhap.

AWS, the cloud subsidiary of Amazon, provides on-demand cloud computing platforms and application programming interfaces (APIs) to individuals, businesses and governments on a pay-per-use basis. The data center will be built on a 36,000-square-meter site in the Mipo industrial complex in Ulsan, 305 km southeast of Seoul. It will house 60,000 GPUs and have a power capacity of 100 megawatts, making it the country’s first AI infrastructure of such scale.


Ryu Young-sang, chief executive officer (CEO) of SK Telecom Co., had announced the company’s plan to build a hyperscale AI data center equipped with 60,000 GPUs in collaboration with a global tech partner, during the Mobile World Congress (MWC) 2025 held in Spain in March.

SK Telecom plans to invest 3.4 trillion won (US$2.49 billion) in AI infrastructure by 2028, with a significant portion expected to be allocated to the data center project. SK Telecom, South Korea’s biggest mobile operator and 31% owned by the SK Group, will manage the project. “They have been working on the project, but the exact timeline and other details have yet to be finalized,” an SK Group spokesperson said.

 

This captured image from SK Multi Utility’s homepage shows the potential site for its artificial intelligence (AI) data center in the Mipo Industrial Complex in Ulsan, 305 kilometers southeast of Seoul. (Yonhap)
………………………………………………………………………………………………………………………………………………………………………………………………………….

The AI data center will be developed in two phases, with the initial 40MW phase to be completed by November 2027 and full capacity to be operational by February 2029, the Korea Herald reported Monday. Once completed, the facility, powered by 60,000 graphics processing units, will have a power capacity of 103 megawatts, making it the country’s largest AI infrastructure, sources said.
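
As a rough sanity check on those figures (a back-of-the-envelope calculation from the reported numbers, not anything disclosed by SK or AWS), dividing the full facility power by the GPU count gives the implied all-in power budget per accelerator, including cooling and other overhead:

```python
# Back-of-the-envelope: implied facility power per GPU at the Ulsan site.
# Uses only the publicly reported figures above; the split between IT load
# and cooling/other overhead is not disclosed.
facility_power_mw = 103     # reported full-buildout capacity, in megawatts
gpu_count = 60_000          # reported number of GPUs

watts_per_gpu = facility_power_mw * 1_000_000 / gpu_count
print(f"~{watts_per_gpu:,.0f} W of facility power per GPU")  # ~1,717 W
```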

SK Group appears to have chosen Ulsan as the site, considering its proximity to SK Gas’ liquefied natural gas combined heat and power plant, ensuring a stable supply of large-scale electricity essential for data center operations. The facility is also capable of utilizing LNG cold energy for data center cooling.

SKT last month released its revised AI pyramid strategy, targeting AI infrastructure including data centers, GPUaaS and customized data centers. It is also developing personal agents A. and Aster for consumers and AIX services for enterprise customers.

Globally, it has found partners through the Global Telco AI Alliance, which it co-founded, and is collaborating with US firms Anthropic and Lambda.

SKT’s AI business unit is still small, however, recording just KRW156 billion ($115 million) in revenue in Q1, two-thirds of it from data center infrastructure. Its parent SK Group, which also includes memory chip giant SK Hynix and energy firm SK Innovation, reported $88 billion in revenue last year.

AWS, the world’s largest cloud provider, has been expanding its footprint in Korea. It currently runs a data center in Seoul and began constructing its second facility in Incheon’s Seo District in late 2023. The company has pledged to invest 7.85 trillion won in Korea’s cloud computing infrastructure by 2027.

Earlier this month AWS launched its Taiwan cloud region – its 15th in Asia-Pacific – with plans to invest $5 billion in local cloud and AI infrastructure.

References:

https://en.yna.co.kr/view/AEN20250616004500320?section=k-biz/corporate

https://www.koreaherald.com/article/10510141

https://www.lightreading.com/data-centers/aws-sk-group-to-build-korea-s-largest-ai-data-center

Big tech firms target data infrastructure software companies to increase AI competitiveness

Artificial intelligence (AI) is driving a once-in-a-generation makeover in tech that’s forcing several of the largest social media platforms and software makers to buy companies that help AI-backed systems run smoothly. Worldwide, generative AI spending is expected to total $644 billion in 2025, an increase of 76.4% from 2024, according to a forecast by technology data provider Gartner.

“AI without data is like life without oxygen; it doesn’t exist,” said Brian Marshall, global co-head of software investment banking at Citi. “Because of that, data is having a zeitgeist moment right now driven by AI,” Marshall said. As a result, companies that manage and process the data used to build advanced AI models on cloud-based systems have become highly sought-after targets for legacy tech companies like Meta, Salesforce, and ServiceNow in the scramble to stay competitive against the likes of OpenAI, Google and Anthropic.


……………………………………………………………………………………………………………………………………………………………………………………………….

Enterprise data infrastructure [1.] and analytics companies like Confluent, Collibra, Sigma Computing, Matillion, Dataiku, Fivetran, Boomi, and Qlik, could become targets for legacy tech providers in the near term, investment bankers say. The companies, they say, may help businesses integrate, analyze, and store information better.

“Messy, siloed data has long undermined the attempts of enterprises to deliver on the transformative potential of analytics. Now, with the urgency to deploy effective AI, fixing it isn’t just essential — it’s existential,” Florian Douetteau, co-founder and CEO of Dataiku, said in a statement.

Note 1. Data infrastructure software refers to the collection of programs, applications, and tools that enable the management, storage, processing, and analysis of data within an organization. This software forms the foundation for a robust data management strategy, ensuring data accessibility, security, and efficient utilization.

Several multibillion-dollar deals for data infrastructure companies have been struck or closed just in the last few weeks:

  • Meta announced Friday a $14.3 billion deal for a 49% stake in data-labeling company Scale AI. Its 28-year-old co-founder and CEO will join Meta as an AI advisor.
  • Salesforce announced plans last month to buy data integration company Informatica for $8 billion.  It will enable Salesforce to better analyze and assimilate scattered data from across its internal and external systems before feeding it into its in-house AI system, Einstein AI, executives said at the time.
  • IT management provider ServiceNow said in May it was buying data catalogue platform Data.world, which will allow ServiceNow to better understand the business context behind data, executives said when it was announced.
  • IBM announced it was acquiring data management provider DataStax in February to manage and process unstructured data before feeding it to its AI platform.

Those deals highlight the strategic importance for legacy software players to own all aspects of data management, and M&A is often the fastest way to achieve it. Instead of building complex data systems from scratch, they are acquiring specialists that can help organize, clean, and connect data from across their business.

Would-be targets have sometimes become the hunters as was the case when Databricks, a leader in data processing and AI that was recently valued at $62 billion, announced plans last week to buy serverless database manager Neon for $1 billion.

“A lot of companies have a huge amount of data, but I think they’re learning that you can’t just funnel every piece of data you have into an AI engine with no organization, and hope that it spits out the right answer,” said Brian Mangino, partner at Latham & Watkins.

……………………………………………………………………………………………………………………………………………………………………………………………….

Hutchison Telecom is deploying 5G-Advanced in Hong Kong without 5G-A endpoints

Hutchison Telecom Hong Kong is deploying 3GPP’s 5G-Advanced (5G-A) in high-traffic venues in Hong Kong, including the Hong Kong Convention and Exhibition Centre, the West Kowloon Cultural District and the new $3.9 billion Kai Tak Sports Park. However, 5G-A endpoints [1.] (like smartphones and tablets) aren’t likely to arrive until next year, according to Hutchison Executive Director and CEO Kenny Koo. Therefore, the 5G-A Hong Kong deployment is mostly symbolic, although Hutchison is doing some commercial business with 5G-A hotspots.

Hutchison used the 5G-A modems to provide coverage for the annual Art Basel visual arts fair in March, enabling organizers to offer free Wi-Fi for visitors. It has also found a little niche in pop-up stores. The 5G-A modems registered download and upload speeds of 3.1 Gbit/s and 370 Mbit/s respectively in a demo earlier this month.

Note 1. Only a handful of 5G-A endpoint devices are available in mainland China, where operators are reporting 5G-A commercial networks in hundreds of cities – in the 3.5GHz, 4.9GHz and 2.1GHz bands.

Koo said in a statement:

“2025 marks the 5th anniversary of 5G launch. In December 2024, our 5G customer penetration rate reached 54%. At this important stage, we are comprehensively enhancing our 5G coverage and capacity while continuously optimizing user experience. Limited-time upgrade offers are also tailored to encourage customers to upgrade to 5G. Together, we are advancing into the new era of 5.5G (aka 5GA). In support of the development of the Northern Metropolis, we have taken the initiative to actively enhance 5G network coverage in the district as the flow of people and vehicles surges. This ensures that commuters travelling between the northwest New Territories and Kowloon can enjoy a smoother network experience at major transportation hubs, including Tai Lam Tunnel and the Kam Sheung Road section of the MTR Tuen Ma Line. In addition, we are helping to boost the mega event economy by activating 5.5G network hotspots at major event venues in Hong Kong including Kai Tak Sports Park, the West Kowloon Cultural District and the Hong Kong Convention and Exhibition Centre. Customers enjoy an improved experience at high-traffic hotspots compared with the original 5G coverage, with enhanced network speed, increased capacity and low latency performance provided by 5G broadband.”

“We try to position ourselves as a market leader in the technology evolution,” Koo told Light Reading. He said the 5G-A ecosystem “was not yet ready” because of the lack of devices that can support the 26 GHz and 28 GHz bands. “iPhone, Samsung and Huawei handsets do not support 5.5G in those bands,” he said, using the company’s preferred branding for 5G-Advanced.

Author’s Note: Koo did not mention that 5G-A has yet to be standardized by ITU-R as part of IMT 2020 RIT/SRIT aka the ITU-R M.2150 recommendation.  5G Advanced is included in 3GPP Release 18  and is expected to be part of M.2150 issue 3, now being developed by ITU-R WP 5D. 

Hutchison’s subscriber base grew 17% to 4.6 million, mostly due to prepaid gains, while 5G penetration increased 8 percentage points to 54%. Koo said the company has been able to sustain the growth this year because of demand from inbound travelers from mainland China. “They like our prepaid cards,” he added. Last year, Hutchison’s roaming revenue increased 30% to 684 million Hong Kong dollars (US$87 million) and now accounts for nearly a fifth of total service revenue.

eSIM is also a growing market for Hutchison. “There are a lot of travel SIM portals selling eSIM solutions right to consumers,” Koo said. The popularity of its eSIM product means Hutchison’s addressable market has expanded well beyond Hong Kong to reach mobile customers worldwide.

About Hutchison Telecommunications Hong Kong:

Hutchison Telecommunications Hong Kong Limited (“HTHK”) has launched 5G broadband services in both the consumer and enterprise markets, providing high-speed indoor and outdoor internet access. Leveraging a robust 5G network, HTHK has also extended the deployment of 5G solutions including 5G 4K live broadcasting, virtual reality and real-time data transmission to various verticals. HTHK plays a prominent role in developing a new economy ecosystem, channeling the latest technologies into innovations that set market trends and steer industry development.

……………………………………………………………………………………………………………………………………………………………

References:

https://www.lightreading.com/5g/hutchison-joins-5g-advanced-race

https://doc.irasia.com/listco/hk/hthkh/press/p250529.pdf

5G Advanced offers opportunities for new revenue streams; 3GPP specs for 5G FWA?

What is 5G Advanced and is it ready for deployment any time soon?

Huawei pushes 5.5G (aka 5G Advanced) but there are no completed 3GPP specs or ITU-R standards!

Nokia exec talks up “5G Advanced” (3GPP release 18) before 5G standards/specs have been completed

ITU-R recommendation IMT-2020-SAT.SPECS from ITU-R WP 5B to be based on 3GPP 5G NR-NTN and IoT-NTN (from Release 17 & 18)

Nile launches a Generative AI engine (NXI) to proactively detect and resolve enterprise network issues

Nile is a private, venture-funded technology company specializing in AI-driven network and security infrastructure services for enterprises and government organizations. Nile has pioneered the use of AI and machine learning in enterprise networking. Its latest generative AI capability, Nile Experience Intelligence (NXI), proactively resolves network issues before they impact users or IT teams, automating fault detection, root cause analysis, and remediation at scale. This approach reduces manual intervention, eliminates alert fatigue, and ensures high performance and uptime by autonomously managing networks.

Significant Innovations Include:

  • Automated site surveys and network design using AI and machine learning

  • Digital twins for simulating and optimizing network operations

  • Edge-to-cloud zero-trust security built into all service components

  • Closed-loop automation for continuous optimization without human intervention

Today, the company announced the launch of Nile Experience Intelligence (NXI), a novel generative AI capability designed to proactively resolve network issues before they impact IT teams, users, IoT devices, or the performance standards defined by Nile’s Network-as-a-Service (NaaS) guarantee.  As a core component of the Nile Access Service [1.], NXI uniquely enables Nile to take advantage of its comprehensive, built-in AI automation capabilities.  NXI allows Nile to autonomously monitor every customer deployment at scale, identifying performance anomalies and network degradations that impact reliability and user experience. While others market their offerings as NaaS, only the Nile Access Service with NXI delivers a financially backed performance guarantee—an unmatched industry standard.

………………………………………………………………………………………………………………………………………………………………

Note 1. Nile Access Service is a campus Network-as-a-Service (NaaS) platform that delivers both wired and wireless LAN connectivity with integrated Zero Trust Networking (ZTN), automated lifecycle management, and a unique industry-first performance guarantee. The service is built on a vertically integrated stack of hardware, software, and cloud-based management, leveraging continuous monitoring, analytics, and AI-powered automation to simplify deployment, automate maintenance, and optimize network performance.

………………………………………………………………………………………………………………………………………………………………………………………………….

“Traditional networking and NaaS offerings based on service packs rely on IT organizations to write rules that are static and reactive, which requires continuous management. Nile and NXI flipped that approach by using generative AI to anticipate and resolve issues across our entire install base, before users or IT teams are even aware of them,” said Suresh Katukam, Chief Product Officer at Nile. “With NXI, instead of providing recommendations and asking customers to write rules that involve manual interaction—we’re enabling autonomous operations that provide a superior and uninterrupted user experience.”

Key capabilities include:

  • Proactive Fault Detection and Root Cause Analysis: predictive modeling-based data analysis of billions of daily events, enabling proactive insights across Nile’s entire customer install base.
  • Large Scale Automated Remediation: leveraging the power of generative AI and large language models (LLMs), NXI automatically validates and implements resolutions without manual intervention, virtually eliminating customer-generated trouble tickets.
  • Eliminate Alert Fatigue: NXI eliminates alert overload by shifting focus from notifications to autonomous, actionable resolution, ensuring performance and uptime without IT intervention.

Unlike rules-based systems dependent on human-configured logic and manual maintenance, NXI is:

  • Generative AI and self-learning powered, eliminating the need for static, manually created rules that are prone to human error and require ongoing maintenance.
  • Designed for scale, NXI already processes terabytes of data daily and effortlessly scales to manage thousands of networks simultaneously.
  • Built on Nile’s standardized architecture, enabling consistent AI-driven optimization across all customer networks at scale.
  • Closed-loop automated, no dashboards or recommended actions for customers to interpret, and no waiting on manual intervention.

Katukam added, “NXI is a game-changer for Nile. It enables us to stay ahead of user experience and continuously fine-tune the network to meet evolving needs. This is what true autonomous networking looks like—proactive, intelligent, and performance-guaranteed.”

From improved connectivity to consistent performance, Nile customers are already seeing the impact of NXI. For more information about NXI and Nile’s secure Network as a Service platform, visit www.nilesecure.com.

About Nile:

Nile is leading a fundamental shift in the networking industry, challenging decades-old conventions to deliver a radically new approach. By eliminating complexity and rethinking how networks are built, consumed, and operated, Nile is pioneering a new category designed for a modern, service-driven era. With a relentless focus on simplicity, security, reliability, and performance, Nile empowers organizations to move beyond the limitations of legacy infrastructure and embrace a future where networking is effortless, predictable, and fully aligned with their digital ambitions.

Nile is recognized as a disruptor in the enterprise networking market, offering a modern alternative to traditional vendors like Cisco and HPE. Its model enables organizations to reduce total cost of ownership by more than 60% and reclaim IT resources while providing superior connectivity. Major customers include Stanford University, Pitney Bowes, and Carta.

The company has received several industry accolades, including the CRN Tech Innovators Award (2024) and recognition in Gartner’s Peer Insights Voice of the Customer report. Nile has raised over $300 million in funding, with a significant $175 million Series C round in 2023 to fuel expansion.

References:

https://nilesecure.com/press-releases/nile-launches-networking-industrys-first-generative-ai-engine-designed-to-autonomously-optimize-the-enterprise-wired-and-wireless-user-experience

https://nilesecure.com/company/about-us

https://www.networkcomputing.com/naas/nile-rolls-out-trust-service-to-bring-zero-trust-to-campus-network-environments

Does AI change the business case for cloud networking?

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

Qualcomm to acquire Alphawave Semi for $2.4 billion; says its high-speed wired tech will accelerate AI data center expansion

AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics

NGMN: 6G Key Messages from a network operator point of view

As 3GPP prepares for its Release 20 [1.], the Next Generation Mobile Networks Alliance (NGMN) has issued a 6G Key Messages statement saying that 6G can’t be just “another generational shift” and that lessons must be learned from “the mistakes of 5G.” NGMN says that 6G must demonstrate clear, tangible benefits within a realistic techno-economic framework. Network architecture needs to meet MNOs’ criteria for modularity, simplicity, openness, operational simplification, compatibility and interoperability, and trustworthiness while delivering economic and social sustainability. These factors are crucial to enable fast deployment and to support the development of market-aligned services that meet user demands.

“6G standards must be globally harmonized. It is expected to be built upon the features and capabilities introduced with 5G, alongside new capabilities to deliver new services and value. Such technological evolutions should be assessed with respect to their benefits versus their associated impact. 6G standards must learn from the mistakes of 5G, including multiple architecture options, features that are never used and use cases that have no market pull.”

NGMN insists that the introduction of 6G should not cost network operators more than necessary:

“The introduction of 6G should not necessitate a forced hardware refresh.  While new radio equipment is required for deployment in new frequency bands, the evolution toward 6G in existing bands should primarily occur through software upgrades, ensuring a smooth transition.”

Note 1.  According to 3GPP’s current planning, Release 20 will include a study phase, gathering technical input on potential 6G features, use cases, and architectural shifts. These discussions are intended to inform later specification work, likely in Release 21, aligned with the IMT-2030 submission process.  See Editor’s Note below for relationship between 3GPP’s 6G work and ITU-R IMT 2030.

The NGMN 6G Key Messages publication was endorsed by the NGMN Board of Directors in June 2025:

Laurent Leboucher, Chairman of the NGMN Alliance Board and Orange Group CTO and EVP Networks, explained “6G should be viewed as a seamless evolution — fully compatible with 5G and propelled by continuous software innovation. The industry must move beyond synchronised hardware/software ‘G’ cycles and embrace decoupled roadmaps: one for hardware infrastructure, guided by value-driven and sustainable investments, and another — faster and demand-led — for software-defined business capabilities addressing real needs from society.”

“Along with presenting this consolidated view to 3GPP, this publication serves as a foundation for engaging with the broader industry, driving collaboration, innovation, and strategic direction in the evolving 6G landscape,” said Anita Döhler, CEO of NGMN. “A core tenet of our message is that 6G is not treated as another generational shift for mobile technology – it must be evolutionary.”

“Network evolution is essential for addressing ever-changing societal needs. To achieve this, we need to work collectively as an industry to ensure all future networks are secure, environmentally sound, and economically sustainable,” said Luke Ibbetson, Head of Group R&D at Vodafone and NGMN Board Director.

Key Categories:

• Enhanced Human Communications includes use cases of enriched communications, such as immersive experience, telepresence and multimodal interaction. Voice services must evolve in a business sustainable manner.

• Enhanced Machine Communications reflects the growth of collaborative robotics, requiring reliable communication among robots, their environment and humans.

• Enabling Services gather use cases that require additional features such as high accuracy location, mapping, or sensing.

• Network Evolution describes aspects related to the evolution of core technologies including AI as a service, energy efficiency, and delivering ubiquitous coverage.

Requirements and Design Considerations:

Sustainability: Minimising environmental impact, securing economic viability, and ensuring social sustainability is the key goal of 6G design.
Trustworthiness: Ensure that security and privacy are intrinsically embedded in the 6G system to protect against threats and provide solutions that measurably demonstrate this attribute.
Innovation: A new radio interface should demonstrate significant benefits over and above IMT-2020, as mentioned in the Radio Performance Assessment Framework publication, while considering the practical issues related to deployments in a realistic techno-economic context. It is also critical for innovation that the entirety of the upper 6 GHz band be made available to mobile networks.

Radio Performance Assessment Framework (RPAF) includes guidance for new 6G Radio Access Technologies (RAT). It emphasises that any proposed solutions must be assessed against a reasonable baseline to demonstrate meaningful performance gains.

Editor’s Note:  ITU-R WP5D is the official standards body for 6G, which is known as IMT 2030. Like for 5G (IMT 2020), WP 5D sets the requirements while 3GPP develops the Radio Interface Technology (RIT and SRIT) specs which are then contributed to WP 5D by ATIS.

About the NGMN Alliance:

The Next Generation Mobile Networks (NGMN) Alliance is a global, operator-driven leadership network established in 2006 by leading international mobile network operators (MNOs). Its mission is to ensure that next-generation mobile network infrastructure, service platforms and devices meet operators’ requirements while addressing the demands and expectations of end users.

NGMN’s vision is to provide impactful industry guidance to enable innovative, sustainable and affordable mobile telecommunication services. Key focus areas include Mastering the Route to Disaggregation, Green Future Networks and 6G, while continuing to support the full implementation of 5G.

As a global alliance of nearly 70 companies and organisations—including operators, vendors, and academia—NGMN actively incorporates the perspectives of all stakeholders. It drives global alignment and convergence of technology standards and industry initiatives to avoid fragmentation and support industry scalability.

References:

https://www.ngmn.org/wp-content/uploads/2506_NGMN_6G-Key-Messages_An-Operator-View_V1.0.pdf

NGMN calls for harmonised 6G standards to drive seamless mobile evolution on behalf of global MNOs

NGMN Radio Performance Assessment Framework

NGMN issues ITU-R framework for IMT-2030 vs ITU-R WP5D Timeline for RIT/SRIT Standardization

ITU-R WP 5D reports on: IMT-2030 (“6G”) Minimum Technology Performance Requirements; Evaluation Criteria & Methodology

ITU-R: IMT-2030 (6G) Backgrounder and Envisioned Capabilities

ITU-R WP5D invites IMT-2030 RIT/SRIT contributions

Highlights of 3GPP Stage 1 Workshop on IMT 2030 (6G) Use Cases

https://unidir.org/wp-content/uploads/2024/12/241211_ITU-R-Update-on-WRC-and-IMT-2030.pdf

https://www.itu.int/dms_pubrec/itu-r/rec/m/R-REC-M.2160-0-202311-I%21%21PDF-E.pdf

Draft new ITU-R recommendation (not yet approved): M.[IMT.FRAMEWORK FOR 2030 AND BEYOND]

 

IBM to Build World’s First Large-Scale, Fault-Tolerant Quantum Computer – Starling

IBM is building the world’s first large-scale quantum computer capable of operating without errors. The computer, called Starling, is set to launch by 2029. The quantum computer will reside in IBM’s new quantum data center in Poughkeepsie, New York, and is expected to perform 20,000 times more operations than today’s quantum computers, the company said in its announcement Tuesday.

Starling will be “fault tolerant,” meaning it would be able to perform quantum operations for things like drug discovery, supply chain optimization, semiconductor design, and financial risk analyses without the errors that plague quantum computers today and make them less useful than traditional computers.

To represent the computational state of an IBM Starling would require the memory of more than a quindecillion (10^48) of the world’s most powerful supercomputers. With Starling, users will be able to fully explore the complexity of its quantum states, which are beyond the limited properties able to be accessed by current quantum computers.

IBM, which already operates a large, global fleet of quantum computers, is releasing a new Quantum Roadmap that outlines its plans to build out a practical, fault-tolerant quantum computer.

“IBM is charting the next frontier in quantum computing,” said Arvind Krishna, Chairman and CEO, IBM. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”

A large-scale, fault-tolerant quantum computer with hundreds or thousands of logical qubits could run hundreds of millions to billions of operations, which could accelerate time and cost efficiencies in fields such as drug development, materials discovery, chemistry, and optimization.

Starling will be able to access the computational power required for these problems by running 100 million quantum operations using 200 logical qubits. It will be the foundation for IBM Quantum Blue Jay, which will be capable of executing 1 billion quantum operations over 2,000 logical qubits.

A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit’s worth of quantum information. It is made from multiple physical qubits working together to store this information and monitor each other for errors.

Like classical computers, quantum computers need to be error corrected to run large workloads without faults. To do so, clusters of physical qubits are used to create a smaller number of logical qubits with lower error rates than the underlying physical qubits. Logical qubit error rates are suppressed exponentially with the size of the cluster, enabling them to run greater numbers of operations.
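
To put that exponential suppression in concrete terms, a commonly used textbook approximation for a distance-d error-correcting code (illustrative only; IBM’s qLDPC papers derive code-specific numbers) is:

$$ p_{\text{logical}} \approx A \left( \frac{p_{\text{physical}}}{p_{\text{threshold}}} \right)^{\lfloor (d+1)/2 \rfloor} $$

Here d is the code distance, which grows with the number of physical qubits in the cluster. As long as the physical error rate stays below the code’s threshold, each increase in d multiplies the logical error rate by a factor smaller than one, so the logical error rate falls exponentially with cluster size.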

Creating increasing numbers of logical qubits capable of executing quantum circuits, with as few physical qubits as possible, is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.

The Path to Large-Scale Fault Tolerance:

The success of executing an efficient fault-tolerant architecture is dependent on the choice of its error-correcting code, and how the system is designed and built to enable this code to scale.

Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations – necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be able to be implemented beyond small-scale experiments and devices.

A practical, large-scale, fault-tolerant quantum computer requires an architecture that is:

  • Fault-tolerant to suppress enough errors for useful algorithms to succeed.
  • Able to prepare and measure logical qubits through computation.
  • Capable of applying universal instructions to these logical qubits.
  • Able to decode measurements from logical qubits in real-time and can alter subsequent instructions.
  • Modular to scale to hundreds or thousands of logical qubits to run more complex algorithms.
  • Efficient enough to execute meaningful algorithms with realistic physical resources, such as energy and infrastructure.

Today, IBM is introducing two new technical papers that detail how it will solve the above criteria to build a large-scale, fault-tolerant architecture.

The first paper unveils how such a system will process instructions and run operations effectively with qLDPC codes. This work builds on a groundbreaking approach to error correction featured on the cover of Nature that introduced quantum low-density parity check (qLDPC) codes. This code drastically reduces the number of physical qubits needed for error correction and cuts required overhead by approximately 90 percent, compared to other leading codes. Additionally, it lays out the resources required to reliably run large-scale quantum programs to prove the efficiency of such an architecture over others.

The second paper describes how to efficiently decode the information from the physical qubits and charts a path to identify and correct errors in real-time with conventional computing resources.

From Roadmap to Reality:

The new IBM Quantum Roadmap outlines the key technology milestones that will demonstrate and execute the criteria for fault tolerance. Each new processor in the roadmap addresses specific challenges to build quantum computers that are modular, scalable, and error-corrected:

  • IBM Quantum Loon, expected in 2025, is designed to test architecture components for the qLDPC code, including “C-couplers” that connect qubits over longer distances within the same chip.
  • IBM Quantum Kookaburra, expected in 2026, will be IBM’s first modular processor designed to store and process encoded information. It will combine quantum memory with logic operations — the basic building block for scaling fault-tolerant systems beyond a single chip.
  • IBM Quantum Cockatoo, expected in 2027, will entangle two Kookaburra modules using “L-couplers.” This architecture will link quantum chips together like nodes in a larger system, avoiding the need to build impractically large chips.


References:

https://newsroom.ibm.com/2025-06-10-IBM-Sets-the-Course-to-Build-Worlds-First-Large-Scale,-Fault-Tolerant-Quantum-Computer-at-New-IBM-Quantum-Data-Center

IBM’s path to scaling fault tolerance: read their blog here, and watch IBM Quantum scientists in this latest video

Bloomberg on Quantum Computing: appeal, who’s building them, how does it work?

Google’s new quantum computer chip Willow infinitely outpaces the world’s fastest supercomputers

Ultra-secure quantum messages sent a record distance over a fiber optic network

Quantum Computers and Qubits: IDTechEx report; Alice & Bob whitepaper & roadmap

China Mobile verifies optimized 5G algorithm based on universal quantum computer

Qualcomm to acquire Alphawave Semi for $2.4 billion; says its high-speed wired tech will accelerate AI data center expansion

Qualcomm Inc. has agreed to buy London-listed semiconductor company Alphawave IP Group Plc  (“Alphawave Semi”) for about $2.4 billion in cash to expand its technology for artificial intelligence (AI) in data centers.  The deal is expected to close in the first quarter of 2026, subject to regulatory and shareholder approval.

Alphawave Semi is claimed to be a global leader in high-speed wired connectivity and compute technologies delivering IP, custom silicon, connectivity products and chiplets. Its products ‘form a part of the core infrastructure enabling next generation services in a wide array of high growth applications’ such as data centers, AI, data networking and data storage, according to the press release.

Regarding connectivity, Alphawave Semi develops industry-leading PAM4 and Coherent DSP products in the most advanced technologies for all forms of data center connectivity.

Alphawave (which held an initial public offering in 2021 at 410 pence per share) has consistently traded below that level. The company had struggled with a reliance on large customers and with navigating geopolitical tensions between the US and China, where Alphawave decided to cut back its business last year.

Still, the company’s technology has been gaining traction and had reported a surge in orders in the fourth quarter. Chief Executive Officer Tony Pialis said in a statement at the time that orders from North American AI customers were driving the business.

Qualcomm Chief Executive Officer Cristiano Amon is looking to reduce the company’s reliance on the smartphone market, where growth has slowed, and push into new areas. Alphawave makes high-speed semiconductor and connectivity technology that can be used for data centers and AI applications, two growth areas in the chip industry that are being driven by demand for products like OpenAI’s ChatGPT.

Image credit: Qualcomm / Alphawave Semi

“Qualcomm’s acquisition of Alphawave Semi represents a significant milestone for us and an opportunity for our business to join forces with a respected industry leader and drive value to our customers,” said Tony Pialis, president and CEO of Alphawave Semi. “By combining our resources and expertise, we will be well-positioned to expand our product offerings, reach a broader customer base, and enhance our technological capabilities. Together, we will unlock new opportunities for growth, drive innovation, and create a leading player in AI compute and connectivity solutions.”

“Under Tony’s leadership, Alphawave Semi has developed leading high-speed wired connectivity and compute technologies that are complementary to our power-efficient CPU and NPU cores,” said Mr. Amon. “Qualcomm’s advanced custom processors are a natural fit for data center workloads. The combined teams share the goal of building advanced technology solutions and enabling next-level connected computing performance across a wide array of high growth areas, including data center infrastructure.”

References:

https://www.qualcomm.com/news/releases/2025/06/qualcomm-to-acquire-alphawave-semi

https://awavesemi.com/

https://www.bloomberg.com/news/articles/2025-06-09/qualcomm-agrees-to-buy-uk-listed-alphawave-for-2-4-billion

https://www.telecoms.com/enterprise-telecoms/qualcomm-to-purchase-alphawave-semi

MediaTek overtakes Qualcomm in 5G smartphone chip market

 

 

AI infrastructure investments drive demand for Ciena’s products including 800G coherent optics

Artificial Intelligence (AI) infrastructure investments are starting to shift toward networks needed to support the technology, rather than focusing exclusively on computing and power, according to Ciena Chief Executive Gary Smith.  The trends helped Ciena swing to a profit and post a 24% jump in sales in the recent quarter.

The company enables high-speed fiber optic connectivity for telecommunications and data centers, helping hyper-scalers such as Amazon and Microsoft support AI initiatives via data center interconnects and intra-data center networking.  Currently, the company is ramping up production to meet surging demand fueled by cloud and AI investments.

“There’s no point in investing in these massive amounts of GPUs if we’re going to strand it because we didn’t invest in the network,” Smith said Thursday.

……………………………………………………………………………………………………………………………………………………..

Ciena sees a bright future in 800G coherent optics that can accommodate AI traffic. Smith said a global cloud provider has selected Ciena’s coherent 800-gig pluggable modules and Reconfigurable Line System (RLS) photonics as it invests in geographically distributed, regional GPU clusters. “With our coherent optical technology ideally suited for this type of connectivity, we expect to see more of these opportunities emerge as cloud providers evolve their data center network architectures to support their AI strategies,” he added.

It’s still early innings for 800G adoption, but demand is climbing due to AI and cloud connectivity. Vertical Systems Group expects to see “a measurable increase” in 800G installations this year. Dell’Oro optical networking analyst Jimmy Yu noted on LinkedIn that Ciena’s data center interconnect win is the first he’s heard of that involves connecting GPU clusters across 100+ kilometer spans. “It was a hot topic of discussion for nearly 2 years. It is now going to start,” Yu said.

……………………………………………………………………………………………………………………………………………………

Ciena’s future growth opportunities include network service and cloud service providers as well as ODM/OEM sales of optical components.

References:

https://www.wsj.com/business/earnings/ciena-swings-to-profit-as-ai-investments-drive-demand-0195f30c

https://investor.ciena.com/static-files/d964ccac-74b3-43d9-a73e-ecf67fab6060

https://investor.ciena.com/news-releases/news-release-details/ciena-reports-fiscal-second-quarter-2025-financial-results

https://www.fierce-network.com/broadband/ciena-now-expects-tariff-costs-10m-quarter

 

 
