Lumen Technologies to connect Prometheus Hyperscale’s energy-efficient AI data centers

The need for more cloud computing capacity and AI applications has been driving huge investments in data centers. Those investments have led to a steady demand for fiber capacity between data centers and more optical networking innovation inside data centers. Here’s the latest example of that:

Prometheus Hyperscale has chosen Lumen Technologies to connect its energy-efficient data centers to meet growing AI data demands. Lumen network services will help Prometheus manage the rapid growth in AI, big data, and cloud computing as it addresses the critical environmental challenges facing the AI industry.

Rendering of Prometheus Hyperscale flagship Data Center in Evanston, Wyoming:


Prometheus Hyperscale, known for pioneering sustainability in the hyperscale data center industry, is deploying a Lumen Private Connectivity Fabric℠ solution, including new network routes built with Lumen next-generation wavelength services and Dedicated Internet Access (DIA) [1] services with Distributed Denial of Service (DDoS) protection layered on top.

Note 1.  Dedicated Internet Access (DIA) is a premium internet service that provides a business with a private, high-speed connection to the internet.

This expanded network will enable high-density compute in Prometheus facilities to deliver scalable and efficient data center solutions while maintaining their commitment to renewable energy and carbon neutrality. Lumen networking technology will provide the low-latency, high-performance infrastructure critical to meet the demands of AI workloads, from training to inference, across Prometheus’ flagship facility in Wyoming and four future data centers in the western U.S.

“What Prometheus Hyperscale is doing in the data center industry is unique and innovative, and we want to innovate alongside them,” said Ashley Haynes-Gaspar, Lumen EVP and chief revenue officer. “We’re proud to partner with Prometheus Hyperscale in supporting the next generation of sustainable AI infrastructure. Our Private Connectivity Fabric solution was designed with scalability and security to drive AI innovation while aligning with Prometheus’ ambitious sustainability goals.”

Prometheus, founded as Wyoming Hyperscale in 2020, turned to Lumen networking solutions prior to the launch of its first development site in Aspen, WY. This facility integrates renewable energy sources, sustainable cooling systems, and AI-driven energy optimization, allowing for minimal environmental impact while delivering the computational power AI-driven enterprises demand. The partnership with Lumen reinforces Prometheus’ dedication to both technological innovation and environmental responsibility.

“AI is reshaping industries, but it must be done responsibly,” said Trevor Neilson, president of Prometheus Hyperscale. “By joining forces with Lumen, we’re able to offer our customers best-in-class connectivity to AI workloads while staying true to our mission of building the most sustainable data centers on the planet. Lumen’s network expertise is the perfect complement to our vision.”

Prometheus’ data center campus in Evanston, Wyoming will be one of the biggest data centers in the world, with facilities expected to come online in late 2026. Future data centers in Pueblo, Colorado; Fort Morgan, Colorado; Phoenix, Arizona; and Tucson, Arizona will follow and be strategically designed to leverage clean energy resources and innovative technology.

About Prometheus Hyperscale:

Prometheus Hyperscale, founded by Trenton Thornock, is revolutionizing data center infrastructure by developing sustainable, energy-efficient hyperscale data centers. Leveraging unique, cutting-edge technology and working alongside strategic partners, Prometheus is building next-generation, liquid-cooled hyperscale data centers powered by cleaner energy. With a focus on innovation, scalability, and environmental stewardship, Prometheus Hyperscale is redefining the data center industry for a sustainable future. This announcement follows recent news of Bernard Looney, former CEO of bp, being appointed Chairman of the Board.

To learn more visit: www.prometheushyperscale.com

About Lumen Technologies:

Lumen uses the scale of its network to help companies realize AI’s full potential. From metro connectivity to long-haul data transport to edge cloud, security, managed services, and digital platform capabilities, Lumen meets its customers’ needs today and is ready for tomorrow’s requirements.

In October, Lumen CTO Dave Ward told Light Reading that a “fundamentally different order of magnitude” of compute power, graphics processing units (GPUs) and bandwidth is required to support AI workloads. “It is the largest expansion of the Internet in our lifetime,” Ward said.

Lumen is constructing 130,000 fiber route miles to support Meta and other customers seeking to interconnect AI-enabled data centers. According to a story by Kelsey Ziser, the fiber conduits in this buildout would contain anywhere from 144 to more than 500 fibers to connect multi-gigawatt data centers.

REFERENCES:

https://www.prnewswire.com/news-releases/lumen-partners-with-prometheus-hyperscale-to-enhance-connectivity-for-sustainable-ai-driven-data-centers-302333590.html

https://www.lightreading.com/data-centers/2024-in-review-data-center-shifts

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Initiatives and Analysis: Nokia focuses on data centers as its top growth market

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

Deutsche Telekom with AWS and VMware demonstrate a global enterprise network for seamless connectivity across geographically distributed data centers

 

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

One of the big tech themes in 2024 was the buildout of data center infrastructure to support generative (Gen) artificial intelligence (AI) compute servers. Gen AI requires massive computational power, which only huge, powerful data centers can provide. Big tech companies like Amazon (AWS), Microsoft (Azure), Google (Google Cloud), Meta (Facebook) and others are building or upgrading their data centers to provide the infrastructure necessary for training and deploying AI models. These investments include high-performance GPUs, specialized hardware, and cutting-edge network infrastructure.

  • Barron’s reports that big tech companies are spending billions on that initiative. In the first nine months of 2024, Amazon, Microsoft, and Alphabet spent a combined $133 billion building AI capacity, up 57% from the previous year, according to Barron’s. Much of the spending accrued to Nvidia, whose data center revenue reached $80 billion over the past three quarters, up 174%.  The infrastructure buildout will surely continue in 2025, but tough questions from investors about return on investment (ROI) and productivity gains will take center stage from here.
  • Amazon, Google, Meta and Microsoft expanded such investments by 81% year over year during the third quarter of 2024, according to an analysis by the Dell’Oro Group, and are on track to have spent $180 billion on data centers and related costs by the end of the year.  The three largest public cloud providers, Amazon Web Services (AWS), Azure and Google Cloud, each had a spike in their investment in AI during the third quarter of this year.  Baron Fung, a senior director at Dell’Oro Group, told Newsweek: “We think spending on AI infrastructure will remain elevated compared to other areas over the long-term. These cloud providers are spending many billions to build larger and more numerous AI clusters. The larger the AI cluster, the more complex and sophisticated AI models that can be trained. Applications such as Copilot, chatbots, search, will be more targeted to each user and application, ultimately delivering more value to users and how much end-users will pay for such a service,” Fung added.
  • Efficient and scalable data centers can lower operational costs over time. Big tech companies could offer AI cloud services at scale, which might result in recurring revenue streams. For example, AI infrastructure-as-a-service (IaaS) could be a substantial revenue driver in the future, but no one really knows when that might be.

Microsoft has a long history of pushing new software and services products to its large customer base. In fact, that greatly contributed to the success of its Azure cloud computing and storage services. The centerpiece of Microsoft’s AI strategy is getting many of those customers to pay for Microsoft 365 Copilot, an AI assistant for its popular apps like Word, Excel, and PowerPoint. Copilot costs $360 a year per user, and that’s on top of all the other software, which costs anywhere from $72 to $657 a year. Microsoft’s AI doesn’t come cheap. Alistair Speirs, senior director of Microsoft Azure Global Infrastructure, told Newsweek: “Microsoft’s datacenter construction has been accelerating for the past few years, and that growth is guided by the growing demand signals that we are seeing from customers for our cloud and AI offerings. As we grow our infrastructure to meet the increasing demand for our cloud and AI services, we do so with a holistic approach, grounded in the principle of being a good neighbor in the communities in which we operate.”

Venture capitalist David Cahn of Sequoia Capital estimates that for AI to be profitable, every dollar invested in infrastructure needs four dollars in revenue. Those profits aren’t likely to come in 2025, but the companies involved (and their investors) will no doubt want to see signs of progress. One issue they will have to deal with is the popularity of free AI, which doesn’t generate any revenue by itself.
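Cahn’s 4:1 rule of thumb is easy to apply to the spending figures cited above. A minimal back-of-the-envelope sketch (the multiple is Cahn’s estimate, not a precise threshold, and the capex figures are the ones reported earlier):

```python
# Back-of-the-envelope sketch of David Cahn's 4:1 heuristic:
# every $1 of AI infrastructure capex needs ~$4 of revenue to be profitable.
REVENUE_MULTIPLE = 4  # Cahn's estimate, per Sequoia Capital

def required_revenue(capex_billions: float) -> float:
    """Revenue (in $B) implied by the 4:1 rule for a given capex (in $B)."""
    return capex_billions * REVENUE_MULTIPLE

# Figures cited above (illustrative only):
#  $133B -- Amazon, Microsoft, Alphabet capex, first nine months of 2024
#  $180B -- Dell'Oro estimate for hyperscaler data center spend in 2024
for capex in (133, 180):
    print(f"${capex}B capex -> ${required_revenue(capex)}B revenue needed")
```

By this arithmetic, the roughly $180 billion of 2024 hyperscaler spending would need on the order of $720 billion in AI revenue to pay off, which illustrates why ROI questions are taking center stage.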

An August 2024 survey of over 4,600 adult Americans from researchers at the Federal Reserve Bank of St. Louis, Vanderbilt University, and Harvard University showed that 32% of respondents had used AI in the previous week, a faster adoption rate than either the PC or the internet. When asked what services they used, free options like OpenAI’s ChatGPT, Google’s Gemini, Meta Platforms’ Meta AI, and Microsoft’s Windows Copilot were cited most often. Unlike Microsoft 365 Copilot, the versions of Copilot built into Windows and Bing are free.

The unsurprising popularity of free AI services creates a dilemma for tech firms. It’s expensive to run AI in the cloud at scale, and as of now there’s no revenue behind it. The history of the internet suggests that these free services will be monetized through advertising, an arena where Google, Meta, and Microsoft have a great deal of experience. Investors should expect at least one of these services to begin serving ads in 2025, with the others following suit. The better AI gets—and the more utility it provides—the more likely consumers will go along with those ads.

Productivity Check:

We’re at the point in AI’s rollout where novelty needs to be replaced by usefulness—and investors will soon be looking for signs that AI is delivering productivity gains to business. Here we can turn to macroeconomic data for answers. According to the U.S. Bureau of Labor Statistics, since the release of ChatGPT in November 2022, labor productivity has risen at an annualized rate of 2.3% versus the historical median of 2.0%. It’s too soon to credit AI for those gains, but if above-median productivity growth continues into 2025, the conversation gets more interesting.

There’s also the continued question of AI and jobs, a fraught conversation that isn’t going to get any easier. There may already be AI-related job loss happening in the information sector, home to media, software, and IT. Since the release of ChatGPT, employment is down 3.9% in the sector, even as U.S. payrolls overall have grown by 3.3%. The other jobs most at risk are in professional and business services and in the financial sector.  To be sure, the history of technological change is always complicated. AI might take away jobs, but it’s sure to add some, too.

“Some jobs will likely be automated. But at the same time, we could see new opportunities in areas requiring creativity, judgment, or decision-making,” economists Alexander Bick of the Federal Reserve Bank of St. Louis and Adam Blandin of Vanderbilt University tell Barron’s. “Historically, every big tech shift has created new types of work we couldn’t have imagined before.”

Closing Quote:

“Generative AI (GenAI) is being felt across all technology segments and subsegments, but not to everyone’s benefit,” said John-David Lovelock, Distinguished VP Analyst at Gartner. “Some software spending increases are attributable to GenAI, but to a software company, GenAI most closely resembles a tax. Revenue gains from the sale of GenAI add-ons or tokens flow back to their AI model provider partner.”

References:

AI Stocks Face a New Test. Here Are the 3 Big Questions Hanging Over Tech in 2025

Big Tech Increases Spending on Infrastructure Amid AI Boom – Newsweek

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Ciena CEO sees huge increase in AI generated network traffic growth while others expect a slowdown

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

SK Telecom unveils plans for AI Infrastructure at SK AI Summit 2024

Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era

Initiatives and Analysis: Nokia focuses on data centers as its top growth market

India Mobile Congress 2024 dominated by AI with over 750 use cases

Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC

Ciena CEO sees huge increase in AI generated network traffic growth while others expect a slowdown

Today, Ciena reported better-than-expected revenue of $1.12 billion in its fiscal fourth quarter, above analyst expectations of around $1.103 billion. Orders were once again ahead of revenue, even though the company had expected orders to be below revenue just a few months ago. A closer look at key metrics reveals mixed results, with some segments like Software and Services showing strong growth (+20.6% year-over-year) and others like Routing and Switching experiencing significant declines (-38.4% year-over-year).

CEO Gary Smith cited increased demand for the company’s Reconfigurable Line Systems (RLS), primarily from large cloud providers. He said the company was also doing well selling its WaveLogic coherent optical pluggables, which optimize performance in data centers as they support traffic from AI and machine learning.

Ciena’s Managed Optical Fiber Networks (MOFN) technology is designed for global service providers that are building dedicated private optical networks for cloud providers.  MOFN came about a few years ago when cloud providers wanted to enter countries where they weren’t allowed to build their own fiber networks. “They had to go with the incumbent carrier, but they wanted to have control of their network within country. It was sort of a niche-type play. But we’ve seen more recently, over the last 6-9 months, that model being more widely adopted,” Smith said.  MOFN is becoming more widely utilized, and the good news for Ciena is that cloud providers often request that Ciena equipment be used so that it matches with the rest of their network, according to Smith.

Image Credit: Midjourney for Fierce Network


The company also said it now expects average annual revenue growth of approximately 8% to 11% over the next three years.   “Our business is linked heavily into the growth of bandwidth around the world,” CEO Gary Smith said after Ciena’s earnings call.  “Traffic growth has been between 20% and 40% per year very consistently for the last two decades,” Smith told Light Reading.

Ciena believes huge investments in data centers with AI compute servers will ultimately result in more traffic traveling over U.S. and international broadband networks. “It has to come out of the data center and onto the network,” Smith said of AI data.  “Now, quite where it ends up being, who can know. As an exact percentage, a lot of people are working through that, including the cloud guys,” he said about the data traffic growth rate over the next few years.  “But one would expect [AI data] to layer on top of that 30% growth, is the point I’m making,” he added.

AI comes at a fortuitous time for Ciena. “You’re having to connect these GPU clusters over greater distances. We’re beginning to see general, broader traffic growth in things like inference and training. And that’s going to obviously drive our business, which is why we’re forecasting greater than normal growth,” Smith said.
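Smith’s growth figures compound quickly. A quick sketch of what sustained growth at the ~30% midpoint he cites implies (simple compounding arithmetic, not a Ciena forecast):

```python
import math

def growth_multiple(annual_rate: float, years: int) -> float:
    """Total traffic multiple after `years` of compound growth."""
    return (1 + annual_rate) ** years

def doubling_time(annual_rate: float) -> float:
    """Years for traffic to double at a given annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

# At 30% per year, traffic roughly doubles every 2.6 years and
# grows about 190x over the two decades Smith references.
print(round(growth_multiple(0.30, 20)))   # ~190x over 20 years
print(round(doubling_time(0.30), 1))      # ~2.6 years to double
```

Layering AI traffic “on top of that 30% growth,” as Smith puts it, would push the compounding rate, and hence these multiples, even higher.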

Smith’s positive comments on AI traffic are noteworthy in light of some data points showing a slowdown in the rate of growth in data traffic on global networks.  For example:

  • OpenVault recently reported that monthly average broadband data consumption in the third quarter inched up 7.2%, the lowest rate of growth seen since the company began reporting these trends in 2012.
  • In Ericsson’s newest report, Fredrik Jejdling, EVP and head of business area networks, said: “We see continued mobile network traffic growth but at a slower rate.”
  • Some of the nation’s biggest content delivery network (CDN) providers – including Akamai, Fastly and Edgio – are struggling to come to terms with a historic slowdown in Internet traffic growth. Such companies operate the networks that convey video and other digital content online.
  • “In terms of traffic growth, it is growing very slowly – at rates that we haven’t seen in the 25-plus years we’ve been in this business. So it’s growing very, very slow,” Akamai CFO Ed McGowan said recently. “It’s just been a weak traffic environment.”

“The cloud providers themselves are building bigger infrastructure and networks, and laying track for even greater growth in the future as more and more of that AI traffic comes out of the data center,” Smith said. “So that’s why we’re predicting greater growth than normal over the next three years. It’s early days for that traffic coming out of the data center, but I think we’re seeing clear evidence around it. So you’re looking at an enormous step function in traffic flows over the next few years,” he concluded.

References:

https://www.lightreading.com/data-centers/ciena-ceo-prepare-for-the-ai-traffic-wave

https://www.fierce-network.com/broadband/cienas-ceo-says-companys-growth-linked-ai

Summit Broadband deploys 400G using Ciena’s WaveLogic 5 Extreme

DriveNets and Ciena Complete Joint Testing of 400G ZR/ZR+ optics for Network Cloud Platform

Telco spending on RAN infrastructure continues to decline as does mobile traffic growth

Analysys Mason & Light Reading: cellular data traffic growth rates are decreasing

TechCrunch: Meta to build $10 billion Subsea Cable to manage its global data traffic

Initiatives and Analysis: Nokia focuses on data centers as its top growth market

 

China Telecom’s 2025 priorities: cloud based AI smartphones (?), 5G new calling (GSMA), and satellite-to-phone services

At the 2024 Digital Technology Ecosystem Conference last week, China Telecom executives identified AI, 5G new calling and satellite-to-phone services as the company’s handset priorities for 2025. The state-owned network operator, like China’s other telcos, is working with local manufacturers to build the devices it wants to sell through its channels.

China Telecom’s smartphone priorities align with its major corporate objectives. As China Telecom vice president Li Jun explained, devices are critical right across the business. “Terminals are an extension of the cloud network, a carrier of services, and a user interface,” he said.

China Telecom Vice President Li Jun


China Telecom Deputy General Manager Tang Ke introduced the progress of China Telecom and its partners in AI and emerging terminal ecosystem cooperation. He said that in 2024 China Telecom achieved large-scale development of basic 5G services, with over 120 million new self-registered users annually and more than 140 phone models supporting 5G messaging.

In terms of emerging businesses, leading domestic smartphone brands fully support direct satellite connection, with 20 models available and over 4 million units activated. Leading PC brands fully integrate Tianyi Cloud Computer, further enriching applications in work, education, life, and entertainment scenarios. Domestic phones fully support quantum secure calls, with over 50 million new self-registered users. Terminals fully support the upgrade to 800M, reaching over 100 million users. Besides phones that support direct-to-cell calling, China Telecom also hopes to develop low-cost positioning technology using Beidou and 5G location capabilities.

China Telecom continues to promote comprehensive AI upgrades of terminals, collaborating with partners to expand AI terminal categories and provide users with more diverse choices and higher-quality experiences. Tang Ke revealed that, at the main forum of the “2024 Digital Technology Ecosystem Conference,” China Telecom will release its first operator-customized AI phone.

Tang Ke emphasized that in the AI era, jointly building a collaborative and mutually promoting AI terminal ecosystem has become the inevitable path of industry development. Ecosystem participants must closely coordinate in technology, industry, and business to offer users the best AI experience. China Telecom will comprehensively advance technical collaboration, accelerating coordination from levels such as chips, large models, and intelligent agents, and promoting the construction of AI technology frameworks from both the device and cloud sides. The company will comprehensively push terminal AI upgrades, accelerating the AI development of wearables, healthcare, education, innovation, and industry terminals, based on key categories such as smartphones, cameras, cloud computers, and smart speakers.

Deputy Marketing Director Shao Yantao laid out the company’s device strategy for the year ahead. He said China Telecom’s business was based on networks, cloud-network integration and quantum security, with a focus on three technology directions – AI, 5G and satellites. With AI, it aims to carry out joint product development with OEM partners to build device-cloud capabilities and establish AI models. The state-owned telco will pursue “domestic and foreign” projects in cloud-based AI mobile phones.

Besides smartphones, other AI-powered products next year would likely include door locks, speakers, glasses and watches, Shao said. The other big focus area is 5G new calling, based on new IMS DC (data channel) capabilities, with the aim of opening up new use cases like screen sharing and interactive games during a voice call.

China Telecom would develop its own open-source IMS DC SDK to support AI, payments, XR and other new functionality, Shao said. But he acknowledged the need to build cooperation across the industry ecosystem. The network operator and its partners would also collaborate on Voice over WiFi and 3CC carrier aggregation for 5G-Advanced devices, he added.


China’s Ministry of Industry and Information Technology (MIIT) claims that China has built and activated over 4.1 million 5G base stations, with the 5G network continuously extending into rural areas, achieving “5G coverage in every township.” 5G has been integrated into 80 major categories of the national economy, with over 100,000 application cases accumulated. The breadth and depth of applications continue to expand, profoundly transforming lifestyles, production methods, and governance models.

The meeting emphasized the need to leverage the implementation of the “Sailing” Action Upgrade Plan for Large-scale 5G Applications as a means to vigorously promote the large-scale development of 5G applications, supporting new types of industrialization and the modernization of the information and communications industry, thereby laying a solid foundation for building a strong network nation and advancing Chinese-style modernization.

References:

https://www.lightreading.com/smartphones-devices/china-telecom-to-target-new-ai-satellite-devices-in-2025

https://www.c114.com.cn/news/22/c23811.html

https://en.c114.com.cn/583/a1279613.html

https://en.c114.com.cn/583/a1279469.html

China Telecom and China Mobile invest in LEO satellite companies

China Telecom with ZTE demo single-wavelength 1.2T bps hollow-core fiber transmission system over 100T bps

ZTE and China Telecom unveil 5G-Advanced solution for B2B and B2C services

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Meta Platforms and Elon Musk’s xAI start-up are among companies building clusters of computer servers with as many as 100,000 of Nvidia’s most advanced GPU chips as the race for artificial-intelligence (AI) supremacy accelerates.

  • Meta Chief Executive Mark Zuckerberg said last month that his company was already training its most advanced AI models with a conglomeration of chips he called “bigger than anything I’ve seen reported for what others are doing.”
  • xAI built a supercomputer called Colossus—with 100,000 of Nvidia’s Hopper GPU/AI chips—in Memphis, TN in a matter of months.
  • OpenAI and Microsoft have been working to build up significant new computing facilities for AI. Google is building massive data centers to house chips that drive its AI strategy.

xAI built a supercomputer in Memphis that it calls Colossus, with 100,000 Nvidia AI chips. Photo: Karen Pulfer Focht/Reuters

A year ago, clusters of tens of thousands of GPU chips were seen as very large. OpenAI used around 10,000 of Nvidia’s chips to train the version of ChatGPT it launched in late 2022, UBS analysts estimate. Installing many GPUs in one location, linked together by superfast networking equipment and cables, has so far produced larger AI models at faster rates. But there are questions about whether ever-bigger super clusters will continue to translate into smarter chatbots and more convincing image-generation tools.

Nvidia Chief Executive Jensen Huang  said that while the biggest clusters for training for giant AI models now top out at around 100,000 of Nvidia’s current chips, “the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving. Do we think that we need millions of GPUs? No doubt. That is a certainty now. And the question is how do we architect it from a data center perspective,” Huang added.

“There is no evidence that this will scale to a million chips and a $100 billion system, but there is the observation that they have scaled extremely well all the way from just dozens of chips to 100,000,” said Dylan Patel, the chief analyst at SemiAnalysis, a market research firm.

Giant super clusters are already getting built. Musk posted last month on his social-media platform X that his 100,000-chip Colossus super cluster was “soon to become” a 200,000-chip cluster in a single building. He also posted in June that the next step would probably be a 300,000-chip cluster of Nvidia’s newest GPU chips next summer. The rise of super clusters comes as their operators prepare for Nvidia’s next-generation Blackwell chips, which are set to start shipping in the next couple of months. Blackwell chips are estimated to cost around $30,000 each, meaning a cluster of 100,000 would cost $3 billion, not counting the price of the power-generation infrastructure and IT equipment around the chips.
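The per-chip price makes the cluster economics simple to sketch. Assuming the ~$30,000 Blackwell estimate quoted above (chips only; power-generation infrastructure and surrounding IT equipment excluded):

```python
# Rough cluster-cost arithmetic from the figures above: ~$30,000 per
# Blackwell chip (an estimate cited in the article), chips only --
# power infrastructure and surrounding IT equipment excluded.
CHIP_COST_USD = 30_000  # estimated per-chip price

def cluster_cost_billions(num_chips: int) -> float:
    """Chip-only cluster cost in billions of dollars."""
    return num_chips * CHIP_COST_USD / 1e9

# Cluster sizes mentioned: Colossus today, its planned expansion,
# and the ~300,000-chip step Musk floated for next summer.
for chips in (100_000, 200_000, 300_000):
    print(f"{chips:,} chips -> ${cluster_cost_billions(chips):.0f}B")
```

At these prices, a 300,000-chip cluster would be a roughly $9 billion bet on chips alone, which is why the ROI question below looms so large.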

Those dollar figures make building up super clusters with ever more chips something of a gamble, industry insiders say, given that it isn’t clear that they will improve AI models to a degree that justifies their cost. Indeed, new engineering challenges also often arise with larger clusters:

  • Meta researchers said in a July paper that a cluster of more than 16,000 of Nvidia’s GPUs suffered from unexpected failures of chips and other components routinely as the company trained an advanced version of its Llama model over 54 days.
  • Keeping Nvidia’s chips cool is a major challenge as clusters of power-hungry chips become packed more closely together, industry executives say, part of the reason there is a shift toward liquid cooling where refrigerant is piped directly to chips to keep them from overheating.
  • The sheer size of the super clusters requires a stepped-up level of management of those chips when they fail. Mark Adams, chief executive of Penguin Solutions, a company that helps set up and operate computing infrastructure, said elevated complexity in running large clusters of chips inevitably throws up problems.

The continuation of the AI boom for Nvidia largely depends on how the largest clusters of GPU chips deliver a return on investment for its customers. The trend also fosters demand for Nvidia’s networking equipment, which is fast becoming a significant business. Nvidia’s networking equipment revenue in 2024 was $3.13 billion, a 51.8% increase from the previous year. Built largely on its Mellanox acquisition, Nvidia’s networking portfolio includes these platforms:

  • Accelerated Ethernet Switching for AI and the Cloud

  • Quantum InfiniBand for AI and Scientific Computing

  • BlueField® Network Accelerators


Nvidia forecasts total fiscal fourth-quarter sales of about $37.5bn, up 70%. That was above the average analyst projection of $37.1bn compiled by Bloomberg, but below some projections that ran as high as $41bn. “Demand for Hopper and anticipation for Blackwell – in full production – are incredible as foundation model makers scale pretraining, post-training and inference,” Huang said. “Both Hopper and Blackwell systems have certain supply constraints, and the demand for Blackwell is expected to exceed supply for several quarters in fiscal 2026,” CFO Colette Kress said.

References:

https://www.wsj.com/tech/ai/nvidia-chips-ai-race-96d21d09?mod=tech_lead_pos5

https://www.datacenterdynamics.com/en/news/nvidias-data-center-revenue-up-112-over-last-year-as-ai-boom-continues/

https://www.nvidia.com/en-us/networking/

https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-third-quarter-fiscal-2025

HPE-Juniper combo + Cisco restructuring create enterprise network uncertainty

Hewlett Packard Enterprise’s (HPE) pending acquisition of Juniper Networks and Cisco’s recent corporate restructuring (which de-emphasizes legacy networking products like access/core routers and Ethernet switches) are putting enterprise networking customers in a holding pattern. They are pausing investments in network equipment as they wait out the uncertainty.

“I’ve had customers put things on hold right now, and not just the Juniper side but both sides,” Andre Kindness, principal analyst at Forrester Research, said in an interview with SDxCentral about how Juniper and HPE customers are reacting to uncertainty around the deal. “Typically, if customers are strong enough to look outside of Cisco and they’re not a Cisco shop, then HPE, Aruba, Juniper are the primary ones that they’re looking at. I’ve had customers put some of that on hold at this point.”

That holding pattern is tied to uncertainty over what systems and platforms will emerge from a combined HPE-Juniper. Mr. Kindness noted in a blog post when the deal was announced that “the journey ahead will be rife with obstacles for Juniper and HPE/Aruba customers alike.” Kindness explained that one important move for HPE would be to “rationalize/optimize the portfolio, the products and the solutions.”

“HPE will try to reassure you that nothing will change; it doesn’t make sense to keep everything, especially the multiple AP [access point] product lines (Instant On, Mist, and Aruba APs), all the routing and switching operating systems (Junos, AOS-CX, and ArubaOS) and both management systems (Central and Mist),” Kindness wrote.

“Though not immediately, products will need to go and the hardware that stays will need to be changed to accommodate cloud-based management, monitoring, and AI.” HPE CEO Antonio Neri and his management team have attempted to temper these concerns by stating there is virtually no overlap between HPE and Juniper’s product lines, a claim that Kindness said “just boggles my mind.”

Juniper’s AI product, called Marvis (part of the Mist acquisition in 2019), is by far the most advanced AI solution in the networking market. That’s not a profound statement; no vendor has anything close to it. The quick history: Juniper’s acquisition of Mist brought the company a cloud-based Wi-Fi solution with a leading AI capability, Marvis. Juniper quickly started integrating its switching and routing portfolio into Marvis. Walmart, Amazon, and others took notice. Fast-forward to today: This gives HPE Aruba a two-year lead against its competitors by bringing Juniper into the fold.

“I think [Neri’s] got to worry about the financial analyst out there in the stock market or the shareholders to pacify them, and then at the same time you don’t want to scare the bejesus out of your customer base, or Juniper customer base, so you’re going to say that there’s going to be either no overlap or no changes, everything will coexist,” Kindness said.

While overlap and other concerns could alter what a combined HPE-Juniper looks like, Kindness said he expects the result to lean heavily on Juniper’s telecom and networking assets. That includes HPE products like Aruba networking gear being replaced by Juniper’s artificial intelligence (AI)-focused Mist and Marvis platforms.

“Mist has been really a game changer for the company and just really opened a lot of doors,” Kindness explained. “[Juniper] really did a 180 degree turn when they bought [Mist], and just the revenue that’s brought in and the expansion of the product line itself, and the capabilities of Mist and actually Marvis in the background would be hard for [HPE] to replicate at this point. My perception was HPE looked at it and said, Marvis and Mist is just something that would take too long to get to.”

Kindness added that he does not expect significant platform thinning for a couple of years after a potential closing of the deal, but the interim could be filled with challenges tied to channel partners and go-to-market strategies that could chip away at market opportunities, similar to what is happening at VMware following the Broadcom acquisition. “Broadcom is ruthless, right or wrong, it’s its business model,” Kindness said. “HPE is not quite that dynamic.”

……………………………………………………………………………………………………………………………………….

Cisco CFO Scott Herren told the audience at a recent investor conference that HPE’s pending Juniper acquisition is causing “uncertainty” in the enterprise WLAN market that could benefit Cisco. “I think for sure that’s created just a degree of uncertainty and a question of, hey, should I consider if I was previously a vendor or a customer of either of those, now is the time to kind of open up and look at other opportunities,” Herren said. “And we’ve seen our wireless business, our orders greater than $1 million grew more than 20% in the fourth quarter.”

Cisco is also working through its own networking drama as part of the vendor’s recently announced restructuring process. Those moves will see Cisco focus more on high-growth areas like AI, security, and cloud at the expense of its legacy operations, including the paring down of its networking product lines.

“It looks like Cisco’s realizing that all the complexity of customer choice and all these variations and offering a zillion features is probably not the way to go. I think Chuck realized it,” Kindness said of Cisco’s efforts. “If you look at the ACI [Application Centric Infrastructure] and Cloud Dashboard for Nexus starting to consolidate, and then the Catalyst line and the Aironet line and the Meraki line are consolidating, it’s just the right move. The market has told them that for the last 10 years, it just took them a while to recognize it.”

References:

https://www.sdxcentral.com/articles/analysis/hpe-juniper-cisco-networking-chaos-has-enterprises-nervous/2024/11/

https://www.juniper.net/us/en.html

HPE + Juniper Networks Creates A Cisco Doppelganger

 

Cisco to lay off more than 4,000 as it shifts focus to AI and Cybersecurity

Cisco restructuring plan will result in ~4100 layoffs; focus on security and cloud based products

 

SK Telecom unveils plans for AI Infrastructure at SK AI Summit 2024

Introduction:

During the two-day SK AI Summit 2024 [1.], SK Telecom CEO Ryu Young-sang unveiled the company’s comprehensive AI strategy, which revolves around three core components: AI data centers (AIDCs), a cloud-based GPU service (GPU-as-a-Service, GPUaaS), and Edge AI. SK Telecom plans to construct hyperscale data centers in key regions across South Korea, with the goal of becoming the AIDC hub of the Asia Pacific region. Additionally, the company will launch a cloud-based GPU service to address the domestic GPU shortage and introduce ‘Edge AI’ to bridge the gap between AIDCs and on-device AI. This approach aims to connect national AI infrastructure and expand globally, in collaboration with partners both in South Korea and abroad.

Note 1. The SK AI Summit is an annual event held by the SK Group, where global experts in various AI fields gather to discuss coexistence in the era of artificial general intelligence (AGI) and seek ways to strengthen the ecosystem.

………………………………………………………………………………………………………………………………………………………………………..

Constructing AI Data Centers in South Korea’s key regions:

SK Telecom plans to start with hyperscale AIDCs that require more than 100 megawatts (MW) in local regions, with future plans to expand its scale to gigawatts (GW) or more, to leap forward as the AIDC hub in the Asia Pacific region.

By extending AIDCs to locations across the country, the centers can secure a stable power supply from renewable energy sources such as hydrogen, solar and wind power, and can expand easily to global markets through submarine cables. SK Telecom anticipates building AIDCs cost-effectively by combining SK Group’s capabilities in high-efficiency next-generation semiconductors, immersion cooling, and other energy solutions with its AI cluster operations.

Prior to this, SK Telecom plans to open an AIDC testbed in Pangyo, Korea, in December, which combines the capabilities of the SK Group and various solutions owned by partner companies. This facility, where all three types of next-generation liquid cooling solutions—direct liquid cooling, immersion cooling, and precision liquid cooling—are deployed, will be the first and only testbed in Korea. It will also feature advanced AI semiconductors like SK hynix’s HBM, as well as GPU virtualization solutions and AI energy optimization technology. This testbed will provide opportunities to observe and experience the cutting-edge technologies of a future AIDC.

Supplying GPUs via the cloud to metropolitan areas:

SK Telecom plans to launch a cloud-based GPU-as-a-Service (GPUaaS) by converting the Gasan data center, located in the metropolitan area, into an AIDC to quickly resolve the domestic GPU shortage.

Starting in December, SK Telecom plans to launch a GPUaaS with NVIDIA H100 Tensor Core GPU through a partnership with U.S.-based Lambda. In March 2025, SK Telecom plans to introduce NVIDIA H200 Tensor Core GPU in Korea, gradually expanding to meet customer demand.

Through the AI cloud service (GPUaaS), SKT aims to enable companies to develop AI services easily and at a lower cost, without needing to purchase their own GPUs, ultimately supporting the growth of Korea’s AI ecosystem.

Introducing ‘Edge AI’ to open a new opportunity in telco infrastructure:

SK Telecom plans to introduce ‘Edge AI,’ which can narrow the gap between AIDC and on-device AI, using the nationwide communication infrastructure.

Edge AI is an infrastructure that combines mobile communication networks and AI computing, offering lower latency, stronger security, and better privacy than large-scale AIDCs. Compared with on-device AI, it enables large-scale AI computing, complementing the existing AI infrastructure.

SKT is currently conducting research on advanced technologies and collaborating with global partners to build AIDC-utilizing communication infrastructure and develop customized servers. The company is also carrying out various proof of concept (PoC) projects across six areas, including healthcare, AI robots, and AI CCTV, to discover specialized Edge AI services.

“So far, the competition in telecommunications infrastructure has been all about connectivity, namely speed and capacity, but now the paradigm of network evolution should be changed,” said Ryu Young-sang, CEO of SK Telecom. “The upcoming 6G will evolve into a next-generation AI infrastructure where communication and AI are integrated.”

Developing a comprehensive AIDC solution to enter global market:

SK Telecom plans to develop a comprehensive AIDC solution that combines AI semiconductors, data centers, and energy solutions through collaboration with AI companies in Korea and abroad, with the aim of entering the global market. SK Telecom also aims to lead the global standardization of Edge AI and collaborate on advanced technology research, while working toward the transition to 6G AI infrastructure.

………………………………………………………………………………………………………………….

About SK Telecom:

SK Telecom has been leading the growth of the mobile industry since 1984. Now, it is taking customer experience to new heights by extending beyond connectivity. By placing AI at the core of its business, SK Telecom is rapidly transforming into an AI company with a strong global presence. It is focusing on driving innovations in areas of AI Infrastructure, AI Transformation (AIX) and AI Service to deliver greater value for industry, society, and life.

For more information, please contact [email protected] or visit our LinkedIn page www.linkedin.com/company/sk-telecom

………………………………………………………………………………………………………………….

References:

SKT-Samsung Electronics to Optimize 5G Base Station Performance using AI

SK Telecom (SKT) and Nokia to work on AI assisted “fiber sensing”

Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era

At the 10th Ultra-Broadband Forum (UBBF 2024) in Istanbul, Turkey, James Chen, President of Huawei’s Carrier Business, delivered a speech entitled “Network+AI, Unleashing More Business Value.”

“To explore the potential of AI, the ‘FOUR NEW’ strategy (new hub, new services, new experience, and new operation) is crucial. It helps carriers expand market boundaries, foster innovative services, and enhance market competitiveness, while also optimizing network O&M to achieve business success. Huawei is committed to working with global carriers and partners to unleash more business value and forge a win-win digital and intelligent future through the ‘FOUR NEW’ strategy.”

James Chen, President of Carrier Business, Huawei, delivering a keynote speech


……………………………………………………………………………………………..

Huawei believes that its “FOUR NEW” strategy is key to unleashing more business value through the combination of networking and AI.

  1. New Hub: The new hub is the AI hub for home services. Its core is the development of AI agents, which connect people, things, and applications; understand and respond to the needs of family members; control smart devices; and connect AI applications to expand the boundaries of home services. The new hub helps carriers achieve business breakthroughs in the home market.
  2. New Services: Carriers enable new services and aggregate high-quality content with AI to gradually build a home AI application ecosystem. AI can not only upgrade traditional services, such as interactive fitness and motion-sensing games, but also enable new home services, such as home service robots, health care, and education. It improves quality of life and gradually builds a home AI ecosystem.
  3. New Experience: New services such as cloud gaming, live commerce, and AI search across photos and videos are emerging one after another. These services place high demands on network quality, including latency, uplink and downlink bandwidth, and jitter, which brings carriers new monetization opportunities through business models such as latency-based, upstream-bandwidth-based, and AI-function-based charging. High-quality service experience requires high-quality networks: carriers can build “premium vertical and premium horizontal” networks, the key to which is 1 ms connections between data centers and 1 ms access to a data center.
  4. New Operation: As carriers’ networks grow larger, the autonomous driving network becomes more important. AI supports high-level network autonomy and improves operational efficiency. Huawei’s L4 autonomous driving network, based on its Telecom Foundation Model, helps operators reduce customer complaints, shorten complaint closure times, improve service provisioning efficiency, reduce site visits, and accelerate fault rectification.

In the wave of digital and intelligent transformation, the “FOUR NEW” strategy is not only an embodiment of network technology innovation, but also an important driving force for continuously releasing network business value. New Hub, New Services, New Experience, and New Operation support each other and together form a complete road to business success in the digital intelligence era.

In the future, Huawei will remain customer-centric and work with global carriers and partners to explore the digital intelligence era, accelerate the release of the business value of network + AI, and embrace a prosperous intelligent world.

References:

https://www.prnewswire.com/news-releases/huawei-proposes-the-four-new-strategy-to-help-carriers-achieve-business-success-in-the-digital-and-intelligence-era-302294830.html

Huawei’s First-Half Net Profit Rose on Strong Smartphone Sales, Car Business

China Unicom-Beijing and Huawei build “5.5G network” using 3 component carrier aggregation (3CC)

Despite U.S. sanctions, Huawei has come “roaring back,” due to massive China government support and policies

Huawei to revolutionize network operations and maintenance

Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC

Bloomberg reports that OpenAI, the fast-growing company behind ChatGPT, is working with Broadcom Inc. to develop a new artificial intelligence chip specifically focused on running AI models after they’ve been trained, according to two people familiar with the matter. The two companies are also consulting with Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest contract chip manufacturer. OpenAI has been planning a custom chip and working on uses for the technology for around a year, the people said, but the discussions are still at an early stage. The company has assembled a chip design team of about 20 people, led by top engineers who previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho (head of hardware engineering).

Reuters reported on OpenAI’s ongoing talks with Broadcom and TSMC on Tuesday. OpenAI has been working for months with Broadcom to build its first AI chip, one focused on inference (responding to user requests), according to sources. Demand is currently greater for training chips, but analysts have predicted that demand for inference chips could surpass it as more AI applications are deployed.

OpenAI has examined a range of options to diversify chip supply and reduce costs. OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of chip manufacturing factories known as “foundries.”


OpenAI may continue to research setting up its own network of foundries, or chip factories, one of the people said, but the startup has realized that working with partners on custom chips is a quicker, attainable path for now. Reuters earlier reported that OpenAI was pulling back from the effort of establishing its own chip manufacturing capacity.  The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources.

OpenAI, which helped commercialize generative AI that produces human-like responses to queries, relies on substantial computing power to train and run its systems. As one of the largest purchasers of Nvidia’s graphics processing units (GPUs), OpenAI uses AI chips both to train models where the AI learns from data and for inference, applying AI to make predictions or decisions based on new information. Reuters previously reported on OpenAI’s chip design endeavors. The Information reported on talks with Broadcom and others.

The Information reported in June that Broadcom had discussed making an AI chip for OpenAI. As one of the largest buyers of chips, OpenAI’s decision to source from a diverse array of chipmakers while developing its customized chip could have broader tech sector implications.

Broadcom is the largest designer of application-specific integrated circuits (ASICs) — chips designed to fit a single purpose specified by the customer. The company’s biggest customer in this area is Alphabet Inc.’s Google. Broadcom also works with Meta Platforms Inc. and TikTok owner ByteDance Ltd.

When asked last month whether he has new customers for the business, given the huge demand for AI training, Broadcom Chief Executive Officer Hock Tan said that he will only add to his short list of customers when projects hit volume shipments.  “It’s not an easy product to deploy for any customer, and so we do not consider proof of concepts as production volume,” he said during an earnings conference call.

OpenAI’s services require massive amounts of computing power to develop and run — with much of that coming from Nvidia chips. To meet the demand, the industry has been scrambling to find alternatives to Nvidia. That’s included embracing processors from Advanced Micro Devices Inc. and developing in-house versions.

OpenAI is also actively planning investments and partnerships in data centers, the eventual home for such AI chips. The startup’s leadership has pitched the U.S. government on the need for more massive data centers and CEO Sam Altman has sounded out global investors, including some in the Middle East, to finance the effort.

“It’s definitely a stretch,” OpenAI Chief Financial Officer Sarah Friar told Bloomberg Television on Monday. “Stretch from a capital perspective but also my own learning. Frankly we are all learning in this space: Infrastructure is destiny.”

Currently, Nvidia’s GPUs hold over 80% of the AI chip market. But shortages and rising costs have led major customers like Microsoft, Meta, and now OpenAI to explore in-house or external alternatives.

Training AI models and operating services like ChatGPT are expensive. OpenAI has projected a $5 billion loss this year on $3.7 billion in revenue, according to sources. Compute costs, or expenses for hardware, electricity and cloud services needed to process large datasets and develop models, are the company’s largest expense, prompting efforts to optimize utilization and diversify suppliers.

OpenAI has been cautious about poaching talent from Nvidia because it wants to maintain a good rapport with the chip maker it remains committed to working with, especially for accessing its new generation of Blackwell chips, sources added.

References:

https://www.bloomberg.com/news/articles/2024-10-29/openai-broadcom-working-to-develop-ai-chip-focused-on-inference?embedded-checkout=true

https://www.reuters.com/technology/artificial-intelligence/openai-builds-first-chip-with-broadcom-tsmc-scales-back-foundry-ambition-2024-10-29/

AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms

AI Frenzy Backgrounder; Review of AI Products and Services from Nvidia, Microsoft, Amazon, Google and Meta; Conclusions

AI sparks huge increase in U.S. energy consumption and is straining the power grid; transmission/distribution as a major problem

Generative AI Unicorns Rule the Startup Roost; OpenAI in the Spotlight

 

SKT-Samsung Electronics to Optimize 5G Base Station Performance using AI

SK Telecom (SKT) has partnered with Samsung Electronics to use AI to improve the performance of its 5G base stations and upgrade its wireless network. Specifically, the companies will apply AI-based 5G base station quality optimization technology (an “AI-RAN Parameter Recommender”) to commercial 5G networks.

The two companies have been working throughout the year to learn from past mobile network operation experiences using AI and deep learning, and recently completed the development of technology that automatically recommends optimal parameters for each base station environment.  When applied to SKT’s commercial network, the new technology was able to bring out the potential performance of 5G base stations and improve the customer experience.

Mobile base stations are affected by different wireless environments depending on their geographical location and surrounding facilities. For the same reason, there can be significant differences in the quality of 5G mobile communication services in different areas using the same standard equipment.

Accordingly, SKT used deep learning to analyze and learn the correlation between statistical data accumulated from existing wireless networks and operating parameters, allowing it to predict varied wireless environments and service characteristics and automatically derive the optimal parameters for improving perceived quality.
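The parameter-recommendation idea described above can be sketched in a few lines: fit a model on historical per-station statistics, then score candidate parameter values and pick the one with the best predicted quality for each station. The toy below uses synthetic data and a quadratic least-squares model standing in for deep learning; all feature names and data are illustrative assumptions, not SKT's or Samsung's actual system.

```python
import numpy as np

# Illustrative toy: learn a mapping from (station statistics, parameter value)
# to a perceived-quality score, then recommend the best parameter per station.
rng = np.random.default_rng(0)

# Synthetic history: columns = [interference, traffic_load, param_value]
X = rng.uniform(0.0, 1.0, size=(500, 3))
# Toy ground truth: quality peaks when the parameter tracks interference
y = 1.0 - (X[:, 2] - X[:, 0]) ** 2 + 0.01 * rng.standard_normal(500)

def features(X):
    """Quadratic feature map (stands in for a deep-learning model)."""
    i, t, p = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), i, t, p, i * i, p * p, i * p])

# Least-squares fit of the quality model
w, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def recommend_param(interference, load, candidates=np.linspace(0, 1, 101)):
    """Return the candidate parameter with the highest predicted quality."""
    Xc = np.column_stack([
        np.full_like(candidates, interference),
        np.full_like(candidates, load),
        candidates,
    ])
    return candidates[np.argmax(features(Xc) @ w)]

# A high-interference cell should get a correspondingly high recommendation
print(recommend_param(interference=0.8, load=0.5))
```

In a real RAN the "quality" label would come from measured KPIs (throughput, drop rate, perceived-quality scores) and the candidate grid from the vendor's tunable parameter ranges; the structure of the loop (learn, score candidates, recommend per station) is the part this sketch illustrates.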

Samsung Electronics’ ‘Network Parameter Optimization AI Model,’ used in this demonstration, improves the efficiency of the resources invested in optimizing the radio environment and performance, and enables optimal management of mobile networks organized extensively in cluster units.

The two companies are conducting additional learning and verification by diversifying the parameters applied to the optimized AI model and expanding the application to subways where traffic patterns change frequently.

SKT is also advancing methods that improve quality by automatically adjusting base station transmit power, or by resetting the allowed range of radio retransmissions, when radio signals are weak or interference causes data transmission errors.

In addition, SKT plans to continuously refine the technology by expanding the scope of parameters that can be optimized with AI, such as future beamforming*-related parameters, and by developing real-time application functions.

* Beamforming: a technique that focuses an antenna’s transmitted or received signal toward a specific receiving device to strengthen the link.
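The footnote’s idea can be illustrated with a toy uniform-linear-array sketch: conjugate (“matched”) per-antenna phase weights make the array’s response add coherently toward one direction and largely cancel elsewhere. Array size, spacing, and angles below are illustrative assumptions, not details of any SKT deployment.

```python
import numpy as np

# Toy beamforming sketch: an 8-antenna uniform linear array with
# half-wavelength spacing steers its gain toward one user direction.

def steering_vector(n_antennas, angle_deg, spacing=0.5):
    """Per-antenna phase response toward angle_deg (spacing in wavelengths)."""
    k = np.arange(n_antennas)
    phase = 2.0 * np.pi * spacing * k * np.sin(np.radians(angle_deg))
    return np.exp(1j * phase)

def array_gain(weights, angle_deg):
    """Normalized power the weighted array delivers toward a direction."""
    a = steering_vector(len(weights), angle_deg)
    return abs(np.vdot(weights, a)) ** 2  # vdot conjugates the weights

n = 8
target = 30.0
w = steering_vector(n, target) / n  # conjugate (matched) beamforming weights

print(array_gain(w, 30.0))   # full gain toward the steered user
print(array_gain(w, -20.0))  # strongly attenuated off-target
```

Steering toward 30° yields unit gain in that direction while directions well outside the main lobe see only a small fraction of the power, which is exactly the “focus the signal toward a specific receiving device” behavior the footnote describes.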

SKT is expanding the application of AI technology to various areas of the telecommunications network beyond this base station quality improvement, including ‘Telco Edge AI,’ network power saving, spam blocking, and operations automation. Its AI-based network power-saving technology was recently recognized as an outstanding technology at the ‘Network X Award 2024.’

Ryu Tak-ki, head of SK Telecom’s infrastructure technology division, said, “This is a meaningful achievement that has confirmed that the potential performance of individual base stations can be maximized by incorporating AI,” and emphasized, “We will accelerate the evolution into an AI-Native Network that provides differentiated customer experiences through the convergence of telecommunications and AI technologies.”

“AI is a key technology for innovation in various industrial fields, and it is also playing a decisive role in the evolution to next-generation networks,” said Choi Sung-hyun, head of the advanced development team at Samsung Electronics’ network business division. “Samsung Electronics will continue to take the lead in developing intelligent and automated technologies for AI-based next-generation networks.”
