AI/ML
Are cloud AI startups a serious threat to hyperscalers?
Introduction:
Cloud AI startups include Elon Musk’s xAI, OpenAI, Vultr, Prosimo, Alcion, Run:ai, among others. All of them are building, or planning to build, their own AI compute servers and data center infrastructure. Are they a serious threat to legacy cloud service providers, who are also building their own AI compute servers?
- xAI built a supercomputer it calls Colossus—with 100,000 of Nvidia’s Hopper AI chips—in Memphis, TN in 19 days, versus the four months such a buildout normally takes. The supercomputer is designed to drive cutting-edge AI research, from machine learning to neural networks, with a plan to use Colossus to train large language models (like OpenAI’s GPT series) and extend the framework into areas including autonomous machines, robotics and scientific simulations. Its mission statement says: “xAI is a company working on building artificial intelligence to accelerate human scientific discovery. We are guided by our mission to advance our collective understanding of the universe.”
- OpenAI’s policy chief Chris Lehane told the FT that his company will build digital infrastructure to train and run its systems. In the interview, Lehane said “chips, data and energy” will be the crucial factors in helping his company win the AI race and achieve its stated goal of developing artificial general intelligence (AGI), AI which can match or surpass the capability of the human brain. Lehane said the company would build clusters of data centers in the U.S. Midwest and Southwest, but did not go into further detail about the plan. DCD has contacted the company to ask for more information on its data center buildout.
Elon Musk’s xAI built a supercomputer in Memphis that it calls Colossus, with 100,000 Nvidia AI chips. Photo: Karen Pulfer Focht/Reuters
As noted in our companion post, cloud AI startup Vultr raised $333 million in a financing round this week from Advanced Micro Devices (AMD) and hedge fund LuminArx Capital Management, and is now valued at $3.5 billion.
Threats from Cloud AI Startups include:
- Specialization in AI: Many cloud AI startups are highly specialized in AI and ML solutions, focusing on specific needs such as deep learning, natural language processing, or AI-driven analytics. They can offer cutting-edge solutions that cater to AI-first applications, which might be more agile and innovative compared to the generalist services offered by hyperscalers.
- Flexibility and Innovation: Startups can innovate rapidly and respond to the needs of niche markets. For example, they might create more specialized and fine-tuned AI models or offer unique tools that address specific customer needs. Their focus on AI might allow them to provide highly optimized services for machine learning, automation, or data science, potentially making them appealing to companies with AI-centric needs.
- Cost Efficiency: Startups often have lower operational overheads, allowing them to provide more flexible pricing or cost-effective solutions tailored to smaller businesses or startups. They may disrupt the cost structure of larger cloud providers by offering more competitive prices for AI workloads.
- Partnerships with Legacy Providers: Some AI startups focus on augmenting the services of hyperscalers, partnering with them to integrate advanced AI capabilities. However, in doing so, they still create competition by offering specialized services that could, over time, encroach on the more general cloud offerings of these providers.
Challenges to Overcome:
- Scale and Infrastructure: Hyperscalers have massive infrastructure investments that enable them to offer unparalleled performance, reliability, and global reach. AI startups will need to overcome significant challenges in terms of scaling infrastructure and ensuring that their services are available and reliable on a global scale.
- Ecosystem and Integration: Many large enterprises rely on the vast ecosystem of services that hyperscalers provide. Startups will need to provide solutions that are highly compatible with existing tools, or offer a compelling reason for companies to shift their infrastructure to smaller providers.
- Market Penetration and Trust: Hyperscalers are trusted by major enterprises, and their brands are synonymous with stability and security. Startups need to gain this trust, which can take years, especially in industries where regulatory compliance and reliability are top priorities.
Conclusions:
Cloud AI startups will likely carve out a niche in the rapidly growing AI space, but they are not yet a direct existential threat to hyperscalers. While they could challenge hyperscalers’ dominance in specific AI-related areas (e.g., AI model development, hyper-specialized cloud services), the larger cloud providers have the infrastructure, resources, and customer relationships to maintain their market positions. Over time, however, AI startups could impact how traditional cloud services evolve, pushing hyperscalers to innovate and tailor their offerings more toward AI-centric solutions.
Cloud AI startups could pose some level of threat to hyperscalers (like Amazon Web Services, Microsoft Azure, and Google Cloud) and legacy cloud service providers, but the impact will take time to become significant. These cloud AI startups might force hyperscalers to accelerate their own AI development, but they are unlikely to displace them in the short to medium term.
References:
ChatGPT search
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
Ciena CEO sees huge increase in AI generated network traffic growth while others expect a slowdown
Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers
https://www.datacenterdynamics.com/en/news/openai-could-build-its-own-data-centers-in-the-us-report/
AI cloud start-up Vultr valued at $3.5B; Hyperscalers gorge on Nvidia GPUs while AI semiconductor market booms
Over the past two years, AI model builders OpenAI, Anthropic and Elon Musk’s xAI have raised nearly $40bn between them. Other sizeable investment rounds this week alone included $500mn for Perplexity, an AI-powered search engine, and $333mn for Vultr, part of a new band of companies running specialized cloud data centers to support AI.
Cloud AI startup Vultr raised $333 million in a financing round this week from Advanced Micro Devices (AMD) and hedge fund LuminArx Capital Management, a sign of the red-hot demand for AI infrastructure. West Palm Beach, Fla.-based Vultr said it is now valued at $3.5 billion and plans to use the financing to acquire more graphics processing units (GPUs), which process AI models. The funding is Vultr’s first injection of outside capital, and the valuation is unusually high for a company that had not previously raised external equity: the average valuation for companies receiving first-time financing is $51mn, according to PitchBook.
Vultr said its AI cloud service, in which it leases GPU access to customers, will soon become the biggest part of its business. Earlier this month, Vultr announced plans to build its first “super-compute” cluster with thousands of AMD GPUs at its Chicago-area data center. Vultr said its cloud platform serves hundreds of thousands of businesses, including Activision Blizzard, the Microsoft-owned videogame company, and Indian telecommunications giant Bharti Airtel. Vultr’s customers also use its decade-old cloud platform for their core IT systems, said Chief Executive J.J. Kardwell. Like most cloud platform providers, Vultr isn’t using just one GPU supplier. It offers Nvidia and AMD GPUs to customers, and plans to keep doing so, Kardwell said. “There are different parts of the market that value each of them,” he added.
Vultr’s plan to expand its network of data centers, currently in 32 locations, is a bet that customers will seek greater proximity to their computing infrastructure as they move from training to “inference” — industry parlance for using models to perform calculations and make decisions.
Vultr runs a cloud computing platform on which customers can run applications and store data remotely © Vultr
……………………………………………………………………………………………………………………………………………………………………………………………..
The 10 biggest cloud companies — dubbed hyperscalers — are on track to allocate $326bn to capital expenditure in 2025, according to analysts at Morgan Stanley. While most depend heavily on chips made by Nvidia, large companies including Google, Amazon and Facebook are designing their own customized silicon to perform specialized tasks. Away from the tech mega-caps, emerging “neo-cloud” companies such as Vultr, CoreWeave, Lambda Labs and Nebius have raised billions of dollars of debt and equity in the past year in a bet on the expanding power and computing needs of AI models.
AI chip market leader Nvidia, alongside other investors, provided more than $400 million to AI cloud provider CoreWeave [1.] in 2023. CoreWeave last year also secured $2.3 billion in debt financing by using its Nvidia GPUs as collateral.
Note 1. CoreWeave is a New Jersey-based company that got its start in cryptocurrency mining.
The race to train sophisticated AI models has inspired the commissioning of increasingly large “supercomputers” (aka AI Clusters) that link up hundreds of thousands of high-performance GPU chips. Elon Musk’s start-up xAI built its Colossus supercomputer in just three months and has pledged to increase it tenfold. Meanwhile, Amazon is building a GPU cluster alongside Anthropic, developer of the Claude AI models. The ecommerce group has invested $8bn in Anthropic.
Hyperscalers are big buyers of Nvidia GPUs:
Analysts at market research firm Omdia (an Informa company) estimate that Microsoft bought 485,000 of Nvidia’s “Hopper” chips this year. With demand outstripping supply of Nvidia’s most advanced graphics processing units for much of the past two years, Microsoft’s chip hoard has given it an edge in the race to build the next generation of AI systems.
This year, Big Tech companies have spent tens of billions of dollars on data centers running Nvidia’s latest GPU chips, which have become the hottest commodity in Silicon Valley since the debut of ChatGPT two years ago kick-started an unprecedented surge of investment in AI.
- Microsoft’s Azure cloud infrastructure was used to train OpenAI’s latest o1 model, as the two companies race against a resurgent Google, start-ups such as Anthropic and Elon Musk’s xAI, and rivals in China for dominance of the next generation of computing.
- Omdia estimates that ByteDance and Tencent each ordered about 230,000 of Nvidia’s chips this year, including the H20 model, a less powerful version of Hopper that was modified to meet U.S. export controls for Chinese customers.
- Meta bought 224,000 Hopper chips.
- Amazon and Google, which along with Meta are stepping up deployment of their own custom AI chips as an alternative to Nvidia’s, bought 196,000 and 169,000 Hopper chips, respectively, the analysts said. Omdia analyses companies’ publicly disclosed capital spending, server shipments and supply chain intelligence to calculate its estimates.
The top 10 buyers of data center infrastructure — which now include relative newcomers xAI and CoreWeave — make up 60% of global investment in computing power. Vlad Galabov, director of cloud and data center research at Omdia, said some 43% of spending on compute servers went to Nvidia in 2024. “Nvidia GPUs claimed a tremendously high share of the server capex,” he said.
What’s telling is that the biggest buyers of Nvidia GPUs are the hyperscalers, who design their own compute servers and outsource the detailed implementation and manufacturing to ODMs in Taiwan and China. U.S. compute server makers Dell and HPE are not even in the ballpark!
What about #2 GPU maker AMD?
“For AMD to be able to get good billing with an up-and-coming cloud provider like Vultr will help them get more visibility in the market,” said Dave McCarthy, a research vice president in cloud and edge services at research firm International Data Corp (IDC). AMD has also invested in cloud providers such as TensorWave, which also offers an AI cloud service. In August, AMD bought the data-center equipment designer ZT Systems for nearly $5 billion. Microsoft, Meta Platforms and Oracle have said they use AMD’s GPUs. A spokesperson for Amazon’s cloud unit said the company works closely with AMD and is “actively looking at offering AMD’s AI chips.”
Promising AI Chip Startups:
Nuvia: Founded by former Apple engineers, Nuvia is focused on creating high-performance processors tailored for AI workloads. Their chips are designed to deliver superior performance while maintaining energy efficiency, making them ideal for data centers and edge computing.
SambaNova Systems: This startup is revolutionizing AI with its DataScale platform, which integrates hardware and software to optimize AI workloads. Their unique architecture allows for faster training and inference, catering to enterprises looking to leverage AI for business intelligence.
Graphcore: Known for its Intelligence Processing Unit (IPU), Graphcore is making waves in the AI chip market. The IPU is designed specifically for machine learning tasks, providing significant speed and efficiency improvements over traditional GPUs.
Market for AI semiconductors:
- IDC estimates the AI semiconductor market will reach $193.3 billion by the end of 2027, up from an estimated $117.5 billion this year (see the sketch after this list). Nvidia commands about 95% of the market for AI chips, according to IDC.
- Bank of America analysts forecast the market for AI chips will be worth $276 billion by 2027.
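For context, the growth rate implied by these figures is easy to check. Here is a minimal sketch, assuming “this year” refers to 2024 (so three years of growth to the end of 2027):

```python
# Implied compound annual growth rate (CAGR) behind IDC's AI chip forecast.
# Assumption: "this year" is 2024, so $117.5B -> $193.3B spans three years.

start_value = 117.5   # $B, estimated AI semiconductor market this year (IDC)
end_value = 193.3     # $B, IDC forecast for the end of 2027
years = 3

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~18.1% per year
```

Bank of America’s $276 billion forecast for 2027 implies a much steeper ramp, roughly 33% per year from the same base.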
References:
https://www.ft.com/content/946069f6-e03b-44ff-816a-5e2c778c67db
https://www.restack.io/p/ai-chips-answer-top-ai-chip-startups-2024-cat-ai
Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers
The need for more cloud computing capacity and AI applications has been driving huge investments in data centers. Those investments have led to a steady demand for fiber capacity between data centers and more optical networking innovation inside data centers. Here’s the latest example of that:
Prometheus Hyperscale has chosen Lumen Technologies to connect its energy-efficient data centers to meet growing AI data demands. Lumen network services will help Prometheus with the rapid growth in AI, big data, and cloud computing as they address the critical environmental challenges faced by the AI industry.
Rendering of Prometheus Hyperscale flagship Data Center in Evanston, Wyoming:
……………………………………………………………………………….
Prometheus Hyperscale, known for pioneering sustainability in the hyperscale data center industry, is deploying a Lumen Private Connectivity Fabric℠ solution, including new network routes built with Lumen next generation wavelength services and Dedicated Internet Access (DIA) [1.] services with Distributed Denial of Service (DDoS) protection layered on top.
Note 1. Dedicated Internet Access (DIA) is a premium internet service that provides a business with a private, high-speed connection to the internet.
This expanded network will enable high-density compute in Prometheus facilities to deliver scalable and efficient data center solutions while maintaining their commitment to renewable energy and carbon neutrality. Lumen networking technology will provide the low-latency, high-performance infrastructure critical to meet the demands of AI workloads, from training to inference, across Prometheus’ flagship facility in Wyoming and four future data centers in the western U.S.
“What Prometheus Hyperscale is doing in the data center industry is unique and innovative, and we want to innovate alongside them,” said Ashley Haynes-Gaspar, Lumen EVP and chief revenue officer. “We’re proud to partner with Prometheus Hyperscale in supporting the next generation of sustainable AI infrastructure. Our Private Connectivity Fabric solution was designed with scalability and security to drive AI innovation while aligning with Prometheus’ ambitious sustainability goals.”
Prometheus, founded as Wyoming Hyperscale in 2020, turned to Lumen networking solutions prior to the launch of its first development site in Aspen, WY. This facility integrates renewable energy sources, sustainable cooling systems, and AI-driven energy optimization, allowing for minimal environmental impact while delivering the computational power AI-driven enterprises demand. The partnership with Lumen reinforces Prometheus’ dedication to both technological innovation and environmental responsibility.
“AI is reshaping industries, but it must be done responsibly,” said Trevor Neilson, president of Prometheus Hyperscale. “By joining forces with Lumen, we’re able to offer our customers best-in-class connectivity to AI workloads while staying true to our mission of building the most sustainable data centers on the planet. Lumen’s network expertise is the perfect complement to our vision.”
Prometheus’ data center campus in Evanston, Wyoming will be one of the biggest data centers in the world with facilities expected to come online in late 2026. Future data centers in Pueblo, Colorado; Fort Morgan, Colorado; Phoenix, Arizona; and Tucson, Arizona, will follow and be strategically designed to leverage clean energy resources and innovative technology.
About Prometheus Hyperscale:
Prometheus Hyperscale, founded by Trenton Thornock, is revolutionizing data center infrastructure by developing sustainable, energy-efficient hyperscale data centers. Leveraging unique, cutting-edge technology and working alongside strategic partners, Prometheus is building next-generation, liquid-cooled hyperscale data centers powered by cleaner energy. With a focus on innovation, scalability, and environmental stewardship, Prometheus Hyperscale is redefining the data center industry for a sustainable future. This announcement follows recent news of Bernard Looney, former CEO of bp, being appointed Chairman of the Board.
To learn more visit: www.prometheushyperscale.com
About Lumen Technologies:
Lumen uses the scale of its network to help companies realize AI’s full potential. From metro connectivity to long-haul data transport to edge cloud, security, managed services, and digital platform capabilities, Lumen meets its customers’ needs today and is ready for tomorrow’s requirements.
In October, Lumen CTO Dave Ward told Light Reading that a “fundamentally different order of magnitude” of compute power, graphics processing units (GPUs) and bandwidth is required to support AI workloads. “It is the largest expansion of the Internet in our lifetime,” Ward said.
Lumen is constructing 130,000 fiber route miles to support Meta and other customers seeking to interconnect AI-enabled data centers. According to a story by Kelsey Ziser, the fiber conduits in this buildout would contain anywhere from 144 to more than 500 fibers to connect multi-gigawatt data centers.
REFERENCES:
https://www.lightreading.com/data-centers/2024-in-review-data-center-shifts
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
Initiatives and Analysis: Nokia focuses on data centers as its top growth market
Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers
Deutsche Telekom with AWS and VMware demonstrate a global enterprise network for seamless connectivity across geographically distributed data centers
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
One of the big tech themes in 2024 was the buildout of data center infrastructure to support generative (Gen) artificial intelligence (AI) compute servers. Gen AI requires massive computational power, which only huge, powerful data centers can provide. Big tech companies like Amazon (AWS), Microsoft (Azure), Google (Google Cloud), Meta (Facebook) and others are building or upgrading their data centers to provide the infrastructure necessary for training and deploying AI models. These investments include high-performance GPUs, specialized hardware, and cutting-edge network infrastructure.
- Barron’s reports that big tech companies are spending billions on that initiative. In the first nine months of 2024, Amazon, Microsoft, and Alphabet spent a combined $133 billion building AI capacity, up 57% from the previous year, according to Barron’s. Much of the spending accrued to Nvidia, whose data center revenue reached $80 billion over the past three quarters, up 174%. The infrastructure buildout will surely continue in 2025, but tough questions from investors about return on investment (ROI) and productivity gains will take center stage from here.
- Amazon, Google, Meta and Microsoft expanded such investments by 81% year over year during the third quarter of 2024, according to an analysis by the Dell’Oro Group, and are on track to have spent $180 billion on data centers and related costs by the end of the year. The three largest public cloud providers, Amazon Web Services (AWS), Azure and Google Cloud, each saw a spike in their AI investments during the third quarter of this year. Baron Fung, a senior director at Dell’Oro Group, told Newsweek: “We think spending on AI infrastructure will remain elevated compared to other areas over the long-term. These cloud providers are spending many billions to build larger and more numerous AI clusters. The larger the AI cluster, the more complex and sophisticated the AI models that can be trained. Applications such as Copilot, chatbots and search will be more targeted to each user and application, ultimately delivering more value to users and determining how much end-users will pay for such a service,” Fung added.
- Efficient and scalable data centers can lower operational costs over time. Big tech companies could offer AI cloud services at scale, which might result in recurring revenue streams. For example, AI infrastructure-as-a-service (IaaS) could be a substantial revenue driver in the future, but no one really knows when that might be.
Microsoft has a long history of pushing new software and services to its large customer base; in fact, that greatly contributed to the success of its Azure cloud computing and storage services. The centerpiece of Microsoft’s AI strategy is getting many of those customers to pay for Microsoft 365 Copilot, an AI assistant for its popular apps like Word, Excel, and PowerPoint. Copilot costs $360 a year per user, and that’s on top of all the other software, which costs anywhere from $72 to $657 a year. Microsoft’s AI doesn’t come cheap. Alistair Speirs, senior director of Microsoft Azure Global Infrastructure, told Newsweek: “Microsoft’s datacenter construction has been accelerating for the past few years, and that growth is guided by the growing demand signals that we are seeing from customers for our cloud and AI offerings. As we grow our infrastructure to meet the increasing demand for our cloud and AI services, we do so with a holistic approach, grounded in the principle of being a good neighbor in the communities in which we operate.”
Venture capitalist David Cahn of Sequoia Capital estimates that for AI to be profitable, every dollar invested in infrastructure needs to generate four dollars in revenue. Those profits aren’t likely to come in 2025, but the companies involved (and their investors) will no doubt want to see signs of progress. One issue they will have to deal with is the popularity of free AI, which doesn’t generate any revenue by itself.
An August 2024 survey of over 4,600 adult Americans from researchers at the Federal Reserve Bank of St. Louis, Vanderbilt University, and Harvard University showed that 32% of respondents had used AI in the previous week, a faster adoption rate than either the PC or the internet. When asked what services they used, free options like OpenAI’s ChatGPT, Google’s Gemini, Meta Platforms’ Meta AI, and Microsoft’s Windows Copilot were cited most often. Unlike Microsoft 365 Copilot, the versions of Copilot built into Windows and Bing are free.
The unsurprising popularity of free AI services creates a dilemma for tech firms. It’s expensive to run AI in the cloud at scale, and as of now there’s no revenue behind it. The history of the internet suggests that these free services will be monetized through advertising, an arena where Google, Meta, and Microsoft have a great deal of experience. Investors should expect at least one of these services to begin serving ads in 2025, with the others following suit. The better AI gets—and the more utility it provides—the more likely consumers will go along with those ads.
Productivity Check:
We’re at the point in AI’s rollout where novelty needs to be replaced by usefulness—and investors will soon be looking for signs that AI is delivering productivity gains to business. Here we can turn to macroeconomic data for answers. According to the U.S. Bureau of Labor Statistics, since the release of ChatGPT in November 2022, labor productivity has risen at an annualized rate of 2.3% versus the historical median of 2.0%. It’s too soon to credit AI for those gains, but if above-median productivity growth continues into 2025, the conversation gets more interesting.
There’s also the continued question of AI and jobs, a fraught conversation that isn’t going to get any easier. There may already be AI-related job loss happening in the information sector, home to media, software, and IT. Since the release of ChatGPT, employment is down 3.9% in the sector, even as U.S. payrolls overall have grown by 3.3%. The other jobs most at risk are in professional and business services and in the financial sector. To be sure, the history of technological change is always complicated. AI might take away jobs, but it’s sure to add some, too.
“Some jobs will likely be automated. But at the same time, we could see new opportunities in areas requiring creativity, judgment, or decision-making,” economists Alexander Bick of the Federal Reserve Bank of St. Louis and Adam Blandin of Vanderbilt University tell Barron’s. “Historically, every big tech shift has created new types of work we couldn’t have imagined before.”
Closing Quote:
“Generative AI (GenAI) is being felt across all technology segments and subsegments, but not to everyone’s benefit,” said John-David Lovelock, Distinguished VP Analyst at Gartner. “Some software spending increases are attributable to GenAI, but to a software company, GenAI most closely resembles a tax. Revenue gains from the sale of GenAI add-ons or tokens flow back to their AI model provider partner.”
References:
AI Stocks Face a New Test. Here Are the 3 Big Questions Hanging Over Tech in 2025
Big Tech Increases Spending on Infrastructure Amid AI Boom – Newsweek
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
Ciena CEO sees huge increase in AI generated network traffic growth while others expect a slowdown
Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers
SK Telecom unveils plans for AI Infrastructure at SK AI Summit 2024
Huawei’s “FOUR NEW strategy” for carriers to be successful in AI era
Initiatives and Analysis: Nokia focuses on data centers as its top growth market
India Mobile Congress 2024 dominated by AI with over 750 use cases
Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC
Ciena CEO sees huge increase in AI generated network traffic growth while others expect a slowdown
Today, Ciena reported fiscal fourth-quarter revenue of $1.12 billion, above analyst expectations of around $1.103 billion. Orders were once again ahead of revenue, even though the company had expected orders to be below revenue just a few months ago. A closer look at key metrics reveals mixed results, with some segments like Software and Services showing strong growth (+20.6% year-over-year) and others like Routing and Switching experiencing significant declines (-38.4% year-over-year).
CEO Gary Smith cited increased demand for the company’s Reconfigurable Line Systems (RLS), primarily from large cloud providers. He said the company was also doing well selling its WaveLogic coherent optical pluggables, which optimize performance in data centers as they support traffic from AI and machine learning.
Ciena’s Managed Optical Fiber Networks (MOFN) technology is designed for global service providers that are building dedicated private optical networks for cloud providers. MOFN came about a few years ago when cloud providers wanted to enter countries where they weren’t allowed to build their own fiber networks. “They had to go with the incumbent carrier, but they wanted to have control of their network within country. It was sort of a niche-type play. But we’ve seen more recently, over the last 6-9 months, that model being more widely adopted,” Smith said. MOFN is becoming more widely utilized, and the good news for Ciena is that cloud providers often request that Ciena equipment be used so that it matches with the rest of their network, according to Smith.
Image Credit: Midjourney for Fierce Network
…………………………………………………………………………………………………………………………..
The company also said it now expects average annual revenue growth of approximately 8% to 11% over the next three years. “Our business is linked heavily into the growth of bandwidth around the world,” CEO Gary Smith said after Ciena’s earnings call. “Traffic growth has been between 20% and 40% per year very consistently for the last two decades,” Smith told Light Reading.
Ciena believes huge investments in data centers with AI compute servers will ultimately result in more traffic traveling over U.S. and international broadband networks. “It has to come out of the data center and onto the network,” Smith said of AI data. “Now, quite where it ends up being, who can know. As an exact percentage, a lot of people are working through that, including the cloud guys,” he said about the data traffic growth rate over the next few years. “But one would expect [AI data] to layer on top of that 30% growth, is the point I’m making,” he added.
AI comes at a fortuitous time for Ciena. “You’re having to connect these GPU clusters over greater distances. We’re beginning to see general, broader traffic growth in things like inference and training. And that’s going to obviously drive our business, which is why we’re forecasting greater than normal growth,” Smith said.
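Smith’s “layer on top” framing is easy to quantify. Below is a minimal sketch of what sustained ~30% annual traffic growth compounds to, with a purely illustrative extra increment for AI traffic (Smith himself notes nobody yet knows the exact AI share):

```python
# Compounding of baseline network traffic growth, with and without an
# assumed AI increment layered on top. The 10-point AI uplift is purely
# illustrative; Smith only says AI traffic should add to the ~30% baseline.

baseline = 0.30    # ~30%/year historical traffic growth (per Smith)
ai_uplift = 0.10   # assumed extra annual growth from AI traffic (hypothetical)

traffic, traffic_with_ai = 1.0, 1.0
for year in range(1, 4):
    traffic *= 1 + baseline
    traffic_with_ai *= 1 + baseline + ai_uplift
    print(f"Year {year}: baseline x{traffic:.2f}, with AI x{traffic_with_ai:.2f}")
# After 3 years: ~2.2x at baseline vs. ~2.7x with the assumed AI layer.
```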
Smith’s positive comments on AI traffic are noteworthy in light of some data points showing a slowdown in the rate of growth in data traffic on global networks. For example:
- OpenVault recently reported that monthly average broadband data consumption in the third quarter inched up 7.2%, the lowest rate of growth seen since the company began reporting these trends in 2012.
- In Ericsson’s newest report, Fredrik Jejdling, EVP and head of business area networks, said: “We see continued mobile network traffic growth but at a slower rate.”
- Some of the nation’s biggest Content Delivery Network (CDN) providers – including Akamai, Fastly and Edgio – are struggling to come to terms with a historic slowdown in Internet traffic growth. Such companies operate the networks that convey video and other digital content online.
- “In terms of traffic growth, it is growing very slowly – at rates that we haven’t seen in the 25-plus years we’ve been in this business. So it’s growing very, very slow,” Akamai CFO Ed McGowan said recently. “It’s just been a weak traffic environment.”
“The cloud providers themselves are building bigger infrastructure and networks, and laying track for even greater growth in the future as more and more of that AI traffic comes out of the data center,” Smith said. “So that’s why we’re predicting greater growth than normal over the next three years. It’s early days for that traffic coming out of the data center, but I think we’re seeing clear evidence around it. So you’re looking at an enormous step function in traffic flows over the next few years,” he concluded.
References:
https://www.lightreading.com/data-centers/ciena-ceo-prepare-for-the-ai-traffic-wave
https://www.fierce-network.com/broadband/cienas-ceo-says-companys-growth-linked-ai
Summit Broadband deploys 400G using Ciena’s WaveLogic 5 Extreme
DriveNets and Ciena Complete Joint Testing of 400G ZR/ZR+ optics for Network Cloud Platform
Telco spending on RAN infrastructure continues to decline as does mobile traffic growth
Analysys Mason & Light Reading: cellular data traffic growth rates are decreasing
TechCrunch: Meta to build $10 billion Subsea Cable to manage its global data traffic
Initiatives and Analysis: Nokia focuses on data centers as its top growth market
China Telecom’s 2025 priorities: cloud based AI smartphones (?), 5G new calling (GSMA), and satellite-to-phone services
At the 2024 Digital Technology Ecosystem Conference last week, China Telecom executives identified AI, 5G new calling and satellite-to-phone services as its handset priorities for 2025. The state-owned network operator, like other Chinese telcos, is working with local manufacturers to build the devices it wants to sell through its channels.
China Telecom’s smartphone priorities align with its major corporate objectives. As China Telecom vice president Li Jun explained, devices are critical right across the business. “Terminals are an extension of the cloud network, a carrier of services, and a user interface,” he said.
China Telecom Vice President Li Jun
………………………………………………………………………………………………………………………………………………………………………………………………………………
China Telecom Deputy General Manager Tang Ke introduced the progress made by China Telecom and its partners in AI and the emerging terminal ecosystem. He stated that in 2024, China Telecom achieved large-scale development of basic 5G services, with over 120 million new self-registered users annually and more than 140 phone models supporting 5G messaging.
In terms of emerging businesses, leading domestic smartphone brands fully support direct satellite connection, with 20 models available and over 4 million units activated. Leading PC brands fully integrate Tianyi Cloud Computer, further enriching applications in work, education, life and entertainment scenarios. Domestic phones fully support quantum secure calls, with over 50 million new self-registered users. Terminals fully support the upgrade to the 800 MHz band, reaching over 100 million users. Besides phones that support direct-to-cell calling, China Telecom also hopes to develop low-cost positioning technology using BeiDou and 5G location capabilities.
China Telecom continues to promote comprehensive AI upgrades of terminals, collaborating with partners to expand AI terminal categories and provide users with more diverse choices and higher-quality experiences. Tang Ke revealed that, at the main forum of the “2024 Digital Technology Ecosystem Conference,” China Telecom will release its first operator-customized AI phone.
Tang Ke emphasized that in the AI era, jointly building a collaborative and mutually promoting AI terminal ecosystem has become the inevitable path of industry development. Ecosystem participants must closely coordinate in technology, industry, and business to offer users the best AI experience. China Telecom will comprehensively advance technical collaboration, accelerating coordination from levels such as chips, large models, and intelligent agents, and promoting the construction of AI technology frameworks from both the device and cloud sides. The company will comprehensively push terminal AI upgrades, accelerating the AI development of wearables, healthcare, education, innovation, and industry terminals, based on key categories such as smartphones, cameras, cloud computers, and smart speakers.
Deputy Marketing Director Shao Yantao laid out the company’s device strategy for the year ahead. He said China Telecom’s business was based on networks, cloud-network integration and quantum security, with a focus on three technology directions – AI, 5G and satellites. With AI, it aims to carry out joint product development with OEM partners to build device-cloud capabilities and establish AI models. The state-owned telco will pursue “domestic and foreign” projects in cloud-based AI mobile phones.
Besides smartphones, other AI-powered products next year would likely include door locks, speakers, glasses and watches, Shao said. The other big focus area is 5G new calling, based on new IMS DC (data channel) capabilities, with the aim of opening up new use cases like screen sharing and interactive games during a voice call.
China Telecom would develop its own open-source IMS DC SDK to support AI, payments, XR and other new functionality, Shao said. But he acknowledged the need to build cooperation across the industry ecosystem. The network operator and its partners would also collaborate on Voice over WiFi and 3CC carrier aggregation for 5G-Advanced devices, he added.
……………………………………………………………………………………………………………………………………………………………………………………………..
China’s Ministry of Industry and Information Technology (MIIT) claims that China has built and activated over 4.1 million 5G base stations, with the 5G network continuously extending into rural areas, achieving “5G coverage in every township.” 5G has been integrated into 80 major categories of the national economy, with over 100,000 application cases accumulated. The breadth and depth of applications continue to expand, profoundly transforming lifestyles, production methods, and governance models.
The meeting emphasized the need to leverage the implementation of the “Sailing” Action Upgrade Plan for Large-scale 5G Applications as a means to vigorously promote the large-scale development of 5G applications, supporting new types of industrialization and the modernization of the information and communications industry, thereby laying a solid foundation for building a strong network nation and advancing Chinese-style modernization.
References:
https://www.c114.com.cn/news/22/c23811.html
https://en.c114.com.cn/583/a1279613.html
https://en.c114.com.cn/583/a1279469.html
China Telecom and China Mobile invest in LEO satellite companies
China Telecom with ZTE demo single-wavelength 1.2T bps hollow-core fiber transmission system over 100T bps
ZTE and China Telecom unveil 5G-Advanced solution for B2B and B2C services
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
Meta Platforms and Elon Musk’s xAI start-up are among companies building clusters of computer servers with as many as 100,000 of Nvidia’s most advanced GPU chips as the race for artificial-intelligence (AI) supremacy accelerates.
- Meta Chief Executive Mark Zuckerberg said last month that his company was already training its most advanced AI models with a conglomeration of chips he called “bigger than anything I’ve seen reported for what others are doing.”
- xAI built a supercomputer called Colossus—with 100,000 of Nvidia’s Hopper GPU/AI chips—in Memphis, TN in a matter of months.
- OpenAI and Microsoft have been working to build up significant new computing facilities for AI. Google is building massive data centers to house chips that drive its AI strategy.
xAI built a supercomputer in Memphis that it calls Colossus, with 100,000 Nvidia AI chips. Photo: Karen Pulfer Focht/Reuters
A year ago, clusters of tens of thousands of GPU chips were seen as very large. OpenAI used around 10,000 of Nvidia’s chips to train the version of ChatGPT it launched in late 2022, UBS analysts estimate. Installing many GPUs in one location, linked together by superfast networking equipment and cables, has so far produced larger AI models at faster rates. But there are questions about whether ever-bigger super clusters will continue to translate into smarter chatbots and more convincing image-generation tools.
Nvidia Chief Executive Jensen Huang said that while the biggest clusters for training giant AI models now top out at around 100,000 of Nvidia’s current chips, “the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving. Do we think that we need millions of GPUs? No doubt. That is a certainty now. And the question is how do we architect it from a data center perspective,” Huang added.
“There is no evidence that this will scale to a million chips and a $100 billion system, but there is the observation that they have scaled extremely well all the way from just dozens of chips to 100,000,” said Dylan Patel, the chief analyst at SemiAnalysis, a market research firm.
Giant super clusters are already being built. Musk posted last month on his social-media platform X that his 100,000-chip Colossus super cluster was “soon to become” a 200,000-chip cluster in a single building. He also posted in June that the next step would probably be a 300,000-chip cluster of Nvidia’s newest GPU chips next summer. The rise of super clusters comes as their operators prepare for Nvidia’s next-gen Blackwell chips, which are set to start shipping out in the next couple of months. Blackwell chips are estimated to cost around $30,000 each, meaning a cluster of 100,000 would cost $3 billion, not counting the price of the power-generation infrastructure and IT equipment around the chips.
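The cluster economics above reduce to simple multiplication. Here is a back-of-the-envelope sketch using the ~$30,000 chip estimate cited above; the overhead multiplier is a hypothetical placeholder, since the article gives no figure for power and IT infrastructure:

```python
# Back-of-the-envelope cost of a 100,000-GPU Blackwell cluster.
# Chip price is the ~$30,000 estimate cited above; the overhead multiplier
# for power generation and IT equipment is an assumed illustrative figure.

num_gpus = 100_000
price_per_gpu = 30_000        # $ per Blackwell chip (estimate cited above)
overhead_multiplier = 1.5     # assumed uplift for power, networking, IT

chip_cost = num_gpus * price_per_gpu
total_cost = chip_cost * overhead_multiplier
print(f"Chips alone: ${chip_cost / 1e9:.1f}B")    # $3.0B, matching the article
print(f"With assumed overhead: ${total_cost / 1e9:.1f}B")
```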
Those dollar figures make building up super clusters with ever more chips something of a gamble, industry insiders say, given that it isn’t clear that they will improve AI models to a degree that justifies their cost. Indeed, new engineering challenges also often arise with larger clusters:
- Meta researchers said in a July paper that a cluster of more than 16,000 of Nvidia’s GPUs suffered routine, unexpected failures of chips and other components as the company trained an advanced version of its Llama model over 54 days (see the sketch after this list).
- Keeping Nvidia’s chips cool is a major challenge as clusters of power-hungry chips become packed more closely together, industry executives say, part of the reason there is a shift toward liquid cooling where refrigerant is piped directly to chips to keep them from overheating.
- The sheer size of the super clusters requires a stepped-up level of management of those chips when they fail. Mark Adams, chief executive of Penguin Solutions, a company that helps set up and operate computing infrastructure, said elevated complexity in running large clusters of chips inevitably throws up problems.
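The Meta data point shows why failure management dominates at this scale: with tens of thousands of components, even tiny per-device failure rates produce constant interruptions. A minimal sketch, assuming independent failures and an illustrative per-GPU daily failure probability (not a figure from the article):

```python
# Why reliability dominates at cluster scale: even a small per-GPU daily
# failure probability yields frequent interruptions across 16,000 GPUs.
# The failure rate below is an illustrative assumption.

num_gpus = 16_000          # cluster size from Meta's July paper
days = 54                  # length of the Llama training run
p_fail_per_gpu_day = 5e-4  # assumed chance a given GPU fails on a given day

expected_failures = num_gpus * days * p_fail_per_gpu_day
print(f"Expected failures over the run: {expected_failures:.0f}")  # ~432
print(f"That is one interruption every {days * 24 / expected_failures:.1f} hours")
```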
The continuation of the AI boom for Nvidia largely depends on how well the largest clusters of GPU chips deliver a return on investment for its customers. The trend also fosters demand for Nvidia’s networking equipment, which is fast becoming a significant business: Nvidia’s networking equipment revenue in 2024 was $3.13 billion, a 51.8% increase from the previous year. Built largely on its Mellanox acquisition, Nvidia offers these networking platforms:
- Accelerated Ethernet Switching for AI and the Cloud
- Quantum InfiniBand for AI and Scientific Computing
- Bluefield® Network Accelerators
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………..
Nvidia forecasts total fiscal fourth-quarter sales of about $37.5bn, up 70%. That was above average analyst projections of $37.1bn, compiled by Bloomberg, but below some projections that were as high as $41bn. “Demand for Hopper and anticipation for Blackwell – in full production – are incredible as foundation model makers scale pretraining, post-training and inference,” Huang said. “Both Hopper and Blackwell systems have certain supply constraints, and the demand for Blackwell is expected to exceed supply for several quarters in fiscal 2026,” CFO Colette Kress said.
References:
https://www.wsj.com/tech/ai/nvidia-chips-ai-race-96d21d09?mod=tech_lead_pos5
https://www.nvidia.com/en-us/networking/
https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-third-quarter-fiscal-2025
HPE-Juniper combo + Cisco restructuring create enterprise network uncertainty
Hewlett Packard Enterprise’s (HPE) pending acquisition of Juniper Networks and Cisco’s recent corporate restructuring (which de-emphasizes legacy networking products like access/core routers and Ethernet switches) are putting enterprise networking customers in a holding pattern. They are pausing investments in network equipment while they wait out the uncertainty.
“I’ve had customers put things on hold right now, and not just the Juniper side but both sides,” Andre Kindness, principal analyst at Forrester Research, said in an interview with SDxCentral about how Juniper and HPE customers are reacting to uncertainty around the deal. “Typically, if customers are strong enough to look outside of Cisco and they’re not a Cisco shop, then HPE, Aruba, Juniper are the primary ones that they’re looking at. I’ve had customers put some of that on hold at this point.”
That holding pattern is tied to uncertainty over what systems and platforms will emerge from a combined HPE-Juniper. Mr. Kindness noted in a blog post when the deal was announced that “the journey ahead will be rife with obstacles for Juniper and HPE/Aruba customers alike.” Kindness explained that one important move for HPE would be to “rationalize/optimize the portfolio, the products and the solutions.”
“HPE will try to reassure you that nothing will change; it doesn’t make sense to keep everything, especially the multiple AP [access point] product lines (Instant On, Mist, and Aruba APs), all the routing and switching operating systems (Junos, AOS-CX, and ArubaOS) and both management systems (Central and Mist),” Kindness wrote.
“Though not immediately, products will need to go, and the hardware that stays will need to be changed to accommodate cloud-based management, monitoring, and AI.” HPE CEO Antonio Neri and his management team have attempted to temper these concerns by stating there is virtually no overlap between HPE and Juniper’s product lines, which Kindness said “just boggles my mind.”
Juniper’s AI product, called Marvis (part of the Mist acquisition in 2019), is by far the most advanced AI solution in the networking market. That’s not a profound statement; no vendor has anything close to it. The quick history: Juniper’s acquisition of Mist brought the company a cloud-based Wi-Fi solution with a leading AI capability, Marvis. Juniper quickly started integrating its switching and routing portfolio into Marvis. Walmart, Amazon, and others took notice. Fast-forward to today: This gives HPE Aruba a two-year lead against its competitors by bringing Juniper into the fold.
“I think [Neri’s] got to worry about the financial analyst out there in the stock market or the shareholders to pacify them, and then at the same time you don’t want to scare the bejesus out of your customer base, or Juniper customer base, so you’re going to say that there’s going to be either no overlap or no changes, everything will coexist,” Kindness said.
While overlap and other concerns could alter what a potential Juniper HPE combo looks like, Kindness said he expects the result to lean heavily on Juniper’s telecom and networking assets. That includes HPE products like Aruba networking gear being replaced by Juniper’s artificial intelligence (AI)-focused Mist and Marvis platforms.
“Mist has been really a game changer for the company and just really opened a lot of doors,” Kindness explained. “[Juniper] really did a 180 degree turn when they bought [Mist], and just the revenue that’s brought in and the expansion of the product line itself, and the capabilities of Mist and actually Marvis in the background would be hard for [HPE] to replicate at this point. My perception was HPE looked at it and said, Marvis and Mist is just something that would take too long to get to.” Kindness added that he does not expect significant platform thinning to happen for a couple of years after a potential closing of the deal, but the interim could be filled with challenges tied to channel partners and go-to-market strategies that could chip away at market opportunities similar to what is happening at VMware following the Broadcom acquisition. “Broadcom is ruthless, right or wrong, it’s its business model,” Kindness said. “HPE is not quite that dynamic.”
……………………………………………………………………………………………………………………………………….
Cisco CFO Scott Herren told the audience at a recent investor conference that HPE’s pending Juniper acquisition is causing “uncertainty” in the enterprise WLAN market that could benefit Cisco. “I think for sure that’s created just a degree of uncertainty and a question of, hey, should I consider if I was previously a vendor or a customer of either of those, now is the time to kind of open up and look at other opportunities,” Herren said. “And we’ve seen our wireless business, our orders greater than $1 million grew more than 20% in the fourth quarter.”
Cisco is also working through its own networking drama as part of the vendor’s recently announced restructuring process. Those moves will see Cisco focus more on high-growth areas like AI, security, and cloud at the expense of its legacy operations, including the paring down of its networking product lines.
“It looks like Cisco’s realizing that all the complexity of customer choice and all these variations and offering a zillion features is probably not the way to go. I think Chuck realized it,” Kindness said of Cisco’s efforts. “If you look at the ACI [Application Centric Infrastructure] and Cloud Dashboard for Nexus starting to consolidate, and then the Catalyst line and the Aironet line and the Meraki line are consolidating, it’s just the right move. The market has told them that for the last 10 years, it just took them a while to recognize it.”
References:
https://www.juniper.net/us/en.html
Cisco to lay off more than 4,000 as it shifts focus to AI and Cybersecurity
Cisco restructuring plan will result in ~4100 layoffs; focus on security and cloud based products
FT: New benchmarks for Gen AI models; Neocloud groups leverage Nvidia chips to borrow >$11B
The Financial Times reports that technology companies are rushing to redesign how they test and evaluate their Gen AI models, as current AI benchmarks appear to be inadequate. AI benchmarks are used to assess how well an AI model can generate content that is coherent, relevant, and creative. This can include generating text, images, music, or any other form of content.
OpenAI, Microsoft, Meta and Anthropic have all recently announced plans to build AI agents that can execute tasks for humans autonomously on their behalf. To do this effectively, the AI systems must be able to perform increasingly complex actions, using reasoning and planning.
Current public AI benchmarks — such as HellaSwag and MMLU — use multiple-choice questions to assess common sense and knowledge across various topics. However, researchers argue this method is becoming redundant and that models need to be evaluated on more complex problems.
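To see why multiple-choice benchmarks are easy to saturate, consider how little they actually measure. Below is a minimal sketch of an MMLU-style scoring harness; `model_answer` is a hypothetical stand-in for whatever API call returns the model’s letter choice:

```python
# Minimal sketch of MMLU-style multiple-choice scoring. The whole test
# reduces to exact-match accuracy on a single letter, which is why such
# benchmarks say little about multi-step, agentic behavior.

questions = [
    {"prompt": "Which planet is largest? A) Mars B) Jupiter C) Venus D) Earth",
     "answer": "B"},
    # ... the real benchmark has thousands of items across 57 subjects
]

def model_answer(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call; returns A/B/C/D."""
    return "B"  # placeholder

correct = sum(model_answer(q["prompt"]) == q["answer"] for q in questions)
print(f"Accuracy: {correct / len(questions):.1%}")
```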
“We are getting to the era where a lot of the human-written tests are no longer sufficient as a good barometer for how capable the models are,” said Mark Chen, senior vice-president of research at OpenAI. “That creates a new challenge for us as a research world.”
The SWE-bench Verified benchmark was updated in August to better evaluate autonomous systems, based on feedback from companies including OpenAI. It uses real-world software problems sourced from the developer platform GitHub: the AI agent is supplied with a code repository and an engineering issue and asked to fix it. The tasks require reasoning to complete.
“It is a lot more challenging [with agentic systems] because you need to connect those systems to lots of extra tools,” said Jared Kaplan, chief science officer at Anthropic.
“You have to basically create a whole sandbox environment for them to play in. It is not as simple as just providing a prompt, seeing what the completion is and then evaluating that.”
Another important factor when conducting more advanced tests is to make sure the benchmark questions are kept out of the public domain, in order to ensure the models do not effectively “cheat” by generating the answers from training data, rather than solving the problem.
The need for new benchmarks has also led to efforts by external organizations. In September, the start-up Scale AI announced a project called “Humanity’s Last Exam”, which crowdsourced complex questions from experts across different disciplines that required abstract reasoning to complete.
Meanwhile, the Financial Times recently reported that Wall Street’s largest financial institutions had loaned more than $11bn to “neocloud” groups, backed by their holdings of Nvidia’s AI GPU chips. These companies include names such as CoreWeave, Crusoe and Lambda, and provide cloud computing services to tech businesses building AI products. They have acquired tens of thousands of Nvidia’s graphics processing units (GPUs) through partnerships with the chipmaker. With capital expenditure on data centers surging in the rush to develop AI models, Nvidia’s GPUs have become a precious commodity.
Nvidia’s chips have become a precious commodity in the ongoing race to develop AI models © Marlena Sloss/Bloomberg
…………………………………………………………………………………………………………………………………
The $3tn tech group’s allocation of chips to neocloud groups has given confidence to Wall Street lenders to lend billions of dollars to the companies that are then used to buy more Nvidia chips. Nvidia is itself an investor in neocloud companies that in turn are among its largest customers. Critics have questioned the ongoing value of the collateralised chips as new advanced versions come to market — or if the current high spending on AI begins to retract. “The lenders all coming in push the story that you can borrow against these chips and add to the frenzy that you need to get in now,” said Nate Koppikar, a short seller at hedge fund Orso Partners. “But chips are a depreciating, not appreciating, asset.”
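Koppikar’s depreciation point is simple to illustrate. Here is a minimal sketch of straight-line depreciation on GPU collateral; the purchase price and useful life below are assumptions for illustration, not figures from the article:

```python
# Straight-line depreciation of GPU collateral: the lender's security loses
# book value every year, even before newer chip generations undercut resale
# prices. Both inputs are illustrative assumptions.

purchase_price = 30_000    # $ per GPU, assumed
useful_life_years = 4      # assumed depreciation schedule

annual_writedown = purchase_price / useful_life_years
for year in range(1, useful_life_years + 1):
    book_value = purchase_price - annual_writedown * year
    print(f"End of year {year}: book value ${book_value:,.0f}")
# End of year 4: $0, i.e. the collateral behind the loan is fully written off.
```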
References:
https://www.ft.com/content/866ad6e9-f8fe-451f-9b00-cb9f638c7c59
https://www.ft.com/content/fb996508-c4df-4fc8-b3c0-2a638bb96c19
https://www.ft.com/content/41bfacb8-4d1e-4f25-bc60-75bf557f1f21
Tata Consultancy Services: Critical role of Gen AI in 5G; 5G private networks and enterprise use cases
Reuters & Bloomberg: OpenAI to design “inference AI” chip with Broadcom and TSMC
AI adoption to accelerate growth in the $215 billion Data Center market
AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms
AI winner Nvidia faces competition with new super chip delayed
SK Telecom unveils plans for AI Infrastructure at SK AI Summit 2024
Introduction:
During the two-day SK AI Summit 2024 [1.], SK Telecom CEO Ryu Young-sang unveiled the company’s comprehensive strategy, which revolves around three core components: AI data centers (AIDCs), a cloud-based GPU service (GPU-as-a-Service, GPUaaS), and Edge AI. SK Telecom is planning to construct hyperscale data centers in key regions across South Korea, with the goal of becoming the AIDC hub of the Asia Pacific region. Additionally, the company will launch a cloud-based GPU service to address the domestic GPU shortage and introduce ‘Edge AI’ to bridge the gap between AIDCs and on-device AI. This approach aims to connect national AI infrastructure and expand globally, in collaboration with partners both in South Korea and abroad.
Note 1. The SK AI Summit is an annual event held by the SK Group, where global experts in various AI fields gather to discuss coexistence in the era of artificial general intelligence (AGI) and seek ways to strengthen the ecosystem.
………………………………………………………………………………………………………………………………………………………………………..
Constructing AI Data Centers in South Korea’s key regions:
SK Telecom plans to start with hyperscale AIDCs that require more than 100 megawatts (MW) in local regions, with future plans to expand its scale to gigawatts (GW) or more, to leap forward as the AIDC hub in the Asia Pacific region.
By extending AIDCs to key sites nationwide, the centers can secure a stable power supply through the use of new renewable energy sources such as hydrogen, solar and wind power, and can expand easily to global markets through submarine cables. SK Telecom anticipates building AIDCs cost-effectively by combining SK Group’s capabilities in high-efficiency next-generation semiconductors, immersion cooling and other energy solutions with its AI cluster operations.
Prior to this, SK Telecom plans to open an AIDC testbed in Pangyo, Korea, in December, which combines the capabilities of the SK Group and various solutions owned by partner companies. This facility, where all three types of next-generation liquid cooling solutions—direct liquid cooling, immersion cooling, and precision liquid cooling—are deployed, will be the first and only testbed in Korea. It will also feature advanced AI semiconductors like SK hynix’s HBM, as well as GPU virtualization solutions and AI energy optimization technology. This testbed will provide opportunities to observe and experience the cutting-edge technologies of a future AIDC.
Supplying GPU via cloud to metropolitan areas:
SK Telecom plans to launch a cloud-based GPU-as-a-Service (GPUaaS) by converting the Gasan data center, located in the metropolitan area, into an AIDC to quickly resolve the domestic GPU shortage.
Starting in December, SK Telecom plans to launch a GPUaaS with NVIDIA H100 Tensor Core GPU through a partnership with U.S.-based Lambda. In March 2025, SK Telecom plans to introduce NVIDIA H200 Tensor Core GPU in Korea, gradually expanding to meet customer demand.
Through its AI cloud service (GPUaaS), SKT aims to enable companies to develop AI services easily and at lower cost, without needing to purchase their own GPUs, ultimately supporting the vitalization of Korea’s AI ecosystem.
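The pitch behind “without needing to purchase their own GPUs” is a classic rent-vs-buy trade-off. A minimal sketch with purely illustrative prices (not SKT or Lambda rates):

```python
# Rent-vs-buy break-even for GPU capacity, the basic economics of GPUaaS.
# All prices below are illustrative assumptions, not SKT/Lambda list prices.

buy_price = 30_000     # $ up-front per H100-class GPU, assumed
hourly_rent = 2.50     # $ per GPU-hour on a cloud service, assumed
utilization = 0.5      # fraction of hours the rented GPU is actually busy

hours_per_year = 365 * 24
annual_rent = hourly_rent * hours_per_year * utilization
print(f"Annual rental cost at 50% utilization: ${annual_rent:,.0f}")
print(f"Years of renting to match the purchase price: {buy_price / annual_rent:.1f}")
# Bursty or uncertain workloads favor renting; sustained 24/7 load favors buying.
```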
Introducing ‘Edge AI’ to open a new opportunity in telco infrastructure:
SK Telecom plans to introduce ‘Edge AI,’ which can narrow the gap between AIDC and on-device AI, using the nationwide communication infrastructure.
Edge AI is an infrastructure that combines mobile communication networks and AI computing, offering advantages in reduced latency, enhanced security, and improved privacy compared to large-scale AIDCs. Additionally, it enables large-scale AI computing, complementing the existing AI infrastructure, compared to on-device AI.
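The latency advantage of edge placement can be made concrete with a rough propagation-delay budget; the distances and fiber factor below are assumptions for illustration:

```python
# Rough round-trip propagation delay: why an edge site tens of km away beats
# a remote hyperscale AIDC hundreds of km away. Distances are assumed examples.

SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 1.5   # light in fiber is ~1.5x slower, plus routing slack

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation time in milliseconds over fiber."""
    return 2 * distance_km * FIBER_FACTOR / SPEED_OF_LIGHT_KM_S * 1000

for label, km in [("Edge site", 20), ("Regional AIDC", 300)]:
    print(f"{label} at {km} km: ~{rtt_ms(km):.2f} ms round trip")
# Propagation alone: ~0.2 ms vs ~3 ms; queuing and processing add more on top.
```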
SKT is currently conducting research on advanced technologies and collaborating with global partners to build AIDC-utilizing communication infrastructure and develop customized servers. The company is also carrying out various proof of concept (PoC) projects across six areas, including healthcare, AI robots, and AI CCTV, to discover specialized Edge AI services.
“So far, the competition in telecommunications infrastructure has been all about connectivity, namely speed and capacity, but now the paradigm of network evolution should be changed,” said Ryu Young-sang, CEO of SK Telecom. “The upcoming 6G will evolve into a next-generation AI infrastructure where communication and AI are integrated.”
Developing a comprehensive AIDC solution to enter global market:
SK Telecom plans to develop a comprehensive AIDC solution that combines AI semiconductors, data centers and energy solutions through collaboration with AI companies in Korea and abroad, with the aim of entering the global market. SK Telecom also aims to lead the global standardization of Edge AI and collaborate on advanced technology research, while working towards the transition to 6G AI infrastructure.
………………………………………………………………………………………………………………….
About SK Telecom:
SK Telecom has been leading the growth of the mobile industry since 1984. Now, it is taking customer experience to new heights by extending beyond connectivity. By placing AI at the core of its business, SK Telecom is rapidly transforming into an AI company with a strong global presence. It is focusing on driving innovations in areas of AI Infrastructure, AI Transformation (AIX) and AI Service to deliver greater value for industry, society, and life.
For more information, please contact [email protected] or visit our LinkedIn page www.linkedin.com/company/sk-telecom
………………………………………………………………………………………………………………….
References:
SKT-Samsung Electronics to Optimize 5G Base Station Performance using AI
SK Telecom (SKT) and Nokia to work on AI assisted “fiber sensing”