AI
Nvidia pays $1 billion for a stake in Nokia to collaborate on AI networking solutions
This is not only astonishing but unheard of: the world’s largest and most popular fabless semiconductor company, Nvidia, taking a $1 billion stake in Nokia, a telco equipment maker reinventing itself as a data center connectivity company.
Indeed, GPU king Nvidia will pay $1 billion for a 2.9% stake in Nokia as part of a deal focused on AI and data centers, the Finnish telecom equipment maker said on Tuesday, as its shares hit their highest level in nearly a decade on hopes that AI will lift the company’s revenue and profits. The nonexclusive partnership and the investment will make Nvidia the second-largest shareholder in Nokia. Nokia said it will issue 166,389,351 new shares to Nvidia, which the U.S. company will subscribe to at $6.01 per share.
Nokia said the companies will collaborate on artificial intelligence networking solutions and explore opportunities to include its data center communications products in Nvidia’s future AI infrastructure plans. Nokia and its Swedish rival Ericsson both make networking equipment for connectivity inside (intra-) data centers and between (inter-) data centers and have been benefiting from increased AI use.

Summary:
- NVIDIA and Nokia to establish a strategic partnership to enable accelerated development and deployment of next generation AI native mobile networks and AI networking infrastructure.
- NVIDIA introduces NVIDIA Arc Aerial RAN Computer, a 6G-ready telecommunications computing platform.
- Nokia to expand its global access portfolio with new AI-RAN product based on NVIDIA platform.
- T-Mobile U.S. is working with Nokia and NVIDIA to integrate AI-RAN technologies into its 6G development process.
- Collaboration enables new AI services and improved consumer experiences to support explosive growth in mobile AI traffic.
- Dell Technologies provides PowerEdge servers to power new AI-RAN solution.
- Partnership marks turning point for the industry, paving the way to AI-native 6G by taking AI-RAN to innovation and commercialization at a global scale.
In some respects, this new partnership competes with Nvidia’s own data center connectivity solutions from its Mellanox Technologies division, which it acquired for $6.9 billion in 2019. Meanwhile, Nokia now claims to have worked on a redesign to ensure its RAN software is compatible with Nvidia’s compute unified device architecture (CUDA) platform, meaning it can run on Nvidia’s GPUs. Nvidia has also modified its hardware offering, creating capacity cards that will slot directly into Nokia’s existing AirScale baseband units at mobile sites.
Having dethroned Intel several years ago, Nvidia now has a near-monopoly in supplying GPU chips for data centers and has partnered with companies ranging from OpenAI to Microsoft. AMD is a distant second but is gaining ground in the data center GPU space, while Arm Ltd is gaining share in data center CPUs with its RISC cores. Capital expenditure on data center infrastructure is expected to exceed $1.7 trillion by 2030, according to consulting firm McKinsey, largely because of the expansion of AI.
Nvidia CEO Jensen Huang said the deal would help make the U.S. the center of the next revolution in 6G. “Thank you for helping the United States bring telecommunication technology back to America,” Huang said in a speech in Washington, addressing Nokia CEO Justin Hotard (ex-Intel). “The key thing here is it’s American technology delivering the base capability, which is the accelerated computing stack from Nvidia, now purpose-built for mobile,” Hotard told Reuters in an interview. “Jensen and I have been talking for a little bit and I love the pace at which Nvidia moves,” Hotard said. “It’s a pace that I aspire for us to move at Nokia.” He expects the new equipment to start contributing to revenue from 2027 as it goes into commercial deployment, first with 5G, followed by 6G after 2030.
Nvidia has been on a spending spree in recent weeks. The company in September pledged to invest $5 billion in beleaguered chip maker Intel. The investment pairs the world’s most valuable company, which has been a darling of the AI boom, with a chip maker that has almost completely fallen out of the AI conversation.
Later that month, Nvidia said it planned to invest up to $100 billion in OpenAI over an unspecified period that will likely span at least a few years. The partnership includes plans for an enormous data-center build-out and will allow OpenAI to build and deploy at least 10 gigawatts of Nvidia systems.
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Tech Details:
Nokia uses Marvell Physical Layer [1.] baseband chips for many of its products. Among other things, this ensured Nokia had a single software stack for all its virtual and purpose-built RAN products. Pallavi Mahajan, Nokia’s recently appointed chief technology and AI officer, told Light Reading that Nokia’s software could easily adapt to run on Nvidia’s GPUs: “We built a hardware abstraction layer so that whether you are on Marvell, whether you are on any of the x86 servers or whether you are on GPUs, the abstraction takes away from that complexity, and the software is still the same.”
Fully independent software has been something of a Holy Grail for the entire industry. It would have ramifications for the whole market and its economics. Yet Nokia has conceivably been able to minimize the effort required to put its Layer 1 and specific higher-layer functions on a GPU. “There are going to be pieces of the software that are going to leverage on the accelerated compute,” said Mahajan. “That’s where we will bring in the CUDA integration pieces. But it’s not the entire software,” she added. The appeal of Nvidia as an alternative was largely to be found in “the programmability pieces that you don’t have on any other general merchant silicon,” said Mahajan. “There’s also this entire ecosystem, the CUDA ecosystem, that comes in.” One of Nvidia’s most eye-catching recent moves is the decision to “open source” Aerial, its own CUDA-based RAN software framework, so that other developers can tinker, she says. “What it now enables is the entire ecosystem to go out and contribute.”
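A minimal sketch of the pattern Mahajan describes, with hypothetical names (these are not Nokia APIs): higher-layer RAN code targets one abstract interface, and only a small factory decides whether Marvell silicon, an x86 server, or a CUDA back end does the work.

```python
# Hypothetical sketch of a RAN hardware abstraction layer (HAL).
# All names are illustrative only, not Nokia's actual software; the
# point is one software stack dispatching to different silicon.
from abc import ABC, abstractmethod

class L1Accelerator(ABC):
    """Uniform interface the RAN software codes against."""
    @abstractmethod
    def fec_decode(self, llrs: bytes) -> bytes: ...
    @abstractmethod
    def channel_estimate(self, pilots: bytes) -> bytes: ...

class MarvellBackend(L1Accelerator):
    def fec_decode(self, llrs): return b"decoded-on-marvell"
    def channel_estimate(self, pilots): return b"est-on-marvell"

class CudaBackend(L1Accelerator):
    def fec_decode(self, llrs): return b"decoded-on-gpu"    # would launch a CUDA kernel
    def channel_estimate(self, pilots): return b"est-on-gpu"

def make_backend(target: str) -> L1Accelerator:
    # Higher-layer RAN code never changes; only this factory does.
    return {"marvell": MarvellBackend, "cuda": CudaBackend}[target]()

hal = make_backend("cuda")
print(hal.fec_decode(b"\x01\x02"))
```

The design choice this illustrates is the one Mahajan claims: the CUDA-specific pieces live behind the abstraction, so “it’s not the entire software” that changes when the silicon does.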
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Quotes:
“Telecommunications is a critical national infrastructure — the digital nervous system of our economy and security,” said Jensen Huang, founder and CEO of NVIDIA. “Built on NVIDIA CUDA and AI, AI-RAN will revolutionize telecommunications — a generational platform shift that empowers the United States to regain global leadership in this vital infrastructure technology. Together with Nokia and America’s telecom ecosystem, we’re igniting this revolution, equipping operators to build intelligent, adaptive networks that will define the next generation of global connectivity.”
“The next leap in telecom isn’t just from 5G to 6G — it’s a fundamental redesign of the network to deliver AI-powered connectivity, capable of processing intelligence from the data center all the way to the edge. Our partnership with NVIDIA, and their investment in Nokia, will accelerate AI-RAN innovation to put an AI data center into everyone’s pocket,” said Justin Hotard, President and CEO of Nokia. “We’re proud to drive this industry transformation with NVIDIA, Dell Technologies, and T-Mobile U.S., our first AI-RAN deployments in T-Mobile’s network will ensure America leads in the advanced connectivity that AI needs.”
……………………………………………………………………………………………………………………………………………………………………………………
Editor’s Notes:
1. In more advanced 5G networks, Physical Layer functions have demanded the support of custom silicon, or “accelerators.” A technique known as “lookaside,” favored by Ericsson and Samsung, uses an accelerator for only a single problematic Layer 1 task, forward error correction, and keeps everything else on the CPU. Nokia prefers the “inline” approach, which puts the whole of Layer 1 on the accelerator (see the sketch after these notes).
2. The huge AI-RAN push that Nvidia started with the formation of the AI-RAN Alliance in early 2024 has not met with an enthusiastic telco response so far. Results from Nokia as well as Ericsson show wireless network operators are spending less on 5G rollouts than they were in the early 2020s. Telco numbers indicate that demand for smartphone and other mobile data services has not translated into sales growth. As companies prioritize efficiency above all else, baseband units that include Marvell and Nvidia cards may seem too expensive.
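A minimal sketch contrasting the two acceleration models from Note 1; all names are illustrative, not vendor APIs.

```python
# Hypothetical sketch: "lookaside" vs. "inline" Layer 1 acceleration.
# Stub classes stand in for real CPU and accelerator drivers.

class Cpu:
    def channel_estimate(self, x): return x
    def equalize(self, x): return x
    def demap(self, x): return x

class Accelerator:
    def fec_decode(self, x): return x       # one offloaded kernel
    def run_full_l1(self, x): return x      # entire pipeline on the card

def lookaside(samples):
    # Ericsson/Samsung style: L1 stays on the CPU; only the costly
    # forward-error-correction step is handed to the accelerator.
    cpu, accel = Cpu(), Accelerator()
    return cpu.demap(accel.fec_decode(cpu.equalize(cpu.channel_estimate(samples))))

def inline(samples):
    # Nokia's preference: the whole of Layer 1 runs on the accelerator,
    # avoiding CPU round-trips but tying the software to that hardware.
    return Accelerator().run_full_l1(samples)

print(lookaside(b"iq-samples"), inline(b"iq-samples"))
```

The trade-off Narvinger raises later in this piece falls directly out of this structure: lookaside keeps the software portable across CPUs, while inline buys performance at the cost of hardware lock-in.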
……………………………………………………………………………………………………………………………………………………………………………………….
Other Opinions and Quotes:
Nvidia chips are likely to be more expensive, said Mads Rosendal, analyst at Danske Bank Credit Research, but the proposed partnership would be mutually beneficial, given Nvidia’s large share in the U.S. data center market.
“This is a strong endorsement of Nokia’s capabilities,” said PP Foresight analyst Paolo Pescatore. “Next-generation networks, such as 6G, will play a significant role in enabling new AI-powered experiences,” he added.
Iain Morris, International Editor, Light Reading: “Layer 1 control software runs on ARM RISC CPU cores in both Marvell and Nvidia technologies. The bigger differences tend to be in the hardware acceleration “kernels,” or central components, which have some unique demands. Yet Nokia has been working to put as much as it possibly can into a bucket of common software. Regardless, if Nvidia is effectively paying for all this with its $1 billion investment, the risks for Nokia may be small. … Nokia’s customers will in future have an AI-RAN choice that limits or even shrinks the floorspace for Marvell. The development also points to more subtle changes in Nokia’s thinking. The message earlier this year was that Nokia did not require a GPU to implement AI for RAN, whereby machine-generated algorithms help to improve network performance and efficiency. Marvell had that covered because it had incorporated AI and machine-learning technologies into the baseband chips used by Nokia.”
“If you start doing inline, you typically get much more locked into the hardware,” said Per Narvinger, the president of Ericsson’s mobile networks business group, on a recent analyst call. During its own trials with Nvidia, Ericsson said it was effectively able to redeploy virtual RAN software written for Intel’s x86 CPUs on the Grace CPU with minimal changes, leaving the GPU only as a possible option for the FEC accelerator. Putting the entire Layer 1 on a GPU would mean “you probably also get more tightly into that specific implementation,” said Narvinger. “Where does it really benefit from having that kind of parallel compute system?”
………………………………………………………………………………………………………………………………………………….
Separately, Nokia and Nvidia will partner with T-Mobile U.S. to develop and test AI-RAN technologies for 6G, Nokia said in its press release. Trials are expected to begin in 2026, focused on field validation of performance and efficiency gains for customers.
References:
https://nvidianews.nvidia.com/news/nvidia-nokia-ai-telecommunications
https://www.reuters.com/world/europe/nvidia-make-1-billion-investment-finlands-nokia-2025-10-28/
https://www.lightreading.com/5g/nvidia-takes-1b-stake-in-nokia-which-promises-5g-and-6g-overhaul
https://www.wsj.com/business/telecom/nvidia-takes-1-billion-stake-in-nokia-69f75bb6
Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers
Nvidia’s networking solutions give it an edge over competitive AI chip makers
Telecom sessions at Nvidia’s 2025 AI developers GTC: March 17–21 in San Jose, CA
Nvidia AI-RAN survey results; AI inferencing as a reinvention of edge computing?
FT: Nvidia invested $1bn in AI start-ups in 2024
The case for and against AI-RAN technology using Nvidia or AMD GPUs
Highlights of Nokia’s Smart Factory in Oulu, Finland for 5G and 6G innovation
Nokia & Deutsche Bahn deploy world’s first 1900 MHz 5G radio network meeting FRMCS requirements
Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?
Indosat Ooredoo Hutchison and Nokia use AI to reduce energy demand and emissions
Verizon partners with Nokia to deploy large private 5G network in the UK
Reuters: US Department of Energy forms $1 billion AI supercomputer partnership with AMD
The U.S. Department of Energy has formed a $1 billion partnership with Advanced Micro Devices (AMD) to construct two supercomputers that will tackle large scientific problems ranging from nuclear power to cancer treatments to national security, U.S. Energy Secretary Chris Wright and AMD CEO Lisa Su told Reuters.
The U.S. is building the two machines to ensure the country has enough supercomputers to run increasingly complex experiments that require harnessing enormous amounts of data-crunching capability. The machines can accelerate the process of making scientific discoveries in areas the U.S. is focused on.

U.S. Energy Secretary Wright said the systems would “supercharge” advances in nuclear power, fusion energy, technologies for defense and national security, and the development of drugs. Scientists and companies are trying to replicate nuclear fusion, the reaction that fuels the sun, by jamming light atoms in a plasma gas under intense heat and pressure to release massive amounts of energy. “We’ve made great progress, but plasmas are unstable, and we need to recreate the center of the sun on Earth,” Wright told Reuters.
The more advanced of the two computers, called Discovery, will be based on AMD’s MI430 series of AI chips, which are tuned for high-performance computing.
184K global tech layoffs in 2025 to date; ~27.3% related to AI replacing workers
As of October, over 184,000 global tech jobs had been cut in 2025, according to a report from the Silicon Valley Business Journal. Of those, 50,184 were directly related to businesses implementing artificial intelligence (AI) and automation tools. Silicon Valley’s AI boom has been pummeling headcounts across major companies in the region, and globally. U.S. companies accounted for about 123,000 of the layoffs.
These are the 10 tech companies with the most significant mass layoffs since January 2025:
- Intel: 33,900 layoffs. The company has cited the need to reduce costs and restructure its organization after years of technical and financial setbacks.
- Microsoft: 19,215 layoffs. The tech giant has conducted multiple rounds of cuts throughout the year across various departments as it prioritizes AI investments.
- TCS: 12,000 layoffs. As a major IT firm, Tata Consultancy Services’ cuts largely affected mid-level and senior positions, which are becoming redundant due to AI and evolving client demands.
- Accenture: 11,000 layoffs. The consulting company reduced its headcount as it shifts toward greater automation and AI-driven services.
- Panasonic: 10,000 layoffs. The Japanese manufacturer announced these job cuts as part of a strategy to improve efficiency and focus on core business areas.
- IBM: 9,000 layoffs as part of a restructuring effort to shift some roles to India and align the workforce with areas like AI and hybrid cloud. The layoffs were reportedly concentrated in certain teams, including the Cloud Classic division, and impacted locations such as Raleigh, New York, Dallas, and California.
- Amazon: 5,555 layoffs. Cuts have impacted various areas, including the Amazon Web Services (AWS) cloud unit and the consumer retail business.
- Salesforce: 5,000 layoffs. Many of these cuts impacted the customer service division, where AI agents now handle a significant portion of client interactions.
- STMicro: 5,000 cuts over the next three years, including 2,800 job cuts announced earlier this year. Around 2,000 additional employees will leave the Franco-Italian chipmaker through attrition, bringing the total, including voluntary departures, to 5,000, CEO Jean-Marc Chery said at a June 4th event in Paris hosted by BNP Paribas.
- Meta: 3,720 layoffs. The company has made multiple rounds of cuts targeting “low-performers” and positions within its AI and virtual reality divisions. More details below.
……………………………………………………………………………………………………………………………………………………………………..
In August, Cisco announced layoffs of 221 employees in the San Francisco Bay Area, affecting roles in Milpitas and San Francisco, despite strong financial results and in direct contradiction of the CEO’s earlier statement that the company would not cut jobs in favor of AI. The cuts, which included software engineering roles, are part of the company’s broader strategy to streamline operations and focus on AI.
About two-thirds of all job cuts (roughly 123,000 positions) came from U.S.-based companies, with the remainder spread mainly across Ireland, India and Japan. The report compiles data from WARN notices, TrueUp, TechCrunch and Layoffs.fyi through Oct. 21st.
The report points to three main drivers behind the cuts:
- Shift to AI and automation: Many companies are restructuring their workforce to focus on AI-centric growth and are automating tasks previously done by human workers, particularly in customer service and quality assurance.
- Economic headwinds: Ongoing economic uncertainty, inflation, and higher interest rates are prompting tech companies to cut costs and streamline operations.
- Market corrections: Following a period of rapid over-hiring, many tech companies are now “right-sizing” their staff to become leaner and more efficient.
References:
Report: Broadcom Announces Further Job Cuts as Global Tech Layoffs Approach 185,000 in 2025
Tech layoffs continue unabated: pink slip season in hard-hit SF Bay Area
HPE cost reduction campaign with more layoffs; 250 AI PoC trials or deployments
High Tech Layoffs Explained: The End of the Free Money Party
Massive layoffs and cost cutting will decimate Intel’s already tiny 5G network business
Big Tech post strong earnings and revenue growth, but cuts jobs along with Telecom Vendors
Telecom layoffs continue unabated as AT&T leads the pack – a growth engine with only 1% YoY growth?
Cisco restructuring plan will result in ~4100 layoffs; focus on security and cloud based products
Cloud Computing Giants Growth Slows; Recession Looms, Layoffs Begin
Omdia: How telcos will evolve in the AI era
Dario Talmesio, research director for service provider strategy and regulation at market research firm Omdia (owned by Informa), sees positive signs for network operators.
“After many years of plumbing, now telecom operators are starting to see some of the benefits of their network and beyond-network strategies. Furthermore, the investor community is now appreciating telecom investments, after many years of poor valuation,” he said during his analyst keynote presentation at Network X, a conference organized by Light Reading and Informa in Paris, France last week.
“What has changed in the telecoms industry over the past few years is the fact that we are no longer in a market that is in contraction,” he said. Although telcos are generally not seeing double-digit percentage increases in revenue or profit, “it’s a reliable business … a business that is able to provide cash to investors.”
Omdia forecasts that global telecoms revenue will have a CAGR of 2.8% in the 2025-2030 timeframe. In addition, the industry has delivered two consecutive years of record free cash flow, above 17% of sales.
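As a quick sanity check on what that growth rate implies, compounding 2.8% annually over the five-year window yields only about 15% cumulative revenue growth:

```python
# What Omdia's 2.8% CAGR compounds to over 2025-2030.
cagr, years = 0.028, 5
multiple = (1 + cagr) ** years
print(f"Cumulative growth: {(multiple - 1) * 100:.1f}%")  # ~14.8%
```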
However, Omdia found that telcos have reduced capex, which is trending towards 15% of revenues. Opex fell 0.2% in 2024 and is broadly flatlining. There was a 2.2% decline in global labor opex, reversing the challenging trend of 2023, when labor opex increased by 4% despite notable layoffs.
“Overall, the positive momentum is continuing, but of course there is more work to be done on the efficiency side,” Talmesio said. He added that it is also still too early to say what impact AI investments will have over the longer term. “All the work that has been done so far is still largely preparatory, with visible results expected to materialize in the near(ish) future,” he added. His Network X keynote presentation addressed the following questions:
- How will telcos evolve their operating structures and shift their business focuses in the next 5 years?
- AI, cloud and more to supercharge efficiencies and operating models?
- How will big tech co-opetition evolve and impact traditional telcos?
Customer care was seen as the area first impacted by AI, building on existing GenAI implementations. In contrast, network operations are expected to ultimately see the most significant impact of agentic AI.
Talmesio said many of the building blocks are in place for telecoms services and future revenue generation, with several markets reaching 60% to 70% fiber coverage, and some even approaching 100%.
Network operators are now moving beyond monetizing pure data access and are able to charge more for different gigabit speeds, home gaming, more intelligent home routers and additional WiFi access points, smart home services such as energy, security and multi-room video, and more.
While noting that connectivity remains the most important revenue driver, when contributions from various telecoms-adjacent services are added up “it becomes a significant number,” Talmesio said.
Mobile networks are another important building block. While acknowledging that 5G has been something of a disappointment in the first five years of the deployment cycle, “this is really changing” as more operators deploy 5G standalone (5G SA core) networks, Omdia observed.
Talmesio said: “At the end of June, there were only 66 telecom operators launching or commercially using 5G SA. But those 66 operators are those operators that carry the majority of the world’s 5G subscribers. And with 5G SA, we have improved latency and more devices among other factors. Monetization is still in its infancy, perhaps, but then you can see some really positive progress in 5G Advanced, where as of June, we had 13 commercial networks available with some good monetization examples, including uplink.”
“Telecom is moving beyond telecoms,” with a number of new AI strategies in place. For example, telcos are increasingly providing AI infrastructure in their data centers, offering GPU as-a-service, AI-related colocation, AI-RAN and edge AI functionality.

Dario Talmesio, Omdia
……………………………………………………………………………………………………………………………………………………
AI is also being used for network management, with AI productivity tools and AI digital assistants, as well as AI software services including GenAI products and services for consumer, enterprises and vertical markets.
“There is an additional boost for telecom operators to move beyond connectivity, which is the sovereignty agenda,” Talmesio noted. While sovereignty in the past was largely applied to data residency, “in reality, there are more and more aspects of sovereignty that are in many ways facilitating telecom operators in retaining or entering business areas that probably ten years ago were unthinkable for them.” These include cloud and data center infrastructure, sovereign AI, cyberdefense and quantum safety, satellite communication, data protection and critical communications.
“The telecom business is definitely improving,” Talmesio concluded, noting that the market is now also being viewed more favorably by investors. “In many ways, the glass is maybe still half full, but there’s more water being poured into the telecom industry.”
References:
https://networkxevent.com/speakers/dario-talmesio/
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/pushing-telcos-ai-envelope-on-capital-decisions
Omdia on resurgence of Huawei: #1 RAN vendor in 3 out of 5 regions; RAN market has bottomed
Omdia: Huawei increases global RAN market share due to China hegemony
Dell’Oro & Omdia: Global RAN market declined in 2023 and again in 2024
Omdia: Cable network operators deploy PONs
IBM and Groq Partner to Accelerate Enterprise AI Inference Capabilities
IBM and Groq [1.] today announced a strategic market and technology partnership designed to give clients immediate access to Groq’s inference technology, GroqCloud, within watsonx Orchestrate, providing high-speed AI inference at a cost that helps accelerate agentic AI deployment. As part of the partnership, Groq and IBM plan to integrate and enhance Red Hat’s open source vLLM technology with Groq’s LPU architecture. IBM Granite models are also planned to be supported on GroqCloud for IBM clients.
………………………………………………………………………………………………………………………………………………….
Note 1. Groq is a privately held company founded by Jonathan Ross in 2016. As a startup, its ownership is distributed among its founders, employees, and a variety of venture capital and institutional investors including BlackRock Private Equity Partners. Groq developed the LPU and GroqCloud to make compute faster and more affordable. The company says it is trusted by over two million developers and teams worldwide and is a core part of the American AI Stack.
NOTE: Grok, a conversational AI assistant developed by Elon Musk’s xAI, is a completely different entity.
………………………………………………………………………………………………………………………………………………….
Enterprises moving AI agents from pilot to production still face challenges with speed, cost, and reliability, especially in mission-critical sectors like healthcare, finance, government, retail, and manufacturing. This partnership combines Groq’s inference speed, cost efficiency, and access to the latest open-source models with IBM’s agentic AI orchestration to deliver the infrastructure needed to help enterprises scale.
Powered by its custom LPU, GroqCloud delivers inference that Groq says is over 5X faster and more cost-efficient than traditional GPU systems. The result is consistently low latency and dependable performance, even as workloads scale globally. This is especially powerful for agentic AI in regulated industries.
For example, IBM’s healthcare clients receive thousands of complex patient questions simultaneously. With Groq, IBM’s AI agents can analyze information in real-time and deliver accurate answers immediately to enhance customer experiences and allow organizations to make faster, smarter decisions.
This technology is also being applied in non-regulated industries. IBM clients across retail and consumer packaged goods are using Groq for HR agents to help enhance automation of HR processes and increase employee productivity.

“Many large enterprise organizations have a range of options with AI inferencing when they’re experimenting, but when they want to go into production, they must ensure complex workflows can be deployed successfully to ensure high-quality experiences,” said Rob Thomas, SVP, Software and Chief Commercial Officer at IBM. “Our partnership with Groq underscores IBM’s commitment to providing clients with the most advanced technologies to achieve AI deployment and drive business value.”
“With Groq’s speed and IBM’s enterprise expertise, we’re making agentic AI real for business. Together, we’re enabling organizations to unlock the full potential of AI-driven responses with the performance needed to scale,” said Jonathan Ross, CEO & Founder at Groq. “Beyond speed and resilience, this partnership is about transforming how enterprises work with AI, moving from experimentation to enterprise-wide adoption with confidence, and opening the door to new patterns where AI can act instantly and learn continuously.”
IBM will offer access to GroqCloud starting immediately, and the joint teams will focus on delivering the following capabilities to IBM clients:
- High speed and high-performance inference that unlocks the full potential of AI models and agentic AI, powering use cases such as customer care, employee support and productivity enhancement.
- Security and privacy-focused AI deployment designed to support the most stringent regulatory and security requirements, enabling effective execution of complex workflows.
- Seamless integration with IBM’s agentic product, watsonx Orchestrate, providing clients flexibility to adopt purpose-built agentic patterns tailored to diverse use cases.
The partnership also plans to integrate and enhance Red Hat’s open source vLLM technology with Groq’s LPU architecture to offer different approaches to common AI challenges developers face during inference. The integration is expected to let watsonx leverage GroqCloud’s capabilities in a familiar way, so customers can stay in their preferred tools while accelerating inference. It will address key AI developer needs, including inference orchestration, load balancing, and hardware acceleration, ultimately streamlining the inference process.
Together, IBM and Groq provide enhanced access to the full potential of enterprise AI, one that is fast, intelligent, and built for real-world impact.
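For developers, the most concrete artifact here today is GroqCloud’s OpenAI-compatible API. Below is a minimal sketch of calling it directly with the standard openai Python client; the watsonx Orchestrate wiring is not public in this announcement, and the model id shown is an assumption to be checked against Groq’s current model list.

```python
# Minimal sketch: calling GroqCloud's OpenAI-compatible endpoint with
# the standard openai client. The watsonx Orchestrate integration is
# not shown (details aren't public); the model id is an assumption.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # GroqCloud's OpenAI-compatible API
    api_key="YOUR_GROQ_API_KEY",                # set via an env var in real use
)

resp = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # assumed model id; check Groq's model list
    messages=[{"role": "user", "content": "Give a one-line summary of LPU inference."}],
)
print(resp.choices[0].message.content)
```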
References:
FT: Scale of AI private company valuations dwarfs dot-com boom
AI adoption to accelerate growth in the $215 billion Data Center market
Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?
Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”
Tech firms are spending hundreds of billions of dollars on advanced AI chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines. The final bill may run into the trillions. The financing is coming from venture capital, debt and, lately, some more unconventional arrangements that have raised concerns among top industry executives and financial asset management firms.
At Italian Tech Week in Turin on October 3, 2025, Amazon founder Jeff Bezos said this about artificial intelligence: “This is a kind of industrial bubble, as opposed to financial bubbles.” Bezos differentiated this from “bad” financial or housing bubbles, which cause harm. His comparison of the current AI boom to a historical “industrial bubble” highlights that, while speculative, it is rooted in real, transformative technology.
“It can even be good, because when the dust settles and you see who are the winners, societies benefit from those investors,” Bezos said. “That is what is going to happen here too. This is real, the benefits to society from AI are going to be gigantic.”
He noted that during bubbles, everything (both good and bad investments) gets funded, and investors struggle to tell which is which. “Investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas,” Bezos said of the AI industry. “And that’s also probably happening today,” he added.
- A “good” kind of bubble: He explained that during industrial bubbles, excessive funding flows to both good and bad ideas, making it hard for investors to distinguish between them. However, the influx of capital spurs significant innovation and infrastructure development that ultimately benefits society once the bubble bursts and the strongest companies survive.
- Echoes of the dot-com era: Bezos drew a parallel to the dot-com boom of the 1990s, where many internet companies failed, but the underlying infrastructure—like fiber-optic cable—endured and led to the creation of companies like Amazon.
- Gigantic benefits: Despite the market frothiness, Bezos reiterated that AI is “real” and its benefits to society “are going to be gigantic.”
- Sam Altman (OpenAI): The CEO of OpenAI has stated that he believes “investors as a whole are overexcited about AI.” In August, Altman told reporters the AI market was in a bubble. When bubbles happen, “smart people get overexcited about a kernel of truth,” he warned, drawing parallels with the dot-com boom. Still, he said his personal belief is that “on the whole, this would be a huge net win for the economy.”
- David Solomon (Goldman Sachs): Also speaking at Italian Tech Week, the Goldman Sachs CEO warned that a lot of capital deployed in AI would not deliver returns and that a market “drawdown” could occur.
- Mark Zuckerberg (Meta): Zuckerberg has also acknowledged that an AI bubble exists. The Meta CEO acknowledged that the rapid development of and surging investments in AI stands to form a bubble, potentially outpacing practical productivity and returns and risking a market crash. However, he would rather “misspend a couple hundred billion dollars” on AI development than be late to the technology.
- Morgan Stanley Wealth Management’s chief investment officer, Lisa Shalett, warned that the AI stock boom was showing “cracks” and was likely closer to its end than its beginning. The firm cited concerns over negative free cash flow growth among major AI players and increasing speculative investment. Shalett highlighted that free cash flow growth for the major cloud providers, or “hyperscalers,” has turned negative. This is viewed as a key signal of the AI capital expenditure cycle’s maturity. Some analysts estimate this growth could shrink by about 16% over the next year.

Bezos’s remarks come as some analysts express growing fears of an impending AI market crash.
- Underlying technology is real: Unlike purely speculative bubbles, the AI boom is driven by a fundamental technology shift with real-world applications that will survive any market correction.
- Historical context: Some analysts believe the current AI bubble is on a much larger scale than the dot-com bubble due to the massive influx of investment.
- Significant spending: The level of business spending on AI is already at historic levels and is fueling economic growth, which could cause a broader economic slowdown if it were to crash.
- Potential for disruption: The AI industry faces risks such as diminishing returns for costly advanced models, increased competition, and infrastructure limitations related to power consumption.
Ian Harnett argues that the current bubble may be approaching its “endgame.” He wrote in the Financial Times:
“The dramatic rise in AI capital expenditure by so-called hyperscalers of the technology and the stock concentration in US equities are classic peak bubble signals. But history shows that a bust triggered by this over-investment may hold the key to the positive long-run potential of AI.
Until recently, the missing ingredient was the rapid build-out of physical capital. This is now firmly in place, echoing the capex boom seen in the late-1990s bubble in telecommunications, media and technology stocks. That scaling of the internet and mobile telephony was central to sustaining ‘blue sky’ earnings expectations and extreme valuations, but it also led to the TMT bust.”
Today’s AI capital expenditure (capex) is increasingly being funded by debt, marking a notable shift from previous reliance on cash reserves. While tech giants initially used their substantial cash flows for AI infrastructure, their massive and escalating spending has led them to increasingly rely on external financing to cover costs.
This is especially true of Oracle, which will have to increase its capex by almost $100 billion over the next two years for its deal to build out AI data centers for OpenAI. That’s an annualized growth rate of some 47%, even though Oracle’s free cash flow has already fallen into negative territory for the first time since 1990. According to a recent note from KeyBanc Capital Markets, Oracle may need to borrow $25 billion annually over the next four years. This comes at a time when Oracle is already carrying substantial debt and is highly leveraged. As of the end of August, the company had around $82 billion in long-term debt, with a debt-to-equity ratio of roughly 450%. By comparison, Alphabet, the parent company of Google, reported a ratio of 11.5%, while Microsoft’s stood at about 33%. In July, Moody’s revised Oracle’s credit outlook to negative from stable, while affirming its Baa2 senior unsecured rating. This negative outlook reflects the risks associated with Oracle’s significant expansion into AI infrastructure, which is expected to lead to elevated leverage and negative free cash flow due to high capital expenditures. Caveat Emptor!
References:
https://fortune.com/2025/10/04/jeff-bezos-amazon-openai-sam-altman-ai-bubble-tech-stocks-investing/
https://www.ft.com/content/c7b9453e-f528-4fc3-9bbd-3dbd369041be
Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?
AI Data Center Boom Carries Huge Default and Demand Risks
Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)
Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments
Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?
Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers
RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030
https://fortune.com/2025/09/19/zuckerberg-ai-bubble-definitely-possibility-sam-altman-collapse/
https://finance.yahoo.com/news/why-fears-trillion-dollar-ai-130008034.html
Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project
With sales of Nvidia AI chips restricted in China, Huawei Technologies Inc. plans to make about 600,000 of its 910C Ascend chips next year, roughly double this year’s output, people familiar with the matter told Bloomberg. The China tech behemoth will increase its Ascend product line in 2026 to as many as 1.6 million dies – the basic silicon component that’s packaged as a chip.
Huawei had struggled to get those products to potential customers for much of 2025 because of U.S. sanctions. Yet if Huawei and its partner Semiconductor Manufacturing International Corp. (SMIC) can hit that ambitious AI chip manufacturing target, it would suggest a degree of self-sufficiency that removes some of the bottlenecks that have hindered its AI business.
The projections for 2025 and 2026 include dies that Huawei has in inventory, as well as internal estimates of yields or the rate of failure during production, the people said. Shares in SMIC and rival chipmaker Hua Hong Semiconductor Ltd. gained more than 4% in Hong Kong Tuesday, while the broader market stayed largely unchanged.

Huawei Ascend branding at a trade show in China. Photographer: Ying Tang/Getty Images
Chinese AI companies from Alibaba Group Holding Ltd. to DeepSeek need millions of AI chips to develop and operate AI services. Nvidia alone was estimated to have sold a million H20 chips in 2024.
What Bloomberg Economics Says:
Huawei’s reported plan to double AI-chip output over the next year suggests China is making real progress in working around US export controls. Yet the plan also exposes the limitations imposed by US controls: Node development remains stuck at 7 nanometers, and Huawei will continue to rely on stockpiles of foreign high-bandwidth memory amid a lack of domestic production.
From Beijing’s perspective, Huawei’s production expansion represents another move in an ongoing back-and-forth with the West over semiconductor access and self-sufficiency. The priority remains accelerating indigenization of critical technologies while steadily pushing back against Western controls.
– Michael Deng, analyst
While Huawei’s new AI silicon promises massive performance gains, it has several shortcomings, especially the lack of a developer community comparable to Nvidia’s CUDA ecosystem. A Chinese tech executive said Nvidia’s biggest advantage wasn’t its advanced chips but the ecosystem built around CUDA, its parallel computing architecture and programming model. The exec called for the creation of a Chinese version of CUDA that can be used worldwide.
Also, Huawei is playing catch-up by progressively going open source. It announced last month that CANN (its AI training toolkit for Ascend), its Mind development environment and its Pangu models would all be open source by year-end.
Huawei chairman Eric Xu said in an interview the company had given the “ecosystem issue” a great deal of thought and regarded the transition to open source as a long-term project. “Why keep it hidden? If it’s widely used, an ecosystem will emerge; if it’s used less, the ecosystem will disappear,” he said.
………………………………………………………………………………………………………………………………………………………………………
At its customer event in Shanghai last month, Huawei revealed that it planned to spend 15 billion Chinese yuan (US$2.1 billion) annually over the next five years on ecosystem development and open source computing.
Xu announced a series of new Ascend chips – the 950, 960 and 970 – to be rolled out over the next three years. He foreshadowed a new series of massive Atlas SuperPoD clusters – each one a single logical machine made up of multiple physical devices that can work together – and also announced Huawei’s unified bus interconnect protocol, which allows customers to stitch together compute power across multiple data centers.
Xu acknowledged that Huawei’s single Ascend chips could not match Nvidia’s, but said the SuperPoDs were currently the world’s most powerful and will remain so “for years to come.” But the scale of the SuperPoD architecture points to its other shortcoming: the power consumption of these giant compute arrays.
………………………………………………………………………………………………………………………………………………………………………….
Separately, OpenAI has made huge memory chip agreements with South Korea’s SK Hynix and Samsung, the world’s two biggest semiconductor memory manufacturers. The partnership, aimed at locking up HBM (High Bandwidth Memory) [1.] chip supply for the $400 billion Stargate AI infrastructure project, is estimated to be worth more than 100 trillion Korean won (US$71.3 billion) for the Korean chipmakers over the next four years. The two companies say they are targeting 900,000 DRAM wafer starts per month, more than double the current global HBM capacity.
Note 1. HBM is a specialized type of DRAM that uses a 3D vertical stacking architecture and Through-Silicon Via (TSV) technology to achieve significantly higher bandwidth and performance than traditional, flat DRAM configurations: standard DRAM dies are stacked vertically and connected by TSVs, yielding a densely packed, high-performance memory solution for demanding applications like AI and high-performance computing.
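To make the note concrete, a back-of-envelope calculation using HBM3-generation figures (these numbers are an illustration, not terms of the deal above): a stack with a 1024-bit interface running at 6.4 Gb/s per pin delivers roughly 819 GB/s.

```python
# Back-of-envelope HBM3 per-stack bandwidth (illustrative HBM3-generation
# figures; assumptions, not deal terms from the OpenAI/Korea agreements).
interface_bits = 1024   # width of one HBM3 stack interface, in pins
gbps_per_pin = 6.4      # per-pin data rate, Gb/s
bandwidth_gbytes = interface_bits * gbps_per_pin / 8
print(f"~{bandwidth_gbytes:.0f} GB/s per stack")  # ~819 GB/s
```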
…………………………………………………………………………………………………………………………………………………………………………….
“These partnerships will focus on increasing the supply of advanced memory chips essential for next-generation AI and expanding data center capacity in Korea, positioning Samsung and SK as key contributors to global AI infrastructure and supporting Korea’s ambition to become a top-three global AI nation,” OpenAI said.
The announcement followed a meeting between President Lee Jae-myung, Samsung Electronics Executive Chairman Jay Y. Lee, SK Chairman Chey Tae-won, and OpenAI CEO Sam Altman at the Presidential Office in Seoul.
Through these partnerships, Samsung Electronics and SK hynix plan to scale up production of advanced memory chips, targeting 900,000 DRAM wafer starts per month at an accelerated capacity rollout, critical for powering OpenAI’s advanced AI models.
OpenAI also signed a series of agreements today to explore developing next-generation AI data centers in Korea. These include a Memorandum of Understanding (MoU) with the Korean Ministry of Science and ICT (MSIT) specifically to evaluate opportunities for building AI data centers outside the Seoul Metropolitan Area, supporting balanced regional economic growth and job creation across the country.
The agreements signed today also include a separate partnership with SK Telecom to explore building an AI data center in Korea, as well as an agreement with Samsung C&T, Samsung Heavy Industries, and Samsung SDS to assess opportunities for additional data center capacity in the country.
References:
https://www.lightreading.com/ai-machine-learning/huawei-sets-itself-as-china-s-go-to-for-ai-tech
OpenAI orders $71B in Korean memory chips
AI Data Center Boom Carries Huge Default and Demand Risks
U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China
Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system
China gaining on U.S. in AI technology arms race- silicon, models and research
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?
Despite U.S. sanctions, Huawei has come “roaring back,” due to massive China government support and policies
Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)
Big tech’s massive spending on AI data centers: echoes of the dot-com fiber optic buildout
Big Tech plans to spend between $364 billion and $400 billion on AI data centers, specialized AI hardware like GPUs, and supporting cloud computing/storage capacity. The final 2Q 2025 GDP report, released last week, reveals a surge in data center infrastructure spending from $9.5 billion in early 2020 to $40.4 billion in the second quarter of 2025, largely due to an unprecedented investment boom driven by artificial intelligence (AI) and cloud computing. The increase highlights a monumental shift in capital expenditure by major tech companies.
Yet there are huge uncertainties about how far AI will transform scientific discovery and hypercharge technological advances. Tech financial analysts worry that enthusiasm for AI has turned into a bubble reminiscent of the mania around the internet infrastructure build-out of 1998-2000. During that period, telecom network providers spent over $100 billion blanketing the country with fiber optic cables, based on the belief that the internet’s growth would be so explosive that such massive investments were justified. The “talk of the town” during those years was the “All Optical Network,” with ultra-long-haul optical transceivers, photonic switches and optical add/drop multiplexers. Twenty-seven years later, it still has not been realized anywhere in the world.
The resulting massive optical network overbuilding made telecom the hardest hit sector of the dot-com bust. Industry giants toppled like dominoes, including Global Crossing, WorldCom, Enron, Qwest, PSI Net and 360Networks.
However, a key difference between then and now is that today’s tech giants (e.g. hyperscalers) produce far more cash than the fiber builders of the 1990s did. Also, AI is immediately available to anyone who has a high-speed internet connection (via desktop, laptop, tablet or smartphone), unlike the late 1990s, when internet users (consumers and businesses) had to obtain high-speed wireline access via cable modems, DSL or (in very few areas) fiber to the premises.
……………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Today, the once boring world of chips and data centers has become a raging multi-hundred-billion-dollar battleground where Silicon Valley giants attempt to one-up each other with spending commitments and sci-fi names. Meta CEO Mark Zuckerberg teased his planned “Hyperion” mega-data center with a social-media post showing it would be the size of a large chunk of Manhattan.
OpenAI’s Sam Altman calls his data-center effort “Stargate,” a reference to the 1994 movie about an interstellar portal. Company executives this week laid out plans that would require at least $1 trillion in data-center investments, and Altman recently committed the company to pay Oracle an average of approximately $60 billion a year for AI compute servers in data centers in coming years. That’s despite the fact that Oracle is not a major cloud service provider and that OpenAI will not have the cash on hand to pay Oracle.
In fact, OpenAI is on track to realize just $13 billion in revenue from all its paying customers this year and won’t be profitable till at least 2029 or 2030. The company projects its total cash burn will reach $115 billion by 2029. The majority of its revenue comes from subscriptions to premium versions of ChatGPT, with the remainder from selling access to its models via its API. Although roughly 700 million people (9% of the world’s population) are weekly users of ChatGPT (as of August, up from 500 million in March), it’s estimated that over 90% use the free version. Also this past week:
- Nvidia plans to invest up to $100 billion to help OpenAI build data center capacity with millions of GPUs.
- OpenAI revealed an expanded deal with Oracle and SoftBank, scaling its “Stargate” project to a $400 billion commitment across multiple phases and sites.
- OpenAI deepened its enterprise reach with a formal integration into Databricks, signaling a new phase in its push for commercial adoption.
Nvidia is supplying capital and chips. Oracle is building the sites. OpenAI is anchoring the demand. It’s a circular economy that could come under pressure if any one player falters. And while the headlines came fast this week, the physical buildout will take years to deliver — with much of it dependent on energy and grid upgrades that remain uncertain.
Another AI darling is CoreWeave, a company that provides GPU-accelerated cloud computing platforms and infrastructure. From its founding in 2017 until its pivot to cloud computing in 2019, CoreWeave was an obscure cryptocurrency miner with fewer than two dozen employees. Flooded with money from Wall Street and private-equity investors, it has morphed into a computing goliath with a market value bigger than General Motors or Target.
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Massive AI infrastructure spending will require tremendous AI revenue for payback:
David Cahn, a partner at venture-capital firm Sequoia, estimates that the money invested in AI infrastructure in 2023 and 2024 alone requires consumers and companies to buy roughly $800 billion in AI products over the life of these chips and data centers to produce a good investment return. Analysts believe most AI processors have a useful life of between three and five years.
This week, consultants at Bain & Co. estimated the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030. By comparison, that is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.
Morgan Stanley estimates that last year there was around $45 billion of revenue for AI products. The sector makes money from a combination of subscription fees for chatbots such as ChatGPT and money paid to use these companies’ data centers. How the tech sector will cover the gap is “the trillion dollar question,” said Mark Moerdler, an analyst at Bernstein. Consumers have been quick to use AI, but most are using free versions, Moerdler said. Businesses have been slow to spend much on AI services, except for the roughly $30 a month per user for Microsoft’s Copilot or similar products. “Someone’s got to make money off this,” he said.
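To see the gap Moerdler describes in one place, here is the arithmetic implied by the figures quoted above (Sequoia’s roughly $800 billion required return, a three-to-five-year chip life, and Morgan Stanley’s ~$45 billion of 2024 AI revenue); the four-year midpoint is our assumption:

```python
# Payback arithmetic using only figures cited in this section.
required_return = 800e9       # Sequoia/Cahn: revenue needed from 2023-24 capex, $
chip_life_years = 4           # assumed midpoint of the 3-5 year useful life
ai_revenue_2024 = 45e9        # Morgan Stanley estimate, $/yr

needed_per_year = required_return / chip_life_years
print(f"Needed: ~${needed_per_year / 1e9:.0f}B/yr; actual: ~${ai_revenue_2024 / 1e9:.0f}B/yr")
print(f"Shortfall: ~{needed_per_year / ai_revenue_2024:.1f}x")
# ~$200B/yr required vs ~$45B/yr actual (roughly 4.4x), before Bain's far
# larger $2 trillion/yr 2030 requirement is even considered.
```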
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Why this time is different (?):
AI cheerleaders insist that this boom is different from the dot-com era. If AI continues to advance to the point where it can replace a large swath of white collar jobs, the savings will be more than enough to pay back the investment, backers argue. AI executives predict the technology could add 10% to global GDP in coming years.
“Training AI models is a gigantic multitrillion dollar market,” Oracle chairman Larry Ellison told investors this month. The market for companies and consumers using AI daily “will be much, much larger.”
The financing behind the AI build-out is complex, with debt layered on at nearly every level of the AI ecosystem, from the large tech giants to smaller cloud providers and specialized hardware firms. This “debt-fueled arms race” involves large technology companies, startups, and private credit firms seeking innovative ways to fund the development of data centers and acquire powerful hardware, such as Nvidia GPUs.
Alphabet, Microsoft, Amazon, Meta and others create their own AI products, and sometimes sell access to cloud-computing services to companies such as OpenAI that design AI models. The four “hyperscalers” alone are expected to spend nearly $400 billion on capital investments next year, more than the cost of the Apollo space program in today’s dollars. Some build their own data centers, and some rely on third parties to erect the mega-size warehouses tricked out with cooling equipment and power.
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
Echoes of bubbles past:
History is replete with technology bubbles that pop. Optimism over an invention—canals, electricity, railroads—prompts an investor stampede premised on explosive growth. Overbuilding follows, and investors eat giant losses, even when a new technology permeates the economy. Predicting when a boom turns into a bubble is notoriously hard. Many inflate for years. Some never pop, and simply stagnate.
The U.K.’s 19th-century railway mania was so large that over 7% of the country’s GDP went toward blanketing the country with rail. Between 1840 and 1852, the railway system nearly quintupled to 7,300 miles of track, but it only produced one-fourth of the revenue builders expected, according to Andrew Odlyzko, PhD, an emeritus University of Minnesota mathematics professor who studies bubbles. He calls the unbridled optimism in manias “collective hallucinations,” where investors, society and the press follow herd mentality and stop seeing risks.
He knows from firsthand experience as a researcher at Bell Labs in the 1990s. Then, telecom giants and upstarts raced to speculatively plunge tens of millions of miles of fiber cables into the ground, spending the equivalent of around 1% of U.S. GDP over half a decade.
Backers compared the effort to the highway system, to the advent of electricity and to discovering oil. The prevailing belief at the time, he said, was that internet use was doubling every 100 days. But in reality, for most of the 1990s boom, traffic doubled every year, Odlyzko found.
The force of the mania led executives across the industry to focus on hype more than unfavorable news and statistics, pouring money into fiber until the bubble burst.
“There was a strong element of self interest,” as companies and executives all stood to benefit financially as long as the boom continued, Odlyzko said. “Cautionary signs are disregarded.”
Kevin O’Hara, a co-founder of upstart fiber builder Level 3, said banks and stock investors were throwing money at the company, and executives believed demand would rocket upward for years. Despite worrying signs, executives focused on the promise of more traffic from uses like video streaming and games.
“It was an absolute gold rush,” he said. “We were spending about $110 million a week” building out the network.
When reality caught up, Level 3’s stock dropped 95%, while giants of the sector went bust. Much of the fiber sat unused for over a decade. Ultimately, the growth of video streaming and other uses in the early 2010s helped soak up much of the oversupply.
Worrying signs:
There are growing, worrying signs that the optimism about AI won’t pan out.
- MIT Media Lab (2025): The “State of AI in Business 2025” report found that 95% of custom enterprise AI tools and pilots fail to produce a measurable financial impact or reach full-scale production. The primary issue identified was a “learning gap” among leaders and organizations, who struggle to properly integrate AI tools and redesign workflows to capture value.
- A University of Chicago economics paper found AI chatbots had “no significant impact on workers’ earnings, recorded hours, or wages” at 7,000 Danish workplaces.
- Gartner (2024–2025): The research and consulting firm has reported that 85% of AI initiatives fail to deliver on their promised value. Gartner also predicts that by the end of 2025, 30% of generative AI projects will be abandoned after the proof-of-concept phase due to issues like poor data quality, lack of clear business value, and escalating costs.
- RAND Corporation (2024): In its analysis, RAND confirmed that the failure rate for AI projects is over 80%, which is double the failure rate of non-AI technology projects. Cited obstacles include cost overruns, data privacy concerns, and security risks.
OpenAI’s release of GPT-5 in August was widely viewed as an incremental improvement, not the game-changing thinking machine many expected. Given the high cost of developing it, the release fanned concerns that generative AI models are improving at a slower pace than expected. Each new model generation (GPT-4, then GPT-5) costs significantly more than the last to train and release, often three to five times as much as its predecessor, AI executives say. That means the payback has to be even higher to justify the spending.
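To see how quickly those training costs compound, here is a minimal sketch; only the 3x–5x per-generation multiplier comes from the executives quoted above, while the $100 million base cost and three-generation horizon are hypothetical placeholders:

```python
# Hypothetical illustration of per-generation training-cost compounding.
# Only the 3x-5x multiplier is from the article; the base cost is invented.

base_cost_usd = 100e6  # assume a $100M "generation 0" training run

for multiplier in (3, 5):
    cost = base_cost_usd
    print(f"--- {multiplier}x cost growth per generation ---")
    for generation in range(1, 4):
        cost *= multiplier
        print(f"generation {generation}: ${cost / 1e9:.1f}B to train")
```

At the high end, three generations turn a $100 million run into a $12.5 billion one, which is why the required payback rises so steeply.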
Another hurdle: The chips in the data centers won’t be useful forever. Unlike the dot-com boom’s fiber cables, the latest AI chips rapidly depreciate in value as technology improves, much like an older model car. And they are extremely expensive.
“This is bigger than all the other tech bubbles put together,” said Roger McNamee, co-founder of tech investor Silver Lake Partners, who has been critical of some tech giants. “This industry can be as successful as the most successful tech products ever introduced and still not justify the current levels of investment.”
Other challenges include the growing strain on global supply chains, especially for chips, power and infrastructure. As for economy-wide gains in productivity, few of the biggest listed U.S. companies are able to describe how AI is changing their businesses for the better. Equally striking is the minimal euphoria some Big Tech companies display in their regulatory filings. Meta’s 10-K filing last year reads: “[T]here can be no assurance that the usage of AI will enhance our products or services or be beneficial to our business, including our efficiency or profitability.” That is a very shaky basis on which to conduct a $300bn capex splurge.
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
Conclusions:
Big tech spending on AI infrastructure has been propping up the U.S. economy, with some projections indicating it could fuel nearly half of U.S. GDP growth in 2025. However, this contribution stems primarily from capital expenditures, and the long-term economic impact is still being debated. George Saravelos of Deutsche Bank notes that economic growth is not coming from AI itself but from building the data centers that generate AI capacity.
Once those AI factories have been built, with the needed power supplies and cooling, will the productivity gains from AI finally be realized? How widely will those benefits be disseminated globally? Finally, what will be the return on investment (ROI) for big-spending AI companies like the hyperscalers, OpenAI and other AI players?
………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128
https://www.ft.com/content/6c181cb1-0cbb-4668-9854-5a29debb05b1
https://www.cnbc.com/2025/09/26/openai-big-week-ai-arms-race.html
Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments
AI Data Center Boom Carries Huge Default and Demand Risks
AI spending is surging; companies accelerate AI adoption, but job cuts loom large
Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?
Canalys & Gartner: AI investments drive growth in cloud infrastructure spending
AI wave stimulates big tech spending and strong profits, but for how long?
AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms
OpenAI partners with G42 to build giant data center for Stargate UAE project
Big Tech and VCs invest hundreds of billions in AI while salaries of AI experts reach the stratosphere
Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers
Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers
Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections
OpenAI and Broadcom in $10B deal to make custom AI chips
Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers
Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers
Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers
Lumen: “We’re Building the Backbone for the AI Economy” – NaaS platform to be available to more customers
Initiatives and Analysis: Nokia focuses on data centers as its top growth market
SK Telecom forms AI CIC in-house company to pursue internal AI innovation
SK Telecom (SKT) is establishing an in-house independent company (CIC) that consolidates its artificial intelligence (AI) capabilities. Through AI CIC, SK Telecom plans to invest approximately 5 trillion won (US$3.5 billion) in AI over the next five years and achieve annual sales of over 5 trillion won ($3.5 billion) by 2030.
On September 25th, SK Telecom CEO Ryu Young-sang held a town hall meeting for all employees at the SKT Tower Supex Hall in Jung-gu, Seoul, announcing the launch of AI CIC to pursue rapid AI innovation. Ryu will concurrently serve as the CEO of AI CIC. SK Telecom plans to unveil detailed organizational restructuring plans for AI CIC at the end of October this year.
“We are launching AI CIC, a streamlined organizational structure, and will simultaneously pursue internal AI innovation, including internal systems, organizational culture, and enhancing employees’ AI capabilities. We will grow AI CIC to be the main driver of SK’s AI business and, furthermore, the core that leads the AI business for the entire SK Group. The AI CIC will establish itself as South Korea’s leading AI business operator in all fields of AI, including services, platforms, AI data centers and proprietary foundation models,” Ryu said.
The newly established AI CIC will be responsible for all the company’s AI-related functions and businesses. It is expected that SK Telecom’s business will be divided into mobile network operations (MNO) and AI, with AI CIC consolidating related businesses to enhance operational efficiency. Furthermore, AI CIC will actively participate in government-led AI projects, contributing to the establishment of a government-driven AI ecosystem. SKT said that reorganizing its services under one umbrella will “drive AI innovation that enhances business productivity and efficiency.”
“Through this (AI CIC), we will play a central role in building a domestic AI-related ecosystem and become a company that contributes to the success of the national AI strategy,” Ryu said.
By integrating and consolidating dispersed AI technology assets, SKT plans to strengthen the role of the “AI platform” that supports AI technology and operations across the entire SK Group, including SKT, and to pursue a strategy of securing flexible “AI models” that can respond to the diverse AI needs of government, industry, and the private sector.
In addition, SKT will accelerate R&D in future growth areas such as digital twins and robotics, and will expand domestic and international partnerships based on its full-stack AI capabilities.

Ryu Young-sang, CEO of SK Telecom, unveils the plans for the AI CIC
CEO Ryu said, “Over the past three years of transformation into an AI company, SK Telecom has secured achievements such as reaching 10 million Adot (AI-enabled service) subscribers, being selected to develop an independent AI foundation model, launching the Ulsan AI DC, and establishing global partnerships, laying the foundation for future leaps forward. We will achieve another AI innovation centered on the AI CIC to restore the trust of customers and the market and advance into a global AI company.”
………………………………………………………………………………………………………………………………………………………………………………………………………
References:
https://www.businesskorea.co.kr/news/articleView.html?idxno=253124
SKT-Samsung Electronics to Optimize 5G Base Station Performance using AI
SK Telecom unveils plans for AI Infrastructure at SK AI Summit 2024
SK Telecom (SKT) and Nokia to work on AI assisted “fiber sensing”
SK Telecom and Singtel partner to develop next-generation telco technologies using AI
SK Telecom, DOCOMO, NTT and Nokia develop 6G AI-native air interface
South Korea has 30 million 5G users, but did not meet expectations; KT and SKT AI initiatives
AI Data Center Boom Carries Huge Default and Demand Risks
“How does the digital economy exist?” asked John Medina, a senior vice president at Moody’s, who specializes in assessing infrastructure investments. “It exists on data centers.”
New investments in data centers to power AI are projected to reach $3 trillion to $4 trillion by 2030, according to Nvidia. Other estimates suggest the investment needed to keep pace with AI demand could be as high as $7 trillion by 2030, according to McKinsey. This massive spending is already having a significant economic impact: some analyses indicate that AI data center expenditure contributed more to U.S. GDP growth in 2025 than all consumer spending.
U.S. data center demand, driven largely by AI, could triple by 2030, according to McKinsey; keeping up would require nearly $7 trillion in investment (a rough arithmetic sketch of what that tripling implies follows the list below). OpenAI, SoftBank and Oracle recently announced a pact to invest $500 billion in AI infrastructure through 2029. Meta and Alphabet are also investing billions. Merely saying “please” and “thank you” to a chatbot eats up tens of millions of dollars in processing power, according to OpenAI’s chief executive, Sam Altman.
- OpenAI, SoftBank, and Oracle pledging to invest $500 billion in AI infrastructure through 2029.
- Nvidia and Intel collaborating to develop AI infrastructure, with Nvidia investing $5 billion in Intel stock.
- Microsoft spending $4 billion on a second data center in Wisconsin.
- Amazon planning to invest $20 billion in Pennsylvania for AI infrastructure.
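As noted above, here is a rough arithmetic sketch of what “tripling by 2030” implies; the 2024 baseline year is an assumption, since McKinsey’s exact window is not stated here:

```python
# What does "U.S. data center demand could triple by 2030" imply?
# Assumption: growth is measured from a 2024 baseline (not stated above).

baseline_year, target_year = 2024, 2030
growth_multiple = 3.0  # "triple"

years = target_year - baseline_year
cagr = growth_multiple ** (1 / years) - 1
print(f"Tripling over {years} years implies ~{cagr:.1%} compound annual growth")
# -> roughly 20% per year, sustained for six consecutive years
```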

Compute and Storage Servers within an AI Data Center. Photo credit: iStock quantic69
The spending frenzy comes with a big default risk. According to Moody’s, structured finance has become a popular way to pay for new data center projects, with more than $9 billion of issuance in the commercial mortgage-backed security and asset-backed security markets during the first four months of 2025. Meta, for example, tapped the bond manager Pimco to issue $26 billion in bonds to finance its data center expansion plans.
As more debt enters these data center build-out transactions, analysts and lenders are putting more emphasis on lease terms for third-party developers. “Does the debt get paid off in that lease term, or does the tenant’s lease need to be renewed?” Medina of Moody’s said. “What we’re seeing often is there is lease renewal risk, because who knows what the markets or what the world will even be like from a technology perspective at that time.”
Even if AI proliferates, demand for processing power may not. Chinese technology company DeepSeek has demonstrated that AI models can produce reliable outputs with less computing power. As AI companies make their models more efficient, data center demand could drop, making it much harder to turn investments in AI infrastructure into profit. After Microsoft backed out of a $1 billion data center investment in March, UBS wrote that the company, which has lease obligations of roughly $175 billion, most likely overcommitted.
Some worry costs will always be too high for profits. In a blog post on his company’s website, Harris Kupperman, a self-described boomer investor and the founder of the hedge fund Praetorian Capital, laid out his bearish case on AI infrastructure. Because the buildings need upkeep and the chips and other technology will continually evolve, he argued that data centers will depreciate faster than they can generate revenue.
“Even worse, since losing the AI race is potentially existential, all future cash flow, for years into the future, may also have to be funneled into data centers with fabulously negative returns on capital,” he added. “However, lighting hundreds of billions on fire may seem preferable to losing out to a competitor, despite not even knowing what the prize ultimately is.”
It’s not just Silicon Valley with skin in the game. State budgets are being upended by tax incentives given to developers of AI data centers. According to Good Jobs First, a nonprofit that promotes corporate and government accountability in economic development, at least 10 states so far have each lost more than $100 million per year in tax revenue to data centers. But the true monetary impact may never be known: over one-third of the states that offer tax incentives for data centers do not disclose aggregate revenue losses.
Local governments are also heralding the expansion of energy infrastructure to support the surge of data centers. Phoenix, for example, is expected to grow its data center power capacity by over 500 percent in the coming years — enough power to support over 4.3 million households. Virginia, which has more than 50 new data centers in the works, has contracted the state’s largest utility company, Dominion, to build 40 gigawatts of additional capacity to meet demand — triple the size of the current grid.
The stakes extend beyond finance. The big bump in data center activity has been linked to distorted residential power readings across the country. And according to the International Energy Agency, a 100-megawatt data center that uses water to cool its servers consumes roughly two million liters of water per day, equivalent to the daily use of about 6,500 households. That strains the water supply of nearby residential communities, a majority of which, according to Bloomberg News, already face high levels of water stress.
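The household comparison implied by those IEA figures is easy to check; the sketch below uses only the two quantities quoted above:

```python
# Sanity-check the IEA comparison: ~2 million liters/day for a 100 MW
# data center, said to equal the consumption of ~6,500 households.

liters_per_day = 2_000_000
households = 6_500

per_household = liters_per_day / households
print(f"Implied household consumption: ~{per_household:.0f} liters/day")
# ~308 liters/day per household, so the two quoted figures are consistent
```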
“I think we’re in that era right now with AI models where it’s just who can make the bigger and better one,” said Vijay Gadepally, a senior scientist at the Lincoln Laboratory Supercomputing Center at the Massachusetts Institute of Technology. “But we haven’t actually stopped to think about, Well, OK, is this actually worth it?”
References:
What Wall Street Sees in the Data Center Boom – The New York Times