AI spending boom accelerates: Big tech to invest an aggregate of $400 billion in 2025; much more in 2026!

The biggest U.S. mega-cap tech companies are on track to invest an aggregate of $400 billion into artificial intelligence (AI) initiatives this year, a commitment they collectively indicate “is nowhere near enough.”  Meta, Alphabet, Microsoft, and Amazon all have announced further AI spending increases in 2026. The investment community reacted favorably to the plans presented by Google and Amazon late this past week, though some apprehension was noted regarding the strategies outlined by Meta and Microsoft.

  • Meta Platforms says it continues to experience capacity constraints as it simultaneously trains new AI models and supports existing product infrastructure. Meta CEO Mark Zuckerberg described an insatiable appetite for computing resources that Meta must work to satisfy to ensure it remains a leader in a fast-moving AI race. “We want to make sure we’re not underinvesting,” he said on an earnings call with analysts Wednesday after posting third-quarter results. Meta signaled in the earnings report that capital expenditures would be “notably larger” next year than in 2025, during which Meta expects to spend as much as $72 billion. Zuckerberg indicated that the company’s existing advertising business and platforms are operating in a “compute-starved state” because Meta is allocating more resources toward AI research and development than toward bolstering existing operations.
  • Microsoft reported substantial customer demand for its data-center-driven services, prompting plans to double its data center footprint over the next two years. Concurrently, Amazon is working aggressively to deploy additional cloud capacity to meet demand.  Amy Hood, Microsoft’s chief financial officer, said: “We’ve been short [on computing power] now for many quarters. I thought we were going to catch up. We are not. Demand is increasing.” She further elaborated, “When you see these kinds of demand signals and we know we’re behind, we do need to spend.”
  • Alphabet (Google’s parent company) reported that capital expenditures will jump from $85 billion to between $91 billion and $93 billion. Google CFO Anat Ashkenazi said the investments are already yielding returns: “We already are generating billions of dollars from AI in the quarter. But then across the board, we have a rigorous framework and approach by which we evaluate these long-term investments.” 
  • Amazon has not provided a specific total dollar figure for its planned AI investment in 2026. However, the company has announced it expects its total capital expenditures (capex) in 2026 to be even higher than its 2025 projection of $125 billion, with the vast majority of this spending dedicated to AI and related infrastructure for Amazon Web Services (AWS).
  • Apple announced it is also increasing its AI investments, though its overall spending remains far smaller than that of the other tech giants.

As big as the spending projections were this week, they look pedestrian compared with OpenAI, which has announced roughly $1 trillion worth of AI infrastructure deals of late with partners including Nvidia, Oracle and Broadcom.

Despite the big capex tax write-offs (due to the 2025 GOP tax act), there is a large degree of uncertainty about the eventual outcomes of this substantial AI infrastructure spending. The companies themselves, along with numerous AI proponents, assert that these investments are essential for machine-learning systems to achieve artificial general intelligence (AGI), the point at which they match or surpass human intelligence.

………………………………………………………………………………………………………………………………………………………………
Youssef Squali, lead internet analyst at Truist Securities, wrote: “Whoever gets to AGI first will have an incredible competitive advantage over everybody else, and it’s that fear of missing out that all these players are suffering from. It’s the right strategy. The greater risk is to underspend and to be left with a competitive disadvantage.”

Yet skeptics question whether investing billions in large-language models (LLMs), the most prevalent AI system, will ultimately achieve that objective. They also highlight the limited number of paying users for existing technology and the prolonged training period required before a global workforce can effectively utilize it.

During investor calls following the earnings announcements, analysts directed incisive questions at company executives. On Microsoft’s call, one analyst voiced a central market concern, asking: “Are we in a bubble?” Similarly, on the call for Google’s parent company, Alphabet, another analyst questioned: “What early signs are you seeing that gives you confidence that the spending is really driving better returns longer term?”

Bank of America (BofA) credit strategists Yuri Seliger and Sohyun Marie Lee write in a client note that capital spending by five of the Magnificent Seven megacap tech companies (Amazon.com, Alphabet, and Microsoft, along with Meta and Oracle) has been growing even faster than their prodigious cash flows. “These companies collectively may be reaching a limit to how much AI capex they are willing to fund purely from cash flows,” they write. Consensus estimates suggest AI capex will climb to 94% of operating cash flows, minus dividends and share repurchases, in 2025 and 2026, up from 76% in 2024. That’s still less than 100% of cash flows, so the companies don’t need to borrow to fund spending, “but it’s getting close,” they add.
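BofA’s headline ratio is simple to reproduce. The sketch below computes capex as a share of operating cash flow net of dividends and buybacks; the dollar inputs are illustrative placeholders chosen to land near the quoted 94%, not the strategists’ actual figures.

```python
def capex_intensity(capex, op_cash_flow, dividends, buybacks):
    """Capex as a share of operating cash flow after shareholder returns --
    the metric BofA cites (94% projected for 2025-26, up from 76% in 2024)."""
    return capex / (op_cash_flow - dividends - buybacks)

# Hypothetical aggregate figures in $ billions, for illustration only:
ratio = capex_intensity(capex=400, op_cash_flow=560, dividends=60, buybacks=75)
print(f"{ratio:.0%}")  # ~94% -> spending is approaching internally generated cash
```

Once this ratio crosses 100%, capex can no longer be funded from internal cash flow and the gap must be borrowed, which is exactly the dynamic the strategists flag.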

………………………………………………………………………………………………………………………………………………………………….

Big Tech AI Investment Comments and Quotes:

Google, which projected a rise in its full-year capital expenditures from $85 billion to a range of $91 billion to $93 billion, indicated that these investments were already proving profitable.  Google’s Ashkenazi stated: “We already are generating billions of dollars from AI in the quarter. But then across the board, we have a rigorous framework and approach by which we evaluate these long-term investments.”

Microsoft reported that it expects to face capacity shortages that will affect its ability to power both its current businesses and AI research needs until at least the first half of next year. The company noted that its cloud computing division, Azure, is absorbing “most of the revenue impact.”

Amazon informed investors of its expedited efforts to bring new capacity online, citing its ability to immediately monetize these investments.

“You’re going to see us continue to be very aggressive in investing capacity because we see the demand,” said Amazon Chief Executive Andy Jassy. “As fast as we’re adding capacity right now, we’re monetizing it.”

Meta did not provide new details on AI model release or product timelines, nor did it specify when investors might see a broader return on their investments, which unsettled some investors. CEO Zuckerberg told analysts that the company would simply pivot if its spending on achieving AGI is proven incorrect. “I think it’s the right strategy to aggressively front load building capacity. That way, we’re prepared for the most optimistic case. In the worst case, we would just slow building new infrastructure for some period while we grow into what we build.”

Meta’s chief financial officer, Susan Li, stated that the company’s capital expenditures—which have already nearly doubled from last year to $72 billion this year—will grow “notably larger” in 2026, though specific figures were not provided. Meta brought this year’s biggest investment-grade corporate bond deal to market, totaling some $30 billion, the latest in a parade of recent data-center borrowing.

Apple confirmed during its earnings call that it is also increasing investments in AI. However, its total spending remains significantly lower than the outlays planned by the other major technology firms.

………………………………………………………………………………………………………………………………………………………………

Skepticism and Risk: 

While proponents argue the investments are necessary for AGI and offer a competitive advantage, skeptics question if huge spending (capex) on AI infrastructure and large-language models will achieve this goal and point to limited paying users for current AI technology.  Meta CEO Zuckerberg addressed this by telling investors the company would “simply pivot” if its AGI spending strategy proves incorrect.

The mad scramble by mega tech companies and OpenAI to build AI data centers is relying largely on debt markets, with a slew of public and private mega-deals since September. Hyperscalers would have to spend 94% of operating cash flow to pay for their AI buildouts, so they are turning to debt financing to help defray some of that cost, according to Bank of America. Unlike earnings per share, cash flow is far harder for companies to massage: if they spend more on AI than they generate internally, they have to finance the difference.

Hyperscaler debt deals so far this year have raised almost as much money as all such debt financings done between 2020 and 2024, the BofA research said. BofA counts $75 billion of AI-related public debt offerings in the past two months alone!

 

In bubbles, everyone gets caught up in the idea that spending on the hot theme will deliver vast profits — eventually. When the bubble is big enough, it shifts the focus of the market as a whole from disliking capital expenditure, and hating speculative capital spending in particular, to loving it. That certainly seems the case today with surging AI spending. For much more, please check out the References below.

References:

Reuters: US Department of Energy forms $1 billion AI supercomputer partnership with AMD

The U.S. Department of Energy has formed a $1 billion partnership with Advanced Micro Devices (AMD) to construct two supercomputers that will tackle large scientific problems ranging from nuclear power to cancer treatments to national security, U.S. Energy Secretary Chris Wright and AMD CEO Lisa Su told Reuters.

The U.S. is building the two machines to ensure the country has enough supercomputers to run increasingly complex experiments that require harnessing enormous amounts of data-crunching capability. The machines can accelerate the process of making scientific discoveries in areas the U.S. is focused on.

 

U.S. Energy Secretary Wright said the systems would “supercharge” advances in nuclear power, fusion energy, technologies for defense and national security, and the development of drugs. Scientists and companies are trying to replicate nuclear fusion, the reaction that fuels the sun, by jamming light atoms in a plasma gas under intense heat and pressure to release massive amounts of energy. “We’ve made great progress, but plasmas are unstable, and we need to recreate the center of the sun on Earth,” Wright told Reuters.

“We’re going to get just massively faster progress using the computation from these AI systems that I believe will have practical pathways to harness fusion energy in the next two or three years.” Wright said the supercomputers would also help manage the U.S. arsenal of nuclear weapons and accelerate drug discovery by simulating ways to treat cancer down to the molecular level. “My hope is in the next five or eight years, we will turn most cancers, many of which today are ultimate death sentences, into manageable conditions,” Wright said.
The plans call for the first computer, called Lux, to be constructed and brought online within the next six months. It will be built around AMD’s MI355X artificial intelligence chips, and the design will also include central processors (CPUs) and networking chips made by AMD. The system is co-developed by AMD, Hewlett Packard Enterprise (HPE), Oracle Cloud Infrastructure and Oak Ridge National Laboratory (ORNL).
AMD’s CEO Su said the Lux deployment was the fastest deployment of this size of computer that she has seen.
“This is the speed and agility that we wanted to (do) this for the U.S. AI efforts,” Su said.
ORNL Director Stephen Streiffer said the Lux supercomputer will deliver about three times the AI capacity of current supercomputers.

The second, more advanced computer called Discovery will be based around AMD’s MI430 series of AI chips that are tuned for high-performance computing.

The MI430 is a special variant of its MI400 series that combines important features of traditional supercomputing chips along with the features to run AI applications, Su said.
This system will be designed by ORNL, HPE and AMD. Discovery is expected to be delivered in 2028 and be ready for operations in 2029.  Streiffer said he expected enormous gains but couldn’t predict how much greater computational capability it would have.
The Department of Energy will host the computers, the companies will provide the machines and capital spending, and both sides will share the computing power, a DOE official said.
The two supercomputers based on AMD chips are intended to be the first of many of these types of partnerships with private industry and DOE labs across the country, the official said.

Amazon’s Jeff Bezos at Italian Tech Week: “AI is a kind of industrial bubble”

Tech firms are spending hundreds of billions of dollars on advanced AI chips and data centers, not just to keep pace with a surge in the use of chatbots such as ChatGPT, Gemini and Claude, but to make sure they’re ready to handle a more fundamental and disruptive shift of economic activity from humans to machines. The final bill may run into the trillions. The financing is coming from venture capital, debt and, lately, some more unconventional arrangements that have raised concerns among top industry executives and financial asset management firms.

At Italian Tech Week in Turin on October 3, 2025, Amazon founder Jeff Bezos said this about artificial intelligence: “This is a kind of industrial bubble, as opposed to financial bubbles.” Bezos differentiated this from “bad” financial or housing bubbles, which cause harm. His comparison of the current AI boom to a historical “industrial bubble” highlights that, while speculative, it is rooted in real, transformative technology.

“It can even be good, because when the dust settles and you see who are the winners, societies benefit from those investors,” Bezos said. “That is what is going to happen here too. This is real, the benefits to society from AI are going to be gigantic.”

He noted that during bubbles, everything (both good and bad investments) gets funded. “Investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas,” Bezos said of the AI industry. “And that’s also probably happening today,” he added.

  • A “good” kind of bubble: He explained that during industrial bubbles, excessive funding flows to both good and bad ideas, making it hard for investors to distinguish between them. However, the influx of capital spurs significant innovation and infrastructure development that ultimately benefits society once the bubble bursts and the strongest companies survive.
  • Echoes of the dot-com era: Bezos drew a parallel to the dot-com boom of the 1990s, where many internet companies failed, but the underlying infrastructure—like fiber-optic cable—endured and led to the creation of companies like Amazon.
  • Gigantic benefits: Despite the market frothiness, Bezos reiterated that AI is “real” and its benefits to society “are going to be gigantic.”
Bezos is not the only high-profile figure to express caution about the AI boom:
  • Sam Altman (OpenAI): The OpenAI CEO has stated that he believes “investors as a whole are overexcited about AI.” In August, he told reporters the AI market was in a bubble. When bubbles happen, “smart people get overexcited about a kernel of truth,” Altman warned, drawing parallels with the dot-com boom. Still, he said his personal belief is that “on the whole, this would be a huge net win for the economy.”
  • David Solomon (Goldman Sachs): Also speaking at Italian Tech Week, the Goldman Sachs CEO warned that a lot of capital deployed in AI would not deliver returns and that a market “drawdown” could occur.
  • Mark Zuckerberg (Meta): Zuckerberg has also acknowledged that an AI bubble exists. The Meta CEO acknowledged that the rapid development of, and surging investment in, AI could form a bubble, potentially outpacing practical productivity and returns and risking a market crash. However, he would rather “misspend a couple hundred billion dollars” on AI development than be late to the technology.
  • Morgan Stanley Wealth Management’s chief investment officer, Lisa Shalett, warned that the AI stock boom was showing “cracks” and was likely closer to its end than its beginning. The firm cited concerns over negative free cash flow growth among major AI players and increasing speculative investment. Shalett highlighted that free cash flow growth for the major cloud providers, or “hyperscalers,” has turned negative. This is viewed as a key signal of the AI capital expenditure cycle’s maturity. Some analysts estimate this growth could shrink by about 16% over the next year.
………………………………………………………………………………………………………………………………………………………………………………………
Bezos’s remarks come as some analysts express growing fears of an impending AI market crash.
  • Underlying technology is real: Unlike purely speculative bubbles, the AI boom is driven by a fundamental technology shift with real-world applications that will survive any market correction.
  • Historical context: Some analysts believe the current AI bubble is on a much larger scale than the dot-com bubble due to the massive influx of investment.
  • Significant spending: The level of business spending on AI is already at historic levels and is fueling economic growth, which could cause a broader economic slowdown if it were to crash.
  • Potential for disruption: The AI industry faces risks such as diminishing returns for costly advanced models, increased competition, and infrastructure limitations related to power consumption. 

Ian Harnett argues the current bubble may be approaching its “endgame.” He wrote in the Financial Times:

“The dramatic rise in AI capital expenditure by so-called hyperscalers of the technology and the stock concentration in US equities are classic peak bubble signals. But history shows that a bust triggered by this over-investment may hold the key to the positive long-run potential of AI.

Until recently, the missing ingredient was the rapid build-out of physical capital. This is now firmly in place, echoing the capex boom seen in the late-1990s bubble in telecommunications, media and technology stocks. That scaling of the internet and mobile telephony was central to sustaining ‘blue sky’ earnings expectations and extreme valuations, but it also led to the TMT bust.”

Today’s AI capital expenditure (capex) is increasingly being funded by debt, marking a notable shift from previous reliance on cash reserves. While tech giants initially used their substantial cash flows for AI infrastructure, their massive and escalating spending has led them to increasingly rely on external financing to cover costs.

This is especially true of Oracle, which will have to increase its capex by almost $100 billion over the next two years for its deal to build out AI data centers for OpenAI. That’s an annualized growth rate of some 47%, even though Oracle’s free cash flow has already fallen into negative territory for the first time since 1990. According to a recent note from KeyBanc Capital Markets, Oracle may need to borrow $25 billion annually over the next four years. This comes at a time when Oracle is already carrying substantial debt and is highly leveraged. As of the end of August, the company had around $82 billion in long-term debt, with a debt-to-equity ratio of roughly 450%. By comparison, Alphabet—the parent company of Google—reported a ratio of 11.5%, while Microsoft’s stood at about 33%. In July, Moody’s revised Oracle’s credit outlook to negative while affirming its Baa2 senior unsecured rating. The negative outlook reflects the risks associated with Oracle’s significant expansion into AI infrastructure, which is expected to lead to elevated leverage and negative free cash flow due to high capital expenditures. Caveat emptor!
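The leverage and growth figures quoted for Oracle reduce to two one-line calculations. In the sketch below, the equity figure is backed out from the quoted ~450% ratio (an assumption, not a reported number), and the capex start/end levels are illustrative values consistent with roughly $100 billion of added spending over two years.

```python
def debt_to_equity(long_term_debt, shareholders_equity):
    """Debt-to-equity expressed as a percentage."""
    return long_term_debt / shareholders_equity * 100

def annualized_growth(start, end, years):
    """Compound annual growth rate implied by a start and end level."""
    return (end / start) ** (1 / years) - 1

# $82B of long-term debt against ~$18B of equity (equity is inferred):
print(f"{debt_to_equity(82, 18.2):.0f}%")      # ~451%, i.e. the ~450% quoted
# Capex roughly doubling from ~$35B to ~$75B/yr over two years (illustrative):
print(f"{annualized_growth(35, 75, 2):.0%}")   # ~46%, in line with the ~47% cited
```

The same `annualized_growth` helper explains why a $100 billion step-up over two years translates into such a steep compound rate from Oracle’s current spending base.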

References:

https://fortune.com/2025/10/04/jeff-bezos-amazon-openai-sam-altman-ai-bubble-tech-stocks-investing/

https://www.ft.com/content/c7b9453e-f528-4fc3-9bbd-3dbd369041be

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

AI Data Center Boom Carries Huge Default and Demand Risks

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

Will the wave of AI generated user-to/from-network traffic increase spectacularly as Cisco and Nokia predict?

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

RtBrick survey: Telco leaders warn AI and streaming traffic to “crack networks” by 2030

https://fortune.com/2025/09/19/zuckerberg-ai-bubble-definitely-possibility-sam-altman-collapse/

https://finance.yahoo.com/news/why-fears-trillion-dollar-ai-130008034.html

Huawei to Double Output of Ascend AI chips in 2026; OpenAI orders HBM chips from SK Hynix & Samsung for Stargate UAE project

With sales of Nvidia AI chips restricted in China, Huawei Technologies Inc. plans to make about 600,000 of its 910C Ascend chips next year, roughly double this year’s output, people familiar with the matter told Bloomberg. The China tech behemoth will increase its Ascend product line in 2026 to as many as 1.6 million dies – the basic silicon component that’s packaged as a chip.

Huawei had struggled to get those products to potential customers for much of 2025 because of U.S. sanctions. Yet if Huawei and its partner Semiconductor Manufacturing International Corp. (SMIC) can hit that ambitious AI chip manufacturing target, it would signal a degree of self-sufficiency that could remove some of the bottlenecks that have hindered Huawei’s AI business.

The projections for 2025 and 2026 include dies that Huawei has in inventory, as well as internal estimates of yields or the rate of failure during production, the people said. Shares in SMIC and rival chipmaker Hua Hong Semiconductor Ltd. gained more than 4% in Hong Kong Tuesday, while the broader market stayed largely unchanged.

Huawei Ascend branding at a trade show in China. Photographer: Ying Tang/Getty Images

Chinese AI companies from Alibaba Group Holding Ltd. to DeepSeek need millions of AI chips to develop and operate AI services. Nvidia alone was estimated to have sold a million H20 chips in 2024.

What Bloomberg Economics Says:

Huawei’s reported plan to double AI-chip output over the next year suggests China is making real progress in working around US export controls. Yet the plan also exposes the limitations imposed by US controls: Node development remains stuck at 7 nanometers, and Huawei will continue to rely on stockpiles of foreign high-bandwidth memory amid a lack of domestic production.

From Beijing’s perspective, Huawei’s production expansion represents another move in an ongoing back-and-forth with the West over semiconductor access and self-sufficiency. The priority remains accelerating indigenization of critical technologies while steadily pushing back against Western controls.

– Michael Deng, analyst

While Huawei’s new AI silicon promises massive performance gains, it has several shortcomings, especially the lack of a developer community comparable to Nvidia’s CUDA ecosystem. A Chinese tech executive said Nvidia’s biggest advantage wasn’t its advanced chips but the ecosystem built around CUDA, its parallel computing architecture and programming model. The executive called for the creation of a Chinese version of CUDA that can be used worldwide.

Also, Huawei is playing catch-up by progressively going open source. It announced last month that CANN (its AI training toolkit for Ascend), its Mind development environment, and its Pangu models would all be open source by year-end.

Huawei chairman Eric Xu said in an interview the company had given the “ecosystem issue” a great deal of thought and regarded the transition to open source as a long-term project. “Why keep it hidden? If it’s widely used, an ecosystem will emerge; if it’s used less, the ecosystem will disappear,” he said.

………………………………………………………………………………………………………………………………………………………………………

At its customer event in Shanghai last month, Huawei revealed that it planned to spend 15 billion Chinese yuan (US$2.1 billion) annually over the next five years on ecosystem development and open source computing.

Xu announced a series of new Ascend chips – the 950, 960 and 970 – to be rolled out over the next three years.  He foreshadowed a new series of massive Atlas SuperPoD clusters – each one a single logical machine made up of multiple physical devices that can work together – and also announced Huawei’s unified bus interconnect protocol, which allows customers to stitch together compute power across multiple data centers. 

Xu acknowledged that Huawei’s single Ascend chips could not match Nvidia’s, but said the SuperPoDs were currently the world’s most powerful and would remain so “for years to come.” But the scale of its SuperPoD architecture points to its other shortcoming – the power consumption of these giant compute arrays.

………………………………………………………………………………………………………………………………………………………………………….

Separately, OpenAI has made huge memory chip agreements with South Korea’s SK Hynix and Samsung, the world’s two biggest semiconductor memory manufacturers. The partnership, aimed at locking up HBM (High Bandwidth Memory) [1.] chip supply for the $400 billion Stargate AI infrastructure project, is estimated to be worth more than 100 trillion Korean won (US$71.3 billion) for the Korean chipmakers over the next four years. The two companies say they are targeting 900,000 DRAM wafer starts per month – more than double the current global HBM capacity.

Note 1. HBM is a specialized type of DRAM that uses a unique 3D vertical stacking architecture and Through-Silicon Via (TSV) technology to achieve significantly higher bandwidth and performance than traditional, flat DRAM configurations. HBM uses standard DRAM “dies” stacked vertically, connected by TSVs, to create a more densely packed, high-performance memory solution for demanding applications like AI and high-performance computing.
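The bandwidth advantage described in the note comes largely from interface width: a stacked HBM device exposes a 1024-bit interface versus 64 bits for a conventional DDR channel, so even at comparable per-pin data rates it moves an order of magnitude more data. A rough sketch, using generic textbook figures rather than any specific product’s spec sheet:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth in GB/s: interface width x per-pin rate, bits -> bytes."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Illustrative figures (wide HBM stack vs. a single 64-bit DDR channel,
# both at an assumed 6.4 Gb/s per pin):
hbm_stack   = bandwidth_gb_s(bus_width_bits=1024, data_rate_gbps_per_pin=6.4)  # ~819 GB/s
ddr_channel = bandwidth_gb_s(bus_width_bits=64,   data_rate_gbps_per_pin=6.4)  # ~51 GB/s
print(f"HBM stack: {hbm_stack:.0f} GB/s vs DDR channel: {ddr_channel:.0f} GB/s")
```

The TSV-connected 3D stacking described above is what makes such a wide interface physically practical; it is this per-package bandwidth that AI accelerators are buying when they lock up HBM supply.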

…………………………………………………………………………………………………………………………………………………………………………….

“These partnerships will focus on increasing the supply of advanced memory chips essential for next-generation AI and expanding data center capacity in Korea, positioning Samsung and SK as key contributors to global AI infrastructure and supporting Korea’s ambition to become a top-three global AI nation,” OpenAI said.

The announcement followed a meeting between President Lee Jae-myung, Samsung Electronics Executive Chairman Jay Y. Lee, SK Chairman Chey Tae-won, and OpenAI CEO Sam Altman at the Presidential Office in Seoul.

Through these partnerships, Samsung Electronics and SK hynix plan to scale up production of advanced memory chips, targeting 900,000 DRAM wafer starts per month at an accelerated capacity rollout, critical for powering OpenAI’s advanced AI models.

OpenAI also signed a series of agreements today to explore developing next-generation AI data centers in Korea. These include a Memorandum of Understanding (MoU) with the Korean Ministry of Science and ICT (MSIT) specifically to evaluate opportunities for building AI data centers outside the Seoul Metropolitan Area, supporting balanced regional economic growth and job creation across the country.

The agreements signed today also include a separate partnership with SK Telecom to explore building an AI data center in Korea, as well as an agreement with Samsung C&T, Samsung Heavy Industries, and Samsung SDS to assess opportunities for additional data center capacity in the country.

References:

https://www.bloomberg.com/news/articles/2025-09-29/huawei-to-double-output-of-top-ai-chip-as-nvidia-wavers-in-china

https://www.lightreading.com/ai-machine-learning/huawei-sets-itself-as-china-s-go-to-for-ai-tech

https://openai.com/index/samsung-and-sk-join-stargate/

OpenAI orders $71B in Korean memory chips

AI Data Center Boom Carries Huge Default and Demand Risks

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

U.S. export controls on Nvidia H20 AI chips enables Huawei’s 910C GPU to be favored by AI tech giants in China

Huawei launches CloudMatrix 384 AI System to rival Nvidia’s most advanced AI system

China gaining on U.S. in AI technology arms race- silicon, models and research

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Can the debt fueling the new wave of AI infrastructure buildouts ever be repaid?

Despite U.S. sanctions, Huawei has come “roaring back,” due to massive China government support and policies

Big tech spending on AI data centers and infrastructure vs the fiber optic buildout during the dot-com boom (& bust)

Big Tech plans to spend between $364 billion and $400 billion on AI data centers, purchasing specialized AI hardware like GPUs, and supporting cloud computing/storage capacity. The final 2Q 2025 GDP report, released last week, reveals a surge in data center infrastructure spending from $9.5 billion in early 2020 to $40.4 billion in the second quarter of 2025. The surge is largely due to an unprecedented investment boom driven by artificial intelligence (AI) and cloud computing, and it highlights a monumental shift in capital expenditure by major tech companies.

Yet there are huge uncertainties about how far AI will transform scientific discovery and hypercharge technological advance. Tech financial analysts worry that enthusiasm for AI has turned into a bubble reminiscent of the mania around the internet’s infrastructure build-out boom of 1998-2000. During that period, telecom network providers spent over $100 billion blanketing the country with fiber optic cables, based on the belief that the internet’s growth would be so explosive that such massive investments were justified. The “talk of the town” during those years was the “All Optical Network,” with ultra-long-haul optical transceivers, photonic switches and optical add/drop multiplexers. Twenty-seven years later, it still has not been realized anywhere in the world.

The resulting massive optical network overbuilding made telecom the hardest-hit sector of the dot-com bust. Industry giants toppled like dominoes, including Global Crossing, WorldCom, Enron, Qwest, PSINet and 360networks.

However, a key difference between then and now is that today’s tech giants (e.g. hyperscalers) produce far more cash than the fiber builders of the 1990s. Also, AI is immediately available to anyone who has a high-speed internet connection (via desktop, laptop, tablet or smartphone), unlike the late 1990s, when internet users (consumers and businesses) had to obtain high-speed wireline access via cable modems, DSL or (in very few areas) fiber to the premises.

……………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Today, the once boring world of chips and data centers has become a raging multi-hundred-billion-dollar battleground where Silicon Valley giants attempt to one-up each other with spending commitments—and sci-fi names. Meta CEO Mark Zuckerberg teased his planned “Hyperion” mega-data center with a social-media post showing it would be the size of a large chunk of Manhattan.

OpenAI’s Sam Altman calls his data-center effort “Stargate,” a reference to the 1994 movie about an interstellar portal. Company executives this week laid out plans that would require at least $1 trillion in data-center investments, and Altman recently committed the company to pay Oracle an average of approximately $60 billion a year for AI compute servers in data centers in coming years. That’s despite the fact that Oracle is not a major cloud service provider and that OpenAI will not have the cash on hand to pay Oracle.

In fact, OpenAI is on track to realize just $13 billion in revenue from all its paying customers this year and won’t be profitable until at least 2029 or 2030. The company projects its total cash burn will reach $115 billion by 2029. The majority of its revenue comes from subscriptions to premium versions of ChatGPT, with the remainder from selling access to its models via its API. Although ~700 million people (9% of the world’s population) are weekly users of ChatGPT (as of August, up from 500 million in March), it’s estimated that over 90% use the free version. Also this past week:

  • Nvidia plans to invest up to $100 billion to help OpenAI build data center capacity with millions of GPUs.
  • OpenAI revealed an expanded deal with Oracle and SoftBank, scaling its “Stargate” project to a $400 billion commitment across multiple phases and sites.
  • OpenAI deepened its enterprise reach with a formal integration into Databricks — signaling a new phase in its push for commercial adoption.

Nvidia is supplying capital and chips. Oracle is building the sites. OpenAI is anchoring the demand. It’s a circular economy that could come under pressure if any one player falters. And while the headlines came fast this week, the physical buildout will take years to deliver — with much of it dependent on energy and grid upgrades that remain uncertain.

Another AI darling is CoreWeave, a company that provides GPU-accelerated cloud computing platforms and infrastructure. From its founding in 2017 until its pivot to cloud computing in 2019, CoreWeave was an obscure cryptocurrency miner with fewer than two dozen employees. Flooded with money from Wall Street and private-equity investors, it has morphed into a computing goliath with a market value bigger than General Motors or Target.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Massive AI infrastructure spending will require tremendous AI revenue for pay-back:

David Cahn, a partner at venture-capital firm Sequoia, estimates that the money invested in AI infrastructure in 2023 and 2024 alone requires consumers and companies to buy roughly $800 billion in AI products over the life of these chips and data centers to produce a good investment return. Analysts believe most AI processors have a useful life of between three and five years.
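As a rough illustration of the payback arithmetic, the revenue target can be spread over the hardware’s useful life. Below is a minimal sketch, assuming the figures quoted above (Sequoia’s ~$800 billion requirement and a 3-to-5-year chip life); the even spreading of revenue over the useful life is a simplification for illustration, not Cahn’s method:

```python
# Back-of-the-envelope payback arithmetic for AI infrastructure spend.
# Figures are illustrative, taken from the estimates quoted above:
# ~$800B in required AI product sales, 3-5 year useful life for AI chips.

def required_annual_revenue(total_required: float, useful_life_years: int) -> float:
    """Spread the total revenue target evenly over the hardware's useful life."""
    return total_required / useful_life_years

TOTAL_REQUIRED = 800e9  # ~$800 billion (Sequoia's David Cahn estimate)

for life in (3, 4, 5):
    annual = required_annual_revenue(TOTAL_REQUIRED, life)
    print(f"{life}-year chip life -> ~${annual / 1e9:.0f}B/year in AI revenue")
```

Even at the optimistic 5-year end of the range, that works out to roughly $160 billion a year in AI product sales to pay back just the 2023-2024 investments.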

This week, consultants at Bain & Co. estimated the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030. By comparison, that is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.

Morgan Stanley estimates that last year there was around $45 billion of revenue for AI products. The sector makes money from a combination of subscription fees for chatbots such as ChatGPT and money paid to use these companies’ data centers.  How the tech sector will cover the gap is “the trillion dollar question,” said Mark Moerdler, an analyst at Bernstein. Consumers have been quick to use AI, but most are using free versions, Moerdler said. Businesses have been slow to spend much on AI services, except for the roughly $30 a month per user for Microsoft’s Copilot or similar products. “Someone’s got to make money off this,” he said.
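The scale of that gap becomes clearer as a compound-growth calculation. A hedged back-of-the-envelope sketch, assuming the $45 billion (Morgan Stanley, 2024) and $2 trillion (Bain & Co., 2030) figures above and smooth exponential growth in between:

```python
# How fast would AI revenue have to grow to close the gap described above:
# ~$45B in 2024 (Morgan Stanley) to ~$2T/year needed by 2030 (Bain & Co.)?
# Assumes smooth exponential growth; real growth would be lumpier.

def required_cagr(start: float, target: float, years: int) -> float:
    """Compound annual growth rate needed to move from start to target."""
    return (target / start) ** (1 / years) - 1

cagr = required_cagr(45e9, 2e12, 6)  # 2024 -> 2030
print(f"Required CAGR: {cagr:.0%}")
```

The answer is on the order of 88% per year, sustained for six consecutive years, which underscores why analysts call covering the gap “the trillion dollar question.”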

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Why this time is different (?):

AI cheerleaders insist that this boom is different from the dot-com era. If AI continues to advance to the point where it can replace a large swath of white-collar jobs, the savings will be more than enough to pay back the investment, backers argue. AI executives predict the technology could add 10% to global GDP in coming years.

“Training AI models is a gigantic multitrillion dollar market,” Oracle chairman Larry Ellison told investors this month. The market for companies and consumers using AI daily “will be much, much larger.”

The financing behind the AI build-out is complex, with debt layered on at nearly every level of the ecosystem, from the large tech giants to smaller cloud providers and specialized hardware firms. This “debt-fueled arms race” involves large technology companies, startups, and private credit firms seeking innovative ways to fund data center development and acquire powerful hardware, such as Nvidia GPUs.

Alphabet, Microsoft, Amazon, Meta and others create their own AI products, and sometimes sell access to cloud-computing services to companies such as OpenAI that design AI models. The four “hyperscalers” alone are expected to spend nearly $400 billion on capital investments next year, more than the cost of the Apollo space program in today’s dollars.  Some build their own data centers, and some rely on third parties to erect the mega-size warehouses tricked out with cooling equipment and power.

…………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Echoes of bubbles past:

History is replete with technology bubbles that pop. Optimism over an invention—canals, electricity, railroads—prompts an investor stampede premised on explosive growth. Overbuilding follows, and investors eat giant losses, even when a new technology permeates the economy.  Predicting when a boom turns into a bubble is notoriously hard. Many inflate for years. Some never pop, and simply stagnate.

The U.K.’s 19th-century railway mania was so large that over 7% of the country’s GDP went toward blanketing the country with rail. Between 1840 and 1852, the railway system nearly quintupled to 7,300 miles of track, but it produced only one-fourth of the revenue builders expected, according to Andrew Odlyzko, PhD, an emeritus University of Minnesota mathematics professor who studies bubbles. He calls the unbridled optimism in manias “collective hallucinations,” in which investors, society and the press follow a herd mentality and stop seeing risks.

He knows from firsthand experience as a researcher at Bell Labs in the 1990s. Then, telecom giants and upstarts raced to speculatively lay tens of millions of miles of fiber cable in the ground, spending the equivalent of around 1% of U.S. GDP over half a decade.

Backers compared the effort to the highway system, to the advent of electricity and to discovering oil. The prevailing belief at the time, he said, was that internet use was doubling every 100 days. But in reality, for most of the 1990s boom, traffic doubled every year, Odlyzko found.

The force of the mania led executives across the industry to focus on hype more than unfavorable news and statistics, pouring money into fiber until the bubble burst.

“There was a strong element of self interest,” as companies and executives all stood to benefit financially as long as the boom continued, Odlyzko said. “Cautionary signs are disregarded.”

Kevin O’Hara, a co-founder of upstart fiber builder Level 3, said banks and stock investors were throwing money at the company, and executives believed demand would rocket upward for years. Despite worrying signs, executives focused on the promise of more traffic from uses like video streaming and games.

“It was an absolute gold rush,” he said. “We were spending about $110 million a week” building out the network.

When reality caught up, Level 3’s stock dropped 95%, while giants of the sector went bust. Much of the fiber sat unused for over a decade. Ultimately, the growth of video streaming and other uses in the early 2010s helped soak up much of the oversupply.

Worrying signs:

There are growing, worrying signs that the optimism about AI won’t pan out.

  • MIT Media Lab (2025): The “State of AI in Business 2025” report found that 95% of custom enterprise AI tools and pilots fail to produce a measurable financial impact or reach full-scale production. The primary issue identified was a “learning gap” among leaders and organizations, who struggle to properly integrate AI tools and redesign workflows to capture value.
  • A University of Chicago economics paper found AI chatbots had “no significant impact on workers’ earnings, recorded hours, or wages” at 7,000 Danish workplaces.
  • Gartner (2024–2025): The research and consulting firm has reported that 85% of AI initiatives fail to deliver on their promised value. Gartner also predicts that by the end of 2025, 30% of generative AI projects will be abandoned after the proof-of-concept phase due to issues like poor data quality, lack of clear business value, and escalating costs.
  • RAND Corporation (2024): In its analysis, RAND confirmed that the failure rate for AI projects is over 80%, which is double the failure rate of non-AI technology projects. Cited obstacles include cost overruns, data privacy concerns, and security risks.

OpenAI’s release of ChatGPT-5 in August was widely viewed as an incremental improvement, not the game-changing thinking machine many expected. Given the high cost of developing it, the release fanned concerns that generative AI models are improving at a slower pace than expected.  Each new AI model—ChatGPT-4, ChatGPT-5—costs significantly more than the last to train and release to the world, often three to five times the cost of the previous, say AI executives. That means the payback has to be even higher to justify the spending.
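The compounding effect of that cost scaling is easy to see with a small sketch. The $100 million base cost below is a hypothetical placeholder (not a reported figure), and the 3x/5x multipliers are the range AI executives cite above:

```python
# If each model generation costs 3-5x the previous one to train, costs
# compound quickly. Illustrative only: BASE is a hypothetical starting
# cost, not a reported training budget.

def generation_cost(base_cost: float, multiplier: float, generations: int) -> float:
    """Cost of the Nth successor model if each generation costs `multiplier`x its predecessor."""
    return base_cost * multiplier ** generations

BASE = 100e6  # hypothetical $100M for the starting generation

for m in (3, 5):
    cost = generation_cost(BASE, m, 3)
    print(f"3 generations at {m}x each: ~${cost / 1e9:.1f}B")
```

At a 3x multiplier, three generations take a hypothetical $100 million model to $2.7 billion; at 5x, to $12.5 billion, which is why each generation’s payback bar keeps rising.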

Another hurdle: The chips in the data centers won’t be useful forever. Unlike the dot-com boom’s fiber cables, the latest AI chips rapidly depreciate in value as technology improves, much like an older model car.  And they are extremely expensive.

“This is bigger than all the other tech bubbles put together,” said Roger McNamee, co-founder of tech investor Silver Lake Partners, who has been critical of some tech giants. “This industry can be as successful as the most successful tech products ever introduced and still not justify the current levels of investment.”

Other challenges include the growing strain on global supply chains, especially for chips, power and infrastructure. As for economy-wide gains in productivity, few of the biggest listed U.S. companies are able to describe how AI is changing their businesses for the better. Equally striking is the minimal euphoria some Big Tech companies display in their regulatory filings. Meta’s 10-K filing last year reads: “[T]here can be no assurance that the usage of AI will enhance our products or services or be beneficial to our business, including our efficiency or profitability.” That is a very shaky basis on which to conduct a $300bn capex splurge.

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

Conclusions:

Big tech spending on AI infrastructure has been propping up the U.S. economy, with some projections indicating it could fuel nearly half of the 2025 GDP growth. However, this contribution primarily stems from capital expenditures, and the long-term economic impact is still being debated.  George Saravelos of Deutsche Bank notes that economic growth is not coming from AI itself but from building the data centers to generate AI capacity.

Once those AI factories have been built, with needed power supplies and cooling, will the productivity gains from AI finally be realized? How globally disseminated will those benefits be?  Finally, what will be the return on investment (ROI) for the big spending AI companies like the hyperscalers, OpenAI and other AI players?

………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………

References:

https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128

https://www.ft.com/content/6c181cb1-0cbb-4668-9854-5a29debb05b1

https://www.cnbc.com/2025/09/26/openai-big-week-ai-arms-race.html

https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

Gartner: AI spending >$2 trillion in 2026 driven by hyperscalers data center investments

AI Data Center Boom Carries Huge Default and Demand Risks

AI spending is surging; companies accelerate AI adoption, but job cuts loom large

Will billions of dollars big tech is spending on Gen AI data centers produce a decent ROI?

Canalys & Gartner: AI investments drive growth in cloud infrastructure spending

AI wave stimulates big tech spending and strong profits, but for how long?

AI Echo Chamber: “Upstream AI” companies huge spending fuels profit growth for “Downstream AI” firms

OpenAI partners with G42 to build giant data center for Stargate UAE project

Big Tech and VCs invest hundreds of billions in AI while salaries of AI experts reach the stratosphere

Superclusters of Nvidia GPU/AI chips combined with end-to-end network platforms to create next generation data centers

Analysis: Cisco, HPE/Juniper, and Nvidia network equipment for AI data centers

Networking chips and modules for AI data centers: Infiniband, Ultra Ethernet, Optical Connections

OpenAI and Broadcom in $10B deal to make custom AI chips

Lumen Technologies to connect Prometheus Hyperscale’s energy efficient AI data centers

Proposed solutions to high energy consumption of Generative AI LLMs: optimized hardware, new algorithms, green data centers

Liquid Dreams: The Rise of Immersion Cooling and Underwater Data Centers

Lumen: “We’re Building the Backbone for the AI Economy” – NaaS platform to be available to more customers

Initiatives and Analysis: Nokia focuses on data centers as its top growth market