In terms of both market structure and risk management, we are treating AI companies as a sector, rather than as a broad-based type of software that supports all industries. The economic boom will come when we think of AI not as some type of technological disruption but as just another stage of digital modernization for an organization, one that provides a better approach to problem-solving than current methods.
To reach that boom, today's AI company shells will be discarded so that the growth expected in the long term can be realized, just as earlier infrastructure builders were absorbed: the assets of the companies that built the railways were integrated into transportation companies, and the assets of the firms that laid fiber optic cables became the telecom industry.
The U.S. is making an enormous bet rarely seen in its history. In the first half of 2025, 1.1% of GDP growth came from AI infrastructure investments. Consider earlier “moon shots”: the Manhattan Project investment was about 0.4% of GDP at the time, similar to that of the actual moon shot, the Apollo program.
Government policy, actions and pronouncements have advantaged AI companies. Their financial viability – including the ability to cover costs, pay returns to shareholders and investors, and perform sustainably – will be critical to economic growth, national security, and resilience. The stakes are high given the investment.
Large generative AI models are a type of software, hyped both as a step towards artificial general intelligence (AGI) and as a way to solve narrower problems. For a consumer, the output is often generated text or a search result (e.g., “please compose a love poem to my partner, John, in iambic pentameter”). For a business, the AI model should deliver a digital transformation solution (e.g., equity market research, a response to a customer inquiry, an optimized supply chain).
The AI model is an approach to problem-solving; AI is not the answer to the problem. The question should be, which AI model solves the problem? Technical progress and hundreds of billions of investment dollars hang in the balance.
Let’s consider six critical risks of an AI company bubble or boom.
SLM or LLM: What Is the Most Efficient Way to Solve a Business Problem?
Simply put, are firms solving the right problem? OpenAI, Anthropic, Google, Meta, Mistral and other companies have bet hundreds of billions on generative large language models (LLMs) in a purported step towards AGI. Most business problems are simpler and can be solved with simpler small language models (SLMs) trained on small data sets.
McKinsey found that most business leaders cite model simplicity and training data among common challenges with AI adoption.
Think about a smarter, more effective customer-service call center for a retailer. It does not need an AI model trained on data sets containing Old English literature and MCAT study guides. More usefully, training data may include some historical customer data accompanied by rules of etiquette and ethics for the AI-assisted responses. To use AI for digital modernization, the organization will focus its business and IT resources on building and deploying the bespoke SLM, gradually building AI capacity in the workforce.
Will a large or small AI model work better to solve the business problem? The distinction is critical to understand the potential revenue and credit risks of the AI companies pursuing AGI. Adding to the risks of large AI models are studies showing they don’t follow the rules and can deceive their human operators.
Based on analysis of AI prompt data, an NBER study found that approximately 75% of ChatGPT users pursue non-work-related prompts. A report from Apple found that AI “reasoning” may be circular because the training data often contains the answer to the question being tested.
As reported in Harvard Business Review, work-related AI usage often creates “workslop” that can cost companies millions of dollars in lost productivity as more senior employees are left with the task of redoing the efforts. In fact, an MIT study finds that only 5% of generative AI use cases have delivered profitability or productivity growth.
Progress in agentic and video AI could deliver more value, but both will be similarly challenged by questions of the alignment between the AI model and an organization’s specific business process.
Regulatory Capture and Rent-Seeking
Large tech company incumbents are engaged in unprecedented lobbying and rent-seeking behavior by strategically using their existing market power, data monopolies, and lobbying efforts to influence policy and market structure in ways that secure private gain without creating broader value across the economy.
This has been very effective in shaping the executive orders and government policy changes to accelerate AI investments and remove barriers to building infrastructure. Tariff policy has been structured to support the AI industry with the largest exemptions covering $34 billion of monthly imports of computer parts. These computer parts are a big part of AI capital expenditures because data centers require thousands of computers, and chips make up about 60% of their cost.
Company Revenue, Costs and Profitability
AI company revenues will need to reach $2 trillion in 2030 to fund the 100 gigawatts (GW) of capacity needed to meet anticipated U.S. power demand for AI, according to Bain & Co.
OpenAI's estimated revenues of $14 billion are a fraction of what is needed to scale the required computing power. OpenAI itself estimates electricity usage of 7GW, requiring a $350 billion investment in data centers and chips.
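A back-of-envelope calculation using the figures above shows the scale of the gap. The inputs are the article's numbers; the implied per-GW cost and total capex are simple extrapolations, not quoted estimates.

```python
# All inputs are figures cited in the text; outputs are implications of them.
openai_capex_usd = 350e9      # $350B for data centers and chips (OpenAI estimate)
openai_power_gw = 7           # OpenAI's estimated electricity usage, in GW
cost_per_gw = openai_capex_usd / openai_power_gw   # implied ~$50B per GW

us_target_gw = 100            # anticipated U.S. AI power demand by 2030 (Bain & Co.)
implied_capex = cost_per_gw * us_target_gw         # implied ~$5 trillion build-out

revenue_needed = 2e12         # Bain & Co.: $2T of AI revenue needed in 2030
current_revenue = 14e9        # OpenAI's estimated revenues today

print(f"Implied cost per GW: ${cost_per_gw / 1e9:.0f}B")          # $50B
print(f"Implied capex for 100 GW: ${implied_capex / 1e12:.1f}T")  # $5.0T
print(f"Revenue multiple required: {revenue_needed / current_revenue:.0f}x")  # 143x
```

In other words, if OpenAI's own cost estimate scales linearly, the national build-out implies trillions in capex against revenues that would need to grow by two orders of magnitude.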
How will this investment be funded? OpenAI recently introduced GPT-5. While this new ChatGPT iteration showed incremental improvement, the financial markets were unimpressed. Undeterred, OpenAI accepted warrants as part of a chip deal with AMD, in which the resulting stock market valuation adjustment provided OpenAI with the cash to purchase the chips, thanks to the dilution of AMD shareholders. Nvidia plows cash into OpenAI in return for future use of Nvidia chips, based on the promise of OpenAI revenues. There is a risk that this “circular investment” strategy might fail “when the music stops.”
Data Center Obsolescence
Chips, servers and computer hardware comprise the majority of data center costs, says McKinsey. Unlike most large infrastructure, the life span of a chip is short for both physical and technological reasons. High utilization at high temperatures with intense energy usage creates physical stress.
Chips become technically obsolete long before physical retirement: while the life span of a chip in an AI data center is roughly 2.5 years, more powerful, higher-performing chips become available every 10 to 12 months. Those who compare the creation of a data center network to the growth of the U.S. railroad industry in the 19th century might consider how railroads would have developed if track gauge changed every year.
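The economics of that mismatch can be sketched simply. The 2.5-year physical life and the 10-to-12-month refresh cycle are the article's figures; straight-line depreciation and the normalized cost are illustrative assumptions, not accounting claims.

```python
# Hedged sketch: how obsolescence raises the annualized hardware cost.
hardware_cost = 100.0          # normalized hardware cost per unit (assumption)
physical_life_years = 2.5      # physical life span of an AI data-center chip
economic_life_years = 1.0      # ~10-12 month technology refresh cycle, rounded

# Straight-line annual charge under each life assumption.
annual_cost_physical = hardware_cost / physical_life_years  # 40.0 per year
annual_cost_economic = hardware_cost / economic_life_years  # 100.0 per year

print(annual_cost_economic / annual_cost_physical)  # 2.5
```

If competitive pressure forces replacement on the technology cycle rather than the physical one, the effective annual hardware charge is 2.5 times higher than physical depreciation alone would suggest.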
Opportunity Cost to the Economy
AI-related capex exceeded consumer spending as the primary source of GDP growth, at 1.1%, in the first half of 2025. Many expect AI capex will crowd out other manufacturing and industrial investments, as resources such as large investment program management and factory construction capacity are diverted to data centers. Construction jobs spike during the data center construction period; afterward, a very modest workforce is required to run the data center compared to a manufacturing plant with the same footprint.
With the cost-of-living crises of the last several years firmly in consumers’ minds, the increase in residential and commercial electricity prices caused by grid demand from AI data centers will continue to exert pressure on the economy and politicians alike.
Public Company and Private Credit Tie-Ups
Venture capital investment in AI companies totals a staggering $161 billion so far in 2025. A Bloomberg graphic shows the scale and circular nature of AI, technology company and chip company cash flows and investments. Byzantine tie-ups link public and private companies, software and hardware companies, and start-up and established Big-7 tech company shares through unusual and opaque funding structures, with numerous special purpose vehicles obscuring beneficial ownership.
Private equity, venture capital and bank lending support the financing structures. The circular funding dance could halt for a variety of reasons, including inadequate debt repayment timelines, a failure of AI start-ups to generate expected revenues, investor sentiment shifting to a shiny new object, or government policy changes.
Tie-ups across large public companies such as Nvidia, Microsoft, Oracle, AMD, Intel and CoreWeave, and private companies such as OpenAI and xAI, are being entered into at a speed and scale not seen before. Even Google’s CEO notes its large AI infrastructure investment is driven by “AI infrastructure arms race” fear of missing out. FOMO often does not end well.
Both booms and busts are timeframe-dependent. Looked at in 2001, the investment in a national fiber optic network was a bust. By 2020, that was valuable infrastructure.
A similar claim may be made in time about AI infrastructure. At least two things are true. First, organizations are at a nascent stage in aligning AI models with operational business processes and business strategy. Second, the value of AI may not be delivered by AI companies such as OpenAI, Anthropic or xAI, but instead by internally developed narrow AI models trained on small data puddles using company operational data, ethical principles and strategy.
The hype required to fund AI infrastructure capex in the U.S. is useful and necessary, but should not distract from the way AI models will be used to solve business problems. Is the AI industry building a general AI model with a single prompt line to respond to all types of queries, or narrow bespoke problem-solving applications geared to digital modernization of a business process?
Typically, narrow AI models have been used to drive modernization. This time may be different, but we may learn at the cost of trillions of dollars.
Brenda Boultwood is the Distinguished Visiting Professor, Admiral Crowe Chair, in the Economics Department at the United States Naval Academy. The views expressed in this article are her own and should not be attributed to the United States Naval Academy, the U.S. Navy or the U.S. Department of Defense.
She is the former Director of the Office of Risk Management at the International Monetary Fund. She has previously served as a board member at both the Committee of Chief Risk Officers (CCRO) and GARP, and is also the former senior vice president and chief risk officer at Constellation Energy. She held a variety of business, risk management, and compliance roles at JPMorgan Chase and Bank One.