FinStrat Insights

New Industrial Stack Reshaping the Global Economy

Five interdependent layers — energy, semiconductors, cloud, models and applications — form the defining industrial architecture of the century.

When Sam Altman and President Donald Trump unveiled the Stargate initiative, a $500 billion AI data center program, in early 2025, they revealed a fundamental truth that eludes most public discourse: AI is not just software. It is an industrial system spanning uranium mines, chip fabrication plants, hyperscale data centers, foundation models and autonomous agents.

Understanding AI means understanding five interdependent layers: energy, semiconductors, cloud infrastructure, models and applications. Together, they form the defining industrial architecture of our century.

Global data centers consumed 415 terawatt-hours in 2024, roughly 1.5% of worldwide electricity demand. By 2030, that figure is expected to more than double to 945 terawatt-hours, exceeding Japan’s entire annual electricity consumption. In the U.S. alone, data centers used 183 terawatt-hours in 2024, more than 4% of national electricity, with forecasts reaching 426 terawatt-hours by decade’s end.

What changed? From 2005 to 2017, data center electricity consumption stayed flat as efficiency gains offset cloud growth. AI shattered that equilibrium. AI-optimized servers will represent 44% of total data center power by 2030, up from 21% in 2025. A single AI hyperscale facility now consumes as much electricity as 100,000 households.
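The household comparison can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes an average U.S. household consumption of roughly 10,800 kilowatt-hours per year; that rate is an assumption for illustration, not a figure from this article:

```python
# Sanity check: how much power does "100,000 households" imply?
# The per-household figure below is an assumed average, not from the article.
households = 100_000
kwh_per_household_year = 10_800  # assumed average U.S. household consumption

# Total annual energy, converted from kWh to TWh (1 TWh = 1e9 kWh).
facility_twh_per_year = households * kwh_per_household_year / 1e9

# Average continuous power draw: convert TWh/year to MW (8,760 hours/year).
avg_power_mw = facility_twh_per_year * 1e12 / (8760 * 1e6)

print(round(facility_twh_per_year, 2))  # 1.08 (TWh per year)
print(round(avg_power_mw))             # 123 (MW continuous)
```

At that assumed rate, 100,000 households works out to about 1.1 terawatt-hours per year, or roughly 120 megawatts of continuous draw, squarely in hyperscale territory.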

Geography is being reshaped. Data center construction is migrating to regions with cheap, abundant power, from the Pacific Northwest’s hydroelectric corridors to France’s nuclear grid to Gulf states’ emerging sovereign cloud zones.

Cooling compounds the challenge. While conventional data centers operate at 10 to 15 kilowatts per rack, AI workloads demand 40 to 250 kilowatts. This shift drove rapid adoption of liquid cooling, a market projected to reach $21 billion by 2032 with annual growth exceeding 30%. Current Nvidia racks require 132 kilowatts; the next generation will require 240. Liquid cooling conducts heat 3,000 times more effectively than air, enables 58% higher server density, reduces infrastructure energy consumption by 40% and captures up to 98% of system heat. Organizations deploying liquid-cooled GB200 systems report up to 25 times greater cost savings, more than $4 million annually for a 50-megawatt facility.

The energy layer is AI’s geological bedrock. Decisions made today about power plants and cooling systems will determine who can build tomorrow’s AI infrastructure.

The global AI chip market, valued at $72.7 billion in 2025, is projected to reach $121.7 billion in 2026 and between $670 billion and $1.1 trillion by 2035. At the center sits Nvidia, controlling 80% to 90% of the AI accelerator market with $49 billion in AI revenue in 2025, consuming 77% of all AI processor wafers worldwide.

The key architectural innovation was repurposing graphics processing units for massively parallel computation. GPUs, designed to render video-game graphics through millions of simultaneous calculations, proved ideal for the matrix multiplications at the heart of neural networks. Nvidia’s H100 and its successor, the B200, became the workhorses of AI training.
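Why the fit is so natural can be seen in a few lines of NumPy: a dense neural-network layer is just a matrix multiplication, and every output entry is an independent dot product that parallel hardware can compute simultaneously. The shapes below are arbitrary toy values:

```python
import numpy as np

# A single dense layer is a matrix multiplication -- the operation GPUs
# parallelize across thousands of cores. Shapes here are illustrative.
batch, d_in, d_out = 64, 512, 256
x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
W = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights

# Each of the batch * d_out output entries is an independent dot product,
# so all of them can be computed simultaneously on parallel hardware.
y = x @ W
assert y.shape == (batch, d_out)
```

On a CPU these dot products run largely in sequence; a GPU dispatches them across thousands of cores at once, which is why the same silicon that renders game frames also trains neural networks.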

The competitive landscape is evolving rapidly. AMD’s Instinct MI300X offers 192 gigabytes of memory, 2.4 times that of Nvidia’s H100, and Microsoft Azure deploys it at scale. Google’s custom tensor processing units power much of the company’s AI infrastructure. Amazon’s Trainium2 and Inferentia2 chips are expected to handle 35% of new Amazon Web Services AI workloads in 2025. OpenAI is designing its own chip with Broadcom and TSMC, targeting production in 2026.

Among startups, Groq raised $750 million at a $6.9 billion valuation in September 2025, building an audience of more than two million developers for its ultra-low-latency inference platform.

Yet the single most consequential player designs no chips at all. Taiwan Semiconductor Manufacturing Co. fabricates the vast majority of the world’s most advanced AI processors, holding 70% of the global foundry market and dedicating more than 28% of total wafer capacity to AI chips. Nvidia, AMD, Google, Amazon and Apple all depend on TSMC’s leading-edge processes. TSMC’s revenue grew 44% year-over-year in the second quarter of 2025, with net profit margins near 43%.

This concentration creates geopolitical vulnerability. Taiwan sits in one of the world’s most contested regions. TSMC’s $165 billion Arizona investment represents an attempt at geographic diversification, but replicating Taiwan’s manufacturing ecosystem is generational work. U.S. export controls on advanced chips to China have added complexity. China represented 26% of Nvidia’s revenue in fiscal 2022; that share has since dropped to 13%.

Global cloud infrastructure spending reached $102.6 billion in the third quarter of 2025 alone, up 25% year-over-year and the fifth consecutive quarter above 20% growth. Amazon Web Services, Microsoft Azure and Google Cloud account for roughly 66% of global cloud spending.

The hyperscalers’ capital expenditure dwarfs most national infrastructure budgets. Combined capital spending for Amazon, Microsoft, Google, Meta Platforms and Oracle is expected to surpass $600 billion in 2026, up 36% year-over-year. Goldman Sachs projects total hyperscaler capital expenditures from 2025 to 2027 will reach $1.15 trillion, more than double the $477 billion from 2022 to 2024. Roughly 75%, about $450 billion, targets AI infrastructure directly.

Investment in technology equipment and software reached 4.4% of U.S. gross domestic product in 2025, nearly as high as the dot-com bubble peak. Amazon Web Services reported a $200 billion order backlog by the third quarter of 2025. Google Cloud’s backlog surged from $108.2 billion in the second quarter to $157.7 billion in the third.

Sovereign cloud is another emerging trend. Amazon Web Services plans a $5.3 billion region in Saudi Arabia and a 7.8 billion euro European Sovereign Cloud in Germany. Gartner forecasts sovereign-cloud infrastructure spending to reach $80 billion in 2026, up 35% year-over-year. In India, the Adani Group announced a $100 billion commitment for renewable-powered AI data centers by 2035, targeting five gigawatts of capacity. Google pledged $15 billion for AI infrastructure in India, including a gigawatt-scale hub in Visakhapatnam.

Large language models and multimodal foundation models are products of the infrastructure sustaining them, and their development has become one of technology’s most expensive endeavors.

Training costs have escalated sharply. The original Transformer architecture, introduced in 2017, cost roughly $900 to train. GPT-3, with 175 billion parameters, required an estimated $5 million in compute. Training GPT-4 reportedly exceeded $100 million. Next-generation frontier models may cost more than $1 billion to train.
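Those figures line up with the standard back-of-envelope rule that training takes roughly 6 × N × D floating-point operations for N parameters and D training tokens. The sketch below applies it to GPT-3’s published 175 billion parameters and roughly 300 billion training tokens; the blended price per FLOP is an assumed illustrative value, not a reported number:

```python
# Back-of-envelope training-cost estimate using the common ~6 * N * D
# FLOPs approximation (N = parameters, D = training tokens).
def train_cost_usd(params, tokens, usd_per_flop):
    flops = 6 * params * tokens
    return flops * usd_per_flop

# GPT-3-scale example: 175B parameters, ~300B tokens, at an assumed
# blended rate of ~$1.6e-17 per FLOP of rented compute.
cost = train_cost_usd(175e9, 300e9, 1.6e-17)
print(f"~${cost / 1e6:.1f}M")  # ~$5.0M
```

At those assumptions the estimate lands near the $5 million figure for GPT-3; scaling parameters and tokens up by an order of magnitude each moves the same arithmetic into the hundreds of millions.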

The competitive landscape has fragmented. OpenAI’s GPT-5 family offers the most capable general-purpose reasoning systems as of late 2025, but its early lead has narrowed. Anthropic’s Claude models have gained ground in enterprise deployments. Google’s Gemini 2.5 Pro competes across multiple modalities. Meta Platforms’ LLaMA open-weight models have reached one billion downloads.

Perhaps the most disruptive entrant is DeepSeek, the Chinese startup whose models achieved frontier performance at a fraction of the cost of Western systems. DeepSeek’s mixture-of-experts architecture activates only 37 billion of 671 billion parameters per token, dramatically reducing computational overhead. Its FP8 mixed-precision training cut costs while maintaining quality. The company’s aggressive pricing, at $0.28 per million input tokens, triggered a broad price war. DeepSeek’s innovations contributed to a brief $600 billion single-day decline in Nvidia’s market capitalization in January 2025.
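The mechanism behind that parameter sparsity is straightforward to sketch: in a mixture-of-experts layer, a small router scores every expert for each token, but only the top-k experts actually run, so only a fraction of the layer’s parameters is active per token. The toy sizes below are illustrative, not DeepSeek’s actual configuration:

```python
import numpy as np

# Minimal mixture-of-experts routing sketch. A router scores all experts
# per token, but only the top-k run -- so only a slice of total
# parameters is active. Sizes are toy values, not DeepSeek's real config.
rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2

router_W = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(token):
    scores = token @ router_W              # router logits, one per expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only the selected experts' weight matrices are touched for this token.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

out = moe_forward(rng.standard_normal(d_model))
assert out.shape == (d_model,)
print(top_k / n_experts)  # 0.25 -- fraction of expert params active per token
```

With 2 of 8 experts selected, a quarter of the expert parameters are touched per token; DeepSeek’s 37-of-671-billion ratio works out to under 6%.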

The topmost layer is where the entire stack becomes visible: products and platforms built with AI as foundational architecture. This is where revenue is generated and where the return on trillions of dollars in investment must ultimately be realized.

Horizontal AI applications reached $8.4 billion in 2025, the largest and fastest-growing category, expanding 5.3 times year-over-year. Copilots dominate with 86% of that spending, led by ChatGPT Enterprise, Claude for Work and Microsoft Copilot. The enterprise copilot market reached $2.8 billion to $4.2 billion in 2024 and is forecast to reach $25.3 billion to $46.5 billion by 2033.

Code generation has become AI’s first true killer use case. The market for AI coding tools grew from $550 million to $4 billion in 2025. Half of all developers now use AI coding tools daily, with teams reporting productivity gains exceeding 15%. GitHub Copilot was first to market, but Cursor, a VS Code fork rebuilt around AI, captured significant share by shipping features faster, including project-wide context and multi-file editing.

The distinction between copilots and agents is not merely semantic. Copilots assist: drafting emails, summarizing meetings, suggesting code. Agents act: updating customer relationship management records, executing workflows, booking vendors and making constrained decisions. Microsoft’s Agent Mode, Copilot Studio’s computer-use capability and Salesforce’s Agentforce platform all represent steps toward this more autonomous paradigm.

Industry-specific applications are proliferating. In healthcare, ambient AI scribes, systems that listen to clinical encounters and automatically generate documentation, grew into a $600 million market in 2025, with providers reporting a 42% reduction in documentation time. Legal services AI tools reached $650 million in spending. The AI agent market itself is growing at a 46.3% compound annual rate, expanding from $7.84 billion in 2025 to an estimated $52.62 billion by 2030.
