May 9, 2026. By Lenny Mendonca (former chief economic and business adviser to Governor Gavin Newsom of California and senior partner emeritus at McKinsey & Company) and Martin Neil Baily (senior fellow emeritus in economic studies at the Brookings Institution and former chairman of US President Bill Clinton’s Council of Economic Advisers). This article is provided by Project Syndicate (PS).
While many would point to the financial system in response to the question of the biggest risk posed by AI, our attention would be better directed toward labour markets.
Financial concerns are certainly understandable. Even in 2026, the spectre of 2008 haunts every conversation about economic risk. When Lehman Brothers collapsed and the global banking system teetered, governments faced a momentous choice: either bail out the banks with public money or watch the financial system implode. In the United States, policymakers chose a bailout, encouraging future risk-taking and enraging taxpayers who bore the cost.
But US regulators then spent the following decade building a new line of defence, which is now embedded in the global banking architecture. In the process, they offered a roadmap for addressing any systemic risks now accumulating within the AI industry.
To be sure, the Financial Stability Board (FSB) warns that regulatory frameworks designed for monitoring AI are still in their early stages. But the risks remain manageable. The AI industry has arrived at a juncture that should look familiar to anyone who remembers the pre-2008 financial system. Market concentration is extreme, the interconnections between major players are deep, and the industry’s critical infrastructure runs through single points of failure.
Before 2008, risk in the financial system was assumed to be widely distributed. It was not. Leverage was hidden in off-balance-sheet vehicles, counterparty exposures were opaque, and the failure of a single institution could cascade unpredictably through the entire system. Regulation was scattered across numerous entities, none of which had a complete picture of what was happening. Regulators had no framework for thinking about systemic risks, and no way to designate which firms would bring down others if they failed.
The AI industry has a similar concentration problem.
According to Menlo Ventures, only three companies - Anthropic, OpenAI, and Google - control roughly 88pc of the enterprise large language model market. And the hardware layer is even more concentrated, with TSMC completely dominating advanced-node semiconductor manufacturing, raising concerns about a potential global compute bottleneck. When a 7.4-magnitude earthquake struck Taiwan in April 2024, it temporarily disrupted semiconductor production and reminded the world how geographically concentrated this infrastructure has become.
Fortunately, the central innovation of post-2008 financial regulation has proven effective: identify the institutions whose failure would be catastrophic, and require them to hold sufficient total loss-absorbing capacity (equity and long-term debt that can be written down) to fail safely. The results are clear. A Congressional Research Service analysis of US bank failures shows a sharp decline in failures following the post-crisis regulatory reforms.
Although none of the tools introduced after the financial crisis translates directly to AI (banks hold financial assets that can be valued and stress-tested, whereas AI systems rely on training data, model weights, and compute capacity), the underlying regulatory logic still applies. Regulators need only consider three adaptations.
The first is systemic designation and disclosure. Regulators and standard setters should identify which AI providers, cloud platforms, and chip manufacturers have become critical infrastructure for the financial system. The FSB’s October 2025 report on AI monitoring acknowledged that financial institutions are increasingly dependent on a small number of major technology providers for AI capabilities, but that monitoring efforts remain at an “early stage,” owing to data gaps and a lack of standardised taxonomies. Fixing that is the first step.
The second is operational resilience requirements, which should serve as a proxy for capital buffers. Instead of total loss-absorbing capacity or capital requirements, systemically important AI providers would have to demonstrate redundancy, failover capacity, and genuine substitutability. Financial firms relying on a single AI provider should face concentration limits analogous to the exposure rules that prevent banks from lending too much to a single counterparty. The FSB’s Third-Party Risk Management and Oversight Toolkit already provides a framework. Regulators should use it more aggressively.
The third is stress testing for AI-driven correlated failures. The European Systemic Risk Board warns that because AI models are “history-constrained” - trained on past data - they are inherently poor at predicting tail events outside their training distribution. This is precisely the kind of model risk that stress tests are designed to reveal. Regulators should develop AI-specific stress scenarios - the failure of a major cloud provider, a regulatory shock to a dominant model, or a geopolitical disruption to chip supply chains - and require financial institutions using AI in critical functions to demonstrate that they can absorb the shock.
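What might such an exercise look like in practice? The sketch below is a purely illustrative example, not a method proposed by any regulator or by the authors: the provider names, workload shares, and supervisory threshold are assumptions invented for the illustration. It simply shows that a concentration-aware outage scenario of the kind described above can be specified and checked mechanically.

```python
# Hypothetical sketch of an AI-concentration stress scenario.
# All provider names and numbers are illustrative assumptions, not regulatory data.

from dataclasses import dataclass

@dataclass
class Dependency:
    provider: str              # AI or cloud provider a critical function relies on
    share_of_workload: float   # fraction of the function's workload on this provider
    failover_ready: bool       # can the workload shift to a substitute quickly?

def stressed_capacity(deps: list[Dependency], failed_provider: str) -> float:
    """Return the fraction of a critical function that keeps running
    if `failed_provider` suffers a prolonged outage."""
    surviving = 0.0
    for d in deps:
        if d.provider != failed_provider:
            surviving += d.share_of_workload
        elif d.failover_ready:
            # Workload on the failed provider survives only if a tested
            # failover path to a substitute exists.
            surviving += d.share_of_workload
    return surviving

# Illustrative portfolio: one critical function split across three providers.
credit_scoring = [
    Dependency("ProviderA", 0.70, failover_ready=False),
    Dependency("ProviderB", 0.20, failover_ready=True),
    Dependency("ProviderC", 0.10, failover_ready=True),
]

MIN_SURVIVING_CAPACITY = 0.60  # assumed supervisory threshold, for illustration only

for scenario in ("ProviderA", "ProviderB", "ProviderC"):
    capacity = stressed_capacity(credit_scoring, scenario)
    status = "PASS" if capacity >= MIN_SURVIVING_CAPACITY else "FAIL"
    print(f"Outage of {scenario}: {capacity:.0%} of workload survives -> {status}")
```

In this toy portfolio, the function fails the test only when its dominant provider goes down without a tested failover path - which is exactly the concentration problem such a scenario is meant to expose.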
Fortunately, AI-related failures would not necessarily trigger a 2008-style financial crisis. If regulators act with the appropriate urgency, the potential systemic financial risk from AI is manageable.
But what AI may do to people who work for a living is a deeper and far more consequential challenge. The scale of potential labour displacement from AI is no longer hypothetical. IBM has replaced hundreds of people in its HR department, where AI now handles 94pc of routine tasks. Salesforce has reduced hiring for engineering and customer service roles, and Shopify’s CEO has said that new hires will not be approved unless hiring managers can demonstrate that AI cannot do the job.
The World Economic Forum’s "Future of Jobs Report 2025" projects that 39pc of workers’ core skills will be disrupted by 2030, and McKinsey & Company estimates that half of today’s work activities could be automated between 2030 and 2060.
What is to be done?
Although research from the Harvard Kennedy School suggests that retraining for AI-exposed occupations can entail substantial earnings penalties, that is no reason to abandon this solution. Other countries, such as Denmark and Singapore, have invested heavily in training, and their programs work well.
In any case, getting employers involved is essential because training programs should equip workers with the skills that are in demand now and that will be needed in the future. Ensuring access to high-speed broadband and digital literacy training is crucial.
Regulators have the tools to prevent an AI-driven financial crisis. But we still need policymakers to get serious about ensuring that the AI revolution works for everyone, not only the few who own the underlying technologies.
PUBLISHED ON May 09, 2026 [ VOL 27, NO 1358 ]