Financial Markets’ Reliance on AI Could Lead to Global Crisis

By Diego Valverde | Journalist & Industry Analyst - Wed, 07/23/2025 - 10:30

The increasing reliance of financial markets on a small number of general-purpose AI models is creating unprecedented systemic risk that could culminate in a financial crisis, warns the US Securities and Exchange Commission (SEC). Such an event could exceed past downturns in scale because of the possibility of correlated failures and mass herd behavior.

SEC Chair Gary Gensler says it is “nearly unavoidable” that AI will trigger a future financial crisis. The underlying logic rests not on AI itself but on its centralized implementation. If a large number of financial institutions base their analysis, operations, and strategies on the same foundational models, any bias, error, or reaction to a specific event in the base model replicates instantly across the entire system. That replication creates a massive, automated herd movement that current control mechanisms cannot contain.
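The replication effect can be illustrated with a toy Python simulation. All names and numbers below are hypothetical, not drawn from any real trading system: it simply contrasts institutions that all consume one shared model's signal with institutions whose heterogeneous models add independent noise to the same market shock.

```python
import random

def shared_model_signal(shock):
    # Every institution consumes the same foundation model, so a single
    # biased threshold produces the identical decision everywhere.
    return "SELL" if shock > 0.5 else "HOLD"

def independent_signal(shock):
    # Heterogeneous in-house models disagree slightly; the noise
    # decorrelates their decisions on a borderline event.
    return "SELL" if shock + random.uniform(-0.3, 0.3) > 0.5 else "HOLD"

random.seed(0)
shock = 0.55          # a borderline market event, just above the threshold
institutions = 1000

shared = sum(shared_model_signal(shock) == "SELL" for _ in range(institutions))
diverse = sum(independent_signal(shock) == "SELL" for _ in range(institutions))

print(f"Shared model: {shared}/{institutions} institutions sell at once")
print(f"Diverse models: {diverse}/{institutions} institutions sell")
```

Under a shared model, all 1,000 institutions sell in the same instant; under diverse models, only a fraction do, which is the difference between an ordinary repricing and an automated stampede.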

The situation presents fundamental structural differences from previous crises, such as the dot-com bubble of the late 1990s. At the time, the collapse resulted from the overvaluation of individual companies with unproven business models. The present risk, while also including extremely high market valuations for leading AI companies, introduces a new variable: horizontal risk.

The global financial system is regulated vertically. This means regulations apply to individual entities such as banks, investment funds, or brokerages. However, AI adoption creates a layer of horizontal technological dependence. A handful of technology corporations, which exist outside the traditional financial regulatory perimeter, develop and control the AI models that thousands of regulated financial entities consume.

This paradigm, as Wired reports, establishes a large-scale single point of failure. An error in an AI would not affect a single institution; it would affect all institutions that depend on its infrastructure and algorithms. Supervisory tools are designed for the solvency of individual actors, not for the resilience of the underlying technological infrastructure that unifies their behaviors. The concentration of computing power and training data in fewer hands exacerbates this vulnerability, creating a de facto oligopoly in market intelligence.

Amplifying Factors

Beyond the risk of centralization, additional factors amplify the threat and require consideration for a comprehensive analysis.

According to the Financial Stability Board (FSB), a potential crisis would unfold drastically faster than previous ones. Algorithmic operations execute in milliseconds, so an AI-induced sell-off could trigger a collapse in minutes, far faster than human regulators or exchange circuit breakers can react. The ability of these systems to process and interpret news or market data almost instantly could generate positive feedback loops that exponentially accelerate declines.
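The feedback-loop dynamic can be sketched with another toy simulation. All parameters here are hypothetical: algorithms watch the latest price drop, and any drop above their trigger threshold prompts selling that deepens the next drop, so the decline feeds on itself.

```python
def cascade(price, initial_drop=0.03, threshold=0.02, impact=0.025, ticks=8):
    # Tick 0: an external shock moves the price down by initial_drop.
    prices = [price, price * (1 - initial_drop)]
    for _ in range(ticks):
        last_move = (prices[-2] - prices[-1]) / prices[-2]
        if last_move > threshold:
            # Algorithms that observe a drop above their trigger sell,
            # and their selling amplifies the next tick's decline.
            prices.append(prices[-1] * (1 - impact - 0.5 * last_move))
        else:
            # Below the trigger, no algorithmic selling: the loop damps out.
            prices.append(prices[-1])
    return prices

path = cascade(100.0)
total_decline = 1 - path[-1] / path[0]
print(f"Price path: {[round(p, 2) for p in path]}")
print(f"Total decline: {total_decline:.1%}")
```

With these illustrative parameters the initial 3% shock compounds tick after tick into a far deeper fall, which is why millisecond-scale loops can outrun human intervention and circuit breakers.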

Furthermore, model opacity, known as the "black box" effect, presents a significant challenge. Many deep learning systems are so complex that even their developers cannot fully explain the reasoning behind a specific decision. This lack of interpretability makes risk management extremely difficult: when a financial institution cannot fully validate or audit a model because it does not understand the model's internal processes, regulatory compliance and due diligence become fundamental challenges.

Finally, the global regulatory landscape is insufficient. Deloitte’s 2025 financial services regulatory outlooks reveal that government agencies face a dilemma in overseeing a technology that is transversal across multiple sectors and jurisdictions. Regulating AI models at their source could stifle innovation, but failing to do so leaves a critical gap in financial stability. The need to create new regulatory frameworks that specifically address the risks of systemic, general-purpose AI models is under debate, as seen in the EU’s AI Act. The foreseeable future demands unprecedented collaboration among financial regulators and technology experts to develop standards, stress tests, and oversight protocols that can mitigate this new form of systemic risk before its consequences materialize.
