
The Potential Risks of AI in Finance

 

As the financial industry becomes increasingly reliant on artificial intelligence (AI), concern is growing about the potential fallout. Gary Gensler, the SEC chair, has raised an alarm about the risks of AI in financial markets, highlighting the limits of regulatory bodies in addressing these challenges.

One significant risk is the rise of AI-powered “black box” trading algorithms. These algorithms, driven by deep learning models, make decisions that humans cannot easily interpret. The danger is that a single improperly tuned algorithm could trigger panic selling among otherwise well-tuned algorithms, potentially cascading into a market crash.
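To make the cascade mechanism concrete, the toy simulation below shows how one agent with an overly tight drawdown tolerance can start selling early, and how each forced sale deepens the drawdown until otherwise well-tuned agents follow. This is a rough sketch, not any real trading system; the agent count, tolerances, and price-impact figures are invented assumptions.

```python
import random

# Toy sketch of cascading "panic" selling among threshold-following agents.
# All parameters (agent count, tolerances, price impact) are invented for
# illustration and do not model any real market or trading system.

random.seed(42)

N_AGENTS = 50
price = 100.0
peak = price
# Most agents tolerate a 5% drawdown before selling; one is mis-tuned at 0.5%.
tolerances = [0.05] * (N_AGENTS - 1) + [0.005]
has_sold = [False] * N_AGENTS

for step in range(20):
    # Small random market move each step.
    price *= 1 + random.uniform(-0.01, 0.005)
    peak = max(peak, price)
    drawdown = (peak - price) / peak

    # Each agent that has not yet sold checks its tolerance; every forced sale
    # pushes the price down another 1%, which can trip the next agent.
    for i in range(N_AGENTS):
        if not has_sold[i] and drawdown > tolerances[i]:
            has_sold[i] = True
            price *= 0.99
            drawdown = (peak - price) / peak

    print(f"step {step:2d}  price {price:7.2f}  agents sold {sum(has_sold):2d}")
```

The point is the feedback loop: each sale deepens the drawdown that the remaining agents react to, so one threshold crossing can pull many others over with it.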

Gensler also highlights the “apprentice effect,” where individuals with similar training backgrounds tend to develop similar AI models. This homogeneity in AI models, combined with regulatory constraints, could result in a synchronized market response that amplifies the risk of a crash.

Regulating these AI models presents a significant challenge. The complexity of AI algorithms makes it difficult for regulators to anticipate, let alone prevent, the market crashes they might contribute to. Gensler points out that if deep learning predictions were explainable, they would not be used in the first place: the appeal of deep learning lies precisely in capturing patterns too complex for simpler, transparent models. This lack of transparency raises concerns about the potential for biased decision-making in areas such as assessing creditworthiness.

 


The growing adoption of deep learning in finance is predicted to increase systemic risk. Gensler suggests that raising capital requirements for AI-dependent financial institutions could be part of a solution. However, defining which institutions count as AI-dependent will become harder as AI integration spreads. Subjecting AI-generated results to a “sniff test” against more transparent linear models is also problematic, since the cases where the two disagree are precisely the ones the deep model was built to capture.
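Still, the comparison itself is simple to set up. The sketch below illustrates one way such a “sniff test” might look: score the same applicants with an opaque model and a transparent logistic regression, then flag large disagreements for review. The synthetic data, the choice of models, and the 0.3 threshold are assumptions made for this example, not anything prescribed by Gensler or the SEC.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Sketch of a "sniff test": compare an opaque model's credit-risk scores with
# those of a transparent linear baseline and flag applicants where the two
# disagree strongly. Data, models, and the threshold are synthetic assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # synthetic applicant features
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

opaque = GradientBoostingClassifier().fit(X, y)   # stands in for a deep model
linear = LogisticRegression().fit(X, y)           # transparent baseline

p_opaque = opaque.predict_proba(X)[:, 1]
p_linear = linear.predict_proba(X)[:, 1]

# Flag cases where the opaque score departs sharply from the linear baseline.
disagreement = np.abs(p_opaque - p_linear)
flagged = np.where(disagreement > 0.3)[0]
print(f"{len(flagged)} of {len(X)} applicants flagged for review")
```

The flagged cases are exactly where the opaque model adds something the linear one cannot express, which is why Gensler regards the check as an imperfect safeguard rather than a solution.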

Another challenge presented by AI is its reliance on vast amounts of data. Models trained on the same datasets risk inheriting and amplifying the weaknesses in that data, which can lead to correlated predictions, market crowding, and herding. Additionally, when control of key datasets is concentrated in a few providers, that concentration creates single points of failure, however extensive the data.
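A rough way to picture the shared-data effect is to train models for nominally independent firms, once on the same dataset and once on separate samples, and compare how closely their predictions track each other. Everything below is synthetic, and the model choice (random forests standing in for whatever firms actually deploy) is an assumption for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of the shared-data problem: firms that fit models on the same training
# set typically produce predictions that track each other even more closely
# than models fit on separate samples, a recipe for herding. Data is synthetic.

rng = np.random.default_rng(1)

def make_data(n=500):
    X = rng.normal(size=(n, 4))
    y = X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)
    return X, y

X_shared, y_shared = make_data()
X_a, y_a = make_data()
X_b, y_b = make_data()
X_test, _ = make_data(200)

# Firms A and B both train on the shared dataset.
same_a = RandomForestRegressor(random_state=0).fit(X_shared, y_shared)
same_b = RandomForestRegressor(random_state=1).fit(X_shared, y_shared)

# Firms C and D train on their own independent samples.
diff_a = RandomForestRegressor(random_state=0).fit(X_a, y_a)
diff_b = RandomForestRegressor(random_state=1).fit(X_b, y_b)

corr_same = np.corrcoef(same_a.predict(X_test), same_b.predict(X_test))[0, 1]
corr_diff = np.corrcoef(diff_a.predict(X_test), diff_b.predict(X_test))[0, 1]
print(f"prediction correlation, shared data:   {corr_same:.3f}")
print(f"prediction correlation, separate data: {corr_diff:.3f}")
```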

While AI offers significant potential in optimizing trading strategies and assessing creditworthiness, Gensler’s warnings emphasize the need for a delicate balance between innovation and regulation. The challenges posed by AI algorithms, from the opaqueness of “black box” trading to potential biases in credit assessments, require not only technological proficiency but also regulatory foresight.

Navigating the AI-driven future in finance requires a comprehensive understanding of systemic interdependencies and the ability to mitigate potential risks.

LeacStat 2023