
Are Large Language Models Finance’s Second Shot At AI?

Source: forbes.com

 

Ever since AI first started making headlines in finance, it has been a story of great promise and anticipation—and limited real-world impact.

As far back as 2018, the industry’s cautious approach to AI adoption was coming in for comment. Little appears to have changed since then. In the U.K., for example, the Bank of England/FCA 2022 survey on machine learning (ML) adoption in finance found that while 72% of responding firms were using or developing ML applications, the median number of such applications was hovering around 20 to 30—progress, but hardly transformational.

 

AI In Finance: A Broad But Shallow Footprint

To be fair, AI has been used in a wide range of areas. In banking, this has included customer engagement (retention and cross-sell), credit (early warning and collection), risk and compliance (fraud detection, staff surveillance for mis-selling or insider trading), and natural language processing (NLP) systems to extract relevant data from unstructured documents like annual reports. Insurers have been exploring AI too, from partial automation of pricing and underwriting processes to triage and assessment of claims.

However, despite this breadth of use cases, AI has simply not had a big enough impact on the industry—yet. There are no examples of meaningful AI-powered challengers to incumbents. Very few of the existing AI use cases are considered business-critical by incumbent banks and insurers—just 20%, according to the 2022 U.K. survey. None of the industry’s big, hairy challenges—financial inclusion, the fight against financial crime or aligning finance to climate objectives—have seen AI-enabled breakthroughs.

 


 

Could the advent of LLMs change that?

Based on conversations with over 50 leading financial institutions across North America and Europe, I believe—with cautious optimism—that with LLMs, this time really could be different.

To understand why, consider the reasons behind ‘traditional’ AI’s modest impact on the industry: lack of adequate or reliable data, talent constraints, cultural barriers or resistance to change, and “last-mile” operationalization challenges (embedding AI system outputs into existing decision-making or operational processes and systems).

In theory, LLMs could help address all of these challenges:

• LLM applications work with unstructured data. This can dramatically expand the pool of usable data—both internal and external—and the range of applicable use cases.

• LLMs are built on pre-trained foundation models and require far less effort to adapt to specific contexts than traditional AI approaches. This could allow even firms without established data science and technology capabilities to leapfrog into an AI-enabled future.

• OpenAI’s decision to make ChatGPT widely available has democratized AI. Rightly or wrongly, many business leaders and their (non-technical) staff now feel confident enough about the technology to reimagine their products and processes using LLMs, making adoption much easier.

• The output of an LLM application is typically in a form users already recognize, making last-mile adoption easier. In many cases, the output simply mimics and replaces the preparatory work a junior colleague would have executed on their behalf.

Collectively, these factors could push AI adoption higher in two crucial ways.

First, LLMs can dramatically increase the value extracted from established AI use cases. For instance, they can take customer help-desk applications beyond pre-scripted pathways, making them more attractive and usable. They could also turbocharge traditional NLP applications through greater accuracy and less need for task-specific training data, dramatically increasing operations automation as a result.

Second, LLMs can extend AI to areas previously seen as unsuitable for automation. For instance, LLMs can read and write software code. They can help generate reports or summaries in standard formats, such as initial drafts of model validation, customer due diligence or financial crime investigation reports.

Most financial institutions are also experimenting with sophisticated Q&A applications that allow staff to tap the organization’s internal knowledge base, supplemented with curated external sources where appropriate. For instance, investment advisors can draw upon internal company research when advising clients. Underwriters can tap internal guidelines and past examples when evaluating new policies.
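The core of such internal Q&A applications is a retrieval step: find the knowledge-base entries most relevant to a staff query, then hand them to the model as context. As a toy illustration only—real systems use learned embeddings and a vector store, whereas here a crude bag-of-words cosine similarity stands in, and the knowledge-base entries are invented:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude bag-of-words 'embedding' standing in for a learned one."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

# Hypothetical internal knowledge-base entries.
docs = [
    "Underwriting guideline: commercial property policies require a recent fire-safety inspection.",
    "Equity research note: the retail sector faces margin pressure from rising logistics costs.",
    "HR policy: annual leave requests must be submitted two weeks in advance.",
]

print(retrieve("What do underwriters need for a commercial property policy?", docs))
```

In a production deployment, the retrieved passages would then be inserted into the LLM prompt so that answers are grounded in the firm’s own documents rather than the model’s general training data.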

 


 

Are LLMs too risky for a heavily regulated industry?

The industry has been preparing to manage AI risks for at least half a decade. LLMs do add new dimensions to these risk considerations. These include concerns around inaccurate or false outputs (“hallucinations”), the security and privacy implications of using models built and owned by third parties, uncertainty over intellectual property rights, the reputational risk of customer-facing AI, and the inherent risk of unjust bias in models trained on unrepresentative public data sets.

However, existing regulatory requirements (around model, data and third-party risk management, privacy and security, and fair treatment of customers) provide a robust foundation for AI adoption at scale, including LLMs. For example, banks—and increasingly, insurers—have controls in place to assess whether models are accurate enough for their designated purpose and to confirm they remain so in production. While LLMs present some new model-risk challenges, from interpretability to conceptual soundness, the existing framework can be enhanced to meet them (using the embeddings layer to understand how the LLM produces its output and to debug inaccurate results, for example).

In addition, the industry is likely to use LLMs to replace interim layers of human involvement, not remove humans from the loop entirely. Because most existing finance processes contain multiple “maker-checker” layers, such partial automation can still deliver dramatic efficiency gains (50% or more) without needing to be 100% reliable.
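The arithmetic behind that claim can be made concrete with illustrative numbers (these are assumptions, not figures from the article): suppose a report takes four hours to draft and one to review. If an LLM produces the draft and the human checker’s time doubles to compensate for imperfect reliability, total human effort still falls sharply:

```python
# Illustrative numbers only: hours of human effort per report.
draft_hours, review_hours = 4.0, 1.0
baseline = draft_hours + review_hours  # fully manual process: 5.0 hours

# The LLM writes the draft; the human checker spends extra time
# verifying it because the model is not 100% reliable.
llm_review_hours = 2.0
with_llm = llm_review_hours  # human effort drops to 2.0 hours

saving = 1 - with_llm / baseline
print(f"Human effort saved: {saving:.0%}")  # prints "Human effort saved: 60%"
```

Even with the checking burden doubled, the maker-checker structure means the process remains controlled while most of the human drafting effort disappears.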

With LLMs, the finance industry may have a second shot at a massive AI-led transformation. Uncontrolled adoption holds clear dangers, but a strong risk and compliance DNA leaves the industry well-placed to manage the risks.

LeackStat 2023