How the financial-services industry can benefit from generative AI

 

It has been almost a year since OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer) burst onto the scene and propelled the term generative artificial intelligence (AI) into public discourse. Generative AI is now among the top board-level conversations in many industries. For the financial-services sector, however, AI is not new: the industry has used AI and automation for years to reduce costs and paper usage, generate data-driven insights and improve customer service.

With this foundation in place, the financial-services industry is well positioned to be an early adopter of generative-AI (GenAI) technology, opening up opportunities to shape the future of finance. However, we are well into the hype cycle surrounding generative AI, and industry leaders must now focus on identifying where the technology can add genuine commercial value to their businesses today. Those who navigate this decision-making process and align it with their strategic goals will be ahead of the pack when the hype subsides and the real commercial potential of generative AI takes hold.

 

Deepening digitisation

A notable advantage of generative AI for the financial-services sector is that it can reduce a business’s heavy reliance on legacy information-technology (IT) infrastructure. This transformation is becoming increasingly critical in the face of mounting and ever-evolving data and security requirements, particularly for large banks and financial institutions.

Forward-thinking banks are already introducing AI platforms that can facilitate software migrations away from legacy systems and archaic programming languages toward more modern, cost-effective alternatives, marking a major step in a financial institution’s digitisation journey. In this nascent age of generative AI, there will not be one model to rule them all; an open platform will allow businesses to use foundation models from multiple providers as well as their own, as sketched below.
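
To make this concrete, the simplified Python sketch below shows how a migration workflow might hand a legacy routine to whichever foundation model an open platform exposes. The call_llm wrapper, the COBOL fragment and the prompt wording are illustrative assumptions rather than any vendor’s actual tooling, and any output would still need review by the bank’s own engineers.

```python
from typing import Callable

# A minimal sketch, not a real migration tool: "call_llm" stands in for whichever
# foundation model (external provider or the bank's own) the platform routes to.
LLMClient = Callable[[str], str]

LEGACY_COBOL = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. INTEREST-CALC.
       PROCEDURE DIVISION.
           COMPUTE WS-INTEREST = WS-BALANCE * WS-RATE / 100.
"""

def build_migration_prompt(snippet: str, target_language: str = "Python") -> str:
    """Wrap a legacy snippet in translation instructions for the chosen model."""
    return (
        f"Translate the following COBOL routine into idiomatic {target_language}.\n"
        "Preserve the business logic exactly and keep the result unit-testable.\n\n"
        f"{snippet}"
    )

def migrate_snippet(call_llm: LLMClient, snippet: str) -> str:
    """Return a draft translation for human review; it is not production-ready code."""
    return call_llm(build_migration_prompt(snippet))

# Usage, with any provider wrapped as a simple prompt-in, text-out function:
#   draft = migrate_snippet(my_provider_client, LEGACY_COBOL)
```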

Furthermore, generative AI gives financial institutions an opportunity to take digital customer service to a new level. The bar for customer experience is constantly being raised as customers look for greater personalisation and efficiency, and many financial-services organisations are turning to generative AI to meet this demand and improve customer care. Advances in AI and large language models (LLMs) allow banks to create tailored, hyper-personalised customer experiences: catering to different tones of voice, offering individual recommendations and creating more human-like interactions at much faster speeds.
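
As a simple illustration of what hyper-personalisation can look like in practice, the sketch below assembles a customer’s preferred language, tone and product holdings into a single prompt for a language model. The profile fields, names and wording are hypothetical assumptions for demonstration, not any bank’s actual schema or system.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    name: str
    preferred_language: str                      # e.g. "English", "Spanish"
    tone: str                                    # e.g. "formal", "conversational"
    current_products: list[str] = field(default_factory=list)

def build_service_prompt(profile: CustomerProfile, customer_query: str) -> str:
    """Combine the customer's profile with their query so the model can tailor
    its tone, language and recommendations in a single response."""
    products = ", ".join(profile.current_products) or "none on record"
    return (
        f"Respond in {profile.preferred_language} using a {profile.tone} tone.\n"
        f"The customer currently holds: {products}.\n"
        "Answer the question and, only where relevant, suggest one suitable product.\n\n"
        f"Customer ({profile.name}): {customer_query}"
    )

profile = CustomerProfile("A. Patel", "English", "conversational", ["easy-access saver"])
prompt = build_service_prompt(profile, "Is there a better home for my savings?")
# "prompt" would then be sent to the bank's chosen LLM, inside its guardrails.
```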

AI-infused digital-banking platforms will solve customer issues faster and more effectively, enabling customers to identify the products and services that fit their specific needs, saving them time and money. The ability to operate in different languages and provide text-to-voice solutions is an additional valuable benefit. These use cases are already having an impact, with examples such as NatWest’s Cora and TSB Bank’s Smart Agent being deployed to help staff deal with surges in digital customer-service demand.

 

Building the right guardrails

A recent report from IBM indicates that some chief executive officers (CEOs) are now banning the internal use of AI, particularly in industries such as financial services (IBM’s 2023 report “CEO decision-making in the age of AI”). Their concerns are understandable: when AI is not deployed responsibly, it carries real potential for misuse and harm, especially in the financial-services sector, in which privacy and security are paramount. However, when AI is deployed safely and transparently within a strong governance framework, organisations can mitigate these risks and add tremendous value.

To safely harness the power of AI, financial institutions should prioritise building the right guardrails: an extensive AI framework built on the core principles of privacy, transparency, responsibility and ethics. These principles should anchor both strategy and governance. Leaders should define a corporate AI strategy, monitor AI competencies, outline roles throughout the AI lifecycle, develop risk-management frameworks and cement environmental, social and corporate governance (ESG) principles. It will also be important to take employees on the journey by providing AI-training programmes that build awareness, ensure accountability and support upskilling wherever necessary.

Businesses can also maximise their chances of success by working with the right partners. Delivering the next chapter in AI will be rewarding but challenging, and partnering with organisations that have deep technical knowledge and business-transformation experience will provide assurance and confidence in embracing this new frontier.

 

Navigating the IP challenge

Just as regulators were catching up to traditional AI and automation, generative AI has moved the goalposts and brought a plethora of new challenges.

Intellectual property (IP), both inbound and outbound, is among the top concerns. Outbound IP issues concern the risk of an AI-model provider using a client’s proprietary data elsewhere. Whilst this remains a worry for business leaders, some pre-existing regulations and frameworks, such as data-privacy laws and cloud-hosting agreements, are already equipped to deal with these issues.

The second area, inbound IP, is a more novel concern: if an AI-model provider is sued for IP infringement, your organisation could be drawn into that litigation. There are steps you can take to protect yourself, such as using models whose training data is identified, but our understanding of these security risks is still in its early days. The foundation models available on IBM’s watsonx platform, for example, have full data lineages, so you can verify exactly where the training data has come from, mitigating this particular risk.

Transparency is also a pressing issue. An important part of the Financial Conduct Authority’s (FCA’s) Consumer Duty is to ensure fair and transparent customer outcomes, and AI applications are not exempt from these obligations. Any AI framework implemented by a financial-services provider must be able to guarantee this while providing reasonable justifications and traceability for its decisions. The Consumer Duty and the General Data Protection Regulation (GDPR) are already major topics of discussion in the financial-services industry, and financial businesses must be prepared to explain how specific models and AI-driven technologies produce their outputs under the regulations in place.
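
One practical way to support that traceability, sketched below under assumed field names rather than any regulatory template, is to record every AI-assisted decision together with its inputs, model version and rationale, so the outcome can be explained and audited later.

```python
import hashlib
import json
from datetime import datetime, timezone

# A minimal sketch of a decision record for traceability; the identifiers and
# example values are illustrative assumptions, not a compliance standard.

def record_ai_decision(model_id: str, model_version: str,
                       inputs: dict, output: str, rationale: str) -> dict:
    """Capture what the model saw, what it produced and why, so the outcome can be
    explained to customers and regulators after the fact."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,
    }
    # Hash the record so any later tampering with the audit trail is detectable.
    record["integrity_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

entry = record_ai_decision(
    model_id="affordability-assistant",       # hypothetical internal model name
    model_version="2023-illustrative",
    inputs={"requested_product": "personal loan", "stated_income_band": "B"},
    output="referred to human adviser",
    rationale="income verification documents incomplete",
)
```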

 

Looking to the future of finance

The financial-services sector often moves ahead of other industries in its maturity, and this is especially true of technological advancement. Banking leaders have been fast out of the blocks in exploring the possibilities of generative AI, and this pace of innovation is set to continue. However, to fully harness the power of this transformational technology, financial institutions must first understand how they can use it to accelerate their overall digital strategies and what guardrails they should put in place to ensure it meets safety, security and compliance requirements.

LeackStat 2023