AI in finance: can regulation keep pace?



As financial services firms continue to face cost pressures and seek to compete through innovation, artificial intelligence has emerged as a key driver of change. AI has already improved laborious, yet fundamental tasks such as data processing, and many firms say it has had a positive impact on their business. 

Indeed, the CFA Institute’s recent research on the Future State of the Investment Industry found that nine in 10 (91%) of respondents whose firms have implemented AI and big data in their core business operations say the technology has helped their firm; 45% noted that AI allows staff to use their time more productively by automating repetitive activities.


How AI will reshape the investment industry

Given that 2023 was generative AI’s breakout year, we expect that large language models, like ChatGPT, will continue to push the boundaries of AI in the years to come. Other data science and machine learning methods will also help inform financial analysis and portfolio construction over the next decade, as investment professionals embed AI tools into the investment process. 

These developments carry the potential to create a more efficient, customer-centric, and data-driven asset management industry. AI tools and strategies are set to play a pivotal role in helping firms adapt to changing market dynamics and client demands.

AI has already begun to go beyond merely improving efficiency, providing deeper analytical insights and supporting more informed investment decision-making in the ever-competitive world of asset management. Our research found that approximately one in four organisations are already using AI and big data for decision-making in their core business (26%) and for customer service (23%), while approximately one in three (36%) are using them for risk management.




Regulating AI and machine learning 

The increased use of AI has also raised concerns about transparency, accountability, and ethical considerations within the industry. Its deployment carries risks that have the potential to undermine the trust and confidence of investors if not controlled appropriately.

These risks include data protection and privacy requirements with regard to how data is sourced and processed by AI tools, as well as issues of representativeness in data sampling techniques which can lead to potential biases in model outputs. Other risks relate to the accuracy and interpretability of models and the ‘black-box’ nature of many algorithms.

Events such as the recent AI Safety Summit, convened by the UK government, show that policy-makers are starting to take action on AI regulation. At the summit, 29 signatories, including the UK, China, Australia, and the EU, signed the Bletchley Declaration, a mutual acknowledgement that while AI has the potential to transform and enhance operations, it also poses significant risks.

While both the Summit and Declaration provide a framework for future AI developments, the work is far from over. It remains unclear exactly if or how governments plan to regulate the use of AI in specific industries or markets, and a globally consistent approach to regulation is likely still a long way off. 




Going above and beyond regulation

Given the rapid pace of AI development, regulation will always be playing catch-up. Important as policy frameworks are, it will not be enough to rely on regulators and politicians to agree on them. The onus must therefore fall on industry leaders to establish ethical and professional responsibilities for AI development and adoption, and to ensure that the technology is used in a responsible manner.

Professionals — including those in the investment industry — should be setting rigorous governance protocols when developing and implementing AI, including ensuring that sufficient human oversight and accountability mechanisms are put in place.

These protocols should help to ensure accurate and appropriate outcomes from AI, manage risks, and provide an ongoing assessment of the benefits of AI against its costs and risks. Additionally, the governance of AI must incorporate regular reporting and communication to clients on how AI models are being used and how they perform. Such reporting and disclosures will enable clients to hold asset managers, banks, financial advisers, and other service providers to account.

The use of AI in investment management will accelerate the disruption of existing business models and investment processes. AI has the potential to bring about the most significant changes the investment industry has seen in decades, but this does not come without risks. As we have seen, how investment firms use and develop AI will be essential to ensuring that risks are mitigated and managed appropriately.

In this context, investment firms and professionals should not rely on government regulation to set the standard on what constitutes safe and responsible use of AI; the industry must take a proactive approach and provide ethical leadership.

LeackStat 2023