The ramifications of artificial intelligence (AI) are widespread across how we think and do business. The financial sector is no exception.
Finance shows one of the highest rates of AI adoption of any industry. Microsoft finds that: “Nearly three-quarters (72%) of banks, insurance firms and other financial institutions use the technology – a 7% increase over the past 12 months and far higher than the national average of 56%.”
For charity workers with a keen eye for how money changes hands, it’s important to dig deeper into how AI has infiltrated business. In general, it speeds up decision-making and automates processes. Aside from hastening transformation, AI has also had some surprising consequences.
AI, at the heart of the matter, tackles two challenges. It computes massive amounts of data and automates tasks. Hewlett Packard summarises the situation: “AI in finance is the use of technology like machine learning that mimics human intelligence and decision-making to enhance how financial institutions analyse, manage, invest, and protect money.”
Ultimately, AI is being employed by financiers to make sense of large-scale activities and to simplify decision-making. Bankers, in essence, can discern minute changes in the data and assemble them into an overall trend.
Banking customers are also being served by AI assistants. Across most financial platforms, powerful chatbots are used to funnel queries from inception to close.
Many of us encounter these assistants as pop-ups in a chat box. Behind that is an algorithm that’s not dissimilar to those used by Charity:water’s Sellu.
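The funneling these chatbots perform can be pictured as intent routing: a customer message is matched to a known category of query, and anything unresolved is escalated to a person. The sketch below is illustrative only, with hypothetical intents and keywords; real banking assistants use trained language models rather than hard-coded keyword lists.

```python
# Minimal sketch of chatbot query funneling via keyword-based intent
# routing. The intents and keywords are hypothetical examples; a
# production assistant would use a trained language model instead.

INTENT_KEYWORDS = {
    "balance": ["balance", "how much", "funds"],
    "card": ["card", "lost", "stolen", "frozen"],
    "transfer": ["transfer", "send money", "payment"],
}

def route_query(message: str) -> str:
    """Match a customer message to an intent, or escalate to a human."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "human_agent"  # unresolved queries are escalated

print(route_query("I think my card was stolen"))   # card
print(route_query("Explain my mortgage options"))  # human_agent
```

The escalation fallback is the key design point: the bot handles the routine volume while anything ambiguous still reaches a human.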
Importantly, the regulator has a key role to play in the proliferation and use of AI. The Bank of England opines that AI and machine learning impact finance at three intersections: “The primary drivers of AI risk in financial services relate to three key stages of the AI lifecycle: (i) data; (ii) models; and (iii) governance.”
AI collides with the first stage, data, because of its ability to digest vast amounts of information quickly. This means that the feedback loop between information and organisational change can happen at speed. The second stage is modelling. Predictive models can ‘learn’ what’s happening in the financial sector, rather than interpret fixed ‘rules’. The consequence is that the system learns from exceptions instead of failing on them.
Last, governance of AI is tricky. The Bank of England and the Financial Conduct Authority take a pro-innovation approach, where AI and machine learning are permitted but must be used with caution. At the organisational level, the crux is simple: if decision-making is outsourced to AI, who, if anyone, is responsible for those decisions?
This conundrum is where many financiers are exploring, first, what level of responsibility is fit for AI and, second, what needs to be vetted by a real person. That decision has many ramifications – for example, whether AI will replace a worker’s job or not.
Unexpectedly, AI may open up banking to under-serviced and deprived areas. AI, in the form of a chatbot, has the potential to act as a bank teller despite the physical closure of retail branches.
NatWest is already using AI to service customers. A text-based bot called Cora has been launched across NatWest’s online pages.
Cora’s ‘intelligence’ is sophisticated: “Drawing upon advances in neuroscience, psychology, computing power and AI, a new Cora prototype has been built to include a highly life-like digital human that customers can have a two-way verbal conversation with on a computer screen, tablet or mobile phone.”
From this perspective, AI offers answers immediately and reduces the need to appear in branch.
Detecting fraud and bad actors has always been tricky in finance. Fraudsters typically use techniques called placement (introducing illicit cash into the system) and layering (obscuring its origin through chains of transactions) to misappropriate funds. AI deployed at scale can detect unusual movements of cash.
The power of AI has produced different compliance-based software companies. ComplyAdvantage provides know-your-client (KYC) and transaction monitoring services that are powered by AI.
For KYC, potential customers are screened against a constantly changing watchlist. In terms of transactions, ComplyAdvantage deploys AI to seek out anomalies and make predictions.
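At its simplest, transaction monitoring of this kind means learning what ‘normal’ looks like for an account and flagging movements that deviate sharply from it. The sketch below uses a basic statistical test for illustration; real systems such as ComplyAdvantage’s use far richer machine-learned models, and the threshold here is a hypothetical choice.

```python
# Illustrative sketch of anomaly detection for transaction monitoring:
# flag amounts that sit far from the account's historical norm.
# The z-score threshold of 2.0 is a hypothetical example value.
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[float]:
    """Return transactions whose amount deviates sharply from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Routine spending punctuated by one unusually large transfer.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 9500.0]
print(flag_anomalies(history))  # [9500.0]
```

Unlike a fixed rule (“flag everything over £10,000”), the baseline here is derived from the data itself, which is the sense in which such systems ‘learn’ rather than interpret rules.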
In another use case, AI works to protect organisations from email fraud. Tessian, a British start-up, acts as a first line of defence. The cloud-based system analyses incoming emails for phishing, ransomware, impersonation, and other attacks. Unlike traditional anti-virus software, Tessian is not rules-based, which makes it nimbler and more flexible than other systems.
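One way a behaviour-based defence can beat fixed rules is by learning which senders a user normally corresponds with and flagging close look-alikes, a common impersonation tactic. The sketch below is a hypothetical illustration of that idea; Tessian’s actual models are proprietary, and the addresses and similarity floor here are invented.

```python
# Hypothetical sketch of behaviour-based email defence: flag senders
# that closely resemble, but do not match, a user's known contacts.
# Addresses and the similarity floor are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_SENDERS = {"finance@example.com", "ceo@example.com"}

def is_suspicious(sender: str, similarity_floor: float = 0.85) -> bool:
    """Flag look-alike senders that impersonate known contacts."""
    if sender in KNOWN_SENDERS:
        return False  # exact match: a genuine known contact
    return any(
        SequenceMatcher(None, sender, known).ratio() >= similarity_floor
        for known in KNOWN_SENDERS
    )

print(is_suspicious("ceo@examp1e.com"))   # True: look-alike domain
print(is_suspicious("ceo@example.com"))   # False: known contact
```

Because the contact list is learned per user rather than written as a rule, the defence adapts as correspondence patterns change.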
Ultimately, when it comes to protecting and safeguarding emails and data, AI can be deployed at scale in financial institutions and charitable organisations alike.
LeackStat 2023
2024 © Leackstat. All rights reserved