Business leaders and policymakers around the world recognise that artificial intelligence (AI) is set to transform almost every area of our lives. Unsurprisingly, a flurry of global AI-related regulations is now in development and on the way to being rubber-stamped.
But until then, payments players are grappling with how to handle such a fast-moving technology, and how it will affect the quest for compliance. In 2022, spending on financial crime compliance shot up by 13.6% from 2021 to reach $57 billion for the US and Canada combined, according to a LexisNexis Risk Solutions study. In the UK, LexisNexis Risk Solutions estimates that financial crime compliance costs for financial services will reach £34.2 billion annually, a 19% jump from £28.7 billion in 2021. And regulatory expectations are the biggest drivers of cost.
As the world shifts towards real-time payments, the potential rewards of AI could be incredible. That's not to say the potential risks couldn't prove calamitous to a business's bottom line. But amid all the scaremongering headlines about AI going rogue and wreaking havoc, it's important to have a healthy sense of perspective.
AI is not new, and it has been used successfully in financial services for quite some time – as evidenced by the friendly customer service chatbots we’re all so familiar with now. Consider that ChatGPT has only been around for a year, and how quickly it’s transformed customer service strategies. AI and machine learning (ML) are making giant strides in natural language processing. AI-driven CRM platforms can hand off complex customer calls to skilled agents, and even help agents offer personalised responses based on customer data.
That's one example of how, up until now, AI was mostly deployed in lower-risk back-office or front-office processes largely untouched by compliance obligations. But that is changing fast. The speed at which AI is evolving and being embedded into more facets of the payment ecosystem is astonishing. Businesses are now urgently exploring how to use AI to enhance every aspect of their operations and extract the maximum value from their data.
AI is constantly hungry for new data it can learn from – which is just as well, given that big data volumes are about to skyrocket. What we're witnessing right now is the explosion of instant payments, fuelled by a convergence of connectivity: large-scale national real-time payment system implementations and fast-growing 5G mobile connectivity. Today, more than 70 countries on six continents support real-time payments, which accounted for 195 billion transactions in 2022, growth of 63% from 2021 (according to the "2023 Prime Time for Real-Time" report by ACI Worldwide and GlobalData).
Meanwhile, the global roll-out of 5G means that mobile payments will get faster, with greater connectivity from more devices (including wearable tech and the Internet of Things) in more locations. Consumer 5G connections surpassed one billion at the end of 2022 and will increase to two billion by the end of 2025, according to GSMA Intelligence.
Along with mobile-enabled simple account balance checks, bill or merchant payments, and account transfers, we can expect to see a surge in 5G-routed loan and credit applications, and more complex transactions, like business-to-business (B2B) invoicing and supplier payments, made on the go.
This 5G-fuelled explosion of payments activity means that AI will be gobbling up and verifying even more customer IDs, authorising payments, confirming funds availability and matching reconciliations in as close to real-time as possible, speeding up settlement and ensuring a consistently good payment experience.
But what's good for the fintech is also fodder for the fraudster. It was inevitable that AI tools like ChatGPT would be manipulated into "FraudGPT" and turned against fintechs and their customers, creating fake customer profiles and luring customers and bank or fintech personnel into giving away sensitive data.
With increasing real-time connectivity driving up transaction volumes and opening up more avenues of attack for fraudsters, banks and fintechs will now need to move beyond reactive fraud prevention approaches and embrace proactive, pre-emptive efforts. As more transactions go cross-border too, the risks of breaching compliance obligations intensify. Unlike domestic transactions, international transactions come with greater counterparty and foreign exchange (FX) risk, as well as additional anti-money laundering (AML), know your customer (KYC) and sanctions screening obligations.
Therefore, it’s unsurprising that AI and ML are being used to automate and streamline what were slow and manually intensive fraud prevention and AML/KYC verification processes, and more quickly identify violations to reduce the risk of falling out of compliance.
The fantastic thing about AI is that it never gets bored. It can accelerate time-consuming processes like customer ID verification to aid KYC and AML operations, ingesting large volumes of data in the blink of an eye and standardising, categorising and indexing it all at mind-boggling speeds, making it easier for businesses to find and analyse.
With algorithms able to detect suspicious fraud patterns at lightning speed, predictive AI is helping fintechs to evolve beyond rigid rules-based systems to hunt down bad actors sooner and stop fraud in its tracks before it can do maximum damage.
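The difference between a rigid rules-based system and a predictive approach can be sketched in a few lines. The sketch below is purely illustrative, with made-up transaction amounts and a simple statistical anomaly score standing in for a full ML model: a fixed-limit rule misses a payment that is nevertheless wildly out of character for that customer, while a check against the customer's own spending pattern catches it.

```python
from statistics import mean, stdev

# Hypothetical transaction history for one customer (amounts are invented).
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 58.9]

def rules_based_flag(amount, limit=500.0):
    """Rigid rule: flag only amounts above a fixed limit."""
    return amount > limit

def anomaly_flag(amount, history, threshold=3.0):
    """Behavioural check: flag amounts far outside the customer's own pattern,
    using a z-score as a stand-in for a trained anomaly-detection model."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

suspicious = 250.0  # well under the fixed limit, but ~5x this customer's norm
print(rules_based_flag(suspicious))          # False: the rigid rule misses it
print(anomaly_flag(suspicious, history))     # True: out of character, flagged
```

In production, the z-score would be replaced by a model trained on many features (merchant, geography, device, timing), but the design point is the same: the baseline is learned per customer rather than hard-coded per rule.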
But AI is only as accurate as the data that flows into it. While its advantages are clear when it comes to fraud, its use in customer service could bring a whole host of unintended consequences for compliance efforts.
With new regulatory demands like the UK's Consumer Duty requiring evidence of good customer outcomes, data inputs that are incorrect, missing key elements, or biased put fintechs at risk of algorithms giving customers wrong information, exposing firms to painful legal ramifications and costly penalties. The onus is on fintechs to ensure that their AI models are being fed with clean, healthy and accurate data to protect customers and their own bottom lines.
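In practice, "feeding models clean data" usually starts with validation gates in the data pipeline. The minimal sketch below is an assumption about what such a gate might look like; the field names and rules are invented, not a real schema, but it shows the idea of rejecting records with missing, malformed or out-of-range values before they ever reach a model.

```python
# Illustrative data-quality gate for transaction records before model ingestion.
# REQUIRED_FIELDS and VALID_CURRENCIES are hypothetical examples, not a standard.
REQUIRED_FIELDS = {"customer_id", "amount", "currency", "country"}
VALID_CURRENCIES = {"GBP", "USD", "EUR"}

def validate_record(record):
    """Return a list of data-quality problems; an empty list means clean."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and (
        not isinstance(record["amount"], (int, float)) or record["amount"] <= 0
    ):
        problems.append("amount must be a positive number")
    if "currency" in record and record["currency"] not in VALID_CURRENCIES:
        problems.append(f"unrecognised currency: {record['currency']}")
    return problems

clean = {"customer_id": "c1", "amount": 25.0, "currency": "GBP", "country": "GB"}
dirty = {"customer_id": "c2", "amount": -5, "currency": "XXX"}
print(validate_record(clean))  # []
print(validate_record(dirty))  # three problems: missing country, bad amount, bad currency
```

Records failing the gate would be quarantined for review rather than silently dropped, so the audit trail regulators expect is preserved.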
It's no big leap to think that large-scale AI models could outperform humans in financial services scenarios like those outlined above over the next few years. It's highly likely that AI-powered compliance workflows could be trained on new regulations and compliance obligations and implement updates as soon as they take effect, reducing the risk of non-compliance even further.
But no matter how far automation advances, every payment transaction involves humans – as individuals, employees and business owners – paying and getting paid. That's why, however efficient AI is and will become, it can't replicate human nuance, nor can it fully comprehend the human motivations for committing fraud.
In payments, the human element can never be removed. It’s only by having human insights and ultimate oversight that AI can make the most beneficial impact for consumers and businesses.
LeackStat 2023