It’s hard to imagine a more perfect term to describe the buzziest topic in technology than “generative” artificial intelligence.
Since new versions of this technology reached mainstream awareness in 2022, it’s been used to create everything from college essays and fake movie trailers to lines of code for debugging software and even a hit song. The wave of excitement has also generated a lot of investment, with AI technology pulling in $35 billion through the first five months of 2023, according to data from Pitchbook, even as other technology startups trade at deep discounts.
Investors hope to get in early on startups that can more efficiently and effectively handle everyday tasks, which could add as much as $4.4 trillion — more than the GDP of the United Kingdom in 2021 — to the global economy, according to research from McKinsey. The financial services industry is especially ripe for AI, with McKinsey estimating that banks could add as much as $340 billion annually, in part by helping “service representatives such as call-center agents and wealth management financial advisers.”
Unlike previous venture capital crazes like blockchain, Web3 and the metaverse, which struggled to produce useful everyday products, generative AI is already in many people’s hands at home and at work. Fintech startups and large institutions have products out for advisors. And investors are ready for it, with three-quarters saying they believe AI would help advisors better serve clients in a recent survey conducted by Morgan Stanley Wealth Management.
“We were given Oculuses and it was interesting, but it was hard to incorporate into your life and routine,” said Scott Reddel, managing director of capital markets and wealth management at Accenture. “With AI, you start playing around with it, and it’s so much easier to integrate into the day-to-day things that you do. It’s more intuitive and easier for firms to get their brains around it.”
The rapid release of these products might also be why the technology has successfully generated a lot of fear, confusion and anxiety. While most people (82% of respondents to Morgan Stanley’s survey) agree that AI won’t fully replace financial advisors, there are still concerns about cybersecurity and data privacy, a coming regulatory response, and whether the technology works as advertised.
Unlike in recent decades, when legacy databases and outdated infrastructure limited financial institutions’ ability to adopt new technology, many of the largest institutions hold an early lead on AI.
“It’s astounding to see the type of money that some of these bigger banks are putting into this newfangled tech,” said Vijay Raghavan, a senior analyst at Forrester. “Sometimes I think of JPMorgan [Chase] as more of a tech company than a bank.”
From February through April, JPMorgan advertised 3,651 jobs related to AI, almost double those from Citigroup and Deutsche Bank, according to data cited by Bloomberg. The firm reportedly has more than 50 pilot programs underway related to deploying AI in portfolio management, with Mary Erdoes, head of JPMorgan Asset Management, recently telling FundFire that the technology is close to enabling “fabulous leveraged changes in companies that are going to make us faster, better, smarter, quicker.”
During its 2023 investor day, JPMorgan reported that it expected AI and machine learning technology to deliver $1.5 billion in value for the company in 2023. But the bank may also have its sights set on a more consumer-facing product. In May, JPMorgan applied to trademark a product called Index GPT that relies on “cloud computing software using artificial intelligence” for “analyzing and selecting securities tailored to customer needs.”
The company declined to comment.
However, few firms have done as much to get AI tools into the hands of wealth managers as Morgan Stanley. The company got into AI relatively early with a “Next Best Action” engine, first announced in 2018, to help advisors find new opportunities among their existing clients. Today the tool is widely used across Morgan Stanley’s wealth management business, and the firm now employs neural networks to drive efficiencies at ETrade and to match prospective clients with financial advisors, said Jeff McMillan, Morgan Stanley’s chief analytics and data officer.
“I don’t think there’s a team that’s not using [the technology] on a regular basis,” McMillan said. “These tools are ubiquitous, almost like email now, and are firmly embedded into the ecosystem of how we work.”
Now the wirehouse is turning to unstructured data like written content and natural language. The firm partnered with OpenAI, the company behind the popular generative AI chatbot ChatGPT, to develop an AI assistant for its financial advisors that can quickly find information and answer questions. Thousands of Morgan Stanley advisors are currently involved in a beta test of the product, which is scheduled for companywide release in September.
“We really doubled down on all of the activity and coordination we needed to do with clients, with financial advisors at the center,” said Sal Cucchiara, chief information officer for wealth management and investment management at Morgan Stanley. “How do we make it easier for the financial advisor to serve the client? How do we make it easier for financial advisors to do business with us?”
The firm’s AI capabilities are made possible by a decision nearly a decade ago to rip off the Band-aid of legacy technology and build a modern data infrastructure that standardizes data and taxonomies across the entire firm, McMillan said.
Such a large, expensive project illustrates why many independent firms may find themselves playing catch-up with AI. The lack of standardized data across the various providers that independent advisors rely on — including custodians, asset managers, and technology vendors — already makes it difficult to handle simple tasks like digital account opening, let alone releasing an AI engine.
“The enterprise broker-dealers who have the budgets, who have the technology capabilities to throw some money at this and figure out what’s happening, they have the advantage,” Forrester’s Raghavan said. “It’s hard to tell what a smaller RIA is going to do with it. They could be at a disadvantage as advisors at bigger firms get trained up on how to use these [tools].”
There are a number of fintech vendors that have launched or announced generative AI features for independent advisors, including Morningstar, Orion’s Redtail CRM, Broadridge, Hearsay Systems, Riskalyze and FMG Suite. These vendors offer solutions to individual problems and could run into the same integration challenges as independent advisors’ other technology.
“It’s the number one problem; [independent advisors] have 17 different data sources,” Cucchiara said. “The payoff is years away, and the cost is large.”
However, the rapid development of generative AI is lowering those barriers to entry, Reddel said. With costs and time to market drastically reduced, every firm Accenture works with is at least exploring where AI can add value, even if a product launch is still far off.
“One of the things we’re finding now is if you’re a small independent firm or regional bank, right now it’s a much lower effort,” Reddel said. “There’s an opportunity to catch up or even leap in front.”
Even so, the proliferation of AI technology vendors also creates new cybersecurity vulnerabilities for advisory firms. It isn’t immediately clear which data these tools collect from users or where it goes, and hackers are already exploring ways generative AI can be used to develop sophisticated new attacks. A Salesforce survey found that 71% of senior IT leaders believe generative AI is likely to “introduce new security risks to data.”
“If I was a compliance officer in a large RIA managing billions [of dollars] of client assets, I’d be trying pretty hard to lock it down right now,” said Adrian Johnstone, CEO of fintech company Practifi. “You don’t want advisors experimenting with AI with client data.”
The “black box” nature of many AI engines also has firms worried about being able to verify the accuracy of information they create. It’s easy to find examples of consumer-facing tools like ChatGPT providing convincing yet totally inaccurate answers to questions, a phenomenon some have described as an AI “hallucination.” And groups like the National Institute of Standards and Technology have documented how AI can reflect both the explicit and implicit biases of its human creators.
“What is there to validate and confirm that this AI capability is serving up independent financial advice, independent recommendations that aren’t self-serving to the advisor?” said Raghavan, who advocates for financial services firms implementing what he calls “explainable AI.”
“Implicit biases that may or may not exist in the systems, how do you think about that?” he added. “This can impact sustainable and responsible product selection, or how firms are matching advisors [with clients].”
Finally, there’s the matter of how regulators are going to respond to advisors’ use of these tools. The Consumer Financial Protection Bureau has warned banks about penalties for deploying chatbots that provide inaccurate information or fail to protect clients’ privacy and data. The Securities and Exchange Commission is expected to release new rules in October about conflicts of interest in technology that will likely apply to brokers’ and financial advisors’ use of AI.
“Compliance officers that I speak to say, ‘I don’t want my advisors using these tools because I now have to review this AI content,’” Johnstone said. “More people are generating more content that needs more review.”
Early AI products have made efforts to sidestep these issues. For example, Morgan Stanley’s AI only works with the firm’s internal content (as opposed to consumer-facing tools like ChatGPT that have access to the entire internet), is only accessible to Morgan Stanley employees, and shows users justifications for each answer, along with links to source material, Cucchiara said.
“We recognize the transformative potential of AI in finance and are committed to ensuring its responsible use in alignment with our core values, and we will continue to conduct regular audits to ensure that our AI systems are transparent, explainable and auditable to facilitate accountability,” he said.
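The design Cucchiara describes resembles what engineers call a retrieval-grounded assistant: the tool searches a curated internal library, answers only from the passages it retrieves, and returns the sources alongside the answer. The sketch below is a minimal, hypothetical illustration of that pattern; the document store, relevance scoring and call_llm placeholder are assumptions for the example, not Morgan Stanley’s actual system.

```python
# Minimal sketch of a "closed-corpus" assistant: answer only from internal
# documents and return source links with every answer. All names and data
# here are hypothetical; this is not any firm's actual implementation.

INTERNAL_DOCS = [
    {"title": "Retirement income playbook", "url": "intranet://docs/retirement-income",
     "text": "Guidance on sequencing withdrawals across taxable and tax-deferred accounts."},
    {"title": "529 plan FAQ", "url": "intranet://docs/529-faq",
     "text": "Rules for contributions, state tax benefits, and qualified education expenses."},
]

def score(query: str, doc: dict) -> int:
    """Crude relevance score: count of query words appearing in the document."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc["text"].lower() or w in doc["title"].lower())

def retrieve(query: str, k: int = 2) -> list:
    """Return the top-k internal documents relevant to the query."""
    ranked = sorted(INTERNAL_DOCS, key=lambda d: score(query, d), reverse=True)
    return [d for d in ranked[:k] if score(query, d) > 0]

def call_llm(prompt: str) -> str:
    # Placeholder standing in for whatever hosted model a firm uses,
    # so the sketch runs end to end without an external service.
    return "[model response grounded in the retrieved passages]"

def answer(query: str) -> dict:
    """Build a grounded prompt from retrieved passages and attach source links."""
    sources = retrieve(query)
    if not sources:
        return {"answer": "No internal source found; escalate to a specialist.", "sources": []}
    context = "\n".join(f"- {d['title']}: {d['text']}" for d in sources)
    prompt = ("Answer using ONLY the internal passages below. "
              "If they do not contain the answer, say so.\n" + context +
              f"\nQuestion: {query}")
    return {"answer": call_llm(prompt), "sources": [d["url"] for d in sources]}

if __name__ == "__main__":
    print(answer("How do 529 contributions work?"))
```

Restricting the model to retrieved internal passages, and surfacing the links, is what lets a compliance team audit where each answer came from.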
While Morningstar’s generative AI, “Mo,” is available to retail consumers and financial advisors, the company has guardrails in place to help it be more accurate, according to James Rhodes, Morningstar’s chief technology officer and president of data, research and enterprise solutions.
But JPMorgan’s trademark filing makes clear that firms are looking to expand this technology further in wealth management. Fintech firms see an opportunity to use generative AI to produce code that can speed up product development, patch bugs, and even make it easier for advisors to customize products. For example, Johnstone envisions a future in which an advisor can create a CRM workflow completely customized for a specific need just by telling Practifi what they want.
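In practice, the kind of customization Johnstone describes would likely mean translating a plain-English request into a structured workflow definition the CRM can validate before running. The sketch below is a heavily hypothetical illustration of that translate-and-validate step; the schema, field names and generate_workflow_json placeholder are invented for the example and are not Practifi’s API.

```python
# Hypothetical sketch: turn a model's output into a validated CRM workflow.
# The schema, field names, and generate_workflow_json() placeholder are
# illustrative assumptions, not any vendor's actual API.
import json

REQUIRED_FIELDS = {"name", "trigger", "steps"}

def generate_workflow_json(request: str) -> str:
    # Stand-in for a call to a code-generating model; returns canned JSON here.
    return json.dumps({
        "name": "Quarterly review outreach",
        "trigger": {"type": "schedule", "cron": "0 9 1 */3 *"},
        "steps": [
            {"action": "create_task", "assignee": "advisor", "summary": "Prepare review deck"},
            {"action": "send_email", "template": "quarterly_review_invite"},
        ],
    })

def build_workflow(request: str) -> dict:
    """Parse and validate the generated workflow before it ever touches client data."""
    workflow = json.loads(generate_workflow_json(request))
    missing = REQUIRED_FIELDS - workflow.keys()
    if missing:
        raise ValueError(f"Generated workflow is missing fields: {missing}")
    return workflow

print(build_workflow("Set up a quarterly client review workflow"))
```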
There’s also an opportunity for AI to improve cybersecurity for the industry rather than create additional weaknesses. Some of the automated processes banks have long used to detect potential credit card fraud could be brought to financial advisors, helping them better safeguard clients’ portfolios.
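At its simplest, that kind of screening means flagging activity that falls far outside a client’s established pattern. The sketch below is a deliberately simplified, hypothetical illustration using a z-score over past withdrawal amounts; the threshold and figures are invented for the example and bear no relation to any bank’s production fraud models.

```python
# Simplified sketch of the kind of automated screening long used for card
# fraud: flag withdrawals far outside a client's historical pattern.
# Threshold and amounts are illustrative assumptions only.
from statistics import mean, stdev

def flag_unusual_withdrawals(history: list, recent: list, z_threshold: float = 3.0) -> list:
    """Return recent withdrawal amounts whose z-score exceeds the threshold."""
    if len(history) < 2:
        return []  # Not enough history to establish a baseline.
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return [amt for amt in recent if amt != mu]
    return [amt for amt in recent if abs(amt - mu) / sigma > z_threshold]

# Example: a client who typically withdraws a few thousand dollars a month.
past = [2500, 3000, 2800, 3200, 2700, 2900]
incoming = [3100, 45000]  # The 45,000 withdrawal should stand out.
print(flag_unusual_withdrawals(past, incoming))  # -> [45000]
```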
Some also see image-generating AI being used to make financial planning more visually engaging beyond graphs and scales.
However, there’s also a risk of advisors getting too comfortable with AI. For example, there is significant buzz from digital marketing firms about how AI can help advisors deliver personalized communications to all of their clients and prospects more quickly. While these tools have some undeniable usefulness, it may not take long for clients to start wondering whether the communications they receive are genuine or simply automated by AI.
“I was talking to my boss, who I’ve known and worked with for almost 20 years, brainstorming ideas to create communication content,” said Ryan Sullivan, vice president of applied insights at Hartford Funds. After thinking of a witty headline, his boss asked if he got it from ChatGPT. “Just the fact that it registered in his brain, even as a joke, means clients will soon start wondering if the services they receive are from their financial professional or from [AI].”
No one knows for sure yet how it will all shake out. There will be winners, there will be losers, and there will be unintended consequences. It will be up to each wealth management firm and individual advisor to employ these tools in a way that increases trust with clients rather than harms it.
“Clients who begin to suspect they’re not really being serviced by a person aren’t necessarily just going to pay less. They will vote with their feet and find better service,” said Raghavan.