
Maintaining Data Privacy Compliance When Using AI in Finance

Source: nasdaq.com

 

Operational efficiency is top of mind for nearly all corporate leaders, with artificial intelligence (AI) use cases gaining traction across industries. While AI offers a variety of benefits and has impressed users with its results, those in heavily regulated industries, such as financial services, are raising serious questions about the security, data validity, and ethics of this technology, particularly when it comes to data privacy.

Whether a financial institution aims to use AI to improve contract management, provide customers with better experiences, improve fraud detection, or otherwise, the parameters governing how the data is ingested and retained are of the utmost importance.

“Financial, legal, IT, and operations teams should evaluate appropriate data privacy regulations when considering their integration of AI to remain compliant and avoid getting into hot water with customers, stakeholders, or regulatory bodies,” says Colby Mangonon, Associate General Counsel at Evisort. She adds that “financial institutions should also ensure the AI integration is safeguarded by a strong information security framework and data processing policies to protect customer data.”

 

AI Use in Finance

As banks, investment firms, and other financial institutions build out their technology stacks to improve efficiency, many have begun reinforcing those stacks with artificial intelligence-backed solutions that amplify the results of their back-end operations.

Some banks have begun using OpenAI’s GPT-4 chatbots to let advisors pull up research and data. A leading payment processing company is leveraging AI to better distinguish genuine fraud from false positives and avoid unnecessary card declines. Another leading financial institution is using AI to create customized contracts and digitally coordinate with internal stakeholders for approval of special terms. AI also creates numerous opportunities to improve revenue-impacting operations, such as speeding up customer services like loan processing and onboarding.

Benefits aside, legal teams within financial institutions are all too aware of the risks AI poses to the privacy and security of their customer, stakeholder, and organizational data.

 


 

Concerns Over Artificial Intelligence (AI)

While AI-supported technology can be very useful in day-to-day operations, there are concerns about the specifics of data ingestion at an organizational level and the large-scale training of the underlying model. Questions become more specific when examining generative AI models that undergo both pre-training and fine-tuning processes.

Third-party generative AI tools with minimal regulation, such as ChatGPT, have already had holds placed on them while governing bodies work to get answers about the potential legal violations their use may entail.

Italy recently banned the platform, and other European countries have raised flags about how AI-related data ingestion fares under GDPR rules. State-level laws like the California Consumer Privacy Act also come into play regarding the storage, correction, and deletion of personal data.

On top of regulatory concerns, several financial institutions are wary of using public third-party AI chatbots for fear that their proprietary data could be leaked. Organizations such as JPMorgan Chase, Wells Fargo, and Goldman Sachs Group have banned the use of ChatGPT for business communication as they “evaluate safe and effective ways of using [these] technologies.”

Does all of this mean AI should be avoided in order to protect the data used within financial institutions? No. It means that legal teams and enterprises will need to carefully vet individual programs to ensure they meet regulatory standards for data privacy.

 

Ensuring Data Privacy Compliance When Using Generative AI

Protecting your enterprise when using AI requires a deeper understanding of specific providers and the parameters they use to build their technology. Mangonon explains, “When beginning the sourcing process, leaders within financial institutions, or any enterprise, should determine the specific data they plan to input into an AI model, as this plays an integral role in choosing the right platform for your enterprise.”

When examining potential enterprise-grade solutions, inquire about the specifics of the provider’s AI models, data privacy and security structures, and the safeguards currently in place to mitigate risk. Helpful questions may include the following (a brief sketch of one common safeguard appears after the list):

  • What data training practices are used for the AI?
  • How are my enterprise’s confidential data and IP protected?
  • What are the security frameworks and practices?
  • Is the provider using a custom proprietary AI model or a third-party bolt-on model?
  • If they use a third-party bolt-on provider, what is that provider’s data retention policy?
  • Will our enterprise’s sensitive data be used to train the greater public AI model?
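
One safeguard these questions probe, keeping confidential data from ever reaching a third-party model, can be illustrated concretely. The sketch below is a minimal, hypothetical Python example assuming a simple regex-based approach; the patterns and the redact helper are illustrative assumptions, not any provider’s API, and a production deployment would rely on a vetted PII-detection tool and policies reviewed by counsel.

```python
import re

# Hypothetical, illustration-only patterns for common PII. A production system
# would use a vetted PII-detection library and rules aligned with GDPR/CCPA.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with typed placeholders before text leaves the enterprise."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Summarize the dispute raised by jane.doe@example.com "
        "regarding card 4111 1111 1111 1111."
    )
    # Only the redacted version would be sent to a third-party AI provider.
    print(redact(prompt))
    # Summarize the dispute raised by [EMAIL] regarding card [CARD].
```

Typed placeholders such as [EMAIL] preserve enough context for the model to remain useful while the underlying values never leave the enterprise.
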
“By being meticulous in their assessment of AI-supported solutions, leaders at financial institutions can leverage all the benefits of AI and simultaneously remain compliant with regulatory requirements while reducing the risk of data compromise,” says Mangonon.

As artificial intelligence continues to gain ground in the enterprise technology landscape, legal teams within financial institutions will be responsible both for meeting data privacy standards and for enabling the business to improve operations. With an enterprise mindset and proper due diligence, forward-thinking professionals will steer their organizations toward better business outcomes and a greater competitive advantage.

LeackStat 2023