Despite increasing demand for and use of AI tools, 65% of companies can’t explain how their AI models’ decisions or predictions are made. That’s according to the results of a new survey from global analytics firm FICO and research firm Corinium, which polled 100 C-level analytics and data executives to understand how organizations are deploying AI and whether they’re ensuring it’s used ethically.
“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” FICO chief analytics officer Scott Zoldi said in a press release. “Organizations are increasingly leveraging AI to automate key processes that — in some cases — are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and product model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”
The study, which was commissioned by FICO and conducted by Corinium, found that 33% of executive teams have an incomplete understanding of AI ethics. While IT, analytics, and compliance staff have the highest awareness, understanding across organizations remains patchy. As a result, there are significant barriers to building support: 73% of stakeholders say they’ve struggled to get executive backing for responsible AI practices.
Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.
What can enterprises do to embrace responsible AI? Combating bias is an important step, but only 38% of companies say that they have bias mitigation steps built into their model development processes. In fact, only a fifth of respondents (20%) to the Corinium and FICO survey actively monitor their models in production for fairness and ethics, while just one in three (33%) have a model validation team to assess newly developed models.
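The report doesn’t prescribe how production fairness monitoring should work, but as an illustration, a basic check can be as simple as comparing positive-prediction rates across a protected attribute on recent traffic. The sketch below, in Python, is a minimal example of that idea; the metric choices (demographic parity gap, disparate impact ratio) are standard fairness measures, while the alert thresholds and the synthetic data are assumptions for illustration, not anything FICO or Corinium recommend.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1) from production traffic.
    group:  binary protected-attribute labels (0/1) for the same records.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the two positive rates (smaller / larger); the common
    'four-fifths rule' flags values below 0.8."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi if hi > 0 else 1.0

# Example: score a daily batch of production predictions and alert when
# the disparity exceeds a tolerance chosen by the team (values here are
# illustrative assumptions, as is the randomly generated data).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.integers(0, 2, size=1_000)   # stand-in for model output
    groups = rng.integers(0, 2, size=1_000)  # stand-in for the attribute
    gap = demographic_parity_gap(preds, groups)
    ratio = disparate_impact_ratio(preds, groups)
    if gap > 0.05 or ratio < 0.8:
        print(f"fairness alert: gap={gap:.3f}, ratio={ratio:.3f}")
    else:
        print(f"within tolerance: gap={gap:.3f}, ratio={ratio:.3f}")
```

In practice, checks like this would run on a schedule against logged predictions, and a model validation team of the kind the survey describes would decide which metrics, groups, and thresholds actually apply to their use case.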
The findings agree with a recent Boston Consulting Group survey of 1,000 enterprises, which found that fewer than half of those that had achieved AI at scale had fully mature, “responsible” AI implementations. The lagging adoption of responsible AI belies the value these practices can deliver. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and will punish those that don’t.
Businesses also recognize that things need to change: the overwhelming majority (90%) agree that inefficient processes for model monitoring represent a barrier to AI adoption. Thankfully, almost two-thirds (63%) of respondents to the Corinium and FICO report believe that AI ethics and responsible AI will become a core element of their organization’s strategy within two years.
“The business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards need to be aware of the risks associated with the technology and the best practices to proactively mitigate them,” Zoldi added. “AI has the power to transform the world, but as the popular saying goes — with great power comes great responsibility.”