
Companies are overestimating how responsible they’re being with AI

Given the ongoing ethical debate around the corporate use of AI technologies, it's unsurprising that many companies centre their deployments within frameworks that aim to ensure the ‘responsible use of AI'. However, according to new research by Boston Consulting Group (BCG), executives are broadly overestimating how responsible they're actually being, and aren't appropriately measuring their use of AI against practical guidance frameworks.

This is concerning when you consider the implications of AI use for employees and customers, particularly when so many companies hide their AI algorithms in what is often described as a ‘black box'. For AI to be considered trustworthy, organizations need to explain, and measure, not only how they're incorporating privacy and governance principles, but also how they're benefiting the people their algorithms affect.

Steven Mills, BCG GAMMA's chief ethics officer, says of the survey results:

The results were surprising in that so many organizations are overly optimistic about the maturity of their responsible AI implementation. While many organizations are making progress, it's clear the depth and breadth of most efforts fall behind what is needed to truly ensure responsible AI.


The findings

BCG surveyed senior executives from more than 1,000 large organizations to get a better understanding of ‘responsible artificial intelligence' programmes. BCG describes the responsible use of AI in terms of the structures, processes and tools that companies use to ensure AI systems work in the "service of good while transforming their businesses".

The areas of focus - or dimensions of AI responsibility - include: data and privacy governance; safety, security and robustness; transparency and explainability; accountability; fairness and equity; social and environmental impact mitigation; human plus AI. 

The framework BCG uses to assess maturity is based on four stages, which are as follows (a simplified scoring sketch appears after the list):

  • Stage 1 (Lagging) - starting to implement a responsible AI program, with a focus on data and privacy

  • Stage 2 (Developing) - expanding across the remaining responsible AI dimensions and initiating responsible AI policies and processes

  • Stage 3 (Advanced) - making additional data and privacy-related improvements but lagging behind on human-related advances

  • Stage 4 (Leading) - performing at a high level across all responsible AI dimensions
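
BCG does not publish the scoring mechanics behind this staging, so the following Python sketch is purely illustrative: the seven dimension names come from the report, but the 0-1 scoring scale, the thresholds, and the assess_stage function are assumptions invented for this example.

    # Illustrative only: BCG's actual scoring methodology is not public.
    # Dimension names follow the report; scale and thresholds are assumed.

    DIMENSIONS = [
        "data and privacy governance",
        "safety, security and robustness",
        "transparency and explainability",
        "accountability",
        "fairness and equity",
        "social and environmental impact mitigation",
        "human plus AI",
    ]

    def assess_stage(scores):
        """Map per-dimension scores (0.0-1.0) to one of the four stages."""
        avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
        weakest = min(scores[d] for d in DIMENSIONS)
        if avg >= 0.8 and weakest >= 0.7:  # strong across *all* dimensions
            return "Stage 4 (Leading)"
        if avg >= 0.6:
            return "Stage 3 (Advanced)"
        if avg >= 0.4:
            return "Stage 2 (Developing)"
        return "Stage 1 (Lagging)"

    org = {d: 0.5 for d in DIMENSIONS}
    org["data and privacy governance"] = 0.9  # the typical early focus
    print(assess_stage(org))  # -> Stage 2 (Developing)

The ‘weakest dimension' guard on Stage 4 mirrors the report's observation that leading organizations perform at a high level across all dimensions, not merely on average.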

According to BCG's own assessment of organizations' responsible AI (RAI) maturity, 14% of the companies surveyed were lagging, 34% were developing, 31% were advanced, and 21% were leading. The report states:

As organizations progress from lagging to leading, each stage is marked by substantial accomplishments, particularly in the areas of fairness and equity as well as human plus AI. This finding is important because organizations' RAI programs don't tend to initially focus on these dimensions, and they are the most difficult to address. 

Accomplishments in these areas are therefore highly indicative of broader maturation in RAI, and they signal that an organization is ready to transition to the next stage of maturity. Meanwhile, organizations consistently focus first on the area of data and privacy governance. This is a logical result, given that regulations and policies often mandate this focus.


Perception does not match reality

However, as noted above, BCG also found that organizations are seriously overestimating their RAI progress: there is a gap between perception and reality.

BCG asked executives how they would define their organization's progress on its RAI journey, whether it had made no progress (2% of respondents), had defined RAI principles (11%), had partially implemented RAI (52%), or had fully implemented RAI (35%). It then compared each executive's response with its own assessment of the organization's maturity, based on the executive's answers to a number of questions about implementation across the seven dimensions mentioned above.
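
To make that comparison concrete, here is a minimal sketch, using made-up data rather than BCG's survey responses, of how pairing self-reported stages with assessed stages yields an overestimation rate:

    # Hypothetical data: (self-reported stage, assessed stage), on a 1-4 scale.
    responses = [
        (4, 3),  # believes RAI is fully implemented, assessed as Advanced
        (4, 4),
        (3, 2),
        (2, 2),
        (1, 1),
    ]

    overestimators = [r for r in responses if r[0] > r[1]]
    share = len(overestimators) / len(responses)
    print(f"{share:.0%} of organizations overestimated their RAI maturity")
    # BCG's survey put this figure at about 55% across all organizations.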

The results highlight a significant overestimation on the part of companies that believe they are more advanced than they really are. The report states:

The results are surprising. We found that about 55% of all organizations, from laggers to leaders, are less advanced than they believe. Importantly, more than half (54%) of those that believe they have fully implemented RAI programs overestimated their progress. This group, in particular, is concerning. Because they believe they have fully implemented RAI programs, they are not likely to make further investments, although gaps clearly remain.

We also found that many organizations with advanced AI capabilities are behind in implementing RAI programs. Of the organizations that reported they have developed and implemented AI at scale, less than half have RAI capabilities on a par with that deployment. Achieving AI at scale not only requires building robust technical and human-enabling capabilities but also fully implementing an RAI program. For these organizations, falling short of full maturity across all RAI dimensions means that they have still not achieved their perceived level of at-scale AI deployment.


Best practices

AI has the potential to seriously impact the work and lives of people, which is why there is such a fierce debate around its ethical use. If organizations are overestimating their responsible use of AI, that does not bode well for the long term.

BCG outlines best practices for companies seeking to achieve responsible AI maturity. These include: 

  • Both the individuals responsible for AI systems and the business processes that use these systems adhere to their organization's principles of RAI.

  • The requirements and documentation of AI systems' design and development are managed according to industry best practices.

  • Biases in historical data are systematically tracked, and mitigating actions are deployed when issues are detected (a minimal sketch of one such check follows this list).

  • Security vulnerabilities in AI systems are evaluated and monitored in a rigorous manner.

  • The privacy of users and other people is systematically preserved in accordance with data use agreements.

  • The environmental impact of AI systems is regularly assessed and minimized.

  • All AI systems are designed to foster collaboration between humans and machines while minimizing the risk of adverse impact.
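
Of these practices, bias tracking is the most mechanically concrete. A minimal sketch of one such check, assuming a simple selection-rate (demographic parity) comparison with invented group labels and the widely used ‘four-fifths' threshold, might look like this:

    from collections import defaultdict

    # Hypothetical historical records: (group label, favourable outcome?).
    records = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    def selection_rates(rows):
        """Favourable-outcome rate per group."""
        totals, favourable = defaultdict(int), defaultdict(int)
        for group, outcome in rows:
            totals[group] += 1
            favourable[group] += outcome
        return {g: favourable[g] / totals[g] for g in totals}

    rates = selection_rates(records)
    ratio = min(rates.values()) / max(rates.values())
    # The 0.8 threshold mirrors the common 'four-fifths rule'; any real
    # mitigation policy would be defined by the organization's RAI program.
    if ratio < 0.8:
        print(f"Potential disparity detected: {rates} (ratio {ratio:.2f})")

In practice such a check would run routinely over training data pipelines, with detected disparities triggering the mitigating actions this best practice calls for.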

 

My take

This is a theme that is coming up time and time again. We recently reported how UK and US governments are ‘AI ready' but are not being responsible in their use of AI. No one is denying that AI technologies can be deployed rapidly in companies, as they have matured significantly in recent years. However, this needs to be done in tandem with responsible use frameworks, and consistently measured and checked, in order to ensure they survive over the long term.
