As AI becomes more pervasive, AI-based discrimination is getting the attention of policymakers and corporate leaders, but keeping it out of AI models in the first place is harder than it sounds. According to a new Forrester report, Put the AI in "Fair" with the Right Approach to Fairness, most organizations adhere to fairness in principle but fail in practice.
There are many reasons for this difficulty:
"Fairness" has multiple meanings: "To determine whether or not a machine learning model is fair, a company must decide how it will quantify and evaluate fairness," the report said. "Mathematically speaking, there are at least 21 different methods for measuring fairness."
Sensitive attributes are missing: "The essential paradox of fairness in AI is the fact that companies often don't capture protected attributes like race, sexual orientation, and veteran status in their data because they're not supposed to base decisions on them," the report said.
The word "bias" means different things to different groups: "To a data scientist, bias results when the expected value given by a model differs from the actual value in the real world," the report said. "It is therefore a measure of accuracy. The general population, however, uses the term 'bias' to mean prejudice, or the opposite of fairness."
Using proxies for protected data categories: "The most prevalent approach to fairness is 'unawareness'—metaphorically burying your head in the sand by excluding protected classes such as gender, age, and race from your training data set," the report said. "But as any good data scientist will point out, most large data sets include proxies for these variables, which machine learning algorithms will exploit." (The second sketch below illustrates this with a simple proxy.)
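To make the definitional problem concrete, here is a minimal Python sketch, built on made-up loan decisions for two hypothetical groups, of how two common fairness criteria can disagree about the same set of predictions. The data and function names are illustrative, not drawn from the report.

```python
# A minimal sketch, on made-up loan decisions, of two of the many fairness
# definitions the report alludes to. The groups, labels, and predictions
# are hypothetical and exist only to show that the criteria can disagree.

def selection_rate(preds):
    """Share of applicants the model approves."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Approval rate among applicants who actually repaid (label == 1)."""
    approved = [p for p, actual in zip(preds, labels) if actual == 1]
    return sum(approved) / len(approved)

# Hypothetical decisions (1 = approve) and outcomes (1 = repaid) per group.
group_a_preds  = [1, 1, 1, 0, 1, 0, 1, 1]
group_a_labels = [1, 1, 0, 0, 1, 1, 1, 0]
group_b_preds  = [1, 0, 1, 0, 1, 0, 0, 1]
group_b_labels = [1, 0, 1, 0, 1, 0, 1, 1]

# Demographic parity: are approval rates equal? (0.75 vs. 0.50 -> no)
print(selection_rate(group_a_preds), selection_rate(group_b_preds))

# Equal opportunity: among creditworthy applicants, are approval rates
# equal? (0.80 vs. 0.80 -> yes). The same model passes one fairness test
# and fails another, which is why the choice of metric matters.
print(true_positive_rate(group_a_preds, group_a_labels),
      true_positive_rate(group_b_preds, group_b_labels))
```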
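And to illustrate the "unawareness" trap, the following sketch shows how a feature left in the training data, here a hypothetical postal-code zone, can correlate almost perfectly with the protected attribute that was removed. All values are invented for illustration.

```python
# A minimal sketch of the proxy problem. The "group" flag stands in for a
# protected attribute that was dropped from the training data; the postal
# zone is a feature that stayed in. All values are hypothetical.

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Protected attribute the model is "unaware" of, and a postal-code feature
# that remains in the training set.
group       = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
postal_zone = [2, 1, 2, 2, 7, 8, 7, 6, 1, 7]

# Roughly 0.98 here: the proxy reveals the protected attribute almost
# perfectly, so simply excluding the attribute offers little protection.
print(correlation(group, postal_zone))
```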
"Unfortunately, there's no way to quantify the size of this problem," said Brandon Purcell, a Forrester vice president, principal analyst, and co-author of the report, adding "... it's true that we are far from artificial general intelligence, but AI is being used to make critical decisions about people at scale today—from credit decisioning, to medical diagnoses, to criminal sentencing. So harmful bias is directly impacting people's lives and livelihoods."
To mitigate that harm, the report says, model builders should use more representative training data, experiment with causal inference and adversarial AI in the modeling phase, and leverage crowdsourcing to spot bias in the final outcomes. It also recommends that companies pay bounties for any flaws uncovered in their models.
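As one example of the modeling-phase techniques the report names, the sketch below shows the general shape of adversarial debiasing: a predictor learns the task while an adversary tries to recover the protected attribute from the predictor's scores, and the predictor is penalized when the adversary succeeds. This is a minimal illustration assuming PyTorch and synthetic data; the network sizes, the alpha weight, and the data itself are hypothetical, and production work would more likely rely on established fairness toolkits.

```python
# A minimal sketch of adversarial debiasing on synthetic data. Everything
# here (feature count, network sizes, the alpha weight) is a hypothetical
# illustration, not the report's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic tabular data: 8 features, a binary label, and a binary
# protected attribute that only the adversary sees.
n, d = 1000, 8
X = torch.randn(n, d)
y = (X[:, 0] + 0.5 * torch.randn(n) > 0).float().unsqueeze(1)
protected = (X[:, 1] > 0).float().unsqueeze(1)

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha = 0.5  # strength of the fairness penalty (assumed value)

for epoch in range(200):
    # Step 1: train the adversary to predict the protected attribute
    # from the predictor's (detached) scores.
    scores = predictor(X).detach()
    adv_loss = bce(adversary(scores), protected)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Step 2: train the predictor to fit the labels while fooling the
    # adversary; subtracting the adversary's loss pushes the scores to
    # carry less information about the protected attribute.
    scores = predictor(X)
    pred_loss = bce(scores, y) - alpha * bce(adversary(scores), protected)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
```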
"Mitigating harmful bias in AI is not just about selecting the right fairness criteria to evaluate models," the report said. "Fairness best practices must permeate the entire AI lifecycle, from the very inception of the use case to understanding and preparing the data to modeling, deployment, and ongoing monitoring."
Beyond modeling techniques, the report notes that eliminating bias also depends on practices and policies. As such, organizations should put a C-level executive in charge of navigating the ethical implications of AI.
"The key is in adopting best practices across the AI lifecycle from the very conception of the use case, through data understanding, modeling, evaluation, and into deployment and monitoring," Purcell said.