
Is fairness in AI a practical possibility? A new angle on designing fair systems

 

Is Fairness the new oil? I think that "new oil" phrase is dumb, but there has been a lot of activity around Fairness in AI lately. It is, however, a complex problem. It's easy to calculate the MTBF (mean time between failures) of a device, or to derive endless statistics from a model's training and output.

How does one measure whether the results are fair? Much of the literature on Fairness is intensely academic, but an emerging consensus places equal emphasis on statistics and on human perceptions. One intriguing paper is The (Im)Possibility of Fairness: Different Value Systems Require Different Mechanisms for Fair Decision Making, which I'll attempt to summarize.

I was also briefed recently by a company, Monitaur.ai, which claims to support all types of explainability (XAI) by plugging into a running model and capturing detailed data about its operation as well as incremental data. They call this machine learning assurance (MLA), and describe it as:

Machine Learning Assurance (MLA) is a controls-based process for ML systems that establishes confidence and verifiability through software and human oversight. The objective of MLA is to assure interested stakeholders that an ML system is functioning as expected, but in particular, to assure an ML system's transparency, compliance, Fairness, safety, and optimal operation.

Customers are using this product to understand bias, disparate impact, outliers, and data and model drift. Monitaur does not, however, provide fairness analysis methodology. That is up to the client. 
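To make the monitoring idea concrete, here is a minimal sketch of what capturing a model's inputs and outputs for later bias and drift analysis can look like. This is not Monitaur's API; the decorator, field names, and JSON-lines log format are hypothetical, but they show the kind of record such a tool would need to collect.

```python
import json
import time
from functools import wraps

def record_predictions(log_path):
    """Wrap a model's predict function and append each call to a JSON-lines log.

    The captured records (inputs, output, timestamp) are what a later
    fairness or drift analysis would consume; the format is hypothetical.
    """
    def decorator(predict_fn):
        @wraps(predict_fn)
        def wrapper(features):
            decision = predict_fn(features)
            with open(log_path, "a") as log:
                log.write(json.dumps({
                    "timestamp": time.time(),
                    "features": features,   # observed inputs to the model
                    "decision": decision,   # observed output
                }) + "\n")
            return decision
        return wrapper
    return decorator

@record_predictions("decisions.jsonl")
def approve_loan(features):
    # Stand-in model: approve if the applicant's score clears a threshold.
    return features["score"] >= 650

approve_loan({"score": 700, "group": "A"})
```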


So why is Fairness impossible? It isn't. The (Im)Possibility of Fairness paper takes a fairly deep dive into the subject, with an interesting framework for establishing Fairness. It starts with some assumptions:

  • The world is structurally biased (there are inequities that systemically disadvantage one social group relative to other groups with whom they coexist), and a biased world produces biased data. 
  • Observation is a process. When we create data, we choose what to look for.
  • Every automated system encodes a value judgment. Accepting training data as given implies that structural bias does not appear in the data and that replicating the patterns in the data would be ethical.
  • Key conclusion: Different value judgments can require contradictory fairness properties, each leading to different societal outcomes. Researchers and practitioners must document data collection processes, worldviews, and value assumptions.
  • Value decisions must come from domain experts and affected populations; data scientists should listen to them to build values that lead to justice. 

To design fair systems, there must be solid agreement on what it means to be fair. One definition is individual Fairness: individuals with like characteristics (within the model's scope) should receive the same evaluation or treatment. In practice, this means examining cases seriatim (measuring each individual's variance from the expected outcome), combined with any number of analytical methods for determining which features influenced the outcome. 
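As an illustration of the individual-fairness idea (my sketch, not a method from the paper), the check below flags pairs of individuals whose features are close under a chosen similarity metric but whose model outcomes differ sharply. The distance metric and both tolerances are assumptions a modeler would have to justify for the task.

```python
import numpy as np

def individual_fairness_violations(X, scores, feature_tol=0.1, outcome_tol=0.2):
    """Return index pairs that are similar in features but dissimilar in outcomes.

    X: (n, d) array of already-normalized feature vectors
    scores: (n,) array of model outputs
    feature_tol / outcome_tol: similarity thresholds -- illustrative assumptions.
    """
    violations = []
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            feature_dist = np.linalg.norm(X[i] - X[j])
            outcome_dist = abs(scores[i] - scores[j])
            if feature_dist <= feature_tol and outcome_dist > outcome_tol:
                violations.append((i, j))
    return violations

# Toy usage: two nearly identical applicants receive very different scores.
X = np.array([[0.70, 0.30], [0.71, 0.31], [0.10, 0.90]])
scores = np.array([0.9, 0.4, 0.2])
print(individual_fairness_violations(X, scores))  # [(0, 1)]
```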

The more or less opposite point of view holds that demographic groups should, for the most part, have similar outcomes, despite variation between their members. This group fairness definition is in line with civil rights law in the US and UK, and is somewhat controversial. It evolved into the concept of disparate impact (which I wrote about in a previous article).
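A common operationalization of disparate impact (again my sketch, not something prescribed by the paper) is the four-fifths rule: the selection rate for the least-favored group should be at least 80% of the rate for the most-favored group. The 0.8 threshold and group labels below are illustrative.

```python
def disparate_impact_ratio(decisions_by_group):
    """Ratio of the lowest group selection rate to the highest.

    decisions_by_group: dict mapping group label -> list of 0/1 decisions.
    A ratio below 0.8 is the conventional "four-fifths rule" red flag.
    """
    rates = {g: sum(d) / len(d) for g, d in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact_ratio({
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% selected
})
print(rates, ratio)  # ratio = 0.5 -> potential disparate impact
```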

There is some agreement among academics that, depending on which type of Fairness you aim for, individual or group, the definitions and their implementations rest on different beliefs about the world, and the two worldviews are incompatible: 

  • What-you-see-is-what-you-get (WYSIWYG): the observed data is taken to faithfully reflect the underlying reality, so data scientists typically use whatever data is available without modification.
  • We're-All-Equal (WAE): Within the scope of the model, all groups are the same. 

More importantly, a single algorithm cannot logically accommodate both simultaneously, so data scientists and AI developers must be clear at the outset which worldview they are taking.

In the individual fairness model, the assumption is that the observation processes that generate data for machine learning are structurally biased (first bulleted assumption above). As a result, there is justification for seeking nondiscrimination against individuals.

If you believe that (your observed) demographic groups are fundamentally similar, group fairness mechanisms guarantee the adoption of nondiscrimination: similar groups receiving equal treatment. 

  • Under a WYSIWYG assumption, individual Fairness can be guaranteed
  • Under a WAE assumption, nondiscrimination can be guaranteed
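To see why these two guarantees pull in different directions, here is a small numeric sketch (mine, not the paper's): if one group's observed scores are depressed by a biased measurement process, a single threshold that treats like observed scores alike produces unequal group outcomes, while equalizing group outcomes requires treating identical observed scores differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Construct space: both groups have the same "true" qualification distribution.
true_a = rng.normal(0.60, 0.10, 10_000)
true_b = rng.normal(0.60, 0.10, 10_000)

# Observed space: group B's scores are depressed by a biased observation process.
obs_a = true_a
obs_b = true_b - 0.05

# One threshold on observed scores treats like observed scores alike
# (the WYSIWYG / individual-fairness stance on the observed data)...
threshold = 0.60
rate_a = (obs_a >= threshold).mean()
rate_b = (obs_b >= threshold).mean()
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")  # unequal group outcomes

# ...while equalizing group rates (the WAE stance) would require different
# thresholds per group, i.e. treating identical observed scores differently.
```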

 


Algorithms make predictions about individuals as a mapping from information about people (a feature space) to a space of decisions (a decision space). It is easy to imagine two different types of spaces: construct spaces and observed spaces. Construct spaces are what we imagine the data in the feature space represents (for example, that people with low FICO scores are an elevated risk for auto insurance). 

Constructs are the idealized features and decisions we wish we could use for decision-making. Observed features and decisions are the measurable features and outcomes that are actually used to make decisions. This construct/observed distinction, applied to both features and decisions, is the framework for deriving a mathematical model proving the incompatibility of different fairness models, depending on the data scientist's worldview. 

The Construct Feature Space (CFS) represents our best current understanding of the underlying factors and is contingent on ideas about how to decide in that context. In other words, the modeler may be selecting features to stand in for attributes that aren't in the data, such as productivity or threat level, projecting their own leanings and prejudices. 

Similarly, the Construct Decision Space (CDS) is the space of idealized outcomes of a decision-making procedure; for example, which processes to redesign or what steps to take to rectify security gaps. The problem is that these outcomes are not explicitly derived in the model and depend on interpretation.

The Observed Feature Space (OFS) contains the observed information about people: data generated by observation or recorded by third parties, such as transactions.

The Observed Decision Space (ODS) contains the observed decisions from a decision-making procedure, generated by an observational process mapping from the construct decision space.
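A schematic of the four spaces and the mappings between them (my sketch of the framework, with made-up types, names, and numbers) might look like this:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical stand-ins for points in each space.
ConstructFeatures = Dict[str, float]   # CFS: idealized attributes (e.g. "risk")
ObservedFeatures = Dict[str, float]    # OFS: measured proxies (e.g. "fico_score")
ConstructDecision = str                # CDS: idealized outcome we wish for
ObservedDecision = str                 # ODS: outcome actually recorded

@dataclass
class FairnessFrame:
    # Observation process: how constructs become recorded data (may be biased).
    observe: Callable[[ConstructFeatures], ObservedFeatures]
    # Deployed mechanism: what the trained model actually computes (OFS -> ODS).
    decide_observed: Callable[[ObservedFeatures], ObservedDecision]
    # Idealized mechanism: the decision we would make on constructs (CFS -> CDS).
    decide_construct: Callable[[ConstructFeatures], ConstructDecision]

# Toy instantiation: the observation step injects a distortion.
frame = FairnessFrame(
    observe=lambda c: {"fico_score": 850 - 400 * c["risk"]},
    decide_observed=lambda o: "approve" if o["fico_score"] >= 650 else "deny",
    decide_construct=lambda c: "approve" if c["risk"] <= 0.5 else "deny",
)

applicant = {"risk": 0.4}
print(frame.decide_construct(applicant))                # idealized decision
print(frame.decide_observed(frame.observe(applicant)))  # deployed decision
```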

This brings us to the issue of Fairness. Individual Fairness means similar individuals (for the task) in the CFS should receive similar CDS decisions.

Fairness and Nondiscrimination

Existing data science and machine learning, at their most basic, create transformations (through the operation of their algorithms) between observed features and observed decisions. These are not just academic exercises; they are applied as decision-making tools in the real world. Fairness, on the other hand, is a mechanism that maps individuals to Construct decisions based on their Construct features. Algorithmic Fairness aims to develop real-world mechanisms whose conclusions match those generated by the Construct mechanisms. 

Individual Fairness. Fairness is an underlying and potentially unobservable phenomenon about people. Since the output of a model is a mapping, Fairness on an individual basis is defined as similar individuals (for the model) receiving similar decisions.
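One standard way to make this precise, not spelled out in the article but common in the literature (it goes back to Dwork et al.'s "Fairness through Awareness"), is a Lipschitz-style condition on the decision mapping. The task-specific similarity metric is itself a modeling assumption:

```latex
% Individual fairness as a Lipschitz condition on the decision mapping M.
% d_X is a task-specific similarity metric on individuals; d_D compares
% (distributions over) decisions. Both metrics are modeling assumptions.
\[
  d_D\bigl(M(x),\, M(y)\bigr) \;\le\; d_X(x,\, y)
  \qquad \text{for all individuals } x, y .
\]
```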

Nondiscrimination. Although the fairness definition applies to individuals, groups may share characteristics (such as race or gender), and Fairness can be assessed by comparing these group characteristics (or combinations of them). Group membership can be determined by immutable characteristics or by characteristics protected for historical reasons. It is usually considered unacceptable (and sometimes illegal) to use group membership as part of a decision-making process. Therefore, nondiscrimination is defined as follows: groups that are similar (for the task) should, as a whole, receive identical decisions.
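A simple group-level check consistent with that definition (again an illustrative sketch, not a measure prescribed by the paper) is the statistical parity difference: the gap between each group's rate of favorable decisions and the overall rate.

```python
def statistical_parity_gaps(decisions_by_group):
    """Difference between each group's favorable-decision rate and the overall rate.

    decisions_by_group: dict mapping group label -> list of 0/1 decisions.
    Gaps near zero mean the groups, as wholes, receive roughly identical decisions.
    """
    all_decisions = [d for ds in decisions_by_group.values() for d in ds]
    overall = sum(all_decisions) / len(all_decisions)
    return {g: sum(ds) / len(ds) - overall for g, ds in decisions_by_group.items()}

print(statistical_parity_gaps({
    "group_a": [1, 1, 0, 1],   # 75% favorable
    "group_b": [0, 1, 0, 0],   # 25% favorable
}))
# {'group_a': 0.25, 'group_b': -0.25} -> a 50-point gap between the groups
```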

My take

I've summarized much of the theoretical positioning and mainly presented the conclusions. In previous articles about Fairness, I wrote about the issues, the many technical tools available, and their limitations. I thought this work on worldviews, constructs, and their implications was interesting enough to cover. The gap between what is observed and what is taken for granted is natural, but awareness of it is needed to apply the proper framework for evaluating Fairness.

© 2021 LeackStat.com