
AI ethics is growing up - towards an AI maturity model organizations can use

 

There has been an interesting development in the AI Ethics club. For the past few years, a growing number of practitioners and organizations have been attempting to solve AI's manifest problems by teaching companies ethics.

This is a wholly inadequate approach: students may understand the subject matter but have no direction on how to apply it to their work.

The problem is that most of these "ethicists" have no background in either AI or the broader DevOps process. Nevertheless, they insert themselves as consultants and propose solutions that are neither workable nor logical.

Getting an AI project funded in an organization takes an understanding of the company's process. Implementing an AI project in production takes care and an understanding of how it will interact with other systems, especially for AI applications that exhibit emergent properties as they "learn."

One troubling trend is that a few of the more influential AI Ethics firms have merely been picking the brains of experts to get up to speed, unlike more responsible organizations that are staffing up with experienced people.

But there is good news: some influential actors in the AI Ethics field have understood this, and have recently done one or both of the following:

  • Recruited staff with corporate development skills, preferably with AI experience, to act as traditional consultants and attack a problem professionally: from the selection of a problem to solve, to data governance, to the craft of building and testing a model for, among other things, credible results and fairness.
  • Developed material to guide these efforts, minimizing talk of ethics and maximizing advice for building successful AI without causing harm to people or the organization. For example, the AI Ethics Maturity Model published by Salesforce and authored by my colleague Kathy Baxter.

This is a promising development, though I don't necessarily agree with all of the methodologies I've had a chance to review. My team has been pursuing these engagements for a while. Our template includes issues of diversity and inclusion, gender equity, privacy, governance, conflicts of interest, insensitivity, and inequality.

 


 

We start with Initial Ideation:

1. AI can do all kinds of wonderful things:

  • Unravel complexity in your supply chain
  • Automate repetitive tasks
  • Real-time chatbot systems
  • Augmented (business) intelligence
  • Customer recommendation engines
  • Customer churn modeling
  • Dynamic or demand pricing strategies
  • Customer segmentation and market research
  • Fraud detection
  • Sales forecasting

2. Be careful when starting with a pilot that:

  • Is (usually) a known application (low-hanging fruit).
  • Has no impact and therefore does not increase your credibility.
  • Neither proves anything nor develops around a compelling issue.

3. If not applied with care, AI can harm your business. Some examples of horrible things:

  • Discriminatory practices
  • Inadequate data governance
  • Intrusive personalization 
  • Your new model performs worse than the one it replaces
  • Overreaching, like IBM's Watson Oncology model
  • Your model predicts the past, not the future
  • Unanticipated disruption in upstream and downstream systems and processes
  • Adoption failure

4. Organizations that successfully navigate these risks typically exhibit the following best practices:

a. They ensure that they understand some of the ethical subtleties of bias in their data, their models, and themselves.

b. They understand some of the idiosyncrasies and lack of transparency in machine learning.

c. They understand what the model will tell them and what it won't.

d. They make strategic choices about where to apply these methods. AI resources are scarce and expensive. You want to use them where you have the most significant benefit.

e. To begin, they consider pilot projects, and engage external resources.

f. The definition of success and associated metrics are developed collaboratively by the technical specialists and the executives accountable for business success (though those metrics should be evident in a pilot).

g. Required changes in systems, processes, and training are anticipated and planned for. Both business and technical personnel are responsible for executing the operational details of moving from model building to functional solution (not necessarily in a pilot). But they plan for how the model will evolve and how it will interact with other systems.

h. There are sufficient resources devoted to data quality and data governance, and they focus not only on the technical aspects of data, but on how it is used in daily business practice.

i. An effective approach will identify the potential risks of using AI and suggest improvement strategies. For example, our preliminary assessment takes about 4-8 weeks, depending on the size and complexity of your organization.

You will notice that nowhere did I mention ethics. A few years ago, Harvard and the other premier MBA programs introduced ethics courses into their curricula. I think everyone pretty much treated it as a joke - essentially a form of virtue signaling without actual substance. Therefore, the focus should be on creating an AI development process that is capable of ferreting out bias, unfairness, disinformation, and privacy intrusion while focusing on the business impacts.

 


 

In machine learning, the development of algorithms that create predictive models from data is grounded in statistics, not neuroscience or psychology. Models are designed to perform known tasks and do not rely on general intelligence. The first mistake is overestimating what a model can tell you.

  • Be careful with algorithms designed to operate at high volume. This leads to uniformity, which leads to problems. It may be appealing to have a model that can read 10,000 resumes a day, but its recommendations may be too uniform to be valuable.
  • Products you use with embedded AI must be considered. Not many vendors are willing to disclose their proprietary algorithms, but you bear the responsibility for the result.
  • Data sourced from outside your organization, or the complexity of blending multiple data sources, is the leading cause of errant AI applications. 
  • The "social context" - refers to people. Anything that affects people is in the social context and is subject to meticulous analysis. 
  • Fairness: this is the most challenging aspect to understand. Fairness has many definitions and is context-dependent. New mathematical models are emerging to test fairness; a minimal sketch of one such test follows this list.
  • Subsequential bias covers the secondary and tertiary unintended effects of your model. As your model operates, no matter how thoroughly you scrubbed it of unethical aspects, its results can create opportunities for unethical secondary and tertiary effects.
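
To make "fairness has many definitions" concrete, here is a minimal sketch of one such test: demographic parity, which asks whether each group receives positive outcomes at roughly the same rate. The column names and toy data are hypothetical, chosen only for illustration:

    # Minimal demographic-parity check (illustrative; names are assumptions)
    import pandas as pd

    def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
        """Difference between the highest and lowest positive-outcome
        rates across groups; 0.0 means perfectly equal rates."""
        rates = df.groupby(group)[outcome].mean()
        return float(rates.max() - rates.min())

    # Toy data: loan approvals for two applicant groups.
    applications = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0],
    })

    print(demographic_parity_gap(applications, "approved", "group"))  # 0.33

Demographic parity is only one definition; others, such as equalized odds, can conflict with it in the very same application, which is precisely why fairness is context-dependent.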

AI has, to some extent, been introduced in every business. Companies are exposed to new risks, such as bias in the AI application, fear of job loss due to automation, privacy violations, and discrimination. Applied Ethics goes further than those currently well-known topics.

Beyond the categories listed above, there are prevalent causes of problems with AI development that aren't, strictly speaking, ethical issues, but can cause them:

  • Data: ML isn't developed in Excel. The volume of data needed for an ML model is vastly more than a human can examine for errors or faults. Data quality tools are helpful to a point but only for one data source at a time. Merging tables creates hidden problems that even current data management tools don't always spot. 
  • ML and even deep learning can cause unpredictable errors when facing situations that differ from the training data. This is because such systems are susceptible to "Shortcut Learning": statistical associations in the training data allow the model to produce correct answers for the wrong reasons. Machine learning, neural nets, and deep learning do not learn concepts; instead, they learn shortcuts that connect answers on the training set.
  • Adversarial Perturbations: adversarial attacks involve generating slightly perturbed versions of the input data that fool the classifier (i.e., change its output) while remaining almost imperceptible to the human eye (see the sketch after this list).
  • Immutability: Great care must be taken to ensure the model cannot be tampered with. 
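
To illustrate the adversarial perturbation bullet above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a common technique for generating such perturbations. The model, input shapes, and epsilon are hypothetical; this is an illustration of the idea, not a hardening recipe:

    # FGSM sketch: nudge each input value by epsilon in the direction
    # that most increases the classifier's loss (illustrative only).
    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Per-pixel changes bounded by epsilon: nearly imperceptible to a
        # person, yet often enough to flip the model's prediction.
        return (x + epsilon * x.grad.sign()).detach().clamp(0.0, 1.0)

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(1, 1, 28, 28)    # a fake "image"
    y = torch.tensor([3])           # an arbitrary label
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())  # perturbation never exceeds epsilon

Defenses such as adversarial training exist, but the practical point is the one above: a model that looks accurate on clean data can still be trivially manipulated.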

Other elements can override the ethical process, such as senior management and the work environment. For example, certain pressures come into play:

  • When an organization pressures development teams, the ethics risks increase.
  • You adopt an "It's only the math" excuse, or "That's how we do it."
  • You engage in fairwashing: concocting misleading excuses for the results.
  • You don't know that you're doing these things.
  • The whole process is complicated and opaque in operation.
  • The organization is not used to introspection before embarking on a solution.
  • There is an "aching desire" to do something cool that clouds your judgment.

Often overlooked are those undesirable effects of AI that do not directly involve people: those that promote, excuse, or cause damage to the environment; those that cause loss of property; and those that, when embedded, cause breakdowns in automated processes. They may not be considered unethical, but they are just as dangerous to a company's brand or its ability to fulfill its commitments in a supply chain.

 


 

My take

Instruction in ethics has proven ineffective in helping organizations deliver trustworthy applications. The ethics community seems to be evolving into one of professional services, with skill in all aspects of MLOps. This is a positive step. However, organizations like the EU, UNESCO, and many others will continue to pound the ethical aspect - to the detriment of useful guidance.

© 2021 LeackStat.com