There has been an interesting development in the AI Ethics club. For the past few years, a growing number of practitioners and organizations have been attempting to solve AI's manifest problems by teaching companies ethics.
This is a wholly inadequate approach: students may come to understand the subject matter, but they get no direction on how to apply it to their work.
The problem is that most of these "ethicists" have no background in either AI or the broader DevOps process. Nevertheless, they insert themselves as consultants and propose solutions that are neither workable nor logical.
Getting an AI project funded in an organization takes an understanding of the company's processes. Implementing an AI project in production takes care and an understanding of how it will interact with other systems, especially for AI applications that exhibit emergent properties as they "learn."
One troubling trend is that a few of the more influential AI Ethics firms have merely been picking the brains of experts to get up to speed, rather than, like more responsible organizations, staffing up with experienced people.
But there is good news: some influential actors in the AI Ethics field have understood this, and have recently begun to change their approach.
This is a promising development, though I don't necessarily agree with the methodologies I've had a chance to review. My team has been pursuing these engagements for a while. Our template covers issues of diversity and inclusion, gender equity, privacy, governance, conflicts of interest, insensitivity and inequality.
We start with Initial Ideation:
1. AI can do all kinds of wonderful things.
2. Be careful when starting with a pilot.
3. If not applied with care, AI can harm your business; examples of horrible outcomes abound.
4. Organizations that successfully navigate these risks typically exhibit the following best practices:
a. They ensure that they understand the ethical subtleties of bias in their data, their models and themselves.
b. They understand some of the idiosyncrasies and lack of transparency in machine learning.
c. They understand what the model will tell them and what it won't.
d. They make strategic choices about where to apply these methods. AI resources are scarce and expensive. You want to use them where you have the most significant benefit.
e. To begin, they consider pilot projects and engage external resources.
f. The definition of success and associated metrics are developed collaboratively by the technical specialists and the executives accountable for business success (those metrics should be evident even in a pilot).
g. Required changes in systems, processes and training are anticipated and planned for. Both business and technical personnel are responsible for executing the operational details of moving from model building to functional solution (not necessarily in a pilot). Even so, they plan for how the model will evolve and how it will interact with other systems.
h. There are sufficient resources devoted to data quality and data governance, and they focus not only on the technical aspects of data, but on how it is used in daily business practice (see the sketch after this list).
i. An effective approach will identify the potential risks in using AI and suggest improvement strategies. For example, our preliminary assessment takes about 4-8 weeks, depending on the size and complexity of your organization.
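On point (h), here is a minimal sketch of what automated data-quality checks might look like in practice. The pandas DataFrame, column names, and thresholds are illustrative assumptions, not part of any particular assessment template; real data governance would also cover lineage, access control, and how the data is used day to day.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame, required_cols: list[str]) -> list[str]:
    """Return human-readable data-quality findings for a DataFrame."""
    findings = []
    for col in required_cols:
        # Completeness: required columns must exist and be mostly non-null.
        if col not in df.columns:
            findings.append(f"missing required column: {col}")
            continue
        null_rate = df[col].isna().mean()
        if null_rate > 0.05:  # 5% tolerance is an assumption; tune per dataset
            findings.append(f"{col}: {null_rate:.1%} null values")
    # Uniqueness: duplicate rows often signal an upstream pipeline bug.
    dup_rate = df.duplicated().mean()
    if dup_rate > 0:
        findings.append(f"{dup_rate:.1%} duplicate rows")
    return findings

# Toy example: one null income, one duplicated row, one missing column.
df = pd.DataFrame({"customer_id": [1, 2, 2], "income": [None, 60_000, 60_000]})
for issue in run_quality_checks(df, ["customer_id", "income", "region"]):
    print(issue)
```

Checks like these are cheap to automate, which is exactly why the resourcing question in (h) is about people and process, not tooling.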
You will notice that nowhere did I mention ethics. A few years ago, Harvard and the other premier MBA programs introduced ethics courses into their curricula. Everyone pretty much treated them as a joke: a form of virtue signaling without actual substance. The focus should instead be on creating an AI development process that is capable of ferreting out bias, unfairness, disinformation, and privacy intrusion while keeping sight of the business impacts.
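To make "ferreting out bias" concrete, here is a minimal sketch of one check such a process might run: the demographic parity gap, i.e., the difference in positive-prediction rates between groups. The column names, toy data, and 0.1 tolerance are assumptions for illustration, not a standard or any specific firm's method.

```python
import pandas as pd

def demographic_parity_gap(preds: pd.Series, group: pd.Series) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = preds.groupby(group).mean()  # positive rate per group
    return float(rates.max() - rates.min())

# Example: model approvals (1) and denials (0) broken down by a
# hypothetical protected attribute.
data = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})
gap = demographic_parity_gap(data["approved"], data["group"])
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # assumed tolerance; set it with domain and legal review
    print("flag for review before promotion to production")
```

A gate like this belongs in the development pipeline itself, so that a biased model is caught before deployment rather than after a classroom discussion.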
In machine learning, the development of algorithms that create predictive models from data is grounded in statistics, not neuroscience or psychology. Models are designed to perform known tasks and do not rely on general intelligence. The first mistake is overestimating what a model can tell you.
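As a toy illustration of that limit, the sketch below (using scikit-learn on synthetic data, my choice for the example) trains a model for one known task; it returns a probability for that task and nothing else.

```python
# A model trained for one known task returns a score for that task only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label from a known rule

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X[:1])[0, 1]
print(f"P(label=1) for one example: {p:.2f}")
# The model answers "how likely is the label, given these features?"
# It cannot say why, whether the relationship is causal, or anything
# about inputs unlike its training data.
```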
AI has, to some extent, been introduced in every business. Companies are exposed to new risks, such as bias in the AI application, fear of job loss due to automation, privacy violations, and discrimination. Applied ethics goes further than these now well-known topics.
Beyond the categories listed above, there are prevalent causes of problems with AI development that aren't, strictly speaking, ethical issues but can give rise to them.
Other elements, such as senior management and the work environment, can override the ethical process when certain pressures come into play.
Often overlooked are those undesirable effects of AI that do not directly involve people: those that cause, promote, or excuse damage to the environment; those that cause loss to property; and those that, when embedded, cause breakdowns in automated processes. They may not be considered unethical, but they are just as dangerous to a company's brand or its ability to fulfill its commitments in a supply chain.
Instruction in ethics has proven ineffective at helping organizations deliver trustworthy applications. The ethics community seems to be evolving into a professional-services community, with skill in all aspects of MLOps. This is a positive step. However, organizations like the EU, UNESCO, and many others will continue to pound the ethical aspect, to the detriment of useful guidance.