Learn the benefits of interpretable machine learning

Tech companies have developed machine learning models and algorithms at a remarkable pace in recent years. Those familiar with the technology likely remember a time when, for instance, bank personnel and loan officers were the ones who ultimately decided whether you were approved for a loan. Today, models are trained to handle such decisions at scale.

It's important to understand how a given model or algorithm works and why it makes the predictions it does. The first chapter of Interpretable Machine Learning with Python, written by data scientist Serg Masís, introduces interpretable ML: the ability to interpret ML models and find meaning in their predictions.
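
To make the idea concrete, here is a minimal sketch of an intrinsically interpretable model, in the spirit of the loan-approval scenario above. This is an illustration, not an example from the book: the feature names, synthetic data, and labeling rule are all assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical loan-application features (synthetic data for illustration)
rng = np.random.default_rng(0)
n = 500
credit_score = rng.normal(650, 50, n)
annual_income = rng.normal(50_000, 15_000, n)
debt_to_income = rng.uniform(0.0, 0.6, n)
X = np.column_stack([credit_score, annual_income, debt_to_income])

# Toy rule used only to generate approval labels for the sketch
y = ((credit_score > 640) & (debt_to_income < 0.4)).astype(int)

# Logistic regression is interpretable by inspection: each coefficient
# shows how a feature pushes the log-odds of approval (in standardized
# units here, because of the scaler)
pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefs = pipe.named_steps["logisticregression"].coef_[0]
for name, coef in zip(["credit_score", "annual_income", "debt_to_income"], coefs):
    print(f"{name}: {coef:+.3f}")

A positive coefficient on credit_score and a negative one on debt_to_income would let a loan officer, or a regulator, follow the model's reasoning directly, which is exactly what an opaque model does not offer.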

The importance of interpretability and explainability in ML

To show that this is more than theory, the chapter outlines use cases where interpretability is not just applicable but necessary. A climate model, for instance, can teach a meteorologist a great deal if it is easy to interpret and can be mined for scientific knowledge. In another scenario, the algorithm behind a self-driving vehicle will have points of failure; it must be debuggable so developers can address them. Only then can it be considered reliable and safe.

This chapter makes clear that interpretability and explainability in ML are related concepts, but explainability goes further: it requires that a model's inner workings have human-friendly explanations.
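
As a rough sketch of that distinction, and again not an example from the book, the snippet below trains a black-box model and then applies a post-hoc, model-agnostic technique, scikit-learn's permutation importance, to extract a human-friendly account of which features matter. The synthetic dataset is an assumption made for illustration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for, e.g., loan applications
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is accurate but not readable by direct inspection
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure how much the
# test score drops; a large drop means the model leans on that feature
result = permutation_importance(forest, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature_{i}: {mean:.3f}")

The forest's inner workings stay opaque, but the importance scores give a human-friendly summary of its behavior; closing that gap with genuine explanations of the model's internals is what explainability demands beyond plain interpretability.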

Interpretable ML is beneficial for businesses

These concepts deliver practical benefits when businesses apply them. For starters, interpretability can lead to better decision-making: when a model is tested in the real world, its developers can observe its strengths and weaknesses. The chapter offers a plausible example in which a self-driving car mistakes snow for pavement and crashes into a cliff. Knowing exactly why the car's algorithm confused snow with road points the way to improvements, because developers can change the algorithm's assumptions to avoid similarly dangerous situations.

Businesses also want to maintain public trust and a good reputation. As a relevant example, the chapter cites Facebook's model for maximizing digital ad revenue, which in recent years has inadvertently shown users offensive content and disinformation. The remedy would be for Facebook to examine why its model surfaces this content so often and then commit to reducing it. Interpretability plays a crucial role here.

In the following chapter, Masís articulates his belief that interpretable ML will lead to more trustworthy and reliable models and algorithms, which will in turn help businesses earn public trust and become more profitable.

Click here to download chapter 1.

© 2022 LeackStat.com