      What Is Explainable AI?

      Explainable AI allows users to understand how an AI model makes predictions or arrives at its results. Learn more about what explainable AI is, how to apply popular explainable AI techniques (such as LIME, Grad-CAM, and occlusion sensitivity), and why explainability matters when selecting an AI model.

      Explainable AI is particularly useful when you are working with models that are not inherently explainable, such as deep learning models. It helps ensure that an AI model is functioning as intended and is especially important in safety-critical industries, where AI models must be highly reliable and trustworthy.

      Published: 29 Jan 2024

      Explainable AI is an emerging field that aims to explain the behavior of AI models in intuitive ways. The goal is to help users understand how an AI model makes predictions or arrives at its results. An AI black box is an AI system whose internal workings are invisible to the user: the model makes decisions without providing any explanation. For example, an AI model might correctly classify an image, but you don't know why.

      Engineers and scientists don't like black boxes. They want to know what's happening inside the box. Explainable AI aims to unbox black boxes: explainability techniques help show why, and potentially how, an AI model arrived at its response. Explainability often comes at the expense of predictive power and accuracy. Deep learning models have great predictive power, but they are the hardest to understand.

      On the other end of the spectrum, linear regression and decision trees are inherently explainable. Decision trees make it easy to trace every step toward a prediction and quickly understand why a model makes a decision, as the sketch below illustrates. Explainable AI techniques are useful when you are working with models that are not inherently explainable, such as deep learning models. In this case, popular techniques such as Grad-CAM, LIME, and occlusion sensitivity can be used to visualize which features of the input influence the decision of the AI model.
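
      As a minimal sketch of what inherent explainability looks like in practice, the following MATLAB snippet trains a decision tree on the fisheriris data set that ships with MATLAB and displays every split as a readable rule. It uses fitctree and view from Statistics and Machine Learning Toolbox; the predictor names are illustrative assumptions, and the example is not from the video itself.

          % Train a small, inherently explainable classifier (assumes
          % Statistics and Machine Learning Toolbox is installed).
          load fisheriris                      % meas (features) and species (labels)
          tree = fitctree(meas, species, ...
              'PredictorNames', {'SepalLength','SepalWidth','PetalLength','PetalWidth'});
          view(tree, 'Mode', 'graph')          % every split is a human-readable rule
          label = predict(tree, meas(1,:))     % follow the branches to see why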

      So explainable AI helps users gain confidence in AI decisions and can be useful when comparing the performance of multiple models. You want to choose the model that makes the right decisions for the right reasons, similar to selecting a good colleague to work with on your next project. In the example image, you can observe that the mapping of the most important features differs slightly between explainability techniques. This is because the underlying methodologies are different.

      LIME approximates the behavior of a complex model with a simpler, more interpretable model, such as a regression tree. The importance of input features to the simple model then serves as a proxy for their importance to the deep learning model. Grad-CAM is a generalization of class activation mapping. It uses the gradients of the classification score with respect to the final convolutional feature map; the parts of the image with large values in the Grad-CAM map are the most important for the prediction.
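
      As a hedged sketch of how these two techniques can be applied in MATLAB, the snippet below runs the imageLIME and gradCAM functions from Deep Learning Toolbox on a pretrained SqueezeNet. The choice of network, sample image, and display settings are illustrative assumptions, not details from the video.

          % Compare LIME and Grad-CAM maps for one prediction (assumes Deep
          % Learning Toolbox and the SqueezeNet support package are installed).
          net = squeezenet;                                % pretrained CNN
          inputSize = net.Layers(1).InputSize(1:2);
          X = imresize(imread('peppers.png'), inputSize);  % sample image shipped with MATLAB
          label = classify(net, X);

          limeMap = imageLIME(net, X, label);   % fit a simple local surrogate model
          camMap  = gradCAM(net, X, label);     % gradients w.r.t. final conv feature map

          figure
          subplot(1,2,1), imshow(X), hold on
          imagesc(limeMap, 'AlphaData', 0.5), title('LIME')
          subplot(1,2,2), imshow(X), hold on
          imagesc(camMap, 'AlphaData', 0.5), title('Grad-CAM')
          colormap jet

      Overlaying each score map on the original image with partial transparency makes it easy to see, side by side, where the two techniques disagree about which regions drove the prediction.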

      Occlusion sensitivity is a simple technique. It computes a map of the change in activation when parts of the input are occluded with a mask. The occluding mask is moved across the input data, giving a change in classification score for each mask location. Imagine that the predicted label is correct, but the explainability technique shows that the most important features are located around the person's head instead of the musical instrument. Then you can reasonably conclude that the model is not functioning properly and should be tuned or replaced.
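
      A minimal sketch of this technique in MATLAB, assuming the occlusionSensitivity function from Deep Learning Toolbox; the mask size and stride values below are illustrative assumptions, chosen only to keep the run fast.

          % Slide a mask across the image and record how the class score
          % changes at each location (assumes Deep Learning Toolbox).
          net = squeezenet;
          inputSize = net.Layers(1).InputSize(1:2);
          X = imresize(imread('peppers.png'), inputSize);
          label = classify(net, X);

          scoreMap = occlusionSensitivity(net, X, label, ...
              'MaskSize', 45, 'Stride', 22);   % coarser values run faster

          imshow(X), hold on
          imagesc(scoreMap, 'AlphaData', 0.5)
          colormap jet, colorbar               % high values mark decisive regions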

      As AI technology advances, it's increasingly being used to solve real-world problems. AI models must leave the desktop and go into production, where they're useful. Before deploying an AI model to production, use explainability techniques to make sure that you can trust the decisions of the model, that the best-performing model is the one sent to production, and that the model is functioning as intended.

      The growing need to explain model behavior is especially relevant in safety-critical or regulated industries, where AI models must be highly reliable and trustworthy. For example, explainability techniques can help uncover and address hidden biases. Check out our AI video playlist to learn more about explainable AI, how to apply machine learning and deep learning, and how to use AI with MATLAB in engineering applications.
