Level: Intermediate to Advanced
Prerequisite: None
The quest for accuracy often prompts data scientists to consider complex “black box” models. Accuracy is one important factor in a successful project, but adopting models that are difficult to interpret can feel like trading one problem for another. In recent years, however, techniques have emerged to address this trade-off: an explanatory layer can be built on top of black box models to help interpret them, explain them to others, and build trust in your AI solutions.
Explainable AI, usually referred to as XAI, is a set of techniques for providing these explanations. XAI provides two kinds of explanations: global and local. Global explanations describe overall patterns in the model, notably which variables are most and least important. Local explanations describe why a particular case received its prediction. For example, why was a specific loan predicted to default? Regulated industries are often especially interested in XAI, but anyone who is considering complex machine learning models can benefit from a knowledge of XAI techniques.
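To make the global/local distinction concrete, here is a minimal sketch in Python. It is not drawn from the session materials: the feature names and data are invented for illustration, and the model is a simple hand-rolled logistic regression rather than a black box, chosen so that the two kinds of explanation are easy to compute. A global explanation summarizes each feature's typical influence across all cases; a local explanation decomposes one case's score into per-feature contributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, standardized "credit" features (names are illustrative only).
features = ["income", "debt_ratio", "late_payments"]
X = rng.normal(size=(500, 3))
true_w = np.array([-1.5, 2.0, 1.0])  # debt ratio and late payments raise risk
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)

# Fit a logistic regression by gradient descent -- the "model" to explain.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Global explanation: mean magnitude of each feature's contribution
# across the whole data set -- which variables matter most overall.
global_importance = np.mean(np.abs(X * w), axis=0)

# Local explanation: per-feature contributions for one applicant --
# why did THIS case receive its score?
case = X[0]
local_contribs = case * w

print(dict(zip(features, global_importance.round(2))))
print(dict(zip(features, local_contribs.round(2))))
```

For an opaque model the same two questions are answered with model-agnostic tools (for example, permutation importance globally and SHAP or LIME locally), which the session surveys.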
This half-day session will introduce terms and concepts that can be applied in any setting. The focus will be on practical, traditional machine learning examples drawn from established industries. Participants will also see these concepts applied in brief examples using a credit scoring data set, but there will not be a hands-on component.
You Will Learn
- Why black box models are growing in popularity, and how that growth has prompted increased awareness and availability of XAI techniques
- Popular global and local explanation techniques
- How to interpret global explanations
- How to interpret local explanations
- Overall guidance for complex models and interpretability
Geared To
- Analytics practitioners
- Data scientists
- Machine learning engineers
- IT professionals
- Technology planners
- Consultants
- Business analysts
- Analytics project leaders