Prerequisite: None
Opaque "black box" models are more popular than ever before, their rise driven by recent developments in machine learning. But how do you harness their power while concurrently meeting needs for internal transparency and external regulation? What should analytics leaders know to help guide strategy and manage their teams?
Black box models such as deep neural networks, along with ensemble techniques such as Random Forest and XGBoost, are increasingly popular because of their predictive power. However, they lack the transparency of simpler models, creating a dilemma: how do you achieve the highest possible accuracy while providing the explainability that most industries demand? And how do you meet the ethical requirements of machine learning without that explainability?
Explainable AI (XAI) is the solution that data scientists increasingly seek. In this workshop, we review why black box models are often preferred and show that a transparent model is sometimes a viable alternative. The focus is on giving analytics leaders enough context to weigh in on strategic decisions in support of their teams. We will also discuss when to avoid XAI altogether by building models that are inherently interpretable, an approach known as interpretable machine learning (IML).
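To make the contrast concrete, here is a minimal sketch of the post-hoc XAI pattern. It is not taken from the workshop materials: it assumes scikit-learn is installed and uses the bundled breast-cancer toy dataset as a stand-in for a real business problem. An opaque Random Forest is fit, then permutation importance (one common model-agnostic XAI method) scores each feature by how much held-out accuracy drops when that feature is shuffled.

```python
# Sketch only: scikit-learn's breast-cancer toy data stands in for a real problem.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque model: hundreds of trees, no single human-readable decision path.
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Post-hoc XAI: shuffle each feature and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The explanation here is bolted on after the fact, which is exactly the property that makes post-hoc XAI both useful and, in some regulated settings, insufficient.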
Keith McCormick will discuss strategies to balance the strengths and challenges of these contemporary techniques, including:
- Why the need for XAI is rapidly increasing
- A taxonomy of methods for XAI and IML, and when you should use them
- Adopting an optimal strategy: when to consider XAI, and when to avoid opaque models altogether
- How to build the best possible transparent models to avoid the need for XAI (see the sketch after this list)
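As a point of comparison for the last bullet, here is an equally small sketch of the IML alternative under the same assumptions as above: a depth-limited decision tree whose entire logic can be printed and audited, so no post-hoc explanation layer is needed.

```python
# Sketch only: same assumed dataset as the XAI example above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: every prediction follows a short, human-readable path.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.3f}")
print(export_text(tree, feature_names=list(X.columns)))
```

If the shallow tree's accuracy is close to the forest's, the transparent model may be the better strategic choice, which is the trade-off the case study exercise explores.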
The workshop includes a brief exercise in which participants weigh the pros and cons of two solutions to a case study: one built on an opaque model and one built on a transparent model.