3 Signs of a Good AI Model

As society strives to master artificial intelligence, it is also recognizing the need for explainable AI. This emerging trend will push organizations to create models that are both effective and good for society.

Businesses today are spending billions pursuing artificial intelligence. Their end goal is to develop thinking machines that will help them run their operations more effectively, increase revenue, and achieve their organizational goals. They are engaging data scientists to obtain the right data from multiple sources and generate models that, when paired with that data, enable the business to make large numbers of decisions effectively and efficiently without significant human intervention.

Until recently, the success of an AI project was judged only by its outcomes for the company, but an emerging industry trend suggests another goal -- explainable artificial intelligence (XAI). The gravitation toward XAI stems from demand from consumers (and ultimately society) to better understand how AI decisions are made. Regulations, such as the General Data Protection Regulation (GDPR) in Europe, have increased the demand for more accountability when AI is used to make automated decisions, especially in cases where bias has a detrimental effect on individuals.

What Is a Model?

The first step in understanding how to achieve XAI is to understand what a model is and how it works.

Simply stated, a model is a set of transformations that convert raw data into information, most often by applying statistics and advanced mathematical constructs such as calculus and linear algebra. What makes AI models different from traditional data transformations is that the model is constructed by employing algorithms to expose patterns from historical data; those patterns form the basis for the mathematical transformation.

Traditional data transformations are most often a set of directives and rules established and programmed by a developer to achieve a specific purpose. Because AI models learn from data, they can be retrained periodically to sense and adjust to changes in the underlying behaviors associated with the transformation.
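
To make the distinction concrete, here is a minimal sketch in Python (using scikit-learn) contrasting a hand-programmed rule with a model that learns its decision boundary from historical examples. The feature names, threshold, and data are hypothetical illustrations, not a real credit policy.

```python
# Minimal sketch: hand-coded rule vs. learned model (hypothetical data).
from sklearn.linear_model import LogisticRegression

# Traditional transformation: a rule a developer programs explicitly.
def approve_by_rule(income_k, debt_k):
    return income_k - debt_k > 20  # fixed, hand-chosen threshold (in $000s)

# AI model: the transformation is learned from historical outcomes instead,
# and can be retrained as new data arrives.
X_history = [[45, 5], [30, 28], [80, 10], [25, 24]]  # [income_k, debt_k]
y_history = [1, 0, 1, 0]                             # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X_history, y_history)
print(approve_by_rule(50, 8), model.predict([[50, 8]])[0])
```

The rule never changes until a developer edits it; the model's behavior shifts whenever it is refit on new history.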

One of the strengths of AI is that the process of creating a model can identify patterns that are not obvious or intuitive from simply looking at the data. This is also one of its weaknesses: AI is often viewed as a black box that produces results without explaining what is happening inside the model.

For a model to achieve XAI and be exceptional for both the business and its constituents, it must excel in three areas simultaneously: explainability, transparency, and provability.

Explainability

The first tenet of XAI addresses the need of information consumers to understand why the model generated a specific prediction. This requires that each individual prediction from the model can be traced back through the process to understand why it was generated and why an alternative was not.

For instance, in the case of determining whether a bank will extend credit to a customer, the bank needs to know what the optimal decision is. If the decision is to deny the consumer credit, it becomes imperative that the bank be able to explain how this decision was reached and, more important, what factors could be adjusted to bring the decision into the best interest of both the bank and the consumer. It is also important to society that the bank be able to prove that no discriminatory bias was involved in reaching the decision.
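
One common way to produce such an explanation is to decompose a model's score into per-feature contributions, showing which factors pushed the decision toward denial and by how much. This is a sketch assuming a linear model; the feature names and training data are hypothetical.

```python
# Minimal sketch: explaining one credit decision from a linear model.
# Feature names and training data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_k", "years_employed"]
X = np.array([[45, 5, 10], [30, 28, 1], [80, 10, 7], [25, 24, 2]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = credit extended, 0 = denied

model = LogisticRegression().fit(X, y)

applicant = np.array([28, 22, 3], dtype=float)
contributions = model.coef_[0] * applicant  # each feature's pull on the score
for name, value in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>15}: {value:+.3f}")  # most negative factors listed first
```

Sorting the contributions surfaces the factors that most drove the denial, which is exactly the information the bank needs to communicate back to the consumer.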

Transparency

Transparency is the ability to fully understand the decision process.

There are two aspects to transparency. The first is understanding the data being used. This includes data lineage: the ability to trace each input to the model back to its source. As the Facebook-Cambridge Analytica incident demonstrated, how data is sourced, and the ethics of how it was obtained, can have a huge impact on a business's ability to achieve its results.

The second aspect is understanding the process through which raw input data was transformed into a prediction. Some models, such as linear regression and decision trees, are much more amenable to showing the path from input to output in an understandable and transparent way. The challenge is that these easy-to-understand algorithms are not always the most accurate or effective given the data at hand or the decision to be made. This is where XAI is as much about trade-offs for the betterment of society as it is about finding exactly the right model for the specific data.
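
As an illustration of what that transparency looks like, a small decision tree can be rendered as human-readable rules, making the entire path from input to prediction inspectable. This sketch uses hypothetical data; scikit-learn's export_text does the rendering.

```python
# Minimal sketch: a transparent model whose full decision process can be
# printed as if-then rules. Data and feature names are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[45, 5], [30, 28], [80, 10], [25, 24], [60, 30]]  # [income_k, debt_k]
y = [1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income_k", "debt_k"]))
```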

Provability

The final goal, provability, refers to the level of mathematical certainty underlying the predictions. As vital as this is to the effectiveness of the prediction model, provability is often at odds with transparency and explainability.
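
That tension can be made visible by scoring a transparent model against a harder-to-explain one on the same data. This sketch uses a dataset bundled with scikit-learn purely as a stand-in; the point is the comparison, not the specific numbers.

```python
# Minimal sketch: quantifying the provability/transparency trade-off by
# comparing a transparent model with a harder-to-explain ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

transparent = DecisionTreeClassifier(max_depth=3, random_state=0)
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("shallow tree", transparent), ("boosted ensemble", black_box)]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name:>16}: mean accuracy {scores.mean():.3f}")
```

If the ensemble wins on accuracy, the business must decide whether the gain is worth the loss of a decision process it can fully show its constituents.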

The job of data scientist has become so sought after in recent years because demand far exceeds supply. Creating and validating these models is mentally challenging work but can have a huge payoff if done right. It takes significant mental prowess to apply advanced statistical and mathematical constructs to simulate intelligence in a machine and apply it to large quantities of data.

A data scientist differs from a data analyst in how they approach their work. Data scientists apply the scientific method to transform data into knowledge, and they often hold the mathematical provability of their results above all else. Data analysts, on the other hand, analyze data to answer specific business questions, looking for the best answer to a question the business has.

Often, data scientists are so focused on the result, and on their ability to prove mathematically that the answer is correct, that they ignore the need for it to be transparent and explainable to the information consumer. This is where data science teams need a balancing force: someone who can represent the explainability and transparency aspects of XAI and ensure that each improvement in provability does not compromise the other targets beyond an acceptable level.

The Bottom Line

As AI becomes more prevalent, businesses will be asked to strike a balance among explainability, transparency, and provability. To achieve XAI, enterprises will have to understand how decisions made during the creation of a model affect all three areas and what the associated trade-offs are. A data scientist who can develop a model that delivers all three will have created something of great value.

In the end, XAI must balance the demands of business with the needs of society.

About the Author

Troy Hiltbrand is the senior vice president of digital product management and analytics at Partner.co, where he is responsible for its enterprise analytics and digital product strategy. You can reach the author via email.

