

Why Truly Diverse Teams Create Better Analytics

Bias is everywhere -- even in analytics. The best way to control for bias is to incorporate a diversity of experiences and perspectives in the teams we use to design and maintain predictive, machine learning, and AI systems.

Bias is everywhere -- even (and especially) in the analytics we produce. This includes advanced analytics technologies such as predictive models, machine learning algorithms, and even artificial intelligence (AI) systems. Both analytics and the data we use as grist for them embody the known and unknown biases of the human beings who created them.

That's why market watcher Gartner says the best way to control for bias is to incorporate a diversity of experiences and perspectives in analytics teams. Granted, this isn't exactly a controversial idea.


Nevertheless, Gartner urges diversity in all aspects of advanced analytics, up to and including the selection of algorithms and data sources. In managing our analytics efforts -- as with managing our stock market or project portfolios -- diversity makes sound strategic sense.

Diverse Input Equals Better Output

"In the enterprise, an AI team decides what to learn, how to do it, and how to ensure the best outcomes. Over time, AI teams tend to develop conforming, self-perpetuating approaches to AI, hindering the team's ability to innovate and even to spot incorrect outputs," argues analyst Svetlana Sicular in a recent Gartner research publication. Ultimately, Sicular points out, "AI reflects what it learns." The greater the diversity of your inputs -- e.g., human perspectives, sources of data, and advanced analytics techniques -- the richer your outputs.

Claudia Perlich, chief data scientist with marketing analytics specialist Dstillery, cites a textbook example of built-in bias. "If you use a machine learning system to automatically screen job candidates ... your predictive model may propagate historical biases. If a model makes predictions [based on] what has happened in the past, it is bounded by [the selection criteria of] the past."

In this case, the bias isn't just historical -- i.e., biased in favor of what has happened -- it is static or reactionary, too. The predictive model projects and perpetuates the conditions of the past.

Ostensibly, the model is selecting for candidates whose skills, qualities, attributes, and experiences correlate with what worked in the past. If and when conditions change, however, that selection might not serve the company well. Worse, the model perpetuates past social, economic, and other conditions -- i.e., biases -- which might not work out so well for society or the economy, either.
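To make Perlich's point concrete, here is a minimal sketch in Python -- with entirely hypothetical data and feature names -- of a screening model trained on past hiring decisions. Even when the protected attribute is excluded from the features, a correlated proxy lets the model reproduce the historical bias.

# A minimal sketch (hypothetical data) of how a model trained on past
# hiring decisions reproduces the selection bias baked into those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups of candidates; skill is distributed identically in both.
group = rng.integers(0, 2, n)      # 0 or 1, a protected attribute
skill = rng.normal(0, 1, n)       # same distribution for both groups

# Historical hiring labels: past recruiters favored group 0, so the
# "ground truth" the model learns from already encodes the bias.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

# A proxy feature correlated with group (think zip code) sneaks the
# protected attribute back in even though we drop `group` itself.
proxy = group + rng.normal(0, 0.3, n)

X = np.column_stack([skill, proxy])   # note: `group` is NOT a feature
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# Despite identical skill distributions, predicted hire rates differ by
# group: the model has learned the historical bias through the proxy.

The sketch illustrates why simply dropping the sensitive attribute from the feature set is not enough: proxies leak it back in. Catching that kind of failure is precisely where a diversity of perspectives on feature selection and validation pays off.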

Many Types of Diversity

Gartner, to its credit, isn't just paying lip service to diversity. Sicular addresses the importance of diversity in age ("to avoid one-sided conclusions and predictions, and most of all, to select problems that are meaningful beyond an age bracket or a small niche"), in "soft" qualities (where, for example, "strengths in different virtues [such as patience and perseverance] contribute to a balanced team"), in domain expertise, and in other matters.

When Sicular says you need a "diversity of viewpoints," she means it.

There are obvious practical reasons for this, she argues. "Individuals representing different genders, cultures, races, and ethnicities bring in their unique perspectives. If AI serves a global population, this is necessary. For example, different cultures treat privacy differently and have different ethical norms," Sicular writes. "Additionally, people prepare more carefully to discuss issues with peers from other cultures compared to discussions in the homogeneous group."

She cautions against uncritical diversity, however. "Beware of 'diversity' that is not really diverse. For example, inclusion of women solves a problem only partially. Although gender diversity is very important, in technology women may be from the same stratum of society as men," Sicular writes.

In this and other areas, non-diverse diversity contributes to groupthink, which all but guarantees that unrecognized problems and biases persist. Groupthink is also inimical to two of the most important aspects of advanced analytics and AI development: the way we choose and frame problems and the tests or criteria we use to validate results.

Maintaining Fresh Perspectives

The interplay of different viewpoints and experiences promotes a kind of team dynamism. In practice, however, this dynamism can cool over time as people on a team become more comfortable with one another -- more familiar with the ideas, interests, priorities, and idiosyncrasies of their colleagues.

That's why it's a good idea to shake things up. "Periodically add new team members and rotate in temporary team members from other areas to bring fresh perspectives to the AI team. The problem of AI teams is not in grasping new ideas and technologies, but in established approaches that are difficult to change," Sicular urges.

An analytics practice is a composite: it puts different pieces of information together into new combinations that represent a richer, more contextual world. We should extend this logic to the teams we task with producing our analytics, too. Like the analytics themselves, the composition of these teams should reflect a rich diversity of experiences and perspectives.
