Why Women Make a Difference When Developing AI Solutions

Does gender matter in the development and implementation of artificial intelligence?

Artificial intelligence (AI) is a valuable asset to companies that use predictive modeling and automated tasks, but it still gets its intelligence from human-generated data. By its nature, this means AI is prone to bias, no matter how evolved or unbiased we humans like to think we are. In the constantly evolving field of AI, and in discussions around ethical AI use, many organizations are looking toward team diversity as the solution to biased AI.

What Does Biased AI Look Like in the Real World?

Let’s look at a situation that occurred with Amazon in 2015. In an attempt to optimize its hiring process, the company’s machine learning specialists created an AI program to screen resumes and select only the most qualified candidates. This ultimately backfired: the models based the characteristics of an ideal applicant on patterns in resumes that had been submitted to the company over a 10-year period.

Most resumes came from men, so the algorithm taught itself that male candidates were preferable. The AI penalized those who had attended women’s colleges and even discarded resumes that included the word “women,” as in “captain of women’s swim team.”

In another example, the AI system behind Apple’s credit card faced allegations in 2019 that it was discriminating based on gender. Entrepreneur David Heinemeier Hansson complained that the Apple Card gave him 20 times the credit limit of his wife, even though she had a better credit score. As more complaints came in, Apple co-founder Steve Wozniak weighed in and expressed concern that the algorithms used to set credit limits might be inherently biased against women.

In short, artificial intelligence software, if built or trained carelessly, can teach itself to discriminate based on gender or other factors.
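
To make the mechanism concrete, here is a minimal, hypothetical sketch (written with scikit-learn on invented toy data, not Amazon’s actual system or data) of how a resume classifier trained on historically skewed hiring decisions can teach itself to penalize the word “women”:

# Hypothetical illustration only: a toy resume screener trained on skewed data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented "historical" resumes: past hires were overwhelmingly male, so the
# label (1 = hired, 0 = rejected) correlates with gendered wording, not skill.
resumes = [
    "captain of chess club, python developer",          # hired
    "men's rugby team, java engineer",                  # hired
    "software engineer, men's soccer league",           # hired
    "captain of women's swim team, python developer",   # rejected
    "women's coding society president, java engineer",  # rejected
    "data analyst, women's chess club",                 # rejected
]
hired = [1, 1, 1, 0, 0, 0]

# Turn the resumes into word counts and fit a simple classifier on them.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" is negative: the model has taught
# itself that this word predicts rejection.
weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.2f}")

# Two otherwise identical candidates now receive different hiring scores.
candidates = [
    "captain of swim team, python developer",
    "captain of women's swim team, python developer",
]
print(model.predict_proba(vectorizer.transform(candidates))[:, 1])

Nothing in this sketch mentions gender explicitly; the bias enters entirely through the skewed training labels. That is exactly why the people who choose, review, and question the training data matter so much.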

Why Representation Is Needed

A study published by the World Economic Forum found that only 22 percent of AI professionals worldwide are female, compared to 78 percent who are male. The study also suggests that among professionals with AI skills, women are less likely than men to be promoted to senior roles, which means they have fewer opportunities to gain expertise in higher-profile and emerging skills.

When teams and organizations create AI solutions, a lack of diverse representation can greatly limit the diversity of ideas programmed into a machine learning algorithm. In my recent experience at Zelros, having more female colleagues brings different perspectives and creates a healthy dynamic that inspires more creativity, which has raised the bar for us all.

It's not that women are better at spotting biases or have some supernatural expertise in building AI solutions; the problem is simply their absence from the roles that decide what data is fed into a system.

This is how social media filters get built that only work on white skin, and how a computer gets trained to favor men over women. If nobody from the affected group is in the room to raise a red flag, products and AI services can go to market with these biases unchecked. Having more women in our company has ensured we hold ourselves accountable.

Conclusion

Unbiased AI is now a hot topic for brand reputation, making diverse teams more important than ever. Diversity and inclusion are vital to ensuring that innovative technologies do not contribute to existing inequalities. This means having more women directly involved, and it also means more diversity in general.

Companies are directly responsible for ensuring that their AI solutions are inclusive by design. At Zelros, our 2022 professional equality index score increased to 89/100, while most other companies are still aspiring to reach 50/50. Adding women to our teams not only leads to more inclusive AI but also brings dynamics that lead to higher team performance.

Hire people of different ages, genders, sexual orientations, cultures, religions, and ethnic backgrounds. For AI solutions to serve society at large, the companies that implement them must train their systems to protect the communities they serve. The more diverse your team is, the less biased your AI algorithms will be.

 

About the Author

Damien Philippon is a co-founder and COO at Zelros, an InsurTech provider bringing personalized insurance recommendations across channels. Damien has over 20 years of experience in IT and digital strategy globally. Prior to founding Zelros, he founded the management consultancy Magellan Consulting. Before that, he spent half of his career at Atos, an IT services and consulting company, where he led complex digital transformation projects and IT systems such as CRM and ERP for global Fortune 500 companies. He co-founded Zelros six years ago because he believes AI will help turn insurance into a more customer-centric industry. Philippon is based in Montreal and graduated from CentraleSupélec with a master of engineering degree in electrical and electronics engineering.

