Averting Algorithmic Angst
Algorithms may not be as impartial as we think. Avoiding problems with algorithms will demand collaboration and novel thinking across your entire organization.
- By Barry Devlin
- May 9, 2017
We used to believe that professionals such as financial advisers acted in the interest of their clients and that judges were impartial. Having been disappointed in such beliefs, many now turn to algorithms as potentially unbiased arbiters of truth. Sadly, the neutrality of algorithms is equally questionable.
Google was one of the first companies to suffer from algorithmic angst. Its initial business success was built on its PageRank algorithm. The algorithm's neutrality was at the heart of an oft-quoted statement from Google's 2004 IPO prospectus: "Don't be evil. We believe strongly that in the long term, we will be better served -- as shareholders and in all other ways -- by a company that does good things for the world even if we forgo some short-term gains."
As I discussed in a previous article, Google is now far from a one-trick algorithmic pony: its recent research and application of artificial intelligence (AI) and algorithms has been impressive. It is less clear how Google is achieving algorithmic neutrality.
Judging Acceptable Bias
Increasingly, as businesses go digital, algorithms are becoming part of mainstream operations and decision making. As they do so, every business and each BI department must tackle similar questions of algorithmic neutrality and ethics. Toward which outcomes is an algorithm optimizing? What are the bounds of acceptable bias in an algorithm? How do these bounds differ by geography, culture, time, and other factors?
Deep and subtle reasoning is required here. A degree in philosophy or theology might be a better starting point than one in pure computer science.
The recent United Airlines "(in)voluntary re-accommodation" incident raised interesting algorithmic -- not to mention PR and human rights -- concerns. According to one report, when no one was willing to volunteer to leave the flight, it was announced that "a computer would randomly select four people." However, this "random" selection followed boarding priority rules that protect minors and people with disabilities and take account of fare class and frequent-flyer status.
In short, the selection algorithm was deliberately biased for reasons both "good" (protection of minors) and arguably "evil" (favoring passengers of higher value to the airline).
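To see how a "random" selection can be anything but, consider a minimal sketch of such a rule-constrained picker. Everything here is an illustrative assumption -- the field names, the priority rules, and the ordering are invented for the example, not drawn from any airline's actual system:

```python
import random

# Hypothetical passenger records; the fields and values are illustrative
# assumptions, not any airline's real data model.
passengers = [
    {"name": "A", "minor": False, "disability": False, "fare_class": 3, "status": 0},
    {"name": "B", "minor": True,  "disability": False, "fare_class": 1, "status": 2},
    {"name": "C", "minor": False, "disability": True,  "fare_class": 2, "status": 1},
    {"name": "D", "minor": False, "disability": False, "fare_class": 1, "status": 3},
    {"name": "E", "minor": False, "disability": False, "fare_class": 3, "status": 0},
]

def select_for_removal(passengers, n):
    """'Random' selection that first excludes protected passengers, then
    prefers low frequent-flyer status and cheap fares; randomness only
    breaks ties."""
    eligible = [p for p in passengers if not p["minor"] and not p["disability"]]
    # Lower status and higher fare_class number (cheaper ticket) sort first.
    eligible.sort(key=lambda p: (p["status"], -p["fare_class"], random.random()))
    return eligible[:n]

selected = select_for_removal(passengers, 2)
```

The only genuinely random element is the tie-break in the sort key; the filter and ordering determine almost everything, which is exactly the deliberate bias described above.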
Societal Impact of Widespread Algorithms
Cathy O'Neil's 2016 book, the wonderfully titled Weapons of Math Destruction, prompts deeper concerns. O'Neil, a mathematician and data scientist who has worked in finance and e-commerce, now believes that algorithms increase inequality and actually threaten democracy.
She says, "Ill-conceived mathematical models now micromanage the economy, from advertising to prisons. They're opaque, unquestioned, and unaccountable, and they operate at a scale to sort, target, or 'optimize' millions of people. By confusing their findings with on-the-ground reality, most of them create pernicious WMD feedback loops."
This drive to process millions of people statistically -- even when it works correctly, and it often doesn't -- places the poor and underprivileged directly in the firing line of algorithmically derived decisions. The wealthy, in contrast, are treated as individuals and benefit from personal evaluation, whether biased or honest.
The implications of algorithmic neutrality and bias thus span from everyday business operations to wholesale societal impact. Such issues intersect with the company ethos and demand consideration at board level before any AI initiative is undertaken. Lessons from sociology, psychology, and philosophy, rather than from physics and pure mathematics, must be understood and applied. The responses of individuals and society to pervasive algorithmic decision making are unquantifiable for now. Anticipating and planning for such responses will make the difference between future business success and failure.
Enterprise and Business-Unit Decision Making
Implementation considerations at lower levels of the organization are no less challenging.
Understanding when and which algorithms are useful and valid requires substantial statistical and mathematical knowledge. It is vital to engage skilled and experienced statisticians, not only those who have recently emerged from college as data scientists. As I discussed last year, the results of polling prior to the U.S. presidential election show just how easy it is even for professionals to get it wrong.
As algorithms become more sophisticated, they become the ultimate "black boxes" of decision making. Explaining in everyday language how such decisions were reached -- in a way that makes sense to customers, to staff, and even in court -- remains an unsolved problem for the AI industry. Tom Davenport has even suggested that we should limit implementation to "models that are relatively interpretable."
These considerations and others suggest that avoiding severe algorithmic angst will demand extensive collaboration and novel thinking right across the organization, all the way from board level to members of the project teams. The possible outcomes -- both positive and negative -- of AI and algorithmic projects are broader and deeper than any BI initiative we've previously undertaken.
Dr. Barry Devlin is among the foremost authorities on business insight and one of the founders of data warehousing in 1988. With over 40 years of IT experience, including 20 years with IBM as a Distinguished Engineer, he is a widely respected analyst, consultant, lecturer, and author of "Data Warehouse -- from Architecture to Implementation" and "Business unIntelligence -- Insight and Innovation beyond Analytics and Big Data" as well as numerous white papers. As founder and principal of 9sight Consulting, Devlin develops new architectural models and provides international, strategic thought leadership from Cornwall. His latest book, "Cloud Data Warehousing, Volume I: Architecting Data Warehouse, Lakehouse, Mesh, and Fabric," is now available.