Algorithms or Democracy -- Your Choice
Is an algorithmically run democracy our future?
- By Barry Devlin
- September 8, 2017
"Would you trust an algorithm to pick the next U.S. president?" I clicked "No" in an online survey and was heartened that 70 percent of respondents agreed with me. Then I wondered about the value of that information. Surely the sample is skewed, if by nothing else, by the length and source of the article.
Dating from the week before the 2016 U.S. presidential election, the article opens with a scenario for the 2020 election in which your personal AI assistant suggests your vote and volunteers to handle the tiresome details. Evidence has since emerged that algorithms were deployed to influence voting in the last U.S. presidential election, as well as in the 2016 Brexit vote and the 2017 French presidential election.
An investigation by Carole Cadwalladr of The Observer last May claimed that "A shadowy global operation involving big data, billionaire friends of Trump and the disparate forces of the Leave campaign influenced the result of the EU referendum." The evidence adduced is convincing and disturbing.
Vyacheslav Polonski of the Oxford Internet Institute suggests that political campaigns can learn how to combat algorithmic interference in elections from the failure of #MacronLeaks to swing the French presidential election in May. However, he fails to note that in that case the swing required would have been enormous, whereas in the other two cases a small nudge was all that was needed.
We in the big data and analytics industry can hardly claim surprise that interested parties -- of any political shade -- would try to use algorithms to influence behavior. That is the main purpose of algorithmic suggestions at Google, Facebook, and other enterprises. I, among others, fret about the impact on our freedom when an algorithm influences even small purchasing decisions. However, given the enormous effect and value of influencing government (note the millions of dollars invested annually in lobbying), the application -- both overt and covert -- of technology to affect election outcomes was arguably inevitable.
The tools attempting such influence are becoming more insidious, reaching beyond the mainly textual fake news we have seen to date. Recent work by the University of Washington's Synthesizing Obama project shows how easy it is to manipulate video to convincingly portray anybody saying something they didn't actually say. Combined with the increasing reach of data collection, sharing, and analysis, as well as highly personalized and automated targeting, the opportunities for propagandists, con men, and mindless troublemakers have become almost endless.
Algorithms are dependent on the quality of their input data and the skills and integrity of their creators. Data, by definition, is historical, meaning that algorithms predict or propose a future statistically based on the past -- hence, those annoying ads for what you just bought.
More worrying is the reality that social media is now flooded with misinformation, sourced from humans and amplified by bots. Algorithms fed a diet of such input data can rapidly go rogue, as seen in Microsoft Tay's descent into racism. The outcome propagandists desire is, of course, a hardening -- subtle may suffice -- of public attitudes, driven by the bias of the content and the echo chamber of social media.
The Digital Disinformation Forum, convened at Stanford last June, declared the proliferation of deliberate misinformation to be "the most pressing threat to global democracy." As Philip Smith writes, "The insidious thing about information pollution is that it uses the Internet's strengths -- like openness and decentralization -- against it. ... There is a battle of epic proportions underway in the ecosystem of the Internet: profit, public relations, and politics vs. people's trust in what they read online."
Add to this manipulation of public thought the now commonplace dependence of politicians on "public opinion" as a basis for formulating policy, and the outline of a positive feedback loop is evident. The dangers of applying algorithms to social situations are already clear. Predictive policing, for example, now common across the U.S., increases patrols in statistically identified high-crime neighborhoods. It may reduce petty crime, but it often inflames violence between police and disenfranchised residents.
In a recent article, Maëlle Gavet asks whether AI-powered government is worth the risks involved. Given her background in business strategy and technology, her answers bear careful consideration. Contrary to some technologists who believe that algorithms can apply laws and regulations better and more immediately in real time than humans can, she concludes that AI "represents nothing less than a threat to democracy itself" and goes on to list five major problems.
Her final issue -- algorithms don't do nuance -- is a fundamental problem for democracy. Although touted as the rule of the majority, democracy can only succeed with the agreement of the minority. Real solutions to difficult problems demand leadership, nuance, and trade-offs, none of which are strengths of algorithms. Unfortunately, the same qualities are notable mostly for their absence in many of today's politicians.
An algorithmically run society based on big data may seem appealing in the light of current political dysfunction worldwide. However, it is also deeply inimical to the process we call democracy.