AI in the Crosshairs
At the 2017 World Economic Forum meeting, tech industry leaders openly discussed some of the ethical and societal challenges facing the current surge in big data-driven AI.
- By Barry Devlin
- February 17, 2017
At the 2017 World Economic Forum annual meeting in Davos, tech industry leaders finally began to openly admit to some of the challenges as well as the opportunities concerning the current surge in big data-driven artificial intelligence (AI).
These issues include a range of ethical, economic, and social implications that technologists and other experts have pointed out widely over the past year and more. I have also written about them here on TDWI Upside with particular reference to autonomous vehicles, which provide one of the most obvious routes for the widespread introduction of big data-driven AI into society.
A cynical view of this somewhat belated recognition of these challenges by our leaders would suggest that they are responding (out of self-preservation) to the widespread populist and nationalist political trends of the past year.
The current scapegoats for unemployment and other ills of Western economies are, of course, offshoring and immigration. However, automation and technological displacement have played at least an equal role, probably a bigger one. When (as seems likely) protectionist policies and "re-shoring" fail to deliver economic gains, one could expect technologies such as big data and AI to be next in the crosshairs.
A Future for Augmentation
What did the leaders of the tech industry have to say? A panel discussion including Ginni Rometty (CEO of IBM), Satya Nadella (CEO of Microsoft), and Joichi Ito (director of the MIT Media Lab) nicely summarized the state of thinking.
A common theme among participants was to emphasize the use of AI to augment humans in a wide variety of professional fields, such as medicine, education, and law, as well as in routine daily tasks such as scheduling meetings and organizing travel.
There are valid arguments for augmentation (rather than automation) here, including social resistance to robot doctors and the need for creativity and ethical judgments in professional tasks. Add a shortage of teachers and doctors in developing countries and an explosion of research information to be assimilated and assessed, and the case for augmentation is persuasive.
Effects of Automation
Nonetheless, downplaying or simply ignoring the perhaps more widespread use cases for automation and job displacement is not a good strategy. There is ample research that demonstrates the potential effects.
An ongoing study by McKinsey Global Institute concludes: "Given currently demonstrated technologies, very few occupations -- less than 5 percent -- are candidates for full automation. However, almost every occupation has partial automation potential, as a proportion of its activities could be automated. We estimate that about half of all the activities people are paid to do in the world's workforce could potentially be automated by adapting currently demonstrated technologies. That amounts to almost $15 trillion in wages." Note the phrase currently demonstrated technologies and consider the current extraordinary rate of advances in AI and the Internet of Things.
McKinsey's focus on activities rather than occupations emphasizes that augmentation and automation are often deeply intermixed. No doubt new jobs and paid activities will also emerge because of AI and big data. However, in a capitalist economy, most business eyes will be caught by the $15 trillion in wages that could be stripped from payrolls and added to the bottom line.
As a case in point, the Financial Times recently reported that automated compliance systems set up after the 2008 financial crisis are ready to replace thousands of jobs across the world's biggest banks. According to Richard Lumb, head of financial services at Accenture, many of the jobs created by banks in recent years for compiling and checking data on customers and transactions have already been moved offshore to lower-cost countries; in the next wave of automation, "they will simply disappear."
A similar pattern can be seen in the automobile industry, where President Trump's rather obsessive focus on restoring manufacturing jobs seems likely to fail largely due to a combination of ever more highly automated production and a long-term contraction of the market. (The trend is toward downsizing from millions of self-owned, self-driven automobiles to a significantly smaller fleet of autonomous, pay-per-ride vehicles.) Add the displacement of millions of truck-driving jobs and the economic, social, and political impact will be extreme, as discussed by universal basic income advocate Scott Santens as far back as May 2015.
Ethical Principles for AI
Returning to the World Economic Forum's panel on AI, perhaps the most practical point was made by Rometty, who listed IBM's three new principles for the cognitive (AI) era:
- Purpose: augmenting human intelligence
- Transparency: how AI is used and trained
- Skills: promoting acquisition and enhancement
At the CEO level, these principles are all good and needed, but more principles, and much greater depth, will be required.
As might be expected at such an event, the panel overall put a positive spin on the emerging world of AI, focusing on its potential benefits for humanity. In the current politically fearful and febrile climate, that's certainly necessary. However, given the still-dominant economic goal of driving productivity and growth, businesses implementing AI solutions will face significant ethical dilemmas before they even begin the design phase.
Dr. Barry Devlin defined the first data warehouse architecture in 1985 and is among the world’s foremost authorities on BI, big data, and beyond. His 2013 book, Business unIntelligence, offers a new architecture for modern information use and management.