TDWI Articles

Google Ups the Ante on AI

Google forges ahead with powerful techniques and discoveries in artificial intelligence. Where can we see these advances applied to BI?

In my series of articles last March, I noted that cognitive computing and related/overlapping concepts such as artificial intelligence (AI) and machine learning had seen slow uptake in business intelligence (BI). As 2016 draws to a close, a second look is warranted, given the enormous hype and publicity the topic has drawn in the intervening months. Although there has been limited progress in AI for BI in the interim, several fascinating developments have emerged in AI research, particularly from the wide world of Google.

Recent announcements suggest that the impressive advances seen in Google DeepMind AlphaGo's comprehensive defeat of Go world champion Lee Se-dol were only the first step on the journey. Until that moment, AI experts had expected it would take several more years before an AI could outwit a world champion. In fact, AlphaGo's actual game play was described as both beautiful and different from human styles of play. "Such moves cannot be produced by just incorporating human knowledge," said Doina Precup, associate professor in the School of Computer Science at McGill University in Quebec.

In a paper published in Nature in early October, the Google DeepMind team that built AlphaGo described the creation of a hybrid system -- a differentiable neural computer (DNC) -- by addition of dynamic external memory to a neural network. Their description: "Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data."

This system, like DeepMind's earlier Deep Q Network, is designed as a general-purpose learning agent, applicable beyond the specific problem domain it was trained on. Adding memory further expands the classes of problems it can address. The hybrid computer can already answer simple questions and reason its way around a map of the London Underground to find routes and shortest paths between stations, a problem that confounds many real visitors to London.
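For contrast, route-finding is the kind of task a conventional computer solves only when a programmer spells out the algorithm explicitly; the DNC's claim to fame is inferring such procedures from examples. A minimal conventional sketch (the station graph below is an illustrative fragment, not DeepMind's training data):

```python
from collections import deque

# Toy fragment of the London Underground as an adjacency list.
# (Illustrative stations and edges only.)
TUBE = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road"],
    "Victoria": ["Green Park"],
}

def shortest_route(start, goal):
    """Breadth-first search: returns the fewest-stops path, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TUBE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("Victoria", "Holborn"))
# -> ['Victoria', 'Green Park', 'Oxford Circus', 'Tottenham Court Road', 'Holborn']
```

The DNC arrives at equivalent answers without ever being given the search procedure, learning it from input/output examples alone.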

According to Alex Graves of DeepMind in an interview a full year earlier: "All the memory interactions are differentiable, making it possible to optimise the complete system. ... By learning how to manipulate their memory, Neural Turing Machines [precursors of DNC] can infer algorithms from input and output examples alone. In other words, they can learn how to program themselves." The 2016 paper doesn't make such lofty claims, pointing instead to future goals such as "representational engines for one-shot learning, scene understanding, language processing and cognitive mapping, capable of intuiting the variable structure and scale of the world within a single, generic model." Progress enough for now!

In a paper submitted in November, also from DeepMind researchers, the reinforcement learning approach (learning by directly maximizing cumulative rewards) is developed further so that agents rapidly adapt to the most relevant aspects of the task at hand. The authors offer an analogy: "Just as animals dream about positively or negatively rewarding events more frequently, our agents preferentially replay sequences containing rewarding events." Following on from previous work on neural networks learning to play Atari games, this reinforcement method averages 880 percent of expert human performance.
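The replay idea can be illustrated in a few lines: stored experience is sampled for further training with probability biased toward high-reward events rather than uniformly. A minimal sketch with made-up transitions (the weighting scheme is simplified for illustration and is not DeepMind's exact formulation):

```python
import random

# Toy replay buffer of (state, action, reward) transitions.
buffer = [
    ("s0", "left", 0.0),
    ("s1", "right", 0.0),
    ("s2", "jump", 1.0),   # rewarding event
    ("s3", "left", 0.0),
    ("s4", "right", 5.0),  # strongly rewarding event
]

def sample_prioritized(buffer, k, eps=0.01):
    """Sample k transitions, biased toward larger absolute rewards.

    eps keeps zero-reward transitions from never being replayed.
    """
    weights = [abs(r) + eps for (_, _, r) in buffer]
    return random.choices(buffer, weights=weights, k=k)

batch = sample_prioritized(buffer, k=3)  # mostly rewarding transitions
```

With these toy values, the two rewarding transitions account for well over 99 percent of the sampling weight, so the agent "replays" them far more often, just as the quoted analogy suggests.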

In yet another November paper, DeepMind has applied a neural network AI system to sentence-level lipreading. Using a corpus of audio and video recordings of 34 speakers who produced 1,000 sentences each, 28 hours of material in total, the LipNet system achieved nearly 95 percent accuracy, outperforming hearing-impaired human lipreaders, who achieved only 52 percent accuracy.

Also in November, Google announced that it too has been pushing boundaries, with its production Multilingual Neural Machine Translation System performing "zero-shot translation." This AI system enables translation between language pairs never previously encountered. For example, when the system has been trained on English <-> Japanese and English <-> Korean, it can then generate reasonable Korean <-> Japanese translations. How is the system doing this? The researchers examined the overall geometry of the translations and concluded that "the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network."
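The "interlingua" evidence rests on a geometric check: sentences with the same meaning, regardless of language, map to nearby points in the network's internal representation space, while unrelated sentences sit farther apart. A toy illustration of that check, using made-up vectors rather than real model output:

```python
import math

# Toy sentence embeddings (illustrative values only, not Google's model).
emb = {
    ("en", "the cat sleeps"):     [0.90, 0.10, 0.00],
    ("ja", "neko ga nemuru"):     [0.85, 0.15, 0.05],
    ("ko", "goyangi-ga janda"):   [0.88, 0.12, 0.02],
    ("en", "stocks fell today"):  [0.00, 0.20, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

same_meaning = cosine(emb[("en", "the cat sleeps")],
                      emb[("ja", "neko ga nemuru")])
diff_meaning = cosine(emb[("en", "the cat sleeps")],
                      emb[("en", "stocks fell today")])
assert same_meaning > diff_meaning  # same meaning clusters across languages
```

If meaning, not language, drives where a sentence lands in this space, translation between an untrained pair becomes a matter of decoding from a shared representation.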

These developments from Google alone illustrate just how fast AI techniques have been evolving over the past year or so. As mentioned, I have heard little new about the incorporation of AI into business intelligence by the mainstream vendors since my March articles, with the exception of ongoing work in IBM on Watson, which I'll discuss in my next article. I'd love to hear from any other vendors -- mainstream or niche -- who are building AI into their systems!

About the Author

Dr. Barry Devlin is among the foremost authorities on business insight and one of the founders of data warehousing in 1988. With over 40 years of IT experience, including 20 years with IBM as a Distinguished Engineer, he is a widely respected analyst, consultant, lecturer, and author of "Data Warehouse -- from Architecture to Implementation" and "Business unIntelligence -- Insight and Innovation beyond Analytics and Big Data" as well as numerous white papers. As founder and principal of 9sight Consulting, Devlin develops new architectural models and provides international, strategic thought leadership from Cornwall. His latest book, "Cloud Data Warehousing, Volume I: Architecting Data Warehouse, Lakehouse, Mesh, and Fabric," is now available.

