Why You Will Soon Use Deep Learning

Easy-to-use tools for deep learning will soon become available for mainstream consumption via packaged and SaaS applications as well as function-specific libraries.

Got deep learning? You soon will, according to Gartner.

Gartner says easy-to-use tools for deep learning will soon become available for mainstream consumption via packaged and software-as-a-service (SaaS) applications as well as function-specific libraries. Mainstream access to deep learning technology will "greatly impact most industries over the next three to five years," the market watcher writes in a recent report.

"Deep learning is here to stay. It is currently the most promising technology in predictive analytics for previously intractable data types for machine learning, such as images, speech, and video," write analysts Tom Austin, Alexander Linden, and Svetlana Sicular in the Gartner report, "Innovation Insight for Deep Learning."

"[Deep learning] also can deliver higher accuracy than other techniques for problems that involve complex data fusion."

(For more on the basic principles of deep learning, see "Styles of Deep Learning: What You Need to Know.")

The Coming of Deep Learning

Deep learning already powers several prominent mainstream applications, including Apple's intelligent personal assistant (IPA) Siri and Google's voice search.

It powers Amazon's Alexa IPA, too, along with the new image recognition (Amazon Rekognition) and voice recognition (Amazon Lex) services Amazon announced late last year.

Think of Amazon's Rekognition and Lex services as part of the first wave of deep-learning-as-a-service offerings. The fact that they're available now, as embeddable, developer-invokable services, suggests that deep learning technology will be commoditized at a rapid rate.
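
To give a sense of how low that barrier already is, here is a minimal sketch of label detection with Rekognition. It assumes the boto3 AWS SDK for Python, configured AWS credentials, and an image already uploaded to S3; the bucket and file names are hypothetical placeholders.

    # Minimal sketch: label detection with Amazon Rekognition via boto3.
    # Assumes AWS credentials are already configured; "my-bucket" and
    # "photo.jpg" are hypothetical placeholders.
    import boto3

    rekognition = boto3.client("rekognition")

    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
        MaxLabels=10,         # cap the number of labels returned
        MinConfidence=75.0,   # drop low-confidence guesses
    )

    for label in response["Labels"]:
        print(f"{label['Name']}: {label['Confidence']:.1f}%")

No neural network is trained, tuned, or even visible here; the deep learning happens entirely behind the service call, which is precisely what commoditization looks like.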

In the last 36 months, Microsoft and Google also developed advanced image recognition systems -- Microsoft's Deep Residual Networks (ResNet) and Google's Inception -- that make extensive use of deep learning. These systems originated in the research arms of both companies as proofs of concept and as cutting-edge applications of "deeper" neural net technologies. Three years on, that cutting-edge technology is available to everybody.

Developers can now run Inception via Google's TensorFlow open source machine learning library. Microsoft offers ResNet via GitHub.
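
As a minimal sketch of what that access looks like in practice, classifying a photo with a pretrained Inception model takes only a few lines. This assumes a current TensorFlow 2.x install, where Inception ships as a Keras application; "photo.jpg" is a hypothetical local image file.

    # Minimal sketch: classifying one image with a pretrained Inception model.
    # Assumes TensorFlow 2.x; "photo.jpg" is a hypothetical local image file.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.inception_v3 import (
        InceptionV3, decode_predictions, preprocess_input,
    )

    # Download ImageNet-pretrained weights on first run.
    model = InceptionV3(weights="imagenet")

    # Load the image and resize it to Inception's expected 299x299 input.
    img = tf.keras.utils.load_img("photo.jpg", target_size=(299, 299))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

    # Print the top-3 predicted labels with confidence scores.
    for _, name, score in decode_predictions(model.predict(x), top=3)[0]:
        print(f"{name}: {score:.3f}")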

A Promise Mixed with Peril

The benefits of deep learning sure seem obvious. The Gartner report cites a number of no-brainer use cases, from improved machine translation -- Google's Neural Machine Translation (GNMT) system, which launched late last year, was the centerpiece of "The Great A.I. Awakening," a story that ran in the December 14, 2016, issue of The New York Times Magazine -- to speech recognition, fraud detection, and recommendation systems.

Other compelling applications include medical diagnosis, demand prediction, self-driving cars, and prediction tasks of all kinds (propensity to buy, customer churn, impending failure, etc.).

What's not to like? Quite a few things, actually. In the first place, even though Amazon, Google, Microsoft, and other vendors are doing what they can to expose deep learning capabilities via easy-to-use products, services, and libraries/APIs, the core technology itself is incredibly complex. It requires highly specialized skills and expertise: even data scientists, the Gartner analysts argue, will find it challenging. Second, deep learning requires more of everything: more source data, more computational brawn, and more memory and storage resources.

"In its current form, deep learning is vastly more data-hungry and computationally intensive than traditional machine learning," the Gartner report says.

"For custom solutions, the implementation risks are magnified by inadequate data, the extreme scarcity of specialized data science skills, a need for high-performance compute infrastructure, ... and lack of high-level executive sponsorship for taking the required risks."

The market watcher lists a number of other potential risks, too. These include:

  • Immaturity: The available deep learning tools/technologies are still relatively new
  • Inconclusive outcomes: It won't always yield usable results for every problem
  • Complexity: It's so complex that it's opaque to all but the savviest of users

In addition, the Gartner analysts say, most of the issues that apply to conventional machine learning (the difficulty of obtaining suitable data, sampling error/bias, the accuracy of a model degrading over time, and a narrow/function-specific scope) also apply to deep learning.

All that said, the technology has too much promise and is too important for challenges to deter potential users. "For corporate use cases, we expect deep learning to become a major factor of consideration," the Gartner report concludes. "It is currently the most promising technology choice to accomplish complex data fusion, that is, to extract knowledge when complex relationships are distributed over time and space, especially if heterogeneous data sources are involved."
