
Google's Machine-Learning Vision May Transform the Enterprise

Many of the products and services Google showcased at I/O 2017 seemed consumer-oriented, but their potential for transforming business is clear. A prime example is TensorFlow Lite, a new, mobile-optimized version of Google's hugely popular machine-learning framework.

The keynotes at this year's Google I/O conference were like something out of a consumer-oriented Utopia. CEO Sundar Pichai and other Google executives showcased the potential for machine learning (ML) and similar advanced technologies to transform our daily lives, whether in the form of a smarter, more useful Google Home, a new technology called Google Lens, a preview of standalone virtual reality (VR) technology, or other products and services.

The content of Google's I/O keynotes was so consumer-oriented, in fact, that its potential applicability to the enterprise seemed all but nonexistent. That's a facile impression, however.

The truth is that Google Lens, the revamped Google Assistant, and Google's Daydream VR technology, along with many of the other products and services showcased at I/O 2017, are enabled by technologies that will prove no less transformative in the enterprise. [https://assistant.google.com] [https://vr.google.com/daydream/]

Consider Google's new Lens technology. It uses the embedded cameras that ship with all Android smartphones to "see," interpret, and, via Google's Assistant technology, interact with the world around us. Are the flowers in that vase dahlias or chrysanthemums? What about that print of the painting that's on the wall in the company lobby? Who painted it?

Point the camera-panopticon in your smartphone at whatever it is you want to learn about and – fingers crossed – Google Lens should be able to tell you. Lens isn't just a show-and-tell technology, however: it has the potential to interpret and, yes, do stuff in the world.
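
Google hasn't published exactly how Lens works, but the basic "point a camera, get a label" step can be approximated with any pretrained image classifier. The sketch below is just an illustration of that step, not Google's implementation; it assumes TensorFlow's publicly available MobileNetV2 weights and a hypothetical photo file named flowers.jpg.

    import numpy as np
    from tensorflow.keras.applications.mobilenet_v2 import (
        MobileNetV2, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    # Download a model pretrained on ImageNet's 1,000 object classes.
    model = MobileNetV2(weights="imagenet")

    def identify(photo_path):
        """Return the model's top three guesses for what the photo shows."""
        img = image.load_img(photo_path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return decode_predictions(model.predict(x), top=3)[0]

    # "flowers.jpg" stands in for a photo snapped with a phone camera.
    print(identify("flowers.jpg"))  # e.g. [('...', 'daisy', 0.91), ...]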

One of Google's examples featured a wireless router or access point that had its login credentials stickered to its rear plate. Fantastic, you say: Lens will use this to tell me the name and model number of my access point, right? It does better than that, actually. When you point your smartphone at the sticker, Lens is smart enough to use those credentials to log onto the wireless network.
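
Reading a sticker and acting on it is, at bottom, optical character recognition plus a little parsing plus a platform API call. Google hasn't detailed how Lens does this; the sketch below is a stand-in built on assumptions: it uses the open source Tesseract OCR engine (via pytesseract), a made-up label format, and a placeholder connect_to_wifi() call where a real platform Wi-Fi API would go.

    import re
    import pytesseract      # wrapper around the open source Tesseract OCR engine
    from PIL import Image

    def read_credentials(photo_path):
        """Pull a network name and password out of a photographed label."""
        text = pytesseract.image_to_string(Image.open(photo_path))
        # Assumes lines like "SSID: HomeNet" and "Password: s3cret";
        # real stickers vary widely.
        ssid = re.search(r"SSID[:\s]+(\S+)", text, re.IGNORECASE)
        pwd = re.search(r"Pass(?:word)?[:\s]+(\S+)", text, re.IGNORECASE)
        return (ssid.group(1) if ssid else None,
                pwd.group(1) if pwd else None)

    ssid, password = read_credentials("router_sticker.jpg")  # hypothetical photo
    if ssid and password:
        print(f"Would join network {ssid!r}")
        # connect_to_wifi(ssid, password)  # placeholder for a platform Wi-Fi API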

The Business Impact

Technology like this has obvious applications at all levels of business. It isn't an entirely new idea, however. Take a vendor such as Ephesoft, which markets products for digital document and data capture. Ephesoft's Transact technology isn't as versatile as Google's Lens-plus-Assistant combination, but it's an application of the same concept: a person uses a smartphone to capture documents or data; Transact analyzes what has been captured and attempts to contextualize, classify, and (if successful) ingest and store it.

The challenge isn't just to identify values, features, attributes, entities, and the like, but to do something with this knowledge: that is, to determine a context and to identify one or more actions that are appropriate in that context.

In Ephesoft's case, it's applying text analytics, graph analytics, and machine-learning techniques to analyze content, classify it, identify and extract essential information or features, and trigger an appropriate workflow. This is a very hard problem. It's hard enough when your use case is limited to a specific context, e.g., document and data recognition and capture.

It's much, much harder when the context for your "use case" is the manifold richness of reality.
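
In that narrower, document-centric context, the pattern is roughly: classify the captured document, extract the fields that matter, and route the result to a workflow. The toy sketch below is not Ephesoft's code; it simply illustrates that classify/extract/route pattern with a tiny scikit-learn text classifier, hand-written extraction rules, and made-up documents.

    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A handful of labeled examples stands in for a real training corpus.
    train_docs = [
        "Invoice number 1001 total due $450.00",
        "Invoice 2002 amount payable $99.95",
        "Purchase order PO-77 quantity 12 widgets",
        "Purchase order PO-78 ship to warehouse 4",
    ]
    train_labels = ["invoice", "invoice", "purchase_order", "purchase_order"]

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(train_docs, train_labels)

    def ingest(captured_text):
        """Classify a captured document, extract a key field, and route it."""
        doc_type = classifier.predict([captured_text])[0]
        if doc_type == "invoice":
            amount = re.search(r"\$[\d,]+\.\d{2}", captured_text)
            print("Route to accounts payable, amount:",
                  amount.group() if amount else "unknown")
        else:
            print("Route to procurement workflow")

    ingest("Invoice 3003 total due $1,250.00")  # -> accounts payable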

Machine Learning on the Go

This is a hard problem, but it's becoming less difficult, if only incrementally, with each passing day. Another announcement at I/O 2017 suggests why. Google first announced its TensorFlow ML library in late 2015. Since then, TensorFlow has become the most popular machine-learning project on the GitHub open source project repository. [https://www.tensorflow.org/] [https://github.com/showcases/machine-learning]

At I/O 2017, Google announced TensorFlow Lite, a mobile-optimized version of its smash-hit ML framework. TensorFlow Lite is a framework for building applications and services that incorporate ML into the mobile device experience. The potential applications for mobile ML were summed up succinctly by Dave Burke, Google's vice president of engineering for Android: "We think these new capabilities will help power the next generation of on-device speech processing, visual search, augmented reality, and more."
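
TensorFlow Lite's tooling has evolved considerably since that 2017 announcement, and the announcement itself didn't spell out the API, but the basic workflow is: train (or load) a model in TensorFlow, convert it to the compact .tflite format, and ship the file inside a mobile app for on-device inference. The sketch below uses the present-day converter API with a stand-in model, purely to illustrate the conversion step.

    import tensorflow as tf

    # A stand-in model; in practice you would load a trained network.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Convert to the compact .tflite format and quantize for mobile hardware.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)
    # The resulting file ships inside an Android or iOS app and runs there
    # through the TensorFlow Lite interpreter.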

As machine-learning technology becomes more portable, commoditized, and (as a result) ubiquitous, showcase solutions such as Google Lens and Google Assistant will become, well, commonplace. The next five years should see a huge increase in the number and variety of function-specific apps or services (such as Ephesoft's Transact).

Concomitant with this, we'll see the capabilities -- along with the potential for real-world interactivity -- of technologies such as Google Lens improve, too. Yes, the applications and use cases Google showcased at I/O 2017 were mostly consumer-oriented, but their potential applicability to all aspects of human life, especially business, is obvious -- not to mention a little intimidating.
