
Artificial Intelligence May Transition Decision Support to Actual Decision-Making

Regardless of how you define artificial intelligence, its value is clear for big data applications.

Thanks in great part to IBM's Watson technology and its ability to answer queries posed in natural language, artificial intelligence (AI) has received much attention in the past few years. This was especially true after Watson competed on the quiz show Jeopardy! in 2011 and defeated two former champions. Watson evolved from IBM's DeepQA project, following in the lineage of the Deep Blue chess computer that defeated world champion Garry Kasparov in 1997.

Watson, whose design goal was to win at Jeopardy!, combined natural language processing, data retrieval, and massively parallel processing with access to vast amounts of structured and unstructured data. It could also generate hypotheses and "learn" from its results in order to modify and improve its logic and algorithms.

Recognizing the value of its Watson technology, IBM has actively moved to commercialize it by creating the IBM Watson Group in early 2014 and making it available as a cloud-based service. Big Blue is working with partners and developers to create Watson-based operational and analytic applications. Although IBM's marketing team has been aggressively publicizing Watson's capabilities, IBM is certainly not the only vendor to pursue artificial intelligence technology. Major companies including Google, Facebook, and others are investing in it, as are universities and, we can assume, numerous governments as well.

Artificial intelligence has many definitions and encompasses subsets such as heuristic programming, expert systems, machine learning, and cognitive computing. My working definition of AI is the ability of a machine to improve upon its original programming and perform tasks that normally require human intelligence, making better and more accurate decisions over time.
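To make that working definition concrete, here is a minimal sketch in Python of a program that improves on its original programming: an online perceptron whose decision rule starts out blank and becomes more accurate as feedback on its results arrives. The task, data, and learning rule are invented purely for illustration and stand in for any feedback-driven learning system.

```python
# A toy illustration of "improving on original programming": an online
# perceptron whose decision rule gets better as feedback arrives.
# The task and learning rule are assumptions made for this sketch.
import random

random.seed(42)

# Hypothetical task: approve when 2*x1 + x2 > 1 (unknown to the program).
def true_label(x1, x2):
    return 1 if 2 * x1 + x2 > 1 else 0

weights = [0.0, 0.0]   # the "original programming": a blank rule
bias = 0.0
learning_rate = 0.1

def predict(x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0

for step in range(1, 2001):
    x1, x2 = random.random(), random.random()
    y = true_label(x1, x2)      # feedback on the actual outcome
    error = y - predict(x1, x2)  # learn only from mistakes
    weights[0] += learning_rate * error * x1
    weights[1] += learning_rate * error * x2
    bias += learning_rate * error
    if step in (10, 100, 2000):
        correct = 0
        for _ in range(500):
            a, b = random.random(), random.random()
            correct += predict(a, b) == true_label(a, b)
        print(f"after {step:4d} examples: {correct / 500:.0%} accurate")
```

Running it shows accuracy climbing from near chance toward nearly perfect as feedback accumulates, which is exactly the "better and more accurate decisions over time" behavior the definition describes.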

Regardless of how it is defined, there is little doubt that AI can be of great value, especially in big data applications. Many organizations now collect orders of magnitude more data than they did just a few years ago, even if they are not quite sure how to analyze it. We are all familiar with how business intelligence and descriptive analytics technologies such as query, reporting, and OLAP help us analyze what happened in the past, and how data mining and predictive analytics techniques help predict what may occur in the future. I believe artificial intelligence will be a major technology for prescriptive analytics: the step beyond predictive analytics that helps us determine how to act on and optimize what predictive analytics has uncovered.
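As a rough illustration of that predictive-to-prescriptive step, the sketch below pairs a stand-in predictive model (a made-up demand curve; a real system would substitute a trained model) with a prescriptive step that searches candidate actions for the one with the best expected outcome. All names and numbers here are hypothetical.

```python
# A minimal sketch of the predictive-to-prescriptive step. The demand
# model and prices below are invented for illustration only.

def predicted_demand(price):
    """Stand-in predictive model: estimated units sold at a given price."""
    return max(0.0, 1000 - 40 * price)

def expected_revenue(price):
    return price * predicted_demand(price)

# Prescriptive step: evaluate candidate actions, recommend the best one.
candidate_prices = [p / 2 for p in range(10, 51)]   # $5.00 to $25.00
best_price = max(candidate_prices, key=expected_revenue)

print(f"recommended price: ${best_price:.2f} "
      f"(expected revenue ${expected_revenue(best_price):,.0f})")
```

Note that the prescriptive layer does not replace the predictive model; it consumes the model's output to recommend a course of action.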

Applying the power of AI to the vast and ever-increasing amounts of data now being collected will ultimately yield new insights. Application areas include improving sales analysis and customer satisfaction, security analysis and trade execution, fraud detection and prevention, targeted education and training, land and air traffic control, national security and defense, and a wide variety of healthcare applications such as patient-specific treatments for diseases and illnesses. Tasks that were once the exclusive domain of humans, such as facial recognition, detecting sarcasm in comments, driving automobiles, and translating languages, are now being performed by software.

Before it was called business intelligence, the collection and analysis of data to support decisions was known as decision support because it assisted humans in their decision-making. With the addition of artificial intelligence technology, it may someday lead (and in some cases already is leading) to decision automation: making actual decisions with minimal human intervention. Although AI systems will continue to evolve and improve their decision-making capabilities, we will still need human intervention to, at the very least, handle unforeseen exceptions. We are still a long way from the AI-enabled cyborg villain of the 1984 movie The Terminator or HAL (Heuristically programmed ALgorithmic computer) in the 1968 movie 2001: A Space Odyssey.
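One common pattern for this kind of decision automation with a human safety net is confidence-based routing: the system acts on clear-cut cases itself and escalates ambiguous ones for human review. The sketch below is a toy illustration; score_transaction and the 0.90 threshold are invented assumptions, not any production system's logic.

```python
# A minimal sketch of decision automation with a human fallback.
# The scoring rule and threshold are assumptions for illustration;
# any trained model could sit behind score_transaction().

AUTO_DECIDE_CONFIDENCE = 0.90   # assumed policy: automate only when sure

def score_transaction(txn):
    """Stand-in model: returns (decision, confidence)."""
    risk = min(1.0, txn["amount"] / 10_000)
    decision = "block" if risk > 0.5 else "approve"
    confidence = abs(risk - 0.5) * 2     # farther from 0.5 = more certain
    return decision, confidence

def route(txn):
    decision, confidence = score_transaction(txn)
    if confidence >= AUTO_DECIDE_CONFIDENCE:
        return f"auto-{decision}"          # decision automation
    return "escalate to human review"      # ambiguous or unforeseen cases

for txn in ({"amount": 25}, {"amount": 4_800}, {"amount": 9_900}):
    print(txn["amount"], "->", route(txn))
```

In this pattern the human is not removed from the loop; the loop is simply reserved for the cases where the machine's confidence is low.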

If your organization is not already doing so, you should encourage it to undertake pilot projects involving AI to gain experience and better understand its capabilities and, perhaps more important, its limitations.
