Be (More) Wrong Faster -- Dumbing Down Artificial Intelligence with Bad Data
AI’s benefits will be severely diminished if it’s hampered by poor quality data.
- By Bill O'Kane
- October 9, 2020
The reactive approach to discovering, analyzing, and correcting data quality issues in business applications was only marginally effective during the era of data consolidation for analytics and data warehousing, and through the subsequent onset of big data. However, in the "third wave" of data consolidation, driven this time by the application of artificial intelligence (AI) and in particular machine learning (ML), detecting damaging data anomalies and performing root cause analysis have both become nearly impossible.
AI and ML are among the foundational disciplines that propel digital business transformation. Organizations that successfully implement these modern capabilities will surpass their competitors and ensure their future survival. However, the opportunities presented are not without significant risk. Attempting to use these models with low-quality data or repeatedly cleansing the same data for standalone initiatives can easily cause serious missteps and delays -- and even result in outright failures.
The lifeblood of AI and ML is data, and the input data used must be fully trustworthy. Data that is suitable for use across disparate business processes and analytics must meet a range of data quality requirements. If data quality is not continuously and automatically maintained, data that was of sufficient quality at a given time and for a given purpose will quickly decay and become unsuitable for use in AI-supported business processes. The most critical data in this scenario is the master or dimensional data that describes the core entities involved in these business processes and analytics.
The business advantage of using AI is that it automates business processes and enables faster and more reliable business decisions. However, if the AI processes are given data that is not unique, accurate, consistent, or timely, they will not produce reliable results and will therefore lead to unwanted business outcomes. Examples of such undesirable business outcomes include:
- Making different decisions based on two customer or supplier master data records that in fact describe the same real-world party
- Recommending a product to a customer when a similar product was previously returned or generated a complaint
These results can damage your business outcomes and reputation, and sap your organization's willingness to engage in new AI initiatives. This, in turn, can hinder your ability to compete effectively in the future.
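The first pitfall above, two master records describing the same real-world party, can be illustrated with a minimal sketch. An exact-match check treats slightly different spellings as separate customers, while even a simple string-similarity score flags them as a probable duplicate for review. The field names and the 0.8 threshold are illustrative assumptions for this sketch, not a production-grade matching algorithm such as an MDM platform would provide.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two normalized strings are."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def likely_same_party(rec1: dict, rec2: dict, threshold: float = 0.8) -> bool:
    """Flag two master records as probable duplicates when their name and
    address fields are, on average, highly similar (threshold is illustrative)."""
    name_score = similarity(rec1["name"], rec2["name"])
    addr_score = similarity(rec1["address"], rec2["address"])
    return (name_score + addr_score) / 2 >= threshold

# Two records that an exact-match check would treat as different parties:
a = {"name": "Acme Corp.", "address": "100 Main St, Springfield"}
b = {"name": "ACME Corporation", "address": "100 Main Street, Springfield"}

print(a["name"] == b["name"])   # False: exact match misses the duplicate
print(likely_same_party(a, b))  # True: similarity check flags it for review
```

Decisions made independently against records `a` and `b` would be the "different decisions about the same party" failure mode described above.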
A common reaction to data quality issues in an AI process is to start on-the-fly cleansing of the specific physical data that feeds that process. Such a point solution is costly and unsustainable, and standalone data quality fixes eventually become unmanageable. The better, more sustainable approach is to cure data quality issues continuously using capable data management technology, so that your training data sets are drawn from rationalized production data built on the same master data foundation.
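The continuous approach can be sketched as a reusable set of quality rules applied to every record as it arrives, rather than a one-off cleansing script per project. The rules, field names, and quarantine behavior below are assumptions made for illustration, not features of any particular MDM product.

```python
import re

# Illustrative quality rules keyed by the dimension they enforce.
# Field names and the email pattern are assumptions for this sketch.
RULES = {
    "completeness": lambda rec: all(rec.get(f) for f in ("id", "name", "email")),
    "validity": lambda rec: bool(
        re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", rec.get("email", ""))
    ),
}

def check_record(rec, seen_ids):
    """Return the quality rules a record violates. Uniqueness is checked
    against the ids already accepted; the other rules inspect the record
    in isolation."""
    failures = [name for name, rule in RULES.items() if not rule(rec)]
    if rec.get("id") in seen_ids:
        failures.append("uniqueness")
    return failures

def run_pipeline(records):
    """Accept clean records; quarantine the rest with their failure reasons."""
    seen_ids, accepted, quarantined = set(), [], []
    for rec in records:
        failures = check_record(rec, seen_ids)
        if failures:
            quarantined.append((rec, failures))
        else:
            seen_ids.add(rec["id"])
            accepted.append(rec)
    return accepted, quarantined

records = [
    {"id": 1, "name": "Acme Corp.", "email": "ap@acme.example"},
    {"id": 1, "name": "Acme Corporation", "email": "ap@acme.example"},  # duplicate id
    {"id": 2, "name": "Globex Inc.", "email": "not-an-email"},          # invalid email
]
accepted, quarantined = run_pipeline(records)
print(len(accepted), len(quarantined))  # 1 clean record, 2 quarantined
```

The point of the sketch is that the same rule set runs against every inbound record, so the data is cleansed once, continuously, instead of being re-cleansed by each downstream AI initiative.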
Because master data is the most foundational and reusable category of data necessary to conduct trustworthy AI and ML activities, it is logical to start your data quality efforts with master data management (MDM) technology. As humans, we naturally understand the complexity of the master data that defines core entities. Machines, on the other hand, must be given that picture in digital form. Having your AI-supported business processes running on unique, accurate, consistent, and timely master data will significantly improve your business outcomes. The results will be reliable, the processes will be repeatable, and the concept will be reusable in other scenarios.
Incorporating robust data governance into your digital transformation will accelerate your transition to exploiting AI and ML and remove a range of future roadblocks. Such obstacles can include an incomplete or inaccurate 360-degree view of customers and business partners, as well as missing relationships between customers, products, assets, and locations (and descriptions of these entities). By pursuing more active data governance through MDM, you will cleanse data only once and prevent repeated data quality issues. You will also manage the complexity of overseeing the many entities and relationships that must remain consistently consumable in digital form throughout your deployment of AI-supported business processes.
Stronger data governance, underpinned by an MDM approach, can put your organization on the fast track to automating processes and decisions. Data governance and MDM will also minimize resource requirements while simultaneously eliminating the risks associated with feeding untrusted data to AI and ML. In turn, your digital business transformation will be accelerated and your competitive edge will become rock-solid.
Bill O'Kane is the VP & MDM Strategist at Profisee. In his career, O’Kane has led the program management and architecture of multiple successful master data management (MDM) and data governance initiatives, and was Gartner’s lead analyst for MDM (and lead author of Gartner’s MDM Magic Quadrant) for eight years, advising thousands of clients on MDM and data governance. You can reach the author via email, Twitter, or LinkedIn.