

Data Quality Evolution with Big Data and Machine Learning

When big data is combined with machine learning, enterprises must be alert to new data quality issues.

IT departments have been struggling with data quality issues for decades, and satisfactory solutions have been found for ensuring quality in structured data warehouses. However, big data solutions, unstructured data, and machine learning are creating new types of quality issues that must be addressed.

Big data affects quality because its defining features of volume, variety, and velocity make verification difficult. The elusive "fourth V," veracity (data reliability), is challenging because of the large number of data sources being brought together, each subject to its own quality problems. Big data also opens the door to new and more complex queries that can introduce new types of data errors.

Meanwhile, unstructured data creates issues because it is subject to greater uncertainty than structured data, and machine learning algorithms tend to operate as a "black box" within which biases contained in the data might never come to light.

Your Data Quality Toolbox

Although many tools have been developed to resolve data quality issues, automated correction can itself diminish data quality if it is not applied with care. Every dimension along which quality can fail (accuracy, consistency, timeliness, duplication, volatility, completeness, and relevance) can create further problems as enterprises correct the data and fit it into a form suitable for processing. Every transformation potentially loses information that may be relevant to a given query.

Current data quality tools are supplied by major analytics vendors, by niche companies, and through open source. They provide functionality such as data cleansing, profiling, matching, standardization, enrichment, and monitoring. Niche tools, such as those built for financial services, focus on specialized problems, and new tools are emerging that apply machine learning to data classification and cleansing.
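To make the profiling piece concrete, here is a minimal sketch (in Python with pandas, using hypothetical column names and thresholds) of the kinds of checks such tools automate: completeness, duplication, and simple validity rules.

import pandas as pd

# Hypothetical customer records; the column names and rules are illustrative only.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, "b@example.com", "not-an-email"],
    "age": [34, 29, 29, 210],
})

# Completeness: share of missing values per column.
completeness = df.isna().mean()

# Duplication: rows repeated on the presumed key.
duplicates = df[df.duplicated(subset=["customer_id"], keep=False)]

# Validity: simple range and format rules.
bad_age = df[(df["age"] < 0) | (df["age"] > 120)]
email_ok = df["email"].astype(str).str.match(r"[^@\s]+@[^@\s]+\.[^@\s]+")
bad_email = df[~email_ok]

print("Missing-value rate:\n", completeness)
print("Duplicate keys:\n", duplicates)
print("Out-of-range ages:\n", bad_age)
print("Malformed emails:\n", bad_email)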

Where big data is combined with machine learning, additional quality issues emerge. Changes made to normalize the data can bias how a machine learning algorithm interprets it. A relatively low frequency of errors in huge data stores arguably makes data quality scrutiny seem less important, but in reality the quality issues are simply shifted elsewhere: automated corrections and blanket assumptions can introduce hidden bias across an entire data set.
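A small, contrived example shows how easily this happens. Suppose missing income values are filled with the overall mean, a common and seemingly harmless correction; a subgroup with more missing data is pulled toward the global average, and any model trained on the corrected column inherits that skew. (The data and column names below are invented for illustration.)

import pandas as pd

# Illustrative data: income is missing more often for one region.
df = pd.DataFrame({
    "region": ["north", "north", "north", "south", "south", "south"],
    "income": [30_000, None, None, 80_000, 85_000, 90_000],
})

print(df.groupby("region")["income"].mean())  # north: 30,000; south: 85,000

# "Correction": impute missing values with the global mean.
df["income_filled"] = df["income"].fillna(df["income"].mean())

print(df.groupby("region")["income_filled"].mean())
# The northern average jumps from 30,000 to 57,500 purely because of the
# imputation rule; a model trained on income_filled silently inherits that bias.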

Keeping It Real

Data quality must be understood according to the needs of the business. Some situations require a rigorous approach involving innumerable variables, but a more lenient approach is acceptable for many inquiries. There are always trade-offs: between timeliness and veracity, between query value and data cleansing, and between accuracy and acceptable error. In a complex data and analytics environment there is no room for one-size-fits-all. Queries demand different levels of accuracy and timeliness, and data structured one way may be suitable for some uses but produce inaccurate or biased results for others.

The ultimate test of data quality is whether the data produces the required result. This demands rigorous testing as well as consideration of potential sources of introduced error. Although tools for data cleansing, normalization, and wrangling are growing increasingly popular, the diversity of possible factors means these processes will not be completely automated anytime soon. As automation spreads, you must ensure that an automated solution's transformation rules are not introducing new problems into the data stream.
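One practical way to keep that testing rigorous is to encode the assumptions a query depends on as checks that run with every load. The sketch below (hypothetical rules for an assumed orders table, not a prescription) illustrates the idea.

import pandas as pd

def check_orders(df: pd.DataFrame) -> None:
    """Fail loudly if the cleansed data no longer meets the
    assumptions the downstream query depends on."""
    assert df["order_id"].is_unique, "duplicate order_id after cleansing"
    assert df["amount"].ge(0).all(), "negative amounts introduced"
    assert df["order_date"].notna().all(), "order_date lost in transformation"
    assert len(df) > 0, "transformation dropped every row"

# Example usage with a tiny, well-formed frame:
orders = pd.DataFrame({
    "order_id": [101, 102],
    "amount": [25.0, 40.5],
    "order_date": pd.to_datetime(["2018-01-05", "2018-01-07"]),
})
check_orders(orders)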

The Uncertainty of Certainty

With limited data sets and structured data, data quality issues are relatively clear. The processes creating the data are generally transparent and subject to known errors: data input errors, poorly filled forms, address issues, duplication, etc. The range of possibilities is fairly limited, and the data format for processing is rigidly defined.

With machine learning and big data, the mechanics of data cleansing must change. In addition to more and faster data, there is a great increase in uncertainty from unstructured data. Data cleansing must interpret the data and put it into a format suitable for processing without introducing new biases. The quality process, moreover, will differ according to specific use.
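Consider, for example, how free text gets turned into something a model can use. The toy rule below (illustrative only, not a recommended method) labels support tickets by keyword; the word list itself is an interpretation, and anything it misses is silently classified as neutral, which is exactly the kind of bias that never shows up as an error.

# Illustrative only: a naive rule for structuring free text.
NEGATIVE_WORDS = {"broken", "refund", "angry", "terrible"}

def label_sentiment(ticket_text: str) -> str:
    """Map unstructured ticket text to a coarse label. The word list is a
    built-in bias: complaints phrased politely, or in another language,
    are silently labeled 'neutral'."""
    words = set(ticket_text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

print(label_sentiment("The device arrived broken, I want a refund"))  # negative
print(label_sentiment("I am quite disappointed with this purchase"))  # neutral (missed)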

Data quality is now more relative than absolute. Queries need to be better matched to data sets depending on research objectives and business goals. Data cleansing tools can reduce some of the common errors in the data stream, but the potential for unexpected bias will always exist. At the same time, queries need to be timely and affordable. There has never been a greater need for a careful data quality approach.

Machine learning and advanced software tools certainly provide part of the solution, making it possible to bring new approaches to quality issues. There is no panacea, however. A new level of complexity means that data needs to be scrutinized more carefully.

 

About the Author

Brian J. Dooley is an author, analyst, and journalist with more than 30 years' experience in analyzing and writing about trends in IT. He has written six books, numerous user manuals, hundreds of reports, and more than 1,000 magazine features. You can contact the author at [email protected].
