Free Up Your Data Science Team with Augmented Data Management
To alleviate the drudgery of data preparation for your data science team, look for solution providers that are augmenting their data management platforms by using artificial intelligence, machine learning, and advanced analytics.
- By Troy Hiltbrand
- May 20, 2019
A Forbes survey of data scientists estimated that nearly 80 percent of a data scientist's time is spent collecting and preparing data rather than on higher-value activities. Because this is a major part of a data scientist's day-to-day workload, it is no surprise that data management tool providers are starting to incorporate machine learning, artificial intelligence, and advanced analytics into their tools to help make them more self-configuring and self-tuning.
This will ultimately free up your data science team to spend more time on activities -- such as model development and data interpretation -- that have higher potential for driving your business objectives. This family of enhancements has been called augmented data management and appears as one of Gartner's top 10 data and analytics trends for 2019.
With augmented data management, there are five major areas where advanced techniques are being incorporated into tools to expedite data preparation activities and increase your team's efficiency: data quality, master data management, data integration, database management systems, and metadata management.
Data Quality
Data quality is the practice of ensuring that your organization's data is fit for its intended use in operations, decision making, and planning. Often your team must transform the raw data in your production systems before it can be used for its targeted purpose.
This transformation process includes data profiling, cleansing, linking, and reconciliation with a master source. Although statistical profiling has long been a tool employed by data scientists, new data management tools are becoming more advanced in their ability to automate this process. With increased volume and velocity of data and the demand for real-time decision making, you must automate the profiling and cleansing step to keep up with your business's needs.
Advanced analytics techniques, such as outlier detection, can be employed to isolate instances that are abnormal and need to be corrected before use. You create these outlier-detection models from your historical data -- once you establish what is normal within your organization, you can target attributes that fall outside of those thresholds.
As your source data flows through your automated data-ingestion pipeline, it passes through these models. If it fails to meet the definition of normal, it can be flagged to be held aside and cleansed. Your team can automate the cleansing of this flagged data using techniques such as statistical inference, or the data can be reviewed and addressed offline by your team. This statistics-based determination of outliers can help you maintain a high level of data quality.
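As a concrete sketch, outlier screening like this can be as simple as learning thresholds from historical data and holding back rows that fall outside them. The example below is a minimal pure-Python illustration; the sample values and the three-sigma cutoff are hypothetical, not a production pipeline:

```python
from statistics import mean, stdev

def fit_thresholds(history, k=3.0):
    """Learn what 'normal' looks like from historical values."""
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

def flag_outliers(batch, thresholds):
    """Split an incoming batch into clean rows and rows held for review."""
    lo, hi = thresholds
    clean, held = [], []
    for value in batch:
        (clean if lo <= value <= hi else held).append(value)
    return clean, held

# Establish "normal" from history, then screen a new batch.
history = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]
bounds = fit_thresholds(history)
clean, held = flag_outliers([100, 250, 98, -40, 102], bounds)
```

Rows in `held` would then be routed to automated cleansing or offline review, exactly as described above.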
Supervised learning techniques, such as predictive categorization and time series forecasting, can enrich your data and provide a much more complete picture of your business for other analytics activities. This enhanced data management process can fill in holes in your data or let you use the data in additional analytics work. Enhancing your data quality process expands the availability of the data to your analytics users and creates consistency so everyone is working from the same base.
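A minimal sketch of predictive categorization filling such a hole: here a missing customer segment is inferred by majority vote among the nearest labeled rows. The feature names, segments, and values are invented for illustration:

```python
from math import dist

def fill_missing_category(labeled, unlabeled, k=3):
    """Assign each unlabeled row the majority category of its k nearest
    labeled neighbours -- a minimal predictive-categorization enrichment."""
    enriched = []
    for row in unlabeled:
        neighbours = sorted(labeled, key=lambda r: dist(r[0], row))[:k]
        votes = [label for _, label in neighbours]
        enriched.append((row, max(set(votes), key=votes.count)))
    return enriched

# Hypothetical customer rows: (annual_spend, orders_per_year) -> segment.
labeled = [
    ((120.0, 2.0), "occasional"), ((110.0, 3.0), "occasional"),
    ((900.0, 40.0), "frequent"),  ((950.0, 38.0), "frequent"),
    ((130.0, 1.0), "occasional"), ((880.0, 42.0), "frequent"),
]
filled = fill_missing_category(labeled, [(100.0, 2.0), (910.0, 39.0)])
```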
Master Data Management
Master data management helps you create a single source of truth for your data. With data spread across multiple sources, both internal and external, and the increased volume of data being generated every day, this task is not trivial. It requires significant team effort. The benefits of effective master data management include seamless information across multiple channels, an integrated view of customers and suppliers, and increased trust in the data for better forecasting and decision making. These outcomes can often lead to revenue generation and cost savings.
One critical aspect of master data management is matching records between sources to identify which source is authoritative or which combination of attributes from different sources results in an authoritative result. The hard-coded, rules-based methodology of the past, which required significant effort to set up and maintain, is transitioning into machine learning-based processes where rules are derived from historical data and can evolve and adapt as your business grows and changes.
Machine learning models have an advantage over hard-coded rules: because they generalize from the training data rather than memorizing it, they can avoid overfitting and perform well on both the training data and production data.
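The shift from hand-coded rules to learned ones can be illustrated with a toy matcher that derives its similarity cutoff from historically verified match decisions instead of a fixed rule. Fuzzy matching here uses Python's difflib; the company names and candidate thresholds are invented for illustration:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Fuzzy similarity between two record name strings (0.0 - 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def learn_threshold(labeled_pairs):
    """Pick the cutoff that best separates historically verified matches
    from non-matches, instead of hard-coding a rule by hand."""
    candidates = [round(t * 0.05, 2) for t in range(1, 20)]
    def accuracy(t):
        return sum((similarity(a, b) >= t) == is_match
                   for a, b, is_match in labeled_pairs)
    return max(candidates, key=accuracy)

# Historical, manually verified match decisions (hypothetical data).
training = [
    ("Acme Corp", "ACME Corporation", True),
    ("Acme Corp", "Apex Industries", False),
    ("Globex Inc", "Globex, Inc.", True),
    ("Globex Inc", "Initech LLC", False),
]
threshold = learn_threshold(training)
is_match = similarity("Acme Corp.", "ACME Corp") >= threshold
```

As the verified history grows, re-running `learn_threshold` lets the rule evolve with the business, which is the adaptability the paragraph above describes.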
Data Integration
The goal of data integration is to merge multiple data sources into a single target destination. One of the challenges that data scientists run into daily is determining which data elements in different data sources represent the same attribute even when they have different names.
With statistical methods, you can match data based on names and abbreviations as well as by creating statistical data profiles about the attributes. By automating the process of analyzing the names and domains of the instances, tools are making more accurate suggestions during data mapping. With these predicted mappings, your data science team can quickly add new data sources, highly confident that the resulting data set will maintain its authoritative quality.
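One way such mapping suggestions can work is to score each candidate pairing by fuzzy name similarity plus overlap of the observed value domains. This is a simplified sketch; the source and target schemas and their sample values are hypothetical:

```python
from difflib import SequenceMatcher

def suggest_mappings(source_cols, target_cols):
    """Suggest which source column maps to which target column by combining
    fuzzy name similarity with overlap of the observed value domains."""
    def name_sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def domain_overlap(vals_a, vals_b):
        a, b = set(vals_a), set(vals_b)
        return len(a & b) / len(a | b) if a | b else 0.0

    mappings = {}
    for s_name, s_vals in source_cols.items():
        best = max(target_cols,
                   key=lambda t: name_sim(s_name, t)
                               + domain_overlap(s_vals, target_cols[t]))
        mappings[s_name] = best
    return mappings

# Hypothetical source/target schemas with sample values per column.
source = {"cust_nm": ["Ada", "Grace"], "st": ["UT", "CA"]}
target = {"customer_name": ["Ada", "Alan"], "state": ["CA", "NY"]}
suggested = suggest_mappings(source, target)
```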
Database Management Systems
One of the critical roles on your data science team is the database administrator. DBAs spend much of their time on two tasks: configuring and tuning hardware, and configuring and tuning software. Recent moves toward database-as-a-service (DBaaS) have reduced the need for the DBA to focus on hardware configuration and tuning. These DBaaS solutions automatically manage security patching and upgrades, and they have streamlined the scaling of instances to meet business demand so that capacity management can also be automated.
New tools based on machine learning are creating databases that can self-tune autonomously, including the automatic creation and optimization of indexes and database configuration parameters.
Amazon recently spotlighted a group of researchers in the Carnegie Mellon Database Group who developed a solution that uses machine learning to auto-configure a database to perform optimally under different scenarios. Their project, named OtterTune, applies what it has learned from tuning previous databases to predict and set configuration parameters for new instances with similar characteristics.
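OtterTune's actual models are far more sophisticated, but the core idea it describes -- reuse tuning knowledge from similar past workloads -- can be sketched in a few lines. The workload features and knob names below are invented for illustration, not OtterTune's real parameters:

```python
from math import dist

def recommend_config(history, workload):
    """Recommend knob settings for a new workload by reusing the tuned
    configuration of the most similar previously tuned workload --
    a toy sketch of the idea behind ML-driven database tuners."""
    features, config = min(history, key=lambda h: dist(h[0], workload))
    return config

# (reads_per_sec, writes_per_sec) -> previously tuned knobs (hypothetical).
history = [
    ((9000.0, 100.0), {"shared_buffers_mb": 4096, "work_mem_mb": 64}),
    ((500.0, 8000.0), {"shared_buffers_mb": 1024, "work_mem_mb": 16}),
]
knobs = recommend_config(history, (8500.0, 300.0))
```

A real tuner would learn a model over many knobs and workload metrics rather than copy a single neighbor, but the nearest-workload lookup captures the intuition.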
Metadata Management
Metadata management ensures that the data resulting from data quality and data integration activities is of high quality and that the metadata that shows data lineage is in place and usable by your information consumers. The more automated the processes are that perform attribute matching and data cleansing, the more traceable the lineage becomes. As data management tools become more instrumental in the data preparation process, they can more easily document this full path from source to destination and store this documentation to support downstream processes.
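A minimal illustration of what capturing that lineage can look like: each transformation step records its input, output, and timestamp, so the path from source to destination stays documented and queryable. The steps and record below are hypothetical:

```python
import datetime

def apply_step(record, step_name, transform, lineage):
    """Apply one transformation and append a lineage entry so the full
    source-to-destination path stays documented."""
    result = transform(record)
    lineage.append({
        "step": step_name,
        "input": record,
        "output": result,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return result

lineage = []
row = {"name": "  Ada Lovelace "}
row = apply_step(row, "trim_whitespace",
                 lambda r: {"name": r["name"].strip()}, lineage)
row = apply_step(row, "uppercase",
                 lambda r: {"name": r["name"].upper()}, lineage)
```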
As data preparation tools become better at automating the data management process, your data science teams will be freed up to perform activities of higher value to your organization. Also, your results can scale to the volume and velocity needed for real-time decision making to support your business objectives.
To get started, look at how your data science team is performing these five functions and identify what tools are available to start incorporating these augmented data management practices into your daily activities.