
TDWI Upside - Where Data Means Business

Preparing Data for Machine Learning

Before it can be used to train ML models, your data must be clean and consistent. These steps can help get your data in the proper shape.

Turning data into insights doesn’t happen magically. You must first understand your data and use it to create reports that drive actions. If your competitors are using machine learning and artificial intelligence to automatically drive actions and you aren’t, you are at a disadvantage.


Getting your data ready for ML and AI involves combining structured and semistructured data sets, then cleaning and standardizing them into a format ready for machine learning or for integration with BI and data visualization tools. When you prepare your data correctly, the resulting insights can be produced quickly and easily, shortening time to value.

Data transformation and standardization help you build powerful models, reporting, and ad hoc analysis that all share a single source of truth. In fact, not only does data prep help you build AI models; you can also use AI in your ETL process to prepare data for the data warehouse itself. For example, you can use AI to extract valuable sentiment data from customer comments without having to read them all. Either way, at the beginning of a data journey, a company's problem is not analytics or model-fitting -- it is data ingestion and transformation.

Based on our customers’ experiences, there are common data transformations required before data is ready for use in machine learning models.

Remove unused and repeated columns: Handpicking the columns you specifically need speeds up model training and unclutters your analysis.
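
A minimal sketch of this step in pandas, using hypothetical column names (an unused "notes" column and a "temp_id" column that duplicates "id"):

```python
import pandas as pd

# Hypothetical sample data: "temp_id" duplicates "id"; "notes" is unused.
df = pd.DataFrame({
    "id": [1, 2, 3],
    "temp_id": [1, 2, 3],      # exact duplicate of "id"
    "age": [34, 45, 29],
    "notes": ["a", "b", "c"],  # not needed by the model
})

# Drop columns you know are irrelevant to the model.
df = df.drop(columns=["notes"])

# Detect and drop exact duplicate columns by de-duplicating the transpose.
df = df.T.drop_duplicates().T
```

After this, only "id" and "age" remain, so the model trains on exactly the data it needs.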

Change data types: Using the correct data types reduces memory use. It can also be a requirement -- for example, casting numerical data to an integer type in order to perform calculations or to enable a model to recognize which algorithms are best suited to the data.
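
For example (with hypothetical "revenue" and "region" columns), numbers loaded as strings cannot be summed until they are cast, and low-cardinality text is cheaper to store as a categorical type:

```python
import pandas as pd

# Hypothetical example: numeric values were read in as strings.
df = pd.DataFrame({"revenue": ["100", "250", "75"],
                   "region": ["north", "south", "north"]})

df["revenue"] = df["revenue"].astype(int)        # enable arithmetic
df["region"] = df["region"].astype("category")   # cut memory for repeated text

total = df["revenue"].sum()  # 425
```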

Handle missing data: At some point you’ll come across incomplete data. Tactics for resolving the problem vary depending on the data set. For example, if the missing value doesn’t render its associated data useless, you may want to consider imputation -- replacing the missing value with a placeholder or another value based on an assumption. Otherwise, if your data set is large enough, you can likely remove the incomplete rows without substantial loss to your statistical power. Proceed with caution either way: imputation may inadvertently introduce bias into your model, but ignoring the missing data can skew your results.
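
Both tactics can be sketched in pandas (hypothetical "age" and "city" columns; the median as the imputed value is an illustrative assumption):

```python
import pandas as pd

# Hypothetical data with gaps.
df = pd.DataFrame({"age": [34.0, None, 29.0],
                   "city": ["NY", "LA", None]})

# Imputation: replace a missing numeric value with the column median.
df["age"] = df["age"].fillna(df["age"].median())

# Removal: drop rows that are still incomplete (here, the missing city).
df = df.dropna()
```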

Remove string formatting and non-alphanumeric characters: You will want to remove line breaks, carriage returns, white space at the beginning and end of values, currency symbols, and other stray characters. You may also want to consider word-stemming as part of this process. Although removing formatting and other characters makes the text less readable for humans, it helps the algorithm better digest the data.
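
A minimal sketch with some hypothetical messy values -- strip surrounding whitespace, then drop everything that is not alphanumeric:

```python
import pandas as pd

# Hypothetical messy text values.
s = pd.Series(["  $1,200\n", "\t£950 ", "800"])

# Strip whitespace and line breaks, then remove non-alphanumeric characters.
cleaned = s.str.strip().str.replace(r"[^0-9A-Za-z]", "", regex=True)
```

The cleaned values ("1200", "950", "800") can then be cast to a numeric type as described above.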

Convert categorical data to numerical: Although not always necessary, many machine learning models require categorical data to be in a numerical format. This means converting values such as yes and no into 1 and 0. However, be cautious not to accidentally impose an order on unordered categories -- for example, by converting Mr., Miss, and Mrs. into 1, 2, and 3.
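
One common way to encode an unordered category without implying order is one-hot encoding; a sketch with hypothetical "smoker" and "title" columns:

```python
import pandas as pd

# Hypothetical example.
df = pd.DataFrame({"smoker": ["yes", "no", "yes"],
                   "title": ["Mr.", "Miss", "Mrs."]})

# Binary category: a simple 1/0 mapping is safe.
df["smoker"] = df["smoker"].map({"yes": 1, "no": 0})

# Unordered category: one-hot encode rather than mapping to 1/2/3,
# so the model sees no spurious ranking between titles.
df = pd.get_dummies(df, columns=["title"])
```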

Convert timestamps: You may encounter timestamps in all types of formats. It’s a good idea to define a specific date/time format and consistently convert all timestamps to this format. It’s often useful to “explode” a timestamp (using a data warehouse date dimension) into its constituent parts -- separate year, month, day-of-week, and hour-of-day fields all have more predictive power than milliseconds since 1960.
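
A sketch of this "explosion" in pandas, using two hypothetical timestamps:

```python
import pandas as pd

# Hypothetical timestamps, parsed into one consistent datetime format.
ts = pd.to_datetime(pd.Series(["2023-01-15 08:30:00", "2023-06-02 17:05:00"]))

# Explode each timestamp into constituent parts with more predictive power.
parts = pd.DataFrame({
    "year": ts.dt.year,
    "month": ts.dt.month,
    "day_of_week": ts.dt.dayofweek,  # Monday = 0
    "hour_of_day": ts.dt.hour,
})
```

In a data warehouse, the same effect is usually achieved by joining to a date dimension instead of computing the parts on the fly.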

Getting Started

This list is not exhaustive; it is a simple guideline to get you started. There are other factors you may want to consider, such as how to handle outliers. Depending on the model you are training, you may want to remove them because retained outliers can skew your results -- but for an anomaly detection algorithm, the outliers are exactly the data you need.
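
One common way to flag outliers (an illustrative choice, not the only one) is the interquartile-range rule, which marks values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]:

```python
import pandas as pd

# Hypothetical sketch using the IQR rule.
s = pd.Series([10, 12, 11, 9, 10, 11, 100])
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
is_outlier = (s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)

outliers = s[is_outlier]   # keep these for anomaly detection
inliers = s[~is_outlier]   # keep these for most other models
```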

To get the most from data analytics and visualization tools, have your data ready and available for analytics: bring all the relevant data together in a clean, standardized format so that it is high-quality and can be trusted. Preparing this as a pipeline of operations within a cloud ETL tool means that when you need to bring more data up to date, potentially from many different external sources, you can simply press “Run” again and all the data is refreshed.
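
The same idea can be sketched in code: wrap the preparation steps in one repeatable function (hypothetical columns "age", "city", and an unused "notes"), so refreshed source data is simply re-run through the identical transformations:

```python
import pandas as pd

# Hypothetical sketch: one repeatable pipeline of the steps above.
def prepare(df: pd.DataFrame) -> pd.DataFrame:
    return (
        df.drop(columns=["notes"])                            # unused column
          .assign(age=lambda d: d["age"].fillna(d["age"].median()))  # impute
          .dropna()                                           # remove leftovers
    )

raw = pd.DataFrame({"age": [34.0, None],
                    "city": ["NY", "LA"],
                    "notes": ["a", "b"]})
clean = prepare(raw)
```

Re-running `prepare` on a refreshed `raw` plays the role of pressing “Run” in the ETL tool: the same transformations, applied to the new data.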

About the Author

David Langton is a seasoned software professional with over 20 years of experience creating award-winning technology and products. David currently serves as the VP of Product at Matillion, a data transformation solution provider. Prior to his role at Matillion, he worked as a data warehouse manager and contractor in the financial industry.

