How Machine-Learning Techniques Use Methods (Part 2 in a Series)
These six machine-learning techniques are worth getting to know.
- By David Loshin
- March 16, 2016
When considering your analytics needs, it is valuable to level-set among your stakeholders to ensure proper understanding of how analytics applications are integrated into your environment. In the data-mining and machine-learning worlds, we must differentiate among three different concepts that are often confused:
- Techniques are the practical applications that leverage data-mining methods to solve a business problem
- Methods are the types of analytical tasks that are performed
- Algorithms are the computational procedures by which methods are carried out and combined to create the techniques
Clearly these concepts are related in that techniques employ methods that are developed using algorithms. An example of a technique is a recommendation engine, which can be used to present a purchaser with product suggestions that the purchaser is presumed to be predisposed to buy. That recommendation engine can employ affinity grouping, a method of identifying combinations of data values that appear together more frequently than would be expected if the attributes were independent. Affinity grouping is often implemented through association rule mining performed with the Apriori algorithm.
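To make the algorithm level concrete, here is a minimal sketch of the Apriori frequent-itemset step over a handful of made-up shopping baskets. The data and the support threshold are illustrative only, and the candidate-generation step omits the subset-pruning refinement a production implementation would include:

```python
from itertools import combinations

def apriori_frequent_itemsets(transactions, min_support):
    """Return every itemset whose support meets min_support (Apriori)."""
    n = len(transactions)
    transactions = [frozenset(t) for t in transactions]
    items = {item for t in transactions for item in t}
    current = {frozenset([item]) for item in items}  # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # count how many transactions contain each candidate itemset
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt / n for c, cnt in counts.items()
                     if cnt / n >= min_support}
        frequent.update(survivors)
        # join step: combine surviving k-itemsets into (k+1)-item candidates
        # (subset pruning is omitted here for brevity)
        current = {a | b for a, b in combinations(survivors, 2)
                   if len(a | b) == k + 1}
        k += 1
    return frequent

# hypothetical transaction data
baskets = [["bread", "milk"], ["bread", "diapers", "beer"],
           ["milk", "diapers", "beer"], ["bread", "milk", "diapers"]]
freq = apriori_frequent_itemsets(baskets, min_support=0.5)
```

The returned dictionary maps each frequent itemset to its support; association rules (e.g., "diapers implies beer") would then be derived from these itemsets in a second pass.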
It is worth differentiating between the ways that analyses are performed, what they are intended to achieve, how predictive models are created, and the building blocks used to build the models. Confusing these will lead to misguided time and energy investments in building "things" as opposed to solving problems. Therefore, the first step is knowing what your core building blocks are, understanding what they are used for, and then learning how they are assembled to build solutions.
It is possible to establish a level of comfort in discussing data mining techniques when the participants are aware of the different methods and how they can be used. This helps avoid confusion in two ways. For technologists who are learning about data mining, it eases the learning curve by differentiating among what the analytical models do, how those models work, and how those models are derived. For business users, it provides confidence that the analytical models (and consequently, their results) rest on a reasonable scientific foundation.
It is worth describing some machine-learning methods that will form the basis for further discussion of the techniques in a future column. They include:
Clustering, a process of analyzing a large collection of entities with a defined set of attributes and characteristics and dividing that collection into smaller groups of entities that exhibit similarity based on the values of their attributes. An example of clustering is to look at a pool of customers and divide them into groups that can be analyzed in terms of "how good a customer" each group represents.
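As an illustrative sketch of clustering (not any particular product's implementation), the following is a basic one-dimensional k-means over hypothetical customer spend values, using only the standard library and a deterministic initialization spread across the value range:

```python
def kmeans_1d(values, k, iterations=20):
    """Cluster 1-D values into k groups by iteratively refining centroids.

    Requires k >= 2. Initialization and data here are illustrative only.
    """
    srt = sorted(values)
    # deterministic start: spread initial centroids across the sorted range
    centroids = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        # assignment step: each value joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # update step: each centroid becomes the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# hypothetical annual spend per customer
spend = [12, 15, 14, 80, 85, 90, 300, 310]
centroids, clusters = kmeans_1d(spend, k=3)
```

Here the three resulting groups could be read as low-, mid-, and high-value customers, which is exactly the "how good a customer" segmentation described above.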
Segmentation and classification are used to organize entities into defined classes through the evaluation of independent attribute values. This typically uses training data to devise models of the defined classes and then applies those models to assign new entities to one of those classes.
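A minimal sketch of that train-then-assign pattern is a nearest-centroid classifier: the "model" of each class is simply the average of its training examples, and a new entity is assigned to the class whose model it sits closest to. The customer attributes and segment labels below are hypothetical:

```python
from collections import defaultdict

def train_centroids(labeled):
    """Build a per-class model: the mean of each attribute within the class."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for features, label in labeled:
        if sums[label] is None:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def classify(model, features):
    """Assign an entity to the class whose centroid is nearest (Euclidean)."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda lbl: sq_dist(model[lbl]))

# hypothetical training data: (income_k, purchases_per_year) -> segment
training = [((20, 1), "occasional"), ((25, 2), "occasional"),
            ((90, 12), "frequent"), ((100, 15), "frequent")]
model = train_centroids(training)
segment = classify(model, (95, 10))  # new entity assigned to a defined class
```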
Affinity grouping is the process of evaluating relationships or associations among data elements that demonstrate some kind of affinity between objects, such as customers with similar purchases or individuals donating to the same causes.
Estimation is the process of assigning a numeric value or score to an object. An example is credit-risk rating for mortgage lending, which uses models to provide a scored assessment of the likelihood that a borrower will default on a loan.
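In the spirit of that example, a toy scoring function can be written as a logistic model that squashes a weighted combination of borrower attributes into a 0-to-1 risk estimate. The attributes and weights below are made-up illustrations, not a real scorecard:

```python
import math

def default_probability(debt_to_income, late_payments, years_employed):
    """Toy logistic risk score: higher output = higher estimated default risk.

    The weights are illustrative only and come from no real model.
    """
    z = (-2.0
         + 3.0 * debt_to_income     # heavier debt load raises risk
         + 0.8 * late_payments      # payment history raises risk
         - 0.3 * years_employed)    # employment stability lowers risk
    return 1 / (1 + math.exp(-z))   # squash to a 0..1 probability

low_risk  = default_probability(0.15, 0, 10)
high_risk = default_probability(0.60, 4, 1)
```

In practice the weights would themselves be estimated from historical loan outcomes rather than chosen by hand.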
Prediction describes expected future behavior(s) based on analysis of past actions. Prediction can employ other methods (such as clustering and classification) to analyze historical data and then devise models that can be applied to new data.
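The historical-data-to-model-to-new-data flow can be sketched with the simplest possible predictive model, an ordinary least-squares line fit on one attribute. The tenure and spend figures are hypothetical:

```python
def fit_line(xs, ys):
    """Fit y = a + b*x by ordinary least squares on historical observations."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# hypothetical historical data: years of tenure vs. annual spend
tenure = [1, 2, 3, 4, 5]
spend  = [100, 120, 140, 160, 180]

a, b = fit_line(tenure, spend)      # devise the model from past actions
predicted = a + b * 6               # apply it to a new, unseen case
```

The same two-phase shape (fit on history, apply to new data) holds when the model is a classifier or a cluster-based profile rather than a line.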
Description is the process of trying to characterize what has been discovered or trying to explain the results of the data mining. Being able to describe a behavior or a business rule is another step toward an effective intelligence program that can identify knowledge, articulate it, and then evaluate actions that can be taken.
Each of these six methods can be used to drive practical techniques, and they can also be combined to devise more comprehensive end-to-end processes of analysis, model creation, testing, implementation, and continuous validation and review.
David Loshin is a recognized thought leader in the areas of data quality and governance, master data management, and business intelligence. David is a prolific author regarding BI best practices via the expert channel at BeyeNETWORK and numerous books on BI and data quality. His valuable MDM insights can be found in his book, Master Data Management, which has been endorsed by data management industry leaders.