5 Things to Consider When Operationalizing Your Machine Learning

Operationalizing machine learning models requires a different process than creating those models. To be successful at this transition, you need to consider five critical areas.

When machine learning teams start out, most of their work is done in laboratory mode, meaning they work through the process in a manual yet scientific manner. They iteratively develop valuable machine learning models by forming a hypothesis, testing the model to confirm that hypothesis, and adjusting to improve model behavior. As these projects mature and evolve, it often becomes important to take them out of experimentation mode and operationalize them.

Operationalizing machine learning requires a shift in mindset and a different set of skills for those performing the work. The inquisitive state of what-ifs and trial and error gives way to practices that are predictable and stable. The goal is to reproduce the same valuable results that were generated during the creation process, but in a way that is more hands-off and long-running. This shifts the team’s goals from experimentation to experience management.

To effectively operationalize your machine learning model, consider these five key areas: data collection, error management, consumption, security, and model management.

Data Collection

During the experimentation phase, much of the data collection and cleansing is done manually. A training and testing data set is pulled from the source -- that source could be a data lake, a data warehouse, or an operational system -- and is often hand-curated. The merging, matching, deduping, and overall data wrangling is generally done one step at a time, mainly because the data scientists are not yet sure what will persist (and what won’t) in the data set. This data management process can span from work done in programming languages such as Python and R to work performed in a spreadsheet or a text editor.
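In practice, this hand-curation often looks like a one-off script run and inspected step by step. Here is a minimal sketch in pandas; the file names and columns are hypothetical:

```python
import pandas as pd

# Exploratory wrangling: each step is run and checked by hand.
customers = pd.read_csv("customers_extract.csv")  # pulled manually from the warehouse
activity = pd.read_csv("activity_extract.csv")    # pulled manually from an operational system

# Merge, dedupe, and patch gaps one step at a time, inspecting intermediate results.
df = customers.merge(activity, on="customer_id", how="left")
df = df.drop_duplicates(subset="customer_id")
df["last_login"] = pd.to_datetime(df["last_login"], errors="coerce")
df = df.fillna({"visit_count": 0})

df.to_csv("training_set_v3.csv", index=False)     # the hand-curated training set
```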

With an operational model, the uncertainty of what data is valuable is removed and all the data wrangling done during the build phase now needs to be automated and productionalized. This means that the scripts used during the development phase need to be standardized into something that can be supported in a production environment. This can mean rewriting scripts into a supported language, automating the steps performed in a spreadsheet using scripting or an ETL tool, or ensuring that all the data sources used are being updated regularly and are accessible as part of the data collection process.
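A minimal sketch of what that refactoring might look like: the same wrangling steps, moved into a single parameterized function that a scheduler or ETL tool can run unattended. The table names and connection details are assumptions for illustration:

```python
import logging

import pandas as pd
from sqlalchemy import create_engine

log = logging.getLogger("feature_pipeline")

def build_feature_set(connection_url: str, output_table: str) -> int:
    """Reproduce the hand-run wrangling as one supported, repeatable job."""
    engine = create_engine(connection_url)

    # Read from governed, regularly refreshed sources instead of manual extracts.
    customers = pd.read_sql("SELECT * FROM customers", engine)
    activity = pd.read_sql("SELECT * FROM activity", engine)

    df = customers.merge(activity, on="customer_id", how="left")
    df = df.drop_duplicates(subset="customer_id")
    df["last_login"] = pd.to_datetime(df["last_login"], errors="coerce")
    df = df.fillna({"visit_count": 0})

    # Land the result in a shared table rather than a local CSV.
    df.to_sql(output_table, engine, if_exists="replace", index=False)
    log.info("Feature set rebuilt: %d rows", len(df))
    return len(df)
```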

Error Management

When data scientists are working through the process one step at a time, they manage the errors that arise themselves. From dirty data to data access issues, when they run into a problem, they interact with the people and systems that can resolve it. For these unanticipated challenges, the most effective path forward is to address them one at a time as they arise.

This is not the case once the models have been promoted to a production environment. As these models become integrated with an overall data pipeline, downstream processes come to rely on their output, and errors carry a higher risk of business disruption. As many of these potential errors as possible need to be anticipated during the pre-operational design and development stage, and automated mechanisms need to be designed and developed to address them.

Automation needs to be put in place both for identifying potentially disruptive errors and for self-healing efforts. In addition, it is often important to establish logs and alerts for the automated corrective actions so you can identify patterns that could indicate more pervasive and worrisome underlying problems.
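One common pattern combines both: a retry wrapper that logs every failure, attempts self-healing through retries with backoff, and raises an alert only when those retries are exhausted. A minimal sketch; the alert hook is a placeholder for whatever monitoring system you use:

```python
import logging
import time
from functools import wraps

log = logging.getLogger("pipeline.errors")

def send_alert(step_name: str, exc: Exception) -> None:
    # Placeholder: in production, route this to your monitoring or paging system.
    log.error("ALERT: %s exhausted retries: %s", step_name, exc)

def with_retries(max_attempts: int = 3, backoff_seconds: int = 30):
    """Retry transient failures automatically; alert when self-healing fails."""
    def decorator(step):
        @wraps(step)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return step(*args, **kwargs)
                except Exception as exc:
                    # Log every failure so recurring patterns stay visible.
                    log.warning("%s failed (attempt %d/%d): %s",
                                step.__name__, attempt, max_attempts, exc)
                    if attempt == max_attempts:
                        send_alert(step.__name__, exc)
                        raise
                    time.sleep(backoff_seconds * attempt)
        return wrapper
    return decorator
```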

Consumption

During the initial phase, the goal is providing a single answer or proving out a concept. Often, the results from the machine learning models are curated, compiled, and presented -- either by manually producing charts and graphs or by summarizing the results into a slide presentation wrapped in data storytelling to relay the most cogent discoveries. Because these results are novel, their presentation is one of the most important inputs to the business’s decision about how to proceed. If sufficient value is demonstrated, this presentation can serve as the stage-gate approval to move to the next step: operationalizing the models.

As these same models are promoted from the information delivery stage to an operational stage, the consumption model changes. This includes both who is consuming the information and how it is being consumed.

The results of the initial phase are usually consumed by key decision makers, who either use the research directly to make a business decision or act as gatekeepers deciding whether the model can move to an operational stage. Once in operation, the output of the machine learning models is usually built into larger processes that ultimately drive business results. The same decision makers who consumed the initial-stage output become less concerned with the data and more driven by the results that come from having the data embedded in the company’s processes. The process owners become the new consumers during the operations phase, and they partner with the IT team to automate the integration of these models with the business processes.

In an operations model, consumption moves from presentations in the boardroom to direct integrations, either with a data source that has been pre-populated from the model or through an API. This consumption model requires a much different set of outputs than were used in the discovery phase.
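As an illustration, the same model that once fed a slide deck can be exposed as a small prediction service. A hedged sketch using Flask and a serialized model; the model file and feature names are assumptions:

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # assumed pre-trained, serialized model

@app.post("/predict")
def predict():
    """Downstream processes consume scores directly instead of via a report."""
    payload = request.get_json()
    features = [[payload["visit_count"], payload["days_since_login"]]]
    score = model.predict_proba(features)[0][1]
    return jsonify({"churn_probability": float(score)})

if __name__ == "__main__":
    app.run(port=8080)
```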

Security

As data scientists are developing the model, they are usually among a small number of people seeing and interacting with the data in question. With a limited audience and data insights that are still being developed, tested, and validated, companies can usually rely on strong desktop management practices as their primary defense against security breaches.

As more individuals and teams get involved and the company’s reliance on the data as part of its business processes grows, so does the need for implementing sound enterprise data management controls. This requires more advanced security protocols for the overall data ecosystem, such as enterprise data access controls, as well as master data management. Strong desktop security practices are still relevant at this stage but become only one facet of an overall defense-in-depth strategy.

When the raw data is enriched and processed to extract insights, it becomes a much more valuable asset to the business. In some cases, this information could be viewed as a trade secret. Its level of confidentiality, integrity, and availability needs to be increased commensurate with its organizational value.

Model Management

Although the process for model creation is very scientific and controlled, experimentation requires that the data science team have the autonomy to incrementally tweak and improve models to optimize the results. These alterations can happen in the data inputs, the machine learning algorithms, and the model hyperparameters.

In a state of operations, this incremental optimization and improvement need to be managed much more closely. This includes version control to ensure that inadvertent changes don’t disrupt operational processing. This also allows for rollbacks to previous versions if something does go awry.
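A file-based sketch of that idea: every deployed model is stored as an immutable, numbered artifact, and a small pointer file records which version serves production, so a rollback is simply moving the pointer. (Teams often use a full model registry such as MLflow for this; the layout here is purely illustrative.)

```python
import json
from pathlib import Path

import joblib

REGISTRY = Path("model_registry")
POINTER = REGISTRY / "current.json"

def publish(model, version: str) -> None:
    """Store each model as an immutable, versioned artifact and promote it."""
    REGISTRY.mkdir(exist_ok=True)
    joblib.dump(model, REGISTRY / f"model_{version}.joblib")
    POINTER.write_text(json.dumps({"version": version}))

def rollback(version: str) -> None:
    """Point production back at a known-good version if something goes awry."""
    if not (REGISTRY / f"model_{version}.joblib").exists():
        raise FileNotFoundError(f"no artifact for version {version}")
    POINTER.write_text(json.dumps({"version": version}))

def load_current():
    """Load whatever version the pointer currently designates."""
    version = json.loads(POINTER.read_text())["version"]
    return joblib.load(REGISTRY / f"model_{version}.joblib")
```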

As human behavior and the business environment evolve, the assumptions built into the model can become invalid or can start to drift away from the initial specifications. Once a model is in production, it is important to monitor it to ensure that it continues to deliver the same level of accuracy and precision it had in the laboratory. If its performance drifts beyond predefined thresholds, the model needs to return to the experimentation phase to identify what alterations will bring it back into line.
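A minimal monitoring sketch: compare the live model’s accuracy over a recent window of labeled outcomes against the baseline it achieved in the lab, and flag it for re-experimentation when the gap exceeds a threshold. The baseline and tolerance values are illustrative assumptions:

```python
from sklearn.metrics import accuracy_score

LAB_BASELINE = 0.92     # accuracy measured during experimentation (assumed)
DRIFT_TOLERANCE = 0.05  # allowable drop before intervention (assumed)

def check_for_drift(y_true, y_pred) -> bool:
    """Return True if the production model has drifted past the threshold."""
    live_accuracy = accuracy_score(y_true, y_pred)
    drifted = (LAB_BASELINE - live_accuracy) > DRIFT_TOLERANCE
    if drifted:
        # Send the model back to the experimentation phase for rework.
        print(f"Drift detected: live accuracy {live_accuracy:.3f} "
              f"vs. lab baseline {LAB_BASELINE:.3f}")
    return drifted
```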

Final Words

As a data and analytics leader, your ability to account for these key elements of the operationalization process will be fundamental. It will allow you to serve as the bridge between the creative experimentation phase of machine learning model development and the routine, stable operational phase. Through this balance, you can optimize two seemingly opposing processes and deliver business value.

 
