Getting Started with AI Governance
Organizations starting an AI journey can still fall into the old “garbage in, garbage out” trap. AI governance is a necessity, not an option.
- By Wilson Pang
- November 20, 2020
The ability to leverage AI to drive business growth is a compelling proposition across industries today, whether for customer service, R&D, finance, or any other business domain. In fact, according to our annual State of AI report, nearly three-quarters of businesses now consider AI critical to their success.
However, there’s an important caveat to this surge of interest in AI: bad data leads to bad results, so companies that feed bad data into data-driven decision making will make bad decisions. Beyond leveraging high-quality training data, organizations also need to manage and measure their AI models in the right way, with the right tools.
Yet, per O’Reilly’s AI Adoption in the Enterprise 2020 report, only slightly more than 20 percent of organizations have implemented formal AI governance processes or tools, and most think of governance as optional rather than essential. This needs to change, and it is starting to.
Defining AI Governance
Companies in the earliest stages of their AI projects give little thought to AI governance. However, they should begin thinking about what AI governance is and how to build it into their AI projects and processes to ensure they avoid the “garbage in, garbage out” trap.
AI governance is composed of three main areas:
AI model definition: The purpose of the AI model must be clearly defined. What does the organization wish to achieve, and is the AI model capable of achieving it? Can the model be clearly explained to others? Being able to do so can help with troubleshooting while ensuring knowledge transfer for future projects.
AI model management: Typically, once a company begins implementing AI, it quickly develops multiple models, so it is important to build an AI model catalog to ensure the right model is used for the right purpose. The catalog should track what each model can and cannot do, which departments are using which models for what purpose, who built each model, and whether it has been modified. As project sophistication grows, the catalog can also track:
- Requirements for the training data (e.g., how the data is collected, who owns it, where it is stored)
- Requirements for deployment (e.g., how the features are managed in real-time)
- The key metrics that need to be achieved
- The process for continuously monitoring the model to correct for model drift
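To make the catalog idea above concrete, here is a minimal sketch of what one catalog entry might look like in code. The field names, the `ModelCatalogEntry` type, and the example model are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional

@dataclass
class ModelCatalogEntry:
    """One record in a hypothetical AI model catalog."""
    name: str
    purpose: str                      # what the model is meant to achieve
    limitations: List[str]            # what the model cannot do
    owner: str                        # who built the model
    used_by: List[str]                # which departments use it, and for what
    last_modified: Optional[date] = None  # None means never modified since launch

# The catalog itself can start as a plain dictionary keyed by model name.
catalog: Dict[str, ModelCatalogEntry] = {}

def register(entry: ModelCatalogEntry) -> None:
    """Add (or replace) a model record in the catalog."""
    catalog[entry.name] = entry

# Hypothetical example entry:
register(ModelCatalogEntry(
    name="churn-predictor-v2",
    purpose="Score customer churn risk for retention campaigns",
    limitations=["Not validated for enterprise accounts"],
    owner="data-science team",
    used_by=["Marketing: campaign targeting"],
))
```

Even a lightweight record like this answers the basic catalog questions (who built it, who uses it, what it can't do) and can later be extended with training-data requirements, deployment requirements, and monitoring metadata as the bullets above suggest.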
Data governance: Because successful AI outcomes depend on high-quality data, effective data governance is essential. The need for data governance is now generally well understood. However, according to McKinsey, most organizations are still struggling to implement good data governance programs. The desire to benefit from AI projects can, therefore, become the impetus for improved data governance, which answers a variety of key questions. What data do you have, and what are its source and lineage? How is the data modified or transformed, and by whom? Is data being duplicated for different use cases? Does it contain sensitive information, such as personally identifiable information (PII) or intellectual property (IP), and does the way it’s being used for AI projects comply with evolving privacy regulations?
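The governance questions above lend themselves to simple automated checks once datasets carry basic metadata. The sketch below assumes a hypothetical metadata record and three illustrative rules; real programs would define their own fields and policies:

```python
from typing import Dict, List

# Hypothetical dataset metadata record for governance checks.
dataset = {
    "name": "support_tickets_2020",
    "source": "CRM export",                            # where the data came from
    "lineage": ["raw", "deduplicated", "anonymized"],  # transformations applied, in order
    "owner": "support-ops",                            # who is accountable for the data
    "contains_pii": True,
    "pii_basis": "customer consent",                   # legal basis if PII is present
}

def governance_check(ds: Dict) -> List[str]:
    """Return a list of governance issues; an empty list means the dataset passes."""
    issues = []
    if not ds.get("owner"):
        issues.append("no owner recorded")
    if not ds.get("lineage"):
        issues.append("no lineage recorded")
    if ds.get("contains_pii") and not ds.get("pii_basis"):
        issues.append("PII present without a documented legal basis")
    return issues
```

A check like this turns the "what data do you have, who owns it, how was it transformed" questions from a periodic audit exercise into something that can run automatically whenever a dataset is registered for an AI project.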
Best Practices for Getting Started
When you’re ready to get started, you can build an AI governance program following these key steps:
- Assess whether your company is at the right stage of its AI journey. Getting it right takes time and coordination, so it is important to prioritize goals and take on the governance tasks that are appropriate for the sophistication of your AI projects.
- Start small. By defining the goal for each AI project, you can assess whether the models perform as expected, check for and reduce bias so the AI model produces fair results, and determine whether there are issues that could impact success.
- Create documentation to capture all the standards and best practices you are relying on and any knowledge you gain from experience. What are the key metrics that need to be measured? What processes will you use to track the data and monitor performance?
- Periodically review the standards and best practices to ensure they are keeping pace with evolving knowledge across the industry. For larger organizations, as the complexity and number of projects grow, consider creating an independent, dedicated governance team to enforce standards and consistency across the organization, reduce duplication, and increase efficiency.
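One concrete way to implement the continuous monitoring mentioned above is the Population Stability Index (PSI), a widely used score for detecting drift between a model's training-time data and what it sees in production. This is a minimal sketch (the bin count and smoothing constant are illustrative choices); common rules of thumb treat PSI below 0.1 as stable and above 0.25 as significant drift:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between a baseline sample (e.g., training
    data) and a live sample (e.g., recent production inputs or scores)."""
    lo, hi = min(baseline), max(baseline)
    # Equal-width bin edges derived from the baseline distribution.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            i = sum(v > e for e in edges)  # index of the bin containing v
            counts[i] += 1
        # Smooth empty bins (0.5 pseudo-count) so the log term below is defined.
        return [max(c, 0.5) / len(values) for c in counts]

    base_f, live_f = fractions(baseline), fractions(live)
    return sum((lv - bv) * math.log(lv / bv) for bv, lv in zip(base_f, live_f))
```

Scheduling a check like `psi(training_scores, last_week_scores)` and alerting when it crosses a threshold is one simple way to operationalize the "correct for model drift" item in the catalog above.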
Some large organizations that rely heavily on machine learning (ML), such as eBay, Facebook, Google, and Uber, have built systems to automate the management of AI governance. Uber, for example, has hundreds of ML use cases with thousands of models deployed in production and millions of predictions made every second. The company’s AI workflow management system, Michelangelo, a combination of open source systems and components built in-house at Uber, enforces a standard workflow across data management, model training, model evaluation, model deployment, prediction making, and prediction monitoring.
Companies just starting out can look to recommended best practices to follow, or they can consider using a consultant to help them create a baseline governance strategy. They may even turn to a third-party provider to help them implement the processes or tools they need.
AI governance is a new and evolving discipline, and there is a lot of cutting-edge research being done. Over the next few years, we will likely see a variety of new tools and services come to market to support these efforts, especially as companies continue to increase their investments in AI. In fact, companies have increased AI investment by 4.6 percent on average over the last year, and firms plan to nearly double that rate to 8.3 percent per year over the next three years, according to ESI ThoughtLabs research.
Although AI governance continues to evolve, companies may need to take a DIY approach to developing basic AI governance capabilities to ensure they are getting the most from their AI projects.
Wilson Pang joined Appen in November 2018 as CTO and is responsible for the company’s products and technology. Pang has over 17 years of experience in software engineering and data science. Prior to joining Appen, he was senior director of engineering at eBay in California and provided leadership to various domains including data service and solutions, search science, marketing technology, and billing systems. Pang obtained his master’s and bachelor’s degrees in electrical engineering from Zhejiang University in China. You can reach the author via LinkedIn.