Monitaur Launches GovernML to Guide and Assure Entire AI Life Cycle

New offering expands AI governance software from policy to proof.

Note: TDWI’s editors carefully choose vendor-issued press releases about new or upgraded products and services. We have edited and/or condensed this release to highlight key features but make no claims as to the accuracy of the vendor's statements.

Monitaur, an AI governance software company, has released GovernML, the latest addition to its ML Assurance platform, designed for enterprises committed to responsible AI. Offered as a web-based SaaS application, GovernML enables enterprises to establish and maintain a system of record for model governance policies, ethical practices, and model risk across their entire AI portfolio.

As deployments of AI accelerate across industries, so, too, have efforts to establish regulations and internal standards that ensure fair, safe, transparent, and responsible use.

  • Entities ranging from the European Union to the city of New York and the state of Colorado are finalizing legislation that codifies into law practices espoused by a wide range of public and private institutions.
  • Corporations are prioritizing the need to establish and operationalize governance policies across AI applications in order to demonstrate compliance and protect stakeholders from harm.

“Good AI needs great governance,” said Monitaur founding CEO Anthony Habayeb. “Many companies have no idea where to start with governing their AI. Others have a strong foundation of policies and enterprise risk management but no real enabled operations around them. They lack a central home for their policies, evidence of good practice, and collaboration across functions. We built GovernML to solve for both.”

Effective AI governance requires a strong foundation of risk management policies and tight collaboration between modeling and risk management stakeholders. Too often, conversations about managing the risks of AI focus narrowly on technical concepts such as model explainability, monitoring, or bias testing. This narrow focus understates the broader business challenge of life cycle governance and overlooks the need to prioritize policies and enable human oversight.

GovernML for Building and Managing Policies for AI Ethics

Available now, GovernML’s integration into the Monitaur ML Assurance platform completes a full life cycle AI governance offering, covering everything from policy management and technical monitoring to testing and human oversight. By centralizing policies, controls, and evidence across all advanced models in the enterprise, GovernML makes it possible to manage responsible, compliant, and ethical AI programs.

Key capabilities enable business, risk and compliance, and technical leaders to:

  • Create a comprehensive library of governance policies that map to specific business needs, including the ability to immediately leverage Monitaur’s proprietary controls based on best practices for AI and ML audits
  • Provide centralized access to model information and proof of responsible practice throughout the model life cycle
  • Embed multiple lines of defense and appropriate segregation of duties in a compliant, secure system of record
  • Gain consensus and drive cross-functional alignment around AI projects

For more information on GovernML, please visit https://monitaur.ai/products#GovernML.
