Q&A: Overcoming AI & Data Governance Challenges
Further's Lauren Burke-McCarthy talks governance mistakes, shares insights on getting started, and more.
Almost every organization is currently struggling with some aspect of AI and data governance. Ahead of TDWI's upcoming free, half-day Virtual Summit, "Modern Data & AI Governance Playbook: Quality, Security, and Compliance," taking place March 31, 2026, we caught up with Lauren Burke-McCarthy, who will be speaking on "Data and AI Governance: Building Trust and Managing Risk" as part of the event.
As an associate principal of data science, AI, and product strategy at Further, host of the WIA After Hours podcast, and an IAPP-certified AI Governance Professional (AIGP), Burke-McCarthy spends her days helping companies conquer their biggest data and AI challenges, which often means tackling data and AI governance. She shared with us some of the biggest mistakes she sees companies make and some of her top tips.
What’s the biggest governance mistake you see companies making with AI right now?
There are several. One I often see is treating AI like traditional software and skipping rigorous evaluations before deployment. This lets prototypes move to production without evidence that a non-deterministic system behaves reliably across real scenarios and edge cases.
Another is not extending governance to third-party AI, such as vendor tools, embedded AI features, models, and APIs. These pose risk because you don't own or control the training data, updates, or behavior.
A third is relying on policy alone. Policy works best when it’s backed by technical and operational controls: intake gates, pre-launch testing, monitoring, and incident response plans.
Finally, forgoing fundamentals like AI inventories, risk tiering, and tracing/observability.
How do you balance moving fast with AI versus being careful about risks?
I recommend a risk-based approach: tier use cases by criticality and apply proportionate controls. This keeps low-risk experimentation accessible while ensuring that high-risk, critical use cases undergo proper testing, review, and accountability measures.
Governance also needs to evolve in parallel. Start with priorities that reduce the biggest risks early, then expand controls as adoption grows and your portfolio becomes more complex.
What should a company do first when starting AI governance from scratch?
Start with Minimum Viable Governance as the foundation, and build on what already exists where possible. Inventory current AI systems and use cases. Identify which models are in play, where they are used, who owns them, which vendors are involved, the data they touch, and the sensitivity of that data.
Next, stand up a simple intake process with risk tiering based on a defined risk taxonomy, so you can prioritize effort where it matters most.
Assign roles and responsibilities, including owners and decision-makers who can make go/no-go approvals at stage gates.
Then, define launch readiness criteria that include testing, documentation, and sign-offs.
Don't forget to add monitoring and incident response plans, since many AI risks only surface after deployment, often in edge cases.
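The minimum-viable-governance steps above can be sketched in code. The following is an illustrative sketch only, not an implementation from the interview: all names (`AISystem`, `RiskTier`, `REQUIRED_CHECKS`) and the specific tiers and checks are hypothetical stand-ins for the inventory, risk taxonomy, and launch-readiness criteria your own governance team would define.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Hypothetical risk taxonomy; a real one comes from your governance team.
class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystem:
    """One row in the AI inventory: what it is, who owns it, what data it touches."""
    name: str
    owner: str
    vendor: Optional[str]            # None for in-house models
    data_sensitivity: str            # e.g., "public", "internal", "pii"
    tier: RiskTier
    launch_checks: dict = field(default_factory=dict)  # check name -> passed?

# Proportionate controls: higher tiers require more sign-offs before launch.
REQUIRED_CHECKS = {
    RiskTier.LOW: {"owner_signoff"},
    RiskTier.MEDIUM: {"owner_signoff", "eval_suite"},
    RiskTier.HIGH: {"owner_signoff", "eval_suite", "security_review", "incident_plan"},
}

def ready_to_launch(system: AISystem, required: dict) -> bool:
    """A system passes the stage gate when every check for its tier has passed."""
    return all(system.launch_checks.get(check) for check in required[system.tier])

# A high-risk system that has not yet completed its security review or incident plan:
chatbot = AISystem(
    name="support-chatbot", owner="cx-team", vendor="example-vendor",
    data_sensitivity="pii", tier=RiskTier.HIGH,
    launch_checks={"owner_signoff": True, "eval_suite": True},
)
print(ready_to_launch(chatbot, REQUIRED_CHECKS))  # False: gate blocks the launch
```

The point of the sketch is the shape of the process, not the code: a single inventory record per system, a defined risk taxonomy, and go/no-go gates whose strictness scales with the tier.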
What are the top 3 risks every organization should be watching for with their AI systems?
- Security and Privacy: Third-party tools, shadow AI, and broad system access all increase exposure. Establish boundaries on data usage and implement strong controls over access, authentication, and vendor integrations.
- Data Quality and Access: Quality and readiness issues are common, particularly with the unstructured data used for generative AI. Access control and data leakage prevention are equally critical, as AI pipelines can expose sensitive data.
- Reliability: AI behavior can change over time as data drifts, sources become outdated, and models update. Plan for downstream impact, maintain decision oversight, and implement monitoring.
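To make the reliability point concrete, here is a minimal monitoring sketch, assuming you log a numeric quality score (for example, a relevance or accuracy metric) per response. The function name, scores, and threshold are all illustrative assumptions, not anything prescribed in the interview.

```python
import statistics

def drift_alert(baseline: list, recent: list, max_shift: float = 0.1) -> bool:
    """Flag when the mean quality score shifts beyond max_shift from the baseline.

    A real monitor would use a proper drift statistic over many metrics;
    this only shows the pattern: compare live behavior to a known-good baseline.
    """
    return abs(statistics.mean(recent) - statistics.mean(baseline)) > max_shift

baseline_scores = [0.82, 0.85, 0.80, 0.84, 0.83]   # scores from pre-launch evaluation
recent_scores = [0.70, 0.68, 0.72, 0.69, 0.71]     # scores after a model update
print(drift_alert(baseline_scores, recent_scores))  # True: investigate before users notice
```

Even a crude check like this operationalizes the advice above: because AI behavior changes as data drifts and models update, "working at launch" is not evidence of "working now."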
What’s the top takeaway attendees will get from your session “Data and AI Governance: Building Trust and Managing Risk” on March 31?
Trust and sustainable value come from modernizing data and AI governance together. When those foundations are aligned, teams can keep innovating while managing evolving risk and meeting regulatory expectations.
Don't miss your chance to ask Lauren Burke-McCarthy your most pressing data and AI governance questions! Sign up for TDWI's free, half-day summit "Modern Data & AI Governance Playbook: Quality, Security, and Compliance," and ask your questions live during the Q&A portion of her session, "Data and AI Governance: Building Trust and Managing Risk."