Prerequisite: None
As organizations race to scale AI, many overlook hidden risks that undermine trust and operational resilience. In this session, we'll examine why seemingly stable AI models often suffer from "silent failures": issues that remain undetected until they escalate into significant business or reputational harm. We'll also explore how superficial testing creates blind spots, and how scaling AI across teams and business units introduces new layers of complexity, what we refer to as "scaling traps."
True trust in AI isn't just about technical accuracy; it requires embedding governance and compliance into the core of every AI initiative. We'll reflect on principles and approaches that help organizations manage risk, strengthen oversight, and align with emerging regulatory expectations. Rather than offering a one-size-fits-all solution, the session highlights considerations for building resilient, transparent, and responsible AI systems, wherever you are in your AI journey.