Prerequisite: None
Ken Johnston
Vice President of Data, Analytics, and AI
Envorso
As generative AI and agent-based systems rapidly move from experimental to essential in analytics workflows, data and AI teams are being asked to do more than build performant models: they must also ensure these systems are fair, safe, transparent, and aligned with organizational values. But how?
In this interactive 90-minute tutorial, AI and data science leader Ken Johnston, formerly a technologist at Microsoft and Ford, introduces participants to the practical tools and frameworks of responsible AI, including the CSET AI Harm Framework and the NIST AI Risk Management Framework (RMF). Attendees will learn to assess and mitigate risks in generative and agentic AI applications through real-world case studies, guided exercises, and collaborative discussion.
Whether you're building AI copilots, deploying agentic decision systems, or embedding LLMs into analytics pipelines, this session will equip you with actionable techniques to identify and address ethical and societal risks early in the development lifecycle.
Topics include:
- What makes AI “responsible,” and what can go wrong when it’s not
- Understanding tangible and intangible harms in AI systems
- Using incident case studies to spot overlooked risks
- Applying harm assessment methods to your own projects
- Crafting red-flag criteria and escalation paths for AI initiatives