Making AI Compliance Practical: A Guide for Data Teams Navigating Risk, Regulation, and Reality
The biggest compliance failures come from good teams moving too fast and assuming nothing will go wrong. How can your team approach AI compliance effectively?
- By Jeremiah Folorunso
- February 18, 2026
Some years ago, I worked with a data team on an AI-powered internal tool designed to help HR evaluate employee performance. Everything was moving quickly, and the prototype was already generating insights from historical data. Then, in one review meeting, someone from the legal team raised a quiet but critical question: “Have we considered whether this tool complies with data protection laws?” The room paused. We hadn’t.
That question was a pivotal moment. It wasn’t that we had done anything malicious; our intentions were good, and we believed our use of the data was reasonable. But we hadn’t designed with compliance in mind. We assumed compliance could be added later, rather than letting it shape the product from the beginning.
This happens more often than we think. As AI tools become more embedded in enterprise workflows, data teams are confronting a growing reality: compliance isn’t only a legal concern but also a design constraint, a quality signal, and, often, a competitive differentiator. Navigating compliance can feel complex, especially for teams focused on building and shipping. The good news? It doesn’t have to be. Approached intentionally, compliance becomes a pathway to better decisions, not a barrier.
Here’s how data teams can work together to navigate the risk landscape.
1. Identify Risk Early: During Discovery, Not Deployment
A common mistake is waiting until late in development to think about risk. I once worked on a fintech tool that let users upload financial documents. Midway through the build, someone asked, “What if users upload personal ID documents by mistake? Are we allowed to store them?”
We hadn’t considered that. Our storage setup worked, but it didn’t distinguish between document types. We had no protections in place.
We paused, redesigned the upload flow, added front-end filters, updated the privacy notice, and guided users through what to submit. That last-minute fix could have been avoided if we’d raised the question earlier.
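For illustration, here is a minimal sketch of the kind of upload guard that question pushed us toward. The allowed types, the ID patterns, and the validate_upload helper are all hypothetical stand-ins, not the real project’s code:

```python
import re

# Hypothetical allow-list of acceptable upload types.
ALLOWED_TYPES = {"application/pdf", "text/csv"}

# Illustrative patterns for text that suggests a personal ID document.
ID_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # US SSN-style number
    re.compile(r"\bpassport\s+(no|number)\b", re.IGNORECASE),  # passport references
]

def validate_upload(mime_type: str, extracted_text: str) -> tuple[bool, str]:
    """Reject uploads that look like personal ID documents before they are stored."""
    if mime_type not in ALLOWED_TYPES:
        return False, f"Unsupported file type: {mime_type}"
    for pattern in ID_PATTERNS:
        if pattern.search(extracted_text):
            return False, "This looks like an ID document. Please upload financial statements only."
    return True, "ok"

ok, message = validate_upload("application/pdf", "Passport No: X1234567")
print(ok, message)  # False, with a message guiding the user to resubmit
```

A pattern list like this will never catch everything; the point is that the check runs before storage, so mistaken uploads are turned away instead of quietly retained.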
It can help to ask your team questions like:
- What kind of data are we using?
- Is any of it sensitive or regulated?
- Do users know what’s happening with their data?
- Could this tool mislead or discriminate?
These don’t slow down development; they help you prevent problems and build with confidence.
2. Build Transparency into Every Layer
For a plan to work, it has to be easy to understand, and even more so when its choices affect people. I once reviewed an AI system that suggested raises based on internal company data. When I asked how it arrived at those suggestions, the team admitted the system didn’t explain its choices. Employees just saw a list of recommendations: no reasoning, no audit trail, no one accountable.
We added a button labeled “Why this recommendation?” Beneath it, we surfaced the three factors that mattered most to the model: a history of delivering projects, feedback from peers, and time in the role. It wasn’t perfect, but it made the tool clearer and earned users’ trust.
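The logic behind such a button doesn’t have to be sophisticated. Here is a minimal sketch that ranks per-feature contributions, assuming a simple linear model and made-up feature names, labels, and weights (a real project would likely use a proper attribution method, but the product idea is the same):

```python
# Hypothetical features and weights; assumes a linear model where each
# feature's contribution is weight * value, with values pre-scaled to 0-1.
FEATURE_LABELS = {
    "delivery_history": "History of delivering projects",
    "peer_feedback": "Feedback from peers",
    "tenure": "Time in the role",
    "training": "Recent training completed",
}

def top_factors(weights: dict[str, float], features: dict[str, float], k: int = 3) -> list[str]:
    """Return plain-language labels for the k features that drove the score most."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    ranked = sorted(contributions, key=lambda name: abs(contributions[name]), reverse=True)
    return [FEATURE_LABELS[name] for name in ranked[:k]]

weights = {"delivery_history": 0.9, "peer_feedback": 0.7, "tenure": 0.2, "training": 0.1}
features = {"delivery_history": 0.8, "peer_feedback": 0.9, "tenure": 0.6, "training": 0.3}
print(top_factors(weights, features))
# ['History of delivering projects', 'Feedback from peers', 'Time in the role']
```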
Transparency starts with:
- Simple documentation of how models were built and what assumptions were made.
- Tools such as model cards or dataset datasheets (a minimal example follows this list).
- In-product features that explain decisions in plain language.
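Even a model card that lives in code is better than no model card at all. Here is a minimal sketch; the structure loosely follows the published model-card idea, and every field value is a placeholder:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A lightweight, in-code model card. All values used below are placeholders."""
    name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)

card = ModelCard(
    name="raise-recommender-v2",
    intended_use="Suggest raise candidates for manager review; never for automated decisions.",
    training_data="Internal performance reviews, 2019-2024 (hypothetical).",
    known_limitations=["Sparse data for employees with under one year of tenure."],
    assumptions=["Peer feedback scores are comparable across departments."],
)
print(card.intended_use)
```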
When people understand how something works, they’re more likely to trust it, even when it’s imperfect.
3. Create a Culture of Red Flags and Gut Checks
In one project, a junior designer casually asked, “What happens if someone tries to game this scoring system?” That simple question revealed a significant blind spot: we had no safeguards in place. Her question changed the direction of the sprint and probably prevented abuse after launch.
You don’t need a new framework. You need space. Here’s how:
- Add a “red flag moment” to design reviews: Does anything feel risky or unclear?
- Normalize raising concerns, even half-formed ones.
- Document assumptions. If your model assumes users won’t upload personal data, write that down.
Risk awareness doesn’t start with policies. It starts with giving people room to ask hard questions.
4. Make Automation Your Ally, Not a Crutch
Automation can help with compliance, but only if it’s used correctly. I once reviewed a tool that used automated rules to detect private information. It worked well on English text, but when tested on multilingual content, it missed several personal identifiers. The team assumed it was “smart enough.” It wasn’t.
We kept the automation but added human review for edge cases, confidence thresholds that trigger extra checks, and alerts for uncommon input formats. The automation still did the heavy lifting, now with checks and balances built in.
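The confidence-threshold pattern is easy to express in code. A minimal sketch, using a made-up Detection type and thresholds that would need tuning against real validation data:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    text: str
    label: str         # e.g. "EMAIL", "NATIONAL_ID"
    confidence: float  # model confidence for this detection, 0-1

# Hypothetical thresholds dividing automatic handling from human review.
AUTO_ACCEPT = 0.90
AUTO_REJECT = 0.40

def route(detection: Detection) -> str:
    """Decide what happens to a suspected piece of PII based on confidence."""
    if detection.confidence >= AUTO_ACCEPT:
        return "redact"        # confident hit: handle automatically
    if detection.confidence <= AUTO_REJECT:
        return "ignore"        # likely a false positive
    return "human_review"      # the uncertain middle goes to a person

print(route(Detection("Matrícula: 12345678", "NATIONAL_ID", 0.62)))  # human_review
```

The middle band is the whole point: it is where our multilingual misses lived, and routing it to a person costs far less than silently leaking an identifier.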
Key actions for reducing risks from automation:
- Validate automated systems with realistic, diverse data.
- Don’t automate everything—design for human intervention where needed.
- Use automation to surface patterns, not hide problems.
Automated tools are helpful assistants, not decision-makers. Keep people in the loop.
5. Align with Regulations Instead of Waiting for Them
Many teams wait for someone to mention GDPR or CCPA before thinking about compliance. That’s usually when the rush begins: looping in legal, searching for shortcuts, or copying what others have done. But by then, you’re reacting, not designing.
In one sprint, the brief said GDPR was a “priority,” yet the team only had surface-level protections—cookie banners and checkboxes. There was no plan for data retention, no self-service controls, and no clear user rights.
We reframed the questions: What rights should the user have here? What could go wrong? That shift led to real design improvements, such as cleaner consent flows, easier data deletion, and terms users could actually understand.
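What does a real plan for data retention look like? At minimum, a declared schedule and a check that a scheduled purge job can run. This sketch is illustrative; the categories and periods are hypothetical and belong in your privacy policy, not just in code:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule, one period per data category.
RETENTION = {
    "analytics_events": timedelta(days=90),
    "uploaded_documents": timedelta(days=365),
}

def is_expired(category: str, created_at: datetime) -> bool:
    """True once a record has outlived its declared retention period."""
    return datetime.now(timezone.utc) - created_at > RETENTION[category]

created = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(is_expired("analytics_events", created))  # True once 90 days have passed
```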
The lesson:
- Regulations are minimum standards, not ultimate goals.
- Don’t just follow rules; understand their purpose.
- Design with future expectations in mind.
Teams that stay ahead of compliance don’t wait for the law to catch up. They build as if users are watching, because they actually are.
Conclusion: Start Where Risk Lives
The biggest compliance failures don’t come from bad people. They come from good teams moving fast, skipping hard questions, and assuming nothing will go wrong.
But compliance isn’t a blocker. It’s a product quality signal. People trust you more when they can see your team has thought through the details.
You don’t need a perfect system to start. Just a mindset shift:
- Catch risks early.
- Design for clarity.
- Encourage people to flag concerns early.
- Use automation responsibly.
- Treat regulation as a design input, not an afterthought.
In a world moving fast with AI, trust isn’t extra. It’s the product.
About the Author
Jeremiah Folorunso is driven by purpose to create a positive influence in every sphere he finds himself in. He currently works as a product designer with over half a decade of experience building digital solutions across multiple sectors. Jeremiah is also a published author and the founder of HelloCreatives, his brainchild aimed at empowering the next generation of creatives through education, storytelling, and mentorship.