Building Trust in AI: A 2025 Retrospective

December 31, 2025

As 2025 comes to a close, I find myself reflecting on a pivotal year for both the tech industry and my own journey as a Google Developer Expert. If 2024 was the year of experimenting with Generative AI, 2025 has undeniably been the year of grounding it—literally and figuratively.

My focus this year shifted from "what can AI do?" to "how can we trust what AI does?" That question became the cornerstone of my talks, my workshops, and the products we built at Oloodi.

The “Chain of Trust” Architecture

Throughout the fall, I had the privilege of touring multiple DevFests (Michigan, Montreal, Saskatoon, Waterloo) to present a concept I call the “Chain of Trust”.

Law, finance, and healthcare demand zero tolerance for hallucinations. When a professional asks an AI assistant about case precedents or compliance, a "creative" answer isn't innovative—it's a liability. The core challenge was transforming models like Gemini from eloquent improvisers into rigorous experts.

The blueprint I shared involves:

  • Engineering Grounded Agents: Using Google’s Agent Development Kit (ADK) and Vertex AI Search to constrain the model to reason exclusively over a private, verified corpus.
  • Hybrid AI Backends: Orchestrating lightweight Cloud Functions for rapid tasks alongside powerful Cloud Run services for complex, multi-step reasoning.
  • Validation Pipelines: Building automated Python layers that fact-check every claim against canonical sources in Firestore before the user ever sees it.
  • Trust-First UI: Designing Flutter interfaces that don’t just show an answer, but asynchronously load and display the proof—the citations and source documents—alongside the text.

This wasn’t just academic theory. It was a battle-tested playbook.

From Theory to Practice: Mentor AI Notaire

The best way to prove a theory is to build a product with it. At Oloodi, we applied this exact “Chain of Trust” architecture to build Mentor AI Notaire.

Quebec’s notarial law is complex, precise, and unforgiving of errors. We partnered with Foisy Gemma Notaires Inc. to create an AI assistant that serves as a senior mentor for legal professionals.

By leveraging the architecture I discussed in my talks, we achieved:

  • Precision: An AI trained exclusively on the Civil Code of Quebec and relevant regulations.
  • Transparency: Every alert and suggestion is linked directly to the specific article of law.
  • Efficiency: A measurable reduction in file verification time—turning hours of work into minutes.
  • Sovereignty: Ensuring data stays in Quebec, compliant with strict deontological standards.

This project validated our belief: in high-stakes fields, “I don’t know” is infinitely better than “I’m confidently wrong.”

Community & Continuous Learning

My last talk of the year was at DevFest Michigan at the MotorCity Casino’s Soundboard. It was a highlight not just because of the venue, but because of the engagement.

One attendee, Maya Malavasi, filled her notebook and challenged me with questions that I couldn’t immediately answer. And honestly? That’s the best part of being a speaker. Engaged minds force you to articulate what you’re still figuring out. That friction is where real learning happens.

[Photo: Speaking at DevFest Michigan about Production-Ready AI Agents and the Chain of Trust]

Huge thanks to the organizers—Dave Koziol, GDG Detroit, and the entire community—for creating spaces where we can move AI forward together, through curiosity and accountability.

Looking Ahead to 2026

As we move into the new year, the bar for AI is higher. Users no longer just want magic; they want reliability. They want systems they can verify.

The journey from “Chatbot” to “Trusted Agent” is just beginning. I’m excited to continue exploring this frontier, building with Google Cloud, and sharing the lessons learned with all of you.

Happy New Year! 🚀