2023 - Present
A process case study showing how I approach AI product design, translating complex ML models into experiences that real users trust and adopt. Drawing from work across Pathfindr (4.5M users, insurance), a major insurance platform (NDA, AI agents), QMW Industries (safety-critical), Emesent (mining/defence), and Strike Analytics (data analytics).

Most product design assumes deterministic systems: press a button, get a predictable result. AI products are probabilistic: outputs vary, confidence levels fluctuate, and the system's behaviour evolves over time. Designing for this requires a fundamentally different approach to trust, transparency, and user control. I've shipped AI products across insurance, heavy industry, healthcare, mining, and data analytics. Every one required solving the same core challenge: making algorithmic decisions feel trustworthy without hiding how they work.
My design process operates at two speeds. The classical process grounds every engagement in rigorous research and validation. The AI-augmented process accelerates each phase using pattern detection, automated testing, and continuous deployment. The key difference is where human judgement is most valuable: in AI product design, the designer's role shifts from creating solutions to curating and validating AI-generated possibilities.
At Pathfindr, I established design guidelines for AI transparency in a financial services context (SOC 2 Type II, ISO 27001). The core principle: users must always understand when they're interacting with AI versus humans, and they must be able to override AI decisions at any point.
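That two-part principle, disclosed provenance plus a guaranteed override path, can be sketched as a minimal message contract. This is an illustrative sketch only; the class and field names are hypothetical, not Pathfindr's actual schema:

```python
from dataclasses import dataclass

# Hypothetical contract encoding the two guidelines: the source of every
# decision is always disclosed, and every AI decision carries an explicit
# override path. All names here are invented for illustration.
@dataclass(frozen=True)
class AssistantDecision:
    summary: str           # what the system is recommending
    source: str            # "ai" or "human" -- never ambiguous to the user
    override_action: str   # e.g. "talk_to_agent", "edit_manually"

    def __post_init__(self):
        # Refuse to construct a decision whose provenance is undisclosed.
        if self.source not in ("ai", "human"):
            raise ValueError("source must be disclosed as 'ai' or 'human'")
```

Making the override path a required field, rather than an optional UI affordance, means no screen can ship an AI decision the user cannot reject.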
For Pathfindr's insurance marketplace serving 4.5M+ users, I created frameworks for ML-driven personalisation across 6 distinct customer mindsets, from 'show me everything' to 'just give me the best deal.' Each mindset required different information density, different trust signals, and different decision support. The challenge was building a single system flexible enough to adapt in real time while maintaining consistency in regulated financial services.
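One way to make such a framework concrete is a per-mindset configuration table the rendering layer reads from. The sketch below is hypothetical: the mindset names, config fields, and fallback rule are illustrative, not the production taxonomy:

```python
from dataclasses import dataclass

# Hypothetical per-mindset UI configuration. Two of the six mindsets are
# shown; names and fields are invented for illustration.
@dataclass(frozen=True)
class MindsetConfig:
    info_density: str        # "full" | "summary" | "minimal"
    trust_signals: tuple     # which proof points to surface
    decision_support: str    # how much guidance the UI offers

MINDSET_CONFIGS = {
    "show_me_everything": MindsetConfig(
        info_density="full",
        trust_signals=("coverage_detail", "provider_ratings", "methodology"),
        decision_support="comparison_table",
    ),
    "just_the_best_deal": MindsetConfig(
        info_density="minimal",
        trust_signals=("price_guarantee", "recommendation_rationale"),
        decision_support="one_click_recommendation",
    ),
}

def config_for(mindset: str) -> MindsetConfig:
    # When the classifier is unsure, fall back to the highest-transparency
    # view -- a conservative default for a regulated context.
    return MINDSET_CONFIGS.get(mindset, MINDSET_CONFIGS["show_me_everything"])
```

Keeping the mindset logic in data rather than scattered conditionals is one way to let the experience adapt per user while the underlying components stay consistent.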
The most critical design pattern in AI products is the handoff threshold: when should AI step back for a human agent? For a major insurance comparison platform (NDA), I built scalable design patterns for this across insurance, energy, and financial comparison workflows. The AI handles routine comparisons and recommendations, but needs to recognise emotional signals, edge cases, and regulatory requirements that demand human judgement. Designing these thresholds requires understanding both the AI's capabilities and its failure modes.
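The shape of such a threshold can be sketched as a small decision function. The signal names and cut-off values below are illustrative assumptions, not the platform's real parameters:

```python
# Hypothetical handoff-threshold check. Signals and the 0.75 / 0.9
# cut-offs are invented for illustration.
def should_hand_off(
    model_confidence: float,       # 0.0-1.0 score from the recommendation model
    distress_detected: bool,       # e.g. sentiment analysis of the user's messages
    is_edge_case: bool,            # query falls outside trained scenarios
    needs_regulated_advice: bool,  # contexts where only a human may advise
) -> bool:
    # Regulatory requirements and emotional signals always win:
    # no confidence score overrides them.
    if needs_regulated_advice or distress_detected:
        return True
    # Edge cases hand off at a stricter threshold than routine queries.
    threshold = 0.9 if is_edge_case else 0.75
    return model_confidence < threshold
```

The ordering matters: hard constraints (regulation, distress) are checked before any confidence maths, so a highly confident model can never talk its way past them.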
At QMW Industries (heavy industry, ISO 9001:2015), I'm architecting an AI decision-support system for safety-critical operational decisions. The design constraints are completely different from consumer AI: every recommendation carries physical safety implications. The system must look up, transform, and inform on operational decisions under strict compliance requirements. Zero tolerance for ambiguity in the interface.
I rapid-prototype AI experiences using Python and React to test feasibility before committing engineering resources. At Pathfindr, this meant building functional prototypes that simulated ML-driven personalisation using real data, allowing us to validate the experience design before the models were production-ready. At Emesent, I'm building an AI customer insights system that synthesises 3,000+ data points, designing the queryable interface alongside the underlying data architecture.
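The trick that makes this kind of prototyping work is a deterministic stub that stands in for the production model, so the experience can be user-tested repeatably before the model exists. A minimal sketch, with invented field names and a simple heuristic in place of any real ranker:

```python
import random

# Hypothetical prototype stub standing in for a production ML ranker.
# The heuristic and field names are invented for illustration.
def stub_personalised_ranking(products, user_profile, seed=0):
    """Rank products with a seeded heuristic instead of a real model."""
    rng = random.Random(seed)  # fixed seed -> repeatable test sessions

    def score(product):
        # Cheaper products score higher for price-sensitive users ...
        sensitivity = user_profile.get("price_sensitivity", 0.5)
        base = sensitivity * (1.0 / product["price"])
        # ... plus a little seeded noise to mimic model variance.
        return base + rng.uniform(0.0, 0.001)

    return sorted(products, key=score, reverse=True)
```

Because the seed is fixed, every participant in a usability session sees the same "personalised" ordering, which keeps findings comparable across sessions while the interface is still being validated.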
The common thread across all these projects: AI products that ship to production in regulated industries, not prototypes that sit in a demo.
The biggest misconception in AI product design is that the hard part is the algorithm. It's not. The hard part is designing the trust layer: helping users calibrate their confidence in AI outputs, providing meaningful control without overwhelming them, and maintaining transparency in systems that are inherently probabilistic. The companies that succeed with AI products are those that treat the human experience layer with the same rigour as the model architecture.