
Over the past year, I’ve integrated AI and automation into multiple areas of my design practice, from prototyping to internal tooling to domain research. The results have been genuinely transformative: we’ve saved weeks of time, accelerated discovery cycles, and removed frustrating bottlenecks for multiple teams.
But here’s the reality: AI is still not plug-and-play. It doesn’t think for you. It doesn’t design for you. And it definitely doesn’t replace the messy, contextual, deeply human parts of product work.
That’s why I’ve developed a set of working principles: the boundaries and beliefs that guide how I approach AI in my day-to-day role as a product designer and leader.
Principle 1: AI is a tool — not a replacement
Treat it like any other piece of software. If you wouldn’t rely on a design system to generate your entire roadmap, don’t expect AI to do your job either. Tools like Lovable, n8n, and GPT are powerful, but only when used with clear intent and well-defined constraints.
I’ve seen teams burn hours chasing AI-generated outputs that looked impressive but had no technical feasibility or alignment with business goals. If you don’t stay in the driver’s seat, the tool will take you somewhere, just probably not where you need to go.
Principle 2: Accelerate the early stages, don’t skip the rest
AI is brilliant for ideation, prototyping, discovery, and internal process improvements. But once you move into validation, delivery, and production, it still needs to be backed by human judgment and rigour.
For example, a Lovable prototype might feel real to a client, but without engineering input, feasibility checks, and accessible interaction design, it’s a dead end. Similarly, an LLM might explain a policy perfectly, but if that policy isn’t followed in practice, you’re designing a fiction.
I’ve learned to frame AI outputs as conversation starters, not conclusions.
Principle 3: Always pair with traditional workflows
The best results come when AI complements established workflows, not when it tries to replace them. That means:
Prototypes still need usability testing
Generated research still needs validation
Automated outputs still need review and QA
I use AI to compress the boring parts, not cut out the essential ones. It removes grunt work so we can do the real work better.
Principle 4: Be watertight on data protection
Especially in sectors like healthcare, where data sensitivity is paramount, you can’t afford to guess. I’ve spent time getting very familiar with DPIA requirements, data residency, and third-party processing agreements. Every tool I use is assessed through that lens, especially LLMs, OCR agents, and automation platforms.
In my current role, all AI-powered research, automation, and content generation uses private, non-training models. Anything involving even low-sensitivity data is isolated from public or vendor-hosted models unless explicitly cleared. If you’re working with regulated clients, this isn’t optional; it’s table stakes.
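That clearance rule can be made mechanical. Here's a minimal Python sketch of the idea, assuming a hypothetical internal tool registry — the tool names, attributes, and sensitivity labels are illustrative, not a real API or policy document:

```python
# Hypothetical sketch: gate AI tool usage by data sensitivity.
# All names and attributes here are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Tool:
    name: str
    private_model: bool      # runs on a private, non-training model
    dpa_signed: bool         # data processing agreement in place
    data_residency_ok: bool  # data stays in the approved region


def cleared_for(tool: Tool, sensitivity: str) -> bool:
    """Return True if the tool may process data at this sensitivity level.

    Mirrors the policy above: anything beyond public material must stay
    on a private, non-training model with a signed DPA and approved
    data residency.
    """
    if sensitivity == "public":
        return True
    return tool.private_model and tool.dpa_signed and tool.data_residency_ok


# Usage: a vendor-hosted model is fine for public material only.
vendor_llm = Tool("vendor-llm", private_model=False,
                  dpa_signed=True, data_residency_ok=True)
private_llm = Tool("internal-llm", private_model=True,
                   dpa_signed=True, data_residency_ok=True)

assert cleared_for(vendor_llm, "public")
assert not cleared_for(vendor_llm, "low")
assert cleared_for(private_llm, "low")
```

Even as a sketch, encoding the rule this way forces the uncomfortable question early: which of your tools would actually pass?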
Principle 5: Stay skeptical and stay in the loop
AI is evolving fast, but it's not infallible. The outputs are only as good as the prompts, data, and intent behind them. Even in 2025, AI still:
Hallucinates facts
Generates unusable or inaccessible code
Reinforces dominant patterns or biases
Creates an illusion of accuracy when it's just surface-level confidence
I’ve built a habit of manual review, contradiction-spotting, and assumption-checking. It’s slower than “just trusting the output,” but it’s what keeps quality high and surprises low.
Closing Thought: AI as a Force Multiplier
When used responsibly, AI isn’t just a time-saver, it’s a capability unlock. It gives you:
More time to think
Better starting points for collaboration
A faster path to clarity in chaotic systems
But like any tool, it reflects the person using it. If you’re vague, you’ll get noise. If you’re intentional, you’ll get leverage.
AI should feel like a power tool, something you reach for when the job calls for speed and scale, but only if you also know what you’re building, who it’s for, and what constraints you’re working within.
I’m not here to chase hype. I’m here to evolve my practice. And AI (used well) is helping me do just that.