
If you’ve ever designed for a heavily regulated industry (healthcare, finance, government, etc.), you know that one of the hardest parts isn’t building the product. It’s understanding the system around the product: the context, policies, legacy structures, acronyms, and cultural inertia that shape how people work.
In my current role, I work on a B2B platform that helps NHS trusts manage workforce planning and compliance, a space packed with nuance, history, and local variation. Each trust does things slightly differently. Regulations are constantly shifting. Team structures can be decades old. And much of the knowledge that guides day-to-day practice? It’s undocumented. Or worse, shared informally and lost when someone leaves.
This makes product discovery unusually fragile. If you miss context, you misdiagnose the problem. If you ask the wrong question, you get surface-level answers. And when internal SMEs (subject-matter experts) leave, it can feel like starting from scratch.
To close these gaps, I’ve started using LLMs, not as a replacement for discovery, but as a powerful jump-start for domain immersion.
How I Use LLMs in Discovery
When I’m kicking off work in a new area (say, pay banding rules, prospective cover logic, or rota compliance) I now default to a few simple prompts in a dedicated GPT-based workspace.
I’ll ask:
“Explain the concept of prospective cover in NHS junior doctor contracts.”
“Summarise the structure of a medical workforce department in a UK acute trust.”
“What are the implications of the EWTD (European Working Time Directive) for clinical rota planning?”
The output is rarely perfect. But it gives me a structured, jargon-free primer in seconds, which previously would have taken hours of reading through outdated PDFs or chasing down SMEs who were already overloaded.
It’s like having a personal research assistant who works 24/7 and can translate dense policy into plain English. It’s not deep discovery, but it’s a remarkably effective on-ramp to start the real work.
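For illustration, here’s roughly what that jump-start looks like as a script, assuming the OpenAI Python SDK; the model name is a placeholder, and a dedicated chat workspace works just as well:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A reusable "domain primer" framing keeps answers structured and jargon-free.
SYSTEM = (
    "You are briefing a product designer who is new to the NHS workforce domain. "
    "Explain concepts in plain English, define every acronym on first use, "
    "and flag anything that commonly varies between individual trusts."
)

def primer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever your workspace provides
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(primer("Explain the concept of prospective cover in NHS junior doctor contracts."))
```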
The Hidden Benefit: Sharpening My Questions
The real value, though, isn’t just in the information itself. It’s in the quality of questions I can now bring to stakeholders.
Because I’m not spending all my time trying to understand the basic terms or structural logic, I can start conversations at a higher level. That builds confidence, both for me and for clients. They see that I get the context. I’m not parachuting in as a clueless outsider.
This has directly improved the speed and quality of our early-stage work:
Workshops get to nuance faster
Discovery calls go deeper
Clients feel heard, because we’re not wasting time catching up
But It’s Not Always Real Life
There is one big caveat: LLMs often describe how things should work, not how they actually do.
They tend to pull from documentation, policy, and “gold-standard” processes, not from the messy, lived reality of workarounds, broken systems, and legacy constraints. So if you rely solely on what they give you, you risk designing for an ideal world that doesn’t exist.
For example, a model might explain how a locum booking workflow should route through approvals and checks, but in real life, it’s a sticky note on someone’s monitor and a WhatsApp thread at 3am.
This is why I treat every AI-powered insight as a starting point, not a truth. It helps me frame assumptions, draft hypothesis maps, and build early journey flows, but those always get tested, challenged, and rewritten through actual discovery.
A Recent Use Case
We recently kicked off a redesign involving complex NHS contract rules. I used an LLM to map out the formal processes involved, then layered on input from internal SMEs and client interviews. By combining both, we quickly uncovered gaps between what the regulations said and what staff actually did to stay compliant.
This led to a design solution that balanced legal accuracy with real-world flexibility, something we wouldn’t have found if we’d only relied on legacy documentation or internal assumptions.
Best Practices for LLMs in Product Discovery
Here’s how I use them responsibly:
Start with structured prompts. The clearer your question, the better the output. Avoid vague prompts like “explain this.” (There’s a template sketch after this list.)
Always validate. Treat output as a guide, not gospel. Cross-check with SMEs or users before using anything as a foundation.
Look for contradictions. If something seems too neat, it probably is. Dig deeper.
Use them to write better research plans. Once I have a high-level understanding, I use LLMs to draft discussion guides or surveys, which I then refine with the team.
Keep domain-specific work in isolated tools. For NHS and sensitive projects, I use private, non-training models to ensure confidentiality and compliance, as in the sketch below.
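To make the first and last points concrete, here’s a minimal sketch: a structured prompt template sent to a private, OpenAI-compatible endpoint. The endpoint URL, model name, and template wording are all illustrative assumptions; substitute whatever your organisation has approved.

```python
# pip install openai  (the SDK works against any OpenAI-compatible endpoint)
from openai import OpenAI

# Assumption: a self-hosted model served locally, so prompt text never
# leaves your infrastructure and nothing is used for training.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. Ollama's OpenAI-compatible API
    api_key="not-needed-locally",          # placeholder; local servers often ignore it
)

# A structured template beats a vague "explain this": it pins down role,
# audience, output format, and the validation step you actually need.
TEMPLATE = """You are an expert in {domain}.
Explain {concept} to a product designer who is new to the area.
Structure your answer as:
1. A one-paragraph plain-English summary
2. Key rules or constraints, naming the governing policy
3. Where real-world practice commonly deviates from policy
4. Three questions I should ask subject-matter experts to validate this"""

response = client.chat.completions.create(
    model="llama3",  # assumption: whichever private model you run
    messages=[{
        "role": "user",
        "content": TEMPLATE.format(
            domain="NHS medical workforce planning",
            concept="prospective cover in junior doctor rotas",
        ),
    }],
)
print(response.choices[0].message.content)
```

Note that the template deliberately asks where practice deviates from policy and what to validate with SMEs, which bakes the “always validate” and “look for contradictions” habits into the prompt itself.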
Final Takeaway
In regulated, complex industries, product design is often as much about context as craft. LLMs won’t give you perfect answers, but they will help you ask smarter questions, faster.
And when you're working in messy systems with high stakes, sometimes that's the difference between guessing at a solution and building the right thing the first time.