
Designer collaborates with holographic AI on wireframes in a modern studio; sticky notes and arrows indicate an iterative, human-centred process.
AI doesn’t replace design; it expands it. This article shares a practical, human-centred loop for product work—Ignite, Explore, Define, Shape, Implement, Reflect—showing exactly where AI helps and where judgement must lead. You’ll get copy-and-paste prompts with acceptance criteria, advice on testing and accessibility, and a tidy file structure so your work stays portable (Markdown, .fig, code). The goal isn’t output for output’s sake; it’s a working outcome that people can actually use—and a process you can teach, repeat, and improve.

How I Design Real Products With AI (Without Losing the Plot)

A practical, human-centred loop for product design with AI: clear stages, testable prompts, accessibility baked in, and a portable workflow that consistently produces working outcomes.
 

TL;DR

  • Treat AI as a thinking partner, not an autopilot.
  • Work in a loop: Ignite → Explore → Define → Shape → Implement → Reflect.
  • Maintain a living Context Brief and save everything in portable formats (.md, .txt, .png, code).
  • Get GPT to write your prompts with acceptance criteria and explicit output formats.
  • Success = a working outcome that meets user needs, accessibility standards, and clear UX metrics.

 

Why this approach?

Most “AI workflows” over-automate and under-think. They skip the human judgement that makes products useful, ethical, and accessible. This method preserves craft and care while using AI to expand options, clarify decisions, and accelerate tangible progress.

Use it for product features, service flows, education tools, healthcare pathways — anywhere human needs and digital experiences meet.

 

The Human–AI Design Loop (product design edition)

  1. Ignite — capture ideas, set intent
  2. Explore — research people, context, constraints
  3. Define — distill insights into direction and guardrails
  4. Shape — prototype experiences you can test
  5. Implement — collaborate with coding tools to make it real
  6. Reflect — document what worked and what to change

Then repeat & iterate. It’s a loop, not a conveyor belt.

 

1) IGNITE — Start where the energy is

Purpose: Turn messy ideas into momentum and direction.

Do this

  • Create a project page in Notion/Obsidian.
  • Brain-dump: notes, voice memos, screenshots, links, stakeholder quotes.
  • Ask GPT-5 to transform fragments into problem frames and research questions.

 

AI prompt (copy/paste)

ROLE: Thinking partner
CONTEXT: [paste raw notes]
OBJECTIVE: Propose 3 distinct problem frames + 10 questions worth asking
CONSTRAINTS: Human-centred; avoid premature solutions; highlight unknowns
FORMAT: bullet list + one short paragraph on what to investigate first

The golden trigger prompt

“Refactor this prompt using current best practice for [purpose]. Identify gaps, add acceptance criteria and an explicit output format, then ask me any clarifying questions.”

I do this every single time. The output shows you how the prompt should actually be written, and it's remarkably effective. (See Prompt Anatomy at the end.)
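
For example, one way to apply it to the Ignite prompt above (the bracketed purpose is filled in purely as an illustration):

Refactor this prompt using current best practice for early-stage problem framing. Identify gaps, add acceptance criteria and an explicit output format, then ask me any clarifying questions.

[paste the Ignite prompt here]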

 

Save it

  • Export .md → /project/01_docs/ignite-notes.md
  • Keep raw assets in /project/_inbox/

 

Outcome → Draft interviews plan, desk-research plan, early success signals.

Hot tip: I also save the outcomes as Notion pages for documentation.

 

2) EXPLORE — Look wider than your assumptions

Purpose: Understand people, environments, and constraints.

Do this

  • Use Perplexity/Elicit for desk research: competitive analysis, definitions, standards, and risks.
  • If you can, run 3–5 quick interviews or stakeholder chats.
  • Ask GPT-5 to cluster themes, contradictions, and open questions.

 

AI prompt

ROLE: Research synthesiser
CONTEXT: [interview notes + desk research]
TASK: Cluster themes; surface contradictions; trace each theme to evidence
FORMAT: table → theme | quote/evidence | risk/assumption | next question
ACCEPTANCE: 5 themes; evidence linked; actionable next questions
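
To show the shape of that table, here is one hypothetical row (the content is purely illustrative, not findings from any project):

Theme: onboarding drop-off | Quote/evidence: "I gave up at the ID check" (interview 3) | Risk/assumption: we assume users have documents to hand | Next question: how many people abandon at verification, and why?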

 

Save it

  • /project/02_explore/research-summary.md
  • Sources (PDFs, links) → /project/02_explore/sources/

 

Outcome → Signals that the problem matters (or doesn’t), plus informed lines of enquiry.

 

3) DEFINE — Choose a direction on purpose

Purpose: Move from “interesting” to intentional with product-ready clarity.

Do this

  • Draft a one-page Context Brief: purpose, users, constraints, design principles.
  • Write 3 How Might We questions and 1–2 testable hypotheses.
  • Decide guardrails (accessibility, privacy, safety, data minimisation).

 

AI prompt

ROLE: Strategy partner
CONTEXT: [research-summary.md]
OBJECTIVE: Draft a 1-page Context Brief + 3 HMW + 2 hypotheses with metrics
CONSTRAINTS: Human-centred, privacy-first, WCAG focus
FORMAT:
- Context Brief (Purpose, People, Constraints, Principles)
- HMW (3)
- Hypotheses: statement | leading metric | guardrail | stop criteria
ACCEPTANCE: Clear, testable, decision-ready
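
An illustrative hypothesis in that format (the wording and thresholds are placeholders, not recommendations from this article):

Statement: Reducing onboarding to three steps increases completion | Leading metric: onboarding completion rate | Guardrail: no rise in support contacts or WCAG failures | Stop criteria: no measurable lift after two weeks with a representative sample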

 

Save it

  • /project/03_define/context-brief.md (this travels through the whole project)

 

Outcome → A shared north star and evaluation criteria for design decisions.

 

4) SHAPE — Make it tangible, fast

Purpose: Create prototypes that expose value, friction, and risk early.

Do this

  • Map the flow (wireflows) in FigJam/Miro.
  • Ask GPT-5 for interface copy (plain-language), edge cases, empty/error states.
  • Ask GPT-5 to write a prompt for Figma Make to create an interactive prototype & flow
  • Test with 3–5 users or run a quick remote test (Maze/Useberry).

 

AI prompt (Figma Make)

ROLE: Product designer
CONTEXT: [context-brief.md + flow outline]
TASK: Create an interactive [landing page/onboarding/feature] with states
CONSTRAINTS: Responsive; WCAG 2.2 AA; plain-language; performance-conscious
FORMAT: sections, components, interactions, a11y notes
ACCEPTANCE: Clickable prototype; clear narrative; exportable design code

Save it

  • Figma file; export preview .png → /project/04_shape/previews/
  • Figma Make code: export .zip → /project/ (for Cursor)
  • Usability notes → /project/04_shape/usability-notes.md

Outcome → Evidence of what to keep, change, or remove before investing in build.

 

5) IMPLEMENT — Collaborate to make it real

Purpose: Translate the design into a reliable, accessible, measurable product increment.

Do this

  • Pair with engineering. Move the Figma Make design code/specs into Cursor (or your IDE).
  • Also import the contextual files from the /project folders (Context Brief, research summary, usability notes).
  • Align on design tokens, component mapping, states, and acceptance criteria.
  • Add analytics events, tests, and accessibility checks (NVDA/VoiceOver, Lighthouse).
  • Plan a safe rollout (feature flag, small cohort, reversible change); a small instrumentation sketch follows this list.
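
To make the analytics and rollout bullets concrete, here is a minimal sketch using posthog-js (one of the tools listed later); the flag key, event name, and env-var name are illustrative assumptions, not part of any specific project.

// analytics.ts — a minimal sketch; flag key, event name, and env var are illustrative
import posthog from "posthog-js";

// Initialise once on the client with a public project key (never a secret).
export function initAnalytics(): void {
  posthog.init(process.env.NEXT_PUBLIC_POSTHOG_KEY ?? "", {
    api_host: "https://app.posthog.com",
  });
}

// Gate the new experience behind a flag so the rollout stays reversible.
export function shouldShowNewOnboarding(): boolean {
  return posthog.isFeatureEnabled("new-onboarding") ?? false;
}

// Instrument the events named in your acceptance criteria.
export function trackOnboardingStarted(variant: "control" | "new"): void {
  posthog.capture("onboarding_started", { variant });
}

Pairing the flag check with a small cohort in your flag tool keeps the "reversible change" promise honest: turning the flag off is the rollback.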

 

AI prompt (Cursor, Next.js example)

ROLE: Full-stack engineer
CONTEXT: Figma code/specs (paste), context-brief.md, research summary
GOAL: Productionise the component/page with routing, state, a11y, tests, metrics
CONSTRAINTS: Next.js + TypeScript; WCAG 2.2 AA; no secrets client-side
TASKS: Refactor; add empty/error/success states; write unit/e2e tests; instrument analytics
ACCEPTANCE: Build passes; lints clean; a11y checks pass; README + CHANGELOG updated
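
As a sense check on the "empty/error/success states" task, here is a minimal sketch of what explicit UI states with basic accessibility might look like; the component name, endpoint, and copy are placeholders, not the article's actual product.

// ResultsPanel.tsx — a minimal sketch of explicit UI states with basic a11y
import { useEffect, useState } from "react";

type Item = { id: string; label: string };

type State =
  | { status: "loading" }
  | { status: "empty" }
  | { status: "error"; message: string }
  | { status: "success"; items: Item[] };

export function ResultsPanel({ query }: { query: string }) {
  const [state, setState] = useState<State>({ status: "loading" });

  useEffect(() => {
    let cancelled = false;
    setState({ status: "loading" });
    fetch(`/api/results?q=${encodeURIComponent(query)}`) // hypothetical endpoint
      .then((res) => {
        if (!res.ok) throw new Error(`Request failed (${res.status})`);
        return res.json() as Promise<Item[]>;
      })
      .then((items) => {
        if (cancelled) return;
        setState(items.length ? { status: "success", items } : { status: "empty" });
      })
      .catch((err: Error) => {
        if (!cancelled) setState({ status: "error", message: err.message });
      });
    return () => {
      cancelled = true;
    };
  }, [query]);

  // aria-live lets screen readers announce state changes without stealing focus.
  if (state.status === "loading") return <p aria-live="polite">Loading results…</p>;
  if (state.status === "empty") return <p aria-live="polite">No results yet. Try a broader search.</p>;
  if (state.status === "error") return <p role="alert">Something went wrong: {state.message}</p>;

  return (
    <ul>
      {state.items.map((item) => (
        <li key={item.id}>{item.label}</li>
      ))}
    </ul>
  );
}

Modelling the states as a discriminated union makes it hard to ship a screen that forgets one of them, which is exactly what the acceptance criteria are checking for.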

Save it

  • Code and tests live in Git; implementation notes → /project/05_implement/

Outcome → A working, test-covered experience that matches the intent and guardrails.

 

Production automation with Cursor + GitHub (PRs, previews, and safe OAuth)

When you’re ready to operationalise the build, give Cursor a single brief that sets up end-to-end automation:

  • Initialise the repo, add the GitHub remote, and scaffold .github/workflows so any feature/* branch triggers CI (install, lint, type-check, test, a11y/Lighthouse) and opens a pull request with a comment that links to a preview.
  • Connect your host (e.g. Vercel or Netlify) to the repo so each PR gets an ephemeral preview URL automatically.
  • Never commit secrets; ship a .env.example, ignore real env files, and load OAuth credentials strictly via environment variables (OAUTH_CLIENT_ID, OAUTH_CLIENT_SECRET, etc.) stored in GitHub Environment Secrets (Preview/Production) and mirrored in the host’s env settings — prefer OIDC over long-lived deploy tokens.
  • Protect main with required checks and at least one review; add a PR template with acceptance criteria and a CODEOWNERS file for auto-reviewers; enable secret scanning. On merge to main, run a deploy-prod.yml workflow that promotes the build, runs any migrations, purges caches/CDN, tags a release, and writes a short changelog artefact.
  • Keep OAuth keys server-side only and document rotations in /docs/ops.md (a minimal server-side handler sketch follows this list). With this in place, your online product updates safely, predictably, and traceably with every approved change.
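
To illustrate the server-side-only rule, here is a minimal sketch of a Next.js App Router route handler that reads the OAuth credentials from environment variables during the token exchange; the provider token URL, redirect target, and session handling are placeholders, not any specific provider's API.

// app/api/auth/callback/route.ts — minimal sketch; provider URL and redirect are placeholders
export async function GET(request: Request) {
  const code = new URL(request.url).searchParams.get("code");
  if (!code) {
    return new Response("Missing authorisation code", { status: 400 });
  }

  // Secrets come from the server environment (GitHub/host env settings), never the client bundle.
  const clientId = process.env.OAUTH_CLIENT_ID;
  const clientSecret = process.env.OAUTH_CLIENT_SECRET;
  if (!clientId || !clientSecret) {
    return new Response("OAuth environment variables are not configured", { status: 500 });
  }

  // Exchange the code for tokens server-side; the token URL below is a placeholder.
  const tokenResponse = await fetch("https://provider.example/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });

  if (!tokenResponse.ok) {
    return new Response("Token exchange failed", { status: 502 });
  }

  // In a real app you would set a secure, httpOnly session cookie here; tokens never reach client JS.
  return Response.redirect(new URL("/", request.url), 303);
}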

 

6) REFLECT — Lock in the learning

Purpose: Turn this project into momentum and capability.

Do this

  • Write a 1-page retrospective: what worked, what failed, what to repeat.
  • Record a 3–5 minute Loom walking through key decisions.
  • Extract reusable patterns for your design system or knowledge base.

 

AI prompt

ROLE: Post-project analyst
CONTEXT: [all md docs + key decisions]
TASK: Write a 1-page retrospective + list 5 reusable patterns with links
FORMAT: summary; wins; misses; decisions; patterns; next bets
ACCEPTANCE: Clear lessons + specific actions for future work

Save it

  • /project/06_reflect/retrospective.md + Loom link
  • Patterns → /library/patterns/ (cross-project)

 

Outcome → Better instincts, faster starts, and stronger team alignment next time.

 

What to save (and where)

  • Portable by default: Markdown (.md) for docs and context, .png for previews, code in Cursor & Git.
  • Living Context Brief: update every phase; treat it as the source of truth.
  • Folder skeleton
/project-name
  /01_docs
  /02_explore
  /03_define
  /04_shape
  /05_implement
  /06_reflect
  /_inbox
  • Link everything: sources, dates, decisions. Traceability beats memory.

 

Tools that play nicely (swap as needed)

  • Notes/KB: Notion or Obsidian → export .md
  • Research: Perplexity, Elicit, Google NotebookLM
  • Mapping: FigJam/Miro, Whimsical
  • Design: Figma (+ Figma Make), Framer
  • Testing: Maze/Useberry, NVDA/VoiceOver, Lighthouse/PA11y
  • Implementation: Cursor, GitHub Copilot, Next.js/TypeScript, Vercel
  • Metrics/experimentation: PostHog/Amplitude, GrowthBook/Statsig
  • Ops: GitHub/Linear, Sentry/Datadog, Loom

 

Prompt Anatomy (and a “golden trigger”)

Use this scaffold

ROLE:
CONTEXT:
OBJECTIVE:
CONSTRAINTS:
TASK:
FORMAT:
ACCEPTANCE:

 

Common traps (and how to avoid them)

  • Pretty prototypes, weak reasoning → Keep the Context Brief fresh; trace decisions to evidence.
  • Over-automation → Insert “Human Insight” checkpoints between stages.
  • Excluding edge cases → Ask AI to enumerate risks; run a11y/privacy critiques.
  • No definition of done → Bake acceptance criteria into prompts and pull requests.

 

Why this works

It keeps the human at the centre (taste, judgement, ethics) and lets AI handle the heavy lifts (expansion, drafting, critique). The loop is simple: expand, focus, make, learn — and because you save the trail, future work starts clearer and faster.

If you adapt this, I’d love to hear what changed for you — and what you’d keep, cut, or remix for your context.