21.3 Image-based UX critique prompts

Goal: turn screenshots into actionable UX improvements

A screenshot can trigger a flood of opinions. You want something better: a prioritized list of UX issues with suggested fixes and a way to validate each fix.

Use the model as a fast heuristic reviewer. Treat its feedback as hypotheses, not truth.

You want “engineering-grade” critique

The best critique is specific and testable: “The primary CTA doesn’t read as primary because contrast is low; raise contrast and re-check on mobile.”

Set the critique up for success (context + constraints)

Give the model enough context to avoid generic advice (a prompt-builder sketch follows the list):

  • Audience: who is this for (new users, admins, power users)?
  • Goal: what should users do here (submit, decide, compare, navigate)?
  • Primary action: what is the main CTA, and what are the secondary actions?
  • Platform: mobile/desktop; accessibility requirements.
  • Constraints: design system, typography scale, spacing rules.
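
If you run these critiques repeatedly, make the context a required input rather than an afterthought. A minimal sketch in Python (the field names are illustrative, not a required schema):

from dataclasses import dataclass

@dataclass
class ScreenContext:
    audience: str      # e.g. "new users on a self-serve trial"
    goal: str          # what the user should accomplish on this screen
    primary_cta: str   # the one action that should dominate
    platform: str      # e.g. "mobile 390x844" or "desktop 1440x900"
    constraints: str   # design-system rules, "no layout refactor", etc.

def context_block(ctx: ScreenContext) -> str:
    # Mirrors the Context section of the copy-paste prompt below.
    return "\n".join([
        "Context:",
        f"- Product/user type: {ctx.audience}",
        f"- User goal on this screen: {ctx.goal}",
        f"- Primary CTA: {ctx.primary_cta}",
        f"- Platform: {ctx.platform}",
        f"- Constraints: {ctx.constraints}",
    ])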

Request outputs that drive decisions

Ask for critique in a structured, prioritized form (a record sketch follows the list):

  • Ranked issues: 5–12 findings, each tied to a specific UI element.
  • Impact: what user failure this causes (confusion, friction, error risk).
  • Effort: small/medium/large (or time estimate if you prefer).
  • Proposed change: one concrete change per finding.
  • Validation method: what to measure or observe to confirm improvement.
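
If you track findings across passes or compare models, pin down the shape of a finding up front. A sketch of one finding as a record (the field names are assumptions, not a standard):

from dataclasses import dataclass
from typing import Literal

@dataclass
class Finding:
    rank: int                                    # 1 = highest priority
    element: str                                 # the specific UI element/region
    impact: str                                  # the user failure it causes
    change: str                                  # exactly one concrete change
    effort: Literal["small", "medium", "large"]
    validation: str                              # metric, test, or observation
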
Force “one change per finding”

This prevents vague “improve spacing” feedback. You can always ask for a second pass later.

Accessibility and inclusive design checks

Screenshots can support a first-pass accessibility audit, especially for obvious problems:

  • Contrast: text vs background, disabled states, focus ring visibility (see the contrast sketch below).
  • Hierarchy: headings, grouping, and scan paths.
  • Affordances: do buttons look clickable? Do links look like links?
  • Error handling: are errors discoverable and actionable?
  • Density: is the page overwhelming at typical viewport sizes?

But screenshots cannot validate semantic structure (ARIA, labels, keyboard navigation). For that you need real DOM inspection and testing.
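
You also don't have to take the model's word on contrast. A quick self-check that implements the WCAG 2.x relative-luminance and contrast-ratio formulas:

def _linearize(channel: int) -> float:
    # sRGB channel (0-255) to linear-light value, per WCAG 2.x.
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Light-gray text on white: ~2.85, well under the 4.5:1 AA minimum for body text.
print(round(contrast_ratio((153, 153, 153), (255, 255, 255)), 2))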

Turn critique into experiments and ship points

For each proposed change, define a ship point:

  • Before/after screenshots: same viewport and state (see the diff sketch below).
  • One success metric: completion rate, error rate, time-to-action, support tickets.
  • One quality bar: “no new layout regressions across breakpoints.”

If you don’t have metrics, use small user testing or internal review as your validation method. The point is: you should know how you’d tell if it got better.
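
For the quality bar, a pixel diff of before/after screenshots is a cheap mechanical check. A sketch, assuming Pillow is installed; the file paths are placeholders:

from PIL import Image, ImageChops

def changed_region(before_path: str, after_path: str):
    # Returns the bounding box of pixels that differ, or None if identical.
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    if before.size != after.size:
        raise ValueError("screenshots must share the same viewport")
    return ImageChops.difference(before, after).getbbox()

bbox = changed_region("before.png", "after.png")
print("pixel-identical" if bbox is None else f"changed region: {bbox}")

A non-empty bounding box outside the region you meant to change is a layout regression worth a look.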

Copy-paste prompts

Prompt: ranked findings with impact/effort/validation

Critique the attached UI screenshot.

Context:
- Product/user type: [who is this for?]
- User goal on this screen: [what should they accomplish?]
- Primary CTA: [what is the main action?]
- Platform: [mobile/desktop], viewport: [width x height]
- Constraints: [design system rules, “no layout refactor”, etc.]

Task:
Return 8–12 findings as a ranked list. For each finding include:
- What you see (specific element/region)
- Why it’s a problem (user impact)
- A concrete suggested change (one change only)
- Effort: small/medium/large
- How to validate the change (metric, test, or observation)

Avoid generic advice. If you’re unsure, ask for missing context.
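
A minimal sketch of sending this prompt with a screenshot attached, assuming the OpenAI Python SDK; the model name is an assumption, so substitute whatever vision-capable model you use:

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

critique_prompt = "..."  # paste the ranked-findings prompt from above

with open("screenshot.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable model works
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": critique_prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)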

Prompt: UX copy pass (short, constrained)

Review the screenshot’s UI copy (headings, button labels, helper text).

Rules:
- Do not invent new features.
- Keep labels short and action-oriented.
- Preserve the existing tone: [tone]

Output:
List only the strings you would change, with "before" and "after" text.
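
If you want this output machine-checkable, add a rule asking for JSON pairs and validate them. A sketch, assuming you amend the Output section to request a JSON array of {"before", "after"} objects; the five-word cap is an arbitrary stand-in for "short":

import json

def load_copy_changes(raw: str) -> list[dict]:
    # Assumes the prompt was amended to request a JSON array of
    # {"before": ..., "after": ...} objects (not in the original prompt).
    changes = json.loads(raw)
    for change in changes:
        assert set(change) == {"before", "after"}, f"unexpected keys: {change}"
        # Mechanical stand-in for "keep labels short": five words, tops.
        assert len(change["after"].split()) <= 5, f"too long: {change['after']!r}"
    return changes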

Anti-patterns

  • “Make it prettier” (you’ll get subjective noise).
  • No context (the model can’t infer your user goal from pixels alone).
  • Letting critique become scope creep (ship a small set of improvements).
