8.1 How to paste errors so the model actually helps


Why “pasting the error” often fails

Beginners often paste a single line of an error message and ask “why is this happening?” That’s rarely enough. The model needs enough context to reconstruct the failing scenario; otherwise it guesses.

Common “low-signal” debug messages:

  • “It doesn’t work.”
  • “I got an error.”
  • “Here’s a screenshot.”
  • “Why is this undefined?” (without code or call site)

These force the model into speculation. You want the opposite: a tight, reproducible case.

Your goal

Give the model enough information that it can (1) mentally reproduce the failure, (2) propose specific hypotheses, and (3) suggest a minimal fix.

The minimal reproducible error (MRE) mindset

An MRE is the smallest set of steps and inputs that reliably reproduces the failure.

In practice, “minimal” means:

  • one command or one test,
  • one input example,
  • the smallest code slice that matters,
  • no extra noise (unrelated logs, unrelated files).

The best debugging prompt starts with “Run this”

If you can’t tell a human how to reproduce the bug in one or two steps, your prompt will likely be ambiguous too.
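As a concrete illustration, an MRE can be a single self-contained script. This hypothetical example (the `average` function and the empty-list input are invented for illustration) boils a bug down to one function and one input:

```python
# repro.py -- a hypothetical minimal reproduction: one function, one input.
def average(values):
    """Return the arithmetic mean of values."""
    return sum(values) / len(values)

# The smallest input that triggers the failure:
try:
    average([])
except ZeroDivisionError as exc:
    print(f"Reproduced: {exc}")
```

Running `python repro.py` reproduces the failure every time, with no unrelated code or logs attached. That one file plus its output is the whole bundle.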

What to include (the high-signal bundle)

When you ask the model to debug, include this bundle:

  • Goal: what you expected to happen.
  • Repro steps: the exact command(s) you ran.
  • Actual result: full stdout/stderr (copy/paste).
  • Environment: language/runtime version, OS, relevant tool versions.
  • Code slice: the smallest relevant file(s) or function(s).
  • Recent change: what changed since it last worked (if known).
  • Constraints: “diff-only, no new deps, preserve behavior except fix.”

Don’t paste the whole repo by default

Large context often decreases accuracy. Start with the failing slice, then add more only if the model asks.
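One way to collect the “Actual result” and “Environment” items from the bundle is to capture them straight from the shell. A minimal sketch, assuming a Unix-like shell and `python3` (substitute your own failing command):

```shell
# Run the failing command once, saving full stdout+stderr for pasting.
# "2>&1" merges stderr into stdout; tee shows the output and saves it.
python3 -c "print(1/0)" 2>&1 | tee actual_output.txt

# Capture the environment next to it (OS and runtime versions).
{ uname -srm; python3 --version; } > environment.txt 2>&1
```

Pasting the contents of these two files, unedited, covers two bundle items with zero paraphrasing.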

How to format the paste (so it stays parseable)

Formatting matters because it preserves structure and reduces misreads.

Use separate code blocks

  • One code block for the command you ran.
  • One code block for stdout/stderr.
  • One code block per file you paste.

This keeps the model from mixing your program’s output with your program’s code.

Label what you paste

Include headings like:

  • Command
  • Expected
  • Actual
  • File: src/foo.py

Keep error output intact

Don’t paraphrase stack traces. Copy/paste them exactly. Paraphrasing destroys the one thing the model can reliably use.

Redaction rules (don’t leak secrets)

Debugging often involves logs, configs, and headers. These can contain secrets.

  • Replace API keys/tokens with REDACTED.
  • Replace emails/PII with placeholders.
  • Don’t paste full request payloads if they contain sensitive user data.
  • Prefer metadata: sizes, counts, and categories over raw content.

Never paste secrets into prompts

Treat prompts as public. Even if you trust the tool, you’re training yourself into a dangerous habit.
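The redaction rules above can be partly automated before you paste. A minimal sketch; the regex patterns below are illustrative assumptions, not a complete secret scanner, so adapt them to your own log formats:

```python
import re

# Illustrative patterns only -- real logs need patterns tuned to them.
SECRET_PATTERNS = [
    # "api_key=...", "token: ...", "Authorization: Bearer ..." -> REDACTED
    (re.compile(r"(?i)(api[_-]?key|token|authorization)\s*[:=]\s*.+"),
     r"\1: REDACTED"),
    # Email addresses -> a neutral placeholder
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "user@example.com"),
]

def redact(text: str) -> str:
    """Replace likely secrets and emails with placeholders, line by line."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

log = "Authorization: Bearer abc123\nrequest failed for alice@corp.com"
print(redact(log))
# Authorization: REDACTED
# request failed for user@example.com
```

Run your paste through a filter like this, then still read it once yourself: automated redaction catches the obvious patterns, not every secret.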

Copy-paste templates

Template: minimal debug request

Goal:
When I run [command], I expect [expected behavior].

Reproduction:
Command:
```sh
[exact command]
```

Actual output (full):
```text
[stdout/stderr]
```

Environment:
- OS: [...]
- Runtime: [...] (e.g., python --version)

Relevant files:
File: [...]
```[lang]
[code]
```

Constraints:
- Provide 3–5 ranked hypotheses
- For each, propose a confirming/denying test
- Then propose a minimal diff-only fix
- Do not add dependencies

Template: failing test report

We have a failing test.

Test command:
```sh
[command]
```

Failing output:
```text
[output]
```

Relevant code:
File: [...]
```[lang]
[code]
```

Task:
Fix the failure with the smallest diff possible.
Add or keep a regression test so the bug can’t silently return.

Where to go next