1. Mental Models for Working With LLMs
Overview and links for this section of the guide.
Why mental models matter
If you treat an LLM like a search engine or a senior engineer on autopilot, you’ll get the worst results: confident-sounding output that drifts, breaks, or hides mistakes. A simple mental model makes your prompting clearer and your verification sharper.
The model is optimized to produce plausible text. Your workflow is what turns plausible output into correct behavior.
How to use this section
Read these pages with an engineering goal: improve the quality of your next iteration loop.
- Skim first: get the vocabulary (tokens, context window, temperature, tools).
- Then apply: use each concept immediately in a small prompt → code → run cycle.
- Collect patterns: keep a short list of “prompts that work” and “failure modes I hit.”
Run tiny experiments. One knob change (temperature, context size, schema constraints) + one verification run = one real lesson.
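As one concrete shape for such an experiment, the sketch below varies a single knob (temperature) and records one observation per setting. `call_model` is a hypothetical placeholder, not a real API; wire it to whichever client or CLI you actually use.

```python
def call_model(prompt: str, temperature: float) -> str:
    """Hypothetical placeholder: replace with a call to your own LLM client or CLI."""
    raise NotImplementedError("connect your model provider here")

PROMPT = "Write a Python function that parses an ISO 8601 date string, plus one test."

# One knob change (temperature), two runs per setting, one observation to write down.
for temperature in (0.0, 1.0):
    first = call_model(PROMPT, temperature=temperature)
    second = call_model(PROMPT, temperature=temperature)
    print(f"temperature={temperature}: repeat runs identical? {first == second}")
```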
Skills you’re building
- Calibration: knowing what the model can do reliably vs what must be verified.
- Instruction control: writing constraints that stick and avoiding contradictory context.
- Context discipline: including only what helps, summarizing state, and staying under budget.
- Determinism control: understanding variability and when you want repeatability.
- Verification habits: treating hallucination as normal and building checks into your loop.
Section map (1.1 → 1.5)
- 1.1 What the model is actually doing
- 1.2 The difference between knowledge, reasoning, and tools
- 1.3 Context windows and why “just add more text” fails
- 1.4 Temperature, randomness, and creativity knobs
- 1.5 Hallucinations: the predictable failure mode
A quick practice loop
Use this loop while reading the section. It keeps learning grounded and avoids “prompt theory.”
- Pick one concept (e.g. temperature, context budgeting).
- Create a tiny task (e.g. “generate a function + tests,” “summarize this text into JSON”).
- Run two variants (change one knob or constraint).
- Verify (tests, lints, runtime output, schema validation).
- Write one sentence: “When I changed X, Y happened; next time I’ll do Z.”
Great writing and clean code style can still be wrong. Treat verification as mandatory, not optional.
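For the “summarize this text into JSON” task above, the verification step can be a few lines of standard-library Python. The sample output below is invented to show how a fluent-looking response can still fail a mechanical check.

```python
import json

def verify_summary(output: str) -> list[str]:
    """Return a list of problems with the model's output; an empty list means it passed."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    if not isinstance(data.get("summary"), str):
        problems.append('missing or non-string "summary"')
    if not isinstance(data.get("keywords"), list):
        problems.append('missing or non-list "keywords"')
    return problems

# Hypothetical model output: reads well, but uses the wrong key name.
model_output = '{"summary": "Explains context windows and temperature.", "keyword": ["llm"]}'
print(verify_summary(model_output))  # ['missing or non-list "keywords"']
```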
Where to go next
- 1.1 What the model is actually doing
- 1.2 The difference between knowledge, reasoning, and tools
- 1.3 Context windows and why “just add more text” fails
- 1.4 Temperature, randomness, and creativity knobs
- 1.5 Hallucinations: the predictable failure mode