5.2 Turn the output into a runnable project

Goal: move from text to code

The model’s output is not a project until it exists as files you can run locally. This page is about converting the generated text into a runnable repo with a repeatable run/test loop.

Why this step matters

Most vibe-coding failures come from staying in “chat world” too long. The moment you can run code, your feedback loop becomes reality-based.

Create the project folder

Create a new folder and a git repo (you’ll want the repo for the “Lock in with Git” step below). Example:

mkdir hello-vibe-calc
cd hello-vibe-calc
git init

If you use Python virtual environments, create one now:

python -m venv .venv
source .venv/bin/activate

Keep dependencies at zero

This project should run with the standard library only. If the generated code requires installing packages, treat that as a prompt bug and fix it.
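
If you want to verify this rather than eyeball it, the short script below walks calc/ and flags any import that is not in the standard library. It is a sketch, not part of the generated project: the file name check_stdlib.py is arbitrary, and it assumes Python 3.10+ for sys.stdlib_module_names.

# check_stdlib.py: sanity check that calc/ imports only the standard library.
# Sketch: arbitrary file name; assumes Python 3.10+ (sys.stdlib_module_names).
import ast
import sys
from pathlib import Path

allowed = set(sys.stdlib_module_names)
offenders = []

for path in Path("calc").rglob("*.py"):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.level == 0:
            names = [node.module] if node.module else []
        else:
            continue  # skip non-import nodes and relative (in-package) imports
        for name in names:
            top = name.split(".")[0]
            if top not in allowed and top != "calc":
                offenders.append(f"{path}: imports {name}")

print("\n".join(offenders) or "standard library only: OK")

Run it with python check_stdlib.py from the project root; any line of output is a dependency to prompt away.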

Create files exactly as generated

Copy the model output into files. Be literal:

  • match file names and folder names exactly,
  • don’t “improve” anything yet,
  • don’t merge files to save time,
  • don’t refactor until it runs.

Quick file-tree check

You should end up with something like:

hello-vibe-calc/
  calc/
    __init__.py
    __main__.py    (may be added if missing)
    cli.py
    parser.py
    eval.py
  tests/
    test_calc.py
  README.md

Run the CLI (SP1)

Try the smallest “it works” command first. Your acceptance criteria from 5.1 should include something like:

python -m calc "2+2"

If python -m calc fails

The most common reason is that the package lacks calc/__main__.py. This is exactly the kind of failure to fix with a tiny follow-up prompt:

Add `calc/__main__.py` so that `python -m calc "2+2"` routes to the existing CLI entrypoint.
Do not change behavior. Show the new file only.

Don’t “fix it manually” if the point is the loop

In vibe coding, you want practice turning concrete failures into precise prompts. Use manual fixes only when they teach you something you’ll reuse.
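
For reference, the file that prompt should produce is only a few lines. The sketch below assumes calc/cli.py exposes a main() function; match whatever entrypoint your generated cli.py actually defines.

# calc/__main__.py: lets `python -m calc "2+2"` reach the existing CLI.
# Sketch: assumes calc/cli.py defines main(); keep your real entrypoint name.
from calc.cli import main

if __name__ == "__main__":
    main()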

Run tests (SP2)

Run the tests using standard library tooling:

python -m unittest -v

If the project is structured as a package, you may also use discovery:

python -m unittest discover -v
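
If the generated tests are thin, a minimal tests/test_calc.py looks roughly like the sketch below. It assumes the evaluator is exposed as calc.eval.evaluate(expression: str) -> float (the API suggested under “Tests call the wrong entrypoint” below) and that invalid input raises ValueError; align both assumptions with your actual code and your 5.1 acceptance criteria.

# tests/test_calc.py: minimal acceptance tests (sketch; rename to match your code).
# Assumes calc.eval.evaluate(expression: str) -> float and ValueError on bad input.
import unittest

from calc.eval import evaluate


class TestEvaluate(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(evaluate("2+2"), 4.0)

    def test_precedence(self):
        self.assertEqual(evaluate("2+3*4"), 14.0)

    def test_invalid_input_fails_in_a_controlled_way(self):
        with self.assertRaises(ValueError):
            evaluate("2+")


if __name__ == "__main__":
    unittest.main()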

What “tests pass” means here

  • the CLI acceptance tests behave as expected,
  • invalid input fails in a controlled way,
  • you can refactor without fear because you can detect regressions.

Common setup failures (and quick fixes)

Import errors

  • Cause: inconsistent module/package names or missing __init__.py.
  • Fix: prompt the model: “Fix imports to match this file tree; do not change behavior; show diff-only.” A consistent import layout is sketched below.
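
For comparison, imports that are consistent with the file tree above look like this. It is a sketch of the top of calc/cli.py only; parse and evaluate are assumed names, so keep whatever your generated modules actually export.

# Top of calc/cli.py: package-qualified imports that match the tree above.
# Sketch: parse and evaluate are assumed names.
from calc.parser import parse
from calc.eval import evaluate

# The equivalent relative form also works inside the package:
# from .parser import parse
# from .eval import evaluate

The pattern to avoid is a bare import parser or import eval at top level, which resolves differently depending on how Python is invoked.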

Tests call the wrong entrypoint

  • Cause: tests assume a function exists (e.g., evaluate(expr)) but code exposes something else.
  • Fix: choose an API and align tests/code to it. Prefer a small, explicit API like calc.eval.evaluate(expression: str) -> float; one way to pin that choice down is sketched below.
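
One lightweight way to keep tests and code aligned is a contract test that fails loudly if the agreed public name drifts. This is a sketch; calc.eval.evaluate is the API assumed above, not something the generated project necessarily exposes yet.

# tests/test_api_contract.py: fail fast if the agreed entrypoint disappears (sketch).
import unittest

import calc.eval


class TestPublicAPI(unittest.TestCase):
    def test_evaluate_is_exposed(self):
        self.assertTrue(callable(getattr(calc.eval, "evaluate", None)))


if __name__ == "__main__":
    unittest.main()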

Behavior mismatch (e.g., whitespace, unary minus)

  • Cause: parser rules are incomplete.
  • Fix: add a test that demonstrates the bug (see the sketch below), then prompt: “Make the smallest change to pass this test.”
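
For example, if unary minus is the missing behavior, the bug-demonstrating test can be as small as the sketch below, using the same assumed evaluate entrypoint as above; the expected values are assumptions, so set them to whatever your 5.1 criteria say.

# tests/test_unary_minus.py: one failing test that pins down the missing behavior.
# Sketch: expected values are assumptions; match your own acceptance criteria.
import unittest

from calc.eval import evaluate


class TestUnaryMinus(unittest.TestCase):
    def test_leading_minus(self):
        self.assertEqual(evaluate("-3+5"), 2.0)

    def test_minus_after_operator(self):
        self.assertEqual(evaluate("2*-3"), -6.0)


if __name__ == "__main__":
    unittest.main()

Once it fails for the right reason, paste the test and the failure output into the follow-up prompt.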

Lock in with Git

Once the CLI runs and tests pass, make a commit. This is a vibe-coding superpower: you can always reset back to a known-good state.

git add .
git commit -m "Hello Vibe: working CLI calculator"

This is a ship point

After this commit, you can refactor aggressively because you have (1) tests and (2) a known-good snapshot.

Where to go next