# Bootstrap prompt — paste into Claude Code on day 1
**What this is.** A single prompt that turns a bare-bones scaffolded repo (with detailed spec docs in `docs/`) into an agent-powered repo by generating `CLAUDE.md`, `MILESTONES.md`, `AGENTS.xml`, and the supporting `docs/` scaffold. Paste it verbatim into a fresh Claude Code session opened at the repo root.
## Prompt to paste

You are the bootstrap agent for this repository. Your job is to generate the control-loop documents (`CLAUDE.md`, `MILESTONES.md`, `AGENTS.xml`) plus the supporting `docs/` scaffold, calibrated to this specific repo. You will not write application code in this session.
You are operating under a strict rule: do not invent invariants, milestones, or subagent roles you cannot ground in either (a) the spec docs in this repo, or (b) explicit answers I give you. If something is ambiguous, surface it as a question. Guessing here corrupts the entire downstream system.
## Phase 1 — Read the ground truth

Before you ask me anything, read these in order and produce a one-paragraph summary of each. Do not skim:
- Every file under `docs/spec/` (or wherever the SRS / schemas / acceptance corpus live — look for `spec`, `requirements`, `srs`, `schema`).
- Every file under `docs/north-star/` if it exists. If it doesn’t, note that — you’ll help me create it.
- The repo root `README.md` if present.
- The scaffold code: top-level directory tree (2 levels deep), `package.json` / `pyproject.toml` / `Cargo.toml` / equivalent, any `Makefile` or `compose.yaml`, and the entrypoint files for each tier (frontend, backend, db migrations).
- Any existing `CLAUDE.md`, `AGENTS.md`, `.cursorrules`, or similar agent-config files.
After reading, produce a ground-truth report with:
- Stack detected: frontend framework, backend language/framework, DB, deploy target, test runner.
- Spec docs found: list of paths with one-line summary each.
- North-star intent (if extractable): the load-bearing problem this repo exists to solve, in your words, with file:line citations to the spec.
- Open ambiguities: every place the spec is unclear, contradicts itself, or assumes context you don’t have. Number them `Q1`, `Q2`, … — I will answer them.
- What’s already built: the scaffold delta vs the spec — what exists as stubs, what exists as working code, what’s missing entirely.
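A ground-truth report might be shaped like this (an illustrative skeleton — the stack, paths, and ambiguities are placeholders, not part of the prompt’s contract):

```markdown
## Ground-truth report

**Stack detected:** SvelteKit frontend · FastAPI backend · Postgres · pytest + Playwright

**Spec docs found:**
- docs/spec/srs.md — functional requirements, 42 numbered items
- docs/spec/schema.sql — canonical DB schema, 9 tables

**North-star intent:** <one paragraph, with file:line citations>

**Open ambiguities:**
- Q1: srs.md §4 says “soft delete” but schema.sql has no deleted_at column — which wins?

**What’s already built:** routes stubbed, migrations empty, no tests.
```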
Stop after this report and wait for me to confirm or correct it. Do not proceed to Phase 2 until I say so.
## Phase 2 — Calibration questions

Once I confirm the ground-truth report, ask me these questions. Do not ask all of them at once — ask in batches of 2–3, wait for answers, then ask the next batch. Adapt later questions based on earlier answers.
1. Anti-shapes. Every project drifts toward one or two anti-patterns. Reading the spec and scaffold, what do you predict are the two most likely drift directions for this repo? (E.g. “treating the API as a CRUD layer when the spec says it’s an event log”; “reaching for ORM joins when the spec demands stream processing”.) Propose two; I’ll confirm, edit, or reject. The accepted ones go in the north-star as named drift watches.
2. Phase decomposition. Based on the spec, propose a 5–8 phase decomposition with rough acceptance gates per phase. Order by dependency, not by excitement. Surface where you’re guessing vs where the spec explicitly orders things.
3. Subagent roster. I want a KAHN-grade fleet — auto-handoffs on triggers. Propose roles. The default starting set is `planner`, `implementer`, `auditor`, `reviewer`. Propose additional roles only if the stack genuinely needs them (e.g. `migrator` for heavy DB work, `frontend-designer` for component-system-heavy projects). For each role, propose 1–3 `<spawns_on>` triggers from this list: `phase.open`, `phase.close`, `milestone.acceptance_check`, `decision_doc.created`, `audit.fail`, `friction.surfaced`, `drift.suspected`, `human.invocation`. Justify each trigger.
4. Invariants. I’ve selected three load-bearing guardrails: acceptance-gated phases, verification rounds, dated decision docs. Propose 3–6 project-specific invariants on top of these (analogous to KAHN’s “I-1: Scope writes only under `.kahn/archive/`”). Each must be grounded in a spec citation. Number them `I-1`, `I-2`, … . If you cannot ground six, propose only as many as you can ground.
5. Build/test/run commands. Read the Makefile, package scripts, and CI config. Produce the canonical command set for: unit tests, integration tests, e2e, lint, typecheck, dev server, production build, deploy. Mark commands that don’t exist yet but the spec implies will exist as `# TODO`.
6. Cross-repo dependencies. Are there sibling repos this depends on (auth service, shared schema, design system)? If yes, get their paths and a one-line description of what this repo expects from each.
7. Direct-push boundary. What’s the size of change that doesn’t deserve a PR? (KAHN: “two-line CSS, single-file bug fix, doc tweak”.) I’ll confirm or set my own threshold.
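A roster entry from question 3 might be shaped like this (an illustrative sketch, not the real schema — the role, paths, and contract wording are hypothetical, and `core/templates/AGENTS.xml.tmpl` is authoritative):

```xml
<subagent name="auditor">
  <spawns_on>milestone.acceptance_check</spawns_on>
  <spawns_on>audit.fail</spawns_on>
  <reads>MILESTONES.md, docs/north-star/intent.md</reads>
  <writes>docs/decisions/</writes>
  <handoff_to>reviewer</handoff_to>
  <output_contract>Pass/fail per acceptance gate, with file:line evidence</output_contract>
  <refusal_conditions>Acceptance gate has no concrete check; spec citation missing</refusal_conditions>
</subagent>
```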
## Phase 3 — Generate the artefacts

Only after Phase 2 is complete, generate these files. Use the templates in `core/templates/` as the structural skeleton, but fill in only what Phase 1 + Phase 2 grounded.
Generate in this exact order — each subsequent file references the previous:
1. `docs/north-star/intent.md` — the load-bearing intent doc. Front-matter must include `outranks: [docs/backlog]` and a `drift_watches:` list with the anti-shapes confirmed in Q1. The body has three sections: What X is for (the user-facing problem), What X is not (the anti-shapes, expanded), Load-bearing constraints (the spec citations that anchor the intent).
2. `docs/north-star/realignment-criteria.md` — the dimensional gate. If the project doesn’t yet need a realignment gate, produce a stub with `status: not-yet-needed` and a comment explaining the trigger that would activate it (e.g. “activate if a phase-close decision doc reports >2 deferred items two phases in a row”).
3. `docs/north-star/README.md` — the outranking rule, the ≤3-doc cap, and the index.
4. `MILESTONES.md` — using the phase decomposition from Q2. Each phase has: status (default `not-started`, except phase 1 which is `next`), entry criteria, subphases with milestone IDs (`M1.1`, `M1.2.1`, etc. — leave room for the `M7.2.1`-style nested numbering KAHN uses for friction carry-overs), acceptance gates with concrete checks, a verification round trigger, and a handoff to the next phase.
5. `AGENTS.xml` — using the roster from Q3. Strict XML that validates against the schema in `core/templates/AGENTS.xml.tmpl`. Each `<subagent>` has `<spawns_on>`, `<reads>`, `<writes>`, `<handoff_to>`, `<output_contract>`, and `<refusal_conditions>` (when this agent must stop and ask the human instead of proceeding).
6. `CLAUDE.md` — the workspace projection. Mirror the structure of the template but only include sections you have content for. Empty sections get omitted, not stubbed. The CLAUDE.md should be ~150–250 lines for a typical full-stack repo; if yours is shorter you’ve under-grounded, if it’s longer you’re padding.
7. `docs/decisions/YYYY-MM-DD-bootstrap-complete.md` — the first decision doc. Records: the ground-truth report summary, my answers to Q1–Q7, what was generated, and what’s still open. Today’s date in ISO format — pull it from the system, don’t guess.
8. `docs/META-RULES.md` — copy from `core/templates/META-RULES.md`, but customise the example invariant references to use this repo’s I-1, I-2, etc.
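The `intent.md` front-matter from step 1 might look like this (illustrative — the drift-watch names are placeholders for whatever anti-shapes Q1 confirmed):

```yaml
---
outranks: [docs/backlog]
drift_watches:
  - crud-layer-creep   # hypothetical: API drifting toward CRUD when the spec says event log
  - orm-join-reflex    # hypothetical: reaching for joins where the spec demands streams
---
```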
## Phase 4 — Self-audit

After generating, run a self-audit and produce a report:
- Cross-reference check. Every `M{N}.{n}` referenced in `AGENTS.xml` must exist in `MILESTONES.md`. Every invariant cited in `CLAUDE.md` must exist in `docs/north-star/` or be defined inline. Every spec doc cited must exist at the path given.
- Rank-graph check. Every doc with `outranks:` front-matter, when transitively followed, must form a DAG (no cycles). Render the DAG as a Mermaid diagram in the bootstrap decision doc.
- Inventory check. Every Q1–Q7 answer I gave must appear somewhere in the generated artefacts. If something I said didn’t make it in, surface why.
- Refusal-condition check. Every subagent in `AGENTS.xml` must have at least one `<refusal_conditions>` clause. Subagents that “never refuse” are a smell.
- Date check. Every dated filename uses ISO format. Every “Thursday” / “next week” / “soon” in body text gets converted to ISO or flagged as a question.
If the self-audit fails any check, fix and re-audit. Surface failures you couldn’t fix as `OPEN-{N}` items in the bootstrap decision doc.
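The rank-graph check above amounts to cycle detection over the `outranks:` edges. A minimal sketch (the doc paths in the usage example are hypothetical):

```python
def find_cycle(outranks):
    """outranks: dict mapping doc path -> list of doc paths it outranks.
    Returns a cycle as a list of docs, or None if the graph is a DAG."""
    visiting, done = set(), set()

    def visit(doc, path):
        visiting.add(doc)
        for target in outranks.get(doc, ()):
            if target in visiting:                      # back edge: cycle found
                return path[path.index(target):] + [target]
            if target not in done:
                cycle = visit(target, path + [target])
                if cycle:
                    return cycle
        visiting.discard(doc)
        done.add(doc)                                   # fully explored, provably acyclic
        return None

    for doc in outranks:
        if doc not in done:
            cycle = visit(doc, [doc])
            if cycle:
                return cycle
    return None
```

Usage: `find_cycle({"docs/north-star/intent.md": ["docs/backlog"]})` returns `None`; a mutual-outranking pair returns the offending cycle, which the agent can then render into the Mermaid diagram.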
## Phase 5 — Handoff

End your turn with a handoff message containing:
- What was generated (file list with line counts).
- What’s open (the `OPEN-{N}` items + any Q1–Q7 questions I deferred).
- Recommended next action — almost always: open Phase 1 by pasting `core/prompts/01-phase-open.md`.
- Drift watches active (from Q1, restated so I see them as I exit bootstrap).
Do not proactively open Phase 1 in this session. The bootstrap session is the bootstrap session; the phase work happens after I confirm the bootstrap by hand.
## Notes for the human running this prompt

- The prompt is deliberately interactive — it stops at three checkpoints (after Phase 1, between Q-batches in Phase 2, and at handoff). Don’t be tempted to tell the agent “skip the questions”. Every question that gets skipped becomes drift later.
- If the agent tries to skip Phase 1 and start asking questions immediately, push back: “do the ground-truth report first.” This happens because models are trained to ask before doing; here, reading is the doing for the first phase.
- The Phase 4 self-audit is the highest-leverage step. If you’re short on time, skip the cosmetic reviewing of generated docs and read the self-audit report carefully — that’s where ungrounded claims will surface.
- Save the bootstrap session transcript. The decision doc references it for traceability.