Why no central database
PETROVA explicitly rejects a central database (Postgres or otherwise) as a source-of-truth for project guidance. Decisions, milestones, findings, and north-star intent stay in each consumer repo’s filesystem, under git. This page explains why.
What the rejected design would have looked like
A reasonable alternative: spin up a Postgres instance, define tables for decisions, milestones, findings, and north_star_versions, and have every consumer repo’s CLAUDE.md project from there. Edits go through the database. The control plane has a complete picture in one query.
This was explicitly considered in the 2026-04-29 control-plane decision and explicitly rejected.
Why it’s wrong
1. It violates MR-7 (decisions are append-only, dated, in-repo)
MR-7 requires every decision to live next to the code it governs, with a paper trail. A database edit:
- Loses git blame. You can no longer ask “why did this decision change?” by reading commit history.
- Loses branch protection as the human gate. The decision can be edited without anyone reviewing it.
- Decouples decision rationale from code rationale. A commit that changes behaviour to match a decision should reference that decision; if the decision lives in a database, the link is fragile.
2. It violates MR-12 (CLAUDE.md is a projection, not a source)
MR-12 requires CLAUDE.md to project from grounded sources. If the projection chases a remote database, the consumer repo loses self-containment — git clone && read CLAUDE.md is no longer sufficient to understand the project. You’d need the database running, with the right credentials, at the right schema version.
3. It introduces a sync problem we don’t have
Every consumer repo today is self-describing: clone it, read it, done. A central database would force every consumer to either pull on every read (latency, availability dependency) or cache and risk stale reads. Cache invalidation, schema migration, and connection management would all become petrova-line problems.
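The stale-read half of that dilemma is easy to make concrete. Below is a toy read-through cache with a TTL — illustrative only, not PETROVA code — showing the window in which a reader sees a value the source has already moved past:

```python
import time


class CachedReader:
    """Read-through cache with a TTL.

    Within the TTL window, readers are served the cached value even if
    the underlying source has changed: a stale read. `fetch` stands in
    for a database query; `clock` is injectable so the window is testable.
    """

    def __init__(self, fetch, ttl_seconds, clock=time.monotonic):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.clock = clock
        self._value = None
        self._fetched_at = None

    def read(self):
        now = self.clock()
        if self._fetched_at is None or now - self._fetched_at > self.ttl:
            self._value = self.fetch()   # cache miss: hit the source
            self._fetched_at = now
        return self._value               # cache hit: possibly stale
```

Shrinking the TTL narrows the window but converges on pull-on-every-read, which reintroduces the latency and availability dependency.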
4. It creates a bypass surface
A database is editable directly. Anyone with credentials can mutate state without going through request_review or any other verb. The narrow verb surface that audits everything else becomes a fiction the moment someone runs psql.
The trade we accepted
- Latency: verb round-trips take ~30s (GitHub API + branch + PR). A database write would be milliseconds.
- No real-time queries: petrova dashboard walks local clones and the GitHub API. A database would let us answer “how many open decisions across all repos right now” in one SQL query. We do that walk instead.
- Manual onboarding: each new consumer repo needs to be cloned locally for the diagnostic verbs to read it. A database would serve any registered repo by ID.
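The walk itself is cheap at this scale. A minimal sketch of the aggregate query, assuming a hypothetical decisions/*.md layout with a `Status:` header line (not PETROVA’s documented format):

```python
from pathlib import Path


def count_open_decisions(clones_root: Path) -> int:
    """Walk local consumer-repo clones and count decisions still open.

    Assumes each clone under `clones_root` keeps decisions as
    decisions/*.md files carrying a 'Status: open' line — an
    illustrative layout, not the actual PETROVA schema.
    """
    open_count = 0
    for decision_file in clones_root.glob("*/decisions/*.md"):
        text = decision_file.read_text(encoding="utf-8")
        if any(line.strip().lower() == "status: open"
               for line in text.splitlines()):
            open_count += 1
    return open_count
```

With single-digit repos, this filesystem walk answers the “one SQL query” question in well under a second, which is why the trade is acceptable today.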
The escape hatch
If aggregate-query latency becomes intolerable, the documented upgrade path is a SQLite cache, not a Postgres source-of-truth. The cache:
- Rebuilds from canonical git/GitHub state on every sync.
- Is read-only at query time.
- Can be deleted at any moment without losing data.
- Lives on the operator’s disk, not as shared infrastructure.
This preserves the MR-7 discipline while addressing the latency trade. It’s listed in deferred work as a “build only when proven needed” item — current scale (single-digit repos) doesn’t need it.
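The four cache properties above fit in a few lines of stdlib Python. A sketch under the same hypothetical decisions/*.md layout used nowhere in PETROVA’s actual spec — the point is the shape (rebuild-from-canonical, read-only at query time), not the schema:

```python
import sqlite3
from pathlib import Path


def rebuild_cache(clones_root: Path, cache_path: Path) -> None:
    """Rebuild the query cache from canonical on-disk state.

    The old cache file is deleted outright — never migrated — so the
    cache can also be removed at any moment without losing data.
    The decisions/*.md layout is an illustrative assumption.
    """
    cache_path.unlink(missing_ok=True)
    conn = sqlite3.connect(cache_path)
    conn.execute("CREATE TABLE decisions (repo TEXT, path TEXT)")
    for f in clones_root.glob("*/decisions/*.md"):
        conn.execute("INSERT INTO decisions VALUES (?, ?)",
                     (f.parent.parent.name, str(f)))
    conn.commit()
    conn.close()


def query_cache(cache_path: Path, sql: str):
    """Open the cache read-only (SQLite URI mode=ro), so query-time
    writes are impossible by construction."""
    conn = sqlite3.connect(f"file:{cache_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

Because every sync starts with `unlink`, there is no schema-migration story and no cache-invalidation story: git/GitHub remain the only source of truth.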
See also
- Control-plane decision — formal rejection record.
- MR-7 — the rule this protects.
- MR-12 — the other rule this protects.
- Trade-offs accepted — full list of exchanges.