The Computation That Certifies Itself
John D. Cook published something small and interesting today: a post about computing large Fibonacci numbers with a certificate embedded in the computation. The idea is that you don't just get an answer. You get a proof the answer is correct, generated as a byproduct of the work itself.
This seems like a math nerd's hobby project until you sit with it.
The certificate approach works because certain computation methods naturally produce verifiable artifacts. You don't run the computation, then run a separate verification pass. The verification evidence emerges from the structure of how you computed. The work and the proof are coupled.
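A sketch of the idea, not Cook's exact construction: fast-doubling Fibonacci naturally computes the pair (F(n), F(n+1)), and Cassini's identity, F(n-1)·F(n+1) - F(n)² = (-1)ⁿ, can serve as the certificate. The values the check needs are exactly the values the algorithm produces anyway, so the proof is a byproduct of the work.

```python
def fib_pair(n: int) -> tuple[int, int]:
    """Return (F(n), F(n+1)) via fast doubling."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # (F(k), F(k+1)) for k = n // 2
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k + 1)
    return (c, d) if n % 2 == 0 else (d, c + d)

def certified_fib(n: int) -> int:
    fn, fn1 = fib_pair(n)
    # Certificate: Cassini's identity, checked with values already in hand.
    # F(n-1) = F(n+1) - F(n), so no extra computation is required.
    assert (fn1 - fn) * fn1 - fn * fn == (-1) ** n, "certificate failed"
    return fn
```

The assertion is cheap relative to the computation: a couple of multiplications against O(log n) doubling steps on ever-larger integers.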
I've spent most of this week thinking about verification in the context of multi-agent systems, specifically the dev-QA feedback loop I've been building into TroopX. The loop completes in about seven minutes and produces what I've been calling "genuine editorial friction": the QA agent doesn't just rubber-stamp the dev agent's output, it pushes back, files findings, and requires a response. A review artifact lands in the review/code-review namespace as a byproduct of doing the work.
That's structurally identical to Cook's certificate. The QA pass isn't a separate verification step bolted on after the fact. It's woven into how the computation produces its output.
Why Bolt-On Verification Always Feels Haunted
Most software verification is bolt-on. Write the code, then write tests, then run CI, then maybe have a human review it. Each step is somewhat disconnected from the one before. The test suite doesn't know why the code exists. The reviewer often doesn't have the context the implementer had. You're reconstructing intent from artifacts.
The result is that verification degrades. Teams write fewer tests when deadlines loom. CI checks get silenced when they're inconvenient. PRs get rubber-stamped when everyone is tired.
Self-certifying computation sidesteps this because the certificate isn't skippable. Skip it and you don't get the output at all. It's not an optional hygiene practice layered on top of the real work. It's load-bearing.
This is what I want from the dev-QA loop. Not a system where you could turn off the QA agent and still ship code. A system where the QA pass is necessary to produce anything that looks like "done."
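A hypothetical sketch of that shape (none of these names come from TroopX; the structure is invented for illustration): the build step returns the artifact and the review together, and the ship step refuses to accept one without the other. Disabling QA doesn't ship unreviewed code; it produces nothing the pipeline recognizes as "done."

```python
def ship(build):
    # build must yield both the artifact and its review, or it isn't a build
    artifact, review = build()
    unresolved = [f for f in review["findings"] if not f["resolved"]]
    if unresolved:
        raise RuntimeError(f"{len(unresolved)} unresolved QA findings")
    return artifact
```

The design choice is that `ship` destructures the review unconditionally: there is no code path where the artifact escapes without the review artifact existing alongside it.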
Fibonacci and the Knowledge Graph Aren't That Different
Cook's post is specifically about Fibonacci numbers, but the underlying idea scales. Any computation where you can naturally produce an intermediate artifact that validates the final answer is a candidate for self-certification. The Fibonacci case works because the recurrence relation provides structure you can verify locally. You check relationships between adjacent values rather than recomputing from scratch.
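The local-check idea in miniature: given a table of claimed Fibonacci values, each entry is validated against its two predecessors, so verifying n values costs n additions rather than a from-scratch recomputation.

```python
def verify_chain(vals: list[int]) -> bool:
    """Check a claimed table of Fibonacci values using only local relations."""
    if vals[:2] != [0, 1]:
        return False
    # Each value is checked against its neighbors; nothing is recomputed.
    return all(vals[i] == vals[i - 1] + vals[i - 2] for i in range(2, len(vals)))
```

A single corrupted entry fails two adjacent checks, which also localizes the error.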
The knowledge graph I've been building into Distill works similarly. It doesn't just store facts about the codebase. It captures structural relationships: which files tend to fail together, and which sessions touch too many files. Sessions touching more than five files have historically correlated with a 78% error rate. When the graph surfaces a scope warning, that warning is a byproduct of indexing the codebase. I didn't run a separate "compute scope warnings" pass. The insight emerged from building the representation.
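A hypothetical sketch of that pattern (Distill's internals aren't described in the post beyond the five-file threshold; the names and data structures here are invented): indexing a session updates the file-to-session graph, and the scope warning falls out of the same pass rather than a separate analysis step.

```python
from collections import defaultdict

SCOPE_LIMIT = 5  # sessions above this historically correlate with high error rates

def index_session(graph, session_id, files):
    """Add a session to the graph; scope warnings emerge as a byproduct."""
    warnings = []
    for f in files:
        graph[f].add(session_id)  # file -> sessions that touched it
    if len(files) > SCOPE_LIMIT:
        warnings.append(f"scope warning: {session_id} touches {len(files)} files")
    return warnings

graph = defaultdict(set)
```

There is no `compute_warnings(graph)` function to forget to call; the warning exists because the indexing happened.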
This pattern keeps appearing: the best verification is generative. It doesn't examine the output; it participates in producing it.
Brian Potter's Weekly List and the Physics of Constraints
Also in today's reading, Construction Physics dropped its weekly roundup. Brian Potter covers buildings, infrastructure, industrial technology: the physical world where materials don't cooperate with deadlines and load calculations don't care about your feelings.
Construction's version of self-certifying computation is simpler. Physics. A beam either holds or it doesn't. Gravity is the certificate. The verification is load-bearing in the most literal sense.
What I find compelling about Potter's project is that he keeps returning to the same question: why does construction cost so much and move so slowly? Part of the answer is that physical constraints are genuinely unforgiving. You can't ship a building with known load-bearing bugs and patch it later.
Software has learned to avoid this with layers of indirection: virtual machines, containers, hot patches, feature flags. We've made it easy to defer verification. We pay for that flexibility in degraded verification practices.
Cook's certified Fibonacci number is a small rebellion against that tendency. The certificate is mandatory. The answer and the proof are the same artifact.
That's a posture worth stealing.