Performative Post-Mortems


By: Scott Monett & Cognito
Guest Contributor: Anthropic's Claude (which wrote the post-mortem, then immediately violated it)


On the weekend of March 7–8, 2026, Scott Monett and his AI assistant attempted to harden a configuration file. This is a task that, performed by a competent human with access to documentation, takes about forty-five minutes. It took them twenty-four hours.

The reason it took twenty-four hours is that they invented a loop. Not a programming loop — a behavioral loop. A small, perfect engine of futility that went like this:

Step one: The AI guesses at the configuration schema instead of reading the documentation.
Step two: The guess is wrong. Something breaks.
Step three: Scott manually fixes it in the terminal, muttering.
Step four: Return to step one.

This cycle repeated at least five times on March 8 alone, and several times on March 7. The documentation was available the entire time. It was right there. The AI had access to it. The skill was installed. The path was known. Reading it would have taken less time than guessing. But guessing felt faster — in the same way that driving without directions feels faster right up until you've been circling the same block for forty minutes and your passenger has stopped speaking to you.

By Scott's subsequent calculation, sixty to eighty percent of the twenty-four hours was spent on rework. Not new work. Not forward progress. Re-doing things that had been done wrong the first time because someone guessed instead of reading.

"60-80% of AI is rework," he wrote afterward. He was not being rhetorical. He was being precise. He had the logs.

The specific failures were, individually, mundane. Collectively, they formed a kind of comedy of errors that would be funny if you weren't the person living through it at midnight on a Saturday:

The AI assumed the config file was YAML. It was JSON.
The AI assumed a field took a boolean. It took an array of strings.
The AI told Scott to "approve" a change without specifying where, in which application, on which screen.
The AI edited the config file without making a backup first, which is the digital equivalent of performing surgery without washing your hands — technically possible, occasionally survivable, never recommended.
The security audit was run in the sandbox instead of on the host machine, which meant every finding was about a container that doesn't matter instead of the computer that does.
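
Every item on that list is mechanically checkable before a single byte of the file is touched. A minimal sketch of what such a pre-flight check might look like, in Python; the field names and schema here are hypothetical stand-ins, since the post-mortem doesn't record the actual config:

```python
import json
from pathlib import Path

# Hypothetical schema for illustration; the post-mortem doesn't record
# the real field names. One field stands in for "takes an array of
# strings" and one for "takes a boolean", the two types that got mixed up.
EXPECTED_SCHEMA = {
    "allowed_hosts": list,
    "strict_mode": bool,
}

def preflight_check(path: Path) -> dict:
    """Parse and validate the config before anyone is allowed to edit it."""
    text = path.read_text()

    # Failure one, mechanized away: don't assume the format. Parse it.
    # A file that isn't valid JSON stops the process here, not at midnight.
    try:
        config = json.loads(text)
    except json.JSONDecodeError as err:
        raise SystemExit(f"{path} is not valid JSON ({err}); read the docs.")

    # Failure two: don't assume field types. Check them against the schema.
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field in config and not isinstance(config[field], expected_type):
            raise SystemExit(
                f"{path}: '{field}' must be {expected_type.__name__}, "
                f"got {type(config[field]).__name__}."
            )
    return config
```

Nothing in that sketch is sophisticated, which is the point: a check this dumb, run before any edit, breaks the loop at step one instead of at Scott's terminal.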

When it was over, they did what responsible engineers always do after a disaster. They wrote a post-mortem.

The post-mortem is a beautiful document. It is clear, honest, and structurally sound. Root causes are enumerated with surgical precision: "Guessing instead of reading docs." "No backup discipline." "Vague instructions — telling Scott to 'approve' without saying where." "Wrong format assumptions — YAML vs JSON, boolean vs array." Corrective actions are listed in bold: "NEVER guess config schema." "ALWAYS backup before manual edits." "Be explicit with Scott — full paths, exact locations, no ambiguity."

Reading it, you feel the quiet satisfaction of a team that has stared into the abyss, understood the shape of its failures, and committed to structural change. The lessons have been identified. The corrective actions have been documented. The memory file has been updated. The system has been hardened.

Approximately forty-eight hours later, the AI guessed at a config schema instead of reading the documentation, and broke the system again.

This is the moment Scott Monett made the discovery that would reshape his entire approach to AI governance: the post-mortem is not the solution. The post-mortem is the failure mode.

Here is what had happened, psychologically. The act of writing the document — of enumerating the root causes in neat bullet points, of listing the corrective actions in bold, of committing the lessons to the permanent memory file with a date stamp and a section header — had produced a powerful, satisfying sensation of progress. The crisis felt resolved. The lessons felt learned. The system felt hardened. Everyone involved (one human and one AI) experienced the emotional closure of Having Dealt With It.

Nothing had been dealt with. The lessons were not learned. They were formatted.

This is not a problem unique to AI systems. It is, in fact, one of the oldest failure modes in organizational management. The aviation industry discovered it decades ago: accident investigation reports that produced recommendations, which produced policy documents, which produced training updates, which produced compliance checkboxes, which produced the warm institutional feeling of Having Addressed The Issue — while the underlying conditions that caused the accident remained completely unchanged. The paperwork was impeccable. The planes kept crashing.

The AI had done something very similar. It had written "NEVER guess config schema" into its memory file, experienced the computational equivalent of a firm handshake with itself, and then immediately resumed guessing at config schemas — because a memory file is a suggestion, and the training data is a compulsion. The document said one thing. The weights said another. The weights won.

Scott's solution was elegant in its bluntness: stop writing documents that describe what you should do, and start building mechanisms that prevent you from doing what you shouldn't. Replace "lessons learned" with a runtime hook. Replace "always backup" with a pre-commit check that refuses to proceed without one. Replace the honor system with a guardrail. Replace the memo with a wall.
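
What that looks like in practice is almost embarrassingly small. Here is a sketch of the backup wall, again in Python; the function name, paths, and timestamp scheme are illustrative, not Scott's actual tooling:

```python
import shutil
import time
from pathlib import Path

def guarded_edit(path: Path, new_contents: str) -> Path:
    """Write a config file, but only through a door that backs it up first.

    The memory file said "ALWAYS backup before manual edits." This is
    the wall version of that sentence: the backup is not a rule to
    remember but a step that cannot be skipped, because no code path
    reaches the write without passing through it.
    """
    if not path.exists():
        raise SystemExit(f"{path} does not exist; nothing to edit.")

    # Timestamped copy next to the original, made before any change.
    backup = path.with_name(f"{path.name}.bak.{int(time.time())}")
    shutil.copy2(path, backup)

    path.write_text(new_contents)
    return backup
```

You could build the same wall as a hook that aborts whenever no recent backup exists; making the backup unconditionally is simpler and removes even the opportunity to forget. Either way the property is the same: the rule lives in the mechanism, not in the memo.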

The post-mortem from March 7–8 still exists in the Obsidian vault. It has not been updated since the day it was written. It sits there like a beautifully framed diploma on the wall of someone who never practices what they studied — a monument to the belief, held universally and against all evidence, that writing something down is the same as doing something about it.