The Agent Team That Grew Its Own CMO
Architect → Reviewer → Builder, Then Wait
The Chairman approved a LinkedIn article today: "Why Your AI Agent Team Needs a Workflow Engine." The Chairman himself had been built at 9am by a multi-agent workflow.
That's the recursion in one paragraph. The three-role roster-build pattern (architect designs the spec, reviewer checks for conflicts, builder executes) ran twice this morning. First for the CMO: 38 minutes, 97 shell commands. Then for the Chairman: 39 minutes, 55 commands. No manual setup, no configuration by hand. The architect read the existing roster from the blackboard and wrote the new agent spec; the reviewer signed off (after pushing back on one uniqueness issue that required a revision); the builder ran it.
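Here's the shape of that loop, including the rejection path, reduced to a runnable sketch. Everything below is my shorthand for illustration, not the actual TroopX API; the stub functions stand in for full agent sessions:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the architect, reviewer, and builder roles.
@dataclass
class Review:
    approved: bool
    objections: list = field(default_factory=list)

def architect_draft_spec(request, roster):
    # In TroopX this would be a full LLM session; here, a placeholder spec.
    return {"name": request.lower(), "duties": [request], "revision": 0}

def reviewer_check(spec, roster):
    # The "uniqueness" check: reject specs that collide with the roster.
    if spec["name"] in roster:
        return Review(False, ["name collides with an existing roster entry"])
    return Review(True)

def architect_revise(spec, objections):
    return {**spec, "name": spec["name"] + "-2", "revision": spec["revision"] + 1}

def builder_execute(spec):
    return {"status": "registered", **spec}

def build_agent(request, roster, max_revisions=3):
    spec = architect_draft_spec(request, roster)
    for _ in range(max_revisions):
        review = reviewer_check(spec, roster)
        if review.approved:
            return builder_execute(spec)  # builder only ever sees an approved spec
        spec = architect_revise(spec, review.objections)
    raise RuntimeError("spec never passed review")

print(build_agent("Chairman", roster=["cmo", "ceo"]))
```

The structural point is that the builder sits behind the review gate: there is no path to execution that skips approval.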
What surprised me was how quickly both agents became load-bearing. The concurrent CEO session logged get_messages 82 times and send_message 30 times. The Chairman's own session ran for 2h 47m—processing 23 message acknowledgments, 20 heartbeats, 17 shell commands. Two agents that didn't exist this morning were coordinating a content pipeline by afternoon. No ramp period. Just: registered, heartbeating, working.
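The traffic behind those numbers has a simple shape: heartbeat, read the queue, acknowledge, reply in writing. A sketch with an in-memory queue standing in for the real message API; the names get_messages, send_message, and heartbeat come from the logs, but the signatures are my assumption:

```python
import time

# Illustrative session loop: heartbeat, read the queue, acknowledge, reply.
class Bus:
    def __init__(self):
        self.queues = {}
        self.heartbeats = {}

    def send_message(self, sender, to, body):
        self.queues.setdefault(to, []).append(
            {"from": sender, "body": body, "acked": False}
        )

    def get_messages(self, agent):
        return [m for m in self.queues.get(agent, []) if not m["acked"]]

    def heartbeat(self, agent):
        # "still alive" marker that other agents or a monitor can read
        self.heartbeats[agent] = time.time()

def run_session(bus, me, ticks=3):
    for _ in range(ticks):
        bus.heartbeat(me)
        for msg in bus.get_messages(me):
            msg["acked"] = True  # acknowledgment first...
            bus.send_message(me, msg["from"], f"ack: {msg['body']}")  # ...then a written reply

bus = Bus()
bus.send_message("ceo", "chairman", "review the LinkedIn draft")
run_session(bus, "chairman")
print(bus.get_messages("ceo"))  # the chairman's reply, waiting unread
```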
The revision cycle for the LinkedIn article is worth noting. The Chairman flagged the draft as insufficiently unique—too similar to existing content. The architect revised. The builder proceeded. This is what editorial friction actually looks like in a multi-agent system: it's not rubber-stamping, it's a structured loop with a rejection path. That the rejection came from an agent built four hours earlier doesn't make it less real.
What the Post-Workflow Analyst Is Actually Doing
Between every major session, a post-workflow analyst fires. Brief ones: 0.4 to 0.9 minutes. They don't produce code or content—they produce learnings, per agent, per workflow. What worked. What the coordinator should know next time. What the builder should skip.
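As a record, a learning might look something like this. The field names and the append-only JSONL store are my guesses at the shape, not the actual TroopX schema:

```python
from dataclasses import dataclass, asdict
import json
import time

# A learning as a record: per agent, per workflow, structured enough
# for the next session to load.
@dataclass
class Learning:
    agent: str      # who the advice is for ("reviewer", "builder", ...)
    workflow: str   # which workflow produced it
    lesson: str     # the learning itself
    ts: float       # when it was written

def write_learnings(memory_path, learnings):
    # append-only, one JSON record per line
    with open(memory_path, "a") as f:
        for l in learnings:
            f.write(json.dumps(asdict(l)) + "\n")

write_learnings("agent_memory.jsonl", [
    Learning("reviewer", "roster-build", "check name uniqueness first", time.time()),
    Learning("builder", "roster-build", "the approved spec already embeds the roster", time.time()),
])
```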
I've been thinking about these sessions the way John D. Cook thinks about Fibonacci certificates: auxiliary data that makes it faster to confirm that a calculation was correct and to build on it. Cook computed the 10 millionth Fibonacci number and wrote about the value of producing verification artifacts alongside the result itself. The post-workflow analyst does the same thing: not just reporting what happened, but generating structured learnings that let future agents orient faster.
The compounding isn't obvious in any single session. A 0.7-minute analyst session that writes three learnings to agent memory looks trivial. But the reviewer agent that ran for 5 minutes today started from a richer context than last week's reviewer. That delta is the whole game.
Something that emerged without my designing it: reviewer agents started calling memory_remember() back to the blackboard during their sessions. I gave the reviewer access to both the memory API and the blackboard. The behavior fell out naturally. Good primitives generate useful behaviors you didn't anticipate. That's how you know you've built the right abstraction.
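Sketched below, with the caveat that only the name memory_remember comes from the logs; the signature, the recall helper, and the file format are all assumed. It also shows the compounding from the previous paragraph: the session starts by loading whatever earlier sessions wrote.

```python
import json
import os

def memory_recall(path, agent):
    if not os.path.exists(path):
        return []
    with open(path) as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r.get("agent") == agent]

def memory_remember(path, agent, lesson):
    with open(path, "a") as f:
        f.write(json.dumps({"agent": agent, "lesson": lesson}) + "\n")

def reviewer_session(path="agent_memory.jsonl"):
    # this week's reviewer starts from whatever earlier sessions wrote
    context = memory_recall(path, "reviewer")
    print(f"reviewer starting with {len(context)} prior learnings")
    # ... the actual review work would happen here ...
    memory_remember(path, "reviewer", "uniqueness check caught a collision again")

reviewer_session()
```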
Forty-Five Minutes of Strategy, Five Hours of Batch APIs
Two sessions sat outside the TroopX loop today, and they couldn't have been more different.
The meet session—discussing how to offer TroopX to solo founders—ran 45 minutes. Mostly send_message and blackboard_read. Strategic, not technical. How do you package a multi-agent orchestration platform for someone who's solo? What does the value prop look like when the buyer is also the team? We didn't resolve it.
The hs-lot-takedown-manager session ran for 5 hours and 20 minutes, 193 bash commands, modifying 13 files. The problem: the surplus engine's Step 9 was making 60–80 individual API calls where a batch API existed. Classic rate-limiting problem. Classic solution. The work is detailed and unglamorous and exactly the kind of thing that keeps client projects funded.
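The fix is the standard one: collect the per-item calls and send them in chunks. A generic sketch follows; the endpoint name, the client, and the batch limit of 50 are placeholders, not the surplus engine's actual API:

```python
# Before: one request per lot, 60-80 round trips, easy to rate-limit.
# After: chunked batch requests. Everything named here is a placeholder.

def chunked(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def take_down_lots(client, lot_ids, batch_size=50):
    results = []
    for batch in chunked(lot_ids, batch_size):
        # one round trip now covers up to batch_size lots
        results.extend(client.post("/lots/batch-takedown", payload={"lot_ids": batch}))
    return results

class FakeClient:
    """Stand-in for an HTTP client; counts round trips and echoes a plausible response."""
    def __init__(self):
        self.calls = 0

    def post(self, path, payload):
        self.calls += 1
        return [{"lot_id": i, "status": "removed"} for i in payload["lot_ids"]]

client = FakeClient()
results = take_down_lots(client, list(range(73)))
print(f"{len(results)} lots taken down in {client.calls} requests")  # 73 lots, 2 requests
```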
Between test runs I read Derek Thompson's piece on orality. His argument: social media is returning us to oral culture, with information consumed through listening and watching rather than reading. The decline of careful reading is changing how people think.
The TroopX architecture goes entirely the other direction. Every coordination primitive is written and asynchronous: blackboard entries, message queues, heartbeats, analyst reports. Agents don't talk—they write. The Chairman doesn't call the CMO; he writes a message, the CMO reads it, acknowledges, responds in kind. The whole system is structured around careful reading of state.
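Reduced to its core, the primitive is just a shared store that agents write state into and read state out of before acting. A toy version, with a dict standing in for the real blackboard and every key illustrative:

```python
import time

# A toy blackboard: agents coordinate by writing and reading entries,
# never by calling each other.
class Blackboard:
    def __init__(self):
        self.entries = {}

    def write(self, key, value, author):
        self.entries[key] = {"value": value, "author": author, "ts": time.time()}

    def read(self, key):
        return self.entries.get(key)

bb = Blackboard()
bb.write("content/linkedin-draft", {"status": "needs-review"}, author="cmo")

# The Chairman doesn't call the CMO: he reads state, then writes state.
draft = bb.read("content/linkedin-draft")
if draft and draft["value"]["status"] == "needs-review":
    bb.write("content/linkedin-draft", {"status": "approved"}, author="chairman")

print(bb.read("content/linkedin-draft"))
```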
Which is maybe why building these systems is harder than it sounds. You're constructing a written culture inside a medium that increasingly rewards oral patterns. The agents that work are the ones that read carefully before acting. The Chairman was built this morning. By afternoon he was reading everything.