Software Company: The AI Trinity Method (Part 2 of 3)
The Shared Brain Protocol
This is Part 2 of a 3-part series. ← Part 1: Stop Chatting, Start Conducting | Part 3: The Human Conductor →
The Recap
In Part 1, we introduced the AI Trinity — splitting AI into three specialized roles (Architect, Tech Lead, Engineer) instead of treating it as a single do-everything assistant. The result: separation of concerns, natural quality gates, and cross-model review that catches bugs no single model would find alone.
But we ended on a problem: AI has no memory across conversations. Three brilliant team members who forget everything the moment you close the tab.
Today we fix that.
The Amnesia Problem
Here’s what actually happens without a solution:
Monday: You spend two hours with the Architect designing a module. You agree on interface contracts, data formats, error handling strategies. Great session.
Tuesday: You open a new chat with the Tech Lead to start implementation guidance. The Tech Lead has never heard of Monday’s decisions. You spend 20 minutes re-explaining context. The Tech Lead makes a suggestion that contradicts the Architect’s decision, because it doesn’t know that decision exists.
Wednesday: The Engineer starts coding. It asks you a question about error handling. You answered this on Monday. You answer it again. It asks about the data format. You explained this on Monday too. You explain it again.
Thursday: You realize the Engineer implemented something slightly differently from what the Architect designed, because the context got garbled in your re-telling. Now you need to go back to the Architect to check if the deviation matters.
This is what happens when three team members never share notes. And the cost compounds as the project grows. By the time you have 50+ modules, the re-explaining overhead alone will eat half your productive hours.
The Fix: One Document to Rule Them All
The solution is deceptively simple: a single structured document that any role can read at the start of any conversation and recover full project context in 30 seconds.
Not a casual note. Not a chat log dump. A carefully designed state snapshot with a strict format.
Think of it as your team’s shared working memory — the whiteboard in the war room that everyone glances at before speaking up.
The Snapshot Structure
System Snapshot (5-second overview)
Current mode · Key metrics · Latest milestone
Architecture Quick Reference (30-second recall)
Layer structure · Data flow · Core formats
Module Status Table (on-demand lookup)
Each module's status, constraints, dependencies
Development Progress (historical trajectory)
Completed phases · Current state · Next steps
⚠️ Known Traps (lessons paid for in blood)
Every entry is a bug that already bit you once
Team Agreements (behavioral rules)
Error handling · Change management · Authority boundaries
Document Trust Levels (meta-information)
Which docs are reliable · Which might be stale
Each section serves a specific purpose. Let me walk through the non-obvious ones.
Known Traps: Your Team’s Error Journal
This might be the most valuable section in the entire document.
Every time you hit a bug, an unexpected behavior, or a “gotcha” — you write it down in a specific format:
⚠️ [module_name] What went wrong, why it happens,
and what the correct approach is
Real examples (anonymized):
⚠️ [data_pipeline] API failures must be logged at ERROR level,
never silently swallowed as warnings. Silent failures are
more dangerous than crashes — they look normal while
producing garbage data.
⚠️ [process_mgmt] Background processes started with simple
daemonization get killed when the parent session ends,
causing silent data gaps with no alerts. Long-running
processes must use proper session management.
⚠️ [config] High-scoring signals (≥70) actually underperform.
Optimal threshold is ≥40. More aggressive is NOT better.
(Yes, this is counterintuitive. The data is clear.)
Every new AI conversation loads these traps. Your team now has a growing error journal that prevents repeat mistakes — a structural fix for AI’s amnesia, applied specifically to the mistakes that matter most.
Over time, this section becomes the most battle-tested part of your documentation. Each entry represents a real bug that cost real debugging time. New AI sessions inherit all of that institutional knowledge from line one.
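The trap format above is regular enough to parse mechanically. Here is a minimal sketch of a loader that groups trap entries by module, so a session touching one module can be primed with just its traps. (The regex and the `parse_traps` helper are my own assumptions for illustration, not a prescribed part of the protocol.)

```python
import re
from collections import defaultdict

# Hypothetical parser for the trap format shown above:
#   ⚠️ [module_name] what went wrong and what to do instead
TRAP_RE = re.compile(r"⚠️\s*\[(?P<module>\w+)\]\s*(?P<lesson>.+)", re.DOTALL)

def parse_traps(state_doc: str) -> dict[str, list[str]]:
    """Group trap entries by module so a new AI session can load
    only the traps relevant to the module it is touching."""
    traps = defaultdict(list)
    # Each trap begins with the warning marker; splitting on it keeps
    # multi-line lessons attached to their module tag.
    for chunk in state_doc.split("⚠️")[1:]:
        match = TRAP_RE.match("⚠️" + chunk.strip())
        if match:
            # Collapse internal whitespace from wrapped lines.
            traps[match.group("module")].append(
                " ".join(match.group("lesson").split())
            )
    return dict(traps)

doc = """
⚠️ [data_pipeline] API failures must be logged at ERROR level,
never silently swallowed as warnings.
⚠️ [config] Optimal threshold is >=40. More aggressive is NOT better.
"""
print(parse_traps(doc)["config"])
```

A loader like this is also a cheap sanity check: if an entry fails to parse, it probably drifted from the agreed format.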
The Two-Document Split
Here’s a subtlety that took trial and error to get right: you need two documents, not one.
| | Rules Document | State Document |
|---|---|---|
| Contains | Dev standards, safety rules, role definitions, coding conventions | Current progress, test counts, module status, known traps, next steps |
| Update frequency | Rarely (when rules change) | Every milestone |
| Analogy | Company charter | Daily standup notes |
Why split them? Because mixing rules with state creates a maintenance nightmare. If your “team handbook” also contains “we currently have 847 tests passing,” you’ll update the test count and accidentally edit a safety rule in the same commit. Or worse — you’ll stop updating the test count because touching the file feels risky.
Core principle: state data lives only in the state document. The rules document never contains numbers that will go stale.
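That principle is enforceable with a tiny lint in a pre-commit hook. The sketch below is a heuristic, not a rigorous checker — the patterns are my own guesses at what stale-prone state data looks like when it leaks into a rules file:

```python
import re

# Heuristic patterns for state-like facts that do not belong in a
# rules document (counts, "currently" claims, progress percentages).
VOLATILE = [
    r"\b\d+\s*(?:tests?|modules?)\b",     # "847 tests", "50 modules"
    r"\bcurrently\b",                      # "we currently have..."
    r"\b\d+%\s*(?:coverage|complete)\b",   # "92% coverage"
]

def find_volatile_lines(rules_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like state data
    leaking into the rules document."""
    hits = []
    for lineno, line in enumerate(rules_text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in VOLATILE):
            hits.append((lineno, line.strip()))
    return hits

rules = "Always log at ERROR level.\nWe currently have 847 tests passing.\n"
print(find_volatile_lines(rules))
```

If the lint fires, the fix is always the same: move the fact to the state document and leave a stable rule behind.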
Document Trust Levels: The Meta-Layer
This is the design detail that separates a good shared brain from a great one.
In any real project, you’ll have multiple documents: architecture specs, module references, auto-generated directory listings, manually written guides. They will contradict each other. It’s inevitable — some get updated, some don’t.
The typical AI response to contradictory documents? It either picks one randomly, or asks you every time: “Document A says X but Document B says Y — which is correct?” Multiply that by 50 modules and you’re spending half your day resolving document conflicts.
The fix: explicitly label each document’s trust level.
✅ Ground Truth — Auto-generated from code, trust unconditionally
✅ Reliable — Manually maintained, kept in sync
⚠️ Authoritative but Stale — Design decisions are valid, specific numbers might be outdated
⚠️ Lagging — Defer to more reliable sources when they conflict
Now when the AI encounters a conflict — “the architecture doc says 957 tests but the state snapshot says 998” — it knows the state snapshot (Ground Truth, auto-generated) wins over the architecture doc (Authoritative but Stale). No human intervention needed.
This sounds like a small thing. In practice, it eliminates an entire category of interruptions.
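Trust levels turn conflict resolution into a lookup rather than a conversation. A minimal sketch — the level names follow the list above, but the document registry and `resolve` helper are hypothetical:

```python
from enum import IntEnum

class Trust(IntEnum):
    """Higher value wins when two documents disagree."""
    LAGGING = 0
    AUTHORITATIVE_BUT_STALE = 1
    RELIABLE = 2
    GROUND_TRUTH = 3  # auto-generated from code

# Hypothetical registry mapping each document to its trust level.
REGISTRY = {
    "state_snapshot.md": Trust.GROUND_TRUTH,
    "architecture.md": Trust.AUTHORITATIVE_BUT_STALE,
    "module_guide.md": Trust.RELIABLE,
}

def resolve(claims: dict[str, str]) -> str:
    """Given {document: claimed_value}, return the value from the
    most trusted document -- no human intervention needed."""
    winner = max(claims, key=lambda doc: REGISTRY[doc])
    return claims[winner]

# "architecture doc says 957 tests, state snapshot says 998"
print(resolve({"architecture.md": "957", "state_snapshot.md": "998"}))  # → 998
```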
How the Shared Brain Flows Between Roles
Here’s the actual workflow:
1. Human opens new chat with Architect
→ Pastes the State Document as context
→ Architect reads it, instantly has full project awareness
→ They discuss. Architect makes a design decision.
2. Human updates State Document with the new decision
3. Human opens chat with Tech Lead
→ Pastes the (updated) State Document
→ Tech Lead sees the Architect's decision, plus all history
→ Produces implementation guidance
4. Human routes guidance to Engineer (local IDE)
→ Engineer has both documents in its project context
→ Implements, runs tests, commits
5. Engineer updates State Document:
"Module X complete. Tests: 47/47. Milestone tagged."
6. Next cycle begins with the updated document.
Three AIs that never share a conversation. Yet through this document, they operate as a team with perfect institutional memory.
The document is the baton in a relay race. Each runner picks it up, runs their leg, updates it, and hands it off.
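The hand-off in step 5 can be as mundane as appending a timestamped milestone line to the state document. A sketch of that update — the file name and entry format here are my assumptions, not a fixed convention:

```python
from datetime import date
from pathlib import Path

def record_milestone(state_path: Path, module: str, tests: str, note: str) -> None:
    """Append a milestone entry to the state document so the next
    role in the relay picks up the baton with current context."""
    entry = f"\n- {date.today().isoformat()} [{module}] {note} Tests: {tests}.\n"
    with state_path.open("a", encoding="utf-8") as f:
        f.write(entry)

state = Path("STATE.md")
state.write_text("# Development Progress\n", encoding="utf-8")
record_milestone(state, "module_x", "47/47", "Module complete. Milestone tagged.")
print(state.read_text(encoding="utf-8"))
```

Because the update is append-only and goes through a single function, every milestone lands in the same format, which keeps the document scannable at speed.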
Practical Tips for Your Shared Brain
Keep it scannable. The document should work at three reading speeds:
- 5 seconds: read the snapshot header, know the current state
- 30 seconds: skim the architecture and status table
- 2 minutes: read the traps and agreements sections
Keep it honest. If a module is broken, say it’s broken. If a document is stale, label it stale. The shared brain is only useful if it reflects reality, not aspirations.
Keep it bounded. If your state document exceeds 20% of the model’s context window, you’re putting in too much. This is a snapshot, not an encyclopedia. Prune aggressively. Details belong in module-specific docs; the shared brain is the index.
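The 20% budget is checkable automatically with a rough token estimate. Below is a sketch using the common ~4-characters-per-token heuristic; the default window size and threshold are parameters to adjust for your model, not fixed facts:

```python
def context_budget_check(snapshot: str, window_tokens: int = 200_000,
                         max_fraction: float = 0.20) -> tuple[int, bool]:
    """Estimate token usage with the rough 4-chars-per-token rule and
    flag snapshots that exceed the budget. Crude on purpose: this is
    a pruning reminder, not an exact tokenizer."""
    est_tokens = len(snapshot) // 4
    return est_tokens, est_tokens <= window_tokens * max_fraction

tokens, ok = context_budget_check("x" * 400_000)  # ~100k estimated tokens
print(tokens, ok)
```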
Automate what you can. Directory structures, test counts, module status — anything that can be generated from code should be. Auto-generated content is Ground Truth by definition. Hand-written content always drifts.
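A generation sketch: walk the source tree and emit a status section from what the code itself says. The directory layout and the per-module file count below are invented for illustration — in a real project you would derive status from test results, tags, or whatever your tooling already produces:

```python
from pathlib import Path

def generate_module_status(src: Path) -> str:
    """Emit a Ground Truth status section by scanning the source
    tree. Anything derived from the code cannot drift from it."""
    lines = ["## Module Status (auto-generated)"]
    for pkg in sorted(p for p in src.iterdir() if p.is_dir()):
        n_files = len(list(pkg.glob("*.py")))
        lines.append(f"- {pkg.name}: {n_files} source file(s)")
    return "\n".join(lines)

# Build a toy tree to demonstrate.
root = Path("demo_src")
(root / "data_pipeline").mkdir(parents=True, exist_ok=True)
(root / "data_pipeline" / "loader.py").write_text("# loader\n")
print(generate_module_status(root))
```

Run a generator like this on every milestone commit and paste its output over the corresponding section of the state document.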
Version it. Your shared brain should be in Git, right next to your code. Every update is a commit. You can diff two snapshots and see exactly what changed between milestones.
What About Claude’s Built-in Memory?
If you’re using Claude, you might know it has a built-in memory feature that persists across conversations. That’s useful for personal preferences and recurring context — but it’s not a substitute for the Shared Brain Protocol.
Why? Three reasons:
Memory is per-model, not cross-model. Claude’s memory doesn’t transfer to GPT-4o or Gemini. Your shared brain document works with any model.
Memory is opaque. You can’t easily audit, version, or diff what Claude “remembers.” Your document is a plain text file in Git — transparent, versionable, auditable.
Memory lacks structure. Built-in memory stores fragments. Your shared brain has explicit sections, trust levels, and update protocols. Structure matters when a project has 100+ modules.
Use both. Let built-in memory handle “the user prefers concise answers” and “the project uses Python 3.11.” Let your shared brain handle the real architectural state.
The Payoff
With the Shared Brain Protocol in place, your three AI roles go from “three strangers who forget everything” to “a team with perfect recall and zero communication overhead.”
The Architect can reference decisions it made three weeks ago (because they’re in the document). The Tech Lead can check if a module passed review (because the review status is in the document). The Engineer can avoid known pitfalls (because the traps are in the document).
And you — the human — stop spending half your time re-explaining context and start spending it on what actually matters: making decisions.
Which brings us to Part 3.
Next up: Part 3 — The Human Conductor: You’re not “using AI.” You’re running a team. Here’s the playbook for the most important role in the Trinity — yours.
Built by a solo dev conducting AI. Follow the journey → @Robbery Allianz