Cognitive Mesh
My Roadmap to Grab AI by the Horns - BDI Meets OODA Loop
It all started about a year ago. Maybe a bit more.
I was stuck in that classic loop: banging my head against the wall trying to steer AI models to do exactly what I wanted. Back then, I was running the basic $10/month GitHub Copilot subscription, trying to get the "most bang for my buck" because, honestly, I didn't fully trust the tech yet. I'd spend my time opening new chat windows constantly just to clear the context—hoping the model wouldn't "derail" and start hallucinating halfway through a task.
But it always did. It would start strong, then completely lose the plot.
Caralho.
Constraints power innovation.
The Era of the "Executor Lusitano"
I started reading papers. Methodology. Steering. At the time, "Personas" were the big thing. So, being me, I decided to inject a bit of the old Portuguese Empire into the AI's brain. I created a chatmode called executor_lusitano.
I gave it a "Council of Reasoning" based on four archetypes:
- CAMÕES (The Explorer): To define the mission and the "Why."
- PESSOA (The Strategist): To generate multiple paths and avoid linear thinking.
- VIEIRA (The Architect): For clean, irrefutable logic and code.
- AGOSTINHO (The Facilitator): For Socratic verification—testing the limits.
And surprise, surprise... it actually worked. The AI started giving me better answers. It had "depth." It felt like I was getting more value out of that tenner than anyone else. But even then, I knew I was just waiting for the models to catch up with the context windows I actually needed.
Derivations & Evolution: You can find derivations and related experiments from those early days on GitHub: github.com/loonix/github_copilot_prs — where the original executor_lusitano chatmode instructions are stored.
"In Portuguese?? That doesn't even seem like you 😁 Lusophone thinking? Sounds interesting"
— Ari, testing the chatmode
The Vertigo of Autonomy
Fast forward. I was still on that $10/month Copilot subscription, constantly looking for alternatives. My friend Lucas told me about Claude—said it was really good. He was on the $90 plan, but I didn't want to pay that much. I went with the lower tier.
Then Lucas showed me how he was using it. Something clicked. I realized I'd been working with AI under constraints the whole time. I wasn't using AI for git commits, pushes, terminal actions—I was using it purely for logic and boilerplate. The stuff that took the most time.
My problem? I was constantly trying to constrain the AI. I wasn't letting it loose. And it was costing me—I had to constantly steer it, correct it, babysit it.
Once I started getting to grips with it, I thought: "I'm going to take the $90 plan."
And man... I built harness after harness. I started building CLIs to mimic that shelved "Empire" way of thinking, but I kept hitting the same wall: Context Rot. The AI would get tired. It would get confused.
The Harness Evolution
First Attempts: Various CLI tools + VSCode extension — Failed
Mengle: JavaScript solution — Too slow. Created pipelines for agents and built a scheduling system.
MengleX: Rust rewrite — First success. The first time I managed to get an agent to work truly by itself. A feeling I could not describe. This was before OpenCLAW was a thing; something called clawdbot appeared only a week or two after my Rust harness was released.
eth0: Open source coming soon
Orbit: Currently building — Classified
Then, I saw a guy online talking about "Ralph Loops." It clicked. I went back to basics. I stripped away the fluff. I removed the personas. I followed the philosophy like a disciple. I built and rebuilt five different versions of autonomous systems. It was wild—I'm talking physical vertigo from lack of sleep and pure mental fatigue.
"That first time MengleX worked by itself... that was a feeling I could not describe."
The Fusion: BDI + OODA
When Claude 4.x models dropped, I noticed they still had gaps. They needed a "final check." I realized that the models were now smart enough that they didn't need me to tell them to act like a 16th-century architect. They just needed the structure.
I took my old Lusitano logic and meshed it with two heavy-duty cognitive frameworks:
BDI (Belief, Desire, Intention)
Beliefs: What do we know about the code now?
Desires: What is the goal?
Intentions: What is the immediate plan we're committing to?
OODA Loop (Observe, Orient, Decide, Act)
A military-grade cycle for reacting to changing environments.
I didn't invent these. But I meshed them. They connect naturally: BDI handles the "internal state" of the agent, while OODA handles the "external execution."
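To make that split concrete, here is a minimal TypeScript sketch of one mesh cycle. All names and shapes are my own illustration, not code from any of the harnesses mentioned above: BDI lives in the agent's state, and a single OODA pass reads beliefs, forms an intention against the desires, and only then acts.

```typescript
// Hypothetical sketch — BDI as internal state, OODA as the execution cycle.
interface Beliefs { facts: string[] }                       // what we know now
interface Desires { goals: string[] }                       // what should be true
interface Intention { plan: string[]; risk: "low" | "medium" | "high" }

interface Agent {
  beliefs: Beliefs;
  desires: Desires;
  intention?: Intention;
}

// One OODA cycle. Observe/Orient fold new facts into beliefs; Decide forms
// an intention aligned with the desires; Act runs only after that alignment.
function oodaCycle(
  agent: Agent,
  observe: () => string[],
  decide: (b: Beliefs, d: Desires) => Intention,
  act: (i: Intention) => boolean,
): Agent {
  // OBSERVE: gather ground truth and update beliefs
  const beliefs: Beliefs = { facts: [...agent.beliefs.facts, ...observe()] };

  // ORIENT + DECIDE: commit to an immediate plan
  const intention = decide(beliefs, agent.desires);

  // ACT: execute, then record the outcome as a new belief
  const ok = act(intention);
  beliefs.facts.push(ok ? "plan executed" : "plan failed");

  return { ...agent, beliefs, intention };
}
```

The point of the sketch is the ordering: beliefs are updated before any decision, and nothing acts without an intention on record.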
Why It Works
Now, instead of a persona, I use a Cognitive Mesh.
If I have a massive issue, I don't just ask for a fix. I tell the system:
"Analyze this issue using BDI and OODA loops. Trigger agents to look for gaps, TODOs, and half-implemented features."
By using this terminology, the AI dissects the problem with surgical precision. It uses the Observe/Orient phase to find the "Beliefs" (the context) and doesn't Act until the Intention is perfectly aligned with the Desire.
It's high-signal, low-verbosity. It's the evolution from "make-believe" personas to "hard-coded" cognitive protocols.
The Mesh in Action
Here's the difference:
Traditional Prompting
"Fix the authentication bug in my app"
Cognitive Mesh
"Analyze this issue using BDI and OODA loops. Trigger agents to look for gaps, TODOs, and half-implemented features."
With Cognitive Mesh, the AI doesn't just "fix." It observes, orients, decides, and acts, with beliefs, desires, and intentions guiding every step.
TASK: Analyze authentication failure
OBSERVE (Beliefs):
- What is the actual error?
- What does the code currently do?
- What's the state of the database?
ORIENT (Context):
- Where does the flow break?
- What are the relevant files?
- What's expected vs actual?
DECIDE (Intentions):
- Plan: Update validation function
- Files: auth.ts, validators.ts
- Risk: Low, isolated change
ACT:
- Execute the plan
- Verify the fix
- Run tests
DESIRES (Goals):
- Users can log in with valid credentials
- Invalid credentials are rejected
- No regression to existing users
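The breakdown above is just structured data, which is exactly why it steers better than a persona. Here is a minimal TypeScript sketch of how it could be captured as a record (the `MeshTask` type and its field names are my own illustration, not part of any real tool):

```typescript
// Hypothetical: one task record mirroring the Observe/Orient/Decide/Act breakdown.
interface MeshTask {
  task: string;
  observe: string[];                                   // Beliefs: ground-truth questions
  orient: string[];                                    // Context: where the flow breaks
  decide: { plan: string; files: string[]; risk: string }; // Intentions: the committed plan
  act: string[];                                       // Execution steps
  desires: string[];                                   // Goals the fix must satisfy
}

const authTask: MeshTask = {
  task: "Analyze authentication failure",
  observe: [
    "What is the actual error?",
    "What does the code currently do?",
    "What's the state of the database?",
  ],
  orient: [
    "Where does the flow break?",
    "What are the relevant files?",
    "What's expected vs actual?",
  ],
  decide: {
    plan: "Update validation function",
    files: ["auth.ts", "validators.ts"],
    risk: "Low, isolated change",
  },
  act: ["Execute the plan", "Verify the fix", "Run tests"],
  desires: [
    "Users can log in with valid credentials",
    "Invalid credentials are rejected",
    "No regression to existing users",
  ],
};
```

Once the task is plain data like this, a harness can validate it, log it, or refuse to enter the Act phase until every Decide field is filled in.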
CLI Example
$ cognitive-mesh analyze --issue "auth failures"
[COGNITIVE MESH: INITIALIZING]
├─ BDI Layer: Loading beliefs...
├─ OODA Loop: Starting observation cycle...
└─ Mesh: Active
[OBSERVE] Gathering ground truth...
├─ Error: Invalid token response
├─ File: src/auth/token-validator.ts
└─ DB: 472 pending tokens
[ORIENT] Analyzing context...
├─ Flow breaks at token validation
├─ Related: middleware/auth.ts
└─ Root cause: Expiry check logic
[DECIDE] Forming intentions...
├─ Plan: Refactor expiry validation
├─ Files: token-validator.ts, auth-types.ts
└─ Risk: Low, isolated change
[ACT] Executing...
├─ ✓ Updated validator
├─ ✓ Added unit tests
└─ ✓ All tests passing
[COGNITIVE MESH: COMPLETE]
Beliefs updated: 3
Intentions executed: 1
Desires satisfied: 100%
The Protocol
From Personas to Protocols
If you're building autonomous tools and you're hitting a wall, stop trying to make the AI "feel" like a person.
Personas are make-believe. They're roleplay. They're fun, but they're soft.
Protocols are hard-coded cognitive structures. They're how actual minds—artificial or biological—process complex problems.
Give it a loop. Give it a mesh.
The AI doesn't need to be Camões or Vieira. It needs to think in structures that work.
The Evolution
- Magic prompts and prayer
- Personas and roleplay (Executor Lusitano)
- Explicit loops (Ralph)
- Cognitive Mesh (BDI × OODA)
Each phase stripped away pretense. Each phase got closer to how actual cognition works.
The future isn't smarter prompts. It's cognitive architectures that reliably process complexity.
The Caravela
Now I'm continuously building a caravela—just like the Portuguese in the Age of Discovery. I'm going to conquer the digital world with my motto and my Cognitive Cartography.
Every line of code, every protocol refinement, every mesh iteration—it's all planks in the hull. Every autonomous system that successfully navigates complexity is another nautical mile charted on the map.
The caravela wasn't just a ship. It was a platform for exploration. A marriage of constraint and innovation that made the impossible possible.
My Cognitive Mesh is the same. It's not a persona. It's not a prompt. It's a vessel—built on BDI and OODA, powered by constraint, designed for discovery.
Constraints power innovation. Innovation builds caravelas. Caravelas conquer worlds.