Reflection

I stopped organizing my AI's memory into folders. Here's why.

The instinct to organize

Every approach I found — PARA, Karpathy's LLM Wiki, Obsidian vaults, hub documents — shared the same assumption: organize your AI's memory like you'd organize your own files. Folders, categories, hierarchies that make sense when you open Finder or Explorer.

After years at Bain & Company, I'm trained to think this way. MECE frameworks, issue trees, hierarchical structures. It's instinctive. My first prototype had a beautiful folder tree — Projects, Areas, Resources, Agents, Archive. Clean, MECE-compliant, perfectly logical.

And completely wrong.

The gap between thinking and remembering

When I asked myself "how do I actually remember things?", the answer wasn't "in folders." It was "by association." I remember what's relevant because I use it often. I remember connections between topics, not their position in a hierarchy.

That realization became the center of everything: "That's how I think — but not how I remember."

My brain uses MECE to analyze problems. But it doesn't store memories that way. It stores by relevance, by frequency, by connections. And an LLM works exactly the same way — you give it context, it finds what's relevant, and it responds. It doesn't browse directories. It reads everything at once, in milliseconds, and finds what matters.

So who are you actually organizing for? The human who never opens the folder, or the machine that reads it all in one pass?

What I did instead

I eliminated all hierarchical folders. A flat lake of documents. A compiler that reads every file and produces a single optimized output — what I call BRAIN.md.

Two categories, no nesting. NEURONS hold what I know — projects, contexts, knowledge, one topic per file. ENABLERS hold how things work — agents, credentials, tools. That's the entire structure. No subfolders. No categories within categories. The compiler creates organization at build time, not the user at write time.
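To make the build-time idea concrete, here is a minimal sketch of what such a compile step could look like. The directory names NEURONS and ENABLERS come from the text; the function name, output layout, and section headers are my own assumptions, not EIDARA's actual implementation.

```python
from pathlib import Path

def compile_brain(root: Path, out: Path) -> str:
    """Compile a flat lake of markdown files into one output file.

    Assumes two flat directories, NEURONS/ and ENABLERS/ (no nesting),
    each holding one topic per .md file. This is an illustrative
    sketch, not the project's real compiler.
    """
    sections = []
    for category in ("NEURONS", "ENABLERS"):
        # glob, not rglob: the lake is flat, so no recursion is needed
        for f in sorted((root / category).glob("*.md")):
            sections.append(f"## {category}: {f.stem}\n\n{f.read_text().strip()}")
    brain = "\n\n".join(sections) + "\n"
    out.write_text(brain)
    return brain
```

Note that the structure a reader sees in BRAIN.md (the `## CATEGORY: topic` headers) is created here, at build time, by the compiler; the source files themselves carry no hierarchy beyond their two flat directories.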

The result

Tens of thousands of characters of source material compress down to ~3.5K tokens of optimized context. Any AI reads it cold, in full, and has everything it needs. No folder navigation. No "where did I put that file?" No hierarchy to maintain.

I've been running this daily across 4 businesses for weeks. It works because I stopped designing for myself and started designing for the reader — which is the machine, not me.


This is part of a series on building practical AI memory systems. EIDARA is the open-source project that came out of this approach. GitHub · Website