
Your AI remembers everything
between conversations.

You use Claude, GPT, and DeepSeek. Every session starts from zero.
DARA is a brain they all share — write once, read everywhere.
Open source. Local. No cloud required.

You've been working with your AI for a week. Multiple conversations — some absurdly long, burning tokens just to keep context alive. Others disconnected, repeating what you already decided three sessions ago.

You copy-paste fragments between threads. You maintain intermediate files by hand. You re-explain your project, your decisions, your preferences — again and again. Every session is a fresh start you didn't ask for.

The AI is smart. But it has no memory. And you're paying — in tokens, in time, in patience — for its amnesia.
DARA gives it a brain that persists across every session, every model, every platform.

How it works

A shared brain that any AI can read and write. A compiler keeps it clean.

WRITE → VAULT/
Any AI writes. Any platform. Follow the rules.

COMPILE → compile.py
10-step pipeline. Validates. Dedupes. Auto-fixes. Locks.

READ → BRAIN.md
Any AI reads. One file. Current. SHA256 verified.

Write anywhere. Compile once. Read everywhere.
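The write → compile → read loop can be sketched in plain Python. This is an illustrative sketch only: `compile_vault`, the vault layout, and the checksum comment are assumptions for the example, not DARA's actual 10-step pipeline, but they show the shape of validate → dedupe → checksum:

```python
import hashlib
from pathlib import Path

def compile_vault(vault_dir: str, out_file: str = "BRAIN.md") -> str:
    """Illustrative compile pass: validate, dedupe, then checksum the result."""
    sections = []
    seen = set()
    for path in sorted(Path(vault_dir).glob("*.md")):
        text = path.read_text(encoding="utf-8").strip()
        if not text:          # validate: skip empty neurons
            continue
        if text in seen:      # dedupe: skip exact duplicates
            continue
        seen.add(text)
        sections.append(f"## {path.stem}\n\n{text}")
    brain = "\n\n".join(sections) + "\n"
    digest = hashlib.sha256(brain.encode("utf-8")).hexdigest()
    Path(out_file).write_text(
        brain + f"\n<!-- sha256: {digest} -->\n", encoding="utf-8"
    )
    return digest
```

Any reader can recompute the SHA256 over the content and compare it to the stored digest, which is what "SHA256 verified" amounts to on the read side.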

See full architecture →

What you actually do: two actions.

No terminals. No file management. Just talk to your AI.

→ Consult DARA

Open any AI and say:
"Check DARA for context on project X."

Seconds later, the AI knows everything.

→ Write to DARA

When something new happens:
"Update DARA with what we just decided."

The AI writes it. Next session, it's there.

What makes it different

Not another chatbot wrapper. A compiled, self-healing, consensus-driven memory protocol.

It compiles, not accumulates

A 10-step pipeline validates, deduplicates, auto-fixes, and checksums your memory with SHA256. Every compile is git-committed. No other system compiles — the rest just pile up.

Self-healing by design

Errors are expected (~10% imprecision). The next AI that spots one fixes it automatically. The system gets better with every interaction — no manual cleanup needed.


Governance by consensus

Deleting or changing content requires 3 independent AI votes. No single AI makes unilateral changes. If you disagree with a flag, you reset it. Built-in democracy.
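A consensus rule like this is easy to make concrete. The sketch below is invented for illustration (the flag structure and field names are assumptions, not DARA's real flag format), but it shows the core rule: only distinct AIs count toward the 3-vote threshold.

```python
REQUIRED_VOTES = 3  # deletion needs 3 independent AI votes

def tally_deletion(flags: list[dict]) -> bool:
    """Return True only if 3+ DISTINCT AIs voted to delete."""
    voters = {f["ai"] for f in flags if f.get("vote") == "delete"}
    return len(voters) >= REQUIRED_VOTES

flags = [
    {"ai": "claude", "vote": "delete"},
    {"ai": "gpt", "vote": "delete"},
    {"ai": "gpt", "vote": "delete"},      # same AI voting twice doesn't count
    {"ai": "deepseek", "vote": "keep"},
]
# Only 2 distinct delete votes out of 3 required: the content stays.
```

Resetting a flag, as the human, would simply mean clearing these entries before the threshold is reached.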

A constitution, not an API

15 rules (W1-W15) govern every interaction. Any AI that reads the rules can participate. No training, no fine-tuning, no API keys per agent. The protocol IS the access control.

Zero infrastructure

Markdown files + Python standard library. No database, no Docker, no API keys, no cloud account. Your files never leave your machine. Runs anywhere Python 3.10+ exists.

Two reading modes

Standard Mode: full BRAIN.md + VAULT files. Light Mode: BRAIN.md only (index + summaries). The protocol adapts to the model — not the other way around.
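The two modes reduce to a single branching decision at read time. A minimal sketch, assuming BRAIN.md references neurons with ordinary markdown links under VAULT/ (an assumption about the vault layout, not DARA's documented format):

```python
import re
from pathlib import Path

def read_context(brain_path: str, mode: str = "light") -> str:
    """Light Mode: BRAIN.md only. Standard Mode: BRAIN.md plus linked VAULT files."""
    brain_file = Path(brain_path)
    brain = brain_file.read_text(encoding="utf-8")
    if mode == "light":
        return brain                       # index + summaries only
    parts = [brain]
    for name in re.findall(r"\((VAULT/[^)]+\.md)\)", brain):
        neuron = brain_file.parent / name  # follow each markdown link
        if neuron.exists():
            parts.append(neuron.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```

A small model with a tight context window would call this with `mode="light"`; a large one would pull the full vault.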

"That's how I think but not how I remember."

The moment I realized memory shouldn't mirror human file systems, but the way knowledge actually connects. Folders are for humans who browse. DARA is for AIs who read.

Three reflections that shaped the system

DARA didn't arrive as an idea. It crystallized after months of watching AI systems fail in specific, revealing ways.

1. Specialization is a trap

A "mail agent" can't help you think. A "code agent" doesn't know your strategy. Real work crosses every boundary. The problem isn't the agent — it's that context doesn't travel with it. Memory must be shared, not siloed.

2. Gatekeeping kills velocity

Restricting who can write sounds safe. But the AI that has the insight right now might not be the "authorized" one. Quality doesn't come from permissions — it comes from validation after the fact. Let anyone write; let the compiler enforce quality.

3. You're not the reader

The instinct is to organize memory like a filing cabinet — folders, categories, hierarchies that make sense to a human who browses. But you never browse it. The AI reads it cold, in full, in milliseconds. Organize for the machine.

"If the human never opens the folder, who are you organizing for?"

The question that collapsed three months of folder-based prototypes into a single flat file. BRAIN.md was born the next day.
Read the full story →

Share knowledge with your team

Each topic lives in its own file — called a neuron. One project = one file. One tool = one file. Plain markdown that any AI understands.

Drop a neuron into a shared drive. Your colleague opens it with their AI — Claude, GPT, DeepSeek, doesn't matter. No install needed. It's just a text file.

Agents work the same way. A file contains the protocol. Any AI that reads it becomes that agent. The intelligence lives in the file, not in who runs it.

# agent-librarian.md
summary: Vault maintenance agent

## Protocol
1. Scan INBOX for pending items
2. Route each to correct neuron
3. Flag ambiguous entries
4. Log actions in changelog

Does it actually work?

4 AI models. 52 tests each. No human intervention. No cherry-picking.

92.6%: Average score across 4 models
670/700: Best score, Opus 4.6 (95.7%)
0: System failures
0: Data corruption
103: Unit tests (100% pass)

No other memory system publishes cross-model test results.
We do because it has to work with your AI — not just ours.

See full results →


Get started in 10 seconds

Download. Give the INSTALL file to your AI. It handles everything — personalization, setup, first compile. You don't type a single terminal command.

v1.0 · MIT License · Python 3.10+ and Git required

Built in the open

DARA is free. MIT licensed. No telemetry. No cloud. Your data never leaves your machine.

Built by Javier Rotllant — pragmatic founder, Bain background, not an engineer. Someone with a problem and the stubbornness to solve it properly.

Star on GitHub →

The foundation is solid. Now it evolves.

Semantic search. MCP server for Cursor and Cline. Conflict detection. Web dashboard. Agent marketplace. Each feature builds on a system that already works — tested, proven, stable.

Cloud sync will come, but it will never be required. Local-first, always.

Read the whitepaper →