Stop Re-Explaining Everything to Your AI

Every AI session starts from zero. You burn thousands of tokens re-explaining context your agent already had yesterday. We built tools to fix that -- local-first memory that gives AI agents the one thing they lack: persistence.

The Cost of Forgetting

Without persistent memory, every session burns 3,000 to 8,000 tokens on context you already provided yesterday.

Without Lore
Session start: Paste 500-2,000 words of context from notes or prior chats
Mid-session: Claude asks questions it already answered last time
Context window: Fills with duplicate background, less room for real work
Overhead: ~3,000-8,000 tokens/session in re-contexting

With Lore (Convo + Docs)
Session start: Auto-loads top-scored recent sessions (~800 tokens)
Project context: Injects compact project summary (~500 tokens)
Context window: Lean context leaves room for productive work
Savings: ~2,000-6,000 tokens/session, compounding daily
~4,000 tokens saved/session × 5 sessions/day × 22 working days = ~440K tokens saved/month

At API pricing (~$3/M input tokens for Sonnet), that's roughly $1.30/month in direct savings. But the real value is fewer wasted turns, faster ramp-up, and more context window for actual work.
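The arithmetic checks out in a few lines, using the document's own estimates:

```python
# Back-of-envelope token savings, using the estimates above.
tokens_saved_per_session = 4_000
sessions_per_day = 5
working_days_per_month = 22

tokens_saved_per_month = (
    tokens_saved_per_session * sessions_per_day * working_days_per_month
)  # 440,000 tokens/month

# Direct cost at ~$3 per million input tokens (Sonnet API pricing).
usd_per_million_tokens = 3.0
monthly_savings_usd = tokens_saved_per_month / 1_000_000 * usd_per_million_tokens

print(tokens_saved_per_month)          # 440000
print(round(monthly_savings_usd, 2))   # 1.32
```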

LoreConvo

Session memory across surfaces

LoreConvo remembers what you talked about. It auto-captures session summaries, decisions, artifacts, and open questions -- then auto-loads the most relevant context when you start a new session. No manual curation required.

Auto-saves sessions via Claude Code hooks
Auto-loads relevant context on session start
Cross-surface persistence (Code, Cowork, Chat)
Session scoring, linking, and full-text search
12 MCP tools for AI-native access
6 CLI commands for human debugging
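Session scoring could work along these lines. This is an illustrative sketch only, not LoreConvo's actual algorithm; the half-life decay and the link-count bonus are assumptions:

```python
import math
import time

def score_session(last_active_ts: float, link_count: int,
                  half_life_days: float = 7.0) -> float:
    """Toy relevance score: recency decay plus a bonus for linked sessions.

    Illustrative only -- LoreConvo's real scoring may differ.
    """
    age_days = (time.time() - last_active_ts) / 86_400
    # Exponential decay: score halves every `half_life_days`.
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return recency + 0.1 * link_count

# A session active today outranks one from two weeks ago.
now = time.time()
assert score_session(now, link_count=0) > score_session(now - 14 * 86_400, link_count=0)
```

The point of any such scheme is the same: pick the few sessions worth auto-loading so start-up context stays near the ~800-token budget.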

LoreDocs

Coming Soon

Knowledge management for AI projects

LoreDocs remembers what you know. It stores versioned documents organized into project vaults -- architecture docs, API contracts, domain rules, config references -- and injects compact summaries into AI context on demand.

Multi-vault architecture (one vault per project)
Document versioning with history tracking
Context injection via vault_inject_summary
Cross-vault search and document linking
34 MCP tools for comprehensive AI access
Free/Pro tier gating (Stripe-ready)
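Under MCP, a client invokes a server tool with a JSON-RPC `tools/call` request over stdio. A request for `vault_inject_summary` might look like the sketch below; the `vault` argument name is a hypothetical, not LoreDocs' documented schema:

```python
import json

# MCP tool invocation is JSON-RPC 2.0. A client asking the LoreDocs
# server for a compact vault summary could send:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "vault_inject_summary",
        "arguments": {"vault": "my-project"},  # argument name is hypothetical
    },
}
print(json.dumps(request))
```

The server's response carries the compact summary text, which the agent folds into its context window.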

Better Together

LoreConvo handles the timeline -- “what did we discuss.” LoreDocs handles the library -- “what do we know.” Together, they give an AI agent both working memory and long-term reference.

Memory for Your AI

Obsidian is your second brain. LoreConvo + LoreDocs are your AI's second brain. Same philosophy, built for agents instead of humans.

Local-First, No Cloud

All data stays on your machine. SQLite + FTS5 storage, stdio MCP transport, zero API calls. Your knowledge never leaves your control.
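The storage layer described above, SQLite with FTS5 on your own disk, can be sketched in a few lines. Table and column names here are illustrative, not Lore's actual schema:

```python
import sqlite3

# In-memory stand-in for the local session store.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE sessions USING fts5(title, summary)")
db.execute(
    "INSERT INTO sessions VALUES (?, ?)",
    ("auth refactor", "Decided to move token refresh into middleware."),
)
db.execute(
    "INSERT INTO sessions VALUES (?, ?)",
    ("docs cleanup", "Reorganized the API reference pages."),
)

# Full-text search, ranked by FTS5's built-in bm25 ordering.
rows = db.execute(
    "SELECT title FROM sessions WHERE sessions MATCH ? ORDER BY rank",
    ("token refresh",),
).fetchall()
print(rows)  # [('auth refactor',)]
```

Everything here runs in-process; there is no server and no network call, which is the whole point of local-first storage.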

Works Across Surfaces

Claude Code, Cowork, Chat. One memory layer across all three surfaces so context follows you wherever you work.

Built in the Labyrinth

LoreConvo and LoreDocs are products of Labyrinth Analytics Consulting -- built to solve real problems in our own agentic AI workflows. They represent the kind of practical, production-grade tooling we bring to every engagement.

Need something similar for your team? We design and build custom agentic AI workflows -- persistent memory layers, MCP servers, multi-agent pipelines, and autonomous task systems -- tailored to your stack and your domain.

Talk to Us