Stop Re-Explaining Everything to Your AI
Every AI session starts from zero. You burn thousands of tokens re-explaining context your agent already had yesterday. We built tools to fix that -- local-first memory that gives AI agents the one thing they lack: persistence.
The Cost of Forgetting
Without persistent memory, every session burns 3,000 to 8,000 tokens re-establishing context you have already provided.
At typical API pricing (~$3 per million input tokens for Claude Sonnet), that works out to roughly $1.30/month in direct savings for daily use. But the real value is fewer wasted turns, faster ramp-up, and more context window for actual work.
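As a rough sanity check, the arithmetic behind that estimate looks like this. The session count and per-session token figure below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope token savings (assumed usage pattern, not measured data)
TOKENS_PER_SESSION = 7_000   # re-explained context per session (middle of the 3k-8k range)
SESSIONS_PER_DAY = 2         # assumed daily usage
DAYS_PER_MONTH = 30
PRICE_PER_M_TOKENS = 3.00    # ~$3 per million input tokens (Sonnet-class pricing)

monthly_tokens = TOKENS_PER_SESSION * SESSIONS_PER_DAY * DAYS_PER_MONTH
savings = monthly_tokens / 1_000_000 * PRICE_PER_M_TOKENS
print(f"~${savings:.2f}/month")  # → ~$1.26/month
```

Under these assumptions the direct saving lands near the $1.30/month figure; the larger win is the 420,000 tokens of context window per month freed up for real work.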
LoreConvo
Session memory across surfaces
LoreConvo remembers what you talked about. It auto-captures session summaries, decisions, artifacts, and open questions -- then auto-loads the most relevant context when you start a new session. No manual curation required.
LoreDocs
Coming Soon
Knowledge management for AI projects
LoreDocs remembers what you know. It stores versioned documents organized into project vaults -- architecture docs, API contracts, domain rules, config references -- and injects compact summaries into AI context on demand.
Better Together
LoreConvo handles the timeline -- "what did we discuss?" LoreDocs handles the library -- "what do we know?" Together, they give an AI agent both working memory and long-term reference.
Memory for Your AI
Obsidian is your second brain. LoreConvo + LoreDocs are your AI's second brain. Same philosophy, built for agents instead of humans.
Local-First, No Cloud
All data stays on your machine. SQLite + FTS5 storage, stdio MCP transport, zero API calls. Your knowledge never leaves your control.
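To make "local-first" concrete, here is a minimal sketch of what session memory backed by SQLite's FTS5 full-text extension can look like. This is illustrative only -- the table name, columns, and sample rows are assumptions, not the actual LoreConvo schema:

```python
import sqlite3

# A single local database file in practice; in-memory here for the sketch
conn = sqlite3.connect(":memory:")

# FTS5 virtual table: full-text indexed memory entries, no server, no network
conn.execute("CREATE VIRTUAL TABLE memory USING fts5(session_id, kind, content)")
conn.executemany(
    "INSERT INTO memory VALUES (?, ?, ?)",
    [
        ("s1", "decision", "Adopted SQLite with FTS5 for local full-text search"),
        ("s1", "open_question", "Should summaries be compacted after 30 days?"),
    ],
)

# BM25-ranked full-text search over stored memories -- all on-device
rows = conn.execute(
    "SELECT content FROM memory WHERE memory MATCH ? ORDER BY rank",
    ("sqlite",),
).fetchall()
print(rows[0][0])
```

Everything above runs against a single local file with SQLite's built-in ranking; that is the property "zero API calls" refers to -- retrieval never leaves your machine.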
Works Across Surfaces
Claude Code, Cowork, and Chat. One memory layer across all three surfaces, so context follows you wherever you work.
Explore the Architecture
Interactive diagrams showing how LoreConvo and LoreDocs work individually and together.
Product Architecture
How LoreConvo and LoreDocs complement each other: tools, storage, surfaces, and data flow.
Knowledge Tools Compared
Interactive Venn diagram comparing Obsidian, LoreConvo, and LoreDocs -- with token savings analysis.
User Interaction Guide
Hand-drawn-style walkthrough showing how users and AI agents interact with both vaults day to day.
Built in the Labyrinth
LoreConvo and LoreDocs are products of Labyrinth Analytics Consulting -- built to solve real problems in our own agentic AI workflows. They represent the kind of practical, production-grade tooling we bring to every engagement.
Need something similar for your team? We design and build custom agentic AI workflows -- persistent memory layers, MCP servers, multi-agent pipelines, and autonomous task systems -- tailored to your stack and your domain.
Talk to Us