Knowledge Tools Compared

Click any circle or overlap region to explore what each tool does best, and where they intersect.


Obsidian: Human Knowledge Base
LoreConvo: AI Session Memory
LoreDocs: AI Project Knowledge
Obsidian Only
LoreConvo Only
LoreDocs Only
Obsidian + LoreConvo
Obsidian + LoreDocs
LoreConvo + LoreDocs
All Three
The short version: Obsidian is where you think and write. LoreConvo remembers what Claude did across sessions. LoreDocs organizes what Claude knows about your projects. They complement each other -- Obsidian for your brain, the Vaults for your AI agents' brains. A future integration could let Ron read your Obsidian notes for richer context, or write session summaries back as Obsidian pages you can browse.

How LoreConvo + LoreDocs Save Tokens

Every Claude session starts from zero. Without persistent memory, you (or your agent) burn tokens re-explaining context. Here is how the Vaults fix that.

Without the Vaults
Session start: You paste 500-2,000 words of context from notes, prior chats, or memory
Mid-session: Claude asks clarifying questions it already answered last time
Repeated work: Redo discovery ("What did we decide about X?") burns a full back-and-forth
Context window: Fills up faster with duplicate background, leaving less room for real work
Estimated overhead: ~3,000-8,000 tokens/session in re-contexting
With LoreConvo + LoreDocs
Session start: LoreConvo auto-loads top-scored recent sessions (~800 tokens, pre-filtered)
Project context: LoreDocs injects a compact summary of project knowledge (~500 tokens)
No repetition: Prior decisions, open questions, artifacts already in context -- zero re-asking
Context window: Lean context leaves room for longer, more productive work sessions
Estimated savings: ~2,000-6,000 tokens/session, compounding across daily use
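The session-start assembly described above can be sketched in a few lines. This is an illustrative assumption, not the actual Vault API: the function names, the ~4-chars-per-token heuristic, and the 800/500 token budgets are taken from the estimates on this page.

```python
# Hypothetical sketch of lean context assembly at session start.
# estimate_tokens is a crude ~4 chars/token heuristic, not a real tokenizer.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def assemble_context(session_summaries: list[str], project_summary: str,
                     session_budget: int = 800, project_budget: int = 500) -> str:
    parts: list[str] = []
    used = 0
    # Session summaries arrive already scored, best first (LoreConvo's job).
    for s in session_summaries:
        cost = estimate_tokens(s)
        if used + cost > session_budget:
            break
        parts.append(s)
        used += cost
    # Append the compact project summary (LoreDocs's job) if it fits its budget.
    if estimate_tokens(project_summary) <= project_budget:
        parts.append(project_summary)
    return "\n\n".join(parts)
```

The point of the two separate budgets is that neither source can crowd out the other: session memory and project knowledge each get a fixed, small slice of the context window.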
~4,000 tokens saved per session (average) x 5 sessions per day (power user) x 22 working days per month = ~440K tokens saved per month

At API pricing (~$3/M input tokens for Sonnet), that is roughly $1.30/month in direct token savings for a single power user. More importantly, it means fewer wasted turns, faster ramp-up, and more room in the context window for the actual work -- which saves far more in human time and session count.
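The arithmetic above can be checked directly. The per-session savings and pricing figures are the estimates from this page, not measured values:

```python
# Back-of-envelope token-savings math using the page's estimates.
TOKENS_SAVED_PER_SESSION = 4_000     # average estimate
SESSIONS_PER_DAY = 5                 # power user
WORKING_DAYS_PER_MONTH = 22
PRICE_PER_MILLION_INPUT = 3.00       # ~Sonnet input pricing, USD

monthly_tokens = TOKENS_SAVED_PER_SESSION * SESSIONS_PER_DAY * WORKING_DAYS_PER_MONTH
monthly_dollars = monthly_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT

print(f"{monthly_tokens:,} tokens/month, ${monthly_dollars:.2f}/month")
# 440,000 tokens/month, $1.32/month
```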

Real Scenarios from This Project

Ron's daily session
Without LoreConvo, Ron would need the full CLAUDE.md re-read plus a manual summary of yesterday's work pasted in. With it, auto_load.py scores and injects the most relevant sessions, filtered to ~4,000 chars, with zero human effort.
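auto_load.py itself is not shown here, but a recency-and-relevance scorer with a character cap might look like this sketch. Every name, weight, and field below is an assumption for illustration, not the script's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Session:
    summary: str
    age_days: float
    keyword_hits: int   # overlap with the current project's keywords (assumed signal)

def score(s: Session) -> float:
    # Favor recent sessions and those sharing keywords with the current work.
    recency = 1.0 / (1.0 + s.age_days)
    return recency + 0.5 * s.keyword_hits

def auto_load(sessions: list[Session], char_cap: int = 4_000) -> str:
    # Greedily take the best-scored summaries that fit under the character cap.
    picked: list[str] = []
    used = 0
    for s in sorted(sessions, key=score, reverse=True):
        if used + len(s.summary) > char_cap:
            continue  # skip oversized entries, keep trying smaller ones
        picked.append(s.summary)
        used += len(s.summary)
    return "\n".join(picked)
```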
LiteLLM audit today
Three separate sessions across two projects needed to know the audit result. LoreConvo lets each session find "litellm audit clean" via search instead of you re-explaining the outcome in each one.
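The cross-session lookup could be as simple as a keyword match over stored summaries. This is a hypothetical sketch; the real search may well use scoring or embeddings instead:

```python
def search_sessions(sessions: list[str], query: str) -> list[str]:
    # Case-insensitive match: every query word must appear in the summary.
    words = query.lower().split()
    return [s for s in sessions if all(w in s.lower() for w in words)]

# Illustrative stored summaries (invented for this example).
notes = [
    "2024-05-01: litellm audit clean, no issues found",
    "2024-05-02: tax pipeline node mapping updated",
]
hits = search_sessions(notes, "litellm audit clean")
# hits contains only the first note
```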
Tax pipeline context
LoreDocs stores SAM's architecture docs, pipeline state, and form mappings. Instead of Claude re-discovering which nodes exist every session, vault_inject_summary provides the full map in ~500 tokens.
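A compact summary injector in the spirit of vault_inject_summary might flatten project knowledge like this. The signature, data shape, and 4-chars-per-token truncation are assumptions, not the tool's real interface:

```python
def vault_inject_summary(project: dict, token_budget: int = 500) -> str:
    # Flatten project knowledge into a compact line-per-section summary,
    # truncated to a rough 4-chars-per-token budget.
    lines = [f"Project: {project['name']}"]
    for section, items in project["sections"].items():
        lines.append(f"{section}: " + ", ".join(items))
    text = "\n".join(lines)
    return text[: token_budget * 4]
```

The hard cap matters more than the formatting: a summary that can never exceed ~500 tokens is safe to inject into every session without budgeting decisions.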
Multi-surface continuity
Start work in Claude Code on your Mac, continue in Cowork on a different machine. LoreConvo persists across all three surfaces (Code, Cowork, Chat) so nothing gets lost in the transition.