Your AI Knowledge Stack

Four tools, four distinct roles. Obsidian, NotebookLM, LoreConvo, and LoreDocs each own a different layer of your knowledge stack, and here is how they work together.


Obsidian: Human Knowledge Base
NotebookLM: Research Synthesizer
LoreConvo: AI Session Memory
LoreDocs: AI Project Knowledge

Obsidian

Your second brain

Where you think, connect ideas, and build personal knowledge. Markdown files, graph view, 800+ plugins. Designed for humans to write and browse.

Google NotebookLM

Your research assistant

Upload documents, get AI-grounded Q&A and audio summaries. Excellent for consuming and synthesizing existing material. No API, no agent access.

LoreConvo

Your AI's working memory

Auto-saves and auto-loads Claude session context across Code, Cowork, and Chat. Your agents pick up where they left off without you re-explaining anything.

LoreDocs

Your AI's reference library

Structured project knowledge organized in vaults. Architecture docs, domain rules, API contracts -- queryable by AI agents via 34 MCP tools.

The knowledge stack: Obsidian is where you think and write. NotebookLM helps you consume and synthesize research. LoreConvo remembers what your AI did across sessions. LoreDocs stores what your AI knows about your projects. Each tool owns a different layer. None of them replace each other -- and using all four means nothing falls through the cracks, whether the reader is you or your AI agent.

How the Lore Stack Saves Tokens

Every Claude session starts from zero. Without persistent memory, you burn tokens re-explaining context. NotebookLM helps you research faster, but the Lore products keep your AI agents fast too.

Without the Lore Stack
Session start: You paste 500-2,000 words of context from notes, prior chats, or memory
Mid-session: Claude asks clarifying questions it already answered last time
Repeated work: You redo discovery -- "What did we decide about X?" burns a full back-and-forth
Context window: Fills up faster with duplicate background, leaving less room for real work
Overhead: ~3,000-8,000 tokens/session spent on re-contexting
With LoreConvo + LoreDocs
Session start: LoreConvo auto-loads top-scored recent sessions (~800 tokens, pre-filtered)
Project context: LoreDocs injects a compact vault summary (~500 tokens)
No repetition: Prior decisions, open questions, and artifacts are already in context
Context window: Lean context leaves room for longer, more productive sessions
Savings: ~2,000-6,000 tokens/session, compounding across daily use
~4,000 tokens saved/session x 5 sessions/day x 22 working days/month = ~440K tokens saved/month

At API pricing (~$3/M input tokens for Sonnet), that is roughly $1.30/month in direct savings for a power user. The real win is fewer wasted turns, faster ramp-up, and more context window for actual work -- which saves far more in human time and session count. NotebookLM saves you research time on the input side; the Lore products save your AI time on the execution side.
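The arithmetic above can be reproduced with a short script. This is a minimal sketch using the page's own illustrative figures (4,000 tokens/session, 5 sessions/day, 22 working days, ~$3 per million input tokens); the function name and defaults are assumptions for the example, not part of any Lore API.

```python
def monthly_savings(tokens_per_session: int = 4_000,
                    sessions_per_day: int = 5,
                    working_days: int = 22,
                    usd_per_million_tokens: float = 3.0) -> tuple[int, float]:
    """Estimate monthly token savings and the direct API cost saved.

    Returns (tokens saved per month, USD saved per month).
    """
    tokens = tokens_per_session * sessions_per_day * working_days
    cost = tokens / 1_000_000 * usd_per_million_tokens
    return tokens, cost

tokens, cost = monthly_savings()
print(f"{tokens:,} tokens/month, about ${cost:.2f} saved")  # prints: 440,000 tokens/month, about $1.32 saved
```

Plugging in your own session count or a different model's per-token price changes the dollar figure, but the structure of the estimate stays the same: the savings scale linearly with how often you start sessions.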