LoreHaven — AI Forgets Who You Are. Every Single Time.
Context Architecture · Decomposition · Specification Precision
Every conversation with AI starts the same way: you explain who you are, what you're working on, and how you like to work. Then the session ends and the AI forgets everything. Next conversation, you start over. The more you use AI, the more time you spend re-introducing yourself. The tool that's supposed to save you time is wasting it on the same onboarding loop, every single session.
LoreHaven exists because that problem annoyed me enough to solve it. The idea is simple: write down who you are once — your role, your projects, your preferences, how you think — and have that context load automatically into every AI conversation. No copy-pasting. No re-explaining. You open Claude and it already knows you.
The implementation is a local MCP server that runs on your machine and serves your personal context to any connected AI tool. The vault has four layers: your Lore (the core document, 300–600 words), permanent reference files (5–10 curated documents the AI can see), active project workspaces, and temporary session artifacts. The architecture bets on curation over volume — every file you expose gets read, so what you include matters more than how much you store. Irrelevant context doesn't just waste tokens; it actively degrades the AI's performance.
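The Lore layer's 300–600 word budget lends itself to a mechanical check before the document is served. This is a minimal sketch, not LoreHaven's actual API; the names `validateLore`, `LORE_MIN`, and `LORE_MAX` are illustrative.

```typescript
// Hypothetical sketch: enforce the Lore layer's word budget.
// Names here (validateLore, LORE_MIN, LORE_MAX) are illustrative,
// not LoreHaven's real interface.
const LORE_MIN = 300;
const LORE_MAX = 600;

function wordCount(text: string): number {
  // Split on any run of whitespace; filter drops the empty string
  // produced when the input is blank.
  return text.trim().split(/\s+/).filter(Boolean).length;
}

function validateLore(text: string): { words: number; ok: boolean } {
  const words = wordCount(text);
  return { words, ok: words >= LORE_MIN && words <= LORE_MAX };
}

// A ten-word draft fails the budget check.
const draft = "Staff engineer, infra team, prefers terse answers and typed code.";
console.log(validateLore(draft)); // { words: 10, ok: false }
```

A check like this fits naturally at write time, so a Lore that has drifted over budget is flagged before it starts crowding the context window.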
Built with TypeScript, the Anthropic MCP SDK, and stdio transport. The hardest design problem wasn't the protocol or the server — it was deciding what not to expose.
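The "what not to expose" decision can be made mechanical with an explicit allowlist: anything not deliberately curated stays invisible to the model. A minimal sketch under assumed names — the allowlist contents and file paths are hypothetical, not LoreHaven's real layout:

```typescript
// Hypothetical sketch of curation over volume: only files on an explicit
// allowlist are served to the AI; everything else in the vault stays private.
// File names and the EXPOSED set are illustrative.
const EXPOSED = new Set([
  "lore.md",
  "reference/writing-style.md",
  "reference/current-projects.md",
]);

function visibleFiles(vaultFiles: string[]): string[] {
  return vaultFiles.filter((path) => EXPOSED.has(path));
}

const vault = [
  "lore.md",
  "reference/writing-style.md",
  "scratch/old-notes.md", // never curated, so never exposed
];
console.log(visibleFiles(vault)); // ["lore.md", "reference/writing-style.md"]
```

Defaulting to hidden inverts the usual failure mode: forgetting a file costs one missing document rather than a context window polluted by everything in the vault.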
Architecture
How context flows from the user to AI tools — and back
- Lore: the core document, 300–600 words, always loaded
- Reference files: auto-indexed, AI-readable
- Project workspaces: per-project README plus working files
- Session artifacts: named by date + UUID, then reviewed and migrated
- Watcher: rebuilds the index on change
- Connection: registered in the Claude Desktop config
- Secrets: stored in the OS keychain (keytar)
- Distribution: pull on first launch, no install required (Tier 1 value)
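Wiring a local stdio MCP server into Claude Desktop is a single entry in its `claude_desktop_config.json`. The server name and command path below are placeholders, not LoreHaven's actual install location:

```json
{
  "mcpServers": {
    "lorehaven": {
      "command": "node",
      "args": ["/path/to/lorehaven/dist/server.js"]
    }
  }
}
```

Claude Desktop launches the command itself and speaks the protocol over the process's stdin/stdout, which is why the server needs no open port and no separate install step for the client.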