LoreHaven MCP Server
A context architecture system that gives AI structured access to a personal knowledge vault.
Context Architecture · Decomposition · Specification Precision
Every conversation with AI starts the same way: you explain who you are, what you're working on, and how you like to work. Then the session ends and the AI forgets everything. Next conversation, you start over. The more you use AI, the more time you spend re-introducing yourself. The tool that's supposed to save you time is wasting it on the same onboarding loop, every single session.
LoreHaven exists because that problem annoyed me enough to solve it. The idea is simple: write down who you are once — your role, your projects, your preferences, how you think — and have that context load automatically into every AI conversation. No copy-pasting. No re-explaining. You open Claude and it already knows you.
The implementation is a local MCP server that runs on your machine and serves your personal context to any connected AI tool. The vault has four layers: your Lore (the core document, 300-600 words), permanent reference files (5-10 curated documents the AI can see), active project workspaces, and temporary session artifacts. The architecture bets on curation over volume: every file you expose gets read, so what you include matters more than how much you store. Irrelevant context doesn't just waste tokens; it actively degrades the AI's performance.
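The four layers and the curation rule can be sketched as a small TypeScript model. This is illustrative only: the type and function names (`VaultLayer`, `selectExposed`) and the sample file paths are invented for the example, not the project's actual API. The point it demonstrates is that temporary session artifacts are excluded from the default exposed set, so they never dilute the context window.

```typescript
// Hypothetical model of the four vault layers; names are illustrative.
type VaultLayer = "lore" | "reference" | "project" | "session";

interface VaultFile {
  path: string;
  layer: VaultLayer;
  words: number;
}

// Curation over volume: expose the core Lore, the small curated reference
// set, and active project files. Temporary session artifacts stay out of
// the default context.
function selectExposed(files: VaultFile[]): VaultFile[] {
  return files.filter((f) => f.layer !== "session");
}

const vault: VaultFile[] = [
  { path: "lore.md", layer: "lore", words: 450 }, // core document, 300-600 words
  { path: "refs/style-guide.md", layer: "reference", words: 800 },
  { path: "projects/lorehaven/notes.md", layer: "project", words: 1200 },
  { path: "sessions/scratch.md", layer: "session", words: 300 },
];

console.log(selectExposed(vault).map((f) => f.path));
```

Keeping the exposure decision in one function like this makes the curation bet explicit: adding a file to the vault and adding it to the AI's context are two separate acts.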
Built with TypeScript, the Anthropic MCP SDK, and stdio transport. The hardest design problem wasn't the protocol or the server — it was deciding what not to expose.
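To give a feel for what the server does over its stdio transport, here is a hand-rolled sketch of MCP-style resource handling. The real server is built on the Anthropic MCP SDK rather than raw JSON-RPC, and the `lore://` URIs and in-memory vault are invented for the example; this only shows the request/response shape.

```typescript
// Hand-rolled sketch of MCP-style resource handling over JSON-RPC.
// The actual server uses the Anthropic MCP SDK; this shows only the shape.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// Illustrative in-memory vault; the real server reads curated files from disk.
const vaultResources: Record<string, string> = {
  "lore://lore.md": "Who I am, what I work on, how I think.",
  "lore://refs/style-guide.md": "Writing preferences and conventions.",
};

function handleRequest(req: JsonRpcRequest): object {
  switch (req.method) {
    case "resources/list":
      // Advertise every curated file as a resource the client may read.
      return {
        jsonrpc: "2.0",
        id: req.id,
        result: { resources: Object.keys(vaultResources).map((uri) => ({ uri })) },
      };
    case "resources/read": {
      const uri = req.params?.uri as string;
      return {
        jsonrpc: "2.0",
        id: req.id,
        result: { contents: [{ uri, text: vaultResources[uri] ?? "" }] },
      };
    }
    default:
      return {
        jsonrpc: "2.0",
        id: req.id,
        error: { code: -32601, message: "Method not found" },
      };
  }
}

// Over stdio the server would read requests from stdin and write responses
// to stdout; here we call the handler directly.
const res = handleRequest({ jsonrpc: "2.0", id: 1, method: "resources/list" });
console.log(JSON.stringify(res));
```

The "what not to expose" decision lives entirely in what the list handler advertises: anything left out of that response simply does not exist as far as the connected AI is concerned.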