Introduction
The Problem
In AI-assisted coding, a developer and an AI can build an entire service together. This is productive, but every design decision — why option A over B, which trade-offs were accepted, what was rejected — never lands inside the repo.
There's no shortage of places to store this context: CLAUDE.md, SDD system, Notion pages, commit messages, PR descriptions, AI chat transcripts. The problem isn't storage — it's everything after storage:
- Retrieval is broken. When you're editing a function, you need the decisions relevant to that function right now. Keyword search across flat documents doesn't deliver that: you'd have to already know a decision exists, and roughly where it lives, to find it.
- Knowledge is isolated. A decision about the auth middleware and a decision about the payment flow may be deeply related (one caused the other), but in flat docs they're just two unrelated paragraphs. There's no organic connection between pieces of knowledge.
- None of it is consumption-friendly. Raw AI transcripts are too long. Summarized docs lose nuance. Neither format helps a teammate build real understanding of why the codebase is shaped the way it is.
What Context Chain Does
Context Chain is a local knowledge graph that extracts design decisions from your codebase and AI coding sessions, stores them in Memgraph, and serves them back to your coding AI via MCP.
Existing context engineering tools (OpenSpec, Git-AI, Dexicon) capture knowledge at the spec or repo level. Context Chain goes deeper:
- Function-level anchoring — decisions tied to specific functions via Joern CPG, not floating above the repo
- Automatic staleness detection — code changes flag affected decisions; knowledge doesn't silently rot
- Decision-level extraction — not prompt summaries or raw transcripts, but what was chosen, what was rejected, and why
- Decision relationships — `CAUSED_BY`, `DEPENDS_ON`, `CONFLICTS_WITH` edges across a graph, not flat files
- Runs on your subscription — uses `claude -p` (Claude CLI), no API costs. Run heavy extraction overnight; during the day your coding AI gets instant context from what's already in the graph
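One way the staleness detection above can work — a minimal sketch, not Context Chain's actual implementation — is to store a content hash of the anchored function alongside each decision and flag the decision whenever the function body's hash no longer matches. All names below (`Decision`, `checkStaleness`, the field layout) are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of a stored decision anchored to a function.
// Field names are illustrative, not Context Chain's real schema.
interface Decision {
  id: string;
  summary: string;
  anchoredFunction: string; // function name from the CPG
  sourceHash: string;       // hash of the function body at decision time
  stale: boolean;
}

// Normalize whitespace so formatting-only edits don't trigger staleness.
function hashSource(body: string): string {
  const normalized = body.replace(/\s+/g, " ").trim();
  return createHash("sha256").update(normalized).digest("hex");
}

// Re-check a decision against the current function body.
function checkStaleness(decision: Decision, currentBody: string): Decision {
  return { ...decision, stale: decision.sourceHash !== hashSource(currentBody) };
}

// A decision recorded against one version of a function...
const original = "function retry(n: number) { return n < 3; }";
const decision: Decision = {
  id: "d1",
  summary: "Cap retries at 3 to avoid hammering the payment gateway",
  anchoredFunction: "retry",
  sourceHash: hashSource(original),
  stale: false,
};

// ...stays fresh while the code is unchanged, and is flagged after an edit.
const fresh = checkStaleness(decision, original);
const afterEdit = checkStaleness(decision, "function retry(n: number) { return n < 5; }");
console.log(fresh.stale, afterEdit.stale); // false true
```

Hashing the normalized body rather than the raw file means reformatting alone doesn't invalidate knowledge; only a semantic edit to the anchored function marks its decisions for review.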
```
Codebase → Joern CPG → LLM extracts decisions per function
         → Memgraph (graph DB) → MCP Server (9 tools)
         → Claude Code queries context while you code
```
Tech Stack
| Component | Technology |
|---|---|
| Graph DB | Memgraph |
| Code analysis | Joern (CPG) |
| Decision extraction | Claude CLI / Anthropic API |
| MCP Server | TypeScript + @modelcontextprotocol/sdk |
| Dashboard | Hono + vanilla HTML/JS |
| Container | Docker Compose |
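To make the graph model concrete, here's a minimal in-memory sketch of decision nodes connected by `CAUSED_BY` edges, with a traversal that surfaces every upstream cause of a decision. The node and edge shapes are assumptions for illustration; the real store is Memgraph, queried over its Cypher interface:

```typescript
// Illustrative decision-graph model; shapes are assumptions,
// not Context Chain's actual Memgraph schema.
type EdgeType = "CAUSED_BY" | "DEPENDS_ON" | "CONFLICTS_WITH";

interface DecisionNode { id: string; summary: string; }
interface Edge { from: string; to: string; type: EdgeType; }

class DecisionGraph {
  private nodes = new Map<string, DecisionNode>();
  private edges: Edge[] = [];

  addDecision(node: DecisionNode): void { this.nodes.set(node.id, node); }

  link(from: string, to: string, type: EdgeType): void {
    this.edges.push({ from, to, type });
  }

  // Breadth-first walk along CAUSED_BY edges: collects every
  // transitive cause of a decision, skipping cycles via `seen`.
  causesOf(id: string): DecisionNode[] {
    const result: DecisionNode[] = [];
    const queue = [id];
    const seen = new Set<string>([id]);
    while (queue.length > 0) {
      const current = queue.shift()!;
      for (const e of this.edges) {
        if (e.from === current && e.type === "CAUSED_BY" && !seen.has(e.to)) {
          seen.add(e.to);
          result.push(this.nodes.get(e.to)!);
          queue.push(e.to);
        }
      }
    }
    return result;
  }
}

// Usage: the auth-middleware decision caused the payment-flow decision,
// so traversal from the payment decision surfaces its root cause.
const g = new DecisionGraph();
g.addDecision({ id: "auth", summary: "JWT validation moved into middleware" });
g.addDecision({ id: "pay", summary: "Payment flow trusts req.user from middleware" });
g.link("pay", "auth", "CAUSED_BY");
console.log(g.causesOf("pay").map(d => d.id)); // [ 'auth' ]
```

This is exactly the connection flat docs lose: the auth and payment decisions would be two unrelated paragraphs, while the graph makes one a one-hop traversal from the other.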
Current Status
Core pipeline (analyze_function + full-scan runner + MCP Server + Dashboard + session ingestion + Joern CPG + cross-repo linking) is production-tested on a multi-repo TypeScript project. Semantic vector search and refinement pipeline are code-complete.
Roadmap
| Area | What's coming |
|---|---|
| Consumption layer | Immersive KT system and team knowledge map — help people understand the codebase, not just AI |
| Agent support | Currently Claude Code; adding Cursor, Windsurf, Cline, Copilot, and other MCP-compatible agents |
| Multi-source ingestion | Slack threads, Notion docs, meeting transcripts — not just code and AI sessions |
| Spec-driven workflow | OpenSpec-style proposal → spec → design → implement, with decisions auto-anchored after implementation |