Introduction

The Problem

In AI-assisted coding, a developer plus an AI can independently build an entire service. This is productive, but the design decisions — why option A over B, what trade-offs were accepted, what was rejected — don't live inside the repo.

There's no shortage of places to store this context: CLAUDE.md, SDD system, Notion pages, commit messages, PR descriptions, AI chat transcripts. The problem isn't storage — it's everything after storage:

  • Retrieval is broken. When you're editing a function, you need the decisions relevant to that function right now. Keyword search across flat documents doesn't deliver that — you'd have to already know the decision exists, and roughly where it is, to find it.
  • Knowledge is isolated. A decision about the auth middleware and a decision about the payment flow may be deeply related (one caused the other), but in flat docs they're just two unrelated paragraphs. There's no organic connection between pieces of knowledge.
  • None of it is consumption-friendly. Raw AI transcripts are too long. Summarized docs lose nuance. Neither format helps a teammate build real understanding of why the codebase is shaped the way it is.

What Context Chain Does

Context Chain is a local knowledge graph that extracts design decisions from your codebase and AI coding sessions, stores them in Memgraph, and serves them back to your coding AI via MCP.

Existing context engineering tools (OpenSpec, Git-AI, Dexicon) capture knowledge at the spec or repo level. Context Chain goes deeper:

  • Function-level anchoring — decisions tied to specific functions via Joern CPG, not floating above the repo
  • Automatic staleness detection — code changes flag affected decisions; knowledge doesn't silently rot
  • Decision-level extraction — not prompt summaries or raw transcripts, but what was chosen, what was rejected, and why
  • Decision relationships — CAUSED_BY, DEPENDS_ON, CONFLICTS_WITH edges across a graph, not flat files
  • Runs on your subscription — uses claude -p (Claude CLI), no API costs. Run heavy extraction overnight; during the day your coding AI gets instant context from what's already in the graph

Codebase → Joern CPG → LLM extracts decisions per function
    → Memgraph (graph DB) → MCP Server (9 tools)
        → Claude Code queries context while you code
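
The extraction step's output can be pictured as one small record per decision. A minimal sketch in TypeScript — the field names (`anchorFunction`, `chosen`, `rejected`, etc.) are illustrative assumptions for this sketch, not Context Chain's actual schema:

```typescript
// Illustrative shape of an extracted design decision.
// Field names are assumptions for this sketch, not the real schema.
type EdgeKind = "CAUSED_BY" | "DEPENDS_ON" | "CONFLICTS_WITH";

interface Decision {
  id: string;
  anchorFunction: string;   // function the decision is anchored to (via Joern CPG)
  chosen: string;           // what was chosen
  rejected: string[];       // alternatives considered and dropped
  rationale: string;        // why the chosen option won
  edges: { kind: EdgeKind; target: string }[]; // links to other decisions
}

const example: Decision = {
  id: "dec-042",
  anchorFunction: "authMiddleware",
  chosen: "JWT with short-lived access tokens",
  rejected: ["server-side sessions", "long-lived API keys"],
  rejected_note: undefined as never, // (no extra fields; kept minimal)
  rationale: "Stateless auth keeps the middleware horizontally scalable",
  edges: [{ kind: "CAUSED_BY", target: "dec-017" }],
} as Decision;

console.log(example.edges[0].kind); // prints "CAUSED_BY"
```

The point of the shape: "what was rejected and why" is first-class data, and the `edges` array is what turns two unrelated paragraphs into a traversable relationship.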

Tech Stack

Component             Technology
Graph DB              Memgraph
Code analysis         Joern (CPG)
Decision extraction   Claude CLI / Anthropic API
MCP Server            TypeScript + @modelcontextprotocol/sdk
Dashboard             Hono + vanilla HTML/JS
Container             Docker Compose
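
Because decisions live in Memgraph, "related decisions" is a graph traversal rather than a text search. A hedged sketch of the kind of Cypher an MCP tool might issue — node labels, property names, and the `ANCHORED_TO` relationship are assumptions for illustration, not the project's real schema:

```typescript
// Build a parameterized Cypher query that finds decisions anchored to a
// function, plus any decisions linked to them by relationship edges.
// Labels and property names here are illustrative assumptions.
function relatedDecisionsQuery(
  functionName: string
): { text: string; params: Record<string, string> } {
  return {
    text: `
      MATCH (f:Function {name: $fn})<-[:ANCHORED_TO]-(d:Decision)
      OPTIONAL MATCH (d)-[r:CAUSED_BY|DEPENDS_ON|CONFLICTS_WITH]->(other:Decision)
      RETURN d, type(r) AS edge, other
    `,
    params: { fn: functionName },
  };
}

const q = relatedDecisionsQuery("authMiddleware");
console.log(q.params.fn); // prints "authMiddleware"
```

Parameterizing `$fn` instead of interpolating the name keeps the query safe to expose through an MCP tool that accepts arbitrary function names.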

Current Status

Core pipeline (analyze_function + full-scan runner + MCP Server + Dashboard + session ingestion + Joern CPG + cross-repo linking) is production-tested on a multi-repo TypeScript project. Semantic vector search and refinement pipeline are code-complete.
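
Staleness detection can be sketched as comparing a hash of the anchored function's current source against the hash stored when the decision was extracted. This is an illustrative mechanism, assuming decisions store a content hash — the real detector may work differently (e.g. diffing CPG nodes):

```typescript
import { createHash } from "node:crypto";

// A decision remembers a hash of the function body it was extracted from.
// If the current body hashes differently, the decision is flagged stale.
// (Illustrative mechanism, not necessarily Context Chain's implementation.)
function hashSource(src: string): string {
  return createHash("sha256").update(src.trim()).digest("hex");
}

function isStale(storedHash: string, currentSource: string): boolean {
  return hashSource(currentSource) !== storedHash;
}

const original = "function pay(amount) { return charge(amount); }";
const stored = hashSource(original);

console.log(isStale(stored, original)); // prints false
console.log(isStale(stored, "function pay(a) { return charge(a, 'usd'); }")); // prints true
```

Flagging rather than deleting matters: a stale decision may still be correct, so it should surface for review instead of silently rotting or silently vanishing.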

Roadmap

Area                     What's coming
Consumption layer        Immersive KT system and team knowledge map — help people understand the codebase, not just AI
Agent support            Currently Claude Code; adding Cursor, Windsurf, Cline, Copilot, and other MCP-compatible agents
Multi-source ingestion   Slack threads, Notion docs, meeting transcripts — not just code and AI sessions
Spec-driven workflow     OpenSpec-style proposal → spec → design → implement, with decisions auto-anchored after implementation

Released under the Apache 2.0 License.