# Dogfooding Setup
Use Context Chain to analyze itself. Two instances, two Memgraphs, one codebase.
## Why Two Instances?
To test the full pipeline on real code, run a second instance (`context-chain-dev`) that analyzes the original Context Chain codebase. This way:
- Every pipeline gets tested on real code
- The original repo gets its own decision graph — useful when developing Context Chain itself
- The two Memgraph databases are fully isolated
## Architecture

```
~/dev/context-chain/          ← The product (what you're building)
├── Memgraph :7687            ← Decisions for your business repos
├── Dashboard :3001
└── .mcp.json                 ← Points to context-chain-dev's MCP server

~/dev/context-chain-dev/      ← The dogfood instance
├── Memgraph :7688            ← Decisions about context-chain itself
├── Dashboard :3003
├── Memgraph Lab :3002
└── ckg.config.json           ← repos: [{ name: "context-chain", path: "..." }]
```

Data flow:

```
context-chain-dev reads context-chain's source code
        ↓
Joern CPG + LLM extract decisions → Memgraph :7688
        ↓
MCP Server exposes decisions
        ↓
Claude Code in ~/dev/context-chain queries MCP → gets decisions about itself
```

## Setup
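The steps that follow assume a few tools are already installed. A minimal preflight check (the tool list is inferred from the setup steps in this guide; adjust as needed):

```shell
# Preflight: confirm the tools used in the setup steps are on PATH
for tool in git docker node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok ($(command -v "$tool"))"
  else
    echo "$tool: MISSING"
  fi
done
```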
### 1. Clone the Dogfood Instance

```bash
cd ~/dev
git clone git@github.com:YOUR_ORG/context-chain.git context-chain-dev
cd context-chain-dev
npm install
```

### 2. Configure Ports
Edit `docker-compose.yml` in `context-chain-dev` to avoid port collisions:
| Component | Production | Dogfood |
|---|---|---|
| Memgraph (bolt) | 7687 | 7688 |
| Memgraph Lab | 3000 | 3002 |
| Dashboard | 3001 | 3003 |
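Before editing the compose file, you can confirm the dogfood ports are actually free. A quick sketch using bash's `/dev/tcp` probe (`nc -z` works the same way):

```shell
# Report whether each dogfood port is free or already bound on localhost
# (ports taken from the table above)
for port in 7688 7445 3002 3003; do
  if (exec 3<>/dev/tcp/127.0.0.1/"$port") 2>/dev/null; then
    echo "port $port: already in use"
  else
    echo "port $port: free"
  fi
done
```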
```yaml
services:
  memgraph:
    container_name: ckg-memgraph-dev
    ports:
      - "7688:7687"
      - "7445:7444"
    volumes:
      - memgraph-data-dev:/var/lib/memgraph
      - memgraph-log-dev:/var/log/memgraph

  memgraph-lab:
    container_name: ckg-memgraph-lab-dev
    ports:
      - "3002:3000"

volumes:
  memgraph-data-dev:
  memgraph-log-dev:
```

### 3. Configure Target Repo
Edit `ckg.config.json` in `context-chain-dev`:
```json
{
  "project": "context-chain",
  "ai": { "provider": "claude-cli" },
  "repos": [
    {
      "name": "context-chain",
      "path": "/absolute/path/to/context-chain",
      "type": "backend",
      "cpgFile": "data/context-chain.json",
      "language": "javascript",
      "srcDir": "src",
      "packages": []
    }
  ]
}
```

### 4. Create a Run Wrapper
Create `run.sh` in `context-chain-dev`:

```bash
#!/bin/bash
export CKG_MEMGRAPH_PORT=7688
export DASHBOARD_PORT=3003
exec npm run "$@"
```

```bash
chmod +x run.sh
```

### 5. Start and Initialize
```bash
docker compose up -d
CKG_MEMGRAPH_PORT=7688 npm run db:schema
./run.sh dashboard   # → http://localhost:3003
```

### 6. Run the Pipeline
From the dashboard at `localhost:3003`:

- **System** → Generate CPG for `context-chain`
- **Run** → Execute the full pipeline (skip the `link` phase for a single repo)
- **Overview** → Verify decisions appear
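Besides the Overview page, you can verify from the command line. A sketch assuming `mgconsole` is installed and that decisions are stored under a `:Decision` label (the label name is an assumption; check your schema):

```shell
# Count extracted decisions in the dogfood graph (bolt port 7688)
QUERY='MATCH (d:Decision) RETURN count(d) AS decisions;'
if command -v mgconsole >/dev/null 2>&1; then
  echo "$QUERY" | mgconsole --host 127.0.0.1 --port 7688
else
  echo "mgconsole not found; inspect the graph in Memgraph Lab at http://localhost:3002"
fi
```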
### 7. Connect MCP

In `~/dev/context-chain/.mcp.json`:
```json
{
  "mcpServers": {
    "context-chain": {
      "command": "/bin/bash",
      "args": ["/absolute/path/to/context-chain-dev/mcp-start.sh"],
      "env": {
        "CKG_MEMGRAPH_PORT": "7688"
      }
    }
  }
}
```

Now Claude Code in `~/dev/context-chain` can query design decisions about its own codebase.
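A malformed `.mcp.json` is an easy way to lose time, so a quick syntax check helps before restarting Claude Code (a sketch assuming `python3` is available; `jq .` works equally well):

```shell
# Validate MCP config syntax before restarting Claude Code
CONFIG="$HOME/dev/context-chain/.mcp.json"
if python3 -m json.tool "$CONFIG" >/dev/null 2>&1; then
  echo "valid JSON: $CONFIG"
else
  echo "invalid or missing: $CONFIG"
fi
```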
## Daily Usage
| Task | Command |
|---|---|
| Start dogfood Memgraph | `cd context-chain-dev && docker compose up -d` |
| Open dogfood dashboard | `./run.sh dashboard` → `localhost:3003` |
| Run any script | `./run.sh <script>` (auto-sets port 7688) |
| Regenerate CPG | Dashboard → System → Regenerate CPG |
| View production data | `localhost:3001` (port 7687) |
| View dogfood data | `localhost:3003` (port 7688) |
## Important Notes
- **Don't push dev-specific files:** `ckg.config.json`, `docker-compose.yml`, `run.sh`, and `.env` should be gitignored or reverted before pushing
- **Code changes go through dev first:** make improvements in `context-chain-dev`, test, then push and pull them into the original repo
- **CPG staleness detection** works from the second regeneration onward (the first run has no hash baseline to diff against)
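Gitignoring only protects untracked files such as `.env`; if `ckg.config.json`, `docker-compose.yml`, or `run.sh` are tracked upstream, `git update-index --skip-worktree` is one way to keep local edits out of pushes (a sketch; undo it with `--no-skip-worktree` before pulling upstream changes to those files):

```shell
# Tell git to ignore local modifications to tracked, dev-only files
cd ~/dev/context-chain-dev
git update-index --skip-worktree ckg.config.json docker-compose.yml run.sh

# Undo later with:
# git update-index --no-skip-worktree ckg.config.json docker-compose.yml run.sh
```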