Code health dashboard for AI-assisted codebases.
When AI writes most of your code and nobody reads it, the important problems become invisible: security gaps, silent error handling, broken implementations, dead code, inconsistencies. CodeView surfaces them.
Note: CodeView uses an LLM to analyze your code. Like any AI-generated output, reports may contain false positives or miss real issues entirely. Treat the results as a starting point for understanding project health, not a definitive audit. Also be aware that analysis consumes tokens roughly in proportion to the size of your codebase — large projects will use significantly more tokens per scan.
```
┌─────────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│  Claude Code        │ ──▶  │  .codeview/      │ ──▶  │ Web App (SPA)   │
│  (codeview skill)   │      │  ├─ report.json  │      │ reads + renders │
│  analyzes codebase  │      │  └─ state.json   │      │                 │
└─────────────────────┘      └──────────────────┘      └─────────────────┘
   incremental via            source of truth          pure UI, zero deps
   git diff
```
CodeView uses Claude Code as the analysis engine. Instead of building language-specific linters, it delegates semantic analysis to an LLM that already understands code in context. The output is a structured JSON report that a zero-dependency web app visualizes as a health tree.
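The report format is specified in SCHEMA.md; as a rough illustration only (the field names below are hypothetical, not the actual schema), a report might look like:

```json
{
  "generatedAt": "2024-01-01T00:00:00Z",
  "files": [
    {
      "path": "src/api/auth.ts",
      "findings": [
        {
          "dimension": "silent-error-handling",
          "severity": "high",
          "line": 42,
          "message": "empty catch block swallows login failures"
        }
      ]
    }
  ]
}
```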
- Sensitive data exposure (hardcoded secrets, API keys, tokens)
- Lack of authentication on sensitive endpoints
- Known vulnerabilities in dependencies
- Input validation gaps (SQL/command injection, XSS)
- Secret leakage in logs
- Silent error handling — empty catch blocks, `.catch(() => {})`
- Default-return masking — returning null/empty on error paths to hide bugs
- Broken implementation (code that doesn't do what it claims)
- Unhandled async (floating promises, missing `await`)
- Implementation drift (docs/comments contradict the code)
- Dead code, duplicate code
- Type safety escapes (`any`, `@ts-ignore`, unsafe casts)
- Commented-out code, TODO/FIXME debt
- Complexity hotspots, long files / god objects
- Orphaned/circular dependencies
- Inconsistent patterns (error handling, naming)
- Test coverage
- Logging hygiene
- Hardcoded configuration
- Side effects on module load
See skill/codeview/reference/dimensions.md for the full list with detection patterns.
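As a concrete illustration of what the silent-error-handling and default-return-masking dimensions look for, here is the kind of code they would flag (an invented snippet, not from any real project):

```javascript
// Flagged as silent error handling: the failure is swallowed, so callers
// never learn that the save went wrong.
async function saveSettings(settings) {
  try {
    await fetch("/api/settings", { method: "POST", body: JSON.stringify(settings) });
  } catch (e) {} // empty catch, exactly the pattern listed above
}

// Flagged as default-return masking: a missing or corrupt file becomes
// indistinguishable from "no settings saved yet".
function loadSettings(path) {
  try {
    return JSON.parse(require("fs").readFileSync(path, "utf8"));
  } catch {
    return {};
  }
}
```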
```
git clone https://github.com/<your-org>/codeview ~/codeview
cd ~/codeview
./install.sh
```
That's it. The installer:
- Detects your Claude Code install (`~/.claude`)
- Symlinks the skill into `<claude-dir>/skills/codeview`
- Offers to put the CLI launcher on your PATH (system symlink or shell alias — you pick)
Then restart Claude Code once so it picks up the new skill.
Options:
```
./install.sh --no-launcher   # skill only, skip the CLI setup
./install.sh --dir=~/.claude # force a specific Claude Code dir
./install.sh --help          # show all options
```
Update later:
```
cd ~/codeview && git pull
```
The symlinks follow the repo, so pulling is enough — no need to re-run the installer.
Uninstall:
```
cd ~/codeview && ./uninstall.sh
```
CodeView has two independently installable parts:
| Part | What | Install location |
|---|---|---|
| The skill | Pure instructions for Claude Code — tells it how to analyze and what JSON to produce | ~/.claude/skills/codeview/ |
| The app | Node.js CLI + web UI that reads .codeview/report.json and renders the dashboard | Anywhere (run directly, alias, or /usr/local/bin symlink) |
They communicate only through the .codeview/report.json file. You can install one without the other (e.g. use the skill in a CI pipeline and only publish the JSON, or run the app on a pre-generated report).
Part 1 — skill:
```
git clone https://github.com/<your-org>/codeview ~/codeview
# pick whichever Claude Code install you use
mkdir -p ~/.claude/skills
ln -s ~/codeview/skill/codeview ~/.claude/skills/codeview
```
Restart Claude Code once so it picks up the new skill.
Part 2 — launcher:
```
# Option A: run directly from the clone
~/codeview/bin/codeview

# Option B: add to PATH (needs sudo on macOS/Linux)
sudo ln -s ~/codeview/bin/codeview /usr/local/bin/codeview

# Option C: shell alias (no sudo)
echo "alias codeview='~/codeview/bin/codeview'" >> ~/.zshrc
```
Once both parts are installed:
```
cd ~/my-project
```
In Claude Code:
```
/codeview
```
Or naturally: "use the codeview skill to scan this project."
Claude Code reads your codebase, detects the tech stack, analyzes files, and writes .codeview/report.json. Then in your terminal:
```
codeview   # or the full path if not on PATH: <codeview project dir>/bin/codeview
```
Opens the dashboard in your browser with the tree + issues view.
Each project gets its own server with a unique port (auto-assigned in 4100-4199). State is tracked at ~/.codeview/running.json so multiple projects can run simultaneously without conflict.
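A minimal sketch of what first-free-port allocation over a state file could look like (the function and the running.json shape shown here are assumptions for illustration, not the actual implementation):

```javascript
// `instances` mimics a hypothetical ~/.codeview/running.json:
// { "<project path>": { "port": 4100, "pid": 12345 }, ... }
function pickPort(instances, lo = 4100, hi = 4199) {
  const used = new Set(Object.values(instances).map((i) => i.port));
  for (let p = lo; p <= hi; p++) {
    if (!used.has(p)) return p; // first unclaimed port in the range wins
  }
  throw new Error("no free port in 4100-4199");
}
```

With ports 4100 and 4101 already claimed, `pickPort` returns 4102; with an empty state file it returns 4100.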
```
codeview            # start (or reuse existing) for current project
codeview list       # show all running instances with PID, port, project
codeview stop       # stop the instance for the current project
codeview stop --all # stop every running instance
codeview open       # open browser to an already-running instance
codeview help       # show all commands
```
Running codeview twice in the same project won't spawn a duplicate — it detects the existing instance and just opens the browser. The first available port in 4100-4199 is used; pass --port <n> to override.
First run scans the entire codebase. Subsequent runs only re-analyze:
- Files changed since the last scanned git commit (`git diff <lastCommit> HEAD --name-only`)
- Files whose content hash changed (for non-git projects)
Findings are merged into the existing report so old analysis isn't lost.
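The merge step can be sketched as follows (a simplification: findings are assumed to carry a `file` field, and the real report structure lives in SCHEMA.md):

```javascript
// Drop old findings only for files that were re-analyzed, then append the
// fresh findings, so results for untouched files survive incremental scans.
function mergeFindings(oldFindings, newFindings, changedFiles) {
  const changed = new Set(changedFiles);
  return oldFindings.filter((f) => !changed.has(f.file)).concat(newFindings);
}
```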
See SCHEMA.md for the full report format.
Early prototype. Scaffold complete, iterating on:
- Skill prompt quality (what to ask Claude to analyze)
- UI polish (tree interactions, issue grouping)
- Incremental diff strategy
MIT