Feat: Added Memory-graph.go , tools.go, peek.go #136
drQedwards wants to merge 33 commits into supermodeltools:main from
Conversation
Walkthrough
Adds a persistent, file-based memory graph system storing typed nodes and weighted edges in JSON with concurrent read/write access, including search and traversal queries. Integrates graph inspection utilities and six new MCP tools for node management, edge creation, searching, and traversal operations.

Changes
Sequence Diagrams

```mermaid
sequenceDiagram
    actor Client as MCP Client
    participant Server as MCP Server
    participant Tools as memorygraph Tools
    participant Core as Graph Core
    participant Disk as JSON Storage
    Client->>Server: callTool("upsert_memory_node", {rootDir, type, label, content})
    Server->>Tools: ToolUpsertMemoryNode(opts)
    Tools->>Core: UpsertNode(...)
    Core->>Core: Load graph (lock mutex)
    Core->>Core: Create/update node in cache
    Core->>Disk: Write to .supermodel/memory-graph.json
    Core-->>Tools: Return *Node
    Tools-->>Server: Return status + GraphStats
    Server-->>Client: Return formatted result
```
```mermaid
sequenceDiagram
    actor Client as MCP Client
    participant Server as MCP Server
    participant Tools as memorygraph Tools
    participant Core as Graph Core
    participant Scoring as Scoring Engine
    participant Async as Async Worker
    Client->>Server: callTool("search_memory_graph", {rootDir, query, topK, maxDepth})
    Server->>Tools: ToolSearchMemoryGraph(opts)
    Tools->>Core: SearchGraph(query, topK, maxDepth)
    Core->>Core: Load graph + index nodes
    Core->>Scoring: Score all nodes (substring match + access-count bias)
    Scoring-->>Core: Top-K ranked results
    Core->>Core: Expand neighbors (1-hop, optional relation filter)
    Core->>Async: Asynchronously bump access counts
    Core-->>Tools: Return SearchResult {Direct, Neighbors, TotalNodes, TotalEdges}
    Tools-->>Server: Return formatted results + stats
    Server-->>Client: Return search outcome
```
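The scoring step in the diagram above (substring match plus an access-count bias) can be sketched roughly as follows; the weights and the log-dampened popularity term are illustrative assumptions, not the PR's actual formula:

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// scoreNode sketches "substring match + access-count bias": label hits rank
// above content hits, and frequently accessed nodes get a small, dampened
// boost so popularity never outweighs an actual match.
func scoreNode(query, label, content string, accessCount int) float64 {
	q := strings.ToLower(query)
	score := 0.0
	if strings.Contains(strings.ToLower(label), q) {
		score += 2.0
	}
	if strings.Contains(strings.ToLower(content), q) {
		score += 1.0
	}
	if score > 0 {
		// Logarithmic bias: hot nodes edge ahead without dominating.
		score += 0.1 * math.Log1p(float64(accessCount))
	}
	return score
}

func main() {
	hot := scoreNode("cache", "Cache invalidation", "Notes on cache TTLs", 9)
	cold := scoreNode("cache", "Unrelated", "cache mention only", 0)
	fmt.Println(hot > cold) // true: label match + popularity outrank a content-only hit
}
```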
```mermaid
sequenceDiagram
    actor Client as MCP Client
    participant Server as MCP Server
    participant Tools as memorygraph Tools
    participant Core as Graph Core
    participant BFS as Traversal Engine
    participant Async as Async Worker
    Client->>Server: callTool("retrieve_with_traversal", {rootDir, startNodeID, maxDepth})
    Server->>Tools: ToolRetrieveWithTraversal(opts)
    Tools->>Core: RetrieveWithTraversal(startNodeID, maxDepth)
    Core->>Core: Load graph + find start node
    Core->>BFS: Depth-limited BFS from start
    BFS->>BFS: For each neighbor: compute decayed relevance (edge weight × depth decay)
    BFS-->>Core: All reachable nodes sorted by depth, then relevance
    Core->>Async: Asynchronously bump access counts for all returned nodes
    Core-->>Tools: Return []TraversalResult
    Tools-->>Server: Return formatted traversal + stats
    Server-->>Client: Return reachable nodes in order
```
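The traversal above can be sketched as a depth-limited BFS whose relevance decays by edge weight times a per-depth factor, with results ordered by depth then relevance. The types and the 0.5 decay constant here are illustrative assumptions, not the PR's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

type edge struct {
	to     string
	weight float64
}

type traversalResult struct {
	id        string
	depth     int
	relevance float64
}

// traverse performs a depth-limited BFS; each hop multiplies the accumulated
// relevance by the edge weight and a fixed per-depth decay factor.
func traverse(adj map[string][]edge, start string, maxDepth int) []traversalResult {
	const decay = 0.5 // hypothetical per-depth decay
	visited := map[string]bool{start: true}
	frontier := []traversalResult{{id: start, depth: 0, relevance: 1.0}}
	var out []traversalResult
	for len(frontier) > 0 {
		cur := frontier[0]
		frontier = frontier[1:]
		out = append(out, cur)
		if cur.depth == maxDepth {
			continue
		}
		for _, e := range adj[cur.id] {
			if visited[e.to] {
				continue
			}
			visited[e.to] = true
			frontier = append(frontier, traversalResult{
				id:        e.to,
				depth:     cur.depth + 1,
				relevance: cur.relevance * e.weight * decay,
			})
		}
	}
	// Sort by depth, then by descending relevance, matching the diagram.
	sort.SliceStable(out, func(i, j int) bool {
		if out[i].depth != out[j].depth {
			return out[i].depth < out[j].depth
		}
		return out[i].relevance > out[j].relevance
	})
	return out
}

func main() {
	adj := map[string][]edge{
		"a": {{to: "b", weight: 1.0}, {to: "c", weight: 0.4}},
		"b": {{to: "d", weight: 0.8}},
	}
	for _, r := range traverse(adj, "a", 2) {
		fmt.Printf("%s depth=%d relevance=%.2f\n", r.id, r.depth, r.relevance)
	}
}
```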
Estimated Code Review Effort
🎯 4 (Complex) | ⏱️ ~50 minutes
The addition spans multiple files with heterogeneous logic: concurrent cache management with RW-mutexes, graph algorithms (Jaccard scoring, BFS traversal with decay), JSON persistence, and tool wrapping. The apparent duplicate

Possibly Related PRs
Suggested Reviewers
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Tools have been tested in TypeScript on ppm and context+; we just translated them to Go for the cmd directory.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7baaa7f777
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 8
🧹 Nitpick comments (2)
cmd/Peek.go (1)

218-231: Swap the hand-rolled insertion sort for `sort.SliceStable`: same behavior, one line, and you get O(n log n) for free when the graph grows.

The comment says "insertion sort is fine for the typical small N" and that's true right now, but `PeekList` returns every node, and the moment someone calls this on a 10k-node graph you're eating 50M comparisons for what should be a ~140k-comparison sort.

♻️ Proposed refactor

```diff
-func sortNodePeeks(peeks []NodePeek) {
-	// Insertion sort is fine for the typical small N here.
-	for i := 1; i < len(peeks); i++ {
-		for j := i; j > 0; j-- {
-			a, b := peeks[j-1], peeks[j]
-			if a.Node.AccessCount < b.Node.AccessCount ||
-				(a.Node.AccessCount == b.Node.AccessCount && a.Node.Label > b.Node.Label) {
-				peeks[j-1], peeks[j] = peeks[j], peeks[j-1]
-			} else {
-				break
-			}
-		}
-	}
-}
+func sortNodePeeks(peeks []NodePeek) {
+	sort.SliceStable(peeks, func(i, j int) bool {
+		if peeks[i].Node.AccessCount != peeks[j].Node.AccessCount {
+			return peeks[i].Node.AccessCount > peeks[j].Node.AccessCount
+		}
+		return peeks[i].Node.Label < peeks[j].Node.Label
+	})
+}
```

Don't forget to add `"sort"` to the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/Peek.go` around lines 218 - 231, Replace the hand-rolled insertion sort in sortNodePeeks with sort.SliceStable to get simpler code and better scalability: import "sort" and call sort.SliceStable(peeks, func(i, j int) bool { pi, pj := peeks[i], peeks[j]; if pi.Node.AccessCount != pj.Node.AccessCount { return pi.Node.AccessCount > pj.Node.AccessCount } return pi.Node.Label < pj.Node.Label }) so ordering remains: descending AccessCount, tie-break by ascending Label.

cmd/tools.go (1)
222-244: Minor: the `maxDepth` default is duplicated between caller and callee; pick one home.

`RetrieveWithTraversal` already applies the "<= 0 ⇒ 2" default internally (Memory-graph.go:405–407). The wrapper repeats the same logic at lines 233–236 only to produce the "(depth limit: N)" header string. Harmless today, but the moment the default changes in one place and not the other, the header lies to the user.

Easy fix: return the effective `maxDepth` from `RetrieveWithTraversal` (or surface it via a field on `TraversalResult[0]`), and have the wrapper just print what it was told. Otherwise, a single shared helper like `defaultMaxDepth(d int) int` keeps them honest.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/tools.go` around lines 222 - 244, ToolRetrieveWithTraversal currently recomputes the default maxDepth causing a possible mismatch with RetrieveWithTraversal; change RetrieveWithTraversal to surface the effective depth it used (e.g., return the computed effectiveMaxDepth alongside results) and update ToolRetrieveWithTraversal to use that returned effective depth in the header instead of re-applying the defaulting logic, removing the local maxDepth fallback; reference RetrieveWithTraversal and ToolRetrieveWithTraversal (and keep formatTraversalResult usage unchanged).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/Memory-graph.go`:
- Around line 178-206: The upsert currently replaces g.Nodes[i].Metadata (and
treats nil as wipe) and also increments AccessCount on every upsert; change
UpsertNode so it merges incoming metadata into the existing map instead of
assigning it (preserve existing keys when the provided metadata is nil or
partial), i.e. iterate over metadata and set/overwrite only those keys on
g.Nodes[i].Metadata, ensuring g.Nodes[i].Metadata is initialized if nil; also
remove the AccessCount++ from the update branch so writes don’t count as reads
(leave AccessCount increment only in read paths like SearchGraph,
RetrieveWithTraversal, Peek or functions that explicitly represent reads).
- Around line 136-150: The save function (save, graphPath) currently writes
directly with os.WriteFile which can leave a truncated/corrupt JSON if
interrupted; change it to a safe write-temp-then-rename pattern: create a temp
file in the same directory (use os.CreateTemp with filepath.Dir(path)), write
the marshaled data to that temp file, call File.Sync() and close it, set the
final permissions (0o644) if needed, then atomically replace the target by
os.Rename(tempPath, path); on any error ensure the temp file is removed to avoid
leaks and return wrapped errors similar to existing fmt.Errorf messages.
- Around line 1-5: The three files declaring "package memorygraph"
(Memory-graph.go, Peek.go, tools.go) must be moved out of the cmd/ directory
into a new internal/memorygraph/ directory so the package name matches its
directory and avoids the "found packages cmd and memorygraph in cmd" error;
create internal/memorygraph/, move those three files there, keep their package
declaration as "package memorygraph", and update any imports/usages in the rest
of the codebase to import "internal/memorygraph" instead of referencing them
from package cmd.
- Around line 93-134: load() returns a pointer to shared graphData whose slices
(g.Nodes, g.Edges) are mutated under mu by UpsertNode, CreateRelation,
PruneStaleLinks and bumpAccess, causing data races when callers (SearchGraph,
RetrieveWithTraversal, GetGraphStats, Peek, PeekList) iterate those slices
without locking; fix by either holding mu.RLock() for the entire read-scan in
each reader (SearchGraph, RetrieveWithTraversal, GetGraphStats, Peek, PeekList)
or, preferably, under mu.Lock() create and return a shallow snapshot copy of
graphData with copied slices (copy nodes/edges into new slices) so readers can
iterate safely without holding the lock—choose one strategy and update the
listed functions accordingly to reference g.Nodes and g.Edges consistently to
eliminate concurrent mutation of shared backing arrays.
- Around line 316-317: The build fails because the type SearchResult referenced
by SearchGraph() and instantiated in Memory-graph.go is missing and the function
AddInterlinkedContext called from tools.go does not exist (only
ToolAddInterlinkedContext is implemented); to fix, add a concrete SearchResult
type definition containing the fields that SearchGraph() returns/uses (match the
struct shape used where it's instantiated/returned) and either implement
AddInterlinkedContext as a thin wrapper that calls ToolAddInterlinkedContext or
change the callers to call ToolAddInterlinkedContext consistently; ensure the
symbol names SearchResult, SearchGraph, AddInterlinkedContext and
ToolAddInterlinkedContext are updated so all references compile.
In `@cmd/Peek.go`:
- Around line 94-138: PeekList currently computes outDeg and inDeg then discards
them, wasting O(E) time; remove these unused maps and their population logic and
the `_ = outDeg / _ = inDeg` lines. Update the loop that iterates g.Edges to
only build edgesOut/edgesIn and increment nothing else, leaving
NodePeek/EdgePeek as-is (FormatPeekList already uses len(p.EdgesOut) for
degrees). If you prefer the alternative later, add a new "scan-only" mode in
PeekList that skips building edgesOut/edgesIn and instead populates only
outDeg/inDeg, but do not keep unused maps in the current implementation.
In `@cmd/tools.go`:
- Around line 177-194: The call to AddInterlinkedContext from
ToolAddInterlinkedContext is missing and the anonymous struct shim should be
removed: implement func AddInterlinkedContext(rootDir string, items
[]InterlinkedItem, autoLink bool) (*InterlinkedResult, error) in the memorygraph
package (matching InterlinkedItem and InterlinkedResult types) and update
ToolAddInterlinkedContext to pass opts.Items directly (remove the manual items
copy and the anonymous struct), and fix package declarations so files that
belong to the memorygraph implementation use package memorygraph (ensure
analyze.go and other files in the same directory are in the correct package) so
the build compiles.
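The write-temp-then-rename pattern described in the save comment above can be sketched as a standalone function; the function name, temp-file pattern, and error messages here are placeholders, not the PR's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// saveAtomic marshals v, writes it to a temp file in the target's directory,
// fsyncs, then renames over the target so readers never observe a truncated
// memory-graph.json even if the process dies mid-write.
func saveAtomic(path string, v any) error {
	data, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return fmt.Errorf("marshal graph: %w", err)
	}
	tmp, err := os.CreateTemp(filepath.Dir(path), ".memory-graph-*.json")
	if err != nil {
		return fmt.Errorf("create temp file: %w", err)
	}
	defer os.Remove(tmp.Name()) // no-op after a successful rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return fmt.Errorf("write temp file: %w", err)
	}
	if err := tmp.Sync(); err != nil {
		tmp.Close()
		return fmt.Errorf("sync temp file: %w", err)
	}
	if err := tmp.Close(); err != nil {
		return fmt.Errorf("close temp file: %w", err)
	}
	if err := os.Chmod(tmp.Name(), 0o644); err != nil {
		return fmt.Errorf("chmod temp file: %w", err)
	}
	if err := os.Rename(tmp.Name(), path); err != nil {
		return fmt.Errorf("replace %s: %w", path, err)
	}
	return nil
}

func main() {
	path := filepath.Join(os.TempDir(), "memory-graph.json")
	if err := saveAtomic(path, map[string]any{"nodes": []string{}}); err != nil {
		panic(err)
	}
	fmt.Println("saved", path)
}
```

The rename is atomic on POSIX filesystems when source and target share a directory, which is why the temp file is created next to the target rather than in the default temp dir.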
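The snapshot-under-lock strategy suggested in the data-race comment above can be sketched with simplified placeholder types — writers mutate under the lock, readers get fresh slices whose backing arrays writers never touch:

```go
package main

import (
	"fmt"
	"sync"
)

type node struct{ ID string }
type edgeRec struct{ Source, Target string }

type graph struct {
	mu    sync.RWMutex
	nodes []node
	edges []edgeRec
}

// snapshot copies both slices under the read lock so callers can iterate
// without holding the lock and without racing later appends.
func (g *graph) snapshot() ([]node, []edgeRec) {
	g.mu.RLock()
	defer g.mu.RUnlock()
	nodes := make([]node, len(g.nodes))
	copy(nodes, g.nodes)
	edges := make([]edgeRec, len(g.edges))
	copy(edges, g.edges)
	return nodes, edges
}

func (g *graph) addNode(n node) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.nodes = append(g.nodes, n)
}

func main() {
	g := &graph{}
	g.addNode(node{ID: "a"})
	nodes, _ := g.snapshot()
	g.addNode(node{ID: "b"}) // does not affect the earlier snapshot
	fmt.Println(len(nodes))  // 1
}
```

This trades an O(V+E) copy per read for lock-free iteration; for graphs small enough to live in one JSON file that is usually the simpler and safer choice.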
---
Nitpick comments:
In `@cmd/Peek.go`:
- Around line 218-231: Replace the hand-rolled insertion sort in sortNodePeeks
with sort.SliceStable to get simpler code and better scalability: import "sort"
and call sort.SliceStable(peeks, func(i, j int) bool { pi, pj := peeks[i],
peeks[j]; if pi.Node.AccessCount != pj.Node.AccessCount { return
pi.Node.AccessCount > pj.Node.AccessCount } return pi.Node.Label < pj.Node.Label
}) so ordering remains: descending AccessCount, tie-break by ascending Label.
In `@cmd/tools.go`:
- Around line 222-244: ToolRetrieveWithTraversal currently recomputes the
default maxDepth causing a possible mismatch with RetrieveWithTraversal; change
RetrieveWithTraversal to surface the effective depth it used (e.g., return the
computed effectiveMaxDepth alongside results) and update
ToolRetrieveWithTraversal to use that returned effective depth in the header
instead of re-applying the defaulting logic, removing the local maxDepth
fallback; reference RetrieveWithTraversal and ToolRetrieveWithTraversal (and
keep formatTraversalResult usage unchanged).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bfc9da10-c740-4be5-a9eb-8ad0514c37a5
📒 Files selected for processing (3)
- cmd/Memory-graph.go
- cmd/Peek.go
- cmd/tools.go
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Memorygraph is uppercase
Actionable comments posted: 3
🧹 Nitpick comments (2)
cmd/Memory-graph.go (2)

506-512: Neighbors in `SearchGraph` don't get access-bumped; only `directIDs` do.

Tiny inconsistency vs. `RetrieveWithTraversal`, which bumps every node in `results` (line 506-512). In `SearchGraph` (line 416) only the direct matches are bumped, so a node that got surfaced purely because it was a one-hop neighbor never gets its popularity counter nudged, even though it was returned to the user.

Not a correctness bug, just a slight asymmetry that will skew `AccessCount`-based popularity ranking over time in favor of direct-match nodes. If the popularity bias in `scoreNode` matters to you long-term, consider bumping both `directIDs` and `neighborIDs`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/Memory-graph.go` around lines 506 - 512, SearchGraph currently only bumps access for directIDs whereas RetrieveWithTraversal bumps every node in results, causing neighbor nodes returned by SearchGraph to miss popularity updates; update the goroutine that calls bumpAccess so it collects and passes both directIDs and neighborIDs (or all IDs from results) to bumpAccess (function bumpAccess and the local vars directIDs, neighborIDs, results are the targets) so every node returned to the user gets its access count incremented.
597-610: `tokenOverlap` does substring matching, not token matching; "go" will match "google".

Right now `strings.Contains(text, t)` on a token checks if the token appears anywhere inside the raw text string, not whether it appears as its own word. Concrete example:

- query = `"go"`, text = `"googled it yesterday"` → `strings.Contains("googled it yesterday", "go")` is true → counts as a hit.
- query = `"cat"`, text = `"concatenate the strings"` → also a hit.

If you want real token overlap, check against the token set of `text`, similar to what `tokenSet` already does:

🧹 Suggested tighter version

```diff
 func tokenOverlap(query, text string) float64 {
 	queryTokens := strings.Fields(query)
 	if len(queryTokens) == 0 {
 		return 0
 	}
-	found := 0
-	for _, t := range queryTokens {
-		if strings.Contains(text, t) {
-			found++
-		}
-	}
+	textTokens := tokenSet(text) // text is already lower-cased by callers
+	found := 0
+	for _, t := range queryTokens {
+		if textTokens[t] {
+			found++
+		}
+	}
 	return float64(found) / float64(len(queryTokens))
 }
```

Not a critical bug; it just makes `scoreNode`'s content bonus noisier than it looks. Feel free to defer if the fuzzy behavior is actually desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/Memory-graph.go` around lines 597 - 610, tokenOverlap currently uses substring matching (strings.Contains) which treats "go" matching "google"; change tokenOverlap to tokenize the text (e.g., strings.Fields or reuse tokenSet logic) and build a set/map of text tokens, then count query tokens that are present in that set to compute the fraction; also normalize casing (strings.ToLower) for both query and text tokens to avoid case mismatches; keep the same signature and return float64(found)/float64(len(queryTokens)).
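A standalone, runnable version of the proposed token-set fix — self-contained here, with a local `tokenSet` inlined rather than reusing the package's helper, and with explicit lower-casing since this sketch has no caller doing it:

```go
package main

import (
	"fmt"
	"strings"
)

// tokenSet splits text into lower-cased whitespace-delimited tokens.
func tokenSet(s string) map[string]bool {
	set := make(map[string]bool)
	for _, t := range strings.Fields(strings.ToLower(s)) {
		set[t] = true
	}
	return set
}

// tokenOverlap compares whole tokens, not substrings, so "go" no longer
// matches "google". Returns the fraction of query tokens present in text.
func tokenOverlap(query, text string) float64 {
	queryTokens := strings.Fields(strings.ToLower(query))
	if len(queryTokens) == 0 {
		return 0
	}
	textTokens := tokenSet(text)
	found := 0
	for _, t := range queryTokens {
		if textTokens[t] {
			found++
		}
	}
	return float64(found) / float64(len(queryTokens))
}

func main() {
	fmt.Println(tokenOverlap("go", "googled it yesterday")) // 0: no whole-word match
	fmt.Println(tokenOverlap("go build", "run go build"))   // 1
}
```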
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/Memory-graph.go`:
- Around line 424-438: RetrieveWithTraversal currently returns (nil, nil) when
startNodeID is not present, making it impossible for callers to distinguish a
missing start node from an empty traversal; change the behavior in
RetrieveWithTraversal (after building nodeByID via indexNodes and the startNode
lookup) to return a sentinel error instead of nil, e.g., return nil,
fmt.Errorf("memorygraph: start node %q not found", startNodeID), so callers can
detect and surface this case.
- Around line 172-175: The nodeID function normalizes labels by lowercasing and
replacing spaces with underscores which causes collisions between distinct
labels like "API Key" and "api_key"; update the logic used by nodeID (or
UpsertNode) to avoid silent overwrites by either (a) computing the ID from a
stable hash of the original label (e.g., include sha256/hex of label alongside
or instead of the normalized text) or (b) rejecting labels that contain
underscores or spaces and returning an explicit error from UpsertNode; locate
and change the nodeID function and ensure UpsertNode checks/propagates errors
accordingly so distinct labels cannot silently map to the same ID.
- Around line 339-390: SearchGraph currently treats an empty query as a match
for every node because strings.Contains(..., "") is true; add an early guard in
SearchGraph (check queryLower := strings.ToLower(strings.TrimSpace(query)) or
similar) that returns either an empty SearchResult (with TotalNodes/TotalEdges
still populated) or an error when the trimmed query is empty, so scoreNode is
never invoked with an empty query; reference SearchGraph, queryLower, and
scoreNode when making this change.
---
Nitpick comments:
In `@cmd/Memory-graph.go`:
- Around line 506-512: SearchGraph currently only bumps access for directIDs
whereas RetrieveWithTraversal bumps every node in results, causing neighbor
nodes returned by SearchGraph to miss popularity updates; update the goroutine
that calls bumpAccess so it collects and passes both directIDs and neighborIDs
(or all IDs from results) to bumpAccess (function bumpAccess and the local vars
directIDs, neighborIDs, results are the targets) so every node returned to the
user gets its access count incremented.
- Around line 597-610: tokenOverlap currently uses substring matching
(strings.Contains) which treats "go" matching "google"; change tokenOverlap to
tokenize the text (e.g., strings.Fields or reuse tokenSet logic) and build a
set/map of text tokens, then count query tokens that are present in that set to
compute the fraction; also normalize casing (strings.ToLower) for both query and
text tokens to avoid case mismatches; keep the same signature and return
float64(found)/float64(len(queryTokens)).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3761ed86-1de4-47cf-aa7b-97ae6ebeb0e6
📒 Files selected for processing (3)
- cmd/Internal/Memorygraph/Tools.go
- cmd/Memory-graph.go
- cmd/Peek.go
✅ Files skipped from review due to trivial changes (1)
- cmd/Internal/Memorygraph/Tools.go
🚧 Files skipped from review as they are similar to previous changes (1)
- cmd/Peek.go
Fixed uppercase bug for import Memorygraph
All three go files into internal/Memorygraph @coderabbitai
drQedwards
left a comment
Patch merged and fixing most of the bugs the agents found. Awaiting review from the graph theory side, aka another human, before anyone begins to question if I'm legitimately a human or not lol.
The .go files in cmd have been deleted to pass the linter check. If they are really needed in cmd, I'm quite sure a nudge from @coderabbitai might be given during this review wave.
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain
🏁 Script executed (length of output: 10677)
🏁 Script executed (length of output: 16633)
However, after digging into the code, there are three things worth addressing before this is fully functional:

1. 🔌 The memory graph tools aren't wired into the MCP server yet
a) Add tool definitions in the `tools` slice:

```go
{
	Name:        "upsert_memory_node",
	Description: "Upsert a typed knowledge node into the persistent memory graph.",
	InputSchema: toolSchema{
		Type: "object",
		Properties: map[string]schemaProp{
			"type":    {Type: "string", Description: "Node type: fact, concept, entity, event, procedure, context."},
			"label":   {Type: "string", Description: "Short unique label for the node."},
			"content": {Type: "string", Description: "Full content body of the node."},
		},
		Required: []string{"type", "label", "content"},
	},
},
// ... search_memory_graph, retrieve_with_traversal, prune_stale_links, etc.
```

b) Add import at the top:

```go
"github.com/supermodeltools/cli/internal/Memorygraph"
```

c) Add cases in the `callTool` switch:

```go
case "upsert_memory_node":
	return memorygraph.ToolUpsertMemoryNode(memorygraph.UpsertMemoryNodeOptions{
		RootDir: s.dir,
		Type:    memorygraph.NodeType(strArg(args, "type")),
		Label:   strArg(args, "label"),
		Content: strArg(args, "content"),
	})
// ... and so on for the other tools
```

2. 📁 Directory name doesn't match Go conventions

Every other package in
3. 📄
@coderabbitai yes plan those changes please
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain
Here's the plan:
🏁 Script executed (length of output: 515)
🏁 Script executed (length of output: 14573)
Change 1: Rename directory & file

Run these two commands:

```shell
git mv internal/Memorygraph/Memory-graph.go internal/Memorygraph/memory_graph.go
git mv internal/Memorygraph internal/memorygraph
```

No content changes needed inside any of the three files; they already declare `package memorygraph`.

Change 2: Wire memory-graph tools into `internal/mcp/server.go`
The ritual of Q
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/mcp/server.go (1)
292-369: ⚠️ Potential issue | 🔴 Critical

`callTool` got declared twice, nested inside itself; this file won't build.

Look carefully at what happened here: the original `callTool` opens at line 292, runs its `switch`, hits `default:` at 302 and `return "", fmt.Errorf("unknown tool: %s", name)` at 303, and then inside that same function body, at line 304, there's another `func (s *server) callTool(ctx context.Context, name string, args map[string]any) (string, error) {` with its own duplicated `switch name {`, duplicated `case "analyze"` / `case "dead_code"` / `case "blast_radius"` / `case "get_graph"` arms, your new memorygraph cases, and its own `default`. Then you get two closing braces for the inner function (367, 368) and a dangling `}` on 369.

Go doesn't allow nested function declarations (let alone redeclaring a method on the same receiver), and the outer function never actually returned before the inner one starts. It's basically the same thing that happened in the `tools` slice: a patch was applied on top of the old code instead of replacing it, so both copies live side-by-side.

The fix is to keep one `callTool` that contains all cases (old + new) in a single `switch`, with one `default`, one closing `}` for the switch, and one closing `}` for the function.

🛠 Proposed single-function replacement for lines 292–369
```diff
 func (s *server) callTool(ctx context.Context, name string, args map[string]any) (string, error) {
 	switch name {
 	case "analyze":
 		return s.toolAnalyze(ctx, args)
 	case "dead_code":
 		return s.toolDeadCode(ctx, args)
 	case "blast_radius":
 		return s.toolBlastRadius(ctx, args)
 	case "get_graph":
 		return s.toolGetGraph(ctx, args)
-	default:
-		return "", fmt.Errorf("unknown tool: %s", name)
-	func (s *server) callTool(ctx context.Context, name string, args map[string]any) (string, error) {
-	switch name {
-	case "analyze":
-		return s.toolAnalyze(ctx, args)
-	case "dead_code":
-		return s.toolDeadCode(ctx, args)
-	case "blast_radius":
-		return s.toolBlastRadius(ctx, args)
-	case "get_graph":
-		return s.toolGetGraph(ctx, args)
-	case "upsert_memory_node":
+	case "upsert_memory_node":
 		return memorygraph.ToolUpsertMemoryNode(memorygraph.UpsertMemoryNodeOptions{
 			RootDir: s.dir,
 			Type:    memorygraph.NodeType(strArg(args, "type")),
 			Label:   strArg(args, "label"),
 			Content: strArg(args, "content"),
 		})
 	case "create_relation":
 		w := floatArg(args, "weight")
 		if w == 0 {
 			w = 1.0
 		}
 		return memorygraph.ToolCreateRelation(memorygraph.CreateRelationOptions{
 			RootDir:  s.dir,
 			SourceID: strArg(args, "source_id"),
 			TargetID: strArg(args, "target_id"),
 			Relation: memorygraph.RelationType(strArg(args, "relation")),
 			Weight:   w,
 		})
 	case "search_memory_graph":
 		topK := intArg(args, "top_k")
 		if topK == 0 {
 			topK = 5
 		}
 		return memorygraph.ToolSearchMemoryGraph(memorygraph.SearchMemoryGraphOptions{
 			RootDir:  s.dir,
 			Query:    strArg(args, "query"),
 			MaxDepth: intArg(args, "max_depth"),
 			TopK:     topK,
 		})
 	case "retrieve_with_traversal":
 		return memorygraph.ToolRetrieveWithTraversal(memorygraph.RetrieveWithTraversalOptions{
 			RootDir:     s.dir,
 			StartNodeID: strArg(args, "start_node_id"),
 			MaxDepth:    intArg(args, "max_depth"),
 		})
 	case "prune_stale_links":
 		return memorygraph.ToolPruneStaleLinks(memorygraph.PruneStaleLinksOptions{
 			RootDir:   s.dir,
 			Threshold: floatArg(args, "threshold"),
 		})
 	case "add_interlinked_context":
 		items, err := parseInterlinkedItems(args)
 		if err != nil {
 			return "", fmt.Errorf("add_interlinked_context: invalid items: %w", err)
 		}
 		return memorygraph.ToolAddInterlinkedContext(memorygraph.AddInterlinkedContextOptions{
 			RootDir:  s.dir,
 			Items:    items,
 			AutoLink: boolArg(args, "auto_link"),
 		})
 	default:
 		return "", fmt.Errorf("unknown tool: %s", name)
 	}
 }
-}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/mcp/server.go` around lines 292 - 369, The file contains two nested declarations of func (s *server) callTool(...) — remove the duplicated inner function and merge its new memorygraph cases into the original single callTool's switch so there is only one switch and one default branch; ensure the final callTool includes the existing arms (toolAnalyze, toolDeadCode, toolBlastRadius, toolGetGraph) plus the new memorygraph cases (memorygraph.ToolUpsertMemoryNode, ToolCreateRelation, ToolSearchMemoryGraph, ToolRetrieveWithTraversal, ToolPruneStaleLinks, ToolAddInterlinkedContext) and that parseInterlinkedItems error handling and helper callers (strArg, intArg, floatArg, boolArg) remain intact, then close the switch and function with the correct braces.
🧹 Nitpick comments (1)
internal/mcp/server.go (1)
117-190: Optional: schemas don't expose `metadata` or `edge_filter`, which the underlying options support.

Not a bug, just a heads-up for once the compile issues above are sorted. Looking at `internal/Memorygraph/tools.go`, `UpsertMemoryNodeOptions` has `Metadata map[string]string`, `SearchMemoryGraphOptions` has `EdgeFilter []RelationType`, `CreateRelationOptions` has `Metadata`, etc., but none of those knobs are advertised in the tool `InputSchema` here, so MCP clients won't know they exist.

If you want these reachable from MCP, add them to the schemas (type `"object"` for metadata, `"array"` for edge_filter) and plumb them through in the `callTool` cases. If you'd rather keep the surface minimal for v1, totally fine; just mention it in the tool description so it doesn't look like an oversight later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/mcp/server.go` around lines 117 - 190, The tool schemas (upsert_memory_node, create_relation, search_memory_graph, add_interlinked_context, etc.) omit options like Metadata and EdgeFilter present in the underlying structs (UpsertMemoryNodeOptions.Metadata, CreateRelationOptions.Metadata, SearchMemoryGraphOptions.EdgeFilter); add corresponding InputSchema properties (e.g., "metadata" with type "object" for map[string]string, and "edge_filter" with type "array" for RelationType values) to the relevant tool entries and then thread those fields through the MCP call handling (the callTool cases that construct UpsertMemoryNodeOptions, CreateRelationOptions, SearchMemoryGraphOptions) so the parsed request values are populated into the underlying option structs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/mcp/server.go`:
- Line 12: Import path uses PascalCase; rename the directory and update the
import to lowercase. Run git mv internal/Memorygraph internal/memorygraph,
update the import line in server.go from
"github.com/supermodeltools/cli/internal/Memorygraph" to
"github.com/supermodeltools/cli/internal/memorygraph", and verify call sites
like memorygraph.ToolUpsertMemoryNode(...) still compile (package name remains
memorygraph).
- Around line 117-190: The file contains literal '+' characters from a pasted
diff in the tools slice and in the callTool switch arms which break Go parsing;
remove the leading '+' from every added line in the tools definitions (the slice
entries with Name: "upsert_memory_node", "create_relation",
"search_memory_graph", "retrieve_with_traversal", "prune_stale_links",
"add_interlinked_context") and from the corresponding new case arms in the
callTool function so each entry starts with '{' or 'case' as normal; after
stripping those '+' chars the file will compile.
- Around line 321-332: The create_relation branch currently treats
floatArg(args, "weight") == 0 as "missing" which conflates a client-provided
zero with absent value; change the logic to check presence in args first (e.g.,
inspect args["weight"] or use a presence check) and only apply the default
Weight=1.0 when the weight key is absent, otherwise pass the parsed value (even
if 0) into memorygraph.ToolCreateRelation via CreateRelationOptions.Weight; keep
floatArg for parsing but use an explicit presence test on args before
defaulting.
- Around line 649-662: parseInterlinkedItems currently treats a missing
args["items"] as nil (which json.Marshal -> "null" and unmarshal -> nil slice),
so the server silently accepts a missing required field; update
parseInterlinkedItems to explicitly validate args["items"] before marshaling:
check presence (raw, ok := args["items"]; if !ok || raw == nil) and return a
clear error (e.g., "missing required field 'items'") and optionally ensure the
decoded result is a non-nil/non-empty []memorygraph.InterlinkedItem before
returning so ToolAddInterlinkedContext never receives an implicit zero-length
input.
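The `parseInterlinkedItems` validation described above can be sketched as a self-contained function; the item fields and function name here are placeholders, not the server's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type interlinkedItem struct {
	Label   string `json:"label"`
	Content string `json:"content"`
}

// parseItems rejects a missing or null "items" field explicitly instead of
// letting it round-trip through JSON into a silent nil slice, then decodes
// via marshal/unmarshal and requires a non-empty result.
func parseItems(args map[string]any) ([]interlinkedItem, error) {
	raw, ok := args["items"]
	if !ok || raw == nil {
		return nil, fmt.Errorf("missing required field 'items'")
	}
	buf, err := json.Marshal(raw)
	if err != nil {
		return nil, err
	}
	var items []interlinkedItem
	if err := json.Unmarshal(buf, &items); err != nil {
		return nil, fmt.Errorf("invalid 'items': %w", err)
	}
	if len(items) == 0 {
		return nil, fmt.Errorf("'items' must be a non-empty array")
	}
	return items, nil
}

func main() {
	_, err := parseItems(map[string]any{})
	fmt.Println(err) // missing required field 'items'
	items, _ := parseItems(map[string]any{"items": []any{map[string]any{"label": "a", "content": "b"}}})
	fmt.Println(len(items)) // 1
}
```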
---
Outside diff comments:
In `@internal/mcp/server.go`:
- Around line 292-369: The file contains two nested declarations of func (s
*server) callTool(...) — remove the duplicated inner function and merge its new
memorygraph cases into the original single callTool's switch so there is only
one switch and one default branch; ensure the final callTool includes the
existing arms (toolAnalyze, toolDeadCode, toolBlastRadius, toolGetGraph) plus
the new memorygraph cases (memorygraph.ToolUpsertMemoryNode, ToolCreateRelation,
ToolSearchMemoryGraph, ToolRetrieveWithTraversal, ToolPruneStaleLinks,
ToolAddInterlinkedContext) and that parseInterlinkedItems error handling and
helper callers (strArg, intArg, floatArg, boolArg) remain intact, then close the
switch and function with the correct braces.
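The merged dispatcher could take the following shape. The tool names come from the PR; the handler bodies are stand-ins, not the real implementations:

```go
package main

import "fmt"

// dispatch sketches a single merged callTool switch: the existing arms and
// the new memorygraph arms live in one function with one default branch.
func dispatch(name string) (string, error) {
	switch name {
	case "analyze", "dead_code", "blast_radius", "get_graph":
		return "existing tool: " + name, nil // stand-in for the real handlers
	case "upsert_memory_node", "create_relation", "search_memory_graph",
		"retrieve_with_traversal", "prune_stale_links", "add_interlinked_context":
		return "memorygraph tool: " + name, nil // stand-in for memorygraph.Tool*
	default:
		return "", fmt.Errorf("unknown tool %q", name)
	}
}

func main() {
	out, _ := dispatch("create_relation")
	fmt.Println(out) // prints: memorygraph tool: create_relation
}
```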
---
Nitpick comments:
In `@internal/mcp/server.go`:
- Around line 117-190: The tool schemas (upsert_memory_node, create_relation,
search_memory_graph, add_interlinked_context, etc.) omit options like Metadata
and EdgeFilter present in the underlying structs
(UpsertMemoryNodeOptions.Metadata, CreateRelationOptions.Metadata,
SearchMemoryGraphOptions.EdgeFilter); add corresponding InputSchema properties
(e.g., "metadata" with type "object" for map[string]string, and "edge_filter"
with type "array" for RelationType values) to the relevant tool entries and then
thread those fields through the MCP call handling (the callTool cases that
construct UpsertMemoryNodeOptions, CreateRelationOptions,
SearchMemoryGraphOptions) so the parsed request values are populated into the
underlying option structs.
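Parsing an optional metadata object could follow the same pattern as the PR's existing helpers. `metadataArg` and the `"metadata"` schema property are suggestions, not existing code:

```go
package main

import "fmt"

// metadataArg pulls an optional map[string]string out of decoded JSON tool
// args, mirroring the strArg/intArg helper style in the PR.
func metadataArg(args map[string]any, key string) map[string]string {
	raw, ok := args[key].(map[string]any)
	if !ok {
		return nil // absent or wrong type: leave the options field unset
	}
	out := make(map[string]string, len(raw))
	for k, v := range raw {
		if s, ok := v.(string); ok {
			out[k] = s // non-string values are skipped rather than erroring
		}
	}
	return out
}

func main() {
	args := map[string]any{"metadata": map[string]any{"source": "review", "count": 3.0}}
	fmt.Println(metadataArg(args, "metadata"))
}
```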
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8e97416d-1da8-4e43-98fb-b17fadfe4d2c
📒 Files selected for processing (1)
internal/mcp/server.go
| + case "create_relation": | ||
| + w := floatArg(args, "weight") | ||
| + if w == 0 { | ||
| + w = 1.0 | ||
| + } | ||
| + return memorygraph.ToolCreateRelation(memorygraph.CreateRelationOptions{ | ||
| + RootDir: s.dir, | ||
| + SourceID: strArg(args, "source_id"), | ||
| + TargetID: strArg(args, "target_id"), | ||
| + Relation: memorygraph.RelationType(strArg(args, "relation")), | ||
| + Weight: w, | ||
| + }) |
weight == 0 is ambiguous — can't tell "unset" from "explicitly zero".
floatArg returns the zero value when the key is missing or when the caller legitimately passed 0. So if a client ever wants a zero-weight edge (e.g. to mark "this link exists but contributes nothing to scoring"), your default silently rewrites it to 1.0.
Concrete example: user sends {"source_id":"a","target_id":"b","relation":"related_to","weight":0} → you store weight 1.0. Surprise.
If you want a real default, distinguish "present" from "zero" by looking in args directly before handing off to floatArg.
🛠 One way to disambiguate

```diff
-		case "create_relation":
-			w := floatArg(args, "weight")
-			if w == 0 {
-				w = 1.0
-			}
+		case "create_relation":
+			w := 1.0
+			if raw, ok := args["weight"]; ok {
+				if f, ok := raw.(float64); ok {
+					w = f
+				}
+			}
 			return memorygraph.ToolCreateRelation(memorygraph.CreateRelationOptions{
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
The suggested replacement in full:

```go
case "create_relation":
	w := 1.0
	if raw, ok := args["weight"]; ok {
		if f, ok := raw.(float64); ok {
			w = f
		}
	}
	return memorygraph.ToolCreateRelation(memorygraph.CreateRelationOptions{
		RootDir:  s.dir,
		SourceID: strArg(args, "source_id"),
		TargetID: strArg(args, "target_id"),
		Relation: memorygraph.RelationType(strArg(args, "relation")),
		Weight:   w,
	})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/mcp/server.go` around lines 321 - 332, The create_relation branch
currently treats floatArg(args, "weight") == 0 as "missing" which conflates a
client-provided zero with absent value; change the logic to check presence in
args first (e.g., inspect args["weight"] or use a presence check) and only apply
the default Weight=1.0 when the weight key is absent, otherwise pass the parsed
value (even if 0) into memorygraph.ToolCreateRelation via
CreateRelationOptions.Weight; keep floatArg for parsing but use an explicit
presence test on args before defaulting.
@drQedwards, here's a fresh take on this one:
Bug: `weight: 0` gets silently overwritten to `1.0`

The check `if w == 0 { w = 1.0 }` can't distinguish between two very different situations:

- The caller didn't send `weight` → `floatArg` returns `0.0` (zero value) → default to `1.0` ✅
- The caller explicitly sent `weight: 0` → `floatArg` also returns `0.0` → gets bumped to `1.0` ❌

Fix: check whether `"weight"` is actually present in `args` before applying the default.
case "create_relation":
w := 1.0
if raw, ok := args["weight"]; ok {
if f, ok := raw.(float64); ok {
w = f
}
}
return memorygraph.ToolCreateRelation(memorygraph.CreateRelationOptions{
RootDir: s.dir,
SourceID: strArg(args, "source_id"),
TargetID: strArg(args, "target_id"),
Relation: memorygraph.RelationType(strArg(args, "relation")),
Weight: w,
})This way weight absent → 1.0, weight: 0 → 0.0, weight: 0.5 → 0.5. Clean and unambiguous.
No more + artifacts remain in this. There's a reason we write plain Go here rather than playing whitespace games, the same discipline as C with atomic operations in the OpenSSL repo. Also dropped the Pascal-style filename casing that kept tripping the case-sensitive Linux CI.
You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
??? Removed the stray letter-b artifact, which was the actual bug. There is no duplicate callTool: @coderabbitai, you told me to add the case arms and then update the tools[] slice, and that is what this does. The structure is sound.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…142) Introduces internal/memorygraph — a typed, weighted knowledge graph for agent memory, persisted at .supermodel/memory-graph.json. Nodes are classified as fact/concept/entity/event/procedure/context; edges carry relation types and weights. Provides BFS traversal, scored full-text search, similarity-based auto-linking, and stale-link pruning. Six new MCP tools exposed via server.go: upsert_memory_node, create_relation, search_memory_graph, retrieve_with_traversal, prune_stale_links, add_interlinked_context. Also adds peek.go (node inspection layer) with full test coverage. Closes #136 Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com> Co-Authored-By: Dr. Q and Company <213266729+drQedwards@users.noreply.github.com>
# What
Add internal/memorygraph — a persistent memory graph package for interlinked RAG, with MCP tool wrappers and a node inspection layer. Three files:
• memory_graph.go — core engine: typed nodes and edges, JSON persistence under .supermodel/memory-graph.json, upsert/relate/prune operations, BFS traversal (RetrieveWithTraversal), and scored search (SearchGraph) with one-hop neighbor expansion
• tools.go — MCP tool wrappers (ToolUpsertMemoryNode, ToolCreateRelation, ToolSearchMemoryGraph, ToolPruneStaleLinks, ToolAddInterlinkedContext, ToolRetrieveWithTraversal) mirroring the TypeScript tool surface in tools/memory-tools.ts, wired into internal/mcp/server.go
• peek.go — read-only node inspection (Peek by ID or label, PeekList for full graph scan) with formatted output for pruning decisions
# Why
Ports the TypeScript memory graph tool layer (see forloops/context+ repo) to Go so the MCP server can manage persistent node context natively. peek.go gives fine-grained visibility into individual nodes — access counts, age, in/out edge weights — before committing to a prune pass. Closes the gap between what internal/mcp/server.go can dispatch and what the TS tools already expose.
# Test plan
• make test passes
• make lint passes
• Manually tested:
  • Peek by ID and by label resolves correctly
  • PeekList returns nodes sorted by access count descending
  • UpsertNode is idempotent on type+label
  • CreateRelation returns nil (not error) when either node ID is missing
  • PruneStaleLinks removes edges below threshold and orphaned nodes in one pass
  • RetrieveWithTraversal respects maxDepth and edgeFilter
  • AddInterlinkedContext with autoLink=true creates similarity edges at Jaccard ≥ 0.72
  • All 6 tools confirmed callable via MCP server dispatch
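The Jaccard auto-link check above can be illustrated with a token-set similarity sketch. The whitespace tokenizer is an assumption, not the package's actual implementation; only the 0.72 threshold comes from the PR:

```go
package main

import (
	"fmt"
	"strings"
)

func tokens(s string) map[string]bool {
	out := map[string]bool{}
	for _, t := range strings.Fields(strings.ToLower(s)) {
		out[t] = true
	}
	return out
}

// jaccard returns |A∩B| / |A∪B| over whitespace-split, lowercased tokens.
func jaccard(a, b string) float64 {
	setA, setB := tokens(a), tokens(b)
	if len(setA) == 0 && len(setB) == 0 {
		return 0
	}
	inter := 0
	for t := range setA {
		if setB[t] {
			inter++
		}
	}
	return float64(inter) / float64(len(setA)+len(setB)-inter)
}

func main() {
	// An edge would be auto-linked only when similarity clears 0.72.
	fmt.Println(jaccard("persistent memory graph node", "persistent memory graph edge")) // prints 0.6
}
```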
# Notes for reviewers
• golangci-lint may flag //nolint:nilnil comments — confirm your lint config allows that suppression
• No automated tests yet for peek.go — manual steps above cover the behaviour, but table-driven tests matching the pattern in the rest of internal/ are expected before merge per CONTRIBUTING.md. Flagging for follow-up