Add worker/clustering support for Bun runtime (#9)
Conversation
- Added `cluster` option to `jetOptions.runtimes.bun` to allow enabling clustering.
- Implemented clustering in the `server()` primitive for Bun using `node:cluster`.
- Updated `Jetpath.listen()` to correctly handle primary and worker process lifecycles.
- Ensured the primary process in clustered mode handles worker forking and automatic restarts.
- Optimized worker initialization by skipping UI/route compilation in the primary process.

Co-authored-by: FridayCandour <75016347+FridayCandour@users.noreply.github.com>
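To make the first bullet concrete, the new option could be supplied roughly as follows. This is a minimal sketch: the `BunRuntimeOptions` type name and the exact surrounding options shape are assumptions based on the summary above, not Jetpath's actual exports.

```typescript
// Sketch of the option shape described above; type names are illustrative only.
type BunRuntimeOptions = {
  reusePort?: boolean;
  // true   => fork one worker per CPU core
  // number => fork exactly that many workers
  cluster?: boolean | number;
};

const jetOptions: { port: number; runtimes: { bun: BunRuntimeOptions } } = {
  port: 3000,
  runtimes: { bun: { cluster: 4 } }, // fork exactly 4 workers
};
```

Passing `cluster: true` instead would defer the worker count to the CPU count, per the PR description.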
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
📝 Walkthrough

This pull request adds cluster support for the Bun runtime in Jetpath. The server function now detects and handles primary process initialization for clustering, while the listen method branches early for primary processes. A new configuration option enables clustering with worker management.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Jetpath
    participant Server Function
    participant Cluster Primary
    participant Worker Processes
    User->>Jetpath: listen(options)
    Jetpath->>Server Function: server(plugs, options)
    alt Bun Runtime with Clustering Enabled
        Server Function->>Server Function: Check cluster.isPrimary
        rect rgba(0, 200, 100, 0.5)
            Server Function->>Cluster Primary: Create Node cluster controller
            Cluster Primary->>Cluster Primary: Fork workers (numWorkers)
            Cluster Primary->>Worker Processes: Spawn worker processes
            Worker Processes-->>Cluster Primary: Worker processes running
            Cluster Primary->>Cluster Primary: Setup exit handlers & restart logic
        end
        Server Function-->>Jetpath: Return {isPrimary: true, listen, stop, close, edge: false}
    else Worker Process
        Server Function->>Server Function: Bun.serve() in worker context
        Server Function-->>Jetpath: Return {isPrimary: false, listen, edge: true}
    end
    Jetpath->>Jetpath: Check this.server.isPrimary
    alt Is Primary Process
        rect rgba(200, 100, 0, 0.5)
            Jetpath->>Cluster Primary: listen(port)
            Jetpath->>Jetpath: Set listening = true, return early
        end
    else Worker Process
        Jetpath->>Jetpath: Continue with compilation & middleware
        Jetpath->>Server Function: listen(port)
        Jetpath->>Jetpath: Set listening = true
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/primitives/types.ts (1)
200-200: Consider adding JSDoc documentation for the `cluster` option.

The other options in this file have documentation comments explaining their purpose. Adding a brief comment explaining what `boolean` vs `number` means for this option would improve developer experience.

📝 Suggested documentation

```diff
 bun?: {
   reusePort?: boolean;
+  /**
+   * Enable clustering for multi-core performance.
+   * - `true`: Fork workers equal to CPU count
+   * - `number`: Fork specified number of workers
+   */
   cluster?: boolean | number;
 };
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/primitives/types.ts` at line 200, add a JSDoc comment for the cluster option next to the declaration `cluster?: boolean | number;` that briefly explains its purpose and the semantics of the two allowed types: boolean (enable/disable default clustering) and number (explicit cluster size or shard count), including any default behavior or boundaries (e.g., 0 or negative handling) and examples of expected usage so other developers understand when to pass true/false vs a numeric value.

src/primitives/functions.ts (1)
170-173: Consider adding restart throttling to prevent restart storms.

If workers crash immediately after forking (e.g., due to port conflicts or initialization errors), this code will enter a tight loop, continuously forking and restarting workers.
💡 Suggested approach
```diff
+const recentDeaths: number[] = [];
+const RESTART_WINDOW_MS = 5000;
+const MAX_RESTARTS = 5;
+
 cluster.on("exit", (worker: any) => {
   LOG.log(`worker ${worker.process.pid} died, restarting...`, "warn");
+  const now = Date.now();
+  recentDeaths.push(now);
+  // Remove deaths outside the window
+  while (recentDeaths.length && recentDeaths[0] < now - RESTART_WINDOW_MS) {
+    recentDeaths.shift();
+  }
+  if (recentDeaths.length >= MAX_RESTARTS) {
+    LOG.log(`Too many worker deaths in ${RESTART_WINDOW_MS}ms, stopping restarts`, "error");
+    return;
+  }
   cluster.fork();
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/primitives/functions.ts` around lines 170 - 173, The cluster.on("exit", (worker) => { ... }) handler currently logs and immediately calls cluster.fork(), which can cause a restart storm; modify that handler to implement restart throttling by tracking recent restart timestamps (e.g., an array or sliding window) keyed per-worker or globally, enforce a maximum number of restarts within a time window, and apply an exponential or fixed delay before calling cluster.fork() when the threshold is exceeded; update LOG.log messages in the handler to include when restarts are delayed/blocked and ensure cluster.fork() is only invoked after the backoff/delay logic completes.src/index.ts (1)
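The sliding-window policy the comment suggests can be factored into a small pure helper, which makes it unit-testable independently of `node:cluster`. This is a sketch under assumed constants; `shouldRestart` is a hypothetical name, not code from this PR.

```typescript
const RESTART_WINDOW_MS = 5000; // assumed window, matching the suggestion above
const MAX_RESTARTS = 5;         // assumed cap on deaths per window

// Records a worker death at time `now` (ms) and reports whether
// another fork is still allowed within the sliding window.
function shouldRestart(recentDeaths: number[], now: number): boolean {
  // Drop deaths that have fallen out of the window.
  while (recentDeaths.length && recentDeaths[0] < now - RESTART_WINDOW_MS) {
    recentDeaths.shift();
  }
  recentDeaths.push(now);
  return recentDeaths.length < MAX_RESTARTS;
}
```

The `exit` handler would then call `cluster.fork()` only when `shouldRestart(...)` returns true, logging a warning otherwise.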
106-112: Primary process exits silently without startup information.

When clustering is enabled, the primary process returns early without logging the server URLs or startup status. Users may not see any feedback that the server started successfully.
💡 Suggested improvement
```diff
 this.server = server(this.plugs, this.options);
 if (this.server.isPrimary) {
   this.server.listen(this.options.port);
   this.listening = true;
   this._nativeServer = this.server;
+  LOG.log(`Cluster primary listening on port ${this.options.port}`, "info");
   return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 106 - 112, When this.server.isPrimary is true the code returns early without emitting any startup logs; update the primary-process branch in the start/init routine so that after calling this.server.listen(this.options.port) and setting this.listening = true and this._nativeServer = this.server you also emit the same startup/logging output as the worker path (e.g., log server URL(s), port and a “listening/started” message). Locate the primary branch around the server() instantiation (references: server(), this.server.isPrimary, this.server.listen, this.listening, this._nativeServer) and add the log calls (or call the existing on-listen/log helper) before the early return so users see startup information.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dc25b046-96bd-40f5-b815-aa149d6662cf
📒 Files selected for processing (3)
- src/index.ts
- src/primitives/functions.ts
- src/primitives/types.ts
```ts
if (clusterConfig) {
  const cluster = require("node:cluster");
  const os = require("node:os");

  if (cluster.isPrimary) {
```
🧩 Analysis chain
🌐 Web query:
Does Bun support node:cluster module for worker processes?
💡 Result:
Yes, Bun supports the node:cluster module for worker processes. It was implemented in Bun v1.1.25 (August 2024) via PR #11492 and is documented in the official Bun API reference. However, it has limitations: handles and file descriptors cannot be passed between workers (limiting TCP server load-balancing to Linux via SO_REUSEPORT), and it is described as "implemented but not battle-tested."
Citations:
- 1: https://bun.com/reference/node/cluster
- 2: implement `node:cluster` (oven-sh/bun#11492)
- 3: Implement `node:cluster` module (oven-sh/bun#2428)
- 4: https://bun.com/docs/runtime/nodejs-compat
Verify Bun's node:cluster support meets production requirements.
Bun supports node:cluster (as of v1.1.25), but with important limitations: handles and file descriptors cannot be passed between workers, and TCP server load-balancing on Bun is limited to Linux using SO_REUSEPORT. Additionally, the feature is documented as "implemented but not battle-tested." Ensure the implementation accounts for these constraints and is tested on the target platform before production deployment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/primitives/functions.ts` around lines 154 - 158, The code uses
node:cluster when clusterConfig is set (see clusterConfig,
require("node:cluster"), cluster.isPrimary) but Bun's cluster support is limited
and not battle-tested; update the implementation to detect Bun (e.g.,
process.release?.name === "bun") and either disable/skip the cluster branch or
emit a clear runtime error/log when running under Bun, and add guardrails to
avoid passing handles/file descriptors between workers and to require Linux +
SO_REUSEPORT for TCP load-balancing; also add a TODO/test note to verify cluster
behavior on the target platform before enabling in production.
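The runtime detection suggested in the prompt above can be done by checking `process.versions`, since Bun populates a `bun` version string there. Factored as a pure function for testability — `isBunRuntime` is a hypothetical helper name, not code from this PR:

```typescript
// Returns true when the given versions map (e.g. process.versions)
// indicates the Bun runtime; Bun sets a `bun` version string.
function isBunRuntime(versions: Record<string, string | undefined>): boolean {
  return typeof versions.bun === "string";
}

// The server primitive might then warn before enabling the cluster branch:
// if (isBunRuntime(process.versions)) {
//   LOG.log("node:cluster on Bun is implemented but not battle-tested", "warn");
// }
```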
```ts
const numWorkers = typeof clusterConfig === "number"
  ? clusterConfig
  : os.cpus().length;
```
Validate that worker count is a positive integer.
If a user passes 0 or a negative number for `cluster`, the loop won't fork any workers, leaving the server non-functional without any error message.
🛡️ Suggested validation

```diff
 if (cluster.isPrimary) {
-  const numWorkers = typeof clusterConfig === "number"
-    ? clusterConfig
-    : os.cpus().length;
+  const numWorkers = typeof clusterConfig === "number"
+    ? Math.max(1, Math.floor(clusterConfig))
+    : os.cpus().length;
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/primitives/functions.ts` around lines 159 - 161, The numWorkers
assignment using clusterConfig can accept 0 or negatives; validate clusterConfig
before using it: in the code that computes numWorkers (referencing clusterConfig
and numWorkers), check that clusterConfig is a positive integer via
Number.isInteger(clusterConfig) && clusterConfig > 0; if invalid, either throw a
descriptive error (e.g., "cluster must be a positive integer") or fall back to
os.cpus().length — implement one consistent behavior and update the numWorkers
assignment to use the validated value.
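The clamping behavior proposed above can be isolated into a small helper. This is a sketch: `resolveWorkerCount` is a hypothetical name, and clamping to 1 is one of the two behaviors the prompt allows (the other being a thrown error).

```typescript
// Resolves the worker count: numbers are clamped to a positive integer,
// while `true` falls back to the machine's CPU count.
function resolveWorkerCount(
  clusterConfig: boolean | number,
  cpuCount: number
): number {
  return typeof clusterConfig === "number"
    ? Math.max(1, Math.floor(clusterConfig))
    : cpuCount;
}
```

With this in place, `cluster: 0` or `cluster: -2` still forks one worker instead of silently forking none.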
```ts
stop() {
  for (const id in cluster.workers) {
    cluster.workers[id].kill();
  }
},
```
Add null check before calling kill() on worker.
Workers can exit between iteration start and the kill() call, making cluster.workers[id] potentially undefined.
🛡️ Suggested fix

```diff
 stop() {
   for (const id in cluster.workers) {
-    cluster.workers[id].kill();
+    cluster.workers[id]?.kill();
   }
 },
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/primitives/functions.ts` around lines 175 - 179, The stop() method
iterates cluster.workers and calls kill() without guarding against a worker
becoming undefined; update stop() to check that cluster.workers[id] is truthy
(e.g., !== undefined/null) before calling its kill() method so you don't attempt
to call kill() on a removed/exit worker; locate the stop() function in
primitives/functions.ts and add a simple null/undefined guard around the
cluster.workers[id].kill() call.
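The race this comment describes can be modeled with a plain record whose entries may vanish mid-iteration; optional chaining then skips the missing entry exactly as the suggested fix does. The `FakeWorker` type here is a stand-in for illustration, not `node:cluster`'s actual `Worker`.

```typescript
type FakeWorker = { killed: boolean; kill(): void };

// Entry "2" models a worker that exited between enumeration and kill().
const workers: Record<string, FakeWorker | undefined> = {
  "1": { killed: false, kill() { this.killed = true; } },
  "2": undefined,
};

for (const id in workers) {
  workers[id]?.kill(); // no TypeError on the vanished worker
}
```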
This PR introduces native worker/clustering support for the Bun.js runtime within the Jetpath framework.
Key Changes:
- New `runtimes.bun.cluster` option in the `Jetpath` options. It accepts a boolean (to use all CPUs) or a specific number of workers.
- The `listen()` method has been refactored to initialize the server early. In the primary process, it starts the cluster and exits the setup flow, avoiding unnecessary work like route scanning or UI generation. Worker processes proceed with the full initialization before serving requests.
- Clustering is implemented via `node:cluster`, which handles transparent port sharing among workers.

This addition enhances Jetpath's performance on multi-core systems when running on Bun, while maintaining its zero-config, runtime-agnostic philosophy.
PR created automatically by Jules for task 16655505401955298210 started by @FridayCandour