From 2a68327d9f67c0a139df9c4794f4996e09dd1777 Mon Sep 17 00:00:00 2001 From: Hong Minhee Date: Sun, 19 Apr 2026 02:04:45 +0900 Subject: [PATCH 01/24] Rewrite deployment guide intro, runtime selection, and configuration fundamentals The existing deployment guide described only a thin set of platform-specific setups and left out most of what a first-time Fedify operator needs to get right before launch. This rewrites the opening of the guide around Fedify-specific concerns: - A framing intro that states what this guide is and isn't, and names the two audiences it targets (Fedify developers who have never deployed, and experienced operators new to Fedify). - A runtime selection matrix that makes the operational trade-offs between Node.js, Deno, Bun, and Cloudflare Workers explicit, including the current Bun memory-leak caveat. - A "Configuration fundamentals" section covering the three decisions every Fedify operator has to make up front: the canonical origin (and when to rely on x-forwarded-fetch instead), the persistent KV/MQ backend, and the actor key lifecycle. Later sections (traditional deployments, containers, worker separation, serverless, security, observability, ActivityPub-specific operations, and a checklist) will follow in subsequent commits. Progresses https://github.com/fedify-dev/fedify/issues/689 Assisted-by: Claude Code:claude-opus-4-7[1m] --- docs/manual/deploy.md | 502 +++++++++++++++--------------------------- 1 file changed, 176 insertions(+), 326 deletions(-) diff --git a/docs/manual/deploy.md b/docs/manual/deploy.md index 468301ae0..c735c392f 100644 --- a/docs/manual/deploy.md +++ b/docs/manual/deploy.md @@ -1,364 +1,214 @@ --- description: >- - This document explains how to deploy Fedify applications to various platforms - and runtime environments, with specific guidance for serverless platforms - that have unique architectural constraints. + A production deployment guide for Fedify applications. 
Covers runtime + selection, reverse proxies, persistent backends, container and traditional + deployments, serverless platforms, security, observability, and the + ActivityPub-specific operational concerns that general web-app deployment + guides don't cover. --- Deployment ========== -Fedify applications can be deployed to various platforms and runtime -environments. While the core development patterns remain consistent across -platforms, some deployment targets—particularly serverless environments—require -specific architectural considerations and configuration patterns. +This document is a practical guide to putting a Fedify application into +production. It is written primarily for readers who have already built +something with Fedify locally and are about to deploy it for the first time— +and secondarily for readers who deploy web applications routinely but have not +worked with Fedify or the fediverse before. + +It does *not* retread general web-application deployment advice that is +already well-covered elsewhere. Instead, it focuses on the choices and +pitfalls that are specific to Fedify and to ActivityPub: how to pin your +canonical origin, where to terminate TLS, how to keep inbox delivery healthy +at scale, how to sanitize federated HTML, how to defend the fetches Fedify +can't protect for you, and how to retire a server without orphaning every +remote follower. + +The document assumes you already have a working Fedify app. If you are still +getting started, read the [*Manual* chapters](./federation.md) first, then +come back here when you are ready to ship. + + +Choosing a JavaScript runtime +----------------------------- + +Fedify supports Deno, Node.js, and Bun as first-class runtimes, and has +dedicated support for Cloudflare Workers. The choice matters for production. + +Node.js +: The default recommendation. Mature, widely deployed, extensive package + ecosystem, well-understood operational tooling. 
Node.js does not provide + a built-in HTTP server that accepts `fetch()`-style handlers, so you + will need [@hono/node-server] (or an equivalent adapter). + +Deno +: A strong choice if you prefer TypeScript-first tooling, a built-in HTTP + server (`Deno.serve()`), permission-based sandboxing, and native + OpenTelemetry support via the `--unstable-otel` flag. If you are already + comfortable with Deno, there is no reason to switch to Node.js for + production. + +Bun +: Bun runs Fedify, but has known memory-leak issues that make it a + difficult recommendation for long-running production workloads at the time + of writing. A common compromise is to develop on Bun (for its fast + install and test loops) and deploy on Node.js, since both consume the same + npm-style package sources. Revisit Bun for production once the memory + profile stabilizes for your workload. -This document covers deployment-specific concerns that go beyond basic -application development, focusing on platform-specific requirements, -configuration patterns, and operational considerations. - - -General deployment considerations ---------------------------------- - -### Environment configuration - -Fedify applications typically require several environment-specific -configurations that should be managed through environment variables or secure -configuration systems: - -Domain configuration -: Set your canonical domain through - the [`origin`](./federation.md#explicitly-setting-the-canonical-origin) - option or ensure proper `Host` header handling. - -Database connections -: Configure your chosen [key–value store](./kv.md) and - [message queue](./mq.md) implementations. - -Cryptographic keys -: Securely manage actor key pairs outside your application code. 
- -### Observability - -Fedify provides comprehensive observability features that should be configured -for production deployments: - -Logging -: [Configure loggers](./log.md) with appropriate log levels and sinks (output - destinations) for your environment. - -Tracing -: Enable [OpenTelemetry](./opentelemetry.md) integration to trace ActivityPub - operations across your infrastructure. - -Monitoring -: Set up monitoring for queue depths, delivery success rates, and response - times to ensure your application is performing optimally. - -### Scaling considerations - -ActivityPub applications have unique scaling characteristics due to their -federated nature. Consider the following when deploying: +Cloudflare Workers +: A cost-effective option for servers that do not need persistent + long-running processes. Workers impose architectural constraints—no + global mutable state across requests, bindings-only access to queues and + KV, execution time limits—that Fedify accommodates through a dedicated + [builder pattern](./federation.md#builder-pattern-for-structuring) and the + [`@fedify/cfworkers`] package. -Message processing -: [Consider separating web traffic from background message - processing](./mq.md#separating-message-processing-from-the-main-process) - for better resource utilization. +The rest of this guide assumes a traditional server environment (Node.js or +Deno) unless noted otherwise. -Database performance -: Optimize your [key–value store](./kv.md) and [message queue](./mq.md) for - the expected load patterns, including read/write ratios and concurrency. +[@hono/node-server]: https://github.com/honojs/node-server +[`@fedify/cfworkers`]: https://jsr.io/@fedify/cfworkers -Traditional server environments -------------------------------- +Configuration fundamentals +-------------------------- -While Fedify works seamlessly with traditional server deployments on Node.js, -Bun, and Deno, each runtime has specific considerations for production use. 
+These decisions apply to every deployment target. Make them once, early, and +document them—changing them after launch ranges from painful to impossible +(see [*Domain name permanence*](#domain-name-permanence)). -### Node.js +### Canonical origin -Node.js does not provide a built-in HTTP server that accepts `fetch()`-style -handlers, so you will need an adapter. The [@hono/node-server] package provides -this functionality: +Pin your canonical origin explicitly with the +`~FederationOptions.origin` option unless you are intentionally hosting +multiple domains on the same Fedify instance: ~~~~ typescript twoslash -import { MemoryKvStore } from "@fedify/fedify"; -// ---cut-before--- -import { serve } from "@hono/node-server"; -import { createFederation } from "@fedify/fedify"; - -const federation = createFederation({ -// ---cut-start--- - kv: new MemoryKvStore(), -// ---cut-end--- - // Configuration... -}); - -serve({ - async fetch(request) { - return await federation.fetch(request, { contextData: undefined }); - }, -}); -~~~~ - -For production deployments, consider using process managers like [PM2] or -[systemd] to ensure reliability and automatic restarts. - -[@hono/node-server]: https://github.com/honojs/node-server -[PM2]: https://pm2.keymetrics.io/ -[systemd]: https://systemd.io/ - -### Bun and Deno - -Both Bun and Deno provide built-in HTTP servers with `fetch()`-style handlers, -making integration straightforward: - -~~~~ typescript twoslash [index.ts] -import { MemoryKvStore } from "@fedify/fedify"; +import { type KvStore } from "@fedify/fedify"; // ---cut-before--- import { createFederation } from "@fedify/fedify"; const federation = createFederation({ -// ---cut-start--- - kv: new MemoryKvStore(), -// ---cut-end--- - // Configuration... + origin: "https://example.com", + // ---cut-start--- + kv: null as unknown as KvStore, + // ---cut-end--- + // Other options... 
}); - -export default { - async fetch(request: Request): Promise { - return await federation.fetch(request, { contextData: undefined }); - }, -}; ~~~~ -Then, you can run your application using the built-in server commands: - -::: code-group - -~~~~ bash [Deno] -deno serve index.ts -~~~~ - -~~~~ bash [Bun] -bun run index.ts -~~~~ - -::: - -> [!TIP] -> -> See also the documentation for [`deno serve`] and -> [Bun's `export default` syntax]. - -[`deno serve`]: https://docs.deno.com/runtime/reference/cli/serve/ -[Bun's `export default` syntax]: https://bun.sh/docs/api/http#export-default-syntax - -### Key–value store and message queue - -For traditional server environments, choose persistent storage solutions based -on your infrastructure: - -Development -: Use [`MemoryKvStore`](./kv.md#memorykvstore) and - [`InProcessMessageQueue`](./mq.md#inprocessmessagequeue) for quick setup. - -Production -: Consider [`PostgresKvStore`](./kv.md#postgreskvstore) and - [`PostgresMessageQueue`](./mq.md#postgresmessagequeue) if you already use - PostgreSQL, [`MysqlKvStore`](./kv.md#mysqlkvstore) and - [`MysqlMessageQueue`](./mq.md#mysqlmessagequeue) if you already use - MySQL or MariaDB, or [`RedisKvStore`](./kv.md#rediskvstore) and - [`RedisMessageQueue`](./mq.md#redismessagequeue) for dedicated caching - infrastructure. There is also [`AmqpMessageQueue`](./mq.md#amqpmessagequeue) - for RabbitMQ users. - -### Key management +When `~FederationOptions.origin` is set, Fedify constructs actor URIs, +activity IDs, and collection URLs using the canonical origin rather than the +origin derived from the incoming `Host` header. Without this option, an +attacker who bypasses your reverse proxy and hits the upstream directly can +coerce Fedify into constructing URLs with the upstream's address—leaking +infrastructure details and potentially producing activities that other +fediverse servers reject or cache under the wrong identity. 
-In traditional server environments, actor key pairs should be generated once -during user registration and securely stored in your database. Avoid generating -keys on every server restart—this will break federation with other servers that -have cached your public keys. +See [*Explicitly setting the canonical +origin*](./federation.md#explicitly-setting-the-canonical-origin) for the +full API, including the +`~FederationOrigin.handleHost`/`~FederationOrigin.webOrigin` split for +separating WebFinger handles from the server origin. -### Web frameworks +### Behind a reverse proxy -For web framework integration patterns, -see the [*Integration* section](./integration.md), which covers Express, Hono, -Fresh, SvelteKit, and other popular frameworks. +If you cannot pin a single canonical origin—typically because you host +multiple domains on the same process—the alternative is to trust your +reverse proxy's forwarded headers and let Fedify reconstruct the request URL +from them. The [x-forwarded-fetch] package does exactly this: it rewrites +the incoming `Request` so that `request.url` reflects the `X-Forwarded-Host`, +`X-Forwarded-Proto`, and related headers instead of the internal address the +proxy connected on. +On Node.js with Hono or similar frameworks, wrap your handler: -Cloudflare Workers ------------------- - -*Cloudflare Workers support is available in Fedify 1.6.0 and later.* - -[Cloudflare Workers] presents a unique deployment environment with specific -constraints and architectural requirements. Unlike traditional server -environments, Workers operate within strict execution time limits and provide -access to platform services through binding mechanisms rather than global -imports. - -[Cloudflare Workers]: https://workers.cloudflare.com/ - -### Node.js compatibility - -Fedify requires [Node.js compatibility flag] to function properly on Cloudflare -Workers. 
Add the following to your *wrangler.jsonc* configuration file: - -~~~~ jsonc -"compatibility_date": "2025-05-31", -"compatibility_flags": ["nodejs_compat"], -~~~~ - -This enables essential Node.js APIs that Fedify depends on, including -cryptographic functions and DNS resolution. - -[Node.js compatibility flag]: https://developers.cloudflare.com/workers/runtime-apis/nodejs/ - -### Builder pattern - -Unlike other environments where you can initialize a `Federation` object -globally, Workers only provide access to bindings (KV, Queues, etc.) through -`env` parameter in request handlers. This makes the [builder -pattern](./federation.md#builder-pattern-for-structuring) mandatory: - -~~~~ typescript twoslash -// @noErrors: 2345 -type Env = { - KV_NAMESPACE: KVNamespace; - QUEUE: Queue; -}; -import { Person } from "@fedify/vocab"; -// ---cut-before--- -import { createFederationBuilder } from "@fedify/fedify"; -import { WorkersKvStore, WorkersMessageQueue } from "@fedify/cfworkers"; - -const builder = createFederationBuilder(); +~~~~ typescript +import { serve } from "@hono/node-server"; +import { behindProxy } from "x-forwarded-fetch"; -// Configure your federation using the builder -builder.setActorDispatcher("/users/{identifier}", async (ctx, identifier) => { - // Your actor logic here -// ---cut-start--- - return new Person({}); -// ---cut-end--- +serve({ + fetch: process.env.BEHIND_PROXY === "true" + ? behindProxy(app.fetch.bind(app)) + : app.fetch.bind(app), + port: 3000, }); - -// Export the default handler -export default { - async fetch(request: Request, env: Env): Promise { - const federation = await builder.build({ - kv: new WorkersKvStore(env.KV_NAMESPACE), - queue: new WorkersMessageQueue(env.QUEUE), - // Other options... 
- }); - - return federation.fetch(request, { contextData: env }); - }, -}; ~~~~ -### Manual queue processing - -Cloudflare Queues don't provide polling-based APIs, so the `WorkersMessageQueue` -cannot implement a traditional `~MessageQueue.listen()` method. Instead, you -must manually connect queue handlers: - -~~~~ typescript twoslash -// @noErrors: 2345 -import { createFederationBuilder, type Message } from "@fedify/fedify"; -import { Person } from "@fedify/vocab"; -import { WorkersKvStore, WorkersMessageQueue } from "@fedify/cfworkers"; - -type Env = { - KV_NAMESPACE: KVNamespace; - QUEUE: Queue; -}; - -const builder = createFederationBuilder(); -// ---cut-before--- -// Handle queue messages -export default { - // ... fetch handler above - - async queue(batch: MessageBatch, env: Env): Promise { - const federation = await builder.build({ - kv: new WorkersKvStore(env.KV_NAMESPACE), - queue: new WorkersMessageQueue(env.QUEUE), - }); - - for (const message of batch.messages) { - try { - await federation.processQueuedTask( - env, - message.body as unknown as Message, - ); - message.ack(); - } catch (error) { - message.retry(); - } - } - }, -}; -~~~~ - -If you use queue ordering keys on Cloudflare Workers, instantiate -`WorkersMessageQueue` with an `orderingKv` namespace and call -`WorkersMessageQueue.processMessage()` before -`Federation.processQueuedTask()`. See the -[*`WorkersMessageQueue`* section](./mq.md#workersmessagequeue-cloudflare-workers-only) -for a complete example and caveats about best-effort ordering. - -### Example deployment - -For a complete working example, see the [Cloudflare Workers example] -in the Fedify repository, which demonstrates a simple functional ActivityPub -server deployed to Cloudflare Workers. 
- -[Cloudflare Workers example]: https://github.com/fedify-dev/fedify/tree/main/examples/cloudflare-workers - - -Deno Deploy ------------ - -[Deno Deploy] is a serverless platform optimized for Deno applications, offering -global distribution and built-in persistence through Deno KV. Fedify provides -first-class support for Deno Deploy through [dedicated key–value -store](./kv.md#denokvstore-deno-only) and [message -queue](./mq.md#denokvmessagequeue-deno-only) implementations. - -Deno Deploy applications can use Deno KV and leverage it for message queueing -as well: +On Deno with Fresh (or anywhere you have direct access to a middleware +chain), call `getXForwardedRequest()`: ~~~~ typescript -import { createFederation } from "@fedify/fedify"; -import { DenoKvStore, DenoKvMessageQueue } from "@fedify/denokv"; - -// Open Deno KV (automatically available on Deno Deploy) -const kv = await Deno.openKv(); - -const federation = createFederation({ - kv: new DenoKvStore(kv), - queue: new DenoKvMessageQueue(kv), - // Other configuration... +import { getXForwardedRequest } from "@hongminhee/x-forwarded-fetch"; + +app.use(async (ctx) => { + if (Deno.env.get("BEHIND_PROXY") === "true") { + ctx.req = await getXForwardedRequest(ctx.req); + ctx.url = new URL(ctx.req.url); + } + return await ctx.next(); }); - -// Standard Deno Deploy handler -Deno.serve((request) => federation.fetch(request, { contextData: undefined })); ~~~~ -[Deno Deploy]: https://deno.com/deploy - - -Other platforms ---------------- - -Support for additional serverless platforms is planned for future releases. -Each platform may have similar architectural requirements to Cloudflare Workers, -particularly around resource binding and execution constraints. - -If you are interested in support for a specific platform, please [open an issue] -to discuss requirements and implementation approaches. 
- -[open an issue]: https://github.com/fedify-dev/fedify/issues +> [!WARNING] +> Only enable `x-forwarded-fetch` when you actually sit behind a proxy you +> control. If the process is ever reachable directly from the public +> internet, a malicious client can spoof `X-Forwarded-Host` and impersonate +> any hostname. The common pattern is to gate the middleware behind a +> `BEHIND_PROXY=true` environment variable and set it only in the deployment +> that runs behind your proxy. + +If you can pin a canonical `~FederationOptions.origin`, prefer that over +`x-forwarded-fetch`—it is the simpler, safer default. + +[x-forwarded-fetch]: https://github.com/dahlia/x-forwarded-fetch + +### Persistent KV store and message queue + +The in-memory defaults are for development only. +`MemoryKvStore` loses data on restart, and `InProcessMessageQueue` loses every +in-flight activity; neither survives horizontal scaling. Pick a persistent +backend before you take traffic: + +| Backend | KV store | Message queue | When to choose | +| ---------------------- | ----------------- | ---------------------- | ------------------------------------- | +| PostgreSQL | `PostgresKvStore` | `PostgresMessageQueue` | You already run Postgres for app data | +| Redis | `RedisKvStore` | `RedisMessageQueue` | Dedicated cache/queue infrastructure | +| MySQL / MariaDB | `MysqlKvStore` | `MysqlMessageQueue` | You already run MySQL or MariaDB | +| SQLite | `SqliteKvStore` | `SqliteMessageQueue` | Single-node / embedded deployments | +| RabbitMQ | — | `AmqpMessageQueue` | Existing AMQP infrastructure | +| Deno KV | `DenoKvStore` | `DenoKvMessageQueue` | Deno Deploy | +| Cloudflare KV + Queues | `WorkersKvStore` | `WorkersMessageQueue` | Cloudflare Workers | + +See the [*Key–value store*](./kv.md) and [*Message queue*](./mq.md) chapters +for setup details and trade-offs between backends (connection pooling, +ordering guarantees, native retry support). 
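+
+As a concrete sketch, the PostgreSQL pair from the table wires up like
+this (assuming the `@fedify/postgres` package and the `postgres` driver;
+see the linked chapters for the exact constructor options):
+
+~~~~ typescript
+import { createFederation } from "@fedify/fedify";
+import { PostgresKvStore, PostgresMessageQueue } from "@fedify/postgres";
+import postgres from "postgres";
+
+// One connection pool backs both the KV store and the message queue.
+const sql = postgres(process.env.DATABASE_URL!);
+
+const federation = createFederation<void>({
+  kv: new PostgresKvStore(sql),
+  queue: new PostgresMessageQueue(sql),
+  // Other options...
+});
+~~~~
+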
+ +A reasonable default if you have no prior preference: PostgreSQL for both +KV and MQ, on the same database you already use for your application data. +Single operational surface, one backup strategy, one set of metrics. + +### Actor key lifecycle + +Actor key pairs must be generated **once** per actor and stored durably— +typically in the same row as the actor record itself. Do not regenerate them +on startup or during deploys. Other fediverse servers cache your public keys +(often for hours or days), and a key rotation they don't know about will +cause every incoming signature verification to fail against the cached key +and every outgoing activity you sign with the new key to be rejected. The +symptoms—silent federation breakage with no clear error—are among the most +frustrating to diagnose after the fact. + +Keep two distinct categories of secret separate: + + - **Instance-wide secrets** (session secret, instance actor private key, + database credentials) live in environment variables or a secret manager. + See [*Secret and key management*](#secret-and-key-management). + - **Per-actor key pairs** live in the database, one pair per actor, created + at registration time. + +If you need to rotate a compromised key, use the `Update` activity to +announce the new key and keep serving both the old and new keys from the +actor document for a transition window. Fediverse clients will eventually +pick up the new one as caches expire. From 63d718bf1e980b52f41226441409e4586c1b3eef Mon Sep 17 00:00:00 2001 From: Hong Minhee Date: Sun, 19 Apr 2026 02:11:21 +0900 Subject: [PATCH 02/24] Add traditional deployment section: systemd, reverse proxies, process managers Expands the deployment guide with the operational layer that sits between a running Fedify process and the internet. 
The material is intentionally focused on the Fedify-specific details: - A systemd service unit template with the hardening flags and the `EnvironmentFile=` convention that keep database credentials and actor private keys off the world-readable filesystem. - nginx and Caddy reverse-proxy configurations sized for Fedify's payloads (activity documents routinely exceed the 1 MiB body limit) and its slower remote timeouts (inbox deliveries from struggling peers can take well over 60 seconds). The notes call out the `Accept`/`Content-Type` pass-through requirement, which is the most common silent cause of broken federation behind misconfigured CDNs. - A brief note on PM2 versus systemd explaining why systemd should be the default on Linux. Progresses https://github.com/fedify-dev/fedify/issues/689 Assisted-by: Claude Code:claude-opus-4-7[1m] --- docs/manual/deploy.md | 218 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 218 insertions(+) diff --git a/docs/manual/deploy.md b/docs/manual/deploy.md index c735c392f..b0cdbc6bc 100644 --- a/docs/manual/deploy.md +++ b/docs/manual/deploy.md @@ -212,3 +212,221 @@ If you need to rotate a compromised key, use the `Update` activity to announce the new key and keep serving both the old and new keys from the actor document for a transition window. Fediverse clients will eventually pick up the new one as caches expire. + + +Traditional deployments +----------------------- + +The most common way to run a Fedify application in production is as a +long-lived Node.js or Deno process, managed by a service supervisor, behind +a reverse proxy that terminates TLS. The pieces are deliberately boring: +everything in this section predates the fediverse by decades and is +thoroughly documented by its upstream projects. This section focuses on the +details that matter specifically for Fedify. 
+
+### Running the process
+
+Node.js needs an adapter because it has no built-in `fetch()`-style HTTP
+server; [@hono/node-server] is the usual choice:
+
+~~~~ typescript twoslash
+import { type KvStore } from "@fedify/fedify";
+// ---cut-before---
+import { serve } from "@hono/node-server";
+import { createFederation } from "@fedify/fedify";
+
+const federation = createFederation({
+  // ---cut-start---
+  kv: null as unknown as KvStore,
+  // ---cut-end---
+  // Configuration...
+});
+
+serve({
+  fetch: (request) => federation.fetch(request, { contextData: undefined }),
+  port: 3000,
+});
+~~~~
+
+Deno ships a native server; export a default object with a `fetch` method
+and launch it with `deno serve`:
+
+~~~~ typescript twoslash [index.ts]
+import { type KvStore } from "@fedify/fedify";
+// ---cut-before---
+import { createFederation } from "@fedify/fedify";
+
+const federation = createFederation({
+  // ---cut-start---
+  kv: null as unknown as KvStore,
+  // ---cut-end---
+  // Configuration...
+});
+
+export default {
+  fetch(request: Request): Promise<Response> {
+    return federation.fetch(request, { contextData: undefined });
+  },
+};
+~~~~
+
+~~~~ bash
+deno serve index.ts
+~~~~
+
+For framework integration patterns (Hono, Express, Fresh, SvelteKit, and
+others), see the [*Integration* chapter](./integration.md).
+
+### Running under systemd
+
+A Fedify process that exits unexpectedly—for any reason, from an OOM kill to
+an unhandled rejection—must be restarted automatically, or federation
+stalls silently while outgoing activities pile up and remote servers time
+out waiting for responses. On Linux, [systemd] is the standard way to do
+this.
+
+ +A minimal service unit for a Node.js-based Fedify application might look +like: + +~~~~ ini [/etc/systemd/system/fedify.service] +[Unit] +Description=Fedify application +After=network-online.target postgresql.service +Wants=network-online.target + +[Service] +Type=simple +User=fedify +Group=fedify +WorkingDirectory=/srv/fedify +EnvironmentFile=/etc/fedify/env +ExecStart=/usr/bin/node --enable-source-maps dist/server.js +Restart=always +RestartSec=5 +StandardOutput=journal +StandardError=journal + +# Hardening +NoNewPrivileges=true +ProtectSystem=strict +ProtectHome=true +PrivateTmp=true +ReadWritePaths=/var/lib/fedify + +[Install] +WantedBy=multi-user.target +~~~~ + +The `EnvironmentFile=` path should be `chmod 600` and owned by `root:root` +(or by the service user)—it contains database passwords, session secrets, +and similar material that must not be world-readable. Keep the actual +application code in a path the service user cannot modify +(`ProtectSystem=strict` enforces this for system directories; the +`ReadWritePaths=` list is for any local storage you do need). + +Enable and start the unit: + +~~~~ bash +systemctl enable --now fedify.service +journalctl -u fedify.service -f +~~~~ + +On Deno, replace the `ExecStart=` line with +`/usr/bin/deno serve --allow-net --allow-env --allow-read=/srv/fedify --allow-write=/var/lib/fedify /srv/fedify/index.ts` +(tighten the permission flags to the minimum your app needs—that's the point of +running on Deno). + +If you plan to split web traffic from background queue processing into +separate processes, see [*Separating web and worker +nodes*](#separating-web-and-worker-nodes) below; you will typically run two +service units (or a templated `fedify@.service` instantiated twice) rather +than one. 
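+
+As a sketch, a templated unit can select the role through an environment
+variable (the `NODE_TYPE` convention here is this guide's, not a systemd
+or Fedify builtin; `%i` expands to the instance name):
+
+~~~~ ini [/etc/systemd/system/fedify@.service]
+[Unit]
+Description=Fedify application (%i)
+After=network-online.target postgresql.service
+Wants=network-online.target
+
+[Service]
+Type=simple
+User=fedify
+Group=fedify
+WorkingDirectory=/srv/fedify
+EnvironmentFile=/etc/fedify/env
+# "web" or "worker", taken from the instance name after the "@".
+Environment=NODE_TYPE=%i
+ExecStart=/usr/bin/node --enable-source-maps dist/server.js
+Restart=always
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+~~~~
+
+~~~~ bash
+systemctl enable --now fedify@web.service fedify@worker.service
+~~~~
+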
+
+[systemd]: https://systemd.io/
+
+### Process managers: PM2 and friends
+
+[PM2] and similar Node.js process managers work with Fedify, but they
+duplicate responsibilities that systemd already handles on any Linux
+server—respawn, log rotation, resource limits—and they don't integrate
+with the rest of your system supervision. Prefer systemd on Linux. PM2 is
+reasonable on platforms without systemd, or when you're deploying to a
+shared host where you don't control PID 1.
+
+[PM2]: https://pm2.keymetrics.io/
+
+### Reverse proxy
+
+Run your Fedify process on a loopback port (for example, `127.0.0.1:3000`)
+and put a reverse proxy in front of it. The proxy handles TLS termination,
+HTTP/2 (and increasingly HTTP/3), static asset caching, and—importantly for
+ActivityPub—shielding your upstream from direct traffic so that the
+[canonical origin](#canonical-origin) guarantee holds.
+
+#### Nginx
+
+~~~~ nginx
+server {
+    listen 443 ssl http2;
+    listen [::]:443 ssl http2;
+    server_name example.com;
+
+    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
+
+    # Incoming inbox deliveries can exceed nginx's 1 MiB default
+    # request-body limit, especially collections with many inline items.
+    client_max_body_size 10m;
+
+    location / {
+        proxy_pass http://127.0.0.1:3000;
+        proxy_http_version 1.1;
+
+        # Pass the original Host so Fedify (or x-forwarded-fetch) can reconstruct
+        # the public URL. If you set `origin` explicitly, these are used only
+        # for logging, but they should still be correct.
+        proxy_set_header Host $host;
+        proxy_set_header X-Forwarded-Host $host;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Real-IP $remote_addr;
+
+        # Inbox deliveries from slow remote servers and long-running queue
+        # operations can exceed nginx's 60s default.
+ proxy_read_timeout 120s; + proxy_send_timeout 120s; + } +} + +# Redirect plain HTTP to HTTPS; ActivityPub assumes HTTPS throughout. +server { + listen 80; + listen [::]:80; + server_name example.com; + return 301 https://$host$request_uri; +} +~~~~ + +#### Caddy + +Caddy's defaults suit Fedify well: automatic HTTPS from Let's Encrypt, +HTTP/2 and HTTP/3 on by default, and forwarded headers set correctly out +of the box. A full configuration fits on one line: + +~~~~ caddy +example.com { + reverse_proxy 127.0.0.1:3000 +} +~~~~ + +Caddy sends `X-Forwarded-Host`, `X-Forwarded-Proto`, and `X-Forwarded-For` +automatically, which means [x-forwarded-fetch] works without extra +configuration. For most new Fedify deployments where you don't already +have an nginx footprint to fit into, Caddy is the path of least resistance. + +> [!TIP] +> Whichever proxy you use, make sure it forwards `Accept` and +> `Content-Type` verbatim and does not rewrite or strip them. ActivityPub +> content negotiation depends on these headers matching `application/ld+json` +> or `application/activity+json` exactly. A surprising number of default +> proxy rules on CDNs rewrite or drop them and silently break federation. From 92cd36bfcff54802e136424eead045b4766e565f Mon Sep 17 00:00:00 2001 From: Hong Minhee Date: Sun, 19 Apr 2026 02:16:37 +0900 Subject: [PATCH 03/24] Add container deployment section: Dockerfile, Compose, Kubernetes, PaaS Covers containerized Fedify deployments across the main orchestration layers operators choose: - Minimal Dockerfiles for both Node.js and Deno that run the process as a non-root user and leave supervision to the orchestrator, with a note about multi-arch builds for operators running on ARM VPSes. - A Compose file that splits web and worker services off the same image via NODE_TYPE, wires up Postgres and Redis with healthchecks, and binds the app port to 127.0.0.1 so the canonical-origin invariant isn't broken by direct upstream traffic. 
- A short Kubernetes sketch that names only the Fedify-specific pieces (two Deployments, HPA on queue depth, PVC choices) and defers mechanics to upstream docs. - A PaaS index (Fly.io, AWS ECS/EKS, Cloud Run, Render, Railway) with the Fedify-relevant caveat for each rather than reimplementing each vendor's quickstart. Progresses https://github.com/fedify-dev/fedify/issues/689 Assisted-by: Claude Code:claude-opus-4-7[1m] --- docs/manual/deploy.md | 214 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 214 insertions(+) diff --git a/docs/manual/deploy.md b/docs/manual/deploy.md index b0cdbc6bc..1e65f9204 100644 --- a/docs/manual/deploy.md +++ b/docs/manual/deploy.md @@ -430,3 +430,217 @@ have an nginx footprint to fit into, Caddy is the path of least resistance. > content negotiation depends on these headers matching `application/ld+json` > or `application/activity+json` exactly. A surprising number of default > proxy rules on CDNs rewrite or drop them and silently break federation. + + +Container deployments +--------------------- + +Running Fedify in a container does not change the application's architecture, +but it changes how you supervise, scale, and secure it. This section covers +the container-specific pieces that matter for a Fedify app: the shape of a +minimal Dockerfile for each runtime, a Compose file that wires up the +application, a worker, and the backing services it typically needs, and some +notes on Kubernetes and managed container platforms. + +### Dockerfile + +A minimal Node.js Dockerfile follows the familiar pattern: install +dependencies, copy the source, expose the port, run the server. Keep the +runtime image small (the `-alpine` or `-slim` variants are usually fine), +run as a non-root user, and depend on process supervision from the +orchestrator (Compose `restart:`, Kubernetes `restartPolicy`, etc.) rather +than baking in a process manager. 
+
+~~~~ dockerfile [Node.js]
+FROM node:24-alpine
+
+# Install pnpm, plus any runtime OS packages your app needs
+# (e.g., ffmpeg for media processing).
+RUN apk add --no-cache pnpm
+
+WORKDIR /app
+COPY pnpm-lock.yaml package.json ./
+RUN pnpm install --frozen-lockfile --prod
+
+COPY . .
+
+# Run as an unprivileged user. The stock `node` user (UID 1000) ships in
+# the official Node.js images.
+USER node
+
+EXPOSE 3000
+CMD ["pnpm", "run", "start"]
+~~~~
+
+For Deno, use an official Deno image and lean on `deno task` for the
+command surface. Deno's permissions flags belong in the `start` task in
+*deno.json*, not in the Dockerfile, so that they are version-controlled
+with the code:
+
+~~~~ dockerfile [Deno]
+FROM denoland/deno:2.7.4
+
+WORKDIR /app
+COPY deno.json deno.lock ./
+RUN deno install
+
+COPY . .
+RUN deno task build
+
+USER deno
+EXPOSE 8000
+CMD ["deno", "task", "start"]
+~~~~
+
+> [!TIP]
+> If you build multi-arch images (linux/amd64 and linux/arm64 are the common
+> pair for fediverse servers, since many operators run on ARM VPSes for
+> cost), use `docker buildx` with `--platform=linux/amd64,linux/arm64`.
+> Avoid base images that only ship amd64.
+
+### Docker Compose / Podman Compose
+
+A single *compose.yaml* is usually enough to describe a Fedify deployment:
+the app itself, optionally a separate worker process from the same image,
+and the KV/MQ backend (Postgres or Redis). [Podman Compose] understands
+the same file format, so the example below works unchanged under both
+Docker and Podman.
+ +~~~~ yaml [compose.yaml] +services: + web: + image: ghcr.io/example/my-fedify-app:latest + environment: + NODE_TYPE: web + DATABASE_URL: postgres://fedify:${DB_PASSWORD}@postgres:5432/fedify + REDIS_URL: redis://redis:6379/0 + BEHIND_PROXY: "true" + FEDIFY_ORIGIN: https://example.com + SECRET_KEY: ${SECRET_KEY} + depends_on: + postgres: + condition: service_healthy + redis: + condition: service_started + ports: + - "127.0.0.1:3000:3000" + restart: unless-stopped + + worker: + image: ghcr.io/example/my-fedify-app:latest + environment: + NODE_TYPE: worker + DATABASE_URL: postgres://fedify:${DB_PASSWORD}@postgres:5432/fedify + REDIS_URL: redis://redis:6379/0 + SECRET_KEY: ${SECRET_KEY} + depends_on: + postgres: + condition: service_healthy + redis: + condition: service_started + restart: unless-stopped + + postgres: + image: postgres:17 + environment: + POSTGRES_USER: fedify + POSTGRES_PASSWORD: ${DB_PASSWORD} + POSTGRES_DB: fedify + volumes: + - postgres_data:/var/lib/postgresql/data + healthcheck: + test: ["CMD", "pg_isready", "-U", "fedify"] + interval: 10s + restart: unless-stopped + + redis: + image: redis:7-alpine + volumes: + - redis_data:/data + restart: unless-stopped + +volumes: + postgres_data: + redis_data: +~~~~ + +The `web` and `worker` services run the same image with different +`NODE_TYPE` environment variables so that your application code can decide +whether to bind an HTTP port or only start the message queue processor. +See [*Separating web and worker +nodes*](#separating-web-and-worker-nodes) for the code pattern this +mirrors. If you do not need that separation yet, drop the `worker` service +and keep a single combined process. + +Bind the application port to `127.0.0.1` rather than `0.0.0.0`, and run +your [reverse proxy](#reverse-proxy) on the host; this keeps the Fedify +process unreachable from the public internet, which is the invariant that +the [canonical origin](#canonical-origin) guarantee depends on. 
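+
+The two `${...}` substitutions in the compose file are read from an
+*.env* file next to *compose.yaml* or from the shell environment. A
+minimal sketch, with variable names taken from the example above and
+placeholder values you should replace with generated secrets:
+
+~~~~ dotenv [.env]
+# Referenced by compose.yaml; keep this file out of version control.
+DB_PASSWORD=replace-with-a-long-random-string
+SECRET_KEY=replace-with-a-long-random-string
+~~~~
+
+`openssl rand -base64 48` is a quick way to produce suitable values.
+Add *.env* to *.gitignore* and *.dockerignore* so the secrets end up
+neither in the repository nor in the image.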
+ +[Podman Compose]: https://github.com/containers/podman-compose + +### Kubernetes + +For deployments large enough to justify Kubernetes, the same pattern +applies—just spread across more objects. The essentials: + + - **Two Deployments**: one for web pods (multiple replicas, behind a + Service and Ingress) and one for worker pods (replicas tuned to queue + depth, no Service). + - **ConfigMap** for non-sensitive environment variables and a + **Secret** for credentials and actor private keys. + - **Ingress** terminating TLS with cert-manager. Most Fedify apps don't + need anything exotic here; a default nginx-ingress with + `proxy-body-size: 10m` is a reasonable starting point. + - **HorizontalPodAutoscaler** on the worker Deployment targeting queue + depth (via a custom metric from your MQ backend) or CPU. Web pods + usually scale on CPU or request count. + - **StatefulSet + PVC** for PostgreSQL if you self-host it, or an + external managed database; Fedify is indifferent as long as the + connection string works. + +This document does not attempt to replace the upstream Kubernetes +documentation—the mechanics of Deployments, Services, and Ingress are the +same as for any other HTTP service. The Fedify-specific pieces are the +ones covered throughout this guide: origin pinning, forwarded headers, +worker separation, persistent KV/MQ, and actor key persistence. + +### Managed container platforms + +Platform-as-a-service container hosts are the fastest way to get a Fedify +app into production if you don't want to operate the underlying +infrastructure yourself. Rather than duplicate each vendor's +documentation, this section lists which Fedify constraints to watch for. +Follow the links for setup details. + +[Fly.io] +: Works well with Fedify. You can run web and worker processes as + separate [processes] in one *fly.toml* and scale them independently. 
+
+ Enable HTTP/2 in `[[services]]` and make sure the forwarded-headers + behavior matches what [x-forwarded-fetch] expects. + +[AWS ECS] / [AWS EKS] +: Standard container-orchestration on AWS. If you use ALB as the + ingress, its request/response byte limits and header handling behave + like nginx with generous defaults; the [*Reverse proxy*](#reverse-proxy) + `Accept`/`Content-Type` tip still applies. + +[Google Cloud Run] +: Runs a single container per service with no persistent disk and + request-scoped execution. Worker separation using a long-running + queue consumer does not fit Cloud Run's execution model well; if you + need that separation, prefer a platform that supports long-running + processes (Fly.io, Kubernetes) or move the queue backend to one with + a native push consumer. + +[Render] / [Railway] +: Both treat Fedify apps as ordinary Node.js or Deno services and work + well for small-to-medium deployments. Define a separate “background + worker” service for the queue processor. + +[Fly.io]: https://fly.io/docs/ +[processes]: https://fly.io/docs/reference/configuration/#the-processes-section +[AWS ECS]: https://docs.aws.amazon.com/ecs/ +[AWS EKS]: https://docs.aws.amazon.com/eks/ +[Google Cloud Run]: https://cloud.google.com/run/docs +[Render]: https://render.com/docs +[Railway]: https://docs.railway.com/ From d926abd56c69db543a9296e2b46977171a7aaff5 Mon Sep 17 00:00:00 2001 From: Hong Minhee Date: Sun, 19 Apr 2026 02:21:32 +0900 Subject: [PATCH 04/24] Document web/worker separation, immediate:true hazard, and pool sizing Adds the section on splitting web and worker roles, which is the single most important scaling pattern for busy Fedify servers and the one most commonly missed by first-time operators. Highlights: - The manuallyStartQueue/startQueue/NODE_TYPE trio, with the full example in the Message queue chapter called out as the reference and only the deployment-specific wiring (Compose, systemd, Kubernetes) described here. 
- A warning against placing worker nodes behind a load balancer, which
  silently breaks the enqueue/process split.
- An explicit call to audit the codebase for `immediate: true` before
  launch, with the three reasons it's dangerous in production (blocks the
  request, no retries, couples delivery to request lifetime).
- A reminder about sizing the connection pool for ParallelMessageQueue,
  since the failure mode (stalled jobs, not errors) is easy to misread as
  a slow queue.

Progresses https://github.com/fedify-dev/fedify/issues/689

Assisted-by: Claude Code:claude-opus-4-7
---
 docs/manual/deploy.md | 108 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/docs/manual/deploy.md b/docs/manual/deploy.md
index 1e65f9204..6133e15d8 100644
--- a/docs/manual/deploy.md
+++ b/docs/manual/deploy.md
@@ -644,3 +644,111 @@ Follow the links for setup details.
 [Google Cloud Run]: https://cloud.google.com/run/docs
 [Render]: https://render.com/docs
 [Railway]: https://docs.railway.com/
+
+
+Separating web and worker nodes
+-------------------------------
+
+By default, a Fedify process both accepts HTTP requests and runs the
+message-queue consumer that delivers outgoing activities and dispatches
+incoming ones. For low-traffic servers this works fine. For anything busy
+enough to care about tail latency—or for any server where a queue backlog
+during a federation spike would hurt web responsiveness—split these roles
+into separate processes:
+
+ - **Web nodes** serve HTTP, enqueue outgoing activities, and accept
+   incoming ones. They do not consume the queue.
+ - **Worker nodes** consume the queue and process delivery. They do not
+   serve HTTP and should not be exposed through your load balancer.
+
+This is a Fedify-level concern implemented with two options:
+`~FederationOptions.manuallyStartQueue: true` tells Fedify not to start the
+queue consumer automatically, and `Federation.startQueue()` starts it only
+on nodes that should consume.
+ +~~~~ typescript twoslash +import type { KvStore } from "@fedify/fedify"; +// ---cut-before--- +import { createFederation } from "@fedify/fedify"; +import { RedisMessageQueue } from "@fedify/redis"; +import Redis from "ioredis"; +import process from "node:process"; + +const federation = createFederation({ + queue: new RedisMessageQueue(() => new Redis()), + manuallyStartQueue: true, + // ---cut-start--- + kv: null as unknown as KvStore, + // ---cut-end--- + // Other options... +}); + +if (process.env.NODE_TYPE === "worker") { + const controller = new AbortController(); + process.on("SIGINT", () => controller.abort()); + process.on("SIGTERM", () => controller.abort()); + await federation.startQueue(undefined, { signal: controller.signal }); +} +~~~~ + +The [*Separating message processing from the main +process*](./mq.md#separating-message-processing-from-the-main-process) +section has the complete reference, including the Deno variant and the +table describing which role enqueues versus processes. + +The deployment-side pieces are: + +Compose +: Two services referencing the same image with different `NODE_TYPE` + environment variables, as in the Compose example + [above](#docker-compose-podman-compose). + +systemd +: Either two separate units (*fedify-web.service* and + *fedify-worker.service*, each with its own `EnvironmentFile=`), or a + single templated *fedify@.service* unit instantiated twice + (`systemctl start fedify@web.service fedify@worker.service`). + +Kubernetes +: Two Deployments. Only the web Deployment gets a Service and Ingress. + Scale workers on queue depth (via a custom metric adapter reading from + your MQ backend) rather than CPU—a queue that's falling behind is not + necessarily CPU-bound. + +> [!WARNING] +> Do not place worker nodes behind a load balancer or expose them on a +> public address. 
They are not configured to
+> serve Fedify's HTTP endpoints, so any request a load balancer routes to
+> them is handled incorrectly, and exposing them bypasses the
+> reverse-proxy and canonical-origin setup that the rest of this guide
+> assumes.

### Avoid `immediate: true` in production

`Context.sendActivity()` accepts an `immediate: true` option that bypasses
the message queue and attempts delivery synchronously as part of the
current request. It has a specific purpose—delivery in environments
where no queue is configured, or in tests—but it is actively dangerous in
production:

 - Remote servers that are slow to respond will block your request.
 - There is no retry on failure; a single transient network error silently
   loses the activity.
 - It ties delivery success to request lifetime, which breaks the
   invariant that `sendActivity()` is fire-and-forget from the caller's
   point of view.

Before launch, search your codebase for `immediate: true` and remove every
occurrence that isn't in a test fixture.

### Parallel processing and connection pools

If you wrap your queue in `ParallelMessageQueue(queue, N)` to consume
messages concurrently on a single worker process, make sure the database
connection pool behind your KV store and MQ can accommodate at least `N`
plus a few extra connections. A pool that's too small won't cause errors
you'll notice immediately—it causes jobs to stall waiting on connections,
which looks like a slow queue rather than a misconfiguration.

See [*Parallel message processing*](./mq.md#parallel-message-processing)
for the full context, which includes specific notes about
`PostgresMessageQueue` and shared pools.
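
The templated systemd variant mentioned earlier can be sketched as a
single unit file. This is an illustration, not a drop-in: the paths,
user name, and environment-file layout are assumptions to adapt to your
own install:

~~~~ ini [/etc/systemd/system/fedify@.service]
[Unit]
Description=Fedify %i node
After=network-online.target
Wants=network-online.target

[Service]
User=fedify
WorkingDirectory=/srv/fedify
# /etc/fedify/web.env and /etc/fedify/worker.env set NODE_TYPE and the
# rest of the configuration for each role.
EnvironmentFile=/etc/fedify/%i.env
ExecStart=/usr/bin/node /srv/fedify/server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
~~~~

Instantiate it once per role:
`systemctl enable --now fedify@web fedify@worker`.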
From 5b4cb1c7fbc9bb8ed793897b16112ae85c0004d5 Mon Sep 17 00:00:00 2001 From: Hong Minhee Date: Sun, 19 Apr 2026 02:27:02 +0900 Subject: [PATCH 05/24] Document serverless deployments: Cloudflare Workers and Deno Deploy EA Replaces the earlier thin Cloudflare Workers and Deno Deploy sections with expanded guidance shaped by what operators actually hit in production: - Cloudflare Workers: the builder pattern, manual queue() export wiring, Node.js compatibility flag, and the native-retry delegation through MessageQueue.nativeRetrial. Adds operational notes that are absent from upstream Cloudflare docs: storing credentials with `wrangler secret put` rather than in `vars`, and the WAF skip rules needed to keep Cloudflare's default Bot Protection from challenging fediverse traffic carrying `application/activity+json` bodies. Remote servers don't solve CAPTCHAs, so this silent-failure mode is one of the most common launch blockers on Workers. - Deno Deploy: calls out the EA/Classic split explicitly. Deno Deploy Classic is deprecated and in maintenance mode; new deployments should target Deno Deploy EA. Keeps the zero-infrastructure Deno KV example and mentions DenoKvMessageQueue's native retry delegation. Progresses https://github.com/fedify-dev/fedify/issues/689 Assisted-by: Claude Code:claude-opus-4-7[1m] --- docs/manual/deploy.md | 200 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 200 insertions(+) diff --git a/docs/manual/deploy.md b/docs/manual/deploy.md index 6133e15d8..20b0ab1ab 100644 --- a/docs/manual/deploy.md +++ b/docs/manual/deploy.md @@ -752,3 +752,203 @@ which looks like a slow queue rather than a misconfiguration. See [*Parallel message processing*](./mq.md#parallel-message-processing) for the full context, which includes specific notes about `PostgresMessageQueue` and shared pools. 
+ + +Serverless and edge deployments +------------------------------- + +Fedify runs on two classes of platform that don't fit the long-running +process model: Cloudflare Workers and Deno Deploy. Both can host a +Fedify application with zero self-managed infrastructure, at a cost that +scales down to near-zero for low-traffic servers. The trade-off is that +each platform imposes architectural constraints that shape how the code is +organized—so unlike the traditional- and container-based sections above, +the choice here affects your *application code*, not just the deployment +configuration. + +### Cloudflare Workers + +*Cloudflare Workers support is available in Fedify 1.6.0 and later.* + +[Cloudflare Workers] is an edge runtime with per-request execution limits +and no mutable global state between invocations. Platform services—KV, +Queues, R2, D1—are exposed through the `env` parameter of the request +handler rather than as ambient imports. Fedify accommodates this through +the [builder pattern](./federation.md#builder-pattern-for-structuring) and +the [`@fedify/cfworkers`] package, which provides `WorkersKvStore` and +`WorkersMessageQueue`. + +#### Node.js compatibility + +Fedify depends on Node.js APIs for cryptography and DNS, so Workers need +the Node.js compatibility flag. In your *wrangler.jsonc*: + +~~~~ jsonc +"compatibility_date": "2025-05-31", +"compatibility_flags": ["nodejs_compat"], +~~~~ + +See the [Node.js compatibility] documentation for details. + +#### Builder pattern + +Because `env` (the handle to KV, Queues, and other bindings) is only +available inside the request handler, you cannot instantiate `Federation` +at module load time. 
Use `createFederationBuilder()` to define your
+dispatchers and build the `Federation` object per request:
+
+~~~~ typescript twoslash
+// @noErrors: 2345
+type Env = {
+  KV_NAMESPACE: KVNamespace;
+  QUEUE: Queue;
+};
+import { Person } from "@fedify/vocab";
+// ---cut-before---
+import { createFederationBuilder } from "@fedify/fedify";
+import { WorkersKvStore, WorkersMessageQueue } from "@fedify/cfworkers";
+
+const builder = createFederationBuilder();
+
+builder.setActorDispatcher("/users/{identifier}", async (ctx, identifier) => {
+  // Actor logic...
+  // ---cut-start---
+  return new Person({});
+  // ---cut-end---
+});
+
+export default {
+  async fetch(request: Request, env: Env): Promise<Response> {
+    const federation = await builder.build({
+      kv: new WorkersKvStore(env.KV_NAMESPACE),
+      queue: new WorkersMessageQueue(env.QUEUE),
+    });
+    return federation.fetch(request, { contextData: env });
+  },
+};
+~~~~
+
+#### Manual queue processing
+
+Cloudflare Queues deliver messages by invoking your Worker's `queue()`
+export rather than via a polling API, so `WorkersMessageQueue` cannot
+implement `~MessageQueue.listen()` the traditional way. Wire the handler
+manually:
+
+~~~~ typescript twoslash
+// @noErrors: 2345
+import { createFederationBuilder, type Message } from "@fedify/fedify";
+import { WorkersKvStore, WorkersMessageQueue } from "@fedify/cfworkers";
+
+type Env = {
+  KV_NAMESPACE: KVNamespace;
+  QUEUE: Queue;
+};
+
+const builder = createFederationBuilder();
+// ---cut-before---
+export default {
+  // ...
fetch handler above
+
+  async queue(batch: MessageBatch, env: Env): Promise<void> {
+    const federation = await builder.build({
+      kv: new WorkersKvStore(env.KV_NAMESPACE),
+      queue: new WorkersMessageQueue(env.QUEUE),
+    });
+
+    for (const message of batch.messages) {
+      try {
+        await federation.processQueuedTask(
+          env,
+          message.body as unknown as Message,
+        );
+        message.ack();
+      } catch {
+        message.retry();
+      }
+    }
+  },
+};
+~~~~
+
+If you use ordering keys, instantiate `WorkersMessageQueue` with an
+`orderingKv` namespace and call `WorkersMessageQueue.processMessage()`
+before `Federation.processQueuedTask()`. See the
+[*`WorkersMessageQueue`*](./mq.md#workersmessagequeue-cloudflare-workers-only)
+section for a complete example.
+
+#### Native retry
+
+Cloudflare Queues provide native retry with exponential backoff and
+dead-letter queue support, which Fedify recognizes through
+[`~MessageQueue.nativeRetrial`]. When native retry is available, Fedify
+skips its own retry logic and relies on the backend. Configure
+`max_retries` and a `dead_letter_queue` in your Queue definition in
+*wrangler.jsonc* rather than in application code.
+
+#### Secrets and WAF
+
+Store secrets with `wrangler secret put` rather than committing them to
+*wrangler.jsonc*'s `vars` section. The `vars` section is visible in the
+dashboard and to anyone with read access to the Worker; `secrets` are
+encrypted.
+
+Cloudflare's default WAF Bot Protection and “Managed Challenge” rules
+sometimes treat fediverse user agents or the `application/activity+json`
+content type as suspicious and challenge them, which breaks federation
+silently (remote servers don't solve CAPTCHAs). If your Worker sits
+behind a Cloudflare WAF, add a skip rule for requests whose `Accept` or
+`Content-Type` contains `application/activity+json` or
+`application/ld+json`, and whitelist known-good fediverse user agents.
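+
+As a concrete sketch, a custom WAF rule with the *Skip* action and an
+expression along these lines (written in Cloudflare's Rules language;
+adjust the headers matched and the products skipped to your zone) keeps
+challenges away from ActivityPub traffic:
+
+~~~~ text
+any(http.request.headers["accept"][*] contains "application/activity+json")
+or any(http.request.headers["content-type"][*] contains "application/activity+json")
+or any(http.request.headers["accept"][*] contains "application/ld+json")
+~~~~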
+ +#### Example deployment + +For a complete working example, see the [Cloudflare Workers example] in +the Fedify repository, which demonstrates a minimal ActivityPub server +deployed to Workers. + +[Cloudflare Workers]: https://workers.cloudflare.com/ +[Node.js compatibility]: https://developers.cloudflare.com/workers/runtime-apis/nodejs/ +[`~MessageQueue.nativeRetrial`]: ./mq.md#native-retry-mechanisms +[Cloudflare Workers example]: https://github.com/fedify-dev/fedify/tree/main/examples/cloudflare-workers + +### Deno Deploy + +[Deno Deploy] is a serverless platform for Deno applications with global +distribution and built-in persistence through Deno KV. At the time of +writing, Deno Deploy offers two products: + + - **Deno Deploy Early Access (EA)** is the current generation and the + one you should target for new deployments. It runs on Deno 2 with + improved cold-start behavior, native HTTP/3, and first-class + OpenTelemetry support. + - **Deno Deploy Classic** is the previous generation. It is now + deprecated and in maintenance mode; existing applications continue to + run but new deployments should use EA. + +Fedify targets Deno Deploy (both EA and Classic) through the +[`@fedify/denokv`] package, which exposes `DenoKvStore` and +`DenoKvMessageQueue`. Deno Deploy EA's Deno KV is automatically +available—no configuration required, no separate database to provision: + +~~~~ typescript +import { createFederation } from "@fedify/fedify"; +import { DenoKvStore, DenoKvMessageQueue } from "@fedify/denokv"; + +const kv = await Deno.openKv(); + +const federation = createFederation({ + kv: new DenoKvStore(kv), + queue: new DenoKvMessageQueue(kv), + // Other configuration... +}); + +Deno.serve((request) => federation.fetch(request, { contextData: undefined })); +~~~~ + +`DenoKvMessageQueue` exposes native retry via +[`~MessageQueue.nativeRetrial`], so Fedify delegates retry semantics to +Deno KV's built-in exponential-backoff mechanism. 
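+
+The same entry point runs locally for development, but outside Deno
+Deploy `Deno.openKv()` is still gated behind an unstable flag, so a
+`start` task in *deno.json* needs `--unstable-kv`. A sketch (the file
+name and permission flags are assumptions to adapt):
+
+~~~~ json [deno.json]
+{
+  "tasks": {
+    "start": "deno run --unstable-kv --allow-net --allow-env main.ts"
+  }
+}
+~~~~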
+ +[Deno Deploy]: https://deno.com/deploy +[`@fedify/denokv`]: https://jsr.io/@fedify/denokv From cbb49c08f0d3ba0f328939df5f96f1439cf8c91a Mon Sep 17 00:00:00 2001 From: Hong Minhee Date: Sun, 19 Apr 2026 02:32:33 +0900 Subject: [PATCH 06/24] Document the security concerns that matter most to Fedify servers Adds the security section, focused on the three threats that disproportionately affect Fedify apps compared to other web applications: - XSS through federated HTML. ActivityPub carries post content as HTML from untrusted remote servers, and Fedify deliberately does not sanitize it for you (because what's safe depends on the rendering context). The section gives allowlist-based sanitizer examples for Node.js (sanitize-html) and Deno (xss), warns against regex-based sanitizers, recommends a CSP as defense in depth, and calls out the common "signatures mean trust" mistake. - SSRF through follow-on fetches. Documents the built-in protection in Fedify's document loaders, warns explicitly against allowPrivateAddress: true in production, and enumerates the common application-code scenarios (avatar downloads, attachment fetching, link previews, webhook delivery) where application code bypasses that protection and has to defend itself with validatePublicUrl() or an ssrfcheck-style guard. Also notes the redirect-following pitfall. - Secret and key management. Separates instance-wide secrets (env/secret manager) from per-actor key pairs (database rows), with concrete notes for systemd, Docker, Kubernetes, and Cloudflare Workers. A shorter "other practices" subsection collects the HTTPS requirement, the skipSignatureVerification warning, inbox-level blocklists, and the clock-sync reminder. 
Progresses https://github.com/fedify-dev/fedify/issues/689

Assisted-by: Claude Code:claude-opus-4-7
---
 docs/manual/deploy.md | 238 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 238 insertions(+)

diff --git a/docs/manual/deploy.md b/docs/manual/deploy.md
index 20b0ab1ab..27e9ee441 100644
--- a/docs/manual/deploy.md
+++ b/docs/manual/deploy.md
@@ -952,3 +952,241 @@ Deno KV's built-in exponential-backoff mechanism.
 
 [Deno Deploy]: https://deno.com/deploy
 [`@fedify/denokv`]: https://jsr.io/@fedify/denokv
+
+
+Security
+--------
+
+Fedify servers face a different threat model than most web applications.
+Content arrives from strangers' servers, often as HTML, usually signed but
+not always usefully so. URLs point at resources you must then fetch from
+the public internet. Every user is potentially the target of an attacker
+on some other instance halfway around the world. Three concerns matter
+far more in this setting than the generic web-security checklist suggests:
+cross-site scripting through federated HTML, server-side request forgery
+through follow-on fetches, and the safekeeping of the cryptographic
+material that identifies your instance and its actors.
+
+### Cross-site scripting (XSS)
+
+ActivityPub carries post content as HTML in fields like `content`,
+`summary`, and `name`. Remote servers can and do put arbitrary markup in
+these fields—including, if they are malicious or compromised, `