gstack/ARCHITECTURE.md
Garry Tan 3bf43766d5 v1.38.0.0 fix wave: Windows install hardening + Unicode sanitization at server egress (4 community PRs) (#1505)
* fix(browse): single-point Unicode sanitization at server egress

Add sanitizeLoneSurrogates (regex-based UTF-16 lone-half cleaner) and
sanitizeReplacer (JSON.stringify replacer that runs the cleaner on every
string field during encoding).

Split handleCommandInternal into handleCommandInternalImpl (raw) plus a
thin sanitizing wrapper. The wrapper applies sanitizeLoneSurrogates to
cr.result so both single-command (handleCommand line 1034) and batch-loop
(line 1966) egress paths inherit it. Inline INVARIANT comment near the
wrapper documents the architectural constraint.

Both SSE producers (activity feed at /activity/stream and inspector
stream) stringify with sanitizeReplacer. Post-stringify regex is
ineffective on those paths because JSON.stringify has already converted
the lone surrogate into the escape sequence "\\\\uD800" before any regex
could match it; the replacer runs during stringify on the raw string
value, so the substitution lands.

Originated from @realcarsonterry PR #1463 (handleCommand-only wrap).
Architectural lift to handleCommandInternal + SSE coverage authored on
this branch.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(setup): _link_or_copy helper for Windows file-copy fallback

On Windows without Developer Mode (MSYS2/Git Bash), plain ln -snf
silently creates a frozen file copy that doesn't refresh on git pull.
Skill files become stale after every upgrade.

Add a _link_or_copy SRC DST helper near IS_WINDOWS detection (line ~33).
It auto-dispatches: on Unix it preserves ln -snf semantics, on Windows
it copies (cp -R for directories, cp -f for files). When the source is
a Unix-style name-only alias that doesn't resolve on disk (the
connect-chrome → gstack/open-gstack-browser pattern), the helper
returns 0 silently on Windows rather than aborting setup under set -e.

Rewrite all 42 prior ln -snf call sites to route through the helper:
link_claude_skill_dirs (line 437), team-claude install paths (lines 556,
581, 592), Codex host adapter block (lines 618-640), Factory host
adapter block (lines 658-678), OpenCode host adapter block (lines
696-731), Kiro host adapter block (lines 939-953), plus migration and
alias sites.

Add _print_windows_copy_note_once helper and call it from
link_claude_skill_dirs after any linking work completes so Windows
users see one user-visible note explaining they must re-run ./setup
after every git pull.

Extend cleanup_old_claude_symlinks and cleanup_prefixed_claude_symlinks
with a Windows branch: when the target is a real directory containing a
real-file SKILL.md (no symlink to readlink), and IS_WINDOWS=1, treat
the name-matched directory as gstack-managed and remove it. This makes
--prefix / --no-prefix flips work on Windows instead of leaving stale
copies behind.

Originated from @realcarsonterry PR #1462 (1 of 42 sites). Helper
extraction, 42-site rewrite, alias-resolution edge case, and Windows
cleanup compat authored on this branch.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(docs): rename stale gbrain_sync_mode to artifacts_sync_mode + register /document-generate

Five stale gstack-config references in docs/ pointed to the deprecated
gbrain_sync_mode key (renamed to artifacts_sync_mode in v1.27.0.0):
- docs/gbrain-sync.md: lines 62, 110, 111, 173
- docs/gbrain-sync-errors.md: lines 26, 203

Users following the docs would set a key that gstack-brain-sync no
longer reads, silently breaking artifacts sync.

Originated from @realcarsonterry PR #1461 (verbatim).

Also register /document-generate in AGENTS.md (Operational + memory
table) and docs/skills.md (skill index). The skill shipped in v1.35.0.0
but the doc-inventory cross-check in test/skill-validation.test.ts was
failing because neither file mentioned it.

Allowlist the new test/docs-config-keys.test.ts file in
test/no-stale-gstack-brain-refs.test.ts — it intentionally lists the
deprecated keys in its DEPRECATED_KEYS denylist (defending the rename).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci(windows): migrate windows-free-tests to paid faster runner + register wave tests

Move the Windows free-test job from GitHub-hosted windows-latest to
Blacksmith's paid Windows runner (blacksmith-2vcpu-windows-2022).
Spin-up drops from ~60s to ~10s and Bun installs land 3-4x faster. The
label can swap to namespace-profile-windows or ubicloud-windows-* if
this repo's Blacksmith installation isn't configured.

Register the four new wave tests in the workflow's curated test list:
  - browse/test/server-sanitize-surrogates.test.ts
  - test/setup-windows-fallback.test.ts
  - test/build-script-shell-compat.test.ts
  - test/docs-config-keys.test.ts

These tests cover the Windows-hardening surface that this wave ships
(sanitizer wiring, _link_or_copy helper, build-script subshells, doc-
config drift), so they need to run on Windows where the bug shapes
actually manifest.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* test: wave coverage for sanitizer, link_or_copy, build script, doc drift

Four new test files (29 cases total):

browse/test/server-sanitize-surrogates.test.ts:
  - 11 unit cases for sanitizeLoneSurrogates (passthrough, valid pair,
    lone high/low mid-string, trailing/leading lone, adjacent doubles,
    pair-then-lone, lone-then-pair, empty)
  - 2 bug-repro tests pinning the regression intent (UTF-8 round-trip,
    JSON.parse round-trip with codepoint assertion)
  - 4 wiring invariants asserting the architectural choke points stay
    intact (handleCommandInternalImpl rename, central sanitization
    line, sanitizeReplacer function exists, SSE producers stringify
    with replacer)
  Function extracted from server.ts via regex + eval'd in test scope
  so no production-code export is needed.

test/setup-windows-fallback.test.ts:
  - Static invariant (D7): zero raw `ln` calls outside the
    _link_or_copy helper body and comments
  - Helper-existence assertions
  - 4-cell behavior matrix (file/dir × Windows/Unix) via awk-style
    helper extraction + bash -c sourcing
  - Windows-note printer registration check
  Mirrors test/setup-conductor-worktree.test.ts patterns.

test/build-script-shell-compat.test.ts:
  - Regex assertion that package.json scripts.* contain no bash brace
    groups (Bun-Windows-hostile)
  - Subshell-precedence check for `.version` redirects
  Strips single-quoted strings before regexing so embedded JS code
  inside echo '...' doesn't false-positive.

test/docs-config-keys.test.ts:
  - DEPRECATED_KEYS denylist scanned across docs/**/*.md
  - Round-trip test for `gstack-config get artifacts_sync_mode`
  Defends the v1.27.0.0 rename from doc drift.

Updates to two existing tests:
  - test/setup-conductor-worktree.test.ts: expect `_link_or_copy`
    instead of `ln -snf` at the Conductor-worktree guard call site
  - test/gen-skill-docs.test.ts: same swap at three assertion sites
    (Codex section, Claude link_claude_skill_dirs body, Codex
    link_codex_skill_dirs body)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore: bump v1.38.0.0 + build-script subshells + CHANGELOG

VERSION 1.35.0.0 → 1.38.0.0 (MINOR). PR #1500 (lyon-v2) claimed
v1.37.0.0 ahead of this branch; v1.38.0.0 is the next free MINOR slot
per bin/gstack-next-version queue check. Workspace-aware ship rule
applies — queue-advancing past a claimed version within the same
bump level is explicitly permitted.

package.json build script: three `{ git rev-parse HEAD ...; }` brace
groups → `( git rev-parse HEAD ... )` subshells. Bun's Windows shell
parser doesn't grok bash brace groups; subshells are POSIX-universal.
Originated from @realcarsonterry PR #1460.

CHANGELOG entry covers the full wave:
- Windows install hardening (42-site _link_or_copy + cleanup compat)
- Unicode sanitization architecture (handleCommandInternal + SSE
  replacer)
- Build script POSIX-shell compat (subshells)
- Doc rename (gbrain_sync_mode → artifacts_sync_mode)
- Windows CI on paid faster runner
- 4 new wave tests (29 cases)
Frames each item as a current system property, not a fix narrative.

Credits @realcarsonterry for PRs #1460, #1461, #1462, #1463 (the seed
of the wave). Scope expansion to all 42 setup sites, every server
egress path, Windows CI migration, and codex-flagged P0/P1 fixes
(connect-chrome alias on Windows, SSE replacer, prefix-cleanup
Windows compat) authored on this branch.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: post-ship sync for v1.38.0.0

Document the two architectural invariants that landed in v1.38.0.0 in
their persistent homes (not just CHANGELOG):

- README Windows section: add the `./setup` re-run-after-git-pull
  requirement that `_print_windows_copy_note_once` shows at runtime.
- CONTRIBUTING "Things to know": add the no-raw-`ln` invariant for
  contributors editing `setup`, with the test that enforces it.
- ARCHITECTURE: new "Unicode sanitization at server egress" section
  between Shell injection prevention and Prompt injection defense,
  with egress table (HTTP/batch/SSE) and the post-stringify-regex
  rationale.
- CLAUDE.md: cross-references for both invariants, matching the
  v1.6.0.0 dual-listener pattern (each constraint says which files
  to read before editing and which test pins it).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci(windows): use windows-latest-8-cores instead of unregistered Blacksmith label

actionlint failed PR #1505 because `blacksmith-2vcpu-windows-2022` isn't
in the repo's approved runner-label list (actionlint.yaml only registers
`ubicloud-standard-2`, and Ubicloud doesn't ship a Windows pool).

Switch to GitHub's paid larger Windows runner `windows-latest-8-cores`
— 2x the cores of the free `windows-latest` (8 vs 4) at the larger-runner
rate, no new third-party CI provider, no actionlint config changes.

CHANGELOG: replace "Blacksmith" / "blacksmith-2vcpu-windows-2022" /
"~6x faster spin-up" claims with the actual choice (8 cores vs 4, paid
larger runner).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci(windows): switch from windows-latest-8-cores to ubicloud-standard-2-windows

`windows-latest-8-cores` sat queued indefinitely because the GitHub
larger-runner billing isn't enabled at the org level — the
"Queued — Waiting to run this check" status surfaced on PR #1505 with
no progress for the whole CI run.

Switch to Ubicloud Windows runners (`ubicloud-standard-2-windows`) so
Windows CI uses the same provider as the existing Linux evals
(`ubicloud-standard-2`). Billing stays under one account instead of
two.

Register the new label in actionlint.yaml alongside the existing
ubicloud-standard-2 entry so actionlint doesn't reject it as unknown.

CHANGELOG entry updated: runner row reflects the actual provider chosen,
"Itemized changes" mentions the actionlint.yaml registration, and the
narrative paragraph documents why `windows-latest-8-cores` failed first.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci: migrate all workflows to Ubicloud (Linux + Windows, 8-core)

Switch every `runs-on` in this repo to Ubicloud so CI has a single billing
surface, consistent capacity, and 4x more cores on the workloads that were
previously stuck on free `ubuntu-latest` (2 cores). Windows uses Ubicloud's
Windows pool too — `ubicloud-standard-8-windows` — so the queued-forever
problem with GitHub's `windows-latest-8-cores` paid larger runner (org-level
larger-runner billing not enabled) goes away.

Workflows touched (9):
- evals.yml, evals-periodic.yml, ci-image.yml — bump default + matrix from
  `ubicloud-standard-2` to `ubicloud-standard-8`. The one matrix entry that
  was already on -8 stays.
- windows-free-tests.yml — `ubicloud-standard-2-windows` → `ubicloud-standard-8-windows`.
- make-pdf-gate.yml — matrix `ubuntu-latest` → `ubicloud-standard-8`. macOS
  entry preserved; the poppler-install `if: matrix.os` conditional swaps to
  match the new label.
- actionlint.yml, pr-title-sync.yml, skill-docs.yml, version-gate.yml —
  `ubuntu-latest` → `ubicloud-standard-8`.

.github/actionlint.yaml registers all four Ubicloud labels in one place:
- ubicloud-standard-2
- ubicloud-standard-8
- ubicloud-standard-2-windows  (the v1.38.0.0 windows-free-tests target)
- ubicloud-standard-8-windows  (this PR's windows-free-tests target)

Removed the duplicate `actionlint.yaml` at the repo root that I accidentally
created in the prior commit — actionlint only reads `.github/actionlint.yaml`,
so the root file was dead weight.

CHANGELOG entry updated: a single "all Ubicloud" sentence in the narrative
plus a metrics-row covering the runner pool change, and the itemized line
expanded to enumerate the 9 affected workflows. The previously-orphaned
"Itemized changes" line about just `windows-free-tests.yml` is replaced.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci(windows): revert to free `windows-latest`

Ubicloud doesn't ship Windows runners — confirmed via their docs. The
`ubicloud-standard-*-windows` labels I added do not exist and were causing
`windows-free-tests` to sit "Queued — Waiting to run this check" forever
(GitHub Actions can't tell a typoed label from a self-hosted runner that's
about to register; it just waits).

Three prior Windows-runner attempts all failed for different reasons:
- `blacksmith-2vcpu-windows-2022` — Blacksmith app not installed on the org
- `windows-latest-8-cores` — GitHub paid larger-runner billing not enabled
- `ubicloud-standard-2/8-windows` — Ubicloud doesn't offer Windows at all

The free `windows-latest` runner (4 cores, ~60s spin-up, $0) is the one
path that actually runs. The wave-coverage Windows tests are <30s of real
work; total job time stays under 2 minutes.

Cleaned up `.github/actionlint.yaml` to drop the bogus
`ubicloud-standard-*-windows` entries — kept only the two real Linux labels.

CHANGELOG: split the runner-pool row into Linux (migrated to Ubicloud-8)
vs Windows (stays on free windows-latest), with the why on each. Itemized
line for windows-free-tests rewritten to reflect the actual outcome.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* test(windows): skip Unix-only cases on Windows runner

windows-free-tests on GitHub free windows-latest fails three cases that
depend on Unix tooling the runner doesn't have:

1. `setup-windows-fallback.test.ts` behavior matrix — IS_WINDOWS=0 cells
   assert `ln -snf` produces a real symlink. On Windows-without-Developer-
   Mode (which the free `windows-latest` runner is), `ln -snf` silently
   creates a file copy. That's literally the bug `_link_or_copy` exists
   to work around, so the assertion can never pass there. Skip the whole
   describe block on win32. The static-invariant test (zero raw `ln`
   outside the helper body) above the matrix still runs and pins the
   shape the Windows install relies on.

2. `docs-config-keys.test.ts` round-trip — spawnSync(`bin/gstack-config`)
   on Windows doesn't read the bash shebang and fails to exec. Skip on
   win32; the deprecated-key denylist test in the same file still runs
   and is the actual invariant defending the v1.27.0.0 rename at the doc
   layer.

Use `describe.skipIf(process.platform === 'win32', ...)` and
`test.skipIf(process.platform === 'win32', ...)`. Tests still run on
macOS and Linux unchanged.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-14 21:19:58 -07:00


Architecture

This document explains why gstack is built the way it is. For setup and commands, see CLAUDE.md. For contributing, see CONTRIBUTING.md.

The core idea

gstack gives Claude Code a persistent browser and a set of opinionated workflow skills. The browser is the hard part — everything else is Markdown.

The key insight: an AI agent interacting with a browser needs sub-second latency and persistent state. If every command cold-starts a browser, you're waiting 3-5 seconds per tool call. If the browser dies between commands, you lose cookies, tabs, and login sessions. So gstack runs a long-lived Chromium daemon that the CLI talks to over localhost HTTP.

Claude Code                    gstack
───────────                    ──────
                               ┌────────────────────────┐
  Tool call: $B snapshot -i    │ CLI (compiled binary)  │
  ─────────────────────────→   │ • reads state file     │
                               │ • POST /command        │
                               │   to localhost:PORT    │
                               └───────────┬────────────┘
                                           │ HTTP
                               ┌───────────▼────────────┐
                               │ Server (Bun.serve)     │
                               │ • dispatches command   │
                               │ • talks to Chromium    │
                               │ • returns plain text   │
                               └───────────┬────────────┘
                                           │ CDP
                               ┌───────────▼────────────┐
                               │ Chromium (headless)    │
                               │ • persistent tabs      │
                               │ • cookies carry over   │
                               │ • 30min idle timeout   │
                               └────────────────────────┘

First call starts everything (~3s). Every call after: ~100-200ms.

Why Bun

Node.js would work. Bun is better here for four reasons:

  1. Compiled binaries. bun build --compile produces a single ~58MB executable. No node_modules at runtime, no npx, no PATH configuration. The binary just runs. This matters because gstack installs into ~/.claude/skills/ where users don't expect to manage a Node.js project.

  2. Native SQLite. Cookie decryption reads Chromium's SQLite cookie database directly. Bun has new Database() built in — no better-sqlite3, no native addon compilation, no gyp. One less thing that breaks on different machines.

  3. Native TypeScript. The server runs as bun run server.ts during development. No compilation step, no ts-node, no source maps to debug. The compiled binary is for deployment; source files are for development.

  4. Built-in HTTP server. Bun.serve() is fast, simple, and doesn't need Express or Fastify. The server handles ~10 routes total. A framework would be overhead.

The bottleneck is always Chromium, not the CLI or server. Bun's startup speed (~1ms for the compiled binary vs ~100ms for Node) is nice but not the reason we chose it. The compiled binary and native SQLite are.

The daemon model

Why not start a browser per command?

Playwright can launch Chromium in ~2-3 seconds. For a single screenshot, that's fine. For a QA session with 20+ commands, it's 40+ seconds of browser startup overhead. Worse: you lose all state between commands. Cookies, localStorage, login sessions, open tabs — all gone.

The daemon model means:

  • Persistent state. Log in once, stay logged in. Open a tab, it stays open. localStorage persists across commands.
  • Sub-second commands. After the first call, every command is just an HTTP POST. ~100-200ms round-trip including Chromium's work.
  • Automatic lifecycle. The server auto-starts on first use, auto-shuts down after 30 minutes idle. No process management needed.

State file

The server writes .gstack/browse.json (atomic write via tmp + rename, mode 0o600):

{ "pid": 12345, "port": 34567, "token": "uuid-v4", "startedAt": "...", "binaryVersion": "abc123" }

The CLI reads this file to find the server. If the file is missing or the server fails an HTTP health check, the CLI spawns a new server. On Windows, PID-based process detection is unreliable in Bun binaries, so the health check (GET /health) is the primary liveness signal on all platforms.

Port selection

A random port between 10000 and 60000 is chosen, retried up to 5 times on collision. This means 10 Conductor workspaces can each run their own browse daemon with zero configuration and zero port conflicts. The old approach (scanning a fixed 9400-9409 range) broke constantly in multi-workspace setups.
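The selection loop reduces to the sketch below. `tryListen` is a hypothetical stand-in for the actual bind attempt (the real server binds via Bun.serve); the range and retry count come from the text.

```typescript
// Illustrative port-selection loop: pick a random port in [10000, 60000]
// and retry up to 5 times if binding collides. tryListen returns true
// when the bind succeeded.
const PORT_MIN = 10000;
const PORT_MAX = 60000;

function pickPort(): number {
  return PORT_MIN + Math.floor(Math.random() * (PORT_MAX - PORT_MIN + 1));
}

function bindWithRetry(tryListen: (port: number) => boolean, retries = 5): number {
  for (let attempt = 0; attempt < retries; attempt++) {
    const port = pickPort();
    if (tryListen(port)) return port; // bound successfully
  }
  throw new Error(`no free port after ${retries} attempts`);
}
```

With a ~50000-port range, two workspaces colliding on the first pick is a one-in-tens-of-thousands event, which is why five retries is plenty.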

Version auto-restart

The build writes git rev-parse HEAD to browse/dist/.version. On each CLI invocation, if the binary's version doesn't match the running server's binaryVersion, the CLI kills the old server and starts a new one. This prevents the "stale binary" class of bugs entirely — rebuild the binary, next command picks it up automatically.
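The check itself is just a string comparison between the version baked into the binary and the `binaryVersion` field in the state file. A hedged sketch (`needsRestart` is a hypothetical name; the kill-and-respawn side is not shown):

```typescript
// If the running server was built from a different commit than the CLI
// binary — or there is no readable state at all — the CLI must kill the
// old server and spawn a fresh one. Illustrative helper, not shipped code.
function needsRestart(
  binaryVersion: string,
  state: { binaryVersion?: string } | null,
): boolean {
  return state === null || state.binaryVersion !== binaryVersion;
}
```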

Security model

Localhost only

The HTTP server binds to 127.0.0.1, not 0.0.0.0. It's not reachable from the network.

Dual-listener tunnel architecture (v1.6.0.0)

When a user runs pair-agent --client, the daemon starts an ngrok tunnel so a remote paired agent can drive the browser. Exposing the full daemon surface to the internet (even behind a random ngrok subdomain) meant /health leaked the root token on any Origin spoof, and /cookie-picker embedded the token into HTML that any caller could fetch.

The fix is two HTTP listeners, not one:

  • Local listener (127.0.0.1:LOCAL_PORT) — always bound. Serves bootstrap (/health with token delivery), /cookie-picker, /inspector/*, /welcome, /refs, the sidebar-agent API, and the full command surface. Never forwarded.
  • Tunnel listener (127.0.0.1:TUNNEL_PORT) — bound lazily on /tunnel/start, torn down on /tunnel/stop. Serves a locked allowlist: /connect (pairing ceremony, unauth + rate-limited), /command (scoped tokens only, further restricted to a browser-driving command allowlist), and /sidebar-chat. Everything else 404s.

ngrok forwards only the tunnel port. The security property comes from physical port separation: a tunnel caller cannot reach /health or /cookie-picker because those paths don't exist on that TCP socket. Header inference (check x-forwarded-for, check origin) is unreliable (ngrok header behavior changes; local proxies can add these headers); socket separation isn't.

| Endpoint | Local listener | Tunnel listener | Notes |
|---|---|---|---|
| GET /health | public (no token unless headed/extension) | 404 | Token bootstrap for extension happens locally only |
| GET /connect | public ({alive:true}) | public ({alive:true}) | Probe path for tunnel liveness |
| POST /connect | public (rate-limited 300/min) | public (rate-limited) | Setup-key exchange for pair-agent |
| POST /command | auth (Bearer root OR scoped) | auth (scoped only, allowlisted commands) | Root token on tunnel = 403 |
| POST /sidebar-chat | auth | auth | Lets remote agent post into local sidebar |
| POST /pair | root-only | 404 | Pairing mint — local operator action |
| POST /tunnel/{start,stop} | root-only | 404 | Daemon configuration |
| POST /token, DELETE /token/:id | root-only | 404 | Scoped token mint/revoke |
| GET /cookie-picker, GET /cookie-picker/* | public UI, auth API | 404 | Local-only — reads local browser DBs |
| GET /inspector, /inspector/events, etc. | auth | 404 | Extension callback, local-only |
| GET /welcome | public | 404 | GStack Browser landing page, local-only |
| GET /refs | auth | 404 | Ref map — internal state |
| GET /activity/stream | Bearer OR HttpOnly gstack_sse cookie | 404 | SSE. ?token= query param no longer accepted |
| GET /inspector/events | Bearer OR HttpOnly gstack_sse cookie | 404 | SSE. Same cookie as /activity/stream |
| POST /sse-session | auth (Bearer) | 404 | Mints the view-only 30-min SSE session cookie |

Tunnel surface denial logs. Every rejection on the tunnel listener (path_not_on_tunnel, root_token_on_tunnel, missing_scoped_token, disallowed_command:*) is recorded asynchronously to ~/.gstack/security/attempts.jsonl with timestamp, source IP (from x-forwarded-for), path, and method. Rate-capped at 60 writes/min globally to prevent log-flood DoS. Shares the attempt log with the prompt-injection scanner.

SSE session cookies. EventSource can't send Authorization headers, so the extension POSTs /sse-session once at bootstrap with the root Bearer and receives a 30-minute view-only cookie (gstack_sse, HttpOnly, SameSite=Strict). The cookie is valid ONLY for /activity/stream and /inspector/events — it is NOT a scoped token and cannot be used on /command. Scope isolation is enforced by the module boundary: sse-session-cookie.ts has no imports from token-registry.ts.

Non-goal in this wave (tracked as #1136): the cookie-import-browser path launches Chrome with --remote-debugging-port=<random>. On Windows with App-Bound Encryption v20, a same-user local process can connect to that port and exfiltrate decrypted v20 cookies — an elevation path relative to reading the SQLite DB directly (which can't decrypt v20 without DPAPI context). Fix direction is --remote-debugging-pipe instead of TCP; requires restructuring the CDP client.

Bearer token auth

Every server session generates a random UUID token, written to the state file with mode 0o600 (owner-only read). Every HTTP request that mutates browser state must include Authorization: Bearer <token>. If the token doesn't match, the server returns 401.

This prevents other processes on the same machine from talking to your browse server. The cookie picker UI (/cookie-picker) and health check (/health) are exempt on the local listener — they're 127.0.0.1-bound and don't execute commands. On the tunnel listener nothing is exempt except /connect.
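The core of the gate is a single comparison. A minimal sketch, assuming a header string and a session token; the real server layers per-route exemptions and scoped-token logic on top of this:

```typescript
// Minimal Bearer gate: a mutating request must carry exactly
// "Authorization: Bearer <token>"; anything else gets a 401.
// Illustrative only — exempt routes and scoped tokens live elsewhere.
function authStatus(authHeader: string | null, sessionToken: string): 200 | 401 {
  return authHeader === `Bearer ${sessionToken}` ? 200 : 401;
}
```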

Cookie handling

Cookies are the most sensitive data gstack handles. The design:

  1. Keychain access requires user approval. First cookie import per browser triggers a macOS Keychain dialog. The user must click "Allow" or "Always Allow." gstack never silently accesses credentials.

  2. Decryption happens in-process. Cookie values are decrypted in memory (PBKDF2 + AES-128-CBC), loaded into the Playwright context, and never written to disk in plaintext. The cookie picker UI never displays cookie values — only domain names and counts.

  3. Database is read-only. gstack copies the Chromium cookie DB to a temp file (to avoid SQLite lock conflicts with the running browser) and opens it read-only. It never modifies your real browser's cookie database.

  4. Key caching is per-session. The Keychain password + derived AES key are cached in memory for the server's lifetime. When the server shuts down (idle timeout or explicit stop), the cache is gone.

  5. No cookie values in logs. Console, network, and dialog logs never contain cookie values. The cookies command outputs cookie metadata (domain, name, expiry) but values are truncated.

Shell injection prevention

The browser registry (Comet, Chrome, Arc, Brave, Edge) is hardcoded. Database paths are constructed from known constants, never from user input. Keychain access uses Bun.spawn() with explicit argument arrays, not shell string interpolation.

Unicode sanitization at server egress (v1.38.0.0)

Page content harvested by CDP can contain lone UTF-16 surrogate halves (orphaned high or low surrogates from broken JavaScript string handling on the page). When those reach JSON.stringify, Bun emits them as \uD800-style escape sequences that the downstream consumer's JSON.parse accepts, but the Anthropic API rejects with a 400 — turning a single weird page into a session-killing error. Defense is single-point, applied at every server egress that ships page-derived strings.

| Egress path | Module | Sanitization point |
|---|---|---|
| POST /command (HTTP) | browse/src/server.ts | handleCommandInternal wrapper (sanitizes the result of handleCommandInternalImpl) |
| POST /command/batch | browse/src/server.ts | Same wrapper — batch consumers inherit it |
| GET /activity/stream (SSE) | browse/src/server.ts | sanitizeReplacer passed to JSON.stringify |
| GET /inspector/events (SSE) | browse/src/server.ts | sanitizeReplacer passed to JSON.stringify |

sanitizeReplacer is a JSON.stringify replacer function that cleans every string value during encoding. Post-stringify regex doesn't work here — JSON.stringify has already converted \uD800 into the literal escape sequence "\\ud800" before the regex could match, so the replacer must run inside the encoding pipeline. The pure-string helper sanitizeLoneSurrogates is used directly for text/plain responses.
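The two helpers can be sketched like this. The bodies are reconstructed from the description above, not copied from server.ts: the exact regex and the choice of U+FFFD as the replacement character are assumptions.

```typescript
// A lone high surrogate is one not followed by a low surrogate; a lone
// low surrogate is one not preceded by a high surrogate. Valid pairs
// match neither alternative. Replacement with U+FFFD is an assumption.
const LONE_SURROGATE =
  /[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?<![\uD800-\uDBFF])[\uDC00-\uDFFF]/g;

function sanitizeLoneSurrogates(s: string): string {
  return s.replace(LONE_SURROGATE, "\uFFFD");
}

// Runs inside JSON.stringify, so it sees raw string values *before*
// they are escaped — which is exactly why a post-stringify regex can't
// do this job.
function sanitizeReplacer(key: string, value: unknown): unknown {
  return typeof value === "string" ? sanitizeLoneSurrogates(value) : value;
}
```

Usage on an SSE producer would then be `JSON.stringify(payload, sanitizeReplacer)`, and `sanitizeLoneSurrogates(body)` directly for text/plain responses.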

Architectural invariant. Every new SSE/WebSocket writer or HTTP response that ships page-content-derived strings MUST go through one of two paths: JSON.stringify(payload, sanitizeReplacer) for object payloads, or sanitizeLoneSurrogates(body) for text bodies. New surfaces that bypass both will desync the system. Inline comments at both SSE producers in server.ts say so; browse/test/server-sanitize-surrogates.test.ts pins wiring with bug-repro + invariant tests (handleCommandInternalImpl rename, central sanitization line, replacer existence, SSE producers stringify with replacer).

Prompt injection defense (sidebar agent)

The Chrome sidebar agent has tools (Bash, Read, Glob, Grep, WebFetch) and reads hostile web pages, so it's the part of gstack most exposed to prompt injection. Defense is layered, not single-point.

  1. L1-L3 content security (browse/src/content-security.ts). Runs on every page-content command and every tool output: datamarking, hidden-element strip, ARIA regex, URL blocklist, and a trust-boundary envelope wrapper. Applied at both the server and the agent.

  2. L4 ML classifier — TestSavantAI (browse/src/security-classifier.ts). A 22MB BERT-small ONNX model (int8 quantized) bundled with the agent. Runs locally, no network. Scans every user message and every Read/Glob/Grep/WebFetch tool output before Claude sees it. Opt-in 721MB DeBERTa-v3 ensemble via GSTACK_SECURITY_ENSEMBLE=deberta.

  3. L4b transcript classifier. A Claude Haiku pass that looks at the full conversation shape (user message, tool calls, tool output), not just text. Gated by LOG_ONLY: 0.40 so most clean traffic skips the paid call.

  4. L5 canary token (browse/src/security.ts). A random token injected into the system prompt at session start. Rolling-buffer detection across text_delta and input_json_delta streams catches the token if it shows up anywhere in Claude's output, tool arguments, URLs, or file writes. Deterministic BLOCK — if the token leaks, the attacker convinced Claude to reveal the system prompt, and the session ends.

  5. L6 ensemble combiner (combineVerdict). BLOCK requires agreement from two ML classifiers at >= WARN (0.75), not a single confident hit. This is the Stack Overflow instruction-writing false-positive mitigation. On tool-output scans, single-layer high confidence BLOCKs directly — the content wasn't user-authored, so the FP concern doesn't apply.
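The L6 rule above can be written down compactly. This is a hedged sketch: the 0.75 WARN threshold and the two-classifier-agreement rule come from the text, but the verdict shape and function signature are assumptions, not combineVerdict's real interface.

```typescript
// Combine per-classifier scores into a verdict. On user-authored input,
// BLOCK needs two classifiers at or above WARN (the Stack Overflow
// false-positive mitigation); on tool output, one confident hit blocks
// directly because the content wasn't user-authored.
type Verdict = "PASS" | "WARN" | "BLOCK";
const WARN_THRESHOLD = 0.75;

function combineVerdictSketch(scores: number[], isToolOutput: boolean): Verdict {
  const confident = scores.filter((s) => s >= WARN_THRESHOLD).length;
  if (isToolOutput && confident >= 1) return "BLOCK";
  if (confident >= 2) return "BLOCK";
  if (confident === 1) return "WARN";
  return "PASS";
}
```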

Critical constraint: security-classifier.ts runs only in the sidebar-agent process, never in the compiled browse binary. @huggingface/transformers v4 requires onnxruntime-node, which fails dlopen from Bun compile's temp extract directory. Only the pure-string pieces (canary inject/check, verdict combiner, attack log, status) are in security.ts, which is safe to import from server.ts.

Env knobs: GSTACK_SECURITY_OFF=1 is a real kill switch (skips ML scan, canary still injects). Model cache at ~/.gstack/models/testsavant-small/ (112MB, first run) and ~/.gstack/models/deberta-v3-injection/ (721MB, opt-in only). Attack log at ~/.gstack/security/attempts.jsonl (salted sha256 + domain, rotates at 10MB, 5 generations). Per-device salt at ~/.gstack/security/device-salt (0600), cached in-process to survive FS-unwritable environments.

Visibility. The sidebar header shows a shield icon (green/amber/red) polled via /sidebar-chat. A centered banner appears on canary leak or BLOCK verdict with the exact layer scores. bin/gstack-security-dashboard aggregates local attempts; supabase/functions/community-pulse aggregates opt-in community telemetry across users.

The ref system

Refs (@e1, @e2, @c1) are how the agent addresses page elements without writing CSS selectors or XPath.

How it works

1. Agent runs: $B snapshot -i
2. Server calls Playwright's page.accessibility.snapshot()
3. Parser walks the ARIA tree, assigns sequential refs: @e1, @e2, @e3...
4. For each ref, builds a Playwright Locator: getByRole(role, { name }).nth(index)
5. Stores Map<string, RefEntry> on the BrowserManager instance (role + name + Locator)
6. Returns the annotated tree as plain text

Later:
7. Agent runs: $B click @e3
8. Server resolves @e3 → Locator → locator.click()
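Steps 3 through 5 can be sketched without Playwright: walk the tree, hand out sequential refs, and remember the per-(role, name) index that would feed `.nth()`. The node and entry shapes below are illustrative assumptions, not the real parser's types.

```typescript
// Illustrative sketch of ref assignment (steps 3-5). In the real server each
// entry would also hold page.getByRole(role, { name }).nth(nth).
interface AriaNode { role: string; name: string; children?: AriaNode[] }
interface RefEntry { role: string; name: string; nth: number }

function buildRefMap(root: AriaNode): { refMap: Map<string, RefEntry>; text: string[] } {
  const refMap = new Map<string, RefEntry>();
  const seen = new Map<string, number>(); // duplicates of the same role+name get .nth(index)
  const text: string[] = [];
  let counter = 0;

  const walk = (node: AriaNode, depth: number) => {
    const key = `${node.role}\u0000${node.name}`;
    const nth = seen.get(key) ?? 0;
    seen.set(key, nth + 1);
    const ref = `e${++counter}`;
    refMap.set(ref, { role: node.role, name: node.name, nth });
    text.push(`${"  ".repeat(depth)}@${ref} ${node.role} "${node.name}"`);
    for (const child of node.children ?? []) walk(child, depth + 1);
  };
  walk(root, 0);
  return { refMap, text };
}
```

The `nth` bookkeeping is what disambiguates two "Save" buttons: both resolve through `getByRole("button", { name: "Save" })`, but with different `.nth()` indices.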

Why Locators, not DOM mutation

The obvious approach is to inject data-ref="@e1" attributes into the DOM. This breaks on:

  • CSP (Content Security Policy). Strict CSP on many production sites blocks the injected scripts that would perform the mutation.
  • React/Vue/Svelte hydration. Framework reconciliation can strip injected attributes.
  • Shadow DOM. Can't reach inside shadow roots from the outside.

Playwright Locators are external to the DOM. They use the accessibility tree (which Chromium maintains internally) and getByRole() queries. No DOM mutation, no CSP issues, no framework conflicts.

Ref lifecycle

Refs are cleared on navigation (the framenavigated event on the main frame). After navigation every stored Locator is stale, so the agent must run snapshot again to get fresh refs. This is by design: stale refs should fail loudly, not click the wrong element.

Ref staleness detection

SPAs can mutate the DOM without triggering framenavigated (e.g. React router transitions, tab switches, modal opens). This makes refs stale even though the page URL didn't change. To catch this, resolveRef() performs an async count() check before using any ref:

resolveRef(@e3) → entry = refMap.get("e3")
                → count = await entry.locator.count()
                → if count === 0: throw "Ref @e3 is stale — element no longer exists. Run 'snapshot' to get fresh refs."
                → if count > 0: return { locator }

This fails fast (~5ms overhead) instead of letting Playwright's 30-second action timeout expire on a missing element. The RefEntry stores role and name metadata alongside the Locator so the error message can tell the agent what the element was.
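A minimal sketch of that check, with the Locator narrowed to the single method used (the names here are assumptions, not the real server code):

```typescript
// Sketch of the staleness probe. CountableLocator stands in for a Playwright
// Locator; only count() is needed for the check.
interface CountableLocator { count(): Promise<number> }
interface StoredRef { role: string; name: string; locator: CountableLocator }

async function resolveRef(
  refMap: Map<string, StoredRef>,
  ref: string, // e.g. "@e3"
): Promise<{ locator: CountableLocator }> {
  const entry = refMap.get(ref.replace(/^@/, ""));
  if (!entry) throw new Error(`Unknown ref ${ref}. Run 'snapshot' to get fresh refs.`);
  // ~5ms probe instead of a 30s action timeout on a vanished element.
  if ((await entry.locator.count()) === 0) {
    throw new Error(
      `Ref ${ref} is stale — element (${entry.role} "${entry.name}") no longer exists. Run 'snapshot' to get fresh refs.`,
    );
  }
  return { locator: entry.locator };
}
```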

Cursor-interactive refs (@c)

The -C flag finds elements that are clickable but not in the ARIA tree — things styled with cursor: pointer, elements with onclick attributes, or custom tabindex. These get @c1, @c2 refs in a separate namespace. This catches custom components that frameworks render as <div> but are actually buttons.
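The predicate behind that scan might look like the following sketch. In practice this logic would have to run inside the page (e.g. via page.evaluate) where computed styles and attributes are visible; the probe fields here are assumptions, not the real implementation.

```typescript
// Illustrative-only predicate for the -C scan. Fields mirror what an in-page
// probe could read with getComputedStyle and attribute checks.
interface ElementProbe {
  cursor: string;          // computed style: cursor
  hasOnclick: boolean;     // onclick attribute or handler present
  tabindex: number | null; // tabindex attribute, if any
  inAriaTree: boolean;     // already reachable via an @e ref
}

function isCursorInteractive(el: ElementProbe): boolean {
  if (el.inAriaTree) return false; // @c refs only cover what @e refs miss
  return el.cursor === "pointer" || el.hasOnclick || (el.tabindex !== null && el.tabindex >= 0);
}
```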

Logging architecture

Three ring buffers (50,000 entries each, O(1) push):

Browser events → CircularBuffer (in-memory) → Async flush to .gstack/*.log

Console messages, network requests, and dialog events each have their own buffer. Flushing happens every 1 second — the server appends only new entries since the last flush. This means:

  • HTTP request handling is never blocked by disk I/O
  • Logs survive server crashes (up to 1 second of data loss)
  • Memory is bounded (50K entries × 3 buffers)
  • Disk files are append-only, readable by external tools

The console, network, and dialog commands read from the in-memory buffers, not disk. Disk files are for post-mortem debugging.
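A minimal sketch of the ring buffer plus flush cursor, assuming a monotonic write index (the names are illustrative):

```typescript
// Sketch of a fixed-capacity ring buffer with an append-only flush cursor.
// push() is O(1): it overwrites in place, evicting the oldest entry.
class CircularBuffer<T> {
  private buf: (T | undefined)[];
  private total = 0;   // entries ever pushed (monotonic)
  private flushed = 0; // entries already handed to the flusher
  constructor(private readonly capacity: number) {
    this.buf = new Array(capacity);
  }

  push(item: T): void {
    this.buf[this.total % this.capacity] = item;
    this.total++;
  }

  // Called by the 1-second flush timer: return only entries added since the
  // last flush (oldest first), skipping anything already evicted.
  drainNew(): T[] {
    const start = Math.max(this.flushed, this.total - this.capacity);
    const out: T[] = [];
    for (let i = start; i < this.total; i++) out.push(this.buf[i % this.capacity] as T);
    this.flushed = this.total;
    return out;
  }
}
```

A setInterval on the server could then append the drainNew() output to the .gstack/*.log file once per second, while the read commands serve straight from memory.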

SKILL.md template system

The problem

SKILL.md files tell Claude how to use the browse commands. If the docs list a flag that doesn't exist, or miss a command that was added, the agent hits errors. Hand-maintained docs always drift from code.

The solution

SKILL.md.tmpl          (human-written prose + placeholders)
       ↓
gen-skill-docs.ts      (reads source code metadata)
       ↓
SKILL.md               (committed, auto-generated sections)

Templates contain the workflows, tips, and examples that require human judgment. Placeholders are filled from source code at build time:

| Placeholder | Source | What it generates |
| --- | --- | --- |
| {{COMMAND_REFERENCE}} | commands.ts | Categorized command table |
| {{SNAPSHOT_FLAGS}} | snapshot.ts | Flag reference with examples |
| {{PREAMBLE}} | gen-skill-docs.ts | Startup block: update check, session tracking, contributor mode, AskUserQuestion format |
| {{BROWSE_SETUP}} | gen-skill-docs.ts | Binary discovery + setup instructions |
| {{BASE_BRANCH_DETECT}} | gen-skill-docs.ts | Dynamic base branch detection for PR-targeting skills (ship, review, qa, plan-ceo-review) |
| {{QA_METHODOLOGY}} | gen-skill-docs.ts | Shared QA methodology block for /qa and /qa-only |
| {{DESIGN_METHODOLOGY}} | gen-skill-docs.ts | Shared design audit methodology for /plan-design-review and /design-review |
| {{REVIEW_DASHBOARD}} | gen-skill-docs.ts | Review Readiness Dashboard for /ship pre-flight |
| {{TEST_BOOTSTRAP}} | gen-skill-docs.ts | Test framework detection, bootstrap, CI/CD setup for /qa, /ship, /design-review |
| {{CODEX_PLAN_REVIEW}} | gen-skill-docs.ts | Optional cross-model plan review (Codex or Claude subagent fallback) for /plan-ceo-review and /plan-eng-review |
| {{DESIGN_SETUP}} | resolvers/design.ts | Discovery pattern for $D design binary, mirrors {{BROWSE_SETUP}} |
| {{DESIGN_SHOTGUN_LOOP}} | resolvers/design.ts | Shared comparison board feedback loop for /design-shotgun, /plan-design-review, /design-consultation |
| {{UX_PRINCIPLES}} | resolvers/design.ts | User behavioral foundations (scanning, satisficing, goodwill reservoir, trunk test) for /design-html, /design-shotgun, /design-review, /plan-design-review |
| {{GBRAIN_CONTEXT_LOAD}} | resolvers/gbrain.ts | Brain-first context search with keyword extraction, health awareness, and data-research routing. Injected into 10 brain-aware skills. Suppressed on non-brain hosts. |
| {{GBRAIN_SAVE_RESULTS}} | resolvers/gbrain.ts | Post-skill brain persistence with entity enrichment, throttle handling, and per-skill save instructions. 8 skill-specific save formats. |

This is structurally sound — if a command exists in code, it appears in docs. If it doesn't exist, it can't appear.
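The fill step itself can be sketched as a strict substitution pass that fails loudly on any placeholder without a generator. The function name and error wording below are assumptions, not gen-skill-docs.ts internals.

```typescript
// Sketch of the template fill step: every {{NAME}} must have a generated body,
// otherwise the build fails instead of emitting stale docs.
function fillTemplate(tmpl: string, generated: Record<string, string>): string {
  return tmpl.replace(/\{\{([A-Z_]+)\}\}/g, (match, key: string) => {
    const body = generated[key];
    if (body === undefined) throw new Error(`No generator for ${match}: refusing to emit docs`);
    return body;
  });
}
```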

The preamble

Every skill starts with a {{PREAMBLE}} block that runs before the skill's own logic. It handles five things in a single bash command:

  1. Update check — calls gstack-update-check, reports if an upgrade is available.
  2. Session tracking — touches ~/.gstack/sessions/$PPID and counts active sessions (files modified in the last 2 hours). When 3+ sessions are running, all skills enter "ELI16 mode" — every question re-grounds the user on context because they're juggling windows.
  3. Operational self-improvement — at the end of every skill session, the agent reflects on failures (CLI errors, wrong approaches, project quirks) and logs operational learnings to the project's JSONL file for future sessions.
  4. AskUserQuestion format — universal format: context, question, RECOMMENDATION: Choose X because ___, lettered options. Consistent across all skills.
  5. Search Before Building — before building infrastructure or unfamiliar patterns, search first. Three layers of knowledge: tried-and-true (Layer 1), new-and-popular (Layer 2), first-principles (Layer 3). When first-principles reasoning reveals conventional wisdom is wrong, the agent names the "eureka moment" and logs it. See ETHOS.md for the full builder philosophy.

Why committed, not generated at runtime?

Three reasons:

  1. Claude reads SKILL.md at skill load time. There's no build step when a user invokes /browse. The file must already exist and be correct.
  2. CI can validate freshness. gen:skill-docs --dry-run + git diff --exit-code catches stale docs before merge.
  3. Git blame works. You can see when a command was added and in which commit.

Template test tiers

| Tier | What | Cost | Speed |
| --- | --- | --- | --- |
| 1 — Static validation | Parse every $B command in SKILL.md, validate against registry | Free | <2s |
| 2 — E2E via claude -p | Spawn real Claude session, run each skill, check for errors | ~$3.85 | ~20min |
| 3 — LLM-as-judge | Sonnet scores docs on clarity/completeness/actionability | ~$0.15 | ~30s |

Tier 1 runs on every bun test. Tiers 2+3 are gated behind EVALS=1. The idea is: catch 95% of issues for free, use LLMs only for judgment calls.
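A Tier 1 check in this spirit might look like the following sketch; the regex and names are assumptions, not the actual test code.

```typescript
// Sketch of Tier 1 static validation: pull every `$B <cmd>` out of SKILL.md
// text and report commands the registry doesn't know about.
function findUnknownCommands(skillMd: string, registry: Set<string>): string[] {
  const unknown = new Set<string>();
  for (const m of skillMd.matchAll(/\$B\s+([a-z-]+)/g)) {
    if (!registry.has(m[1])) unknown.add(m[1]);
  }
  return [...unknown];
}
```

Because the registry is imported from the same source the server dispatches on, a doc that mentions a removed command fails the suite immediately.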

Command dispatch

Commands are categorized by side effects:

  • READ (text, html, links, console, cookies, ...): No mutations. Safe to retry. Returns page state.
  • WRITE (goto, click, fill, press, ...): Mutates page state. Not idempotent.
  • META (snapshot, screenshot, tabs, chain, ...): Server-level operations that don't fit neatly into read/write.

This isn't just organizational. The server uses it for dispatch:

if (READ_COMMANDS.has(cmd))   handleReadCommand(cmd, args, bm)
if (WRITE_COMMANDS.has(cmd))  handleWriteCommand(cmd, args, bm)
if (META_COMMANDS.has(cmd))   handleMetaCommand(cmd, args, bm, shutdown)

The help command returns all three sets so agents can self-discover available commands.
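As a runnable sketch of the same idea (categories returned as strings instead of calling the real handlers; the command lists are abbreviated from the text above):

```typescript
// Sketch of Set-based dispatch plus help introspection. Command lists are
// abbreviated; handler bodies are omitted.
const READ_COMMANDS = new Set(["text", "html", "links", "console", "cookies"]);
const WRITE_COMMANDS = new Set(["goto", "click", "fill", "press"]);
const META_COMMANDS = new Set(["snapshot", "screenshot", "tabs", "chain", "help"]);

function dispatch(cmd: string): "read" | "write" | "meta" {
  if (READ_COMMANDS.has(cmd)) return "read";   // no mutations, safe to retry
  if (WRITE_COMMANDS.has(cmd)) return "write"; // mutates page state
  if (META_COMMANDS.has(cmd)) return "meta";   // server-level operation
  throw new Error(`Unknown command '${cmd}'. Run 'help' to list available commands.`);
}

// help returns all three sets so agents can self-discover commands.
function helpText(): string {
  return [
    `READ: ${[...READ_COMMANDS].join(" ")}`,
    `WRITE: ${[...WRITE_COMMANDS].join(" ")}`,
    `META: ${[...META_COMMANDS].join(" ")}`,
  ].join("\n");
}
```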

Error philosophy

Errors are for AI agents, not humans. Every error message must be actionable:

  • "Element not found" → "Element not found or not interactable. Run snapshot -i to see available elements."
  • "Selector matched multiple elements" → "Selector matched multiple elements. Use @refs from snapshot instead."
  • Timeout → "Navigation timed out after 30s. The page may be slow or the URL may be wrong."

Playwright's native errors are rewritten through wrapError() to strip internal stack traces and add guidance. The agent should be able to read the error and know what to do next without human intervention.
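A sketch of what that rewriting might look like, assuming typical Playwright message shapes; the patterns and exact wording here are guesses, not the real wrapError() implementation.

```typescript
// Sketch of error rewriting: map raw Playwright messages to the agent-facing
// guidance above and drop stack/call-log noise.
function wrapError(raw: Error): Error {
  const msg = raw.message.split("\n")[0]; // keep only the first line
  if (/strict mode violation|resolved to \d+ elements/i.test(msg)) {
    return new Error("Selector matched multiple elements. Use @refs from snapshot instead.");
  }
  if (/Timeout \d+ms exceeded/i.test(msg)) {
    return new Error("Navigation timed out after 30s. The page may be slow or the URL may be wrong.");
  }
  if (/not found|not visible|not attached/i.test(msg)) {
    return new Error("Element not found or not interactable. Run snapshot -i to see available elements.");
  }
  return new Error(msg); // unknown errors pass through, minus the noise
}
```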

Crash recovery

The server doesn't try to self-heal. If Chromium crashes (browser.on('disconnected')), the server exits immediately. The CLI detects the dead server on the next command and auto-restarts. This is simpler and more reliable than trying to reconnect to a half-dead browser process.

E2E test infrastructure

Session runner (test/helpers/session-runner.ts)

E2E tests spawn claude -p as a completely independent subprocess — not via the Agent SDK, which can't nest inside Claude Code sessions. The runner:

  1. Writes the prompt to a temp file (avoids shell escaping issues)
  2. Spawns sh -c 'cat prompt | claude -p --output-format stream-json --verbose'
  3. Streams NDJSON from stdout for real-time progress
  4. Races against a configurable timeout
  5. Parses the full NDJSON transcript into structured results

The parseNDJSON() function is pure — no I/O, no side effects — making it independently testable.
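In that spirit, a pure NDJSON parser can be sketched as follows (the event shape and the tolerance for non-JSON lines are assumptions):

```typescript
// Sketch of a pure NDJSON parser: string in, parsed events out. No I/O.
function parseNDJSON(transcript: string): Array<Record<string, unknown>> {
  const events: Array<Record<string, unknown>> = [];
  for (const line of transcript.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed) continue; // skip blank lines
    try {
      events.push(JSON.parse(trimmed));
    } catch {
      // the stream can interleave non-JSON noise; tolerate and move on
    }
  }
  return events;
}
```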

Observability data flow

  skill-e2e-*.test.ts
        │
        │ generates runId, passes testName + runId to each call
        │
  ┌─────┼──────────────────────────────┐
  │     │                              │
  │  runSkillTest()              evalCollector
  │  (session-runner.ts)         (eval-store.ts)
  │     │                              │
  │  per tool call:              per addTest():
  │  ┌──┼──────────┐              savePartial()
  │  │  │          │                   │
  │  ▼  ▼          ▼                   ▼
  │ [HB] [PL]    [NJ]          _partial-e2e.json
  │  │    │        │             (atomic overwrite)
  │  │    │        │
  │  ▼    ▼        ▼
  │ e2e-  prog-  {name}
  │ live  ress   .ndjson
  │ .json .log
  │
  │  on failure:
  │  {name}-failure.json
  │
  │  ALL files in ~/.gstack-dev/
  │  Run dir: e2e-runs/{runId}/
  │
  │         eval-watch.ts
  │              │
  │        ┌─────┴─────┐
  │     read HB     read partial
  │        └─────┬─────┘
  │              ▼
  │        render dashboard
  │        (stale >10min? warn)

Split ownership: session-runner owns the heartbeat (current test state), eval-store owns partial results (completed test state). The watcher reads both. Neither component knows about the other — they share data only through the filesystem.

Non-fatal everything: All observability I/O is wrapped in try/catch. A write failure never causes a test to fail. The tests themselves are the source of truth; observability is best-effort.

Machine-readable diagnostics: Each test result includes exit_reason (success, timeout, error_max_turns, error_api, exit_code_N), timeout_at_turn, and last_tool_call. This enables jq queries like:

jq '.tests[] | select(.exit_reason == "timeout") | .last_tool_call' ~/.gstack-dev/evals/_partial-e2e.json

Eval persistence (test/helpers/eval-store.ts)

The EvalCollector accumulates test results and writes them in two ways:

  1. Incremental: savePartial() writes _partial-e2e.json after each test (atomic: write .tmp, fs.renameSync). Survives kills.
  2. Final: finalize() writes a timestamped eval file (e.g. e2e-20260314-143022.json). The partial file is never cleaned up — it persists alongside the final file for observability.

eval:compare diffs two eval runs. eval:summary aggregates stats across all runs in ~/.gstack-dev/evals/.

Test tiers

| Tier | What | Cost | Speed |
| --- | --- | --- | --- |
| 1 — Static validation | Parse $B commands, validate against registry, observability unit tests | Free | <5s |
| 2 — E2E via claude -p | Spawn real Claude session, run each skill, scan for errors | ~$3.85 | ~20min |
| 3 — LLM-as-judge | Sonnet scores docs on clarity/completeness/actionability | ~$0.15 | ~30s |

Tier 1 runs on every bun test. Tiers 2+3 are gated behind EVALS=1. The idea: catch 95% of issues for free, use LLMs only for judgment calls and integration testing.

What's intentionally not here

  • No WebSocket streaming. HTTP request/response is simpler, debuggable with curl, and fast enough. Streaming would add complexity for marginal benefit.
  • No MCP protocol. MCP adds JSON schema overhead per request and requires a persistent connection. Plain HTTP + plain text output is lighter on tokens and easier to debug.
  • No multi-user support. One server per workspace, one user. The token auth is defense-in-depth, not multi-tenancy.
  • No Windows/Linux cookie decryption. macOS Keychain is the only supported credential store. Linux (GNOME Keyring/kwallet) and Windows (DPAPI) are architecturally possible but not implemented.
  • No iframe auto-discovery. $B frame supports cross-frame interaction (CSS selector, @ref, --name, --url matching), but the ref system does not auto-crawl iframes during snapshot. You must explicitly enter a frame context first.