mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-10 22:47:26 +08:00
* feat(gbrain-sync): queue primitives + writer shims
Adds bin/gstack-brain-enqueue (atomic append to sync queue) and
bin/gstack-jsonl-merge (git merge driver, ts-sort with SHA-256 fallback).
Wires one backgrounded enqueue call into learnings-log, timeline-log,
review-log, and developer-profile --migrate. question-log and
question-preferences stay local per Codex v2 decision.
gstack-config gains gbrain_sync_mode (off/artifacts-only/full) and
gbrain_sync_mode_prompted keys, plus GSTACK_HOME env alignment so
tests don't leak into real ~/.gstack/config.yaml.
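The ts-sort-with-SHA-256-fallback strategy can be sketched as below. A simplified two-way shape is shown (the shipped driver is a 3-way git merge driver), and `mergeJsonl` plus the row layout are illustrative names, not the real API:

```typescript
import { createHash } from "node:crypto";

// Union both sides' lines, de-dupe, order by each row's ts when parseable,
// and fall back to the line's SHA-256 for a deterministic order otherwise.
function mergeJsonl(ours: string, theirs: string): string {
  const lines = [...new Set([...ours.split("\n"), ...theirs.split("\n")].filter(Boolean))];
  const key = (line: string): [number, string] => {
    try {
      const ts = Date.parse(JSON.parse(line).ts);
      if (!Number.isNaN(ts)) return [ts, ""];
    } catch {
      // non-JSON line: fall through to hash ordering
    }
    return [Number.MAX_SAFE_INTEGER, createHash("sha256").update(line).digest("hex")];
  };
  return (
    lines
      .map((l) => ({ l, k: key(l) }))
      .sort((a, b) => (a.k[0] !== b.k[0] ? a.k[0] - b.k[0] : a.k[1].localeCompare(b.k[1])))
      .map((x) => x.l)
      .join("\n") + "\n"
  );
}

// Concurrent appends on two machines converge to one ordered, de-duped log.
const merged = mergeJsonl(
  '{"ts":"2026-05-02T00:00:00Z","note":"b"}\n',
  '{"ts":"2026-05-01T00:00:00Z","note":"a"}\n{"ts":"2026-05-02T00:00:00Z","note":"b"}\n',
);
const rows = merged.trim().split("\n").map((l) => JSON.parse(l));
console.log(rows.map((r) => r.note)); // ["a", "b"]
```

Because both orderings are deterministic, either side of a conflict produces the same merged file, which is what lets the driver run unattended.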
* feat(gbrain-sync): --once drain + secret scan + push
bin/gstack-brain-sync is the core sync binary. Subcommands: --once
(drain queue, allowlist-filter, privacy-class-filter, secret-scan
staged diff, commit with template, push with fetch+merge retry),
--status, --skip-file <path>, --drop-queue --yes, --discover-new
(cursor-based detection of artifact writes that skip the shim).
Secret regex families: AWS keys, GitHub tokens (ghp_/gho_/ghu_/ghs_/
ghr_/github_pat_), OpenAI sk-, PEM blocks, JWTs, bearer-token-in-JSON.
On hit: unstage, preserve queue, print remediation hint (--skip-file
or edit), exit clean. No daemon — invoked by preamble at skill
boundaries.
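The regex families can be sketched like this. These patterns are illustrative approximations, not the shipped regexes (real token grammars have more variants):

```typescript
// Illustrative secret patterns; the shipped regexes may differ in detail.
const SECRET_PATTERNS: Record<string, RegExp> = {
  awsKey: /\bAKIA[0-9A-Z]{16}\b/,
  githubToken: /\b(?:gh[posur]_[A-Za-z0-9]{36,}|github_pat_[A-Za-z0-9_]{22,})\b/,
  openaiKey: /\bsk-[A-Za-z0-9_-]{20,}\b/,
  pemBlock: /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
  jwt: /\beyJ[A-Za-z0-9_-]{4,}\.[A-Za-z0-9_-]{4,}\.[A-Za-z0-9_-]{4,}\b/,
};

// Returns the families that hit; a non-empty result means: unstage,
// preserve the queue, print the --skip-file remediation hint.
function scanStagedDiff(diff: string): string[] {
  return Object.entries(SECRET_PATTERNS)
    .filter(([, re]) => re.test(diff))
    .map(([family]) => family);
}

console.log(scanStagedDiff('+ key = "AKIAIOSFODNN7EXAMPLE"')); // ["awsKey"]
console.log(scanStagedDiff("+ nothing secret here"));          // []
```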
* feat(gbrain-sync): init, restore, uninstall, consumer registry
bin/gstack-brain-init: idempotent first-run. git init ~/.gstack/,
.gitignore=*, canonical .brain-allowlist + .brain-privacy-map.json,
pre-commit secret-scan hook (defense-in-depth), merge driver registration
via git config, gh repo create --private OR arbitrary --remote <url>,
initial push, ~/.gstack-brain-remote.txt for new-machine discovery,
GBrain consumer registration via HTTP POST.
bin/gstack-brain-restore: safe new-machine bootstrap. Refuses clobber
of existing allowlisted files, clones to staging, rsync-copies tracked
files, re-registers merge drivers (required — not cloned from remote),
rehydrates consumers.json, prompts for per-consumer tokens.
bin/gstack-brain-uninstall: clean off-ramp. Removes .git + .brain-*
files + consumers.json + config keys. Preserves user data (learnings,
plans, retros, profile). Optional --delete-remote for GitHub repos.
bin/gstack-brain-consumer + bin/gstack-brain-reader (symlink alias):
registry management. Internal 'consumer' term; user-facing 'reader'
per DX review decision.
* feat(gbrain-sync): preamble block — privacy gate + boundary sync
scripts/resolvers/preamble/generate-brain-sync-block.ts emits bash that
runs at every skill invocation:
- Detects ~/.gstack-brain-remote.txt on machines without local .git
and surfaces a restore-available hint (does NOT auto-run restore).
- Runs gstack-brain-sync --once at skill start to drain any pending
writes (and at skill end via prose instruction).
- Once-per-day auto-pull (cached via .brain-last-pull) for append-only
JSONL files.
- Emits BRAIN_SYNC: status line every skill run.
Also emits prose for the host LLM to fire the one-time privacy
stop-gate (full / artifacts-only / off) when gbrain is detected and
gbrain_sync_mode_prompted is false. Wired into preamble.ts composition.
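The once-per-day cache reduces to an mtime check on the marker file. A sketch, with `shouldPull`/`touch` as assumed names:

```typescript
import { closeSync, existsSync, mkdtempSync, openSync, statSync, utimesSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const DAY_MS = 24 * 60 * 60 * 1000;

// Pull at most once a day: the .brain-last-pull marker's mtime is the cache.
function shouldPull(markerPath: string, now = Date.now()): boolean {
  if (!existsSync(markerPath)) return true;
  return now - statSync(markerPath).mtimeMs > DAY_MS;
}

function touch(markerPath: string): void {
  closeSync(openSync(markerPath, "w"));
}

const marker = join(mkdtempSync(join(tmpdir(), "pull-")), ".brain-last-pull");
console.log(shouldPull(marker)); // true: no marker yet, first run pulls
touch(marker);
console.log(shouldPull(marker)); // false: pulled within the last 24h
const twoDaysAgo = (Date.now() - 2 * DAY_MS) / 1000;
utimesSync(marker, twoDaysAgo, twoDaysAgo);
console.log(shouldPull(marker)); // true: stale marker, pull again
```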
* test(gbrain-sync): 27-test consolidated suite
test/brain-sync.test.ts covers:
- Config: validation, defaults, GSTACK_HOME env isolation
- Enqueue: no-op gates, skip list, concurrent atomicity, JSON escape
- JSONL merge driver: 3-way + ts-sort + SHA-256 fallback
- Init + sync: canonical file creation, merge driver registration,
push-reject + fetch+merge retry path
- Init refuses different remote (idempotency)
- Cross-machine restore round-trip (machine A write → machine B sees)
- Secret scan across all 6 regex families (AWS, GH, OpenAI, PEM, JWT,
bearer-JSON). --skip-file unblock remediation
- Uninstall removes sync config, preserves user data
- --discover-new idempotence via mtime+size cursor
Behaviors verified via integration smokes during implementation. Known
follow-up: bun-test 5s default timeout needs 30s wrapper for
spawnSync-heavy tests.
* docs(gbrain-sync): user guide + error lookup + README section
docs/gbrain-sync.md: setup walkthrough, privacy modes, cross-machine
workflow, secret protection, two-machine conflict handling, uninstall,
troubleshooting reference.
docs/gbrain-sync-errors.md: problem/cause/fix index for every
user-visible error. Patterned on Rust's error docs + Stripe's API
error reference.
README.md: 'Cross-machine memory with GBrain sync' section near the
top (discovery moment), plus docs-table entry.
* chore: bump version and changelog (v1.7.0.0)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore: regenerate SKILL.md files for gbrain-sync preamble block
Re-runs bun run gen:skill-docs after adding generateBrainSyncBlock
to scripts/resolvers/preamble.ts in a2aa8a07. CI check-freshness
caught the drift. All 36 SKILL.md files regenerated with the new
skill-start bash block + privacy-gate prose + skill-end sync
instructions baked in.
* fix(test): session-awareness reads AskUserQuestion Format from a Tier 2+ SKILL.md
The test was reading ROOT/SKILL.md (browse skill, Tier 1) which never
contained '## AskUserQuestion Format' — that section is only emitted
for Tier 2+ skills by scripts/resolvers/preamble.ts. As a result the
agent was prompted with an empty format guide and only emitted
'RECOMMENDATION' intermittently, making the test flaky.
Pre-existing on main (same ROOT/SKILL.md shape there) — surfaced now
because the agent run didn't hit the RECOMMENDATION/recommend/option a
fallback strings in this particular attempt.
Fix: read from office-hours/SKILL.md (Tier 3, always has the section)
with a fallback that scans for the first top-level skill dir whose
SKILL.md contains the header. Future template moves won't break this
test again.
* feat(browse): domain-skills storage + state machine
New module browse/src/domain-skills.ts implements the per-site notes
the agent writes for itself, persisted as type:"domain" rows alongside
/learn's per-project learnings.
Scopes layer: per-project by default, global by explicit promotion.
Project-active shadows global for the same host.
State machine (T6 — Codex outside-voice):
  quarantined --3 uses w/o flag--> active(project) --promote--> global
       ^                                |
       +----- classifier flag during use
- Append-only JSONL with O_APPEND for atomic small writes
- Tolerant parser drops partial trailing line on read
- Tombstone for deletes (compactor cleans up later)
- Version log per (host, scope) enables rollback
- Hostname derived from active tab top-level origin (T3 confused-deputy fix)
- writeSkill rejects classifier_score >= 0.85 with structured error
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
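A minimal sketch of that lifecycle. `recordUse`, `promoteToGlobal`, and the counter shape are assumptions; the real module persists state as JSONL rows:

```typescript
// T6 lifecycle: quarantined skills promote to active(project) after 3 uses
// with no classifier flag; a flag during use re-quarantines; global requires
// explicit promotion from active.
type SkillState = "quarantined" | "active" | "global";

interface DomainSkillRow {
  state: SkillState;
  cleanUses: number; // consecutive uses without a classifier flag
}

const PROMOTE_AFTER = 3;

function recordUse(s: DomainSkillRow, classifierFlagged: boolean): DomainSkillRow {
  if (classifierFlagged) return { state: "quarantined", cleanUses: 0 };
  const cleanUses = s.cleanUses + 1;
  if (s.state === "quarantined" && cleanUses >= PROMOTE_AFTER)
    return { state: "active", cleanUses };
  return { ...s, cleanUses };
}

function promoteToGlobal(s: DomainSkillRow): DomainSkillRow {
  if (s.state !== "active") throw new Error("only active skills can be promoted");
  return { ...s, state: "global" };
}

let s: DomainSkillRow = { state: "quarantined", cleanUses: 0 };
s = recordUse(s, false);
s = recordUse(s, false);
console.log(s.state); // "quarantined": two clean uses are not enough
s = recordUse(s, false);
console.log(s.state); // "active"
console.log(promoteToGlobal(s).state); // "global"
console.log(recordUse(s, true).state); // "quarantined": flag during use
```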
* test(browse): domain-skills storage + state machine
14 tests covering:
- T3 hostname normalization (lowercase, www. strip, port/path/query strip,
subdomain-exact preserved)
- T4 scope shadowing (per-project active shadows global for same host)
- T5 persistence (version monotonicity, tolerant parser drops partial line)
- T6 state machine (quarantined → active after N=3 uses, classifier-flag
blocks promotion, save-time score >= 0.85 rejected)
- Rollback by version log (restore prior body, advance version counter)
- Tombstone deletion (read returns null after delete)
All 14 pass in 27ms via bun test.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
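The T3 normalization rules those tests name can be sketched as (function name assumed):

```typescript
// Lowercase, strip a leading "www.", drop port/path/query, keep other
// subdomains exact: a sketch of the normalization the tests describe.
function normalizeHost(input: string): string {
  const url = new URL(input.includes("://") ? input : `https://${input}`);
  let host = url.hostname; // URL already lowercases the hostname
  if (host.startsWith("www.")) host = host.slice(4);
  return host;
}

console.log(normalizeHost("HTTPS://WWW.Example.COM:8443/a/b?q=1")); // "example.com"
console.log(normalizeHost("docs.example.com/page"));                // "docs.example.com"
```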
* feat(browse): $B domain-skill subcommands
Wire the domain-skills storage layer into the browse CLI as a META command:
$B domain-skill save                        save body from stdin or --from-file
                                            (host derived from active tab — T3)
$B domain-skill list                        list all skills visible to current project
$B domain-skill show <host>                 print skill body
$B domain-skill edit <host>                 open in $EDITOR
$B domain-skill promote-to-global <host>    cross-project promotion (T4)
$B domain-skill rollback <host> [--global]  restore prior version
$B domain-skill rm <host> [--global]        tombstone
Save path runs L1-L3 content filters from content-security.ts (importable
in compiled binary, unlike L4 ML classifier — see CLAUDE.md). The L4
classifier scan happens in sidebar-agent at prompt-injection load time.
Output is structured (problem + cause + suggested-action) per DX D7.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse): $B cdp escape hatch — deny-default allowlist + two-tier mutex
Codex T2: flip CDP posture to deny-default. Allowed methods enumerated in
cdp-allowlist.ts with (scope: tab|browser, output: trusted|untrusted,
justification) per entry.
Initial allowlist (~25 methods) covers:
- Accessibility tree extraction (read-only)
- DOM/CSS inspection (read-only)
- Performance metrics
- Tracing
- Emulation viewport/UA override
- Page screenshot/PDF capture (output is binary, no marker injection vector)
- Network.enable/disable (no bodies/cookies — those are exfil surfaces)
- Runtime.getProperties (NO evaluate/callFunctionOn — those would be RCE)
Page.navigate is INTENTIONALLY NOT allowed; agents use $B goto which
goes through the URL blocklist.
Codex T7: two-tier mutex. tab-scoped methods take per-tab lock; browser-
scoped take global lock that blocks all tab locks. 5s acquire timeout
yields CDPMutexAcquireTimeout (no silent hangs). All lock acquires use
try/finally so errors don't leak the lock.
Path A from spike: uses Playwright's newCDPSession() per page. No second
WebSocket, no need for --remote-debugging-port. CDPSession is cached
per page in a WeakMap and cleared on page close.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
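The two-tier idea can be sketched with a keyed async mutex: tab-scoped methods lock `tab:<id>`, and a browser-scoped method would additionally lock `browser` plus every live tab key. Names and the 5s default follow the description above; the shipped implementation likely differs:

```typescript
class CDPMutexAcquireTimeout extends Error {}

class KeyedMutex {
  private tails = new Map<string, Promise<void>>();

  async run<T>(key: string, fn: () => Promise<T>, timeoutMs = 5000): Promise<T> {
    const prev = this.tails.get(key) ?? Promise.resolve();
    let release!: () => void;
    this.tails.set(key, new Promise<void>((r) => (release = r)));
    let timer: ReturnType<typeof setTimeout> | undefined;
    const timedOut = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new CDPMutexAcquireTimeout(`lock '${key}' not acquired in ${timeoutMs}ms`)),
        timeoutMs,
      );
    });
    try {
      await Promise.race([prev, timedOut]); // no silent hangs
    } catch (err) {
      void prev.then(release); // keep the chain intact for later waiters
      throw err;
    } finally {
      clearTimeout(timer);
    }
    try {
      return await fn();
    } finally {
      release(); // errors in fn never leak the lock
    }
  }
}

const mu = new KeyedMutex();
const order: number[] = [];
await Promise.all([
  mu.run("tab:1", async () => { await new Promise((r) => setTimeout(r, 20)); order.push(1); }),
  mu.run("tab:1", async () => { order.push(2); }),
]);
console.log(order); // [1, 2]: same-tab ops serialize

let timedOutErr: unknown;
const hold = mu.run("tab:2", () => new Promise<void>((r) => setTimeout(r, 100)));
try {
  await mu.run("tab:2", async () => {}, 10);
} catch (e) {
  timedOutErr = e;
}
console.log(timedOutErr instanceof CDPMutexAcquireTimeout); // true
await hold;
```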
* test(browse): CDP allowlist + two-tier mutex
13 tests:
- Allowlist linter: every entry has 4 required fields, no duplicates,
justification length > 20 chars
- Deny-list verification: dangerous methods (Runtime.evaluate, Page.navigate,
Network.getResponseBody, Browser.close, Target.attachToTarget, etc.) are
NOT allowed (Codex T2 categories 4-7)
- Per-tab mutex serializes ops on same tab
- Per-tab mutex allows parallel ops across different tabs
- Global lock blocks tab locks; tab locks block global lock
- Acquire timeout yields CDPMutexAcquireTimeout (no silent hang)
- Timeout error names the tab id and the timeout budget
Also extends Network.disable justification to satisfy linter.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse): telemetry signals + project-slug helper
Lightweight telemetry per DX D9: piggybacks on ~/.gstack/analytics/ pattern.
Hostname + aggregate counters only, no body content. GSTACK_TELEMETRY_OFF=1
silences it. Fire-and-forget — never blocks the calling path.
Signals fired so far:
- domain_skill_saved {host, scope, state, bytes}
- domain_skill_save_blocked {host, reason}
(domain_skill_fired and cdp_method_* fired in subsequent commits.)
Also extracts project-slug resolution into project-slug.ts so server.ts
and domain-skill-commands.ts share one cached lookup.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
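The fire-and-forget contract in miniature. The file layout and the interpretation of GSTACK_HOME as the `~/.gstack` replacement are assumptions:

```typescript
import { appendFileSync, mkdirSync, mkdtempSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Best-effort JSONL append: opt-out env var checked per call, all disk
// errors swallowed so telemetry can never block or break the caller.
function logTelemetry(event: string, fields: Record<string, unknown>): void {
  if (process.env.GSTACK_TELEMETRY_OFF === "1") return;
  try {
    const home = process.env.GSTACK_HOME ?? join(process.env.HOME ?? ".", ".gstack");
    const dir = join(home, "analytics");
    mkdirSync(dir, { recursive: true });
    const row = { ts: new Date().toISOString(), event, ...fields };
    appendFileSync(join(dir, "events.jsonl"), JSON.stringify(row) + "\n");
  } catch {
    // fire-and-forget: never throw on disk failures
  }
}

const home = mkdtempSync(join(tmpdir(), "tel-"));
process.env.GSTACK_HOME = home;
logTelemetry("domain_skill_saved", { host: "example.com", scope: "project" });
process.env.GSTACK_TELEMETRY_OFF = "1";
logTelemetry("domain_skill_saved", { host: "example.com", scope: "project" });
const rows = readFileSync(join(home, "analytics", "events.jsonl"), "utf8").trim().split("\n");
console.log(rows.length); // 1: the silenced call wrote nothing
```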
* feat(browse): sidebar prompt-context injection + CDP telemetry
server.ts spawnClaude now:
- Imports per-project domain skill matching the active tab's hostname
via readDomainSkill()
- Wraps the body in UNTRUSTED EXTERNAL CONTENT envelope (so the L4
classifier in sidebar-agent sees it at load time per Eng D4)
- Appends as <domain-skill source="..." host="..." version="..."> block
- Fires domain_skill_fired telemetry (host, source, version)
- Calls recordSkillUse fire-and-forget so the auto-promote-after-N=3
state machine advances on each successful prompt injection
System prompt also gets a one-liner introducing $B domain-skill commands
to agents (DX D4 start-of-task discoverability hint).
cdp-bridge.ts fires:
- cdp_method_denied (drives next allow-list growth)
- cdp_method_lock_acquire_ms (P50/P99 quantile observability)
- cdp_method_called (allowed methods)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(browse): telemetry module
3 tests covering:
- logTelemetry writes JSONL with ts injected
- GSTACK_TELEMETRY_OFF=1 silences all events
- logTelemetry never throws on disk failures
Uses GSTACK_HOME env var to redirect writes to a tmp dir; the telemetry
module resolves GSTACK_HOME lazily so test mutations take effect.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: domain-skills reference + error lookup table
docs/domain-skills.md mirrors the layered shape of docs/gbrain-sync.md
(DX D8): how agents use it, state machine, storage layout, security model
(L1-L3 + L4 layered defense), error reference table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(readme): browser-harness-js plug + domain-skills section
New "Domain skills + raw CDP escape hatch" section under "The sprint"
covering both v1.8.0.0 features. Plugs browser-use/browser-harness-js
as the no-rails alternative for users who want raw CDP without gstack's
security stack.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v1.8.0.0)
Branch-scoped bump on top of merged 1.7.0.0 base. CHANGELOG entry covers
the full v1.8.0.0 scope: $B domain-skill, $B cdp escape hatch, two-tier
mutex, telemetry signals, sidebar prompt-context injection. Includes
Codex outside-voice trail (7 of 20 findings resolved, 12 mooted by T1
scope drop).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* todos: 7 follow-ups from v1.8.0.0 review trail
P1: Self-authoring $B commands with out-of-process worker isolation
(Codex T1 deferred from v1.8.0.0 — needs real isolation design)
P2: Migrate /learn to SQLite (Codex T5 long-term primitive fix)
P2: Remove plan-mode handshake from /plan-devex-review (skill bug)
P3: GBrain skillpack publishing for domain-skills
P3: Replay/record demonstrated flows to domain-skills
P3: $B commands review batch-mode UX (alternative to inline approval)
P3: Heuristic command-gap watcher (DX D4 alternative C)
Each entry has the standard What/Why/Pros/Cons/Context/Effort/Priority/
Depends-on shape so anyone picking these up later has full context.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(browse): lazy GSTACK_HOME resolution in domain-skills
Module-level constants (GLOBAL_FILE, derived path) were evaluated at
module-load and cached. When E2E and unit tests run in the same Bun
test pass and set GSTACK_HOME differently, the second test sees the
first test's path. Switch to lazy gstackHome() / globalFile() / projectFile()
helpers so process.env mutations take effect.
Mirrors the pattern already used in telemetry.ts.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
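The bug pattern vs. the fix, in miniature (values are illustrative):

```typescript
// Module-level constant: bakes in whatever env was set at import time and
// never sees later mutations.
const EAGER_HOME = process.env.GSTACK_HOME ?? "/default";

// Lazy helper: re-reads the env on every call, so test mutations of
// process.env.GSTACK_HOME take effect between calls.
function gstackHome(): string {
  return process.env.GSTACK_HOME ?? "/default";
}

process.env.GSTACK_HOME = "/tmp/test-a";
const first = gstackHome();
process.env.GSTACK_HOME = "/tmp/test-b";
const second = gstackHome();
console.log(first, second); // "/tmp/test-a" "/tmp/test-b"; EAGER_HOME never moves
```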
* test(browse): E2E gate-tier tests for domain-skills + CDP
domain-skills-e2e.test.ts (4 tests):
- save derives host from active tab top-level origin (T3)
- save lands quarantined; list surfaces it
- readSkill returns null until 3 uses without flag promote to active (T6)
- save without an active page errors with structured guidance
cdp-e2e.test.ts (8 tests):
- Accessibility.getFullAXTree returns wrapped JSON (allowed, untrusted-output)
- Performance.getMetrics returns plain JSON (allowed, trusted-output)
- Runtime.evaluate DENIED with structured guidance (T2 RCE block)
- Page.navigate DENIED (must use $B goto for blocklist routing)
- Network.getResponseBody DENIED (exfil block)
- malformed JSON params surfaces clear error
- non Domain.method format surfaces clear error
- $B cdp help returns help text
Both files boot a real Chromium via BrowserManager.launch() and exercise
the dispatch handlers end-to-end. Total 12 E2E tests in <2s.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: regenerate SKILL.md files with new $B commands
bun run gen:skill-docs picks up the domain-skill and cdp META_COMMANDS
entries added in commands.ts. Both top-level SKILL.md and browse/SKILL.md
now list the new commands in their Meta and Inspection tables.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(fixtures): regenerate ship SKILL.md golden baselines for v1.7.0.0
Pre-existing failures inherited from garrytan/gbrain-support: the GBrain
Sync preamble block (added in v1.7.0.0) appears in regenerated SKILL.md
output but the golden baselines in test/fixtures/golden/ were never
updated. Three failures fixed:
golden-file regression > Claude ship skill matches golden baseline
golden-file regression > Codex ship skill matches golden baseline
golden-file regression > Factory ship skill matches golden baseline
Goldens regenerated by copying the current ship/SKILL.md, codex
.agents/skills/gstack-ship/SKILL.md, and .factory/skills/gstack-ship/SKILL.md
files. Diff is the v1.7.0.0 GBrain Sync preamble block + privacy stop-gate
(no behavioral changes — just preamble text).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(brain-sync): bearer-token regex catches values with leading space
Pre-existing bug from v1.7.0.0: the bearer-token-json secret pattern
required values matching [A-Za-z0-9_./+=-]{16,}, which rejected the
"Bearer <token>" form because the literal space after "Bearer" wasn't
in the character class. Real Authorization headers use "Bearer <token>"
syntax, and the test fixture
'"authorization":"Bearer abcdef1234567890abcdef1234567890"'
sat unscanned despite being a leak-class secret.
One-character fix: add space to the value character class. Test
'gstack-brain-sync secret scan > blocks bearer-json' now passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
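Reconstructed in miniature; the patterns here are illustrative, not the shipped regex verbatim:

```typescript
// Before: the value class has no space, so `"Bearer <token>"` cannot match.
const before = /"authorization"\s*:\s*"[A-Za-z0-9_./+=-]{16,}"/i;
// After: one character added (the space), and the leak-class fixture hits.
const after = /"authorization"\s*:\s*"[A-Za-z0-9_./+= -]{16,}"/i;

const fixture = '"authorization":"Bearer abcdef1234567890abcdef1234567890"';
console.log(before.test(fixture)); // false
console.log(after.test(fixture));  // true
```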
* test(brain-sync): GSTACK_HOME isolation test compares mtime, not content
Pre-existing flaky test: the GSTACK_HOME-overrides-real-config test asserted
the real ~/.gstack/config.yaml does NOT contain "gbrain_sync_mode: full"
after the test. That fails for any user whose real config legitimately has
that key set from prior usage — the test's invariant is "the command did
not modify the real file," not "the real file lacks any specific value."
Switch to mtime + content snapshot: capture both BEFORE running the command,
then verify both are unchanged after. Also add a positive assertion that
the tmpHome config DID get the new key.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
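The invariant-fix pattern in miniature. A sketch only; the real test also runs the actual command and asserts the tmpHome side:

```typescript
import { mkdtempSync, readFileSync, statSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Snapshot the real config's mtime + content before the command runs, then
// assert both are unchanged, instead of asserting the absence of a value a
// user's real config may legitimately contain.
function snapshot(path: string) {
  return { mtimeMs: statSync(path).mtimeMs, content: readFileSync(path, "utf8") };
}

const realConfig = join(mkdtempSync(join(tmpdir(), "cfg-")), "config.yaml");
writeFileSync(realConfig, "gbrain_sync_mode: full\n"); // legitimately present
const beforeRun = snapshot(realConfig);
// ... the command under test would run here with GSTACK_HOME pointing elsewhere ...
const afterRun = snapshot(realConfig);
const untouched =
  afterRun.mtimeMs === beforeRun.mtimeMs && afterRun.content === beforeRun.content;
console.log(untouched); // true
```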
* test(skill-validation): exempt deliberate large fixtures from 2MB limit
Pre-existing failure: the "git tracks no files larger than 2MB" test
caught browse/test/fixtures/security-bench-haiku-responses.json (28.8MB
of replay data committed in v1.6.4.0 for security benchmark gate tests).
The test exists to catch accidentally-committed binaries (Mach-O dist
binaries, etc), not to forbid all large files. Add an explicit
LARGE_FIXTURE_EXEMPTIONS allowlist so deliberate replay fixtures pass
the gate while accidental binaries still fail.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(skill-token): mint scoped tokens per skill spawn
Wraps token-registry.createToken/revokeToken with skill-specific
clientId encoding (skill:<name>:<spawn-id>) and read+write defaults.
Skill scripts get a per-spawn capability token bound to browser-driving
commands; the daemon root token never leaves the harness.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse-client): SDK for browser-skill scripts
Thin wrapper over POST /command with bearer auth. Resolves daemon
port + token from GSTACK_PORT + GSTACK_SKILL_TOKEN env vars first
(set by $B skill run when spawning), falls back to .gstack/browse.json
for standalone debug runs.
Convenience methods cover the read+write surface skills typically need:
goto, click, fill, text, html, snapshot, links, forms, accessibility,
attrs, media, data, scroll, press, type, select, wait, hover, screenshot.
Low-level command(cmd, args) escape hatch for anything else.
This is the canonical SDK source. Each browser-skill ships a sibling
copy at <skill>/_lib/browse-client.ts so each skill is fully portable
and version-pinned.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browser-skills): 3-tier storage helpers
listBrowserSkills() walks project > global > bundled (first-wins),
parses SKILL.md frontmatter, no INDEX.json. readBrowserSkill() does
the same for a single name. tombstoneBrowserSkill() moves a skill
into .tombstones/<name>-<ts>/ for recoverability.
Frontmatter parser handles the subset browser-skills need: scalars
(host, description, trusted, version, source), string lists
(triggers), and arg-mapping lists ([{name, description}, ...]).
Quoted values handle colons; trusted defaults to false.
Bundled tier path is auto-detected from the binary install location;
project tier comes from git rev-parse; global is ~/.gstack/. All tier
paths are overridable for hermetic tests.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browser-skills): $B skill list/show/run/test/rm subcommands
handleSkillCommand dispatches to per-subcommand handlers; spawnSkill is
the load-bearing function that:
1. Mints a per-spawn scoped token (read+write only) bound to the
skill name + spawn-id.
2. Builds the spawn env:
- trusted: passes process.env minus GSTACK_TOKEN (defense in depth).
- untrusted: minimal allowlist (LANG, LC_ALL, TERM, TZ) + locked
PATH; explicitly drops anything matching TOKEN/KEY/SECRET/etc.
Also drops AWS_/AZURE_/GCP_/GOOGLE_APPLICATION_/ANTHROPIC_/OPENAI_/
GITHUB_/GH_/SSH_/GPG_/NPM_TOKEN/PYPI_ patterns.
3. Always injects GSTACK_PORT + GSTACK_SKILL_TOKEN last (cannot be
overridden by parent env).
4. Spawns bun run script.ts -- <args> with cwd=skillDir, captures
stdout (1MB cap), stderr, and timeout-kills past the deadline.
5. Revokes the token in finally{}, always.
list output prints the resolved tier inline so "why did it run that
one?" never becomes a debugging mystery (Codex finding #4 mitigation).
server.ts threads the listen port to meta-commands via MetaCommandOpts.daemonPort.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
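The untrusted-spawn env build can be sketched as below. The allowlist and drop patterns follow the description above; the shipped sets and the locked PATH value may differ:

```typescript
// Minimal allowlist + locked PATH, dropping anything credential-shaped,
// then injecting the per-spawn values last so the parent env can never
// override them.
const KEEP = new Set(["LANG", "LC_ALL", "TERM", "TZ"]);
const DROP =
  /TOKEN|KEY|SECRET|PASSWORD|CREDENTIAL|^(AWS_|AZURE_|GCP_|GOOGLE_APPLICATION_|ANTHROPIC_|OPENAI_|GITHUB_|GH_|SSH_|GPG_|PYPI_)/;

function untrustedEnv(
  parent: Record<string, string | undefined>,
  port: string,
  skillToken: string,
): Record<string, string> {
  const env: Record<string, string> = {};
  for (const [k, v] of Object.entries(parent)) {
    if (v !== undefined && KEEP.has(k) && !DROP.test(k)) env[k] = v;
  }
  env.PATH = "/usr/local/bin:/usr/bin:/bin"; // locked, never inherited
  env.GSTACK_PORT = port;                    // injected last: the parent env
  env.GSTACK_SKILL_TOKEN = skillToken;       // cannot override these two
  return env;
}

const out = untrustedEnv(
  { LANG: "en_US.UTF-8", OPENAI_API_KEY: "sk-x", PATH: "/evil", GSTACK_PORT: "1" },
  "8377",
  "tok-abc",
);
console.log(out); // only LANG survives; PATH/port/token are the injected values
```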
* feat(browser-skills): bundled hackernews-frontpage reference skill
Smallest interesting browser-skill: scrapes HN front page, returns
30 stories as JSON. No auth, stable HTML, fully fixture-tested.
Files:
  SKILL.md                      frontmatter + prose
  script.ts                     exports parseStoriesFromHtml(html);
                                main: goto + html + parse + JSON.stringify
  _lib/browse-client.ts         vendored copy of the SDK
  fixtures/hn-2026-04-26.html   captured front page (5 stories)
  script.test.ts                13 assertions against the fixture
The parser is a pure function over HTML so script.test.ts runs
without a daemon (just imports parseStoriesFromHtml and asserts).
This exercises every Phase 1 component end-to-end:
- browse-client SDK (script imports browse from ./_lib/)
- 3-tier lookup (hackernews-frontpage lives in the bundled tier)
- scoped tokens (read+write is enough for goto + html)
- spawn lifecycle ($B skill run hackernews-frontpage)
- file-fixture testing ($B skill test hackernews-frontpage)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
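The pure-parser pattern in miniature. The markup and selector below are invented for illustration; the bundled skill parses real HN front-page HTML:

```typescript
// A parser that is a pure function over an HTML string needs no daemon or
// browser in its tests: just import it and assert on the result.
interface Story {
  title: string;
  url: string;
}

function parseStoriesFromHtml(html: string): Story[] {
  const stories: Story[] = [];
  const re = /<a class="storylink" href="([^"]+)">([^<]+)<\/a>/g;
  for (const m of html.matchAll(re)) stories.push({ url: m[1], title: m[2] });
  return stories;
}

const htmlFixture =
  '<a class="storylink" href="https://x.test/a">Story A</a>' +
  '<a class="storylink" href="https://x.test/b">Story B</a>';
const stories = parseStoriesFromHtml(htmlFixture);
console.log(stories.length, stories[0].title); // 2 "Story A"
```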
* test(skill-validation): cover bundled browser-skills
Adds 7 assertions per bundled skill at <root>/browser-skills/<name>/:
- SKILL.md exists
- frontmatter parses with required fields (name/host/triggers/args)
- script.ts exists
- _lib/browse-client.ts exists and matches the canonical SDK byte-for-byte
- script.test.ts exists
- script.ts imports browse from ./_lib/browse-client
The byte-identical SDK check enforces the version-pinning contract:
when the canonical SDK at browse/src/browse-client.ts changes, every
bundled skill's _lib/ copy must be re-synced or this test fails.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(designs): add BROWSER_SKILLS_V1 design doc
Captures the 13 locked decisions, two-axis trust model (daemon-side
scoped tokens + process-side env access), 3-tier lookup, file
layout, and full responses to all 8 Codex outside-voice findings.
Includes Phase 2-4 sketches for future branches.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(todos): replace self-authoring-$B P1 with browser-skills phases
Phase 1 of the browser-skills design shipped on this branch (sidesteps
the in-daemon isolation problem the original P1 was blocked on). The
new entries enumerate the work that remains:
P1: Phase 2 (/scrape + /automate skill templates)
P2: Phase 3 (resolver injection at session start)
P2: Phase 4 (eval infra + fixture staleness + OS sandbox)
Cross-references docs/designs/BROWSER_SKILLS_V1.md for the full
architecture and the 8 Codex review findings + responses.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release: v1.9.0.0 — browser-skills runtime
VERSION 1.8.0.0 → 1.9.0.0. CHANGELOG entry leads with what humans
can do today (hand-write deterministic browser scripts, run them in
200ms via $B skill run). Notes explicitly that agent authoring
lands in next release; no fabricated perf numbers.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(browser-skills-e2e): exercise dispatch with bundled hackernews-frontpage
Covers the full $B skill list/show/test pipeline against the real
bundled reference skill (defaultTierPaths picks up <repo>/browser-skills/).
Verifies frontmatter shape, the three-tier walk surfaces the bundled
entry, and $B skill test successfully runs the bundled script.test.ts
in a child bun process.
$B skill run end-to-end against the live network is intentionally NOT
covered here (would be flaky against news.ycombinator.com); the spawn
lifecycle is exercised in browser-skill-commands.test.ts using inline
synthetic skills.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: regen SKILL.md to surface the skill META command
bun run gen:skill-docs picked up the new `skill` command from
COMMAND_DESCRIPTIONS in browse/src/commands.ts.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release: bump v1.9.0.0 → v1.13.0.0
Main shipped through v1.11.1.0 while this branch was in flight; v1.12.x
is presumed claimed by another in-flight branch. Use v1.13.0.0 as the
next available slot.
Updated VERSION, package.json, and the CHANGELOG header. Entry body
unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release: bump v1.13.0.0 → v1.16.0.0
Main shipped v1.13.0.0 (claude outside-voice skill), v1.14.0.0
(sidebar REPL), and v1.15.0.0 (slim preamble + plan-mode E2E)
while this branch was in flight. Use v1.16.0.0 as the next
available slot.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse-skills): atomic write helper for /skillify (D3)
stageSkill writes a candidate skill into ~/.gstack/.tmp/skillify-<spawnId>/
with restrictive perms. commitSkill does an atomic fs.renameSync into the
final tier path with realpath/lstat discipline (refuses symlinked staging
dirs, refuses to clobber existing skills). discardStaged is the cleanup
path for test failures and approval rejections, idempotent and bounded
to the per-spawn wrapper. validateSkillName enforces lowercase/digits/
dashes only, no path-escape characters.
Implements the D3 contract from the v1.19.0.0 plan review: never a
half-written skill on disk. Test fail or approval reject = rm -rf the
temp dir, no tombstone for never-approved skills.
Closes Codex finding #5 (atomic skill packaging) for Phase 2a.
34 unit assertions covering: stage validation, file-path escape rejection,
permission check, atomic rename, clobber refusal, symlink refusal, project
tier unresolved, idempotent discard, end-to-end happy + simulated test
failure + approval reject paths.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(scrape): /scrape <intent> skill template
One entry point for pulling page data. Three paths under the hood:
1. Match — agent reads $B skill list, semantically matches the user's
intent against each skill's triggers + description + host. Confident
match = $B skill run <name> in ~200ms.
2. Prototype — no match, drive the page with $B goto/text/html/links etc.
Return JSON, append a one-line "say /skillify" nudge.
3. Mutating refusal — verbs like submit/click/fill route to /automate
(Phase 2b P0); /scrape is read-only by contract.
Match decision lives in the agent, not the daemon. No new code in
browse/src/, no expanded daemon command surface, no new prompt-injection
blast radius.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(skillify): /skillify codifies last /scrape into permanent skill
The productivity multiplier. /scrape discovers the flow; /skillify writes
it as deterministic Playwright-via-browse-client code so the next /scrape
on the same intent runs in ~200ms.
11-step flow with three locked contracts from the v1.19.0.0 plan review:
D1 — Provenance guard. Walk back ≤10 agent turns for a clearly-bounded
/scrape result. Refuse with one specific message if cold. No silent
synthesis from chat fragments.
D2 — Synthesis input slice. Extract ONLY the final-attempt $B calls that
produced the JSON the user accepted, plus the user's intent string. Drop
failed selectors, drop unrelated chat, drop earlier-session content.
Closes Codex finding #6 by picking option (b) from the design doc:
re-prompt from agent's own context, not a structured recorder.
D3 — Atomic write. Stage to ~/.gstack/.tmp/skillify-<spawnId>/, run
$B skill test against the temp dir, only rename into the final tier path
on test pass + user approval. Test fail or approval reject = rm -rf the
temp dir entirely.
Default tier: global (~/.gstack/browser-skills/<name>/). --project flag
overrides to per-project. Generated test must include at least one ★★
assertion (parsed JSON has expected shape + non-empty key fields), not a
smoke ★ assertion.
Bun runtime distribution (Codex finding #7) carries over to Phase 4.
Documented in the skill's Limits section.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(browser-skills): gate-tier E2E for /scrape + /skillify (D4)
Five scenarios cover the productivity loop and the contracts locked
during the v1.19.0.0 plan review:
scrape-match-path — intent matching bundled hackernews-frontpage routes
  via $B skill run, no prototype phase
scrape-prototype-path — no matching skill, drives $B against a local
  file:// fixture, returns JSON, suggests /skillify
skillify-happy-path — /scrape then /skillify; skill written to
  ~/.gstack/browser-skills/<name>/ with the full file tree; SKILL.md
  prose body must not contain conversation fragments (D2)
skillify-provenance-refusal — cold /skillify with no prior /scrape
  refuses with the D1 message; nothing on disk (D1)
skillify-approval-reject — /scrape then /skillify but reject in the
  approval gate; temp dir is removed, nothing at the final tier path (D3)
All five gate-tier (~$0.50-$1.50 each, ~$5 total per CI run). Set EVALS=1
to enable. Uses local file:// fixtures so prototype + skillify scenarios
run deterministically without network.
Touchfiles registers all 5 entries with proper deps on scrape/**,
skillify/**, browse/src/browser-skill-write.ts, and the Phase 1 runtime
modules. The match-path test depends on the bundled hackernews-frontpage
skill so its touchfile includes browser-skills/hackernews-frontpage/**.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(browser-skills): TODOS Phase 2a + design doc D1-D4 decisions
TODOS.md:
- Narrows existing P1 (was "/scrape and /automate") to "/scrape and
/skillify" — the /scrape + /skillify wedge ships in this branch.
Codex finding #6 (synthesis) removed from Cons (resolved by D2);
finding #7 (Bun runtime) stays as the open carry-over.
- Adds new ## P0 above PACING_UPDATES_V0 for the /automate follow-up.
Same skillify pattern as /scrape, different trust profile (per-step
confirmation gate when running non-codified). Reuses /skillify and
the D3 helper as-is. Effort M.
BROWSER_SKILLS_V1.md:
- Phase table re-organized into 1, 2a, 2b, 3, 4. Phase 1 + Phase 2a
consolidate into v1.19.0.0 ship (the v1.16.0.0 branch-internal
bump never landed on main).
- New "Phase 2a" sub-section captures the four decisions locked
during /plan-eng-review:
D1 — provenance guard (≤10 turn walk-back, refuse if cold)
D2 — synthesis input slice (final-attempt $B calls only,
closes Codex finding #6)
D3 — atomic write discipline (temp-dir-then-rename via new
browse/src/browser-skill-write.ts helper)
D4 — full test scope (5 gate E2E + 1 unit + smoke)
- New "Phase 2b" sketch for /automate: same skillify machinery,
per-mutating-step confirmation gate, deferred to next branch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release: v1.16.0.0 -> v1.19.0.0 — browser-skills Phase 1 + 2a
Consolidates the v1.16.0.0 branch-internal bump (Phase 1 runtime, never
landed on main) with Phase 2a (/scrape + /skillify + atomic-write helper)
into one v1.19.0.0 ship per CLAUDE.md "Never orphan branch-internal
versions" rule.
Headline: Browser-skills land end-to-end. /scrape <intent> first call
drives the page; second call runs the codified script in 200ms.
The unified CHANGELOG entry covers:
- Phase 1 runtime: $B skill list/show/run/test/rm, scoped tokens,
3-tier storage, bundled hackernews-frontpage reference.
- Phase 2a: /scrape + /skillify gstack skills, browser-skill-write.ts
atomic helper, 5 gate-tier E2E + 34 unit assertions.
Numbers table updated: 5 new modules (+browser-skill-write), 2 new
gstack skills, 6 of 8 Codex outside-voice findings resolved (synthesis
#6 closed by D2; Bun runtime #7 + OS sandbox #1 stay deferred to Phase 4).
/automate (Phase 2b) is split out as P0 in TODOS for the next branch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(commands): tighten descriptions for LLM-judge baseline pinning
The skill-llm-eval test "baseline score pinning" failed CI on three
retry attempts: judge gave command_reference.actionability=3, baseline
demands ≥4. Judge cited 8 specific gaps in COMMAND_DESCRIPTIONS.
This commit closes 7 of 8 by tightening the descriptions:
- press: documents that key names are case-sensitive Playwright keys,
shows modifier syntax (Shift+Enter, Control+A), links the full key
list. Removes the "is this case-sensitive?" guesswork.
- is: documents that <sel> accepts either a CSS selector OR an @ref
token from a prior snapshot, and that property values are case-
sensitive.
- scroll: documents that there is no --by/--to amount option, points
at `js window.scrollTo(0, N)` for pixel-precise scrolling.
- js / eval: clarifies that both run in the same JS sandbox, the
difference is just inline expr (js) vs file (eval).
- storage: clarifies sessionStorage is read-only via this command,
points at `js sessionStorage.setItem(...)` for the write path.
- chain: walks through how to invoke (pipe a JSON array of arrays to
$B chain), confirms it stops at the first error.
- cdp: explains how to discover allowed methods (read cdp-allowlist.ts)
+ shows a concrete example invocation.
- domain-skill: explains that the "classifier flag" is set automatically
by the L4 prompt-injection scan (agents do not set it manually);
enumerates the full lifecycle verbs.
The 8th gap (storage set syntax conflict) is also resolved as part of
the storage rewrite.
Two pipe-character bugs caught by the existing
`no command description contains pipe character` guard at
`test/gen-skill-docs.test.ts:595`: the chain example originally used
`echo '[...]' | $B chain` (literal pipe) and the cdp description used
`tab|browser` / `trusted|untrusted` (also literal pipes). Both rewritten
to keep markdown table cells intact.
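The pipe guard is simple to state. A minimal sketch of the invariant it enforces — `descriptions` and `findPipeViolations` are hypothetical stand-ins; the real guard lives at `test/gen-skill-docs.test.ts:595` and reads the actual COMMAND_DESCRIPTIONS table:

```typescript
// Hypothetical stand-in for the real COMMAND_DESCRIPTIONS table.
const descriptions: Record<string, string> = {
  chain: "Run a sequence of commands: pass a JSON array of arrays on stdin; stops at the first error",
  cdp: "Send a raw CDP method to the tab or browser target (trusted or untrusted)",
};

// A command violates the guard if its description contains a literal pipe,
// which would split a markdown table cell in the generated docs.
function findPipeViolations(table: Record<string, string>): string[] {
  return Object.entries(table)
    .filter(([, desc]) => desc.includes("|"))
    .map(([name]) => name);
}
```

Rewriting `tab|browser` as "tab or browser" (and the chain example to use stdin phrasing) is exactly the kind of fix this guard forces.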
Verification: 696 pass / 0 fail on skill-validation + gen-skill-docs after
regen across all hosts. The CI llm-judge eval will re-run against the
new SKILL.md and should hit actionability ≥4 reliably.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(browser): rewrite BROWSER.md as complete reference
Full rewrite covering the gstack browser surface as of v1.19.0.0. Up from
488 to 1,299 lines, 26 top-level sections.
Adds previously-undocumented subsystems:
- The productivity loop: /scrape + /skillify with D1 (provenance guard),
D2 (final-attempt-only synthesis), D3 (atomic-write discipline) contracts.
- Browser-skills runtime: anatomy, three-tier storage, scoped tokens, trust
model (capability + env axes), sibling SDK distribution, atomic-write
helper, bundled hackernews-frontpage reference.
- Domain-skills: per-site agent notes with quarantined → active → global
state machine and the L4-classifier auto-promotion gate.
- Pair-agent: dual-listener architecture, 26-command tunnel allowlist,
canDispatchOverTunnel pure gate, three token types (root, setup key,
scoped), denial log path + salt model.
- Security stack L1-L6: layer table, thresholds (BLOCK/WARN/LOG_ONLY/
SOLO_CONTENT_BLOCK), ensemble rule, classifier model paths, env knobs.
- Side Panel deep dive: Terminal pane (Claude PTY) as the primary surface
with Activity/Refs/Inspector as debug overlays, WS auth via
Sec-WebSocket-Protocol, gstackInjectToTerminal cross-pane plumbing.
- CDP escape hatch: $B cdp deny-default allowlist, $B inspect CSS inspector,
$B ux-audit page structure extraction.
- Meta commands previously undocumented: tabs/frames/state/watch/inbox/
tab-each, with usage and storage paths.
- Authentication: three token types with lifetimes, SSE session cookie,
PTY session cookie, token registry behavior.
- Full source map: 30+ file inventory of browse/src/ vs the old 11-file
list.
Preserves from before: architecture diagram, daemon lifecycle, snapshot
ref staleness, screenshot modes, goto file:// vs load-html semantics,
batch endpoint, JS await wrapping, env vars, performance numbers vs MCP,
Playwright acknowledgments, dev guide.
Cross-links to ARCHITECTURE.md, CLAUDE.md, docs/REMOTE_BROWSER_ACCESS.md,
docs/designs/BROWSER_SKILLS_V1.md, scrape/SKILL.md, skillify/SKILL.md,
TODOS.md so anyone landing on BROWSER.md can navigate to the load-bearing
companion docs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(server): tab-ownership gate keys on tabPolicy, not isWrite
Browser-skill spawns hit `403: Tab not owned by your agent` on every
first run because the gate at server.ts:639 fired for any non-root
write, regardless of the token's tabPolicy. The bundled
hackernews-frontpage reference skill failed identically. Every
/skillify-generated skill failed identically. The user's natural
tabs have no claimed owner — by design — so any skill driving
them via `goto` (a write) was 403'd.
The intent in skill-token.ts:79 was always correct: `tabPolicy: 'shared'`
with the comment "skill scripts may switch tabs as needed." The
enforcement just ignored it.
Two surgical changes:
browser-manager.ts:checkTabAccess — the gate now keys solely on
options.ownOnly. Shared-policy tokens (skill spawns, default scoped
clients) get permissive access — root-equivalent for the tab gate.
Own-only tokens (pair-agent over the ngrok tunnel) still require
ownership for every read and write. isWrite stays in the signature for
callers that want to log or branch elsewhere; it no longer gates the
decision.
server.ts:639 — gate predicate narrowed from
(WRITE_COMMANDS.has(command) || tokenInfo.tabPolicy === 'own-only')
to just
tokenInfo.tabPolicy === 'own-only'
The 'newtab' exemption stays. Shared tokens skip the gate entirely;
own-only tokens still hit it. Comment block above the gate updated to
document the new predicate intent.
Pair-agent isolation is intact. Tunnel tokens still default to
tabPolicy: 'own-only', still must `newtab` first to get a tab they
can drive, still can't dispatch any of the 23 commands outside the
tunnel allowlist.
The capability gate (scope checks) and rate limits already constrain
what local scoped clients can do; tab ownership was never a security
boundary for them — only for pair-agent. This release makes the
enforcement match the original design intent.
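In predicate form the change is small. A sketch under stated assumptions — TokenInfo and WRITE_COMMANDS are simplified stand-ins here; the real gate sits in server.ts and browser-manager.ts:

```typescript
type TabPolicy = "shared" | "own-only";
interface TokenInfo { tabPolicy: TabPolicy; }

// Simplified stand-in for the real write-command set.
const WRITE_COMMANDS = new Set(["goto", "click", "fill", "type"]);

// Before: any non-root write hit the ownership gate, so shared-policy
// skill tokens 403'd on tabs with no claimed owner.
function gateFiresBefore(tokenInfo: TokenInfo, command: string): boolean {
  return WRITE_COMMANDS.has(command) || tokenInfo.tabPolicy === "own-only";
}

// After: only own-only tokens (pair-agent over the tunnel) are gated;
// shared tokens skip the ownership check entirely.
function gateFiresAfter(tokenInfo: TokenInfo, _command: string): boolean {
  return tokenInfo.tabPolicy === "own-only";
}
```

A shared token issuing `goto` trips the old predicate but not the new one; an own-only token trips both, so pair-agent isolation is unchanged.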
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(server): lock the shared-vs-own-only tab gate contract
The pre-fix tests at tab-isolation.test.ts:43,57 encoded the broken
behavior as the contract — they specifically asserted "scoped agent
cannot write to unowned tab," which was the exact failure mode that
broke browser-skills. They passed because they tested the wrong
invariant.
This commit replaces those tests with explicit shared-vs-own-only
coverage that documents what each policy actually means:
- Shared scoped agents (skill spawns, default scoped clients) can
read AND write any tab — unowned, their own, or another agent's.
The capability is gated by scope checks + rate limits, not by tab
ownership.
- Own-only scoped agents (pair-agent over tunnel) cannot read OR
write any tab they don't own. Pre-fix this case was conflated with
shared writes; now it's explicit.
9 unit assertions on checkTabAccess, up from 6. Each test names
the policy axis it's covering so a future refactor can't quietly
flip the contract.
Adds source-shape regression test 10a in server-auth.test.ts:
"tab gate predicate is own-only-scoped, not write-scoped." The
gate's `if (...)` line MUST contain `tabPolicy === 'own-only'` and
MUST NOT contain `WRITE_COMMANDS.has(command) ||`. If a future
refactor re-introduces the write-scoped gate, this fails immediately
in free-tier `bun test`.
Updates the marker for the existing newtab-excluded test to match
the new comment block ("Tab ownership check (own-only tokens /
pair-agent isolation)").
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release: v1.19.0.0 -> v1.20.0.0 — fix tab-ownership footgun
Patch release on top of v1.19.0.0. The shipping headline of v1.19.0.0
(/scrape + /skillify productivity loop) was broken on first run in any
session where the daemon already had a tab. Bundled
hackernews-frontpage failed identically. Every /skillify-generated
skill failed identically.
The fix narrows the tab-ownership gate from "any non-root write" to
"tabPolicy === 'own-only' only." Pair-agent isolation (the v1.6.0.0
threat model) is intact; local skill spawns get their original
behavior back.
VERSION: 1.19.0.0 -> 1.20.0.0
package.json version: synced.
CHANGELOG entry leads with the user-visible impact: the productivity
loop works again, with no half-second stalls or confused 403s. Includes
before/after metrics on the bundled reference skill and the broken-
contract pre-fix tests that hid the regression.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(claude): sharpen CHANGELOG rule — diff between main and ship
Codifies what was already implicit in the existing "Never orphan
branch-internal versions" + "Only document what shipped between main
and this change" sections, but with sharper language and concrete
NEVER examples.
The rule: a CHANGELOG entry is the diff between main and the shipping
branch — what users get when they upgrade. NOT how the branch got
there. Branch-internal version bumps, mid-branch bug fixes, plan
review outcomes, and patch narratives all belong in PR descriptions
and commit messages, not in CHANGELOG.
Adds explicit examples of phrasing to NEVER use:
- "v1.X had a bug that v1.Y fixes" (mentions a branch-internal version)
- "The shipping headline of v1.X was broken because..." (apologizes
for never-released state)
- "Pre-fix tests encoded the broken behavior" (contributor's victory
lap, not user benefit)
- "Two surgical edits, both in the dispatch path" (micro-narrative
of the patch)
The constructive replacement: describe the released system as a
property, not as a fix. "Browser-skills run end-to-end with the
expected tab-access semantics." If a property is worth calling out,
document it in the trust-model section, not as a "we fixed X" callout.
Pairs with feedback_no_shame_changelog and
feedback_changelog_harden_against_critics memories — entries should
read as a flex even to a hostile screenshotter, never admit prior
breakage.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(changelog): consolidate v1.20.0.0 as the diff vs main
Rewrites the v1.20.0.0 entry to describe what users get when they
upgrade from main (v1.17.0.0) to this release: browser-skills
end-to-end. Drops all branch-internal narrative — Phase 1 / Phase 2a
labels, the v1.8.0.0 P1 history paragraph, the test-counts-by-phase
split, and the patch micro-narrative for the tab-policy semantics.
The previously-separate v1.19.0.0 entry (a branch-internal version
that never landed on main) collapses into v1.20.0.0 per the
"Never orphan branch-internal versions" rule.
Tab-access policies are now documented as a property of the trust
model: `'shared'` (skill spawns) is permissive, `'own-only'`
(pair-agent over the tunnel) is strict. No "fix" framing, no
mention of an intermediate state where it was broken.
Adds the BROWSER.md rewrite and the new tab-isolation +
server-auth source-shape regression tests to the itemized changes.
The reverse-chronological order remains: v1.20.0.0 → v1.17.0.0 →
v1.16.0.0 → v1.15.0.0 → ... Gaps (v1.18, v1.19) are fine — those
were branch-internal version numbers that never landed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2135 lines
92 KiB
TypeScript
/**
 * gstack browse server — persistent Chromium daemon
 *
 * Architecture:
 *   Bun.serve HTTP on localhost → routes commands to Playwright
 *   Console/network/dialog buffers: CircularBuffer in-memory + async disk flush
 *   Chromium crash → server EXITS with clear error (CLI auto-restarts)
 *   Auto-shutdown after BROWSE_IDLE_TIMEOUT (default 30 min)
 *
 * State:
 *   State file: <project-root>/.gstack/browse.json (set via BROWSE_STATE_FILE env)
 *   Log files: <project-root>/.gstack/browse-{console,network,dialog}.log
 *   Port: random 10000-60000 (or BROWSE_PORT env for debug override)
 */

import { BrowserManager } from './browser-manager';
import { handleReadCommand } from './read-commands';
import { handleWriteCommand } from './write-commands';
import { handleMetaCommand } from './meta-commands';
import { handleCookiePickerRoute, hasActivePicker } from './cookie-picker-routes';
import { sanitizeExtensionUrl } from './sidebar-utils';
import { COMMAND_DESCRIPTIONS, PAGE_CONTENT_COMMANDS, DOM_CONTENT_COMMANDS, wrapUntrustedContent, canonicalizeCommand, buildUnknownCommandError, ALL_COMMANDS } from './commands';
import {
  wrapUntrustedPageContent, datamarkContent,
  runContentFilters, type ContentFilterResult,
  markHiddenElements, getCleanTextWithStripping, cleanupHiddenMarkers,
} from './content-security';
import { generateCanary, injectCanary, getStatus as getSecurityStatus, writeDecision } from './security';
import { handleSnapshot, SNAPSHOT_FLAGS } from './snapshot';
import {
  initRegistry, validateToken as validateScopedToken, checkScope, checkDomain,
  checkRate, createToken, createSetupKey, exchangeSetupKey, revokeToken,
  rotateRoot, listTokens, serializeRegistry, restoreRegistry, recordCommand,
  isRootToken, checkConnectRateLimit, type TokenInfo,
} from './token-registry';
import { validateTempPath } from './path-security';
import { resolveConfig, ensureStateDir, readVersionHash } from './config';
import { emitActivity, subscribe, getActivityAfter, getActivityHistory, getSubscriberCount } from './activity';
import { initAuditLog, writeAuditEntry } from './audit';
import { inspectElement, modifyStyle, resetModifications, getModificationHistory, detachSession, type InspectorResult } from './cdp-inspector';
// Bun.spawn used instead of child_process.spawn (compiled bun binaries
// fail posix_spawn on all executables including /bin/bash)
import { safeUnlink, safeUnlinkQuiet, safeKill } from './error-handling';
import { logTunnelDenial } from './tunnel-denial-log';
import {
  mintSseSessionToken, validateSseSessionToken, extractSseCookie,
  buildSseSetCookie, SSE_COOKIE_NAME,
} from './sse-session-cookie';
import {
  mintPtySessionToken, buildPtySetCookie, revokePtySessionToken,
} from './pty-session-cookie';
import * as fs from 'fs';
import * as net from 'net';
import * as path from 'path';
import * as crypto from 'crypto';
// ─── Config ─────────────────────────────────────────────────────
const config = resolveConfig();
ensureStateDir(config);
initAuditLog(config.auditLog);

// ─── Auth ───────────────────────────────────────────────────────
const AUTH_TOKEN = crypto.randomUUID();
initRegistry(AUTH_TOKEN);
const BROWSE_PORT = parseInt(process.env.BROWSE_PORT || '0', 10);
const IDLE_TIMEOUT_MS = parseInt(process.env.BROWSE_IDLE_TIMEOUT || '1800000', 10); // 30 min

/**
 * Port the local listener bound to. Set once the daemon picks a port.
 * Used by `$B skill run` to point spawned skill scripts at the daemon over
 * loopback. Module-level so handleCommandInternal can read it without threading
 * the port through every dispatch.
 */
let LOCAL_LISTEN_PORT: number = 0;

// Sidebar chat is always enabled in headed mode (ungated in v0.12.0)

// ─── Tunnel State ───────────────────────────────────────────────
//
// Dual-listener architecture: the daemon binds TWO HTTP listeners when a
// tunnel is active. The local listener serves bootstrap + CLI + sidebar
// (never exposed to ngrok). The tunnel listener serves only the pairing
// ceremony and scoped-token command endpoints (the ONLY port ngrok forwards).
//
// Security property comes from physical port separation: a tunnel caller
// cannot reach bootstrap endpoints because they live on a different TCP
// socket, not because of any per-request check.
let tunnelActive = false;
let tunnelUrl: string | null = null;
let tunnelListener: any = null; // ngrok listener handle
let tunnelServer: ReturnType<typeof Bun.serve> | null = null; // tunnel HTTP listener

/** Which HTTP listener accepted this request. */
export type Surface = 'local' | 'tunnel';

/**
 * Paths reachable over the tunnel surface. Everything else returns 404.
 *
 * `/connect` is the only unauthenticated tunnel endpoint — POST for setup-key
 * exchange, GET for an `{alive: true}` probe used by /pair and /tunnel/start
 * to detect dead ngrok tunnels. Other paths in this set require a scoped
 * token via Authorization: Bearer.
 *
 * Updating this set is a deliberate security decision. Every addition widens
 * the tunnel attack surface.
 */
const TUNNEL_PATHS = new Set<string>([
  '/connect',
  '/command',
  '/sidebar-chat',
]);
/**
 * Commands reachable via POST /command over the tunnel surface. A paired
 * remote agent can drive the browser (goto, click, text, etc.) but cannot
 * configure the daemon, bootstrap new sessions, import cookies, or reach
 * extension-inspector state. This allowlist maps to the eng-review decision
 * logged in the CEO plan for sec-wave v1.6.0.0.
 */
export const TUNNEL_COMMANDS = new Set<string>([
  // Original 17
  'goto', 'click', 'text', 'screenshot',
  'html', 'links', 'forms', 'accessibility',
  'attrs', 'media', 'data',
  'scroll', 'press', 'type', 'select', 'wait', 'eval',
  // Tab + navigation primitives operator docs and CLI hints already promised
  'newtab', 'tabs', 'back', 'forward', 'reload',
  // Read/inspect/write operators paired agents need to be useful
  'snapshot', 'fill', 'url', 'closetab',
]);

/**
 * Pure gate: returns true iff the command is reachable over the tunnel surface.
 * Extracted from the inline /command handler so the gate logic is unit-testable
 * without standing up an HTTP listener. Behavior is identical to the inline
 * check; the function canonicalizes the command (so aliases hit the same set)
 * and returns false for null/undefined input.
 */
export function canDispatchOverTunnel(command: string | undefined | null): boolean {
  if (typeof command !== 'string' || command.length === 0) return false;
  const cmd = canonicalizeCommand(command);
  return TUNNEL_COMMANDS.has(cmd);
}
/**
 * Read ngrok authtoken from env var, ~/.gstack/ngrok.env, or ngrok's native
 * config files. Returns null if nothing found. Shared between the
 * /tunnel/start handler and the BROWSE_TUNNEL=1 auto-start flow.
 */
function resolveNgrokAuthtoken(): string | null {
  let authtoken = process.env.NGROK_AUTHTOKEN;
  if (authtoken) return authtoken;

  const home = process.env.HOME || '';
  const ngrokEnvPath = path.join(home, '.gstack', 'ngrok.env');
  if (fs.existsSync(ngrokEnvPath)) {
    try {
      const envContent = fs.readFileSync(ngrokEnvPath, 'utf-8');
      const match = envContent.match(/^NGROK_AUTHTOKEN=(.+)$/m);
      if (match) return match[1].trim();
    } catch {}
  }

  const ngrokConfigs = [
    path.join(home, 'Library', 'Application Support', 'ngrok', 'ngrok.yml'),
    path.join(home, '.config', 'ngrok', 'ngrok.yml'),
    path.join(home, '.ngrok2', 'ngrok.yml'),
  ];
  for (const conf of ngrokConfigs) {
    try {
      const content = fs.readFileSync(conf, 'utf-8');
      const match = content.match(/authtoken:\s*(.+)/);
      if (match) return match[1].trim();
    } catch {}
  }
  return null;
}

/**
 * Tear down the tunnel: close the ngrok listener and stop the tunnel-surface
 * Bun.serve listener. Safe to call with nothing running. Always clears
 * tunnel state regardless of individual close failures.
 */
async function closeTunnel(): Promise<void> {
  try { if (tunnelListener) await tunnelListener.close(); } catch {}
  try { if (tunnelServer) tunnelServer.stop(true); } catch {}
  tunnelListener = null;
  tunnelServer = null;
  tunnelUrl = null;
  tunnelActive = false;
}

function validateAuth(req: Request): boolean {
  const header = req.headers.get('authorization');
  return header === `Bearer ${AUTH_TOKEN}`;
}
/**
 * Terminal-agent discovery. The non-compiled bun process at
 * `browse/src/terminal-agent.ts` writes its chosen port to
 * `<stateDir>/terminal-port` and the loopback handshake token to
 * `<stateDir>/terminal-internal-token` once it boots. Read on demand —
 * lazy so we don't break tests that don't spawn the agent.
 */
function readTerminalPort(): number | null {
  try {
    const f = path.join(path.dirname(config.stateFile), 'terminal-port');
    const v = parseInt(fs.readFileSync(f, 'utf-8').trim(), 10);
    return Number.isFinite(v) && v > 0 ? v : null;
  } catch { return null; }
}

function readTerminalInternalToken(): string | null {
  try {
    const f = path.join(path.dirname(config.stateFile), 'terminal-internal-token');
    const t = fs.readFileSync(f, 'utf-8').trim();
    return t.length > 16 ? t : null;
  } catch { return null; }
}

/**
 * Push a freshly-minted PTY cookie token to the terminal-agent so its
 * /ws upgrade can validate the cookie. Loopback POST authenticated with
 * the internal token written by the agent at startup. Fire-and-forget;
 * if the agent isn't up yet, the extension just retries /pty-session.
 */
async function grantPtyToken(token: string): Promise<boolean> {
  const port = readTerminalPort();
  const internal = readTerminalInternalToken();
  if (!port || !internal) return false;
  try {
    const resp = await fetch(`http://127.0.0.1:${port}/internal/grant`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${internal}`,
      },
      body: JSON.stringify({ token }),
      signal: AbortSignal.timeout(2000),
    });
    return resp.ok;
  } catch { return false; }
}

/** Extract bearer token from request. Returns the token string or null. */
function extractToken(req: Request): string | null {
  const header = req.headers.get('authorization');
  if (!header?.startsWith('Bearer ')) return null;
  return header.slice(7);
}

/** Validate token and return TokenInfo. Returns null if invalid/expired. */
function getTokenInfo(req: Request): TokenInfo | null {
  const token = extractToken(req);
  if (!token) return null;
  return validateScopedToken(token);
}

/** Check if request is from root token (local use). */
function isRootRequest(req: Request): boolean {
  const token = extractToken(req);
  return token !== null && isRootToken(token);
}
// Sidebar model router was here (sonnet vs opus by message intent). Ripped
// alongside the chat queue; the interactive PTY just runs whatever model
// the user's `claude` CLI is configured with.

// ─── Help text (auto-generated from COMMAND_DESCRIPTIONS) ────────
function generateHelpText(): string {
  // Group commands by category
  const groups = new Map<string, string[]>();
  for (const [cmd, meta] of Object.entries(COMMAND_DESCRIPTIONS)) {
    const display = meta.usage || cmd;
    const list = groups.get(meta.category) || [];
    list.push(display);
    groups.set(meta.category, list);
  }

  const categoryOrder = [
    'Navigation', 'Reading', 'Interaction', 'Inspection',
    'Visual', 'Snapshot', 'Meta', 'Tabs', 'Server',
  ];

  const lines = ['gstack browse — headless browser for AI agents', '', 'Commands:'];
  for (const cat of categoryOrder) {
    const cmds = groups.get(cat);
    if (!cmds) continue;
    lines.push(` ${(cat + ':').padEnd(15)}${cmds.join(', ')}`);
  }

  // Snapshot flags from source of truth
  lines.push('');
  lines.push('Snapshot flags:');
  const flagPairs: string[] = [];
  for (const flag of SNAPSHOT_FLAGS) {
    const label = flag.valueHint ? `${flag.short} ${flag.valueHint}` : flag.short;
    flagPairs.push(`${label} ${flag.long}`);
  }
  // Print two flags per line for compact display
  for (let i = 0; i < flagPairs.length; i += 2) {
    const left = flagPairs[i].padEnd(28);
    const right = flagPairs[i + 1] || '';
    lines.push(` ${left}${right}`);
  }

  return lines.join('\n');
}
// ─── Buffer (from buffers.ts) ────────────────────────────────────
import { consoleBuffer, networkBuffer, dialogBuffer, addConsoleEntry, addNetworkEntry, addDialogEntry, type LogEntry, type NetworkEntry, type DialogEntry } from './buffers';
export { consoleBuffer, networkBuffer, dialogBuffer, addConsoleEntry, addNetworkEntry, addDialogEntry, type LogEntry, type NetworkEntry, type DialogEntry };

const CONSOLE_LOG_PATH = config.consoleLog;
const NETWORK_LOG_PATH = config.networkLog;
const DIALOG_LOG_PATH = config.dialogLog;

// ─── Sidebar agent / chat state ripped ──────────────────────────────
// ChatEntry, SidebarSession, TabAgentState interfaces; chatBuffer,
// chatBuffers, sidebarSession, agentProcess, agentStatus, agentStartTime,
// agentTabId, messageQueue, currentMessage, tabAgents; addChatEntry,
// loadSession, createSession, persistSession, processAgentEvent,
// killAgent, listSessions, getTabAgent, getTabAgentStatus, and the
// agentHealthInterval all lived here. Replaced by the live PTY in
// terminal-agent.ts; chat queue + per-tab agent multiplexing are no
// longer needed.

let lastConsoleFlushed = 0;
let lastNetworkFlushed = 0;
let lastDialogFlushed = 0;
let flushInProgress = false;

async function flushBuffers() {
  if (flushInProgress) return; // Guard against concurrent flush
  flushInProgress = true;

  try {
    // Console buffer
    const newConsoleCount = consoleBuffer.totalAdded - lastConsoleFlushed;
    if (newConsoleCount > 0) {
      const entries = consoleBuffer.last(Math.min(newConsoleCount, consoleBuffer.length));
      const lines = entries.map(e =>
        `[${new Date(e.timestamp).toISOString()}] [${e.level}] ${e.text}`
      ).join('\n') + '\n';
      fs.appendFileSync(CONSOLE_LOG_PATH, lines);
      lastConsoleFlushed = consoleBuffer.totalAdded;
    }

    // Network buffer
    const newNetworkCount = networkBuffer.totalAdded - lastNetworkFlushed;
    if (newNetworkCount > 0) {
      const entries = networkBuffer.last(Math.min(newNetworkCount, networkBuffer.length));
      const lines = entries.map(e =>
        `[${new Date(e.timestamp).toISOString()}] ${e.method} ${e.url} → ${e.status || 'pending'} (${e.duration || '?'}ms, ${e.size || '?'}B)`
      ).join('\n') + '\n';
      fs.appendFileSync(NETWORK_LOG_PATH, lines);
      lastNetworkFlushed = networkBuffer.totalAdded;
    }

    // Dialog buffer
    const newDialogCount = dialogBuffer.totalAdded - lastDialogFlushed;
    if (newDialogCount > 0) {
      const entries = dialogBuffer.last(Math.min(newDialogCount, dialogBuffer.length));
      const lines = entries.map(e =>
        `[${new Date(e.timestamp).toISOString()}] [${e.type}] "${e.message}" → ${e.action}${e.response ? ` "${e.response}"` : ''}`
      ).join('\n') + '\n';
      fs.appendFileSync(DIALOG_LOG_PATH, lines);
      lastDialogFlushed = dialogBuffer.totalAdded;
    }
  } catch (err: any) {
    console.error('[browse] Buffer flush failed:', err.message);
  } finally {
    flushInProgress = false;
  }
}
// Flush every 1 second
const flushInterval = setInterval(flushBuffers, 1000);

// ─── Idle Timer ────────────────────────────────────────────────
let lastActivity = Date.now();

function resetIdleTimer() {
  lastActivity = Date.now();
}

const idleCheckInterval = setInterval(() => {
  // Headed mode: the user is looking at the browser. Never auto-die.
  // Only shut down when the user explicitly disconnects or closes the window.
  if (browserManager.getConnectionMode() === 'headed') return;
  // Tunnel mode: remote agents may send commands sporadically. Never auto-die.
  if (tunnelActive) return;
  if (Date.now() - lastActivity > IDLE_TIMEOUT_MS) {
    console.log(`[browse] Idle for ${IDLE_TIMEOUT_MS / 1000}s, shutting down`);
    shutdown();
  }
}, 60_000);
// ─── Parent-Process Watchdog ────────────────────────────────────────
// When the spawning CLI process (e.g. a Claude Code session) exits, this
// server can become an orphan — keeping chrome-headless-shell alive and
// causing console-window flicker on Windows. Poll the parent PID every 15s
// and self-terminate if it is gone.
//
// Headed mode (BROWSE_HEADED=1 or BROWSE_PARENT_PID=0): The user controls
// the browser window lifecycle. The CLI exits immediately after connect,
// so the watchdog would kill the server prematurely. Disabled in both cases
// as defense-in-depth — the CLI sets PID=0 for headed mode, and the server
// also checks BROWSE_HEADED in case a future launcher forgets.
// Cleanup happens via browser disconnect event or $B disconnect.
const BROWSE_PARENT_PID = parseInt(process.env.BROWSE_PARENT_PID || '0', 10);
// Outer gate: if the spawner explicitly marks this as headed (env var set at
// launch time), skip registering the watchdog entirely. Cheaper than entering
// the closure every 15s. The CLI's connect path sets BROWSE_HEADED=1 + PID=0,
// so this branch is the normal path for /open-gstack-browser.
const IS_HEADED_WATCHDOG = process.env.BROWSE_HEADED === '1';
if (BROWSE_PARENT_PID > 0 && !IS_HEADED_WATCHDOG) {
  let parentGone = false;
  setInterval(() => {
    try {
      process.kill(BROWSE_PARENT_PID, 0); // signal 0 = existence check only, no signal sent
    } catch {
      // Parent exited. Resolution order:
      // 1. Active cookie picker (one-time code or session live)? Stay alive
      //    regardless of mode — tearing down the server mid-import leaves the
      //    picker UI with a stale "Failed to fetch" error.
      // 2. Headed / tunnel mode? Shutdown. The idle timeout doesn't apply in
      //    these modes (see idleCheckInterval above — both early-return), so
      //    ignoring parent death here would leak orphan daemons after
      //    /pair-agent or /open-gstack-browser sessions.
      // 3. Normal (headless) mode? Stay alive. Claude Code's Bash tool kills
      //    the parent shell between invocations. The idle timeout (30 min)
      //    handles eventual cleanup.
      if (hasActivePicker()) return;
      const headed = browserManager.getConnectionMode() === 'headed';
      if (headed || tunnelActive) {
        console.log(`[browse] Parent process ${BROWSE_PARENT_PID} exited in ${headed ? 'headed' : 'tunnel'} mode, shutting down`);
        shutdown();
      } else if (!parentGone) {
        parentGone = true;
        console.log(`[browse] Parent process ${BROWSE_PARENT_PID} exited (server stays alive, idle timeout will clean up)`);
      }
    }
  }, 15_000);
} else if (IS_HEADED_WATCHDOG) {
  console.log('[browse] Parent-process watchdog disabled (headed mode)');
} else if (BROWSE_PARENT_PID === 0) {
  console.log('[browse] Parent-process watchdog disabled (BROWSE_PARENT_PID=0)');
}
// ─── Command Sets (from commands.ts — single source of truth) ───
import { READ_COMMANDS, WRITE_COMMANDS, META_COMMANDS } from './commands';
export { READ_COMMANDS, WRITE_COMMANDS, META_COMMANDS };

// ─── Inspector State (in-memory) ──────────────────────────────
let inspectorData: InspectorResult | null = null;
let inspectorTimestamp: number = 0;

// Inspector SSE subscribers
type InspectorSubscriber = (event: any) => void;
const inspectorSubscribers = new Set<InspectorSubscriber>();

function emitInspectorEvent(event: any): void {
  for (const notify of inspectorSubscribers) {
    queueMicrotask(() => {
      try { notify(event); } catch (err: any) {
        console.error('[browse] Inspector event subscriber threw:', err.message);
      }
    });
  }
}

// ─── Server ────────────────────────────────────────────────────
const browserManager = new BrowserManager();
// When the user closes the headed browser window, run full cleanup
// (kill sidebar-agent, save session, remove profile locks, delete state file)
// before exiting with code 2. Exit code 2 distinguishes user-close from crashes (1).
browserManager.onDisconnect = () => shutdown(2);
let isShuttingDown = false;

// Test if a port is available by binding and immediately releasing.
// Uses net.createServer instead of Bun.serve to avoid a race condition
// in the Node.js polyfill where listen/close are async but the caller
// expects synchronous bind semantics. See: #486
function isPortAvailable(port: number, hostname: string = '127.0.0.1'): Promise<boolean> {
  return new Promise((resolve) => {
    const srv = net.createServer();
    srv.once('error', () => resolve(false));
    srv.listen(port, hostname, () => {
      srv.close(() => resolve(true));
    });
  });
}

// Find port: explicit BROWSE_PORT, or random in 10000-60000
async function findPort(): Promise<number> {
  // Explicit port override (for debugging)
  if (BROWSE_PORT) {
    if (await isPortAvailable(BROWSE_PORT)) {
      return BROWSE_PORT;
    }
    throw new Error(`[browse] Port ${BROWSE_PORT} (from BROWSE_PORT env) is in use`);
  }

  // Random port with retry
  const MIN_PORT = 10000;
  const MAX_PORT = 60000;
  const MAX_RETRIES = 5;
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    const port = MIN_PORT + Math.floor(Math.random() * (MAX_PORT - MIN_PORT));
    if (await isPortAvailable(port)) {
      return port;
    }
  }
  throw new Error(`[browse] No available port after ${MAX_RETRIES} attempts in range ${MIN_PORT}-${MAX_PORT}`);
}
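
// Back-of-envelope on the retry budget (illustrative — assumes attempts are
// independent and busy ports are roughly uniform across the range): with
// 50,000 candidate ports, even if ~500 (1%) were busy, the chance that all
// 5 random attempts collide is about (500 / 50,000)^5 = 1e-10, so the loop
// effectively only fails when the range itself is misconfigured or exhausted.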

/**
 * Translate Playwright errors into actionable messages for AI agents.
 */
function wrapError(err: any): string {
  const msg = err.message || String(err);
  // Timeout errors
  if (err.name === 'TimeoutError' || msg.includes('Timeout') || msg.includes('timeout')) {
    if (msg.includes('locator.click') || msg.includes('locator.fill') || msg.includes('locator.hover')) {
      return `Element not found or not interactable within timeout. Check your selector or run 'snapshot' for fresh refs.`;
    }
    if (msg.includes('page.goto') || msg.includes('Navigation')) {
      return `Page navigation timed out. The URL may be unreachable or the page may be loading slowly.`;
    }
    return `Operation timed out: ${msg.split('\n')[0]}`;
  }
  // Multiple elements matched
  if (msg.includes('resolved to') && msg.includes('elements')) {
    return `Selector matched multiple elements. Be more specific or use @refs from 'snapshot'.`;
  }
  // Pass through other errors
  return msg;
}
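
// Illustrative mapping (messages abridged — not verbatim Playwright output):
//   TimeoutError "locator.click: Timeout 30000ms exceeded"
//     → "Element not found or not interactable within timeout. ..."
//   Error "strict mode violation: ... resolved to 3 elements"
//     → "Selector matched multiple elements. ..."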

/** Internal command result — used by handleCommand and chain subcommand routing */
interface CommandResult {
  status: number;
  result: string;
  headers?: Record<string, string>;
  json?: boolean; // true if result is JSON (errors), false for text/plain
}

/**
 * Core command execution logic. Returns a structured result instead of an HTTP Response.
 * Used by both the HTTP handler (handleCommand) and chain subcommand routing.
 *
 * Options:
 *   skipRateCheck: true when called from chain (chain counts as 1 request)
 *   skipActivity: true when called from chain (chain emits 1 event for all subcommands)
 *   chainDepth: recursion guard — reject nested chains (depth > 0 means inside a chain)
 */
async function handleCommandInternal(
  body: { command: string; args?: string[]; tabId?: number },
  tokenInfo?: TokenInfo | null,
  opts?: { skipRateCheck?: boolean; skipActivity?: boolean; chainDepth?: number },
): Promise<CommandResult> {
  const { args = [], tabId } = body;
  const rawCommand = body.command;

  if (!rawCommand) {
    return { status: 400, result: JSON.stringify({ error: 'Missing "command" field' }), json: true };
  }

  // ─── Alias canonicalization (before scope, watch, tab-ownership, dispatch) ─
  // Agent-friendly names like 'setcontent' route to canonical 'load-html'. Must
  // happen BEFORE the scope check so a read-scoped token calling 'setcontent' is
  // still rejected (load-html lives in SCOPE_WRITE). Audit logging preserves
  // rawCommand so the trail records what the agent actually typed.
  const command = canonicalizeCommand(rawCommand);
  const isAliased = command !== rawCommand;

  // ─── Recursion guard: reject nested chains ──────────────────
  if (command === 'chain' && (opts?.chainDepth ?? 0) > 0) {
    return { status: 400, result: JSON.stringify({ error: 'Nested chain commands are not allowed' }), json: true };
  }

  // ─── Scope check (for scoped tokens) ──────────────────────────
  if (tokenInfo && tokenInfo.clientId !== 'root') {
    if (!checkScope(tokenInfo, command)) {
      return {
        status: 403, json: true,
        result: JSON.stringify({
          error: `Command "${command}" not allowed by your token scope`,
          hint: `Your scopes: ${tokenInfo.scopes.join(', ')}. Ask the user to re-pair with --admin for eval/cookies/storage access.`,
        }),
      };
    }

    // Domain check for navigation commands
    if ((command === 'goto' || command === 'newtab') && args[0]) {
      if (!checkDomain(tokenInfo, args[0])) {
        return {
          status: 403, json: true,
          result: JSON.stringify({
            error: `Domain not allowed by your token scope`,
            hint: `Allowed domains: ${tokenInfo.domains?.join(', ') || 'none configured'}`,
          }),
        };
      }
    }

    // Rate check (skipped for chain subcommands — chain counts as 1 request)
    if (!opts?.skipRateCheck) {
      const rateResult = checkRate(tokenInfo);
      if (!rateResult.allowed) {
        return {
          status: 429, json: true,
          result: JSON.stringify({
            error: 'Rate limit exceeded',
            hint: `Max ${tokenInfo.rateLimit} requests/second. Retry after ${rateResult.retryAfterMs}ms.`,
          }),
          headers: { 'Retry-After': String(Math.ceil((rateResult.retryAfterMs || 1000) / 1000)) },
        };
      }
    }

    // Record command execution for idempotent key-exchange tracking
    if (!opts?.skipRateCheck && tokenInfo.token) recordCommand(tokenInfo.token);
  }

  // Pin to a specific tab if requested (set by BROWSE_TAB env var in sidebar agents).
  // This prevents parallel agents from interfering with each other's tab context.
  // Safe because Bun's event loop is single-threaded — no concurrent handleCommand.
  let savedTabId: number | null = null;
  if (tabId !== undefined && tabId !== null) {
    savedTabId = browserManager.getActiveTabId();
    // bringToFront: false — internal tab pinning must NOT steal window focus
    try { browserManager.switchTab(tabId, { bringToFront: false }); } catch (err: any) {
      console.warn('[browse] Failed to pin tab', tabId, ':', err.message);
    }
  }

  // ─── Tab ownership check (own-only tokens / pair-agent isolation) ──
  //
  // Only `own-only` tokens (pair-agent over tunnel) are bound to their own
  // tabs. `shared` tokens — the default for skill spawns and local scoped
  // clients — can drive any tab; the capability gate (scope checks above)
  // and rate limits already constrain what they can do.
  //
  // Skip for `newtab` — it creates a tab rather than accessing one.
  if (command !== 'newtab' && tokenInfo && tokenInfo.clientId !== 'root' && tokenInfo.tabPolicy === 'own-only') {
    const targetTab = tabId ?? browserManager.getActiveTabId();
    if (!browserManager.checkTabAccess(targetTab, tokenInfo.clientId, { isWrite: WRITE_COMMANDS.has(command), ownOnly: true })) {
      return {
        status: 403, json: true,
        result: JSON.stringify({
          error: 'Tab not owned by your agent. Use newtab to create your own tab.',
          hint: `Tab ${targetTab} is owned by ${browserManager.getTabOwner(targetTab) || 'root'}. Your agent: ${tokenInfo.clientId}.`,
        }),
      };
    }
  }

  // ─── newtab with ownership for scoped tokens ──────────────
  if (command === 'newtab' && tokenInfo && tokenInfo.clientId !== 'root') {
    const newId = await browserManager.newTab(args[0] || undefined, tokenInfo.clientId);
    return {
      status: 200, json: true,
      result: JSON.stringify({
        tabId: newId,
        owner: tokenInfo.clientId,
        hint: 'Include "tabId": ' + newId + ' in subsequent commands to target this tab.',
      }),
    };
  }

  // Block mutation commands while watching (read-only observation mode)
  if (browserManager.isWatching() && WRITE_COMMANDS.has(command)) {
    return {
      status: 400, json: true,
      result: JSON.stringify({ error: 'Cannot run mutation commands while watching. Run `$B watch stop` first.' }),
    };
  }

  // Activity: emit command_start (skipped for chain subcommands)
  const startTime = Date.now();
  if (!opts?.skipActivity) {
    emitActivity({
      type: 'command_start',
      command,
      args,
      url: browserManager.getCurrentUrl(),
      tabs: browserManager.getTabCount(),
      mode: browserManager.getConnectionMode(),
      clientId: tokenInfo?.clientId,
    });
  }

  try {
    let result: string;

    const session = browserManager.getActiveSession();

    // Per-request warnings collected during hidden-element detection,
    // surfaced into the envelope the LLM sees. Carries across the read
    // phase into the centralized wrap block below.
    let hiddenContentWarnings: string[] = [];

    if (READ_COMMANDS.has(command)) {
      const isScoped = tokenInfo && tokenInfo.clientId !== 'root';
      // Hidden-element / ARIA-injection detection for every scoped
      // DOM-reading channel (text, html, links, forms, accessibility,
      // attrs, data, media, ux-audit). Previously only `text` received
      // stripping; other channels let hidden injection payloads reach
      // the LLM despite the envelope wrap. Detections become CONTENT
      // WARNINGS on the outgoing envelope so the model can see what it
      // would have otherwise trusted silently.
      if (isScoped && DOM_CONTENT_COMMANDS.has(command)) {
        const page = session.getPage();
        try {
          const strippedDescs = await markHiddenElements(page);
          if (strippedDescs.length > 0) {
            console.warn(`[browse] Content security: ${strippedDescs.length} hidden elements flagged on ${command} for ${tokenInfo.clientId}`);
            hiddenContentWarnings = strippedDescs.slice(0, 8).map(d =>
              `hidden content: ${d.slice(0, 120)}`,
            );
            if (strippedDescs.length > 8) {
              hiddenContentWarnings.push(`hidden content: +${strippedDescs.length - 8} more flagged elements`);
            }
          }
          if (command === 'text') {
            const target = session.getActiveFrameOrPage();
            result = await getCleanTextWithStripping(target);
          } else {
            result = await handleReadCommand(command, args, session, browserManager);
          }
        } finally {
          await cleanupHiddenMarkers(page);
        }
      } else {
        result = await handleReadCommand(command, args, session, browserManager);
      }
    } else if (WRITE_COMMANDS.has(command)) {
      result = await handleWriteCommand(command, args, session, browserManager);
    } else if (META_COMMANDS.has(command)) {
      // Pass chain depth + executeCommand callback so chain routes subcommands
      // through the full security pipeline (scope, domain, tab, wrapping).
      const chainDepth = (opts?.chainDepth ?? 0);
      result = await handleMetaCommand(command, args, browserManager, shutdown, tokenInfo, {
        chainDepth,
        daemonPort: LOCAL_LISTEN_PORT,
        executeCommand: (body, ti) => handleCommandInternal(body, ti, {
          skipRateCheck: true, // chain counts as 1 request
          skipActivity: true, // chain emits 1 event for all subcommands
          chainDepth: chainDepth + 1, // recursion guard
        }),
      });
      // Start periodic snapshot interval when watch mode begins
      if (command === 'watch' && args[0] !== 'stop' && browserManager.isWatching()) {
        const watchInterval = setInterval(async () => {
          if (!browserManager.isWatching()) {
            clearInterval(watchInterval);
            return;
          }
          try {
            const snapshot = await handleSnapshot(['-i'], browserManager.getActiveSession());
            browserManager.addWatchSnapshot(snapshot);
          } catch {
            // Page may be navigating — skip this snapshot
          }
        }, 5000);
        browserManager.watchInterval = watchInterval;
      }
    } else if (command === 'help') {
      const helpText = generateHelpText();
      return { status: 200, result: helpText };
    } else {
      // Use the rich unknown-command helper: names the input, suggests the closest
      // match via Levenshtein (≤ 2 distance, ≥ 4 chars input), and appends an upgrade
      // hint if the command is listed in NEW_IN_VERSION.
      return {
        status: 400, json: true,
        result: JSON.stringify({
          error: buildUnknownCommandError(rawCommand, ALL_COMMANDS),
          hint: `Available commands: ${[...READ_COMMANDS, ...WRITE_COMMANDS, ...META_COMMANDS].sort().join(', ')}`,
        }),
      };
    }

    // ─── Centralized content wrapping (single location for all commands) ───
    // Scoped tokens: content filter + enhanced envelope + datamarking
    // Root tokens: basic untrusted content wrapper (backward compat)
    // Chain exempt from top-level wrapping (each subcommand wrapped individually)
    if (PAGE_CONTENT_COMMANDS.has(command) && command !== 'chain') {
      const isScoped = tokenInfo && tokenInfo.clientId !== 'root';
      if (isScoped) {
        // Run content filters
        const filterResult: ContentFilterResult = runContentFilters(
          result, browserManager.getCurrentUrl(), command,
        );
        if (filterResult.blocked) {
          return { status: 403, json: true, result: JSON.stringify({ error: filterResult.message }) };
        }
        // Datamark text command output only (not html, forms, or structured data)
        if (command === 'text') {
          result = datamarkContent(result);
        }
        // Enhanced envelope wrapping for scoped tokens.
        // Merge per-request hidden-element warnings with content-filter
        // warnings so both reach the LLM through the same CONTENT
        // WARNINGS header.
        const combinedWarnings = [...filterResult.warnings, ...hiddenContentWarnings];
        result = wrapUntrustedPageContent(
          result, command,
          combinedWarnings.length > 0 ? combinedWarnings : undefined,
        );
      } else {
        // Root token: basic wrapping (backward compat, Decision 2)
        result = wrapUntrustedContent(result, browserManager.getCurrentUrl());
      }
    }

    // Activity: emit command_end (skipped for chain subcommands)
    const successDuration = Date.now() - startTime;
    if (!opts?.skipActivity) {
      emitActivity({
        type: 'command_end',
        command,
        args,
        url: browserManager.getCurrentUrl(),
        duration: successDuration,
        status: 'ok',
        result: result,
        tabs: browserManager.getTabCount(),
        mode: browserManager.getConnectionMode(),
        clientId: tokenInfo?.clientId,
      });
    }

    writeAuditEntry({
      ts: new Date().toISOString(),
      cmd: command,
      aliasOf: isAliased ? rawCommand : undefined,
      args: args.join(' '),
      origin: browserManager.getCurrentUrl(),
      durationMs: successDuration,
      status: 'ok',
      hasCookies: browserManager.hasCookieImports(),
      mode: browserManager.getConnectionMode(),
    });

    browserManager.resetFailures();
    // Restore the original active tab if we pinned to a specific one
    if (savedTabId !== null) {
      try { browserManager.switchTab(savedTabId, { bringToFront: false }); } catch (restoreErr: any) {
        console.warn('[browse] Failed to restore tab after command:', restoreErr.message);
      }
    }
    return { status: 200, result };
  } catch (err: any) {
    // Restore the original active tab even on error
    if (savedTabId !== null) {
      try { browserManager.switchTab(savedTabId, { bringToFront: false }); } catch (restoreErr: any) {
        console.warn('[browse] Failed to restore tab after error:', restoreErr.message);
      }
    }

    // Activity: emit command_end (error) — skipped for chain subcommands
    const errorDuration = Date.now() - startTime;
    if (!opts?.skipActivity) {
      emitActivity({
        type: 'command_end',
        command,
        args,
        url: browserManager.getCurrentUrl(),
        duration: errorDuration,
        status: 'error',
        error: err.message,
        tabs: browserManager.getTabCount(),
        mode: browserManager.getConnectionMode(),
        clientId: tokenInfo?.clientId,
      });
    }

    writeAuditEntry({
      ts: new Date().toISOString(),
      cmd: command,
      aliasOf: isAliased ? rawCommand : undefined,
      args: args.join(' '),
      origin: browserManager.getCurrentUrl(),
      durationMs: errorDuration,
      status: 'error',
      error: err.message,
      hasCookies: browserManager.hasCookieImports(),
      mode: browserManager.getConnectionMode(),
    });

    browserManager.incrementFailures();
    let errorMsg = wrapError(err);
    const hint = browserManager.getFailureHint();
    if (hint) errorMsg += '\n' + hint;
    return { status: 500, result: JSON.stringify({ error: errorMsg }), json: true };
  }
}

/** HTTP wrapper — converts a CommandResult to a Response */
async function handleCommand(body: any, tokenInfo?: TokenInfo | null): Promise<Response> {
  const cr = await handleCommandInternal(body, tokenInfo);
  const contentType = cr.json ? 'application/json' : 'text/plain';
  return new Response(cr.result, {
    status: cr.status,
    headers: { 'Content-Type': contentType, ...cr.headers },
  });
}
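
// Illustrative flow (shapes only — payload values are hypothetical):
//   body {"command": "goto", "args": ["https://example.com"]}
//     → 200, Content-Type: text/plain, result from the write handler
//   body {"command": "setcontent"} from a read-scoped token
//     → 403, Content-Type: application/json (the alias canonicalizes to
//       'load-html' BEFORE the scope check, so aliases can't bypass it)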

async function shutdown(exitCode: number = 0) {
  if (isShuttingDown) return;
  isShuttingDown = true;

  console.log('[browse] Shutting down...');
  // Kill the terminal-agent daemon (spawned by cli.ts, detached). Without
  // this, the agent keeps sitting on its WebSocket port.
  try {
    const { spawnSync } = require('child_process');
    spawnSync('pkill', ['-f', 'terminal-agent\\.ts'], { stdio: 'ignore', timeout: 3000 });
  } catch (err: any) {
    console.warn('[browse] Failed to kill terminal-agent:', err.message);
  }
  // Best-effort cleanup of agent state files so a reconnect doesn't try to
  // hit a dead port.
  try { safeUnlinkQuiet(path.join(path.dirname(config.stateFile), 'terminal-port')); } catch {}
  try { safeUnlinkQuiet(path.join(path.dirname(config.stateFile), 'terminal-internal-token')); } catch {}
  // Clean up CDP inspector sessions
  try { detachSession(); } catch (err: any) {
    console.warn('[browse] Failed to detach CDP session:', err.message);
  }
  inspectorSubscribers.clear();
  // Stop watch mode if active
  if (browserManager.isWatching()) browserManager.stopWatch();
  clearInterval(flushInterval);
  clearInterval(idleCheckInterval);
  await flushBuffers(); // Final flush (async now)

  await browserManager.close();

  // Clean up Chromium profile locks (prevent SingletonLock on next launch)
  const profileDir = path.join(process.env.HOME || '/tmp', '.gstack', 'chromium-profile');
  for (const lockFile of ['SingletonLock', 'SingletonSocket', 'SingletonCookie']) {
    safeUnlinkQuiet(path.join(profileDir, lockFile));
  }

  // Clean up state file
  safeUnlinkQuiet(config.stateFile);

  process.exit(exitCode);
}

// Handle signals
//
// Node passes the signal name (e.g. 'SIGTERM') as the first arg to listeners.
// Wrap calls to shutdown() so it receives no args — otherwise the string gets
// passed as exitCode and process.exit() coerces it to NaN, exiting with code 1
// instead of 0. (Caught in v0.18.1.0 #1025.)
//
// SIGINT (Ctrl+C): user intentionally stopping → shutdown.
process.on('SIGINT', () => shutdown());
// SIGTERM behavior depends on mode:
// - Normal (headless) mode: Claude Code's Bash sandbox fires SIGTERM when the
//   parent shell exits between tool invocations. Ignoring it keeps the server
//   alive across $B calls. Idle timeout (30 min) handles eventual cleanup.
// - Headed / tunnel mode: the idle timeout doesn't apply in these modes. Respect
//   SIGTERM so external tooling (systemd, supervisord, CI) can shut down cleanly
//   without waiting forever. Ctrl+C and /stop still work either way.
// - Active cookie picker: never tear down mid-import regardless of mode —
//   that would strand the picker UI with "Failed to fetch."
process.on('SIGTERM', () => {
  if (hasActivePicker()) {
    console.log('[browse] Received SIGTERM but cookie picker is active, ignoring to avoid stranding the picker UI');
    return;
  }
  const headed = browserManager.getConnectionMode() === 'headed';
  if (headed || tunnelActive) {
    console.log(`[browse] Received SIGTERM in ${headed ? 'headed' : 'tunnel'} mode, shutting down`);
    shutdown();
  } else {
    console.log('[browse] Received SIGTERM (ignoring — use /stop or Ctrl+C for intentional shutdown)');
  }
});
// Windows: taskkill /F bypasses SIGTERM, but 'exit' fires for some shutdown paths.
// Defense-in-depth — primary cleanup is the CLI's stale-state detection via health check.
if (process.platform === 'win32') {
  process.on('exit', () => {
    safeUnlinkQuiet(config.stateFile);
  });
}

// Emergency cleanup for crashes (OOM, uncaught exceptions, browser disconnect)
function emergencyCleanup() {
  if (isShuttingDown) return;
  isShuttingDown = true;
  // Clean Chromium profile locks
  const profileDir = path.join(process.env.HOME || '/tmp', '.gstack', 'chromium-profile');
  for (const lockFile of ['SingletonLock', 'SingletonSocket', 'SingletonCookie']) {
    safeUnlinkQuiet(path.join(profileDir, lockFile));
  }
  safeUnlinkQuiet(config.stateFile);
}
process.on('uncaughtException', (err) => {
  console.error('[browse] FATAL uncaught exception:', err.message);
  emergencyCleanup();
  process.exit(1);
});
process.on('unhandledRejection', (err: any) => {
  console.error('[browse] FATAL unhandled rejection:', err?.message || err);
  emergencyCleanup();
  process.exit(1);
});

// ─── Start ─────────────────────────────────────────────────────
async function start() {
  // Clear old log files
  safeUnlink(CONSOLE_LOG_PATH);
  safeUnlink(NETWORK_LOG_PATH);
  safeUnlink(DIALOG_LOG_PATH);

  const port = await findPort();
  LOCAL_LISTEN_PORT = port;

  // Launch browser (headless, or headed with the extension)
  // BROWSE_HEADLESS_SKIP=1 skips browser launch entirely (for HTTP-only testing)
  const skipBrowser = process.env.BROWSE_HEADLESS_SKIP === '1';
  if (!skipBrowser) {
    const headed = process.env.BROWSE_HEADED === '1';
    if (headed) {
      await browserManager.launchHeaded(AUTH_TOKEN);
      console.log(`[browse] Launched headed Chromium with extension`);
    } else {
      await browserManager.launch();
    }
  }

  const startTime = Date.now();

  // ─── Request handler factory ────────────────────────────────────
  //
  // The same logic serves both the local listener (bootstrap, CLI, sidebar) and
  // the tunnel listener (pairing + scoped-token commands). The factory
  // closes over `surface` so the filter that runs before route dispatch
  // knows which socket accepted the request.
  //
  // On the tunnel surface: reject anything not in TUNNEL_PATHS (404), reject
  // root-token bearers (403), and require a scoped token for everything
  // except /connect. Denials are logged to ~/.gstack/security/attempts.jsonl.
  const makeFetchHandler = (surface: Surface) => async (req: Request): Promise<Response> => {
    const url = new URL(req.url);

    // ─── Tunnel surface filter (runs before any route dispatch) ──
    if (surface === 'tunnel') {
      const isGetConnect = req.method === 'GET' && url.pathname === '/connect';
      const allowed = TUNNEL_PATHS.has(url.pathname);
      if (!allowed && !isGetConnect) {
        logTunnelDenial(req, url, 'path_not_on_tunnel');
        return new Response(JSON.stringify({ error: 'Not found' }), {
          status: 404, headers: { 'Content-Type': 'application/json' },
        });
      }
      if (isRootRequest(req)) {
        logTunnelDenial(req, url, 'root_token_on_tunnel');
        return new Response(JSON.stringify({
          error: 'Root token rejected on tunnel surface',
          hint: 'Remote agents must pair via /connect to receive a scoped token.',
        }), { status: 403, headers: { 'Content-Type': 'application/json' } });
      }
      if (url.pathname !== '/connect' && !getTokenInfo(req)) {
        logTunnelDenial(req, url, 'missing_scoped_token');
        return new Response(JSON.stringify({ error: 'Unauthorized' }), {
          status: 401, headers: { 'Content-Type': 'application/json' },
        });
      }
    }

    // GET /connect — alive probe. Unauthenticated on both surfaces. Used by /pair
    // and /tunnel/start to detect dead ngrok tunnels via the tunnel URL,
    // since /health is not tunnel-reachable under the dual-listener design.
    //
    // Shares the same rate limit as POST /connect — otherwise a tunnel
    // caller could probe with unlimited GETs and never trip a limit, which
    // would make the endpoint a free daemon-enumeration surface.
    if (url.pathname === '/connect' && req.method === 'GET') {
      if (!checkConnectRateLimit()) {
        return new Response(JSON.stringify({ error: 'Rate limited' }), {
          status: 429, headers: { 'Content-Type': 'application/json' },
        });
      }
      return new Response(JSON.stringify({ alive: true }), {
        status: 200, headers: { 'Content-Type': 'application/json' },
      });
    }

    // Cookie picker routes — the HTML page is unauthenticated; data/action routes require auth
    if (url.pathname.startsWith('/cookie-picker')) {
      return handleCookiePickerRoute(url, req, browserManager, AUTH_TOKEN);
    }

    // Welcome page — served when GStack Browser launches in headed mode
    if (url.pathname === '/welcome') {
      const welcomePath = (() => {
        // Gate GSTACK_SLUG on a strict regex BEFORE interpolating it into
        // the filesystem path. Without this, a slug like "../../etc/passwd"
        // would resolve to ~/.gstack/projects/../../etc/passwd/... — path
        // traversal. Not exploitable today (an attacker needs local env-var
        // access), but the gate is one regex and buys us defense-in-depth.
        const rawSlug = process.env.GSTACK_SLUG || 'unknown';
        const slug = /^[a-z0-9_-]+$/.test(rawSlug) ? rawSlug : 'unknown';
        const homeDir = process.env.HOME || process.env.USERPROFILE || '/tmp';
        const projectWelcome = `${homeDir}/.gstack/projects/${slug}/designs/welcome-page-20260331/finalized.html`;
        if (fs.existsSync(projectWelcome)) return projectWelcome;
        // Fallback: built-in welcome page from the gstack install. Reject
        // SKILL_ROOT values containing '..' for the same defense-in-depth
        // reason as the GSTACK_SLUG regex above. Not exploitable today
        // (env set at install time), but the gate is one check.
        const rawSkillRoot = process.env.GSTACK_SKILL_ROOT || `${homeDir}/.claude/skills/gstack`;
        if (rawSkillRoot.includes('..')) return null;
        const builtinWelcome = `${rawSkillRoot}/browse/src/welcome.html`;
        if (fs.existsSync(builtinWelcome)) return builtinWelcome;
        return null;
      })();
      if (welcomePath) {
        try {
          const html = fs.readFileSync(welcomePath, 'utf-8');
          return new Response(html, { headers: { 'Content-Type': 'text/html; charset=utf-8' } });
        } catch (err: any) {
          console.error('[browse] Failed to read welcome page:', welcomePath, err.message);
        }
      }
      // No welcome page found — serve a simple fallback (avoids ERR_UNSAFE_REDIRECT on Windows)
      return new Response(
        `<!DOCTYPE html><html><head><title>GStack Browser</title>
<style>body{background:#111;color:#fff;font-family:system-ui;display:flex;align-items:center;justify-content:center;height:100vh;margin:0;}
.msg{text-align:center;opacity:.7;}.gold{color:#f5a623;font-size:2em;margin-bottom:12px;}</style></head>
<body><div class="msg"><div class="gold">◈</div><p>GStack Browser ready.</p><p style="font-size:.85em">Waiting for commands from Claude Code.</p></div></body></html>`,
        { status: 200, headers: { 'Content-Type': 'text/html; charset=utf-8' } }
      );
    }

    // Health check — no auth required; does NOT reset the idle timer
    if (url.pathname === '/health') {
      const healthy = await browserManager.isHealthy();
      return new Response(JSON.stringify({
        status: healthy ? 'healthy' : 'unhealthy',
        mode: browserManager.getConnectionMode(),
        uptime: Math.floor((Date.now() - startTime) / 1000),
        tabs: browserManager.getTabCount(),
        // Auth token for extension bootstrap. Previously served
        // unconditionally, but that leaks the token if the server is
        // tunneled to the internet (ngrok, SSH tunnel) — so gate it on
        // requests that can only originate locally. In headed mode the
        // server is always local, so return the token unconditionally
        // (fixes Playwright Chromium extensions that don't send an Origin header).
        ...(browserManager.getConnectionMode() === 'headed' ||
            req.headers.get('origin')?.startsWith('chrome-extension://')
          ? { token: AUTH_TOKEN } : {}),
        // The chat queue is gone — the Terminal pane is the sole sidebar
        // surface. Keep `chatEnabled: false` so any older extension
        // build still treats the chat input as disabled.
        chatEnabled: false,
        // Security module status — drives the shield icon in the sidepanel.
        // Returns {status: 'protected'|'degraded'|'inactive', layers: {...}}.
        // The chat-path classifier no longer feeds this since
        // sidebar-agent.ts was removed; only the page-content side
        // (canary, content-security) keeps reporting in.
        security: getSecurityStatus(),
        // Terminal-agent discovery. ONLY a port number — never a token.
        // Tokens flow via the /pty-session HttpOnly cookie path. See
        // `pty-session-cookie.ts` for the rationale (codex outside-voice
        // finding #2: don't reuse this endpoint for shell auth).
        terminalPort: readTerminalPort(),
      }), {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
      });
    }
|
|
|
|
  // ─── /pty-session — mint Terminal-tab WebSocket cookie ───────────
  //
  // The extension POSTs here with the bootstrap AUTH_TOKEN, gets back a
  // short-lived HttpOnly cookie scoped to the terminal-agent's /ws
  // upgrade. We push the cookie value to the agent over loopback so the
  // upgrade can validate it. The cookie travels automatically with the
  // browser's WebSocket upgrade because it's same-origin to the agent
  // when the daemon binds 127.0.0.1. NEVER added to TUNNEL_PATHS — the
  // tunnel surface 404s any /pty-session attempt by default-deny.
  if (url.pathname === '/pty-session' && req.method === 'POST') {
    if (!validateAuth(req)) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401, headers: { 'Content-Type': 'application/json' },
      });
    }
    const port = readTerminalPort();
    if (!port) {
      return new Response(JSON.stringify({
        error: 'terminal-agent not ready',
      }), { status: 503, headers: { 'Content-Type': 'application/json' } });
    }
    const minted = mintPtySessionToken();
    const granted = await grantPtyToken(minted.token);
    if (!granted) {
      revokePtySessionToken(minted.token);
      return new Response(JSON.stringify({
        error: 'failed to grant terminal session',
      }), { status: 503, headers: { 'Content-Type': 'application/json' } });
    }
    return new Response(JSON.stringify({
      terminalPort: port,
      // Returned in the JSON body so the extension can pass it to
      // `new WebSocket(url, [token])`. Browsers translate that to a
      // `Sec-WebSocket-Protocol` header — the only auth header we can
      // set from the browser WebSocket API. SameSite=Strict cookies
      // don't survive the port change between server.ts (34567) and
      // the agent (random port), and HttpOnly + cross-origin makes
      // the cookie path unreliable across browsers anyway.
      //
      // The token is short-lived (30 min, auto-revoked on WS close)
      // and never persisted to disk on the extension side. The
      // pre-existing AUTH_TOKEN leak via /health is a separate
      // concern (v1.1+ TODO).
      ptySessionToken: minted.token,
      expiresAt: minted.expiresAt,
    }), {
      status: 200,
      headers: {
        'Content-Type': 'application/json',
        // Set-Cookie is kept for non-browser callers / future use,
        // but the WS upgrade no longer depends on it.
        'Set-Cookie': buildPtySetCookie(minted.token),
      },
    });
  }

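  // Illustrative extension-side flow for /pty-session (a sketch, not part
  // of the daemon: endpoint shape and port 34567 come from the comments
  // above; variable names here are placeholders):
  //
  //   const res = await fetch('http://127.0.0.1:34567/pty-session', {
  //     method: 'POST',
  //     headers: { Authorization: `Bearer ${bootstrapToken}` },
  //   });
  //   const { terminalPort, ptySessionToken } = await res.json();
  //   // The subprotocol argument becomes the Sec-WebSocket-Protocol header:
  //   const ws = new WebSocket(`ws://127.0.0.1:${terminalPort}/ws`, [ptySessionToken]);
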
  // ─── /connect — setup key exchange for /pair-agent ceremony ────
  if (url.pathname === '/connect' && req.method === 'POST') {
    if (!checkConnectRateLimit()) {
      return new Response(JSON.stringify({
        error: 'Too many connection attempts. Wait 1 minute.',
      }), { status: 429, headers: { 'Content-Type': 'application/json' } });
    }
    try {
      const connectBody = await req.json() as { setup_key?: string };
      if (!connectBody.setup_key) {
        return new Response(JSON.stringify({ error: 'Missing setup_key' }), {
          status: 400, headers: { 'Content-Type': 'application/json' },
        });
      }
      const session = exchangeSetupKey(connectBody.setup_key);
      if (!session) {
        return new Response(JSON.stringify({
          error: 'Invalid, expired, or already-used setup key',
        }), { status: 401, headers: { 'Content-Type': 'application/json' } });
      }
      console.log(`[browse] Remote agent connected: ${session.clientId} (scopes: ${session.scopes.join(',')})`);
      return new Response(JSON.stringify({
        token: session.token,
        expires: session.expiresAt,
        scopes: session.scopes,
        agent: session.clientId,
      }), { status: 200, headers: { 'Content-Type': 'application/json' } });
    } catch {
      return new Response(JSON.stringify({ error: 'Invalid request body' }), {
        status: 400, headers: { 'Content-Type': 'application/json' },
      });
    }
  }

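  // Illustrative remote-agent side of the pairing ceremony (a sketch:
  // the setup key comes from the root-only POST /pair, and the tunnel
  // URL is a placeholder):
  //
  //   curl -X POST https://<tunnel-url>/connect \
  //     -H 'Content-Type: application/json' \
  //     -d '{"setup_key": "<key from /pair>"}'
  //
  //   // → { token, expires, scopes, agent }; the returned token is then
  //   //   sent as `Authorization: Bearer <token>` on /command and /batch.
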
  // ─── /token — mint scoped tokens (root-only) ──────────────────
  if (url.pathname === '/token' && req.method === 'POST') {
    if (!isRootRequest(req)) {
      return new Response(JSON.stringify({
        error: 'Only the root token can mint sub-tokens',
      }), { status: 403, headers: { 'Content-Type': 'application/json' } });
    }
    try {
      const tokenBody = await req.json() as any;
      if (!tokenBody.clientId) {
        return new Response(JSON.stringify({ error: 'Missing clientId' }), {
          status: 400, headers: { 'Content-Type': 'application/json' },
        });
      }
      const session = createToken({
        clientId: tokenBody.clientId,
        scopes: tokenBody.scopes,
        domains: tokenBody.domains,
        tabPolicy: tokenBody.tabPolicy,
        rateLimit: tokenBody.rateLimit,
        expiresSeconds: tokenBody.expiresSeconds,
      });
      return new Response(JSON.stringify({
        token: session.token,
        expires: session.expiresAt,
        scopes: session.scopes,
        agent: session.clientId,
      }), { status: 200, headers: { 'Content-Type': 'application/json' } });
    } catch {
      return new Response(JSON.stringify({ error: 'Invalid request body' }), {
        status: 400, headers: { 'Content-Type': 'application/json' },
      });
    }
  }

  // ─── /token/:clientId — revoke a scoped token (root-only) ─────
  if (url.pathname.startsWith('/token/') && req.method === 'DELETE') {
    if (!isRootRequest(req)) {
      return new Response(JSON.stringify({ error: 'Root token required' }), {
        status: 403, headers: { 'Content-Type': 'application/json' },
      });
    }
    const clientId = url.pathname.slice('/token/'.length);
    const revoked = revokeToken(clientId);
    if (!revoked) {
      return new Response(JSON.stringify({ error: `Agent "${clientId}" not found` }), {
        status: 404, headers: { 'Content-Type': 'application/json' },
      });
    }
    console.log(`[browse] Revoked token for: ${clientId}`);
    return new Response(JSON.stringify({ revoked: clientId }), {
      status: 200, headers: { 'Content-Type': 'application/json' },
    });
  }

  // ─── /agents — list connected agents (root-only) ──────────────
  if (url.pathname === '/agents' && req.method === 'GET') {
    if (!isRootRequest(req)) {
      return new Response(JSON.stringify({ error: 'Root token required' }), {
        status: 403, headers: { 'Content-Type': 'application/json' },
      });
    }
    const agents = listTokens().map(t => ({
      clientId: t.clientId,
      scopes: t.scopes,
      domains: t.domains,
      expiresAt: t.expiresAt,
      commandCount: t.commandCount,
      createdAt: t.createdAt,
    }));
    return new Response(JSON.stringify({ agents }), {
      status: 200, headers: { 'Content-Type': 'application/json' },
    });
  }

  // ─── /pair — create setup key for pair-agent ceremony (root-only) ───
  if (url.pathname === '/pair' && req.method === 'POST') {
    if (!isRootRequest(req)) {
      return new Response(JSON.stringify({ error: 'Root token required' }), {
        status: 403, headers: { 'Content-Type': 'application/json' },
      });
    }
    try {
      const pairBody = await req.json() as any;
      // Default: full access (read+write+admin+meta). The trust boundary is
      // the pairing ceremony itself, not the scope. --control adds browser-wide
      // destructive commands (stop, restart, disconnect). --restrict limits scope.
      const scopes = pairBody.control || pairBody.admin
        ? ['read', 'write', 'admin', 'meta', 'control'] as const
        : (pairBody.scopes || ['read', 'write', 'admin', 'meta']) as const;
      const setupKey = createSetupKey({
        clientId: pairBody.clientId,
        scopes: [...scopes],
        domains: pairBody.domains,
        rateLimit: pairBody.rateLimit,
      });
      // Verify tunnel is actually alive before reporting it (ngrok may have died externally).
      // Probe via GET /connect — under dual-listener /health is NOT on the tunnel allowlist,
      // so the old probe would return 404 and always mark the tunnel as dead.
      let verifiedTunnelUrl: string | null = null;
      if (tunnelActive && tunnelUrl) {
        try {
          const probe = await fetch(`${tunnelUrl}/connect`, {
            method: 'GET',
            headers: { 'ngrok-skip-browser-warning': 'true' },
            signal: AbortSignal.timeout(5000),
          });
          if (probe.ok) {
            verifiedTunnelUrl = tunnelUrl;
          } else {
            console.warn(`[browse] Tunnel probe failed (HTTP ${probe.status}), marking tunnel as dead`);
            await closeTunnel();
          }
        } catch {
          console.warn('[browse] Tunnel probe timed out or unreachable, marking tunnel as dead');
          await closeTunnel();
        }
      }
      return new Response(JSON.stringify({
        setup_key: setupKey.token,
        expires_at: setupKey.expiresAt,
        scopes: setupKey.scopes,
        tunnel_url: verifiedTunnelUrl,
        server_url: `http://127.0.0.1:${server?.port || 0}`,
      }), { status: 200, headers: { 'Content-Type': 'application/json' } });
    } catch {
      return new Response(JSON.stringify({ error: 'Invalid request body' }), {
        status: 400, headers: { 'Content-Type': 'application/json' },
      });
    }
  }

  // ─── /tunnel/start — start ngrok tunnel on demand (root-only) ──
  //
  // Dual-listener model: binds a SECOND Bun.serve listener on an
  // ephemeral 127.0.0.1 port dedicated to tunnel traffic, then points
  // ngrok.forward() at THAT port. The existing local listener (which
  // serves /health+token, /cookie-picker, /inspector/*, welcome, etc.)
  // is never exposed to ngrok.
  //
  // Hard fail if the tunnel listener bind fails — NEVER fall back to
  // the local port, which would silently defeat the whole security
  // property.
  if (url.pathname === '/tunnel/start' && req.method === 'POST') {
    if (!isRootRequest(req)) {
      return new Response(JSON.stringify({ error: 'Root token required' }), {
        status: 403, headers: { 'Content-Type': 'application/json' },
      });
    }
    if (tunnelActive && tunnelUrl && tunnelServer) {
      // Verify tunnel is still alive before returning cached URL.
      // Probe GET /connect (the only unauth-reachable path on the tunnel
      // surface); /health is NOT tunnel-reachable under dual-listener.
      try {
        const probe = await fetch(`${tunnelUrl}/connect`, {
          method: 'GET',
          headers: { 'ngrok-skip-browser-warning': 'true' },
          signal: AbortSignal.timeout(5000),
        });
        if (probe.ok) {
          return new Response(JSON.stringify({ url: tunnelUrl, already_active: true }), {
            status: 200, headers: { 'Content-Type': 'application/json' },
          });
        }
      } catch {}
      // Tunnel is dead — tear down cleanly before restarting
      console.warn('[browse] Cached tunnel is dead, restarting...');
      await closeTunnel();
    }

    // 1) Resolve ngrok authtoken from env / .gstack / native config
    const authtoken = resolveNgrokAuthtoken();
    if (!authtoken) {
      return new Response(JSON.stringify({
        error: 'No ngrok authtoken found',
        hint: 'Run: ngrok config add-authtoken YOUR_TOKEN',
      }), { status: 400, headers: { 'Content-Type': 'application/json' } });
    }

    // 2) Bind the tunnel listener on an ephemeral port. HARD FAIL if
    // this errors — never fall back to the local port.
    let boundTunnel: ReturnType<typeof Bun.serve>;
    try {
      boundTunnel = Bun.serve({
        port: 0,
        hostname: '127.0.0.1',
        fetch: makeFetchHandler('tunnel'),
      });
    } catch (err: any) {
      return new Response(JSON.stringify({
        error: `Failed to bind tunnel listener: ${err.message}`,
      }), { status: 500, headers: { 'Content-Type': 'application/json' } });
    }
    const tunnelPort = boundTunnel.port;

    // 3) Point ngrok at the TUNNEL port (not the local port). If this
    // fails, tear the listener back down so we don't leak sockets.
    try {
      const ngrok = await import('@ngrok/ngrok');
      const domain = process.env.NGROK_DOMAIN;
      const forwardOpts: any = { addr: tunnelPort, authtoken };
      if (domain) forwardOpts.domain = domain;

      tunnelListener = await ngrok.forward(forwardOpts);
      tunnelUrl = tunnelListener.url();
      tunnelServer = boundTunnel;
      tunnelActive = true;
      console.log(`[browse] Tunnel listener bound on 127.0.0.1:${tunnelPort}, ngrok → ${tunnelUrl}`);

      // Update state file
      const stateContent = JSON.parse(fs.readFileSync(config.stateFile, 'utf-8'));
      stateContent.tunnel = { url: tunnelUrl, domain: domain || null, startedAt: new Date().toISOString() };
      const tmpState = config.stateFile + '.tmp';
      fs.writeFileSync(tmpState, JSON.stringify(stateContent, null, 2), { mode: 0o600 });
      fs.renameSync(tmpState, config.stateFile);

      return new Response(JSON.stringify({ url: tunnelUrl }), {
        status: 200, headers: { 'Content-Type': 'application/json' },
      });
    } catch (err: any) {
      // Clean up BOTH ngrok and the Bun listener on failure. If
      // ngrok.forward() succeeded but tunnelListener.url() or the
      // state-file write threw, we'd otherwise leak an active ngrok
      // session on the user's account.
      try { if (tunnelListener) await tunnelListener.close(); } catch {}
      try { boundTunnel.stop(true); } catch {}
      tunnelListener = null;
      return new Response(JSON.stringify({
        error: `Failed to open ngrok tunnel: ${err.message}`,
      }), { status: 500, headers: { 'Content-Type': 'application/json' } });
    }
  }

  // ─── SSE session cookie mint (auth required) ──────────────────
  //
  // Issues a short-lived view-only token in an HttpOnly SameSite=Strict
  // cookie so EventSource calls can authenticate without putting the
  // root token in a URL. The returned cookie is valid ONLY on the SSE
  // endpoints (/activity/stream, /inspector/events); it is not a
  // scoped token and cannot be used against /command.
  //
  // The extension calls this once at bootstrap with the root Bearer
  // header, then opens EventSource with `withCredentials: true` which
  // sends the cookie back automatically.
  if (url.pathname === '/sse-session' && req.method === 'POST') {
    if (!validateAuth(req)) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    const minted = mintSseSessionToken();
    return new Response(JSON.stringify({
      expiresAt: minted.expiresAt,
      cookie: SSE_COOKIE_NAME,
    }), {
      status: 200,
      headers: {
        'Content-Type': 'application/json',
        'Set-Cookie': buildSseSetCookie(minted.token),
      },
    });
  }

  // Refs endpoint — auth required, does NOT reset idle timer
  if (url.pathname === '/refs') {
    if (!validateAuth(req)) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    const refs = browserManager.getRefMap();
    return new Response(JSON.stringify({
      refs,
      url: browserManager.getCurrentUrl(),
      mode: browserManager.getConnectionMode(),
    }), {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  // Activity stream — SSE, auth required, does NOT reset idle timer
  if (url.pathname === '/activity/stream') {
    // Auth: Bearer header OR view-only SSE session cookie (EventSource
    // can't send Authorization headers, so the extension fetches a cookie
    // via POST /sse-session first, then opens EventSource with
    // withCredentials: true). The ?token= query param is NO LONGER
    // accepted — URLs leak to logs/referer/history. See N1 in the
    // v1.6.0.0 security wave plan.
    const cookieToken = extractSseCookie(req);
    if (!validateAuth(req) && !validateSseSessionToken(cookieToken)) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    const afterId = parseInt(url.searchParams.get('after') || '0', 10);
    const encoder = new TextEncoder();

    const stream = new ReadableStream({
      start(controller) {
        // 1. Gap detection + replay
        const { entries, gap, gapFrom, availableFrom } = getActivityAfter(afterId);
        if (gap) {
          controller.enqueue(encoder.encode(`event: gap\ndata: ${JSON.stringify({ gapFrom, availableFrom })}\n\n`));
        }
        for (const entry of entries) {
          controller.enqueue(encoder.encode(`event: activity\ndata: ${JSON.stringify(entry)}\n\n`));
        }

        // 2. Subscribe for live events
        const unsubscribe = subscribe((entry) => {
          try {
            controller.enqueue(encoder.encode(`event: activity\ndata: ${JSON.stringify(entry)}\n\n`));
          } catch (err: any) {
            console.debug('[browse] Activity SSE stream error, unsubscribing:', err.message);
            unsubscribe();
          }
        });

        // 3. Heartbeat every 15s
        const heartbeat = setInterval(() => {
          try {
            controller.enqueue(encoder.encode(`: heartbeat\n\n`));
          } catch (err: any) {
            console.debug('[browse] Activity SSE heartbeat failed:', err.message);
            clearInterval(heartbeat);
            unsubscribe();
          }
        }, 15000);

        // 4. Cleanup on disconnect
        req.signal.addEventListener('abort', () => {
          clearInterval(heartbeat);
          unsubscribe();
          try { controller.close(); } catch {
            // Expected: stream already closed
          }
        });
      },
    });

    return new Response(stream, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      },
    });
  }

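  // Illustrative extension-side consumption (a sketch; the handler names
  // `renderEntry` and `resyncFrom` are placeholders):
  //
  //   await fetch('/sse-session', { method: 'POST',
  //     headers: { Authorization: `Bearer ${rootToken}` } });
  //   const es = new EventSource('/activity/stream?after=0',
  //     { withCredentials: true });  // sends the HttpOnly SSE cookie back
  //   es.addEventListener('activity', (e) => renderEntry(JSON.parse(e.data)));
  //   es.addEventListener('gap', (e) => resyncFrom(JSON.parse(e.data).availableFrom));
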
  // Activity history — REST, auth required, does NOT reset idle timer
  if (url.pathname === '/activity/history') {
    if (!validateAuth(req)) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    const limit = parseInt(url.searchParams.get('limit') || '50', 10);
    const { entries, totalAdded } = getActivityHistory(limit);
    return new Response(JSON.stringify({ entries, totalAdded, subscribers: getSubscriberCount() }), {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  // ─── Sidebar chat endpoints ripped ──────────────────────────────
  // /sidebar-tabs, /sidebar-tabs/switch, /sidebar-chat[/clear],
  // /sidebar-command, /sidebar-agent/{event,kill,stop},
  // /sidebar-queue/dismiss, /sidebar-session{,/new,/list} all lived
  // here. They drove the one-shot claude -p chat queue. Replaced by
  // the interactive PTY in terminal-agent.ts; the queue + browser-tab
  // multiplexing are no longer needed.

  // ─── Batch endpoint — N commands, 1 HTTP round-trip ─────────────
  // Accepts both root AND scoped tokens (same as /command).
  // Executes commands sequentially through the full security pipeline.
  // Designed for remote agents where tunnel latency dominates.
  if (url.pathname === '/batch' && req.method === 'POST') {
    const tokenInfo = getTokenInfo(req);
    if (!tokenInfo) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    resetIdleTimer();
    const body = await req.json();
    const { commands } = body;

    if (!Array.isArray(commands) || commands.length === 0) {
      return new Response(JSON.stringify({ error: '"commands" must be a non-empty array' }), {
        status: 400,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    if (commands.length > 50) {
      return new Response(JSON.stringify({ error: 'Max 50 commands per batch' }), {
        status: 400,
        headers: { 'Content-Type': 'application/json' },
      });
    }

    const startTime = Date.now();
    emitActivity({
      type: 'command_start',
      command: 'batch',
      args: [`${commands.length} commands`],
      url: browserManager.getCurrentUrl(),
      tabs: browserManager.getTabCount(),
      mode: browserManager.getConnectionMode(),
      clientId: tokenInfo?.clientId,
    });

    const results: Array<{ index: number; status: number; result: string; command: string; tabId?: number }> = [];
    for (let i = 0; i < commands.length; i++) {
      const cmd = commands[i];
      if (!cmd || typeof cmd.command !== 'string') {
        results.push({ index: i, status: 400, result: JSON.stringify({ error: 'Missing "command" field' }), command: '' });
        continue;
      }
      // Reject nested batches
      if (cmd.command === 'batch') {
        results.push({ index: i, status: 400, result: JSON.stringify({ error: 'Nested batch commands are not allowed' }), command: 'batch' });
        continue;
      }
      const cr = await handleCommandInternal(
        { command: cmd.command, args: cmd.args, tabId: cmd.tabId },
        tokenInfo,
        { skipRateCheck: true, skipActivity: true },
      );
      results.push({
        index: i,
        status: cr.status,
        result: cr.result,
        command: cmd.command,
        tabId: cmd.tabId,
      });
    }

    const duration = Date.now() - startTime;
    emitActivity({
      type: 'command_end',
      command: 'batch',
      args: [`${commands.length} commands`],
      url: browserManager.getCurrentUrl(),
      duration,
      status: 'ok',
      result: `${results.filter(r => r.status === 200).length}/${commands.length} succeeded`,
      tabs: browserManager.getTabCount(),
      mode: browserManager.getConnectionMode(),
      clientId: tokenInfo?.clientId,
    });

    return new Response(JSON.stringify({
      results,
      duration,
      total: commands.length,
      succeeded: results.filter(r => r.status === 200).length,
      failed: results.filter(r => r.status !== 200).length,
    }), {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
    });
  }

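  // Illustrative request/response shape for /batch (a sketch; the command
  // names below are placeholders, not a statement of the supported set):
  //
  //   POST /batch
  //   { "commands": [ { "command": "navigate", "args": ["https://example.com"] },
  //                   { "command": "screenshot", "tabId": 1 } ] }
  //
  //   // → { results: [{ index, status, result, command, tabId? }, ...],
  //   //     duration, total, succeeded, failed }
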
  // ─── File serving endpoint (for remote agents to retrieve downloaded files) ────
  if (url.pathname === '/file' && req.method === 'GET') {
    const tokenInfo = getTokenInfo(req);
    if (!tokenInfo) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401, headers: { 'Content-Type': 'application/json' },
      });
    }
    const filePath = url.searchParams.get('path');
    if (!filePath) {
      return new Response(JSON.stringify({ error: 'Missing "path" query parameter' }), {
        status: 400, headers: { 'Content-Type': 'application/json' },
      });
    }
    try {
      validateTempPath(filePath);
    } catch (err: any) {
      return new Response(JSON.stringify({ error: err.message }), {
        status: 403, headers: { 'Content-Type': 'application/json' },
      });
    }
    if (!fs.existsSync(filePath)) {
      return new Response(JSON.stringify({ error: 'File not found' }), {
        status: 404, headers: { 'Content-Type': 'application/json' },
      });
    }
    const stat = fs.statSync(filePath);
    if (stat.size > 200 * 1024 * 1024) {
      return new Response(JSON.stringify({ error: 'File too large (max 200MB)' }), {
        status: 413, headers: { 'Content-Type': 'application/json' },
      });
    }
    const ext = path.extname(filePath).toLowerCase();
    const MIME_MAP: Record<string, string> = {
      '.png': 'image/png', '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',
      '.gif': 'image/gif', '.webp': 'image/webp', '.svg': 'image/svg+xml',
      '.avif': 'image/avif',
      '.mp4': 'video/mp4', '.webm': 'video/webm', '.mov': 'video/quicktime',
      '.mp3': 'audio/mpeg', '.wav': 'audio/wav', '.ogg': 'audio/ogg',
      '.pdf': 'application/pdf', '.json': 'application/json',
      '.html': 'text/html', '.txt': 'text/plain', '.mhtml': 'message/rfc822',
    };
    const contentType = MIME_MAP[ext] || 'application/octet-stream';
    resetIdleTimer();
    return new Response(Bun.file(filePath), {
      headers: {
        'Content-Type': contentType,
        'Content-Length': String(stat.size),
        'Content-Disposition': `inline; filename="${path.basename(filePath)}"`,
        'Cache-Control': 'no-cache',
      },
    });
  }

  // ─── Command endpoint (accepts both root AND scoped tokens) ────
  // Must be checked BEFORE the blanket root-only auth gate below,
  // because scoped tokens from /connect are valid for /command.
  if (url.pathname === '/command' && req.method === 'POST') {
    const tokenInfo = getTokenInfo(req);
    if (!tokenInfo) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401,
        headers: { 'Content-Type': 'application/json' },
      });
    }
    resetIdleTimer();
    const body = await req.json() as any;
    // Tunnel surface: only commands in TUNNEL_COMMANDS are allowed.
    // Paired remote agents drive the browser but cannot configure the
    // daemon, launch new browsers, import cookies, or rotate tokens.
    if (surface === 'tunnel') {
      if (!canDispatchOverTunnel(body?.command)) {
        logTunnelDenial(req, url, `disallowed_command:${body?.command}`);
        return new Response(JSON.stringify({
          error: `Command '${body?.command}' is not allowed over the tunnel surface`,
          hint: `Tunnel commands: ${[...TUNNEL_COMMANDS].sort().join(', ')}`,
        }), { status: 403, headers: { 'Content-Type': 'application/json' } });
      }
    }
    return handleCommand(body, tokenInfo);
  }

  // ─── Auth-required endpoints (root token only) ─────────────────

  if (!validateAuth(req)) {
    return new Response(JSON.stringify({ error: 'Unauthorized' }), {
      status: 401,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  // ─── Inspector endpoints ──────────────────────────────────────

  // POST /inspector/pick — receive element pick from extension, run CDP inspection
  if (url.pathname === '/inspector/pick' && req.method === 'POST') {
    const body = await req.json();
    const { selector, activeTabUrl } = body;
    if (!selector) {
      return new Response(JSON.stringify({ error: 'Missing selector' }), {
        status: 400, headers: { 'Content-Type': 'application/json' },
      });
    }
    try {
      const page = browserManager.getPage();
      const result = await inspectElement(page, selector);
      inspectorData = result;
      inspectorTimestamp = Date.now();
      // Also store on browserManager for CLI access
      (browserManager as any)._inspectorData = result;
      (browserManager as any)._inspectorTimestamp = inspectorTimestamp;
      emitInspectorEvent({ type: 'pick', selector, timestamp: inspectorTimestamp });
      return new Response(JSON.stringify(result), {
        status: 200, headers: { 'Content-Type': 'application/json' },
      });
    } catch (err: any) {
      return new Response(JSON.stringify({ error: err.message }), {
        status: 500, headers: { 'Content-Type': 'application/json' },
      });
    }
  }

  // GET /inspector — return latest inspector data
  if (url.pathname === '/inspector' && req.method === 'GET') {
    if (!inspectorData) {
      return new Response(JSON.stringify({ data: null }), {
        status: 200, headers: { 'Content-Type': 'application/json' },
      });
    }
    const stale = inspectorTimestamp > 0 && (Date.now() - inspectorTimestamp > 60000);
    return new Response(JSON.stringify({ data: inspectorData, timestamp: inspectorTimestamp, stale }), {
      status: 200, headers: { 'Content-Type': 'application/json' },
    });
  }

  // POST /inspector/apply — apply a CSS modification
  if (url.pathname === '/inspector/apply' && req.method === 'POST') {
    const body = await req.json();
    const { selector, property, value } = body;
    if (!selector || !property || value === undefined) {
      return new Response(JSON.stringify({ error: 'Missing selector, property, or value' }), {
        status: 400, headers: { 'Content-Type': 'application/json' },
      });
    }
    try {
      const page = browserManager.getPage();
      const mod = await modifyStyle(page, selector, property, value);
      emitInspectorEvent({ type: 'apply', modification: mod, timestamp: Date.now() });
      return new Response(JSON.stringify(mod), {
        status: 200, headers: { 'Content-Type': 'application/json' },
      });
    } catch (err: any) {
      return new Response(JSON.stringify({ error: err.message }), {
        status: 500, headers: { 'Content-Type': 'application/json' },
      });
    }
  }

  // POST /inspector/reset — clear all modifications
  if (url.pathname === '/inspector/reset' && req.method === 'POST') {
    try {
      const page = browserManager.getPage();
      await resetModifications(page);
      emitInspectorEvent({ type: 'reset', timestamp: Date.now() });
      return new Response(JSON.stringify({ ok: true }), {
        status: 200, headers: { 'Content-Type': 'application/json' },
      });
    } catch (err: any) {
      return new Response(JSON.stringify({ error: err.message }), {
        status: 500, headers: { 'Content-Type': 'application/json' },
      });
    }
  }

  // GET /inspector/history — return modification list
  if (url.pathname === '/inspector/history' && req.method === 'GET') {
    return new Response(JSON.stringify({ history: getModificationHistory() }), {
      status: 200, headers: { 'Content-Type': 'application/json' },
    });
  }

  // GET /inspector/events — SSE for inspector state changes (auth required)
  if (url.pathname === '/inspector/events' && req.method === 'GET') {
    // Same auth model as /activity/stream: Bearer OR view-only cookie.
    // ?token= query param dropped (see N1 in the v1.6.0.0 security plan).
    const cookieToken = extractSseCookie(req);
    if (!validateAuth(req) && !validateSseSessionToken(cookieToken)) {
      return new Response(JSON.stringify({ error: 'Unauthorized' }), {
        status: 401, headers: { 'Content-Type': 'application/json' },
      });
    }
    const encoder = new TextEncoder();
    const stream = new ReadableStream({
      start(controller) {
        // Send current state immediately
        if (inspectorData) {
          controller.enqueue(encoder.encode(
            `event: state\ndata: ${JSON.stringify({ data: inspectorData, timestamp: inspectorTimestamp })}\n\n`
          ));
        }

        // Subscribe for live events
        const notify: InspectorSubscriber = (event) => {
          try {
            controller.enqueue(encoder.encode(
              `event: inspector\ndata: ${JSON.stringify(event)}\n\n`
            ));
          } catch (err: any) {
            console.debug('[browse] Inspector SSE stream error:', err.message);
            inspectorSubscribers.delete(notify);
          }
        };
        inspectorSubscribers.add(notify);

        // Heartbeat every 15s
        const heartbeat = setInterval(() => {
          try {
            controller.enqueue(encoder.encode(`: heartbeat\n\n`));
          } catch (err: any) {
            console.debug('[browse] Inspector SSE heartbeat failed:', err.message);
            clearInterval(heartbeat);
            inspectorSubscribers.delete(notify);
          }
        }, 15000);

        // Cleanup on disconnect
        req.signal.addEventListener('abort', () => {
          clearInterval(heartbeat);
          inspectorSubscribers.delete(notify);
          try { controller.close(); } catch {
            // Expected: stream already closed
          }
        });
      },
    });

    return new Response(stream, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      },
    });
  }

  return new Response('Not found', { status: 404 });
};
// ─── End of makeFetchHandler ────────────────────────────────────

const server = Bun.serve({
  port,
  hostname: '127.0.0.1',
  fetch: makeFetchHandler('local'),
});

// Write state file (atomic: write .tmp then rename)
const state: Record<string, unknown> = {
  pid: process.pid,
  port,
  token: AUTH_TOKEN,
  startedAt: new Date().toISOString(),
  serverPath: path.resolve(import.meta.dir, 'server.ts'),
  binaryVersion: readVersionHash() || undefined,
  mode: browserManager.getConnectionMode(),
};
const tmpFile = config.stateFile + '.tmp';
fs.writeFileSync(tmpFile, JSON.stringify(state, null, 2), { mode: 0o600 });
fs.renameSync(tmpFile, config.stateFile);

browserManager.serverPort = port;

// Navigate to welcome page if in headed mode and still on about:blank
if (browserManager.getConnectionMode() === 'headed') {
  try {
    const currentUrl = browserManager.getCurrentUrl();
    if (currentUrl === 'about:blank' || currentUrl === '') {
      const page = browserManager.getPage();
      page.goto(`http://127.0.0.1:${port}/welcome`, { timeout: 3000 }).catch((err: any) => {
        console.warn('[browse] Failed to navigate to welcome page:', err.message);
      });
    }
  } catch (err: any) {
    console.warn('[browse] Welcome page navigation setup failed:', err.message);
  }
}

// Clean up stale state files (older than 7 days)
try {
  const stateDir = path.join(config.stateDir, 'browse-states');
  if (fs.existsSync(stateDir)) {
    const SEVEN_DAYS = 7 * 24 * 60 * 60 * 1000;
    for (const file of fs.readdirSync(stateDir)) {
      const filePath = path.join(stateDir, file);
      const stat = fs.statSync(filePath);
      if (Date.now() - stat.mtimeMs > SEVEN_DAYS) {
        fs.unlinkSync(filePath);
        console.log(`[browse] Deleted stale state file: ${file}`);
      }
    }
  }
} catch (err: any) {
  console.warn('[browse] Failed to clean stale state files:', err.message);
}

console.log(`[browse] Server running on http://127.0.0.1:${port} (PID: ${process.pid})`);
|
|
console.log(`[browse] State file: ${config.stateFile}`);
|
|
console.log(`[browse] Idle timeout: ${IDLE_TIMEOUT_MS / 1000}s`);
|
|
|
|
  // initSidebarSession() was ripped out alongside the chat queue (it loaded
  // chat.jsonl into memory and started the agent-health watchdog — both
  // functions are gone). The Terminal pane manages its own state directly
  // via terminal-agent.ts.

  // ─── Tunnel startup (optional) ────────────────────────────────
  // Start ngrok tunnel if BROWSE_TUNNEL=1 is set. Uses the dual-listener
  // pattern: bind a dedicated tunnel listener on an ephemeral port and
  // point ngrok.forward() at IT, not the local daemon port.
  if (process.env.BROWSE_TUNNEL === '1') {
    const authtoken = resolveNgrokAuthtoken();
    if (!authtoken) {
      console.error('[browse] BROWSE_TUNNEL=1 but no NGROK_AUTHTOKEN found. Set it via env var or ~/.gstack/ngrok.env');
    } else {
      let boundTunnel: ReturnType<typeof Bun.serve> | null = null;
      try {
        boundTunnel = Bun.serve({
          port: 0,
          hostname: '127.0.0.1',
          fetch: makeFetchHandler('tunnel'),
        });
        const tunnelPort = boundTunnel.port;

        const ngrok = await import('@ngrok/ngrok');
        const domain = process.env.NGROK_DOMAIN;
        const forwardOpts: any = { addr: tunnelPort, authtoken };
        if (domain) forwardOpts.domain = domain;

        tunnelListener = await ngrok.forward(forwardOpts);
        tunnelUrl = tunnelListener.url();
        tunnelServer = boundTunnel;
        tunnelActive = true;

        console.log(`[browse] Tunnel listener bound on 127.0.0.1:${tunnelPort}, ngrok → ${tunnelUrl}`);

        // Update state file with tunnel URL
        const stateContent = JSON.parse(fs.readFileSync(config.stateFile, 'utf-8'));
        stateContent.tunnel = { url: tunnelUrl, domain: domain || null, startedAt: new Date().toISOString() };
        const tmpState = config.stateFile + '.tmp';
        fs.writeFileSync(tmpState, JSON.stringify(stateContent, null, 2), { mode: 0o600 });
        fs.renameSync(tmpState, config.stateFile);
      } catch (err: any) {
        console.error(`[browse] Failed to start tunnel: ${err.message}`);
        // Same cleanup as /tunnel/start's error path: tear down BOTH
        // ngrok and the Bun listener so we don't leak an ngrok session
        // if the error happened after ngrok.forward() resolved.
        try { if (tunnelListener) await tunnelListener.close(); } catch {}
        try { if (boundTunnel) boundTunnel.stop(true); } catch {}
        tunnelListener = null;
      }
    }
  } else if (process.env.BROWSE_TUNNEL_LOCAL_ONLY === '1') {
    // Test-only: bind the dual-listener tunnel surface on 127.0.0.1 with NO
    // ngrok forwarding. Lets paid evals exercise the surface === 'tunnel' gate
    // without an ngrok authtoken or live network. Production tunneling still
    // requires BROWSE_TUNNEL=1 + a valid authtoken above.
    try {
      const boundTunnel = Bun.serve({
        port: 0,
        hostname: '127.0.0.1',
        fetch: makeFetchHandler('tunnel'),
      });
      tunnelServer = boundTunnel;
      tunnelActive = true;
      const tunnelPort = boundTunnel.port;
      console.log(`[browse] Tunnel listener bound (local-only test mode) on 127.0.0.1:${tunnelPort}`);
      const stateContent = JSON.parse(fs.readFileSync(config.stateFile, 'utf-8'));
      stateContent.tunnelLocalPort = tunnelPort;
      const tmpState = config.stateFile + '.tmp';
      fs.writeFileSync(tmpState, JSON.stringify(stateContent, null, 2), { mode: 0o600 });
      fs.renameSync(tmpState, config.stateFile);
    } catch (err: any) {
      console.error(`[browse] BROWSE_TUNNEL_LOCAL_ONLY=1 listener bind failed: ${err.message}`);
    }
  }
}

start().catch((err) => {
  console.error(`[browse] Failed to start: ${err.message}`);
  // Write error to disk for the CLI to read — on Windows, the CLI can't capture
  // stderr because the server is launched with detached: true, stdio: 'ignore'.
  try {
    const errorLogPath = path.join(config.stateDir, 'browse-startup-error.log');
    fs.mkdirSync(config.stateDir, { recursive: true, mode: 0o700 });
    fs.writeFileSync(errorLogPath, `${new Date().toISOString()} ${err.message}\n${err.stack || ''}\n`, { mode: 0o600 });
  } catch {
    // stateDir may not exist — nothing more we can do
  }
  process.exit(1);
});