Garry Tan 5d4fe7df07 v1.31.0.0 fix: delete AskUserQuestion fallback (root cause of forever war) + harness primitives (#1390)
* test: add multi-finding batching regression test (periodic tier)

Adds a periodic-tier E2E that catches the May 2026 transcript bug shape
the existing single-finding gate-tier floor test cannot detect: a model
that fires one AskUserQuestion and then batches the remaining findings
into a single "## Decisions to confirm" plan write + ExitPlanMode.

Why a separate test from skill-e2e-plan-eng-finding-floor: the gate-tier
floor (runPlanSkillFloorCheck) exits on the first AUQ render and returns
success, so a once-then-batch model would pass it trivially. This test
uses runPlanSkillCounting at periodic tier with N-AUQ tracking and
asserts >= 3 distinct review-phase AUQs on a 4-finding seeded plan.
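
In test shape, the new assertion reduces to roughly the sketch below.
Only the helper and fixture names come from this commit; the option
and result field names are assumptions:

  import { test, expect } from "bun:test";
  import { runPlanSkillCounting } from "./helpers/claude-pty-runner";
  import { FORCING_BATCHING_ENG } from "./fixtures/forcing-finding-seeds";

  test("plan-eng review does not batch findings into one plan write", async () => {
    // Seed a 4-finding plan, then count distinct review-phase AUQs.
    const result = await runPlanSkillCounting({
      skill: "/plan-eng-review",
      initialPlanContent: FORCING_BATCHING_ENG,
    });
    // A once-then-batch model fires 1 AUQ; a compliant model fires ~1 per finding.
    expect(result.reviewPhaseAUQCount).toBeGreaterThanOrEqual(3);
  });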

- test/fixtures/forcing-finding-seeds.ts: FORCING_BATCHING_ENG fixture
  (4 distinct non-trivial findings spread across Architecture, Code
  Quality, Tests, Performance — mirrors the D1-D4 transcript shape)
- test/skill-e2e-plan-eng-multi-finding-batching.test.ts: new test
- test/helpers/touchfiles.ts: registered in BOTH E2E_TOUCHFILES and
  E2E_TIERS (touchfiles.test.ts asserts exact equality)

The test fails on today's baseline because the current model uses the
preamble fallback to batch findings; it passes once the architectural
fix lands in a follow-up commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: expand plan-mode pass envelopes to accept BLOCKED path

Three existing plan-mode regression tests previously codified the
preamble fallback as a valid PASS path under --disallowedTools
AskUserQuestion: outcome=plan_ready was accepted only when the model
wrote a "## Decisions to confirm" section. The forever-war fix deletes
that fallback, so this assertion would fail post-deletion.

Expanded envelope accepts EITHER:
- 'plan_ready' WITH (## Decisions section [legacy] OR BLOCKED string
  visible in TTY [post-fix])
- 'exited' WITH BLOCKED string visible in TTY [post-fix]

The legacy ## Decisions branch stays in the envelope so these tests
keep passing on today's code (where the fallback still exists) and
on tomorrow's code (where the model reports BLOCKED instead). Once
the deletion has been on main long enough that the cache flushes,
the legacy branch can be removed in a follow-up.

Failure signals (regression we DO want to catch) unchanged:
auto_decided / silent_write / timeout / exited-without-BLOCKED /
plan_ready-without-(decisions OR BLOCKED).
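
In sketch form, the expanded envelope is roughly this predicate
(variable names are illustrative):

  const surfaceVisible =
    decisionsSectionVisible /* legacy */ || blockedStringVisible /* post-fix */;
  const pass =
    (outcome === "plan_ready" && surfaceVisible) ||
    (outcome === "exited" && blockedStringVisible);
  // Everything else stays a failure, exactly as listed above.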

- test/skill-e2e-plan-ceo-plan-mode.test.ts (test 2 only)
- test/skill-e2e-autoplan-auto-mode.test.ts
- test/skill-e2e-plan-design-plan-mode.test.ts

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: delete AskUserQuestion fallback (root cause of forever war)

The /plan-eng-review skill failed to fire AskUserQuestion on a real
plan review and surfaced 4 calibration decisions via prose instead.
Investigation traced this to a "fallback when neither variant is
callable" clause in the preamble that the model treats as a general
escape hatch from "fanning out round-trip AUQs," even when an AUQ
variant IS callable. Codex review confirmed the fallback exists in
8 inline sites, with 2 surviving escape hatches the original
narrowing missed: a "genuinely trivial" exception duplicated across
all 4 plan-* templates, and an "outside plan mode, output as prose
and stop" branch in the preamble itself.

Net deletion in skill text. Closes both branches of the deleted
fallback (plan-file write AND prose-and-stop) and the trivial-fix
exception with a single hard rule:

  If no AskUserQuestion variant appears in your tool list, this
  skill is BLOCKED. Stop, report `BLOCKED — AskUserQuestion
  unavailable`, and wait for the user.

Honest caveat: this is a model directive, not a runtime guard — none
of the PTY harness helpers enforce BLOCKED today. The architectural
improvement is that the model now has fewer alternatives to weigh
against obeying it. Runtime enforcement is a follow-up TODO.

Sources changed:
- scripts/resolvers/preamble/generate-ask-user-format.ts: delete both
  fallback branches; replace with 1-line BLOCKED rule
- scripts/resolvers/preamble/generate-completion-status.ts: delete
  fallback in generatePlanModeInfo
- plan-eng-review/SKILL.md.tmpl: delete fallback at Step 0 + Sections
  1-4 (5 instances) + delete trivial-fix exception
- office-hours/SKILL.md.tmpl: delete fallback in approach-selection
- plan-ceo-review/SKILL.md.tmpl: delete trivial-fix exception
- plan-design-review/SKILL.md.tmpl: delete trivial-fix exception
- plan-devex-review/SKILL.md.tmpl: delete trivial-fix exception

Generated SKILL.md regen lands in a follow-up commit per the bisect
convention (template changes separate from regenerated output).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: regenerate SKILL.md after fallback deletion

Regenerates all 47 generated SKILL.md files (default + 7 host adapters)
after the template/resolver edits in the prior commit. Pure mechanical
output of `bun run gen:skill-docs`; no hand-edits.

Verifies fallback deletion landed across the entire skill surface:
- zero hits for "Decisions to confirm" in canonical SKILL.md / .tmpl
- zero hits for "no AskUserQuestion variant is callable"
- zero hits for "genuinely trivial"
- BLOCKED rule present in 42 generated SKILL.md (every Tier-2+ skill)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(harness): detect prose-rendered AskUserQuestion in plan mode

When --disallowedTools AskUserQuestion is set and no MCP variant is
callable, the model surfaces decisions as visible prose options
("A) ... B) ... C) ..." or "1. ... 2. ... 3. ...") rather than via the
native numbered-prompt UI. isNumberedOptionListVisible doesn't catch
these because the ❯ cursor sits on the empty input prompt rather than
on option 1, so runPlanSkillObservation and runPlanSkillFloorCheck
would time out at 5-10 minutes per test even though the model was
correctly waiting for user input.

This was exposed by the v1.28 fallback deletion: pre-deletion the
model used the preamble fallback to silently auto-resolve to
plan_ready in this scenario. Post-deletion the model correctly
surfaces the question and waits, but the harness couldn't tell.

isProseAUQVisible matches:
  - 2+ distinct lettered options at line starts (A/B/C/D form)
  - 3+ distinct numbered options at line starts WITHOUT a `❯ 1.`
    cursor (so it doesn't double-fire on native numbered prompts)

Wired into:
  - classifyVisible (used by runPlanSkillObservation) → returns
    outcome='asked' instead of timeout
  - runPlanSkillFloorCheck → counts as auq_observed (floor met)

8 new unit tests in claude-pty-runner.unit.test.ts cover the lettered
shape, numbered shape, threshold edges, native-cursor exclusion, and
mid-prose false-positive guard.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(harness): LLM judge for waiting-vs-working PTY state + snapshot logs

Regex detectors (isNumberedOptionListVisible, isProseAUQVisible) are
fast and free, but PTY rendering quirks fragment prose AUQ option
lists across logical lines that no regex can reliably reassemble.
When detection misses, polling loops time out at the full budget
even though the model is correctly waiting for user input.

Adds judgePtyState — a Haiku-graded trichotomy classifier:
  - waiting: agent surfaced a question/options, sitting at input prompt
  - working: spinner / tool calls / generation in progress
  - hung:    stopped without surfacing anything (rare crash signal)

Wired as a fallback into the polling loops of runPlanSkillObservation
and runPlanSkillFloorCheck: after 60s with no regex hit, snapshot the
TTY every 30s and call the judge. On 'waiting' verdict, return
outcome=asked / auq_observed early. On 'working' or 'hung', enrich the
eventual timeout summary with the verdict so failures are diagnosable.

Implementation:
  - Spawns `claude -p --model claude-haiku-4-5 --max-turns 1` synchronously
    with prompt piped via stdin (subscription auth, no API key env required)
  - In-process cache keyed by SHA-1 of the normalized last 4KB, so
    identical spinner-frame snapshots don't incur a repeat paid call
  - Best-effort JSONL log to ~/.gstack/analytics/pty-judge.jsonl with
    timestamp, testName, state, reasoning, hash, judge wall time
  - 30s timeout per call; returns state='unknown' with diagnostic on any
    failure mode (timeout, malformed JSON, missing claude binary)
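
A condensed sketch of the judge as described; the model flag, cache
key, log path, and timeout come from this commit, while the prompt
text, JSON handling, and types are assumptions:

  import { spawnSync } from "node:child_process";
  import { createHash } from "node:crypto";
  import { appendFileSync } from "node:fs";

  type PtyState = "waiting" | "working" | "hung" | "unknown";
  const judgeCache = new Map<string, PtyState>();

  const buildJudgePrompt = (tail: string) =>
    `Classify this terminal tail as waiting|working|hung. ` +
    `Reply with JSON {"state": ..., "reasoning": ...}.\n\n${tail}`;

  function judgePtyState(snapshot: string, testName: string): PtyState {
    const tail = snapshot.slice(-4096).replace(/\s+/g, " "); // normalized last 4KB
    const hash = createHash("sha1").update(tail).digest("hex");
    const cached = judgeCache.get(hash);
    if (cached) return cached; // identical spinner frames: no repeat call

    const start = Date.now();
    const res = spawnSync("claude",
      ["-p", "--model", "claude-haiku-4-5", "--max-turns", "1"],
      { input: buildJudgePrompt(tail), encoding: "utf8", timeout: 30_000 });

    let state: PtyState = "unknown";
    let reasoning = "";
    try {
      const parsed = JSON.parse(res.stdout ?? "");
      if (["waiting", "working", "hung"].includes(parsed.state)) {
        state = parsed.state;
        reasoning = String(parsed.reasoning ?? "");
      }
    } catch {
      // timeout, malformed JSON, missing binary: all collapse to 'unknown'
    }
    judgeCache.set(hash, state);
    try { // best-effort JSONL analytics log
      appendFileSync(`${process.env.HOME}/.gstack/analytics/pty-judge.jsonl`,
        JSON.stringify({ ts: new Date().toISOString(), testName, state,
                         reasoning, hash, judgeMs: Date.now() - start }) + "\n");
    } catch {}
    return state;
  }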

Snapshot logging: when GSTACK_PTY_LOG=1 is set, dump the last 4KB of
visible TTY at every judge tick to
~/.gstack/analytics/pty-snapshots/<test>-<elapsed>ms.txt — a
postmortem trail for debugging flakes.

Cost: ~$0.0005 per call and ~10 calls per 5-min test budget, so the
worst case adds ~$0.005 per test (and only when the regex detectors
miss).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: accept prose-AUQ visible as third valid surface in plan-mode envelopes

The first re-run after wiring the LLM judge revealed that the model also
emits a third surface I hadn't anticipated: a properly-formatted question
with options ("Pick A, B, or C in your reply") rendered as prose AND
followed by ExitPlanMode (outcome=plan_ready). The migrated tests only
accepted (## Decisions section) OR (BLOCKED string) — neither matched
this case, so the test failed even though the user clearly saw the
question.

Three valid surfaces now:
  1. `## Decisions to confirm` section in plan file (legacy fallback path,
     still valid through migration window)
  2. `BLOCKED — AskUserQuestion` string in TTY (post-v1.28 BLOCKED rule)
  3. Numbered/lettered options visible in TTY as prose (post-v1.28 prose
     rendering — uses the existing isProseAUQVisible detector)

Also fixes assertReportAtBottomIfPlanWritten to be tolerant of:
  - Missing files (path detected from TTY but file not persisted) — was
    throwing ENOENT on plan_design_plan_mode and plan_ceo_plan_mode test 1
  - 'asked' outcome (smoke test exited at first AUQ before the model
    reached the report-writing step) — was throwing on the one failure
    in the plan-eng-plan-mode --disallowedTools test

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: drop GSTACK REVIEW REPORT contract from --disallowedTools migrations

The plan-ceo / plan-design --disallowedTools migrated tests called
assertReportAtBottomIfPlanWritten as the final assertion, but that
contract is for full multi-section review completions. Under
--disallowedTools AskUserQuestion the model can't run the full
review (no AUQ tools to ask findings questions through), so it exits
at Step 0 with either prose-AUQ rendering or the legacy decisions
fallback. A plan file written in that mode WON'T have a GSTACK
REVIEW REPORT section — the workflow never reached the report-writing
step.

The contract is still enforced by the periodic finding-count tests
(skill-e2e-plan-{ceo,eng,design,devex}-finding-count.test.ts), which
DO run the full review end-to-end and assert report-at-bottom there.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(harness): high-water-mark prose-AUQ tracking across polling iterations

The autoplan E2E surfaces a brief prose-AUQ window (the model emits
options, waits ~30s for a responder the test never provides, then
resumes thinking) that
the existing polling loop misses: by judge-tick time the buffer has
moved into spinner state, so the LLM judge correctly reports 'working'
and the loop times out at 5min.

Adds two flags tracked across polling iterations:
  - proseAUQEverObserved: set true on the first tick where
    isProseAUQVisible returns true on the recent buffer
  - waitingEverObserved: set true on the first LLM judge 'waiting' verdict

At timeout, if either flag is set, return outcome='asked' with a
summary explaining the historical signal. The model DID surface the
question — we just missed the live-state window.

Snapshot logged with tag='prose-auq-surfaced' when GSTACK_PTY_LOG=1
for postmortem trace.
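
Sketched against the polling loop (the scaffolding helpers here are
illustrative; only the two flags and the timeout behavior come from
this commit):

  let proseAUQEverObserved = false;
  let waitingEverObserved = false;

  while (Date.now() < deadline) {
    const recent = readTtyTail(4096); // illustrative helper
    if (isProseAUQVisible(recent)) proseAUQEverObserved = true;
    if (judgeTickDue() && judgePtyState(recent, testName) === "waiting") {
      waitingEverObserved = true;
    }
    // ... existing live-state checks still return 'asked' immediately ...
    await sleep(pollIntervalMs);
  }

  // At timeout: the question may have surfaced and then scrolled away.
  if (proseAUQEverObserved || waitingEverObserved) {
    return { outcome: "asked", summary: "historical prose-AUQ/waiting signal" };
  }
  return { outcome: "timeout" };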

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: migrate plan-eng-plan-mode test 2 envelope to match other plan-mode tests

The plan-ceo, plan-design, and autoplan plan-mode tests under
--disallowedTools all moved to the same surface-visibility envelope
(decisions section OR BLOCKED string OR prose-AUQ visible) and dropped
the GSTACK REVIEW REPORT contract because the workflow can't complete
without AUQ tools. plan-eng-plan-mode test 2 had been left on the old
envelope and was the last failing test.

This commit migrates it to match. Also lifts 'exited' out of the failure
list and into a guarded path (acceptable when surface-visible).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(harness): isProseAUQVisible — gate numbered path on tail, not full buffer

The numbered-options branch of isProseAUQVisible deferred to
isNumberedOptionListVisible whenever a `❯ 1.` cursor was visible in the
full buffer. But the boot trust dialog (`❯ 1. Yes, trust`) lives in
scrollback for the entire run, so this gate suppressed prose-numbered
detection for any session that had the trust prompt at startup —
i.e., every E2E run after the first user-trust acceptance.

Fix: check only the last 4KB tail. Native-UI deferral applies when
the cursor list is CURRENTLY rendered, not historically present in
scrollback.

Adds a regression test that puts the trust dialog in early scrollback
+ 5KB filler + a current prose-AUQ render, asserts true.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(harness): isProseAUQVisible — lower numbered threshold to 2 (matches lettered)

The 4KB tail window often contains only options 2-4 of a 4-option
numbered prose AUQ because the model emits the question header + option 1
several KB earlier in the buffer. The threshold of 3 distinct numbered
markers caused the detector to miss real prose AUQs whenever option 1
had scrolled out.

Threshold 2 matches the lettered branch and is still tightly gated by:
- Line-start anchoring (no false positives on inline `1.` references)
- No-cursor gate (defers to native UI when ❯ 1. is currently rendered)
- The 4KB tail window itself (prose-AUQ rendering happens at the end of
  the model's response, so options are clustered in the tail)
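
Putting the two refinements together, the detector's end state looks
roughly like this (the regexes are illustrative reconstructions, not
the actual source):

  function isProseAUQVisible(buffer: string): boolean {
    const tail = buffer.slice(-4096); // gate on the tail, not scrollback

    // Defer to the native numbered-prompt UI only when its cursor is
    // CURRENTLY rendered (the boot trust dialog lives in scrollback
    // forever and must not suppress detection).
    if (/^\s*❯\s*1\./m.test(tail)) return false;

    // 2+ distinct lettered options at line starts: "A) ..." / "B. ..."
    const lettered = new Set(
      [...tail.matchAll(/^\s*([A-D])[).]\s+\S/gm)].map((m) => m[1]),
    );
    if (lettered.size >= 2) return true;

    // 2+ distinct numbered options at line starts (threshold lowered
    // from 3 because option 1 often scrolls out of the 4KB tail).
    const numbered = new Set(
      [...tail.matchAll(/^\s*(\d+)\.\s+\S/gm)].map((m) => m[1]),
    );
    return numbered.size >= 2;
  }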

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: expose high-water-mark flags through PlanSkillObservation

The 2KB obs.evidence window often misses the prose-AUQ moment because
ExitPlanMode UI ("Ready to execute" + numbered approve/reject prompt)
pushes the model's earlier option list out of the tail by the time
outcome=plan_ready fires. Tests checking "did the user see a question"
need to consult historical state, not just the truncated final tail.

Adds two optional fields to PlanSkillObservation:
  - proseAUQEverObserved: true if isProseAUQVisible was true at any tick
  - waitingEverObserved: true if the LLM judge ever returned 'waiting'

The 4 plan-mode --disallowedTools tests now check these flags as part
of the surfaceVisible computation:
    isProseAUQVisible(obs.evidence) || obs.proseAUQEverObserved === true
    blockedVisible || proseAUQVisible || obs.waitingEverObserved === true

This catches the autoplan / plan-ceo / plan-eng case where the model
surfaces options briefly, fails to get a response, then keeps thinking
— eventually emitting ExitPlanMode and pushing options out of evidence.
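
In type form (the two optional fields and the computations come from
this commit; the surrounding shape is abridged):

  interface PlanSkillObservation {
    outcome: "asked" | "plan_ready" | "exited" | "timeout"; // abridged
    evidence: string;                 // last ~2KB of visible TTY
    proseAUQEverObserved?: boolean;   // isProseAUQVisible hit on any tick
    waitingEverObserved?: boolean;    // LLM judge ever returned 'waiting'
  }

  // Test-side surface check: live tail OR historical high-water mark.
  const proseAUQVisible =
    isProseAUQVisible(obs.evidence) || obs.proseAUQEverObserved === true;
  const surfaceVisible =
    blockedVisible || proseAUQVisible || obs.waitingEverObserved === true;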

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(plan-ceo): bump --disallowedTools test timeout to 10 min

Last 5 runs showed the model under --disallowedTools spending the full
5-min budget in 'high effort thinking' before surfacing options. The LLM
judge correctly reports state=working at every 30s tick, so the
high-water-mark fallback never fires.

10-min budget gives the model 20 judge windows to eventually surface
the question. Outer bun timeout bumped accordingly to 660s (inner +60s).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(plan-ceo): pre-prime --disallowedTools test with concrete plan content

Root cause of the persistent timeout: under --disallowedTools, the model
can't fire the AUQ tool to ask "what should I review?" — it has to
prose-render that question. Prose-rendering a 4-option choice requires
the model to first enumerate every option, which burned the full 5-min
budget in 'high effort thinking' (8 consecutive 'state=working' verdicts
from the LLM judge).

Fix: pass initialPlanContent (already supported by runPlanSkillObservation)
with a CEO-review-shaped seed plan (vague success metric, missing
premise, scope creep smell). The model now has concrete material to
critique on entry, bypasses the scope-deliberation loop, and moves
directly to surfacing Step 0 / Section 1 findings — the actual
behavior we want to regression-test.

Reverted timeout from 600_000 back to 300_000 since the 5-min budget
is plenty when the model has a real plan to work with.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: delete --disallowedTools AskUserQuestion-blocked test variants

These tests simulated a fictional environment that doesn't exist in
production. Real Conductor sessions launch claude with
`--disallowedTools AskUserQuestion` AND register
`mcp__conductor__AskUserQuestion` — the model has the MCP variant. But
the tests passed `--disallowedTools` without standing up any MCP server,
so they tested "model behavior with NO AUQ available," which no real
user state produces.

Combined with bare `/plan-ceo-review` invocation (no follow-up content),
this forced the model into a 5+ minute deliberation loop trying to
prose-render a question with options it had to first invent. The result
was persistent flakes that consumed nine paid E2E runs trying to fix
"the model takes too long" — but the actual problem was the test
configuration, not the model.

Removals:
- test/skill-e2e-autoplan-auto-mode.test.ts (deleted; the entire file
  was a single AUQ-blocked test)
- test/skill-e2e-plan-ceo-plan-mode.test.ts test 2 (the migrated
  --disallowedTools test); test 1 (baseline plan-mode smoke) stays
- test/skill-e2e-plan-design-plan-mode.test.ts test 2 (same shape);
  test 1 stays
- test/skill-e2e-plan-eng-plan-mode.test.ts test 2 (same shape); test 1
  (baseline) and test 3 (STOP-gate with seeded plan, different
  contract) stay
- test/helpers/touchfiles.ts: autoplan-auto-mode entry removed
- test/touchfiles.test.ts: assertion count + commentary updated

Coverage retained: test 1 of each plan-mode file already verifies the
model fires AUQ; the periodic finding-count tests verify per-finding
AUQ cadence end-to-end. The harness improvements landed during this
debugging cycle (isProseAUQVisible regex, LLM judge, snapshot logging,
high-water-mark tracking, ENOENT-tolerant assertReportAtBottomIfPlanWritten)
all stay — they're useful for the remaining plan-mode tests that can
also encounter prose rendering and slow-thinking phases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v1.31.0.0)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

gstack

"I don't think I've typed like a line of code probably since December, basically, which is an extremely large change." — Andrej Karpathy, No Priors podcast, March 2026

When I heard Karpathy say this, I wanted to find out how. How does one person ship like a team of twenty? Peter Steinberger built OpenClaw — 247K GitHub stars — essentially solo with AI agents. The revolution is here. A single builder with the right tooling can move faster than a traditional team.

I'm Garry Tan, President & CEO of Y Combinator. I've worked with thousands of startups — Coinbase, Instacart, Rippling — when they were one or two people in a garage. Before YC, I was one of the first eng/PM/designers at Palantir, cofounded Posterous (sold to Twitter), and built Bookface, YC's internal social network.

gstack is my answer. I've been building products for twenty years, and right now I'm shipping more products than I ever have. In the last 60 days: 3 production services, 40+ shipped features, part-time, while running YC full-time. On logical code change — not raw LOC, which AI inflates — my 2026 run rate is ~810× my 2013 pace (11,417 vs 14 logical lines/day). Year-to-date (through April 18), 2026 has already produced 240× the entire 2013 year. Measured across 40 public + private garrytan/* repos including Bookface, after excluding one demo repo. AI wrote most of it. The point isn't who typed it, it's what shipped.

The LOC critics aren't wrong that raw line counts inflate with AI. They are wrong that, normalized for inflation, I'm less productive. I'm more productive, by a lot. Full methodology, caveats, and reproduction script: On the LOC Controversy.

2026 — 1,237 contributions and counting:

GitHub contributions 2026 — 1,237 contributions, massive acceleration in Jan-Mar

2013 — when I built Bookface at YC (772 contributions):

GitHub contributions 2013 — 772 contributions building Bookface at YC

Same person. Different era. The difference is the tooling.

gstack is how I do it. It turns Claude Code into a virtual engineering team — a CEO who rethinks the product, an eng manager who locks architecture, a designer who catches AI slop, a reviewer who finds production bugs, a QA lead who opens a real browser, a security officer who runs OWASP + STRIDE audits, and a release engineer who ships the PR. Twenty-three specialists and eight power tools, all slash commands, all Markdown, all free, MIT license.

This is my open source software factory. I use it every day. I'm sharing it because these tools should be available to everyone.

Fork it. Improve it. Make it yours. And if you want to hate on free open source software — you're welcome to, but I'd rather you just try it first.

Who this is for:

  • Founders and CEOs — especially technical ones who still want to ship
  • First-time Claude Code users — structured roles instead of a blank prompt
  • Tech leads and staff engineers — rigorous review, QA, and release automation on every PR

Quick start

  1. Install gstack (30 seconds — see below)
  2. Run /office-hours — describe what you're building
  3. Run /plan-ceo-review on any feature idea
  4. Run /review on any branch with changes
  5. Run /qa on your staging URL
  6. Stop there. You'll know if this is for you.

Install — 30 seconds

Requirements: Claude Code, Git, Bun v1.0+, Node.js (Windows only)

Step 1: Install on your machine

Open Claude Code and paste this. Claude does the rest.

Install gstack: run git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/.claude/skills/gstack && cd ~/.claude/skills/gstack && ./setup then add a "gstack" section to CLAUDE.md that says to use the /browse skill from gstack for all web browsing, never use mcp__claude-in-chrome__* tools, and lists the available skills: /office-hours, /plan-ceo-review, /plan-eng-review, /plan-design-review, /design-consultation, /design-shotgun, /design-html, /review, /ship, /land-and-deploy, /canary, /benchmark, /browse, /connect-chrome, /qa, /qa-only, /design-review, /setup-browser-cookies, /setup-deploy, /setup-gbrain, /retro, /investigate, /document-release, /codex, /cso, /autoplan, /plan-devex-review, /devex-review, /careful, /freeze, /guard, /unfreeze, /gstack-upgrade, /learn. Then ask the user if they also want to add gstack to the current project so teammates get it.

Step 2: Add gstack to your repo (optional)

From inside your repo, paste this. It switches you to team mode, bootstraps the repo so teammates get gstack automatically, and commits the change:

(cd ~/.claude/skills/gstack && ./setup --team) && ~/.claude/skills/gstack/bin/gstack-team-init required && git add .claude/ CLAUDE.md && git commit -m "require gstack for AI-assisted work"

No vendored files in your repo, no version drift, no manual upgrades. Every Claude Code session starts with a fast auto-update check (throttled to once/hour, network-failure-safe, completely silent).

Swap required for optional if you'd rather nudge teammates than block them.

OpenClaw

OpenClaw spawns Claude Code sessions via ACP, so every gstack skill just works when Claude Code has gstack installed. Paste this to your OpenClaw agent:

Install gstack: run git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/.claude/skills/gstack && cd ~/.claude/skills/gstack && ./setup to install gstack for Claude Code. Then add a "Coding Tasks" section to AGENTS.md that says: when spawning Claude Code sessions for coding work, tell the session to use gstack skills. Include these examples — security audit: "Load gstack. Run /cso", code review: "Load gstack. Run /review", QA test a URL: "Load gstack. Run /qa https://...", build a feature end-to-end: "Load gstack. Run /autoplan, implement the plan, then run /ship", plan before building: "Load gstack. Run /office-hours then /autoplan. Save the plan, don't implement."

After setup, just talk to your OpenClaw agent naturally:

You say What happens
"Fix the typo in README" Simple — Claude Code session, no gstack needed
"Run a security audit on this repo" Spawns Claude Code with Run /cso
"Build me a notifications feature" Spawns Claude Code with /autoplan → implement → /ship
"Help me plan the v2 API redesign" Spawns Claude Code with /office-hours → /autoplan, saves plan

See docs/OPENCLAW.md for advanced dispatch routing and the gstack-lite/gstack-full prompt templates.

Native OpenClaw Skills (via ClawHub)

Four methodology skills that work directly in your OpenClaw agent, no Claude Code session needed. Install from ClawHub:

clawhub install gstack-openclaw-office-hours gstack-openclaw-ceo-review gstack-openclaw-investigate gstack-openclaw-retro
Skill What it does
gstack-openclaw-office-hours Product interrogation with 6 forcing questions
gstack-openclaw-ceo-review Strategic challenge with 4 scope modes
gstack-openclaw-investigate Root cause debugging methodology
gstack-openclaw-retro Weekly engineering retrospective

These are conversational skills. Your OpenClaw agent runs them directly via chat.

Other AI Agents

gstack works on 10 AI coding agents, not just Claude. Setup auto-detects which agents you have installed:

git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/gstack
cd ~/gstack && ./setup

Or target a specific agent with ./setup --host <name>:

Agent Flag Skills install to
OpenAI Codex CLI --host codex ~/.codex/skills/gstack-*/
OpenCode --host opencode ~/.config/opencode/skills/gstack-*/
Cursor --host cursor ~/.cursor/skills/gstack-*/
Factory Droid --host factory ~/.factory/skills/gstack-*/
Slate --host slate ~/.slate/skills/gstack-*/
Kiro --host kiro ~/.kiro/skills/gstack-*/
Hermes --host hermes ~/.hermes/skills/gstack-*/
GBrain (mod) --host gbrain ~/.gbrain/skills/gstack-*/

Want to add support for another agent? See docs/ADDING_A_HOST.md. It's one TypeScript config file, zero code changes.
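
For a sense of scale, a hypothetical host entry might look like the sketch below. Every field name here is illustrative rather than the real schema; docs/ADDING_A_HOST.md has the actual shape.

// hypothetical host adapter config (illustrative field names)
import type { HostConfig } from "./types";

export const myagent: HostConfig = {
  name: "myagent",                          // selected via ./setup --host myagent
  skillsDir: "~/.myagent/skills/gstack-*/", // where skills get installed
  instructionsFile: "AGENTS.md",            // where routing guidance is written
};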

See it work

You:    I want to build a daily briefing app for my calendar.
You:    /office-hours
Claude: [asks about the pain — specific examples, not hypotheticals]

You:    Multiple Google calendars, events with stale info, wrong locations.
        Prep takes forever and the results aren't good enough...

Claude: I'm going to push back on the framing. You said "daily briefing
        app." But what you actually described is a personal chief of
        staff AI.
        [extracts 5 capabilities you didn't realize you were describing]
        [challenges 4 premises — you agree, disagree, or adjust]
        [generates 3 implementation approaches with effort estimates]
        RECOMMENDATION: Ship the narrowest wedge tomorrow, learn from
        real usage. The full vision is a 3-month project — start with
        the daily briefing that actually works.
        [writes design doc → feeds into downstream skills automatically]

You:    /plan-ceo-review
        [reads the design doc, challenges scope, runs 10-section review]

You:    /plan-eng-review
        [ASCII diagrams for data flow, state machines, error paths]
        [test matrix, failure modes, security concerns]

You:    Approve plan. Exit plan mode.
        [writes 2,400 lines across 11 files. ~8 minutes.]

You:    /review
        [AUTO-FIXED] 2 issues. [ASK] Race condition → you approve fix.

You:    /qa https://staging.myapp.com
        [opens real browser, clicks through flows, finds and fixes a bug]

You:    /ship
        Tests: 42 → 51 (+9 new). PR: github.com/you/app/pull/42

You said "daily briefing app." The agent said "you're building a chief of staff AI" — because it listened to your pain, not your feature request. Eight commands, end to end. That is not a copilot. That is a team.

The sprint

gstack is a process, not a collection of tools. The skills run in the order a sprint runs:

Think → Plan → Build → Review → Test → Ship → Reflect

Each skill feeds into the next. /office-hours writes a design doc that /plan-ceo-review reads. /plan-eng-review writes a test plan that /qa picks up. /review catches bugs that /ship verifies are fixed. Nothing falls through the cracks because every step knows what came before it.

Skill Your specialist What they do
/office-hours YC Office Hours Start here. Six forcing questions that reframe your product before you write code. Pushes back on your framing, challenges premises, generates implementation alternatives. Design doc feeds into every downstream skill.
/plan-ceo-review CEO / Founder Rethink the problem. Find the 10-star product hiding inside the request. Four modes: Expansion, Selective Expansion, Hold Scope, Reduction.
/plan-eng-review Eng Manager Lock in architecture, data flow, diagrams, edge cases, and tests. Forces hidden assumptions into the open.
/plan-design-review Senior Designer Rates each design dimension 0-10, explains what a 10 looks like, then edits the plan to get there. AI Slop detection. Interactive — one AskUserQuestion per design choice.
/plan-devex-review Developer Experience Lead Interactive DX review: explores developer personas, benchmarks against competitors' TTHW, designs your magical moment, traces friction points step by step. Three modes: DX EXPANSION, DX POLISH, DX TRIAGE. 20-45 forcing questions.
/design-consultation Design Partner Build a complete design system from scratch. Researches the landscape, proposes creative risks, generates realistic product mockups.
/review Staff Engineer Find the bugs that pass CI but blow up in production. Auto-fixes the obvious ones. Flags completeness gaps.
/investigate Debugger Systematic root-cause debugging. Iron Law: no fixes without investigation. Traces data flow, tests hypotheses, stops after 3 failed fixes.
/design-review Designer Who Codes Same audit as /plan-design-review, then fixes what it finds. Atomic commits, before/after screenshots.
/devex-review DX Tester Live developer experience audit. Actually tests your onboarding: navigates docs, tries the getting started flow, times TTHW, screenshots errors. Compares against /plan-devex-review scores — the boomerang that shows if your plan matched reality.
/design-shotgun Design Explorer "Show me options." Generates 4-6 AI mockup variants, opens a comparison board in your browser, collects your feedback, and iterates. Taste memory learns what you like. Repeat until you love something, then hand it to /design-html.
/design-html Design Engineer Turn a mockup into production HTML that actually works. Pretext computed layout: text reflows, heights adjust, layouts are dynamic. 30KB, zero deps. Detects React/Svelte/Vue. Smart API routing per design type (landing page vs dashboard vs form). The output is shippable, not a demo.
/qa QA Lead Test your app, find bugs, fix them with atomic commits, re-verify. Auto-generates regression tests for every fix.
/qa-only QA Reporter Same methodology as /qa but report only. Pure bug report without code changes.
/pair-agent Multi-Agent Coordinator Share your browser with any AI agent. One command, one paste, connected. Works with OpenClaw, Hermes, Codex, Cursor, or anything that can curl. Each agent gets its own tab. Auto-launches headed mode so you watch everything. Auto-starts ngrok tunnel for remote agents. Scoped tokens, tab isolation, rate limiting, activity attribution.
/cso Chief Security Officer OWASP Top 10 + STRIDE threat model. Zero-noise: 17 false positive exclusions, 8/10+ confidence gate, independent finding verification. Each finding includes a concrete exploit scenario.
/ship Release Engineer Sync main, run tests, audit coverage, push, open PR. Bootstraps test frameworks if you don't have one.
/land-and-deploy Release Engineer Merge the PR, wait for CI and deploy, verify production health. One command from "approved" to "verified in production."
/canary SRE Post-deploy monitoring loop. Watches for console errors, performance regressions, and page failures.
/benchmark Performance Engineer Baseline page load times, Core Web Vitals, and resource sizes. Compare before/after on every PR.
/document-release Technical Writer Update all project docs to match what you just shipped. Catches stale READMEs automatically.
/retro Eng Manager Team-aware weekly retro. Per-person breakdowns, shipping streaks, test health trends, growth opportunities. /retro global runs across all your projects and AI tools (Claude Code, Codex, Gemini).
/browse QA Engineer Give the agent eyes. Real Chromium browser, real clicks, real screenshots. ~100ms per command. /open-gstack-browser launches GStack Browser with sidebar, anti-bot stealth, and auto model routing.
/setup-browser-cookies Session Manager Import cookies from your real browser (Chrome, Arc, Brave, Edge) into the headless session. Test authenticated pages.
/autoplan Review Pipeline One command, fully reviewed plan. Runs CEO → design → eng review automatically with encoded decision principles. Surfaces only taste decisions for your approval.
/learn Memory Manage what gstack learned across sessions. Review, search, prune, and export project-specific patterns, pitfalls, and preferences. Learnings compound across sessions so gstack gets smarter on your codebase over time.

Which review should I use?

Building for... Plan stage (before code) Live audit (after shipping)
End users (UI, web app, mobile) /plan-design-review /design-review
Developers (API, CLI, SDK, docs) /plan-devex-review /devex-review
Architecture (data flow, perf, tests) /plan-eng-review /review
All of the above /autoplan (runs CEO → design → eng → DX, auto-detects which apply)

Power tools

Skill What it does
/codex Second Opinion — independent code review from OpenAI Codex CLI. Three modes: review (pass/fail gate), adversarial challenge, and open consultation. Cross-model analysis when both /review and /codex have run.
/careful Safety Guardrails — warns before destructive commands (rm -rf, DROP TABLE, force-push). Say "be careful" to activate. Override any warning.
/freeze Edit Lock — restrict file edits to one directory. Prevents accidental changes outside scope while debugging.
/guard Full Safety/careful + /freeze in one command. Maximum safety for prod work.
/unfreeze Unlock — remove the /freeze boundary.
/open-gstack-browser GStack Browser — launch GStack Browser with sidebar, anti-bot stealth, auto model routing (Sonnet for actions, Opus for analysis), one-click cookie import, and Claude Code integration. Clean up pages, take smart screenshots, edit CSS, and pass info back to your terminal.
/setup-deploy Deploy Configurator — one-time setup for /land-and-deploy. Detects your platform, production URL, and deploy commands.
/setup-gbrain GBrain Onboarding — from zero to running gbrain in under 5 minutes. PGLite local, Supabase existing URL, or auto-provision a new Supabase project via Management API. MCP registration for Claude Code + per-repo trust triad (read-write/read-only/deny). Full guide.
/sync-gbrain Keep Brain Current — re-index this repo's code into gbrain via gbrain sources add + gbrain sync --strategy code, refresh the ## GBrain Search Guidance block in CLAUDE.md, and auto-remove guidance when the capability check fails. --incremental (default), --full, --dry-run. Idempotent; safe to re-run.
/gstack-upgrade Self-Updater — upgrade gstack to latest. Detects global vs vendored install, syncs both, shows what changed.

New binaries (v0.19)

Beyond the slash-command skills, gstack ships standalone CLIs for workflows that don't belong inside a session:

Command What it does
gstack-model-benchmark Cross-model benchmark — run the same prompt through Claude, GPT (via Codex CLI), and Gemini; compare latency, tokens, cost, and (optionally) LLM-judge quality score. Auth detected per provider, unavailable providers skip cleanly. Output as table, JSON, or markdown. --dry-run validates flags + auth without spending API calls.
gstack-taste-update Design taste learning — writes approvals and rejections from /design-shotgun into a persistent per-project taste profile. Decays 5%/week. Feeds back into future variant generation so the system learns what you actually pick.

Continuous checkpoint mode (opt-in, local by default)

Set gstack-config set checkpoint_mode continuous and skills auto-commit your work as you go with a WIP: prefix plus a structured [gstack-context] body (decisions, remaining work, failed approaches). Survives crashes and context switches. /context-restore reads those commits to reconstruct session state. /ship filter-squashes WIP commits before the PR (preserving non-WIP commits) so bisect stays clean. Push is opt-in via checkpoint_push=true — default is local-only so you don't trigger CI on every WIP commit.

Domain skills + raw CDP escape hatch

Two new browser primitives compound the gstack agent over time:

  • $B domain-skill save — agent saves a per-site note (e.g., "LinkedIn's Apply button lives in an iframe") that fires automatically next time it visits that hostname. Quarantined → active after 3 successful uses → optional cross-project promotion via $B domain-skill promote-to-global. Storage lives alongside /learn's per-project learnings file. Full reference: docs/domain-skills.md.
  • $B cdp <Domain.method> — raw Chrome DevTools Protocol escape hatch for the rare case curated commands miss. Deny-default: methods must be explicitly added to browse/src/cdp-allowlist.ts with a one-line justification. Two-tier mutex serializes browser-scoped CDP calls against per-tab work. Output for data-exfil methods is wrapped in the UNTRUSTED envelope.

Want raw CDP with no rails, no allowlist, no daemon — just thin transport from agent to Chrome? browser-use/browser-harness-js is a different philosophy (agent-authored helpers vs gstack's curated commands) and a good fit if you don't want gstack's security stack. The two can coexist: gstack's $B cdp and harness can both attach to the same Chrome via Playwright's newCDPSession.

Deep dives with examples and philosophy for every skill →

Karpathy's four failure modes? Already covered.

Andrej Karpathy's AI coding rules (17K stars) nail four failure modes: wrong assumptions, overcomplexity, orthogonal edits, imperative over declarative. gstack's workflow skills guard against all four. /office-hours forces assumptions into the open before code is written. The Confusion Protocol stops Claude from guessing on architectural decisions. /review catches unnecessary complexity and drive-by edits. /ship transforms tasks into verifiable goals with test-first execution. If you already use Karpathy-style CLAUDE.md rules, gstack is the workflow enforcement layer that makes them stick across entire sprints, not just single prompts.

Parallel sprints

gstack works well with one sprint. It gets interesting with ten running at once.

Design is at the heart. /design-consultation builds your design system from scratch, researches what's out there, proposes creative risks, and writes DESIGN.md. But the real magic is the shotgun-to-HTML pipeline.

/design-shotgun is how you explore. You describe what you want. It generates 4-6 AI mockup variants using GPT Image. Then it opens a comparison board in your browser with all variants side by side. You pick favorites, leave feedback ("more whitespace", "bolder headline", "lose the gradient"), and it generates a new round. Repeat until you love something. Taste memory kicks in after a few rounds so it starts biasing toward what you actually like. No more describing your vision in words and hoping the AI gets it. You see options, pick the good ones, and iterate visually.

/design-html makes it real. Take that approved mockup (from /design-shotgun, a CEO plan, a design review, or just a description) and turn it into production-quality HTML/CSS. Not the kind of AI HTML that looks fine at one viewport width and breaks everywhere else. This uses Pretext for computed text layout: text actually reflows on resize, heights adjust to content, layouts are dynamic. 30KB overhead, zero dependencies. It detects your framework (React, Svelte, Vue) and outputs the right format. Smart API routing picks different Pretext patterns depending on whether it's a landing page, dashboard, form, or card layout. The output is something you'd actually ship, not a demo.

/qa was a massive unlock. It let me go from 6 to 12 parallel workers. Claude Code saying "I SEE THE ISSUE" and then actually fixing it, generating a regression test, and verifying the fix — that changed how I work. The agent has eyes now.

Smart review routing. Just like at a well-run startup: CEO doesn't have to look at infra bug fixes, design review isn't needed for backend changes. gstack tracks what reviews are run, figures out what's appropriate, and just does the smart thing. The Review Readiness Dashboard tells you where you stand before you ship.

Test everything. /ship bootstraps test frameworks from scratch if your project doesn't have one. Every /ship run produces a coverage audit. Every /qa bug fix generates a regression test. 100% test coverage is the goal — tests make vibe coding safe instead of yolo coding.

/document-release is the engineer you never had. It reads every doc file in your project, cross-references the diff, and updates everything that drifted. README, ARCHITECTURE, CONTRIBUTING, CLAUDE.md, TODOS — all kept current automatically. And now /ship auto-invokes it — docs stay current without an extra command.

Real browser mode. /open-gstack-browser launches GStack Browser, an AI-controlled Chromium with anti-bot stealth, custom branding, and the sidebar extension baked in. Sites like Google and NYTimes work without captchas. The menu bar says "GStack Browser" instead of "Chrome for Testing." Your regular Chrome stays untouched. All existing browse commands work unchanged. $B disconnect returns to headless. The browser stays alive as long as the window is open... no idle timeout killing it while you're working.

Sidebar agent — your AI browser assistant. Type natural language in the Chrome side panel and a child Claude instance executes it. "Navigate to the settings page and screenshot it." "Fill out this form with test data." "Go through every item in this list and extract the prices." The sidebar auto-routes to the right model: Sonnet for fast actions (click, navigate, screenshot) and Opus for reading and analysis. Each task gets up to 5 minutes. The sidebar agent runs in an isolated session, so it won't interfere with your main Claude Code window. One-click cookie import right from the sidebar footer.

Personal automation. The sidebar agent isn't just for dev workflows. Example: "Browse my kid's school parent portal and add all the other parents' names, phone numbers, and photos to my Google Contacts." Two ways to get authenticated: (1) log in once in the headed browser, your session persists, or (2) click the "cookies" button in the sidebar footer to import cookies from your real Chrome. Once authenticated, Claude navigates the directory, extracts the data, and creates the contacts.

Prompt injection defense. Hostile web pages try to hijack your sidebar agent. gstack ships a layered defense: a 22MB ML classifier bundled with the browser scans every page and tool output locally, a Claude Haiku transcript check votes on the full conversation shape, a random canary token in the system prompt catches session exfil attempts across text, tool args, URLs, and file writes, and a verdict combiner requires two classifiers to agree before blocking (prevents single-model false positives on Stack Overflow-style instruction pages). A shield icon in the sidebar header shows status (green/amber/red). Opt in to a 721MB DeBERTa-v3 ensemble via GSTACK_SECURITY_ENSEMBLE=deberta for 2-of-3 agreement. Emergency kill switch: GSTACK_SECURITY_OFF=1. See ARCHITECTURE.md for the full stack.
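
The blocking decision itself is a plain agreement rule. A minimal sketch, assuming each classifier independently returns a verdict (the interfaces are assumptions, not the shipped code):

type Verdict = "block" | "allow";

// Require at least two classifiers to agree before blocking, so one
// model's false positive (say, a Stack Overflow page full of
// imperative instructions) can't halt the session on its own.
function combineVerdicts(verdicts: Verdict[]): Verdict {
  const blocks = verdicts.filter((v) => v === "block").length;
  return blocks >= 2 ? "block" : "allow";
}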

Browser handoff when the AI gets stuck. Hit a CAPTCHA, auth wall, or MFA prompt? $B handoff opens a visible Chrome at the exact same page with all your cookies and tabs intact. Solve the problem, tell Claude you're done, $B resume picks up right where it left off. The agent even suggests it automatically after 3 consecutive failures.

/pair-agent is cross-agent coordination. You're in Claude Code. You also have OpenClaw running. Or Hermes. Or Codex. You want them both looking at the same website. Type /pair-agent, pick your agent, and a GStack Browser window opens so you can watch. The skill prints a block of instructions. Paste that block into the other agent's chat. It exchanges a one-time setup key for a session token, creates its own tab, and starts browsing. You see both agents working in the same browser, each in their own tab, neither able to interfere with the other. If ngrok is installed, the tunnel starts automatically so the other agent can be on a completely different machine. Same-machine agents get a zero-friction shortcut that writes credentials directly. This is the first time AI agents from different vendors can coordinate through a shared browser with real security: scoped tokens, tab isolation, rate limiting, domain restrictions, and activity attribution.

Multi-AI second opinion. /codex gets an independent review from OpenAI's Codex CLI — a completely different AI looking at the same diff. Three modes: code review with a pass/fail gate, adversarial challenge that actively tries to break your code, and open consultation with session continuity. When both /review (Claude) and /codex (OpenAI) have reviewed the same branch, you get a cross-model analysis showing which findings overlap and which are unique to each.

Safety guardrails on demand. Say "be careful" and /careful warns before any destructive command — rm -rf, DROP TABLE, force-push, git reset --hard. /freeze locks edits to one directory while debugging so Claude can't accidentally "fix" unrelated code. /guard activates both. /investigate auto-freezes to the module being investigated.

Proactive skill suggestions. gstack notices what stage you're in — brainstorming, reviewing, debugging, testing — and suggests the right skill. Don't like it? Say "stop suggesting" and it remembers across sessions.

10-15 parallel sprints

gstack is powerful with one sprint. It is transformative with ten running at once.

Conductor runs multiple Claude Code sessions in parallel — each in its own isolated workspace. One session running /office-hours on a new idea, another doing /review on a PR, a third implementing a feature, a fourth running /qa on staging, and six more on other branches. All at the same time. I regularly run 10-15 parallel sprints — that's the practical max right now.

The sprint structure is what makes parallelism work. Without a process, ten agents is ten sources of chaos. With a process — think, plan, build, review, test, ship — each agent knows exactly what to do and when to stop. You manage them the way a CEO manages a team: check in on the decisions that matter, let the rest run.

Voice input (AquaVoice, Whisper, etc.)

gstack skills have voice-friendly trigger phrases. Say what you want naturally — "run a security check", "test the website", "do an engineering review" — and the right skill activates. You don't need to remember slash command names or acronyms.

Uninstall

Option 1: Run the uninstall script

If gstack is installed on your machine:

~/.claude/skills/gstack/bin/gstack-uninstall

This handles skills, symlinks, global state (~/.gstack/), project-local state, browse daemons, and temp files. Use --keep-state to preserve config and analytics. Use --force to skip confirmation.

Option 2: Manual removal (no local repo)

If you don't have the repo cloned (e.g. you installed via a Claude Code paste and later deleted the clone):

# 1. Stop browse daemons
pkill -f "gstack.*browse" 2>/dev/null || true

# 2. Remove per-skill directories whose SKILL.md points into gstack/
find ~/.claude/skills -mindepth 1 -maxdepth 1 -type d ! -name gstack 2>/dev/null |
while IFS= read -r dir; do
  link="$dir/SKILL.md"
  [ -L "$link" ] || continue
  target=$(readlink "$link" 2>/dev/null) || continue
  case "$target" in
    gstack/*|*/gstack/*)
      rm -f "$link"
      rmdir "$dir" 2>/dev/null || true
      ;;
  esac
done

# 3. Remove gstack
rm -rf ~/.claude/skills/gstack

# 4. Remove global state
rm -rf ~/.gstack

# 5. Remove integrations (skip any you never installed)
rm -rf ~/.codex/skills/gstack* 2>/dev/null
rm -rf ~/.factory/skills/gstack* 2>/dev/null
rm -rf ~/.kiro/skills/gstack* 2>/dev/null
rm -rf ~/.openclaw/skills/gstack* 2>/dev/null

# 6. Remove temp files
rm -f /tmp/gstack-* 2>/dev/null

# 7. Per-project cleanup (run from each project root)
rm -rf .gstack .gstack-worktrees .claude/skills/gstack 2>/dev/null
rm -rf .agents/skills/gstack* .factory/skills/gstack* 2>/dev/null

Clean up CLAUDE.md

The uninstall script does not edit CLAUDE.md. In each project where gstack was added, remove the ## gstack and ## Skill routing sections.

Playwright

~/Library/Caches/ms-playwright/ (macOS) is left in place because other tools may share it. Remove it if nothing else needs it.


Free, MIT licensed, open source. No premium tier, no waitlist.

I open sourced how I build software. You can fork it and make it your own.

We're hiring. Want to ship real products at AI-coding speed and help harden gstack? Come work at YC — ycombinator.com/software. Extremely competitive salary and equity. San Francisco, Dogpatch District.

GBrain — persistent knowledge for your coding agent

GBrain is a persistent knowledge base for AI agents — think of it as the memory your agent actually keeps between sessions. gstack gives you a one-command path from zero to "it's running, my agent can call it."

/setup-gbrain

Three paths, pick one:

  • Supabase, existing URL — your cloud agent already provisioned a brain; paste the Session Pooler URL, now this laptop uses the same data.
  • Supabase, auto-provision — paste a Supabase Personal Access Token; the skill creates a new project, polls until it's healthy, fetches the pooler URL, and hands it to gbrain init. ~90 seconds end-to-end.
  • PGLite local — zero accounts, zero network, ~30 seconds. Isolated brain on this Mac only. Great for try-first; migrate to Supabase later with /setup-gbrain --switch.

After init, the skill offers to register gbrain as an MCP server for Claude Code (claude mcp add gbrain -- gbrain serve) so gbrain search, gbrain put_page, etc. show up as first-class typed tools — not bash shell-outs.

Keeping the brain current. Run /sync-gbrain from any repo to re-index its code into gbrain (incremental by default, --full for a full reindex, --dry-run to preview). The skill registers the cwd as a federated source via gbrain sources add, runs gbrain sync --strategy code, and writes a ## GBrain Search Guidance block to your project's CLAUDE.md so the agent prefers gbrain search/code-def/code-refs over Grep. The block is removed automatically if the capability check fails — no stale guidance pointing at tools that aren't installed.

Per-remote trust policy. Each repo on your machine gets one of three tiers:

  • read-write — agent can search the brain AND write new pages back from this repo
  • read-only — agent can search but never writes (best for multi-client consultants: search the shared brain, don't contaminate it with Client A's work while in Client B's repo)
  • deny — no gbrain interaction at all

The skill asks once per repo. The decision is sticky across worktrees and branches of the same remote.

GStack memory sync (different feature, same private-repo infra). Optionally pushes your gstack state (learnings, CEO plans, design docs, retros, developer profile) to a private git repo so your memory follows you across machines, with a one-time privacy prompt (everything allowlisted / artifacts only / off) and a defense-in-depth secret scanner that blocks AWS keys, tokens, PEM blocks, and JWTs before they leave your machine.

gstack-brain-init

Full monty — every scenario, every flag, every bin helper, every troubleshooting step: USING_GBRAIN_WITH_GSTACK.md

Other references: docs/gbrain-sync.md (sync-specific guide) • docs/gbrain-sync-errors.md (error index)

Docs

Doc What it covers
Skill Deep Dives Philosophy, examples, and workflow for every skill (includes Greptile integration)
Builder Ethos Builder philosophy: Boil the Lake, Search Before Building, three layers of knowledge
Using GBrain with GStack Every path, flag, bin helper, and troubleshooting step for /setup-gbrain
GBrain Sync Cross-machine memory setup, privacy modes, troubleshooting
Architecture Design decisions and system internals
Browser Reference Full command reference for /browse
Contributing Dev setup, testing, contributor mode, and dev mode
Changelog What's new in every version

Privacy & Telemetry

gstack includes opt-in usage telemetry to help improve the project. Here's exactly what happens:

  • Default is off. Nothing is sent anywhere unless you explicitly say yes.
  • On first run, gstack asks if you want to share anonymous usage data. You can say no.
  • What's sent (if you opt in): skill name, duration, success/fail, gstack version, OS. That's it.
  • What's never sent: code, file paths, repo names, branch names, prompts, or any user-generated content.
  • Change anytime: gstack-config set telemetry off disables everything instantly.

Data is stored in Supabase (open source Firebase alternative). The schema is in supabase/migrations/ — you can verify exactly what's collected. The Supabase publishable key in the repo is a public key (like a Firebase API key) — row-level security policies deny all direct access. Telemetry flows through validated edge functions that enforce schema checks, event type allowlists, and field length limits.
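
Put together, an opt-in event carries only this shape (a sketch; the authoritative schema is in supabase/migrations/):

interface TelemetryEvent {
  skill: string;          // skill name only, e.g. "qa"
  durationMs: number;
  success: boolean;
  gstackVersion: string;
  os: string;             // platform identifier; never paths or repo names
}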

Local analytics are always available. Run gstack-analytics to see your personal usage dashboard from the local JSONL file — no remote data needed.

Troubleshooting

Skill not showing up? cd ~/.claude/skills/gstack && ./setup

/browse fails? cd ~/.claude/skills/gstack && bun install && bun run build

Stale install? Run /gstack-upgrade — or set auto_upgrade: true in ~/.gstack/config.yaml

Want shorter commands? cd ~/.claude/skills/gstack && ./setup --no-prefix — switches from /gstack-qa to /qa. Your choice is remembered for future upgrades.

Want namespaced commands? cd ~/.claude/skills/gstack && ./setup --prefix — switches from /qa to /gstack-qa. Useful if you run other skill packs alongside gstack.

Codex says "Skipped loading skill(s) due to invalid SKILL.md"? Your Codex skill descriptions are stale. Fix: cd ~/.codex/skills/gstack && git pull && ./setup --host codex — or for repo-local installs: cd "$(readlink -f .agents/skills/gstack)" && git pull && ./setup --host codex

Windows users: gstack works on Windows 11 via Git Bash or WSL. Node.js is required in addition to Bun — Bun has a known bug with Playwright's pipe transport on Windows (bun#4253). The browse server automatically falls back to Node.js. Make sure both bun and node are on your PATH.

Claude says it can't see the skills? Make sure your project's CLAUDE.md has a gstack section. Add this:

## gstack
Use /browse from gstack for all web browsing. Never use mcp__claude-in-chrome__* tools.
Available skills: /office-hours, /plan-ceo-review, /plan-eng-review, /plan-design-review,
/design-consultation, /design-shotgun, /design-html, /review, /ship, /land-and-deploy,
/canary, /benchmark, /browse, /open-gstack-browser, /qa, /qa-only, /design-review,
/setup-browser-cookies, /setup-deploy, /setup-gbrain, /sync-gbrain, /retro, /investigate, /document-release,
/codex, /cso, /autoplan, /pair-agent, /careful, /freeze, /guard, /unfreeze, /gstack-upgrade, /learn.

License

MIT. Free forever. Go build something.
