mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-08 21:49:45 +08:00
feat: community PRs — faster install, skill namespacing, uninstall, Codex fallback, Windows fix, Python patterns (v0.12.9.0) (#561)
* fix: sync package.json version with VERSION file (0.12.7.0)
* perf: shallow clone for faster install (#484)
* feat: Python/async/SSRF patterns in review checklist (#531)
* feat: namespace skill symlinks with gstack- prefix (#503)
* feat: add uninstall script (#323)
* feat: office-hours Claude subagent fallback when Codex unavailable (#464)
  Updates generateCodexSecondOpinion resolver to always offer a second opinion and fall back to a Claude subagent when Codex is unavailable or errors.
* fix: findPort() race condition via net.createServer (#490)
  Replaces Bun.serve() port probing with net.createServer() for proper async bind/close semantics. Fixes Windows EADDRINUSE race condition.
* test: add tests for uninstall, setup prefix, and resolver fallback
  - Uninstall integration tests: syntax, flags, mock install layout, upgrade path
  - Setup prefix tests: gstack-* prefixing, --no-prefix, cleanup migration
  - Resolver tests: Claude subagent fallback in generated SKILL.md
* chore: bump version and changelog (v0.12.9.0)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

CHANGELOG.md
@@ -1,5 +1,25 @@
# Changelog

## [0.12.9.0] - 2026-03-27 — Community PRs: Faster Install, Skill Namespacing, Uninstall
Six community PRs landed in one batch. Install is faster, skills no longer collide with other tools, and you can cleanly uninstall gstack when needed.
### Added
- **Uninstall script.** `bin/gstack-uninstall` cleanly removes gstack from your system: stops browse daemons, removes all skill installs (Claude/Codex/Kiro), cleans up state. Supports `--force` (skip confirmation) and `--keep-state` (preserve config). (#323)
- **Python security patterns in /review.** Shell injection (`subprocess.run(shell=True)`), SSRF via LLM-generated URLs, stored prompt injection, async/sync mixing, and column name safety checks now fire automatically on Python projects. (#531)
- **Office-hours works without Codex.** The "second opinion" step now falls back to a Claude subagent when Codex CLI is unavailable, so every user gets the cross-model perspective. (#464)
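
The shell-injection check in that list targets untrusted text being spliced into a command string. The same failure mode is easy to demonstrate directly in shell (illustrative only — the checklist's own wording lives in the /review templates):

```shell
UNTRUSTED='report.txt; echo INJECTED'

# Unsafe shape (what the subprocess.run(shell=True) check targets):
# the untrusted string is spliced into shell source, so ";" starts a second command.
UNSAFE=$(sh -c "echo $UNTRUSTED")

# Safe shape: pass it as a positional argument — sh never re-parses it as code.
SAFE=$(sh -c 'echo "$1"' demo "$UNTRUSTED")

echo "unsafe: $UNSAFE"
echo "safe:   $SAFE"
```

In the unsafe form the output contains the injected `INJECTED` line; in the safe form the whole string stays inert data, which is exactly the argv-list discipline the Python check pushes toward.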
### Changed
- **Faster install (~30s).** All clone commands now use `--single-branch --depth 1`. Full history available for contributors. (#484)
- **Skills namespaced with `gstack-` prefix.** Skill symlinks are now `gstack-review`, `gstack-ship`, etc. instead of bare `review`, `ship`. Prevents collisions with other skill packs. Old symlinks are auto-cleaned on upgrade. Use `--no-prefix` to opt out. (#503)
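
The namespacing change boils down to prefixed symlink names. A minimal sketch of the idea (the directory layout below is illustrative — the real paths and links are created by `./setup`):

```shell
# Illustrative layout only — gstack's setup script creates the real links.
SKILLS=$(mktemp -d)
mkdir -p "$SKILLS/gstack/skills/review"

# Old scheme: a bare "review" link collides with any other pack shipping a
# "review" skill. New scheme: the gstack- prefix keeps the namespace distinct.
ln -s gstack/skills/review "$SKILLS/gstack-review"
readlink "$SKILLS/gstack-review"
```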
### Fixed
- **Windows port race condition.** `findPort()` now uses `net.createServer()` instead of `Bun.serve()` for port probing, fixing an EADDRINUSE race on Windows where the polyfill's `stop()` is fire-and-forget. (#490)
- **package.json version sync.** VERSION file and package.json now agree (was stuck at 0.12.5.0).

## [0.12.8.1] - 2026-03-27 — zsh Glob Compatibility

Skill scripts now work correctly in zsh. Previously, bash code blocks in skill templates used raw glob patterns like `.github/workflows/*.yaml` and `ls ~/.gstack/projects/$SLUG/*-design-*.md` that would throw "no matches found" errors in zsh when no files matched. Fixed 38 instances across 13 templates and 2 resolvers using two approaches: `find`-based alternatives for complex patterns, and `setopt +o nomatch` guards for simple `ls` commands.
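
A minimal way to see why the `find`-based fix sidesteps the problem (scratch directory; runs under any POSIX shell, and the raw-glob failure it avoids only bites under zsh):

```shell
# Scratch dir with no matching files — in zsh, `ls "$WORK"/.github/workflows/*.yaml`
# would abort with "no matches found"; find just prints nothing and exits 0.
WORK=$(mktemp -d)
mkdir -p "$WORK/.github/workflows"
MATCHES=$(find "$WORK/.github/workflows" -name '*.yaml' 2>/dev/null)
echo "before: [$MATCHES]"

# Once a file exists, the same command finds it.
touch "$WORK/.github/workflows/ci.yaml"
MATCHES=$(find "$WORK/.github/workflows" -name '*.yaml')
echo "after: $(basename "$MATCHES")"
```

The other fix mentioned above, `setopt +o nomatch`, is zsh-specific: it makes failed globs expand to nothing instead of aborting, which is enough for simple `ls` guards.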

README.md
@@ -46,7 +46,7 @@ Fork it. Improve it. Make it yours. And if you want to hate on free open source

Open Claude Code and paste this. Claude does the rest.

-> Install gstack: run **`git clone https://github.com/garrytan/gstack.git ~/.claude/skills/gstack && cd ~/.claude/skills/gstack && ./setup`** then add a "gstack" section to CLAUDE.md that says to use the /browse skill from gstack for all web browsing, never use mcp\_\_claude-in-chrome\_\_\* tools, and lists the available skills: /office-hours, /plan-ceo-review, /plan-eng-review, /plan-design-review, /design-consultation, /review, /ship, /land-and-deploy, /canary, /benchmark, /browse, /qa, /qa-only, /design-review, /setup-browser-cookies, /setup-deploy, /retro, /investigate, /document-release, /codex, /cso, /autoplan, /careful, /freeze, /guard, /unfreeze, /gstack-upgrade. Then ask the user if they also want to add gstack to the current project so teammates get it.
+> Install gstack: run **`git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/.claude/skills/gstack && cd ~/.claude/skills/gstack && ./setup`** then add a "gstack" section to CLAUDE.md that says to use the /browse skill from gstack for all web browsing, never use mcp\_\_claude-in-chrome\_\_\* tools, and lists the available skills: /office-hours, /plan-ceo-review, /plan-eng-review, /plan-design-review, /design-consultation, /review, /ship, /land-and-deploy, /canary, /benchmark, /browse, /qa, /qa-only, /design-review, /setup-browser-cookies, /setup-deploy, /retro, /investigate, /document-release, /codex, /cso, /autoplan, /careful, /freeze, /guard, /unfreeze, /gstack-upgrade. Then ask the user if they also want to add gstack to the current project so teammates get it.

### Step 2: Add to your repo so teammates get it (optional)

@@ -54,6 +54,11 @@ Open Claude Code and paste this. Claude does the rest.

Real files get committed to your repo (not a submodule), so `git clone` just works. Everything lives inside `.claude/`. Nothing touches your PATH or runs in the background.

> **Contributing or need full history?** The commands above use `--depth 1` for a fast install. If you plan to contribute or need full git history, do a full clone instead:
> ```bash
> git clone https://github.com/garrytan/gstack.git ~/.claude/skills/gstack
> ```
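
If you already installed with a shallow clone and only later decide you need history, git can also deepen an existing clone in place — standard git behavior, nothing gstack-specific. A self-contained demo against a throwaway local repo:

```shell
set -e
# Build a tiny source repo with two commits.
SRC=$(mktemp -d)
git -C "$SRC" init -q
git -C "$SRC" -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m first
git -C "$SRC" -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m second

# Shallow clone it, then convert to a full clone in place.
DST=$(mktemp -d)/clone
git clone -q --single-branch --depth 1 "file://$SRC" "$DST"  # only the tip commit
git -C "$DST" fetch -q --unshallow                           # deepen to full history
git -C "$DST" rev-list --count HEAD
```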

### Codex, Gemini CLI, or Cursor

gstack works on any agent that supports the [SKILL.md standard](https://github.com/anthropics/claude-code). Skills live in `.agents/skills/` and are discovered automatically.

@@ -61,7 +66,7 @@ gstack works on any agent that supports the [SKILL.md standard](https://github.c

Install to one repo:

```bash
-git clone https://github.com/garrytan/gstack.git .agents/skills/gstack
+git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git .agents/skills/gstack
cd .agents/skills/gstack && ./setup --host codex
```

@@ -70,7 +75,7 @@ When setup runs from `.agents/skills/gstack`, it installs the generated Codex sk

Install once for your user account:

```bash
-git clone https://github.com/garrytan/gstack.git ~/gstack
+git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/gstack
cd ~/gstack && ./setup --host codex
```

@@ -81,7 +86,7 @@ discovery from the source repo checkout.

Or let setup auto-detect which agents you have installed:

```bash
-git clone https://github.com/garrytan/gstack.git ~/gstack
+git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/gstack
cd ~/gstack && ./setup --host auto
```

bin/gstack-uninstall (new executable file, 228 lines)
@@ -0,0 +1,228 @@
#!/usr/bin/env bash
# gstack-uninstall — remove gstack skills, state, and browse daemons
#
# Usage:
#   gstack-uninstall              — interactive uninstall (prompts before removing)
#   gstack-uninstall --force      — remove everything without prompting
#   gstack-uninstall --keep-state — remove skills but keep ~/.gstack/ data
#
# What gets REMOVED:
#   ~/.claude/skills/gstack   — global Claude skill install (git clone or vendored)
#   ~/.claude/skills/{skill}  — per-skill symlinks created by setup
#   ~/.codex/skills/gstack*   — Codex skill install + per-skill symlinks
#   ~/.kiro/skills/gstack*    — Kiro skill install + per-skill symlinks
#   ~/.gstack/                — global state (config, analytics, sessions, projects,
#                               repos, installation-id, browse error logs)
#   .claude/skills/gstack*    — project-local skill install (--local installs)
#   .gstack/                  — per-project browse state (in current git repo)
#   .gstack-worktrees/        — per-project test worktrees (in current git repo)
#   .agents/skills/gstack*    — Codex/Gemini/Cursor sidecar (in current git repo)
#   Running browse daemons    — stopped via SIGTERM before cleanup
#
# What is NOT REMOVED:
#   ~/Library/Caches/ms-playwright/ — Playwright Chromium (shared, may be used by other tools)
#   ~/.gstack-dev/                  — developer eval artifacts (only present for gstack contributors)
#
# Env overrides (for testing):
#   GSTACK_DIR       — override auto-detected gstack root
#   GSTACK_STATE_DIR — override ~/.gstack state directory
#
# NOTE: Uses set -uo pipefail (no -e) — uninstall must never abort partway.
set -uo pipefail

if [ -z "${HOME:-}" ]; then
  echo "ERROR: \$HOME is not set" >&2
  exit 1
fi

GSTACK_DIR="${GSTACK_DIR:-$(cd "$(dirname "$0")/.." && pwd)}"
STATE_DIR="${GSTACK_STATE_DIR:-$HOME/.gstack}"
_GIT_ROOT="$(git rev-parse --show-toplevel 2>/dev/null || true)"

# ─── Parse flags ─────────────────────────────────────────────
FORCE=0
KEEP_STATE=0
while [ $# -gt 0 ]; do
  case "$1" in
    --force) FORCE=1; shift ;;
    --keep-state) KEEP_STATE=1; shift ;;
    -h|--help)
      sed -n '2,/^[^#]/{ /^#/s/^# \{0,1\}//p; }' "$0"
      exit 0
      ;;
    *)
      echo "Unknown option: $1" >&2
      echo "Usage: gstack-uninstall [--force] [--keep-state]" >&2
      exit 1
      ;;
  esac
done

# ─── Confirmation ────────────────────────────────────────────
if [ "$FORCE" -eq 0 ]; then
  echo "This will remove gstack from your system:"
  { [ -d "$HOME/.claude/skills/gstack" ] || [ -L "$HOME/.claude/skills/gstack" ]; } && echo "  ~/.claude/skills/gstack (+ per-skill symlinks)"
  [ -d "$HOME/.codex/skills" ] && echo "  ~/.codex/skills/gstack*"
  [ -d "$HOME/.kiro/skills" ] && echo "  ~/.kiro/skills/gstack*"
  [ "$KEEP_STATE" -eq 0 ] && [ -d "$STATE_DIR" ] && echo "  $STATE_DIR"

  if [ -n "$_GIT_ROOT" ]; then
    [ -d "$_GIT_ROOT/.claude/skills/gstack" ] && echo "  $_GIT_ROOT/.claude/skills/gstack (project-local)"
    [ -d "$_GIT_ROOT/.gstack" ] && echo "  $_GIT_ROOT/.gstack/ (browse state + reports)"
    [ -d "$_GIT_ROOT/.gstack-worktrees" ] && echo "  $_GIT_ROOT/.gstack-worktrees/"
    [ -d "$_GIT_ROOT/.agents/skills" ] && echo "  $_GIT_ROOT/.agents/skills/gstack*"
  fi

  # Preview running daemons
  if [ -n "$_GIT_ROOT" ] && [ -f "$_GIT_ROOT/.gstack/browse.json" ]; then
    _PREVIEW_PID="$(awk -F'[:,]' '/"pid"/ { for(i=1;i<=NF;i++) if($i ~ /"pid"/) { gsub(/[^0-9]/, "", $(i+1)); print $(i+1); exit } }' "$_GIT_ROOT/.gstack/browse.json" 2>/dev/null || true)"
    [ -n "$_PREVIEW_PID" ] && kill -0 "$_PREVIEW_PID" 2>/dev/null && echo "  browse daemon (PID $_PREVIEW_PID) will be stopped"
  fi

  printf "\nContinue? [y/N] "
  read -r REPLY
  case "$REPLY" in
    y|Y|yes|YES) ;;
    *) echo "Aborted."; exit 0 ;;
  esac
fi

REMOVED=()

# ─── Stop running browse daemons ─────────────────────────────
# Browse servers write PID to {project}/.gstack/browse.json.
# Stop any we can find before removing state directories.
stop_browse_daemon() {
  local state_file="$1"
  if [ ! -f "$state_file" ]; then
    return
  fi
  local pid
  pid="$(awk -F'[:,]' '/"pid"/ { for(i=1;i<=NF;i++) if($i ~ /"pid"/) { gsub(/[^0-9]/, "", $(i+1)); print $(i+1); exit } }' "$state_file" 2>/dev/null || true)"
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    kill "$pid" 2>/dev/null || true
    # Wait up to 2s for graceful shutdown
    local waited=0
    while [ "$waited" -lt 4 ] && kill -0 "$pid" 2>/dev/null; do
      sleep 0.5
      waited=$(( waited + 1 ))
    done
    if kill -0 "$pid" 2>/dev/null; then
      kill -9 "$pid" 2>/dev/null || true
    fi
    REMOVED+=("browse daemon (PID $pid)")
  fi
}

# Stop daemon in current project
if [ -n "$_GIT_ROOT" ] && [ -f "$_GIT_ROOT/.gstack/browse.json" ]; then
  stop_browse_daemon "$_GIT_ROOT/.gstack/browse.json"
fi

# Stop daemons tracked in global projects directory
if [ -d "$STATE_DIR/projects" ]; then
  while IFS= read -r _BJ; do
    stop_browse_daemon "$_BJ"
  done < <(find "$STATE_DIR/projects" -name browse.json -path '*/.gstack/*' 2>/dev/null || true)
fi

# ─── Remove global Claude skills ────────────────────────────
CLAUDE_SKILLS="$HOME/.claude/skills"
if [ -d "$CLAUDE_SKILLS/gstack" ] || [ -L "$CLAUDE_SKILLS/gstack" ]; then
  # Remove per-skill symlinks that point into gstack/
  for _LINK in "$CLAUDE_SKILLS"/*; do
    [ -L "$_LINK" ] || continue
    _NAME="$(basename "$_LINK")"
    [ "$_NAME" = "gstack" ] && continue
    _TARGET="$(readlink "$_LINK" 2>/dev/null || true)"
    case "$_TARGET" in
      gstack/*|*/gstack/*) rm -f "$_LINK"; REMOVED+=("claude/$_NAME") ;;
    esac
  done

  rm -rf "$CLAUDE_SKILLS/gstack"
  REMOVED+=("~/.claude/skills/gstack")
fi

# ─── Remove project-local Claude skills (--local installs) ──
if [ -n "$_GIT_ROOT" ] && [ -d "$_GIT_ROOT/.claude/skills" ]; then
  for _LINK in "$_GIT_ROOT/.claude/skills"/*; do
    [ -L "$_LINK" ] || continue
    _TARGET="$(readlink "$_LINK" 2>/dev/null || true)"
    case "$_TARGET" in
      gstack/*|*/gstack/*) rm -f "$_LINK"; REMOVED+=("local claude/$(basename "$_LINK")") ;;
    esac
  done
  if [ -d "$_GIT_ROOT/.claude/skills/gstack" ] || [ -L "$_GIT_ROOT/.claude/skills/gstack" ]; then
    rm -rf "$_GIT_ROOT/.claude/skills/gstack"
    REMOVED+=("$_GIT_ROOT/.claude/skills/gstack")
  fi
fi

# ─── Remove Codex skills ────────────────────────────────────
CODEX_SKILLS="$HOME/.codex/skills"
if [ -d "$CODEX_SKILLS" ]; then
  for _ITEM in "$CODEX_SKILLS"/gstack*; do
    [ -e "$_ITEM" ] || [ -L "$_ITEM" ] || continue
    rm -rf "$_ITEM"
    REMOVED+=("codex/$(basename "$_ITEM")")
  done
fi

# ─── Remove Kiro skills ─────────────────────────────────────
KIRO_SKILLS="$HOME/.kiro/skills"
if [ -d "$KIRO_SKILLS" ]; then
  for _ITEM in "$KIRO_SKILLS"/gstack*; do
    [ -e "$_ITEM" ] || [ -L "$_ITEM" ] || continue
    rm -rf "$_ITEM"
    REMOVED+=("kiro/$(basename "$_ITEM")")
  done
fi

# ─── Remove per-project .agents/ sidecar ─────────────────────
if [ -n "$_GIT_ROOT" ] && [ -d "$_GIT_ROOT/.agents/skills" ]; then
  for _ITEM in "$_GIT_ROOT/.agents/skills"/gstack*; do
    [ -e "$_ITEM" ] || [ -L "$_ITEM" ] || continue
    rm -rf "$_ITEM"
    REMOVED+=("agents/$(basename "$_ITEM")")
  done

  rmdir "$_GIT_ROOT/.agents/skills" 2>/dev/null || true
  rmdir "$_GIT_ROOT/.agents" 2>/dev/null || true
fi

# ─── Remove per-project state ───────────────────────────────
if [ -n "$_GIT_ROOT" ]; then
  if [ -d "$_GIT_ROOT/.gstack" ]; then
    rm -rf "$_GIT_ROOT/.gstack"
    REMOVED+=("$_GIT_ROOT/.gstack/")
  fi
  if [ -d "$_GIT_ROOT/.gstack-worktrees" ]; then
    rm -rf "$_GIT_ROOT/.gstack-worktrees"
    REMOVED+=("$_GIT_ROOT/.gstack-worktrees/")
  fi
fi

# ─── Remove global state ────────────────────────────────────
if [ "$KEEP_STATE" -eq 0 ] && [ -d "$STATE_DIR" ]; then
  rm -rf "$STATE_DIR"
  REMOVED+=("$STATE_DIR")
fi

# ─── Clean up temp files ────────────────────────────────────
for _TMP in /tmp/gstack-latest-version /tmp/gstack-sketch-*.html /tmp/gstack-sketch.png /tmp/gstack-sync-*; do
  if [ -e "$_TMP" ]; then
    rm -f "$_TMP"
    REMOVED+=("$(basename "$_TMP")")
  fi
done

# ─── Summary ────────────────────────────────────────────────
if [ ${#REMOVED[@]} -gt 0 ]; then
  echo "Removed: ${REMOVED[*]}"
  echo "gstack uninstalled."
else
  echo "Nothing to remove — gstack is not installed."
fi

exit 0
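
The `awk` one-liner in that script is its dependency-free JSON "pid" extractor. It can be exercised standalone (the sample browse.json contents below are illustrative, not the daemon's actual output):

```shell
# Illustrative browse.json — the real file is written by the browse daemon.
STATE=$(mktemp)
printf '{"port":12345,"pid":4242,"startedAt":"2026-03-27"}\n' > "$STATE"

# Split on ":" and ",", find the field matching "pid", strip non-digits
# from the following field — no jq required.
PID=$(awk -F'[:,]' '/"pid"/ { for(i=1;i<=NF;i++) if($i ~ /"pid"/) { gsub(/[^0-9]/, "", $(i+1)); print $(i+1); exit } }' "$STATE")
echo "$PID"
```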
@@ -26,6 +26,7 @@ import { emitActivity, subscribe, getActivityAfter, getActivityHistory, getSubsc
// Bun.spawn used instead of child_process.spawn (compiled bun binaries
// fail posix_spawn on all executables including /bin/bash)
import * as fs from 'fs';
+import * as net from 'net';
import * as path from 'path';
import * as crypto from 'crypto';

@@ -547,17 +548,28 @@ export { READ_COMMANDS, WRITE_COMMANDS, META_COMMANDS };
const browserManager = new BrowserManager();
let isShuttingDown = false;

+// Test if a port is available by binding and immediately releasing.
+// Uses net.createServer instead of Bun.serve to avoid a race condition
+// in the Node.js polyfill where listen/close are async but the caller
+// expects synchronous bind semantics. See: #486
+function isPortAvailable(port: number, hostname: string = '127.0.0.1'): Promise<boolean> {
+  return new Promise((resolve) => {
+    const srv = net.createServer();
+    srv.once('error', () => resolve(false));
+    srv.listen(port, hostname, () => {
+      srv.close(() => resolve(true));
+    });
+  });
+}

// Find port: explicit BROWSE_PORT, or random in 10000-60000
async function findPort(): Promise<number> {
  // Explicit port override (for debugging)
  if (BROWSE_PORT) {
-    try {
-      const testServer = Bun.serve({ port: BROWSE_PORT, fetch: () => new Response('ok') });
-      testServer.stop();
-      return BROWSE_PORT;
-    } catch {
-      throw new Error(`[browse] Port ${BROWSE_PORT} (from BROWSE_PORT env) is in use`);
-    }
+    if (await isPortAvailable(BROWSE_PORT)) {
+      return BROWSE_PORT;
+    }
+    throw new Error(`[browse] Port ${BROWSE_PORT} (from BROWSE_PORT env) is in use`);
  }

// Random port with retry
@@ -566,12 +578,8 @@ async function findPort(): Promise<number> {
  const MAX_RETRIES = 5;
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    const port = MIN_PORT + Math.floor(Math.random() * (MAX_PORT - MIN_PORT));
-    try {
-      const testServer = Bun.serve({ port, fetch: () => new Response('ok') });
-      testServer.stop();
-      return port;
-    } catch {
-      continue;
-    }
+    if (await isPortAvailable(port)) {
+      return port;
+    }
  }
  throw new Error(`[browse] No available port after ${MAX_RETRIES} attempts in range ${MIN_PORT}-${MAX_PORT}`);

browse/test/findport.test.ts (new file, 191 lines)
@@ -0,0 +1,191 @@
import { describe, test, expect } from 'bun:test';
import * as net from 'net';
import * as path from 'path';

const polyfillPath = path.resolve(import.meta.dir, '../src/bun-polyfill.cjs');

// Helper: bind a port and hold it open, returning a cleanup function
function occupyPort(port: number): Promise<() => Promise<void>> {
  return new Promise((resolve, reject) => {
    const srv = net.createServer();
    srv.once('error', reject);
    srv.listen(port, '127.0.0.1', () => {
      resolve(() => new Promise<void>((r) => srv.close(() => r())));
    });
  });
}

// Helper: find a known-free port by binding to 0
function getFreePort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const srv = net.createServer();
    srv.once('error', reject);
    srv.listen(0, '127.0.0.1', () => {
      const port = (srv.address() as net.AddressInfo).port;
      srv.close(() => resolve(port));
    });
  });
}

describe('findPort / isPortAvailable', () => {

  test('isPortAvailable returns true for a free port', async () => {
    // Use the same isPortAvailable logic from server.ts
    const port = await getFreePort();

    const available = await new Promise<boolean>((resolve) => {
      const srv = net.createServer();
      srv.once('error', () => resolve(false));
      srv.listen(port, '127.0.0.1', () => {
        srv.close(() => resolve(true));
      });
    });

    expect(available).toBe(true);
  });

  test('isPortAvailable returns false for an occupied port', async () => {
    const port = await getFreePort();
    const release = await occupyPort(port);

    try {
      const available = await new Promise<boolean>((resolve) => {
        const srv = net.createServer();
        srv.once('error', () => resolve(false));
        srv.listen(port, '127.0.0.1', () => {
          srv.close(() => resolve(true));
        });
      });

      expect(available).toBe(false);
    } finally {
      await release();
    }
  });

  test('port is actually free after isPortAvailable returns true', async () => {
    // This is the core race condition test: after isPortAvailable says
    // a port is free, can we IMMEDIATELY bind to it?
    const port = await getFreePort();

    // Simulate isPortAvailable
    const isFree = await new Promise<boolean>((resolve) => {
      const srv = net.createServer();
      srv.once('error', () => resolve(false));
      srv.listen(port, '127.0.0.1', () => {
        srv.close(() => resolve(true));
      });
    });

    expect(isFree).toBe(true);

    // Now immediately try to bind — this would fail with the old
    // Bun.serve() polyfill approach because the test server's
    // listen() would still be pending
    const canBind = await new Promise<boolean>((resolve) => {
      const srv = net.createServer();
      srv.once('error', () => resolve(false));
      srv.listen(port, '127.0.0.1', () => {
        srv.close(() => resolve(true));
      });
    });

    expect(canBind).toBe(true);
  });

  test('polyfill Bun.serve stop() is fire-and-forget (async)', async () => {
    // Verify that the polyfill's stop() does NOT wait for the socket
    // to actually close — this is the root cause of the race condition.
    // On macOS/Linux the OS reclaims the port fast enough that the race
    // rarely manifests, but on Windows TIME_WAIT makes it 100% repro.
    const result = Bun.spawnSync(['node', '-e', `
      require('${polyfillPath}');
      const net = require('net');

      async function test() {
        const port = 10000 + Math.floor(Math.random() * 50000);

        const testServer = Bun.serve({
          port,
          hostname: '127.0.0.1',
          fetch: () => new Response('ok'),
        });

        // stop() returns undefined — it does NOT return a Promise,
        // so callers cannot await socket teardown
        const retval = testServer.stop();
        console.log(typeof retval === 'undefined' ? 'FIRE_AND_FORGET' : 'AWAITABLE');
      }

      test();
    `], { stdout: 'pipe', stderr: 'pipe' });

    const output = result.stdout.toString().trim();
    // Confirms the polyfill's stop() is fire-and-forget — callers
    // cannot wait for the port to be released, hence the race
    expect(output).toBe('FIRE_AND_FORGET');
  });

  test('net.createServer approach does not have the race condition', async () => {
    // Prove the fix: net.createServer with proper async bind/close
    // releases the port cleanly
    const result = Bun.spawnSync(['node', '-e', `
      const net = require('net');

      async function testFix() {
        const port = 10000 + Math.floor(Math.random() * 50000);

        // Simulate the NEW isPortAvailable: proper async bind/close
        const isFree = await new Promise((resolve) => {
          const srv = net.createServer();
          srv.once('error', () => resolve(false));
          srv.listen(port, '127.0.0.1', () => {
            srv.close(() => resolve(true));
          });
        });

        if (!isFree) {
          console.log('PORT_BUSY');
          return;
        }

        // Immediately try to bind — should succeed because close()
        // completed before the Promise resolved
        const canBind = await new Promise((resolve) => {
          const srv = net.createServer();
          srv.once('error', () => resolve(false));
          srv.listen(port, '127.0.0.1', () => {
            srv.close(() => resolve(true));
          });
        });

        console.log(canBind ? 'FIX_WORKS' : 'FIX_BROKEN');
      }

      testFix();
    `], { stdout: 'pipe', stderr: 'pipe' });

    const output = result.stdout.toString().trim();
    expect(output).toBe('FIX_WORKS');
  });

  test('isPortAvailable handles rapid sequential checks', async () => {
    // Stress test: check the same port multiple times in sequence
    const port = await getFreePort();
    const results: boolean[] = [];

    for (let i = 0; i < 5; i++) {
      const available = await new Promise<boolean>((resolve) => {
        const srv = net.createServer();
        srv.once('error', () => resolve(false));
        srv.listen(port, '127.0.0.1', () => {
          srv.close(() => resolve(true));
        });
      });
      results.push(available);
    }

    // All 5 checks should succeed — no leaked sockets
    expect(results).toEqual([true, true, true, true, true]);
  });
});

@@ -674,21 +674,19 @@ Use AskUserQuestion to confirm. If the user disagrees with a premise, revise und

## Phase 3.5: Cross-Model Second Opinion (optional)

-**Binary check first — no question if unavailable:**
+**Binary check first:**

```bash
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
```
If `CODEX_NOT_AVAILABLE`: skip Phase 3.5 entirely — no message, no AskUserQuestion. Proceed directly to Phase 4.
|
Use AskUserQuestion (regardless of codex availability):
|
||||||
|
|
||||||
If `CODEX_AVAILABLE`: use AskUserQuestion:
|
> Want a second opinion from an independent AI perspective? It will review your problem statement, key answers, premises, and any landscape findings from this session without having seen this conversation — it gets a structured summary. Usually takes 2-5 minutes.
|
||||||
|
|
||||||
> Want a second opinion from a different AI model? Codex will independently review your problem statement, key answers, premises, and any landscape findings from this session. It hasn't seen this conversation — it gets a structured summary. Usually takes 2-5 minutes.
|
|
||||||
> A) Yes, get a second opinion
|
> A) Yes, get a second opinion
|
||||||
> B) No, proceed to alternatives
|
> B) No, proceed to alternatives
|
||||||
|
|
||||||
If B: skip Phase 3.5 entirely. Remember that Codex did NOT run (affects design doc, founder signals, and Phase 4 below).
|
If B: skip Phase 3.5 entirely. Remember that the second opinion did NOT run (affects design doc, founder signals, and Phase 4 below).
|
||||||
|
|
||||||
**If A: Run the Codex cold read.**
|
**If A: Run the Codex cold read.**
|
||||||
|
|
||||||
@@ -726,15 +724,26 @@ cat "$TMPERR_OH"
 rm -f "$TMPERR_OH" "$CODEX_PROMPT_FILE"
 ```
 
-**Error handling:** All errors are non-blocking — Codex second opinion is a quality enhancement, not a prerequisite.
-- **Auth failure:** If stderr contains "auth", "login", "unauthorized", or "API key": "Codex authentication failed. Run \`codex login\` to authenticate. Skipping second opinion."
-- **Timeout:** "Codex timed out after 5 minutes. Skipping second opinion."
-- **Empty response:** "Codex returned no response. Stderr: <paste relevant error>. Skipping second opinion."
+**Error handling:** All errors are non-blocking — second opinion is a quality enhancement, not a prerequisite.
+- **Auth failure:** If stderr contains "auth", "login", "unauthorized", or "API key": "Codex authentication failed. Run \`codex login\` to authenticate." Fall back to Claude subagent.
+- **Timeout:** "Codex timed out after 5 minutes." Fall back to Claude subagent.
+- **Empty response:** "Codex returned no response." Fall back to Claude subagent.
 
-On any error, proceed to Phase 4 — do NOT fall back to a Claude subagent (this is brainstorming, not adversarial review).
+On any Codex error, fall back to the Claude subagent below.
+
+**If CODEX_NOT_AVAILABLE (or Codex errored):**
+
+Dispatch via the Agent tool. The subagent has fresh context — genuine independence.
+
+Subagent prompt: same mode-appropriate prompt as above (Startup or Builder variant).
+
+Present findings under a `SECOND OPINION (Claude subagent):` header.
+
+If the subagent fails or times out: "Second opinion unavailable. Continuing to Phase 4."
 
 4. **Presentation:**
 
+If Codex ran:
 ```
 SECOND OPINION (Codex):
 ════════════════════════════════════════════════════════════
@@ -742,10 +751,18 @@ SECOND OPINION (Codex):
 ════════════════════════════════════════════════════════════
 ```
 
-5. **Cross-model synthesis:** After presenting Codex output, provide 3-5 bullet synthesis:
-- Where Claude agrees with Codex
+If Claude subagent ran:
+```
+SECOND OPINION (Claude subagent):
+════════════════════════════════════════════════════════════
+<full subagent output, verbatim — do not truncate or summarize>
+════════════════════════════════════════════════════════════
+```
+
+5. **Cross-model synthesis:** After presenting the second opinion output, provide 3-5 bullet synthesis:
+- Where Claude agrees with the second opinion
 - Where Claude disagrees and why
-- Whether Codex's challenged premise changes Claude's recommendation
+- Whether the challenged premise changes Claude's recommendation
 
 6. **Premise revision check:** If Codex challenged an agreed premise, use AskUserQuestion:
 
@@ -783,7 +800,7 @@ Rules:
 - One must be the **"minimal viable"** (fewest files, smallest diff, ships fastest).
 - One must be the **"ideal architecture"** (best long-term trajectory, most elegant).
 - One can be **creative/lateral** (unexpected approach, different framing of the problem).
-- If Codex proposed a prototype in Phase 3.5, consider using it as a starting point for the creative/lateral approach.
+- If the second opinion (Codex or Claude subagent) proposed a prototype in Phase 3.5, consider using it as a starting point for the creative/lateral approach.
 
 **RECOMMENDATION:** Choose [X] because [one-line reason].
 
@@ -949,7 +966,7 @@ Supersedes: {prior filename — omit this line if first design on this branch}
 {from Phase 3}
 
 ## Cross-Model Perspective
-{If Codex ran in Phase 3.5: Codex's independent cold read — steelman, key insight, challenged premise, prototype suggestion. Verbatim or close paraphrase of what Codex said. If Codex did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
+{If second opinion ran in Phase 3.5 (Codex or Claude subagent): independent cold read — steelman, key insight, challenged premise, prototype suggestion. Verbatim or close paraphrase. If second opinion did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
 
 ## Approaches Considered
 ### Approach A: {name}
@@ -1006,7 +1023,7 @@ Supersedes: {prior filename — omit this line if first design on this branch}
 {from Phase 3}
 
 ## Cross-Model Perspective
-{If Codex ran in Phase 3.5: Codex's independent cold read — coolest version, key insight, existing tools, prototype suggestion. Verbatim or close paraphrase of what Codex said. If Codex did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
+{If second opinion ran in Phase 3.5 (Codex or Claude subagent): independent cold read — coolest version, key insight, existing tools, prototype suggestion. Verbatim or close paraphrase. If second opinion did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
 
 ## Approaches Considered
 ### Approach A: {name}
@@ -382,7 +382,7 @@ Rules:
 - One must be the **"minimal viable"** (fewest files, smallest diff, ships fastest).
 - One must be the **"ideal architecture"** (best long-term trajectory, most elegant).
 - One can be **creative/lateral** (unexpected approach, different framing of the problem).
-- If Codex proposed a prototype in Phase 3.5, consider using it as a starting point for the creative/lateral approach.
+- If the second opinion (Codex or Claude subagent) proposed a prototype in Phase 3.5, consider using it as a starting point for the creative/lateral approach.
 
 **RECOMMENDATION:** Choose [X] because [one-line reason].
 
@@ -462,7 +462,7 @@ Supersedes: {prior filename — omit this line if first design on this branch}
 {from Phase 3}
 
 ## Cross-Model Perspective
-{If Codex ran in Phase 3.5: Codex's independent cold read — steelman, key insight, challenged premise, prototype suggestion. Verbatim or close paraphrase of what Codex said. If Codex did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
+{If second opinion ran in Phase 3.5 (Codex or Claude subagent): independent cold read — steelman, key insight, challenged premise, prototype suggestion. Verbatim or close paraphrase. If second opinion did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
 
 ## Approaches Considered
 ### Approach A: {name}
@@ -519,7 +519,7 @@ Supersedes: {prior filename — omit this line if first design on this branch}
 {from Phase 3}
 
 ## Cross-Model Perspective
-{If Codex ran in Phase 3.5: Codex's independent cold read — coolest version, key insight, existing tools, prototype suggestion. Verbatim or close paraphrase of what Codex said. If Codex did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
+{If second opinion ran in Phase 3.5 (Codex or Claude subagent): independent cold read — coolest version, key insight, existing tools, prototype suggestion. Verbatim or close paraphrase. If second opinion did NOT run (skipped or unavailable): omit this section entirely — do not include it.}
 
 ## Approaches Considered
 ### Approach A: {name}
@@ -1,6 +1,6 @@
 {
   "name": "gstack",
-  "version": "0.12.8.1",
+  "version": "0.12.9.0",
   "description": "Garry's Stack — Claude Code skills + fast headless browser. One repo, one install, entire AI engineering workflow.",
   "license": "MIT",
   "type": "module",
@@ -49,6 +49,13 @@ Be terse. For each issue: one line describing the problem, one line with the fix
 #### LLM Output Trust Boundary
 - LLM-generated values (emails, URLs, names) written to DB or passed to mailers without format validation. Add lightweight guards (`EMAIL_REGEXP`, `URI.parse`, `.strip`) before persisting.
 - Structured tool output (arrays, hashes) accepted without type/shape checks before database writes.
+- LLM-generated URLs fetched without allowlist — SSRF risk if URL points to internal network (Python: `urllib.parse.urlparse` → check hostname against blocklist before `requests.get`/`httpx.get`)
+- LLM output stored in knowledge bases or vector DBs without sanitization — stored prompt injection risk
+
+#### Shell Injection (Python-specific)
+- `subprocess.run()` / `subprocess.call()` / `subprocess.Popen()` with `shell=True` AND f-string/`.format()` interpolation in the command string — use argument arrays instead
+- `os.system()` with variable interpolation — replace with `subprocess.run()` using argument arrays
+- `eval()` / `exec()` on LLM-generated code without sandboxing
 
 #### Enum & Value Completeness
 When the diff introduces a new enum value, status string, tier name, or type constant:
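The shell-injection items above are Python-specific, but the underlying rule is language-agnostic: pass untrusted (e.g. LLM-generated) values as an argument array, never interpolate them into a shell string. A hypothetical Node/TypeScript illustration of the same principle (not code from this repo):

```typescript
import { execFileSync } from 'child_process';

// Untrusted input containing shell metacharacters. With an argument array
// and no shell involved, the ';' is inert data, not a command separator.
const untrusted = 'hello; rm -rf /tmp/x';

// Safe: execFileSync runs the binary directly with argv, no /bin/sh.
const out = execFileSync('echo', [untrusted], { encoding: 'utf8' });
console.log(out.trim());

// Unsafe (do NOT do this): exec(`echo ${untrusted}`) would hand the whole
// string to a shell, where ';' ends the command and executes the rest.
```

This is the TypeScript analogue of replacing `os.system()` / `shell=True` with `subprocess.run([...])` in Python.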
@@ -59,6 +66,16 @@ To do this: use Grep to find all references to the sibling values (e.g., grep fo
 
 ### Pass 2 — INFORMATIONAL
 
+#### Async/Sync Mixing (Python-specific)
+- Synchronous `subprocess.run()`, `open()`, `requests.get()` inside `async def` endpoints — blocks the event loop. Use `asyncio.to_thread()`, `aiofiles`, or `httpx.AsyncClient` instead.
+- `time.sleep()` inside async functions — use `asyncio.sleep()`
+- Sync DB calls in async context without `run_in_executor()` wrapping
+
+#### Column/Field Name Safety
+- Verify column names in ORM queries (`.select()`, `.eq()`, `.gte()`, `.order()`) against actual DB schema — wrong column names silently return empty results or throw swallowed errors
+- Check `.get()` calls on query results use the column name that was actually selected
+- Cross-reference with schema documentation when available
 
 #### Conditional Side Effects
 - Code paths that branch on a condition but forget to apply a side effect on one branch. Example: item promoted to verified but URL only attached when a secondary condition is true — the other branch promotes without the URL, creating an inconsistent record.
 - Log messages that claim an action happened but the action was conditionally skipped. The log should reflect what actually occurred.
@@ -251,21 +251,19 @@ export function generateCodexSecondOpinion(ctx: TemplateContext): string {
 
   return `## Phase 3.5: Cross-Model Second Opinion (optional)
 
-**Binary check first — no question if unavailable:**
+**Binary check first:**
 
 \`\`\`bash
 which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
 \`\`\`
 
-If \`CODEX_NOT_AVAILABLE\`: skip Phase 3.5 entirely — no message, no AskUserQuestion. Proceed directly to Phase 4.
-
-If \`CODEX_AVAILABLE\`: use AskUserQuestion:
+Use AskUserQuestion (regardless of codex availability):
 
-> Want a second opinion from a different AI model? Codex will independently review your problem statement, key answers, premises, and any landscape findings from this session. It hasn't seen this conversation — it gets a structured summary. Usually takes 2-5 minutes.
+> Want a second opinion from an independent AI perspective? It will review your problem statement, key answers, premises, and any landscape findings from this session without having seen this conversation — it gets a structured summary. Usually takes 2-5 minutes.
 
 > A) Yes, get a second opinion
 > B) No, proceed to alternatives
 
-If B: skip Phase 3.5 entirely. Remember that Codex did NOT run (affects design doc, founder signals, and Phase 4 below).
+If B: skip Phase 3.5 entirely. Remember that the second opinion did NOT run (affects design doc, founder signals, and Phase 4 below).
 
 **If A: Run the Codex cold read.**
 
@@ -303,15 +301,26 @@ cat "$TMPERR_OH"
 rm -f "$TMPERR_OH" "$CODEX_PROMPT_FILE"
 \`\`\`
 
-**Error handling:** All errors are non-blocking — Codex second opinion is a quality enhancement, not a prerequisite.
-- **Auth failure:** If stderr contains "auth", "login", "unauthorized", or "API key": "Codex authentication failed. Run \\\`codex login\\\` to authenticate. Skipping second opinion."
-- **Timeout:** "Codex timed out after 5 minutes. Skipping second opinion."
-- **Empty response:** "Codex returned no response. Stderr: <paste relevant error>. Skipping second opinion."
+**Error handling:** All errors are non-blocking — second opinion is a quality enhancement, not a prerequisite.
+- **Auth failure:** If stderr contains "auth", "login", "unauthorized", or "API key": "Codex authentication failed. Run \\\`codex login\\\` to authenticate." Fall back to Claude subagent.
+- **Timeout:** "Codex timed out after 5 minutes." Fall back to Claude subagent.
+- **Empty response:** "Codex returned no response." Fall back to Claude subagent.
 
-On any error, proceed to Phase 4 — do NOT fall back to a Claude subagent (this is brainstorming, not adversarial review).
+On any Codex error, fall back to the Claude subagent below.
+
+**If CODEX_NOT_AVAILABLE (or Codex errored):**
+
+Dispatch via the Agent tool. The subagent has fresh context — genuine independence.
+
+Subagent prompt: same mode-appropriate prompt as above (Startup or Builder variant).
+
+Present findings under a \`SECOND OPINION (Claude subagent):\` header.
+
+If the subagent fails or times out: "Second opinion unavailable. Continuing to Phase 4."
 
 4. **Presentation:**
 
+If Codex ran:
 \`\`\`
 SECOND OPINION (Codex):
 ════════════════════════════════════════════════════════════
@@ -319,10 +328,18 @@ SECOND OPINION (Codex):
 ════════════════════════════════════════════════════════════
 \`\`\`
 
-5. **Cross-model synthesis:** After presenting Codex output, provide 3-5 bullet synthesis:
-- Where Claude agrees with Codex
+If Claude subagent ran:
+\`\`\`
+SECOND OPINION (Claude subagent):
+════════════════════════════════════════════════════════════
+<full subagent output, verbatim — do not truncate or summarize>
+════════════════════════════════════════════════════════════
+\`\`\`
+
+5. **Cross-model synthesis:** After presenting the second opinion output, provide 3-5 bullet synthesis:
+- Where Claude agrees with the second opinion
 - Where Claude disagrees and why
-- Whether Codex's challenged premise changes Claude's recommendation
+- Whether the challenged premise changes Claude's recommendation
 
 6. **Premise revision check:** If Codex challenged an agreed premise, use AskUserQuestion:
 
53  setup
@@ -23,11 +23,13 @@ esac
 # ─── Parse flags ──────────────────────────────────────────────
 HOST="claude"
 LOCAL_INSTALL=0
+SKILL_PREFIX=1
 while [ $# -gt 0 ]; do
   case "$1" in
     --host) [ -z "$2" ] && echo "Missing value for --host (expected claude, codex, kiro, or auto)" >&2 && exit 1; HOST="$2"; shift 2 ;;
     --host=*) HOST="${1#--host=}"; shift ;;
     --local) LOCAL_INSTALL=1; shift ;;
+    --no-prefix) SKILL_PREFIX=0; shift ;;
     *) shift ;;
   esac
 done
@@ -199,6 +201,9 @@ fi
 mkdir -p "$HOME/.gstack/projects"
 
 # ─── Helper: link Claude skill subdirectories into a skills parent directory ──
+# When SKILL_PREFIX=1 (default), symlinks are prefixed with "gstack-" to avoid
+# namespace pollution (e.g., gstack-review instead of review).
+# Use --no-prefix to restore the old flat names.
 link_claude_skill_dirs() {
   local gstack_dir="$1"
   local skills_dir="$2"
@@ -208,11 +213,20 @@ link_claude_skill_dirs() {
       skill_name="$(basename "$skill_dir")"
       # Skip node_modules
       [ "$skill_name" = "node_modules" ] && continue
-      target="$skills_dir/$skill_name"
+      # Apply gstack- prefix unless --no-prefix or already prefixed
+      if [ "$SKILL_PREFIX" -eq 1 ]; then
+        case "$skill_name" in
+          gstack-*) link_name="$skill_name" ;;
+          *) link_name="gstack-$skill_name" ;;
+        esac
+      else
+        link_name="$skill_name"
+      fi
+      target="$skills_dir/$link_name"
       # Create or update symlink; skip if a real file/directory exists
       if [ -L "$target" ] || [ ! -e "$target" ]; then
         ln -snf "gstack/$skill_name" "$target"
-        linked+=("$skill_name")
+        linked+=("$link_name")
       fi
     fi
   done
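The prefix rules in `link_claude_skill_dirs` reduce to a small pure function. A sketch of the same logic in TypeScript, mirroring the shell `case` statement above (the function name is illustrative):

```typescript
// With prefixing on (the default), plain skill names gain a "gstack-" prefix
// while names that already carry it are left alone; with --no-prefix the
// name passes through unchanged.
function linkNameFor(skillName: string, prefix: boolean): string {
  if (!prefix) return skillName;
  return skillName.startsWith('gstack-') ? skillName : `gstack-${skillName}`;
}

console.log(linkNameFor('review', true));         // gstack-review
console.log(linkNameFor('gstack-upgrade', true)); // gstack-upgrade (no double prefix)
console.log(linkNameFor('review', false));        // review
```

The already-prefixed branch matters for dirs like `gstack-upgrade`, which would otherwise become `gstack-gstack-upgrade`.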
@@ -221,6 +235,37 @@ link_claude_skill_dirs() {
   fi
 }
 
+# ─── Helper: remove old unprefixed Claude skill symlinks ──────────────────────
+# Migration: when switching from flat names to gstack- prefixed names,
+# clean up stale symlinks that point into the gstack directory.
+cleanup_old_claude_symlinks() {
+  local gstack_dir="$1"
+  local skills_dir="$2"
+  local removed=()
+  for skill_dir in "$gstack_dir"/*/; do
+    if [ -f "$skill_dir/SKILL.md" ]; then
+      skill_name="$(basename "$skill_dir")"
+      [ "$skill_name" = "node_modules" ] && continue
+      # Skip already-prefixed dirs (gstack-upgrade) — no old symlink to clean
+      case "$skill_name" in gstack-*) continue ;; esac
+      old_target="$skills_dir/$skill_name"
+      # Only remove if it's a symlink pointing into gstack/
+      if [ -L "$old_target" ]; then
+        link_dest="$(readlink "$old_target" 2>/dev/null || true)"
+        case "$link_dest" in
+          gstack/*|*/gstack/*)
+            rm -f "$old_target"
+            removed+=("$skill_name")
+            ;;
+        esac
+      fi
+    fi
+  done
+  if [ ${#removed[@]} -gt 0 ]; then
+    echo "  cleaned up old symlinks: ${removed[*]}"
+  fi
+}
+
 # ─── Helper: link generated Codex skills into a skills parent directory ──
 # Installs from .agents/skills/gstack-* (the generated Codex-format skills)
 # instead of source dirs (which have Claude paths).
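The safety property of `cleanup_old_claude_symlinks` is the `readlink` guard: only symlinks whose destination matches `gstack/*` or `*/gstack/*` are removed, so a user's unrelated symlinks survive. That glob pair can be sketched as a predicate (a hedged TypeScript translation, not the repo's code; shell `*` also matches empty, which `includes` approximates):

```typescript
// True when a symlink destination points into a gstack/ directory, matching
// the shell patterns gstack/* and */gstack/* in the cleanup helper above.
function pointsIntoGstack(linkDest: string): boolean {
  return linkDest.startsWith('gstack/') || linkDest.includes('/gstack/');
}

console.log(pointsIntoGstack('gstack/review'));        // true  (relative link)
console.log(pointsIntoGstack('../repos/gstack/ship')); // true  (nested path)
console.log(pointsIntoGstack('other-tool/skill'));     // false (left alone)
```

Checking the destination rather than the link name is what keeps the migration from deleting symlinks that merely happen to share a skill's name.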
@@ -348,6 +393,10 @@ fi
 
 if [ "$INSTALL_CLAUDE" -eq 1 ]; then
   if [ "$SKILLS_BASENAME" = "skills" ]; then
+    # Clean up old unprefixed symlinks from previous installs
+    if [ "$SKILL_PREFIX" -eq 1 ]; then
+      cleanup_old_claude_symlinks "$SOURCE_GSTACK_DIR" "$INSTALL_SKILLS_DIR"
+    fi
     link_claude_skill_dirs "$SOURCE_GSTACK_DIR" "$INSTALL_SKILLS_DIR"
     if [ "$LOCAL_INSTALL" -eq 1 ]; then
       echo "gstack ready (project-local)."
@@ -1023,12 +1023,18 @@ describe('CODEX_SECOND_OPINION resolver', () => {
   });
 
   test('contains opt-in AskUserQuestion text', () => {
-    expect(content).toContain('second opinion from a different AI model');
+    expect(content).toContain('second opinion from an independent AI perspective');
   });
 
   test('contains cross-model synthesis instructions', () => {
     expect(content).toMatch(/[Ss]ynthesis/);
-    expect(content).toContain('Where Claude agrees with Codex');
+    expect(content).toContain('Where Claude agrees with the second opinion');
+  });
+
+  test('contains Claude subagent fallback', () => {
+    expect(content).toContain('CODEX_NOT_AVAILABLE');
+    expect(content).toContain('Agent tool');
+    expect(content).toContain('SECOND OPINION (Claude subagent)');
   });
 
   test('contains premise revision check', () => {
@@ -1635,6 +1641,50 @@ describe('setup script validation', () => {
     expect(setupContent).toContain('$HOME/.gstack/repos/gstack');
     expect(setupContent).toContain('avoid duplicate skill discovery');
   });
+
+  // --- Symlink prefix tests (PR #503) ---
+
+  test('link_claude_skill_dirs applies gstack- prefix by default', () => {
+    const fnStart = setupContent.indexOf('link_claude_skill_dirs()');
+    const fnEnd = setupContent.indexOf('}', setupContent.indexOf('linked[@]}', fnStart));
+    const fnBody = setupContent.slice(fnStart, fnEnd);
+    expect(fnBody).toContain('SKILL_PREFIX');
+    expect(fnBody).toContain('link_name="gstack-$skill_name"');
+  });
+
+  test('link_claude_skill_dirs preserves already-prefixed dirs', () => {
+    const fnStart = setupContent.indexOf('link_claude_skill_dirs()');
+    const fnEnd = setupContent.indexOf('}', setupContent.indexOf('linked[@]}', fnStart));
+    const fnBody = setupContent.slice(fnStart, fnEnd);
+    // gstack-* dirs should keep their name (e.g., gstack-upgrade stays gstack-upgrade)
+    expect(fnBody).toContain('gstack-*) link_name="$skill_name"');
+  });
+
+  test('setup supports --no-prefix flag', () => {
+    expect(setupContent).toContain('--no-prefix');
+    expect(setupContent).toContain('SKILL_PREFIX=0');
+  });
+
+  test('cleanup_old_claude_symlinks removes only gstack-pointing symlinks', () => {
+    expect(setupContent).toContain('cleanup_old_claude_symlinks');
+    const fnStart = setupContent.indexOf('cleanup_old_claude_symlinks()');
+    const fnEnd = setupContent.indexOf('}', setupContent.indexOf('removed[@]}', fnStart));
+    const fnBody = setupContent.slice(fnStart, fnEnd);
+    // Should check readlink before removing
+    expect(fnBody).toContain('readlink');
+    expect(fnBody).toContain('gstack/*');
+    // Should skip already-prefixed dirs
+    expect(fnBody).toContain('gstack-*) continue');
+  });
+
+  test('cleanup runs before link when prefix is enabled', () => {
+    // In the Claude install section, cleanup should happen before linking
+    const claudeInstallSection = setupContent.slice(
+      setupContent.indexOf('INSTALL_CLAUDE'),
+      setupContent.lastIndexOf('link_claude_skill_dirs')
+    );
+    expect(claudeInstallSection).toContain('cleanup_old_claude_symlinks');
+  });
 });
 
 describe('discover-skills hidden directory filtering', () => {
165  test/uninstall.test.ts  Normal file
@@ -0,0 +1,165 @@
+import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
+import { spawnSync } from 'child_process';
+import * as fs from 'fs';
+import * as path from 'path';
+import * as os from 'os';
+
+const ROOT = path.resolve(import.meta.dir, '..');
+const UNINSTALL = path.join(ROOT, 'bin', 'gstack-uninstall');
+
+describe('gstack-uninstall', () => {
+  test('syntax check passes', () => {
+    const result = spawnSync('bash', ['-n', UNINSTALL], { stdio: 'pipe' });
+    expect(result.status).toBe(0);
+  });
+
+  test('--help prints usage and exits 0', () => {
+    const result = spawnSync('bash', [UNINSTALL, '--help'], { stdio: 'pipe' });
+    expect(result.status).toBe(0);
+    const output = result.stdout.toString();
+    expect(output).toContain('gstack-uninstall');
+    expect(output).toContain('--force');
+    expect(output).toContain('--keep-state');
+  });
+
+  test('unknown flag exits with error', () => {
+    const result = spawnSync('bash', [UNINSTALL, '--bogus'], {
+      stdio: 'pipe',
+      env: { ...process.env, HOME: '/nonexistent' },
+    });
+    expect(result.status).toBe(1);
+    expect(result.stderr.toString()).toContain('Unknown option');
+  });
+
+  describe('integration tests with mock layout', () => {
+    let tmpDir: string;
+    let mockHome: string;
+    let mockGitRoot: string;
+
+    beforeEach(() => {
+      tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-uninstall-test-'));
+      mockHome = path.join(tmpDir, 'home');
+      mockGitRoot = path.join(tmpDir, 'repo');
+
+      // Create mock gstack install layout
+      fs.mkdirSync(path.join(mockHome, '.claude', 'skills', 'gstack'), { recursive: true });
+      fs.writeFileSync(path.join(mockHome, '.claude', 'skills', 'gstack', 'SKILL.md'), 'test');
+
+      // Create per-skill symlinks (both old unprefixed and new prefixed)
+      fs.symlinkSync('gstack/review', path.join(mockHome, '.claude', 'skills', 'review'));
+      fs.symlinkSync('gstack/ship', path.join(mockHome, '.claude', 'skills', 'gstack-ship'));
+
+      // Create a non-gstack symlink (should NOT be removed)
+      fs.mkdirSync(path.join(mockHome, '.claude', 'skills', 'other-tool'), { recursive: true });
+
+      // Create state directory
+      fs.mkdirSync(path.join(mockHome, '.gstack', 'projects'), { recursive: true });
+      fs.writeFileSync(path.join(mockHome, '.gstack', 'config.json'), '{}');
+
+      // Create mock git repo
+      fs.mkdirSync(mockGitRoot, { recursive: true });
+      spawnSync('git', ['init', '-b', 'main'], { cwd: mockGitRoot, stdio: 'pipe' });
+    });
+
+    afterEach(() => {
+      fs.rmSync(tmpDir, { recursive: true, force: true });
+    });
+
+    test('--force removes global Claude skills and state', () => {
+      const result = spawnSync('bash', [UNINSTALL, '--force'], {
+        stdio: 'pipe',
+        env: {
+          ...process.env,
+          HOME: mockHome,
+          GSTACK_DIR: path.join(mockHome, '.claude', 'skills', 'gstack'),
+          GSTACK_STATE_DIR: path.join(mockHome, '.gstack'),
+        },
+        cwd: mockGitRoot,
+      });
+
+      expect(result.status).toBe(0);
+      const output = result.stdout.toString();
+      expect(output).toContain('gstack uninstalled');
+
+      // Global skill dir should be removed
+      expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'gstack'))).toBe(false);
+
+      // Per-skill symlinks pointing into gstack/ should be removed
+      expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'review'))).toBe(false);
+      expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'gstack-ship'))).toBe(false);
+
+      // Non-gstack tool should still exist
+      expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'other-tool'))).toBe(true);
|
||||||
|
|
||||||
|
// State should be removed
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.gstack'))).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
test('--keep-state preserves state directory', () => {
|
||||||
|
const result = spawnSync('bash', [UNINSTALL, '--force', '--keep-state'], {
|
||||||
|
stdio: 'pipe',
|
||||||
|
env: {
|
||||||
|
...process.env,
|
||||||
|
HOME: mockHome,
|
||||||
|
GSTACK_DIR: path.join(mockHome, '.claude', 'skills', 'gstack'),
|
||||||
|
GSTACK_STATE_DIR: path.join(mockHome, '.gstack'),
|
||||||
|
},
|
||||||
|
cwd: mockGitRoot,
|
||||||
|
});
|
||||||
|
|
||||||
|
expect(result.status).toBe(0);
|
||||||
|
|
||||||
|
// Skills should be removed
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'gstack'))).toBe(false);
|
||||||
|
|
||||||
|
// State should still exist
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.gstack'))).toBe(true);
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.gstack', 'config.json'))).toBe(true);
|
||||||
|
});
|
||||||
|
|
||||||
|
test('clean system outputs nothing to remove', () => {
|
||||||
|
const cleanHome = path.join(tmpDir, 'clean-home');
|
||||||
|
fs.mkdirSync(cleanHome, { recursive: true });
|
||||||
|
|
||||||
|
const result = spawnSync('bash', [UNINSTALL, '--force'], {
|
||||||
|
stdio: 'pipe',
|
||||||
|
env: {
|
||||||
|
...process.env,
|
||||||
|
HOME: cleanHome,
|
||||||
|
GSTACK_DIR: path.join(cleanHome, 'nonexistent'),
|
||||||
|
GSTACK_STATE_DIR: path.join(cleanHome, '.gstack'),
|
||||||
|
},
|
||||||
|
cwd: mockGitRoot,
|
||||||
|
});
|
||||||
|
|
||||||
|
expect(result.status).toBe(0);
|
||||||
|
expect(result.stdout.toString()).toContain('Nothing to remove');
|
||||||
|
});
|
||||||
|
|
||||||
|
test('upgrade path: prefixed install + uninstall cleans both old and new symlinks', () => {
|
||||||
|
// Simulate the state after setup --no-prefix followed by setup (with prefix):
|
||||||
|
// Both old unprefixed and new prefixed symlinks exist
|
||||||
|
// (mockHome already has both 'review' and 'gstack-ship' symlinks)
|
||||||
|
|
||||||
|
const result = spawnSync('bash', [UNINSTALL, '--force'], {
|
||||||
|
stdio: 'pipe',
|
||||||
|
env: {
|
||||||
|
...process.env,
|
||||||
|
HOME: mockHome,
|
||||||
|
GSTACK_DIR: path.join(mockHome, '.claude', 'skills', 'gstack'),
|
||||||
|
GSTACK_STATE_DIR: path.join(mockHome, '.gstack'),
|
||||||
|
},
|
||||||
|
cwd: mockGitRoot,
|
||||||
|
});
|
||||||
|
|
||||||
|
expect(result.status).toBe(0);
|
||||||
|
|
||||||
|
// Both old (review) and new (gstack-ship) symlinks should be gone
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'review'))).toBe(false);
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'gstack-ship'))).toBe(false);
|
||||||
|
|
||||||
|
// Non-gstack should survive
|
||||||
|
expect(fs.existsSync(path.join(mockHome, '.claude', 'skills', 'other-tool'))).toBe(true);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||