mirror of https://github.com/garrytan/gstack.git (synced 2026-05-08 13:39:45 +08:00)
* feat(browse): SOCKS5 bridge with auth + cred redaction helper
Adds browse/src/socks-bridge.ts: a 127.0.0.1-only SOCKS5 listener that
accepts unauthenticated connections from Chromium and relays them through
an authenticated upstream proxy. Chromium does not prompt for SOCKS5 auth
at launch, so this bridge is the workaround for using auth-required
residential SOCKS5 upstreams.
- startSocksBridge({ upstream, port: 0 }) → ephemeral 127.0.0.1 listener
- testUpstream({ upstream, retries: 3, backoffMs: 500, budgetMs: 5000 })
pre-flight that connects to a known endpoint (default 1.1.1.1:443)
- Stream-error policy: kill affected client + upstream sockets on any
error mid-stream; no transport retries (a transport-layer retry can
corrupt browser traffic)
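A minimal usage sketch of the two entry points (option names as above; the upstream field names and return shapes are assumptions, not the module's exact API):

```ts
import { startSocksBridge, testUpstream } from './socks-bridge';

// Hypothetical upstream shape — see socks-bridge.ts for the real UpstreamConfig.
const upstream = { host: 'proxy.example.com', port: 1080, username: 'user', password: 'secret' };

// Pre-flight against a known endpoint before touching Chromium.
const ok = await testUpstream({ upstream, retries: 3, backoffMs: 500, budgetMs: 5000 });
if (!ok) throw new Error('SOCKS5 upstream pre-flight failed');

// 127.0.0.1-only listener on an ephemeral port; Chromium connects here unauthenticated.
const bridge = await startSocksBridge({ upstream, port: 0 });
const proxyServer = `socks5://127.0.0.1:${bridge.port}`;
```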
Adds browse/src/proxy-redact.ts: single source of truth for redacting
credentials in any logged proxy URL or upstream config. Every code path
that prints proxy config goes through this helper.
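The intended call pattern (only the helper name comes from this change; the mask format shown is an assumption):

```ts
import { redactProxyUrl } from './proxy-redact';

console.error(`proxy: ${redactProxyUrl('socks5://user:secret@proxy.example.com:1080')}`);
// → e.g. "proxy: socks5://***:***@proxy.example.com:1080" — creds masked, host/port kept
```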
Adds the socks npm dep (~30KB) and 16 tests covering: 127.0.0.1-only
bind, byte-for-byte round trip through the bridge, auth rejection,
mid-stream upstream drop kills client conn, listener teardown,
testUpstream success + retry-exhaust paths, redaction of every
credential shape.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse): --proxy and --headed flags wire bridge into daemon
Adds the global --proxy <url> and --headed flags to the browse CLI.
Resolves the credential policy and routes the daemon launch through the SOCKS5
bridge (or passes HTTP/HTTPS proxies straight through) before chromium.launch().
CLI (cli.ts):
- extractGlobalFlags() strips --proxy/--headed from argv, parses the URL via
  Node's URL class, validates D9 cred-mixing (env BROWSE_PROXY_USER/PASS
  + URL creds → exit 1 with hint), composes a canonical proxy URL with
  resolved creds, and computes a stable configHash for daemon-mismatch detection
- ensureServer() now reads existing daemon's configHash from state file
and refuses (exit 1 with disconnect hint) if --proxy/--headed mismatch
the existing daemon. No silent restart that would drop tab state.
- All proxy-related stderr lines go through redactProxyUrl
proxy-config.ts (new):
- parseProxyConfig() — URL parser + D9 cred-mixing detector + scheme allowlist
- computeConfigHash() — stable hash of (proxy URL minus creds + headed flag)
- toUpstreamConfig() — map ParsedProxyConfig → socks-bridge.UpstreamConfig
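Roughly how the CLI threads these together (a sketch; the parsed-config fields and the exact signatures are assumptions):

```ts
import { parseProxyConfig, computeConfigHash, toUpstreamConfig } from './proxy-config';

// Rejects D9 cred-mixing and disallowed schemes (assumed to throw).
const parsed = parseProxyConfig('socks5://user:secret@proxy.example.com:1080', process.env);

// Stable across credential rotation: creds are excluded from the hash input.
const configHash = computeConfigHash(parsed, /* headed */ false);

// Only needed for auth SOCKS5 — HTTP/HTTPS and unauth SOCKS5 pass straight to chromium.launch().
const upstream = toUpstreamConfig(parsed);
```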
Server (server.ts):
- Reads BROWSE_PROXY_URL at startup; for SOCKS5+auth, runs testUpstream
pre-flight (5s budget, 3 retries, 500ms backoff) and exits 1 on failure
with redacted error
- Spawns startSocksBridge() on 127.0.0.1:<ephemeral> and points
Chromium at it via socks5://127.0.0.1:<port>
- HTTP/HTTPS or unauth SOCKS5 → pass-through to chromium.launch
proxy.server (with username/password if present)
- State file gains optional configHash for daemon-mismatch check
- Bridge tears down via process.on('exit')
Browser manager (browser-manager.ts):
- New setProxyConfig({ server, username, password }) called by server.ts
before launch
- chromium.launch() and both launchPersistentContext sites pass the
proxy config through when set
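How the launch sites consume it, roughly (Playwright's `proxy` launch option is real; the module-level plumbing here is a sketch):

```ts
import { chromium } from 'playwright';

let proxyConfig: { server: string; username?: string; password?: string } | undefined;

export function setProxyConfig(cfg: { server: string; username?: string; password?: string }) {
  proxyConfig = cfg;
}

// Same spread at chromium.launch() and both launchPersistentContext sites:
const browser = await chromium.launch({
  ...(proxyConfig ? { proxy: proxyConfig } : {}),
});
```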
Tests: 22 new across proxy-config (parse + cred-mixing + hash stability)
and extractGlobalFlags (flag stripping + cred-mixing rejection + cred
rotation hash stability + redaction).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse): Xvfb auto-spawn with PID + start-time validation
Adds browse/src/xvfb.ts: a Linux-only Xvfb auto-spawn module for
running headed Chromium in containers without DISPLAY. The module
walks a display range to pick a free one (never hardcodes :99) and
validates orphan PIDs by BOTH /proc/<pid>/cmdline matching 'Xvfb' AND
start-time matching the recorded value before sending any signal.
Defends against PID reuse — refuses to kill anything that doesn't
match both checks.
- shouldSpawnXvfb(env, platform) — pure decision: skip on macOS/Windows,
on Linux skip when DISPLAY or WAYLAND_DISPLAY is set (codex F2)
- pickFreeDisplay(99..120) — probes via xdpyinfo
- spawnXvfb(display) — returns { pid, startTime, display } handle
- isOurXvfb(pid, startTime) — both-checks validator
- cleanupXvfb(state) — best-effort, validates ownership before SIGTERM
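A sketch of the both-checks validator (the /proc layout is real Linux; the exact parsing in xvfb.ts may differ):

```ts
import * as fs from 'fs';

function isOurXvfb(pid: number, recordedStartTime: string): boolean {
  try {
    // Check 1: the process at this PID is actually Xvfb.
    const cmdline = fs.readFileSync(`/proc/${pid}/cmdline`, 'utf-8');
    if (!cmdline.includes('Xvfb')) return false;
    // Check 2: its start time (field 22 of /proc/<pid>/stat) matches what we
    // recorded at spawn time — a reused PID fails this even if it is another Xvfb.
    const stat = fs.readFileSync(`/proc/${pid}/stat`, 'utf-8');
    return stat.split(' ')[21] === recordedStartTime;
  } catch {
    return false; // process gone or unreadable → never send a signal
  }
}
```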
Wired into server.ts startup: when shouldSpawnXvfb says yes, picks a
free display, spawns Xvfb, sets DISPLAY for chromium.launchHeaded, and
records xvfbPid/xvfbStartTime/xvfbDisplay in the state file. Cleanup
runs on process.on('exit'). The CLI's disconnect path also runs
cleanupXvfb() in the force-cleanup branch when the server is dead.
Disconnect now applies to any non-default daemon (headed mode OR
configHash-tagged daemon — i.e. one started with --proxy/--headed),
not just headed mode.
Adds xvfb + x11-utils to .github/docker/Dockerfile.ci so CI exercises
the Linux container --headed path on every run. Without it the most
common production path would go untested.
Tests: 17 new across decision logic, PID validation defenses
(cmdline mismatch, start-time mismatch), no-op safety on bad inputs,
and a Linux+Xvfb-installed gate for the spawn → validate → cleanup
round trip. Tests skip on macOS/Windows automatically.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse): webdriver-mask stealth + Chromium-through-bridge e2e
D7 (codex narrowing): mask navigator.webdriver only via addInitScript.
The wintermute approach (fake plugins=[1..5], fake languages=['en-US',
'en'], stub window.chrome) is intentionally NOT applied — modern
fingerprinters check consistency between plugins.length, languages,
userAgent, and platform, and synthesizing fixed values can read as MORE
bot-like, not less. The honest minimum is webdriver, which Chromium
exposes as a known automation tell.
Adds browse/src/stealth.ts: single source of truth for the stealth
init script and launch args. Both browser-manager.launch() (headless)
and launchHeaded() (persistent context with extension) call
applyStealth(context) and pass STEALTH_LAUNCH_ARGS into chromium.launch.
The pre-existing launchHeaded stealth that did fake plugins/languages
is removed for the same reason. The cdc_/__webdriver runtime cleanup
and Permissions API patch are kept — they remove automation-injected
artifacts rather than synthesizing fake natural-browser values.
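The narrowed stealth is roughly this (addInitScript is the real Playwright API; the launch arg shown is a common choice and an assumption about what STEALTH_LAUNCH_ARGS actually contains):

```ts
import type { BrowserContext } from 'playwright';

export const STEALTH_LAUNCH_ARGS = ['--disable-blink-features=AutomationControlled'];

export async function applyStealth(context: BrowserContext): Promise<void> {
  await context.addInitScript(() => {
    // Mask the one honest automation tell; leave plugins/languages alone.
    Object.defineProperty(navigator, 'webdriver', { get: () => false, configurable: true });
  });
}
```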
Adds bridge-chromium-e2e.test.ts (codex F3): the test that proves the
FEATURE works. Real Chromium with proxy.server = 'socks5://127.0.0.1:
<bridgePort>' navigates to a local HTTP fixture; the auth upstream's
connect counter and the HTTP fixture's hit counter both increment,
proving traffic actually traversed bridge → auth-upstream → destination.
Without this test, we could ship a working byte-relay and a broken
Chromium integration and never know.
Adds bridge-port-restart.test.ts (codex F1, reframed): old test
assumed two daemons coexist, which contradicts D2 single-daemon model.
Reframed as restart-then-restart, asserting fresh ephemeral ports
(never the hardcoded 1090) on each spin-up.
Adds stealth-webdriver.test.ts: navigator.webdriver=false in both
fresh contexts and persistent contexts; navigator.plugins/languages
are NOT replaced with the wintermute fake list (D7 verification).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(gstack): generate llms.txt — single-file capability index for AI agents
Adds scripts/gen-llms-txt.ts: produces gstack/llms.txt at repo root,
indexing every skill (47), every browse command (75), and design
commands when the design CLI is present. Per the llmstxt.org
convention, agents can read one file to learn what gstack offers
instead of crawling 47 SKILL.md files.
Sources:
- skill SKILL.md.tmpl frontmatter (name + description block scalar)
- browse/src/commands.ts COMMAND_DESCRIPTIONS (sorted by category)
- design/src/commands.ts COMMAND_DESCRIPTIONS if present (best-effort)
Wired into scripts/gen-skill-docs.ts as a post-step so it regenerates
on every `bun run gen:skill-docs` (the same script that re-emits all
SKILL.md files). Failures are non-fatal warnings, not build breaks —
the generator never blocks SKILL.md regen.
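The post-step boils down to this call (result shape matches how gen-skill-docs.ts consumes it; shown here inside a hypothetical helper rather than at top level):

```ts
import { writeLlmsTxt } from './gen-llms-txt';

async function regenerateLlmsTxt(): Promise<void> {
  const result = await writeLlmsTxt();
  if (result.warnings.length > 0) {
    // Failures surface as warnings — SKILL.md regeneration is never blocked.
    for (const w of result.warnings) console.error(`[gen-llms-txt] WARN: ${w}`);
  } else {
    console.log(`[gen-llms-txt] ${result.skills.length} skills, ${result.browseCommands.length} browse commands`);
  }
}
```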
Strict mode (--strict, also used by tests) throws when a skill is
missing name or description in its frontmatter, catching missing
metadata before it ships.
Tests: shape (top-level sections, sort order, single-line summary
discipline), every-skill-and-command-appears, strict-mode rejection of
incomplete frontmatter, and freshness check that the committed
gstack/llms.txt matches what the generator produces now.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(browse): --navigate flag on download for browser-triggered files
Adds the --navigate strategy from community PR #1355 (originally from
@garrytan-agents). When set, download navigates to the URL with
waitUntil:'commit' and captures the resulting browser download via
page.waitForEvent('download'), then saves via download.saveAs().
Handles URLs that trigger files via Content-Disposition headers,
multi-hop CDN redirects requiring browser cookies, or anti-bot CDN
chains where page.request.fetch() can't follow the auth/redirect
chain.
Defaults still use the existing direct-fetch strategy. --navigate is
opt-in.
Goes through the same validateNavigationUrl SSRF gate as goto, so
download --navigate cannot reach IPv4 metadata endpoints (AWS IMDSv1,
GCP/Azure equivalents) or arbitrary internal hosts.
Infers the content type from the suggested filename for common extensions
(epub, pdf, zip, gz, mp3/mp4, jpg/jpeg/png, txt, html, json), falling
back to application/octet-stream. Same 200MB cap as Strategy 1.
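A sketch of that mapping (the extension list is from above; the exact MIME strings in the command may differ slightly):

```ts
const CONTENT_TYPES: Record<string, string> = {
  epub: 'application/epub+zip',
  pdf: 'application/pdf',
  zip: 'application/zip',
  gz: 'application/gzip',
  mp3: 'audio/mpeg',
  mp4: 'video/mp4',
  jpg: 'image/jpeg',
  jpeg: 'image/jpeg',
  png: 'image/png',
  txt: 'text/plain',
  html: 'text/html',
  json: 'application/json',
};

function inferContentType(suggestedFilename: string): string {
  const ext = suggestedFilename.split('.').pop()?.toLowerCase() ?? '';
  return CONTENT_TYPES[ext] ?? 'application/octet-stream';
}
```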
Frames the use case generically (anti-bot CDN, Content-Disposition,
redirect chains) rather than naming any specific site, per project
voice rules.
Co-Authored-By: @garrytan-agents
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: v1.28.0.0 — browse SKILL section + VERSION + CHANGELOG
VERSION 1.27.1.0 → 1.28.0.0 (MINOR — substantial new capability:
five new flags/features, ~600 LOC added, new socks dep, multiple
new modules).
browse/SKILL.md.tmpl: new "Headed Mode + Proxy + Anti-Bot Sites"
section between User Handoff and Snapshot Flags. Documents
--headed (auto-Xvfb on Linux), --proxy (with embedded SOCKS5
bridge for auth), download --navigate, the cred-mixing policy,
daemon-discipline (refuse-on-mismatch), the narrowed
webdriver-only stealth, container support caveats, and the
fail-fast/no-retry failure modes.
CHANGELOG entry follows the release-summary format from CLAUDE.md:
two-line headline, lead paragraph, "The numbers that matter"
table tied to specific test files that prove each capability,
"What this means for AI agents" closing tied to a real workflow
shift, then itemized Added/Changed/Fixed/For-contributors
sections.
Browse SKILL.md regenerated via bun run gen:skill-docs.
gstack/llms.txt regenerated automatically from the same pipeline.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(browse): integration coverage for daemon mismatch + proxy fail-fast
Adds two integration tests that exercise the full process boundary,
not just the module-level wiring.
daemon-mismatch-refuse.test.ts (D2):
- Stubs a healthy state file with a fake configHash and a fake /health
HTTP server, runs the actual cli.ts binary with a mismatching
--proxy, asserts exit 1 + 'different config' / 'browse disconnect'
hint in stderr.
- Same shape for the case where a plain (default-config) daemon meets --headed.
- Positive case: matching configHash → CLI does NOT emit the mismatch
hint (regardless of whether the actual command succeeds).
server-proxy-fail-fast.test.ts:
- Starts the rejecting SOCKS5 upstream, spawns server.ts with
BROWSE_PROXY_URL pointing at it, BROWSE_HEADLESS_SKIP=1 to skip
Chromium launch.
- Asserts exit 1, 'FAIL upstream' in stderr (testUpstream pre-flight
ran), no raw credential leakage in any output (redaction works on
the failure path), and exit within 30s upper bound.
Both tests use the existing spawn-bun-cli pattern from
commands.test.ts so they run on the same CI infrastructure as the
rest of the bun test suite.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(gen-skill-docs): keep module sync so test require() still works
Two regressions caught by the full test suite after the v1.28.0.0
landing pass:
1) package.json version mismatch — VERSION was bumped to 1.28.0.0
but package.json still pinned to 1.27.1.0.
test/gen-skill-docs.test.ts asserts they match.
2) Top-level await in scripts/gen-llms-txt.ts (CLI entry block) and
scripts/gen-skill-docs.ts (post-step) made gen-skill-docs an
async module. test/gen-skill-docs.test.ts uses require() to pull
extractVoiceTriggers/processVoiceTriggers from gen-skill-docs,
which Bun rejects on async modules with:
"TypeError: require() async module ... unsupported.
use 'await import()' instead."
Fix: wrap the await blocks in void IIFEs so the modules remain sync
from a require() perspective.
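The pattern, condensed from the post-step in gen-skill-docs.ts:

```ts
// Before: `const result = await writeLlmsTxt();` at top level → async module.
// After: the module stays synchronous from require()'s perspective.
if (!DRY_RUN) {
  void (async () => {
    try {
      const result = await writeLlmsTxt();
      // ... print warnings / summary
    } catch (err) {
      console.error(`[gen-llms-txt] FAILED: ${err instanceof Error ? err.message : String(err)}`);
    }
  })();
}
```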
After fix: all 379 gen-skill-docs tests pass, all 77 new feature
tests pass (3 skipped on macOS — Linux+Xvfb gates).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(browse): apply codex adversarial findings on the new lifecycle
Codex outside-voice review caught five real production-failure modes in
the v1.28.0.0 proxy/headed lifecycle. Fixed:
1) `browse disconnect` skips the graceful path for proxy-only daemons
(browse/src/cli.ts). The graceful /command POST went out with a stray
`domains,` shorthand property, and even with that fixed, the server's
disconnect handler only tears down headed mode — proxy-only daemons
returned 200 "Not in headed mode" while leaving the bridge running.
Now disconnect
short-circuits to force-cleanup for non-headed daemons, which kicks
process.on('exit') in server.ts to close the bridge + Xvfb.
2) sendCommand crash retry preserves --proxy / --headed
(browse/src/cli.ts). The ECONNRESET retry path called startServer()
with no extraEnv, silently dropping the proxied flags. A daemon that
died mid-command would silently restart in default direct/headless
mode and bypass the SOCKS bridge. Now reapplies BROWSE_PROXY_URL,
BROWSE_HEADED, and BROWSE_CONFIG_HASH from the resolved global flags.
3) `connect` honors --proxy (browse/src/cli.ts). The headed-mode
`connect` command built its own serverEnv that didn't include
BROWSE_PROXY_URL, so `browse --proxy <url> connect` launched headed
Chromium without the proxy. Now threads proxyUrl + configHash into
the connect serverEnv.
4) SOCKS5 bridge handles fragmented TCP frames
(browse/src/socks-bridge.ts). Previously used once('data') and
parsed each chunk as a complete SOCKS5 frame — TCP doesn't preserve
message boundaries and split greetings/CONNECT requests caused
intermittent handshake failures. Replaced with a single state
machine that buffers chunks and uses size predicates on the SOCKS5
header to know when a complete frame has arrived. Pauses the client
socket during upstream connect and replays any remainder bytes
into the upstream on success.
5) Xvfb cleanup-then-state-delete ordering
(browse/src/server.ts). emergencyCleanup() previously deleted the
state file BEFORE any Xvfb cleanup could read it, orphaning Xvfb
on uncaughtException / unhandledRejection. Now reads the state
file first, calls cleanupXvfb() (which validates cmdline +
start-time before kill), then deletes the state file.
Adds a regression test for #4: writes the SOCKS5 greeting + CONNECT
one byte at a time with 5ms ticks, asserts a clean round trip after
the fragmented handshake.
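For #4, the greeting stage of that state machine is roughly this (a sketch of the size-predicate idea only; the real code also handles the CONNECT request and replays leftover bytes into the upstream):

```ts
// SOCKS5 client greeting: VER(1) + NMETHODS(1) + METHODS(NMETHODS) bytes.
let buffered = Buffer.alloc(0);

function tryReadGreeting(chunk: Buffer): Buffer | null {
  buffered = Buffer.concat([buffered, chunk]);
  if (buffered.length < 2) return null;           // NMETHODS not readable yet
  const frameLen = 2 + buffered[1];               // size predicate from the header
  if (buffered.length < frameLen) return null;    // keep buffering across TCP chunks
  const greeting = buffered.subarray(0, frameLen);
  buffered = buffered.subarray(frameLen);         // remainder belongs to the next frame
  return greeting;
}
```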
Codex's sixth finding (bridge advertises NO_AUTH on 127.0.0.1, so any
co-located process can use the authenticated upstream) is documented
as a known limitation — gstack's threat model assumes single-user
hosts. Adding bridge-side auth is a separate change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: update BROWSER.md + TODOS.md for v1.28.0.0
BROWSER.md picks up a "Headed mode + proxy + browser-native downloads
(v1.28.0.0)" subsection inside Real-browser mode plus the new source-map
entries (socks-bridge.ts, proxy-config.ts, proxy-redact.ts, xvfb.ts,
stealth.ts). TODOS.md anti-bot-stealth item updated to reflect the v1.28
narrowing — the "fake plugins" line is no longer accurate.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ci): include bun.lock in image build for deterministic install
CI evals all failed on PR #1363 with:
error: Could not resolve: "smart-buffer". Maybe you need to "bun install"?
error: Could not resolve: "ip-address". Maybe you need to "bun install"?
at /opt/node_modules_cache/socks/build/client/socksclient.js:15
The cached node_modules layer in the pre-baked Docker image had
`socks` (the new dep) but was missing its transitive deps (smart-buffer,
ip-address). The image build copied only package.json into the build
context — without bun.lock, `bun install` resolved a different tree
than local `bun install` did, dropping required transitive deps.
Locally the install resolves 229 packages (the correct tree) whether
bun.lock is present or absent, so the failure doesn't reproduce. Why CI
diverged isn't fully understood — possibly Docker layer cache reuse
across image rebuilds — but the deterministic fix is to include the
lockfile in the image build context and use `--frozen-lockfile`,
matching what every CI doc recommends.
Changes:
- .github/docker/Dockerfile.ci: COPY bun.lock alongside package.json,
switch `bun install` → `bun install --frozen-lockfile` so any future
lockfile drift fails loudly during image build instead of producing
a partially-installed cache that breaks downstream eval jobs.
- .github/workflows/evals.yml: include bun.lock in the image-tag hash
so adding/removing a dep invalidates the image, AND copy bun.lock
into the docker context alongside package.json.
- .github/workflows/evals-periodic.yml: same updates.
- .github/workflows/ci-image.yml: rebuild trigger now fires on bun.lock
changes too; build context includes bun.lock.
Image hash changes → fresh image gets built on next CI run → install
matches the lockfile exactly → no missing transitive deps.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(ci): use hardlink copy instead of symlink for node_modules cache
After the bun.lock fix landed, the eval matrix STILL failed identically:
Could not resolve: "smart-buffer" / "ip-address"
at /opt/node_modules_cache/socks/build/client/socksclient.js
But the hash-tagged image actually contains smart-buffer + ip-address +
socks all flat in /opt/node_modules_cache (verified by pulling and
inspecting the image). 207 packages, all present.
Root cause: the workflow used `ln -s /opt/node_modules_cache node_modules`
to restore deps. Bun build (and Node module resolution generally) walks
a file's realpath to find sibling deps. From the symlinked
/workspace/node_modules/socks/build/client/socksclient.js, realpath
resolves to /opt/node_modules_cache/socks/build/client/socksclient.js,
and walking up to find a node_modules/smart-buffer dir fails — there's
no `node_modules` segment in the realpath.
Switch `ln -s` → `cp -al` (hardlink-copy). Each file in the cache becomes
a hardlink at /workspace/node_modules/<pkg>, sharing inodes (no data
copy). Realpath of /workspace/node_modules/socks/.../socksclient.js
stays inside /workspace/node_modules, so sibling deps resolve correctly.
Speed is comparable to symlink — `cp -al` on ~200 packages on tmpfs is
sub-second. Same caching story preserved.
Both evals.yml and evals-periodic.yml updated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(ci): cp -r instead of cp -al — /opt and /workspace are different filesystems
The hardlink-copy fix landed and immediately broke with:
cp: cannot create hard link 'node_modules/<file>' to
'/opt/node_modules_cache/<file>': Invalid cross-device link
GitHub Actions runners mount the workspace volume at /workspace
(overlay-fs layered onto the runner image), and /opt is the runner
image's own filesystem. Cross-filesystem hardlinks aren't supported.
Switch `cp -al` → `cp -r`. Cost: ~5s for ~200 packages of small JS
files vs ~0s for the broken symlink. Still cheaper than the ~15s
`bun install` fallback. Realpath of /workspace/node_modules/<pkg>/...
stays inside /workspace, so bun build's sibling-dep resolution works.
Both evals.yml and evals-periodic.yml updated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
688 lines · 29 KiB · TypeScript
#!/usr/bin/env bun
/**
 * Generate SKILL.md files from .tmpl templates.
 *
 * Pipeline:
 *   read .tmpl → find {{PLACEHOLDERS}} → resolve from source → format → write .md
 *
 * Supports --dry-run: generate to memory, exit 1 if different from committed file.
 * Used by skill:check and CI freshness checks.
 */

import { COMMAND_DESCRIPTIONS } from '../browse/src/commands';
import { SNAPSHOT_FLAGS } from '../browse/src/snapshot';
import { discoverTemplates } from './discover-skills';
import { writeLlmsTxt } from './gen-llms-txt';
import * as fs from 'fs';
import * as path from 'path';
import type { Host, TemplateContext } from './resolvers/types';
import { HOST_PATHS } from './resolvers/types';
import { RESOLVERS } from './resolvers/index';
// Aliased with a leading underscore: this file defines its own local copies of these helpers below.
import { externalSkillName as _externalSkillName, extractHookSafetyProse as _extractHookSafetyProse, extractNameAndDescription as _extractNameAndDescription, condenseOpenAIShortDescription as _condenseOpenAIShortDescription, generateOpenAIYaml as _generateOpenAIYaml } from './resolvers/codex-helpers';
import { generatePlanCompletionAuditShip, generatePlanCompletionAuditReview, generatePlanVerificationExec } from './resolvers/review';
import { ALL_HOST_CONFIGS, ALL_HOST_NAMES, resolveHostArg, getHostConfig } from '../hosts/index';
import type { HostConfig } from './host-config';

const ROOT = path.resolve(import.meta.dir, '..');
const DRY_RUN = process.argv.includes('--dry-run');

// ─── Host Detection (config-driven) ─────────────────────────

const HOST_ARG = process.argv.find(a => a.startsWith('--host'));
type HostArg = Host | 'all';
const HOST_ARG_VAL: HostArg = (() => {
  if (!HOST_ARG) return 'claude';
  const val = HOST_ARG.includes('=') ? HOST_ARG.split('=')[1] : process.argv[process.argv.indexOf(HOST_ARG) + 1];
  if (val === 'all') return 'all';
  try {
    return resolveHostArg(val) as Host;
  } catch {
    throw new Error(`Unknown host: ${val}. Use ${ALL_HOST_NAMES.join(', ')}, or all.`);
  }
})();

// For single-host mode, HOST is the host. For --host all, it's set per iteration below.
let HOST: Host = HOST_ARG_VAL === 'all' ? 'claude' : HOST_ARG_VAL;

// ─── Model Overlay Selection ────────────────────────────────
// --model is explicit. We do NOT auto-detect from host (host ≠ model).
// Default is 'claude'. Missing overlay file → empty string (graceful).
import { ALL_MODEL_NAMES, resolveModel, type Model } from './models';
const MODEL_ARG = process.argv.find(a => a.startsWith('--model'));
const MODEL_ARG_VAL: Model = (() => {
  if (!MODEL_ARG) return 'claude';
  const val = MODEL_ARG.includes('=') ? MODEL_ARG.split('=')[1] : process.argv[process.argv.indexOf(MODEL_ARG) + 1];
  const resolved = resolveModel(val);
  if (!resolved) {
    throw new Error(`Unknown model: ${val}. Use ${ALL_MODEL_NAMES.join(', ')}, or a family variant (e.g., claude-opus-4-7, gpt-5.4-mini, o3).`);
  }
  return resolved;
})();

// HostPaths, HOST_PATHS, and TemplateContext imported from ./resolvers/types (line 7-8)
// Design constants (AI_SLOP_BLACKLIST, OPENAI_HARD_REJECTIONS, OPENAI_LITMUS_CHECKS)
// live in ./resolvers/constants and are consumed by resolvers directly.

// ─── External Host Helpers ───────────────────────────────────

// Re-export local copy for use in this file (matches codex-helpers.ts)
// Accepts optional frontmatter name to support directory/invocation name divergence
function externalSkillName(skillDir: string, frontmatterName?: string): string {
  // Root skill (skillDir === '' or '.') always maps to 'gstack' regardless of frontmatter
  if (skillDir === '.' || skillDir === '') return 'gstack';
  // Use frontmatter name when it differs from directory name (e.g., run-tests/ with name: test)
  const baseName = frontmatterName && frontmatterName !== skillDir ? frontmatterName : skillDir;
  // Don't double-prefix: gstack-upgrade → gstack-upgrade (not gstack-gstack-upgrade)
  if (baseName.startsWith('gstack-')) return baseName;
  return `gstack-${baseName}`;
}

function extractNameAndDescription(content: string): { name: string; description: string } {
  const fmStart = content.indexOf('---\n');
  if (fmStart !== 0) return { name: '', description: '' };
  const fmEnd = content.indexOf('\n---', fmStart + 4);
  if (fmEnd === -1) return { name: '', description: '' };

  const frontmatter = content.slice(fmStart + 4, fmEnd);
  const nameMatch = frontmatter.match(/^name:\s*(.+)$/m);
  const name = nameMatch ? nameMatch[1].trim() : '';

  let description = '';
  const lines = frontmatter.split('\n');
  let inDescription = false;
  const descLines: string[] = [];
  for (const line of lines) {
    if (line.match(/^description:\s*\|?\s*$/)) {
      inDescription = true;
      continue;
    }
    if (line.match(/^description:\s*\S/)) {
      description = line.replace(/^description:\s*/, '').trim();
      break;
    }
    if (inDescription) {
      if (line === '' || line.match(/^\s/)) {
        descLines.push(line.replace(/^ /, ''));
      } else {
        break;
      }
    }
  }
  if (descLines.length > 0) {
    description = descLines.join('\n').trim();
  }

  return { name, description };
}

// ─── Voice Trigger Processing ────────────────────────────────

/**
 * Extract voice-triggers YAML list from frontmatter.
 * Returns an array of trigger strings, or [] if no voice-triggers field.
 */
function extractVoiceTriggers(content: string): string[] {
  const fmStart = content.indexOf('---\n');
  if (fmStart !== 0) return [];
  const fmEnd = content.indexOf('\n---', fmStart + 4);
  if (fmEnd === -1) return [];
  const frontmatter = content.slice(fmStart + 4, fmEnd);

  const triggers: string[] = [];
  let inVoice = false;
  for (const line of frontmatter.split('\n')) {
    if (/^voice-triggers:/.test(line)) { inVoice = true; continue; }
    if (inVoice) {
      const m = line.match(/^\s+-\s+"(.+)"$/);
      if (m) triggers.push(m[1]);
      else if (!/^\s/.test(line)) break;
    }
  }
  return triggers;
}

/**
 * Preprocess voice triggers: fold voice-triggers YAML field into description,
 * then strip the field from frontmatter. Must run BEFORE transformFrontmatter
 * and extractNameAndDescription so all hosts see the updated description.
 */
function processVoiceTriggers(content: string): string {
  const triggers = extractVoiceTriggers(content);
  if (triggers.length === 0) return content;

  // Strip voice-triggers block from frontmatter
  content = content.replace(/^voice-triggers:\n(?:\s+-\s+"[^"]*"\n?)*/m, '');

  // Get current description (after stripping voice-triggers, so it's clean)
  const { description } = extractNameAndDescription(content);
  if (!description) return content;

  // Build new description with voice triggers appended
  const voiceLine = `Voice triggers (speech-to-text aliases): ${triggers.map(t => `"${t}"`).join(', ')}.`;
  const newDescription = description + '\n' + voiceLine;

  // Replace old indented description with new in frontmatter
  const oldIndented = description.split('\n').map(l => ` ${l}`).join('\n');
  const newIndented = newDescription.split('\n').map(l => ` ${l}`).join('\n');
  content = content.replace(oldIndented, newIndented);

  return content;
}

// Export for testing
export { extractVoiceTriggers, processVoiceTriggers };

const OPENAI_SHORT_DESCRIPTION_LIMIT = 120;

function condenseOpenAIShortDescription(description: string): string {
  const firstParagraph = description.split(/\n\s*\n/)[0] || description;
  const collapsed = firstParagraph.replace(/\s+/g, ' ').trim();
  if (collapsed.length <= OPENAI_SHORT_DESCRIPTION_LIMIT) return collapsed;

  const truncated = collapsed.slice(0, OPENAI_SHORT_DESCRIPTION_LIMIT - 3);
  const lastSpace = truncated.lastIndexOf(' ');
  const safe = lastSpace > 40 ? truncated.slice(0, lastSpace) : truncated;
  return `${safe}...`;
}

function generateOpenAIYaml(displayName: string, shortDescription: string): string {
  return `interface:
  display_name: ${JSON.stringify(displayName)}
  short_description: ${JSON.stringify(shortDescription)}
  default_prompt: ${JSON.stringify(`Use ${displayName} for this task.`)}
policy:
  allow_implicit_invocation: true
`;
}

/**
 * Transform frontmatter for external hosts.
 * Claude: strips `sensitive:` field (only Factory uses it).
 * Codex: keeps name + description only, enforces 1024-char limit.
 * Factory: keeps name + description + user-invocable, conditionally adds disable-model-invocation.
 */
function transformFrontmatter(content: string, host: Host): string {
  const hostConfig = getHostConfig(host);
  const fm = hostConfig.frontmatter;

  if (fm.mode === 'denylist') {
    // Denylist mode: strip listed fields, keep everything else
    for (const field of fm.stripFields || []) {
      if (field === 'voice-triggers') {
        content = content.replace(/^voice-triggers:\n(?:\s+-\s+"[^"]*"\n?)*/m, '');
      } else {
        content = content.replace(new RegExp(`^${field}:\\s*.*\\n`, 'm'), '');
      }
    }
    return content;
  }

  // Allowlist mode: reconstruct frontmatter with only allowed fields
  const fmStart = content.indexOf('---\n');
  if (fmStart !== 0) return content;
  const fmEnd = content.indexOf('\n---', fmStart + 4);
  if (fmEnd === -1) return content;
  const frontmatter = content.slice(fmStart + 4, fmEnd);
  const body = content.slice(fmEnd + 4);
  const { name, description } = extractNameAndDescription(content);

  // Description limit enforcement
  if (fm.descriptionLimit) {
    const behavior = fm.descriptionLimitBehavior || 'error';
    if (description.length > fm.descriptionLimit) {
      if (behavior === 'error') {
        throw new Error(
          `${hostConfig.displayName} description for "${name}" is ${description.length} chars (max ${fm.descriptionLimit}). ` +
          `Compress the description in the .tmpl file.`
        );
      } else if (behavior === 'warn') {
        console.warn(`WARNING: ${hostConfig.displayName} description for "${name}" exceeds ${fm.descriptionLimit} chars`);
      }
      // 'truncate' — silently proceed
    }
  }

  // Build frontmatter with allowed fields
  const indentedDesc = description.split('\n').map(l => ` ${l}`).join('\n');
  let newFm = `---\nname: ${name}\ndescription: |\n${indentedDesc}\n`;

  // Add extra fields (host-wide)
  if (fm.extraFields) {
    for (const [key, value] of Object.entries(fm.extraFields)) {
      if (key !== 'name' && key !== 'description') {
        newFm += `${key}: ${value}\n`;
      }
    }
  }

  // Add conditional fields
  if (fm.conditionalFields) {
    for (const rule of fm.conditionalFields) {
      const match = Object.entries(rule.if).every(([k, v]) =>
        new RegExp(`^${k}:\\s*${v}`, 'm').test(frontmatter)
      );
      if (match) {
        for (const [key, value] of Object.entries(rule.add)) {
          newFm += `${key}: ${value}\n`;
        }
      }
    }
  }

  // Preserve additional keepFields beyond name and description
  if (fm.keepFields) {
    for (const field of fm.keepFields) {
      if (field === 'name' || field === 'description') continue;
      // Match YAML field with possible multi-line/array value (indented lines after colon)
      const fieldMatch = frontmatter.match(new RegExp(`^${field}:(.*(?:\\n(?:[ \\t]+.+))*)`, 'm'));
      if (fieldMatch) {
        newFm += `${field}:${fieldMatch[1]}\n`;
      }
    }
  }

  // Rename fields (copy values from template frontmatter with new keys)
  if (fm.renameFields) {
    for (const [oldName, newName] of Object.entries(fm.renameFields)) {
      const fieldMatch = frontmatter.match(new RegExp(`^${oldName}:(.+(?:\\n(?:\\s+.+)*)?)`, 'm'));
      if (fieldMatch) {
        newFm += `${newName}:${fieldMatch[1]}\n`;
      }
    }
  }

  newFm += '---';
  return newFm + body;
}

/**
 * Extract hook descriptions from frontmatter for inline safety prose.
 * Returns a description of what the hooks do, or null if no hooks.
 */
function extractHookSafetyProse(tmplContent: string): string | null {
  if (!tmplContent.match(/^hooks:/m)) return null;

  // Parse the hook matchers to build a human-readable safety description
  const matchers: string[] = [];
  const matcherRegex = /matcher:\s*"(\w+)"/g;
  let m;
  while ((m = matcherRegex.exec(tmplContent)) !== null) {
    if (!matchers.includes(m[1])) matchers.push(m[1]);
  }

  if (matchers.length === 0) return null;

  // Build safety prose based on what tools are hooked
  const toolDescriptions: Record<string, string> = {
    Bash: 'check bash commands for destructive operations (rm -rf, DROP TABLE, force-push, git reset --hard, etc.) before execution',
    Edit: 'verify file edits are within the allowed scope boundary before applying',
    Write: 'verify file writes are within the allowed scope boundary before applying',
  };

  const safetyChecks = matchers
    .map(t => toolDescriptions[t] || `check ${t} operations for safety`)
    .join(', and ');

  return `> **Safety Advisory:** This skill includes safety checks that ${safetyChecks}. When using this skill, always pause and verify before executing potentially destructive operations. If uncertain about a command's safety, ask the user for confirmation before proceeding.`;
}

// ─── External Host Config (now derived from hosts/*.ts) ──────
// EXTERNAL_HOST_CONFIG replaced by getHostConfig() from hosts/index.ts

// ─── Template Processing ────────────────────────────────────

const GENERATED_HEADER = `<!-- AUTO-GENERATED from {{SOURCE}} — do not edit directly -->\n<!-- Regenerate: bun run gen:skill-docs -->\n`;

/**
 * Process external host output: routing, frontmatter, path rewrites, metadata.
 * Shared between Codex and Factory (and future external hosts).
 */
function processExternalHost(
  content: string,
  tmplContent: string,
  host: Host,
  skillDir: string,
  extractedDescription: string,
  ctx: TemplateContext,
  frontmatterName?: string,
): { content: string; outputPath: string; outputDir: string; symlinkLoop: boolean } {
  const hostConfig = getHostConfig(host);

  const name = externalSkillName(skillDir === '.' ? '' : skillDir, frontmatterName);
  const outputDir = path.join(ROOT, hostConfig.hostSubdir, 'skills', name);
  fs.mkdirSync(outputDir, { recursive: true });
  const outputPath = path.join(outputDir, 'SKILL.md');

  // Guard against symlink loops
  let symlinkLoop = false;
  const claudePath = ctx.tmplPath.replace(/\.tmpl$/, '');
  try {
    const resolvedClaude = fs.realpathSync(claudePath);
    const resolvedExternal = fs.realpathSync(path.dirname(outputPath)) + '/' + path.basename(outputPath);
    if (resolvedClaude === resolvedExternal) {
      symlinkLoop = true;
    }
  } catch {
    // realpathSync fails if file doesn't exist yet — no symlink loop
  }

  // Extract hook safety prose BEFORE transforming frontmatter (which strips hooks)
  const safetyProse = extractHookSafetyProse(tmplContent);

  // Transform frontmatter (host-aware)
  let result = transformFrontmatter(content, host);

  // Insert safety advisory at the top of the body (after frontmatter)
  if (safetyProse) {
    const bodyStart = result.indexOf('\n---') + 4;
    result = result.slice(0, bodyStart) + '\n' + safetyProse + '\n' + result.slice(bodyStart);
  }

  // Config-driven path rewrites (order matters, replaceAll)
  for (const rewrite of hostConfig.pathRewrites) {
    result = result.replaceAll(rewrite.from, rewrite.to);
  }

  // Config-driven tool rewrites
  if (hostConfig.toolRewrites) {
    for (const [from, to] of Object.entries(hostConfig.toolRewrites)) {
      result = result.replaceAll(from, to);
    }
  }

  // Config-driven: generate metadata (e.g., openai.yaml for Codex)
  if (hostConfig.generation.generateMetadata && !symlinkLoop) {
    const agentsDir = path.join(outputDir, 'agents');
    fs.mkdirSync(agentsDir, { recursive: true });
    const shortDescription = condenseOpenAIShortDescription(extractedDescription);
    fs.writeFileSync(path.join(agentsDir, 'openai.yaml'), generateOpenAIYaml(name, shortDescription));
  }

  return { content: result, outputPath, outputDir, symlinkLoop };
}

function processTemplate(tmplPath: string, host: Host = 'claude'): { outputPath: string; content: string; symlinkLoop?: boolean } {
  const tmplContent = fs.readFileSync(tmplPath, 'utf-8');
  const relTmplPath = path.relative(ROOT, tmplPath);
  let outputPath = tmplPath.replace(/\.tmpl$/, '');

  // Determine skill directory relative to ROOT
  const skillDir = path.relative(ROOT, path.dirname(tmplPath));

  // Extract skill name from frontmatter early — needed for both TemplateContext and external host output paths.
  // When frontmatter name: differs from directory name (e.g., run-tests/ with name: test),
  // the frontmatter name is used for external skill naming and setup script symlinks.
  const { name: extractedName, description: extractedDescription } = extractNameAndDescription(tmplContent);
  const skillName = extractedName || path.basename(path.dirname(tmplPath));


  // Extract benefits-from list from frontmatter (inline YAML: benefits-from: [a, b])
  const benefitsMatch = tmplContent.match(/^benefits-from:\s*\[([^\]]*)\]/m);
  const benefitsFrom = benefitsMatch
    ? benefitsMatch[1].split(',').map(s => s.trim()).filter(Boolean)
    : undefined;

  // Extract preamble-tier from frontmatter (1-4, controls which preamble sections are included)
  const tierMatch = tmplContent.match(/^preamble-tier:\s*(\d+)$/m);
  const preambleTier = tierMatch ? parseInt(tierMatch[1], 10) : undefined;

  // Extract interactive flag from frontmatter (generator-only; controls plan-mode handshake inclusion)
  const interactiveMatch = tmplContent.match(/^interactive:\s*(true|false)\s*$/m);
  const interactive = interactiveMatch ? interactiveMatch[1] === 'true' : undefined;

  const ctx: TemplateContext = { skillName, tmplPath, benefitsFrom, host, paths: HOST_PATHS[host], preambleTier, model: MODEL_ARG_VAL, interactive };

  // Replace placeholders (supports parameterized: {{NAME:arg1:arg2}})
  // Config-driven: suppressedResolvers return empty string for this host
  const currentHostConfig = getHostConfig(host);
  const suppressed = new Set(currentHostConfig.suppressedResolvers || []);
  let content = tmplContent.replace(/\{\{(\w+(?::[^}]+)?)\}\}/g, (match, fullKey) => {
    const parts = fullKey.split(':');
    const resolverName = parts[0];
    const args = parts.slice(1);
    if (suppressed.has(resolverName)) return '';
    const resolver = RESOLVERS[resolverName];
    if (!resolver) throw new Error(`Unknown placeholder {{${resolverName}}} in ${relTmplPath}`);
    return args.length > 0 ? resolver(ctx, args) : resolver(ctx);
  });

  // Check for any remaining unresolved placeholders
  const remaining = content.match(/\{\{(\w+(?::[^}]+)?)\}\}/g);
  if (remaining) {
    throw new Error(`Unresolved placeholders in ${relTmplPath}: ${remaining.join(', ')}`);
  }

  // Preprocess voice triggers: fold into description, strip field from frontmatter.
  // Must run BEFORE transformFrontmatter so all hosts see the updated description,
  // and BEFORE extractedDescription is used by external host metadata.
  content = processVoiceTriggers(content);

  // Re-extract description AFTER voice trigger preprocessing so Codex openai.yaml
  // metadata gets the updated description with voice triggers included.
  const postProcessDescription = extractNameAndDescription(content).description;

  // For Claude: strip sensitive: field (only Factory uses it)
  // For external hosts: route output, transform frontmatter, rewrite paths
  let symlinkLoop = false;
  if (host === 'claude') {
    content = transformFrontmatter(content, host);
  } else {
    const result = processExternalHost(content, tmplContent, host, skillDir, postProcessDescription, ctx, extractedName || undefined);
    content = result.content;
    outputPath = result.outputPath;
    symlinkLoop = result.symlinkLoop;
  }

  // Prepend generated header (after frontmatter)
  const header = GENERATED_HEADER.replace('{{SOURCE}}', path.basename(tmplPath));
  const fmEnd = content.indexOf('---', content.indexOf('---') + 3);
  if (fmEnd !== -1) {
    const insertAt = content.indexOf('\n', fmEnd) + 1;
    content = content.slice(0, insertAt) + header + content.slice(insertAt);
  } else {
    content = header + content;
  }

  return { outputPath, content, symlinkLoop };
}

// ─── Main ───────────────────────────────────────────────────

function findTemplates(): string[] {
  return discoverTemplates(ROOT).map(t => path.join(ROOT, t.tmpl));
}

const ALL_HOSTS: Host[] = ALL_HOST_NAMES as Host[];
const hostsToRun: Host[] = HOST_ARG_VAL === 'all' ? ALL_HOSTS : [HOST];
const failures: { host: string; error: Error }[] = [];

for (const currentHost of hostsToRun) {
  HOST = currentHost;

  try {
    let hasChanges = false;
    const tokenBudget: Array<{ skill: string; lines: number; tokens: number }> = [];

    const currentHostConfig = getHostConfig(currentHost);
    for (const tmplPath of findTemplates()) {
      const dir = path.basename(path.dirname(tmplPath));

      // includeSkills allowlist (union logic: include minus skip)
      if (currentHostConfig.generation.includeSkills?.length) {
        if (!currentHostConfig.generation.includeSkills.includes(dir)) continue;
      }
      // skipSkills denylist (subtracts from includeSkills or full set)
      if (currentHostConfig.generation.skipSkills?.length) {
        if (currentHostConfig.generation.skipSkills.includes(dir)) continue;
      }

      const { outputPath, content, symlinkLoop } = processTemplate(tmplPath, currentHost);
      const relOutput = path.relative(ROOT, outputPath);

      if (symlinkLoop) {
        console.log(`SKIPPED (symlink loop): ${relOutput}`);
      } else if (DRY_RUN) {
        const existing = fs.existsSync(outputPath) ? fs.readFileSync(outputPath, 'utf-8') : '';
        if (existing !== content) {
          console.log(`STALE: ${relOutput}`);
          hasChanges = true;
        } else {
          console.log(`FRESH: ${relOutput}`);
        }
      } else {
        fs.writeFileSync(outputPath, content);
        console.log(`GENERATED: ${relOutput}`);
      }

      // Track token budget
      const lines = content.split('\n').length;
      const tokens = Math.round(content.length / 4); // ~4 chars per token
      tokenBudget.push({ skill: relOutput, lines, tokens });

      // Token ceiling check: warn if any generated SKILL.md exceeds ~40K tokens (160KB).
      // The ceiling is a "watch for feature bloat" guardrail, not a hard gate. Modern
      // flagship models have 200K-1M context windows, so 40K (4-20% of window) is fine.
      // Prompt caching further reduces the marginal cost of larger skills. This ceiling
      // exists to catch a runaway preamble or resolver that's grown by 10K+ tokens in
      // a release, not to force compression on carefully-tuned big skills (ship,
      // plan-ceo-review, office-hours all legitimately pack 25-35K tokens of behavior).
      const TOKEN_CEILING_BYTES = 160_000;
      if (content.length > TOKEN_CEILING_BYTES) {
        console.warn(`⚠️ TOKEN CEILING: ${relOutput} is ${content.length} bytes (~${tokens} tokens), exceeds ${TOKEN_CEILING_BYTES} byte ceiling (~40K tokens)`);
      }
    }

    // Generate gstack-lite and gstack-full for OpenClaw host
    if (currentHost === 'openclaw' && !DRY_RUN) {
      const openclawDir = path.join(ROOT, 'openclaw');
      if (!fs.existsSync(openclawDir)) fs.mkdirSync(openclawDir, { recursive: true });

      const gstackLite = `# gstack-lite Planning Discipline

Injected by the orchestrator into spawned Claude Code sessions. Append to existing CLAUDE.md.

## Planning Discipline
1. Read every file you will modify. Understand existing patterns first.
2. Before writing code, state your plan: what, why, which files, test case, risk.
3. When ambiguous, prefer: completeness over shortcuts, existing patterns over new ones,
   reversible choices over irreversible ones, safe defaults over clever ones.
4. Self-review your changes before reporting done. Check for: missed files, broken
   imports, untested paths, style inconsistencies.
5. Report when done: what shipped, what decisions you made, anything uncertain.
`;
      fs.writeFileSync(path.join(openclawDir, 'gstack-lite-CLAUDE.md'), gstackLite);
      console.log('GENERATED: openclaw/gstack-lite-CLAUDE.md');

      const gstackFull = `# gstack-full Pipeline

Injected by the orchestrator for complete feature builds. Append to existing CLAUDE.md.

## Full Pipeline
1. Read CLAUDE.md and understand the project context.
2. Run /autoplan to review your approach (CEO + eng + design review pipeline).
3. Implement the approved plan. Follow the planning discipline above.
4. Run /ship to create a PR with tests, changelog, and version bump.
5. Report back: PR URL, what shipped, decisions made, anything uncertain.

Do not ask for human input until the PR is ready for review.
`;
      fs.writeFileSync(path.join(openclawDir, 'gstack-full-CLAUDE.md'), gstackFull);
      console.log('GENERATED: openclaw/gstack-full-CLAUDE.md');

      const gstackPlan = `# gstack-plan: Full Review Gauntlet

Injected by the orchestrator when the user wants to plan a Claude Code project.
Append to existing CLAUDE.md.

## Planning Pipeline
1. Read CLAUDE.md and understand the project context.
2. Run /office-hours to produce a design doc (problem statement, premises, alternatives).
3. Run /autoplan to review the design (CEO + eng + design + DX reviews + codex adversarial).
4. Save the final reviewed plan to a file the orchestrator can reference later.
   Write it to: plans/<project-slug>-plan-<date>.md in the current repo.
   Include the design doc, all review decisions, and the implementation sequence.
5. Report back to the orchestrator:
   - Plan file path
   - One-paragraph summary of what was designed and the key decisions
   - List of accepted scope expansions (if any)
   - Recommended next step (usually: spawn a new session with gstack-full to implement)

Do not implement anything. This is planning only.
The orchestrator will persist the plan link to its own memory/knowledge store.
`;
      fs.writeFileSync(path.join(openclawDir, 'gstack-plan-CLAUDE.md'), gstackPlan);
      console.log('GENERATED: openclaw/gstack-plan-CLAUDE.md');
    }

    if (DRY_RUN && hasChanges) {
      console.error(`\nGenerated SKILL.md files are stale (${currentHost} host). Run: bun run gen:skill-docs --host ${currentHost}`);
      if (HOST_ARG_VAL !== 'all') process.exit(1);
      failures.push({ host: currentHost, error: new Error('Stale files detected') });
    }

    // Print token budget summary
    if (!DRY_RUN && tokenBudget.length > 0) {
      tokenBudget.sort((a, b) => b.lines - a.lines);
      const totalLines = tokenBudget.reduce((s, t) => s + t.lines, 0);
      const totalTokens = tokenBudget.reduce((s, t) => s + t.tokens, 0);

      console.log('');
      console.log(`Token Budget (${currentHost} host)`);
      console.log('═'.repeat(60));
      for (const t of tokenBudget) {
        const hostSubdirs = ALL_HOST_CONFIGS.map(c => c.hostSubdir.replace('.', '\\.')).join('|');
        const name = t.skill.replace(/\/SKILL\.md$/, '').replace(new RegExp(`^\\.(${hostSubdirs})\\/skills\\/`), '');
        console.log(` ${name.padEnd(30)} ${String(t.lines).padStart(5)} lines ~${String(t.tokens).padStart(6)} tokens`);
      }
      console.log('─'.repeat(60));
      console.log(` ${'TOTAL'.padEnd(30)} ${String(totalLines).padStart(5)} lines ~${String(totalTokens).padStart(6)} tokens`);
      console.log('');
    }
  } catch (e) {
    failures.push({ host: currentHost, error: e as Error });
    console.error(`WARNING: ${currentHost} generation failed: ${(e as Error).message}`);
  }
}

// --host all: report failures. Only exit(1) if claude failed.
if (failures.length > 0 && HOST_ARG_VAL === 'all') {
  console.error(`\n${failures.length} host(s) failed: ${failures.map(f => f.host).join(', ')}`);
  if (failures.some(f => f.host === 'claude')) process.exit(1);
}
// Single host dry-run failure already handled above

// After all hosts processed, warn if prefix patches may need re-applying
if (!DRY_RUN) {
  try {
    const configPath = path.join(process.env.HOME || '', '.gstack', 'config.yaml');
    if (fs.existsSync(configPath)) {
      const config = fs.readFileSync(configPath, 'utf-8');
      if (/^skill_prefix:\s*true/m.test(config)) {
        console.log('\nNote: skill_prefix is true. Run gstack-relink to re-apply name: patches.');
      }
    }
  } catch { /* non-fatal */ }
}

// Regenerate gstack/llms.txt — single-file capability index for AI agents.
// Runs after SKILL.md generation so it sees current skill descriptions and
// browse command list. Wrapped in an IIFE so the await-import doesn't make
// this module async (test/gen-skill-docs.test.ts uses require() to pull
// extractVoiceTriggers/processVoiceTriggers, which fails on async modules).
// Freshness is asserted in test/llms-txt-shape.test.ts.
if (!DRY_RUN) {
  void (async () => {
    try {
      const result = await writeLlmsTxt();
      if (result.warnings.length > 0) {
        for (const w of result.warnings) console.error(`[gen-llms-txt] WARN: ${w}`);
      } else {
        console.log(`[gen-llms-txt] gstack/llms.txt: ${result.skills.length} skills, ${result.browseCommands.length} browse commands`);
      }
    } catch (err) {
      const msg = err instanceof Error ? err.message : String(err);
      console.error(`[gen-llms-txt] FAILED: ${msg}`);
    }
  })();
}