mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-15 16:52:14 +08:00
* fix(learnings): accept type:"investigation" in gstack-learnings-log The /investigate skill instructed agents to log learnings with type:"investigation", but bin/gstack-learnings-log:22 rejected anything not in [pattern, pitfall, preference, architecture, tool, operational]. Every investigation run exited 1 to stderr and the learning was dropped, silently to the user. Fix: add 'investigation' to ALLOWED_TYPES. Regression test: round-trips a learning with type:"investigation" and asserts exit 0 + file write; second test reads investigate/SKILL.md.tmpl and asserts it emits the literal type:"investigation" string, guarding the template/validator contract at both ends. Fixes #1423. Reported by diogolealassis. * fix(gbrain): engine detection survives gbrain ≥0.25 schema + non-zero doctor exit freshDetectEngineTier() in lib/gstack-memory-helpers.ts returned engine: "unknown" for every Supabase user on gbrain ≥0.25. Two stacking bugs: 1. execSync("gbrain doctor --json --fast 2>/dev/null") threw on non-zero exit. gbrain doctor exits 1 whenever health_score < 100, which is essentially every fresh install due to resolver_health warnings. The JSON output never reached the parser. 2. gbrain ≥0.25 shipped schema_version:2 doctor output that dropped the top-level 'engine' field entirely. Result: every /sync-gbrain on Supabase logged 'engine=unknown' and skipped all sync stages silently. Fix: - Replace execSync with execFileSync (no shell, no bash-specific 2>/dev/null redirect; portable to Windows). - Recover stdout from the thrown error object so non-zero exits still parse. - Fall back to reading gbrain's config.json (respecting GBRAIN_HOME env var, defaulting to ~/.gbrain/config.json) when doctor output doesn't surface an engine field. - Add logGbrainError() helper that appends one-line JSONL to ~/.gstack/.gbrain-errors.jsonl on parse failure, so future regressions leave a forensic trail. The "supabase" tier here means "remote postgres" in practice — gbrain config uses engine:"postgres" for both real Supabase and any other remote postgres (e.g. local-postgres-for-testing). Downstream sync code treats them identically, so the label compression is intentional and documented inline. Regression test: existing detectEngineTier suite now isolates HOME + GBRAIN_HOME + PATH to temp dirs (closes a flake source where the prior tests would read whatever was on the reviewer's machine). New test forces gbrain off PATH, writes a synthetic config.json with engine:"postgres", asserts detectEngineTier() returns engine:"supabase". Fixes #1415. Patch shape contributed by Shiv @shivasymbl (tested on gstack v1.31.0.0 + gbrain v0.31.3 + Supabase). * fix(codex): /codex review works on Codex CLI ≥0.130.0 Codex CLI 0.130.0 made [PROMPT] and --base <BRANCH> mutually exclusive at argv level. Step 2A of codex/SKILL.md.tmpl had always passed both (the filesystem boundary prefix as the prompt argument + the base branch), so every /codex review call died with: error: the argument '[PROMPT]' cannot be used with '--base <BRANCH>' Fix: split Step 2A into two paths. Default (no custom user instructions): bare 'codex review --base <base>'. Codex's review prompt is internally diff-scoped, so the model focuses on the changes against base. The filesystem boundary prefix is dropped here because Codex 0.130 has no documented system-prompt config key (probed -c 'system_prompt="..."' against 0.130 — the flag is silently accepted but the value isn't applied). Skill files under .claude/ and agents/ are public, so this is a token-efficiency concern, not a safety one. Custom instructions (/codex review <focus>): route through codex exec with the diff written to a tempfile, inlined into the prompt between explicit DIFF_START / DIFF_END markers. The boundary is preserved here because codex exec isn't auto-scoped to the diff. The DIFF_START/END delimiters tell the model where data ends and instructions resume, which materially reduces prompt-injection hijack rates when the diff contains adversarial content. Note on bash semantics: codex's earlier review flagged the exec route as "command injection via $_DIFF interpolation." That framing is wrong — bash parameter expansion does not re-evaluate $(...) or backticks inside the expanded value, so a diff containing $(rm -rf /) is plain string data to codex exec. The real risk is prompt injection (model-side, not shell-side), which the DIFF_START/END pattern mitigates. Regression tests in test/codex-hardening.test.ts assert across BOTH codex/SKILL.md.tmpl AND the generated codex/SKILL.md: 1. No 'codex review' invocation line combines a quoted-string OR variable positional argument with --base. 2. Step 2A still contains either bare 'codex review --base' OR 'codex exec' (guards against accidental deletion of both fix paths). Fixes #1428. Reported by Stashub. * test: raise timeouts for slow integration tests Two test files were timing out at the default 5s on developer machines, both pre-existing on origin/main but unrelated to this branch's bug fixes: - test/gstack-artifacts-init.test.ts: 13 tests spawning real subprocesses via fake gh/glab/git shims in PATH. bun's fork+exec overhead pushed these past 5s consistently. Added a local test-wrapper that aliases test() with a 30s timeout (matches the brain-sync.test.ts pattern already in the repo). - test/gstack-next-version.test.ts: one integration smoke test that spawns 'bun run ./bin/gstack-next-version' and parses the resulting JSON. The subprocess does a 'gh pr list' against the live GitHub API to enumerate claimed version slots. Network latency makes 5s tight; raised this single test to 30s. No production code changed. The tests already passed deterministically once given enough wall-clock time. * chore: bump version and changelog (v1.34.2.0) Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
305 lines
13 KiB
TypeScript
305 lines
13 KiB
TypeScript
import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
|
|
import { execSync, ExecSyncOptionsWithStringEncoding } from 'child_process';
|
|
import * as fs from 'fs';
|
|
import * as path from 'path';
|
|
import * as os from 'os';
|
|
|
|
const ROOT = path.resolve(import.meta.dir, '..');
|
|
const BIN = path.join(ROOT, 'bin');
|
|
|
|
let tmpDir: string;
|
|
let slugDir: string;
|
|
let learningsFile: string;
|
|
|
|
function runLog(input: string, opts: { expectFail?: boolean } = {}): { stdout: string; exitCode: number } {
|
|
const execOpts: ExecSyncOptionsWithStringEncoding = {
|
|
cwd: ROOT,
|
|
env: { ...process.env, GSTACK_HOME: tmpDir },
|
|
encoding: 'utf-8',
|
|
timeout: 15000,
|
|
};
|
|
try {
|
|
const stdout = execSync(`${BIN}/gstack-learnings-log '${input.replace(/'/g, "'\\''")}'`, execOpts).trim();
|
|
return { stdout, exitCode: 0 };
|
|
} catch (e: any) {
|
|
if (opts.expectFail) {
|
|
return { stdout: e.stderr?.toString() || '', exitCode: e.status || 1 };
|
|
}
|
|
throw e;
|
|
}
|
|
}
|
|
|
|
function runSearch(args: string = ''): string {
|
|
const execOpts: ExecSyncOptionsWithStringEncoding = {
|
|
cwd: ROOT,
|
|
env: { ...process.env, GSTACK_HOME: tmpDir },
|
|
encoding: 'utf-8',
|
|
timeout: 15000,
|
|
};
|
|
try {
|
|
return execSync(`${BIN}/gstack-learnings-search ${args}`, execOpts).trim();
|
|
} catch {
|
|
return '';
|
|
}
|
|
}
|
|
|
|
beforeEach(() => {
|
|
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-learn-'));
|
|
slugDir = path.join(tmpDir, 'projects');
|
|
fs.mkdirSync(slugDir, { recursive: true });
|
|
});
|
|
|
|
afterEach(() => {
|
|
fs.rmSync(tmpDir, { recursive: true, force: true });
|
|
});
|
|
|
|
function findLearningsFile(): string | null {
|
|
const projectDirs = fs.readdirSync(slugDir);
|
|
if (projectDirs.length === 0) return null;
|
|
const f = path.join(slugDir, projectDirs[0], 'learnings.jsonl');
|
|
return fs.existsSync(f) ? f : null;
|
|
}
|
|
|
|
describe('gstack-learnings-log', () => {
|
|
test('appends valid JSON to learnings.jsonl', () => {
|
|
const input = '{"skill":"review","type":"pattern","key":"test-key","insight":"test insight","confidence":8,"source":"observed"}';
|
|
const result = runLog(input);
|
|
expect(result.exitCode).toBe(0);
|
|
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const content = fs.readFileSync(f!, 'utf-8').trim();
|
|
const parsed = JSON.parse(content);
|
|
expect(parsed.skill).toBe('review');
|
|
expect(parsed.key).toBe('test-key');
|
|
expect(parsed.confidence).toBe(8);
|
|
});
|
|
|
|
test('auto-injects timestamp when ts is missing', () => {
|
|
const input = '{"skill":"review","type":"pattern","key":"ts-test","insight":"test","confidence":5,"source":"observed"}';
|
|
runLog(input);
|
|
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const parsed = JSON.parse(fs.readFileSync(f!, 'utf-8').trim());
|
|
expect(parsed.ts).toBeDefined();
|
|
expect(new Date(parsed.ts).getTime()).toBeGreaterThan(0);
|
|
});
|
|
|
|
test('rejects non-JSON input with non-zero exit code', () => {
|
|
const result = runLog('not json at all', { expectFail: true });
|
|
expect(result.exitCode).not.toBe(0);
|
|
});
|
|
|
|
test('append-only: duplicate keys create multiple entries', () => {
|
|
const input1 = '{"skill":"review","type":"pattern","key":"dup-key","insight":"first version","confidence":6,"source":"observed"}';
|
|
const input2 = '{"skill":"review","type":"pattern","key":"dup-key","insight":"second version","confidence":8,"source":"observed"}';
|
|
runLog(input1);
|
|
runLog(input2);
|
|
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const lines = fs.readFileSync(f!, 'utf-8').trim().split('\n');
|
|
expect(lines.length).toBe(2);
|
|
});
|
|
|
|
// Regression test for #1423: investigate skill emits type:"investigation"
|
|
// but ALLOWED_TYPES previously rejected it. Now accepted.
|
|
test('accepts type:"investigation" (regression: #1423)', () => {
|
|
const input = '{"skill":"investigate","type":"investigation","key":"root-cause","insight":"verified","confidence":9,"source":"observed"}';
|
|
const result = runLog(input);
|
|
expect(result.exitCode).toBe(0);
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const parsed = JSON.parse(fs.readFileSync(f!, 'utf-8').trim());
|
|
expect(parsed.type).toBe('investigation');
|
|
});
|
|
|
|
// Caller contract: investigate/SKILL.md.tmpl must emit type:"investigation"
|
|
// verbatim. Guards against the template drifting to an invalid type and
|
|
// silently breaking the log path. See codex review finding for #1423.
|
|
test('investigate template emits type:"investigation" verbatim (caller contract)', () => {
|
|
const tmpl = fs.readFileSync(path.join(ROOT, 'investigate/SKILL.md.tmpl'), 'utf-8');
|
|
// The invocation line must include "type":"investigation" exactly.
|
|
expect(tmpl).toContain('"type":"investigation"');
|
|
});
|
|
});
|
|
|
|
describe('gstack-learnings-search', () => {
|
|
test('returns empty and exits 0 when no learnings file exists', () => {
|
|
const output = runSearch();
|
|
expect(output).toBe('');
|
|
});
|
|
|
|
test('returns formatted output when learnings exist', () => {
|
|
runLog('{"skill":"review","type":"pattern","key":"test-search","insight":"search test insight","confidence":7,"source":"observed"}');
|
|
const output = runSearch();
|
|
expect(output).toContain('LEARNINGS:');
|
|
expect(output).toContain('test-search');
|
|
expect(output).toContain('search test insight');
|
|
});
|
|
|
|
test('deduplicates entries by key+type (latest wins)', () => {
|
|
const old = JSON.stringify({ skill: 'review', type: 'pattern', key: 'dedup-test', insight: 'old version', confidence: 5, source: 'observed', ts: '2026-01-01T00:00:00Z' });
|
|
const newer = JSON.stringify({ skill: 'review', type: 'pattern', key: 'dedup-test', insight: 'new version', confidence: 8, source: 'observed', ts: '2026-03-28T00:00:00Z' });
|
|
runLog(old);
|
|
runLog(newer);
|
|
|
|
const output = runSearch();
|
|
expect(output).toContain('new version');
|
|
expect(output).not.toContain('old version');
|
|
expect(output).toContain('1 loaded');
|
|
});
|
|
|
|
test('filters by --type', () => {
|
|
runLog('{"skill":"review","type":"pattern","key":"p1","insight":"a pattern","confidence":7,"source":"observed"}');
|
|
runLog('{"skill":"review","type":"pitfall","key":"p2","insight":"a pitfall","confidence":7,"source":"observed"}');
|
|
|
|
const patternOnly = runSearch('--type pattern');
|
|
expect(patternOnly).toContain('p1');
|
|
expect(patternOnly).not.toContain('p2');
|
|
});
|
|
|
|
test('filters by --query', () => {
|
|
runLog('{"skill":"review","type":"pattern","key":"auth-bypass","insight":"check session tokens","confidence":7,"source":"observed"}');
|
|
runLog('{"skill":"review","type":"pattern","key":"n-plus-one","insight":"use includes for associations","confidence":7,"source":"observed"}');
|
|
|
|
const authOnly = runSearch('--query auth');
|
|
expect(authOnly).toContain('auth-bypass');
|
|
expect(authOnly).not.toContain('n-plus-one');
|
|
});
|
|
|
|
test('respects --limit', () => {
|
|
for (let i = 0; i < 5; i++) {
|
|
runLog(`{"skill":"review","type":"pattern","key":"limit-${i}","insight":"insight ${i}","confidence":7,"source":"observed"}`);
|
|
}
|
|
|
|
const limited = runSearch('--limit 2');
|
|
// Should show 2, not 5
|
|
expect(limited).toContain('2 loaded');
|
|
});
|
|
|
|
test('applies confidence decay for observed/inferred sources', () => {
|
|
// Entry from 90 days ago with source=observed, confidence=8
|
|
// Should decay to 8 - floor(90/30) = 8 - 3 = 5
|
|
const ts = new Date(Date.now() - 90 * 86400000).toISOString();
|
|
runLog(`{"skill":"review","type":"pattern","key":"decay-test","insight":"old observation","confidence":8,"source":"observed","ts":"${ts}"}`);
|
|
|
|
const output = runSearch();
|
|
// Should show confidence 5 (decayed from 8)
|
|
expect(output).toContain('confidence: 5/10');
|
|
});
|
|
|
|
test('does NOT decay user-stated learnings', () => {
|
|
const ts = new Date(Date.now() - 90 * 86400000).toISOString();
|
|
runLog(`{"skill":"review","type":"preference","key":"no-decay-test","insight":"user preference","confidence":9,"source":"user-stated","ts":"${ts}"}`);
|
|
|
|
const output = runSearch();
|
|
// Should still show confidence 9 (no decay for user-stated)
|
|
expect(output).toContain('confidence: 9/10');
|
|
});
|
|
|
|
test('skips malformed JSONL lines gracefully', () => {
|
|
// Write a valid entry, then manually append a bad line
|
|
runLog('{"skill":"review","type":"pattern","key":"valid-entry","insight":"valid","confidence":7,"source":"observed"}');
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
fs.appendFileSync(f!, '\nthis is not json\n');
|
|
fs.appendFileSync(f!, '{"skill":"review","type":"pattern","key":"also-valid","insight":"also valid","confidence":6,"source":"observed","ts":"2026-03-28T00:00:00Z"}\n');
|
|
|
|
const output = runSearch();
|
|
expect(output).toContain('valid-entry');
|
|
expect(output).toContain('also-valid');
|
|
});
|
|
});
|
|
|
|
describe('gstack-learnings-log edge cases', () => {
|
|
test('preserves existing timestamp when ts is present', () => {
|
|
const input = '{"skill":"review","type":"pattern","key":"ts-preserve","insight":"test","confidence":5,"source":"observed","ts":"2025-06-15T10:00:00Z"}';
|
|
runLog(input);
|
|
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const parsed = JSON.parse(fs.readFileSync(f!, 'utf-8').trim());
|
|
expect(parsed.ts).toBe('2025-06-15T10:00:00Z');
|
|
});
|
|
|
|
test('handles JSON with special characters in insight', () => {
|
|
const input = JSON.stringify({ skill: 'review', type: 'pattern', key: 'special-chars', insight: 'Use "quotes" and \\backslashes', confidence: 7, source: 'observed' });
|
|
runLog(input);
|
|
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const parsed = JSON.parse(fs.readFileSync(f!, 'utf-8').trim());
|
|
expect(parsed.insight).toContain('quotes');
|
|
expect(parsed.insight).toContain('backslashes');
|
|
});
|
|
|
|
test('handles JSON with files array field', () => {
|
|
const input = JSON.stringify({ skill: 'review', type: 'architecture', key: 'with-files', insight: 'test', confidence: 8, source: 'observed', files: ['src/auth.ts', 'src/db.ts'] });
|
|
runLog(input);
|
|
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
const parsed = JSON.parse(fs.readFileSync(f!, 'utf-8').trim());
|
|
expect(parsed.files).toEqual(['src/auth.ts', 'src/db.ts']);
|
|
});
|
|
});
|
|
|
|
describe('gstack-learnings-search edge cases', () => {
|
|
test('sorts by confidence then recency', () => {
|
|
// Two entries: one high confidence old, one lower confidence recent
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'high-conf', insight: 'high confidence entry', confidence: 9, source: 'user-stated', ts: '2026-01-01T00:00:00Z' }));
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'recent', insight: 'recent entry', confidence: 5, source: 'observed', ts: '2026-03-28T00:00:00Z' }));
|
|
|
|
const output = runSearch();
|
|
const highIdx = output.indexOf('high-conf');
|
|
const recentIdx = output.indexOf('recent');
|
|
// High confidence should appear first
|
|
expect(highIdx).toBeLessThan(recentIdx);
|
|
});
|
|
|
|
test('groups output by type', () => {
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'p1', insight: 'a pattern', confidence: 7, source: 'observed' }));
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pitfall', key: 'pit1', insight: 'a pitfall', confidence: 7, source: 'observed' }));
|
|
|
|
const output = runSearch();
|
|
expect(output).toContain('## Patterns');
|
|
expect(output).toContain('## Pitfalls');
|
|
});
|
|
|
|
test('combined --type and --query filtering', () => {
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'auth-token', insight: 'check token expiry', confidence: 7, source: 'observed' }));
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pitfall', key: 'auth-leak', insight: 'auth token in logs', confidence: 7, source: 'observed' }));
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'cache-key', insight: 'cache invalidation', confidence: 7, source: 'observed' }));
|
|
|
|
const output = runSearch('--type pattern --query auth');
|
|
expect(output).toContain('auth-token');
|
|
expect(output).not.toContain('auth-leak'); // wrong type
|
|
expect(output).not.toContain('cache-key'); // wrong query
|
|
});
|
|
|
|
test('entries with missing key or type are skipped', () => {
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'valid', insight: 'valid entry', confidence: 7, source: 'observed' }));
|
|
const f = findLearningsFile();
|
|
expect(f).not.toBeNull();
|
|
// Append entries missing key and type
|
|
fs.appendFileSync(f!, JSON.stringify({ skill: 'review', type: 'pattern', insight: 'no key', confidence: 7, source: 'observed' }) + '\n');
|
|
fs.appendFileSync(f!, JSON.stringify({ skill: 'review', key: 'no-type', insight: 'no type', confidence: 7, source: 'observed' }) + '\n');
|
|
|
|
const output = runSearch();
|
|
expect(output).toContain('valid');
|
|
expect(output).not.toContain('no key');
|
|
expect(output).not.toContain('no-type');
|
|
});
|
|
|
|
test('confidence decay floors at 0 (never negative)', () => {
|
|
// Entry from 1 year ago with confidence 3 — decay would be 12, clamped to 0
|
|
const ts = new Date(Date.now() - 365 * 86400000).toISOString();
|
|
runLog(JSON.stringify({ skill: 'review', type: 'pattern', key: 'ancient', insight: 'very old', confidence: 3, source: 'observed', ts }));
|
|
|
|
const output = runSearch();
|
|
expect(output).toContain('confidence: 0/10');
|
|
});
|
|
});
|