mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-08 21:49:45 +08:00
* feat: gstack-gbrain-mcp-verify helper for remote MCP probe
Probes a remote gbrain MCP endpoint with bearer auth. POSTs initialize,
classifies failures into NETWORK / AUTH / MALFORMED with one-line
remediation hints, and runs a tools/list capability probe to detect
sources_add MCP support (forward-compat for when gbrain ships URL ingest).
Token is read from the GBRAIN_MCP_TOKEN env var, never argv. The Accept
header must include both 'application/json' AND 'text/event-stream';
missing either cost 10 minutes of debugging (now regression-tested).
Live-verified against wintermute (gbrain v0.27.1).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
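A minimal sketch of the failure classification the helper performs. The probe shape, field names, and hint strings are illustrative, not the real gstack-gbrain-mcp-verify internals; the NETWORK / AUTH / MALFORMED buckets and the dual-Accept requirement come from the commit above.

```typescript
// Illustrative classifier for the remote-MCP probe. The real helper POSTs a
// JSON-RPC `initialize` request; only the classification step is sketched here.
type ErrorClass = 'NETWORK' | 'AUTH' | 'MALFORMED' | 'OK';

interface ProbeResult {
  networkError?: boolean;   // DNS / TLS / connection refused
  status?: number;          // HTTP status from the endpoint
  bodyIsJsonRpc?: boolean;  // response parsed as a JSON-RPC envelope
}

function classifyProbe(r: ProbeResult): ErrorClass {
  if (r.networkError) return 'NETWORK';                    // hint: check URL / firewall
  if (r.status === 401 || r.status === 403) return 'AUTH'; // hint: re-issue bearer token
  if (!r.bodyIsJsonRpc) return 'MALFORMED';                // hint: wrong endpoint path?
  return 'OK';
}

// The Accept gotcha from the commit: both content types must be present.
const probeHeaders = {
  Authorization: `Bearer ${process.env.GBRAIN_MCP_TOKEN ?? ''}`,
  'Content-Type': 'application/json',
  Accept: 'application/json, text/event-stream',
};
```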
* feat: gstack-artifacts-init + gstack-artifacts-url helpers
artifacts-init replaces brain-init with provider choice (gh / glab /
manual), per-user gstack-artifacts-$USER repo, HTTPS-canonical storage in
~/.gstack-artifacts-remote.txt, and a "send this to your brain admin"
hookup printout. Always prints the command, never auto-executes — gbrain
v0.26.x has no admin-scope MCP probe (codex Finding #3).
artifacts-url centralizes HTTPS↔SSH/host/owner-repo conversion so callers
don't each string-mangle (codex Finding #10). The remote-conflict check in
artifacts-init compares at the canonical level so re-running with HTTPS
input doesn't trip on a stored SSH URL for the same logical repo.
The "URL form not supported" branch prints a two-line clone-then-path
form for gbrain v0.26.x; the supported branch is a one-liner with --url
ready for when gbrain ships URL ingest.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
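The canonical-level comparison can be sketched as below. Function names and the exact normalization rules are illustrative; the real conversion lives in gstack-artifacts-url.

```typescript
// Illustrative HTTPS↔SSH canonicalization: both URL forms of the same logical
// repo reduce to one canonical HTTPS string before comparison, so re-running
// init with an HTTPS URL doesn't conflict with a stored SSH URL.
function canonicalize(url: string): string {
  const ssh = url.match(/^git@([^:]+):(.+?)(?:\.git)?$/); // git@host:owner/repo(.git)
  const https = ssh ? `https://${ssh[1]}/${ssh[2]}` : url;
  return https.replace(/\.git$/, '').replace(/\/+$/, '');
}

function sameLogicalRepo(a: string, b: string): boolean {
  return canonicalize(a) === canonicalize(b);
}
```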
* feat: extend gstack-gbrain-detect with mcp_mode + artifacts_remote
Adds two new fields to detect's JSON output:
- gbrain_mcp_mode: local-stdio | remote-http | none
Resolved via 3-tier fallback (codex Finding D3): claude mcp get --json
→ claude mcp list text-grep → ~/.claude.json jq read. If Anthropic moves
the file format, the first two tiers absorb it.
- gstack_artifacts_remote: HTTPS URL from ~/.gstack-artifacts-remote.txt
Falls back to ~/.gstack-brain-remote.txt during the v1.27.0.0 migration
window so detect doesn't return empty between upgrade and migration.
Existing detect tests still pass (15/15). 19 new tests cover every fallback
tier independently, plus a schema regression for /sync-gbrain compat.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
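The tiered resolution above reduces to a first-hit-wins chain; a sketch, where the resolver bodies are stand-ins for the real `claude mcp get --json` / text-grep / jq reads:

```typescript
// First resolver that returns a value wins; a throwing or empty tier falls
// through to the next, so a format change in one tier degrades gracefully.
type Resolver = () => string | null;

function resolveMcpMode(tiers: Resolver[]): string {
  for (const tier of tiers) {
    try {
      const v = tier();
      if (v) return v;
    } catch {
      // tier unavailable (command missing, JSON shape changed); fall through
    }
  }
  return 'none';
}
```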
* feat: setup-gbrain Path 4 (remote MCP) + artifacts rename
Path 4 lets users paste an HTTPS MCP URL + bearer token and registers it
as an HTTP-transport MCP without needing a local gbrain CLI install. The
flow:
- Step 2 gains a fourth option (Remote gbrain MCP)
- Step 4 adds Path 4 sub-flow: collect URL, secret-read bearer, verify
via gstack-gbrain-mcp-verify (NETWORK / AUTH / MALFORMED classifier)
- Step 5 (local doctor), Step 7.5 (transcript ingest), Step 5a's stdio
branch all skip on Path 4
- Step 5a adds an HTTP+bearer registration form: claude mcp add
--transport http --header "Authorization: Bearer ..."
- Step 7 renamed "session memory sync" → "artifacts sync" and now calls
gstack-artifacts-init (which always prints the brain-admin hookup
command — no auto-execute, codex Finding #3)
- Step 8 CLAUDE.md block branches: remote-http includes URL + server
version (never the token); local-stdio keeps engine + config-file
- Step 9 smoke test on Path 4 prints the curl-equivalent for
post-restart verification (MCP tools aren't visible mid-session)
- Step 10 verdict block has separate templates per mode
Idempotency: re-running with gbrain_mcp_mode=remote-http already in
detect output skips Step 2 entirely and goes to verification.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
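The Step 8 branching can be sketched as a pure emitter. The field layout is illustrative; the invariant from the commit is that the bearer token never reaches CLAUDE.md.

```typescript
// Mode-aware CLAUDE.md block: remote-http gets URL + server version,
// local-stdio gets engine + config file. The bearer token is not a parameter
// at all, so it cannot leak into the rendered block.
type McpMode = 'remote-http' | 'local-stdio';

interface BlockInfo {
  url?: string;
  serverVersion?: string;
  engine?: string;
  configFile?: string;
}

function claudeMdBlock(mode: McpMode, info: BlockInfo): string {
  if (mode === 'remote-http') {
    return `- gbrain MCP: remote-http ${info.url} (server v${info.serverVersion})`;
  }
  return `- gbrain MCP: local-stdio engine=${info.engine} config=${info.configFile}`;
}
```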
* refactor: rename gbrain_sync_mode → artifacts_sync_mode (v1.27.0.0 prep)
Hard rename, no dual-read alias (codex Finding D4). The on-disk migration
script (Phase C, separate commit) renames the config key in users'
~/.gstack/config.yaml and any CLAUDE.md blocks.
Touched call sites:
- bin/gstack-config defaults + validation + list/defaults output
- bin/gstack-gbrain-detect (gstack_brain_sync_mode field still emitted
with the same name for downstream-tool compat; reads new key)
- bin/gstack-brain-sync, bin/gstack-brain-enqueue, bin/gstack-brain-uninstall
- bin/gstack-timeline-log (comment ref)
- scripts/resolvers/preamble/generate-brain-sync-block.ts: renames key,
branches on gbrain_mcp_mode=remote-http to emit "ARTIFACTS_SYNC:
remote-mode (managed by brain server <host>)" instead of the local
mode/queue/last_push line (codex Finding #11)
- bin/gstack-brain-restore + bin/gstack-gbrain-source-wireup: read
~/.gstack-artifacts-remote.txt with ~/.gstack-brain-remote.txt fallback
during the migration window
- bin/gstack-artifacts-init: tolerant of unrecognized URL forms (local
paths, file://, self-hosted gitea) so test infrastructure and unusual
remotes work without canonicalization
- test/brain-sync.test.ts: gstack-brain-init → gstack-artifacts-init
- test/skill-e2e-brain-privacy-gate.test.ts: artifacts_sync_mode keys
- test/gen-skill-docs.test.ts: budget 35K → 36.5K for the new MCP-mode
probe in the preamble resolver
- health/SKILL.md.tmpl, sync-gbrain/SKILL.md.tmpl: comment + verdict line
Hard delete:
- bin/gstack-brain-init (replaced by bin/gstack-artifacts-init in v1.27.0.0)
- test/gstack-brain-init-gh-mock.test.ts (replaced by gstack-artifacts-init.test.ts)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: regenerate SKILL.md files after artifacts-sync rename
Mechanical regen via \`bun run gen:skill-docs --host all\`. All */SKILL.md
files reflect the renamed config key (gbrain_sync_mode →
artifacts_sync_mode), the renamed remote-helper file
(~/.gstack-artifacts-remote.txt with brain fallback), the renamed init
script (gstack-artifacts-init), and the new ARTIFACTS_SYNC: remote-mode
status line that fires when a remote-http MCP is registered.
Golden fixtures (test/fixtures/golden/*-ship-SKILL.md) refreshed to match
the regenerated default-ship output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: v1.27.0.0 migration — gstack-brain → gstack-artifacts rename
Journaled, interruption-safe migration. Six steps, each writes to
~/.gstack/.migrations/v1.27.0.0.journal on success; re-entry resumes
from the next un-done step. On final success, journal is replaced by
~/.gstack/.migrations/v1.27.0.0.done.
Steps:
1. gh_repo_renamed gh/glab repo rename gstack-brain-$USER →
gstack-artifacts-$USER (idempotent: detects
already-renamed and skips)
2. remote_txt_renamed mv ~/.gstack-brain-remote.txt → artifacts file,
rewriting URL path to match the new repo name
3. config_key_renamed sed -i in ~/.gstack/config.yaml flips
gbrain_sync_mode → artifacts_sync_mode
4. claude_md_block sed flips "- Memory sync:" → "- Artifacts sync:"
in cwd CLAUDE.md and ~/.gstack/CLAUDE.md
5. sources_swapped gbrain sources add NEW (verify) → remove OLD
(codex Finding #6: add-before-remove ordering,
no downtime window). On remote-MCP mode, prints
commands for the brain admin instead of executing.
6. done touchfile + delete journal
User opt-out: any "n" or "skip-for-now" answer at the initial prompt
writes a marker file that prevents re-prompting; user can re-invoke
via /setup-gbrain --rerun-migration.
11 unit tests cover: nothing-to-migrate, GitHub happy path, idempotent
re-run, journal-resume mid-flight, remote-MCP print-only path,
add-before-remove ordering verification, add-fail → old source stays
registered, CLAUDE.md field rewrite.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
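The journal discipline above amounts to: skip steps already journaled, journal each step only after it succeeds. A minimal sketch, where a Set stands in for the on-disk journal file at ~/.gstack/.migrations/v1.27.0.0.journal:

```typescript
// Interruption-safe step runner: a step is journaled only after success, so a
// crash mid-step re-runs that step on the next invocation and skips the rest.
function runMigration(steps: Array<[string, () => void]>, journal: Set<string>): string[] {
  const ran: string[] = [];
  for (const [name, fn] of steps) {
    if (journal.has(name)) continue; // done on a previous run: resume past it
    fn();                            // throws on failure, leaving the journal untouched
    journal.add(name);
    ran.push(name);
  }
  return ran;
}
```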
* test: regression suite + E2E for v1.27.0.0 rename
Three new regression tests guard the rename's blast radius (per codex
Findings #1, #8, #9, #12):
- test/no-stale-gstack-brain-refs.test.ts: greps bin/, scripts/, *.tmpl,
test/ for forbidden identifiers (gstack-brain-init, gbrain_sync_mode);
fails CI if any non-allowlisted file references them.
- test/post-rename-doc-regen.test.ts: confirms gen-skill-docs output has
no stale references in any */SKILL.md (the cross-product blind spot).
- test/setup-gbrain-path4-structure.test.ts: structural lint over the
Path 4 prose contract — STOP gates after verify failure, never-write-
token rules, mode-aware CLAUDE.md block, bearer always via env-var.
Two new gate-tier E2E tests (deterministic stub HTTP server, fixed inputs):
- test/skill-e2e-setup-gbrain-remote.test.ts: Path 4 happy path. Stubs
an HTTP MCP server, drives the skill via Agent SDK with a stubbed
bearer, asserts claude.json gets the http MCP entry, CLAUDE.md gets
the remote-http block, the secret token NEVER leaks to CLAUDE.md.
- test/skill-e2e-setup-gbrain-bad-token.test.ts: stub server returns 401;
asserts the AUTH classifier hint surfaces, no MCP registration occurs,
CLAUDE.md is unchanged. Regression guard for the "verify failed → STOP"
rule.
touchfiles.ts: setup-gbrain-remote and setup-gbrain-bad-token added at
gate-tier so CI catches Path 4 regressions on every PR.
Plus a few comment refs flipped: bin/gstack-jsonl-merge, bin/gstack-timeline-log
(legacy gstack-brain-init mentions in headers).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
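The forbidden-identifier scan reduces to a grep with an allowlist; a sketch over an in-memory file map (the real test walks bin/, scripts/, *.tmpl, and test/ on disk):

```typescript
// Flag any non-allowlisted file that still mentions a renamed identifier.
function staleRefs(
  files: Record<string, string>,
  forbidden: string[],
  allowlist: Set<string>,
): string[] {
  const hits: string[] = [];
  for (const [file, text] of Object.entries(files)) {
    if (allowlist.has(file)) continue; // e.g. the migration script may name the old key
    for (const id of forbidden) {
      if (text.includes(id)) hits.push(`${file}: ${id}`);
    }
  }
  return hits;
}
```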
* release: v1.27.0.0 — /setup-gbrain Path 4 + brain → artifacts rename
Bumps VERSION 1.26.4.0 → 1.27.0.0 (MINOR per CLAUDE.md scale-aware bump
guidance: ~1500 line net change including a new path in /setup-gbrain,
two new bin helpers, a journaled migration, 59 new tests, and a config
key rename across the codebase).
CHANGELOG entry covers: Path 4 (Remote MCP) end-to-end, the brain →
artifacts rename, the journaled migration, the verify-helper error
classifier, the artifacts-init multi-host provider choice. Includes
the canonical Garry-voice headline + numbers table + audience close
per the release-summary format.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: demote setup-gbrain Path 4 E2E to periodic-tier
The Agent SDK E2E tests for Path 4 (skill-e2e-setup-gbrain-remote and
skill-e2e-setup-gbrain-bad-token) are inherently non-deterministic —
the model interprets "follow Path 4 only" prompts flexibly and can
skip Step 8 (CLAUDE.md write) or shortcut past the verify helper, which
makes the gate-tier assertions flaky.
The deterministic gate coverage for Path 4 is in
test/setup-gbrain-path4-structure.test.ts: a fast structural lint that
catches AUQ-pacing regressions and prose contract drift in <200ms with
zero token spend. That test is the right tool for catching the failure
mode the gate-tier was meant to guard against.
The Agent SDK E2E tests stay available on-demand for periodic-tier runs
(EVALS=1 EVALS_TIER=periodic bun test test/skill-e2e-setup-gbrain-*.test.ts).
Also tightened the verify-error assertion to the literal field shape
("error_class": "AUTH") instead of a substring match that false-matches
the parent claude session's "needs-auth" MCP discovery markers.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: sync package.json version to 1.27.0.0
VERSION was bumped to 1.27.0.0 in f6ec11eb but package.json was not
updated in the same commit. The gen-skill-docs.test.ts assertion
"package.json version matches VERSION file" caught the drift.
This is the DRIFT_STALE_PKG case the /ship Step 12 idempotency check
is designed for; the fix is the documented sync-only repair (no
re-bump, package.json synced to existing VERSION).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
376 lines
17 KiB
TypeScript
/**
 * gbrain-sync integration tests.
 *
 * Covers the core cross-machine memory sync feature end-to-end:
 * - bin/gstack-config gbrain keys (validation, isolation)
 * - bin/gstack-brain-enqueue (atomicity, skip list, no-op gates)
 * - bin/gstack-jsonl-merge (3-way, ts-sort, hash-fallback)
 * - bin/gstack-brain-sync --once (drain, commit, push, secret-scan, skip-file)
 * - bin/gstack-artifacts-init + --restore round-trip
 * - bin/gstack-brain-uninstall preserves user data
 * - env isolation (GSTACK_HOME never bleeds into real ~/.gstack/config.yaml)
 *
 * Runs each test against a temp GSTACK_HOME and a local bare git repo as
 * a fake remote. No live GitHub, no live GBrain.
 */
import { describe, test as _test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { spawnSync } from 'child_process';

// Boost timeout: brain-sync tests spawn git, network-ls-remote, and 10-way
// parallel processes — 5s default is too tight.
const test = (name: string, fn: any) => _test(name, fn, 30000);

const ROOT = path.resolve(import.meta.dir, '..');
const BIN = path.join(ROOT, 'bin');

let tmpHome: string;
let bareRemote: string;

function run(argv: string[], opts: { env?: Record<string, string>; input?: string } = {}) {
  const bin = argv[0];
  const full = bin.startsWith('/') ? bin : path.join(BIN, bin);
  const res = spawnSync(full, argv.slice(1), {
    env: { ...process.env, GSTACK_HOME: tmpHome, ...(opts.env || {}) },
    encoding: 'utf-8',
    input: opts.input,
    cwd: ROOT,
  });
  return { stdout: res.stdout || '', stderr: res.stderr || '', status: res.status ?? -1 };
}
function git(args: string[], cwd?: string) {
  const res = spawnSync('git', args, { cwd: cwd || tmpHome, encoding: 'utf-8' });
  return { stdout: res.stdout || '', stderr: res.stderr || '', status: res.status ?? -1 };
}
beforeEach(() => {
  tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-sync-home-'));
  bareRemote = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-sync-remote-'));
  spawnSync('git', ['init', '--bare', '-q', '-b', 'main', bareRemote]);
});

afterEach(() => {
  fs.rmSync(tmpHome, { recursive: true, force: true });
  fs.rmSync(bareRemote, { recursive: true, force: true });
  // Clean up any remote-helper file init may have written.
  const remoteFile = path.join(os.homedir(), '.gstack-brain-remote.txt');
  // Only remove if it points at OUR bare remote (don't clobber a real user file).
  try {
    const contents = fs.readFileSync(remoteFile, 'utf-8').trim();
    if (contents === bareRemote) fs.unlinkSync(remoteFile);
  } catch {}
});
// ---------------------------------------------------------------
// Config key validation + env isolation
// ---------------------------------------------------------------
describe('gstack-config gbrain keys', () => {
  test('default artifacts_sync_mode is off', () => {
    const r = run(['gstack-config', 'get', 'artifacts_sync_mode']);
    expect(r.status).toBe(0);
    expect(r.stdout.trim()).toBe('off');
  });

  test('default artifacts_sync_mode_prompted is false', () => {
    const r = run(['gstack-config', 'get', 'artifacts_sync_mode_prompted']);
    expect(r.stdout.trim()).toBe('false');
  });

  test('accepts full / artifacts-only / off', () => {
    for (const val of ['full', 'artifacts-only', 'off']) {
      const set = run(['gstack-config', 'set', 'artifacts_sync_mode', val]);
      expect(set.status).toBe(0);
      const get = run(['gstack-config', 'get', 'artifacts_sync_mode']);
      expect(get.stdout.trim()).toBe(val);
    }
  });

  test('invalid artifacts_sync_mode value warns + defaults', () => {
    const r = run(['gstack-config', 'set', 'artifacts_sync_mode', 'bogus']);
    expect(r.stderr).toContain('not recognized');
    const get = run(['gstack-config', 'get', 'artifacts_sync_mode']);
    expect(get.stdout.trim()).toBe('off');
  });

  test('GSTACK_HOME overrides real config dir', () => {
    // Real ~/.gstack/config.yaml must not change, regardless of what it
    // already contains on the developer's machine.
    const realConfig = path.join(os.homedir(), '.gstack', 'config.yaml');
    const before = fs.existsSync(realConfig) ? fs.readFileSync(realConfig, 'utf-8') : null;

    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);

    // The override actually took effect — temp config got the new value.
    const tempConfig = fs.readFileSync(path.join(tmpHome, 'config.yaml'), 'utf-8');
    expect(tempConfig).toContain('artifacts_sync_mode: full');

    // Real ~/.gstack/config.yaml must not be touched.
    const after = fs.existsSync(realConfig) ? fs.readFileSync(realConfig, 'utf-8') : null;
    expect(after).toBe(before);
  });
});
// ---------------------------------------------------------------
// Enqueue behavior
// ---------------------------------------------------------------
describe('gstack-brain-enqueue', () => {
  test('no-op when feature not initialized', () => {
    const r = run(['gstack-brain-enqueue', 'projects/foo/learnings.jsonl']);
    expect(r.status).toBe(0);
    expect(fs.existsSync(path.join(tmpHome, '.brain-queue.jsonl'))).toBe(false);
  });

  test('no-op when mode is off (even if .git exists)', () => {
    fs.mkdirSync(path.join(tmpHome, '.git'), { recursive: true });
    const r = run(['gstack-brain-enqueue', 'projects/foo/learnings.jsonl']);
    expect(r.status).toBe(0);
    expect(fs.existsSync(path.join(tmpHome, '.brain-queue.jsonl'))).toBe(false);
  });

  test('enqueues when mode is full and .git exists', () => {
    fs.mkdirSync(path.join(tmpHome, '.git'), { recursive: true });
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    run(['gstack-brain-enqueue', 'projects/foo/learnings.jsonl']);
    const queue = fs.readFileSync(path.join(tmpHome, '.brain-queue.jsonl'), 'utf-8');
    expect(queue).toContain('projects/foo/learnings.jsonl');
    const obj = JSON.parse(queue.trim());
    expect(obj.file).toBe('projects/foo/learnings.jsonl');
    expect(obj.ts).toBeTruthy();
  });

  test('skip list honored', () => {
    fs.mkdirSync(path.join(tmpHome, '.git'), { recursive: true });
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    fs.writeFileSync(path.join(tmpHome, '.brain-skip.txt'), 'projects/foo/secret.jsonl\n');
    run(['gstack-brain-enqueue', 'projects/foo/secret.jsonl']);
    run(['gstack-brain-enqueue', 'projects/foo/ok.jsonl']);
    const queue = fs.readFileSync(path.join(tmpHome, '.brain-queue.jsonl'), 'utf-8');
    expect(queue).not.toContain('secret.jsonl');
    expect(queue).toContain('ok.jsonl');
  });
  test('concurrent enqueues all land (atomic append)', async () => {
    fs.mkdirSync(path.join(tmpHome, '.git'), { recursive: true });
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    // Use async spawn so the 10 enqueue processes actually overlap;
    // spawnSync would serialize them and never exercise the atomic append.
    const { spawn } = await import('child_process');
    const procs: Promise<void>[] = [];
    for (let i = 0; i < 10; i++) {
      procs.push(new Promise<void>((resolve, reject) => {
        const child = spawn(path.join(BIN, 'gstack-brain-enqueue'), [`file-${i}.jsonl`], {
          env: { ...process.env, GSTACK_HOME: tmpHome },
        });
        child.on('exit', () => resolve());
        child.on('error', reject);
      }));
    }
    await Promise.all(procs);
    const queue = fs.readFileSync(path.join(tmpHome, '.brain-queue.jsonl'), 'utf-8');
    const lines = queue.trim().split('\n').filter(Boolean);
    expect(lines.length).toBe(10);
  });
  test('no args does not crash', () => {
    const r = run(['gstack-brain-enqueue']);
    expect(r.status).toBe(0);
  });
});
// ---------------------------------------------------------------
// JSONL merge driver
// ---------------------------------------------------------------
describe('gstack-jsonl-merge', () => {
  test('3-way merge dedups + sorts by ts', () => {
    const base = path.join(tmpHome, 'base.jsonl');
    const ours = path.join(tmpHome, 'ours.jsonl');
    const theirs = path.join(tmpHome, 'theirs.jsonl');
    fs.writeFileSync(base, '');
    fs.writeFileSync(ours, '{"x":1,"ts":"2026-01-01T10:00:00Z"}\n{"x":2,"ts":"2026-01-01T11:00:00Z"}\n');
    fs.writeFileSync(theirs, '{"x":3,"ts":"2026-01-01T09:00:00Z"}\n{"x":2,"ts":"2026-01-01T11:00:00Z"}\n');
    const r = run([path.join(BIN, 'gstack-jsonl-merge'), base, ours, theirs]);
    expect(r.status).toBe(0);
    const lines = fs.readFileSync(ours, 'utf-8').trim().split('\n');
    expect(lines.length).toBe(3);
    expect(lines[0]).toContain('"x":3'); // earliest ts
    expect(lines[2]).toContain('"x":2'); // latest ts
  });
  test('falls back to hash order for lines without ts', () => {
    const base = path.join(tmpHome, 'base.jsonl');
    const ours = path.join(tmpHome, 'ours.jsonl');
    const theirs = path.join(tmpHome, 'theirs.jsonl');
    const oursContent = '{"a":1}\n{"a":2}\n';
    const theirsContent = '{"a":3}\n{"a":2}\n';
    fs.writeFileSync(base, '');
    fs.writeFileSync(ours, oursContent);
    fs.writeFileSync(theirs, theirsContent);
    run([path.join(BIN, 'gstack-jsonl-merge'), base, ours, theirs]);
    const first = fs.readFileSync(ours, 'utf-8');
    expect(first.trim().split('\n').length).toBe(3);
    // Order is deterministic (sha256 of each line): same input → same output.
    fs.writeFileSync(ours, oursContent);
    fs.writeFileSync(theirs, theirsContent);
    run([path.join(BIN, 'gstack-jsonl-merge'), base, ours, theirs]);
    expect(fs.readFileSync(ours, 'utf-8')).toBe(first);
  });
});
// ---------------------------------------------------------------
// Init + sync + restore round-trip
// ---------------------------------------------------------------
describe('init + sync + restore round-trip', () => {
  test('init creates canonical files + registers drivers', () => {
    const r = run(['gstack-artifacts-init', '--remote', bareRemote]);
    expect(r.status).toBe(0);
    expect(fs.existsSync(path.join(tmpHome, '.git'))).toBe(true);
    expect(fs.existsSync(path.join(tmpHome, '.gitignore'))).toBe(true);
    expect(fs.existsSync(path.join(tmpHome, '.brain-allowlist'))).toBe(true);
    expect(fs.existsSync(path.join(tmpHome, '.brain-privacy-map.json'))).toBe(true);
    expect(fs.existsSync(path.join(tmpHome, '.gitattributes'))).toBe(true);
    expect(fs.existsSync(path.join(tmpHome, '.git/hooks/pre-commit'))).toBe(true);
    // Merge driver registered in local git config.
    const cfg = git(['config', '--get', 'merge.jsonl-append.driver']);
    expect(cfg.stdout).toContain('gstack-jsonl-merge');
  });

  test('refuses init on different remote', () => {
    run(['gstack-artifacts-init', '--remote', bareRemote]);
    const otherRemote = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-other-'));
    spawnSync('git', ['init', '--bare', '-q', '-b', 'main', otherRemote]);
    const r = run(['gstack-artifacts-init', '--remote', otherRemote]);
    expect(r.status).not.toBe(0);
    expect(r.stderr).toContain('already a git repo pointing at');
    fs.rmSync(otherRemote, { recursive: true, force: true });
  });

  test('full sync: init → enqueue → --once → commit pushed', () => {
    run(['gstack-artifacts-init', '--remote', bareRemote]);
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    fs.mkdirSync(path.join(tmpHome, 'projects', 'p'), { recursive: true });
    fs.writeFileSync(path.join(tmpHome, 'projects/p/learnings.jsonl'),
      '{"skill":"x","insight":"y","ts":"2026-04-22T10:00:00Z"}\n');
    run(['gstack-brain-enqueue', 'projects/p/learnings.jsonl']);
    const r = run(['gstack-brain-sync', '--once']);
    expect(r.status).toBe(0);
    // Check the remote got the commit.
    const log = spawnSync('git', ['--git-dir=' + bareRemote, 'log', '--oneline'], { encoding: 'utf-8' });
    expect(log.stdout).toMatch(/sync: 1 file/);
  });

  test('restore round-trip: writes on machine A visible on machine B', () => {
    // Machine A.
    run(['gstack-artifacts-init', '--remote', bareRemote]);
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    fs.mkdirSync(path.join(tmpHome, 'projects', 'myproj'), { recursive: true });
    const aLearning = '{"skill":"x","insight":"machine A wisdom","ts":"2026-04-22T10:00:00Z"}\n';
    fs.writeFileSync(path.join(tmpHome, 'projects/myproj/learnings.jsonl'), aLearning);
    run(['gstack-brain-enqueue', 'projects/myproj/learnings.jsonl']);
    run(['gstack-brain-sync', '--once']);

    // Machine B (new temp home).
    const machineB = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-machineB-'));
    const r = run(['gstack-brain-restore', bareRemote], {
      env: { GSTACK_HOME: machineB },
    });
    expect(r.status).toBe(0);
    const restored = fs.readFileSync(path.join(machineB, 'projects/myproj/learnings.jsonl'), 'utf-8');
    expect(restored).toContain('machine A wisdom');
    // Merge drivers re-registered on B.
    const cfg = spawnSync('git', ['-C', machineB, 'config', '--get', 'merge.jsonl-append.driver'], { encoding: 'utf-8' });
    expect(cfg.stdout).toContain('gstack-jsonl-merge');
    fs.rmSync(machineB, { recursive: true, force: true });
  });
});

// ---------------------------------------------------------------
// Secret scan: all regex families block
// ---------------------------------------------------------------
describe('gstack-brain-sync secret scan', () => {
  const SECRETS: [string, string][] = [
    ['aws-access-key', 'AKIAABCDEFGHIJKLMNOP'],
    ['github-token-ghp', 'ghp_abcdefghij1234567890abcdef1234567890'],
    ['github-token-github-pat', 'github_pat_11ABCDEFG1234567890_abcdef'],
    ['openai-key', 'sk-abcdefghij1234567890abcdef1234567890'],
    ['pem-block', '-----BEGIN PRIVATE KEY-----'],
    ['jwt', 'eyJ0eXAiOiJKV1QiLCJh.eyJzdWIiOiIxMjM0NTY3.SflKxwRJSMeKKF30oGTbU'],
    ['bearer-json', '"authorization":"Bearer abcdef1234567890abcdef1234567890"'],
  ];

  for (const [name, content] of SECRETS) {
    test(`blocks ${name}`, () => {
      run(['gstack-artifacts-init', '--remote', bareRemote]);
      run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
      fs.mkdirSync(path.join(tmpHome, 'projects', 'p'), { recursive: true });
      fs.writeFileSync(path.join(tmpHome, 'projects/p/learnings.jsonl'),
        `{"leaked":"${content}"}\n`);
      run(['gstack-brain-enqueue', 'projects/p/learnings.jsonl']);
      const r = run(['gstack-brain-sync', '--once']);
      expect(r.status).toBe(0); // exits clean even when blocked
      // No new commit should have been created.
      const log = git(['log', '--oneline']);
      expect(log.stdout.split('\n').filter(Boolean).length).toBeLessThanOrEqual(3);
      // Status file should report blocked.
      const status = JSON.parse(fs.readFileSync(path.join(tmpHome, '.brain-sync-status.json'), 'utf-8'));
      expect(status.status).toBe('blocked');
    });
  }

  test('--skip-file unblocks specific file', () => {
    run(['gstack-artifacts-init', '--remote', bareRemote]);
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    fs.mkdirSync(path.join(tmpHome, 'projects', 'p'), { recursive: true });
    const leakPath = 'projects/p/leaked.jsonl';
    fs.writeFileSync(path.join(tmpHome, leakPath),
      '{"gh":"ghp_abcdefghij1234567890abcdef1234567890"}\n');
    run(['gstack-brain-enqueue', leakPath]);
    run(['gstack-brain-sync', '--once']); // blocked
    run(['gstack-brain-sync', '--skip-file', leakPath]);
    // Any future enqueue of this path should no-op.
    run(['gstack-brain-enqueue', leakPath]);
    const skip = fs.readFileSync(path.join(tmpHome, '.brain-skip.txt'), 'utf-8');
    expect(skip).toContain(leakPath);
  });
});

// ---------------------------------------------------------------
// Uninstall preserves user data
// ---------------------------------------------------------------
describe('gstack-brain-uninstall', () => {
  test('removes sync config but preserves learnings/project data', () => {
    run(['gstack-artifacts-init', '--remote', bareRemote]);
    fs.mkdirSync(path.join(tmpHome, 'projects', 'user-data'), { recursive: true });
    const preservedContent = '{"keep":"me","ts":"2026-04-22T12:00:00Z"}\n';
    fs.writeFileSync(path.join(tmpHome, 'projects/user-data/learnings.jsonl'), preservedContent);
    const r = run(['gstack-brain-uninstall', '--yes']);
    expect(r.status).toBe(0);
    expect(fs.existsSync(path.join(tmpHome, '.git'))).toBe(false);
    expect(fs.existsSync(path.join(tmpHome, '.gitignore'))).toBe(false);
    expect(fs.existsSync(path.join(tmpHome, '.brain-allowlist'))).toBe(false);
    expect(fs.existsSync(path.join(tmpHome, 'consumers.json'))).toBe(false);
    // Project data preserved.
    const preserved = fs.readFileSync(path.join(tmpHome, 'projects/user-data/learnings.jsonl'), 'utf-8');
    expect(preserved).toBe(preservedContent);
    // Config key reset.
    const mode = run(['gstack-config', 'get', 'artifacts_sync_mode']);
    expect(mode.stdout.trim()).toBe('off');
  });
});

// ---------------------------------------------------------------
// --discover-new: cursor-based change detection
// ---------------------------------------------------------------
describe('gstack-brain-sync --discover-new', () => {
  test('enqueues new allowlisted files; idempotent on re-run', () => {
    run(['gstack-artifacts-init', '--remote', bareRemote]);
    run(['gstack-config', 'set', 'artifacts_sync_mode', 'full']);
    fs.mkdirSync(path.join(tmpHome, 'retros'), { recursive: true });
    fs.writeFileSync(path.join(tmpHome, 'retros/week-1.md'), '# retro\n');
    run(['gstack-brain-sync', '--discover-new']);
    let queue = fs.readFileSync(path.join(tmpHome, '.brain-queue.jsonl'), 'utf-8');
    expect(queue).toContain('retros/week-1.md');
    // Clear queue, run again — idempotent (no new entries).
    fs.writeFileSync(path.join(tmpHome, '.brain-queue.jsonl'), '');
    run(['gstack-brain-sync', '--discover-new']);
    queue = fs.readFileSync(path.join(tmpHome, '.brain-queue.jsonl'), 'utf-8');
    expect(queue.trim()).toBe('');
  });
});