mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-17 09:41:28 +08:00
v1.39.2.0 feat: GSTACK_* env-shim for Conductor + gbrain/gstack setup docs (#1534)
* feat: GSTACK_* env-key shim for Conductor workspaces

  New lib/conductor-env-shim.ts promotes GSTACK_ANTHROPIC_API_KEY and GSTACK_OPENAI_API_KEY to canonical names when canonical is empty. Wired into the four TS entry points that hit paid APIs or gbrain embeddings: gstack-gbrain-sync.ts, gstack-model-benchmark, preflight-agent-sdk.ts, test/helpers/e2e-helpers.ts. Side-effect-only import, 15 lines total.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: gbrain+gstack setup, Conductor env mapping (v1.39.2.0)

  USING_GBRAIN_WITH_GSTACK.md: new "What you get after setup" section, Path 4 (remote MCP / split-engine), /sync-gbrain workflow stages + watermark mechanics, "Conductor + GSTACK_* env vars" section, env vars table extended, two troubleshooting entries (silent embedding failure and FILE_TOO_LARGE watermark block). CONTRIBUTING.md "Conductor workspaces": new paragraph on the GSTACK_* prefix pattern and the four entry points importing the shim. VERSION 1.39.1.0 → 1.39.2.0 and CHANGELOG entry covering the shim + docs (full release-summary format with before/after table).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: unit coverage for conductor-env-shim

  Refactor lib/conductor-env-shim.ts to export promoteConductorEnv() so unit tests can manipulate env and call it directly (a bare side-effect IIFE on import isn't reachable from bun:test once cached). The on-import IIFE still runs — existing four-entry-point imports keep working unchanged. test/conductor-env-shim.test.ts covers all three branches: GSTACK_FOO present + FOO empty → promotion; FOO already set → no-overwrite; nothing in env → no-op.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: Conductor strips canonical API keys (not just "doesn't inherit")

  The prior docs framed the GSTACK_* prefix as collision-avoidance: "Conductor exposes API keys under a GSTACK_ prefix so it never collides with whatever the host system has set." That understates the mechanism — Conductor actively strips ANTHROPIC_API_KEY and OPENAI_API_KEY from every workspace's process env, so setting them in ~/.zshrc or .env doesn't help. The fix path is to set the GSTACK_-prefixed forms in Conductor's workspace env config; Conductor passes those through untouched. Three docs updated to reflect the strip, not the polite framing: USING_GBRAIN_WITH_GSTACK.md (Conductor section), CONTRIBUTING.md (Conductor workspaces paragraph), CHANGELOG.md (release summary). README.md gains a "Running gstack in Conductor?" callout in the GBrain section pointing at the canonical doc's anchor, plus a fourth path entry (remote gbrain MCP / split-engine) that was already documented in USING_GBRAIN but missing from the README summary.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
43
CHANGELOG.md
@@ -1,5 +1,48 @@

# Changelog

## [1.39.2.0] - 2026-05-15

## **Conductor workspaces wire `GSTACK_*` keys straight into gbrain embeddings and paid evals.**

## **No more sourcing keys from your shell before every paid run.**

Conductor explicitly strips `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` from every workspace's process env, so `.env` copies and `~/.zshrc` exports never reach gbrain's embedding pipeline or `@anthropic-ai/claude-agent-sdk`. The fix path is `GSTACK_ANTHROPIC_API_KEY` / `GSTACK_OPENAI_API_KEY` — Conductor passes those through untouched. The new `lib/conductor-env-shim.ts` closes the loop on the gstack side: it promotes the prefixed form to canonical when canonical is empty. Four TS entry points import the shim as a side effect (`gstack-gbrain-sync.ts`, `gstack-model-benchmark`, `preflight-agent-sdk.ts`, `e2e-helpers.ts`). `README.md`, `USING_GBRAIN_WITH_GSTACK.md`, and `CONTRIBUTING.md` document the pattern, plus the checklist for adding the import to new entry points.

### The numbers that matter

Source: working-tree verification before commit. Three observable scenarios in a fresh Conductor workspace with only `GSTACK_OPENAI_API_KEY` and `GSTACK_ANTHROPIC_API_KEY` in env.

| Surface | Before | After |
|---|---|---|
| `/sync-gbrain` embeddings | 50+ lines of `[gbrain] embedding failed for code file ...: OpenAI embedding requires OPENAI_API_KEY`; pages indexed structurally but semantic search degrades to BM25 | 3294 chunks embedded; `gbrain search "browser security canary token"` returns ranked code regions at 0.95 top score |
| `bun run test:evals` | `ANTHROPIC_API_KEY not set, judge requires Anthropic access` from `test/helpers/benchmark-judge.ts:15` before any test runs | Shim promotes at module import; paid evals proceed normally |
| Adding a new paid-API entry point | Manual env mapping every invocation, or every new entry point ships broken inside Conductor | One import line: `import "../lib/conductor-env-shim";` at the top of the file |

### What this means for Conductor users

If you run gstack inside Conductor, `/sync-gbrain` embeddings, paid evals, and the agent SDK just work without sourcing keys from your shell. The shim is 15 lines, side-effect-only, and the import is one line per consumer. The new "Conductor + GSTACK_* env vars" section in `USING_GBRAIN_WITH_GSTACK.md` and the updated "Conductor workspaces" block in `CONTRIBUTING.md` cover the pattern so you don't have to reverse-engineer it from a stack trace.

### Itemized changes

#### Added

- `lib/conductor-env-shim.ts` (new, 15 lines) — side-effect IIFE that promotes `GSTACK_FOO_API_KEY` to `FOO_API_KEY` when the canonical name is empty. Currently covers `ANTHROPIC_API_KEY` and `OPENAI_API_KEY`.
- `USING_GBRAIN_WITH_GSTACK.md` "What you get after setup" section — semantic code search + cross-session memory framed as concrete capabilities.
- `USING_GBRAIN_WITH_GSTACK.md` Path 4 (remote gbrain MCP / split-engine) section — covers brain-via-remote-MCP + code-via-local-PGLite, the two engines being independent, when to pick this path.
- `USING_GBRAIN_WITH_GSTACK.md` `/sync-gbrain` workflow section — three stages (code, memory, brain-sync), pre-flight gating on local engine health, watermark + `--skip-failed` mechanics, capability check governing the CLAUDE.md guidance block.
- `USING_GBRAIN_WITH_GSTACK.md` "Conductor + GSTACK_* env vars" section — explains the prefix pattern, lists the four entry points that import the shim, points contributors at `CONTRIBUTING.md`.
- `USING_GBRAIN_WITH_GSTACK.md` troubleshooting entries: "`/sync-gbrain` reports OK but `gbrain search` returns nothing semantic" (embeddings failed silently) and "`gbrain sync` blocked at a commit hash, `FILE_TOO_LARGE`" (5 MB hard limit, fix via `--skip-failed`).

#### Changed

- `bin/gstack-gbrain-sync.ts`, `bin/gstack-model-benchmark`, `scripts/preflight-agent-sdk.ts`, `test/helpers/e2e-helpers.ts` — added `import "../lib/conductor-env-shim";` at the top of each. One line each, side-effect-only.
- `USING_GBRAIN_WITH_GSTACK.md` "three paths" → "four paths" header now that Path 4 (remote MCP) is documented as a first-class choice.
- `USING_GBRAIN_WITH_GSTACK.md` environment variables table — added rows for `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GSTACK_OPENAI_API_KEY`, `GSTACK_ANTHROPIC_API_KEY` covering what reads each one and the GSTACK_-prefix fallback.
- `CONTRIBUTING.md` "Conductor workspaces" — new paragraph documenting the `GSTACK_*` prefix injection pattern, the shim file, and the four entry points that already import it.

#### For contributors

- New TS entry points that hit Anthropic or OpenAI APIs (paid evals, `claude-agent-sdk`, gbrain embeddings, model benchmarks) should add `import "../lib/conductor-env-shim";` as the first import. Without it, the entry point ships broken inside Conductor even though it works in a bare shell. The contributor checklist in `CONTRIBUTING.md`'s "Conductor workspaces" block names the four entry points already wired up.

## [1.39.1.0] - 2026-05-15

## **Plan-mode reviews now enforce a blocking ExitPlanMode gate.**
@@ -333,6 +333,8 @@ When Conductor creates a new workspace, `bin/dev-setup` runs automatically. It d

**First-time setup:** Put your `ANTHROPIC_API_KEY` in `.env` in the main repo (see `.env.example`). Every Conductor workspace inherits it automatically.

**`GSTACK_*` env prefix (Conductor-injected keys).** Conductor explicitly strips `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` from every workspace's process env. The `.env` copy path doesn't restore them either — the strip happens after env inheritance. Users who want paid evals, `/sync-gbrain` embeddings, or `claude-agent-sdk` calls to work in a Conductor workspace must set `GSTACK_ANTHROPIC_API_KEY` and `GSTACK_OPENAI_API_KEY` in Conductor's workspace env config; Conductor passes those through untouched. On the gstack side, TS entry points import `lib/conductor-env-shim.ts` as a side effect, which promotes `GSTACK_FOO_API_KEY` to `FOO_API_KEY` when the canonical name is empty. If you add a new TS entry point that hits a paid API, add `import "../lib/conductor-env-shim";` to the top of the file. Today the shim is imported from `bin/gstack-gbrain-sync.ts`, `bin/gstack-model-benchmark`, `scripts/preflight-agent-sdk.ts`, and `test/helpers/e2e-helpers.ts`.

## Things to know

- **SKILL.md files are generated.** Edit the `.tmpl` template, not the `.md`. Run `bun run gen:skill-docs` to regenerate.
@@ -388,11 +388,12 @@ I open sourced how I build software. You can fork it and make it your own.

/setup-gbrain
```

Three paths, pick one:
Four paths, pick one:

- **Supabase, existing URL** — your cloud agent already provisioned a brain; paste the Session Pooler URL, now this laptop uses the same data.
- **Supabase, auto-provision** — paste a Supabase Personal Access Token; the skill creates a new project, polls to healthy, fetches the pooler URL, hands it to `gbrain init`. ~90 seconds end-to-end.
- **PGLite local** — zero accounts, zero network, ~30 seconds. Isolated brain on this Mac only. Great for try-first; migrate to Supabase later with `/setup-gbrain --switch`.
- **Remote gbrain MCP** — your brain runs on another machine (Tailscale, ngrok, internal LAN) or a teammate's server; paste an MCP URL and bearer token. Optionally pair with a local PGLite for symbol-aware code search in split-engine mode. Best for cross-machine memory without standing up a local DB.

After init, the skill offers to register gbrain as an MCP server for Claude Code (`claude mcp add gbrain -- gbrain serve`) so `gbrain search`, `gbrain put_page`, etc. show up as first-class typed tools — not bash shell-outs.

@@ -412,6 +413,8 @@ The skill asks once per repo. The decision is sticky across worktrees and branch

gstack-brain-init
```

**Running gstack in Conductor?** Conductor explicitly strips `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` from every workspace's process env, so paid evals and gbrain embeddings won't work out of the box. Set `GSTACK_ANTHROPIC_API_KEY` and `GSTACK_OPENAI_API_KEY` in Conductor's workspace env config instead — gstack's TS entry points promote them to canonical names at runtime. Full details and the contributor checklist for adding the import to new entry points: [Conductor + GSTACK_* env vars](USING_GBRAIN_WITH_GSTACK.md#conductor--gstack_-env-vars).

**Full monty — every scenario, every flag, every bin helper, every troubleshooting step:** [USING_GBRAIN_WITH_GSTACK.md](USING_GBRAIN_WITH_GSTACK.md)

Other references: [docs/gbrain-sync.md](docs/gbrain-sync.md) (sync-specific guide) • [docs/gbrain-sync-errors.md](docs/gbrain-sync-errors.md) (error index)
@@ -16,7 +16,16 @@ This is the full monty: every scenario, every flag, every helper bin, every trou

That's it. The skill detects your current state, asks three questions at most, and walks you through install, init, MCP registration for Claude Code, and per-repo trust policy. On a clean Mac with nothing installed it finishes in under five minutes. On a Mac where something's already set up it takes seconds (it detects the existing state and skips done work).

## The three paths
## What you get after setup

Once `/setup-gbrain` finishes, your coding agent has two retrieval surfaces it didn't have before:

- **Semantic code search across this repo.** `gbrain search "browser security canary"` returns ranked file regions, not exact-match grep hits. `gbrain code-def`, `code-refs`, `code-callers`, `code-callees` walk the call graph by symbol — useful when you don't know which file holds the implementation but you know what it does. The agent prefers these over Grep when the question is semantic; CLAUDE.md gets a `## GBrain Search Guidance` block that teaches it the routing rules.
- **Cross-session memory.** Plans, retros, decisions, and learnings from past sessions live in `~/.gstack/` and (if you opted in to artifacts sync) get pushed to a private git repo that gbrain indexes. `gbrain search "what did we decide about auth?"` actually finds the prior CEO plan instead of you re-describing context every session.

If you also enabled remote MCP (Path 4 below), brain queries route to a shared brain server that other machines can write to — your laptop, your desktop, and a teammate's machine all see the same memory.

## The four paths

You pick one when the skill asks "Where should your brain live?"

@@ -52,6 +61,19 @@ Best for: try-it-first, no account, no cloud, no sharing. Or a dedicated "this M

This is the best first choice if you just want to see what gbrain feels like before committing to cloud. You can always migrate later with `/setup-gbrain --switch`.

### Path 4: Remote gbrain MCP (split-engine)

Best for: your brain runs on another machine you control (Tailscale, ngrok, internal LAN) or a teammate's server. You want the cross-machine memory benefit without standing up a local database, and you still want symbol-aware code search on this Mac.

**What happens:** You paste an MCP URL (e.g. `https://wintermute.tail554574.ts.net:3131/mcp`) and a bearer token. The skill verifies the URL over the wire, registers gbrain as an HTTP MCP in `~/.claude.json` at user scope, and offers to also stand up a tiny local PGLite for code search (~30 seconds, ~120 MB disk).

If you accept the local PGLite, you end up in **split-engine mode**:

- **Brain/context queries** (`mcp__gbrain__search`, `mcp__gbrain__query`, `mcp__gbrain__get_page`) route to the remote MCP. Plans, retros, learnings, cross-machine memory — all on the shared server.
- **Code queries** (`gbrain code-def`, `code-refs`, `code-callers`, `code-callees`, `gbrain search` for code) route to the local PGLite via the `.gbrain-source` pin in each worktree. Indexed locally, fast, never leaves the machine.

The two engines are independent. Wiping the local PGLite doesn't touch the remote brain; rotating the remote MCP bearer doesn't affect local code search. This is also the right configuration if your remote brain admin can't (or shouldn't) index every developer's checkout — local code stays local.

## MCP registration for Claude Code

By default the skill asks "Give Claude Code a typed tool surface for gbrain?" If you say yes, it runs:
@@ -95,6 +117,35 @@ SSH and HTTPS remote variants collapse to the same key: `https://github.com/foo/

Storage: `~/.gstack/gbrain-repo-policy.json`, mode 0600, schema-versioned so future migrations stay deterministic.

## Keeping the brain current with `/sync-gbrain`

`/setup-gbrain` is one-time onboarding. `/sync-gbrain` is the verb you run every time you want gbrain to see fresh changes in this repo's code.

```bash
/sync-gbrain             # incremental: mtime fast-path, ~seconds on a clean tree
/sync-gbrain --full      # full reindex (~25-35 minutes on a big Mac)
/sync-gbrain --code-only # only the code stage; skip memory + brain-sync
/sync-gbrain --dry-run   # preview what would sync; no writes
```

The skill runs three stages — code, memory, brain-sync — independently. A failure in one doesn't block the others. State persists to `~/.gstack/.gbrain-sync-state.json` so re-running picks up cleanly.

**What it does on a fresh worktree:**

1. **Pre-flight.** Checks `gbrain_local_status` (the local engine's health). If the engine is `broken-db` or `broken-config`, the skill STOPs with a remediation menu — it refuses to silently degrade. If the local engine is missing and you're in remote-MCP mode (Path 4), the code stage SKIPs cleanly and only brain-sync runs.
2. **Code stage.** Registers the cwd as a federated source via `gbrain sources add`, writes a `.gbrain-source` pin file in the repo root (kubectl-style context — every worktree gets its own pin, so Conductor sibling worktrees don't collide), runs `gbrain sync --strategy code`.
3. **Memory stage.** Stages your `~/.gstack/` transcripts + curated memory. In local-stdio MCP mode, ingests into the local engine. In remote-http MCP mode, persists staged markdown to `~/.gstack/transcripts/run-<pid>-<ts>/` for the remote brain admin's pull pipeline.
4. **Brain-sync stage.** Pushes curated artifacts (plans, designs, retros) to your private artifacts repo if you have one configured.
5. **CLAUDE.md guidance.** Capability-checks the round-trip (write a page → search → find it). If green, writes the `## GBrain Search Guidance` block to your project's CLAUDE.md. If red, REMOVES the block — the agent should never be told to use a tool that isn't installed.

**The watermark.** Sync state advances by commit hash. If gbrain hits a file it can't index (5 MB hard limit per file, or a file vanished mid-sync), the watermark stays put and subsequent syncs retry. To acknowledge an unfixable failure and move past it:

```bash
gbrain sync --source <source-id> --skip-failed
```

Re-runnable, idempotent, safe to run from multiple terminals on the same machine (locked at `~/.gstack/.sync-gbrain.lock`).
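The watermark rule above is easy to model. The sketch below is illustrative only — `SyncResult` and `advanceWatermark` are invented names for this example, not gbrain's real API — but it captures the described behavior: the watermark stops at the first commit with an indexing failure unless `--skip-failed` acknowledges it.

```typescript
// Simplified model of the watermark rule (hypothetical types, not gbrain code).
type SyncResult = { commit: string; failedFiles: string[] };

function advanceWatermark(
  watermark: string,
  results: SyncResult[],
  skipFailed = false,
): string {
  for (const r of results) {
    // A failed file pins the watermark; the next sync retries this commit.
    if (r.failedFiles.length > 0 && !skipFailed) return watermark;
    watermark = r.commit;
  }
  return watermark;
}

const results: SyncResult[] = [
  { commit: "aaa111", failedFiles: [] },
  { commit: "bbb222", failedFiles: ["fixtures/huge.json"] }, // over the 5 MB limit
  { commit: "ccc333", failedFiles: [] },
];

console.log(advanceWatermark("base000", results));       // "aaa111" — stuck before the failure
console.log(advanceWatermark("base000", results, true)); // "ccc333" — --skip-failed acknowledged it
```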
## Switching engines later

Picked PGLite and now want to join a team brain? One command:
@@ -200,6 +251,25 @@ Gbrain itself ships with these that gstack wraps:

| `SUPABASE_API_BASE` | `gstack-gbrain-supabase-provision` | Override the Management API host. Used by tests to point at a mock server. |
| `GBRAIN_INSTALL_DIR` | `gstack-gbrain-install` | Override default install path (`~/gbrain`) |
| `GSTACK_HOME` | every bin helper | Override `~/.gstack` state dir. Heavy test use. |
| `OPENAI_API_KEY` | `gbrain embed` subprocess | Required for embeddings during `gbrain sync` / `/sync-gbrain`. Without it, pages are imported structurally (symbol tables, chunks) but semantic search degrades — you'll see `[gbrain] embedding failed for code file ... OpenAI embedding requires OPENAI_API_KEY` in the sync log. |
| `ANTHROPIC_API_KEY` | `claude-agent-sdk`, paid evals | Required for `bun run test:evals` and any direct `query()` call against Claude. |
| `GSTACK_OPENAI_API_KEY` | `lib/conductor-env-shim.ts` | Conductor-injected fallback. Promoted to `OPENAI_API_KEY` when the canonical name is empty. |
| `GSTACK_ANTHROPIC_API_KEY` | `lib/conductor-env-shim.ts` | Same pattern as above for Anthropic. |

## Conductor + GSTACK_* env vars

If you run gstack inside a [Conductor](https://conductor.build) workspace, **Conductor explicitly strips `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` from the workspace env.** Setting them in `~/.zshrc` or `.env` won't help — the strip happens after env inheritance. To get a usable API key into a workspace, set `GSTACK_ANTHROPIC_API_KEY` and `GSTACK_OPENAI_API_KEY` in Conductor's workspace env config instead. Conductor passes those through untouched.

`lib/conductor-env-shim.ts` bridges the gap on the gstack side: when imported as a side effect (`import "../lib/conductor-env-shim";`), it promotes `GSTACK_FOO_API_KEY` to `FOO_API_KEY` for any subprocess that doesn't see the canonical name. The shim is already wired into:

- `bin/gstack-gbrain-sync.ts` — so `/sync-gbrain` picks up OpenAI for embeddings
- `bin/gstack-model-benchmark` — so `--judge` runs work without manual env mapping
- `scripts/preflight-agent-sdk.ts` — so paid-eval auth probes work
- `test/helpers/e2e-helpers.ts` — so `bun run test:evals` finds Anthropic

If you add a new TS entry point that hits a paid API or needs gbrain embeddings, add the same one-line import at the top. See [CONTRIBUTING.md "Conductor workspaces"](CONTRIBUTING.md#conductor-workspaces) for the contributor checklist.
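The promotion rule itself is small enough to sketch standalone. This version operates on a plain object instead of `process.env` so it reads in isolation — it mirrors the behavior described above, but the real implementation is `lib/conductor-env-shim.ts`:

```typescript
// Standalone sketch of the GSTACK_* promotion rule (illustrative; the shipped
// code lives in lib/conductor-env-shim.ts and mutates process.env directly).
const KEYS = ["ANTHROPIC_API_KEY", "OPENAI_API_KEY"] as const;

function promote(env: Record<string, string | undefined>): void {
  for (const key of KEYS) {
    // Fill the canonical name only when it is missing or empty:
    // an explicitly set canonical key always wins over the prefixed form.
    if (!env[key] && env[`GSTACK_${key}`]) {
      env[key] = env[`GSTACK_${key}`];
    }
  }
}

const env: Record<string, string | undefined> = {
  GSTACK_OPENAI_API_KEY: "sk-demo",     // Conductor-injected
  ANTHROPIC_API_KEY: "sk-ant-existing", // already canonical; left untouched
};
promote(env);
console.log(env.OPENAI_API_KEY);    // "sk-demo"
console.log(env.ANTHROPIC_API_KEY); // "sk-ant-existing"
```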
`bin/gstack-codex-probe` is bash and doesn't read these directly — it relies on `~/.codex/` auth managed by the Codex CLI.

## Security model
@@ -267,6 +337,26 @@ You edited `~/.gstack/gbrain-repo-policy.json` by hand with legacy `allow` value

`/health` treats that as yellow, not red. Check `gbrain doctor --json | jq .checks` to see which sub-checks are warning. Typical causes: resolver MECE overlap (skill names clashing) or DB connection not yet configured.

### `/sync-gbrain` reports `OK` but `gbrain search` returns nothing semantic

Embeddings probably failed during import. Symbol queries (`code-def`, `code-refs`) still work because they don't need embeddings, but `gbrain search "<terms>"` falls back to a degraded BM25 path. Look in the sync output for lines like:

```
[gbrain] embedding failed for code file <name>: OpenAI embedding requires OPENAI_API_KEY
```

The fix is to put `OPENAI_API_KEY` in the process env before re-running. On a bare Mac shell, source it from `~/.zshrc` before calling. In Conductor, set `GSTACK_OPENAI_API_KEY` at the workspace level — `lib/conductor-env-shim.ts` promotes it to canonical automatically when imported. Re-run `/sync-gbrain --code-only` to backfill embeddings on already-imported pages.

### `gbrain sync` blocked at a commit hash — `FILE_TOO_LARGE`

A file in your tree exceeds gbrain's 5 MB hard limit (`MAX_FILE_SIZE` in `gbrain/src/core/import-file.ts`). Common culprits: response replay caches, captured screenshots, large JSON fixtures. Gbrain doesn't honor `.gitignore`-style exclude lists for code sync; the only knob is acknowledging the failure:

```bash
gbrain sync --source <source-id> --skip-failed
```

The watermark advances past the offending commit. The same file fails again if it changes; re-skip when that happens.

### Switching PGLite → Supabase hangs

Another gstack session in a sibling Conductor workspace may be holding a lock on your local PGLite file via its preamble's `gstack-brain-sync` call. Close other workspaces, re-run `/setup-gbrain --switch`. The timeout is bounded at 180s so you'll never actually wait forever.
@@ -35,6 +35,7 @@ import { execSync, spawnSync } from "child_process";

import { homedir } from "os";
import { createHash } from "crypto";

import "../lib/conductor-env-shim";
import { detectEngineTier, withErrorContext, canonicalizeRemote } from "../lib/gstack-memory-helpers";
import { ensureSourceRegistered, sourcePageCount } from "../lib/gbrain-sources";
import { localEngineStatus, type LocalEngineStatus } from "../lib/gbrain-local-status";

@@ -24,6 +24,7 @@
 * gstack-model-benchmark --prompt "hi" --models claude,gpt,gemini --dry-run
 */

import '../lib/conductor-env-shim';
import * as fs from 'fs';
import * as path from 'path';
import { runBenchmark, formatTable, formatJson, formatMarkdown, type BenchmarkInput } from '../test/helpers/benchmark-runner';
18
lib/conductor-env-shim.ts
Normal file
@@ -0,0 +1,18 @@

/**
 * Conductor workspaces don't inherit the user's interactive shell env, so the
 * canonical ANTHROPIC_API_KEY / OPENAI_API_KEY may be missing while
 * Conductor's GSTACK_-prefixed forms are present. Promote the GSTACK_ form to
 * canonical when canonical is empty, so subprocesses (gbrain embed,
 * @anthropic-ai/claude-agent-sdk, etc) pick it up.
 *
 * Import this for its side effect: `import "../lib/conductor-env-shim";`
 */
export function promoteConductorEnv(): void {
  for (const key of ["ANTHROPIC_API_KEY", "OPENAI_API_KEY"] as const) {
    if (!process.env[key] && process.env[`GSTACK_${key}`]) {
      process.env[key] = process.env[`GSTACK_${key}`];
    }
  }
}

promoteConductorEnv();
@@ -1,6 +1,6 @@

{
  "name": "gstack",
  "version": "1.39.1.0",
  "version": "1.39.2.0",
  "description": "Garry's Stack — Claude Code skills + fast headless browser. One repo, one install, entire AI engineering workflow.",
  "license": "MIT",
  "type": "module",
@@ -16,6 +16,7 @@

 * side effects beyond stdout and a ~15 token API call.
 */

import '../lib/conductor-env-shim';
import { query, type SDKMessage } from '@anthropic-ai/claude-agent-sdk';
import { readOverlay } from './resolvers/model-overlay';
import { resolveClaudeBinary } from '../browse/src/claude-bin';
46
test/conductor-env-shim.test.ts
Normal file
@@ -0,0 +1,46 @@

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { promoteConductorEnv } from '../lib/conductor-env-shim';

describe('conductor-env-shim', () => {
  const KEYS = ['ANTHROPIC_API_KEY', 'OPENAI_API_KEY', 'GSTACK_ANTHROPIC_API_KEY', 'GSTACK_OPENAI_API_KEY'] as const;
  const saved: Record<string, string | undefined> = {};

  beforeEach(() => {
    for (const k of KEYS) {
      saved[k] = process.env[k];
      delete process.env[k];
    }
  });

  afterEach(() => {
    for (const k of KEYS) {
      if (saved[k] === undefined) delete process.env[k];
      else process.env[k] = saved[k];
    }
  });

  test('promotes GSTACK_ANTHROPIC_API_KEY to ANTHROPIC_API_KEY when canonical is empty', () => {
    process.env.GSTACK_ANTHROPIC_API_KEY = 'sk-ant-test-123';
    promoteConductorEnv();
    expect(process.env.ANTHROPIC_API_KEY).toBe('sk-ant-test-123');
  });

  test('promotes GSTACK_OPENAI_API_KEY to OPENAI_API_KEY when canonical is empty', () => {
    process.env.GSTACK_OPENAI_API_KEY = 'sk-oai-test-456';
    promoteConductorEnv();
    expect(process.env.OPENAI_API_KEY).toBe('sk-oai-test-456');
  });

  test('does not overwrite canonical when both canonical and GSTACK_-prefixed are set', () => {
    process.env.ANTHROPIC_API_KEY = 'sk-ant-original';
    process.env.GSTACK_ANTHROPIC_API_KEY = 'sk-ant-prefixed';
    promoteConductorEnv();
    expect(process.env.ANTHROPIC_API_KEY).toBe('sk-ant-original');
  });

  test('no-op when neither canonical nor GSTACK_-prefixed are set', () => {
    promoteConductorEnv();
    expect(process.env.ANTHROPIC_API_KEY).toBeUndefined();
    expect(process.env.OPENAI_API_KEY).toBeUndefined();
  });
});
@@ -5,6 +5,7 @@

 * tests across multiple files by category.
 */

import '../../lib/conductor-env-shim';
import { describe, test, beforeAll, afterAll, expect } from 'bun:test';
import type { SkillTestResult } from './session-runner';
import { EvalCollector, judgePassed } from './eval-store';