fix(security): learnings input validation + cross-project trust gate

Three fixes to the learnings system:

1. Input validation in gstack-learnings-log: type must be from allowed list,
   key must be alphanumeric, confidence must be 1-10 integer, source must
   be from allowed list. Prevents injection via malformed fields.

2. Prompt injection defense: insight field checked against 10 instruction-like
   patterns (ignore previous, system:, override, etc.). Rejected with a
   clear error message.

3. Cross-project trust gate in gstack-learnings-search: AI-generated learnings
   from other projects are filtered out. Only user-stated learnings cross
   project boundaries. Prevents silent prompt injection across codebases.
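The validation in (1) and the pattern check in (2) might look roughly like the
sketch below. The allowed lists, field names, and the three sample patterns
(of the ten mentioned above) are assumptions for illustration, not the actual
gstack-learnings-log code:

```javascript
// Hypothetical sketch of the input validation; the allowed values and
// field names are assumptions based on the commit message.
const ALLOWED_TYPES = ['pattern', 'gotcha', 'preference'];   // assumed list
const ALLOWED_SOURCES = ['user-stated', 'ai-generated'];     // assumed list

// Three of the ten instruction-like patterns, as examples.
const INJECTION_PATTERNS = [
  /ignore (all )?previous/i,
  /system\s*:/i,
  /\boverride\b/i,
];

function validateLearning(entry) {
  if (!ALLOWED_TYPES.includes(entry.type)) {
    throw new Error(`invalid type: ${entry.type}`);
  }
  if (!/^[A-Za-z0-9]+$/.test(entry.key)) {
    throw new Error(`key must be alphanumeric: ${entry.key}`);
  }
  if (!Number.isInteger(entry.confidence) ||
      entry.confidence < 1 || entry.confidence > 10) {
    throw new Error(`confidence must be a 1-10 integer: ${entry.confidence}`);
  }
  if (!ALLOWED_SOURCES.includes(entry.source)) {
    throw new Error(`invalid source: ${entry.source}`);
  }
  for (const pattern of INJECTION_PATTERNS) {
    if (pattern.test(entry.insight)) {
      throw new Error('insight contains instruction-like pattern; rejected');
    }
  }
  return entry;
}
```

Validating at write time keeps the stored log clean, so readers of the log
never have to re-check individual fields.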

Also adds a trusted field (true for user-stated source, false for AI-generated)
to enable the trust gate at read time.
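At write time the trusted field could be derived from the source field along
these lines (a sketch; the 'user-stated' value is an assumption):

```javascript
// Hypothetical: derive the trusted flag from the entry's source at log time,
// so the read path never has to guess provenance.
function withTrustFlag(entry) {
  return { ...entry, trusted: entry.source === 'user-stated' };
}
```

Stamping trust at write time means the read-time gate is a single cheap field
check rather than a re-derivation of provenance.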

Closes #841

Co-Authored-By: Ziad Al Sharif <Ziadstr@users.noreply.github.com>
commit 9b65041d1c (parent 58bd7d19ef)
Author: Garry Tan
Date: 2026-04-13 09:36:33 -07:00
2 changed files with 76 additions and 14 deletions


@@ -68,7 +68,13 @@ for (const line of lines) {
     // Determine if this is from the current project or cross-project
     // Cross-project entries are tagged for display
-    e._crossProject = !line.includes(slug) && process.env.GSTACK_SEARCH_CROSS === 'true';
+    const isCrossProject = !line.includes(slug) && process.env.GSTACK_SEARCH_CROSS === 'true';
+    e._crossProject = isCrossProject;
+    // Trust gate: cross-project learnings only loaded if trusted (user-stated)
+    // This prevents prompt injection from one project's AI-generated learnings
+    // silently influencing reviews in another project.
+    if (isCrossProject && e.trusted === false) continue;
     entries.push(e);
   } catch {}
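Extracted as a pure predicate, the gate added above keeps an entry unless it is
both cross-project and explicitly untrusted (a simplified sketch of the diff's
logic):

```javascript
// Sketch of the read-time trust gate as a pure function.
// Mirrors `if (isCrossProject && e.trusted === false) continue;`.
function passesTrustGate(entry, isCrossProject) {
  return !(isCrossProject && entry.trusted === false);
}
```

Note that the strict `=== false` check means entries written before the trusted
field existed (trusted undefined) still pass, so older logs remain readable.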