mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-08 21:49:45 +08:00
fix: Supabase telemetry security lockdown (v0.11.16.0) (#460)
* fix: drop all anon RLS policies + revoke view access + add cache table

  Migration 002 locks down the Supabase telemetry backend:
  - Drops all SELECT, INSERT, UPDATE policies for the anon role
  - Explicitly revokes SELECT on crash_clusters and skill_sequences views
  - Drops stale error_message/failed_step columns (exist live but not in migration)
  - Creates community_pulse_cache table for server-side aggregation caching

* feat: extend community-pulse with full dashboard data + server-side cache

  community-pulse now returns top skills, crash clusters, version distribution,
  and weekly active count in a single aggregated response. Results are cached in
  the community_pulse_cache table (1-hour TTL) to prevent DoS via repeated
  expensive queries.

* fix: route all telemetry through edge functions, not PostgREST

  - gstack-telemetry-sync: POST to /functions/v1/telemetry-ingest instead of
    /rest/v1/telemetry_events. Removes sed field-renaming (edge function expects
    raw JSONL names). Parses inserted count — holds cursor if zero inserted.
  - gstack-update-check: POST to /functions/v1/update-check.
  - gstack-community-dashboard: calls the community-pulse edge function instead
    of direct PostgREST queries.
  - config.sh: removes GSTACK_TELEMETRY_ENDPOINT, fixes misleading comment.

* test: RLS smoke test + telemetry field name verification

  - verify-rls.sh: 9-check smoke test (5 reads + 3 inserts + 1 update) verifying
    the anon key is fully locked out after migration.
  - telemetry.test.ts: verifies JSONL uses raw field names (v, ts, sessions) that
    the edge function expects, not Postgres column names.
  - README.md: fixes privacy claim to match actual RLS policy.
* chore: bump version and changelog (v0.11.16.0)

* fix: pre-landing review fixes — JSONB field order, version filter, RLS verification

  - Dashboard JSON parsing: use per-object grep instead of a field-order-dependent
    regex (JSONB doesn't preserve key order)
  - Version distribution: filter to skill_run events only (was counting all types)
  - verify-rls.sh: only 401/403 count as PASS (not empty 200 or 5xx); add
    Authorization header to test as the anon role properly
  - Remove dead empty loop in community-pulse

* chore: untrack browse/dist binaries — 116MB of arm64-only Mach-O

  These compiled Bun binaries only work on arm64 macOS, and ./setup already
  rebuilds from source for every platform. They were tracked despite .gitignore
  because they were committed before the ignore rule existed. Untracking stops
  them from appearing as modified in every diff.

* docs: tone down changelog — security hardening, not incident report

* fix: keep INSERT policies for old-client compat, preserve extra columns

  - Keep anon INSERT policies so pre-v0.11.16 clients can still sync telemetry
    via PostgREST while new clients use edge functions
  - Add error_message/failed_step columns to the migration (reconcile repo with
    live schema) instead of dropping them
  - Security fix still lands: SELECT and UPDATE policies are dropped

* fix: sync package.json version with VERSION file (0.11.16.0)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
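The "per-object grep" fix mentioned in the review notes can be sketched as follows. This is an illustrative snippet, not the repo's exact code; the sample `SKILLS` string is hypothetical. Because each field is extracted from its object independently, it works no matter how JSONB orders the keys:

```shell
#!/usr/bin/env bash
# Hypothetical sample input — JSONB may emit keys in either order.
SKILLS='{"count":12,"skill":"commit"} {"skill":"review","count":7}'

# Split into one object per line, then pull each field out separately
# so a field-order-dependent regex is never needed.
echo "$SKILLS" | grep -o '{[^}]*}' | while read -r OBJ; do
  SKILL="$(echo "$OBJ" | grep -o '"skill":"[^"]*"' | awk -F'"' '{print $4}')"
  COUNT="$(echo "$OBJ" | grep -o '"count":[0-9]*' | grep -o '[0-9]*')"
  [ -n "$SKILL" ] && [ -n "$COUNT" ] && printf "/%s %s runs\n" "$SKILL" "$COUNT"
done
# → /commit 12 runs
# → /review 7 runs
```

Note this only holds for flat objects without nested braces or embedded `}` characters, which is true of the dashboard's aggregated response.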
CHANGELOG.md (19 lines changed)
@@ -1,5 +1,24 @@
 # Changelog
 
+## [0.11.16.0] - 2026-03-24 — Telemetry Security Hardening
+
+### Fixed
+
+- **Telemetry RLS policies tightened.** Row-level security policies on all telemetry tables now deny direct access via the anon key. All reads and writes go through validated edge functions with schema checks, event type allowlists, and field length limits.
+- **Community dashboard is faster and server-cached.** Dashboard stats are now served from a single edge function with 1-hour server-side caching, replacing multiple direct queries.
+
+### Changed
+
+- **Telemetry sync uses `GSTACK_SUPABASE_URL` instead of `GSTACK_TELEMETRY_ENDPOINT`.** Edge functions need the base URL, not the REST API path. The old variable is removed from `config.sh`.
+- **Cursor advancement is now safe.** The sync script checks the edge function's `inserted` count before advancing — if zero events were inserted, the cursor holds and retries next run.
+
+### For contributors
+
+- New migration: `supabase/migrations/002_tighten_rls.sql`
+- New smoke test: `supabase/verify-rls.sh` (9 checks: 5 reads + 4 writes)
+- Extended `test/telemetry.test.ts` with field name verification
+- Untracked `browse/dist/` binaries from git (arm64-only, rebuilt by `./setup`)
+
 ## [0.11.15.0] - 2026-03-24 — E2E Test Coverage for Plan Reviews & Codex
 
 ### Added
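The "cursor advancement is now safe" behavior in the changelog entry can be sketched as a minimal shell snippet. The response body and starting values here are hypothetical; variable names follow the sync script's conventions:

```shell
#!/usr/bin/env bash
# Hypothetical state: 40 lines already synced, 5 new lines just sent.
CURSOR=40
COUNT=5

# Hypothetical edge-function response body.
RESP='{"inserted":5,"skipped":0}'

# Only advance the cursor if the server actually inserted something;
# on inserted==0 the cursor holds and the same lines retry next run.
INSERTED="$(echo "$RESP" | grep -o '"inserted":[0-9]*' | grep -o '[0-9]*' || echo 0)"
if [ "${INSERTED:-0}" -gt 0 ]; then
  CURSOR=$(( CURSOR + COUNT ))  # advance by lines sent, not rows inserted
fi
echo "$CURSOR"
# → 45
```

Advancing by the sent count (rather than the inserted count) is deliberate: partial inserts can't be mapped back to specific source lines, so the only safe signals are "something landed" (advance past the batch) or "nothing landed" (hold and retry).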
CLAUDE.md (13 lines changed)
@@ -165,6 +165,19 @@ symlink or a real copy. If it's a symlink to your working directory, be aware th
 gen-skill-docs pipeline, consider whether the changes should be tested in isolation
 before going live (especially if the user is actively using gstack in other windows).
 
+## Compiled binaries — NEVER commit browse/dist/
+
+The `browse/dist/` directory contains compiled Bun binaries (`browse`, `find-browse`,
+~58MB each). These are Mach-O arm64 only — they do NOT work on Linux, Windows, or
+Intel Macs. The `./setup` script already builds from source for every platform, so
+the checked-in binaries are redundant. They are tracked by git due to a historical
+mistake and should eventually be removed with `git rm --cached`.
+
+**NEVER stage or commit these files.** They show up as modified in `git status`
+because they're tracked despite `.gitignore` — ignore them. When staging files,
+always use specific filenames (`git add file1 file2`) — never `git add .` or
+`git add -A`, which will accidentally include the binaries.
+
 ## Commit style
 
 **Always bisect commits.** Every commit should be a single logical change. When
@@ -212,7 +212,7 @@ gstack includes **opt-in** usage telemetry to help improve the project. Here's e
 - **What's never sent:** code, file paths, repo names, branch names, prompts, or any user-generated content.
 - **Change anytime:** `gstack-config set telemetry off` disables everything instantly.
 
-Data is stored in [Supabase](https://supabase.com) (open source Firebase alternative). The schema is in [`supabase/migrations/001_telemetry.sql`](supabase/migrations/001_telemetry.sql) — you can verify exactly what's collected. The Supabase publishable key in the repo is a public key (like a Firebase API key) — row-level security policies restrict it to insert-only access.
+Data is stored in [Supabase](https://supabase.com) (open source Firebase alternative). The schema is in [`supabase/migrations/`](supabase/migrations/) — you can verify exactly what's collected. The Supabase publishable key in the repo is a public key (like a Firebase API key) — row-level security policies deny all direct access. Telemetry flows through validated edge functions that enforce schema checks, event type allowlists, and field length limits.
 
 **Local analytics are always available.** Run `gstack-analytics` to see your personal usage dashboard from the local JSONL file — no remote data needed.
 
@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 # gstack-community-dashboard — community usage stats from Supabase
 #
-# Queries the Supabase REST API to show community-wide gstack usage:
+# Calls the community-pulse edge function for aggregated stats:
 # skill popularity, crash clusters, version distribution, retention.
 #
 # Env overrides (for testing):
@@ -30,51 +30,40 @@ if [ -z "$SUPABASE_URL" ] || [ -z "$ANON_KEY" ]; then
   exit 0
 fi
 
-# ─── Helper: query Supabase REST API ─────────────────────────
-query() {
-  local table="$1"
-  local params="${2:-}"
-  curl -sf --max-time 10 \
-    "${SUPABASE_URL}/rest/v1/${table}?${params}" \
-    -H "apikey: ${ANON_KEY}" \
-    -H "Authorization: Bearer ${ANON_KEY}" \
-    2>/dev/null || echo "[]"
-}
+# ─── Fetch aggregated stats from edge function ────────────────
+DATA="$(curl -sf --max-time 15 \
+  "${SUPABASE_URL}/functions/v1/community-pulse" \
+  -H "apikey: ${ANON_KEY}" \
+  2>/dev/null || echo "{}")"
 
 echo "gstack community dashboard"
 echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
 echo ""
 
 # ─── Weekly active installs ──────────────────────────────────
-WEEK_AGO="$(date -u -v-7d +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "")"
-if [ -n "$WEEK_AGO" ]; then
-  PULSE="$(curl -sf --max-time 10 \
-    "${SUPABASE_URL}/functions/v1/community-pulse" \
-    -H "Authorization: Bearer ${ANON_KEY}" \
-    2>/dev/null || echo '{"weekly_active":0}')"
+WEEKLY="$(echo "$DATA" | grep -o '"weekly_active":[0-9]*' | grep -o '[0-9]*' || echo "0")"
+CHANGE="$(echo "$DATA" | grep -o '"change_pct":[0-9-]*' | grep -o '[0-9-]*' || echo "0")"
 
-  WEEKLY="$(echo "$PULSE" | grep -o '"weekly_active":[0-9]*' | grep -o '[0-9]*' || echo "0")"
-  CHANGE="$(echo "$PULSE" | grep -o '"change_pct":[0-9-]*' | grep -o '[0-9-]*' || echo "0")"
-  echo "Weekly active installs: ${WEEKLY}"
-  if [ "$CHANGE" -gt 0 ] 2>/dev/null; then
-    echo " Change: +${CHANGE}%"
-  elif [ "$CHANGE" -lt 0 ] 2>/dev/null; then
-    echo " Change: ${CHANGE}%"
-  fi
-  echo ""
+echo "Weekly active installs: ${WEEKLY}"
+if [ "$CHANGE" -gt 0 ] 2>/dev/null; then
+  echo " Change: +${CHANGE}%"
+elif [ "$CHANGE" -lt 0 ] 2>/dev/null; then
+  echo " Change: ${CHANGE}%"
 fi
+echo ""
 
 # ─── Skill popularity (top 10) ───────────────────────────────
 echo "Top skills (last 7 days)"
 echo "────────────────────────"
 
-# Query telemetry_events, group by skill
-EVENTS="$(query "telemetry_events" "select=skill,gstack_version&event_type=eq.skill_run&event_timestamp=gte.${WEEK_AGO}&limit=1000" 2>/dev/null || echo "[]")"
-
-if [ "$EVENTS" != "[]" ] && [ -n "$EVENTS" ]; then
-  echo "$EVENTS" | grep -o '"skill":"[^"]*"' | awk -F'"' '{print $4}' | sort | uniq -c | sort -rn | head -10 | while read -r COUNT SKILL; do
-    printf " /%-20s %d runs\n" "$SKILL" "$COUNT"
+# Parse top_skills array from JSON
+SKILLS="$(echo "$DATA" | grep -o '"top_skills":\[[^]]*\]' || echo "")"
+if [ -n "$SKILLS" ] && [ "$SKILLS" != '"top_skills":[]' ]; then
+  # Parse each object — handle any key order (JSONB doesn't preserve order)
+  echo "$SKILLS" | grep -o '{[^}]*}' | while read -r OBJ; do
+    SKILL="$(echo "$OBJ" | grep -o '"skill":"[^"]*"' | awk -F'"' '{print $4}')"
+    COUNT="$(echo "$OBJ" | grep -o '"count":[0-9]*' | grep -o '[0-9]*')"
+    [ -n "$SKILL" ] && [ -n "$COUNT" ] && printf " /%-20s %s runs\n" "$SKILL" "$COUNT"
   done
 else
   echo " No data yet"
@@ -85,12 +74,12 @@ echo ""
 echo "Top crash clusters"
 echo "──────────────────"
 
-CRASHES="$(query "crash_clusters" "select=error_class,gstack_version,total_occurrences,identified_users&limit=5" 2>/dev/null || echo "[]")"
-
-if [ "$CRASHES" != "[]" ] && [ -n "$CRASHES" ]; then
-  echo "$CRASHES" | grep -o '"error_class":"[^"]*"' | awk -F'"' '{print $4}' | head -5 | while read -r ERR; do
-    C="$(echo "$CRASHES" | grep -o "\"error_class\":\"$ERR\"[^}]*\"total_occurrences\":[0-9]*" | grep -o '"total_occurrences":[0-9]*' | head -1 | grep -o '[0-9]*')"
-    printf " %-30s %s occurrences\n" "$ERR" "${C:-?}"
+CRASHES="$(echo "$DATA" | grep -o '"crashes":\[[^]]*\]' || echo "")"
+if [ -n "$CRASHES" ] && [ "$CRASHES" != '"crashes":[]' ]; then
+  echo "$CRASHES" | grep -o '{[^}]*}' | head -5 | while read -r OBJ; do
+    ERR="$(echo "$OBJ" | grep -o '"error_class":"[^"]*"' | awk -F'"' '{print $4}')"
+    C="$(echo "$OBJ" | grep -o '"total_occurrences":[0-9]*' | grep -o '[0-9]*')"
+    [ -n "$ERR" ] && printf " %-30s %s occurrences\n" "$ERR" "${C:-?}"
   done
 else
   echo " No crashes reported"
@@ -101,9 +90,12 @@ echo ""
 echo "Version distribution (last 7 days)"
 echo "───────────────────────────────────"
 
-if [ "$EVENTS" != "[]" ] && [ -n "$EVENTS" ]; then
-  echo "$EVENTS" | grep -o '"gstack_version":"[^"]*"' | awk -F'"' '{print $4}' | sort | uniq -c | sort -rn | head -5 | while read -r COUNT VER; do
-    printf " v%-15s %d events\n" "$VER" "$COUNT"
+VERSIONS="$(echo "$DATA" | grep -o '"versions":\[[^]]*\]' || echo "")"
+if [ -n "$VERSIONS" ] && [ "$VERSIONS" != '"versions":[]' ]; then
+  echo "$VERSIONS" | grep -o '{[^}]*}' | head -5 | while read -r OBJ; do
+    VER="$(echo "$OBJ" | grep -o '"version":"[^"]*"' | awk -F'"' '{print $4}')"
+    COUNT="$(echo "$OBJ" | grep -o '"count":[0-9]*' | grep -o '[0-9]*')"
+    [ -n "$VER" ] && [ -n "$COUNT" ] && printf " v%-15s %s events\n" "$VER" "$COUNT"
   done
 else
   echo " No data yet"
@@ -3,11 +3,12 @@
 #
 # Fire-and-forget, backgrounded, rate-limited to once per 5 minutes.
 # Strips local-only fields before sending. Respects privacy tiers.
+# Posts to the telemetry-ingest edge function (not PostgREST directly).
 #
 # Env overrides (for testing):
 # GSTACK_STATE_DIR — override ~/.gstack state directory
 # GSTACK_DIR — override auto-detected gstack root
-# GSTACK_TELEMETRY_ENDPOINT — override Supabase endpoint URL
+# GSTACK_SUPABASE_URL — override Supabase project URL
 set -uo pipefail
 
 GSTACK_DIR="${GSTACK_DIR:-$(cd "$(dirname "$0")/.." && pwd)}"
@@ -19,15 +20,15 @@ RATE_FILE="$ANALYTICS_DIR/.last-sync-time"
 CONFIG_CMD="$GSTACK_DIR/bin/gstack-config"
 
 # Source Supabase config if not overridden by env
-if [ -z "${GSTACK_TELEMETRY_ENDPOINT:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
+if [ -z "${GSTACK_SUPABASE_URL:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
   . "$GSTACK_DIR/supabase/config.sh"
 fi
-ENDPOINT="${GSTACK_TELEMETRY_ENDPOINT:-}"
+SUPABASE_URL="${GSTACK_SUPABASE_URL:-}"
 ANON_KEY="${GSTACK_SUPABASE_ANON_KEY:-}"
 
 # ─── Pre-checks ──────────────────────────────────────────────
-# No endpoint configured yet → exit silently
-[ -z "$ENDPOINT" ] && exit 0
+# No Supabase URL configured yet → exit silently
+[ -z "$SUPABASE_URL" ] && exit 0
 
 # No JSONL file → nothing to sync
 [ -f "$JSONL_FILE" ] || exit 0
@@ -66,6 +67,8 @@ UNSENT="$(tail -n "+$SKIP" "$JSONL_FILE" 2>/dev/null || true)"
 [ -z "$UNSENT" ] && exit 0
 
 # ─── Strip local-only fields and build batch ─────────────────
+# Edge function expects raw JSONL field names (v, ts, sessions) —
+# no column renaming needed (the function maps them internally).
 BATCH="["
 FIRST=true
 COUNT=0
@@ -75,13 +78,10 @@ while IFS= read -r LINE; do
   [ -z "$LINE" ] && continue
   echo "$LINE" | grep -q '^{' || continue
 
-  # Strip local-only fields + map JSONL field names to Postgres column names
+  # Strip local-only fields (keep v, ts, sessions as-is for edge function)
   CLEAN="$(echo "$LINE" | sed \
     -e 's/,"_repo_slug":"[^"]*"//g' \
     -e 's/,"_branch":"[^"]*"//g' \
-    -e 's/"v":/"schema_version":/g' \
-    -e 's/"ts":/"event_timestamp":/g' \
-    -e 's/"sessions":/"concurrent_sessions":/g' \
    -e 's/,"repo":"[^"]*"//g')"
 
   # If anonymous tier, strip installation_id
@@ -106,21 +106,31 @@ BATCH="$BATCH]"
 # Nothing to send after filtering
 [ "$COUNT" -eq 0 ] && exit 0
 
-# ─── POST to Supabase ────────────────────────────────────────
-HTTP_CODE="$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
-  -X POST "${ENDPOINT}/telemetry_events" \
+# ─── POST to edge function ───────────────────────────────────
+RESP_FILE="$(mktemp /tmp/gstack-sync-XXXXXX 2>/dev/null || echo "/tmp/gstack-sync-$$")"
+HTTP_CODE="$(curl -s -w '%{http_code}' --max-time 10 \
+  -X POST "${SUPABASE_URL}/functions/v1/telemetry-ingest" \
   -H "Content-Type: application/json" \
   -H "apikey: ${ANON_KEY}" \
-  -H "Authorization: Bearer ${ANON_KEY}" \
-  -H "Prefer: return=minimal" \
+  -o "$RESP_FILE" \
   -d "$BATCH" 2>/dev/null || echo "000")"
 
 # ─── Update cursor on success (2xx) ─────────────────────────
 case "$HTTP_CODE" in
-  2*) NEW_CURSOR=$(( CURSOR + COUNT ))
-      echo "$NEW_CURSOR" > "$CURSOR_FILE" 2>/dev/null || true ;;
+  2*)
+    # Parse inserted count from response — only advance if events were actually inserted.
+    # Advance by SENT count (not inserted count) because we can't map inserted back to
+    # source lines. If inserted==0, something is systemically wrong — don't advance.
+    INSERTED="$(grep -o '"inserted":[0-9]*' "$RESP_FILE" 2>/dev/null | grep -o '[0-9]*' || echo "0")"
+    if [ "${INSERTED:-0}" -gt 0 ] 2>/dev/null; then
+      NEW_CURSOR=$(( CURSOR + COUNT ))
+      echo "$NEW_CURSOR" > "$CURSOR_FILE" 2>/dev/null || true
+    fi
+    ;;
 esac
 
+rm -f "$RESP_FILE" 2>/dev/null || true
+
 # Update rate limit marker
 touch "$RATE_FILE" 2>/dev/null || true
 
@@ -160,25 +160,22 @@ fi
 mkdir -p "$STATE_DIR"
 
 # Fire Supabase install ping in background (parallel, non-blocking)
-# This logs an update check event for community health metrics.
-# If the endpoint isn't configured or Supabase is down, this is a no-op.
-# Source Supabase config for install ping
-if [ -z "${GSTACK_TELEMETRY_ENDPOINT:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
+# This logs an update check event for community health metrics via edge function.
+# If Supabase is not configured or telemetry is off, this is a no-op.
+if [ -z "${GSTACK_SUPABASE_URL:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
   . "$GSTACK_DIR/supabase/config.sh"
 fi
-_SUPA_ENDPOINT="${GSTACK_TELEMETRY_ENDPOINT:-}"
+_SUPA_URL="${GSTACK_SUPABASE_URL:-}"
 _SUPA_KEY="${GSTACK_SUPABASE_ANON_KEY:-}"
 # Respect telemetry opt-out — don't ping Supabase if user set telemetry: off
 _TEL_TIER="$("$GSTACK_DIR/bin/gstack-config" get telemetry 2>/dev/null || true)"
-if [ -n "$_SUPA_ENDPOINT" ] && [ -n "$_SUPA_KEY" ] && [ "${_TEL_TIER:-off}" != "off" ]; then
+if [ -n "$_SUPA_URL" ] && [ -n "$_SUPA_KEY" ] && [ "${_TEL_TIER:-off}" != "off" ]; then
   _OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
   curl -sf --max-time 5 \
-    -X POST "${_SUPA_ENDPOINT}/update_checks" \
+    -X POST "${_SUPA_URL}/functions/v1/update-check" \
     -H "Content-Type: application/json" \
     -H "apikey: ${_SUPA_KEY}" \
-    -H "Authorization: Bearer ${_SUPA_KEY}" \
-    -H "Prefer: return=minimal" \
-    -d "{\"gstack_version\":\"$LOCAL\",\"os\":\"$_OS\"}" \
+    -d "{\"version\":\"$LOCAL\",\"os\":\"$_OS\"}" \
     >/dev/null 2>&1 &
 fi
 
BIN browse/dist/browse (vendored): Binary file not shown.
BIN browse/dist/find-browse (vendored): Binary file not shown.
@@ -1,6 +1,6 @@
 {
   "name": "gstack",
-  "version": "0.11.14.0",
+  "version": "0.11.16.0",
   "description": "Garry's Stack — Claude Code skills + fast headless browser. One repo, one install, entire AI engineering workflow.",
   "license": "MIT",
   "type": "module",
@@ -1,10 +1,8 @@
 #!/usr/bin/env bash
 # Supabase project config for gstack telemetry
 # These are PUBLIC keys — safe to commit (like Firebase public config).
-# RLS policies restrict what the anon/publishable key can do (INSERT only).
+# RLS denies all access to the anon key. All reads and writes go through
+# edge functions (which use SUPABASE_SERVICE_ROLE_KEY server-side).
 
 GSTACK_SUPABASE_URL="https://frugpmstpnojnhfyimgv.supabase.co"
 GSTACK_SUPABASE_ANON_KEY="sb_publishable_tR4i6cyMIrYTE3s6OyHGHw_ppx2p6WK"
-
-# Telemetry ingest endpoint (Data API)
-GSTACK_TELEMETRY_ENDPOINT="${GSTACK_SUPABASE_URL}/rest/v1"
@@ -1,9 +1,12 @@
 // gstack community-pulse edge function
-// Returns weekly active installation count for preamble display.
-// Cached for 1 hour via Cache-Control header.
+// Returns aggregated community stats for the dashboard:
+// weekly active count, top skills, crash clusters, version distribution.
+// Uses server-side cache (community_pulse_cache table) to prevent DoS.
 
 import { createClient } from "https://esm.sh/@supabase/supabase-js@2";
 
+const CACHE_MAX_AGE_MS = 60 * 60 * 1000; // 1 hour
+
 Deno.serve(async () => {
   const supabase = createClient(
     Deno.env.get("SUPABASE_URL") ?? "",
@@ -11,17 +14,37 @@ Deno.serve(async () => {
   );
 
   try {
-    // Count unique update checks in the last 7 days (install base proxy)
+    // Check cache first
+    const { data: cached } = await supabase
+      .from("community_pulse_cache")
+      .select("data, refreshed_at")
+      .eq("id", 1)
+      .single();
+
+    if (cached?.refreshed_at) {
+      const age = Date.now() - new Date(cached.refreshed_at).getTime();
+      if (age < CACHE_MAX_AGE_MS) {
+        return new Response(JSON.stringify(cached.data), {
+          status: 200,
+          headers: {
+            "Content-Type": "application/json",
+            "Cache-Control": "public, max-age=3600",
+          },
+        });
+      }
+    }
+
+    // Cache is stale or missing — recompute
     const weekAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString();
     const twoWeeksAgo = new Date(Date.now() - 14 * 24 * 60 * 60 * 1000).toISOString();
 
-    // This week's active
+    // Weekly active (update checks this week)
     const { count: thisWeek } = await supabase
       .from("update_checks")
       .select("*", { count: "exact", head: true })
       .gte("checked_at", weekAgo);
 
-    // Last week's active (for change %)
+    // Last week (for change %)
     const { count: lastWeek } = await supabase
       .from("update_checks")
       .select("*", { count: "exact", head: true })
@@ -34,22 +57,78 @@ Deno.serve(async () => {
       ? Math.round(((current - previous) / previous) * 100)
       : 0;
 
-    return new Response(
-      JSON.stringify({
-        weekly_active: current,
-        change_pct: changePct,
-      }),
-      {
-        status: 200,
-        headers: {
-          "Content-Type": "application/json",
-          "Cache-Control": "public, max-age=3600", // 1 hour cache
-        },
-      }
-    );
+    // Top skills (last 7 days)
+    const { data: skillRows } = await supabase
+      .from("telemetry_events")
+      .select("skill")
+      .eq("event_type", "skill_run")
+      .gte("event_timestamp", weekAgo)
+      .not("skill", "is", null)
+      .limit(1000);
+
+    const skillCounts: Record<string, number> = {};
+    for (const row of skillRows ?? []) {
+      if (row.skill) {
+        skillCounts[row.skill] = (skillCounts[row.skill] ?? 0) + 1;
+      }
+    }
+    const topSkills = Object.entries(skillCounts)
+      .sort(([, a], [, b]) => b - a)
+      .slice(0, 10)
+      .map(([skill, count]) => ({ skill, count }));
+
+    // Crash clusters (top 5)
+    const { data: crashes } = await supabase
+      .from("crash_clusters")
+      .select("error_class, gstack_version, total_occurrences, identified_users")
+      .limit(5);
+
+    // Version distribution (last 7 days)
+    const versionCounts: Record<string, number> = {};
+    const { data: versionRows } = await supabase
+      .from("telemetry_events")
+      .select("gstack_version")
+      .eq("event_type", "skill_run")
+      .gte("event_timestamp", weekAgo)
+      .limit(1000);
+
+    for (const row of versionRows ?? []) {
+      if (row.gstack_version) {
+        versionCounts[row.gstack_version] = (versionCounts[row.gstack_version] ?? 0) + 1;
+      }
+    }
+    const topVersions = Object.entries(versionCounts)
+      .sort(([, a], [, b]) => b - a)
+      .slice(0, 5)
+      .map(([version, count]) => ({ version, count }));
+
+    const result = {
+      weekly_active: current,
+      change_pct: changePct,
+      top_skills: topSkills,
+      crashes: crashes ?? [],
+      versions: topVersions,
+    };
+
+    // Upsert cache
+    await supabase
+      .from("community_pulse_cache")
+      .upsert({
+        id: 1,
+        data: result,
+        refreshed_at: new Date().toISOString(),
+      });
+
+    return new Response(JSON.stringify(result), {
+      status: 200,
+      headers: {
+        "Content-Type": "application/json",
+        "Cache-Control": "public, max-age=3600",
+      },
+    });
   } catch {
     return new Response(
-      JSON.stringify({ weekly_active: 0, change_pct: 0 }),
+      JSON.stringify({ weekly_active: 0, change_pct: 0, top_skills: [], crashes: [], versions: [] }),
       {
         status: 200,
         headers: { "Content-Type": "application/json" },
supabase/migrations/002_tighten_rls.sql (new file, 36 lines)
@@ -0,0 +1,36 @@
+-- 002_tighten_rls.sql
+-- Lock down read/update access. Keep INSERT policies so old clients can still
+-- write via PostgREST while new clients migrate to edge functions.
+
+-- Drop all SELECT policies (anon key should not read telemetry data)
+DROP POLICY IF EXISTS "anon_select" ON telemetry_events;
+DROP POLICY IF EXISTS "anon_select" ON installations;
+DROP POLICY IF EXISTS "anon_select" ON update_checks;
+
+-- Drop dangerous UPDATE policy (was unrestricted on all columns)
+DROP POLICY IF EXISTS "anon_update_last_seen" ON installations;
+
+-- Keep INSERT policies — old clients (pre-v0.11.16) still POST directly to
+-- PostgREST. These will be dropped in a future migration once adoption of
+-- edge-function-based sync is widespread.
+-- (anon_insert_only ON telemetry_events — kept)
+-- (anon_insert_only ON installations — kept)
+-- (anon_insert_only ON update_checks — kept)
+
+-- Explicitly revoke view access (belt-and-suspenders)
+REVOKE SELECT ON crash_clusters FROM anon;
+REVOKE SELECT ON skill_sequences FROM anon;
+
+-- Keep error_message and failed_step columns (exist on live schema, may be
+-- used in future). Add them to the migration record so repo matches live.
+ALTER TABLE telemetry_events ADD COLUMN IF NOT EXISTS error_message TEXT;
+ALTER TABLE telemetry_events ADD COLUMN IF NOT EXISTS failed_step TEXT;
+
+-- Cache table for community-pulse aggregation (prevents DoS via repeated queries)
+CREATE TABLE IF NOT EXISTS community_pulse_cache (
+  id INTEGER PRIMARY KEY DEFAULT 1,
+  data JSONB NOT NULL DEFAULT '{}'::jsonb,
+  refreshed_at TIMESTAMPTZ DEFAULT now()
+);
+ALTER TABLE community_pulse_cache ENABLE ROW LEVEL SECURITY;
+-- No anon policies — only service_role_key (used by edge functions) can read/write

supabase/verify-rls.sh (new executable file)
@@ -0,0 +1,103 @@
#!/usr/bin/env bash
# verify-rls.sh — smoke test that anon key is locked out after 002_tighten_rls.sql
#
# Run manually after deploying the migration:
#   bash supabase/verify-rls.sh
#
# All 9 checks should PASS (anon key denied for reads AND writes).
set -uo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
. "$SCRIPT_DIR/config.sh"

URL="$GSTACK_SUPABASE_URL"
KEY="$GSTACK_SUPABASE_ANON_KEY"
PASS=0
FAIL=0

check() {
  local desc="$1"
  local method="$2"
  local path="$3"
  local data="${4:-}"

  # No -f here: with --fail, curl exits non-zero on 4xx, which would make the
  # "|| echo 000" fallback append to the -w status and corrupt it (e.g. "401000").
  local args=(-s -o /dev/null -w '%{http_code}' --max-time 10
    -H "apikey: ${KEY}"
    -H "Authorization: Bearer ${KEY}"
    -H "Content-Type: application/json")

  if [ "$method" = "GET" ]; then
    HTTP="$(curl "${args[@]}" "${URL}/rest/v1/${path}" 2>/dev/null || echo "000")"
  elif [ "$method" = "POST" ]; then
    HTTP="$(curl "${args[@]}" -X POST "${URL}/rest/v1/${path}" -H "Prefer: return=minimal" -d "$data" 2>/dev/null || echo "000")"
  elif [ "$method" = "PATCH" ]; then
    HTTP="$(curl "${args[@]}" -X PATCH "${URL}/rest/v1/${path}" -d "$data" 2>/dev/null || echo "000")"
  fi

  # Only 401/403 prove RLS denial. 200 (even empty) means access is granted.
  # 5xx means something errored but access wasn't denied by policy.
  case "$HTTP" in
    401|403)
      echo "  PASS  $desc (HTTP $HTTP, denied by RLS)"
      PASS=$(( PASS + 1 ))
      ;;
    200)
      # 200 means the request was accepted — check if data was returned
      if [ "$method" = "GET" ]; then
        BODY="$(curl -s --max-time 10 "${URL}/rest/v1/${path}" -H "apikey: ${KEY}" -H "Authorization: Bearer ${KEY}" -H "Content-Type: application/json" 2>/dev/null || echo "")"
        if [ "$BODY" = "[]" ] || [ -z "$BODY" ]; then
          echo "  WARN  $desc (HTTP $HTTP, empty — may be RLS or empty table, verify manually)"
          FAIL=$(( FAIL + 1 ))
        else
          echo "  FAIL  $desc (HTTP $HTTP, got data)"
          FAIL=$(( FAIL + 1 ))
        fi
      else
        echo "  FAIL  $desc (HTTP $HTTP, write accepted)"
        FAIL=$(( FAIL + 1 ))
      fi
      ;;
    201)
      echo "  FAIL  $desc (HTTP $HTTP, write succeeded!)"
      FAIL=$(( FAIL + 1 ))
      ;;
    000)
      echo "  WARN  $desc (connection failed)"
      FAIL=$(( FAIL + 1 ))
      ;;
    *)
      # 404, 406, 500, etc. — access not definitively denied by RLS
      echo "  WARN  $desc (HTTP $HTTP — not a clean RLS denial)"
      FAIL=$(( FAIL + 1 ))
      ;;
  esac
}

echo "RLS Lockdown Verification"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Read denial checks:"
check "SELECT telemetry_events" GET "telemetry_events?select=*&limit=1"
check "SELECT installations" GET "installations?select=*&limit=1"
check "SELECT update_checks" GET "update_checks?select=*&limit=1"
check "SELECT crash_clusters" GET "crash_clusters?select=*&limit=1"
check "SELECT skill_sequences" GET "skill_sequences?select=skill_a&limit=1"

echo ""
echo "Write denial checks:"
check "INSERT telemetry_events" POST "telemetry_events" '{"gstack_version":"test","os":"test","event_timestamp":"2026-01-01T00:00:00Z","outcome":"test"}'
check "INSERT update_checks" POST "update_checks" '{"gstack_version":"test","os":"test"}'
check "INSERT installations" POST "installations" '{"installation_id":"test_verify_rls"}'
check "UPDATE installations" PATCH "installations?installation_id=eq.test_verify_rls" '{"gstack_version":"hacked"}'

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Results: $PASS passed, $FAIL failed (of 9 checks)"

if [ "$FAIL" -gt 0 ]; then
  echo "VERDICT: FAIL — anon key still has access"
  exit 1
else
  echo "VERDICT: PASS — anon key fully locked out"
  exit 0
fi
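The case statement in `check` reduces to a small decision function. The sketch below restates that classification in TypeScript for clarity (`classify` is an illustrative name; GET responses with HTTP 200 additionally need the body check the script performs):

```typescript
type Verdict = "PASS" | "FAIL" | "WARN";

// Mirror of the case statement in check(): only 401/403 prove an RLS denial.
function classify(status: string, isWrite: boolean): Verdict {
  if (status === "401" || status === "403") return "PASS"; // denied by policy
  if (status === "200") return isWrite ? "FAIL" : "WARN";  // GET 200 needs a body check
  if (status === "201") return "FAIL";                     // write succeeded
  if (status === "000") return "WARN";                     // connection failed
  return "WARN"; // 404, 406, 5xx — not a clean RLS denial
}
```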
@@ -244,16 +244,32 @@ describe('gstack-analytics', () => {
  });
});

describe('gstack-telemetry-sync', () => {
  test('exits silently with no Supabase URL configured', () => {
    // Default: GSTACK_SUPABASE_URL is not set → exit 0
    const result = run(`${BIN}/gstack-telemetry-sync`);
    expect(result).toBe('');
  });

  test('exits silently with no JSONL file', () => {
    const result = run(`${BIN}/gstack-telemetry-sync`, { GSTACK_SUPABASE_URL: 'http://localhost:9999' });
    expect(result).toBe('');
  });

  test('does not rename JSONL field names (edge function expects raw names)', () => {
    setConfig('telemetry', 'anonymous');
    run(`${BIN}/gstack-telemetry-log --skill qa --duration 60 --outcome success --session-id raw-fields-1`);

    const events = parseJsonl();
    expect(events).toHaveLength(1);
    // Edge function expects these raw field names, NOT Postgres column names
    expect(events[0]).toHaveProperty('v');
    expect(events[0]).toHaveProperty('ts');
    expect(events[0]).toHaveProperty('sessions');
    // Should NOT have Postgres column names
    expect(events[0]).not.toHaveProperty('schema_version');
    expect(events[0]).not.toHaveProperty('event_timestamp');
    expect(events[0]).not.toHaveProperty('concurrent_sessions');
  });
});
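The raw-name contract tested above implies that the rename to Postgres column names now happens server-side in the telemetry-ingest edge function. A sketch of that mapping, where only the field pairs come from the test and the function itself is an assumption:

```typescript
// Assumed server-side rename in telemetry-ingest. The three field pairs are
// taken from the telemetry test; toColumns itself is an illustrative sketch.
const FIELD_MAP: Record<string, string> = {
  v: "schema_version",
  ts: "event_timestamp",
  sessions: "concurrent_sessions",
};

function toColumns(event: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(event)) {
    // Rename known raw JSONL fields; pass everything else through unchanged.
    out[FIELD_MAP[key] ?? key] = value;
  }
  return out;
}
```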

describe('gstack-community-dashboard', () => {