diff --git a/agents/a11y-architect.md b/agents/a11y-architect.md
index e843ab93..0cc32886 100644
--- a/agents/a11y-architect.md
+++ b/agents/a11y-architect.md
@@ -2,7 +2,7 @@
 name: a11y-architect
 description: Accessibility Architect specializing in WCAG 2.2 compliance for Web and Native platforms. Use PROACTIVELY when designing UI components, establishing design systems, or auditing code for inclusive user experiences.
 model: sonnet
-tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
+tools: ["Read", "Write", "Edit", "Grep", "Glob"]
 ---

 ## Prompt Defense Baseline
diff --git a/agents/pytorch-build-resolver.md b/agents/pytorch-build-resolver.md
index 71e0184b..19511a50 100644
--- a/agents/pytorch-build-resolver.md
+++ b/agents/pytorch-build-resolver.md
@@ -47,7 +47,7 @@ python -c "import torch; x = torch.randn(2,3).cuda(); print('CUDA tensor test: O
 3. Trace tensor shapes -> Print shapes at key points
 4. Apply minimal fix -> Only what's needed
 5. Run failing script -> Verify fix
-6. Check gradients flow -> Ensure backward pass works
+6. Check gradients flow -> Ensure autograd computes expected gradients
 ```

 ## Common Fix Patterns
@@ -57,13 +57,13 @@ python -c "import torch; x = torch.randn(2,3).cuda(); print('CUDA tensor test: O
 | `RuntimeError: mat1 and mat2 shapes cannot be multiplied` | Linear layer input size mismatch | Fix `in_features` to match previous layer output |
 | `RuntimeError: Expected all tensors to be on the same device` | Mixed CPU/GPU tensors | Add `.to(device)` to all tensors and model |
 | `CUDA out of memory` | Batch too large or memory leak | Reduce batch size, add `torch.cuda.empty_cache()`, use gradient checkpointing |
-| `RuntimeError: element 0 of tensors does not require grad` | Detached tensor in loss computation | Remove `.detach()` or `.item()` before backward |
+| `RuntimeError: element 0 of tensors does not require grad` | Detached tensor in loss computation | Remove `.detach()` or `.item()` before gradient computation |
 | `ValueError: Expected input batch_size X to match target batch_size Y` | Mismatched batch dimensions | Fix DataLoader collation or model output reshape |
 | `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` | In-place op breaks autograd | Replace `x += 1` with `x = x + 1`, avoid in-place relu |
 | `RuntimeError: stack expects each tensor to be equal size` | Inconsistent tensor sizes in DataLoader | Add padding/truncation in Dataset `__getitem__` or custom `collate_fn` |
 | `RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR` | cuDNN incompatibility or corrupted state | Set `torch.backends.cudnn.enabled = False` to test, update drivers |
 | `IndexError: index out of range in self` | Embedding index >= num_embeddings | Fix vocabulary size or clamp indices |
-| `RuntimeError: Trying to backward through the graph a second time` | Reused computation graph | Add `retain_graph=True` or restructure forward pass |
+| `RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed)` | Reused computation graph | Add `retain_graph=True` or restructure forward pass |

 ## Shape Debugging

diff --git a/agents/seo-specialist.md b/agents/seo-specialist.md
index 18ad08b6..ec6758f1 100644
--- a/agents/seo-specialist.md
+++ b/agents/seo-specialist.md
@@ -1,7 +1,7 @@
 ---
 name: seo-specialist
 description: SEO specialist for technical SEO audits, on-page optimization, structured data, Core Web Vitals, and content/keyword mapping.
 Use for site audits, meta tag reviews, schema markup, sitemap and robots issues, and SEO remediation plans.
-tools: ["Read", "Grep", "Glob", "Bash", "WebSearch", "WebFetch"]
+tools: ["Read", "Grep", "Glob", "WebSearch", "WebFetch"]
 model: sonnet
 ---