mirror of https://github.com/affaan-m/everything-claude-code.git
synced 2026-05-12 07:37:24 +08:00

docs: salvage scientific research skills

committed by Affaan Mustafa
parent 0e12267ff2
commit df32d6bea8
@@ -11,7 +11,7 @@
    {
      "name": "ecc",
      "source": "./",
-     "description": "The most comprehensive Claude Code plugin — 53 agents, 192 skills, 69 legacy command shims, selective install profiles, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
+     "description": "The most comprehensive Claude Code plugin — 53 agents, 195 skills, 69 legacy command shims, selective install profiles, and production-ready hooks for TDD, security scanning, code review, and continuous learning",
      "version": "2.0.0-rc.1",
      "author": {
        "name": "Affaan Mustafa",
@@ -1,7 +1,7 @@
  {
    "name": "ecc",
    "version": "2.0.0-rc.1",
-   "description": "Battle-tested Claude Code plugin for engineering teams — 53 agents, 192 skills, 69 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use",
+   "description": "Battle-tested Claude Code plugin for engineering teams — 53 agents, 195 skills, 69 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use",
    "author": {
      "name": "Affaan Mustafa",
      "url": "https://x.com/affaanmustafa"
@@ -22,6 +22,11 @@
    "plugin": [
      "./plugins"
    ],
+   "skills": {
+     "paths": [
+       "../skills"
+     ]
+   },
    "agent": {
      "build": {
        "description": "Primary coding agent for development work",
@@ -1,6 +1,6 @@
  # Everything Claude Code (ECC) — Agent Instructions

- This is a **production-ready AI coding plugin** providing 53 specialized agents, 192 skills, 69 commands, and automated hook workflows for software development.
+ This is a **production-ready AI coding plugin** providing 53 specialized agents, 195 skills, 69 commands, and automated hook workflows for software development.

  **Version:** 2.0.0-rc.1
@@ -146,7 +146,7 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat

  ```
  agents/ — 53 specialized subagents
- skills/ — 192 workflow skills and domain knowledge
+ skills/ — 195 workflow skills and domain knowledge
  commands/ — 69 slash commands
  hooks/ — Trigger-based automations
  rules/ — Always-follow guidelines (common + per-language)
@@ -350,7 +350,7 @@ If you stacked methods, clean up in this order:
  /plugin list ecc@ecc
  ```

- **That's it!** You now have access to 53 agents, 192 skills, and 69 legacy command shims.
+ **That's it!** You now have access to 53 agents, 195 skills, and 69 legacy command shims.

  ### Dashboard GUI
@@ -1338,7 +1338,7 @@ The configuration is automatically detected from `.opencode/opencode.json`.
  |---------|-------------|----------|--------|
  | Agents | PASS: 53 agents | PASS: 12 agents | **Claude Code leads** |
  | Commands | PASS: 69 commands | PASS: 31 commands | **Claude Code leads** |
- | Skills | PASS: 192 skills | PASS: 37 skills | **Claude Code leads** |
+ | Skills | PASS: 195 skills | PASS: 37 skills | **Claude Code leads** |
  | Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
  | Rules | PASS: 29 rules | PASS: 13 instructions | **Claude Code leads** |
  | MCP Servers | PASS: 14 servers | PASS: Full | **Full parity** |
@@ -1443,7 +1443,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
  |---------|------------|------------|-----------|----------|
  | **Agents** | 53 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
  | **Commands** | 69 | Shared | Instruction-based | 31 |
- | **Skills** | 192 | Shared | 10 (native format) | 37 |
+ | **Skills** | 195 | Shared | 10 (native format) | 37 |
  | **Hook Events** | 8 types | 15 types | None yet | 11 types |
  | **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
  | **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |
@@ -160,7 +160,7 @@ Copy-Item -Recurse rules/typescript "$HOME/.claude/rules/"
  /plugin list ecc@ecc
  ```

- **Done!** You can now use 53 agents, 192 skills, and 69 commands.
+ **Done!** You can now use 53 agents, 195 skills, and 69 commands.

  ### multi-* commands require additional configuration
@@ -1,6 +1,6 @@
  # Everything Claude Code (ECC) — Agent Instructions

- This is a **production-ready AI coding plugin** providing 53 specialized agents, 192 skills, 69 commands, and automated hook workflows for software development.
+ This is a **production-ready AI coding plugin** providing 53 specialized agents, 195 skills, 69 commands, and automated hook workflows for software development.

  **Version:** 2.0.0-rc.1
@@ -147,7 +147,7 @@

  ```
  agents/ — 53 specialized subagents
- skills/ — 192 workflow skills and domain knowledge
+ skills/ — 195 workflow skills and domain knowledge
  commands/ — 69 slash commands
  hooks/ — Trigger-based automations
  rules/ — Always-follow guidelines (common + per-language)
@@ -224,7 +224,7 @@ Copy-Item -Recurse rules/typescript "$HOME/.claude/rules/"
  /plugin list ecc@ecc
  ```

- **Done!** You can now use 53 agents, 192 skills, and 69 commands.
+ **Done!** You can now use 53 agents, 195 skills, and 69 commands.

  ***
@@ -1134,7 +1134,7 @@ opencode
  |---------|-------------|----------|--------|
  | Agents | PASS: 53 | PASS: 12 | **Claude Code leads** |
  | Commands | PASS: 69 | PASS: 31 | **Claude Code leads** |
- | Skills | PASS: 192 | PASS: 37 | **Claude Code leads** |
+ | Skills | PASS: 195 | PASS: 37 | **Claude Code leads** |
  | Hooks | PASS: 8 event types | PASS: 11 events | **OpenCode has more!** |
  | Rules | PASS: 29 | PASS: 13 instructions | **Claude Code leads** |
  | MCP Servers | PASS: 14 | PASS: Full | **Full parity** |
@@ -1242,7 +1242,7 @@ ECC is the **first plugin to maximize every major AI coding tool**.
  |---------|------------|------------|-----------|----------|
  | **Agents** | 53 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
  | **Commands** | 69 | Shared | Instruction-based | 31 |
- | **Skills** | 192 | Shared | 10 (native format) | 37 |
+ | **Skills** | 195 | Shared | 10 (native format) | 37 |
  | **Hook Events** | 8 types | 15 types | None yet | 11 types |
  | **Hook Scripts** | 20+ scripts | 16 scripts (DRY adapter) | N/A | Plugin hooks |
  | **Rules** | 34 (common + lang) | 34 (YAML frontmatter) | Instruction-based | 13 instructions |
@@ -276,7 +276,10 @@
      "paths": [
        "skills/deep-research",
        "skills/exa-search",
-       "skills/research-ops"
+       "skills/research-ops",
+       "skills/scientific-db-pubmed-database",
+       "skills/scientific-thinking-literature-review",
+       "skills/scientific-thinking-scholar-evaluation"
      ],
      "targets": [
        "claude",
@@ -207,6 +207,9 @@
      "skills/regex-vs-llm-structured-text/",
      "skills/remotion-video-creation/",
      "skills/research-ops/",
+     "skills/scientific-db-pubmed-database/",
+     "skills/scientific-thinking-literature-review/",
+     "skills/scientific-thinking-scholar-evaluation/",
      "skills/returns-reverse-logistics/",
      "skills/rust-patterns/",
      "skills/rust-testing/",
175  skills/scientific-db-pubmed-database/SKILL.md  Normal file
@@ -0,0 +1,175 @@
---
name: pubmed-database
description: Direct PubMed and NCBI E-utilities search workflows for biomedical literature, MeSH queries, PMID lookup, citation retrieval, and API-backed literature monitoring.
origin: community
---

# PubMed Database

Use this skill when a task needs biomedical literature from PubMed rather than
general web search.

## When to Use

- Searching MEDLINE or life-sciences literature.
- Building PubMed queries with MeSH terms, field tags, dates, or article types.
- Looking up PMIDs, abstracts, publication metadata, or related citations.
- Running systematic-review search passes that need repeatable search strings.
- Using NCBI E-utilities directly from Python, shell, or another HTTP client.
## Query Construction

Start with the research question, split it into concepts, then combine concepts
with Boolean operators.

```text
concept_1 AND concept_2 AND filter
synonym_a OR synonym_b
NOT exclusion_term
```

Useful PubMed field tags:

- `[ti]`: title
- `[ab]`: abstract
- `[tiab]`: title or abstract
- `[au]`: author
- `[ta]`: journal title abbreviation
- `[mh]`: MeSH term
- `[majr]`: major MeSH topic
- `[pt]`: publication type
- `[dp]`: date of publication
- `[la]`: language

Examples:

```text
diabetes mellitus[mh] AND treatment[tiab] AND systematic review[pt] AND 2023:2026[dp]
(metformin[nm] OR insulin[nm]) AND diabetes mellitus, type 2[mh] AND randomized controlled trial[pt]
smith ja[au] AND cancer[tiab] AND 2026[dp] AND english[la]
```
## MeSH and Subheadings

Prefer MeSH when the concept has a stable controlled-vocabulary term. Combine
MeSH with title/abstract terms when the topic is new or terminology varies.

Correct subheading syntax puts the subheading before the field tag:

```text
diabetes mellitus, type 2/drug therapy[mh]
cardiovascular diseases/prevention & control[mh]
```

Use `[majr]` only when the topic must be central to the paper. It can improve
precision but may miss relevant work.
## Filters

Publication types:

- `clinical trial[pt]`
- `meta-analysis[pt]`
- `randomized controlled trial[pt]`
- `review[pt]`
- `systematic review[pt]`
- `guideline[pt]`

Date filters:

```text
2026[dp]
2020:2026[dp]
2026/03/15[dp]
```

Availability filters:

```text
free full text[sb]
hasabstract[text]
```
## E-utilities Workflow

NCBI E-utilities supports repeatable API workflows:

1. `esearch.fcgi`: search and return PMIDs.
2. `esummary.fcgi`: return lightweight article metadata.
3. `efetch.fcgi`: fetch abstracts or full records in XML, MEDLINE, or text.
4. `elink.fcgi`: find related articles and linked resources.

Use an email and API key for production scripts. Store API keys in environment
variables, never in committed files or command history.

```python
import os
import time

import requests

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"


def esearch(query: str, retmax: int = 20) -> list[str]:
    """Search PubMed and return matching PMIDs."""
    params = {
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": retmax,
        "tool": "ecc-pubmed-search",
        "email": os.environ.get("NCBI_EMAIL", ""),
    }
    # An API key raises NCBI's rate limit; load it from the environment only.
    api_key = os.environ.get("NCBI_API_KEY")
    if api_key:
        params["api_key"] = api_key

    response = requests.get(f"{BASE}/esearch.fcgi", params=params, timeout=30)
    response.raise_for_status()
    time.sleep(0.35)  # stay under the keyless 3 requests/second limit
    return response.json()["esearchresult"]["idlist"]


pmids = esearch("hypertension[mh] AND randomized controlled trial[pt] AND 2024:2026[dp]")
print(pmids)
```

For batches, prefer NCBI history server parameters (`usehistory=y`,
`WebEnv`, `query_key`) instead of passing very long PMID lists through URLs.
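A minimal sketch of that history-server pattern, reusing `BASE`, `requests`, and
`time` from the example above; the parameters follow the E-utilities
documentation, but verify the response handling against a live run:

```python
def esearch_history(query: str) -> tuple[str, str]:
    """Search PubMed with usehistory=y and return (WebEnv, query_key)."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "usehistory": "y"}
    response = requests.get(f"{BASE}/esearch.fcgi", params=params, timeout=30)
    response.raise_for_status()
    result = response.json()["esearchresult"]
    return result["webenv"], result["querykey"]


def efetch_batch(webenv: str, query_key: str, retstart: int = 0, retmax: int = 200) -> str:
    """Fetch one batch of MEDLINE-format records from the history server."""
    params = {
        "db": "pubmed",
        "WebEnv": webenv,
        "query_key": query_key,
        "retstart": retstart,  # offset into the stored result set
        "retmax": retmax,      # batch size; loop to page through large sets
        "rettype": "medline",
        "retmode": "text",
    }
    response = requests.get(f"{BASE}/efetch.fcgi", params=params, timeout=60)
    response.raise_for_status()
    time.sleep(0.35)  # same rate-limit courtesy as the search call
    return response.text
```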
## Output Discipline

For each search pass, record:

- exact search string
- database searched
- date searched
- filters used
- result count
- export format
- any manual exclusions

Example:

```markdown
| Database | Date searched | Query | Filters | Results |
| --- | --- | --- | --- | ---: |
| PubMed | 2026-05-11 | `sickle cell disease[mh] AND CRISPR[tiab]` | 2020:2026[dp], English | 42 |
```

## Review Checklist

- Are field tags valid PubMed tags?
- Are MeSH terms paired with free-text synonyms for newer topics?
- Is the date range explicit and appropriate?
- Does the search log include enough detail to reproduce the query?
- Are API keys loaded from the environment?
- Does HTTP code call `raise_for_status()` or otherwise handle non-200
  responses before parsing?
- Are rate limits respected?

## References

- [PubMed help](https://pubmed.ncbi.nlm.nih.gov/help/)
- [NCBI E-utilities documentation](https://www.ncbi.nlm.nih.gov/books/NBK25501/)
- [NCBI API key guidance](https://support.nlm.nih.gov/kbArticle/?pn=KA-05317)
- NCBI support: <eutilities@ncbi.nlm.nih.gov>
192  skills/scientific-thinking-literature-review/SKILL.md  Normal file
@@ -0,0 +1,192 @@
---
name: literature-review
description: Systematic literature-review workflow for academic, biomedical, technical, and scientific topics, including search planning, source screening, synthesis, citation checks, and evidence logging.
origin: community
---

# Literature Review

Use this skill when the task is to find, screen, synthesize, and cite a body of
academic or technical literature.

## When to Use

- Building a systematic, scoping, or narrative literature review.
- Synthesizing the state of the art for a research question.
- Finding gaps, contradictions, or future-work directions.
- Preparing citation-backed background sections for papers or reports.
- Comparing evidence across peer-reviewed papers, preprints, patents, and
  technical reports.
## Review Types

- **Narrative review**: broad synthesis; useful for orientation.
- **Scoping review**: maps concepts, methods, and evidence gaps.
- **Systematic review**: predefined protocol, reproducible search, explicit
  screening and exclusion.
- **Meta-analysis**: systematic review plus quantitative effect aggregation.

Ask the user which level of rigor is needed. If unspecified, default to a
scoping review for exploratory work and a systematic review for publication or
clinical claims.
## Workflow

### 1. Define the Question

Convert the prompt into a searchable research question.

For clinical or biomedical work, use PICO:

- Population
- Intervention or exposure
- Comparator
- Outcome

For technical work, use:

- system or domain
- method or intervention
- comparison baseline
- evaluation metric

### 2. Plan the Search

Create a search protocol before collecting sources:

- databases to search
- date range
- languages
- publication types
- inclusion criteria
- exclusion criteria
- exact search strings

Minimum useful database set:

- PubMed for biomedical and life-sciences literature.
- arXiv for CS, math, physics, quantitative biology, and preprints.
- Semantic Scholar or Crossref for broad academic discovery.
- Domain-specific sources when relevant, such as clinical-trial registries,
  patent databases, standards bodies, or official technical docs.
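A minimal protocol template covering the fields above; the values shown are
placeholders, not recommendations:

```markdown
# Search Protocol: <Topic>

- Databases: PubMed, arXiv, Semantic Scholar
- Date range: <e.g., 2020:2026>
- Languages: <e.g., English>
- Publication types: <e.g., RCTs, systematic reviews>
- Inclusion criteria: <list>
- Exclusion criteria: <list>
- Search strings:
  - PubMed: `<exact query>`
  - arXiv: `<exact query>`
```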
### 3. Search and Log Evidence

Keep a search log that makes the review reproducible:

```markdown
| Database | Date searched | Query | Filters | Results | Export |
| --- | --- | --- | --- | ---: | --- |
| PubMed | 2026-05-11 | `("CRISPR"[tiab] OR "Cas9"[tiab]) AND "sickle cell"[tiab]` | 2020:2026, English | 86 | PMID list |
| arXiv | 2026-05-11 | `CRISPR sickle cell gene editing` | q-bio, 2020:2026 | 9 | BibTeX |
```

Save raw IDs, URLs, DOIs, abstracts, and notes separately from the final prose.
### 4. Deduplicate

Deduplicate in this order:

1. DOI
2. PMID or arXiv ID
3. exact title
4. normalized title plus first author and year

Record how many duplicates were removed.
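A minimal Python sketch of that ordering; the record fields (`doi`, `pmid`,
`arxiv_id`, `title`, `first_author`, `year`) are assumed names, and steps 3 and
4 are collapsed into one normalized-title key for brevity:

```python
def dedup_key(record: dict) -> tuple:
    """Build a dedup key following the priority order above."""
    if record.get("doi"):
        return ("doi", record["doi"].lower())
    if record.get("pmid") or record.get("arxiv_id"):
        return ("id", record.get("pmid") or record.get("arxiv_id"))
    # Steps 3-4 combined: normalized title plus first author and year.
    title = " ".join(record["title"].lower().split())
    return ("title", title, record.get("first_author", "").lower(), record.get("year"))


def deduplicate(records: list[dict]) -> list[dict]:
    """Drop duplicates and report how many were removed."""
    seen: set[tuple] = set()
    unique = []
    for record in records:
        key = dedup_key(record)
        if key not in seen:
            seen.add(key)
            unique.append(record)
    print(f"Removed {len(records) - len(unique)} duplicates")
    return unique
```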
### 5. Screen Sources

Screen in stages:

1. title
2. abstract
3. full text

For systematic work, record exclusion reasons:

- wrong population
- wrong intervention
- wrong outcome
- not primary research
- duplicate
- unavailable full text
- outside date range
### 6. Extract Data

Use a structured extraction table:

```markdown
| Study | Design | Population/Data | Method | Comparator | Outcome | Key finding | Limitations |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Author Year | RCT/cohort/review/etc. | sample or corpus | method | baseline | measured outcome | result | caveat |
```

For technical papers, include dataset, benchmark, metric, baseline, and
reproducibility notes.
### 7. Synthesize

Group evidence by theme rather than summarizing papers one by one.

Useful synthesis lenses:

- strongest evidence
- conflicting evidence
- methodological weaknesses
- population or dataset limits
- recency and replication
- practical implications
- unanswered questions

Separate claims by confidence:

- **High confidence**: replicated, high-quality evidence across sources.
- **Medium confidence**: plausible but limited by sample, method, or recency.
- **Low confidence**: early, speculative, single-source, or weakly measured.
### 8. Verify Citations

Before finalizing:

- verify DOI, PMID, arXiv ID, or official URL
- check author names and publication year
- do not cite a paper for a claim it does not make
- mark preprints as preprints
- distinguish reviews from primary evidence
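A minimal DOI check against the public Crossref REST API (no key required) can
automate the first two bullets; treat the exact JSON field paths as assumptions
to confirm against the Crossref docs:

```python
import requests


def verify_doi(doi: str) -> dict | None:
    """Look up a DOI on Crossref and return basic metadata, or None if unresolved."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    if response.status_code == 404:
        return None  # DOI does not resolve; flag the citation for manual review
    response.raise_for_status()
    work = response.json()["message"]
    return {
        "title": (work.get("title") or [""])[0],
        "first_author": (work.get("author") or [{}])[0].get("family", ""),
        "year": (work.get("issued", {}).get("date-parts") or [[None]])[0][0],
    }
```

Compare the returned title, author, and year against the manuscript's
bibliography entry before accepting the citation.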
## Output Template

```markdown
# Literature Review: <Topic>

Generated: <date>
Review type: <narrative | scoping | systematic | meta-analysis>
Search window: <dates>
Databases: <list>

## Research Question

## Search Strategy

## Inclusion and Exclusion Criteria

## Evidence Summary

## Thematic Synthesis

## Gaps and Limitations

## References

## Search Log
```
## Pitfalls

- Do not treat search snippets as evidence.
- Do not mix preprints, reviews, and primary studies without labeling them.
- Do not omit negative or conflicting findings.
- Do not claim systematic-review rigor without a reproducible protocol.
- Do not use a single database for a broad claim unless the scope is explicitly
  limited to that database.
160  skills/scientific-thinking-scholar-evaluation/SKILL.md  Normal file
@@ -0,0 +1,160 @@
---
name: scholar-evaluation
description: Structured scholarly-work evaluation for papers, proposals, literature reviews, methods sections, evidence quality, citation support, and research-writing feedback.
origin: community
---

# Scholar Evaluation

Use this skill to evaluate academic or scientific work with a repeatable rubric.

## When to Use

- Reviewing a research paper, proposal, thesis chapter, or literature review.
- Checking whether claims are supported by cited evidence.
- Evaluating methodology, study design, analysis, or limitations.
- Comparing two or more papers for quality or relevance.
- Producing structured feedback for revision.
## Evaluation Scope

Start by identifying the artifact:

- empirical research paper
- theoretical paper
- technical report
- systematic or narrative literature review
- research proposal
- thesis or dissertation chapter
- conference abstract or short paper

Then choose scope:

- **comprehensive**: all rubric dimensions
- **targeted**: one or two dimensions, such as method or citations
- **comparative**: rank multiple works against the same rubric
## Rubric

Score each applicable dimension from 1 to 5:

- 5: excellent; clear, rigorous, and publication-ready
- 4: good; minor improvements needed
- 3: adequate; meaningful gaps but usable
- 2: weak; substantial revision needed
- 1: poor; major validity or clarity problems

Use `N/A` for dimensions that do not apply.
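Where an overall score is wanted, one simple aggregation is to average the
numeric dimensions and skip `N/A`; a sketch, with the dimension names and
values as illustrative assumptions:

```python
def overall_score(scores: dict[str, int | None]) -> float | None:
    """Average 1-5 dimension scores, ignoring N/A (None) entries."""
    numeric = [s for s in scores.values() if s is not None]
    if not numeric:
        return None  # nothing applicable was scored
    return round(sum(numeric) / len(numeric), 1)


scores = {
    "problem_and_question": 4,
    "literature_and_context": 3,
    "methodology": 2,
    "data_and_evidence": None,  # N/A: no new data collected
    "analysis": 3,
    "citations": 4,
}
print(overall_score(scores))  # 3.2
```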
### 1. Problem and Research Question

- Is the problem clear and specific?
- Is the contribution meaningful?
- Are scope and assumptions explicit?
- Does the question match the claimed contribution?

### 2. Literature and Context

- Is relevant prior work covered?
- Does the work synthesize rather than merely list sources?
- Are gaps accurately identified?
- Are recent and foundational sources balanced?

### 3. Methodology

- Does the method answer the research question?
- Are design choices justified?
- Are variables, datasets, participants, or materials described clearly?
- Could another researcher reproduce the work?
- Are ethical and practical constraints acknowledged?

### 4. Data and Evidence

- Are data sources credible and appropriate?
- Is sample size or corpus coverage adequate?
- Are inclusion, exclusion, and preprocessing decisions documented?
- Are missing data and bias risks discussed?

### 5. Analysis

- Are statistical, qualitative, or computational methods appropriate?
- Are baselines and controls fair?
- Are uncertainty, sensitivity, or robustness checks included when needed?
- Are alternative explanations considered?

### 6. Results and Interpretation

- Are results clearly presented?
- Do claims stay within the evidence?
- Are figures, tables, and metrics understandable?
- Are negative or null results handled honestly?

### 7. Limitations and Threats to Validity

- Are limitations specific rather than generic?
- Are internal, external, construct, and conclusion-validity risks addressed?
- Does the paper distinguish speculation from demonstrated results?

### 8. Writing and Structure

- Is the argument easy to follow?
- Are sections organized around the research question?
- Are definitions and notation clear?
- Is the tone precise and scholarly?

### 9. Citations

- Do cited papers support the claims attached to them?
- Are primary sources used where possible?
- Are reviews labeled as reviews?
- Are preprints labeled as preprints?
- Are citation metadata and links correct?
## Review Process

1. Read the abstract, introduction, figures, and conclusion for the claimed
   contribution.
2. Read methods and results for evidence quality.
3. Check the strongest claims against cited sources.
4. Score each applicable dimension.
5. Separate critical blockers from revision suggestions.
6. End with concrete next edits.
## Output Template

```markdown
# Scholar Evaluation: <Artifact>

## Overall Assessment

- Overall score: <1-5 or N/A>
- Confidence: <high | medium | low>
- Summary: <3-5 sentences>

## Dimension Scores

| Dimension | Score | Evidence | Revision priority |
| --- | ---: | --- | --- |
| Problem and question | | | |
| Literature and context | | | |
| Methodology | | | |
| Data and evidence | | | |
| Analysis | | | |
| Results and interpretation | | | |
| Limitations | | | |
| Writing and structure | | | |
| Citations | | | |

## Critical Issues

## Recommended Revisions

## Evidence Checks Needed
```
## Pitfalls

- Do not use the score as a substitute for concrete feedback.
- Do not penalize a paper for omitting a dimension outside its scope.
- Do not treat citation count, venue, or author reputation as proof of quality.
- Do not accept unsupported claims just because they appear in the abstract.