Paper Card Analyzer
v1.0.0
Description
name: paper-card-analyzer
description: Analyze paper-parse outputs and generate a research-oriented paper card directly in natural language. Use this skill after paper parsing when you need a structured summary of contributions, method, experiments, limitations, reproducibility notes, and future work without running any extra script.
metadata:
{
  "openclaw": {
    "emoji": "🧪"
  }
}
Paper Card Analyzer
Generate a research-oriented paper card from `paper-parse` results using direct natural-language analysis.
Input Expectations
Read artifacts produced by `paper-parse`:
- `*_content.md` (full parsed paper content in markdown)
- `*_parsed.json` (metadata and figures)
Output
Produce the paper card in English by default, with balanced depth, and always save outputs in the same folder as the selected `*_content.md` and `*_parsed.json`.
Always save:
- `paper-card.md`
- `paper-card.json`
- `paper-card-feedback.md` (feedback log and revision history)
The generated card uses this fixed section order:
- Paper Snapshot
- Research Problem and Motivation
- Core Contributions
- Method Overview
- Experimental Setup
- Main Results and Evidence
- Ablation and Analysis Findings
- Limitations and Threats to Validity
- Reproducibility Notes
- Open Questions and Future Work
Workflow
- Identify the target pair of files:
  - Preferred: one `*_content.md` and one `*_parsed.json` in the same folder.
  - If multiple candidates exist, ask the user to pick one pair.
- Read parsed metadata from `*_parsed.json`: `title`, `paper_name`, `num_pages`, `figures`.
- Read `*_content.md` and extract evidence by section:
  - abstract/introduction/method/experiments/results/ablation/limitations/conclusion.
- Write a research-oriented card:
  - Prioritize scientific novelty, methodological logic, evidence strength, validity threats, and reproducibility.
- Save the first draft to the same folder:
  - `paper-card.md` and `paper-card.json`.
- Request human feedback and revise:
  - Ask what to correct, expand, or make stricter.
  - Update the card and save again (overwrite current files).
  - Append each round to `paper-card-feedback.md` with: round number, user request, key edits.
- Repeat revision rounds until the user explicitly confirms satisfaction.
- Keep uncertainty explicit:
  - If a section is missing, say "Not clearly stated in parsed content."
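The file-identification step above can be sketched in Python. The `find_pairs` helper and the `_content.md`/`_parsed.json` naming convention follow this spec; the exact matching logic is an assumption, not part of the skill runtime.

```python
# Hypothetical sketch: locate candidate paper-parse output pairs in a folder.
from pathlib import Path

def find_pairs(folder):
    """Return (content_md, parsed_json) pairs that share the same stem."""
    folder = Path(folder)
    pairs = []
    for md in sorted(folder.glob("*_content.md")):
        stem = md.name[: -len("_content.md")]
        parsed = folder / f"{stem}_parsed.json"
        if parsed.exists():  # only complete pairs are usable
            pairs.append((md, parsed))
    return pairs

pairs = find_pairs(".")
if len(pairs) > 1:
    # Per the workflow, multiple candidates mean the user must choose.
    print("Multiple candidate pairs found; ask the user to pick one:")
    for md, js in pairs:
        print(f"  {md.name}  +  {js.name}")
```

A lone `*_content.md` without its `*_parsed.json` is deliberately skipped, since the card needs both artifacts.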
Reliability Protocol
For every claim in the paper card:
- Use only evidence from `*_content.md` or `*_parsed.json`.
- If evidence is weak or absent, mark it as "Not clearly stated in parsed content."
- Separate "author-reported result" from "analyst assessment."
- Never infer exact numbers, datasets, or baselines without direct textual support.
- Prefer conservative wording over speculative interpretation.
Before finalizing each round, run a self-check:
- No unsupported factual claims.
- All metric numbers appear in source content or are removed.
- Limitations include at least one explicit validity threat.
- Reproducibility notes include what is known and unknown.
- JSON keys and Markdown section order are complete and stable.
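The last self-check item (complete and stable JSON keys and section order) is mechanical enough to sketch. The key and heading lists come from this spec; the `check_card` helper itself is an illustrative assumption.

```python
# Minimal self-check sketch: verify the card JSON has every required
# top-level key and the Markdown headings appear in the fixed order.
import json

REQUIRED_KEYS = [
    "paper_snapshot", "research_problem_and_motivation", "core_contributions",
    "method_overview", "experimental_setup", "main_results_and_evidence",
    "ablation_and_analysis_findings", "limitations_and_threats_to_validity",
    "reproducibility_notes", "open_questions_and_future_work", "figures",
]

SECTION_ORDER = [
    "Paper Snapshot", "Research Problem and Motivation", "Core Contributions",
    "Method Overview", "Experimental Setup", "Main Results and Evidence",
    "Ablation and Analysis Findings", "Limitations and Threats to Validity",
    "Reproducibility Notes", "Open Questions and Future Work",
]

def check_card(json_text, md_text):
    """Return a list of problems; an empty list means the card passes."""
    problems = []
    card = json.loads(json_text)
    for key in REQUIRED_KEYS:
        if key not in card:
            problems.append(f"missing JSON key: {key}")
    positions = [md_text.find(h) for h in SECTION_ORDER]
    if -1 in positions:
        problems.append("missing Markdown section heading")
    elif positions != sorted(positions):
        problems.append("Markdown sections out of fixed order")
    return problems
```

The other self-check items (unsupported claims, metric provenance, validity threats) require judgment and cannot be automated this way.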
Section Requirements (Detailed)
- Paper Snapshot
  - Include `title`, `paper_name`, venue/year (if detectable), page count, and figure count.
  - If the venue/year is uncertain, mark it as unknown.
- Research Problem and Motivation
  - State the task, the real gap in prior work, and why that gap matters.
  - Include scope boundaries if described by the authors.
- Core Contributions
  - List 2-5 explicit novelty points.
  - Each contribution must be independently understandable and non-redundant.
- Method Overview
  - Explain major components, data/model flow, and design rationale.
  - Avoid implementation-level noise unless necessary for understanding.
- Experimental Setup
  - Capture datasets, baselines, metrics, and protocol details present in the text.
  - Flag missing setup details that hurt comparability.
- Main Results and Evidence
  - Report the strongest outcomes with metrics when available.
  - Distinguish aggregate gains from per-dataset or per-metric gains.
- Ablation and Analysis Findings
  - Summarize what the ablation or analysis proves about component necessity.
  - If absent, explicitly say no dedicated ablation evidence was found.
- Limitations and Threats to Validity
  - Cover at least: data/benchmark bias risk, method assumptions, external validity risk.
  - Note whether each limitation is author-stated or analyst-inferred.
- Reproducibility Notes
  - Record code/data links, hyperparameter clues, missing artifacts, and reproducibility blockers.
  - State the expected effort/risk level for independent reproduction.
- Open Questions and Future Work
  - Provide 2-4 concrete research questions tied to observed evidence gaps.
  - Keep questions falsifiable and experiment-oriented.
Style Rules
- Use concise, factual scientific writing.
- Do not invent metrics, datasets, or claims not supported by the parsed text.
- Distinguish author claims from your assessment.
- Keep section order fixed for consistency across papers.
- Keep language precise, avoid hype words, and avoid absolute certainty unless directly supported.
JSON Shape
Use these top-level keys:
- `paper_snapshot`
- `research_problem_and_motivation`
- `core_contributions`
- `method_overview`
- `experimental_setup`
- `main_results_and_evidence`
- `ablation_and_analysis_findings`
- `limitations_and_threats_to_validity`
- `reproducibility_notes`
- `open_questions_and_future_work`
- `figures`

Store `paper-card.json` on every round, not only on request.
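A `paper-card.json` skeleton with the required top-level keys can be sketched as follows. The key names are fixed by this spec; the value types (strings, lists, nested snapshot fields) are assumptions for illustration.

```python
# Illustrative paper-card.json skeleton; only the key names are normative.
import json

card = {
    "paper_snapshot": {"title": "", "paper_name": "", "venue_year": "unknown",
                       "num_pages": None, "figure_count": None},
    "research_problem_and_motivation": "",
    "core_contributions": [],
    "method_overview": "",
    "experimental_setup": "",
    "main_results_and_evidence": "",
    "ablation_and_analysis_findings": "",
    "limitations_and_threats_to_validity": [],
    "reproducibility_notes": "",
    "open_questions_and_future_work": [],
    "figures": [],
}

with open("paper-card.json", "w", encoding="utf-8") as f:
    json.dump(card, f, indent=2, ensure_ascii=False)
```

Overwriting this file each round, rather than versioning it, matches the "overwrite current files" rule in the workflow; the revision history lives in `paper-card-feedback.md` instead.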