Claw Self Improving Plus
Description
name: claw-self-improving-plus

description: Turn raw mistakes, corrections, discoveries, and repeated decisions into structured learnings and promotion candidates. Use when the user wants a conservative self-improvement workflow that captures lessons, scores reuse value, deduplicates similar learnings, drafts anchored candidate patches for SOUL.md, AGENTS.md, TOOLS.md, or MEMORY.md, reviews them through an approval step, and keeps human control before any long-term file edits.
Build a conservative learning pipeline. Optimize for signal, not clutter.
Core stance
Do not auto-rewrite long-term memory or behavior files by default.
Use this flow:
- Capture raw learning candidates.
- Normalize them into a structured schema.
- Score each item for promotion value.
- Detect duplicates or merge candidates.
- Consolidate repeated learnings into stronger records.
- Build a prioritized learning backlog.
- Draft anchored candidate patches.
- Review patches with human approval.
- Apply only approved patches.
Learning types
Use these types:
- mistake: the agent did something wrong
- correction: the user corrected a wrong assumption or behavior
- discovery: a useful fact about environment, tools, preferences, or workflow
- decision: a durable preference, policy, or chosen design
- regression: a known failure mode that should not recur
Minimal record schema
Store each learning candidate as JSON with these fields:
- id: stable slug or timestamped id
- timestamp
- source
- type
- summary
- details
- evidence
- confidence
- reuse_value
- impact_scope
- promotion_target_candidates
- status
- related_ids
Default enums:
- confidence: low|medium|high
- reuse_value: low|medium|high
- impact_scope: single-task|project|workspace|cross-session
- status: captured|scored|merged|promoted|rejected
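A complete record under this schema might look like the following sketch. The field names match the schema above; the values (id, summary, evidence) are invented for illustration.

```python
import json
from datetime import datetime, timezone

# Illustrative learning record; field names follow the schema above,
# values are hypothetical examples.
record = {
    "id": "2025-06-01-prefer-ruff",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "source": "user correction in chat",
    "type": "decision",
    "summary": "User prefers ruff over flake8 for linting",
    "details": "Stated explicitly after flake8 was suggested twice.",
    "evidence": ["session transcript"],
    "confidence": "high",
    "reuse_value": "high",
    "impact_scope": "workspace",
    "promotion_target_candidates": ["TOOLS.md"],
    "status": "captured",
    "related_ids": [],
}
print(json.dumps(record, indent=2))
```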
Routing rules
Promote by destination, not vibes:
- SOUL.md: durable style, personality, voice rules
- AGENTS.md: operating rules, workflows, safety/process lessons
- TOOLS.md: environment-specific commands, paths, model/tool preferences
- MEMORY.md: important long-term facts about user, projects, decisions, history
- daily/raw store only: low-confidence or highly local observations
If a learning does not clearly deserve promotion, keep it in the raw log.
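One way to express this routing as code: a type-to-file table plus a guard for low-confidence or highly local records. The file names match the destinations above; the mapping itself is an illustrative default, not the skill's fixed rule.

```python
# Hypothetical default routing by learning type. A real router
# would also weigh summary content, not just the type field.
ROUTES = {
    "mistake": ["AGENTS.md"],
    "regression": ["AGENTS.md"],
    "correction": ["AGENTS.md", "MEMORY.md"],
    "discovery": ["TOOLS.md", "MEMORY.md"],
    "decision": ["MEMORY.md", "SOUL.md"],
}

def promotion_targets(rec: dict) -> list[str]:
    """Return candidate target files, or [] to keep the record in the raw log."""
    if rec["confidence"] == "low" or rec["impact_scope"] == "single-task":
        return []  # low-confidence or highly local: raw log only
    return ROUTES.get(rec["type"], [])
```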
Scoring heuristic
Score each record on five dimensions:
- reuse_value: will this help again?
- confidence: how well supported is it?
- impact_scope: how broadly does it matter?
- promotion_worthiness: should it become a lasting rule or memory?
- promotion_target_candidates: where should it go if promoted?
Use this practical rubric:
- High promotion priority: repeated mistake, explicit user preference, environment fact that breaks tasks, regression with real cost
- Medium priority: useful workflow pattern seen more than once
- Low priority: one-off trivia, speculative interpretation, emotional noise, temporary state
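The rubric above can be sketched as a small rule-based prioritizer. This is an illustration of the heuristic, not the actual logic of scripts/score_learnings.py; the enum values match the schema defaults.

```python
# Illustrative bucketing of scored records; the real scoring
# script may use a different rubric.
def promotion_priority(rec: dict) -> str:
    """Bucket a record into high / medium / low promotion priority."""
    # High: clearly reusable and not weakly supported (repeated mistakes,
    # explicit preferences, environment facts that break tasks).
    if rec["reuse_value"] == "high" and rec["confidence"] != "low":
        return "high"
    # Medium: useful pattern with broad scope.
    if rec["reuse_value"] == "medium" and rec["impact_scope"] in ("workspace", "cross-session"):
        return "medium"
    # Low: one-off trivia, speculation, temporary state.
    return "low"
```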
Anchored patch generation
Prefer anchored insertion or exact replacement over blind append.
Each patch may contain:
- target_file
- anchor
- insert_mode
- old_text
- new_text
- suggested_entry
- approved
- review_status
Use exact replacement when the old text is known. Use anchored insertion when the destination section is known. Use append only as fallback.
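The preference order above can be sketched as a single function. This is a minimal illustration, assuming the patch fields listed earlier; the real apply script adds target validation, dry-run support, and reporting.

```python
# Minimal sketch of the patch preference order:
# exact replacement -> anchored insertion -> append fallback.
def apply_patch(text: str, patch: dict) -> str:
    """Apply one patch to a file's text, preferring the most precise mode."""
    old, new = patch.get("old_text"), patch["new_text"]
    if old and old in text:
        return text.replace(old, new, 1)            # exact replacement
    anchor = patch.get("anchor")
    if anchor and anchor in text:
        i = text.index(anchor) + len(anchor)        # insert right after the anchor
        return text[:i] + "\n" + new + text[i:]
    return text.rstrip("\n") + "\n" + new + "\n"    # append as last resort
```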
Learning store layout
Use a stable .learnings/ structure. See references/learning-store-layout.md.
Recommended files:
- .learnings/inbox.jsonl
- .learnings/scored.jsonl
- .learnings/merge.json
- .learnings/patches.json
- .learnings/apply-report.json
- .learnings/archive/
Default workflow
1. Capture
Append raw learnings into .learnings/inbox.jsonl.
Use scripts/capture_learning.py to create normalized records.
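At its core, capture is an append of one JSON object per line. The sketch below shows that shape, assuming the inbox path from the store layout; it is not the actual scripts/capture_learning.py.

```python
import json
from pathlib import Path

# Illustrative capture helper: one normalized record per JSONL line.
def capture(record: dict, inbox: str = ".learnings/inbox.jsonl") -> None:
    """Append one normalized record to the inbox as a single JSONL line."""
    path = Path(inbox)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```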
2. Score
Run scripts/score_learnings.py on the inbox or a batch export.
3. Review duplicates
Run scripts/merge_candidates.py to group likely duplicates.
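One simple duplicate heuristic is normalized string similarity between summaries. The sketch below uses the standard library's difflib; the actual merge script may group candidates differently.

```python
from difflib import SequenceMatcher

# Illustrative duplicate check; threshold is an assumed default.
def likely_duplicates(a: str, b: str, threshold: float = 0.8) -> bool:
    """Treat two summaries as merge candidates when their normalized
    character-level similarity crosses the threshold."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio() >= threshold
```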
4. Draft patches
Run scripts/draft_patches.py to produce anchored reviewable patch candidates.
5. Review
Use scripts/review_patches.py to list, approve, reject, or skip candidates.
Examples:
python scripts/review_patches.py .learnings/patches.json list
python scripts/review_patches.py .learnings/patches.json act --index 1 --action approve
python scripts/review_patches.py .learnings/patches.json act --index 2 --action reject --note "too vague"
6. Apply only after approval
Run scripts/apply_approved_patches.py.
It applies only entries that were explicitly approved.
It validates allowed target files, supports --dry-run, skips entries already present in the target, and prefers exact replacement, then anchored insertion, then append as a fallback.
Output style
When reporting results, use this structure:
- new_candidates: count
- high_priority: count
- merge_groups: count
- patch_candidates: short bullet list
- needs_human_review: yes
Resources
References
- Scoring rubric: see references/scoring-rubric.md
- Patch target guide: see references/promotion-targets.md
- Learning store layout: see references/learning-store-layout.md
Scripts
- scripts/capture_learning.py
- scripts/score_learnings.py
- scripts/merge_candidates.py
- scripts/draft_patches.py
- scripts/detect_patch_conflicts.py
- scripts/consolidate_learnings.py
- scripts/build_backlog.py
- scripts/age_backlog.py
- scripts/review_backlog.py
- scripts/check_existing_promotions.py
- scripts/review_patches.py
- scripts/render_review.py
- scripts/apply_approved_patches.py
- scripts/archive_batch.py
- scripts/run_pipeline.py