Strict Self-Improving Agent (Rule of 3)
Description

```yaml
name: self-improvement
description: "Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks."
```
🦾 Self-Improving Agent (Strict Promotion Protocol)
"Stop letting your AI agent pollute its own core instructions based on subjective feelings."
This is an industrial-grade, closed-loop Self-Improvement System for OpenClaw. It replaces the default subjective "agent feelings" with a rigid, quantitative pipeline: The Rule of 3.
Instead of bloating your `SOUL.md` or `AGENTS.md` with every random bug the agent encounters, this skill has the agent merely log errors as `pending`. Only when an error recurs 3 times (linked via See Also) does it become eligible for promotion.
Includes a built-in bash script (`promote-review.sh`) to aggregate pending promotions for human approval.
🌟 Why install this?
- Zero Context Bloat: Keeps your `SOUL.md` and 200K window pristine.
- Rule of 3 (Quantitative Trigger): Issues must happen 3 times before they can be considered a "Rule".
- Strict Domain Governance: Mutually exclusive rules for what goes into SOUL (behavior), AGENTS (workflow), and TOOLS (CLI gotchas).
- Human-in-the-Loop Review: Batch processing of promotions via a weekly aggregator script.
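As a rough illustration of the aggregation step that `promote-review.sh` performs (the bundled script may differ), collecting review candidates can be as simple as one grep over the core logs:

```bash
# Illustrative only: list every entry waiting for human review,
# with file name and line number for quick navigation.
grep -n "Status\*\*: ready_for_promotion" memory/core/*.md
```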
🚀 Quick Setup (OpenClaw)
This skill is designed natively for the OpenClaw architecture.
Via ClawdHub CLI (1-Click Install):
```bash
clawhub install self-improving-agent
```
Workspace Structure
OpenClaw injects these files into every session:
```
~/.openclaw/workspace/
├── AGENTS.md     # Multi-agent workflows, delegation patterns
├── SOUL.md       # Behavioral guidelines, personality, principles
├── TOOLS.md      # Tool capabilities, integration gotchas
├── MEMORY.md     # Long-term memory (main session only)
└── memory/       # Memory System (v4.0)
    ├── daily_raw/
    ├── summaries/
    ├── projects/
    └── core/     # This skill's log files
        ├── learning.md
        ├── error.md
        └── features.md
```
Create Learning Files
```bash
mkdir -p ~/.openclaw/workspace/memory/core
```
Then create the log files (or copy from assets/):
- `learning.md` — corrections, knowledge gaps, best practices
- `error.md` — command failures, exceptions
- `features.md` — user-requested capabilities
Promotion Targets
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---|---|---|
| Behavioral patterns | `SOUL.md` | "Be concise, avoid disclaimers" |
| Workflow improvements | `AGENTS.md` | "Spawn sub-agents for long tasks" |
| Tool gotchas | `TOOLS.md` | "Git push needs auth configured first" |
Inter-Session Communication
OpenClaw provides tools to share learnings across sessions:
- `sessions_list` — View active/recent sessions
- `sessions_history` — Read another session's transcript
- `sessions_send` — Send a learning to another session
- `sessions_spawn` — Spawn a sub-agent for background work
Optional: Enable Hook
For automatic reminders at session start:
```bash
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement

# Enable it
openclaw hooks enable self-improvement
```
See references/openclaw-integration.md for complete details.
Generic Setup (Other Agents)
For Claude Code, Codex, Copilot, or other agents, create memory/core/ in your project:
```bash
mkdir -p memory/core
```
Copy templates from assets/ or create files with headers.
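If you skip the templates, a few lines of shell are enough to seed the three logs with headers (the header text here is just a placeholder):

```bash
mkdir -p memory/core
for f in learning error features; do
  # Create each log with a minimal title header if it doesn't exist yet.
  [ -f "memory/core/$f.md" ] || printf '# %s log\n\n' "$f" > "memory/core/$f.md"
done
```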
Logging Format
Learning Entry
Append to memory/core/learning.md:
```markdown
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)

---
```
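Appending is plain file I/O. For instance, an agent (or you) can add a complete entry from the shell; the content below is a made-up example following the template above:

```bash
cat >> memory/core/learning.md <<'EOF'
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T10:30:00Z
**Priority**: medium
**Status**: pending
**Area**: backend

### Summary
API pagination uses cursor tokens, not page numbers

### Details
Assumed ?page=2 style pagination; the API actually returns a next_cursor
field that must be passed back as ?cursor=<token>.

### Suggested Action
Use cursor-based pagination helpers for this API

### Metadata
- Source: error
- Related Files: src/api/client.ts
- Tags: api, pagination

---
EOF
```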
Error Entry
Append to memory/core/error.md:
```markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)

---
```
Feature Request Entry
Append to memory/core/features.md:
```markdown
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name

---
```
ID Generation
Format: `TYPE-YYYYMMDD-XXX`
- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or three random characters (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
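A sketch of how the sequential variant might be generated in shell, assuming entry headers follow the `## [TYPE-YYYYMMDD-XXX]` format above:

```bash
# Generate the next sequential ID for a given type and today's date.
type="ERR"                                  # LRN | ERR | FEAT
today=$(date +%Y%m%d)
# Count today's existing entries of this type across all core logs.
existing=$(grep -h "^## \[$type-$today-" memory/core/*.md 2>/dev/null | wc -l)
printf '%s-%s-%03d\n' "$type" "$today" "$((existing + 1))"  # e.g. ERR-20250115-001
```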
Resolving Entries
When an issue is fixed, update the entry:
1. Change `**Status**: pending` to `**Status**: resolved`
2. Add a resolution block after Metadata:

```markdown
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```
Other status values:
- `in_progress` — actively being worked on
- `wont_fix` — decided not to address (add reason in Resolution notes)
- `promoted` — elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
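Status flips can be done by hand, or with a substitution scoped to a single entry. A sketch using GNU sed, assuming the entry ID is unique and `---` terminates each entry:

```bash
# Flip one entry from pending to resolved without touching its neighbours.
sed -i '/\[LRN-20250115-001\]/,/^---$/ s/\*\*Status\*\*: pending/**Status**: resolved/' \
  memory/core/learning.md
```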
Strict Promotion Protocol
CRITICAL RULE: The main agent is explicitly FORBIDDEN from writing directly to `SOUL.md`, `AGENTS.md`, or `TOOLS.md` during normal tasks. All new knowledge must be recorded in `memory/core/*.md` first.
When to Trigger Promotion (Rule of 3)
We utilize a strict quantitative counter rather than subjective feelings.
1. Log the Issue: If an issue occurs, log it to `memory/core/` (status: `pending`).
2. Search Historical Logs: ALWAYS search `memory/core/` for similar past issues using `grep`.
3. Trigger Threshold: ONLY if the current issue links to 3 or more existing entries via `**See Also**` do you change the current entry to `**Status**: ready_for_promotion`.
4. Wait for Review: Do not write to workspace files. Leave the entry as `ready_for_promotion` for the weekly human review pipeline.
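The threshold check itself is mechanical. A hypothetical helper (not part of the skill) that counts how often an entry ID appears in `See Also` lines:

```bash
#!/usr/bin/env bash
# Count See Also references to a given entry ID across all core logs.
id="${1:?usage: $0 <entry-id>, e.g. ERR-20250110-001}"

links=$(grep -h "See Also" memory/core/*.md | grep -c "$id")

if [ "$links" -ge 3 ]; then
  echo "$id: $links links -> eligible for ready_for_promotion"
else
  echo "$id: $links links -> stays pending"
fi
```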
Strict Domain Separation (Target Governance)
If the weekly review script approves a promotion, it must strictly follow these mutually exclusive rules:
| Target | Absolute Membership Criteria |
|---|---|
| `SOUL.md` | Strictly behavioral. Only modify if changing the agent's core persona, communication tone, or adding global un-overrideable safety bans (e.g., "NEVER use cat to modify files"). |
| `AGENTS.md` | Strictly workflow orchestration. Only modify if detailing the mandatory sequence of operations (e.g., "always clear cache before building") or sub-agent delegation chains. |
| `TOOLS.md` | Strictly 3rd-party tool quirks. Only modify if documenting a specific bug or required flag for an external CLI or API (e.g., "jq requires -r for raw output"). |
| `skills/` dir | Strictly multi-step autonomous logic. If an issue requires 3+ steps to reliably fix (such as a full diagnosis script), DO NOT put it in SOUL/AGENTS. Run the `extract-skill.sh` script and make it a dedicated Skill. |
Promotion Examples
Learning (accumulated 3+ times):

> Project uses pnpm workspaces. Attempted `npm install`, but it failed 3 times across sessions.

In AGENTS.md (after a human approves the `ready_for_promotion` entry):

```markdown
## Build Protocol
- MANDATORY: Use `pnpm install`, NOT npm, for workspace root syncing.
```
Recurring Pattern Detection
If logging something similar to an existing entry:
1. Search first: `grep -r "keyword" memory/core/`
2. Link entries: Add `**See Also**: ERR-20250110-001` in Metadata
3. Bump priority if the issue keeps recurring
4. Consider a systemic fix. Recurring issues often indicate:
   - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
   - Missing automation (→ add to AGENTS.md)
   - An architectural problem (→ create a tech debt ticket)
Periodic Review
Review memory/core/ at natural breakpoints:
When to Review
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
Quick Status Check
```bash
# Count pending items
grep -h "Status\*\*: pending" memory/core/*.md | wc -l

# List pending high-priority items
grep -B5 "Priority\*\*: high" memory/core/*.md | grep "^## \["

# Find learnings for a specific area
grep -l "Area\*\*: backend" memory/core/*.md
```
Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
Detection Triggers
Automatically log when you notice:
Corrections (→ learning with correction category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
Feature Requests (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
Knowledge Gaps (→ learning with knowledge_gap category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
Errors (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
Priority Guidelines
| Priority | When to Use |
|---|---|
| `critical` | Blocks core functionality, data loss risk, security issue |
| `high` | Significant impact, affects common workflows, recurring issue |
| `medium` | Moderate impact, workaround exists |
| `low` | Minor inconvenience, edge case, nice-to-have |
Area Tags
Use to filter learnings by codebase region:
| Area | Scope |
|---|---|
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |
Best Practices
- Log immediately - context is freshest right after the issue
- Be specific - future agents need to understand quickly
- Include reproduction steps - especially for errors
- Link related files - makes fixes easier
- Suggest concrete fixes - not just "investigate"
- Use consistent categories - enables filtering
- Log aggressively - if in doubt, record the entry in memory/core/; promotion still follows the Rule of 3
- Review regularly - stale learnings lose value
Gitignore Options
Keep learnings local (per-developer):

```gitignore
memory/core/
```

Track learnings in repo (team-wide): don't add anything to .gitignore; learnings become shared knowledge.

Hybrid (track templates, ignore entries):

```gitignore
memory/core/*.md
!memory/core/memory-system-design.md
```
Hook Integration
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.
Quick Setup (Claude Code / Codex)
Create `.claude/settings.json` in your project:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}
```
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
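The real `activator.sh` ships with the skill; a minimal stand-in to show the mechanism (whatever a UserPromptSubmit hook prints to stdout is injected as context) might look like:

```bash
#!/usr/bin/env bash
# Minimal stand-in for scripts/activator.sh (illustrative, not the shipped script).
pending=$(grep -h "Status\*\*: pending" memory/core/*.md 2>/dev/null | wc -l)
echo "Self-improvement: ${pending} pending entries in memory/core/."
echo "After this task, evaluate whether a learning, error, or feature should be logged."
```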
Full Setup (With Error Detection)
```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}
```
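`error-detector.sh` is likewise bundled. As a sketch of the pattern (PostToolUse hooks receive a JSON payload describing the tool call on stdin; the exact fields vary by agent, so the matching below is an assumption):

```bash
#!/usr/bin/env bash
# Illustrative stand-in for scripts/error-detector.sh.
payload=$(cat)   # PostToolUse payload arrives on stdin
# Naive failure heuristic: look for error-ish text in the tool result.
if echo "$payload" | grep -qiE '"error"|non-zero|command not found|traceback'; then
  echo "A Bash command appears to have failed. Consider logging an ERR entry to memory/core/error.md."
fi
```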
Available Hook Scripts
| Script | Hook Type | Purpose |
|---|---|---|
| `scripts/activator.sh` | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| `scripts/error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |
See references/hooks-setup.md for detailed configuration and troubleshooting.
Automatic Skill Extraction
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
Skill Extraction Criteria
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|---|---|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
Extraction Workflow
1. Identify candidate: the learning meets the extraction criteria
2. Run the helper (or create it manually):
   ```bash
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
   ```
3. Customize SKILL.md: fill in the template with the learning content
4. Update the learning: set status to `promoted_to_skill` and add `Skill-Path`
5. Verify: read the skill in a fresh session to ensure it is self-contained
Manual Extraction
If you prefer manual creation:
1. Create `skills/<skill-name>/SKILL.md`
2. Use the template from `assets/SKILL-TEMPLATE.md`
3. Follow the Agent Skills spec:
   - YAML frontmatter with `name` and `description`
   - Name must match the folder name
   - No README.md inside the skill folder
Extraction Detection Triggers
Watch for these signals that a learning should become a skill:
In conversation:
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
In learning entries:
- Multiple `See Also` links (recurring issue)
- High priority + resolved status
- Category: `best_practice` with broad applicability
- User feedback praising the solution
Skill Quality Gates
Before extraction, verify:
- Solution is tested and working
- Description is clear without original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)
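Most of these gates can be spot-checked mechanically. A hypothetical pre-extraction lint (script name and exact frontmatter checks are illustrative):

```bash
#!/usr/bin/env bash
# Hypothetical pre-extraction checks for a new skill folder.
name="${1:?usage: $0 <skill-name>}"
file="skills/$name/SKILL.md"
[ -f "$file" ] || { echo "FAIL: $file not found"; exit 1; }

# Naming convention: lowercase words separated by hyphens.
[[ "$name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]] || echo "FAIL: name must be lowercase-with-hyphens"
# Required frontmatter fields; name must match the folder.
grep -q "^name: $name" "$file"    || echo "FAIL: frontmatter name must match folder"
grep -q "^description:" "$file"   || echo "FAIL: missing description in frontmatter"
# No README.md inside the skill folder.
[ ! -f "skills/$name/README.md" ] || echo "FAIL: remove README.md from skill folder"
```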
Multi-Agent Support
This skill works across different AI coding agents with agent-specific activation.
Claude Code
Activation: Hooks (UserPromptSubmit, PostToolUse)
Setup: .claude/settings.json with hook configuration
Detection: Automatic via hook scripts
Codex CLI
Activation: Hooks (same pattern as Claude Code)
Setup: .codex/settings.json with hook configuration
Detection: Automatic via hook scripts
GitHub Copilot
Activation: Manual (no hook support)
Setup: Add to .github/copilot-instructions.md:
```markdown
## Self-Improvement
After solving non-obvious issues, consider logging to `memory/core/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills

Ask in chat: "Should I log this as a learning?"
```
Detection: Manual review at session end
OpenClaw
Activation: Workspace injection + inter-agent messaging
Setup: See the "Quick Setup (OpenClaw)" section above
Detection: Via session tools and workspace files
Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
- Discover something non-obvious - solution wasn't immediate
- Correct yourself - initial approach was wrong
- Learn project conventions - discovered undocumented patterns
- Hit unexpected errors - especially if diagnosis was difficult
- Find better approaches - improved on your original solution
Copilot Chat Integration
For Copilot users, add this to your prompts when relevant:
> After completing this task, evaluate whether any learnings should be logged to `memory/core/` using the self-improvement skill format.
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check memory/core/ for related issues"