🧠 BrainX V5 — The First Brain for OpenClaw
Vector memory engine with PostgreSQL + pgvector + OpenAI embeddings. Store, search, and inject contextual memories into LLM prompts. Includes auto-injection hook for OpenClaw and full backup/recovery system.
Description

```yaml
name: brainx
description: |
  Vector memory engine with PostgreSQL + pgvector + OpenAI embeddings.
  Store, search, and inject contextual memories into LLM prompts.
  Includes auto-injection hook for OpenClaw and full backup/recovery system.
metadata:
  openclaw:
    emoji: "🧠"
    requires:
      bins: ["psql"]
      env: ["DATABASE_URL", "OPENAI_API_KEY"]
      primaryEnv: "DATABASE_URL"
    hooks:
      - name: brainx-auto-inject
        event: agent:bootstrap
        description: Auto-injects relevant memories on agent session start
        user-invocable: true
```
BrainX V5 — Vector Memory for OpenClaw
Persistent memory system using vector embeddings for contextual retrieval in AI agents.
When to Use
✅ USE when:
- An agent needs to "remember" information from previous sessions
- You want to give the LLM additional context about past actions
- You need semantic content search
- You want to store important decisions with metadata
❌ DON'T USE when:
- The information is ephemeral and doesn't need persistence
- The data is structured and tabular (use a regular DB)
- You only need a simple cache (use Redis or in-memory)
Auto-Injection (Hook)
BrainX V5 includes an OpenClaw hook that automatically injects relevant memories when an agent starts:
How It Works:
1. `agent:bootstrap` event → Hook runs automatically
2. Queries PostgreSQL → Gets recent hot/warm memories
3. Generates file → Creates `BRAINX_CONTEXT.md` in the workspace
4. Agent reads → File loads as initial context
Configuration:
In `~/.openclaw/openclaw.json`:
{
"hooks": {
"internal": {
"enabled": true,
"entries": {
"brainx-auto-inject": {
"enabled": true,
"limit": 5,
"tier": "hot+warm",
"minImportance": 5
}
}
}
}
}
Per Agent:
Add to AGENTS.md in each workspace:
## Every Session
1. Read `SOUL.md`
2. Read `USER.md`
3. Read `brainx.md`
4. Read `BRAINX_CONTEXT.md` ← Auto-injected context
Available Tools
brainx_add_memory
Stores a memory in the vector brain.
Parameters:
- `content` (required) — Memory text
- `type` (optional) — Type: note, decision, action, learning (default: note)
- `context` (optional) — Namespace/scope
- `tier` (optional) — Priority: hot, warm, cold, archive (default: warm)
- `importance` (optional) — Importance 1-10 (default: 5)
- `tags` (optional) — Comma-separated tags
- `agent` (optional) — Agent name creating the memory
Example:
brainx add --type decision --content "Use embeddings 3-small to reduce costs" --tier hot --importance 9 --tags config,openai
brainx_search
Searches memories by semantic similarity.
Parameters:
- `query` (required) — Search text
- `limit` (optional) — Number of results (default: 10)
- `minSimilarity` (optional) — Threshold 0-1 (default: 0.3)
- `minImportance` (optional) — Importance filter 0-10
- `tier` (optional) — Tier filter
- `context` (optional) — Exact context filter
Example:
brainx search --query "API configuration" --limit 5 --minSimilarity 0.5
Returns: JSON with results.
brainx_inject
Gets formatted memories ready for direct LLM prompt injection.
Parameters:
- `query` (required) — Search text
- `limit` (optional) — Number of results (default: 10)
- `minImportance` (optional) — Importance filter
- `tier` (optional) — Tier filter (default: hot+warm)
- `context` (optional) — Context filter
- `maxCharsPerItem` (optional) — Truncate content (default: 2000)
Example:
brainx inject --query "what decisions were made about openai" --limit 3
Returns: Formatted text ready for injection:
[sim:0.82 imp:9 tier:hot type:decision agent:coder ctx:openclaw]
Use embeddings 3-small to reduce costs...
---
[sim:0.71 imp:8 tier:hot type:decision agent:support ctx:brainx]
Create SKILL.md for OpenClaw integration...
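The bracketed-header format shown above can be reproduced with a small formatter. The following is an illustrative sketch, not BrainX's actual code; the `Memory` shape is an assumption chosen to match the fields visible in the sample output:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    # Fields mirror the header shown in the sample inject output (assumed shape)
    content: str
    similarity: float
    importance: int
    tier: str
    type: str
    agent: str
    context: str

def format_for_injection(memories, max_chars_per_item=2000):
    """Render memories in the bracketed-header format used by `brainx inject`."""
    blocks = []
    for m in memories:
        header = (f"[sim:{m.similarity:.2f} imp:{m.importance} "
                  f"tier:{m.tier} type:{m.type} agent:{m.agent} ctx:{m.context}]")
        body = m.content[:max_chars_per_item]  # honor maxCharsPerItem truncation
        blocks.append(f"{header}\n{body}")
    return "\n---\n".join(blocks)  # items separated by --- as in the sample
```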
brainx_health
Checks that BrainX is operational.
Parameters: none
Example:
brainx health
Returns: PostgreSQL + pgvector connection status.
Backup and Recovery
Create Backup
./scripts/backup-brainx.sh ~/backups
Creates `brainx-v5_backup_YYYYMMDD_HHMMSS.tar.gz` containing:
- Full PostgreSQL database (SQL dump)
- OpenClaw configuration (hooks, .env)
- Skill files
- Workspace documentation
Restore Backup
./scripts/restore-brainx.sh backup.tar.gz --force
Fully restores BrainX V5 including:
- All memories (with embeddings)
- Hook configuration
- Environment variables
Full Documentation
See RESILIENCE.md for:
- Complete disaster scenarios
- Migration to new VPS
- Troubleshooting
- Automatic backup configuration
Configuration
Environment Variables
# Required
DATABASE_URL=postgresql://user:pass@host:5432/brainx_v5
OPENAI_API_KEY=sk-...
# Optional
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
OPENAI_EMBEDDING_DIMENSIONS=1536
BRAINX_INJECT_DEFAULT_TIER=hot+warm
BRAINX_INJECT_MAX_CHARS_PER_ITEM=2000
BRAINX_INJECT_MAX_LINES_PER_ITEM=80
Database Setup
# Schema is in ~/.openclaw/skills/brainx-v5/sql/
# Requires PostgreSQL with pgvector extension
psql $DATABASE_URL -f ~/.openclaw/skills/brainx-v5/sql/v3-schema.sql
Direct Integration
You can also use the unified wrapper that reads the API key from OpenClaw:
cd ~/.openclaw/skills/brainx-v5
./brainx add --type note --content "test"
./brainx search --query "test"
./brainx inject --query "test"
./brainx health
Compatibility: `./brainx-v5` and `./brainx-v5-cli` also work as aliases for the main wrapper.
Advisory System (Pre-Action Check)
BrainX includes an advisory system that queries relevant memories, trajectories, and recurring patterns before executing high-risk tools. This helps agents avoid repeating past mistakes.
High-Risk Tools
The following tools automatically trigger advisory checks: exec, deploy, railway, delete, rm, drop, git push, git force-push, migration, cron, message send, email send.
CLI Usage
# Check for advisories before a tool execution
./brainx-v5 advisory --tool exec --args '{"command":"rm -rf /tmp/old"}' --agent coder --json
# Quick check via helper script
./scripts/advisory-check.sh exec '{"command":"rm -rf /tmp/old"}' coder
Agent Integration (Manual)
Since `agent:bootstrap` is the only supported hook event, agents should call `brainx advisory` manually before high-risk tools:
# In agent SKILL.md or AGENTS.md, add:
# Before exec/deploy/delete/migration, run:
cd ~/.openclaw/skills/brainx-v5 && ./scripts/advisory-check.sh <tool> '<args_json>' <agent>
The advisory returns relevant memories, similar past problem→solution paths, and recurring patterns with a confidence score. It's informational — never blocking.
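An agent-side wrapper might run the helper script and parse its JSON before a risky command. This is a hedged sketch: the field names `confidence` and `advisories` are assumptions about the `--json` payload, not the documented output shape.

```python
import json
import subprocess

def parse_advisory(raw: str, min_confidence: float = 0.5):
    """Return advisory entries worth surfacing, or [] (advisories never block).

    `confidence` and `advisories` are assumed field names; adjust them to
    match the real `--json` output of `brainx advisory`.
    """
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return []  # informational only: never fail the action on bad output
    if payload.get("confidence", 0) < min_confidence:
        return []
    return payload.get("advisories", [])

def check_before_tool(tool: str, args_json: str, agent: str):
    """Invoke the helper script and parse whatever JSON it prints."""
    result = subprocess.run(
        ["./scripts/advisory-check.sh", tool, args_json, agent],
        capture_output=True, text=True,
    )
    return parse_advisory(result.stdout)
```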
Agent-Aware Hook Injection
The agent:bootstrap hook now uses agent profiles (hook/agent-profiles.json) to customize memory injection per agent:
- coder: Boosts gotcha/error/learning memories; filters by infrastructure/code/deploy/github contexts
- writer: Boosts decision/learning; filters by content/seo/marketing contexts
- monitor: Boosts gotcha/error; filters by infrastructure/health/monitoring
- echo: No filtering (default behavior)
Agents not listed in the profiles file get the default unfiltered injection. Edit hook/agent-profiles.json to add new agent profiles.
EIDOS Loop (Prediction → Evaluation → Learning)
BrainX V5 includes an EIDOS cycle for adaptive learning:
- Predict — Record a prediction before an action
- Evaluate — Compare prediction against actual outcome
- Distill — Extract learnings from evaluation patterns
- Stats — Track prediction accuracy over time
./brainx-v5 eidos predict --prediction "Deploy will succeed" --context deploy --agent coder
./brainx-v5 eidos evaluate --id <cycle_id> --outcome "success" --evaluation "Prediction correct"
./brainx-v5 eidos distill
./brainx-v5 eidos stats
Memory Consolidation
Clusters semantically similar memories (>0.85 similarity) and merges them via heuristic consolidation, superseding originals. Reduces noise and duplication.
./scripts/memory-consolidator.js --dry-run # Preview clusters
./scripts/memory-consolidator.js --limit 50 # Process 50 clusters
./scripts/memory-consolidator.js --threshold 0.90 # Stricter similarity
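The clustering step can be sketched as a greedy pass over pairwise cosine similarity. This is illustrative only; the actual logic lives in `scripts/memory-consolidator.js` and may cluster differently:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_clusters(embeddings, threshold=0.85):
    """Group vectors whose similarity to a cluster's seed exceeds the threshold.

    Each cluster is a list of indices; the first index is the seed. Clusters
    with more than one member would be merged and the originals superseded.
    """
    clusters = []
    for i, vec in enumerate(embeddings):
        for cluster in clusters:
            if cosine(vec, embeddings[cluster[0]]) > threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])  # no close cluster: start a new one
    return clusters
```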
Auto-Distillation
Processes OpenClaw session logs to automatically generate high-quality memories from error-fix sequences, explicit decisions, repeated failures, and complex sessions.
./scripts/auto-distiller.js --dry-run # Preview candidates
./scripts/auto-distiller.js # Process and store
Notes
- Memories are stored with vector embeddings (1536 dimensions)
- Search uses cosine similarity
- `inject` is the most useful tool for giving LLMs context
- Tier hot = quick access, cold/archive = long-term storage
- Memories are persistent in PostgreSQL (independent of OpenClaw)
- Auto-injection hook runs on every `agent:bootstrap`
- Pre-storage quality gate rejects noise (<20 chars, known patterns like "ok", "HEARTBEAT_OK")
- Enhanced scoring integrates feedback_score, confidence_score, and temporal decay
- HNSW indexing for fast vector search
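The enhanced scoring mentioned above could combine similarity, feedback, and confidence with temporal decay along these lines. The weights and half-life below are made-up illustrations, not BrainX's actual constants:

```python
def effective_score(similarity, feedback_score, confidence_score,
                    age_days, half_life_days=30.0):
    """Blend similarity with feedback/confidence and decay older memories.

    The 0.6/0.2/0.2 weights and 30-day half-life are illustrative
    assumptions; the real formula lives in BrainX's scoring code.
    """
    decay = 0.5 ** (age_days / half_life_days)  # exponential temporal decay
    base = 0.6 * similarity + 0.2 * feedback_score + 0.2 * confidence_score
    return base * decay
```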
Feature Status (Tables)
✅ All Operational
| Table | Function | Status |
|---|---|---|
| `brainx_memories` | Core: stores memories with embeddings | ✅ Active (1800+) |
| `brainx_query_log` | Search/inject query tracking | ✅ Active |
| `brainx_pilot_log` | Agent auto-inject tracking | ✅ Active |
| `brainx_context_packs` | Pre-generated context packages | ✅ Active |
| `brainx_patterns` | Detects recurring errors/issues | ✅ Active |
| `brainx_session_snapshots` | Captures state at session close | ✅ Active |
| `brainx_learning_details` | Extended metadata for learning/gotcha memories | ✅ Active |
| `brainx_trajectories` | Records problem→solution paths | ✅ Active |
| `brainx_advisories` | Advisory pre-action checks | ✅ Active |
| `brainx_eidos_cycles` | EIDOS prediction-evaluation cycles | ✅ Active |
| `brainx_distillation_log` | Auto-distillation tracking | ✅ Active |
| `brainx_schema_version` | Schema version tracking | ✅ Active |
12/12 tables operational. Schema version: V5.
Complete Feature Inventory (35+)
CLI Core (brainx <cmd>)
| # | Command | Function |
|---|---|---|
| 1 | `add` | Store memory (7 types, 20+ categories, V5 metadata) |
| 2 | `search` | Semantic search by cosine similarity |
| 3 | `inject` | Formatted memories for LLM prompt injection |
| 4 | `fact` / `facts` | Shortcut to store/list infrastructure facts |
| 5 | `resolve` | Mark pattern as resolved/promoted/wont_fix |
| 6 | `promote-candidates` | Detect memories eligible for promotion |
| 7 | `lifecycle-run` | Degrade/promote memories by age/usage |
| 8 | `metrics` | Metrics dashboard and top patterns |
| 9 | `doctor` | Full diagnostic (schema, integrity, stats) |
| 10 | `fix` | Auto-repair issues detected by doctor |
| 11 | `feedback` | Mark memory as useful/useless/incorrect |
| 12 | `health` | PostgreSQL + pgvector connection status |
| 13 | `advisory` | Pre-action advisory check |
| 14 | `eidos` | EIDOS prediction-evaluation cycle |
Processing Scripts (scripts/)
| # | Script | Function |
|---|---|---|
| 15 | `memory-bridge.js` | Syncs memory between sessions/agents |
| 16 | `memory-distiller.js` | Distills sessions into new memories |
| 17 | `session-harvester.js` | Harvests info from past sessions |
| 18 | `session-snapshot.js` | Captures state at session close |
| 19 | `pattern-detector.js` | Detects recurring errors/issues |
| 20 | `learning-detail-extractor.js` | Extracts metadata from learnings/gotchas |
| 21 | `trajectory-recorder.js` | Records problem→solution paths |
| 22 | `fact-extractor.js` | Extracts facts from conversations |
| 23 | `contradiction-detector.js` | Detects contradicting memories |
| 24 | `cross-agent-learning.js` | Shares learnings across agents |
| 25 | `quality-scorer.js` | Scores memory quality |
| 26 | `context-pack-builder.js` | Generates pre-built context packages |
| 27 | `reclassify-memories.js` | Reclassifies memories with correct types/categories |
| 28 | `cleanup-low-signal.js` | Cleans up low-value memories |
| 29 | `dedup-supersede.js` | Detects and marks duplicates |
| 30 | `eval-memory-quality.js` | Evaluates dataset quality |
| 31 | `generate-eval-dataset-from-memories.js` | Generates evaluation dataset |
| 32 | `memory-feedback.js` | Per-memory feedback system |
| 33 | `import-workspace-memory-md.js` | Imports from workspace MEMORY.md files |
| 34 | `auto-distiller.js` | Auto-distills session logs into memories |
| 35 | `memory-consolidator.js` | Clusters and merges similar memories |
| 36 | `advisory-check.sh` | Quick advisory check helper |
Hooks and Infrastructure
| # | Component | Function |
|---|---|---|
| 37 | `brainx-auto-inject` | Auto-injection hook at each agent bootstrap |
| 38 | `backup-brainx.sh` | Full backup (DB + config + skills) |
| 39 | `restore-brainx.sh` | Full restoration from backup |
V5 Metadata
- `sourceKind` — Origin: user_explicit, agent_inference, tool_verified, llm_distilled, consolidated, etc.
- `sourcePath` — Source file/URL
- `confidence` — Score 0-1
- `expiresAt` — Automatic expiration
- `sensitivity` — normal/sensitive/restricted
- Automatic PII scrubbing (`BRAINX_PII_SCRUB_ENABLED`)
- Similarity-based dedup (`BRAINX_DEDUPE_SIM_THRESHOLD`)