Search and retrieve relevant information from your indexed memory files using semantic queries and direct file reads for context.
Long-term memory via ChromaDB with local Ollama embeddings. Auto-recall injects relevant context every turn. No cloud APIs required — fully self-hosted.
Local semantic memory with Qdrant and Transformers.js. Store, search, and recall conversation context using vector embeddings (fully local, no API keys).
Configure OpenClaw memory search to use Ollama as the embeddings server (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp local GGUF loading. Includes interactive model selection.
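An OpenAI-compatible embeddings endpoint like the one this entry targets is just an HTTP POST; a minimal sketch of building and sending the request (the default Ollama port 11434 and the `nomic-embed-text` model name are assumptions, both configurable):

```python
import json
import urllib.request


def build_embedding_request(texts, base_url="http://localhost:11434/v1",
                            model="nomic-embed-text"):
    """Build the URL and JSON payload for an OpenAI-compatible /v1/embeddings call."""
    return f"{base_url}/embeddings", {"model": model, "input": texts}


def embed(texts):
    """POST the request and return one vector per input text."""
    url, payload = build_embedding_request(texts)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response shape follows the OpenAI embeddings spec:
    # {"data": [{"embedding": [...]}, ...]}
    return [item["embedding"] for item in body["data"]]
```

Any server that speaks the OpenAI embeddings format can be swapped in by changing `base_url` and `model`.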
Graph-native memory engine for AI agents — hybrid vector+keyword search, biological decay, Zettelkasten linking, trust-gated conflict resolution, and explainability.
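"Biological decay" in engines like this is commonly modeled as exponential down-weighting of a memory's retrieval score by age; a sketch under that assumption (the one-week half-life is an illustrative default, not this engine's documented value):

```python
import math


def decayed_score(base_score: float, age_hours: float,
                  half_life_hours: float = 168.0) -> float:
    """Down-weight a retrieval score exponentially with age.

    After one half-life the score is halved; touching a memory can reset
    its age, mimicking reinforcement of frequently used knowledge.
    """
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return base_score * decay
```

With the default half-life, a week-old memory scores half of a fresh one and a two-week-old memory a quarter.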
Local semantic memory with vector search and Transformers.js. Store, search, and recall conversation context using embeddings (fully local, no API keys).
Session-first memory curator for OpenClaw. Keeps RAM clean, recall precise, and durable knowledge safe.
Long-term structured memory with knowledge graph, entity tracking, temporal reasoning, and cross-session recall. Powered by the Cortex API.
Three-tier memory management system for an AI agent's short-term, mid-term, and long-term memory. Includes: (1) sliding-window short-term memory, (2) mid-term memory generated by automatic summarization, (3) …
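The first two tiers described above compose naturally: a fixed-size sliding window holds recent turns, and each evicted turn is handed to a summarizer that maintains the mid-term layer. A minimal sketch (the callback-based interface is an assumption for illustration):

```python
from collections import deque


class SlidingWindowMemory:
    """Short-term tier: keep the last `window` turns verbatim.

    Turns that fall out of the window are passed to `summarize`, a callback
    that folds them into mid-term memory (e.g. a running summary).
    """

    def __init__(self, window: int, summarize):
        self.turns = deque()
        self.window = window
        self.summarize = summarize  # callback: evicted turn -> None

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        while len(self.turns) > self.window:
            self.summarize(self.turns.popleft())
```

In a real system `summarize` would call an LLM; here it can be any function that accepts the evicted text.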
MCP server for Claude's 4-tier cognitive memory system — store, recall, search, and dream. Built on Supabase + pgvector with type-specific decay and Hebbian associations.
Structured AI agent memory with categorized storage, MD5 duplicate detection, consolidation, keyword recall, and export in Markdown or JSON formats.
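MD5 duplicate detection as described here typically hashes a normalized form of the text so trivially different copies collapse to one entry; a sketch (the lowercase + whitespace-collapse normalization is an assumed policy, not this plugin's documented one):

```python
import hashlib


class DedupStore:
    """Categorized memory store that skips exact duplicates.

    Each entry is fingerprinted by the MD5 of its normalized text
    (lowercased, whitespace-collapsed); repeats are rejected.
    """

    def __init__(self):
        self.entries = {}  # category -> list of stored texts
        self.seen = set()  # MD5 digests of normalized texts

    def add(self, category: str, text: str) -> bool:
        normalized = " ".join(text.lower().split())
        digest = hashlib.md5(normalized.encode()).hexdigest()
        if digest in self.seen:
            return False  # duplicate: not stored
        self.seen.add(digest)
        self.entries.setdefault(category, []).append(text)
        return True
```

MD5 is fine for dedup fingerprints (no adversary); a stricter system might also do near-duplicate detection, which a hash alone cannot provide.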
5-layer memory architecture for OpenClaw agents. Solves context bloat, the 48h fogging problem, and rule amnesia. Works for single-agent and multi-agent setups.
Persistent memory plugin for OpenClaw agents. Hybrid SQLite FTS5 keyword + Ollama vector semantic search with auto-capture, auto-recall, stuck-detection, and more.
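Hybrid keyword+vector setups like this one have to merge two ranked result lists; reciprocal rank fusion is a common, parameter-light way to do it (the k=60 constant is the conventional default, not necessarily what this plugin uses):

```python
def rrf_merge(keyword_ids, vector_ids, k=60):
    """Merge two ranked result lists with reciprocal rank fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked well by both retrievers rise to the top.
    """
    scores = {}
    for ranking in (keyword_ids, vector_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, merging `["a", "b", "c"]` (keyword) with `["b", "c", "d"]` (vector) puts `b` first, since it ranks highly in both lists.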
Automatically prune and compact agent memory files to prevent unbounded growth. Circular buffer for logs, importance-based retention for state, and configura...
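A pruning pass of this shape pairs a fixed-size circular buffer for logs with importance-ranked retention for state; a minimal sketch (the numeric `importance` field on state entries is an assumption for illustration):

```python
from collections import deque


def prune(log_lines, state_entries, max_log=1000, max_state=200):
    """Bound memory growth in one pass.

    Logs: circular buffer — only the newest `max_log` lines survive.
    State: keep the `max_state` entries with the highest importance.
    """
    logs = deque(log_lines, maxlen=max_log)  # older lines fall off the front
    kept = sorted(state_entries,
                  key=lambda e: e["importance"],
                  reverse=True)[:max_state]
    return list(logs), kept
```

Run periodically (or on file-size thresholds), this keeps both files at a configurable, constant ceiling.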
Store and semantically search text memories locally using Ollama with automatic management and optimization.
Implements a three-layer memory system with daily logs, curated long-term notes, and optional vector search for persistent, searchable agent context across sessions.
Agent memory with ALMA meta-learning, LLM fact extraction, and full-text search. Observer calls remote LLM APIs (OpenAI/Anthropic/Gemini). ALMA and Indexer run locally.
Local vector memory system with LanceDB + Pure JS embedding. No native modules or external APIs required.
Persistent memory for AI agents to store facts, learn from actions, recall information, and track entities across sessions.
Solve the "agent forgot everything" problem with search-first protocol, automated memory sync, and context preservation. No more conversation restarts!
Enable multiple agents to share, merge, and sync memories using standardized formats, priority rules, and Git-based version control for collective intelligence.
High-precision memory with 100% recall accuracy for long contexts.
Cross-group memory, search, and event sharing for OpenClaw Feishu agents.