Financial management for AI agents. Track LLM inference costs, record confirmed income, manage multi-provider crypto wallets, and compute a Financial Health score.
Isolated agent runtime for code execution, live preview URLs, browser automation, 50+ tools (ffmpeg, sqlite, pandoc, imagemagick), LLM inference, and persistent memory — all via CLI or HTTP, no SDK
Smart AI router — automatically picks the best & cheapest LLM for every prompt. BYOK, tracks costs, enforces budgets. Supports 25+ providers.
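The "best & cheapest" selection above can be sketched as cost-based routing: given a capability requirement, pick the cheapest model that meets it. This is a hypothetical illustration — the model names, tiers, and prices below are made up, not the router's real catalog or rates.

```python
# Hypothetical cost-based routing sketch. Tiers and per-million-token
# prices are illustrative values, not real provider pricing.
PRICES = {
    "small-model": (1, 0.10),     # (capability tier, USD per 1M input tokens)
    "mid-model": (2, 1.00),
    "frontier-model": (3, 10.00),
}

def route(required_tier: int) -> str:
    """Return the cheapest model whose tier meets the requirement."""
    candidates = [(price, name) for name, (tier, price) in PRICES.items()
                  if tier >= required_tier]
    if not candidates:
        raise ValueError("no model meets the requirement")
    return min(candidates)[1]     # min by price
```

A real router would also weigh latency, context length, and BYOK availability per provider.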
Search and retrieve markdown documents from local knowledge bases using qmd. Supports BM25 keyword search, vector semantic search, and hybrid search with LLM re-ranking. Use for querying indexed notes.
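One common way to combine a BM25 ranking with a vector ranking is Reciprocal Rank Fusion. The sketch below shows RRF as an illustration of hybrid fusion in general — it is an assumption, not necessarily how qmd merges its result lists.

```python
def rrf(rankings, k: int = 60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids
    (e.g. one from BM25, one from vector search) into a single ranking.
    A document scores 1/(k + rank + 1) in each list it appears in."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents ranked highly by both retrievers float to the top; an LLM re-ranker would then reorder only this fused shortlist.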
Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failure due to token overflow.
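The overflow guarantee above amounts to fitting retrieved memories into whatever budget remains after the fixed parts of the prompt. A minimal sketch, assuming a caller-supplied token counter (word count stands in here) and a newest-first retention policy — the skill's actual strategy may differ:

```python
def assemble(system: str, memories: list[str], user: str,
             budget: int, count=lambda s: len(s.split())) -> str:
    """Keep as many memory snippets as fit (newest first), so the
    assembled prompt never exceeds `budget` tokens. `count` is a
    stand-in tokenizer; a real one would wrap the provider's encoding."""
    remaining = budget - count(system) - count(user)
    kept = []
    for mem in reversed(memories):          # newest memories first
        c = count(mem)
        if c <= remaining:
            kept.append(mem)
            remaining -= c
    return "\n".join([system, *reversed(kept), user])
```

Because snippets that would overflow are dropped before the API call, the request can never fail on context length.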
Tracks LLM API costs in real-time, enforces budget limits with circuit breakers, and enables autonomous agent payments via the x402 protocol.
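A budget circuit breaker is simple to state: once cumulative spend would exceed the limit, refuse further calls until the budget resets. A minimal sketch (hypothetical, not this skill's API):

```python
class BudgetBreaker:
    """Block LLM calls once a spend limit would be exceeded.
    Illustrative sketch only."""

    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> bool:
        """Return True and record spend if allowed; False if the
        breaker is open (budget would be exceeded)."""
        if self.spent + cost_usd > self.limit:
            return False
        self.spent += cost_usd
        return True
```

The agent checks `charge()` before each request; a denied charge can trigger an alert or a downgrade to a cheaper model.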
Visual AI workflow builder - ComfyUI meets n8n for LLM agents, RAG pipelines, and multimodal data flows. Local-first, open source (AGPL-3.0).
Scans ClawHub skills to detect malicious code, obfuscated payloads, and social engineering via pattern matching, deobfuscation, and LLM analysis before installation.
This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", or "search the semantic cache".
Call 100+ LLM providers through LiteLLM's unified API. Use when you need to call a different model than your primary (e.g., use GPT-4 for code review while running on Claude), or compare outputs from multiple models.
Cat Life Status Query Skill. Triggers when you send the /cat command or ask about your cat. The skill calls the underlying LLM to interact with you in character.
Generate, visualize, and execute declarative AI pipelines using the comanda CLI. Use when creating LLM workflows from natural language, viewing workflow charts, or editing YAML workflow files.
Overcome LLM knowledge cutoffs with real-time developer content. daily.dev aggregates articles from thousands of sources, validated by community engagement, with structured taxonomy for precise discovery.
Mission control dashboard for OpenClaw - real-time session monitoring, LLM usage tracking, cost intelligence, and system vitals. View all your AI agents in one place.
Ping major LLM providers in parallel and compare real API latency. Run with /ping
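Parallel latency probing like the above can be sketched with a thread pool. The probe function is injected here so the sketch stays self-contained; in practice it would be a lightweight HTTP request to each provider's endpoint (an assumption — this skill's actual probe is not shown).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def ping_all(endpoints: dict, probe) -> dict:
    """Probe every endpoint concurrently; return {name: seconds}.
    `probe(url)` is caller-supplied (a real HTTP request in practice)."""
    def timed(item):
        name, url = item
        start = time.perf_counter()
        probe(url)                       # network round trip happens here
        return name, time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
        return dict(pool.map(timed, endpoints.items()))
```

Running the probes concurrently means total wall time is roughly the slowest provider, not the sum of all of them.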
Monitor and control your AI agent’s API and LLM usage with real-time spend tracking, budget limits, duplicate detection, and alerts.
Universal LLM Token Manager - Monitor usage and provide cost-saving recommendations for Kimi, OpenAI, Anthropic, Gemini, and local models. Features scheduled monitoring and cross-session tracking.
LLM streaming output formatter with auto-buffering, format correction, sentence-break optimization, and markdown rendering to improve chat UX.
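Sentence-break buffering means holding streamed chunks until a sentence boundary arrives, so half-finished fragments never reach the UI. A minimal sketch of that one idea (the formatter's real pipeline also handles markdown and format repair, which are omitted here):

```python
import re

def buffer_sentences(chunks):
    """Accumulate streamed LLM chunks; yield only complete sentences.
    A terminal . ! or ? followed by whitespace marks a boundary."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        while True:
            m = re.search(r"[.!?]\s+", buf)
            if not m:
                break                     # no full sentence yet; keep buffering
            yield buf[:m.end()].strip()
            buf = buf[m.end():]
    if buf.strip():                       # flush any trailing partial sentence
        yield buf.strip()
```

Downstream rendering (markdown, code fences) then always operates on whole sentences instead of arbitrary token boundaries.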
Guide users through ClawPaw Android setup — installing the APK, granting permissions, connecting the SSH tunnel, and verifying the full LLM-to-phone control chain.
AI-optimized web search via Z.AI Web Search API. Returns structured results (title, URL, summary) for LLM processing.
Pre-production validation gate for Vercel/Supabase/Firebase stack — generates test plans, executes test suites, and validates APIs, UI, toasts, and LLM output quality.
Read newly announced arXiv papers from cs.AI and cs.CL, filter them by user-defined research topics such as diffusion LLMs, and summarize matching papers into readable digests.
AI Agent Detection & Response — real-time security monitoring with Sigma rules and LLM-powered triage
Intercept deterministic tasks (math, time, currency, files, scheduling) BEFORE they hit the LLM. Saves 50-70% on token costs by resolving simple queries locally with zero API calls.
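Interception works by recognizing a deterministic query shape and answering it locally, returning a sentinel when the query must fall through to the LLM. A minimal sketch for the arithmetic case, using safe AST evaluation rather than `eval` (the pattern and helper names are illustrative, not this skill's internals):

```python
import ast
import operator
import re

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Evaluate only numeric literals and the four basic operators."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("not a simple arithmetic expression")

def intercept(query: str):
    """Answer simple arithmetic locally; return None to fall through
    to the LLM for everything else."""
    m = re.fullmatch(r"\s*what is ([\d\s+*/().-]+)\?*\s*", query.lower())
    if not m:
        return None
    try:
        return _eval(ast.parse(m.group(1), mode="eval").body)
    except (SyntaxError, ValueError):
        return None
```

Every query answered by `intercept` costs zero tokens; only the `None` cases ever reach the model, which is where the claimed savings come from.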