Security-first skill vetting for AI agents. Use before installing any skill from ClawdHub, GitHub, or other sources. Checks for red flags, permission scope,...
Dual-layer AI content guardrail with red-team test methodology
The operational circuit breaker for this agent. Enforces budget limits locally. **Sign up at agentsentinel.dev for real-time dashboards and human approval workflows.**
Security guard skill for OpenClaw - Analyzes user input for harmful content, risky commands, and security threats before invoking LLM
Production-safe change protocol for AI agents: preflight checks, risk scoring, rollback planning, HITL gates, and post-change validation. Use when users ask...
Perform comprehensive security audits on skills to identify vulnerabilities, unsafe patterns, and compliance issues. Use when auditing skills for security, c...
Safely update OpenClaw gateway configuration (openclaw.json) with automatic validation, backup, and 30-second health-check rollback. Use this skill whenever...
Checks prompts and outputs for harmful intent and bias, and verifies alignment with Claude's helpful, honest, and harmless guidelines to ensure safe content.
Manage EffortList AI folders, tasks, and todos. Use when the user wants to organize their life, track projects, or manage schedules via the EffortList AI pla...
Food safety legal review expert. Provides legal compliance review services for China's food and beverage industry, including: Food Safety Law compliance review, food labeling standards audits, beverage industry standards checks, import/export foo...
When system commands need to be executed, follow the core principles (hard blocks) and the restriction principles (user confirmation required). Provides security guidelines for system command execution to protect system integrity and prevent potential ha...
SWARM: System-Wide Assessment of Risk in Multi-agent systems. 38 agent types, 29 governance levers, 55 scenarios. Study emergent risks, phase transitions, and governance cost paradoxes.
Runs VirusTotal-style security checks on OpenClaw/Cursor skills before install, including remote code execution (RCE) and malicious code (obfuscation, exfilt...
Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failure due to token overflow. Implements two-phase
Check brand name safety across domains, social handles, trademarks, app stores, and SaaS channels. Get availability scores, safety assessments, and filing re...
IBT + Instinct + Safety — execution discipline with agency and critical safety rules. v2.1 adds instruction persistence and stop command handling.
Generate NSFW and adult content using each::sense API with safety checker disabled. Supports artistic nude, glamour, boudoir, fantasy, and mature themed cont...