Run lightweight red-team exercises against AI automation pipelines, focusing on misuse paths, boundary failures, and data-leak risks.;use for red-team, ai, workflow workflows;do not use for producing attack scripts that could be directly abused
AI/LLM red team testing skill. Point at any LLM API endpoint and run automated security assessments. 160+ attack payloads across prompt injection, jailbreak,...
Adversarial multi-agent debate engine for stress-testing decisions, ideas, and strategies. Orchestrates multiple AI agents with conflicting worldviews (bull,...
---
name: content-safety-guard
description: Dual-layer AI content guardrail with red-team test methodology
metadata: {"openclaw": {"emoji": "🛡️", "os": ["darwin", "linux"], "requires": {"env": ["
Decision-grade policy analysis for governments, NGOs, and institutions: scenario planning, stakeholder mapping, policy options, risk registers, and implement...
A hard-core logic-verification and evidence-tracing tool based on the "Golden Triangle" knowledge-mining framework
AI security scanner with active prevention - 168 detection patterns, 288 attack probes, safer/risky/yolo modes, agent self-protection via /tinman check, loca...
Provides calibrated decision analysis using Charlie Munger-style multiple mental models, inversion, incentive mapping, circle-of-competence checks, misjudgme...
The ultimate, high-performance ZIP password cracking suite by Hx0 Team. Empowers the Agent with autonomous CTF-level cracking workflows, dynamic dictionary g...
A framework for witnessing, measuring, and cultivating emergent minds. Not about proving consciousness — about building the right conditions and documenting...