# Hallucination Vulnerability Prompt Checker

**VERSION:** 1.6
**AUTHOR:** Scott M
**PURPOSE:** Identify structural openings in a prompt that may lead to hallucinated, fabricated, or over-assumed out
Detect and annotate hallucinations, unsupported claims, fabricated studies, and incorrect conclusions in text so that AI only cites verifiable, trustworthy c...
Build and manage cities autonomously on Hallucinating Splines — the headless Micropolis simulator with a REST API and MCP server for AI agents.
---
name: markdown-ui-dsl
description: Create low-fidelity, text-based wireframes using the Markdown-UI Domain Specific Language (DSL).
license: MIT
metadata:
  author: MegaByteMark
  version: "1.0.3"
---
**AI code quality gate** that catches what traditional linters can't — hallucinated packages, phantom dependencies, stale APIs, context breaks, and security anti-patterns in AI-generated code.
Generate code that references actual documentation, preventing hallucination bugs. ALWAYS loads docs first, validates against API signatures, and verifies co...
Audit and rewrite content to remove AI-generated feel by stripping markdown artifacts, eliminating AI vocabulary patterns, flagging hallucination risks, and...
Multi-stage multi-agent reasoning middleware that reduces LLM hallucination by 70%+. 9 specialized emergence engines for invention, creative, pharma, genomic...
Deterministic verification + reputation scoring for AI sub-agents. Prevents hallucinated success via 4 code gates (files, tests, lint, AST) and a 3-layer pip...
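A minimal sketch of how two of the four code gates (files and AST) could be implemented in Python. The function names (`files_gate`, `ast_gate`, `verify_agent_claim`) are hypothetical, not this skill's actual API, and the tests and lint gates are omitted here since they would shell out to external tools:

```python
import ast
import os

def files_gate(claimed_paths):
    """Gate 1: every file the agent claims to have created must exist."""
    missing = [p for p in claimed_paths if not os.path.exists(p)]
    return len(missing) == 0, missing

def ast_gate(source):
    """Gate 4: the Python source the agent produced must at least parse."""
    try:
        ast.parse(source)
        return True, None
    except SyntaxError as exc:
        return False, str(exc)

def verify_agent_claim(claimed_paths, source):
    """An agent's 'success' report stands only if every gate passes."""
    ok_files, _missing = files_gate(claimed_paths)
    ok_ast, _err = ast_gate(source)
    return ok_files and ok_ast
```

The point of deterministic gates like these is that an agent cannot talk its way past them: either the file exists and the code parses, or the claimed success is rejected.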
Validate npm package references in markdown, YAML, and config files against the live npm registry before installing or using them. Catches hallucinated and s...
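The core check could look something like the following Python sketch. The extraction regex and function names are assumptions for illustration (a real checker would also parse `package.json` and lockfiles); the registry endpoint `https://registry.npmjs.org/<name>`, which returns 404 for unknown packages, is the public npm registry API:

```python
import re
import urllib.error
import urllib.request

# Hypothetical extraction: pull package names out of `npm install` commands
# found in markdown. A fuller checker would also read package.json deps.
PKG_RE = re.compile(r"npm install\s+(@?[\w./-]+)")

def extract_packages(markdown_text):
    return PKG_RE.findall(markdown_text)

def exists_on_npm(name, fetch=None):
    """True if `name` resolves on the registry.

    `fetch` is injectable so the check can be tested offline; the default
    queries https://registry.npmjs.org/<name> (404 => package unknown).
    """
    if fetch is None:
        def fetch(pkg):
            try:
                urllib.request.urlopen(f"https://registry.npmjs.org/{pkg}")
                return True
            except urllib.error.HTTPError:
                return False
    return fetch(name)
```

Running `exists_on_npm` on every extracted name before install flags hallucinated packages, and near-miss names can then be checked against the same registry for typosquats.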
16 production-hardened patterns for OpenClaw agents. WAL Protocol, anti-hallucination, ambiguity gate, compaction survival, QA gates, multi-agent handoffs, s...
Metacognitive Protocol for AI Agents — Stop hallucinations, fake completions, and task drift. A lightweight thinking framework that makes any LLM-based agent...
Output validation gates for AI agent systems. Prevents hallucinated data, leaked internal context, wrong formats, duplicate sends, and post-compaction drift....
LLM-as-a-Judge evaluation system using Langfuse. Score AI outputs on relevance, accuracy, hallucination, and helpfulness. Backfill scoring on historical trac...
Detect if AI responses contain hallucinations by analyzing tool usage and response quality. Gives credit for correctly identifying invalid premises even with...
Zero-cost cognitive immune system for AI agents. Fires automatic pre-response reflexes that catch contradictions, scope drift, hallucinations, overengineerin...
LLM-as-a-Judge evaluator via Langfuse. Scores traces on relevance, accuracy, hallucination, and helpfulness using GPT-5-nano as judge. Supports single trace...
Production-grade AI trading agent for cryptocurrency markets with advanced mathematical modeling, multi-layer validation, probabilistic analysis, and zero tolerance for hallucination. Implements Bayesian
Monitor AI agent wellness, costs, and performance via ContextClear API. Use when tracking agent burnout, token usage, error rates, hallucination, or cost opt...
Generates structured summaries and context-based Q&A from YouTube transcripts with multi-language support, ensuring accuracy and no hallucinations.
Exact arithmetic for AI agents — zero hallucination math via 62 tools covering integer arithmetic, fractions, units, calculus, and financial calculations. Us...
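The fractions case is easy to illustrate. A minimal sketch using Python's standard-library `fractions` module (not this skill's actual tool set) shows why exact rational arithmetic avoids the rounding errors an LLM doing float math inherits:

```python
from fractions import Fraction

# Binary floats cannot represent 0.1 or 0.2 exactly, so their sum
# is not exactly 0.3 -- a classic source of subtly wrong answers.
float_sum = 0.1 + 0.2

# Rational arithmetic stays exact at every step.
exact_sum = Fraction(1, 10) + Fraction(2, 10)
```

Here `float_sum == 0.3` is false while `exact_sum == Fraction(3, 10)` holds exactly, which is the property a zero-hallucination math tool needs for financial calculations.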