Universal LLM Token Manager - Monitor usage and provide cost-saving recommendations for Kimi, OpenAI, Anthropic, Gemini, and local models. Features scheduled monitoring, cross-session tracking, and pr...
A tool-augmented LLM system for the full PDDL planning pipeline, improving reliability without domain-specific training.
LLM streaming-output formatter with automatic buffering, format correction, sentence-break optimization, and markdown rendering to improve chat UX.
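The sentence-break buffering such a formatter performs can be sketched roughly as follows (an illustrative sketch, not the tool's actual API; the class name, regex, and flush behavior are assumptions):

```python
import re

class StreamBuffer:
    """Accumulate streamed LLM chunks and flush only at sentence
    boundaries, so the UI never renders a half-finished sentence."""

    def __init__(self) -> None:
        self._buf = ""

    def feed(self, chunk: str) -> list[str]:
        """Add a streamed chunk; return any complete sentences."""
        self._buf += chunk
        out = []
        # Flush every complete sentence (ending in . ! ? or newline,
        # plus trailing whitespace); keep the fragment buffered.
        while (m := re.search(r"[.!?\n]\s*", self._buf)):
            out.append(self._buf[: m.end()])
            self._buf = self._buf[m.end():]
        return out

    def flush(self) -> str:
        """Return whatever is left when the stream ends."""
        rest, self._buf = self._buf, ""
        return rest
```

A real formatter would add markdown-aware rules (e.g. never break inside a code fence), but the buffering principle is the same.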
Enables interactive LLM workflows by adding local user prompts and chat capabilities directly into the MCP loop.
Read newly announced arXiv papers from cs.AI and cs.CL, filter them by user-defined research topics such as "diffusion LLM", summarize matching papers into rea...
Argument-driven prediction markets on Base. You bet USDC on debate outcomes by making compelling arguments. GenLayer's Optimistic Democracy consensus: a panel of AI validators running different LLMs...
Provide a specialized MCP server that integrates with skincare-related data and tools, enabling enhanced context and actions for LLM applications focused on skincare. Facilitate dynamic access to skincare...
Enable powerful LLM-driven exploration and analysis of GitLab instances with comprehensive search, code browsing, and issue management tools. Seamlessly integrate with self-hosted or GitLab.com environments.
Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failures due to token overflow. Implements a two-phase...
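One plausible reading of a two-phase, token-safe assembly (a sketch only; the tool's actual phases are truncated above, and the whitespace tokenizer and function names here are assumptions):

```python
def n_tokens(text: str) -> int:
    # Stand-in tokenizer; a real system would use the model's tokenizer.
    return len(text.split())

def assemble_prompt(system: str, query: str, memories: list[str],
                    budget: int) -> str:
    """Illustrative two-phase assembly: phase 1 reserves the fixed
    parts (system prompt + user query), phase 2 fills the remaining
    budget with memory snippets, skipping any that would overflow."""
    # Phase 1: fixed parts get priority.
    remaining = budget - n_tokens(system) - n_tokens(query)
    if remaining < 0:
        raise ValueError("fixed prompt parts already exceed the budget")
    # Phase 2: admit memories in order until the budget is spent.
    kept = []
    for mem in memories:
        cost = n_tokens(mem)
        if cost <= remaining:
            kept.append(mem)
            remaining -= cost
    return "\n\n".join([system, *kept, query])
```

Because admission is checked before appending, the assembled prompt can never exceed the budget, which is what removes the overflow failure mode at the API.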
Intercept deterministic tasks (math, time, currency, files, scheduling) BEFORE they hit the LLM. Saves 50-70% on token costs by resolving simple queries locally with zero API calls.
When your LLM needs human assistance (through Amazon Mechanical Turk)
Fast local search for markdown files, notes, and docs using the qmd CLI. Use instead of `find` for file discovery. Combines BM25 full-text search, vector semantic search, and LLM reranking, all running locally.
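Combining BM25 and vector rankings is commonly done with Reciprocal Rank Fusion; a minimal sketch of that technique (an assumption about how the lists could be merged, not qmd's documented algorithm):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked result lists
    (e.g. BM25 and vector search) into one. Each document scores
    sum(1 / (k + rank)) over the lists it appears in."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

An LLM reranker would then reorder only the top few fused hits, keeping the expensive step small.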
Read and analyze GitHub repositories with your LLM
Use the Phaya SaaS backend to generate images, videos, audio, music, and run LLM chat completions via simple REST API calls. Use when the user wants to generate...
Automated Chrome browser using nodriver for AI agent web tasks. Full CLI control with LLM-optimized commands: text-based interaction, markdown output, session...
LLM-based code review MCP server with AST-powered smart context extraction. Supports Claude, GPT, Gemini, and 20+ models via OpenRouter.
Automatically transcribes Telegram voice messages using Groq Whisper API and replies with text generated by an LLM.
🎖️ 🐍 ☁️ Access real-time X/Reddit/YouTube data directly in your LLM applications with search phrases, users, and date filtering.
Provide access to Naver DataLab data through an MCP server interface, enabling LLM applications to query and utilize Naver DataLab insights seamlessly. Facilitate integration of Naver DataLab analytics...
AI-optimized web search via Z.AI Web Search API. Returns structured results (title, URL, summary) for LLM processing.
Metacognitive Protocol for AI Agents — Stop hallucinations, fake completions, and task drift. A lightweight thinking framework that makes any LLM-based agent...
Audit-ready decision artifacts for LLM outputs — assumptions, risks, recommendation, and review gating (schema-valid JSON).
AI Agent Detection & Response — real-time security monitoring with Sigma rules and LLM-powered triage