Edge Router
v1.0.0
Description
name: edge-router
description: Route AI agent compute tasks to the cheapest viable backend. Supports local inference (Ollama), cloud GPU (Vast.ai), and quantum hardware (Wukong 72Q). Use when an agent needs to decide where to run a task, optimize compute costs, check backend availability, or execute workloads across edge/cloud/quantum infrastructure.
Edge Router
Routes each task to the cheapest available backend, in priority order: local (free) → cloud GPU ($0.01) → quantum ($0.10).
API
Base: https://edge-router.gpupulse.dev/api/v1 (or localhost:3825)
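The curl examples below reference a `$BASE` variable; a minimal setup sketch (assuming the localhost instance serves the same `/api/v1` prefix as the hosted endpoint):

```shell
# Point BASE at the hosted endpoint, or at a local instance.
# Assumption: the localhost instance uses the same /api/v1 prefix.
BASE="https://edge-router.gpupulse.dev/api/v1"
# BASE="http://localhost:3825/api/v1"
```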
Route (recommend)
curl -X POST "$BASE/route" -H "Content-Type: application/json" \
-d '{"task_type": "inference"}'
Execute (route + run)
curl -X POST "$BASE/execute" -H "Content-Type: application/json" \
-d '{"task_type": "inference", "payload": {"model": "llama3.2:1b", "prompt": "hello"}}'
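For scripted use, the request body can be kept in a variable so the model and prompt can be swapped per task. The field names are taken from the curl example above; whether the server accepts additional fields is not documented here.

```shell
# Build the execute payload once; swap model/prompt per task.
PAYLOAD='{"task_type": "inference", "payload": {"model": "llama3.2:1b", "prompt": "hello"}}'
# Then: curl -X POST "$BASE/execute" -H "Content-Type: application/json" -d "$PAYLOAD"
```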
Task Types
inference → local first, cloud fallback
training → cloud GPU
quantum → Wukong 72Q
auto → cheapest available
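The priority above can be sketched as a small fallback function. This is a sketch only: availability is reduced to a yes/no flag (a stand-in for a real `GET /backends` check), and the service's actual logic may differ, e.g. `auto` could also fall through to quantum.

```shell
# Sketch of the routing priority: cheapest viable backend wins.
# local_up is a stand-in for a real availability check.
route_task() {
  task_type="$1"; local_up="$2"   # local_up: "yes" or "no"
  case "$task_type" in
    inference|auto) if [ "$local_up" = yes ]; then echo local; else echo cloud; fi ;;
    training) echo cloud ;;
    quantum)  echo quantum ;;
    *)        echo unknown ;;
  esac
}
```

For example, `route_task inference yes` prints `local`, while `route_task inference no` falls back to `cloud`.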
Other
GET /backends - list + status
GET /stats - routing statistics
GET /health - health check