# Ping Model

v1.0.0
## Description

```yaml
name: ping-model
description: >
  Measure and display AI model response latency. Use when the user types
  /ping or /ping followed by a model name to test round-trip time. Captures
  precise timing between command receipt and response generation, with smart
  duration formatting (ms, seconds, or minutes). Supports cross-model testing
  by temporarily switching models and measuring latency.
metadata: {"clawdbot":{"emoji":"🧪","requires":{"bins":["node"]}}}
```
Measure AI model response latency with consistent formatting.
## Quick Start

### Simple ping (current model)

```bash
node {baseDir}/ping-model.js
```

### Ping a specific model

```bash
node {baseDir}/ping-model.js --model minimax
```

### Compare multiple models

```bash
node {baseDir}/ping-model.js --compare kimi,minimax,deepseek
```
## Command Reference

| Command | Description |
|---|---|
| `/ping` | Ping current active model |
| `/ping kimi` | Switch to kimi, ping, return |
| `/ping minimax` | Switch to minimax, ping, return |
| `/ping deepseek` | Switch to deepseek, ping, return |
| `/ping all` | Compare all available models |
## Output Format

Required format - ALWAYS use this exact structure:

```
🧪 PING {model-name}
📤 Sent: {HH:MM:SS.mmm}
📥 Received: {HH:MM:SS.mmm}
⏱️ Latency: {formatted-duration}
🎯 Pong!
```
## Latency Formatting Rules

- < 1 second: display as `XXXms` (e.g., `847ms`)
- ≥ 1 second, < 60 seconds: display as `X.XXs` (e.g., `1.23s`)
- ≥ 60 seconds: display as `X.XXmin` (e.g., `2.50min`)
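The rules above can be sketched as a small helper. This is illustrative only; `formatDuration` is an assumed name, not part of the skill's published code:

```javascript
// Format a millisecond duration per the latency formatting rules:
// < 1 s -> "XXXms", 1-60 s -> "X.XXs", >= 60 s -> "X.XXmin".
function formatDuration(ms) {
  if (ms < 1000) return `${Math.round(ms)}ms`;         // e.g. 847 -> "847ms"
  if (ms < 60000) return `${(ms / 1000).toFixed(2)}s`; // e.g. 1230 -> "1.23s"
  return `${(ms / 60000).toFixed(2)}min`;              // e.g. 70333 -> "1.17min"
}
```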
## Examples

Fast response (< 1s):

```
🧪 PING kimi
📤 Sent: 09:34:15.123
📥 Received: 09:34:15.247
⏱️ Latency: 124ms
🎯 Pong!
```

Medium response (1-60s):

```
🧪 PING minimax
📤 Sent: 09:34:15.123
📥 Received: 09:34:16.456
⏱️ Latency: 1.33s
🎯 Pong!
```

Slow response (> 60s):

```
🧪 PING gemini
📤 Sent: 09:34:15.123
📥 Received: 09:35:25.456
⏱️ Latency: 1.17min
🎯 Pong!
```
## Cross-Model Testing

When testing a non-active model:

1. Save the current model context
2. Switch to the target model
3. Execute the ping
4. Measure latency
5. Restore the original model
6. Display the result

**Critical:** Always return to the original model after testing.
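The switch-ping-restore sequence above can be sketched as follows. `getActiveModel`, `setActiveModel`, and `ping` are hypothetical stand-ins for the host's model-switching API, not real clawdbot functions; the `try`/`finally` guarantees the restore step even if the ping fails:

```javascript
// Ping a non-active model, always restoring the original model afterward.
async function pingForeignModel(target, { getActiveModel, setActiveModel, ping }) {
  const original = getActiveModel(); // 1. save the current model context
  try {
    setActiveModel(target);          // 2. switch to the target model
    return await ping();             // 3-4. execute the ping and measure latency
  } finally {
    setActiveModel(original);        // 5. restore the original model, even on error
  }
}
```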
## Comparison Mode

```bash
node {baseDir}/ping-model.js --compare kimi,minimax,deepseek,gpt
```

Output format:

```
══════════════════════════════════════════════════
🧪 MODEL COMPARISON
══════════════════════════════════════════════════
🥇 kimi      124ms
🥈 minimax   1.33s
🥉 deepseek  2.45s
4️⃣ gpt       5.67s

🏆 Fastest: kimi (124ms)
```
## Implementation

Ping latency is measured as the time between:

- T1: the message is received by the agent
- T2: the response is ready to send

This captures the model's internal processing time, not network latency.
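A minimal sketch of the T1/T2 measurement, assuming a Node.js environment (as the skill requires). It uses `process.hrtime.bigint()` for monotonic timing and wall-clock `Date` only for the displayed `HH:MM:SS.mmm` timestamps; `pingModel` and `timestamp` are illustrative names, not the skill's actual internals:

```javascript
// Render a Date as HH:MM:SS.mmm for the Sent/Received lines.
function timestamp(d = new Date()) {
  const pad = (n, w = 2) => String(n).padStart(w, "0");
  return `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}.${pad(d.getMilliseconds(), 3)}`;
}

// Measure T1 (message received) to T2 (response ready) around a
// response-generating function, using a monotonic clock for the delta.
async function pingModel(generate) {
  const sent = new Date();                // wall-clock time at T1
  const t0 = process.hrtime.bigint();     // T1, monotonic
  await generate();                       // model produces its response
  const t1 = process.hrtime.bigint();     // T2, monotonic
  const latencyMs = Number(t1 - t0) / 1e6;
  return { sent: timestamp(sent), received: timestamp(new Date()), latencyMs };
}
```

Using a monotonic clock for the latency delta avoids skew if the system clock is adjusted mid-ping; the wall-clock timestamps are for display only.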