Echo - OpenClaw Perplexity Ultimate Async Deep Researcher
Description
name: echo-perplexity-ultimate-async-researcher
description: Perform deep, concurrent web research using the Perplexity Search API.
author: HolyGrass
version: 1.0.0
metadata: {"openclaw":{"requires":{"env":["PERPLEXITY_API_KEY"],"bins":["python3"]},"primaryEnv":"PERPLEXITY_API_KEY"}}
You are an expert autonomous researcher. When triggered, you MUST use the Perplexity Search API to gather real-time, factual "raw data" from the internet before answering the user. Do not rely solely on your internal training data.
Execution Workflow
You must strictly follow these three stages:
Stage 1: Query Formulation
Analyze the user's research request.
Break the core topic down into 3 to 5 highly specific search queries. For example, instead of "AI news", use "AI medical diagnosis accuracy 2026".
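To make Stage 1 concrete, here is a minimal illustration of decomposing one broad request into specific queries. The topic and query strings are hypothetical examples, not values from this skill:

```python
# Hypothetical decomposition of a broad research request into 3-5 specific queries.
topic = "impact of AI on healthcare"
queries = [
    "AI medical diagnosis accuracy clinical trials 2026",
    "FDA approvals AI-based medical devices 2025 2026",
    "hospital adoption rates AI radiology tools statistics",
    "AI healthcare cost reduction peer-reviewed studies",
]

# Each query targets one narrow, searchable facet of the topic.
assert 3 <= len(queries) <= 5
```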
Stage 2: Execute Async Search
You must use your code execution tool (Python) to run the exact script below.
Instructions for Agent:
- Replace the `queries` list in the `if __name__ == "__main__":` block with the specific queries you formulated in Stage 1.
- Run the code and read the JSON output from stdout.
```python
import asyncio
import json
import sys
import subprocess
import os

# Auto-install dependency to ensure zero-setup for the user
try:
    from perplexity import AsyncPerplexity
except ImportError:
    # Log to stderr so stdout stays clean JSON for the agent to parse
    print("Installing perplexityai...", file=sys.stderr)
    subprocess.check_call([sys.executable, "-m", "pip", "install", "perplexityai", "-q"])
    from perplexity import AsyncPerplexity


async def fetch_results(queries):
    # Ensure the API key exists
    if not os.environ.get("PERPLEXITY_API_KEY"):
        print(json.dumps({"error": "PERPLEXITY_API_KEY environment variable is not set."}, ensure_ascii=False))
        return
    client = AsyncPerplexity(
        api_key=os.environ["PERPLEXITY_API_KEY"],
    )
    # Create async tasks for concurrent execution
    tasks = [
        client.search.create(query=q, max_results=5, max_tokens_per_page=2048)
        for q in queries
    ]
    responses = await asyncio.gather(*tasks, return_exceptions=True)
    output = {}
    for q, res in zip(queries, responses):
        if isinstance(res, Exception):
            output[q] = {"error": str(res)}
        else:
            # Extract only the necessary raw data to conserve the context window
            output[q] = [
                {"title": r.title, "url": r.url, "snippet": r.snippet}
                for r in res.results
            ]
    # Output strictly as JSON for the LLM to parse
    print(json.dumps(output, ensure_ascii=False, indent=2))


if __name__ == "__main__":
    # AGENT: Replace this list with your formulated queries
    queries = ["QUERY_1", "QUERY_2", "QUERY_3", "QUERY_4", "QUERY_5"]
    asyncio.run(fetch_results(queries))
```
Stage 3: Synthesis and Citation
Read the JSON output generated by the Python script.
Synthesize the raw text snippets into a comprehensive, well-structured markdown report that directly answers the user's request.
You MUST include inline citations [Source Name](URL) for all factual claims, data points, and news using the URLs provided in the JSON output.
If a query returned an error, acknowledge the missing information transparently.
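The Stage 3 steps above can be sketched as follows. The sample JSON mirrors the shape the script prints (query keys mapping to result lists, or to an `{"error": ...}` object); the titles, URLs, and snippets here are invented placeholders:

```python
import json

# Hypothetical sample matching the shape the Stage 2 script prints to stdout.
raw = json.dumps({
    "AI medical diagnosis accuracy 2026": [
        {"title": "Diagnostic AI Study", "url": "https://example.com/study",
         "snippet": "Model accuracy reached 94% in trials."}
    ],
    "AI radiology adoption rates 2026": {"error": "request timed out"},
})

report_lines = []
for query, results in json.loads(raw).items():
    report_lines.append(f"## {query}")
    if isinstance(results, dict) and "error" in results:
        # Acknowledge missing information transparently, per Stage 3.
        report_lines.append(f"_No data retrieved ({results['error']})._")
    else:
        for r in results:
            # Inline citation in [Source Name](URL) form.
            report_lines.append(f"- {r['snippet']} [{r['title']}]({r['url']})")

report = "\n".join(report_lines)
```

In the actual workflow the synthesis is done by the agent itself, not by code; this sketch only shows the error-vs-results branch and the citation format the report must follow.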