Run LLM optimization analyses programmatically and retrieve score details.
Submit URLs or retrieve existing analysis data.
curl https://api.cleversearch.ai/v1/analysis \
-H "Authorization: Bearer YOUR_SECRET_KEY" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com"}'{"urls": ["https://example.com", "https://example.com/pricing"]}GET https://api.cleversearch.ai/v1/analysis/analysis_123A completed analysis returns scores, entity gaps, and actionable recommendations.
{
  "id": "analysis_123",
  "status": "completed",
  "overall_score": 72,
  "category_scores": {
    "entity_coverage": 68,
    "intent_alignment": 74,
    "structure_quality": 76
  },
  "recommendations": [
    "Expand definition section for primary topic entities",
    "Add FAQ block for comparison intent"
  ]
}

Design for async processing and robust result retrieval.
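A client can turn the completed-analysis payload into a work queue by ranking the category scores. A minimal sketch over the sample response above; `weakest_category` is a hypothetical helper, not part of the API:

```python
import json

# The sample completed-analysis response shown above.
SAMPLE = """
{
  "id": "analysis_123",
  "status": "completed",
  "overall_score": 72,
  "category_scores": {
    "entity_coverage": 68,
    "intent_alignment": 74,
    "structure_quality": 76
  },
  "recommendations": [
    "Expand definition section for primary topic entities",
    "Add FAQ block for comparison intent"
  ]
}
"""

def weakest_category(analysis: dict) -> tuple:
    """Return the lowest-scoring category, a natural place to start."""
    scores = analysis["category_scores"]
    name = min(scores, key=scores.get)
    return name, scores[name]

analysis = json.loads(SAMPLE)
if analysis["status"] == "completed":
    name, score = weakest_category(analysis)
    print(f"Start with {name} ({score})")  # Start with entity_coverage (68)
```

Checking `status` first matters because, with async processing, a retrieval call may return a still-pending analysis without scores.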
Phase 1: Integration
Implement auth, request validation, and core endpoint flow.
Output: Stable API client with retries and typed payloads.
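Phase 1's retry layer can be a small backoff loop around the request call. A transport-agnostic sketch: `send` and `sleep` are injected so it works with any HTTP library and is easy to test; the retryable-status set and attempt counts are assumptions, not documented API behavior:

```python
import time

# Statuses commonly treated as transient; adjust to taste.
RETRYABLE = {429, 500, 502, 503, 504}

def send_with_retries(send, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call send() until it returns a non-retryable status.

    `send` returns (status_code, body). Between tries we wait
    base_delay * 2**attempt seconds (exponential backoff).
    """
    last = None
    for attempt in range(max_attempts):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        last = (status, body)
        if attempt < max_attempts - 1:
            sleep(base_delay * 2 ** attempt)
    return last

# Example: a fake transport that fails twice, then succeeds.
calls = iter([(503, None), (503, None), (200, {"id": "analysis_123"})])
status, body = send_with_retries(lambda: next(calls), sleep=lambda s: None)
print(status, body)  # 200 {'id': 'analysis_123'}
```

Adding random jitter to the delay is a common refinement when many clients retry simultaneously.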
Phase 2: Automation
Add async handling and webhook-driven workflows.
Output: Background jobs connected to reliable event processing.
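Webhook-driven workflows should authenticate incoming events before acting on them. An HMAC-SHA256 signature check is a common pattern for this; the scheme, secret format, and payload shape below are assumptions for illustration, not documented CleverSearch behavior:

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Check a hex HMAC-SHA256 signature in constant time."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulated delivery: a completed-analysis event and its signature.
body = b'{"id": "analysis_123", "status": "completed"}'
sig = hmac.new(b"whsec_test", body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, "whsec_test"))    # True
print(verify_webhook(body, sig, "wrong-secret"))  # False
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.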
Phase 3: Hardening
Improve observability, rate-limit handling, and security.
Output: Production-ready monitoring and incident runbooks.
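For the rate-limit hardening in Phase 3, a client should honor the server's `Retry-After` header on 429 responses before falling back to its own backoff. A sketch, assuming the API sends `Retry-After` in seconds (the delta-seconds form; the HTTP-date form is omitted here):

```python
def retry_delay(headers: dict, attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Pick a delay before retrying a rate-limited request.

    Prefer the server's Retry-After value when present; otherwise use
    capped exponential backoff (base * 2**attempt, at most `cap`).
    """
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        try:
            return min(float(retry_after), cap)
        except ValueError:
            pass  # HTTP-date form not handled in this sketch
    return min(base * 2 ** attempt, cap)

print(retry_delay({"Retry-After": "12"}, attempt=0))  # 12.0
print(retry_delay({}, attempt=3))                     # 8.0
```

Capping the delay keeps a misbehaving header or a long retry chain from stalling a worker indefinitely.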
Successful Request Rate
>= 99% of requests succeed, excluding 4xx client errors from the denominator
Indicates resilient client logic and error handling.
Webhook Processing Delay
< 60 seconds median end-to-end
Keeps downstream automation timely and useful.
Auth Error Frequency
< 1% of total calls
Confirms credential management is stable.
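All three targets above can be computed from a structured request log. A sketch with an assumed record shape (a dict with an HTTP `status` per call, plus a separate list of webhook delays); counting only 401s as auth errors is our simplification:

```python
from statistics import median

def client_metrics(records):
    """Compute the success-rate and auth-error figures tracked above.

    4xx responses are excluded from the success-rate denominator,
    matching the metric's definition; auth errors (401) are measured
    against all calls.
    """
    non_4xx = [r for r in records if not 400 <= r["status"] < 500]
    success_rate = sum(r["status"] < 400 for r in non_4xx) / len(non_4xx)
    auth_errors = sum(r["status"] == 401 for r in records) / len(records)
    return success_rate, auth_errors

log = [{"status": 200}] * 98 + [{"status": 500}, {"status": 401}]
rate, auth = client_metrics(log)
print(f"success {rate:.0%}, auth errors {auth:.0%}")

delays = [12.0, 30.5, 41.0]  # end-to-end webhook delays, seconds
print(f"median webhook delay {median(delays):.1f}s")  # 30.5s
```

Emitting these as dashboard gauges makes it obvious when a deploy regresses the client against any of the targets.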