BotsKYC enforces rate limits to ensure fair usage and maintain service quality for all users.
## Rate Limit Tiers
| Tier | Requests/Minute | Requests/Day | Burst Limit | Price |
|---|---|---|---|---|
| Free | 10 | 100 | 20 | Free |
| Starter | 100 | 1,000 | 200 | Contact Sales |
| Professional | 1,000 | 10,000 | 2,000 | Contact Sales |
| Enterprise | Custom | Custom | Custom | Contact Sales |
## Rate Limit Headers

Every API response includes rate limit information in the headers:
```http
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1732449600
```
- `X-RateLimit-Limit`: Your total rate limit (requests per window)
- `X-RateLimit-Remaining`: Number of requests remaining in the current window
- `X-RateLimit-Reset`: Unix timestamp when the rate limit resets
## Rate Limit Response

When you exceed your rate limit, the API returns a `429 Too Many Requests` response:
```json
{
  "error": "rate_limit_exceeded",
  "message": "You have exceeded your rate limit of 100 requests per minute",
  "retryAfter": 45,
  "limit": 100,
  "reset": 1732449600
}
```
The response includes:
- `retryAfter`: Seconds to wait before retrying
- `limit`: Your rate limit
- `reset`: Unix timestamp when the limit resets
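If you prefer to drive retries from the response body rather than the `Retry-After` header, you can read `retryAfter` directly. A minimal sketch, assuming the error body matches the JSON shape above:

```javascript
// Read the retry delay from a 429 error body.
// Assumes the JSON shape shown above; falls back to 60 seconds.
async function retryDelayFromBody(response) {
  try {
    const body = await response.json();
    return (body.retryAfter ?? 60) * 1000; // seconds -> milliseconds
  } catch {
    return 60 * 1000; // body wasn't valid JSON; use a safe default
  }
}
```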
## Handling Rate Limits

Always check the `Retry-After` header when you receive a `429` response:
```javascript
async function makeRequest(url, options) {
  const response = await fetch(url, options);
  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
    console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return makeRequest(url, options); // Retry
  }
  return response;
}
```
### Exponential Backoff

Implement exponential backoff for resilient retries:
```javascript
async function requestWithBackoff(url, options, maxRetries = 3) {
  let retries = 0;
  let delay = 1000; // Start with 1 second
  while (retries < maxRetries) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response;
    }
    retries++;
    console.log(`Retry ${retries}/${maxRetries} after ${delay}ms`);
    await new Promise(resolve => setTimeout(resolve, delay));
    delay *= 2; // Exponential backoff
  }
  throw new Error('Max retries exceeded');
}
```
### Monitor Remaining Requests

Track your remaining requests to avoid hitting limits:
```javascript
async function monitoredRequest(url, options) {
  const response = await fetch(url, options);
  // Fall back to '0' so missing headers don't produce NaN
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '0', 10);
  console.log(`Rate limit: ${remaining}/${limit} remaining`);
  if (remaining < limit * 0.1) {
    console.warn('Warning: Less than 10% of rate limit remaining!');
  }
  return response;
}
```
## Best Practices

### 1. Batch Requests

Process multiple documents in a single request instead of separate requests:
```javascript
// ❌ Bad: 3 separate requests
await fetch('/v1/identity/scan', { method: 'POST', body: formData1 });
await fetch('/v1/identity/scan', { method: 'POST', body: formData2 });
await fetch('/v1/identity/scan', { method: 'POST', body: formData3 });

// ✅ Good: 1 request with multiple files
const formData = new FormData();
formData.append('documents', file1);
formData.append('documents', file2);
formData.append('documents', file3);
await fetch('/v1/identity/scan', { method: 'POST', body: formData });
```
### 2. Cache Results

Cache API responses to reduce redundant requests:
```javascript
const cache = new Map();

async function cachedRequest(url, options, ttl = 3600000) {
  const cacheKey = `${url}:${JSON.stringify(options)}`;
  if (cache.has(cacheKey)) {
    const { data, timestamp } = cache.get(cacheKey);
    if (Date.now() - timestamp < ttl) {
      return data; // Cache hit: skip the network request
    }
  }
  const response = await fetch(url, options);
  const data = await response.json();
  cache.set(cacheKey, { data, timestamp: Date.now() });
  return data;
}
```
### 3. Implement Request Queuing

Queue requests to stay within rate limits:
```javascript
class RateLimitedQueue {
  constructor(rateLimit = 100, interval = 60000) {
    this.queue = [];
    this.rateLimit = rateLimit;
    this.interval = interval;
    this.requestCount = 0;
  }

  async enqueue(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0 || this.requestCount >= this.rateLimit) {
      return;
    }
    const { fn, resolve, reject } = this.queue.shift();
    this.requestCount++;
    // Free this slot, then try the next queued request, after the interval
    setTimeout(() => {
      this.requestCount--;
      this.processQueue();
    }, this.interval);
    try {
      const result = await fn();
      resolve(result);
    } catch (error) {
      reject(error);
    }
  }
}

// Usage
const queue = new RateLimitedQueue(100, 60000);
const result = await queue.enqueue(() =>
  fetch('https://api.botskyc.com/v1/identity/scan', options)
);
```
### 4. Use Webhooks

Instead of polling for results, use webhooks (coming soon) to receive notifications:
```javascript
// ❌ Bad: Polling
async function pollForResults(sessionId) {
  while (true) {
    const response = await fetch(`/v1/liveness/session/${sessionId}/results`);
    if (response.status === 200) return response.json();
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
}

// ✅ Good: Webhook (coming soon)
// Configure webhook URL in portal
// Receive POST notification when results ready
```
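Once webhooks are available, a receiver could be a small HTTP endpoint. Below is a minimal Express sketch; the endpoint path and payload fields (`sessionId`, `status`) are assumptions, not the documented BotsKYC schema:

```javascript
import express from 'express';

const app = express();

// Hypothetical webhook receiver; the payload shape is an assumption.
app.post('/botskyc/webhook', express.json(), (req, res) => {
  const { sessionId, status } = req.body; // hypothetical fields
  console.log(`Session ${sessionId} finished with status: ${status}`);
  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(3000);
```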
## Rate Limit Errors

### Common Scenarios

**Scenario 1: Burst Traffic**
- Problem: A sudden spike in requests exceeds your burst limit
- Solution: Implement request queuing and exponential backoff

**Scenario 2: Continuous Processing**
- Problem: A background job exceeds your daily limit
- Solution: Spread processing across time (see the pacing sketch below) or upgrade your tier
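For example, a background job can stay under a daily budget by spacing requests evenly. A minimal sketch; `DAILY_LIMIT` and `processDocument` are placeholders for your own values:

```javascript
// Pace a batch job so it never exceeds a daily request budget.
const DAILY_LIMIT = 1000;
const spacingMs = (24 * 60 * 60 * 1000) / DAILY_LIMIT; // ms between requests

async function processAll(documents, processDocument) {
  for (const doc of documents) {
    await processDocument(doc); // placeholder for your API call
    await new Promise(resolve => setTimeout(resolve, spacingMs));
  }
}
```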
**Scenario 3: Multiple Instances**
- Problem: Multiple servers share the same API key
- Solution: Use separate API keys per instance, or implement distributed rate limiting (see the sketch below)
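A common way to implement distributed rate limiting is a shared fixed-window counter in Redis, so the limit applies across every instance using the key. A sketch using the `redis` Node.js client; the key prefix and window size are illustrative:

```javascript
import { createClient } from 'redis';

// All instances increment the same per-window key, so the limit
// is enforced across the whole fleet, not per server.
const client = createClient();
await client.connect();

async function acquireSlot(limit = 100, windowSeconds = 60) {
  const windowId = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `botskyc:ratelimit:${windowId}`; // illustrative key name
  const count = await client.incr(key);
  if (count === 1) {
    await client.expire(key, windowSeconds); // let old windows expire
  }
  return count <= limit; // true if this request fits in the window
}
```

Check `acquireSlot()` before each API call, and queue or delay the request when it returns `false`.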
## Upgrading Your Tier

To increase your rate limits:

1. Visit portal.botskyc.com (coming soon)
2. Navigate to Settings → Subscription
3. Select a higher tier

Or contact [email protected] for Enterprise pricing.
## Monitoring Usage

Track your API usage to avoid hitting rate limits unexpectedly:

- View the usage dashboard at portal.botskyc.com (coming soon)
- Set up usage alerts
- Review monthly usage reports
- Monitor `X-RateLimit-*` headers in real time
## Support

Questions about rate limits? Contact [email protected].