Rate Limits
Understanding API rate limits, usage tiers, and best practices for staying within your limits.
Overview
API Shield implements rate limiting to ensure fair usage and maintain service quality for all users. Rate limits are applied per API key and vary based on your subscription plan.
Rate Limit Tiers
Free Tier
Perfect for testing and small projects.
- Requests: 100 requests/month
- Rate: 10 requests/minute
Developer Plan
For growing applications and startups.
- Requests: 10,000 requests/month
- Rate: 120 requests/minute
Pro Plan
For production applications with steady traffic.
- Requests: 100,000 requests/month
- Rate: 200 requests/minute
View detailed pricing for all plans and features.
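If you want to reference these limits from client-side throttling code, one option is to keep them in a small lookup object. The values below come straight from the tiers above; the object itself is just an illustrative sketch, not part of the API:
// Plan limits from the tiers above: per-minute rate and monthly quota.
// Illustrative only; keep it in sync with your actual plan.
const PLAN_LIMITS = {
free: { perMinute: 10, perMonth: 100 },
developer: { perMinute: 120, perMonth: 10000 },
pro: { perMinute: 200, perMonth: 100000 }
};
A table like this can feed the client-side RateLimiter shown under Best Practices, e.g. new RateLimiter(PLAN_LIMITS.developer.perMinute, 60000).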
How Rate Limiting Works
Rate limits are enforced using a sliding window algorithm:
- Each API key has a maximum number of requests allowed per time window
- The window slides continuously rather than resetting at fixed intervals
Example
With a limit of 120 requests/minute:
- You can make up to 120 requests within any 60-second period
- If you make all 120 requests in the first 10 seconds, you must wait about 50 seconds until the oldest requests fall outside the window
- The window slides continuously, not in fixed 1-minute blocks (see the sketch below)
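Conceptually, a sliding window keeps the timestamps of recent requests and only admits a new one if fewer than the limit fall inside the last 60 seconds. The sketch below is a simplified illustration of that idea, not the exact server-side implementation:
// Simplified sliding-window check (illustration only).
const WINDOW_MS = 60 * 1000;
const LIMIT = 120; // e.g. Developer plan: 120 requests/minute
const recentRequests = []; // timestamps of requests inside the current window
function isAllowed(now = Date.now()) {
// Drop timestamps that have slid out of the 60-second window
while (recentRequests.length > 0 && now - recentRequests[0] >= WINDOW_MS) {
recentRequests.shift();
}
if (recentRequests.length < LIMIT) {
recentRequests.push(now); // record this request
return true;
}
return false; // over the limit until the oldest timestamp expires
}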
Rate Limit Headers
Every API response includes rate limit information in the headers:
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 115
X-RateLimit-Reset: 1699564800
Header Descriptions
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Remaining requests in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
Reading Rate Limit Headers
const response = await fetch(
'https://bifrost.api-armor.com/v1/check?email=test@example.com',
{
headers: { 'Authorization': `Bearer ${API_KEY}` }
}
);
const rateLimit = {
limit: Number(response.headers.get('X-RateLimit-Limit')),
remaining: Number(response.headers.get('X-RateLimit-Remaining')),
reset: Number(response.headers.get('X-RateLimit-Reset'))
};
console.log('Rate Limit Info:', rateLimit);
// Check if we're close to the limit
if (rateLimit.remaining < 10) {
console.warn('Approaching rate limit!');
}
Rate Limit Exceeded Response
When you exceed your rate limit, you'll receive a 429 Too Many Requests response:
{
"error": "rate_limit_exceeded",
"message": "Rate limit exceeded. Please try again later.",
"retry_after": 60
}
The retry_after field indicates how many seconds you should wait before retrying.
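In practice that means checking for a 429 status and pausing for retry_after seconds before trying again. Here is a minimal sketch (the function name is illustrative; the exponential backoff example under Best Practices builds on the same idea):
// Minimal sketch: wait for retry_after seconds after a 429, then retry once.
async function checkWithRetryAfter(email) {
const url = `https://bifrost.api-armor.com/v1/check?email=${encodeURIComponent(email)}`;
const options = { headers: { 'Authorization': `Bearer ${API_KEY}` } };
let response = await fetch(url, options);
if (response.status === 429) {
const error = await response.json();
const waitSeconds = error.retry_after || 60; // assume 60s if the field is missing
await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
response = await fetch(url, options); // single retry after waiting
}
return await response.json();
}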
Best Practices
1. Implement Client-Side Rate Limiting
Track your request rate and implement client-side throttling:
class RateLimiter {
constructor(maxRequests, windowMs) {
this.maxRequests = maxRequests;
this.windowMs = windowMs;
this.requests = [];
}
async checkLimit() {
const now = Date.now();
// Remove requests outside the window
this.requests = this.requests.filter(
time => now - time < this.windowMs
);
// Check if we're at the limit
if (this.requests.length >= this.maxRequests) {
const oldestRequest = this.requests[0];
const waitTime = this.windowMs - (now - oldestRequest);
// Wait until the oldest request expires
await new Promise(resolve => setTimeout(resolve, waitTime));
return this.checkLimit();
}
// Add current request
this.requests.push(now);
}
}
// Usage
const limiter = new RateLimiter(20, 60000); // 20 req/min
async function checkEmail(email) {
await limiter.checkLimit();
const response = await fetch(
`https://bifrost.api-armor.com/v1/check?email=${encodeURIComponent(email)}`,
{
headers: { 'Authorization': `Bearer ${API_KEY}` }
}
);
return await response.json();
}
2. Handle Rate Limit Errors Gracefully
Implement exponential backoff when you hit rate limits:
async function checkEmailWithBackoff(email, retries = 3) {
for (let i = 0; i < retries; i++) {
const response = await fetch(
`https://bifrost.api-armor.com/v1/check?email=${encodeURIComponent(email)}`,
{
headers: { 'Authorization': `Bearer ${API_KEY}` }
}
);
if (response.ok) {
return await response.json();
}
if (response.status === 429) {
const error = await response.json();
const retryAfter = error.retry_after || Math.pow(2, i + 1); // fallback backoff in seconds: 2, 4, 8
if (i < retries - 1) {
console.log(`Rate limited. Waiting ${retryAfter}s before retry...`);
await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
continue;
}
}
throw new Error(`Request failed: ${response.status}`);
}
}
3. Cache Results
Reduce API calls by caching validation results:
class EmailChecker {
constructor(cacheTimeout = 24 * 60 * 60 * 1000) { // 24 hours
this.cache = new Map();
this.cacheTimeout = cacheTimeout;
}
async check(email) {
// Check cache first
const cached = this.cache.get(email);
if (cached && Date.now() - cached.timestamp < this.cacheTimeout) {
console.log('Cache hit:', email);
return cached.data;
}
// Make API request
const response = await fetch(
`https://bifrost.api-armor.com/v1/check?email=${encodeURIComponent(email)}`,
{
headers: { 'Authorization': `Bearer ${API_KEY}` }
}
);
const data = await response.json();
// Cache the result
this.cache.set(email, {
data,
timestamp: Date.now()
});
return data;
}
}
const checker = new EmailChecker();
4. Batch Processing with Delays
When processing large batches of emails, add delays between requests:
async function processBatch(emails, delayMs = 100) {
const results = [];
for (const email of emails) {
try {
const result = await checkEmail(email);
results.push({ email, result });
// Add delay between requests
await new Promise(resolve => setTimeout(resolve, delayMs));
} catch (error) {
results.push({ email, error: error.message });
}
}
return results;
}
// Process 100 emails with 100ms delay = ~10 requests/second
const results = await processBatch(emails, 100);
5. Use Request Queues
For high-volume applications, implement a request queue:
class RequestQueue {
constructor(rateLimit, windowMs) {
this.rateLimit = rateLimit;
this.windowMs = windowMs;
this.queue = [];
this.processing = false;
}
async enqueue(request) {
return new Promise((resolve, reject) => {
this.queue.push({ request, resolve, reject });
this.process();
});
}
async process() {
if (this.processing || this.queue.length === 0) return;
this.processing = true;
while (this.queue.length > 0) {
const { request, resolve, reject } = this.queue.shift();
try {
const result = await request();
resolve(result);
} catch (error) {
reject(error);
}
// Add delay to respect rate limits
await new Promise(resolve =>
setTimeout(resolve, this.windowMs / this.rateLimit)
);
}
this.processing = false;
}
}
// Usage
const queue = new RequestQueue(20, 60000); // 20 req/min
async function checkEmailQueued(email) {
return queue.enqueue(async () => {
const response = await fetch(
`https://bifrost.api-armor.com/v1/check?email=${encodeURIComponent(email)}`,
{
headers: { 'Authorization': `Bearer ${API_KEY}` }
}
);
return await response.json();
});
}
6. Monitor Your Usage
Track your API usage in your dashboard to:
- Monitor requests per day/month
- Identify usage patterns
- Plan ahead for upgrades
- Set up usage alerts
// Log rate limit info with each request
async function checkEmailWithLogging(email) {
const response = await fetch(
`https://bifrost.api-armor.com/v1/check?email=${encodeURIComponent(email)}`,
{
headers: { 'Authorization': `Bearer ${API_KEY}` }
}
);
const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
const limit = Number(response.headers.get('X-RateLimit-Limit'));
const usagePercent = (limit - remaining) / limit * 100;
console.log(`Rate limit usage: ${usagePercent.toFixed(1)}% (${remaining}/${limit} remaining)`);
// Alert if usage is high
if (usagePercent > 90) {
console.warn('WARNING: Approaching rate limit!');
// Send alert to monitoring system
}
return await response.json();
}
Upgrading Your Plan
If you consistently hit rate limits, consider upgrading your plan:
- Visit your Dashboard
- Go to Billing & Plans
- Select a higher tier
- Changes take effect immediately
Need custom limits? Contact us for Enterprise pricing with tailored rate limits and SLAs.
Rate Limit FAQs
Are rate limits per API key or per account?
Rate limits are applied per API key. You can create multiple API keys to separate usage across different applications or environments.
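For instance, you could load a separate key per environment so staging traffic never counts against production's limit. A quick sketch; the environment variable names below are placeholders, not something API Shield requires:
// Illustrative only: select a different API key per environment.
// API_ARMOR_KEY_PROD and API_ARMOR_KEY_DEV are placeholder variable names.
const API_KEY = process.env.NODE_ENV === 'production'
? process.env.API_ARMOR_KEY_PROD
: process.env.API_ARMOR_KEY_DEV;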
What happens if I exceed my monthly quota?
When you reach your monthly request quota:
- Free tier: Requests will be rejected with a 429 error
- Paid plans: Additional requests are billed at your plan's overage rate
- You can upgrade your plan anytime for higher limits
Do failed requests count towards my rate limit?
- ✅ Do count: All requests that reach our servers, including those that return errors
- ❌ Don't count: Requests that fail due to network issues before reaching our servers
Can I request a temporary rate limit increase?
Yes! If you have a temporary need (e.g., data migration), contact support and we can arrange a temporary increase.