Rate Limits
Understanding and working with API rate limits
Overview
Rate limits help ensure the SafePays API remains stable and responsive for all users. This guide explains our rate limiting policies and how to work within them effectively.
Current Limits
Per Minute
1,000 requests per minute per API key
Per Hour
10,000 requests per hour per API key
Rate limits are applied per API key. If you need higher limits for your use case, please contact support.
Rate Limit Headers
Every API response includes headers with rate limit information:
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the window | 1000 |
| X-RateLimit-Remaining | Requests remaining in the current window | 950 |
| X-RateLimit-Reset | Unix timestamp when the window resets | 1642089600 |
Example Response Headers
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 950
X-RateLimit-Reset: 1642089600
Rate Limit Exceeded
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1642089600
Retry-After: 60
{
"error": "Rate limit exceeded. Please retry after 60 seconds."
}
The Retry-After header indicates how many seconds to wait before retrying.
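For simple cases, you can honor this header directly. Here is a minimal sketch (the url and options values are placeholders) that waits and retries once; the exponential backoff pattern under Best Practices below is more robust:
async function fetchWithRetryAfter(url, options) {
  const response = await fetch(url, options);
  if (response.status === 429) {
    // Fall back to 60 seconds if the header is missing
    const waitSeconds = parseInt(response.headers.get('retry-after') || '60', 10);
    await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
    return fetch(url, options); // retry once after waiting
  }
  return response;
}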
Best Practices
1. Monitor Rate Limit Headers
Track your usage to avoid hitting limits:
class RateLimitMonitor {
constructor() {
this.remaining = null;
this.limit = null;
this.resetTime = null;
}
updateFromHeaders(headers) {
this.remaining = parseInt(headers.get('x-ratelimit-remaining'), 10);
this.limit = parseInt(headers.get('x-ratelimit-limit'), 10);
this.resetTime = parseInt(headers.get('x-ratelimit-reset'), 10);
// Warn when approaching limit
if (this.remaining < this.limit * 0.1) {
console.warn(`Rate limit warning: ${this.remaining} requests remaining`);
}
}
shouldThrottle() {
// Throttle when less than 10% remaining
return this.remaining < this.limit * 0.1;
}
getResetIn() {
if (!this.resetTime) return null;
const now = Math.floor(Date.now() / 1000);
return Math.max(0, this.resetTime - now);
}
}
// Usage
const monitor = new RateLimitMonitor();
async function apiCall(url, options) {
const response = await fetch(url, options);
monitor.updateFromHeaders(response.headers);
if (monitor.shouldThrottle()) {
const waitSeconds = monitor.getResetIn() || 1;
console.log(`Throttling: Reset in ${waitSeconds} seconds`);
// Pause so the next request starts after the window resets
await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
}
return response;
}
2. Implement Exponential Backoff
When rate limited, use exponential backoff with jitter:
async function callWithBackoff(fn, maxRetries = 5) {
let lastError;
for (let i = 0; i < maxRetries; i++) {
try {
return await fn();
} catch (error) {
if (error.status === 429) {
lastError = error;
// Get the retry delay from the Retry-After header, or calculate one
const retryAfter = error.headers?.get('retry-after');
let delay;
if (retryAfter) {
delay = parseInt(retryAfter, 10) * 1000;
} else {
} else {
// Exponential backoff with jitter
const baseDelay = Math.min(1000 * Math.pow(2, i), 30000);
const jitter = Math.random() * 1000;
delay = baseDelay + jitter;
}
console.log(`Rate limited. Retrying in ${delay}ms...`);
await new Promise(resolve => setTimeout(resolve, delay));
} else {
throw error;
}
}
}
throw lastError;
}
// Usage
const result = await callWithBackoff(async () => {
const response = await fetch(url, options);
if (!response.ok) {
const error = new Error(`HTTP ${response.status}`);
error.status = response.status;
error.headers = response.headers;
throw error;
}
return response.json();
});
3. Batch Operations
Reduce API calls by batching operations where possible:
// Instead of creating invoices one by one
async function createInvoicesIndividually(invoiceDataArray) {
const results = [];
for (const data of invoiceDataArray) {
const result = await createInvoice(data); // 1 API call each
results.push(result);
}
return results;
}
// Better: Create each customer once, then create all of that customer's invoices
async function createInvoicesBatched(invoiceDataArray) {
// Group by customer email
const byCustomer = {};
for (const data of invoiceDataArray) {
if (!byCustomer[data.email]) {
byCustomer[data.email] = [];
}
byCustomer[data.email].push(data);
}
const results = [];
for (const [email, invoices] of Object.entries(byCustomer)) {
// Create customer once
const customer = await createOrGetCustomer(email);
// Create all invoices for this customer
for (const invoice of invoices) {
const result = await createInvoice({
...invoice,
customer_id: customer.id
});
results.push(result);
}
}
return results;
}
4. Implement Request Queuing
Queue requests to stay within limits:
class RateLimitedQueue {
constructor(maxPerMinute = 1000) {
this.queue = [];
this.processing = false;
this.requestTimes = [];
this.maxPerMinute = maxPerMinute;
}
async add(fn) {
return new Promise((resolve, reject) => {
this.queue.push({ fn, resolve, reject });
this.process();
});
}
async process() {
if (this.processing || this.queue.length === 0) return;
this.processing = true;
while (this.queue.length > 0) {
// Clean old request times
const oneMinuteAgo = Date.now() - 60000;
this.requestTimes = this.requestTimes.filter(t => t > oneMinuteAgo);
// Check if we can make a request
if (this.requestTimes.length >= this.maxPerMinute) {
// Wait until we can make another request
const oldestRequest = this.requestTimes[0];
const waitTime = 60000 - (Date.now() - oldestRequest) + 100;
await new Promise(resolve => setTimeout(resolve, waitTime));
continue;
}
// Process next request
const { fn, resolve, reject } = this.queue.shift();
this.requestTimes.push(Date.now());
try {
const result = await fn();
resolve(result);
} catch (error) {
reject(error);
}
// Small delay between requests (note: 100 ms spacing also caps throughput at about 600 requests/minute)
await new Promise(resolve => setTimeout(resolve, 100));
}
this.processing = false;
}
}
// Usage
const queue = new RateLimitedQueue(900); // Leave some buffer
async function createManyInvoices(invoiceDataArray) {
const promises = invoiceDataArray.map(data =>
queue.add(() => createInvoice(data))
);
return Promise.all(promises);
}
5. Cache Responses
Reduce API calls by caching responses:
class APICache {
constructor(ttl = 300000) { // 5 minutes default
this.cache = new Map();
this.ttl = ttl;
}
getCacheKey(method, url, params) {
return `${method}:${url}:${JSON.stringify(params)}`;
}
get(key) {
const item = this.cache.get(key);
if (!item) return null;
if (Date.now() > item.expiry) {
this.cache.delete(key);
return null;
}
return item.data;
}
set(key, data) {
this.cache.set(key, {
data,
expiry: Date.now() + this.ttl
});
}
async fetch(method, url, params, fetcher) {
const key = this.getCacheKey(method, url, params);
// Check cache for GET requests
if (method === 'GET') {
const cached = this.get(key);
if (cached) {
console.log('Cache hit:', key);
return cached;
}
}
// Fetch fresh data
const data = await fetcher();
// Cache GET responses
if (method === 'GET') {
this.set(key, data);
}
return data;
}
}
// Usage
const cache = new APICache();
async function getCustomer(customerId) {
return cache.fetch(
'GET',
`/api/customer/${customerId}`,
{ api_key: API_KEY },
async () => {
const response = await fetch(`/api/customer/${customerId}?api_key=${API_KEY}`);
return response.json();
}
);
}
Rate Limit Strategies
For Bulk Operations
When processing large batches:
- Pre-calculate capacity: Check rate limits before starting (see the sketch below)
- Chunk operations: Break into smaller batches
- Add delays: Space out requests
- Handle failures: Retry failed items separately
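As a rough sketch of the first point, one inexpensive request is enough to read your current capacity from the rate limit headers before starting a batch. The /api/ping endpoint here is a placeholder for any cheap authenticated endpoint, not a confirmed part of the SafePays API:
async function getRemainingCapacity() {
  // Any authenticated request returns the rate limit headers
  const response = await fetch(`/api/ping?api_key=${API_KEY}`);
  return {
    remaining: parseInt(response.headers.get('x-ratelimit-remaining'), 10),
    resetAt: parseInt(response.headers.get('x-ratelimit-reset'), 10)
  };
}
If remaining is smaller than the batch, wait until resetAt or shrink the first chunk. The function below then processes the batch in chunks, with delays and retry handling: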
async function bulkCreateInvoices(invoices, chunkSize = 50) {
const results = [];
const failed = [];
// Process in chunks
for (let i = 0; i < invoices.length; i += chunkSize) {
const chunk = invoices.slice(i, i + chunkSize);
console.log(`Processing chunk ${i / chunkSize + 1} of ${Math.ceil(invoices.length / chunkSize)}`);
// Process chunk with delays
for (const invoice of chunk) {
let created = false;
while (!created) {
try {
const result = await createInvoice(invoice);
results.push(result);
created = true;
} catch (error) {
if (error.status === 429) {
// Rate limited - wait, then retry this same invoice
const retryAfter = parseInt(error.headers?.get('retry-after') || '60', 10);
console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
} else {
// Other error - add to failed list and move on
failed.push({ invoice, error });
created = true;
}
}
}
// Small delay between requests
await new Promise(resolve => setTimeout(resolve, 100));
}
// Delay between chunks
await new Promise(resolve => setTimeout(resolve, 1000));
}
return { results, failed };
}
For Real-time Applications
When rate limits affect user experience:
- Implement client-side caching: Reduce redundant requests
- Use webhooks: Get push notifications instead of polling (see the sketch after this list)
- Optimize API usage: Combine multiple operations
- Show loading states: Inform users during delays
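As an example of the webhook approach, a receiver can react to pushed events instead of polling for status changes. This is a minimal sketch assuming an Express server; the route path and the invoice.paid event shape are illustrative assumptions, not a confirmed part of the SafePays webhook API:
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical receiver: consumes none of your API rate limit
app.post('/webhooks/safepays', (req, res) => {
  const event = req.body;
  if (event.type === 'invoice.paid') {
    // Update local state instead of re-fetching the invoice
    console.log(`Invoice ${event.data.id} was paid`);
  }
  res.sendStatus(200); // acknowledge quickly so delivery is not retried
});

app.listen(3000);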
Monitoring and Alerts
Set up monitoring for rate limit usage:
class RateLimitAlerts {
constructor(threshold = 0.8) {
this.threshold = threshold;
this.alerted = false;
}
check(remaining, limit) {
const usage = 1 - (remaining / limit);
if (usage > this.threshold && !this.alerted) {
this.sendAlert({
message: `Rate limit warning: ${(usage * 100).toFixed(1)}% used`,
remaining,
limit
});
this.alerted = true;
} else if (usage < this.threshold) {
this.alerted = false;
}
}
sendAlert(data) {
console.warn('RATE LIMIT ALERT:', data);
// Send to monitoring service
// Send email/SMS alert
// Log to dashboard
}
}
Requesting Higher Limits
If you consistently hit rate limits, you may need higher limits:
- Analyze your usage patterns
- Optimize your implementation using strategies above
- Contact support with:
- Your use case
- Current usage patterns
- Expected growth
- Peak usage times
Enterprise customers can request custom rate limits. Contact sales@safepays.com for more information.
Rate Limiting FAQ
Q: Are rate limits per API key or per account?
A: Rate limits are applied per API key. Each key has its own limit.
Q: What happens to queued webhooks during rate limiting?
A: Webhooks are not affected by your API rate limits. They are delivered independently.
Q: Can I check my current usage?
A: Yes, check the rate limit headers in any API response, or view usage in the dashboard.
Q: Do failed requests count against the limit?
A: Yes, all requests count including those that result in errors.
Q: Is there a burst limit?
A: The per-minute limit (1,000) acts as a burst cap within the hourly limit (10,000). Sustaining the full 1,000 requests per minute would exhaust the hourly quota in 10 minutes, so the hourly limit is the binding constraint for sustained traffic.
Related Resources
- Error Handling - Handle rate limit errors
- API Reference - Optimize API usage
- Webhooks - Use push notifications instead of polling