Rate Limiting Configuration
UberLotto implements a three-tier in-memory rate limiting system to protect against abuse, DoS attacks, and replay spam. The rate limiter uses a sliding window algorithm with LRU eviction for memory management.
File: app/lib/rate-limiter.server.ts
Rate Limit Tiers
| Tier | Scope | Limit | Window | Purpose |
|---|---|---|---|---|
| Transaction | Per transaction ID | 10 req | 1 min | Prevents replay spam of same transaction |
| IP Address | Per client IP | 100 req | 1 min | Prevents DoS from single source |
| Global Circuit Breaker | All requests | 1000 req | 1 min | Infrastructure protection |
Correction
The manual documentation listed incorrect values. The table above reflects the actual values from RATE_LIMIT_CONFIG in rate-limiter.server.ts.
Additional Invoice Limits
Enforced in app/routes/api.plisio-invoice.ts (separate from the rate limiter module):
| Check | Limit | Scope |
|---|---|---|
| Pending transactions | Max 5 | Per user email |
| New invoices | Max 3/min | Per user email |
These limits query Supabase directly (countPendingTransactions, countRecentTransactions) rather than using the in-memory rate limiter.
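A hedged sketch of how these two checks could be sequenced in the invoice route. The exact signatures of countPendingTransactions and countRecentTransactions, the constant names, and the error payloads below are assumptions based on the description above, not the actual route code:

// Illustrative only: reject invoice creation when either per-email limit is hit.
const MAX_PENDING_PER_EMAIL = 5;
const MAX_NEW_INVOICES_PER_MINUTE = 3;

async function enforceInvoiceLimits(email: string): Promise<Response | null> {
  // Count transactions for this email that are still pending in Supabase.
  const pending = await countPendingTransactions(email);
  if (pending >= MAX_PENDING_PER_EMAIL) {
    return Response.json(
      { error: 'Too many pending transactions' },
      { status: 429 }
    );
  }

  // Count invoices created for this email within the last minute.
  const recent = await countRecentTransactions(email, 60_000);
  if (recent >= MAX_NEW_INVOICES_PER_MINUTE) {
    return Response.json(
      { error: 'Too many new invoices, try again shortly' },
      { status: 429 }
    );
  }

  return null; // limits not exceeded; proceed with invoice creation
}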
Implementation Details
Sliding Window Algorithm
The rate limiter uses a sliding window counter — more accurate than fixed-window approaches because it considers the actual timestamp of each request:
check(key: string): RateLimitResult {
const now = Date.now();
const entry = this.store.get(key) || { requests: [], firstRequest: now };
// Remove requests outside the sliding window
const validRequests = entry.requests.filter(
(timestamp) => timestamp > now - this.windowMs
);
// Check if limit exceeded
if (validRequests.length >= this.maxRequests) {
return { isAllowed: false, remaining: 0, resetTime: ... };
}
// Record this request
validRequests.push(now);
this.store.set(key, { requests: validRequests, firstRequest: entry.firstRequest });
return { isAllowed: true, remaining: this.maxRequests - validRequests.length, ... };
}
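For orientation, a minimal sketch of the option and result shapes the excerpt above assumes. The field names (maxRequests, windowMs, maxEntries, isAllowed, remaining, resetTime) are inferred from the code shown in this section; the actual declarations in rate-limiter.server.ts may differ, in particular the type of resetTime:

// Hypothetical shapes inferred from the excerpts in this section.
interface RateLimiterOptions {
  maxRequests: number; // requests allowed per window
  windowMs: number;    // sliding window length in milliseconds
  maxEntries: number;  // LRU cap on the number of tracked keys
}

interface RateLimitResult {
  isAllowed: boolean;  // false when the limit is exceeded
  remaining: number;   // requests left in the current window
  resetTime: Date;     // when the window frees up (assumed to be a Date here)
}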
LRU Eviction
When the store exceeds maxEntries (default: 10,000), the oldest entry is evicted:
private evictOldest(): void {
let oldestKey: string | null = null;
let oldestTime = Date.now();
for (const [key, entry] of this.store.entries()) {
if (entry.firstRequest < oldestTime) {
oldestTime = entry.firstRequest;
oldestKey = key;
}
}
if (oldestKey) this.store.delete(oldestKey);
}

Memory Usage
- ~100 bytes per entry
- 10,000 entries maximum = ~1 MB
- Performance: <1ms per check (Map lookup + array filter)
Singleton Instances
Three rate limiter instances are created at module level:
const transactionLimiter = new RateLimiter({
maxRequests: 10,
windowMs: 60_000,
maxEntries: 10_000,
});
const ipLimiter = new RateLimiter({
maxRequests: 100,
windowMs: 60_000,
maxEntries: 10_000,
});
const globalLimiter = new RateLimiter({
maxRequests: 1000,
windowMs: 60_000,
maxEntries: 1, // Global only needs one entry
});

Public API
Individual Check Functions
import {
checkTransactionRateLimit,
checkIPRateLimit,
checkGlobalRateLimit,
} from '@lib/rate-limiter.server';
// Each returns Promise<RateLimitResult>
const result = await checkIPRateLimit(clientIP, context);
if (!result.isAllowed) {
return Response.json(
{ error: 'Rate limit exceeded' },
{ status: 429, headers: createRateLimitHeaders(result) }
);
}

Combined Check
import { checkAllRateLimits } from '@lib/rate-limiter.server';
// Checks all three tiers in order: global → IP → transaction
const result = await checkAllRateLimits(txnId, clientIP, context);

Check order is optimized for fastest fail:
- Global — single lookup, protects infrastructure
- IP — prevents single-source DoS
- Transaction — prevents replay spam
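A minimal sketch of how this ordering could short-circuit, assuming the three individual check functions above and the RateLimitResult shape sketched earlier; the signatures of checkGlobalRateLimit and checkTransactionRateLimit, and how the real checkAllRateLimits merges results, are assumptions:

// Illustrative only: stop at the first tier that blocks the request.
async function checkAllRateLimitsSketch(
  txnId: string,
  clientIP: string,
  context: unknown
): Promise<RateLimitResult> {
  // 1. Global circuit breaker: one shared key, cheapest lookup.
  const global = await checkGlobalRateLimit(context);
  if (!global.isAllowed) return global;

  // 2. Per-IP limit: blocks single-source DoS before per-transaction work.
  const ip = await checkIPRateLimit(clientIP, context);
  if (!ip.isAllowed) return ip;

  // 3. Per-transaction limit: catches replay spam of the same transaction ID.
  return checkTransactionRateLimit(txnId, context);
}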
Helper Functions
import { extractClientIP, createRateLimitHeaders } from '@lib/rate-limiter.server';
// Extract client IP from request headers
const ip = extractClientIP(request);
// Priority: cf-connecting-ip → x-forwarded-for (first) → x-real-ip → 'unknown'
// Create standard rate limit response headers
const headers = createRateLimitHeaders(result);
// Returns: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, Retry-After

Response Headers
When a rate limit is hit, the response includes standard headers:
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed | 100 |
| X-RateLimit-Remaining | Requests remaining in window | 0 |
| X-RateLimit-Reset | Window reset time (ISO 8601) | 2024-01-15T12:01:00.000Z |
| Retry-After | Seconds until retry is allowed | 45 |
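A hedged sketch of how these headers could be assembled from a rate limit result. The real createRateLimitHeaders takes only the result; the extra limit parameter and the Date-typed resetTime here are assumptions carried over from the earlier sketch:

// Illustrative only: builds the four standard headers from a limit result.
function createRateLimitHeadersSketch(
  result: RateLimitResult,
  limit: number // assumed to be supplied separately or carried on the result
): Headers {
  const resetMs = result.resetTime.getTime() - Date.now();
  return new Headers({
    'X-RateLimit-Limit': String(limit),
    'X-RateLimit-Remaining': String(result.remaining),
    'X-RateLimit-Reset': result.resetTime.toISOString(),
    // Retry-After is expressed in whole seconds and never negative.
    'Retry-After': String(Math.max(0, Math.ceil(resetMs / 1000))),
  });
}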
Security Event Logging
Rate limit violations are automatically logged to security_events via logRateLimitViolation():
{
event_type: 'rate_limit_violation',
severity: 'warning',
source: 'rate_limiter',
status: 'blocked',
error_message: 'Rate limit exceeded: 100 requests per 1 minute(s)',
event_data: { identifier: '192.168.1.100', limit: 100, window_minutes: 1 }
}

Periodic Cleanup
Expired entries are cleaned up to prevent memory growth:
import { cleanupExpiredEntries, startPeriodicCleanup } from '@lib/rate-limiter.server';
// Manual cleanup
const stats = cleanupExpiredEntries();
// Returns: { transaction: 5, ip: 12, global: 0, total: 17 }
// Automatic cleanup (every 5 minutes)
startPeriodicCleanup(); // Call once at app startup
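A minimal sketch of what the periodic cleanup might look like internally, assuming it simply wraps setInterval around cleanupExpiredEntries using the configured interval; the guard against double registration and the debug log are illustrative additions, not confirmed behavior:

// Illustrative only: schedule cleanup every cleanupIntervalMs (5 minutes).
let cleanupTimer: ReturnType<typeof setInterval> | null = null;

function startPeriodicCleanupSketch(): void {
  if (cleanupTimer) return; // avoid registering the interval twice
  cleanupTimer = setInterval(() => {
    const stats = cleanupExpiredEntries();
    console.debug('[rate-limiter] cleanup removed entries:', stats.total);
  }, RATE_LIMIT_CONFIG.cleanupIntervalMs);
}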
Monitoring
import { getRateLimiterStats } from '@lib/rate-limiter.server';
const stats = getRateLimiterStats();
// { transaction: 150, ip: 3200, global: 1, total: 3351 }

Single-Instance Limitation
The in-memory rate limiter only works for single-instance deployments. On Shopify Oxygen (stateless edge workers), each worker instance maintains its own rate limit state. For distributed rate limiting, migration to Redis or a similar shared store would be needed. The API interface is designed to make this migration straightforward.
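As an illustration of that migration path, a hedged sketch of a check backed by a shared counter store that returns the same RateLimitResult shape. The store interface, key naming, and the switch from a sliding window to a simpler fixed-window counter are all assumptions, not part of the current codebase:

// Illustrative only: same RateLimitResult shape, served from a shared store
// using a fixed-window counter instead of the in-memory sliding window.
interface SharedCounterStore {
  // Atomically increments a counter and returns its new value,
  // setting the expiry only when the key is first created.
  incrementWithTTL(key: string, ttlMs: number): Promise<number>;
}

async function checkSharedRateLimit(
  store: SharedCounterStore,
  key: string,
  maxRequests: number,
  windowMs: number
): Promise<RateLimitResult> {
  // Bucket requests into the current fixed window, e.g. "ip:1.2.3.4:1705312860000".
  const windowStart = Math.floor(Date.now() / windowMs) * windowMs;
  const count = await store.incrementWithTTL(`${key}:${windowStart}`, windowMs);
  return {
    isAllowed: count <= maxRequests,
    remaining: Math.max(0, maxRequests - count),
    resetTime: new Date(windowStart + windowMs),
  };
}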
Configuration Reference
All configuration is defined in RATE_LIMIT_CONFIG:
export const RATE_LIMIT_CONFIG = {
transaction: { maxRequests: 10, windowMs: 60_000 },
ip: { maxRequests: 100, windowMs: 60_000 },
global: { maxRequests: 1000, windowMs: 60_000 },
maxEntries: 10_000,
cleanupIntervalMs: 300_000, // 5 minutes
} as const;