API Rate Limiting and Throttling
Rate limiting and throttling are crucial techniques for controlling the number of incoming requests to an API. They help prevent abuse, ensure fair usage, and maintain the stability of the service. In this blog post, we will explore how to implement rate limiting and throttling in an Express application.
What is Rate Limiting?
Rate limiting restricts the number of requests a client can make to an API within a specified time frame. For example, you might allow a user to make 100 requests per hour. If they exceed this limit, they receive a “429 Too Many Requests” response.
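From the client's point of view, hitting the limit simply means the next request comes back with status 429, often along with a Retry-After header indicating how long to wait. As a rough, illustrative sketch of how a client might react (the URL, single-retry strategy, and 1-second fallback are assumptions for the example, not part of any particular API), using the fetch available in modern browsers and Node 18+:

// Retry once after a 429, honoring the Retry-After header when the server sends one
async function fetchWithRetry(url) {
  const response = await fetch(url);
  if (response.status !== 429) {
    return response;
  }
  // Fall back to a 1-second wait if no Retry-After header is present
  const retryAfterSeconds = Number(response.headers.get('Retry-After')) || 1;
  await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  // A single retry; real clients usually back off exponentially
  return fetch(url);
}

fetchWithRetry('http://localhost:3000/').then((res) => console.log(res.status));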
What is Throttling?
Throttling is a broader concept: rather than simply rejecting requests over a limit, it slows clients down once they cross a certain threshold. For instance, if a user exceeds the rate limit, instead of returning an error outright, you might delay their requests so they are served more slowly until their usage drops back under the threshold.
Implementing Rate Limiting in Express
To implement rate limiting in an Express application, we can use the express-rate-limit middleware. First, we need to install the package:
npm install express-rate-limit
Example of Rate Limiting
Here’s how to set up rate limiting in an Express app:
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Set up rate limiter: maximum of 100 requests per hour
const limiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour in milliseconds
  max: 100, // Limit each IP to 100 requests per windowMs
  message: 'Too many requests from this IP, please try again later.', // Response message
});

// Apply the rate limiter to all requests
app.use(limiter);

// Sample route
app.get('/', (req, res) => {
  res.send('Hello, world!');
});

// Start the server
app.listen(3000, () => {
  console.log('Server started on http://localhost:3000');
});
Line-by-Line Explanation
const rateLimit = require('express-rate-limit');
Imports the express-rate-limit middleware used to handle rate limiting.

const limiter = rateLimit({ ... });
Configures the rate limiter with the following settings:
windowMs: 60 * 60 * 1000 sets the time window for rate limiting to one hour (60 minutes × 60 seconds × 1000 milliseconds).
max: 100 limits each IP address to 100 requests per hour.
message: 'Too many requests from this IP, please try again later.' customizes the response message sent when the limit is exceeded.

app.use(limiter);
Applies the rate limiter middleware to all incoming requests, enforcing the defined limits.

app.get('/', (req, res) => { ... });
Sets up a sample route that responds with “Hello, world!”.

app.listen(3000, () => { ... });
Starts the server on port 3000.
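In a real application you rarely want a single global limit. express-rate-limit lets you mount a limiter on specific routes, and recent versions can expose the standardized RateLimit-* response headers so clients can see how much quota they have left. Here is a sketch of that configuration, assuming the standardHeaders and legacyHeaders options available in express-rate-limit v6 and later; the '/api' prefix and the numbers are just examples:

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Stricter limiter for API routes only: 30 requests per 15 minutes per IP
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 30,
  standardHeaders: true, // Send RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset headers
  legacyHeaders: false, // Disable the older X-RateLimit-* headers
});

// Only requests under /api are counted and limited; other routes are unaffected
app.use('/api', apiLimiter);

app.get('/api/data', (req, res) => {
  res.json({ ok: true });
});

app.listen(3000);

Keep in mind that express-rate-limit stores its counters in memory by default, so if you run several server instances behind a load balancer you will need one of its external store adapters (for example, a Redis-backed store) so that all instances share the same counts.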
Throttling Requests
For throttling, you may want to implement a more advanced approach, such as delaying requests after a certain threshold. You can achieve this with custom middleware:
Example of Simple Throttling
const express = require('express');

const app = express();

let requestCount = 0; // Counter for requests seen in the current window
const requestLimit = 100; // Max requests allowed per window before throttling kicks in
const throttleTime = 1000; // Delay (in milliseconds) applied to requests over the limit
const windowMs = 60 * 1000; // Length of the counting window (1 minute)

// Reset the counter at the start of each window
setInterval(() => {
  requestCount = 0;
}, windowMs);

app.use((req, res, next) => {
  requestCount++;
  if (requestCount > requestLimit) {
    // Over the threshold: delay this request instead of rejecting it
    return setTimeout(next, throttleTime);
  }
  next(); // Under the threshold: proceed immediately
});

// Sample route
app.get('/', (req, res) => {
  res.send('Hello, world!');
});

// Start the server
app.listen(3000, () => {
  console.log('Server started on http://localhost:3000');
});
Line-by-Line Explanation
let requestCount = 0;
Initializes a counter that tracks how many requests have arrived in the current window.

const requestLimit = 100;
Defines the maximum number of requests allowed per window before throttling begins.

const throttleTime = 1000;
Sets the delay, in milliseconds (1 second), applied to each request over the limit.

setInterval(() => { requestCount = 0; }, windowMs);
Resets the counter at the start of every window so the throttle only reacts to recent traffic.

app.use((req, res, next) => { ... });
Middleware that counts incoming requests. Requests under the limit pass through immediately via next().

return setTimeout(next, throttleTime);
Once the limit is exceeded, each additional request is held for the throttle time before being passed on, slowing clients down instead of rejecting them.
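This hand-rolled middleware is deliberately simple: it counts all clients together and applies one fixed delay. For per-IP throttling in production, packages such as express-slow-down (a companion to express-rate-limit) provide this behavior out of the box. Either way, you can watch both mechanisms in action by firing a burst of requests at the server and noting when responses start returning 429s (rate limiting) or arriving noticeably later (throttling). A small, illustrative test script using the fetch built into Node 18+; the URL and request count are placeholders:

// burst.js: send 120 sequential requests and report status codes and response times
const url = 'http://localhost:3000/';

async function burst(total = 120) {
  for (let i = 1; i <= total; i++) {
    const started = Date.now();
    const res = await fetch(url);
    const elapsed = Date.now() - started;
    console.log(`request ${i}: status ${res.status}, ${elapsed} ms`);
  }
}

burst();

Run it with node burst.js while the server is up: with the rate limiter applied you should see 429 responses after the hundredth request, and with the throttling middleware you should see response times jump once the threshold is crossed.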