graphql-limiter
A GraphQL rate limiting library using query complexity analysis.
Summary
Developed under the tech accelerator OS Labs, GraphQLGate strives for a principled approach to complexity analysis and rate limiting for GraphQL queries by accurately estimating an upper bound on the response size of a query. Within a loosely opinionated framework with many configuration options, you can reliably throttle GraphQL queries by complexity and depth to protect your GraphQL API. Our solution is inspired by this paper from IBM research teams.
Table of Contents
- Getting Started
- Configuration
- Notes on Lists
- How It Works
- Response
- Error Handling
- Internals
- Future Development
- Contributions
- Developers
- License
Getting Started
Install the package
npm i graphql-limiter
Import the package and add the rate-limiting middleware to the Express middleware chain before the GraphQL server.
NOTE: a Redis server instance will need to be started in order for the limiter to cache data.
// import package
import { expressGraphQLRateLimiter } from 'graphql-limiter';
/**
 * Import other dependencies
 */

// Add the middleware into your GraphQL middleware chain
app.use(
    '/gql',
    expressGraphQLRateLimiter(schemaObject, {
        rateLimiter: {
            type: 'TOKEN_BUCKET',
            refillRate: 10,
            capacity: 100,
        },
    }) /** add GraphQL server here */
);
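For example, a complete chain might look like the following. This is a minimal sketch that assumes express-graphql is used as the GraphQL server; any Express-compatible GraphQL handler can be substituted.

import express from 'express';
import { graphqlHTTP } from 'express-graphql';
import { expressGraphQLRateLimiter } from 'graphql-limiter';

const app = express();

app.use(
    '/gql',
    // the rate limiter runs first and either blocks the request or calls next()
    expressGraphQLRateLimiter(schemaObject, {
        rateLimiter: { type: 'TOKEN_BUCKET', refillRate: 10, capacity: 100 },
    }),
    // the GraphQL server only handles requests that were not throttled
    graphqlHTTP({ schema: schemaObject, graphiql: true })
);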
Configuration
- schema: GraphQLSchema | required
- config: ExpressMiddlewareConfig | required
    - rateLimiter: RateLimiterOptions | required
        - type: 'TOKEN_BUCKET' | 'FIXED_WINDOW' | 'SLIDING_WINDOW_LOG' | 'SLIDING_WINDOW_COUNTER'
        - capacity: number
        - refillRate: number | bucket algorithms only
        - windowSize: number | (in ms) window algorithms only
    - redis: RedisConfig
        - options: RedisOptions | ioredis configuration options | defaults to standard ioredis connection options (localhost:6379)
        - keyExpiry: number (ms) | custom expiry of keys in the redis cache | defaults to 24 hours
    - typeWeights: TypeWeightObject
        - mutation: number | assigned weight of mutations | defaults to 10
        - query: number | assigned weight of a query | defaults to 1
        - object: number | assigned weight of GraphQL object, interface and union types | defaults to 1
        - scalar: number | assigned weight of GraphQL scalar and enum types | defaults to 0
    - depthLimit: number | throttle queries by the depth of the nested structure | defaults to Infinity (i.e. no limit)
    - enforceBoundedLists: boolean | if true, an error will be thrown if any list types are not bounded by slicing arguments [first, last, limit] or directives | defaults to false
    - dark: boolean | if true, the package will calculate complexity, depth and tokens but not throttle any queries. Use this to dark launch the package and monitor the rate limiter's impact without limiting user requests.
All configuration options
expressGraphQLRateLimiter(schemaObject, {
    rateLimiter: {
        type: 'SLIDING_WINDOW_LOG', // rate-limiter selection
        windowSize: 6000, // 6 seconds
        capacity: 100,
    },
    redis: {
        keyExpiry: 14400000, // 4 hours, defaults to 86400000 (24 hours)
        options: {
            host: 'localhost', // ioredis connection options
            port: 6379,
        },
    },
    typeWeights: {
        // weights of GraphQL types
        mutation: 10,
        query: 1,
        object: 1,
        scalar: 0,
    },
    enforceBoundedLists: false, // defaults to false
    dark: false, // defaults to false
    depthLimit: 7, // defaults to Infinity (ie. no depth limiting)
});
Notes on Lists
For queries that return a list, the complexity can be determined by providing a slicing argument to the query (first, last, limit), or by using a schema directive.

- Slicing arguments: lists must be bounded by one integer slicing argument in order to calculate the complexity for the field. This package supports the slicing arguments first, last and limit. The complexity of the list will be the value passed as the argument to the field.
- Directives: to use directives, @listCost must be defined in your schema with directive @listCost(cost: Int!) on FIELD_DEFINITION. Then, on any field which resolves to an unbounded list, add @listCost(cost: [Int]) where [Int] is the complexity for this field.

(Note: slicing arguments are preferred and will override the @listCost directive! @listCost is in place as a fallback.)
directive @listCost(cost: Int!) on FIELD_DEFINITION

type Human {
    id: ID!
}

type Query {
    humans: [Human] @listCost(cost: 10)
}
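Because slicing arguments take precedence over @listCost, supplying one on a field bounds the list regardless of the directive. An illustrative query against a schema like the one above, assuming the humans field also accepted a first argument:

query {
    humans(first: 3) {
        # the slicing argument bounds the list at 3, overriding @listCost(cost: 10)
        id
    }
}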
How It Works
Requests are rate-limited based on the IP address associated with the request.
On startup, the GraphQL (GQL) schema is parsed to build an object that maps GQL types/fields to their corresponding weights. Type weights can be provided during initial configuration. When a request is received, this object is used to cross reference the fields queried by the user and compute the complexity of each field. The total complexity of the request is the sum of these values.
Complexity is determined statically (before any resolvers are called) to estimate the upper bound of the response size - a proxy for the work done by the server to build the response. The total complexity is then used to allow or block the request based on popular rate-limiting algorithms.
Requests for each user are processed sequentially by the rate limiter.
Example (with default weights):
query {
    # 1 query
    hero(episode: EMPIRE) {
        # 1 object
        name # 0 scalar
        id # 0 scalar
        friends(first: 3) {
            # 3 objects
            name # 0 scalar
            id # 0 scalar
        }
    }
    reviews(episode: EMPIRE, limit: 5) {
        # 5 objects
        stars # 0 scalar
        commentary # 0 scalar
    }
} # total complexity of 10
Response
Blocked Requests: blocked requests receive a response with:
- a status of 429 for Too Many Requests
- a Retry-After header indicating the time to wait in seconds before the request could be approved (Infinity if the complexity is greater than the rate-limiting capacity)
- a JSON response with the remaining tokens available, the complexity of the query, the depth of the query, the success of the query set to false, and the UNIX timestamp of the request
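A client can use this information to back off before retrying. A minimal client-side sketch (fetch-based; the endpoint path and query variable are placeholders):

const res = await fetch('/gql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
});

if (res.status === 429) {
    // seconds to wait before retrying (Infinity if the query can never be approved)
    const retryAfter = res.headers.get('Retry-After');
    const { tokens, complexity, depth, timestamp } = await res.json();
    console.log(`blocked: complexity ${complexity}, depth ${depth}, retry after ${retryAfter}s`);
}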
Successful Requests: successful requests are passed on to the next function in the middleware chain with the following properties saved to res.locals:
{
    graphqlGate: {
        success: boolean, // true when successful
        tokens: number, // tokens available after the request
        complexity: number, // complexity of the query
        depth: number, // depth of the query
        timestamp: number, // UNIX timestamp
    }
}
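Downstream middleware can read these values, for example to log the cost of each query. A minimal sketch (the logging handler is hypothetical and sits between the limiter and the GraphQL server):

app.use(
    '/gql',
    expressGraphQLRateLimiter(schemaObject, {
        rateLimiter: { type: 'TOKEN_BUCKET', refillRate: 10, capacity: 100 },
    }),
    (req, res, next) => {
        // hypothetical logging handler reading the documented res.locals properties
        const { success, tokens, complexity, depth } = res.locals.graphqlGate;
        console.log(`success=${success} complexity=${complexity} depth=${depth} tokens=${tokens}`);
        next();
    }
    /** add GraphQL server here */
);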
Error Handling
- Incoming queries are validated against the GraphQL schema. If the query is invalid, a response with status code 400 is returned along with an array of the GraphQL errors that were found.
- To avoid disrupting server activity, errors thrown during the analysis and rate-limiting of the query are logged and the request is passed on to the next piece of middleware in the chain.
Internals
This package exposes three additional functionalities which make up the internals of the package. Below is a brief documentation of them.
Complexity Analysis
typeWeightsFromSchema | function to create the type weight object from the schema for complexity analysis
- schema: GraphQLSchema | GraphQL schema object
- typeWeightsConfig: TypeWeightConfig = defaultTypeWeightsConfig | type weight configuration
- enforceBoundedLists = false
- returns: TypeWeightObject

usage:
import { typeWeightsFromSchema } from 'graphql-limiter';
import { GraphQLSchema } from 'graphql/type/schema';
import { buildSchema } from 'graphql';

let schema: GraphQLSchema = buildSchema(`...`);

const typeWeights: TypeWeightObject = typeWeightsFromSchema(schema);
QueryParser | class to calculate the complexity of the query based on the type weights and variables
- typeWeights: TypeWeightObject
- variables: Variables | variables on the request
- returns a class with method:
    - processQuery(queryAST: DocumentNode): number
        - returns: the complexity of the query; the class also exposes a maxDepth property for depth limiting

usage:

import { QueryParser, typeWeightsFromSchema } from 'graphql-limiter';
import { parse, validate, DocumentNode } from 'graphql';

let queryAST: DocumentNode = parse(`...`);

const queryParser: QueryParser = new QueryParser(typeWeights, variables);

// the query must be validated against the schema before it is processed
const validationErrors = validate(schema, queryAST);

const complexity: number = queryParser.processQuery(queryAST);
Rate-limiting
rateLimiter | returns a rate-limiting class instance based on the selected algorithm
- rateLimiter: RateLimiterConfig | see "Configuration" -> rateLimiter
- client: Redis | an ioredis client
- keyExpiry: number | time (ms) for a key to persist in the cache
- returns a rate limiter class with method:
    - processRequest(uuid: string, timestamp: number, tokens = 1): Promise<RateLimiterResponse>
        - returns: { success: boolean, tokens: number, retryAfter?: number } | where tokens is the number of tokens available, retryAfter is the time to wait in seconds before the request would be successful, and success is false if the request is blocked

usage:
import { rateLimiter } from 'graphql-limiter';

const limiter: RateLimiter = rateLimiter(
    {
        type: 'TOKEN_BUCKET',
        refillRate: 1,
        capacity: 10,
    },
    redisClient,
    86400000 // 24 hours
);

const response: RateLimiterResponse = await limiter.processRequest(
    'user-1',
    new Date().valueOf(),
    5
);
Future Development
- Ability to use this package with other caching technologies or libraries
- Implement "resolve complexity analysis" for queries
- Implement leaky bucket algorithm for rate-limiting
- Experiment with performance improvements
- caching optimization
- Ensure connection pagination conventions can be accurately accounted for in complexity analysis
- Ability to use middleware with other server frameworks
Contributions
Contributions to the code, examples, documentation, etc. are very much appreciated.
- Please report issues and bugs directly in this GitHub project.
Developers
License
This product is licensed under the MIT License - see the LICENSE.md file for details.
This is an open source product.
This product is accelerated by OS Labs.