
graphql-limiter

v1.3.0

A GraphQL rate limiting library using query complexity analysis.

Summary

Developed under the tech accelerator OSLabs, GraphQLGate strives for a principled approach to complexity analysis and rate limiting for GraphQL queries by accurately estimating an upper bound on the response size of each query. Within a loosely opinionated framework with many configuration options, you can reliably throttle GraphQL queries by complexity and depth to protect your GraphQL API. Our solution is inspired by this paper from IBM research teams.

Getting Started

Install the package

npm i graphql-limiter

Import the package and add the rate-limiting middleware to the Express middleware chain before the GraphQL server.

NOTE: a Redis server instance must be running in order for the limiter to cache data.

// import package
import { expressGraphQLRateLimiter } from 'graphql-limiter';

/**
 * Import other dependencies
 * */

// Add the middleware into your GraphQL middleware chain
app.use(
    'gql',
    expressGraphQLRateLimiter(schemaObject, {
        rateLimiter: {
            type: 'TOKEN_BUCKET',
            refillRate: 10,
            capacity: 100,
        },
    }) /** add GraphQL server here */
);
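As a rough illustration of what the TOKEN_BUCKET settings above mean, here is a simplified sketch of token-bucket accounting. This is not the package's internal implementation, and it assumes refillRate is expressed in tokens per second; capacity caps the burst a client can spend at once, while refillRate restores tokens over time.

```typescript
// Simplified token-bucket refill math (an illustration only; the
// package's internals may differ). Assumes refillRate is tokens/second.
function tokensAvailable(
    lastTokens: number,
    lastRefillMs: number,
    nowMs: number,
    refillRate: number,
    capacity: number
): number {
    const elapsedSec = (nowMs - lastRefillMs) / 1000;
    // Refill for the elapsed time, but never beyond the bucket's capacity.
    return Math.min(capacity, lastTokens + elapsedSec * refillRate);
}

// With refillRate: 10 and capacity: 100, a client that is down to
// 20 tokens recovers 50 tokens over 5 seconds:
const available = tokensAvailable(20, 0, 5000, 10, 100); // 70
```

Under this model, capacity also sets the largest single query that can ever succeed: a query whose complexity exceeds the bucket's capacity can never accumulate enough tokens.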

Configuration

  1. schema: GraphQLSchema | required

  2. config: ExpressMiddlewareConfig | required

    • rateLimiter: RateLimiterOptions | required

      • type: 'TOKEN_BUCKET' | 'FIXED_WINDOW' | 'SLIDING_WINDOW_LOG' | 'SLIDING_WINDOW_COUNTER'
      • capacity: number
      • refillRate: number | bucket algorithms only
      • windowSize: number | (in ms) window algorithms only
    • redis: RedisConfig

      • options: RedisOptions | ioredis configuration options | defaults to standard ioredis connection options (localhost:6379)
      • keyExpiry: number (ms) | custom expiry of keys in redis cache | defaults to 24 hours
    • typeWeights: TypeWeightObject

      • mutation: number | assigned weight to mutations | defaults to 10
      • query: number | assigned weight of a query | defaults to 1
      • object: number | assigned weight of GraphQL object, interface and union types | defaults to 1
      • scalar: number | assigned weight of GraphQL scalar and enum types | defaults to 0
    • depthLimit: number | throttle queries by the depth of the nested structure | defaults to Infinity (ie. no limit)

    • enforceBoundedLists: boolean | if true, an error will be thrown if any list types are not bound by slicing arguments [first, last, limit] or directives | defaults to false

    • dark: boolean | if true, the package will calculate complexity, depth and tokens but not throttle any queries. Use this to dark launch the package and monitor the rate limiter's impact without limiting user requests.

    All configuration options

    expressGraphQLRateLimiter(schemaObject, {
        rateLimiter: {
            type: 'SLIDING_WINDOW_LOG', // rate-limiter selection
            windowSize: 6000, // 6 seconds
            capacity: 100,
        },
        redis: {
            keyExpiry: 14400000, // 4 hours, defaults to 86400000 (24 hours)
            options: {
                host: 'localhost', // ioredis connection options
                port: 6379,
            }
        },
        typeWeights: { // weights of GraphQL types
            mutation: 10,
            query: 1,
            object: 1,
            scalar: 0,
        },
        enforceBoundedLists: false, // defaults to false
        dark: false, // defaults to false
        depthLimit: 7 // defaults to Infinity (ie. no depth limiting)
    });

Notes on Lists

For queries that return a list, the complexity can be determined by providing a slicing argument to the query (first, last, limit), or using a schema directive.

  1. Slicing arguments: lists must be bounded by one integer slicing argument in order to calculate the complexity for the field. This package supports the slicing arguments first, last and limit. The complexity of the list will be the value passed as the argument to the field.

  2. Directives: To use directives, @listCost must be defined in your schema with directive @listCost(cost: Int!) on FIELD_DEFINITION. Then, on any field which resolves to an unbounded list, add @listCost(cost: [Int]) where [Int] is the complexity for this field.

(Note: Slicing arguments are preferred and will override the @listCost directive! @listCost is in place as a fallback.)

directive @listCost(cost: Int!) on FIELD_DEFINITION
type Human {
    id: ID!
}
type Query {
    humans: [Human] @listCost(cost: 10)
}

How It Works

Requests are rate-limited based on the IP address associated with the request.

On startup, the GraphQL (GQL) schema is parsed to build an object that maps GQL types/fields to their corresponding weights. Type weights can be provided during initial configuration. When a request is received, this object is used to cross reference the fields queried by the user and compute the complexity of each field. The total complexity of the request is the sum of these values.

Complexity is determined statically (before any resolvers are called) to estimate the upper bound of the response size, a proxy for the work done by the server to build the response. The total complexity is then used to allow or block the request based on popular rate-limiting algorithms.

Requests for each user are processed sequentially by the rate limiter.

Example (with default weights):

query {
    # 1 query
    hero(episode: EMPIRE) {
        # 1 object
        name # 0 scalar
        id # 0 scalar
        friends(first: 3) {
            # 3 objects
            name # 0 scalar
            id # 0 scalar
        }
    }
    reviews(episode: EMPIRE, limit: 5) {
        #   5 objects
        stars # 0 scalar
        commentary # 0 scalar
    }
} # total complexity of 10
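The arithmetic behind that total can be written out explicitly. The following is just the sum of the annotations above under the default weights; a bounded list contributes its slicing-argument value multiplied by the weight of the child object type:

```typescript
// Default weights: query 1, object 1, scalar 0.
const QUERY = 1;
const OBJECT = 1;
const SCALAR = 0;

const complexity =
    QUERY +          // the query operation itself
    OBJECT +         // hero
    2 * SCALAR +     // hero { name, id }
    3 * OBJECT +     // friends(first: 3) => 3 objects
    2 * SCALAR +     // friends { name, id }
    5 * OBJECT +     // reviews(limit: 5) => 5 objects
    2 * SCALAR;      // reviews { stars, commentary }

// complexity === 10
```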

Response

  1. Blocked Requests: blocked requests receive a response with:

    • status of 429 for Too Many Requests
    • Retry-After header indicating the time to wait in seconds before the request could be approved (Infinity if the complexity is greater than rate-limiting capacity).
    • A JSON response with the remaining tokens available, complexity of the query, depth of the query, success of the query set to false, and the UNIX timestamp of the request
  2. Successful Requests: successful requests are passed on to the next function in the middleware chain with the following properties saved to res.locals

{
   graphqlGate: {
      success: boolean, // true when successful
      tokens: number, // tokens available after request
      complexity: number, // complexity of the query
      depth: number, // depth of the query
      timestamp: number, // UNIX timestamp
   }
}
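One way to picture where the Retry-After value for a blocked request could come from under a token-bucket limiter is the deficit between the query's complexity and the tokens on hand, divided by the refill rate. This is a hypothetical sketch (assuming a refill rate in tokens per second), not the package's exact formula:

```typescript
// Hypothetical Retry-After computation for a token bucket
// (assumes refillRate is tokens/second; not the package's actual code).
function retryAfterSeconds(
    complexity: number,
    tokens: number,
    refillRate: number,
    capacity: number
): number {
    // A query larger than the bucket itself can never succeed.
    if (complexity > capacity) return Infinity;
    const deficit = complexity - tokens;
    return deficit > 0 ? deficit / refillRate : 0;
}

retryAfterSeconds(50, 30, 10, 100); // 2: wait for 20 more tokens
retryAfterSeconds(200, 0, 10, 100); // Infinity: exceeds capacity
```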

Error Handling

  • Incoming queries are validated against the GraphQL schema. If the query is invalid, a response with status code 400 is returned along with an array of GraphQL Errors that were found.
  • To avoid disrupting server activity, errors thrown during the analysis and rate-limiting of the query are logged and the request is passed onto the next piece of middleware in the chain.

Internals

This package exposes three additional functions and classes that make up its internals. Below is a brief documentation of each.

Complexity Analysis

  1. typeWeightsFromSchema | function to create the type weight object from the schema for complexity analysis

    • schema: GraphQLSchema | GraphQL schema object

    • typeWeightsConfig: TypeWeightConfig = defaultTypeWeightsConfig | type weight configuration

    • enforceBoundedLists = false

    • returns: TypeWeightObject

    • usage:

      import { typeWeightsFromSchema } from 'graphql-limiter';
      import { GraphQLSchema } from 'graphql/type/schema';
      import { buildSchema } from 'graphql';
      
      let schema: GraphQLSchema = buildSchema(`...`);
      
      const typeWeights: TypeWeightObject = typeWeightsFromSchema(schema);
  2. QueryParser | class to calculate the complexity of the query based on the type weights and variables

    • typeWeights: TypeWeightObject

    • variables: Variables | variables on request

    • returns a class with method:

      • processQuery(queryAST: DocumentNode): number

      • returns: complexity of the query and exposes maxDepth property for depth limiting

        import { QueryParser, typeWeightsFromSchema } from 'graphql-limiter';
        import { parse, validate, DocumentNode } from 'graphql';
        
        let queryAST: DocumentNode = parse(`...`);
        
        const queryParser: QueryParser = new QueryParser(typeWeights, variables);
        
        // the query must be validated against the schema before it is processed
        const validationErrors = validate(schema, queryAST);
        
        const complexity: number = queryParser.processQuery(queryAST);

Rate-limiting

  1. rateLimiter | returns a rate limiting class instance based on selections

    • rateLimiter: RateLimiterConfig | see "configuration" -> rateLimiter

    • client: Redis | an ioredis client

    • keyExpiry: number | time (ms) for key to persist in cache

    • returns a rate limiter class with method:

      • processRequest(uuid: string, timestamp: number, tokens = 1): Promise<RateLimiterResponse>
      • returns: { success: boolean, tokens: number, retryAfter?: number } | where tokens is the tokens available, retryAfter is the time to wait in seconds before the request would succeed, and success is false if the request is blocked

      • usage:
      import { rateLimiter } from 'graphql-limiter';
      
      const limiter: RateLimiter = rateLimiter(
          {
              type: 'TOKEN_BUCKET',
              refillRate: 1,
              capacity: 10,
          },
          redisClient,
          86400000 // 24 hours
      );
      
      const response: RateLimiterResponse = await limiter.processRequest(
          'user-1',
          new Date().valueOf(),
          5
      );

Future Development

  • Ability to use this package with other caching technologies or libraries
  • Implement "resolve complexity analysis" for queries
  • Implement leaky bucket algorithm for rate-limiting
  • Experiment with performance improvements
    • caching optimization
  • Ensure connection pagination conventions can be accurately accounted for in complexity analysis
  • Ability to use middleware with other server frameworks

Contributions

Contributions to the code, examples, documentation, etc. are very much appreciated.

License

This product is licensed under the MIT License - see the LICENSE.md file for details.

This is an open source product.

This product is accelerated by OS Labs.