@fastify/under-pressure

A Fastify plugin that measures process load and automatically handles "Service Unavailable" responses. It can check maxEventLoopDelay, maxHeapUsedBytes, maxRssBytes and maxEventLoopUtilization values. You can also specify a custom health check to verify the status of external resources.

Requirements

Fastify ^4.0.0. Please refer to this branch and related versions for Fastify ^1.1.0 compatibility.

Install

npm i @fastify/under-pressure

Usage

Require the plugin and register it on the Fastify instance.

const fastify = require('fastify')()

fastify.register(require('@fastify/under-pressure'), {
  maxEventLoopDelay: 1000,
  maxHeapUsedBytes: 100000000,
  maxRssBytes: 100000000,
  maxEventLoopUtilization: 0.98
})

fastify.get('/', (request, reply) => {
  if (fastify.isUnderPressure()) {
    // skip complex computation
  }
  reply.send({ hello: 'world' })
})

fastify.listen({ port: 3000 }, err => {
  if (err) throw err
  console.log(`server listening on ${fastify.server.address().port}`)
})

@fastify/under-pressure will automatically handle the Service Unavailable error for you once one of the thresholds has been reached. You can configure the error message and the Retry-After header.

fastify.register(require('@fastify/under-pressure'), {
  maxEventLoopDelay: 1000,
  message: 'Under pressure!',
  retryAfter: 50
})

You can also configure a custom Error instance that @fastify/under-pressure will throw.

class CustomError extends Error {
  constructor () {
    super('Custom error message')
    Error.captureStackTrace(this, CustomError)
  }
}

fastify.register(require('@fastify/under-pressure'), {
  maxEventLoopDelay: 1000,
  customError: CustomError
})

The default value for maxEventLoopDelay, maxHeapUsedBytes, maxRssBytes and maxEventLoopUtilization is 0. If the value is 0 the check will not be performed.

Thanks to the encapsulation model of Fastify, you can selectively use this plugin in a subset of routes, or even with different thresholds in different plugins, as in the sketch below.
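
For example, here is a minimal sketch of registering the plugin in two encapsulated contexts with different limits (the plugin names, routes and threshold values are illustrative):

const fastify = require('fastify')()
const underPressure = require('@fastify/under-pressure')

// Strict limit for a CPU-heavy part of the API
fastify.register(async function heavyRoutes (instance) {
  instance.register(underPressure, { maxEventLoopDelay: 250 })
  instance.get('/heavy', async () => ({ hello: 'heavy' }))
})

// Looser limit for lightweight routes
fastify.register(async function lightRoutes (instance) {
  instance.register(underPressure, { maxEventLoopDelay: 2000 })
  instance.get('/light', async () => ({ hello: 'light' }))
})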

memoryUsage

This plugin also exposes a function that will tell you the current values of heapUsed, rssBytes, eventLoopDelay and eventLoopUtilized.

console.log(fastify.memoryUsage())
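
For instance, a minimal sketch (the route path is illustrative; the property names come from the description above) that returns the current samples from a route:

fastify.get('/load', async () => {
  // Current values sampled by the plugin
  const { heapUsed, rssBytes, eventLoopDelay, eventLoopUtilized } = fastify.memoryUsage()
  return { heapUsed, rssBytes, eventLoopDelay, eventLoopUtilized }
})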

Pressure Handler

You can provide a pressure handler in the options to handle pressure errors. The advantage is that you know why the error occurred, and the request can be handled as if nothing had happened.

const fastify = require('fastify')()
const underPressure = require('@fastify/under-pressure')

fastify.register(underPressure, {
  maxHeapUsedBytes: 100000000,
  maxRssBytes: 100000000,
  pressureHandler: (request, reply, type, value) => {
    if (type === underPressure.TYPE_HEAP_USED_BYTES) {
      fastify.log.warn(`too many heap bytes used: ${value}`)
    } else if (type === underPressure.TYPE_RSS_BYTES) {
      fastify.log.warn(`too many rss bytes used: ${value}`)
    }

    reply.send('out of memory') // if you omit this line, the request will be handled normally
  }
})

It is also possible to return a Promise that will call reply.send (or do something else).

fastify.register(underPressure, {
  maxHeapUsedBytes: 100000000,
  pressureHandler: (request, reply, type, value) => {
    return getPromise().then(() => reply.send({ hello: 'world' }))
  }
})

Any return value other than a Promise or a nullish value will be sent to the client with reply.send, as in the sketch below.
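
For example, a minimal sketch (the threshold and the message are illustrative) where the handler simply returns a string:

fastify.register(underPressure, {
  maxHeapUsedBytes: 100000000,
  // The returned string is passed to reply.send for you
  pressureHandler: (request, reply, type, value) => `under pressure: ${type}`
})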

It's also possible to specify the pressureHandler on the route:

const fastify = require('fastify')()
const underPressure = require('@fastify/under-pressure')

fastify.register(underPressure, {
  maxHeapUsedBytes: 100000000,
  maxRssBytes: 100000000,
})

fastify.register(async function (fastify) {
  fastify.get('/', {
    config: {
      pressureHandler: (request, reply, type, value) => {
        if (type === underPressure.TYPE_HEAP_USED_BYTES) {
          fastify.log.warn(`too many heap bytes used: ${value}`)
        } else if (type === underPressure.TYPE_RSS_BYTES) {
          fastify.log.warn(`too many rss bytes used: ${value}`)
        }

        reply.send('out of memory') // if you omit this line, the request will be handled normally
      }
    }
  }, () => 'A')
})

Status route

If needed you can pass { exposeStatusRoute: true } and @fastify/under-pressure will expose a /status route for you that sends back a { status: 'ok' } object. This can be useful if you need to attach the server to an ELB on AWS for example.

If you need to change the exposed route path, you can pass { exposeStatusRoute: '/alive' } instead, as in the sketch below.
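
A minimal sketch of both forms (the threshold value is illustrative):

fastify.register(require('@fastify/under-pressure'), {
  maxEventLoopDelay: 1000,
  // true exposes the default /status route; a string exposes the route under that path instead
  exposeStatusRoute: '/alive'
})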

To configure the endpoint more specifically you can pass an object. This consists of

  • routeOpts - Any Fastify route options except schema
  • routeSchemaOpts - As per the Fastify route options, an object containing the schema for the request
  • routeResponseSchemaOpts - An object containing the schema for additional response items to be merged with the default response schema, see below
  • url - The URL to expose the status route on

fastify.register(require('@fastify/under-pressure'), {
  maxEventLoopDelay: 1000,
  exposeStatusRoute: {
    routeOpts: {
      logLevel: 'debug',
      config: {
        someAttr: 'value'
      }
    },
    routeSchemaOpts: { // If you also want to set a custom route schema
      hide: true
    },
    url: '/alive' // If you also want to set a custom route path and pass options
  }
})

The above example sets the logLevel value for the /alive route to debug.

If you need to return other information in the response, you can return an object from the healthCheck function (see the next section) and use the routeResponseSchemaOpts property to describe your custom response schema. Note that status will always be present in the response.

fastify.register(underPressure, {
  ...
  exposeStatusRoute: {
    routeResponseSchemaOpts: {
      extraValue: { type: 'string' },
      metrics: {
        type: 'object',
        properties: {
          eventLoopDelay: { type: 'number' },
          rssBytes: { type: 'number' },
          heapUsed: { type: 'number' },
          eventLoopUtilized: { type: 'number' },
        },
      },
      // ...
    }
  },
  healthCheck: async (fastifyInstance) => {
    return {
      extraValue: await getExtraValue(),
      metrics: fastifyInstance.memoryUsage(),
      // ...
    }
  },
})

Custom health checks

If needed, you can pass a custom healthCheck property, an async function that @fastify/under-pressure will use to check the status of other components of your service.

This function should return a promise that resolves to a boolean value or to an object. The healthCheck function can be called either:

  • every X milliseconds; the interval can be configured with the healthCheckInterval option.
  • every time the status route is called, if exposeStatusRoute is set to true.

By default, when this function is supplied, your service is considered unhealthy until it has started returning true.

const fastify = require('fastify')()

fastify.register(require('@fastify/under-pressure'), {
  healthCheck: async function (fastifyInstance) {
    // do some magic to check if your db connection is healthy, etc...
    return true
  },
  healthCheckInterval: 500
})

Sample interval

You can set a custom value for sampling the metrics returned by memoryUsage using the sampleInterval option, which accepts a number that represents the interval in milliseconds.

The default value differs depending on which Node version is used: on versions 8 and 10 it is 5, while on version 11.10.0 and up it is 1000. The difference is that from version 11.10.0 the event loop delay can be sampled with monitorEventLoopDelay, which allows the interval to be increased.

const fastify = require('fastify')()

fastify.register(require('@fastify/under-pressure'), {
  sampleInterval: <your custom sample interval in ms>
})

Additional information

setTimeout vs setInterval

Under the hood, @fastify/under-pressure uses the setTimeout method to perform its polling checks. This choice was made because we do not want to add additional pressure to the system.

It is known that setInterval fires repeatedly at the scheduled interval regardless of whether the previous call has finished, so if the server is already under load those calls will start piling up and likely make the problem worse. setTimeout, on the other hand, schedules only one call at a time and does not have this problem.

One thing to keep in mind is that, because the two methods are not identical, the timer is not guaranteed to run at exactly the same rate when the system is under pressure or running a long-running process.
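
This is not the plugin's actual implementation, but a minimal sketch of the self-scheduling setTimeout pattern described above (the check function and interval are illustrative):

function startPolling (check, intervalMs) {
  function schedule () {
    const timer = setTimeout(() => {
      check()     // sample the metrics
      schedule()  // re-arm only after the previous check has finished
    }, intervalMs)
    timer.unref() // do not keep the process alive just for the poller
  }
  schedule()
}

startPolling(() => { /* read event loop delay, heap, rss, ... */ }, 1000)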

Acknowledgements

This project is kindly sponsored by LetzDoIt.

License

Licensed under MIT.