
adaptive-throttling

v2.0.3

Adaptive throttling


adaptive-throttling is a library that implements adaptive throttling, as described in the Google SRE Book and rafaelcapucho's client-side throttling article.

Installation

npm i adaptive-throttling -S

or

yarn add adaptive-throttling

Docs

  • https://sre.google/sre-book/handling-overload/
  • https://rafaelcapucho.github.io/2016/10/enhance-the-quality-of-your-api-calls-with-client-side-throttling/

Usage

Import

import { AdaptiveThrottling } from 'adaptive-throttling';
// or
const { AdaptiveThrottling } = require('adaptive-throttling');

Example

const { AdaptiveThrottling } = require('adaptive-throttling');
const axios = require('axios');

const adaptiveThrottling = AdaptiveThrottling();

adaptiveThrottling
  .execute(() => {
    return axios.get('/user?ID=12345');
  })
  .then((response) => {
    console.log('success', response.data);
  })
  .catch((error) => {
    console.log('error', error.message);
  });

Parameters

historyTime

The number of minutes of history each client task keeps. When the backend answers "out of quota", this is effectively how long the client waits for the server to recover.

k

Clients can continue to issue requests to the backend until requests is k times as large as accepts; Google suggests k = 2. This also determines how many failures are needed before the client starts rejecting requests locally.

upperLimiteToReject

If the server stays down for more than {historyTime} minutes, the rejection probability P0 would reach 1, rejecting every new request locally, so the client app would never be able to set up a new connection. As a result, no request from the client app would ever reach the server again. Capping the probability at 0.9 allows the client to recover even in that worst-case scenario, when the service is down for more than {historyTime} minutes.
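As an illustration of how these three parameters could fit together, here is a minimal, self-contained sketch (the function and option names are hypothetical and mirror the list above; this is not the library's actual implementation):

```javascript
// Sketch: adaptive throttling over a sliding history window.
// Option names are taken from the parameter list above.
function createThrottler({ historyTime = 2, k = 2, upperLimiteToReject = 0.9 } = {}) {
  let events = []; // { time, accepted } entries for the last `historyTime` minutes

  function prune(now) {
    const cutoff = now - historyTime * 60 * 1000;
    events = events.filter((e) => e.time >= cutoff);
  }

  return {
    record(accepted, now = Date.now()) {
      events.push({ time: now, accepted });
    },
    // p = max(0, (requests - k * accepts) / (requests + 1)), capped at upperLimiteToReject
    rejectionProbability(now = Date.now()) {
      prune(now);
      const requests = events.length;
      const accepts = events.filter((e) => e.accepted).length;
      const p = Math.max(0, (requests - k * accepts) / (requests + 1));
      return Math.min(p, upperLimiteToReject);
    },
  };
}

const t = createThrottler();
for (let i = 0; i < 100; i++) t.record(false); // backend rejecting everything
console.log(t.rejectionProbability()); // 0.9 -- p ≈ 0.99, capped by upperLimiteToReject
```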

"Benchmark"


Client-Side Throttling

When a customer is out of quota, a backend task should reject requests quickly with the expectation that returning a "customer is out of quota" error consumes significantly fewer resources than actually processing the request and serving back a correct response. However, this logic doesn't hold true for all services. For example, it's almost equally expensive to reject a request that requires a simple RAM lookup (where the overhead of the request/response protocol handling is significantly larger than the overhead of producing the response) as it is to accept and run that request. And even in the case where rejecting requests saves significant resources, those requests still consume some resources. If the amount of rejected requests is significant, these numbers add up quickly. In such cases, the backend can become overloaded even though the vast majority of its CPU is spent just rejecting requests!

Client-side throttling addresses this problem. When a client detects that a significant portion of its recent requests have been rejected due to "out of quota" errors, it starts self-regulating and caps the amount of outgoing traffic it generates. Requests above the cap fail locally without even reaching the network.

We implemented client-side throttling through a technique we call adaptive throttling. Specifically, each client task keeps the following information for the last two minutes of its history:

  • requests: The number of requests attempted by the application layer (at the client, on top of the adaptive throttling system)
  • accepts: The number of requests accepted by the backend

Under normal conditions, the two values are equal. As the backend starts rejecting traffic, the number of accepts becomes smaller than the number of requests. Clients can continue to issue requests to the backend until requests is K times as large as accepts. Once that cutoff is reached, the client begins to self-regulate and new requests are rejected locally (i.e., at the client) with the probability calculated in Client request rejection probability.

Client request rejection probability

As the client itself starts rejecting requests, requests will continue to exceed accepts. While it may seem counterintuitive, given that locally rejected requests aren't actually propagated to the backend, this is the preferred behavior. As the rate at which the application attempts requests to the client grows (relative to the rate at which the backend accepts them), we want to increase the probability of dropping new requests.

For services where the cost of processing a request is very close to the cost of rejecting that request, allowing roughly half of the backend resources to be consumed by rejected requests can be unacceptable. In this case, the solution is simple: modify the accepts multiplier K (e.g., 2) in the client request rejection probability (Client request rejection probability). In this way:

  • Reducing the multiplier will make adaptive throttling behave more aggressively
  • Increasing the multiplier will make adaptive throttling behave less aggressively

For example, instead of having the client self-regulate when requests = 2 × accepts, have it self-regulate when requests = 1.1 × accepts. Reducing the modifier to 1.1 means only one request will be rejected by the backend for every 10 requests accepted.
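To make the effect of the multiplier concrete, here is the rejection probability from the roadmap formula, max(0, (requests − K × accepts) / (requests + 1)), evaluated at both values (a quick sketch, not library code):

```javascript
// Client request rejection probability:
// p = max(0, (requests - K * accepts) / (requests + 1))
const rejectionProbability = (requests, accepts, K) =>
  Math.max(0, (requests - K * accepts) / (requests + 1));

// Backend accepting 60 out of the last 100 requests:
console.log(rejectionProbability(100, 60, 2).toFixed(2));   // "0.00" -- K = 2 tolerates this
console.log(rejectionProbability(100, 60, 1.1).toFixed(2)); // "0.34" -- K = 1.1 already throttles locally
```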

We generally prefer the 2x multiplier. By allowing more requests to reach the backend than are expected to actually be allowed, we waste more resources at the backend, but we also speed up the propagation of state from the backend to the clients. For example, if the backend decides to stop rejecting traffic from the client tasks, the delay until all client tasks have detected this change in state is shorter.

We've found adaptive throttling to work well in practice, leading to stable rates of requests overall. Even in large overload situations, backends end up rejecting one request for each request they actually process. One large advantage of this approach is that the decision is made by the client task based entirely on local information and using a relatively simple implementation: there are no additional dependencies or latency penalties.

One additional consideration is that client-side throttling may not work well with clients that only very sporadically send requests to their backends. In this case, the view that each client has of the state of the backend is reduced drastically, and approaches to increment this visibility tend to be expensive.

Roadmap

1.x.x

  • [x] Add support for rejection based on (requests - K * accepts) / (requests + 1)
  • [x] Add support for cjs and esm

1.1.x

  • [x] Add support for optional params with Spread Operator
  • [x] Add support for use AdaptiveThrottling()
  • [x] createAdaptiveThrottling deprecated

2.x.x

  • [x] Add support for client request rejection probability: reject locally when Math.random() < client request rejection probability
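The last roadmap item above can be sketched as follows (illustrative only; the function name is hypothetical):

```javascript
// Reject a request locally (client-side) when a uniform random draw
// falls below the client request rejection probability.
function shouldRejectLocally(rejectionProbability, rng = Math.random) {
  return rng() < rejectionProbability;
}

// With a fixed rng for illustration:
console.log(shouldRejectLocally(0.9, () => 0.5)); // true  -- request fails locally
console.log(shouldRejectLocally(0.0, () => 0.5)); // false -- request goes to the backend
```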