Smart request balancer

A smart request queue with fine-grained tuning of rates and limits for queue execution

Installation

NPM

npm install smart-request-balancer

Yarn

yarn add smart-request-balancer

Usage

CommonJS

const Queue = require('smart-request-balancer');

TypeScript

import Queue from 'smart-request-balancer';

Imagine you have a Telegram bot and need to follow Telegram's rules for sending messages. The Telegram Bot API documentation says a bot cannot send more than 1 message per second to an individual user, and no more than 20 messages per minute to a group/chat/channel. You can configure these limits easily in smart-request-balancer:


const queue = new Queue({
  rules: {
    telegramIndividual: { // Rule for sending private message via telegram API
      rate: 1,            // one message
      limit: 1,           // per second
      priority: 1
    },
    telegramGroup: {      // Rule for sending group message via telegram API
      rate: 20,           // 20 messages
      limit: 60           // per minute
    }
  }
});

Then sending a message is as simple as:

const axios = require('axios');

queue.request((retry) => axios(config)
  .then(response => response.data)
  .catch(error => {
    if (error.response && error.response.status === 429) { // We've got 429 - too many requests
      return retry(error.response.data.parameters.retry_after); // usually 300 seconds
    }

    throw error; // rethrow any other error
  }), user_id, 'telegramIndividual')
  .then(response => console.log(response)) // our actual response
  .catch(error => console.error(error));

Here we call queue.request() with 3 parameters:

  • fn: the request handler, a function that returns the promise to execute
  • key: a unique key for the request, for example the user_id of a chat
  • rule: the name of a rule configured at queue creation

You can also see that retries are handled inside the request handler. That's our plan B in case the Telegram API reports a request overflow. Just call the retry function with a number of seconds, and the request will be executed again after that delay.

Queue API

Configuration

const queue = new Queue({
  rules: {                     // Rules described by rule name
    common: {                  // Common rule. Used if you don't provide a rule argument
      rate: 30,                // Allow 30 messages
      limit: 1,                // per 1 second
      priority: 1,             // Rule priority. The lower the value, the sooner
                               // requests under this rule are executed
    }
  },
  default: {                   // Default rule (used if the provided rule name is not found)
    rate: 30,
    limit: 1
  },
  overall: {                   // Overall queue rates and limits
    rate: 30,
    limit: 1
  },
  retryTime: 300,              // Default retry time. Can be overridden in the retry fn
  ignoreOverallOverheat: true  // Whether to ignore overheat of the queue itself
});

Making requests

To make a request, provide a callback that receives a single argument, retry, and returns a promise:

const key = user_id; // Some Telegram user id
const rule = 'telegramIndividual'; // Our rule for sending messages to chats
const response = await queue.request((retry) => axios(config)
  .then(response => response.data)
  .catch(error => {
    if (error.response && error.response.status === 429) { // We've got 429 - too many requests
      return retry(error.response.data.parameters.retry_after); // usually 300 seconds
    }

    throw error; // rethrow any other error
  }), key, rule);

You can use any promise-based library to make the requests. The resolved value of the promise is passed through to the caller.
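
For example, here is a minimal sketch of the same request using the built-in fetch available in Node 18+ instead of axios (url and key are assumed to be defined):

const response = await queue.request(
  (retry) => fetch(url).then(res => {
    if (res.status === 429) {
      return retry(300); // back off for 300 seconds on rate limiting
    }
    return res.json(); // the resolved value is passed through to the caller
  }),
  key,
  'telegramIndividual'
);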

Getting responses

queue.request(...) returns a promise that resolves only when the queue has executed the request and received a result. Let's extend the previous example:

try {
  const response = await queue.request(requestHandler, key, rule); // our actual response
} catch (e) {
  console.error(e); // our request error (excluding 429)
}

Priorities

Each rule has its own priority, which lets more urgent requests execute before less urgent ones. Imagine you have two rules: one for individual messages and one for broadcasting. Broadcasting can be a long-running routine, and sending a private message to somebody should not have to wait for it to finish. In that case, give private messages priority 1 and broadcasting priority 2. The queue will then send broadcast messages continuously, but as soon as a private message arrives it will interrupt the broadcast, send the message, and continue.
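
A minimal sketch of such a configuration (the rule names here are just examples):

const queue = new Queue({
  rules: {
    telegramIndividual: { // private messages: executed first
      rate: 1,
      limit: 1,
      priority: 1
    },
    telegramBroadcast: {  // broadcasting: fills the remaining capacity
      rate: 20,
      limit: 60,
      priority: 2
    }
  }
});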

Available methods

  • request(handler: (retry: RetryFunction) => Promise, key: string, rule: string) => Promise - The main entry point for making requests with this library
  • get totalLength(): number - Getter for the total number of requests in the queue
  • get isOverheated(): boolean - Getter that reports whether the queue is overheated (see the sketch below)
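
For instance, a minimal sketch of inspecting the queue before enqueueing more work (handler, key and rule are assumed to be defined as in the earlier examples):

// Throttle a producer based on the queue's current state
if (!queue.isOverheated && queue.totalLength < 1000) {
  queue.request(handler, key, rule);
} else {
  console.log(`Queue busy: ${queue.totalLength} pending requests`);
}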

Getting retry error

Use the retry function inside the request handler to schedule a retry for the request. You can usually detect the need to retry by an HTTP 429 status code. Some servers also return a retry_after parameter, which you can pass to the retry function to set the retry interval for that request. You don't need to do anything else: the promise resolves only once the server responds successfully.
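
A minimal sketch of such a handler, assuming an axios call and reading the standard HTTP Retry-After header, with a fixed fallback when the server omits it:

const handler = (retry) => axios(config)
  .then(response => response.data)
  .catch(error => {
    if (error.response && error.response.status === 429) {
      // Prefer the server's Retry-After hint; fall back to 300 seconds
      const retryAfter = Number(error.response.headers['retry-after']) || 300;
      return retry(retryAfter);
    }
    throw error; // any other error rejects the queue.request() promise
  });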

Overall overheat

Sometimes you need to limit the overall throughput of the queue (e.g. the Telegram API restricts bots to no more than 30 messages per second overall). For that purpose, configure the overall rule in the config and set ignoreOverallOverheat to false.
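
A minimal sketch of such a configuration for the Telegram limit above:

const queue = new Queue({
  rules: {
    telegramIndividual: { rate: 1, limit: 1, priority: 1 },
    telegramGroup: { rate: 20, limit: 60, priority: 2 }
  },
  overall: {                    // no more than 30 messages per second in total
    rate: 30,
    limit: 1
  },
  ignoreOverallOverheat: false  // enforce the overall limit
});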

Debug

To debug the queue, set the environment variable DEBUG=smart-request-balancer.
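
For example, when starting your app (app.js here is a placeholder for your entry point):

DEBUG=smart-request-balancer node app.js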

Other usages

This queue is not limited to API requests. The library can be used for any routine that should be queued and executed sequentially based on rules, grouping, priority, and the ability to retry.
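
For example, a minimal sketch that throttles a hypothetical sendEmail job to one email per second per recipient (sendEmail is an assumed function, not part of this library):

const queue = new Queue({
  rules: {
    email: { rate: 1, limit: 1, priority: 1 } // one email per second per key
  }
});

// Each recipient gets their own sub-queue, keyed by address
const sendQueued = (address, message) =>
  queue.request(() => sendEmail(address, message), address, 'email');

sendQueued('alice@example.com', 'Hello!');
sendQueued('alice@example.com', 'Follow-up'); // waits 1s after the first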