
@nodeart/async-buffer

v1.1.2


AsyncBuffer

AsyncBuffer accumulates async tasks and runs them sequentially once the buffer limit is exceeded. By default, AsyncBuffer starts executing automatically as soon as the task limit is reached. AsyncBuffer can be used in the browser as well as in Node.

For example, you can use this package to batch work against a database. Imagine an HTTP server that has to push some info to a database on every request. Instead of writing on each request, you can push a task to the buffer, and the database writes will start once the limit is exceeded.
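As a rough sketch of that pattern (the db stub and onRequest handler below are hypothetical placeholders, not part of this package, and it assumes AsyncBuffer has already been required from '@nodeart/async-buffer'; the task callback API used inside push is described in the next paragraph):

let writes = new AsyncBuffer(50),                                 // flush once 50 writes are queued
    db     = { insert: (record, done) => setTimeout(done, 10) };  // stub standing in for a real database client

function onRequest(record) {
    // queue the write instead of hitting the database on every request
    writes.push(function (cb) {
        db.insert(record, () => cb(record));                      // report the stored record as the task result
    });
}

writes.on('drain', results => console.log('flushed', results.length, 'records'));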

Each task receives a callback as its first parameter, which must be called for the operation to proceed. You can optionally pass a value to that callback; it is treated as the result of the task and stored in the results array. The second parameter of a task is the result of the previous task (if the previous task provided no value, null is pushed to the results array). The default stack capacity is 50 tasks.

Basic usage:

let buffer = new AsyncBuffer(4),
    task   = function(cb) {
               setTimeout(function () {
                   console.log(`I was called`);
                   cb('result');
               }, 1000)
             };

buffer.on('drain', function (results) {
    console.log('I was drained', results);
}).push(task);
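For instance, here is a minimal sketch of the second task parameter described above, assuming results are collected in task order: each task receives whatever the previous task passed to its callback (or null if it passed nothing):

let chain  = new AsyncBuffer(2),
    first  = function (cb) {
        cb(1);                                            // this value becomes the next task's second parameter
    },
    second = function (cb, previousResult) {
        console.log('previous result:', previousResult);  // logs 1
        cb(previousResult + 1);
    };

chain.on('drain', results => console.log('results:', results))  // expected: [1, 2]
     .push(first)
     .push(second);                                       // limit of 2 reached, execution starts automatically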

You can switch off auto execution by passing false as the second constructor parameter and start task execution manually. The push method also accepts multiple arguments, so you can provide several tasks at once:

let buffer = new AsyncBuffer(5, false),
    task = function(cb) {
               setTimeout(function () {
                   console.log(`I was called`);
                   cb('result');
               }, 1000)
           };
buffer.on('stack_filled', function () {
    console.log('Stack is filled');
    buffer.drainBuffer();
}).push(task, task, task, task, task);

or by using apply:

buffer.push.apply(buffer, [task, task, task]);

There are also two events that signal the start and end of an operation ('start' and 'drain' respectively). The 'drain' event callback receives the results of the operation as its first parameter.

buffer.on('start', ()      => console.log('Execution is started'))
      .on('drain', results => console.log('I was drained', results), 
                   results => console.log('Really drained', results));

You can also use the drainBufferParallel method to replace sequential execution with parallel execution. After execution the chunk_done event is emitted, and if no tasks were added during the process, the drain event is emitted as well.

Example:

buffer.on('stop', () => console.log('stop'))
      .push(task, task, task, task, task)
      .drainBuffer()
      .once('drain', res => {                           // wait until all sequential tasks have executed
          console.log('drained', res);
          buffer.push(task, task, task, task, task)
                 .drainBufferParallel()                 // execute these tasks in parallel
                 .push(task, task, task, task, task)    // add another chunk
                 .stopExecution()                       // stop execution (only the first five tasks will run)
                 .on('chunk_done', res => console.log('chunk_done', res)); // get results of the first five tasks
      });

If you need to:

  1. Start task execution before the limit is exceeded (or when the second constructor parameter was set to false), use the drainBuffer function.
  2. Clear the task stack, use the clearTasksStack function (see the short sketch after this list).
  3. Stop execution at some moment, use the stopExecution function. The buffer will finish the tasks that have already been triggered, but will not trigger the next ones. Example:

buffer.on('stop', function (currentResults) {
    console.log(`Execution has been stopped. Here are the results: ${currentResults}`);
    // continue execution
    buffer.drainBuffer();
});

  4. Drain the buffer before the process exits, use the monkey patch provided in this package like this:
monkeyPatch(function (exit) {
    return function () {
        buffer.on('drain', () => exit())
              .drainBuffer();
    }
});
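And a short sketch for item 2 in the list above, assuming clearTasksStack takes no arguments and simply discards whatever is still queued:

buffer.push(task, task, task);   // queue a few tasks without reaching the limit
buffer.clearTasksStack();        // the queued tasks are discarded and will not run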