
batch-these

batch data with ease

install

npm install --save batch-these

example

var batch = require('batch-these');
batch.wait(10); // 10 milliseconds tops

process.on('stuff-started', function(e){
  var chunk = e.name;
  batch.these(chunk, function(data){
    console.log('Started ', data.join(', ') );
  });
});

process.on('stuff-done', function(e){
  var chunk = e.name + ' in ' + Math.floor(e.time) + ' ms';
  batch.these(chunk, function(data){
    console.log('Done with Mr.', data.join(', Mr. ') );
  });
});

var dogs = ['Blue', 'Pink', 'Eddie', 'Joe', 'White', 'Brown', 'Blonde', 'Orange'];

dogs.forEach(function(name, index){
  var time = process.hrtime();
  setTimeout(function(){
    process.emit('stuff-started', {
      name : name,
      time : time
    });

    var rand = Math.floor(Math.random()*100);
    setTimeout(function(){
      process.emit('stuff-done', {
        name : name,
        time : process.hrtime(time)[1]/1000000
      });
    }, rand);
  }, (index + 1)*11);
});

which will output something similar to

Started  Blue, Pink, Eddie, Joe, White
Done with Mr. Pink in 31 ms
Started  Brown
Done with Mr. Joe in 20 ms, Mr. Brown in 3 ms, Mr. Eddie in 37 ms, Mr. Blue in 59 ms
Started  Blonde
Done with Mr. White in 20 ms
Started  Orange
Done with Mr. Blonde in 15 ms
Done with Mr. Orange in 44 ms

documentation

var batch = require('batch-these')

batch.these(chunk, callback)

chunk

type: none | default: none

Data to be accumulated.

callback

Function the data is passed to when the time comes.
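As a minimal sketch (not taken from the package's README, and the exact flush timing is an assumption), calls made from the same source location are accumulated and the callback receives the whole batch once the wait time elapses:

var batch = require('batch-these');

['a', 'b', 'c'].forEach(function(item){
  // Same call site for every item, so all three chunks should land in one batch.
  batch.these(item, function(data){
    console.log('batched:', data.join(', ')); // likely: batched: a, b, c
  });
});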

batch.store([callback])

Sets how your chunks are stored. This is the default:

function batchStore(batch, chunk){
  batch.data = batch.data || [ ];
  batch.data.push(chunk);
};

batch.store() returns the current storer.
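For example, a custom storer with the same (batch, chunk) signature could be passed in. This is only a sketch, assuming the batch object can carry state the same way batch.data does; it skips chunks that are already queued:

var batch = require('batch-these');

batch.store(function dedupeStore(b, chunk){
  // Same shape as the default storer, but ignore duplicate chunks.
  b.data = b.data || [];
  if (b.data.indexOf(chunk) === -1) {
    b.data.push(chunk);
  }
});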

batch.filter([callback])

Decides how the chunks are accumulated. The default is:

function batchFilter(batch, caller){
  return batch.location === caller.location;
};

where location has the stack-trace format filename:lineNumber:columnNumber.

batch.filter() returns the current filter.
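As an illustration (a sketch, not part of the README), a custom filter with the same (batch, caller) signature could group chunks by file only, ignoring line and column numbers:

var batch = require('batch-these');

batch.filter(function fileFilter(b, caller){
  // location has the format filename:lineNumber:columnNumber, so stripping the
  // trailing :line:column groups every call from the same file into one batch.
  var file = function(loc){ return loc.replace(/:\d+:\d+$/, ''); };
  return file(b.location) === file(caller.location);
});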

batch.wait([ms])

ms

type: number | default: 0 milliseconds

Time in ms to wait in between batches.

batch.origin([handle])

handle

type: function | default: console.log

Function whose calls are tracked down for the batches.

how it works

Internally it uses callers-module to get a single stack-trace frame. From that frame it can figure out the exact location of the callback and, based on that, a batch is stored. For each location, the batch keeps waiting for new chunks using a timer. That's it.

The time to wait is set with batch.wait([ms]).

The origin from which the stack trace is taken is set with batch.origin([handle]).

NOTE: the package is devised to work hand in hand with process.stdout.write. That is, the package monkey-patches stdout in order to feed from its data.

Though it would need some changes as it is, it should work with any other function call, given a prior patch.
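To make the idea concrete, here is a simplified, stand-alone sketch of location-keyed batching. It is not the package's actual implementation: the real module derives the key from the caller's stack frame via callers-module, and its flush scheduling may differ.

var batches = {};
var WAIT = 10; // what batch.wait(10) would configure

function these(location, chunk, callback){
  var entry = batches[location] = batches[location] || { data: [] };
  entry.data.push(chunk);

  // One timer per location: keep waiting for new chunks, then flush them all.
  clearTimeout(entry.timer);
  entry.timer = setTimeout(function(){
    delete batches[location];
    callback(entry.data);
  }, WAIT);
}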

why

You would like to keep stdout writes to a bare minimum.

test

npm test

license