benchr

Node.js benchmark runner, modelled after Mocha and bencha, based on Benchmark.js.

Installation

$ npm i benchr [-g]

Usage

Run the benchr script and provide it with files containing benchmarks:

$ benchr benchmarks/**/*.js
$ benchr benchmarks/suite1.js benchmarks/suite2.js benchmarks/suite3.js

Options

$ benchr -h
benchr – benchmark runner

Usage:
  benchr [options] <file>...

Options:
  -h --help                  Show this screen
  -V --version               Show version
  -d --delay=<s>             Delay between test cycles, in seconds       [default: 0]
  -M --min-time=<s>          Minimum run time per test cycle, in seconds [default: 0]
  -m --max-time=<s>          Maximum run time per test cycle, in seconds [default: 5]
  -g --grep=<s>              Only run suites matching pattern
  -R --reporter=<name/file>  Reporter to use, either a file path or built-in (`console` or `json`) [default: console]
  -r --require=<module>      `require()` a module before starting (for instance, `babel-register`)
  -p --progress              Show progress       (depending on reporter)
  -P --pretty-print          Pretty-print output (depending on reporter)
  -v --verbose               More verbose output (depending on reporter)
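
For example, to run only suites matching "Timeouts" with the built-in json reporter and pretty-printed output (an illustrative combination of the flags listed above):

$ benchr -g Timeouts -R json -P benchmarks/**/*.js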

Suites + benchmarks

A benchmark file declares one or more suites, each with one or more benchmarks to run.

Syntax

suite(NAME[, OPTIONS], FN);
benchmark(NAME[, OPTIONS], FN);

Calling suite() is optional.
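
For example, a file may contain bare benchmark() calls without a wrapping suite(); a minimal sketch:

benchmark('RegExp#test', () => {
  /o/.test('Hello World!');
});

benchmark('String#indexOf', () => {
  'Hello World!'.indexOf('o') > -1;
});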

Synchronous benchmarks

suite('Finding a substring', () => {

  benchmark('RegExp#test', () => {
    /o/.test('Hello World!');
  });

  benchmark('String#indexOf', () => {
    'Hello World!'.indexOf('o') > -1;
  });

  benchmark('String#match', () => {
    !!'Hello World!'.match(/o/);
  });

});

(taken from the example on the Benchmark.js website)

Asynchronous benchmarks

Using promises

Return a promise from a benchmark function and it will be tested asynchronously:

suite('Timeouts', () => {

  benchmark('100ms', () => {
    return new Promise((resolve) => {
      setTimeout(resolve, 100);
    });
  });

  benchmark('200ms', () => {
    return new Promise((resolve) => {
      setTimeout(resolve, 200);
    });
  });

});

NOTE: to determine if a function under test returns a promise, it is called once before the tests start. If this is undesirable, for instance due to side-effects, set the promises option for the benchmark or the entire suite:

suite('Timeouts',  { promises : true }, () => { ... });
benchmark('100ms', { promises : true }, () => { ... });
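
As a fuller sketch, declaring promises : true on the suite avoids the initial detection call for its benchmarks:

suite('Timeouts', { promises : true }, () => {

  benchmark('100ms', () => {
    // Not called ahead of time for promise detection, since the suite sets promises : true.
    return new Promise((resolve) => {
      setTimeout(resolve, 100);
    });
  });

});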

Using callbacks

If a benchmark function takes an argument, it is assumed to be a continuation callback (pass any error as the first argument to abort the test):

suite('Timeouts', () => {

  benchmark('100ms', (done) => {
    setTimeout(done, 100);
  });

  benchmark('200ms', (done) => {
    setTimeout(done, 200);
  });

});
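
To abort a benchmark, pass an error as the first argument to the callback; a sketch:

suite('Timeouts', () => {

  benchmark('may fail', (done) => {
    setTimeout(() => {
      // Passing an error as the first argument aborts the benchmark.
      done(new Error('something went wrong'));
    }, 100);
  });

});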

Playing nice with linters

To work around linter errors about suite and benchmark being undefined, your benchmark files can export a function that takes suite and benchmark as its arguments, keeping the linter happy:

module.exports = (suite, benchmark) => {
  suite('My test suite', () => {
    benchmark('Bench 1', ...);
    benchmark('Bench 2', ...);
    ...
  })
}
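
For instance, the synchronous example from above can be rewritten as an exported function (the file name is illustrative):

// benchmarks/substring.js
module.exports = (suite, benchmark) => {
  suite('Finding a substring', () => {
    benchmark('RegExp#test', () => {
      /o/.test('Hello World!');
    });
    benchmark('String#indexOf', () => {
      'Hello World!'.indexOf('o') > -1;
    });
  });
};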

Using the Runner programmatically

const Runner = require('benchr');
const runner = new Runner({
    reporter    : Function,
    grep        : String,
    delay       : Number,
    minTime     : Number,
    maxTime     : Number,
    progress    : Boolean,
    prettyPrint : Boolean,
    verbose     : Boolean,
}, [ "file1.js", "file2.js", ... ]);

All options map to the similarly-named command line options.
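
A hypothetical usage sketch; the run() call and the file names are assumptions, since only the constructor options are documented here:

const Runner = require('benchr');

const runner = new Runner({
  // A reporter function receives the runner instance (see the next section).
  reporter    : (r) => r.on('run.complete', () => console.log('done')),
  grep        : 'Timeouts',
  delay       : 0,
  minTime     : 0,
  maxTime     : 5,
  progress    : false,
  prettyPrint : false,
  verbose     : false,
}, [ 'benchmarks/timeouts.js' ]);

// NOTE: the run() call below is an assumption; check the package source
// for the actual method that starts a run.
runner.run();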

Implementing a reporter

A reporter is a CommonJS module that exports a function; that function gets passed a benchmark runner instance as its argument.

This runner instance implements the EventEmitter interface, and will emit the following events:

  • run.start / run.complete
  • file.start / file.complete
  • suite.start / suite.complete
  • benchmark.start / benchmark.complete

The different parts are explained as follows:

  • a "run" consists of one or more files containing benchmarks
  • a "file" is a file that exports or contains one or more suites
  • a "suite" is a structure that consists of one or more benchmarks (where each benchmark is used in a comparison between the other benchmarks in the same suite)
  • a "benchmark" is a single test

TODO

  • [x] Option to pass a custom reporter module
  • [x] Before/after hooks
  • [x] Benchmark/suite options (minTime, maxTime, ...)
  • [x] Separate reporters (very hardcoded now)
  • [x] Handle multiple "fastest" benchmarks better
  • [x] Promises support (just like Mocha)