
nodemark

A modern benchmarking library for Node.js, capable of generating statistically significant results.

Installation

npm install --save-dev nodemark

Usage

const benchmark = require('nodemark');

const result = benchmark(myFunction, setupFunction);
console.log(result); // => 14,114,886 ops/sec ±0.58% (7906233 samples)
console.log(result.nanoseconds()); // => 71

Statistical Significance

In benchmarking, it's important to generate statistically significant results. Thankfully, nodemark makes this easy:

  • The margin of error is calculated for you.
  • The noise caused by nodemark is factored out of the results.
  • The garbage collector is manipulated to prevent early runs from having an unfair advantage.
  • Executions that occur before V8 has had a chance to optimize the code (JIT) are ignored.

The combination of these things makes it a highly accurate measuring device. However, any benchmark done in JavaScript has its limits. If the average time measured by a benchmark is too small to be reliable (< 10ns), the results will be NaN in order to avoid providing misleading information.
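
For example, a minimal sketch of guarding against that case (assuming the result's numeric properties are NaN when the measurement is unreliable):

const result = benchmark(() => 1 + 1); // a subject this trivial may be too fast to measure
if (Number.isNaN(result.mean)) {
  console.log('Too fast to measure reliably');
}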

API

benchmark(subject, [setup, [duration]]) -> benchmarkResult

Runs a new benchmark. This measures the performance of the subject function. If a setup function is provided, it will be invoked before every execution of subject.

By default, the benchmark runs for about 3 seconds, but this can be overridden by passing a duration number (in milliseconds). Regardless of the desired duration, the benchmark will not finish until the subject has been run at least 10 times.
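
As a sketch, the following re-creates the input before every run and requests a roughly 1-second benchmark instead of the default 3 seconds:

let data;
const setup = () => { data = [3, 1, 2]; };                 // runs before every execution of the subject
const result = benchmark(() => data.sort(), setup, 1000);  // duration in milliseconds
console.log(result.toString());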

Both subject and setup can run asynchronously by declaring a callback argument in their signature. If you do this, you must invoke the callback to indicate that the operation is complete. When running an asynchronous benchmark, this function returns a promise. However, because subject and setup use callbacks rather than promises, synchronous errors will not automatically be caught.

benchmark(callback => fs.readFile('foo.txt', callback))
  .then(console.log);

There are no plans to support promises for subject and setup, because doing so would add too much overhead and yield inaccurate results.
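
If your code is promise-based, one workaround is a thin callback adapter around the promise (a sketch; fetchUser is a hypothetical promise-returning function, and the promise machinery itself will be included in the measurement):

benchmark(callback => {
  fetchUser(123).then(() => callback()); // signal completion once the promise settles
}).then(console.log);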

class BenchmarkResult

Each benchmark returns an immutable object describing the result of that benchmark. It has five properties:

  • mean, the average measured time in nanoseconds
  • error, the margin of error as a ratio of the mean
  • max, the slowest measured time in nanoseconds
  • min, the fastest measured time in nanoseconds
  • count, the number of times the subject was invoked and measured
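
For instance, the absolute margin of error in nanoseconds can be derived from mean and error (a sketch; myFunction is a placeholder):

const result = benchmark(myFunction);
const marginNs = result.mean * result.error; // error is a ratio, so multiply by the mean
console.log(`${result.mean}ns ±${marginNs}ns over ${result.count} samples`);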

.nanoseconds([precision]) -> number

Returns this.mean, rounded to the nearest whole number or to the number of decimal places specified by precision.

.microseconds([precision]) -> number

Same as .nanoseconds(), but the value is in microseconds.

.milliseconds([precision]) -> number

Same as .nanoseconds(), but the value is in milliseconds.

.seconds([precision]) -> number

Same as .nanoseconds(), but the value is in seconds.

.hz([precision]) -> number

Returns the average number of executions per second, rounded to the nearest whole number or the number of decimal places specified by precision.

.sd([precision]) -> number

Returns the standard deviation in nanoseconds, rounded to the nearest whole number or the number of decimal places specified by precision.
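
A sketch showing the unit and precision helpers side by side (assuming result is a BenchmarkResult from an earlier benchmark call):

console.log(result.nanoseconds());   // e.g. 71
console.log(result.microseconds(3)); // e.g. 0.071
console.log(result.milliseconds(6)); // e.g. 0.000071
console.log(result.hz());            // e.g. 14114886
console.log(result.sd(2));           // standard deviation in nanoseconds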

.toString([format]) -> string

Returns a nicely formatted string describing the result of the benchmark. By default, the "hz" format is used, which displays ops/sec, but you can optionally specify "nanoseconds", "microseconds", "milliseconds", or "seconds" to change the displayed information.
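
For example (the second output shape here is illustrative, not confirmed):

console.log(result.toString());          // => 14,114,886 ops/sec ±0.58% (7906233 samples)
console.log(result.toString('seconds')); // the mean expressed in seconds instead of ops/sec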

License

MIT