speedy

Node.js project

Tiny benchmark utility

Version: 0.0.2

Performs a benchmark of almost any test case.

It's basically Isaac Z. Schlueter's node-bench project, revisited and rewritten from scratch.

If you need to benchmark some piece of code, or if you are writing a module and want to see how well it performs compared with older versions, you can use speedy. For serious benchmarking, use more accurate and rigorous tools; speedy is intended for rapid prototyping and informal benchmarks.

This module doesn't check for errors because using domains causes the benchmark to produce inconsistent results. Make sure your code doesn't throw before running the benchmark.

Also, a recursive loop is used to measure the speed of the code. There are basically two ways to build such a loop: nextTick and setImmediate. nextTick cannot loop indefinitely because it has a maximum call stack limit (process.maxTickDepth), while setImmediate is slower than nextTick (see examples/nexttick-vs-setimmediate-vs-setTimeout.js) and produces inconsistent benchmark results. The solution is to use a hybrid approach (found in the node-bench source code):

var async = (function (){
	var i = 0;
	return function (fn){
		//Schedule with nextTick most of the time (it's fast), but fall back to
		//setImmediate every 100 calls to avoid exceeding the tick depth limit
		if (i++ < 100){
			process.nextTick (fn);
		}else{
			setImmediate (fn);
			i = 0;
		}
	};
})();

async (function (){
	//Asynchronous code
});

This works and produces consistent benchmark results, but it has a limitation: it cannot execute code that itself makes excessive recursive use of nextTick.
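
For illustration only, a hypothetical test case of the kind that cannot be benchmarked with this approach, because it recurses exclusively through nextTick (it uses the asynchronous done callback described below):

//Hypothetical test case: it loops via process.nextTick thousands of times
//per iteration, which can exceed the maximum tick depth and break the
//hybrid scheduling used by the benchmark loop
function nextTickHeavy (done){
	var i = 0;
	(function next (){
		if (i++ < 10000) return process.nextTick (next);
		done ();
	})();
}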

Installation

npm install speedy
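
Once installed as a dependency, require it by name as usual (the example in the next section uses a relative path, ../lib, because it is executed from inside the repository):

var speedy = require ("speedy");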

Example

var speedy = require ("../lib");

speedy.run ({
	"literal": function (){
		return {};
	},
	"constructor": function (){
		return new Object ();
	},
	"create": function (){
		return Object.create (Object.prototype);
	}
});

/*
File: object-creation.js

Node v0.10.12
V8 v3.14.5.9

Benchmarks: 3
Amplifier: 100
Time per test: 1000ms (1s 0ms)
Runs per test: 3 (+1)
Total time per test: ~4000ms (4s 0ms)
Total benchmark time: ~12000ms (12s 0ms)

Higher is better

literal
  142634.785
constructor
  36133.003
create
  8014.257

Benchmark finished
*/

Amplifier or timeout. Which one to use?

The amplifier increases the result value without varying the execution time, but accuracy decreases with very high amplification factors. The timeout also increases the result value, but it "can" become inconsistent with very long execution times or with other CPU-consuming programs running at the same time.

In practice, because time is a priority for rapid prototyping, the amplifier is the more appropriate choice.
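
A minimal sketch (not part of the original examples) showing the two options side by side:

//Option 1: increase the amplification factor (execution time stays the same)
speedy.amplifier (500);

//Option 2: increase the execution time per test (the benchmark takes longer)
//speedy.timeout (3000);

speedy.run (function fn (){});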

Functions

Descriptions

module.amplifier([n]) : undefined | Number
Changes or returns the amplification factor of the result. The higher this value, the higher the benchmark result. Very high amplification values decrease the benchmark precision. Default is 100.

Increment the value if the benchmark returns a number near 0.

speedy.amplifier (200);

module.run([name][, fn][, callback]) : undefined
Executes the benchmark. If a callback is passed, the raw data is passed to it as a parameter and nothing is printed to stdout.

The returned value is an array; each element stores the result of one test as an object with a raw property (an array with the results of every run) and a name property (the name of the test, if any). For example, a baseline benchmark with default settings (amplifier 100, runs 3, timeout 1000):

speedy.run (function fn (){}, function (data){
	console.log (data);

	/*
	[{
		name: "fn",
		raw: [217620.69306930693, 218626.13861386137, 218811.48514851485]
	}]
	*/
});

Asynchronous benchmarking

Simply execute the callback when you are ready to iterate again.

speedy.run (function (done){
	//...
	done ();
});
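
A slightly fuller sketch (hypothetical; it assumes a readable package.json in the working directory) benchmarking a real asynchronous operation:

var speedy = require ("speedy");
var fs = require ("fs");

speedy.run ("readFile", function (done){
	fs.readFile ("package.json", function (){
		//Iterate again once the asynchronous operation has finished
		done ();
	});
});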

Ways to run the benchmark and their results

Anonymous function:

speedy.run (function (){});

/*
<value>
*/

Named function:

speedy.run (function fn (){});

/*
fn
  <value>
*/

Using the name parameter:

speedy.run ("fn", function (){});

/*
fn
  <value>
*/

Batch:

speedy.run ({
	a: function (){},
	b: function (){},
	c: function (){}
});

/*
a
  <value>
b
  <value>
c
  <value>
*/

module.runs([n]) : undefined | Number
Changes or returns the number of runs per test. With more runs the final result will be more stable and exact. An arithmetic mean is applied to all of the results. Default is 3.

The first run is always ignored because it produces inconsistent values due to virtual machine warm-up. So if you set 3 runs, 4 runs will be executed.

speedy.runs (10);

module.timeout([n]) : undefined | Number
Changes or returns the execution time per test, in milliseconds. Higher values don't necessarily imply more exact results because over longer periods there is a greater chance of CPU usage changes. The higher this value, the higher the result.

speedy.timeout (100);
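
Putting it all together, a sketch (not part of the original examples) of a benchmark with all three settings adjusted:

var speedy = require ("speedy");

//Tweak the defaults before running the benchmark
speedy.amplifier (200);
speedy.runs (5);
speedy.timeout (500);

speedy.run ({
	"concat": function (){
		return "a" + "b";
	},
	"join": function (){
		return ["a", "b"].join ("");
	}
});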