

raptor-compare

Compare sets of Raptor results and test for their statistical significance (t-test with 0.05 alpha).

$ raptor-compare my_test.ldjson

music.gaiamobile.org   base: mean  1: mean  1: delta  1: p-value
---------------------  ----------  -------  --------  ----------
navigationLoaded              711      726        14        0.06
navigationInteractive         737      748        12        0.10
visuallyLoaded               1322     1217      -105      * 0.00
contentInteractive           1323     1217      -105      * 0.00
fullyLoaded                  1462     1442       -20        0.14
uss                        19.881   20.370     0.489      * 0.00
pss                        23.468   23.981     0.513      * 0.00
rss                        39.640   40.152     0.512      * 0.00

In the example above, the Raptor measurements for the Music app show a statistically significant improvement for the visuallyLoaded and contentInteractive events, as indicated by the asterisks next to their p-values. At the same time, we can see that the memory footprint has regressed: the mean uss usage is higher than in the base measurement, and the difference is statistically significant as well.

For all measurements marked with an asterisk (*), it is valid to assume that the means are indeed significantly different between the base and the try runs.

The remaining results, e.g. the 20 ms fullyLoaded speed-up, are not significant and might be caused by random noise in the data. Try increasing the sample size (via Raptor's --runs option; see below) and run Raptor again.

What is p-value?

The p-value is a concept from statistical testing that quantifies the risk of drawing a wrong conclusion from the data. A low p-value means that there is only a small risk of being wrong when we conclude that the means are truly different and that the observed differences are not due to poor sampling or randomness.

For the data above, a p-value of 0.14 for fullyLoaded means that the risk of being wrong is 14% when we conclude that the 20 ms difference between the means is due to an actual code change and not to randomness.

Good p-values are below 0.05.
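
For illustration, here is a minimal sketch of how such a two-sample t-test can be computed in Node.js. The Welch's variant, the jStat library, and the sample numbers are all assumptions made for the example; raptor-compare's actual implementation may differ.

// Minimal sketch of a two-sample t-test (Welch's variant assumed;
// raptor-compare's internals may differ). Uses jStat for the t CDF.
const { jStat } = require('jstat');

const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;

// Unbiased sample variance.
const variance = xs => {
  const m = mean(xs);
  return xs.reduce((a, x) => a + Math.pow(x - m, 2), 0) / (xs.length - 1);
};

// Two-tailed p-value for the difference of the two sample means.
function tTest(base, next) {
  const v1 = variance(base) / base.length;
  const v2 = variance(next) / next.length;
  const t = (mean(base) - mean(next)) / Math.sqrt(v1 + v2);
  // Welch-Satterthwaite approximation of the degrees of freedom.
  const df = Math.pow(v1 + v2, 2) /
    (Math.pow(v1, 2) / (base.length - 1) + Math.pow(v2, 2) / (next.length - 1));
  return 2 * (1 - jStat.studentt.cdf(Math.abs(t), df));
}

// Hypothetical navigationLoaded samples; p < 0.05 earns an asterisk.
const p = tTest([711, 709, 714, 710], [726, 728, 723, 727]);
console.log(p < 0.05 ? '* ' + p.toFixed(2) : p.toFixed(2));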

Installation

$ npm install -g raptor-compare

Running Raptor tests

(For best results, follow the Raptor guide on MDN.)

Install Raptor with:

$ sudo npm install -g @mozilla/raptor

Connect your device to the computer, go into your Gaia directory, and build Gaia:

$ make raptor

Then, run the desired perf test:

$ raptor test coldlaunch --runs 30 --app music --metrics my_test.ldjson

Raptor will print the output to stdout. The raw data will be saved in the ldjson file specified via the --metrics option. The data is appended, so you can run multiple tests for different revisions and apps, and raptor-compare will figure out how to handle it. All testing is conducted relative to the first result set for the given app.
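
LDJSON here means line-delimited JSON: each line of the metrics file is one standalone JSON object, which is why Raptor can simply append new runs. As a minimal sketch (independent of raptor-compare's own reader, which consumes a stream), such a file can be read like this:

// Read an .ldjson file: one JSON object per line, blank lines skipped.
const fs = require('fs');

const readLdjson = path =>
  fs.readFileSync(path, 'utf8')
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line));

const records = readLdjson('my_test.ldjson');
console.log(records.length + ' measurements loaded');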

API

You can also use raptor-compare programmatically. It exposes three functions for working with Raptor data: read reads in an LDJSON stream with the raw metrics data, parse aggregates the data into a Map, and build creates the comparison tables with p-values for significance testing.

// Needed for Node.js 0.10 and 0.12, which lack the ES2015 features used here.
require('babel/polyfill');

const fs = require('fs');
const compare = require('raptor-compare');

// Path to the .ldjson file produced by Raptor's --metrics option.
const filename = process.argv[2];

compare.read(fs.createReadStream(filename))   // read the raw LDJSON stream
  .then(compare.parse)                        // aggregate the records into a Map
  .then(compare.build)                        // build comparison tables with p-values
  .then(tables => tables.forEach(
    table => console.log(table.toString())))
  .catch(console.error);