

perfect

Run simple performance tests.

npm install perfect --save

With perfect you can run simple performance tests to see which variations of a piece of code (e.g. different algorithms that achieve the same thing) are the fastest.

Requirements

  • The source is not transpiled for older browsers (yet), so you may have to transpile it in your own build setup (see the sketch below).
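
For example, assuming a webpack + babel-loader build (this is only a sketch for that particular setup; adjust it to whatever tooling you use), you can opt perfect into transpilation by excluding everything in node_modules except perfect:

// webpack.config.js (sketch, assuming webpack + babel-loader)
module.exports = {
    module: {
        rules: [
            {
                test: /\.js$/,
                // transpile your own sources plus node_modules/perfect,
                // but skip the rest of node_modules
                exclude: /node_modules\/(?!perfect\/)/,
                use: 'babel-loader',
            },
        ],
    },
}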

Basic Usage

See the examples.

The API looks like this:

import { performanceTest } from 'perfect'

performanceTest({

    // describe the test (similar to describe() in other testing frameworks)
    description: 'compare sorting algorithms',

    iterations: 100000, // optional, default is 500000

    testCases: {

        // define a test case. The property key is the name of the test
        // (similar to it() in other test frameworks)
        'merge sort': {

            // setup anything you need for the test on `this`, which persists for
            // every iteration of the test. Called only once, before all iterations
            setup() {},

            // called before each iteration
            before() {},

            // The code to test. Called for each iteration. Receives start and end
            // markers. Wrap the specific part of the functionality you are testing
            // with start() and end() calls.
            test(start, end) {},

            // called after each iteration
            after() {},

            // called only once, after all iterations
            cleanup() {},
        },

        // define another test case
        'quick sort': {
            // ...
        },

        // ...define any number of test cases to compare performance of...
    },
})

The performanceTest function starts the performance test right away (synchronously). Async tests are not supported yet (support for them would be useful for testing async functionality like worker responses or network conditions).

Each test in testCases will be executed as many times as specified by iterations, which defaults to 500000.

The test function of each test case is what the performance test actually times. Inside a test function, call start() and end() to mark exactly which part of the function you care about timing. Calling start() and end() is completely optional, but you may want to use them for the most accurate results: if they are not called, the performance test times the entire execution of the test function. If you omit start(), the timer starts at the beginning of the test function's execution; likewise, if you omit end(), timing stops after the test function has returned.
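
As a rough illustration of the markers (the data and iteration count here are made up for the example), you can build the input inside the test function and time only the part you actually care about:

performanceTest({
    description: 'time only the sort, not the input generation',
    iterations: 10000,
    testCases: {
        'Array.prototype.sort': {
            test(start, end) {
                // untimed: generating the input is not what we want to measure
                const input = Array.from({ length: 1000 }, () => Math.random())

                start() // timing begins here
                input.sort((a, b) => a - b)
                end() // timing ends here
            },
        },
    },
})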

The setup() function is called once per test case, before any iterations call the test function. In all of these functions, this is a context object that persists across every invocation of the test case's functions. Use this in setup() to set up anything that you may need across all iterations of the test function.
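
For instance (again just an illustrative sketch of the API above), setup() can build an expensive fixture once and hang it off this so every iteration reuses it:

performanceTest({
    description: 'reuse a fixture created in setup()',
    testCases: {
        'sum with reduce': {
            setup() {
                // built once per test case, shared with the other hooks via `this`
                this.numbers = Array.from({ length: 100000 }, (_, i) => i)
            },
            test(start, end) {
                start()
                this.numbers.reduce((sum, n) => sum + n, 0)
                end()
            },
        },
    },
})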

before and after are called during each iteration, immediately before and after the test function is called. This is useful for resetting state, incrementing variables, etc.

Finally, cleanup is called once, after all iterations have finished. Use it to clean up anything you no longer need, for example if your tests are part of a long-running application and you don't want things left over in memory. Plus, it can be nice to reset the playing field between test cases, if you need to.
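
Putting the hooks together, here is an illustrative sketch (the algorithms and sizes are arbitrary) that compares two ways of sorting the same data. before() hands each iteration a fresh copy so in-place sorting can't skew later runs, and cleanup() drops the shared data once a test case is done:

performanceTest({
    description: 'compare two ways of sorting the same data',
    iterations: 5000,
    testCases: {
        'built-in sort': {
            setup() {
                this.source = Array.from({ length: 2000 }, () => Math.random())
            },
            before() {
                // fresh copy each iteration so one run can't affect the next
                this.input = this.source.slice()
            },
            test(start, end) {
                start()
                this.input.sort((a, b) => a - b)
                end()
            },
            cleanup() {
                // release references once all iterations are done
                this.source = null
                this.input = null
            },
        },
        'insertion sort': {
            setup() {
                this.source = Array.from({ length: 2000 }, () => Math.random())
            },
            before() {
                this.input = this.source.slice()
            },
            test(start, end) {
                start()
                const arr = this.input
                for (let i = 1; i < arr.length; i++) {
                    const value = arr[i]
                    let j = i - 1
                    while (j >= 0 && arr[j] > value) {
                        arr[j + 1] = arr[j]
                        j--
                    }
                    arr[j + 1] = value
                }
                end()
            },
            cleanup() {
                this.source = null
                this.input = null
            },
        },
    },
})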