
@aldy505/kruonis

v0.0.1


:hourglass: kruonis :hourglass:

A tool to perform benchmarks on TypeScript.


Kruonis takes its name from the Lithuanian goddess of time, which is fitting: the tool measures the time it takes for code to run.

tl;dr

A Benchmark is a set of Tests.

When running a Benchmark, each Test is run several times; each run is called a test cycle.

Kruonis summarizes the performance statistics gathered across all cycles.

Note: we use performance-now to measure performance.
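For reference, performance-now exposes a single high-resolution timer function, so an individual measurement boils down to something like the sketch below (illustrative only, not Kruonis internals; doSomething is a placeholder):

import now from "performance-now"; // the default export is the timer function

function doSomething(): void {
    // placeholder for the code being measured
}

const start = now();              // fractional milliseconds since process start
doSomething();
const elapsedMs = now() - start;  // sub-millisecond precision
console.log("Took " + elapsedMs.toFixed(3) + " ms");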

Usage example

First, import kruonis' main classes:

import { Benchmark, Test } from "kruonis";

Then, let's create a benchmark:

let benchmark = new Benchmark();

Additionally, kruonis lets you pass your benchmark properties as an object to the constructor, such as:

benchmark = new Benchmark({ 'maxCycles': 50, 'name': 'Benchmark', 'maxTime': 15 });

The possible properties are available here.
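Based on the property names that appear in this README (name, maxCycles and maxTime above, plus minCycles in the pseudocode at the end), the options object roughly has the following shape. This is an assumed sketch, not the library's published typings:

// Assumed shape of the constructor options; field names are taken from the
// examples in this README, and the comments are illustrative only.
interface BenchmarkOptions {
    name?: string;       // benchmark name
    minCycles?: number;  // minimum number of cycles to run per test
    maxCycles?: number;  // maximum number of cycles to run per test
    maxTime?: number;    // time budget per test (see the library docs for the unit)
}

const options: BenchmarkOptions = { maxCycles: 50, name: "Benchmark", maxTime: 15 };
benchmark = new Benchmark(options);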

We can also define events for the benchmark class, such as:

benchmark
    .on('onBegin', (benchmark: Benchmark) => {
        // Code to run on the beginning of this benchmark
        // Example:
        console.log("Beginning of the benchmark")
    })
    .on('onTestBegin', (benchmark: Benchmark, test: Test) => {
        // Code to run before each test starts
        // Example:
        console.log("Running test: " + test.name)
    })
    .on('onTestEnd', (benchmark: Benchmark, test: Test) => {
        // Code to run after each test ends
        // Example:
        console.log("The stats of the test that just ran are: " + test.getStats())
    })
    .on('onEnd', (benchmark: Benchmark) => {
        // Code to run on the end of the benchmark (on the end of all tests)
        // Example:
        console.log("Ended benchmark")
    });

A benchmark consists of a set of tests, so we can add tests to a benchmark. Each Test can also have its own event handlers. For example:

// Example object for test
let testArray: number[];

benchmark
    .add(
        new Test('exampleTest1', () => {
            // Measure code performance of what goes here
            // Example:
            for (let i = 0; i < testArray.length; ++i)
                testArray[i] *= testArray[i];
        })
        .on('onBegin', (test: Test) => {
            // Code to execute before starting the cycle loop
            // Example:
            testArray = [1, 2, 3, 4, 5, 6];
        })
        .on('onCycleBegin', (test: Test) => {
            // Code to execute before each cycle
            // Example:
            console.log("Starting cycle");
        })
        .on('onCycleEnd', (test: Test) => {
            // Code to execute after each cycle ran
            // Example:
            testArray = [1, 2, 3, 4, 5, 6];
        })
        .on('onEnd', (test: Test) => {
            // Code to execute after all cycles have run
            // The Stats object with the cycle performances is now populated
            // Example:
            console.log("Finished running all cycles");
        })
    )
    .add(
        // Add another test ...
    );
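As a concrete (hypothetical) illustration, that second call to add could register another test built with the same API:

benchmark.add(
    new Test('exampleTest2', () => {
        // Measure an alternative implementation of the same work
        testArray = testArray.map((value) => value * value);
    })
    .on('onBegin', (test: Test) => {
        // Reset the input before the cycle loop, as in exampleTest1
        testArray = [1, 2, 3, 4, 5, 6];
    })
);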

After adding all tests, we can then run them using:

benchmark.run();

After running the benchmark, we can obtain the statistics of each test in several ways:

  1. Using the array returned by the benchmark.run() method. Example:
const results: Array<[string, Stats]> = benchmark.run();
for(let result of results) {
    console.log("Test name: " + result[0]);
    console.log("Test stats: " + result[1]);
}
  2. Adding an event listener for the onTestEnd event to the benchmark.
benchmark.on('onTestEnd', (benchmark: Benchmark, test: Test) => {
    console.log("Test name: " + test.name);
    console.log("Test stats: " + test.getStats());
}).run();
  3. Adding an event listener for the onEnd event to the benchmark.
benchmark.on('onEnd', (benchmark: Benchmark) => {
    for(let result of benchmark.getResults()) {
        console.log("Test name: " + result[0]);
        console.log("Test stats: " + result[1]);
    }
}).run();
  4. Adding an identical event listener to each test's onEnd event.
benchmark
    .add(
        new Test('exampleTest1', () => {
            // Code
        })
        .on('onEnd', (test: Test) => {
            console.log("Test name: " + test.name);
            console.log("Test stats: " + test.getStats());
        })
    )
    .add(
        // Similar for other tests
    )
    .run();

The statistics that Kruonis outputs are documented here. An example of a Stats object is:

{
    // The mean run time of the test
    'mean': 11.4,
    // The standard deviation
    'std': 3.352610922848042,
    // The number of ran cycles
    'count': 10,
    // The maximum time it took to run the test code
    'max': 18,
    // The minimum time it took to run the test code
    'min': 6
}
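As a small illustration (not part of Kruonis), those fields can be turned into a one-line summary. This assumes Stats is importable from kruonis, as implied by the run() return type above, and that the times are reported in milliseconds (the unit performance-now measures in):

import { Stats } from "kruonis";

// Hypothetical helper that pretty-prints one test result.
function formatStats(name: string, stats: Stats): string {
    return name + ": mean " + stats.mean.toFixed(2) + " ms"
        + " (std " + stats.std.toFixed(2)
        + ", min " + stats.min + " ms, max " + stats.max + " ms"
        + ", " + stats.count + " cycles)";
}

// Usage with the results returned by benchmark.run():
for (const [name, stats] of benchmark.run()) {
    console.log(formatStats(name, stats));
}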

How does it work?

The logic behind the benchmark.run() method (and the order in which the events are run) is:

# Benchmark scope
Benchmark.onBegin()

for each test of tests:
    Benchmark.onTestBegin()

    # Test scope
    test.onBegin()

    while (number of completed cycles < minCycles and
           time spent on cycles so far < maxTime and
           number of completed cycles < maxCycles)

        # Cycle scope
        test.onCycleBegin()

        runTestCode()

        test.onCycleEnd()

    test.onEnd()
    # Ended test

    Benchmark.onTestEnd()

Benchmark.onEnd()
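For readers who prefer code to pseudocode, here is a rough TypeScript rendering of the per-test part of that flow. It is a sketch that mirrors the pseudocode above, not the actual implementation, and it glosses over event emission on the Benchmark class and over unit handling for maxTime:

import now from "performance-now";

// Hypothetical hook and option shapes, only to make the sketch self-contained.
type TestHooks = {
    onBegin?: () => void;
    onCycleBegin?: () => void;
    onCycleEnd?: () => void;
    onEnd?: () => void;
};

function runSingleTest(fn: () => void, hooks: TestHooks,
                       opts: { minCycles: number; maxCycles: number; maxTime: number }): number[] {
    const cycleTimes: number[] = [];
    hooks.onBegin?.();
    const testStart = now();
    // Same loop condition as in the pseudocode above.
    while (cycleTimes.length < opts.minCycles &&
           now() - testStart < opts.maxTime &&
           cycleTimes.length < opts.maxCycles) {
        hooks.onCycleBegin?.();
        const cycleStart = now();
        fn();
        cycleTimes.push(now() - cycleStart);  // one entry per cycle
        hooks.onCycleEnd?.();
    }
    hooks.onEnd?.();
    return cycleTimes;  // per-cycle run times, from which the Stats are computed
}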