
benchmark-meter

v1.0.0 – Published

benchmark-meter is a straightforward benchmarking tool designed for measuring the performance of algorithms.

Downloads: 958

Readme

benchmark-meter

benchmark-meter is a straightforward benchmarking tool designed for measuring the performance of algorithms.

Usage

Install using npm:

npm i benchmark-meter -D

Using with Synchronous Algorithms

To use benchmark-meter with synchronous algorithms, follow the steps below:

// Load the module using an ESM import
import { Benchmark } from 'benchmark-meter';

// Alternatively, with CommonJS:
// const { Benchmark } = require('benchmark-meter');

// Create a new instance of the Benchmark class
const benchmark = new Benchmark();

// The add method takes three arguments:
// 1. The name of the algorithm
// 2. A callback containing the algorithm's logic
// 3. (Optional) A number specifying how many times your algorithm will be executed
benchmark.add('count to 100_000', () => {
    let sum = 0;
    for (let i = 0; i < 100_000; i += 1) {
        sum += 1;
    }
});

// The run method will execute all the added algorithms
benchmark.run().then((result) => console.log(result.get()));

Using with Asynchronous Algorithms

If you plan to use benchmark-meter with asynchronous algorithms, follow the adjusted structure below:

const benchmark = new Benchmark();

// For asynchronous callbacks, you can use the async/await syntax directly
benchmark.add('promise', async () => {
    await promise();
});

// If your callback is not async, return the promise so the run can await it
benchmark.add('promise', () => {
    return promise();
});

// The promise must be awaited or returned; a fire-and-forget callback
// resolves before the work finishes, so the measurement ends too early.

benchmark.run().then((result) => console.log(result.get()));
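The reason for this structure can be illustrated without the library: a runner can only await work that the callback returns. A minimal, self-contained sketch (the `returnsAwaitableWork` helper is for illustration only, not part of benchmark-meter):

```javascript
// Hypothetical illustration: a runner can only detect and await asynchronous
// work if the callback returns a promise (async functions always do).
function returnsAwaitableWork(fn) {
    const result = fn();
    return result instanceof Promise;
}

// An async callback, or one that returns its promise, is awaitable:
returnsAwaitableWork(async () => { /* ... */ }); // → true
returnsAwaitableWork(() => Promise.resolve());   // → true

// A fire-and-forget callback is not; the timer would stop too early:
returnsAwaitableWork(() => { Promise.resolve(); }); // → false
```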

Adding Multiple Algorithms

To include more than one algorithm, you can use the add method multiple times, as demonstrated in the example below:

const benchmark = new Benchmark();

// Algorithm 1: Count to 100_000
benchmark.add('count to 100_000', () => {
    let sum = 0;
    for (let i = 0; i < 100_000; i += 1) {
        sum += 1;
    }
});

// Algorithm 2: Count to 1_000_000
benchmark.add('count to 1_000_000', () => {
    let sum = 0;
    for (let i = 0; i < 1_000_000; i += 1) {
        sum += 1;
    }
});

// Algorithm 3: Asynchronous Promise
benchmark.add('promise', async () => {
    await promise();
});

// Run the benchmark and log the results
benchmark.run().then((result) => console.log(result.get()));

Modifying Execution Frequency

You can adjust how many times each algorithm is executed, trading precision against total benchmark duration. By default, every algorithm runs 10 times. To customize this, follow the examples below:

Adjusting Frequency for All Algorithms:

const config = {
    repeat: 20,
}

const benchmark = new Benchmark(config);

// Continue with the rest of your code...

In this case, every algorithm added to the benchmark executes 20 times, improving result precision at the cost of a longer benchmark run.

Specifying Frequency for Individual Algorithms:

const config = {
    repeat: 20,
}

const benchmark = new Benchmark(config);

// Algorithm 1: Count to 100_000 (executed 20 times)
benchmark.add('count to 100_000', () => {
    let sum = 0;
    for (let i = 0; i < 100_000; i += 1) {
        sum += 1;
    }
});

// Algorithm 2: Count to 1_000_000 (executed 5 times)
benchmark.add('count to 1_000_000', () => {
    let sum = 0;
    for (let i = 0; i < 1_000_000; i += 1) {
        sum += 1;
    }
}, 5);

// Continue with the rest of your code...

Here, different execution frequencies are specified per algorithm, letting you fine-tune the benchmark to the specific requirements of each one.
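The precedence implied by the examples above can be sketched as a tiny rule (the `effectiveRepeat` helper is hypothetical, not part of the library's API): a repeat count passed to `add` wins over the config value, and the config value wins over the default of 10.

```javascript
// Hypothetical sketch of the repeat precedence described above:
// per-algorithm value > config value > default (10).
function effectiveRepeat(configRepeat, perAlgorithmRepeat) {
    return perAlgorithmRepeat ?? configRepeat ?? 10;
}

effectiveRepeat(20, 5);                // → 5  (third argument to add wins)
effectiveRepeat(20, undefined);        // → 20 (config value applies)
effectiveRepeat(undefined, undefined); // → 10 (library default)
```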

Retrieving Benchmark Results

After running your algorithms, you will receive an instance of DataResult, which facilitates the display of your benchmark results.

Using the get Method

The get method provides the results of your benchmark in an array, following the order in which the algorithms were added:

benchmark.run().then((result) => console.log(result.get()));

Using the fastestToSlowest Method

The fastestToSlowest method returns an array where the first position (index 0) corresponds to the fastest algorithm, and the last position represents the slowest algorithm:

benchmark.run().then((result) => console.log(result.fastestToSlowest()));

Using the fastest Method

To obtain details about the fastest algorithm, use the fastest method, which returns an object describing it:

benchmark.run().then((result) => console.log(result.fastest()));

Additional Result Retrieval Methods

In addition to the previously mentioned methods, there are two more methods that function in a similar manner:

// returns an array where the first position represents the slowest algorithm, 
// and the last position corresponds to the fastest algorithm
benchmark.run().then((result) => console.log(result.slowestToFastest()));

// returns an object containing details about the slowest algorithm
benchmark.run().then((result) => console.log(result.slowest()));
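The orderings these methods return can be sketched with a plain sort by average duration. Note the result-entry shape below is assumed for illustration; the real DataResult entries may carry different fields:

```javascript
// Assumed result shape for illustration only.
const results = [
    { name: 'count to 1_000_000', average: 4.2 },
    { name: 'count to 100_000', average: 0.5 },
    { name: 'promise', average: 1.1 },
];

// fastestToSlowest: ascending average duration
const fastestToSlowest = [...results].sort((a, b) => a.average - b.average);

// slowestToFastest: descending average duration
const slowestToFastest = [...results].sort((a, b) => b.average - a.average);

// fastest / slowest: the first entry of each ordering
const fastest = fastestToSlowest[0]; // → { name: 'count to 100_000', ... }
const slowest = slowestToFastest[0]; // → { name: 'count to 1_000_000', ... }
```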

Result Order Clarification

For all the methods mentioned earlier (fastestToSlowest, fastest, slowestToFastest, slowest), the results are ordered based on the average duration of execution. This sorting approach is selected because arranging results solely by the fastest or slowest execution does not offer conclusive insights.

The average duration is calculated by summing up all the execution durations and dividing that sum by the number of times the algorithm was executed. This method provides a more representative measure of performance, considering variations in execution times across multiple repetitions.
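That calculation is just an arithmetic mean over the recorded durations:

```javascript
// Average duration as described: the sum of all execution durations
// divided by the number of executions.
function averageDuration(durations) {
    const sum = durations.reduce((total, d) => total + d, 0);
    return sum / durations.length;
}

// e.g. three repetitions taking 10 ms, 12 ms and 14 ms average to 12 ms
averageDuration([10, 12, 14]); // → 12
```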

Handling Errors in Algorithms

When your algorithm has the potential to throw an error, it's crucial to handle it appropriately. Errors thrown during the benchmark will halt the execution. Here's how you should handle errors in your algorithms:

// Incorrect: if the algorithm throws, the entire benchmark run stops
benchmark.add('can throw an Error', () => {
    canThrowAnError();
});

// Correct: catch the error inside the callback so the run continues
benchmark.add('can throw an Error', () => {
    try {
        canThrowAnError();
    } catch (error) {
        console.log(error);
    }
});
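If several algorithms can throw, the try-catch can be factored into a small wrapper. The `safe` helper below is hypothetical, not part of benchmark-meter:

```javascript
// Hypothetical helper: wraps a risky callback so a thrown error is
// logged instead of halting the whole benchmark run.
function safe(fn) {
    return () => {
        try {
            return fn();
        } catch (error) {
            console.log(error);
        }
    };
}

// Usage: benchmark.add('can throw an Error', safe(canThrowAnError));
```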

Using with API Calls

Using benchmark-meter against live APIs is not recommended, especially with a repeat option greater than 1: the repeated calls can trigger HTTP Error 429 (Too Many Requests). If the API is still under development and you control it, this may be acceptable.
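The scale of the problem is easy to see: each repetition issues one request, so total traffic grows as repeat × number of algorithms. A self-contained sketch with a fake, counting API stand-in (no real network involved):

```javascript
// Self-contained sketch: count how many calls a repeated benchmark
// would issue against an API endpoint.
let calls = 0;
const fakeApiCall = () => { calls += 1; };

// A repeat of 20, as in the config example above, means 20 requests
// for a single algorithm; a real API may answer with HTTP 429.
const repeat = 20;
for (let i = 0; i < repeat; i += 1) {
    fakeApiCall();
}
// calls === 20
```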

Author