
cathedra

A JavaScript microbenchmarking tool.

Installation

You'll need all three of these: the core library (cathedra), the default presenter, and the CLI.

npm i -D cathedra cathedra-default-presenter cathedra-cli

Usage

Simple example

The simplest way to create a benchmark is to import suite from cathedra, and wrap the functions you want to compare with it:

const { suite } = require('cathedra')
const { doFoo, doBar, doFooBar } = require('./costlyFunctions')

// Wrap the functions you want to compare in a single suite
const exampleSuite = suite(
  doFoo,
  doBar,
  doFooBar
)

// Export the suite so the CLI can pick it up and run it
module.exports = exampleSuite

Save this as bench.js and run it with the following command (either add it to the scripts section of your package.json, or install cathedra-cli globally):

cathedra bench.js

The cathedra command accepts a glob pattern. If you supply it the benchmarks/*.js pattern, for example, it will try to run all of the JavaScript files in the benchmarks folder.
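If you go the scripts route mentioned above, a package.json entry along these lines should work (a sketch; the script name "bench" is just an example):

"scripts": {
  "bench": "cathedra benchmarks/*.js"
}

You can then run your benchmarks with npm run bench.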

Configuration

Once you've created a suite or a benchmark, you can configure it by currying as many times as you want. Each new call returns a new, clean instance with the merged configuration, so you can safely share config while keeping previously created suites or benchmarks intact. Configuration from suites is passed down to children (benchmarks or other suites), but only if the given configuration is not specified ("overridden") in the children.

const { suite, benchmark, milliseconds } = require('cathedra')
const { doFoo, doBar, doFooBar } = require('./costlyFunctions')

// We want to configure the benchmarking of this function strictly
const fooBench = benchmark(doFoo)({
  name: 'doFoo',
  until: milliseconds(2500) // The most specific config always wins
})

const exampleSuite3000ms = suite(fooBench, doBar, doFooBar)({
  until: milliseconds(3000), // All benchmarks run for 3000 ms except fooBench
  name: 'example running for 3 seconds'
})

// This suite overrides until and name from exampleSuite3000ms, but has the same
// children (fooBench - doFoo, doBar, and doFooBar) and leaves until in fooBench intact.
const exampleSuite5000ms = exampleSuite3000ms({
  until: milliseconds(5000),
  name: 'example running for 5 seconds'
})

// The exported suite will run both child suites. Both child suites run the same
// functions, but they have different names and run the tests for 3 and 5 seconds
// respectively, except the function doFoo (wrapped in fooBench), which runs for
// 2.5 seconds because we configured fooBench that way.
module.exports = suite(
  exampleSuite3000ms,
  exampleSuite5000ms
)

Configuration options

The full list of configuration options you can supply to either a suite or a benchmark (a sketch using several of them follows below):

  • name - The name of the benchmark or suite. By default, benchmarks use the name of the given function (or "unknown benchmark" if not available), and suites use "unknown suite"
  • until - A function used to determine how long the benchmark should run. The API is subject to change, so please use the helpers exported by the cathedra package, such as milliseconds
  • initialize - A function returning an array of arguments passed to before, after and fn. Useful when you want some heavy sample data and you don't want to pollute your benchmarking function with its creation
  • before - A function that runs before fn is repeatedly run. Receives arguments from initialize
  • after - A function that runs after fn is repeatedly run. Receives arguments from initialize
  • now - A function returning the current time in milliseconds. By default, Date#now is used on Node and performance.now in browsers.

Specific to benchmarks

  • fn - The function to run repeatedly. Receives arguments from initialize. Most of the time you shouldn't pass this manually as configuration, but you have the option.

Specific to suites

  • children - An array of either benchmarks or other suites. Most of the time you shouldn't pass this manually as configuration, but you have the option.
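
To tie the options together, here is a sketch (not taken from the package itself) of a single benchmark using several of them; the sorting function and sample data are made up, but the benchmark(fn)(config) call shape follows the earlier examples:

const { benchmark, milliseconds } = require('cathedra')

// Benchmark a numeric sort on sample data built once by initialize
const sortBench = benchmark(arr => arr.slice().sort((a, b) => a - b))({
  name: 'numeric sort',      // overrides the name derived from the function
  until: milliseconds(1000), // run repeatedly for roughly one second
  // Heavy sample data is created here instead of inside the benchmarked function
  initialize: () => [Array.from({ length: 10000 }, () => Math.random())],
  before: numbers => console.log(`sorting ${numbers.length} numbers`), // receives the initialize arguments
  after: () => console.log('done')
})

module.exports = sortBench

Save it as, say, sort.bench.js and run it with cathedra sort.bench.js, just like the earlier example.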

Why the name "cathedra"?

Since "cathedra" was the only chair-like synonym to bench on thesaurus.com that wasn't an npm package already, I took it.

Contributing

Feel free to open an issue if you are missing a feature or find a bug. PRs are also welcome in both cases; if you open one, make sure the tests (npm test) and the linter (npm run lint) pass. If you add new features or modify existing ones, please add new tests as well.