@fightmegg/scientist v1.0.2

Scientist

A JavaScript library for carefully refactoring critical paths; a port of GitHub's Scientist.

Installation

npm install @fightmegg/scientist

Usage

import Scientist from '@fightmegg/scientist';

const experiment = new Scientist('experiment #1');
experiment.use(() => 12); // old way
experiment.try(() => 10 - 8); // new way
experiment.publish(results => console.log(results)); // publish
const result = experiment.run();

// Sometimes we need to go async
const asyncExperiment = new Scientist('experiment #1', { async: true });
asyncExperiment.use(async () => 12); // old way (async or a promise)
asyncExperiment.try(async () => 10 - 8); // new way (async or a promise)
asyncExperiment.publish(results => console.log(results)); // publish
const asyncResult = await asyncExperiment.run();

How to Science

Let's pretend you are changing how the result of a calculation is obtained. Tests can help you refactor, but really you want to compare the current and refactored behaviours under load.

Pass your original code to the use function, and pass your new code / behaviour to the try function. experiment.run will always return whatever the use function returns, but it does a few things behind the scenes:

  • Decides whether or not to run the try block
  • Randomizes the order in which the use and try functions are run
  • Measures the duration of all behaviours
  • Compares the result of try to the result of use
  • Swallows and records any errors thrown in the try block
  • Publishes all of this information

The use function is called the control. The try function is called the candidate.

If you do not declare a candidate, the control is always run and its result returned.
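
For example (a minimal sketch based on the synchronous API shown above), an experiment with only a control behaves like calling the original code directly:

const controlOnly = new Scientist('experiment #2');
controlOnly.use(() => 12); // control only, no candidate declared
const result = controlOnly.run(); // result === 12; the control is simply run and returned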

Creating useful experiments

The examples above are rather basic: the try blocks don't run yet, and none of the results get published. Let's improve upon that:

const experiment = new Scientist('experiment #1', { async: true });

experiment.enabled(() => {
    // see "Ramping up experiments" below
    return true;
});

experiment.publish(results => {
    // see "Publishing results" below
    console.log(results);
});

experiment.use(async () => 12);
experiment.try(async () => 10 - 8);

const result = await experiment.run();

Controlling comparison

Scientist compares control and candidate values using lodash.isequal. To override this behaviour, use compare to define how observed values are compared instead:

const experiment = new Scientist('experiment #1', { async: true });

experiment.use(async () => 12);
experiment.try(async () => 10 - 8);

experiment.compare((control, candidate) => {
    return control.id === candidate.id;
});

const result = await experiment.run();

Keeping it clean

Sometimes you don't want to store the full value for later analysis. For example, an experiment may return a whole object instance, but you only care about a specific property. You can define how to clean these values in an experiment:

const experiment = new Scientist('experiment #1', { async: true });

experiment.use(async () => [{ id: 'Bob' }, { id: 'alice' }]); // returns full objects
experiment.try(async () => [{ id: 'alice' }, { id: 'Bob' }]);

experiment.clean((value) => {
    return value.map(n => n.id).sort();
});

const result = await experiment.run();

The cleaned value is then available on each observation in the final published result:

const experiment = new Scientist('experiment #1', { async: true });

// ...

experiment.publish((result) => {
    console.log(result.control.value);          // [{ id: 'Bob' }, { id: 'alice' }]
    console.log(result.control.cleaned_value);  // ['Bob', 'alice']
});

Ramping up experiments

As a scientist, you know it's always important to be able to turn your experiment off, lest it run wild. In order to control whether or not an experiment is enabled, you must include the enabled method in your implementation:

let percentEnabled = 100;

const experiment = new Scientist('experiment #1', { async: true });

// ...

experiment.enabled(() => {
    return percentEnabled > 0 && (Math.random() * 100) < percentEnabled;
});

const result = await experiment.run();

This function is invoked every time the experiment runs, so be mindful of its performance.

Publishing results

What's the point of being a Scientist if you can't publish your results?

You must implement the publish(result) callback, and you can publish the data however you like. For example, timing data could be sent to Graphite, and mismatches could be sent to a debugging service.

The structure of result is:

{
    "name": "<Experiment Name>",
    "matched": true,
    "execution_order": ["control", "candidate"],
    "error": false,
    "control": {
        "duration": 100,
        "value": 20,
        "cleaned_value": 20,
        "startTime": 100002320,
        "endTime": 1000021000,
        "error": "error ..."
    },
    "candidate": {
        "duration": 100,
        "value": 20,
        "cleaned_value": 20,
        "startTime": 100002320,
        "endTime": 1000021000,
        "error": "error ..."
    }
}
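
As a sketch of what a publish callback might do with this result (sendMetric and reportMismatch below are hypothetical helpers standing in for whatever metrics or logging client you use, not part of this library):

experiment.publish((result) => {
    // Forward timing data for both observations
    sendMetric(`science.${result.name}.control.duration`, result.control.duration);
    sendMetric(`science.${result.name}.candidate.duration`, result.candidate.duration);

    // Flag mismatches, using the cleaned values for readability
    if (!result.matched) {
        reportMismatch(result.name, {
            control: result.control.cleaned_value,
            candidate: result.candidate.cleaned_value,
        });
    }
});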

Handling errors

Scientist handles and tracks all errors raised in a try or use block. It also calls publish with the result, which contains the information about any errors.

In control code

If an error is thrown in a use block, it will eventually be re-thrown by the Scientist module (in order to mimic the real-world behaviour), so you will want to handle it properly.
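
A minimal sketch of handling that re-thrown error (handleError here is a hypothetical stand-in for your own error handling):

try {
    const result = await experiment.run();
    // use the result as normal
} catch (err) {
    // The control threw, so Scientist re-throws the error here;
    // handle it as you would without the experiment in place.
    handleError(err);
}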

In candidate code

If an error is thrown in a try block, it will be swallowed rather than thrown; you will want to handle it in your publish callback.
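
For example, a publish callback could check the candidate observation for a recorded error (a sketch; the logging is just console output):

experiment.publish((result) => {
    if (result.candidate && result.candidate.error) {
        // The candidate's error was swallowed by Scientist and recorded on the observation
        console.error(`candidate failed in ${result.name}:`, result.candidate.error);
    }
});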

Breaking the rules

Sometimes it can be useful to break the rules.

Run Control or Candidate only

There will be times when you cannot have both the control and candidate running sequentially (such as performing a database update, although I would not advise using this library for that).

To get around that, we can use the run_only method. This runs either the candidate or the control and returns the result from whichever one you ran:

const experiment = new Scientist('experiment #1', { async: true });

experiment.use(async () => 12);
experiment.try(async () => 10 - 8);

const candidateResult = await experiment.run_only('candidate');

// OR

const controlResult = await experiment.run_only('control');

In both scenarios above, the publish callback will still be invoked, but either the control or the candidate property will be null, depending on which one you ran.
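
A publish callback used with run_only therefore needs to tolerate a missing observation, for example:

experiment.publish((result) => {
    // Only one of the two observations is populated after run_only
    if (result.control) {
        console.log('control duration:', result.control.duration);
    }
    if (result.candidate) {
        console.log('candidate duration:', result.candidate.duration);
    }
});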

A/B Testing

Similar to the above, we can utilize the run_only method to perform A/B testing:

const experiment = new Scientist('experiment #1', { async: true });

experiment.use(async () => 12);
experiment.try(async () => 10 - 8);

const result = await experiment.run_only(['control', 'candidate'][Math.round(Math.random())]);

Maintainers

@olliejennings