
discrete-spsa

v0.1.2

Discrete Simultaneous Perturbation Stochastic Approximation algorithm by Hill, 2005


discrete-spsa


An implementation of the Discrete Simultaneous Perturbation Stochastic Approximation algorithm.

See Stacy D. Hill, "Discrete Stochastic Approximation with Application to Resource Allocation", 2005 (pdf)

The algorithm is suitable for optimization tasks in feedback control and simulation-based optimization. It iteratively improves an array of interdependent parameters based on successive benchmarks. The algorithm requires only 2 benchmark runs per iteration regardless of the number of parameters, so it suits optimization problems with a large number of parameters that are improved simultaneously from run to run. Another advantage is that the algorithm works well with unreliable benchmarks: no averaging or other measures to guarantee the statistical significance of benchmark results are required.

Common examples of problems this algorithm is suitable for include finding buffer sizes and worker counts in distributed systems, finding optimal memory and CPU frequencies in overclocking scenarios, and so on.

The Algorithm

On each iteration:

  • a random direction of perturbation is chosen
  • the benchmark is run twice, with a positive and a negative perturbation of the current guess in that direction
  • the stochastic gradient is calculated from the two benchmark results
  • the next guess is obtained by stepping the current guess along the gradient, scaled by the learning rate parameter

In pseudo-code:

{ xs, step } = perturb(initialParameters, learningRate)
ys[0] = benchmark(xs[0])
ys[1] = benchmark(xs[1])
improvedParameters = step(ys)

Step by step:

  1. Choose initial values for each parameter and feed them to the algorithm
  2. Run the benchmark twice with the two parameter sets suggested by the algorithm and collect the results
  3. Feed the results to the algorithm to get the improved parameters
  4. If the improved parameters are out of range, fix them by rounding and clipping to the closest valid values
  5. Use the improved parameters in the next iteration
  6. Stop when benchmark results stop improving
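
The loop above can be sketched as a self-contained example. This is an illustrative reimplementation of the perturb()/step() contract, not the library's actual code; the quadratic benchmark, the starting guess, and the 0.25 learning rate are made up for the demo.

```typescript
// Illustrative sketch of one discrete-SPSA iteration; not the library's
// actual implementation.
function perturb(guess: number[], learningRate: number) {
  // Pick a random perturbation direction: each component is +1 or -1.
  const delta = guess.map(() => (Math.random() < 0.5 ? -1 : 1));
  // The two parameter sets to benchmark: positive and negative perturbation.
  const xs = [
    guess.map((g, i) => g + delta[i]),
    guess.map((g, i) => g - delta[i]),
  ];
  // step() maps the two benchmark results to the next (rounded) guess.
  const step = (ys: [number, number]) =>
    guess.map((g, i) => {
      const gradient = (ys[0] - ys[1]) / (2 * delta[i]); // stochastic gradient
      return Math.round(g - learningRate * gradient);
    });
  return { xs, step };
}

// Toy benchmark to minimize: (a - 3)^2 + (b + 2)^2 over integer parameters.
const benchmark = ([a, b]: number[]) => (a - 3) ** 2 + (b + 2) ** 2;

let guess = [10, 10];
for (let i = 0; i < 200; i++) {
  const { xs, step } = perturb(guess, 0.25);
  guess = step([benchmark(xs[0]), benchmark(xs[1])]);
}
// guess should now be near the optimum [3, -2]
```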

API Documentation

The perturb() function implements the algorithm. The rest of the library consists of wrappers that make it easier to use.

perturb() accepts currentGuess[] and learningRate, and returns xs[] and step()

  • currentGuess: an array of numbers, the current parameters
  • learningRate: a positive or negative number. Use positive numbers to minimize benchmark results and negative numbers to maximize them. Larger absolute values mean larger jumps at each iteration; use smaller values as you get closer to the optimum.
  • xs is a 2-element array. Run the benchmark twice, once with each provided set of parameters.
  • Feed the benchmark results, as an array of 2 numbers, to the step() function to obtain improvedParameters
  • If the improved parameters are out of range, fix them: round them and clip them to the closest valid values. Many methods of rounding and of choosing the closest valid values are possible.
  • Run the next iteration
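
The fixing step might look like the following clamp-and-round helper; the [1, 64] valid range is an invented example for something like a worker count:

```typescript
// Hypothetical fix step: round each improved parameter to an integer and
// clip it to a valid range. The [1, 64] bounds are a made-up example.
const fix = (params: number[]): number[] =>
  params.map((p) => Math.min(64, Math.max(1, Math.round(p))));
```

For example, fix([0.4, 70.2, 12.6]) yields [1, 64, 13].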

iterate() and iterateSync() use perturb() to run steps 1-4 in an asynchronous and a synchronous manner, respectively

optimize() runs all 6 steps in a loop. It accepts an options object in which every field is mandatory:

  • iterations: number of iterations to perform
  • initialGuess: initial parameters vector
  • learningRate: the learning rate numeric parameter, see above
  • fn: the benchmark function. Accepts a parameters vector, returns a promise of a number.
  • fix: accepts a parameters vector, returns a fixed (rounded and clipped) version
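
A call shaped like the options described above might look as follows. So that the sketch runs on its own, it includes a minimal stand-in for optimize() built from the same perturbation logic; the real library's option handling may differ, and the worker-count benchmark is invented.

```typescript
// Minimal stand-in for spsa.optimize() so the call shape below is runnable;
// the real library's implementation may differ.
async function optimize(opts: {
  iterations: number;
  initialGuess: number[];
  learningRate: number;
  fn: (params: number[]) => Promise<number>;
  fix: (params: number[]) => number[];
}): Promise<number[]> {
  let guess = opts.initialGuess;
  for (let i = 0; i < opts.iterations; i++) {
    // One SPSA iteration: perturb, benchmark twice, step, fix.
    const delta = guess.map(() => (Math.random() < 0.5 ? -1 : 1));
    const y0 = await opts.fn(guess.map((g, j) => g + delta[j]));
    const y1 = await opts.fn(guess.map((g, j) => g - delta[j]));
    guess = opts.fix(
      guess.map((g, j) => g - (opts.learningRate * (y0 - y1)) / (2 * delta[j])),
    );
  }
  return guess;
}

// Invented example: tune a single worker-count parameter whose toy
// benchmark is minimized at 8 workers.
optimize({
  iterations: 20,
  initialGuess: [16],
  learningRate: 0.25,
  fn: async ([workers]) => (workers - 8) ** 2, // toy benchmark to minimize
  fix: (params) => params.map((p) => Math.max(1, Math.round(p))),
}).then((best) => console.log("tuned parameters:", best));
```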

0.2 roadmap

  • Field-test with real problems
  • Jupyter visualizations

0.1 (current)

  • Synchronous spsa.iterateSync()
  • Asynchronous spsa.iterate() (utilizes Bluebird promises)
  • High-level spsa.optimize(), asynchronous only
  • Disk seek concurrency optimization demo in examples/ncq.ts
  • Network buffering optimization demo in examples/ping.ts