Fencache.js

This is a JavaScript function memoizer that uses an array-based structure for storage. It has different performance characteristics from most JavaScript memoizers, which use the native object type for storage, and it can work much faster for certain algorithms.

Differences

The keys of a JavaScript object must be strings, so when floats are stored as object keys (to memoize their return values) they are transparently cast to strings, which hinders performance for algebraic functions. Functions that take object references as parameters also require extra effort to memoize because of the casting to string. Fencache's storage involves no casting.

Object-store-based memoizers can also grow excessively large if supplied with many unique values. Fencache.js holds a limited number of entries: it forgets the entries that are recalled least and promotes the most frequent ones so they are accessed most quickly.
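
A quick illustration of the casting that a plain-object store performs (this is standard JavaScript behaviour shown for contrast; it is not fencache code):

  const objCache = {}
  objCache[0.5] = Math.sin(0.5)
  Object.keys(objCache)       // [ "0.5" ] - the float key became a string

  // object keys are no better: distinct objects collapse to one string
  const a = { id: 1 }, b = { id: 2 }
  objCache[a] = "first"
  objCache[b]                 // "first" - both keys stringify to "[object Object]"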

Storage structure

Two parallel arrays store the calculation input/key values and output/return values. The arrays have a head section in which entries are incrementally bubble-sorted on every hit; the remaining space is a ring buffer that receives any missed calculations, and any hit there is swapped into the lowest position of the sorted section. If this arrangement were a standard thing it might be called a "nose-ring cache". It is two different abstract data types combined at a low level for efficiency, implemented mostly in the private function nsrng.
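
A conceptual sketch of that layout (an illustration of the idea only, not the package's actual nsrng code; makeNoseRing, headSize and the linear scan are simplifications for clarity):

  // Two parallel arrays; indices [0, headSize) are the incrementally
  // sorted "nose", the rest of the space is a ring buffer for misses.
  function makeNoseRing( calc, size, headSize ) {
    const keys = new Array(size), vals = new Array(size)
    let ring = headSize                       // next ring slot to overwrite
    const swap = (arr, i, j) => { const t = arr[i]; arr[i] = arr[j]; arr[j] = t }
    return function (k) {
      for (let i = 0; i < size; i++) {
        if (keys[i] === k) {                  // hit
          if (i > 0 && i < headSize) {        // bubble one place up the nose
            swap(keys, i, i - 1); swap(vals, i, i - 1)
            return vals[i - 1]
          }
          if (i >= headSize) {                // promote ring hit to lowest nose slot
            swap(keys, i, headSize - 1); swap(vals, i, headSize - 1)
            return vals[headSize - 1]
          }
          return vals[i]                      // already at the top
        }
      }
      const v = calc(k)                       // miss: calculate, overwrite a ring slot
      keys[ring] = k; vals[ring] = v
      ring = ring + 1 < size ? ring + 1 : headSize
      return v
    }
  }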

The best size for the cache depends on the speed of the function being memoized and on the degree of repetition in the calculations that will run. Loosely, a value of around 20 often works well enough, assuming there is a useful amount of repetition.

For completeness and for testing, fencache.js also implements an object-store option, but no Map option.

Basic Usage

  const fencache = require("fencache")

  let enMathsin = fencache( Math.sin )

  enMathsin(x) === Math.sin(x) // always true, for any number x

The default cache size is 20; any size can be set:

  enMathsin = fencache( Math.sin, 1 )

A cache size of 1 is streamlined, with no cache management; it is ideal when the calculation rarely changes, as in:

  let somesum = 0
  for( let a=0; a<1000; a++ )
    for( let b=0; b<1000; b++ )
      somesum += enMathsin(a) + Math.sin(b)

Native store mode

Size 0 sets native storage mode; a negative value limits the native store size.

  enReply = fencache(reply, 0)     // use the native object as cache
  enReply = fencache(reply, -1000) // flushes half the cache after 1000 different entries

This mode may perform better when thousands of equally distributed calculation results need to be kept and old results do not need to be flushed (result flushing is relatively slow).

Optional parameter for keying objects and multiple arguments

  enCalcObj = fencache(calcOnObj, 1000, ob=>ob.idstring )

Here, where calcOnObj takes objects and processes data within them, a function passed as the third parameter can return a value to use as the storage key. Without this function, objects are identified in default mode by their reference (not their contents), and in native-object mode they are automatically stringified. Memoized functions can take up to 5 arguments, but a keying function is then needed to map results to the multiple input arguments:

  enpow = fencache( Math.pow, 30, (a,b) => "" + a + "," + b )
  cando = enpow(2,8)  // result is keyed to "2,8"
  // a fast but collision-prone way to key two numeric params:
  enpow = fencache( Math.pow, 30, (a,b) => a + b*888888887 )

When cache size is set to 1, the third parameter is ignored.

Other Methods

  result = enMathsin.bypass(k)      // runs the calculation without caching it
  cchval = enMathsin.val(k)         // returns the cached val without calculating
  oldval = enMathsin.put(val,key)   // stores val under key, returns the old val
  sttobj = enMathsin.state()        // returns the whole state
  enMathsin.reset(/**sttobj**/)     // clears or replaces the state (memory)
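
For example, state and reset could be combined to carry a warm cache between runs (a sketch only; it assumes the object returned by state() survives a JSON round trip, which the README does not guarantee):

  const fs = require("fs")

  // save the current cache contents
  fs.writeFileSync( "sin-cache.json", JSON.stringify( enMathsin.state() ) )

  // ...later, in another process: restore the saved cache, or wipe it
  enMathsin.reset( JSON.parse( fs.readFileSync("sin-cache.json", "utf8") ) )
  enMathsin.reset()   // with no argument, just clears the memory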

Performance and Cache Size

Where 'mixed hit' means the average time to randomly access and return results from a full cache:

  • Cache size 1 returns a previous hit about 3 times as fast as Math.sin
  • Cache size 4 returns mixed hits about 2 times as fast as Math.sin
  • Cache size 20 returns mixed hits about as fast as Math.sin
  • Cache size 50 returns mixed hits about 75% as fast as Math.sin
  • Cache size 500 returns mixed hits about 10% as fast as Math.sin

Where 'slowest hit' means the time to return the lowest-ranked result in the cache:

  • Slowest hit on size 15 is about as fast as Math.sin
  • Slowest hit on size 500 is about 5% as fast as Math.sin

Hit return speed is not affected by the speed of the memoized function, as that function only runs on a cache miss. A cache miss takes no longer than the slowest hit, plus the time the function itself requires.

Basically, try values between 1 and a few hundred depending on the weight of the cached function and on the distribution of repetitive inputs.

In contrast, an object store's best return time for floating-point-keyed data is about 8 times slower than Math.sin (25 times slower than fencache's best). The native object's performance scales better with the number of items stored, but when accessed uniformly it can still take until around n=400 stored items to catch up with fencache's sorted-list mode. When some inputs recur more often than others, the sorted-list mode can work a great deal faster, as more of the hits are found early in its list.
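
A rough way to compare sizes on your own workload is a small timing harness like the sketch below (assumed setup: Node with fencache installed; the repeating i % 100 inputs are only an example distribution, so substitute inputs representative of your real calls):

  const fencache = require("fencache")

  function time( label, fn ) {
    const t0 = process.hrtime.bigint()
    let sink = 0
    for (let i = 0; i < 1e6; i++) sink += fn(i % 100)
    const ms = Number(process.hrtime.bigint() - t0) / 1e6
    console.log( label, ms.toFixed(1) + "ms" )
  }

  time( "plain Math.sin", Math.sin )
  for (const size of [1, 4, 20, 50, 500])
    time( "cache size " + size, fencache(Math.sin, size) )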

Performance Micro-Optimizations

On the first call, fencache fills its two storage arrays with the first argument:result pair it sees. This allows the JS engine to make the arrays contiguous and monomorphic, improving subsequent performance. An 'init' method is also included to set the arrays up, e.g. enMathsin.init(-0,-0) will set the cache to a non-integer number type. The reason for this method is that, if the stored values are meant to be real numbers, the first calculation might by chance produce an integer, which would mix types in the array.
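
As a concrete sketch of that (assuming a cache created as in Basic Usage above):

  const enMathsin = fencache( Math.sin )
  // seed both storage arrays with a non-integer number, so that a first
  // result which happens to be an integer cannot mix element types
  enMathsin.init( -0, -0 )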

Fencache contains other obscure micro-optimizations, tested on several versions of Node and Firefox, which seem appropriate for a small performance-oriented tool.

Version History

  • 1.4.0 - Jan 2019 : Correct put method
  • 1.3.0 - Jan 2019 : Fix bug in reset method
  • 1.2.0 - Oct 2018 : Reduced optional parameters to 5
  • 1.1.0 - Oct 2018 : A little fix to 'put' method
  • 0.9.0 - Aug 2018 : Quite tested