
memize

v2.1.0

Unabashedly-barebones memoization library with an aim toward speed

Downloads

492,370

Readme

Memize

Memize is an unabashedly-barebones memoization library with an aim toward speed. By all accounts, Memize is the fastest memoization implementation in JavaScript (see benchmarks, how it works). It supports multiple arguments, including non-primitive arguments (by reference). All this weighs in at less than 0.3kb minified and gzipped, with no dependencies.

Example

Simply pass your original function as an argument to memize. The return value is a new, memoized function.

function fibonacci( number ) {
	if ( number < 2 ) {
		return number;
	}

	return fibonacci( number - 1 ) + fibonacci( number - 2 );
}

var memize = require( 'memize' );

var memoizedFibonacci = memize( fibonacci );

memoizedFibonacci( 8 ); // Invoked, cached, and returned
memoizedFibonacci( 8 ); // Returned from cache
memoizedFibonacci( 5 ); // Invoked, cached, and returned
memoizedFibonacci( 8 ); // Returned from cache

Installation

Using npm as a package manager:

npm install memize

Usage

Memize accepts a function to be memoized, and returns a new memoized function.

memize( fn: Function, options: ?{
	maxSize?: number
} ): Function

Optionally, pass an options object with maxSize defining the maximum size of the cache.

The memoized function exposes a clear function if you need to reset the cache:

memoizedFn.clear();
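To make the `maxSize` and `clear` behavior concrete, here is a minimal, self-contained sketch of a memoizer with the same API shape. This is illustrative only, not memize's implementation: `simpleMemoize` is a hypothetical name, it handles only a single primitive argument, and it evicts the oldest entry rather than using memize's recency-ordered list.

```javascript
// Illustrative sketch only -- NOT memize's internals. A minimal memoizer
// with a `maxSize` cap and a `clear()` method, mirroring the API above.
function simpleMemoize( fn, options ) {
	var maxSize = ( options && options.maxSize ) || Infinity;
	var cache = new Map();

	function memoized( key ) {
		if ( cache.has( key ) ) {
			return cache.get( key );
		}
		var result = fn( key );
		if ( cache.size >= maxSize ) {
			// Evict the oldest entry (Map preserves insertion order).
			cache.delete( cache.keys().next().value );
		}
		cache.set( key, result );
		return result;
	}

	memoized.clear = function () {
		cache.clear();
	};

	return memoized;
}

var double = simpleMemoize( function ( x ) { return x * 2; }, { maxSize: 2 } );
double( 1 ); // Invoked and cached
double( 1 ); // Returned from cache
double.clear(); // Cache reset
```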

Benchmarks

The following benchmarks are performed in Node 10.16.0 on a MacBook Pro (2019), 2.4 GHz 8-Core Intel Core i9, 32 GB 2400 MHz DDR4 RAM.

Single argument

| Name               | Ops / sec   | Relative margin of error | Sample size |
|--------------------|-------------|--------------------------|-------------|
| fast-memoize       | 360,812,575 | ± 0.55%                  | 87          |
| memize             | 128,909,282 | ± 1.06%                  | 87          |
| moize              | 102,858,648 | ± 0.66%                  | 88          |
| lru-memoize        | 71,589,564  | ± 0.90%                  | 88          |
| lodash             | 49,575,743  | ± 1.00%                  | 88          |
| underscore         | 35,805,268  | ± 0.86%                  | 88          |
| memoizee           | 35,357,004  | ± 0.55%                  | 87          |
| moize (serialized) | 27,246,184  | ± 0.88%                  | 87          |
| memoizerific       | 8,647,735   | ± 0.91%                  | 91          |
| ramda              | 8,011,334   | ± 0.74%                  | 90          |
| memoizejs          | 2,111,745   | ± 0.52%                  | 88          |

* Note: fast-memoize uses Function.length to optimize for single-argument functions, which can yield unexpected behavior if not accounted for.

Multiple arguments (primitive)

| Name               | Ops / sec  | Relative margin of error | Sample size |
|--------------------|------------|--------------------------|-------------|
| memize             | 81,460,517 | ± 0.61%                  | 88          |
| moize              | 66,896,395 | ± 0.90%                  | 83          |
| lru-memoize        | 26,315,198 | ± 1.26%                  | 85          |
| memoizee           | 18,237,056 | ± 0.60%                  | 90          |
| moize (serialized) | 15,207,105 | ± 0.78%                  | 84          |
| memoizerific       | 6,363,555  | ± 0.63%                  | 88          |
| memoizejs          | 1,764,673  | ± 0.57%                  | 90          |
| fast-memoize       | 1,560,421  | ± 0.72%                  | 87          |

Multiple arguments (non-primitive)

| Name               | Ops / sec  | Relative margin of error | Sample size |
|--------------------|------------|--------------------------|-------------|
| memize             | 79,105,918 | ± 0.81%                  | 86          |
| moize              | 62,374,610 | ± 0.55%                  | 87          |
| lru-memoize        | 24,814,747 | ± 0.54%                  | 89          |
| memoizee           | 12,119,005 | ± 0.47%                  | 89          |
| memoizerific       | 6,748,675  | ± 0.66%                  | 88          |
| moize (serialized) | 2,027,250  | ± 1.07%                  | 87          |
| fast-memoize       | 1,263,457  | ± 1.00%                  | 89          |
| memoizejs          | 1,075,690  | ± 0.61%                  | 87          |

How it works

If you haven't already, feel free to glance over the source code. The code is heavily commented and should help provide substance to the implementation concepts.

Memize creates a last-in first-out stack implemented as a doubly linked list. It biases toward recent access, favoring real-world scenarios where a function is subsequently invoked multiple times with the same arguments. The choice of a linked list is due to its dramatically better performance characteristics compared to Array#unshift for surfacing an entry to the head of the list (jsperf). A downside of linked lists is the inability to efficiently access arbitrary indices, but iterating from the beginning of the cache list is optimized by guaranteeing the list is sorted by recent access / insertion.
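The head-promotion operation that motivates the linked-list choice can be sketched as follows. This is illustrative rather than memize's exact code: `createList`, `promoteToHead`, and the node shape (`prev`/`next`/`val`) are hypothetical names for this example.

```javascript
// Illustrative sketch: surfacing a cache node to the head of a doubly
// linked list on access -- an O(1) relink, versus Array#unshift which
// must shift every element.
function createList() {
	return { head: null };
}

function promoteToHead( list, node ) {
	if ( list.head === node ) {
		return; // Already the most recently used entry.
	}
	// Detach the node from its current position.
	if ( node.prev ) {
		node.prev.next = node.next;
	}
	if ( node.next ) {
		node.next.prev = node.prev;
	}
	// Relink at the head.
	node.prev = null;
	node.next = list.head;
	if ( list.head ) {
		list.head.prev = node;
	}
	list.head = node;
}
```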

Each node in the list tracks the original arguments as an array. This acts as a key of sorts, matching arguments of the current invocation by performing a shallow equality comparison on the two arrays. Other memoization implementations often use JSON.stringify to generate a string key for lookup in an object cache, but this benchmarks much slower than a shallow comparison (jsperf).
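A shallow equality comparison of the sort described above can be sketched as below. The function name `isShallowEqual` is chosen for illustration; the key behavior is that strict equality matches primitives by value and objects by reference.

```javascript
// Illustrative shallow equality check between a cached arguments array
// and the current invocation's arguments.
function isShallowEqual( a, b ) {
	if ( a.length !== b.length ) {
		return false;
	}
	for ( var i = 0; i < a.length; i++ ) {
		// Strict equality: primitives by value, objects by reference.
		if ( a[ i ] !== b[ i ] ) {
			return false;
		}
	}
	return true;
}

var obj = { id: 1 };
isShallowEqual( [ 1, obj ], [ 1, obj ] ); // true (same reference)
isShallowEqual( [ 1, obj ], [ 1, { id: 1 } ] ); // false (different reference)
```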

Finally, special care is taken in the treatment of arguments, due to engine-specific deoptimizations which can occur in V8 via arguments leaking. Order is important here: we only create a shallow clone when necessary, after the cache has been checked, to avoid creating a clone unnecessarily if a cache entry exists. Looking at the code, you'd not be blamed for thinking that dropping the shallow clone would improve performance, but in fact it would slow execution by approximately 60%. This is due to how the lingering arguments reference would carry over by reference ("leak") in the node's args property. Update: As of November 2019, engine improvements are such that arguments leaking does not have as dramatic an effect. However, my testing shows that the shallow clone still performs equal to or better than referencing arguments directly, and as such the implementation has not been revised, in order to achieve optimal performance in most versions of V8.
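The clone-on-miss pattern described above can be sketched as follows. `exampleMemoized` is a hypothetical name, and the cache lookup is elided; the point is that `arguments` is only copied into a plain array after a cache miss.

```javascript
// Illustrative sketch of the clone-on-miss pattern: `arguments` is
// copied into a plain array only after the cache check fails, so a
// cache hit never pays for the clone, and the stored key is a real
// array rather than a leaking `arguments` reference.
function exampleMemoized() {
	// ... cache lookup would happen here first, reading `arguments`
	// directly without letting the reference escape ...

	// Only on a cache miss: shallow-clone into a real array.
	var args = new Array( arguments.length );
	for ( var i = 0; i < arguments.length; i++ ) {
		args[ i ] = arguments[ i ];
	}
	return args; // This array would become the new node's args key.
}
```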

License

Copyright 2018-2020 Andrew Duthie

Released under the MIT License.