
@nrk/nodecache-as-promised

Fast and resilient cache for Node.js targeting high-volume sites

Installing

npm install @nrk/nodecache-as-promised --save

Publish

npm install 
npm login 

# one of
npm version patch -m 'Release patch %s'
npm version minor -m 'Release minor %s'
npm version major -m 'Release major %s'

npm run build
git push
npm publish

Motivation

Sometimes Node.js needs to do some heavy lifting, performing CPU- or network-intensive tasks while still responding quickly to incoming requests. For repetitive tasks like server-side rendering of markup or parsing big JSON responses, caching can give the application a great performance boost. Since many requests may hit the server concurrently, you do not want more than one worker to run for a given resource at the same time. In addition, serving stale content when a backend resource is down may save your day! The intention of nodecache-as-promised is to give you a fairly simple interface, yet a powerful application cache, with fine-grained control over caching behaviour.

nodecache-as-promised is inspired by how Varnish works. It is not intended to replace Varnish (but works great in combination with it). Whereas Varnish is a high-performance edge/burst/failover cache, working as a reverse proxy and load balancer, it depends on a fast backend when configured with a short cache window (ie. TTL ~1s). It uses URLs in combination with cookies as keys for its cached content. Since there are no restrictions on conformant URLs/cookies for clients requesting content, it is quite easy to bust its cache without additional security measures. nodecache-as-promised, on the other hand, runs at the application level for stricter handling of cache keys, and may use many different caches and policies for how the web page is built.

Features

  • In-memory cache is used as primary storage since it will always be faster than parsing and fetching data from disk or via network. An LRU-cache is enabled to constrain the amount of memory used.
  • Caches are filled using worker promises since cached objects often depend on async operations. RxJs is used to queue concurrent requests for the same key, ensuring that only one worker runs when cached content is missing/stale.
  • Caching of custom class instances, functions and native objects such as Date, RegExp and Redux stores is supported through in-memory caching. Objects that are not serializable (using JSON.stringify) are filtered out of persistent caches though.
  • Grace mode is used if a worker fails (eg. caused by failing backends), ie. stale cache is returned instead.
  • Spamming of backend resources is avoided using a configurable deltaWait parameter, serving either a stale object or a rejection while waiting.
  • Middleware support so you may create your own custom extensions. Provided middlewares:
    • Persistent cache is used as secondary storage to avoid high back-pressure when in-memory caches are cleared after server restarts. This is achieved by storing cache misses, and deleting entries on cache evictions, using an ioredis-factory connecting to a Redis instance.
    • Distributed on-demand expiry so that new content may be published across servers/instances before the cache TTL is reached. This is achieved using Redis pub/sub, depending on an ioredis-factory.

Performance testing

Parsing a JSON file of around 47 kB (file contents are cached at startup). Using a MacBook Pro, mid 2015, 16 GB RAM, i7 CPU.

The first image shows a graph from running the test script npm run perf:nocache-cache-file -- --type=linear. At around 1300 iterations the event loop starts lagging, and at around 1500 iterations the process stops responding. It shows that even the natively optimized JSON.parse can become a bottleneck when fetching remote API data for rendering. (React.render would be even slower.)

The second image is a graph from running the test script npm run perf:cache -- --type=linear. At around 3.1 million iterations the event loop starts lagging, and at around 3.4 million iterations the process runs out of memory and crashes. The graph says nothing about how fast JSON.parse is, but shows what speed is achievable by skipping it altogether (ie. pure Promise processing).

APIs

Create a new inMemoryCache instance using a factory method. This instance may be extended by the distCache and/or persistentCache middlewares (.use(..)).

inMemoryCache factory

Creating a new instance

import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache(options)

options

An object containing configuration

  • initial - Object. Initial key/value set to prefill cache. Default: {}
  • maxLength - Number. Max key count before LRU-cache evicts object. Default: 1000
  • maxAge - Number. Max time before a (stale) key is evicted by LRU-cache (in ms). Default: 172800000 (48h)
  • log - Object with log4j-facade. Used to log internal work. Default: console

Instance methods

Once the instance is created (with or without middlewares), the following methods may be used.

.get(key, [options])

Get an item from the cache.

const {value} = cache.get('myKey')
console.log(value)

Using the options parameter, the function either fetches a value from cache or runs the provided worker if the cache is stale or cold. The worker sets the cache key when it runs, and the call therefore returns a Promise.

cache.get('myKey', options)
  .then(({value}) => {
    console.log(value)
  })

options

Configuration for how the value is fetched or created

  • worker - Function. A function that returns a promise which resolves to the new value to be set in cache.
  • ttl - Number. TTL (in ms) before the cached object becomes stale. Default: 86400000 (24h)
  • workerTimeout - Number. Max time (in ms) the worker promise is allowed to run. Default: 5000
  • deltaWait - Number. Delta wait (in ms) before retrying the worker, when stale. Default: 10000

returned object

  • value - Any - value set in cache
  • created - Number - Unix timestamp (in ms) for when the value was created
  • cache - Enum(hit|miss|stale) - status of the cached content
  • TTL - Number - amount of ms until the value becomes stale, counted from creation

NOTE: It might seem a bit strange to set cache values using .get - but this avoids a chain of operations where .get() is called to check if a value exists, .set() is then called, and .get() is run once more (which would make queueing difficult). In short: .get() returns a value from the cache or from a provided worker.
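
To make the contrast concrete, here is a minimal sketch of both approaches; fetchValue is a hypothetical worker returning a promise, and the first variant is exactly the chain of operations the note warns against:

// manual variant: separate .has()/.set() calls - hard to queue safely
const fetchValue = () => Promise.resolve('fresh data') // hypothetical worker
if (!cache.has('myKey')) {
  fetchValue().then((value) => cache.set('myKey', value))
}

// read-through variant: one .get() call, concurrent calls are queued per key
cache.get('myKey', { worker: fetchValue })
  .then(({value}) => console.log(value))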

.set(key, value, [ttl])

Set a new cache value.

// set a cache value that becomes stale after 1 minute
cache.set('myKey', 'someData', 60 * 1000)

If ttl-parameter is omitted, a default will be used: 86400000 (24h)

.has(key)

Check if a key is in the cache, without updating the recent-ness or deleting it for being stale.
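
For example (assuming a plain boolean return value, in line with the description above):

if (cache.has('myKey')) {
  console.log('myKey is cached')
}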

.del(key)

Deletes a key out of the cache.
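
For example:

cache.del('myKey')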

.expire(keys)

Mark keys as stale (ie. set TTL = 0)

cache.expire(['myKey*', 'anotherKey'])

Asterisk * is used for wildcards

.keys()

Get all keys stored in the cache, as an array of strings

.values()

Get all values stored in the cache, as an array

.entries()

Get all entries stored in the cache, as a Map of keys and values
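
A small sketch of the three inspection methods side by side (the exact contents depend on what has been cached):

cache.set('foo', 'bar')
console.log(cache.keys())                    // ['foo', ...]
console.log(cache.values().length)           // number of cached values
console.log(cache.entries() instanceof Map)  // true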

.clear()

Clear the cache entirely, throwing away all values.

.addDisposer(callback)

Add callback to be called when an item is evicted by LRU-cache. Used to do cleanup

const cb = (key, value) => cleanup(key, value)
cache.addDisposer(cb)

.removeDisposer(callback)

Remove callback attached to LRU-cache

cache.removeDisposer(cb)

.debug([extraData])

Prints debug information about current cache (ie. hot keys, stale keys, keys in waiting state etc). Use extraData to add custom properties to the debug info, eg. hostname.

cache.debug({hostname: os.hostname()})

.log.[trace|debug|info|warn|error] (data)

Logger instance exposed to be used by middlewares

cache.log.info('hello world!')

Examples

Note! These examples are written using ES2015 syntax. The lib is exported using Babel as CJS modules
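
If you are not using ES modules yourself, the CJS build can be consumed with require; note that where the default export lands (on .default or not) depends on the Babel interop, so treat this as a sketch:

// plain CommonJS usage - interop assumption: default export on `.default`
const inMemoryCache = require('@nrk/nodecache-as-promised').default
const cache = inMemoryCache({ /* options */ })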

Basic usage

import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache({ /* options */})

// implicit set cache on miss, or use cached value
cache.get('key', { worker: () => Promise.resolve({hello: 'world'}) })
  .then((data) => {
    console.log(data)
    // {
    //   value: {
    //     hello: 'world'
    //   },
    //   created: 123456789,
    //   cache: 'miss',
    //   TTL: 86400000
    // }
  })

Basic usage with options

import inMemoryCache from '@nrk/nodecache-as-promised';

const cache = inMemoryCache({
  initial: {                    // initial state
    foo: 'bar'
  },                            
  maxLength: 1000,              // LRU max object count
  maxAge: 24 * 60 * 60 * 1000   // LRU max age in ms
})
// set/overwrite cache key
cache.set('key', {hello: 'world'})
// implicit set cache on miss, or use cached value
cache.get('anotherkey', {
  worker: () => Promise.resolve({hello: 'world'}),
  ttl: 60 * 1000,               // TTL for cached object, in ms
  workerTimeout: 5 * 1000,      // worker timeout, in ms
  deltaWait: 5 * 1000,          // wait time, if worker fails
}).then((data) => {
    console.log(data)
    // {
    //   value: {
    //     hello: 'world'
    //   },
    //   created: 123456789,
    //   cache: 'miss',
    //   TTL: 86400000
    // }
  })

Middlewares

distCache middleware

Creating a new distCache middleware instance. The distCache middleware extends the inMemoryCache instance by publishing to Redis on the provided namespace when the .expire-method is called. A subscription to the same namespace ensures that calls to .expire are distributed to all instances of the inMemoryCache using the distCache middleware with the same namespace. It also adds a couple of parameters to the .debug-method.

import inMemoryCache, {distCache} from '@nrk/nodecache-as-promised'
const cache = inMemoryCache()
cache.use(distCache(redisFactory, namespace))

Parameters

Parameters that must be provided upon creation:

  • redisFactory - Function. A function that returns an ioredis compatible redisClient.
  • namespace - String. Pub/sub-namespace used for distributed expiries

Example

import inMemoryCache, {distCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'

// a factory function that returns a redisClient
const redisFactory = () => new Redis(/* options */)
const cache = inMemoryCache({initial: {fooKey: 'bar'}})
cache.use(distCache(redisFactory, 'namespace'))
// publish to redis (using wildcard)
cache.expire(['foo*'])
setTimeout(() => {
  cache.get('fooKey').then(console.log)
  // expired in server # 1 + 2
  // {value: {fooKey: 'bar'}, created: 123456789, cache: 'stale', TTL: 86400000}
}, 1000)

persistentCache middleware

Creating a new persistentCache middleware instance. The persistentCache middleware extends the inMemoryCache instance by serializing and storing in Redis any new values received via workers in .get or via .set-calls. In addition, it deletes values from Redis when the .del and .clear-methods are called. Cache values evicted by the LRU-cache are also deleted. On creation it will load and set initial cache values by searching for stored keys in the provided keySpace (this may be disabled using the option bootload: false, so that loading can be done later using the provided .load-method). It also adds a couple of parameters to the .debug-method.

import inMemoryCache, {persistentCache} from '@nrk/nodecache-as-promised'
const cache = inMemoryCache()
cache.use(persistentCache(redisFactory, options))

Parameters

Parameters that must be provided upon creation:

  • redisFactory - Function. A function that returns an ioredis compatible redisClient.

options

  • doNotPersist - RegExp. Keys matching this regexp are not persisted to the cache. Default: null
  • keySpace - String. Prefix used when storing keys in redis.
  • grace - Number. Used to calculate the TTL in redis (before auto removal), ie. object.TTL + grace. Default: 86400000 (24h)
  • bootload - Boolean. Flag choosing whether the persisted cache is loaded from redis on middleware creation. Default: true

Example

import inMemoryCache, {persistentCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'

const redisFactory = () => new Redis(/* options */)
const cache = inMemoryCache({/* options */})
cache.use(persistentCache(
  redisFactory,
  {
    keySpace: 'myCache',   // key prefix used when storing in redis
    grace: 60 * 60 * 1000  // auto expire unused keys in Redis after TTL + grace (in ms, like the default)
  }
))

cache.get('key', { worker: () => Promise.resolve('hello') })
// will store a key in redis, using key: myCache-<key>
// {value: 'hello', created: 123456789, cache: 'hit', TTL: 60000}

Combining middlewares

Example combining persistentCache and distCache

import inMemoryCache, {distCache, persistentCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'

const redisFactory = () => new Redis(/* options */)
const cache = inMemoryCache({/* options */})
cache.use(distCache(redisFactory, 'namespace'))
cache.use(persistentCache(
  redisFactory,
  {
    keySpace: 'myCache',   // key prefix used when storing in redis
    grace: 60 * 60 * 1000  // auto expire unused keys in Redis after TTL + grace (in ms, like the default)
  }
))

cache.expire(['foo*'])  // distributed expire of all keys starting with foo
cache.get('key', {
  worker: () => Promise.resolve('hello'),
  ttl: 60000,                       // in ms
  workerTimeout: 5000,
  deltaWait: 5000
}).then(console.log)
// will store a key in redis, using key: myCache-<key>
// {value: 'hello', created: 123456789, cache: 'miss', TTL: 60000}

Creating your own middleware

A middleware consists of three parts:

  1. an exported factory function
  2. constructor arguments to be used within the middleware
  3. an exported facade that corresponds with the overridden functions (appending a next parameter that runs the next function in the middleware chain)

Let's say you want to build a middleware that notifies some other part of your application that a new value has been set (eg. using RxJs streams).

Here's an example of how to achieve this:

// export a middleware factory to be applied with inMemoryCache.use(..)
export const streamingMiddleware = (onSet, onDispose) => (cacheInstance) => {
  // create a function that runs before the others in the middleware chain
  const set = (key, value, next) => {
    onSet(key, value)
    next(key, value)
  }

  // use functionality exposed by the inMemoryCache instance
  cacheInstance.addDisposer(onDispose)

  // export facade
  return {
    set
  }
}
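
A sketch of how this middleware might then be wired up (the console-logging callbacks and the import path are hypothetical):

import inMemoryCache from '@nrk/nodecache-as-promised'
import {streamingMiddleware} from './streaming-middleware'

const cache = inMemoryCache()
cache.use(streamingMiddleware(
  (key, value) => console.log('set', key, value),     // onSet
  (key, value) => console.log('evicted', key, value)  // onDispose
))
cache.set('myKey', 'someData') // onSet runs first, then the underlying .set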

Local development

First clone the repo and install its dependencies:

git clone git@github.com:nrkno/nodecache-as-promised.git
cd nodecache-as-promised
git checkout -b feature/my-changes
npm install && npm run build && npm run test

Building and committing

After applying your changes, remember to build and run/fix the tests before pushing them upstream.

# run the tests, generate code coverage report
npm run test
# inspect code coverage
open ./coverage/lcov-report/index.html
# rebuild the lib
npm run build
git commit -am "Add my changes"
git push origin feature/my-changes
# then make a PR to the master branch,
# and assign one of the maintainers to review your code

NOTE! Please make sure to keep commits small and clean (and that the commit message actually refers to the updated files). Stylistically, make sure the commit message is capitalized and starts with a verb in the present tense (eg. Add minification support).

License

MIT © NRK