

oniyi-http-plugin-cache-redis

Plugin responsible for caching HTTP responses in a Redis database

Install

$ npm install --save @gis-ag/oniyi-http-plugin-cache-redis

Usage

const OniyiHttpClient = require('@gis-ag/oniyi-http-client');
const oniyiHttpPluginCacheRedis = require('@gis-ag/oniyi-http-plugin-cache-redis');

const httpClientParams = {
  requestPhases: ['cache'], // run the phase hook handler named 'cache' in the request phase list
  responsePhases: ['cache'], // run the phase hook handler named 'cache' in the response phase list
  defaults: {
    baseUrl: 'http://base-url.com',
  },
};

// default values
const pluginOptions = {
  ttl: 60 * 60 * 24 * 7,
  minTtl: 60 * 60,
  delayCaching: false,
  redis: {
    host: '127.0.0.1',
    port: 6379,
    retry_strategy: (options) => {/* check lib/index.js for default strategy options */},
  },
  hostConfig: {
    'google.com': {
      storePrivate: false,
      storeNoStore: false,
      ignoreNoLastMod: false,
      requestValidators: [],
      responseValidators: [],
    }
  },
  validatorsToSkip: {
    requestValidators: [],
    responseValidators: [],
  },
};
const plugin = oniyiHttpPluginCacheRedis(pluginOptions);

const httpClient = OniyiHttpClient
  .create(httpClientParams) // create custom http client with defined phase lists
  .use(plugin);             // mount a plugin

Plugin Options

The oniyi-http-plugin-cache-redis module exports a factory function that takes a single options argument.

Available options are (a configuration sketch follows the list):

  • ttl: (number) - TTL of the Redis key if the response's cache-control max-age or s-maxage directives are not provided

  • minTtl: (number) - Minimum TTL; if the provided ttl falls below this value, it is ignored in favour of this minimum

  • delayCaching: (boolean) - Allows caching the data "later". Useful when multiple HTTP requests are made and their responses need to be combined before caching.

  • redis: (object) - Overrides the default Redis configuration provided by this plugin. Possible options are those accepted by the underlying Redis client.

  • redis.host: (string) - IP address of the Redis server

  • redis.port: (number) - Port of the Redis server

  • redis.retry_strategy: (function) - A function that receives an options object and handles errors/reconnection with the Redis server.

  • hostConfig: (object) - Instructions that override the default values provided by the oniyi-cache library

  • hostConfig.hostName: (object) - Holds the configuration for a specific "host" name

  • hostConfig.hostName.storePrivate: (boolean) - Indicates that a response with the cache-control "private" directive should be stored. The rules for this feature are explained below

  • hostConfig.hostName.storeNoStore: (boolean) - Indicates that a response with the cache-control "no-store" directive should be stored.

  • hostConfig.hostName.ignoreNoLastMod: (boolean) - Indicates that a response without a "last-modified" header should not be stored.

  • hostConfig.hostName.requestValidators: (array) - List of functions that validate the HTTP request options. These functions get appended to the default oniyi-cache request validators.

  • hostConfig.hostName.responseValidators: (array) - List of functions that validate the HTTP response. These functions get appended to the default oniyi-cache response validators.

  • validatorsToSkip: (object) - While hostConfig lets you add request/response validators, this option lets you skip certain default validators.

  • validatorsToSkip.requestValidators: (array) - List of function (validator) names that should be skipped

  • validatorsToSkip.responseValidators: (array) - List of function (validator) names that should be skipped
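
For illustration, here is a hedged sketch of a factory call that combines several of these options. The host name is made up, and the validator signatures are assumptions (this README does not spell them out); each validator is assumed to receive the request options or the response and return a boolean.

const oniyiHttpPluginCacheRedis = require('@gis-ag/oniyi-http-plugin-cache-redis');

const plugin = oniyiHttpPluginCacheRedis({
  ttl: 60 * 60 * 24,   // fall back to a one-day TTL when the response carries no max-age
  minTtl: 60,          // never cache for less than a minute
  redis: {
    host: '127.0.0.1',
    port: 6379,
  },
  hostConfig: {
    'api.example.com': {            // hypothetical host name
      storePrivate: true,
      // assumed signature: (requestOptions) => boolean
      requestValidators: [requestOptions => requestOptions.method === 'GET'],
      // assumed signature: (response) => boolean
      responseValidators: [response => response.statusCode === 200],
    },
  },
  validatorsToSkip: {
    requestValidators: ['maxAgeZero'],   // skip a default oniyi-cache validator by name
    responseValidators: [],
  },
});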

Per-request options

As explained above, there are a couple of options that can be provided when initializing the plugin. Plugin options are considered global options, some of which can be overridden per HTTP request.

Options that can be provided via the HTTP request options are (a combined example follows the list):

  • user: (object) - The user object should be provided if there is a need to store private data.

  • user.id: (string) - Unique user identifier.

  • user.getId: (function) - Function that resolves to unique user identifier.

  • plugins: (object) - Plugin-specific options; each HTTP plugin handles its own options

  • plugins.cache: (object) - Cache-plugin-specific options used to override settings provided by the initial plugin options

  • plugins.cache.hostConfig: (object) - Instructions that override the default host configuration provided by the initial plugin options

  • plugins.cache.validatorsToSkip: (object) - Same functionality as provided by plugin options, but it overrides its values

  • plugins.cache.ttl: (number) - TTL of the Redis key; overrides the ttl provided in the plugin options

  • plugins.cache.delayCaching: (boolean) - Same functionality as provided by plugin options, but it overrides its value

  • phasesToSkip: (object) - Indicates that some of the phase hooks should not perform their operations.

  • phasesToSkip.requestPhases: (array) - List of phase hook names that should be skipped while request phase list is running

  • phasesToSkip.responsePhases: (array) - List of phase hook names that should be skipped while response phase list is running
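
As a reference, here is a hedged sketch of a single request that combines several of these per-request options; the endpoint and identifiers are made up, and makeRequest is used as in the examples further below.

const requestOptions = {
  uri: '/feed',                          // hypothetical endpoint
  user: {
    id: 'user-123',                      // unique user identifier, enables caching of private responses
  },
  plugins: {
    cache: {
      ttl: 60 * 5,                       // override the global TTL for this request only
      delayCaching: false,
      validatorsToSkip: {
        requestValidators: ['maxAgeZero'],
        responseValidators: [],
      },
    },
  },
  phasesToSkip: {
    responsePhases: ['cache:before'],    // skip a single response phase hook for this request
  },
};

httpClient.makeRequest(requestOptions, (error, response, body) => {
  // handle the (possibly cached) response
});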

How does it work?

This plugin relies on logic implemented in oniyi-http-client, which has extensive documentation on how phase lists work and what conventions must be followed when implementing a plugin.

Initially, once pluginOptions are loaded, they override the default options provided by the plugin itself, producing a combined options object. A Redis client instance is then built from the options.redis configuration and used to create the oniyi-cache instance.

The oniyi-cache instance has methods responsible for storing key -> value pairs and for removing keys. Another role of this library is to build the evaluator, constructed from the hostConfig parameters, which is used to evaluate requestOptions (in the request phase hook handler) and the response (in the response phase hook handler).

Now that the cache instance is ready, it is passed alongside the combined options to the phase hook handlers to do their magic.

Let's dive into the logic behind these phase hook handlers.

Basically, phase hook handlers are functions that are invoked with two parameters (a minimal skeleton follows the list):

  • ctx: (object) - Context object that keeps references to the provided requestOptions and the hookState, which is shared between phase lists

  • next: (function) - Callback that is invoked once execution should be passed to the next phase hook handler in the list
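
As a rough sketch, a phase hook handler could look like this; only the (ctx, next) signature is taken from this README, the rest is illustrative.

function cachePhaseHookHandler(ctx, next) {
  const { requestOptions, hookState } = ctx; // hookState is shared between the phase lists
  // inspect or mutate requestOptions, stash intermediate data on hookState, etc.
  next(); // pass execution to the next phase hook handler in the list
}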

RequestPhaseHookHandler - cache

This is the phase hook responsible for validating request options and retrieving cached responses. The next sections explain the logic behind this handler.

should the phase be skipped?

First, validate whether the phase hook is marked for skipping. This phase hook can be skipped via the phasesToSkip.requestPhases request option:

const requestOptions = {
  phasesToSkip: {
    requestPhases: ['cache'],
  },
};

If the phase hook is marked for skipping, the next() fn is invoked, which automatically invokes the next phase hook handler in the phase list.
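
Illustratively, the skip check described above boils down to something like the following; this is a sketch, not the plugin's actual code.

function phaseIsSkipped(requestOptions, phaseName) {
  const { phasesToSkip = {} } = requestOptions;
  return (phasesToSkip.requestPhases || []).includes(phaseName);
}

// inside the request phase hook handler:
// if (phaseIsSkipped(ctx.requestOptions, 'cache')) { return next(); }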

is the response retrievable?

Next, make sure that the response is retrievable. This is where the evaluator comes into play. Once it is invoked with the hostConfig configuration, it checks whether any validators should be ignored / skipped via the validatorsToSkip options:

const requestOptions = {
  plugins: {
    cache: {
      validatorsToSkip: {
        requestValidators: ['maxAgeZero'],
        responseValidators: ['maxAgeZero', 'maxAgeFuture'],
      },
    },
  },
};

If the response is not retrievable (i.e. at least one validation rule has been broken), the next() fn is invoked.

do we have a cached response?

Once the evaluator has done its job, we can proceed with extracting the response from the cache. There can be both private and public data cached for the same HTTP request, so first we check whether private data is available, and then we check for public data. The important difference between a public and a private cached response lies in the user object, which can be provided via requestOptions.

Within this phase hook we examine the requestOptions user property. More specifically, we look for an id property or a getId function that resolves to the user's unique id. Without a unique user identifier, the plugin will not be able to cache a response that is marked as private.
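
For example, a request that allows private responses to be cached could provide the user like this (a hedged sketch; the endpoint and identifier are made up, and getId is assumed to return or resolve to the identifier):

const requestOptions = {
  uri: '/my/private/feed',     // hypothetical endpoint
  user: {
    getId: () => 'user-123',   // resolves to the unique user identifier
  },
};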

If there is no cached data (public or private), that means this is either an initial (first) HTTP request or the cached data has expired. In either case, the next() fn is invoked.

does the cached response need re-validation?

Let's assume that we have valid cached data by now; the next step is re-validation. If we do not find must-revalidate or no-cache in the response headers (extracted from the cache), this phase hook handler is done.

If the data has to be re-validated, we also look for a flag called storeMultiResponse stored within the response headers. This flag tells us that the cached data cannot be re-validated (as explained in the response phase hook handler section).

WARNING: These two scenarios exit the request phase list immediately and return the cached response to the caller. The response phase list will not be invoked, since we already have what we need.

Ok, so we need to re-validate the cached response. The next (and final) step is assigning the ETag and Last-Modified values to the request options headers. These validators will be examined by the origin server, and a proper response will be provided (as we will soon see in the next sections).

ResponsePhaseHookHandler - cache:before

If we got this far, it means that a cached response has not been found, or that it was found but needs re-validation. This phase hook handler is named cache:before, which means that it will always be invoked right before the main cache handler. This is a good place to add the validation required before performing the actual caching.

Besides requestOptions and hookState, this phase hook handler receives:

  • response: (object) - an IncomingMessage object provided by origin server

  • responseError: (object) - error that explains what went wrong with the HTTP request

  • responseBody: (object) - payload provided by the origin server

Since hookState is shared between phase lists, we can extract the following properties from it:

  • hashedId: (string) - unique id built from the provided request options in the request phase hook handler

  • privateHashedId: (string) - present only if a valid user object is present and it follows the user rules explained above

  • evaluator: (object) - oniyi-cache evaluator built in the request phase hook handler, used to determine whether the response is storable

What is being validated:

  1. Did we receive a response error from the server?

  2. Is the response phase hook handler marked for skipping?

  3. Is the response storable, according to the evaluator from the hookState?

  4. Is response marked as private and is privateHashedId present?

If any of these validations fail, hookState gets updated with a cachingAborted flag, which is checked in the next phase hook handler (a rough sketch follows).
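
A rough sketch of those checks, with hypothetical helper names; isStorable is a stand-in, not the documented oniyi-cache evaluator API.

function validateBeforeCaching({ responseError, requestOptions, response, hookState }) {
  const { evaluator, privateHashedId } = hookState;
  const cacheControl = (response && response.headers['cache-control']) || '';
  const skipped = ((requestOptions.phasesToSkip || {}).responsePhases || []).includes('cache:before');

  const cachingAborted =
    !!responseError ||                                       // 1. origin server returned an error
    skipped ||                                               // 2. handler marked for skipping
    !isStorable(evaluator, response) ||                      // 3. evaluator rejects the response (hypothetical helper)
    (cacheControl.includes('private') && !privateHashedId);  // 4. private response without a private hashed id

  if (cachingAborted) {
    hookState.cachingAborted = true;  // checked by the next phase hook handler
  }
}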

ResponsePhaseHookHandler - cache

No matter what we receive from the previous validation phase hook, first we need to validate the response statusCode.

As already mentioned in previous sections, once we get a cached response that should be re-validated, the plugin automatically adds the ETag and/or Last-Modified values (if present) to the request options.

This means that origin server might respond with:

  1. statusCode = 200 OK -> the cached data was stale (not fresh) and the server provided a completely new response and response body.

  2. statusCode = 304 Not Modified -> we can freely use the cached response, since it has not been modified since it was initially cached.

That said, if for some reason this phase hook is marked for skipping and statusCode = 304, the caller would receive an empty response body. This is why we need to update response/responseBody with cachedResponse/cachedResponseBody respectively before validating the result from the cache:before phase hook.
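
A sketch of that switch; the cachedResponse/cachedResponseBody names follow this README, everything else is illustrative.

function resolveResponse(response, responseBody, cachedResponse, cachedResponseBody) {
  if (response.statusCode === 304) {
    // 304 Not Modified: the origin confirms the cached entry is still valid,
    // so serve the cached response/body instead of the empty 304 body
    return { response: cachedResponse, responseBody: cachedResponseBody };
  }
  // 200 OK (or anything else): the cached entry was stale; the fresh response
  // replaces it and will be stored again by this phase hook
  return { response, responseBody };
}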

In this phase hook handler, delayCaching plays an important role. It is used to store/cache a combined response built by making multiple HTTP requests (or even a response received from a single HTTP request!). Caching can be done in two ways.

1. regular caching

With regular caching, the user makes an HTTP request and gets a server response. Under the hood, this plugin tries to store the received response. The next time the user initiates the same request, the plugin tries to load it from the cache. If re-validation is not necessary, the user receives the cached response. If re-validation is required, a new HTTP request is made with the provided ETag / Last-Modified header params (as explained above).

Right after the data is set for caching, a new event called removeFromCache gets registered. If for some reason the latest response needs to be removed from the cache, it can be done like this:

client.makeRequest(requestOptions, (error, response, body) => {
  response.emit('removeFromCache');
});

Even if the data got cached, we can simply remove it from the cache by emitting this event.

Now, what if a user needs to make multiple requests to retrieve a huge amount of data (say, a news feed from a favorite service)? The plugin will cache every single response and reload each of them the next time these requests are made. But what if, in order to retrieve all the data, the user needs to make 100 HTTP requests? Then reloading all of this data (even from the cache!) becomes pretty slow.

2. delayed caching

This caching mechanism can fix the problem introduced by regular caching. When delayCaching is set to true, the plugin registers an addToCache event but does not cache the data yet. This provides the ability to collect all the data by making multiple HTTP requests (all 100 of them!) and, at the very end, emit the addToCache event with the combined data.

Instructions

The initial response stream is the one on which the event is registered. Use it to emit the addToCache event once the data is ready to be cached.

const requestOptions = {
  plugins: {
    cache: {
      delayCaching: true,
    },
  },
};

client.makeRequest(requestOptions, (originalError, originalResponse, originalBody) => {
  // do something cool with the originalResponse and originalBody
  // this original response has not been cached yet
  client.makeRequest(anotherRequestOptions, (error, response, body) => {
    // combine originalBody and body if you want to
    const finalBody = Object.assign({}, originalBody, body);
    originalResponse.emit('addToCache', { data: finalBody, storeMultiResponse: true });
  });
});

The convention that must be followed when storing data built from multiple responses:

const dataToCache = { data: 'yourData', storeMultiResponse: true };

  • data: (any) - data that should be cached. It can be any data type.

  • storeMultiResponse: (boolean) - this flag must be set to true when the data to be cached was built from multiple responses.

By providing the storeMultiResponse flag, the request phase hook handler will not try to re-validate this data; it will provide it as is, from the cache. Basically, by choosing this mechanism over the regular one, the response will stay in the cache until it expires.

This mechanism can also be used in a similar way to regular caching, when storeMultiResponse is omitted. That can be useful when we need to examine the response before choosing to cache it, and we do not know upfront whether more requests have to be made (a small example follows).
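
For instance (a hedged sketch; the looksWorthCaching check is hypothetical), delayed caching without storeMultiResponse lets us inspect the payload before deciding to cache it:

client.makeRequest(requestOptions, (error, response, body) => {
  if (error) return;
  if (looksWorthCaching(body)) {                  // hypothetical check on the payload
    response.emit('addToCache', { data: body });  // storeMultiResponse omitted: the entry can still be re-validated later
  }
});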