
@gguridi/mockserver v1.0.2 (97 downloads)

Reimplementation of namshi's mockserver with extended functionality

mockserver

Reimplementation of namshi's mockserver, which no longer seems to be actively maintained and which leaves room for plenty of improvements.

I love the idea of mocking an API by dropping a bunch of files somewhere and letting the directory structure decide which one of them we are going to serve in each case. However, the lack of maintenance and the missing functionality made me decide to revamp their mockserver.

The improvements coming with this revamp are:

  • Multiple imports. Combine data from different files into one to easily recreate REST endpoints while maintaining coherence in the data displayed. See REST examples for further information.

  • Multiple evaluations inside body, header or status entries. See the GET--multiple eval.mock example for further information. The request and the path to the matched file are in scope during the evaluations, so all of that information is available.

  • Middlewares allowed. It's now possible to add middlewares that execute before or after the handling of the request, either to add the same functionality to all requests or to implement complex behaviour that needs several functions. See the examples for further information.

  • Improved permutations to find the files that best suit our requests. Combinations of headers & query parameters are now allowed. If the body of the request is JSON, it gets serialised as URL parameters to make it easier to write the file name.

Installation

You can install mockserver globally:

npm install -g @gguridi/mockserver

or

yarn global add @gguridi/mockserver

You can also install mockserver as part of your project:

npm install --save-dev @gguridi/mockserver

or

yarn add --dev @gguridi/mockserver
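
When mockserver is installed as a dev dependency, one convenient way to run it is through an npm script. This is just a sketch: the script name and the ./mocks folder are hypothetical, and the -m flag is described in the Running section below.

{
    "scripts": {
        "mock": "mockserver -m ./mocks"
    }
}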

Running

To run the server we must execute the binary, pointing it at the folder that contains the mock files (the path can be relative to the current working directory):

mockserver -m {mocks folder}

This will create an Express HTTP webserver, running by default on port 8080.

To change the port the server runs on:

mockserver -m {mocks folder} -p 9000

For a detailed explanation of all the options, type:

mockserver --help

Verbosity

By default the mockserver runs at the info log level.

mockserver -m ./test/examples
Mockserver serving ./test/examples at http://:::8028

To increase the verbosity of the mockserver, lower the log level it reports on:

mockserver -m ./test/examples -l debug

To decrease the verbosity of the mockserver, raise the log level it reports on:

mockserver -m ./test/examples -l error

The logging library used is winston and we can use any of the logging levels defined there.

Header-based behaviour

By default, header information is not used to determine which mock file best matches the request we are receiving. However, this functionality can be switched on by passing the list of headers we want to use as part of the filename selection process.

In this mockserver, the order in which the headers are received doesn't affect the file name matching.

To enable the headers Content-Type and Accept we would do:

mockserver -m ./test/examples -h Content-Type,Accept

Alternatively, the environment variable MOCK_HEADERS can be used to achieve the same result.
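
For instance, assuming MOCK_HEADERS accepts the same comma-separated list as the -h flag:

MOCK_HEADERS=Content-Type,Accept mockserver -m ./test/examples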

Middlewares

It's easy to create middlewares that apply to every request the mockserver receives. These middlewares are integrated with the Express server.

To specify the middlewares folder:

mockserver -m ./test/examples -w ./test/examples/middlewares

The middlewares must be placed in two files, before.js and after.js, whose names indicate at which point of the request handling they will be executed.

Neither file is mandatory; if the middlewares are not found, the mockserver simply ignores them and keeps running.

The middlewares in each file will be executed in the order they are declared:

// Converts the incoming request body to uppercase.
const uppercase = (req, res, next) => {
    req.body = req.body.toUpperCase();
    next();
};

// Adds a pseudo-random X-Unique-ID header to the incoming request.
const uniqueId = (req, res, next) => {
    req.headers["X-Unique-ID"] = Math.floor(Math.random() * 999 + 1000);
    next();
};

// Middlewares run in the order they are declared and exported here.
module.exports = {
    uppercase,
    uniqueId,
};

In this case, the middleware that converts the request body to all uppercase will be executed before the middleware that automatically adds a unique id to the request. When evaluating code inside our .mock files, headers set this way will also be available there.
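
As a sketch, an after.js could export a simple request logger in the same way the example above exports its middlewares; the logging shown here is purely illustrative.

// after.js: runs once the request has been handled by the mockserver.
const logRequest = (req, res, next) => {
    // Log which mock endpoint was hit.
    console.log(`[mock] ${req.method} ${req.originalUrl}`);
    next();
};

module.exports = {
    logRequest,
};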

For further information regarding middlewares check the examples.

Request body parser

In order to support different types of API, it's possible to specify which request body parser to use. Parsing is handled by the body-parser library, which allows different kinds of validation/parsing.

The default body parser is json, but any of the available ones can be enabled with:

mockserver -m ./test/examples -b (json|text|raw|urlencoded)

Depending on which one we use, the filename permutations generated to load files might change, and so might the content of request.body inside our code evaluations:

  • json: The request.body is passed as an object request.body === {"id":"value"}
  • text: The request.body is passed as a string request.body === This is the body
  • raw: The request.body is passed as a buffer.
  • urlencoded: The request.body is passed as an object from a form encoded data.

Response Delays

To keep the behaviour of the original mockserver, the ability to delay the response has been implemented the same way: by adding a Response-Delay header.

This header can be added in the mock file or in the before or after middlewares.

Response-Delay: 5000

The delay value is expected in milliseconds. If this header is not found there will be no delay.
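
As a sketch, assuming the raw HTTP response layout used in the Evaluations examples below, a mock file that answers after two seconds could look like this:

HTTP/1.1 200 OK
Content-Type: application/json
Response-Delay: 2000

{"message": "delayed by two seconds"}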

Mock files

The mock file naming conventions are based on the response that they are going to serve. This mockserver follows the same approach as the original mockserver. Examples of what can be done can be found here.

${REQUEST-PATH}/${HTTP-METHOD}--${BODY}--${QUERY-PARAMS}_${HEADERS}.mock

To simplify the explanation, I will give a full example of the path/file selection process that would happen when the -h X-H1,X-H2 headers are activated and we receive the following request:

  • A POST.
  • With URL and parameters /path/subpath/item?param1=value1
  • A body in the request such as {"key":"value"}
  • And the headers received would be X-H1=h1 and X-H2=h2.

The mockserver would check a series of candidate files, in order, to determine which of them can serve that request. The filepaths are ordered from highest to lowest preference. The {mock-path} passed to the mockserver in this example would be ./mock-folder.

The __ wildcards are also used in the permutations to find the correct file. Wildcards have lower preference in the filepath matching because they are more generic.

The complete list of permutations, in order, that the mockserver will check to find the most convenient response can be found here.

Query Parameters

This mockserver supports query string parameters as part of the mocked file names, just as the original mockserver does. It replaces the ? with --, and then appends the entire query string as part of the file name permutation.

GET /hello?a=b&c=d
hello/GET--a=b&c=d.mock

POST body

This mockserver supports the POST body as part of the mocked file names. The POST body can be used together with query parameters and headers.

If the body is JSON, we automatically transform it into a query-parameter string to make the matching process easier. For more information regarding the permutations used to match files, check the Mock files section.

POST /hello?a=b&c=d
{"id":"123"}

hello/POST--a=b&c=d--id=123.mock

If the POST body is a string, we don't change it:

POST /hello?a=b&c=d
This is the Body

hello/POST--a=b&c=d--this is the body.mock

Wildcard slugs

If you want to match a route where one chunk of the path is dynamic, you can create a directory named __ to act as a wildcard.

For example, let's say that you want to mock the response of a GET request to /users/:id. You can create files named users/1/GET.mock, users/2/GET.mock, users/3/GET.mock, etc.

Or you can create one file to accept them all. A file users/__/GET.mock, with a wildcard, will match all of the requests stated above.
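
A possible directory layout for that setup is sketched below (the mocks/ folder name is illustrative). Because wildcards have lower preference, the specific users/1 path wins over the wildcard when both match.

mocks/
└── users/
    ├── 1/
    │   └── GET.mock      (served for GET /users/1)
    └── __/
        └── GET.mock      (served for any other GET /users/:id)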

Imports

It's possible to import data or code from other files into the response we are serving. To do that we use the following syntax, similar to the original mockserver's but slightly different:

#import ./data.json;

The path is relative to the .mock file the mockserver is reading from. There's no limit to how many imports you can use, and the statement is not restricted to the beginning of a line: it can be used wherever you want.

When a .js file is imported, the code is automatically evaluated and the result is injected into the response in the same place the #import {file}.js command was placed.

The following commands are completely valid:

{ "resources": #import ./resources-list.json; }
{ "resources": #import ./load-resources.js; }

The import statements can also be used to get status or header values:

#import ./status.js;
X-Cache: #import ./cache.js;
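
As an illustration of the resources example above, assuming a hypothetical ./resources-list.json whose content is spliced in as-is:

./resources-list.json contains:
["users", "orders", "invoices"]

Response body after the import is resolved:
{ "resources": ["users", "orders", "invoices"] }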

Evaluations

It's possible to insert code directly in the middle of our responses. The status, headers or body can be obtained by executing code wrapped between {{ }} brackets. The code can be multiline (except in status & header entries; if you need complex code there, I suggest using the imports) and must output a string.

{{ request.body === "valid" ? `HTTP/1.1 428 I'm a teapot` : `HTTP/1.1 400 Bad request` }}
X-Cache: {{ Math.floor(Math.random() * 999 + 1000) }}

{{
    const isValid = request.body === "valid";
    isValid ? `Yeah! It's valid.` : `Mmmmm... nope!`
}}

The imports are processed before the code evaluations, so it's possible to import text whose code will then be evaluated. You can use this feature to place common evaluations in files and import them from the responses that need them.
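
As a sketch of that pattern, a shared evaluation could live in its own file (the fragment name ./fragments/validity.txt is hypothetical) and be imported from any response that needs it:

./fragments/validity.txt contains:
{{ request.body === "valid" ? `"ok"` : `"rejected"` }}

Inside a .mock response body:
{ "result": #import ./fragments/validity.txt; }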

Tests

Tests run on jest and can be executed locally by typing:

yarn test