@stlib/testing v1.0.7

Testing framework for TypeScript Node.js applications

Downloads: 685

sTest


See docs >>

About

sTest is a testing framework for Node.js applications that provides a new testing experience for TypeScript.

Getting started

[!IMPORTANT] Node.js 18.x or later must be installed on your system.

CLI

Usage: npx stest [options]

Testing framework for TypeScript Node.js applications

Options:
  --init [extension]   initialise configuration file [json | yml | ts | js]
  -w, --watch          run tests in watch mode
  -c, --config <path>  define custom config file path
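
For example, using only the flags listed above, you can scaffold a TypeScript config file and then run the suite in watch mode against it (paths are illustrative):

$ npx stest --init ts
$ npx stest --watch --config ./stest.config.ts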

Installation

  • Install dependency

    $ yarn add @stlib/testing
  • Enable decorators in your tsconfig.json

    {
      "compilerOptions": {
        "experimentalDecorators": true
      }
    }

Usage

How to create and run tests

[!NOTE] You can see test examples here

First, create a .spec.ts or .test.ts file.

This testing framework uses decorators to define tests. Create a class with the @Test('Test suite name') decorator. Each test must be a class method decorated with @Case('test case description'). Both the @Case description and the @Test name can be omitted.

Example:

import { assertThat, Test, Case } from '@stlib/testing';

@Test('Example testing suite')
class MyTests {

  @Case('Example pass test case')
  checkIfTenIsMoreThanFive() {
    assertThat(10).toBeGreaterThan(5);
  }

  @Case()
  checkIfTenIsString() {
    // This case fails: 10 is a number, not a string
    assertThat(10).toBeTypeOf('string');
  }
}

To run the tests, use the CLI command:

$ npx stest

Decorators API

[!NOTE] You can see test examples here

| Decorator | Description |
|-----------|-------------|
| @Test(description?: string) | Define a class as a test suite |
| @Case(description?: string) | Define a method as a test case |
| @Case({ description?: string, timeout?: number }) | Define a method as a test case with a custom timeout |
| @DataSet(...dataSets: any[][]) | Define data sets for running one test case multiple times with different data |
| @DataTable(dataTable: { inputs: any[], expected: any }[]) | Define a data table for running one test case multiple times with different data. A data table takes precedence over data sets |
| @AfterAll(description?: string) | Force a method to run after all test cases |
| @BeforeAll(description?: string) | Force a method to run before all test cases |
| @AfterEach(description?: string) | Runs after each test case |
| @BeforeEach(description?: string) | Runs before each test case |
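
A sketch of how a few of these decorators compose. It assumes DataSet can be imported from '@stlib/testing' alongside Test and Case, that each data set is spread into the case method's parameters, and that the timeout is in milliseconds; treat these as assumptions rather than documented behavior.

import { assertThat, Test, Case, DataSet } from '@stlib/testing';

@Test('Data-driven example suite')
class AdditionTests {

  // Assumed: runs once per data set, here (1, 2, 3) and (2, 3, 5)
  @Case('adds two numbers')
  @DataSet([1, 2, 3], [2, 3, 5])
  addsNumbers(a: number, b: number, expected: number) {
    assertThat(a + b).toEqual(expected);
  }

  // Object form of @Case with a custom timeout (assumed milliseconds)
  @Case({ description: 'finishes quickly', timeout: 2000 })
  finishesQuickly() {
    assertThat(1 + 1).toEqual(2);
  }
}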

Assertions API

assertThat(actual).to*(expected);

| Method | Description |
|--------|-------------|
| toEqual(expected: any) | Check equality between actual and expected |
| toNotEqual(expected: any) | Check that actual and expected are not equal |
| toStrictEqual(expected: any) | Check strict equality between actual and expected |
| toStrictNotEqual(expected: any) | Check that actual and expected are not strictly equal |
| toBe(expected: any) | Check if actual is expected |
| toNotBe(expected: any) | Check if actual is not expected |
| toBeTruthy() | Check if actual is true |
| toBeFalsy() | Check if actual is false |
| toBeGreaterThan(expected: number) | Check if actual is greater than expected |
| toBeGreaterThanOrEqual(expected: number) | Check if actual is greater than or equal to expected |
| toBeLessThan(expected: number) | Check if actual is less than expected |
| toBeLessThanOrEqual(expected: number) | Check if actual is less than or equal to expected |
| toBeDefined() | Check if actual is defined |
| toBeUndefined() | Check if actual is undefined |
| toBeNull() | Check if actual is null |
| toBeNotNull() | Check if actual is not null |
| toBeNaN() | Check if actual is NaN |
| toBeFinite() | Check if actual is a finite number |
| toBeTypeOf(type: any) | Check if actual is of the expected type. Example: assertThat('a').toBeTypeOf('string'), assertThat(TypeError).toBeTypeOf(Error) |
| toHaveProperty(property: any) | Check if actual has the expected property |
| toThrow(expectedError?: ErrorConstructor, ...args: any[]) | Check if actual throws an error, or throws the expected error. You can also provide arguments for the actual function |
| toNotThrow(expectedError?: ErrorConstructor, ...args: any[]) | Check if actual does not throw an error, or does not throw the error provided as the expectedError param. You can also provide arguments for the actual function |
| toContain(expected: any) | Check if actual contains the expected value |
| toContainEqual(expected: any) | Check if actual contains a value equal to expected |
| toMatch(expected: RegExp or string) | Check if actual matches the expected regular expression |
| toHaveLength(expected: number) | Check if actual has the expected length |
| toStartWith(expected: string) | Check if actual starts with the expected string |
| toEndWith(expected: string) | Check if actual ends with the expected string |
| toSatisfy(predicate: (value: any) => boolean) | Check if actual satisfies a predicate |
| toIncludeAllMembers(expected: any[]) | Check if actual includes all members of the expected array |
| toIncludeAnyMembers(expected: any[]) | Check if actual includes at least one member of the expected array |
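
A minimal sketch showing several of these assertions inside a test case (the values are arbitrary):

import { assertThat, Test, Case } from '@stlib/testing';

@Test('Assertion examples')
class AssertionExamples {

  @Case('covers a few assertion styles')
  basics() {
    assertThat([1, 2, 3]).toHaveLength(3);
    assertThat('hello world').toStartWith('hello');
    assertThat({ id: 1 }).toHaveProperty('id');
    assertThat(7).toSatisfy((value) => value % 2 === 1);
    // toThrow expects the actual value to be a function that throws
    assertThat(() => JSON.parse('not json')).toThrow();
  }
}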

Mocking API

| Class | Method | Description |
|-------|--------|-------------|
| MockRegistry | restoreAll() | Restores all created mocks |
| MockFn | new MockFn(functionToMock: Function, implementation?: Function) | Creates a function mock |
| | mockFn.mock(implementation?: Function) | Mocks a function |
| | mockFn.verifyCalled(expectedCallcount: number) | Verifies the function has been called the expected number of times |
| | mockFn.verifyCalledWith(expectedArguments: any[]) | Verifies the function has been called with the expected arguments |
| | mockFn.restore() | Restores the original function logic |
| | mockFn.getFunction() | Returns the function |
| | mockFn.call(...args: any[]) | Calls the function |
| MockModule | new MockModule(moduleName: string) | Creates a module mock |
| | mockModule.mockMethod(methodName: string, implementation: Function) | Mocks a function |
| | mockModule.verifyCalled(methodName: string, expectedCallcount: number) | Verifies the function has been called the expected number of times |
| | mockModule.verifyCalledWith(methodName: string, expectedArguments: any[]) | Verifies the function has been called with the expected arguments |
| | mockModule.restoreMethod(methodName: string) | Restores the original function logic |
| | mockModule.restoreAll() | Restores all functions |
| Mock | new Mock(instance: T) | Creates a class instance mock |
| | mockClass.mockMethod(methodName: string, implementation: Function) | Mocks a function |
| | mockClass.verifyCalled(methodName: string, expectedCallcount: number) | Verifies the function has been called the expected number of times |
| | mockClass.verifyCalledWith(methodName: string, expectedArguments: any[]) | Verifies the function has been called with the expected arguments |
| | mockClass.restoreMethod(methodName: string) | Restores the original function logic |
| | mockClass.restoreAll() | Restores all functions |
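
A sketch of the function-mocking flow. It assumes MockFn can be imported from '@stlib/testing', that calls made through call() are tracked by the verify* methods, and the fetchUser function is hypothetical; treat the details as illustrative rather than documented behavior.

import { assertThat, Test, Case, MockFn } from '@stlib/testing';

// Hypothetical function to be mocked
const fetchUser = (id: number) => ({ id, name: 'real user' });

@Test('Mocking example')
class MockingExamples {

  @Case('replaces a function and verifies its calls')
  mocksAFunction() {
    // Provide a stub implementation and activate it
    const mockFn = new MockFn(fetchUser, (id: number) => ({ id, name: 'stub' }));
    mockFn.mock();

    const result = mockFn.call(42);

    assertThat(result).toHaveProperty('name');
    mockFn.verifyCalled(1);
    mockFn.verifyCalledWith([42]);

    // Restore the original implementation
    mockFn.restore();
  }
}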

Spy API

Create a new spy with the spyOn function:

const example = new ExampleClass();
const greetSpy = spyOn(example, 'greet');

| Class | Method | Description |
|-------|--------|-------------|
| spyOn(object: any, methodName: string) | | Creates a new spy on a class method |
| | getCallCount() | Returns the number of method calls |
| | getCallOrder() | Returns an array with the call order |
| | getCallResults(callIndex?: number) | Returns an array with call results |
| | getCallArgs(callIndex?: number) | Returns an array with the arguments passed |
| | getThrownErrors(callIndex?: number) | Returns an array with all thrown errors |
| | wasCalled(amount?: number) | Returns true if the method was called at least once, or more than amount times |
| | wasCalledWith(...args: any[]) | Returns true if the method was called with the specified arguments at least once |
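
Continuing the snippet above, a sketch of inspecting the spy. ExampleClass and its greet method are hypothetical, and it assumes spyOn is exported from '@stlib/testing' and returns an object exposing the methods listed in the table.

import { assertThat, spyOn } from '@stlib/testing';

// Hypothetical class under test
class ExampleClass {
  greet(name: string) {
    return `Hello, ${name}!`;
  }
}

const example = new ExampleClass();
const greetSpy = spyOn(example, 'greet');

example.greet('Ada');
example.greet('Grace');

assertThat(greetSpy.getCallCount()).toEqual(2);
assertThat(greetSpy.wasCalledWith('Ada')).toBeTruthy();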

Configuration

To provide a custom configuration for stest, create a stest.config.{json,yml,ts,js} file in the project's base directory, or use the command below to initialize a config file. By default it uses the json format; override this by passing one of the possible options to the --init flag: ts, js, json, or yml.

$ npx stest --init

It is also possible to define a custom path to the config file using the --config flag.

$ npx stest --config ./configs/test.conf.yml

Config properties:

  • pattern - test file path pattern
  • ignore - files and directories to ignore
  • autoClearMocks - clears all created mocks after each test
  • cacheWatcher - enables caching in watch mode so that only new or changed tests are run
  • enableReporting - enables generating test reports. To use it, install the reporting package:
    $ yarn add @stlib/testing-reporter

Config file example:

  • json:

    {
      "pattern": "test/**/*.{spec,test}.ts",
      "ignore": ["node_modules", "lib"],
      "autoClearMocks": true,
      "cacheWatcher": false,
      "enableReporting": false
    }
  • yaml:

    pattern: "test/**/*.{spec,test}.ts"
    ignore:
      - node_modules
      - lib
    autoClearMocks: true
    cacheWatcher: false
    enableReporting: false
  • TS/JS:

    import { StestConfig } from "@stlib/testing";
    
    const config: StestConfig = {
      pattern: "test/**/*.{spec,test}.ts",
      ignore: ["node_modules", "lib"],
      autoClearMocks: true,
      cacheWatcher: false,
      enableReporting: false,
    };
    export default config;

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Changelog

Project changes are documented in the changelog; see CHANGELOG.md.

We use SemVer for versioning. For the versions available, see the tags on this repository. For the versions supported, see the SECURITY.md.

Authors

License

This project is licensed under the MIT License - see LICENSE.md for details.