
Trapit - JavaScript Unit Tester/Formatter

The Math Function Unit Testing design pattern, implemented in JavaScript

This module supports a new design pattern for unit testing, The Math Function Unit Testing Design Pattern, that can be applied in any language, and is here implemented in JavaScript. The module name is derived from 'TRansactional API Testing' (TRAPIT), and the 'unit' should be considered to be a transactional unit. This is not micro-testing: it is data-driven and fully supports multi-scenario testing and refactoring.

The Trapit module supports the complete process for testing JavaScript programs, and, for non-JavaScript programs following the design pattern, formats the results by reading in a results object from a JSON file materialized by the external unit test program.

The blog post Unit Testing, Scenarios and Categories: The SCAN Method provides guidance on effective selection of scenarios for unit testing.

There is also a powershell module, Powershell Trapit Unit Testing Utilities, that includes a utility to generate a template for the JSON input file used by the design pattern, based on simple input CSV files. The resulting JSON file contains sections for the input and output metadata, and a scenario skeleton for each scenario listed in the relevant CSV file.

There is an extended Usage section below that illustrates the use of the powershell utility, along with the JavaScript program, for unit testing by means of two examples. The Unit Testing section also uses them in testing the pure function, getUTResults, which is called by the JavaScript formatting APIs.

In This README...

↓ Background ↓ Usage ↓ API ↓ Installation ↓ Unit Testing ↓ Folder Structure ↓ See Also

Background

↑ In This README...

I explained the concepts for the unit testing design pattern in relation specifically to database testing in a presentation at the Oracle User Group Ireland Conference in March 2018:

I later named the approach The Math Function Unit Testing Design Pattern when I applied it in JavaScript and wrote a JavaScript program to format results both in plain text and as HTML pages:

The module also allowed for the formatting of results obtained from testing in languages other than JavaScript by means of an intermediate output JSON file. In 2021 I developed a powershell module that included a utility to generate a template for the JSON input scenarios file required by the design pattern:

Also in 2021 I developed a systematic approach to the selection of unit test scenarios:

In early 2023 I extended both the JavaScript results formatter and the powershell utility to incorporate Category Set as a scenario attribute. Both utilities support use of the design pattern in any language, while the unit testing driver utility is language-specific and is currently available in Powershell, JavaScript, Python and Oracle PL/SQL versions.

Usage

↑ In This README... ↓ General Usage ↓ Usage 1 - JavaScript Unit Testing ↓ Usage 2 - Formatting Test Results for External Programs

As noted above, the JavaScript module allows for unit testing of JavaScript programs and also the formatting of test results for both JavaScript and non-JavaScript programs. Similarly, the powershell module mentioned allows for unit testing of powershell programs, and also the generation of the JSON input scenarios file template for testing in any language.

In this section we'll start by describing the steps involved in The Math Function Unit Testing Design Pattern at an overview level. This will show how the generic powershell and JavaScript utilities fit in alongside the language-specific driver utilities.

Secondly, we'll show how to use the design pattern in unit testing JavaScript programs by means of two simple examples.

Finally, we'll show how to use the JavaScript formatting utility in unit testing non-JavaScript programs, where the utility uses an intermediate JSON file created from the external programs as input. This section contains a set of examples with results summaries and links to the GitHub projects generating the JSON files.

General Usage

↑ Usage ↓ Step 1: Create JSON File ↓ Step 2: Create Results Object ↓ Step 3: Format Results

At a high level The Math Function Unit Testing Design Pattern involves three main steps:

  1. Create an input file containing all test scenarios with input data and expected output data for each scenario, as well as metadata describing the structure
  2. Create a results object based on the input file, but with actual outputs merged in, based on calls to the unit under test
  3. Use the results object to generate unit test results files formatted in HTML and/or text

Step 1: Create JSON File

↑ General Usage

Step 1 requires analysis to determine the extended signature for the unit under test, and to determine appropriate scenarios to test.

The art of unit testing lies in choosing a set of scenarios that will produce a high degree of confidence in the functioning of the unit under test across the often very large range of possible inputs. A useful approach to this can be to think in terms of categories of inputs, where we reduce large ranges to representative categories, an approach discussed in Unit Testing, Scenarios and Categories: The SCAN Method. While the examples in the blog post aimed at minimal sets of scenarios, we have since found it simpler and clearer to use a separate scenario for each category.

The results of this analysis can be summarised in three CSV files which a powershell API uses as inputs to create a template for the JSON file.

The powershell API, Write-UT_Template, creates a template for the JSON file, with the full meta section, and a set of template scenarios keyed by scenario name, each with a category set attribute and a single record with default values for each input and output group. The API takes three CSV files as inputs:

  • stem_inp.csv: Input group triplets - (Input group name, field name, default value)
  • stem_out.csv: Output group triplets - (Output group name, field name, default value)
  • stem_sce.csv: Scenario triplets - (Category set, scenario name, active flag)
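
For illustration, a scenario triplets file might contain rows like the following, taken from the ColGroup example later in this document (the files are simple delimiter-separated triplets; any header row convention is an assumption here):

colgroup_sce.csv (illustrative extract)
Lines Multiplicity,No lines,Y
Lines Multiplicity,One line,Y
Lines Multiplicity,Multiple lines,Y
Key Size,Short key,Y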

It may be useful during the analysis phase to create two diagrams, one for the extended signature:

  • JSON Structure Diagram: showing the groups with their fields for input and output

and another for the category sets and categories:

  • Category Structure Diagram: showing the category sets identified with their categories

You can see examples of these diagrams later in this document, e.g. JSON Structure Diagram - ColGroup and Category Structure Diagram - ColGroup.

The API can be run (after installing the TrapitUtils module) with the following powershell in the folder of the CSV files:

Format-JSON-Stem.ps1
Import-Module TrapitUtils
Write-UT_Template 'stem' '|'

This creates the template JSON file, stem_temp.json, based on the CSV files having prefix stem and using the field delimiter '|'. The template file is then updated manually with data appropriate to each scenario.

This powershell API can be used for testing in any language.

Step 2: Create Results Object

↑ General Usage ↓ JavaScript ↓ External Programs

Step 2 requires the writing of a wrapper function that is passed into a call to the unit test driver API. Both wrapper function and driver API are in the language of the unit under test.

In scripting languages, such as JavaScript or Python, there will be a driving script containing the wrapper function definition, followed by a 1-line call to the driver API in a library module. In a database language, such as Oracle PL/SQL, the wrapper function would be in a stored package and called by the driver API internally, depending on a parameter passed.

The driver API reads the input JSON file, calls the wrapper function for each scenario, and creates the output JSON object with the actual results merged in along with the expected results.

In the JavaScript version of the unit test driver API, the object is used directly to create the formatted HTML and text results files; in non-JavaScript versions the object is written to file to be read by the JavaScript formatter in a separate step.
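
As a conceptual sketch only (not the library's actual internals), the driver's job described above can be pictured like this; the {exp, act} merged shape is an assumption, and the real driver also splits each delimited input record into a fields array before calling the wrapper:

// Conceptual sketch of the driver's read-call-merge cycle
const fs = require('fs');

function mergeActuals(inputJsonPath, purelyWrapUnit) {
  const obj = JSON.parse(fs.readFileSync(inputJsonPath, 'utf8'));
  for (const sce of Object.values(obj.scenarios)) {
    if (sce.active_yn !== 'Y') continue;         // skip inactive scenarios
    const actGroups = purelyWrapUnit(sce.inp);   // call the unit under test via its wrapper
    for (const group of Object.keys(sce.out)) {  // merge actuals alongside expecteds
      sce.out[group] = {exp: sce.out[group], act: actGroups[group] || []};
    }
  }
  return obj;                                    // results object, ready for formatting
}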

JavaScript

↑ Step 2: Create Results Object

The unit test driver script in JavaScript has the form:

test-uut.js
// INPUT_JSON (input JSON file path), ROOT (output root folder) and colors
// are assumed to be defined earlier in the script
const Trapit = require('trapit');
function purelyWrapUnit(inpGroups) { // input groups object
  (function body)
}
Trapit.fmtTestUnit(INPUT_JSON, ROOT, purelyWrapUnit, 'B', colors);

If the script, test-uut.js, is in path [path], we would call it like this:

$ node [path]/test-uut

The call creates the results object and goes on to format it, producing listings of the results in HTML and/or text format in a subfolder named from the unit test title.

External Programs

↑ Step 2: Create Results Object

For external programs, the scripts create the object and materialize it as a JSON file. There are projects, with library module and examples, under this GitHub account (BrenPatF) for Powershell, Python and Oracle PL/SQL at present. For example, in Python the driver script has the form:

testuut.py
# INPUT_JSON and OUTPUT_JSON (input and output JSON file paths) are assumed
# to be defined earlier in the script
import trapit
def purely_wrap_unit(inp_groups): # input groups object
  (function body)
trapit.test_unit(INPUT_JSON, OUTPUT_JSON, purely_wrap_unit)

where now we pass in an output JSON file name, OUTPUT_JSON, as well as the input file name.

If the script, testuut.py, is in path [path] we would call it like this:

$ py [path]/testuut

Step 3: Format Results

↑ General Usage

As mentioned, for JavaScript step 3 is incorporated within the API called for step 2. Other languages require the use of a JavaScript program that reads in the JSON from step 2. In either case the formatter produces listings of the results in HTML and/or text format in a subfolder named from the unit test title.

There are a number of ways to use the JavaScript module for the formatting step of non-JavaScript unit testing.

  • format-external-file.js: Formats the results for a single JSON file, within a subfolder of the file's parent folder
  • format-external-folder.js: Formats the results for all JSON files in a general folder, within subfolders
  • format-externals.js: Formats the results for all JSON files in a subfolder of the Trapit externals folder, within subfolders

Each of these returns a summary of the results. Here is an example of a call from powershell to the first script:

$ node ($npmRoot + '/node_modules/trapit/externals/format-external-file') $jsonFile

The call would normally be encapsulated within a function in a library package in the non-JavaScript language.

These JavaScript APIs can be used for formatting the test results objects created in any language.

Usage 1 - JavaScript Unit Testing

↑ Usage ↓ Example 1 - HelloWorld ↓ Example 2 - ColGroup

For JavaScript programs tested using the Math Function Unit Testing design pattern, the results object is created within the JavaScript library package. The diagram below shows the flow of processing triggered by the specific test package main function:

  • First, the output results object is created by the Test Unit library function
  • Second, the function calls another function to format the results in HTML and/or text files

This creates a subfolder with name based on the unit test title within the input JSON file, and also outputs a table of summary results. The processing is split between three code units:

  • Trapit library package with Test Unit function that drives the unit testing with a callback to a specific wrapper function, then calls the Format Results function to do the formatting
  • Specific Test Package: This has a 1-line main program to call the library driver function, passing in the callback wrapper function
  • Unit Under Test (API): Called by the wrapper function, which converts between its specific inputs and outputs and the generic version used by the library package

This section illustrates the usage of the package for testing JavaScript programs by means of two examples. The first is a version of the 'Hello World' program traditionally used as a starting point in learning a new programming language. This is useful as it shows the core structures involved in following the design pattern with a minimalist unit under test.

The second example, 'ColGroup', is larger and intended to show a wider range of features, but without too much extraneous detail.

Example 1 - HelloWorld

↑ Usage 1 - JavaScript Unit Testing ↓ Step 1: Create JSON File - HelloWorld ↓ Step 2: Create Results Object - HelloWorld ↓ Step 3: Format Results - HelloWorld

This is a pure function form of Hello World program, returning a value rather than writing to screen itself. It is of course trivial, but has some interest as an edge case with no inputs and extremely simple JSON input structure and test code.

helloWorld.js
module.exports = {
  helloWorld: () => {return 'Hello World!'}
}

There is a main script that shows how the function might be called outside of unit testing, run from the examples folder:

main-helloworld.js
const Hw = require('./helloworld');
console.log(Hw.helloWorld());

This can be called from a command window in the examples folder:

$ node helloworld/main-helloworld

with output to console:

Hello World!

Step 1: Create JSON File - HelloWorld

↑ Example 1 - HelloWorld ↓ Unit Test Wrapper Function - HelloWorld ↓ Scenario Category ANalysis (SCAN) - HelloWorld

Unit Test Wrapper Function - HelloWorld

↑ Step 1: Create JSON File - HelloWorld

Here is a diagram of the input and output groups for this example:

From the input and output groups depicted we can construct CSV files with flattened group/field structures, and default values added, as follows (with helloworld_inp.csv left, helloworld_out.csv right):

Scenario Category ANalysis (SCAN) - HelloWorld

↑ Step 1: Create JSON File - HelloWorld

The Category Structure diagram for the HelloWorld example is of course trivial:

It has just one scenario, with its input being void:

| # | Category Set | Category | Scenario |
|--:|:-------------|:---------|:---------|
| 1 | Global       | No input | No input |

From the scenarios identified we can construct the following CSV file (helloworld_sce.csv), taking the category set and scenario columns, and adding an initial value for the active flag:

The powershell API can be run with the following powershell script in the folder of the CSV files:

Format-JSON-HelloWorld.ps1
Import-Module TrapitUtils
Write-UT_Template 'helloworld' '|'

This creates the template JSON file, helloworld_temp.json, which contains an element for each of the scenarios, with the appropriate category set and active flag, with a single record in each group with default values from the groups CSV files. Here is the complete file:

helloworld_temp.json
{
  "meta": {
    "title": "title",
    "delimiter": "|",
    "inp": {},
    "out": {
      "Group": [
        "Greeting"
      ]
    }
  },
  "scenarios": {
    "No input": {
      "active_yn": "Y",
      "category_set": "Global",
      "inp": {},
      "out": {
        "Group": [
          "Hello World!"
        ]
      }
    }
  }
}

The actual JSON file has just the "title" value replaced with: "HelloWorld - JavaScript".

Step 2: Create Results Object - HelloWorld

↑ Example 1 - HelloWorld

Step 2 requires the writing of a wrapper function that is passed into a call to the unit test driver API.

  • Trapit.fmtTestUnit is the unit test driver API that reads the input JSON file, calls the wrapper function for each scenario, and creates the output object with the actual results merged in along with the expected results.

Here is the complete script for this case, where we use a lambda expression since the wrapper function is so simple:

test-helloworld.js
const [Trapit,                    Hw                     ] =
      [require('trapit'),         require('./helloworld')],
      [ROOT,                      GROUP                  ] =
      [__dirname + '/',           'Group'                ];

const INPUT_JSON = ROOT + 'helloworld.json';

Trapit.fmtTestUnit(INPUT_JSON, ROOT, (inpGroups) => { return {[GROUP] : [Hw.helloWorld()]} }, 'B');

This creates the output object and goes on to format it, producing listings of the results in HTML and/or text format in a subfolder named from the unit test title, here helloworld.

Step 3: Format Results - HelloWorld

↑ Example 1 - HelloWorld ↓ Unit Test Report - HelloWorld ↓ Scenario 1: No input

Here we show the scenario-level summary of results for the specific example, and also show the detail for the only scenario.

You can review the HTML formatted unit test results here:

Unit Test Report - HelloWorld

↑ Step 3: Format Results - HelloWorld

This is the summary page in text format.

Unit Test Report: Hello World - JavaScript
==========================================

      #    Category Set  Scenario  Fails (of 2)  Status
      ---  ------------  --------  ------------  -------
      1    Global        No input  0             SUCCESS

Test scenarios: 0 failed of 1: SUCCESS
======================================
Formatted: 2023-04-12 05:52:48

Scenario 1: No input

↑ Step 3: Format Results - HelloWorld

This is the page for the single scenario in text format.

SCENARIO 1: No input [Category Set: Global] {
=============================================
   INPUTS
   ======
   OUTPUTS
   =======
      GROUP 1: Group {
      ================
            #  Greeting
            -  ------------
            1  Hello World!
      } 0 failed of 1: SUCCESS
      ========================
      GROUP 2: Unhandled Exception: Empty as expected: SUCCESS
      ========================================================
} 0 failed of 2: SUCCESS
========================

Note that the second output group, 'Unhandled Exception', is not specified in the CSV file: it is generated by the unit test driver API itself in order to capture any unhandled exceptions.

Example 2 - ColGroup

↑ Usage 1 - JavaScript Unit Testing ↓ Step 1: Create JSON File - ColGroup ↓ Step 2: Create Results Object - ColGroup ↓ Step 3: Format Results - ColGroup

This example involves a class with a constructor function that reads in a CSV file and counts instances of distinct values in a given column. The constructor function appends a timestamp and call details to a log file. The class has methods to list the value/count pairs in several orderings.

ColGroup.js (skeleton)
...
class ColGroup {
    ...
}
module.exports = ColGroup;

There is a main script that shows how the class might be called outside of unit testing, run from the examples folder:

main-colgroup.js
const ColGroup = require('./colgroup');
const [INPUT_FILE,                                             DELIM, COL] =
      [__dirname + '/fantasy_premier_league_player_stats.csv', ',',   6];

let grp = new ColGroup(INPUT_FILE, DELIM, COL);

grp.prList('(as is)', grp.listAsIs());
grp.prList('key', grp.sortByKey());
grp.prList('value', grp.sortByValue());

This can be called from a command window in the examples folder:

$ node colgroup/main-colgroup

with output to console:

Counts sorted by (as is)
========================
Team         #apps
-----------  -----
Man City      1099
Southampton   1110
Stoke City    1170
...

Counts sorted by key
====================
Team         #apps
-----------  -----
Arsenal        534
Aston Villa    685
Blackburn       33
...
Counts sorted by value
======================
Team         #apps
-----------  -----
Wolves          31
Blackburn       33
Bolton          37
...

and to log file, fantasy_premier_league_player_stats.csv.log:

Mon Apr 10 2023 07:46:22: File [MY_PATH]/node_modules/trapit/examples/colgroup/fantasy_premier_league_player_stats.csv, delimiter ',', column 6

The example illustrates how a wrapper function can handle impure features of the unit under test:

  • Reading input from file
  • Writing output to file

...and also how the JSON input file can allow for nondeterministic outputs giving rise to deterministic test outcomes:

  • By using regex matching for strings including timestamps
  • By using number range matching and converting timestamps to epochal offsets (number of units of time since a fixed time)
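
For example, an expected output record can encode an inexact match inline, as in this Log group record from the ColGroup scenario element shown later, where IN [0,2000] matches a number range and LIKE /.../ matches a regular expression:

    "Log": [
      "1|IN [0,2000]|LIKE /.*: File .*ut_group.*.csv, delimiter ',', column 0/"
    ]
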
Step 1: Create JSON File - ColGroup

↑ Example 2 - ColGroup ↓ Unit Test Wrapper Function - ColGroup ↓ Scenario Category ANalysis (SCAN) - ColGroup

Unit Test Wrapper Function - ColGroup

↑ Step 1: Create JSON File - ColGroup

Here is a diagram of the input and output groups for this example:

From the input and output groups depicted we can construct CSV files with flattened group/field structures, and default values added, as follows (with colgroup_inp.csv left, colgroup_out.csv right):

Scenario Category ANalysis (SCAN) - ColGroup

↑ Step 1: Create JSON File - ColGroup

As noted earlier, a useful approach to scenario selection can be to think in terms of categories of inputs, where we reduce large ranges to representative categories.

Generic Category Sets

As explained in the article mentioned earlier, it can be very useful to think in terms of generic category sets that apply in many situations. Multiplicity is relevant here (as it often is):

Multiplicity

There are several entities where the generic category set of multiplicity applies, and we should check each of the None / One / Multiple instance categories.

| Code     | Description     |
|:--------:|:----------------|
| None     | No values       |
| One      | One value       |
| Multiple | Multiple values |

Apply to:

  • Lines
  • File columns
  • Key instances
  • Delimiter characters

Categories and Scenarios

After analysis of the possible scenarios in terms of categories and category sets, we can depict them on a Category Structure diagram:

We can tabulate the results of the category analysis, and assign a scenario against each category set/category with a unique description:

| #  | Category Set              | Category            | Scenario                                  |
|---:|:--------------------------|:--------------------|:------------------------------------------|
|  1 | Lines Multiplicity        | None                | No lines                                  |
|  2 | Lines Multiplicity        | One                 | One line                                  |
|  3 | Lines Multiplicity        | Multiple            | Multiple lines                            |
|  4 | File Column Multiplicity  | One                 | One column in file                        |
|  5 | File Column Multiplicity  | Multiple            | Multiple columns in file                  |
|  6 | Key Instance Multiplicity | One                 | One key instance                          |
|  7 | Key Instance Multiplicity | Multiple            | Multiple key instances                    |
|  8 | Delimiter Multiplicity    | One                 | One delimiter character                   |
|  9 | Delimiter Multiplicity    | Multiple            | Multiple delimiter characters             |
| 10 | Key Size                  | Short               | Short key                                 |
| 11 | Key Size                  | Long                | Long key                                  |
| 12 | Log file existence        | No                  | Log file does not exist at time of call   |
| 13 | Log file existence        | Yes                 | Log file exists at time of call           |
| 14 | Key/Value Ordering        | Same                | Order by key same as order by value       |
| 15 | Key/Value Ordering        | Not Same            | Order by key differs from order by value  |
| 16 | Errors                    | Mismatch            | Actual/expected mismatch                  |
| 17 | Errors                    | Unhandled Exception | Unhandled exception                       |

From the scenarios identified we can construct the following CSV file (colgroup_sce.csv), taking the category set and scenario columns, and adding an initial value for the active flag:

The powershell API can be run with the following powershell script in the folder of the CSV files:

Format-JSON-ColGroup.ps1
Import-Module TrapitUtils
Write-UT_Template 'colgroup' '|'

This creates the template JSON file, colgroup_temp.json, which contains an element for each of the scenarios, with the appropriate category set and active flag, with a single record in each group with default values from the groups CSV files. Here is the "Multiple lines" element:

"Multiple lines": {
  "active_yn": "N",
  "category_set": "Lines Multiplicity",
  "inp": {
    "Log": [
      ""
    ],
    "Scalars": [
      ",|col_1|N"
    ],
    "Lines": [
      "col_0,col_1,col_2"
    ]
  },
  "out": {
    "Log": [
      "1|IN [0,2000]|LIKE /.*: File .*ut_group.*.csv, delimiter ',', column 0/"
    ],
    "listAsIs": [
      "1"
    ],
    "sortByKey": [
      "val_1|1"
    ],
    "sortByValue": [
      "val_1|1"
    ]
  }
},

For each scenario element, we need to update the values to reflect the scenario to be tested, in the actual input JSON file, colgroup.json. In the case above, we can just replace the "Lines" input group with:

    "Lines": [
      "col_0,col_1,col_2",
      "val_0,val_1,val_2",
      "val_0,val_1,val_2"
    ]

and replace '1' with '2' in two of the output groups:

    "sortByKey": [
      "val_1|2"
    ],
    "sortByValue": [
      "val_1|2"
    ]

Step 2: Create Results Object - ColGroup

↑ Example 2 - ColGroup

Step 2 requires the writing of a wrapper function that is passed into a call to the unit test driver API.

  • Trapit.fmtTestUnit is the unit test driver API that reads the input JSON file, calls the wrapper function for each scenario, and creates the output object with the actual results merged in along with the expected results.

Here is a skeleton of the script for this case:

test-colgroup.js (skeleton)
const Trapit = require('trapit');
function purelyWrapUnit(inpGroups) { // input groups object
(function body)
}
Trapit.fmtTestUnit(INPUT_JSON, ROOT, purelyWrapUnit, 'B', colors);

This creates the output object and goes on to format it, producing listings of the results in HTML and/or text format in a subfolder named from the unit test title, here colgroup.

Step 3: Format Results - ColGroup

↑ Example 2 - ColGroup ↓ Unit Test Report - ColGroup ↓ Scenario 16: Actual/expected mismatch [Category Set: Errors]

Here we show the scenario-level summary of results for the specific example, and also show the detail for one scenario.

You can review the HTML formatted unit test results here:

Unit Test Report - ColGroup

↑ Step 3: Format Results - ColGroup

This is a screenshot of the summary page in HTML format.

Scenario 16: Actual/expected mismatch [Category Set: Errors]

↑ Step 3: Format Results - ColGroup

This scenario is designed to fail, with one of the expected values in group 4 set to 9999 instead of the correct value of 2, just to show how mismatches are displayed.

Usage 2 - Formatting Test Results for External Programs

↑ Usage ↓ Results Summaries for External Folders

For non-JavaScript programs tested using the Math Function Unit Testing design pattern, the results object is materialized using a library package in the relevant language. The diagram below shows how the processing is split into two steps:

  • First, the output results object is created using the external library package in a similar way to the JavaScript processing, and is then written to a JSON file
  • Second, a JavaScript script from the current project is run, passing in the name of the folder with the results JSON file(s)

This creates a subfolder for each JSON file with name based on the unit test title within the file, and also outputs a table of summary results for each file. The processing is split between three code units in a similar way to the JavaScript case:

  • Test Unit: External library function that drives the unit testing with a callback to a specific wrapper function
  • Specific Test Package: This has a 1-line main program to call the library driver function, passing in the callback wrapper function
  • Unit Under Test (API): Called by the wrapper function, which converts between its specific inputs and outputs and the generic version used by the library package

In the first step the external program creates the output results JSON file, while in the second step the file is read into an object by the Trapit library package, which then formats the results in exactly the same way as for JavaScript testing.

As mentioned in the General Usage section above, there are three alternative JavaScript scripts for formatting non-JavaScript unit test results, and usually the calls would be encapsulated within a function in a library package in the non-JavaScript language.

In the next section below we show the results by subfolder from the script format-externals.js, passing as a parameter the name of a subfolder within the externals folder. It is run from a Powershell window in the root trapit folder for a subfolder containing a set of JSON results files:

$ node externals/format-externals subfolder

Results Summaries for External Folders

↑ Usage 2 - Formatting Test Results for External Programs ↓ oracle_api_demos ↓ oracle_plsql ↓ oracle_unit_test_examples ↓ powershell ↓ python ↓ shortest_path_sql

Here we give the top-level results summaries output to console for each of the groups of externally-sourced JSON files. Links to the source GitHub project are included for each group.

oracle_api_demos

↑ Results Summaries for External Folders

The results JSON file is sourced from the following GitHub project, and the formatted results files can be seen in the indicated subfolders:

Running the format-externals script for subfolder oracle_api_demos from a Powershell window in the root trapit folder:

$ node externals/format-externals oracle_api_demos

gives the following output to console, as well as writing the results subfolders as indicated:

Unit Test Results Summary for Folder [MY_PATH]/node_modules/trapit/externals/oracle_api_demos
=============================================================================================
 File                                                 Title                                                    Inp Groups  Out Groups  Tests  Fails  Folder
----------------------------------------------------  -------------------------------------------------------  ----------  ----------  -----  -----  -------------------------------------------------------
 tt_emp_batch.purely_wrap_load_emps_out.json          Oracle PL/SQL API Demos: TT_Emp_Batch.Load_Emps                   5           5      9      0  oracle-pl_sql-api-demos_-tt_emp_batch.load_emps
 tt_emp_ws.purely_wrap_get_dept_emps_out.json         Oracle PL/SQL API Demos: TT_Emp_WS.Get_Dept_Emps                  2           2      5      0  oracle-pl_sql-api-demos_-tt_emp_ws.get_dept_emps
*tt_emp_ws.purely_wrap_save_emps_out.json             Oracle PL/SQL API Demos: TT_Emp_WS.Save_Emps                      1           4      4      1  oracle-pl_sql-api-demos_-tt_emp_ws.save_emps
 tt_view_drivers.purely_wrap_hr_test_view_v_out.json  Oracle PL/SQL API Demos: TT_View_Drivers.HR_Test_View_V           2           2      4      0  oracle-pl_sql-api-demos_-tt_view_drivers.hr_test_view_v

1 externals failed, see [MY_PATH]/node_modules/trapit/externals/oracle_api_demos for scenario listings
tt_emp_ws.purely_wrap_save_emps_out.json

oracle_plsql

↑ Results Summaries for External Folders

The results JSON files are sourced from the following GitHub projects, and the formatted results files can be seen in the indicated subfolders:

Running the format-externals script for subfolder oracle_plsql from a Powershell window in the root trapit folder:

$ node externals/format-externals oracle_plsql

gives the following output to console, as well as writing the results subfolders as indicated:

Unit Test Results Summary for Folder [MY_PATH]/node_modules/trapit/externals/oracle_plsql
=========================================================================================
 File                                         Title                           Inp Groups  Out Groups  Tests  Fails  Folder
--------------------------------------------  ------------------------------  ----------  ----------  -----  -----  ------------------------------
 tt_log_set.purely_wrap_log_set_out.json      Oracle PL/SQL Log Set                    6           6     21      0  oracle-pl_sql-log-set
 tt_net_pipe.purely_wrap_all_nets_out.json    Oracle PL/SQL Network Analysis           1           2      3      0  oracle-pl_sql-network-analysis
 tt_timer_set.purely_wrap_timer_set_out.json  Oracle PL/SQL Timer Set                  2           9      8      0  oracle-pl_sql-timer-set
 tt_utils.purely_wrap_utils_out.json          Oracle PL/SQL Utilities                 15          16      4      0  oracle-pl_sql-utilities

0 externals failed, see [MY_PATH]/node_modules/trapit/externals/oracle_plsql for scenario listings

oracle_unit_test_examples

↑ Results Summaries for External Folders

The results JSON files are sourced from the following GitHub project, and the formatted results files can be seen in the indicated subfolders:

Running the format-externals script for subfolder oracle_unit_test_examples from a Powershell window in the root trapit folder:

$ node externals/format-externals oracle_unit_test_examples

gives the following output to console, as well as writing the results subfolders as indicated:

Unit Test Results Summary for Folder [MY_PATH]/node_modules/trapit/externals/oracle_unit_test_examples
======================================================================================================
 File                                                         Title                           Inp Groups  Out Groups  Tests  Fails  Folder
------------------------------------------------------------  ------------------------------  ----------  ----------  -----  -----  ------------------------------
*tt_feuertips_13.purely_wrap_feuertips_13_poc_out.json        Feuertips 13 - Base                      3           3     15     11  feuertips-13---base
*tt_feuertips_13_v1.purely_wrap_feuertips_13_poc_out.json     Feuertips 13 - v1                        3           3     15      7  feuertips-13---v1
 tt_feuertips_13_v2.purely_wrap_feuertips_13_poc_out.json     Feuertips 13 - v2                        3           3     15      0  feuertips-13---v2
 tt_investigation_mgr.purely_wrap_investigation_mgr_out.json  EPA Investigations                       2           2      9      0  epa-investigations
*tt_login_bursts.purely_wrap_view_ana_out.json                Login Bursts - Analytics                 1           2      3      2  login-bursts---analytics
 tt_login_bursts.purely_wrap_view_mod_out.json                Login Bursts - Model                     1           2      3      0  login-bursts---model
 tt_login_bursts.purely_wrap_view_mre_out.json                Login Bursts - Match_Recognize           1           2      3      0  login-bursts---match_recognize
 tt_login_bursts.purely_wrap_view_rsf_out.json                Login Bursts - Recursive                 1           2      3      0  login-bursts---recursive

3 externals failed, see [MY_PATH]/node_modules/trapit/externals/oracle_unit_test_examples for scenario listings
tt_feuertips_13.purely_wrap_feuertips_13_poc_out.json
tt_feuertips_13_v1.purely_wrap_feuertips_13_poc_out.json
tt_login_bursts.purely_wrap_view_ana_out.json

powershell

↑ Results Summaries for External Folders

The results JSON file is sourced from the following GitHub project, and the formatted results files can be seen in the indicated subfolder:

Running the format-externals script for subfolder powershell from a Powershell window in the root trapit folder:

$ node externals/format-externals powershell

gives the following output to console, as well as writing the results subfolders as indicated:

Unit Test Results Summary for Folder [MY_PATH]/node_modules/trapit/externals/powershell
=======================================================================================
 File                             Title                     Inp Groups  Out Groups  Tests  Fails  Folder
--------------------------------  ------------------------  ----------  ----------  -----  -----  ------------------------
*colgroup_out.json                ColGroup - Powershell              3           5     17      3  colgroup---powershell
 get_ut_template_object_out.json  Get UT Template Object             4           6     18      0  get-ut-template-object
 helloworld_out.json              Hello World - Powershell           0           2      1      0  hello-world---powershell
 merge-mdfiles_out.json           Merge MD Files                     3           3      5      0  merge-md-files
 ps_utils_out.json                Powershell Utils                   7           6      6      0  powershell-utils

1 externals failed, see [MY_PATH]/node_modules/trapit/externals/powershell for scenario listings
colgroup_out.json

python

↑ Results Summaries for External Folders

The results JSON file is sourced from the following GitHub project, and the formatted results files can be seen in the indicated subfolder:

Running the format-externals script for subfolder python from a Powershell window in the root trapit folder:

$ node externals/format-externals python

gives the following output to console, as well as writing the results subfolders as indicated:

Unit Test Results Summary for Folder [MY_PATH]/node_modules/trapit/externals/python
===================================================================================
 File                  Title               Inp Groups  Out Groups  Tests  Fails  Folder
---------------------  ------------------  ----------  ----------  -----  -----  ------------------
*colgroup_out.json     Col Group                    3           4      5      1  col-group
 helloworld_out.json   Hello World                  0           1      1      0  hello-world
 timerset_py_out.json  Python Timer Set             2           8      7      0  python-timer-set
 trapit_py_out.json    Python Unit Tester           7           6      4      0  python-unit-tester

1 externals failed, see [MY_PATH]/node_modules/trapit/externals/python for scenario listings
colgroup_out.json

shortest_path_sql

↑ Results Summaries for External Folders

The results JSON file is sourced from the following GitHub project, and the formatted results files can be seen in the indicated subfolder:

Running the format-externals script for subfolder shortest_path_sql from a Powershell window in the root trapit folder:

$ node externals/format-externals shortest_path_sql

gives the following output to console, as well as writing the results subfolders as indicated:

Unit Test Results Summary for Folder [MY_PATH]/node_modules/trapit/externals/shortest_path_sql
==============================================================================================
 File                                                          Title                                  Inp Groups  Out Groups  Tests  Fails  Folder
-------------------------------------------------------------  -------------------------------------  ----------  ----------  -----  -----  -------------------------------------
 tt_shortest_path_sql.purely_wrap_ins_min_tree_links_out.json  Oracle SQL Shortest Paths: Node Tree            3           2      7      0  oracle-sql-shortest-paths_-node-tree
 tt_shortest_path_sql.purely_wrap_ins_node_roots_out.json      Oracle SQL Shortest Paths: Node Roots           2           2      3      0  oracle-sql-shortest-paths_-node-roots

0 externals failed, see [MY_PATH]/node_modules/trapit/externals/shortest_path_sql for scenario listings

API

↑ In This README... ↓ Functions ↓ Scripts

const Trapit = require('trapit');

Functions

↑ API ↓ testUnit ↓ fmtTestUnit ↓ mkUTExternalResultsFolders ↓ tabMkUTExternalResultsFolders

testUnit

↑ Functions

Trapit.testUnit(inpFile, root, purelyWrapUnit, formatType = 'B', colors)

This is the base entry point for testing JavaScript programs. It writes the output results folder and returns a value containing summary data for the unit test. It has the following parameters:

  • inpFile: JSON input file
  • root: root folder, where the results output files are to be written, in a subfolder with name based on the report title
  • purelyWrapUnit: wrapper function, which calls the unit under test passing the appropriate parameters and returning its outputs, with the following signature:
    • Input parameter: 3-level list with test inputs as an object with groups as properties having 2-level arrays of record/field as values: {GROUP: [[String]], ...}
    • Return Value: 2-level list with test outputs as an object with groups as properties having an array of records as delimited fields strings as value: {GROUP: [String], ...}
  • formatType: format type = H/T/B - Format in HTML/Text/Both; default 'B'
  • colors: object with HTML heading colours; default {h1: '#FFFF00', h2: '#2AE6C1', h3: '#33F0FF', h4: '#7DFF33'}

and object return value with the following fields:

  • nTest: number of test scenarios
  • nFail: number of test scenarios that failed
  • status: status = SUCCESS/FAIL
  • resFolder: name of results subfolder
  • nInpGroups: number of input groups
  • nOutGroups: number of output groups
  • title: unit test title
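
Here is a minimal usage sketch (the file demo.json, the output root and the trivial wrapper are all invented for illustration):

test-demo.js (illustrative)
const Trapit = require('trapit');

// Trivial wrapper: ignores its input groups and returns a single output
// group with one delimited-record string
function purelyWrapUnit(inpGroups) {
  return {'Group': ['Hello World!']};
}

const summary = Trapit.testUnit(__dirname + '/demo.json', __dirname + '/', purelyWrapUnit, 'T');
console.log(summary.status + ': ' + summary.nFail + ' failed of ' + summary.nTest);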

fmtTestUnit

↑ Functions

Trapit.fmtTestUnit(inpFile, root, purelyWrapUnit, formatType = 'B', colors)

This is a wrapper function that calls the base entry point Trapit.testUnit with the same parameters and prints its return object to console.

mkUTExternalResultsFolders

↑ Functions

Trapit.mkUTExternalResultsFolders(extFolder, formatType = 'B', colors)

This is the base entry point for formatting results JSON files from external programs. It writes the output results folders for each file in the external folder, and returns a value containing unit test summary data for the JSON files as an array of objects. It has the following parameters:

  • extFolder: external folder, where the results output files are to be written, in a subfolder with name based on the report title
  • formatType: format type = H/T/B - Format in HTML/Text/Both; default 'B'
  • colors: object with HTML heading colours; default {h1: '#FFFF00', h2: '#2AE6C1', h3: '#33F0FF', h4: '#7DFF33'}

and array return value with the following fields:

  • file: JSON results file name
  • nTest: number of test scenarios
  • nFail: number of test scenarios that failed
  • status: status = SUCCESS/FAIL
  • resFolder: name of results subfolder
  • nInpGroups: number of input groups
  • nOutGroups: number of output groups
  • title: unit test title
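
Here is a minimal usage sketch (the folder path is invented for illustration):

const Trapit = require('trapit');

// Format every results JSON file in the given folder, in text format only,
// and print a one-line summary per file
const summaries = Trapit.mkUTExternalResultsFolders(__dirname + '/externals/python', 'T');
for (const s of summaries) {
  console.log(s.file + ': ' + s.status + ' (' + s.nFail + ' failed of ' + s.nTest + ')');
}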

tabMkUTExternalResultsFolders

↑ Functions

Trapit.tabMkUTExternalResultsFolders(extFolder, formatType = 'B', colors)

This is a wrapper function that calls the base entry point Trapit.mkUTExternalResultsFolders with the same parameters and prints its return array in tabular format to console.

Scripts

↑ API ↓ format-external-file.js ↓ format-external-folder.js ↓ format-externals.js

format-external-file.js

↑ Scripts

$ node externals/format-external-file inpFile

This script reads a JSON results file and creates results files formatted in HTML and text in a subfolder named from the unit test title, within the same folder as the JSON file. It has the following parameters:

  • inpFile: JSON results file

and return value:

  • [Summary of results]

format-external-folder.js

↑ Scripts

$ node externals/format-external-folder inpFolder

This script loops over all JSON files in a specified folder and creates results files formatted in HTML and text in a subfolder named from the unit test title. It has the following parameters:

  • inpFolder: input folder for the JSON files, and where the results output files are to be written, in subfolders with names based on the report titles

and return value:

  • [Summary table of results]

format-externals.js

↑ Scripts

$ node externals/format-externals subFolder

This script loops over all JSON files in a specified subfolder and creates results files formatted in HTML and text in subfolders with names based on the report titles. It has the following parameters:

  • subFolder: subfolder (of externals folder), where the results output files are to be written, in subfolders with names based on the report titles

and return value:

  • [Summary table of results]

Installation

↑ In This README...

With Node.js installed, run (from the folder where you want the package to be installed):

$ npm install trapit

Unit Testing

↑ In This README... ↓ Step 1: Create JSON File ↓ Step 2: Create Results Object ↓ Step 3: Format Results

The package itself is tested using The Math Function Unit Testing Design Pattern. A 'pure' wrapper function is constructed that takes input parameters and returns a value, and is tested within a loop over scenario records read from a JSON file.

In this case, the pure function getUTResults is unit tested explicitly, while the function fmtTestUnit is called as the main section of the unit test script, test-trapit.js.
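
As a hedged sketch of a direct call (the file name is illustrative, and the scenarios object is assumed to already carry actual results merged in alongside the expected ones, as the driver would produce before formatting):

const Trapit = require('trapit');
const fs = require('fs');

// Load a results object from file and pass its meta and scenarios
// sections to the pure function
const json = JSON.parse(fs.readFileSync(__dirname + '/getutresults.json', 'utf8'));
const utResults = Trapit.getUTResults(json.meta, json.scenarios);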

Step 1: Create JSON File

↑ Unit Testing ↓ Unit Test Wrapper Function ↓ Scenario Category ANalysis (SCAN)

Unit Test Wrapper Function

↑ Step 1: Create JSON File ↓ Wrapper Function Signature Diagram

The signature of the unit under test is:

Trapit.getUTResults(inMeta, inScenarios);

where the parameters are input metadata and scenarios objects. The diagram below shows the structure of the input and output of the wrapper function.

Wrapper Function Signature Diagram

↑ Unit Test Wrapper Function

From the input and output groups depicted we can construct CSV files with flattened group/field structures, and default values added, as follows (with getutresults_inp.csv left, getutresults_out.csv right):

Scenario Category ANalysis (SCAN)

↑ Step 1: Create JSON File ↓ Generic Category Sets ↓ Categories and Scenarios

The art of unit testing lies in choosing a set of scenarios that will produce a high degree of confidence in the functioning of the unit under test across the often very large range of possible inputs.

A useful approach to this can be to think in terms of categories of inputs, where we reduce large ranges to representative categories. I explore this approach further in this article:

Generic Category Sets

↑ Scenario Category ANalysis (SCAN)

As explained in the article mentioned above, it can be very useful to think in terms of generic category sets that apply in many situations. Multiplicity is relevant here (as it often is):

Multiplicity

There are several entities where the generic category set of multiplicity applies, and we should check each of the None / One / Multiple instance categories.

| Code     | Description     |
|:--------:|:----------------|
| None     | No values       |
| One      | One value       |
| Multiple | Multiple values |

Apply to:

  • Input groups
  • Output groups
  • Input fields
  • Output fields
  • Input records
  • Output records
  • Scenarios

Categories and Scenarios

↑ Scenario Category ANalysis (SCAN)

After analysis of the possible scenarios in terms of categories and category sets, we can depict them on a Category Structure diagram:

We can tabulate the results of the category analysis, and assign a scenario against each category set/category with a unique description:

| #  | Category Set                | Category     | Scenario                                              |
|---:|:----------------------------|:-------------|:------------------------------------------------------|
|  1 | Input Group Multiplicity    | None         | No input groups                                       |
|  2 | Input Group Multiplicity    | One          | One input group                                       |
|  3 | Input Group Multiplicity    | Multiple     | Multiple input groups                                 |
|  4 | Output Group Multiplicity   | None         | No output groups                                      |
|  5 | Output Group Multiplicity   | One          | One output group                                      |
|  6 | Output Group Multiplicity   | Multiple     | Multiple output groups                                |
|  7 | Input Field Multiplicity    | One          | One input group field                                 |
|  8 | Input Field Multiplicity    | Multiple     | Multiple input fields                                 |
|  9 | Output Field Multiplicity   | One          | One output group field                                |
| 10 | Output Field Multiplicity   | Multiple     | Multiple output fields                                |
| 11 | Input Record Multiplicity   | None         | No input group records                                |
| 12 | Input Record Multiplicity   | One          | One input group record                                |
| 13 | Input Record Multiplicity   | Multiple     | Multiple input group records                          |
| 14 | Output Record Multiplicity  | None         | No output group records                               |
| 15 | Output Record Multiplicity  | One          | One output group record                               |
| 16 | Output Record Multiplicity  | Multiple     | Multiple output group records                         |
| 17 | Scenario Multiplicity       | None         | No scenarios                                          |
| 18 | Scenario Multiplicity       | One          | One scenario                                          |
| 19 | Scenario Multiplicity       | Multiple     | Multiple scenarios                                    |
| 20 | Test Status                 | Pass         | All scenarios pass                                    |
| 21 | Test Status                 | Fail         | At least one scenario fails                           |
| 22 | Exception                   | #Groups      | Groups number mismatch                                |
| 23 | Exception                   | #Fields      | Fields number mismatch                                |
| 24 | Match Type String           | Exact Pass   | Exact string pass                                     |
| 25 | Match Type String           | Exact Fail   | Exact string fail                                     |
| 26 | Match Type String           | Inexact Pass | Inexact (regex) pass                                  |
| 27 | Match Type String           | Inexact Fail | Inexact (regex) fail                                  |
| 28 | Match Type String           | Untested     | Untested                                              |
| 29 | Match Type Number           | Exact Pass   | Exact number pass                                     |
| 30 | Match Type Number           | Exact Fail   | Exact number fail                                     |
| 31 | Match Type Number           | Inexact Pass | Inexact (range) just pass                             |
| 32 | Match Type Number           | Inexact Fail | Inexact (range) just fail                             |
| 33 | Match Type Number           | Inexact/Null | Number (range) fail null                              |
| 34 | Category Set                | Undefined    | Category sets undefined                               |
| 35 | Category Set                | Null         | Category sets null                                    |
| 36 | Category Set                | Same         | Multiple category sets with the same value            |
| 37 | Category Set                | Different    | Multiple category sets with null and not null values  |

From the scenarios identified we can construct the following CSV file (getutresults_sce.csv), taking the category set and scenario columns, and adding an initial value for the active flag:

The powershell API to generate a template JSON file can be run with the following powershell in the folder of the CSV files:

Format-JSON-GetUTResults.ps1

Import-Module TrapitUtils
Write-UT_Template 'getutresults' '|'

This creates the template JSON file, getutresults_temp.json, which contains an element for each of the scenarios, with the appropriate category set and active flag, with a single record in each group with default values from the groups CSV files and using the field delimiter '|'.

Step 2: Create Results Object

↑ Unit Testing

Step 2 requires the writing of a wrapper function that is passed into a call to the unit test driver function, Trapit.fmtTestUnit. This reads the input JSON file, calls the wrapper function for each scenario, and creates the output object with the actual results merged in along with the expected results. In this JavaScript version, the entry point goes on to execute step 3, formatting the results, without needing to materialize the output object.

A skeleton of the test script is shown below. It starts by loading the library testing module, defines some local functions, then the wrapper function, and finally executes a 1-line call to the library entry point, Trapit.fmtTestUnit, passing in the wrapper function.

test-trapit.js (skeleton)

const Trapit = require('trapit');

function setFldsRows(inpOrOut, sce, groups) { (function body) }
function setOut(utOutput) { (function body) }
function setOutException(s, exceptions) { (function body) }
function addSce(inpGroupNames, outGroupNames, lolSce, category_setInc) { (function body) }
function getGroups(fields) { (function body) }
function getInScenarios(inMeta, inpGroups, repFields) { (function body) }
function purelyWrapUnit(inpGroups) { (function body) }

Trapit.fmtTestUnit(INPUT_JSON, ROOT, purelyWrapUnit);

Step 3: Format Results

↑ Unit Testing ↓ Unit Test Report - getUTResults ↓ Scenario 19: Multiple scenarios [Category Set: Scenario Multiplicity]

As noted, in this JavaScript version, the entry point goes on to execute step 3, formatting the results, so this does not require an explicit call. Here we just show extracts from the formatted results.

You can review the HTML formatted unit test results here:

Unit Test Report - getUTResults

↑ Step 3: Format Results

Scenario 19: Multiple scenarios [Category Set: Scenario Multiplicity]

↑ Step 3: Format Results ↓ Results for Scenario 19: Multiple scenarios [Category Set: Scenario Multiplicity]

The summary report in text format shows the scenarios tested:

Unit Test Report: getUTResults
==============================

      #    Category Set                Scenario                                              Fails (of 6)  Status
      ---  --------------------------  ----------------------------------------------------  ------------  -------
      1    Input Group Multiplicity    No input groups