k4-base-bridge

Protocol-agnostic core module for applications interfacing with devices
Prerequisites
Node.js
Please install Node.js version 10.16.3 or greater.
Usage
API
The available documentation reflects the trunk of this project, which is currently `master`.
A broad flow diagram is available here.
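The generated JSDoc described below is the authoritative API reference. Purely as an orientation aid, the sketch that follows shows roughly how a bridge built from core classes such as an Adapter, a Transport, and a Command might be wired together; the exports, constructor options, and method names are assumptions and are not confirmed by this README.

```js
// Purely illustrative -- NOT the documented k4-base-bridge API.
// Exports, constructor options, and method names below are assumptions;
// consult the generated JSDoc for the real interfaces.
const { Adapter, SerialTransport, Command } = require('k4-base-bridge');

// Hypothetical: bind an adapter to a serial transport
const adapter = new Adapter({
  transport: new SerialTransport({ path: '/dev/ttyUSB0', baudRate: 115200 }),
});

// Hypothetical: issue a command and handle the device response
adapter.send(new Command({ name: 'ping' }))
  .then((response) => console.log('device replied:', response))
  .catch((err) => console.error('bridge error:', err));
```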
Documentation Creation
Generation
Using Node.js 10.16.3 or greater, please run the following to generate the documentation under `./jsdoc-out`:
```
npm i
npm run doc-gen
```
Please view the documentation by opening `./jsdoc-out/index.html` in a web browser such as Mozilla Firefox or Google Chrome.
Publishing
N.B. Please publish responsibly. The current accepted practice is to publish only documentation generated from the trunk (`master`) branch of the project.
AWS CLI
- Please install the AWS CLI for your system
- Ensure that you have access to the `edge-iot`-prefixed paths within the `com.k4connect.docs` AWS S3 Bucket
- Configure your AWS CLI installation with your AWS IAM credentials using `aws configure`
- Generate the documentation according to the above instructions
- Publish the documentation to the `edge-iot/k4-base-bridge` project within the `com.k4connect.docs` AWS S3 Bucket as follows:
```
aws s3 cp --recursive ./jsdoc-out/ s3://com.k4connect.docs/edge-iot/k4-base-bridge/
```
Contributing
Workflow for Implementing a Feature
- Install the library dependencies
  - Run `npm install` in a Terminal session
- Determine what changes have to be made
  - Identify which entities need new behavior
    - Is it a core class? An accessory utility? A core plugin?
  - Find the most appropriate common location to make the change
    - This will avoid multiple potential failure areas
  - Work backwards from the entities that will be changing to see if their dependent entities will need revision
    - Make those necessary changes as well
- Write tests to cover the new features (see writing tests)
- Execute the test suite (see running tests)
- Revise the JSDoc-style annotated comments in the source code to reflect any API updates (see the annotation sketch after this list)
- Line up any new dependency library installs and versioning
- Create a Pull Request against trunk (`master`). This will trigger the complete test suite as well.
  - If any status checks fail, please address the feedback and re-push the updated branch
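As a quick reference for the JSDoc step above, the following is a small, hypothetical example of the annotation style; the function name and parameters are illustrative only and are not part of this library's API.

```js
/**
 * Sends a command to the target device and resolves with its response.
 * (Hypothetical function, shown only to illustrate the JSDoc annotation style.)
 * @param {Object} command - The command payload to transmit.
 * @param {number} [timeoutMs=5000] - Milliseconds to wait before rejecting.
 * @returns {Promise<Object>} The parsed device response.
 */
async function sendCommand(command, timeoutMs = 5000) {
  // Implementation elided; only the annotations above matter here.
  return { command, timeoutMs };
}
```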
Tests
Writing Tests
- For every new module or class created, please create a companion test source file. For additions to an existing module or class, please use (or even improve 😄) the scaffolding in the existing test file, and append your new test cases to it.
- NOTE: It may be difficult to estimate what scope of testing is required for a new addition. Please use your best judgment.
- For tests closer to the "unit" level, these files will fall under the `test/` directory
  - Examples of entities that currently are tested in this scope:
    - Core Classes (e.g. Adapter, Command, Device, Response)
    - Transports (e.g. Serial, Udp)
    - Simple Plugins (e.g. Queue, Sequencer)
    - Simple Utilities (e.g. Timing Utility, Device Fingerprint Utility)
    - Simple End-to-End Tests
      - Send/Receive from Transports
- For tests on the "integration" level, or ones that require a sample bridge, these files will fall under the `sample/<iot-protocol-name>/test/` directory
  - Examples of entities that currently are tested in this scope:
    - Complex plugins (e.g. Polling Manager, Node State Monitor, Configurator, Pairer)
    - Complex utilities (e.g. Mapping Change Monitor, Model Change Monitor)
    - Some example ZWave functionality (e.g. Zip Packet Helper, Zip Wait Helper, Ack/Nack Handling)
- Aim towards Behavior-Driven Development first. (Does the test validate the feature being implemented?)
- Use the simplest test logic to verify correctness
- Prefer unit-tests over integration tests, but understand that some modules may be complex enough that unit-testing may not be possible and/or may not provide confidence
- Use the full extent of the toolkit (see the sketch after this list)
  - Spies - for verifying that a function has been called in a certain way
  - Stubs/Mocks - for substituting a dependency in place of the real thing, giving better visibility into the internals
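To make the toolkit above concrete, here is a minimal sketch of a companion test file that combines a Sinon spy with a Proxyquire substitution under Mocha and Chai. The module path, its `doWork` function, and the `./logger` dependency are all hypothetical; adapt them to the entity you are actually testing.

```js
// Hypothetical companion test file, e.g. test/my-new-utility.test.js
const { expect } = require('chai');
const sinon = require('sinon');
const proxyquire = require('proxyquire');

describe('myNewUtility', () => {
  it('logs exactly once per invocation (spy + stub example)', () => {
    const logSpy = sinon.spy();

    // Substitute the utility's logger dependency with a spy so the test can
    // observe how it is used, without touching the real logger module.
    const myNewUtility = proxyquire('../lib/my-new-utility', {
      './logger': { log: logSpy },
    });

    myNewUtility.doWork('payload');

    expect(logSpy.calledOnce).to.equal(true);
    expect(logSpy.firstCall.args[0]).to.include('payload');
  });
});
```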
Running Tests
- To run a specific set of tests, simply run `./node_modules/.bin/mocha <list of test files>`
- To run the complete test suite, invoke `npm run test` to use the parallel-test-orchestrator
- The amount of the test suite that needs to execute to provide confidence increases with the scope and complexity of the change
  - In order of increasing test execution overhead: core classes, accessory utilities, and core plugins
- If running more than a few long-running tests directly with `mocha`, it may be better to simply run the complete test suite. This makes parallel executors available.
- The speed of the complete test suite execution depends on the number of parallel executors.
- Due to several longer-running integration tests, on a single executor, the total suite may take up to 2 hours. However, with 4 executors, this time drops down to 30 minutes. Further parallelization may not help, since some of the longer-running tests actually execute for the full 30 minutes.
Helpful References
- Mocha.js Test Runner documentation
- Chai.js Expect/BDD Assertion documentation
- Sinon.js Spy/Fakes Library documentation
- Proxyquire Stubs/Mocks Library documentation
Side Notes
- The parallel-coverage-orchestrator does not have the effect one might expect.
  - It does successfully invoke the NYC/Istanbul Code Coverage tool, but the parallelization fragments the coverage reports
  - There is no solution yet to "merge" the many individual coverage reports, which may mean the test coverage is actually underreported
- If a coverage report is truly desired, the only way to do so reliably is to use the `.nycrc` file at this project's root and run all tests in a non-parallelized sequence using `nyc`.