
k4-base-bridge (v7.0.2)

Protocol-agnostic core module for applications interfacing with devices

Prerequisites

Node.js

Please install Node.js version 10.16.3 or greater.
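If an application wants to fail fast on an unsupported runtime, a minimal version guard might look like the sketch below. This is a hypothetical helper, not something the library itself is known to ship:

```javascript
// Illustrative guard: exit early if the Node.js runtime is older than 10.16.3.
// (Hypothetical helper; k4-base-bridge itself may not perform this check.)
const REQUIRED = [10, 16, 3];

function satisfiesMinimum(versionString, required) {
  const parts = versionString.split('.').map(Number);
  for (let i = 0; i < required.length; i++) {
    if (parts[i] > required[i]) return true;
    if (parts[i] < required[i]) return false;
  }
  return true; // versions are exactly equal
}

if (!satisfiesMinimum(process.versions.node, REQUIRED)) {
  console.error(`Node.js ${REQUIRED.join('.')} or greater is required.`);
  process.exit(1);
}
```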

Usage

API

The available documentation reflects the trunk of this project, which is currently master.

A broad flow diagram is available here.


Documentation Creation

Generation

Using Node.js 10.16.3 or greater, please run the following to generate the documentation under ./jsdoc-out:

npm i
npm run doc-gen

Please view the documentation by opening ./jsdoc-out/index.html in a web browser such as Mozilla Firefox or Google Chrome.
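The doc-gen script presumably wraps a JSDoc invocation. A package.json fragment along these lines would produce output under ./jsdoc-out; the actual script contents and source directory are assumptions here:

```json
{
  "scripts": {
    "doc-gen": "jsdoc --recurse ./lib --destination ./jsdoc-out"
  }
}
```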

Publishing

N.B. Please publish responsibly. The current accepted practice is to publish only documentation generated from the trunk (master) branch of the project.

AWS CLI

  1. Please install the AWS CLI for your system.
  2. Ensure that you have access to the edge-iot prefixed paths within the com.k4connect.docs AWS S3 Bucket.
  3. Configure your AWS CLI installation with your AWS IAM credentials using: aws configure
  4. Generate the documentation according to the instructions above.
  5. Publish the documentation to the edge-iot/k4-base-bridge project within the com.k4connect.docs AWS S3 Bucket as follows:
aws s3 cp --recursive ./jsdoc-out/ s3://com.k4connect.docs/edge-iot/k4-base-bridge/

Contributing

Workflow for Implementing a Feature

  1. Install the library dependencies
    • Run npm install in a terminal session
  2. Determine what changes have to be made
    • Identify which entities need new behavior
      • Is it a core class? An accessory utility? A core plugin?
    • Find the single most suitable location to make the change
      • This avoids introducing multiple potential points of failure
    • Work backwards from entities that will be changing to see if their dependent entities will need revision
      • Make those necessary changes as well
  3. Write tests to cover the new features (see writing tests)
  4. Execute the test suite (see running tests)
  5. Revise the JSDoc-style annotated comments in the source code to reflect any API updates
  6. Line up any new dependency library installs and versioning
  7. Create a Pull Request against trunk (master). This will trigger the complete test suite as well.
    • If any status checks fail, please address the feedback and re-push the updated branch

Tests

Writing Tests

  1. For every new module or class created, please create a companion test source file. For additions to an existing module or class, please use (or even improve 😄) the scaffolding in the existing test file, and append your new test cases to it.
    • NOTE: It may be difficult to estimate what scope of testing is required for a new addition. Please use your best judgment.
    • For tests closer to "unit" level, these files will fall under the test/ directory.
      • Examples of entities that currently are tested in this scope:
        • Core Classes (e.g. Adapter, Command, Device, Response)
        • Transports (e.g. Serial, Udp)
        • Simple Plugins (e.g. Queue, Sequencer)
        • Simple Utilities (e.g. Timing Utility, Device Fingerprint Utility)
        • Simple End-to-End Tests
          • Send/Receive from Transports
    • For tests on the "integration" level or ones that require a sample bridge, these files will fall under the sample/<iot-protocol-name>/test/ directory
      • Examples of entities that currently are tested in this scope:
        • Complex plugins (e.g. Polling Manager, Node State Monitor, Configurator, Pairer)
        • Complex utilities (e.g. Mapping Change Monitor, Model Change Monitor)
        • Some example ZWave functionality (e.g. Zip Packet Helper, Zip Wait Helper, Ack/Nack Handling)
  2. Aim for Behavior-Driven Development first. (Does the test validate the feature being implemented?)
    • Use the simplest test logic to verify correctness
    • Prefer unit tests over integration tests, but understand that some modules may be complex enough that unit testing is impractical or does not provide sufficient confidence
  3. Use the full extent of the toolkit
    • Spies - For verifying that a function has been called in a certain way
    • Stubs/Mocks - For substituting a dependency in place of the real thing, giving better visibility into the internals
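To make the spy idea concrete, here is a minimal hand-rolled spy. This is purely illustrative: a real suite would more likely use a spy/stub library such as sinon, and the send function below is a hypothetical stand-in for a transport call, not part of this library's API:

```javascript
// Minimal hand-rolled spy: records every call so a test can assert how the
// function was invoked. (Illustrative only; a real suite would typically use
// a library such as sinon.)
function makeSpy(impl) {
  const spy = (...args) => {
    spy.calls.push(args);
    return impl ? impl(...args) : undefined;
  };
  spy.calls = [];
  return spy;
}

// Hypothetical transport send, stubbed to return a canned acknowledgement.
const send = makeSpy(() => 'ACK');

send('PING', 42);

console.log(send.calls.length); // 1
console.log(send.calls[0]);     // [ 'PING', 42 ]
```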

Running Tests

  1. To run a specific set of tests, simply run ./node_modules/.bin/mocha <list of test files>
  2. To run the complete test suite, invoke npm run test to use the parallel-test-orchestrator
  3. The portion of the test suite that needs to run to provide confidence increases with the scope and complexity of the change.
    • In order of increasing test execution overhead are: core classes, accessory utilities, and core plugins
  4. If you need to run more than a few long-running tests, it may be faster to run the complete test suite, since that makes the parallel executors available.
    • The speed of the complete test suite execution depends on the number of parallel executors.
    • Due to several longer-running integration tests, on a single executor, the total suite may take up to 2 hours. However, with 4 executors, this time drops down to 30 minutes. Further parallelization may not help, since some of the longer-running tests actually execute for the full 30 minutes.
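The timing figures above can be sketched as a rough model: an ideal even split of the roughly two-hour sequential suite across executors, floored by the longest single test. The numbers come from this section; the model is an illustration, not how the parallel-test-orchestrator actually schedules work:

```javascript
// Rough wall-clock model for the parallel test suite, using the figures above:
// ~120 minutes sequentially, with the longest single test running ~30 minutes.
function estimateWallClockMinutes(totalMinutes, longestTestMinutes, executors) {
  // Ideal even split across executors, but never faster than the longest test.
  return Math.max(totalMinutes / executors, longestTestMinutes);
}

console.log(estimateWallClockMinutes(120, 30, 1)); // 120
console.log(estimateWallClockMinutes(120, 30, 4)); // 30
console.log(estimateWallClockMinutes(120, 30, 8)); // 30, extra executors don't help
```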

Side Notes

  1. The parallel-coverage-orchestrator does not have the effect one might expect.
    • It does successfully invoke the NYC/Istanbul Code Coverage tool, but the parallelization fragments the coverage reports
    • There is no solution yet for "merging" the many individual coverage reports, so the fragmented reports may actually underreport test coverage
    • If a coverage report is truly needed, the only reliable way to produce one is to use the .nycrc file at this project's root and run all tests in a non-parallelized sequence under nyc.