
@skills17/karma-helpers

v2.0.0


skills17/karma-helpers

This package provides some Karma helpers for usage in a skills competition environment. It includes:

  • Custom output formatter
  • Automatic karma configuration
  • ... and more


Installation

Requirements:

  • Node 16 or greater
  • Karma 5.0 or greater

Karma itself already needs to be installed, together with a compatible testing framework (such as mocha).

To install this package, simply run the following command:

npm install @skills17/karma-helpers

It is suggested to add the following npm scripts:

  "scripts": {
    "test": "karma start",
    "test:json": "karma start --reporters skills17-json"
  },

This will provide the following commands:

  • npm test - Run all tests once and show a nice output with the awarded points (useful for the competitors to see their points)
  • npm run test:json - Run all tests once and get a JSON output (useful for automated marking scripts)

Usage

A config.yaml file needs to be created that contains some information about the task. It should be placed in the root folder of your task, next to the package.json file.

See the @skills17/task-config package for a detailed description of all available properties in the config.yaml file.

If the test files in your tasks do not match the default file pattern (./tests/**/*.@(spec|test).@(js|ts)), the tests setting needs to be overridden.
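As a rough illustration, an override in config.yaml might look like the following. The exact property shape is defined by @skills17/task-config, and the ./suite path is purely hypothetical; consult that package's documentation for the authoritative schema.

```yaml
# config.yaml – sketch of overriding the default test file pattern.
# "./suite" is a hypothetical location; adjust to your task layout.
tests:
  - ./suite/**/*.test.js
```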

Karma config

This package provides a function that automatically configures karma for the current task. To use it, create a karma.conf.js file with the following content:

const config = require('@skills17/karma-helpers');

module.exports = config({
  frameworks: ['mocha', 'chai'],
  plugins: ['karma-mocha', 'karma-chai', 'karma-chrome-launcher'],
});

If a testing framework other than mocha with chai is used, modify the frameworks and plugins lists accordingly. It is also possible to override any other Karma configuration value, but this should usually not be necessary, as @skills17/karma-helpers reads the correct values from the config.yaml file.

Grouping

A core concept is test groups. You usually don't want to test everything for one criterion in a single test function, but instead split it into multiple ones for cleaner tests and a better overview.

In JS, tests are grouped by a test name prefix defined in the config.yaml file.

All describes are concatenated with the actual test names before evaluation.

For example, the following test will have the name Countries > Overview > lists all countries:

describe('Countries', () => {
  describe('Overview', () => {
    it('lists all countries', () => {
      // ...
    });
  });
});

To catch and group all tests within the Overview description, the group matcher can be set to Countries > Overview > .+ for example. Each of the tests within that group will now award 1 point to the group.
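The grouping described above could be expressed in config.yaml roughly as follows. This is an illustrative sketch only: the property names (groups, match, defaultPoints) are assumptions, and the real schema is defined by @skills17/task-config.

```yaml
# config.yaml – hypothetical grouping configuration; property names
# are assumed, see @skills17/task-config for the actual schema.
groups:
  - match: 'Countries > Overview > .+'
    defaultPoints: 1
```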

Extra tests

To prevent cheating, extra tests can be used. They are not available to the competitors and should test exactly the same things as the normal tests do, but with different values.

For example, if your normal test contains a check to search the list of all countries by 'Sw*', copy the test into an extra test and change the search string to 'Ca*'. Since the competitors will not know the extra test, it would detect statically returned values that simply satisfy the 'Sw*' tests instead of actually implementing the search logic.

Extra tests are detected by their describe, which should equal 'Extra' or 'extra'. That means that you can simply wrap your test in an additional extra describe as shown below. The other describes and test names should exactly equal the ones from the normal tests. If they don't, a warning will be displayed.

describe('Extra', () => {    // <-- only this describe has to be added
  describe('Countries', () => {
    it('lists all countries', () => {
      // ...
    });
  });
});

It usually makes sense to move the extra tests into a separate folder, so the folder can simply be deleted before the tasks are distributed to the competitors. Nothing else needs to be done or configured.
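For instance, if the extra tests live in a hypothetical tests/extra folder, stripping them before distributing the task is a one-liner:

```shell
# "tests/extra" is a hypothetical location; adjust to wherever your
# extra tests live. Deleting the folder is all that is needed.
rm -rf tests/extra
```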

If an extra test fails while the corresponding normal test passes, a warning will be displayed that a manual review of that test is required, since it detected possible cheating. The penalty then has to be decided manually on a case-by-case basis; the points visible in the output assume that the test passed and that there was no cheating.

License

MIT