
@bifravst/bdd-markdown

v8.2.48

Published

Write BDD tests in Markdown.

Downloads

1,878

Readme

BDD Markdown

Write BDD tests in Markdown.

Idea

Writing BDD tests should be more comfortable than this, so why not use Markdown? It can look like this.

  • it is a well-supported document format; many tools, like auto-formatters, already exist
  • it provides good tools for structuring a hierarchical document
  • it supports embedding source code / JSON payloads, and even tables
  • front matter can be used for feature-level configuration

History

Work on the original BDD e2e feature runner began in 2018, and the project proved very useful for testing cloud-native solutions. Read more about the original idea here. However, the implementation had some shortcomings: in particular, understanding test results and the way state and retries were handled were not optimal. In addition, the old codebase itself was not sufficiently covered with tests. Therefore this project was initiated in 2022, building on four years of experience authoring and running tests. With a fresh set of eyes, the way tests are written was completely changed from Gherkin to Markdown, which called for releasing it as a standalone project.

Examples

  • Demo of supported syntax
  • Gherkin Rule keyword
  • Mars Rover Kata (this demonstrates the Soon keyword which retries steps)
    Run: `set -o pipefail && npx tsx examples/mars-rover/tests.ts | npx tsx reporter/console-cli.ts`
  • Firmware UART log assertions (this demonstrates the use of the Context, a global object that provides run-time settings to the test run and replaces placeholders in step titles and code blocks)
    Run: `set -o pipefail && npx tsx examples/firmware/tests.ts | npx tsx reporter/console-cli.ts`

Test eventually consistent systems using the Soon keyword

Let's have a look at this scenario:

# To Do List

## Create a new todo list item

Given I create a new task named `My item`

Then the list of tasks should contain `My item`

What if you are testing a todo list system that is eventually consistent?

More specifically: creating a new task happens through a POST request to an API that returns a 202 Accepted status code.

The system does not guarantee that the task you've just created is immediately available.

The Then assertion will fail, because it is executed immediately.

For testing eventually consistent systems, we need to either wait a reasonably long time or retry the assertion.

However, many similar assertions in a test suite will quickly add up to long run times.

Therefore the most efficient solution is to retry the assertion until it passes or times out. This way a back-off algorithm can be used to wait increasingly longer between tries, and many retries during the test run will have the least impact on the run time.

Implementing the appropriate way of retrying is left to the step implementation; however, you are encouraged to mark these eventually consistent steps using the Soon keyword.
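As a rough illustration of the back-off approach, a step implementation could retry like this; `retrySoon` and its parameters are made up for this sketch and are not part of the library's API:

```typescript
// Illustrative sketch only: retry an assertion with exponential back-off.
// The name `retrySoon` and its parameters are assumptions, not library API.
const retrySoon = async <T>(
  assertion: () => Promise<T>,
  {
    tries = 5,
    initialDelayMs = 250,
    factor = 2,
  }: { tries?: number; initialDelayMs?: number; factor?: number } = {},
): Promise<T> => {
  let delay = initialDelayMs
  for (let attempt = 1; ; attempt++) {
    try {
      return await assertion()
    } catch (err) {
      if (attempt >= tries) throw err // give up: the assertion timed out
      await new Promise((resolve) => setTimeout(resolve, delay))
      delay *= factor // wait increasingly longer between tries
    }
  }
}
```

With defaults like these, five tries span roughly 250 + 500 + 1000 + 2000 ms of waiting, so an assertion that passes early keeps the impact on the run time small.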

Control feature execution order via dependencies

By default, features are loaded in no particular order. You may attempt to order them using a naming convention; however, this forces a ranking of all features, and over time files might need to be renamed to make room for new features.

In this project, features can specify a dependency on one or more other features in their front matter, and after all feature files are parsed, they are sorted topologically.

Features can define their dependencies via the needs keyword:

---
needs:
  - First feature
---

# Second

## Scenario

Given this is the first step

This feature will be run after the feature with the name `First feature`.
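The topological ordering described above could be sketched like this; the `Feature` shape and the `sortFeatures` name are assumptions for illustration, not the project's actual types:

```typescript
// Illustrative sketch: order features so that every feature runs after the
// features listed in its `needs` front matter. Names are assumptions.
type Feature = { title: string; needs?: string[] }

const sortFeatures = (features: Feature[]): Feature[] => {
  const byTitle = new Map(features.map((f) => [f.title, f] as const))
  const sorted: Feature[] = []
  const done = new Set<string>()
  const visit = (f: Feature, path: Set<string>): void => {
    if (done.has(f.title)) return
    if (path.has(f.title)) throw new Error(`Circular dependency: ${f.title}`)
    path.add(f.title)
    for (const dep of f.needs ?? []) {
      const needed = byTitle.get(dep)
      if (needed !== undefined) visit(needed, path)
    }
    done.add(f.title)
    sorted.push(f) // dependencies were pushed before this feature
  }
  for (const f of features) visit(f, new Set())
  return sorted
}
```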

Running features first and last

In addition, features can specify whether they should be run before or after all other features. Multiple features can have this flag, but dependencies take precedence.

Example: running a feature before all others

---
order: first
---

# Runs before all others

## Scenario

Given this is the first step

Example: running a feature after all others:

---
order: last
---

# Runs after all others

## Scenario

Given this is the first step

Skipping features

Features can be skipped; this will also skip all dependent and transitively dependent features.

Example: skipping a feature

---
run: never
---

# This feature never runs

## Scenario

Given this is the first step
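The skip propagation could be sketched roughly like this; the types and the `skippedFeatures` helper are hypothetical, not part of the project's API:

```typescript
// Illustrative sketch: a feature marked `run: never` is skipped, and so is
// every feature that (transitively) depends on it via `needs`.
type Feature = { title: string; needs?: string[]; run?: 'never' | 'only' }

const skippedFeatures = (features: Feature[]): Set<string> => {
  const skipped = new Set(
    features.filter((f) => f.run === 'never').map((f) => f.title),
  )
  // Propagate skipping to dependents until the set stops growing
  let changed = true
  while (changed) {
    changed = false
    for (const f of features) {
      if (skipped.has(f.title)) continue
      if ((f.needs ?? []).some((dep) => skipped.has(dep))) {
        skipped.add(f.title)
        changed = true
      }
    }
  }
  return skipped
}
```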

Running only specific features

Features can be run exclusively; this will also run all dependent and transitively dependent features. All other features not marked as `run: only` will be skipped.

Example: running only a specific feature

---
run: only
---

# This feature runs, all other features are skipped

## Scenario

Given this is the first step

Variants

Variants (defined in the front matter of a feature) can be used to run the same feature in different variants. For every entry in `variants`, the feature file is run. (Example)

Number placeholders in JSON

For JSON code blocks there is a special notation for number placeholders, which maintains valid JSON syntax and allows formatters like Prettier to format the code block.

Given `v` is the number `42`

And I store this in `result`

```json
{ "foo": "$number{v}" }
```

Then `result` should match

```json
{ "foo": 42 }
```
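Resolving such a placeholder could look like the following sketch; `replaceNumberPlaceholders` is a hypothetical name and not the library's API:

```typescript
// Illustrative sketch: walk a parsed JSON value and replace `"$number{name}"`
// strings with the numeric value stored under `name` in the context.
const replaceNumberPlaceholders = (
  value: unknown,
  context: Record<string, number>,
): unknown => {
  if (typeof value === 'string') {
    const match = /^\$number\{(\w+)\}$/.exec(value)
    return match !== null ? context[match[1]] : value
  }
  if (Array.isArray(value))
    return value.map((v) => replaceNumberPlaceholders(v, context))
  if (value !== null && typeof value === 'object')
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        replaceNumberPlaceholders(v, context),
      ]),
    )
  return value
}
```

With `v` stored as `42`, the placeholder object above resolves to the expected `{ "foo": 42 }`.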

Markdown Reporter

It includes a Markdown reporter, which turns the suite result into Markdown suitable for display as a GitHub Actions job summary.

Example: Mars Rover Report