

Evaldown

Evaluate JavaScript and TypeScript snippets in markdown files.


This tool provides both CLI and programmatic interfaces for locating JS/TS code blocks in one or more markdown files, extracting and evaluating those blocks, and serialising their pretty-printed output in a range of formats.

We start by discussing use of the tool from the command line; later sections cover configuring the tool, authoring snippets, and validating them once annotated.

Use

We start by introducing an invocation for processing a single markdown file:

npx evaldown ./docs/README.md > README.md

The file will be processed and the output written to stdout. To write the output back into the source file, automatically capturing it there, we can use the --inplace option:

npx evaldown --inplace ./docs/README.md

All the examples in this section are executable in a checkout of the evaldown repository.

Process directories of files

Processing all the files in a directory looks almost identical:

npx evaldown --target-path testdata/output testdata/example

As does applying an update to the source files within a directory:

npx evaldown --inplace ./testdata/example

Working with TypeScript

Support for processing TypeScript blocks is built in. An explicit path to the tsconfig.json file is required; from it the project-specific compiler is detected and used to transpile snippets:

npx evaldown --tsconfig-path ./testdata/typescript/tsconfig.json ./testdata/typescript/example.md

Beyond command line options

The tool supports many additional options to alter its behaviour.

Typically, the tool is installed as a dev dependency via npm and any options are read directly from a configuration file:

npm install --save-dev evaldown
npx evaldown --config <path_to_config>

Configuration

The tool ships with inbuilt support for processing directories of markdown files. To do this, a small config file is needed to indicate the source path to read files from and the target path to write generated output to.

A basic evaldown.conf.js file is as follows:

module.exports = {
  sourcePath: "./input",
  targetPath: "./output"
};

Output format and extension

Currently the rendering process produces HTML files as standard, with their usual .html file extension. The tool can, however, be asked to write markdown files to the output directory - with the output blocks populated - allowing it to pre-process markdown files before they are passed to another template engine.

"inlined"

This option will write markdown files with the code and output blocks replaced with static HTML that inlines all the colouring information.

module.exports = {
  outputFormat: "inlined",
  sourcePath: "./input",
  targetPath: "./output"
};

"markdown"

This option will write markdown files with the code and output blocks replaced with text (for use when external highlighting is desired).

module.exports = {
  outputFormat: "markdown",
  sourcePath: "./input",
  targetPath: "./output"
};

Capturing evaluation results from the console

By default, JavaScript code blocks found in markdown files - which we refer to as snippets - are allowed to use return statements. The returned values will be rendered as an output block - an example of this is shown in the authoring section below.

In some cases, rather than capture the result, you may wish to capture the logging output of a command, perhaps for code that emits messages when it finishes or simply an example that uses the console.

The most convenient way to capture the console is usually to add a flag to the particular snippet (this is discussed below). If all the examples require the console, it can be enabled globally via an option to the CLI:

npx evaldown --capture console ...

or by adding an outputCapture key to the configuration object:

module.exports = {
  outputCapture: "console",
  sourcePath: "./input",
  targetPath: "./output"
};

Keeping the source up-to-date

Updating the output as you change your examples means the generated documentation always stays current. This mode is considered a key use-case and can be enabled by default via the configuration file:
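
A sketch of such a configuration (assuming the option key is named update, mirroring the --update flag shown below):

module.exports = {
  update: true, // assumed key, mirroring the --update CLI flag
  sourcePath: "./input",
  targetPath: "./output"
};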

It can also be activated on the command line on demand:

./node_modules/.bin/evaldown --config <path_to_config> --update

Authoring

Inside the input folder, you can add markdown files that contain "javascript" code blocks. In order to have any output shown, these need to be followed by "output" snippets.

By default, the value returned from the code block is what will be captured and displayed in the subsequent output block:
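
For example, an authored snippet in the markdown source might look like the following (a sketch; the "output" fence label and the convention of leaving it empty to be filled in are assumptions based on the description above):

```javascript
function doSomething() {
  return { foo: "bar" };
}

// objects are inspected too
return doSomething();
```

```output
```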

When they are rendered, the output will look something like:

function doSomething() {
  return { foo: "bar" };
}

// objects are inspected too
return doSomething();

{ foo: 'bar' }

Customising snippets with flags

When authoring examples you may find that you want to customise how individual snippets are treated - this is done with annotations placed above the code blocks, inline in the markdown.

Comments with the evaldown marker will be located, and the values that follow - which we call flags - will be used as processing hints.

async: true

This flag permits the use of the await keyword:
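
A sketch of how this might be written, assuming the marker is an HTML comment of the form <!-- evaldown ... --> placed directly above the snippet:

<!-- evaldown async:true -->
```javascript
// the async flag is assumed to allow await inside the snippet
const value = await Promise.resolve({ foo: "bar" });
return value;
```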

console: true

This flag will instrument the console such that any logged lines will appear in the output snippet:
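
Again as a sketch, assuming the same comment-based marker form:

<!-- evaldown console:true -->
```javascript
// with the console flag, logged lines are assumed to be captured into the output block
console.log("hello from the snippet");
```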

Validation

Once you have authored your documentation and annotated the markdown such that it can be executed, it is important that these examples stay correct as your project develops.

We can make use of the ability to execute the snippets to check that the evaluated output matches what was previously recorded in the markdown. To do this we can use the --validate option:

npx evaldown --validate ./README.md

In the case of an output mismatch an informative diff is produced:

SnippetValidationError:
expected 'foo' to equal 'foobar'

-foo
+foobar

This option allows such checks to run easily as part of CI pipelines.
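
For example, a package.json script could wire the check into an existing test run (a sketch; the script name is illustrative):

{
  "scripts": {
    "test:docs": "evaldown --validate ./README.md"
  }
}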