reffy
v17.2.10

W3C/WHATWG spec dependencies exploration companion. Features a short set of tools to study spec references as well as WebIDL term definitions and references found in W3C specifications.

Downloads: 2,036

Reffy

Reffy is a Web spec crawler tool. It is notably used to update Webref every 6 hours.

The code features a generic crawler that can fetch Web specifications and generate machine-readable extracts out of them. Created extracts include lists of CSS properties, definitions, IDL, links and references contained in the specification.

How to use

Prerequisites

To install Reffy, you need Node.js 20.12.1 or greater (the crawler itself may still run with earlier versions of Node.js but without any guarantee).

Installation

Reffy is available as an NPM package. To install the package globally, run:

npm install -g reffy

This will install Reffy as a command-line interface tool.

The list of specs crawled by default evolves regularly. To make sure that you run the latest version, use:

npm update -g reffy

Launch Reffy

Reffy crawls requested specifications and runs a set of processing modules on the fetched content to create relevant extracts from each spec. Which specs get crawled and which processing modules get run depend on how the crawler gets called. By default, the crawler crawls all specs defined in browser-specs and runs all core processing modules defined in the browserlib folder.

Reffy can also run post-processing modules on the results of the crawl to create additional views of the data extracted from the spec during the crawl.

Crawl results are returned to the console, or saved to individual files in a report folder when the --output parameter is set.
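
For instance, a full default crawl that saves its results under a reports/crawl folder (the folder name here is just an example; any path works) would be:

reffy --output reports/crawl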

Examples of information that can be extracted from the specs (a sketch of what one extract might look like follows the list):

  1. Generic information such as the title of the spec or the URL of the Editor's Draft. This information is typically copied over from browser-specs.
  2. The list of terms that the spec defines, in a format suitable for ingestion in cross-referencing tools such as ReSpec.
  3. The list of IDs, the list of headings and the list of links in the spec.
  4. The list of normative/informative references found in the spec.
  5. Extended information about WebIDL term definitions and references that the spec contains.
  6. For CSS specs, the list of CSS properties, descriptors and value spaces that the spec defines.
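
As a purely illustrative sketch (the exact shape of the official extracts may differ), a definitions extract for a spec could look roughly like this:

  {
    "spec": {
      "title": "Fetch Standard",
      "url": "https://fetch.spec.whatwg.org/"
    },
    "dfns": [
      {
        "id": "concept-request",
        "href": "https://fetch.spec.whatwg.org/#concept-request",
        "linkingText": ["request"],
        "type": "dfn",
        "access": "public",
        "informative": false
      }
    ]
  }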

The crawler can be fully parameterized to crawl a specific list of specs and run a custom set of processing modules on them. For example:

  • To extract the raw IDL defined in Fetch, run:
    reffy --spec fetch --module idl
  • To retrieve the list of specs that the HTML spec references, run the following (crawling the HTML spec takes some time because it is a multi-page spec):
    reffy --spec html --module refs
  • To extract the list of CSS properties defined in CSS Flexible Box Layout Module Level 1, run:
    reffy --spec css-flexbox-1 --module css
  • To extract the list of terms defined in WAI ARIA 1.2, run:
    reffy --spec wai-aria-1.2 --module dfns
  • To run a hypothetical extract-editors.mjs processing module and create individual spec extracts with the result of the processing under an editors folder for all specs in browser-specs, run:
    reffy --output reports/test --module editors:extract-editors.mjs

You may add --terse (or -t) to the above commands to access the extracts directly.

Run reffy -h for a complete list of options and usage details.

Some notes:

  • The crawler may take a few minutes, depending on the number of specs it needs to crawl.
  • The crawler uses a local cache for HTTP exchanges. In particular, it will create and fill a .cache subfolder.
  • If you cloned the repo instead of installing Reffy globally, replace reffy with node reffy.js in the examples above to run Reffy.

Additional tools

Additional CLI tools in the src/cli folder complement the main specs crawler.

WebIDL parser

The WebIDL parser takes the relative path to an IDL extract and generates a JSON structure that describes the WebIDL term definitions and references that the spec contains. The parser uses WebIDL2 to parse the WebIDL content found in the spec. To run the WebIDL parser:

node src/cli/parse-webidl.js [idlfile]

To create the WebIDL extract in the first place, you will need to run the idl module in Reffy, as in:

reffy --spec fetch --module idl > fetch.idl
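
Under the hood, the parser relies on the webidl2 npm package. As a minimal illustrative sketch (not Reffy's actual code), parsing the fetch.idl extract created above directly with that library could look like:

  // Hypothetical standalone example, assuming `npm install webidl2`
  const fs = require("fs");
  const { parse } = require("webidl2");

  // Read the IDL extract produced by the reffy command above
  const idl = fs.readFileSync("fetch.idl", "utf8");

  // parse() returns an array of WebIDL definition nodes
  for (const def of parse(idl)) {
    console.log(def.type, def.name);
  }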

Crawl results merger

The crawl results merger merges a new JSON crawl report into a reference one. This tool is typically useful for replacing the crawl results of a given specification with the results of a new run of the crawler on that specification. To run the crawl results merger:

node src/cli/merge-crawl-results.js [new crawl report] [reference crawl report] [crawl report to create]
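
For example, with hypothetical file names, replacing the results for one spec in an existing full report could look like:

node src/cli/merge-crawl-results.js reports/new/crawl.json reports/reference/crawl.json reports/merged/crawl.json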

Analysis tools

Starting with Reffy v5, the analysis tools that used to be part of Reffy's suite of tools to study extracts and create human-readable reports of potential spec anomalies migrated to a companion tool named Strudy. The resulting reports are published in a separate w3c/webref-analysis repository.

WebIDL terms explorer

See the related WebIDLPedia project and its repo.

Technical notes

Reffy should be able to parse most W3C/WHATWG specifications that define CSS and/or WebIDL terms (both published versions and Editor's Drafts), and more generally specs authored with either Bikeshed or ReSpec. Reffy can also parse certain IETF specs to some extent, and may work with other types of specs as well.

List of specs to crawl

Reffy crawls specs defined in w3c/browser-specs. If you believe a spec is missing, please check the Spec selection criteria and create an issue (or prepare a pull request) against the w3c/browser-specs repository.

Crawling a spec

Given some spec info, the crawler basically goes through the following steps (a simplified code sketch follows the list):

  1. Load the URL through Puppeteer.
  2. If the document contains a "head" section that includes a link whose label looks like "single page", go back to step 1 and load the target of that link instead. This makes the crawler load the single-page version of multi-page specifications such as HTML5.
  3. If the document is a multi-page spec without a "single page" version, load the individual subpages and append their content to the bottom of the first page to create a single-page version.
  4. If the document uses ReSpec, let ReSpec finish its generation work.
  5. Run internal tools on the generated document to build the relevant information.
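
As a simplified sketch of steps 1 and 2 using Puppeteer (illustrative only, not Reffy's actual code; the loadSpec helper and the link-matching heuristic are assumptions):

  const puppeteer = require("puppeteer");

  async function loadSpec(url) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle0" });

    // Step 2: if the page advertises a "single page" version, load that instead
    const singlePageUrl = await page.evaluate(() => {
      const link = [...document.querySelectorAll("a")]
        .find(a => /single page/i.test(a.textContent));
      return link ? link.href : null;
    });
    if (singlePageUrl && singlePageUrl !== url) {
      await page.goto(singlePageUrl, { waitUntil: "networkidle0" });
    }

    const html = await page.content();
    await browser.close();
    return html;
  }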

The crawler processes 4 specifications at a time. Network and parsing errors should be reported in the crawl results.

Config parameters

The crawler reads parameters from the config.json file. Optional parameters:

  • cacheRefresh: set this flag to never to tell the crawler to use the cache entry for a URL directly, instead of sending a conditional HTTP request to check whether the entry is still valid. This parameter is typically useful when developing Reffy's code to work offline.
  • resetCache: set this flag to true to tell the crawler to reset the contents of the local cache when it starts.
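
For instance, a minimal config.json combining the two optional parameters above (values shown for illustration) could be:

  {
    "cacheRefresh": "never",
    "resetCache": false
  }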

Contributing

Authors so far are François Daoust and Dominique Hazaël-Massieux.

Additional ideas, bugs and/or code contributions are most welcome. Create issues on GitHub as needed!

Licensing

The code is available under an MIT license.