
@alshdavid/html-lexer

v0.0.9

Published

An HTML5 lexer

Downloads

197

Readme

An HTML5 lexer for safe template languages


A standards-compliant, incremental/streaming HTML5 lexer.

This is an HTML5 lexer designed to be used as a basis for safe, HTML-context-aware template languages, IDEs, or syntax highlighters. It differs from other available tokenizers in that it preserves all the information of the input string, e.g. formatting, quotation style, and other idiosyncrasies. It does so by producing annotated chunks of the input string rather than the slightly higher-level tokens described in the specification. It does so, however, in a manner that is compatible with the language defined in the HTML5 specification.

The main motivation for this project is the jarring absence of safe HTML template languages. By safe, I mean that the template placeholders are typed according to their context, and that the template engine ensures that the strings that fill the placeholders are automatically and correctly escaped to yield valid HTML.
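To make the idea of context-typed placeholders concrete, here is a minimal, self-contained sketch of context-aware escaping. It is purely illustrative and is not part of this package's API; the context names mirror two of the lexer's token types, and the escaping rules shown cover only the basic cases:

```javascript
// Illustrative sketch only: pick an escaper based on the HTML context
// in which a template placeholder appears.
const escapers = {
  // Text content: escape & and <.
  data: (s) => s.replace(/&/g, "&amp;").replace(/</g, "&lt;"),
  // Double-quoted attribute value: escape & and ".
  attributeValueData: (s) => s.replace(/&/g, "&amp;").replace(/"/g, "&quot;"),
}

function fill(context, value) {
  const escape = escapers[context]
  if (!escape) throw new Error(`unsupported placeholder context: ${context}`)
  return escape(value)
}

console.log(fill("data", "a < b & c"))              // "a &lt; b &amp; c"
console.log(fill("attributeValueData", 'say "hi"')) // "say &quot;hi&quot;"
```

A real template engine would derive the context of each placeholder from the lexer's token stream rather than passing it in by hand.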

Usage

The produced tokens are simply tuples (arrays) [type, chunk] of a token type and a chunk of the input string.

Basic usage

import { Lexer } from "@alshdavid/html-lexer"

const result = Lexer.tokenize("<h1>Hello, World</h1>")

Chunked usage

The lexer has a ‘push parser’ API. Tokens are received by subscribing with the onWrite method, which returns an unsubscribe function.

Example:

import { Lexer } from "@alshdavid/html-lexer"

// Create lexer
const lexer = new Lexer()

// Stream tokens synchronously as they are parsed
lexer.onWrite((token) => console.log(token))
lexer.onEnd(() => console.log("Job's done"))

// Write tokens
lexer.write("<h1>Hello, World</h1>")

// Closes lexer, no more writes can occur
lexer.end()

Results

Both examples produce output that looks like this:

["startTagStart", "<"]
["tagName", "h1"]
["tagEnd", ">"]
["data", "Hello,"]
["space", " "]
["data", "World"]
["endTagStart", "</"]
["tagName", "h1"]
["tagEnd", ">"]

The lexer is incremental: onWrite will be called as soon as a token is available and you can split the input across multiple writes:

const lexer = new Lexer()

lexer.onWrite((token) => console.log(token))

lexer.write("<h")
lexer.write("1>Hello, W")
lexer.write("orld</h1>")

lexer.end()

Token types

The tokens emitted are simple tuples [type, chunk]. The type of a token is just a string, and it is one of:

  • attributeAssign
  • attributeName
  • attributeValueData
  • attributeValueEnd
  • attributeValueStart
  • bogusCharRef
  • charRefDecimal
  • charRefHex
  • charRefLegacy
  • charRefNamed
  • commentData
  • commentEndBogus
  • commentEnd
  • commentStartBogus
  • commentStart
  • data
  • endTagStart
  • lessThanSign
  • uncodedAmpersand
  • newline
  • nulls
  • plaintext
  • rawtext
  • rcdata
  • space
  • startTagStart
  • tagEndAutoclose
  • tagEnd
  • tagName
  • tagSpace

The uncodedAmpersand token is emitted for ampersand (&) characters that do not start a character reference.

The tagSpace token is emitted for whitespace between attributes in element tags.

Otherwise the names should be self-explanatory.
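As a sketch of how a consumer might use these token types, the snippet below collects attribute name/value pairs from a token stream. The token list is hand-written for illustration using the types documented above; it is not actual lexer output:

```javascript
// Hand-written token stream for the tag <a href="/home"> (illustrative).
const tokens = [
  ["startTagStart", "<"],
  ["tagName", "a"],
  ["tagSpace", " "],
  ["attributeName", "href"],
  ["attributeAssign", "="],
  ["attributeValueStart", '"'],
  ["attributeValueData", "/home"],
  ["attributeValueEnd", '"'],
  ["tagEnd", ">"],
]

// Accumulate attribute values under the most recent attribute name.
const attrs = {}
let currentName = null
for (const [type, chunk] of tokens) {
  if (type === "attributeName") {
    currentName = chunk
    attrs[currentName] = ""
  } else if (type === "attributeValueData" && currentName !== null) {
    attrs[currentName] += chunk
  }
}

console.log(attrs) // { href: "/home" }
```

Note that attribute values may span multiple tokens (e.g. when they contain character references), which is why the sketch appends rather than assigns.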

Limitations

  • Doctype
    The lexer still interprets doctypes as 'bogus comments'.

  • CDATA
    The lexer interprets CDATA sections as 'bogus comments'.
    (CDATA is only allowed in foreign content - svg and mathml.)

  • Script tags
The lexer interprets script tags as rawtext elements. This has no dire consequences, other than that HTML comment markers that may surround the script content are not tokenized as comments.

License

The source code for this project is licensed under the Mozilla Public License Version 2.0, copyright Alwin Blok 2016–2018, 2020–2021, 2023.