
coffee-lex

v9.3.2


Stupid lexer for CoffeeScript.

Downloads

42,614

Install

# via yarn
$ yarn add coffee-lex
# via npm
$ npm install coffee-lex

Usage

The main lex function simply returns a list of tokens:

import lex, { SourceType } from 'coffee-lex'

const source = 'a?(b: c)'
const tokens = lex(source)

// Print tokens along with their source.
tokens.forEach((token) =>
  console.log(
    SourceType[token.type],
    JSON.stringify(source.slice(token.start, token.end)),
    `${token.start}→${token.end}`
  )
)
// IDENTIFIER "a" 0→1
// EXISTENCE "?" 1→2
// CALL_START "(" 2→3
// IDENTIFIER "b" 3→4
// COLON ":" 4→5
// IDENTIFIER "c" 6→7
// CALL_END ")" 7→8

You can also get finer control over what you'd like to lex by using the stream function:

import { stream, SourceType } from 'coffee-lex';

const source = 'a?(b: c)';
const step = stream(source);
let location;

do {
  location = step();
  console.log(location.index, SourceType[location.type]);
} while (location.type !== SourceType.EOF);
// 0 IDENTIFIER
// 1 EXISTENCE
// 2 CALL_START
// 3 IDENTIFIER
// 4 COLON
// 5 SPACE
// 6 IDENTIFIER
// 7 CALL_END
// 8 EOF

This function not only lets you control how far into the source you'd like to go, but it also gives you information about source code that doesn't become part of a token, such as spaces.

Note that the input source code should have only UNIX line endings (LF). If you want to process a file with Windows line endings (CRLF), you should convert to UNIX line endings first, then use coffee-lex, then convert back if necessary.

Why?

The official CoffeeScript lexer does a lot of pre-processing, even with rewrite: false. That makes it good for building an AST, but bad for identifying parts of the source code that aren't part of the final AST, such as the location of operators. One good example of this is string interpolation. The official lexer turns it into a series of string tokens separated by virtual + tokens, but those virtual tokens have no counterpart in the original source code. Here's what the official lexer generates for "a#{b}c":

;[
  [
    'STRING_START',
    '(',
    { first_line: 0, first_column: 0, last_line: 0, last_column: 0 },
    (origin: ['STRING', null, [Object]]),
  ],
  [
    'STRING',
    '"a"',
    { first_line: 0, first_column: 0, last_line: 0, last_column: 1 },
  ],
  ['+', '+', { first_line: 0, first_column: 3, last_line: 0, last_column: 3 }],
  ['(', '(', { first_line: 0, first_column: 3, last_line: 0, last_column: 3 }],
  [
    'IDENTIFIER',
    'b',
    { first_line: 0, first_column: 4, last_line: 0, last_column: 4 },
    (variable: true),
  ],
  [
    ')',
    ')',
    { first_line: 0, first_column: 5, last_line: 0, last_column: 5 },
    (origin: ['', 'end of interpolation', [Object]]),
  ],
  ['+', '+', { first_line: 0, first_column: 6, last_line: 0, last_column: 6 }],
  [
    'STRING',
    '"c"',
    { first_line: 0, first_column: 6, last_line: 0, last_column: 7 },
  ],
  [
    'STRING_END',
    ')',
    { first_line: 0, first_column: 7, last_line: 0, last_column: 7 },
  ],
  [
    'TERMINATOR',
    '\n',
    { first_line: 0, first_column: 8, last_line: 0, last_column: 8 },
  ],
]

Here's what coffee-lex generates for the same source:

[ SourceToken { type: DSTRING_START, start: 0, end: 1 },
  SourceToken { type: STRING_CONTENT, start: 1, end: 2 },
  SourceToken { type: INTERPOLATION_START, start: 2, end: 4 },
  SourceToken { type: IDENTIFIER, start: 4, end: 5 },
  SourceToken { type: INTERPOLATION_END, start: 5, end: 6 },
  SourceToken { type: STRING_CONTENT, start: 6, end: 7 },
  SourceToken { type: DSTRING_END, start: 7, end: 8 } ]

License

MIT