parser-transform
Streaming and Async Lexer and Parser
Usage
npm install parser-transform
For a complete example you might want to look at the test suite, which implements a fully-streaming JSON parser: a JSON tokenizer that can be piped into a streaming JSON parser with a selector. Notice that the parser acts as a Stream Transform and does not even use the async capabilities offered by this package.
For a simpler example, the same tokenizer can be piped into a collecting JSON parser which builds the data in memory. Compare it with the streaming parser to see how to inject operations in the middle of a language while streaming.
Streaming Tokenizer
The lexer/tokenizer is implemented as a Node.js Stream transform that translates an incoming text stream into a stream of lexical tokens.
For example, assuming lexer is a String containing the textual description of your tokenizer:
{LexerParser,LexerTransform} = require('parser-transform')
text_stream.setEncoding('utf8')
lexical_stream = text_stream.pipe( new LexerTransform(LexerParser.parse(lexer)) )
LexerParser.parse
LexerParser.parse(text) converts the textual description into a Map of Deterministic Finite Automata (DFAs), one DFA per start condition described in the text.
The lexical parser supports the usual two sections of a lex file, separated by %%:
- defining names for regular expressions:
digit [0-9]
- generating lexical tokens:
true return 'TRUE'
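Putting the two sections together, a minimal lexer description might look as follows. This is only a sketch: the patterns and token names are illustrative, and the {digit} reference assumes lex's usual substitution syntax for named definitions.

digit [0-9]

%%

true return 'TRUE'
false return 'FALSE'
{digit}+ return 'NUMBER'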
However, since the tokenizer handles streams, there are operations (such as lookahead) that it cannot perform. (Other tokenizer projects for Node.js rely on Regular Expressions, and conversely cannot handle streams of arbitrary length.)
The current operations are:
- * (zero or more)
- + (one or more)
- ? (zero or one)
- | (alternative)
- concatenation
- start conditions <…>
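For instance, these operations can be combined inside a single pattern (a sketch; the patterns and token names are illustrative):

-?{digit}+ return 'NUMBER'
true|false return 'BOOLEAN'

The first rule concatenates an optional minus sign with one or more digits; the second rule uses alternation.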
The code that generates lexical tokens can access the current text as this.text (and modify it before it is passed down to the stream). It may also use this.begin(start_condition) and this.pop() to switch in and out of start conditions. Finally, it may also use this.yy (a regular Object) to store data that must be persisted between actions.
At this time, the lexer only supports single-line actions. (This is a limitation of the parser that handles the actions, not of the lexer itself.)
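As a sketch of how these facilities combine (the start condition name, patterns, and token names are all hypothetical, and the quoting scheme is deliberately simplistic):

%%

begin this.begin('quoted')
<quoted>end this.pop()
<quoted>'[a-z]*' this.text = this.text.slice(1, -1); return 'WORD'
[a-z]+ this.yy.words = (this.yy.words || 0) + 1; return 'WORD'

The first two rules switch in and out of the quoted start condition; the third strips the surrounding quotes from this.text before it is passed down the stream; the last one counts words in this.yy. Note that each action fits on a single line.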
new LexerTransform
new LexerTransform(dfas) creates a Stream transform based on a Map of DFAs.
The lexical tokens generated contain {token,text,line,column,eof}; the token is either the value returned by the code that generated the token, ERROR in case of error (for example if the input does not match any pattern in the current start condition), or null at the end of the stream (in which case eof is set to true).
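For instance, assuming the transform operates in object mode, the token stream produced by the earlier example might be consumed as follows (a sketch):

lexical_stream.on 'data', ({token,text,line,column,eof}) ->
  if token is 'ERROR'
    console.error "lexical error at #{line}:#{column}"
  else if token?
    console.log "#{token} (#{text}) at #{line}:#{column}"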
Streaming Parser
The parser is implemented as a Node.js Stream transform that receives an incoming stream of lexical tokens. You can send messages out on the stream by using emit(data) inside the parser's actions, or perform arbitrary actions, including asynchronous ones; the stream is paused while asynchronous actions are performed.
For example, assuming parser is a String containing the textual description of your parser:
{Grammar,ParserTransform} = require('parser-transform')
lexical_stream.pipe( new ParserTransform(Grammar.fromString(parser,{mode:'LALR1'},'bnf')) )
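Such a description uses syntax-cli's BNF format. As a sketch (the non-terminal and token names are illustrative), a parser that emits each boolean value on the outgoing stream as soon as it is reduced might read:

%%

document
  : document element
  | element
  ;

element
  : TRUE { emit(true) }
  | FALSE { emit(false) }
  ;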
Grammar
Grammar is imported as-is from syntax-cli and supports LL and LR parsing, precedence, etc. Notice that the lex section of the grammar is not used.
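For example, operator precedence can be declared using syntax-cli's yacc-like notation (a sketch; the tokens and actions are hypothetical):

%left PLUS
%left TIMES

%%

e
  : e PLUS e { $$ = $1 + $3 }
  | e TIMES e { $$ = $1 * $3 }
  | NUMBER { $$ = Number($1) }
  ;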