
twink-tokenizer-js-tt

v1.0.6

Published

An app for text tokenization.

Readme

Welcome to Twink Tokenizer

Twink Tokenizer is a small tokenization tool that automatically tags each token with its type using regular expressions. It tokenizes sentences, and users are free to create their own token packages. Some of its top features are outlined below:

  1. Support for English, Swedish, and many more languages.

  2. Intelligent tokenization of sentences containing words, numbers, operators and special characters, using regular expressions provided by the user.

  3. Automatic detection & tagging of different types of tokens based on their features:

    • These include word, dot, number, float, integer and operator tokens.
    • User-defined token types (a name/type plus a regular expression).
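The regex-driven, type-tagging approach described above can be sketched in plain JavaScript. This is only an illustration of the general technique, not the package's actual implementation; the function and rule names below are made up for the example:

```javascript
// Illustrative sketch of regex-based tokenization with type tagging.
// Each rule pairs a token type with a ^-anchored regular expression;
// the longest match wins, and whitespace between tokens is skipped.
function tokenize(input, rules) {
  const tokens = []
  let rest = input
  while (true) {
    rest = rest.trimStart() // whitespace is ignored between tokens
    if (rest.length === 0) break
    let best = null
    for (const { type, regex } of rules) {
      const match = rest.match(regex)
      if (match && (!best || match[0].length > best.value.length)) {
        best = { type, value: match[0] }
      }
    }
    if (!best) throw new Error(`No rule matches: "${rest}"`)
    tokens.push(best)
    rest = rest.slice(best.value.length)
  }
  tokens.push({ type: 'EOF', value: '' })
  return tokens
}

const rules = [
  { type: 'WORD', regex: /^[a-zA-ZäöåÄÖÅ]+/ },
  { type: 'DOT', regex: /^\./ },
]
console.log(tokenize('Meningen består av ord.', rules))
// [ {type: 'WORD', value: 'Meningen'}, {type: 'WORD', value: 'består'},
//   {type: 'WORD', value: 'av'}, {type: 'WORD', value: 'ord'},
//   {type: 'DOT', value: '.'}, {type: 'EOF', value: ''} ]
```

The longest-match rule is what lets a float rule win over an integer rule on input like "3.14" when both match a prefix.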

Installation

  • Use npm to install: npm i twink-tokenizer-js-tt

  • Or clone the project from GitHub and try it yourself: https://github.com/Thanhtran34/Twink-Tokenizerjs

Getting started

import { Tokenizer, TokenTyper, Grammar } from 'twink-tokenizer-js-tt'

// Create your own token package by giving it a name and your sentence
const tokenizer = new Tokenizer('WORDANDDOTGRAMMAR', 'Meningen består av ord.')

// Add token types to your tokenizer with regular expressions. Remember to begin each
// regular expression with ^; you don't need the //g flag. Whitespace is ignored by
// this tokenizer to keep your token package simpler.
tokenizer.setRule(new TokenTyper('WORD', '^[a-zA-Z|äöåÄÖÅ]+$'))
tokenizer.setRule(new TokenTyper('DOT', '^\\.$'))

// Your own token package after matching
const tokens = tokenizer.runTokenizer()
console.log(tokens)

// Result of token package
[
  {type: 'WORD', value: 'Meningen', line: 1, col: 1},
  {type: 'WORD', value: 'består', line: 1, col: 9},
  {type: 'WORD', value: 'av', line: 1, col: 15},
  {type: 'WORD', value: 'ord', line: 1, col: 17},
  {type: 'DOT', value: '.', line: 1, col: 20},
  {type: 'EOF', value: '', line: 1, col: 21}
]

// Sometimes you only want a single token; use the getActiveToken method.
tokenizer.getActiveToken(tokens)

const currentToken = tokens.curr()
// {type: 'WORD', value: 'Meningen', line: 1, col: 1}

const nextToken = tokens.next()
// {type: 'WORD', value: 'består', line: 1, col: 9}

const previousToken = tokens.prev()
// {type: 'WORD', value: 'Meningen', line: 1, col: 1}

Tokenizing a text document with Twink

import fs from 'fs'
import { Tokenizer, TokenTyper } from 'twink-tokenizer-js-tt'

const fileName = 'myTextFile.txt' // path to your text document
const fileData = fs.readFileSync(fileName, 'utf8')
const tokenizer = new Tokenizer('WORDANDDOTGRAMMAR', fileData)
tokenizer.setRule(new TokenTyper('WORD', '^[a-zA-Z|äöåÄÖÅ]+$'))
tokenizer.setRule(new TokenTyper('DOT', '^\\.$'))

const tokens = tokenizer.runTokenizer()
console.log(tokens)

// Result of token package
[
  {type: 'WORD', value: 'Meningen', line: 1, col: 1},
  {type: 'WORD', value: 'består', line: 1, col: 9},
  {type: 'WORD', value: 'av', line: 1, col: 15},
  {type: 'WORD', value: 'ord', line: 1, col: 17},
  {type: 'DOT', value: '.', line: 1, col: 20},
  {type: 'EOF', value: '', line: 1, col: 21}
]

About Twink

This small project was built for a school assignment, so it still needs improvement. If you're interested, you are warmly welcome to help upgrade it. The latest update fixes bugs and makes Twink strong enough to tokenize a whole document instead of only a single sentence.

Copyright & License

Twink Tokenizer is licensed under the terms of the MIT License. You are free to use it, so give it a try.