
tokenx v0.4.0

GPT token estimation and context size utilities without a full tokenizer

tokenx

GPT token count and context size utilities for when approximations are good enough. For advanced use cases, please use a full tokenizer like gpt-tokenizer. This library is intended for quick estimations and avoids the overhead of a full tokenizer, e.g. when you want to keep your bundle size small.

Benchmarks

The following table shows the accuracy of the token count approximation for different input texts:

| Description | Actual GPT Token Count | Estimated Token Count | Token Count Deviation |
| --- | --- | --- | --- |
| Short English text | 10 | 11 | 10.00% |
| German text with umlauts | 56 | 49 | 12.50% |
| Metamorphosis by Franz Kafka (English) | 31891 | 33928 | 6.39% |
| Die Verwandlung by Franz Kafka (German) | 40620 | 34908 | 14.06% |
| 道德經 by Laozi (Chinese) | 14386 | 11919 | 17.15% |
| TypeScript ES5 Type Declarations (~ 4000 loc) | 47890 | 50464 | 5.37% |
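The deviation column above follows from comparing the estimate against the actual count relative to the actual count. A minimal sketch of that arithmetic (the helper `tokenCountDeviation` is illustrative, not part of the library):

```javascript
// Deviation between an actual and an estimated token count,
// as a fraction of the actual count: |actual − estimated| / actual
function tokenCountDeviation(actual, estimated) {
  return Math.abs(actual - estimated) / actual
}

// First table row: 10 actual vs. 11 estimated
console.log(`${(tokenCountDeviation(10, 11) * 100).toFixed(2)}%`) // "10.00%"
```

The same formula reproduces the other rows, e.g. 56 actual vs. 49 estimated gives 12.50%.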

Features

  • 🌁 Estimate token count without a full tokenizer
  • 📐 Supports multiple model context sizes
  • 🗣️ Supports accented characters, like German umlauts or French accents
  • 🪽 Zero dependencies

Installation

Run one of the following commands to add tokenx to your project.

# npm
npm install tokenx

# pnpm
pnpm add tokenx

# yarn
yarn add tokenx

Usage

import {
  approximateMaxTokenSize,
  approximateTokenSize,
  isWithinTokenLimit
} from 'tokenx'

const prompt = 'Your prompt goes here.'
const inputText = 'Your text goes here.'

// Estimate the number of tokens in the input text
const estimatedTokens = approximateTokenSize(inputText)
console.log(`Estimated token count: ${estimatedTokens}`)

// Calculate the maximum number of tokens allowed for a given model
const modelName = 'gpt-3.5-turbo'
const maxResponseTokens = 1000
const availableTokens = approximateMaxTokenSize({
  prompt,
  modelName,
  maxTokensInResponse: maxResponseTokens
})
console.log(`Available tokens for model ${modelName}: ${availableTokens}`)

// Check if the input text is within a specific token limit
const tokenLimit = 1024
const withinLimit = isWithinTokenLimit(inputText, tokenLimit)
console.log(`Is within token limit: ${withinLimit}`)

API

approximateTokenSize

Estimates the number of tokens in a given input string based on common English patterns and tokenization heuristics. Works well for other languages too, such as German.

Usage:

const estimatedTokens = approximateTokenSize('Hello, world!')

Type Declaration:

function approximateTokenSize(input: string): number

approximateMaxTokenSize

Calculates the maximum number of tokens that can be included in a response given the prompt length and model's maximum context size.

Usage:

const maxTokens = approximateMaxTokenSize({
  prompt: 'Sample prompt',
  modelName: 'text-davinci-003',
  maxTokensInResponse: 500
})

Type Declaration:

function approximateMaxTokenSize({ prompt, modelName, maxTokensInResponse }: {
  prompt: string
  modelName: ModelName
  /** The maximum number of tokens to generate in the reply. 1000 tokens are roughly 750 English words. */
  maxTokensInResponse?: number
}): number
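Conceptually, the available budget is the model's context window minus the prompt's estimated tokens and the space reserved for the response. A minimal sketch of that arithmetic, assuming a context-size lookup and a rough ~4 characters-per-token heuristic (the `CONTEXT_SIZES` value and `sketchMaxTokenSize` helper are illustrative assumptions, not the library's implementation):

```javascript
// Assumed context window for illustration; actual values vary by model.
const CONTEXT_SIZES = { 'gpt-3.5-turbo': 4096 }

function sketchMaxTokenSize({ prompt, modelName, maxTokensInResponse = 0 }) {
  // Rough estimate: ~4 characters per token for English text
  const promptTokens = Math.ceil(prompt.length / 4)
  // Whatever remains after the prompt and the reserved response budget
  return CONTEXT_SIZES[modelName] - promptTokens - maxTokensInResponse
}
```

The library's own estimator is more nuanced than a flat characters-per-token ratio, but the subtraction is the core idea.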

isWithinTokenLimit

Checks if the estimated token count of the input is within a specified token limit.

Usage:

const withinLimit = isWithinTokenLimit('Check this text against a limit', 100)

Type Declaration:

function isWithinTokenLimit(input: string, tokenLimit: number): boolean

License

MIT License © 2023-PRESENT Johann Schopplich