gpt-tokenizer
gpt-tokenizer is a highly optimized Token Byte Pair Encoder/Decoder for all OpenAI's models (including those used by GPT-2, GPT-3, GPT-3.5 and GPT-4). It's written in TypeScript, and is fully compatible with all modern JavaScript environments.
This package is a port of OpenAI's tiktoken, with some additional features sprinkled on top.
OpenAI's GPT models utilize byte pair encoding to transform text into a sequence of integers before feeding them into the model.
As of 2023, it is the most feature-complete, open-source GPT tokenizer on NPM. It implements some unique features, such as:
- Support for easily tokenizing chats thanks to the encodeChat function
- Support for all current OpenAI models (available encodings: r50k_base, p50k_base, p50k_edit and cl100k_base)
- Generator function versions of both the decoder and encoder functions
- Provides the ability to decode an asynchronous stream of data (using decodeAsyncGenerator and decodeGenerator with any iterable input)
- No global cache (no accidental memory leaks, as with the original GPT-3-Encoder implementation)
- Includes a highly performant isWithinTokenLimit function to assess the token limit without encoding the entire text/chat
- Improves overall performance by eliminating transitive arrays
- Type-safe (written in TypeScript)
- Works in the browser out-of-the-box
Thanks to @dmitry-brazhenko's SharpToken, whose code served as a reference for the port.
Historical note: This package started off as a fork of latitudegames/GPT-3-Encoder, but version 2.0 was rewritten from scratch.
Installation
As an NPM package
npm install gpt-tokenizer
As a UMD module
<script src="https://unpkg.com/gpt-tokenizer"></script>
<script>
// the package is now available as a global:
const { encode, decode } = GPTTokenizer_cl100k_base
</script>
If you wish to use a custom encoding, fetch the relevant script.
- https://unpkg.com/gpt-tokenizer/dist/cl100k_base.js
- https://unpkg.com/gpt-tokenizer/dist/p50k_base.js
- https://unpkg.com/gpt-tokenizer/dist/p50k_edit.js
- https://unpkg.com/gpt-tokenizer/dist/r50k_base.js
The global name is a concatenation: GPTTokenizer_${encoding}.
Refer to the supported models and their encodings section for more information.
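For example, here is a minimal sketch of loading the p50k_base build and using its global (the global name follows the naming rule above; the sample text is illustrative):
<script src="https://unpkg.com/gpt-tokenizer/dist/p50k_base.js"></script>
<script>
// the p50k_base build registers itself as GPTTokenizer_p50k_base
const { encode, decode } = GPTTokenizer_p50k_base
const tokens = encode('Hello, world!')
console.log(decode(tokens)) // 'Hello, world!'
</script>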
Playground
The playground is published under a memorable URL: https://gpt-tokenizer.dev/
You can play with the package in the browser using the Playground.
The playground mimics the official OpenAI Tokenizer.
Usage
import {
encode,
encodeChat,
decode,
isWithinTokenLimit,
encodeGenerator,
decodeGenerator,
decodeAsyncGenerator,
} from 'gpt-tokenizer'
const text = 'Hello, world!'
const tokenLimit = 10
// Encode text into tokens
const tokens = encode(text)
// Decode tokens back into text
const decodedText = decode(tokens)
// Check if text is within the token limit
// returns false if the limit is exceeded, otherwise returns the actual number of tokens (truthy value)
const withinTokenLimit = isWithinTokenLimit(text, tokenLimit)
// Example chat:
const chat = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'assistant', content: 'gpt-tokenizer is awesome.' },
]
// Encode chat into tokens
const chatTokens = encodeChat(chat)
// Check if chat is within the token limit
const chatWithinTokenLimit = isWithinTokenLimit(chat, tokenLimit)
// Encode text using generator
for (const tokenChunk of encodeGenerator(text)) {
console.log(tokenChunk)
}
// Decode tokens using generator
for (const textChunk of decodeGenerator(tokens)) {
console.log(textChunk)
}
// Decode tokens using async generator
// (assuming `asyncTokens` is an AsyncIterableIterator<number>)
for await (const textChunk of decodeAsyncGenerator(asyncTokens)) {
console.log(textChunk)
}
By default, importing from gpt-tokenizer uses the cl100k_base encoding, which is used by gpt-3.5-turbo and gpt-4.
To get a tokenizer for a different model, import it directly, for example:
import {
encode,
decode,
isWithinTokenLimit,
} from 'gpt-tokenizer/model/text-davinci-003'
If you're dealing with a resolver that doesn't support package.json exports resolution, you might need to import from the respective cjs or esm directory, e.g.:
import {
encode,
decode,
isWithinTokenLimit,
} from 'gpt-tokenizer/cjs/model/text-davinci-003'
Supported models and their encodings
chat:
- gpt-4 (cl100k_base)
- gpt-4-32k (cl100k_base)
- gpt-4-0314 (cl100k_base)
- gpt-4-32k-0314 (cl100k_base)
- gpt-3.5-turbo (cl100k_base)
- gpt-3.5-turbo-0301 (cl100k_base)
text-only:
- text-davinci-003 (p50k_base)
- text-davinci-002 (p50k_base)
- text-davinci-001 (r50k_base)
- text-curie-001 (r50k_base)
- text-babbage-001 (r50k_base)
- text-ada-001 (r50k_base)
- davinci (r50k_base)
- curie (r50k_base)
- babbage (r50k_base)
- ada (r50k_base)
code:
- code-davinci-002 (p50k_base)
- code-davinci-001 (p50k_base)
- code-cushman-002 (p50k_base)
- code-cushman-001 (p50k_base)
- davinci-codex (p50k_base)
- cushman-codex (p50k_base)
edit:
- text-davinci-edit-001 (p50k_edit)
- code-davinci-edit-001 (p50k_edit)
embeddings:
- text-embedding-ada-002 (cl100k_base)
old embeddings:
- text-similarity-davinci-001 (r50k_base)
- text-similarity-curie-001 (r50k_base)
- text-similarity-babbage-001 (r50k_base)
- text-similarity-ada-001 (r50k_base)
- text-search-davinci-doc-001 (r50k_base)
- text-search-curie-doc-001 (r50k_base)
- text-search-babbage-doc-001 (r50k_base)
- text-search-ada-doc-001 (r50k_base)
- code-search-babbage-code-001 (r50k_base)
- code-search-ada-code-001 (r50k_base)
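For example, to get a tokenizer matching one of the chat models above, import it by model name using the gpt-tokenizer/model/... pattern shown earlier (a minimal sketch; the sample text and limit are illustrative):
import { encode, isWithinTokenLimit } from 'gpt-tokenizer/model/gpt-3.5-turbo'

// gpt-3.5-turbo is listed above as using the cl100k_base encoding
const tokens = encode('Hello, world!')
const withinLimit = isWithinTokenLimit('Hello, world!', 4096)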
API
encode(text: string): number[]
Encodes the given text into a sequence of tokens. Use this method when you need to transform a piece of text into the token format that the GPT models can process.
Example:
import { encode } from 'gpt-tokenizer'
const text = 'Hello, world!'
const tokens = encode(text)
decode(tokens: number[]): string
Decodes a sequence of tokens back into text. Use this method when you want to convert the output tokens from GPT models back into human-readable text.
Example:
import { decode } from 'gpt-tokenizer'
const tokens = [18435, 198, 23132, 328]
const text = decode(tokens)
isWithinTokenLimit(text: string, tokenLimit: number): false | number
Checks if the text is within the token limit. Returns false if the limit is exceeded, otherwise returns the number of tokens. Use this method to quickly check if a given text is within the token limit imposed by GPT models, without encoding the entire text.
Example:
import { isWithinTokenLimit } from 'gpt-tokenizer'
const text = 'Hello, world!'
const tokenLimit = 10
const withinTokenLimit = isWithinTokenLimit(text, tokenLimit)
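Because the return value is either false or the token count, you can branch on it directly (an illustrative continuation of the example above):
if (withinTokenLimit === false) {
// limit exceeded: truncate, chunk, or reject the input here
} else {
console.log(`Text uses ${withinTokenLimit} tokens`)
}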
encodeChat(chat: ChatMessage[], model?: ModelName): number[]
Encodes the given chat into a sequence of tokens.
If you didn't import the model version directly, or if model wasn't provided during initialization, it must be provided here to correctly tokenize the chat for a given model. Use this method when you need to transform a chat into the token format that the GPT models can process.
Example:
import { encodeChat } from 'gpt-tokenizer'
const chat = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'assistant', content: 'gpt-tokenizer is awesome.' },
]
const tokens = encodeChat(chat)
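When importing from the package root, you can also pass the model name explicitly as the second argument (a sketch; the model name is one of the supported models listed above):
import { encodeChat } from 'gpt-tokenizer'

const chat = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hello!' },
]

// the model name selects the chat tokenization rules for that model
const tokens = encodeChat(chat, 'gpt-3.5-turbo')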
encodeGenerator(text: string): Generator<number[], void, undefined>
Encodes the given text using a generator, yielding chunks of tokens. Use this method when you want to encode text in chunks, which can be useful for processing large texts or streaming data.
Example:
import { encodeGenerator } from 'gpt-tokenizer'
const text = 'Hello, world!'
const tokens = []
for (const tokenChunk of encodeGenerator(text)) {
tokens.push(...tokenChunk)
}
encodeChatGenerator(chat: Iterator<ChatMessage>, model?: ModelName): Generator<number[], void, undefined>
Same as encodeChat, but uses a generator as output, and may use any iterator as the input chat.
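Example (a minimal sketch; chat.values() passes the messages as an iterator, and the model name is illustrative):
import { encodeChatGenerator } from 'gpt-tokenizer'

const chat = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hi there!' },
]

const tokens = []
for (const tokenChunk of encodeChatGenerator(chat.values(), 'gpt-3.5-turbo')) {
tokens.push(...tokenChunk)
}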
decodeGenerator(tokens: Iterable<number>): Generator<string, void, undefined>
Decodes a sequence of tokens using a generator, yielding chunks of decoded text. Use this method when you want to decode tokens in chunks, which can be useful for processing large outputs or streaming data.
Example:
import { decodeGenerator } from 'gpt-tokenizer'
const tokens = [18435, 198, 23132, 328]
let decodedText = ''
for (const textChunk of decodeGenerator(tokens)) {
decodedText += textChunk
}
decodeAsyncGenerator(tokens: AsyncIterable<number>): AsyncGenerator<string, void, undefined>
Decodes a sequence of tokens asynchronously using a generator, yielding chunks of decoded text. Use this method when you want to decode tokens in chunks asynchronously, which can be useful for processing large outputs or streaming data in an asynchronous context.
Example:
import { decodeAsyncGenerator } from 'gpt-tokenizer'
async function processTokens(asyncTokensIterator) {
let decodedText = ''
for await (const textChunk of decodeAsyncGenerator(asyncTokensIterator)) {
decodedText += textChunk
}
}
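For illustration, the async iterable can be any source that yields token numbers over time, for example a hypothetical async generator (the token values are taken from the example above):
// a hypothetical async token source, e.g. fed by a network stream
async function* tokenStream() {
yield 18435
yield 198
yield 23132
yield 328
}

processTokens(tokenStream())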
Special tokens
There are a few special tokens that are used by the GPT models. Not all models support all of these tokens.
Custom Allowed Sets
gpt-tokenizer allows you to specify custom sets of allowed special tokens when encoding text. To do this, pass a Set containing the allowed special tokens as a parameter to the encode function:
import {
EndOfPrompt,
EndOfText,
FimMiddle,
FimPrefix,
FimSuffix,
ImStart,
ImEnd,
ImSep,
encode,
} from 'gpt-tokenizer'
const inputText = `Some Text ${EndOfPrompt}`
const allowedSpecialTokens = new Set([EndOfPrompt])
const encoded = encode(inputText, allowedSpecialTokens)
const expectedEncoded = [8538, 2991, 220, 100276]
expect(encoded).toEqual(expectedEncoded)
Custom Disallowed Sets
Similarly, you can specify custom sets of disallowed special tokens when encoding text. Pass a Set containing the disallowed special tokens as a parameter to the encode function:
import { encode } from 'gpt-tokenizer'
const inputText = `Some Text`
const disallowedSpecial = new Set(['Some'])
// throws an error:
const encoded = encode(inputText, undefined, disallowedSpecial)
In this example, an Error is thrown because the input text contains a disallowed special token.
Testing and Validation
gpt-tokenizer includes a set of test cases in the TestPlans.txt file to ensure its compatibility with OpenAI's Python tiktoken library. These test cases validate the functionality and behavior of gpt-tokenizer, providing a reliable reference for developers.
Running the unit tests and verifying the test cases helps maintain consistency between the library and the original Python implementation.
License
MIT
Contributing
Contributions are welcome! Please open a pull request or an issue to discuss your bug reports, or use the discussions feature for ideas or any other inquiries.
Hope you find gpt-tokenizer useful in your projects!