
chatstreamjs

v1.1.0


Client Side Library for ChatStream


ChatStreamJs

English | 日本語

ChatStreamJs is a web front-end client library for LLM web servers built on ChatStream.

It handles streaming chats in which a pre-trained large language model generates text token by token and sends it to the client as a web stream.


Install

npm install chatstreamjs

Usage

Streaming text generation

import {ChatStreamClient, StreamStatus} from 'chatstreamjs';

// Generate a ChatStream client
const client = new ChatStreamClient({
    endpoint: `http://localhost:3000/chat_stream`, // The endpoint of the ChatStream server
});

// Send a request (user's input prompt) to the ChatStream server
client.send(
    {
        user_input: 'Hello', // The user's input text
        onResponse: (data) => { // Response from the ChatStream server (called repeatedly for each token generation)

            const {
                // [response_text] Text returned from the server
                response_text,
                // [pos]
                // Position of this callback within the generation:
                // "begin": the first token of this generation
                // "mid": an intermediate token
                // "end": generation has finished (a completion notice, so response_text is null)
                pos,
                // [status]
                // Status of the streaming request
                // StreamStatus.OK: the streaming request is being processed successfully
                // StreamStatus.ABORTED: the communication was interrupted (by calling the abort method yourself)
                // StreamStatus.SERVER_ERROR: the server returned a 5xx (server error) HTTP status code
                // StreamStatus.CLIENT_ERROR: the server returned a 4xx (client error) HTTP status code
                // StreamStatus.NETWORK_ERROR: the network was disconnected during communication
                // StreamStatus.FETCH_ERROR: another unexpected communication error occurred
                // StreamStatus.REQUEST_ALREADY_STARTED: send was called again while a streaming request was in progress
                status,
                // [err]
                // Error details during generation
                // err.json.error: error message from the ChatStream server
                // err.json.detail: detailed error message from the ChatStream server
                err,
                // [statusCode]
                // HTTP status code
                // 429 ... the server was overloaded (too many concurrent requests) and could not handle this one; set together with StreamStatus.CLIENT_ERROR
                // 500 ... an error occurred inside the ChatStream server; set together with StreamStatus.SERVER_ERROR
                statusCode,

            } = data;

            if (response_text) {
                console.log(`ASSISTANT: ${response_text}`);
            }

            if (pos === "end") {
                // Generation for this turn's request has finished.
                // It may not have ended normally, so check `status` as well.
            }
        }
    });
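
If the UI needs the full response rather than individual fragments, the onResponse events can be folded into an accumulator. Below is a minimal sketch, assuming each callback's response_text carries the newly generated fragment (if your server sends the cumulative text instead, replace the concatenation with plain assignment); createResponseAccumulator is a hypothetical helper, not part of chatstreamjs:

```javascript
// Hypothetical helper: collects streamed fragments into one response string
// and reports the final status once generation ends.
function createResponseAccumulator(onDone) {
    let text = '';
    return function onResponse(data) {
        const { response_text, pos, status } = data;
        if (response_text) {
            text += response_text; // assumes response_text is the new fragment
        }
        if (pos === 'end') {
            // Completion notice: hand the accumulated text and final status back.
            onDone(text, status);
        }
    };
}
```

You would then pass the returned function as the onResponse option of send, e.g. `client.send({ user_input: 'Hello', onResponse: createResponseAccumulator((text, status) => console.log(text, status)) })`.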

The return value of #send is a Promise, but awaiting it is not recommended. Responses from the ChatStream server are delivered through the onResponse callback, so there is little point in awaiting the Promise.

Also, if you do await it and then call abort after the request, the code following the await may remain blocked.
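
In practice this means calling send fire-and-forget and treating pos === "end" as the completion signal. A sketch under that assumption (renderToken and the generating flag are hypothetical UI pieces, not part of chatstreamjs):

```javascript
// Hypothetical UI hook -- replace with your own rendering logic.
function renderToken(fragment) {
    console.log(fragment);
}

let generating = false;

// Fire-and-forget: send is not awaited; completion arrives via onResponse.
function startChat(client, prompt) {
    generating = true;
    client.send({
        user_input: prompt,
        onResponse: (data) => {
            if (data.response_text) {
                renderToken(data.response_text);
            }
            if (data.pos === 'end') {
                generating = false; // safe to re-enable the input box here
            }
        },
    });
}
```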

Aborting generation during sentence generation

The abort method explicitly disconnects the current communication and stops streaming text generation.

client.abort();

Although this looks like forcing generation to stop by cutting the connection, it is a perfectly legitimate operation: the ChatStream server handles client disconnections gracefully.
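
Since an aborted request still ends with a final onResponse call carrying pos "end" and StreamStatus.ABORTED (per the status list above), the UI can distinguish a user-initiated stop from a real failure. A sketch with a hypothetical classifier (StreamStatus is passed in here to keep the sketch dependency-free; in real code, import it from 'chatstreamjs'):

```javascript
// Hypothetical helper: classify the final onResponse event so the UI can
// tell a user-initiated stop apart from an actual failure.
function describeEnd(data, StreamStatus) {
    if (data.pos !== 'end') {
        return null; // generation is still in progress
    }
    if (data.status === StreamStatus.OK) {
        return 'completed';
    }
    if (data.status === StreamStatus.ABORTED) {
        return 'stopped-by-user'; // expected outcome of client.abort(), not an error
    }
    return 'failed';
}
```

A Stop button would then simply call `client.abort()` and let onResponse report the outcome through this classifier.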

Regenerating sentences

You can have the AI assistant regenerate its response by adding regenerate: true to the parameters of the send method.

client.send(
    {
        user_input: null,
        regenerate: true,
        onResponse: (data) => {
        }
    });

In UI implementations, a common pattern is to call abort to cut the communication and then send a new request with regenerate: true to generate the response again.
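
That pattern can be sketched as a single handler for a hypothetical "Regenerate" button; the helper below assumes abort() is harmless when no request is in flight:

```javascript
// Hypothetical "Regenerate" button handler: stop any in-flight generation,
// then ask the server to regenerate the assistant's previous response.
// Assumption: abort() is a no-op when nothing is currently streaming.
function regenerateLastResponse(client, onResponse) {
    client.abort();
    client.send({
        user_input: null,   // no new user prompt
        regenerate: true,   // regenerate the previous assistant turn
        onResponse,
    });
}
```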

Specifying fetch options

Since fetch is used internally to communicate with the ChatStream server, you can pass standard fetch options through fetchOpts as-is.

fetch options in constructor

const client = new ChatStreamClient({
        endpoint: `http://localhost:3000/chat_stream`,
        fetchOpts: {
            credentials: 'include',
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'X-Original-Header': 'original value',
            }
        },
    }
);

fetch options in send method

You can also change the headers per request by specifying them in each call to the send method.

In this case, the headers are added to those specified in the constructor.

client.send(
    {
        user_input: null,
        fetchOpts:{
            headers: {
                'Content-Type': 'application/json',
                'X-Original-Header-Now': 'original value now',
            }
        },
        onResponse: (data) => {
        }
    });