
openai-edge-stream

v2.0.1


OpenAI Edge Stream

Basic usage

Use it like you would with fetch:

Next.js example (with Node)

// api/chat/sendMessage.js
import { OpenAIEdgeStream } from 'openai-edge-stream';

export default async function handler(req, res) {
  // set appropriate headers for streaming
  res.status(200);
  res.setHeader('Content-Type', 'text/event-stream;charset=utf-8');
  res.setHeader('Cache-Control', 'no-cache, no-transform');
  res.setHeader('X-Accel-Buffering', 'no');

  const stream = await OpenAIEdgeStream(
    'https://api.openai.com/v1/chat/completions',
    {
      headers: {
        'content-type': 'application/json',
        Authorization: `Bearer ${process.env.OPENAI_KEY}`,
      },
      method: 'POST',
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        stream: true,
        messages: [{ role: 'user', content: 'Tell me 5 interesting facts' }],
      }),
    }
  );

  for await (const chunk of stream) {
    res.write(chunk);
  }
  res.end();
}

Next.js example (with Edge Functions)

// api/chat/sendMessage.js
import { OpenAIEdgeStream } from 'openai-edge-stream';

export const config = {
  runtime: 'edge',
};

export default async function handler(req) {
  const stream = await OpenAIEdgeStream(
    'https://api.openai.com/v1/chat/completions',
    {
      headers: {
        'content-type': 'application/json',
        Authorization: `Bearer ${process.env.OPENAI_KEY}`,
      },
      method: 'POST',
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        stream: true,
        messages: [{ role: 'user', content: 'Tell me 5 interesting facts' }],
      }),
    }
  );

  return new Response(stream);
}

Then on the front end:

import { streamReader } from 'openai-edge-stream';

const handleSendMessage = async () => {
  const response = await fetch(`/api/chat/sendMessage`, {
    headers: {
      'content-type': 'application/json',
    },
    method: 'POST',
  });

  let content = '';

  const data = response.body;

  // make sure the data is a ReadableStream
  if (!data) {
    return;
  }

  const reader = data.getReader();

  /*
  The second argument to streamReader is a callback invoked for every
  complete message chunk received, with the following structure:
  {
    event: string,
    content: string
  }
  event defaults to "event", but will be the eventId of any custom
  message emitted via OpenAIEdgeStream's onBeforeStream or
  onAfterStream emit function (reference below).
  */
  await streamReader(reader, (message) => {
    content = content + message.content;
  });

  console.log('CONTENT: ', content);
};
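
To make the message shape concrete, here is a minimal sketch of the accumulation pattern the callback uses (the sample chunks are illustrative, not real API output):

```javascript
// Sample message chunks in the shape streamReader delivers:
// { event: string, content: string }
const messages = [
  { event: 'event', content: 'Here are 5' },
  { event: 'event', content: ' interesting facts' },
];

// Concatenate each chunk's content, as in the callback above
let content = '';
for (const message of messages) {
  content = content + message.content;
}

console.log(content); // → "Here are 5 interesting facts"
```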

Advanced usage

onBeforeStream

If you need to perform any logic or emit a custom message before streaming begins, then you can use the onBeforeStream function:

const stream = await OpenAIEdgeStream(
  'https://api.openai.com/v1/chat/completions',
  {
    headers: {
      'content-type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_KEY}`,
    },
    method: 'POST',
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [{ role: 'user', content: 'Tell me 5 interesting facts' }],
    }),
  },
  {
    onBeforeStream: ({ emit }) => {
      /*
        emit takes 2 arguments, the message to emit (required)
        and the eventId to assign to this message (optional).
        The eventId can be grabbed in the streamReader as shown in
        the second code snippet below
      */
      emit('my custom message', 'customMessageEvent');
    },
  }
);
await streamReader(reader, (message) => {
  if (message.event === 'customMessageEvent') {
    console.log(message.content); // my custom message
  } else {
    content = content + message.content;
  }
});

onAfterStream

If you need to perform any logic or emit a custom message after streaming has finished, but before the stream closes, then you can use the onAfterStream function:

const stream = await OpenAIEdgeStream(
  'https://api.openai.com/v1/chat/completions',
  {
    headers: {
      'content-type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_KEY}`,
    },
    method: 'POST',
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [{ role: 'user', content: 'Tell me 5 interesting facts' }],
    }),
  },
  {
    onAfterStream: ({ emit, fullContent }) => {
      /*
        emit is the same as onBeforeStream.
        fullContent contains the entire content that was received
        from OpenAI. This is ideal if you need to persist it to a
        database, etc.
      */
    },
  }
);
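
As a sketch, an onAfterStream handler that persists the full response might look like this. Note that saveMessageToDb is a hypothetical helper standing in for your own storage logic, and 'statusEvent' is an arbitrary eventId:

```javascript
// Hypothetical in-memory store; swap in a real database call
const saved = [];
const saveMessageToDb = (content) => saved.push(content);

const streamOptions = {
  onAfterStream: ({ emit, fullContent }) => {
    // optionally notify the client before the stream closes
    emit('message saved', 'statusEvent');
    saveMessageToDb(fullContent);
  },
};

// demo: invoke the handler with a stubbed emit
const emitted = [];
streamOptions.onAfterStream({
  emit: (msg, eventId) => emitted.push({ msg, eventId }),
  fullContent: 'Here are 5 facts...',
});
```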

Overriding the default terminationMessage:

The default terminationMessage is [DONE] (the message OpenAI sends to signal that the stream has ended), but it can be overridden like so:

const stream = await OpenAIEdgeStream(
  'https://api.openai.com/v1/chat/completions',
  {
    headers: {
      'content-type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_KEY}`,
    },
    method: 'POST',
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [{ role: 'user', content: 'Tell me 5 interesting facts' }],
    }),
  },
  {
    terminationMessage: 'MY TERMINATION MESSAGE OVERRIDE',
  }
);

Overriding the default textToEmit:

The default textToEmit logic is:

const json = JSON.parse(data);
text = json.choices[0].delta?.content || '';

i.e. each data string emitted by OpenAI's stream is stringified JSON, and the actual message content lives in json.choices[0].delta?.content. If for some reason you need to access a different property, or want to supply your own logic, you can do so:

const stream = await OpenAIEdgeStream(
  'https://api.openai.com/v1/chat/completions',
  {
    headers: {
      'content-type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_KEY}`,
    },
    method: 'POST',
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [{ role: 'user', content: 'Tell me 5 interesting facts' }],
    }),
  },
  {
    textToEmit: (data) => {
      // access differentProperty and prefix all messages with 'jim '
      const json = JSON.parse(data);
      return `jim ${json.choices[0].delta?.differentProperty || ''}`;
    },
  }
);
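
For illustration, here is roughly what that custom textToEmit would return for a sample data payload (the differentProperty field is hypothetical, matching the snippet above):

```javascript
// A sample `data` string as it might arrive from the stream
const data = JSON.stringify({
  choices: [{ delta: { differentProperty: 'about cats' } }],
});

// The custom textToEmit from the snippet above
const textToEmit = (data) => {
  const json = JSON.parse(data);
  return `jim ${json.choices[0].delta?.differentProperty || ''}`;
};

console.log(textToEmit(data)); // → "jim about cats"
```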