

chat-about-video

Chat about a video clip (or even without one) using the powerful OpenAI ChatGPT (hosted in OpenAI or Microsoft Azure) or Google Gemini (hosted in Google Cloud).


chat-about-video is an open-source NPM package designed to accelerate the development of conversational applications around video content. Harnessing the capabilities of ChatGPT from Microsoft Azure or OpenAI, as well as Gemini from Google, this package enables a range of usage scenarios with minimal effort.

Key features:

  • ChatGPT models hosted in both Azure and OpenAI are supported.
  • Gemini models hosted in Google Cloud are supported.
  • Frame images are extracted from the input video and uploaded for ChatGPT/Gemini to consume.
  • It automatically retries on throttling (HTTP status code 429) and server error (HTTP status code 5xx) responses from the API.
  • Options supported by the underlying API are exposed for customisation.
  • It can also be used in scenarios where no video is involved, which means it works for "normal" text chats as well.

Usage

Installation

To use chat-about-video in your Node.js application, add it as a dependency along with other necessary packages based on your usage scenario. Below are examples for typical setups:

# ChatGPT on OpenAI or Azure with Azure Blob Storage
npm i chat-about-video @azure/openai @ffmpeg-installer/ffmpeg @azure/storage-blob
# Gemini in Google Cloud
npm i chat-about-video @google/generative-ai @ffmpeg-installer/ffmpeg
# ChatGPT on OpenAI or Azure with AWS S3
npm i chat-about-video @azure/openai @ffmpeg-installer/ffmpeg @handy-common-utils/aws-utils @aws-sdk/s3-request-presigner @aws-sdk/client-s3

Optional dependencies

ChatGPT

To use ChatGPT hosted on OpenAI or Azure:

npm i @azure/openai

Gemini

To use Gemini hosted on Google Cloud:

npm i @google/generative-ai

ffmpeg

If you need ffmpeg for extracting video frame images, ensure it is installed. You can use a system package manager or an NPM package:

sudo apt install ffmpeg
# or
npm i @ffmpeg-installer/ffmpeg

Azure Blob Storage

To use Azure Blob Storage for frame images (not needed for Gemini):

npm i @azure/storage-blob

AWS S3

To use AWS S3 for frame images (not needed for Gemini):

npm i @handy-common-utils/aws-utils @aws-sdk/s3-request-presigner @aws-sdk/client-s3

How the video is provided to ChatGPT or Gemini

ChatGPT

There are two approaches for feeding video content to ChatGPT. chat-about-video supports both of them.

Frame image extraction:

  • Integrate ChatGPT from Microsoft Azure or OpenAI effortlessly.
  • Utilize ffmpeg integration provided by this package for frame image extraction or opt for a DIY approach.
  • Store frame images with ease, supporting Azure Blob Storage and AWS S3.
  • GPT-4o and GPT-4 Vision Preview hosted in Azure allow analysis of up to 10 frame images.
  • GPT-4o and GPT-4 Vision Preview hosted in OpenAI allow analysis of more than 10 frame images.

Video indexing with Microsoft Azure:

  • Exclusively supported by GPT-4 Vision Preview from Microsoft Azure.
  • Ingest videos seamlessly into Microsoft Azure's Video Retrieval Index.
  • Automatic extraction of up to 20 frame images using Video Retrieval Indexer.
  • Default integration of speech transcription for enhanced comprehension.
  • Flexible storage options with support for Azure Blob Storage and AWS S3.

Gemini

chat-about-video supports sending video frames directly to Google's API, without cloud storage.

  • Utilize ffmpeg integration provided by this package for frame image extraction or opt for a DIY approach.
  • The number of frame images is limited only by the Gemini API in Google Cloud.

Concrete types and low level clients

ChatAboutVideo and Conversation are generic classes. Use them without concrete generic type parameters when you want the flexibility to easily switch between ChatGPT and Gemini.

Otherwise, you may want to use concrete types. Below are some examples:

// cast to a concrete type
const castToChatGpt = chat as ChatAboutVideoWithChatGpt;

// you can also just leave the ChatAboutVideo instance generic, but narrow down the conversation type
const conversationWithGemini = (await chat.startConversation(...)) as ConversationWithGemini;
const conversationWithChatGpt = await (chat as ChatAboutVideoWithChatGpt).startConversation(...);

To access the underlying API wrapper, use the getApi() function on the ChatAboutVideo instance. To get the raw API client, use the getClient() function on the awaited object returned from getApi().
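
For instance, a minimal sketch (assuming a ChatAboutVideo instance named chat, configured as in the demos below):

const api = await chat.getApi(); // the underlying API wrapper, e.g. ChatGptApi or GeminiApi
const client = await api.getClient(); // the raw API client, e.g. OpenAIClient or GenerativeModel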

Cleaning up

Intermediate files, such as extracted frame images, can be saved locally or in the cloud. To remove these files when they are no longer needed, remember to call the end() function on the Conversation instance when the conversation finishes.
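
For example, as the demos below do when the user quits:

await conversation.end(); // cleans up intermediate files, subject to the deleteFilesWhenConversationEnds options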

Customisation

Frame extraction

If you would like to customise how frame images are extracted and stored, consider these options (a configuration sketch follows the list):

  • In the options object passed to the constructor of ChatAboutVideo, there's a property extractVideoFrames. This property allows you to customise how frame images are extracted.
    • format, interval, limit, width, height - These allow you to specify your expectations for the extraction.
    • deleteFilesWhenConversationEnds - This flag specifies whether extracted frame images should be deleted from the local file system when the conversation ends.
    • framesDirectoryResolver - You can supply a function for determining where extracted frame image files should be stored locally.
    • extractor - You can supply a function for doing the extraction.
  • In the options object passed to the constructor of ChatAboutVideo, there's a property storage. For ChatGPT, storing frame images in the cloud is recommended. You can use this property to customise how frame images are stored in the cloud.
    • azureStorageConnectionString - If you would like to use Azure Blob Storage, put the connection string in this property. If this property does not have a value, ChatAboutVideo assumes you'd like to use AWS S3, and the default AWS identity/credentials are picked up from the environment.
    • storageContainerName, storagePathPrefix - These allow you to specify where those images should be stored.
    • downloadUrlExpirationSeconds - For images stored in the cloud, presigned download URLs with expiration are generated for ChatGPT to access. This property allows you to control the expiration time.
    • deleteFilesWhenConversationEnds - This flag specifies whether extracted frame images should be deleted from the cloud when the conversation ends.
    • uploader - You can supply a function for uploading images into the cloud.
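
Below is a minimal configuration sketch. The concrete values are illustrative assumptions rather than defaults; see the demos further down for complete, working setups.

import { ChatAboutVideo } from 'chat-about-video';

const chat = new ChatAboutVideo({
  credential: { key: process.env.OPENAI_API_KEY! },
  extractVideoFrames: {
    interval: 5, // assumed: extract one frame every 5 seconds
    limit: 10, // assumed: at most 10 frames (Azure's maximum)
    width: 512, // assumed: scale frames to 512px wide
    deleteFilesWhenConversationEnds: true, // delete local frame images when the conversation ends
  },
  storage: {
    azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!, // use Azure Blob Storage
    storageContainerName: 'video-frames', // assumed container name
    storagePathPrefix: 'frames/', // assumed path prefix
    downloadUrlExpirationSeconds: 3600, // presigned download URLs valid for one hour
    deleteFilesWhenConversationEnds: true, // delete uploaded frame images when the conversation ends
  },
});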

Settings of the underlying model

In the options object passed to the constructor of ChatAboutVideo, there are two properties, clientSettings and completionSettings. Settings of the underlying model can be configured through them.

You can also override settings using the last parameter of the startConversation(...) function on ChatAboutVideo, or the last parameter of the say(...) function on Conversation.
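
For example, the demos below override maxTokens for a single exchange when calling say(...):

const answer = await conversation.say('Describe what happens in the video.', { maxTokens: 2000 });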

Code examples

Example 1: Using GPT-4o or GPT-4 Vision Preview hosted in OpenAI with Azure Blob Storage

// This is a demo utilising GPT-4o or Vision preview hosted in OpenAI.
// OpenAI API allows more than 10 (maximum allowed by Azure's OpenAI API) images to be supplied.
// Video frame images are uploaded to Azure Blob Storage and then made available to GPT from there.
//
// This script can be executed with a command line like this from the project root directory:
// export OPENAI_API_KEY=...
// export AZURE_STORAGE_CONNECTION_STRING=...
// export OPENAI_MODEL_NAME=...
// export AZURE_STORAGE_CONTAINER_NAME=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo1.ts
//

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { ChatAboutVideo, ConversationWithChatGpt } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      credential: {
        key: process.env.OPENAI_API_KEY!,
      },
      storage: {
        azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
        storageContainerName: process.env.AZURE_STORAGE_CONTAINER_NAME || 'vision-experiment-input',
        storagePathPrefix: 'video-frames/',
      },
      completionOptions: {
        deploymentName: process.env.OPENAI_MODEL_NAME || 'gpt-4o', // or 'gpt-4-vision-preview'
      },
      extractVideoFrames: {
        limit: 100,
        interval: 2,
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithChatGpt;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, { maxTokens: 2000 });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2))));

Example 2: Using GPT-4 Vision Preview hosted in Azure with Azure Video Retrieval Indexer

// This is a demo utilising GPT-4 Vision preview hosted in Azure.
// Azure Video Retrieval Indexer is used for extracting information from the input video.
// Information in Azure Video Retrieval Indexer is supplied to GPT.
//
// This script can be executed with a command line like this from the project root directory:
// export AZURE_OPENAI_API_ENDPOINT=...
// export AZURE_OPENAI_API_KEY=...
// export AZURE_OPENAI_DEPLOYMENT_NAME=...
// export AZURE_STORAGE_CONNECTION_STRING=...
// export AZURE_STORAGE_CONTAINER_NAME=...
// export AZURE_CV_API_ENDPOINT=...
// export AZURE_CV_API_KEY=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo2.ts
//

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { ChatAboutVideo, ConversationWithChatGpt } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      endpoint: process.env.AZURE_OPENAI_API_ENDPOINT!,
      credential: {
        key: process.env.AZURE_OPENAI_API_KEY!,
      },
      storage: {
        azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
        storageContainerName: process.env.AZURE_STORAGE_CONTAINER_NAME || 'vision-experiment-input',
        storagePathPrefix: 'video-frames/',
      },
      completionOptions: {
        deploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME || 'gpt4vision',
      },
      videoRetrievalIndex: {
        endpoint: process.env.AZURE_CV_API_ENDPOINT!,
        apiKey: process.env.AZURE_CV_API_KEY!,
        createIndexIfNotExists: true,
        deleteIndexWhenConversationEnds: true,
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithChatGpt;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, { maxTokens: 2000 });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2)), (error as Error).stack));

Example 3: Using GPT-4 Vision Preview hosted in Azure with Azure Blob Storage

// This is a demo utilising GPT-4o or Vision preview hosted in Azure.
// Up to 10 (maximum allowed by Azure's OpenAI API) frames are extracted from the input video.
// Video frame images are uploaded to Azure Blob Storage and then made available to GPT from there.
//
// This script can be executed with a command line like this from the project root directory:
// export AZURE_OPENAI_API_ENDPOINT=...
// export AZURE_OPENAI_API_KEY=...
// export AZURE_OPENAI_DEPLOYMENT_NAME=...
// export AZURE_STORAGE_CONNECTION_STRING=...
// export AZURE_STORAGE_CONTAINER_NAME=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo3.ts

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { ChatAboutVideo, ConversationWithChatGpt } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      endpoint: process.env.AZURE_OPENAI_API_ENDPOINT!,
      credential: {
        key: process.env.AZURE_OPENAI_API_KEY!,
      },
      storage: {
        azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
        storageContainerName: process.env.AZURE_STORAGE_CONTAINER_NAME || 'vision-experiment-input',
        storagePathPrefix: 'video-frames/',
      },
      completionOptions: {
        deploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME || 'gpt4vision',
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithChatGpt;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, { maxTokens: 2000 });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2))));

Example 4: Using Gemini hosted in Google Cloud

// This is a demo utilising Google Gemini through Google Generative Language API.
// Google Gemini allows more than 10 (maximum allowed by Azure's OpenAI API) frame images to be supplied.
// Video frame images are sent through Google Generative Language API directly.
//
// This script can be executed with a command line like this from the project root directory:
// export GEMINI_API_KEY=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo4.ts

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { HarmBlockThreshold, HarmCategory } from '@google/generative-ai';

import { ChatAboutVideo, ConversationWithGemini } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      credential: {
        key: process.env.GEMINI_API_KEY!,
      },
      clientSettings: {
        modelParams: {
          model: 'gemini-1.5-flash',
        },
      },
      extractVideoFrames: {
        limit: 100,
        interval: 0.5,
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithGemini;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, {
      safetySettings: [{ category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE }],
    });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2))));

API

chat-about-video

Modules

Classes

Class: VideoRetrievalApiClient

azure/video-retrieval-api-client.VideoRetrievalApiClient

Constructors

constructor

new VideoRetrievalApiClient(endpointBaseUrl, apiKey, apiVersion?)

Parameters

| Name | Type | Default value |
| :-- | :-- | :-- |
| endpointBaseUrl | string | undefined |
| apiKey | string | undefined |
| apiVersion | string | '2023-05-01-preview' |

Methods

createIndex

createIndex(indexName, indexOptions?): Promise<void>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |
| indexOptions | CreateIndexOptions |

Returns

Promise<void>


createIndexIfNotExist

createIndexIfNotExist(indexName, indexOptions?): Promise<void>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |
| indexOptions? | CreateIndexOptions |

Returns

Promise<void>


createIngestion

createIngestion(indexName, ingestionName, ingestion): Promise<void>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |
| ingestionName | string |
| ingestion | IngestionRequest |

Returns

Promise<void>


deleteDocument

deleteDocument(indexName, documentUrl): Promise<void>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |
| documentUrl | string |

Returns

Promise<void>


deleteIndex

deleteIndex(indexName): Promise<void>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |

Returns

Promise<void>


getIndex

getIndex(indexName): Promise<undefined | IndexSummary>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |

Returns

Promise<undefined | IndexSummary>


getIngestion

getIngestion(indexName, ingestionName): Promise<IngestionSummary>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |
| ingestionName | string |

Returns

Promise<IngestionSummary>


ingest

ingest(indexName, ingestionName, ingestion, backoff?): Promise<void>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |
| ingestionName | string |
| ingestion | IngestionRequest |
| backoff | number[] |

Returns

Promise<void>


listDocuments

listDocuments(indexName): Promise<DocumentSummary[]>

Parameters

| Name | Type |
| :-- | :-- |
| indexName | string |

Returns

Promise<DocumentSummary[]>


listIndexes

listIndexes(): Promise<IndexSummary[]>

Returns

Promise<IndexSummary[]>
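
A hedged usage sketch of this client; the import path is inferred from the module name above, and the endpoint and index name are placeholders:

import { VideoRetrievalApiClient } from 'chat-about-video/azure/video-retrieval-api-client';

const client = new VideoRetrievalApiClient(
  'https://example.cognitiveservices.azure.com', // placeholder endpoint base URL
  process.env.AZURE_CV_API_KEY!, // apiVersion defaults to '2023-05-01-preview'
);
await client.createIndexIfNotExist('my-index'); // create the index only if it does not exist yet
const indexes = await client.listIndexes(); // IndexSummary[] describing all indexes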

Class: ChatAboutVideo<CLIENT, OPTIONS, PROMPT, RESPONSE>

chat.ChatAboutVideo

Type parameters

| Name | Type |
| :-- | :-- |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |

Constructors

constructor

new ChatAboutVideo<CLIENT, OPTIONS, PROMPT, RESPONSE>(options, log?)

Type parameters

| Name | Type |
| :-- | :-- |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |

Parameters

| Name | Type |
| :-- | :-- |
| options | SupportedChatApiOptions |
| log | undefined \| LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void> |

Properties

| Property | Description |
| --- | --- |
| Protected apiPromise: Promise<ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>> | |
| Protected log: undefined \| LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void> | |
| Protected options: SupportedChatApiOptions | |

Methods

getApi

getApi(): Promise<ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>>

Get the underlying API instance.

Returns

Promise<ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>>

The underlying API instance.


startConversation

startConversation(options?): Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

Start a conversation without a video.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| options? | OPTIONS | Overriding options for this conversation |

Returns

Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

The conversation.

startConversation(videoFile, options?): Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

Start a conversation about a video.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| videoFile | string | Path to a video file in local file system. |
| options? | OPTIONS | Overriding options for this conversation |

Returns

Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

The conversation.
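
A short sketch of the two overloads (the video path is a placeholder; chat is a ChatAboutVideo instance):

// Overload 1: start a conversation without a video, i.e. a "normal" text chat.
const textConversation = await chat.startConversation();
// Overload 2: start a conversation about a local video file.
const videoConversation = await chat.startConversation('/path/to/video.mp4');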

Class: Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>

chat.Conversation

Type parameters

| Name | Type |
| :-- | :-- |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |

Constructors

constructor

new Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>(conversationId, api, prompt, options, cleanup?, log?)

Type parameters

| Name | Type |
| :-- | :-- |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |

Parameters

| Name | Type |
| :-- | :-- |
| conversationId | string |
| api | ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE> |
| prompt | undefined \| PROMPT |
| options | OPTIONS |
| cleanup? | () => Promise<any> |
| log | undefined \| LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void> |

Properties

| Property | Description |
| --- | --- |
| Protected api: ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE> | |
| Protected Optional cleanup: () => Promise<any> | |
| Protected conversationId: string | |
| Protected log: undefined \| LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void> | |
| Protected options: OPTIONS | |
| Protected prompt: undefined \| PROMPT | |

Methods

end

end(): Promise<void>

Returns

Promise<void>


getApi

getApi(): ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>

Get the underlying API instance.

Returns

ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>

The underlying API instance.


getPrompt

getPrompt(): undefined | PROMPT

Get the prompt for the current conversation. The prompt is the accumulated messages in the conversation so far.

Returns

undefined | PROMPT

The prompt which is the accumulated messages in the conversation so far.


say

say(message, options?): Promise<undefined | string>

Say something in the conversation, and get the response from the AI.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| message | string | The message to say in the conversation. |
| options? | Partial<OPTIONS> | Options for fine control. |

Returns

Promise<undefined | string>

The response/completion
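
For instance, a sketch of one exchange followed by inspecting the history (maxTokens mirrors the demos above):

const answer = await conversation.say('Summarise the video.', { maxTokens: 2000 });
const history = conversation.getPrompt(); // accumulated messages so far, or undefined if nothing has been said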

Class: ChatGptApi

chat-gpt.ChatGptApi

Implements

ChatApi

Constructors

constructor

new ChatGptApi(options)

Parameters

| Name | Type |
| :-- | :-- |
| options | ChatGptOptions |

Properties

| Property | Description |
| --- | --- |
| Protected client: OpenAIClient | |
| Protected Optional extractVideoFrames: Pick<ExtractVideoFramesOptions, "height"> & Required<Omit<ExtractVideoFramesOptions, "height">> | |
| Protected options: ChatGptOptions | |
| Protected storage: Required<Pick<StorageOptions, "uploader">> & StorageOptions | |
| Protected tmpDir: string | |
| Protected Optional videoRetrievalIndex: Required<Pick<VideoRetrievalIndexOptions, "createIndexIfNotExists" \| "deleteDocumentWhenConversationEnds" \| "deleteIndexWhenConversationEnds">> & VideoRetrievalIndexOptions | |

Methods

appendToPrompt

appendToPrompt(newPromptOrResponse, prompt?): Promise<ChatRequestMessageUnion[]>

Append a new prompt or response to form a full prompt. This function is useful for building a prompt that contains conversation history.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| newPromptOrResponse | ChatCompletions \| ChatRequestMessageUnion[] | A new prompt to be appended, or a previous response to be appended. |
| prompt? | ChatRequestMessageUnion[] | The conversation history, which is a prompt containing previous prompts and responses. If it is not provided, the conversation history returned will contain only what is in newPromptOrResponse. |

Returns

Promise<ChatRequestMessageUnion[]>

The full prompt which is effectively the conversation history.

Implementation of

ChatApi.appendToPrompt
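
A hedged sketch of the low-level flow through this class; the deployment name is an assumption, and normally ChatAboutVideo and Conversation drive these calls for you:

const api = await chat.getApi(); // assuming chat is a ChatAboutVideo configured for ChatGPT
const { prompt } = await api.buildTextPrompt('Hello!'); // build the initial prompt
const result = await api.generateContent(prompt, { deploymentName: 'gpt-4o' }); // assumed deployment name
const text = await api.getResponseText(result); // extract the answer text from the response
const history = await api.appendToPrompt(result, prompt); // fold the response into the conversation history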


buildTextPrompt

buildTextPrompt(text, _conversationId?): Promise<{ prompt: ChatRequestMessageUnion[] }>

Build prompt for sending text content to AI

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| text | string | The text content to be sent. |
| _conversationId? | string | Unique identifier of the conversation. |

Returns

Promise<{ prompt: ChatRequestMessageUnion[] }>

An object containing the prompt.

Implementation of

ChatApi.buildTextPrompt


buildVideoPrompt

buildVideoPrompt(videoFile, conversationId?): Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

Build prompt for sending video content to AI. Sometimes, to include video in the conversation, additional options and/or clean-up are needed. In such cases, options to be passed to the generateContent function and/or a clean-up callback function will be returned in the output of this function.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| videoFile | string | Path to the video file. |
| conversationId? | string | Unique identifier of the conversation. |

Returns

Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

An object containing the prompt, optional options, and an optional cleanup function.

Implementation of

ChatApi.buildVideoPrompt


buildVideoPromptWithFrames

Protected buildVideoPromptWithFrames(videoFile, conversationId?): Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

Parameters

| Name | Type |
| :-- | :-- |
| videoFile | string |
| conversationId | string |

Returns

Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>


buildVideoPromptWithVideoRetrievalIndex

Protected buildVideoPromptWithVideoRetrievalIndex(videoFile, conversationId?): Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

Parameters

| Name | Type |
| :-- | :-- |
| videoFile | string |
| conversationId | string |

Returns

Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>


generateContent

generateContent(prompt, options): Promise<ChatCompletions>

Generate content based on the given prompt and options.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| prompt | ChatRequestMessageUnion[] | The full prompt to generate content. |
| options | { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions | Optional options to control the content generation. |

Returns

Promise<ChatCompletions>

The generated content.

Implementation of

ChatApi.generateContent


getClient

getClient(): Promise<OpenAIClient>

Get the raw client. This function could be useful for advanced use cases.

Returns

Promise<OpenAIClient>

The raw client.

Implementation of

ChatApi.getClient


getResponseText

getResponseText(result): Promise<undefined | string>

Get the text from the response object

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| result | ChatCompletions | the response object |

Returns

Promise<undefined | string>

Implementation of

ChatApi.getResponseText


isServerError

isServerError(error): boolean

Check if the error is a server error.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| error | any | any error object |

Returns

boolean

true if the error is a server error, false otherwise.

Implementation of

ChatApi.isServerError


isThrottlingError

isThrottlingError(error): boolean

Check if the error is a throttling error.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| error | any | any error object |

Returns

boolean

true if the error is a throttling error, false otherwise.

Implementation of

ChatApi.isThrottlingError

Class: GeminiApi

gemini.GeminiApi

Implements

ChatApi

Constructors

constructor

new GeminiApi(options)

Parameters

| Name | Type |
| :-- | :-- |
| options | GeminiOptions |

Properties

| Property | Description |
| --- | --- |
| Protected client: GenerativeModel | |
| Protected extractVideoFrames: Pick<ExtractVideoFramesOptions, "height"> & Required<Omit<ExtractVideoFramesOptions, "height">> | |
| Protected options: GeminiOptions | |
| Protected tmpDir: string | |

Methods

appendToPrompt

appendToPrompt(newPromptOrResponse, prompt?): Promise<Content[]>

Append a new prompt or response to form a full prompt. This function is useful for building a prompt that contains conversation history.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| newPromptOrResponse | Content[] \| GenerateContentResult | A new prompt to be appended, or a previous response to be appended. |
| prompt? | Content[] | The conversation history, which is a prompt containing previous prompts and responses. If it is not provided, the conversation history returned will contain only what is in newPromptOrResponse. |

Returns

Promise<Content[]>

The full prompt which is effectively the conversation history.

Implementation of

ChatApi.appendToPrompt


buildTextPrompt

buildTextPrompt(text, _conversationId?): Promise<{ prompt: Content[] }>

Build prompt for sending text content to AI

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| text | string | The text content to be sent. |
| _conversationId? | string | Unique identifier of the conversation. |

Returns

Promise<{ prompt: Content[] }>

An object containing the prompt.

Implementation of

ChatApi.buildTextPrompt


buildVideoPrompt

buildVideoPrompt(videoFile, conversationId?): Promise<BuildPromptOutput<Content[], GeminiCompletionOptions>>

Build prompt for sending video content to AI. Sometimes, to include video in the conversation, additional options and/or clean-up are needed. In such cases, options to be passed to the generateContent function and/or a clean-up callback function will be returned in the output of this function.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| videoFile | string | Path to the video file. |
| conversationId | string | Unique identifier of the conversation. |

Returns

Promise<BuildPromptOutput<Content[], GeminiCompletionOptions>>

An object containing the prompt, optional options, and an optional cleanup function.

Implementation of

ChatApi.buildVideoPrompt


generateContent

generateContent(prompt, options): Promise<GenerateContentResult>

Generate content based on the given prompt and options.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| prompt | Content[] | The full prompt to generate content. |
| options | GeminiCompletionOptions | Optional options to control the content generation. |

Returns

Promise<GenerateContentResult>

The generated content.

Implementation of

ChatApi.generateContent


getClient

getClient(): Promise<GenerativeModel>

Get the raw client. This function could be useful for advanced use cases.

Returns

Promise<GenerativeModel>

The raw client.

Implementation of

ChatApi.getClient


getResponseText

getResponseText(result): Promise<undefined | string>

Get the text from the response object

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| result | GenerateContentResult | the response object |

Returns

Promise<undefined | string>

Implementation of

ChatApi.getResponseText


isServerError

isServerError(error): boolean

Check if the error is a server error.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| error | any | any error object |

Returns

boolean

true if the error is a server error, false otherwise.

Implementation of

ChatApi.isServerError


isThrottlingError

isThrottlingError(error): boolean

Check if the error is a throttling error.

Parameters

| Name | Type | Description |
| :-- | :-- | :-- |
| error | any | any error object |

Returns

boolean

true if the error is a throttling error, false otherwise.

Implementation of

ChatApi.isThrottlingError

Interfaces

Interface: CreateIndexOptions

azure/video-retrieval-api-client.CreateIndexOptions

Properties

| Property | Description |
| --- | --- |
| Optional features: IndexFeature[] | |
| Optional metadataSchema: IndexMetadataSchema | |
| Optional userData: object | |

Interface: DocumentSummary

azure/video-retrieval-api-client.DocumentSummary

Properties

| Property | Description |
| --- | --- |
| createdDateTime: string | |
| documentId: string | |
| Optional documentUrl: string | |
| lastModifiedDateTime: string | |
| Optional metadata: object | |
| Optional userData: object | |

Interface: IndexFeature

azure/video-retrieval-api-client.IndexFeature

Properties

| Property | Description |
| --- | --- |
| Optional domain: "surveillance" \| "generic" | |
| Optional modelVersion: string | |
| name: "vision" \| "speech" | |

Interface: IndexMetadataSchema

azure/video-retrieval-api-client.IndexMetadataSchema

Properties

| Property | Description |
| --- | --- |
| fields: IndexMetadataSchemaField[] | |
| Optional language: string | |

Interface: IndexMetadataSchemaField

azure/video-retrieval-api-client.IndexMetadataSchemaField

Properties

| Property | Description |
| --- | --- |
| filterable: boolean | |
| name: string | |
| searchable: boolean | |
| type: "string" \| "datetime" | |

Interface: IndexSummary

azure/video-retrieval-api-client.IndexSummary

Properties

| Property | Description |
| --- | --- |
| createdDateTime: string | |
| eTag: string | |
| Optional features: IndexFeature[] | |
| lastModifiedDateTime: string | |
| name: string | |
| Optional userData: object | |

Interface: IngestionRequest

azure/video-retrieval-api-client.IngestionRequest

Properties

| Property | Description |
| --- | --- |
| Optional filterDefectedFrames: boolean | |
| Optional generateInsightIntervals: boolean | |
| Optional includeSpeechTranscript: boolean | |
| Optional moderation: boolean | |
| videos: VideoIngestion[] | |

Interface: IngestionStatusDetail

azure/video-retrieval-api-client.IngestionStatusDetail

Properties

| Property | Description |
| --- | --- |
| documentId: string | |
| documentUrl: string | |
| lastUpdatedTime: string | |
| succeeded: boolean | |

Interface: IngestionSummary

azure/video-retrieval-api-client.IngestionSummary

Properties

| Property | Description |
| --- | --- |
| Optional batchName: string | |
| createdDateTime: string | |
| Optional fileStatusDetails: IngestionStatusDetail[] | |
| lastModifiedDateTime: string | |
| name: string | |
| state: "NotStarted" \| "Running" \| "Completed" \| "Failed" \| "PartiallySucceeded" | |

Interface: VideoIngestion

azure/video-retrieval-api-client.VideoIngestion

Properties

| Property | Description |
| --- | --- |
| Optional documentId: string | |
| documentUrl: string | |
| Optional metadata: object | |
| mode: "update" \| "remove" \| "add" | |
| Optional userData: object | |
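
As a hedged sketch, an IngestionRequest like the following could be passed to createIngestion on VideoRetrievalApiClient (the URL and names are placeholders; the types are assumed to come from the same module as the client):

const ingestion: IngestionRequest = {
  videos: [
    {
      mode: 'add', // add this document to the index
      documentUrl: 'https://example.blob.core.windows.net/videos/test1.mp4', // placeholder video URL
    },
  ],
  includeSpeechTranscript: true, // also ingest the speech transcript
};
await client.createIngestion('my-index', 'my-ingestion', ingestion);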

Interface: AdditionalCompletionOptions

types.AdditionalCompletionOptions

Properties

| Property | Description |
| --- | --- |