# chatgpt-wrapper

v1.1.6 · NodeJS ChatGPT API wrapper
Official docs - https://platform.openai.com/docs/api-reference/chat
## Features

- TypeScript types included
- Documentation included
- Streaming support
## Install

```shell
npm i chatgpt-wrapper
```

or

```shell
yarn add chatgpt-wrapper
```
## Usage

### Import

CommonJS:

```js
const { ChatGPT } = require('chatgpt-wrapper');
```

ES Modules:

```js
import { ChatGPT } from 'chatgpt-wrapper';
```

With types:

```ts
import { ChatGPT, Message, ReqBody, ResBody } from 'chatgpt-wrapper';
```
### New instance

- `API_KEY` (required): visit your API Keys page to retrieve the API key.
- `ORG` (optional): for users who belong to multiple organizations, specifies which organization is used for an API request. Usage from these requests counts against that organization's subscription quota. Get your Org ID here.
- `URL` (optional): API endpoint. Defaults to the 'Create chat completion' endpoint.
- `MODEL` (optional): default model for requests that don't specify one. Defaults to 'gpt-3.5-turbo'. Models list.

```js
const chat = new ChatGPT({
  API_KEY: '...', // Your API key (required)
  ORG: '...',     // Your organization (optional)
  URL: '...',     // API endpoint (optional)
  MODEL: '...',   // Custom default model (optional)
});
```
## Error Handling

Don't forget to catch errors from your requests, since the OpenAI API sometimes returns an error message instead of a response. API errors are returned as the `APIError` type.
async/await:

```js
try {
  const answer = await chat.send('question');
  // ...
} catch (err) {
  // handle error
}
```
Promise:

```js
chat.send('question')
  .then((answer) => { /* ... */ })
  .catch((err) => { /* handle error */ });
```
## Methods

### .send(content, [fetchOptions])

```ts
send(content: ReqBody | string, fetchOptions: RequestInit = {}): Promise<ResBody>
```

- `content` - string or `ReqBody`
- `fetchOptions` - [optional] node-fetch options

Use this method to send a request to the ChatGPT API.

A raw string is equivalent to:

```js
{
  model: 'gpt-3.5-turbo',
  messages: [{
    role: 'user',
    content: 'YOUR STRING',
  }],
}
```
⚠️ To use the stream option, use the .stream() method! ⚠️
Examples:

```js
const answer = await chat.send('what is JavaScript');
console.log(answer.choices[0].message);
```

```js
chat.send('what is JavaScript').then((answer) => {
  console.log(answer.choices[0].message);
});
```

```js
const answer = await chat.send({
  model: 'gpt-3.5-turbo-0301',
  messages: [{
    role: 'user',
    content: 'what is JavaScript',
  }],
  max_tokens: 200,
});
console.log(answer.choices[0].message);
```
### .stream(content, [fetchOptions])

```ts
stream(content: ReqBody | string, fetchOptions: RequestInit = {}): Promise<ResBody>
```

- `content` - string or `ReqBody`
- `fetchOptions` - [optional] node-fetch options

Use this method to send a request to the ChatGPT API and get a stream response back.

A raw string is equivalent to:

```js
{
  model: 'gpt-3.5-turbo',
  stream: true,
  messages: [{
    role: 'user',
    content: 'YOUR STRING',
  }],
}
```
Examples:

```js
(async () => {
  const answer = await chat.stream('what is JavaScript in 200 words');
  answer.pipe(process.stdout);
})();
```
### How to implement a "stop" command

Since you can pass options to fetch, you can abort the request with an AbortController. See the fetch docs.

Example:

```js
const controller = new AbortController();
const doStop = () => controller.abort();
// ...
const answer = await chat.stream('generate some long story', {
  signal: controller.signal,
});
answer.pipe(process.stdout);
```
Now, if you call doStop(), the controller will abort the request along with the stream.
## Types

### Message

Message in chat format.
Source: index.ts#L4

### FunctionModel

Function model description. See more.
Source: index.ts#L46

### ReqBody

Request body.
Source: index.ts#L70

### ResBody

Response body.
Source: index.ts#L188

### APIError

OpenAI API error.
Source: index.ts#L263