awesomechat v1.1.2
An awesome Node.js chatbot that supports streaming and conversation interruption, built on ChatGPT and AwesomeChatAPI.
AwesomeChat
Features
- 🚀 Quick Reply
- 🤖 Streaming Reply
- 😎 Conversation Memory
- 👏 Chat Interruption
- 😃 Rapid Deployment
Install
npm install awesomechat
or
yarn add awesomechat
or
pnpm add awesomechat
Usage
Simple
import AwesomeChatBot from "awesomechat";

const key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // replace with your key
const systemDescription = "You are a good robot";
const chatBot = new AwesomeChatBot(key, systemDescription);

chatBot.chat({
  chatContext: "hello?",
  onReplying(word, reply, id) {
    console.log("onReplying", reply);
  },
  onError(error) {
    console.log("onError", error);
  },
});
With Context
import AwesomeChatBot from "awesomechat";
const key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // replace with your key
const systemDescription =
  "You're a robot with emotions, and you need to add an emoji to the front of each sentence to express your emotions";
const chatBot = new AwesomeChatBot(key, systemDescription);

chatBot.chat({
  chatContext: [
    { role: "user", content: "100" },
    { role: "user", content: "200" },
    { role: "user", content: "300" },
    {
      role: "user",
      content: "What is the maximum number in the dialog above",
    },
  ],
  onReplying(word, reply, id) {
    console.log("onReplying", reply);
  },
  onReplyEnd(completion, id, usage) {
    if (completion.includes("300")) {
      console.log("😄You are right!");
    }
  },
  onError(error) {
    console.log("onError", error);
  },
});
Abort
import AwesomeChatBot from "awesomechat";
const key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // replace with your key
const chatBot = new AwesomeChatBot(key);
chatBot.chat({
  chatContext: "Write a 500-word essay about AI",
  onReplyStart() {
    console.log("🚀copy that!");
  },
  onReplying(word, reply) {
    console.log("onReplying", reply);
  },
  onCancel(id, usage) {
    console.log("onCancel", id, usage);
  },
});

// Cancel the reply after 6 seconds
setTimeout(() => {
  chatBot.cancel();
}, 6000);
API
bot.chat({options})
| Key | Type | Description |
| --- | --- | --- |
| chatContext | `ChatContext[] \| string` | An array of chat context objects, or a string representing the current conversation context. |
| beforeReplyStart | `(promptTokens: number) => boolean` | Called before the reply starts. Return `true` to continue the conversation; return `false` to abort it. Typically used for conditional checks such as quota verification and authentication. |
| onReplyStart | `(id: string) => void` | Called when the model starts generating a response. |
| onReplying | `(word: string, reply: string, id: string) => void` | Called while the model is generating a response. `word` is the character currently being generated; `reply` is the concatenation of everything generated so far. |
| onReplyEnd | `(completion: string, id: string, usage: ChatUsage) => void` | Called when the model finishes generating a response. `completion` is the final generated response. |
| onError | `(error: Error) => void` | Called when an error occurs. `error` is the error object. |
| onCancel | `(id: string, usage: ChatUsage) => void` | Called when the current chat is cancelled via `bot.cancel()`. |
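Because `beforeReplyStart` receives the prompt token count and its return value gates the reply, it can be used as a simple quota check. A minimal sketch (the `TOKEN_BUDGET` constant and `withinBudget` helper below are illustrative names, not part of awesomechat):

```javascript
// Hypothetical token-budget gate for the documented beforeReplyStart option.
const TOKEN_BUDGET = 1000;

function withinBudget(promptTokens) {
  // Returning false aborts the conversation before any tokens are generated.
  return promptTokens <= TOKEN_BUDGET;
}

// Wired up as the documented option:
// chatBot.chat({ chatContext: "hello?", beforeReplyStart: withinBudget, ... });
```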
bot.cancel()
Cancel the current chat in progress.
Types
ChatContext
type ChatContext = {
  role: "assistant" | "user" | "system";
  content: string;
};
ChatConfig
type ChatConfig = {
  /** What sampling temperature to use, between 0 and 2 */
  temperature?: number;
  /** An alternative to sampling with temperature, where the model considers the results of the tokens with top_p probability mass */
  top_p?: number;
  /** How many chat completion choices to generate for each input message */
  n?: number;
  /** Up to 4 sequences where the API will stop generating further tokens */
  stop?: string | string[];
  /** The maximum number of tokens to generate in the chat completion */
  max_tokens?: number;
  /** Number between -2.0 and 2.0 */
  presence_penalty?: number;
  /** Number between -2.0 and 2.0 */
  frequency_penalty?: number;
  /** A JSON object that maps tokens to an associated bias value from -100 to 100 */
  logit_bias?: {
    [key: string]: number;
  };
  /** A unique identifier representing your end-user */
  user?: string;
};
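As an illustration, a config object matching the `ChatConfig` type could look like the following. The values are purely illustrative, and this README does not show where the config is supplied, so check the package for the exact entry point:

```javascript
// Illustrative ChatConfig values only; consult the package for how to
// supply this object to AwesomeChatBot.
const config = {
  temperature: 0.7,      // moderate randomness (allowed range 0–2)
  max_tokens: 256,       // cap the length of each reply
  stop: ["\n\n"],        // stop generating at the first blank line
  presence_penalty: 0.5, // mildly encourage new topics (range -2.0 to 2.0)
  user: "user-123",      // stable end-user identifier
};
```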
ChatUsage
type ChatUsage = {
  /** The total number of tokens used (prompt + completion) */
  totalTokens: number;
  /** The number of tokens in the prompt */
  promptTokens: number;
  /** The number of tokens generated by the model */
  completionTokens: number;
};
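A `ChatUsage` object is handed to `onReplyEnd` and `onCancel`, so it is a natural place to hook in metering. As a sketch, a small (hypothetical) helper could flatten it into a log line:

```javascript
// Hypothetical logging helper; the field names come from the ChatUsage type above.
function summarizeUsage(usage) {
  return `prompt=${usage.promptTokens} completion=${usage.completionTokens} total=${usage.totalTokens}`;
}

// e.g. inside onReplyEnd(completion, id, usage):
//   console.log(summarizeUsage(usage));
```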