chatgpt-api-arnolds
v5.2.11
Node.js client for the official ChatGPT API. Original author: Travis Fischer. Thanks to Travis Fischer.
ChatGPT API
Node.js client for the official ChatGPT API.
Intro
This package is a Node.js wrapper around ChatGPT by OpenAI. TS batteries included. ✨
Updates
This package now fully supports GPT-4! 🔥
We also just released a TypeScript chatgpt-plugin package which contains helpers and examples to make it as easy as possible to start building your own ChatGPT Plugins in JS/TS. Even if you don't have developer access to ChatGPT Plugins yet, you can still use the chatgpt-plugin repo to get a head start on building your own plugins locally.
If you have access to the `gpt-4` model, you can run the following to test out the CLI with GPT-4:
npx chatgpt@latest --model gpt-4 "Hello world"
We still support both the official ChatGPT API and the unofficial proxy API, but we now recommend using the official API since it's significantly more robust and supports GPT-4.
| Method                      | Free?  | Robust? | Quality?                        |
| --------------------------- | ------ | ------- | ------------------------------- |
| `ChatGPTAPI`                | ❌ No  | ✅ Yes  | ✅️ Real ChatGPT models + GPT-4 |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ❌ No   | ✅ ChatGPT webapp               |
**Note**: We strongly recommend using `ChatGPTAPI` since it uses the officially supported API from OpenAI. We will likely remove support for `ChatGPTUnofficialProxyAPI` in a future release.
1. `ChatGPTAPI` - Uses the `gpt-3.5-turbo` model with the official OpenAI chat completions API (official, robust approach, but it's not free)
2. `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
The official OpenAI chat completions API has been released, and it is now the default for this package! 🔥
| Method                      | Free?  | Robust?  | Quality?                |
| --------------------------- | ------ | -------- | ----------------------- |
| `ChatGPTAPI`                | ❌ No  | ✅ Yes   | ✅️ Real ChatGPT models |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT         |
**Note**: We strongly recommend using `ChatGPTAPI` since it uses the officially supported API from OpenAI. We may remove support for `ChatGPTUnofficialProxyAPI` in a future release.
1. `ChatGPTAPI` - Uses the `gpt-3.5-turbo` model with the official OpenAI chat completions API (official, robust approach, but it's not free)
2. `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
We now provide three ways of accessing the unofficial ChatGPT API, all of which have tradeoffs:
| Method                      | Free?  | Robust?  | Quality?          |
| --------------------------- | ------ | -------- | ----------------- |
| `ChatGPTAPI`                | ❌ No  | ✅ Yes   | ☑️ Mimics ChatGPT |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT   |
| `ChatGPTAPIBrowser` (v3)    | ✅ Yes | ❌ No    | ✅ Real ChatGPT   |
**Note**: I recommend that you use either `ChatGPTAPI` or `ChatGPTUnofficialProxyAPI`.
1. `ChatGPTAPI` - Uses `text-davinci-003` to mimic ChatGPT via the official OpenAI completions API (most robust approach, but it's not free and doesn't use a model fine-tuned for chat)
2. `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
3. `ChatGPTAPIBrowser` - (deprecated; v3.5.1 of this package) Uses Puppeteer to access the official ChatGPT webapp (uses the real ChatGPT, but very flaky, heavyweight, and error-prone)
OpenAI has disabled the leaked chat model we were previously using, so we're now defaulting to `text-davinci-003`, which is not free.
We've found several other hidden, fine-tuned chat models, but OpenAI keeps disabling them, so we're searching for alternative workarounds.
This package no longer requires any browser hacks – it is now using the official OpenAI completions API with a leaked model that ChatGPT uses under the hood. 🔥
```js
import { ChatGPTAPI } from 'chatgpt'

const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY
})

const res = await api.sendMessage('Hello World!')
console.log(res.text)
```
Please upgrade to `chatgpt@latest` (at least v4.0.0). The updated version is significantly more lightweight and robust compared with previous versions. You also don't have to worry about IP issues or rate limiting.
Huge shoutout to @waylaidwanderer for discovering the leaked chat model!
If you run into any issues, we do have a pretty active ChatGPT Hackers Discord with over 8k developers from the Node.js & Python communities.
Lastly, please consider starring this repo and following me on Twitter to help support the project.
Thanks && cheers, Travis
CLI
To run the CLI, you'll need an OpenAI API key:
```bash
export OPENAI_API_KEY="sk-TODO"
npx chatgpt "your prompt here"
```
By default, the response is streamed to stdout, the results are stored in a local config file, and every invocation starts a new conversation. You can use `-c` to continue the previous conversation and `--no-stream` to disable streaming.
```
Usage:
  $ chatgpt <prompt>

Commands:
  <prompt>   Ask ChatGPT a question
  rm-cache   Clears the local message cache
  ls-cache   Prints the local message cache path

For more info, run any command with the `--help` flag:
  $ chatgpt --help
  $ chatgpt rm-cache --help
  $ chatgpt ls-cache --help

Options:
  -c, --continue          Continue last conversation (default: false)
  -d, --debug             Enables debug logging (default: false)
  -s, --stream            Streams the response (default: true)
  -S, --store             Enables the local message cache (default: true)
  -t, --timeout           Timeout in milliseconds
  -k, --apiKey            OpenAI API key
  -o, --apiOrg            OpenAI API organization
  -n, --conversationName  Unique name for the conversation
  -h, --help              Display this message
  -v, --version           Display version number
```
If you have access to the `gpt-4` model, you can run the following to test out the CLI with GPT-4:

```bash
npx chatgpt@latest --model gpt-4 "Hello world"
```
Install
```bash
npm install chatgpt
```
Make sure you're using `node >= 18` so `fetch` is available (or `node >= 14` if you install a fetch polyfill).
Usage
To use this module from Node.js, you need to pick between two methods:
| Method                      | Free?  | Robust? | Quality?                        |
| --------------------------- | ------ | ------- | ------------------------------- |
| `ChatGPTAPI`                | ❌ No  | ✅ Yes  | ✅️ Real ChatGPT models + GPT-4 |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ❌ No   | ✅ Real ChatGPT webapp          |
1. `ChatGPTAPI` - Uses the `gpt-3.5-turbo` model with the official OpenAI chat completions API (official, robust approach, but it's not free). You can override the model, completion params, and system message to fully customize your assistant.
2. `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
Both approaches have very similar APIs, so it should be simple to swap between them.
**Note**: We strongly recommend using `ChatGPTAPI` since it uses the officially supported API from OpenAI and it also supports `gpt-4`. We will likely remove support for `ChatGPTUnofficialProxyAPI` in a future release.
Usage - ChatGPTAPI
Sign up for an OpenAI API key and store it in your environment.
```js
import { ChatGPTAPI } from 'chatgpt'

async function example() {
  const api = new ChatGPTAPI({
    apiKey: process.env.OPENAI_API_KEY
  })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}
```
You can override the default `model` (`gpt-3.5-turbo`) and any OpenAI chat completion params using `completionParams`:
```js
const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  completionParams: {
    model: 'gpt-4',
    temperature: 0.5,
    top_p: 0.8
  }
})
```
If you want to track the conversation, you'll need to pass the `parentMessageId` like this:
```js
const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })

// send a message and wait for the response
let res = await api.sendMessage('What is OpenAI?')
console.log(res.text)

// send a follow-up
res = await api.sendMessage('Can you expand on that?', {
  parentMessageId: res.id
})
console.log(res.text)

// send another follow-up
res = await api.sendMessage('What were we talking about?', {
  parentMessageId: res.id
})
console.log(res.text)
```
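The manual threading shown above can be wrapped in a small helper that remembers the last message id for you. This is only a sketch, not part of the package's API — `Conversation` is hypothetical, and the stub below stands in for `ChatGPTAPI` so the example runs without an API key:

```js
// Hypothetical helper that threads parentMessageId automatically.
// `api` can be anything exposing sendMessage(text, opts).
class Conversation {
  constructor(api) {
    this.api = api
    this.lastMessageId = undefined
  }

  async ask(prompt) {
    const res = await this.api.sendMessage(prompt, {
      parentMessageId: this.lastMessageId
    })
    this.lastMessageId = res.id // remember for the next turn
    return res
  }
}

// Stub standing in for ChatGPTAPI: echoes the prompt and records which
// parentMessageId each call received.
const calls = []
const stubApi = {
  async sendMessage(text, opts) {
    calls.push(opts.parentMessageId)
    return { id: `msg-${calls.length}`, text: `echo: ${text}` }
  }
}

const convo = new Conversation(stubApi)
await convo.ask('What is OpenAI?')
await convo.ask('Can you expand on that?')
// calls is now [undefined, 'msg-1'] — the second turn referenced the first.
```

Swap the stub for a real `ChatGPTAPI` instance and the same helper threads a real conversation.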
You can add streaming via the `onProgress` handler:
```js
const res = await api.sendMessage('Write a 500 word essay on frogs.', {
  // print the partial response as the AI is "typing"
  onProgress: (partialResponse) => console.log(partialResponse.text)
})

// print the full text at the end
console.log(res.text)
```
You can add a timeout using the `timeoutMs` option:
```js
// timeout after 2 minutes (which will also abort the underlying HTTP request)
const response = await api.sendMessage(
  'write me a really really long essay on frogs',
  {
    timeoutMs: 2 * 60 * 1000
  }
)
```
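Under the hood, a timeout like this is typically implemented by racing the request against a timer and aborting via an `AbortController` when the timer wins. The sketch below shows that general pattern — it mirrors the technique, not the package's exact internals:

```js
// Race a request against a timer; abort the request if the timer wins.
function withTimeout(startRequest, timeoutMs) {
  const controller = new AbortController()
  const timer = new Promise((_, reject) => {
    const t = setTimeout(() => {
      controller.abort() // cancels the underlying fetch, if one is in flight
      reject(new Error(`timed out after ${timeoutMs}ms`))
    }, timeoutMs)
    t.unref?.() // (Node) don't keep the process alive just for the timer
  })
  // startRequest receives the signal so it can pass it to fetch()
  return Promise.race([startRequest(controller.signal), timer])
}

// A request that finishes well inside a 2 minute budget resolves normally.
const ok = await withTimeout(() => Promise.resolve('ok'), 2 * 60 * 1000)
```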
If you want to see more info about what's actually being sent to OpenAI's chat completions API, set the `debug: true` option in the `ChatGPTAPI` constructor:
```js
const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  debug: true
})
```
We default to a basic `systemMessage`. You can override this in either the `ChatGPTAPI` constructor or `sendMessage`:
```js
const res = await api.sendMessage('what is the answer to the universe?', {
  systemMessage: `You are ChatGPT, a large language model trained by OpenAI. You answer as concisely as possible for each response. If you are generating a list, do not have too many items.
Current date: ${new Date().toISOString()}\n\n`
})
```
Note that we automatically handle appending the previous messages to the prompt and attempt to optimize for the available tokens (which defaults to `4096`).
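To see roughly what that optimization involves, here's a sketch of keeping only the most recent messages that fit a token budget. The real package counts tokens with a proper tokenizer; the ~4 characters per token figure and the `trimHistory` helper below are illustrative assumptions, not the package's API:

```js
// Very rough token estimate: ~4 characters per token (heuristic only; a
// real implementation would use a tokenizer such as tiktoken).
const approxTokens = (text) => Math.ceil(text.length / 4)

// Keep the most recent messages that fit within the token budget,
// walking backwards from the newest message.
function trimHistory(messages, maxTokens = 4096) {
  const kept = []
  let used = 0
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i].text)
    if (used + cost > maxTokens) break
    kept.unshift(messages[i])
    used += cost
  }
  return kept
}
```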
```js
async function example() {
  // To use ESM in CommonJS, you can use a dynamic import like this:
  const { ChatGPTAPI } = await import('chatgpt')

  // You can also try dynamic importing like this:
  // const importDynamic = new Function('modulePath', 'return import(modulePath)')
  // const { ChatGPTAPI } = await importDynamic('chatgpt')

  const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}
```
Usage - ChatGPTUnofficialProxyAPI
The API for ChatGPTUnofficialProxyAPI
is almost exactly the same. You just need to provide a ChatGPT accessToken
instead of an OpenAI API key.
```js
import { ChatGPTUnofficialProxyAPI } from 'chatgpt'

async function example() {
  const api = new ChatGPTUnofficialProxyAPI({
    accessToken: process.env.OPENAI_ACCESS_TOKEN
  })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}
```
See demos/demo-reverse-proxy for a full example:
```bash
npx tsx demos/demo-reverse-proxy.ts
```
`ChatGPTUnofficialProxyAPI` messages also contain a `conversationId` in addition to `parentMessageId`, since the ChatGPT webapp can't reference messages across different accounts & conversations.
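Tracking a proxy conversation therefore means passing both ids back on every follow-up. Here's a sketch with a stubbed `sendMessage` (so it runs offline) — the stub only mimics the response shape of the real proxy API:

```js
// Stub mimicking the shape of ChatGPTUnofficialProxyAPI.sendMessage:
// real responses carry both an id and a conversationId.
const stubApi = {
  async sendMessage(text, opts = {}) {
    return {
      id: opts.parentMessageId ? 'msg-2' : 'msg-1',
      conversationId: opts.conversationId ?? 'conv-1',
      text: `echo: ${text}`
    }
  }
}

let res = await stubApi.sendMessage('Hello World!')

// Follow-ups must pass BOTH ids so the webapp backend can find the thread.
res = await stubApi.sendMessage('Can you expand on that?', {
  conversationId: res.conversationId,
  parentMessageId: res.id
})
```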
Reverse Proxy
You can override the reverse proxy by passing `apiReverseProxyUrl`:
```js
const api = new ChatGPTUnofficialProxyAPI({
  accessToken: process.env.OPENAI_ACCESS_TOKEN,
  apiReverseProxyUrl: 'https://your-example-server.com/api/conversation'
})
```
Known reverse proxies run by community members include:
| Reverse Proxy URL                                | Author      | Rate Limits                  | Last Checked |
| ------------------------------------------------ | ----------- | ---------------------------- | ------------ |
| `https://ai.fakeopen.com/api/conversation`       | @pengzhile  | 5 req / 10 seconds by IP     | 4/18/2023    |
| `https://api.pawan.krd/backend-api/conversation` | @PawanOsman | 50 req / 15 seconds (~3 r/s) | 3/23/2023    |
Note: info on how the reverse proxies work is not being published at this time in order to prevent OpenAI from disabling access.
Access Token
To use `ChatGPTUnofficialProxyAPI`, you'll need an OpenAI access token from the ChatGPT webapp. To do this, you can use any of the following methods, which take an `email` and `password` and return an access token:
- Node.js libs
- Python libs
These libraries work with email + password accounts (e.g., they do not support accounts where you auth via Microsoft / Google).
Alternatively, you can manually get an `accessToken` by logging in to the ChatGPT webapp and then opening https://chat.openai.com/api/auth/session, which will return a JSON object containing your `accessToken` string.
Access tokens last for days.
**Note**: using a reverse proxy will expose your access token to a third party. There shouldn't be any adverse effects from this, but please consider the risks before using this method.
Docs
See the auto-generated docs for more info on methods and parameters.
Demos
Most of the demos use `ChatGPTAPI`. It should be pretty easy to convert them to use `ChatGPTUnofficialProxyAPI` if you'd rather use that approach. The only thing that needs to change is how you initialize the api with an `accessToken` instead of an `apiKey`.
To run the included demos:
- clone repo
- install node deps
- set `OPENAI_API_KEY` in `.env`
A basic demo is included for testing purposes:
```bash
npx tsx demos/demo.ts
```
A demo showing on progress handler:
```bash
npx tsx demos/demo-on-progress.ts
```
The on progress demo uses the optional `onProgress` parameter to `sendMessage` to receive intermediary results as ChatGPT is "typing".
A conversation demo:

```bash
npx tsx demos/demo-conversation.ts
```
A persistence demo shows how to store messages in Redis:

```bash
npx tsx demos/demo-persistence.ts
```
Any keyv adaptor is supported for persistence, and there are overrides if you'd like to use a different way of storing / retrieving messages.
Note that persisting messages is required for remembering the context of previous conversations beyond the scope of the current Node.js process, since by default, we only store messages in memory. Here's an external demo of using a completely custom database solution to persist messages.
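A keyv-compatible store is just an object with async `get`/`set`/`delete` methods, so rolling your own backend is straightforward. Here's a minimal in-memory sketch; the commented-out wiring is an assumption about the constructor option name, so check the docs for the exact parameter:

```js
// Minimal keyv-style store backed by a Map. Swap the Map for Redis,
// SQLite, etc. to get real persistence across process restarts.
class MapStore {
  constructor() {
    this.data = new Map()
  }
  async get(key) {
    return this.data.get(key)
  }
  async set(key, value) {
    this.data.set(key, value)
    return true
  }
  async delete(key) {
    return this.data.delete(key)
  }
}

// Hypothetical wiring (option name is an assumption — see the docs):
// const api = new ChatGPTAPI({ apiKey, messageStore: new Keyv({ store: new MapStore() }) })
const store = new MapStore()
await store.set('msg-1', { id: 'msg-1', text: 'Hello World!' })
```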
**Note**: Persistence is handled automatically when using `ChatGPTUnofficialProxyAPI` because it is connecting indirectly to ChatGPT.
Projects
All of these awesome projects are built using the `chatgpt` package. 🤯
- Twitter Bot powered by ChatGPT ✨
- Mention @ChatGPTBot on Twitter with your prompt to try it out
- ChatGPT API Server - API server for this package with support for multiple OpenAI accounts, proxies, and load-balancing requests between accounts.
- ChatGPT Prompts - A collection of 140+ of the best ChatGPT prompts from the community.
- Lovelines.xyz
- Chrome Extension (demo)
- VSCode Extension #1 (demo, updated version, marketplace)
- VSCode Extension #2 (marketplace)
- VSCode Extension #3 (marketplace)
- VSCode Extension #4 (marketplace)
- Raycast Extension #1 (demo)
- Raycast Extension #2
- Telegram Bot #1
- Telegram Bot #2
- Telegram Bot #3 (group privacy mode, ID-based auth)
- Telegram Bot #4 (queue system, ID-based chat thread)
- Telegram Bot #5 (group privacy mode, ID-based chat thread)
- Deno Telegram Bot
- Go Telegram Bot
- Telegram Bot for YouTube Summaries
- GitHub ProBot
- Discord Bot #1
- Discord Bot #2
- Discord Bot #3
- Discord Bot #4 (selfbot)
- Discord Bot #5
- Discord Bot #6 (Shakespeare bot)
- Discord Bot #7
- Zoom Chat
- WeChat Bot #1
- WeChat Bot #2
- WeChat Bot #3
- WeChat Bot #4
- WeChat Bot #5
- WeChat Bot #6
- WeChat Bot #7
- QQ Bot (plugin for Yunzai-bot)
- QQ Bot (plugin for KiviBot)
- QQ Bot (oicq)
- QQ Bot (oicq + RabbitMQ)
- QQ Bot (go-cqhttp)
- QQ Bot (plugin for Yunzai-Bot + Bull) (Lightweight, Google Bard support 💪)
- EXM smart contracts
- Flutter ChatGPT API
- Carik Bot
- Github Action for reviewing PRs
- WhatsApp Bot #1 (DALL-E + Whisper support 💪)
- WhatsApp Bot #2
- WhatsApp Bot #3 (multi-user support)
- WhatsApp Bot #4 (schedule periodic messages)
- WhatsApp Bot #5 (RaspberryPi + ngrok + Twilio)
- WhatsApp Bot #6 (Session and chat history storage with MongoStore)
- Matrix Bot
- Rental Cover Letter Generator
- Assistant CLI
- Teams Bot
- Askai
- TalkGPT
- ChatGPT With Voice
- iOS Shortcut
- Slack Bot #1
- Slack Bot #2 (with queueing mechanism)
- Slack Bot #3
- Slack Bot #4 (Serverless AWS Lambda)
- Slack Bot #5 (Hosted)
- Electron Bot
- Kodyfire CLI
- Twitch Bot
- Continuous Conversation
- Figma plugin
- NestJS server
- NestJS ChatGPT Starter Boilerplate
- Wordsmith: Add-in for Microsoft Word
- QuizGPT: Create Kahoot quizzes with ChatGPT
- openai-chatgpt: Talk to ChatGPT from the terminal
- Clippy the Salesforce chatbot (ClippyJS joke bot)
- ai-assistant Chat assistant
- Feishu Bot
- DomainGPT: Discover available domain names
- AI Poem Generator
- Next.js ChatGPT With Firebase
- ai-commit – GPT-3 Commit Message Generator
- AItinerary – ChatGPT itinerary Generator
- wechaty-chatgpt - A chatbot based on Wechaty & ChatGPT
- Julius GPT - Generate and publish your content from the CLI
- OpenAI-API-Service - Provides OpenAI related APIs for businesses
- Discord Daily News Bot - Discord bot that generate funny daily news
- ai-assistant - Create a chat website similar to ChatGPT
If you create a cool integration, feel free to open a PR and add it to the list.
Compatibility
- This package is ESM-only.
- This package supports `node >= 14`.
- This module assumes that `fetch` is installed.
- If you want to build a website using `chatgpt`, we recommend using it only from your backend API.
Credits
- Huge thanks to @waylaidwanderer, @abacaj, @wong2, @simon300000, @RomanHotsiy, @ElijahPepe, and all the other contributors 💪
- OpenAI for creating ChatGPT 🔥
- I run the ChatGPT Hackers Discord with over 8k developers – come join us!
License
MIT © Travis Fischer
If you found this project interesting, please consider sponsoring me or following me on Twitter.