OpenPipe Node API Library
This library wraps TypeScript and JavaScript OpenAI API calls and logs additional data to the configured OPENPIPE_BASE_URL for further processing.
It is fully compatible with OpenAI's SDK and logs both streaming and non-streaming requests and responses.
Installation
npm install --save openpipe
# or
yarn add openpipe
Import
ESM
// import OpenAI from "openai"
import OpenAI from "openpipe/openai";
CJS
// const OpenAI = require("openai")
const OpenAI = require("openpipe/openai").default;
Usage
- Create a project at https://app.openpipe.ai
- Find your project's API key at https://app.openpipe.ai/settings
- Configure the OpenPipe client as shown below.
// import OpenAI from "openai"
import OpenAI from "openpipe/openai";
// Fully compatible with original OpenAI initialization
const openai = new OpenAI({
  apiKey: "my api key", // defaults to process.env["OPENAI_API_KEY"]
  // openpipe key is optional
  openpipe: {
    apiKey: "my api key", // defaults to process.env["OPENPIPE_API_KEY"]
    baseUrl: "my url", // defaults to process.env["OPENPIPE_BASE_URL"], or https://api.openpipe.ai/api/v1 if not set
  },
});

async function main() {
  // Allows optional openpipe object
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Say this is a test" }],
    model: "gpt-3.5-turbo",
    // optional
    openpipe: {
      // Add custom searchable tags
      tags: {
        prompt_id: "extract_user_intent",
        any_key: "any_value",
      },
      logRequest: true, // Enable/disable data collection. Defaults to true.
    },
  });
  console.log(completion.choices);
}

main();
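Streaming requests are logged as well, with no extra configuration. A minimal sketch using the standard OpenAI streaming interface (the mainStream name and tag value are illustrative):

async function mainStream() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Say this is a test" }],
    model: "gpt-3.5-turbo",
    stream: true,
    // optional, same shape as in the non-streaming call
    openpipe: {
      tags: { prompt_id: "streaming_example" }, // illustrative tag
    },
  });
  // Chunks arrive incrementally as they are generated
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}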
FAQ
How do I report calls to my self-hosted instance?
Start an instance by following the instructions on Running Locally. Once it's running, point your OPENPIPE_BASE_URL to your self-hosted instance.
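For example, here is a minimal sketch assuming a self-hosted instance at http://localhost:3000 that exposes the same /api/v1 path as the hosted service (substitute your instance's actual URL):

const openai = new OpenAI({
  apiKey: "my api key",
  openpipe: {
    apiKey: "my api key",
    // Assumed local address; point this at your own instance
    baseUrl: "http://localhost:3000/api/v1",
  },
});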
What if my OPENPIPE_BASE_URL is misconfigured or my instance goes down? Will my OpenAI calls stop working?
Your OpenAI calls will continue to function as expected no matter what. The SDK handles logging errors gracefully without affecting OpenAI inference.
See the GitHub repo for more details.