@devoxa/openai-structured-chat
OpenAI chat completion with schema validation and error correction
Installation
yarn add @devoxa/openai-structured-chat openai zod
Usage
Basic usage
import OpenAI from 'openai'
import { StructuredChat } from '@devoxa/openai-structured-chat'
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const chat = new StructuredChat({
openai: openai,
params: { model: 'gpt-4o' },
})
const result = await chat.send({
messages: [
{ role: 'system', content: 'Your task is to convert any statements to standard English.' },
{ role: 'user', content: 'She no went to the market.' },
],
})
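The return value is the chat completion response, so the assistant's reply can be read from the first choice (a small sketch, assuming the same response shape that the schema-validated example below reads from):
// `result` follows the OpenAI chat completion response shape
// (the same shape the schema-validated example below accesses via `result.choices`).
console.log(result.choices[0].message.content)
// -> e.g. "She didn't go to the market."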
Schema-validated functions
import { z } from 'zod'
const chat = new StructuredChat({
openai: openai,
params: {
model: 'gpt-4o',
tools: [
{
type: 'function',
function: {
name: 'return_sentiment' as const,
parameters: z.object({
/** The sentiment of the tweet */
sentiment: z
.enum(['positive', 'neutral', 'negative'])
.describe('The sentiment of the tweet'),
}),
},
},
],
},
options: {
// The maximum number of times that the client will attempt to correct errors in case of a
// hallucinated model response, like an incorrect function name or an argument schema mismatch.
// Defaults to `2`.
maxErrorCorrectionTries: 2,
},
})
const result = await chat.send({
// Messages to add to the existing conversation
messages: [
{
role: 'system',
content:
'You will be provided with a tweet, and your task is to classify its sentiment as positive, neutral, or negative.',
},
{ role: 'user', content: 'I loved the new Batman movie!' },
],
// - `none` means the model will not call a function and instead generate a message.
// - `auto` means the model can pick between generating a message or calling a function.
// - `function` means the model will always call a function and never generate a message.
// - `{ type: 'function', function: { name: 'my_function' } }` forces the model to call a specific function.
// Defaults to `none` when no functions are present or `auto` when functions are present.
tool_choice: 'function',
})
const func = result.choices[0].message.tool_calls?.[0].function
if (func?.name === 'return_sentiment') {
console.log(func.arguments) // -> { sentiment: 'positive' }
}
Message history
You can retrieve and manipulate the message history of the chat with the provided methods:
const messages = chat.getMessages()
chat.setMessages([
{ role: 'system', content: 'Your task is to convert any statements to correct English.' },
{ role: 'user', content: 'She no went to the market.' },
])
chat.addMessage({ role: 'assistant', content: "She didn't go to the market." })
Caching
You can provide an optional cacheAdapter, which causes all requests to the OpenAI API to be cached based on the parameters and options of the requests. This package includes a FileSystemCacheAdapter that is meant to be used for consistent integration tests. For your own use-cases you can bring your own cache adapter by implementing the CacheAdapter interface.
import { FileSystemCacheAdapter } from '@devoxa/openai-structured-chat'
import path from 'path'
const cacheAdapter = new FileSystemCacheAdapter({
baseDirectory: path.join(__dirname, '.cache'),
})
const chat = new StructuredChat({
openai: openai,
cacheAdapter,
// ...
})
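For other environments (for example an in-memory cache in unit tests), a custom adapter could look roughly like the sketch below. The get/set method names and signatures are an assumption, not taken from this README; check the package's CacheAdapter type definition for the exact contract.
import { StructuredChat } from '@devoxa/openai-structured-chat'

// Hypothetical adapter: assumes CacheAdapter is an async string key/value store.
class InMemoryCacheAdapter {
  private store = new Map<string, string>()

  async get(key: string): Promise<string | undefined> {
    return this.store.get(key)
  }

  async set(key: string, value: string): Promise<void> {
    this.store.set(key, value)
  }
}

const chat = new StructuredChat({
  openai: openai,
  cacheAdapter: new InMemoryCacheAdapter(),
  // ...
})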
Default request params and options
You can provide defaults for all requests in the top level params and options parameters of the constructor. Keep in mind that the default timeout of the openai library is 60s (options.timeout) with 2 additional retries (options.maxRetries), which may be too long for your application.
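As an illustrative sketch (assuming timeout and maxRetries are forwarded to the openai library's request options, as the naming above suggests), tighter defaults could be set like this:
const chat = new StructuredChat({
  openai: openai,
  // Defaults merged into every `chat.send()` request
  params: { model: 'gpt-4o' },
  options: {
    maxErrorCorrectionTries: 2,
    // Assumed to be passed through to the `openai` library's request options:
    timeout: 15_000, // 15s instead of the 60s default
    maxRetries: 1, // instead of the default 2 additional retries
  },
})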
Contributors
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!
License
MIT