# SmolAI
This is an alpha package in active development. Don't rely on it, but feel free to try it out and reach out to swyx with any questions.

The example below defines zod schemas for the desired output, registers a `print` function with its JSON schema, and asks the model to call it via OpenAI function calling:
```ts
import { SmolLogger } from '@smol-ai/logger';
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';
import {
  Configuration,
  OpenAIApi,
  SmolAI
} from 'smolai'; // just a nice DX overlay on top of OpenAI's API

const logger = new SmolLogger();

const tagSchema = z.object({
  tag: z.string({ description: "A short Wikipedia-style news story tag describing the topic of the conversation, using acronyms and phrasing familiar for a developer and investor audience. e.g. Docker, CLIs, Compute, Audio, AI, 3D, Security, Gaming." }),
  confidence: z.number({ description: "Confidence level for the tag, a value between 0 to 1." }).min(0).max(1)
});

const printSchema = zodToJsonSchema(z.object({
  title: z.string({ description: "Verbatim title of the story" }),
  tags: z.array(tagSchema).min(3).max(6),
}));

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));
const smolai = new SmolAI(openai, `You are a bot that suggests appropriate Wikipedia-style news story tags given a Hacker News blog post title and comments, together with the degree of confidence in the tag. Suggested tags should be short. One word, as far as possible. e.g. Docker, Audio, AI, 3D, Security. The user will give you a title, respond with the tags you think are appropriate.`);

const print = ({ title, tags }) => title + tags; // don't really care about the impl of print
smolai.addFunction(print, printSchema); // schema is validated coming in and going out
smolai.model = 'gpt-4-0613';

const response = await smolai.chat({
  messages: [
    'The post title: Testing the memory safe Rust implementation of Sudo/Su',
    'The HN comments: Is sudo known to be memory _un_safe? Because otherwise, calling this one "the memory safe [Rust] implementation of Sudo" is a bit weird.'
  ],
});

const args = JSON.parse(response.choices[0].message.function_call.arguments);
logger.log('final result', args);
```
## OpenAI Edge

A TypeScript module for querying OpenAI's API using `fetch` (a standard Web API) instead of `axios`. This is a drop-in replacement for the official `openai` module (which has `axios` as a dependency), except that you need to supply `fetch` yourself in Node v17 and below (`fetch` landed in Node 18).

As well as reducing the bundle size, removing the dependency means we can query OpenAI from edge environments. Edge functions such as Next.js Edge API Routes are very fast and, unlike lambda functions, allow streaming data to the client.

The latest version of this module has feature parity with the official `v3.3.0`, and also supports the chat completion `functions` parameter, which isn't yet included in the official module.
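For illustration, here's a minimal sketch of passing `functions` to `createChatCompletion`. The weather function is a hypothetical example, and the response is typed via the `ResponseTypes` export described below:

```ts
import { Configuration, OpenAIApi, ResponseTypes } from "smolai"

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
)

const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613",
  messages: [{ role: "user", content: "What's the weather in Boston?" }],
  // JSON Schema description of a function the model may choose to call
  functions: [
    {
      name: "get_current_weather", // hypothetical function, for illustration only
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
        },
        required: ["location"],
      },
    },
  ],
})

const data = (await response.json()) as ResponseTypes["createChatCompletion"]
// If the model chose to call the function, its arguments arrive as a JSON string
const call = data.choices[0].message?.function_call
if (call?.arguments) {
  console.log(call.name, JSON.parse(call.arguments))
}
```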
## Installation

```shell
npm install smolai
# or yarn add smolai
```
## Responses

Every method returns a promise resolving to the standard `fetch` response, i.e. `Promise<Response>`. Since `fetch` doesn't have built-in support for types in its response data, `smolai` includes a `ResponseTypes` export which you can use to assert the correct type on the JSON response:
```ts
import { Configuration, OpenAIApi, ResponseTypes } from "smolai"

const configuration = new Configuration({
  apiKey: "YOUR-API-KEY",
})
const openai = new OpenAIApi(configuration)

const response = await openai.createImage({
  prompt: "A cute baby sea otter",
  size: "512x512",
  response_format: "url",
})

const data = (await response.json()) as ResponseTypes["createImage"]
const url = data.data?.[0]?.url
console.log({ url })
```
### Without global fetch

This module has zero dependencies and expects `fetch` to be in the global namespace (as it is in web, edge, and modern Node environments). If you're running in an environment without a global `fetch` defined, e.g. an older version of Node.js, please pass `fetch` when creating your instance:
```ts
import fetch from "node-fetch"

const openai = new OpenAIApi(configuration, undefined, fetch)
```
## Available methods

- `cancelFineTune`
- `createAnswer`
- `createChatCompletion` (including support for `functions`)
- `createClassification`
- `createCompletion`
- `createEdit`
- `createEmbedding`
- `createFile`
- `createFineTune`
- `createImage`
- `createImageEdit`
- `createImageVariation`
- `createModeration`
- `createSearch`
- `createTranscription`
- `createTranslation`
- `deleteFile`
- `deleteModel`
- `downloadFile`
- `listEngines`
- `listFiles`
- `listFineTuneEvents`
- `listFineTunes`
- `listModels`
- `retrieveEngine`
- `retrieveFile`
- `retrieveFineTune`
- `retrieveModel`
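
All of these follow the pattern shown above: each returns a `Promise<Response>` whose JSON body can be typed with the matching `ResponseTypes` key. As a minimal sketch (assuming the `createEmbedding` key mirrors the method name, as `createImage` does above):

```ts
import { Configuration, OpenAIApi, ResponseTypes } from "smolai"

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
)

const response = await openai.createEmbedding({
  model: "text-embedding-ada-002",
  input: "The quick brown fox jumps over the lazy dog",
})

const data = (await response.json()) as ResponseTypes["createEmbedding"]
// One embedding vector per input string
console.log(data.data[0].embedding.length)
```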
## Edge route handler examples

Here are some sample Next.js Edge API Routes using `smolai`.
### 1. Streaming chat with gpt-3.5-turbo

Note that when using the `stream: true` option, OpenAI responds with server-sent events. Here's an example react hook to consume SSEs, and here's a full NextJS example.
```ts
import type { NextRequest } from "next/server"
import { Configuration, OpenAIApi } from "smolai"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

const handler = async (req: NextRequest) => {
  try {
    const completion = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Who won the world series in 2020?" },
        {
          role: "assistant",
          content: "The Los Angeles Dodgers won the World Series in 2020.",
        },
        { role: "user", content: "Where was it played?" },
      ],
      max_tokens: 7,
      temperature: 0,
      stream: true,
    })

    // Pass the SSE stream straight through to the client
    return new Response(completion.body, {
      headers: {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "text/event-stream;charset=utf-8",
        "Cache-Control": "no-cache, no-transform",
        "X-Accel-Buffering": "no",
      },
    })
  } catch (error: any) {
    console.error(error)
    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
}

export const config = {
  runtime: "edge",
}

export default handler
```
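The react hook linked above is the more robust way to consume this stream on the client. As a rough sketch (assuming the handler above is mounted at `/api/chat`, and glossing over the fact that SSE messages can be split across chunks), the stream can also be read directly:

```ts
// Rough client-side sketch: read the SSE stream and accumulate the deltas.
// Assumes the edge handler above is mounted at /api/chat; a production
// consumer should buffer partial lines across chunks.
const res = await fetch("/api/chat")
const reader = res.body!.getReader()
const decoder = new TextDecoder()
let text = ""

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // Each SSE message is a "data: {...}" line; the stream ends with "data: [DONE]"
  for (const line of decoder.decode(value, { stream: true }).split("\n")) {
    if (line.startsWith("data: ") && line !== "data: [DONE]") {
      const chunk = JSON.parse(line.slice("data: ".length))
      text += chunk.choices[0].delta?.content ?? ""
    }
  }
}

console.log(text)
```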
### 2. Text completion with Davinci
```ts
import type { NextRequest } from "next/server"
import { Configuration, OpenAIApi, ResponseTypes } from "smolai"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

const handler = async (req: NextRequest) => {
  const { searchParams } = new URL(req.url)
  try {
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: searchParams.get("prompt") ?? "Say this is a test",
      max_tokens: 7,
      temperature: 0,
      stream: false,
    })

    const data = (await completion.json()) as ResponseTypes["createCompletion"]

    return new Response(JSON.stringify(data.choices), {
      status: 200,
      headers: {
        "content-type": "application/json",
      },
    })
  } catch (error: any) {
    console.error(error)
    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
}

export const config = {
  runtime: "edge",
}

export default handler
```
### 3. Creating an Image with DALL·E
```ts
import type { NextRequest } from "next/server"
import { Configuration, OpenAIApi, ResponseTypes } from "smolai"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

const handler = async (req: NextRequest) => {
  const { searchParams } = new URL(req.url)
  try {
    const image = await openai.createImage({
      prompt: searchParams.get("prompt") ?? "A cute baby sea otter",
      n: 1,
      size: "512x512",
      response_format: "url",
    })

    const data = (await image.json()) as ResponseTypes["createImage"]
    const url = data.data?.[0]?.url

    return new Response(JSON.stringify({ url }), {
      status: 200,
      headers: {
        "content-type": "application/json",
      },
    })
  } catch (error: any) {
    console.error(error)
    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
}

export const config = {
  runtime: "edge",
}

export default handler
```
## Acknowledgements

This is a fork of https://github.com/dan-kwiat/openai-edge!