inference
Wraps models from a bunch of different inference providers and rate limits them, while getting them to support TypeScript more natively.
My application may send many parallel requests to inference models, and those requests need to be rate limited per provider across the whole application. This package solves that problem.
This is a major WIP, so a bunch of things are left unimplemented for the time being. However, the basic functionality should be there.
Supported providers:
- OpenAI (for chat, audio, image, embedding)
- Together (for chat)
- Mistral (for chat)
- Whisper.cpp (for audio)
WIP Stuff:
- consistent JSON mode
- error handling
- more rate limiting options
- more providers (llama.cpp for chat, image and embedding)
- move to config file & code gen for better typing?
Usage
Check out test/index.test.ts for usage examples.
Generally speaking:
- Instantiate a provider

```ts
const oai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
});
```
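Other providers should follow the same pattern. A minimal sketch, assuming a Together provider is exported as `TogetherProvider` with the same constructor shape (both the class name and the shape are assumptions here):

```ts
// Assumption: a TogetherProvider export that takes { apiKey }
// the same way OpenAIProvider does.
const together = new TogetherProvider({
  apiKey: process.env.TOGETHER_API_KEY!,
});
```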
- Create a rate limiter based on your own usage (the value is in requests per second)

```ts
const oaiLimiter = createRateLimiter(2);
```
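Since limits are tracked per provider, you would typically create one limiter per provider rather than sharing a single one across all of them. A small sketch (the rate value here is illustrative, not a recommendation):

```ts
// Separate budget for a second provider; 1 request per second is arbitrary.
const togetherLimiter = createRateLimiter(1);
```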
- Define which models you want to use and their aliases

```ts
const CHAT_MODELS: Record<string, ChatModel> = {
  "gpt-3.5": {
    provider: oai,
    name: "gpt-3.5",
    providerModel: "gpt-3.5-turbo-0125",
    rateLimiter: oaiLimiter,
  },
  "gpt-4": {
    provider: oai,
    name: "gpt-4",
    providerModel: "gpt-4-0125-preview",
    rateLimiter: oaiLimiter,
  },
};
```
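Note that both models above share `oaiLimiter`, so calls to either alias draw from the same 2-requests-per-second budget for the OpenAI provider.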
- Create an Inference instance with the models you want

```ts
const inference = new Inference({ chatModels: CHAT_MODELS });
```
- Call it with the model you want to use

```ts
const result = await inference.chat({ model: "gpt-3.5", prompt: "Hello, world!" });
```
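Putting the steps together, here is a minimal end-to-end sketch. It assumes `OpenAIProvider`, `createRateLimiter`, `Inference`, and `ChatModel` are all exported from `@cjpais/inference` (the export surface isn't shown above), and it just logs the raw result since the exact return shape isn't documented here:

```ts
import {
  OpenAIProvider,
  createRateLimiter,
  Inference,
  type ChatModel,
} from "@cjpais/inference"; // assumed export surface

const oai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const oaiLimiter = createRateLimiter(2); // 2 requests per second

const CHAT_MODELS: Record<string, ChatModel> = {
  "gpt-3.5": {
    provider: oai,
    name: "gpt-3.5",
    providerModel: "gpt-3.5-turbo-0125",
    rateLimiter: oaiLimiter,
  },
};

const inference = new Inference({ chatModels: CHAT_MODELS });

// Parallel calls get queued by the shared limiter instead of
// hitting the provider all at once.
const results = await Promise.all([
  inference.chat({ model: "gpt-3.5", prompt: "Hello, world!" }),
  inference.chat({ model: "gpt-3.5", prompt: "Summarize rate limiting in one sentence." }),
]);
console.log(results);
```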
To install dependencies:

```sh
bun install
```

To run:

```sh
bun run index.ts
```