aifn
Create type-safe functions using AI Language Models with ease.
Contents
- Features
- Installation
- Quick Usage
- Guides
- API Reference
- Changelog
Features
- 🤖 Support for multiple AI providers:
- OpenAI
- Anthropic
- Google Gemini
- Ollama (Local models)
- 🛠️ Ability to implement custom providers
- 📝 Type-safe function creation using Zod
- 🔒 Runtime validation of LLM output
- 🧪 Built-in mocking support for testing
- 🔄 Easy provider switching
- 🎯 Example-based prompt enhancement
Installation
# Using npm
npm install aifn
# Using yarn
yarn add aifn
# Using pnpm
pnpm add aifn
You'll also need to install the provider SDKs you want to use:
# For OpenAI
pnpm add openai
# For Anthropic
pnpm add @anthropic-ai/sdk
# For Google's Gemini
pnpm add @google/generative-ai
# For Ollama
pnpm add ollama
Quick Usage
Usage with OpenAI
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { OpenAI } from 'openai'
const toFrench = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' }), 'gpt-4o-mini'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
Usage with Anthropic
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { Anthropic } from '@anthropic-ai/sdk'
const toFrench = fn({
  llm: llm.anthropic(new Anthropic({ apiKey: 'YOUR_ANTHROPIC_API_KEY' }), 'claude-3-5-haiku-20241022'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
Usage with Gemini
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { GoogleGenerativeAI } from '@google/generative-ai'
const toFrench = fn({
  llm: llm.gemini(new GoogleGenerativeAI('YOUR_GEMINI_API_KEY'), 'gemini-1.5-flash'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
Usage with Ollama
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { Ollama } from 'ollama'
const toFrench = fn({
  llm: llm.ollama(new Ollama(), 'mistral:7b'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
Guides
Adding examples for better results
You can specify examples for your function to improve the quality of the output.
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { OpenAI } from 'openai'
const toFrench = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' }), 'gpt-4o-mini'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
  examples: [
    { input: 'Hello', output: { translation: 'Bonjour' } },
    { input: 'How are you?', output: { translation: 'Comment ça va?' } },
  ],
})
Using a custom LLM provider
You can use custom LLM providers by creating them with the llm.custom method:
import { z } from 'zod'
import { llm, fn, LLMRequest, LLMResponse } from 'aifn'
const toFrench = fn({
  llm: llm.custom(async (req: LLMRequest): Promise<LLMResponse> => {
    // Implement your custom LLM calling logic here and return one of the
    // LLMResponse variants, e.g.:
    return { type: 'error', error: new Error('not implemented') }
  }),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
The request and response types look as follows:
type LLMRequest = {
  system: string
  messages: Message[]
  output_schema?: ZodSchema<any>
}

type Message = {
  role: 'user' | 'assistant'
  content: string
}

type LLMResponse =
  | { type: 'text'; content: string; response: any }
  | { type: 'json'; data: unknown; response: any }
  | { type: 'error'; error: unknown }
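As a concrete illustration, here is a minimal sketch of a custom provider that forwards the request to an HTTP endpoint and maps the reply onto these types. The endpoint URL and the body.content field are placeholders for your own backend, not part of aifn:
import { llm, LLMRequest, LLMResponse } from 'aifn'

const provider = llm.custom(async (req: LLMRequest): Promise<LLMResponse> => {
  try {
    // Hypothetical endpoint; substitute your own gateway or model server.
    const res = await fetch('https://example.com/v1/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ system: req.system, messages: req.messages }),
    })
    const body = await res.json()
    if (req.output_schema) {
      // Structured output requested: assume the endpoint returns JSON as a string.
      return { type: 'json', data: JSON.parse(body.content), response: body }
    }
    return { type: 'text', content: body.content, response: body }
  } catch (error) {
    return { type: 'error', error }
  }
})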
Get the function configuration
The function created with fn has a config property that contains the configuration used to create the function.
import { z } from 'zod'
import { OpenAI } from 'openai'
import { llm, fn } from 'aifn'
const toFrench = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' }), 'gpt-4o-mini'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
console.log(toFrench.config)
// {
//   llm: {
//     provider: 'openai',
//     client: OpenAI {...},
//     model: 'gpt-4o-mini',
//     ...
//   },
//   description: 'Translate the user message from English to French',
//   input: ZodString {...},
//   output: ZodObject {...},
// }
You can use this configuration to duplicate the function with a different LLM, for example:
import { Ollama } from 'ollama'
const otherToFrench = fn({
  ...toFrench.config,
  llm: llm.ollama(new Ollama(), 'llama3.1'),
})
Mock the function during tests
The function created with fn has mock and unmock methods that can be used to mock the function during tests.
import { toFrench } from './my/file.js'

describe('my awesome feature', () => {
  before(() => {
    toFrench.mock(async text => ({ translation: `Translated(${text})` }))
  })

  after(() => {
    toFrench.unmock()
  })

  it('translates text', async () => {
    const res = await toFrench('Hello, how are you?')
    expect(res.translation).to.equal('Translated(Hello, how are you?)')
  })
})
API Reference
Functions
fn
function fn<Args, R>(config: FnConfig<Args, R>): Fn<Args, R>
Creates a type-safe function that uses an LLM to transform inputs into outputs.
Parameters:
- config: Configuration object with the following properties:
  - llm: LLM provider instance (see LLM Providers below)
  - description: String describing what the function does (used as system prompt)
  - input: Zod schema for the input type
  - output: Zod schema for the output type
  - examples?: Optional array of input/output examples to guide the LLM

Returns: A function with the following properties:
- (args: Args) => Promise<R>: The main function that processes inputs
- config: The configuration object used to create the function
- mock(implementation: (args: Args) => Promise<R>): Method to set a mock implementation
- unmock(): Method to remove the mock implementation
Example:
import { z } from 'zod'
import { fn, llm } from 'aifn'
import { OpenAI } from 'openai'
const summarize = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_API_KEY' }), 'gpt-3.5-turbo'),
  description: 'Summarize the given text in a concise way',
  input: z.object({
    text: z.string().describe('The text to summarize'),
    maxWords: z.number().describe('Maximum number of words in the summary')
  }),
  output: z.object({
    summary: z.string().describe('The summarized text'),
    wordCount: z.number().describe('Number of words in the summary')
  }),
  examples: [{
    input: { text: 'TypeScript is a programming language...', maxWords: 10 },
    output: { summary: 'TypeScript: JavaScript with static typing.', wordCount: 5 }
  }]
})
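You can then call summarize like any other async function; the result is validated against the output schema at runtime. The logged values below are illustrative:
const res = await summarize({
  text: 'TypeScript is a programming language...',
  maxWords: 10,
})
console.log(res.summary) // e.g. 'TypeScript: JavaScript with static typing.'
console.log(res.wordCount) // e.g. 5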
LLM Providers
llm.openai
function openai(client: OpenAI, model: string): LLM
Creates an OpenAI LLM provider.
Parameters:
- client: OpenAI client instance
- model: Model name (e.g., 'gpt-4', 'gpt-4o-mini')
Example:
import { OpenAI } from 'openai'
import { llm } from 'aifn'
const provider = llm.openai(
  new OpenAI({ apiKey: 'YOUR_API_KEY' }),
  'gpt-4o-mini'
)
llm.anthropic
function anthropic(client: Anthropic, model: string): LLM
Creates an Anthropic LLM provider.
Parameters:
- client: Anthropic client instance
- model: Model name (e.g., 'claude-3-5-haiku-20241022')
Example:
import Anthropic from '@anthropic-ai/sdk'
import { llm } from 'aifn'
const provider = llm.anthropic(
  new Anthropic({ apiKey: 'YOUR_API_KEY' }),
  'claude-3-5-haiku-20241022'
)
llm.gemini
function gemini(client: GoogleGenerativeAI, model: string): LLM
Creates a Google Gemini LLM provider.
Parameters:
- client: GoogleGenerativeAI client instance
- model: Model name (e.g., 'gemini-1.5-flash')
Example:
import { GoogleGenerativeAI } from '@google/generative-ai'
import { llm } from 'aifn'
const provider = llm.gemini(
  new GoogleGenerativeAI('YOUR_API_KEY'),
  'gemini-1.5-flash'
)
llm.ollama
function ollama(client: Ollama, model: string): LLM
Creates an Ollama LLM provider for local models.
Parameters:
- client: Ollama client instance
- model: Model name (e.g., 'llama3.1', 'mistral')
Example:
import { Ollama } from 'ollama'
import { llm } from 'aifn'
const provider = llm.ollama(new Ollama(), 'llama3.1')
llm.custom
function custom(generate: (req: LLMRequest) => Promise<LLMResponse>): LLM
Creates a custom LLM provider with your own implementation.
Parameters:
- generate: Function that implements the LLM request/response cycle
Example:
import { llm, LLMRequest, LLMResponse } from 'aifn'
const provider = llm.custom(async (req: LLMRequest): Promise<LLMResponse> => {
  // Your custom implementation here
  return {
    type: 'json',
    data: { /* your response data */ },
    response: { /* raw response data */ }
  }
})
Types
LLMRequest
type LLMRequest = {
  system: string // System prompt
  messages: Message[] // Conversation history
  output_schema?: ZodSchema // Expected output schema
}
LLMResponse
type LLMResponse =
  | { type: 'text'; content: string; response: any }
  | { type: 'json'; data: unknown; response: any }
  | { type: 'error'; error: unknown }
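Because LLMResponse is a discriminated union on type, TypeScript narrows each variant for you. A minimal sketch of handling all three cases (the unwrap helper is illustrative, not part of aifn):
import { LLMResponse } from 'aifn'

function unwrap(res: LLMResponse): unknown {
  switch (res.type) {
    case 'text':
      return res.content // plain text reply
    case 'json':
      return res.data // structured data matching the output schema
    case 'error':
      throw res.error // propagate provider failure
  }
}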
Message
type Message = {
  role: 'user' | 'assistant'
  content: string
}
Changelog
2.0.0-alpha.1 (Dec 8th 2024)
- Implement structured output for Ollama
2.0.0-alpha.0 (Dec 2nd 2024)
- Refactor the codebase
- Add ability to implement custom LLM providers
- Add ability to mock the function during tests
- Add ability to get the function configuration
- Implement structured output for OpenAI
1.0.0 (Nov 25th 2024)
- First version