# aiconfig
v1.1.15
Library to help manage AI prompts, models and parameters using the .aiconfig file format.
Full documentation: aiconfig.lastmileai.dev
## Overview
AIConfig saves prompts, models and model parameters as source control friendly configs. This allows you to iterate on prompts and model parameters separately from your application code.
- Prompts as configs: a standardized JSON format to store generative AI model settings, prompt inputs/outputs, and flexible metadata.
- Model-agnostic SDK: Python & Node SDKs to use `aiconfig` in your application code. AIConfig is designed to be model-agnostic and multi-modal, so you can extend it to work with any generative AI model, including text, image and audio.
- AI Workbook editor: a notebook-like playground to edit `aiconfig` files visually, run prompts, tweak models and model settings, and chain things together.
## What problem it solves
Today, application code is tightly coupled with the gen AI settings for the application -- prompts, parameters, and model-specific logic are all jumbled in with app code.
- results in increased complexity
- makes it hard to iterate on the prompts or try different models easily
- makes it hard to evaluate prompt/model performance
AIConfig helps unwind complexity by separating prompts, model parameters, and model-specific logic from your application.
- simplifies application code -- simply call `config.run()`
- open the `aiconfig` in a playground to iterate quickly
- version control and evaluate the `aiconfig` -- it's the AI artifact for your application.
## Features
- [x] Source-control friendly `aiconfig` format to save prompts and model settings, which you can use for evaluation, reproducibility and simplifying your application code.
- [x] Multi-modal and model agnostic. Use with any model, and serialize/deserialize data with the same `aiconfig` format.
- [x] Prompt chaining and parameterization with {{handlebars}} templating syntax, allowing you to pass dynamic data into prompts (as well as between prompts).
- [x] Streaming supported out of the box, allowing you to get playground-like streaming wherever you use `aiconfig`.
- [x] Notebook editor. AI Workbooks editor to visually create your `aiconfig`, and use the SDK to connect it to your application code.
## Install
Install with your favorite package manager for Node.

npm:

```bash
npm install aiconfig
```

yarn:

```bash
yarn add aiconfig
```

Detailed installation instructions.
## Getting Started
Please see the detailed Getting Started guide.

In this quickstart, you will create a customizable NYC travel itinerary using `aiconfig`.
This AIConfig contains a prompt chain to get a list of travel activities from an LLM and then generate an itinerary in an order specified by the user.
Link to tutorial code: here
https://github.com/lastmile-ai/aiconfig/assets/25641935/d3d41ad2-ab66-4eb6-9deb-012ca283ff81
Download `travel.aiconfig.json`
Note: Don't worry if you don't understand all the pieces of this yet, we'll go over it step by step.
```json
{
  "name": "NYC Trip Planner",
  "description": "Intrepid explorer with ChatGPT and AIConfig",
  "schema_version": "latest",
  "metadata": {
    "models": {
      "gpt-3.5-turbo": {
        "model": "gpt-3.5-turbo",
        "top_p": 1,
        "temperature": 1
      },
      "gpt-4": {
        "model": "gpt-4",
        "max_tokens": 3000,
        "system_prompt": "You are an expert travel coordinator with exquisite taste."
      }
    },
    "default_model": "gpt-3.5-turbo"
  },
  "prompts": [
    {
      "name": "get_activities",
      "input": "Tell me 10 fun attractions to do in NYC."
    },
    {
      "name": "gen_itinerary",
      "input": "Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}.",
      "metadata": {
        "model": "gpt-4",
        "parameters": {
          "order_by": "geographic location"
        }
      }
    }
  ]
}
```
Run the `get_activities` prompt.

Note: Make sure to specify the API keys (such as `OPENAI_API_KEY`) in your environment before proceeding.
In your CLI, set the environment variable:

```bash
export OPENAI_API_KEY=my_key
```
You don't need to worry about how to run inference for the model; it's all handled by AIConfig. The prompt runs with gpt-3.5-turbo since that is the `default_model` for this AIConfig.
```typescript
import * as path from "path";
import { AIConfigRuntime, InferenceOptions } from "aiconfig";

async function travelWithGPT() {
  const aiConfig = AIConfigRuntime.load(
    path.join(__dirname, "travel.aiconfig.json")
  );

  const options: InferenceOptions = {
    callbacks: {
      streamCallback: (data: any, _acc: any, _idx: any) => {
        // Write streamed content to console
        process.stdout.write(data?.content || "\n");
      },
    },
  };

  // Run a single prompt
  await aiConfig.run("get_activities", /*params*/ undefined, options);
}
```
Run the `gen_itinerary` prompt.

This prompt depends on the output of `get_activities`. It also takes in parameters (user input) to determine the customized itinerary.
Let's take a closer look:
`gen_itinerary` prompt:

```
"Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}."
```
prompt metadata:

```json
{
  "metadata": {
    "model": "gpt-4",
    "parameters": {
      "order_by": "geographic location"
    }
  }
}
```
Observe the following:
- The prompt depends on the output of the `get_activities` prompt.
- It also depends on an `order_by` parameter (using {{handlebars}} syntax).
- It uses gpt-4, whereas the `get_activities` prompt it depends on uses gpt-3.5-turbo.
Effectively, this is a prompt chain between the `gen_itinerary` and `get_activities` prompts, as well as a model chain between gpt-3.5-turbo and gpt-4.
Let's run this with AIConfig:
Replace the `config.run` call above with this:
```typescript
// Run a prompt chain, with data passed in as params.
// This will first run get_activities with GPT-3.5, and
// then use its output to run gen_itinerary with GPT-4.
await aiConfig.runWithDependencies(
  "gen_itinerary",
  /*params*/ { order_by: "duration" },
  options
);
```
Notice how simple the syntax is to perform a fairly complex task - running 2 different prompts across 2 different models and chaining one's output as part of the input of another.
The code will just run `get_activities`, then pipe its output as an input to `gen_itinerary`, and finally run `gen_itinerary`.
Save the AIConfig
Let's save the AIConfig back to disk, and serialize the outputs from the latest inference run as well:
```typescript
// Save the AIConfig to disk, and serialize outputs from the model run
aiConfig.save(
  "updated.aiconfig.json",
  /*saveOptions*/ { serializeOutputs: true }
);
```
Edit the `aiconfig` in a notebook editor

We can iterate on an `aiconfig` using a notebook-like editor called an AI Workbook. Now that we have an `aiconfig` file artifact that encapsulates the generative AI part of our application, we can iterate on it separately from the application code that uses it.
- Go to https://lastmileai.dev.
- Go to Workbooks page: https://lastmileai.dev/workbooks
- Click dropdown from '+ New Workbook' and select 'Create from AIConfig'
- Upload `travel.aiconfig.json`
https://github.com/lastmile-ai/aiconfig/assets/81494782/5d901493-bbda-4f8e-93c7-dd9a91bf242e
Try out the workbook playground here: NYC Travel Workbook
We are working on a local editor that you can run yourself. For now, please use the hosted version on https://lastmileai.dev.
## Additional Guides
There is a lot you can do with `aiconfig`. We have several other tutorials to help get you started:
- Create an AIConfig from scratch
- Run a prompt
- Pass data into prompts
- Prompt chains
- Callbacks and monitoring
## Supported Models
AIConfig supports the following models out of the box:
- OpenAI chat models (GPT-3, GPT-3.5, GPT-4)
- LLaMA2 (running locally)
- Google PaLM models (PaLM chat)
- Hugging Face text generation models (e.g. Mistral-7B)
If you need to use a model that isn't provided out of the box, you can implement a `ModelParser` for it (see Extending AIConfig). We welcome contributions!
## AIConfig Schema

## AIConfig SDK
Read the Usage Guide for more details.
The AIConfig SDK supports CRUD operations for prompts, models, parameters and metadata. Here are some common examples.
The root interface is the `AIConfigRuntime` object. That is the entrypoint for interacting with an AIConfig programmatically.
Let's go over a few key CRUD operations to give a glimpse.
### AIConfig create

```
config = AIConfigRuntime.create("aiconfig name", "description");
```
### Prompt resolve

`resolve` deserializes an existing `Prompt` into the data object that its model expects.

```
config.resolve("prompt_name", params);
```

`params` are overrides you can specify to resolve any `{{handlebars}}` templates in the prompt. See the `gen_itinerary` prompt in the Getting Started example.
### Prompt serialize

`serialize` is the inverse of `resolve` -- it serializes the data object that a model understands into a `Prompt` object that can be serialized into the `aiconfig` format.

```
config.serialize("model_name", data, "prompt_name");
```
### Prompt run

`run` is used to run inference for the specified `Prompt`.

```
config.run("prompt_name", params);
```
### run_with_dependencies

This is a variant of `run` -- it re-runs all prompt dependencies.

For example, in `travel.aiconfig.json`, the `gen_itinerary` prompt references the output of the `get_activities` prompt using `{{get_activities.output}}`.

Running this function will first execute `get_activities`, and use its output to resolve the `gen_itinerary` prompt before executing it.

This is transitive, so it computes the directed acyclic graph of dependencies to execute. Complex relationships can be modeled this way.

```
config.run_with_dependencies("gen_itinerary");
```
### Updating metadata and parameters

Use the `getMetadata`/`setMetadata` and `getParameter`/`setParameter` methods to interact with metadata and parameters (`setParameter` is just syntactic sugar to update `"metadata.parameters"`).

```
config.setMetadata("key", data, "prompt_name");
```

Note: if `"prompt_name"` is specified, the metadata is updated specifically for that prompt. Otherwise, the global metadata is updated.
### AIConfigRuntime.registerModelParser

Use `AIConfigRuntime.registerModelParser` if you want to use a different `ModelParser`, or configure AIConfig to work with an additional model.

AIConfig uses the model name string to retrieve the right `ModelParser` for a given Prompt (see `AIConfigRuntime.getModelParser`), so you can register a different `ModelParser` for the same ID to override which `ModelParser` handles a Prompt.

For example, suppose I want to use `MyOpenAIModelParser` to handle `gpt-4` prompts. I can do the following at the start of my application:

```typescript
AIConfigRuntime.registerModelParser(myModelParserInstance, ["gpt-4"]);
```
### Callback events
Use callback events to trace and monitor what's going on -- helpful for debugging and observability.
```typescript
import * as path from "path";
import {
  AIConfigRuntime,
  Callback,
  CallbackEvent,
  CallbackManager,
} from "aiconfig";

const config = AIConfigRuntime.load(path.join(__dirname, "aiconfig.json"));

const myCustomCallback: Callback = async (event: CallbackEvent) => {
  console.log(`Event triggered: ${event.name}`, event);
};

const callbackManager = new CallbackManager([myCustomCallback]);
config.setCallbackManager(callbackManager);

await config.run("prompt_name");
```
## Extensibility
AIConfig is designed to be customized and extended for your use-case. The Extensibility guide goes into more detail.
Currently, there are 3 core ways to extend AIConfig:
- Supporting other models -- define a ModelParser extension
- Callback event handlers -- tracing and monitoring
- Custom metadata -- save custom fields in `aiconfig`
## Contributing to aiconfig
This is our first open-source project and we'd love your help.
See our contributing guidelines -- we would especially love help adding support for additional models that the community wants.
## Cookbooks
We provide several guides to demonstrate the power of `aiconfig`.

See the `cookbooks` folder for examples to clone.
### Chatbot

- Wizard GPT -- speak to a wizard on your CLI
- CLI-mate -- help you make code-mods interactively on your codebase.
### Retrieval Augmented Generation (RAG)
At its core, RAG is about passing data into prompts. Read how to pass data with AIConfig.
### Function calling

### Prompt routing

### Chain of Thought
A variant of chain-of-thought is Chain of Verification, used to help reduce hallucinations. Check out the aiconfig cookbook for CoVe:
### Using local LLaMA2 with aiconfig

### Hugging Face text generation

### Google PaLM
## Roadmap
This project is under active development.
If you'd like to help, please see the contributing guidelines.
Please create issues for additional capabilities you'd like to see.
Here's what's already on our roadmap:
- Evaluation interfaces: allow `aiconfig` artifacts to be evaluated with user-defined eval functions.
  - We are also considering integrating with existing evaluation frameworks.
- Local editor for `aiconfig`: enable you to interact with aiconfigs more intuitively.
- OpenAI Assistants API support
- Multi-modal ModelParsers:
  - GPT-4V support
  - DALL-E 3
  - Whisper
  - HuggingFace image generation
## FAQs
### How should I edit an `aiconfig` file?

Editing a config should be done either programmatically via the SDK or via the UI (workbooks):

- Programmatic editing.
- Edit with a workbook editor: this is similar to editing an `ipynb` file as a notebook (most people never touch the JSON `ipynb` directly).

You should only edit the `aiconfig` by hand for minor modifications, like tweaking a prompt string or updating some metadata.
### Does this support custom endpoints?

Out of the box, AIConfig already supports all OpenAI GPT* models, Google’s PaLM model, and any “text generation” model on Hugging Face (like Mistral). See Supported Models for more details.

Additionally, you can install `aiconfig` extensions for additional models (see question below).
### Is OpenAI function calling supported?
Yes. This example goes through how to do it.
We are also working on adding support for the Assistants API.
### How can I use aiconfig with my own model endpoint?
Model support is implemented as “ModelParser”s in the AIConfig SDK, and the idea is that anyone, including you, can define a ModelParser (and even publish it as an extension package).
All that’s needed to use a model with AIConfig is a ModelParser that knows:

- how to serialize data from a model into the aiconfig format
- how to deserialize data from an aiconfig into the type the model expects
- how to run inference for the model
For more details, see Extensibility.
### When should I store outputs in an `aiconfig`?
The `AIConfigRuntime` object is used to interact with an aiconfig programmatically (see SDK usage guide). As you run prompts, this object keeps track of the outputs returned from the model.
You can choose to serialize these outputs back into the `aiconfig` by using the `config.save(include_outputs=True)` API. This can be useful for preserving context -- think of it like session state.
For example, you can use aiconfig to create a chatbot, and use the same format to save the chat history so it can be resumed for the next session.
You can also choose to save outputs to a different file than the original config -- `config.save("history.aiconfig.json", include_outputs=True)`.
### Why should I use `aiconfig` instead of things like configurator?
It helps to have a standardized format specifically for storing generative AI prompts, inference results, model parameters and arbitrary metadata, as opposed to a general-purpose configuration schema.
With that standardization, you just need a layer that knows how to serialize/deserialize from that format into whatever the inference endpoints require.
### This looks similar to `ipynb` for Jupyter notebooks
We believe that notebooks are a perfect iteration environment for generative AI -- they are flexible, multi-modal, and collaborative.
The multi-modality and flexibility offered by notebooks and `ipynb` makes for a good interaction model for generative AI. The `aiconfig` file format is extensible like `ipynb`, and the AI Workbook editor allows rapid iteration in a notebook-like IDE.
AI Workbooks are to AIConfig what Jupyter notebooks are to `ipynb`.
There are 2 areas where we are going beyond what notebooks offer:

1. `aiconfig` is more source-control friendly than `ipynb`. `ipynb` stores binary data (images, etc.) by encoding it in the file, while `aiconfig` recommends using file URI references instead.
2. `aiconfig` can be imported and connected to application code using the AIConfig SDK.