metaprog
An experimental and versatile library for exploring LLM-assisted metaprogramming.
About the Project
Metaprog is an AI metaprogramming library for TypeScript that lets you generate, validate, and test code with LLMs at runtime. It provides a simple yet powerful builder API for describing the code you want to generate, and it automatically handles the interaction with the LLM, validation of the output, and testing of the generated code.
Key Features
- On-demand function generation based on a function description
- Integration with LLMs from the LangChain ecosystem
- Automatic caching of generated functions to avoid re-generation
- Automatic test-and-re-prompt loop when a generated function fails a user-supplied test case
- Strong type safety and flexible input/output schema configuration using Zod
Getting Started
Installation
You'll need to install the Metaprog package, as well as LangChain and the LLM-specific package you want to use. For the rest of the guide, we'll use Anthropic's Claude 3.5 Sonnet model.
npm install metaprog @langchain/core @langchain/anthropic # or any other LLM provider
# or
pnpm add metaprog @langchain/core @langchain/anthropic
# or
yarn add metaprog @langchain/core @langchain/anthropic
Basic Usage
Below is a simple (and deliberately overkill) example demonstrating how to generate a function that logs "Hello world!" to the console.
import { MetaprogFunctionBuilder } from 'metaprog';
import { ChatAnthropic } from '@langchain/anthropic';

const model = new ChatAnthropic({
  model: 'claude-3-5-sonnet-latest',
  apiKey: 'your_api_key_here',
});

const builder = new MetaprogFunctionBuilder('Console log "Hello world!"', {
  model,
});

const func = await builder.build();
func(); // logs "Hello world!"
How It Works
- You provide a textual description of what the function should do.
- Metaprog sends this description (and optional schemas for input or output) to an LLM.
- The LLM returns TypeScript code, which is then compiled and cached locally.
- You can immediately invoke the compiled function within your application.
- On subsequent runs, Metaprog checks the cache to avoid re-generation (see the sketch below).
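To make the caching step concrete, here is a minimal sketch that builds the same function twice, reusing the model from the Basic Usage example. Whether the cache key is derived from the description is an assumption of this sketch; the documented behavior is simply that cached functions are not re-generated.

// First build: sends the description to the LLM, compiles the
// returned TypeScript, and stores it in the local cache.
const greet = await new MetaprogFunctionBuilder('Console log "Hello world!"', {
  model,
}).build();
greet(); // logs "Hello world!"

// Second build of the same function: Metaprog checks the cache first,
// so no new LLM request should be needed.
const greetAgain = await new MetaprogFunctionBuilder('Console log "Hello world!"', {
  model,
}).build();
greetAgain(); // logs "Hello world!" again, served from the cache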
Using Schemas for Validation
To further constrain or validate your function's input and output, you can provide Zod schemas. These schemas are used during the generation process and also strictly type the built function.
import { z } from 'zod';
import { MetaprogFunctionBuilder } from 'metaprog';
import { ChatAnthropic } from '@langchain/anthropic';

const model = new ChatAnthropic({
  model: 'claude-3-5-sonnet-latest',
  apiKey: 'your_api_key_here',
});

// Define input/output Zod schemas
const inputSchema = [
  z.array(z.array(z.number())).describe('Adjacency matrix'),
  z.number().describe('Start node'),
  z.number().describe('End node'),
];
const outputSchema = z.number().describe('Shortest path length');

const pathFinderBuilder = new MetaprogFunctionBuilder(
  'Get shortest path between two nodes on a graph given an adjacency matrix, a start node, and an end node.',
  {
    model,
    inputSchema,
    outputSchema,
  },
);

const findPathLength = await pathFinderBuilder.build();
//    ^? (adjacencyMatrix: number[][], startNode: number, endNode: number) => number

findPathLength(
  [
    [0, 1, 7],
    [1, 2, 3],
    [5, 3, 4],
  ],
  0,
  2,
); // 4
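For this matrix, the direct edge from node 0 to node 2 has weight 7, while the route 0 → 1 → 2 costs 1 + 3 = 4, so the shortest path length is 4.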
Advanced Usage
Automatic Testing and Regeneration
Metaprog can automatically run tests against the generated function. If a test fails, Metaprog asks the LLM to fix the generated code and retries until all tests pass (up to a configurable number of retries).
import { MetaprogFunctionBuilder } from 'metaprog';
import { ChatAnthropic } from '@langchain/anthropic';

const model = new ChatAnthropic({
  model: 'claude-3-5-sonnet-latest',
  apiKey: 'your_api_key_here',
});

const addStrings = await new MetaprogFunctionBuilder('Add two numbers', {
  model,
})
  .test((f) => f('1', '2') === 3) // If this fails, generation is retried
  .test((f) => f('-5', '15') === 10) // If this fails, generation is retried
  .build();

addStrings('1', '2'); // Guaranteed to be 3, as enforced by the first test
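Note the deliberate mismatch here: the description says "Add two numbers", but the tests call the function with string arguments. Because failing tests trigger regeneration, the loop steers the model toward code that parses its string inputs before adding them, which is why the result is guaranteed to match the tested values.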
Caching
All generated functions are cached, so the same function is not re-generated unnecessarily on subsequent runs. This reduces both latency and your LLM usage. By default, generated files are stored under a "generated" folder, and metadata is stored in a JSON file.
Custom Cache Handler
If you want more control over how or where functions are stored, implement the CacheHandler interface:
import { CacheHandler } from 'metaprog';

class MyCustomCacheHandler implements CacheHandler {
  // Your cache handler code
}

// Then provide it to MetaprogFunctionBuilder as the third constructor argument:
import { MetaprogFunctionBuilder } from 'metaprog';
import { ChatAnthropic } from '@langchain/anthropic';

const model = new ChatAnthropic({
  model: 'claude-3-5-sonnet-latest',
  apiKey: 'your_api_key_here',
});

const myCustomCache = new MyCustomCacheHandler();
const myFuncBuilder = new MetaprogFunctionBuilder(
  'Some descriptive text',
  { model },
  myCustomCache,
);
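For illustration only, here is a hypothetical in-memory handler. The get/set method names below are assumptions made for this sketch, not metaprog's actual API; check the CacheHandler type definition shipped with the package for the real contract.

// Hypothetical sketch: the method names and signatures below are assumptions.
// Align them with the actual CacheHandler interface before using this class.
class InMemoryCacheHandler /* implements CacheHandler */ {
  private store = new Map<string, string>();

  // Return previously generated code for a key, if any (assumed method).
  get(key: string): string | undefined {
    return this.store.get(key);
  }

  // Store newly generated code under a key (assumed method).
  set(key: string, code: string): void {
    this.store.set(key, code);
  }
}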
Contributing
Contributions are welcome! Feel free to submit issues or PRs on GitHub if you find bugs or want to propose new features.
License
This project is licensed under the MIT License. See the LICENSE file for details.