
@typeai/core v0.6.1

TypeAI: An AI Engineering Framework for TypeScript

[Image: TypeAI example]

TypeAI is a toolkit for building AI-enabled apps using TypeScript that makes things look so simple it seems like magic. More importantly, it makes building with LLMs "feel" like ordinary code with low impedance mismatch.

An example:

import { toAIFunction } from '@typeai/core'

/** @description Given `text`, returns a number between 1 (positive) and -1 (negative) indicating its sentiment score. */
function sentimentSpec(text: string): number | void {}
const sentiment = toAIFunction(sentimentSpec)

const score = await sentiment('That was surprisingly easy!')

Just specify your types and function signatures as you naturally would, and TypeAI will generate the appropriate implementation respecting your type declarations. No loading separate schema files, no prompt engineering, and no manually writing JSON Schema representations of your functions.

Contents

  1. Installation
  2. Usage
    • Using TypeAI to generate functionality
      • AI Models
      • AI Functions
      • AI Classifiers
    • Using TypeAI to expose functionality to an LLM
      • AI "Tool Functions"
  3. Gotchas
  4. How does it work?
  5. Future Direction & TODOs
  6. Acknowledgements
  7. License

Support

Follow me on Twitter.

Installation

DeepKit is required to provide runtime type information about your functions and types.

npm install @typeai/core @deepkit/core

NOTE: For now, automatic extraction of JSDoc @description tags requires these forked npm package builds: @deepkit/type and @deepkit/type-compiler.

npm install @deepkit/type@npm:@jefflaporte/[email protected]
npm install --save-dev @deepkit/type-compiler@npm:@jefflaporte/[email protected]
# Bash
./node_modules/.bin/deepkit-type-install
# PowerShell
pwsh ./node_modules/.bin/deepkit-type-install.ps1

tsconfig.json

// tsconfig.json
{
  "compilerOptions": {
    // ...

    // Note: DeepKit says that experimentalDecorators is not necessary when using @deepkit/type,
    // but I have found that deepkit's typeOf() does not always work with TypeScript > 4.9
    // without experimentalDecorators set.
    "experimentalDecorators": true
  },
  "reflection": true
}

NOTE: Some runtimes, such as tsx, won't work with Deepkit. See Gotchas for more info.
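To confirm that reflection is active after installation, here is a minimal sanity check (a sketch; typeOf is Deepkit's runtime reflection entry point, and Probe is just a throwaway type for the test):

import { typeOf } from '@deepkit/type'

type Probe = {
  id: number
  name: string
}

// If the type compiler is wired up correctly, this logs structured
// type information for Probe. If reflection is not set up, typeOf()
// will throw or return nothing useful.
console.log(typeOf<Probe>())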

At execution time

export OPENAI_API_KEY='...'    # currently required for core functionality
export BING_API_KEY='...'      # if using predefined SearchWeb Tool function

TypeAI makes connecting your functions and types to AI APIs like OpenAI's chat completion endpoints lightweight. It uses runtime type reflection on TypeScript code to generate the JSON schema required by OpenAI's function calling feature, and it handles function dispatch and result delivery to the LLM.
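As an illustration of what that reflection produces, a stub like getCurrentWeather(location: string, unit?: string) might be translated into a function-calling schema roughly like the one below. This is a hand-written sketch of the general shape, not the exact output of ToolFunction.from():

// Illustrative only: the approximate JSON Schema shape that runtime
// reflection derives from a TypeScript signature for OpenAI function calling.
const illustrativeSchema = {
  name: 'getCurrentWeather',
  description: 'Returns the current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: { type: 'string' },
      unit: { type: 'string' },
    },
    required: ['location'],
  },
}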

Usage

TypeAI currently provides two main areas of functionality:

  • Generation of "magic" AI-backed functions
    • AI Models
    • AI Functions
    • AI Classifiers
  • Generation and handling of LLM tool function glue
    • AI "Tool Functions"

AI Functions

To create an AI-backed function, write a stub function and pass it to toAIFunction(), which will generate an AI-backed function with the desired behaviour.

/** @description Given `text`, returns a number between 1 (positive) and -1 (negative) indicating its sentiment score. */
function sentimentSpec(text: string): number | void {}
const sentiment = toAIFunction(sentimentSpec)

const score = await sentiment('That was surprisingly easy!')

Functions with complex input and output TypeScript types work too. Here's a more interesting example:

type Patient = {
  name: string
  age: number
  isSmoker: boolean
}
type Diagnosis = {
  condition: string
  diagnosisDate: Date
  stage?: string
  type?: string
  histology?: string
  complications?: string
}
type Treatment = {
  name: string
  startDate: Date
  endDate?: Date
}
type Medication = Treatment & {
  dose?: string
}
type BloodTest = {
  name: string
  result: string
  testDate: Date
}
type PatientData = {
  patient: Patient
  diagnoses: Diagnosis[]
  treatments: (Treatment | Medication)[]
  bloodTests: BloodTest[]
}

/** @description Returns a PatientData record generated from the content of the doctor's notes in `input`. */
function generateElectronicHealthRecordSpec(input: string): PatientData | void {}
const generateElectronicHealthRecord = toAIFunction(generateElectronicHealthRecordSpec, {
  model: 'gpt-4',
})
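Calling the generated function then reads like any other async call. A sketch, where notesText is a hypothetical string of free-form doctor's notes (not part of the TypeAI API):

// notesText is illustrative input only
const notesText = 'Patient presented with a persistent cough...'
const record = await generateElectronicHealthRecord(notesText)
if (record) {
  console.log(record.patient.name, record.diagnoses.length)
}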

TypeScript enums to AI-backed Classifiers

enum AppRouteEnum {
  USER_PROFILE = '/user-profile',
  SEARCH = '/search',
  NOTIFICATIONS = '/notifications',
  SETTINGS = '/settings',
  HELP = '/help',
  SUPPORT_CHAT = '/support-chat',
  DOCS = '/docs',
  PROJECTS = '/projects',
  WORKSPACES = '/workspaces',
}
const AppRoute = toAIClassifier(AppRouteEnum)

const appRouteRes = await AppRoute('I need to talk to somebody about billing')
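A sketch of acting on the result, assuming the classifier resolves to a member of the enum (illustrative; check the actual return shape in your version):

// Assumed: appRouteRes is an AppRouteEnum value such as AppRouteEnum.SUPPORT_CHAT
if (appRouteRes === AppRouteEnum.SUPPORT_CHAT) {
  // navigate the user to the support chat route
}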

AI "Tool Function" Helpers

An AI tool function is a function provided to an LLM for its own use in generating answers.

Say you have a function and want to provide its functionality to OpenAI's LLM for use with their Function Calling feature.

TypeAI provides three functions that make exposing your functions and models to GPT-3.5/4, and handling the resulting function call requests from GPT-3.5/4, transparent:

static ToolFunction.from<R>(
  fn: (...args: any[]) => R,
  options?: ToolFunctionFromOptions
): ToolFunction

static ToolFunction.modelSubmissionToolFor<T>(
  cb: (arg: T) => Promise<void>
): ToolFunction

function handleToolUse(
  openAIClient: OpenAIApi,
  originalRequest: CreateChatCompletionRequest,
  responseData: CreateChatCompletionResponse,
  options?: {
    model?: string,
    registry?: SchemaRegistry,
    handle?: 'single' | 'multiple'
  },
): Promise<CreateChatCompletionResponse | undefined>

They can be used like this:

import {
  OpenAIApi,
  Configuration,
  CreateChatCompletionRequest,
  ChatCompletionRequestMessage,
  ChatCompletionRequestMessageRoleEnum,
} from 'openai'
import { ToolFunction, handleToolUse } from '@typeai/core'
import { getCurrentWeather } from 'yourModule'

// Init OpenAI client
const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY })
const openai = new OpenAIApi(configuration)

// Generate JSON Schema for function and dependent types
const getCurrentWeatherTool = ToolFunction.from(getCurrentWeather)

// Run a chat completion sequence
const messages: ChatCompletionRequestMessage[] = [
  {
    role: ChatCompletionRequestMessageRoleEnum.User,
    content: "What's the weather like in Boston? Say it like a weather reporter.",
  },
]
const request: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  messages,
  functions: [getCurrentWeatherTool.schema],
  stream: false,
  max_tokens: 1000,
}
const { data: response } = await openai.createChatCompletion(request)

// Transparently handle any LLM calls to your function.
// handleToolUse() returns OpenAI's final response after
// any/all function calls have been completed
const responseData = await handleToolUse(openai, request, response)
const result = responseData?.choices[0].message

/*
Good afternoon, Boston! This is your weather reporter bringing you the latest
updates. Currently, we're experiencing a pleasant temperature of 82 degrees Celsius. The sky is a mix of sunshine and clouds, making for a beautiful day. However, there is a 25% chance of precipitation, so you might want to keep an umbrella handy. Additionally, the atmospheric pressure is at 25 mmHg. Overall, it's a great day to get outside and enjoy the city. Stay safe and have a wonderful time!
*/

Gotchas

Due to the way Deepkit injects its type-compiler transform, by patching tsc, some runtimes may not work. These are known NOT to work:

  • tsx

How does it work?

TypeAI uses TypeScript runtime type info provided by @deepkit/type to:

  • generate replacement functions with the same signature as your function stubs
  • generate JSON Schema descriptions of your function and dependent types, which are provided to the OpenAI API so that it can respect your desired type structure

This results in a coding experience that feels "native".

Example

import { ToolFunction, handleToolUse } from '@typeai/core'

// Your type definitions
// ...
// Your function definitions dependent on your types
// ...
// eg:
const getCurrentWeather = function getCurrentWeather(
  location: string,
  unit: TemperatureUnit = 'fahrenheit',
  options?: WeatherOptions,
): WeatherInfo {
  const weatherInfo: WeatherInfo = {
    location: location,
    temperature: 82,
    unit: unit,
    precipitationPct: options?.flags?.includePrecipitation ? 25 : undefined,
    pressureMmHg: options?.flags?.includePressure ? 25 : undefined,
    forecast: ['sunny', 'cloudy'],
  }
  return weatherInfo
}

// Register your function and type info
const getCurrentWeatherTool = ToolFunction.from(getCurrentWeather)

// Run a completion series
const messages: ChatCompletionRequestMessage[] = [
  {
    role: ChatCompletionRequestMessageRoleEnum.User,
    content: "What's the weather like in Boston? Say it like a weather reporter.",
  },
]
const request: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo-0613',
  messages,
  functions: [getCurrentWeatherTool.schema],
  stream: false,
  max_tokens: 1000,
}
const { data: response } = await openai.createChatCompletion(request)
const responseData = await handleToolUse(openai, request, response)
const result = responseData?.choices[0].message
console.log(`LLM final result: ${JSON.stringify(result, null, 2)}`)

Note: The OpenAI completion API does not like void function responses.
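One way to work around that, as a sketch rather than a TypeAI-prescribed pattern, is to have side-effecting tool functions return a small status object instead of void:

// Instead of returning void from a side-effecting tool function...
// function notifyUser(message: string): void { ... }
// ...return a concrete result the LLM can acknowledge:
function notifyUser(message: string): { ok: boolean } {
  // perform the side effect here (e.g. send a notification)
  return { ok: true }
}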

Future Direction & TODOs

  • TODO

Acknowledgements

  • the Prefect / Marvin Team
    • The concept of source-code-less AI Functions and Models that use function specification and description info to auto-generate their behavior comes from the amazing team at PrefectHQ, who created prefecthq/marvin for use in Python.
  • Wang Chenyu

License

See LICENSE.txt