@baseai/core v0.0.1

Base AI

The AI framework for building declarative and composable AI-powered LLM products.

Getting Started with @baseai/core

Installation

First, install the @baseai/core package using npm, pnpm, or yarn:

npm install @baseai/core

or

pnpm add @baseai/core

or

yarn add @baseai/core

Usage

To use the generate function from the @baseai/core package, follow these steps:

  1. Import the generate function:

    import {generate} from '@baseai/core';
  2. Set up environment variables:

    Ensure you have the following environment variables set in your .env file (a sketch for loading them in a script follows these steps):

    OPENAI_API_KEY=your_openai_api_key
  3. Generate a response using a prompt:

    import {generate} from '@baseai/core';
    
    async function exampleWithPrompt() {
    	const response = await generate({
    		model: 'gpt-3.5-turbo-0125',
    		prompt: '1+1',
    	});
    
    	console.log(response); // Output: '2'
    }
    
    exampleWithPrompt();
  4. Generate a response using a messages array:

    import {generate} from '@baseai/core';
    
    async function exampleWithMessages() {
    	const response = await generate({
    		model: 'gpt-3.5-turbo-0125',
    		messages: [
    			{role: 'system', content: 'You are a helpful assistant.'},
    			{role: 'user', content: 'Give me 5 title ideas'},
    			{role: 'assistant', content: 'Sure, here you go … …'},
    		],
    	});
    
    	console.log(response);
    }
    
    exampleWithMessages();
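
The examples above assume the environment variables from step 2 are already loaded into the process. If you run them as a plain Node script, one way to do that is the dotenv package; it is also worth catching errors, since a failed request surfaces as a rejected promise. This is a minimal sketch that adds dotenv as an extra dependency for illustration, not something this README says @baseai/core does for you:

import 'dotenv/config'; // load variables from .env into process.env
import {generate} from '@baseai/core';

async function main() {
	try {
		const response = await generate({
			model: 'gpt-3.5-turbo-0125',
			prompt: '1+1',
		});
		console.log(response);
	} catch (error) {
		// Invalid params, missing env vars, or network failures end up here.
		console.error('generate failed:', error);
	}
}

main();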

API Reference

generate

Generates a response using the specified model and either a prompt or a messages array.

Signature

async function generate(params: GenerateParams): Promise<string>;

Parameters

  • params: An object containing the following properties:
    • model (string): The model to use for generating the response.
    • prompt (optional string): The prompt to use for generating the response. Either prompt or messages must be provided.
    • messages (optional Message[]): An array of message objects. Each message object should contain role and content properties. Either prompt or messages must be provided.

Returns

  • A promise that resolves to a string containing the generated response.

Example

import {generate} from '@baseai/core';

const responseFromPrompt = await generate({
	model: 'gpt-3.5-turbo-0125',
	prompt: '1+1',
});

console.log(responseFromPrompt);

const responseFromMessages = await generate({
	model: 'gpt-3.5-turbo-0125',
	messages: [
		{role: 'system', content: 'You are a helpful assistant.'},
		{role: 'user', content: 'Give me 5 title ideas'},
		{role: 'assistant', content: 'Sure, here you go … …'},
	],
});

console.log(responseFromMessages);

validateInput

Validates the input parameters and environment variables.

Signature

function validateInput(params: GenerateParams): ValidatedParams;

Parameters

  • params: An object containing the following properties:
    • model (string): The model to use for generating the response.
    • prompt (optional string): The prompt to use for generating the response.
    • messages (optional Message[]): An array of message objects.

Returns

  • An object containing the validated parameters and environment variables.

Example

const validatedParams = validateInput({
	model: 'gpt-3.5-turbo-0125',
	prompt: 'Hi',
});

buildMessages

Constructs the messages array using the provided prompt or messages array.

Signature

function buildMessages({
	prompt,
	messages,
}: {
	prompt?: string;
	messages?: Message[];
}): Message[];

Parameters

  • prompt (optional string): The prompt to use for generating the response.
  • messages (optional Message[]): An array of message objects.

Returns

  • An array of message objects.

Example

const messages = buildMessages({prompt: 'Hi'});

buildHeaders

Constructs the headers for the API request using the provided API key.

Signature

function buildHeaders(API_KEY: string): Record<string, string>;

Parameters

  • API_KEY (string): The API key to use for the request.

Returns

  • An object containing the headers for the API request.

Example

const headers = buildHeaders('your-api-key');

handleResponse

Processes the API response and extracts the generated message content.

Signature

async function handleResponse(response: Response): Promise<string>;

Parameters

  • response (Response): The response object from the API request.

Returns

  • A promise that resolves to a string containing the generated message content.

Example

const content = await handleResponse(response);
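
Taken together, these helpers describe the request flow behind generate: validate the input and environment, build the messages array, build the request headers, call the chat endpoint, and extract the content from the response. The sketch below shows how they might compose into a manual call, assuming the helpers are exported alongside generate and that the request body follows the OpenAI chat completions format; the actual implementation inside generate may differ.

import {
	validateInput,
	buildMessages,
	buildHeaders,
	handleResponse,
} from '@baseai/core';

// Hypothetical manual pipeline using the helpers documented above.
async function generateManually(model: string, prompt: string): Promise<string> {
	// Validate the params and environment up front.
	validateInput({model, prompt});

	// Env var names follow the Environment Variables section below.
	const apiKey = process.env.OPEN_AI_API_KEY!;
	const apiUrl = process.env.OPEN_AI_API_URL_CHAT!;

	// Turn the prompt into a messages array.
	const messages = buildMessages({prompt});

	// The body shape here is an assumption (OpenAI chat completions format).
	const response = await fetch(apiUrl, {
		method: 'POST',
		headers: buildHeaders(apiKey),
		body: JSON.stringify({model, messages}),
	});

	// Extract the generated message content.
	return handleResponse(response);
}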

Types

GenerateParams

Type definition for the parameters of the generate function.

interface GenerateParams {
	model: string;
	prompt?: string;
	messages?: Message[];
}

Message

Type definition for a message object.

interface Message {
	role: 'system' | 'user' | 'assistant';
	content: string;
}

ValidatedEnv

Type definition for the validated environment variables.

interface ValidatedEnv {
	API_KEY: string;
	API_URL_CHAT: string;
}
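
Assuming these types are exported from the package root (the README does not show that export, so treat the import below as illustrative), they can annotate your own values:

import type {GenerateParams, Message} from '@baseai/core';

const history: Message[] = [
	{role: 'system', content: 'You are a helpful assistant.'},
	{role: 'user', content: 'Give me 5 title ideas'},
];

const params: GenerateParams = {
	model: 'gpt-3.5-turbo-0125',
	messages: history,
};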

Environment Variables

OPEN_AI_API_KEY

The API key for authenticating requests to the OpenAI API.

OPEN_AI_API_URL_CHAT

The URL for the OpenAI API chat endpoint.
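
For reference, a minimal .env file using the names in this section might look like the lines below. The URL value is illustrative, and note that the Usage section above spells the key OPENAI_API_KEY, so check the package source for the exact names your version expects.

OPEN_AI_API_KEY=your_openai_api_key
OPEN_AI_API_URL_CHAT=https://api.openai.com/v1/chat/completions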

This documentation provides a comprehensive guide for getting started with the @baseai/core package, as well as a detailed API reference for the generate function and its related components.