fluent-ai


> [!WARNING]
> This project is in beta. The API is subject to change and may break.

fluent-ai is a lightweight, type-safe AI toolkit that seamlessly integrates multiple AI providers. It features structured outputs, streaming capabilities, and job serialization support.

Installation

npm install fluent-ai

AI Service provider support

fluent-ai includes support for multiple AI providers and modalities.

| Provider  | chat               | embedding          | image              |
| --------- | ------------------ | ------------------ | ------------------ |
| anthropic | :white_check_mark: |                    |                    |
| fal       |                    |                    | :white_check_mark: |
| ollama    | :white_check_mark: | :white_check_mark: |                    |
| openai    | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| voyageai  |                    | :white_check_mark: |                    |

By default, API keys for providers are read from environment variables (process.env) following the format <PROVIDER>_API_KEY (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY).

You can also initialize a provider with manual API key settings:

import { openai } from "fluent-ai";

openai({ apiKey: "<key>" });
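
If the corresponding environment variable is already set (for example OPENAI_API_KEY), the key can be omitted entirely; a minimal sketch of the default behavior described above:

import { openai } from "fluent-ai";

// No apiKey option: the key is read from process.env.OPENAI_API_KEY.
const provider = openai();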

For more examples with different AI providers, check out the examples directory.

Don't see your AI provider? Feel free to open an issue or start a discussion to request support, or join our Discord community.

Job API

Each request to an AI provider is wrapped in a Job, which can also be serialized and deserialized. A fluent API with method chaining makes it easy to create jobs.

Method chaining

import { openai, userPrompt } from "fluent-ai";

const job = openai()
  .chat("gpt-4o-mini")
  .messages([userPrompt("Hi")])
  .temperature(0.5)
  .maxTokens(1024);
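
The resulting job can then be executed with run(), the same pattern used in the chat completion examples below:

// Execute the job and read the generated text.
const { text } = await job.run();
console.log(text);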

Declaration

Alternatively, fluent-ai also supports declaring a job from a JSON object.

import { load } from "fluent-ai";

const job = load({
  provider: "openai",
  chat: {
    model: "gpt-4o-mini",
    params: {
      messages: [{ role: "user", content: "hi" }],
      temperature: 0.5,
    },
  },
});

Job serialization and deserialization

To serialize a job to a JSON object, use the dump method:

const obj = await job.dump();

This allows you to save the job's state for later use, such as storing it in a queue or database. To recreate and execute a job from the JSON object, use the load function:

import { load } from "fluent-ai";

const job = load(obj);
await job.run();
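
Because the dumped job is a plain JSON object, it can be persisted with standard JSON tooling; a minimal sketch (the storage step is purely illustrative):

import { load } from "fluent-ai";

// Serialize the job and store it as a string (e.g., in a queue or database).
const serialized = JSON.stringify(await job.dump());

// Later, restore the job from storage and execute it.
const restored = load(JSON.parse(serialized));
await restored.run();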

Chat completions

Chat completion, as popularized by ChatGPT, is the most common AI service. It generates responses in a conversational format based on given inputs, also known as prompts.

Text generation

import { openai, systemPrompt, userPrompt } from "fluent-ai";

const job = openai()
  .chat("gpt-4o-mini")
  .messages([systemPrompt("You are a helpful assistant"), userPrompt("Hi")]);

const { text } = await job.run();

Structured output

Structured output from AI chat completions means formatting responses according to a predefined JSON schema. This feature is essential when building applications on top of chat completions.

Zod is a popular validation library for TypeScript and JavaScript that lets developers define and validate data schemas in a concise, type-safe manner. fluent-ai provides built-in integration for declaring JSON schemas with Zod. To use the Zod integration, first install zod from npm. Any parameter in fluent-ai that accepts a JSON schema will also accept a Zod schema.

npm install zod

fluent-ai provides a consistent jsonSchema() function across all providers to generate structured output. For more details, refer to the structured output docs.

import { z } from "zod";
import { openai, userPrompt } from "fluent-ai";

const personSchema = z.object({
  name: z.string(),
  age: z.number(),
});

const job = openai()
  .chat("gpt-4o-mini")
  .messages([userPrompt("generate a person with name and age in json format")])
  .jsonSchema(personSchema, "person");

const { object } = await job.run();
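
Because personSchema is a regular Zod schema, you can optionally re-validate the returned object on your side; a small sketch using Zod's own parse:

// Throws if the response does not match the schema; `person` is fully typed.
const person = personSchema.parse(object);
console.log(person.name, person.age);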

Function calling (tool calling)

Function calling (or tool calling) is an advanced chat completion capability that lets the model interact with external systems and perform specific tasks.

Here's how to create a tool:

import { z } from "zod";
import { tool } from "fluent-ai";

const weatherTool = tool("get_current_weather")
  .description("Get the current weather in a given location")
  .parameters(
    z.object({
      location: z.string(),
      unit: z.enum(["celsius", "fahrenheit"]).optional(),
    })
  );

To use the tool, add it to a chat job with a function-calling-enabled model, such as gpt-4o-mini from openai.

const job = openai().chat("gpt-4o-mini").tool(weatherTool);

const { toolCalls } = await job
  .messages([userPrompt("What is the weather in San Francisco?")])
  .run();
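
The returned tool calls can then be dispatched to your own implementation. The name and arguments fields used below are assumptions about the tool call shape, not a documented fluent-ai contract, so check the actual response before relying on them:

// Hypothetical dispatcher for the example tool above.
for (const call of toolCalls ?? []) {
  if (call.name === "get_current_weather") {
    const { location, unit } = call.arguments;
    console.log(`Fetching weather for ${location} (${unit ?? "celsius"})`);
  }
}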

Streaming support

Rather than waiting for the complete response, streaming enables the model to return portions of the response as they're generated. fluent-ai provides built-in streaming support for text, objects, and tools in chat models.

import { openai, systemPrompt, userPrompt } from "fluent-ai";

const job = openai()
  .chat("gpt-4o-mini")
  .messages([systemPrompt("You are a helpful assistant"), userPrompt("Hi")])
  .stream();

const { stream } = await job.run();
for await (const chunk of stream) {
  console.log(chunk);
}

fluent-ai supports streaming text, objects, and tool calls on demand. For more details, see the streaming docs.
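
For plain text streaming, a common pattern is to accumulate the chunks into the full response as they arrive; the sketch below (continuing the example above) assumes each chunk is either a text delta or an object exposing one, which may not match the actual chunk shape:

let fullText = "";
for await (const chunk of stream) {
  // Assumption: the incremental text is the chunk itself or a `text` field.
  fullText += typeof chunk === "string" ? chunk : chunk.text ?? "";
}
console.log(fullText);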

Vision support

You can leverage chat models with vision capabilities by including an image URL in your prompt.

import { openai, userPrompt } from "fluent-ai";

openai()
  .chat("gpt-4o-mini")
  .messages([
    userPrompt("Describe the image", { image: { url: "<image_url>" } }),
  ]);

Embedding

import { openai } from "fluent-ai";

const job = openai().embedding("text-embedding-3-small").input("hello");
const result = await job.run();
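
Embedding results are typically consumed as plain number arrays, for example to compare two inputs by cosine similarity; a generic helper, independent of fluent-ai's exact result shape:

// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}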

Image generation

import { openai } from "fluent-ai";

const job = openai().image("dall-e-2").prompt("a cat").n(1).size("512x512");
const result = await job.run();

Support

Feel free to open an issue or start a discussion if you have any questions, or join our Discord community.

License

fluent-ai is licensed under Apache 2.0 as found in the LICENSE file.