
cross-llm

v0.1.3

Published

Use every LLM in every environment with one simple API

Downloads

51

Readme

cross-llm

Use LLM and Vector Embedding APIs on the web platform. Uses standard fetch() and thus runs everywhere, including in Service Workers.

🌟 Features

The simplest possible API for using LLMs: it can hardly get easier than one function call 😉

And what's best? It builds on standard fetch(), so the same code runs everywhere, including in Service Workers.

AI models currently supported:

  • OpenAI: Any OpenAI LLM, including GPT-4 and newer models.
    • ✅ Promise-based
    • ✅ Streaming
    • ✅ Single message system prompt (instruct)
    • ✅ Multi-message prompt (chat)
    • ✅ Cost model
    • ✅ Text Embedding
  • Anthropic: The whole Claude model-series, including Opus.
    • ✅ Promise-based
    • ✅ Streaming
    • ✅ Single message system prompt (instruct)
    • ✅ Multi-message prompt (chat)
    • ✅ Cost model
    • 〰️ Text Embedding (Anthropic doesn't provide embedding endpoints)
  • Perplexity: All models supported.
    • ✅ Promise-based
    • ✅ Streaming
    • ✅ Single message system prompt (instruct)
    • ✅ Multi-message prompt (chat)
    • ✅ Cost model (including flat fee)
    • 〰️ Text Embedding (Perplexity doesn't provide embedding endpoints)
  • VoyageAI: Text Embedding models
    • ✅ Text Embedding
  • Mixedbread AI: Text Embedding models, specifically for German
    • ✅ Text Embedding

AI providers and models to be supported soon:

  • Google: The whole Gemini model-series, including 1.5 Pro and Advanced.
  • Cohere: The whole Command model-series, including Command R Plus.
  • Ollama: All Ollama LLMs, including Llama 3.
  • HuggingFace: All HuggingFace LLMs.

📚 Usage

  1. 🔨 First, install the library: npm install cross-llm (pnpm, yarn, and bun work too)

  2. 💡 Take a look at the super-simple code examples.

Single System Prompt

import { systemPrompt } from "cross-llm";

const promptResponse = await systemPrompt("Respond with JSON: { works: true }", "anthropic", {
  model: "claude-3-haiku-20240307",
  temperature: 0.7,
  max_tokens: 4096
}, { apiKey: import.meta.env[`anthropic_api_key`] });

// promptResponse.message => {\n  "works": true\n}
// promptResponse.usage.outputTokens => 12
// promptResponse.usage.inputTokens => 42
// promptResponse.usage.totalTokens => 54
// promptResponse.price.input => 0.0000105
// promptResponse.price.output => 0.000015
// promptResponse.price.total => 0.0000255
// promptResponse.finishReason => "end_turn"
// promptResponse.elapsedMs => 888 // milliseconds elapsed
// promptResponse.raw => provider's raw completion response object, no mapping
// promptResponse.rawBody => the exact body object passed to the provider's completion endpoint
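The price fields above come out of the library's cost model. As a rough sketch of how such a mapping works, here is a standalone helper; the per-token rates and all names below are illustrative assumptions, not cross-llm's actual tables or API:

```typescript
// Hypothetical per-token USD rates, keyed by model id (illustrative only).
const RATES: Record<string, { input: number; output: number }> = {
  "claude-3-haiku-20240307": { input: 0.25 / 1_000_000, output: 1.25 / 1_000_000 },
};

interface Usage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

// Map token usage to a price breakdown, like the promptResponse.price object above.
function priceFor(model: string, usage: Usage) {
  const rate = RATES[model];
  if (!rate) throw new Error(`no cost model for ${model}`);
  const input = usage.inputTokens * rate.input;
  const output = usage.outputTokens * rate.output;
  return { input, output, total: input + output };
}

// Same numbers as the example above: 42 input + 12 output tokens on claude-3-haiku.
const p = priceFor("claude-3-haiku-20240307", {
  inputTokens: 42,
  outputTokens: 12,
  totalTokens: 54,
});
// p.input ≈ 0.0000105, p.output ≈ 0.000015, p.total ≈ 0.0000255
```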

Text Embedding

import { embed } from "cross-llm";

const textEmbedding = await embed(["Let's have fun with JSON, shall we?"], "voyageai", {
  model: "voyage-large-2-instruct",
}, { apiKey: import.meta.env[`voyageai_api_key`], });

// textEmbedding.data[0].embedding => [0.1134245, ...] // n-dimensional embedding vector
// textEmbedding.data[0].index => 0
// textEmbedding.usage.totalTokens => 23
// textEmbedding.price.total => calculated price
// textEmbedding.elapsedMs => 564 // in milliseconds
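Embedding vectors like the one returned above are typically compared by cosine similarity. A minimal helper, independent of cross-llm (the function name is my own):

```typescript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] for real-valued embeddings.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have the same length");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1; orthogonal vectors score 0.
cosineSimilarity([1, 0], [1, 0]); // 1
cosineSimilarity([1, 0], [0, 1]); // 0
```

In practice you would pass two `textEmbedding.data[i].embedding` arrays to rank documents against a query.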

Multi-Message Prompt, Streaming

import { promptStreaming, type PromptFinishReason, type Usage, type Price } from "cross-llm";

await promptStreaming(
  [
    {
      role: "user",
      content: "Let's have fun with JSON, shall we?",
    },
    {
      role: "assistant",
      content: "Yeah. Let's have fun with JSON.",
    },
    {
      role: "user",
      content: "Respond with JSON: { works: true }",
    },
  ],
  "openai",
  async (partialText: string, elapsedMs: number) => {
    // onChunk

    // stream-write to terminal
    process.stdout.write(partialText);
  },
  async (fullText: string, 
    elapsedMs: number,
    usage: Usage,
    finishReason: PromptFinishReason,
    price: Price) => {

    // onStop
    console.log("")
    console.log("parsed JSON", JSON.parse(fullText));
    console.log("finishReason", finishReason);
    console.log("elapsedMs", elapsedMs);
    console.log("usage", usage);
    console.log("price", price);
  },
  async (error: unknown, elapsedMs: number) => {
    // onError
    console.log("error", error, elapsedMs, 'ms elapsed');
  },
  {
    model: "gpt-4-turbo",
    temperature: 0.7,
    response_format: {
      type: "json_object",
    }
  },
  {
    // union of options passed down, mapped internally
    apiKey: import.meta.env[`openai_api_key`],
  },
);
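The callback flow above (onChunk per streamed delta, then onStop once with the accumulated full text) can be exercised without any network call. A small sketch with hypothetical names, mirroring that contract:

```typescript
// Drives onChunk for every delta and onStop once with the accumulated text,
// imitating the promptStreaming callback order shown above.
async function replayChunks(
  chunks: string[],
  onChunk: (partialText: string) => Promise<void>,
  onStop: (fullText: string) => Promise<void>,
): Promise<void> {
  let fullText = "";
  for (const chunk of chunks) {
    fullText += chunk;
    await onChunk(chunk);
  }
  await onStop(fullText);
}

// Example: reassembling a streamed JSON response before parsing it.
await replayChunks(
  ['{\n  "works"', ": true\n}"],
  async (chunk) => {
    process.stdout.write(chunk); // stream-write, as in the onChunk above
  },
  async (fullText) => {
    console.log("\nparsed JSON", JSON.parse(fullText));
  },
);
```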
  3. 📋 Copy & Paste -> enjoy! 🎉

🔥 Contributing

Simply create an issue, or fork this repository, clone it, and open a Pull Request (PR). I'm only implementing the features, AI model providers, and cost-model mappings that I need myself, so feel free to add your models or implement new AI providers. Every contribution is very welcome! 🤗

List/verify supported models

Please verify that your model/provider has been added correctly in ./src/models.

npm run print-models

Write and verify example code

Please add example code for when you implement a new AI provider in ./examples.

npm run example openai.ts

or

npm run example voyageai-embedding.ts

Write tests for new AI providers

Please write and run unit/integration/e2e tests using jest by creating ./src/*.spec.ts test suites:

npm run test

Build a release

Run the following command to update the ./dist files:

npm run build

Create a new NPM release build:

npm pack

Check the package contents for integrity.

npm publish