
multillama v1.2.3

A TypeScript library for interacting with multiple language models via orchestrated pipelines.

Downloads: 1,100

MultiLlama

MultiLlama 🦙🦙🦙 is a TypeScript framework for working with multiple Large Language Models (LLMs) at once. It unifies different AI models behind a single interface, so you can build dynamic decision flows and manage complex processes that draw on the strengths of each model.

Supported Services

MultiLlama currently supports the following services:

  • OpenAI
  • Ollama
  • Anthropic
  • Gemini (coming soon)
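
Anthropic is listed as supported, but the examples in this README only exercise OpenAI and Ollama. If the package follows the same pattern shown in Getting Started below, wiring it up might look like the following sketch; note that the AnthropicAdapter import is an assumption by analogy with the documented OpenAIAdapter and OllamaAdapter exports, not something verified against the package.

import { AnthropicAdapter } from 'multillama'; // assumed export, by analogy only

// Hypothetical Anthropic service configuration, mirroring the OpenAI example
// in Getting Started below.
const anthropicService = {
  adapter: new AnthropicAdapter(),
  apiKey: 'your-anthropic-api-key',
};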

Table of Contents

  • Features
  • Installation
  • Getting Started
  • Usage Examples

Features

  • Unified Interface: Interact with multiple language models through a single, consistent API.
  • Pipeline Processing: Build complex processing pipelines with conditional branching and context management.
  • Extensibility: Easily add support for new models and services via adapters (see the sketch after this list).
  • Configurable: Initialize and manage configurations from code or external files.
  • Spinner Integration: Built-in support for CLI spinners to enhance user experience during processing.
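
The adapter contract itself isn't documented in this README, so the following is only a sketch of what a custom adapter could look like, assuming a single chat-style method by analogy with how models are invoked through useModel(model, messages). Check the multillama source for the real interface before relying on this.

// All names below (ChatMessage, MyHttpAdapter, the chat method, and the
// /v1/chat endpoint) are hypothetical illustrations, not multillama APIs.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Hypothetical adapter for a self-hosted HTTP model server.
class MyHttpAdapter {
  constructor(private baseUrl: string) {}

  // Assumed method shape: take a model name and messages, return the reply text.
  async chat(model: string, messages: ChatMessage[]): Promise<string> {
    const res = await fetch(`${this.baseUrl}/v1/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, messages }),
    });
    const data = await res.json();
    return data.output;
  }
}

// A service entry would then reference the custom adapter, mirroring the
// built-in ones used in Getting Started below.
const myService = { adapter: new MyHttpAdapter('http://localhost:8080') };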

Installation

To install MultiLlama, use npm:

npm install multillama

Getting Started

First, import the necessary classes and initialize the MultiLlama instance with your configuration.

import { MultiLlama, OpenAIAdapter, OllamaAdapter } from 'multillama';

// Define service configurations
const openaiService = {
  adapter: new OpenAIAdapter(),
  apiKey: 'your-openai-api-key',
};

const ollamaService = {
  adapter: new OllamaAdapter(),
};

// Define model configurations
const models = {
  gpt4: {
    service: openaiService,
    name: 'gpt-4',
    response_format: 'json',
  },
  llama: {
    service: ollamaService,
    name: 'llama-2',
    response_format: 'text',
  },
};

// Initialize MultiLlama
MultiLlama.initialize({
  services: {
    openai: openaiService,
    ollama: ollamaService,
  },
  models,
  spinnerConfig: {
    loadingMessage: 'Processing...',
    successMessage: 'Done!',
    errorMessage: 'An error occurred.',
  },
});
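
The Features list also mentions initializing configuration from external files. This README doesn't document a built-in file loader, so the sketch below shows one plain-Node approach under that assumption: keep the JSON-serializable parts (such as spinnerConfig) in a file and attach adapter instances in code. The multillama.config.json filename is an arbitrary choice.

import { readFileSync } from 'node:fs';
import { MultiLlama, OpenAIAdapter, OllamaAdapter } from 'multillama';

// multillama.config.json (hypothetical file) might contain, e.g.:
// { "spinnerConfig": { "loadingMessage": "Processing...", "successMessage": "Done!", "errorMessage": "An error occurred." } }
const fileConfig = JSON.parse(readFileSync('multillama.config.json', 'utf8'));

// Adapter instances can't be serialized to JSON, so they are attached here.
const openaiService = {
  adapter: new OpenAIAdapter(),
  apiKey: process.env.OPENAI_API_KEY ?? 'your-openai-api-key',
};
const ollamaService = { adapter: new OllamaAdapter() };

MultiLlama.initialize({
  services: { openai: openaiService, ollama: ollamaService },
  models: {
    gpt4: { service: openaiService, name: 'gpt-4', response_format: 'json' },
    llama: { service: ollamaService, name: 'llama-2', response_format: 'text' },
  },
  ...fileConfig, // merges in spinnerConfig (and any other serializable options)
});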

Usage Examples

Basic Usage

Use a specific model to generate a response to a prompt.

const multillama = new MultiLlama();

async function generateResponse() {
  const prompt = 'What is the capital of France?';
  const response = await multillama.useModel('gpt4', [{role: 'user', content: prompt}]);
  console.log(response);
}

generateResponse();

Output:

Paris
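
Because the interface is unified, switching providers only changes the model key; the same call shape reaches the Ollama-backed model configured in Getting Started:

const multillama = new MultiLlama();

async function askLocalModel() {
  const prompt = 'What is the capital of France?';
  // Identical message shape; the 'llama' key routes to the Ollama service.
  const response = await multillama.useModel('llama', [{ role: 'user', content: prompt }]);
  console.log(response);
}

askLocalModel();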

Creating a Pipeline

Create a processing pipeline with conditional steps and branching.

import { Pipeline } from 'multillama';

async function processInput(userInput: string) {
  const multillama = new MultiLlama();
  const pipeline = new Pipeline<string>();
  pipeline.setEnableLogging(true);

  // Initial Step: Analyze the input
  const initialStep = pipeline.addStep(async (input, context) => {
    // Determine the type of question
    const analysisPrompt = `Analyze the following question and categorize it: "${input}"`;
    const response = await multillama.useModel('gpt4', [{role: 'user', content: analysisPrompt}]);
    if (response.includes('weather')) {
      return 'weather_question';
    } else {
      return 'general_question';
    }
  });

  // Branch for weather-related questions
  const weatherStep = pipeline.addStep(async (input, context) => {
    const weatherPrompt = `Provide a weather report for "${context.initialInput}"`;
    return await multillama.useModel('gpt4', [{role: 'user', content: weatherPrompt}]);
  });

  // Branch for general questions
  const generalStep = pipeline.addStep(async (input, context) => {
    return await multillama.useModel('llama', [{role: 'user', content: context.initialInput}]);
  });

  // Set up branching
  pipeline.addBranch(initialStep, 'weather_question', weatherStep);
  pipeline.addBranch(initialStep, 'general_question', generalStep);

  // Execute the pipeline
  const result = await pipeline.execute(userInput);
  console.log(result);
}

processInput('What is the weather like in New York?');

Output:

The current weather in New York is sunny with a temperature of 25°C.
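
Branching is optional as far as this README shows: assuming steps run in the order they are added when no branches are attached (implied by the example above but not stated outright), a linear flow needs only addStep and execute. A small sketch with purely illustrative prompts:

import { MultiLlama, Pipeline } from 'multillama';

async function summarizeAndTranslate(text: string) {
  const multillama = new MultiLlama();
  const pipeline = new Pipeline<string>();

  // Step 1: summarize with the stronger hosted model.
  pipeline.addStep(async (input) => {
    return await multillama.useModel('gpt4', [
      { role: 'user', content: `Summarize in one sentence: "${input}"` },
    ]);
  });

  // Step 2: translate the summary with the local model.
  pipeline.addStep(async (input) => {
    return await multillama.useModel('llama', [
      { role: 'user', content: `Translate to French: "${input}"` },
    ]);
  });

  const result = await pipeline.execute(text);
  console.log(result);
}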

Happy Coding! 🦙🎉