

LLM Proxy

llm-proxy is a TypeScript library that provides a unified interface for interacting with multiple large language model (LLM) providers, such as OpenAI and Anthropic. The library simplifies cross-provider communication by standardizing input and output formats, allowing users to call different providers with consistent request and response structures. This proxy library also supports both streaming and non-streaming responses.
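
To make the unified format concrete, the sketch below shows one plausible shape for the shared request and response types. These names and fields are assumptions for illustration only; they are not the library's actual exported types.

// Hypothetical unified request/response shapes; all names and fields
// here are assumptions, not llm-proxy's real type definitions.
interface UnifiedMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface UnifiedChatRequest {
  model: string;               // e.g. "claude-3-sonnet"; also drives provider detection
  messages: UnifiedMessage[];
  maxTokens?: number;
  stream?: boolean;            // streamed vs. non-streamed responses
}

interface UnifiedChatResponse {
  model: string;
  content: string;             // the assistant's reply
  usage?: { inputTokens: number; outputTokens: number };
}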

Features

  • Unified Interface: Send chat completion requests in a consistent format, regardless of the underlying LLM provider.

  • Automatic Provider Detection: The library determines the appropriate provider (OpenAI, Anthropic, etc.) based on the model specified in the request.

  • Streamed and Non-Streamed Responses: Separate functions handle streaming and non-streaming responses, giving flexibility in response handling.

  • Modular Design: Includes distinct middleware and service layers for handling provider-specific logic and request formatting.

Installation

Install llm-proxy via npm:

npm install llm-proxy

Usage

Detailed usage documentation is still to be written.
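
In the meantime, here is a minimal sketch of what a call could look like, assuming a single helper that accepts the unified request shape sketched above. The imported name, function signature, and option names are assumptions for illustration; consult the package's type definitions for the actual API.

// Hypothetical usage sketch: generateLLMResponse and its options are
// assumptions, not verified exports of llm-proxy.
import { generateLLMResponse } from "llm-proxy";

async function main() {
  // The proxy detects the provider from the model name (see the next section).
  const response = await generateLLMResponse({
    model: "claude-3-sonnet",                        // routed to Anthropic
    messages: [{ role: "user", content: "Hello!" }],
    maxTokens: 256,
  });

  console.log(response.content);
}

main().catch(console.error);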

Theory: How It Works

Workflow Overview

  1. User Request: The user sends a chat completion request in the unified format. The request is passed to the llm-proxy.

  2. Middleware Layer:

  • ProviderFinder: Identifies the intended provider (e.g., OpenAI, Anthropic) based on the model specified in the request.
  • InputFormatAdapter: Transforms the request from the unified format into the format expected by the identified provider.

  3. Service Layer:

  • ClientService: A general service interface that routes the request to the correct provider-specific service.
  • Provider-Specific Services: For example, AwsBedrockAnthropicService and OpenAIService handle the actual API communication with Anthropic via AWS Bedrock or with OpenAI directly.

  4. Response Handling: The OutputFormatAdapter transforms the provider-specific response back into the unified format.

  5. Return Response: The final response is returned to the user in the unified format. A code sketch of this end-to-end flow follows the list.
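
The sketch below summarizes steps 1 through 5 in code, using the unified shapes sketched in the introduction. The declared component signatures are guesses that only mirror the responsibilities listed above, not the library's real interfaces.

// The declarations below assume plausible shapes for the components named
// above; they are illustrations, not llm-proxy's actual signatures.
declare const ProviderFinder: { find(model: string): string };
declare const InputFormatAdapter: { adapt(req: UnifiedChatRequest, provider: string): unknown };
declare const OutputFormatAdapter: { adapt(res: unknown, provider: string): UnifiedChatResponse };
declare const ClientService: { for(provider: string): { send(req: unknown): Promise<unknown> } };

// End-to-end flow, mirroring steps 1-5 above.
async function handleRequest(unified: UnifiedChatRequest): Promise<UnifiedChatResponse> {
  const provider = ProviderFinder.find(unified.model);                // step 2: detect provider
  const providerRequest = InputFormatAdapter.adapt(unified, provider); // step 2: adapt input
  const service = ClientService.for(provider);                        // step 3: route to a service
  const providerResponse = await service.send(providerRequest);       // step 3: call the provider
  return OutputFormatAdapter.adapt(providerResponse, provider);       // steps 4-5: adapt and return
}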

Detailed Components

  1. Middleware Layer:

  • ProviderFinder: Determines which provider to use based on the model in the request (e.g., "Claude" indicates Anthropic); a minimal sketch of this check follows the list.
  • InputFormatAdapter: Adapts the request from the unified format to the provider's specific format.
  • OutputFormatAdapter: Converts the provider-specific response into the unified format.

  2. Service Layer:

  • ClientService: A high-level service that selects the appropriate provider service.
  • AwsBedrockAnthropicService: Handles requests and responses for Anthropic models via AWS Bedrock.
  • OpenAIService: Manages requests and responses for OpenAI models.
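
As noted above, the model-based detection in ProviderFinder might reduce to a simple name check. The matching rules below are assumptions; the only rule stated in this README is that "Claude" indicates Anthropic.

// Illustrative provider detection; the actual rules in ProviderFinder
// may differ. "Claude" -> Anthropic is the only rule stated in this README.
type Provider = "openai" | "anthropic";

function findProvider(model: string): Provider {
  const name = model.toLowerCase();
  if (name.includes("claude")) return "anthropic"; // e.g. "claude-3-sonnet"
  if (name.includes("gpt")) return "openai";       // e.g. "gpt-4o" (assumed)
  throw new Error(`Unsupported model: ${model}`);
}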

Architecture Diagram

Below is a flow diagram illustrating how llm-proxy processes requests.

[Figure: LLM Proxy flow diagram]

Contributing

Contributions are welcome! Please follow the standard GitHub flow for submitting issues and pull requests.

License

This project is licensed under the MIT License.