@unsync/ai-gateway v1.0.18
# Gateway

```sh
npx @portkey-ai/gateway
```

Route to 100+ LLMs with 1 fast & friendly API.
Portkey's AI Gateway is the interface between your app and hosted LLMs. It streamlines API requests to OpenAI, Anthropic, Mistral, Llama 2, Anyscale, Google Gemini, and more with a unified API.

✅ Blazing fast (9.9x faster) with a tiny footprint (~45kb installed)
✅ Load balance across multiple models, providers, and keys
✅ Fallbacks make sure your app stays resilient
✅ Automatic retries with exponential backoff come by default
✅ Configurable request timeouts to easily handle unresponsive LLM requests
✅ Plug-in middleware as needed
✅ Battle tested over 100B tokens
✅ Enterprise-ready for enhanced security, scale, and custom deployments
Enterprise Version: Read more here
## Getting Started

### Installation

If you're familiar with Node.js and npx, you can run your private AI gateway locally. (Other deployment options)

```sh
npx @portkey-ai/gateway
```

Your AI Gateway is now running on http://localhost:8787 🚀
### Usage

Let's try making a chat completions call to OpenAI through the AI gateway:

```sh
curl '127.0.0.1:8787/v1/chat/completions' \
  -H 'x-portkey-provider: openai' \
  -H "Authorization: Bearer $OPENAI_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "Say this is a test."}], "max_tokens": 20, "model": "gpt-4"}'
```
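The same call can be made from Node.js. A minimal sketch using the built-in `fetch` (Node 18+); the URL, headers, and body mirror the curl example above:

```javascript
// Build the same chat completions request the curl example sends,
// assuming the gateway is running locally on port 8787.
const GATEWAY_URL = "http://127.0.0.1:8787/v1/chat/completions";

function buildRequest(apiKey) {
  return {
    method: "POST",
    headers: {
      "x-portkey-provider": "openai",
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Say this is a test." }],
      max_tokens: 20,
      model: "gpt-4",
    }),
  };
}

// Uncomment to actually send the request:
// fetch(GATEWAY_URL, buildRequest(process.env.OPENAI_KEY))
//   .then((res) => res.json())
//   .then(console.log);
```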
## Supported Providers

| Provider | Support | Stream | Supported Endpoints |
|---|---|---|---|
| OpenAI | ✅ | ✅ | `/completions`, `/chat/completions`, `/embeddings`, `/assistants`, `/threads`, `/runs`, `/images/generations`, `/audio/*` |
| Azure OpenAI | ✅ | ✅ | `/completions`, `/chat/completions`, `/embeddings` |
| Anyscale | ✅ | ✅ | `/chat/completions` |
| Google Gemini & Palm | ✅ | ✅ | `/generateMessage`, `/generateText`, `/embedText` |
| Anthropic | ✅ | ✅ | `/messages`, `/complete` |
| Cohere | ✅ | ✅ | `/generate`, `/embed`, `/rerank` |
| Together AI | ✅ | ✅ | `/chat/completions`, `/completions`, `/inference` |
| Perplexity | ✅ | ✅ | `/chat/completions` |
| Mistral | ✅ | ✅ | `/chat/completions`, `/embeddings` |
| Nomic | ✅ | ✅ | `/embeddings` |
| AI21 | ✅ | ✅ | `/complete`, `/chat`, `/embed` |
| Stability AI | ✅ | ✅ | `/generation/{engine_id}/text-to-image` |
| DeepInfra | ✅ | ✅ | `/inference` |
| Ollama | ✅ | ✅ | `/chat/completions` |
| Cloudflare Workers AI | ✅ | ✅ | `/completions`, `/chat/completions` |
## Features

### Configuring the AI Gateway

The AI gateway supports configs to enable versatile routing strategies like fallbacks, load balancing, retries, and more. You can use these configs while making the OpenAI call through the `x-portkey-config` header:
```js
// Using the OpenAI JS SDK
const client = new OpenAI({
  baseURL: "http://127.0.0.1:8787", // The gateway URL
  defaultHeaders: {
    'x-portkey-config': {.. your config here ..}
  }
});
```
For example, this config retries up to 5 times and falls back from OpenAI to Gemini Pro:

```json
{
  "retry": { "count": 5 },
  "strategy": { "mode": "fallback" },
  "targets": [{
    "provider": "openai",
    "api_key": "sk-***"
  },{
    "provider": "google",
    "api_key": "gt5***",
    "override_params": {"model": "gemini-pro"}
  }]
}
```
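Since HTTP header values are strings, the config object is typically serialized before it is set on the header. A sketch (the config shape mirrors the fallback example above; the environment variable names are placeholders):

```javascript
// Serialize a fallback config for the x-portkey-config header.
const fallbackConfig = {
  retry: { count: 5 },
  strategy: { mode: "fallback" },
  targets: [
    { provider: "openai", api_key: process.env.OPENAI_KEY },
    {
      provider: "google",
      api_key: process.env.GOOGLE_KEY,
      override_params: { model: "gemini-pro" },
    },
  ],
};

// Header values must be strings, so JSON-stringify the config.
const defaultHeaders = {
  "x-portkey-config": JSON.stringify(fallbackConfig),
};
```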
And this config splits traffic equally across two OpenAI keys:

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [{
    "provider": "openai",
    "api_key": "sk-***",
    "weight": "0.5"
  },{
    "provider": "openai",
    "api_key": "sk-***",
    "weight": "0.5"
  }]
}
```
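For intuition, weighted load balancing amounts to a weighted random pick over targets. This is an illustration only, not the gateway's internal code:

```javascript
// Pick one target per request, with probability proportional to its weight.
function pickTarget(targets) {
  const total = targets.reduce((sum, t) => sum + Number(t.weight), 0);
  let r = Math.random() * total;
  for (const target of targets) {
    r -= Number(target.weight);
    if (r <= 0) return target;
  }
  return targets[targets.length - 1]; // guard against floating-point drift
}
```

With two targets weighted 0.5 each, every request has an even chance of landing on either key, spreading load and rate-limit pressure.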
Read more about the config object.
## Supported SDKs

| Language | Supported SDKs |
|---|---|
| Node.js / JS / TS | Portkey SDK, OpenAI SDK, LangchainJS, LlamaIndex.TS |
| Python | Portkey SDK, OpenAI SDK, Langchain, LlamaIndex |
| Go | go-openai |
| Java | openai-java |
| Rust | async-openai |
| Ruby | ruby-openai |
## Deploying AI Gateway
See docs on installing the AI Gateway locally or deploying it on popular locations.
- Deploy to Cloudflare Workers
- Deploy using Docker
- Deploy using Docker Compose
- Deploy to Zeabur
- Run a Node.js server
## Gateway Enterprise Version
Make your AI app more reliable and forward compatible, while ensuring complete data security and privacy.
✅ Secure Key Management - for role-based access control and tracking
✅ Simple & Semantic Caching - to serve repeat queries faster & save costs
✅ Access Control & Inbound Rules - to control which IPs and geos can connect to your deployments
✅ PII Redaction - to automatically remove sensitive data from your requests and prevent inadvertent exposure
✅ SOC2, ISO, HIPAA, and GDPR Compliance - for best security practices
✅ Professional Support - along with feature prioritization
Schedule a call to discuss enterprise deployments
## Roadmap

- Support for more providers. Missing a provider or LLM platform? Raise a feature request.
- Enhanced load balancing features to optimize resource use across different models and providers.
- More robust fallback and retry strategies to further improve the reliability of requests.
- Increased customizability of the unified API signature to cater to more diverse use cases.
Participate in Roadmap discussions here.
## Contributing

The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines here.
Bug Report? File here | Feature Request? File here
## Community
Join our growing community around the world, for help, ideas, and discussions on AI.