
together-ai-sdk v0.0.7

Together AI SDK

A 100% TypeScript client library for the together.ai API.

Features:

  • Robust parsing and convenient wrapper for stream events

  • Types for all request parameters, responses, and available models

  • Automatic retry after failed API response with customizable cooldowns

  • Allows for custom fetch implementation for testing or logging

  • Browser support (although not recommended for security reasons)

  • 100% test coverage

  • No library dependencies

This project is an open source community effort. It is not sponsored by the Together AI company.

If you have bugs or feature requests for the client library, feel free to submit an issue. If you have bugs or feature requests for the Together AI company, please contact them directly.

Installation

Install with NPM:

npm install together-ai-sdk

Usage

See the example scripts for more complete implementations.

Begin by instantiating a client with your API key:

import { togetherClient } from 'together-ai-sdk'

const client = togetherClient({ apiKey: 'xxx' })

Client Configuration

There are a number of configuration options supported to customize the behavior of the client.

apiKey - string

This is a required value that stores the API key for authenticating requests. Sign up for a key on together.ai's website.

address - string

An optional value to override the address that requests are sent to. Defaults to api.together.xyz.

protocol - 'http' | 'https'

An optional value to override the protocol used in the requests. Defaults to https.

retryCooldowns - number[]

An optional value to set custom retry cooldowns for failed requests. Waits the specified number of milliseconds before retrying. The number of elements in this array determines how many retries are performed. Set to [] to disable retrying. Defaults to [1000, 5000, 30000].
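To illustrate the semantics of retryCooldowns, here is a minimal sketch of retry-with-cooldowns behavior. This is an assumption about the general pattern, not the library's actual internals; the function names are hypothetical.

```typescript
// Hypothetical sketch: retry once per cooldown entry, waiting the given
// number of milliseconds between attempts. An empty array means no retries.
async function withRetries<T>(
  attempt: () => Promise<T>,
  shouldRetry: (result: T) => boolean,
  cooldowns: number[]
): Promise<T> {
  let result = await attempt()
  for (const ms of cooldowns) {
    if (!shouldRetry(result)) break
    await new Promise(resolve => setTimeout(resolve, ms))
    result = await attempt()
  }
  return result
}
```

With the default of [1000, 5000, 30000], a failing request would be attempted up to four times in total, with progressively longer waits.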

customFetch - typeof fetch

A custom fetch function used to send requests. Useful for testing, logging, or substituting a different fetch implementation. This library only uses the following fields from the fetch response:

  • status
  • body [Expecting stream reader]
  • json()

Defaults to the built-in fetch.

NOTE: If you are running in the browser, you must set this field to: customFetch: window.fetch.bind(window).
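For illustration, here is a minimal logging wrapper that satisfies the typeof fetch shape expected by customFetch. This is a sketch, not part of the library; the withLogging name is hypothetical.

```typescript
// Hypothetical logging wrapper: wraps any fetch-compatible function and
// logs the request URL and response status while passing the result through.
const withLogging = (baseFetch: typeof fetch): typeof fetch =>
  async (input, init) => {
    console.log('Request:', input.toString())
    const response = await baseFetch(input, init)
    console.log('Response status:', response.status)
    return response
  }

// Usage: togetherClient({ apiKey: 'xxx', customFetch: withLogging(fetch) })
```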

Chat

To perform a chat inference, use the chat method on the client object:

const result = await client.chat({
  model: TogetherChatModel.LLaMA_2_Chat_70B, // or togetherModel.chat.meta.llamaChat.b70
  messages: [{
    role: 'user',
    content: 'Hello, how are you?'
  }]
})

Users can use either the TogetherChatModel enum for a specific chat model, or the togetherModel object which enumerates every model by the inference type, organization, name, and size.

This inference request will wait until the LLM has finished processing to return the response from the API.

Streaming Chat

To stream the inference while it is still in progress, simply add a callback to the request parameters:

const result = await client.chat({
  model: TogetherChatModel.LLaMA_2_Chat_70B,
  messages: [{
    role: 'user',
    content: 'Hello, how are you?'
  }],
  streamCallback: v => console.log(v)
})

The streamCallback function will get called for every event sent by the API. It will still return the entire response when completed.

Language

To perform a language inference, use the language method on the client object:

const result = await client.language({
  model: TogetherLanguageModel.Falcon_40B,
  prompt: 'The capital of France is'
})

The same stream callback function can be added to the language request as well.

Code

The same system works for the code inference:

const result = await client.code({
  model: TogetherCodeModel.Code_Llama_Python_13B,
  prompt: '# Write a function for fibonacci'
})

The same stream callback function can be added to the code request as well.

Image

To perform an image inference, use the image method on the client object:

const result = await client.image({
  model: TogetherImageModel.Stable_Diffusion_XL_1_0,
  prompt: 'A picture of a cat',
  width: 1024,
  height: 1024,
  n: 2
})

The width and height parameters determine the size of the image in pixels. The n parameter determines how many images to generate. The stream callback is not available on image requests.

Embedding

To perform an embedding, use the embedding method on the client object:

const result = await client.embedding({
  model: TogetherEmbeddingModel.BERT,
  input: 'Hello, how are you?'
})
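
Embedding vectors are typically compared with cosine similarity. As a minimal sketch, independent of the SDK (the function name here is hypothetical):

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```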