
generative-ts

v0.1.0-alpha.6

Published

simple, type-safe, isomorphic LLM interactions (with power)

Downloads

10

Readme

generative-ts

A TypeScript library for building LLM applications and agents.


Install

To install everything:

npm i generative-ts

You can also install scoped packages individually if you want to optimize your builds further (see Packages below).
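For example, a granular install of just the core interfaces plus a single provider might look like this (package names taken from the Packages table below):

```shell
# Install only the pieces you need instead of the full generative-ts bundle
npm i @generative-ts/core @generative-ts/gcloud-vertex-ai
```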

Usage

AWS Bedrock

API docs: createAwsBedrockModelProvider

import {
  AmazonTitanTextApi,
  createAwsBedrockModelProvider
} from "generative-ts";

// Bedrock supports many different APIs and models. See API docs (above) for full list.
const titanText = createAwsBedrockModelProvider({
  api: AmazonTitanTextApi,
  modelId: "amazon.titan-text-express-v1",
  // If your code is running in an AWS Environment (eg, Lambda) authorization will happen automatically. Otherwise, explicitly pass in `auth`
});

const response = await titanText.sendRequest({
  $prompt: "Brief history of NY Mets:",
  // all other options for the specified `api` available here
});

console.log(response.results[0]?.outputText);

Cohere

API docs: createCohereModelProvider

import { createCohereModelProvider } from "generative-ts";

const commandR = createCohereModelProvider({
  modelId: "command-r-plus", // Cohere defined model ID
  // you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await commandR.sendRequest({
  $prompt: "Brief History of NY Mets:",
  preamble: "Talk like Jafar from Aladdin",
  // all other Cohere /generate options available here
});

console.log(response.text);

Google Cloud VertexAI

API docs: createVertexAiModelProvider

import { createVertexAiModelProvider } from "@generative-ts/gcloud-vertex-ai";

const gemini = await createVertexAiModelProvider({
  modelId: "gemini-1.0-pro", // VertexAI defined model ID
  // you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await gemini.sendRequest({
  $prompt: "Brief History of NY Mets:",
  // all other Gemini options available here
});

console.log(response.data.candidates[0]);

Groq

API docs: createGroqModelProvider

import { createGroqModelProvider } from "generative-ts";

const llama3 = createGroqModelProvider({
  modelId: "llama3-70b-8192", // Groq defined model ID
  // you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await llama3.sendRequest({
  $prompt: "Brief History of NY Mets:",
  // all other OpenAI ChatCompletion options available here (Groq uses the OpenAI ChatCompletion API for all the models it hosts)
});

console.log(response.choices[0]?.message.content);

Huggingface Inference

API docs: createHuggingfaceInferenceModelProvider

import { 
  createHuggingfaceInferenceModelProvider, 
  HfTextGenerationTaskApi 
} from "generative-ts";

// Huggingface Inference supports many different APIs and models. See API docs (above) for full list.
const gpt2 = createHuggingfaceInferenceModelProvider({
  api: HfTextGenerationTaskApi,
  modelId: "gpt2",
  // you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await gpt2.sendRequest({
  $prompt: "Hello,",
  // all other options for the specified `api` available here
});

console.log(response[0]?.generated_text);

LMStudio

API docs: createLmStudioModelProvider

import { createLmStudioModelProvider } from "generative-ts";

const llama3 = createLmStudioModelProvider({
  modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF", // the ID of a model you have downloaded in LMStudio
});

const response = await llama3.sendRequest({
  $prompt: "Brief History of NY Mets:",
  // all other OpenAI ChatCompletion options available here (LMStudio uses the OpenAI ChatCompletion API for all the models it hosts)
});

console.log(response.choices[0]?.message.content);

Mistral

API docs: createMistralModelProvider

import { createMistralModelProvider } from "generative-ts";

const mistralLarge = createMistralModelProvider({
  modelId: "mistral-large-latest", // Mistral defined model ID
  // you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await mistralLarge.sendRequest({
  $prompt: "Brief History of NY Mets:",
  // all other Mistral ChatCompletion API options available here
});

console.log(response.choices[0]?.message.content);

OpenAI

API docs: createOpenAiChatModelProvider

import { createOpenAiChatModelProvider } from "generative-ts";

const gpt = createOpenAiChatModelProvider({
  modelId: "gpt-4-turbo", // OpenAI defined model ID
  // you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await gpt.sendRequest({
  $prompt: "Brief History of NY Mets:",
  max_tokens: 100,
  // all other OpenAI ChatCompletion options available here
});

console.log(response.choices[0]?.message.content);

Custom HTTP Client

TODO

Supported Providers and Models

See Usage for how to use each provider.

|Provider|Models|Model APIs|
|-|-|-|
|AWS Bedrock|Multiple hosted models|Native model APIs|
|Cohere|Command / Command R+|Cohere /generate and /chat|
|Google Vertex AI|Gemini x.y|Gemini; OpenAI in preview|
|Groq|Multiple hosted models|OpenAI ChatCompletion|
|Huggingface Inference|Open-source|Huggingface Inference APIs|
|LMStudio (localhost)|Open-source (must be downloaded)|OpenAI ChatCompletion|
|Mistral|Mistral x.y|Mistral ChatCompletion|
|OpenAI|GPT x.y|OpenAI ChatCompletion|
|Azure (coming soon)|||
|Replicate (coming soon)|||
|Anthropic (coming soon)|||
|Fireworks (coming soon)|||

It's also easy to add your own TODO LINK
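The extension points themselves aren't documented here, but conceptually a provider is just a factory that returns an object exposing `sendRequest`. The following is a minimal sketch of that shape using hypothetical names, not generative-ts's actual interfaces:

```typescript
// Hypothetical types, for illustration only -- these are NOT the
// actual generative-ts interfaces.
interface ModelRequestOptions {
  $prompt: string;
}

interface ModelProvider<TResponse> {
  sendRequest(options: ModelRequestOptions): Promise<TResponse>;
}

// A toy "echo" provider demonstrating the factory shape. A real
// provider would POST the prompt to the model's HTTP API instead.
function createEchoModelProvider(
  modelId: string,
): ModelProvider<{ text: string }> {
  return {
    async sendRequest({ $prompt }) {
      return { text: `[${modelId}] ${$prompt}` };
    },
  };
}
```

Usage mirrors the built-in providers: create the provider once with a model ID, then call `sendRequest` with a `$prompt`.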

Packages

If you're using a modern bundler, just install generative-ts to get everything. Modern bundlers support tree-shaking, so your final bundle won't include unused code. (Note: we distribute both ESM and CJS bundles for compatibility.) If you prefer to avoid unnecessary downloads, or you're operating under constraints where tree-shaking isn't an option, we offer scoped packages under @generative-ts/ with specific functionality for more fine-grained installs.

|Package|Description|Details|
|-|-|-|
|`generative-ts`|Everything|Includes all scoped packages listed below|
|`@generative-ts/core`|Core functionality (zero dependencies)|Interfaces, classes, utilities, etc.|
|`@generative-ts/gcloud-vertex-ai`|Google Cloud VertexAI ModelProvider|Uses Application Default Credentials (ADC) to properly authenticate in GCloud environments|
|`@generative-ts/aws-bedrock`|AWS Bedrock ModelProvider|Uses aws4 to properly authenticate when running in AWS environments|

Report Bugs / Submit Feature Requests

Please submit all issues here: https://github.com/Econify/generative-ts/issues

Contributing

To get started developing, optionally fork and then clone the repository and run:

nvm use
npm ci

To run examples and integration/e2e tests, create a `.env` file by running `cp .env.example .env`, then fill in values where necessary.

Publishing

The main `generative-ts` package and the scoped `@generative-ts` packages are both controlled by the `generative-ts` npm organization. Releases are published via a CircleCI job when a tag whose name starts with `release/` is pushed. The job requires an npm token with publishing permissions for both `generative-ts` and the `@generative-ts` scope. Currently this is a "granular" token set to expire every 30 days, created by @jnaglick and set in a CircleCI context.
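For example, cutting a release (the version number below is hypothetical) is just a matter of pushing an appropriately named tag:

```shell
# Tag names must start with "release/" to trigger the CircleCI publish job
git tag release/0.1.0-alpha.7
git push origin release/0.1.0-alpha.7
```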