
@huggingface/inference

v2.8.1

TypeScript wrapper for the Hugging Face Inference Endpoints & Inference API

Downloads: 359,544

🤗 Hugging Face Inference Endpoints

A TypeScript-powered wrapper for the Hugging Face Inference Endpoints API. Learn more about Inference Endpoints at Hugging Face. It works with both the Inference API (serverless) and Inference Endpoints (dedicated).

Check out the full documentation.

You can also try out a live interactive notebook, see some demos on hf.co/huggingfacejs, or watch a Scrimba tutorial that explains how Inference Endpoints works.

Getting Started

Install

Node

npm install @huggingface/inference

pnpm add @huggingface/inference

yarn add @huggingface/inference

Deno

// esm.sh
import { HfInference } from "https://esm.sh/@huggingface/inference"
// or npm:
import { HfInference } from "npm:@huggingface/inference"

Initialize

import { HfInference } from '@huggingface/inference'

const hf = new HfInference('your access token')

Important note: using an access token is optional to get started; however, you will eventually be rate-limited. Join Hugging Face and then visit access tokens to generate your access token for free.

Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token.
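
As an illustration only, here is a minimal sketch of such a proxy using Node's built-in http module and the global fetch (Node 18+); the port, route, and hard-coded model URL are assumptions for the example, not part of this library:

// proxy.ts (hypothetical): keeps the access token on the server
import { createServer } from "node:http";

createServer(async (req, res) => {
  // Collect the incoming request body
  let body = "";
  for await (const chunk of req) body += chunk;

  // Forward it to the Inference API, attaching the secret token server-side
  const upstream = await fetch("https://api-inference.huggingface.co/models/gpt2", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body,
  });

  res.writeHead(upstream.status, { "Content-Type": "application/json" });
  res.end(await upstream.text());
}).listen(3000);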

Tree-shaking

You can import the functions you need directly from the module instead of using the HfInference class.

import { textGeneration } from "@huggingface/inference";

await textGeneration({
  accessToken: "hf_...",
  model: "model_or_endpoint",
  inputs: "The answer to the universe is", // your prompt
  parameters: { max_new_tokens: 50 }       // optional generation parameters
})

This will enable tree-shaking by your bundler.

Natural Language Processing

Text Generation

Generates text from an input prompt.

Demo

await hf.textGeneration({
  model: 'gpt2',
  inputs: 'The answer to the universe is'
})
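
The promise resolves to an object with a generated_text field (the same shape destructured in the Custom Inference Endpoints example further down):

const { generated_text } = await hf.textGeneration({
  model: 'gpt2',
  inputs: 'The answer to the universe is'
})
console.log(generated_text)

Use textGenerationStream to receive tokens as they are generated: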

for await (const output of hf.textGenerationStream({
  model: "google/flan-t5-xxl",
  inputs: 'repeat "one two three four"',
  parameters: { max_new_tokens: 250 }
})) {
  console.log(output.token.text, output.generated_text);
}

Text Generation (Chat Completion API Compatible)

Using the chatCompletion method, you can generate text with models compatible with the OpenAI Chat Completion API. All models served by TGI on Hugging Face support the Messages API.

Demo

// Non-streaming API
const out = await hf.chatCompletion({
  model: "mistralai/Mistral-7B-Instruct-v0.2",
  messages: [{ role: "user", content: "Complete this sentence: one plus one is equal " }],
  max_tokens: 500,
  temperature: 0.1,
  seed: 0,
});
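
// The non-streaming response follows the OpenAI-style schema, so (when a
// choice is returned) the generated text can be read with:
console.log(out.choices[0].message.content);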

// Streaming API
let out = "";
for await (const chunk of hf.chatCompletionStream({
  model: "mistralai/Mistral-7B-Instruct-v0.2",
  messages: [
    { role: "user", content: "Complete the equation 1+1= ,just the answer" },
  ],
  max_tokens: 500,
  temperature: 0.1,
  seed: 0,
})) {
  if (chunk.choices && chunk.choices.length > 0) {
    out += chunk.choices[0].delta.content;
  }
}

It's also possible to call Mistral or OpenAI endpoints directly:

const openai = new HfInference(OPENAI_TOKEN).endpoint("https://api.openai.com");

let out = "";
for await (const chunk of openai.chatCompletionStream({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "user", content: "Complete the equation 1+1= ,just the answer" },
  ],
  max_tokens: 500,
  temperature: 0.1,
  seed: 0,
})) {
  if (chunk.choices && chunk.choices.length > 0) {
    out += chunk.choices[0].delta.content;
  }
}

// For mistral AI:
// endpointUrl: "https://api.mistral.ai"
// model: "mistral-tiny"

Fill Mask

Tries to fill in a hole in a sentence with a missing word (a token, to be precise).

await hf.fillMask({
  model: 'bert-base-uncased',
  inputs: '[MASK] world!'
})

Summarization

Summarizes longer text into shorter text. Be careful: some models have a maximum input length.

await hf.summarization({
  model: 'facebook/bart-large-cnn',
  inputs:
    'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930.',
  parameters: {
    max_length: 100
  }
})

Question Answering

Answers questions based on the context you provide.

await hf.questionAnswering({
  model: 'deepset/roberta-base-squad2',
  inputs: {
    question: 'What is the capital of France?',
    context: 'The capital of France is Paris.'
  }
})
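
The resolved value contains the extracted answer together with a confidence score and the character offsets of the answer within the context (the standard question-answering output shape):

const { answer, score, start, end } = await hf.questionAnswering({
  model: 'deepset/roberta-base-squad2',
  inputs: {
    question: 'What is the capital of France?',
    context: 'The capital of France is Paris.'
  }
})
// answer is 'Paris'; score is the model's confidence; start/end locate it in the context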

Table Question Answering

await hf.tableQuestionAnswering({
  model: 'google/tapas-base-finetuned-wtq',
  inputs: {
    query: 'How many stars does the transformers repository have?',
    table: {
      Repository: ['Transformers', 'Datasets', 'Tokenizers'],
      Stars: ['36542', '4512', '3934'],
      Contributors: ['651', '77', '34'],
      'Programming language': ['Python', 'Python', 'Rust, Python and NodeJS']
    }
  }
})

Text Classification

Often used for sentiment analysis, this method assigns labels to the given text along with a probability score for each label.

await hf.textClassification({
  model: 'distilbert-base-uncased-finetuned-sst-2-english',
  inputs: 'I like you. I love you.'
})
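
The result is a list of label/score pairs (audioClassification and imageClassification below return the same shape); for instance, to read the top label:

const labels = await hf.textClassification({
  model: 'distilbert-base-uncased-finetuned-sst-2-english',
  inputs: 'I like you. I love you.'
})
console.log(labels[0].label, labels[0].score) // e.g. 'POSITIVE' and a probability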

Token Classification

Used for sentence parsing, either grammatical or via Named Entity Recognition (NER), to understand the keywords contained within text.

await hf.tokenClassification({
  model: 'dbmdz/bert-large-cased-finetuned-conll03-english',
  inputs: 'My name is Sarah Jessica Parker but you can call me Jessica'
})

Translation

Converts text from one language to another.

await hf.translation({
  model: 't5-base',
  inputs: 'My name is Wolfgang and I live in Berlin'
})

await hf.translation({
  model: 'facebook/mbart-large-50-many-to-many-mmt',
  inputs: 'My name is Wolfgang and I live in Berlin',
  parameters: {
    src_lang: 'en_XX',
    tgt_lang: 'fr_XX'
  }
})

Zero-Shot Classification

Checks how well an input text fits into a set of labels you provide.

await hf.zeroShotClassification({
  model: 'facebook/bart-large-mnli',
  inputs: [
    'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!'
  ],
  parameters: { candidate_labels: ['refund', 'legal', 'faq'] }
})

Conversational

This task covers any chatbot-like interaction. Models tend to have a short max_length, so check carefully whether a given model suits your needs when you require long-range context.

await hf.conversational({
  model: 'microsoft/DialoGPT-large',
  inputs: {
    past_user_inputs: ['Which movie is the best ?'],
    generated_responses: ['It is Die Hard for sure.'],
    text: 'Can you explain why ?'
  }
})

Sentence Similarity

Calculates the semantic similarity between a source sentence and a list of other sentences.

await hf.sentenceSimilarity({
  model: 'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
  inputs: {
    source_sentence: 'That is a happy person',
    sentences: [
      'That is a happy dog',
      'That is a very happy person',
      'Today is a sunny day'
    ]
  }
})
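
The call resolves to one similarity score per candidate sentence, in the same order, so the scores can be paired back up with their sentences:

const sentences = [
  'That is a happy dog',
  'That is a very happy person',
  'Today is a sunny day'
]
const scores = await hf.sentenceSimilarity({
  model: 'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
  inputs: { source_sentence: 'That is a happy person', sentences }
})
sentences.forEach((sentence, i) => console.log(scores[i].toFixed(3), sentence))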

Audio

Automatic Speech Recognition

Transcribes speech from an audio file.

Demo

import { readFileSync } from 'node:fs' // also assumed by the other file-based examples below

await hf.automaticSpeechRecognition({
  model: 'facebook/wav2vec2-large-960h-lv60-self',
  data: readFileSync('test/sample1.flac')
})

Audio Classification

Assigns labels to the given audio along with a probability score for each label.

Demo

await hf.audioClassification({
  model: 'superb/hubert-large-superb-er',
  data: readFileSync('test/sample1.flac')
})

Text To Speech

Generates natural-sounding speech from text input.

Interactive tutorial

await hf.textToSpeech({
  model: 'espnet/kan-bayashi_ljspeech_vits',
  inputs: 'Hello world!'
})
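
The call resolves to a Blob of audio data; in Node it can be written to disk like this (the file name is arbitrary):

import { writeFileSync } from 'node:fs'

const audio = await hf.textToSpeech({
  model: 'espnet/kan-bayashi_ljspeech_vits',
  inputs: 'Hello world!'
})
writeFileSync('hello.wav', Buffer.from(await audio.arrayBuffer()))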

Audio To Audio

Generates one or more audio outputs from an input audio file, commonly used for speech enhancement and source separation.

await hf.audioToAudio({
  model: 'speechbrain/sepformer-wham',
  data: readFileSync('test/sample1.flac')
})

Computer Vision

Image Classification

Assigns labels to a given image along with a probability score for each label.

Demo

await hf.imageClassification({
  data: readFileSync('test/cheetah.png'),
  model: 'google/vit-base-patch16-224'
})

Object Detection

Detects objects within an image and returns labels with corresponding bounding boxes and probability scores.

Demo

await hf.objectDetection({
  data: readFileSync('test/cats.png'),
  model: 'facebook/detr-resnet-50'
})

Image Segmentation

Detects segments within an image and returns labels with corresponding masks and probability scores.

await hf.imageSegmentation({
  data: readFileSync('test/cats.png'),
  model: 'facebook/detr-resnet-50-panoptic'
})

Image To Text

Outputs text from a given image, commonly used for captioning or optical character recognition.

await hf.imageToText({
  data: readFileSync('test/cats.png'),
  model: 'nlpconnect/vit-gpt2-image-captioning'
})

Text To Image

Creates an image from a text prompt.

Demo

await hf.textToImage({
  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
  model: 'stabilityai/stable-diffusion-2',
  parameters: {
    negative_prompt: 'blurry',
  }
})
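
As with text to speech, the resolved value is a Blob, here containing the generated image; a sketch for saving it in Node:

import { writeFileSync } from 'node:fs'

const image = await hf.textToImage({
  inputs: 'award winning high resolution photo of a giant tortoise',
  model: 'stabilityai/stable-diffusion-2'
})
writeFileSync('tortoise.png', Buffer.from(await image.arrayBuffer()))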

Image To Image

Image-to-image is the task of transforming a source image to match the characteristics of a target image or a target image domain.

Interactive tutorial

await hf.imageToImage({
  inputs: new Blob([readFileSync("test/stormtrooper_depth.png")]),
  parameters: {
    prompt: "elmo's lecture",
  },
  model: "lllyasviel/sd-controlnet-depth",
});

Zero Shot Image Classification

Checks how well an input image fits into a set of labels you provide.

await hf.zeroShotImageClassification({
  model: 'openai/clip-vit-large-patch14-336',
  inputs: {
    image: await (await fetch('https://placekitten.com/300/300')).blob()
  },
  parameters: {
    candidate_labels: ['cat', 'dog']
  }
})

Multimodal

Feature Extraction

This task reads some text and outputs raw float values that are usually consumed as part of a semantic database or semantic search.

await hf.featureExtraction({
  model: "sentence-transformers/distilbert-base-nli-mean-tokens",
  inputs: "That is a happy person",
});
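
For semantic search, these vectors are typically compared with cosine similarity; a small sketch, assuming the model returns one flat embedding vector per input:

const model = "sentence-transformers/distilbert-base-nli-mean-tokens";
const a = await hf.featureExtraction({ model, inputs: "That is a happy person" });
const b = await hf.featureExtraction({ model, inputs: "That is a very happy person" });

// Cosine similarity between two equal-length embedding vectors
const dot = (x, y) => x.reduce((sum, v, i) => sum + v * y[i], 0);
console.log(dot(a, b) / Math.sqrt(dot(a, a) * dot(b, b)));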

Visual Question Answering

Visual Question Answering is the task of answering open-ended questions based on an image. These models take an image and a natural-language question as input and output a natural-language answer.

Demo

await hf.visualQuestionAnswering({
  model: 'dandelin/vilt-b32-finetuned-vqa',
  inputs: {
    question: 'How many cats are lying down?',
    image: await (await fetch('https://placekitten.com/300/300')).blob()
  }
})

Document Question Answering

Document question answering models take a (document, question) pair as input and return an answer in natural language.

Demo

await hf.documentQuestionAnswering({
  model: 'impira/layoutlm-document-qa',
  inputs: {
    question: 'Invoice number?',
    image: await (await fetch('https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png')).blob(),
  }
})

Tabular

Tabular Regression

Tabular regression is the task of predicting a numerical value given a set of attributes.

await hf.tabularRegression({
  model: "scikit-learn/Fish-Weight",
  inputs: {
    data: {
      "Height": ["11.52", "12.48", "12.3778"],
      "Length1": ["23.2", "24", "23.9"],
      "Length2": ["25.4", "26.3", "26.5"],
      "Length3": ["30", "31.2", "31.1"],
      "Species": ["Bream", "Bream", "Bream"],
      "Width": ["4.02", "4.3056", "4.6961"]
    },
  },
})

Tabular Classification

Tabular classification is the task of predicting a target category (a group) based on a set of attributes.

await hf.tabularClassification({
  model: "vvmnnnkv/wine-quality",
  inputs: {
    data: {
      "fixed_acidity": ["7.4", "7.8", "10.3"],
      "volatile_acidity": ["0.7", "0.88", "0.32"],
      "citric_acid": ["0", "0", "0.45"],
      "residual_sugar": ["1.9", "2.6", "6.4"],
      "chlorides": ["0.076", "0.098", "0.073"],
      "free_sulfur_dioxide": ["11", "25", "5"],
      "total_sulfur_dioxide": ["34", "67", "13"],
      "density": ["0.9978", "0.9968", "0.9976"],
      "pH": ["3.51", "3.2", "3.23"],
      "sulphates": ["0.56", "0.68", "0.82"],
      "alcohol": ["9.4", "9.8", "12.6"]
    },
  },
})

Custom Calls

For models with custom parameters / outputs.

await hf.request({
  model: 'my-custom-model',
  inputs: 'hello world',
  parameters: {
    custom_param: 'some magic',
  }
})

// Custom streaming call, for models with custom parameters / outputs
for await (const output of hf.streamingRequest({
  model: 'my-custom-model',
  inputs: 'hello world',
  parameters: {
    custom_param: 'some magic',
  }
})) {
  console.log(output); // handle each streamed output chunk here
}

You can use any Chat Completion API-compatible provider with the chatCompletion method.

// Chat Completion Example
const MISTRAL_KEY = process.env.MISTRAL_KEY;
const hf = new HfInference(MISTRAL_KEY);
const ep = hf.endpoint("https://api.mistral.ai");
const stream = ep.chatCompletionStream({
  model: "mistral-tiny",
  messages: [{ role: "user", content: "Complete the equation one + one = , just the answer" }],
});
let out = "";
for await (const chunk of stream) {
  if (chunk.choices && chunk.choices.length > 0) {
    out += chunk.choices[0].delta.content;
    console.log(out);
  }
}

Custom Inference Endpoints

Learn more about using your own inference endpoints here.

const gpt2 = hf.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});

// Chat Completion Example
const ep = hf.endpoint(
  "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2"
);
const stream = ep.chatCompletionStream({
  model: "tgi",
  messages: [{ role: "user", content: "Complete the equation 1+1=, just the answer" }],
  max_tokens: 500,
  temperature: 0.1,
  seed: 0,
});
let out = "";
for await (const chunk of stream) {
  if (chunk.choices && chunk.choices.length > 0) {
    out += chunk.choices[0].delta.content;
    console.log(out);
  }
}

By default, all calls to the inference endpoint will wait until the model is loaded. When scaling to 0 is enabled on the endpoint, this can result in non-trivial waiting time. If you'd rather disable this behavior and handle the endpoint's 500 HTTP errors yourself, you can do so as follows:

const gpt2 = hf.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration(
  {inputs: 'The answer to the universe is'},
  {retry_on_error: false},
);
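
With retries disabled, the call rejects while the model is loading instead of waiting, so you can catch the error and apply your own backoff strategy:

try {
  const { generated_text } = await gpt2.textGeneration(
    {inputs: 'The answer to the universe is'},
    {retry_on_error: false},
  );
  console.log(generated_text);
} catch (error) {
  // e.g. the endpoint answered with a 500 because the model is still loading;
  // wait and retry on your own schedule
  console.error(error);
}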

Running tests

HF_TOKEN="your access token" pnpm run test

Finding appropriate models

We have an informative documentation project called Tasks that lists the available models for each task and explains how each task works in detail.

It also contains demos, example outputs, and other resources should you want to dig deeper into the ML side of things.

Dependencies

  • @huggingface/tasks: typings only