
deepinfra

v2.0.2

Official API wrapper for DeepInfra

Downloads

10,663

Readme

deepinfra-node

deepinfra-api is a Node.js client for the DeepInfra Inference API. It provides a simple way to interact with the API. Check out the docs here.

Installation

npm install deepinfra-api
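
The examples below hardcode the API key as a placeholder string for brevity. In practice you may prefer to read it from an environment variable; here is a minimal sketch, assuming the key is exported as DEEPINFRA_API_KEY (an assumed variable name, not one required by the library):

// Read the DeepInfra API key from an environment variable instead of
// hardcoding it in source. DEEPINFRA_API_KEY is an assumed variable name.
const apiKey = process.env.DEEPINFRA_API_KEY;
if (!apiKey) {
  throw new Error("DEEPINFRA_API_KEY is not set");
}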

Usage

Use text generation models

The Mixtral mixture-of-experts model, developed by Mistral AI, combines eight experts (MoE) built on 7B models. It was initially released via a torrent, and its implementation remains experimental.

import { Mixtral } from "deepinfra-api";
const modelName = "mistralai/Mixtral-8x22B-Instruct-v0.1";
const apiKey = "YOUR_DEEPINFRA_API_KEY";
const main = async () => {
  const mixtral = new Mixtral(modelName, apiKey);
  const body = {
    input: "What is the capital of France?",
  };
  const output = await mixtral.generate(body);
  const text = output.results[0].generated_text;
  console.log(text);
};

main();

Use text embedding models

GTE Base is a text embedding model that generates embeddings for input text. It was trained by the Alibaba DAMO Academy.

import { GteBase } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "thenlper/gte-base";
const main = async () => {
  const gteBase = new GteBase(modelName, apiKey);
  const body = {
    inputs: [
      "What is the capital of France?",
      "What is the capital of Germany?",
      "What is the capital of Italy?",
    ],
  };
  const output = await gteBase.generate(body);
  const embeddings = output.embeddings[0];
  console.log(embeddings);
};

main();

Use SDXL to generate images

SDXL takes its own set of parameters, so it is initialized differently from the other models.

import { Sdxl } from "deepinfra-api";
import axios from "axios";
import fs from "fs";

const apiKey = "YOUR_DEEPINFRA_API_KEY";

const main = async () => {
  const model = new Sdxl(apiKey);

  const input = {
    prompt: "The quick brown fox jumps over the lazy dog with",
  };
  const response = await model.generate({ input });
  const { output } = response;
  const image = output[0];

  // Download the generated image and write it to disk.
  const imageResponse = await axios.get(image, { responseType: "arraybuffer" });
  fs.writeFileSync("image.png", imageResponse.data);
};

main();

Use other text to image models

import { TextToImage } from "deepinfra-api";
import axios from "axios";
import fs from "fs";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "stabilityai/stable-diffusion-2-1";
const main = async () => {
  const model = new TextToImage(modelName, apiKey);
  const input = {
    prompt: "The quick brown fox jumps over the lazy dog with",
  };

  const response = await model.generate(input);
  const { output } = response;
  const image = output[0];

  // Download the generated image and write it to disk.
  const imageResponse = await axios.get(image, { responseType: "arraybuffer" });
  fs.writeFileSync("image.png", imageResponse.data);
};
main();

Use automatic speech recognition models

import { AutomaticSpeechRecognition } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "openai/whisper-base";

const main = async () => {
  const model = new AutomaticSpeechRecognition(modelName, apiKey);

  const input = {
    audio: path.join(__dirname, "audio.mp3"),
  };
  const response = await model.generate(input);
  const { text } = response;
  console.log(text);
};

main();

Use object detection models

import { ObjectDetection } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "hustvl/yolos-tiny";
const main = async () => {
  const model = new ObjectDetection(modelName, apiKey);

  const input = {
    image: path.join(__dirname, "image.jpg"),
  };
  const response = await model.generate(input);
  const { results } = response;
  console.log(results);
};

main();

Use token classification models

import { TokenClassification } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "Davlan/bert-base-multilingual-cased-ner-hrl";

const main = async () => {
  const model = new TokenClassification(modelName, apiKey);

  const input = {
    text: "The quick brown fox jumps over the lazy dog",
  };
  const response = await model.generate(input);
  const { results } = response;
  console.log(results);
};

main();

Use fill mask models

import { FillMask } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "GroNLP/bert-base-dutch-cased";

const main = async () => {
  const model = new FillMask(modelName, apiKey);

  const body = {
    input: "Ik heb een [MASK] gekocht.",
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Use image classification models

import { ImageClassification } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "google/vit-base-patch16-224";

const main = async () => {
  const model = new ImageClassification(modelName, apiKey);

  const body = {
    image: path.join(__dirname, "image.jpg"),
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Use zero-shot image classification models

import { ZeroShotImageClassification } from "deepinfra-api";
import path from "path";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "openai/clip-vit-base-patch32";

const main = async () => {
  const model = new ZeroShotImageClassification(modelName, apiKey);

  const body = {
    image: path.join(__dirname, "image.jpg"),
    candidate_labels: ["dog", "cat", "car"],
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Use text classification models

import { TextClassification } from "deepinfra-api";

const apiKey = "YOUR_DEEPINFRA_API_KEY";
const modelName = "ProsusAI/finbert";

const main = async () => {
  const model = new TextClassification(modelName, apiKey);

  const body = {
    input:
      "DeepInfra emerges from stealth with $8M to make running AI inferences more affordable",
  };

  const { results } = await model.generate(body);
  console.log(results);
};

main();

Contributors

Oguz Vuruskaner

Iskren Ivov Chernev

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

License

This project is licensed under the MIT License - see the LICENSE file for details.