vicuna-7b v0.1.2

Programmatic access to the Vicuna 7B LLM model

Vicuna 7B

Vicuna 7B is a large language model that runs in the browser.

This library is a port of the fantastic web-llm implementation that exposes programmatic local access to the model with minimal configuration.

Demo

(GIF of the demo UI)

Prerequisites

See the Instructions section of the original web-llm project for the required prerequisites, and confirm that its demo UI works with your hardware.
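
The model runs on WebGPU (the shipped binary is lib/vicuna-7b_webgpu.wasm), so it is worth feature-checking the browser before instantiating anything. A minimal sketch using the standard WebGPU API (not part of this library):

// Fail fast if WebGPU is unavailable; navigator.gpu is the standard
// WebGPU entry point, and requestAdapter() resolves to null when no
// compatible GPU can be found.
async function assertWebGPU() {
  if (!('gpu' in navigator)) {
    throw new Error('WebGPU is not supported in this browser');
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error('No suitable GPU adapter found');
  }
}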

Getting Started

Install with:

npm install vicuna-7b

Then, you can import it into your project with:

import Vicuna7B from 'vicuna-7b';

const llm = new Vicuna7B();

llm.generate(`Tell me a joke about otters!`).then(response => console.log(response));

API

constructor

const initCallback = ({ progress, timeElapsed, text }) => {
  console.log(progress, timeElapsed, text);
};
new Vicuna7B({
  initCallback,
});

The constructor accepts an optional object of arguments:

  • initCallback - An optional callback invoked with initialization progress
  • logger - An optional general-purpose logging function
  • runtimeURL - An optional URL string for the runtime (corresponds to lib/tvmjs_runtime.wasi.js)
  • bundleURL - An optional URL string for the bundle (corresponds to lib/tvmjs.bundle.js)
  • tokenizerURL - An optional URL string for the tokenizer (corresponds to lib/tokenizer.model)
  • vicunaURL - An optional URL string for the model (corresponds to lib/vicuna-7b_webgpu.wasm)
  • sentencePieceURL - An optional URL string for the SentencePiece library (corresponds to lib/sentencepiece/index.js)
  • cacheURL - An optional URL string for the weights cache (corresponds to https://huggingface.co/mlc-ai/web-lm/resolve/main/vicuna-0b/)

If no URL arguments are provided, the library loads its runtime requirements from the jsDelivr CDN (https://www.jsdelivr.com/).
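
Since every URL option is optional, you can presumably override a single asset and let the rest fall back to the CDN defaults. A sketch with a placeholder URL for self-hosted model weights:

// Only vicunaURL is overridden; all other assets still load from the
// default CDN. The URL below is a hypothetical example.
const llm = new Vicuna7B({
  vicunaURL: 'https://example.com/models/vicuna-7b_webgpu.wasm',
});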

An example with full options looks like:

const initCallback = ({ progress, timeElapsed, text }) => {
  console.log(progress, timeElapsed, text);
};
new Vicuna7B({
  initCallback,
  logger: console.log,
  runtimeURL: 'http://path-to-url',
  bundleURL: 'http://path-to-url',
  tokenizerURL: 'http://path-to-url',
  vicunaURL: 'http://path-to-url',
  sentencePieceURL: 'http://path-to-url',
  cacheURL: 'http://path-to-url',
});

generate

const prompt = `Tell me a joke about otters!`;
const generateCallback = (step, text) => {
  console.log(text);
};
const config = {
  maxGenLength: 32,
  stopWords: ['Q:'],
  temperature: 0.5,
  top_p: 0.95,
  callback: generateCallback,
};
llm.generate(prompt, config).then(response => {
  console.log(response);
});

generate accepts two arguments:

  • prompt - the text prompt to pass to the model
  • params - an optional set of config parameters to pass to the model

The callback option in params is invoked on every step of model generation. It receives a step parameter, the current step as an integer, and the text generated at that step. This is useful if you want to display output as the model generates it.
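
For example, a sketch that streams intermediate output into a page element (the 'output' element id is a placeholder):

// Stream intermediate generations into the page as they arrive.
// 'output' is a hypothetical element id; swap in your own target.
const output = document.getElementById('output');
llm.generate('Tell me a joke about otters!', {
  callback: (step, text) => {
    output.textContent = text;
  },
}).then(response => {
  output.textContent = response;
});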

reset

llm.reset().then(() => {
  console.log('done');
});

Resets the LLM.

You can optionally pass in a new set of constructor parameters, which will be persisted for subsequent use.
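
For instance, to reset the model and swap in a different logger (a sketch reusing the constructor options described above):

// Reset the LLM and persist a new logger for subsequent calls.
llm.reset({
  logger: console.warn,
}).then(() => {
  console.log('reset complete');
});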

getRuntimeStats

llm.getRuntimeStats().then(stats => {
  console.log(stats);
});

Exposes runtime statistics, returned in the format:

{
  encoding: float,
  decoding: float,
}

You can see an example in the original web-llm UI.

License

Apache 2.0

Credits

All credit goes to the original implementation at web-llm.

Simon Willison has a great piece on his blog about his experiments with the LLM.