
smart-whisper v0.8.1 • 5,722 downloads

Whisper.cpp Node.js binding with auto model offloading strategy.

smart-whisper

Smart-Whisper is a native Node.js addon designed for efficient, streamlined interaction with whisper.cpp, featuring automatic model offloading and reloading and a built-in model manager.

Features

  • Node.js Native Addon Interaction: Directly interact with whisper.cpp, ensuring fast and efficient processing.
  • Single Model Load for Multiple Inferences: Load the model once and run multiple, parallel inferences, optimizing resource usage and reducing load times.
  • Automatic Model Offloading and Reloading: Manages memory effectively by automatically offloading and reloading models as needed.
  • Model Manager: Automates the process of downloading and updating models, ensuring that the latest models are always available.

Installation

The standard installation supports Windows, macOS, and Linux out of the box. It also automatically enables GPU and CPU acceleration on macOS.

npm i smart-whisper

Support Matrix:

| OS and Arch         | CPU              | GPU       |
| ------------------- | ---------------- | --------- |
| macOS Apple Silicon | ✅ (Acceleration) | ✅ (Metal) |
| macOS Intel         | ✅ (Acceleration) | BYOL      |
| Linux / Windows     | ✅                | BYOL      |

  • ✅: Out of the box support with standard installation.
  • BYOL: Bring Your Own Library, see Acceleration for more information.

Acceleration

Due to the complexity of the different acceleration methods on different devices, you need to compile libwhisper.a or libwhisper.so from whisper.cpp yourself.

Then set the BYOL (Bring Your Own Library) environment variable to the path of the compiled library:

BYOL='/path/to/libwhisper.a' npm i smart-whisper

You may also need to link additional libraries, for example:

BYOL='/path/to/libwhisper.a -lopenblas' npm i smart-whisper

OpenBLAS

For Linux and Windows machines without a GPU, the best acceleration method is likely OpenBLAS. After installing OpenBLAS, you can compile libwhisper.a with the following commands:

git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
WHISPER_OPENBLAS=1 make -j

CUDA and other acceleration methods

Check out the whisper.cpp repository for more information.

Documentation

The documentation is available at https://jacoblincool.github.io/smart-whisper/.

Example

See the examples directory for more examples.

import { Whisper } from "smart-whisper";
import { decode } from "node-wav";
import fs from "node:fs";

const model = process.argv[2];
const wav = process.argv[3];

const whisper = new Whisper(model, { gpu: true });
const pcm = read_wav(wav);

const task = await whisper.transcribe(pcm, { language: "auto" });
console.log(await task.result);

await whisper.free();
console.log("Manually freed");

function read_wav(file: string): Float32Array {
    const { sampleRate, channelData } = decode(fs.readFileSync(file));

    if (sampleRate !== 16000) {
        throw new Error(`Invalid sample rate: ${sampleRate}`);
    }
    if (channelData.length !== 1) {
        throw new Error(`Invalid channel count: ${channelData.length}`);
    }

    return channelData[0];
}
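The read_wav helper above rejects anything that isn't mono 16 kHz audio. If your source file doesn't match, you can downmix and resample it before passing it to transcribe. Here is a minimal sketch using naive linear interpolation (the helper name is illustrative, not part of the smart-whisper API; for production audio, prefer a dedicated resampling library):

```typescript
// Convert decoded WAV channel data to the mono 16 kHz Float32Array
// that whisper.cpp expects. Assumes all channels have equal length,
// as produced by node-wav's decode.
function to_whisper_pcm(channelData: Float32Array[], sampleRate: number): Float32Array {
    // Downmix: average all channels into a single mono buffer.
    const mono = new Float32Array(channelData[0].length);
    for (const channel of channelData) {
        for (let i = 0; i < channel.length; i++) {
            mono[i] += channel[i] / channelData.length;
        }
    }

    if (sampleRate === 16000) return mono;

    // Resample to 16 kHz with linear interpolation between neighboring samples.
    const ratio = sampleRate / 16000;
    const out = new Float32Array(Math.floor(mono.length / ratio));
    for (let i = 0; i < out.length; i++) {
        const pos = i * ratio;
        const lo = Math.floor(pos);
        const hi = Math.min(lo + 1, mono.length - 1);
        out[i] = mono[lo] + (mono[hi] - mono[lo]) * (pos - lo);
    }
    return out;
}
```

Linear interpolation is crude (it aliases high frequencies) but is usually good enough for speech transcription; swap in a windowed-sinc resampler if quality matters.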

The transcribe method returns a task object that can be used to retrieve the transcription result; the task also emits events reporting the progress of the transcription.

const task = await whisper.transcribe(pcm, { language: "auto" });
task.on("transcribed", (result) => {
    console.log("Transcribed", result);
});
console.log(await task.result);

License

MIT