
hyllama · v0.2.2 · 2,789 downloads

llama.cpp gguf file parser for javascript

Readme

hyllama



JavaScript parser for llama.cpp gguf files.

This library makes it easy to parse metadata from GGUF files.

llama.cpp began as a C++ implementation of Meta's LLaMA model, tuned in particular for Apple M-series chips. It has since evolved into a powerful tool for running a wide range of trained LLM models on CPU or GPU. The runtime has minimal dependencies, so it is easy to deploy. Models are frequently distributed as .gguf files, which contain everything needed to run a model, including the architecture and weights. TheBloke provides a great collection of serialized gguf model files at varying levels of quantization.

Model files are often very large. A goal of this library is to parse the file efficiently, without loading the entire file into memory.
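This efficiency comes from the GGUF layout itself: all metadata sits at the front of the file, before the tensor data. As a rough sketch of what a parser sees at the start of the file (a hand-rolled check for illustration, not hyllama's API), the file opens with a 4-byte magic "GGUF" followed by a little-endian uint32 format version:

```javascript
// Sketch: validate the fixed GGUF header at the start of a buffer.
// Per the GGUF spec: 4-byte magic "GGUF", then a little-endian
// uint32 version. Everything else (metadata, tensor info) follows.
function checkGgufHeader(arrayBuffer) {
  const view = new DataView(arrayBuffer)
  const magic = String.fromCharCode(
    view.getUint8(0), view.getUint8(1), view.getUint8(2), view.getUint8(3)
  )
  if (magic !== 'GGUF') throw new Error('not a gguf file')
  return { magic, version: view.getUint32(4, true) } // true = little-endian
}
```

Because the header and metadata are at the front, reading only a prefix of the file is enough to recover them.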

Dependency free since 2023!

Installation

npm install hyllama

Usage

Node.js

If you're in a Node.js environment, you can load a .gguf file as in the following example:

const { ggufMetadata } = await import('hyllama')
const fs = await import('fs')

// Read first 10mb of gguf file
const fd = fs.openSync('example.gguf', 'r')
const buffer = new Uint8Array(10_000_000)
fs.readSync(fd, buffer, 0, 10_000_000, 0)
fs.closeSync(fd)

// Parse metadata and tensor info
const { metadata, tensorInfos } = ggufMetadata(buffer.buffer)

Browser

If you're in a browser environment, you'll probably get .gguf file data either from a file the user drags and drops, or from a download over the web.

To load .gguf data in the browser from a remote URL, it is recommended that you use an HTTP range request to fetch just the first bytes:

import { ggufMetadata } from 'hyllama'

const headers = new Headers({ Range: 'bytes=0-10000000' })
const res = await fetch(url, { headers })
const arrayBuffer = await res.arrayBuffer()
const { metadata, tensorInfos } = ggufMetadata(arrayBuffer)
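One caveat, not specific to hyllama: a server that doesn't support range requests will ignore the Range header and return the entire file with status 200. A small guard (an illustrative helper, not part of hyllama's API) avoids parsing a multi-gigabyte body by accident:

```javascript
// Sketch: confirm the server honored the Range request. A 200 status
// means the full body was returned instead of the requested slice.
function assertPartialContent(res) {
  if (res.status !== 206) {
    throw new Error(`expected 206 Partial Content, got ${res.status}`)
  }
  return res
}
```

You would call this right after the fetch, before reading the body with arrayBuffer().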

To parse .gguf files from a user drag-and-drop action, see example in index.html.

File Size

Since .gguf files are typically very large, it is recommended that you load only the start of the file, which contains the metadata. How many bytes you need depends on the file; in practice, most .gguf files have metadata that takes up a few megabytes. If you get the error "RangeError: Offset is outside the bounds of the DataView", you probably didn't fetch enough bytes.
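If the initial guess turns out to be too small, one option is to retry with a larger prefix. A hedged sketch of that loop, where `parse` stands in for ggufMetadata and `fetchBytes` for whatever reads the first n bytes (both parameter names are illustrative, not hyllama's API):

```javascript
// Sketch: retry parsing with a doubling prefix size when the metadata
// doesn't fit. `parse` and `fetchBytes` are stand-ins supplied by the
// caller, e.g. ggufMetadata and an HTTP range fetch.
async function parseWithGrowingPrefix(parse, fetchBytes, initial = 10_000_000) {
  let size = initial
  for (let attempt = 0; attempt < 4; attempt++) {
    const buf = await fetchBytes(size)
    try {
      return parse(buf)
    } catch (err) {
      // The DataView throws a RangeError when it runs off the end of
      // the buffer mid-metadata; fetch a larger prefix and try again.
      if (err instanceof RangeError) { size *= 2; continue }
      throw err
    }
  }
  throw new Error('metadata larger than expected')
}
```

The doubling bound keeps the worst case at a handful of requests while the common case stays at one.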

References

  • https://github.com/ggerganov/llama.cpp
  • https://github.com/ggerganov/ggml/blob/master/docs/gguf.md
  • https://huggingface.co/TheBloke