tents

v1.0.14

TenTS is a WebGPU-accelerated Tensor library for the browser written in TypeScript with a PyTorch-like API. Currently, matrix operations run between 10x and 100x slower than they would natively with PyTorch.

Usage

pnpm i tents

Example:

import { Tensor } from "tents";

const a = Tensor.rand([1000, 1000]);
const b = Tensor.rand([1000, 1000]);

const cpu = await Tensor.matmul(a, b);
const gpu = await Tensor.matmul(a.gpu(), b.gpu());

if (Tensor.almostEq(cpu, gpu)) console.log("🎉");

Documentation

Introduction

TenTS introduces a Tensor class designed to mimic PyTorch's tensor. Tensors may be constructed directly from nested arrays or via convenience constructors. Data is stored internally in a Float32Array. Tensors are treated as immutable (nothing stops you from editing the underlying data by hand, but every TenTS operation constructs a new tensor object with a new data buffer).
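
For example, a minimal sketch of this behavior using the neg() operation documented below:

const x = new Tensor([1, -2, 3]);
const y = x.neg(); // a new Tensor backed by a new Float32Array

// x is untouched and still holds [1, -2, 3]
// y holds [-1, 2, -3]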

Constructors

The default constructor accepts nested arrays. All constructors accept an optional requiresGrad argument which defaults to false.

// Standard, non-differentiable matrix
const x = new Tensor([
  [1, 2, 3],
  [4, 5, 6],
]);

This is useful for small tensors, but several convenience constructors are also provided:

// Fill constructors
const a = Tensor.zeros([2, 2]);
const b = Tensor.ones(4);

// Sampled from uniform random distribution
const c = Tensor.rand([2, 3]);

// Sampled from standard normal distribution
const d = Tensor.randn(10);

// Sampled from normal distribution with μ=5, σ=0.01
const e = Tensor.randn(10, 5, 0.01);

Unary Operations

All unary operations are synchronous since they are only implemented CPU-side.

const a = new Tensor([-1, 0, 1, 2]);

const b = a.neg(); // [1, 0, -1, -2]
const c = a.scale(2); // [-2, 0, 2, 4]
const d = a.pow(2); // [1, 0, 1, 4]
const e = a.exp(); // [1/e, 1, e, e^2]

const matrix = new Tensor([
  [1, 2],
  [3, 4],
]);

const matrixT = matrix.T(); // [[1, 3], [2, 4]]

Binary Operations

TenTS includes exact and approximate equality operations. Both are synchronous.

const actualEq = Tensor.eq(a, b);
const usefulEq = Tensor.almostEq(a, b);

if (actualEq !== usefulEq) console.log("float moment");

// You can specify ε if you want (default = 1e-3)
const badEq = Tensor.almostEq(a, b, 1);

The bread and butter of TenTS is plus() and matmul(); both are asynchronous methods.

const a = new Tensor([1, 2, 3]);
const b = new Tensor([4, 5, 6]);

const c = await Tensor.plus(a, b);
// [5, 7, 9]

The matmul function supports both standard 2D matrix multiplication and 3D batched multiplication; in the batched case, broadcasting is supported (a sketch follows the 2D example below).

// Basic 2D matrix multiplication

const a = new Tensor([
  [1, 2],
  [3, 4],
]);

const b = new Tensor([
  [5, 6],
  [7, 8],
]);

const c = await Tensor.matmul(a, b);
// [
//   [19, 22],
//   [43, 50],
// ]

Utility Methods

In addition to the above operations, TenTS includes a few utility methods. The main difference between these and the unary operations is that they either return a scalar or perform some sort of non-differentiable manipulation.

const a = new Tensor([1, 2, 3]);

const sum = a.sum(); // 6
const mean = a.mean(); // 2

// Convert to a one-hot matrix for classification
// The argument is the number of classes
// [
//   [0, 1, 0, 0],
//   [0, 0, 1, 0],
//   [0, 0, 0, 1],
// ]
const onehot = a.onehot(4);

GPU Acceleration

Instead of PyTorch's general .to(device) syntax, TenTS uses a simpler .gpu() method. Although GPU writes are asynchronous, this method is synchronous and does not return a promise; the GPU device is only actually awaited when an operation runs.

Using WebGPU acceleration is as simple as calling .gpu() on the operands before an operation. Just make sure that either all of the operands or none of them are GPU-mapped!

const a = Tensor.rand([100, 100]);
const b = Tensor.rand([100, 100]);

const c = await Tensor.matmul(a.gpu(), b.gpu());

WebGPU acceleration is available only for the plus() and matmul() functions.
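
A GPU-mapped elementwise add therefore works the same way as the matmul example above:

const a = Tensor.rand([1000]);
const b = Tensor.rand([1000]);

const c = await Tensor.plus(a.gpu(), b.gpu());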

Automatic Differentiation

TenTS also includes an automatic differentiation system. Requiring gradients is as simple as passing the optional requiresGrad argument described in the Constructors section above.
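
A minimal sketch, assuming requiresGrad is the trailing optional argument on the default constructor (its exact position is not spelled out in this README):

// requiresGrad assumed to be the trailing optional argument (default: false)
const w = new Tensor(
  [
    [1, 2],
    [3, 4],
  ],
  true
);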

Benchmarks and Performance

Benchmarks compare the TenTS CPU and GPU implementations of these operations with the equivalent operations in NumPy and PyTorch (with a CUDA backend).

Local Dev Environment

To start a dev environment, first install the project locally:

# Clone and enter the repository
git clone https://github.com/jhsul/tents && cd tents

# Install dependencies
pnpm i

# Build the project (should create bin/)
pnpm build

# Enter the development environment
cd env

# Install separate dependencies (for the env)
pnpm i

# Start the vite dev server
pnpm run dev

⚠️ Note: The Vite environment bundles directly from the bin/ folder, so make sure to run pnpm build in the root directory before starting the dev server.