
offload-ai

v1.2.1

Published

Offload is an in-browser AI stack. It provides an SDK to run AI inference directly on your users' devices, increasing their data privacy and saving on inference costs.

Downloads

17

Readme


Offload - Run AI inference on your users' devices

Offload is an SDK that allows you to automatically run AI inference directly on your users' devices.

For a user, having the option to opt in to local execution means the highest level of security and data privacy, with minimal code changes to your application. Furthermore, your cloud inference cost for users who enable Offload drops to zero.

How it works

By default, Offload shows a widget to a user when their device has enough resources to run AI locally. When the user clicks the widget, Offload is enabled and all inference tasks are executed directly on the user's device, so their data is never sent to any third-party inference API.

When a user's device does not support local inference, the Offload widget is simply not shown, and inference tasks run via an API that you configure on the dashboard. This fallback API is also used when a user chooses not to enable Offload.
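The routing described above can be sketched roughly as follows. This is an illustration only, not the SDK's internals; the device fields (`gpuMemoryMB`, `lowPowerMode`) and the function name are hypothetical stand-ins for whatever capability checks Offload actually performs:

```javascript
// Hypothetical sketch of Offload's routing decision: run locally only when the
// device is capable AND the user has opted in via the widget; otherwise fall
// back to the API configured on the dashboard.
function routeInference(device, userOptedIn) {
  // Made-up capability check for illustration purposes.
  const canRunLocally = device.gpuMemoryMB >= 2048 && !device.lowPowerMode;
  if (canRunLocally && userOptedIn) {
    return "local"; // inference runs on-device; no data leaves the browser
  }
  return "fallback-api"; // uses the fallback API from the dashboard
}

console.log(routeInference({ gpuMemoryMB: 4096, lowPowerMode: false }, true)); // "local"
console.log(routeInference({ gpuMemoryMB: 1024, lowPowerMode: false }, true)); // "fallback-api"
```

Note that an incapable device and a user who declined both end up on the same fallback path, which is why the widget can simply be hidden when local inference is unsupported.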

Offload automatically serves each user the model that best fits their device's resources. We support different models for different GPUs, mobile, desktop, etc.

You can configure different models depending on the target device and adjust prompts per model, track analytics, configure fallback APIs, and more directly on the Offload dashboard.

Let your users opt for privacy! Join us today!

Usage

The steps below show how to use Offload. You can find complete usage examples in this repository.

  1. Install offload

    • Using a package manager:
    npm install --save offload-ai

    Then in your code use:

    import Offload from "offload-ai";
    • Via a script tag:
    <script src="//unpkg.com/offload-ai" defer></script>

    Then in your code use:

    window.Offload.<method>
  2. Create a prompt in the dashboard and select a model

  3. Configure the SDK:

    Offload.config({
        appUuid: "b370195d-a8ad-47bd-9d25-2818a6905896",
        promptUuids: { // maps your own prompt names to the prompt UUIDs from the dashboard
            user_text: "4e151113-22ae-41e8-abf1-c8b358163cc9"
        }
    });
  4. Add the widget:

    Create a container for the widget on your page:

    <div id="offload-widget-container"></div>

    And initialize the widget (make sure the following JS runs after the div above exists):

    Offload.Widget('offload-widget-container');
  5. Use the SDK to run prompts:

    try {
        const response = await (window as any).Offload.offload({
            promptKey: "user_text", // the key you gave to the prompt uuid in the configuration object.
        });
        console.log(response.text);
        console.log(response.finishReason);
        console.log(response.usage);
    } catch (e: any) {
        console.error(e);
    }

Generate structured data

Add the schema field with a JSON schema:

try {
    const response = await (window as any).Offload.offload({
        promptKey: "user_text", // the key you gave to the prompt uuid in the configuration object.
        // A JSON schema to generate the output
        schema: {
            "$schema": "http://json-schema.org/draft-07/schema#",
            "type": "object",
            "properties": {
                "name": { "type": "string" },
                "age": { "type": "integer" }
            }
        }
    });
    console.log(response.object); // Note this is now response.object instead of response.text
    console.log(response.finishReason);
    console.log(response.usage);
} catch(e: any) {
    console.error(e);
}
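With the schema above, `response.object` should be a plain object whose `name` is a string and whose `age` is an integer (draft-07 makes properties optional unless listed in `required`). A minimal sketch of checking that shape on the client, with a hand-rolled checker rather than the SDK or a schema library:

```javascript
// Minimal shape check matching the example schema: an object with an optional
// string "name" and an optional integer "age". Illustration only; in practice
// you would use a JSON Schema validator such as Ajv.
function matchesPersonSchema(obj) {
  return (
    typeof obj === "object" && obj !== null &&
    (obj.name === undefined || typeof obj.name === "string") &&
    (obj.age === undefined || Number.isInteger(obj.age))
  );
}

console.log(matchesPersonSchema({ name: "Ada", age: 36 }));   // true
console.log(matchesPersonSchema({ name: "Ada", age: "36" })); // false: age must be an integer
```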

Streaming the output

Simply add stream: true:

try {
    const response = await (window as any).Offload.offload({
        promptKey: "user_text", // the key you gave to the prompt uuid in the configuration object.
        stream: true
    });
    console.log(response.textStream); // Note we now use response.textStream
    // Finish reason and usage are now promises
    response.finishReason.then((reason) => console.log(reason));
    response.usage.then((usage) => console.log(usage));
} catch(e: any) {
    console.error(e);
}
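To actually consume `response.textStream`, a `for await...of` loop is the natural fit, assuming the stream is an async iterable of text chunks (an assumption on our part; the SDK docs above only log the stream object). The stream is mocked here with an async generator so the sketch is self-contained:

```javascript
// Mock stand-in for response.textStream, assumed to be an async iterable of
// text chunks. Replace with the real stream from Offload.offload().
async function* mockTextStream() {
  yield "Hello, ";
  yield "world!";
}

// Accumulate chunks as they arrive; in a real app you would append each chunk
// to the UI instead of (or in addition to) buffering it.
async function readStream(textStream) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
  }
  return full;
}

readStream(mockTextStream()).then((text) => console.log(text)); // "Hello, world!"
```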

Using prompt variables

Add the variables map:

try {
    const response = await (window as any).Offload.offload({
        promptKey: "user_text", // the key you gave to the prompt uuid in the configuration object.
        variables: {
            message: "my user message", // This will substitute the placeholder {{message}} in your prompt
        },
    });
    console.log(response.text);
    console.log(response.finishReason);
    console.log(response.usage);
} catch(e: any) {
    console.error(e);
}
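The effect of the variables map can be sketched with a small substitution function: each `{{name}}` placeholder in the prompt template is replaced by the corresponding value. This is an illustration of the behavior described above, not the SDK's actual implementation:

```javascript
// Replace {{key}} placeholders in a prompt template with values from the
// variables map; unknown placeholders are left untouched.
function fillTemplate(template, variables) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in variables ? String(variables[key]) : match
  );
}

console.log(fillTemplate("Summarize: {{message}}", { message: "my user message" }));
// "Summarize: my user message"
```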