

React Native Local Gen AI

Local generative AI capabilities using Google MediaPipe.

Non-blocking local LLM inference using quantized models.

Supports only Android.

Pre-requisites

Generative AI models are large and should not be bundled in the APK. In production, the model should ideally be downloaded from a server upon user request. For development, manually download the preferred model to your computer and push it to an Android device over adb.

Gemma models in a MediaPipe-compatible format can be downloaded directly from Kaggle (no conversion needed):

  # Export your Kaggle username and API key
  # export KAGGLE_USERNAME=
  # export KAGGLE_KEY=

  curl -L -u $KAGGLE_USERNAME:$KAGGLE_KEY \
    -o ~/Downloads/gemma-2b-it-cpu-int4.tar.gz \
    https://www.kaggle.com/api/v1/models/google/gemma/tfLite/gemma-2b-it-cpu-int4/1/download

  # Extract model
  tar -xvzf ~/Downloads/gemma-2b-it-cpu-int4.tar.gz -C ~/Downloads

Other models need to be converted/quantized first. Check out the links below on how to download and convert models to a MediaPipe-compatible format.

  • https://developers.google.com/mediapipe/solutions/genai/llm_inference#models
  • https://developers.google.com/mediapipe/solutions/genai/llm_inference/android#model

Android Inference

For testing on Android, push the downloaded model to a physical device in developer mode using the commands below.

# Clear directory to remove previous models
adb shell rm -r /data/local/tmp/llm/

# Create directory to save model
adb shell mkdir -p /data/local/tmp/llm/

# Push model to device
cd ~/Downloads
adb push gemma-2b-it-cpu-int4.bin /data/local/tmp/llm/gemma-2b-it-cpu-int4.bin

Installation

yarn add react-native-local-gen-ai
# or
npm i react-native-local-gen-ai

Usage

Update minSdkVersion to 24 in the android/build.gradle file.
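For reference, this change typically lands in the buildscript ext block of the project-level android/build.gradle (a sketch; your template's other defaults stay as they are):

```groovy
// android/build.gradle (project level)
buildscript {
    ext {
        // Raise the minimum Android SDK to 24, as required by this library
        minSdkVersion = 24
        // Leave the template's other ext values (compileSdkVersion, etc.) unchanged
    }
}
```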

Invoke the chatWithLLM async method with your prompt.

import { chatWithLLM } from 'react-native-local-gen-ai';

// Non-blocking prompting!
const response = await chatWithLLM("hello !");
console.log(response);

// Response

! 👋

I am a large language model, trained by Google.
I can talk and answer your questions to the best of my knowledge.

What would you like to talk about today? 😊
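Since on-device inference can stall on low-end hardware, it can be worth guarding the call with a timeout. A minimal sketch (assumption: chatWithLLM has the signature `(prompt: string) => Promise<string>` as in the example above; `safeChat` is not part of the library — injecting the chat function keeps the wrapper testable):

```typescript
type ChatFn = (prompt: string) => Promise<string>;

// Race the model response against a timeout; on any failure, return a
// readable fallback string instead of throwing into the UI layer.
export async function safeChat(
  chat: ChatFn,
  prompt: string,
  timeoutMs = 30_000
): Promise<string> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<string>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error('LLM inference timed out')),
      timeoutMs
    );
  });
  try {
    // Whichever settles first wins: the model response or the timeout.
    return await Promise.race([chat(prompt), timeout]);
  } catch (e) {
    return `Inference failed: ${(e as Error).message}`;
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

Usage would then be `const reply = await safeChat(chatWithLLM, "hello !");`.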

[Optional] Override model options

import { setModelOptions } from 'react-native-local-gen-ai'

/* Default model path is set to 
   /data/local/tmp/llm/gemma-2b-it-cpu-int4.bin

   For other model variants, modelPath needs to be 
   updated before invoking chatWithLLM
*/
useEffect(() => {
    setModelOptions({
        modelPath: '/data/local/tmp/llm/gemma-2b-it-gpu-int4.bin',
        randomSeed: 0,    // Default: 0
        topK: 30,         // Default: 40
        temperature: 0.7, // Default: 0.8
        maxTokens: 1000   // Default: 512
    });
}, []);
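Since the model path must match whatever file was pushed via adb, a small helper can keep the two in sync. A hypothetical helper (not part of the library) following the /data/local/tmp/llm layout used throughout this README:

```typescript
type Backend = 'cpu' | 'gpu';

// Build the on-device path for a Gemma 2B int4 variant pushed via adb,
// e.g. gemmaModelPath('gpu') for the GPU-quantized model.
export function gemmaModelPath(backend: Backend): string {
  return `/data/local/tmp/llm/gemma-2b-it-${backend}-int4.bin`;
}
```

This could then feed setModelOptions, e.g. `setModelOptions({ modelPath: gemmaModelPath('gpu') })`.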

GPU inference in Android

For GPU models, add an entry to the application manifest file (android/app/src/main/AndroidManifest.xml) to use OpenCL.

<application>
    <!-- Add this for gpu inference -->
    <uses-library android:name="libOpenCL.so"
        android:required="false"/>
</application>

Expo

Use a local development build instead of Expo Go. The rest of the steps remain the same.

More info on Expo local app development can be found here:

https://docs.expo.dev/guides/local-app-development/

npx expo run:android

Examples

https://github.com/nagaraj-real/expo-chat-llm

https://github.com/nagaraj-real/react-native-local-genai/tree/main/example

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

MIT


Made with create-react-native-library

Uses Google MediaPipe under the hood.