
react-tfjs-models

react-tfjs-models is a set of components and utilities for building machine learning applications with React. It is based on Google's TensorFlow.js models (tfjs-models), which include image classification, pose detection, face detection, body segmentation, and more.

Compared to integrating the underlying libraries directly, this project gives developers who are not machine learning experts a simpler way to use these models in their React applications.

For pose detection, the library supports both the MoveNet and BlazePose models and produces accurate results. The model variants are configurable, the canvas is rendered over the video (with the video controls, if desired), and the styling can be adapted to your needs.

Usage example

A working demo is located under src/demos/VideoPlaybackDemo.jsx; the code below shows how it is used.

Considerations:

  • The video component recalculates poses according to the aspect ratio of the video relative to its original size. To enable the recalculation, just add the width parameter to the VideoPlayback component.
  • The provided example uses a climbing video.
  • The BlazePose component needs to be adjusted to your needs, since it acts as a loader for either the MoveNet or BlazePose models. Use the matching import, MoveNetLoader or BlazePoseLoader (see the sketch after this list):
    // loader={BlazePoseLoader}
    // type={"full"}
    loader={MoveNetLoader}
    type={posedetection.movenet.modelType.SINGLEPOSE_THUNDER}
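
For reference, here is a minimal sketch of the same component configured to load the BlazePose model instead of MoveNet. It is based only on the commented-out lines above; the BlazePoseLoader import path and the "full" model type come from those comments and may need adjusting for your project.

import BlazePose from '../lib/components/BlazePose';
import BlazePoseLoader from '../lib/models/BlazePoseLoader';

// Drop this in place of the MoveNet-configured <BlazePose> inside <VideoPlayback>.
// BlazePose model types in tfjs pose-detection are "lite", "full" and "heavy".
const blazePoseVariant = (
    <BlazePose
        backend='webgl'
        runtime='tfjs'
        loader={BlazePoseLoader}
        type={"full"}
        maxPoses={1}
        flipHorizontal={true}
        onPoseEstimate={onPoseEstimate}  // callback defined in the demo below
    />
);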

import './VideoPlaybackDemo.css';
import React, {useRef, useState} from 'react';
import * as posedetection from '@tensorflow-models/pose-detection';
import {drawPose} from '../lib';
import MoveNetLoader from '../lib/models/MoveNetLoader';
// import BlazePoseLoader from '../lib/models/BlazePoseLoader';
import VideoPlayback from '../lib/components/VideoPlayback';
import BlazePose from '../lib/components/BlazePose';
import log from '../lib/utils/logger';
const VideoPlaybackDemo = () => {
    const [selectedFile, setSelectedFile] = useState(null);
    const [originalSize, setOriginalSize] = useState(null);
    const canvasRef = useRef(null);
    const videoPlaybackRef = useRef(null);
    const model = posedetection.SupportedModels.MoveNet;
    const keypointIndices = posedetection.util.getKeypointIndexBySide(model);
    const adjacentPairs = posedetection.util.getAdjacentPairs(model);

    // Position the overlay canvas absolutely over the video element.
    const style = {
        position: 'absolute',
        top: 0,
        left: 0,
        right: 0,
        zIndex: 9,
    };

    const [videoSource, setVideoSource] = useState("/climbing.mp4");

    const fileSelectedHandler = (event) => {
        setSelectedFile(event.target.files[0]);
    };

    const fileUploadHandler = () => {
        setVideoSource(URL.createObjectURL(selectedFile));
    };

    // Log only the first pose estimate (and the original video size) for debugging.
    let isPoseShown = false;
    const onPoseEstimate = (pose) => {
        const ctx = canvasRef.current.getContext('2d');
        const canvas = canvasRef.current;
        if (!isPoseShown) {
            log(pose);
            log(originalSize);
            isPoseShown = true;
        }
        ctx.clearRect(0, 0, canvas.width, canvas.height);

        drawPose(pose,
            keypointIndices,
            adjacentPairs,
            ctx,
            {width: canvas.width, height: canvas.height},
            originalSize
        );
    };

    // Receive the overlay canvas created by VideoPlayback and keep a ref to it.
    const setCanvas = (canvas) => {
        canvasRef.current = canvas;
        log(`Current size of canvas: ${canvas.width}x${canvas.height}`);
    };

    return (
        <div className="App">
            {videoSource == null && <>
                <input type="file" onChange={fileSelectedHandler} accept="video/*"/>
                <button onClick={fileUploadHandler}>Upload</button>
            </>}
            <VideoPlayback style={style} videoSource={videoSource} ref={videoPlaybackRef}
                           setCanvas={setCanvas} controlsEnabled={false} width={900}
                           setOriginalVideoSize={setOriginalSize}>
                <BlazePose
                    backend='webgl'
                    runtime='tfjs'
                    // loader={BlazePoseLoader}
                    // type={"full"}
                    loader={MoveNetLoader}
                    type={posedetection.movenet.modelType.SINGLEPOSE_THUNDER}
                    maxPoses={1}
                    flipHorizontal={true}
                    onPoseEstimate={onPoseEstimate}
                />
            </VideoPlayback>
        </div>
    );
};

export default VideoPlaybackDemo;

Credits

This project was forked and updated from https://github.com/SeedV/react-tfjs-models. The adaptation was really tough, but the result is now reusable in external projects, so I would like to share it with the community. It was adapted to use Vite with specific polyfills, and it adds adaptive behaviour based on aspect ratio instead of the original video size.

React components hierarchy

react-tfjs-models provides a more intuitive declarative syntax rather than the traditional imperative approach. An application that uses the BlazePose model to analyze each frame from a webcam stream would look like this:

<Camera ...>
  <BlazePose ...>
    <Animation />
  </BlazePose>
</Camera>

Generally speaking, a streaming-based machine learning hierarchy consists of an input layer, a model layer, and an output layer. Each layer has swappable components provided by react-tfjs-models, and each layer can also be implemented by application developers:

<Input ...>
  <Model ...>
    <Output />
  </Model>
</Input>
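
As a concrete, minimal sketch of this hierarchy, the demo above can be read as follows: VideoPlayback fills the input layer, BlazePose the model layer, and the onPoseEstimate callback plus the overlay canvas the output layer. The props below follow VideoPlaybackDemo.jsx; other combinations of components are possible.

import React from 'react';
import * as posedetection from '@tensorflow-models/pose-detection';
import VideoPlayback from '../lib/components/VideoPlayback';
import BlazePose from '../lib/components/BlazePose';
import MoveNetLoader from '../lib/models/MoveNetLoader';

// Input layer: VideoPlayback extracts frames from a video file.
// Model layer: BlazePose runs pose estimation on each frame (MoveNet loader here).
// Output layer: the onPoseEstimate callback receives each pose and can draw it
// on the canvas that VideoPlayback exposes through setCanvas.
const PoseLayers = ({onPoseEstimate, setCanvas, setOriginalSize}) => (
    <VideoPlayback videoSource="/climbing.mp4" width={900} controlsEnabled={false}
                   setCanvas={setCanvas} setOriginalVideoSize={setOriginalSize}>
        <BlazePose
            backend='webgl'
            runtime='tfjs'
            loader={MoveNetLoader}
            type={posedetection.movenet.modelType.SINGLEPOSE_THUNDER}
            maxPoses={1}
            onPoseEstimate={onPoseEstimate}
        />
    </VideoPlayback>
);

export default PoseLayers;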

Input layer

The components in this layer generate a stream of images from an input device, either a webcam or a video extractor. This layer wraps the heavy lifting of setting up the HTML structure around the <video> and <canvas> elements and converts each extracted frame into a React state.

react-tfjs-models provides the following components for the input layer:

| Component | Description |
| --------- | ----------- |
| Camera | A webcam that provides the video source to the model. |
| VideoPlayback | A video extractor that sends video frames. The width parameter adjusts the video size. |

Model layer

The components in this layer are machine learning models provided by tfjs-models.

This layer also supports model acceleration on the WebGL and WASM backends, if the model supports them (see the sketch at the end of this section).

react-tfjs-models provides the following components as models:

| Component | Description |
| --------- | ----------- |
| BlazePose | Pose estimator; the implementation can be chosen from BlazePose and MoveNet. (PoseNet isn't provided yet.) |
| HandPose | MediaPipe handpose, a 21-point 3D hand keypoints detector. |
| FaceMesh | MediaPipe facemesh, a 486-point 3D facial landmark detector. |

Please refer to respective model details in https://github.com/tensorflow/tfjs-models.
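
For example, the demo above opts into WebGL acceleration by passing backend='webgl' (together with runtime='tfjs') to the BlazePose component. Below is a hedged sketch of the WASM variant; it assumes the chosen model supports the wasm backend and that the corresponding TensorFlow.js backend package is installed, and all other props are taken from VideoPlaybackDemo.jsx.

// As in VideoPlaybackDemo.jsx, but with the (assumed) wasm backend selected.
// Everything else stays the same as in the demo above.
<BlazePose
    backend='wasm'
    runtime='tfjs'
    loader={MoveNetLoader}
    type={posedetection.movenet.modelType.SINGLEPOSE_THUNDER}
    maxPoses={1}
    onPoseEstimate={onPoseEstimate}
/>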

Output layer

The components in this layer render the results. react-tfjs-models provides some predefined overlay debug UI, e.g. rendering the skeleton on top of the video frames, to help developers understand model performance and tweak their algorithms. It is also highly customizable to adapt to real application needs.
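
The demo above already follows this pattern: its onPoseEstimate callback clears the overlay canvas and calls the library's drawPose helper to render the skeleton. A trimmed-down sketch of that output step, following the demo code, looks like this:

import * as posedetection from '@tensorflow-models/pose-detection';
import {drawPose} from '../lib';

const model = posedetection.SupportedModels.MoveNet;
const keypointIndices = posedetection.util.getKeypointIndexBySide(model);
const adjacentPairs = posedetection.util.getAdjacentPairs(model);

// Output layer: draw the estimated pose onto the overlay canvas,
// scaling from the original video size to the canvas size.
const renderPose = (pose, canvas, originalSize) => {
    const ctx = canvas.getContext('2d');
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    drawPose(pose, keypointIndices, adjacentPairs, ctx,
        {width: canvas.width, height: canvas.height}, originalSize);
};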

Demos

This project provides a set of demos to showcase how the components work. Please check out the demos folder.

| Demo | Description |
| ---- | ----------- |
| RockPaperScissors | A HandPose estimation demo of the classic game. |
| CartoonMirror | A BlazePose demo that recognizes the pose from the webcam and controls a 3D character to mimic it. |
| FaceMeshDemo | A FaceMesh demo that recognizes face landmarks (still in development). |
| VideoPlaybackDemo | A demo that uses a video to test an ML model (MoveNet). |

Development

# Install dependencies
yarn install

# Start the demo server over HTTP.
yarn start