
tribe_lib

v1.0.23

Virtual Background Bodypix :computer:

Description :page_facing_up:

Using getUserMedia as the video input, this library offers blur background, blur body parts, virtual background, and grayscale effects, with BodyPix as its backbone. Use BodyPix in three lines, making it easy for developers and non-developers alike to apply multiple effects.

Install

You can install it via npm for use in a TypeScript / ES6 project.

$ npm i tribe_lib

How To Use

After installing, use these three magic lines to get a MediaStream object that you can use wherever you want.

// CommonJS
const virtualBackgroundBodypixLite = require('tribe_lib');

// ES6
import { VideoTracking, Prediction } from 'tribe_lib';

// How to use
const Tracking = new VideoTracking(
  type_model_architecture, // model config (see Models)
  effect_config_type,      // precision config (see Level Of Prediction)
  type_of_device,          // camera constraints (see Type Of Devices)
  width,                   // number
  height,                  // number
  device_id_str            // optional string
);
Tracking.predictionModel.loop_(type_prediction, configEffect);
const MediaStream = Tracking.predictionModel.canvas_mediaStream();

How To Reuse Object

If you want to reuse the VideoTracking object after initializing it and starting a loop, do the following.

const Tracking = new VideoTracking(
  type_model_architecture,
  effect_config_type,
  type_of_device,
  width,
  height,
  device_id_str
);
Tracking.predictionModel.loop_(type_prediction, configEffect);

// Stop the current loop
Tracking.predictionModel.stopAnimationLoop();

// Start another desired effect
Tracking.predictionModel.loop_(type_prediction_2, configEffect_2);

Models

We can define five categories for model configuration: Ultra low, Low, Medium, High, and Ultra.

| Categories | Description                                                         | Option |
| ---------- | ------------------------------------------------------------------- | ------ |
| Ultra low  | Recommended for low-end mobile devices                              | 0      |
| Low        | Recommended for mid-end mobile devices                              | 1      |
| Medium     | Recommended for computers with at least an Intel Pentium processor  | 2      |
| High       | Recommended for computers with at least an Intel Core i3 processor  | 3      |
| Ultra      | Uses the ResNet50 architecture (see config below)                   | 4      |

const model_config_ultra_low = {
  architecture: 'MobileNetV1',
  outputStride: 16,
  multiplier: 0.5,
  quantBytes: 2,
};
const model_config_low = {
  architecture: 'MobileNetV1',
  outputStride: 16,
  multiplier: 0.75,
  quantBytes: 2,
};
const model_config_medium = {
  architecture: 'MobileNetV1',
  outputStride: 16,
  multiplier: 0.75,
  quantBytes: 2,
};
const model_config_high = {
  architecture: 'MobileNetV1',
  outputStride: 8,
  multiplier: 1,
  quantBytes: 2,
};
const model_config_ultra = {
  architecture: 'ResNet50',
  outputStride: 16,
  quantBytes: 2,
};
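The category options above can be mapped to the corresponding model configs with a small lookup. The `pickModelConfig` helper below is not part of tribe_lib; it is a hypothetical convenience sketch over the configs listed above.

```typescript
// Hypothetical helper (not part of tribe_lib): map a category option (0-4)
// to the matching BodyPix model config listed above.
interface ModelConfig {
  architecture: 'MobileNetV1' | 'ResNet50';
  outputStride: number;
  multiplier?: number;
  quantBytes: number;
}

const MODEL_CONFIGS: ModelConfig[] = [
  { architecture: 'MobileNetV1', outputStride: 16, multiplier: 0.5, quantBytes: 2 },  // 0: Ultra low
  { architecture: 'MobileNetV1', outputStride: 16, multiplier: 0.75, quantBytes: 2 }, // 1: Low
  { architecture: 'MobileNetV1', outputStride: 16, multiplier: 0.75, quantBytes: 2 }, // 2: Medium
  { architecture: 'MobileNetV1', outputStride: 8, multiplier: 1, quantBytes: 2 },     // 3: High
  { architecture: 'ResNet50', outputStride: 16, quantBytes: 2 },                      // 4: Ultra
];

function pickModelConfig(option: number): ModelConfig {
  const config = MODEL_CONFIGS[option];
  if (!config) throw new RangeError(`unknown model option: ${option}`);
  return config;
}
```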

Level Of Prediction

We can define the level of prediction:

| Categories | Resolution | Option |
| ---------- | ---------- | ------ |
| Low        | Minimum    | 0      |
| Medium     | Average    | 1      |
| High       | High       | 2      |
| Ultra      | Ultra      | 3      |

const effect_config_precission_low = {
  flipHorizontal: false,
  internalResolution: 'low',
  segmentationThreshold: 0.7,
};
const effect_config_precission_mid = {
  flipHorizontal: false,
  internalResolution: 'medium',
  segmentationThreshold: 0.7,
};
const effect_config_precission_high = {
  flipHorizontal: false,
  internalResolution: 'high',
  segmentationThreshold: 0.7,
};
const effect_config_precission_ultra = {
  flipHorizontal: false,
  internalResolution: 'ultra',
  segmentationThreshold: 0.7,
};
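Since the four precision configs differ only in `internalResolution`, they can be generated from the option number. The `precisionConfig` helper below is a hypothetical sketch, not part of tribe_lib.

```typescript
// Hypothetical helper (not part of tribe_lib): build the effect-precision
// config for a given option (0-3), mirroring the configs listed above.
interface PrecisionConfig {
  flipHorizontal: boolean;
  internalResolution: 'low' | 'medium' | 'high' | 'ultra';
  segmentationThreshold: number;
}

const RESOLUTIONS = ['low', 'medium', 'high', 'ultra'] as const;

function precisionConfig(option: number): PrecisionConfig {
  const internalResolution = RESOLUTIONS[option];
  if (!internalResolution) throw new RangeError(`unknown precision option: ${option}`);
  return { flipHorizontal: false, internalResolution, segmentationThreshold: 0.7 };
}
```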

Type Of Devices

We can define the dimensions of the video, as well as which camera to request on mobile.

const mobile_front_camera = {
  audio: false,
  video: { facingMode: 'user', width: width, height: height },
};
const mobile_rear_camera = {
  audio: false,
  video: { facingMode: { exact: 'environment' }, width: width, height: height },
};
const desktop_selected_camera_device = {
  audio: false,
  video: {
    deviceId: '0faf4c1dc3b35ff09df6a31...',
    width: width,
    height: height,
  },
};
const desktop_select_camera_default = {
  audio: false,
  video: { width: width, height: height },
};
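The four constraint objects above follow the standard getUserMedia constraints shape, so they can be built from a device type plus dimensions. The `cameraConstraints` helper and its `DeviceKind` names are hypothetical, not part of tribe_lib.

```typescript
// Hypothetical helper (not part of tribe_lib): build getUserMedia
// constraints for the device types shown above.
type DeviceKind = 'mobile-front' | 'mobile-rear' | 'desktop-default';

function cameraConstraints(kind: DeviceKind, width: number, height: number, deviceId?: string) {
  if (deviceId) {
    // A specific desktop camera, selected by its device id.
    return { audio: false, video: { deviceId, width, height } };
  }
  switch (kind) {
    case 'mobile-front':
      return { audio: false, video: { facingMode: 'user', width, height } };
    case 'mobile-rear':
      return { audio: false, video: { facingMode: { exact: 'environment' }, width, height } };
    default:
      // Desktop default camera.
      return { audio: false, video: { width, height } };
  }
}
```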

Config Effects

These are the effects, the magic of this library. You can define which effect to use:

Grey scale


Tracking.predictionModel.loop_(3);
const MediaStream = Tracking.predictionModel.canvas_mediaStream();
Tracking.predictionModel.stopAnimationLoop();/*To Stop Loop*/

Blur


Tracking.predictionModel.loop_(1, config_effect_bokek);
const MediaStream = Tracking.predictionModel.canvas_mediaStream();
Tracking.predictionModel.stopAnimationLoop();/*To Stop Loop*/

Virtual Background


Tracking.predictionModel.loop_(2, config_virtual_background);
const MediaStream = Tracking.predictionModel.canvas_mediaStream();
Tracking.predictionModel.stopAnimationLoop();/*To Stop Loop*/

Blur Body Parts


const parts: partsbody = {
  left_face: 0,
  torso_front: 12,
  right_face: 1,
  torso_back: 13,
  left_upper_arm_front: 2,
  left_upper_leg_front: 14,
  left_upper_arm_back: 3,
  left_upper_leg_back: 15,
  right_upper_arm_front: 4,
  right_upper_leg_front: 16,
  right_upper_arm_back: 5,
  right_upper_leg_back: 17,
  left_lower_arm_front: 6,
  left_lower_leg_front: 18,
  left_lower_arm_back: 7,
  left_lower_leg_back: 19,
  right_lower_arm_front: 8,
  right_lower_leg_front: 20,
  right_lower_arm_back: 9,
  right_lower_leg_back: 21,
  left_hand: 10,
  left_foot: 22,
  right_hand: 11,
  right_foot: 23,
};
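The ids passed in `faceBodyPartIdsToBlur` come from that name-to-id map, so they can be resolved by name instead of hard-coding numbers. The `partIdsToBlur` helper below is a hypothetical sketch (not part of tribe_lib) covering a few parts, with ids taken from the BodyPix part-channel list.

```typescript
// Hypothetical helper (not part of tribe_lib): resolve body part names to
// the BodyPix part ids expected by faceBodyPartIdsToBlur.
const PART_IDS: Record<string, number> = {
  left_face: 0,
  right_face: 1,
  left_hand: 10,
  right_hand: 11,
  torso_front: 12,
  torso_back: 13,
};

function partIdsToBlur(names: string[]): number[] {
  return names.map((name) => {
    const id = PART_IDS[name];
    if (id === undefined) throw new RangeError(`unknown body part: ${name}`);
    return id;
  });
}
```

For example, `partIdsToBlur(['left_face', 'right_face'])` produces the `[0, 1]` used as `faceBodyPartIdsToBlur` below.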
const config_blur_body_part = { backgroundBlurAmount: 30, edgeBlurAmount: 2.1, faceBodyPartIdsToBlur: [0, 1] };
Tracking.predictionModel.loop_(4, config_blur_body_part);
const MediaStream = Tracking.predictionModel.canvas_mediaStream();
Tracking.predictionModel.stopAnimationLoop();/*To Stop Loop*/
For reference, the effect configs used in the examples above:

const config_effect_bokek = { backgroundBlurAmount: 20, edgeBlurAmount: 10 };
const config_virtual_background = {
  backgroundBlurAmount: 1,
  edgeBlurAmount: 2.1,
  URL: 'base64',
};
const config_greyScale = {};

More information about the model

Repo BodyPix: Github

Authors :black_nib:

Hector Lopez Github

Hugo Fernel Github

Jhonathan Angarita Github

David De La Hoz Github