
mtr-facemesh v5.2.6

Face Mesh for MTR projects.

Downloads: 331

mtr-facemesh

Description

This repository contains the mtr-facemesh project. The module projects 3D masks (in this case, makeup) onto a face detected by an AI model in a video element.

Usage

To use the module, add a div with id="facemesh-holder" and set the width and height of the canvas to be shown.

<div id="facemesh-holder" style="width: 282px; height: 500px;"></div>

After that, you can create the FaceMesh object with the following.

import {FaceMesh, THREE} from './lib/mtr_facemesh.js';

...

const config = {
  divName: "facemesh-holder",
  modelConfigs: undefined,
  meshConfigs: [
    {
      alphaMapDir: './assets/imgs/face/mask_lips.jpg',
      ambientOcclusionMapDir: './assets/imgs/face/ambient_occlusion.png',
      normalMapDir: './assets/imgs/face/normal_texture_lips_3.png',
      color: 0xff0000,
      roughness: 0.25,
      metalness: 0.1,
      opacity: 0.3,
      clearcoat: 0.5,
      clearcoatRoughness: 1.0,
    }
  ],
  binaryDir: './lib',
  refineLandmarks: true,
  backend: 'webgl',
  flipCamera: true,
  verbose: true,
  showMesh: false,
  showFPSPanel: true,
  showScreenLogs: true,
  loadedCallback,
  errorCallback,
  cameraStatusCallback,
  oneEuroFilterConfig: {
    mincutoff: 0.005,
    beta: 0.12,
    dcutoff: 1,
  },
  faceRetouchConfig: {
    retouchAlphaMask: './assets/imgs/face/facemesh_alpha_base_v4.jpg',
    opacity: 1.0,
    blurRadius: 10,
  },
  eyelinerConfig: {
    option: 2,
    color: "#000000FF", // or "black"
  }
};

faceMesh = new FaceMesh(config);

Where config is an object with properties:

  • divName HTML div ID of the AR holder. Default is facemesh-holder.
  • imageID HTML image element ID that will replace the camera.
  • modelConfig object of configuration for the 3D model:
    • dir: 3D model file path. File must be .glb or .gltf. No default.
    • animationSpeed: 3D model animation speed. (0, 1) for slower animations and animationSpeed > 1 for faster animations. Defaults to 1.
    • scale: Sets how big the 3D model will be. Defaults to 1.
    • position: position of the model considering camera is at {0, 0, 0}. Defaults to global {0, 0,-distance}.
    • rotation: rotation of the model. Defaults to {0, 0, 0}.
    • loop: if the model should loop. Default is true.
    • showMannequin: show the head occluder to better position the object. Default is false.
  • meshConfigs Array of configuration for each make up mesh:
    • colorMapDir: texture file path. File must be an image. No default.
    • alphaMapDir: alpha mask file path. File must be an image. No default.
    • ambientOcclusionMapDir: illumination mask file path. File must be an image. No default.
    • normalMapDir: normal map file path. File must be an image. No default.
    • color: make up color in hexadecimal. Default is 0xffffff.
    • metalness: how much the material is like a metal. Wood or stone use 0.0, metallic use 1.0. Default is 0.0.
    • opacity: float in the range of 0.0 - 1.0 indicating how transparent the material is. Default is 1.0.
    • clearcoat: use clear coat related properties to enable multilayer materials that have a thin translucent layer over the base layer. Default is 0.0.
    • roughness: how rough the material surface appears, from 0.0 (smooth) to 1.0 (rough). Default is 0.0.
    • clearcoatRoughness: roughness of the clear coat layer, from 0.0 to 1.0. Default is 0.0.
  • binaryDir Path to the face mesh binary files. No default.
  • refineLandmarks Refine landmarks option. Default is true.
  • backend Tensorflow backend: webgl, wasm or cpu. Default is webgl.
  • flipCamera Flips the feedback camera. Default is false.
  • verbose If set to true shows logs on the console. Default is false.
  • loadedCallback Function to call after the AI model is loaded. No default.
  • cameraStatusCallback Function called with the camera permission status (no default):
    • 'requesting': when user is requested for the camera
    • 'hasStream': when user is granted access for the camera
    • 'failed': when something went wrong with the camera
  • errorCallback Function to call after any error. No default.
  • showMesh If set to true shows the mesh on the feedback video. Default is false.
  • showFPSPanel If set to true shows the FPS on the main loop. Default is false.
  • showScreenLogs If set to true shows the FPS for every step on the main loop. Default is false.
  • oneEuroFilterConfig One Euro Filter parameters.
    • 'mincutoff': minimum cutoff frequency. Decreasing the minimum cutoff frequency decreases slow speed jitter. Default is 0.001.
    • 'beta': speed coefficient. Increasing the speed coefficient decreases speed lag. Default is 0.007.
    • 'dcutoff': cutoff frequency. Default is 1.
  • faceRetouchConfig face retouch parameter (one object).
    • 'retouchAlphaMask': path to the alpha mask file. Only apply the retouch on the mask. No default.
    • 'blurRadius': Face retouch intensity. Default is 10.
    • 'color': Foundation color. Default is 0xffffff (white).
    • 'opacity': Foundation opacity. Default is 0.0 (transparent).
  • faceRetouchConfigs face retouch parameters array (multiple face retouch objects). Each object of the array must contain:
    • 'retouchAlphaMask': path to the alpha mask file. Only apply the retouch on the mask. No default.
    • 'blurRadius': Face retouch intensity. Default is 10.
    • 'color': Foundation color. Default is 0xffffff (white).
    • 'opacity': Foundation opacity. Default is 0.0 (transparent).
  • foundationMatchConfig If set, executes the foundation match algorithm after the togglePhoto method is called.
    • 'option': tone algorithm option.
    • 'colors': array of foundation colors in RGB and its names.
  • eyelinerConfig If set, apply the eyeliner on the face.
    • 'option': drawing type of the eyeliner. Options are from 1 to 2. Default is 1.
    • 'color': color of the eyeliner.
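
For reference, the oneEuroFilterConfig parameters follow the standard One Euro Filter. The sketch below is an illustrative stand-alone implementation, not the module's internal code, showing how mincutoff, beta and dcutoff interact when smoothing a single landmark coordinate:

```javascript
// Illustrative One Euro Filter (assumed standard formulation; not mtr-facemesh internals).
class LowPassFilter {
  constructor() { this.y = null; }
  filter(x, alpha) {
    // First sample passes through; later samples are exponentially smoothed.
    this.y = this.y === null ? x : alpha * x + (1 - alpha) * this.y;
    return this.y;
  }
}

class OneEuroFilter {
  constructor({ mincutoff = 0.001, beta = 0.007, dcutoff = 1 } = {}) {
    this.mincutoff = mincutoff; // lower -> less jitter at slow speeds
    this.beta = beta;           // higher -> less lag at high speeds
    this.dcutoff = dcutoff;     // cutoff for the derivative estimate
    this.xFilter = new LowPassFilter();
    this.dxFilter = new LowPassFilter();
    this.lastTime = null;
    this.lastX = null;
  }

  alpha(cutoff, dt) {
    const tau = 1 / (2 * Math.PI * cutoff);
    return 1 / (1 + tau / dt);
  }

  filter(x, t) {
    if (this.lastTime === null) {
      this.lastTime = t;
      this.lastX = x;
      return this.xFilter.filter(x, 1);
    }
    const dt = t - this.lastTime;
    const dx = (x - this.lastX) / dt;
    const edx = this.dxFilter.filter(dx, this.alpha(this.dcutoff, dt));
    // Cutoff adapts to speed: fast motion raises it, reducing lag.
    const cutoff = this.mincutoff + this.beta * Math.abs(edx);
    this.lastTime = t;
    this.lastX = x;
    return this.xFilter.filter(x, this.alpha(cutoff, dt));
  }
}
```

Lowering mincutoff smooths slow-speed jitter more aggressively, while raising beta makes the filter track fast motion with less lag, which matches the parameter descriptions above.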

The requestPermissions method must be called after the model is loaded, using the loadedCallback config property, since the face detector only starts predicting after the video is set. The method receives the camera configuration as a parameter, the same constraints object used by getUserMedia. Inside loadedCallback you must also add lights to the scene, since the textures are very sensitive to lighting. The following code is an example of a loadedCallback.

const loadedCallback = function () {
  const scene = faceMesh.getScene();

  const hemiLight = new THREE.HemisphereLight(0xffffff, 0x080820, 0.5);
  scene.add(hemiLight);

  const ambientLight = new THREE.AmbientLight(0x404040, 0.1);
  scene.add(ambientLight);

  faceMesh.requestPermissions({
    facingMode: {ideal: 'user'},
    width: window.innerHeight,
    height: window.innerWidth,
  });
};

With these lines of code it's already possible to run the face mesh. If you want to personalize the threejs scene, you can do it by using the get methods faceMesh.getRenderer(), faceMesh.getScene(), faceMesh.getOrthographicCamera(), faceMesh.getPerspectiveCamera(), faceMesh.getGroup() and faceMesh.getMeshes(). For example, you can change material params with faceMesh.getMeshes()[0].material.color = new THREE.Color(0xff0000);.

Features

The AR object also includes other functionality.

Partial Render

The partial render works like a curtain for the threejs renderer and only renders what is from 0 to windowSlider.value * canvas.width. For this, you need to create a checkbox and a slider from 0 to 1 and then use them as follows.

const windowCheckbox = document.getElementById('window-checkbox');
const windowSlider = document.getElementById('window-slider');

windowCheckbox.oninput = function () {
  faceMesh.setWindow(windowCheckbox.checked);
  windowSlider.style.display = windowCheckbox.checked ? 'block' : 'none';
};

windowSlider.oninput = function () {
  faceMesh.setWindowFactor(windowSlider.value);
};

Photograph

The photograph functionality works as described: when the togglePhoto method is called, the animation loop freezes or unfreezes depending on the current state. You only need to create a button in the HTML and use the following code. The method also returns a promise that resolves to a canvas containing the photograph.

const screenshotButton = document.getElementById('photo-button');

screenshotButton.onclick = async function () {
  const frozenVideoCanvas = await faceMesh.togglePhoto();
};

Share

The next feature complements the photograph feature. When the share method is called, the object takes a screenshot and then shares it through the selected social media. You only need to create a button to trigger the method.

const shareButton = document.getElementById('share-button');

shareButton.onclick = function () {
  faceMesh.share('screenshot');
};

Stop and Continue

The application can be stopped or resumed by calling the .stop() and .continue() methods. They are useful when you want to stop the application with a close button, or start it instantly after the user clicks a start button. The demo shows an example of usage.

const closeButton = document.getElementById('close-button');

var toggleCloseContinue = true;

closeButton.onclick = function () {
  if (toggleCloseContinue) {
    faceMesh.stop();
    closeButton.textContent = 'Continue';
  } else {
    faceMesh.continue();
    closeButton.textContent = 'Close';
  }

  toggleCloseContinue = !toggleCloseContinue;
};

Foundation Color

If faceRetouchConfig and/or faceRetouchConfigs are set, it is possible to apply a color to the user's face, as if foundation had been applied, using the .setFoundation method. The method receives two parameters, color and opacity. The color is a number array with 3 positions in RGB space, and the opacity is a floating point number between 0.0 and 1.0 that sets the strength of the color. If there are multiple face retouch objects, you can target one with the third parameter, index. The following example applies a red foundation to the face.

faceMesh.setFoundation([255, 0, 0], 0.5, 0);

Light Exposure

Light exposure is a feature related to face color match, since it detects the illumination on the user's face. If the face is underexposed or overexposed, faceMesh.exposure() returns -2 or 2 respectively.

<button id="exposure-log" style="position: absolute; left: 5%; top: 15%;">Loading</button>

function anim() {
  document.getElementById(
    'exposure-log'
  ).textContent = `${faceMesh.exposure()}`;
}
setInterval(anim, 500);
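
As an illustration, an application could poll faceMesh.exposure() and map the -2/2 convention described above to a user-facing hint. exposureHint is a hypothetical helper for this sketch, not part of the library, and the messages are assumptions:

```javascript
// Hypothetical helper (not part of mtr-facemesh): map exposure() values
// to a hint for the user. Per the docs above, -2 means underexposed and
// 2 means overexposed.
function exposureHint(exposure) {
  if (exposure <= -2) return 'Too dark: add more light';
  if (exposure >= 2) return 'Too bright: reduce the lighting';
  return 'Lighting OK';
}
```

In a real app this hint could replace the raw number shown in the button above.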

Face Color Match

This feature matches the user's face color to the closest color in foundationMatchConfig.colors. To use it, call the .getFaceClosestColor() method, which returns an object with attributes foundationID, skinColor and distances.

  • foundationID is the index of the best matching foundation in the foundationMatchConfig.colors array;
  • skinColor is the extracted color of the user's skin in RGB space;
  • distances is an array of objects with attributes distance and index. index points into the foundationMatchConfig.colors array, and distance is the euclidean distance from the extracted skin color to foundationMatchConfig.colors[index]. The array is sorted in ascending order by the distance attribute.
faceMesh = new FaceMesh({
  ...,
  foundationMatchConfig: {
    option: 1,
    colors: [
      {color: [203, 176, 145], name: '00'},
      {color: [169, 129, 83], name: '50'},
      {color: [77, 52, 48], name: '100'},
    ],
  },
});
window.afterScreenshot = function afterScreenshot() {
  // matches the face color with one of the 3 colors set in foundationMatchConfig
  const distances = faceMesh.getFaceClosestColor().distances;
  // second best color distance, and index
  console.log(distances[1].distance, distances[1].index);
  const foundationId = faceMesh.getFaceClosestColor().foundationID;
  console.log(window.foundations[foundationId], foundationId);
  // uses the setFoundation feature to set the matched color on faceRetouch as a foundation
  faceMesh.setFoundation(window.foundations[foundationId].color, 0.3, 0);
};
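
The matching behind getFaceClosestColor can be understood as a nearest-neighbor search in RGB space. The sketch below is an illustrative reimplementation assuming plain euclidean distance; the module's actual tone algorithm depends on the option setting and may differ:

```javascript
// Illustrative sketch (not the library's code): find the closest foundation
// color to an extracted skin color using euclidean distance in RGB space.
function closestFoundation(skinColor, foundations) {
  const distances = foundations
    .map((f, index) => ({
      index,
      distance: Math.hypot(
        skinColor[0] - f.color[0],
        skinColor[1] - f.color[1],
        skinColor[2] - f.color[2],
      ),
    }))
    .sort((a, b) => a.distance - b.distance); // ascending, as the docs describe
  return { foundationID: distances[0].index, skinColor, distances };
}
```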

Setting Eyeliner config

To change or set the eyeliner config after the process has already started, call the .setEyelinerConfigOption(option) method, where option is the style parameter. If you also want to change the eyeliner color, you can change it through the eyelinerConfig.color attribute. Here is an example:

window.faceMesh.setEyelinerConfigOption(2);
window.faceMesh.eyelinerConfig.color = "#00FFFF";

Using an image instead of camera

If faceMesh is created with imageID set, the video camera is replaced with the image loaded in the HTML with id=<imageID>, and the .requestPermissions() method does not need to be called. For example, an image tag is loaded with id "photo-id", and faceMesh is created with it replacing the camera.

<img id="photo-id" src="./any-image" />

faceMesh = new FaceMesh({
  imageID: "photo-id",
  ...
});

Demo

The demos folder contains demos of all the features. The photo demo shows an example using an image loaded in the HTML, and the webcam demo uses the user's camera. The application loads a canvas and shows blush, lipstick and eye shadow makeup on the face with face retouch. It is also possible to render partially with the setWindow method, freeze the video like a photograph, and share the photo.

WASM

  1. Activate the EMSDK
    source /path/to/emsdk/emsdk_env.sh
  2. Generate the Makefile with the cmake command
    cd ./wasm/build/
    emcmake cmake .
  3. Compile
    emmake make
  4. The wasm and js files will be inside the output directory