
@yoonit/nativescript-camera

v3.2.1

Yoonit Camera provides a custom view that shows a preview layer of the front/back camera, detects human faces in it, and reads QR codes.

Downloads

11

Readme

NativeScript Yoonit Camera

A NativeScript plugin to provide:

  • Modern Android Camera API Camera X
  • Camera preview (Front & Back)
  • PyTorch integration (Android)
  • Computer vision pipeline
  • Face detection, capture and image crop
  • Understanding of the human face
  • Frame capture
  • Capture timed images
  • QR Code scanning

More about...

The plugin's core is the native layer: every change in the native layer is reflected here. The Yoonit Camera plugin is, in effect, an aggregation of several Yoonit native libraries.

All of these native libraries can also be used independently.

Installation

npm i -S @yoonit/nativescript-camera

Usage

All of the functionality that @yoonit/nativescript-camera provides is accessed through the YoonitCamera component, which includes the camera preview. The basic usage is shown below; for more details, see the Methods, Events or the Demo Vue.

VueJS Plugin

main.js

import Vue from 'nativescript-vue'  
import YoonitCamera from '@yoonit/nativescript-camera/vue'  
  
Vue.use(YoonitCamera)  

After that, you can access the camera object in your entire project using this.$yoo.camera

Vue Component

App.vue

<template>
  <Page @loaded="onLoaded">
    <YoonitCamera
      ref="yooCamera"
      lens="front"
      captureType="face"
      :imageCapture="true"
      :imageCaptureAmount="10"
      :imageCaptureInterval="500"
      :detectionBox="true"
      @faceDetected="doFaceDetected"
      @imageCaptured="doImageCaptured"
      @endCapture="doEndCapture"
      @qrCodeContent="doQRCodeContent"
      @status="doStatus"
      @permissionDenied="doPermissionDenied"
    />
  </Page>
</template>

<script>
  export default {
    data: () => ({
      imagePath: null,
      imageCreated: null
    }),

    methods: {
      async onLoaded() {

        console.log('[YooCamera] Getting Camera view')
        this.$yoo.camera.registerElement(this.$refs.yooCamera)

        console.log('[YooCamera] Getting permission')
        if (await this.$yoo.camera.requestPermission()) {
          
          console.log('[YooCamera] Permission granted, start preview')
          this.$yoo.camera.preview()
        }
      },

      doFaceDetected({ 
        x, 
        y, 
        width, 
        height,
        leftEyeOpenProbability,
        rightEyeOpenProbability,
        smilingProbability,
        headEulerAngleX,
        headEulerAngleY,
        headEulerAngleZ
      }) {
        console.log(
          '[YooCamera] doFaceDetected',
          `
          x: ${x}
          y: ${y}
          width: ${width}
          height: ${height}
          leftEyeOpenProbability: ${leftEyeOpenProbability}
          rightEyeOpenProbability: ${rightEyeOpenProbability}
          smilingProbability: ${smilingProbability}
          headEulerAngleX: ${headEulerAngleX}
          headEulerAngleY: ${headEulerAngleY}
          headEulerAngleZ: ${headEulerAngleZ}
          `
        )
        if (!x || !y || !width || !height) {
          this.imagePath = null
        }
      },

      doImageCaptured({
        type,
        count,
        total,
        image: {
          path,
          source
        },
        inferences,
        darkness,
        lightness,
        sharpness
      }) {
        if (total === 0) {
          console.log('[YooCamera] doImageCaptured', `${type}: [${count}] ${path}`)
          this.imageCreated = `${count}`
        } else {
          console.log('[YooCamera] doImageCaptured', `${type}: [${count}] of [${total}] - ${path}`)
          this.imageCreated = `${count} of ${total}`
        }
        console.log('[YooCamera] Mask Pytorch', inferences)
        
        console.log('[YooCamera] Image Quality Darkness:', darkness)
        console.log('[YooCamera] Image Quality Lightness', lightness)
        console.log('[YooCamera] Image Quality Sharpness', sharpness)
        this.imagePath = source
      },

      doEndCapture() {
        console.log('[YooCamera] doEndCapture')
      },

      doQRCodeContent({ content }) {
        console.log('[YooCamera] doQRCodeContent', content)
      },

      doStatus({ status }) {
        console.log('[YooCamera] doStatus', status)
      },

      doPermissionDenied() {
        console.log('[YooCamera] doPermissionDenied')
      }
    }
  }
</script>

API

Props

| Props | Input/Format | Default value | Description |
| - | - | - | - |
| lens | "front" or "back" | "front" | The camera lens to use "front" or "back". |
| captureType | "none", "front", "frame" or "qrcode" | "none" | The capture type of the camera. |
| imageCapture | boolean | false | Enable/disable saving the captured images. |
| imageCaptureAmount | number | 0 | The image capture amount goal. |
| imageCaptureInterval | number | 1000 | The image capture time interval in milliseconds. |
| imageCaptureWidth | "NNpx" | "200px" | The image capture width in pixels. |
| imageCaptureHeight | "NNpx" | "200px" | The image capture height in pixels. |
| colorEncoding | "RGB" or "YUV" | "RGB" | Only for android. The image capture color encoding type: "RGB" or "YUV". |
| detectionBox | boolean | false | Show/hide the face detection box. |
| detectionBoxColor | string | "#ffffff" | Set the detection box color. |
| detectionMinSize | "NN%" | "0%" | The face minimum size percentage to capture. |
| detectionMaxSize | "NN%" | "100%" | The face maximum size percentage to capture. |
| detectionTopSize | "NN%" | "100%" | Percentage by which to adjust the top side of the detection: a positive value enlarges it, a negative value reduces it. Use detectionBox to see the result. |
| detectionRightSize | "NN%" | "100%" | Percentage by which to adjust the right side of the detection: a positive value enlarges it, a negative value reduces it. Use detectionBox to see the result. |
| detectionBottomSize | "NN%" | "100%" | Percentage by which to adjust the bottom side of the detection: a positive value enlarges it, a negative value reduces it. Use detectionBox to see the result. |
| detectionLeftSize | "NN%" | "100%" | Percentage by which to adjust the left side of the detection: a positive value enlarges it, a negative value reduces it. Use detectionBox to see the result. |
| roi | boolean | false | Enable/disable the region of interest capture. |
| roiTopOffset | "NN%" | "0%" | Distance in percentage of the top face bounding box with the top of the camera preview. |
| roiRightOffset | "NN%" | "0%" | Distance in percentage of the right face bounding box with the right of the camera preview. |
| roiBottomOffset | "NN%" | "0%" | Distance in percentage of the bottom face bounding box with the bottom of the camera preview. |
| roiLeftOffset | "NN%" | "0%" | Distance in percentage of the left face bounding box with the left of the camera preview. |
| roiAreaOffset | boolean | false | Enable/disable display of the region of interest area offset. |
| roiAreaOffsetColor | string | '#ffffff73' | Set display of the region of interest area offset color. |
| faceContours | boolean | false | Enable/disable display list of points on a detected face. |
| faceContoursColor | string | '#FFFFFF' | Set face contours color. |
| computerVision (Android Only) | boolean | false | Enable/disable the computer vision model. |
| torch | boolean | false | Enable/disable the device torch. Available only with camera lens "back". |

Methods

| Function | Parameters | Valid values | Return Type | Description |
| - | - | - | - | - |
| requestPermission | - | - | promise | Ask the user for permission to access the camera. |
| hasPermission | - | - | boolean | Return whether the application has camera permission. |
| preview | - | - | void | Start the camera preview if permission has been granted. |
| startCapture | type: string | "none", "face", "qrcode" or "frame" | void | Set the capture type to "none", "face", "qrcode" or "frame". Default value is "none". |
| stopCapture | - | - | void | Stop any type of capture. |
| destroy | - | - | void | Destroy preview. |
| toggleLens | - | - | void | Toggle camera lens facing "front"/"back". |
| setCameraLens | lens: string | "front" or "back" | void | Set camera to use "front" or "back" lens. Default value is "front". |
| getLens | - | - | string | Return "front" or "back". |
| setImageCapture | enable: boolean | true or false | void | Enable/disable saving the captured images. Default value is false. |
| setImageCaptureAmount | amount: Int | Any positive Int value | void | Set the image capture amount goal; with the value 0, images are captured indefinitely. When the amount is reached, the endCapture event is triggered. Default value is 0. |
| setImageCaptureInterval | interval: number | Any positive number that represent time in milliseconds | void | Set the image capture time interval in milliseconds. |
| setImageCaptureWidth | width: string | Value format must be in NNpx | void | Set the image capture width in pixels. |
| setImageCaptureHeight | height: string | Value format must be in NNpx | void | Set the image capture height in pixels. |
| setImageCaptureColorEncoding | colorEncoding: string | "YUV" or "RGB" | void | Only for android. Set the image capture color encoding type: "RGB" or "YUV". |
| setDetectionBox | enable: boolean | true or false | void | Set to show/hide the face detection box. |
| setDetectionBoxColor | color: string | hexadecimal | void | Set the detection box color. |
| setDetectionMinSize | percentage: string | Value format must be in NN% | void | Set the face minimum size percentage to capture. |
| setDetectionMaxSize | percentage: string | Value format must be in NN% | void | Set the face maximum size percentage to capture. |
| setDetectionTopSize | percentage: string | Value format must be in NN% | void | Percentage by which to adjust the top side of the detection: a positive value enlarges it, a negative value reduces it. Use setDetectionBox to see the result. |
| setDetectionRightSize | percentage: string | Value format must be in NN% | void | Percentage by which to adjust the right side of the detection: a positive value enlarges it, a negative value reduces it. Use setDetectionBox to see the result. |
| setDetectionBottomSize | percentage: string | Value format must be in NN% | void | Percentage by which to adjust the bottom side of the detection: a positive value enlarges it, a negative value reduces it. Use setDetectionBox to see the result. |
| setDetectionLeftSize | percentage: string | Value format must be in NN% | void | Percentage by which to adjust the left side of the detection: a positive value enlarges it, a negative value reduces it. Use setDetectionBox to see the result. |
| setROI | enable: boolean | true or false | void | Enable/disable the face region of interest capture. |
| setROITopOffset | percentage: string | Value format must be in NN% | void | Distance in percentage of the top face bounding box with the top of the camera preview. |
| setROIRightOffset | percentage: string | Value format must be in NN% | void | Distance in percentage of the right face bounding box with the right of the camera preview. |
| setROIBottomOffset | percentage: string | Value format must be in NN% | void | Distance in percentage of the bottom face bounding box with the bottom of the camera preview. |
| setROILeftOffset | percentage: string | Value format must be in NN% | void | Distance in percentage of the left face bounding box with the left of the camera preview. |
| setROIMinSize | percentage: string | Value format must be in NN% | void | Set the minimum face size related within the ROI. |
| setROIAreaOffset | enable: boolean | true or false | void | Enable/disable display of the region of interest area offset. |
| setROIAreaOffsetColor | color: string | Hexadecimal color | void | Set display of the region of interest area offset color. |
| setFaceContours | enable: boolean | true or false | void | Enable/disable display list of points on a detected face. |
| setFaceContoursColor | color: string | Hexadecimal color | void | Set face contours color. |
| setComputerVision (Android Only) | enable: boolean | true or false | void | Enable/disable computer vision model. |
| setComputerVisionLoadModels (Android Only) | modelPaths: Array<string> | Valid system path file to a PyTorch computer vision model | void | Set the models to be used when an image is captured. To see more about it, Click Here. |
| computerVisionClearModels (Android Only) | - | - | void | Clear the models that were previously added using setComputerVisionLoadModels. |
| setTorch | enable: boolean | true or false | void | Enable/disable the device torch. Available only with camera lens "back". |
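As an illustration of how these methods compose (assuming the component setup from the Vue example above, where `this.$yoo.camera` has been registered), the typical flow is: request permission, start the preview, then configure and start a capture. This is a sketch, not part of the plugin; `cameraStub` is a hypothetical stand-in so the flow can run without a device, and only the method names come from the table above.

```javascript
// Typical call sequence against the plugin's camera object. `camera` stands
// in for `this.$yoo.camera`; the stub below only records calls so the flow
// can be exercised without a device.
async function startFaceCapture(camera) {
  if (await camera.requestPermission()) {
    camera.preview()                  // start the preview once permission is granted
    camera.setImageCapture(true)      // save the captured images
    camera.setImageCaptureAmount(10)  // stop after 10 images (0 = capture indefinitely)
    camera.startCapture('face')       // begin face capture
  }
}

// Minimal stub that records which methods were called, in order.
const calls = []
const cameraStub = {
  requestPermission: async () => { calls.push('requestPermission'); return true },
  preview: () => calls.push('preview'),
  setImageCapture: (enable) => calls.push(`setImageCapture:${enable}`),
  setImageCaptureAmount: (amount) => calls.push(`setImageCaptureAmount:${amount}`),
  startCapture: (type) => calls.push(`startCapture:${type}`)
}
```

On a real device you would call these methods directly on `this.$yoo.camera` inside a component method, as in `onLoaded` above.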

Events

| Event | Parameters | Description |
| - | - | - |
| imageCaptured | { type: string, count: number, total: number, image: object = { path: string, source: any, binary: any }, inferences: [{ ['model name']: model output }], darkness: number, lightness: number, sharpness: number } | Must have started capture type of face/frame. Emitted when a face/frame image file is saved. type: "face" or "frame"; count: current index; total: total to create; image.path: the face/frame image path; image.source: the blob file; image.binary: the blob file; inferences: an array with the model outputs; darkness, lightness, sharpness: image quality classifications. |
| faceDetected | { x: number, y: number, width: number, height: number, leftEyeOpenProbability: number, rightEyeOpenProbability: number, smilingProbability: number, headEulerAngleX: number, headEulerAngleY: number, headEulerAngleZ: number } | Must have started capture type of face. Emits the face analysis; all parameters are null when the face is no longer detected. |
| endCapture | - | Must have started capture type of face/frame. Emitted when the number of image files created equals the number of images set (see the method setImageCaptureAmount). |
| qrCodeContent | { content: string } | Must have started capture type of qrcode (see startCapture). Emitted when the camera reads a QR code. |
| status | { type: 'error'/'message', status: string } | Emits error and status messages from the native layer. Mostly useful for debugging. |
| permissionDenied | - | Emitted when a preview is attempted but there is no camera permission. |

Face Analysis

The face analysis is the payload sent by the faceDetected event. Here we specify all of its parameters.

| Attribute | Type | Description |
| - | - | - |
| x | number | The x position of the face in the screen. |
| y | number | The y position of the face in the screen. |
| width | number | The width of the face in the screen. |
| height | number | The height of the face in the screen. |
| leftEyeOpenProbability | number | The left eye open probability. |
| rightEyeOpenProbability | number | The right eye open probability. |
| smilingProbability | number | The smiling probability. |
| headEulerAngleX | number | The angle in degrees that indicates the vertical head direction. See Head Movements. |
| headEulerAngleY | number | The angle in degrees that indicates the horizontal head direction. See Head Movements. |
| headEulerAngleZ | number | The angle in degrees that indicates the head tilt direction. See Head Movements. |

Head Movements

Here we explain the gif above and how these "results" are reached. Each movement (vertical, horizontal and tilt) is a state, based on the angle in degrees that indicates the head direction:

| Head Direction | Attribute | v < -36° | -36° < v < -12° | -12° < v < 12° | 12° < v < 36° | 36° < v |
| - | - | - | - | - | - | - |
| Vertical | headEulerAngleX | Super Down | Down | Frontal | Up | Super Up |
| Horizontal | headEulerAngleY | Super Left | Left | Frontal | Right | Super Right |
| Tilt | headEulerAngleZ | Super Right | Right | Frontal | Left | Super Left |
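The table above translates directly into a small classifier. The helper below is an illustrative sketch, not part of the plugin API; the thresholds and labels come straight from the table.

```javascript
// Classify a head Euler angle (in degrees) into the five states from the
// table above. `labels` is ordered from the most negative band to the most
// positive one.
function classifyAngle(v, labels) {
  if (v < -36) return labels[0]
  if (v < -12) return labels[1]
  if (v < 12) return labels[2]
  if (v < 36) return labels[3]
  return labels[4]
}

// Label sets per axis, taken from the table.
const VERTICAL = ['Super Down', 'Down', 'Frontal', 'Up', 'Super Up']         // headEulerAngleX
const HORIZONTAL = ['Super Left', 'Left', 'Frontal', 'Right', 'Super Right'] // headEulerAngleY
const TILT = ['Super Right', 'Right', 'Frontal', 'Left', 'Super Left']       // headEulerAngleZ
```

For example, `classifyAngle(headEulerAngleY, HORIZONTAL)` labels the horizontal head direction from a faceDetected payload.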

Image Quality

The image quality is a classification of three attributes: darkness, lightness and sharpness. The result is available in the imageCaptured event. The thresholds for each attribute are:

| Attribute | Threshold | Classification |
| - | - | - |
| Darkness | darkness > 0.7 | Too dark |
| Darkness | darkness <= 0.7 | Acceptable |
| Lightness | lightness > 0.65 | Too light |
| Lightness | lightness <= 0.65 | Acceptable |
| Sharpness | sharpness >= 0.1591 | Blurred |
| Sharpness | sharpness < 0.1591 | Acceptable |
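These thresholds can be applied to the darkness, lightness and sharpness values delivered by the imageCaptured event. The helper below is an illustrative sketch, not part of the plugin API.

```javascript
// Classify the three image quality attributes from an imageCaptured payload
// using the thresholds in the table above.
function classifyImageQuality({ darkness, lightness, sharpness }) {
  return {
    darkness: darkness > 0.7 ? 'Too dark' : 'Acceptable',
    lightness: lightness > 0.65 ? 'Too light' : 'Acceptable',
    sharpness: sharpness >= 0.1591 ? 'Blurred' : 'Acceptable'
  }
}
```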

Messages

Pre-defined message constants used by the status event.

| Message | Description |
| - | - |
| INVALID_MINIMUM_SIZE | The face/QR code width percentage, relative to the screen width, is less than the configured minimum. |
| INVALID_MAXIMUM_SIZE | The face/QR code width percentage, relative to the screen width, is more than the configured maximum. |
| INVALID_OUT_OF_ROI | The face bounding box is outside the configured region of interest. |
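A status handler can branch on these constants to give the user actionable feedback. This is a sketch only: the constant names come from the table above, while the hint strings are invented for this example.

```javascript
// Map the status-event message constants to user-facing hints. The constant
// names come from the table above; the hint texts are made up for this example.
function describeStatus(status) {
  switch (status) {
    case 'INVALID_MINIMUM_SIZE':
      return 'Move closer: the face/QR code is smaller than the configured minimum.'
    case 'INVALID_MAXIMUM_SIZE':
      return 'Move back: the face/QR code is larger than the configured maximum.'
    case 'INVALID_OUT_OF_ROI':
      return 'Center the face inside the region of interest.'
    default:
      return status // pass through unknown native messages unchanged
  }
}
```

Such a helper would typically be called from the `doStatus` handler shown in the Vue example.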

Contribute and make it better

Clone the repo, change what you want and send a PR.

For commit messages we use Conventional Commits.

Contributions are always welcome; thanks to everyone who has already contributed!


Code with ❤ by the Cyberlabs AI Front-End Team