leia-capture v1.3.1
Leia Capture
Leia Capture allows you to perform liveness challenges, take pictures, and record videos in your browser.
Installation
Via npm:
npm install leia-capture
Via script tags:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"></script>
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/[email protected]"></script>
<script src="https://unpkg.com/[email protected]/umd/leia-capture.umd.js"></script>
Usage
For npm:
import * as LeiaCapture from 'leia-capture'
For Angular:
declare var LeiaCapture: any
and add the scripts to your angular.json:
{
  ...
  "architect": {
    ...
    "build": {
      ...
      "scripts": [
        "node_modules/@tensorflow/tfjs-core/dist/tf-core.js",
        "node_modules/@tensorflow/tfjs-backend-cpu/dist/tf-backend-cpu.js",
        "node_modules/@tensorflow/tfjs-backend-webgl/dist/tf-backend-webgl.js",
        "node_modules/@tensorflow/tfjs-backend-wasm/dist/tf-backend-wasm.js",
        "node_modules/@tensorflow/tfjs-layers/dist/tf-layers.js",
        "node_modules/@tensorflow/tfjs-converter/dist/tf-converter.js",
        "node_modules/@tensorflow-models/face-landmarks-detection/dist/face-landmarks-detection.js",
        "node_modules/leia-capture/umd/leia-capture.umd.js"
      ]
      ...
    }
    ...
  }
  ...
}
To avoid build errors when using Angular, add this to your package.json:
"browser": {
  "os": false
}
Basic face challenge (type can be 'TURN_LEFT', 'TURN_RIGHT', 'OPEN_MOUTH'):
// Create a camera
const camera = new LeiaCapture()
// Register EventListeners
window.addEventListener("cameraReady", () => {
  // Set your overlay and start a challenge when the camera is ready
  camera.setOverlay(overlayDiv)
  camera.startDetectAndDrawFace(0, true, "challenge01", "TURN_LEFT")
})
// If you choose to record challenges, the video is returned via this event
window.addEventListener("videoProcessed", event => {
  const video = event.detail.blob
  const name = event.detail.name
  // Do something with the video
})
// Start the camera in a container div
// Startup can take some time depending on the device,
// so load the facemesh model beforehand rather than while the camera is running
camera.start(containerDiv, "front")
Basic document capture:
// Create a camera
const camera = new LeiaCapture()
// Callback invoked with the picture blob
function onTakePicture(blob) {
  // Do something with the picture
}
// Add a callback to your button for when a user takes a picture
myOverlayCaptureButton.onclick = function() {
  camera.takePicture(onTakePicture)
  // You can also record a video
  camera.startRecording("document01")
}
// Register EventListeners
window.addEventListener("cameraReady", () => {
  // Set your overlay when the camera is ready
  camera.setOverlay(overlayDiv)
})
window.addEventListener("videoProcessed", event => {
  const video = event.detail.blob
  const name = event.detail.name
  // Do something with the video
})
// Start the camera in a container div
// Startup can take some time depending on the device
camera.start(containerDiv, "back")
API
start(container, facingMode, videoWidth, videoHeight, minFrameRate, maxFrameRate, drawFaceMask, deviceOrientation, maxVideoDuration)
Start camera in a given container
Params:
- container - an HTML element to insert the camera
- facingMode - the camera facing mode. Can be 'front' or 'back' (default: 'front')
- videoWidth - a video width. Cannot be below 1280 (default: 1280)
- videoHeight - a video height. Cannot be below 720 (default: 720)
- minFrameRate - min frame rate (default: 23)
- maxFrameRate - max frame rate (default: 25)
- drawFaceMask - if true, detected face masks are drawn (default: true)
- deviceOrientation - if true, report device orientation via 'deviceOrientation' event (default: false)
- maxVideoDuration - maximum video duration in seconds; recordings exceeding it are cancelled (default: 10)
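Since the parameters are positional, a call that departs from the defaults has to spell out every earlier argument. A sketch, with the argument order taken from the signature above and illustrative values (the helper name is mine, not part of the library):

```javascript
// Start the back camera with every start() parameter spelled out
function startBackCamera(camera, container) {
  camera.start(
    container, // HTML element that will host the camera
    "back",    // facingMode: 'front' or 'back'
    1280,      // videoWidth (cannot be below 1280)
    720,       // videoHeight (cannot be below 720)
    23,        // minFrameRate
    25,        // maxFrameRate
    false,     // drawFaceMask: no face mask needed for document capture
    true,      // deviceOrientation: emit 'deviceOrientation' events
    15         // maxVideoDuration in seconds
  )
}
```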
stop()
Stop camera and remove it from its container
loadFacemeshModel()
Load facemesh model
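A sketch of preloading the model so the first challenge does not stall on the download. The 'loadedModel' event (see Events below) signals completion; the helper name is hypothetical, and the event target is injectable here for testability (in the browser it is window, as in the Usage examples):

```javascript
// Preload the facemesh model and invoke onReady once it has loaded
function whenModelLoaded(camera, onReady, target = window) {
  // 'loadedModel' fires once the model finished loading
  target.addEventListener("loadedModel", onReady, { once: true })
  camera.loadFacemeshModel()
}
```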
setOverlay(overlay)
Display an overlay on top of the video
Params:
- overlay - an HTML element
startDetectAndDrawFace(delay, record, videoOutputName, challengeType)
Start a challenge
Params:
- delay - delay in milliseconds before prediction
- record - if true, the current challenge will be automatically recorded (default: true)
- videoOutputName - a name for the recorded video, if record is set to true (default: 'challenge')
- challengeType - a challenge type. Can be 'TURN_LEFT', 'TURN_RIGHT', 'OPEN_MOUTH'
startRecording(videoOutputName)
Start recording a video. Note: during challenges, you don't need to call this method if you called 'startDetectAndDrawFace' with 'record' set to true
Params:
- videoOutputName - a name for the recorded video
stopRecording(processVideo)
Stop recording a video. Note: during challenges, you don't need to call this method if you called 'startDetectAndDrawFace' with 'record' set to true
Params:
- processVideo - if true, the recorded video is processed and the 'videoProcessing' and 'videoProcessed' events are emitted
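A minimal sketch pairing the two calls for a manual (non-challenge) recording; the helper name and the button wiring are illustrative, not part of the library:

```javascript
// Start recording a named clip and stop it (with processing) on a button click;
// the resulting blob then arrives via the 'videoProcessed' event
function captureClip(camera, name, stopButton) {
  camera.startRecording(name)
  stopButton.onclick = () => camera.stopRecording(true)
}
```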
takePicture(callback, quality, area)
Take a picture
Params:
- callback - a function invoked with the picture once it is available; it must accept the blob as its single argument: nameofyourmethod(pictureBlob)
- quality - quality of the returned picture, from 0.0 to 1.0 (default: 1.0)
- area - (optional) an area of capture. Must be in this format [x, y, width, height]
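To build the [x, y, width, height] area argument, a small pure helper can center a crop inside the video frame. centerCropArea is a hypothetical helper (not part of leia-capture):

```javascript
// Compute a crop of cropWidth x cropHeight centered in the video frame,
// in the [x, y, width, height] format expected by takePicture
function centerCropArea(videoWidth, videoHeight, cropWidth, cropHeight) {
  const x = Math.max(0, Math.round((videoWidth - cropWidth) / 2))
  const y = Math.max(0, Math.round((videoHeight - cropHeight) / 2))
  return [x, y, cropWidth, cropHeight]
}

// Usage (browser only), combining it with getVideoDimensions:
// const [w, h] = camera.getVideoDimensions()
// camera.takePicture(onTakePicture, 0.9, centerCropArea(w, h, 800, 500))
```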
getVideoDimensions()
Get video dimensions in this format: [width, height]
detectAndDrawFace()
Manually start face detection
Events
cameraReady
Triggered when camera is ready to capture
prediction
Triggered when there's a facemesh prediction
videoProcessing
Triggered while a video is being processed
videoProcessed
Triggered when a video has been processed
Params:
- blob - a video blob
- name - name of the video blob
loadingModel
Triggered when the facemesh model starts loading
loadedModel
Triggered when the facemesh model has been loaded
faceCentered
Triggered during a challenge when a face is centered and in front of the camera
faceIn
Triggered when a face is in the center area
faceOut
Triggered when a face is out of the center area
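The faceIn/faceOut pair can drive a visual hint on the overlay. A sketch, assuming the window events shown in the Usage examples; the helper and the CSS class name are illustrative, and the event target is injectable for testability:

```javascript
// Toggle a CSS class on the overlay while the face is inside the center area
function bindFacePositionHint(overlay, target = window) {
  target.addEventListener("faceIn", () => overlay.classList.add("face-ok"))
  target.addEventListener("faceOut", () => overlay.classList.remove("face-ok"))
}
```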
faceStartedChallenge
Triggered when the face starts the movement required by the challenge
challengeComplete
Triggered when a challenge has been completed
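challengeComplete makes it straightforward to chain several challenges: start the next one each time the event fires. A sketch with a hypothetical helper; the video names are illustrative and the event target is injectable for testability (window in the browser):

```javascript
// Run a list of challenge types in sequence, waiting for
// 'challengeComplete' between consecutive challenges
function runChallenges(camera, types, target = window) {
  const queue = types.slice()
  const next = () => {
    const type = queue.shift()
    if (!type) return // all challenges done
    target.addEventListener("challengeComplete", next, { once: true })
    camera.startDetectAndDrawFace(0, true, "challenge-" + type, type)
  }
  next()
}

// Usage (browser only):
// runChallenges(camera, ["TURN_LEFT", "TURN_RIGHT", "OPEN_MOUTH"])
```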
screenOrientationChanged
Triggered when the device screen changed orientation (portrait <--> landscape)
deviceOrientation
Triggered when the device moves on the z, x, or y axis
Params:
- z - rotation left or right
- x - tilt up or down
- y - tilt left or right
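A sketch of consuming this event (remember to pass deviceOrientation as true to start). The helper name is mine, the target is injectable for testability, and it assumes the axes arrive in event.detail, as the videoProcessed example above does for its params:

```javascript
// Forward device orientation changes (z, x, y axes) to a callback
function bindOrientation(onChange, target = window) {
  target.addEventListener("deviceOrientation", event => {
    // Assumption: params are carried in event.detail, like videoProcessed
    const { z, x, y } = event.detail
    onChange(z, x, y)
  })
}
```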
noFaceDetected
Triggered when no face was detected
Licence
MIT