
BlinkInput In-browser SDK

BlinkInput In-browser SDK enables you to scan various barcodes in your web app, directly within the web browser, without the need to send images to servers for processing. You can integrate the SDK into your web app simply by following the instructions below, and your web app will be able to scan and process data from the following barcode standards:

  • PDF417 barcode
  • QR code
  • Barcodes from SIM cards
  • Automobile VIN barcodes
  • Code 128 1D barcode
  • Code 39 1D barcode
  • EAN 13 1D barcode
  • EAN 8 1D barcode
  • ITF 1D barcode
  • UPC A 1D barcode
  • UPC E 1D barcode

Using BlinkInput in your web app requires a valid license key. You can obtain a free trial license key by registering on the Microblink dashboard. After registering, you will be able to generate a license key for your web app. The license key is bound to the fully qualified domain name of your web app, so please make sure you enter the correct name when asked. Also, keep in mind that if you plan to serve your web app from different domains, you will need different license keys.

For more information on how to integrate the BlinkInput SDK into your web app, read the instructions below. Make sure you read the latest CHANGELOG.md file for the most recent changes and improvements.

Check out the official demo app or live examples of BlinkInput SDK in action:

  1. BlinkInput SDK with built-in UI
    • See what the bare UI looks like at Codepen
  2. Scan barcode with a web camera
  3. Scan SIM barcode with a web camera
  4. Scan VIN barcode with a web camera

To see the source code of the above examples, check out the examples directory. If you’d like to run examples of the UI component, either through the browser or locally, see the ui/examples directory.

BlinkInput In-browser SDK is meant to be used natively in a web browser. It will not work correctly within an iOS/Android WebView or a Node.js backend service. If you are looking for the Cordova/PhoneGap version, please go here.

Integration instructions

This repository contains WebAssembly files and supporting JS files which contain the core implementation of BlinkInput functionalities.

To make integration of the WebAssembly easier and more developer friendly, JavaScript/TypeScript support code is also provided, giving you an easy-to-use integration API.

This repository also contains a sample JS/TS integration app which demonstrates how you can integrate BlinkInput into your web app.

BlinkInput will work in any browser that supports WebAssembly, but works best with the latest versions of Firefox, Chrome, Safari and Microsoft Edge. It's worth noting that scan performance depends on the device's processing capabilities.

Obtaining a license key

Using BlinkInput in your web app requires a valid license key.

You can obtain a free trial license key by registering on the Microblink dashboard. After registering, you will be able to generate a license key for your web app.

Make sure you enter a fully qualified domain name of your web app when filling out the form — the license key will be bound to it. Also, if you plan to serve your web app from different domains, you'll need a license key for each one.

Keep in mind: BlinkInput versions 4.5.0 and above require an internet connection to work under our new License Management Program.

This means your web app has to be connected to the Internet in order for us to validate your trial license key. Scanning or data extraction of documents still happens offline, in the browser itself.

Once the validation is complete, you can continue using the SDK in an offline mode (or over a private network) until the next check.

We've added an error callback to the Microblink SDK to inform you about the status of your license key.

Installation

We recommend you install a stable version via NPM or Yarn:

# NPM
npm install @microblink/blinkinput-in-browser-sdk

# Yarn
yarn add @microblink/blinkinput-in-browser-sdk

The package can then be used with a module bundler in a Node environment:

import * as BlinkInputSDK from "@microblink/blinkinput-in-browser-sdk";

Source code of BlinkInputSDK is written in TypeScript and types are exposed in the public NPM package, so it's possible to use the SDK in both JavaScript and TypeScript projects.


Alternatively, it's possible to use UMD builds which can be loaded from public CDN services.

However, we strongly advise that you host the JavaScript bundles on your infrastructure since there is no guarantee that the public CDN service has satisfactory uptime and availability throughout the world.

For example, it's possible to use UMD builds from the dist folder on Unpkg CDN. The UMD builds make BlinkInputSDK available as a window.BlinkInputSDK global variable:

<!-- IMPORTANT: change "X.Y.Z" to the version number you wish to use! -->
<script src="https://unpkg.com/@microblink/[email protected]/dist/blinkinput-sdk.min.js"></script>

Finally, it's possible to use ES builds, which can be downloaded from the es folder on unpkg. ES modules are used in a similar manner as NPM package:

import * as BlinkInputSDK from "./es/blinkinput-sdk.js";

Important: the Unpkg CDN is used here for simplicity. It's not intended for production use!

WASM Resources

After adding the BlinkInput SDK to your project, make sure to include all files from its resources folder in your distribution. Those files contain a compiled WebAssembly module and supporting JS code.

Do not add those files to the main app bundle, but rather place them in a publicly available location so that the SDK can load them at an appropriate time. For example, place the resources in the my-angular-app/src/assets/ folder if using ng new, or in the my-react-app/public/ folder if using create-react-app.
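
As an illustration only, a small Node script could copy the resources during your build step. This is a minimal sketch; the destination path is an assumption and should be adjusted to your framework's static assets folder (requires Node 16.7+ for fs.cpSync):

// copy-resources.mjs - run with "node copy-resources.mjs" before building your app.
import { cpSync } from "node:fs";

cpSync(
    "node_modules/@microblink/blinkinput-in-browser-sdk/resources",
    "public/resources",   // assumed destination - adjust to your framework's static assets folder
    { recursive: true }
);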

For more information on how to set up the aforementioned resources, check out the Configuration of SDK section.

Versions and backward compatibility

Even though the API will not change between minor versions, the structure of results for various recognizers might.

This is due to the improvements we make to our recognizers with every minor release. We suggest you familiarize yourself with what Recognizer, RecognizerRunner and VideoRecognizer are before moving on.

It's a good practice to always lock your minor version and check the CHANGELOG.md file before upgrading to a new minor version.

For example, in package.json you should have something like "@microblink/blinkinput-in-browser-sdk": "~4.1.1" instead of the default "@microblink/blinkinput-in-browser-sdk": "^4.1.1".

Performing your first scan

Note: the following code snippets are written in TypeScript, but it's possible to use them in plain JavaScript.

  1. Make sure you have a valid license key. See Obtaining a license key.

  2. Add the SDK to your web app by using one of the options provided in the Installation section.

  3. Initialize the SDK using the following code snippet:

    import * as BlinkInputSDK from "@microblink/blinkinput-in-browser-sdk";
    
    // Check if browser is supported
    if ( BlinkInputSDK.isBrowserSupported() )
    {
        const loadSettings = new BlinkInputSDK.WasmSDKLoadSettings( "your-base64-license-key" );
    
        BlinkInputSDK.loadWasmModule( loadSettings ).then
        (
            ( wasmSDK: BlinkInputSDK.WasmSDK ) =>
            {
                // The SDK was initialized successfully, save the wasmSDK for future use
            },
            ( error: any ) =>
            {
                // Error happened during the initialization of the SDK
                console.log( "Error during the initialization of the SDK!", error );
            }
        )
    }
    else
    {
        console.log( "This browser is not supported by the SDK!" );
    }
  4. Create recognizer objects that will perform image recognition, configure them to your needs (to scan specific types of documents, for example) and use them to create a RecognizerRunner object:

    import * as BlinkInputSDK from "@microblink/blinkinput-in-browser-sdk";
    
    const recognizer = await BlinkInputSDK.createBarcodeRecognizer( wasmSDK );
    const recognizerRunner = await BlinkInputSDK.createRecognizerRunner(
        wasmSDK,
        [ recognizer ],
        true
    );
  5. Obtain a reference to your HTML video element and create a VideoRecognizer using the element and your instance of RecognizerRunner which then can be used to process input video stream:

    const cameraFeed = document.getElementById( "myCameraVideoElement" ) as HTMLVideoElement;
    try
    {
        const videoRecognizer = await BlinkInputSDK.VideoRecognizer.createVideoRecognizerFromCameraStream(
            cameraFeed,
            recognizerRunner
        );
    
        // There is more than one way to handle recognition.

        // 1. The recognize() method provides the default behavior, such as built-in
        //    error handling, timeout and video feed pausing.
        const processResult = await videoRecognizer.recognize();

        // 2. Alternatively, the startRecognition() method lets you pass your own
        //    onScanningDone callback, giving you the option to create custom behavior:
        //
        // const processResult = await videoRecognizer.startRecognition(
        //     async ( recognitionState ) =>
        //     {
        //         videoRecognizer.pauseRecognition();
        //         return recognitionState;
        //     }
        // );
    
        // To obtain recognition results see next step
    }
    catch ( error )
    {
        if ( error.name === "VideoRecognizerError" )
        {
            // Reason is of type BlinkInputSDK.NotSupportedReason and contains information why video
            // recognizer could not be used. Usually this happens when user didn't grant access to a
            // camera or when a hardware or OS error occurs.
            const reason = ( error as BlinkInputSDK.VideoRecognizerError ).reason;
        }
    }
  6. If processResult returned from VideoRecognizer's method recognize or startRecognition is not BlinkInputSDK.RecognizerResultState.Empty, then at least one recognizer given to the RecognizerRunner above contains a recognition result. You can extract the result from each recognizer using its getResult method:

    if ( processResult !== BlinkInputSDK.RecognizerResultState.Empty )
    {
        const recognitionResult = await recognizer.getResult();
        console.log( recognitionResult );
    }
    else
    {
        console.log( "Recognition was not successful!" );
    }
  7. Finally, release the memory on the WebAssembly heap by calling the delete method on both the RecognizerRunner and each of your recognizers. Also, release the camera stream by calling releaseVideoFeed on the VideoRecognizer instance:

    videoRecognizer.releaseVideoFeed();
    recognizerRunner.delete();
    recognizer.delete();

    Note that after releasing those objects it is not valid to call any methods on them, as they are destroyed. This is required to release memory resources on the WebAssembly heap, which are not automatically released by JavaScript's garbage collector. Also, note that results returned from the getResult method are placed on JavaScript's heap and will be cleaned up by its garbage collector, just like any other normal JavaScript object.

Recognizing still images

If you just want to perform recognition of still images and do not need live camera recognition, you can do that as well.

  1. Initialize recognizers and RecognizerRunner as described in the steps 1-4 above.

  2. Make sure you have the image set on an HTMLImageElement. If you only have the URL of the image that needs recognizing, you can attach it to the image element with the following code snippet:

    const imageElement = document.getElementById( "imageToProcess" ) as HTMLImageElement;
    imageElement.src = imageURL; // or URL.createObjectURL( imageBlob ) if you have a Blob/File instead of a URL
    await imageElement.decode();
  3. Obtain the CapturedFrame object using function captureFrame and give it to the processImage method of the RecognizerRunner:

    const imageFrame = BlinkInputSDK.captureFrame( imageElement );
    const processResult = await recognizerRunner.processImage( imageFrame );
  4. Proceed as in steps 6-7 above. Note that you don't have to release any VideoRecognizer resources here, since only a single image was recognized, but the RecognizerRunner and recognizers must still be deleted using the delete method. A consolidated sketch of this flow is shown below.
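
For reference, here is a consolidated sketch of the still-image flow. It assumes wasmSDK has already been initialized as in step 3 of the previous section and that an <img id="imageToProcess"> element with a loaded image exists on the page:

import * as BlinkInputSDK from "@microblink/blinkinput-in-browser-sdk";

// Create a recognizer and a RecognizerRunner (as in the camera-based steps above)
const recognizer = await BlinkInputSDK.createBarcodeRecognizer( wasmSDK );
const recognizerRunner = await BlinkInputSDK.createRecognizerRunner( wasmSDK, [ recognizer ], true );

// Capture a frame from the image element and process it
const imageElement = document.getElementById( "imageToProcess" ) as HTMLImageElement;
const imageFrame = BlinkInputSDK.captureFrame( imageElement );
const processResult = await recognizerRunner.processImage( imageFrame );

if ( processResult !== BlinkInputSDK.RecognizerResultState.Empty )
{
    console.log( await recognizer.getResult() );
}

// Release memory on the WebAssembly heap once done
await recognizerRunner.delete();
await recognizer.delete();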

Configuration of SDK

You can modify the default behaviour of the SDK before a WASM module is loaded.

Check out the following code snippet to learn how to configure the SDK and which non-development options are available:

// Create instance of WASM SDK load settings
const loadSettings = new BlinkInputSDK.WasmSDKLoadSettings( "your-base64-license-key" );

/**
 * Write a hello message to the browser console when license check is successfully performed.
 *
 * Hello message will contain the name and version of the SDK, which are required information for all support
 * tickets.
 *
 * The default value is true.
 */
loadSettings.allowHelloMessage = true;

/**
 * Absolute location of WASM and related JS/data files. Useful when resource files should be loaded over CDN, or
 * when web frameworks/libraries are used which store resources in specific locations, e.g. inside "assets" folder.
 *
 * Important: if the engine is hosted on another origin, CORS must be enabled between two hosts. That is, server
 * where engine is hosted must have 'Access-Control-Allow-Origin' header for the location of the web app.
 *
 * Important: SDK and WASM resources must be from the same version of a package.
 *
 * Default value is empty string, i.e. "". In case of empty string, value of "window.location.origin" property is
 * going to be used.
 */
loadSettings.engineLocation = "";

/**
 * Type of the WASM that will be loaded. By default, if not set, the SDK will automatically determine the best WASM
 * to load.
 */
loadSettings.wasmType = null;

/**
 * Optional callback function that will report the SDK loading progress.
 *
 * This can be useful for displaying progress bar to users with slow connections.
 *
 * The default value is "null".
 *
 * @example
 * loadSettings.loadProgressCallback = (percentage: number) => console.log(`${ percentage }% loaded!`);
 */
loadSettings.loadProgressCallback = null;

// After load settings are configured, proceed with the loading
BlinkInputSDK.loadWasmModule( loadSettings ).then( ... );

There are some additional options which can be seen in the WasmSDKLoadSettings configuration class.

Deployment guidelines

This section contains information on how to deploy a web app which uses BlinkInput In-browser SDK.

HTTPS

Make sure to serve the web app over an HTTPS connection.

Otherwise, the browser will block access to a web camera and remote scripts due to security policies.

Deployment of WASM files

The WASM wrapper contains three different builds:

  • Basic

    • The WASM that will be loaded is the most compatible with all browsers that support WASM, but lacks features that could be used to improve performance.
  • Advanced

    • The WASM that will be loaded will be built with advanced WASM features, such as bulk memory, SIMD, non-trapping floating point and sign extension. Such WASM can only be executed in browsers that support those features. Attempting to run this WASM in a non-compatible browser will crash your app.
  • AdvancedWithThreads

    • The WASM that will be loaded will be built with advanced WASM features, just like above. Additionally, it will also be built with support for multi-threaded processing. This feature requires a browser with support for both advanced WASM features and SharedArrayBuffer.

    • For multi-threaded processing, some additional things need to be set up, such as COOP and COEP headers; more info about web server setup can be found here.

    • Keep in mind that this WASM bundle requires all resources to be on the same origin. So, for example, it's not possible to load the WASM files from a CDN. This limitation exists due to browser security rules.

Files: resources/{basic,advanced,advanced-threads}/BlinkInputWasmSDK.{data,js,wasm}
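
If you prefer to pin a specific build instead of relying on automatic detection, the wasmType load setting shown in the Configuration of SDK section can be set explicitly. A minimal sketch, assuming the WasmType enum exposes members named after the builds above (verify the exact names in the package's typings):

const loadSettings = new BlinkInputSDK.WasmSDKLoadSettings( "your-base64-license-key" );

// Assumed enum member name; check the WasmType definition shipped with the package.
loadSettings.wasmType = BlinkInputSDK.WasmType.AdvancedWithThreads;

BlinkInputSDK.loadWasmModule( loadSettings ).then( /* ... */ );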

Server Configuration

If you know how WebAssembly works, then you'll know that a browser must compile the .wasm file to native code before it can run it. This is unlike JavaScript code, which is interpreted and compiled to native code only if needed (JIT, a.k.a. just-in-time compilation). Therefore, before BlinkInput is loaded, the browser must download and compile the provided .wasm file.

To make this faster, you should configure your web server to serve .wasm files with Content-Type: application/wasm. This instructs the browser that this is a WebAssembly file, which most modern browsers use to perform streaming compilation, i.e. they start compiling the WebAssembly code as soon as the first bytes arrive from the server, instead of waiting for the entire file to download.
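
As an illustration only (Express is an assumption here; any web server can do this), a static file server could set the header explicitly like so:

import express from "express";
import path from "path";

const app = express();

// Serve the built web app; newer Express/mime versions already map .wasm to
// application/wasm, but setting the header explicitly guards against stale mappings.
app.use( express.static( path.join( process.cwd(), "dist" ), {
    setHeaders: ( res, filePath ) =>
    {
        if ( filePath.endsWith( ".wasm" ) )
        {
            res.setHeader( "Content-Type", "application/wasm" );
        }
    }
} ) );

app.listen( 8080 );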

For more information about streaming compilation, check this article from MDN.

If your server supports serving compressed files, you should utilize that to minimize the download size of your web app. The .wasm file is not small, but it is very compressible. The same is true for all other files that you need to serve for your web app.

For more information about configuring your web server to compress and optimally deliver BlinkInput SDK in your web app, see the official Emscripten documentation.

Location of WASM and related support files

You can host WASM and related support files in a location different from the one where your web app is located.

For example, your WASM and related support files can be located in https://cdn.example.com, while the web app is hosted on https://example.com.

In that case, it's important to set CORS headers in responses from https://cdn.example.com, i.e. set the Access-Control-Allow-Origin header with the proper value so that the web page is allowed to load the resources.

If the WASM engine folders are not placed in the same folder as the web app, don't forget to configure the WasmSDKLoadSettings instance with the proper location:

...
const loadSettings = new BlinkInputSDK.WasmSDKLoadSettings( licenseKey );

loadSettings.engineLocation = "https://cdn.example.com/wasm";
...

The location should point to the folder containing the basic, advanced and advanced-threads folders, which contain the WebAssembly and its support files.

The differences between the basic, advanced and advanced-threads folders lie in the way the WebAssembly file was built:

  • WebAssembly files in the basic folder were built to be the most compatible, but less performant.
  • WebAssembly files in the advanced folder can yield better scanning performance, but require a more modern browser.
  • WebAssembly files in the advanced-threads folder use the same advanced WASM features as those in the advanced folder, but additionally use Web Workers for multi-threaded processing, which yields the best performance.

Depending on what features the browser actually supports, the correct WASM file will be loaded automatically.

Note that in order to use WASM from the advanced-threads folder, you need to configure your website to be "cross-origin isolated" using COOP and COEP headers, as described in this article. This is required for the browser to allow use of the SharedArrayBuffer feature, which multi-threaded processing depends on. Without it, the browser will load only the single-threaded WASM binary from the advanced folder.

# NGINX web server COEP and COOP header example

...

server {
    location / {
        add_header Cross-Origin-Embedder-Policy require-corp;
        add_header Cross-Origin-Opener-Policy same-origin;
    }
}

...

Setting up multiple licenses

As mentioned, the license key of the BlinkInput SDK is tied to your domain name, so you need to initialize the SDK with different license keys depending on where your web app is hosted.

A common scenario is to have different license keys for development on the local machine, the staging environment and the production environment. Our team will be happy to issue multiple trial licenses if need be. See Obtaining a license key.

There are two most common approaches regarding setup of your license key(s):

  1. Multiple apps: build different versions of your web app for different environments
  2. Single app: build a single version of your web app which has logic to determine which license key to use

Multiple apps

This is the common approach when working with modern frameworks/libraries.

Single app

This is a simpler approach, where license key handling is done inside the web app.

Here is one possible solution:

let licenseKey = "..."; // Place your development license key here

if ( window.location.hostname === "staging.example.com" ) // Place your staging domain here
{
    licenseKey = "..."; // Place your staging license key here
}

if ( window.location.hostname === "example.com" ) // Place your production domain here
{
    licenseKey = "..."; // Place your production license key here
}
...

The Recognizer concept, RecognizerRunner and VideoRecognizer

This section will first describe what a Recognizer is and how it should be used to perform recognition of images, videos and camera stream. We'll also describe what RecognizerRunner is and how it can be used to tweak the recognition procedure. Finally, we'll describe what VideoRecognizer is and explain how it builds on top of RecognizerRunner in order to provide support for recognizing a video or a camera stream.

The Recognizer concept

The Recognizer is the basic unit tasked with reading documents within the domain of BlinkInput SDK. Its main purpose is to process the image and extract meaningful information from it. As you will see later, BlinkInput SDK has lots of different Recognizer objects you can set up to recognize various documents.

The Recognizer is an object on the WebAssembly heap, which means that it will not be automatically cleaned up by the garbage collector once it's no longer needed. Once you are done using it, you must call its delete method to release the memory on the WebAssembly heap. Failing to do so will result in a memory leak on the WebAssembly heap, which may crash the browser tab running your web app.

Each Recognizer has a Result object, which contains the data that was extracted from the image. The Result for each specific Recognizer can be obtained by calling its getResult method, which will return a Result object placed on the JS heap, i.e. managed by the garbage collector. Therefore, you don't need to call any delete-like methods on the Result object.

Every Recognizer is a stateful object that can be in two possible states: idle state and working state.

While in the idle state, you are allowed to call the updateSettings method, which will update its properties according to the given settings object. At any time, you can call its currentSettings method to obtain its currently applied settings object.

After you create a RecognizerRunner with an array containing your recognizer, the Recognizer enters the working state, in which it is used for processing. While in the working state, it is not possible to call the updateSettings method (calling it will crash your web app).

If you need to change the configuration of your recognizer while it's being used, you need to:

  1. Call its currentSettings method to obtain its current configuration
  2. Update it as you need it
  3. Create a new Recognizer of the same type
  4. Call updateSettings on it with your modified configuration
  5. Replace the original Recognizer within the RecognizerRunner by calling its reconfigureRecognizers method

When written as a pseudocode, this would look like:

import * as BlinkInputSDK from "@microblink/blinkinput-in-browser-sdk";

// Assume myRecognizerInUse is used by the recognizerRunner
const currentSettings = await myRecognizerInUse.currentSettings();

// Modify currentSettings as you need
const newRecognizer = await BlinkInputSDK.createBarcodeRecognizer( wasmSDK ); // use the creation function appropriate for your recognizer type
await newRecognizer.updateSettings( currentSettings );

// Reconfigure recognizerRunner
await recognizerRunner.reconfigureRecognizers( [ newRecognizer ], true ); // use `true` or `false` depending on what you want to achieve (see below for the description)

// newRecognizer is now in use and myRecognizerInUse is no longer in use -
// you can delete it if you don't need it anymore
await myRecognizerInUse.delete();

While a Recognizer object works, it changes its internal state and its result. A Recognizer's Result always starts in the Empty state. When the Recognizer processes a given image, its Result can either stay in the Empty state (the Recognizer failed to perform recognition), move to the Uncertain state (the Recognizer performed recognition, but not all mandatory information was extracted) or move to the Valid state (recognition succeeded and all mandatory information was successfully extracted from the image).

RecognizerRunner

The RecognizerRunner is the object that manages the chain of individual Recognizer objects within the recognition process.

It must be created with the createRecognizerRunner method of the WasmModuleProxy interface, which is a member of the WasmSDK interface resolved by the promise returned from the loadWasmModule function you've seen above. The function requires two parameters: an array of Recognizer objects that will be used for processing, and a boolean indicating whether multiple Recognizer objects are allowed to have their Results enter the Valid state.

To explain the boolean parameter further, we first need to understand how RecognizerRunner performs image processing.

When the processImage method is called, it processes the image with the first Recognizer in the chain. If that Recognizer's Result object changes its state to Valid and the boolean parameter is false, the recognition chain is stopped and the Promise returned by the method resolves immediately. If the parameter is true, the image is also processed with the other Recognizer objects in the chain, regardless of the state of their Result objects.

In other words, if the first Recognizer's Result does not become Valid, the RecognizerRunner uses the next Recognizer in the chain to process the image, and so on - either until the end of the chain (when no results become Valid, or always when the parameter is true) or until it finds a recognizer that successfully processes the image and changes its Result's state to Valid (when the parameter is false).

You cannot change the order of the Recognizer objects within the chain - regardless of the order in which you give Recognizer objects to RecognizerRunner (either to its creation function createRecognizerRunner or to its reconfigureRecognizers method), they are internally ordered in a way that ensures the best performance and accuracy possible.

Also, in order for the BlinkInput SDK to sort Recognizer objects in the recognition chain in the best way, you are not allowed to have multiple instances of Recognizer objects of the same type within the chain. Attempting to do so will crash your application.
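
To illustrate the boolean parameter, here is a sketch that chains two different recognizer types and allows both of their Results to reach the Valid state for the same image. The createVinRecognizer name is an assumption based on the recognizer list below; check the package's typings for the exact creation functions:

const barcodeRecognizer = await BlinkInputSDK.createBarcodeRecognizer( wasmSDK );
const vinRecognizer = await BlinkInputSDK.createVinRecognizer( wasmSDK ); // assumed creation function name

// `true`: the image is processed with every Recognizer in the chain, so more than one
// Result may become Valid. With `false`, processing stops at the first Valid Result.
const recognizerRunner = await BlinkInputSDK.createRecognizerRunner(
    wasmSDK,
    [ barcodeRecognizer, vinRecognizer ],
    true
);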

Performing recognition of video streams using VideoRecognizer

Using RecognizerRunner directly can be difficult when you want to perform recognition of a video or a live camera stream. Additionally, handling camera management in the web browser can sometimes be challenging. To make this much easier, we provide a VideoRecognizer class.

To perform live camera recognition using the VideoRecognizer, you will need an already configured RecognizerRunner object and a reference to an HTMLVideoElement to which the camera stream will be attached.

To perform the recognition, you should simply write:

const cameraFeed = <HTMLVideoElement> document.getElementById( "cameraFeed" );
try
{
    const videoRecognizer = await BlinkInputSDK.VideoRecognizer.createVideoRecognizerFromCameraStream(
        cameraFeed,
        recognizerRunner
    );
    const processResult = await videoRecognizer.recognize();
}
catch ( error )
{
    // Handle camera error
}

The recognize method of the VideoRecognizer starts the video capture and recognition loop from the camera and returns a Promise that resolves when either processImage of the given RecognizerRunner returns Valid for some frame or the timeout given to the recognize method is reached (if no timeout is given, a default one is used).
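
For example, to bound a scanning session, a timeout can be passed to recognize. This is a sketch assuming the method accepts a timeout in milliseconds; check the VideoRecognizer typings for the exact signature:

// Give up after 15 seconds if no Recognizer produces a Valid result.
const processResult = await videoRecognizer.recognize( 15000 );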

Recognizing a video file

If, instead of performing recognition of a live video stream, you want to perform recognition of a pre-recorded video, you should simply construct the VideoRecognizer using a different function, as shown below:

const videoRecognizer = await BlinkInputSDK.VideoRecognizer.createVideoRecognizerFromVideoPath(
    videoPath,
    htmlVideoElement,
    recognizerRunner
);
const processResult = await videoRecognizer.recognize();

Custom UX with VideoRecognizer

The procedure for using VideoRecognizer described above is quite simple, but has some limits. For example, you can only perform a one-shot scan with it. As soon as the promise returned by the recognize method resolves, the camera feed is paused and you need to start a new recognition.

However, if you need to perform multiple recognitions in a single camera session, without pausing the camera preview, you can use the startRecognition method, as described in the example below:

videoRecognizer.startRecognition
(
    ( recognitionState: BlinkInputSDK.RecognizerResultState ) =>
    {
        // Pause recognition before performing any async operation - this will make sure that
        // recognition will not continue while returning the control flow back from this function.
        videoRecognizer.pauseRecognition();

        // Obtain recognition results directly from recognizers associated with the RecognizerRunner
        // that is associated with the VideoRecognizer

        if ( shouldContinueScanning )
        {
            // Resume recognition
            videoRecognizer.resumeRecognition( true );
        }
        else
        {
            // Pause the camera feed
            videoRecognizer.pauseVideoFeed();
            // After this line, the VideoRecognizer is in the same state as if promise returned from
            // recognizer was resolved
        }
        // If videoRecognizer is not paused or terminated, after this line the recognition will
        // continue and recognition state will be retained
    }
);

Handling processing events with MetadataCallbacks

Processing events, also known as metadata callbacks, are purely intended to provide users with on-screen scanning guidance or to capture some debug information during development of your web app using the BlinkInput SDK.

Callbacks for all events are bundled into the MetadataCallbacks object. We suggest that you have a look at the available callbacks and events which you can handle in the source code of the MetadataCallbacks interface.

You can link the MetadataCallbacks interface with the RecognizerRunner either during creation or by invoking its setMetadataCallbacks method. Please note that both approaches need to pass information about the available callbacks to the native code. For efficiency reasons, this happens at the time setMetadataCallbacks is called, not every time a change occurs within the MetadataCallbacks object.

This means that if you, for example, add onQuadDetection to the MetadataCallbacks object after you have already called the setMetadataCallbacks method, onQuadDetection will not be registered with the native code and therefore will not be called.

Similarly, if you remove onQuadDetection from the MetadataCallbacks object after you have already called the setMetadataCallbacks method, your app will crash when our processing code attempts to invoke the now non-existent function. We deliberately do not perform a null check here for two reasons:

  • It is inefficient
  • Having no callback while still being registered with the native code is an illegal state of your program, and it should therefore crash

Remember that whenever you make some changes to the MetadataCallbacks object, you need to apply those changes to your RecognizerRunner by calling its setMetadataCallbacks method.
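For example, here is a sketch that registers an onQuadDetection callback and re-applies the callbacks object after changing it. The exact shape of the MetadataCallbacks object and the callback signature may differ; consult the interface's source code:

const metadataCallbacks = {
    onQuadDetection: ( quad: any ) =>
    {
        // Use the detected quad to draw on-screen scanning guidance
        console.log( "Detected quad", quad );
    }
};

await recognizerRunner.setMetadataCallbacks( metadataCallbacks );

// Later, if a callback changes, re-apply the whole object so the native code is updated:
metadataCallbacks.onQuadDetection = ( quad: any ) => { /* updated guidance logic */ };
await recognizerRunner.setMetadataCallbacks( metadataCallbacks );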

List of available recognizers

This section lists all Recognizer objects available within the BlinkInput SDK, their purpose and recommendations on how they should be used to achieve the best performance and user experience.

Success Frame Grabber Recognizer

The SuccessFrameGrabberRecognizer is a special Recognizer that wraps some other Recognizer and impersonates it while processing the image. However, when the Recognizer being impersonated changes its Result into Valid state, the SuccessFrameGrabberRecognizer captures the image and saves it into its own Result object.

Since the SuccessFrameGrabberRecognizer impersonates the Recognizer object it wraps, it is not possible to have both the concrete Recognizer object and the SuccessFrameGrabberRecognizer that wraps it in the same RecognizerRunner at the same time. Doing so has the same effect as having multiple instances of the same Recognizer in the same RecognizerRunner - it will crash your application. For more information, see the paragraph about RecognizerRunner.

This recognizer is best for use cases where you need to capture the exact image that was being processed by some other Recognizer object at the time its Result became Valid. When that happens, the SuccessFrameGrabber's Result will also become Valid and will contain the described image, available in its successFrame property.
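
A sketch of this setup, assuming the creation function is named createSuccessFrameGrabberRecognizer (verify the exact name in the package's typings):

const barcodeRecognizer = await BlinkInputSDK.createBarcodeRecognizer( wasmSDK );
const frameGrabber = await BlinkInputSDK.createSuccessFrameGrabberRecognizer( wasmSDK, barcodeRecognizer );

// Give only the wrapper to the RecognizerRunner - never both the wrapper and the wrapped recognizer.
const recognizerRunner = await BlinkInputSDK.createRecognizerRunner( wasmSDK, [ frameGrabber ], false );

// After a successful scan:
const barcodeResult = await barcodeRecognizer.getResult();  // extracted barcode data
const grabberResult = await frameGrabber.getResult();       // grabberResult.successFrame holds the captured image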

Barcode recognizer

The BarcodeRecognizer is a recognizer specialized for scanning various types of barcodes.

As you can see from its source code, you can enable multiple barcode symbologies within this recognizer; however, keep in mind that enabling more barcode symbologies affects scanning performance - the more barcode symbologies are enabled, the slower the overall recognition performance. Also, keep in mind that some simple barcode symbologies that lack proper redundancy, such as Code 39, can be recognized within more complex barcodes, especially 2D barcodes like PDF417.

SIM number recognizer

The SimNumberRecognizer is used for scanning barcodes containing SIM numbers. These barcodes are usually found on the packaging of the SIM cards.

VIN recognizer

The VinRecognizer is used for scanning barcodes containing Vehicle Identification Numbers from VIN plates on cars.

Recognizer settings

It's possible to enable various recognizer settings before the recognition process to modify the default behaviour of the recognizer.

A list of all recognizer options is available in the source code of each recognizer, while a list of all recognizers is available in the List of available recognizers section.

Recognizer settings should be enabled right after the recognizer has been created in the following manner:

// Create instance of recognizer
const barcodeRecognizer = await BlinkInputSDK.createBarcodeRecognizer( sdk );

// Retrieve current settings
const settings = await barcodeRecognizer.currentSettings();

// Update desired settings
settings[ " <recognizer_available_setting> " ] = true;

// Apply settings
await barcodeRecognizer.updateSettings( settings );

...

Technical requirements

This section provides information about the technical requirements of end-user devices to run BlinkInput.

Requirements:

  1. The browser is supported.
  2. The browser has access to a camera device.
  3. The device has enough computing power to extract data from an image.

Important: BlinkInput may not work correctly in WebView/WKWebView/SFSafariViewController. See this section.

Supported browsers

Minimal browser versions with support for all features required by BlinkInput.

|Chrome|Safari|Edge|Firefox|Opera|iOS Safari|Android Browser|Opera Mobile|Chrome for Android|Firefox for Android|
|------|------|----|-------|-----|----------|---------------|------------|------------------|-------------------|
| 57| 11| 79| 52| 44| 14| 81| 59| 86| 82|

Internet Explorer is not supported.

Source: caniuse

Camera devices

Keep in mind that a camera device is optional, since BlinkInput can extract data from still images.

The SDK cannot access the camera on iOS 14.2 and older when the end-user is using a web browser other than Safari. Apple does not allow camera access via the WebRTC specification for other browsers.

Notes & Guidelines

  • For optimal data extraction, use a high-quality camera device in a well-lit space and don't move the camera too much.
  • It's recommended to use camera devices with autofocus functionality for the fastest data extraction.
  • Camera devices on MacBook laptops don't work well in low ambient light, i.e. scanning will take longer than usual.

Device support

It's hard to pinpoint exact hardware specifications for successful data extraction, but based on our testing, mid-range and high-end smartphone devices released in 2018 and later should be able to extract data from an image in a relatively short time frame.

Notes & Guidelines

  • Browsers supported by BlinkInput can run on older devices, where extraction can take much longer to execute, e.g. around 30 or even 40 seconds.

SDK and WebView/WKWebView/SFSafariViewController

Android and WebView

WebView is not supported for a couple of reasons:

  • There is no guarantee that developers of mobile apps are using WebView with all necessary features enabled.
  • It's up to the developers of mobile apps to provide support for camera access from the WebView (which is an integral part of our experience), which requires additional work compared to the classic camera permission in mobile apps.

Also, it's possible for mobile app developers to use WebView alternatives like GeckoView and similar, which have their own constraints.

iOS, WKWebView and SFSafariViewController

As of now, it's not possible to access the camera from WKWebView and SFSafariViewController.

Camera access on iOS, i.e. via WebRTC, is only supported in the Safari browser. Other browsers like Chrome and Firefox won't work as expected.

Conclusion

There is a general technical constraint when using BlinkInput from an in-app browser - it's not possible to know for sure whether the SDK has camera access. That is, it's not possible to notify the user if the camera is not available during initialization.

However, the majority of widely used apps with in-app browsers, e.g. Facebook and Snapchat, use the standard WebView or embedded Safari with all the features. For example, WASM and modern JS are supported.

But the major problem remains: how to get an image from the camera? Currently, we can advise two approaches:

  1. Detect via the UA string whether an in-app browser is used and prompt the user to use the native browser.
  2. Detect via the UA string whether an in-app browser is used and enable classic image upload via an <input type="file" accept="image/*" capture="environment" /> element (see the sketch after this list).
    • Based on the operating system and software version, users will be able to select an image from the gallery or capture an image with the camera.
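
A rough sketch of the second approach (the user-agent tokens and the fallback wiring are illustrative, not an exhaustive or definitive detection method):

// Detect a few common in-app browsers via the UA string and fall back to a classic file input.
const inAppBrowser = /FBAN|FBAV|Instagram|Snapchat|Line\//i.test( navigator.userAgent );

if ( inAppBrowser )
{
    const input = document.createElement( "input" );
    input.type = "file";
    input.accept = "image/*";
    input.setAttribute( "capture", "environment" );
    input.addEventListener( "change", () =>
    {
        // Process the selected image as described in the "Recognizing still images" section
    } );
    document.body.appendChild( input );
}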

Troubleshooting

Integration problems

In case you're having issues integrating our SDK, the first thing you should do is revisit our integration instructions and make sure to closely follow each step.

If you have followed the instructions to the letter and you still have problems, please contact us at help.microblink.com.

When contacting us, please make sure you include the following information:

  • Log from the web console.
  • High resolution scan/photo of the document that you are trying to scan.
  • Information about the device and browser that you are using: we need the exact version of the browser and the operating system it runs on. If it runs on a mobile device, we also need the model of the device in question (camera management is specific to the browser, OS and device).
  • Please point out that you are reporting a problem related to the WebAssembly version of the BlinkInput SDK.

SDK problems

In case of problems with using the SDK, proceed as follows:

Licensing problems

If you are getting an "invalid license key" error or having other license-related problems (e.g. a feature that should be enabled is not), first check the browser console. All license-related problems are logged to the web console so that it's easier to determine what went wrong.

If you can't determine the license-related problem or simply do not understand the log information, contact us at help.microblink.com. When contacting us, please make sure you provide the following information:

  • Exact fully qualified domain name of your app, i.e. where the app is hosted.
  • License that is causing problems.
  • Please point out that you are reporting a problem related to the WebAssembly version of the BlinkInput SDK.
  • If unsure about the problem, you should also provide an excerpt from the web console containing the license error.

Other problems

If you are having problems with scanning certain items, undesired behaviour on specific device(s), crashes inside the BlinkInput SDK or anything not mentioned above, please contact our support with the same information listed at the start of this section.

FAQ and known issues

  • After switching from a trial to a production license, I get the error This entity is not allowed by currently active license! when I create a specific Recognizer object.

Each license key contains information about which features you are allowed to use and which you are not. This error indicates that your production license does not allow the use of the specific Recognizer object. You should contact support to check that the provided license is correct and that it really contains the features you've requested.

  • Why am I getting a No internet connection error if I'm on a private network?

BlinkInput versions 4.5.0 and above require an internet connection to work under our new License Management Program.

This means your web app has to be connected to the Internet in order for us to validate your trial license key. Scanning or data extraction of documents still happens offline, in the browser itself.

Once the validation is complete, you can continue using the SDK in an offline mode (or over a private network) until the next check.

We've added an error callback to the Microblink SDK to inform you about the status of your license key.

Additional info

The complete source code of the TypeScript wrapper can be found here.

For any other questions, feel free to contact us at help.microblink.com.