@picovoice/rhino-angular
v3.0.4

Angular service for Rhino Web SDK

Rhino Binding for Angular

Rhino Speech-to-Intent engine

Made in Vancouver, Canada by Picovoice

Rhino is Picovoice's Speech-to-Intent engine. It directly infers intent from spoken commands within a given context of interest, in real-time. For example, given a spoken command:

Can I have a small double-shot espresso?

Rhino infers that the user would like to order a drink and emits the following inference result:

{
  "isUnderstood": "true",
  "intent": "orderBeverage",
  "slots": {
    "beverage": "espresso",
    "size": "small",
    "numberOfShots": "2"
  }
}

Rhino is:

  • using deep neural networks trained in real-world environments.
  • compact and computationally-efficient, making it perfect for IoT.
  • self-service. Developers and designers can train custom models using Picovoice Console.

Compatibility

  • Chrome / Edge
  • Firefox
  • Safari

Restrictions

IndexedDB and WebWorkers are required to use Rhino Angular. Browsers without support (e.g. Firefox in Private Browsing mode) should use the Rhino Web binding's main thread method instead.
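
A minimal sketch (not part of the SDK) of checking for the required APIs with plain browser feature detection before initializing Rhino:

// Feature check using standard browser globals; if either API is missing,
// fall back to the Rhino Web binding's main thread method.
const rhinoSupported =
  typeof indexedDB !== "undefined" && typeof Worker !== "undefined";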

AccessKey

Rhino requires a valid Picovoice AccessKey at initialization. The AccessKey acts as your credentials when using Rhino SDKs. You can get your AccessKey for free; make sure to keep it secret. Sign up or log in to Picovoice Console to get your AccessKey.
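
As a sketch (not an SDK requirement), the AccessKey can be kept out of source control by reading it from an Angular environment file or any other configuration mechanism:

// src/environments/environment.ts (hypothetical location)
export const environment = {
  production: false,
  // Replace with the AccessKey obtained from Picovoice Console; avoid
  // committing real keys to version control.
  picovoiceAccessKey: "${ACCESS_KEY}",
};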

Installation

Using yarn:

yarn add @picovoice/rhino-angular @picovoice/web-voice-processor

or using npm:

npm install --save @picovoice/rhino-angular @picovoice/web-voice-processor
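
Depending on how the application is structured, RhinoService may need to be registered as a provider on the component that uses it; a minimal sketch (component name and template are illustrative):

import { Component } from "@angular/core";
import { RhinoService } from "@picovoice/rhino-angular";

@Component({
  selector: "app-voice-widget",    // hypothetical component
  template: "<p>Voice widget</p>",
  providers: [RhinoService],       // provide a RhinoService instance to this component
})
export class VoiceWidgetComponent {}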

Usage

There are two methods to initialize Rhino:

Public Directory

NOTE: Due to modern browsers' limitations on file URLs, this method does not work unless the app is hosted on a server.

This method fetches the model file from the public directory and feeds it to Rhino. Copy the model file into the public directory:

cp ${RHINO_MODEL_FILE} ${PATH_TO_PUBLIC_DIRECTORY}

The same procedure can be used for the Rhino context (.rhn) files.
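
As a hypothetical example (file names and the assets directory are illustrative), if the files were copied into the app's served assets directory, the corresponding paths would be passed via publicPath:

// Paths are resolved relative to the served root of the application.
const rhinoModel = {
  publicPath: "assets/rhino_params.pv",    // copied model file
};
const rhinoContext = {
  publicPath: "assets/coffee_maker.rhn",   // copied context file
};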

Base64

NOTE: This method works without hosting a server, but it increases the size of the model file by roughly 33%.

This method uses a base64 string of the model file and feeds it to Rhino. Use the built-in script pvbase64 to base64 your model file:

npx pvbase64 -i ${RHINO_MODEL_FILE} -o ${OUTPUT_DIRECTORY}/${MODEL_NAME}.js

The output will be a js file which you can import into any file of your project. For detailed information about pvbase64, run:

npx pvbase64 -h

The same procedure can be used for the Rhino context (.rhn) files.
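
A sketch of wiring the generated file into the project (the variable name and export form depend on the -n argument and the generated output; inspect the generated .js file and adjust the import to match):

// Hypothetical import of the pvbase64 output.
import rhinoModelBase64 from "./rhino_model_base64";

const rhinoModel = {
  base64: rhinoModelBase64,
};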

Rhino Model

Rhino saves and caches your model (.pv) and context (.rhn) files in IndexedDB so they can be used by WebAssembly. Use a different customWritePath variable to hold multiple models, and set forceWrite to true to force an overwrite of the cached file. If the model (.pv) or context (.rhn) file changes, increment version to force the cached file to be updated. Either base64 or publicPath must be set to instantiate Rhino. If both are set, Rhino will use the base64 parameter.

// Context (.rhn)
const rhinoContext = {
  publicPath: "${CONTEXT_RELATIVE_PATH}",
  // or
  base64: "${CONTEXT_BASE64_STRING}",
  // Optionals
  customWritePath: 'custom_context',
  forceWrite: true,
  version: 1,
  sensitivity: 0.5,
}
// Model (.pv)
const rhinoModel = {
  publicPath: "${MODEL_RELATIVE_PATH}",
  // or
  base64: "${MODEL_BASE64_STRING}",
  // Optionals
  customWritePath: 'custom_model',
  forceWrite: true,
  version: 1,
}

Additional engine options are provided via the options parameter. Use endpointDurationSec and requireEndpoint to control the engine's endpointing behavior. An endpoint is a chunk of silence at the end of an utterance that marks the end of a spoken command.

// Optional. These are the default values
const options = {
  endpointDurationSec: 1.0,
  requireEndpoint: true,
}

Initialize Rhino

First subscribe to the events from RhinoService. There are five subscription events:

  • inference$: Returns the inference.
  • contextInfo$: Returns the context info once Rhino has loaded successfully.
  • isLoaded$: Returns true if Rhino has loaded successfully.
  • isListening$: Returns true if WebVoiceProcessor has started successfully and Rhino is listening for an utterance.
  • error$: Returns any errors that occur.

import { Subscription } from "rxjs"
import { RhinoService } from "@picovoice/rhino-angular"
...
  constructor(private rhinoService: RhinoService) {
    this.contextInfoSubscription = rhinoService.contextInfo$.subscribe(
      contextInfo => {
        console.log(contextInfo);
      });
    this.inferenceSubscription = rhinoService.inference$.subscribe(
      inference => {
        console.log(inference);
      });
    this.isLoadedSubscription = rhinoService.isLoaded$.subscribe(
      isLoaded => {
        console.log(isLoaded);
      });
    this.isListeningSubscription = rhinoService.isListening$.subscribe(
      isListening => {
        console.log(isListening);
      });
    this.errorSubscription = rhinoService.error$.subscribe(
      error => {
        console.log(error);
      });
  }
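
For reference, the class fields assigned in the constructor above (elided by the "..." in the snippet) would be declared on the component roughly as follows:

// Subscription handles kept so they can be unsubscribed in ngOnDestroy
// (see the Cleanup section below).
private contextInfoSubscription: Subscription;
private inferenceSubscription: Subscription;
private isLoadedSubscription: Subscription;
private isListeningSubscription: Subscription;
private errorSubscription: Subscription;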

After setting up the subscriber events, initialize Rhino:

async ngOnInit() {
  await this.rhinoService.init(
    ${ACCESS_KEY},
    rhinoContext,
    rhinoModel,
  );
}

Process Audio Frames

The Rhino Angular binding uses WebVoiceProcessor to record audio. To start detecting an inference, run the process function:

await this.rhinoService.process();

The process function initializes WebVoiceProcessor. Rhino will then listen to and process frames of microphone audio until it reaches a conclusion, then return the result via the inference$ event. Once a conclusion is reached, Rhino enters a paused state. From the paused state, call process again to detect another inference.
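
As an illustrative sketch (handleInference is a hypothetical application handler), an app that should keep listening for follow-up commands can call process again from its inference subscription:

this.inferenceSubscription = this.rhinoService.inference$.subscribe(
  async inference => {
    handleInference(inference);          // hypothetical app-specific handler
    await this.rhinoService.process();   // resume listening from the paused state
  });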

Cleanup

When you are done with Rhino, call release. This cleans up all resources used by Rhino and WebVoiceProcessor.

ngOnDestroy() {
  this.contextInfoSubscription.unsubscribe();
  this.inferenceSubscription.unsubscribe();
  this.isLoadedSubscription.unsubscribe();
  this.isListeningSubscription.unsubscribe();
  this.errorSubscription.unsubscribe();
  this.rhinoService.release();
}

If any arguments require changes, call release then init again to initialize Rhino with the new settings.
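
For example (a sketch; switchContext and newContext are illustrative names), switching to a different context at runtime might look like this:

async switchContext(newContext) {
  // Tear down the current engine, then initialize it with the new context.
  await this.rhinoService.release();
  await this.rhinoService.init(
    ${ACCESS_KEY},
    newContext,
    rhinoModel,
  );
}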

Contexts

Create custom contexts using the Picovoice Console. Train and download a Rhino context file (.rhn) for the Web (WASM) platform. This context file can be used directly with publicPath, but if base64 is preferable, convert the .rhn file to a base64 JavaScript variable using the built-in pvbase64 script:

npx pvbase64 -i ${CONTEXT_FILE}.rhn -o ${CONTEXT_BASE64}.js -n ${CONTEXT_BASE64_VAR_NAME}

Similar to the model file (.pv), context files (.rhn) are saved in IndexedDB to be used by WebAssembly. Either base64 or publicPath must be set for the context to instantiate Rhino. If both are set, Rhino will use the base64 parameter.

const contextModel = {
  publicPath: "${CONTEXT_RELATIVE_PATH}",
  // or
  base64: "${CONTEXT_BASE64_STRING}",
}

Switching Languages

To make inferences in a different language, you need to use the corresponding model file (.pv). The model files for all supported languages are available here.
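
As a hypothetical example (the file name is illustrative), a German model file copied to the public directory would be referenced the same way as the default English one:

const rhinoModel = {
  publicPath: "assets/rhino_params_de.pv",   // German model file
};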

Demo

For example usage, refer to our Angular demo application.