Porcupine Binding for React Native
Porcupine
Porcupine is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting-edge voice AI.
Porcupine is:
- private and offline
- accurate
- resource efficient (runs even on microcontrollers)
- data efficient (wake words can be generated by simply typing them, with no need for thousands of hours of bespoke audio training data or manual effort)
- scalable to many simultaneous wake-words / always-on voice commands
- cross-platform
To learn more about Porcupine, see the product, documentation, and GitHub pages.
Compatibility
This binding is for running Porcupine on React Native 0.62.2+ on the following platforms:
- Android 5.0+ (API 21+)
- iOS 13.0+
Installation
To start, make sure you have installed yarn and CocoaPods. Then add these two native modules to your React Native project:
yarn add @picovoice/react-native-voice-processor
yarn add @picovoice/porcupine-react-native
or
npm i @picovoice/react-native-voice-processor --save
npm i @picovoice/porcupine-react-native --save
Link the iOS package
cd ios && pod install && cd ..
NOTE: Due to a limitation in React Native CLI auto-linking, these two native modules cannot be included as transitive dependencies. If you are creating a module that depends on porcupine-react-native and/or react-native-voice-processor, you will have to list these as peer dependencies and require developers to install them alongside your module.
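For example, a module that builds on this binding might declare something like this in its package.json (version ranges illustrative):

{
  "peerDependencies": {
    "@picovoice/react-native-voice-processor": "^1.0.0",
    "@picovoice/porcupine-react-native": "^3.0.0"
  }
}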
AccessKey
Porcupine requires a valid Picovoice AccessKey at initialization. AccessKey acts as your credentials when using Porcupine SDKs. You can get your AccessKey for free. Make sure to keep your AccessKey secret.
Sign up or log in to Picovoice Console to get your AccessKey.
Permissions
To enable recording with the hardware's microphone, you must first ensure that you have enabled the proper permissions on both iOS and Android.
On iOS, open your Info.plist and add the following entry:
<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>
On Android, open your AndroidManifest.xml and add the following lines (the INTERNET permission is used to validate your AccessKey):
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
Finally, in your app JS code, be sure to check for user permission consent before proceeding with audio capture:
import { PermissionsAndroid, Platform } from 'react-native';

let recordAudioRequest;
if (Platform.OS === 'android') {
  // For Android, we need to explicitly ask for the record-audio permission
  recordAudioRequest = this._requestRecordAudioPermission();
} else {
  // iOS automatically asks for permission when the microphone is first used
  recordAudioRequest = new Promise(function (resolve, _) {
    resolve(true);
  });
}

recordAudioRequest.then((hasPermission) => {
  if (hasPermission) {
    // Code that uses Porcupine
  }
});

async _requestRecordAudioPermission() {
  const granted = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
    {
      title: 'Microphone Permission',
      message: '[Permission explanation]',
      buttonNeutral: 'Ask Me Later',
      buttonNegative: 'Cancel',
      buttonPositive: 'OK',
    }
  );
  return granted === PermissionsAndroid.RESULTS.GRANTED;
}
Usage
The module provides you with two levels of API to choose from depending on your needs.
High-Level API
PorcupineManager provides a high-level API that takes care of audio recording. This class is the quickest way to get started.
Using the static constructor PorcupineManager.fromBuiltInKeywords will create an instance of PorcupineManager using one or more of the built-in keywords.
const accessKey = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)

async createPorcupineManager() {
  try {
    this._porcupineManager = await PorcupineManager.fromBuiltInKeywords(
      accessKey,
      [BuiltInKeywords.PICOVOICE, BuiltInKeywords.PORCUPINE],
      detectionCallback,
      processErrorCallback);
  } catch (err) {
    // handle error
  }
}
NOTE: The constructor is asynchronous, so it should be awaited inside an async function with a try/catch.
The detectionCallback parameter is a function that you want to execute when Porcupine has detected one of the keywords. The function should accept a single integer, keywordIndex, which specifies which wake word has been detected.
detectionCallback(keywordIndex) {
  if (keywordIndex === 0) {
    // picovoice detected
  } else if (keywordIndex === 1) {
    // porcupine detected
  }
}
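Since keywordIndex follows the order of the keyword array passed at creation, one illustrative pattern is to keep that array around and map the index back to the keyword that fired:

// Illustrative: reuse the same array that was passed to fromBuiltInKeywords
const keywords = [BuiltInKeywords.PICOVOICE, BuiltInKeywords.PORCUPINE];

const detectionCallback = (keywordIndex) => {
  console.log(`Detected '${keywords[keywordIndex]}'`);
};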
The processErrorCallback parameter is a function that you want to execute when Porcupine has encountered an error while processing audio. The function should accept a single argument: the error that was thrown. This callback is optional.
processErrorCallback(error) {
  console.error(error);
}
Available built-in keywords are stored in the BuiltInKeywords enum.
To create an instance of PorcupineManager that detects custom keywords, you can use the PorcupineManager.fromKeywordPaths static constructor and provide the paths to the .ppn file(s).
const accessKey = "${ACCESS_KEY}";

this._porcupineManager = await PorcupineManager.fromKeywordPaths(
  accessKey,
  ["/path/to/keyword.ppn"],
  detectionCallback);
To add a custom wake word to your React Native application you'll need to add the .ppn file to your platform projects. Android models must be added to ./android/app/src/main/assets/, while iOS models can be added anywhere under ./ios, but must be included as a bundled resource in your iOS project (i.e. added via Xcode). The paths used as initialization arguments are relative to these device-specific directories.
In addition to custom keywords, you can override the default Porcupine model file and/or keyword sensitivities. A sensitivity is a floating-point value between 0 and 1; a higher sensitivity reduces the miss rate at the cost of more false alarms. These optional parameters can be passed in like so:
const accessKey = "${ACCESS_KEY}";

this._porcupineManager = await PorcupineManager.fromKeywordPaths(
  accessKey,
  ["/path/to/keyword/one.ppn", "/path/to/keyword/two.ppn"],
  detectionCallback,
  processErrorCallback,
  'path/to/model.pv',
  [0.25, 0.6]);
Alternatively, if the model files are deployed to the device with a different method, the absolute paths to the files on device can be used.
Once you have instantiated a PorcupineManager, you can start audio capture and wake word detection by calling:
let didStart = await this._porcupineManager.start();
And then stop it by calling:
let didStop = await this._porcupineManager.stop();
Once the app is done using PorcupineManager, be sure to explicitly release the resources allocated to Porcupine:
this._porcupineManager.delete();
With PorcupineManager, the @picovoice/react-native-voice-processor module handles audio capture and automatically passes it to the wake word engine.
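Putting these calls together, here is a minimal sketch of managing the PorcupineManager lifecycle inside a class component. The component itself is illustrative; only the PorcupineManager calls come from this binding's API:

import { Component } from 'react';
import { PorcupineManager, BuiltInKeywords } from '@picovoice/porcupine-react-native';

// Hypothetical component: create the manager on mount, tear it down on unmount
class WakeWordScreen extends Component {
  async componentDidMount() {
    try {
      this._porcupineManager = await PorcupineManager.fromBuiltInKeywords(
        "${ACCESS_KEY}",
        [BuiltInKeywords.PORCUPINE],
        (keywordIndex) => { /* wake word detected */ });
      await this._porcupineManager.start();
    } catch (err) {
      // handle error
    }
  }

  componentWillUnmount() {
    // Stop audio capture, then release native resources
    this._porcupineManager?.stop().then(() => this._porcupineManager.delete());
  }

  render() {
    return null; // UI omitted
  }
}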
Low-Level API
Porcupine provides low-level access to the wake word engine for those who want to incorporate wake word detection into an already existing audio processing pipeline.
Porcupine also has fromBuiltInKeywords and fromKeywordPaths static constructors.
const accessKey = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)

async createPorcupine() {
  try {
    this._porcupine = await Porcupine.fromBuiltInKeywords(accessKey, [BuiltInKeywords.PICOVOICE]);
  } catch (err) {
    // handle error
  }
}
In this case you don't pass in a detection callback, since you will be passing audio frames directly to the process function:
let buffer = getAudioFrame();

try {
  let keywordIndex = await this._porcupine.process(buffer);
  if (keywordIndex >= 0) {
    // detection made!
  }
} catch (e) {
  // handle error
}
For process to work correctly, the audio data must be in the audio format required by Picovoice. The required audio format is found by calling .sampleRate to get the required sample rate and .frameLength to get the required frame size. Audio must be single-channel and 16-bit linearly-encoded.
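As an illustrative check (getAudioFrame() is the same hypothetical audio source as above), you can read these properties and validate frames before handing them to process:

// Frames must hold exactly `frameLength` 16-bit samples recorded at `sampleRate` Hz
let frame = getAudioFrame();
if (frame.length !== this._porcupine.frameLength) {
  throw new Error(`Expected frames of ${this._porcupine.frameLength} samples at ${this._porcupine.sampleRate} Hz`);
}
let keywordIndex = await this._porcupine.process(frame);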
Finally, once you no longer need the wake word engine, be sure to explicitly release the resources allocated to Porcupine:
this._porcupine.delete();
Custom Wake Word Integration
To add a custom wake word to your React Native application you'll need to add the .ppn file to your platform projects.
Adding Android Models
Android custom models and keywords must be added to ./android/app/src/main/assets/.
Adding iOS Models
iOS models can be added anywhere under ./ios, but must be included as a bundled resource.
The easiest way to include a bundled resource in the iOS project is to:
- Open Xcode.
- Either:
  - Drag and drop the model/keyword file into the navigation tab.
  - Right-click on the navigation tab, and click Add Files To ....
This will bundle your models together when the app is built.
Using Custom Wake Words
const accessKey = "${ACCESS_KEY}";

let keyword_paths: string[];
if (Platform.OS === 'android') {
  keyword_paths = ['keyword1_android.ppn', 'keyword2_android.ppn'];
} else if (Platform.OS === 'ios') {
  keyword_paths = ['keyword1_ios.ppn', 'keyword2_ios.ppn'];
} else {
  // handle unsupported platforms
}

try {
  this._porcupine = await Porcupine.fromKeywordPaths(
    accessKey,
    keyword_paths,
    'model.pv',
    [0.5, 0.6]
  );
} catch (err) {
  // handle error
}
Alternatively, if the model files are deployed to the device with a different method, the absolute paths to the files on device can be used.
Non-English Wake Words
In order to detect non-English wake words you need to use the corresponding model file (.pv). The model files for all supported languages are available here.
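For example, a sketch using hypothetical German keyword and model file names, bundled as described above:

const accessKey = "${ACCESS_KEY}";

// 'heuschrecke_android.ppn' and 'porcupine_params_de.pv' are hypothetical file
// names for a German keyword and the German model file
this._porcupine = await Porcupine.fromKeywordPaths(
  accessKey,
  ['heuschrecke_android.ppn'],
  'porcupine_params_de.pv');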
Demo App
Check out the Porcupine React Native demo to see what it looks like to use Porcupine in a cross-platform app!