
@jamsch/react-native-voice

v3.1.7

A React Native voice recognition library for iOS and Android.

Installation

yarn add @react-native-voice/voice

# or

npm i @react-native-voice/voice --save

Link the iOS package

npx pod-install

Linking

Manually or automatically link the NativeModule

react-native link @react-native-voice/voice

Manually Link Android

  • In android/settings.gradle
...
include ':@react-native-voice_voice', ':app'
project(':@react-native-voice_voice').projectDir = new File(rootProject.projectDir, '../node_modules/@react-native-voice/voice/android')
  • In android/app/build.gradle
...
dependencies {
    ...
    implementation project(':@react-native-voice_voice')
}
  • In MainApplication.java
import android.app.Application;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactPackage;
...
import com.wenkesj.voice.VoicePackage; // <------ Add this!
...

public class MainApplication extends Application implements ReactApplication {
...
    @Override
    protected List<ReactPackage> getPackages() {
      return Arrays.<ReactPackage>asList(
        new MainReactPackage(),
        new VoicePackage() // <------ Add this!
        );
    }
}

Manually Link iOS

  • Drag the Voice.xcodeproj from the @react-native-voice/voice/ios folder into the Libraries group in Xcode in your project.

  • Click on your main project file (the one that represents the .xcodeproj), select Build Phases, and drag the static library, libVoice.a, from the Libraries/Voice.xcodeproj/Products folder to Link Binary With Libraries.

Example

import Voice from '@react-native-voice/voice';
import React, {Component} from 'react';
import {Text, TouchableOpacity, View} from 'react-native';

class VoiceTest extends Component {
  constructor(props) {
    super(props);
    Voice.onSpeechStart = this.onSpeechStartHandler.bind(this);
    Voice.onSpeechEnd = this.onSpeechEndHandler.bind(this);
    Voice.onSpeechPartialResults = this.onSpeechPartialResultsHandler.bind(this);
    Voice.onSpeechResults = this.onSpeechResultsHandler.bind(this);
    Voice.onSpeechError = this.onSpeechErrorHandler.bind(this);
  }

  componentWillUnmount() {
    // Remove all listeners
    Voice.removeAllListeners();
  }

  onSpeechStartHandler() {
    console.log("Speech started");
    // Update state to notify user that speech recognition has started
  }

  onSpeechPartialResultsHandler(e) {
    // e = { value: string[] }
    // Loop through e.value for speech transcription results
    console.log("Partial results", e);
  }

  onSpeechResultsHandler(e) {
    // e = { value: string[] }
    // Loop through e.value for speech transcription results
    console.log("Speech results", e);
  }

  onSpeechEndHandler(e) {
    // e = { error?: boolean }
    console.log("Speech ended", e);
  }

  onSpeechErrorHandler(e) {
    // e = { code?: string, message?: string }
    switch (e.code) { ... }
  }

  onStartButtonPress = async () => {
    try {
      await Voice.start("en_US");
    } catch (exception) {
      // exception = Error | { code: string, message?: string }
      this.onSpeechErrorHandler(exception);
    }
  };

  render() {
    return (
      <TouchableOpacity onPress={this.onStartButtonPress}>
        <View>
          <Text>Start</Text>
        </View>
      </TouchableOpacity>
    );
  }
}

All methods now return a new Promise for async/await compatibility.
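As a sketch of the Promise-based flow, the helper below starts or stops recognition with async/await. The helper name is hypothetical and not part of the library; `voice` stands for the default export of '@react-native-voice/voice', passed in as a parameter so the logic reads (and can be exercised) standalone.

```javascript
// Hypothetical helper: toggle recognition with the Promise-based API.
// `voice` is the default export of '@react-native-voice/voice'.
async function toggleRecognition(voice, isRecognizing, locale = 'en-US') {
  if (isRecognizing) {
    await voice.stop(); // resolves once listening has stopped
    return false;
  }
  if (!(await voice.isAvailable())) {
    // No speech recognition service on this device.
    throw new Error('not_available');
  }
  await voice.start(locale); // resolves when recognition has started
  return true;
}
```

In a component you would call it as `this.setState({isRecognizing: await toggleRecognition(Voice, this.state.isRecognizing)})`, wrapped in a try/catch as shown in the error-handling section below.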

| Method Name                          | Description                                                                                                                                                          | Platform     |
| ------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| Voice.isAvailable()                  | Checks whether a speech recognition service is available on the system.                                                                                              | Android, iOS |
| Voice.start(locale)                  | Starts listening for speech for a specific locale. Resolves to null if no error occurs.                                                                              | Android, iOS |
| Voice.stop()                         | Stops listening for speech. Resolves to null if no error occurs.                                                                                                     | Android, iOS |
| Voice.cancel()                       | Cancels the speech recognition. Resolves to null if no error occurs.                                                                                                 | Android, iOS |
| Voice.destroy()                      | Destroys the current SpeechRecognizer instance. Resolves to null if no error occurs.                                                                                 | Android, iOS |
| Voice.removeAllListeners()           | Cleans/nullifies overridden Voice static methods.                                                                                                                    | Android, iOS |
| Voice.isRecognizing()                | Returns whether the SpeechRecognizer is currently recognizing.                                                                                                       | Android, iOS |
| Voice.getSpeechRecognitionServices() | Returns a list of the speech recognition engines available on the device. (Example: ['com.google.android.googlequicksearchbox'] if Google is the only one available.) | Android      |

Events

| Event Name                          | Description                                            | Event                                           | Platform     |
| ----------------------------------- | ------------------------------------------------------ | ----------------------------------------------- | ------------ |
| Voice.onSpeechStart(event)          | Invoked when .start() is called without error.         | { error: false }                                | Android, iOS |
| Voice.onSpeechRecognized(event)     | Invoked when speech is recognized.                     | { error: false }                                | Android, iOS |
| Voice.onSpeechEnd(event)            | Invoked when the SpeechRecognizer stops recognition.   | { error: false }                                | Android, iOS |
| Voice.onSpeechError(event)          | Invoked when an error occurs.                          | { error: Description of error as string }       | Android, iOS |
| Voice.onSpeechResults(event)        | Invoked when the SpeechRecognizer finishes recognizing. | { value: [..., 'Speech recognized'] }           | Android, iOS |
| Voice.onSpeechPartialResults(event) | Invoked when any results are computed.                 | { value: [..., 'Partial speech recognized'] }   | Android, iOS |
| Voice.onSpeechVolumeChanged(event)  | Invoked when the recognized pitch changes.             | { value: pitch in dB }                          | Android      |

Android

While the included VoiceTest app works without explicit permission checks and requests, it may be necessary to add a permission request for RECORD_AUDIO in some configurations. Since Android 6.0 (M), users must grant permissions at runtime rather than at app installation. By default, calling the startSpeech method will show the RECORD_AUDIO permission popup to the user. This can be disabled by passing REQUEST_PERMISSIONS_AUTO: true in the options argument.
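For configurations that need an explicit runtime request, the sketch below uses React Native's PermissionsAndroid API. The helper name is hypothetical; `permissions` stands for the PermissionsAndroid module imported from 'react-native', injected as a parameter so the flow is self-contained here.

```javascript
// Hypothetical helper: request RECORD_AUDIO at runtime before starting
// recognition. `permissions` is PermissionsAndroid from 'react-native'.
async function ensureMicrophonePermission(permissions) {
  const result = await permissions.request(
    permissions.PERMISSIONS.RECORD_AUDIO,
    {
      title: 'Microphone permission',
      message: 'Speech recognition needs access to your microphone.',
      buttonPositive: 'OK',
    },
  );
  return result === permissions.RESULTS.GRANTED;
}
```

In an app you would call `await ensureMicrophonePermission(PermissionsAndroid)` (guarded by a `Platform.OS === 'android'` check) and only call Voice.start() when it resolves to true.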

If you're running an ejected Expo/ExpoKit app, you may run into permission issues on Android and get the following error: host.exp.exponent.MainActivity cannot be cast to com.facebook.react.ReactActivity startSpeech. This can be resolved by prompting for permission using the expo-permissions package before starting recognition.

import { Permissions } from "expo";

async componentDidMount() {
  const { status } = await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  if (status !== "granted") {
    // Permission not granted. Hide the start button, since pressing it would fail.
    this.setState({ showRecordButton: false });
  } else {
    this.setState({ showRecordButton: true });
  }
}

Notes on Android

Even after all the permissions are correct on Android, there is one last thing to check before this library will work: make sure the device has a Google speech recognition engine, such as com.google.android.googlequicksearchbox, by calling Voice.getSpeechRecognitionServices(). Since Android phones can be configured in many ways, even a device that has the googlequicksearchbox engine may be set up to use another service. On most Android phones you can check which service is used as the voice assistive app with the following steps:

Settings > App Management > Default App > Assistive App and Voice Input > Assistive App

The flow above can vary depending on the Android model and manufacturer. On Huawei phones, there is a chance that the device cannot install Google services.

How can I get com.google.android.googlequicksearchbox in the device?

Please ask users to install Google Search App.
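The engine check described above can be sketched as a small guard before showing the record button. The helper name is hypothetical; `voice` stands for the default export of '@react-native-voice/voice', injected so the check is readable in isolation.

```javascript
// Hypothetical helper: check that Google's recognition engine is available
// before enabling recording. `voice` is the '@react-native-voice/voice' export.
// Voice.getSpeechRecognitionServices() is Android-only.
async function hasGoogleRecognizer(voice) {
  const services = (await voice.getSpeechRecognitionServices()) || [];
  return services.includes('com.google.android.googlequicksearchbox');
}
```

When it returns false, you could hide the record button and ask the user to install the Google Search App instead.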

iOS

On iOS you need to include the NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription keys inside Info.plist. See the included VoiceTest for how to handle these cases.

<dict>
  ...
  <key>NSMicrophoneUsageDescription</key>
  <string>Description of why you require the use of the microphone</string>
  <key>NSSpeechRecognitionUsageDescription</key>
  <string>Description of why you require the use of speech recognition</string>
  ...
</dict>

Please see the documentation provided by React Native for this: PermissionsAndroid

Handling errors

This applies to Voice.onSpeechError(e) and when await Voice.start() throws an exception.

try {
  await Voice.start();
} catch (e) {
  // Note: on Android this will *likely* return an Error object.
  // e: Error | { code: string, message?: string }
  // switch (e.code) { ... }
}

| Code             | Description                                                   | Platform     |
| ---------------- | ------------------------------------------------------------- | ------------ |
| permissions      | User denied microphone/speech recognition permissions         | Android, iOS |
| recognizer_busy  | Speech recognition has already started                        | Android, iOS |
| not_available    | Speech recognition is not available on the device             | Android, iOS |
| audio            | Audio engine / audio session error                            | Android, iOS |
| network          | Network error                                                 | Android      |
| network_timeout  | Network timeout error                                         | Android      |
| speech_timeout   | Speech recognition timeout                                    | Android      |
| no_match         | No recognition matches                                        | Android      |
| server           | Server error                                                  | Android      |
| restricted       | Speech recognition is restricted                              | iOS          |
| not_authorized   | Speech recognition is not authorized                          | iOS          |
| not_ready        | Speech recognition is not ready to start                      | iOS          |
| recognition_init | Speech recognition initialization failed                      | iOS          |
| start_recording  | [inputNode installTapOnBus:0...] call failed                  | iOS          |
| input            | Audio engine has no input node                                | iOS          |
| recognition_fail | General failure while using recognition. Has a "message" prop | iOS          |
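A common pattern is to map these codes to user-facing messages in one place. The sketch below covers a few of the codes; the helper name and the messages are illustrative, not part of the library.

```javascript
// Hypothetical helper: turn a speech error code into a user-facing message.
// Only a subset of codes is handled; the rest fall through to a generic message.
function describeSpeechError(code) {
  switch (code) {
    case 'permissions':
      return 'Microphone or speech recognition permission was denied.';
    case 'recognizer_busy':
      return 'Speech recognition has already started.';
    case 'not_available':
      return 'Speech recognition is not available on this device.';
    case 'no_match':
      return 'No speech was recognized. Please try again.';
    default:
      return `Speech recognition failed${code ? ` (${code})` : ''}.`;
  }
}
```

You could call this from both the onSpeechError handler and the catch block around Voice.start(), since both surface the same codes.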

Contributors

  • @asafron
  • @BrendanFDMoore
  • @brudny
  • @chitezh
  • @ifsnow
  • @jamsch
  • @misino
  • @Noitidart
  • @ohtangza & @hayanmind
  • @rudiedev6
  • @tdonia
  • @wenkesj