rn-audio
React-native module for recording and playing audio files on iOS and Android, using platform-supported formats and options (as well as .wav support). This module can additionally play audio files from a URL.
Compatibility:
- React Native >= 0.61
- iOS: >= 11.0
- Android SDK: >= 21
Installation:
In your project directory, type:
yarn add 'rn-audio@https://github.com/kleydon/rn-audio'
[iOS only]:
npx pod-install
Post-installation:
iOS
You need to add a usage description to Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) requires your permission to use the microphone.</string>
NOTE: The Apple app-store review process requires that permission messages are clear and not misleading.
Also, add a Swift bridging header (if you don't have one already), for Swift compatibility; see here and here.
Android
Add the following permissions to your application's AndroidManifest.xml:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Usage:
import {
  Audio,
  RecordingOptions,
  RecUpdateMetadata,
  RecStopMetadata,
  PlayUpdateMetadata,
  PlayStopMetadata
} from 'rn-audio'

const audio = new Audio() // An Audio instance is assumed throughout the examples below
// Recording
const recordingOptions:RecordingOptions = {
audioFileNameOrPath: 'recording.wav',
recMeteringEnabled: true,
maxRecDurationSec: 10.0,
...
}
const recUpdateCallback = async (e: RecUpdateMetadata) => {
  console.log('recUpdate: ', e) // dB level, progress, etc.
}
const recStopCallback = async (e: RecStopMetadata): Promise<void> => {
  console.log('recStop:', e) // Did recording stop due to user request? An error? Max duration exceeded?
}
// NOTE! SubscriptionDuration impacts responsiveness, particularly for seekToPlayer(), below.
// Choose a value that balances UI responsiveness with update frequency requirements
audio.setSubscriptionDuration(0.25) // Rate of callbacks that fire during recording and playback.
// Defaults to 0.5
audio.startRecorder({ recUpdateCallback, recStopCallback, recordingOptions })
...
audio.pauseRecorder()
...
audio.resumeRecorder()
...
audio.stopRecorder()
// Playback
const playUpdateCallback = async (e: PlayUpdateMetadata) => {
console.log('playUpdate: ', e)
//progress, muted, etc.
}
const playStopCallback = async (e: PlayStopMetadata):Promise<void> => {
console.log('playStop:', e)
//Did playback stop due to completion? An error? User request?
}
...
audio.startPlayer({ fileNameOrPathOrURL, playUpdateCallback, playStopCallback, playVolume: 1.0 })
...
audio.pausePlayer()
...
audio.resumePlayer()
...
audio.stopPlayer()
...
audio.seekToPlayer(time)
...
// Run-time permission checking (Android only)
// All required permissions at once:
audio.verifyAndroidPermissionsEnabled()
// Granularly:
audio.verifyAndroidRecordAudioEnabled()
audio.verifyAndroidWriteExternalStorageEnabled()
audio.verifyAndroidReadExternalStorageEnabled()
// Time formatting
audio.mmss(secs) // Returns MM:SS formatted time string
audio.mmssss(ms) // Returns a MM:SS:mm formatted time string
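Putting the above together, a minimal end-to-end sketch (it assumes the audio instance created in the import block above, and that startRecorder()/startPlayer() accept the option bundles shown earlier; see index.tsx for exact signatures):

async function recordThenPlay(): Promise<void> {
  const recordingOptions: RecordingOptions = {
    audioFileNameOrPath: 'demo.wav',
    recMeteringEnabled: true,
    maxRecDurationSec: 10.0,
  }
  // Start recording; stops automatically after maxRecDurationSec,
  // or earlier if stopRecorder() is called.
  await audio.startRecorder({
    recordingOptions,
    recStopCallback: async (e: RecStopMetadata) => {
      console.log('Recording stopped:', e)
    },
  })
  // ...later (e.g. from a button handler):
  await audio.stopRecorder()
  // Play back the file just recorded:
  await audio.startPlayer({
    fileNameOrPathOrURL: 'demo.wav',
    playVolume: 1.0,
    playStopCallback: async (e: PlayStopMetadata) => {
      console.log('Playback stopped:', e)
    },
  })
}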
For specifying directory paths, navigating the file system, transferring recordings, dealing with file data, etc., consider using a dedicated file-system library.
Options:
The main RecordingOptions inputs are listed below; for a full list of options/types, see [here](https://github.com/kleydon/rn-audio/blob/main/src/index.tsx):
export interface RecordingOptions {
audioFileNameOrPath?: string, // If wav encoding/format/LPCM params specified, defaults to 'recording.wav';
// otherwise, 'recording.m4a' for ios, and 'recording.mp4' for android.
maxRecDurationSec?: number,
recMeteringEnabled?: boolean, // dB sound level
sampleRate?: number, // defaults to 44100
numChannels?: NumberOfChannelsId, // 1 or 2, defaults to 1
encoderBitRate?: number, // Defaults to 128000
lpcmByteDepth?: ByteDepthId, // 1 or 2; defaults to 2 (16-bit)
//Apple-specific
appleAudioFormatId?: AppleAudioFormatId, // Defaults to aac
appleAVAudioSessionModeId?: AppleAVAudioSessionModeId, // Defaults to measurement
//Apple encoded/compressed-specific
appleAVEncoderAudioQualityId?: AppleAVEncoderAudioQualityId, // Defaults to high
//Apple LPCM/WAV-specific
appleAVLinearPCMIsBigEndian?: boolean, // Defaults to false
appleAVLinearPCMIsFloatKeyIOS?: boolean, // Defaults to false
appleAVLinearPCMIsNonInterleaved?: boolean, // Defaults to false
//Android-specific
androidAudioSourceId?: AndroidAudioSourceId, // Defaults to MIC
androidOutputFormatId?: AndroidOutputFormatId, // Defaults to MPEG_4
androidAudioEncoderId?: AndroidAudioEncoderId, // Defaults to AAC
//Android encoded/compressed-specific
//(None)
}
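As an example, a speech-oriented LPCM/wav configuration might look like the following sketch (value choices are illustrative; NumberOfChannelsId and ByteDepthId are assumed to accept the plain values 1 and 2, per the comments above):

// Specifying LPCM/wav params implies wav output
// (see the audioFileNameOrPath note above).
const wavOptions: RecordingOptions = {
  audioFileNameOrPath: 'speech.wav',
  sampleRate: 16000,     // lower than the 44100 default; fine for speech
  numChannels: 1,        // mono
  lpcmByteDepth: 2,      // 16-bit samples
  recMeteringEnabled: true,
}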
App-Level Considerations
App Lifecycle Events & Aborting Recording or Playback
Depending on your app, you may wish to stop/cancel recording/playback in the event of a screen transition, or of the app going into the background. This library may be limited in what is possible here, but it's worth looking into React Native's AppState, and React Navigation's useFocusEffect().
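For example, a hedged sketch (the AppState subscription API shown requires React Native >= 0.65, @react-navigation/native is assumed, and audio is the instance from the Usage section above):

import React, { useCallback, useEffect } from 'react'
import { AppState } from 'react-native'
import { useFocusEffect } from '@react-navigation/native'
import { Audio } from 'rn-audio'

const audio = new Audio() // assumed instantiation, as in Usage above

function RecorderScreen() {
  // Stop recording/playback whenever the app leaves the foreground
  useEffect(() => {
    const sub = AppState.addEventListener('change', (state) => {
      if (state !== 'active') {
        audio.stopRecorder()
        audio.stopPlayer()
      }
    })
    return () => sub.remove()
  }, [])

  // Stop when this screen loses navigation focus
  useFocusEffect(
    useCallback(() => {
      return () => {
        audio.stopRecorder()
        audio.stopPlayer()
      }
    }, [])
  )

  return null // ...render your UI here
}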
Securing Audio Permissions
Requesting Permission - Android
Android API/SDK 23 (Marshmallow) and above requires run-time permission to record audio; this can be addressed with this library (internally using react-native-permissions), via:
// All required permissions at once:
audio.verifyAndroidPermissionsEnabled()
// Granularly:
audio.verifyAndroidRecordAudioEnabled()
audio.verifyAndroidWriteExternalStorageEnabled()
audio.verifyAndroidReadExternalStorageEnabled()
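A sketch of gating recording on these checks (it assumes the verify functions resolve to booleans, and that audio is the instance from the Usage section; check index.tsx for the actual return types):

import { Platform } from 'react-native'

async function safeStartRecording(): Promise<void> {
  if (Platform.OS === 'android') {
    const ok = await audio.verifyAndroidPermissionsEnabled()
    if (!ok) {
      console.warn('Required audio permissions were not granted.')
      return
    }
  }
  // iOS prompts automatically on first use (see below)
  await audio.startRecorder({ recordingOptions: {} })
}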
Requesting Permission - iOS
While iOS automatically requests a user's permission when audio is used (based on Info.plist entries; see "Post-installation" above), it is still worth considering when it is best for a user to experience permission requests, and perhaps triggering a brief audio use deliberately, so that permission requests surface at opportune times for the user.
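One way to do this, sketched below, is a throwaway start/stop of the recorder at a moment of your choosing (e.g. during onboarding); file name and handling here are illustrative:

// Surface iOS's microphone permission dialog deliberately,
// rather than mid-task; the file created here can be discarded.
async function primeMicPermission(): Promise<void> {
  await audio.startRecorder({
    recordingOptions: { audioFileNameOrPath: 'permission-probe.m4a' },
  })
  await audio.stopRecorder()
}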
Contributing
See the guide to contributing to learn how to contribute to this repository, and about our development workflow.
License & Attributions
MIT
Attributions
This project is inspired by, and to some extent based upon, the following projects:
- react-native-audio-recorder-player, by [Dooboolab](https://github.com/hyochan/react-native-audio-recorder-player)
- react-native-audio-record, by Atlas Labs
Development
Developing react-native modules is slow going; it is typically necessary to work in (at least) 3 languages simultaneously, and it is easy to make mistakes. Take your time, be deliberate, and save your work through frequent, small commits.
When project settings get messed up, it is often easier to build a new project from scratch using create-react-native-library (see below), then re-import your functional code into this new, up-to-date project skeleton.
Don't mindlessly update project settings when Xcode and Android Studio suggest doing so! Where possible, stick with the defaults provided by create-react-native-library.
Don't cavalierly upgrade react-native; preview with the [react-native upgrade helper](https://react-native-community.github.io/upgrade-helper/). It is probably easier to rebuild the project with create-react-native-library!
You may need to run a custom Ruby install, and (if you're building for iOS) you won't want to mess with the default installation! I recommend chruby, with ruby-install.
If npx pod-install / pod install for iOS is giving you problems, you MAY have hit this bug: https://github.com/facebook/react-native/issues/39832. Try running bundle update --bundler in the project root directory (which may be example/, if building the example), then bundle install and bundle exec pod install in the ios directory.
If you get a "PhaseScriptExecution" bug - make sure that the directory path to your project doesn't include any spaces (doh!)
Async arrow functions are currently (Aug 16, 2024) unsupported by the Hermes JS engine; using them can cause unpredictable effects. Instead (until there IS support), use async non-arrow functions, providing access to the outer "this" if need be.
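A sketch of that workaround:

import { RecStopMetadata } from 'rn-audio'

class Recorder {
  // Avoid: handleStop = async (e: RecStopMetadata) => { ... }   (async arrow)
  // Instead, use an async non-arrow method, capturing the outer "this"
  // explicitly where an inner function needs it:
  async handleStop(e: RecStopMetadata): Promise<void> {
    const self = this
    setTimeout(function () {
      console.log('stopped; recorder instance:', self, 'metadata:', e)
    }, 0)
  }
}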
To deal with Yarn idiocy, see: https://levelup.gitconnected.com/how-to-use-yarn-3-with-react-native-and-how-to-migrate-c5f108108533.
Set up
Download the project repo, and run yarn from the rn-audio project directory. (If re-installing, you may need to delete the node_modules and yarn.lock files in the project and example directories.)
Running the example (for development)
From the rn-audio project directory, run yarn example ios and yarn example android to run on the iOS and Android emulators, respectively. You may need to run npx pod-install as well, to ensure the iOS project has its dependencies met.
Re-Creating the Library Project
1. Run npx create-react-native-library, answering the prompts:
   ✔ What is the email address for the package author? … [email protected]
   ✔ What is the URL for the package author? … https://github.com/kleydon/rn-audio
   ✔ What is the URL for the repository? … https://github.com/kleydon/rn-audio
   ✔ What type of library do you want to develop? › Native module
   ✔ Which languages do you want to use? › Kotlin & Swift
2. cd into the library's main project folder.
3. Ensure a bridging header file exists within the iOS project; tailor it if needed. See here and here.
4. Create / update .gitignore, to ignore node_modules, etc.
5. Ensure the bundle identifier is com.quixotry.rnaudio.
6. Add any 'native' project dependencies (and their dependencies), with yarn add <npm module or github repo>.
7. Install all project dependencies using yarn and npx pod-install. (You may need to delete a yarn.lock file first.)
8. Add the functional Swift/Kotlin/TypeScript code to the library.
9. In the Android *Module.kt file:
   - Be sure the value of the TAG string matches the (PascalCase) name of the module.
   - In the module class declaration line, be sure to use private val reactContext, so reactContext is available to class member functions.
10. cd into the library's example project folder, and reprise steps 3 - 7 as needed.
11. To address a weird error: implicit declaration of function 'assert' is invalid in C99 [-Werror,-Wimplicit-function-declaration] bug (noticed when compiling for iOS on an Intel Mac with macOS 13.1/Ventura), you may need to change the example folder's Xcode Pods project build setting for C Language Dialect, for the SocketRocket and Flipper-PeerTalk targets, from gnu11 to gnu99; see: https://github.com/facebook/react-native/issues/35725. (If you discover you DON'T need to do this, update this documentation!)
12. cd back to the library's main project folder.
13. Run the example app on iOS: yarn example ios
14. Run the example app on Android: yarn example android
Upgrading react-native:
You could use https://react-native-community.github.io/upgrade-helper/. Practically, it is probably easier to start with a fresh react-native project, using create-react-native-library.
Issues / Improvement Ideas:
- iOS: Currently, when a corrupt audio file fails to play, failure is silent. Figure out how to return a failure code here.
- iOS: More nuanced route-change handling.
- iOS/Android: There may be various scenarios in which external events change accessibility to audio, in ways this library does not yet gracefully accommodate. Investigate, and address.
- Record options are validated at the js/ts level; should they also be validated at the native level, in the spirit of defensive coding? (Could mean extra work / upkeep, if options are only ever accessed via the js/ts level...)
- Playing from http: (as opposed to https:) may not work; consider how to communicate this.
- Playing from http/s involves delay. Ideally, there should be some feedback about this delay (e.g. Can I be informed that I am waiting for a network process, and not just "hung"? Can I know how long I need to wait?)
- Consider adding a parameter for automatically switching from paused to stopped after some maximum duration has passed, so that when recording wav files via Android's AudioRecord, the recorder isn't just spinning (potentially using too much power?).
- Android: Consider doing EVERYTHING with AudioRecord, and using converters after (during) the fact to get all the other formats. (Is this more limiting? More complex? More brittle? Does it result in delays? Or does it unify/simplify the framework - and provide lower-level access to audio data, if this is needed in the future?)
- Android: Consider handling timeout for AudioRecord-based recording in the same way as timeout is handled for MediaRecorder-based recording. (Advantages?)