cordova-plugin-vybuds-audioinput
v1.0.26

A Cordova plugin that enables audio capture from a Bluetooth headset microphone by forwarding raw audio data in (near) real time to the web layer of your application, where it can drive one or more WebAudio API AudioNodes.
Installation
```
cordova plugin add https://github.com/project-oblio/cordova-plugin-vybuds-audioinput.git
```
Please handle Bluetooth permissions, connection, and disconnection within your base application. This includes adding the Android Bluetooth permissions (don't forget ACCESS_COARSE_LOCATION) and adding the UUID values to your iOS plist. Alternatively, you can submit a pull request adding support for the experimental Bluetooth Web Audio API.
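For reference, the Android permissions mentioned above would typically look something like this in your app's AndroidManifest.xml (a sketch only — the exact entries depend on your target SDK level, and Android 12+ additionally gates Bluetooth behind runtime permissions):

```xml
<!-- Sketch: typical Bluetooth + microphone permissions for the host app -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<!-- Android 12+ (API 31) additionally requires: -->
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
```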
Features
- [x] Multiple WebAudio node outputs
- [x] Bluetooth headset compatibility
- [x] Control each node individually
- [x] Check microphone permissions
- [ ] (Untested, but may work) Two-way communication (play and record)
- [ ] Browser compatibility and testing (if necessary, see the experimental Bluetooth Web Audio API)
TODO (Homomorphic Encryption and Protecting Personal Health Information)
- Milestone One
- [ ] Make Bluetooth recording optional, for easier debugging and testing without a Bluetooth headset (if this has not already been implemented). In other words, add a function to the cordova audioinput object that lets the caller record through either the regular microphone or the Bluetooth microphone. See cordova-plugin-audioinput.
- [ ] If a password has not yet been set by the user, prompt the user to set a password for accessing private methods (raw brainwave data). Obviously, do not store the password in plaintext; store a salted hash instead.
- [ ] Provide a "subscribeToData" function which takes a function as input. When a BTB service or experiment uses this function, the callback it provides is called with the most recent raw data.
- [ ] By default, homomorphically encrypt raw audio data from the microphone with the Project Oblio "do good be good" mantra: the string "0xd0700db700d0000000". Look for a dedicated JavaScript homomorphic-encryption library that has already been audited.
- [ ] Add a function for making audio data unencrypted, "makeDataUnencrypted". This function requires the user to enter their previously set password through an alert window.
- [ ] Test on iOS and Android
- [ ] Provide a test script that proves each of these features as part of your PR
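The "subscribeToData" flow described above could be sketched as follows. This is a minimal sketch: the names `subscribeToData`, `dispatchData`, and the internal subscriber list are illustrative assumptions, not part of the current plugin API.

```javascript
// Registered callbacks, each invoked with the most recent raw data chunk.
var subscribers = [];

// A BTB service or experiment passes a callback; it receives every
// subsequent raw data chunk until it unsubscribes.
function subscribeToData(callback) {
    subscribers.push(callback);
    // Return an unsubscribe handle so experiments can detach cleanly.
    return function unsubscribe() {
        var i = subscribers.indexOf(callback);
        if (i !== -1) subscribers.splice(i, 1);
    };
}

// Called internally whenever a new raw audio chunk arrives from the plugin.
function dispatchData(chunk) {
    subscribers.forEach(function (cb) { cb(chunk); });
}
```

Returning an unsubscribe handle (rather than exposing the subscriber list) keeps experiments from interfering with one another's subscriptions.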
TODO (Blockchain biometrics)
- Milestone One
- [ ] Make Bluetooth recording optional for easier debugging and testing without a Bluetooth headset (if this has not already been implemented).
- [ ] Google any terminology you may be unfamiliar with for the following tasks. Most of the "new" material here is really just simple HTTP requests to Ethereum nodes (effectively websites) that provide an API to the Ethereum blockchain.
- [ ] Create a function called "setUsername". This function takes an Ethereum address as input; it will later be checked against the subjects database in the oblio token smart contract. If a user has not set their username, do not prompt them for one. You do not need to validate the Ethereum address right now.
- [ ] Create a function to poll the ethereum bootnodes to obtain a list of peers.
- [ ] Create a function to poll 5 or more of these peers simultaneously. Ask each for the JSON structure of the most recent block, obtain the most recent block hash, and make sure it is the same in at least 4/5 of these peers.
- [ ] Once you have the peers, create a function that converts this hash into an audio signal. Using a structure called "blockchainAudio", allow a user to set this to either "random", "Wolfram", or "silent" (default: silent). Add an exposed "duration" setting to each of these algorithms so that a user can set them from a minimum length of 0.2 seconds up to 10 seconds (default: 2 seconds). Use the WebAudio API and WebAudio API nodes whenever possible.
  - [ ] The "random" setting creates an audible audio signal, with features ranging from 0 Hz to 23 kHz. It is based on the most recent block hash and will likely sound mostly like static.
  - [ ] The "Wolfram" setting is an algorithm to generate an audio signal that sounds much more like music, generated from the centralized Wolfram Tones algorithm. It has about as many possibilities as Ethereum block hashes and uses a simple HTTP GET request to generate audio. Stream the audio from the Wolfram server to the user.
  - [ ] The "silent" setting is like the "random" setting, but it only outputs randomly derived audio in the range of 0 to 20 Hz. In terms of security, random > Wolfram > silent. In terms of user convenience, silent > Wolfram > random.
  - [ ] Your methods here should be written in such a way that an external node, given only the block hash, can derive the exact same audio signal you've generated.
- [ ] Test on iOS and Android
- [ ] Provide a test script that proves each of these features as part of your PR
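The peer-consensus check and the deterministic hash-to-audio derivation described above could be sketched as follows. This is a sketch under assumptions: the function names are illustrative, the PRNG is a mulberry32 variant (any agreed-upon deterministic PRNG would do), and the actual peer polling (not shown) would issue `eth_getBlockByNumber` JSON-RPC requests to each peer.

```javascript
// Accept a block hash only if at least `threshold` of the polled peers
// report the same hash (e.g. 4 of 5).
function majorityHash(reportedHashes, threshold) {
    var counts = {};
    reportedHashes.forEach(function (h) { counts[h] = (counts[h] || 0) + 1; });
    var best = null;
    Object.keys(counts).forEach(function (h) {
        if (best === null || counts[h] > counts[best]) best = h;
    });
    return best !== null && counts[best] >= threshold ? best : null;
}

// Deterministically derive audio samples from a block hash, so that an
// external node given only the hash reproduces the exact same signal.
// Seeds a mulberry32 PRNG from the first four hash bytes (illustrative).
function hashToSamples(blockHash, numSamples) {
    var seed = parseInt(blockHash.replace(/^0x/, "").slice(0, 8), 16) >>> 0;
    function next() {
        seed = (seed + 0x6D2B79F5) >>> 0;
        var t = seed;
        t = Math.imul(t ^ (t >>> 15), t | 1);
        t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
    }
    var samples = new Float32Array(numSamples);
    for (var i = 0; i < numSamples; i++) {
        samples[i] = next() * 2 - 1; // map to [-1, 1) for an AudioBuffer
    }
    return samples;
}
```

In the plugin, the samples would be copied into an AudioBuffer (via `copyToChannel`) and played through an AudioBufferSourceNode; the "silent" variant could band-limit the same signal to 0–20 Hz with a BiquadFilterNode.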
- Milestone 2
- [ ] Please wait until the "Basic Security, Milestone One" PR has been submitted successfully by you or another developer before continuing.
- [ ] Add a function called "detectHuman". This function calculates humanness in-browser. Further processing will be done by an external oblio node in a future PR.
  - [ ] Determine whether vybuds are connected (skin recording) or a generic microphone (audio recording) is connected. Skin recordings are comparatively noisy (high-power frequencies) compared to audio recordings. When you play the block-hash-derived audio signal you generated in Milestone 1, if the recorded signal is nearly identical to the audio signal you just generated, then a device microphone is present.
  - [ ] A circuit that is set up for skin recording, but is not currently connected to the user, will have low frequency power. Use this fact to determine whether vybuds are on a person's head or somewhere else (i.e. on the floor). If the recorded signal has a low-power spectrogram (< -30 dB from 0 to 200 Hz), it is probably not well connected. If it has a high-power spectrogram (> -30 dB from 0 to 200 Hz), it is probably well connected to a person's skin. Provide a function called "detectPoorConnection" that takes as input a function to be called when the headset is disconnected, and a function called "detectGoodConnection" for when the headset is reconnected. Provide a numerical variable, "confidenceMetric", a running calculation of the total dB from 0 to 200 Hz that returns a linearly dispersed value: 0.1 for -60 dB or less, up to 1.0 for -20 dB or higher.
  - [ ] Perform a basic step to ensure that the user is not a bot. Ensure that the output audio signal is well correlated with the input (microphone-recorded) audio signal, with some degree of noise. For example, a baseline noise level should be present if the headphones are connected to a user's skin and no audio is generated (see the previous bullet regarding frequency power). First, calculate what this level is. Next, calculate how it changes when a particular audio signal is played. This noise level should increase in the 20 Hz range when the audio signal is outputting 20 Hz, for example. For now, focus only on the range from 0 to 200 Hz.
- [ ] Add a function called "uploadData". This function can be blank for now. It will take the username set earlier, along with an experiment name and the most recent audio data, and upload them to an oblio node. This oblio node performs identification on the signal to ensure that a user is actually there.
- [ ] Add an exposed function called "performHumanDetectionAndUploadCheck" that can be called by a BTB service or experiment. This step calls the "detectHuman" function you've just written, followed by the "uploadData" function.
- [ ] Add an exposed function called "beginRecurrentHumanDetection". This polls the Ethereum nodes in the background every 30 seconds for the most recent block hash, and plays the 2-second audio signal generated from that hash through the speaker once every 30 seconds.
- [ ] Test on iOS and Android
- [ ] Provide a test script that proves each of these features as part of your PR
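The 0–200 Hz power check and the confidence metric described above could be sketched as follows. This is a sketch, not plugin API: the function names are assumptions, and `freqDataDb` is assumed to be the output of an AnalyserNode's `getFloatFrequencyData()` (dB values per frequency bin, bin width = sampleRate / fftSize).

```javascript
// Average dB power in the [loHz, hiHz] band of an AnalyserNode spectrum.
function bandPowerDb(freqDataDb, sampleRate, fftSize, loHz, hiHz) {
    var binWidth = sampleRate / fftSize;
    var lo = Math.max(0, Math.floor(loHz / binWidth));
    var hi = Math.min(Math.floor(hiHz / binWidth), freqDataDb.length - 1);
    var sum = 0, n = 0;
    for (var i = lo; i <= hi; i++) { sum += freqDataDb[i]; n++; }
    return n > 0 ? sum / n : -Infinity;
}

// Map 0-200 Hz band power to the linearly dispersed confidence metric:
// 0.1 at -60 dB or below, rising linearly to 1.0 at -20 dB or above.
function confidenceMetric(db) {
    if (db <= -60) return 0.1;
    if (db >= -20) return 1.0;
    return 0.1 + (db + 60) * (0.9 / 40);
}
```

"detectPoorConnection" / "detectGoodConnection" callbacks could then fire whenever `bandPowerDb(..., 0, 200)` crosses the -30 dB threshold described above.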
TODO (Spectrogram analysis)
This step will require having vybuds in-hand.
Supported Platforms
- Android
- iOS
Basic Usage Example - AudioNode
- [x] Multiple WebAudio node outputs
- [x] Bluetooth headset compatibility
```javascript
// Wait for the Cordova 'deviceready' event to fire before continuing
function startCapture() {
    var audioContext = new (window.AudioContext || window.webkitAudioContext)();
    var audioinput = window.audioinput; // this library

    // An "id" (string) is required to allow connections to multiple nodes.
    // The id system allows you to connect a single stream to multiple nodes.
    var id = "toSpeaker"; // this can be anything
    var audioNode = audioContext.destination;

    var id2 = "toFile";
    var audioNode2 = audioContext.createMediaStreamDestination();

    // Sets up a SCO socket on Android
    audioinput.connectToBluetooth(function () {
        audioinput.start({
            streamToWebAudio: true,
            audioContext: audioContext,
            platform: "android" // default is android
        });

        // Connect the audioinput to the device speakers in order to hear the captured sound.
        audioinput.connect(audioNode, id);
        audioinput.connect(audioNode2, id2);

        // To write raw data to file:
        var mediaRecorder = new MediaRecorder(audioNode2.stream);
        mediaRecorder.ondataavailable = function (evt) {
            // // e.g. append each chunk to a file with the Cordova File plugin:
            // // the third argument specifies an "append" operation
            // writeFile(fileEntry, evt.data, true);
            console.log(evt.data);
        };
        mediaRecorder.start();
    });
}
```
- [x] Control each node individually
```javascript
// ...and when we're ready to stop recording.
var id = "toSpeaker";
audioinput.stop(function (url) {
    // Now you have the URL (which might differ from the one passed to audioinput.start()).
    // You might, for example, read the data into a blob.
    window.resolveLocalFileSystemURL(url, function (tempFile) {
        tempFile.file(function (tempWav) {
            var reader = new FileReader();
            reader.onloadend = function (e) {
                // Create the blob from the result.
                var blob = new Blob([new Uint8Array(this.result)], { type: "audio/wav" });
                // Delete the temporary file (fileError is your own error handler).
                tempFile.remove(function (e) { console.log("temporary WAV deleted"); }, fileError);
                // Do something with the blob.
                doSomethingWithWAVData(blob);
            };
            reader.readAsArrayBuffer(tempWav);
        });
    }, function (e) {
        console.log("Could not resolveLocalFileSystemURL: " + e.message);
    });
}, id);
```
- [x] Check microphone permissions
```javascript
window.audioinput.checkMicrophonePermission(function (hasPermission) {
    if (hasPermission) {
        console.log("We already have permission to record.");
        startCapture();
    } else {
        // Ask the user for permission to access the microphone
        window.audioinput.getMicrophonePermission(function (hasPermission, message) {
            if (hasPermission) {
                console.log("User granted us permission to record.");
                startCapture();
            } else {
                console.warn("User denied permission to record.");
            }
        });
    }
});
```
Contributing
This project is open source, so contributions are welcome. Just ensure that your changes don't break backward compatibility!
- Fork the project.
- Create your feature branch (git checkout -b my-new-feature).
- Commit your changes (git commit -am 'Add some feature').
- Push to the branch (git push origin my-new-feature).
- Create a new Pull Request.
Credits
- This plugin was created by Edin Mujkanovic and Satoshi.