@davstack/hume-voice-react v0.1.11-beta.12
<div align="center"> <img src="https://storage.googleapis.com/hume-public-logos/hume/hume-banner.png"> <h1>Hume AI EVI React SDK</h1> <p> <strong>Integrate Hume's Empathic Voice Interface in your React application</strong> </p> </div>
## Overview

This is the React SDK for Hume's Empathic Voice Interface, making it easy to integrate the voice API into your own front-end application. The SDK abstracts the complexities of managing websocket connections, capturing user audio via the client's microphone, and handling the playback of the interface's audio responses.
## Prerequisites

Before installing this package, please ensure your development environment meets the following requirement:

- Node.js (`v18.0.0` or higher)

To verify your Node.js version, run this command in your terminal:

```sh
node --version
```

If your Node.js version is below `18.0.0`, update it to meet the requirement. To update Node.js, visit the official Node.js site or use a version management tool like nvm for a more seamless upgrade process.
## Installation

Add `@humeai/voice-react` to your project by running this command in your project directory:

```sh
npm install @humeai/voice-react
```

This will download and include the package in your project, making it ready for import and use within your React components:

```tsx
import { VoiceProvider } from '@humeai/voice-react';
```
## Usage

### Quickstart

To use the SDK, wrap your components in the `VoiceProvider`, which will enable your components to access available voice methods. Here's a simple example to get you started:

```tsx
import React from 'react';
import { VoiceProvider } from '@humeai/voice-react';

function App() {
  const apiKey = process.env.HUME_API_KEY || '';

  return (
    <VoiceProvider
      auth={{ type: 'apiKey', value: apiKey }}
      hostname={process.env.HUME_VOICE_HOSTNAME || 'api.hume.ai'}
    >
      <ExampleComponent />
    </VoiceProvider>
  );
}
```
### Configuring VoiceProvider

The table below outlines the props accepted by `VoiceProvider`:

| Prop | Required | Description |
| --- | --- | --- |
| `auth` | yes | Authentication strategy and corresponding value. Authentication is required to establish the websocket connection with Hume's Voice API. See our documentation on obtaining your API key or access token. |
| `hostname` | no | Hostname of the Hume API. Defaults to `"api.hume.ai"`. |
| `reconnectAttempts` | no | Number of times to attempt to reconnect to the API. Defaults to `30`. |
| `debug` | no | Enable debug mode. Defaults to `false`. |
| `configId` | no | If you have a configuration ID with voice presets, pass the config ID here. |
| `configVersion` | no | If you wish to use a specific version of your config, pass the version ID here. |
| `onMessage` | no | Callback function invoked upon receiving a message through the websocket. |
| `onToolCall` | no | Callback function invoked upon receiving a `ToolCallMessage` through the websocket. The string it returns is sent as the content of a `ToolResponseMessage`. This is where you should add logic that handles your custom tool calls. |
| `onClose` | no | Callback function invoked when the websocket connection closes. |
| `clearMessagesOnDisconnect` | no | Boolean indicating whether to clear message history when the call ends. |
| `messageHistoryLimit` | no | Number of messages to keep over the course of the conversation. Defaults to `100`. |
| `sessionSettings` | no | Optional settings where you can set custom values for the session. |
| `resumedGroupChatId` | no | A chat group ID, which enables the chat to continue from a previous chat group. |
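As an illustration, a `VoiceProvider` using several of these props might look like the following sketch. The `configId` value and the callback bodies are placeholders, not real values:

```tsx
import React from 'react';
import { VoiceProvider } from '@humeai/voice-react';

function App() {
  return (
    <VoiceProvider
      auth={{ type: 'apiKey', value: process.env.HUME_API_KEY || '' }}
      // Placeholder config ID -- replace with your own voice preset config.
      configId="your-config-id"
      reconnectAttempts={10}
      clearMessagesOnDisconnect={true}
      onMessage={(message) => console.log('received', message)}
      onClose={() => console.log('connection closed')}
    >
      <ExampleComponent />
    </VoiceProvider>
  );
}
```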
## Using the Voice

After you have set up your voice provider, you will be able to access various properties and methods to use the voice in your application. In any component that is a child of `VoiceProvider`, access these methods by importing the `useVoice` custom hook.

```tsx
// ExampleComponent is nested within VoiceProvider
import { useVoice } from '@humeai/voice-react';

export const ExampleComponent = () => {
  const { connect } = useVoice();
  // ...
};
```
### Methods

| Method | Usage |
| --- | --- |
| `connect: () => Promise` | Opens a socket connection to the voice API and initializes the microphone. |
| `disconnect: () => void` | Disconnect from the voice API and microphone. |
| `clearMessages: () => void` | Clear transcript messages from history. |
| `mute: () => void` | Mute the microphone. |
| `unmute: () => void` | Unmute the microphone. |
| `muteAudio: () => void` | Mute the assistant audio. |
| `unmuteAudio: () => void` | Unmute the assistant audio. |
| `sendSessionSettings: (text: string) => void` | Send new session settings to the assistant. This overrides any session settings that were passed as props to the `VoiceProvider`. |
| `sendUserInput: (text: string) => void` | Send a user input message. |
| `sendAssistantInput: (text: string) => void` | Send a text string for the assistant to read out loud. |
| `sendToolMessage: (toolMessage: ToolResponse \| ToolError) => void` | Send a tool response or tool error message to the EVI backend. |
| `sendPauseAssistantMessage: () => void` | Send a pause-assistant message to the websocket. This pauses responses from EVI. Chat history is still saved and sent after resuming. |
| `sendResumeAssistantMessage: () => void` | Send a resume-assistant message to the websocket. This resumes responses from EVI. Chat history sent while paused will now be sent. |
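To illustrate, a minimal call-control component wiring a few of these methods to buttons might look like the following sketch (the component name and button labels are illustrative; it must be rendered inside a `VoiceProvider`):

```tsx
import React from 'react';
import { useVoice } from '@humeai/voice-react';

export const CallControls = () => {
  const { connect, disconnect, mute, unmute, sendUserInput } = useVoice();

  return (
    <div>
      {/* connect() returns a Promise, so handle rejection */}
      <button onClick={() => connect().catch(console.error)}>Start call</button>
      <button onClick={() => disconnect()}>End call</button>
      <button onClick={() => mute()}>Mute mic</button>
      <button onClick={() => unmute()}>Unmute mic</button>
      <button onClick={() => sendUserInput('Hello!')}>Send text input</button>
    </div>
  );
};
```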
### Properties

| Property | Type | Description |
| --- | --- | --- |
| `isMuted` | `boolean` | Whether the microphone is muted. |
| `isAudioMuted` | `boolean` | Whether the assistant audio is muted. |
| `isPlaying` | `boolean` | Whether the assistant audio is currently playing. |
| `fft` | `number[]` | Audio FFT values for the assistant audio output. |
| `micFft` | `number[]` | Audio FFT values for the microphone input. |
| `messages` | `UserTranscriptMessage`, `AssistantTranscriptMessage`, `ConnectionMessage`, `UserInterruptionMessage`, or `JSONErrorMessage` | Message history of the current conversation. |
| `lastVoiceMessage` | `AssistantTranscriptMessage` or `null` | The last transcript message received from the assistant. |
| `lastUserMessage` | `UserTranscriptMessage` or `null` | The last transcript message received from the user. |
| `readyState` | `VoiceReadyState` | The current `readyState` of the websocket connection. |
| `status` | `VoiceStatus` | The current status of the voice connection: connected, disconnected, connecting, or error. If the voice is in an error state, it will automatically disconnect from the websocket and microphone. |
| `error` | `VoiceError` | More detailed error information when the voice is in an error state. |
| `isError` | `boolean` | If true, the voice is in an error state. |
| `isAudioError` | `boolean` | If true, an audio playback error has occurred. |
| `isMicrophoneError` | `boolean` | If true, a microphone error has occurred. |
| `isSocketError` | `boolean` | If true, there was an error connecting to the websocket. |
| `callDurationTimestamp` | `string` or `null` | The length of a call. This value persists after the conversation has ended. |
| `toolStatusStore` | `Record<string, { call?: ToolCall; resolved?: ToolResponse \| ToolError }>` | A map of tool call IDs to their associated tool messages. |
| `chatMetadata` | `ChatMetadataMessage` or `null` | Metadata about the current chat, including chat ID, chat group ID, and request ID. |
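For example, a simple status display reading a few of these properties might look like the following sketch (the component name is illustrative; it must be rendered inside a `VoiceProvider`, and the exact shape of `status` and message objects should be checked against the SDK's TypeScript types):

```tsx
import React from 'react';
import { useVoice } from '@humeai/voice-react';

export const CallStatus = () => {
  const { status, isMuted, messages, lastVoiceMessage } = useVoice();

  return (
    <div>
      <p>Connection status: {status.value}</p>
      <p>Microphone: {isMuted ? 'muted' : 'live'}</p>
      <p>Messages so far: {messages.length}</p>
      {lastVoiceMessage && (
        <p>Assistant said: {lastVoiceMessage.message.content}</p>
      )}
    </div>
  );
};
```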
## Support

If you have questions or need assistance with this package, reach out to us on Discord!