eLLM Studio Chat Stream Package
Welcome to the eLLM Studio Chat Stream package! 🎉 This package enables streaming chat functionality with your AI assistant in eLLM Studio via WebSocket and GraphQL. It's designed for both frontend and backend implementations.
🚀 Features
- Real-time Streaming: Receive messages in a streaming fashion.
- AI Assistant Integration: Seamlessly connects with the AI models deployed within your organization.
- Customizable: Pass custom prompts or previous messages to enhance conversations.
- Error Handling: Catch and manage errors using callbacks.
📦 Installation
npm i @e-llm-studio/streaming-response
🛠️ Usage
Here’s how you can use the startChatStream function to set up the AI chat stream:
import { startChatStream } from '@e-llm-studio/streaming-response';
import { v4 as uuid } from 'uuid'; // assumed import for the uuid() helper used in requestId below

const params = {
  WEBSOCKET_URL: 'wss://your-ellm-deployment/graphql', // Required: WebSocket URL for your deployment
  organizationName: 'TechOrg', // Required: Organization name where the assistant is created
  chatAssistantName: 'MyAssistant', // Required: Assistant name
  selectedAIModel: 'gpt-4', // Required: AI model selected for the assistant
  replyMessage: '', // Optional: Pass the previous response from the AI
  userName: 'John Doe', // Required: Username of the person using the assistant
  userEmailId: '[email protected]', // Required: User's email
  userId: 'user-123', // Required: Unique identifier for the user
  query: 'What is the weather like today?', // Required: The user's question or prompt
  requestId: `requestId-${uuid()}`, // Required: Unique ID for the request
  customPrompt: 'Personalized prompt here', // Optional: Add custom context to your prompt
  enableForBackend: false, // Optional: Set to true if you're running on the backend (Node.js)
  onStreamEvent: (data) => console.log('Stream event:', data), // Required: Callback for handling stream data
  onStreamEnd: (data) => console.log('Stream ended:', data), // Optional: Callback for when the stream ends
  onError: (error) => console.error('Stream error:', error), // Optional: Callback for handling errors
};

startChatStream(params);
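The callbacks are where the streamed response arrives. Below is a minimal sketch of a hypothetical helper, askAssistant (not part of this package), that wraps startChatStream in a Promise and resolves with the concatenated reply once the stream ends. It assumes each onStreamEvent payload is, or can be stringified into, a text fragment; adjust the extraction to whatever shape your deployment actually sends.

import { startChatStream } from '@e-llm-studio/streaming-response';

// Hypothetical helper (not part of the package): wraps startChatStream in a
// Promise that resolves with the full reply once the stream ends.
// Assumption: each onStreamEvent payload is, or can be stringified into, text.
function askAssistant(baseParams: Record<string, unknown>): Promise<string> {
  return new Promise((resolve, reject) => {
    let answer = '';
    startChatStream({
      ...baseParams,
      onStreamEvent: (data: unknown) => {
        answer += typeof data === 'string' ? data : JSON.stringify(data);
      },
      onStreamEnd: () => resolve(answer), // resolve with the accumulated text
      onError: (error: unknown) => reject(error),
    });
  });
}

// Usage: askAssistant(params).then((reply) => console.log(reply));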
🔑 Parameters
Required Parameters:
- WEBSOCKET_URL: WebSocket URL of your eLLM deployment, e.g. wss://dev-egpt.techo.camp/graphql.
- organizationName: Name of the organization where the assistant is created.
- chatAssistantName: Name of the assistant you're interacting with.
- selectedAIModel: The AI model used (e.g., GPT-4).
- userName: The name of the user interacting with the assistant.
- userEmailId: Email ID of the user.
- userId: Unique user ID.
- query: The question or prompt you want to send to the AI.
- requestId: Unique request ID, e.g. requestId-${uuid()}.
- onStreamEvent: Callback function to capture incoming stream events.
Optional Parameters:
- replyMessage: If you want to include a previous response with the new query, pass it here. Leave empty for normal chat scenarios.
- customPrompt: Use this to add additional context to the prompt sent to the AI.
- enableForBackend: Set to true if you're using this package on the backend (e.g., Node.js). Defaults to false, which is suitable for frontend use (e.g., React/Next.js); see the backend sketch after this list.
- onStreamEnd: Callback for when the stream ends. Useful for handling final events or cleanup.
- onError: Callback for capturing any errors during the stream.
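On the backend, the call looks the same; the main difference is enableForBackend: true. Here is a minimal Node.js sketch under that assumption, with placeholder values throughout (including the replyMessage and customPrompt follow-up context):

import { startChatStream } from '@e-llm-studio/streaming-response';
import { randomUUID } from 'node:crypto';

// Minimal backend (Node.js) sketch; every value below is a placeholder.
startChatStream({
  WEBSOCKET_URL: 'wss://your-ellm-deployment/graphql',
  organizationName: 'TechOrg',
  chatAssistantName: 'MyAssistant',
  selectedAIModel: 'gpt-4',
  userName: 'Service Bot',
  userEmailId: 'bot@example.com',
  userId: 'service-001',
  query: 'And what about tomorrow?',
  requestId: `requestId-${randomUUID()}`,
  replyMessage: 'Today will be sunny with a high of 24°C.', // previous AI response, passed for follow-up context
  customPrompt: 'Answer in one short sentence.', // extra context added to the prompt
  enableForBackend: true, // required when running under Node.js
  onStreamEvent: (data: unknown) => process.stdout.write(String(data)),
  onStreamEnd: () => console.log('\n[stream ended]'),
  onError: (error: unknown) => console.error('Stream error:', error),
});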
👥 Community & Support
For any questions or issues, feel free to reach out via our GitHub repository or join our community chat! We’re here to help. 😊