EmbedAPIClient
🔥 ONE API KEY TO RULE THEM ALL! Access ANY AI model instantly through our game-changing unified API. Build AI apps in minutes, not months! The ultimate all-in-one AI agent solution you've been waiting for! 🚀
Visit embedapi.com to get your API key and start building!
Installation
Using npm:
npm install @embedapi/core
Using yarn:
yarn add @embedapi/core
Using pnpm:
pnpm add @embedapi/core
Initialization
const EmbedAPIClient = require('@embedapi/core');

// Regular API client
const client = new EmbedAPIClient('your-api-key');

// Agent mode client
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });

// Debug mode client
const debugClient = new EmbedAPIClient('your-api-key', { debug: true });

// Agent and debug mode client
const debugAgentClient = new EmbedAPIClient('your-agent-id', {
  isAgent: true,
  debug: true
});
Constructor Parameters
- apiKey (string): Your API key for regular mode, or agent ID for agent mode
- options (object, optional): Configuration options
  - isAgent (boolean, optional): Set to true to use agent mode. Defaults to false
  - debug (boolean, optional): Set to true to enable debug logging. Defaults to false
Methods
1. generate({ service, model, messages, ...options })
Generates text using the specified AI service and model.
Parameters
- service (string): The name of the AI service (e.g., 'openai')
- model (string): The model to use (e.g., 'gpt-4o')
- messages (array): An array of message objects containing conversation history
- maxTokens (number, optional): Maximum number of tokens to generate
- temperature (number, optional): Sampling temperature
- topP (number, optional): Top-p sampling parameter
- frequencyPenalty (number, optional): Frequency penalty parameter
- presencePenalty (number, optional): Presence penalty parameter
- stopSequences (array, optional): Stop sequences for controlling response generation
- tools (array, optional): Array of function definitions for tool use
- toolChoice (string|object, optional): Tool selection preferences
- enabledTools (array, optional): List of enabled tool names
- userId (string, optional): User identifier for agent mode
Usage Example
// Regular mode
const response = await client.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentResponse = await agentClient.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});
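The optional parameters listed above are passed alongside the required fields. The call below is a sketch: the parameter names come from the list above, but the values are only illustrative, and the exact shapes expected by tools, toolChoice, and enabledTools are not documented here.
// Sketch: generate() with the optional sampling parameters
const tunedResponse = await client.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'Summarize the benefits of unit tests.' }
  ],
  maxTokens: 256,      // cap the response length
  temperature: 0.7,    // illustrative value, not a recommended default
  topP: 0.9,
  frequencyPenalty: 0,
  presencePenalty: 0,
  stopSequences: ['\n\n']
});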
2. stream({ service, model, messages, ...options })
Streams text generation using the specified AI service and model.
Parameters
Same as generate(), plus:
- streamOptions (object, optional): Stream-specific configuration options
Response Format
The stream emits Server-Sent Events (SSE) with two types of messages:
- Content Chunks:
{
  "content": "Generated text chunk",
  "role": "assistant"
}
- Final Statistics:
{
  "type": "done",
  "tokenUsage": 17,
  "cost": 0.000612
}
Usage Example
// Regular mode
const streamResponse = await client.stream({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentStreamResponse = await agentClient.stream({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});
// Process the stream (SSE events arrive over the response body)
const reader = streamResponse.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  const chunk = decoder.decode(value, { stream: true });
  const lines = chunk.split('\n');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      if (data.type === 'done') {
        // Final statistics event
        console.log('Stream stats:', {
          tokenUsage: data.tokenUsage,
          cost: data.cost
        });
      } else {
        // Content chunk event
        console.log('Content:', data.content);
      }
    }
  }
}
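The loop above splits each chunk on newlines, so an SSE line that happens to straddle two chunks could be dropped. A small wrapper (a sketch, assuming only the data: framing documented above) buffers partial lines and exposes the events as an async generator:
// Sketch: wrap the SSE parsing into an async generator
async function* readSSE(streamResponse) {
  const reader = streamResponse.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next read

    for (const line of lines) {
      if (line.startsWith('data: ')) {
        yield JSON.parse(line.slice(6));
      }
    }
  }
}

// Usage:
// for await (const event of readSSE(streamResponse)) {
//   if (event.type === 'done') console.log('Cost:', event.cost);
//   else process.stdout.write(event.content);
// }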
3. listModels()
Lists all available models.
const models = await client.listModels();
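The shape of the returned list is not documented here, so logging the result is the quickest way to see which services and models your key can access:
console.log(JSON.stringify(models, null, 2));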
4. testAPIConnection()
Tests the connection to the API.
const isConnected = await client.testAPIConnection();
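Assuming the promise resolves to a truthy value when the service is reachable (as the variable name suggests), this makes a convenient startup check:
// Sketch: fail fast if the key is invalid or the API is unreachable
if (!(await client.testAPIConnection())) {
  throw new Error('EmbedAPI connection test failed; check your API key');
}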
Error Handling
All methods throw errors if the API request fails:
try {
  const response = await client.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  console.error('Error:', error.message);
}
Authentication
The client supports two authentication modes:
Regular Mode (default)
- Uses API key in request headers
- Initialize with:
new EmbedAPIClient('your-api-key')
Agent Mode
- Uses agent ID in request body
- Initialize with:
new EmbedAPIClient('your-agent-id', { isAgent: true })
- Optional userId parameter available for request tracking
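The userId option documented under generate() is how individual agent-mode requests get tagged. A minimal sketch (the identifier value is hypothetical; use whatever your application tracks users by):
// Sketch: agent-mode request tagged with a user identifier
const trackedResponse = await agentClient.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
  userId: 'user-1234' // hypothetical identifier
});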
License
MIT