@embedapi/core
v1.0.4
A general-purpose embedding solution for APIs, providing tools for easy integration of AI models.
EmbedAPIClient - Documentation
Overview
EmbedAPIClient is a Node.js client library for interacting with the EmbedAPI service. It provides methods to generate AI responses, list available models, and test API connectivity, making it easy to work with AI providers such as OpenAI, Anthropic, VertexAI, and others through a single interface.
Getting Started
Prerequisites
- Node.js installed on your system
- Basic understanding of JavaScript/Node.js
API Key Setup
- Visit embedapi.com to create your account
- Generate your API key from the dashboard
- Fund your account to activate API access
- Copy your API key for configuration
Installation
To install the EmbedAPIClient in your project, use the following command:
npm install @embedapi/core
Usage
Import the Client
You can import the EmbedAPIClient class using CommonJS syntax:
const EmbedAPIClient = require('@embedapi/core');
Initialize the Client
To initialize the client, you need to provide your API key. The API key is required to authenticate requests to the EmbedAPI service.
const apiKey = 'your-api-key-here';
const client = new EmbedAPIClient(apiKey);
Methods
1. generate({ service, model, messages, ...options })
Generates text using the specified AI service and model.
Parameters:
- service (string): The name of the AI service (e.g., 'openai').
- model (string): The model to use (e.g., 'gpt-4o').
- messages (array): An array of message objects containing the conversation history.
- maxTokens (number, optional): The maximum number of tokens to generate.
- temperature (number, optional): Sampling temperature.
- topP (number, optional): Top-p (nucleus) sampling parameter.
- frequencyPenalty (number, optional): Frequency penalty parameter.
- presencePenalty (number, optional): Presence penalty parameter.
- stopSequences (array, optional): Stop sequences for controlling response generation.
Usage Example:
const response = await client.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
  maxTokens: 150,
  temperature: 0.7
});
console.log(response);
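For multi-turn conversations, the messages array simply accumulates prior turns in order. Here is a minimal illustrative helper for building that history (addMessage is not part of @embedapi/core, just a sketch):

```javascript
// Build a conversation history incrementally; each entry has a role and content.
function addMessage(history, role, content) {
  return [...history, { role, content }];
}

let history = [];
history = addMessage(history, 'user', 'What is the capital of France?');
history = addMessage(history, 'assistant', 'Paris.');
history = addMessage(history, 'user', 'And its population?');

// Pass `history` as the `messages` option to client.generate(...).
```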
2. listModels()
Lists all models available through the EmbedAPI service.
Usage Example:
const models = await client.listModels();
console.log(models);
3. testAPIConnection()
Tests the connection to the API to verify that the API key is valid.
Usage Example:
const isConnected = await client.testAPIConnection();
console.log('API Connection Successful:', isConnected);
Error Handling
The client methods throw errors when an API request fails, so wrap calls in try...catch blocks to handle failures gracefully.
try {
  const response = await client.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
  });
  console.log(response);
} catch (error) {
  console.error('Error generating text:', error);
}
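A common pattern on top of this is to retry transient failures with exponential backoff. The withRetry helper below is a generic sketch, not part of the library:

```javascript
// Retry an async operation with exponential backoff.
// Delays grow as baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt += 1) {
    try {
      return await fn();
    } catch (error) {
      // Out of attempts: surface the last error to the caller.
      if (attempt === retries) throw error;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage: `const response = await withRetry(() => client.generate({ service: 'openai', model: 'gpt-4o', messages }));`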
API Reference
Base URL
The base URL for all API requests is:
https://embedapi.com/api/v1
Endpoints
- POST /generate: Generates AI responses based on the provided input.
- GET /models: Lists all available models.
- GET /test: Tests the API connection.
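If you prefer to call the endpoints directly, a raw request might look like the sketch below. Note that the Bearer-token Authorization header is an assumption, not confirmed by this documentation; check your dashboard for the exact authentication scheme:

```javascript
// Sketch of a raw HTTP call to POST /generate using the built-in fetch API.
// The Authorization header scheme here is an assumption.
const BASE_URL = 'https://embedapi.com/api/v1';

function buildGenerateRequest(apiKey, payload) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  };
}

// Example (requires a real API key and network access):
// const res = await fetch(`${BASE_URL}/generate`, buildGenerateRequest(apiKey, {
//   service: 'openai',
//   model: 'gpt-4o',
//   messages: [{ role: 'user', content: 'Hello' }],
// }));
```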
Example Project
Here is a quick example of using EmbedAPIClient in a Node.js project:
const EmbedAPIClient = require('@embedapi/core');
const apiKey = 'your-api-key-here';
const client = new EmbedAPIClient(apiKey);
async function main() {
  try {
    // Test API connection
    const isConnected = await client.testAPIConnection();
    console.log('API Connection Successful:', isConnected);

    // List available models
    const models = await client.listModels();
    console.log('Available Models:', models);

    // Generate text
    const response = await client.generate({
      service: 'openai',
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Hello' }],
      maxTokens: 100
    });
    console.log('Generated Response:', response);
  } catch (error) {
    console.error('An error occurred:', error);
  }
}

main();
Contributing
We welcome contributions! Please feel free to submit a pull request or open an issue if you find a bug or have suggestions for improvements.
License
This project is licensed under the MIT License. See the LICENSE file for details.