ai-embedapi
v1.0.1
A powerful JavaScript SDK to interact with multiple AI models (OpenAI, Anthropic, VertexAI, XAI) for text generation and other AI capabilities using the EmbedAPI service.
AI Embed API - JavaScript SDK
A JavaScript SDK for interacting with the AI Embed API platform.
Installation
To install the SDK, run:
npm install ai-embedapi
Getting Started
Here's a quick example to get you started with the SDK.
Import the SDK
const OneAIAPI = require('ai-embedapi');
Initialize the SDK
You need to instantiate the OneAIAPI class by providing your API key. You can obtain your API key from EmbedAPI.
const oneAI = new OneAIAPI('YOUR_API_KEY_HERE');
Example Usage
Text Generation Example (OpenAI)
(async () => {
  try {
    const response = await oneAI.generateText({
      service: 'openai',
      model: 'gpt-4o',
      messages: [
        { role: 'user', content: 'Hello, can you tell me about AI trends?' }
      ],
      maxTokens: 500,
      temperature: 0.7
    });
    console.log('Generated Response:', response);
  } catch (error) {
    console.error('Failed to generate text:', error);
  }
})();
Parameters
- service (required): The AI service provider: openai, anthropic, vertexai, or xai.
- model (required): The model to use for text generation (e.g., gpt-4o, claude-3-5-sonnet-20241022).
- messages (required): The array of messages to send to the model, typically for chat completion.
- maxTokens (optional): The maximum number of tokens to generate.
- temperature (optional): Controls randomness. Higher values (e.g., 0.9) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.
- topP (optional): An alternative to sampling with temperature, known as nucleus sampling.
- frequencyPenalty (optional): Number between -2.0 and 2.0. Positive values penalize new tokens based on their frequency so far.
- presencePenalty (optional): Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.
- stopSequences (optional): Up to 4 sequences where the API will stop generating further tokens.
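As an illustration, the optional sampling parameters above can be combined in a single request. The parameter values below are arbitrary examples, and the actual call (commented out) requires a valid API key:

```javascript
// Example request options combining the optional sampling parameters.
// Values are illustrative, not recommendations.
const options = {
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku about the sea.' }],
  maxTokens: 60,
  temperature: 0.2,        // low temperature: focused, deterministic output
  topP: 0.9,               // nucleus sampling: top 90% of probability mass
  frequencyPenalty: 0.5,   // discourage repeating frequent tokens
  presencePenalty: 0.0,    // no penalty for tokens already present
  stopSequences: ['\n\n']  // stop at the first blank line (max 4 sequences)
};

// const response = await oneAI.generateText(options);
console.log(Object.keys(options).join(', '));
```

Note that temperature and topP both shape sampling; it is common practice to adjust one of them rather than both at once.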
Models Supported
The SDK allows you to use the following AI models:
OpenAI
- gpt-4o: Advanced model with support for multiple modalities.
- gpt-3.5-turbo: Fast and efficient, suitable for lightweight tasks.
- gpt-3.5-turbo-16k: Extended context window version of GPT-3.5 Turbo.
- o1-preview: Preview model designed for complex reasoning and problem-solving tasks.
- o1-mini: Cost-efficient reasoning model for coding and tasks requiring limited world knowledge.
Anthropic
- claude-3-5-sonnet-20241022: Balanced performance model.
- claude-3-haiku-20240307: Fast response model for simple queries.
- claude-3-opus-20240229: Powerful AI model for complex tasks.
VertexAI
- gemini-1.5-pro: Advanced model from Google for a wide range of tasks.
- gemini-1.5-flash: Faster variant of Gemini 1.5 Pro.
- Other Vertex models from the vertexModels list.
XAI
- grok-beta: Latest model from XAI focused on conversational AI.
Error Handling
The SDK provides detailed error messages to help you understand any issues that arise during text generation. Errors are thrown with details such as status codes and messages to help debug effectively.
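A defensive pattern for surfacing those details is sketched below. The exact shape of the thrown error (for example, a status field alongside message) is an assumption here, not something this documentation specifies; adjust the field names to match what the SDK actually throws:

```javascript
// Summarize an error thrown by generateText into a single log line.
// The `status` and `message` fields are assumptions about the error shape.
function describeError(error) {
  const status = error.status ?? 'unknown';
  const message = error.message ?? String(error);
  return `Request failed (status: ${status}): ${message}`;
}

// Usage inside a catch block:
//   try { await oneAI.generateText({ ... }); }
//   catch (err) { console.error(describeError(err)); }
console.log(describeError({ status: 429, message: 'Rate limit exceeded' }));
```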
Example with Different AI Models
Using Anthropic
(async () => {
  try {
    const response = await oneAI.generateText({
      service: 'anthropic',
      model: 'claude-3-5-sonnet-20241022',
      messages: [
        { role: 'user', content: 'Explain quantum mechanics in simple terms.' }
      ],
      maxTokens: 400,
      temperature: 0.5
    });
    console.log('Generated Response:', response);
  } catch (error) {
    console.error('Failed to generate text:', error);
  }
})();
Using VertexAI
(async () => {
  try {
    const response = await oneAI.generateText({
      service: 'vertexai',
      model: 'gemini-1.5-pro',
      messages: [
        { role: 'user', content: 'What are the benefits of renewable energy?' }
      ],
      maxTokens: 300,
      temperature: 0.6
    });
    console.log('Generated Response:', response);
  } catch (error) {
    console.error('Failed to generate text:', error);
  }
})();
API Cost Calculation
The API calculates the cost of each request based on the model and the number of tokens used.
- The SDK provides a calculateCost method internally to estimate the cost based on token usage.
- Costs vary depending on the service and model used.
- For example, OpenAI's GPT-4o costs 0.03 per thousand tokens.
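The internal calculateCost method is not documented here, but the per-thousand-token arithmetic it implies can be sketched as follows. The 0.03 rate for GPT-4o is taken from the text above; treat it as an example figure rather than current pricing:

```javascript
// Estimate request cost from token usage and a per-1K-token rate.
// Real pricing varies by service and model, and providers typically
// price input and output tokens differently.
function estimateCost(tokens, ratePerThousand) {
  return (tokens / 1000) * ratePerThousand;
}

console.log(estimateCost(500, 0.03)); // ≈ 0.015 for a 500-token GPT-4o request
```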
Testing
To run the tests, use:
npm test
License
This SDK is released under the MIT License.
Contributing
We welcome contributions! Please read our Contributing Guide for more details on how to contribute to the project.
Issues
If you encounter any issues, please report them on our GitHub Issues page.
Contact
For questions or support, please reach out to [email protected].