intellinode
v1.8.8
Evaluate and integrate with latest AI models including ChatGPT, Llama, Diffusion, Cohere, Gemini, and Hugging Face.
Intelligent Node (IntelliNode)
Unified prompt, evaluation, and production integration to any AI model
Integrate your data with the latest language models and deep learning frameworks using the IntelliNode JavaScript library. The library provides intuitive functions for sending input to models like ChatGPT, WaveNet, and Stable Diffusion, and for receiving generated text, speech, or images. With just a few lines of code, you can access the power of cutting-edge AI models to enhance your projects.
Latest Updates
- Add Anthropic Claude 3 chat.
- Add Google Gemini chat and vision.
- Add Mistral SMoE model as a chatbot provider (open-source mixture of experts).
- Improve Llama v2 chat speed and support Llama code models. 🦙
- Update Stable Diffusion to use the XL model engine. 🎨
- Add support for Hugging Face inference. 🤗
- Support in-memory semantic search. 🔍
Join the Discord server for the latest updates and community support.
Chat with your docs using a single IntelliNode key at app.intellinode.ai.
Examples
Functions
Chatbot
- imports:
const { Chatbot, ChatGPTInput } = require('intellinode');
- call:
// set the ChatGPT system message and the user message.
const input = new ChatGPTInput('You are a helpful assistant.');
input.addUserMessage('What is the distance between the Earth and the Moon?');
// get ChatGPT responses.
const bot = new Chatbot(apiKey);
const responses = await bot.chat(input);
Google Gemini Chatbot
IntelliNode enables effortless swapping between AI models.
- imports:
const { Chatbot, GeminiInput, SupportedChatModels } = require('intellinode');
- call:
const input = new GeminiInput();
input.addUserMessage('Who painted the Mona Lisa?');
// get the api key from makersuite.google.com/app/apikey
const geminiBot = new Chatbot(geminiApiKey, SupportedChatModels.GEMINI);
const responses = await geminiBot.chat(input);
The documentation on how to switch between ChatGPT, Mistral, Anthropic, and Llama can be found in the IntelliNode Wiki.
Semantic Search
- imports:
const { SemanticSearch } = require('intellinode');
- call:
const search = new SemanticSearch(apiKey);
// pivotItem: the item to search for; searchArray: the list of candidates.
const results = await search.getTopMatches(pivotItem, searchArray, numberOfMatches);
const filteredArray = search.filterTopMatches(results, searchArray);
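Under the hood, semantic search ranks candidates by embedding similarity. Below is a minimal pure-JavaScript sketch of that idea using hypothetical embedding vectors and cosine similarity; the real SemanticSearch obtains embeddings from the configured provider, so the vectors and helper names here are illustrative only:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank candidate vectors against a pivot and keep the top n,
// mirroring what a getTopMatches-style call does with embeddings.
function topMatches(pivot, candidates, n) {
  return candidates
    .map((vec, index) => ({ index, score: cosine(pivot, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, n);
}

const pivot = [1, 0, 0];
const candidates = [[0.9, 0.1, 0], [0, 1, 0], [1, 0, 0.05]];
console.log(topMatches(pivot, candidates, 2).map(m => m.index)); // [ 2, 0 ]
```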
Gen
- imports:
const { Gen } = require('intellinode');
- call:
// one line to generate a blog post
const blogPost = await Gen.get_blog_post(prompt, openaiApiKey);
// or generate HTML page code
const text = 'a registration page with a flat modern theme.';
await Gen.save_html_page(text, folder, file_name, openaiKey);
// or convert CSV data to charts
const csv_str_data = '<your csv as string>';
const topic = '<the csv topic>';
const htmlCode = await Gen.generate_dashboard(csv_str_data, topic, openaiKey, 2 /* num_graphs */);
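The dashboard generator takes the CSV as a plain string. A small helper sketch, independent of IntelliNode, for turning row objects into such a string (the header-row layout is our assumption about typical CSV input, and the helper name is ours):

```javascript
// Convert an array of row objects into a CSV string:
// a header line followed by comma-separated value rows.
function toCsvString(rows) {
  const headers = Object.keys(rows[0]);
  const lines = rows.map(row => headers.map(h => row[h]).join(','));
  return [headers.join(','), ...lines].join('\n');
}

const csvData = toCsvString([
  { month: 'Jan', sales: 120 },
  { month: 'Feb', sales: 95 },
]);
console.log(csvData);
// month,sales
// Jan,120
// Feb,95
```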
Models Access
Image models
- imports:
const { RemoteImageModel, SupportedImageModels, ImageModelInput } = require('intellinode');
- call DALL·E:
const provider = SupportedImageModels.OPENAI;
const imgModel = new RemoteImageModel(apiKey, provider);
const images = await imgModel.generateImages(new ImageModelInput({
prompt: 'teddy writing a blog in times square',
numberOfImages: 1
}));
- change to call Stable Diffusion:
const provider = SupportedImageModels.STABILITY;
// ... same code
Language models
- imports:
const { RemoteLanguageModel, LanguageModelInput } = require('intellinode');
- call openai model:
const langModel = new RemoteLanguageModel('openai-key', 'openai');
const model_name = 'gpt-3.5-turbo-instruct';
const results = await langModel.generateText(new LanguageModelInput({
prompt: 'Write a product description for smart plug that works with voice assistant.',
model: model_name,
temperature: 0.7
}));
console.log('Generated text:', results[0]);
- change to call cohere models:
const langModel = new RemoteLanguageModel('cohere-key', 'cohere');
const model_name = 'command';
// ... same code
Speech Synthesis
- imports:
const { RemoteSpeechModel, Text2SpeechInput } = require('intellinode');
- call google model:
const speechModel = new RemoteSpeechModel('google-key', 'google');
const audioContent = await speechModel.generateSpeech(new Text2SpeechInput({
text: text,
language: 'en-gb'
}));
Hugging Face Inference
- imports:
const { HuggingWrapper } = require('intellinode');
- call any model id
const inference = new HuggingWrapper('HF-key');
const result = await inference.generateText(
  'facebook/bart-large-cnn', // modelId
  { inputs: 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building...' } // data
);
The available Hugging Face functions: generateText, generateImage, processImage.
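Under the hood, hosted inference is an authenticated POST to the public Hugging Face Inference API endpoint. A sketch of the request shape (the endpoint pattern is the documented public one; the helper name is ours, not part of IntelliNode):

```javascript
// Build the request that a wrapper sends to the hosted Inference API.
// The Authorization header carries the Hugging Face token.
function buildInferenceRequest(modelId, data, token) {
  return {
    url: `https://api-inference.huggingface.co/models/${modelId}`,
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(data),
  };
}

const req = buildInferenceRequest(
  'facebook/bart-large-cnn',
  { inputs: 'The tower is 324 metres tall...' },
  'HF-key'
);
console.log(req.url);
// https://api-inference.huggingface.co/models/facebook/bart-large-cnn
```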
Check the samples for more code details including automating your daily tasks using AI.
Utilities
Prompt Engineering
Generate improved prompts using LLMs:
const { Prompt } = require('intellinode');

const promptTemp = await Prompt.fromChatGPT('fantasy image with ninja jumping across buildings', openaiApiKey);
console.log(promptTemp.getInput());
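Beyond LLM-generated prompts, prompts are often built from templates filled at runtime. A generic template-filling sketch, not an IntelliNode API, just the common pattern:

```javascript
// Replace ${name}-style placeholders in a template string with
// values from an object; unknown placeholders become empty strings.
function fillPrompt(template, values) {
  return template.replace(/\$\{(\w+)\}/g, (_, key) => values[key] ?? '');
}

const template = 'fantasy image with ${subject} ${action} across buildings';
console.log(fillPrompt(template, { subject: 'ninja', action: 'jumping' }));
// fantasy image with ninja jumping across buildings
```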
Azure OpenAI Access
To access OpenAI services from your Azure account, call the following function at the beginning of your application:
const { ProxyHelper } = require('intellinode');
ProxyHelper.getInstance().setAzureOpenai(resourceName);
Custom proxy
Check the code to access the chatbot through a proxy: proxy chatbot.
📕 Documentation
- IntelliNode Docs: Detailed documentation about IntelliNode.
- Showcase: Explore interactive demonstrations of IntelliNode's capabilities.
- Samples: Get started with IntelliNode using well-documented code samples.
- Model Evaluation: A swift approach to compare the performance of multiple large language models like GPT-4, Gemini, Llama, and Cohere.
- LLM as Microservice: For scalable production.
- Fine-tuning Tutorial: Learn how to tune LLMs with your data.
- Chatbot With Your Docs: Tutorial to augment any LLM provider with your docs and images.
Pillars
- The wrapper layer provides low-level access to the latest AI models
- The controller layer offers a unified input to any AI model by handling the differences, so you can switch between models like OpenAI and Cohere without changing your code.
- The function layer provides abstract functionality that extends based on the app's use cases. For example, an easy-to-use chatbot or marketing content generation utilities.
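The wrapper/controller split can be sketched in plain JavaScript: hypothetical wrapper objects with different low-level call shapes, unified behind one controller entry point. Names here are illustrative, not IntelliNode internals:

```javascript
// Low-level wrappers: each provider exposes its own call shape.
const openaiWrapper = {
  complete: prompt => `openai:${prompt}`,
};
const cohereWrapper = {
  generate: input => `cohere:${input}`,
};

// Controller layer: one generateText() entry point hides the
// per-provider differences, so callers can switch providers freely.
class LanguageController {
  constructor(provider) {
    this.provider = provider;
  }
  generateText(prompt) {
    if (this.provider === 'openai') return openaiWrapper.complete(prompt);
    if (this.provider === 'cohere') return cohereWrapper.generate(prompt);
    throw new Error(`unsupported provider: ${this.provider}`);
  }
}

// Switching providers changes one constructor argument, not the call site.
console.log(new LanguageController('openai').generateText('hi')); // openai:hi
console.log(new LanguageController('cohere').generateText('hi')); // cohere:hi
```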
IntelliNode is compatible with third-party integrations like LangChain and vector DBs.
License
Apache License
Copyright 2023 IntelliNode
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.