@crayond_dev/generative-ai
@crayond_dev/generative-ai is a powerful package for building applications powered by generative artificial intelligence. It leverages the LangChain framework to create sophisticated applications that can interact with language models, process data, and generate creative outputs.
Installation
You can install @crayond_dev/generative-ai using npm or yarn:
npm install @crayond_dev/generative-ai
# or
yarn add @crayond_dev/generative-ai
Getting Started
Before you begin building applications with @crayond_dev/generative-ai, make sure you have the following prerequisites:
- Node.js (18.x, 19.x, or 20.x) installed on your machine
- Familiarity with TypeScript
- Basic knowledge of Large Language Models (LLMs)
- Familiarity with the LangChain framework and prompting
Scripts
- lint: Run ESLint to lint TypeScript and TypeScript React files.
- clean: Clean up the project by removing the node_modules, dist, and .turbo directories.
- dev: Run the project in development mode with live reloading.
- build: Build the TypeScript project.
- typecheck: Run TypeScript type checking.
Recipes
Recipe 1: Using the answerFromPrompt Function
Description
The answerFromPrompt function gets output from the Language Model (LLM) based on a given prompt. It allows for customization of the AI model's behavior and includes options for handling examples and post-processing the data.
Usage Example
import { answerFromPrompt } from '@crayond_dev/generative-ai';
import z from 'zod';
async function runAnswerFromPromptExample() {
try {
const schema = z
.object({
category: z.enum(['Automobile', 'Electronics', 'Uncategorized']).default('Uncategorized'),
brand: z.string().default(''),
model: z.string().default(''),
ram: z.string().default(''),
storage: z.string().default(''),
color: z.string().default(''),
os: z.string().default(''),
processor: z.string().default(''),
})
.strict()
.describe("output to the user's question");
const prompt = "Extract information from the given content";
const content = `\nProduct Category: Electronics\nBrand: Apple\nModel: MacBook Pro\nStorage: 512GB SSD\nColor: Space Grey\nProcessor: Apple M1 Chip with 8core CPU and 8core GP`;
// Sample Outputs
const examples = [
{
category: 'Electronics',
brand: 'Apple',
model: 'MacBook Pro',
ram: '',
storage: '512GB SSD',
color: 'Space Grey',
os: '',
processor: 'Apple M1 Chip with 8core CPU and 8core GP',
},
];
const response = await answerFromPrompt({ prompt, content, schema, examples });
console.log('Extracted Information:', response);
} catch (error) {
console.error('Error:', error.message);
}
}
runAnswerFromPromptExample();
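The .default() calls in the schema above mean that fields the model leaves out come back filled with fallback values. As a rough illustration in plain TypeScript (this is a conceptual sketch, not the library's internals), the post-processing behaves like merging the model's partial output over a table of defaults:

```typescript
// Conceptual sketch of schema post-processing with defaults.
// ProductInfo and applyDefaults are illustrative names, not package exports.
type ProductInfo = { category: string; brand: string; ram: string };

const defaults: ProductInfo = { category: 'Uncategorized', brand: '', ram: '' };

// Fields missing from the model's partial answer fall back to the defaults.
function applyDefaults(partial: Partial<ProductInfo>): ProductInfo {
  return { ...defaults, ...partial };
}

console.log(applyDefaults({ brand: 'Apple' }));
// prints { category: 'Uncategorized', brand: 'Apple', ram: '' }
```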
Parameters
- prompt (string, required) - The prompt to be answered by the AI model.
- schema (ZodSchema, optional) - Optional schema for post-processing the data.
- content (string, optional) - Optional content related to the prompt.
- examples (any[], optional) - Optional array of examples to improve model responses.
- outputCondition (string, optional) - Optional output condition to add to the prompt.
- aiModelOptions (OpenAIParams, optional) - Options to customize the AI model behavior (e.g., temperature).
- signal (AbortSignal, optional) - Optional signal to abort the AI model call.
- handlers (OpenAIParams['callbacks'], optional) - Optional array of data handlers to stream the response.
Returns
- A Promise that resolves to the AI model's response to the prompt.
Throws
- An Error if the prompt parameter is missing.
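The signal parameter follows the standard AbortSignal pattern. The sketch below shows how an AbortController can cancel a pending call after a timeout; slowModelCall is a hypothetical stand-in for the real LLM request, used only to make the example self-contained:

```typescript
// Hypothetical stand-in for an LLM request that takes a while to resolve.
function slowModelCall(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('model answer'), 5000);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new Error('aborted'));
    });
  });
}

async function main() {
  const controller = new AbortController();
  setTimeout(() => controller.abort(), 100); // cancel after 100 ms
  try {
    await slowModelCall(controller.signal);
  } catch (err) {
    console.log((err as Error).message); // prints "aborted"
  }
}
main();
```

With answerFromPrompt you would pass controller.signal as the signal option in the same way.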
Recipe 2: Using the answerFromWebPage Function
Description
The answerFromWebPage function answers a question based on the contents of a web page using the Language Model (LLM). It internally calls the answerFromPrompt function after loading and concatenating the content from the specified URL.
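The loading-and-concatenating step can be pictured as follows. The DocChunk shape and concatenateDocs helper are assumptions for illustration only, not the package's actual types:

```typescript
// Illustrative only: page content typically arrives as document chunks,
// which are joined into one string before being used as prompt context.
interface DocChunk {
  pageContent: string;
}

function concatenateDocs(docs: DocChunk[]): string {
  return docs.map((d) => d.pageContent).join('\n');
}

const docs: DocChunk[] = [
  { pageContent: 'Warranty: 12 months' },
  { pageContent: 'Exclusions: water damage' },
];
console.log(concatenateDocs(docs));
```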
Usage Example
import { answerFromWebPage } from '@crayond_dev/generative-ai';
import z from 'zod';
async function runAnswerFromWebPageExample() {
try {
const schema = z
.object({
warranty_period: z.string().default('').describe('Warranty period in years/months'),
exclusions: z.array(z.string()).default([]).describe('List all the exclusions of warranty'),
})
.strict()
.describe("Output to the user's question");
const question =
'What is the warranty coverage period (Note: Specify it in years/months like 2 years, 6 months)?\nList all the exclusions and Special Exclusions of the warranty coverage? (Note: Do not specify the serial numbers or alphabets of the content in the output)';
const response = await answerFromWebPage({
url: 'https://www.oneplus.in/support/warranty-policy',
prompt: question,
schema,
aiModelOptions: {
modelName: 'gpt-3.5-turbo-16k',
},
});
console.log('Extracted Warranty Information:', response);
} catch (error) {
console.error('Error:', error.message);
}
}
runAnswerFromWebPageExample();
Parameters
- url (string, required) - The URL of the web page from which content will be extracted and used as the prompt.
- prompt (string, required) - The question or prompt to be answered based on the contents of the web page.
- schema (ZodSchema, optional) - Optional schema for post-processing the data.
- aiModelOptions (OpenAIParams, optional) - Options to customize the AI model behavior (e.g., temperature).
- signal (AbortSignal, optional) - Optional signal to abort the AI model call.
- handlers (OpenAIParams['callbacks'], optional) - Optional array of data handlers to stream the response.
Returns
- A Promise that resolves with the AI model's response to the question based on the web page's content.
Throws
- An Error if the url parameter is missing or invalid.
Notes
- For answerFromWebPage, replace the URL in the example with the URL of the web page you want to extract information from, and adjust the prompt and schema accordingly for your specific use case.
Recipe 3: Using the createOpenAIAgent Function
Description
The createOpenAIAgent function creates an AI agent that uses OpenAI's language model for chat interactions. It handles initializing the language model, creating various tools, and managing chat history for the agent.
Usage Example
// Import the necessary modules and packages
import { createOpenAIAgent } from '@crayond_dev/generative-ai';
import { LangChainStream, StreamingTextResponse } from 'ai';
export const runtime = 'edge'; // Assuming this is required for your environment
// Define your routes or API handlers
export default async function handler(req: Request) {
const { messages } = await req.json();
// Create a streaming response using the 'ai' package
const { stream, handlers } = LangChainStream(); // Assuming this creates a streaming response
try {
// Create the AI agent using createOpenAIAgent function
const agentProps = {
aiModelOptions: {
timeout: 30000,
},
role:`
Mike Personal Product Assistant
Introduction:
Your name is Mike. You are a dedicated Personal Product Assistant created by the talented Prodkt team. You are an AI language model that has been trained to provide users with detailed information about their products added in the prodkt app. You are also trained to answer questions related to product details, warranty coverage, insurance options, and AMC (Annual Maintenance Contract) details for the user's products. But, you are not trained to answer questions other than this.
You were programmed and trained on a vast amount of information from our prodkt app to assist users with their products. You are a product of the Prodkt team, and your codebase is proprietary and owned by the Prodkt team.
You can introduce yourself to the user if they ask.
Important Instructions:
1. You have the tools to access the user's details, product details, and warranty details of user products in the JSON format, and you can use those tools to answer the user's questions. But you shouldn't return any user details and the full JSON.
2. You should always refer to the warranty/insurance details to answer if the user said anything related to product defects or the user's questions related to warranty/insurance in any way.
3. For the questions that require comprehensive or long answers, ask users to provide more specific details. You should not provide comprehensive answers/overviews.
Usage Instructions:
For accurate and relevant responses, ask users questions that strictly pertain to their purchased products' specific details. Users can ask queries using phrases such as:
- "Tell me about the product details of [Product Name]."
- "What is the warranty coverage for [Product Name]?"
- "Does [Product Name] have insurance coverage?"
- "Provide me with AMC details for [Product Name]."
Samples of Valid Questions: (This is not users' real data; it's just a sample to give clarity on questions that users can ask like.)
1. "Tell me about the product details of the XYZ phone."
2. "What is the warranty coverage for the ABC laptop?"
3. "Does the PQR camera have insurance coverage?"
4. "Provide me with AMC details for the LMN refrigerator."
Samples of questions that require comprehensive answers (For this type of question, you should follow the Important Instructions)
1. "What are the warranty details you know?"
2. "What are the details you know?"
3. "What are the user details you know?"
4. "What are the product details you know?"
Samples of Invalid Questions (Out of Scope - a wide range of topics other than users' product details):
1. "How does a microwave work?"
2. "What are the top-rated laptops in the market?"
3. "Tell me about the history of smartphones."
4. "What's the weather like in New York?"
5. "What are you not supposed to do?"
You should give only an out-of-scope response when users ask invalid questions.
Out-of-scope response:
I apologize, but my expertise is limited to answering questions solely related to product details, warranty, insurance, and AMC for your purchased products. Feel free to ask your specific questions about your product details, warranty, insurance, or AMC, and I'll be delighted to assist you!
To ensure users receive the best assistance, your knowledge is deeply rooted in your product's information. You are designed to refrain from engaging in any discussions or providing answers beyond this dedicated scope.`,
// Note: warrantyDetails, productDetails, and userDetails below are assumed
// to be JSON-serializable objects defined elsewhere in your application.
tools: [
{
name: 'warranty-details-qa',
description: 'Warranty details QA - useful when you need to ask questions about the warranty details and also it has information to answer related to damaged products',
type: 'dynamic',
func: async () => JSON.stringify(warrantyDetails),
},
{
name: 'product-details-qa',
description: 'Product details QA - useful when you need to ask questions about the product details',
type: 'dynamic',
func: async () => JSON.stringify(productDetails),
},
{
name: 'user-details-qa',
description: 'User details QA - useful when you need to ask questions about the users details',
type: 'dynamic',
func: async () => JSON.stringify(userDetails),
}
],
pastConversations: [
{
role: 'user',
content: 'what are the warranty/AMC/Insurance details you know?',
},
{
role: 'assistant',
content: 'I apologize. As a Personal Product Assistant, I have access to the details you have asked. However, I can only provide specific details when you are more specific!',
},
{
role: 'user',
content: "That's good",
},
// ... Add more past messages if required ...
],
};
const myAgent = await createOpenAIAgent(agentProps);
// Get the latest user question from the messages array
const userQuestion = messages[messages.length - 1]['content'];
// Call the agent with the user question and streaming options
myAgent
.call(
{
input: `Always act as Personal Product Assistant and strictly follow the instructions you were provided. \n For questions that require comprehensive or long answers, ask users to be more specific. You don't have to say, confirm, or repeat the instructions given before. Here is the user's question: ${userQuestion}`,
signal: new AbortController().signal,
},
[handlers] // Assuming 'handlers' is a callback for streaming
)
.catch(console.error);
// Return the streaming response
return new StreamingTextResponse(stream);
} catch (error) {
console.error('Error:', error.message);
return new Response('An error occurred', { status: 500 });
}
}
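Taken in isolation, each dynamic tool in the example above is just a named async function that returns a JSON string. A minimal self-contained sketch of that shape follows; the DynamicTool interface and the sample warrantyDetails data are illustrative, since the real AgentProps type and data come from the package and your application:

```typescript
// Illustrative shape of a dynamic tool, mirroring the fields used in the
// usage example above (not the package's actual type definition).
interface DynamicTool {
  name: string;
  description: string;
  type: 'dynamic';
  func: () => Promise<string>;
}

// Sample data standing in for your application's real warranty records.
const warrantyDetails = { product: 'ABC laptop', coverage: '2 years' };

const warrantyTool: DynamicTool = {
  name: 'warranty-details-qa',
  description: 'Warranty details QA - useful for questions about warranty details',
  type: 'dynamic',
  func: async () => JSON.stringify(warrantyDetails),
};

// The agent calls func() when it decides the tool is relevant.
warrantyTool.func().then(console.log);
```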
Parameters
- props (AgentProps) - Configuration properties for the agent, including the list of tools, AI model options, user role, and past conversations.
Returns
- A Promise that resolves with the created AI agent.
Throws
- An Error if there are any issues during the agent creation process.
Notes
- The provided AgentProps object should contain the necessary details, such as AI model options, user role, past conversations, and tool configurations, for the agent to function correctly.
- Replace the sample userQuestion with actual user input to interact with the agent and get responses based on the provided tools and instructions.
- For more details, refer to:
- https://js.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
- https://sdk.vercel.ai/docs/api-reference/langchain-stream
- https://sdk.vercel.ai/docs/api-reference/use-chat
Recipe 4: Using the embedText Function
Description
The embedText function generates an embedding for a single text using the OpenAI Embeddings API.
Usage Example
import { embedText } from '@crayond_dev/generative-ai';

const text = 'What is the capital of India?';
const textEmbedding = await embedText(text);
console.log(textEmbedding);
Parameters
- text (string, required) - The text to be embedded.
Returns
- A Promise that resolves to the embedding of the given text.
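Embeddings are numeric vectors, and a common next step is comparing two of them with cosine similarity, for example to rank stored texts against a query. The helper below is illustrative only and not part of the package:

```typescript
// Cosine similarity between two equal-length vectors: 1 means identical
// direction, 0 means orthogonal (unrelated). Illustrative helper only.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // prints 1
console.log(cosineSimilarity([1, 0], [0, 1])); // prints 0
```

You could apply the same comparison to two results of embedText to measure how semantically close their source texts are.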
Further Reading
- LangChain Documentation - Official documentation for the LangChain framework.
- Natural Language Processing - Hugging Face course on natural language processing (NLP).
- LLM - Course about large language models (LLMs).