
ai-function-helper v3.5.2

A Node.js utility to simplify interaction with OpenAI GPT models.

AI Function Helper


Introduction

AI Function Helper is a powerful Node.js module that simplifies the integration of OpenAI's GPT models into your applications. It provides a structured way to interact with AI models, ensuring consistent and formatted responses.

Key Features

  • 🚀 Easy integration with OpenAI API
  • 📊 Structured input and output using JSON schemas
  • 🔄 Support for various AI models (GPT-3.5, GPT-4, etc.)
  • 🛠️ Function-like interface for AI interactions
  • 🔒 Built-in error handling and retries
  • 📡 Support for streaming responses
  • 🛡️ Protection against prompt hijacking
  • 🎛️ Customizable with tools and prompt variables
  • 🧠 Optional "thinking" process for complex tasks
  • 🖼️ Support for image inputs (vision models)
  • 🔍 Detailed debugging options

Installation

Install AI Function Helper using npm:

npm install ai-function-helper

Quick Start

Here's a simple example to get you started:

const { createAiFunctionInstance } = require('ai-function-helper');

// Create an instance with your OpenAI API key
const aiFunction = createAiFunctionInstance('your_api_key_here');

// Define your function
const options = {
  functionName: 'generate_haiku',
  model: 'gpt-3.5-turbo',
  args: { topic: 'spring' },
  description: 'Generate a haiku about the given topic.',
  outputSchema: {
    type: "object",
    properties: {
      haiku: { type: "string" }
    },
    required: ["haiku"]
  }
};

// Call the function
aiFunction(options)
  .then(result => console.log(result.haiku))
  .catch(error => console.error(error));

Basic Usage

To use AI Function Helper, you first need to create an instance with your OpenAI API key:

const { createAiFunctionInstance } = require('ai-function-helper');
const aiFunction = createAiFunctionInstance('your_api_key_here');

You can also use a custom endpoint URL:

const aiFunction = createAiFunctionInstance('your_api_key_here', 'https://api.openai.com/v1');

Alternatively, you can use an existing OpenAI instance:

const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: 'your_api_key_here' });
const aiFunction = createAiFunctionInstance(openai);

Once you have an instance, you can call AI functions by providing options:

const result = await aiFunction({
  functionName: 'example_function',
  model: 'gpt-4o',
  args: { param1: 'value1', param2: 'value2' },
  description: 'This is an example function.',
  outputSchema: {
    type: "object",
    properties: {
      result: { type: "string" }
    },
    required: ["result"]
  }
});

API Reference

The aiFunction takes an options object with the following properties:

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| functionName | string | Name of the AI function | 'custom_function' |
| args | object/string | Arguments for the function | - |
| description | string | Description of the function's purpose | - |
| outputSchema | object | Expected return type (JSON Schema or Zod schema) | - |
| strictReturn | boolean | Enforce strict return type validation | true |
| showDebug | boolean | Print debug information to console | false |
| debugLevel | number | Level of debug information (0-2) | 0 |
| temperature | number | Sampling temperature for the AI model | 0.6 |
| frequency_penalty | number | Frequency penalty for the AI model | 0 |
| presence_penalty | number | Presence penalty for the AI model | 0 |
| model | string | AI model to use | 'gpt-4o-mini' |
| max_tokens | number | Maximum number of tokens to generate | 1000 |
| top_p | number | Top p value for the AI model | null |
| blockHijack | boolean | Prevent prompt hijacking | false |
| blockHijackThrowError | boolean | Throw error on hijack attempt | false |
| tools | array | Helper functions to use within the main function | [] |
| stream | boolean | Enable response streaming | false |
| streamCallback | function | Callback for streamed responses | null |
| promptVars | object | Variables to use in the prompt | {} |
| images | string/array | Image URL(s) for vision models | null |
| imageQuality | string | Quality of image for vision models | 'low' |
| minifyJSON | boolean | Minify JSON output | false |
| history | array | Conversation history for context | [] |
| forceJsonMode | boolean | Force JSON mode for non-JSON models | false |
| timeout | number | Timeout for API calls (in milliseconds) | 120000 |
| maxRetries | number | Maximum number of retries for API calls | 0 |
| includeThinking | boolean | Include AI's thought process in debug output | false |

Key Concepts

  • outputSchema: Defines the expected structure of the AI function's output using JSON Schema or Zod schema. This ensures that the AI model returns data in the format your application expects.

  • tools: An array of helper functions that can be used within the main AI function. Each tool is an object with name, function_call, description, and parameters properties.

  • blockHijack: When enabled, this feature prevents the AI model from following instructions in user messages that attempt to override the function's intended behavior.

  • promptVars: Allows you to define variables that will be replaced in the function description, providing more flexibility in prompt engineering (see the sketch after this list).

  • images: Enables the use of image inputs for vision-capable models, expanding the types of tasks the AI can perform.

  • includeThinking: When set to true, this option includes the AI's thought process in the debug output, providing insight into how the AI arrives at its conclusions. This is particularly useful for complex problem-solving tasks and debugging.
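
As a rough illustration of promptVars, here is a minimal sketch. The ${variable} placeholder syntax and the substitution behavior are assumptions for illustration; check the library's documentation for the exact format it expects.

const options = {
  functionName: 'translate_text',
  args: { text: 'Hello, world!' },
  // Assumed behavior: promptVars values are substituted into the
  // description before the prompt is sent to the model.
  description: 'Translate the given text into ${language} using a ${tone} tone.',
  promptVars: {
    language: 'French',
    tone: 'formal'
  },
  outputSchema: {
    type: "object",
    properties: {
      translation: { type: "string" }
    },
    required: ["translation"]
  }
};

const result = await aiFunction(options);
console.log(result.translation);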

Advanced Features

Include Thinking Process

The includeThinking option allows you to capture the AI's thought process before it generates the final output. This feature provides several benefits:

  1. Improved Response Quality: By "thinking" before responding, the AI can organize its thoughts and provide more coherent and well-structured answers.

  2. Transparency: You can see the reasoning behind the AI's responses, which is useful for debugging and understanding how the AI arrived at its conclusions.

  3. Debugging Aid: The thinking process can be invaluable when fine-tuning prompts or troubleshooting unexpected outputs.

When enabled, the AI's thinking process is included in the debug output but is not returned as part of the final result. Here's how to use it:

const options = {
  functionName: 'complex_calculation',
  args: { expression: '15*87 + ( 129/ (48*0.5) ) +12' },
  description: 'Perform a complex mathematical calculation and show the steps.',
  outputSchema: {
    type: "object",
    properties: {
      result: { type: "number" }
    },
    required: ["result"]
  },
  includeThinking: true,
  showDebug: true // Set this to true to see the thinking process in the console
};

const result = await aiFunction(options);

In the debug output, you'll see the thinking process enclosed in <|start_of_thinking|> and <|end_of_thinking|> tags. For example:

--- Thinking Process ---
To solve the expression '15*87 + ( 129/ (48*0.5) ) +12', I'll break it down into steps:

1. First, let's solve the parentheses:
   (48*0.5) = 24
   
2. Now we can simplify the division:
   129 / 24 = 5.375

3. Let's calculate 15*87:
   15*87 = 1305

4. Now we have simplified the expression to:
   1305 + 5.375 + 12

5. Let's add these numbers:
   1305 + 5.375 = 1310.375
   1310.375 + 12 = 1322.375

Therefore, the final result is 1322.375.

--- Parsed JSON Output ---
{
  "result": 1322.375
}

Note that the thinking process is only visible in the debug output and does not affect the structure or content of the returned result. In this example, the AI breaks the complex calculation into manageable steps, making it easy to verify the result and understand the problem-solving approach. This level of detail is especially valuable for debugging, education, prompt refinement, or any task where the steps to reach a conclusion matter as much as the final answer.

Streaming

Enable streaming to process responses in real-time:

const options = {
  // ... other options ...
  stream: true,
  streamCallback: (chunk) => {
    console.log('Received chunk:', chunk);
  }
};

Using Stream with StreamCallback

The streaming feature allows you to process AI responses in real-time, which can be particularly useful for long-running tasks or when you want to provide immediate feedback to users. Here's an example of how to use the stream option with a streamCallback:

const { createAiFunctionInstance } = require('ai-function-helper');
const aiFunction = createAiFunctionInstance('your_api_key_here');

async function generateStory() {
  let story = '';

  const options = {
    functionName: 'generate_story',
    model: 'gpt-4o',
    args: {
      theme: 'space exploration',
      length: 'short'
    },
    description: 'Generate a short story about space exploration.',
    // 'outputSchema' is omitted here so the function returns plain text instead of JSON
    stream: true,
    streamCallback: (chunk) => {
      const content = chunk.choices[0]?.delta?.content;
      if (content) {
        story += content;
        console.log('Received chunk:', content);
        // You can update your UI here with the new content
      }
    }
  };

  try {
    const result = await aiFunction(options);
    // The result here will be the complete response
    console.log('Final story:', story);
    return story;
  } catch (error) {
    console.error('Error generating story:', error);
  }
}

generateStory();

In this example:

  1. We set stream: true in the options to enable streaming.
  2. We provide a streamCallback function that receives chunks of the response as they arrive.
  3. The callback function accumulates the story content and logs each chunk (you could update a UI element here instead).
  4. After the streaming is complete, we log the final story.

This approach allows you to handle the AI's response in real-time, which can be beneficial for:

  • Providing immediate feedback to users
  • Handling long-running tasks without timeouts
  • Implementing typewriter-like effects in user interfaces
  • Processing partial results as they become available

Remember that when using streaming, the final result returned by aiFunction will be the complete response, so you can still use it if needed.

Tools (Helper Functions)

Define helper functions to use within your main AI function:

const options = {
  // ... other options ...
  tools: [
    {
      name: "generate_password",
      function_call: ({ length = 5, passwordCount = 1 }) => {
        // Example implementation: random alphanumeric passwords
        const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
        return Array.from({ length: passwordCount }, () =>
          Array.from({ length }, () =>
            chars[Math.floor(Math.random() * chars.length)]
          ).join('')
        );
      },
      description: "Generate a random password",
      parameters: {
        type: "object",
        properties: {
          length: { type: "integer" },
          passwordCount: { type: "integer" }
        }
      }
    }
  ]
};
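
To show how a tool fits into a complete call, here is a hedged sketch; the function name, arguments, and schema are illustrative assumptions rather than part of the library's fixed API.

const result = await aiFunction({
  functionName: 'create_user_accounts', // hypothetical example function
  model: 'gpt-4o-mini',
  args: { count: 2 },
  description: 'Create user accounts and assign each one a password generated with the generate_password tool.',
  outputSchema: {
    type: "array",
    items: {
      type: "object",
      properties: {
        username: { type: "string" },
        password: { type: "string" }
      },
      required: ["username", "password"]
    }
  },
  tools: [ /* the generate_password tool defined above */ ]
});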

Prompt Hijack Protection

Enable protection against prompt hijacking:

const options = {
  // ... other options ...
  blockHijack: true,
  blockHijackThrowError: true // Optional: throw error instead of returning a message
};

Using Image Inputs

For vision-capable models, you can include image inputs:

const options = {
  // ... other options ...
  images: 'https://example.com/image.jpg',
  // or pass multiple images as an array:
  // images: ['https://example.com/image1.jpg', 'https://example.com/image2.jpg'],
  imageQuality: 'high'
};

Conversation History

Provide context from previous interactions:

const options = {
  // ... other options ...
  history: [
    { role: "user", content: "What's the weather like?" },
    { role: "assistant", content: "I'm sorry, but I don't have access to real-time weather information. Is there anything else I can help you with?" }
  ]
};
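
For multi-turn interactions, one pattern is to append each completed exchange to the history you pass on the next call. A minimal sketch, assuming messages use the { role, content } shape shown above (the function name and schema are illustrative):

const history = [];

async function ask(question) {
  const result = await aiFunction({
    functionName: 'chat_assistant', // hypothetical example function
    args: { question },
    description: 'Answer the user question, taking prior turns into account.',
    outputSchema: {
      type: "object",
      properties: {
        answer: { type: "string" }
      },
      required: ["answer"]
    },
    history
  });

  // Record both sides of the exchange so the next call has context
  history.push({ role: "user", content: question });
  history.push({ role: "assistant", content: result.answer });

  return result.answer;
}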

Examples

Here are some engaging examples that showcase the versatility and power of AI Function Helper:

1. Generate a Quiz

const options = {
  functionName: 'generate_quiz',
  model: 'gpt-4o',
  args: { topic: 'space exploration', difficulty: 'medium', num_questions: 2 },
  description: 'Generate a quiz with multiple-choice questions on the given topic.',
  outputSchema: {
    type: "array",
    items: {
      type: "object",
      properties: {
        question: { type: "string" },
        options: { 
          type: "array",
          items: { type: "string" },
          minItems: 4,
          maxItems: 4
        },
        correct_answer: { type: "string" }
      },
      required: ["question", "options", "correct_answer"]
    }
  }
};

const quiz = await aiFunction(options);
console.log(JSON.stringify(quiz, null, 2));

Expected output:

[
  {
    "question": "Which space agency launched the first artificial satellite, Sputnik 1?",
    "options": [
      "NASA",
      "Soviet Union",
      "European Space Agency",
      "China National Space Administration"
    ],
    "correct_answer": "Soviet Union"
  },
  {
    "question": "What year did the Apollo 11 mission successfully land humans on the Moon?",
    "options": [
      "1967",
      "1969",
      "1971",
      "1973"
    ],
    "correct_answer": "1969"
  }
]

2. Create a Recipe (using Zod)

const { z } = require('zod');

const options = {
  functionName: 'create_recipe',
  model: 'gpt-4o',
  args: { cuisine: 'Italian', main_ingredient: 'pasta', dietary_restriction: 'vegetarian' },
  description: 'Create a recipe based on the given cuisine, main ingredient, and dietary restriction.',
  outputSchema: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
    instructions: z.array(z.string()),
    prep_time: z.string(),
    cook_time: z.string(),
    servings: z.number().int()
  })
};

const recipe = await aiFunction(options);
console.log(JSON.stringify(recipe, null, 2));

Expected output:

{
  "name": "Vegetarian Pasta Primavera",
  "ingredients": [
    "12 oz penne pasta",
    "2 cups mixed vegetables (bell peppers, zucchini, carrots)",
    "1/4 cup olive oil",
    "3 cloves garlic, minced",
    "1/2 cup grated Parmesan cheese",
    "1/4 cup fresh basil, chopped",
    "Salt and pepper to taste"
  ],
  "instructions": [
    "Cook pasta according to package instructions. Reserve 1/2 cup pasta water.",
    "In a large skillet, heat olive oil over medium heat. Add minced garlic and sauté for 1 minute.",
    "Add mixed vegetables to the skillet and cook for 5-7 minutes until tender-crisp.",
    "Drain pasta and add it to the skillet with vegetables. Toss to combine.",
    "Add Parmesan cheese, basil, and pasta water as needed to create a light sauce.",
    "Season with salt and pepper to taste. Serve hot."
  ],
  "prep_time": "15 minutes",
  "cook_time": "20 minutes",
  "servings": 4
}

3. Analyze Sentiment of Customer Reviews

const options = {
  functionName: 'analyze_reviews',
  model: 'gpt-4o',
  args: {
    reviews: [
      "The product exceeded my expectations. Great value for money!",
      "Disappointed with the quality. Wouldn't recommend.",
      "Average product, nothing special but does the job."
    ]
  },
  description: 'Analyze the sentiment of customer reviews and categorize them.',
  outputSchema: {
    type: "array",
    items: {
      type: "object",
      properties: {
        review: { type: "string" },
        sentiment: { type: "string", enum: ["positive", "neutral", "negative"] },
        score: { type: "number", minimum: 0, maximum: 1 }
      },
      required: ["review", "sentiment", "score"]
    }
  }
};

const sentiment_analysis = await aiFunction(options);
console.log(JSON.stringify(sentiment_analysis, null, 2));

Expected output:

[
  {
    "review": "The product exceeded my expectations. Great value for money!",
    "sentiment": "positive",
    "score": 0.9
  },
  {
    "review": "Disappointed with the quality. Wouldn't recommend.",
    "sentiment": "negative",
    "score": 0.2
  },
  {
    "review": "Average product, nothing special but does the job.",
    "sentiment": "neutral",
    "score": 0.5
  }
]

4. Generate a Travel Itinerary (using Zod)

const { z } = require('zod');

const options = {
  functionName: 'create_travel_itinerary',
  model: 'gpt-4o',
  args: { destination: 'Tokyo', duration: 3, interests: ['technology', 'culture', 'food'] },
  description: 'Create a daily travel itinerary for the specified destination and duration, considering the traveler\'s interests.',
  outputSchema: z.object({
    destination: z.string(),
    duration: z.number().int(),
    daily_plans: z.array(z.object({
      day: z.number().int(),
      activities: z.array(z.object({
        time: z.string(),
        activity: z.string(),
        description: z.string()
      }))
    }))
  })
};

const itinerary = await aiFunction(options);
console.log(JSON.stringify(itinerary, null, 2));

Expected output:

{
  "destination": "Tokyo",
  "duration": 3,
  "daily_plans": [
    {
      "day": 1,
      "activities": [
        {
          "time": "09:00",
          "activity": "Visit Akihabara",
          "description": "Explore the technology and electronics district, known for its gadgets and anime culture."
        },
        {
          "time": "13:00",
          "activity": "Lunch at a Robot Restaurant",
          "description": "Experience a unique dining experience with robot performances."
        },
        {
          "time": "15:00",
          "activity": "Tour the Miraikan Science Museum",
          "description": "Discover cutting-edge technology and scientific innovations at this interactive museum."
        }
      ]
    },
    {
      "day": 2,
      "activities": [
        {
          "time": "10:00",
          "activity": "Visit Senso-ji Temple",
          "description": "Explore Tokyo's oldest Buddhist temple and experience traditional Japanese culture."
        },
        {
          "time": "14:00",
          "activity": "Tea Ceremony in Hamarikyu Gardens",
          "description": "Participate in a traditional Japanese tea ceremony in a beautiful garden setting."
        },
        {
          "time": "18:00",
          "activity": "Dinner at Tsukiji Outer Market",
          "description": "Enjoy fresh sushi and local delicacies at the world-famous fish market area."
        }
      ]
    },
    {
      "day": 3,
      "activities": [
        {
          "time": "09:00",
          "activity": "Visit teamLab Borderless",
          "description": "Immerse yourself in a digital art museum that blends technology and creativity."
        },
        {
          "time": "13:00",
          "activity": "Ramen Tour in Shinjuku",
          "description": "Sample various styles of ramen at some of Tokyo's best ramen shops."
        },
        {
          "time": "16:00",
          "activity": "Shopping in Ginza",
          "description": "Explore high-end technology stores and experience Japanese retail innovation."
        }
      ]
    }
  ]
}

5. Analyze Stock Market Data

const options = {
  functionName: 'analyze_stock',
  model: 'gpt-4o',
  args: { symbol: 'AAPL', timeframe: '1 year' },
  description: 'Analyze the stock performance and provide insights based on the given symbol and timeframe.',
  outputSchema: {
    type: "object",
    properties: {
      symbol: { type: "string" },
      currentPrice: { type: "number" },
      yearlyPerformance: { type: "number" },
      technicalIndicators: {
        type: "object",
        properties: {
          RSI: { type: "number" },
          MACD: {
            type: "object",
            properties: {
              value: { type: "number" },
              signal: { type: "number" },
              histogram: { type: "number" }
            },
            required: ["value", "signal", "histogram"]
          }
        },
        required: ["RSI", "MACD"]
      },
      recommendation: { type: "string", enum: ["Buy", "Hold", "Sell"] }
    },
    required: ["symbol", "currentPrice", "yearlyPerformance", "technicalIndicators", "recommendation"]
  }
};

const stockAnalysis = await aiFunction(options);
console.log(JSON.stringify(stockAnalysis, null, 2));

Expected output:

{
  "symbol": "AAPL",
  "currentPrice": 178.25,
  "yearlyPerformance": 0.35,
  "technicalIndicators": {
    "RSI": 62.5,
    "MACD": {
      "value": 2.1,
      "signal": 1.8,
      "histogram": 0.3
    }
  },
  "recommendation": "Buy"
}

These examples demonstrate how to use AI Function Helper for various tasks, from content generation to data analysis. Each example includes a detailed outputSchema (alternating between JSON Schema and Zod formats), ensuring structured and validated output from the AI model.

Best Practices

  1. Use Specific Function Names: Choose clear and descriptive function names to help the AI understand the context.

  2. Provide Detailed Descriptions: The more context you provide in the description, the better the AI can understand and perform the task.

  3. Define Precise Return Schemas: Use a detailed outputSchema to ensure you get the exact data structure you need.

  4. Utilize Tools for Complex Tasks: For tasks that require specific calculations or external data, define custom tools to handle these aspects.

  5. Handle Errors Gracefully: Use try-catch blocks and consider setting appropriate timeout and maxRetries values for robust error handling (see the sketch after this list).

  6. Optimize for Token Usage: Be mindful of the length of your prompts and consider using minifyJSON for large outputs to reduce token consumption.

  7. Use Streaming for Long Responses: For tasks that may generate long responses, consider using the streaming option to process the response in real-time.

  8. Leverage Conversation History: For multi-turn interactions, use the history option to provide context from previous exchanges.
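
Here is a minimal sketch of the error-handling advice from point 5, using the timeout and maxRetries options from the table above (the values and function name are illustrative):

try {
  const result = await aiFunction({
    functionName: 'summarize_text', // hypothetical example function
    args: { text: 'A long document to summarize...' },
    description: 'Summarize the given text in three sentences.',
    outputSchema: {
      type: "object",
      properties: {
        summary: { type: "string" }
      },
      required: ["summary"]
    },
    timeout: 30000, // give up on a call after 30 seconds
    maxRetries: 2   // retry failed API calls up to twice
  });
  console.log(result.summary);
} catch (error) {
  // Decide here whether to fall back, alert the user, or rethrow
  console.error('AI call failed after retries:', error.message);
}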

FAQ

Q: Can I use this module with other AI providers?
A: Currently, AI Function Helper is designed to work with OpenAI's models. Support for other providers may be added in future versions.

Q: How can I debug if I'm not getting the expected output?
A: Enable debugging by setting showDebug: true and adjusting the debugLevel. This will provide more information about the API calls and responses.

Q: Is this module suitable for production use?
A: Yes, but always ensure you have proper error handling and respect rate limits set by OpenAI.

Q: Can I use this for streaming large amounts of data?
A: Yes, you can use the stream option for handling large responses efficiently.

Q: How does the module handle API keys securely?
A: The module does not handle API key storage or security. It's your responsibility to securely manage and provide the API key when creating an instance.

Tests

We have conducted extensive tests on various AI models to evaluate their ability to generate JSON outputs of varying complexity while adhering to specified formats. These tests help demonstrate the versatility and reliability of the AI Function Helper module across different AI models.

Test Methodology

To ensure comprehensive testing across a wide range of AI models, including those not directly provided by OpenAI, we utilized LiteLLM as a proxy. LiteLLM is a powerful tool that provides a unified interface for various AI providers and local models (via Ollama), offering an OpenAI-compatible endpoint URL. This approach allowed us to seamlessly integrate and test multiple AI models with our AI Function Helper, demonstrating its flexibility and broad compatibility.
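
Because LiteLLM exposes an OpenAI-compatible endpoint, pointing AI Function Helper at it only requires the custom endpoint URL shown in Basic Usage. A sketch, assuming a LiteLLM proxy running locally on its default port (the URL, key, and routed model are assumptions for illustration):

const { createAiFunctionInstance } = require('ai-function-helper');

// Point the instance at a local LiteLLM proxy instead of the OpenAI API
const aiFunction = createAiFunctionInstance('your_litellm_key_here', 'http://localhost:4000/v1');

const result = await aiFunction({
  functionName: 'detect_language',
  model: 'claude-3-haiku-20240307', // any model the proxy routes to
  args: { text: 'Bonjour tout le monde' },
  description: 'Detect the language of the given text.',
  outputSchema: {
    type: "object",
    properties: {
      language: { type: "string" }
    },
    required: ["language"]
  }
});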

Test Summary

| Model | Success Rate | Average Duration |
|-------|--------------|------------------|
| fireworks/llama-v3p1-405b-instruct | 100.00% | 16887.67ms |
| groq/llama-3.1-70b-versatile | 100.00% | 2154.89ms |
| claude-3-haiku-20240307 | 100.00% | 3175.72ms |
| gpt-3.5-turbo | 88.89% | 3398.67ms |
| gpt-4o-mini | 100.00% | 5699.72ms |
| gpt-4o | 100.00% | 5673.00ms |
| claude-3-5-sonnet-20240620 | 100.00% | 5940.50ms |
| gemini-1.5-flash | 88.89% | 5150.00ms |
| gemini-1.5-pro | 100.00% | 10066.06ms |
| gemma2:9b (ollama) | 100.00% | 13368.94ms |

Test Categories

The tests cover a wide range of functionalities, from simple calculations to complex data generation and analysis. Some of the test categories include:

  1. Basic operations (e.g., complex calculations, prime number generation)
  2. Text processing (e.g., grammar correction, language detection)
  3. Data generation (e.g., fake people generation, quiz creation)
  4. Complex data analysis (e.g., stock market analysis, social media campaign analysis)
  5. Creative tasks (e.g., recipe creation, short story generation)
  6. Complex JSON generation (e.g., nested structures, arrays of objects)

Detailed Results

For detailed results of each test case and model performance, refer to the test result files in the GitHub repository.

Running the Tests

If you want to run the tests yourself or contribute to improving them, you can find the test script in our GitHub repository.

These tests demonstrate the AI Function Helper's capability to work with various AI models and handle a wide range of task complexities. They also showcase the module's ability to enforce structured outputs, making it easier to integrate AI-generated content into your applications.

Some tests are "stupidly" complex and are designed to push the limits of the AI models. These tests are not meant to be practical but rather to demonstrate the AI Function Helper's ability to handle challenging scenarios. Most of the failed tests can be successfully completed by giving the AI model more context or refining the input prompts.

By leveraging LiteLLM, we've expanded the compatibility of our AI Function Helper beyond OpenAI models, allowing users to work with a diverse array of AI providers and local models while maintaining a consistent interface. This approach not only broadens the applicability of our tool but also provides users with greater flexibility in choosing the AI models that best suit their specific needs and constraints.

Contributing

Contributions are welcome! If you'd like to contribute, please fork the repository and use a feature branch. Pull requests are warmly welcome.

License

AI Function Helper is open-sourced software licensed under the MIT license.