Hexabot Gemini Helper Extension

The Hexabot Gemini Helper Extension is a utility class designed to facilitate interaction with Google's Gemini AI API from other Hexabot extensions (such as plugins, channels, etc.). This helper allows developers to easily invoke the Gemini API and integrate advanced natural language understanding and response generation into Hexabot's chatbot builder.

Not yet familiar with Hexabot? It's an open-source chatbot/agent solution that allows users to create and manage AI-powered, multi-channel, and multilingual chatbots with ease. If you would like to learn more, please visit the official GitHub repo.

Features

  • API Integration: Seamlessly connect to Google's Gemini AI API, enabling other extensions within Hexabot to access Gemini's capabilities.
  • Configurable Settings: Configure parameters like model type, temperature, token count, penalties, and more for customized behavior.
  • Easy Integration: Use as a helper utility to invoke the Gemini API from any other extension within Hexabot.
  • Flexible Options: Supports various options such as response format, stop sequences, log probabilities, and more to customize the behavior of Gemini.

Prerequisites

Before setting up the Gemini Helper, you will need to generate an API token from Google’s Generative AI platform.

  1. Go to the Google Generative AI API page.
  2. Select "Develop in your own environment" to generate your API token.
  3. Once you have your API token, you can proceed to configure the plugin within Hexabot.
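
Optionally, you can verify the token before configuring Hexabot. The snippet below is only a minimal sketch and is not required by the extension; it assumes Node.js with Google's @google/generative-ai SDK installed, and the GEMINI_API_KEY variable name is just an example:

import { GoogleGenerativeAI } from '@google/generative-ai';

async function checkToken() {
  // Read the token from an environment variable (name chosen for this example).
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? '');
  // Use the same model the helper defaults to.
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });
  const result = await model.generateContent('Reply with the single word: pong');
  console.log(result.response.text());
}

checkToken().catch(console.error);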

Installation

To use the Gemini Helper Extension within Hexabot, follow these steps:

cd ~/projects/my-chatbot
npm install hexabot-helper-gemini
hexabot dev

Make sure you have the appropriate access credentials for the Gemini AI API.

Usage

The Gemini Helper can be used to generate responses based on user input or integrate into more complex workflows involving conversation history and system prompts. Here's an overview of how to use this helper:

Settings

The extension provides configurable settings that can be adjusted to suit your needs. Below are the available settings:

  • API Token: API token required for authentication.
  • Model: Specifies the model to use for generating responses (default: gemini-1.5-flash).
  • Temperature: The temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected (default: 0.8).
  • Candidate Count: Specifies the number of generated responses to return. Currently, this value can only be set to 1 (default: 1).
  • Max Output Tokens: Sets the maximum number of tokens to include in a candidate response (default: 1000).
  • Top-K: Changes how the model selects tokens for output. A topK of 1 means the selected token is the most probable among all the tokens in the model's vocabulary (greedy decoding), while a topK of 3 means that the next token is selected from among the 3 most probable using the temperature (default: 40).
  • Top-P: Changes how the model selects tokens for output. Tokens are selected from the most to least probable until the sum of their probabilities equals the topP value (default: 0.95).
  • Stop Sequences: Specifies the set of character sequences that will stop output generation. The stop sequence won't be included as part of the response.
  • Response MIME Type: Output response MIME type of the generated candidate text (text/plain or application/json).
  • Presence Penalty: Presence penalty applied to the next token's logprobs if the token has already been seen in the response (default: 0.0).
  • Frequency Penalty: Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far (default: 0.0).
  • Response LogProbs: If true, exports the logprobs results in the response (default: false).

These settings can be customized using the Hexabot admin interface or programmatically via the Hexabot API.
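
For reference, the documented defaults can be pictured as a single configuration object. This is only an illustrative sketch: the key names below follow the Gemini API's generation config and may not match Hexabot's internal setting identifiers.

// Illustrative only: the documented defaults gathered into one object.
const defaultGeminiSettings = {
  model: 'gemini-1.5-flash',       // Model
  temperature: 0.8,                // Randomness of token selection
  candidateCount: 1,               // Only 1 is currently supported
  maxOutputTokens: 1000,           // Cap on tokens in a candidate response
  topK: 40,                        // Sample among the 40 most probable tokens
  topP: 0.95,                      // Nucleus sampling threshold
  presencePenalty: 0.0,            // Penalty for tokens already seen in the response
  frequencyPenalty: 0.0,           // Penalty scaled by how often a token has appeared
  responseLogprobs: false,         // Do not export logprobs by default
  responseMimeType: 'text/plain',  // Example value; 'application/json' is also supported
  stopSequences: [] as string[],   // No default is documented; empty here for illustration
};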

Example Integration

To use the Gemini Helper, simply inject the GeminiLlmHelper class and use it as shown below:

// Retrieve the Gemini LLM helper registered with Hexabot's helper service
const geminiHelper = this.helperService.use(
  HelperType.LLM,
  GeminiLlmHelper,
);
// ...
// Generate a completion from the user's message, the configured model,
// a system prompt, and the conversation history
const text = await geminiHelper.generateChatCompletion(
  context.text,
  args.model,
  systemPrompt,
  history,
  {
    ...options,
    user: context.user.id,
  },
);

Contributing

We welcome contributions from the community! Whether you want to report a bug, suggest new features, or submit a pull request, your input is valuable to us.

Please refer to our contribution policy first: How to contribute to Hexabot

Contributor Covenant

Feel free to join us on Discord

License

This software is licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:

  1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
  2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).

Happy Chatbot Building!