
bharatgpt v1.0.7

Node.js client for BharatGPT API


BharatGPT

Installation

To use BharatGPT, first install the package with the following command:

npm install bharatgpt

Initialization

After you've installed BharatGPT, you need to initialize it with your API key before making any requests. Here's how to do it:

For CommonJS (CJS):

const bharatgpt = require('bharatgpt');

bharatgpt.initializeConfig({ apiKey: 'your-api-key' });

For ES6 Modules (ESM):

import bharatgpt from 'bharatgpt';

bharatgpt.initializeConfig({ apiKey: 'your-api-key' });

Replace 'your-api-key' with your actual API key.
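
As a suggestion rather than a package requirement, you can avoid hard-coding the key by reading it from an environment variable. The variable name below is just an example:

// Example only: BHARATGPT_API_KEY is an arbitrary environment variable name,
// not something the package defines.
const bharatgpt = require('bharatgpt');

bharatgpt.initializeConfig({ apiKey: process.env.BHARATGPT_API_KEY });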

Making Requests

Use the llmV3 function to make requests to the API. It takes a single options object as its argument, which must include the following property:

  • q: The query you want to send to the API.

In addition, it can optionally have these properties (a combined example follows the list):

  • group_id: Used to create a session for a conversation, allowing the API to keep track of context and return relevant responses in follow-up exchanges.
  • followUp: Indicates whether you want a follow-up conversation. Set it to 0 for no follow-up and 1 for follow-up. If you set it to 1, you must also provide a group_id.
  • limit: Sets the maximum length of the response. Set it to 0 to let the system calculate the limit automatically.
  • regenerate_id: Allows you to regenerate the last response from a session.
  • timeout: Sets the maximum time to wait for a response from the API, in milliseconds.
  • model: The language model you want to use for generating the response.
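
Putting these together, a fully populated options object might look like the sketch below. All optional properties are shown together purely for illustration; in practice you would only pass the ones you need, and the values are placeholders:

const options = {
    q: 'How can you help me?',         // required: the query to send
    group_id: 'my-group-id',           // session identifier for follow-up conversations
    followUp: 1,                       // 1 to continue a session, 0 for no follow-up
    limit: 100,                        // maximum response length (0 = calculated automatically)
    regenerate_id: 'my-regenerate-id', // regenerate the last response from a session
    timeout: 5000,                     // maximum wait for a response, in milliseconds
    model: 'model-name'                // placeholder; see the BharatGPT API docs for supported models
};

const response = await bharatgpt.llmV3(options);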

Making a Basic Request

Here's a basic example of making a request without any optional parameters:

try {
    const response = await bharatgpt.llmV3({q: "How can you help me?"});
    console.log(response);
} catch (error) {
    console.error('An error occurred:', error.message);
}

In this example, llmV3 will use default values for all optional parameters.

Using Optional Parameters

You can provide any optional parameters you want in the options object. Here are examples for each optional parameter:

group_id and followUp:

try {
    const options = {
        q: "How you can help me?",
        group_id: 'my-group-id',
        followUp: 1
    };
    const response = await bharatgpt.llmV3(options);
    console.log(response);
} catch (error) {
    console.error('An error occurred:', error.message);
}

limit:

try {
    const options = {
        q: "How you can help me?",
        limit: 100
    };
    const response = await bharatgpt.llmV3(options);
    console.log(response);
} catch (error) {
    console.error('An error occurred:', error.message);
}

regenerate_id:

try {
    const options = {
        q: "How you can help me?",
        regenerate_id: 'my-regenerate-id'
    };
    const response = await bharatgpt.llmV3(options);
    console.log(response);
} catch (error) {
    console.error('An error occurred:', error.message);
}

timeout:

try {
    const options = {
        q: "How you can help me?",
        timeout: 5000 // 5 seconds
    };
    const response = await bharatgpt.llmV3(options);
    console.log(response);
} catch (error) {
    console.error('An error occurred:', error.message);
}
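
model (the model identifier below is a placeholder; this README does not list the supported values, so check the BharatGPT API documentation for real model names):

try {
    const options = {
        q: "How can you help me?",
        model: 'model-name' // placeholder identifier, not confirmed by this package
    };
    const response = await bharatgpt.llmV3(options);
    console.log(response);
} catch (error) {
    console.error('An error occurred:', error.message);
}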

Error Handling

Always handle potential errors when making requests. If the llmV3 function throws, the error will be an instance of Error whose message describes what went wrong, whether that is a message from the API itself or a network error.

The examples above show basic error handling with a try-catch block. This will catch any errors thrown by llmV3 and log the error message to the console. In a real application, you should adjust this to handle errors in a way that makes sense for your use case.
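
For example, a small wrapper such as the sketch below (not part of the package; the helper name and the choice to return null on failure are illustrative) can centralize that handling:

const bharatgpt = require('bharatgpt');

// Minimal sketch: wraps llmV3 so callers get the response on success and null on failure.
async function ask(options) {
    try {
        return await bharatgpt.llmV3(options);
    } catch (error) {
        // The message comes either from the API itself or from a network error.
        console.error('BharatGPT request failed:', error.message);
        return null;
    }
}

// Usage:
// const response = await ask({ q: 'How can you help me?' });
// if (response) console.log(response);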