
promptleo-client

v1.0.1

Client library for Promptleo AI services

Downloads: 131

Promptleo JavaScript Client

A lightweight JavaScript client library for interacting with Promptleo AI services. The client provides a simple interface for generating text and images with Stable Diffusion, Llama, and other models through a unified API.

Installation

npm install promptleo-client

Quick Start

import PromptleoClient from "promptleo-client";

// Initialize the client with your API token
const client = new PromptleoClient({ token: "YOUR_API_TOKEN" });

// Generate an image
const imageResult = await client.generate({
  model: "stability-ai/stable-diffusion-xl-base-1.0",
  prompt: "A simple line drawing of an eagle",
});
console.log("Generated image URL:", imageResult.url);

// Generate text in a conversation format
const chatResult = await client.generate({
  model: "meta/llama-3.1-8b-instruct",
  messages: [{ role: "user", content: "Your question here" }],
});
console.log("Chat response:", chatResult.messages);

// Generate a text completion
const promptResult = await client.generate({
  model: "meta/llama-3.2-3b",
  prompt: "I believe the meaning of life is",
});
console.log("Generated text:", promptResult.messages);

Changelog

1.0.0 - Initial release.

API Reference

Constructor

const client = new PromptleoClient({ token: "YOUR_API_TOKEN" });
  • token (string, required): Your API authentication token (available on your account page).

Methods

generate(params)

Generic method to generate content using various AI models.

const result = await client.generate({
  model: string,    // required
  prompt?: string,  // required for image generation and text completion
  messages?: Array, // required for chat models
})

Parameters:

  • model (string, required): The model identifier
  • prompt (string, optional): The generation prompt; required for image-generation and text-completion requests.
  • messages (array, optional): Array of message objects for chat models.

Returns: A Promise that resolves to the API response object.
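The parameter rules above (a model is always required; a prompt for image generation and text completion, messages for chat) can be expressed as a small pre-flight check. This helper is not part of the client API; it is a hypothetical sketch of the validation those rules imply:

```javascript
// Hypothetical pre-flight check mirroring the documented parameter rules.
// Not part of promptleo-client -- an illustrative sketch only.
function validateGenerateParams(params) {
  if (!params || typeof params.model !== "string" || params.model.length === 0) {
    throw new Error("`model` is required and must be a non-empty string");
  }
  const hasPrompt = typeof params.prompt === "string";
  const hasMessages = Array.isArray(params.messages) && params.messages.length > 0;
  if (!hasPrompt && !hasMessages) {
    throw new Error("either `prompt` or `messages` must be provided");
  }
  return params;
}
```

Running the check before calling generate surfaces missing parameters locally instead of as an API error.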

Supported Models

| Name                              | Identifier                                     |
| --------------------------------- | ---------------------------------------------- |
| FLUX.1 [schnell]                  | black-forest-labs/flux.1-schnell               |
| Stable Diffusion v3.5 Large Turbo | stability-ai/stable-diffusion-v3.5-large-turbo |
| Stable Diffusion v3.5 Large       | stability-ai/stable-diffusion-v3.5-large       |
| Stable Diffusion XL Base 1.0      | stability-ai/stable-diffusion-xl-base-1.0      |
| Stable Diffusion 1.5              | stability-ai/stable-diffusion-v1.5             |
| Stable Diffusion 1.4              | stability-ai/stable-diffusion-v1.4             |
| Qwen2.5 14B Instruct              | qwen/qwen2.5-14b-instruct                      |
| Meta Llama 3.1 8B                 | meta/llama-3.1-8b                              |
| Meta Llama 3.1 8B Instruct        | meta/llama-3.1-8b-instruct                     |
| Meta Llama 3.2 3B                 | meta/llama-3.2-3b                              |
| Meta Llama 3.2 3B Instruct        | meta/llama-3.2-3b-instruct                     |
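The identifiers in the table can be kept in a lookup object so call sites use readable names. This is a convenience sketch, not part of the client API; the entries are transcribed directly from the table above:

```javascript
// Friendly-name -> identifier map, transcribed from the Supported Models table.
const MODELS = {
  "FLUX.1 [schnell]": "black-forest-labs/flux.1-schnell",
  "Stable Diffusion v3.5 Large Turbo": "stability-ai/stable-diffusion-v3.5-large-turbo",
  "Stable Diffusion v3.5 Large": "stability-ai/stable-diffusion-v3.5-large",
  "Stable Diffusion XL Base 1.0": "stability-ai/stable-diffusion-xl-base-1.0",
  "Stable Diffusion 1.5": "stability-ai/stable-diffusion-v1.5",
  "Stable Diffusion 1.4": "stability-ai/stable-diffusion-v1.4",
  "Qwen2.5 14B Instruct": "qwen/qwen2.5-14b-instruct",
  "Meta Llama 3.1 8B": "meta/llama-3.1-8b",
  "Meta Llama 3.1 8B Instruct": "meta/llama-3.1-8b-instruct",
  "Meta Llama 3.2 3B": "meta/llama-3.2-3b",
  "Meta Llama 3.2 3B Instruct": "meta/llama-3.2-3b-instruct",
};
```

A call site can then read `model: MODELS["Meta Llama 3.2 3B"]` instead of repeating raw identifier strings.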

Image Generation

Generates images from text descriptions (text-to-image).

const result = await client.generate({
  model: "stability-ai/stable-diffusion-xl-base-1.0",
  prompt: "Your image description",
});

Response format:

{
  url: string; // URL of the generated image
}

Text Generation

Generates text responses using messages in chat format.

const result = await client.generate({
  model: "meta/llama-3.1-8b-instruct",
  messages: [
    {
      role: "user",
      content: "Your question or prompt",
    },
  ],
});

Response format:

{
  messages: [
    {
      role: string,
      content: string,
    },
  ];
}
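Given that response shape, pulling out the latest reply is a one-liner. The helper below is hypothetical (not part of the client) and assumes the last entry in messages is the model's reply:

```javascript
// Hypothetical helper: extract the content of the last message in a
// chat-format response ({ messages: [{ role, content }, ...] }).
function lastReply(result) {
  const messages = (result && result.messages) || [];
  if (messages.length === 0) return null;
  return messages[messages.length - 1].content;
}
```

For example, `lastReply(chatResult)` returns just the reply text rather than the whole messages array.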

Text Completion

Generates a text completion from a base model and a prompt.

const result = await client.generate({
  model: "meta/llama-3.2-3b",
  prompt: "I believe the meaning of life is",
});

Response format:

{
  messages: [
    {
      generated_text: string,
    },
  ];
}

Error Handling

The client throws errors in the following cases:

  • Missing authentication token
  • Missing required parameters (model, prompt/messages)
  • API request failures
  • Network errors
  • Invalid responses

Example error handling:

try {
  const result = await client.generate({
    model: "stability-ai/stable-diffusion-xl-base-1.0",
    prompt: "An image description",
  });
} catch (error) {
  console.error("Generation failed:", error.message);
}
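Because transient API request failures and network errors are both possible, callers may want a retry loop around generate. The wrapper below is a sketch, not a client feature; the attempt count and backoff values are arbitrary, and a real policy would likely inspect the error before retrying:

```javascript
// Sketch of a retry wrapper with exponential backoff. Retries any thrown
// error up to `attempts` times, then rethrows the last error.
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage: `const result = await withRetry(() => client.generate({ model, prompt }));`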

Bug Reports and Feature Requests

Please file a ticket here.

Development

# Install dependencies
npm install

# Start development server
npm run dev

# Build for production
npm run build

# Run the sample code (set your API token in src/sample.js first)
npm run sample

License

Apache-2.0

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.