LaMP

Language Model Prompter.

Query GPT models from the safety of your terminal. Unix-friendly for use within bash pipelines. Supports continuous conversations, like the OpenAI chat interface, but with the benefit of being able to switch models at will.

lamp/1.0.0

Usage:
  $ lamp [...flags]

Input will be taken from stdin by default, unless the '-p' or '-e' flags
are given. The model's response will be written to stdout.

Options:
  -m, --model <model>    Which GPT model to use (default: gpt-3.5-turbo)
  -p, --prompt <prompt>  Pass a prompt to the GPT model
  -e, --edit             Edit a prompt for the GPT model
  -c, --continue         Continue from the last message
  -v, --version          Display version number
  -h, --help             Display this message

Installation

npm install -g lamp
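
Once installed, the lamp command should be available on your PATH. A quick sanity check, using the version flag from the help text above:

$ lamp -v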

Configuration

export OPENAI_API_KEY=your_api_key_here

or

echo "your_api_key_here" > ~/.lamp-data/creds.txt

Usage

The model's response is always printed to stdout, with no embellishment, making lamp suitable for Unix-style piping. Prompts are read from stdin by default:

$ echo "Write me a poem about crows." > prompt.txt
$ cat prompt.txt | lamp
Crows in the sky,
Dark as the night.
Silent they fly,
In endless flight.
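
And since nothing but the response is written to stdout, lamp's output can be piped onward like any other command. A contrived example (output will vary from run to run):

$ lamp -p "Write me a haiku about crows." | wc -l
3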

Prompts can be given directly as an argument with the -p/--prompt flag, or you can bring up your configured EDITOR to write one with the -e/--edit flag. To change the model from the default gpt-3.5-turbo, use the -m/--model flag:

$ lamp -m gpt-4 -p "Write me a poem about crows."
In twilight skies, the crows emerge,
Their shadows dance, as darkness surge,
They caw and gather, velvet wing,
A murder's wary offering.
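
The -e flag is handy for longer prompts: it brings up your configured EDITOR, with the prompt presumably sent once you save and close the file. An illustrative invocation, assuming vim:

$ EDITOR=vim lamp -e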

To continue a conversation, simply call lamp again with the -c/--continue flag. This behaves the same as a one-off call, except that the current conversation is passed to GPT along with the prompt for context.

$ lamp -m gpt-4 -p "Write me a javascript function that lists all of the primes between 1 and n."
function listPrimes(n) {
  const primes = [];
  for (let i = 2; i <= n; i++) {
    let isPrime = true;
    for (let j = 2; j * j <= i; j++) {
      if (i % j === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) {
      primes.push(i);
    }
  }
  return primes;
}

$ lamp -c -p "Now modify it to return the primes in reverse order."
function listReversedPrimes(n) {
  const primes = [];
  for (let i = 2; i <= n; i++) {
    let isPrime = true;
    for (let j = 2; j * j <= i; j++) {
      if (i % j === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) {
      primes.push(i);
    }
  }
  return primes.reverse();
}

Note that in this last example, the first response was generated using gpt-4, but the second response was generated with the default gpt-3.5-turbo model. You can switch models mid-conversation whenever you like.
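
For example, you could hand the conversation above back to gpt-4 for its next turn by combining the -c and -m flags:

$ lamp -c -m gpt-4 -p "Now add comments explaining how it works."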

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Acknowledgement

Based on a funny article I read on Hacker News about using and abusing LLMs at the command line. I can't remember the link and couldn't find it again, so if you know the one I mean, let me know.