
SLLM - Command Line ChatGPT-like Assistant

A command line interface for the OpenAI Large Language Models that emulates some features of ChatGPT.

UPDATE: GPT-4 Support is Here!

Note: This project now defaults to the GPT-3.5 Turbo model. You can access other models (including GPT-4) with the --model option.


In addition to providing a simple interface for talking with GPT, this tool also offers a few extra features built on top of the GPT APIs.

Extra Features:

  • Act as a chat bot (emulates ChatGPT)
  • Read local files
  • Automatically prepend subject domains & context (Bash, JS, Physics, etc.)
  • Expert Mode (act as an expert on some subject)
  • Explain it like I'm 5 (explain the answer simply)

Why Not Use ChatGPT?

You can do whatever you want :)

I made this for the following reasons:

  • Access LLMs without leaving the command line
  • Access LLMs without logging in to OpenAI (use a token instead)
  • Directly read and write local files
  • I don't always have easy access to a GUI

$ sllm what would be the advantage of talking to a LLM via the command line?

The advantage of talking to a LLM via the command line is that it allows for a more efficient and direct way of communicating. It also allows for more precise and specific commands to be used, which can help to quickly get the desired results.


Example Usage:

$ sllm what can be used to check the status of a running systemd service? -e bash scripting

The command to check the status of a running systemd service is "systemctl status <service_name>".

$ sllm how can I get the full log instead? -d bash -H 1

To get the full log of a running systemd service, you can use the command "journalctl -u <service_name>".

Install

npm install -g sllm

Setup

Get an OpenAI API key and export it as an environment variable:

export OPENAI_API_KEY=<your_api_key>
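This sets the key for the current shell session only. To persist it, you can append the same line to your shell profile (assuming bash; adjust for your shell):

echo 'export OPENAI_API_KEY=<your_api_key>' >> ~/.bashrc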

Quick Start

$ sllm how many people live in china? 

According to the latest estimates, there are approximately 1.4 billion people living in China.

Chat Mode

To enable a "chat mode" similar to ChatGPT, run the following command:

sllm .settings -H 3

This will remind the LLM about the last 3 prompts it was given.
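Once history is enabled, follow-up prompts can refer back to earlier ones. For example (responses omitted here; they will vary):

$ sllm who created the linux kernel?
$ sllm what year did they release it?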

Overview

$ sllm --help
Usage: sllm [options] [command]


         ____    
    ___ / / /_ _ 
   (_-</ / /  ' \
  /___/_/_/_/_/_/

CLI for OpenAI Large Language Models. v2.0.6
Created by Mathieu Dombrock 2023. GPL3 License.


Options:
  -V, --version                  output the version number
  -h, --help                     display help for command

Commands:
  .help                          show sllm help
  .prompt [options] <prompt...>  send a prompt (default command)
  .settings [options]            set a persistent command option
  .settings-view                 view the current settings that were changed via the `settings` command
  .settings-purge                purge the current settings that were changed via the `settings` command
  .history-view [options]        view the conversation history
.history-purge [options]       purge the conversation history
  .history-undo [options]        undo the conversation history
  .purge                         delete all history and settings
  .count [options]               estimate the tokens used by a prompt or file
  .repeat                        repeat the last response
  .models                        list the available models

Note: All commands are prefixed with "." to avoid conflicting with prompts!

Available Models

$ sllm .models

Available Models:
-------
text-davinci-002
-------
text-davinci-003
alias: gpt3
-------
gpt-3.5-turbo
alias: gpt3t
-------
gpt-4
alias: gpt4
beta: might require special access!
-------
gpt-4-32k
alias: gpt4b
beta: might require special access!
-------
code-davinci-002
beta: might require special access!
///////
You can specify a model with the -m option
More info: https://platform.openai.com/docs/models/
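For example, to send a single prompt to GPT-4 instead of the default model (assuming your API key has GPT-4 access):

$ sllm explain recursion in one sentence -m gpt4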

Prompt

$ sllm .prompt --help
Usage: sllm .prompt [options] <prompt...>

send a prompt (default command)

Arguments:
  prompt                      the prompt text

Options:
  -v, --verbose               verbose output
  -x, --max-tokens <number>   maximum tokens to use in response (default: "256")
  -X, --unlimited             do not limit tokens used in response
  -t, --temperature <number>  temperature to use (default: "0.2")
  -c, --context <string...>   context to prepend
  -d, --domain <string...>    subject domain to prepend
  -e, --expert <string...>    act as an expert on this domain
  -C, --code <language>       respond only with executable code (default: "JavaScript")
  -5, --like-im-five          explain it like I'm 5 years old
  -H, --history <number>      prepend history (chatGPT mode)
  -f, --file <path>           prepend the given file contents
  -T, --trim                  automatically trim the given file contents
  -m, --model <model-name>    specify the model name (default: "gpt-3.5-turbo")
  --mock                      don't actually send the prompt to the API
  -h, --help                  display help for command
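
These options can be combined in one invocation. For example, the following asks for JavaScript-only output with a larger response budget and a deterministic temperature (response omitted):

$ sllm write a function that deduplicates an array -C JavaScript -x 512 -t 0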

Working With Files

You can prepend a reference to a file with the -f or --file option.

However, be aware that files cannot exceed 4k tokens. To the best of my knowledge, there is no way to get the GPT-3 API to process more than 4,096 tokens at once, so this is a hard limit: a meaningful analysis of a file larger than 4k tokens is not possible.

NOTE: At the time of writing, sending a file that contains 4k tokens would cost about $0.08 (USD). See OpenAI Pricing for more info.
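
For a rough sense of how a token count like this can be computed locally, here is a minimal sketch using the gpt-3-encoder package (one of sllm's two dependencies, per the package.json example below). This is an assumption about the approach, not necessarily sllm's exact implementation:

  // Estimate how many tokens a file would consume.
  // Assumes the gpt-3-encoder package, which sllm depends on.
  const fs = require('fs');
  const { encode } = require('gpt-3-encoder');

  const text = fs.readFileSync('./sllm.js', 'utf8');
  console.log(`Estimated tokens: ${encode(text).length}`);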

Trimming Files to Save Tokens

If your files are too large or you simply want to save a few tokens, you can try adding the --trim flag when loading a file. This flag attempts to strip all extra whitespace, tabs, and newlines from the file. The result can confuse the LLM, so it's typically better to avoid this option unless needed.

Depending on the type of file you want to analyze, you might also try minifying the file before running it through sllm.
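
As an illustration, trimming along the lines described above could be as simple as collapsing every run of whitespace into a single space (an assumed sketch, not sllm's exact code):

  // Collapse spaces, tabs, and newlines to save tokens.
  const fs = require('fs');

  const raw = fs.readFileSync('./sllm.js', 'utf8');
  const trimmed = raw.replace(/\s+/g, ' ').trim();
  console.log(`${raw.length} chars before, ${trimmed.length} after`);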

File Examples:

$ sllm write a summary of this file -f sllm.js

  This file is a Node.js script that provides a command line interface (CLI) for interacting with OpenAI's GPT-3 API.

$ sllm what dependencies does this have -f ./package.json

  This package.json file has two dependencies: gpt-3-encoder and openai.

$ sllm what version of npm is this file built for? -f ./package.json

  This package.json file is built for npm version 6.14.4 or higher.

$ cat example.js

  const e = require('./llm.js');
  console.log(e);

$ sllm is this file NodeJS or Browser JS? -f example.js

  This file is Node.js.

$ sllm why do you say that? -H 1

  This file contains code that is specific to Node.js, such as the require statement, which is not supported in browser JavaScript.

$ sllm what is this file about? -f mute.cpp

  This file is about demonstrating the differences between mutating a value by reference, by pointer, and not mutating it at all. It contains three functions, noMute, muteR, and muteP, which respectively do not mutate the value, mutate the value by reference, and mutate the value by pointer. There is also a print function to output the results of the functions.

$ sllm what is this file about? -f cfg.txt

  This file is about creating a GIF animation of Conway's Game of Life using the .sorg settings. The animation will have a file name of "life", a frame delay of 1, 512 frames to render, 0 generations to run before render, a canvas width of 64, a canvas height of 64, a pixel/image scale of 8, a gif color pallet of lime, and a rule set of dtsd. Additionally, the .sorg settings include a file to load of "noise", a center of 0, an x offset of 1, and a y offset of 1.

Counting Tokens

If you want to estimate how many tokens a prompt or file will consume, you can use the sllm .count command.

$ sllm .count --help

Usage: sllm .count [options]

estimate the tokens used by a prompt or file

Options:
  -p, --prompt <string...>  the prompt string to check
  -f, --file <path>         the file path to check
  -h, --help                display help for command
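
For example, to estimate a file's token usage before referencing it in a prompt (output will vary with the file's contents):

$ sllm .count -f ./package.json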