

LameChain

pipeable, trainable, JSON-mediated ChatGPT conversations for the smooth-brained dev

Overview

This code is a collection of tools for templating, communicating, and parsing results from ChatGPT in a composable way. The express intent is to use prompt engineering as a way to build "micro-models" for a specific job, and through a sort of functional composition, build these conversations into complex structures/interactions/data pipelines.
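For a rough sense of what "JSON-mediated" means here, the sketch below shows one way a typed prompt contract could be assembled and parsed. This is a hypothetical illustration of the idea, not the library's actual internals; `buildSystemPrompt` and `parseReply` are invented names.

```typescript
// Hypothetical sketch: assemble a prompt from a declared config, and
// parse the model's reply back into a typed object.

interface PromptConfig {
  overallContext: string;
  motivations: string;
  rulesAndLimitations: string[];
}

function buildSystemPrompt(
  config: PromptConfig,
  inputProperties: Record<string, string>,
  responseProperties: Record<string, string>
): string {
  return [
    `Context: ${config.overallContext}`,
    `Motivation: ${config.motivations}`,
    ...config.rulesAndLimitations.map((rule) => `Rule: ${rule}`),
    `You will receive JSON with keys: ${Object.keys(inputProperties).join(', ')}.`,
    `Respond ONLY with JSON containing keys: ${Object.keys(responseProperties).join(', ')}.`,
  ].join('\n');
}

// A well-behaved reply is plain JSON, so parsing is just:
function parseReply<T>(raw: string): T {
  return JSON.parse(raw) as T;
}
```

The point of the pattern is that both sides of the conversation are constrained to a declared shape, so each conversation behaves like a tiny typed function.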

I was recently made aware of LangChain and found out that rigorous solutions to this problem already exist in the Python and emerging TypeScript space (LangChain has TS support).

However, my smooth brain skated over their documentation like a maglev train on its way to Simpletown. Before anyone even considers using my code or anything like it, I would recommend they assess whether a more rigorous solution exists for their use case (mine is in a custom, home-brewed, half-baked game architecture, and I've decided to keep the third-party libraries as light as possible, which is why I am not porting it to LangChain).

Package Info

  • Basically all TypeScript
  • 0% Test Coverage
  • Used By No One but Me
  • Not Semantically Versioned (Yet)
    • The current version is inaccurate. Version 0.0.0.0.0.0.01 is accurate but NPM won't let me be accurate.
    • Do not use this code if you want stable software
  • Super Experimental; updating this as I improve my use-case for it.
  • Feel free to use, contribute, and etc. at your own risk. I just ask that you read about the license.

Installation:

On your machine,

  • Clone: git clone https://github.com/curiecode/lamechain.git

  • Yarn: yarn add @curiecode/lamechain

  • NPM: npm install @curiecode/lamechain

In the project,

  • yarn to install modules
  • yarn ex:train to run the example training script
  • yarn ex:pipe to run the example pipe script
  • more utils to come

Usage:

The general pattern is to declare a conversation with some intent, some rules & restrictions, and a stated format for input and output. After doing so, messages can be sent through the conversation and received as type-safe objects rather than strings. These conversations support training and piping, but the general interface is as follows:

    import { JsonConversation } from '@curiecode/lamechain';

    const model = new JsonConversation({ logger: console }, {
        ... // <--- Prompt configuration, read below
    });

    await model.send({
        someInput: 'my typed input object'
    });

    // My typed output object:
    const { someOutput } = model.message();

An example in practice; the following model generates knock-knock jokes:

model.ts:

    import { JsonConversation } from "@curiecode/lamechain";

    export const model = new JsonConversation({
        logger: console
    }, {
        config: {
            overallContext: 'tell me jokes',
            motivations: 'take an input string, and make a joke about it',
            rulesAndLimitations: [
                `always include the phrase KNOCK KNOCK, and WHO IS THERE? in your joke`,
                `the joke should be really, really funny like something kurt vonnegut wrote`,
            ]
        },
        inputProperties: {
            jokePrompt: 'a phrase for you to make a joke about'
        },
        responseProperties: {
            jokeString: 'the really funny joke that you invented'
        }
    });

    await model.send({ jokePrompt: 'using chatGPT to tell jokes' });
    console.log(`Generated Joke: ${model.message().jokeString}`);

Training

Training a conversation involves giving it a set of example objects that match its input/output interface. The inputs and outputs are fed through ChatGPT with a slightly modified prompt which asks ChatGPT to validate that the example maps to its stochastic parrot brain or whatever; if it does, we proceed with a normal conversation. If not, the conversation will throw an error on giveExample.

It is recommended (by me) that you use these kinds of examples for any complex prompt. Anecdotally, they seem to be very useful. I don't have a good recommendation on the number of examples, but I would suggest a minimal set that covers your different edge cases.

    import { TrainedConversation } from "@curiecode/lamechain";
    import { jokeModel } from './docs/examples/shitModels/jokes';

    const trainedModel = new TrainedConversation(jokeModel);

    await trainedModel.giveExample({
        jokePrompt: `no one home in oliver's head`
    }, {
        jokeString: `KNOCK KNOCK / Who's there? / Literally no one, my brain is empty af.`
    });

    await trainedModel.giveExample({
        jokePrompt: `pete townshend`
    }, {
        jokeString: `KNOCK KNOCK / Who's there? / A Who / What?  I'm confused`
    });

    await trainedModel.send({
        jokePrompt: 'some joke prompt'
    });

    const { jokeString } = trainedModel.message();

    // ...
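Under the hood, the validation step described above could look roughly like the following. This is a hypothetical sketch inferred from the prose, not the real giveExample implementation; `buildValidationPrompt` and `assertExampleAccepted` are invented names.

```typescript
// Hypothetical sketch: wrap a training pair in a yes/no validation prompt,
// and throw if the model rejects the example.

function buildValidationPrompt(
  input: Record<string, unknown>,
  output: Record<string, unknown>
): string {
  return [
    `Given the input ${JSON.stringify(input)},`,
    `is ${JSON.stringify(output)} a plausible output under your instructions?`,
    `Answer with JSON: {"valid": true} or {"valid": false}.`,
  ].join(' ');
}

// If the model rejects the example, training fails loudly instead of
// silently accepting a bad input/output pair.
function assertExampleAccepted(rawReply: string): void {
  const { valid } = JSON.parse(rawReply) as { valid: boolean };
  if (!valid) {
    throw new Error('giveExample: model rejected the training pair');
  }
}
```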

Pipes

The conversation class provides a pipe method which accepts another conversation; the piper (the calling conversation) must have the same responseProperties as the inputProperties of the pipee (the conversation passed to pipe). This allows you to decompose tasks that OpenAI would normally have difficulty with due to complexity or scope: a problem broken into several distinct problems can be approached by having OpenAI provide a response for each distinct component. An example follows, in which we run the output of the joke generator above through a model that determines whether the joke is funny:

jokeDeterminer.ts:

    import { JsonConversation } from "@curiecode/lamechain";

    export const model = new JsonConversation({
        logger: console
    }, {
        config: {
            overallContext: 'tell me if a joke is quality',
            motivations: 'take an input KNOCK KNOCK joke, and tell me if it is funny',
            rulesAndLimitations: [
                `some antijokes may not always have the WHO IS THERE part`,
            ]
        },
        inputProperties: {
            jokeString: 'a joke for you to judge the funniness of'
        },
        responseProperties: {
            jokeJudgement: 'a judgement of how funny the joke is'
        }
    });

Then, wiring the two models together:

    import { model as jokeModel } from '../the/#usage/example';
    import { model as jokeDeterminerModel } from '../the/above/section';

    jokeModel.pipe(jokeDeterminerModel);
    await jokeModel.send({ jokePrompt: 'a joke about pipes' });
    const jokeThatWasGenerated = jokeModel.message();
    const jokeDetermination = jokeDeterminerModel.message();

    console.log({
        joke: jokeThatWasGenerated.jokeString,
        jokeIsFunny: jokeDetermination.jokeJudgement
    });
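The compatibility rule (the piper's responseProperties must match the pipee's inputProperties) can be sketched as a simple key-set check. This is an assumption about what pipe must enforce, not the library's actual code; `canPipe` is an invented helper.

```typescript
// Hypothetical sketch of the check pipe() presumably performs:
// the upstream conversation's response keys must exactly match the
// downstream conversation's input keys (order-insensitive).

function canPipe(
  responseProperties: Record<string, string>,
  inputProperties: Record<string, string>
): boolean {
  const out = Object.keys(responseProperties).sort();
  const into = Object.keys(inputProperties).sort();
  return out.length === into.length && out.every((key, i) => key === into[i]);
}
```

With a check like this, a mismatched pipeline fails at wiring time instead of producing a confused prompt at send time.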

License

This project is licensed under the Love All My Cats (LAMC) Public License

You need to love my cats to use this code. If you do not, you're actually legally not allowed to use this code. There's a whole license file that you should really read if you want to use this code.

Curie

Anastasia