
Chatbot

A framework for creating and executing "canned" chatbots.

This repo contains React UI components, and also re-exports the core engine.

Getting Started

Install the package

npm install @xflr6/chatbot

or

yarn add @xflr6/chatbot

Add styles to your CSS

@import "~@xflr6/chatbot/dist/styles.css";

This library uses no CSS reset of any kind. Include your own and adjust it as you wish.

Create and run a chat

Create a JSON chat flow definition. It can be a local file or a remote resource fetched over the network. See the sections below for how to write flow definitions.

Once created, you need to write a class to fetch and return the definition. It must extend ChatFlow and implement getDefinition.

class MyFlow extends ChatFlow {
  async getDefinition(): Promise<ChatFlowDefinition> {
    return theJSONDefinitionYouCreated;
  }
}
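
If the definition lives on a server, the same pattern applies. Here is a sketch using fetch; the URL and the error handling are illustrative, not part of the library:

class MyRemoteFlow extends ChatFlow {
  async getDefinition(): Promise<ChatFlowDefinition> {
    // Hypothetical endpoint; point this at wherever your JSON lives
    const response = await fetch("https://example.com/flows/myFlow.json");
    if (!response.ok) {
      throw new Error(`Failed to fetch flow definition: ${response.status}`);
    }
    return (await response.json()) as ChatFlowDefinition;
  }
}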

Next, register the flow class with the flowFactory. The factory returns instances of your ChatFlow-derived classes, looked up by flow name. You can push multiple flow creator functions into the factory; given a flow name, the first matching creator function is used.

flowFactory.pushFactoryFunc(
  "myFlow", // or a regex
  (flowId: FlowId, chat: Chat) => new MyFlow(flowId, chat)
);
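
Since the name can also be a regex, one creator function can serve a whole family of flows. A sketch, assuming a RegExp is accepted in place of the name (per the comment above); MyQuizFlow is a hypothetical ChatFlow subclass:

flowFactory.pushFactoryFunc(
  /^quiz\./, // matches "quiz.math", "quiz.history", ...
  (flowId: FlowId, chat: Chat) => new MyQuizFlow(flowId, chat)
);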

Now create a Chat instance, and call its setPrompt method to start the chat. This causes the flow factory to create and return an instance of MyFlow to the chat, which the chat then begins to execute.

const chat = new Chat("someName");
chat.setPrompt(PromptKey.fromString("myFlow.start"));

Finally, render the ChatView component somewhere, and pass it the chat. This component must be rendered in a parent element that doesn't get its size from its child. Also, set the parent element's overflow-y to "scroll".

<ChatView chat={chat} />
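
Putting the pieces together, a minimal sketch of the surrounding component (assuming Chat, ChatView, and PromptKey are exported from the package root; the fixed height is just one way to give the parent its own size):

import { Chat, ChatView, PromptKey } from "@xflr6/chatbot";

const chat = new Chat("someName");
chat.setPrompt(PromptKey.fromString("myFlow.start"));

function App() {
  // The parent sizes itself independently of the chat (a fixed height
  // here) and scrolls vertically, as ChatView requires.
  return (
    <div style={{ height: 600, overflowY: "scroll" }}>
      <ChatView chat={chat} />
    </div>
  );
}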

By default, the chat is restricted to a maximum width, since beyond a point it looks too stretched. You can change this by overriding a CSS variable; in fact, there are many variables you can override to suit the styling of your app. Inspect the rendered chat in your browser's developer tools to discover them.

There are lots of customizations possible at every step. More on this later.

The Chat Engine

Basic Terminology

The chat engine runs "chat flows" that are defined using JSON.

A chat flow is a collection of "prompts" from the bot to the user. Each prompt contains the message that the bot shows to the user, and some response(s) that the user can send back to the bot. The response then determines which prompt is shown next to the user.

How the Chat Progresses

At each point in time, the chat can move forward in three ways, depending on how the current prompt is configured (via its JSON definition within the chat flow definition); each is sketched below:

  1. Move forward to the next prompt without waiting for a user response. This is helpful if you want to simulate multiple messages coming from the bot one after another.
  2. Give the user a set of options to send as the answer. When the user clicks an option (or multiple options) and submits, move on to the next prompt accordingly.
  3. Let the user enter a free-form answer (such as typing into a text box) and submit, then move on to the next prompt accordingly.
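
For a quick feel of how these three look in practice, here is a compact sketch using only the definition fields covered in the sections that follow:

prompts: {
  intro: {
    message: "Welcome!",
    nextKey: ".pickOne", // 1. advances without waiting for a response
  },
  pickOne: {
    message: "Tea or coffee?",
    answers: [ // 2. waits for the user to pick an option
      { message: "Tea", nextKey: ".name" },
      { message: "Coffee", nextKey: ".name" },
    ],
  },
  name: {
    message: "And your name?",
    answers: [{ message: "*", nextKey: ".done" }], // 3. free-form input
  },
  // Other prompts
}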

Creating Chat Flow Definitions

Use Case: A bunch of consecutive messages from the bot

const chatFlowDefinition = {
  prompts: {
    hello: {
      message: "Hello",
      // The "dot" suffix is significant. More on this later.
      nextKey: ".hello2",
    },
    hello2: {
      message: "I am a chatbot",
      nextKey: ".hello3",
    },
    hello3: {
      message: "What can I do for you today?",
      nextKey: ".somePrompt",
    },
    // Other prompts
  }
};

export default chatFlowDefinition;

A more concise way to write the above (omitting the enclosing chatFlowDefinition for brevity from now on):

prompts: {
  hello: {
    messages: [
      "Hello",
      "I am a chatbot",
      "What can I do for you today?"
    ],
    nextKey: ".somePrompt",
  },
  // Other prompts
}

Note that the message(s) need not be strings; they can be anything. Certain types of messages are supported out of the box, such as Markdown, video URLs, etc. (more on this later).

If you have a custom data shape you want to render, or want to render one of the supported data formats in a custom manner, you can hook into the rendering pipeline by supplying a custom component.

Beyond message data, you can also provide components to customize the way users answer prompts (i.e. the UI they interact with).

These customizations are made on a prompt-by-prompt basis (more on customization later).

Use Case: Multiple choices presented to the user

prompts: {
  q1: {
    message: "What is 1+1?",
    // Currently, only string messages are properly supported. More data
    // formats will be supported soon.
    answers: [
      { message: "1" },
      { message: "2" },
      { message: "3" }
    ],
    // You can show the answers as:
    // * Horizontal list of small buttons that wrap around
    // * Horizontal list of large boxes that can be horizontally scrolled
    // * Vertically stacked long boxes
    // This setting is prompt-wide, and cannot be specified for individual
    // answers within a prompt.
    inputDisplayType: "large", // or "stacked", or null (or omit) for small
    nextKey: ".q2",
  },
  q2: {
    // Prompt definition for q2
  },
}

Instead of progressing to the next prompt "q2" no matter what the answer is, you can fork out depending on which answer the user chose:

prompts: {
  q1: {
    message: "What is 1+1?",
    answers: [
      { message: "1", nextKey: ".wrong" },
      { message: "2", nextKey: ".q2" },
      { message: "3", nextKey: ".wrong" },
    ],
    // Don't use this prompt-level next key, otherwise it will override the
    // answer-level next keys.
    // nextKey: ".q2",
  },
  wrong: {
    message: "Try again!",
    answers: [
      { message: "1", nextKey: ".wrong" },
      { message: "2", nextKey: ".q2" },
      { message: "3", nextKey: ".wrong" },
    ],
  },
  q2: {
    // Prompt definition for q2
  },
}

Notice that we had to repeat all the answers in the "wrong" prompt. We can avoid this as follows:

wrong: {
  message: "Try again!",
  answers: "q1" // This will insert the answers of the "q1" prompt here
},

Actually, we can do even better. We don't need to create the "wrong" prompt at all:

q1: {
  message: "What is 1+1?",
  answers: [
    {
      message: "1",
      // This will auto-create a new prompt, assign the answers of prompt
      // "q1" to it, and wire it up correctly.
      quickResponse: {
        message: "If 1 = 1, then can 1+1 = 1 too?" Try again",
        repeatAnswers: true,
      },
    },
    { message: "2", nextKey: ".q2" },
    {
      message: "3",
      quickResponse: {
        message: "Actually, 1+2 = 3. So now what d'you think 1+1 will be?",
        repeatAnswers: true,
      },
    }
  ],
},

Quick responses can also be used to "insert" a message before moving on to what would technically be the real next prompt:

prompts: {
  "p1": {
    message: "Greet me",
    answers: [
      {
        message: "Hi",
        quickResponse: { message: "Hi to you too" },
        nextKey: ".p2",
      },
      {
        message: "Hello",
        quickResponse: { message: "Hello to you too" },
        nextKey: ".p2",
      }
    ]
    // You can even use a prompt-level next key
    // nextKey: ".p2",
  },
  p2: {
    message: "How can I help you?"
  }
}

Use Case: Multiple choices presented to the user (multi select)

For multiple choice prompts, you can allow the user to select more than one option to send as their answer:

somePrompt: {
  message: "Which of these are even numbers?",
  answers: [
    { message: "1" },
    { message: "2" },
    { message: "3" },
    { message: "3" },
  ],
  acceptsMultipleAnswers: true,
  // This is required. Answer-level next keys are ignored when accepting
  // multiple answers.
  nextKey: ".theNextPrompt",
}

Use Case: Custom response accepted from the user

somePrompt: {
  message: "What is your name?",
  answers: [
    // The '*' indicates that this prompt accepts a custom input from the
    // user.
    { message: "*", nextKey: ".theNextPrompt" },
  ],
  // You could even set the next key here, instead of at the answer level
  // nextKey: ".theNextPrompt",
}

Even while accepting a custom input, you can fork out based on the answer given by the user:

somePrompt: {
  message: "What is your name?",
  answers: [
    { message: "Tom", nextKey: ".factAboutMilk" },
    { message: "Jerry", nextKey: ".factAboutCheese" },
    // This now becomes sort of like a "catch-all"
    { message: "*", nextKey: ".noRelevantFacts" },
  ],
}

By default, the custom input is accepted via a text box. We plan to utilize the inputDisplayType field to support other inputs (numbers, dates etc.) in the future.

Feature: Disabling "destructive edits" to the chat

By default, the chat engine allows the user to change their answer to any prompt, at any point in the history of the chat. Of course, when you go back in time and change something, the future from that point on can turn out differently. This is something you might not want to allow.

For any prompt for which you want to disallow this feature, you can do this:

somePrompt: {
  // The rest of the definition
  forbidsDestructiveAlteration: true,
}

Feature: Scoring

Any answer within a prompt can be given a numerical score. The chat instance running your chat flow keeps a running total of the scores encountered so far, along with other raw data you can use to aggregate scores in any other way you wish.

For the most part, the total is just the sum of the scores encountered. However, if a prompt is answered multiple times, the average of all those answers is used. An answer to a quick response with repeat answers also counts as an answer to the original prompt.
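
The definition syntax for scores isn't shown at this point in the README. As an illustration only, assuming a per-answer score field (check the package's typings for the actual property name), a scored prompt might look like:

q1: {
  message: "What is 1+1?",
  answers: [
    // "score" is an assumed field name, for illustration
    { message: "1", score: 0, nextKey: ".q2" },
    { message: "2", score: 1, nextKey: ".q2" },
  ],
}

Under the averaging rule above, a user who first answers "1" and later, on a retry, answers "2" would contribute (0 + 1) / 2 = 0.5 to the running total.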

Feature: Computing the answer for a prompt programmatically

For a particular prompt, if you want user input to be bypassed, and the answer to be computed programmatically, you can do this:

somePrompt: {
  // The rest of the definition
  answerProgramatically: true,
}

Along with this, you have to hook into the chat execution pipeline and provide the answer programmatically when asked (more on customization later).

Feature: Variable interpolation

In any prompt message, the occurrence of {{someVariable}} anywhere is treated as a variable named someVariable to be interpolated when the prompt is created. Mostly, it is the job of the programmer to provide values for interpolation by overriding ChatFlow#handleResolveVariables or implementing PromptHandler#resolveVariables.

However, there are some special variables that are interpolated by the engine itself. These are:

  • {{@.somePromptName}} - The answer provided to the prompt somePromptName is interpolated. Normally, this only makes sense when the answer to prompt somePromptName is a string.
  • {{userId}} - The userId (if any) passed into the chat context is interpolated.

Also, in any prompt message, answer message, quick response message or prompt custom data, any occurrence of {{!someVariable}} (note the "bang") is interpolated with an argument named someVariable (if any) passed as part of the flow ID. This interpolation is done at the time the flow definition is parsed, since such arguments can't change during the lifetime of a flow.
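
As a sketch of engine-resolved and programmer-resolved variables together (the handleResolveVariables signature below is an assumption for illustration; consult the package typings for the actual one):

prompts: {
  askName: {
    message: "What is your name?",
    answers: [{ message: "*", nextKey: ".greet" }],
  },
  greet: {
    // "{{@.askName}}" is resolved by the engine from the earlier answer;
    // "{{weather}}" must be resolved by your code
    message: "Nice to meet you, {{@.askName}}! It is {{weather}} today.",
  },
}

class MyFlow extends ChatFlow {
  // Assumed signature, for illustration only
  async handleResolveVariables(names: string[]): Promise<Record<string, string>> {
    return { weather: "sunny" };
  }
}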

Feature: Jumping between different flows within the same chat

TBA

Feature: Multi-step messages

TBA

Feature: Shuffling answer choices

This feature only works for prompts with answerType === "choice". To shuffle answer choices, provide the following inputDisplayConfig in the prompt definition:

somePrompt: {
  // The rest of the definition
  inputDisplayConfig: {
    shuffleChoices: true,
    // Optionally, you can explicitly specify the order in which you want
    // to display the choices. If you leave this out, an order is generated
    // for you.
    choicesDisplayOrder: [3, 1, 0, 2], // array of shuffled indices
  }
}

Note that the shuffled order in which the answers are displayed is maintained across quick responses with repeating answers, and across multi-step messages with repeating answers.

It is, however, not maintained for prompts that refer to the answers of the original prompt (somePrompt in the above example), because such prompts have their own independent inputDisplayConfigs. This is a conscious design decision.

Other features (TBA)

  • Handling errors
  • Loading and saving chats
  • Simulated UI delays
  • Enabling/disabling auto-answering
  • Intercepting answers
  • Tracking analytics

Publishing to npm

We use a tool called np.

  1. Install np: npm install --global np
  2. Run np and follow its instructions: np