ngx-natural-language

v0.3.1

Natural Language Interface (NLI)

A library for building AI chat experiences in Angular 16 🤖

Installation

Inside your Angular application, install the package with npm:

npm i ngx-natural-language
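
The examples later in this README also call OpenAI through the openai Node SDK (the v3 client, judging by the imports they use). If you plan to follow those examples as written, install it as well:

npm i openai@3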

Overview and Setup

NLI exposes three primary objects that can be used to create an AI chat experience:

  • ChatHandler
  • ChatService
  • Action

ChatHandler manages a DOM element, ChatService sends requests to the ChatHandler, and an Action describes what to do with structured data generated by the AI.

When the ChatHandler receives a request from the ChatService (e.g. get the AI's response to a user's prompt), it handles the communication with the OpenAI API as needed, draws components into the DOM element it is attached to, and then executes any relevant Action objects.

Chat Experience with the Ability to Create Users

Every project that uses NLI should start with the creation of an Angular component that implements ChatWindow. This component will contain the conversation between the user and the AI. Let's make a simple one:

import {
    Component,
    ViewContainerRef,
    ViewChild,
    ChangeDetectorRef
} from '@angular/core';

import {
    ChatHandler,
    ChatWindow,
    ChatService
} from 'ngx-natural-language';

@Component({
    selector:'app-ai-chat-window',
    template:`
    <div #chat_window></div>
    <input #promptInput placeholder="Enter your prompt"/>
    <button (click)="process_prompt(promptInput.value)">Send</button>`,
})
export class ChatWindowComponent implements ChatWindow {
    @ViewChild('chat_window', { read: ViewContainerRef }) chat_window!: ViewContainerRef;
    chat_handler: ChatHandler | null = null;

    constructor(
        private chat_service: ChatService,
        private change_detector: ChangeDetectorRef
    ) { }

    ngAfterViewInit(): void {
        /** 
         * We'll be initializing chat_handler with
         * this.chat_handler = new ChatHandler(...) here shortly
         */
    }

    process_prompt(prompt: string) { 
        /**
         * We'll be implementing this method after we have the ChatHandler
         * set up
         */
    }
}

Implementing ChatWindow requires you to declare a ChatHandler and to obtain a ViewContainerRef (via @ViewChild) for the DOM element you want to display messages in.

It also requires you to inject ChatService and ChangeDetectorRef into your component. ChangeDetectorRef will be used in the initialization of chat_handler, allowing ChatHandler to manually trigger a change detection cycle on ChatWindowComponent.

Since ChatHandler needs to "attach" to the chat_window element, ChatWindow requires ChatWindowComponent to implement the ngAfterViewInit() method. This allows us to link the ChatHandler to chat_window after it has been created in the DOM tree.

Initializing ChatHandler

In order to initialize ChatHandler, we are going to need to create a few more things:

  • A component representing messages from the user (human)
  • A component representing messages from the AI 🤖
  • A function for querying the OpenAI API
  • A list of Action objects

Example component for human messages

import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-human-message',
  template: `<div>
    <h2>Human Message</h2>
    {{content}}
  </div>`
})
export class HumanMessageComponent {
  @Input() content: string = "Lorem ipsum";
}

NOTE: Every component you intend to use with NLI should use the @Input() decorator for its variable parameters. Across NLI, @Input() fields are used to pass values into components before they are rendered to the chat window.

Example component for AI messages

import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-ai-message',
  template: `<div>
    <h2>AI Message</h2>
    {{content}}
  </div>`,
  styles: [`div { white-space: pre-line; }`]
})
export class AIMessageComponent {
  @Input() content: string = "Lorem ipsum";
}

Function for querying the OpenAI API

IMPORTANT

The code below is an example implementation in the Angular frontend. You DO NOT want to ship this in a production application: it bakes the API key into the bundled JavaScript sent to the client, and savvy users would be able to extract it from their browser. BAD IDEA.

It is recommended instead to change the implementation here to pass the function's parameters to an endpoint in the backend of your application. Your backend would then handle the interaction with OpenAI shown below and return the data needed to satisfy get_ai_response's signature, with the API key stored in a non-git-tracked file on the backend, or in the database.

import {
  OpenAIApi,
  ChatCompletionFunctions,
  ChatCompletionResponseMessage,
  ChatCompletionRequestMessage
} from 'openai/dist/api';
import { Configuration } from 'openai/dist/configuration';

async function get_ai_response(
  messages: ChatCompletionRequestMessage[],
  schemas: ChatCompletionFunctions[]
): Promise<ChatCompletionResponseMessage | undefined> {

  // Hard-coding the key like this is exactly the problem described above.
  const configuration = new Configuration({
    apiKey: "some super secret api key that we definitely don't want to share"
  });

  const openai = new OpenAIApi(configuration);

  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",

    messages: [
      {
        role: "system",
        content: "Don't make assumptions about what values to plug into functions. You must ask for clarification if a user request is ambiguous."
      },
      ...messages
    ],
    functions: schemas,
  });

  return response.data.choices[0].message;
}
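
To make the recommendation above concrete, here is a rough sketch of the safer shape: the frontend keeps the same get_ai_response signature, but forwards the messages and schemas to a hypothetical /api/chat endpoint on your own backend, which holds the API key and performs the createChatCompletion call shown above. The endpoint name and response shape are assumptions, not part of NLI.

import {
  ChatCompletionFunctions,
  ChatCompletionResponseMessage,
  ChatCompletionRequestMessage
} from 'openai/dist/api';

// Sketch only: '/api/chat' is an illustrative endpoint on your own backend.
// The backend makes the OpenAI call and returns the resulting message,
// so the API key never reaches the browser.
async function get_ai_response(
  messages: ChatCompletionRequestMessage[],
  schemas: ChatCompletionFunctions[]
): Promise<ChatCompletionResponseMessage | undefined> {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages, schemas })
  });

  if (!response.ok) {
    return undefined;
  }

  return (await response.json()) as ChatCompletionResponseMessage;
}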

Defining an Action

import { Action } from 'ngx-natural-language';
// CreateUserComponent is the component rendered below; adjust the path to your project.
import { CreateUserComponent } from './create-user.component';
import { createUserSchema } from './ai/schema';
import { createUser } from './ai/types';

class CreateUser extends Action<createUser> {
  schema = createUserSchema;
  description = "Creates a user in the system";

  async run(data: createUser) {
    this.render({
      component: CreateUserComponent,
      inputs: [{
        name: "create_user",
        value: data
      }]
    })

    return `The user was just prompted whether they want to create a user with this information ${JSON.stringify(data)}.`
  }
}

Actions describe what to do with structured data generated by the AI.

The run method is that description. In the example above we render a CreateUserComponent, which shows info about the user that can be created along with buttons to approve or deny that creation. If they approve it, CreateUserComponent manages the HTTP requests to the backend that actually create the user.

We then return a string back to the AI, informing it of what just happened. Doing this is good practice. It helps keep the AI in the loop of what is going on, and allows the conversation with the user to flow more naturally.

The AI will choose whether or not to respond to the string that you're returning. So, if you want the AI to respond with an error message instead, you can return something like this:

return `The user was missing information on the name of the user, ask them to provide it`
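
CreateUserComponent itself is not part of NLI; it is an ordinary Angular component that you write. A minimal sketch is below, assuming the approve button posts to an illustrative /api/users endpoint on your own backend (both the component and the endpoint are hypothetical):

import { Component, Input } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { createUser } from './ai/types';

// Sketch of the component rendered by the CreateUser action above.
// Requires HttpClientModule (or provideHttpClient) in your application.
@Component({
  selector: 'app-create-user',
  template: `<div>
    <h2>Create this user?</h2>
    <p>{{ create_user.firstName }} {{ create_user.lastName }} ({{ create_user.email }})</p>
    <button (click)="approve()">Approve</button>
    <button (click)="deny()">Deny</button>
  </div>`
})
export class CreateUserComponent {
  // Filled in by the Action's render() call via its inputs array.
  @Input() create_user!: createUser;

  constructor(private http: HttpClient) { }

  approve() {
    // Your component owns the backend request that actually creates the user.
    // '/api/users' is illustrative; point this at your own backend.
    this.http.post('/api/users', this.create_user).subscribe();
  }

  deny() {
    // Nothing to do; the AI was already told the user was only prompted.
  }
}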

In this example CreateUser extends an Action built around a type called createUser. This tells NLI to work with the AI to extract data in the form of createUser from the user's input when the AI decides to run this action. Here's what createUser looks like:

type phoneNumber = {
  country_code?: number,
  area_code?: string,
  first_three: string,
  last_four: string
}

export type createUser = {
  /** The first name of the user */
  firstName: string,
  /** The last name of the user */
  lastName: string,
  /** The favorite color of the user */
  favoriteColor: "red" | "blue" | "green",
  /** The date of birth of the user in the format MM/DD/YYYY */
  dob: string,
  /** The email of the user */
  email: string,
  /** The phone number of the user */
  phone: phoneNumber
}

The schema property is the createUser type translated into OpenAI-compatible JSON Schema. OpenAI requires a JSON Schema to be provided in API calls to define what objects the AI can produce.
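
The generated file is not reproduced in this README, but based on OpenAI's function-calling format, a schema for createUser would look roughly like this (the exact names and output of the conversion script may differ):

import { ChatCompletionFunctions } from 'openai/dist/api';

// Rough sketch of what a generated schema for createUser could look like.
// Field names mirror the type above; the real output of `npx schema` may differ.
export const createUserSchema: ChatCompletionFunctions = {
  name: "createUser",
  description: "Creates a user in the system",
  parameters: {
    type: "object",
    properties: {
      firstName: { type: "string", description: "The first name of the user" },
      lastName: { type: "string", description: "The last name of the user" },
      favoriteColor: { type: "string", enum: ["red", "blue", "green"] },
      dob: { type: "string", description: "The date of birth of the user in the format MM/DD/YYYY" },
      email: { type: "string", description: "The email of the user" },
      phone: {
        type: "object",
        properties: {
          country_code: { type: "number" },
          area_code: { type: "string" },
          first_three: { type: "string" },
          last_four: { type: "string" }
        },
        required: ["first_three", "last_four"]
      }
    },
    required: ["firstName", "lastName", "favoriteColor", "dob", "email", "phone"]
  }
};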

NLI exposes a script, invoked with

npx schema [path to types.ts file] [path to schema.ts file]

which converts all types defined in a given .ts file into AI-compatible schemas in a separate .ts file.

We recommend keeping all of the types you use for your actions in a single file to take advantage of this conversion tool.
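
For example, if your action types live where the imports earlier in this README suggest (the paths are illustrative, adjust them to your project):

npx schema src/app/ai/types.ts src/app/ai/schema.ts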

Bringing it all together

Now that we have all of our pieces, we can finally construct a ChatHandler in the ngAfterViewInit() method of ChatWindowComponent.

While we're at it, let's also implement the process_prompt method.

// Same imports as before, plus HumanMessageComponent, AIMessageComponent,
// get_ai_response and CreateUser from the pieces we just created.
@Component({
    selector:'app-ai-chat-window',
    template:`
    <div #chat_window></div>
    <input #promptInput placeholder="Enter your prompt"/>
    <button (click)="process_prompt(promptInput.value)">Send</button>`,
})
export class ChatWindowComponent implements ChatWindow {
  @ViewChild('chat_window', { read: ViewContainerRef }) chat_window!: ViewContainerRef;
  chat_handler: ChatHandler | null = null;

  constructor(
    private chat_service: ChatService,
    private change_detector: ChangeDetectorRef
  ) { }

  ngAfterViewInit(): void {
    this.chat_handler = new ChatHandler(
      this.chat_service,
      this.change_detector,
      this.chat_window,
      HumanMessageComponent,
      AIMessageComponent,
      get_ai_response,
      [CreateUser]
    )
  }

  async process_prompt(prompt: string) {
    if(prompt) {
      this.chat_service.send_prompt(prompt, [])
    }
  }
}