@benbravo73/backstage-plugin-backchat v0.0.5

A quick and dirty frontend plugin that integrates a GenAI feature into Backstage

Downloads: 79

Backstage Plugin Backchat

A simple proof of concept that allows you to experiment with multiple private AI servers in Backstage.

I'd love some help to turn Backchat into something more. Please feel free to contact me directly on LinkedIn or GitHub if you'd like to contribute to this project!
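
For the curious: a Backstage frontend plugin like this one is typically defined with createPlugin and exposed as a routable extension. The sketch below is illustrative only, not Backchat's actual source; the file path and component name are assumptions.

import { createPlugin, createRouteRef, createRoutableExtension } from '@backstage/core-plugin-api';

// A route ref gives the plugin's root page a stable mount point.
const rootRouteRef = createRouteRef({ id: 'backchat' });

export const backchatPlugin = createPlugin({
  id: 'backchat',
  routes: { root: rootRouteRef },
});

// BackchatPage is the component that App.tsx mounts later in this guide.
export const BackchatPage = backchatPlugin.provide(
  createRoutableExtension({
    name: 'BackchatPage',
    // Hypothetical file layout — adjust to the plugin's real structure.
    component: () => import('./components/BackchatComponent').then(m => m.BackchatComponent),
    mountPoint: rootRouteRef,
  }),
);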

Installing The Plugin

Follow the steps below to get the plugin installed into your Backstage instance. I wrote this plugin against Backstage v1.18.4.

Prerequisites

Here's what you will need before you get started:

  • A Backstage instance (follow this guide)
  • Docker, plus one of: LocalAI and ChatbotUI, Text Gen Web UI, or Ollama Web UI. Clone this project for a docker-compose.yaml that will get you set up in a few minutes.
  • Some patience.
  • An open mind so you can imagine what the possibilities could be...

Step 1: Create A Backstage Instance

If you don't have a Backstage instance yet, create one using the instructions here.

At the time of writing, the command that you need to do this is:

# Only works if you installed all the prerequisites the command needs!
npx @backstage/create-app@latest

The Backstage installer will start. When prompted, choose a name for your instance, like my-instance.

Step 2: Add The AI Server Config To Backstage

Create a local configuration file in your Backstage instance.

cd my-instance
touch app-config.local.yaml

Add the following config to your new app-config.local.yaml file:

# Mandatory: choose exactly one URL below, matching the UI you want to integrate.
ai_server:
  # Chatbot UI (the default, uses the LocalAI server).
  url: "http://localhost:3001"
  # To integrate the Ollama Web UI (uses the Ollama server), uncomment the next line.
  # url: "http://localhost:3100"
  # To integrate the Text Generation Web UI (incorporates its own server), uncomment the next line.
  # url: "http://localhost:7860"
  # To integrate the Big-AGI UI (can use multiple servers), uncomment the next line.
  # url: "http://localhost:3456"

# Optional: You can also load this Backstage catalog that contains a system diagram and TechDocs for Backchat.
catalog:
  locations:
    - type: url
      target: https://github.com/benwilcock/backstagecon-2023/blob/main/backchat-catalog.yaml
  rules:
    - allow: [Component, API, Resource, System, Domain, Location, Group, User]
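
For reference, here's a rough sketch of how a frontend component could read the ai_server.url value above and embed the configured chat GUI. This is illustrative only, not necessarily how Backchat is implemented; note also that config values are only readable from the frontend if the plugin's config schema declares them with frontend visibility.

import React from 'react';
import { useApi, configApiRef } from '@backstage/core-plugin-api';

export const ChatFrame = () => {
  // Read the chat UI URL configured in app-config.local.yaml.
  const config = useApi(configApiRef);
  const url = config.getString('ai_server.url');

  // Embed the external chat GUI inside the Backstage page.
  return (
    <iframe src={url} title="Backchat AI" style={{ width: '100%', height: '100%', border: 0 }} />
  );
};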

Step 3: Add The Backchat Plugin To Your Backstage Instance

Use the yarn add command to add the plugin to your Backstage instance.

cd my-instance
yarn add --cwd packages/app @benbravo73/backstage-plugin-backchat

Open packages/app/src/App.tsx and add the following declarations.

import { BackchatPage } from '@benbravo73/backstage-plugin-backchat';
...
<FlatRoutes>
  ...
  <Route path="/backchat" element={<BackchatPage />} />
</FlatRoutes>

At this stage, you can already make the plugin appear in your browser at http://localhost:3000/backchat. But read on to complete the installation.

Step 4: Add The Backchat Feature To Your Backstage Navigation Menu

To integrate your plugin with the rest of the UI, you need to add an entry into the side navigation.

In the file packages/app/src/components/Root/Root.tsx add an import for the "Chat" icon and a sidebar item for "Backchat AI" like this:

...
// With the other icon imports at the top of the file.
import ChatIcon from '@material-ui/icons/Chat';
...

...
// With the other SidebarItems under the "Menu" SidebarGroup.
<SidebarItem icon={ChatIcon} to="backchat" text="Backchat AI" />
...

Step 5: Start Backstage

In the root folder of your Backstage instance, start Backstage with the following command.

yarn dev

A browser window may open automatically; if it does not, try http://localhost:3000. When Backstage loads, you should see a new sidebar item for "Backchat AI." The feature won't work yet, though: you still need to start the LLM clients and servers, so continue to the next step.

Step 6: Start The LLM Containers

Clone this project somewhere other than the Backstage instance folder and read through the README file. A small amount of configuration and setup is required.

This project uses docker compose to run a couple of large language model servers and user interfaces where you can chat locally (and privately) with an open-source AI at zero cost. The speed of this solution is very much hardware dependent. If you're running Backstage and an LLM locally at the same time, make sure you have plenty of RAM and be very patient. If you have a separate server with CUDA, these servers can be reconfigured to make use of it.

Once you have configured and started the AI servers in Docker, you can check the ChatbotUI GUI is available by pointing your browser at http://localhost:3001, and that Text Gen Web UI is available by pointing your browser at http://localhost:7860.
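
If you'd rather check from the command line, here's a tiny hypothetical probe script; it assumes Node 18+ for the global fetch, and the file name and endpoint list are just examples (run it with something like npx tsx probe-servers.mts).

// probe-servers.mts — report which chat UIs are reachable.
const endpoints = [
  { name: 'ChatbotUI', url: 'http://localhost:3001' },
  { name: 'Text Gen Web UI', url: 'http://localhost:7860' },
];

for (const { name, url } of endpoints) {
  try {
    const res = await fetch(url);
    console.log(`${name}: reachable (HTTP ${res.status})`);
  } catch {
    console.log(`${name}: NOT reachable at ${url}`);
  }
}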

Tip: If you don't want to run your own AI servers locally, why not try RunPod? RunPod offers a whole bunch of ready-to-run community templates with a broad selection of hardware configurations. Just be mindful of how this choice may impact your data security and privacy before you make your decision.

Step 7: Chat With Backchat

Now that you have integrated the Backchat plugin, started Backstage, and started your LLM containers, navigate to Backstage and try the new sidebar item "Backchat AI." Your configured chat GUI will load. Begin chatting with your AI!

Tips

As you have a choice of which AI system you integrate using Backchat, here is some information to help you decide.

ChatBot UI and LocalAI Server

The original (and default) choice. ChatbotUI and LocalAI have a nice look and feel. The ChatbotUI project is a bit dated now, and the LocalAI server can be a challenge to understand when it comes to model loading and swapping, but overall the combination looks great for PoCs.

Ollama Web UI and Ollama

This client-server combination is possibly the easiest to get started with. It features a built-in model downloader with access to lots of models, switches between models easily, loads the default model automatically, and has a nice chat interface that resembles ChatGPT. Try it!

Text Generation Web UI (GUI and Server)

The Text Gen Web UI has a lot of features, which can make it seem daunting at first. But the "chat" feature is very simple to use, it has a great model download and load workflow, and it works well with Mistral. Just don't forget to "Load" your model before starting your chat session.

Big-AGI UI (GUI Compatible With Multiple Servers)

Big-AGI is a web client for multiple LLM API servers and services. It can talk to the LocalAI server, the Text Gen Web UI server (Oobabooga), and the Ollama server. It can have all these servers configured at the same time and switch easily between them using the model selector in the GUI. It has a number of useful configuration options and offers suggested "personas" which you can switch between.

And Finally...

Enjoy the Backchat plugin and the private conversations you have with your AI! Don't forget to star this repo on GitHub and follow me on LinkedIn so you'll be notified when I post new content.

Roadmap

What I'd really like to do is to transform this Backchat plugin into a fully integrated OpenAI API client, but I need your help. Interested? Contact me directly on LinkedIn or GitHub if you'd like to contribute!

Roadmap suggestions:

  • Create a native OpenAI API GUI client plugin in Backstage (using common Backstage GUI widgets) that can chat to any endpoint providing the necessary API compatibility, local or in the cloud (see the sketch after this list).
  • Allow users to switch the GUI to prompt any LLM model currently provided by the backend LLM server
  • Allow the user to specify temperature and other common LLM settings
  • Store chats
  • Store prompts
  • Store settings by model
  • Offer multi-modal use cases like Whisper translation or Stable Diffusion image generation, etc.
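
To make the first roadmap item concrete, here's a hypothetical minimal call to an OpenAI-compatible chat endpoint. The base URL and model name are assumptions; point them at whichever backend you run (LocalAI, Ollama, and Text Gen Web UI can all expose an OpenAI-compatible API).

// chat.ts — a bare-bones OpenAI-compatible chat completion request.
const BASE_URL = 'http://localhost:8080'; // assumption: your server's address

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'mistral', // assumption: depends on what your server has loaded
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chat('Hello from Backstage!').then(console.log);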

Testing

You can run the plugin in isolation for testing.

yarn start # Starts the plugin in standalone mode for testing
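
Standalone mode like this is typically wired up in a plugin repo's dev/index.tsx using createDevApp from @backstage/dev-utils. A hypothetical sketch of what that file might look like (the import path and page title are assumptions):

import React from 'react';
import { createDevApp } from '@backstage/dev-utils';
import { backchatPlugin, BackchatPage } from '../src';

// Renders the plugin on its own, without a full Backstage app.
createDevApp()
  .registerPlugin(backchatPlugin)
  .addPage({
    element: <BackchatPage />,
    title: 'Backchat',
    path: '/backchat',
  })
  .render();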

Note to self: I added some bash scripts to remind me of some common commands required after cloning this repo.

Alternatives?

Don't like this approach? Already a ChatGPT subscriber? Why not try this plugin from Enfuse instead?