
webwhiz

v1.0.0

create ai chatbot trained on your data

Downloads: 320

WebWhiz

Train ChatGPT on your website data and build an AI chatbot that can instantly answer your customers' queries.

webwhiz

This repo is specifically for the WebWhiz SDK, licensed under the MIT license.

🔥 Core features

  • Easy Integration
  • Data-Specific Responses
  • Regular Data Updates
  • No-Code Builder
  • Customisable Chatbot
  • Fine-Tuning
  • Offline Messages

🤔 How it works ❓

Create and train a chatbot for your website in just a few simple steps.

  • Just enter your website URL to get started. We'll automatically fetch and prepare training data.
  • We’ll automatically train ChatGPT on your website based on the selected parameters and create the chatbot for you.
  • To embed the chatbot in your website, simply add a tiny script tag, as sketched below.
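
As an illustrative sketch only: the real script URL and attribute names come from your WebWhiz dashboard, so treat everything below as a placeholder.

<!-- Hypothetical embed snippet; copy the real one from your WebWhiz dashboard -->
<script src="https://example.com/webwhiz-sdk.js" chatbot-id="YOUR_CHATBOT_ID"></script>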

🙋‍♂️ Frequently Asked Questions ❓

What is WebWhiz?

WebWhiz allows you to train ChatGPT on your website data and build a chatbot that you can add to your website. No coding required.

How frequently do you crawl my website?

Currently, we crawl your website once a month. Please contact us if you need your website to be scanned more frequently.

What data do you collect from my website?

WebWhiz collects data from your website pages to train your chatbot. This includes text data from the pages as well as any metadata, such as page titles or descriptions. We do not collect any personally identifiable information (PII) or sensitive data from your website. We scan only the public data that is available to search engines.

What happens if I exceed my plan's limits?

If you exceed your plan's limits for projects or pages, we will notify you. However, if you exceed the token limit for your plan, your chatbots will stop generating AI responses and will instead respond with a predefined message.

What are tokens?

Tokens are a unit of measurement used to calculate the amount of text data that is processed by your chatbot. Each token corresponds to a variable number of characters, depending on the complexity of the language used in the message. Each message your chatbot sends uses a certain number of tokens based on the length and complexity of the input and the AI response. You can view the current token usage of your account on the dashboard.
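
As a rough rule of thumb for English text, one token corresponds to about four characters, so a 400-character reply would consume roughly 100 tokens.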

Can I train custom data?

Yes, you can train on custom data by simply pasting content into WebWhiz.

Can I bring my own OpenAI key?

Not at the moment, but it will be possible in a couple of days.

What is the maximum context size?

WebWhiz does not have any limitations on the size of the context. However, please note that the number of pages you can crawl may be limited based on the plan you choose. Please refer to our plans page to learn more about the specific limitations of each plan.

☁️ Self Hosting

  1. Docker - Easy
  2. Manual Setup - Involved (but provides more flexibility)

📦 Docker

Prerequisites

  • Docker & docker-compose

Running WebWhiz with Docker

  1. Clone the repo
  2. Edit the .env.docker file present in the root of the repo and add your OPENAI_KEY & OPENAI_KEY_2
  3. Use docker-compose to start the stack:
# Bring up webwhiz
# Once the building is done and webwhiz starts the UI will be available at
# http://localhost:3030, backend is available at http://localhost:3000
# To exit Press Ctrl-C
docker-compose up

# Alternatively Run webwhiz as a daemon
docker-compose up -d

# Stop Webwhiz
docker-compose down

# Force rebuild all containers (required only if some change is not picked up)
sudo docker-compose up --build --force-recreate
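
For reference, a minimal .env.docker might look like the following; the two variable names come from step 2 above, and the values are placeholders:

# .env.docker (placeholder values)
OPENAI_KEY=sk-your-primary-openai-key
OPENAI_KEY_2=sk-your-fallback-openai-key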

🛃 Manual

WebWhiz is designed to be used as a production-grade chatbot that can be scaled up or down to handle any volume of data.

WebWhiz consists of three main components:

  1. The API server - the main WebWhiz backend web server, built with NestJS
  2. JS Celery Worker - handles crawling and embeddings generation
  3. Python Celery Worker - contains the cosine similarity calculator and the HTML/PDF content extractor

For its database and caching, WebWhiz uses:

  • MongoDB
  • Redis

The backend server uses third-party services (including OpenAI) to power the chatbot, as well as for error monitoring, etc. Only the OpenAI key is mandatory; you can ignore the others if you prefer.

NOTE: WebWhiz keeps embeddings in Redis to improve the performance of chatbot responses. For most organisations, a chatbot will contain data for a few hundred or a few thousand pages, and Redis should work well while providing better performance. If you would like to use a dedicated vector database for searching relevant chunks, please reach out to us.

Prerequisites

  • MongoDB v6
  • Redis v7
  • Node v18 + Yarn
  • Python v3.6+

Setting Environment Variables

  1. Create a copy of the .env.sample file and rename it to .env.

The following variables are mandatory:

  • HOST - IP address to which the web server should bind, typically 0.0.0.0
  • PORT - Port on which the web server should listen (default 3000)
  • SECRET_KEY - Secret used for encryption (JWT, etc.)
  • MONGO_URI - MongoDB URI to use
  • MONGO_DBNAME - Name of the database inside MongoDB
  • OPENAI_KEY - OpenAI API key
  • OPENAI_KEY_2 - Alternate OpenAI API key, used when the primary one raises an error. You can use the same API key for both if you don't want to provide two separate keys.

  2. Inside the workers folder, create a copy of the .env.sample file and rename it to .env.

Set values for the following variables: MONGO_URI, MONGO_DBNAME, REDIS_HOST, REDIS_PORT. A sketch of both files follows.
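
A minimal sketch of both files, assuming a local MongoDB and Redis; every value is a placeholder:

# .env (repo root, placeholder values)
HOST=0.0.0.0
PORT=3000
SECRET_KEY=replace-with-a-long-random-string
MONGO_URI=mongodb://localhost:27017
MONGO_DBNAME=webwhiz
OPENAI_KEY=sk-your-primary-openai-key
OPENAI_KEY_2=sk-your-fallback-openai-key

# workers/.env (placeholder values)
MONGO_URI=mongodb://localhost:27017
MONGO_DBNAME=webwhiz
REDIS_HOST=localhost
REDIS_PORT=6379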

Installing dependencies and running app

From the root folder, run the following commands:

# Install node dependencies
yarn install

# Install python worker dependencies
cd workers
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Run application with pm2
cd ..
yarn run build
npm install -g pm2 # Use sudo if required
pm2 start ecosystem.config.js

This will start the backend HTTP server, the JS worker, and the Python worker.
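
To verify that all three processes came up, the standard pm2 commands work here; nothing below is WebWhiz-specific:

# List the processes managed by pm2 and their status
pm2 status

# Tail the combined logs of all processes
pm2 logs

# Restart or stop everything if needed
pm2 restart all
pm2 stop all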

Frontend

Create a .env file in the frontend folder and add the following variables:

REACT_APP_BASE_URL='https://api.website.com'
GOOGLE_AUTH_ID='Only if you need google login'

From the frontend folder, run the following commands to start the server:

# Install dependencies
npm install

# Run front end app
npm run start

Run npm run build to package the frontend app.
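
The REACT_APP_ prefix suggests a Create React App project, so the production bundle should land in a build folder; assuming that, it can be served with any static file server, for example:

# Serve the static build output (assumes the CRA default build directory)
npx serve -s build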

If you face any issues, reach out to [email protected].