
pgml-test

v0.1.12


Open Source Alternative for Building End-to-End Vector Search Applications without OpenAI & Pinecone

Overview

The JavaScript SDK is designed to facilitate the development of scalable vector search applications on PostgreSQL databases. With this SDK, you can seamlessly manage the database tables related to documents, text chunks, text splitters, LLM (Large Language Model) models, and embeddings. By leveraging the SDK's capabilities, you can efficiently index LLM embeddings using PgVector for fast and accurate queries.

Key Features

  • Automated Database Management: With the SDK, you can easily handle the management of database tables related to documents, text chunks, text splitters, LLM models, and embeddings. This automated management system simplifies the process of setting up and maintaining your vector search application's data structure.

  • Embedding Generation from Open Source Models: The JavaScript SDK provides the ability to generate embeddings using hundreds of open source models. These models, trained on vast amounts of data, capture the semantic meaning of text and enable powerful analysis and search capabilities.

  • Flexible and Scalable Vector Search: The JavaScript SDK empowers you to build flexible and scalable vector search applications. The JavaScript SDK seamlessly integrates with PgVector, a PostgreSQL extension specifically designed for handling vector-based indexing and querying. By leveraging these indices, you can perform advanced searches, rank results by relevance, and retrieve accurate and meaningful information from your database.

Use Cases

Embeddings, the core concept of the JavaScript SDK, find applications in various scenarios, including:

  • Search: Embeddings are commonly used for search functionalities, where results are ranked by relevance to a query string. By comparing the embeddings of query strings and documents, you can retrieve search results in order of their similarity or relevance.

  • Clustering: With embeddings, you can group text strings by similarity, enabling clustering of related data. By measuring the similarity between embeddings, you can identify clusters or groups of text strings that share common characteristics.

  • Recommendations: Embeddings play a crucial role in recommendation systems. By identifying items with related text strings based on their embeddings, you can provide personalized recommendations to users.

  • Anomaly Detection: Anomaly detection involves identifying outliers or anomalies that have little relatedness to the rest of the data. Embeddings can aid in this process by quantifying the similarity between text strings and flagging outliers.

  • Classification: Embeddings are utilized in classification tasks, where text strings are classified based on their most similar label. By comparing the embeddings of text strings and labels, you can classify new text strings into predefined categories.
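All of the use cases above boil down to measuring how similar two embedding vectors are. A common metric is cosine similarity; the following is a plain-JavaScript sketch of the idea (not part of the SDK, which computes similarity inside the database):

```javascript
// Cosine similarity between two embedding vectors of equal length:
// 1 means same direction (semantically similar), 0 means orthogonal (unrelated).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Search ranks documents by this score against the query embedding; clustering and anomaly detection threshold it between documents.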

How the JavaScript SDK Works

The JavaScript SDK streamlines the development of vector search applications by abstracting away the complexities of database management and indexing. Here's an overview of how the SDK works:

  • Automatic Document and Text Chunk Management: The SDK provides a convenient interface to manage documents and pipelines, automatically handling chunking and embedding for you. You can easily organize and structure your text data within the PostgreSQL database.

  • Open Source Model Integration: With the SDK, you can seamlessly incorporate a wide range of open source models to generate high-quality embeddings. These models capture the semantic meaning of text and enable powerful analysis and search capabilities.

  • Embedding Indexing: The JavaScript SDK utilizes the PgVector extension to efficiently index the embeddings generated by the open source models. This indexing process optimizes search performance and allows for fast and accurate retrieval of relevant results.

  • Querying and Search: Once the embeddings are indexed, you can perform vector-based searches on the documents and text chunks stored in the PostgreSQL database. The SDK provides intuitive methods for executing queries and retrieving search results.

Quickstart

Follow the steps below to quickly get started with the JavaScript SDK for building scalable vector search applications on PostgresML databases.

Prerequisites

Before you begin, make sure you have the following:

  • PostgresML Database: Ensure you have a PostgresML database version >=2.7.7. You can spin up a database using Docker or sign up for a free GPU-powered database.

  • Set the DATABASE_URL environment variable to the connection string of your PostgresML database.
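For example, on macOS or Linux (the connection string below uses placeholder values; substitute your own host, user, password, and database name):

```shell
# Placeholder credentials — replace with your PostgresML connection string.
export DATABASE_URL="postgres://user:password@localhost:5432/pgml_database"
```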

Installation

To install the JavaScript SDK, use npm:

npm i pgml

Sample Code

Once you have the JavaScript SDK installed, you can use the following sample code as a starting point for your vector search application:

const pgml = require("pgml");

const main = async () => {
    const collection = pgml.newCollection("my_javascript_collection");

Explanation:

  • This code imports pgml and creates an instance of the Collection class, to which we will add pipelines and documents.

Continuing within const main

    const model = pgml.newModel();
    const splitter = pgml.newSplitter();
    const pipeline = pgml.newPipeline("my_javascript_pipeline", model, splitter);
    await collection.add_pipeline(pipeline);

Explanation

  • The code creates an instance of Model and Splitter using their default arguments.
  • Finally, the code constructs a pipeline called "my_javascript_pipeline" and adds it to the collection we initialized above. This pipeline automatically generates chunks and embeddings for every upserted document.

Continuing within const main

    const documents = [
        {
          id: "Document One",
          text: "document one contents...",
        },
        {
          id: "Document Two",
          text: "document two contents...",
        },
    ];
    await collection.upsert_documents(documents);

Explanation

  • This code creates and upserts some filler documents.
  • As mentioned above, the pipeline added earlier automatically runs and generates chunks and embeddings for each document.

Continuing within const main

    const queryResults = await collection
        .query()
        .vector_recall("Some user query that will match document one first", pipeline)
        .limit(2)
        .fetch_all();

    // Convert the results to an array of objects
    const results = queryResults.map((result) => {
      const [similarity, text, metadata] = result;
      return {
        similarity,
        text,
        metadata,
      };
    });
    console.log(results);

    await collection.archive();
};

Explanation:

  • The query method is called to perform a vector-based search on the collection. The query string is "Some user query that will match document one first", and the top 2 results are requested.
  • The search results are converted to objects and printed.
  • Finally, the archive method is called to archive the collection and free up resources in the PostgresML database.

Call the main function:

main().then(() => {
  console.log("Done with PostgresML demo");
});

Running the Code

Open a terminal or command prompt and navigate to the directory where the file is saved.

Execute the following command:

node vector_search.js

You should see the search results printed in the terminal. As expected, our vector search engine matched document one first.

[
  {
    similarity: 0.8506832955692104,
    text: 'document one contents...',
    metadata: { id: 'Document One' }
  },
  {
    similarity: 0.8066114609244565,
    text: 'document two contents...',
    metadata: { id: 'Document Two' }
  }
]

Usage

High-level Description

The JavaScript SDK provides a set of functionalities to build scalable vector search applications on PostgreSQL databases. It enables users to create a collection, which represents a schema in the database, to store tables for documents, chunks, models, splitters, and embeddings. The Collection class in the SDK handles all operations related to these tables, allowing users to interact with the collection and perform various tasks.

Collections

Collections are the organizational building blocks of the SDK. They manage all documents and related chunks, embeddings, tsvectors, and pipelines.

Creating Collections

By default, collections will read and write to the database specified by DATABASE_URL.

Create a Collection that uses the default DATABASE_URL environment variable.

const collection = pgml.newCollection("test_collection")

Create a Collection that reads from a different database than that set by the environment variable DATABASE_URL.

const collection = pgml.newCollection("test_collection", CUSTOM_DATABASE_URL)

Upserting Documents

The upsert_documents method can be used to insert new documents and update existing documents.

New documents are dictionaries with two required keys: id and text. All other keys/value pairs are stored as metadata for the document.

Upsert new documents with metadata

const documents = [
    {
        id: "Document 1",
        text: "Here are the contents of Document 1",
        random_key: "this will be metadata for the document"
    },
    {
        id: "Document 2",
        text: "Here are the contents of Document 2",
        random_key: "this will be metadata for the document"
    }
]
const collection = pgml.newCollection("test_collection")
await collection.upsert_documents(documents)
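The split between the reserved keys and metadata can be illustrated with a small stand-alone helper (hypothetical, not part of the SDK):

```javascript
// Separates a document into its reserved fields (id, text) and the
// remaining key/value pairs, which the SDK stores as document metadata.
function splitDocument(doc) {
  const { id, text, ...metadata } = doc;
  return { id, text, metadata };
}

const doc = {
  id: "Document 1",
  text: "Here are the contents of Document 1",
  random_key: "this will be metadata for the document",
};
console.log(splitDocument(doc).metadata);
// { random_key: 'this will be metadata for the document' }
```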

Document metadata can be updated by upserting the document without the text key.

Update document metadata

documents = [
    {
        id: "Document 1",
        random_key: "this will be NEW metadata for the document"
    },
    {
        id: "Document 2",
        random_key: "this will be NEW metadata for the document"
    }
]
collection = pgml.newCollection("test_collection")
await collection.upsert_documents(documents)

Getting Documents

Documents can be retrieved using the get_documents method on the collection object.

Get the first 100 documents

collection = pgml.newCollection("test_collection")
documents = await collection.get_documents({ limit: 100 })

Pagination

The JavaScript SDK supports limit-offset pagination and keyset pagination.

Limit-Offset pagination

collection = pgml.newCollection("test_collection")
documents = await collection.get_documents({ limit: 100, offset: 10 })

Keyset pagination

collection = pgml.newCollection("test_collection")
documents = await collection.get_documents({ limit: 100, last_row_id: 10 })

The last_row_id can be taken from the row_id field of the last returned document.
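A full keyset-pagination loop can be factored out into a helper that works with any get_documents-like fetcher. This is a sketch under the semantics described above; `allDocuments` and `fetchPage` are hypothetical names, not SDK APIs:

```javascript
// Iterates through every document using keyset pagination. fetchPage is any
// async function that accepts { limit, last_row_id } and resolves to an array
// of documents, each carrying a row_id field.
async function allDocuments(fetchPage, pageSize = 100) {
  const all = [];
  let lastRowId = 0;
  while (true) {
    const page = await fetchPage({ limit: pageSize, last_row_id: lastRowId });
    if (page.length === 0) break; // no more documents
    all.push(...page);
    lastRowId = page[page.length - 1].row_id; // key for the next page
  }
  return all;
}
```

With the SDK, fetchPage would be something like `(args) => collection.get_documents(args)`.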

Filtering

Metadata and full text filtering are supported just like they are in vector recall.

Metadata and full text filtering

collection = pgml.newCollection("test_collection")
documents = await collection.get_documents({
    limit: 100,
    offset: 10,
    filter: {
        metadata: {
            id: {
                $eq: 1
            }
        },
        full_text_search: {
            configuration: "english",
            text: "Some full text query"
        }
    }
})

Deleting Documents

Documents can be deleted with the delete_documents method on the collection object.

Metadata and full text filtering are supported just like they are in vector recall.

collection = pgml.newCollection("test_collection")
documents = await collection.delete_documents({
    metadata: {
        id: {
            $eq: 1
        }
    },
    full_text_search: {
        configuration: "english",
        text: "Some full text query"
    }
})

Searching Collections

The JavaScript SDK is specifically designed to provide powerful, flexible vector search.

Pipelines are required to perform search. See the Pipelines Section for more information about using Pipelines.

Basic vector search

const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")
const results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).fetch_all()

Vector search with custom limit

const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")
const results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).limit(10).fetch_all()

Metadata Filtering

We provide powerful and flexible arbitrarily nested metadata filtering based on MongoDB Comparison Operators. We support every operator mentioned there except $nin.

Vector search with $eq metadata filtering

const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")
const results = await collection.query()
    .vector_recall("Here is some query", pipeline)
    .limit(10)
    .filter({
        metadata: {
            uuid: {
                $eq: 1
            }    
        }
    })
    .fetch_all()

The above query would filter out all documents that do not contain a key uuid equal to 1.

Vector search with $gte metadata filtering

const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")
const results = await collection.query()
    .vector_recall("Here is some query", pipeline)
    .limit(10)
    .filter({
        metadata: {
            index: {
                $gte: 3
            }    
        }
    })
    .fetch_all()

The above query would filter out all documents that do not contain a key index with a value greater than 3.

Vector search with $or and $and metadata filtering

const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")
const results = await collection.query()
    .vector_recall("Here is some query", pipeline)
    .limit(10)
    .filter({
        metadata: {
            $or: [
                {
                    $and: [
                        {
                            uuid: {
                                $eq: 1
                            }    
                        },
                        {
                            index: {
                                $lt: 100 
                            }
                        }
                    ] 
                },
                {
                   special: {
                        $ne: true
                    } 
                }
            ]    
        }
    })
    .fetch_all()

The above query would filter out all documents except those where special is not true, or where uuid equals 1 and index is less than 100.
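To build intuition for how these operators combine, here is a tiny stand-alone evaluator for the subset used above ($eq, $ne, $lt, $gte, $and, $or). It is illustrative only; the SDK does not evaluate filters in JavaScript but compiles them into SQL:

```javascript
// Evaluates a MongoDB-style filter object against a document's metadata.
function matches(metadata, filter) {
  return Object.entries(filter).every(([key, cond]) => {
    if (key === "$and") return cond.every((f) => matches(metadata, f));
    if (key === "$or") return cond.some((f) => matches(metadata, f));
    // cond is an object of comparison operators, e.g. { $eq: 1 }
    return Object.entries(cond).every(([op, value]) => {
      const actual = metadata[key];
      if (op === "$eq") return actual === value;
      if (op === "$ne") return actual !== value;
      if (op === "$lt") return actual < value;
      if (op === "$gte") return actual >= value;
      throw new Error(`unsupported operator: ${op}`);
    });
  });
}

const filter = {
  $or: [
    { $and: [{ uuid: { $eq: 1 } }, { index: { $lt: 100 } }] },
    { special: { $ne: true } },
  ],
};
console.log(matches({ uuid: 1, index: 50, special: true }, filter)); // true
console.log(matches({ uuid: 2, index: 50, special: true }, filter)); // false
```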

Full Text Filtering

If full text search is enabled for the associated Pipeline, documents can be first filtered by full text search and then recalled by embedding similarity.

const collection = pgml.newCollection("test_collection")
const pipeline = pgml.newPipeline("test_pipeline")
const results = await collection.query()
    .vector_recall("Here is some query", pipeline)
    .limit(10)
    .filter({
        full_text: {
            configuration: "english",
            text: "Match Me"
        }
    })
    .fetch_all()

The above query would first filter out all documents that do not match the full text search criteria, and then perform vector recall on the remaining documents.

Pipelines

Collections can have any number of Pipelines. Each Pipeline is run every time documents are upserted.

Pipelines are composed of a Model, Splitter, and additional optional arguments.

Models

Models are used for embedding chunked documents. We support almost every open source model on Hugging Face, as well as OpenAI's embedding models.

Create a default Model "intfloat/e5-small" with default parameters: {}

const model = pgml.newModel()

Create a Model with custom parameters

const model = pgml.newModel(
    "hkunlp/instructor-base",
    {instruction: "Represent the Wikipedia document for retrieval: "}    
)

Use an OpenAI model

const model = pgml.newModel("text-embedding-ada-002", "openai")

Splitters

Splitters are used to split documents into chunks before embedding them. We support splitters found in LangChain.

Create a default Splitter "recursive_character" with default parameters: {}

const splitter = pgml.newSplitter()

Create a Splitter with custom parameters

const splitter = pgml.newSplitter(
    "recursive_character", 
    {chunk_size: 1500, chunk_overlap: 40}
)
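To build intuition for what chunk_size and chunk_overlap control, here is a naive character-based splitter (this is not the recursive_character algorithm the SDK actually uses, which splits on separators like paragraphs before falling back to characters):

```javascript
// Splits text into chunks of at most chunkSize characters, where each chunk
// repeats the last chunkOverlap characters of the previous chunk so that
// context spanning a boundary is not lost.
function splitText(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

console.log(splitText("abcdefghij", 4, 2));
// [ 'abcd', 'cdef', 'efgh', 'ghij' ]
```

Larger chunk_size means fewer, broader chunks per document; larger chunk_overlap trades storage for continuity between adjacent chunks.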

Adding Pipelines to a Collection

When adding a Pipeline to a collection, the Pipeline must have both a Model and a Splitter.

The first time a Pipeline is added to a Collection it will automatically chunk and embed any documents already in that Collection.

const model = pgml.newModel()
const splitter = pgml.newSplitter()
const pipeline = pgml.newPipeline("test_pipeline", model, splitter)
await collection.add_pipeline(pipeline)

Enabling full text search

Pipelines can take additional arguments enabling full text search. When full text search is enabled, in addition to automatically chunking and embedding, the Pipeline will create the necessary tsvectors to perform full text search.

For more information on full text search please see: Postgres Full Text Search.

const model = pgml.newModel()
const splitter = pgml.newSplitter()
const pipeline = pgml.newPipeline("test_pipeline", model, splitter, {
    "full_text_search": {
        active: true,
        configuration: "english"
    }
})
await collection.add_pipeline(pipeline)

Searching with Pipelines

Pipelines are a required argument when performing vector search. After a Pipeline has been added to a Collection, the Model and Splitter can be omitted when instantiating it.

const pipeline = pgml.newPipeline("test_pipeline")
const collection = pgml.newCollection("test_collection")
const results = await collection.query().vector_recall("Why is PostgresML the best?", pipeline).fetch_all()

Enabling, Disabling, and Removing Pipelines

Pipelines can be disabled or removed to prevent them from running automatically when documents are upserted.

Disable a Pipeline

const pipeline = pgml.newPipeline("test_pipeline")
const collection = pgml.newCollection("test_collection")
await collection.disable_pipeline(pipeline)

Disabling a Pipeline prevents it from running automatically, but leaves all chunks and embeddings already created by that Pipeline in the database.

Enable a Pipeline

const pipeline = pgml.newPipeline("test_pipeline")
const collection = pgml.newCollection("test_collection")
await collection.enable_pipeline(pipeline)

Enabling a Pipeline will cause it to automatically run and chunk and embed all documents it may have missed while disabled.

Remove a Pipeline

const pipeline = pgml.newPipeline("test_pipeline")
const collection = pgml.newCollection("test_collection")
await collection.remove_pipeline(pipeline)

Removing a Pipeline deletes it and all associated data from the database. Removed Pipelines cannot be re-enabled but can be recreated.

Developer Setup

This JavaScript library is generated from our core rust-sdk. Please check the rust-sdk documentation for developer setup.

Roadmap

  • [x] Enable filters on document metadata in vector_search. Issue
  • [x] text_search functionality on documents using Postgres text search. Issue
  • [x] hybrid_search functionality that does a combination of vector_search and text_search. Issue
  • [x] Ability to call and manage OpenAI embeddings for comparison purposes. Issue
  • [x] Perform chunking on the DB with multiple langchain splitters. Issue
  • [ ] Save vector_search history for downstream monitoring of model performance. Issue