
BongoJS

PostgreSQL JSON document modeling tool for node.js.

Motivation

PostgreSQL is an outstanding product. It covers a vast spectrum of use cases, from small applications to very complex systems. But constructing and maintaining a relational model is a hassle. Especially for small products (MVPs; weekend projects), a document model seems to be an attractive alternative. This is where people usually start with products like MongoDB. Take an arbitrarily complex JSON object, persist it, retrieve it. No tables to declare, no schema migrations to maintain. All quick and easy.

PostgreSQL, with its JSON and JSONB types, provides first-class support for document modeling. Unfortunately, the learning curve for the JSON-path-like syntax is relatively steep (it is not standard SQL), plus it doesn't solve problems like schema validation or database schema management.

BongoJS is an attempt to:

  • abstract complexities related to using JSON/JSONB on PostgreSQL,
  • introduce schema validation for JSON/JSONB columns,
  • minimize complexity related to database schema management,
  • make using document model on PostgreSQL as easy as with MongoDB.

Install

Install package:

npm install @dumpstate/bongojs --save

Install peer dependencies (if not yet installed):

npm install pg --save

Usage

BongoJS instance

The main entrypoint is a Bongo class. One needs an instance of Bongo to start defining the model and running queries.

import { Bongo } from "@dumpstate/bongojs"

const bongo = new Bongo()

The constructor optionally accepts either an existing node-postgres connection pool or a PoolConfig object. If neither is provided, BongoJS creates a new node-postgres connection pool with environment defaults (i.e., one can set PGHOST, PGUSER, PGDATABASE, PGPASSWORD, etc., in the environment).
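For example, a minimal sketch passing an explicit PoolConfig (the connection values below are illustrative, not defaults):

import { Bongo } from "@dumpstate/bongojs"

// illustrative node-postgres PoolConfig; adjust for your environment
const bongo = new Bongo({
	host: "localhost",
	database: "bongo_db",
	user: "bongo",
})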

Model Declaration

Bongo validates the model in the application code, so it requires a model declaration. The model declaration is a JSON Typedef-like schema registered on the Bongo instance.

import { Schema } from "@dumpstate/bongojs"

const Foo = {
	name: "foo",
	schema: {
		createdAt: { type: "timestamp" },
		bar: { type: "string" },
		baz: { type: "int32" },
	},
} as const

// though not required, it is useful to create a type declaration along the way
type FooType = Schema<typeof Foo>

// with a model definition in hand, one can create a bongo collection (and register the schema)
const foos = bongo.collection(Foo)

// the type of `foos` is now `Collection<FooType>` (which is why the type declaration is useful)

Schema Migration

BongoJS requires some tables to be initialized in PostgreSQL. The table structure depends on the model definition, as table partitions and indexes are created under the hood. Thus, it's most convenient to run the migration after the collections are defined:

await bongo.migrate()

In a more professional setup, one may want to run the migration as a separate step, e.g., as part of the CD pipeline. In that case, it is preferable to use the provided CLI script:

npx bongojs migrate up ./path/to/entrypoint

where ./path/to/entrypoint is a path to a file exporting a Bongo instance (with all the collections registered).
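A hypothetical entrypoint module might look like the sketch below (the file path, the models import, and the export shape are assumptions; check what the CLI expects in your version):

// ./src/db.ts - hypothetical entrypoint exporting a Bongo instance
import { Bongo } from "@dumpstate/bongojs"
import { Foo } from "./models" // hypothetical module holding the schema declarations

export const bongo = new Bongo()

// register all collections so the migration can see them
export const foos = bongo.collection(Foo)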

The Transactor

At this point, the primary interfaces to the database are the collections and the Transactor instance. The Transactor is required to provide the DB connection/transaction object to the DBActions returned from a collection instance.

// Foo and bongo from the previous example
const foos = bongo.collection(Foo)

const res: DBAction<Document<FooType>> = foos.create({
	createdAt: new Date(),
	bar: "bar",
	baz: 42, // baz is declared as int32, so it expects a number
})

The res is of type DBAction<Document<FooType>> - nothing has been executed yet, but one can use res for further composition.

The DBAction should be executed by calling the run or transact method and providing a Transactor instance:

const foo: Document<FooType> = await res.run(bongo.tr)

The run method requests a new database connection from the pool (via the transactor) and injects it into the chain defined as a DBAction. All the composed queries are executed on the same DB connection.

The transact method serves the same purpose, but the chain is wrapped with BEGIN / COMMIT / ROLLBACK.
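For example, the create action from above could run inside a transaction instead (a sketch reusing foos and bongo from the previous snippets):

// same action, but wrapped in BEGIN / COMMIT / ROLLBACK
const saved = await foos.create({
	createdAt: new Date(),
	bar: "bar",
	baz: 42,
}).transact(bongo.tr)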

The Document Type

The model instance returned from a collection is a plain, sealed JavaScript object of type Document<T> (e.g., Document<FooType>), with properties that follow these conventions:

  1. the id property of type string is always present and cannot be null,
  2. all the properties declared on the schema are always nullable, e.g., foo.bar is of type string | null - it is not possible to have non-nullable properties, for backwards-compatibility reasons (similar to what protobuf does),
  3. all the properties declared on the schema have required counterparts - the getter may raise an exception when called, e.g., foo.bar$ is of type string and will throw if foo.bar is null.

The unsafe getters on a document (e.g., foo.bar$) help to write more concise code, while preserving backwards compatibility at the collection level (e.g., what if tomorrow I deprecate the bar property?).
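A short sketch of both accessors, following the conventions above (the id value is illustrative):

const doc = await foos.findById("some-id").run(bongo.tr)

const maybeBar: string | null = doc.bar // nullable accessor, always safe
const bar: string = doc.bar$ // unsafe getter - throws if doc.bar is null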

Collection

The Collection is instantiated the moment the schema is registered on the bongo instance, e.g.:

const foos: Collection<FooType> = bongo.collection(Foo)

The Collection offers the following database operations (a short usage sketch follows the list):

  • find(query: Query<T>, opts?: QueryOpts<T>): DBAction<Document<T>[]> - finds all documents matching the query,
  • findOne(query: Query<T>): DBAction<Document<T> | null> - finds the first document matching the query,
  • findById(id: string): DBAction<Document<T>> - finds the document by id (throws if not found),
  • create(obj: T): DBAction<Document<T>> - creates a new document,
  • createAll(objs: T[]): DBAction<Document<T>[]> - creates multiple documents,
  • deleteById(id: string): DBAction<boolean> - deletes the document by id,
  • drop(): DBAction<number> - drops the collection (deletes all the documents from the partition),
  • save(obj: T & DocumentMeta): DBAction<Document<T>> - saves the document (inserts if not exists, updates otherwise),
  • count(query: Query<T>): DBAction<number> - counts the documents matching the query.
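And the promised usage sketch, exercising a few of these operations (reusing foos and bongo from earlier; the field values are illustrative):

const created = await foos.create({ createdAt: new Date(), bar: "hello", baz: 1 }).run(bongo.tr)
const fetched = await foos.findById(created.id).run(bongo.tr)
const removed: boolean = await foos.deleteById(created.id).run(bongo.tr)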

Query Object

Collection methods like find, findOne, or count accept a Query<T> object as input.

The Query is a MongoDB-like query object, with the types of the fields inferred from the schema of T. The query is translated to a SQL query - we're actually querying on the JSON column. The only index available is a GIN index on the document column, so query performance is usually limited to an exact match on a single field.
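For example, an exact match on a single field (a sketch; the full set of supported query operators isn't documented here):

// exact match on a single field - the case the GIN index serves well
const matches = await foos.find({ bar: "hello" }).run(bongo.tr)
const total = await foos.count({ bar: "hello" }).run(bongo.tr)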

Running Tests

To execute the tests, you need PostgreSQL running with user bongo and database bongo_db:

createuser bongo
createdb -O bongo bongo_db

You can also use the provided docker-compose.yml: docker-compose up.

Then, run:

pnpm test