# noorm-pg

Migrators (multi-database, multi-node), transaction blocks, and data binding for PostgreSQL, for those who don't like using an ORM.

## Install

Local package install (won't deploy the migrator script):

npm install --save noorm-pg

In order to deploy the migrator script, a global install is also required:

npm install --global noorm-pg

## Migrations

The migrator supports migration groups, each of which can reference multiple nodes (databases following the same schema). This is useful in scenarios where a given database is being scaled horizontally to multiple nodes. The basic usage is as follows:

#### Initialization

From the target package root (where name is the name of the first migration group you'd like to create):

migdb db:init [name]

This will build the basic directory structure and initialize a migrations/config.json in the package root. Each invocation of this command will create a new migration group.

A basic single-database setup, with multiple nodes in production, would look like this:

{
  "development": {
    "groups": [
      {
        "name": "customer_db",
        "nodes": [
          {
            "alias": "default",
            "connUri": "postgresql://username:password@localhost:5432/customer_db"
          }
        ]
      }
    ]
  },
  "production": {
    "groups": [
      {
        "name": "customer_db",
        "nodes": [
          {
            "alias": "customer_db_1",
            "connUri": "postgresql://username:[email protected]/customer_db"
          },
          {
            "alias": "customer_db_2",
            "connUri": "postgresql://username:[email protected]/customer_db"
          },
          {
            "alias": "customer_db_3",
            "connUri": "postgresql://username:[email protected]/customer_db"
          }
        ]
      }
    ]
  }
}

It is recommended that you do not store your connection URIs in the config file; instead, use process.env and reference the variable as follows:

{
  ...
  "connUri": "{process.env.PROD_DB_NODE_1}"
}

This will be evaluated by the migdb script as a variable rather than as a string literal.
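
The referenced environment variable must, of course, be set in the environment migdb runs in, for example (host and credentials are placeholders):

export PROD_DB_NODE_1="postgresql://username:password@db1.example.com:5432/customer_db"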

It is important to note that migdb looks at process.env.APP_ENV to determine which branch (production, development, etc.) of the config to use when running migrations. If process.env.APP_ENV is not set, migdb defaults to development.
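
For example, to run the pending migrations for the production branch of the sample config above (db:migrate is covered below; customer_db is the group name from the sample config):

APP_ENV=production migdb db:migrate customer_db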

#### Creating a migration script

migdb db:migration:create [name]

This will add a .js migration script file to the migrator group indicated by name, with the following structure:

module.exports = {
  upgrade: async (conn) => {
    // Migration code goes in here
  },
  downgrade: async (conn) => {
    // De-migration code goes in here
  },
  transactUpgrade: true,
  transactDowngrade: true
};

The connection object is discussed below. Each script exports an upgrade function, a downgrade function, and two booleans indicating whether each function should be wrapped in a transaction.
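
For example, a minimal migration script (table and column names purely illustrative) could look like this:

module.exports = {
  upgrade: async (conn) => {
    // Forward migration: create the table
    await conn.query(`
      CREATE TABLE account (
        id BIGSERIAL NOT NULL,
        email TEXT NOT NULL,
        CONSTRAINT pk_account PRIMARY KEY (id)
      )
    `);
  },
  downgrade: async (conn) => {
    // Reverse migration: drop it again
    await conn.query("DROP TABLE account");
  },
  transactUpgrade: true,
  transactDowngrade: true
};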

NOTE: migdb will attempt to wrap the entire migration/rollback run in a single transaction. It is sometimes necessary, however, to execute some SQL without a transaction. In that event, the current transaction is committed, the untransacted migrator runs, and a new transaction begins for any further pending migration scripts. It is highly recommended to execute untransacted statements in isolation, within their own migrator script, to avoid trouble if not all statements in the migrator execute successfully.
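
For instance, a migrator isolating a statement that PostgreSQL refuses to run inside a transaction might look like this (index and table names illustrative):

module.exports = {
  upgrade: async (conn) => {
    // CREATE INDEX CONCURRENTLY cannot run inside a transaction block
    await conn.query("CREATE INDEX CONCURRENTLY idx_person_last_name ON person (last_name)");
  },
  downgrade: async (conn) => {
    await conn.query("DROP INDEX CONCURRENTLY IF EXISTS idx_person_last_name");
  },
  // Untransacted in both directions, isolated in its own script per the note above
  transactUpgrade: false,
  transactDowngrade: false
};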

#### Running migrator(s)

migdb db:migrate [name]

Executes pending migrations in one or more migrator groups. name is an optional argument that limits the scope of the migration to that single migrator group; if name is not provided, all pending migrations in all migrator groups are executed. At this time, migrations are executed sequentially. As mentioned above, all pending transacted migrators run before the transaction is committed, so if any of them fails, all of them are rolled back.

#### Rolling back migrations

migdb db:migrate:undo [name] [version]

Executes a rollback up to (but not including) migrator version, which is the full file name of the migrator file, including the .js extension. Implicitly, this means that the very first migrator cannot be rolled back; the philosophy is that one could simply drop and recreate the database in that event rather than execute the rollback. There is no notion of rolling back a single migration without naming it, since in a node cluster scenario it's impossible to guarantee that all nodes are on the same migration, so providing the version is necessary.
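
For example, to roll the customer_db group back to (but not including) a given migrator (the file name here is hypothetical; use whatever db:migration:create generated):

migdb db:migrate:undo customer_db 002-add-person.js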

#### Accessing the config.json

It is convenient to be able to access the configuration defined in config.json from within your application. The Config module makes this easier.

const path = require('path');
const { Config } = require('noorm-pg');

// Static synchronous initialization
Config.initialize(path.join(__dirname, '..', 'migrations', 'config.json'));

// Access your configuration like this
const nodes = Config.getMigratorGroupNodes('my_migrator');
nodes.forEach(node => {
	console.log(node.alias);
	console.log(node.connUri);
});

The Config module is aware of variables used in your config.json file for connUri and will evaluate them for you.
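
Putting it together, here is one way to open a connection to every node in a group (a sketch; the group name matches the sample config above, and Connection is covered in the next section):

const path = require('path');
const { Config, Connection } = require('noorm-pg');

Config.initialize(path.join(__dirname, '..', 'migrations', 'config.json'));

// One Connection (pool) per node in the group
const conns = Config.getMigratorGroupNodes('customer_db')
  .map(node => new Connection(node.connUri));

// ... run queries against each node ...

conns.forEach(conn => conn.release());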

## Connection object

#### Initialization and teardown

const { Connection } = require('noorm-pg');

const conn = new Connection('postgresql://localhost/mydb');
conn.release();

The Connection object utilizes a connection pool, provided by the underlying node-pg module. The release method releases the pool's connections back to the database. See the PostgreSQL documentation on connection strings for detailed examples of a connection URI.
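
Since release should run even when a query throws, a try/finally block is a reasonable way to guarantee teardown:

const { Connection } = require('noorm-pg');

const conn = new Connection('postgresql://localhost/mydb');
try {
  // ... queries go here ...
} finally {
  // Always release the pool, even if a query failed
  conn.release();
}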

#### Querying

const results = await conn.query("SELECT field FROM table WHERE x = 1");
results.rows.forEach(row => {
	console.log(row.field);
});

query is an async function so it can be used with the await keyword to control flow.

#### Data binding

const results = await conn.query(`
	SELECT first_name, last_name FROM table
	WHERE
		age > :age AND
		title = :jobTitle
`,
{
	age: 30,
	jobTitle: 'decorator'
});

Bound data takes the form of a regular JavaScript object; only a single binding object is allowed per query.

#### Transacted callback

await conn.transact(async t => {
	await t.query(
		"INSERT INTO table (x, y, z) VALUES (:one, :two, :three)",
		{
			one: 33,
			two: 66,
			three: 'abc'
		}
	);
	await t.query(
		"SELECT * FROM table"
	);
});

Transaction blocks accept a callback function which receives a Connection object as its argument. The underlying connection is a single connection drawn from the pool of the Connection object that initiated the transaction.

#### Transactions (alternative)

const t = await conn.begin();
try {
	await t.query(
		"INSERT INTO table (x, y, z) VALUES (:one, :two, :three)",
		{
			one: 33,
			two: 66,
			three: 'abc'
		}
	);
	await t.commit();
} catch(e) {
	await t.rollback();
	throw e;
}

#### Bulk queries

When performing bulk insert/update statements, it may be useful to be able to provide an array of arrays as binding arguments. The example below illustrates how a bulk upsert can be performed in this manner.

  const { SubQuery } = require('noorm-pg');

  await conn.query(`
    CREATE TABLE IF NOT EXISTS job (
      id BIGSERIAL NOT NULL,
      name TEXT NOT NULL,
      type TEXT NOT NULL,
      CONSTRAINT pk_job PRIMARY KEY (id),
      CONSTRAINT unique_job_name UNIQUE(name)
    );

    CREATE TABLE IF NOT EXISTS person (
      id BIGSERIAL NOT NULL,
      job_id BIGINT REFERENCES job (id),
      first_name TEXT NOT NULL,
      last_name TEXT NOT NULL,
      CONSTRAINT pk_person PRIMARY KEY (id),
      CONSTRAINT unique_name UNIQUE(first_name, last_name)
    );
  `);

  await conn.bulkQuery(
    `
      INSERT INTO job (name, type)
      :VALUES
      ON CONFLICT ON CONSTRAINT unique_job_name DO NOTHING
    `, [
      ['Police', 'Govt'],
      ['Fire fighter', 'Govt'],
      ['Chef', 'Food'],
      ['Programmer', 'Tech'],
      ['Data architect', 'Tech'],
    ]
  );

  await conn.bulkQuery(
    `
      INSERT INTO person (
        job_id,
        first_name,
        last_name
      ) :VALUES
      ON CONFLICT ON CONSTRAINT unique_name DO NOTHING
    `,
    [
      [
        new SubQuery("SELECT id FROM job WHERE name = $1", ['Chef']),
        'Gordon',
        'Ramsey',
      ],
      [
        new SubQuery("SELECT id FROM job WHERE name = $1", ['Programmer']),
        'Cow',
        'Man',
      ],
      [
        null,
        'Jobless',
        'Person',
      ]
    ]
  );

That's all folks.