mgsql

A simple set of SQL utilities for managing and building SQL statements for PostgreSQL, normalizing as many of the underlying driver differences as possible.

API

const mgsql = require('mgsql');

const dbConn = mgsql.getPostgresWrap( pgConnect, 'schema1,schema2' );

// Throw an error if any of the fields match the given condition
mgsql.assert.forMissing( data, 'field1,field2' );        // accepts a CSV of field names...
mgsql.assert.forMissing( data, ['field1','field2'] );    // ...or an array of strings
mgsql.assert.forEmptyOrNull( data, 'field1,field2' );
mgsql.assert.forEmptyOrNullOrMissing( data, 'field1,field2' );

// Change the data for the given method signature
mgsql.clean.forOnlyAZaz09( data, 'field1,field2' );
mgsql.clean.forBlankToNull( data, 'field1,field2' );
mgsql.clean.forSetDefaults( data, {field1:value2} );

// queryR; passes straight through to the underlying driver and returns whatever the driver returns
// (for Postgres, .rows holds the [] of rows, whereas MySQL returned the [] directly)
dbConn.queryR( 'SELECT * FROM TABLE', [] );   // return the raw driver result

// With auto ? -> $1 conversion for Postgres
dbConn.query( 'SELECT * FROM TABLE', [] );   // return []

// SELECT, with auto column conversion so returned keys include the table name; returns the [] of rows
dbConn.select( 'SELECT * FROM TABLE', [] );   // return []
dbConn.select1( 'SELECT * FROM TABLE', [] );  // return a single struct, or null

// INSERT
dbConn.insert( 'schema1.table1', {} );   // return the ID
dbConn.insertIgnoreDuplicate( 'schema1.table1', {} );   // return the ID; or null if no insert was made

// UPDATE
dbConn.update( 'schema1.table1', {} );   // return the rows updated

// Logging the SQL/values; .log() turns logging on, .log(false) turns it off
dbConn.log();
dbConn.update( 'schema1.table1', {} );   // SQL and values are logged
dbConn.log(false);
dbConn.update( 'schema1.table1', {} );   // no logging


// Builder helpers
dbConn.run( ..builder.. )
dbConn.runWithCount( ..builder.. )
dbConn.runWithCountDistinct( ..builder.. )

.insert() / .update()

These are special methods: they perform a schema lookup on the database for the given table and check that you have supplied all the right columns, with the right data types, and have not missed any required fields. The database table definition drives the validation.

This metadata is cached, so the overhead is not incurred on every call. A 'nice' error message is returned detailing the column that is wrong and the reason it failed. Date objects are auto-marshalled.
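
For example, a minimal sketch of the validation behaviour; the table and column names here are hypothetical:

const mgsql = require('mgsql');
const dbConn = mgsql.getPostgresWrap( pgConnect, 'global' );

// hypothetical table: global.users (id serial, email text NOT NULL)
try {
  await dbConn.insert('global.users', {
    email: 'jane@example.com',
    created: new Date()   // Date objects are marshalled automatically
  });
} catch (err) {
  // on a bad column or missing required field, the error message
  // names the offending column and the reason it failed
  console.error(err.message);
}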

Builder INSERT

const mgsql = require('mgsql');
const dbConn = mgsql.getPostgresWrap( pgConnect, 'global' );

await dbConn.run(
  dbConn.buildInsert()
    .table('global.table')
    .column('a', 2)
    .column('b', 3)
    .column('c', 3)
    .ignoreDuplicate()
);

Supporting methods for re-use

  .reset()      <- reset the columns/values and the ignoreDuplicate flag

  .toSql()      <- returns the prepared SQL statement
  async .run()  <- Return the rows from the INSERT
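
A short sketch of re-use; this assumes the builder keeps its .table() across a .reset() (illustrative):

const ins = dbConn.buildInsert().table('global.table');

ins.column('a', 1);
console.log(ins.toSql());   // inspect the prepared SQL before running
await ins.run();            // rows from the INSERT

ins.reset();                // clear the columns/values and the ignoreDuplicate flag
ins.column('a', 2);
await ins.run();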

Builder UPDATE

const mgsql = require('mgsql');
const dbConn = mgsql.getPostgresWrap( pgConnect, 'global' );

await dbConn.run(
  dbConn.buildUpdate()
    .table('global.table')
    .column('a=?', 2)    // value is optional
    .column('b=?', 2)
    .where('c=?', 3)
);

Supporting methods for re-use

  .reset()                      <- reset the columns/values

  .toSql(ignoreDuplicates)      <- returns the prepared SQL statement
  async .run(ignoreDuplicates)  <- Return the number of rows updated

Builder SELECT

const mgsql = require('mgsql');
const dbConn = mgsql.getPostgresWrap( pgConnect, 'global' );

await dbConn.run(
  dbConn.buildSelect()
    .log()
    .removeNull()
    .removeErrantPeriod()
    .select('t1.col1, t2.col1')
    .selectConcat('t3.col2')
    .from('global.table1 as t1')
    .from('global.table2 as t2')
    .from({
      "left" : "global.table3 as t3",
      "join" : [
        {
          "type" : "left",                  // left (default), left outer, right, right outer, inner
          "right" : "global.table2 as tt2",
          "where" : "t3.id = tt2.id"
        }
      ]
    })
    .where('t1.id > ?', [3])
    .whereOr('t2.id != 0')
    .groupBy('')
    .orderBy('t2.id asc')
    .limit(10, 2)                         // pageSize [, pageNo] (0-based page)
);

Supporting methods for re-use

  .selectReset()
  .whereReset()
  .selectGroupBy()
  .groupByReset()
  .orderByReset()
  .limitReset()

  .toSql()                  <- returns the prepared SQL statement
  .toCountSql(distinct)     <- returns the prepared COUNT SQL (with optional DISTINCT)
  .log()                    <- Log the SQL to the console
  .removeNull()             <- Remove any keys that are null
  .removeErrantPeriod()     <- Remove the period on any keys that start with .
  .removeKeys(..)           <- Remove the keys from the result payload

  async .run()              <- Return rows
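
A quick sketch of inspecting the generated SQL before running it (illustrative):

const sel = dbConn.buildSelect()
  .select('t1.id')
  .from('global.table1 as t1')
  .where('t1.id > ?', [3]);

console.log(sel.toSql());           // the prepared SELECT statement
console.log(sel.toCountSql(true));  // the matching COUNT (DISTINCT) statement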

There is support for the popular JavaScript DataTables control, using the query block it generates and passes to the server.

        .dataTable(query [, maxItems])    <- optional maxItems caps the total rows regardless of query.length
  async .runWithCount(distinct)           <- Return the count with the rows

where query is of the following structure

{
  start : 0,                <- where to start from
  length: 10,               <- length to pull back
  selectColumns : "",       <- comma-separated list of the columns to be returned; overrides columns
  columns:[
    {
      data: ""              <- name of the column
      searchable: "true",   <- if this column is searchable
      orderable: "true",    <- if this column can be orderable
    }
  ],
  order:[
    {
      column: 2,            <- the index into 'columns' of the column to order by
      dir: "desc|asc"       <- the direction
    }
  ],
  search: {
    "value" : "value"       <- the value of the column to do a 'LIKE' against
  },
  equals: {
    "columnName" : "value"  <- the value of the column to do a '='; if an array, will be a IN
  },
  notequals: {
    "columnName" : "value"  <- the value of the column to do a '!='; if an array, will be a NOT IN
  },
  range: {
    "columnName" : {
      "f": value   <- the value for '>=' [optional]
      "t": value   <- the value for '<=' [optional]
    }  
  },
  excrange: {
    "columnName" : {
      "f": value   <- the value for '>' [optional]
      "t": value   <- the value for '<' [optional]
    }  
  }
}

For columns that contain a . (period), DataTables expects the period to be escaped in order to preserve it. So in the JS definition, "t1.id" is written as "t1\\.id".
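
Putting it together, a hedged sketch of serving a DataTables request; the table and column names are illustrative:

// hypothetical query block, as DataTables would POST it to the server
const query = {
  start: 0,
  length: 10,
  columns: [
    { data: 't1\\.id',    searchable: 'true', orderable: 'true' },
    { data: 't1\\.email', searchable: 'true', orderable: 'true' }
  ],
  order: [ { column: 0, dir: 'desc' } ],
  search: { value: 'jane' },
  equals: { 't1.status': ['active', 'pending'] }   // array value becomes an IN (...)
};

const result = await dbConn.runWithCount(
  dbConn.buildSelect()
    .select('t1.id, t1.email, t1.status')
    .from('global.users as t1')
    .dataTable(query, 1000)   // cap the total rows at 1000
);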

.batch()

You can pass in a file of statements, all to be executed, one after another. Useful for setting up and tearing down tests.

Each statement is treated as a Mustache template, and you can pass in replacements using the optional {}

.batch('./filename.sql', {
  delimiter: '',  // defaults to one statement per line; blank lines are ignored. Use something like --- to separate into blocks
  'X' : 'x1'
});
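
For instance, assuming .batch() hangs off the connection wrapper, a hypothetical ./seed.sql using the {{X}} placeholder:

// contents of the hypothetical ./seed.sql (one statement per line):
//   DELETE FROM global.table1 WHERE tag = '{{X}}';
//   INSERT INTO global.table1 ( tag ) VALUES ( '{{X}}' );

await dbConn.batch('./seed.sql', { 'X' : 'test-run' });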

Postgres

Given the way PostgreSQL handles aliasing, this library will, for all .select/.select1 calls, convert the columns returned in the rows to their fully aliased names (<table>.<column>).

Prepared parameters are marked using ?
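
A brief sketch of both behaviours; the result shape shown is illustrative:

// '?' placeholders are rewritten to $1, $2, ... for the Postgres driver
const rows = await dbConn.select(
  'SELECT * FROM global.table1 WHERE id > ? AND status = ?',
  [3, 'active']
);

// keys come back fully aliased, e.g.:
// [ { 'table1.id': 4, 'table1.status': 'active' } ]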

Release

  • 2021-07-19:
    • Logging update
  • 2021-05-13:
    • Fixed date parsing
  • 2021-04-05:
    • Added .batch()
  • 2021-02-09:
    • Range can be either/or; added excrange
  • 2021-01-19:
    • Fixed orderable flag
  • 2020-12-16:
    • Complete refactor
    • Reduce the overhead of looking up metadata
    • Dropped any thoughts of MySQL
  • 2020-11-04:
    • fixed null in clean
  • 2020-10-12:
    • updated clean to include more friendly characters
  • 2020-08-31:
    • fixed date parsing on invalid days in month
  • 2020-08-20:
    • Updated clean filter to permit .,;-
  • 2020-07-29:
    • Support for array, and notequals
  • 2020-07-21:
    • fixed dataTable maxItems
  • 2020-07-10:
    • added .insertIgnoreDuplicate
  • 2020-07-08:
    • added .range to the dataTable support
  • 2020-06-29:
    • added resetAll() on builder patterns
  • 2020-06-10:
    • added .log() / .removeNull() / .removeErrantPeriod() to builders
    • fixed the join syntax
  • 2020-06-09:
    • SELECT/INSERT/UPDATE Builder helpers
    • Auto ? -> $1
  • 2020-06-01:
    • Cleaner interpretation of select/select1/query
  • 2020-05-26:
    • Initial release