@henson/database
Database Driver Library
Table of Contents
- Install
- Breaking Change in v4
- Database Types
- Load for Specific Database Type
- PostgreSQL Client
- Oracle Database Client
- Cassandra Client
- Schema Management
- DAO (Data Access Object)
Install
yarn add @henson/database
Breaking Change in v4
- Importing submodules is no longer allowed; all database types are loaded from the top-level import. This doesn't work in v4: import from '@henson/database/oracle'.
- Support dynamic loading of the driver for a specific database type.
- Export the original database driver as driver.
Migration Guide
Follow the 'Dynamic Loading' section below to rewrite your application to load the proper database driver.
It is not necessary to import the underlying database drivers (pg, oracledb and cassandra-driver) directly in the application. They are exported in the driver namespace under each database submodule.
The application should add pg, oracledb or cassandra-driver to optionalDependencies in package.json.
// before
import * as driver from 'pg';
// after
import { DatabaseType, load } from '@henson/database';
import * as postgresql from '@henson/database/lib/postgresql';
const db = await load(DatabaseType.PostgreSQL);
(db as typeof postgresql).driver.types.setTypeParser(20, Number);
Database Types
This library supports multiple kinds of databases and exposes a similar API over their original drivers.
An enum is defined for the database types:
enum DatabaseType {
oracle = 0,
postgresql = 1,
cassandra = 2
}
The database type of a Pool is indicated by the read-only property type:
import * as pg from '@henson/database/lib/postgresql';
const pool = new pg.Pool(...);
assert(pool.type === DatabaseType.PostgreSQL);
Load for Specific Database Type
Your application does not need to install and load database drivers it never uses. For example, an application that doesn't support Oracle DB doesn't need to run in an environment with the Oracle client library installed.
There are two approaches to loading the selected database type.
Import Submodule
Directly import the submodule of the desired database type.
import * as pg from '@henson/database/lib/postgresql';
pg.driver.types.setTypeParser(20, Number); // parse BIGINT
const pool: pg.Pool = new pg.Pool(...);
Dynamic Loading
This library exposes a load function to support dynamically loading a specific database driver.
The return type of load is typeof postgresql | typeof oracle | typeof cassandra.
Before using the loaded module, narrow its type to a specific database type with an as typeof X cast.
import { memoize } from 'lodash';
import { DatabaseType, Pool, load } from '@henson/database';
// import typing
import * as postgresql from '@henson/database/lib/postgresql';
import * as oracle from '@henson/database/lib/oracle';
async function createPool(type: DatabaseType): Promise<Pool> {
// dynamic load
const db = await load(type);
// Return type of load() is `typeof postgresql | typeof oracle | typeof cassandra`
// Cast it using `as typeof`
switch (type) {
case DatabaseType.postgresql:
return new (db as typeof postgresql).Pool(...);
case DatabaseType.oracle:
return new (db as typeof oracle).Pool(...);
// ...
}
}
// Use memoization so that `createPool` is only called once for each type
const getPool = memoize(createPool);
// Use in application logic:
async function doSomething() {
const pool = await getPool(DatabaseType.postgresql);
await pool.query('SELECT 1');
}
PostgreSQL Client
The PostgreSQL client is a promisified pg API. Only the client pooling API is supported.
The configuration to initialize a pool is an options object containing properties for the database connection and the pooling strategy. The most commonly used properties are listed below. For details, refer to the client config in the pg documentation and the pool config in the generic-pool documentation.
interface PoolConfig {
// database connection
user: string;
database: string;
password?: string; // default: no password
port?: number; // default: 5432
host?: string; // default: 'localhost'
// pooling strategy
max?: number; // default: 10
min?: number; // default: 0
refreshIdle?: boolean; // default: false
idleTimeoutMillis?: number; // default: 30000
reapIntervalMillis?: number; // default: 1000 - frequency to check for idle resources
}
It has transaction support. All database queries inside a transaction should be put inside a function that returns a promise of the final result. The API automatically commits or rolls back the transaction depending on whether the promise is resolved or rejected.
For usage details, check the pg documentation.
The original pg API is exposed as driver.
The PostgreSQL server returns everything as strings. To parse raw results into JavaScript types, the application may register type parser functions. Read the pg-types documentation for details.
The query result is put inside the rows array of objects. Each object's property names are the table field names folded to lower case. The rowCount property contains the number of rows that were affected by the query.
⚠ Do not use quoted names in SQL.
ⓘ In this API, a bind parameter can be written as either ${1} or $1 (PostgreSQL native style). This provides a uniform style in our API for both Oracle and PostgreSQL.
import { DatabaseType, Query, load } from '@henson/database';
import * as postgresql from '@henson/database/lib/postgresql';
const db = await load(DatabaseType.PostgreSQL);
// To parse BIGINT into number in JavaScript
(db as typeof postgresql).driver.types.setTypeParser(20, Number);
const pool = new (db as typeof postgresql).Pool({
user: 'henson',
password: 'henson',
database: 'henson',
host: '127.0.0.1',
port: 5432,
max: 3
});
async function earliestTaskTime(zone: string, bucket: number): Promise<number> {
const result = await pool.query(
'SELECT fromtime FROM timertask WHERE zone=${1} AND bucket=${2} ORDER BY fromtime ASC LIMIT 1',
[zone, bucket]
);
return (result.rows.length > 0) ? result.rows[0].fromtime : 0;
}
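// rowCount example: rowCount holds the number of rows affected by a statement.
// A sketch, assuming a hypothetical `counters` table exists in the same database.
async function bumpCounters(zone: string): Promise<number> {
  const result = await pool.query('UPDATE counters SET value = value + 1 WHERE zone=${1}', [zone]);
  return result.rowCount; // number of rows affected by the UPDATE
}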
// transaction example
const insertRowsInTransaction = async function (query: Query): Promise<string> {
  await query('INSERT INTO unittest_pg (key, value) VALUES (${1}, ${2})', ['first', 'alpha']);
  await query('INSERT INTO unittest_pg (key, value) VALUES (${1}, ${2})', ['second', 'beta']);
  return 'I got some result';
};
try {
const result = await pool.transaction<string>(insertRowsInTransaction);
console.log(result); // --> I got some result
} catch (err) {
console.log('Transaction rolled back');
}
When the application is gracefully shut down, you should call await pool.end() to release the pool.
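For example, a minimal shutdown hook (a sketch, assuming the pool created above and a Node.js process that receives SIGTERM):
process.on('SIGTERM', async () => {
  try {
    await pool.end(); // drain and close all pooled connections
  } finally {
    process.exit(0);
  }
});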
Oracle Database Client
The Oracle Database client is a promisified wrapper API over node-oracledb. The style of this API is kept almost identical to that of the PostgreSQL client, so that applications can switch between databases easily.
Usage
Supports single query mode (auto commit) and transaction mode. Except for the properties passed into the Pool constructor, the code is the same as when using the PostgreSQL client.
⚠ SQL may not be the same in Oracle and PostgreSQL.
The query result is put inside the rows array of objects. Each object's property names are the table field names folded to lower case. The rowCount property contains the number of rows that were affected by the query.
Specify column names in the SELECT SQL and don't use *, so that the query result objects have the correct property names. When a function is used in SELECT, give it an alias.
SELECT count(id) as cnt FROM movie
^^^ result.rows[0].cnt
⚠ Do not use quoted column names in SQL.
ⓘ In this API, a bind parameter can be written as either ${1} or :1 (Oracle native style). This provides a uniform style in our API for both Oracle and PostgreSQL.
⚠ Bind parameters MUST be used, even when the value is a constant. The bind parameter numbers MUST be in order from left to right.
UPDATE movie_table SET name='Super Man' WHERE id=100
must use bind parameter ^^^^^^^^^^^ ^^^
SELECT name FROM table1 WHERE id=${2} AND type=${1}
wrong order ^^^^ ^^^^
Example
import { DatabaseType, Query, load } from '@henson/database';
import * as oracle from '@henson/database/lib/oracle';
const db = await load(DatabaseType.Oracle);
const pool = new (db as typeof oracle).Pool({
connectString: 'localhost:1521/henson',
user: 'henson',
password: 'henson',
poolMax: 10
}, {
maxRows: 200
});
async function earliestTaskTime(zone: string, bucket: number): Promise<number> {
const result = await pool.query(
'SELECT fromtime FROM timertask WHERE zone=${1} AND bucket=${2} ' +
'ORDER BY fromtime ASC FETCH FIRST 1 ROWS ONLY',
[zone, bucket]
);
return (result.rows.length > 0) ? result.rows[0].fromtime : 0;
}
// transaction example
const insertRowsInTransaction = async function (query: Query): Promise<string> {
  await query('INSERT INTO unittest_pg (key, value) VALUES (${1}, ${2})', ['first', 'alpha']);
  await query('INSERT INTO unittest_pg (key, value) VALUES (${1}, ${2})', ['second', 'beta']);
  return 'I got some result';
};
try {
const result = await pool.transaction<string>(insertRowsInTransaction);
console.log(result); // --> I got some result
} catch (err) {
console.log('Transaction rolled back');
}
For usage details, check the node-oracledb API documentation.
The original oracledb API is exposed as driver.
node-oracledb applies a maxRows limit to each query. The default value is 100. If your application needs to override this value, set maxRows when constructing the pool. If you need to ignore maxRows on every query, set maxRows to 0 when constructing the pool.
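For example, a sketch that reuses the db handle and constructor arguments from the example above; the second options object carries maxRows:
// Set maxRows to 0 in the second options object to remove the limit on every query.
const uncappedPool = new (db as typeof oracle).Pool({
  connectString: 'localhost:1521/henson',
  user: 'henson',
  password: 'henson',
  poolMax: 10
}, {
  maxRows: 0
});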
Limitation
BLOB data is not supported for now. (Can be added if required)
Cassandra Client
The Cassandra client is a wrapper over the official cassandra-driver API that provides an interface in a similar style to the PostgreSQL and Oracle clients. However, it doesn't share the same interface as the SQL client APIs, since NoSQL has some essential differences from SQL.
The configuration to initialize a pool is an options object containing properties for the database connection and the pooling strategy.
The most commonly used properties are listed below. For details, refer to the client config and usage in the Cassandra documentation and the pool config in the subsection of this document.
interface PoolConfig {
contactPoints: Array<string>,
keyspace?: string,
policies?: {
addressResolution?: policies.addressResolution.AddressTranslator,
loadBalancing?: policies.loadBalancing.LoadBalancingPolicy,
reconnection?: policies.reconnection.ReconnectionPolicy,
retry?: policies.retry.RetryPolicy
},
queryOptions?: QueryOptions,
pooling?: {
heartBeatInterval: number,
coreConnectionsPerHost: { [key: number]: number; },
warmup: boolean;
},
protocolOptions?: {
port: number,
maxSchemaAgreementWaitSeconds: number,
maxVersion: number
},
socketOptions?: {
connectTimeout: number,
defunctReadTimeoutThreshold: number,
keepAlive: boolean,
keepAliveDelay: number,
readTimeout: number,
tcpNoDelay: boolean,
coalescingThreshold: number
},
authProvider?: auth.AuthProvider,
sslOptions?: any,
encoding?: {
map: Function,
set: Function,
copyBuffer: boolean,
useUndefinedAsUnset: boolean
}
}
For usage details, check the cassandra-driver documentation.
The original cassandra-driver API is exposed as driver.
The Cassandra driver converts CQL data types to JavaScript data types according to the mapping table in the cassandra-driver documentation. The application may globally override the parser function for a specific type. The example below demonstrates a common use case: parsing bigint, which here represents a timestamp, into number.
⚠ The type parser override is a hack into cassandra-driver. You need some knowledge of its encoder source code to write a parser function.
ⓘ In this API, a bind parameter can be written as either ${1} or ? (Cassandra native style).
import { DatabaseType, load } from '@henson/database';
import * as cassandra from '@henson/database/lib/cassandra';
const db = await load(DatabaseType.Cassandra) as typeof cassandra;
// type parser must set before pool creation
db.setTypeParser(db.driver.types.dataTypes.bigint,
function decodeBigInt(bytes: Buffer): number {
return this.decodeLong(bytes).toNumber(); // tslint:disable-line: no-invalid-this
}
);
const pool = new db.Pool({
contactPoints: ['127.0.0.1'],
keyspace: 'henson'
});
// query example
async function earliestTaskTime(zone: string, bucket: number): Promise<number> {
const result = await pool.query(
'SELECT fromtime FROM timertask WHERE zone=${1} AND bucket=${2} ORDER BY fromtime',
[zone, bucket]
);
return (result.rows.length > 0) ? result.rows[0].fromtime : 0;
}
// batch example
async function batchRemove(): Promise<void> {
const batchQueue = [
{
query: 'DELETE FROM unittest_cassandra WHERE key=${1}',
params: ['no-1'],
},
{
query: 'DELETE FROM unittest_cassandra WHERE key=${1}',
params: ['no-2'],
},
];
const batchDeleteResult = await pool.batch(batchQueue);
}
When the application is gracefully shut down, you should call await pool.end() to release the pool.
Schema Management
Read SQL Schema Management on Henson-Book for usage and examples.
It supports PostgreSQL, Oracle and Cassandra.
interface UpgradeSql {
[fromRev: number]: string | string[];
}
interface SchemaOptions {
tableName: string;
rev: number;
createSql: string | string[];
upgradeSql: UpgradeSql;
}
async function checkSchema(options: SchemaOptions, pool: oracle.Pool | pg.Pool | cassandra.Pool): Promise<void>
PostgreSQL supports running multiple queries at a time. Just put all the DDLs in one string, separated by ;.
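For example, a sketch of a PostgreSQL schema definition with hypothetical table and DDL statements; it assumes SchemaOptions and checkSchema are exported from the top-level module:
// A sketch; assumes checkSchema and SchemaOptions are top-level exports.
import { checkSchema, SchemaOptions } from '@henson/database';
const timertaskSchema: SchemaOptions = {
  tableName: 'timertask',
  rev: 2,
  // PostgreSQL: multiple DDLs in a single string, separated by ';'
  createSql:
    'CREATE TABLE timertask (id BIGINT PRIMARY KEY, zone TEXT, bucket INT, fromtime BIGINT);' +
    'CREATE INDEX idx_timertask_fromtime ON timertask (fromtime)',
  upgradeSql: {
    // DDLs to upgrade a rev 1 schema to the current rev
    1: 'CREATE INDEX idx_timertask_zone ON timertask (zone)'
  }
};
await checkSchema(timertaskSchema, pool); // pool is a pg.Pool created as shown earlier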
In Oracle you need to put DDLs into a block like this:
BEGIN
EXECUTE IMMEDIATE 'DROP INDEX idx_timertask_id_service';
EXECUTE IMMEDIATE 'CREATE INDEX idx_timertask_fromtime ON timertask (fromtime)';
END;
In Cassandra, the DDLs have to be executed individually. Provide an array of queries in createSql and in each upgrade path of upgradeSql; the queries are executed one by one.
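For example, a sketch of the same hypothetical schema for Cassandra, with createSql and the upgrade path given as arrays:
const timertaskSchemaCassandra: SchemaOptions = {
  tableName: 'timertask',
  rev: 2,
  // Cassandra: each DDL is a separate entry and is executed one by one
  createSql: [
    'CREATE TABLE timertask (id bigint PRIMARY KEY, zone text, bucket int, fromtime bigint)',
    'CREATE INDEX ON timertask (fromtime)'
  ],
  upgradeSql: {
    1: ['CREATE INDEX ON timertask (zone)']
  }
};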
DAO (Data Access Object)
Use the dao module to support transparent client-side column encryption/decryption for the database. It supports PostgreSQL and Oracle.
dao uses an SQL-like DSL, so developers don't need to care about the differences among different databases.
⚠ The encrypted columns only support string values.
⚠ The encrypted columns only support exact-match queries, which are =, !=, <>, IN, NOT IN, IS NULL and IS NOT NULL; they do not support comparison queries such as >, <, BETWEEN...AND..., etc.
Creation
import { DatabaseType, dao, load } from '@henson/database';
import * as postgresql from '@henson/database/lib/postgresql';
const db = await load(DatabaseType.PostgreSQL);
const pool = new (db as typeof postgresql).Pool({
user: 'henson',
password: 'henson',
database: 'henson',
host: '127.0.0.1',
port: 5432,
max: 3
});
// create dao
const testDao = dao.create(pool);
// set default table - optional
testDao.table('test');
// set encryption columns - optional
testDao.encrypt(['key']);
// check whether column is encrypted
testDao.isEncryptedColumn('key');
// Or chain the calls
const chainDao = dao.create(pool).table('test').encrypt(['key']);
Insert Statement
The values stored in encrypted columns are encrypted transparently.
// the value stored in column `key` of table `test` is encrypted in database
const testDao = dao.create(pool).table('test').encrypt(['key']);
await testDao.insert({
key: 'key2',
count: 2,
value: 'value2'
}).exec();
Update Statement
The values stored in encrypted columns are encrypted transparently.
The values in where conditions are encrypted transparently.
⚠ If the update value is an expression, it cannot be encrypted.
// the value stored in column `key` of table `test` is encrypted in database
const testDao = dao.create(pool).table('test').encrypt(['key']);
await testDao.update().set({
key: 'key1',
count: 1,
value: 'value1'
}).where(dao.condition([
dao.column('key').eq('111')
])).exec();
await testDao.update().set({
key: 'key1',
count: {
expression: 'count + 1' // use a raw SQL expression instead of a literal value
},
value: 'value1'
}).where(dao.condition([
dao.column('key').eq('111')
])).exec();
Delete Statement
The values in where conditions are encrypted transparently.
// the value stored in column `key` of table `test` is encrypted in database
const testDao = dao.create(pool).table('test').encrypt(['key']);
await testDao.delete().where(dao.condition([
dao.column('key').eq('111')
])).exec();
Select Statement
The values returned are decrypted transparently.
The values in where conditions are encrypted transparently.
⚠ The forUpdate function can't be used together with groupBy.
⚠ The forUpdate function can only be used in a transaction.
// the value stored in column `key` of table `test` is encrypted in database
const testDao = dao.create(pool).table('test').encrypt(['key']);
await testDao.select('*').where(dao.condition([
dao.column('key').eq('111')
])).exec();
// set row number returned in result
await testDao.select('*').where(dao.condition([
dao.column('key').eq('111')
])).rowNum(2).exec();
// SELECT ... FOR UPDATE.
// For Oracle, the parameter is the FOR UPDATE wait time in seconds.
// For Postgres, the parameter can be omitted.
await pool.transaction(async (query): Promise<boolean> => {
const result = await testDao.select('*').where(dao.condition([
dao.column('key').in([row1.key, row2.key]),
])).forUpdate().exec(query);
//then update
await testDao.update().set({
value: 'test'
}).where(dao.condition([
dao.column('key').in([row1.key, row2.key]),
])).exec(query);
return true;
});
Normal Query Statement
Developers can simply pass SQL to the query function.
⚠ The query API is not recommended if the database needs encryption, because dao only decrypts the values returned by the database; developers would have to encrypt the values in the SQL manually.
// the value stored in column `key` of table `test` is encrypted in database
const testDao = dao.create(pool).encrypt(['key']);
await testDao.query('select * from test where key = ${1}', ['111']).exec();
Transaction
// the value stored in column `key` of table `test` is encrypted in database
const testDao = dao.create(pool).table('test').encrypt(['key']);
await pool.transaction(async (query): Promise<boolean> => {
await dao.create(pool).delete().from('unittest_dao').exec(query);
await testDao.insert({
key: 'key1',
count: 1,
value: 'value1'
}).exec(query);
await testDao.update().set({
key: 'key1',
count: 1,
value: 'value2'
}).exec(query);
return true;
});
Where Conditions
interface Condition {
or(conditions: ColumnCondition[]): Condition;
}
interface ColumnCondition {
// =, equal
eq(value: number | string): ColumnCondition;
// >, greater than
gt(value: number | string): ColumnCondition;
// <, less than
lt(value: number | string): ColumnCondition;
// >=, greater than or equal
ge(value: number | string): ColumnCondition;
// <=, less than or equal
le(value: number | string): ColumnCondition;
// <>, not equal
neq(value: number | string): ColumnCondition;
// BETWEEN
between(values: [number, number] | [string, string]): ColumnCondition;
// IN
in(value: number[] | string[]): ColumnCondition;
// NOT IN
notIn(value: number[] | string[]): ColumnCondition;
// IS NULL
isNull(): ColumnCondition;
// IS NOT NULL
isNotNull(): ColumnCondition;
}
function column(columnName: string): ColumnCondition;
function condition(conditions: ColumnCondition[]): Condition;
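For example, a sketch that combines these builders on the testDao from the Creation section, with hypothetical column names; it assumes the conditions in the array are ANDed together and that or() appends an OR group:
// A sketch with hypothetical column values; testDao is created as shown in the Creation section.
const result = await testDao.select('*').where(
  dao.condition([
    dao.column('count').between([1, 10]),
    dao.column('value').notIn(['a', 'b'])
  ]).or([
    dao.column('key').isNull()
  ])
).exec();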