loopback-connector-postgresql
PostgreSQL is a popular open-source object-relational database. The loopback-connector-postgresql module is the PostgreSQL connector for the LoopBack framework.
Installation
In your application root directory, enter this command to install the connector:
$ npm install loopback-connector-postgresql --save
This installs the module from npm and adds it as a dependency to the application's package.json file.
If you create a PostgreSQL data source using the data source generator as described below, you don't have to do this, since the generator will run npm install for you.
Creating a data source
For LoopBack 4 users, use the LoopBack 4 command-line interface to generate a DataSource with the PostgreSQL connector for your LB4 application. Run lb4 datasource; it will prompt for configurations such as host, port, etc. that are required to connect to a PostgreSQL database.
After setting it up, the configuration can be found under src/datasources/<DataSourceName>.datasource.ts, which would look like this:
const config = {
name: 'db',
connector: 'postgresql',
url: '',
host:'localhost',
port: 5432,
user: 'user',
password: 'pass',
database: 'testdb',
};
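For reference, lb4 datasource also wraps this config in a DataSource class and binds it to the application. The following is a minimal sketch of the rest of the generated file, assuming the class name DbDataSource; details may vary by CLI version, and in the generated file the imports appear at the top.
import {inject, lifeCycleObserver, LifeCycleObserver} from '@loopback/core';
import {juggler} from '@loopback/repository';

// `config` is the object shown above
@lifeCycleObserver('datasource')
export class DbDataSource extends juggler.DataSource implements LifeCycleObserver {
  static dataSourceName = 'db';
  static readonly defaultConfig = config;

  constructor(
    // allows the config to be overridden at runtime via the datasources.config.db binding
    @inject('datasources.config.db', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}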
For LoopBack 3 users, use the data source generator to add a PostgreSQL data source to your application. The generator will prompt for the database server hostname, port, and other settings required to connect to a PostgreSQL database. It will also run the npm install command above for you.
The entry in the application's /server/datasources.json will look like this:
{% include code-caption.html content="/server/datasources.json" %}
"mydb": {
"name": "mydb",
"connector": "postgresql"
"host": "mydbhost",
"port": 5432,
"url": "postgres://admin:admin@mydbhost:5432/db1?ssl=false",
"database": "db1",
"password": "admin",
"user": "admin",
"ssl": false
}
Edit datasources.json to add other properties that enable you to connect the data source to a PostgreSQL database.
Connection Pool Settings
You can also specify connection pool settings in <DataSourceName>.datasource.ts (or datasources.json for LB3 users). For instance, you can specify the minimum and maximum pool size, and the maximum idle time of a pool client before it is closed.
Example of db.datasource.ts:
const config = {
name: 'db',
connector: 'postgresql',
url: '',
host: 'localhost',
port: 5432,
user: 'user',
password: 'pass',
database: 'testdb',
min: 5,
max: 200,
idleTimeoutMillis: 60000,
ssl: false
};
Check out node-pg-pool and the node-postgres pooling example for more information.
Configuration options
NOTE: By default, the 'public' schema is used for all tables.
The PostgreSQL connector uses node-postgres as the driver. For more information about configuration parameters, see node-postgres documentation.
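Because the datasource settings are handed to the driver, additional node-postgres options can usually be supplied alongside the connector settings shown above. The following is only a sketch of a TLS-enabled configuration; the certificate path is a placeholder, and whether you need an ssl object at all depends on your server setup.
import {readFileSync} from 'fs';

const config = {
  name: 'db',
  connector: 'postgresql',
  host: 'localhost',
  port: 5432,
  user: 'user',
  password: 'pass',
  database: 'testdb',
  // node-postgres accepts an ssl object with TLS options such as a CA certificate
  ssl: {
    rejectUnauthorized: true,
    ca: readFileSync('/path/to/ca.crt').toString(),
  },
};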
Connecting to UNIX domain socket
A common PostgreSQL configuration is to connect to the UNIX domain socket /var/run/postgresql/.s.PGSQL.5432 instead of using the TCP/IP port. For example:
const config = {
name: 'db',
connector: 'postgresql',
url: '',
host: '/var/run/postgresql/',
port: 5432,
user: 'user',
password: 'pass',
database: 'testdb',
debug: true
};
Defining models
LoopBack allows you to specify some database settings through the model definition and/or the property definition. These definitions are mapped to the database. Please check out the CLI lb4 model for generating LB4 models. The following is a typical LoopBack 4 model that specifies the schema, table, and column details through the model definition and property definitions:
@model({
settings: { postgresql: { schema: 'public', table: 'inventory'} },
})
export class Inventory extends Entity {
@property({
type: 'number',
required: true,
scale: 0,
id: 1,
postgresql: {
columnName: 'id',
dataType: 'integer',
dataLength: null,
dataPrecision: null,
dataScale: 0,
nullable: 'NO',
},
})
id: number;
@property({
type: 'string',
postgresql: {
columnName: 'name',
dataType: 'text',
dataLength: null,
dataPrecision: null,
dataScale: null,
nullable: 'YES',
},
})
name?: string;
@property({
type: 'boolean',
required: true,
postgresql: {
columnName: 'available',
dataType: 'boolean',
dataLength: null,
dataPrecision: null,
dataScale: null,
nullable: 'NO',
},
})
available: boolean;
constructor(data?: Partial<Inventory>) {
super(data);
}
}
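To use this model against the PostgreSQL datasource, it is typically wired to a repository. A minimal sketch follows, assuming the datasource class is called DbDataSource and is bound as datasources.db (both names are assumptions that follow the earlier datasource example):
import {inject} from '@loopback/core';
import {DefaultCrudRepository} from '@loopback/repository';

export class InventoryRepository extends DefaultCrudRepository<
  Inventory,
  typeof Inventory.prototype.id
> {
  constructor(@inject('datasources.db') dataSource: DbDataSource) {
    // CRUD calls on this repository are translated by the connector into SQL
    // against the public.inventory table configured in the model settings above.
    super(Inventory, dataSource);
  }
}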
The model definition consists of the following properties: name, options (connector-specific settings such as the schema and table), and properties (the property definitions, including the column mapping). For example:
{% include code-caption.html content="/common/models/model.json" %}
{
"name": "Inventory",
"options": {
"idInjection": false,
"postgresql": {
"schema": "strongloop",
"table": "inventory"
}
},
"properties": {
"id": {
"type": "String",
"required": false,
"length": 64,
"precision": null,
"scale": null,
"postgresql": {
"columnName": "id",
"dataType": "character varying",
"dataLength": 64,
"dataPrecision": null,
"dataScale": null,
"nullable": "NO"
}
},
"productId": {
"type": "String",
"required": false,
"length": 20,
"precision": null,
"scale": null,
"id": 1,
"postgresql": {
"columnName": "product_id",
"dataType": "character varying",
"dataLength": 20,
"dataPrecision": null,
"dataScale": null,
"nullable": "YES"
}
},
"locationId": {
"type": "String",
"required": false,
"length": 20,
"precision": null,
"scale": null,
"id": 1,
"postgresql": {
"columnName": "location_id",
"dataType": "character varying",
"dataLength": 20,
"dataPrecision": null,
"dataScale": null,
"nullable": "YES"
}
},
"available": {
"type": "Number",
"required": false,
"length": null,
"precision": 32,
"scale": 0,
"postgresql": {
"columnName": "available",
"dataType": "integer",
"dataLength": null,
"dataPrecision": 32,
"dataScale": 0,
"nullable": "YES"
}
},
"total": {
"type": "Number",
"required": false,
"length": null,
"precision": 32,
"scale": 0,
"postgresql": {
"columnName": "total",
"dataType": "integer",
"dataLength": null,
"dataPrecision": 32,
"dataScale": 0,
"nullable": "YES"
}
}
}
}
To learn more about specifying database settings, please check the section Data Mapping Properties.
Type mapping
See LoopBack 4 types (or LoopBack 3 types) for details on LoopBack's data types.
LoopBack to PostgreSQL types
Besides the basic LoopBack types introduced above, you can also specify the database type for model properties; it will be mapped to the database (see Data Mapping Properties). For example, if we would like the property price to have the database type double precision in the corresponding table, we specify it as follows:
@property({
type: 'number',
postgresql: {
dataType: 'double precision',
},
})
price?: number;
"properties": {
// ..
"price": {
"type": "Number",
"postgresql": {
"dataType": "double precision",
}
},
{% include warning.html content="Not all database types are supported for CRUD operations and queries with filters. For example, type Array cannot be filtered correctly; see GitHub issues #441 and #342." %}
PostgreSQL types to LoopBack
Numeric Data Type
Note: The Node.js driver for PostgreSQL by default casts the Numeric type as a string on GET operations. This is to avoid data precision loss, since Numeric values in PostgreSQL cannot be safely converted to the JavaScript Number type.
For details, see the corresponding driver issue.
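As an illustration, a property backed by a NUMERIC column can be declared as a string on the LoopBack side so that values round-trip without precision loss. This is a sketch only; the property name and precision are assumptions:
@property({
  // declared as 'string' because the driver returns NUMERIC values as strings
  type: 'string',
  postgresql: {
    dataType: 'numeric',
    dataPrecision: 12,
    dataScale: 2,
  },
})
balance?: string;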
Querying JSON fields
Note: The fields you are querying should be set up to use the JSON PostgreSQL data type - see Defining models above.
Assuming a model such as this:
@property({
  type: 'object',
  postgresql: {
    dataType: 'json',
  },
})
address?: object;
You can query the nested fields with dot notation:
CustomerRepository.find({
  where: {
    'address.state': 'California',
  },
  order: 'address.city',
});
Extended operators
The connector supports the following PostgreSQL-specific operators.
Please note that extended operators are disabled by default; you must enable them at the datasource level or the model level by setting allowExtendedOperators to true.
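For example, to enable them for every model attached to a datasource, add the flag to the datasource config shown earlier (a sketch; the model-level alternative appears in the settings of the examples below):
const config = {
  name: 'db',
  connector: 'postgresql',
  host: 'localhost',
  port: 5432,
  user: 'user',
  password: 'pass',
  database: 'testdb',
  // opt in to PostgreSQL-specific operators such as contains, containedBy, containsAnyOf and match
  allowExtendedOperators: true,
};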
Operator contains
The contains operator allows you to query array properties and pick only rows where the stored value contains all of the items specified by the query. The operator is implemented using the PostgreSQL array operator @>.
Note: The fields you are querying must be set up to use the PostgreSQL array data type - see Defining models above.
Assuming a model such as this:
@model({
settings: {
allowExtendedOperators: true,
}
})
class Post {
@property({
type: ['string'],
postgresql: {
dataType: 'varchar[]',
},
})
categories?: string[];
}
You can query the categories field as follows:
const posts = await postRepository.find({
  where: {
    categories: {contains: ['AA']},
  },
});
Operator containedBy
The containedBy operator is the inverse of the contains operator: it allows you to query array properties and pick only rows where all the items in the stored value are contained by the query. The operator is implemented using the PostgreSQL array operator <@.
Note: The fields you are querying must be set up to use the PostgreSQL array data type - see Defining models above.
Assuming a model such as this:
@model({
settings: {
allowExtendedOperators: true,
}
})
class Post {
@property({
type: ['string'],
postgresql: {
dataType: 'varchar[]',
},
})
categories?: string[];
}
You can query the categories field as follows:
const posts = await postRepository.find({
  where: {
    categories: {containedBy: ['AA']},
  },
});
Operator containsAnyOf
The containsAnyOf operator allows you to query array properties and pick only rows where any of the items in the stored value matches any of the items in the query. The operator is implemented using the PostgreSQL array overlap operator &&.
Note: The fields you are querying must be set up to use the PostgreSQL array data type - see Defining models above.
Assuming a model such as this:
@model({
settings: {
allowExtendedOperators: true,
}
})
class Post {
@property({
type: ['string'],
postgresql: {
dataType: 'varchar[]',
},
})
categories?: string[];
}
You can query the categories field as follows:
const posts = await postRepository.find({
  where: {
    categories: {containsAnyOf: ['AA']},
  },
});
Operator match
The match operator allows you to perform a full text search using the @@ operator in PostgreSQL.
Assuming a model such as this:
@model({
settings: {
allowExtendedOperators: true,
}
})
class Post {
@property({
type: 'string',
})
content: string;
}
You can query the content field as follows:
const posts = await postRepository.find({
  where: {
    content: {match: 'someString'},
  },
});
Discovery and auto-migration
Model discovery
The PostgreSQL connector supports model discovery, which enables you to create LoopBack models based on an existing database schema. Once you have defined your datasource:
- LoopBack 4 users can use the command lb4 discover to discover models.
- For LB3 users, please check Discovering models from relational databases.
(See the database discovery API for related API information.)
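As a rough illustration, discovery can also be driven programmatically through the juggler DataSource discovery APIs. This is a sketch only: it assumes a connected datasource instance named db, and depending on the juggler version these methods may expect a node-style callback instead of returning a promise.
// Discover a LoopBack schema definition for an existing table in the public
// schema; the result can be used as a starting point for a model definition.
const schemas = await db.discoverSchemas('inventory', {schema: 'public'});
console.log(JSON.stringify(schemas, null, 2));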
Auto-migration
The PostgreSQL connector also supports auto-migration that enables you to create a database schema from LoopBack models.
For example, based on the following model, the auto-migration method would create or alter the existing customer table under the public schema in the database. The customer table would have two columns, name and id, where id is the primary key and defaults to SERIAL because it is defined with type: 'Number' and generated: true:
@model()
export class Customer extends Entity {
@property({
id: true,
type: 'Number',
generated: true
})
id: number;
@property({
type: 'string'
})
name: string;
}
By default, tables generated by the auto-migration are created under the public schema and named in lowercase.
Besides the basic model metadata, LoopBack allows you to specify part of the database schema definition via the property definition, which would be mapped to the database.
For example, based on the following model, after running the auto-migration script, a table named CUSTOMER will be created under the schema market. Moreover, you can also use different names for a property and its corresponding column. In this example, by specifying the column name, the property name will be mapped to the customer_name column. This is useful when your database has a different naming convention than LoopBack (camelCase).
@model({
  settings: {
    postgresql: {schema: 'market', table: 'CUSTOMER'},
  },
})
export class Customer extends Entity {
@property({
id: true,
type: 'Number',
generated: true
})
id: number;
@property({
type: 'string',
postgresql: {
columnName: 'customer_name'
}
})
name: string;
}
For how to run the script and more details:
- For LB4 users, please check Database Migration
- For LB3 users, please check Creating a database schema from models
(See the LoopBack auto-migrate method for related API information.)
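As a quick illustration, auto-migration can also be triggered programmatically on the datasource (a sketch; it assumes a datasource instance named db with the Customer model attached):
// automigrate drops and re-creates the tables for the listed models (destructive),
// while autoupdate tries to alter the existing tables in place.
await db.automigrate(['Customer']);
// or, to preserve existing data where possible:
await db.autoupdate(['Customer']);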
Here are some limitations and tips:
- If you define generated: true in the id property, it generates integers by default. For auto-generated uuids, see Auto-generated ids below.
- Only the id property supports the auto-generation setting generated: true for now.
- Auto-migration doesn't create foreign key constraints by default, but they can be defined through the model definition. See Auto-migrate/Auto-update models with foreign keys below.
- Destroying models may result in errors due to foreign key integrity. First delete any related models by calling delete on models with relationships.
Auto-migrate/Auto-update models with foreign keys
Foreign key constraints can be defined in the model definition.
Note: The order of table creation is important. A referenced table must exist before creating a foreign key constraint.
Define your models and the foreign key constraints as follows:
customer.model.ts:
@model()
export class Customer extends Entity {
@property({
id: true,
type: 'Number',
generated: true
})
id: number;
@property({
type: 'string'
})
name: string;
}
order.model.ts:
@model({
settings: {
foreignKeys: {
fk_order_customerId: {
name: 'fk_order_customerId',
entity: 'Customer',
entityKey: 'id',
foreignKey: 'customerId',
onDelete: 'CASCADE',
onUpdate: 'SET NULL'
},
},
  },
})
export class Order extends Entity {
@property({
id: true,
type: 'Number',
generated: true
})
id: number;
@property({
type: 'string'
})
name: string;
@property({
type: 'Number'
})
customerId: number;
}
In LoopBack 3, the equivalent model definitions look like this:
{
"name": "Customer",
"options": {
"idInjection": false
},
"properties": {
"id": {
"type": "Number",
"id": 1
},
"name": {
"type": "String",
"required": false
}
}
}
{
"name": "Order",
"options": {
"idInjection": false,
"foreignKeys": {
"fk_order_customerId": {
"name": "fk_order_customerId",
"entity": "Customer",
"entityKey": "id",
"foreignKey": "customerId",
"onDelete": "CASCADE",
"onUpdate": "SET NULL"
}
}
},
"properties": {
"id": {
"type": "Number"
"id": 1
},
"customerId": {
"type": "Number"
},
"description": {
"type": "String",
"required": false
}
}
}
Auto-generated ids
Auto-migrate supports the automatic generation of property values for the id property. For PostgreSQL, the default id type is integer. Thus if you have generated: true
in the id property, it generates integers by default:
{
id: true,
type: 'Number',
required: false,
generated: true // enables auto-generation
}
It is common to use UUIDs as the primary key in PostgreSQL instead of integers. You can enable this in either of the following ways:
- Use a uuid that is generated by your LB application by setting defaultFn: 'uuid':
@property({
id: true,
type: 'string',
defaultFn: 'uuid',
// generated: true, -> not needed
})
id: string;
- Use PostgreSQL built-in uuid functions (provided by an extension):
@property({
id: true,
type: 'String',
required: false,
// settings below are needed
generated: true,
useDefaultIdType: false,
postgresql: {
dataType: 'uuid',
},
})
id: string;
This setting uses the uuid-ossp extension and the uuid_generate_v4() function by default.
If you'd like to use other extensions and functions, you can do:
@property({
id: true,
type: 'String',
required: false,
// settings below are needed
generated: true,
useDefaultIdType: false,
postgresql: {
dataType: 'uuid',
extension: 'myExtension',
defaultFn: 'myuuid'
},
})
id: string;
WARNING: It is the users' responsibility to make sure the provided extension and function are valid.
Module Long Term Support Policy
This module adopts the Module Long Term Support (LTS) policy, with the following End Of Life (EOL) dates:
| Version | Status     | Published | EOL                |
| ------- | ---------- | --------- | ------------------ |
| 5.x     | Current    | Apr 2020  | Apr 2023 (minimum) |
| 3.x     | Active LTS | Mar 2017  | Apr 2022           |
Learn more about our LTS plan in docs.
Running tests
Own instance
If you have a local or remote PostgreSQL instance and would like to use that to run the test suite, use the following command:
- Linux
POSTGRESQL_HOST=<HOST> POSTGRESQL_PORT=<PORT> POSTGRESQL_USER=<USER> POSTGRESQL_PASSWORD=<PASSWORD> POSTGRESQL_DATABASE=<DATABASE> CI=true npm test
- Windows
SET POSTGRESQL_HOST=<HOST>
SET POSTGRESQL_PORT=<PORT>
SET POSTGRESQL_USER=<USER>
SET POSTGRESQL_PASSWORD=<PASSWORD>
SET POSTGRESQL_DATABASE=<DATABASE>
SET CI=true
npm test
Docker
If you do not have a local PostgreSQL instance, you can also run the test suite with very minimal requirements.
- Assuming you have Docker installed, run the following script, which spawns a PostgreSQL instance on your local machine:
source setup.sh <HOST> <PORT> <USER> <PASSWORD> <DATABASE>
where <HOST>, <PORT>, <USER>, <PASSWORD> and <DATABASE> are optional parameters. The default values are localhost, 5432, root, pass and testdb respectively.
- Run the test:
npm test