ut-port v6.45.2 - UT port base
# UT Port

Base class, usually extended when creating new ports.
## Scope

- Define base API for all ports.
- Define facade API for managing multiple ports.
- Initialize logging.
- Initialize metrics.
## API

This module exports two APIs:

```js
{
    Port,
    ports
}
```
## Port public API

### Port({bus, logFactory, config})

Create and return a port with the API below.

- `bus` - bus to be used by the port, saved in `this.bus`
- `logFactory` - factory for creating loggers
- `config` - port configuration, to be merged with the port defaults in `this.config`
### Port.prototype.init()

Initializes the internal variables of the port.
### Port.prototype.start()

Starts the port, so that it can process messages. Usually this is where ports start to listen or initiate network / filesystem I/O.

- result - return a promise, so that the `start` method of the next port is called after the returned promise resolves
### Port.prototype.ready()

Called after all ports are started.

- result - return a promise, so that the `ready` method of the next port is called after the returned promise resolves. Usually, if a promise is returned, it resolves when the I/O operation initiated by the `start` method has finished.
### Port.prototype.stop()

Stops further processing of messages, closes connections / stops listening.
### Port.prototype.disconnect(reason)

Throws a disconnect error.

- `reason` - the reason for the disconnect error. It is logged in the error log and set as `cause` in the thrown error.
### Port.prototype.pull(what, context)

Creates pull streams for the port.

- `what` - can be one of the following:
  - falsy - the method will return an object with a `push` method, which can be used to add messages received from outside
  - net stream - use the stream to send and receive messages from outside
  - promise-returning method - the method will be executed; it can communicate with external systems or just execute locally
- `context` - context with which the pull streams are associated
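The lifecycle described in this section (init, then start, then ready, then stop) can be sketched with a simplified stand-in. `EchoPort` and `startAll` below are hypothetical illustrations, not the real ut-port base class; the point is that each lifecycle method may return a promise that gates the corresponding call on the next port:

```javascript
// Hypothetical sketch of the lifecycle contract; not the real ut-port base class.
class EchoPort {
    constructor({config} = {}) {
        this.config = {id: 'port', ...config};
        this.started = false;
    }
    init() {
        // initialize internal variables
        this.queue = [];
    }
    start() {
        // simulate asynchronous I/O setup (e.g. starting a listener);
        // the returned promise gates the start of the next port
        return new Promise(resolve => setImmediate(() => {
            this.started = true;
            resolve();
        }));
    }
    ready() {
        // called after all ports have started
        return Promise.resolve();
    }
    stop() {
        this.started = false;
    }
}

// start ports sequentially, then signal readiness
async function startAll(ports) {
    for (const port of ports) {
        port.init();
        await port.start();
    }
    for (const port of ports) await port.ready();
}

const a = new EchoPort({config: {id: 'a'}});
const b = new EchoPort({config: {id: 'b'}});
startAll([a, b]).then(() => console.log(a.started && b.started)); // prints true
```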
## Ports public API

### ports({bus, logFactory})

Create and return a ports object. The returned API below is accessible through the bus in the `ports` namespace.

- `bus` - bus to use when creating ports
- `logFactory` - logger factory to use when creating ports
### ports.get({port})

Get a port by id.

- `port` - id of the port
### ports.fetch()

Returns an array with all the created ports.
### ports.create(portsConfig, envConfig)

Create one or more ports.

- `portsConfig` - port configuration coming from the implementation. Can be an array or a single value of the following:
  - function - called with `envConfig` as parameter; it should return an object to be used as configuration
  - object - used directly as configuration

  When `portsConfig` is an array, multiple ports are created, with configurations taken from the array.
- `envConfig` - port configuration coming from the environment. If `envConfig` contains a property named `port`, the value of this property is merged into the port configuration. If `envConfig` contains a property matching the port id, the value of this property is also merged into the port configuration, with a higher priority than the `port` property.
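The merge priority can be sketched as follows. `mergePortConfig` is a hypothetical helper, not the actual ut-port implementation (which merges configurations deeply); this shallow version only illustrates the priority order: implementation config < `envConfig.port` < `envConfig[<port id>]`:

```javascript
// Hypothetical sketch of the configuration merge priority (shallow merge only)
function mergePortConfig(portConfig, envConfig = {}) {
    return {
        ...portConfig,               // configuration from the implementation
        ...envConfig.port,           // shared environment overrides
        ...envConfig[portConfig.id]  // per-port overrides win
    };
}

const merged = mergePortConfig(
    {id: 'db', connection: 'local', logLevel: 'info'},
    {port: {logLevel: 'warn'}, db: {connection: 'remote'}}
);
console.log(merged); // id: 'db', connection: 'remote', logLevel: 'warn'
```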
### ports.start()

Start all ports.

### ports.stop({port})

Stop a port.

- `port` - id of the port to stop

### ports.move({port, x, y})

Set UI coordinates and return the port.

- `port` - id of the port whose coordinates to set
- `x`, `y` - coordinates to set
## Port telemetry

Port telemetry includes two sets of data, which ports collect. These are generally split into non-numeric and numeric data, also known as logs and metrics. When reporting these, the port uses a standard list of tags / fields to annotate the data. Their names are:

- `impl` - name of the implementation or application
- `location` - name of the datacenter, cluster, etc. where the microservice runs
- `hostname` - name of the host where the microservice runs
- `pid` - process id of the microservice
- `env` - name of the environment, usually `dev`, `test`, `uat`, `prod`, etc.
- `service` - name of the microservice
- `context` - type of port, usually `sql port`, `http port`, `httpserver port`, etc.
- `name` - id of the port
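For illustration, a telemetry record annotated with these tags might have the following shape (all values below are made up):

```json
{
    "impl": "example-impl",
    "location": "dc1",
    "hostname": "srv-01",
    "pid": 12345,
    "env": "dev",
    "service": "customer",
    "context": "http port",
    "name": "customer-api"
}
```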
## Port logging

Each port initializes its own logger instance, so that the logging level can be set per port. By default, all ports use the `info` level for logging. The default can be changed using the configuration key `utPort.logLevel`:

```json
{
    "utPort": {
        "logLevel": "debug"
    }
}
```
In addition to the standard tags, logging defines the following additional ones:

- `msg` - information in text form, which can be used for indexing and searching
- `level` - the logging level, one of:
  - `60` - `fatal` - the application has reached an unrecoverable condition
  - `50` - `error` - expected error, the application will continue to process further requests
  - `40` - `warn` - the application is processing unexpected data and will process it in an inefficient way
  - `30` - `info` - standard level of logging, for the daily use of an application operator
  - `20` - `debug` - more verbose level, including additional details useful for application developers; may include data about API calls, parameters and results
  - `10` - `trace` - most detailed level, which may include low-level details like network traces
- `@meta` - API metadata, which includes the fields:
  - `@meta.method` - method name
  - `@meta.opcode` - optional operation code that defines the method variant to execute
  - `@meta.mtid` - the type of message, one of:
    - `event` - the application is processing an internal event
    - `request` - the application is processing an API request call
    - `error` - the application is processing an API response of type error
    - `notification` - the application is processing an API notification call
    - `convert` - the application is converting data from one format to another
  - `@meta.trace`
  - `@meta.conId`
- `error` - when logging errors, this contains additional information about the error, represented in the following fields:
  - `error.type` - string, representing the type of error
  - `error.method` - the name of the method that resulted in the error
  - `error.stack` - the call stack at the time of the error
  - `error.cause` - any previous errors that caused this error
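For illustration, an error log record using these fields might look like this (the method name, error type, and all values are made up):

```json
{
    "level": 50,
    "msg": "login failed",
    "@meta": {
        "method": "user.identity.login",
        "mtid": "error"
    },
    "error": {
        "type": "identity.invalidCredentials",
        "method": "user.identity.login",
        "stack": "Error: identity.invalidCredentials\n    at ...",
        "cause": null
    }
}
```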
## Port metrics

Each port configures a default set of metrics, named as follows:

- `time_#`, `time_#_min`, `time_#_max` - average, minimum and maximum stage execution time in milliseconds. Depending on the execution stage, `#` can be one of:
  - `q` - time spent in the incoming queue
  - `r` - time spent in the `receive` hook
  - `e` - time spent in the `encode` hook
  - `x` - time spent in the `execute` hook
  - `d` - time spent in the `decode` hook
  - `s` - time spent in the `send` hook
  - `w` - time spent in the dispatch stage
- `time_rate` - count of completed executions per second
- `time_count` - count of completed executions
- `count_a#`, `count_a#_min`, `count_a#_max` - current, minimum and maximum concurrent executions per stage. `#` can be one of:
  - `r` - count of concurrently executing `receive` hooks
  - `e` - count of concurrently executing `encode` hooks
  - `x` - count of concurrently executing `execute` hooks
  - `d` - count of concurrently executing `decode` hooks
  - `s` - count of concurrently executing `send` hooks
  - `w` - count of concurrently executing dispatches
- `ms` - count of sent messages per second
- `mr` - count of received messages per second
- `bs` - count of sent bytes per second
- `br` - count of received bytes per second

In addition to the standard tags, metrics define the following additional ones:

- `m` - method name. A special method name `*` is used to aggregate metrics for all methods.
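For example, applying this naming scheme to the `execute` stage (`#` = `x`) yields the following metric names:

```
time_x        - average execute time (ms)
time_x_min    - minimum execute time (ms)
time_x_max    - maximum execute time (ms)
count_ax      - current count of concurrently executing execute hooks
count_ax_min  - minimum count of concurrently executing execute hooks
count_ax_max  - maximum count of concurrently executing execute hooks
```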
## Telemetry tools

Various open source tools are available to store, index and visualize telemetry data.

### Elasticsearch

To index logs in an optimal way, an index can be created in Elasticsearch using the request in the file `elastic-index-ut-http`.

### Grafana

To visualize metrics, the file `grafana-ut-metrics.json` can be imported into Grafana as a dashboard. This requires that metrics are stored in Prometheus and that a datasource pointing to it exists in Grafana.
## Automatic method import bindings

When defining modules, one can list all bus methods that are going to be used within the code as part of the factory function's parameter definition.
Example:

```js
// module wrapper
module.exports = () => function utModule() {
    return {
        orchestrator: () => [
            require('./api/script')
        ]
    };
};

// script
module.exports = function script({
    import: {
        userIdentityLookup,
        db$userIdentityLookup
    }
}) {
    return {
        async test(params) {
            await userIdentityLookup(params);
            // is identical to this.bus.importMethod('user.identity.lookup')(params)
            await db$userIdentityLookup(params);
            // is identical to this.bus.importMethod('db/user.identity.lookup')(params)
        }
    };
};
```
Additionally, the import options can be defined per method in the js/json configuration as follows:

```js
{
    utModule: {
        orchestrator: true,
        script: {
            import: {
                userIdentityLookup: {
                    cache: {
                        ttl: 6 * 60 * 60 * 1000
                    }
                }
            }
        }
    }
}
```
Annotation-like syntax can also be used to define multiple configurations for one and the same method.

Example:

```js
// module wrapper
module.exports = () => function utModule() {
    return {
        orchestrator: () => [
            require('./api/script')
        ]
    };
};

// script
module.exports = function script({
    import: {
        '@shortCache namespace.entity.action': alias
    }
}) {
    return {
        test() {
            // alias(params) is identical
            // to this.bus.importMethod('namespace.entity.action')(params)
        }
    };
};
```
In the js/json configuration, multiple options sets can be defined for the method above. They will be merged recursively in the specified order, in this case: shortCache <- namespace.entity.action.

```js
{
    utModule: {
        orchestrator: true,
        script: {
            import: {
                shortCache: {
                    cache: {
                        ttl: 60 * 1000
                    }
                },
                'namespace.entity.action': {
                    cache: {
                        before: 'get',
                        after: 'set',
                        key: ({key}) => ({
                            id: key,
                            segment: 'namespace.entity.action'
                        })
                    }
                }
            }
        }
    }
}
```