Streams Logger
Streams is an intuitive and performant logger for Node.js and TypeScript applications.
Introduction
Streams is an intuitive logger built on native Node.js streams. You can use the built-in logging components (e.g., the Logger, Formatter, Filter, ConsoleHandler, RotatingFileHandler, and SocketHandler) for common logging tasks or implement your own logging Node to handle a wide range of logging scenarios. Streams offers a graph-like API pattern for building sophisticated logging pipelines.
Features
- A library of commonly used logging components: Logger, Formatter, Filter, ConsoleHandler, RotatingFileHandler, and SocketHandler.
- A rich selection of contextual data (e.g., module name, function name, line number, etc.) for augmenting log messages.
- A type-safe graph-like API pattern for constructing sophisticated logging graphs.
- Consume any native Node.js Readable, Writable, Duplex, or Transform stream and add it to your graph.
- Error handling and selective detachment of inoperable graph components.
- Log any type of message you choose - including objects serialized to JSON.
- Use Streams in your Node.js project, without type safety, or take advantage of the TypeScript type definitions.
Table of Contents
- Installation
- Concepts
- Usage
- Examples
- An Instance of Logging "Hello, World!" (TypeScript)
- Log to a File and the Console (TypeScript)
- A Network Connected Streams Logging Graph (TypeScript)
- Use Streams in a Node.js Project (without type safety) (Node.js)
- Formatting
- API
- Object (JSON) Logging
- Using a Socket Handler
- Hierarchical Logging
- How-Tos
- Tuning
- Backpressure
- Performance
- Test
Installation
npm install streams-logger
Concepts
Logging is essentially a data transformation task. When a string is logged to the console, for example, it typically undergoes a transformation step where relevant information (e.g., the timestamp, log level, process id, etc.) is added to the log message prior to it being printed. Likewise, when data is written to a file or the console, additional data transformations may take place (e.g., serialization and representational transformation). Streams accomplishes these data transformation tasks by means of a network of `Node` instances that is constructed using a graph-like API pattern.
Node
Each data transformation step in a Streams logging graph is realized through a `Node` implementation. A `Node` consumes an input, transforms or filters the data in some way, and optionally produces an output. Each component (e.g., Loggers, Formatters, Filters, Handlers, etc.) in a Streams logging graph is a `Node`, and each `Node` manages a native Node.js stream.
Graph API Pattern
Streams uses a graph-like API pattern for constructing a logging graph. Each graph consists of a network of `Node` instances that together comprise a logging pipeline. Please see the Usage or Examples sections for instructions on how to construct a Streams data transformation graph.
Usage
In this hypothetical example you will log "Hello, World!" to the console and to a file.
Log to a File and the Console
1. Import the Logger, Formatter, ConsoleHandler, RotatingFileHandler, and the SyslogLevel enum.
import {
Logger,
Formatter,
ConsoleHandler,
RotatingFileHandler,
SyslogLevel,
} from "streams-logger";
2. Create an instance of a Logger, Formatter, ConsoleHandler, and RotatingFileHandler.
- The `Logger` is set to log at level `SyslogLevel.DEBUG`.
- The `Formatter` constructor is passed a `format` function that will serialize data contained in the `LogContext` to a string containing the ISO time, the log level, the function name, the line number, the column number, and the log message.
- The `ConsoleHandler` will log the message to `process.stdout`.
- The `RotatingFileHandler` will log the message to the file `./message.log`.
const logger = new Logger({ level: SyslogLevel.DEBUG });
const formatter = new Formatter({
    format: ({ isotime, message, name, level, func, url, line, col }) => {
        return `${isotime}:${level}:${func}:${line}:${col}:${message}\n`;
    },
});
const consoleHandler = new ConsoleHandler({ level: SyslogLevel.DEBUG });
const rotatingFileHandler = new RotatingFileHandler({
    path: "./message.log",
    level: SyslogLevel.DEBUG,
});
3. Connect the Logger to the Formatter and connect the Formatter to the ConsoleHandler and RotatingFileHandler.
Streams uses a graph-like API pattern in order to construct a network of log Nodes. Each component in a network (in this case the `Logger`, the `Formatter`, the `ConsoleHandler`, and the `RotatingFileHandler`) is a `Node`.
const log = logger.connect(
    formatter.connect(
        consoleHandler,
        rotatingFileHandler
    )
);
4. Log "Hello, World!" to the console and to the file `./message.log`.
function sayHello() {
    log.info("Hello, World!");
}
sayHello();
Output
# ⮶date-time function name⮷ column⮷ ⮶message
2024-06-12T00:10:15.894Z:INFO:sayHello:7:9:Hello, World!
# ⮴level ⮴line number
Examples
An Instance of Logging "Hello, World!" (TypeScript)
Please see the Usage section above or the "Hello, World!" example for a working implementation.
Log to a File and the Console (TypeScript)
Please see the Log to a File and the Console example that demonstrates how to log to a file and the console using different `Formatter`s.
A Network Connected Streams Logging Graph (TypeScript)
Please see the Network Connected Streams Logging Graph example that demonstrates how to connect Streams logging graphs over the network.
Use Streams in a Node.js Project (without type safety) (Node.js)
Please see the Use Streams in a Node.js Project example that demonstrates how to use Streams in a Node.js project without type checks.
Formatting
You can format your log message using a `Formatter` Node. The `Logger` constructs a `LogContext` instance for each logged message. The properties of the `LogContext` contain information about the context of the logged message (e.g., module name, function name, line number, etc.). You can define a serialization function and pass it to the constructor of a `Formatter`; the serialization function can construct a log message from the `LogContext` properties. In the concise example below, this is accomplished using a template literal.
Log Context Properties
Streams provides a rich selection of contextual information with each logging call. This information is provided in a `LogContext` object that is passed as a single argument to the function assigned to the `format` property of the `FormatterOptions` object that is passed to the `Formatter` constructor. Please see the example for instructions on how to incorporate contextual information into your logged message.
|Property|Description|Config Prerequisite|
|---|---|---|
|`col`|The column number of the logging call.|`captureStackTrace=true`|
|`func`|The name of the function where the logging call took place.|`captureStackTrace=true`|
|`hostname`|The hostname.||
|`isotime`|The ISO 8601 representation of the time at which the logging call took place.|`captureISOTime=true`|
|`label`|Optional user specified label.||
|`level`|The `SyslogLevel` of the logging call.||
|`line`|The line number of the logging call.|`captureStackTrace=true`|
|`message`|The message of the logging call.||
|`metadata`|Optional user specified data.||
|`name`|The name of the logger.||
|`path`|The module path.|`captureStackTrace=true`|
|`pathbase`|The module filename.|`captureStackTrace=true`|
|`pathdir`|The directory part of the module path.|`captureStackTrace=true`|
|`pathext`|The extension of the module.|`captureStackTrace=true`|
|`pathname`|The name of the module.|`captureStackTrace=true`|
|`pathroot`|The root of the module.|`captureStackTrace=true`|
|`pid`|The process identifier.||
|`stack`|The complete stack trace.|`captureStackTrace=true`|
|`threadid`|The thread identifier.||
|`url`|The URL of the module.|`captureStackTrace=true`|
NB For high throughput logging applications, you can improve performance by preventing some contextual information from being generated; you can set `Config.captureStackTrace` and `Config.captureISOTime` to `false`. Please see Tuning for instructions on how to disable contextual information.
Example Formatter
In the following code excerpt, a formatter is implemented that serializes a `LogContext` to:
- The time of the logging call in ISO format
- The log level
- The name of the function where the log event originated
- The line number of the log event
- The column number of the log event
- The log message
- A newline
The `format` function is passed in a `FormatterOptions` object to the constructor of a `Formatter`. The `Logger` is connected to the `Formatter`. The `Formatter` is connected to the `ConsoleHandler`.
import { Logger, Formatter, ConsoleHandler, SyslogLevel } from "streams-logger";
const logger = new Logger({ name: "main", level: SyslogLevel.DEBUG });
const formatter = new Formatter({
    format: ({ isotime, message, name, level, func, url, line, col }) => {
        return `${isotime}:${level}:${func}:${line}:${col}:${message}\n`;
    },
});
const consoleHandler = new ConsoleHandler();
const log = logger.connect(
    formatter.connect(
        consoleHandler
    )
);
function sayHello() {
    log.info('Hello, World!');
}
sayHello();
This is an example of what a logged message will look like using the `Formatter` defined above.
# ⮶date-time function name⮷ column⮷ ⮶message
2024-06-12T00:10:15.894Z:INFO:sayHello:7:9:Hello, World!
# ⮴level ⮴line number
API
The Streams API provides commonly used logging facilities (i.e., the Logger, Formatter, Filter, ConsoleHandler, RotatingFileHandler, and SocketHandler). However, you can consume any Node.js stream and add it to your logging graph.
The Logger Class
new streams-logger.Logger<MessageT>(options, streamOptions)
- `<MessageT>` The type of the logged message. Default: `<string>`
- options `<LoggerOptions>`
  - level `<SyslogLevel>` The syslog logger level. Default: `SyslogLevel.WARN`
  - name `<string>` An optional name for the `Logger`.
  - parent `<Logger>` An optional parent `Logger`. Set this to `null` in order to disconnect from the root `Logger`. Default: `streams-logger.root`
  - queueSizeLimit `<number>` Optionally specify a limit on the number of log messages that may queue while waiting for a stream to drain. See Backpressure.
  - captureStackTrace `<boolean>` Optionally specify if stack trace capturing is enabled. This setting will override the default. Default: `Config.captureStackTrace`
  - captureISOTime `<boolean>` Optionally specify if capturing ISO time is enabled. This setting will override the default. Default: `Config.captureISOTime`
- streamOptions `<stream.TransformOptions>` Optional options to be passed to the stream. You can use `TransformOptions` to set a `highWaterMark` on the `Logger`.
Use an instance of a Logger to propagate messages at the specified syslog level.
public logger.level
`<SyslogLevel>` The configured log level (e.g., `SyslogLevel.DEBUG`).
public logger.connect(...nodes)
- nodes `<Array<Node<LogContext<MessageT, SyslogLevelT>, unknown>>` Connect to an Array of `Node`s.
Returns: <Logger<LogContext<MessageT, SyslogLevelT>, LogContext<MessageT, SyslogLevelT>>
public logger.disconnect(...nodes)
- nodes `<Array<Node<LogContext<MessageT, SyslogLevelT>, unknown>>` Disconnect from an Array of `Node`s.
Returns: <Logger<LogContext<MessageT, SyslogLevelT>, LogContext<MessageT, SyslogLevelT>>
public logger.debug(message, label)
- message `<MessageT>` Write a DEBUG message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.info(message, label)
- message `<MessageT>` Write an INFO message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.notice(message, label)
- message `<MessageT>` Write a NOTICE message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.warn(message, label)
- message `<MessageT>` Write a WARN message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.error(message, label)
- message `<MessageT>` Write an ERROR message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.crit(message, label)
- message `<MessageT>` Write a CRIT message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.alert(message, label)
- message `<MessageT>` Write an ALERT message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.emerg(message, label)
- message `<MessageT>` Write an EMERG message to the `Logger`.
- label `<string>` An optional label.
Returns: <void>
public logger.setLevel(level)
- level `<SyslogLevel>` A log level.
Returns <void>
Set the log level. Must be one of `SyslogLevel`.
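As a quick illustration of the logging methods, the optional `label` argument, and `setLevel`, the following is a hedged sketch that reuses the `Formatter`/`ConsoleHandler` graph pattern from the Usage section:
```ts
import { Logger, Formatter, ConsoleHandler, SyslogLevel } from "streams-logger";

const logger = new Logger({ name: "main", level: SyslogLevel.DEBUG });
const formatter = new Formatter({
    format: ({ isotime, level, label, message }) =>
        `${isotime}:${level}:${label ?? "-"}:${message}\n`,
});
const consoleHandler = new ConsoleHandler({ level: SyslogLevel.DEBUG });
const log = logger.connect(formatter.connect(consoleHandler));

// Log at various levels; the second argument is an optional label.
log.debug("Connecting to the database.", "startup");
log.info("Hello, World!");

// Raise the level so that only WARN and more severe messages propagate.
log.setLevel(SyslogLevel.WARN);
log.info("This INFO message is no longer propagated.");
log.error("This ERROR message is.");
```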
The Formatter Class
new streams-logger.Formatter<MessageInT, MessageOutT>(options, streamOptions)
- `<MessageInT>` The type of the logged message. This is the type of the `message` property of the `LogContext` that is passed to the `format` function. Default: `<string>`
- `<MessageOutT>` The type of the output message. This is the return type of the `format` function. Default: `<string>`
- options
  - format `(record: LogContext<MessageInT, SyslogLevelT>): Promise<MessageOutT> | MessageOutT` A function that will format and serialize the `LogContext<MessageInT, SyslogLevelT>`. Please see Formatting for how to implement a format function.
- streamOptions `<stream.TransformOptions>` Optional options to be passed to the stream. You can use `TransformOptions` to set a `highWaterMark` on the `Formatter`.
Use a `Formatter` in order to specify how your log message will be formatted prior to forwarding it to the Handler(s). An instance of `LogContext` is created that contains information about the environment at the time of the logging call. The `LogContext` is passed as the single argument to the `format` function.
public formatter.connect(...nodes)
- nodes `<Array<Node<LogContext<MessageOutT, SyslogLevelT>, unknown>>` Connect to an Array of `Node`s.
Returns: <Formatter<LogContext<MessageInT, SyslogLevelT>, LogContext<MessageOutT, SyslogLevelT>>
public formatter.disconnect(...nodes)
- nodes `<Array<Node<LogContext<MessageOutT, SyslogLevelT>, unknown>>` Disconnect from an Array of `Node`s.
Returns: <Formatter<LogContext<MessageInT, SyslogLevelT>, LogContext<MessageOutT, SyslogLevelT>>
The Filter Class
new streams-logger.Filter<MessageT>(options, streamOptions)
- `<MessageT>` The type of the logged message. Default: `<string>`
- options
  - filter `(record: LogContext<MessageT, SyslogLevelT>): Promise<boolean> | boolean` A function that will filter the `LogContext<MessageT, SyslogLevelT>`. Return `true` in order to permit the message through; otherwise, return `false`.
- streamOptions `<stream.TransformOptions>` Optional options to be passed to the stream. You can use `TransformOptions` to set a `highWaterMark` on the `Filter`.
public filter.connect(...nodes)
- nodes `<Array<Node<LogContext<MessageT, SyslogLevelT>, unknown>>` Connect to an Array of `Node`s.
Returns: <Filter<LogContext<MessageT, SyslogLevelT>, LogContext<MessageT, SyslogLevelT>>
public filter.disconnect(...nodes)
- nodes `<Array<Node<LogContext<MessageT, SyslogLevelT>, unknown>>` Disconnect from an Array of `Node`s.
Returns: <Filter<LogContext<MessageT, SyslogLevelT>, LogContext<MessageT, SyslogLevelT>>
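As an illustration, here is a hedged sketch of a `Filter` that only permits messages containing a given substring; it is placed between the `Logger` and the rest of the graph using the same pattern as the other examples (the substring and names are hypothetical):
```ts
import { Logger, Formatter, Filter, ConsoleHandler, SyslogLevel } from "streams-logger";

const logger = new Logger({ level: SyslogLevel.DEBUG });
// Permit only messages that mention "database"; all other messages are dropped.
const filter = new Filter({
    filter: (logContext) => logContext.message.includes("database"),
});
const formatter = new Formatter({
    format: ({ isotime, level, message }) => `${isotime}:${level}:${message}\n`,
});
const consoleHandler = new ConsoleHandler({ level: SyslogLevel.DEBUG });

const log = logger.connect(
    filter.connect(
        formatter.connect(consoleHandler)
    )
);

log.info("Connected to the database."); // Propagated to the console.
log.info("Hello, World!"); // Filtered out.
```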
The ConsoleHandler Class
new streams-logger.ConsoleHandler<MessageT>(options, streamOptions)
- `<MessageT>` The type of the logged message. Default: `<string>`
- options `<ConsoleHandlerOptions>`
  - level `<SyslogLevel>` An optional log level. Default: `SyslogLevel.WARN`
- streamOptions `<stream.TransformOptions>` Optional options to be passed to the stream. You can use `TransformOptions` to set a `highWaterMark` on the `ConsoleHandler`.
Use a `ConsoleHandler` in order to stream your messages to the console.
public consoleHandler.setLevel(level)
- level `<SyslogLevel>` A log level.
Returns <void>
Set the log level. Must be one of `SyslogLevel`.
The RotatingFileHandler Class
new streams-logger.RotatingFileHandler<MessageT>(options, streamOptions)
- `<MessageT>` The type of the logged message. Default: `<string>`
- options `<RotatingFileHandlerOptions>`
  - path `<string>` The path of the log file.
  - rotationLimit `<number>` An optional number of log rotations. Default: `0`
  - maxSize `<number>` The size of the log file in bytes that will initiate a rotation. Default: `1e6`
  - encoding `<BufferEncoding>` An optional encoding. Default: `utf-8`
  - mode `<number>` An optional mode. Default: `0o666`
  - level `<SyslogLevel>` An optional log level. Default: `SyslogLevel.WARN`
- streamOptions `<stream.WritableOptions>` Optional options to be passed to the stream. You can use `WritableOptions` to set a `highWaterMark` on the `RotatingFileHandler`.
Use a `RotatingFileHandler` in order to write your log messages to a file.
NB For improved performance, the `RotatingFileHandler` maintains its own accounting of the log file size for purposes of file rotation; hence, it's important that out-of-band writes are not permitted on the same log file while it is operating on it.
public rotatingFileHandler.setLevel(level)
- level `<SyslogLevel>` A log level.
Returns <void>
Set the log level. Must be one of `SyslogLevel`.
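For example, a hedged sketch of a `RotatingFileHandler` configured with the options listed above; the file path and limits are arbitrary, and `rotationLimit` is read here as the number of rotated files to keep:
```ts
import { Logger, Formatter, RotatingFileHandler, SyslogLevel } from "streams-logger";

const logger = new Logger({ level: SyslogLevel.DEBUG });
const formatter = new Formatter({
    format: ({ isotime, level, message }) => `${isotime}:${level}:${message}\n`,
});
// Rotate ./app.log once it grows beyond ~10 MB; keep at most 5 rotations.
const rotatingFileHandler = new RotatingFileHandler({
    path: "./app.log",
    maxSize: 1e7,
    rotationLimit: 5,
    level: SyslogLevel.INFO,
});

const log = logger.connect(formatter.connect(rotatingFileHandler));

log.info("Application started.");
```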
The SocketHandler Class
new streams-logger.SocketHandler<MessageT>(options, streamOptions)
- `<MessageT>` The type of the logged message. Default: `<string>`
- options `<SocketHandlerOptions>`
  - socket `<Socket>` A `net.Socket` that will serve as a communication channel between this `SocketHandler` and the remote `SocketHandler`.
  - reviver `<(this: unknown, key: string, value: unknown) => unknown>` An optional reviver for `JSON.parse`.
  - replacer `<(this: unknown, key: string, value: unknown) => unknown>` An optional replacer for `JSON.stringify`.
  - space `<string | number>` An optional space specification for `JSON.stringify`.
- streamOptions `<stream.DuplexOptions>` Optional options to be passed to the stream. You can use `DuplexOptions` to set a `highWaterMark` on the `SocketHandler`.
Use a `SocketHandler` in order to connect Streams graphs over the network. Please see the A Network Connected Streams Logging Graph example for instructions on how to use a `SocketHandler` to connect Streams logging graphs over the network.
public socketHandler.connect(...nodes)
- nodes `<Array<Node<LogContext<MessageT, SyslogLevelT>, unknown>>` Connect to an Array of `Node`s.
Returns: <SocketHandler<LogContext<MessageT, SyslogLevelT>, LogContext<MessageT, SyslogLevelT>>
public socketHandler.disconnect(...nodes)
- nodes `<Array<Node<LogContext<MessageT, SyslogLevelT>, unknown>>` Disconnect from an Array of `Node`s.
Returns: <SocketHandler<LogContext<MessageT, SyslogLevelT>, LogContext<MessageT, SyslogLevelT>>
public socketHandler.setLevel(level)
- level `<SyslogLevel>` A log level.
Returns <void>
Set the log level. Must be one of `SyslogLevel`.
The LogContext Class
new streams-logger.LogContext<MessageT, LevelT>(options)
- `<MessageT>` The type of the logged message. Default: `<string>`
- `<LevelT>` The type of the Level enum. Default: `<SyslogLevelT>`
- options `<LoggerOptions>`
  - message `<MessageT>` The logged message.
  - name `<string>` The name of the `Logger`.
  - level `<KeysUppercase<LevelT>>` An uppercase string representing the log level.
  - stack `<string>` An optional stack trace.
A `LogContext` is instantiated each time a message is logged at (or below) the level set on the `Logger`. It contains information about the process and environment at the time of the logging call. All Streams Nodes take a `LogContext` as an input and emit a `LogContext` as an output.
The `LogContext` is passed as the single argument to the format function of the `Formatter`; information about the environment can be extracted from the `LogContext` in order to format the logged message. The following properties will be available to the `format` function depending on the setting of `Config.captureStackTrace` and `Config.captureISOTime`. Please see the Log Context Properties table for details.
public logContext.col
`<string>` The column of the logging call. Available if `Config.captureStackTrace` is set to `true`.
public logContext.func
`<string>` The name of the function where the logging call took place. Available if `Config.captureStackTrace` is set to `true`.
public logContext.hostname
`<string>` The hostname.
public logContext.isotime
`<string>` The date and time in ISO format at the time of the logging call. Available if `Config.captureISOTime` is set to `true`.
public logContext.level
`<DEBUG | INFO | NOTICE | WARN | ERROR | CRIT | ALERT | EMERG>` An uppercase string representation of the level.
public logContext.line
`<string>` The line number of the logging call. Available if `Config.captureStackTrace` is set to `true`.
public logContext.message
`<string>` The logged message.
public logContext.metadata
`<unknown>` Optional user specified data.
public logContext.name
`<string>` The name of the `Logger`.
public logContext.path
`<string>` The complete path of the module. Available if `Config.captureStackTrace` is set to `true`.
public logContext.pathbase
`<string>` The module filename. Available if `Config.captureStackTrace` is set to `true`.
public logContext.pathext
`<string>` The extension of the module. Available if `Config.captureStackTrace` is set to `true`.
public logContext.pathdir
`<string>` The directory part of the module path. Available if `Config.captureStackTrace` is set to `true`.
public logContext.pathname
`<string>` The name of the module. Available if `Config.captureStackTrace` is set to `true`.
public logContext.pathroot
`<string>` The root of the path. Available if `Config.captureStackTrace` is set to `true`.
public logContext.pid
`<string>` The process identifier.
public logContext.threadid
`<string>` The thread identifier.
public logContext.parseStackTrace(depth)
- depth `<number>` An optional depth, i.e., the number of newlines to skip.
Returns <void>
If the `stack` property has been set, parse the stack trace.
The Streams Config Settings Object
The `Config` object is used to set default settings. It can be used for performance tuning.
Config.errorHandler <(err: Error, ...params: Array<unknown>) => void>
Set an error handler. Default: console.error
Config.captureISOTime <boolean>
Set this to `false` in order to disable capturing ISO time on each logging call. Default: `true`
Config.captureStackTrace <boolean>
Set this to `false` in order to disable stack trace capture on each logging call. Default: `true`
Config.highWaterMark <number>
Set the `highWaterMark` for streams in Buffer mode. Default: `node:stream.getDefaultHighWaterMark(false)`
Config.highWaterMarkObjectMode <number>
Set the `highWaterMark` for streams in objectMode. Default: `node:stream.getDefaultHighWaterMark(true)`
Config.getDuplexOptions(writableObjectMode, readableObjectMode)
- writableObjectMode `<boolean>` `true` for ObjectMode; `false` otherwise.
- readableObjectMode `<boolean>` `true` for ObjectMode; `false` otherwise.
Returns: <stream.DuplexOptions>
Use `Config.getDuplexOptions` when implementing a custom Streams data transformation Node.
Config.getReadableOptions(readableObjectMode)
- readableObjectMode `<boolean>` `true` for ObjectMode; `false` otherwise.
Returns: <stream.ReadableOptions>
Use `Config.getReadableOptions` when implementing a custom Streams data transformation Node.
Config.getWritableOptions(writableObjectMode)
- writableObjectMode `<boolean>` `true` for ObjectMode; `false` otherwise.
Returns: <stream.WritableOptions>
Use `Config.getWritableOptions` when implementing a custom Streams data transformation Node.
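For example, a hedged sketch of setting a few `Config` defaults at application startup (presumably before any Nodes are constructed):
```ts
import * as streams from "streams-logger";

// Route stream errors to a custom handler instead of the default console.error.
streams.Config.errorHandler = (err: Error) => {
    process.stderr.write(`streams-logger error: ${err.message}\n`);
};

// Disable ISO time and stack trace capture for high throughput logging.
streams.Config.captureISOTime = false;
streams.Config.captureStackTrace = false;

// Raise the default highWaterMark for Buffer mode and ObjectMode streams.
streams.Config.highWaterMark = 1e5;
streams.Config.highWaterMarkObjectMode = 1e5;
```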
The SyslogLevel Enum
streams-logger.SyslogLevel[Level]
- Level
- EMERG = 0
- ALERT = 1
- CRIT = 2
- ERROR = 3
- WARN = 4
- NOTICE = 5
- INFO = 6
- DEBUG = 7
Use `SyslogLevel` to set the level in the options passed to `Logger`, `Filter`, and Handler constructors.
Object (JSON) Logging
Streams logging facilities (e.g., Logger, Formatter, etc.) default to logging `string` messages; however, you can log any type of message you want by specifying your message type in the type parameter of the constructor. In the following example, a permissive interface is created named `Message`. The `Message` type is specified in the type parameter of the constructor of each `Node` (i.e., the Logger, Formatter, and ConsoleHandler). The `Formatter` is configured to input a `Message` and output a `string`; `Message` objects are serialized using `JSON.stringify`.
import { Logger, Formatter, ConsoleHandler, SyslogLevel } from "streams-logger";
interface Message {
    [key: string]: string | number;
}
const logger = new Logger<Message>({ level: SyslogLevel.DEBUG });
const formatter = new Formatter<Message, string>({
    format: ({ isotime, message, level, func, line, col }) => {
        return `${isotime}:${level}:${func}:${line}:${col}:${JSON.stringify(
            message
        )}\n`;
    },
});
const consoleHandler = new ConsoleHandler<string>({ level: SyslogLevel.DEBUG });
const log = logger.connect(
    formatter.connect(
        consoleHandler
    )
);
(function sayHello() {
    log.warn({ greeting: "Hello, World!", prime_number: 57 });
})();
Output
2024-07-06T03:19:28.767Z:WARN:sayHello:9:9:{"greeting":"Hello, World!","prime_number":57}
Using a Socket Handler
Streams uses Node.js streams for message propagation. Node.js represents sockets as streams; hence, sockets are a natural extension of a Streams logging graph. For example, you may choose to use a `ConsoleHandler` locally and log to a `RotatingFileHandler` on a remote server. Please see the A Network Connected Streams Logging Graph example for a working implementation.
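As a rough sketch (not the packaged example), the server below wraps each accepted socket in a `SocketHandler` and connects it to a `RotatingFileHandler`, while the client logs through a `SocketHandler` wrapped around a `net.Socket`; the port and file path are hypothetical:
```ts
import * as net from "node:net";
import { once } from "node:events";
import { Logger, Formatter, SocketHandler, RotatingFileHandler, SyslogLevel } from "streams-logger";

// Server: persist messages received over the network to ./remote.log.
const rotatingFileHandler = new RotatingFileHandler({ path: "./remote.log", level: SyslogLevel.DEBUG });
net.createServer((socket: net.Socket) => {
    const socketHandler = new SocketHandler({ socket });
    socketHandler.connect(rotatingFileHandler);
}).listen(3000);

// Client: log over the network through a SocketHandler.
const socket = net.createConnection({ port: 3000 });
await once(socket, "connect");
const logger = new Logger({ level: SyslogLevel.DEBUG });
const formatter = new Formatter({
    format: ({ isotime, level, message }) => `${isotime}:${level}:${message}\n`,
});
const log = logger.connect(formatter.connect(new SocketHandler({ socket })));

log.info("Hello, World!");
```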
Security
The `SocketHandler` options take a socket instance as an argument. The `net.Server` that produces this socket may be configured however you choose. You can encrypt the data sent over TCP connections and authenticate clients by configuring your `net.Server` accordingly.
Configure your server to use TLS encryption.
TLS Encryption may be implemented using native Node.js TLS Encryption.
Configure your client to use TLS client certificate authentication.
TLS Client Certificate Authentication may be implemented using native Node.js TLS Client Authentication.
Hierarchical Logging
Streams supports hierarchical logging. By default, every `Logger` instance is connected to the root `Logger` (`streams-logger.root`). However, you may optionally specify an antecedent other than `root` by assigning an instance of `Logger` to the `parent` property in the `LoggerOptions`. The antecedent of the root `Logger` is `null`.
You may capture logging events from other modules (and your own) by connecting a data handler `Node` (e.g., a `ConsoleHandler`) to the `streams-logger.root` `Logger`. E.g.,
import { Formatter, ConsoleHandler, SyslogLevel, root } from "streams-logger";
const formatter = new Formatter({
    format: ({ isotime, message, name, level, func, url, line, col }) => {
        return `${isotime}:${level}:${func}:${line}:${col}:${message}\n`;
    },
});
const consoleHandler = new ConsoleHandler({ level: SyslogLevel.DEBUG });
root.connect(
    formatter.connect(
        consoleHandler
    )
);
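A hedged sketch of specifying an antecedent other than `root` using the `parent` property described above (the logger names are hypothetical):
```ts
import { Logger, SyslogLevel } from "streams-logger";

// The antecedent Logger; by default it is itself connected to streams-logger.root.
const appLogger = new Logger({ name: "app", level: SyslogLevel.DEBUG });

// Messages logged to dbLogger propagate to appLogger rather than directly to root.
const dbLogger = new Logger({ name: "app.db", level: SyslogLevel.DEBUG, parent: appLogger });
```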
How-Tos
How to implement a custom Streams data transformation Node.
Streams is built on the type-safe Nodes graph API framework. This means that any Nodes `Node` may be incorporated into your logging graph provided that it meets the contextual type requirements. In order to implement a Streams data transformation `Node`, subclass the `Node` class, and provide the appropriate Streams defaults to the stream constructor.
For example, the somewhat contrived `LogContextToBuffer` implementation transforms the `message` contained in a `LogContext` to a `Buffer`; the graph pipeline streams the message to `process.stdout`.
NB In this example, `writableObjectMode` is set to `true` and `readableObjectMode` is set to `false`; hence, the Node.js stream implementation will handle the input as an object and the output as a `Buffer`. It's important that `writableObjectMode` and `readableObjectMode` accurately reflect the input and output types of your Node.
import * as stream from "node:stream";
import { Logger, Node, Config, LogContext, SyslogLevelT } from "streams-logger";
export class LogContextToBuffer extends Node<LogContext<string, SyslogLevelT>, Buffer> {
    public encoding: NodeJS.BufferEncoding = "utf-8";
    constructor(streamOptions?: stream.TransformOptions) {
        super(
            new stream.Transform({
                ...Config.getDuplexOptions(true, false),
                ...streamOptions,
                ...{
                    writableObjectMode: true,
                    readableObjectMode: false,
                    transform: (
                        chunk: LogContext<string, SyslogLevelT>,
                        encoding: BufferEncoding,
                        callback: stream.TransformCallback
                    ) => {
                        callback(null, Buffer.from(chunk.message, this.encoding));
                    },
                },
            })
        );
    }
}
const log = new Logger({ name: "main" });
const logContextToBuffer = new LogContextToBuffer();
const console = new Node<Buffer, never>(process.stdout);
log.connect(
    logContextToBuffer.connect(
        console
    )
);
log.warn("Hello, World!");
Output
Hello, World!
How to consume a Readable, Writable, Duplex, or Transform Node.js stream.
You can incorporate any Readable, Writable, Duplex, or Transform stream into your logging graph, provided that it meets the contextual type requirements, by passing the stream to the `Node` constructor. In this hypothetical example, a type-safe `Node` is constructed from a `net.Socket`. The type variables are specified as `<Buffer, Buffer>`; the writable side of the stream consumes a `Buffer` and the readable side of the stream produces a `Buffer`.
import * as net from "node:net";
import { once } from "node:events";
import { Node } from "streams-logger";
net.createServer((socket: net.Socket) => socket.pipe(socket)).listen(3000);
const socket = net.createConnection({ port: 3000 });
await once(socket, "connect");
const socketHandler = new Node<Buffer, Buffer>(socket);
Tuning
Depending on your requirements, the defaults may be fine. However, for high throughput logging applications you may choose to adjust the `highWaterMark`, disconnect your `Logger` from the root `Logger`, and/or disable stack trace capturing.
Tune the highWaterMark.
Streams `Node` implementations use the native Node.js stream API for message propagation. You have the option of tuning the Node.js stream `highWaterMark` to your specific needs, keeping in mind memory constraints. You can set a `highWaterMark` using `Config.highWaterMark` and `Config.highWaterMarkObjectMode` that will apply to Nodes in the Streams library. Alternatively, the `highWaterMark` can be set in the constructor of each `Node`; please see the API for instructions on how to do this.
In this example, the `highWaterMark` of ObjectMode streams and Buffer mode streams is artificially set to `1e6` objects and `1e6` bytes.
import * as streams from "streams-logger";
streams.Config.highWaterMark = 1e6;
streams.Config.highWaterMarkObjectMode = 1e6;
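Alternatively, a hedged sketch of setting a `highWaterMark` on an individual Node through the `streamOptions` constructor argument described in the API:
```ts
import { Logger, SyslogLevel } from "streams-logger";

// The second constructor argument accepts stream.TransformOptions; use it to
// set a highWaterMark on this Logger's underlying stream.
const logger = new Logger(
    { name: "main", level: SyslogLevel.DEBUG },
    { highWaterMark: 1e6 }
);
```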
Please see the API for more information on `Config` object settings.
Disable stack trace capture.
Another optional setting that you can take advantage of is to turn off stack trace capture. Stack trace capture can be disabled globally using the Streams configuration settings object, i.e., `Config.captureStackTrace`. Alternatively, you may disable stack trace capturing in a specific `Logger` by setting the `captureStackTrace` property of the `LoggerOptions` to `false`.
Turning off stack trace capture will disable some of the information (e.g., function name and line number) that is normally contained in the `LogContext` object that is passed to the `format` function of a `Formatter`.
You can turn off stack trace capturing for all `Logger` instances.
import * as streams from "streams-logger";
streams.Config.captureStackTrace = false;
Alternatively, you can instantiate a `Logger` with stack trace capturing disabled.
const logger = new Logger({ captureStackTrace: false });
Disconnect from root.
You can optionally disconnect your `Logger` from the root `Logger` or a specified antecedent. This will prevent message propagation to the root logger, which will provide cost savings and isolation. You can either set the `parent` parameter to `null` in the constructor of the `Logger` or explicitly disconnect from the root `Logger` using the `disconnect` method of the `Logger` instance. In this example, the `Logger` instance is disconnected from the Streams root logger after instantiation.
import * as streams from 'streams-logger';
...
const log = logger.connect(
    formatter.connect(
        consoleHandler
    )
);
log.disconnect(streams.root);
Putting it all together.
If you have a high throughput logging application, the following settings should get you to where you want to be while keeping Node.js stream buffers in check.
import * as streams from 'streams-logger';
streams.Config.highWaterMark = 1e5;
streams.Config.highWaterMarkObjectMode = 1e5;
const logger = new Logger({ parent: null, captureStackTrace: false });
... // Create an instance of a `Formatter` and `ConsoleHandler`.
const log = logger.connect(
    formatter.connect(
        consoleHandler
    )
);
However, for typical error logging applications or debugging scenarios the defaults should work fine.
Backpressure
Streams respects backpressure by queueing messages while the stream is draining. You can set a limit on how large the message queue may grow by specifying a `queueSizeLimit` in the Logger constructor options. If a `queueSizeLimit` is specified and it is exceeded, the `Logger` will throw a `QueueSizeLimitExceededError`.
For typical logging applications, setting a `queueSizeLimit` isn't necessary. However, if an uncooperative stream peer reads data at a rate that is slower than the rate that data is written to the stream, data may buffer until memory is exhausted. By setting a `queueSizeLimit` you can effectively respond to subversive stream peers and disconnect offending Nodes in your graph.
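A hedged sketch of constructing a `Logger` with a `queueSizeLimit`; the limit value is arbitrary:
```ts
import { Logger, SyslogLevel } from "streams-logger";

// If more than 1e6 messages queue while a downstream stream is draining,
// the Logger will throw a QueueSizeLimitExceededError.
const logger = new Logger({ level: SyslogLevel.DEBUG, queueSizeLimit: 1e6 });
```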
If you have a cooperating stream that is backpressuring, you can either set a default `highWaterMark` appropriate to your application or increase the `highWaterMark` on the specific stream in order to mitigate drain events.
Performance
Streams is a highly customizable logger that performs well on a wide range of logging tasks. It is a good choice for both error logging and high throughput logging. It strictly adheres to the Node.js public API contract and common conventions. This approach comes with trade-offs; however, it ensures stability and portability while still delivering a performant logging experience.
Please see Tuning for how to configure the logging graph for high throughput logging applications.
Test
Test variations on logger functionality.
Clone the repository and change directory into the root of the repository.
git clone https://github.com/faranalytics/streams-logger.git
cd streams-logger
Install dependencies.
npm install && npm update
Run the tests.
npm test verbose=false