@dotcom-reliability-kit/logger

A simple and fast logger based on Pino, with FT preferences baked in. This module is part of FT.com Reliability Kit.

Usage

Install @dotcom-reliability-kit/logger as a dependency:

npm install --save @dotcom-reliability-kit/logger

Include in your code:

import logger from '@dotcom-reliability-kit/logger';
// or
const logger = require('@dotcom-reliability-kit/logger');

Perform logging at different levels using methods with the same name:

// Info-level log with a message
logger.info('Saying hello');

// Warning-level log with a message
logger.warn('Everything’s mostly cool');

// Error-level log with a message and additional data
logger.error('Uh-oh', { field: 'some value' });

// Info-level log with no message but some additional data
logger.info({ event: 'UPDATE_NOTIFICATION', data: data });

// Error-level log with an error object and additional data
const error = new Error('Whoops!');
logger.error('Uh-oh', error, { extra_field: 'boo' });

Log levels

Understanding log levels is key to using the logger. Setting a log level explains to the person debugging your application what level of information (or error severity) they're working with.

The valid levels are:

  • debug: The lowest log level. This is for high-volume information which isn't critical to monitoring and running the app, but may be useful when debugging in local development. E.g. logging transformed article information before rendering.

  • info: The highest informational log level. This is for key information which is important when debugging an application in production. E.g. when the application is ready to receive HTTP traffic.

  • warn: The lowest error log level. This is for when something may cause an issue but it does not have a large impact on the end user. E.g. a deprecated method is being used which may be removed the next time a dependency is updated.

  • error: The general error log level. This is for when an error occurs which impacts the end user. E.g. an upstream service failed and so a page cannot be rendered.

  • fatal: The highest error log level. This is for when an error occurs which is severe enough that the application cannot recover. This should always be accompanied by exiting the application process (see the sketch after this list).
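
For example, a fatal-level log is typically followed by exiting the process. Here's a minimal sketch (the connectToDatabase function is hypothetical and used only for illustration):

const logger = require('@dotcom-reliability-kit/logger');

// Hypothetical startup step, used only for illustration
function connectToDatabase() {
    throw new Error('Database is unreachable');
}

try {
    connectToDatabase();
} catch (error) {
    // The application cannot recover, so log at fatal level and exit with a non-zero code
    logger.fatal('Application failed to start', error);
    process.exit(1);
}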

Logger

The default export of @dotcom-reliability-kit/logger is an instance of the Logger class with some sensible configurations. You can also create your own logger by importing this class and instantiating it yourself with some options.

import { Logger } from '@dotcom-reliability-kit/logger';
// or
const { Logger } = require('@dotcom-reliability-kit/logger');

const myLogger = new Logger({
    // options go here
});

Logger configuration options

Config options can be passed into the Logger constructor to change the behaviour of the logger.

options.baseLogData

Base log data which is added to every log output made by the logging methods. This allows you to add consistent data to a logger, e.g. information about the app. This must be an Object but it can have any properties. E.g.

const logger = new Logger({
    baseLogData: {
        app: {
            version: '1.2.3'
        }
    }
});

logger.info('This is a log');
// Outputs:
// {
//     "level": "info",
//     "message": "This is a log",
//     "app": {
//         "version": "1.2.3"
//     }
// }

options.logLevel

The minimum log level to output during logging. Logs at levels beneath this will be ignored. This option must be a String which is set to one of the supported log levels.

const logger = new Logger({
    logLevel: 'warn'
});

logger.info('This is some info'); // Outputs: nothing
logger.warn('This is a warning'); // Outputs: the message

It's also possible to set this option as an environment variable, which is how you configure the default logger. The following environment variables are read (see the sketch after this list):

  • LOG_LEVEL: The preferred way to set log level in an environment variable
  • SPLUNK_LOG_LEVEL: The legacy way to set log level, to maintain compatibility with n-logger
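
For example, assuming the process is started with LOG_LEVEL=debug set in its environment (e.g. LOG_LEVEL=debug node app.js), the default logger will also emit debug-level logs. A minimal sketch:

const logger = require('@dotcom-reliability-kit/logger');

// Only output because LOG_LEVEL allows debug-level logs
logger.debug('Debugging information');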

options.transforms

An array of functions which are called on log data before logs are output. This allows you to apply transformations to the final log object before it's sent.

Each log transform must be a function which accepts a single object argument and returns an object. Expressed as TypeScript types:

type LogData = {[x: string]: any};
type Transform = (logData: LogData) => LogData;

You can pass as many transforms as you need, though you must consider performance – each function will be called on every log that's sent.

function uppercaseProperties(logData) {
    const entries = Object.entries(logData).map(([property, value]) => {
        return [property.toUpperCase(), value];
    });
    return Object.fromEntries(entries);
}

const logger = new Logger({
    transforms: [
        uppercaseProperties
    ]
});

logger.info({
    time: 1234567890,
    message: 'Hello World'
});
// Outputs:
// {
//     "LEVEL": "info",
//     "MESSAGE": "Hello World",
//     "TIME": 1234567890
// }

You can also use built-in transforms to do things like mask sensitive data.

options.withTimestamps

Whether to send the timestamp at which each log method was called. Because this module logs asynchronously, the timestamp at which the log reaches Splunk is not necessarily accurate. Sending the timestamp as a time property on each log gives you accurate timings.

Must be a Boolean and defaults to true for backwards-compatibility with n-serverless-logger. This is not technically compatible with n-logger, which is why the option to turn it off exists.

const logger = new Logger({
    withTimestamps: true
});

logger.info('This is a log');
// Outputs:
// {
//     "level": "info",
//     "message": "This is a log",
//     "time": 1234567890
// }

logger.log() and shortcut methods

The log method of a Logger can be used to send a JSON-formatted log to stdout with a message and additional information. This method requires a level parameter set to a valid log level. It can accept any number of other arguments, each of which must be a String, an Object, or an instance of Error.

logger.log('debug', 'This is a message', { extraData: 123 }, new Error('Oops'));

It's generally easier and less error-prone to use one of the shortcut methods rather than log(level). There are shortcut methods for each of the log levels:

  • logger.debug(...logData): Log with a level of "debug"
  • logger.info(...logData): Log with a level of "info"
  • logger.error(...logData): Log with a level of "error"
  • logger.warn(...logData): Log with a level of "warn"
  • logger.fatal(...logData): Log with a level of "fatal"

As well as the valid log levels, there are a few deprecated legacy levels. These are only present to maintain backwards-compatibility with n-logger; a short sketch of their behaviour follows the list.

Warning These methods are deprecated and will log a warning message the first time they're used.

  • logger.data(...logData): Aliases logger.debug()
  • logger.silly(...logData): Aliases logger.debug()
  • logger.verbose(...logData): Aliases logger.debug()
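
A minimal sketch of what using one of these aliases looks like (behaviour as described above; the exact wording of the deprecation warning isn't shown here):

const logger = require('@dotcom-reliability-kit/logger');

// Behaves like logger.debug(), and the first call also logs a deprecation warning
logger.silly('Legacy n-logger style call');

// Equivalent non-deprecated call
logger.debug('Legacy n-logger style call');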

logger.flush()

Most logs are queued up and sent asynchronously, as this keeps your application performant and minimises the impact of logging. Sometimes (very rarely) you may need to manually flush the queue of logs. You can do so by using the flush method:

logger.flush();

Logs are automatically flushed if the application stops running, to ensure log information is not lost in the event of an application crash.
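
As an artificial illustration (not taken from the docs), manually flushing before forcing the process to stop might look like this:

const logger = require('@dotcom-reliability-kit/logger');

logger.info('Last log before shutdown');

// Make sure any queued logs are written out before we force the process to stop
logger.flush();
process.exit(0);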

logger.createChildLogger()

Create a new logger with the same config options as the current logger, but with additional base log data merged in. This allows you to create loggers which add extra information depending on where in the codebase you are. You must pass an Object to set the base log data of the child logger.

Express example where you have an application logger and a child logger for use within Express routes:

const express = require('express');
const { Logger } = require('@dotcom-reliability-kit/logger');

const app = express();
const appLogger = new Logger({
    baseLogData: {
        appName: 'my-app'
    }
});

app.use((request, response, next) => {
    request.log = appLogger.createChildLogger({
        requestUrl: request.url
    });
    next();
});

app.get('/mock-url', (request, response) => {
    request.log.info('We got a request');
    // Outputs:
    // {
    //     "level": "info",
    //     "message": "We got a request",
    //     "appName": "my-app",
    //     "requestUrl": "/mock-url"
    // }
});

Deprecated methods

logger.addContext()

Add additional base log data to the logger by merging objects together. You must pass an Object, which will be merged with the existing base log data.

Warning This method is deprecated and will log a warning message the first time it's used. This is just for compatibility with n-logger and you should use either baseLogData or createChildLogger instead.

const logger = new Logger({
    baseLogData: {
        appName: 'my-app'
    }
});

logger.addContext({
    myExtraData: 'extra data'
});

logger.info('Example');
// Outputs:
// {
//     "level": "info",
//     "message": "Example",
//     "appName": "my-app",
//     "myExtraData": "extra data"
// }

logger.setContext()

Add a context property to the logger's base log data. You must pass an Object which will be added as a context property to all logs.

Warning This method is deprecated and will log a warning message the first time it's used. This is just for compatibility with n-serverless-logger and you should use either baseLogData or createChildLogger instead.

const logger = new Logger();

logger.setContext({
    myExtraData: 'extra data'
});

logger.info('Example');
// Outputs:
// {
//     "level": "info",
//     "message": "Example",
//     "context": {
//         "myExtraData": "extra data"
//     }
// }

logger.clearContext()

Remove the context property from the logger's base log data.

Warning This method is deprecated and will log a warning message the first time it's used. This is just for compatibility with n-serverless-logger and you should use either baseLogData or createChildLogger instead.

const logger = new Logger({
    baseLogData: {
        context: {
            appName: 'my-app'
        }
    }
});

logger.clearContext();

logger.info('Example');
// Outputs:
// {
//     "level": "info",
//     "message": "Example"
// }

Built-in transforms

As well as writing your own log transforms, you can use the ones provided as part of this package. All built-in transforms are provided as properties of the transforms object:

const { transforms } = require('@dotcom-reliability-kit/logger');

legacyMask transform

The legacy mask transform replicates the masking behaviour of n-mask-logger exactly. It masks a list of fields you specify so that we don't accidentally log sensitive information in our apps. You can use it like this:

const { Logger, transforms } = require('@dotcom-reliability-kit/logger');

const logger = new Logger({
    transforms: [
        transforms.legacyMask()
    ]
});

logger.info({
    email: 'someone@example.com'
});
// Outputs:
// {
//     "level": "info",
//     "email": "*****"
// }

You can configure the legacy mask transform by passing in an options object:

const logger = new Logger({
    transforms: [
        transforms.legacyMask({
            // options go here
        })
    ]
});

legacyMaskOptions.denyList

An array of strings indicating the property names in the logs which should have their values masked. Items you add are masked in addition to the default set of property names: firstName, ft-backend-key, ft-session-id, FTSession_s, FTSession, lastName, password, phone, postcode, primaryTelephone, session, and sessionId:

transforms.legacyMask({
    denyList: [ 'myEmailProperty', 'myPasswordProperty' ]
})

legacyMaskOptions.allowList

An array of strings which indicate the property names in the logs which should not have their values masked. This is used to override the default fields in cases where you want to log potentially sensitive data. E.g. if your email property actually contains a boolean indicating whether a user is opted into email, then you might want to do this:

transforms.legacyMask({
    allowList: [ 'email' ]
})

legacyMaskOptions.maskString

A string which is used as the replacement when a property is masked. This defaults to *****:

transforms.legacyMask({
    maskString: '🙈🙉🙊'
})

Log data serialization

The logger can accept data in a variety of formats, and it combines all the different data you give it into a single object which is then stringified as JSON. Each logging method can accept any number of objects, strings, and errors as log data:

logger.info('This is a string', { thisIsData: true }, { thisIsMoreData: true });
// Outputs:
// {
//     "level": "info",
//     "message": "This is a string",
//     "thisIsData": true,
//     "thisIsMoreData": true
// }

How different data is serialized

Different types of data are serialized differently before being output as JSON:

  • Objects are left as they are: all of their properties are extracted and logged as regular JSON. E.g.

    logger.info({ hello: 'world' });
    // Outputs:
    // {
    //     "level": "info",
    //     "hello": "world"
    // }

    Warning: It's important to note that only properties that are serializable as JSON can be logged. Any non-serializable properties (e.g. functions) will not be output.

  • Strings are moved into the message property of the output log. E.g.

    logger.info('Hello world');
    // Outputs:
    // {
    //     "level": "info",
    //     "message": "Hello world"
    // }
  • Errors are moved into the error property of the output log and serialized using the Reliability Kit serializeError method. E.g.

    logger.info(new Error('Oops'));
    // Outputs:
    // {
    //     "level": "info",
    //     "error": {
    //         "name": "Error",
    //         "message": "Oops",
    //         ...etc
    //     }
    // }

    Errors found in sub-properties of the log data are not serialized like this. This is for performance reasons: looping over every nested property to check if it's an error is expensive. Do not do this:

    logger.info({ err: new Error('Oops') });
    // Outputs:
    // {
    //     "level": "info",
    //     "err": {}
    // }

    If you need to pass an error object as a property, you must serialize it yourself (serializeError here comes from the Reliability Kit serialize-error package):

    const { serializeError } = require('@dotcom-reliability-kit/serialize-error');
    logger.info({ err: serializeError(new Error('Oops')) });

Order of precedence

The order of the log data items is meaningful: when the same property appears in more than one argument, the first instance of it will be used. This is the opposite behaviour to JavaScript's Object.assign:

logger.info({ example: 1 }, { example: 2 }, { example: 3 });
// Outputs:
// {
//     "level": "info",
//     "example": 1
// }

Local development usage

In local development @dotcom-reliability-kit/logger does not format or colourise logs by default. You may want to enable this to make reading logs in local development easier.

To get formatted and colourised logs locally, you need to meet two conditions:

  1. Have the NODE_ENV environment variable either set to development or not set at all.

  2. Install pino-pretty as a development dependency in your project. It's very important that this is a development dependency rather than a production one; otherwise you risk prettifying logs in production, which makes them appear incorrectly in Splunk:

    npm install -D pino-pretty

Production usage

Using @dotcom-reliability-kit/logger in production requires that your application logs to stdout and that your platform sends these logs somewhere more permanent. On Heroku this means you're required to have migrated to Log Drains. On AWS Lambda it means you must be sending logs to CloudWatch.

Compatibility

@dotcom-reliability-kit/logger is compatible with most use cases of n-logger, n-mask-logger, and n-serverless-logger. We tried hard to make migration as easy as possible from these libraries. The full list of differences is available in the Migration Guide, along with tips on migrating.

Contributing

See the central contributing guide for Reliability Kit.

License

Licensed under the MIT license. Copyright © 2022, The Financial Times Ltd.