

simple-logstash-logger

A simple Node.js logger for synchronously writing JSON/YAML logs in logstash format to stdout.

Why would I use this instead of bunyan/winston/pino?

  • You are running a Node.js service on Kubernetes, and simply want to log to stdout and feed it to Logstash using Filebeat.
  • It is simpler. Not faster (pino). Not more flexible (bunyan/winston). Not asynchronous (winston).

Usage

const {createLogger} = require('simple-logstash-logger');

// Using __filename as the first argument is recommended.
// This will set the `file` field in the output to the relative path of the current file.
const logger = createLogger(__filename);

logger.info("Hello World");

Output

{"@timestamp":"2019-03-24T15:10:56.414Z","@version":1,"level":"INFO","file":"examples/simple.js","message":"Hello World"}

Setting the log level

const {createLogger, LoggerConfig, LogLevel} = require('simple-logstash-logger');

const logger = createLogger(__filename);

// LoggerConfig can be modified after logger instantiation, allowing for
// modification of log level during runtime.
LoggerConfig.level = LogLevel.DEBUG; // default: INFO

logger.debug("Hello World");

// Setting the log level to OFF effectively disables all logging
LoggerConfig.level = LogLevel.OFF;

logger.error("Even this will not be seen. Good while running tests, bad in production");

Setting the output format to YAML

While developing, the YAML output format is easier to read, especially for stack traces (see the section below).

const {createLogger, LoggerConfig, LogFormat} = require('simple-logstash-logger');

const logger = createLogger(__filename);

// Order does not matter, you can set the format after instantiating the logger
LoggerConfig.format = LogFormat.YAML;

logger.info("Hello World");

Output


---
'@timestamp': '2019-03-24T15:13:00.424Z'
'@version': 1
level: INFO
file: examples/yaml.js
message: Hello World

Adding contextual information

const {createLogger, LoggerConfig} = require('simple-logstash-logger');

// Contextual information can be added:

// 1) globally on the LoggerConfig
LoggerConfig.context = {application: "my-application"};

// 2) as part of the logger creation
const logger = createLogger(__filename, {loggerType: "request-logs"});

// 3) and in each event
logger.info("Received request", {
    request: {
        path: "/hello",
        headers: {
            "content-type": "application/json"
        }
    }
}); 

Output


---
'@timestamp': '2019-03-24T15:19:55.204Z'
'@version': 1
level: INFO
application: my-application
file: examples/contextual.js
loggerType: request-logs
message: Received request
request:
  path: /hello
  headers:
    content-type: application/json

Adding stack traces

If an Error object is passed as the last argument to a logger method, its stack trace is added to the output.

const {createLogger} = require('simple-logstash-logger');

const logger = createLogger(__filename);

try {
    throw new Error("Oops, my bad!");
} catch (err) {
    logger.error("Caught unexpected exception", err);
}

Output


---
'@timestamp': '2019-03-24T15:25:51.014Z'
'@version': 1
level: ERROR
file: examples/errors.js
message: Caught unexpected exception
stackTrace: |-
  Error: Oops, my bad!
      at Object.<anonymous> (/.../examples/errors.js:6:11)
      at Module._compile (internal/modules/cjs/loader.js:701:30)
      at Object.Module._extensions..js (internal/modules/cjs/loader.js:712:10)
      at Module.load (internal/modules/cjs/loader.js:600:32)
      at tryModuleLoad (internal/modules/cjs/loader.js:539:12)
      at Function.Module._load (internal/modules/cjs/loader.js:531:3)
      at Function.Module.runMain (internal/modules/cjs/loader.js:754:12)
      at startup (internal/bootstrap/node.js:283:19)
      at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)

FAQ

Why synchronous?

Being synchronous means that any slowdown in stdout blocks the event loop, forcing the application to slow down. The alternatives are either:

  • having log statements return promises, which would force any function that wants to log to become asynchronous too, or
  • buffering log events unbounded in memory, which can lead to the process crashing and losing all buffered logs.

However, being synchronous means that the application can stop responding if stdout becomes a bottleneck. This library was built with the philosophy that, in the event of an overloaded service, logging properly is more important than being able to serve incoming HTTP requests.

In practice, this should only be a problem if you are logging a very large number of events. A small throughput test, redirecting output to a file on a modern laptop, yielded ~50k log events per second.