
@madgex/datadog-monitoring

v5.0.2


Madgex Datadog Logging and Monitoring

All your Hapi + Datadog needs, in one handy package.

Usage

As a library

When used as a module, this library exports:

  • pino, a re-export of the hapi-pino logging plugin
  • autoLogErrors, a Hapi plugin to automatically log responses over a certain status-code threshold, and all responses if passed a debug log level
  • trace, a function to set up dd-trace
  • dataDogStats, a Hapi plugin that decorates the server with a dogstatsd client

The exports have no interdependencies, so each can be used independently of the others. You can use them in the setup file for your server like so:

const { pino, autoLogErrors, trace, dataDogStats } = require('@madgex/datadog-monitoring');

// DD_AGENT_HOSTNAME, DD_AGENT_DSTATS_PORT, LOG_LEVEL and IS_DEV are assumed
// to come from your service's environment/configuration.
async function createServer() {
  await trace({
    hostname: DD_AGENT_HOSTNAME || '',
    service: 'my-service-name',
    hapiOptions: {
      blacklist: ['/healthcheck']
    },
  });

  // Hapi must be required after the tracer is initialised (see note below).
  const Hapi = require('hapi');

  const server = new Hapi.Server({
    // etc
  });

  await server.register([
    {
      plugin: autoLogErrors,
      options: {
        level: LOG_LEVEL
      }
    },
    {
      plugin: pino,
      options: {
        // prettyPrint has been deprecated in pino.
        // If you want pretty output, add 'pino-pretty' as a dev dependency to
        // your service and configure it as below (options are documented at
        // https://github.com/pinojs/pino-pretty).
        // Note: pino-pretty should only be used in development.
        ...(IS_DEV && {
          transport: {
            target: 'pino-pretty',
          },
        }),
        level: LOG_LEVEL,
        redact: ['req.headers.authorization'],
        ignorePaths: ['/healthcheck'],
      },
    },
    {
      plugin: dataDogStats,
      options: {
        DD_AGENT_HOSTNAME,
        DD_AGENT_DSTATS_PORT,
      },
    },
  ]);

  return server;
}

Note that the tracer must be initialised before requiring Hapi, in order to correctly initialise the APM.

All available options for the dd-trace Hapi plugin can be passed as hapiOptions. If hostname is not set, it defaults to the discoverable Datadog agent host on AWS. The trace function returns the tracer instance, so further plugin configuration can be added if you wish, e.g.:

async function createServer() {
  const tracer = await trace({
    hostname: DD_AGENT_HOSTNAME || '',
    service: 'my-service-name',
    debug: true, // enables debugging the tracer; do not enable in production
    version: pkg.version,
    profiling: true,
    analytics: true,
    hapiOptions: {
      blacklist: ['/healthcheck']
    },
  });

  tracer.use('redis', { analytics: true });

  // etc
}

The hapi-pino plugin should be set up as described in its documentation.

The autoLogErrors plugin accepts two config options:

  • level: the application's log level, to determine whether to log all requests. Defaults to 'info'.
  • threshold: the status code above which responses should be logged as 'warn'. Defaults to 399.
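The decision rule those two options describe can be sketched as follows. This is a hypothetical illustration of the documented behaviour, not the plugin's actual source; the request.logger call assumes hapi-pino is also registered.

```javascript
// Hypothetical sketch of the behaviour described above -- not the actual
// plugin source. Pick a log level for a response, or null to stay quiet.
function chooseLogLevel(statusCode, { level = 'info', threshold = 399 } = {}) {
  if (statusCode > threshold) return 'warn'; // error-ish responses always log
  if (level === 'debug') return 'debug';     // at debug level, log everything
  return null;
}

// Applied to every response via a Hapi extension point:
const autoLogErrorsSketch = {
  name: 'auto-log-errors-sketch',
  register(server, options = {}) {
    server.ext('onPreResponse', (request, h) => {
      const { response } = request;
      const statusCode = response.isBoom
        ? response.output.statusCode
        : response.statusCode;
      const logLevel = chooseLogLevel(statusCode, options);
      if (logLevel) {
        // request.logger is the per-request child logger hapi-pino provides
        request.logger[logLevel]({ statusCode }, 'response');
      }
      return h.continue;
    });
  },
};
```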

The dataDogStats plugin decorates your Hapi server object with a dStats client. The client exposes dstats functions such as increment(), called with the following parameters:

  server.dataDogStats.increment(name, value, tags)
  • name: required; a string naming the metric, e.g. "jobseeker_frontend_jobdetails"
  • value: required; the number of increments, e.g. 1
  • tags: required; an array of strings containing your tag values, e.g. ['similarJobsCallSuccess:success']

This plugin wraps the hot-shots module. The full usage docs are here: https://www.npmjs.com/package/hot-shots#check
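Under the hood, hot-shots serialises each increment() call into a DogStatsD counter datagram sent over UDP. The formatter below is an illustrative sketch of that wire format, not hot-shots internals:

```javascript
// Illustrative sketch of the DogStatsD counter wire format that a call like
// server.dataDogStats.increment(name, value, tags) produces -- not the
// actual hot-shots implementation.
function formatCounter(name, value, tags) {
  const tagPart = tags && tags.length ? `|#${tags.join(',')}` : '';
  return `${name}:${value}|c${tagPart}`;
}

// formatCounter('jobseeker_frontend_jobdetails', 1, ['similarJobsCallSuccess:success'])
// produces 'jobseeker_frontend_jobdetails:1|c|#similarJobsCallSuccess:success'
```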

From the command line

This library also includes a custom transport to pipe Pino logs from a server's stdout to a Datadog agent over UDP, transforming the JSON format for processing and display. It works by running the server in a separate, child process and piping that process's stdout stream through a transform and write stream to send to the Datadog agent. It's intended to be used as a replacement for the usual node /entry/point.js in the npm start script, like so:

"start": "dd-monitor /path/to/server.js --hostname [hostname] --port [port] --echo --debug"

It accepts the following flags:

  • hostname/h: the hostname of the Datadog agent. Defaults first to a DD_AGENT_HOSTNAME environment variable, and then to looking up the discoverable host on AWS
  • port/p: the port to log to. Will default to a Logging__DataDogLoggingPort environment variable, but is otherwise required
  • echo/e: echoes all logs to stdout, including a warning when a log was not parseable. Do not enable in production
  • debug/d: log to stdout when a packet is transmitted to the UDP socket. Do not enable in production

Development

The log transport is fundamentally very simple, but requires a basic understanding of Node streams; I'd recommend reading the NodeSource article Understanding Streams in Node.js. All the transport does is spawn a child process to run the passed server, hook that child process's standard output into a transform stream that formats the logs for Datadog, and then pipe the transform stream into a write stream that sends whatever it receives over a UDP connection to the Datadog Agent.

Running the tests

The unit tests can be run with Jest by running npm run test.

The integration tests for the transport check that it is successfully transmitting messages from an emitter process (which just logs a sequence of messages into stdout, standing in for the server) to a listener process (which is a simple UDP socket, standing in for the Datadog Agent). They can be run with npm run test:integration. The test can be controlled with the constants in test/integration/constants.js.