
@alpha-lambda/cloudwatch-telemetry

v3.0.0


Store custom CloudWatch metrics in a cost-effective way


cloudwatch-telemetry


Serverless app to store custom CloudWatch metrics in a cost-effective way. The app works as follows:

CloudWatch logs with datapoints --> Kinesis Stream --> Lambda function --> CloudWatch metrics

Deployment

To deploy an instance of the app, run the following commands:

```shell
nvm use
npm ci
npm run deploy -- [--stage STAGE] [--region REGION] [--alarmAction ALARM_ACTION] \
  [--insufficientDataAction INSUFFICIENT_DATA_ACTION] [--okAction OK_ACTION] \
  [--logLevel LOG_LEVEL] [--batchSize BATCH_SIZE] [--retentionHours RETENTION_HOURS] \
  [--shardCount SHARD_COUNT]
```
  • STAGE: (Optional) Environment to deploy the app to. Defaults to dev.
  • REGION: (Optional) AWS region to deploy the app to. Defaults to us-east-1.
  • ALARM_ACTION: (Optional) ARN of an action to execute (e.g. SNS topic) when any alarm transitions into an ALARM state.
  • INSUFFICIENT_DATA_ACTION: (Optional) ARN of an action to execute (e.g. SNS topic) when any alarm transitions into an INSUFFICIENT_DATA state.
  • OK_ACTION: (Optional) ARN of an action to execute (e.g. SNS topic) when any alarm transitions into an OK state.
  • LOG_LEVEL: (Optional) Logger level (trace, debug, info, warn, error, fatal). Defaults to info.
  • BATCH_SIZE: (Optional) The largest number of records that the Lambda function retrieves from the Kinesis stream in a single batch. Defaults to 1000.
  • RETENTION_HOURS: (Optional) The number of hours for the data records that are stored in shards to remain accessible. Defaults to 24.
  • SHARD_COUNT: (Optional) The number of shards that the stream uses. Defaults to 1.
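For example, a hypothetical production deployment to eu-west-1 that wires alarms to an SNS topic (the ARN below is a placeholder) might look like:

```shell
nvm use
npm ci
npm run deploy -- --stage prod --region eu-west-1 \
  --alarmAction arn:aws:sns:eu-west-1:123456789012:ops-alerts \
  --logLevel warn
```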

Tearing Down

To remove an instance of the app, run the following commands:

```shell
nvm use
npm ci
npm run remove -- [--stage STAGE] [--region REGION]
```

Integration

CloudFormation stack

The app is deployed to the specified region as a CloudFormation stack called cw-telemetry-<STAGE>. Stack outputs:

Key | Name | Description | Value
--- | --- | --- | ---
IngestionStreamArn | cw-telemetry-<STAGE>-ingestion-stream-arn | The ARN for the Kinesis stream to forward logs to | arn:aws:kinesis:<REGION>:<ACCOUNT_ID>:stream/cw-telemetry-<STAGE>-ingestion-stream
ServiceRoleArn | cw-telemetry-<STAGE>-service-role-arn | The ARN for the IAM role to assume | arn:aws:iam::<ACCOUNT_ID>:role/cw-telemetry-<STAGE>-<REGION>-role

Log Format

Log records need to be in JSON format and contain a datapoints property. Each datapoint needs to contain the following:

  • name: (String [1..] / Required) The name of the metric
  • namespace: (String [1..] / Required) The namespace for the metric data
  • dimensions: (Object / Required) The dimensions associated with the metric (key-value pairs)
  • points: (Object[] / Required) One or more value/timestamp pairs, where:
    • value: (Float [0..] / Required) The value for the datapoint
    • timestamp: (Integer / Required) The time the datapoint data was received, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC
  • unit: (String [valid values] / Optional) The unit of the metric

Sample log record:

```json
{
  "awsRequestId": "a669b165-ea14-11e8-8246-4d697629d57f",
  "requestId": "a669b165-ea14-11e8-8246-4d697629d57f",
  "level": 30,
  "datapoints": [
    {
      "namespace": "big-service-test",
      "name": "invocationCount",
      "dimensions": {
        "functionName": "createEntity",
        "customerId": "00000000-0000-0000-0000-000000000000"
      },
      "points": [{
        "timestamp": 1542423506412,
        "value": 1
      }],
      "unit": "Count"
    },
    {
      "namespace": "big-service-test",
      "name": "capacityUsed",
      "dimensions": {
        "tableName": "mainTable"
      },
      "points": [
        {
          "timestamp": 1542423508235,
          "value": 23
        },
        {
          "timestamp": 1542423406280,
          "value": 18
        }
      ],
      "unit": "Count"
    }
  ],
  "time": "2018-11-17T02:58:27.596Z",
  "message": "datapoints for cw-telemetry"
}
```
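Before logging, a record's datapoints can be sanity-checked against the shape above. The following validator is an illustrative sketch only; the function `isValidDatapoint` is not part of this package:

```javascript
// Illustrative validator for the datapoint format described above.
// Not shipped with cloudwatch-telemetry -- `isValidDatapoint` is hypothetical.
function isValidDatapoint(dp) {
  if (typeof dp !== 'object' || dp === null) return false;
  if (typeof dp.name !== 'string' || dp.name.length < 1) return false;
  if (typeof dp.namespace !== 'string' || dp.namespace.length < 1) return false;
  if (typeof dp.dimensions !== 'object' || dp.dimensions === null) return false;
  if (!Array.isArray(dp.points) || dp.points.length < 1) return false;
  // Every point needs a non-negative numeric value and a millisecond timestamp
  return dp.points.every(p =>
    typeof p.value === 'number' && p.value >= 0 &&
    Number.isInteger(p.timestamp)
  );
}

console.log(isValidDatapoint({
  namespace: 'big-service-test',
  name: 'invocationCount',
  dimensions: { functionName: 'createEntity' },
  points: [{ timestamp: 1542423506412, value: 1 }],
  unit: 'Count'
})); // true
console.log(isValidDatapoint({ name: 'x' })); // false (missing namespace, dimensions, points)
```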

DatapointCollector

The DatapointCollector class makes it easier to aggregate datapoints.

new DatapointCollector([options])

Creates a new instance, where:

  • options - { Object } - a config object with the following keys:
    • log - { Function } - logger function to use for printing out datapoints
    • namespace - { String [1..] } - namespace for the metric data
    • [auto] - { Boolean } - enables automatic mode when aggregated datapoints are flushed at defined interval [defaults to false]
    • [flushFrequency] - { Number [1..] } - flush frequency for the automatic mode defined in milliseconds [defaults to 20000]
    • [maxDatapointsPerFlush] - { Number [1..] } - max number of datapoints to include in a single log record [defaults to 500]
Example

```javascript
const bunyan = require('bunyan');
const DatapointCollector = require('@alpha-lambda/cloudwatch-telemetry');

const log = bunyan.createLogger({ name: 'service' });
const datapointCollector = new DatapointCollector({
  log: log.info.bind(log),
  namespace: 'service-prod'
});
```

add(datapoints)

Stores datapoints, where:

  • datapoints - { Object | Object[]} - an object or collection that contains datapoints:
    • name - { String [1..] } - name of the metric
    • dimensions - { Object } - dimensions associated with the metric (key-value pairs)
    • value - { Number [0..] } - value for the datapoint
    • [unit] - { String } - unit of the metric (valid values)
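To make the add() contract concrete, here is one plausible way such datapoints could be grouped by metric name and dimensions before flushing. This is only a sketch of the idea, not the library's actual implementation:

```javascript
// Sketch of aggregating datapoints by name + dimensions, as a collector might
// do internally before flushing. NOT the library's actual code.
function aggregate(datapoints) {
  const groups = new Map();
  for (const dp of [].concat(datapoints)) {
    // Key on metric name plus sorted dimension key-value pairs
    const dims = Object.keys(dp.dimensions || {}).sort()
      .map(k => `${k}=${dp.dimensions[k]}`).join(',');
    const key = `${dp.name}|${dims}`;
    if (!groups.has(key)) {
      groups.set(key, { name: dp.name, dimensions: dp.dimensions || {}, unit: dp.unit, points: [] });
    }
    // Each added value becomes a point timestamped at insertion time
    groups.get(key).points.push({ value: dp.value, timestamp: Date.now() });
  }
  return [...groups.values()];
}

const out = aggregate([
  { name: 'invocationCount', dimensions: { fn: 'createEntity' }, value: 1, unit: 'Count' },
  { name: 'invocationCount', dimensions: { fn: 'createEntity' }, value: 1, unit: 'Count' },
  { name: 'capacityUsed', dimensions: { tableName: 'mainTable' }, value: 23, unit: 'Count' }
]);
console.log(out.length); // 2
console.log(out[0].points.length); // 2
```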

clear()

Deletes all the stored datapoints

flush()

Flushes all the stored datapoints

getAll()

Retrieves all the stored datapoints

stop()

Stops the datapoint collector in automatic mode and flushes all the stored datapoints

UNITS

List of all the units supported:

  • BITS
  • BITS_SECOND
  • BYTES
  • BYTES_SECOND
  • COUNT
  • COUNT_SECOND
  • GIGABITS
  • GIGABITS_SECOND
  • GIGABYTES
  • GIGABYTES_SECOND
  • KILOBITS
  • KILOBITS_SECOND
  • KILOBYTES
  • KILOBYTES_SECOND
  • MEGABITS
  • MEGABITS_SECOND
  • MEGABYTES
  • MEGABYTES_SECOND
  • MICROSECONDS
  • MILLISECONDS
  • NONE
  • PERCENT
  • SECONDS
  • TERABITS
  • TERABITS_SECOND
  • TERABYTES
  • TERABYTES_SECOND

Log Forwarding

Log records need to be forwarded to the Kinesis stream using a CloudWatch Logs subscription filter. If you are using the Serverless framework in your app, the easiest way is to use serverless-plugin-log-subscription.
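For instance, with the Serverless framework the subscription can be declared in serverless.yml. A minimal sketch, assuming the plugin's logSubscription config block and using placeholder ARNs (verify the exact keys against the plugin's documentation):

```yaml
# serverless.yml (sketch -- account ID and stage below are placeholders)
plugins:
  - serverless-plugin-log-subscription

custom:
  logSubscription:
    enabled: true
    # Use the IngestionStreamArn stack output as the destination
    destinationArn: arn:aws:kinesis:us-east-1:123456789012:stream/cw-telemetry-dev-ingestion-stream
    roleArn: arn:aws:iam::123456789012:role/cw-telemetry-dev-us-east-1-role
```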

License

The MIT License (MIT)

Copyright (c) 2019 Anton Bazhal

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.