log-shipper (v0.1.6) – A logging framework for AWS Lambda

AWS Log

The aws-log module is intended for AWS serverless functions to use as their exclusive way to write to STDOUT and STDERR. By using this library you get:

  • assurance that all data sent to your logging solution is a structured JSON string
  • contextual information sent along with the details of each log message
  • automatic creation of a correlation-id for cross-function tracing
  • the ability for your "shipper" function to filter log messages based on configured "severity"

Installing

In your project root, add the aws-log module:

# npm
npm install --save aws-log
# yarn
yarn add aws-log

Logging

Now that the dependency is installed you can import it in your code like so:

import logger from "aws-log";
const { log, debug, info, warn, error } = logger();

log("this is a log message", { foo: 1, bar: 2 });

In this simple example this will produce the following output to STDOUT:

{
  "@x-correlation-id": "1234-xyzd-abcd",
  "@severity": 1,
  "message": "this is a log message",
  "foo": 1,
  "bar": 2
}

Things to note about usage:

  • You must call the logger() function to get the primary logging functions which are: log, info, debug, warn and error.

    Note: log is just an alias for info; we find log a little easier to use, but traditional logging systems use info more consistently.

  • The output is ALWAYS a structured JSON object (good for logging frameworks)

  • The first calling parameter is mapped to the message parameter in the output

  • The second calling parameter is optional but allows you to add other structured attributes which help to define the log message

  • Every message will have a @severity attached to it. This is mapped one-to-one to the log function you choose (see the example after this list):

    {
      DEBUG: 0,
      INFO: 1,
      WARN: 2,
      ERROR: 3
    };
  • Every message will have a @x-correlation-id attached to it ... more on that later

    Note: no "timestamp" attribute is appended because AWS includes a timestamp by default on each log entry. Please do ensure your shipper function picks up that timestamp and adds it into the JSON payload.
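
For example (a minimal sketch; the correlation ID and the code attribute are made up for illustration), calling the error function:

error("something went wrong", { code: "E_TIMEOUT" });

would, per the severity mapping above, produce an entry along these lines:

{
  "@x-correlation-id": "1234-xyzd-abcd",
  "@severity": 3,
  "message": "something went wrong",
  "code": "E_TIMEOUT"
}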

Persistent Context

While each log message has unique data which you want to log, there is also "context" that, when placed next to the specific message, can make the data much more searchable and thereby more useful. There are a few ways to establish this context, but here's the most basic:

const { log, debug, info, warn, error } = logger().context({ foo: "bar" });
log("this is a log message", { foo: 1, bar: 2 });

In this example the output would be:

{
  "@x-correlation-id": "1234-xyzd-abcd",
  "@severity": 1,
  "@timestamp": 2234234,
  "message": "this is a log message",
  "foo": 1,
  "bar": 2,
  "context": {
    "foo": "bar"
  }
}

Every call to debug, info / log, warn and error will now always include the properties you have passed in as context.

Note: If your specific log content includes a property named context, the logger will rename it to _context_. This keeps the meaning of "context" consistent from function to function.
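
To illustrate the note above (a sketch only; the exact output shape may differ), logging a payload that itself contains a context property while the persistent context from the earlier example is active:

log("user updated", { context: "admin-panel" });

would surface that value under _context_ rather than colliding with the reserved context block:

{
  "@x-correlation-id": "1234-xyzd-abcd",
  "@severity": 1,
  "message": "user updated",
  "_context_": "admin-panel",
  "context": {
    "foo": "bar"
  }
}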

Logging Context in AWS Lambda

The signature of a Lambda function looks like this:

export function handler(event, context) { ... }

In order to provide consistent "context" in Lambda functions as described above we suggest you initialize your logging functions like so:

const { log, debug, info, warn, error } = logger().lambda(event, context);

This allows for "smart" extraction of context. By smart we mean that typically there are two distinct types of Lambda execution:

  1. Functions called from API Gateway (aka, an external API endpoint)
  2. Functions called from other functions

The main difference in these two situations is in the data passed in as the event. In the case of an API-Gateway call, the event has lots of meta-data travelling with it. For a complete list refer to Lambda Proxy Request. The quick summary is that it passes the client's "query parameters", "path parameters" and "body" of the message. This makes up the distinct "request" that will be considered in your functions but it also passes a bunch of variant data about the client such as "what browser?", "which geography?", etc. For a normal lambda-to-lambda function call the "event" is exactly what the calling function passed in.

The context object is largely the same between the two types of Lambdas mentioned above, but in both cases provides some useful meta-data for logging. For those interested, the full typing is here: IAWSLambaContext.

All this information, regardless of which type of function it is, becomes "background knowledge": aws-log takes care of all the contextual information for you if you use .lambda(event, context), providing you with the following attributes on your context property:

/** the REST command (e.g., GET, PUT, POST, DELETE) */
httpMethod: string;
/** the path to the endpoint called */
path: string;
/** query parameters; aka, the name-value pairs after the "?" in the URL */
queryStringParameters: string;
/** parameters passed in via the path itself */
pathParameters: string;
/** the caller's user agent string */
userAgent: string;
/** the country in which the request hit CloudFront */
country: string;
/** the function handler which led to this log message */
functionName: string;
functionVersion: string;
/** the CloudWatch log stream where the log was sent */
logStreamName: string;
/** the AWS requestId which is unique for this function call */
requestId: string;
/** the version from the package.json file (for the serverless function, not other libs) */
packageVersion: string;
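
For illustration only (every value below is invented), a log entry written after logger().lambda(event, context) from an API Gateway invocation might look something like this:

{
  "@x-correlation-id": "1234-xyzd-abcd",
  "@severity": 1,
  "message": "user record updated",
  "context": {
    "httpMethod": "PUT",
    "path": "/users/42",
    "userAgent": "Mozilla/5.0",
    "country": "US",
    "functionName": "myapp-prod-updateUser",
    "functionVersion": "$LATEST",
    "requestId": "b2f1c9e0-1111-2222-3333-444444444444",
    "packageVersion": "1.2.3"
  }
}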

Files External to Handler

We've already discussed the utility of passing the event and context attributes to the logger. In the handler function we have a simple way of achieving this, as these two objects are immediately available:

export function handler(event, context) {
  const { log, debug, info, warn, error } = logger().lambda(event, context);
  // ...
}

But unless we keep passing around the event and context, how would we maintain context in logging that happens in a utility function, etc.? The answer is that once the context has been set with logger().lambda(event, context), you can simply write:

const { log, debug, info, warn, error } = logger().reloadContext();
function doSomething() {
  log("something has happened");
}
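
Put together, one way to structure this across files might look like the sketch below (file names are hypothetical and this is only one possible arrangement):

// handler.ts
import logger from "aws-log";
import { doSomething } from "./utils";

export function handler(event, context) {
  // establish context once, up front, on the first line of the handler
  const { log } = logger().lambda(event, context);
  log("request received");
  doSomething();
}

// utils.ts
import logger from "aws-log";

export function doSomething() {
  // reloadContext() picks up the context the handler established above
  const { log } = logger().reloadContext();
  log("something has happened"); // carries the same context and correlation ID
}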

More Context

The original, and generic, logger().context(obj) method allowed us to add whatever name/value pairs we pleased, but with logger().lambda(event, context) we rely on aws-log to choose the context for us. This is probably good enough for most situations, but wherever you want to add more you can do so easily enough:

// in the handler function
const { log } = logger().lambda(event, context, moreContext);
// somewhere else
const { log } = logger().reloadContext(moreContext);

NOTE: while both signatures are valid, the first one is STRONGLY recommended because "context" is meant to be information which is valid for the full execution of the function. Typically we'd expect this to be established on the first line of the handler function, not later in the execution.

Correlation ID

The correlation ID -- which shows up as @x-correlation-id in the log entry -- is an ID whose scope is meant to stay consistent across a whole graph of function executions. This scoping is SUPER useful because within AWS most logging is isolated to a single function execution, but in a micro-services architecture this often represents too narrow a view.

The correlation ID is set when "context" is provided -- typically via the lambda(event, context) parameters -- at which point the logger looks for a property x-correlation-id in the "headers" property of the event. This means that if you are originating a request via API-Gateway, you can pass in this value as part of the request. In fact, the graph of function executions often does originate from API-Gateway, but even in this situation we typically suggest the client does not send in a correlation ID unless there is a chain of logging that preceded this call on the client side. In most cases, the absence of a correlation ID results in one being created automatically. Once it is created, though, it must be forwarded to all other executions downstream. This is achieved via a helper method provided by this library called invoke.
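
As a sketch of the originating case (the handler below is hypothetical, and a real Lambda Proxy Request event carries many more fields than shown in the comment):

import logger from "aws-log";

// Suppose the caller set an "x-correlation-id" header on the API Gateway
// request, e.g. event.headers["x-correlation-id"] === "trace-from-client".
export function handler(event, context) {
  const { log } = logger().lambda(event, context);
  // this entry -- and every downstream entry that receives the forwarded ID --
  // shares the caller's correlation ID as "@x-correlation-id"
  log("request received");
}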

Passing the Correlation ID with invoke

The standard way of calling a Lambda function from within a Lambda is through the invoke method of the AWS Lambda interface:

import { Lambda } from "aws-sdk";
const lambda = new Lambda({ region: "us-east-1" });
lambda.invoke(params, function(err, data) { ... });

As a convenience, this library provides invoke which provides the same functionality but with a simplified calling structure:

import { invoke } from "aws-log";
try {
  await invoke(fnArn, request, options);
} catch (e) {
  // your error handler here or if you like just ignore the try/catch
  // and let the error cascade down to a more general error handler
}

Note: the AWS API exposes both an invoke and an invokeAsync method, which is somewhat confusing because invoke can also be asynchronous! At this point no one should use the invokeAsync call as it is deprecated, and therefore we do not expose it in our API.

The Request API for invoke

The request API only requires two parameters:

  • the ARN representing the function you are calling
  • the parameters you want to send in (as an object)

See below for an example:

await invoke("arn:aws:lambda:us-east-1:837955399040:function:myapp-prod-myfunction", {
  foo: 1,
  bar: 2
});

Now that's sort of compact but you can make it much more compact if you follow a few conventions:

  • First, you don't actually need the arn:aws:lambda prefix at all; it will be assumed if your string doesn't start with "arn".
  • Second, if you set the AWS_REGION environment variable for your function then you can leave off that component.
  • Third, if you provide AWS_ACCOUNT as an environment variable then you no longer need to state that in the string.
  • Finally, if you provide AWS_STAGE then you can leave off the prod | dev | etc. portion.

That means if you do all of the above you only need the following:

await invoke("myfunction", { foo: 1, bar: 2 });

This also has the added benefit of dynamically adjusting to the stage you are in (which you'll almost always want).
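
As a sketch (the environment variable values are invented and mirror the full ARN used earlier), the conventions above amount to something like:

// Environment configured for the calling function, set however your
// deployment tooling does it:
//   AWS_REGION=us-east-1
//   AWS_ACCOUNT=837955399040
//   AWS_STAGE=prod
import { invoke } from "aws-log";

// aws-log fills in the missing region, account, and stage segments from the
// environment, per the conventions above
await invoke("myfunction", { foo: 1, bar: 2 });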

The last parameter in the signature is options (which is typed for those of you with intellisense) but basically this gives you an option to:

  • turn on the "dryrun" feature AWS exposes
  • specify a specific version of the function (rather than the default)

Shipping

In AWS you can designate a particular Lambda function to be executed alongside your serverless functions to accept their STDOUT and STDERR streams. If you have an external logging solution, you should attach a "shipping" function to ship these entries to that external solution. Here at Inocan Group we use Logzio, and if you do as well you should feel free to use ours: logzio-shipper.

License

Copyright (c) 2019 Inocan Group

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.