Winston S3 Transport
Logs generated through Winston can be transferred to an S3 bucket using `winston-s3-transport`.
Installation
The easiest way to install `winston-s3-transport` is with npm.
npm install winston-s3-transport
Alternatively, download the source:
git clone https://github.com/stegano/winston-s3-transport.git
Example
[!] The bucket path is generated when the first log is written.
// Example - `src/utils/logger.ts`
import winston from "winston";
import S3Transport from "winston-s3-transport";
import { v4 as uuidv4 } from "uuid";
import { format } from "date-fns";

const s3Transport = new S3Transport({
  s3ClientConfig: {
    region: "ap-northeast-2",
  },
  s3TransportConfig: {
    bucket: "my-bucket",
    group: (logInfo: any) => {
      // Group logs by the `userId` value and buffer them in memory.
      // If the `userId` value does not exist, use the `anonymous` group.
      return logInfo?.message?.userId || "anonymous";
    },
    bucketPath: (group: string = "default") => {
      const date = new Date();
      // Use `HH` (24-hour clock) so timestamps are unambiguous.
      const timestamp = format(date, "yyyyMMddHHmmss");
      const uuid = uuidv4();
      // The bucket path to which the log is uploaded.
      // You can build a bucket path by combining the `group`, `timestamp`, and `uuid` values.
      return `/logs/${group}/${timestamp}/${uuid}.log`;
    },
  },
});

export const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  format: winston.format.combine(winston.format.json()),
  transports: [s3Transport],
});

export default logger;
Create a log using Winston in another module
// Example - another module
import logger from "src/utils/logger";
...
// Create a log entry containing the `userId` field
logger.info({ userId: "user001", ...logs });
Configuration
s3ClientConfig
This library internally uses `@aws-sdk/client-s3` to upload files to AWS S3.
- Please see AWSJavaScriptSDK/s3clientconfig
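Any `S3ClientConfig` option is passed through to the underlying client. Below is a minimal sketch assuming static credentials read from environment variables; the `credentials` option comes from `@aws-sdk/client-s3` itself, not from this library, and in practice the SDK's default credential provider chain is usually preferable.

// Example - passing S3 client options (sketch; credential setup is illustrative)
import S3Transport from "winston-s3-transport";

const s3Transport = new S3Transport({
  s3ClientConfig: {
    region: "ap-northeast-2",
    // Static keys shown only for illustration; omit `credentials`
    // to fall back to the SDK's default credential provider chain.
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? "",
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? "",
    },
  },
  s3TransportConfig: {
    bucket: "my-bucket",
  },
});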
s3TransportConfig
bucket: string
- AWS S3 bucket name
bucketPath: ((group: string) => string) | string
- AWS S3 bucket path to which log files are uploaded
group?: (<T = any>(logInfo: T) => string) | string (default: "default")
- Group used to classify logs
dataUploadInterval?: number (default: 1000 * 20)
- Data upload interval (milliseconds)
fileRotationInterval?: number (default: 1000 * 60)
- File rotation interval (milliseconds)
maxDataSize?: number (default: 1000 * 1000 * 2)
- Maximum data size (bytes)
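Putting these options together, here is a sketch of a fully configured transport. The values are illustrative, not recommendations, and the static `bucketPath`/`group` strings simply demonstrate the non-function form of those options.

// Example - all `s3TransportConfig` options (illustrative values)
import S3Transport from "winston-s3-transport";

const s3Transport = new S3Transport({
  s3ClientConfig: {
    region: "ap-northeast-2",
  },
  s3TransportConfig: {
    bucket: "my-bucket",
    // `bucketPath` may be a static string instead of a function.
    bucketPath: "logs/app.log",
    // `group` may be a static string instead of a function.
    group: "my-service",
    dataUploadInterval: 1000 * 30, // upload buffered logs every 30 seconds
    fileRotationInterval: 1000 * 120, // rotate to a new log file every 2 minutes
    maxDataSize: 1000 * 1000 * 5, // maximum data size of ~5 MB
  },
});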
Motivation
I built this transport so that log data stored in an S3 bucket can be efficiently partitioned. When you query large amounts of S3 data with Athena, partitioned data helps you control costs.
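For instance, a `bucketPath` function can emit Hive-style partition keys (`key=value` path segments) that Athena can register as partitions and use to prune scans. This is a sketch assuming a date-based partition scheme; the path layout is illustrative, not something prescribed by this library.

// Example - a `bucketPath` producing Athena-friendly, Hive-style partitions (illustrative)
import { v4 as uuidv4 } from "uuid";
import { format } from "date-fns";

const bucketPath = (group: string = "default") => {
  const date = new Date();
  // A `dt=yyyy-MM-dd` segment lets Athena scan only the partitions a query touches.
  const dt = format(date, "yyyy-MM-dd");
  return `logs/group=${group}/dt=${dt}/${uuidv4()}.log`;
};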
Contributors ✨
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!