winston-elasticsearch
An Elasticsearch transport for the winston logging toolkit.
Features
- Logstash-compatible message structure.
- Thus consumable with Kibana.
- Date pattern based index names.
- Custom transformer function to transform logged data into a different message structure.
Compatibility
For Winston 3.0 and Elasticsearch 6.0 and later, use the 0.7.0 release.
For Elasticsearch 6.0 and later, use the 0.6.0 release.
For Elasticsearch 5.0 and later, use the 0.5.9 release.
For earlier versions, use the 0.4.x series.
Unsupported / Todo
- Querying.
- Real buffering of messages in case of unavailable ES.
Installation
npm install --save winston winston-elasticsearch
Usage
var winston = require('winston');
var Elasticsearch = require('winston-elasticsearch');

var esTransportOpts = {
  level: 'info'
};
var logger = winston.createLogger({
  transports: [
    new Elasticsearch(esTransportOpts)
  ]
});
The winston API for logging can be used with one restriction: only one JS object can be logged and indexed as such. If multiple objects are provided as arguments, their contents are stringified.
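For illustration, a minimal sketch of this restriction (the field names are arbitrary):

// One meta object: its fields end up as structured data in the indexed document.
logger.info('GET /sitemap.xml', { method: 'GET', url: '/sitemap.xml' });

// Multiple objects as arguments: the extra contents are stringified into the message.
logger.info('Some message', { method: 'GET' }, { url: '/sitemap.xml' });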
Options
- level [info]: Messages logged with a severity greater than or equal to the given one are sent to ES; others are discarded.
- index [none]: The index to be used. This option is mutually exclusive with indexPrefix.
- indexPrefix [logs]: The prefix used to generate the index name according to the pattern <indexPrefix>-<indexSuffixPattern>.
- indexSuffixPattern [YYYY.MM.DD]: A Moment.js-compatible date/time pattern.
- messageType [log]: The type (path segment after the index) under which the messages are stored in the index.
- transformer or rawTransformer [see below]: A transformer function to transform logged data into a different message structure.
- ensureMappingTemplate [true]: If set to true, the given mappingTemplate is checked/uploaded to ES when the module sends the first log message, to make sure the log messages are mapped in a sensible manner.
- mappingTemplate [see the file index-template-mapping.json]: The mapping template to be ensured, as parsed JSON.
- flushInterval [2000]: Time between bulk writes, in ms.
- client: An elasticsearch client instance. If given, all following options are ignored.
- clientOpts: An object hash passed to the ES client. See its docs for supported options.
- waitForActiveShards [1]: Sets the number of shard copies that must be active before proceeding with the bulk operation.
- pipeline [none]: Sets the pipeline id to pre-process incoming documents with. See the bulk API docs.
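A transport combining several of these options might be configured as in the following sketch; the prefix, flush interval, and host are example values, not defaults:

var esTransportOpts = {
  level: 'info',
  indexPrefix: 'myapp-logs',          // index names become myapp-logs-<suffix>
  indexSuffixPattern: 'YYYY.MM.DD',   // Moment.js date pattern for the suffix
  messageType: 'log',
  flushInterval: 5000,                // bulk-write every 5 seconds
  ensureMappingTemplate: true,
  clientOpts: { host: 'http://localhost:9200' }
};
var logger = winston.createLogger({
  transports: [new Elasticsearch(esTransportOpts)]
});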
Logging of ES Client
The default client and options will log through console.
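To take control of the client's own logging, a pre-configured client instance can be passed via the client option. A sketch, assuming the legacy elasticsearch package; host and log level are example values:

var elasticsearch = require('elasticsearch');

// A client that only logs warnings and errors to the console.
var client = new elasticsearch.Client({
  host: 'http://localhost:9200',
  log: 'warning'
});

var esTransport = new Elasticsearch({ client: client });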
Interdependencies of Options
When changing the indexPrefix and/or the transformer, make sure to provide a matching mappingTemplate.
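For example, a template derived from index-template-mapping.json can be loaded from disk and passed alongside the changed options (a sketch; the file name and prefix are hypothetical):

var fs = require('fs');

// A copy of index-template-mapping.json, adjusted to the custom prefix/transformer.
var mappingTemplate = JSON.parse(fs.readFileSync('my-mapping-template.json', 'utf8'));

var esTransportOpts = {
  indexPrefix: 'myapp-logs',
  mappingTemplate: mappingTemplate
};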
Transformer
The transformer function allows you to transform the log data structure as provided
by winston into a structure more appropriate for indexing in ES. transformer()
is passed a logData object, which is a {message, level, meta} object.
rawTransformer() is passed the raw info object from winston. Either
should return an object to write to Elasticsearch.
The default transformer function's transformation is shown below.
Input:
{
  "message": "Some message",
  "level": "info",
  "meta": {
    "method": "GET",
    "url": "/sitemap.xml",
    ...
  }
}
Output:
{
  "@timestamp": "2018-09-30T05:09:08.282Z",
  "message": "Some message",
  "severity": "info",
  "fields": {
    "method": "GET",
    "url": "/sitemap.xml",
    ...
  }
}
The @timestamp is generated in the transformer.
Note that in current logstash versions, the only "standard fields" are @timestamp and @version;
anything else is free-form.
The transformer or rawTransformer function can be provided in the options hash.
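A custom transformer might look like the following sketch; the extra host field is an illustrative addition, not part of the default output:

var os = require('os');

// Reshape winston's { message, level, meta } into the document written to ES.
var transformer = function (logData) {
  return {
    '@timestamp': new Date().toISOString(),
    message: logData.message,
    severity: logData.level,
    fields: logData.meta,
    host: os.hostname()   // illustrative extra field
  };
};

var esTransport = new Elasticsearch({ transformer: transformer });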
Events
- error: emitted in case of any error.
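Since winston transports are event emitters, the error event can be handled as in this sketch:

var esTransport = new Elasticsearch(esTransportOpts);

// React to transport failures (e.g. ES temporarily unavailable) instead of crashing.
esTransport.on('error', function (err) {
  console.error('Error in winston-elasticsearch transport', err);
});

var logger = winston.createLogger({ transports: [esTransport] });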
Example
An example assuming default settings.
Log Action
logger.info('Some message', <req meta data>);
where req meta data is a JSON object.
Generated Message
The log message generated by this module has the following structure:
{
  "@timestamp": "2018-09-30T05:09:08.282Z",
  "message": "Some log message",
  "severity": "info",
  "fields": {
    "method": "GET",
    "url": "/sitemap.xml",
    "headers": {
      "host": "www.example.com",
      "user-agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
      "accept": "*/*",
      "accept-encoding": "gzip,deflate",
      "from": "googlebot(at)googlebot.com",
      "if-modified-since": "Tue, 30 Sep 2018 11:34:56 GMT",
      "x-forwarded-for": "66.249.78.19"
    }
  }
}
Target Index
This message would be POSTed to the following endpoint:
http://localhost:9200/logs-2018.09.30/log/
So the default mapping uses an index pattern logs-*.