hearth-logger
A logger library for internal use at Hearth, but anyone who wants to use it in their own project is free to do so.
Simple Logger
Usage Example
const { Logger } = require('hearth-logger')
const logger = new Logger({
name: 'Job Name',
type: 'Job Type',
})
// Record the start time; it is used to calculate the execution time of the process
logger.startLogTime()
/*
..... Your Code
*/
// Utility methods to log messages at different levels
logger.info('Some Information')
logger.warn('Some Warning')
logger.error('Some Error')
// Log custom key-value pairs into the job information
logger.jobInfo.set('KEY', 'VALUE')
// Set the status code of the job; a later call replaces the earlier value
logger.setJobStatus(200)
logger.setJobStatus(202)
// Record the end time; together with the start time it is used to calculate the execution time
logger.endLogTime()
// Save logs to a temporary .log file inside the tmp folder at the root of your project
logger.save()
// Calling save() again merges the logs into the same file; see below to save to a new file instead
logger.save()
// Save logs to a new temporary .log file inside the tmp folder
logger.save({ saveAsNew: true })
info(message, options), warn(message, options) and error(message, options)
message: any
options: object
- hoisted (boolean): Hoisted logs always appear above the normal logs; useful for highlighting important entries, as shown below.
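For example, a hoisted entry can surface a run summary at the top of the log output. A minimal sketch using the hoisted option described above:

logger.info('Processed 120 records in total', { hoisted: true })
logger.warn('3 records were skipped', { hoisted: true })
logger.info('Processing record 1')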
save(options)
options: object
- saveAsNew (boolean): Save the logs to a new log file with a new timestamp instead of overwriting the regular log file.
- clearLogsOnSave (boolean): Clear the stored logs after saving so they do not appear in subsequent calls to save(). See the combined example below.
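For example, to write each run to its own timestamped file and start the next run with an empty log buffer, a sketch combining the two options above:

logger.save({ saveAsNew: true, clearLogsOnSave: true })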
Samples
Code
const { Logger } = require('hearth-logger')
const jobInfo = {
name: 'Website Name',
type: 'Scraper',
}
const logger = new Logger(jobInfo, {
removeLogFileOnSave: false,
})
// Updating the logger info after initialization
logger.updateConfig(
{
name: 'ABC.COM Scraper',
type: 'Scraper',
description: 'Web scraper for https://abc.com',
},
{
debugLevel: 2,
}
)
logger.startLogTime()
logger.info('Started Scraping')
// Simulate scraping delay
for (let i = 0; i < 10000000000; i++) {}
logger.error('Some error')
logger.warn('Some warning')
logger.setJobStatus(400)
// Setting custom job information
logger.jobInfo.set('NEW ENTITIES SCRAPED', 0)
logger.endLogTime()
logger.save()
Log File
===========================================
JOB INFORMATION
===========================================
NAME : ABC.COM Scraper
TYPE : Scraper
STATUS_CODE : 400
NEW ENTITIES SCRAPED : 0
STARTED AT : 15-05-2020 | 09:06:54
FINISHED AT : 15-05-2020 | 09:07:03
EXECUTION TIME : 0 hrs 0 min 8 sec 567 ms
===========================================
LOGS
===========================================
[INFO] | 09:06:54 Started Scraping
[ERROR] | 09:07:03 Some error
[WARNING] | 09:07:03 Some warning
Using Adapters
Uploading logs to Amazon S3
To upload your logs to S3, you can use the provided S3 adapter. It uses aws-sdk behind the scenes, so any of the authentication options supported by the aws-sdk are available. Here is a code sample:
// Import from the published package (the original sample used repo-local './src' paths);
// this assumes the package also re-exports its interfaces — adjust if they live on a subpath
import { Logger, S3Adapter, JobInfo, JobType } from 'hearth-logger'
async function main() {
const jobInfo: JobInfo = {
name: 'Idealista Scraper',
type: JobType.Scrapper,
}
const logger = new Logger(jobInfo, {
removeLogFileOnSave: true,
})
const s3Plugin = new S3Adapter(
{
bucketName: 'BUCKET_NAME',
},
{
/* AWS-SDK S3 CONFIG */
},
)
// Tell the logger to use the s3 adapter
logger.useAdapter(s3Plugin)
logger.startLogTime()
logger.info('Started Scraping')
// Simulate process delay
for (let i = 0; i < 10000000000; i++) {}
logger.setJobStatus(200)
// Setting custom job information
logger.jobInfo.set('NEW PROPERTIES ADDED', 123)
logger.endLogTime()
// When you call this method, the log file will be uploaded to the 'BUCKET_NAME'
// bucket, under a path derived from the job type (e.g. Logs/Scrapper)
await logger.save()
}
main().catch(console.error)
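The second argument to S3Adapter is the aws-sdk S3 client configuration, so any of its standard options apply. A minimal sketch, assuming aws-sdk v2 and credentials resolved from the SDK's default provider chain (environment variables, shared credentials file, or an IAM role):

const s3Plugin = new S3Adapter(
  {
    bucketName: 'BUCKET_NAME',
  },
  {
    // Region of the target bucket
    region: 'eu-west-1',
    // accessKeyId and secretAccessKey could be set here explicitly, but letting
    // the SDK's default credential chain resolve them is usually preferable
  },
)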