Nolimit Crawl Storing
This module stores files and their metadata by injecting data objects. You just need to define your metadata object and your input content.
Install it with npm install nolimitid-crawl-storing
Then require it: var file = require('nolimitid-crawl-storing')
How to use
file.inject(metadata, object)
Define your metadata as an object, then pass the object/content you want to store as the second parameter. The buffer is written to file automatically once it reaches the maximum buffer size.
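A minimal sketch of how a crawler might use it (the metadata and content fields below are illustrative assumptions, not a schema required by the module):

```js
var file = require('nolimitid-crawl-storing');

// Illustrative metadata object; which fields you put here is up to your crawler.
var metadata = {
  name: 'articles',
  source: 'http://example.com'
};

// One crawled item to store under that metadata.
var content = {
  title: 'Example title',
  body: 'Example body text'
};

// Buffers the content; the buffer is written to file automatically
// once it reaches the maximum buffer size.
file.inject(metadata, content);
```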
file.bufferFlush(metadata)
Empties the buffer for the given metadata by writing it to file manually. It's useful to make sure your buffered data is stored.
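For example, after a crawl batch finishes (reusing the metadata object from the sketch above):

```js
// Write whatever is still buffered for this metadata to file right away.
file.bufferFlush(metadata);
```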
file.flushAll()
Empties all of your buffers by writing them to files manually. It's useful to make sure all your buffered data is stored.
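One way to use it is on shutdown, sketched here with a SIGINT handler (just one possible place to call it):

```js
// Flush every buffer to file before the process exits so nothing is lost.
process.on('SIGINT', function () {
  file.flushAll();
  process.exit(0);
});
```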
file.setBufferSize(size)
Sets the buffer size threshold; the default is 50, or whatever is defined in the module's config.json.
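For example (assuming the size counts buffered items, as the default of 50 suggests):

```js
// Flush to file after 100 items instead of the default 50.
file.setBufferSize(100);
```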
file.setIncomingDir(dir)
Sets the temp directory for in-progress files; the default is ../, or whatever is defined in the module's config.json.
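For example (the path is only an illustration):

```js
// Keep in-progress files in a dedicated temp directory instead of ../
file.setIncomingDir('./tmp/incoming');
```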
file.setOutputDir(dir)
Sets the output directory for finished files; the default is ./, or whatever is defined in the module's config.json.
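For example (again, an illustrative path):

```js
// Write finished files to a dedicated output directory instead of ./
file.setOutputDir('./crawl-output');
```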
file.setInterval(isTrue, interval)
Makes the storing engine flush its buffers periodically. Set isTrue to true to enable periodic flushing or false to disable it. The optional interval sets the flush period in milliseconds; the default is 60000ms.
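For example:

```js
// Flush all buffers every 30 seconds instead of the default 60000 ms.
file.setInterval(true, 30000);

// Later, disable periodic flushing; the interval can be omitted.
file.setInterval(false);
```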