# Continuous Streams

Stream classes with specific behavior:

- continuous reading, i.e., it doesn't stop if there is temporarily no data available
- configurable chunk size to read
- specific error handling (error vs. skip/retry)
- configurable parallel processing
- timeouts
- graceful shutdown after `SIGTERM` or `SIGINT`
## Install

```bash
npm install continuous-streams
```
## Usage

This is a basic example (ES6/ES2015):

```js
const { pipeline } = require('stream');
const { ContinuousReader, ContinuousWriter } = require('continuous-streams');

const reader = new ContinuousReader({
  chunkSize: 100, // default `50`
});
reader.readData = async (count) => {
  // read `count` items from resource
  // if it rejects, a `skip` event is emitted (unless `skipOnError` is `false`)
  return items; // resolve with array of data items (can be empty)
};

const writer = new ContinuousWriter({
  parallelOps: 10, // max number of `writeData()` to fire off in parallel
});
writer.writeData = async (item) => {
  // process a single data item
  // if it rejects, a `skip` event is emitted (unless `skipOnError` is `false`)
};

pipeline( // go!
  reader,
  writer,
  (error) => { ... }, // pipeline stopped
);

['SIGTERM', 'SIGINT'].forEach((eventType) => {
  process.on(eventType, () => reader.stop());
});
```
Or if you prefer TypeScript (types included):

```ts
import { pipeline } from "stream";
import { ContinuousReader, ContinuousWriter } from "continuous-streams";

const reader = new ContinuousReader<Foo>();
reader.readData = async (count: number): Promise<Foo[]> => {
  // ...
  return items;
};

const writer = new ContinuousWriter<Foo>();
writer.writeData = async (item: Foo): Promise<void> => {
  // ...
};

// ...
```

For this module to work, your TypeScript compiler options must include `"target": "ES2015"` (or later), `"moduleResolution": "node"`, and `"esModuleInterop": true`.
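A minimal `tsconfig.json` satisfying these requirements might look like the following sketch; the `module` and `strict` settings are illustrative assumptions, not requirements of this package:

```json
{
  "compilerOptions": {
    "target": "ES2015",          // or any later target
    "module": "commonjs",        // assumption: a plain Node.js project
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true               // assumption: not required by this package
  }
}
```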
## Stream Classes

### Class `ContinuousReader`

Extends `stream.Readable`.
It reads from an underlying data source (`objectMode` only) -- please implement or assign `readData()` for that purpose.
It does not end when the underlying data source is (temporarily) empty but waits for new data to arrive.
It is robust/reluctant regarding (temporary) reading errors.
It supports gracefully stopping the pipeline.
#### Options

- `chunkSize` -- Whenever the number of objects in the internal buffer drops below `chunkSize`, a new chunk of data is read from the underlying resource. Higher values mean less frequent polling but less real-time behavior. Default is `50`.
- `skipOnError` -- If `true` (default), a `skip` event is emitted when `readData()` rejects. If `false`, an `error` event is emitted when `readData()` rejects and the pipeline will stop. Default is `true`.
- `waitAfterEmpty` -- Delay in milliseconds if there is (temporarily) no data available. Default is `5000`.
- `waitAfterLow` -- Delay in milliseconds if there is (temporarily) less data available than the chunk size. Default is `1000`.
- `waitAfterError` -- Delay in milliseconds if there is a (temporary) reading problem. Default is `10000`.
- `autoStop` -- The intention of this package is to provide easy streaming without stopping the stream when the source is (temporarily) empty. Sometimes, however, stopping is exactly what you want. Set to `true` to automatically and gracefully stop streaming when `readData()` returns no data or less data than requested. Default is `false`.
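For illustration, a reader that polls in larger chunks, backs off longer when the source is empty, and stops by itself once the source is drained could be configured as follows (a sketch; the option values are arbitrary):

```js
const { ContinuousReader } = require('continuous-streams');

const reader = new ContinuousReader({
  chunkSize: 200,        // read up to 200 items per attempt
  waitAfterEmpty: 10000, // wait 10 s when no data was available
  waitAfterError: 30000, // wait 30 s after a failed `readData()`
  autoStop: true,        // end gracefully once less data than requested is returned
});
```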
#### Methods

- `async readData(count)` -- Reads `count` data items from the underlying resource. To be implemented or assigned. `count` usually equals `chunkSize`. It resolves with an array of data items -- which may be empty if there is temporarily no data available. If it rejects, an `error` or `skip` event is emitted (depending on `skipOnError`).
- `stop()` -- To be called after `SIGINT` or `SIGTERM` for gracefully stopping the pipeline. The `end` event is emitted either immediately (if the read buffer is empty) or at the next reading attempt. Graceful shutdown means that all data that has been read so far will be fully processed throughout the entire pipeline. Example: `process.on('SIGTERM', () => reader.stop())`.
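A sketch of both methods in practice, assuming a hypothetical `queue` object as the data access layer (not part of this package):

```js
const { ContinuousReader } = require('continuous-streams');

const reader = new ContinuousReader({ chunkSize: 50 });

reader.readData = async (count) => {
  // `queue.fetchNext()` is a placeholder for your own data source;
  // returning fewer items than `count` (or none at all) is fine
  const items = await queue.fetchNext(count);
  return items;
};

// let data already read drain through the pipeline before the process exits
['SIGTERM', 'SIGINT'].forEach((signal) => {
  process.on(signal, () => reader.stop());
});
```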
Events
skip
- When reading from the underlying resource failed (ifskipOnError
istrue
). The stream continues to read after a delay ofwaitAfterError
. Example handler:reader.on('skip', ({ error }) => { ... })
.error
- When reading from the underlying resource failed (ifskipOnError
isfalse
). This will stop the pipeline. Example handler:reader.on('error', (error) => { ... })
.end
- Whenstop()
was called for gracefully stopping the pipeline.close
- When the stream is closed (as usual).debug
- After each successful reading attempt providing some debug information. Example handler:reader.on('debug', ({ items, requested, total, elapsed }) => { ... })
.items
is the number of read data items.requested
is the number of requested data items (normally equalscount
).total
is an overall counter.elapsed
is the number of milliseconds ofreadData()
to resolve.
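Typical listeners, continuing the reader from above (a sketch; `logger` is a placeholder for your logging facility):

```js
reader.on('skip', ({ error }) => {
  logger.warn(`reading failed, retrying after waitAfterError: ${error.message}`);
});

reader.on('debug', ({ items, requested, total, elapsed }) => {
  logger.debug(`read ${items}/${requested} items (${total} total) in ${elapsed} ms`);
});

reader.on('end', () => logger.info('reader stopped gracefully'));
```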
### Class `ContinuousWriter`

Extends `stream.Writable`.
It processes data items (`objectMode` only) for some sort of write operation at the end of a pipeline -- please implement or assign `writeData()` for that purpose.
It is robust/reluctant regarding errors.
If `skipOnError` is `true` (default), an error during a write operation emits a `skip` event and stream processing will continue.
If `skipOnError` is `false`, an error during a write operation emits an `error` event and stream processing will stop.
It supports gracefully stopping the entire pipeline, i.e., it waits until all asynchronous operations in flight have returned before emitting the `finish` event.
Options
parallelOps
- The max number of asynchronouswriteData()
operations to fire off in parallel. Default is10
.skipOnError
- Iftrue
(default), askip
event is emitted whenwriteData()
rejects. Iffalse
, anerror
event is emitted whenwriteData()
rejects and the pipeline will stop. Default istrue
.timeoutMillis
- Timeout in milliseconds forwriteData()
. Default is60000
.
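For example, a writer with higher concurrency, a shorter timeout, and strict error handling could be configured like this (a sketch; the values are arbitrary):

```js
const { ContinuousWriter } = require('continuous-streams');

const writer = new ContinuousWriter({
  parallelOps: 25,      // up to 25 concurrent `writeData()` operations
  skipOnError: false,   // any failed write stops the pipeline
  timeoutMillis: 15000, // fail a single `writeData()` after 15 s
});
```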
#### Methods

- `async writeData(item)` -- Processes a single data item. To be implemented or assigned. If it rejects, an `error` or `skip` event is emitted (depending on `skipOnError`).
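A `writeData()` implementation that forwards each item to a hypothetical HTTP endpoint might look like this (a sketch; the URL is a placeholder and the global `fetch` requires Node.js 18+):

```js
writer.writeData = async (item) => {
  const response = await fetch('https://example.com/api/items', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(item),
  });
  if (!response.ok) {
    // rejecting triggers a `skip` or `error` event, depending on `skipOnError`
    throw new Error(`unexpected status ${response.status}`);
  }
};
```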
Events
skip
- IfskipOnError
istrue
(default). Example handler:writer.on('skip', ({ data, error }) => { ... })
.error
- IfskipOnError
isfalse
. This will stop the pipeline.finish
- After graceful shutdown and all asynchronous write operations are returned.close
- Aftererror
orfinish
(as usual).debug
- After each successful write operation providing some debug information. Example handler:writer.on('debug', ({ inflight, total, elapsed }) => { ... })
.inflight
is the number of asynchronouswriteData()
operations currently inflight.total
is an overall counter.elapsed
is the number of milliseconds ofwriteData()
to resolve.
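Possible listeners, continuing the writer from above (a sketch; `logger` is again a placeholder):

```js
writer.on('skip', ({ data, error }) => {
  logger.warn(`skipping item: ${error.message}`, { data });
});

writer.on('debug', ({ inflight, total, elapsed }) => {
  logger.debug(`write took ${elapsed} ms (${inflight} in flight, ${total} total)`);
});

writer.on('finish', () => logger.info('all pending writes completed'));
```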
### Class `ContinuousTransformer`

Extends `stream.Transform`.
It processes data items (`objectMode` only) for some sort of transform operation in the midst of a pipeline -- please implement or assign `transformData()` for that purpose.
It is robust/reluctant regarding errors.
If `skipOnError` is `true` (default), an error during a transform operation emits a `skip` event and stream processing will continue.
If `skipOnError` is `false`, an error during a transform operation emits an `error` event and stream processing will stop.
#### Options

- `parallelOps` -- The max number of asynchronous `transformData()` operations to fire off in parallel. Default is `10`.
- `skipOnError` -- If `true` (default), a `skip` event is emitted when `transformData()` rejects. If `false`, an `error` event is emitted when `transformData()` rejects and the pipeline will stop. Default is `true`.
- `timeoutMillis` -- Timeout in milliseconds for `transformData()`. Default is `60000`.
#### Methods

- `async transformData(item)` -- Processes a single data item. Resolves with the transformed data item (or an array of items for splitting the item into multiple items). To be implemented or assigned. If it rejects, an `error` or `skip` event is emitted (depending on `skipOnError`).
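Since `transformData()` may resolve with an array, it can fan out one incoming item into several downstream items. A sketch, assuming an incoming `order` object with a `positions` array (an illustrative shape, not something this package prescribes):

```js
const { ContinuousTransformer } = require('continuous-streams');

const transformer = new ContinuousTransformer();

transformer.transformData = async (order) => {
  // returning an array splits one incoming item into multiple outgoing items
  return order.positions.map((position) => ({
    orderId: order.id,
    sku: position.sku,
    quantity: position.quantity,
  }));
};
```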
#### Events

- `skip` -- If `skipOnError` is `true` (default). Example handler: `transformer.on('skip', ({ data, error }) => { ... })`.
- `error` -- If `skipOnError` is `false`. This will stop the pipeline.
- `close` -- When the stream is closed (as usual).
- `debug` -- After each successful transform operation, providing some debug information. Example handler: `transformer.on('debug', ({ inflight, total, elapsed }) => { ... })`. `inflight` is the number of asynchronous `transformData()` operations currently in flight. `total` is an overall counter. `elapsed` is the number of milliseconds for `transformData()` to resolve.
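Putting it all together, a pipeline with a transformer in the middle could look like this sketch (`queue`, `enrich`, and `store` are placeholders for your own code):

```js
const { pipeline } = require('stream');
const { ContinuousReader, ContinuousTransformer, ContinuousWriter } = require('continuous-streams');

const reader = new ContinuousReader({ chunkSize: 100 });
reader.readData = async (count) => queue.fetchNext(count);

const transformer = new ContinuousTransformer({ parallelOps: 10 });
transformer.transformData = async (item) => enrich(item);

const writer = new ContinuousWriter({ parallelOps: 10 });
writer.writeData = async (item) => store.save(item);

pipeline(reader, transformer, writer, (error) => {
  if (error) console.error('pipeline stopped with error', error);
});

['SIGTERM', 'SIGINT'].forEach((signal) => process.on(signal, () => reader.stop()));
```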