poolifier
Fast and small Node.js Worker_Threads and Cluster Worker Pool
Node.js Worker_Threads and Cluster Worker Pool
Why Poolifier?
Poolifier is used to perform CPU- and/or I/O-intensive tasks on Node.js servers; it implements worker pools using the worker_threads and cluster Node.js modules.
With poolifier you can improve performance and resolve problems related to the event loop, such as blocking it with CPU-intensive tasks.
Moreover, you can execute your tasks using an API designed to improve the developer experience.
Please consult our general guidelines.
- Easy to use :white_check_mark:
- Fixed and dynamic pool size :white_check_mark:
- Easy to switch from one pool type to another :white_check_mark:
- Performance benchmarks :white_check_mark:
- No runtime dependencies :white_check_mark:
- Proper integration with Node.js async_hooks :white_check_mark:
- Support for CommonJS, ESM and TypeScript :white_check_mark:
- Support for worker_threads and cluster Node.js modules :white_check_mark:
- Tasks distribution strategies :white_check_mark:
- Lockless tasks queueing :white_check_mark:
- Queued tasks rescheduling:
  - Task stealing on idle :white_check_mark:
  - Tasks stealing under back pressure :white_check_mark:
  - Tasks redistribution on worker error :white_check_mark:
- Support for sync and async task functions :white_check_mark:
- Support for multiple task functions with per task function queuing priority and tasks distribution strategy :white_check_mark:
- Support for task functions CRUD operations at runtime :white_check_mark:
- General guidelines on pool choice :white_check_mark:
- Error handling out of the box :white_check_mark:
- Widely tested :white_check_mark:
- Active community :white_check_mark:
- Code quality
- Code security
Table of contents
- Overview
- Installation
- Usage
- Node.js versions
- API
- General guidelines
- Worker choice strategies
- Contribute
- Team
- License
Overview
Poolifier contains two worker_threads/cluster worker pool implementations, so you don't have to deal with worker_threads/cluster complexity.
The first implementation is a fixed worker pool, with a defined number of workers that are started at creation time and will be reused.
The second implementation is a dynamic worker pool, with a number of workers started at creation time (these workers are always active and will be reused) and other workers created when the load increases (up to an upper limit; these workers will be reused while active); the newly created workers are stopped after a configurable period of inactivity.
You have to implement your worker by extending the ThreadWorker or ClusterWorker class.
Installation
npmjs
npm install poolifier --save
JSR
npx jsr add @poolifier/poolifier
Usage
You can implement a poolifier worker_threads worker in a simple way by extending the class ThreadWorker:
import { ThreadWorker } from 'poolifier'
function yourFunction(data) {
  // this will be executed in the worker thread,
  // the data will be received by using the execute method
  return { ok: 1 }
}

export default new ThreadWorker(yourFunction, {
  maxInactiveTime: 60000,
})
Instantiate your pool based on your needs:
import { DynamicThreadPool, FixedThreadPool, PoolEvents, availableParallelism } from 'poolifier'
// a fixed worker_threads pool
const pool = new FixedThreadPool(availableParallelism(), './yourWorker.js', {
  onlineHandler: () => console.info('worker is online'),
  errorHandler: e => console.error(e),
})
pool.emitter?.on(PoolEvents.ready, () => console.info('Pool is ready'))
pool.emitter?.on(PoolEvents.busy, () => console.info('Pool is busy'))
// or a dynamic worker_threads pool
const pool = new DynamicThreadPool(Math.floor(availableParallelism() / 2), availableParallelism(), './yourWorker.js', {
  onlineHandler: () => console.info('worker is online'),
  errorHandler: e => console.error(e),
})
pool.emitter?.on(PoolEvents.full, () => console.info('Pool is full'))
pool.emitter?.on(PoolEvents.ready, () => console.info('Pool is ready'))
pool.emitter?.on(PoolEvents.busy, () => console.info('Pool is busy'))
// the execute method signature is the same for both implementations,
// so you can easily switch from one to another
try {
  const res = await pool.execute()
  console.info(res)
} catch (err) {
  console.error(err)
}
You can do the same with the classes ClusterWorker, FixedClusterPool and DynamicClusterPool.
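For instance, a fixed cluster pool is instantiated the same way as the thread pools above; this is a sketch assuming ./yourClusterWorker.js exports a ClusterWorker built like the ThreadWorker example:
import { FixedClusterPool, availableParallelism } from 'poolifier'
// a fixed cluster pool with the same options and execute API as the thread pools
const clusterPool = new FixedClusterPool(availableParallelism(), './yourClusterWorker.js', {
  onlineHandler: () => console.info('worker is online'),
  errorHandler: e => console.error(e),
})
const res = await clusterPool.execute()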
See the examples for more details.
Remember that workers can only send and receive structured-cloneable data.
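The feature list above also mentions multiple task functions per worker: a worker can expose several named task functions and the pool selects one by name at execution time. A minimal sketch with hypothetical factorial and fibonacci task functions (per task function priority and worker choice strategy can also be configured, see the API documentation):
import { ThreadWorker } from 'poolifier'

function factorial (n) {
  return n <= 1 ? 1 : n * factorial(n - 1)
}

function fibonacci (n) {
  return n <= 1 ? n : fibonacci(n - 1) + fibonacci(n - 2)
}

// a worker exposing several named task functions
export default new ThreadWorker({
  factorial: data => ({ result: factorial(data.n) }),
  fibonacci: data => ({ result: fibonacci(data.n) }),
})
On the pool side, the task function is then selected by name: await pool.execute({ n: 30 }, 'fibonacci').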
Node.js versions
Node.js versions >= 18.x.x are supported.
API
General guidelines
Worker choice strategies
Contribute
Choose your task here: propose an idea, a fix or an improvement.
See CONTRIBUTING guidelines.
Team
Creator/Owner:
Maintainers:
Contributors: