
eisenhertz v1.6.0

manage, cluster and scale processes with ease

Eisenhertz

Node.js module-as-a-process cluster management

What does it do?

  • Eisenhertz excels at one thing only: keeping a set of dynamic module executions up and running across any number of servers, VMs or containers
  • You pass in a module to execute, plus config data (jobs) for each execution, and eisenhertz ensures that the correct number of modules is constantly running across all instances of itself, where each module runs in its own process
  • It also gives you control to manually add or remove such jobs in real time
  • Additionally, you can talk to the processes via IPC and retrieve metrics from all running processes
  • Eisenhertz is not a stand-alone "server setup"; its main idea is to serve as the basis for a project that requires scaling across many machines in stand-alone processes

Requirements

  • Eisenhertz relies heavily on async/await, so you will need at least Node.js >= 7.0
  • The message cortex and job queue require Redis >= 2.8.18
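If you want to fail fast when the Node.js requirement is not met, a startup check is easy to add. This check is not part of eisenhertz itself; it is a hedged sketch you could drop into your own bootstrap code:

```javascript
// Verify the Node.js >= 7.0 requirement before loading eisenhertz.
// process.versions.node looks like "7.10.1"; we only need the major part.
const major = Number(process.versions.node.split(".")[0]);

if (major < 7) {
    throw new Error(
        "eisenhertz needs Node.js >= 7.0, found " + process.versions.node
    );
}
```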

Install via

npm i eisenhertz

Server Setup

const {
    Eisenhertz,
    defaultConfig,
    defaultLogger
} = require("eisenhertz");

// called by the leader to determine which jobs should currently be running
const fetchJobNames = callback => {
    callback(null, [
        "one",
        "two"
    ]);
};

// called for each job id to resolve the configuration passed to its process
const fetchJobDetails = (id, callback) => {

    let config = {};

    switch (id) {

        case "one":
            config.port = 1337;
            config.hi = "hi from one";
            break;

        case "two":
            config.port = 1338;
            config.hi = "hi from two";
            break;
    }

    callback(null, {
        config
    });
};

// defaultConfig is the baseline configuration shipped with the module
// (see the "Configuration" section below)
const eisenhertz = new Eisenhertz(defaultConfig, defaultLogger());
eisenhertz
    .start(fetchJobNames, fetchJobDetails)
    .then(() => {});

Fork-Module Setup

const { ForkProcess } = require("eisenhertz");
const express = require("express");

const fork = new ForkProcess();
let incomingRequests = 0;

// receives the job's payload ({ config }) as resolved by fetchJobDetails
const processCallback = data => {

    const app = express();

    app.get("/hi", (req, res) => {
        incomingRequests++;
        res.status(200).json({
            message: data.config.hi
        });
    });

    app.listen(data.config.port, () => {
        fork.log("ready");
    });
};

// polled by the parent process to collect metrics from this fork
const metricsCallback = cb => {
    cb(null, {
        incomingRequests
    });
};

// registers both callbacks and opens the IPC channel to the parent
fork.connect(processCallback, metricsCallback);

Example Setup Description

  • The example setup above gives you the possibility to scale a demo webserver across an unlimited number of instances, simply by deploying the server module to servers, VMs or containers.
  • As soon as it starts, it will spawn two processes across the parent systems, each running one of the two webservers.

Configuration

{
    prefix: "eh",
    redis: {
        host: "localhost",
        port: 6379,
        db: 7
    },
    redlock: {
        driftFactor: 0.01,
        retryCount: 2,
        retryDelay: 200,
        retryJitter: 200
    },
    settings: {
        lockDuration: 4500,
        stalledInterval: 4500,
        maxStalledCount: 1,
        guardInterval: 2500,
        retryProcessDelay: 2500
    },
    properties: {
        name: "eh:empty",
        maxJobsPerWorker: 2,
        masterLock: "eh:master:lock",
        masterLockTtl: 2000,
        masterLockReAttempt: 4000,
        maxInstancesOfJobPerNode: 1
    },
    jobOptions: {
        priority: 1,
        delay: 1000,
        attempts: 1, // don't touch
        repeat: undefined, // don't touch
        backoff: undefined, // don't touch
        lifo: undefined, // don't touch
        timeout: undefined, // don't touch
        jobId: undefined, // will be set by TaskHandler
        removeOnComplete: true, // don't touch
        removeOnFail: true // don't touch
    },
    fork: {
        module: "./fork/ForkProcess.js"
    }
}
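A custom configuration can be built by merging overrides onto these defaults before passing the result to `new Eisenhertz(...)`. A minimal sketch, assuming plain shallow merging per section (the override values and the abridged defaults object are illustrative, not the module's full defaults):

```javascript
// Illustrative defaults mirroring the structure shown above (abridged).
const defaults = {
    prefix: "eh",
    redis: { host: "localhost", port: 6379, db: 7 },
    properties: { maxJobsPerWorker: 2, maxInstancesOfJobPerNode: 1 }
};

// Merge per section so untouched keys keep their default values;
// Object.assign keeps this compatible with Node.js 7.
const config = Object.assign({}, defaults, {
    redis: Object.assign({}, defaults.redis, { host: "redis.internal" }),
    properties: Object.assign({}, defaults.properties, { maxJobsPerWorker: 4 })
});

console.log(config.properties.maxJobsPerWorker); // 4
console.log(config.redis.port); // 6379 (kept from the defaults)
```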

Controlling jobs on nodes

config.properties.maxInstancesOfJobPerNode
/*
    lets you limit the number of instances of a job
    that run on a single node; you can define a job instance
    by using ":" as a delimiter, e.g. jobOne:1, jobOne:2 and jobOne:3.
    If the limit is reached, the node will return the job to the
    queue with an error after a small timeout.
*/
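The ":" delimiter convention can be illustrated with a small sketch. The job names below are hypothetical; the point is that everything before the delimiter identifies the job, and the suffix distinguishes its instances:

```javascript
// Hypothetical job list as it might be returned by fetchJobNames:
// three instances of "jobOne" and one of "jobTwo".
const jobNames = ["jobOne:1", "jobOne:2", "jobOne:3", "jobTwo:1"];

// the base job name is everything before the ":" delimiter
const baseName = name => name.split(":")[0];

// count how many instances of a given job are scheduled; a node with
// maxInstancesOfJobPerNode: 1 would accept at most one of them
const instancesOf = job =>
    jobNames.filter(name => baseName(name) === job).length;

console.log(instancesOf("jobOne")); // 3
console.log(instancesOf("jobTwo")); // 1
```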

config.properties.maxJobsPerWorker
/*
    lets you limit the number of jobs per worker;
    it is usually a good idea to cap this at the number
    of cores (* 2 on hyper-threaded Intel systems) of the node's host
*/