
@habemus/amqp-worker

v2.0.0


Readme

Why not allow multiple worker functions (i.e. worker methods) to be exposed?

During development we briefly studied the possibility of allowing multiple methods to be exposed through a single h-worker. The idea was discarded because workers are meant to perform resource-intensive tasks (e.g. builds), and exposing multiple tasks through the same worker might encourage designs that do not perform well.

There are two strategies for a worker to expose multiple methods to its clients, discussed below:

  1. Multiple queues - one queue for each workerMethod

The idea is simple: one direct exchange for the worker and its client to communicate, and multiple queues, one for each method. The problem is that message consumption works on a per-queue basis: one consumer per queue. As a result, there is no straightforward way to prevent the worker from executing two (or more) workerMethods at once (one from each queue). That is not a good path to take, as the parallelism does not explicitly benefit either the worker or the client.

  2. A single queue - define a custom application-level protocol for specifying the requested method

In this mode, the message itself would carry the information about which workerMethod should be executed. The worker would consume from a single queue and, before passing the message on, would parse it and decide which workerMethod to execute. This prevents parallel jobs, as only one queue consumer exists. The problem with this approach is that the RabbitMQ queue becomes entirely non-semantic: it cannot be assumed that a queue represents workload requests for a given method without introspecting each message. This does not seem to be a good path either (see the sketch after this list).
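
The following is a rough sketch of this second strategy using amqplib directly; the queue name, message shape and handler names are made up for illustration and are not part of h-worker's actual API. It shows how every message has to be opened before the worker can tell which method it targets, which is exactly what makes the queue non-semantic.

    // Strategy 2 sketch: a single queue, with the requested method carried
    // inside each message. Names below are hypothetical, not h-worker's API.
    const amqp = require('amqplib');

    const handlers = {
      build:  (payload) => { /* resource-intensive build */ },
      deploy: (payload) => { /* another task sharing the same queue */ },
    };

    async function startWorker() {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();
      await ch.assertQueue('h-worker-jobs', { durable: true });
      ch.prefetch(1); // one job at a time, regardless of which method it targets

      ch.consume('h-worker-jobs', (msg) => {
        // The queue itself says nothing about the work: the worker must
        // introspect every message to find out which method it is for.
        const { method, payload } = JSON.parse(msg.content.toString());
        const handler = handlers[method];
        if (!handler) {
          ch.nack(msg, false, false); // unknown method: reject without requeueing
          return;
        }
        handler(payload);
        ch.ack(msg);
      });
    }

    startWorker();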

Sticking with the single-worker-fn design keeps the API simple and makes the intentions of worker and client quite clear.

What is the difference between h-worker and intercomm?

Both modules are frameworks for remote procedure calling and definitely have a great deal of functional overlap. For the moment (20 Sep 2016), though, intercomm is targeted at providing a thin abstraction layer specifically for real-time RPC. By 'real-time RPC' we mean procedure calls that expect answers within a second (or a few seconds at most). Thus, intercomm stores the promise for each RPC call in memory and waits for the response before removing the reference to that promise. h-worker, on the other hand, keeps no references to call ids or promises; it is stateless: once messages are sent from the client, it completely forgets about them, and once responses arrive the client does not try to match them to the request, but instead emits an event that should be handled by the h-worker client's consumer. Furthermore, h-worker is built around the idea that jobs might send intermediate messages (infoLog, warningLog, errorLog) that do not fit cleanly into a simple request-and-response paradigm.
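
As an illustration only (this is not the actual h-worker client API), the stateless, event-based pattern described above could be sketched with amqplib and Node's EventEmitter roughly like this; the queue names, event names and class name are assumptions:

    // Fire-and-forget client sketch: nothing is stored per request, and every
    // incoming message is simply re-emitted as an event. Hypothetical names.
    const amqp = require('amqplib');
    const { EventEmitter } = require('events');

    class FireAndForgetClient extends EventEmitter {
      async connect(url = 'amqp://localhost') {
        this.conn = await amqp.connect(url);
        this.ch = await this.conn.createChannel();
        await this.ch.assertQueue('jobs', { durable: true });
        await this.ch.assertQueue('results', { durable: true });

        // No table of pending promises: responses and intermediate messages
        // (e.g. infoLog, errorLog) are handed straight to event listeners.
        this.ch.consume('results', (msg) => {
          const body = JSON.parse(msg.content.toString());
          this.emit(body.type || 'result', body);
          this.ch.ack(msg);
        });
      }

      // Send the request and forget about it; no call id is kept in memory.
      request(payload) {
        this.ch.sendToQueue('jobs', Buffer.from(JSON.stringify(payload)), {
          persistent: true,
        });
      }
    }

A consumer of such a client would subscribe with client.on('result', ...) and client.on('errorLog', ...) rather than awaiting a per-call promise, which is the key contrast with intercomm's in-memory promise map.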

Another difference is that h-worker is used for server-side work, while intercomm is completely agnostic. On that note, it is very important to point out that h-worker is tightly coupled to RabbitMQ and, as such, relies heavily on the AMQP protocol (which overlaps considerably with intercomm's custom protocol, json-message).

In the future, h-worker and intercomm might be merged somehow, but for the time being it does not seem to be a good idea to attempt it.

The main benefit of such a merge would be a universal message delivery system: the same format sent from the browser would be used inside RabbitMQ, REST APIs and more. (That would be huge!)

docker run rabbitmq

docker run -d --hostname my-rabbit --name my-rabbit -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15672:15672 -p 25672:25672 rabbitmq:3-management
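
In the command above, 5672 is the AMQP port and 15672 serves the management UI. As a quick sanity check (a sketch using amqplib and RabbitMQ's default guest/guest credentials), you can verify the broker is reachable:

    // Minimal connectivity check against the container started above.
    const amqp = require('amqplib');

    amqp.connect('amqp://guest:guest@localhost:5672')
      .then((conn) => {
        console.log('connected to RabbitMQ');
        return conn.close();
      })
      .catch((err) => {
        console.error('broker not reachable yet:', err.message);
      });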