
@lokalise/background-jobs-common


Common background jobs library

This library provides a basic abstraction over BullMQ-powered background jobs. There are two types available:

  • AbstractBackgroundJobProcessor: a base class for running jobs; it provides instrumentation and logger integration, plus a basic API for enqueuing jobs.

Getting Started

Install all dependencies:

npm install

Start Docker containers:

docker compose up -d

Run all tests:

npm run test

Usage

See the test implementations in the ./test/processors folder. Extend AbstractBackgroundJobProcessor and implement the required methods.

Common jobs

For this type of job, extend AbstractBackgroundJobProcessor and implement a processInternal method, which is called when a job is dequeued. Processing logic is automatically wrapped in New Relic and basic logger calls, so you only need to add your domain logic.

Both the queue and the worker are started automatically when you instantiate the processor. A default configuration is provided, which you can override by passing queueConfig.queueOptions and workerOptions params to the constructor.

Use dispose() to correctly stop processing any new messages and wait for the current ones to finish.
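
Putting the pieces together, a minimal common-job processor might look like the sketch below. It reuses the config fields shown elsewhere in this readme (queueId, ownerName); the exact processInternal signature is an assumption and may differ slightly between versions.

import type { Job } from 'bullmq'

type Data = { id: string; value: string }

export class MyJobProcessor extends AbstractBackgroundJobProcessor<Data> {
    constructor(dependencies: BackgroundJobProcessorDependencies<Data>) {
        super(dependencies, {
            queueId: 'my-queue',
            ownerName: 'example owner',
        })
    }

    // Called for every dequeued job; instrumentation and logger calls are handled by the base class.
    protected override async processInternal(job: Job<Data>): Promise<void> {
        // domain logic goes here
    }
}

// const processor = new MyJobProcessor(dependencies)
// await processor.start()
// ...
// await processor.dispose() // stop accepting new messages and wait for in-flight ones to finish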

Spies

Testing asynchronous code can be challenging. To address this, we've implemented a built-in spy functionality for jobs.

Example Usage

// randomUUID comes from Node's crypto module; generateMonotonicUuid is a
// monotonic-UUID helper from a Lokalise utility package, as in the original example.
import { randomUUID } from 'node:crypto'

const scheduledJobIds = await processor.scheduleBulk([
  {
    id: randomUUID(),
    value: 'first',
    metadata: { correlationId: generateMonotonicUuid() },
  },
  {
    id: randomUUID(),
    value: 'second',
    metadata: { correlationId: randomUUID() },
  },
]);

// Wait by job id, or by a predicate over the job data, for the 'completed' state.
const firstJob = await processor.spy.waitForJobWithId(scheduledJobIds[0], 'completed');
const secondJob = await processor.spy.waitForJob(
  (data) => data.value === 'second',
  'completed'
);

expect(firstJob.data.value).toBe('first');
expect(secondJob.data.value).toBe('second');

Spy Methods

  • processor.spy.waitForJobWithId(jobId, status):

    • Waits for a job with a specific ID to reach the specified status.
    • Returns the job instance when the status is achieved.
  • processor.spy.waitForJob(predicate, status):

    • Waits for any job that matches the custom predicate to reach the specified status.
    • Returns the matching job instance when the status is achieved.

Awaitable Job States

Spies can await jobs in the following states:

  • scheduled: The job is scheduled but not yet processed.
  • failed: The job is processed but failed.
  • completed: The job is processed successfully.
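
For example, to assert that error handling ran, you can wait for the failed state. This sketch reuses scheduleBulk and waitForJobWithId from above and assumes a (hypothetical) payload that makes processInternal throw:

const [failingJobId] = await processor.scheduleBulk([
  { id: randomUUID(), value: 'will-fail', metadata: { correlationId: randomUUID() } },
])

// Resolves once the job has been processed and has failed
const failedJob = await processor.spy.waitForJobWithId(failingJobId, 'failed')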

Important Notes

  • Spies do not need to be invoked before the job is processed, accommodating the unpredictability of asynchronous operations.
  • Even if you call await processor.spy.waitForJobWithId(scheduledJobIds[0], state) after the job has already been scheduled or processed, the spy can still resolve the job state for you.
  • Spies are disabled in production.
    • To enable them, set the isTest option of BackgroundJobProcessorConfig to true in your processor configuration (see the sketch after these notes).

By utilizing these spy functions, you can more effectively manage and test the behavior of asynchronous jobs within your system.
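
A minimal sketch of a test-enabled processor, reusing the Data type and config shape from the earlier examples (only isTest is specific to this snippet):

export class TestableProcessor extends AbstractBackgroundJobProcessor<Data> {
    constructor(dependencies: BackgroundJobProcessorDependencies<Data>) {
        super(dependencies, {
            queueId: 'queue',
            ownerName: 'example owner',
            isTest: true, // enables processor.spy; spies stay disabled otherwise
        })
    }
    // ...
}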

Barriers

If you want to conditionally delay execution of a job (e.g. until some data necessary for processing the job arrives, or until the number of jobs in a subsequent step drops below a threshold), you can use the barrier parameter, which delays execution of the job until a specified condition passes.

A barrier looks like this:

const barrier = async (_job: Job<JobData>) => {
  if (barrierConditionIsPassing) {
    return {
      isPassing: true,
    }
  }

  return {
    isPassing: false,
    delayAmountInMs: 30000, // retry in 30 seconds
  }
}

You pass it as part of the AbstractBackgroundJobProcessor config.
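
For example, in the processor sketch from the Common jobs section above, you would add the barrier to the config object passed to super:

super(dependencies, {
  queueId: 'my-queue',
  ownerName: 'example owner',
  barrier, // each job is delayed until the barrier reports isPassing: true
})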

You can also pass some dependencies from the processor to the barrier:

class MyJobProcessor extends AbstractBackgroundJobProcessor<Generics> {
    protected override resolveExecutionContext(): ExecutionContext {
        return {
            userService: this.userService,
        }
    }
}

This will be passed to the barrier:

const barrier = async (job: Job<JobData>, context: ExecutionContext) => {
  if (await context.userService.userExists(job.data.userId)) {
    return {
      isPassing: true,
    }
  }

  return {
    isPassing: false,
    delayAmountInMs: 30000, // retry in 30 seconds
  }
}

Available prebuilt barriers

@lokalise/background-jobs-common provides one barrier out of the box: JobQueueSizeThrottlingBarrier, which is used to control the number of jobs spawned by a job processor (in a different queue).

Here is an example usage:

import { createJobQueueSizeThrottlingBarrier } from '@lokalise/background-jobs-common'

const processor = new MyJobProcessor(dependencies, {
  // ... the rest of the config
  barrier: createJobQueueSizeThrottlingBarrier({
    maxQueueJobsInclusive: 2, // optimistic limit; if exceeded, the job with the barrier is delayed
    retryPeriodInMsecs: 30000, // the job with the barrier is retried in 30 seconds if the throttled queue still has too many jobs
  }),
})
await processor.start()

Note that throttling is based on an optimistic check (checking the count and executing the job that uses the barrier is not an atomic operation), so it is possible to exceed the limit in a highly concurrent system. For this reason it is recommended to set the limit with a buffer for possible overflow.

This barrier depends on defining the following ExecutionContext:

import type { JobQueueSizeThrottlingBarrierContext } from '@lokalise/background-jobs-common'

class MyJobProcessor extends AbstractBackgroundJobProcessor<Generics> {
    protected override resolveExecutionContext(): JobQueueSizeThrottlingBarrierContext {
        return {
            throttledQueueJobProcessor: this.throttledQueueJobProcessor, // AbstractBackgroundJobProcessor
        }
    }
}

Queue events

The library optimizes the default event stream settings to save memory. Specifically, it sets the default maximum length of the BullMQ queue events stream to 0 (see the BullMQ documentation). This means the event stream will not store any events by default, greatly reducing memory usage.

If you need to store more events in the stream, you can configure the maximum length via the queueOptions parameter when creating the processor.

export class Processor extends AbstractBackgroundJobProcessor<Data> {
    constructor(dependencies: BackgroundJobProcessorDependencies<Data>) {
        super(dependencies, {
            queueId: 'queue',
            ownerName: 'example owner',
            queueOptions: {
                streams: { events: { maxLen: 1000 } },
            }
        })
    }
    // ...
}