@dbos-inc/dbos-sqs v1.28.6: DBOS send step / event receiver library for queues in AWS with SQS
DBOS AWS Simple Queue Service (SQS) Library
Message queues are a common building block for distributed systems. Message queues allow processing to occur at a different place or time, perhaps in another programming environment. Due to its flexibility, robustness, integration, and low cost, Amazon Simple Queue Service is the most popular message queuing service underpinning distributed systems in AWS.
This package includes a DBOS communicator for sending messages using SQS, as well as an event receiver for exactly-once processing of incoming messages (even using standard queues).
Getting Started
In order to send and receive messages with SQS, it is necessary to register with AWS, create a queue, and create access keys for the queue. (See Send Messages Between Distributed Applications in AWS documentation.)
Configuring a DBOS Application with AWS SQS
First, ensure that the DBOS SQS package is installed into the application:
```shell
npm install --save @dbos-inc/dbos-sqs
```
Second, place appropriate configuration into the dbos-config.yaml file; the following example pulls the AWS information from the environment:

```yaml
application:
  aws_sqs_configuration: aws_config # Optional if the section is called `aws_config`
  aws_config:
    aws_region: ${AWS_REGION}
    aws_access_key_id: ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
```
To use a different configuration file section for SQS, change the aws_sqs_configuration value to name that section. If multiple configurations are to be used, the application code is responsible for naming them.
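For example, assuming a custom section named sqs_receiver (a hypothetical name chosen for illustration), the configuration might look like:

```yaml
application:
  aws_sqs_configuration: sqs_receiver # point the SQS library at the custom section
  sqs_receiver:
    aws_region: ${AWS_REGION}
    aws_access_key_id: ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
```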
Sending Messages
Imports
First, ensure that the communicator is imported:
```typescript
import { SQSCommunicator } from "@dbos-inc/dbos-sqs";
```
Selecting A Configuration
SQSCommunicator is a configured class. This means that the configuration (or config file key name) must be provided when a class instance is created, for example:

```typescript
import { configureInstance } from "@dbos-inc/dbos-sdk";

const sqsCfg = configureInstance(SQSCommunicator, 'default', { awscfgname: 'aws_config' });
```
Sending With Standard Queues
Within a DBOS Transact workflow, invoke the SQSCommunicator sendMessage function from the workflow context:

```typescript
const sendRes = await ctx.invoke(sqsCfg).sendMessage({
  MessageBody: "{/*app data goes here*/}",
});
```
FIFO Queues
Sending to SQS FIFO queues is the same as with standard queues, except that FIFO queues need a MessageDeduplicationId (or content-based deduplication) and can be sharded by a MessageGroupId.
```typescript
const sendRes = await ctx.invoke(sqsCfg).sendMessage({
  MessageBody: "{/*app data goes here*/}",
  MessageDeduplicationId: "Message key goes here",
  MessageGroupId: "Message grouping key goes here",
});
```
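If content-based deduplication is not enabled on the queue, one common approach is to derive MessageDeduplicationId deterministically from the message body, so that retried sends of the same payload collapse into one message. A minimal sketch using Node's built-in crypto module (the deduplicationIdFor helper is illustrative, not part of this package):

```typescript
import { createHash } from "crypto";

// Derive a deterministic deduplication ID from the message body.
// SQS limits MessageDeduplicationId to 128 characters; a SHA-256
// hex digest (64 characters) fits comfortably.
function deduplicationIdFor(messageBody: string): string {
  return createHash("sha256").update(messageBody, "utf8").digest("hex");
}
```

The result can then be passed as the MessageDeduplicationId field of sendMessage.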
Receiving Messages
The DBOS SQS receiver provides the capability of running DBOS Transact workflows exactly once per SQS message, even on standard "at-least-once" SQS queues.
The package uses decorators to configure message receipt and identify the functions that will be invoked during message dispatch.
Imports
First, ensure that the method decorators are imported:
```typescript
import { SQSMessageConsumer, SQSConfigure } from "@dbos-inc/dbos-sqs";
```
Receiver Configuration
The @SQSConfigure decorator should be applied at the class level to identify the credentials used by receiver functions in the class:
```typescript
interface SQSConfig {
  awscfgname?: string;
  awscfg?: AWSServiceConfig;
  queueUrl?: string;
  getWFKey?: (m: Message) => string; // Calculate workflow OAOO key for each message
  workflowQueueName?: string;
}

@SQSConfigure({ awscfgname: 'sqs_receiver' })
class SQSEventProcessor {
  ...
}
```
Then, within the class, one or more methods should be decorated to handle SQS messages:
```typescript
@SQSConfigure({ awscfgname: 'sqs_receiver' })
class SQSEventProcessor {
  @SQSMessageConsumer({ queueUrl: process.env['SQS_QUEUE_URL'] })
  @Workflow()
  static async recvMessage(ctx: WorkflowContext, msg: Message) {
    // Workflow code goes here...
  }
}
```
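If your messages carry their own idempotency key, a custom getWFKey function can compute the workflow OAOO key from it. A sketch under the assumption that the key travels in a message attribute named orderId (a hypothetical attribute name), using a minimal structural slice of the AWS SDK's Message type so the example stays self-contained:

```typescript
// Minimal structural slice of the AWS SDK's Message type.
interface MessageLike {
  MessageId?: string;
  MessageAttributes?: Record<string, { StringValue?: string }>;
}

// Prefer an application-level idempotency key carried in a message
// attribute; fall back to the SQS MessageId otherwise.
function getWFKey(m: MessageLike): string {
  const appKey = m.MessageAttributes?.["orderId"]?.StringValue;
  return appKey ?? m.MessageId ?? "unknown-message";
}
```

A function like this could then be supplied via the getWFKey field of the SQSConfig passed to @SQSMessageConsumer.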
Concurrency and Rate Limiting
By default, @SQSMessageConsumer workflows are started immediately after message receipt. If workflowQueueName is specified in the SQSConfig at either the method or class level, then the workflow will be enqueued in a workflow queue.
Once-And-Only-Once (OAOO) Semantics
Typical application processing for standard SQS queues implements "at least once" processing of the message:
- Receive the message from the SQS queue
- If necessary, extend the visibility timeout of the message during the course of processing
- After all processing is complete, delete the message from the queue

If there are any failures, the message will remain in the queue and be redelivered to another consumer.
The DBOS receiver proceeds differently:
- Receive the message from the SQS queue
- Start a workflow (using an OAOO key computed from the message)
- Quickly delete the message
This means that, instead of the SQS service redelivering the message in the case of a transient failure, it is up to DBOS to restart any interrupted workflows. Also, since DBOS workflows execute to completion exactly once, it is not necessary to use an SQS FIFO queue for exactly-once processing.
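The effect of the OAOO key can be illustrated without SQS at all: starting a workflow twice with the same key runs it only once. A toy sketch, using an in-memory map as a stand-in for DBOS's durable workflow bookkeeping (which in reality is persisted in Postgres, not a Map):

```typescript
// Toy registry mapping OAOO keys to started workflows.
const started = new Map<string, Promise<string>>();

// Start the workflow only if no workflow with this key has started.
// A redelivered duplicate message computes the same key and reuses
// the already-started workflow instead of running it again.
async function startWorkflowOnce(
  oaooKey: string,
  workflow: () => Promise<string>,
): Promise<string> {
  let handle = started.get(oaooKey);
  if (!handle) {
    handle = workflow();
    started.set(oaooKey, handle);
  }
  return handle;
}
```

This is why a duplicate delivery from a standard (at-least-once) queue cannot cause duplicate processing: both deliveries resolve to the same workflow execution.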
Simple Testing
The sqs.test.ts file included in the source repository demonstrates sending and processing SQS messages. Before running, set the following environment variables:
- SQS_QUEUE_URL: SQS queue URL with access for sending and receiving messages
- AWS_REGION: AWS region to use
- AWS_ACCESS_KEY_ID: The access key with permission to use the SQS service
- AWS_SECRET_ACCESS_KEY: The secret access key corresponding to AWS_ACCESS_KEY_ID
Next Steps
- For a detailed DBOS Transact tutorial, check out our programming quickstart.
- To learn how to deploy your application to DBOS Cloud, visit our cloud quickstart.
- To learn more about DBOS, take a look at our documentation or our source code.