# Serverless Plugin for CBM batches
## Requirements

Tested with:

- Node.js >= v10
- Serverless Framework >= v1.51
## Installation

Install the dependency:

Using npm:

```bash
npm i -D cbm-batchs-serverless-plugin
```

Using yarn:

```bash
yarn add --dev cbm-batchs-serverless-plugin
```
## Use the plugin

Add the plugin to your serverless.yml file:

```yaml
plugins:
  - cbm-batchs-serverless-plugin
```
## Usage

```bash
serverless deploy --stage <yourStage>
```
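Putting it together, a minimal serverless.yml using the plugin might be laid out as follows. This is a sketch only: the service name, provider settings and lambda name are placeholders, and the `cbmBatch` options are detailed in the Example section below.

```yaml
service: my-batch-service            # placeholder service name

provider:
  name: aws
  region: eu-west-1                  # assumed region, adjust to your account

plugins:
  - cbm-batchs-serverless-plugin

custom:
  cbmBatch:
    name: ${self:service}-${opt:stage}
    featureFlippingKey: SLAVE:SODA:DAB:JATO:BATCH_DAB_JATO
    lambda:
      name: ma-lambda-${opt:stage}   # hypothetical lambda name
```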
## Example

This example uses an AWS Batch job; several kinds of batches are supported.

You can specify a custom configuration in your serverless.yml:
```yaml
custom:
  cbmBatch:
    name: <stateMachineName> # The state-machine name you want for your job/task
    featureFlippingKey: <featureFlippingKey> # Required unless it is a reusable sub-component (concurrent runs)
    scheduler: # Optional
      expression: <cronExpression> # e.g. cron(30 5 * * ? *)
      enabled: ${self:custom.${opt:stage}.schedulerEnabled} # Optional
    notifier: # Optional (alternative to failureSnsTopicArn)
      stateMachine: <stateMachineNameOnSuccess>
      stateMachineOnError: <stateMachineNameOnError>
      enabled: ${self:custom.${opt:stage}.notifierEnabled} # Optional
    failureSnsTopicArn: <snsArn> # Optional (alternative to notifier)
    awsBatch: # One of awsBatch, lambda, codebuild, glueJob, glueCrawler, talendJob, workflow
      jobQueueName: <jobQueueName>
      jobDefinitionName: <jobDefinitionName>
  dev:
    schedulerEnabled: false
    notifierEnabled: true
  rec:
    schedulerEnabled: true
    notifierEnabled: true
  prod:
    schedulerEnabled: true
    notifierEnabled: true
```
You can also define a list of multiple batches at once:
```yaml
custom:
  cbmBatch:
    - name: ${self:service}-${opt:stage} # Usually the stack name, used for naming policies and resources
      featureFlippingKey: SLAVE:SODA:DAB:JATO:BATCH_DAB_JATO
      scheduler:
        expression: cron(30 8 ? * MON-FRI *)
      failureSnsTopicArn: arn:aws:sns:eu-west-1:62930723694:batch-failure-test
      awsBatch:
        jobQueueName: dabJato-queue-${opt:stage}
        jobDefinitionName: dabJato-job-definition-${opt:stage}
    - name: batch-2
      featureFlippingKey: SLAVE:SODA:DAB:JATO:BATCH_DAB_JATO2
      scheduler:
        expression: cron(30 8 ? * MON-FRI *)
      awsBatch:
        jobQueueName: dabJato-queue2-${opt:stage}
        jobDefinitionName: dabJato-job-definition2-${opt:stage}
```
Properties for `custom.cbmBatch`:

- `name`: The state-machine name for your job/task, also used for naming policies and resources. Example: `name: ${self:service}-${opt:stage}`
- `featureFlippingKey`: The name of the feature flipping key used to enable or disable the batch. Example: `featureFlippingKey: SLAVE:SODA:DAB:JATO:BATCH_DAB_JATO`
- OPTIONAL `scheduler`:
  - `expression`: The AWS schedule expression used to schedule your batch. It can be either:
    - `rate(Value Unit)`, e.g. `expression: rate(5 minutes)`
    - `cron(Minutes Hours Day-of-month Month Day-of-week Year)`, e.g. `expression: cron(0 18 ? * MON-FRI *)`
  - `enabled`: The batch scheduler can be either enabled or disabled. Default value: `true`
- OPTIONAL `notifier` (see the sketch after this list):
  - `stateMachine`: The state-machine to execute to notify a success
  - `stateMachineOnError`: The state-machine to execute to notify a failure
- OPTIONAL `failureSnsTopicArn`: In case of failure, a notification with the error will be sent to this topic
- One of the supported job types (see below)
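As an illustration of the `notifier` option, a `cbmBatch` entry that notifies through state machines instead of an SNS topic could look like this. The notifier state-machine names and the lambda name are placeholders, not values shipped with the plugin.

```yaml
custom:
  cbmBatch:
    name: ${self:service}-${opt:stage}
    featureFlippingKey: SLAVE:SODA:DAB:JATO:BATCH_DAB_JATO
    scheduler:
      expression: cron(30 5 * * ? *)
      enabled: ${self:custom.${opt:stage}.schedulerEnabled}
    notifier:
      stateMachine: notify-success-${opt:stage}        # placeholder state-machine names
      stateMachineOnError: notify-failure-${opt:stage}
    lambda:
      name: ma-lambda-${opt:stage}                     # placeholder lambda name
```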
## Job types

### AWS Batch

This configuration launches an AWS Batch job. The job queue and job definition must already exist and their names must be provided:

- `awsBatch`:
  - `jobQueueName`: The name of an existing AWS Batch job queue
  - `jobDefinitionName`: The name of an existing AWS Batch job definition
Example:
```yaml
awsBatch:
  jobQueueName: dabJato-queue2-${opt:stage}
  jobDefinitionName: dabJato-job-definition2-${opt:stage}
```
### Lambda

This configuration executes an AWS Lambda with an optional payload as input:

- `lambda`:
  - `name`: The name of the lambda to execute
  - OPTIONAL `payload`: The input of the lambda, passed as the first argument of the handler
    - `key1: value1`
    - `key2: value2`
    - `key3.$: "$.key3"`, where the value is extracted from the state-machine input `{"key3": value3}`
Example:
```yaml
lambda:
  name: ma-lambda-dev
  payload:
    customerReference: C000100
    classifiedReference: E243567
```
The payload is provided to the handler as the first argument:

```js
exports.handler = async function (event, context) {
  // The event contains the payload: {customerReference: "C000100", classifiedReference: "E243567"}
  return context.logStreamName
}
```
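Note that payload keys ending in `.$` are not passed literally: their values are resolved from the state-machine execution input via a JSONPath expression. A small sketch, reusing the keys above (the `.$` key and the execution input shown in the comment are illustrative):

```yaml
lambda:
  name: ma-lambda-dev
  payload:
    customerReference: C000100                      # passed to the handler as-is
    classifiedReference.$: "$.classifiedReference"  # resolved from the execution input,
                                                    # e.g. {"classifiedReference": "E243567"}
```

With that execution input, the handler above would receive the same event as in the literal example.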
### CodeBuild

This configuration executes an AWS CodeBuild project with a sourceVersion and optional parameters (environment variables):

- `codebuild`:
  - `name`: The name of the CodeBuild project to execute
  - `sourceVersion`: The source version (git branch or git commit) to use
  - OPTIONAL `parameters`: The environment variables to be overridden
    - `KEY1: value1`
    - `KEY2: value2`
    - `KEY3.$: "$.KEY3"`, where the value is extracted from the state-machine input `{"KEY3": value3}`
Example:
```yaml
codebuild:
  name: ma-codebuild-dev
  sourceVersion: dev
  parameters:
    SKIP_BUILD_IMAGE: "true"
```
### GlueJob

This configuration executes a Glue Job with optional arguments and options as input:

- `glueJob`:
  - `jobName`: The name of the Glue Job to execute
  - OPTIONAL `jobArgs`: The job arguments, without the need to prefix them with "--"
    - `KEY1: value1`
    - `KEY2: value2`
    - `KEY3.$: "$.KEY3"`, where the value is extracted from the state-machine input `{"KEY3": value3}`
  - OPTIONAL `jobOpts`: The job options (such as `AllocatedCapacity`)
    - `Key1: value1`
Example:
```yaml
glueJob:
  jobName: mon-job-glue-dev
  jobArgs:
    START_DATE: "DAY-1"
    END_DATE: "DAY-1"
  jobOpts:
    AllocatedCapacity: 10
```
### GlueCrawler

This configuration executes a Glue Crawler:

- `glueCrawler`:
  - `crawlerName`: The name of the Glue Crawler to execute
Example:
```yaml
glueCrawler:
  crawlerName: mon-crawler-dev
```
### TalendJob

At the moment, all jobs are executed on the Data Team's TAC: http://talend-administration-center.prod.carboat.cloud

This configuration executes a Talend Job on the Data Team's TAC with an optional contextParam as input:

- `talendJob`:
  - `jobName`: The name of the Talend Job to execute
  - OPTIONAL `jobContextParam`: The job arguments (contextParam)
    - `KEY1: value1`
    - `KEY2: value2`
    - `KEY3.$: "$.KEY3"`, where the value is extracted from the state-machine input `{"KEY3": value3}`
Example:
```yaml
talendJob:
  jobName: mon-job-talend-dev
  jobContextParam:
    date_debut: "DAY-1"
    date_fin: "TODAY"
```
### Workflow

This configuration executes a Workflow composed of parallel tasks (state machines), which can be sequenced via `dependsOn`.

- `workflow`:
  - `composedOf`: A list of jobs/tasks (linked to state-machine names)
    - `name`: The name of the job/task that will be shown in the state-machine DAG
    - OPTIONAL `stateMachine`: The state-machine name (otherwise the name above is reused)
    - OPTIONAL `input`: The optional input to pass to the state-machine
      - `KEY1: value1`
      - `KEY2: value2`
      - `KEY3.$: "$.KEY3"`, where the value is extracted from the state-machine input `{"KEY3": value3}`
    - OPTIONAL `dependsOn`: The optional dependency used to sequence the jobs/tasks
Example:
```yaml
workflow:
  composedOf:
    - name: first-job
      stateMachine: mon-premier-job-dev
    - name: second-job
      stateMachine: mon-second-job-dev
      dependsOn:
        - first-job
    - name: third-job
      stateMachine: mon-troisieme-job-dev
      dependsOn:
        - first-job
    - name: independent-fourth-job
      stateMachine: mon-quatrieme-job-dev
      input:
        START_DATE: "DAY-1"
        END_DATE: "DAY-1"
```
## How it works

The plugin creates an AWS Step Functions state machine with predefined steps:

- Initialization: a predefined lambda that checks whether the batch is enabled, using the provided `featureFlippingKey`
- Job/Task: the configured job/task to be launched
- On success:
  - Notification OK: an optional step to notify success (if `notifier.stateMachine` is defined)
  - Succeeded: the final state; if `featureFlippingKey` is provided, a predefined lambda is also called to report that the batch succeeded
- On failure:
  - Notification KO: an optional step to notify failure (if `notifier.stateMachineOnError` or `failureSnsTopicArn` is defined)
  - Failed: the final state; if `featureFlippingKey` is provided, a predefined lambda is also called to report that the batch failed
```
      +------------+
      |   Init.    |   <-- start hook if featureFlippingKey
      +------------+
            |
            v
      +------------+
      |  Job/Task  |   <-- one of awsBatch, lambda, codebuild,
      +------------+       glueJob, glueCrawler, talendJob, workflow
            |
      +-----+----------+
      |                |
      v                v
+------------+   +------------+
| Notif. OK  |   | Notif. KO  |   <-- only if notifier (OK/KO)
+------------+   +------------+       or if failureSnsTopicArn (KO)
      |                |
      v                v
+------------+   +------------+
| Succeeded  |   |   Failed   |   <-- final state
+------------+   +------------+       (+ end hook if featureFlippingKey)
```
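For illustration only, the generated state machine conceptually resembles the following Amazon States Language outline, rendered here as YAML for readability. The state names, the initialization lambda ARN and the service integrations are assumptions for an `awsBatch` job; the optional notification states appear only when `notifier` or `failureSnsTopicArn` is configured, and the real definition is produced by the plugin.

```yaml
# Conceptual outline only -- the actual definition is generated by the plugin.
StartAt: Initialization
States:
  Initialization:                 # checks the featureFlippingKey via a predefined lambda
    Type: Task
    Resource: arn:aws:lambda:eu-west-1:123456789012:function:batch-init   # hypothetical ARN
    Next: JobTask
  JobTask:                        # the configured job/task (an awsBatch job in this sketch)
    Type: Task
    Resource: arn:aws:states:::batch:submitJob.sync
    Parameters:
      JobName: <name>
      JobQueue: <jobQueueName>
      JobDefinition: <jobDefinitionName>
    Catch:
      - ErrorEquals: ["States.ALL"]
        Next: NotificationKO
    Next: NotificationOK
  NotificationOK:                 # present only if notifier.stateMachine is configured
    Type: Task
    Resource: arn:aws:states:::states:startExecution
    Next: Succeeded
  NotificationKO:                 # present only if notifier.stateMachineOnError or failureSnsTopicArn is configured
    Type: Task
    Resource: arn:aws:states:::sns:publish
    Next: Failed
  Succeeded:
    Type: Succeed
  Failed:
    Type: Fail
```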
## Contributing

Please have a look at the contributing guidelines.