@lcylwik/serverless-glue
v2.0.1
Serverless plugin to deploy Glue Jobs
# Serverless Glue

Serverless-glue is an open-source, MIT-licensed project that has been able to grow thanks to the community. It began as an idea that refused to fade into oblivion, plus many after-hours of work.
## Install

1. Run:

```bash
npm install --save-dev @lcylwik/serverless-glue
```

2. Add `@lcylwik/serverless-glue` to the `plugins` section of `serverless.yml`:

```yaml
plugins:
  - "@lcylwik/serverless-glue"
```
## How it works

The plugin generates CloudFormation resources from your configuration before running the serverless deploy, then adds them to the serverless template. Any Glue job deployed with this plugin is therefore part of your stack as well.
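Putting the pieces together, a minimal, hypothetical `serverless.yml` using the plugin could look like the sketch below. The service, bucket, role, and script names are placeholders, not values the plugin requires:

```yaml
service: my-glue-service # hypothetical service name

provider:
  name: aws
  region: us-east-1

plugins:
  - "@lcylwik/serverless-glue"

Glue:
  bucketDeploy: my-deploy-bucket # placeholder bucket name
  jobs:
    - name: my-first-job
      scriptPath: src/script.py
      type: pythonshell
      glueVersion: python3-1.0
      role: arn:aws:iam::123456789012:role/my-glue-role # placeholder role ARN
```

The full set of supported options is described in the configuration sections below.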
## How to configure your GlueJob(s)

Configure your Glue jobs in the root of `serverless.yml` like this:

```yaml
Glue:
  bucketDeploy: someBucket # Required
  createBucket: true # Optional, default = false
  createBucketConfig: # Optional
    ACL: private # Optional, private | public-read | public-read-write | authenticated-read
    LocationConstraint: af-south-1
    GrantFullControl: 'STRING_VALUE' # Optional
    GrantRead: 'STRING_VALUE' # Optional
    GrantReadACP: 'STRING_VALUE' # Optional
    GrantWrite: 'STRING_VALUE' # Optional
    GrantWriteACP: 'STRING_VALUE' # Optional
    ObjectLockEnabledForBucket: true # Optional
    ObjectOwnership: BucketOwnerPreferred # Optional
  s3Prefix: some/s3/key/location/ # Optional, default = 'glueJobs/'
  tempDirBucket: someBucket # Optional, default = '{serverless.serviceName}-{provider.stage}-gluejobstemp'
  tempDirS3Prefix: some/s3/key/location/ # Optional, default = ''. The job name will be appended to the prefix
  jobs:
    - name: super-glue-job # Required
      scriptPath: src/script.py # Required. The script is named with the part after the last '/' and uploaded to the s3Prefix location
      Description: # Optional, string
      tempDir: true # Optional, true | false
      type: spark # Required, spark | pythonshell
      glueVersion: python3-2.0 # Required, python3-1.0 | python3-2.0 | python2-1.0 | python2-0.9 | scala2-1.0 | scala2-0.9 | scala2-2.0
      role: arn:aws:iam::000000000:role/someRole # Required
      MaxConcurrentRuns: 3 # Optional
      WorkerType: Standard # Optional, Standard | G.1X | G.2X
      NumberOfWorkers: 1 # Optional
      Connections: # Optional
        - some-connection-string
        - other-connection-string
      Timeout: # Optional, number
      MaxRetries: # Optional, number
      DefaultArguments: # Optional
        class: string # Optional
        scriptLocation: string # Optional
        extraPyFiles: string # Optional
        extraJars: string # Optional
        userJarsFirst: string # Optional
        usePostgresDriver: string # Optional
        extraFiles: string # Optional
        disableProxy: string # Optional
        jobBookmarkOption: string # Optional
        enableAutoScaling: string # Optional
        enableS3ParquetOptimizedCommitter: string # Optional
        enableRenameAlgorithmV2: string # Optional
        enableGlueDatacatalog: string # Optional
        enableMetrics: string # Optional
        enableContinuousCloudwatchLog: string # Optional
        enableContinuousLogFilter: string # Optional
        continuousLogLogGroup: string # Optional
        continuousLogLogStreamPrefix: string # Optional
        continuousLogConversionPattern: string # Optional
        enableSparkUi: string # Optional
        sparkEventLogsPath: string # Optional
        customArguments: # Optional; user-specified custom default arguments, passed into CloudFormation with a leading -- (required by Glue)
          custom_arg_1: custom_value
          custom_arg_2: other_custom_value
      SupportFiles: # Optional
        - local_path: path/to/file/or/folder/ # Required if SupportFiles is given; you can pass a folder path or a file path
          s3_bucket: bucket-name-where-to-upload-files # Required if SupportFiles is given
          s3_prefix: some/s3/key/location/ # Required if SupportFiles is given
          execute_upload: True # Required if SupportFiles is given; True to execute the upload, False to skip it
      Tags:
        job_tag_example_1: example1
        job_tag_example_2: example2
  triggers:
    - name: some-trigger-name # Required
      Description: # Optional, string
      StartOnCreation: True # Optional, True | False
      schedule: 30 12 * * ? * # Optional, CRON expression. The trigger is created with the On-Demand type if no schedule is provided
      Tags:
        trigger_tag_example_1: example1
      actions: # Required. One or more jobs to trigger
        - name: super-glue-job # Required
          args: # Optional
            custom_arg_1: custom_value
            custom_arg_2: other_custom_value
          timeout: 30 # Optional; if set, it overrides the job's own timeout when the job starts via this trigger
```
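For illustration, here is a sketch (not part of the plugin) of how a `pythonshell` job script could read the arguments configured above: Glue passes each default argument on the command line with a leading `--`, so `sys.argv` can be parsed directly. The helper name `parse_glue_args` is hypothetical.

```python
import sys

def parse_glue_args(argv):
    """Collect --key value pairs from a Glue job's command line into a dict.

    Glue passes DefaultArguments/customArguments to the script as, e.g.:
      --custom_arg_1 custom_value --custom_arg_2 other_custom_value
    """
    args = {}
    i = 0
    while i < len(argv):
        if argv[i].startswith("--") and i + 1 < len(argv):
            # Strip the leading "--" and pair the key with the next token.
            args[argv[i][2:]] = argv[i + 1]
            i += 2
        else:
            i += 1
    return args

# In a real job script: job_args = parse_glue_args(sys.argv[1:])
```

For Spark jobs, AWS also ships the `awsglue.utils.getResolvedOptions` helper, which serves the same purpose.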
You can define multiple jobs:

```yaml
Glue:
  bucketDeploy: someBucket
  jobs:
    - name: jobA
      scriptPath: scriptA
      ...
    - name: jobB
      scriptPath: scriptB
      ...
```
And multiple triggers:

```yaml
Glue:
  triggers:
    - name:
      ...
    - name:
      ...
```
## Glue configuration parameters

|Parameter|Type|Description|Required|
|-|-|-|-|
|bucketDeploy|String|S3 bucket name|true|
|createBucket|Boolean|If true, a bucket named as `bucketDeploy` is created before deployment. Helpful if you have not created the bucket yet|false|
|createBucketConfig|createBucketConfig|Bucket configuration for creation on S3|false|
|s3Prefix|String|S3 prefix name|false|
|tempDirBucket|String|S3 bucket name for the Glue temporary directory. If omitted, the bucket name is generated with the pattern `{serverless.serviceName}-{provider.stage}-gluejobstemp`|false|
|tempDirS3Prefix|String|S3 prefix name for the Glue temporary directory|false|
|jobs|Array|Array of Glue jobs to deploy|true|
## CreateBucket configuration parameters

|Parameter|Type|Description|Required|
|-|-|-|-|
|ACL|String|The canned ACL to apply to the bucket. Possible values: `private`, `public-read`, `public-read-write`, `authenticated-read`|false|
|LocationConstraint|String|Specifies the Region where the bucket will be created. If you don't specify a Region, the bucket is created in the US East (N. Virginia) Region (`us-east-1`). Possible values: `af-south-1`, `ap-east-1`, `ap-northeast-1`, `ap-northeast-2`, `ap-northeast-3`, `ap-south-1`, `ap-southeast-1`, `ap-southeast-2`, `ca-central-1`, `cn-north-1`, `cn-northwest-1`, `EU`, `eu-central-1`, `eu-north-1`, `eu-south-1`, `eu-west-1`, `eu-west-2`, `eu-west-3`, `me-south-1`, `sa-east-1`, `us-east-2`, `us-gov-east-1`, `us-gov-west-1`, `us-west-1`, `us-west-2`|false|
|GrantFullControl|String|Allows grantee the read, write, read ACP, and write ACP permissions on the bucket|false|
|GrantRead|String|Allows grantee to list the objects in the bucket|false|
|GrantReadACP|String|Allows grantee to read the bucket ACL|false|
|GrantWrite|String|Allows grantee to create new objects in the bucket. For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects|false|
|GrantWriteACP|String|Allows grantee to write the ACL for the applicable bucket|false|
|ObjectLockEnabledForBucket|Boolean|Specifies whether you want S3 Object Lock to be enabled for the new bucket|false|
|ObjectOwnership|String|The container element for a bucket's ownership controls. Possible values: `BucketOwnerPreferred`, `ObjectWriter`, `BucketOwnerEnforced`|false|
## Jobs configuration parameters

|Parameter|Type|Description|Required|
|-|-|-|-|
|name|String|Name of the job|true|
|Description|String|Description of the job|false|
|scriptPath|String|Script path in the project|true|
|tempDir|Boolean|Indicates whether the job requires a temp folder; if true, the plugin creates a bucket for temp files|false|
|type|String|The type of your job. Accepted values: `spark`, `pythonshell`|true|
|glueVersion|String|Indicates the language and Glue version to use (`[language][version]-[glue version]`). Accepted values: `python3-1.0`, `python3-2.0`, `python2-1.0`, `python2-0.9`, `scala2-1.0`, `scala2-0.9`, `scala2-2.0`|true|
|role|String|ARN of the role used to execute the job|true|
|MaxConcurrentRuns|Double|Maximum concurrent runs of the job|false|
|WorkerType|String|The type of predefined worker that is allocated when a job runs. Accepts `Standard`, `G.1X`, or `G.2X`|false|
|NumberOfWorkers|Integer|Number of workers|false|
|Connections|List|A list of database connections used by the job|false|
|DefaultArguments|object|Special parameters used by AWS Glue; for more information, read the AWS documentation|false|
|SupportFiles|List|List of supporting files for the Glue job that need to be uploaded to S3|false|
|Tags|JSON|The tags to use with this job. You may use tags to limit access to the job. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide|false|
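As an illustration of `DefaultArguments`, the sketch below sets a few common options; the values are examples only, and the keys are assumed to map to Glue's special job parameters (such as `--job-bookmark-option` and `--enable-metrics`):

```yaml
DefaultArguments:
  jobBookmarkOption: job-bookmark-enable
  enableMetrics: 'true'
  extraPyFiles: s3://someBucket/libs/helpers.py # hypothetical S3 path
```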
## Triggers configuration parameters

|Parameter|Type|Description|Required|
|-|-|-|-|
|name|String|Name of the trigger|true|
|schedule|String|CRON expression|false|
|actions|Array|An array of jobs to trigger|true|
|Description|String|Description of the trigger|false|
|StartOnCreation|Boolean|Whether the trigger starts when created. Not supported for On-Demand triggers|false|

Only On-Demand and Scheduled triggers are supported.
## Trigger job configuration parameters

|Parameter|Type|Description|Required|
|-|-|-|-|
|name|String|The name of the Glue job to trigger|true|
|timeout|Integer|Job execution timeout; overrides the job's own timeout when the job is started via the trigger|false|
|args|Map|Job arguments|false|
|Tags|JSON|The tags to use with this trigger. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide|false|