V2 is live. See the upgrade notes for information about the breaking changes.
24G Cloud SDK
NPM package to help developers interact with the 24G Cloud Platform.
Development
How to generate types?
Run npm run tsc to output any type updates. These types are generated from JSDoc doc blocks (/** */).
Usage
Logs
Logs emitted while running on the 24G Cloud Platform can be viewed in Datadog if they are formatted properly. Use the logger() function when your application starts to configure the console.* functions with the proper syntax required by Datadog.
sdk.logger()
Configures console.* functions to work with Datadog.
console.error();
console.warn();
console.info(); // aliased to console.log();
console.http();
console.verbose();
console.debug();
console.silly();
The default log level will log info, warn, and error messages.
const sdk = require('@twentyfourg/cloud-sdk');
sdk.logger();
console.log('OK');
console.error(new Error('error'));
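The active level can presumably be changed with the LOG_LEVEL environment variable documented under Environment Variables below; a minimal sketch, assuming sdk.logger() reads that variable:
// Run with e.g. LOG_LEVEL=debug node app.js
const sdk = require('@twentyfourg/cloud-sdk');
sdk.logger();
console.debug('only emitted when LOG_LEVEL is debug or more verbose');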
The logger can be set up to use a label in the message.
const sdk = require('@twentyfourg/cloud-sdk');
const logger = sdk.logger('FEATURE-NAME');
logger.info('Processing Complete');
// 23-04-04 13:07:04 [FEATURE-NAME] info: Processing Complete
// {"kind":"application","label":"FEATURE-NAME","level":"info","message":"Processing Complete","timestamp":"23-04-04 13:06:41"}
The logger can be used within the request object through req.log.* and a label can be set with req.logger('FEATURE-NAME').
const express = require('express');
const { middleware } = require('@twentyfourg/cloud-sdk').express;
const app = express();
app.use(middleware());
app.get('/', (req, res) => {
req.logger('FEATURE-NAME');
req.log.info('Processing Data');
res.send();
});
Secrets
Application credentials should be stored outside the code either in Vault or AWS Secret Manager. The Cloud SDK can be used to pull secrets and set them as environment variables.
sdk.secrets([key], [options])
Retrieve secrets from Vault or Secret Manager while also setting them as environment variables.
// Single secret
const secrets = await sdk.secrets('kv/test');
console.log(process.env.FOO);
// Multiple secrets
const secrets = await sdk.secrets('kv/test,kv/test2,kv/mysql');
console.log(process.env.FOO);
key: The key of the secret you are trying to retrieve. If omitted, the SECRET_PATH environment variable is used.
| Options | Description | Default |
| ------- | ----------- | ------- |
| type | The type of secret manager to look in. Valid options are vault or aws. This option can also be set with the SECRET_TYPE environment variable. | vault |
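The backend can also be selected per call; a minimal sketch, assuming the secret path shown is a placeholder:
// Explicitly read from AWS Secret Manager instead of the default Vault backend
await sdk.secrets('my-project/credentials', { type: 'aws' });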
Storage
Applications running on the 24G Cloud Platform will sometimes need access to object storage for storing assets. The sdk.storage namespace of the SDK will help you interact with object storage for your project.
sdk.storage.put(body, key, [options])
Put object in storage.
await sdk.storage.put('hello', 'test.txt', { bucket: 'mybucket' });
body: String or buffer to write to storage.
key: The desired name for the object.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket to write the object to. | ASSET_BUCKET environment variable |
sdk.storage.signed.put(key, [options])
Creates a signed URL for anonymous uploads.
const url = await sdk.storage.signed.put('test.txt', { bucket: 'mybucket' });
key: The key of the object you are trying to sign for.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket the object should be put into. | ASSET_BUCKET environment variable |
| ttl | Number of seconds before the signed URL expires. | 300 |
| useAccelerateEndpoint | Whether or not to use the S3 Accelerate endpoint on uploads | true |
sdk.storage.signed.post(key, [options])
Creates a signed URL and form fields for anonymous uploads via HTTP POST.
const { url, fields } = await sdk.storage.signed.post('assets/image.jpg', {
conditions: {
size: { max: 0.5 * 1024 * 1024 },
starts: [{ key: '$Content-Type', value: 'image/' }],
},
});
key: The key of the object you are trying to sign for.
conditions.equal and conditions.starts values must be in the form of [{ key: '$keyName', value: keyValue }].
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket the object should be put into. | ASSET_BUCKET environment variable |
| ttl | Number of seconds before the signed URL expires. | 300 |
| useAccelerateEndpoint | Whether or not to use the S3 Accelerate endpoint on uploads | true |
| conditions | The conditions used to validate the request | |
| conditions.size | Validate the size of the file in bytes | { min: 1024, max: 10 * 1024 * 1024 } |
| conditions.equal | Form field must match the specified value | [{ key: '$key', value: key }] |
| conditions.starts | Form field must start with the specified value | |
| conditions.fields | Fields returned that can be used within a form | { 'Content-Type': contentType } |
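The returned url and fields can be submitted as a multipart form from the client; a minimal sketch, assuming the browser fetch and FormData APIs and the standard S3 convention of naming the file field file:
// Hypothetical helper that uploads a file using the values returned by sdk.storage.signed.post
async function uploadWithSignedPost(url, fields, file) {
  const form = new FormData();
  // Append the policy fields before the file itself
  Object.entries(fields).forEach(([name, value]) => form.append(name, value));
  form.append('file', file);
  const response = await fetch(url, { method: 'POST', body: form });
  if (!response.ok) throw new Error(`Upload failed with status ${response.status}`);
}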
sdk.storage.get(key, [options])
Retrieve an object from storage.
const { Body } = await sdk.storage.get('test.txt', { bucket: 'mybucket' });
const contents = Body.toString();
/**
{
LastModified: 2021-07-09T21:49:14.000Z,
ETag: '"098f6bcd4621d373cade4e832627b4f6"',
ContentType: 'application/octet-stream',
ContentLength: 10422,
Metadata: {},
Body: <Buffer 74 65 73 74>
}
**/
key: The object to get.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket to get the key from. | ASSET_BUCKET environment variable |
| contentEncoding | Encoding expected when parsing byte data. Use raw for the raw byte Buffer | utf-8 |
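For binary objects, the raw encoding can be requested; a minimal sketch based on the contentEncoding option above (the key is a placeholder):
// Skip utf-8 decoding and work with the raw byte Buffer
const { Body } = await sdk.storage.get('images/profile.png', { contentEncoding: 'raw' });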
sdk.storage.signed.get(key, [options])
Creates a signed URL for retrieval of private objects from storage.
// Load the signing credentials (e.g. CF_ACCESS_KEY_ID, CF_PRIVATE_KEY) before signing
await sdk.secrets('/kv/test');
const url = await sdk.storage.signed.get('images/profile.png');
key: The key of the object you are trying to sign for.
| Options | Description | Default |
| ------- | ----------- | ------- |
| type | cf or s3. | cf |
| ttl | TTL of the signed URL in seconds. | 300 |
| bucket | The name of the bucket the object exists in. | ASSET_BUCKET environment variable |
| domain | The domain to use for the signed URL. | ASSET_DOMAIN environment variable |
| keyPairId | CloudFront signer key ID. (Only required for type cf) | CF_ACCESS_KEY_ID environment variable |
| privateKey | CloudFront signer private key. (Only required for type cf) | CF_PRIVATE_KEY environment variable |
| ipAddress | IP address or range of IP addresses of the users who can access your content. (Only required for type cf) | Any IP |
sdk.storage.signed.cookie(res, [options])
Creates a signed cookie for retrieval of private objects from storage.
sdk.storage.signed.cookie(res);
| Options | Description | Default |
| ------- | ----------- | ------- |
| paths | Array of S3 paths the cookie will be valid for. | ['*'] |
| domain | The apex domain for the asset URL. | apex domain of the ASSET_DOMAIN environment variable |
| ttl | TTL in seconds from now when the signed cookie should expire. | 604800 (1 week) |
| secure | Marks the cookie to be used with HTTPS only. | true |
| sameSite | Value of the "SameSite" Set-Cookie attribute. | lax |
| keyPairId | CloudFront signer key ID. | CF_ACCESS_KEY_ID environment variable |
| privateKey | CloudFront signer private key. | CF_PRIVATE_KEY environment variable |
| ipAddress | IP address or range of IP addresses the signed cookies are limited to | undefined (All IPs) |
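In an Express application this is typically called from a route before the client requests private assets; a minimal sketch in which the route path, ttl, and paths values are placeholders:
// Grant the caller access to private assets via signed cookies
app.get('/assets/session', (req, res) => {
  sdk.storage.signed.cookie(res, { ttl: 3600, paths: ['assets/*'] });
  res.send();
});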
sdk.storage.list([options])
List objects in storage.
let response = await sdk.storage.list();
/**
{
IsTruncated: false,
Contents: [
{
Key: 'reputation-management-uploads/',
LastModified: 2021-03-23T15:06:52.000Z,
ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
Size: 0,
StorageClass: 'STANDARD'
},
],
next: [Function (anonymous)]
}
**/
If the list of objects is larger than 1000 (or maxKeys), the results will be paginated. The IsTruncated property of the response will be true and you can use the next() method of the response to retrieve the next page.
let allObjects;
let response;
response = await sdk.storage.list({
bucket: 'cloud-sdk-test-infrastructure-assets',
maxKeys: 1,
});
allObjects = response.Contents;
while (response.IsTruncated) {
response = await response.next();
allObjects = [...allObjects, ...response.Contents];
}
console.log('OBJECTS', allObjects);
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket the objects exist in. | ASSET_BUCKET environment variable |
| maxKeys | Maximum number of objects to return in one page before paginating. | 1000 |
| continuationToken | Resume listing from a specific spot | undefined |
| prefix | Limits the response to keys that begin with the specified prefix | / |
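To limit the listing to a folder, pass a prefix; a minimal sketch (the prefix shown is a placeholder):
// List only objects under uploads/
const uploads = await sdk.storage.list({ prefix: 'uploads/' });
console.log(uploads.Contents.map((object) => object.Key));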
sdk.storage.exists(key, [options])
Checks if an object with a given key exists in storage. Returns true or false depending on whether the object exists.
let exists = await sdk.storage.exists('uploads/345a3893fj.jpg');
if (exists) {
  // do something
}
key: The key of the object to check for.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket the object exists in. | ASSET_BUCKET environment variable |
sdk.storage.delete(key(s), [options])
Delete object(s) or folder(s) from storage.
// Delete object(s)
await sdk.storage.delete('folder/key');
await sdk.storage.delete(['folder/key1', 'folder/key2']);
// Delete folder(s)
await sdk.storage.delete('folder/');
await sdk.storage.delete('/'); // Deletes everything from the bucket
await sdk.storage.delete.object(['folder1/', 'folder2/']);
key(s): Single object/folder or a list of objects/folders to delete.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | Storage bucket to delete from. | ASSET_BUCKET environment variable |
sdk.storage.delete.cookie(res, [options])
Deletes signed cookies.
sdk.storage.delete.cookie(res);
| Options | Description | Default |
| ------- | ----------- | ------- |
| domain | The apex domain for the asset URL. | ASSET_DOMAIN environment variable |
| secure | Marks the cookie to be used with HTTPS only. | true |
| sameSite | Value of the "SameSite" Set-Cookie attribute. | lax |
sdk.storage.tags.get(key, [options])
Retrieves the tag set for a given private object from storage.
const { TagSet } = await sdk.storage.tags.get('images/profile.png');
console.log(TagSet);
/*
[
{ Key: 'CustomStatus', Value: 'STS' },
{ Key: 'availability', Value: 'temporal' }
]
*/
key: The key of the object whose tags you want to retrieve.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket the object exists in. | ASSET_BUCKET environment variable |
sdk.storage.tags.put(key, [options])
Adds or updates the tag set for a given private object from storage.
const key = 'images/profile.png';
const { TagSet } = await sdk.storage.tags.get(key);
await sdk.storage.tags.put(key, { tagSet: [...TagSet, { Key: 'enable', Value: 'yeap' }] });
key: The key of the object to tag.
| Options | Description | Default |
| ------- | ----------- | ------- |
| bucket | The name of the bucket to write the object to. | ASSET_BUCKET environment variable |
| tagSet | The tag set to be added or updated. | [] (empty array) |
Express
sdk.express.start(app, [options])
Start an HTTP server given an ExpressJS application.
sdk.express.start(app);
app: ExpressJS application
| Options | Description | Default |
| ------- | ----------- | ------- |
| port | Port API is exposed on. | 3000 |
| healthCheckEnabled | Whether to create a health check route. | true |
| readyCheckEnabled | Whether to create a ready check route. | true |
| gracefulShutdownEnabled | Whether to handle the graceful shutdown of the server. | true |
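A typical entry point configures logging, mounts the middleware, and starts the server; a minimal sketch in which the route and port are placeholders:
const express = require('express');
const sdk = require('@twentyfourg/cloud-sdk');
const { middleware } = sdk.express;
sdk.logger();
const app = express();
app.use(middleware());
app.get('/', (req, res) => res.send({ status: 'ok' }));
app.use(middleware.errors);
sdk.express.start(app, { port: 3000 });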
sdk.express.middleware()
The top-level middleware function is a wrapper around smaller middlewares, several of which are enabled by default. In other words, these two things are equivalent:
const express = require('express');
const { middleware } = require('@twentyfourg/cloud-sdk').express;
const app = express();
// This...
app.use(middleware());
// ...is equivalent to this:
app.use(middleware.respond());
app.use(middleware.nocache());
app.use(middleware.helmet());
app.use(middleware.cors());
app.use(middleware.accessLogs());
sdk.express.middleware.respond()
Middleware that extends the functionality of the native res.send() function.
- Supports String, Object, and Error
- Validates input meets response standards
- Determines status code based off input
res.send({ users: [{ name: 'david' }, { name: 'brian' }] });
// HTTP/1.1 200 OK
{
"users": [
{
"name": "david"
},
{
"name": "brian"
}
]
}
res.send();
// HTTP/1.1 204 No Content
res.send(new Error('this is a custom error'));
// HTTP/1.1 500 Internal Server Error
{
"error": "this is a custom error",
"stack": "Error: this is a custom error\n at /cloud-sdk/test.js:14:12"
"logs": "https://app.datadoghq.com/logs?query=${errorCode}"
}
res.status(404).send(new Error('this is a custom status code'));
// HTTP/1.1 404 Not Found
{
"error": "this is a custom status code",
"stack": "Error: this is a custom status code\n at /cloud-sdk/test.js:14:12"
"logs": "https://app.datadoghq.com/logs?query=${errorCode}"
}
res.send(Error)
When running in prod or qa, res.send(Error) will always result in the error message being "error". This is to prevent the API from leaking production data. All other environments will use error.message.
| Environment | Response |
| ----------- | -------- |
| prod, qa | { error, logs } |
| other | { error, stack, logs } |
sdk.express.middleware.errors
Middleware that catches errors and responds using res.send().
const express = require('express');
const { middleware } = require('@twentyfourg/cloud-sdk').express;
const app = express();
middleware.asyncErrors();
app.use(middleware());
// ...api routes
app.use(middleware.errors);
module.exports = app;
sdk.express.middleware.asyncErrors
Middleware that enables express-async-errors.
const express = require('express');
const { middleware } = require('@twentyfourg/cloud-sdk').express;
const app = express();
middleware.asyncErrors();
app.use(middleware());
// ...api routes
app.use(middleware.errors);
module.exports = app;
sdk.express.middleware.nocache()
Middleware that enables nocache.
sdk.express.middleware.helmet(options)
Middleware that enables helmet.
sdk.express.middleware.cors(options)
Middleware that enables cors.
sdk.express.middleware.accessLogs(options)
Middleware that enables Apache styled access logs.
| Options | Description | Default |
| ------- | ----------- | ------- |
| enable | Enable / disable access logs | true |
| ignoreRoutes | Routes to ignore | ['/_healthz', '/_readyz', '/favicon.ico'] |
sdk.cache([options])
The Caching namespace provides caching capabilities to your applications backed by Redis or DynamoDB. Both the redis and dynamo caching adapters expose the same functionality and behave the same way, allowing you to easily change your cache storage without having to change any of your code.
The default cache storage is dynamo. You can switch between the caching storage adapters either by using the type option or by using the redis and dynamo sub-namespaces.
NOTE: The DynamoDB item size limit is 400KB. The SDK will compress the data using zlib if the size exceeds this limit.
Initialize the caching client before you use it.
// Uses Redis: Default storage backend is Redis
const cache = sdk.cache();
// Uses Redis: Redis subnamespace used
const cache = sdk.cache.redis();
// Uses Dynamo: Dynamo subnamespace used
const cache = sdk.cache.dynamo();
// Uses Dynamo: Type option was specified
const cache = sdk.cache({ type: 'dynamo' });
| Options | Description | Default | Adapter |
| ------- | ----------- | ------- | ------- |
| prefix | A string that will prefix all cache keys. | ENV-JOB_NUMBER | Both |
| ttl | How long the keys should exist in cache | null (Keys never expire) | Both |
| endpoint | The endpoint of the caching instance. | REDIS_ENDPOINT or DYNAMO_ENDPOINT environment variable. | Both |
| tls | Whether to use TLS when creating a connection to the endpoint. | false for local development and true for remote environments. | redis |
| maxRetry | The maximum number of times the client will attempt to connect to a Redis instance. | 10 | redis |
| port | Port the client should use to try and connect to a Redis instance. | 6379 | redis |
| timeOut | The amount of time (in milliseconds) the client should spend trying to connect to a Redis instance. If this time is reached, the client stops trying to connect even if there are retry attempts left. | 5 minutes | redis |
| tableName | The DynamoDB table to use. | DYNAMO_CACHE_TABLE environment variable | dynamo |
sdk.cache.set(key, data, [options])
Puts data in a cache. Returns the key used to access the data.
const cache = sdk.cache();
const key = await cache.set('rawr', { foo: 'bar' });
key: A string key
data: Data you wish to store at the key.
| Options | Description | Default |
| ------- | ----------- | ------- |
| ttl | TTL in seconds that the cache should be accessible. After this TTL, the cache expires. | null (Never expires) |
sdk.cache.get(key)
Retrieve data from cache given a key.
const cache = sdk.cache();
const data = await cache.get('rawr');
key: A string key
sdk.cache.delete(key)
Removes key/values from cache.
const cache = sdk.cache();
await cache.delete('rawr');
key: A string key.
sdk.cache.deleteByPattern(pattern)
Removes all key/values from cache that match the pattern.
const cache = sdk.cache();
await cache.deleteByPattern('sessions/*');
pattern: A pattern to use for searching for keys to delete.
sdk.cache.flush()
Removes all keys from cache
const cache = sdk.cache();
await cache.flush();
sdk.cache.close()
Close connection to the cache
const cache = sdk.cache();
await cache.close();
sdk.cache.middleware(keys)
Returns an ExpressJS middleware that can be added to routes to cache responses.
const cache = sdk.cache();
app.get('/standard', cache.middleware(), (req, res) => {
console.log(req.cache.hit);
// Can use any of options from the sdk.cache.set function
// req.cache.set({ foo: 'bar' }, { ttl: 10 });
req.cache.set({ foo: 'bar' });
res.json({ foo: 'bar' });
});
app.get('/custom/:userId', cache.middleware(['headers.authorization']), (req, res) => {
console.log(req.cache.hit);
req.cache.set({ foo: 'bar' });
res.json({ foo: 'bar' });
});
keys: An array of request properties to append to the cache key. Defaults to the req.originalURL.
The caching middleware appends the cache namespace to the Express request object. This namespace contains the following functions/properties.
req.cache.hit: Boolean indicating whether it was a cache hit or miss.
req.cache.set(data, [options]): Function for adding data to the cache for this specific route. Supports the same options as the sdk.cache.set() function.
The X-Cache header is appended to the response object.
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Cache: HIT
Queues
The queue namespace of the SDK contains functions for interacting with message queues.
sdk.queue(url, [options])
Creates a new queue object.
const pointsQueue = sdk.queue('https://pointurl');
const userQueue = sdk.queue('https://userurl.fifo');
url: The URL of the message queue
| Options | Description | Default |
| ------- | ----------- | ------- |
| sqs | Instantiated SQS object to use instead of making one | null |
| messageAttributes | Object containing key/value attributes to put on each message | {} |
| sqs constructor options | Any valid property for the SQS constructor. | |
sdk.queue.size()
Returns the approximate number of messages on the queue
const numOfMessages = await sdk.queue.size();
sdk.queue.put(message(s))
Put messages on the queue.
When using FIFO queues, deduplication is automatically handled to remove duplicate messages, ensuring only one copy of each message is delivered. Without a groupID, each message gets its own group, allowing for parallel processing, but order is not guaranteed. If you specify a groupID, all messages will use that group and be processed in order.
await sdk.queue.put('singleString');
await sdk.queue.put({ single: 'object' });
await sdk.queue.put(['array', { of: 'things' }]);
await sdk.queue.put('message', { messageAttributes: { containsPassword: true } });
await sdk.queue.put([1, 2, 3, 4, 5], { groupID: 'sessions' });
message: single string, object, or array of strings and objects.
| Options | Description | Default |
| ------- | ----------- | ------- |
| messageAttributes | Object containing key/value attributes to put on the message | {} |
| groupID | The groupID to put the message in | null |
sdk.queue.callback(callback, message(s), [options])
Put messages on EZQ which will POST back the messages to the specified callback URL.
await sdk.queue.callback('https://api.com/callback', [{ id: 1 }, { id: 2 }]);
callback: URL where EZQ will POST the message.
message: single string, object, or array of strings and objects.
| Options | Description | Default |
| ------- | ----------- | ------- |
| messageAttributes | Object containing key/value attributes to put on the message | { timeout: 15 } |
| groupID | The groupID to put the message in | null |
sdk.queue.query(messages, [params], [options])
Put SQL messages on EZQ which will insert them into the database.
await sdk.queue.query('INSERT INTO tracking (action, value) VALUES (?, ?)', ['click', 'video']);
await sdk.queue.query([
['INSERT INTO tracking (action, value) VALUES (?, ?)', ['view', 'users page']],
['UPDATE user SET lastLogin = ? WHERE id = ?', ['2022-03-02 11:47:59', 1]],
]);
await sdk.queue.query('INSERT INTO tracking (action, value) VALUES ?', [
[
['view', 'users page'],
['click', 'video'],
],
]);
message: single string, array of strings, or array of arrays.
params: array of values to escape
| Options | Description | Default |
| ------- | ----------- | ------- |
| messageAttributes | Object containing key/value attributes to put on the message | {} |
| groupID | The groupID to put the message in | null |
sdk.queue.purge()
Delete all messages from the queue.
await sdk.queue.purge();
sdk.queue.get(handler, [options])
Retrieve messages from the queue and process them using the provided handler.
await queue.get(async (message) => {
// Do something with the message.
});
handler: Async function that is called for each message received.
| Options | Description | Default |
| ------- | ----------- | ------- |
| poll | Whether to consistently poll for messages or not. | true |
| MaxNumberOfMessages | Max number of messages pulled in each batch get to SQS in a single poll. This setting does not do anything if used inside Lambda. When using Lambda, you configure the message poll size via the Source Event Map | 1 |
| sqs.receiveMessage params | Any valid sqs.receiveMessage params | {} |
The handler function must be async. It must take at least one argument for the message but can optionally have a second for the message's attributes. The message is considered successfully processed if the handler finishes without throwing any errors. Messages are automatically deleted after successful processing.
async function handler(message, [messageAttributes]) {
// Process the message. Message will be deleted.
console.log(message);
// Or...
// Throw an error if there was an issue
throw new Error('Error while processing');
}
sdk.queue.batchGet(handler, [options])
Retrieve messages from the queue and process them in a batch using the provided handler. This function is similar to queue.get() but instead of processing the messages in parallel like queue.get(), it passes a batch of messages to the handler to process in a group.
await queue.batchGet(async (messages, requeue) => {
messages.map((message) => {
// Do something with the message
});
requeue(messages[7]);
});
handler: Async function that is called for each message batch received.
| Options | Description | Default |
| ------- | ----------- | ------- |
| poll | Whether to consistently poll for messages or not. | true |
| MaxNumberOfMessages | Max number of messages pulled in each GET request to SQS in a single poll. This setting does not do anything if used inside Lambda. When using Lambda, you configure the message poll size via the Source Event Map | 1 |
| maxHandlerBatchSize | Max number of messages pulled from the queue to send to each handler function invocation. By default this number is set to the same value as MaxNumberOfMessages, resulting in processing all of the pulled messages in a batch from a single handler invocation. | 1 |
| sqs.receiveMessage params | Any valid sqs.receiveMessage params | |
The handler function must be async and takes two arguments. The first argument is the messages, which will be an array of message objects. Each message object has a body, messageAttributes, and receiptHandle property. The second argument is a requeue function that you can pass messages to that you wish to requeue.
await queue.batchGet(async (messages, requeue) => {
/*
* messages = [{body: 'foo', messageAttributes: {att1: 'value'}, receiptHandle: 'asdf1234'}]
*/
messages.map((message) => {
// Do something with the message
});
// Requeue a specific message
requeue(messages[7]);
});
Error Handling and Requeuing
By default, if the handler function errors out and no messages are requeued using the requeue function, all messages passed to that handler invocation will be requeued. If there are no errors thrown from the handler, all messages are considered to be successfully processed and are deleted from the queue.
You can requeue specific messages by calling the requeue function and passing it a single message or an array of messages. The requeue function can be called many times in a single handler invocation.
await queue.batchGet(async (messages, requeue) => {
messages.map((message) => {
// Do something with the message
});
// Requeue a specific message
requeue([messages[7], messages[9]]);
// or just a single message
requeue(messages[6]);
});
All messages deleted from the queue (Total Success)
await queue.batchGet(async (messages, requeue) => {
messages.map((message) => {
// Do something with the message
});
});
All messages requeued (Total Failure)
await queue.batchGet(async (messages, requeue) => {
messages.map((message) => {
// Do something with the message
});
throw new Error('handler error');
});
Requeue some messages, delete the rest (Partial Failure)
await queue.batchGet(async (messages, requeue) => {
messages.map((message) => {
// Do something with the message
requeue(message);
});
throw new Error('handler error');
});
Or
await queue.batchGet(async (messages, requeue) => {
messages.map((message) => {
// Do something with the message
requeue(message);
});
});
Parallel Processing
Parallel processing of multiple handler function invocations can be configured with the MaxNumberOfMessages and maxHandlerBatchSize options. If MaxNumberOfMessages and maxHandlerBatchSize are the same value, all of the messages pulled from the queue will be sent to a single invocation of the handler function. If they differ, then potentially multiple invocations of the handler function will be fired.
numberOfParallelProcesses = floor(MaxNumberOfMessages / maxHandlerBatchSize)
numberOfParallelProcesses = floor(10 / 3) = 3
await queue.batchGet(
async (messages, requeue) => {
// Invoked twice in parallel.
},
{ MaxNumberOfMessages: 10, maxHandlerBatchSize: 5 }
);
await queue.batchGet(
async (messages, requeue) => {
// Invoked only once containing all messages
},
{ MaxNumberOfMessages: 10, maxHandlerBatchSize: 10 }
);
Slack
The Slack namespace of the SDK contains functions for interacting with the 24G Developer Slack API.
sdk.slack.post(arguments, [key])
Sends a message to a channel.
await sdk.slack.post({ channel: 'data-processing', text: 'ETL Job Complete' });
arguments: Object of Slack arguments used in the chat.postMessage method.
key: The 24G Developer API key. If omitted, the DEVELOPER_API_KEY environment variable is used.
Metrics
Helper function(s) for creating custom metrics.
sdk.metrics.create(metricEvent)
Create an AWS Embedded Metric Format log that is parsed by CloudWatch and will show up in CloudWatch Metrics and CloudWatch Logs.
You can either call the create function statically on the Metrics class:
// Or Metrics = sdk.metrics
// Metrics.create()
sdk.metrics.create({
FeatureTest: {
value: 1,
dimensions: {
Service: 'child',
},
metadata: { Child: 'child' },
},
});
or you can create an instance of the MetricEmitter class that has top-level configuration you want all .create calls to have:
const metricEmitter = new sdk.metrics({
namespace: 'parent',
dimensions: { Parent: 'parentDim' },
metadata: { Parent: 'parentMeta' },
unit: 'Percentage',
});
metricEmitter.create({
FeatureTest: {
value: 1,
dimensions: {
Service: 'child',
},
metadata: { Child: 'child' },
},
});
metricEvent: Object of metrics you want to emit. Each key is the metric name.
The metric event format
interface MetricEvent {
[metricName: string]: {
dimensions: {
[key: string]: string | number | boolean;
};
value: number;
unit?: string;
namespace?: string;
metadata?: object;
};
}
The metric constructor config
interface MetricEmitterConfig {
namespace?: string;
dimensions?: {
[key: string]: string;
};
unit?: string;
metadata?: object;
}
SMS
This namespace of the SDK contains functions for interacting with SMS text messages on AWS infrastructure.
sdk.sms(applicationID, [options])
Creates a new SMS instance.
const sms = sdk.sms('2d16e35ab1e543b68474d0334a258b25');
applicationID: The Application/Project ID of your PinPoint project
| Options | Description | Default |
| ------- | ----------- | ------- |
| originationNumber | Specific origination number to use for all SMS calls. If no specific number is provided, the PINPOINT_ORIGINATION_NUMBER environment variable is used. If neither is set, a randomly available number will be used. | null |
| pinPoint | Instantiated PinPoint client to use instead of making one | null |
| sns | Instantiated SNS client to use instead of making one | null |
sdk.sms.send(number(s), [options])
Sends an SMS text message to a given number or numbers. Returns an object containing successful and failed numbers.
const sms = sdk.sms('2d16e35ab1e543b68474d0334a258b25');
await sms.send('+14254147755', 'Hi there!');
const response = await sms.send(['+14254147755', '+14254147167'], 'Hi there!');
/**
{
failure: {
'+12488778282': {
status: 'PERMANENT_FAILURE',
messageId: 'ktinqk95s4grb212bpo22vual2rmlo29bkc85jg0',
statusCode: 400,
applicationId: 'ec11df4923384a2ebad6da7fc62b58fb',
requestId: '2e4050df-8922-460a-9bd0-672cc0be0a64'
}
},
success: {
'+12488778283': {
status: 'SUCCESSFUL',
messageId: 't0vm8i7rad22l3si63uo8sm7kmt6fri7acb8o200',
statusCode: 200,
applicationId: 'ec11df4923384a2ebad6da7fc62b58fb',
requestId: '2e4050df-8922-460a-9bd0-672cc0be0a64'
}
}
}
**/
number(s): Single phone number or array of phone numbers
| Options | Description | Default |
| ------- | ----------- | ------- |
| messageType | Message type to use. Valid options are PROMOTIONAL and TRANSACTIONAL | PROMOTIONAL |
| originationNumber | Specific origination number to use for this SMS call. If no specific number is provided, the class-wide number set in the constructor is used | Class wide |
sdk.sms.optIn(number)
Opt back in a number that has been previously opted out. An error will be thrown if the phone number can't be opted back in. NOTE: Opted-out phone numbers can be opted back in only once every 30 days.
const sms = sdk.sms('2d16e35ab1e543b68474d0334a258b25');
try {
await sms.optIn('+14254147755');
} catch (error) {
// Phone number can't be opted back in
}
sdk.sms.eventHandler(handlers, [options])
Creates an ExpressJS formatted controller that can be used to handle SMS webhook events. The function takes an object of handlers for each type of event.
The controller handles authentication, message decoding, and deduplication of events.
If there is an error inside your handler, throw it and the SDK will catch the error, log it, and respond to the webhook with a 500.
const sms = sdk.sms('2d16e35ab1e543b68474d0334a258b25');
app.post(
'/',
sms.eventHandler({
success: async (event) => {
console.log('IN SUCCESS', event);
},
failure: async (event) => {
console.log('IN FAILURE', event);
},
optOut: async (event) => {
console.log('IN optOut', event);
},
buffered: async (event) => {
console.log('IN buffered', event);
},
})
);
handlers: Object containing functions to be called on each type of event
[handlers].success: Async function to be called on SUCCESS events
[handlers].failure: Async function to be called on FAILURE events
[handlers].optOut: Async function to be called on OPTOUT events
[handlers].buffered: Async function to be called on BUFFERED events
| Options | Description | Default |
| ------- | ----------- | ------- |
| accessKey | Key that will be checked against the x-amz-firehose-access-key header to authenticate the endpoint | null (no authentication) |
Note About Origination Numbers and Event Streams
Event streams are set at the PinPoint project/application level. Events, including opting out, are handled at the Origination Number level. At the time of writing, there is no way to explicitly link PinPoint projects and Origination Numbers, which can lead to scenarios where SMS events will not reach the desired PinPoint event stream.
Although not officially documented, testing has shown that the last PinPoint project to use an Origination Number will receive its events. Because of this, it is recommended that each PinPoint project use its own Origination Number so all events from a single Origination Number go to a single PinPoint project's event stream.
Analytics
The analytics namespace provides basic application analytics functionality. The phrase "who did what where" is the main concept behind it.
who: 'uid-1234abc'
did: 'button-click'
what: 'register'
where: '/sessions/elearning'
---
who: 'uid-1234abc'
did: 'route-hit'
what: null
where: '/sessions'
A table called tracking should be in your database with the columns who, did, what, and where.
You can create an "instance" of analytics which allows you utilize multiple instances of analytics each with different configuration, or use the function directly off of the namespace.
sdk.analytics([options])
Creates an instance of the analytics namespace.
const sdk = require('@twentyfourg/cloud-sdk');
// this
const analytics = sdk.analytics(options);
analytics.track();
analytics.middleware();
analytics.requestTracking;
analytics.routeTracking();
analytics.endpoint;
// is the same as this, but the namespace-level functions use the default options
sdk.analytics.track();
sdk.analytics.middleware();
sdk.analytics.requestTracking;
sdk.analytics.routeTracking();
sdk.analytics.endpoint;
| Options | Description | Default |
| ------- | ----------- | ------- |
| ezq | An instance of sdk.queue that will be used for EZQ messages. | null |
| ezq_url | SQS URL to use for sending EZQ messages to. | SQS_EZQ_URL env var |
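An instance can, for example, point at a specific EZQ queue URL instead of relying on the environment variable; a minimal sketch in which the queue URL is a placeholder:
// Uses ezq_url instead of the SQS_EZQ_URL environment variable
const analytics = sdk.analytics({ ezq_url: 'https://sqs.us-east-1.amazonaws.com/123456789012/ezq' });
app.use(analytics.middleware());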
sdk.analytics.track(event)
Generates an EZQ MySQL query based on a tracking event.
sdk.analytics.track({
who: 'uid-1234abcd',
did: 'button-click',
what: 'signout',
where: '/sessions',
});
// INSERT INTO tracking (who, did, what, `where`) VALUES ('uid-1234abcd', 'button-click', 'signout', '/sessions');
[event.who]: "Who" is doing the action. If left blank, the action is considered anonymous.
[event.did]: The action being done.
[event.what]: "What" the action is being done to.
[event.where]: "Where" the action happened.
If did, what, and where are all blank, no event is generated.
sdk.analytics.endpoint(req, res, next)
Express middleware that can be mounted to a path, allowing frontend applications to create tracking events.
app.use(bodyParser.json());
app.post('/tracking', sdk.analytics.endpoint);
// Frontend
axios.post('api.com/tracking', { who, did, what, where });
sdk.analytics.requestTracking(req, res, next)
Express middleware function that appends the analytics namespace to the req object with the track function. The track function is an alias for sdk.analytics.track but gets the who from the request context.
app.use(sdk.analytics.requestTracking);
app.get('/sessions', (req, res, next) => {
req.analytics.track({ did: 'interested', what: 'more-sessions' });
});
sdk.analytics.routeTracking([options])
Returns an Express middleware function that creates a tracking event on every route hit.
app.use(sdk.analytics.routeTracking());
// hit to /sessions route
// {who: 'uid-1234', did: 'route-hit', where: '/sessions'}
| Options | Description | Default |
| ------- | ----------- | ------- |
| ignoreRoutes | List of routes to NOT create a tracking event for | ['/favicon.ico', '/_healthz', '/_readyz', '/tracking'] |
sdk.analytics.middleware([options])
Alias function for sdk.analytics.requestTracking and sdk.analytics.routeTracking.
// This
app.use(sdk.analytics.middleware());
// Is the same as this
app.use(sdk.analytics.requestTracking);
app.use(sdk.analytics.routeTracking());
| Options | Description | Default |
| ------- | ----------- | ------- |
| sdk.analytics.routeTracking options | Options passed to sdk.analytics.routeTracking | sdk.analytics.routeTracking defaults |
WebHooks
The WebHook namespace contains ExpressJS middleware to help handle Web Hooks from various vendors.
AWS
The AWS WebHook middleware can be used to easily handle AWS SNS HTTP events. The body of the request sent to the Web Hook endpoint will vary depending on the AWS source services (S3, CloudWatch, etc.).
sdk.webhook.aws([options])
Returns ExpressJS middleware to handle AWS SNS WebHook requests
app.post('/', sdk.webhook.aws(), (req, res) => {
// req.body == body of the WebHook
res.send('OK');
});
| Options | Description | Default |
| ------- | ----------- | ------- |
| awsAccountWhiteList | List of AWS Account IDs that are allowed to send events to this endpoint | Current AWS Account |
| snsTopicWhiteList | List of AWS SNS Topic ARNs that are authorized to use this endpoint | WEBHOOK_AWS_SNS_TOPIC_WHITELIST environment variable or [*] (All topics) |
| disableAuthentication | Whether to turn off endpoint authentication. If the ENV is local, authentication is disabled. | WEBHOOK_AWS_DISABLE_AUTHENTICATION environment variable or false |
AWS Web Hooks require the Web Hook endpoint to confirm that it would like to receive events. For more information on how to trigger the confirmation process, see these docs.
Event bodies will vary depending on the originating AWS service. For example bodies, see these docs.
Environment Variables
| Key | Description |
| --- | ----------- |
| SECRET_PATH | The path (key) to the secret stored in Vault or Secret Manager. |
| SECRET_TYPE | The type of secret manager you wish to interact with. Defaults to vault. |
| ASSET_BUCKET | The name of the storage bucket where your assets are stored. |
| ASSET_DOMAIN | The domain used to access the assets from storage. |
| ASSET_URL | Alias for ASSET_DOMAIN. Use ASSET_DOMAIN instead. |
| CF_ACCESS_KEY_ID | CloudFront signer key ID. |
| CF_PRIVATE_KEY | CloudFront signer private key. |
| REDIS_ENDPOINT | The endpoint of the Redis instance to use for storing cache (excluding protocol) |
| ENV | The environment the code is running in (e.g. 'local', 'prod'). Used in the caching prefix |
| JOB_NUMBER | 24G job number. Used in the caching prefix |
| LOG_LEVEL | Options: silly, debug, verbose, http, info, warn, error |
| API_PORT | The port the API should listen on. Defaults to 3000 if left blank. |
| DEVELOPER_API_URL | The base URL of the 24G Developer API. |
| DEVELOPER_API_KEY | The 24G Developer API key. |
| SQS_EZQ_URL | The SQS URL to send EZQ formatted messages to |
| DYNAMO_CACHE_TABLE | The name of the DynamoDB cache table |
| CACHE_TYPE | The type of caching backend to use. Valid values are redis or dynamo. |
| EZQ_JWT_EXPIRES | The amount of time the EZQ JWT should be valid for. Defaults to 1hr. |
Contributing
Contributors are always welcome! If you wish to make an improvement or fix a bug, make a pull request into master. Be sure to follow the commit name convention as you work.
- Slack Channel: #cloud-sdk