# BucketMapper

## Overview

BucketMapper is an AWS-based, cloud-native, S3 file system helper: an API and a Worker that support mapping metadata to files in S3 buckets. When a file is ingested into S3, it is checked against the existing metadata for that file.
## Key Capabilities

- Storing or inferring metadata for CSV files (generated from databases), or XML and JSON documents
- Data quality assurance checks (hash totals) against metadata when files are ingested
- Annotation of data assets (pending)
## Setup

Requires the following AWS components:

- S3 - where the files will be stored
- SQS - a queue to receive notifications from S3 (plus a dead-letter queue)
- An S3 notification to SQS
- DynamoDB - to store file-related metadata and processing-related information (no data)
- An EC2 instance (or two) - to run the BucketMapper API and Worker (preferably behind nginx)
- Relevant policies to allow each AWS service to talk to the others

Look under `/scripts` to find a sample CloudFormation script with all the details.
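If you would rather wire the S3 -> SQS notification from code than from CloudFormation, a minimal sketch with the AWS SDK for Node.js might look like the following (the bucket name, queue ARN, and region are placeholders; the queue's access policy must also allow S3 to send messages):

```
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'ap-southeast-2' });  // placeholder region

// Attach an S3 -> SQS notification so every new object triggers the Worker.
s3.putBucketNotificationConfiguration({
  Bucket: 'devop5-bucketmapper-experiment',           // your ingest bucket
  NotificationConfiguration: {
    QueueConfigurations: [{
      Id: 'bucketmapper-ingest',                      // arbitrary configuration id
      QueueArn: 'arn:aws:sqs:ap-southeast-2:123456789012:bucketmapper-queue',
      Events: ['s3:ObjectCreated:*']                  // notify on every newly created object
    }]
  }
}, (err) => {
  if (err) console.error('failed to attach notification', err);
  else console.log('S3 -> SQS notification attached');
});
```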
## Installation

Set up the EC2 instance:

- Install Node.js 6.x LTS or higher
- Install `forever` or `pm2`, e.g.

```
npm install forever
```
## Run

- Configure your own node project with your configuration, or use the sample project (optional)
- Install BucketMapper

```
npm install bucket-mapper
```

- Run the Worker from `forever`:

```
npm run worker
```

- For the API, run from `forever`:

```
npm run api
```
## Test

- Post a new metadata file to `/api/metadata`
- Drop a CSV file (corresponding to the metadata) into the S3 bucket
- Look up the DynamoDB tables to see whether the file and its checksums were stored
## Use Cases for API

For all use cases, the User can call an API to:

- fetch the metadata
- manipulate the metadata
- view the list of files (not the data)
- run checks on the data quality
### Data consumer wants to know the structure of the data

Given an S3 file

`https://S3.<customerurl>/<address/of/the/target/folder/year/date/file>`

When we access the file's metadata as

```
curl -X GET -H "Authorization:..." \
  "https://bucketmapper.<customerurl>/metadata?objectKey=<DoNotPutBucketNameHere/address/of/the/target/folder/file>[&version=<default=latest>&format=<default=csv|json|yaml>]"
```

Then we must receive the metadata as JSON (following http://json-schema.org/examples.html):
```
{
  "id": "<hash-of-key-below>",
  "metadataKey": "<address/of/the/target/folder/file>",
  "version": "latest",
  "type": "object",
  "properties": [
    {
      "name": "firstName",
      "type": "string",
      "required": true, /* similar to nullable */
      "order": 1,       /* required */
      "mapping": "FIRST_NAME"
    },
    {
      "name": "lastName",
      "type": "string",
      "order": 2
    },
    {
      "name": "age",
      "description": "Age in years",
      "type": "integer",
      "minimum": 0,
      "order": 3
    }
  ]
}
```
OR as CSV with the columns and their types:

`column_1,type,column_2,type,column_3,type,column_4,type`
### Data Custodian wants to be able to save metadata for an intended Data Asset in the Data Lake

Given the metadata, based on the source, is stored as the following
```
{
  "id": "681929c184d918c4b1e7a79813622239",
  "metadataKey": "root/folder/*/dummy_source1",
  "bucket": "devop5-bucketmapper-experiment",
  "properties": [
    {
      "name": "TestColumn0",
      "order": 1,
      "required": false,
      "type": "number"
    },
    { "name": "TestColumn1", "order": 2 },
    { "name": "TestColumn2", "order": 3 },
    { "name": "TestColumn3", "order": 4 },
    {
      "format": "isoDateTime", // or a pattern based on the npm dateformat module
      "name": "TestColumn4",
      "order": 5,
      "required": false,
      "type": "date"
    },
    { "name": "TestColumn5", "order": 6 },
    { "name": "TestColumn6", "order": 7 }
  ],
  "check": {
    "partitionRowCount": true,
    "rowChecksums": true,
    // `true` is the same as the default object form:
    // { "columns": [ { "check": "hash", "column": "*" } ] }
    // OR specify the columns to include:
    // { "columns": [ { "check": "hash", "column": "name-of-column" } ] }
    // OR specify the columns to exclude:
    // { "exclude": [ { "column": "columnB" }, { "column": "columnD" } ] }
    "colChecksums": [
      {
        "check": "hash",
        "column": "*"
      }
    ],
    "rowCount": true,
    "split": {
      "chunkSize": "10000",
      "orderBy": "SOME-PRIMARY-KEY ASC"
    }
  },
  "file": "dummy_source1",
  "log": [
    {
      "eventType": "created",
      "updated": "2016-03-21T05:38:09.973Z"
    }
  ],
  "source": {
    "filePath": "some/file/path",
    "loadJob": "name-of-load-job",
    "owner": "table-owner",
    "table": "some-table-name",
    "dbType": "mssql|oracle10g|oracle11g|inferred|csv|.."
  },
  "status": "active"
}
```
Details:

| attribute   | description                                                                                     | required |
|:------------|:------------------------------------------------------------------------------------------------|:---------|
| id          | md5 hash of the key                                                                             | Y        |
| metadataKey | nominal key of the type of object stored, e.g. root/folder/*/dummy_source1                     | Y        |
| bucket      | name of the bucket                                                                              | Y        |
| properties  | map - with { order: #, type: 'number\|text\|boolean', required: true, mapping: 'source_name' } | Y        |
| file        | name of the actual file without root folder and date parts                                     | Y        |
| source      | optional details about the source of the data                                                  | N        |
| log         | event source for the record with the payload of each update                                    | Y        |
| status      | 'active\|donotprocess\|archived'                                                               | Y        |
| check       | holds a variety of checks to be run on the ingested file                                       | N        |
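Since `id` is the md5 hash of the key, it can be reproduced with Node's built-in `crypto` module; a minimal sketch using the metadataKey from the example above:

```
// Derive the record id as md5(metadataKey), per the attribute table above.
const crypto = require('crypto');

const metadataKey = 'root/folder/*/dummy_source1';
const id = crypto.createHash('md5').update(metadataKey).digest('hex');

// Should reproduce the stored "id" for this key, assuming the id really is
// a plain md5 of the metadataKey as documented.
console.log(id);
```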
#### Available Checks

- `check.rowChecksums` - (default: true) will create hash checksums by concatenating ALL column values per row, and store them under /check
- `check.rowChecksums.columns[..]` - (the default `{ "check": "hash", "column": "*" }` is the same as `true`) - hashes ALL columns as above
- `check.rowChecksums.columns[{col..},{col..}]` - specify included columns as `{ "check": "hash|sum", "column": "name-of-column" }` - hashes/sums this column
- `check.rowChecksums.exclude[{col..},{col..}]` - overrides `.columns` - specify excluded columns as `{ "column": "name-of-column" }` - excludes this column
- `check.partitionRowCount` - (default: true) will create a checksum of the concatenated values of all rowChecksums over a partition (max 10K rows), ordered by the `check.split.orderBy`
- `check.columnChecksums` - (default: false) will create checksums of the individually concatenated column values per partition, and store them under /check
- `check.columnChecksums.columns[..]` - (the default `{ "check": "hash", "column": "*" }` is the same as above) - hashes ALL columns individually
- `check.columnChecksums.columns[{col..},{col..}]` - specify included columns as `{ "check": "hash|sum", "column": "name-of-column" }` - hashes/sums this column
- `check.columnChecksums.exclude[{col..},{col..}]` - overrides `.columns` - specify excluded columns as `{ "column": "name-of-column" }` - excludes this column
- `check.rowCount` - (default: true) will store the rowCount under /check
- `check.split` - (required) affects the order and partitioning of the checks
- `check.split.orderBy` - (required) can take a comma-separated list of order clauses, e.g. `Id ASC, LastUpdated DESC`
- `check.split.chunkSize` - (default/max: 10000) - e.g. creates two partitions, i.e. 1-10000 and 10001-20000, for a table with 17376 rows
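To illustrate the `chunkSize` rule, here is a small hypothetical helper (not part of the package) that derives the nominal partition labels for a given row count:

```
// Hypothetical helper illustrating check.split.chunkSize: partitions are
// labelled by their nominal chunk ranges, even when the last chunk is
// only partially filled.
function partitionLabels(numOfRows, chunkSize) {
  const labels = [];
  for (let start = 1; start <= numOfRows; start += chunkSize) {
    labels.push(start + '-' + (start + chunkSize - 1));
  }
  return labels;
}

console.log(partitionLabels(17376, 10000)); // [ '1-10000', '10001-20000' ]
console.log(partitionLabels(1351, 1000));   // [ '1-1000', '1001-2000' ]
```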
When the metadata is saved using the API call

```
curl -X POST -H "Authorization:..." -H "Content-type:application/json" \
  -d "{...metadata...}" \
  https://bucketmapper.<customerurl>/metadata
```
Then the API must return
status: 201 Created
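The same POST can be made from Node; a minimal sketch using the built-in `https` module (the host and token are placeholders):

```
// Minimal sketch of POSTing metadata with Node's built-in https module.
const https = require('https');

const metadata = { /* ...metadata document as shown above... */ };
const body = JSON.stringify(metadata);

const req = https.request({
  method: 'POST',
  hostname: 'bucketmapper.example.com',   // stands in for bucketmapper.<customerurl>
  path: '/metadata',
  headers: {
    'Authorization': '...',
    'Content-type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, (res) => {
  console.log(res.statusCode);            // expect 201 Created
});

req.on('error', console.error);
req.end(body);
```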
### Data Lake must be able to store quality assurance information about an ingested Data Asset

Given the metadata is already stored

When a Data Asset <address/of/the/target/folder/year/date/file> is written to S3

And S3 notifies SQS (S3 --> SQS)
Then the Worker must process the S3 event

And the Worker must check the API to locate this Data Asset's metadata using the API call

```
curl -X GET "https://bucketmapper.<customerurl>/api/metadata?key=<DoNotPutBucketNameHere/address/of/the/target/folder/file>[&version=<default=latest>&format=<default=csv|json|yaml>]"
```

- (if not found) must write to the error log
- (if the metadata is disabled) must write to the error/warning log

And (if the metadata is found), must

- save a File record via

```
curl -X POST -H "Authorization:..." -H "Content-type:application/json" \
  -d "{...metadata...}" \
  https://bucketmapper.<customerurl>/file
```

- initiate the calculation of the checksums on the ingested Data Asset
- and, on completion, call this API to store the checksums (note: the checksums array must be in the same order as metadata.properties.order)

```
curl -X POST -H "Authorization:..." \
  -H "Content-type:application/x-www-form-urlencoded" \
  -d "objectKey=<DoNotPutBucketNameHere/address/of/the/target/folder/year/date/file/>[&version=<default=latest>]&metadata=<address/of/the/target/folder/file>[&metadataVersion=<default=latest>]&checksums=[..,..,,,]&metadata={..}" \
  https://bucketmapper.<customerurl>/api/check
```
And the API must return

status: 201 Created
OR status: 400 Bad Request
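For orientation, a condensed, hypothetical sketch of that Worker loop (this is not the package's actual implementation; the queue URL and region are placeholders):

```
// Hypothetical sketch of the Worker loop described above: poll SQS for
// S3 events, look up the metadata, and skip (with a log) when it is
// missing or disabled.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'ap-southeast-2' });

const QUEUE_URL = 'https://sqs.ap-southeast-2.amazonaws.com/123456789012/bucketmapper-queue';

function poll() {
  sqs.receiveMessage({ QueueUrl: QUEUE_URL, WaitTimeSeconds: 20 }, (err, data) => {
    if (err) return console.error(err);
    (data.Messages || []).forEach((msg) => {
      const s3Event = JSON.parse(msg.Body);
      const objectKey = s3Event.Records[0].s3.object.key;
      // 1. GET /api/metadata?key=... ; if not found -> error log
      // 2. if the metadata is disabled -> error/warning log
      // 3. else POST a File record to /file, compute the checksums,
      //    then POST them to /api/check (same order as properties.order)
      processObject(objectKey, () => {
        sqs.deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: msg.ReceiptHandle },
          () => {});
      });
    });
    poll(); // long-poll again
  });
}

function processObject(objectKey, done) { /* elided: steps 1-3 above */ done(); }

poll();
```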
Then the User must be able to get the file using

```
curl -X GET -H "Authorization:..." \
  "https://bucketmapper.<customerurl>/api/file?key=<DoNotPutBucketNameHere/address/of/the/target/folder/year/date/file>[&version=<default=latest>][&format=<default=csv|json|yaml>]"
```

in the form
```
{
  "id": "2d78912dfcf7b50c421d06536893a972",
  "key": "root/folder/2016/201603/20160315/dummy_source1",
  "bucket": "devop5-bucketmapper-experiment",
  "dateKey": "20160315",
  "monthKey": "201603",
  "yearKey": "2016",
  "file": "dummy_source1",
  "folder": "root/folder/2016/201603/20160315",
  "log": [
    {
      "eventType": "updated",
      "payload": {
        "id": "2d78912dfcf7b50c421d06536893a972",
        "numOfRows": 1351,
        "partitions": [
          "1-1000",
          "1001-2000"
        ],
        "status": "processing_complete"
      },
      "updated": "2016-03-21T05:39:03.623Z"
    },
    {
      "eventType": "updated",
      "payload": {
        "id": "2d78912dfcf7b50c421d06536893a972",
        "status": "processing_started"
      },
      "updated": "2016-03-21T05:39:02.668Z"
    },
    {
      "eventType": "created",
      "payload": {...},
      "updated": "2016-03-21T05:39:02.324Z"
    }
  ],
  "metadata": {...},
  "metadataKey": "root/folder/*/dummy_source1",
  "numOfRows": 1351,
  "partitions": [
    "1-1000",
    "1001-2000"
  ],
  "rootFolder": "root/folder",
  "s3Event": {...},
  "status": "processing_complete",
  "version": 1479706142324
}
```
Details:

| attribute   | description                                                                      | required |
|:------------|:----------------------------------------------------------------------------------|:---------|
| id          | md5 of the key                                                                   | Y        |
| key         | S3 object key, e.g. root/folder/2016/201603/20160315/subfolder/filename          | Y        |
| bucket      | S3 bucket name                                                                   | Y        |
| dateKey     | time key from the key, e.g. d20160315                                            |          |
| monthKey    | time key from the key, e.g. m201603                                              |          |
| yearKey     | time key from the key, e.g. y2016                                                | Y        |
| file        | right part of the object key after the date parts, e.g. subfolder/filename       |          |
| folder      | path to the file with date parts, e.g. root/folder/2016/201603/20160315          |          |
| rootFolder  | left part of the object key without date parts, e.g. root/folder                 |          |
| metadata    | object - snapshot of the metadata when processing started                        | Y        |
| metadataKey | nominal key of the type of object stored, e.g. root/folder/*/subfolder/filename  | Y        |
| numOfRows   | updated after the Worker processes the file                                      |          |
| partitions  | updated when the partition checksums are stored, else [0] = null                 |          |
| version     | timestamp in ms                                                                  | Y        |
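The date-part attributes above are carved out of the object key; a hypothetical sketch of that decomposition for the default key form (not the package's actual keybuilder):

```
// Hypothetical sketch: decompose an S3 object key of the default form
// root/folder/year/month/date/sub/folder/FILE into the file-record
// attributes listed above.
function parseKey(key) {
  const m = key.match(/^(.*)\/(\d{4})\/(\d{6})\/(\d{8})\/(.+)$/);
  if (!m) return null;
  return {
    rootFolder:  m[1],                         // e.g. "root/folder"
    yearKey:     m[2],                         // e.g. "2016"
    monthKey:    m[3],                         // e.g. "201603"
    dateKey:     m[4],                         // e.g. "20160315"
    file:        m[5],                         // e.g. "dummy_source1"
    folder:      [m[1], m[2], m[3], m[4]].join('/'),
    metadataKey: m[1] + '/*/' + m[5]           // nominal key, date parts wildcarded
  };
}

console.log(parseKey('root/folder/2016/201603/20160315/dummy_source1'));
```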
Then the User must be able to get the checksums, using file.partitions, e.g. 1-10000

```
curl -X GET -H "Authorization:..." \
  "https://bucketmapper.<customerurl>/api/check?fileKey=<DoNotPutBucketNameHere/address/of/the/target/folder/year/date/file>[&partition=from-to][&version=<default=latest>][&format=<default=csv|json|yaml>]"
```
in the form
```
{
  "id": "c343f3494a2f0034f458c99fcffa6504",
  "key": "root/folder/2016/201603/20160315/dummy_source1-1-1000",
  "partitionKey": "1-1000",
  "bucket": "devop5-bucketmapper-experiment",
  "fileKey": "root/folder/2016/201603/20160315/dummy_source1",
  "dateKey": "d20160315",
  "monthKey": "m201603",
  "yearKey": "y2016",
  "file": "dummy_source1",
  "checksums": [
    "1b89267e406e18d4cf2fa31ec65da407",
    "7182dd641e3a4268edfbc8a8c2f818c6",
    "23e0e5478e6d64c1a27f255d1a36a252",
    "a5221daa6d761f564620ad8f39718de0",
    "541d8cd98f00b204e9800998ecf8427e",
    "676fa5edb15cc325b8d04bb9e9be71da",
    "210079472537e886aa537a25e491dc7c"
  ],
  "log": [
    {
      "eventType": "created",
      "updated": "2016-03-21T08:44:01.367Z"
    }
  ],
  "status": "ok",
  "version": 1479711841367
}
```
And (if the file is still processing) the User must be able to get

status: 102 Processing
OR status: 400 Bad Request
OR status: 401 Not Authorised
OR status: 403 Forbidden
OR status: 404 File Not Found
OR status: 503 Server Error

And (once the checksums are done)

status: 200
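Putting the file statuses together, a hypothetical polling helper might wait for `processing_complete` before fetching the checksums (built-in `https` module; the host, token, and key are placeholders):

```
// Hypothetical polling sketch: GET /api/file until its status reaches
// "processing_complete", after which the partition checksums can be
// fetched from /api/check.
const https = require('https');

function getJson(path, cb) {
  https.get({
    hostname: 'bucketmapper.example.com',   // stands in for bucketmapper.<customerurl>
    path: path,
    headers: { 'Authorization': '...' }
  }, (res) => {
    let body = '';
    res.on('data', (d) => { body += d; });
    res.on('end', () => {
      if (res.statusCode === 200) cb(null, JSON.parse(body));
      else cb(new Error('status ' + res.statusCode));
    });
  }).on('error', cb);
}

function waitForFile(key, cb) {
  getJson('/api/file?key=' + encodeURIComponent(key), (err, file) => {
    if (err) return cb(err);
    if (file.status !== 'processing_complete') {
      return setTimeout(() => waitForFile(key, cb), 5000);  // still processing
    }
    cb(null, file);
  });
}

waitForFile('root/folder/2016/201603/20160315/dummy_source1', (err, file) => {
  if (err) return console.error(err);
  console.log(file.partitions);  // e.g. [ '1-1000', '1001-2000' ]
});
```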
### Data Custodian wants to be able to validate the ingested data

Given the User (or a script) has prepared checksum data from the source assets in CSV form

```
-- sample SQL: hashes/sums the values of each individual column
SELECT hash(firstName), hash(lastName), SUM(age)
FROM sourceTable
WHERE rowNum >= 10001 AND rowNum <= 20000;

firstName    | lastName    | age       |
94dff1342cef | 82faabd43de | 576543260 |
```
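The same preparation can be scripted outside the database; a hypothetical Node sketch of per-column hash/sum checksums over a partition of rows (column names from the example above; the hash function is assumed to be md5):

```
// Hypothetical sketch: prepare per-column checksums for a partition of
// source rows, mirroring the SQL above - md5 over the concatenated
// values of a hashed column, a plain total for a summed column.
const crypto = require('crypto');

const md5 = (s) => crypto.createHash('md5').update(s).digest('hex');

function columnChecksums(rows, checks) {
  return checks.map(({ column, check }) => {
    const values = rows.map((row) => row[column]);
    return check === 'sum'
      ? values.reduce((a, b) => a + b, 0)   // e.g. SUM(age)
      : md5(values.join(''));               // e.g. hash(firstName)
  });
}

// partition 10001-20000 of the source rows (placeholder data)
const rows = [
  { firstName: 'Ada',  lastName: 'Lovelace', age: 36 },
  { firstName: 'Alan', lastName: 'Turing',   age: 41 }
];

console.log(columnChecksums(rows, [
  { column: 'firstName', check: 'hash' },
  { column: 'lastName',  check: 'hash' },
  { column: 'age',       check: 'sum' }
]));
```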
Then the User can check that the ingested data is valid by calling the API as

```
curl -X POST -H "Authorization:..." \
  -H "Content-type:application/x-www-form-urlencoded" \
  -d "objectKey=<DoNotPutBucketNameHere/system-name/root/folder/year/date/file>[&version=<default=latest>&partition=10001,20000][&metadata=<address/of/the/target/folder/file>&metadataVersion=<default=latest>]&checks=firstName,lastName,age&expectedValues=94dff1342cef,82faabd43de,576543260" \
  https://bucketmapper.<customerurl>/checkresult
```

(the body can also be sent as JSON)
And the results would be

status: 200 Success
OR status: 400 Bad Request

```
{
  "file": "<address/of/the/target/folder/year/date/file>",
  "version": "latest",
  "check": {
    "checkedAt": "timestamp",
    "code": "CHECK_FAILED",
    "metadata": "<address/of/the/target/folder/file>",
    "metadataVersion": "<default=latest>",
    "partition": { "from": 10001, "to": 20000 },
    "errors": [
      { "error": "FAILED", "column": "firstName", "expected": "", "actual": "" }
    ]
  }
}
```
And the results can be retrieved as

```
curl -X GET -H "Authorization:..." \
  "https://bucketmapper.<customerurl>/checkresult?file=<address/of/the/target/folder/year/date/file>[&version=<default=latest>&partition=10001,20000]&metadata=<address/of/the/target/folder/file>[&metadataVersion=<default=latest>]"
```
status: 200

```
{
  "file": "<address/of/the/target/folder/year/date/file>",
  "version": "latest",
  "data": [
    { "checkedAt": "timestamp-x", "code": "CHECK_PASSED", "metadata": "<address/of/the/target/folder/file>", "partition": { "from": 10001, "to": 20000 }, ... },
    { "checkedAt": "timestamp-y", "code": "CHECK_FAILED", "metadata": "<address/of/the/target/folder/file>", "partition": { "from": 1, "to": 10000 }, ... },
    { "checkedAt": "timestamp-z", "code": "CHECK_PASSED", "metadata": "<address/of/the/target/folder/file>", "partition": { "from": 10001, "to": 20000 }, ... }
  ]
}
```
## AWS Configuration

- See the AWS SDK credentials documentation
- Store your credentials in `~/.aws/credentials`
- When running a node app:

```
$ AWS_PROFILE=work-stuff node script.js
```
## S3 Key Naming Conventions

- Keys are built from the S3 ObjectKey and a system name extracted from the S3 Bucket Name
- Custom keybuilders allow whatever flexible naming convention is desired (see the sketch below)
- Key names are expected to have a Hadoop-like ../year/month/date/.. structure
- By default, key names are of the form system/root/folder/year/month/date/sub/folder/FILE
- Key names can be customised to be Hadoop-friendly, i.e. system/root/folder/FILE/year/month/date/any-file-name-number
- The derived key attributes are: id, key, bucket, system, file, folder, rootFolder, yearKey, monthKey, dateKey, version, metadataKey, systemKey
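The keybuilder contract itself is not documented here, so the following is only a hypothetical illustration of what a custom, Hadoop-friendly keybuilder could look like:

```
// Hypothetical custom keybuilder for the Hadoop-friendly form
// system/root/folder/FILE/year/month/date/any-file-name-number.
// The real keybuilder contract may differ; this only illustrates the idea.
function hadoopFriendlyKeyBuilder(bucket, objectKey) {
  const m = objectKey.match(/^(.*)\/(\d{4})\/(\d{6})\/(\d{8})\/[^/]+$/);
  if (!m) return null;
  const rootAndFile = m[1];                        // e.g. "root/folder/FILE"
  return {
    system:     bucket.split('-')[0],              // system name taken from the bucket name
    file:       rootAndFile.split('/').pop(),      // "FILE"
    rootFolder: rootAndFile.split('/').slice(0, -1).join('/'),
    yearKey:    m[2],
    monthKey:   m[3],
    dateKey:    m[4]
  };
}

console.log(hadoopFriendlyKeyBuilder(
  'devop5-bucketmapper-experiment',
  'root/folder/FILE/2016/201603/20160315/part-00000'
));
```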
## Tests

This code base currently has coverage of over 75% (we will add a badge soon). To run the tests yourself, you will need to set up the relevant source RDBMS (Oracle or MSSQL) and S3.
## License

MIT License
Copyright (c) 2016 DevOp5
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.