# 6xs

v0.5.0
**6xs** stands for Simple Storage Service Static Site Sync. It takes your `/public` directory (or whatever you call it) and pushes its contents (optionally matched with `node-glob` patterns) into a selected S3 bucket.
It can also:

- remove remote files that are not found in your local directory
- create an invalidation for a chosen CloudFront distribution
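The surplus-removal step can be pictured as a set difference between remote object keys and local file paths. The sketch below is illustrative only, not 6xs internals; `surplusKeys` is a hypothetical helper:

```javascript
// Illustrative only: which remote keys would be deleted, given the local
// file list. 6xs's real implementation may differ.
function surplusKeys(localFiles, remoteKeys) {
  var local = {};
  localFiles.forEach(function (f) { local[f] = true; });
  // keep the remote keys that have no local counterpart
  return remoteKeys.filter(function (key) { return !local[key]; });
}

console.log(surplusKeys(
  ['index.html', 'css/site.css'],
  ['index.html', 'css/site.css', 'old/page.html']
));
// [ 'old/page.html' ]
```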
## Usage

This usage example presents all available configuration options:
```js
var sync = require('6xs');
var path = require('path');

sync({
  // defaults to: process.cwd() + '/public'
  base: path.join(__dirname, 'public'),
  // defaults to: '**'
  patterns: ['*.html', 'font/*'],
  // defaults to noop
  logger: function () {
    return console.log.apply(console, arguments);
  },
  // custom mappings between file extension and content type;
  // if not provided, libmagic is used for detection
  // (the values below are used by default):
  contentTypeMap: {
    html: 'text/html',
    css: 'text/css',
    js: 'application/javascript',
    json: 'application/json'
  },
  aws: {
    // must be provided:
    access_key_id: 'abcdef...',
    secret_access_key: 'xyz987...',
    // defaults:
    ssl: true,
    retries: 3,
    concurrency: 10
  },
  s3: {
    // must be provided:
    region: 'eu-west-1',
    bucket: 'your-bucket-name',
    // defaults to false:
    remove_remote_surplus: true,
    // defaults:
    max_age: 365,
    s_max_age: 1
  },
  // if a distribution ID is provided,
  // its content will be invalidated after the upload
  cf_distribution_id: 'qwerty...'
}, function (err, uploadedFiles) {
  // the callback is optional;
  // if the upload was successful, err will be null
  // and uploadedFiles is an array of paths
});
```
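The `max_age` and `s_max_age` values are expressed in days. Assuming they end up as second-based directives in the `Cache-Control` header (the exact header 6xs emits may differ), the defaults translate like this; `cacheControl` is a hypothetical helper, not part of the 6xs API:

```javascript
// Assumption: day-based settings become second-based Cache-Control directives.
var DAY_IN_SECONDS = 24 * 60 * 60; // 86400

function cacheControl(maxAgeDays, sMaxAgeDays) {
  return 'max-age=' + (maxAgeDays * DAY_IN_SECONDS) +
         ', s-maxage=' + (sMaxAgeDays * DAY_IN_SECONDS);
}

console.log(cacheControl(365, 1));
// 'max-age=31536000, s-maxage=86400'
```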
## CLI usage

```
$ 6xs <settings/options>
```
This will upload the current working directory to the specified S3 bucket.
### Required settings

```
-i, --id      AWS Access Key ID
-s, --secret  AWS Secret Access Key
-b, --bucket  AWS S3 Bucket name
-r, --region  AWS region
```
### Options

```
-p,  --patterns        Glob patterns of the files to upload
                       default: **
                       e.g. *.html
                       e.g. *.html,fonts/*
-ma, --max-age         Cache-Control max-age header, in days
                       default: 365
-sa, --s-max-age       Cache-Control s-maxage header, in days
                       default: 1
     --retries         Number of retries
                       default: 3
     --concurrency     Number of concurrent uploads
                       default: 10
     --remove-surplus  Remove remote files that are not
                       found in your local directory
     --no-ssl          Don't use SSL
-cf, --cloudfront      The distribution ID to invalidate
```
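Note that `--patterns` takes a comma-separated list, while the programmatic API takes an array. Presumably the CLI simply splits on commas; the snippet below is an assumption about that parsing, shown for illustration:

```javascript
// Assumption: the CLI turns '-p *.html,fonts/*' into the array form
// accepted by the programmatic `patterns` option.
var patterns = '*.html,fonts/*'.split(',');
console.log(patterns);
// [ '*.html', 'fonts/*' ]
```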
### Examples

```
$ 6xs -i I2B -s KPAvL4GR -b my-s3-site.gov -r us-west-2 --remove-surplus
Uploading: ...
```
## Contributing
Pull requests and/or issue reports are warmly welcomed!
### Running tests

```
$ npm run test
$ npm run coverage
```
### Running integration tests locally

The Travis build won't run integration tests if your PR originates in a fork.

You'll need to provide four environment variables to run the integration tests locally. The user identified by the access key must have a policy assigned that allows access to the S3 bucket.

```
$ AWS_ACCESS_KEY_ID=key-id \
  AWS_SECRET_ACCESS_KEY=secret \
  S3_REGION=your-region \
  S3_BUCKET=your-test-bucket \
  npm run test-integration
```
If you understand the implications, you can copy `integration-test.sh.dist` and adjust it to your needs.
## Contributors

## License

MIT