

jac-s3-sync-aws

A streaming upload tool for Amazon S3, forked from s3-sync-aws (https://github.com/andreialecu/s3-sync-aws). The difference from the original is that this fork adds a dest option.

Installation

npm install jac-s3-sync-aws

Usage

require('jac-s3-sync-aws').createStream([db, ]options)

Creates an upload stream. Passes its options to aws-sdk, so at a minimum you'll need:

  • key or accessKeyId: Your AWS access key.
  • secret or secretAccessKey: Your AWS secret.
  • bucket: The bucket to upload to.
  • region: The region the bucket is in.

The following are also specific to jac-s3-sync-aws:

  • dest: Upload folder (the option added by this fork; see the sketch after this list).
  • concurrency: The maximum number of files to upload concurrently.
  • retries: The maximum number of times to retry uploading a file before failing. Defaults to 7.
  • headers: Additional parameters for each file, see S3 docs.
  • hashKey: By default, file hashes are stored based on the file's absolute path. This doesn't work very nicely with temporary files, so you can pass this function in to map the file object to a string key for the hash.
  • acl: Use a custom ACL header. Defaults to public-read.
  • force: Force s3-sync-aws to overwrite any existing files. Not generally required, since we store a hash and compare it to detect updated files.

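As a rough sketch of how these fit together, a stream using several of the options above might be created like this (the bucket name, region, folder and hash-key logic are illustrative assumptions, not values from this package):

var s3sync = require('jac-s3-sync-aws')

var stream = s3sync({
    key: process.env.AWS_ACCESS_KEY
  , secret: process.env.AWS_SECRET_KEY
  , bucket: 'sync-testing'     // placeholder bucket
  , region: 'us-east-1'        // placeholder region
  , dest: 'target_folder'      // upload folder (the option this fork adds)
  , concurrency: 16
  , retries: 3                 // instead of the default 7
  , acl: 'private'             // override the default public-read ACL
  , hashKey: function(file) {
      // key the hash cache on a relative path rather than the absolute
      // path (assumed file shape; useful for temporary files)
      return file.path
    }
})
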
You can also store your local cache in S3, provided you pass the following options, and use getCache and putCache (see below) before/after uploading:

  • cacheDest: The path in S3 to upload your cache backup to.
  • cacheSrc: The local, temporary text file to stream to before uploading to S3.

If you want more control over the files you're uploading and where they end up, you can write file objects directly to the stream, e.g.:

var stream = s3sync({
    key: process.env.AWS_ACCESS_KEY
  , secret: process.env.AWS_SECRET_KEY
  , bucket: 'sync-testing'
})

stream.write({
    src: __filename
  , dest: '/uploader.js'
})

stream.end({
    src: __dirname + '/README.md'
  , dest: '/README.md'
})

Where src is the absolute local file path, and dest is the location to upload the file to on the S3 bucket.

db is an optional argument - pass it a level database and it'll keep a local cache of file hashes, keeping S3 requests to a minimum.

stream.putCache(callback)

Uploads your level cache, if available, to the S3 bucket. This means that your cache only needs to be populated once.

stream.getCache(callback)

Streams a previously uploaded cache from S3 to your local level database.
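
Putting the cache pieces together, a sketch of a run that restores the cache before uploading and backs it up again afterwards might look like this (the bucket, cache paths and database location are illustrative assumptions):

var level = require('level')
  , s3sync = require('jac-s3-sync-aws')
  , readdirp = require('readdirp')

var db = level(__dirname + '/cache')

var uploader = s3sync(db, {
    key: process.env.AWS_ACCESS_KEY
  , secret: process.env.AWS_SECRET_KEY
  , bucket: 'sync-testing'                      // placeholder bucket
  , cacheDest: '.cache/s3-sync-cache.txt'       // assumed S3 path for the cache backup
  , cacheSrc: __dirname + '/s3-sync-cache.txt'  // assumed local temp file for the cache
})

// Restore the previously uploaded cache (if any), upload the files,
// then push the refreshed cache back up to S3 once uploading is done.
uploader.getCache(function(err) {
  if (err) throw err

  readdirp({ root: __dirname, directoryFilter: ['!.git', '!cache'] })
    .pipe(uploader)
    .on('end', function() {
      uploader.putCache(function(err) {
        if (err) throw err
      })
    })
})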

stream.on('fail', callback)

Emitted when a file fails to upload. It fires on each failed attempt, so it may be emitted multiple times for the same file while that file is being retried.
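
For example, a handler that just logs failures might look like this (exactly what gets passed to the callback isn't documented here, so the logging below is deliberately generic):

stream.on('fail', function(file) {
  // fires on every failed attempt, so the same file can show up
  // more than once while it is being retried
  console.error('upload failed:', file)
})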

Example

Here's an example using level and readdirp to upload a local directory to an S3 bucket:

var level = require('level')
  , s3sync = require('jac-s3-sync-aws')
  , readdirp = require('readdirp')

// To cache the S3 HEAD results and speed up the
// upload process. Usage is optional.
var db = level(__dirname + '/cache')

var files = readdirp({
    root: __dirname
  , directoryFilter: ['!.git', '!cache']
})

// Takes the same options arguments as `aws-sdk`,
// plus some additional options listed above
var uploader = s3sync(db, {
    key: process.env.AWS_ACCESS_KEY
  , secret: process.env.AWS_SECRET_KEY
  , bucket: 'sync-testing'
  , concurrency: 16
  , dest: 'target_folder'
  , prefix: 'mysubfolder/' // optional prefix for files on S3
}).on('data', function(file) {
  console.log(file.fullPath + ' -> ' + file.url)
})

files.pipe(uploader)

You can find another example which includes remote cache storage at example.js.