

dynamodb-migrator

[![Build Status](https://travis-ci.org/mapbox/dynamodb-migrator.svg?branch=master)](https://travis-ci.org/mapbox/dynamodb-migrator)

For migration and cleanup operations on your DynamoDB table

Usage

Write a migration script

Write a module that exports a function that will run over each record in your table. Optionally, you may also define a finish routine that will be executed once the table migration is complete. Here's an example:

var deleted = 0;

module.exports = function(record, dyno, callback) {
  if (record.flag !== 'delete-me') return callback();

  console.log('%s flagged for deletion', record.id);

  // If you are running a dry-run, `dyno` will be null
  if (!dyno) return callback();

  dyno.deleteItem({ id: record.id }, function(err) {
    if (err) {
      console.error('%s failed to delete', record.id);

      // Sending an error to the callback function will stop the migration
      return callback(new Error('A record failed to delete'));
    }

    deleted++;
    callback();
  });
}

module.exports.finish = function(dyno, callback) {
  console.log('Deleted %s records', deleted);
  callback();
}

Decide on stream vs. scan mode

Streaming mode feeds records into your migration script from a file. This is useful for testing your migration against a backup of your database, or when you know a small subset of records that you want your migration to impact. The file should consist of line-delimited JSON strings.

Scan mode is where the table is scanned and each record in it is fed to your migration script.
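To make the stream-mode input concrete, here is a small sketch of producing a line-delimited JSON file's contents from an array of records. The record shapes are invented for illustration:

```javascript
// Hypothetical records; any JSON-serializable objects would do.
var records = [
  { id: 'record-1', flag: 'keep-me' },
  { id: 'record-2', flag: 'delete-me' }
];

// Line-delimited JSON: one JSON object per line, joined by newlines.
var ndjson = records.map(function(record) {
  return JSON.stringify(record);
}).join('\n');

// Each line can be parsed back independently.
var parsed = ndjson.split('\n').map(JSON.parse);
```

Writing `ndjson` to a file yields input suitable for stream mode.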

Do you need to prefilter?

If you're running in stream mode from something like a prior database dump, you may want to prefilter the dump so that it only includes the records you're interested in. The dynamodb-filter command is provided to help with this.

  1. Write a filter function. It will be passed one argument: a single line (as a string) from the original file. The function should return true or false, indicating whether that line should be written into your filtered output. Export it from a Node.js module as the module's exports. For example:

    module.exports = function(line) {
      // I only care about ham.
      return line.indexOf('ham') > -1;
    };
  2. Run dynamodb-filter:

    # dynamodb-filter <input file path> <filter function path> [--output <output file path>]
    $ dynamodb-filter ./some-dump.gz ./my-script.js > ./some-filtered-dump.gz
    $ dynamodb-filter ./some-dump.gz ./my-script.js --output ./some-filtered-dump.gz

Specify type of JSON

Pass the --dyno flag to the migrator if your input JSON objects are in a format suitable for direct usage in dyno. Otherwise, it is assumed that the objects are formatted using standard DynamoDB syntax.
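To illustrate the difference, here is a hedged sketch showing the same item in both formats; the item and the `untype` helper are invented for illustration and are not part of the migrator:

```javascript
// Plain JSON, usable directly with dyno (pass --dyno):
var dynoStyle = { id: 'record-1', count: 5 };

// Standard DynamoDB syntax, with explicit type annotations
// (the default assumption when --dyno is not passed):
var dynamodbStyle = {
  id:    { S: 'record-1' },  // S = string
  count: { N: '5' }          // N = number, transported as a string
};

// A naive converter from typed to plain form, for illustration only;
// it handles just string and number attributes.
function untype(item) {
  var plain = {};
  Object.keys(item).forEach(function(key) {
    var typed = item[key];
    if (typed.S !== undefined) plain[key] = typed.S;
    if (typed.N !== undefined) plain[key] = Number(typed.N);
  });
  return plain;
}
```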

Do a dry-run

Run your migration without impacting any records to check that your conditions are filtering as you expect them to. Remember that your migration script will not receive a dyno object in this case.

$ dynamodb-migrate scan us-east-1/my-table ./my-migration-script.js

When the migration is complete, it will print the paths to your info and error logs.

Do it for real

Specify the --live flag to run the migration once and for all.

$ dynamodb-migrate scan us-east-1/my-table ./my-migration-script.js --live

Help

Usage: dynamodb-migrate <method> <database> <script>

method: either scan or stream to read records from the database or from stdin, respectively
database: region/name of the database to work against
script: relative path to a migration script

Options:
 - concurrency [1]: number of records to process in parallel
 - live [false]: if not specified, the migration script will not receive a database reference
 - dyno [false]: if not specified, it is assumed that the objects are formatted using standard DynamoDB syntax. Pass the `--dyno` flag to the migrator if your input JSON objects are in a format suitable for direct usage in dyno (https://github.com/mapbox/dyno)
 - rate [false]: log information about the rate at which migration is running. Will interfere with a migration script's logs
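To picture what the concurrency option controls, here is a minimal sketch of a fixed-size worker pool that processes records in parallel and stops on the first error. This is an illustration only, not the migrator's actual implementation:

```javascript
// Illustrative worker pool: at most `concurrency` records in flight at once.
function processAll(records, concurrency, worker, done) {
  var next = 0;       // index of the next record to hand out
  var active = 0;     // number of records currently being processed
  var finished = false;

  function launch() {
    while (active < concurrency && next < records.length) {
      active++;
      worker(records[next++], function(err) {
        active--;
        if (finished) return;
        if (err) {
          // First error stops the run, mirroring the migrator's behavior
          // of halting when a script's callback receives an error.
          finished = true;
          return done(err);
        }
        if (active === 0 && next >= records.length) {
          finished = true;
          return done(null);
        }
        launch();
      });
    }
  }

  if (records.length === 0) return done(null);
  launch();
}
```

With `concurrency` set to 1 (the default), records are processed strictly one at a time; higher values trade DynamoDB read/write pressure for throughput.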