
cacti

v0.0.8 · 91 downloads

:cactus: Extremely simple MongoDB/Redis backups to Amazon S3 with encryption and compression

Table of Contents

  • Install
  • Usage
  • API
  • Options
  • Integrations
  • Amazon Glacier
  • Frequently Asked Questions
  • How does it work
  • References
  • Contributors
  • License

Install

npm:

npm install cacti

yarn:

yarn add cacti

Usage

Redis Permission Requirements

You must ensure that the user running the CLI or interacting with the API has permission to access your Redis database backup file path.

Note that if you have changed the paths below from the defaults provided then you'll need to adjust them.

  • Mac: You don't need to do anything (assuming you installed Redis with brew install redis and have default permissions setup)

  • Ubuntu: Run the commands below and replace user with your currently logged in username (type whoami to get this)

    sudo adduser user redis
    sudo chown -R redis:redis /etc/redis
    sudo chown -R redis:redis /var/lib/redis
    sudo chmod g+wr /etc/redis/redis.conf
    sudo chmod g+wr /var/lib/redis/dump.rdb
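After logging out and back in (group membership only takes effect on a new login), a quick sanity check along these lines can confirm access. This is a sketch using the default Ubuntu paths above; adjust if yours differ:

```shell
# Sketch: check write access to the default Redis paths on Ubuntu.
# Adjust the paths if you changed them in redis.conf.
STATUS=""
for f in /etc/redis/redis.conf /var/lib/redis/dump.rdb; do
  if [ -w "$f" ]; then
    STATUS="$STATUS writable:$f"
  else
    STATUS="$STATUS not-writable-or-missing:$f"
  fi
done
echo "$STATUS"
```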

CLI

Coming soon

API

If you want to backup all databases:

const Cacti = require('cacti');

const cacti = new Cacti('my-s3-bucket-name');

// backup mongo and redis and upload to amazon s3
cacti.backup().then(console.log).catch(console.error);

// simply run mongodump to create a mongo backup file
cacti.mongo().then(console.log).catch(console.error);

// simply run bgsave to create a redis backup file
cacti.redis().then(console.log).catch(console.error);

If you want to backup only a specific database:

const Cacti = require('cacti');
const cacti = new Cacti('my-s3-bucket-name', { mongo: '--db=some_database' });
cacti.backup().then(console.log).catch(console.error);

new Cacti(bucket, options)

Note that you can also create a new Cacti instance with just new Cacti(options) (but make sure you specify options.aws.params.Bucket if so).

If you do not specify a bucket name, an error will be thrown.

cacti.backup(tasks)

Returns a Promise that resolves with the S3 upload response or rejects with an Error object.

The argument tasks is an optional Array and defaults to [ 'mongo', 'redis' ].

By default, this method runs cacti.mongo(), cacti.redis(), and for each it then runs cacti.tar() and cacti.upload().
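The flow described above can be sketched roughly like this (an assumption about the orchestration, not cacti's actual source; the `steps` object stands in for the real methods):

```javascript
// Sketch (assumed flow, not cacti's source) of backup(tasks):
// dump each store, then tar the dump directory and upload the tarball.
async function backup(tasks = ['mongo', 'redis'], steps) {
  const results = [];
  for (const task of tasks) {
    const dir = await steps[task]();                 // e.g. run mongodump / bgsave
    const tarball = await steps.tar(dir);            // gzip the dump directory
    results.push(await steps.upload(dir, tarball));  // send the tarball to S3
  }
  return results;
}
```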

cacti.mongo()

Returns a Promise that resolves with the file path to the MongoDB backup or rejects with an Error object.

cacti.redis()

Returns a Promise that resolves with the file path to the Redis backup or rejects with an Error object.

cacti.upload(dir, filePath)

Returns a Promise that resolves with the S3 upload response or rejects with an Error object.

This method is used by cacti.backup(). It will automatically remove the dir argument from the filesystem.

cacti.tar(dir)

Returns a Promise that resolves with the file path to the gzipped tarball of dir.

This method is used by cacti.backup(). It will automatically remove the dir argument from the filesystem.

cacti.getRedisBgSaveFilePath(lastSave)

Returns a Promise that resolves with a temporary file path to copy the RDB file to.

The argument lastSave is a UNIX timestamp read via lastsave before bgsave is issued; it is compared against subsequent lastsave values to detect when the background save has completed.

This temporary file path contains an ISO-8601 file name based upon the redis-cli lastsave output.
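For illustration, converting a lastsave UNIX timestamp into such an ISO-8601 file name might look like this (a sketch; the exact naming in cacti may differ):

```javascript
// Sketch: derive an ISO-8601 file name from the UNIX timestamp
// (in seconds) that `redis-cli lastsave` returns. Example value only.
const lastSave = 1522702800;
const fileName = `${new Date(lastSave * 1000).toISOString()}.dump.rdb`;
console.log(fileName); // "2018-04-02T21:00:00.000Z.dump.rdb"
```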

This method is used by cacti.redis() in combination with the option redisBgSaveCheckInterval.
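The redisBgSaveCheckInterval option suggests a polling loop along these lines (an assumed sketch, not cacti's source; getLastSave stands in for a redis-cli lastsave call):

```javascript
// Sketch (assumption): after issuing BGSAVE, poll LASTSAVE until its
// timestamp advances past the value recorded before BGSAVE, at which
// point the dump file on disk is complete.
function waitForBgSave(getLastSave, before, intervalMs = 300) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      const now = getLastSave();
      if (now > before) {
        clearInterval(timer);
        resolve(now); // the new LASTSAVE timestamp
      }
    }, intervalMs);
  });
}
```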

Options

The default option values are provided below, and can be overridden through both the CLI and API.

The only required option is bucket (but this is only checked in the upload method), which is the Amazon S3 bucket name you'll upload backups to.

By default, bucket falls back to process.env.CACTI_AWS_BUCKET when that environment variable is set and bucket is not a String.
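That fallback can be sketched as follows (an assumption based on the description above; the real check may differ):

```javascript
// Sketch (assumption) of the bucket fallback described above.
function resolveBucket(bucket) {
  if (typeof bucket === 'string') return bucket;
  // fall back to the environment variable when no string bucket is given;
  // may be undefined, in which case upload() would throw
  return process.env.CACTI_AWS_BUCKET;
}
```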

Note that your AWS access key ID and secret access key are required as well, but we inherit the standardized values from process.env.AWS_ACCESS_KEY_ID and process.env.AWS_SECRET_ACCESS_KEY respectively. If those environment variables are not set you will either need to set them, pass them before running cacti, or specify them through the API options below.

CLI

Coming soon

API

Options are passed when creating a new Cacti(bucket, options) instance (option names are camelCase).

const cacti = new Cacti('bucket', {
  // s3 base directory
  directory: 'cacti',
  // s3 directory for mongo backup
  mongoDirectory: 'mongo',
  // s3 directory for redis backup
  redisDirectory: 'redis',
  // mongodump options/flags
  // note that if `process.env.DATABASE_NAME` is set and you do not
  // pass this option, it defaults to `--db=${process.env.DATABASE_NAME}`
  mongo: '',
  // redis-cli options/flags
  redis: '',
  // platform specific path to redis.conf
  redisConfPath:
    os.platform() === 'darwin'
      ? '/usr/local/etc/redis.conf'
      : '/etc/redis/redis.conf',
  // interval in ms between checks that bgsave has completed
  redisBgSaveCheckInterval: 300,
  // aws configuration object to pass to aws-sdk
  aws: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  }
});

Integrations

All examples below show how to create a backup every hour.

Agenda

const Cacti = require('cacti');
const Agenda = require('agenda');

const cacti = new Cacti('my-s3-bucket-name');
const agenda = new Agenda();

agenda.define('cacti', async (job, done) => {
  try {
    await cacti.backup();
    done();
  } catch (err) {
    done(err);
  }
});

agenda.every('hour', 'cacti');

Kue

You might want to use kue-scheduler to make scheduling jobs easier.

const Cacti = require('cacti');
const kue = require('kue');

const queue = kue.createQueue();
const cacti = new Cacti('my-s3-bucket-name');

queue.process('cacti', async (job, done) => {
  try {
    await cacti.backup();
    done();
  } catch (err) {
    done(err);
  }
});

setInterval(() => {
  queue.create('cacti').save();
}, 1000 * 60 * 60);

cron

const Cacti = require('cacti');
const { CronJob } = require('cron');

const cacti = new Cacti('my-s3-bucket-name');

// run at the top of every hour
new CronJob('0 * * * *', async () => {
  try {
    await cacti.backup();
  } catch (err) {
    console.error(err);
  }
}, null, true);

crontab

NOTE: This will not work until the CLI is released

  1. Edit your crontab:
crontab -e
  2. Add a new line:
# run cacti every day at midnight to backup mongo/redis
0 0 * * * /bin/bash cacti backup --bucket 'my-s3-bucket-name'

Amazon Glacier

You will probably want to configure Amazon S3 to automatically archive to Amazon Glacier after a period of time.

See https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/ for more information.

Frequently Asked Questions

How do I download and restore a backup

You can download a backup from the Amazon S3 console or use awscli.

MongoDB

You can simply use the mongorestore command.

  1. Stop your mongo server:
  • Mac: brew services stop mongo
  • Ubuntu: sudo systemctl stop mongo
  2. Download your backup from Amazon S3:
wget -O archive.gz https://s3.amazonaws.com/my-bucket/xx-xx-xxxx-xx:xx:xx.archive.gz
  3. Import the backup to MongoDB:
mongorestore --gzip --archive=archive.gz
  4. Start your mongo server:
  • Mac: brew services start mongo
  • Ubuntu: sudo systemctl start mongo

Redis

  1. Stop your redis server:
  • Mac: brew services stop redis
  • Ubuntu: sudo systemctl stop redis-server
  2. Download your backup from Amazon S3:
wget -O dump.rdb https://s3.amazonaws.com/my-bucket/xx-xx-xxxx-xx:xx:xx.dump.rdb
  3. Move the downloaded dump.rdb file into place:
mv dump.rdb /var/lib/redis/dump.rdb
  4. Ensure permissions are set properly for the redis user:
chown redis:redis /var/lib/redis/dump.rdb
  5. Start your redis server:
  • Mac: brew services start redis
  • Ubuntu: sudo systemctl start redis-server

How does it work

MongoDB

Cacti uses mongodump for creating backups.

https://docs.mongodb.com/manual/reference/program/mongodump/

Redis

Cacti uses redis-cli with the bgsave command for creating backups.

https://redis.io/commands/bgsave

Amazon S3

Cacti uses aws-sdk and uploads to S3 a gzipped tarball using server-side AES256 encryption.

https://github.com/aws/aws-sdk-js
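The upload parameters implied by this (bucket, a key under the configured directories, AES256 server-side encryption) might look like the following sketch; the key layout and names here are illustrative assumptions, not cacti's actual source:

```javascript
// Sketch (assumed shape): parameters an aws-sdk v2 S3 upload would take.
const params = {
  Bucket: 'my-s3-bucket-name', // placeholder bucket name
  // illustrative key: base directory / store directory / timestamped tarball
  Key: 'cacti/mongo/2018-04-02T21:00:00.000Z.tar.gz',
  // Body would be a read stream of the gzipped tarball in real use
  ServerSideEncryption: 'AES256' // the server-side encryption described above
};
```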

References

Contributors

| Name       | Website                  |
| ---------- | ------------------------ |
| Nick Baugh | http://niftylettuce.com/ |

License

MIT © Nick Baugh