sqldump-to

v1.0.3

Stream SQL dump to newline-delimited JSON


This stdin-compatible command-line tool takes a SQL dump stream and converts it to newline-delimited JSON. In the future this module may support other output formats and additional features (with your help).

  • Convert SQL dumps to newline-delimited JSON for import into BigQuery or other tools.
  • Output the JSON schema to a file. Request export format
  • Stream-based processing makes efficient use of resources (low memory/disk requirements).
  • Use multiple worker processes to increase performance/conversion speed.
  • stdin/stdout compatible.
  • Supports MySQL and MariaDB SQL dump files and schemas. Request dump format

Get Started

Installation

npm install -g sqldump-to

Usage

To use, simply pipe a MySQL-compatible database dump to the tool. The schema is read from the dump, and the output is newline-delimited JSON with object keys matching the column names of your tables.
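The idea behind the conversion can be sketched as follows. This is a simplified, hypothetical illustration of the concept only, not the tool's actual parser: a real SQL dump requires proper handling of quoting, escapes, NULLs, and multi-row INSERT statements.

```python
import json
import re

# Simplified sketch: pull column names from CREATE TABLE, then emit one
# JSON object per INSERT statement. Not the tool's real parser.
def dump_to_ndjson(dump: str) -> str:
    # Column names: lines that begin with a backtick-quoted identifier.
    cols = re.findall(r"^\s*`(\w+)`", dump, re.MULTILINE)
    out = []
    # One tuple per INSERT ... VALUES (...); statement.
    for values in re.findall(r"VALUES\s*\(([^)]*)\);", dump):
        fields = [v.strip().strip("'") for v in values.split(",")]
        out.append(json.dumps(dict(zip(cols, fields))))
    return "\n".join(out)

dump = """CREATE TABLE `users` (
  `id` int NOT NULL,
  `name` varchar(50)
);
INSERT INTO `users` VALUES (1,'Ada');
INSERT INTO `users` VALUES (2,'Lin');
"""
print(dump_to_ndjson(dump))
# {"id": "1", "name": "Ada"}
# {"id": "2", "name": "Lin"}
```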

Examples

# Output from dump file to newline delimited JSON on stdout
cat tablename.sql | sqldump-to
# Dump table directly using mysqldump to JSON file
mysqldump -u user -psecret dbname tablename | sqldump-to > tablename.json
# Dump entire database directly using mysqldump to JSON files in output dir
mysqldump -u user -psecret dbname | sqldump-to -d ./output
# Track progress from gzipped dump to newline delimited JSON to a file
pv tablename.sql.gz | gunzip -c | sqldump-to > tablename.json
# Output to a specific directory from a download stream
curl http://dumps.mydumps.com/tablename.sql.gz | gunzip -c | sqldump-to -d ./output
# Output to gzipped json file with a separate schema file from a download stream
curl http://dumps.mydumps.com/tablename.sql.gz | gunzip -c | sqldump-to -s | gzip -9 > tablename.json.gz

Flags

--dir-output=<path>, -d

Output to a file in a specific directory. The filename will be {tablename}.json. Selecting this option disables writing to stdout in favour of writing to disk.

# Output newline delimited JSON to ./output/tablename.json
cat tablename.sql | sqldump-to -d ./output

--write-workers=<number of workers>, -w

Adds extra write workers and splits the output into separate files. Only works when writing to disk (i.e. when --dir-output is given).

You will probably want to experiment with different values to optimize processing speed. Filenames will be {tablename}_0.json, {tablename}_1.json, etc.

# Use 2 workers to output ./output/tablename_0.json and ./output/tablename_1.json
cat tablename.sql | sqldump-to -d ./output -w 2
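One way to picture the multi-worker output is records distributed across the numbered files. The tool's actual distribution strategy is not documented here; round-robin across files is an assumption made purely for illustration:

```python
import json
import os
import tempfile

# Hypothetical sketch: split one NDJSON stream round-robin across N
# worker files named {tablename}_0.json, {tablename}_1.json, ...
# (the real tool's split strategy may differ).
def split_ndjson(lines, table, out_dir, workers=2):
    paths = [os.path.join(out_dir, f"{table}_{w}.json") for w in range(workers)]
    handles = [open(p, "w") for p in paths]
    for i, line in enumerate(lines):
        handles[i % workers].write(line + "\n")
    for h in handles:
        h.close()
    return paths

with tempfile.TemporaryDirectory() as d:
    rows = [json.dumps({"id": n}) for n in range(4)]
    for p in split_ndjson(rows, "tablename", d, workers=2):
        print(os.path.basename(p), open(p).read().splitlines())
```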

--schema, -s

Output the detected schema as JSON to a file. Filename will be {tablename}_schema.json.

If --dir-output is not set, the schema file will be written to the current directory; otherwise it will be written to the directory specified by --dir-output.

# Output to stdout
# Write embedded schema to ./tablename_schema.json
cat tablename.sql | sqldump-to -s
# Output to ./output/tablename.json
# Write Standard SQL schema to ./output/tablename_schema.json
cat tablename.sql | sqldump-to -d ./output -s standard

--input=<dumpfile>, -i

Specify a filename instead of piping to stdin.

# Output newline delimited JSON to ./output/tablename.json
sqldump-to -i tablename.sql -d ./output

License

The MIT License (MIT)
Copyright (c) 2019 Arjun Mehta