
databricks-sql-cli

Helper CLI to apply migrations to a Databricks SQL warehouse

Overview:

This tool simplifies the process of creating and applying SQL migrations for your Databricks tables. It manages migration versions, ensuring a smooth and predictable workflow when evolving your data schema.
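For orientation, here is a hypothetical layout after a couple of migrations have been created (the folder name and timestamp format are illustrative assumptions; the tool's actual defaults may differ):

migrations/
  20240101120000_create_users_table.sql
  20240105093000_add_email_to_users.sql

Since versions are timestamp-based, apply can run the files in chronological order and skip any version already recorded in the tracking table.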

Installation:

Install databricks-sql-cli as a development dependency with npm or yarn:

npm i --save-dev databricks-sql-cli
yarn add databricks-sql-cli --dev
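Because the package is installed as a dev dependency rather than globally, a common pattern (an assumption, not something this README prescribes) is to invoke the dbsql binary through npx or a package.json script:

npx dbsql --help

"scripts": {
  "migrate": "dbsql apply"
}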

Usage:

The tool is invoked using the following syntax:

dbsql <command> [options]

Available Commands:

  • init: Initializes the migration environment by creating a dedicated folder for migration files and a table to track migration versions.
  • create: Generates a new SQL migration file with a timestamped filename. You can optionally specify a custom name using the -n flag (see the example after this list).
  • apply: Applies all unapplied migrations in ascending order based on their version numbers.
  • reset: Drops the schema and then reapplies all migrations from scratch.
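For example, dbsql create -n my_migration_script might produce a file like the following (the exact filename format and contents are illustrative assumptions; the body is ordinary Databricks SQL that you edit yourself):

-- migrations/20240101120000_my_migration_script.sql (hypothetical name)
CREATE TABLE IF NOT EXISTS users (
  id BIGINT,
  email STRING,
  created_at TIMESTAMP
);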

Options:

  • -h, --host: Specifies the hostname of your Databricks workspace. (Optional)
  • -p, --path: Defines the path to your Databricks workspace directory. (Optional)
  • -t, --token: Provides your Databricks workspace access token. (Optional)
  • -c, --catalog: Sets the catalog for your Databricks workspace. (Optional)
  • -s, --schema: Defines the schema within your Databricks workspace for migrations. The schema is created if it does not exist. (Optional)
  • -e, --env: Sets the path to an environment file containing Databricks connection details. Defaults to .env in the current working directory. Supports environment variable expansion.
  • --noEnvFile: Skips loading an environment file. When set, any path given via -e, --env is ignored and the tool relies solely on environment variables already set in the shell.
  • -h, --help: Displays the usage information and available options.

NOTE

If any of the -h, --host, -p, --path, -s, --schema, -c, --catalog, or -t, --token options are provided, the --noEnvFile flag is ignored and the tool uses the values supplied on the command line. This lets you provide connection details explicitly without relying on environment variables.
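To illustrate those precedence rules (all values are placeholders):

# No connection options given: skip the .env file and read the
# DATABRICKS_* variables from the current shell environment
dbsql apply --noEnvFile

# Connection options are given, so --noEnvFile is ignored
dbsql apply --noEnvFile --host your-databricks-host --token your-databricks-access-token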

Configuration:

You can configure the tool through a .env file in your project directory containing key-value pairs for the connection details. Alternatively, set the variables directly in your shell and run the tool with the --noEnvFile flag so the values are read from the environment rather than from a file.

DATABRICKS_HOST=your-databricks-host
DATABRICKS_PATH=your-databricks-path
DATABRICKS_TOKEN=your-databricks-access-token
DATABRICKS_CATALOG=your-databricks-catalog
DATABRICKS_SCHEMA=your-databricks-schema

You can also use the --env option to specify a custom path to an environment file. The tool will automatically load these values when running commands.
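For example, to load connection details from a file outside the project root (a hypothetical path):

dbsql apply --env ./config/production.env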

Finally, you can pass these values directly as command-line options when invoking the tool:

dbsql apply --host your-databricks-host --path your-databricks-path --token your-databricks-access-token --catalog your-databricks-catalog --schema your-databricks-schema

Example Usage:

  1. Initializing the migration environment:
     dbsql init
  2. Creating a new migration file:
     dbsql create -n my_migration_script
  3. Applying unapplied migrations:
     dbsql apply
  4. Resetting the schema and reapplying migrations (use with caution):
     dbsql reset

Error Handling:

The tool provides informative error messages if invalid commands or missing arguments are encountered. It also exits with a non-zero status code in case of errors.
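Because of the non-zero exit code, a failed migration can halt a deploy script or CI step. A minimal sketch:

# Abort the surrounding script if any migration fails
dbsql apply || { echo "dbsql apply failed" >&2; exit 1; }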

Additional Notes:

  • Ensure you have the necessary permissions to access and modify objects within your Databricks workspace.
  • The migration files follow a specific naming convention based on timestamps, making them easy to identify and track.

Bootstrapped with: create-ts-lib-gh

This project is MIT Licensed.