

SQL DDL to Mongoose Schema converter


Transforms SQL DDL statements into Mongoose Schema format (as well as JSON Schema and a compact JSON format).

CREATE TABLE users (
  id INT(11) NOT NULL AUTO_INCREMENT,
  nickname VARCHAR(255) NOT NULL,
  deleted_at TIMESTAMP NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
  updated_at TIMESTAMP,
  PRIMARY KEY (id)
) ENGINE MyISAM COMMENT 'All system users';

ALTER TABLE users ADD UNIQUE KEY unq_nick (nickname);

Delivers an array of Mongoose Schema documents (one for each parsed table):

[
  {
    "title": "users",
    "description": "All system users",
    "columns": {
      "id": {
        "type": Number,
        "min": -1.5474250491067253e+26,
        "max": 1.5474250491067253e+26,
        "unique": true,
        "primary": true,
        "required": true,
        "autoincrement": true
      },
      "nickname": {
        "type": String,
        "maxlength": 255,
        "unique": true,
        "required": false
      },
      "deleted_at": {
        "type": String,
        "required": true
      },
      "created_at": {
        "type": String,
        "default": "CURRENT_TIMESTAMP",
        "required": false
      },
      "updated_at": {
        "type": String,
        "required": true
      }
    }
  }
]

Also an array of tables in a JSON Schema format:

[
  {
    "$schema": "http://json-schema.org/draft-07/schema",
    "$comment": "JSON Schema for users table",
    "$id": "users",
    "title": "users",
    "description": "All system users",
    "type": "object",
    "required": [
      "id",
      "nickname",
      "created_at"
    ],
    "definitions": {
      "id": {
        "$comment": "primary key",
        "type": "integer",
        "minimum": 1,
        "maximum": 1.5474250491067253e+26
      },
      "nickname": {
        "type": "string",
        "maxLength": 255
      },
      "deleted_at": {
        "type": "string"
      },
      "created_at": {
        "type": "string"
      },
      "updated_at": {
        "type": "string"
      }
    },
    "properties": {
      "id": {
        "$ref": "#/definitions/id"
      },
      "nickname": {
        "$ref": "#/definitions/nickname"
      },
      "deleted_at": {
        "$ref": "#/definitions/deleted_at"
      },
      "created_at": {
        "$ref": "#/definitions/created_at"
      },
      "updated_at": {
        "$ref": "#/definitions/updated_at"
      }
    }
  }
]

And an array of tables in a compact JSON format:

[
  {
    "name": "users",
    "columns": [
      {
        "name": "id",
        "type": {
          "datatype": "int",
          "width": 11
        },
        "options": {
          "nullable": false,
          "autoincrement": true
        }
      },
      {
        "name": "nickname",
        "type": {
          "datatype": "varchar",
          "length": 255
        },
        "options": {
          "nullable": false
        }
      },
      {
        "name": "deleted_at",
        "type": {
          "datatype": "timestamp",
          "fractional": 0
        },
        "options": {
          "nullable": true
        }
      },
      {
        "name": "created_at",
        "type": {
          "datatype": "timestamp",
          "fractional": 0
        },
        "options": {
          "nullable": false,
          "default": "CURRENT_TIMESTAMP"
        }
      },
      {
        "name": "updated_at",
        "type": {
          "datatype": "timestamp",
          "fractional": 0
        },
        "options": {
          "nullable": true
        }
      }
    ],
    "primaryKey": {
      "columns": [
        {
          "column": "id"
        }
      ]
    },
    "uniqueKeys": [
      {
        "columns": [
          {
            "column": "nickname"
          }
        ],
        "name": "unq_nick"
      }
    ],
    "options": {
      "comment": "All system users",
      "engine": "MyISAM"
    }
  }
]
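
Because the compact format is plain JSON, it is straightforward to post-process. As a small illustrative sketch (not part of the package's API), assuming compactJsonTablesArray is the array returned by toCompactJson() as shown in the Usage section below, you could list the NOT NULL columns of each table:

const requiredColumns = compactJsonTablesArray.map((table) => ({
  table: table.name,
  columns: table.columns
    .filter((column) => column.options.nullable === false)
    .map((column) => column.name),
}));

console.log(requiredColumns);
// e.g. [ { table: 'users', columns: [ 'id', 'nickname', 'created_at' ] } ]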

Currently, only DDL statements in the MySQL and MariaDB dialects are supported. Check out the roadmap.

Usage

yarn add sql-ddl-to-mongoose-schema
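
or, if you prefer npm:

npm install sql-ddl-to-mongoose-schema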


const Parser = require('sql-ddl-to-mongoose-schema');
const parser = new Parser('mysql');

const sql = `
CREATE TABLE users (
  id INT(11) NOT NULL AUTO_INCREMENT,
  nickname VARCHAR(255) NOT NULL,
  deleted_at TIMESTAMP NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
  updated_at TIMESTAMP,
  PRIMARY KEY (id)
) ENGINE MyISAM COMMENT 'All system users';

ALTER TABLE users ADD UNIQUE KEY unq_nick (nickname);
`;

/**
 * Output each table to a Mongoose Schema file in a given directory...
 */
parser.feed(sql)
  .toMongooseSchemaFiles(__dirname)
  .then((outputFilePaths) => {
    // ...
  });

/**
 * Or get the Mongoose Schema array if you need to modify it...
 */
const mongooseSchemaDocuments = parser.feed(sql)
  .toMongooseSchemaArray();

/**
 * Output each table to a JSON Schema file in a given directory...
 */
parser.feed(sql)
  .toJsonSchemaFiles(__dirname)
  .then((outputFilePaths) => {
    // ...
  });

/**
 * Or get the JSON Schema if you need to modify it...
 */
const jsonSchemaDocuments = parser.feed(sql)
  .toJsonSchemaArray();

/**
 * Or explore the compact JSON format...
 */
const compactJsonTablesArray = parser.feed(sql)
  .toCompactJson(parser.results);
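
If you need actual mongoose.Schema instances rather than the plain documents returned by toMongooseSchemaArray(), a minimal sketch could look like the following. The buildSchema helper is not part of this package; it only assumes the title/columns document shape shown earlier and drops the keys Mongoose does not understand:

const mongoose = require('mongoose');

/**
 * Hypothetical helper: maps one document from toMongooseSchemaArray()
 * to a mongoose.Schema, dropping keys that are not SchemaType options.
 */
function buildSchema(doc) {
  const definition = {};
  for (const [name, column] of Object.entries(doc.columns)) {
    const { primary, autoincrement, ...schemaTypeOptions } = column;
    definition[name] = schemaTypeOptions;
  }
  return new mongoose.Schema(definition, { collection: doc.title });
}

const schemas = parser.feed(sql)
  .toMongooseSchemaArray()
  .map(buildSchema);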

More options

You can also grab the raw parsed JSON and reuse it across the different formatters by feeding the parser only once. For example:


/**
 * Feed the parser with the SQL DDL statements...
 */
parser.feed(sql);

/**
 * You can get the parsed results in JSON format...
 */
const parsedJsonFormat = parser.results;

/**
 * And pass it to be formatted in a compact JSON format...
 */
const compactJsonTablesArray = parser.toCompactJson(parsedJsonFormat);

/**
 * And pass it to format to an array of JSON Schema items. One for each table...
 */
const jsonSchemaDocuments = parser.toJsonSchemaArray(compactJsonTablesArray);

/**
 * And write the JSON Schema documents to files, which returns a promise...
 */
const options = {
  indent: 2,
  extension: '.json'
};
const jsonFilesOutput = parser.toJsonSchemaFiles(__dirname, options, jsonSchemaDocuments)
  .then((outputFilePaths) => {
    // ...
  });
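
The generated documents follow JSON Schema draft-07, so they can be fed to any standard validator. As an illustration only (Ajv is not a dependency of this package), validating a row-like object against the users schema from the jsonSchemaDocuments array above might look like:

const Ajv = require('ajv');

const ajv = new Ajv();
const [usersSchema] = jsonSchemaDocuments;

const validate = ajv.compile(usersSchema);
const valid = validate({ id: 1, nickname: 'neo', created_at: '2024-01-01 00:00:00' });

if (!valid) {
  console.log(validate.errors);
}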

What it is, what it is not

It is a SQL DDL parser for JavaScript, based on nearley. It parses DDL statements only, converting them to JSON. No DML is supported.

It is not a SQL DBMS, a SQL server, or a SQL client.

About

No SQL server, client or DBMS is required.

To see which DDL statements / SQL dialects are supported, check out the roadmap.

This project provides a grammar and a stream-friendly SQL parser built on nearley.

Contributing

You are welcome to contribute!

Preferably use yarn instead of npm, as all scripts in package.json are run through yarn.

  • Clone this repo
  • Install nodemon: yarn global add nodemon
  • Install dependencies: yarn

Committing

To commit, use Commitizen: git cz (you will need Commitizen installed globally: yarn global add commitizen).

Understanding the internals

Folder structure:


/
|- index.js               Entrypoint file, imports from lib/index.js
|- lib/                   Compiled (dist) library folder, product of this project.
|
|- src/
|  |- shared/             Shared files used by dialects, parsers and formatters.
|  |- mysql/
|     |- example.js       Serves development purposes, for testing isolated statements.
|     |- formatter/       Formats the parsed JSON (the parser output) into other formats.
|        |- compact/            Formatter for a compact JSON format.
|        |- json-schema/        Formatter for a JSON Schema format.
|        |- mongoose-schema/    Formatter for a Mongoose Schema format.
|     |- parser/
|        |- dictionary/   JS files with array of keywords used in lexer.ne.
|        |- rules/        Nearley files with grammar rules.
|        |- lexer.ne      Entrypoint and first lines of the grammar.
|
|- tasks/
|  |- mysql/
|     |- assembly.js      Script that concatenates all .ne files into grammar.ne in the lib folder.
|     |- formatters.js    Script that copies the formatters to the lib folder.
|
|- test/                  Tests.

  • There are naming rules for tokens in .ne files, as stated in lexer.ne. They are prepended with:

    K_ -> Keywords
    P_ -> Phrase (aka statements)
    O_ -> Options (one of several keywords or phrases)
    S_ -> Symbol (not a keyword, but chars and other tokens matched by RegExps)

  • The dictionary/keywords.js file contains the keywords, but they are prepended with K_ when used in .ne files. Take a look to make sure you understand how it is exported.

  • The compiled grammar.ne file is an assembly (concatenation) of lexer.ne and the files in the rules folder, so there is no need to import .ne files into other .ne files. This prevents circular dependencies, and grammar rules declared in lexer.ne are in scope for all files (so they don't have to be repeated in every file).

Scripts at hand

Valid for all SQL dialects:

  • Assemble grammar.ne and compile to grammar.js: yarn run build
  • Same as above, but watch for changes: yarn run build:watch
  • Run tests: yarn run test
  • Test and watch for changes: yarn run test:watch
  • Test against nearley tester: yarn run nearley-test lib/mysql/parser/grammar.js --input 'CREATE TABLE test (test CHAR(1));'

Debugging

Using the example file, you can debug with the following configurations in your IDE:

Visual Studio Code

Add the launch configuration below. To debug specific tests, change the args as needed.


{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Compilation",
      "args": [
        "lib/mysql/parser/grammar.ne"
      ],
      "program": "${workspaceFolder}/node_modules/nearley/bin/nearleyc.js"
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Tests",
      "args": [
        "test/mysql/parser/parser.spec.js"
      ],
      "program": "${workspaceFolder}/node_modules/ava/profile.js"
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Example",
      "program": "${workspaceFolder}/src/mysql/example.js"
    }
  ]
}
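
Outside of an IDE, the example file can also be run directly with Node (assuming dependencies are installed):

node src/mysql/example.js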

Links