
modelorama

v0.0.15

Tame schemas once and for all!


Modelorama showcases Protobuf, GraphQL and JSON-Schema cooperating to achieve a solid service-based architecture for web applications.

  • The database is built on top of Sequelize, with models also powered by JSON-Schema
  • The front-end API talks GraphQL, while communication between back-end services is done through gRPC (optional)
  • The implemented APIs can be written or consumed in TypeScript; by following the suggested architecture, typings are generated for you (optional)

Install grown and modelorama globally or in your project:

npm i grown modelorama --save # or `npm i -g grown modelorama`

To begin with, just write the main db/app.js file:

const Grown = require('grown')();

Grown.use(require('.'));
Grown.use(require('modelorama'));

module.exports = Grown.ready();

Here we're also registering modelorama to enable the discovery of db/resolvers and db/handlers when GraphQL and gRPC are enabled, respectively.

Now write a db/index.js file with the following code:

module.exports = require('./generated')(require('modelorama')
  .setup({
    refs: require('./generated').default,
    config: {
      dialect: 'sqlite',
      storage: '/tmp/db.sqlite',
      directory: __dirname,
    },
  }));

This script will be used as a module; it loads and configures the database repository.

Finally, you'll need to write a server.js file:

module.exports = async () => {
  const Grown = await require('./db/app');
  const server = new Grown();

  server.on('listen', app => {
    console.log(app.location.href);
  });

  return server;
};

The latter script will let you start the application server with pot server start — try running mdl without arguments to list all available tasks and options.

If you want to start the server manually (e.g. PORT=1234 node server), then modify the script as follows:

-module.exports = server;
+if (require.main !== module) {
+  module.exports = server;
+} else {
+  server.listen(process.env.PORT || 8080);
+}

By default, all the related sources are placed together in the db directory:

  • Models and their JSON-Schema definitions at db/models
  • GraphQL resolvers at db/resolvers (optional)
  • gRPC handlers at db/handlers (optional)

Types and Schemas

  1. Let's add a type: pot generate type db/types/dataTypes.yml pk primaryKey autoIncrement type:integer

  2. Now create a model: pot generate model db/models/User id:dataTypes/pk email:string password:string

  3. Run pot build schema db to generate the required sources.

Try the interactive console with pot console db to debug your models — run User.sync({ force: true }) to recreate the model's table if it does not exist already!

Database migrations

In order to properly migrate the database from the previous steps, you MUST:

  1. Run pot migrate db --make to create the migration files.
  2. Run pot migrate db --up to execute those migrations.
  3. Run pot migrate db --apply "initial version" to take a snapshot.

Every modification to the schemas should be followed by repeating the steps described above to ensure atomic changes over time — use git commit to keep track of your changes.

The db/schema.js file is a single-shot migration, or snapshot, of the current database shape.

Snapshots are faster than running all migrations from the ground up — try pot migrate db --create to recreate the database from scratch.
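The idea behind the snapshot shortcut can be sketched generically (the function and naming here are hypothetical, not modelorama's internals): instead of replaying every migration, apply the snapshot and run only migrations created after it.

```javascript
// Generic sketch of the snapshot shortcut (hypothetical names): replay only
// migrations newer than the snapshot version.
function migrationsToRun(allMigrations, snapshotVersion) {
  // Migration files carry sortable timestamped names, so plain string
  // comparison doubles as chronological ordering.
  const pending = snapshotVersion
    ? allMigrations.filter(name => name > snapshotVersion)
    : allMigrations.slice();
  return pending.sort();
}
```

With no snapshot, everything must be replayed; with one, the pending set shrinks to whatever was added since.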

TypeScript usage

Run pot build types db to generate the types from the current modules.

Now, you can import the types used in your code:

import Grown from './db/app';
import type { UserModel } from './db/app';

async function main() {
  await Grown.Models.connect();

  const User = Grown.Models.get<UserModel>('User');

  await User.sync({ force: true });
  const c = await User.count();

  console.log(c);
}
main();

You can also type the whole connection at once:

import Grown from './db/app';
import Modelorama from 'modelorama';
import type { DB, User } from './db/app';

Grown.use(require('@grown/model'));

const Models = await Modelorama.setup<DB>(Grown, {
  config: {
    dialect: 'sqlite',
    storage: '/tmp/db.sqlite',
    directory: `${__dirname}/db`,
  },
});

const UserModel = Models.get('User').getSchema<User>();

console.log(UserModel.fakeOne({ alwaysFakeOptionals: true }));

We have types and schemas to build our model definitions; later we can optionally declare services for GraphQL and gRPC.

Types are declared in JSON-Schema:

{
  "id": "Example",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "required": ["name"]
}

This can be translated to GraphQL:

type Example {
  name: String!
}

Protobuf can be generated the same way:

message Example {
  required string name = 1;
}

As well as its corresponding TypeScript type:

export type Example = {
  name: string;
}
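The mapping above can be sketched as a toy generator — a simplified translation (string → String, required → !), not the actual code generation modelorama performs:

```javascript
// Toy JSON-Schema -> GraphQL translation covering only flat object schemas
// with scalar properties; a sketch of the mapping, not modelorama's codegen.
const SCALARS = { string: 'String', integer: 'Int', number: 'Float', boolean: 'Boolean' };

function toGraphQL(schema) {
  const required = new Set(schema.required || []);
  const fields = Object.entries(schema.properties).map(([name, prop]) => {
    const gqlType = SCALARS[prop.type] || prop.type;
    // Required properties become non-nullable fields.
    return `  ${name}: ${gqlType}${required.has(name) ? '!' : ''}`;
  });
  return `type ${schema.id} {\n${fields.join('\n')}\n}`;
}
```

Feeding it the Example schema above yields exactly the GraphQL type shown earlier.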

All this means:

  • JSON-Schema is used to validate shape, format, content, and advanced relationships in the data.
  • GraphQL and Protobuf can be used to validate shape and types on any message being transmitted.
  • TypeScript can check your code with the proper types, just regenerate them if you update the schema.
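As a minimal illustration of the first bullet, shape and required-field checks for the Example schema can be written by hand (real validation would use a full JSON-Schema validator, not this sketch):

```javascript
// Hand-rolled validation of the Example schema above: checks shape and the
// required `name` property, returning collected errors.
function validateExample(data) {
  const errors = [];
  if (typeof data !== 'object' || data === null) {
    errors.push('not an object');
  } else if (!('name' in data)) {
    errors.push('name is required');
  } else if (typeof data.name !== 'string') {
    errors.push('name must be a string');
  }
  return { valid: errors.length === 0, errors };
}
```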

Services can be expressed using JSON or YAML:

id: Test

service:
  calls:
  - get: example
    resp: Example
  - get: examples
    resp: ExampleList

definitions:
  Example: !include schema.json
  ExampleList:
    type: array
    items:
      $ref: Example

Test is not a schema because it lacks properties, so it cannot be used as a $ref.
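The `$ref: Example` indirection can be modeled generically as inlining named definitions (a simplified resolver with no cycle handling, not modelorama's actual loader):

```javascript
// Simplified $ref resolver: wherever a node is `{ $ref: Name }` and Name
// exists in `definitions`, replace it with the (recursively resolved)
// definition. No cycle detection in this sketch.
function resolveRefs(node, definitions) {
  if (Array.isArray(node)) return node.map(item => resolveRefs(item, definitions));
  if (node && typeof node === 'object') {
    if (typeof node.$ref === 'string' && definitions[node.$ref]) {
      return resolveRefs(definitions[node.$ref], definitions);
    }
    const resolved = {};
    for (const [key, value] of Object.entries(node)) {
      resolved[key] = resolveRefs(value, definitions);
    }
    return resolved;
  }
  return node;
}
```

Resolving the ExampleList definition above would inline the Example schema into its items.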

As you may have pictured, we'll get a working GraphQL declaration with its corresponding Protobuf service definition, TypeScript types, etc.

Service calls are used to declare how gRPC and GraphQL will communicate; when they are omitted, you can also get rid of their handlers and resolvers, respectively.

GraphQL and gRPC integrations are completely optional.

Services usage

Definitions made for GraphQL and gRPC are available as middleware.

To make them work, you need to register them in your server.js script:

server.on('listen', app => {
  console.log(app.location.href);
});
+
+require('modelorama').plug(Grown, server);
+
+server.plug(require('modelorama').db(Grown));

Respectively:

  • plug(Container, serverInstance) — will load and register the required modules for the enabled features, models are always booted upon connection.
  • db(Container) — returns a middleware that responds to /db for scaffolding purposes; it helps to quickly create, read, update and delete data from your registered models.

Handlers and Resolvers

  1. Let's generate a service for listing users: pot generate service db/models/User/resource.yml get:users resp:UserList

  2. Now, let's add the required types: pot generate type db/models/User/resource.yml UserList type:array items:User

  3. Generate the GraphQL resolver: pot generate def db/resolvers/User Query.users --use API --args ctx

  4. Generate the gRPC handler: pot generate def db/handlers/User users --use User --args ctx

  5. Provider for GraphQL resolvers: pot generate def db/resolvers/provider.js API

  6. Provider for gRPC handlers: pot generate def db/handlers/provider.js User

Run pot build schema db to regenerate all model sources.

You'll need to update your db/index.js script as well:

    Grown.use(require('@grown/model'));
+    Grown.use(require('@grown/grpc'));
+    Grown.use(require('@grown/graphql'));

These modules are required to enable the services on the container.

At this point, you should be able to run pot server without crashing... cheers!

But now, depending on the enabled services, you should update the db/app.d.ts types:

import type Models from './models';
+import type Handlers from './handlers';
+import type Resolvers from './resolvers';
...
export interface Server extends GrownInterface {
  Models: Repository<Models>;
  Services: Services;
+  Handlers: Handlers;
+  Resolvers: Resolvers;
}

Run pot build types db again to regenerate all types.

That's it! — now you should be able to import those modules with TypeScript, e.g.

import Grown from './db/app';

console.log(Grown.Models.get('User'));
console.log(Grown.Handlers.User.users.toString());
console.log(Grown.Resolvers.User.Query.users.toString());