
minimonolith

v0.25.24

[![codecov](https://codecov.io/gh/DeepHackDev/minimonolith-api/branch/master/graph/badge.svg?token=ORFNKKJRSE)](https://codecov.io/gh/DeepHackDev/minimonolith-lib)

minimonolith is a lightweight library designed to help you build serverless APIs using AWS Lambda, with a focus on simplicity and ease of use. The library provides a straightforward structure to organize your API's services, methods, validation, and models while handling common tasks like database connection and request validation.

In addition to its simplicity, minimonolith enables seamless inter-service communication within your API. This allows services to call one another's functionality without directly importing them, fostering a modular design. For example, you can call the get method of the todo service from the todoList service using SERVICES.todo.get({ id }). By registering services within the API, you can easily call their methods from other services, which not only promotes a clean architecture but also paves the way for future support of automated end-to-end testing.
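The registry pattern behind `SERVICES` can be sketched in plain JavaScript. This is an illustrative mock of the idea, not minimonolith's actual internals; the handler bodies are invented for the example:

```javascript
// Illustrative sketch of a service registry (not minimonolith's real internals).
const SERVICES = {};

// Register a module's services under its name.
const registerModule = (name, handlers) => {
  SERVICES[name] = handlers;
};

registerModule('todo', {
  get: async ({ id }) => ({ id, name: `todo #${id}` }),
});

registerModule('todoList', {
  // todoList calls todo.get through the registry, without importing
  // the todo module directly.
  getWithFirstItem: async ({ id }) => {
    const first = await SERVICES.todo.get({ id: 1 });
    return { id, first };
  },
});
```

Because services only meet through the registry, each module can be tested with the registry stubbed out, which is what makes the promised automated end-to-end testing plausible.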

Example Project

Here's an example project using minimonolith:

.
├── package.json
├── .gitignore
├── .env
├── server.js                  // For local development
├── index.js                   // Root of the code in a deployed AWS Lambda
└── todo
    ├── services.js            // Module 'todo' exported services are declared here
    ├── model.js               // Optional: Sequelize model for module 'todo' is declared here
    └── get
        ├── handler.js         // Module 'todo' service 'get' handler
        ├── in.js              // Optional: service 'get' input validation (when the body is non-empty)
        └── out.js             // Optional: service 'get' output validation (when the body is non-empty)

server.js

This file is used for local development. It runs a local server built with minimonolith's getServerFactory function:

// server.js
import { getServerFactory } from 'minimonolith';

const getServer = await getServerFactory();
const { lambdaHandler } = await import('./index.js');

getServer(lambdaHandler).listen(8080);

index.js

This file serves as the root of the code in a deployed AWS Lambda:

// index.js
'use strict';

import { getNewAPI } from 'minimonolith';

const API = getNewAPI({
  PROD_ENV: process.env.PROD_ENV,
  DEV_ENV: process.env.DEV_ENV,
});

await API.postHealthService();
await API.postModule('todo');
await API.postDatabaseService({
  DB_DIALECT: process.env.DB_DIALECT,
  DB_HOST: process.env.DB_HOST,
  DB_PORT: process.env.DB_PORT,
  DB_DB: process.env.DB_DB,
  DB_USER: process.env.DB_USER,
  DB_PASS: process.env.DB_PASS,
});

export const lambdaHandler = await API.getSyncedHandler();

todo/services.js

Here, we declare the service routes for the todo module:

// todo/services.js
export default ['getAll', 'get:id', 'post', 'patch:id', 'delete:id'];
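These strings appear to encode a service name plus an optional path parameter after the colon (e.g. `get:id`). The parser below is my reading of that convention, not documented library behavior:

```javascript
// Hypothetical parser for minimonolith-style route strings.
// 'get:id' -> service 'get' with an 'id' path parameter; 'post' -> no parameter.
const parseRoute = route => {
  const [service, param] = route.split(':');
  return { service, param: param ?? null };
};

const routes = ['getAll', 'get:id', 'post', 'patch:id', 'delete:id'].map(parseRoute);
```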

todo/model.js

In this file, we define a Sequelize model for the todo module:

// todo/model.js
export default moduleName => (orm, types) => {
  const schema = orm.define(moduleName, {
    name: {
      type: types.STRING,
      allowNull: false
    },
  });

  schema.associate = MODELS => {}; // e.g. MODELS.todo.belongsTo(MODELS.todoList, {...});

  return schema;
};

todo/get/handler.js

This file contains the handler for the todo module's get:id route. It retrieves a todo item by its ID:

// todo/get/handler.js
export default async ({ body, MODELS }) => {
  return await MODELS.todo.findOne({ where: { id: body.id } });
};
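Following the same shape, a create handler for the `post` route might look like the sketch below. The file is hypothetical (it is not in the example tree's `get` directory), and `MODELS.todo.create` mirrors Sequelize's `Model.create`:

```javascript
// Hypothetical todo/post/handler.js, mirroring the shape of the get handler.
// In the real file this function would be the module's default export.
const postHandler = async ({ body, MODELS }) =>
  // body has already passed in.js validation by the time the handler runs.
  await MODELS.todo.create({ name: body.name });
```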

todo/get/in.js

This file validates the get:id service's input, ensuring that the provided id is a string and exists in the todo model:

// todo/get/in.js
import { z, zdb } from 'minimonolith';

export default ({ MODELS }) => ({
  id: z.string()
    .superRefine(zdb.getIsSafeInt('id'))
    .transform(id => parseInt(id))
    .superRefine(zdb.getExists(MODELS.todo, 'id')),
});

Response Codes

Success

  • POST -> 201
  • DELETE -> 204
  • Everything else -> 200

Invalid Request

  • ANY -> 400

Runtime Error

  • ANY -> 500
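The mapping above can be expressed as a small lookup. This is a sketch of the documented convention, not library code:

```javascript
// Success status per HTTP method, per the table above.
const successStatus = method => {
  const m = method.toUpperCase();
  if (m === 'POST') return 201;
  if (m === 'DELETE') return 204;
  return 200;
};

// Invalid requests and runtime errors map to fixed codes regardless of method.
const INVALID_REQUEST = 400;
const RUNTIME_ERROR = 500;
```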

App Environments

There are 4 possible environments:

  1. DEV=TRUE + PROD=FALSE: This is the standard DEV environment
  2. DEV=FALSE + PROD=FALSE: This is the standard QA environment
  3. DEV=FALSE + PROD=TRUE: This is the standard PROD environment
  4. DEV=TRUE + PROD=TRUE: This allows testing the behavior of PROD within the "new concept" of the DEV environment

To better understand their relevance:

  1. The "new concept" DEV environments (DEV=TRUE) aim to make the API crash when an "important" error happens
  • Currently, the only difference is that the app crashes on errors during the service registration phase
  • Some may think QA should also fail on "important" errors; they can use DEV=TRUE there. However, some teams run training activities on QA that must be minimally disrupted
  2. The "new concept" QA environments (PROD=FALSE) aim at logging data about the system that, in production environments, would be forbidden personal information
  • This is relevant because replicating QA activities (even security QA activities) depends heavily on this logging

The current app environment is determined by the values of DEV_ENV [TRUE/FALSE] and PROD_ENV [TRUE/FALSE], assuming the same environment variable names as used in index.js above:

# .env standard dev environment
DEV_ENV=TRUE
PROD_ENV=FALSE
TEST_ENV=FALSE
[...]
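Resolving the four flag combinations into an environment name can be sketched as follows. The function name and the `'PROD-in-DEV'` label are my own; only the four combinations come from the list above:

```javascript
// Maps DEV_ENV/PROD_ENV flags ('TRUE'/'FALSE' strings) to the
// four environments described above.
const resolveEnvironment = ({ DEV_ENV, PROD_ENV }) => {
  const dev = DEV_ENV === 'TRUE';
  const prod = PROD_ENV === 'TRUE';
  if (dev && prod) return 'PROD-in-DEV'; // test PROD behavior under DEV
  if (dev) return 'DEV';
  if (prod) return 'PROD';
  return 'QA';
};
```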

NOTICE: By default, the standard PROD environment is assumed (DEV=FALSE + PROD=TRUE)

  • This means that Sequelize will not automatically alter tables that mismatch their defined model.js files
  • The detected database dialect/credentials will not be printed
  • Critical errors will not make the app crash

Database Authentication

To set up database authentication, pass the necessary variables to postDatabaseService, as shown in index.js above. The examples below assume the same environment variable names as in index.js.

For MySQL:

DEV_ENV=TRUE
PROD_ENV=FALSE
DB_DIALECT=mysql
DB_HOST=<your_database_endpoint>
DB_PORT=<your_database_port>
DB_DB=<your_database_name>
DB_USER=<your_database_username>
DB_PASS=<your_database_password>

For SQLite in memory:

DEV_ENV=TRUE
PROD_ENV=FALSE
DB_DIALECT=sqlite
DB_DB=<your_database_name>
DB_STORAGE=:memory: # Need to also pass to API.postDatabaseService()

Make sure to replace the placeholders with your actual database credentials.
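Collecting those variables into the options object passed to postDatabaseService can be sketched like this. The helper name is mine; the key names follow the index.js example, and treating DB_STORAGE as SQLite-only follows the note in the .env snippet above:

```javascript
// Builds the database options object from environment variables,
// using the same key names as the index.js example above.
const getDatabaseOptions = env => ({
  DB_DIALECT: env.DB_DIALECT,
  DB_HOST: env.DB_HOST,
  DB_PORT: env.DB_PORT,
  DB_DB: env.DB_DB,
  DB_USER: env.DB_USER,
  DB_PASS: env.DB_PASS,
  // Only set for SQLite; ':memory:' keeps the database in RAM.
  ...(env.DB_STORAGE ? { DB_STORAGE: env.DB_STORAGE } : {}),
});
```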

  • DEV_ENV=TRUE allows Sequelize to alter table structure automatically when working locally
  • PROD_ENV=FALSE allows logging of DB credentials for debugging purposes in non-production environments
    • We consider high quality logging important for app performance and evolution
    • However, we recommend rotating DB credentials automatically (e.g. daily); high-quality logging does not mean giving away your infrastructure to attackers
    • At the risk of stating the obvious: do not store personal information in the QA database