# minimonolith

[![codecov](https://codecov.io/gh/DeepHackDev/minimonolith-api/branch/master/graph/badge.svg?token=ORFNKKJRSE)](https://codecov.io/gh/DeepHackDev/minimonolith-lib)
`minimonolith` is a lightweight library designed to help you build serverless APIs using AWS Lambda, with a focus on simplicity and ease of use. The library provides a straightforward structure to organize your API's services, methods, validation, and models while handling common tasks like database connection and request validation.
In addition to its simplicity, `minimonolith` enables seamless inter-service communication within your API. This allows services to call one another's functionality without directly importing them, fostering a modular design. For example, you can call the `get` method of the `todo` service from the `todoList` service using `SERVICES.todo.get({ id })`. By registering services within the API, you can easily call their methods from other services, which not only promotes a clean architecture but also paves the way for future support of automated end-to-end testing.
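For instance, a minimal sketch of such a cross-service call (a hypothetical `todoList` handler; it assumes `SERVICES` is injected into the handler context alongside `body` and `MODELS`):

```js
// todoList/get/index.js (hypothetical; names and fields are assumptions)
export default async ({ body, SERVICES }) => {
  // Reuse the todo module's 'get' service without importing it directly
  const todo = await SERVICES.todo.get({ id: body.todoId });
  return { todoListId: body.id, todo };
};
```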
## Example Project
Here's an example project using `minimonolith`:
```
.
├── package.json
├── .gitignore
├── .env
├── server.js  // For local development
├── index.js   // Root of the code in a deployed AWS Lambda
└── todo
    ├── services.js  // Module 'todo' exported services are declared here
    ├── model.js     // Optional: Sequelize model for module 'todo' is declared here
    └── get
        ├── index.js  // Module 'todo' service 'get' handler
        ├── in.js     // Optional: service 'get' input validation, if body not empty
        └── out.js    // Optional: service 'get' output validation, if body not empty
```
### server.js
This file is used for local development. It runs a local server using the `getServer` function obtained from `minimonolith`'s `getServerFactory`:
```js
// server.js
import { getServerFactory } from 'minimonolith';

const getServer = await getServerFactory();
const { lambdaHandler } = await import('./index.js');

getServer(lambdaHandler).listen(8080);
```
### index.js
This file serves as the root of the code in a deployed AWS Lambda:
```js
// index.js
'use strict';
import { getNewAPI } from 'minimonolith';

const API = getNewAPI({
  PROD_ENV: process.env.PROD_ENV,
  DEV_ENV: process.env.DEV_ENV,
});

await API.postHealthService();
await API.postModule('todo');
await API.postDatabaseService({
  DB_DIALECT: process.env.DB_DIALECT,
  DB_HOST: process.env.DB_HOST,
  DB_PORT: process.env.DB_PORT,
  DB_DB: process.env.DB_DB,
  DB_USER: process.env.DB_USER,
  DB_PASS: process.env.DB_PASS,
});

export const lambdaHandler = await API.getSyncedHandler();
```
### todo/services.js

Here, we declare the service routes for the `todo` module:
```js
// todo/services.js
export default ['getAll', 'get:id', 'post', 'patch:id', 'delete:id'];
```
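Each route string names a service directory under the module; for example, `get:id` is handled by `todo/get/index.js`. As a sketch, a sibling `todoList` module like the one mentioned earlier could declare its own routes the same way (a hypothetical file; it would also need `await API.postModule('todoList')` in index.js):

```js
// todoList/services.js (hypothetical sibling module)
export default ['get:id', 'post'];
```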
### todo/model.js

In this file, we define a Sequelize model for the `todo` module:
```js
// todo/model.js
export default moduleName => (orm, types) => {
  const schema = orm.define(moduleName, {
    name: {
      type: types.STRING,
      allowNull: false,
    },
  });
  schema.associate = MODELS => {}; // e.g. MODELS.todo.belongsTo(MODELS.todoList, {...});
  return schema;
};
```
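To illustrate the `associate` hook, here's a sketch of a hypothetical `todoList/model.js` that wires the inverse side of the association hinted at in the comment above (the module name and fields are assumptions):

```js
// todoList/model.js (hypothetical module, illustrating associations)
export default moduleName => (orm, types) => {
  const schema = orm.define(moduleName, {
    title: {
      type: types.STRING,
      allowNull: false,
    },
  });
  // Inverse side of MODELS.todo.belongsTo(MODELS.todoList, {...})
  schema.associate = MODELS => {
    MODELS.todoList.hasMany(MODELS.todo);
  };
  return schema;
};
```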
### todo/get/index.js

This file contains the `get:id` route for the `todo` module. It retrieves a todo item by its ID:
```js
// todo/get/index.js
export default async ({ body, MODELS }) => {
  return await MODELS.todo.findOne({ where: { id: body.id } });
};
```
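For comparison, a handler for the `post` route could look like this (a sketch, not part of the original example; it assumes the same `{ body, MODELS }` handler context):

```js
// todo/post/index.js (hypothetical handler for the 'post' route)
export default async ({ body, MODELS }) => {
  // Create a new todo row; a successful POST returns 201 (see Response Codes below)
  return await MODELS.todo.create({ name: body.name });
};
```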
### todo/get/in.js

This file validates the `get:id` service's input, ensuring that the provided `id` is a string and exists in the `todo` model:
```js
// todo/get/in.js
import { z, zdb } from 'minimonolith';

export default ({ MODELS }) => ({
  id: z.string()
    .superRefine(zdb.getIsSafeInt('id'))
    .transform(id => parseInt(id))
    .superRefine(zdb.getExists(MODELS.todo, 'id')),
});
```
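The optional `out.js` from the project tree is not spelled out in this example; assuming it mirrors the field-map shape of `in.js` (an assumption about the contract), a sketch might be:

```js
// todo/get/out.js (a sketch; assumes the same field-map shape as in.js)
import { z } from 'minimonolith';

export default () => ({
  id: z.number().int(),
  name: z.string(),
});
```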
## Response Codes

### Success
- POST -> 201
- DELETE -> 204
- Everything else -> 200

### Invalid Request
- ANY -> 400

### Runtime Error
- ANY -> 500
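For example, a quick smoke test against the local server from server.js (the route path and payload are assumptions based on the services.js pattern above; requires Node 18+ for built-in `fetch`):

```js
// smoke-test.js (hypothetical; assumes the local server is running on port 8080)
const res = await fetch('http://localhost:8080/todo', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Buy milk' }),
});
console.log(res.status); // expected: 201 per the table above
```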
## App Environments

There are 4 possible environments:

- DEV=TRUE + PROD=FALSE: This is the standard DEV environment
- DEV=FALSE + PROD=FALSE: This is the standard QA environment
- DEV=FALSE + PROD=TRUE: This is the standard PROD environment
- DEV=TRUE + PROD=TRUE: This allows testing the behavior of PROD within the "new concept" of the DEV environment
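Illustratively, the four combinations map to the two flags like this (a sketch only; `minimonolith` resolves the environment internally):

```js
// Illustrative mapping of the two flags to the four environments
const isDev = process.env.DEV_ENV === 'TRUE';
const isProd = process.env.PROD_ENV === 'TRUE';

let envName;
if (isDev && !isProd) envName = 'DEV';       // standard DEV
else if (!isDev && !isProd) envName = 'QA';  // standard QA
else if (!isDev && isProd) envName = 'PROD'; // standard PROD
else envName = 'DEV+PROD';                   // PROD behavior under DEV
```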
To better understand their relevance:

- The "new concept" DEV environments (DEV=TRUE) aim to make the API crash if an "important" error happens
  - Currently, the only difference is that the app crashes on errors during the service registration phase
  - Some may think QA should also fail on "important" errors; they can use DEV=TRUE there. However, some teams run training activities on QA that must be minimally disrupted
- The "new concept" QA environments (PROD=FALSE) aim to log data about the system that, in production environments, would be forbidden personal information
  - This is relevant because the replication of QA activities (even security QA activities) depends heavily on this
The current app environment is determined by the values of DEV_ENV [TRUE/FALSE] and PROD_ENV [TRUE/FALSE], assuming the same environment variables as used in index.js above:

```
# .env standard dev environment
DEV_ENV=TRUE
PROD_ENV=FALSE
TEST_ENV=FALSE
[...]
```
NOTICE: By default, the standard PROD environment is assumed (DEV=FALSE + PROD=TRUE)

- This means that Sequelize will not automatically alter tables that have mismatches with the defined model.js files
- The detected database dialect/credentials will not be printed
- Critical errors will not make the app crash
## Database Authentication

To set up database authentication, you need to pass the necessary variables to `postDatabaseService`, as in index.js above. Assuming the same environment variable names as in index.js:
For MySQL:

```
DEV_ENV=TRUE
PROD_ENV=FALSE
DB_DIALECT=mysql
DB_HOST=<your_database_endpoint>
DB_PORT=<your_database_port>
DB_DB=<your_database_name>
DB_USER=<your_database_username>
DB_PASS=<your_database_password>
```
For SQLite in memory:

```
DEV_ENV=TRUE
PROD_ENV=FALSE
DB_DIALECT=sqlite
DB_DB=<your_database_name>
DB_STORAGE=:memory:  # Need to also pass to API.postDatabaseService()
```
Make sure to replace the placeholders with your actual database credentials.
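Per the note above, `DB_STORAGE` must also reach `postDatabaseService`; a sketch of the extended call from index.js (the exact parameter handling is an assumption beyond what the note states):

```js
// Extending the postDatabaseService call from index.js for in-memory SQLite
await API.postDatabaseService({
  DB_DIALECT: process.env.DB_DIALECT,
  DB_DB: process.env.DB_DB,
  DB_STORAGE: process.env.DB_STORAGE, // ':memory:' for in-memory SQLite
});
```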
- DEV_ENV=TRUE allows Sequelize to alter table structure automatically when working locally
- PROD_ENV=FALSE allows logging of DB credentials for debugging purposes in non-production environments
  - We consider high-quality logging important for app performance and evolution
  - However, we recommend automatic (daily) DB credential updates; high-quality logging does not mean giving away your infrastructure to hackers
  - At the risk of stating the obvious: do not store personal information in the QA database