@alextis59/back-flip

v0.1.5

Back-Flip Project

Overview

The "back-flip" project is a comprehensive software solution designed to streamline the development and management of applications that require robust database interactions, middleware processing, and messaging capabilities. This project encapsulates a variety of utilities and components, each meticulously crafted to enhance the functionality and maintainability of your application.

At its core, the project leverages Express for building server-side applications, MongoDB for database management, and NATS for messaging. The project structure is modular, ensuring that each component can be developed, tested, and maintained independently, yet seamlessly integrate with the rest of the system.

Key Features and Components

Project Configuration: The package.json file is the cornerstone of the project's configuration. It defines essential metadata, manages dependencies, and specifies scripts for various development and testing tasks. The dependencies include critical libraries such as axios for HTTP requests, lodash for utility functions, moment for date manipulation, mongodb for database operations, nats for messaging, and uuid for unique identifier generation. The devDependencies support testing and development workflows with tools like chai, mocha, nyc, sinon, and supertest.

Core Functionality: The core functionality revolves around handling asynchronous operations in Express, managing database connections and CRUD operations, and tracking database events. The async-express module provides utilities to handle both asynchronous and synchronous middleware, ensuring smooth and efficient request processing. The database modules (db/index.js, db/auto_publish.js, db/cache.js, db/tracking.js) offer robust solutions for managing MongoDB connections, event subscriptions, and data caching, making it easier to build scalable and responsive applications.

Logging: Effective logging is crucial for monitoring and debugging applications. The log/index.js file configures the winston logger with custom formats, including correlation IDs, to provide detailed and structured logs. This dynamic configuration capability ensures that the logging setup can be easily adjusted to meet different environments and requirements.

Middleware: Middleware functions are essential for processing requests and responses in a web application. The project includes several middleware modules:

  • middlewares/factory.js provides factory functions for creating middleware tailored to specific operations such as entity creation, update, and deletion.
  • middlewares/generic.js offers generic middleware functions for handling requests, checking access rights, and performing CRUD operations.
  • middlewares/responses.js defines middleware for handling HTTP responses and errors, ensuring consistent and informative responses across the application.

Models: The model layer is responsible for defining and managing data schemas and entity-specific operations. The EntityHandler and EntityModel classes encapsulate the logic for validating and managing entity schemas, while custom error classes in model/errors.js provide a structured approach to handling different types of errors.

Messaging: Messaging is a critical component for building distributed systems. The project includes the mqz module, which defines the MQZClient and NatsClient classes for managing message queues using NATS. These classes provide functions to initialize, publish, and subscribe to messages, facilitating efficient and reliable communication between different parts of the system.

Testing Utilities: Robust testing is essential for ensuring the reliability and stability of the application. The project includes several testing utilities:

  • test_utils/control.js provides functions to initialize test clients, clear received messages, and check database entities.
  • test_utils/pub_test_client.js defines the PubTestClient class for testing message publishing and consumption.
  • test_utils/requests.js offers functions to make HTTP requests and verify responses during testing.

Miscellaneous: The entry point of the project, index.js, serves as the initial loading script but contains minimal content, indicating that the main logic is distributed across the various modules and components.

Summary

The "back-flip" project is a well-structured and feature-rich solution designed to address the complexities of modern web application development. With its modular architecture, robust database management, comprehensive middleware support, effective logging, and reliable messaging capabilities, it provides a solid foundation for building scalable, maintainable, and high-performance applications. Whether you are developing a new application or enhancing an existing one, the "back-flip" project offers the tools and utilities needed to succeed.

Installation

To install and set up the "back-flip" project, follow the detailed steps outlined below. This guide assumes you have a basic understanding of Node.js, npm, and general software development practices.

Prerequisites

Before you begin, ensure you have the following software installed on your system:

  1. Node.js: Version 14.x or later is recommended. You can download it from nodejs.org.
  2. npm: This is typically installed with Node.js. Ensure you have npm version 6.x or later. You can check your version by running npm -v in your terminal.
  3. MongoDB: The project requires MongoDB for database operations. Install the latest version from mongodb.com.
  4. NATS Server: The project uses NATS for messaging. You can download and install NATS from nats.io.

Step-by-Step Installation

  1. Clone the Repository: Begin by cloning the "back-flip" repository from GitHub to your local machine. Open your terminal and execute the following command:

    git clone https://github.com/yourusername/back-flip.git

    Replace yourusername with the actual username or organization name.

  2. Navigate to the Project Directory: Change your working directory to the newly cloned repository:

    cd back-flip

  3. Install Dependencies: The project relies on several npm packages. Install them by running:

    npm install

    This will read the package.json file and install all listed dependencies and devDependencies.

  4. Configure Environment Variables: Create a .env file in the root directory of your project. This file should contain all necessary environment variables. Here is an example of what your .env file might look like:

    MONGODB_URI=mongodb://localhost:27017/backflip
    NATS_URL=nats://localhost:4222
    LOG_LEVEL=debug

    Adjust the values as needed based on your local setup.

  5. Database Setup: Ensure your MongoDB server is running. You can start MongoDB using the command:

    mongod

    The default configuration assumes MongoDB is running on localhost at port 27017. If you have a different setup, make sure to update the MONGODB_URI in your .env file accordingly.

  6. Start the NATS Server: Ensure the NATS server is running. You can start it with:

    nats-server

    The default configuration assumes NATS is running on localhost at port 4222. If your setup is different, update the NATS_URL in your .env file.

  7. Run the Application: Start the application by executing:

    npm start

    This command will start the server, and you should see logs indicating that the application is running and connected to both MongoDB and NATS.

Running Tests

The project includes unit and integration tests to ensure code quality and functionality. You can run these tests using npm scripts defined in the package.json file.

  1. Unit Tests: To run unit tests, use:

    npm run test:unit

  2. Integration Tests: To run integration tests locally, use:

    npm run test:integration:local

    For running integration tests against a target environment, use:

    npm run test:integration:target

  3. Test Coverage: To generate a test coverage report, use:

    npm run test:cover:unit

Additional Commands

  • Publish the Package: If you need to publish the package to npm, use:

    npm publish --access public

Troubleshooting

If you encounter any issues during installation or setup, consider the following troubleshooting steps:

  • Check Dependencies: Ensure all dependencies are correctly installed. Run npm install again if necessary.

  • Verify Environment Variables: Double-check the .env file for any typos or incorrect values.

  • Database Connection: Ensure MongoDB is running and accessible at the URI specified in your .env file.

  • NATS Server: Make sure the NATS server is running and accessible at the URL specified in your .env file.

  • Logs: Check the application logs for detailed error messages. Adjust the LOG_LEVEL in your .env file to debug for more verbose logging.

By following these instructions, you should be able to successfully install and run the "back-flip" project. For any further assistance, refer to the project's documentation or contact the project maintainers.

Project Structure

The "back-flip" project repository is meticulously organized to ensure that developers can easily navigate and understand its structure. This section provides an in-depth exploration of the various files and directories, highlighting their roles and interactions within the project.

The root directory contains the package.json file, which is essential for defining the project metadata, dependencies, and scripts. This file includes both runtime dependencies such as axios, lodash, moment, mongodb, nats, and uuid, as well as development dependencies like chai, mocha, nyc, sinon, and supertest. These dependencies are crucial for the project's functionality and testing capabilities.

Core Functionality

The async-express directory houses index.js, which is pivotal for managing both asynchronous and synchronous middleware in Express. Key functions such as asyncWrap and asyncRouter are designed to streamline middleware handling, ensuring efficient processing of router methods.

The db directory is a cornerstone of the project, containing several critical files:

  • auto_publish.js manages database event subscriptions and publishing. It supports configuration options for whitelists, blacklists, and publishers, ensuring flexible event handling.
  • cache.js is responsible for subscribing to database events like 'create' and 'update', facilitating efficient caching mechanisms.
  • index.js is the primary file for managing MongoDB connections and CRUD operations. It includes utility functions for initializing, connecting, and disconnecting from the database, along with event handling and query projection building.
  • tracking.js tracks database operations such as create, update, and delete. It provides functions to filter and obfuscate tracked entity updates, enhancing data management and security.

Logging

Logging is handled by the log/index.js file, which configures the winston logger with custom formats, including correlation IDs. This allows for dynamic logger configuration, ensuring that logging can be tailored to specific needs and environments.

Middleware

The middlewares directory includes several key files:

  • factory.js offers factory functions to create middleware for various operations like entity creation, update, and deletion. It also includes functions to check and process entity access rights.
  • generic.js contains generic middleware functions for handling requests, checking access rights, and performing CRUD operations. Utility functions for formatting and sending responses are also provided.
  • responses.js defines middleware for handling HTTP responses and errors, with functions to send success responses and catch HTTP errors.

Models

The model directory is crucial for managing entities within the project:

  • EntityHandler.js defines the EntityHandler class, which manages entity-specific operations and middleware.
  • EntityModel.js outlines the EntityModel class for validating and managing entity schemas.
  • errors.js defines custom error classes for various HTTP and application-specific errors.
  • index.js manages entity handlers and models, providing functions to register, retrieve, and check access rights for entities.

Messaging

Messaging functionality is encapsulated within the mqz directory:

  • index.js defines the MQZClient class for managing message queues using NATS. It includes functions for initializing, publishing, and subscribing to messages.
  • nats.js implements the NatsClient class, which handles NATS connections, streams, and message consumption.

Testing Utilities

The test_utils directory provides essential tools for testing:

  • control.js offers functions to initialize test clients, clear received messages, and check database entities.
  • pub_test_client.js defines the PubTestClient class for testing message publishing and consumption.
  • requests.js provides functions for making HTTP requests and verifying responses during testing.

Miscellaneous

The index.js file in the root directory appears to serve as an entry point for the project, though it contains minimal content. It likely acts as a placeholder or a simple initializer for the project's core functionalities.

In summary, the "back-flip" project repository is well-structured, with each directory and file serving a specific purpose. From managing asynchronous middleware and database operations to handling logging, middleware, models, messaging, and testing utilities, the repository is designed to facilitate efficient development and maintenance of the project.

Core Functionality

async-express/index.js

asyncWrap

The asyncWrap function is a pivotal utility within the async-express/index.js module, designed to bridge the gap between asynchronous and synchronous middleware in Express applications. This function is critical for developers who wish to streamline their middleware execution flow, ensuring that any errors arising from asynchronous operations are appropriately managed and propagated through the middleware stack.

At its core, asyncWrap facilitates the seamless integration of asynchronous functions into an Express middleware chain. Express 4 does not forward rejected promises from async middleware to the error handler, so an unhandled rejection can crash the process or leave a request hanging. asyncWrap addresses this by wrapping both synchronous and asynchronous middleware functions so that thrown errors and rejected promises alike are routed to next(), ensuring consistent error handling and middleware execution.

The function accepts a variable number of middleware functions as parameters. It processes these functions sequentially, wrapping each one to catch any errors that may occur during their execution. If an error is encountered in an asynchronous middleware, asyncWrap captures it and forwards it to the next middleware or error handler, thereby maintaining the integrity of the middleware chain.

To illustrate the utility of asyncWrap, consider an Express application that needs to perform both synchronous and asynchronous operations. For instance, an asynchronous middleware might fetch data from a database, while a synchronous middleware might validate the fetched data. By using asyncWrap, developers can ensure that any errors in the asynchronous operation are caught and handled correctly, without disrupting the synchronous validation process.

Here is a practical example of how asyncWrap can be implemented in an Express application:

const express = require('express');
const asyncWrap = require('./async-express/index.js').asyncWrap;

const app = express();

// Example asynchronous middleware. Note there is no try/catch here:
// asyncWrap catches a rejected promise and forwards it to the error handler
const asyncMiddleware = async (req, res, next) => {
    // Simulate an asynchronous operation
    await someAsyncOperation();
    next();
};

// Example synchronous middleware
const syncMiddleware = (req, res, next) => {
    // Perform some synchronous operations
    next();
};

// Use asyncWrap to handle both async and sync middlewares
app.use(asyncWrap(asyncMiddleware, syncMiddleware));

// Error handling middleware
app.use((err, req, res, next) => {
    res.status(500).send('Something went wrong!');
});

app.listen(3000, () => {
    console.log('Server is running on port 3000');
});

async function someAsyncOperation() {
    // Simulate a delay
    return new Promise((resolve) => setTimeout(resolve, 1000));
}

In this example, asyncWrap is utilized to wrap both an asynchronous middleware (asyncMiddleware) and a synchronous middleware (syncMiddleware). The asynchronous middleware performs a simulated asynchronous operation, while the synchronous middleware executes immediately. If an error occurs during the asynchronous operation, it is caught by asyncWrap and passed to the error handling middleware, which sends a response indicating that something went wrong.

The use of asyncWrap in this manner not only simplifies the middleware implementation but also enhances the robustness and maintainability of the code. Developers can write cleaner, more concise middleware functions without worrying about the intricacies of error handling in asynchronous operations. This utility is particularly beneficial in complex applications where multiple asynchronous operations are performed, as it ensures that all errors are consistently managed and propagated through the middleware stack.

In conclusion, asyncWrap is an essential utility for any Express application that incorporates asynchronous middleware. By providing a consistent mechanism for handling errors in both synchronous and asynchronous middleware functions, it allows developers to create more reliable and maintainable code. The function's ability to seamlessly integrate asynchronous operations into the middleware chain makes it a valuable tool for modern web development, ensuring that applications can handle errors gracefully and maintain a smooth execution flow.

asyncRouter

The asyncRouter function in async-express/index.js serves as a powerful utility to streamline the handling of asynchronous routes within an Express application. This function extends the standard Express router with the capability to manage asynchronous route handlers seamlessly, ensuring that any errors encountered during asynchronous operations are properly caught and handled.

Description

The primary goal of asyncRouter is to simplify the integration of asynchronous route handlers into an Express application. Standard Express 4 routers do not catch rejected promises from async route handlers, which can lead to unhandled promise rejections when errors occur during asynchronous operations. The asyncRouter function addresses this limitation by wrapping route handlers so that any errors are caught and passed to the next middleware or error handler.

Usage

The asyncRouter function can be used to create a router instance that supports asynchronous route handlers. It provides a familiar interface similar to the standard Express router, but with enhanced error handling capabilities for asynchronous operations.

Example

Below is an example demonstrating how to use asyncRouter to define routes with asynchronous handlers:

const express = require('express');
const { asyncRouter } = require('./async-express/index.js');

const router = asyncRouter();

// Example asynchronous route handler for a GET request.
// There is no try/catch: asyncRouter catches rejected promises
// and forwards them to the error handling middleware.
router.get('/data', async (req, res) => {
    const data = await fetchDataFromDatabase();
    res.json(data);
});

// Example asynchronous route handler for a POST request
router.post('/data', async (req, res) => {
    const newData = await saveDataToDatabase(req.body);
    res.status(201).json(newData);
});

// Integrate the router into an Express application
const app = express();
app.use(express.json());
app.use('/api', router);

// Error handling middleware
app.use((err, req, res, next) => {
    console.error(err.stack);
    res.status(500).send('Something went wrong!');
});

app.listen(3000, () => {
    console.log('Server is running on port 3000');
});

async function fetchDataFromDatabase() {
    // Simulate a database fetch operation
    return new Promise((resolve) => setTimeout(() => resolve({ key: 'value' }), 1000));
}

async function saveDataToDatabase(data) {
    // Simulate a database save operation
    return new Promise((resolve) => setTimeout(() => resolve({ id: 1, ...data }), 1000));
}

In this example, asyncRouter is used to create a router instance that handles asynchronous route handlers for both GET and POST requests. The handlers perform database operations; any errors they throw are caught by asyncRouter and passed to the error handling middleware, so the handlers themselves need no try/catch.

By leveraging asyncRouter, developers can effortlessly integrate asynchronous route handlers into their Express applications, ensuring robust error handling and a cleaner, more maintainable codebase. This utility is particularly useful for modern web applications that rely heavily on asynchronous operations, such as database interactions, API calls, and other I/O-bound tasks.

db/auto_publish.js

onEntityCreate

The onEntityCreate function is an essential part of the db/auto_publish.js module, responsible for handling the creation of new entities within the database. This function ensures that each newly created entity is properly logged and tracked, facilitating seamless integration with other components of the system that depend on real-time data consistency and event-driven architecture.

When an entity is created, the onEntityCreate function is triggered, initiating a series of operations designed to manage and record this event accurately. The function first extracts the entity_name from the provided data, which identifies the type of entity being created. It then checks if the entity type is one that requires tracking by invoking the isTrackedEntity method. This step is crucial as it determines whether the creation event needs to be logged for future reference and auditing purposes.

If the entity type is indeed tracked, the function proceeds to gather a list of inserted entity IDs from the data.result.insertedIds array. This array contains the unique identifiers assigned to each entity by the database upon successful insertion. Additionally, the function retrieves the creation_date from the first entity in the data.entities array, defaulting to the current timestamp if no specific creation date is provided.

The requestor_id is another critical piece of information managed by onEntityCreate. It identifies the user or system that initiated the entity creation. This ID is derived from various sources within the data, including the requestor_id field, the target updater field specified in the options, or the default service name configured in the database settings.

With these details in hand, the function constructs a list of creation objects, each representing an entity creation event. These objects include the entity_name, entity_id, requestor_id, and creation_date. This list is then passed to the createEntities method, which inserts the creation objects into a dedicated collection within the database. The no_creation_date option is set to true to prevent the insertion of a new creation date, as it is already specified in the creation objects.
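
A condensed sketch of that flow is shown below. The helper names (isTrackedEntity, createEntities, getNow) and the data shape come from the descriptions in this document, while the tracking collection name ('entity_creation') and the options.updater field are illustrative assumptions:

async function onEntityCreate(data, options = {}) {
    // Only tracked entity types are logged
    if (!self.isTrackedEntity(data.entity_name)) return;

    // Creation date from the first entity, falling back to the current timestamp
    const creation_date = (data.entities[0] && data.entities[0].creation_date) || getNow();
    // Requestor resolved from the data, the options, or the configured service name
    const requestor_id = data.requestor_id || options.updater || self.service_name;

    // One creation record per inserted ID
    const creations = Object.values(data.result.insertedIds).map((entity_id) => ({
        entity_name: data.entity_name,
        entity_id,
        requestor_id,
        creation_date,
    }));

    // no_creation_date: the records already carry their own creation_date
    await self.createEntities('entity_creation', creations, null, { no_creation_date: true });
}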

By meticulously logging each entity creation, the onEntityCreate function supports robust data tracking and auditing capabilities. This functionality is vital for maintaining data integrity and ensuring that all changes within the database are transparently recorded. This process also enables other system components to react to creation events in real-time, facilitating a dynamic and responsive application architecture.

In summary, the onEntityCreate function plays a pivotal role in the db/auto_publish.js module by managing the creation of new entities, ensuring they are accurately tracked, and recording essential details for auditing and real-time processing purposes. This functionality underscores the importance of meticulous event handling in maintaining a reliable and consistent database system.

onEntityUpdate

The onEntityUpdate function is an integral part of the db/auto_publish.js module, playing a crucial role in managing database event subscriptions and publishing updates. This function is designed to handle the updates of entities within the database, ensuring that any changes made to an entity are appropriately processed and published to the relevant subscribers.

When an entity is updated, the onEntityUpdate function is invoked to manage the update process. It begins by verifying if the entity in question is tracked using the isTrackedEntity method. This check is essential to determine whether the entity's updates should be monitored and published. If the entity is tracked, the function proceeds to filter the update object. This filtering process involves retaining only the attributes specified in the whitelist and excluding those in the blacklist, ensuring that only the relevant data is processed.

The filtering is accomplished by the filterTrackedEntityUpdate function, which takes into account the entity's specific tracking options, including attribute whitelists and blacklists. These options help in refining the update object to include only the necessary attributes, thereby maintaining data integrity and relevance.

Once the update object is filtered, the function checks if there are any attributes left to be published. If the filtered update object contains valid attributes, the function constructs an update object that includes the entity name, entity ID, the filtered data, the requestor ID, and the update date. The requestor ID is determined from various sources, such as the data object, options, or the database service name, ensuring that the origin of the update is accurately recorded.

The constructed update object is then created in the database using the createEntity method. This method inserts the update object into the specified collection, with an option to bypass the creation date if necessary. By doing so, the function ensures that the update is recorded and can be retrieved or referenced in future operations.
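
Following the same conventions, the update path might be sketched like this; filterTrackedEntityUpdate is the filtering helper named above, and the collection name 'entity_update' is an assumption:

async function onEntityUpdate(data, options = {}) {
    if (!self.isTrackedEntity(data.entity_name)) return;

    // Keep whitelisted attributes and drop blacklisted ones
    const filtered = filterTrackedEntityUpdate(data.entity_name, data.update);
    if (_.isEmpty(filtered)) return; // nothing left to publish

    const update_record = {
        entity_name: data.entity_name,
        entity_id: data.entity_id,
        update: filtered,
        requestor_id: data.requestor_id || options.updater || self.service_name,
        update_date: getNow(),
    };
    // no_creation_date: the record carries its own update_date
    await self.createEntity('entity_update', update_record, null, { no_creation_date: true });
}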

In summary, the onEntityUpdate function is a critical component of the auto-publishing mechanism within the db/auto_publish.js module. It meticulously handles entity updates by verifying tracking status, filtering update attributes, constructing update objects, and recording these updates in the database. This process ensures that entity changes are accurately captured and disseminated to the appropriate subscribers, maintaining the integrity and consistency of the data across the system.

onEntityDelete

The onEntityDelete function is a crucial part of the db/auto_publish.js module, responsible for handling the deletion events of entities within the database. This function is designed to ensure that any necessary actions are performed when an entity is deleted, such as publishing relevant information to subscribed clients and maintaining data integrity across the system.

Upon the deletion of an entity, the onEntityDelete function is triggered. This function first checks if the entity type is included in the configured whitelist or not present in the blacklist, ensuring that only the appropriate entities are processed. This filtering mechanism is essential for controlling which entities trigger the event, thereby preventing unnecessary operations on irrelevant data.

Once an entity is deemed eligible, the function proceeds to gather the necessary information about the entity. This includes fetching the entity's current state from the database, which is crucial for providing accurate and up-to-date information to any subscribed clients. The function then constructs a message containing the details of the deleted entity, formatted according to the predefined schema.

The next step involves publishing the constructed message to the relevant message queues or topics. This is typically done using a message broker such as NATS, which is configured to handle the inter-service communication within the project. By publishing the deletion event, the system ensures that all interested parties are notified of the change, allowing them to take appropriate actions, such as updating their local caches or triggering further business logic.

Additionally, the onEntityDelete function includes error handling mechanisms to manage any issues that may arise during the process. This ensures that any failures in publishing the message or fetching the entity data are logged and managed appropriately, preventing the system from entering an inconsistent state.
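
The publishing step might look roughly like the sketch below; shouldPublish, getEntity, and the subject naming scheme are hypothetical stand-ins for the module's actual whitelist/blacklist check, entity fetch, and NATS topic layout:

async function onEntityDelete(data) {
    // Whitelist/blacklist filtering decides whether this entity type is published
    if (!self.shouldPublish(data.entity_name)) return;

    try {
        // Fetch the last known state so subscribers receive a complete picture
        const entity = await self.getEntity(data.entity_name, data.entity_id);
        const message = {
            event: 'delete',
            entity_name: data.entity_name,
            entity_id: data.entity_id,
            entity,
            deleted_at: getNow(),
        };
        await mqz.publish(`${data.entity_name}.delete`, message);
    } catch (err) {
        // A publishing failure is logged rather than thrown, so the delete
        // operation itself is not left in an inconsistent state
        logger.error('auto_publish delete failed', { err });
    }
}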

In summary, the onEntityDelete function is a vital component that manages the lifecycle of entity deletions within the db/auto_publish.js module. It ensures that deletions are processed correctly, relevant information is published to subscribers, and any errors are handled gracefully, maintaining the overall integrity and reliability of the system.

db/cache.js

onEventSubscribe

The onEventSubscribe function is an integral part of the db/cache.js module, serving as a mechanism to listen for specific database events. This function is designed to register event listeners for various database operations such as 'create', 'update', and 'delete'. By subscribing to these events, the system can perform additional actions or trigger workflows in response to changes in the database.

When utilizing onEventSubscribe, a callback function is associated with a particular event name. This callback function will be executed whenever the specified event occurs. The function ensures that each event type can have multiple listeners, enabling a flexible and extensible event-handling architecture.

Here's a breakdown of how onEventSubscribe works:

  1. Event Registration: The function first checks if the event name already has a list of associated listeners. If not, it initializes an empty array for that event. This ensures that multiple callbacks can be registered for the same event type without overwriting existing listeners.

  2. Callback Association: Once the event's listener array is confirmed to exist, the provided callback function is added to this array. This allows the system to maintain a collection of functions that should be executed when the event is triggered.

  3. Event Handling: When an event occurs, all associated callbacks in the listener array are executed in the order they were registered. This enables the execution of multiple operations in response to a single event, such as logging, data synchronization, or triggering other workflows.

The onEventSubscribe function is essential for creating a dynamic and responsive system that can adapt to changes in the database in real-time. By leveraging this function, developers can implement custom logic that responds to data modifications, ensuring that the application remains consistent and up-to-date with the latest database state.

Below is a simplified example of how onEventSubscribe might be implemented:

/**
 * Subscribes a callback function to a specific database event.
 * @param {string} event_name - The name of the event to subscribe to (e.g., 'create', 'update', 'delete').
 * @param {function} cb - The callback function to execute when the event occurs.
 */
function onEventSubscribe(event_name, cb) {
    if (!self.event_listeners[event_name]) {
        self.event_listeners[event_name] = [];
    }
    self.event_listeners[event_name].push(cb);
}

In this example, the event_name parameter specifies the type of event to listen for, while the cb parameter is the callback function that will be executed when the event occurs. The self.event_listeners object maintains a list of listeners for each event type, ensuring that multiple callbacks can be registered and executed in sequence.

By incorporating onEventSubscribe into the db/cache.js module, the back-flip project ensures that it has a robust and flexible mechanism for handling database events, enabling developers to build responsive and dynamic applications.

db/index.js

initialize

The initialization function is a critical component of the database management system in the back-flip project. This function is responsible for setting up the database connection and configuring various parameters necessary for the smooth operation of database interactions. The initialization process ensures that the database is ready to handle incoming requests and perform CRUD (Create, Read, Update, Delete) operations efficiently.

The function begins by checking if the database has already been initialized to prevent redundant operations. If the database is not yet initialized, it proceeds with the initialization steps. The logger is utilized to provide debug information about the initialization process, which can be helpful for troubleshooting and monitoring.

Several options can be configured during the initialization process:

  1. Database URI: This parameter specifies the connection string used to connect to the MongoDB server. It allows the flexibility to connect to different database instances as required by the environment (e.g., development, testing, production).

  2. Database Name: This parameter sets the name of the database to which the application will connect. It is essential for organizing data within specific databases, especially when multiple databases are managed by the same MongoDB server.

  3. Service Name: The service name is used to identify the specific service or application instance that is connecting to the database. This can be useful for logging and monitoring purposes, providing a clear context of which service is performing database operations.

  4. Projection Fields: This option allows the specification of additional fields that should be included in database projections. Projections are used to limit the amount of data returned by queries, improving performance by fetching only the necessary fields.

  5. Default Sort: This parameter sets the default sorting order for query results. By defining a default sort order, the application can ensure consistent and predictable data retrieval, which can be particularly important for paginated results or ordered data displays.

The initialization function also handles setting up the database client and ensuring that the connection is established correctly. It leverages the moment library to obtain the current UTC date, which can be used for timestamping and other time-related operations throughout the application.
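
Based on the options listed above, initializing the module might look like the following; the exact option keys (db_uri, db_name, service_name, projection_fields, default_sort) are inferred from the descriptions and may differ from the actual API:

const db = require('back-flip/db');

db.initialize({
    db_uri: process.env.MONGODB_URI || 'mongodb://localhost:27017',
    db_name: 'backflip',
    service_name: 'my-service',
    // Extra fields included in every query projection
    projection_fields: ['owner_id'],
    // Ordering applied when a query does not specify a sort
    default_sort: { creation_date: -1 },
});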

In summary, the initialization function is designed to configure and establish a robust connection to the MongoDB database, providing essential settings and ensuring that the database is prepared for efficient and reliable data operations. Its flexible configuration options and detailed logging make it a vital part of the back-flip project's database management system.

connect

The connect function is a critical component within the db/index.js file, responsible for establishing and managing the connection to the MongoDB database. This function is designed to ensure that the application can reliably connect to the database, handle connection retries, and manage the state of the database connection efficiently.

The function accepts two parameters: a callback function (cb) and an options object (options). The callback function is invoked once the connection process is complete, either successfully or with an error. The options object allows for customization of the connection process, including settings for connection timeouts and other configurations.

Upon invocation, the connect function first checks if a connection is already in the process of being established by evaluating the self.connecting_db flag. If a connection attempt is already underway, the function enters a loop, periodically waiting (using utils.wait(100)) until the ongoing connection attempt is resolved.

If no connection attempt is in progress, the function sets the self.connecting_db flag to true to indicate that a connection attempt is now being made. It then proceeds to log the connection attempt using the logger.debug method, providing visibility into the connection process for debugging purposes.

The core of the connection process involves invoking the MongoClient.connect method from the MongoDB library, with the database URI (self.db_uri) and connection timeout settings. The connection timeout can be customized through the options.connection_timeout parameter, defaulting to 1000 milliseconds if not specified.

Once the connection is successfully established, the function assigns the resulting MongoDB client instance to self.client and the database instance to self.db. It also sets up an event listener on the MongoDB client to handle the 'close' event, logging an error message if the database connection is unexpectedly closed.

To ensure the database connection remains active and responsive, the function sends a ping command (self.db.command({ ping: 1 })) to the database. This step verifies the connection's health and responsiveness.

After successfully establishing the connection and verifying its health, the function invokes the callback function (cb) with the database instance (self.db) as an argument, signaling that the connection process is complete. It then returns the database instance (self.db) to the caller.

In case of any errors during the connection attempt, the function includes error handling mechanisms to log the error and reset the self.connecting_db flag, ensuring that subsequent connection attempts can be made.
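
Putting those steps together, a simplified version of the connection logic could look like this sketch; self holds the module state, and utils.wait and logger are the helpers referenced throughout this document:

async function connect(cb, options = {}) {
    // Wait out any connection attempt already in flight
    while (self.connecting_db) {
        await utils.wait(100);
    }
    if (self.db) {
        if (cb) cb(self.db);
        return self.db;
    }

    self.connecting_db = true;
    try {
        logger.debug('connecting to MongoDB', { uri: self.db_uri });
        self.client = await MongoClient.connect(self.db_uri, {
            connectTimeoutMS: options.connection_timeout || 1000,
        });
        self.db = self.client.db(self.db_name);
        // Log unexpected closures of the underlying connection
        self.client.on('close', () => logger.error('database connection closed'));

        // Health check: make sure the server answers before handing out the handle
        await self.db.command({ ping: 1 });
    } catch (err) {
        logger.error('database connection failed', { err });
        throw err;
    } finally {
        // Reset the flag so later attempts are not blocked
        self.connecting_db = false;
    }

    if (cb) cb(self.db);
    return self.db;
}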

Overall, the connect function is designed to provide a robust and reliable mechanism for establishing and managing the MongoDB database connection, with support for customization and error handling to ensure smooth operation of the application.

disconnect

The disconnect function is a critical part of the database management module in the back-flip project. This function is responsible for gracefully closing the connection to the MongoDB database, ensuring that all resources are properly released and any necessary cleanup tasks are performed. Proper disconnection from the database is essential to maintain the stability and performance of the application, especially in environments where multiple connections are frequently opened and closed.

When invoked, the disconnect function performs a series of steps to ensure a safe and thorough disconnection process. It begins by checking if there is an active MongoDB client instance. If an active client is detected, the function proceeds to close the connection using the close method provided by the MongoDB client library. This method ensures that all ongoing operations are completed or terminated appropriately before closing the connection.

Throughout the disconnection process, the function logs significant events and errors using the winston logger configured in the project. Logging these events is crucial for monitoring and debugging purposes, as it provides insights into the state of the database connection and any issues that might arise during the disconnection process. For example, a successful disconnection is logged with an informational message, while any errors encountered during the process are logged with an error message, including details about the nature of the error.

In addition to closing the MongoDB client connection, the disconnect function also updates the internal state of the database management module. It sets the client instance to null, indicating that there is no active connection, and resets any flags or variables used to track the connection status. This ensures that subsequent attempts to connect to the database will not be affected by stale or incorrect state information.

The function also accepts a callback parameter, which is invoked once the disconnection process is complete. This callback mechanism allows other parts of the application to be notified when the database connection has been successfully closed, enabling them to perform any necessary follow-up actions. For instance, the application might need to release other resources or update its internal state based on the disconnection event.
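
A minimal sketch of the disconnection flow, under the same module-state assumptions as the connect sketch above:

async function disconnect(cb) {
    if (self.client) {
        try {
            // close() waits for in-flight operations before tearing down sockets
            await self.client.close();
            logger.info('database connection closed');
        } catch (err) {
            logger.error('error while closing database connection', { err });
        }
        // Reset internal state so a later connect() starts clean
        self.client = null;
        self.db = null;
        self.connecting_db = false;
    }
    if (cb) cb();
}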

In summary, the disconnect function is a well-structured and essential component of the back-flip project's database management module. It ensures that the MongoDB database connection is closed safely and efficiently, logging important events and errors, updating the internal state, and providing a callback mechanism for further actions. Proper implementation and usage of this function are critical for maintaining the stability and performance of the application, especially in scenarios with frequent database connection operations.

getCollection

The getCollection function is a crucial utility for interacting with MongoDB collections within the back-flip project. This asynchronous function is designed to retrieve a reference to a specified collection in the MongoDB database, facilitating various database operations such as queries, updates, and deletions.

When invoked, the function first ensures that a connection to the MongoDB instance is established. This is done by calling the connect method, which handles the initialization and connection processes. If the connection is successful, the function proceeds to fetch the collection reference using the db.collection method, where db is the MongoDB database instance and entity_name is the name of the collection to be retrieved.

The function accepts two parameters: entity_name and an optional callback function cb. The entity_name parameter is a string that specifies the name of the collection to be accessed. The callback function is intended to handle the results of the collection retrieval process, providing an error-first callback pattern commonly used in Node.js.

Upon successful retrieval of the collection, the function executes the callback function with null as the first argument (indicating no error) and the collection reference as the second argument. If the collection retrieval fails at any point, an error object is created using the DatabaseError class, and the callback function is invoked with this error object as the first argument.

The getCollection function is particularly useful in scenarios where multiple database operations need to be performed on a specific collection. By providing a consistent and reliable way to access MongoDB collections, this function helps streamline database interactions and ensures that all necessary connections are properly managed.
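
In code, the retrieval flow described above might look like this; the DatabaseError constructor signature is an assumption:

async function getCollection(entity_name, cb) {
    try {
        // Ensure the connection is established before handing out a collection
        await self.connect();
        const collection = self.db.collection(entity_name);
        if (cb) cb(null, collection);
        return collection;
    } catch (err) {
        const db_err = new DatabaseError(`unable to get collection ${entity_name}`, err);
        if (cb) cb(db_err);
        throw db_err;
    }
}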

Key points to note about the getCollection function:

  • It ensures a connection to the MongoDB instance before attempting to retrieve the collection.
  • It uses an error-first callback pattern to handle the results of the collection retrieval process.
  • It creates a DatabaseError object to encapsulate any errors that occur during the retrieval process.
  • It is designed to be asynchronous, allowing for non-blocking database operations.

Overall, the getCollection function is an essential component of the back-flip project's database management system, providing a robust and efficient mechanism for accessing MongoDB collections.

onEventSubscribe

The onEventSubscribe function is a critical component within the db/index.js file, responsible for managing event subscriptions in the MongoDB database. This function allows the system to listen for specific database events such as 'create', 'update', and 'delete', and execute corresponding callback functions when these events occur.

The function takes two parameters: event_name and cb. The event_name parameter specifies the type of event to subscribe to, while cb is the callback function that gets executed when the specified event is triggered.

Internally, onEventSubscribe maintains an event listener registry, which is essentially a collection of event names mapped to their respective callback functions. When an event occurs, the function iterates through the list of registered callbacks for that event and invokes each one, passing along any relevant data.

Here’s a step-by-step breakdown of how onEventSubscribe operates:

  1. Event Listener Initialization:

    • When the function is first called with a specific event_name, it checks if an array for that event already exists in the event listener registry.
    • If it doesn’t exist, the function initializes an empty array for that event. This array will hold all the callback functions that need to be executed when the event occurs.
  2. Adding Callback Functions:

    • The provided callback function cb is then added to the array of callbacks for the specified event.
    • This allows multiple callbacks to be registered for the same event, enabling the system to perform various actions in response to a single event.
  3. Event Triggering:

    • When an event occurs (e.g., an entity is created, updated, or deleted), the corresponding event handler function is invoked.
    • The event handler function retrieves the array of callbacks for the event from the registry and iterates through it, executing each callback function in the order they were registered.
  4. Data Handling:

    • The callback functions are executed with the event data as their argument. This data typically includes details about the database operation that triggered the event.
    • This mechanism ensures that all registered callbacks receive the necessary context to perform their tasks.
  5. Use Cases:

    • Common use cases for onEventSubscribe include logging changes to the database, updating cache entries, notifying other system components of changes, and triggering additional business logic.
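
As a usage illustration, hypothetical subscribers for cache invalidation and audit logging might be registered as follows; the shape of the event data object is assumed from the descriptions above:

const db = require('back-flip/db');

// Invalidate a cache entry whenever an entity is updated
db.onEventSubscribe('update', (data) => {
    cache.invalidate(data.entity_name, data.entity_id);
});

// Record every deletion for auditing
db.onEventSubscribe('delete', (data) => {
    logger.info('entity deleted', { entity: data.entity_name, id: data.entity_id });
});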

By providing a flexible and efficient way to handle database events, onEventSubscribe plays a pivotal role in the back-flip project’s architecture. It ensures that the system can react dynamically to changes in the database, thereby maintaining data integrity and enabling real-time processing of data-driven events.

onEventUnsubscribe

The onEventUnsubscribe function, designed for unsubscribing from database events, plays a crucial role in managing event listeners dynamically. When working with database-driven applications, it's often necessary to subscribe to events such as create, update, and delete to perform operations like logging, caching, or triggering additional workflows. However, there are scenarios where you need to remove these subscriptions: to prevent memory leaks, avoid redundant operations, or simply because the listener is no longer required.

This function accepts two parameters: the name of the event and the callback function that was originally subscribed to the event. The event name is a string that identifies the specific event you want to unsubscribe from, such as 'create', 'update', or 'delete'. The callback function is the one that was previously registered to handle the event.

The function begins by checking if there are any listeners currently registered for the specified event. This is done by looking up the event name in the event_listeners object, which is a dictionary where keys are event names and values are arrays of callback functions. If the event name is found, the function proceeds to filter out the specified callback function from the array of listeners. This is achieved using the _.filter method from the lodash library, which creates a new array excluding the specified callback.

By removing the callback function from the list of listeners, the function effectively unsubscribes it from the event. This means that the callback will no longer be invoked when the event occurs. This mechanism is essential for maintaining the performance and stability of the application, especially in long-running processes where event subscriptions might change dynamically.
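
Following that description, a minimal implementation sketch using lodash's filter:

function onEventUnsubscribe(event_name, cb) {
    if (self.event_listeners[event_name]) {
        // Keep every listener except the one being removed
        self.event_listeners[event_name] = _.filter(
            self.event_listeners[event_name],
            (listener) => listener !== cb
        );
    }
}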

In summary, this function provides a robust way to manage event subscriptions by allowing you to dynamically remove listeners when they are no longer needed. This not only helps in optimizing resource usage but also ensures that the application behaves as expected by preventing unintended side effects from stale or redundant event handlers.

onEvent

The onEvent function is a crucial part of the db/index.js file, responsible for handling and processing database events. This function acts as a centralized event dispatcher, ensuring that registered callbacks are invoked whenever specific events occur within the database. The function takes two parameters: event_name, a string representing the name of the event, and data, an object containing the event data to be processed.

When an event is triggered, the onEvent function checks if there are any listeners subscribed to the specified event name. If listeners are found, the function iterates through the list of callbacks and executes each one, passing the event data as an argument. This mechanism allows for flexible and modular handling of database events, enabling various parts of the application to respond to changes in the database state.

The implementation of the onEvent function ensures that event handling is both efficient and reliable. By maintaining a registry of event listeners, the function can quickly determine which callbacks need to be executed for a given event. This approach minimizes the overhead associated with event processing and ensures that all relevant parts of the application are notified of database changes in a timely manner.
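
A compact sketch of this dispatch loop, assuming the same event_listeners registry used by onEventSubscribe:

function onEvent(event_name, data) {
    const listeners = self.event_listeners[event_name] || [];
    // Execute callbacks in registration order, each receiving the event data
    for (const cb of listeners) {
        cb(data);
    }
}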

In addition to its core functionality, the onEvent function plays a vital role in the broader event-driven architecture of the back-flip project. By facilitating communication between different components of the application, the function helps to maintain a cohesive and responsive system. This is particularly important in scenarios where multiple parts of the application need to react to the same event, such as updating the user interface, logging changes, or triggering additional workflows.

Overall, the onEvent function is a key component of the back-flip project's database management system, providing a robust and scalable solution for handling database events. Its design ensures that the application can efficiently process and respond to changes in the database, supporting a wide range of use cases and enabling seamless integration with other parts of the system.

createEntity

The createEntity function plays a crucial role in the back-flip project by facilitating the creation of a single entity object within a specified collection. This function is designed to handle the insertion of new entity data into the MongoDB database, ensuring that each entity is appropriately structured and stored.

When invoked, the function requires several parameters: the entity_name, which specifies the collection where the entity will be stored; the obj, representing the entity object to be created; and an optional callback function (cb) along with additional options (options) that can modify the behavior of the creation process.

Upon execution, the function initiates by logging the creation attempt, capturing the entity_name for debugging and traceability purposes. It then delegates the actual creation process to the createEntities method, passing along the entity object encapsulated within an array. This delegation allows for consistent handling of both single and multiple entity creation scenarios, leveraging the same underlying logic and ensuring uniformity in how entities are processed and stored.

Key steps in the creation process include:

  1. Timestamp Assignment: If the no_creation_date option is not set, the function assigns a creation_date timestamp to the entity, marking the exact time of its creation. This timestamp is crucial for maintaining accurate records of when entities are added to the database.
  2. Database Insertion: The function then inserts the entity into the specified collection using MongoDB's insertMany method. This step ensures that the entity is persistently stored within the database.
  3. ID Assignment: Post insertion, the function maps the inserted IDs from the database to the corresponding entities, ensuring that each entity object is updated with its unique identifier.
  4. Event Emission: The function emits a "create" event, signaling other parts of the system about the new entity's creation. This event includes details such as the entity_name, the entities themselves, the result of the insertion, and any options that were applied. Emitting this event allows for subsequent actions, such as caching or publishing updates, to be triggered automatically.

Error handling is an integral part of the function, ensuring that any issues encountered during the creation process are captured and appropriately managed. If an error occurs, it is wrapped within a DatabaseError object, providing a consistent error reporting mechanism that can be leveraged for debugging and user notifications.
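
Given that delegation, the function itself can stay very small; a sketch:

async function createEntity(entity_name, obj, cb, options = {}) {
    logger.debug('createEntity', { entity_name });
    // Single-entity creation reuses the batch path with a one-element array
    return self.createEntities(entity_name, [obj], cb, options);
}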

Overall, the createEntity function is a foundational component of the back-flip project's database management system, enabling seamless and efficient creation of new entity records while ensuring consistency, traceability, and error resilience throughout the process.

createEntities

The createEntities function is a crucial method within the db/index.js file, designed to handle the batch creation of entity objects within a specified collection. This function is highly efficient for scenarios where multiple entities need to be inserted into the database simultaneously, as it minimizes the overhead associated with individual insert operations.

When invoking createEntities, the function requires several parameters: the name of the entity collection, a list of entity objects to be created, an optional callback function, and additional options for customization. The entity objects are expected to be in JSON format, each representing a distinct record to be added to the collection.

The process begins by logging the initiation of the entity creation operation, capturing essential input details such as the entity name and the list of entities. This logging is facilitated by the logger.debug method, which ensures that the operation's parameters are recorded for debugging and auditing purposes.

If the no_creation_date option is not specified or set to false, the function automatically assigns a creation date to each entity object. This timestamp is generated using the getNow utility function, ensuring that all entities have a consistent and accurate creation date, which is crucial for tracking and historical purposes.

Once the entities are prepared, the function proceeds to insert them into the specified collection using the insertMany method. This MongoDB operation is optimized for batch inserts, significantly improving performance compared to individual insert operations. Upon successful insertion, the function updates each entity object with its corresponding _id from the result.insertedIds array, ensuring that each entity is uniquely identifiable within the collection.

After the entities are inserted, the function triggers an event to notify other parts of the system about the creation of these new entities. This is achieved through the onEvent method, which broadcasts the "create" event along with relevant details such as the entity name, the list of created entities, the insertion result, and any additional options. This event-driven approach facilitates real-time updates and synchronization across different components of the application.

In case of an error during the insertion process, the function handles it gracefully by logging the error and wrapping it in a DatabaseError object. This ensures that any issues are promptly identified and reported, allowing for effective troubleshooting and resolution.
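A condensed sketch of this flow, assuming the logger, getNow, onEvent, and DatabaseError helpers described above plus a getCollection helper for resolving the MongoDB collection, might look like this:

createEntities: async (entity_name, entities, cb, options = {}) => {
    logger.debug("createEntities", { entity_name, entities });
    try {
        if (!options.no_creation_date) {
            // Stamp every entity with the same creation date for consistency
            const now = getNow();
            entities.forEach((entity) => { entity.creation_date = now; });
        }
        const collection = await self.getCollection(entity_name); // assumed helper
        const result = await collection.insertMany(entities);
        // Map the generated _ids back onto the entity objects
        entities.forEach((entity, index) => { entity._id = result.insertedIds[index]; });
        await self.onEvent("create", entity_name, entities, result, options);
        if (cb) cb(null, result);
        return result;
    } catch (err) {
        const db_err = new DatabaseError("createEntities failed", err); // assumed constructor
        if (cb) cb(db_err);
        else throw db_err;
    }
},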

Overall, the createEntities function is a robust and efficient method for batch entity creation, providing essential features such as automatic timestamping, event broadcasting, and comprehensive error handling. Its design aligns with the project's goal of maintaining high performance and reliability in database operations.

saveEntity

The saveEntity function is a critical component designed to persist an entity object into the specified collection within the MongoDB database. This function ensures that the entity is saved with its full object representation, including updates to its metadata such as the last modification date. It is essential for maintaining the integrity and consistency of the stored data.

When invoking this function, it requires three primary parameters: the name of the entity, the entity object itself, and an optional callback function; an options object may also be supplied and is passed through to the emitted event. The entity object is expected to be a well-formed object that adheres to the schema conventions of the collection it is being saved into.
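As an illustration (entity name and fields hypothetical):

// Assuming `user` was previously loaded from the "user" collection and modified
user.email = "ada@example.com";

saveEntity("user", user, (err, result) => {
    if (err) {
        console.error("Save failed:", err);
    } else {
        console.log("Documents replaced:", result.modifiedCount);
    }
});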

Upon execution, the function begins by logging the operation using the logger.debug method to provide visibility into the inputs being processed. This logging step is crucial for debugging and tracking purposes, especially in a production environment where monitoring database operations is vital.

The entity object is then cloned to create a deep copy, ensuring that any modifications made during the save process do not affect the original object passed to the function. This cloned object is subsequently updated with a last_modified timestamp, which is generated using the getNow utility function. This timestamp is critical for tracking when the entity was last altered, providing a chronological reference for future operations and audits.

Next, the function performs the actual database operation by calling collection.replaceOne. This method attempts to find an existing document in the collection that matches the entity's unique identifier (_id) and, if one is found, replaces it with the new entity object. Note that if no matching document is found, MongoDB's replaceOne does not raise an error by itself; it simply reports a matchedCount of 0 in its result, which callers can inspect to detect a save that touched nothing.

The replaceOne method returns a result object, which includes information about the operation's outcome. This result is logged for further analysis and debugging. In case of an error during the database operation, a DatabaseError is instantiated with relevant details, and the error is thrown to be handled by the calling function or middleware.

Additionally, the function triggers an event by invoking self.onEvent with the event type "create," along with the entity name, the updated entity object, the result of the database operation, and any options provided. This event handling mechanism is part of a broader event-driven architecture that allows other components within the system to react to changes in the database, such as updating caches, publishing messages, or triggering additional workflows.

The optional callback function, if provided, is called with the error (if any) and the result of the operation. This allows for asynchronous handling of the save operation's outcome, enabling the calling code to respond appropriately based on whether the operation succeeded or failed.
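Putting the pieces together, a minimal sketch of saveEntity, under the same assumptions as the createEntities sketch above (lodash's cloneDeep for the deep copy, and a getCollection helper), could read:

saveEntity: async (entity_name, entity, cb, options = {}) => {
    logger.debug("saveEntity", { entity_name, entity });
    try {
        // Deep-clone so the caller's object is not mutated during the save
        const to_save = _.cloneDeep(entity);
        to_save.last_modified = getNow();
        const collection = await self.getCollection(entity_name); // assumed helper
        const result = await collection.replaceOne({ _id: to_save._id }, to_save);
        logger.debug("saveEntity result", { result });
        // Emits the "create" event type, as described above
        await self.onEvent("create", entity_name, to_save, result, options);
        if (cb) cb(null, result);
        return result;
    } catch (err) {
        const db_err = new DatabaseError("saveEntity failed", err); // assumed constructor
        if (cb) cb(db_err);
        else throw db_err;
    }
},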

In summary, the saveEntity function is a robust and comprehensive utility for persisting entity objects in a MongoDB collection. It ensures data integrity through cloning and timestamping, provides extensive logging for transparency, and integrates seamlessly with the system's event-driven architecture to notify other components of changes. This function is a cornerstone of the database management capabilities within the back-flip project, facilitating reliable and consistent data storage.

updateEntityFromQuery

The updateEntityFromQuery function is a crucial method within the db/index.js file, designed to facilitate the partial update of an entity in the MongoDB database based on a specified query. This function provides a flexible and efficient way to modify existing records without the need to replace entire documents, which can be particularly useful in scenarios where only a subset of fields requires updating.

Function Signature

updateEntityFromQuery: async (entity_name, query, obj, cb, options = {})

Parameters

  • entity_name: This parameter is a string that specifies the name of the entity to be updated. It is used to identify the collection within the database where the update operation will be performed.

  • query: An object representing the query criteria used to locate the entity to be updated. This query is typically structured as a MongoDB query object and can include various conditions to precisely target the desired entity.

  • obj: The update object containing the fields and values that need to be modified. This object can be a partial representation of the entity, including only the fields that require updating.

  • cb: A callback function that is executed once the update operation is complete. This callback is used to handle the result of the update operation, whether it is successful or encounters an error.

  • options: An optional parameter providing additional configuration for the update operation. This can include settings such as upsert, which determines whether to insert a new document if no matching document is found, and delete_null_fields, which specifies whether fields with null values should be removed from the document.

Detailed Functionality

  1. Logging and Flattening: The function begins by logging the inputs for debugging purposes. If the data_flattening option is enabled, the update object is transformed into a flattened structure using a utility function. This is useful for updating nested fields within the document.

  2. Cloning and Event Data: The update object is deep-cloned to ensure that the original object remains unaltered. This clone is also used to capture the event data, which will be emitted later to notify other parts of the system about the update.

  3. Update Construction: The update object is constructed using MongoDB's $set operator to specify the fields to be updated. If the delete_null_fields option is enabled, fields with null values are collected and included in the $unset operator, effectively removing them from the document (see the sketch after this list).

  4. Timestamp and Options: The last_modified field is updated with the current timestamp to reflect the time of the modification. Additional options for the update operation, such as upsert, are also configured based on the provided options parameter.

  5. Database Operation: The constructed update object and options are used to perform the updateOne operation on the MongoDB collection, targeting the document(s) that match the specified query.

  6. Event Emission: Upon successful completion of the update operation, an event is emitted to notify other parts of the system about the update. This event includes details such as the entity name, query, update data, result, and options.

  7. Error Handling: If an error occurs during the update operation, it is wrapped in a custom DatabaseError and passed to the callback function for further handling.
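To make steps 3 through 5 concrete, here is a simplified sketch of the update construction; the surrounding variables (obj, query, collection, options) are as described above, and lodash's cloneDeep and the getNow helper are assumed:

// Build the $set/$unset update from the partial update object
const update = { $set: _.cloneDeep(obj) };
update.$set.last_modified = getNow();
if (options.delete_null_fields) {
    const to_unset = {};
    for (const [key, value] of Object.entries(update.$set)) {
        if (value === null) {
            to_unset[key] = ""; // $unset ignores the value; only the key matters
            delete update.$set[key];
        }
    }
    if (Object.keys(to_unset).length > 0) {
        update.$unset = to_unset;
    }
}
// Apply the update to the first document matching the query
const result = await collection.updateOne(query, update, { upsert: !!options.upsert });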

Example Usage

const { ObjectId } = require("mongodb");

// _id values generated by MongoDB are ObjectIds, so wrap the hex string
const query = { _id: new ObjectId("60d21b4667d0d8992e610c85") };
const updateData = { status: "active", last_login: new Date(), temp_token: null };
const options = { upsert: true, delete_null_fields: true };

updateEntityFromQuery("User", query, updateData, (err, result) => {
    if (err) {
        console.error("Error updating entity:", err);
    } else {
        console.log("Entity updated successfully:", result);
    }
}, options);

In the above example, the updateEntityFromQuery function is used to update the status and last_login fields of a User entity. The upsert option ensures that a new document will be created if no matching document is found, and because delete_null_fields is enabled, the null temp_token field is removed from the document via $unset instead of being stored as null.

Overall, the updateEntityFromQuery function is a versatile and powerful tool for performing partial updates on entities within the MongoDB database, offering a high degree of flexibility and control over the update process.

deleteEntity

The function responsible for removing a single entity