
lightning-flow-scanner-core

v4.8.0

A rule engine capable of conducting static analysis on the metadata associated with Salesforce Lightning Flows, Process Builders, and Workflows.

Downloads: 16,278

Readme

Lightning Flow Scanner Banner

An Extensible Rule Engine capable of conducting static analysis on the metadata associated with Salesforce Lightning Flows, Process Builders, and Workflows. Used by the Lightning Flow Scanner Salesforce CLI Plugin and VS Code Extension.

Default Rules

| Rule (Configuration ID) | Description |
|--------------------------|-------------|
| Auto Layout (AutoLayout) | With Canvas Mode set to Auto-Layout, elements are spaced, connected, and aligned automatically, keeping your Flow neatly organized and saving you time. |
| Outdated API Version (APIVersion) | Introducing newer API components may lead to unexpected issues with older versions of Flows, as they might not align with the underlying mechanics. Starting from API version 50.0, the 'Api Version' attribute has been readily available on the Flow Object. To ensure smooth operation and reduce discrepancies between API versions, it is strongly advised to keep the API version of your flows up to date. |
| Copy API Name (CopyAPIName) | Maintaining multiple elements with a similar name, like 'Copy_X_Of_Element', can diminish the overall readability of your Flow. When copying and pasting these elements, it's crucial to remember to update the API name of the newly created copy. |
| DML Statement In A Loop (DMLStatementInLoop) | To prevent exceeding Apex governor limits, it is advisable to consolidate all your database operations, including record creation, updates, or deletions, at the conclusion of the flow. |
| Duplicate DML Operation (DuplicateDMLOperation) | When the flow executes database changes or actions between two screens, it's important to prevent users from navigating back between screens. Failure to do so may result in duplicate database operations being performed within the flow. |
| Flow Naming Convention (FlowName) | The readability of a flow is of utmost importance. Establishing a naming convention for the Flow Name significantly enhances findability, searchability, and maintains overall consistency. It is advisable to include at least a domain and a brief description of the actions carried out in the flow, for instance 'Service_OrderFulfillment'. |
| Hardcoded Id (HardcodedId) | Avoid hard-coding IDs as they are org-specific. Instead, pass them into variables at the start of the flow. You can achieve this by utilizing merge fields in URL parameters or employing a Get Records element. |
| Inactive Flow (InactiveFlow) | Like cleaning out your closet: deleting unused flows is essential. Inactive flows can still cause trouble, like accidentally deleting records during testing, or being activated as subflows within parent flows. |
| Missing Flow Description (FlowDescription) | Descriptions play a vital role in documentation. We highly recommend including details about where a flow is used and its intended purpose. |
| Missing Fault Path (MissingFaultPath) | At times, a flow may fail to execute a configured operation as intended. By default, the flow displays an error message to the user and notifies the admin who created the flow via email. However, you can customize this behavior by incorporating a Fault Path. |
| Missing Null Handler (MissingNullHandler) | When a Get Records operation doesn't find any data, it returns null. To ensure data validation, utilize a decision element on the operation result variable to check for a non-null result. |
| Process Builder (ProcessBuilder) | Salesforce is transitioning away from Workflow Rules and Process Builder in favor of Flow. Ensure you're prepared for this transition by migrating your organization's automation to Flow. Refer to the official documentation for more information on the transition process and the tools available. |
| SOQL Query In A Loop (SOQLQueryInLoop) | To prevent exceeding Apex governor limits, it is advisable to consolidate all your SOQL queries at the conclusion of the flow. |
| Unconnected Element (UnconnectedElement) | Unconnected elements that are not used by the Flow should be avoided to keep Flows efficient and maintainable. |
| Unused Variable (UnusedVariable) | To maintain the efficiency and manageability of your Flow, it's advisable to avoid including unconnected variables that are not in use. |
| Unsafe Running Context (UnsafeRunningContext) | This flow is configured to run in System Mode without Sharing. This system context grants all running users permission to view and edit all data in your org. Running a flow in System Mode without Sharing can lead to unsafe data access. |
| Same Record Field Updates (SameRecordFieldUpdates) | Much like before-save triggers, record-triggered flows running in a before context can update the triggering record by assigning to the $Record variable, without needing a separate DML element. |
| Trigger Order (TriggerOrder) | Guarantee your flow execution order with the Trigger Order property introduced in Spring '22. |

Core Functions

The index.ts file in this repository contains the core functionality of the Lightning Flow Scanner Core. Below is an overview of the main functions exported from this file:

getRules(ruleNames?: string[]): IRuleDefinition[]

This function retrieves the rule definitions used by the Lightning Flow Scanner. It takes an optional array of rule names as an argument and returns an array of IRuleDefinition objects representing the rules to be executed.
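A minimal sketch of calling it (assuming the function is exported from the package's top-level entry point; the rule names are the configuration IDs listed in the table above):

import { getRules } from "lightning-flow-scanner-core";

// All default rule definitions.
const allRules = getRules();

// Only a subset, selected by configuration ID.
const namingRules = getRules(["FlowName", "APIVersion"]);

console.log(`Loaded ${allRules.length} rules, ${namingRules.length} selected.`);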

parse(selectedUris: any): Promise<ParsedFlow[]>

The parse function parses the metadata of Salesforce Lightning Flows, Process Builders, and Workflows from the provided URIs. It returns a Promise that resolves to an array of ParsedFlow objects containing the parsed metadata.

scan(parsedFlows: ParsedFlow[], ruleOptions?: IRulesConfig): ScanResult[]

The scan function conducts static analysis on the parsed metadata of Lightning Flows, Process Builders, and Workflows using the specified rules. It takes an array of ParsedFlow objects and an optional IRulesConfig object as arguments and returns an array of ScanResult objects representing the results of the analysis.
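Taken together, parse and scan form the basic analysis pipeline. A hedged sketch, assuming top-level exports matching the signatures above (the flow file path is illustrative, and the optional IRulesConfig argument is omitted):

import { parse, scan } from "lightning-flow-scanner-core";

async function analyzeFlows(): Promise<void> {
  // Parse the metadata files into ParsedFlow objects (path is hypothetical).
  const parsedFlows = await parse([
    "force-app/main/default/flows/Service_OrderFulfillment.flow-meta.xml",
  ]);

  // Run the static analysis with the default rule set.
  const scanResults = scan(parsedFlows);
  for (const result of scanResults) {
    console.log(result);
  }
}

analyzeFlows().catch(console.error);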

fix(results: ScanResult[]): ScanResult[]

The fix function attempts to automatically fix certain issues identified during the static analysis. It takes an array of ScanResult objects as input and returns a modified array with any applicable fixes applied.
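Continuing the sketch above, the fix pass can be chained onto the scan output; which violations are auto-fixable depends on the individual rules, so treat this as illustrative:

import { fix, parse, scan } from "lightning-flow-scanner-core";

// Scan the given flow files and apply whatever automatic fixes the rules support.
async function scanAndFix(flowUris: string[]) {
  const scanResults = scan(await parse(flowUris));
  return fix(scanResults);
}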

Configurations

Rule Configuration

Using the rules section of your configuration, you can specify which rules to run and provide custom rules. You can also define the severity of violations for specific rules and configure relevant attributes for some rules. Below is a breakdown of the available attributes of a rule configuration, followed by an example:

{
    "rules": {
        "<RuleName>": {
            "severity": "<Severity>",
            "expression": "<Expression>",
            "path": "<Path>"
        }
    }
}
  • Severity:

    • Allowed values for severity are "error", "warning", and "note".
    • If severity is provided, it overrides the default severity, which is "error".
  • Expression:

    • Expression is used to override the default values of configurable rules.
  • Path:

    • If a path is provided, it can either replace an existing rule with a new rule definition or load a custom rule.
    • Ensure that the rule name used in the path matches the exported class name of the rule.
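For example, a configuration that downgrades the naming-convention rule to a warning, overrides its expression, and loads a custom rule from disk could look like the following (the expression and path values are illustrative, not shipped defaults):

{
    "rules": {
        "FlowName": {
            "severity": "warning",
            "expression": "[A-Za-z0-9]+_[A-Za-z0-9]+"
        },
        "MyCustomRule": {
            "severity": "error",
            "path": "./rules/MyCustomRule.ts"
        }
    }
}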

Custom Rule Interface

Custom rules loaded via the path attribute of the rule configuration must adhere to the IRuleInterface. Please refer to the Custom Rule Creation Guide for detailed instructions.
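As a purely hypothetical sketch of what such a file might contain (the members shown are placeholders, not the actual IRuleInterface contract; defer to the Custom Rule Creation Guide), note that the exported class name matches the rule name used in the configuration example above:

// rules/MyCustomRule.ts: hypothetical placeholder, not the real IRuleInterface shape.
// The exported class name must match the rule name referenced by the path configuration.
export class MyCustomRule {
  public name = "MyCustomRule";
  public description = "Placeholder rule; implement the members required by IRuleInterface here.";
}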

Exception Configuration

Specifying exceptions allows you to exclude specific scenarios from rule enforcement. Exceptions can be specified at the flow, rule, or result level to provide fine-grained control. Below is a breakdown of the available attributes of an exception configuration, followed by an example:

{
  "exceptions": {
    "<FlowName>": {
      "<RuleName>": [
        "<ResultName>",
        "<ResultName>",
        ...
      ]
    },
    ...
  }
}
  • FlowName:

    • The name of the flow where exceptions apply.
  • RuleName:

    • The name of the rule for which exceptions are defined.
  • ResultName:

    • The specific result or condition within the rule for which exceptions are specified.
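For example, to suppress two known results of the Unused Variable rule in a single flow (the flow and result names are illustrative):

{
  "exceptions": {
    "Service_OrderFulfillment": {
      "UnusedVariable": [
        "legacyCustomerId",
        "tempRecordVar"
      ]
    }
  }
}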

Development Setup

Follow these steps to set up your development environment:

  1. Clone Repository: Begin by cloning the Lightning Flow Scanner Core repository to your local machine:

    git clone https://github.com/Lightning-Flow-Scanner/lightning-flow-scanner-core.git
  2. Install Dependencies: Navigate into the cloned repository directory and install the necessary dependencies using npm:

    cd lightning-flow-scanner-core
    npm install
  3. Build: Compile the TypeScript source files into JavaScript using the TypeScript compiler:

    npm run build

    This command generates the compiled JavaScript files in the out directory.

  4. Run Tests: Ensure the module functions correctly by running the test suites:

    npm run test

    This command uses Mocha to run tests located in the tests directory and provides feedback on the module's functionality.

  5. Debugging in IDE: If needed, set up your integrated development environment (IDE) for debugging TypeScript code. Configure breakpoints, inspect variables, and step through the code to identify and resolve issues efficiently; a sample launch configuration is sketched below.
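For instance, in VS Code a minimal launch configuration could simply wrap the test script above (a sketch; adjust it to your editor and this repository's actual tooling):

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug tests (npm run test)",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "test"],
      "console": "integratedTerminal",
      "skipFiles": ["<node_internals>/**"]
    }
  ]
}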