
git-ops

v1.0.3

Git with operations powers

Downloads: 7

git-ops

git-ops is a set of tools for creating automated operations that live inside the repositories of the projects that need them. The model adopted by this project is a simple git "ops" subcommand that lets developers, DevOps engineers, and SREs build and execute their CI and CD flows independently of any tool or platform.


Motivation

This project arose from the desire to control continuous integration and continuous deployment/delivery flows autonomously, as part of the project itself. Many CI/CD tools, such as Jenkins, Travis, and GitLab, implement some form of pipeline scripting, but the execution dependencies remain tied to the tool.

Some projects use the concept of shared libraries to mitigate this, but the pipelines still depend on the execution tool. How can 100% of a project's operations live in its repository? That is the challenge this project tries to solve.


Quick Start

  1. Go to releases and download the version that fits your system
  2. Extract the executable into a folder where it can live permanently
  3. Run the install command, for example: ./git-ops-linux-x64 install (you may need to run chmod +x git-ops-YOUR_OS-ARCH first)
  4. git-ops is now installed; run the following command to get help: git ops -h

How it's built

git-ops is written in JavaScript for NodeJS 10 and compiled into a single executable, so you do not need to install any dependencies to use it. The one exception is the npmInstall function: if you plan to use it to download dependencies for your JS steps, you need npm installed.


Tutorial

  1. Understanding KDBX
  2. Creating my first ops pack
  3. Creating my first ops file
  4. Using git-ops
  5. JS VM Context

Understanding KDBX

A KDBX file is a specialized database for storing sensitive information; you can read more about the format here. Think of it as a vault for data. We recommend KeePassXC for manipulating your KDBX files.

There are many advantages to keeping sensitive data in a KDBX file, but we will not delve into them here. You will find that being a portable and secure database is perhaps the most important one for git-ops.

In git-ops the KDBX files are used to:

  • Store sensitive data (obvious)
  • Store useful data to execute CI/CD flows (like a key-value or key-attachment database)
  • Store scripts to execute the steps of pipelines
  • Create shared operations assets

We're going to see that the combination of git-ops and KeePassXC is really powerful.


Creating my first ops pack

An ops pack is a set of data and scripts for running pipelines (from ops files). Let's see it in practice by following along.

  1. In an empty folder create the following folder structure:
    /kdbx
    /ops
      /MyOps
        /Js
          /Hash
        /Shell
          /Print
  2. Create the following file: /ops/MyOps/Js/Hash/pack.kdbx.yaml
    entry:
      title: hashRepository
    data:
      - type: field
        name: title
        value: hashRepository
      - type: field
        name: hashAlgorithm
        value: md5
      - type: field
        name: hashDirectory
        value: ../myAwesomeProject/**/*
      - type: attachment
        name: getFiles.js
        path: ./getFiles.js
      - type: attachment
        name: hashFiles.js
        path: ./hashFiles.js
  3. Create the file: /ops/MyOps/Js/Hash/getFiles.js
    npmInstall('glob');
    const path = require('path');
    const glob = require('glob');
    
    var files = glob.sync(
      path.join(
        __cwd,
        hashDirectory
      ),
      {
        nodir: true
      }
    );
  4. Create the file: /ops/MyOps/Js/Hash/hashFiles.js
    const crypto = require('crypto')
    const fs = require('fs');
    
    logger.info(`Files will be hashed using ${hashAlgorithm}`);
    
    const hash = crypto.createHash(hashAlgorithm);
    
    for (const file of files) {
      logger.info(`Hashing file: ${file}`);
      hash.update(
        fs.readFileSync(file)
      );
    }
    
    var filesHash = hash.digest('hex');
    logger.info(`${files.length} files hashed: ${filesHash}`);
    
    kdbx.setSourceValue(
      kdbx.KDBX_DB,
      {
        group: ['MyOps', 'Js', 'Hash'],
        entry: {
          title: 'filesHash'
        },
        data: [
          {
            type: 'field',
            name: 'title',
            value: 'filesHash'
          },
          {
            type: 'field',
            name: 'filesHash',
            value: filesHash
          }
        ]
      }
    );
  5. Now we're going to create the shell entry: /ops/MyOps/Shell/Print/pack.kdbx.yaml
    entry:
      title: printHash
    data:
      - type: field
        name: title
        value: printHash
      - type: attachment
        name: generateMd.sh
        path: ./generateMd.sh
  6. Create the shell script to print in a markdown file the generated hash: /ops/MyOps/Shell/Print/generateMd.sh
    #!/bin/sh
    CURRENT_DATE=`date`
    echo "Hash at ${CURRENT_DATE}: ${FILES_HASH}" >> HASH_HISTORY.md
  7. Now that we have all the assets to build our KDBX, open a terminal, navigate to the root folder from step 1, and execute the following command:
    git ops pack -p MyPassword123 -k ./kdbx/MyOps.key ./kdbx/MyOps.kdbx ./ops

You'll see that git-ops generated the following files:

  • /kdbx/MyOps.kdbx - The KDBX file with all the content we coded; you can open it with KeePassXC to see what happened. This file is what we call an ops pack.
  • /kdbx/MyOps.key - The secret key that opens the KDBX file. git-ops pack will always generate a secret key if the file specified by the -k option does not exist. Without this file and the password (option -p), the ops pack is useless. You don't need to specify a password if you just want to use a key file; this is useful, for example, when an ops pack contains no sensitive data and you commit the key file to the project repository. Otherwise you should not commit the key file: it is like an SSL certificate's secret key. To make sure you don't commit it accidentally, we recommend adding the kdbx/*.key pattern to your .gitignore.
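For example, the recommended ignore rule is a single .gitignore line:

```
kdbx/*.key
```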

Creating my first ops file

An ops file is a file containing specifications of pipelines, known as "operations". It is the equivalent of a Jenkinsfile, .gitlab-ci.yml, .travis.yml, and other CI/CD scripting files.

  1. In an empty folder create the following folder structure:
    /myAwesomeProject
    /cicd
  2. Let's create a test file to simulate a real project: /myAwesomeProject/hello.sh
    #!/bin/bash
    echo 'Hello world!'
  3. Copy the MyOps.kdbx and MyOps.key into /cicd folder
  4. And finally, we're going to create our ops file: /cicd/operations.yaml
    operation:
      name: myFirstPipeline
      pipeline:
        - type: js
          source:
            group:
              - MyOps
              - Js
              - Hash
            entry:
              title: hashRepository
            value:
              type: attachment
              name: getFiles.js
          contextMap:
            - name: hashDirectory
              source:
                group:
                  - MyOps
                  - Js
                  - Hash
                entry:
                  title: hashRepository
                value:
                  type: field
                  name: hashDirectory
        - type: js
          source:
            group:
              - MyOps
              - Js
              - Hash
            entry:
              title: hashRepository
            value:
              type: attachment
              name: hashFiles.js
          contextMap:
            - name: hashAlgorithm
              source:
                group:
                  - MyOps
                  - Js
                  - Hash
                entry:
                  title: hashRepository
                value:
                  type: field
                  name: hashAlgorithm
        - type: bash
          source:
            group:
              - MyOps
              - Shell
              - Print
            entry:
              title: printHash
            value:
              type: attachment
              name: generateMd.sh
          envMap:
            - name: FILES_HASH
              source:
                group:
                  - MyOps
                  - Js
                  - Hash
                entry:
                  title: filesHash
                value:
                  type: field
                  name: filesHash

We can notice a few things in the ops file. First, we use KDBX entries as the sources of the code that runs in the pipeline. Second, we can map KDBX fields and attachments to environment variables or to the execution context (when the step is a JavaScript script). The execution context of a JavaScript script is the global scope, i.e., any variable created in a script stays in the pipeline's execution context; this is how we pass the file list from the first script to the second.


Using git-ops

Now that we have our ops pack and our ops file, we can finally run our operations. Open a terminal, navigate to the root folder from the previous step, and execute the following command:

git ops execute -x ./cicd/MyOps.kdbx -p MyPassword123 -k ./cicd/MyOps.key ./cicd/operations.yaml myFirstPipeline

And so you ran your first pipeline using git-ops!

Let's also learn one more cool thing: you can customize ops packs without having to change their source code.

  1. Create a KDBX file with KeePassXC in the /cicd folder and add an entry to the Root group with the title hashAlgorithm and an additional attribute named hashAlgorithm with the value sha1. You can use the same key as the previous KDBX to encrypt your file.
  2. Change the contextMap of the second step of the pipeline to get the value from our new KDBX:
    contextMap:
      - name: hashAlgorithm
        source:
          group:
            - Root
          entry:
            title: hashAlgorithm
          value:
            type: field
            name: hashAlgorithm
  3. And execute your operation now passing the two KDBX files:
    git ops execute -x ./cicd/MyOps.kdbx,./cicd/YOUR_KDBX.kdbx -p MyPassword123,YOUR_PASSWORD -k ./cicd/MyOps.key,./cicd/MyOps.key ./cicd/operations.yaml myFirstPipeline

And now we are generating the hashes with the sha1 algorithm. This feature is very useful, for example, for specifying values specific to the CD stage (API keys, certificates, secrets, among others). When you specify more than one KDBX for the execute command, all of them are merged, with the rightmost files overwriting the leftmost. Note that passwords and keys are also passed comma-separated.


JS VM Context

When you execute JS code as a pipeline step, the code runs using the NodeJS VM module. The provided context contains the following globals:

  • __cwd - A constant storing the current directory of the pipeline; this will always be the directory of the ops file.
  • npmInstall(...dependencies) - A synchronous function to download dependencies for your script; these dependencies are ephemeral and are deleted after the step executes.
  • logger - An instance of bunyan for logging actions from your script.
  • kdbx.KDBX_DB - An instance of the main KDBX database (see the kdbxweb package); this is always the first loaded KDBX file.
  • kdbx.KDBX_DBS - An object mapping keys to all loaded KDBX instances.
  • kdbx.setSourceValue(db, sourceSpec) - A function to set values in loaded KDBX instances.
  • kdbx.save(kdbxDb, directory) - An asynchronous function to save KDBX instances to files.
  • endPipelineStep([error]) - If you run asynchronous code inside the JS VM, you must call this callback when execution is done; it makes git-ops wait for the asynchronous pipeline step to finish.
  • And every context value you mapped. Note that if you declare a variable like var x = 1, it will be available to subsequent JS VM scripts.