
donald

v0.1.0


PlutoKpi

A simple KPI fetching tool built with Node.js and InfluxDB.

Getting started

Clone the project and run yarn to install all the dependencies. After that, duplicate pluto.sample.yml, rename it to pluto.yml, and add all the config needed (you can find below how to configure your dev environment). Then you can start working on it. To test your changes, run:

yarn debug <namespace> <measurement>

What does this mean?

  1. debug is a handy command that runs yarn build followed by yarn start
  2. namespace is the name of the app the worker will collect info from, and the key its config is looked up under in pluto.yml. E.g. companies, yoda...
  3. measurement is the metric you want to collect data for, and determines which worker is executed. E.g. gems, packages...

⚠️ If you don't have InfluxDB, or you don't want to write to it, you can prepend ONLY_OUTPUT=1 to your command. For example:

ONLY_OUTPUT=1 yarn debug companies packages

Example of pluto.yml

meta:
  database:
    database: "pluto"
    host: "localhost"
companies:
  packages:
    schedule: "* * * * *"
    image: "docker.dc.xing.com/companies/companies"
    folder: "client"
  gems:
    schedule: "* * * * *"
    image: "docker.dc.xing.com/companies/companies"
yoda:
  lighthouse:
    schedule: "10 1 2 * *"
    website: "https://www.xing.com/url-to-test/whatever"
    login:
      url: "https://www.xing.com/login"
      username_field_selector: "#login_form_username"
      password_field_selector: "#login_form_password"
      login_button_selector: "#login_form_submit"
      username: "[email protected]"
      password: "somethingToBeSafe🔒"

Deploying to pluto

Note: pluto currently writes to the InfluxDB instance at http://ams1-redcomp01.xing.hh:8086

Architecture Design

This is the structure of the project:

├── src
│   ├── index.js
│   ├── storage
│   │   └── storage.js
│   └── workers
│       └── workers.js

The entry point of the application is the index.js file. It works as a CLI accepting two arguments, <worker> and <application>.

The worker argument represents one of the multiple sources we fetch data from. The application argument represents the application to fetch the data for.

The storage directory holds the logic for the InfluxDB database ([documentation](https://docs.influxdata.com/chronograf/v1.4/introduction/getting-started/)).

The workers directory holds all the logic for extracting data from the different sources. Each worker file exports one function that accepts storage and application.

The storage (in this case InfluxDB) expects to receive a key, e.g. companies_response_codes, and an object, e.g. { fields: { YOUR_DATA_HERE } }:
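As a rough sketch of that dispatch (the function and worker names here are illustrative, not the project's actual implementation, and the argument order follows the yarn debug example), index.js could look something like:

```javascript
// Hypothetical sketch of the CLI dispatch in src/index.js — the real
// module layout and worker names may differ.
const workersAvailable = {
  gems: (storage, application) => `outdated gems for ${application}`,
  packages: (storage, application) => `outdated packages for ${application}`,
};

function run(argv) {
  // e.g. argv = ["companies", "gems"]
  const [application, worker] = argv;
  const workerFn = workersAvailable[worker];
  if (!workerFn) {
    throw new Error(`Unknown worker: ${worker}`);
  }
  // In the real app the first argument would be the InfluxDB storage wrapper.
  return workerFn(null, application);
}
```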

storage.write(`jira_yoda`, {
  fields: { total_bugs: parseInt(json.total) }
});
  • As a side note, remember that Grafana works better with integer or float values

Adding a new worker

Create a new folder inside the workers folder with a name representative of the source you are extracting data from. Then add the worker to the workersAvailable constant in workers.js.

Environment config

Install InfluxDB, Node.js, Grafana and yarn:

$ brew install grafana
$ brew install influxdb

Remember to start both services in order to store and visualize data.

Grafana usually starts on localhost:3000

Copy .env.sample to .env

Change the defaults if needed

To build the code, run yarn dev

The application does not create a database, so if the configured InfluxDB database does not exist you have to create it first using the influx client:

$ influx

> create database <your database in the .env file>

With all of this you can start storing data in your InfluxDB by using the command yarn start <worker> <application>

Installing the package globally

Installing the application globally means that the bin generated by the package will be available in your $PATH:

$ yarn build
$ npm install -g

Then we can run our workers by executing:

pluto <worker> <application>

List of workers

CI

  • buildTime

    Scrapes data from the Jenkins UI and populates InfluxDB with build times for each successful build on the page at the supplied url.

    config example:

      ci.buildTime:
        schedule: "* * * * *"
        builds:
          - name: "pullRequest"
            url: "https://ci.dc.xing.com/view/xtm/job/xtm-pullrequest/api/json?tree=allBuilds[result,duration]"
          - name: "dockerImage"
            url: "https://ci.dc.xing.com/view/xtm/job/xtm-pullrequest-generate-docker-image/api/json?tree=allBuilds[result,duration]"
        auth:
          username: "foo"
          password: "secret"

    data point example:

      table: ci.buildTime
      key: pullRequest (Build times of pull request jobs)
      key: dockerImage (Build times of docker image builder jobs)

Gems

Fetches the supplied docker image and runs a command inside a container of that image to determine the number of outdated gems.

config example:

  gems:
    schedule: "* * * * *"
    image: "docker.dc.xing.com/xtm/xtm:latest"

data point example:

  table: gems
  key: default (Number of outdated gems)

Packages

Fetches the supplied docker image and runs a command inside a container of that image to determine the number of outdated packages. You will need to supply a folder; if you don't, the root is assumed.

config example:

  packages:
    schedule: "* * * * *"
    image: "docker.dc.xing.com/xtm/xtm:latest"
    folder: "foo"

data point example:

  table: packages
  key: default (Number of outdated packages)

Code climate

Uses the Code Climate REST API to determine metrics like:

  • Maintainability cost (tech debt)
  • Code coverage
  • Number of security issues

config example:

  code_climate:
    schedule: "* * * * *"
    url: "https://codeclimate.dc.xing.com/api/v1"
    api_token: "secret"
    app_id: "secret"

data point example:

  table: code_climate
  key: techDebt (Calculated by code climate based on number/nature of issues and time cost to fix them)
  key: testCoverage (Global percentage code coverage as determined by code climate)
  key: securityIssues (Number of issues in security category)

Logjam

  • apdex

    Scrapes data off of logjam UI to determine overall apdex for a project.

    config example:

      logjam.apdex:
        schedule: "* * * * *"

    data point example:

      table: logjam.apdex
      key: apdex (global apdex for project)
  • errors

    Scrapes data off of the logjam UI to retrieve fatals, errors and warnings for a project.

    config example:

      logjam.errors:
        schedule: "* * * * *"

    data point example:

      table: logjam.errors
      key: fatals (fatal errors for project)
      key: errors (errors for project)
      key: warnings (warnings for project)
  • requests

    Scrapes data off of logjam UI to retrieve number of requests by response code (e.g. 2xx, 3xx, 4xx, 5xx) for a project.

    config example:

      logjam.requests:
        schedule: "* * * * *"
        include_last_read: true

    data point example:

      table: logjam.requests
      key: 2xx (Number of requests with a 2xx response)
      key: 3xx (Number of requests with a 3xx response)
      key: 4xx (Number of requests with a 4xx response)
      key: 5xx (Number of requests with a 5xx response)
  • slowControllers

    Scrapes data off of the logjam UI to retrieve the count of slow controllers for a project (e.g. those with an apdex of less than 0.7).

    config example:

      logjam.slowControllers:
        schedule: "* * * * *"

    data point example:

      table: logjam.slowControllers
      key: slowControllers
  • slowestControllers

    Scrapes data off of the logjam UI to retrieve the list of slowest controllers for a project (e.g. those with an apdex of less than 0.7).

    config example:

      logjam.slowestControllers:
        schedule: "* * * * *"

    data point example:

      table: logjam.slowestControllers
      key: apdex
      tag: controllerName
  • apdexPerControllerAction

    Scrapes data off of logjam UI to retrieve the value of the apdex for a particular controller of a project

    config example:

      logjam.apdexPerControllerAction:
        schedule: "* * * * *"
        actions:
          - name: "Xtm::Search::IdentitiesController#index"

    data point example:

      table: logjam.apdexPerControllerAction
      key: apdex

Jira query

Uses the JIRA REST API to perform queries that retrieve info about bugs and tasks. It is flexible enough to allow any kind of query.

config example:

  jira_query:
    schedule: "* * * * *"
    include_last_read: true
    url: "https://jira.xing.hh/rest/api/latest"
    queries:
      - name: "all_bugs"
        query: "issuetype = Bug"

data point example:

  table: jira_query
  key: all_bugs (value of name in list of queries)

Pingdom

Worker that checks the status of a certain application via Pingdom.

config example:

  pingdom_is_alive:
    schedule: "1 * * * *"
    url: https://api.pingdom.com/api/2.1/checks
    check_id: 2373562 

Also, the following sensitive data is required in .env:

   PINGDOM_API_KEY=<api key from pingdom>
   [email protected]
   PINGDOM_USERNAME=<username from pingdom>
   PINGDOM_PASSWORD=<password>

data point example:

  table: jobs_pingdom_is_alive
  key: alive

Debugging with VSCode

To debug node projects, you can use the VSCode debugging tool. It is quite powerful but tricky to configure. You can use this snippet as your configuration:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug node",
      "program": "${workspaceFolder}/build/main.js",
      "args": ["companies", "gems", "ONLY_OUTPUT=1", "VS_DEBUG"]
    }
  ]
}

The only thing you need to tweak is the args, which are essentially the same ones you use when executing yarn debug. The VS_DEBUG flag is just for debugging purposes, such as forcing an if/else branch.
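One way such a flag can be used (a sketch of the pattern, not necessarily how this project does it; the function names are made up) is to branch on its presence in process.argv:

```javascript
// Hypothetical helper: true when the VS_DEBUG flag was passed on the
// command line, so code paths can be forced while stepping through in VSCode.
function isVsDebug(argv = process.argv) {
  return argv.includes("VS_DEBUG");
}

// Example: force the debug-only branch while a debugging session is active.
function pickBranch(argv) {
  return isVsDebug(argv) ? "debug-branch" : "normal-branch";
}
```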

Roadmap

  • [ ] Testing

  • [ ] Task for setting up the DB