
@hermes-serverless/cli

v0.2.2


Hermes/CLI

About

This is the CLI for Hermes systems. With it you can build, deploy, and run functions written in a set of supported languages (currently C++ and CUDA). Hermes offers serverless delivery for your function executions, with the option to use the server's GPU.

Getting Started

These instructions will get you ready to start creating your own functions and running them in a Hermes server.

Prerequisites

Besides npm or yarn, Docker is also a prerequisite for running this CLI, because functions are built on your computer. Everything else happens on the server side.

To install docker refer to: https://docs.docker.com/install/.

Installing

After installing docker, you can run the following to install hermes-cli:

$ npm install -g @hermes-serverless/cli

Or, if you are using yarn:

$ yarn global add @hermes-serverless/cli

Make sure npm's or yarn's global binaries directory is in your system's PATH environment variable. After this you can run:

$ hermes

The output should list all available commands.

Setting up

In order to use Hermes you'll need a Docker Hub account (https://hub.docker.com/). If you don't have one already, create one; it's simple and free. After that, log in to your account using Docker:

$ docker login

Now you can configure hermes to use your Docker username:

$ hermes config docker.username <yourDockerhubUsername>

The last setup step is setting the URL of the remote Hermes instance to use, for example:

$ hermes config hermes.url http://ratel.ime.usp.br:9090

If you want, you can run a Hermes instance locally or on a server by following the instructions in this repository.

After that it's time to log in to that instance. If you don't have an account there, you can create one:

$ hermes remote:register

Or log in to an existing account:

$ hermes remote:login <username>

You can check your credentials at any time by running:

$ hermes remote:whoami

Deploying your first function

Some function examples are available on Hermes-Examples. We're going to deploy the gpu-pi-montecarlo function, which estimates pi using the Monte Carlo method, sending the workload to the GPU. First clone the Hermes-Examples repository:

$ git clone https://github.com/hermes-serverless/examples.git

Now you can deploy the function using:

$ cd ./examples/cuda/gpu-pi-montecarlo
$ hermes function:deploy

The function will be built, and after it is successfully deployed on Hermes you should see output similar to this:

===== FUNCTION DEPLOYED ======
╔═════════════════════╤══════════╤════════╤═════════════╤═════════════════════════════════════════╗
║ Function            │ Language │ Scope  │ GPU Capable │ Watcher Image                           ║
╟─────────────────────┼──────────┼────────┼─────────────┼─────────────────────────────────────────╢
║ pi-montecarlo:1.0.0 │ cpp      │ PUBLIC │ false       │ tiagonapoli/watcher-pi-montecarlo:1.0.0 ║
╚═════════════════════╧══════════╧════════╧═════════════╧═════════════════════════════════════════╝

Now you can request a synchronous execution using:

$ hermes function:run <yourHermesUsername>/gpu-pi-montecarlo:1.0.0 --sync

A prompt for the input should appear. This function expects one integer: the number of iterations of the Monte Carlo algorithm, for example:

  ? Input: 1000000000

You'll receive something like this, which is the function output:

------CUDA Devices------
Device Number: 0
  Device name: GeForce MX150
  Memory Clock Rate (KHz): 3004000

Starting simulation with 64 blocks, 32 threads per block (warps), and a total of 1000001536 iterations
Approximated PI using 1000001536 random tests
PI ~= 3.141620254

This function prints some information about the Hermes instance's GPU and then approximates pi.

You can create async executions too, which are fire-and-forget:

$ hermes function:run <yourHermesUsername>/gpu-pi-montecarlo:1.0.0 --async
? Input: 100000000
{ startTime: '2020-07-27T02:49:33.576Z', runID: '3' }

You receive a runID, which can be used to inspect the execution:

$ hermes execution:inspect <runID>

The output will be similar to:

{
  status: 'success',
  startTime: '2020-07-27T02:49:33.576Z',
  runningTime: '00:00:00.615',
  endTime: '2020-07-27T02:49:34.191Z',
  out: '------CUDA Devices------\n' +
    'Device Number: 0\n' +
    '  Device name: GeForce MX150\n' +
    '  Memory Clock Rate (KHz): 3004000\n' +
    '\n' +
    'Starting simulation with 64 blocks, 32 threads per block (warps), and a total of 100001792 iterations\n' +
    'Approximated PI using 100001792 random tests\n' +
    'PI ~= 3.141545224\n'
}

Your previous sync or async executions can be listed and checked again at any time:

$ hermes execution:list
╔═══════╤═════════╤════════════════════╤════════════════════╤══════════════╤═════════════════════════════════════╗
║ RunID │ Status  │ Start              │ End                │ Elapsed      │ Function                            ║
╟───────┼─────────┼────────────────────┼────────────────────┼──────────────┼─────────────────────────────────────╢
║ 1     │ success │ 07/26 23:45:14.822 │ 07/26 23:45:15.710 │ 00:00:00.888 │ tiagonapoli/gpu-pi-montecarlo:1.0.0 ║
╟───────┼─────────┼────────────────────┼────────────────────┼──────────────┼─────────────────────────────────────╢
║ 2     │ success │ 07/26 23:45:23.978 │ 07/26 23:45:24.760 │ 00:00:00.782 │ tiagonapoli/gpu-pi-montecarlo:1.0.0 ║
╟───────┼─────────┼────────────────────┼────────────────────┼──────────────┼─────────────────────────────────────╢
║ 3     │ success │ 07/26 23:49:33.576 │ 07/26 23:49:34.191 │ 00:00:00.615 │ tiagonapoli/gpu-pi-montecarlo:1.0.0 ║
╚═══════╧═════════╧════════════════════╧════════════════════╧══════════════╧═════════════════════════════════════╝
$ hermes execution:inspect 1
{
  status: 'success',
  startTime: '2020-07-27T02:45:14.822Z',
  runningTime: '00:00:00.888',
  endTime: '2020-07-27T02:45:15.710Z',
  out: '------CUDA Devices------\n' +
    'Device Number: 0\n' +
    '  Device name: GeForce MX150\n' +
    '  Memory Clock Rate (KHz): 3004000\n' +
    '\n' +
    'Starting simulation with 64 blocks, 32 threads per block (warps), and a total of 1000001536 iterations\n' +
    'Approximated PI using 1000001536 random tests\n' +
    'PI ~= 3.141579903\n'
}

Creating your first function

To create a function you can use:

$ hermes function:init [path]

After answering some prompts, a folder with your function's name will be available in the given path. Inside it you'll find hermes.config.json. This file is responsible for the project configuration, and it has a structure like this one:

{
  "functionName": "char-printer",
  "language": "cpp",
  "gpuCapable": true,
  "scope": "public",
  "functionVersion": "1.0.0",
  "handler": "./main.out"
}

The handler is the path to the file that should be executed on your function runs. The gpuCapable property tells the server if the function should be able to use NVIDIA drivers. After deploying this function you'll refer to it using <yourUsername>/<functionName>:<functionVersion>.

Now you can create the source file for your function. After that you'll have to create a Makefile:

main: charPrinter.cpp
	g++ charPrinter.cpp -o main.out

clean:
	-rm main.out

The instructions for building the handler should go in the main recipe. You'll have to create a clean recipe too, and make sure it removes all binaries and object files. If you don't, an object file may be copied from your system into the function container being built, and after compilation inside the container some library incompatibilities may occur.

Make sure to insert a - before the rm command. That way, if the files to be removed don't exist, the clean recipe will not fail.

After creating the Makefile for your project you can try to build it using:

$ hermes function:build

If there were no errors in the compilation, you can deploy your function using the instructions from the previous section.

Commands

The usage of all commands is documented here; make sure to check it out. Every command also has a --help flag showing its usage.