nkode

CLI to make it easy for data scientists to build Docker images, train their models, and serve the models

Usage

$ npm install -g nkode
$ nkode COMMAND
running command...
$ nkode (-v|--version|version)
nkode/0.4.14 darwin-x64 node-v12.19.0
$ nkode --help [COMMAND]
USAGE
  $ nkode COMMAND
...

Commands

nkode create

USAGE
  $ nkode create

OPTIONS
  --boolean
  --build
  --enum
  --help
  --integer
  --option
  --string
  --version

See code: src/commands/create/index.js

nkode create:endpoint

Create a remote endpoint for your model

USAGE
  $ nkode create:endpoint

OPTIONS
  -b, --baseImage=baseImage    Base Docker Image to use
  -d, --dataDir=dataDir        Data Model file name
  -e, --entryPoint=entryPoint  Entry point, whether it is a function, Python file, or notebook
  -m, --method=method          Method type to use to build the Docker Image

DESCRIPTION
  ...
  This command will automatically build the Docker image, push it to the Docker registry, and deploy an endpoint.
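
  For example, an invocation might look like the following (the base image, entry point, and method values are
  illustrative placeholders, not documented defaults):

  $ nkode create:endpoint --baseImage=tensorflow/tensorflow:2.3.0 --entryPoint=serve.py --method=file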

See code: src/commands/create/endpoint.js

nkode create:image

Create Docker Image with your model, required packages and data

USAGE
  $ nkode create:image

OPTIONS
  -b, --baseImage=baseImage    Base Docker Image to use
  -d, --dataDir=dataDir        Path to test data directory
  -e, --entryPoint=entryPoint  Entry point, whether it is a function, Python file, or notebook
  -m, --method=method          Method type to use to build the Docker Image

DESCRIPTION
  ...
  This command will automatically build the Docker image, push it to the Docker registry, and deploy the image to a
  pod to run the training remotely.
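
  For example (the image name, paths, and method value below are illustrative placeholders):

  $ nkode create:image --baseImage=tensorflow/tensorflow:2.3.0 --entryPoint=train.py --dataDir=./data --method=file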

See code: src/commands/create/image.js

nkode create:notebook

Create a Jupyter notebook server to work on

USAGE
  $ nkode create:notebook

OPTIONS
  -b, --baseImage=baseImage                Base Docker Image to use for creating the Jupyter Server
  -c, --cpuQuota=cpuQuota                  Allocated vCPU for your notebook server
  -m, --memoryQuota=memoryQuota            Allocated Memory in GB for your notebook server
  -n, --notebookName=notebookName          Name of the notebook. (Optional)
  -s, --namespace=namespace                Your account namespace. By default this is set to your individual namespace
  -v, --persistentVolume=persistentVolume  Name of the persistent volume for your server

DESCRIPTION
  ...
  This command will automatically create a Jupyter notebook server in your account to get started with your data science
  work. You also get persistent storage so that all your data is stored. You can delete the server and recreate another
  one as needed. Your data is never lost! The defaults are:
    -  Allocated CPU : 1 vCPU
    -  Allocated Memory : 1 GB
    -  Base Image : nkode image with TensorFlow
    -  Persistent Storage : 10 GB

    You can change these settings using the flags, including access to GPU instances.
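
  For example, to request a larger notebook server (the name, quotas, and volume below are illustrative placeholders):

  $ nkode create:notebook --cpuQuota=2 --memoryQuota=4 --notebookName=my-notebook --persistentVolume=my-volume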

See code: src/commands/create/notebook.js

nkode help [COMMAND]

display help for nkode

USAGE
  $ nkode help [COMMAND]

ARGUMENTS
  COMMAND  command to show help for

OPTIONS
  --all  see all commands in CLI

See code: @oclif/plugin-help

nkode init

Checks and initializes any dependencies

USAGE
  $ nkode init

DESCRIPTION
  ...
  Run this command one time to set up.

See code: src/commands/init.js

nkode train

USAGE
  $ nkode train

OPTIONS
  --boolean
  --build
  --enum
  --help
  --integer
  --option
  --string
  --version

See code: src/commands/train/index.js

nkode train:distributed

Builds Docker images and trains on remote resources as requested

USAGE
  $ nkode train:distributed

OPTIONS
  -b, --baseImage=baseImage      Base Docker Image to use
  -c, --cpu=cpu                  Specify CPU resources you would like to reserve
  -d, --dataDir=dataDir          Path to test data directory
  -e, --entryPoint=entryPoint    Entry point, whether it is a function, Python file, or notebook
  -g, --gpu=gpu                  Specify GPU resources you would like to reserve
  -m, --method=method            Method type to use to build the Docker Image
  -o, --operator=operator        Framework Operator to use for distributed training
  -p, --psCount=psCount          PS Count for a TensorFlow distributed training job
  -t, --masterCount=masterCount  Master Count for a PyTorch distributed training job
  -v, --gpuvendor=gpuvendor      Specify the GPU vendor you would like to use. Supports NVIDIA (default) and AMD
  -w, --workerCount=workerCount  Worker Count for a TensorFlow or PyTorch distributed training job
  -y, --memory=memory            Specify memory resources you would like to reserve

DESCRIPTION
  ...
  This command will automatically create a Docker image and train on remote resources with the requested resource
  requirements. Use this if you have specific resource needs for training. The request will fail if there are not
  enough resources available.
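
  For example, a TensorFlow distributed training job might be requested as follows (the image, entry point, operator
  value, and counts are illustrative placeholders):

  $ nkode train:distributed --baseImage=tensorflow/tensorflow:2.3.0-gpu --entryPoint=train.py --operator=tf --gpu=2 --workerCount=4 --psCount=1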

See code: src/commands/train/distributed.js

nkode train:hpt:init

Use this command to initialize the search algorithm to use for hyperparameter tuning.

USAGE
  $ nkode train:hpt:init

OPTIONS
  -s, --searchAlgorithm=searchAlgorithm  The search algorithm that you want the platform to use to find the best
                                         hyperparameters

DESCRIPTION
  A configuration YAML file will be downloaded to your current folder, which can be used to configure your experiment.
  Once you are ready with the configuration, run 'nkode train:hpt:startExperiment' to start the experiment.
  By default, the Random search algorithm is selected.
  ...
  Following are the supported search algorithms and the flag values to use when running the command (see the example after this list):
  1. Grid search : grid
  2. Random search : random
  3. Bayesian optimization : bayesianoptimization
  4. Hyperband : hyperband
  5. Tree of Parzen Estimators (TPE) : tpe
  6. Covariance Matrix Adaptation Evolution Strategy (CMA-ES) : cmaes
  7. Neural Architecture Search based on ENAS : enas
  8. Differentiable Architecture Search (DARTS) : darts
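
  For example, to initialize an experiment with Bayesian optimization (flag value taken from the list above):

  $ nkode train:hpt:init --searchAlgorithm=bayesianoptimization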

See code: src/commands/train/hpt/init.js

nkode train:hpt:startExperiment

Start the Hyperparameter Tuning experiment.

USAGE
  $ nkode train:hpt:startExperiment

OPTIONS
  -f, --fileName=fileName  Provide the file name of the hpt configuration YAML

DESCRIPTION
  ...
  Start the experiment and let the platform do the work! You can monitor the status of the experiment
  by visiting the URL provided once the command is successfully executed.
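
  For example (the file name is an illustrative placeholder; use the YAML file generated by 'nkode train:hpt:init'):

  $ nkode train:hpt:startExperiment --fileName=hpt-config.yaml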

See code: src/commands/train/hpt/startExperiment.js

nkode train:remoteResources

Builds Docker images and trains on remote resources as requested

USAGE
  $ nkode train:remoteResources

OPTIONS
  -b, --baseImage=baseImage    Base Docker Image to use
  -c, --cpu=cpu                Specify CPU resources you would like to reserve
  -d, --dataDir=dataDir        Path to test data directory
  -e, --entryPoint=entryPoint  Entry point, whether it is a function, Python file, or notebook
  -g, --gpu=gpu                Specify GPU resources you would like to reserve
  -m, --method=method          Method type to use to build the Docker Image
  -v, --gpuvendor=gpuvendor    Specify the GPU vendor you would like to use. Supports NVIDIA (default) and AMD
  -y, --memory=memory          Specify memory resources you would like to reserve

DESCRIPTION
  ...
  This command will automatically create a Docker image and train on remote resources with the requested resource
  requirements. Use this if you have specific resource needs for training. The request will fail if there are not
  enough resources available.
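
  For example (the image, entry point, and resource amounts below are illustrative placeholders):

  $ nkode train:remoteResources --baseImage=tensorflow/tensorflow:2.3.0-gpu --entryPoint=train.py --cpu=4 --memory=8 --gpu=1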

See code: src/commands/train/remoteResources.js