nkode
CLI to make it easy for data scientists to build Docker images, train their models, and serve them
Usage
$ npm install -g nkode
$ nkode COMMAND
running command...
$ nkode (-v|--version|version)
nkode/0.4.14 darwin-x64 node-v12.19.0
$ nkode --help [COMMAND]
USAGE
$ nkode COMMAND
...
Commands
nkode create
nkode create:endpoint
nkode create:image
nkode create:notebook
nkode help [COMMAND]
nkode init
nkode train
nkode train:distributed
nkode train:hpt:init
nkode train:hpt:startExperiment
nkode train:remoteResources
nkode create
USAGE
$ nkode create
OPTIONS
--boolean
--build
--enum
--help
--integer
--option
--string
--version
See code: src/commands/create/index.js
nkode create:endpoint
Create a remote endpoint for your model
USAGE
$ nkode create:endpoint
OPTIONS
-b, --baseImage=baseImage Base Docker Image to use
-d, --dataDir=dataDir Data Model file name
-e, --entryPoint=entryPoint Entry Point whether it is a function, python file or notebook
-m, --method=method Method type to use to build the Docker Image
DESCRIPTION
...
This command will automatically build the Docker image, push it to the Docker registry, and deploy an endpoint.
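For example, a hypothetical invocation might look like the one below; the entry point script, model file, and base image values are illustrative placeholders, not defaults documented by the CLI.
$ nkode create:endpoint --entryPoint serve.py --dataDir model.pkl --baseImage tensorflow/tensorflow:latest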
See code: src/commands/create/endpoint.js
nkode create:image
Create Docker Image with your model, required packages and data
USAGE
$ nkode create:image
OPTIONS
-b, --baseImage=baseImage Base Docker Image to use
-d, --dataDir=dataDir Path to test data directory
-e, --entryPoint=entryPoint Entry Point whether it is a function, python file or notebook
-m, --method=method Method type to use to build the Docker Image
DESCRIPTION
...
This command will automatically build the Docker image, push it to the Docker registry, and deploy the image
to a pod to run the training remotely.
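For example, a sketch of a possible invocation; the script name, data path, and base image are illustrative placeholders.
$ nkode create:image --entryPoint train.py --dataDir ./data --baseImage tensorflow/tensorflow:latest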
See code: src/commands/create/image.js
nkode create:notebook
Create a Jupyter notebook server to work on
USAGE
$ nkode create:notebook
OPTIONS
-b, --baseImage=baseImage Base Docker Image to use for creating the Jupyter Server
-c, --cpuQuota=cpuQuota Allocated vCPU for your notebook server
-m, --memoryQuota=memoryQuota Allocated Memory in GB for your notebook server
-n, --notebookName=notebookName Name of the notebook. (Optional)
-s, --namespace=namespace Your account namespace. By default this is set to your individual namespace
-v, --persistentVolume=persistentVolume Name of the persistent volume for your server
DESCRIPTION
...
This command will automatically create a Jupyter notebook server in your account to get started with your data science
work. You also get persistent storage so that all your data is kept. You can delete the server and recreate another
one as needed; your data is never lost! Following are the defaults:
- Allocated CPU : 1 vCPU
- Allocated Memory : 1 GB
- Base Image : Nokode image with tensorflow
- Persistent Storage : 10 GB
You can change these settings using the flags, including requesting access to GPU instances.
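As an illustration, the sketch below requests a named server with more than the default resources; the name and quota values are assumptions, not documented values.
$ nkode create:notebook --notebookName my-notebook --cpuQuota 2 --memoryQuota 4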
See code: src/commands/create/notebook.js
nkode help [COMMAND]
display help for nkode
USAGE
$ nkode help [COMMAND]
ARGUMENTS
COMMAND command to show help for
OPTIONS
--all see all commands in CLI
See code: @oclif/plugin-help
nkode init
Checks and initializes any dependencies
USAGE
$ nkode init
DESCRIPTION
...
Run this command once.
See code: src/commands/init.js
nkode train
USAGE
$ nkode train
OPTIONS
--boolean
--build
--enum
--help
--integer
--option
--string
--version
See code: src/commands/train/index.js
nkode train:distributed
Builds Docker images and trains on remote resources as requested
USAGE
$ nkode train:distributed
OPTIONS
-b, --baseImage=baseImage Base Docker Image to use
-c, --cpu=cpu Specify CPU resources you would like to reserve
-d, --dataDir=dataDir Path to test data directory
-e, --entryPoint=entryPoint Entry Point whether it is a function, python file or notebook
-g, --gpu=gpu Specify GPU resources you would like to reserve
-m, --method=method Method type to use to build the Docker Image
-o, --operator=operator Framework Operator to use for distributed training
-p, --psCount=psCount PS Count for TensorFlow Distributed Training Job
-t, --masterCount=masterCount Master Count for PyTorch Distributed Training Job
-v, --gpuvendor=gpuvendor Specify GPU Vendor you would like to use. Supports NVIDIA (default) and AMD
-w, --workerCount=workerCount Worker Count for TensorFlow or PyTorch distributed training job
-y, --memory=memory Specify Memory resources you would like to reserve
DESCRIPTION
...
This command will automatically create a Docker image and train on remote resources with the requested resource
requirements.
Use this if you have specific resource requirements for training. The request will fail if there are not enough
resources available.
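For instance, a sketch of a distributed TensorFlow training request; the operator value, replica counts, and resource amounts are illustrative assumptions, not documented values.
$ nkode train:distributed --entryPoint train.py --operator tf --workerCount 2 --psCount 1 --cpu 4 --memory 8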
See code: src/commands/train/distributed.js
nkode train:hpt:init
Use this command to select the search algorithm to use. A configuration
USAGE
$ nkode train:hpt:init
OPTIONS
-s, --searchAlgorithm=searchAlgorithm The search algorithm that you want the platform to use to find the best hyperparameters
DESCRIPTION
YAML file will be downloaded to your current folder; use it to configure your experiment.
Once you are ready with the configuration, run 'nkode train:hpt:startExperiment' to start the experiment.
By default, the Random search algorithm will be selected.
...
The following search algorithms are supported, along with the flag values to use when running the command (an example invocation is shown after the list):
1. Grid search : grid
2. Random search : random
3. Bayesian optimization : bayesianoptimization
4. Hyperband : hyperband
5. Tree of Parzen Estimators (TPE) : tpe
6. Covariance Matrix Adaptation Evolution Strategy (CMA-ES) : cmaes
7. Neural Architecture Search based on ENAS : enas
8. Differentiable Architecture Search (DARTS) : darts
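For example, to download a configuration for Bayesian optimization, using the flag value from the list above:
$ nkode train:hpt:init --searchAlgorithm bayesianoptimization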
See code: src/commands/train/hpt/init.js
nkode train:hpt:startExperiment
Start the Hyperparameter Tuning experiment.
USAGE
$ nkode train:hpt:startExperiment
OPTIONS
-f, --fileName=fileName Provide the file name of the hpt configuration YAML
DESCRIPTION
...
Start the experiment and let the platform do the work! You can monitor the status of the experiment
by visiting the URL provided once the command has executed successfully.
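For example, assuming the configuration generated by 'nkode train:hpt:init' was saved as hpt-config.yaml (the file name is illustrative):
$ nkode train:hpt:startExperiment --fileName hpt-config.yaml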
See code: src/commands/train/hpt/startExperiment.js
nkode train:remoteResources
Builds Docker images and trains on remote resources as requested
USAGE
$ nkode train:remoteResources
OPTIONS
-b, --baseImage=baseImage Base Docker Image to use
-c, --cpu=cpu Specify CPU resources you would like to reserve
-d, --dataDir=dataDir Path to test data directory
-e, --entryPoint=entryPoint Entry Point whether it is a function, python file or notebook
-g, --gpu=gpu Specify GPU resources you would like to reserve
-m, --method=method Method type to use to build the Docker Image
-v, --gpuvendor=gpuvendor Specify GPU Vendor you would like to use. Supports NVIDIA (default) and AMD
-y, --memory=memory Specify Memory resources you would like to reserve
DESCRIPTION
...
This command will automatically create a Docker image and train on remote resources with the requested resource
requirements.
Use this if you have specific resource requirements for training. The request will fail if there are not enough
resources available.
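A hypothetical example requesting specific resources; the entry point, resource amounts, and their units are assumptions.
$ nkode train:remoteResources --entryPoint train.py --cpu 4 --memory 8 --gpu 1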
See code: src/commands/train/remoteResources.js