triton v7.17.0 - Triton CLI and client (https://www.tritondatacenter.com/)
# node-triton

This repository is part of the Triton Data Center project. See the contribution guidelines and general documentation at the main Triton project page.

`triton` is a CLI tool for working with the CloudAPI for Triton public and private clouds.
CloudAPI is a RESTful API that lets end users of the cloud manage their accounts, instances, networks, and images, and query other relevant details. CloudAPI provides a single view of the Docker containers, infrastructure containers, and hardware virtual machines available in the Triton solution.
There is currently another CLI tool for CloudAPI, known as node-smartdc. The node-smartdc CLI works off the 32-character object UUID to uniquely identify object instances in API requests, and returns response payloads in JSON format. It covers both basic and advanced usage of CloudAPI.

The `triton` CLI is currently in beta (effectively because it does not yet have complete coverage of all commands from node-smartdc) and will be expanded over time to support all CloudAPI commands, eventually replacing node-smartdc as both the API client library for the Triton cloud and the command-line tool.
## Setup

### User accounts, authentication, and security
Before you can use the CLI, you'll need an account on the cloud to which you are connecting, and an SSH key uploaded. The SSH key is used to identify you and to secure SSH access to containers and other resources in Triton.
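If you don't yet have an SSH key, a minimal sketch of generating one looks like the following (the key path and comment are hypothetical; the upload step itself happens through your cloud operator's portal or CLI):

```shell
# Generate a new RSA keypair with an empty passphrase (demo only;
# use a passphrase for real keys). Path and comment are examples.
ssh-keygen -t rsa -b 4096 -f ./triton_demo_key -N "" -q -C "bob-triton-key"

# Print the SHA256 fingerprint -- this is the form CloudAPI uses as a keyId.
ssh-keygen -l -E sha256 -f ./triton_demo_key.pub
```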
### API endpoint
Each data center has a single CloudAPI endpoint. For MNX Public Cloud, you can find the list of data centers here. For private cloud implementations, please consult the private cloud operator for the correct URL. Have the URL handy as you'll need it in the next step.
### Installation

Install node.js, then:

```shell
npm install -g triton
```
Verify that it is installed and on your PATH:

```shell
$ triton --version
Triton CLI 4.15.0
https://github.com/TritonDataCenter/node-triton
```
To use `triton`, you'll need to configure it to talk to a Triton DataCenter API endpoint (called CloudAPI). Commonly that is done using a Triton profile:
```shell
$ triton profile create

A profile name. A short string to identify a CloudAPI endpoint to the
`triton` CLI.
name: central1

The CloudAPI endpoint URL.
url: https://us-central-1.api.mnx.io

Your account login name.
account: bob

Available SSH keys:
 1. 2048-bit RSA key with fingerprint 4e:e7:56:9a:b0:91:31:3e:23:8d:f8:62:12:58:a2:ec
    * [in homedir] bob-20160704 id_rsa

The fingerprint of the SSH key you want to use, or its index in the list
above. If the key you want to use is not listed, make sure it is either saved
in your SSH keys directory or loaded into the SSH agent.
keyId: 1

Saved profile "central1".

WARNING: Docker uses TLS-based authentication with a different security model
from SSH keys. As a result, the Docker client cannot currently support
encrypted (password protected) keys or SSH agents. If you continue, the
Triton CLI will attempt to format a copy of your SSH *private* key as an
unencrypted TLS cert and place the copy in ~/.triton/docker for use by the
Docker client.
Continue? [y/n] y

Setting up profile "central1" to use Docker.
Setup profile "central1" to use Docker (v1.12.3). Try this:
    eval "$(triton env --docker central1)"
    docker info

Set "central1" as current profile (because it is your only profile).
```
Or instead of using profiles, you can set the required environment variables (`triton` defaults to an "env" profile that uses these environment variables if no profile is set). For example:

```shell
TRITON_URL=https://us-central-1.api.mnx.io
TRITON_ACCOUNT=bob
TRITON_KEY_ID=SHA256:j2WoSeOWhFy69BQ0uCR3FAySp9qCZTSCEyT2vRKcL+s
```
For compatibility with the older `sdc-*` tools from node-smartdc, `triton` also supports the `SDC_URL`, `SDC_ACCOUNT`, etc. environment variables.
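Putting both together, a shell profile snippet might look like this (the URL, account, and key fingerprint below are the example values from above; substitute your own):

```shell
# Example values only -- replace with your endpoint, account, and key.
export TRITON_URL=https://us-central-1.api.mnx.io
export TRITON_ACCOUNT=bob
export TRITON_KEY_ID=SHA256:j2WoSeOWhFy69BQ0uCR3FAySp9qCZTSCEyT2vRKcL+s

# The older SDC_* names from node-smartdc are honored as well,
# so tools expecting them can share the same values.
export SDC_URL="$TRITON_URL"
export SDC_ACCOUNT="$TRITON_ACCOUNT"
export SDC_KEY_ID="$TRITON_KEY_ID"
```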
### Bash completion

Install Bash completion with:

```shell
triton completion > /usr/local/etc/bash_completion.d/triton  # Mac
triton completion > /etc/bash_completion.d/triton            # Linux
```

Alternatively, if you don't have or don't want to use a "bash_completion.d" dir, then something like this would work:

```shell
triton completion > ~/.triton.completion
echo "source ~/.triton.completion" >> ~/.bashrc
```

Then open a new shell or manually source that completion file, and play with the bash completions:

```shell
triton <TAB>
```
## `triton` CLI Usage

### Create and view instances
```shell
$ triton instance list
SHORTID  NAME  IMG  STATE  PRIMARYIP  AGO
```
We have no instances created yet, so let's create some. In order to create an instance we need to specify two things: an image and a package. An image represents what will be used as the root of the instance's filesystem, and the package represents the size of the instance, e.g. RAM, disk size, CPU shares, etc. There is more information on images and packages below; for now we'll just use SmartOS 64-bit and a small 128 MB RAM package.
```shell
triton instance create base-64 t4-standard-128M
```
Without a name specified, the container created will have a generated ID. Now let's create a container-native Ubuntu 14.04 container with 2 GB of RAM, named "server-1":
```shell
triton instance create --name=server-1 ubuntu-14.04 t4-standard-2G
```
Now list your instances again:
```shell
$ triton instance list
SHORTID   NAME      IMG                     STATE         PRIMARYIP       AGO
7db6c907  b851ba9   [email protected]         running       165.225.169.63  9m
9cf1f427  server-1  ubuntu-14.04@20150819   provisioning  -               0s
```
Get a quick overview of your account:

```shell
$ triton info
login: [email protected]
name: Dave Eddy
email: [email protected]
url: https://us-central-1.api.mnx.io
totalDisk: 50.5 GiB
totalMemory: 2.0 MiB
instances: 2
    running: 1
    provisioning: 1
```
To obtain more detailed information about your instance:

```shell
$ triton instance get server-1
{
    "id": "9cf1f427-9a40-c188-ce87-fd0c4a5a2c2c",
    "name": "251d4fd",
    "type": "smartmachine",
    "state": "running",
    "image": "c8d68a9e-4682-11e5-9450-4f4fadd0936d",
    "ips": [
        "165.225.169.54",
        "192.168.128.16"
    ],
    "memory": 2048,
    "disk": 51200,
    "metadata": {
        "root_authorized_keys": "(...ssh keys...)"
    },
    "tags": {},
    "created": "2015-09-08T04:56:27.734Z",
    "updated": "2015-09-08T04:56:43.000Z",
    "networks": [
        "feb7b2c5-0063-42f0-a4e6-b812917397f7",
        "726379ac-358b-4fb4-bb7c-8bc4548bac1e"
    ],
    "dataset": "c8d68a9e-4682-11e5-9450-4f4fadd0936d",
    "primaryIp": "165.225.169.54",
    "firewall_enabled": false,
    "compute_node": "44454c4c-5400-1034-8053-b5c04f383432",
    "package": "t4-standard-2G"
}
```
### SSH to an instance

Connect to an instance over SSH:

```shell
$ triton ssh b851ba9
Last login: Wed Aug 26 17:59:35 2015 from 208.184.5.170
 ,---.  |               ,---.  ,---.
 `---.  ,-.-.  ,---.  ,---.  |---  |   |  `---.   base-64-lts
     |  | | |  ,---|  |      |     |   |      |   21.4.1
 `---'  ` ' '  `---^  `      `     `---'  `---'
[root@7db6c907-2693-42bc-ea9b-f38678f2554b ~]# uptime
 20:08pm  up   2:27,  0 users,  load average: 0.00, 0.00, 0.01
[root@7db6c907-2693-42bc-ea9b-f38678f2554b ~]# logout
Connection to 165.225.169.63 closed.
```

Or non-interactively:

```shell
$ triton ssh b851ba9 uname -v
joyent_20150826T120743Z
```
### Manage an instance

Commonly used container operations are supported in the Triton CLI:

```shell
$ triton help instance
...
    list (ls)        List instances.
    get              Get an instance.
    create           Create a new instance.
    delete (rm)      Delete one or more instances.
    start            Start one or more instances.
    stop             Stop one or more instances.
    reboot           Reboot one or more instances.
    ssh              SSH to the primary IP of an instance
    wait             Wait on instances changing state.
    audit            List instance actions.
```
### View packages and images

The package definitions and images available vary between different data centers and different Triton cloud implementations.

To see all the packages offered in the data center, and specific package information, use:

```shell
triton package list
triton package get ID|NAME
```

Similarly, to find out the available images and their details, do:

```shell
triton image list
triton image get ID|NAME
```
Note that Docker images are not shown in `triton images`, as they are maintained in Docker Hub and other third-party registries configured to be used with Triton clouds. In general, Docker containers should be provisioned and managed with the regular `docker` CLI. (Triton provides an endpoint that represents the entire data center as a single `DOCKER_HOST`. See the Triton Docker documentation for more information.)
## `TritonApi` Module Usage

Node-triton can also be used as a node module for your own node.js tooling. A basic example appropriate for a command-line tool is:
```javascript
var mod_bunyan = require('bunyan');
var mod_triton = require('triton');

var log = mod_bunyan.createLogger({name: 'my-tool'});

// See the `createClient` block comment for full usage details:
// https://github.com/TritonDataCenter/node-triton/blob/master/lib/index.js
mod_triton.createClient({
    log: log,
    // Use 'env' to pick up 'TRITON_/SDC_' env vars. Or manually specify a
    // `profile` object.
    profileName: 'env',
    unlockKeyFn: mod_triton.promptPassphraseUnlockKey
}, function (err, client) {
    if (err) {
        // handle err
    }

    client.listImages(function (err, images) {
        client.close();  // Remember to close the client to close the TCP conn.
        if (err) {
            console.error('listImages err:', err);
        } else {
            console.log(JSON.stringify(images, null, 4));
        }
    });
});
```
See the following for more details:

- The block-comment for `createClient` in lib/index.js.
- Some module-usage examples in examples/.
- The lower-level details in the top-comment in lib/tritonapi.js.
## Configuration

This section defines all the vars in a TritonApi config. The baked-in defaults are in "etc/defaults.json" and can be overridden for the CLI in "~/.triton/config.json" (on Windows: "%APPDATA%/Joyent/Triton/config.json").

| Name | Description |
| ---- | ----------- |
| profile | The name of the triton profile to use. The default with the CLI is "env", i.e. take config from `SDC_*` envvars. |
| cacheDir | The path (relative to the config dir, "~/.triton") where cache data is stored. The default is "cache", i.e. the `triton` CLI caches at "~/.triton/cache". |
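As a sketch, a "~/.triton/config.json" that overrides both vars above might look like this (the "central1" profile name is the example from the Setup section; any values you don't set fall back to "etc/defaults.json"):

```json
{
    "profile": "central1",
    "cacheDir": "cache"
}
```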
## node-triton differences with node-smartdc

- There is a single `triton` command instead of a number of `sdc-*` commands.
- `TRITON_*` environment variables are preferred to the `SDC_*` environment variables. However, the `SDC_*` envvars are still supported.
- Node-smartdc still has more complete coverage of the Triton CloudAPI. However, `triton` is catching up and is much more friendly to use.
## Development Hooks

Before committing, be sure to run, at least:

```shell
make check      # lint and style checks
make test-unit  # run unit tests
```

A good way to do that is to install the stock pre-commit hook in your clone via:

```shell
make git-hooks
```

Also please run the full (longer) test suite (`make test`). See the next section.
## Testing

node-triton has both unit tests (`make test-unit`) and integration tests (`make test-integration`). Integration tests require a config file, by default at "test/config.json". For example:
```shell
$ cat test/config.json
{
    "profileName": "east3b",
    "allowWriteActions": true,
    "image": "minimal-64",
    "package": "g4-highcpu-128M",
    "resizePackage": "g4-highcpu-256M"
}
```
See "test/config.json.sample" for a description of all config vars. Minimally just a "profileName" or "profile" is required.
Warning: Running the integration tests will create resources and could incur costs if running against a public cloud.
Usage:

```shell
make test-unit [TEST-VARS]         # run unit tests
make test-integration [TEST-VARS]  # run integration tests
make test [TEST-VARS]              # run both sets
```
Test output is node-tap's default short-form output. Full TAP output is written to "test-unit.tap" and "test-integration.tap". You can use `TAP=1` to have TAP output emitted to stdout.
### Test vars

There are a few `TEST_...` vars that can tweak how the tests are run:

- `TEST_CONFIG=<path to JSON config file>` - By default the integration test suite uses "test/config.json". Use this flag to provide an alternative. This can be useful if you have test configs for a number of separate target DCs. E.g.:

  ```shell
  $ cat test/coal.json
  {
      "profileName": "coal",
      "allowWriteActions": true
  }
  $ make test TEST_CONFIG=test/coal.json
  ```

  where "coal" here refers to a development Triton (a.k.a. SDC) "Cloud On A Laptop" standup.

- `TEST_GLOB=<glob for test file basename>` - By default all "*.test.js" files in the "test/unit/" and "test/integration/" dirs are run. To run just those with "image" in the name, use `make test TEST_GLOB=*image*`; to run a specific test file, use `make test TEST_GLOB=metadataFromOpts`.

- `TEST_KNOWN_FAIL=1` - At any given time there may be some known failures in the test suite that are being worked on in specific tickets. Those tests may be excluded from the default test run. They will show up in test output like this:

  ```
  test/integration/cli-snapshots.test.js ................ 0/1 1s
    Skipped: 1
      triton instance snapshot known failure, see TRITON-1387
  ```

  Set the `TEST_KNOWN_FAIL=1` environment variable to include these tests in the test run.

- `TEST_JOBS=<number of test files to run concurrently>` - By default this is 10. Set to 1 to run tests serially. Note: Write tests must be run serially.

- `TEST_TIMEOUT_S=<number of seconds timeout for each test file>` - By default this is 1200 (20 minutes). Ideally tests are written to take much less than 10 minutes.

- `TAP=1` - Have the test suite emit TAP output. This is a node-tap envvar.
## Testing Development Guide

Unit tests (i.e. those not requiring a CloudAPI endpoint) live in "test/unit/*.test.js"; integration tests live in "test/integration/*.test.js".
We are using node-tap. Read RFD 139 for some guidelines for node-tap usage. The more common we can make some basic usage patterns in the many Triton repos, the easier the maintenance.
Use "test/lib/*.js" and "test/{unit,integration}/helpers.js" to help make ".test.js" code more expressive. Avoid excessive parameterization, however. Some cut 'n paste of boilerplate is fine if it makes an individual test clearer and easier to debug and maintain.
Node-tap supports running test files in parallel, and `make test` by default runs tests in parallel. Therefore:

- Ensure that test files do not depend on each other and can run concurrently.
- Prefer more, smaller, and more targeted test files.
## Release process

Here is how to cut a release:

1. Make a commit to set the intended version in "package.json#version" and change `## not yet released` at the top of "CHANGES.md" to:

   ```
   ## not yet released

   ## $version
   ```

2. Get that commit approved and merged via a pull request.

3. Once that is merged and you've updated your local copy, run:

   ```shell
   make cutarelease
   ```

   This will run a couple of checks (clean working copy, versions in package.json and CHANGES.md match), then git tag and npm publish.
## License

MPL 2.0