@substrate/api-sidecar (v19.3.1)

REST service that makes it easy to interact with blockchain nodes built using Substrate's FRAME framework.
Prerequisites
<= v15.0.0

This service requires Node version 14 or higher.

Compatibility:

| Node Version | Stability |
|--------------|:---------:|
| v14.x.x      | Stable    |
| v16.x.x      | Stable    |
| v17.x.x      | Stable    |
| v18.x.x      | Stable    |
| v19.x.x      | Stable    |

>= v16.0.0

This service requires Node version 18.14 or higher.

Compatibility:

| Node Version | Stability |
|--------------|:---------:|
| v18.14.x     | Stable    |
| v20.x.x      | Stable    |
| v21.x.x      | Pending   |

NOTE: Node LTS (long-term support) versions start with an even number; odd-numbered versions go through a 6-month testing period with active support before they become unsupported. It is recommended to run sidecar with a stable, actively maintained version of Node.js.
Table of contents
- NPM package installation and usage
- Source code installation and usage
- Configuration
- Debugging fee and staking payout calculations
- Available endpoints
- Chain integration guide
- Docker
- Notes for maintainers
- Hardware requirements
NPM package installation and usage
Global installation
Install the service globally:
npm install -g @substrate/api-sidecar
# OR
yarn global add @substrate/api-sidecar
Run the service from any directory on your machine:
substrate-api-sidecar
To check your version, append the --version flag to substrate-api-sidecar.
Local installation
Install the service locally:
npm install @substrate/api-sidecar
# OR
yarn add @substrate/api-sidecar
Run the service from within the local directory:
node_modules/.bin/substrate-api-sidecar
Finishing up
Jump to the configuration section for more details on connecting to a node.
Click here for full endpoint docs.
In the full endpoint docs, you will also find the following trace-related endpoints:
/experimental/blocks/{blockId}/traces/operations?actions=false
/experimental/blocks/head/traces/operations?actions=false
/experimental/blocks/{blockId}/traces
/experimental/blocks/head/traces
To have access to these endpoints you need to:
- Run your node with the --unsafe-rpc-external flag.
- Check in sidecar that the BlocksTrace controller is active for the chain you are running. Currently, the BlocksTrace controller is active on Polkadot and Kusama.
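Assuming sidecar is running on its default host and port, the trace endpoints can be queried as follows. This is a sketch; the block height used is a placeholder:

```shell
SIDECAR="http://127.0.0.1:8080"   # default bind host and port
BLOCK_ID="1000000"                # placeholder block height

# Requires a node started with --unsafe-rpc-external and an active
# BlocksTrace controller (currently Polkadot and Kusama):
# curl -s "$SIDECAR/experimental/blocks/$BLOCK_ID/traces/operations?actions=false" | jq
# curl -s "$SIDECAR/experimental/blocks/head/traces" | jq
echo "$SIDECAR/experimental/blocks/$BLOCK_ID/traces"
```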
Source code installation and usage
Quick install
Simply run yarn.
Rust development installation
If you are looking to hack on the calc Rust crate, make sure your machine has an up-to-date version of rustup installed to manage Rust dependencies.
Install wasm-pack
if your machine does not already have it:
cargo install wasm-pack
Use yarn to do the remaining setup:
yarn
Running
# For live reload in development
yarn dev
# To build and run
yarn build
yarn start
Jump to the configuration section for more details on connecting to a node.
Configuration
To use a specific env profile (for instance, a profile called 'sample', read from .env.sample):
NODE_ENV=sample yarn start
For more information on our configuration manager visit its readme here. See Specs.ts
to view the env configuration spec.
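As a minimal sketch, a profile file could look like the following. The SAS_* variable names come from the configuration spec; the values shown are just the documented defaults:

```shell
# Write a hypothetical '.env.sample' profile with a few common settings.
cat > .env.sample <<'EOF'
SAS_EXPRESS_BIND_HOST=127.0.0.1
SAS_EXPRESS_PORT=8080
SAS_SUBSTRATE_URL=ws://127.0.0.1:9944
EOF

# Start sidecar with that profile:
# NODE_ENV=sample yarn start
echo "wrote .env.sample"
```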
Express server
- SAS_EXPRESS_BIND_HOST: address on which the server will be listening, defaults to 127.0.0.1.
- SAS_EXPRESS_PORT: port on which the server will be listening, defaults to 8080.
- SAS_EXPRESS_KEEP_ALIVE_TIMEOUT: set the keepAliveTimeout in express.
Substrate node
- SAS_SUBSTRATE_URL: URL to which the RPC proxy will attempt to connect, defaults to ws://127.0.0.1:9944. Accepts both WebSocket and HTTP URLs.
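For example, to point sidecar at a remote node instead of the local default (the URL below is a placeholder; substitute the endpoint of your own node):

```shell
# Connect to a remote node over WebSocket instead of the local default.
# The URL is illustrative only.
export SAS_SUBSTRATE_URL="wss://example-node:443"
# substrate-api-sidecar
echo "$SAS_SUBSTRATE_URL"
```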
Metrics Server
- SAS_METRICS_ENABLED: boolean to enable the metrics server instance with Prometheus (server metrics) and Loki (logging) connections, defaults to false.
- SAS_METRICS_PROM_HOST: host of the Prometheus server used to listen to emitted metrics, defaults to 127.0.0.1.
- SAS_METRICS_PROM_PORT: port of the Prometheus server, defaults to 9100.
- SAS_METRICS_LOKI_HOST: host of the Loki server used to pull the logs, defaults to 127.0.0.1.
- SAS_METRICS_LOKI_PORT: port of the Loki server, defaults to 3100.
Custom substrate types
Some chains require custom type definitions in order for Sidecar to know how to decode the data retrieved from the node. Sidecar provides environment variables which allow the user to specify an absolute path to a JSON file that contains type definitions in the corresponding formats. Consult polkadot-js/api for more info on
the type formats (see RegisteredTypes
). There is a helper CLI tool called generate-type-bundle that can generate a typesBundle.json
file for you using chain information from @polkadot/apps-config
. The generated json file from this tool will work directly with the SAS_SUBSTRATE_TYPES_BUNDLE
ENV variable.
- SAS_SUBSTRATE_TYPES_BUNDLE: a bundle of types with versioning info, type aliases, derives, and rpc definitions. Format: OverrideBundleType (see typesBundle).
- SAS_SUBSTRATE_TYPES_CHAIN: type definitions keyed by chainName. Format: Record<string, RegistryTypes> (see typesChain).
- SAS_SUBSTRATE_TYPES_SPEC: type definitions keyed by specName. Format: Record<string, RegistryTypes> (see typesSpec).
- SAS_SUBSTRATE_TYPES: type definitions and overrides, not keyed. Format: RegistryTypes (see types).
You can read more about defining types for polkadot-js here.
Connecting a modified node template
Polkadot-js can recognize the standard node template and inject the correct types, but if you have
modified the name of your chain in the node template you will need to add the types manually in a
JSON types
file like so:
// my-chains-types.json
{
"Address": "AccountId",
"LookupSource": "AccountId"
}
and then set the environment variable to point to your definitions:
export SAS_SUBSTRATE_TYPES=/path/to/my-chains-types.json
Logging
- SAS_LOG_LEVEL: the lowest priority log level to surface, defaults to info. Tip: set to http to see all HTTP requests.
- SAS_LOG_JSON: whether or not to format logs as JSON, defaults to false. Useful when using stdout to programmatically process Sidecar log data.
- SAS_LOG_FILTER_RPC: whether or not to filter polkadot-js API-WS RPC logging, defaults to false.
- SAS_LOG_STRIP_ANSI: whether or not to strip ANSI characters from logs, defaults to false. Useful when logging RPC calls with JSON written to transports.
- SAS_LOG_WRITE: whether or not to write logs to a log file, defaults to false. Accepts a boolean value. The log files will be written as logs.log. NOTE: It will only log what is available depending on what SAS_LOG_LEVEL is set to.
- SAS_LOG_WRITE_PATH: the path to write the log files to. Default is where the package is installed.
- SAS_LOG_WRITE_MAX_FILE_SIZE: the max file size in bytes for each written log file, defaults to 5242880 (5MB). NOTE: Once the max number of files have reached their max size, the logger will start to overwrite the first log file.
- SAS_LOG_WRITE_MAX_FILES: how many log files can be written, defaults to 5.
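Putting a few of these together, file logging with verbose HTTP output could be enabled like this (the values are examples, not recommendations):

```shell
# Example logging configuration: surface HTTP-level logs and write them
# to files under /tmp/sidecar-logs (an arbitrary path chosen here).
export SAS_LOG_LEVEL=http
export SAS_LOG_WRITE=true
export SAS_LOG_WRITE_PATH=/tmp/sidecar-logs
# yarn start
echo "$SAS_LOG_LEVEL $SAS_LOG_WRITE"
```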
Log levels
Log levels in order of decreasing importance are: error, warn, info, http, verbose, debug, silly.
| HTTP status code range | Log level |
|------------------------|-----------|
| code < 400             | http      |
| 400 <= code < 500      | warn      |
| 500 <= code            | error     |
RPC logging
If looking to track raw RPC requests/responses, one can use yarn start:log-rpc
to turn on polkadot-js's
logging. It is recommended to also set SAS_LOG_STRIP_ANSI=true
to increase the readability of the logging stream.
N.B. If running yarn start:log-rpc, NODE_ENV will be set to test. In order to still use your .env file, you can symlink it with .env.test. For example, you could run ln -s .env.myEnv .env.test && yarn start:log-rpc to use .env.myEnv to set ENV variables. (See the Linux commands ln and unlink for more info.)
Prometheus server
Prometheus metrics can be enabled by running sidecar with the following env configuration: SAS_METRICS_ENABLED=true

You can also expand the metrics tracking capabilities to include query params by adding to the env configuration: SAS_METRICS_INCLUDE_QUERYPARAMS=true

The metrics endpoint can then be accessed:
- on the default port: http://127.0.0.1:9100/metrics, or
- on your custom port if you defined one: http://127.0.0.1:<YOUR_CUSTOM_PORT>/metrics

A JSON-format response is available at http://127.0.0.1:9100/metrics.json.
That way you will have access to the default Prometheus node instance metrics, and the following metrics will be emitted for each route:

- sas_http_request_error: counter; tracks http errors occurring in sidecar
- sas_http_request_success: counter; tracks successful http requests
- sas_http_requests: counter; tracks all http requests
- sas_request_duration_seconds: histogram; tracks the latency of the requests
- sas_response_size_bytes_seconds: histogram; tracks the response size of the requests
- sas_response_size_latency_ratio_seconds: histogram; tracks the response bytes per second of the requests

The blocks controller also includes the following route-specific metrics:

- sas_extrinsics_in_request: histogram; tracks the number of extrinsics returned in the request when a range of blocks is queried
- sas_extrinsics_per_second: histogram; tracks the returned extrinsics per second
- sas_extrinsics_per_block: histogram; tracks the returned extrinsics per block
- sas_seconds_per_block: histogram; tracks the request time per block
The metrics registry is injected into the Response object when the SAS_METRICS_ENABLED flag is set to true in the .env file, allowing the controller-based metrics to be extended to any given controller from within the controller functions.
To successfully run and access the metrics and logs in Grafana (for example), Prometheus, Loki, and Promtail are required. For Mac users using Homebrew:
brew install prometheus loki promtail
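With the metrics server enabled, the emitted series can be fetched like so (a sketch, assuming the default Prometheus host and port documented above):

```shell
# Assumed defaults: metrics exposed on 127.0.0.1:9100.
METRICS_URL="http://127.0.0.1:9100/metrics"

# With a running sidecar started with SAS_METRICS_ENABLED=true:
# curl -s "$METRICS_URL" | grep '^sas_'    # sidecar-specific series only
# curl -s "${METRICS_URL}.json" | jq       # JSON-formatted variant
echo "$METRICS_URL"
```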
Debugging fee and staking payout calculations
It is possible to get more information about the fee and staking payout calculation process logged to the console. Because these calculations happen in the statically compiled WebAssembly part, a re-compile with the proper environment variable set is necessary:
CALC_DEBUG=1 sh calc/build.sh
Available endpoints
Click here for full endpoint docs.
Chain integration guide
Click here for chain integration guide.
Docker
With each release, the maintainers publish a Docker image to Docker Hub at parity/substrate-api-sidecar.
Pull the latest release
docker pull docker.io/parity/substrate-api-sidecar:latest
The specific image tag matches the release version.
Or build from source
yarn build:docker
Run
# For default use run:
docker run --rm -it --read-only -p 8080:8080 substrate-api-sidecar
# Or if you want to use environment variables set in `.env.docker`, run:
docker run --rm -it --read-only --env-file .env.docker -p 8080:8080 substrate-api-sidecar
NOTE: While you could omit the --read-only
flag, it is strongly recommended for containers used in production.
Then you can test with:
curl -s http://0.0.0.0:8080/blocks/head | jq
N.B. The docker flow presented here is just a sample to help get started. Modifications may be necessary for secure usage.
Contribute
Need help or want to contribute ideas or code? Head over to our CONTRIBUTING doc for more information.
Notes for maintainers
Commits
All the commits in this repo follow the Conventional Commits spec. When merging a PR, make sure 1) to use squash merge and 2) that the title of the PR follows the Conventional Commits spec.
Updating polkadot-js dependencies
Whenever the polkadot-js ecosystem releases a new version, it's important to keep up with these updates and review the release notes for any breaking changes or high priority updates. In order to update all the dependencies and resolutions, create a new branch, such as yourname-update-pjs, and then run yarn up "@polkadot/*" in that branch.

- @polkadot/api release notes
- @polkadot/util-crypto release notes
- @substrate/calc npm release page

Ensure everything is up to date and working by running the following:

yarn
yarn dedupe
yarn build
yarn lint
yarn test
yarn test:historical-e2e-tests
yarn test:latest-e2e-tests

Commit the dependency updates with a name like chore(deps): update polkadot-js deps (adjust the title based on what was updated; refer to the commit history for examples). Then, wait for it to be merged.

Follow RELEASE.md next if you're working through a full sidecar release. This will involve creating a separate PR where the changelog and versions are bumped.
Maintenance Guide
A more complete list of the maintainer's tasks can be found in the MAINTENANCE.md guide.
Hardware requirements
Disk Space
Sidecar is a stateless program and thus should not use any disk space.
Memory
The requirements follow the defaults of Node.js processes, which have an upper bound of a little less than 2GB of heap memory, so 4GB of memory should be sufficient.
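If you do need to raise that heap ceiling, Node's standard --max-old-space-size flag (in MB) can be passed via NODE_OPTIONS; the 4096 value here is just an example:

```shell
# Optional: raise Node's old-space heap ceiling before starting sidecar.
# Usually unnecessary; the default is sufficient for sidecar.
export NODE_OPTIONS="--max-old-space-size=4096"
# substrate-api-sidecar   # or: yarn start
echo "$NODE_OPTIONS"
```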
Running sidecar and a node
Please note that if you run sidecar next to a substrate node on a single machine, your system specifications should be significantly higher.
- Our official specifications for validator nodes can be found on the polkadot wiki page.
- Regarding archive nodes:
  - As mentioned on the polkadot wiki page, the disk space needed by an archive node depends on the current block height of the specific chain.
  - There are no other hardware requirements for an archive node, since it is not time critical (archive nodes do not participate in consensus).
Benchmarks
During the benchmarks we performed, we concluded that sidecar would use a max of 1.1GB of RSS memory.
The benchmarks:
- used 4 threads over 12 open http connections, and
- overloaded the cache with every runtime possible on polkadot.
Hardware specs on which the benchmarks were performed:

- Machine type: n2-standard-4 (4 vCPUs, 16 GB memory)
- CPU platform: Intel Cascade Lake
- Hard disk: 500GB