
@xdbchain/js-xdbchain-node-crawler (v4.2.0)

Crawl the network for nodes.

stellar-js-node-crawler

Crawl the Stellar Network. Identify the nodes and determine their validating status.

How does it work?

1) Connecting

The crawler uses a seed list of nodes to start crawling. Using the node connector package, it connects to every node in the seed list.
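Concretely, a seed list is just a set of [ip, port] pairs, the same shape the crawl call below expects. As an illustration, it could be loaded from the bundled seed/nodes.json referenced later; the exact JSON layout of that file is an assumption here:

// Assumption: seed/nodes.json contains [ip, port] pairs, matching the shape
// the crawl call expects ([[ip, port], [ip, port], ...]).
const fs = require('fs');
const seedNodes = JSON.parse(fs.readFileSync('seed/nodes.json', 'utf8'));
console.log(`loaded ${seedNodes.length} seed nodes`);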

2) Node discovery

When the crawler connects to a node, that node sends a list of peer addresses. The crawler keeps connecting to newly discovered nodes until none are left, as sketched below.
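The discovery phase behaves like a breadth-first search over peer addresses. A minimal sketch of that loop follows; connectAndGetPeers is a hypothetical stand-in for the node connector logic, stubbed so the sketch is self-contained, and is not part of this package's API:

// Hypothetical stand-in: connect to a node and return the peer addresses it
// advertises. A real implementation would use the node connector package.
async function connectAndGetPeers(ip, port) {
    return []; // a real implementation returns the peer's [[ip, port], ...] list
}

// Breadth-first discovery over peer addresses, starting from the seed list.
async function discover(seedNodes) {
    const visited = new Set();
    const queue = [...seedNodes];
    while (queue.length > 0) {
        const [ip, port] = queue.shift();
        const key = ip + ':' + port;
        if (visited.has(key)) continue;
        visited.add(key);
        const peers = await connectAndGetPeers(ip, port); // node sends peers on connect
        for (const [peerIp, peerPort] of peers) {
            if (!visited.has(peerIp + ':' + peerPort)) queue.push([peerIp, peerPort]);
        }
    }
    return visited; // every address the crawler reached or discovered
}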

3) QuorumSets

To determine the quorumSet of a node, the crawler listens to every node for at least 6 seconds. When a node participates in consensus, it sends SCP messages that contain its quorumSet hash. The crawler then sends a GET_SCP_QUORUMSET message to retrieve the actual quorumSet. If a node doesn't respond in time or answers with a DONT_HAVE message, the crawler tries another node.
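The retrieval flow amounts to: pick up a quorumSet hash from SCP traffic, then ask peers for the matching quorumSet until one answers. A minimal sketch under those assumptions; requestQuorumSet is a hypothetical helper, stubbed here, and not this package's API:

// Hypothetical helper modelling a GET_SCP_QUORUMSET request: resolves to the
// quorumSet, resolves to null for a DONT_HAVE reply, or rejects on timeout.
async function requestQuorumSet(peer, hash, timeoutMs) {
    return null; // stub; a real implementation sends the message and awaits a reply
}

// Ask peers in turn for the quorumSet matching a hash, falling back on failure.
async function fetchQuorumSet(hash, peers, timeoutMs = 2000) {
    for (const peer of peers) {
        try {
            const quorumSet = await requestQuorumSet(peer, hash, timeoutMs);
            if (quorumSet !== null) return quorumSet; // peer had the quorumSet
        } catch (e) {
            // timeout or connection error: try the next peer
        }
    }
    return null; // no peer could supply the quorumSet
}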

4) Validating status

To determine the validating status of a node, the crawler listens for externalize messages. An externalize message indicates that the node has closed a slot in its ledger.

A node is marked as validating when the closed slot is in line with the network, meaning it closed the most recent slot (or is not far behind) and closed it with the correct value.

For the crawler to know the most recent closed slot, it relies on a set of trusted nodes: if a majority of the trusted nodes externalize a slot with a specific value, the crawler marks that slot as the latest, as sketched below.
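In other words, the latest ledger is whatever a majority of trusted nodes agree on. An illustrative tally of that rule; the bookkeeping names are assumptions, not the crawler's internals:

// Illustrative only: accept a (slot, value) pair as the latest ledger once a
// majority of trusted nodes has externalized it.
function updateLatestLedger(externalizeVotes, trustedCount, latest) {
    // externalizeVotes: Map from 'slot:value' to a Set of trusted node ids
    for (const [key, voters] of externalizeVotes) {
        const [slot, value] = key.split(':');
        if (voters.size > trustedCount / 2 && Number(slot) > latest.slot) {
            latest = { slot: Number(slot), value }; // majority agreed on a newer slot
        }
    }
    return latest;
}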

Every node sends not only its own SCP externalize messages but also relays the messages from the nodes in its own transitive quorumSet. These include the messages of all the nodes in the network transitive quorumSet because, by definition, those nodes are transitively trusted by every node in the network. The crawler therefore performs best if you pick its trusted nodes (the nodes it uses to determine the latest ledger) from the network transitive quorumSet. That way, whichever node the crawler connects and listens to, the messages from the network transitive quorumSet are relayed, and the crawler can correctly determine whether that node is validating.

By default, the crawler listens to a node for 6 seconds to determine its validating status. However, if the node is participating in consensus by sending other types of SCP messages but no externalize messages, that could indicate slower ledger close times. The crawler then waits another six seconds and repeats this process, up to 100 seconds of total listening time. If no ledger is closed within 100 seconds, the node is marked as not validating.
An important consequence is that crawl time grows with ledger close times, taking the longest when the network is halted.
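The timing policy can be pictured as a simple loop: extend the listen window in 6-second steps while SCP traffic suggests consensus is still in progress, up to a 100-second ceiling. A sketch under those stated numbers; listenTo, sawExternalize, and sawOtherScpTraffic are hypothetical stand-ins, stubbed for self-containment:

// Hypothetical stand-ins for the crawler's internal bookkeeping.
async function listenTo(node, ms) { /* collect messages for one window */ }
function sawExternalize(node) { return false; }
function sawOtherScpTraffic(node) { return false; }

// Adaptive listen window: 6-second steps, up to 100 seconds in total.
async function determineValidatingStatus(node) {
    const STEP_MS = 6000;   // default listen window
    const MAX_MS = 100000;  // stop after 100 seconds of total listening time
    let elapsed = 0;
    while (elapsed < MAX_MS) {
        await listenTo(node, STEP_MS);
        elapsed += STEP_MS;
        if (sawExternalize(node)) return true;        // node closed a slot in line with the network
        if (!sawOtherScpTraffic(node)) return false;  // no sign of consensus participation
        // other SCP traffic but no externalize: likely slow ledger closes, keep listening
    }
    return false; // no ledger closed within 100 seconds: not validating
}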

5) Peer nodes

The crawler returns a PeerNode object for every successful connection.
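The PeerNode fields aren't documented in this README; purely as a hypothetical illustration of the kind of per-node data a crawl gathers:

// Hypothetical illustration only: the real PeerNode class defines its own
// fields. Every field name below is an assumption.
const examplePeerNode = {
    ip: '203.0.113.5',           // assumed field
    port: 11625,                 // assumed field
    publicKey: 'GA...',          // assumed field, learned during the handshake
    isValidating: true,          // assumed field, derived from externalize messages
    quorumSetHashKey: 'ab12...'  // assumed field, taken from SCP messages
};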

Install

yarn install

Build code

yarn run build: builds the code into the lib folder

Usage

Create crawler

let myCrawler = createCrawler({
    nodeConfig: getConfigFromEnv(), // node configuration, read from environment variables
    maxOpenConnections: 25, // limit on simultaneous open connections
    maxCrawlTime: 900000 // safety timeout in milliseconds (here: 15 minutes)
});

The crawler is itself a node and needs to be configured accordingly. You can limit the number of simultaneous open connections so you don't overwhelm your server, and set maxCrawlTime (in milliseconds) as a safety limit in case the crawler gets stuck.

Run crawl

let result = await myCrawler.crawl(
    nodes, // [[ip, port], [ip, port], ...]
    trustedQSet, // a quorumSet the crawler uses to determine the latest closed ledger
    latestKnownLedger // a previously detected ledger the crawler can use to ignore older externalize messages
);
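The README doesn't spell out the shape of the returned result. As a hedged illustration only, assuming it exposes the collected PeerNode objects under a peers property (an assumption, not documented API):

// Assumption: the crawl result exposes the collected PeerNode objects under a
// 'peers' property; the property and field names are guesses for illustration.
for (const peer of result.peers) {
    console.log(`${peer.ip}:${peer.port}`, peer.isValidating ? 'validating' : 'not validating');
}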

Example script

Check out examples/crawl.js for an example of how to crawl the network. You can try it out using the bundled seed file with the following command:
yarn run examples:crawl seed/nodes.json

Another example is the Stellarbeat backend.

Publish new release

Uses the np package and semantic versioning.

np