
fl-casper-tools v0.2.0

Tools for aggregating Casper Network AMM data

FLUIDEFI Caspernet Aggregator Tools

This repository contains a library of tools that can be composed into a data-aggregation app, with the goal of storing all desired Caspernet raw and decoded DeFi data in an efficient data store.

An application using this library would perform ETL operations on historical and new blockchain data as it is produced. The types of data extracted and decoded include:

  • Block data (including header and body)
  • Deploys (raw)
  • DeFi data (decoded from state changes in deploys):
    • Token approvals
    • Token transfers
    • Minting / burning of liquidity
    • Updates to liquidity pool reserves / asset pricing
    • Other operations for DeFi platforms deployed to Caspernet

Quickstart

Install the package with:

npm install fl-casper-tools

and import into your project with the following code:

Import (javascript):

const { 
  CasperBlockchain, 
  BlockFetcher,
  BlockParser,
  DataStore,
  BlockSaver,
  BlockConsumer,
} = require('fl-casper-tools');

Import (typescript):

import { 
  CasperBlockchain, 
  BlockFetcher,
  BlockParser,
  DataStore,
  BlockSaver,
  BlockConsumer,
} from 'fl-casper-tools';

Usage

CasperBlockchain is a lightweight wrapper around the casper-sdk that contains only the methods necessary for the data-aggregation processes.

Create an instance by passing your JSON-RPC url to the constructor. This should point to port 7777/rpc on your node server.

const blockchain = new CasperBlockchain(jsonRpcProviderUrl);

Simple example - get the current blockchain height:

const height = await blockchain.getCurrentBlockHeight();

To use the examples, you will need a Postgres database set up with the blocks table created.

The blocks table can be created with the following SQL code:

CREATE TABLE IF NOT EXISTS blocks (
    block_hash          varchar(64)      NOT NULL,
    parent_hash         varchar(64)     ,
    state_root_hash     varchar(64)     ,
    body_hash           varchar(64)     ,
    random_bit          boolean         ,
    accumulated_seed    varchar(64)     ,
    era_end             boolean         ,
    timestamp_utc       timestamptz     ,
    era_id              integer         ,
    block_number        integer         ,
    protocol_version    varchar(20)     ,
    proposer            varchar(68)     ,
    deploy_hashes       varchar(64)[]   ,
    transfer_hashes     varchar(64)[]   ,
    api_version         varchar(20)     ,
    CONSTRAINT pk_blocks PRIMARY KEY ( block_number )
);

Initialize other classes used for processing blocks:

const fetcher = new BlockFetcher(blockchain);
const parser = new BlockParser();
const datastore = new DataStore(getDataSource(dataSourceOptions));
const blockSaver = new BlockSaver(datastore);

Initialize the typeorm DataSource:

await datastore.initialize();

The dataSourceOptions argument is a typeorm DataSourceOptions object.

You can now fetch a block, parse it and add to your data store:

const blockFetcherResult = await fetcher.apply(700000);

The result will have the following fields:

type BlockFetcherResult = {
  success: boolean;
  error?: any;
  message?: string;
  height?: number;
  block?: any;
};

If the request failed or encountered an error, success will be false and an error message may be present.
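One way to branch on that result is sketched below. The BlockFetcherResult type is copied from above; the unwrapping function is an assumption about usage, not part of the library:

```typescript
// BlockFetcherResult as documented above.
type BlockFetcherResult = {
  success: boolean;
  error?: any;
  message?: string;
  height?: number;
  block?: any;
};

// Hypothetical handling pattern: return the block on success, surface the
// message (or a fallback) on failure so callers can log or retry.
function unwrapBlock(result: BlockFetcherResult): any {
  if (!result.success) {
    throw new Error(result.message ?? `block fetch failed at height ${result.height}`);
  }
  return result.block;
}
```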

Parse the block to produce fields that map to the block model in the data store:

const block = blockFetcherResult.block;
const parserResult = parser.apply(block);

The result will have the following fields, similar to the BlockFetcherResult:

type BlockParserResult = {
  success: boolean;
  error?: any;
  message?: string;
  height?: number;
  fields?: any;
};

If the parser was successful, save the block to the datastore using the blocks model:

const fields = parserResult.fields;
const result = await blockSaver.apply(fields);

The full process can be abstracted by using the BlockConsumer:

const blockConsumer = new BlockConsumer(
  parser,
  fetcher,
  blockSaver
);

const blockConsumerResult = await blockConsumer.apply(700000);

The result will have the following fields:

type BlockConsumerResult = {
  success: boolean;
  error?: any;
  message?: string;
  height?: number;
};
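Since the consumer processes one height per apply call, a backfill over a range is just a loop. The sketch below is written generically against any object with an `apply(height)` method returning a success flag; the collect-failures-and-continue policy is an assumption, not library behavior:

```typescript
// Minimal sequential backfill sketch: run a consumer-like object over a
// range of heights, collecting failed heights instead of aborting, so
// they can be retried later. Parallelism/retry policy is up to the caller.
type ConsumerLike = {
  apply(height: number): Promise<{ success: boolean; message?: string }>;
};

async function backfill(consumer: ConsumerLike, from: number, to: number): Promise<number[]> {
  const failed: number[] = [];
  for (let height = from; height <= to; height++) {
    const result = await consumer.apply(height);
    if (!result.success) failed.push(height);
  }
  return failed;
}
```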

You can define your own data store to pass to the Blocks model constructor, if you don't want to use typeorm. It just needs to implement the IDataStore interface.
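The README does not show the shape of IDataStore, so the sketch below assumes a minimal contract (initialize plus an insert) purely for illustration; an in-memory stand-in like this is mainly useful in unit tests:

```typescript
// Assumed minimal contract; the real IDataStore interface may differ.
interface MinimalDataStore {
  initialize(): Promise<void>;
  insert(table: string, fields: Record<string, unknown>): Promise<boolean>;
}

// In-memory stand-in: rows are kept in a Map per table name.
class InMemoryDataStore implements MinimalDataStore {
  private tables = new Map<string, Record<string, unknown>[]>();

  async initialize(): Promise<void> {
    this.tables.clear();
  }

  async insert(table: string, fields: Record<string, unknown>): Promise<boolean> {
    const rows = this.tables.get(table) ?? [];
    rows.push(fields);
    this.tables.set(table, rows);
    return true;
  }

  rowCount(table: string): number {
    return this.tables.get(table)?.length ?? 0;
  }
}
```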

Testing:

If you clone this repository directly, you can run the included unit tests with the npm command:

npm run test

Documentation:

Full documentation can be found in the docs folder.

The project was initiated with DEVxDAO proposal #451

Based on casper.network

Open source components:

Contributing

Please see Contributing Guidelines.

Code of Conduct

Please see Code of Conduct.

License

This project is licensed under MIT license.

About us: