
ssb-hyper-blobs v4.1.0

a scuttlebutt plugin for storing blobs in hyperdrives, and serving them locally over http

Downloads: 57

Readme

ssb-hyper-blobs

A scuttlebutt plugin which runs an artefact-server instance, providing hyper/DAT based blob storage.

Example Usage

/* setup */
const caps = require('ssb-caps')
const Config = require('ssb-config/inject')

const stack = require('secret-stack')()
  .use(require('ssb-db'))
  .use(require('ssb-hyper-blobs'))

const config = Config('temp', {
  path: '/tmp/ssb-hyper-blobs' + Date.now(),
  caps
})

const ssb = stack(config)

/* adding a file */
const path = require('path')
const pull = require('pull-stream')
const file = require('pull-file')

const fileName = 'README.md'

pull(
  file(path.resolve(__dirname, '../' + fileName)),
  ssb.hyperBlobs.add((err, data) => {
    if (err) throw err
    console.log(data)
    // {
    //   driveAddress: 'e8b2557f90b94f559abae254802594034993f9a93da55e5f147e2e870d8f7955',
    //   blobId: '2df0cb34-77e5-40af-99ad-52d85ff67d35',
    //   readKey: '59e9eeb226bd77581d849346b2063eebac518e06e5f3637b75a15e68ce37eaab'
    // }
  })
)

/* reading a file over http */
const axios = require('axios')

// `data` is the object passed to the add callback above
const { driveAddress, blobId, readKey } = data
const url = `http://localhost:26836/drive/${driveAddress}/blob/${blobId}?readKey=${readKey}&fileName=${fileName}`
// if this was e.g. an image you could use this URL directly in an <img> tag
// NOTE: query.fileName is used to pin the mimeType, which helps some html rendering, e.g. <video>

axios.get(url)
  .then(res => {
    console.log(res.data.slice(0, 600), '...')
  })

Behaviour

On plugin init, starts two servers:

  1. An http server for serving blobs
  2. A hyper server which handles p2p replication of blobs

When you request a particular blob from the http server, if you already have it locally it will be immediately served. Otherwise, it will be replicated from remote peers (where possible), then served.

When you connect to another scuttlebutt peer, if they also have ssb-hyper-blobs installed, you will make an RPC call on them, registering your hyper driveAddress with them.

Further, all scuttlebutt messages are scanned for references to driveAddresses, and these are registered.

Stores files in path.join(config.path, 'hyperBlobs')

Pataka mode

A pataka is an always-online peer, with the following differences:

  • it does full replication of drives (in contrast, other peers do sparse replication - only fetch files they want to view)
  • it has no encryption keys for files, so it cannot read anything it has stored / replicated
  • it does not have methods for adding files to its own drive
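For illustration, a pataka deployment might pass config overrides like this (a sketch; the field meanings and defaults are covered in the Config section below):

```javascript
// Sketch: config overrides for running as a pataka
// (merged over the defaults shown in the Config section)
const patakaConfig = {
  hyperBlobs: {
    pataka: true,    // full replication, no read keys, no local adds
    autoPrune: true  // keep storage of remote peers' blobs bounded
  }
}
```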

Config

This module can be configured by modifying the config passed into the secret-stack startup. These are the default config values if nothing is passed in:

{
  hyperBlobs: {
    port: 26836,
    pataka: false,
    storeOpts: {},
    autoPrune: false
  }
}
  • port Number - the port the http server will start on
  • pataka Boolean - whether the ssb instance is running in Pataka mode (see Behaviour above)
  • storeOpts Object - options passed down to the internal artefact-store
  • autoPrune Boolean|Object - enables the prune function to run at a set interval, removing the oldest files first. It tries to keep the size of hyperBlobs from remote peers under maxRemoteSize
    • default: false - auto pruning is disabled
    • if true, the default config will be used (see below)
    • if an object, you can provide custom config for the following, or leave them empty to use the default values:
      • startDelay: Number (optional) the delay in milliseconds before autoPrune first runs
        • default: 600000 - 10 minutes
      • intervalTime: Number (optional) the interval in milliseconds at which the autoPrune function runs
        • default: 3600000 - 1 hour intervals
      • maxRemoteSize: Number (optional) the max total size for hyperBlobs in bytes. When the total size of hyperBlobs exceeds this number, pruning will remove the excess
        • default: 5 * 1024 * 1024 * 1024 - 5GB
        • note: "remote" because pruning only ever prunes others' blobs, never your own
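As a rough sketch of how these defaults are applied (the merge itself is internal to ssb-hyper-blobs; the helper name here is illustrative):

```javascript
// Illustrative only: merge user-supplied config over the plugin defaults.
// The real merge logic lives inside ssb-hyper-blobs.
const DEFAULTS = {
  port: 26836,
  pataka: false,
  storeOpts: {},
  autoPrune: false
}

function resolveHyperBlobsConfig (config = {}) {
  return Object.assign({}, DEFAULTS, config.hyperBlobs)
}

console.log(resolveHyperBlobsConfig({ hyperBlobs: { pataka: true } }))
// pataka is overridden; unspecified fields keep their defaults
```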

API

ssb.hyperBlobs.onReady(cb)

Runs the callback function cb once the internal ArtefactServer is fully stood up.

ssb.hyperBlobs.driveAddress(cb)

Get your driveAddress. This method is available to all peers.

If running in pataka mode (or the server doesn't trust you), this returns null (as patakas do not have their own drive).

ssb.hyperBlobs.add(cb) => sink

Creates a pull-stream sink that works well with e.g. pull-file

  • cb Function - a callback which receives err, data, where data is the info needed to read the blob:
    • driveAddress - the address of the hyper drive
    • blobId
      • an id unique to this blob (within this particular drive)
      • NOT a hash of the content (currently generated by uuid)
    • readKey
      • the encryption key you need if you want to read the contents of the blob

ssb.hyperBlobs.registerDrive(driveAddress, cb)

This method is mainly for use as a remote call by a peer that has connected to you, allowing them to register their drives with you.

ssb.hyperBlobs.prune(opts, cb)

This method prunes files from your store which match the specified constraints.

NOTE: this only removes local copies of files you've replicated from others (files you've added yourself will never be pruned by this function).

  • opts

    • pruneSize: Number (optional) the amount of data to try to prune:

      • default: null, meaning "no limit"
      • if pruning a particular file would take you over the target pruneSize, it will be skipped
    • minSize: Number (optional) limit pruning to files whose size is >= minSize

    • maxSize: Number (optional) limit pruning to files whose size is <= maxSize

      • if omitted, all files above minSize (and satisfying any other constraints) will be pruned
    • minDate: Number|Date (optional) all files that were last accessed between this date and maxDate will be pruned. The default value is 0

    • maxDate: Number|Date (optional) all files that were last accessed between minDate and this date will be pruned. The default value is today's date

    • sort: Function (optional) sort function to be called on the files before pruning starts

      • after sorting, pruning proceeds until pruneSize is reached (or the end of the list)
      • default: (a, b) => a.atime - b.atime (i.e. sort by access time, oldest > newest)
      • see the object below for the fields on a file
  • cb Function - a callback which receives err, data, where data is an array of the files that were pruned, in the form:

      {
        filename: String, // blobId
        driveAddress: Buffer, // address of the drive the file is on
    
        // fields similar to those of fs.stat
        dev: Int,
        nlink: Int,
        rdev: Int,
        blksize: Int,
        ino: Int,
        mode: Int,
        uid: Int,
        gid: Int,
        size: Int,
        offset: Int,
        byteOffset: Int,
        blocks: Int,
        atime: Date,
        mtime: Date,
        ctime: Date,
        linkname: String,
        mount: String,
        metadata: Object
      }
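The selection behaviour described above (sort, then take files until pruneSize is reached, skipping any file that would overshoot) can be sketched roughly like this. This is illustrative only, not the actual implementation, and it omits the minDate/maxDate constraints for brevity:

```javascript
// Illustrative sketch of how prune might pick candidate files.
function selectFilesToPrune (files, opts = {}) {
  const {
    pruneSize = null,                  // null means "no limit"
    minSize = 0,
    maxSize = Infinity,
    sort = (a, b) => a.atime - b.atime // default: oldest access first
  } = opts

  const candidates = files
    .filter(f => f.size >= minSize && f.size <= maxSize)
    .sort(sort)

  const selected = []
  let total = 0
  for (const file of candidates) {
    // skip files that would take us over the target pruneSize
    if (pruneSize !== null && total + file.size > pruneSize) continue
    selected.push(file)
    total += file.size
  }
  return selected
}
```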

ssb.hyperBlobs.autoPrune.set(config, cb)

Sets config.hyperBlobs.autoPrune (see above) where config can be either Boolean or { startDelay, intervalTime, maxRemoteSize }.

This function triggers several things:

  • persists this to the {appHome}/config file
  • updates the current state of the autoPrune process
    • stops any running interval
    • if the value is true or { startDelay, intervalTime, maxRemoteSize }, it starts up a new autoPrune process

ssb.hyperBlobs.autoPrune.get(cb)

Returns config.hyperBlobs.autoPrune, which is either null (when it's not set) or { startDelay, intervalTime, maxRemoteSize } (when set). See above for the default values of these fields.

GET http://localhost:PORT/drive/:driveAddress/blob/:blobId?readKey=READ_KEY&start=START&end=END&mimeType=MIME&fileName=FILENAME

params:

  • driveAddress - address of drive where blob is stored (hex)
  • blobId - id of blob within that store

query:

  • readKey String (hex)
    • decryption key which allows you to read the content of the blob
    • required (may not be required in future to allow pulling the encrypted blob)
  • start Number (optional)
    • byte offset to begin stream
    • default: 0
  • end Number (optional)
    • byte offset to stop stream
    • default: EOF
  • mimeType String (optional)
    • help the response to encode mimeType
  • fileName String (optional)
    • provide this if you don't have the mimeType; the extension will be used to try to derive it for the response

PORT is whatever is configured as hyperBlobs.port
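Building the URL from the params and query fields above could be sketched as a small helper (the helper name is illustrative, not part of the plugin's API):

```javascript
// Illustrative helper for assembling a blob URL from the route
// params (driveAddress, blobId) and query fields documented above.
function blobUrl ({ port = 26836, driveAddress, blobId, readKey, start, end, mimeType, fileName }) {
  const query = new URLSearchParams({ readKey }) // readKey is required
  if (start !== undefined) query.set('start', start)
  if (end !== undefined) query.set('end', end)
  if (mimeType) query.set('mimeType', mimeType)
  if (fileName) query.set('fileName', fileName)
  return `http://localhost:${port}/drive/${driveAddress}/blob/${blobId}?${query}`
}

console.log(blobUrl({
  driveAddress: 'e8b2557f90b94f559abae254802594034993f9a93da55e5f147e2e870d8f7955',
  blobId: '2df0cb34-77e5-40af-99ad-52d85ff67d35',
  readKey: '59e9eeb226bd77581d849346b2063eebac518e06e5f3637b75a15e68ce37eaab',
  fileName: 'README.md'
}))
```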

TODO

  • [ ] bound how drives are auto-registered?
    • [ ] only accept RPC registration of drives which are from people you follow / are in a group with
      • this would stop patakas being abused by people connecting who are not "members" of that pataka
    • [ ] only auto-register drives from friends' messages?