
@omegajs/scroll

v1.0.0


Omega Scroll

@omegajs/scroll

See the full API docs at docs.l1fe.tech

Scroll represents a secure and distributed append-only log, meticulously constructed for integration with the Omega Network. It is specifically designed for the dissemination of large datasets and the streaming of real-time data.

Features

  • Sparse replication. Only download the data you are interested in.
  • Realtime. Get the latest updates to the log fast and securely.
  • Performant. Uses a simple flat file structure to maximize I/O performance.
  • Secure. Uses signed merkle trees to verify log integrity in real time.
  • Modular. Scroll aims to do one thing and one thing well - distributing a stream of data.

Note that the latest release is Scroll 10, which adds support for truncate and many other things. Version 10 is not compatible with version 9 and earlier, but it is considered LTS, meaning the storage format and wire protocol are forward compatible with future versions.

Install Via L1FE's NPM

npm config set registry https://npm.l1fe.tech
npm install @omegajs/scroll

Install Via L1FE's Git Repository

git clone https://lab.l1fe.tech/omega/scroll.git
cd scroll
npm install

API

const scroll = new Scroll(storage, [key], [options])

Make a new Scroll instance.

storage should be set to a directory where you want to vault the data and scroll metadata.

const scroll = new Scroll('./directory') // vault data in ./directory

Alternatively you can pass a function that is called with every filename Scroll needs and returns your own abstract-random-access instance, which is used to vault the data.

const RAM = require('random-access-memory')
const scroll = new Scroll((filename) => {
  // filename will be one of: data, bitfield, tree, signatures, key, secret_key
  // the data file will contain all your data concatenated.

  // just vault all files in ram by returning a random-access-memory instance
  return new RAM()
})

By default Scroll uses random-access-file. Passing a function is also useful if you want to vault specific files in other directories.

Scroll will produce the following files:

  • oplog - The internal truncating journal/oplog that tracks mutations, the public key and other metadata.
  • tree - The Merkle Tree file.
  • bitfield - The bitfield of which data blocks this scroll has.
  • data - The raw data of each block.

Note that tree, data, and bitfield are normally heavily sparse files.

key can be set to a Scroll public key. If you do not set this the public key will be loaded from storage. If no key exists a new key pair will be generated.

options include:

{
  createIfMissing: true, // create a new Scroll key pair if none was present in storage
  overwrite: false, // overwrite any old Scroll that might already exist
  sparse: true, // enable sparse mode, counting unavailable blocks towards scroll.length and scroll.byteLength
  valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to binary
  encodeBatch: batch => { ... }, // optionally apply an encoding to complete batches
  keyPair: kp, // optionally pass the public key and secret key as a key pair
  encryptionKey: k, // optionally pass an encryption key to enable block encryption
  onwait: () => {}, // hook that is called if gets are waiting for download
  timeout: 0, // wait at max some milliseconds (0 means no timeout)
  writable: true // disable appends and truncates
}

You can also set valueEncoding to any abstract-encoding or compact-encoding instance.

valueEncodings will be applied to individual blocks, even if you append batches. If you want to control encoding at the batch-level, you can use the encodeBatch option, which is a function that takes a batch and returns a binary-encoded batch. If you provide a custom valueEncoding, it will not be applied prior to encodeBatch.

const { length, byteLength } = await scroll.append(block)

Append a block of data (or an array of blocks) to the scroll. Returns the new length and byte length of the scroll.

// simple call append with a new block of data
await scroll.append(Buffer.from('I am a block of data'))

// pass an array to append multiple blocks as a batch
await scroll.append([Buffer.from('batch block 1'), Buffer.from('batch block 2')])

const block = await scroll.get(index, [options])

Get a block of data. If the data is not available locally this method will prioritize and wait for the data to be downloaded.

// get block #42
const block = await scroll.get(42)

// get block #43, but only wait 5s
const blockIfFast = await scroll.get(43, { timeout: 5000 })

// get block #44, but only if we have it locally
const blockLocal = await scroll.get(44, { wait: false })

options include:

{
  wait: true, // wait for block to be downloaded
  onwait: () => {}, // hook that is called if the get is waiting for download
  timeout: 0, // wait at max some milliseconds (0 means no timeout)
  valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to the scroll's valueEncoding
  decrypt: true // automatically decrypts the block if encrypted
}

const has = await scroll.has(start, [end])

Check if the scroll has all blocks between start and end.

const updated = await scroll.update([options])

Waits for initial proof of the new scroll length until all findingPeers calls have finished.

const updated = await scroll.update()

console.log('scroll was updated?', updated, 'length is', scroll.length)

options include:

{
  wait: false
}

Use scroll.findingPeers() or { wait: true } to make await scroll.update() blocking.

const [index, relativeOffset] = await scroll.seek(byteOffset, [options])

Seek to a byte offset.

Returns [index, relativeOffset], where index is the data block the byteOffset is contained in and relativeOffset is the relative byte offset in the data block.

await scroll.append([Buffer.from('abc'), Buffer.from('d'), Buffer.from('efg')])

const first = await scroll.seek(1) // returns [0, 1]
const second = await scroll.seek(3) // returns [1, 0]
const third = await scroll.seek(5) // returns [2, 1]
options include:

{
  wait: true, // wait for data to be downloaded
  timeout: 0 // wait at max some milliseconds (0 means no timeout)
}
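The mapping seek performs can be sketched in plain JavaScript, given the byte lengths of the blocks (seekOffset and the block-length array are illustrative helpers, not part of the API):

```javascript
// Illustrative version of the seek mapping: given the byte lengths of the
// blocks in the scroll, find which block a byteOffset falls in and the
// offset relative to that block's start.
function seekOffset (blockLengths, byteOffset) {
  let index = 0
  while (index < blockLengths.length && byteOffset >= blockLengths[index]) {
    byteOffset -= blockLengths[index]
    index++
  }
  return [index, byteOffset]
}

// mirrors the 'abc', 'd', 'efg' example above
console.log(seekOffset([3, 1, 3], 1)) // [ 0, 1 ]
console.log(seekOffset([3, 1, 3], 3)) // [ 1, 0 ]
console.log(seekOffset([3, 1, 3], 5)) // [ 2, 1 ]
```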

const stream = scroll.createReadStream([options])

Make a read stream to read a range of data out at once.

// read the full scroll
const fullStream = scroll.createReadStream()

// read from block 10-15
const partialStream = scroll.createReadStream({ start: 10, end: 15 })

// pipe the stream somewhere using the .pipe method on Node.js or consume it as
// an async iterator

for await (const data of fullStream) {
  console.log('data:', data)
}

options include:

{
  start: 0,
  end: scroll.length,
  live: false,
  snapshot: true // auto set end to scroll.length on open or update it on every read
}

const bs = scroll.createByteStream([options])

Make a byte stream to read a range of bytes.

// Read the full scroll
const fullStream = scroll.createByteStream()

// Read from byte 3, and from there read 50 bytes
const partialStream = scroll.createByteStream({ byteOffset: 3, byteLength: 50 })

// Consume it as an async iterator
for await (const data of fullStream) {
  console.log('data:', data)
}

// Or pipe it somewhere like any stream:
partialStream.pipe(process.stdout)

options include:

{
  byteOffset: 0,
  byteLength: scroll.byteLength - options.byteOffset,
  prefetch: 32
}

const cleared = await scroll.clear(start, [end], [options])

Clear stored blocks between start and end, reclaiming storage when possible.

await scroll.clear(4) // clear block 4 from your local cache
await scroll.clear(0, 10) // clear block 0-10 from your local cache

The scroll will also gossip to peers it is connected to that it no longer has these blocks.

options include:

{
  diff: false // Returned `cleared` bytes object is null unless you enable this
}

await scroll.truncate(newLength, [forkId])

Truncate the scroll to a smaller length.

By default this will increment the fork id of the scroll by 1, but you can set the fork id you prefer with the forkId argument. Note that the fork id should be monotonically increasing.

await scroll.purge()

Purge the scroll from your storage, completely removing all data.

const hash = await scroll.treeHash([length])

Get the Merkle Tree hash of the scroll at a given length, defaulting to the current length of the scroll.

const range = scroll.download([range])

Download a range of data.

You can await when the range has been fully downloaded by doing:

await range.done()

A range can have the following properties:

{
  start: startIndex,
  end: nonInclusiveEndIndex,
  blocks: [index1, index2, ...],
  linear: false // download range linearly and not randomly
}

To download the full scroll continuously (often referred to as non-sparse mode) do:

// Note that this will never be considered downloaded as the range
// will keep waiting for new blocks to be appended.
scroll.download({ start: 0, end: -1 })

To download a discrete range of blocks pass a list of indices.

scroll.download({ blocks: [4, 9, 7] })

To cancel downloading a range simply destroy the range instance.

// will stop downloading now
range.destroy()

const session = await scroll.session([options])

Creates a new Scroll instance that shares the same underlying scroll.

You must close any session you make.

Options are inherited from the parent instance, unless they are re-set.

options are the same as in the constructor.

const info = await scroll.info([options])

Get information about this scroll, such as its total size in bytes.

The object will look like this:

Info {
  key: Buffer(...),
  discoveryKey: Buffer(...),
  length: 18,
  contiguousLength: 16,
  byteLength: 742,
  fork: 0,
  padding: 8,
  storage: {
    oplog: 8192, 
    tree: 4096, 
    blocks: 4096, 
    bitfield: 4096 
  }
}

options include:

{
  storage: false // get storage estimates in bytes, disabled by default
}

await scroll.close()

Fully close this scroll.

scroll.on('close')

Emitted when the scroll has been fully closed.

await scroll.ready()

Wait for the scroll to fully open.

After this has been awaited, scroll.length and other properties will have been set.

In general you do NOT need to wait for ready, unless checking a synchronous property, as all internals await this themselves.

scroll.on('ready')

Emitted after the scroll has initially opened all its internal state.

scroll.writable

Can we append to this scroll?

Populated after ready has been emitted. Will be false before the event.

scroll.readable

Can we read from this scroll? After closing the scroll this will be false.

Populated after ready has been emitted. Will be false before the event.

scroll.id

String containing the id (z-base-32 of the public key) identifying this scroll.

Populated after ready has been emitted. Will be null before the event.

scroll.key

Buffer containing the public key identifying this scroll.

Populated after ready has been emitted. Will be null before the event.

scroll.keyPair

Object containing buffers of the scroll's public and secret key.

Populated after ready has been emitted. Will be null before the event.

scroll.discoveryKey

Buffer containing a key derived from the scroll's public key. In contrast to scroll.key this key does not allow you to verify the data but can be used to announce or look for peers that are sharing the same scroll, without leaking the scroll key.

Populated after ready has been emitted. Will be null before the event.

scroll.encryptionKey

Buffer containing the optional block encryption key of this scroll. Will be null unless block encryption is enabled.

scroll.length

How many blocks of data are available on this scroll? If sparse: false, this will equal scroll.contiguousLength.

Populated after ready has been emitted. Will be 0 before the event.

scroll.contiguousLength

How many blocks are contiguously available starting from the first block of this scroll?

Populated after ready has been emitted. Will be 0 before the event.

scroll.fork

What is the current fork id of this scroll?

Populated after ready has been emitted. Will be 0 before the event.

scroll.padding

How much padding is applied to each block of this scroll? Will be 0 unless block encryption is enabled.

const stream = scroll.replicate(isInitiatorOrReplicationStream)

Create a replication stream. You should pipe this to another Scroll instance.

The isInitiator argument is a boolean indicating whether you are the initiator of the connection (i.e. the client) or the passive peer (i.e. the server).

If you are using a P2P flock like Flock you can know this by checking if the flock connection is a client socket or server socket. In Flock you can check that using the client property on the peer details object.

If you want to multiplex the replication over an existing Scroll replication stream you can pass another stream instance instead of the isInitiator boolean.

// assuming we have two scrolls, localScroll + remoteScroll, sharing the same key
// on a server
const net = require('net')
const server = net.createServer(function (socket) {
  socket.pipe(remoteScroll.replicate(false)).pipe(socket)
})

// on a client
const socket = net.connect(...)
socket.pipe(localScroll.replicate(true)).pipe(socket)

const done = scroll.findingPeers()

Create a hook that tells Scroll you are finding peers for this scroll in the background. Call done when your current discovery iteration is done. If you're using Flock, you'd normally call this after a flock.flush() finishes.

This allows scroll.update to wait for either the findingPeers hook to finish or one peer to appear before deciding whether it should wait for a merkle tree update before returning.

scroll.on('append')

Emitted when the scroll has been appended to (i.e. has a new length / byteLength), either locally or remotely.

scroll.on('truncate', ancestors, forkId)

Emitted when the scroll has been truncated, either locally or remotely.

scroll.on('peer-add')

Emitted when a new connection has been established with a peer.

scroll.on('peer-remove')

Emitted when a peer's connection has been closed.