dwebx
A distributed-web factory that stores and replicates linked cores.
Note: These APIs are still in progress and are subject to change prior to the Hyperdrive v10 release.
This module is the canonical implementation of the "dwebx" interface, which exposes a Hypercore factory and a set of associated functions for managing generated Hypercores.
A dwebx is designed to efficiently store and replicate multiple sets of interlinked Hypercores, such as those used by Hyperdrive and mountable-dwtrie, removing the responsibility of managing custom storage/replication code from these higher-level modules.
In order to do this, dwebx provides:
- Key derivation - all writable Hypercore keys are derived from a single master key.
- A dependency graph - you can specify parent/child relationships between Hypercores (a Hyperdrive's content feed is a child of its metadata feed). The dependency graph is used to find the minimal number of Hypercores that should be replicated over a given stream. The graph can also be replicated (it is a dwtrie).
- Caching - Two separate caches are used for passively replicating cores (those requested by peers) and active cores (those requested by the owner of the dwebx).
- Storage bootstrapping - You can create a default Hypercore that will be loaded when no key is specified, which is useful when you don't want to have to supply a key to reload a previously-created Hypercore.
- Namespacing - If you want to create multiple compound data structures backed by a single dwebx, you can create namespaced corestores such that each data structure's default feed is separate.
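The dependency graph and storage bootstrapping described above can be combined. As an illustration only (variable names are made up), the sketch below models a Hyperdrive-style metadata/content pair, where the content core lists the metadata core as its parent:
const Corestore = require('dwebx')
const ram = require('random-access-memory')
const store = new Corestore(ram)
// The metadata feed is the bootstrapping (default) core.
const metadata = store.default()
metadata.ready(() => {
  // The content feed declares the metadata feed as its parent, so replicating
  // from the metadata feed's discovery key also pulls in the content feed.
  const content = store.get({ parents: [metadata.key] })
})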
Installation
npm i dwebx --save
Usage
A dwebx instance can be constructed with a random-access-storage module, a function that returns a random-access-storage module given a path, or a string. If a string is specified, it will be assumed to be a path to a local storage directory:
const Corestore = require('dwebx')
const ram = require('random-access-memory')
const raf = require('random-access-file')
const store1 = new Corestore(ram) // in-memory storage
const store2 = new Corestore(path => raf('store2/' + path)) // storage function mapping paths to random-access-file instances
const store3 = new Corestore('my-storage-dir') // path to a local storage directory
Hypercores can be generated with both the get and default methods. If the first writable core is created with default, it will be used for storage bootstrapping: this bootstrapping core can always be reloaded from disk without its public key having to be stored externally. Keys for other hypercores should either be stored externally or referenced from within the default core:
const core1 = store1.default()
Note: You do not have to create a default feed before creating additional ones unless you'd like to bootstrap your dwebx from disk the next time it's instantiated.
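To illustrate the bootstrapping behavior, the sketch below (the directory name is made up, and it assumes keys are available once the cores are ready) creates a default core, closes the store, and then reloads the same core from disk without supplying a key:
const Corestore = require('dwebx')
const original = new Corestore('./bootstrap-demo')
const first = original.default()
first.ready(() => {
  const firstKey = first.key
  original.close(() => {
    // Reopening the same directory loads the same default core, no key required.
    const reopened = new Corestore('./bootstrap-demo')
    const second = reopened.default()
    second.ready(() => {
      console.log(second.key.equals(firstKey)) // expected: true
    })
  })
})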
Additional hypercores can be created by key using the get method. In most scenarios, these additional keys can be extracted from the default (bootstrapping) core. If that's not the case, keys will have to be stored externally:
const core2 = store1.get({ key: Buffer(...) })
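When keys are referenced from within the default core rather than stored externally, one possible (purely illustrative) convention is to append them to the bootstrapping core as they are created:
core2.ready(() => {
  // Record core2's key in the bootstrapping core so a later session can read it
  // back and call store1.get({ key: ... }) without any external bookkeeping.
  core1.append(core2.key)
})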
All hypercores are indexed by their discovery keys, so that they can be dynamically injected into replication streams when requested.
Two corestores can be replicated with the replicate function, which accepts ddatabase's replicate options as well as an optional starting node in the dependency graph (a discovery key). When a starting node is specified, only that node and its children will be replicated into the stream:
const store4 = new Corestore(ram)
const core3 = store4.get(core1.key)
const core4 = store4.get({ parents: [core1.key] })
const stream = store4.replicate(true, core3.discoveryKey, { live: true }) // This will replicate core3 and core4.
stream.pipe(store1.replicate(false, { live: true })).pipe(stream) // This will replicate all common cores.
API
const store = dwebx(storage, [opts])
Create a new dwebx instance. storage can be either a random-access-storage module, a function that takes a path and returns a random-access-storage instance, or a string path to a local storage directory.
opts is an optional object which can contain the following:
{
cacheSize: 1000 // The size of the LRU cache for passively-replicating cores.
}
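For example, a store with a larger passive-replication cache could be created like this (ram comes from random-access-memory, and 5000 is an arbitrary value):
const dwebx = require('dwebx')
const ram = require('random-access-memory')
const store = dwebx(ram, { cacheSize: 5000 }) // cache up to 5000 passively-replicated cores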
store.default(opts)
Create a new default ddatabase, which is used for bootstrapping the creation of subsequent hypercores. Options match those in get.
store.get(opts)
Create a new ddatabase. Options can be one of the following:
{
key: 0x1232..., // A Buffer representing a ddatabase key
discoveryKey: 0x1232..., // A Buffer representing a ddatabase discovery key (must have been previously created by key)
parents: [ 0x1234, 0xabba, ...], // A list of ddatabase keys specifying the core's parent dependencies.
...opts // All other options accepted by the ddatabase constructor
}
If opts is a Buffer, it will be interpreted as a ddatabase key.
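For instance, the following two calls load the same core (someKey stands in for a 32-byte key Buffer):
const byOptions = store.get({ key: someKey })
const byBuffer = store.get(someKey) // a bare Buffer is treated as the key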
store.on('feed', (feed, options) => ...)
Emitted every time a feed is loaded internally (i.e. the first time get(key) is called). options will be the full options map passed to get.
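A typical listener, following the (feed, options) ordering described above, might log each core as it is first loaded:
store.on('feed', (feed, options) => {
  // discoveryKey is logged once the feed is ready.
  feed.ready(() => console.log('loaded core', feed.discoveryKey.toString('hex')))
})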
store.replicate(isInitiator, [discoveryKey], [opts])
Create a replication stream for either all managed hypercores or, when a discoveryKey is specified, only that core and its children. The replication options are passed directly to Hypercore.
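When no discoveryKey is given, the stream covers everything both sides manage. As a sketch (storeA and storeB are assumed to be two existing dwebx instances):
const a = storeA.replicate(true, { live: true }) // initiator side, no starting node
const b = storeB.replicate(false, { live: true })
a.pipe(b).pipe(a) // replicate every core the two stores have in common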
store.list()
Returns a Map of all cores currently cached in memory. For each core in memory, the map will contain the following entries:
{
discoveryKey => core,
...
}
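For example, to inspect what is currently cached (whether the map keys are Buffers or hex strings is not specified here, so they are logged as-is):
for (const [discoveryKey, core] of store.list()) {
  console.log('cached core:', discoveryKey, 'length:', core.length)
}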
store.close(cb)
Close all hypercores previously generated by the dwebx.
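For example, assuming the callback follows the usual error-first convention:
store.close(err => {
  if (err) throw err
  console.log('all cores closed')
})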
License
MIT