
@ddn/ddndrive

v1.0.0

ddndrive is based on Hyperdrive, which is a secure, real-time distributed file system.

Downloads: 4

Readme

Ddndrive

Ddndrive is a secure, real-time distributed file system.

npm install @ddn/ddndrive

Usage

Ddndrive aims to implement the same API as Node.js' core fs module.

var Ddndrive = require('@ddn/ddndrive')
var archive = Ddndrive('./my-first-Ddndrive') // content will be stored in this folder

archive.writeFile('/hello.txt', 'world', function (err) {
  if (err) throw err
  archive.readdir('/', function (err, list) {
    if (err) throw err
    console.log(list) // prints ['hello.txt']
    archive.readFile('/hello.txt', 'utf-8', function (err, data) {
      if (err) throw err
      console.log(data) // prints 'world'
    })
  })
})

A big difference is that you can replicate the file system to other computers! All you need is a stream.

var net = require('net')

// ... on one machine

var server = net.createServer(function (socket) {
  socket.pipe(archive.replicate()).pipe(socket)
})

server.listen(10000)

// ... on another

var clonedArchive = Ddndrive('./my-cloned-Ddndrive', origKey) // origKey is the original archive's public key (archive.key)
var socket = net.connect(10000)

socket.pipe(clonedArchive.replicate()).pipe(socket)

It also comes with built-in versioning and real-time replication. See more below.

API

var archive = Ddndrive(storage, [key], [options])

Create a new Ddndrive.

The storage parameter defines how the contents of the archive will be stored. It can be one of the following, depending on how much control you require over how the archive is stored.

  • If you pass in a string, the archive content will be stored in a folder at the given path.

  • You can also pass in a function. This function will be called with the name of each of the required files for the archive, and needs to return a random-access-storage instance.

  • If you require complete control, you can also pass in an object containing a metadata and a content field. Both of these need to be functions, and are called with the following arguments:

    • name: the name of the file to be stored
    • opts
      • key: the feed key of the underlying Hypercore instance
      • discoveryKey: the discovery key of the underlying Hypercore instance
    • archive: the current Ddndrive instance

The functions need to return a random-access-storage instance.

Options include:

{
  sparse: true, // only download data on content feed when it is specifically requested
  sparseMetadata: true, // only download data on metadata feed when requested
  metadataStorageCacheSize: 65536, // how many entries to use in the metadata hypercore's LRU cache
  contentStorageCacheSize: 65536, // how many entries to use in the content hypercore's LRU cache
  treeCacheSize: 65536 // how many entries to use in the append-tree's LRU cache
}

Note that a cloned Ddndrive archive can be "sparse". Usually (by setting sparse: true) this means that the content is not downloaded until you ask for it, but the entire metadata feed is still downloaded. If you want a very sparse archive, where even the metadata feed is not downloaded until you request it, then you should also set sparseMetadata: true.
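For illustration, here is a minimal sketch of the string and function storage forms described above. It assumes the random-access-memory module is installed to act as an in-memory random-access-storage backend; that dependency is not part of this package.

var ram = require('random-access-memory') // assumed extra dependency providing in-memory random-access-storage

// string form: archive contents are stored in a folder at the given path
var onDisk = Ddndrive('./my-Ddndrive')

// function form: called with the name of each file the archive needs,
// and must return a random-access-storage instance
var inMemory = Ddndrive(function (name) {
  return ram()
})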

var stream = archive.replicate([options])

Replicate this archive. Options include:

{
  live: false, // keep replicating
  download: true, // download data from peers?
  upload: true // upload data to peers?
}
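As a hedged sketch (network transport omitted), two archives that share the same key can be replicated by piping their replication streams into each other:

// replicate into a second archive created from the same key
var clone = Ddndrive('./my-replica', archive.key)

var stream = archive.replicate({ live: true }) // keep replicating as new data arrives
stream.pipe(clone.replicate({ live: true })).pipe(stream)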

archive.version

Get the current version of the archive (incrementing number).

archive.key

The public key identifying the archive.

archive.discoveryKey

A key derived from the public key that can be used to discover other peers sharing this archive.

archive.writable

A boolean indicating whether the archive is writable.

archive.on('ready')

Emitted when the archive is fully ready and all properties have been populated.
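For example, properties like archive.key should only be read once ready has fired. A small sketch (treating archive.key as a buffer, which is an assumption here):

archive.on('ready', function () {
  // archive.key, archive.discoveryKey and archive.writable are populated here
  console.log('archive key:', archive.key.toString('hex'))
})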

archive.on('error', err)

Emitted if a critical error occurs during load.

var oldDrive = archive.checkout(version, [opts])

Check out a read-only copy of the archive at an old version. Options are used to configure the oldDrive:

{
  metadataStorageCacheSize: 65536, // how many entries to use in the metadata hypercore's LRU cache
  contentStorageCacheSize: 65536, // how many entries to use in the content hypercore's LRU cache
  treeCacheSize: 65536 // how many entries to use in the append-tree's LRU cache
}
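A small sketch of reading a file from an earlier version (the version number and file name are assumptions for illustration):

// read /hello.txt as it existed at version 2 (assumes that version exists)
var oldDrive = archive.checkout(2)

oldDrive.readFile('/hello.txt', 'utf-8', function (err, data) {
  if (err) throw err
  console.log(data) // the file's contents at version 2
})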

archive.download([path], [callback])

Download all files under path at the current version. If no path is specified, this will download all files.

You can use this with .checkout(version) to download a specific version of the archive.

archive.checkout(version).download()

var stream = archive.history([options])

Get a stream of all changes and their versions from this archive.
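The result is a standard Node.js object stream; a minimal sketch just logs each change (the exact shape of the entries is not documented above, so no fields are assumed):

archive.history()
  .on('data', function (change) {
    // each entry describes a single change and the version it was made at
    console.log(change)
  })
  .on('error', function (err) {
    throw err
  })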

var stream = archive.createReadStream(name, [options])

Read a file out as a stream. Similar to fs.createReadStream.

Options include:

{
  start: optionalByteOffset, // similar to fs
  end: optionalInclusiveByteEndOffset, // similar to fs
  length: optionalByteLength
}
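For example, streaming a byte range of a file to stdout (a sketch assuming /hello.txt exists in the archive):

// stream bytes 1 through 3 of /hello.txt to stdout
archive.createReadStream('/hello.txt', { start: 1, end: 3 })
  .on('error', function (err) { throw err })
  .pipe(process.stdout)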

archive.readFile(name, [options], callback)

Read an entire file into memory. Similar to fs.readFile.

Options can either be an object or a string.

Options include:

{
  encoding: string,
  cached: true|false // default: false
}

Alternatively, a string can be passed as options to simply set the encoding, similar to fs.

If cached is set to true, this function returns results only if they have already been downloaded.
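A short sketch of both option forms:

// string form: just set the encoding, like fs.readFile
archive.readFile('/hello.txt', 'utf-8', function (err, text) {
  if (err) throw err
  console.log(text)
})

// object form: only return the file if it has already been downloaded
archive.readFile('/hello.txt', { encoding: 'utf-8', cached: true }, function (err, text) {
  if (err) return console.log('/hello.txt has not been downloaded yet')
  console.log(text)
})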

var stream = archive.createDiffStream(version, [options])

Diff this archive against another version. version can be either a version number or a checkout instance of the archive. The data objects look like this:

{
  type: 'put' | 'del',
  name: '/some/path/name.txt',
  value: {
    // the stat object
  }
}
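A brief sketch of consuming the diff stream (the version number is an assumption for illustration):

// list everything that changed since version 2
archive.createDiffStream(2)
  .on('data', function (entry) {
    console.log(entry.type, entry.name) // e.g. put /hello.txt
  })
  .on('error', function (err) {
    throw err
  })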

var stream = archive.createWriteStream(name, [options])

Write a file as a stream. Similar to fs.createWriteStream.
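A small sketch of writing a file through a stream (the file name is just an example):

var ws = archive.createWriteStream('/notes.txt')

ws.write('first line\n')
ws.write('second line\n')
ws.end(function () {
  console.log('finished writing /notes.txt')
})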

archive.writeFile(name, buffer, [options], [callback])

Write a file from a single buffer. Similar to fs.writeFile.

archive.unlink(name, [callback])

Unlinks (deletes) a file. Similar to fs.unlink.

archive.mkdir(name, [options], [callback])

Explicitly create a directory. Similar to fs.mkdir.

archive.rmdir(name, [callback])

Delete an empty directory. Similar to fs.rmdir.

archive.readdir(name, [options], [callback])

Lists a directory. Similar to fs.readdir.

Options include:

{
  cached: true|false // default: false
}

If cached is set to true, this function returns results from the local version of the archive’s append-tree. The default behavior is to fetch the latest remote version of the archive before returning the list of entries.
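For instance, to list a directory without waiting for the latest remote version (a sketch):

// list '/' using only what is already stored locally
archive.readdir('/', { cached: true }, function (err, names) {
  if (err) throw err
  console.log(names) // e.g. ['hello.txt']
})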

archive.stat(name, [options], callback)

Stat an entry. Similar to fs.stat. Sample output:

Stat {
  dev: 0,
  nlink: 1,
  rdev: 0,
  blksize: 0,
  ino: 0,
  mode: 16877,
  uid: 0,
  gid: 0,
  size: 0,
  offset: 0,
  blocks: 0,
  atime: 2017-04-10T18:59:00.147Z,
  mtime: 2017-04-10T18:59:00.147Z,
  ctime: 2017-04-10T18:59:00.147Z,
  linkname: undefined }

The stat object passed to the callback includes methods similar to fs.Stats:

archive.stat('/hello.txt', function (err, stat) {
  if (err) throw err
  console.log(stat.isDirectory())
  console.log(stat.isFile())
})

Options include:

{
  cached: true|false, // default: false
  wait: true|false // default: true
}

If cached is set to true, this function returns results only if they have already been downloaded.

If wait is set to true, this function will wait for data to be downloaded. If false, it will return an error instead of waiting.

archive.lstat(name, [options], callback)

Stat an entry but do not follow symlinks. Similar to fs.lstat.

Options include:

{
  cached: true|false, // default: false
  wait: true|false // default: true
}

If cached is set to true, this function returns results only if they have already been downloaded.

If wait is set to true, this function will wait for data to be downloaded. If false, it will return an error instead of waiting.

archive.access(name, [options], callback)

Similar to fs.access.

Options include:

{
  cached: true|false, // default: false
  wait: true|false // default: true
}

If cached is set to true, this function returns results only if they have already been downloaded.

If wait is set to true, this function will wait for data to be downloaded. If false, it will return an error instead of waiting.
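As a hedged sketch, this can be combined with wait: false to test whether an entry is available locally:

// check whether /hello.txt can be read without triggering a download
archive.access('/hello.txt', { wait: false }, function (err) {
  if (err) return console.log('/hello.txt is not available locally yet')
  console.log('/hello.txt is accessible')
})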

archive.open(name, flags, [mode], callback)

Open a file and get a file descriptor back. Similar to fs.open.

Note that currently only read mode is supported in this API.

archive.read(fd, buf, offset, len, position, callback)

Read from a file descriptor into a buffer. Similar to fs.read.

archive.close(fd, [callback])

Close a file. Similar to fs.close.
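A sketch of the descriptor workflow across open, read and close, assuming the read callback mirrors fs.read and reports the number of bytes read:

// open /hello.txt, read its first 5 bytes, then close the descriptor
archive.open('/hello.txt', 'r', function (err, fd) {
  if (err) throw err
  var buf = Buffer.alloc(5)
  archive.read(fd, buf, 0, 5, 0, function (err, bytesRead) {
    if (err) throw err
    console.log(buf.slice(0, bytesRead).toString()) // prints 'world' for the usage example above
    archive.close(fd, function (err) {
      if (err) throw err
    })
  })
})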

archive.close([callback])

Closes all open resources used by the archive. The archive should no longer be used after calling this.