
@ipld/unixfs

An implementation of the UnixFS spec in JavaScript designed for use with multiformats.

Overview

This library provides functionality similar to ipfs-unixfs-importer, but it has been designed around a different set of use cases:

  1. Writing into Content Addressable Archives (CAR).

    To allow encoding file(s) into an arbitrary number of CARs, the library makes no assumptions about how blocks will be consumed: it returns a ReadableStream of blocks and leaves the rest up to the caller (see the sketch after this list).

  2. Incremental and resumable writes

    Instead of passing a stream of files, the user creates a file, writes into it, and on close receives a Promise<CID> for it. This removes the need to map files back to the CIDs streamed out on the other end.

  3. Complete control of memory and concurrency

    With the writer-style API, users can choose how many files to write concurrently and change that decision based on the other tasks the application performs. Users can also specify the buffer size to tweak read/write coordination.

  4. No indirect configuration

    The library removes indirection by taking an approach similar to the multiformats library: instead of passing chunker and layout configuration options, you pass chunker / layout / encoder interface implementations.
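
To make points 1–3 concrete, here is a small sketch. It is not taken from the package docs, and the helper name writeMany is hypothetical; it writes several files concurrently through one block writer and drains the resulting block stream itself.

import * as UnixFS from "@ipld/unixfs"

// Hypothetical helper (not part of the library): several files are written
// concurrently, each close() resolves with that file's link (CID), and the
// caller decides what to do with the ReadableStream of blocks.
export const writeMany = async (sources /* Uint8Array[] */) => {
  const { readable, writable } = new TransformStream(
    {},
    UnixFS.withCapacity(1048576 * 32)
  )
  const writer = UnixFS.createWriter({ writable })

  // Consume blocks on the readable end; here we simply collect them, but
  // they could just as well be piped into one or more CAR files.
  const blocks = []
  const reader = readable.getReader()
  const consumed = (async () => {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      blocks.push(value)
    }
  })()

  // Write all files concurrently; each close() resolves with its own link.
  const links = await Promise.all(
    sources.map(async bytes => {
      const file = UnixFS.createFileWriter(writer)
      file.write(bytes)
      return await file.close()
    })
  )

  await writer.close()
  await consumed
  return { links, blocks }
}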

Usage

You can encode a file as follows:

import * as UnixFS from "@ipld/unixfs"

// Create readable & writable streams with an internal queue that can
// hold around 32 blocks
const { readable, writable } = new TransformStream(
  {},
  UnixFS.withCapacity(1048576 * 32)
)
// Next we create a writer with a filesystem-like API for encoding files and
// directories into IPLD blocks that will come out on the `readable` end.
const writer = UnixFS.createWriter({ writable })

// Create a file writer that can be used to encode a UnixFS file.
const file = UnixFS.createFileWriter(writer)
// write some content
file.write(new TextEncoder().encode("hello world"))
// Finalize file by closing it.
const { cid } = await file.close()

// close the writer to close the underlying block stream.
writer.close()

// We could encode all of this as a CAR file (see the `encodeCAR` sketch below)
encodeCAR({ roots: [cid], blocks: readable })
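
The encodeCAR helper above is not provided by this package. A minimal sketch of what it might look like, assuming the @ipld/car package is available, could be:

import { CarWriter } from "@ipld/car"

// Hypothetical implementation of the `encodeCAR` helper used above: it puts
// every block emitted by the UnixFS writer into a CAR rooted at `roots` and
// returns the CAR bytes as an async iterable.
const encodeCAR = ({ roots, blocks }) => {
  const { writer, out } = CarWriter.create(roots)
  const reader = blocks.getReader()
  ;(async () => {
    while (true) {
      const { done, value: block } = await reader.read()
      if (done) break
      await writer.put(block)
    }
    await writer.close()
  })()
  return out // AsyncIterable<Uint8Array> of CAR bytes
}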

You can encode (non-sharded) directories with the provided API as well:

import * as UnixFS from "@ipld/unixfs"

export const demo = async () => {
  const { readable, writable } = new TransformStream()
  const writer = UnixFS.createWriter({ writable })

  // write a file
  const file = UnixFS.createFileWriter(writer)
  file.write(new TextEncoder().encode("hello world"))
  const fileLink = await file.close()

  // create a directory and add the file we encoded above
  const dir = UnixFS.createDirectoryWriter(writer)
  dir.set("intro.md", fileLink)
  const dirLink = await dir.close()

  // now wrap the above directory with another one and also add the same
  // file there
  const root = UnixFS.createDirectoryWriter(writer)
  root.set("user", dirLink)
  root.set("hello.md", fileLink)

  // Creates the following UnixFS structure, where intro.md and hello.md link
  // to the same IPFS file.
  // ./
  // ./user/intro.md
  // ./hello.md
  const rootLink = await root.close()
  // ...
  writer.close()
}

Configuration

You can configure the DAG layout, chunking, and a number of other things by providing API-compatible components. The library provides several of them, but you can also bring your own.

import * as UnixFS from "@ipld/unixfs"
import * as Rabin from "@ipld/unixfs/file/chunker/rabin"
import * as Trickle from "@ipld/unixfs/file/layout/trickle"
import * as RawLeaf from "multiformats/codecs/raw"
import { sha256 } from "multiformats/hashes/sha2"

const demo = async blob => {
  const { readable, writable } = new TransformStream()
  const writer = UnixFS.createWriter({
    writable,
    // you only need to pass the settings you want to override
    settings: {
      fileChunker: await Rabin.create({
        avg: 60000,
        min: 100,
        max: 662144,
      }),
      fileLayout: Trickle.configure({ maxDirectLeaves: 100 }),
      // Encode leaf nodes as raw blocks
      fileChunkEncoder: RawLeaf,
      smallFileEncoder: RawLeaf,
      fileEncoder: UnixFS,
      hasher: sha256,
    },
  })

  const file = UnixFS.createFileWriter(writer)
  // ...
}
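
For completeness, here is one way (an assumption, not shown in the package docs; the helper name writeBlob is hypothetical) the blob argument could be streamed into the configured file writer:

// Hypothetical continuation of the demo above: stream the blob's bytes into
// the file writer chunk by chunk and resolve with the resulting file link.
const writeBlob = async (writer, blob) => {
  const file = UnixFS.createFileWriter(writer)
  const reader = blob.stream().getReader()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    await file.write(value)
  }
  return await file.close() // resolves with the file link, including its cid
}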

License

Licensed under either of

  • Apache 2.0 (LICENSE-APACHE / http://www.apache.org/licenses/LICENSE-2.0)
  • MIT (LICENSE-MIT / http://opensource.org/licenses/MIT)

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.