
abstract-tus-store

v1.5.1

[![Build Status](https://travis-ci.org/blockai/abstract-tus-store.svg?branch=master)](https://travis-ci.org/blockai/abstract-tus-store)

Black-box test suite and interface specification for Tus-like stores. Inspired by abstract-blob-store.

Tus stores implement an API for creating and writing sequentially to "upload resources".

The required interface consists of 4 functions:

  • create(key[, opts]) creates a new upload resource
  • info(uploadId) returns the current offset, upload length, and metadata of an upload resource
  • append(uploadId, readStream, [offset,] [opts]) to append to an upload resource
  • createReadStream(key) creates a readable stream for the key (primarily used to test implementations)

Optional interface:

  • createPartial([opts]) to create a new "partial" upload resource
  • concat(key, uploadIds, [opts]) concatenate "partial" upload resources to key
  • del(uploadId) delete an upload resource to free up resources
  • minChunkSize optional property that announces the minimum number of bytes that must be written in an append call (except for the last one)
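Put together, a toy in-memory implementation of the required interface might look like this. This is an illustrative sketch only — names like createSketchStore, uploads, and objects are made up here, and the bundled mem-store is the actual reference implementation:

```javascript
import { Readable } from 'stream'

const createSketchStore = () => {
  const uploads = new Map() // uploadId -> { key, offset, uploadLength, metadata, chunks }
  const objects = new Map() // key -> Buffer, written once an upload completes
  let nextId = 0

  return {
    async create(key, { uploadLength, metadata = {} } = {}) {
      // Sequential IDs keep the sketch short; real stores must mint
      // unique, unpredictable IDs (see the terminology section below).
      const uploadId = `upload-${nextId++}`
      uploads.set(uploadId, { key, offset: 0, uploadLength, metadata, chunks: [] })
      return { uploadId }
    },

    async info(uploadId) {
      const u = uploads.get(uploadId)
      if (!u) throw new Error('UploadNotFound') // stand-in for errors.UploadNotFound
      return { offset: u.offset, uploadLength: u.uploadLength, metadata: u.metadata }
    },

    async append(uploadId, readStream) {
      const u = uploads.get(uploadId)
      if (!u) throw new Error('UploadNotFound')
      for await (const chunk of readStream) {
        const buf = Buffer.from(chunk)
        u.chunks.push(buf)
        u.offset += buf.length
      }
      // On completion, materialize the content at the linked key.
      if (u.offset === u.uploadLength) {
        objects.set(u.key, Buffer.concat(u.chunks))
      }
      return { offset: u.offset }
    },

    createReadStream(key) {
      return Readable.from([objects.get(key) ?? Buffer.alloc(0)])
    },
  }
}

// Walk through a complete upload.
const store = createSketchStore()
const { uploadId } = await store.create('greeting.txt', { uploadLength: 5 })
await store.append(uploadId, Readable.from(['hello']))
const { offset } = await store.info(uploadId)
```

The sketch skips the offset check, deferred lengths, and error classes that a real implementation needs; the API section below spells those out.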

Some modules that use this

Send a PR adding yours if you write a new one.

Badge

Include this badge in your readme if you make a new module that uses the abstract-tus-store API:

tus-store-compatible

Install

npm install --save-dev abstract-tus-store

Requires Node v6+

Usage

See ./test directory for usage example.

In your tests:

import testStore from 'abstract-tus-store'
import memStore from 'abstract-tus-store/mem-store'

testStore({
  setup() {
    // Return your store instance (promise supported)
    return memStore()
  },

  teardown() {
    // You can use this hook to clean up, e.g. remove
    // a temp directory
  },
})

API

This documentation is mainly targeted at implementers.

General notes:

  • All functions must return promises

  • Error classes are available as keys on an errors object exported by abstract-tus-store, e.g.

    import { errors } from 'abstract-tus-store'
    new errors.UnknownKey('some unknown key')

Some terminology and definitions:

  • A key represents the final destination (in the store) of an upload resource's content and metadata (once the upload is complete).
    • Keys should be used "as is" to ensure interoperability with other libraries and external systems. For example, an AWS S3 store should use the key directly as the S3 key.
  • An upload resource
    • represents an upload in progress;
    • is linked to exactly one key, or to none in the case of partial uploads;
  • A deferred upload is an upload resource which hasn't been assigned an upload length yet.
  • A partial upload is an upload with no linked key.
  • An upload length is the total expected size of an upload resource (e.g. the size of a file to upload).
  • An offset represents how many bytes have been written to an upload resource (so far).
  • An upload resource is complete when its offset reaches its upload length.
  • An upload ID uniquely identifies an upload resource
    • Upload IDs should be unique and unpredictable so that library users can send/receive them directly to/from untrusted systems.
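For example, an implementation might mint IDs from a cryptographically random source. This is a sketch using Node's crypto module; the spec only mandates uniqueness and unpredictability, not this particular scheme:

```javascript
import { randomBytes } from 'crypto'

// 16 random bytes -> 32 hex characters; effectively collision-free and
// infeasible to guess, which satisfies both requirements above.
const newUploadId = () => randomBytes(16).toString('hex')

const id = newUploadId()
```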

create(key[, opts])

Create a new upload resource that will be stored at key once completed.

  • key: String required location of the content and metadata of the completed upload resource
  • opts.uploadLength: Number expected size of the upload in bytes
  • opts.metadata: Object arbitrary map of key/values

Must resolve to an object with the following properties:

  • uploadId: String required unique, unpredictable identifier for the created upload resource

If opts.uploadLength is not supplied, it must be passed to a subsequent call to append or else the upload will never complete.

Calls to create should always return a new and unique upload ID.

info(uploadId)

Get the offset and uploadLength of an upload resource.

  • uploadId: String required a known upload ID

Must resolve to an object with the following properties:

  • offset: Number required offset of the upload resource. Must be present even if the offset is 0 or the upload is already complete.
  • uploadLength: Number (required if known) upload length of the upload resource.
  • metadata: Object metadata that was set at creation.
  • isPartial: Boolean true if partial upload

Must throw an UploadNotFound error if the upload resource does not exist. Must not attempt to create the upload resource if it does not already exist.

append(uploadId, data, [offset,] [opts])

Append data to an upload resource.

  • uploadId: String required a known Upload ID
  • data: Readable Stream required Data that will be appended to the upload resource.
  • offset: Number Optional offset to help prevent data corruption.
  • opts.beforeComplete: Function (async) function that will be called as beforeComplete(upload, uploadId) before completing the upload.
  • opts.uploadLength: Number Used to set the length of a deferred upload.

Resolves to an object with the following properties:

  • offset: Number required the new offset of the upload
  • upload: Object (required if the append causes the upload to complete) the upload object as returned by info(uploadId)

Data must be read and written to the upload resource until the data stream ends or the upload completes (offset === uploadLength).

The optional offset parameter can be used to prevent data corruption. Say you want to resume uploading a file: you get the current offset of the upload with a call to info, then call append with a read stream that starts reading the file at that offset. If the upload's offset has changed between your calls to info and append, append will throw an OffsetMismatch error.
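The guard itself is simple to picture. In this sketch, makeUpload and the plain Error are made-up stand-ins (a real store would throw errors.OffsetMismatch and write to its backing storage):

```javascript
const makeUpload = () => ({ offset: 0, chunks: [] })

const append = async (upload, chunk, offset) => {
  // Refuse to write if the caller's view of the offset is stale.
  if (offset !== undefined && offset !== upload.offset) {
    throw new Error('OffsetMismatch')
  }
  upload.chunks.push(chunk)
  upload.offset += chunk.length
  return { offset: upload.offset }
}

const upload = makeUpload()
await append(upload, Buffer.from('hello'), 0) // offsets agree: write succeeds
```

A second append that reuses the now-stale offset 0 is rejected before any bytes are written.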

If the call to append causes the upload resource to complete, append must not resolve until the upload resource's content and metadata become available on its linked key (except for partial uploads which do not have a key). If an object already exists at said key, it must be overwritten.

Must throw an OffsetMismatch error if the supplied offset is incorrect.

Must throw an UploadNotFound error if the upload doesn't exist.

createReadStream(key[, onInfo])

Creates a readable stream to read the key's content from the backing store. This is mostly used for testing implementations.

  • key: String required key to read from the store
  • onInfo: Function optional callback that will be called with an { contentLength, metadata } object
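A sketch of the onInfo contract: the callback fires with a { contentLength, metadata } object once that information is available, independently of the content stream. Here `stored` is a made-up stand-in for an object in the backing store, and the key argument is ignored for brevity:

```javascript
import { Readable } from 'stream'

const stored = { content: Buffer.from('hello'), metadata: { filename: 'hello.txt' } }

const createReadStream = (key, onInfo) => {
  // Report length and metadata before handing back the content stream.
  if (onInfo) onInfo({ contentLength: stored.content.length, metadata: stored.metadata })
  return Readable.from([stored.content])
}

let seen
const rs = createReadStream('hello.txt', (info) => { seen = info })
```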

Optional APIs

TODO...

  • createPartial(opts) to create a new "partial" upload resource
  • concat(key, uploadIds, [opts]) concatenate "partial" upload resources to key
  • del(uploadId) delete an upload resource to free up resources
  • minChunkSize optional property that announces the minimum number of bytes that must be written in an append call (except for the last one)