
lz4 v0.6.5

LZ4 streaming compression and decompression

Downloads: 375,920

Readme

LZ4

LZ4 is a very fast compression and decompression algorithm. This Node.js module provides a JavaScript implementation of the decoder as well as native bindings to the LZ4 C functions. Node.js streams are also supported for both compression and decompression.

NB: version 0.2 only supports the format described in "LZ4 Streaming Format 1.4", not the legacy format. Use version 0.1 if legacy support is required.

Build

With Node.js:

git clone https://github.com/pierrec/node-lz4.git
cd node-lz4
git submodule update --init --recursive
npm install

Install

With Node.js:

npm install lz4

Within the browser, using build/lz4.js:

<script type="text/javascript" src="/path/to/lz4.js"></script>
<script type="text/javascript">
// Nodejs-like Buffer built-in
var Buffer = require('buffer').Buffer
var LZ4 = require('lz4')

// Some data to be compressed
var data = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.'
data += data
// LZ4 can only work on Buffers
var input = Buffer.from(data)
// Initialize the output buffer to its maximum length based on the input data
var output = Buffer.alloc( LZ4.encodeBound(input.length) )

// block compression (no archive format)
var compressedSize = LZ4.encodeBlock(input, output)
// remove unnecessary bytes
output = output.slice(0, compressedSize)

console.log( "compressed data", output )

// block decompression (no archive format)
var uncompressed = Buffer.alloc(input.length)
var uncompressedSize = LZ4.decodeBlock(output, uncompressed)
uncompressed = uncompressed.slice(0, uncompressedSize)

console.log( "uncompressed data", uncompressed )
</script>

When installing from a GitHub clone, after making sure that node and node-gyp are properly installed:

npm i
node-gyp rebuild

See below for more LZ4 functions.

Usage

Encoding

There are 2 ways to encode:

  • asynchronous, using Node.js streams: slowest, but can handle very large data sets (no memory limitations)
  • synchronous, by feeding it the whole data set at once: faster, but limited by the amount of available memory

Asynchronous encoding

First, create an LZ4 encoding Node.js stream with LZ4#createEncoderStream(options).

  • options (Object): LZ4 stream options (optional)
    • options.blockMaxSize (Number): chunk size to use (default=4MB)
    • options.highCompression (Boolean): use high compression (default=false)
    • options.blockIndependence (Boolean): (default=true)
    • options.blockChecksum (Boolean): add compressed blocks checksum (default=false)
    • options.streamSize (Boolean): add full LZ4 stream size (default=false)
    • options.streamChecksum (Boolean): add full LZ4 stream checksum (default=true)
    • options.dict (Boolean): use dictionary (default=false)
    • options.dictId (Integer): dictionary id (default=0)

The stream can then encode any data piped to it. It will emit a data event on each encoded chunk, which can be saved into an output stream.

The following example shows how to encode a file test into test.lz4.

var fs = require('fs')
var lz4 = require('lz4')

var encoder = lz4.createEncoderStream()

var input = fs.createReadStream('test')
var output = fs.createWriteStream('test.lz4')

input.pipe(encoder).pipe(output)

Synchronous encoding

Read the data into memory and feed it to LZ4#encode(input[, options]) to produce an LZ4 stream.

  • input (Buffer): data to encode
  • options (Object): LZ4 stream options (optional)
    • options.blockMaxSize (Number): chunk size to use (default=4MB)
    • options.highCompression (Boolean): use high compression (default=false)
    • options.blockIndependence (Boolean): (default=true)
    • options.blockChecksum (Boolean): add compressed blocks checksum (default=false)
    • options.streamSize (Boolean): add full LZ4 stream size (default=false)
    • options.streamChecksum (Boolean): add full LZ4 stream checksum (default=true)
    • options.dict (Boolean): use dictionary (default=false)
    • options.dictId (Integer): dictionary id (default=0)

var fs = require('fs')
var lz4 = require('lz4')

var input = fs.readFileSync('test')
var output = lz4.encode(input)

fs.writeFileSync('test.lz4', output)

Decoding

There are 2 ways to decode:

  • asynchronous, using Node.js streams: slowest, but can handle very large data sets (no memory limitations)
  • synchronous, by feeding it the whole LZ4 data at once: faster, but limited by the amount of available memory

Asynchronous decoding

First, create an LZ4 decoding Node.js stream with LZ4#createDecoderStream().

The stream can then decode any data piped to it. It will emit a data event on each decoded sequence, which can be saved into an output stream.

The following example shows how to decode an LZ4 compressed file test.lz4 into test.

var fs = require('fs')
var lz4 = require('lz4')

var decoder = lz4.createDecoderStream()

var input = fs.createReadStream('test.lz4')
var output = fs.createWriteStream('test')

input.pipe(decoder).pipe(output)

Synchronous decoding

Read the data into memory and feed it to LZ4#decode(input) to decode an LZ4 stream.

  • input (Buffer): data to decode

var fs = require('fs')
var lz4 = require('lz4')

var input = fs.readFileSync('test.lz4')
var output = lz4.decode(input)

fs.writeFileSync('test', output)

Block level encoding/decoding

In some cases, it is useful to be able to manipulate an LZ4 block instead of an LZ4 stream. The functions to decode and encode are therefore exposed as:

  • LZ4#decodeBlock(input, output[, startIdx, endIdx]) (Number) >=0: uncompressed size, <0: error at offset
    • input (Buffer): data block to decode
    • output (Buffer): decoded data block
    • startIdx (Number): input buffer start index (optional, default=0)
    • endIdx (Number): input buffer end index (optional, default=startIdx + input.length)
  • LZ4#encodeBound(inputSize) (Number): maximum size for a compressed block
    • inputSize (Number): size of the input; returns 0 if the input is too large. This is required to size the output buffer for block-encoded data
  • LZ4#encodeBlock(input, output[, startIdx, endIdx]) (Number) >0: compressed size, =0: not compressible
    • input (Buffer): data block to encode
    • output (Buffer): encoded data block
    • startIdx (Number): output buffer start index (optional, default=0)
    • endIdx (Number): output buffer end index (optional, default=startIdx + output.length)
  • LZ4#encodeBlockHC(input, output) (Number) >0: compressed size, =0: not compressible
    • input (Buffer): data block to encode with high compression
    • output (Buffer): encoded data block

Blocks do not have any magic number and are provided as-is. It is useful to store the size of the original input somewhere so the output buffer can be sized for decoding. LZ4#encodeBlockHC() is not available in pure JavaScript.

How it works

Restrictions / Issues

  • the blockIndependence option is only supported when set to true

License

MIT