
hyparquet

v1.6.0

Parquet file parser for JavaScript

Downloads: 20,824


Dependency free since 2023!

What is hyparquet?

Hyparquet is a lightweight, pure JavaScript library for parsing Apache Parquet files. Apache Parquet is a popular columnar storage format that is widely used in data engineering, data science, and machine learning applications for efficiently storing and processing large datasets.

Hyparquet lets you read and extract data from Parquet files directly in JavaScript, both in Node.js and in the browser, without any dependencies. Designed for performance and ease of use, it is well suited to applications that need efficient client-side data processing.

Demo

Online parquet file reader demo available at:

https://hyparam.github.io/hyperparam-cli/apps/hyparquet-demo/


See the source code.

Features

  1. Performant: Designed to efficiently process large datasets by only loading the required data, making it suitable for big data and machine learning applications.
  2. Browser-native: Built to work seamlessly in the browser, opening up new possibilities for web-based data applications and visualizations.
  3. Dependency-free: Hyparquet has zero dependencies, making it lightweight and easy to install and use in any JavaScript project.
  4. TypeScript support: The library is written in jsdoc-typed JavaScript and provides TypeScript definitions out of the box.
  5. Flexible data access: Hyparquet allows you to read specific subsets of data by specifying row and column ranges, giving fine-grained control over what data is fetched and loaded.

Why hyparquet?

Why write a new parquet parser? First, existing libraries like parquetjs are officially "inactive". Second, they do not support the kind of stream processing needed for a truly performant parser in the browser. Finally, having no dependencies keeps hyparquet lean and easy to package and deploy.

Usage

Install the hyparquet package from npm:

npm install hyparquet

Reading Data

Node.js

To read the entire contents of a parquet file in a Node.js environment:

const { asyncBufferFromFile, parquetRead } = await import('hyparquet')
await parquetRead({
  file: await asyncBufferFromFile(filename),
  onComplete: data => console.log(data)
})

The hyparquet package is published as an ES module only, not as CommonJS, which is why the example uses a dynamic import() to load it in Node.js.

Browser

Hyparquet supports asynchronous fetching of parquet files over a network.

const { asyncBufferFromUrl, parquetRead } = await import('https://cdn.jsdelivr.net/npm/hyparquet/src/hyparquet.min.js')
const url = 'https://hyperparam-public.s3.amazonaws.com/bunnies.parquet'
await parquetRead({
  file: await asyncBufferFromUrl(url),
  onComplete: data => console.log(data)
})

Metadata

You can read just the metadata, including the schema and data statistics, using the parquetMetadata function:

const { parquetMetadata } = await import('hyparquet')
const fs = await import('fs')

const buffer = fs.readFileSync('example.parquet')
const arrayBuffer = new Uint8Array(buffer).buffer
const metadata = parquetMetadata(arrayBuffer)

If you're in a browser environment, you'll probably get parquet file data either from a file the user drags and drops, or from a file downloaded from the web.

To load parquet data in the browser from a remote server using fetch:

import { parquetMetadata } from 'hyparquet'

const res = await fetch(url)
const arrayBuffer = await res.arrayBuffer()
const metadata = parquetMetadata(arrayBuffer)

To parse parquet files from a user drag-and-drop action, see example in index.html.

Filtering by Row and Column

To read large parquet files, it is recommended that you filter by row and column. Hyparquet is designed to load only the minimal amount of data needed to fulfill a query. You can filter rows by number, or columns by name, and columns will be returned in the same order they were requested:

import { parquetRead } from 'hyparquet'

await parquetRead({
  file,
  columns: ['colB', 'colA'], // include columns colB and colA
  rowStart: 100,
  rowEnd: 200,
  onComplete: data => console.log(data),
})

Column names

By default, the data passed to the onComplete function is an array of rows, with each row an array of values in column order. If you would like each row to be an object keyed by column name, set the option rowFormat to object.

import { parquetRead } from 'hyparquet'

await parquetRead({
  file,
  rowFormat: 'object',
  onComplete: data => console.log(data),
})
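To illustrate the difference between the two shapes, the small helper below (illustrative only, not part of the hyparquet API) converts rows-as-arrays into the row objects that rowFormat: 'object' produces:

```javascript
// Convert rows-as-arrays into rows-as-objects keyed by column name.
// Illustrative helper showing the shape that rowFormat: 'object' yields.
function rowsToObjects(rows, columnNames) {
  return rows.map(row =>
    Object.fromEntries(columnNames.map((name, i) => [name, row[i]]))
  )
}

// Example:
// rowsToObjects([['a', 1], ['b', 2]], ['name', 'count'])
//   → [{ name: 'a', count: 1 }, { name: 'b', count: 2 }]
```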

Advanced Usage

AsyncBuffer

Hyparquet supports asynchronous fetching of parquet files over a network. You provide an AsyncBuffer, which is like a JavaScript ArrayBuffer except that its slice method returns a Promise<ArrayBuffer>.

interface AsyncBuffer {
  byteLength: number
  slice(start: number, end?: number): Promise<ArrayBuffer>
}
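For example, an in-memory ArrayBuffer can be wrapped in this interface with a trivial adapter (an illustrative helper; hyparquet ships its own helpers for files and URLs):

```javascript
// Wrap an in-memory ArrayBuffer in the AsyncBuffer interface.
// Illustrative adapter, not part of the hyparquet API.
function asyncBufferFromArrayBuffer(arrayBuffer) {
  return {
    byteLength: arrayBuffer.byteLength,
    // slice resolves immediately since the data is already in memory
    slice: async (start, end) => arrayBuffer.slice(start, end),
  }
}
```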

You can read parquet files asynchronously using HTTP Range requests so that only the necessary byte ranges from a url will be fetched:

import { parquetRead } from 'hyparquet'

const url = 'https://hyperparam-public.s3.amazonaws.com/wiki-en-00000-of-00041.parquet'
const byteLength = 420296449
await parquetRead({
  file: { // AsyncBuffer
    byteLength,
    async slice(start, end) {
      const headers = new Headers()
      headers.set('Range', `bytes=${start}-${end - 1}`)
      const res = await fetch(url, { headers })
      return res.arrayBuffer()
    },
  },
  onComplete: data => console.log(data),
})
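The byteLength in the example above is hardcoded. When the file size is not known in advance, it can be discovered with a HEAD request; the following is an illustrative helper (not part of the hyparquet API), and the asyncBufferFromUrl helper shown earlier wraps similar bookkeeping for you:

```javascript
// Illustrative helper: discover a remote file's size via a HEAD request,
// so byteLength need not be hardcoded. Assumes the server reports
// a Content-Length header.
async function byteLengthFromHead(url) {
  const res = await fetch(url, { method: 'HEAD' })
  const length = res.headers.get('Content-Length')
  if (!length) throw new Error('server did not return Content-Length')
  return Number(length)
}
```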

Supported Parquet Files

The parquet format is famously sprawling, with options for a wide array of compression schemes, encoding types, and data structures.

Supported parquet encodings:

  • [X] PLAIN
  • [X] PLAIN_DICTIONARY
  • [X] RLE_DICTIONARY
  • [X] RLE
  • [X] BIT_PACKED
  • [X] DELTA_BINARY_PACKED
  • [X] DELTA_BYTE_ARRAY
  • [X] DELTA_LENGTH_BYTE_ARRAY
  • [X] BYTE_STREAM_SPLIT

Compression

Supporting every possible compression codec available in parquet would blow up the size of the hyparquet library. In practice, most parquet files use snappy compression.

Parquet compression types supported by default:

  • [X] Uncompressed
  • [X] Snappy
  • [ ] GZip
  • [ ] LZO
  • [ ] Brotli
  • [ ] LZ4
  • [ ] ZSTD
  • [ ] LZ4_RAW

You can provide custom compression codecs using the compressors option.

hysnappy

The most common compression codec used in parquet is snappy. Hyparquet includes a built-in snappy decompressor written in pure JavaScript.

We developed hysnappy to make parquet parsing even faster. Hysnappy is a snappy decompression codec written in C, compiled to WASM.

To use hysnappy for faster parsing of large parquet files, override the SNAPPY compressor for hyparquet:

import { parquetRead } from 'hyparquet'
import { snappyUncompressor } from 'hysnappy'

await parquetRead({
  file,
  compressors: {
    SNAPPY: snappyUncompressor(),
  },
  onComplete: console.log,
})

Parsing a 420 MB Wikipedia parquet file using hysnappy reduces parsing time by 40% (4.1s to 2.3s).

hyparquet-compressors

You can include support for ALL parquet compression codecs using the hyparquet-compressors library.

import { parquetRead } from 'hyparquet'
import { compressors } from 'hyparquet-compressors'

await parquetRead({ file, compressors, onComplete: console.log })

References

  • https://github.com/apache/parquet-format
  • https://github.com/apache/parquet-testing
  • https://github.com/apache/thrift
  • https://github.com/apache/arrow
  • https://github.com/dask/fastparquet
  • https://github.com/duckdb/duckdb
  • https://github.com/google/snappy
  • https://github.com/ironSource/parquetjs
  • https://github.com/zhipeng-jia/snappyjs

Contributions

Contributions are welcome!

Hyparquet development is supported by an open-source grant from Hugging Face 🤗