
tserve v0.0.8

Delightful data parsing


Introduction

Data parsing using ES6 Async Iterators

Online documentation

What problem does this package solve?

Processing huge files in Node.js can be hard, especially when you also need to send data to or retrieve data from external sources.

This package solves two problems:

  1. Parsing big CSV | XML | JSON files in a memory-efficient way.
  2. Writing data to CSV | JSON | XML files in a memory-efficient way.
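
To give a feel for the reading side, here is a minimal sketch of consuming a file record by record with for await (the file path and record shape are hypothetical; it assumes csvRead yields an async iterable, as the examples further below do):

import { csvRead } from "iterparse";

// Hypothetical record shape for a hypothetical "./orders.csv"
interface Order {
  id: string;
  total: number;
}

async function main() {
  // Records are yielded one at a time, so only the current
  // record has to fit in memory, not the whole file.
  for await (const order of csvRead<Order>({ filePath: "./orders.csv" })) {
    console.log(order.id, order.total);
  }
}

main();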

Installation

Async iterators are natively supported in Node.js 10.x and later. If you're using Node.js 8.x or 9.x, you need to run Node.js with the --harmony_async_iteration flag.

Async iterators are not supported at all in Node.js 6.x or 7.x, so if you're on one of those versions you need to upgrade Node.js to use them.
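
For example, on Node.js 8.x or 9.x you would launch your script like this (the script name is just a placeholder):

$ node --harmony_async_iteration ./process-feed.js

With a supported Node.js version in place, install the package: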

$ npm install iterparse

Or using yarn

$ yarn add iterparse

Benchmarks

Run all benchmarks

    git clone https://github.com/digimuza/iterparse.git &&
    cd ./iterparse/benchmarks &&
    yarn && 
    yarn run

All benchmarks were executed on an AMD Ryzen 2600X processor.

The benchmark source code lives in the repository's benchmarks directory.

CSV Parsing

Parsing 1 million records of randomly generated data. The data was generated using this script.

XML

Parsing 1 million records of randomly generated data. The data was generated using this script.

JSON

Parsing 1 million records of randomly generated data. The data was generated using this script.

Documentation

General usage

For processing iterators, I recommend using the IxJS library.
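
For instance, iterparse output can be wrapped with IxJS and transformed through its operators (a minimal sketch; the file path and record shape are made up, and it assumes csvRead yields an async iterable):

import { from } from "ix/asynciterable";
import { filter, map } from "ix/asynciterable/operators";
import { csvRead } from "iterparse";

// Hypothetical record shape for a hypothetical "./products.csv"
interface Product {
  sku: string;
  price: number;
}

from(csvRead<Product>({ filePath: "./products.csv" }))
  .pipe(
    // Drop records without a usable price
    filter((p) => p.price > 0),
    // Apply a hypothetical 10% markup
    map((p) => ({ ...p, price: p.price * 1.1 }))
  )
  .forEach((p) => console.log(p.sku, p.price));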

Real world examples

Usage in e-commerce

Big e-shops can have feeds with 100k or more products. Loading all of that data at once is really impractical:

const productCount = 100000;
const productSizeInKb = 20;
// 100,000 products * 20 KB * 1024 bytes/KB ≈ 2 GB just to load the data
const totalMemoryConsumption = productCount * productSizeInKb * 1024;

So, based on this calculation, we would use about 2 GB of memory just to load the data, and once we start working with it the memory footprint can grow 6 to 10 times.

We could use Node.js streams to solve this problem, but working with streams is mind-bending and genuinely hard, especially when you need to manipulate the data in a meaningful way and send it to external destinations: APIs, machine learning pipelines, networks, databases, etc.

Here are some examples of what we can do with iterparse:

import { xmlRead, jsonWrite } from 'iterparse'

interface Video {
    id: string,
    url: string,
    description: string
}

async function getListOfYouTubeVideos(url: string): Promise<Video[]> {
    // The real logic is not implemented here.
    // Just keep in mind that this function makes some HTTP requests,
    // so it takes time to complete.
    ...

    return [...] // Big array of video objects
}

// Extract all <product></product> nodes from the XML file.
// Let's assume "./big_product_feed.xml" has 20 million records and the file size is 30 GB.
// This script would use around 50 MB of RAM.
xmlRead<Video>({ filePath: "./big_product_feed.xml", pattern: 'product' })
    .map(async ({ url }) => {
        return getListOfYouTubeVideos(url)
    })
    // Write all extracted data to a JSON file
    .pipe(jsonWrite({ filePath: "./small_feed_with_videos.json" }))
    // All iterators must be consumed in some way; count() is one option.
    // Alternatives are toArray(), forEach(), reduce(), etc.
    .count()

Keep in mind this is a trivial example, but it illustrates how to process huge amounts of data.

A simple CSV to JSON converter:

import { csvRead, jsonWrite } from "iterparse";

csvRead({ filePath: "./big_csv_file.csv" })
  .pipe(jsonWrite({ filePath: "big_json_file.json" }))
  .count();

Data aggregation

import { csvRead } from "iterparse";

// CSV file with 100 million sales records
csvRead<{ id: string, price: number, qty: number, margin: number }>({ filePath: "./sales.csv" })
  .reduce((acc, item)=> acc + ((item.qty * item.price) * item.margin), 0)
  .then((profit) => {
      console.log(`Yearly profit ${profit}$`)
  });

Extract breweries from an open API


import fetch from 'node-fetch'
import { jsonWrite } from 'iterparse'
// Page through the Open Brewery DB API, yielding one brewery at a time
async function* extractBreweries() {
    let page = 0
    while (true) {
        const url = `https://api.openbrewerydb.org/breweries?page=${page}`
        console.log(`Extracting: "${url}"`)
        const response = await fetch(url)
        if (!response.ok) {
            throw new Error(`Failed to get ${url}`)
        }

        const body = await response.json()
        if (Array.isArray(body) && body.length !== 0) {
            for (const item of body) {
                yield item
            }
            page++
            continue
        }

        // An empty page means we have reached the end
        return
    }
}

// count() consumes the iterator, which drives the whole pipeline
jsonWrite(extractBreweries(), { filePath: 'breweries.json' }).count()