
csver

v1.2.0

A CSV file reader and parser based on the Node.js Stream module.

Features

csver is a package that follows the csv file format described in RFC 4180. The format rules covered are:

  1. Each record is located on a separate line, delimited by a line break (CRLF).

  2. The last record in the file may or may not have an ending line break.

  3. There may be an optional header line appearing as the first line of the file, with the same format as normal record lines. This header contains names corresponding to the fields in the file and should contain the same number of fields as the records in the rest of the file (the presence or absence of the header line should be indicated via the optional "header" parameter of this MIME type).

  4. Within the header and each record, there may be one or more fields, separated by commas. Each line should contain the same number of fields throughout the file. Spaces are considered part of a field and should not be ignored. The last field in the record must not be followed by a comma.

  5. Each field may or may not be enclosed in double quotes. If fields are not enclosed with double quotes, then double quotes may not appear inside the fields.

  6. If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote.
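Rules 5 and 6 are the ones hand-rolled parsers most often get wrong. As an illustration only (this is not csver's actual implementation), a minimal splitter for a single line that honors the double-quote escaping rule might look like:

```javascript
// Sketch: split one CSV line into fields per RFC 4180 quoting rules.
function parseLine(line, sep = ',') {
  const fields = [];
  let field = '';
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"') {
        if (line[i + 1] === '"') { field += '"'; i++; } // rule 6: "" escapes "
        else inQuotes = false;                          // closing quote
      } else {
        field += ch;                                    // separators allowed inside quotes
      }
    } else if (ch === '"' && field === '') {
      inQuotes = true;                                  // rule 5: quoted field opens
    } else if (ch === sep) {
      fields.push(field);                               // field boundary
      field = '';
    } else {
      field += ch;
    }
  }
  fields.push(field);                                   // rule 4: no trailing separator
  return fields;
}
```

For example, `parseLine('a,"b,c","say ""hi"""')` yields `['a', 'b,c', 'say "hi"']`.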

Install

$ npm install csver

Usage

const csver = require('csver');
const path = '/path/to/file.csv';

// stream of csv rows, each chunk an array of the row's values
new csver(path).asArray();

// stream of csv rows, each chunk an object with the headers as properties and the row's values as values
new csver(path).asObject();

API

csver(options|filepath)

Returns the parser object with:

  • asArray: a Node.js Transform stream whose chunks are arrays of row values,
  • asObject: a Node.js Transform stream whose chunks are objects with the headers read as properties.

filepath

Path to the csv file as a string

options

Configuration object with:

  • filePath (required): path to the csv file as a string,
  • columnSplitter (optional): character that separates csv columns. Comma (,) by default,
  • lineSplitter (optional): characters that separate csv lines. Carriage return (\r) and line feed (\n) by default,
  • filters (optional): array of callbacks used to filter rows. Each callback receives the parsed row and must return true to keep it or false to discard it, allowing checks on values, properties, etc.,
  • hasHeaders (optional): boolean indicating whether the first line of the csv file contains headers. true by default,
  • headers (optional): set of strings to use as headers for the csv file.
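Putting the options together, a configuration for a semicolon-separated file without a header line could look like the following sketch (the file path, header names, and filter are hypothetical values for illustration):

```javascript
// Hypothetical options object exercising the optional settings above.
const options = {
  filePath: '/path/to/file.csv',        // required: the file to read
  columnSplitter: ';',                  // semicolon-separated columns
  hasHeaders: false,                    // the file has no header line
  headers: ['type', 'animal', 'legs'],  // supply headers manually instead
  filters: [row => row.legs === '4'],   // keep four-legged rows only
};
```

This object would then be passed to the constructor, e.g. `new csver(options)`.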

asArray()

Returns a Transform stream. Each chunk is an array with the data of the line read, excluding the header line when the hasHeaders option is true.

asObject()

Returns a Transform stream. Each chunk is an object whose properties are the file's headers and whose values are the data of the line read. If the headers option is set, those headers are used as the properties instead.

Example

The following example shows how to use filters to skip rows depending on the value of the first column. It uses the "asArray()" method, but the "asObject()" method works the same way.

/* CSV CONTENT
type,animal,legs
mammal,dog,4
bird,sparrow,2
mammal,cat,4
bird,stork,2
*/


// imports
const csv = require('csver');
const { Writable } = require('stream');

// constant values
const PATH = '/path/to/csv/file';
const NON_VALID_ANIMAL = 'bird';

// app definition
const app = new csv({
    filePath: PATH,
    filters:  [item => item[0] !== NON_VALID_ANIMAL]
});

// output stream definition
const output = new Writable({
    objectMode: true,
    write(chunk, encoding, next) {
        console.log(chunk.join(','));
        next();
    }
});

// print every row whose first column differs from the NON_VALID_ANIMAL constant
app.asArray().pipe(output);

/* OUTPUT RESULT
mammal,dog,4
mammal,cat,4
*/