
dsv-dataset v0.2.1

dsv-dataset


A metadata specification and parsing library for data sets.

One of the many recurring issues in data visualization is parsing data sets. Data sets are frequently represented in a delimiter-separated value (DSV) format, such as comma-separated value (CSV) or tab-separated value (TSV). Conveniently, the d3-dsv library supports parsing such data sets. However, the resulting parsed data table has string values for every column, and it is up to the developer to parse those string values into numbers or dates, depending on the data.

The primary purpose of this library is to provide a way to annotate DSV data sets with type information about their columns, so they can be automatically parsed. This enables developers to shift the logic of how to parse columns out of visualization code, and into a separate metadata specification.
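The problem that motivates the type annotations can be shown in isolation (a standalone illustration, not using d3-dsv or this library):

```javascript
// DSV parsers return every cell as a string, so numeric operations
// silently misbehave until the values are coerced.
var row = { sepal_length: "5.1" }; // what a plain DSV parse produces

console.log(row.sepal_length + 1);             // "5.11" (string concatenation)
console.log(parseFloat(row.sepal_length) + 1); // 6.1 (after numeric coercion)
```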

Installation

Install via NPM: npm install dsv-dataset

Require the library via Node.js / Browserify:

var dsvDataset = require("dsv-dataset");

You can also require the library via Bower: bower install dsv-dataset. The file bower_components/dsv-dataset/dsv-dataset.js contains a UMD bundle, which can be included via a <script> tag, or using RequireJS.

Example

Here is an example program that parses three columns from the Iris dataset.


// Use dsv-dataset to parse the data.
var dataset = dsvDataset.parse({
  // This string contains CSV data that could be loaded from a .csv file.
  dsvString: [
    "sepal_length,sepal_width,petal_length,petal_width,class",
    "5.1,3.5,1.4,0.2,setosa",
    "6.2,2.9,4.3,1.3,versicolor",
    "6.3,3.3,6.0,2.5,virginica"
  ].join("\n"),

  // This metadata object specifies the delimiter and column types.
  // This could be loaded from a .json file.
  metadata: {
    delimiter: ",",
    columns: [
      { name: "sepal_length", type: "number" },
      { name: "sepal_width",  type: "number" },
      { name: "petal_length", type: "number" },
      { name: "petal_width",  type: "number" },
      { name: "class",        type: "string" }
    ]
  }
});

// Pretty-print the parsed data table as JSON.
console.log(JSON.stringify(dataset.data, null, 2));

The following JSON will be printed:

[
  {
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2,
    "class": "setosa"
  },
  {
    "sepal_length": 6.2,
    "sepal_width": 2.9,
    "petal_length": 4.3,
    "petal_width": 1.3,
    "class": "versicolor"
  },
  {
    "sepal_length": 6.3,
    "sepal_width": 3.3,
    "petal_length": 6,
    "petal_width": 2.5,
    "class": "virginica"
  }
]

Notice how numeric columns have been parsed to numbers.
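Because the columns are now real numbers, they can be used directly in arithmetic. For instance, averaging the sepal_length column of the rows shown above (inlined here so the snippet stands alone):

```javascript
// Rows as parsed above; numeric columns are numbers, not strings.
var rows = [
  { sepal_length: 5.1 },
  { sepal_length: 6.2 },
  { sepal_length: 6.3 }
];

var mean = rows.reduce(function (sum, row) {
  return sum + row.sepal_length;
}, 0) / rows.length;

console.log(mean.toFixed(2)); // "5.87"
```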

API

# dsvDataset.parse(dataset)

Parses the given DSV dataset, which consists of a DSV string and a metadata specification. This function mutates the dataset argument by adding a data property containing the parsed data table (an array of row objects), and returns the mutated dataset object.
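The mutate-and-return contract can be sketched with a stripped-down stand-in for parse (an illustration of the behavior described above, not the library's actual implementation):

```javascript
// Sketch of the contract: parse() attaches .data to the object it is
// given and returns that same object.
function parse(dataset) {
  var delimiter = (dataset.metadata && dataset.metadata.delimiter) || ",";
  var lines = dataset.dsvString.split("\n");
  var header = lines[0].split(delimiter);
  dataset.data = lines.slice(1).map(function (line) {
    var values = line.split(delimiter);
    var row = {};
    header.forEach(function (name, i) { row[name] = values[i]; });
    return row;
  });
  return dataset; // the same object, now carrying .data
}

var ds = { dsvString: "a,b\n1,2", metadata: { delimiter: "," } };
console.log(parse(ds) === ds); // true: the argument itself is mutated
```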

Argument structure:

dataset (object) The dataset representation, with properties

  • dsvString (string) The data table represented in DSV format, parsed by d3-dsv.
  • metadata (object, optional) Annotates the data table with metadata, with properties
    • delimiter (string, optional) The delimiter used between values. Typical values are
      • "," (CSV) This is the default used if no delimiter is specified.
      • "\t" (TSV)
      • "|"
    • columns (array of objects) An array of column descriptor objects with properties
      • name (string) The column name found on the first line of the DSV data set.
      • type (string - one of "string", "number" or "date") The type of this column.
        • If type is "number", then parseFloat will parse the string.
        • If type is "date", then new Date(String) will parse the string.
        • If no type is specified, the default is "string".
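Taken together, these rules amount to a small per-column coercion step. A sketch of that step, assuming a row of string values as produced by a DSV parser (this is an illustration, not the library's internals):

```javascript
// Applies the documented type rules to one parsed row of strings:
// "number" -> parseFloat, "date" -> new Date(string), default -> string.
function coerceRow(row, columns) {
  columns.forEach(function (column) {
    var value = row[column.name];
    if (column.type === "number") {
      row[column.name] = parseFloat(value);
    } else if (column.type === "date") {
      row[column.name] = new Date(value);
    } // no type (or "string"): leave the value as-is
  });
  return row;
}

var row = coerceRow(
  { petal_width: "0.2", measured: "2015-01-01", class: "setosa" },
  [
    { name: "petal_width", type: "number" },
    { name: "measured",    type: "date" },
    { name: "class" } // defaults to "string"
  ]
);
console.log(typeof row.petal_width);       // "number"
console.log(row.measured instanceof Date); // true
console.log(row.class);                    // "setosa"
```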

Project Structure

This project uses NPM as the primary build tool. The file package.json specifies that this project depends on d3-dsv.

The main source file is index.js. This exposes the top-level dsvDataset module using ES6 Module Syntax. This file is transformed into dsv-dataset.js by Rollup, which outputs a UMD bundle. Note that since d3-dsv exposes ES6 modules via the jsnext:main field in its package.json, Rollup includes the necessary modules directly in the dsv-dataset.js bundle. Unit tests live in test.js. These tests run against the built file, dsv-dataset.js.

To build dsv-dataset.js from index.js and run unit tests, run the command

npm test

This will execute both the pretest and test scripts specified in package.json. The pretest script builds the bundle, and the test script runs the unit tests using Mocha.
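The scripts wiring might look something like the following hypothetical package.json fragment (the project's actual file may differ, particularly in how Rollup is invoked):

```json
{
  "scripts": {
    "pretest": "rollup -c",
    "test": "mocha test.js"
  }
}
```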

My development flow is: 1) edit code and save, 2) run npm test.

Future Plans

A future goal of this project is to provide recommendations for how descriptive metadata can be added to data sets. This includes human-readable titles and descriptions for data sets and columns. This metadata can be surfaced in visualizations to provide a nicer user experience. For example, the human-readable title for a column can be used as an axis label (e.g. "Sepal Width"), rather than the not-so-nice column name from the original DSV data (e.g. "sepal_width").

The metadata object will have the following optional properties:

  • title (string) A human readable name for the data set.
  • description (string - Markdown) A human readable free text description of the data set. This can be Markdown, so it can include links. It should be about one paragraph long.
  • sourceURL (string - URL) The URL from which the data set was originally downloaded.

Each entry in the columns array will have the following optional properties:

  • title (string) A human readable name for the column. Should be a single word or as few words as possible. Intended for use on axis labels and column selection UI widgets.
  • description (string - Markdown) A human readable free text description of the column. This can be Markdown, so it can include links. It should be about one sentence long and communicate the meaning of the column to the user. Intended for use in tooltips when hovering over axes in a visualization, and in user interfaces for selecting columns (e.g. dropdown menus or drag & drop column lists).
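Putting the planned properties together, a fully annotated metadata object might look like this sketch (the titles, descriptions, and URL are illustrative values, not part of the current specification):

```javascript
// Hypothetical example of the proposed descriptive metadata;
// field values here are illustrative.
var metadata = {
  title: "Iris",
  description: "Measurements of iris flowers, commonly used to demonstrate classification.",
  sourceURL: "https://archive.ics.uci.edu/ml/datasets/iris",
  delimiter: ",",
  columns: [
    {
      name: "sepal_width",
      type: "number",
      title: "Sepal Width",
      description: "The width of the flower's sepal, in centimeters."
    }
  ]
};
console.log(metadata.columns[0].title); // "Sepal Width"
```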

DSV data sets could have incrementally more useful and powerful "levels" of metadata annotation. These levels might look something like this:

  • Level 0 - There is an intention to publish the data set.
  • Level 1 - The data set is published in some form other than DSV.
  • Level 2 - The data set is published on the Web as a valid DSV string.
  • Level 3 - Metadata that includes the delimiter and type of each column is published.
  • Level 4 - The data set is given a title, description, and source URL.
  • Level 5 - All columns have a title.
  • Level 6 - All columns have a description.

Related work