
@bshowen/csv-to-json v1.0.2

Parse small and large CSV files into JSON files

csv-to-json

  • Transforms a .csv file into a .json file
  • The header values in the CSV are used as the JSON object keys.
    • First row of the CSV is treated as the header.
  • Each row is transformed into a JSON object.
  • The final .json file will be a single array with N objects.
    • N = number of rows in the .csv file.
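
As a quick illustration (the data below is made up, not taken from the package's docs), a CSV like

name,city
Ada,London
Grace,Arlington

would produce a JSON file along these lines (values are shown here as strings; the exact formatting may differ):

[
  { "name": "Ada", "city": "London" },
  { "name": "Grace", "city": "Arlington" }
]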

Why does this exist? There are npm packages that already do exactly this.

I want to understand how this process works and to deepen my knowledge of Node
I/O streams.

How to use this package.

// First, import this package.
const { csvToJson } = require("@bshowen/csv-to-json");
// csvToJson requires the following two options.
const options = {
  inputFilePath: "path/to/your/csv/data.csv",
  outputFilePath: "path/where/you/want/the/json/data/",
};

// You can use .catch to catch any errors or use a try/catch block.
csvToJson(options)
  .then(() => {
    // Hurray, no errors. Do some other stuff here...
  })
  .catch((err) => console.log(err.message));

// Usage with a try/catch block.
async function run() {
  try {
    await csvToJson(options);
    // Hurray, no errors. Do some other stuff here...
  } catch (err) {
    console.log(err);
  }
}
run();

Now you will have a new file containing a valid JSON array, where each item in
the array represents a single row of your CSV data.
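
If you want to sanity-check the output, you can read it back with Node's built-in fs module. The file path below is only a placeholder; the actual location and name depend on the outputFilePath and outputFileName options you passed.

const fs = require("node:fs");

// "path/to/output.json" is a placeholder for wherever csvToJson wrote your file.
const rows = JSON.parse(fs.readFileSync("path/to/output.json", "utf8"));

console.log(Array.isArray(rows)); // true
console.log(rows[0]); // an object whose keys are the CSV header values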

Options

csvToJson accepts the following options in the form of an object.

| Option         | Required | Type   | Example                                                        |
| :------------: | :------: | :----: | :------------------------------------------------------------: |
| inputFilePath  | Yes      | String | { inputFilePath: path.resolve(__dirname, "myFile.csv") }       |
| outputFilePath | Yes      | String | { outputFilePath: path.resolve(__dirname, "./outputFolder") }  |
| outputFileName | No       | String | { outputFileName: "myFileName.json" }                          |

An example using all options.

const { csvToJson } = require("@bshowen/csv-to-json");
const path = require("node:path");

csvToJson({
  inputFilePath: path.resolve(__dirname, "../myFiles/weatherData.csv"),
  outputFilePath: path.resolve(__dirname, "../formattedFiles"),
  outputFileName: "weatherData.json",
})
  .then(() => {
    // Hurray, no errors. Do some other stuff here...
  })
  .catch((err) => console.log(err.message));

How it works.

Here's a high-level overview. I process the CSV one chunk at a time. When a
chunk of data is received, it is transformed from comma-separated values into
JSON objects. Those objects are then written to the output file, and the
process repeats until no more chunks are received.

I learned quickly that you can't simply read an entire file into memory and
then operate on that data. With small files that works, but with larger files
I ran into memory issues. Since that approach didn't scale, I settled on
reading, transforming, and writing each chunk as it is received.
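
To make that concrete, here is a minimal sketch of the chunked read-transform-write idea using Node's built-in stream module. It is illustrative only, not this package's actual implementation, and it assumes a simple CSV with LF line endings and no quoted fields, embedded commas, or embedded newlines.

const fs = require("node:fs");
const { Transform, pipeline } = require("node:stream");

function csvToJsonSketch(inputFilePath, outputFilePath) {
  let header = null;   // keys taken from the first row
  let leftover = "";   // partial line carried over between chunks
  let first = true;    // tracks whether the first object has been written

  const toJson = new Transform({
    transform(chunk, _encoding, callback) {
      const lines = (leftover + chunk.toString()).split("\n");
      leftover = lines.pop(); // the last piece may be an incomplete row
      let out = "";
      for (const line of lines) {
        if (!line.trim()) continue;
        const values = line.split(",");
        if (!header) {
          header = values; // first row becomes the object keys
          continue;
        }
        const row = Object.fromEntries(header.map((key, i) => [key, values[i]]));
        out += (first ? "[" : ",") + JSON.stringify(row);
        first = false;
      }
      callback(null, out);
    },
    flush(callback) {
      // Emit any trailing row, then close the JSON array.
      let out = "";
      if (leftover.trim() && header) {
        const values = leftover.split(",");
        const row = Object.fromEntries(header.map((key, i) => [key, values[i]]));
        out += (first ? "[" : ",") + JSON.stringify(row);
        first = false;
      }
      callback(null, out + (first ? "[]" : "]"));
    },
  });

  return new Promise((resolve, reject) => {
    pipeline(
      fs.createReadStream(inputFilePath),
      toJson,
      fs.createWriteStream(outputFilePath),
      (err) => (err ? reject(err) : resolve())
    );
  });
}

// Usage: csvToJsonSketch("data.csv", "data.json").then(() => console.log("done"));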

Notes

This article was extremely useful when I was trying to decipher the Node.js docs.

  • https://blog.logrocket.com/working-node-js-streams/