WhatDidIBuy

This project is a set of scrapers that will let you collect your order history from electronics distributors and retailers, and save it as CSV files. Invoices in PDF format can also be downloaded and saved.

Currently supported websites include:

  • Digi-Key
  • Mouser

A note about scraping

This project is meant for personal use only. It was created out of a need to catalog, in one place, the parts I had already ordered in the past, so that I do not end up re-ordering the same things.

Some of these websites do not like being scraped, but that is usually easy to get past, either with some scripts to hide the fact that we are using browser automation, or by solving a CAPTCHA.

Since the scripts are not fully automated (they require the user to log in manually) and only hit each website at low volume to retrieve a single user's order history, I believe this use of browser automation falls within the usage policies of the websites in question.

In any case, please use your own judgement and restraint when using these scripts.

Installation and usage

Create a directory where you would like to save the collected CSV files. In that directory, install WhatDidIBuy with:

npm install whatdidibuy

This will also install Puppeteer as a dependency, which will download a local copy of Chromium. If you wish to skip the Chromium download and use your own copy of Chrome or Chromium, please see their documentation about environment variables.
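For example, with Puppeteer versions from around this package's release, environment variables along these lines should do it (verify against the documentation for your installed Puppeteer version):

PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true npm install whatdidibuy

PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium ./node_modules/.bin/whatdidibuy digikey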

Next, run WhatDidIBuy with:

./node_modules/.bin/whatdidibuy

This command will show the available scrapers. For example, to grab your order history from Digi-Key, run:

./node_modules/.bin/whatdidibuy digikey

This will launch a Chromium window with the Digi-Key website.

Manual actions

The scrapers will launch the appropriate website but will not automatically log in for you. When you see the website, you will need to:

  1. Log in with your credentials.
  2. Navigate to the order history page for the website.

The scrapers will wait for the order history page to be shown, and will swing into action at that point.

If everything goes well, you will get two CSV files per website: website-orders.csv and website-items.csv. Invoices, if available, will be saved to website/.
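For example, after running the digikey scraper, the working directory might contain something like:

digikey-orders.csv
digikey-items.csv
digikey/    (downloaded invoice PDFs, if available)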

CSV format

The *-orders.csv files have these fields:

  • site - the scraper that was used to get this data
  • id - a site-specific order ID
  • date - order date, in YYYY-MM-DD format
  • status - a site-specific order status
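For illustration only (these are made-up values), a digikey-orders.csv could look like:

site,id,date,status
digikey,12345678,2023-04-01,Shipped
digikey,23456789,2023-11-17,Delivered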

The *-items.csv files have these fields:

  • site - the scraper that was used to get this data
  • ord - the site-specific order ID
  • idx - line item index in the order
  • dpn - distributor part number
  • mpn - manufacturer part number
  • mfr - manufacturer
  • qty - item quantity
  • dsc - item description
  • upr - unit price
  • lnk - link to the item page (may be relative)
  • img - image URL (may be relative)

Note that not all scrapers output all of these fields; it depends on what data is actually available on each site.
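As an example of putting these files to use (in the spirit of not re-ordering parts you already have), here is a minimal Python sketch. It is not part of whatdidibuy; the script name and the fallback from mpn to dpn are assumptions for illustration:

# find_duplicates.py (hypothetical helper, not part of whatdidibuy):
# scan all *-items.csv files and report parts that appear in more
# than one order, across all scraped sites.
import csv
import glob
from collections import defaultdict

# part number -> list of (site, order id) pairs it appears in
orders_by_part = defaultdict(list)

for path in glob.glob("*-items.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Prefer the manufacturer part number; fall back to the
            # distributor part number. Either may be missing, since
            # not every scraper outputs every field.
            part = row.get("mpn") or row.get("dpn")
            if part:
                orders_by_part[part].append((row.get("site", ""), row.get("ord", "")))

for part, orders in sorted(orders_by_part.items()):
    if len(orders) > 1:
        print(f"{part}: appears in {len(orders)} orders: {orders}")

Run it from the directory containing the collected CSV files.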

License

Licensed under the Apache License, version 2.0.

Credits

Created by Nikhil Dabas.