
@tartarus/data v0.4.11

Tartarus Dataset Fetch and Cleanup Tools

Various scripts for downloading and building datasets.

TL;DR

# Download English Wikimedia dumps
npx -p @tartarus/data td fetch wikimedia --output /tmp/

Prerequisites

Requires Node.js and wget.

(Tested on macOS with node 10.14.2 and wget 1.20.3.)

Usage

# Download Project Gutenberg library and catalogue
td fetch gutenberg --output /my/download/path

# Download Wikimedia dumps for English and Spanish language Wikipedia and Wikiquote 
td fetch wikimedia --output /my/download/path \
  --language en \
  --language es \
  --site wiki \
  --site wikiquote
  
# Crawl an API endpoint
td spider --output /my/download/path --site /my/site/config/file.ts

Spiders & Crawling

td spider collects sequential data from APIs. Both iterative counters (e.g. a page number) and URLs extracted from the previous response ('next page' links) are supported; see the sketch following the configuration example below.

SpiderSiteConfig

A spider requires a JavaScript/TypeScript configuration file that describes how data is fetched, parsed and stored.

import { SpiderJsonData, SpiderHttpNavigator, SpiderStore, SpiderHandle } from '@tartarus/data';

export default {
  name: 'myexamplesite',
  
  // Determine how the received data will be parsed
  // Supported: SpiderJsonData, SpiderYamlData, SpiderXmlData, SpiderCsvData, SpiderHtmlData, SpiderTextData
  data: new SpiderJsonData(),
  
  // Determine what to query (required)
  navigator: new SpiderHttpNavigator(
    {
      // Base URL
      baseTarget: 'https://api.domain.ext/v1/list',
      
      // Determine how URLs are formed (return null to stop spidering)
      target: (h: SpiderHandle): string | null => `${h.getBaseTarget()}?page=${h.getIteration()}`,
      
      // Optional callback to test whether spidering should stop
      isDone: (h: SpiderHandle): boolean => false
    }
  ),
  
  // Determine how responses are stored
  store: new SpiderStore(
    {
      // Number of sub-directory levels to use when storing files
      subDirectoryDepth: 3,
      
      // Filename to be used; return null to stop spidering
      filename: (h: SpiderHandle) => `${h.getIteration()}.json`,
    }
  ),
  
  // Request decorator
  request: {
    headers: {
      'User-Agent': 'Tartarus-Data-Spider/1.0 ([email protected])'
    },
    
    method: 'get',
    responseEncoding: 'utf8'
  },
  
  // Spider behavior
  behavior: {
    delay: 1500, // Delay between requests in milliseconds
    retryDelay: 15000, // Delay before retrying failed requests
    maxRetries: 15, // Number of times to retry a failed request 
  }
}
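
The configuration above advances with an iterative page counter. To follow 'next page' URLs instead, the target callback can read the previous response through the handle. A minimal sketch, assuming the API returns a JSON body with a next field holding the URL of the following page (the field name is hypothetical) and that no response data is available before the first request:

  // Follow 'next page' links instead of a page counter
  navigator: new SpiderHttpNavigator(
    {
      baseTarget: 'https://api.domain.ext/v1/list',

      target: (h: SpiderHandle): string | null => {
        const previous = h.getResponseData();

        // First request: no response yet, start from the base URL
        if (!previous) {
          return h.getBaseTarget();
        }

        // Follow the URL the API handed back; stop when it is absent
        return previous.data.next || null;
      }
    }
  ),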

SpiderHandle

The spider exposes information about its current status and the most recently downloaded page by passing an instance of the SpiderHandle class to the callback functions.

SpiderHandle.getResponseData(): SpiderParsedData

Returns an object describing the data received in response to a successful query. The data element contains the parsed response (e.g. a JSON object); the raw element contains the response as a string.

interface SpiderParsedData {
  raw: string;
  data: any;
}
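
The parsed data can also drive other callbacks, such as isDone. A short sketch, assuming a hypothetical API whose JSON body contains an items array that comes back empty once the listing is exhausted:

// Stop spidering once a page arrives with no items (hypothetical 'items' field)
isDone: (h: SpiderHandle): boolean => {
  const response = h.getResponseData();
  return !!response && Array.isArray(response.data.items) && response.data.items.length === 0;
}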

SpiderHandle.getResponse(): SpiderNavigatorFetchResponse | null

Returns an object containing a descriptor of a successful query (rawResponse) and the raw data received (rawData). The contents of the rawResponse element depend on the type of SpiderNavigator in use: for SpiderHttpNavigator it is an AxiosResponse<any>; for SpiderFileNavigator it is null.

interface SpiderNavigatorFetchResponse {
  rawData: string;
  rawResponse: any;
}
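
With SpiderHttpNavigator this gives callbacks access to HTTP metadata. An illustrative sketch, assuming an Axios-style response object as described above:

// Inspect the HTTP status of the most recent fetch (SpiderHttpNavigator only)
isDone: (h: SpiderHandle): boolean => {
  const fetched = h.getResponse();
  if (!fetched || !fetched.rawResponse) {
    return false; // no HTTP response available (e.g. SpiderFileNavigator)
  }
  return fetched.rawResponse.status !== 200; // stop on anything but a plain 200 OK
}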

SpiderHandle.getBaseTarget(): string

Returns the value passed as the baseTarget element to the SpiderNavigator instance, typically a URL.

SpiderHandle.getIteration(): number

Returns the current iteration.

SpiderHandle.getPath(relativeFilename: string): string

Returns an absolute path to relativeFilename in the output directory.
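
This is handy for logging or inspecting where a page will end up on disk. A small sketch using the store's filename callback from the configuration above:

// Log the absolute destination of each stored page
filename: (h: SpiderHandle): string | null => {
  const name = `${h.getIteration()}.json`;
  console.log(`writing ${h.getPath(name)}`);
  return name;
}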

SpiderHandle.getSiteConfig(): SpiderSiteConfig

Returns the contents of the site configuration as described above.

SpiderHandle.getSpider(): SpiderTask

Returns the spider's SpiderTask instance.