trawl-4 v1.1.1

A full-fledged node.js web crawler with a MySQL backend

TRAWL4-alpha: A low-memory-footprint web crawler with a MySQL backend

Overview

This is a CLI tool for recursively crawling websites.

It:

  • discovers links and follows them recursively
  • adds crawled pages (URL, content) to a MySQL database for further processing (the flow is sketched just below)
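
In essence, the core loop looks something like the following. This is a simplified sketch to show the flow; the function names, table and column names, and the cheerio-based link extraction are assumptions, not the project's actual code.

// Simplified sketch of the crawl flow; illustrative only.
const cheerio = require('cheerio');
const db = require('./lib/db/connect'); // the MySQL pool (path and shape assumed)

async function crawl(url) {
  const res = await fetch(url);                         // global fetch (Node 18+)
  const html = await res.text();
  await db.promise().query(
    'INSERT INTO pages (url, content) VALUES (?, ?)',   // table and columns assumed
    [url, html]
  );
  const $ = cheerio.load(html);
  for (const el of $('a[href]').toArray()) {
    const link = new URL($(el).attr('href'), url).href; // resolve relative links
    await crawl(link);                                  // recurse; dedup and robots checks omitted
  }
}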

Features:

  • recursive, suitable for crawling/spidering an entire website/domain
  • respects the robots.txt standard (https://en.wikipedia.org/wiki/Robots_exclusion_standard)
  • keeps an in-memory LRU cache of discovered links so the database is not hammered with lookups (see the sketch after this list)
  • auto-restarts the crawl session to work around memory leaks
  • uses roughly 240 MB of RAM in a typical crawl session
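
To illustrate the idea behind that LRU cache (a generic sketch of the technique, not trawl-4's actual code): recently discovered URLs are answered from memory, and only cache misses need to fall back to a database lookup.

// Generic sketch of an LRU set of seen URLs, using a Map's insertion order.
// Illustrative only; not taken from the trawl-4 source.
class SeenUrls {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  has(url) {
    if (!this.map.has(url)) return false; // miss: the caller falls back to the DB
    this.map.delete(url);                 // refresh recency by re-inserting
    this.map.set(url, true);
    return true;
  }
  add(url) {
    if (this.map.size >= this.limit) {
      this.map.delete(this.map.keys().next().value); // evict the least recently used entry
    }
    this.map.set(url, true);
  }
}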

How do I get set up?

Clone this repository and follow these simple steps:

First, create an empty database (the crawler will create the tables automatically):

echo "CREATE DATABASE your_db_name" |mysql

Then modify lib/db/connect.js to suit your MySQL setup (user/password and database name).

Example: mysql://user:password@localhost/your_db_name
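
For orientation, a connection module of that shape might look roughly like this (an illustrative sketch; the actual contents of lib/db/connect.js and the driver it uses may differ):

// Illustrative sketch only; check the real lib/db/connect.js for the actual shape.
// Assumes the mysql2 driver and the mysql://user:password@localhost/your_db_name setup above.
const mysql = require('mysql2');

module.exports = mysql.createPool({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'your_db_name',
});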

Now the obligatory:

npm i

or

yarn 

You're set!

Now run the crawler with:

npm run demo

You can hit Ctrl+C to stop crawling; allow about 2 seconds for the script to finish its exit routines.

Running npm run demo again will resume the crawl.
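
The short wait exists so that in-flight work can be flushed before the process exits. A shutdown handler along these lines is the usual pattern (the variable names and paths here are illustrative, not taken from the project):

// Illustrative graceful-shutdown pattern; not the project's actual code.
const db = require('./lib/db/connect'); // the pool from the step above (path assumed)

let inFlightInserts = []; // promises for page writes that are still in progress

process.on('SIGINT', async () => {
  console.log('Stopping crawl, finishing pending writes...');
  await Promise.allSettled(inFlightInserts); // let in-flight inserts settle
  db.end();                                  // close the MySQL pool cleanly
  process.exit(0);
});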

The script will auto-restart itself every 100 URLs to work around a memory leak in cheerio.
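
One common way to implement that kind of self-restart (a sketch of the general pattern, not necessarily how trawl-4 does it) is a small supervisor loop that respawns the crawler process after each batch:

// Illustrative supervisor pattern; the crawl.js entry point and exit-code convention are assumed.
const { spawnSync } = require('child_process');

// The child processes a batch of URLs (e.g. 100) and then exits on its own,
// so any memory held by cheerio is released along with the process.
for (;;) {
  const { status } = spawnSync('node', ['crawl.js'], { stdio: 'inherit' });
  if (status !== 0) break; // a non-zero exit means "done" or an error; stop respawning
}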

See lib/constants.js for further settings: crawl delay, in-memory LRU cache size, user agent, and more.
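
For orientation, such a constants file typically looks something like the following (the names and values below are illustrative, not the project's actual defaults):

// Illustrative shape only; see the real lib/constants.js for the actual names and values.
module.exports = {
  CRAWL_DELAY_MS: 1000,    // pause between requests
  LRU_CACHE_SIZE: 10000,   // number of discovered URLs kept in memory
  USER_AGENT: 'trawl-4',   // sent with every request
  URLS_PER_SESSION: 100,   // restart the process after this many URLs
};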

Be a good citizen

Please don't abuse the demo configuration; write your own (for example ./config/my_config.js) and run it with

node runner.js --preset my_config
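
A preset is just a configuration module. Something along these lines would be a reasonable starting point (the option names are illustrative; mirror whatever the demo config in ./config actually uses):

// ./config/my_config.js: illustrative preset; copy the demo config and adjust it.
module.exports = {
  startUrl: 'https://example.com/', // where the crawl begins
  allowedDomains: ['example.com'],  // stay within this domain
  crawlDelayMs: 2000,               // be polite: one request every 2 seconds
};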