

Sitemap Generator CLI


Create XML sitemaps from the command line.

Generates a sitemap by crawling your site. Uses streams to write the sitemap to disk efficiently, and is capable of creating multiple sitemaps if a threshold is reached. Respects robots.txt and meta tags.


Install

This module is available on npm.

npm install -g sitemap-generator-cli
# or execute it directly with npx (since npm v5.2)
npx sitemap-generator-cli https://example.com

Usage

The crawler fetches all folder URLs and all file types parsed by Google. If a robots.txt is present, its rules are applied to each URL to decide whether it should be added to the sitemap. The crawler will not follow links on a page whose robots meta tag contains the value nofollow, and will exclude a page entirely if the noindex rule is present. The crawler also applies the base href value to the links it finds.

sitemap-generator [options] <url>

When the crawler has finished, the XML sitemap is built and saved to the specified filepath. If more than 50000 pages were fetched, the output is split into several sitemap files and a sitemap index file is created, since Google does not allow more than 50000 items in a single sitemap.

Example:

sitemap-generator http://example.com
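A more complete invocation, printing crawl details while writing the result to a custom path (the output path here is illustrative; both flags are described below):

sitemap-generator --verbose --filepath ./public/sitemap.xml https://example.com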

Options

sitemap-generator --help

  Usage: cli [options] <url>

  Options:

    -V, --version                           output the version number
    -f, --filepath <filepath>               path to file including filename (default: sitemap.xml)
    -m, --max-entries <maxEntries>          limits the maximum number of URLs per sitemap file (default: 50000)
    -d, --max-depth <maxDepth>              limits the maximum distance from the original request (default: 0)
    -q, --query                             consider query string
    -u, --user-agent <agent>                set custom User Agent
    -v, --verbose                           print details when crawling
    -c, --max-concurrency <maxConcurrency>  maximum number of requests the crawler will run simultaneously (default: 5)
    -r, --no-respect-robots-txt             controls whether the crawler should respect rules in robots.txt
    -l, --last-mod                          add Last-Modified header to xml
    -g, --change-freq <changeFreq>          adds a <changefreq> line to each URL in the sitemap.
    -p, --priority-map <priorityMap>        priority for each depth url, values between 1.0 and 0.0, example: "1.0,0.8 0.6,0.4"
    -h, --help                              output usage information

filepath

Path of the file to write, including the filename itself. The path can be absolute or relative. Default is sitemap.xml.

Examples:

  • sitemap.xml
  • mymap.xml
  • /var/www/sitemap.xml
  • ./sitemap.myext
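For example, using the short flag to write to a relative path:

sitemap-generator -f mymap.xml https://example.com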

maxConcurrency

Sets the maximum number of requests the crawler will run simultaneously (default: 5).
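For example, to go easy on the target server by limiting the crawler to two parallel requests:

sitemap-generator --max-concurrency 2 https://example.com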

maxEntries

Defines a limit of URLs per sitemap file, useful for sites with lots of URLs. Defaults to 50000.
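For example, to start a new sitemap file (plus a sitemap index) after every 10,000 URLs:

sitemap-generator --max-entries 10000 https://example.com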

maxDepth

Sets a maximum distance from the original request up to which URLs are crawled, useful for generating smaller sitemap.xml files. Defaults to 0, which means all levels are crawled.
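For example, to crawl only the start page and pages up to two links away from it:

sitemap-generator --max-depth 2 https://example.com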

noRespectRobotsTxt

Passing --no-respect-robots-txt makes the crawler ignore the rules defined in robots.txt. By default, robots.txt is respected.

query

Considers URLs with query strings, like http://www.example.com/?foo=bar, as individual pages and adds them to the sitemap.
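For example, with this flag set, http://www.example.com/?foo=bar and http://www.example.com/ are kept as separate sitemap entries:

sitemap-generator --query https://www.example.com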

user-agent

Sets a custom User Agent used for crawling. Default is Node/SitemapGenerator.
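For example (the agent string here is illustrative):

sitemap-generator --user-agent "MyCrawler/1.0" https://example.com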

verbose

Prints debug messages during the crawling process and a summary when finished.

last-mod

Adds a <lastmod> entry, taken from the Last-Modified header, to each URL in the sitemap.

change-freq

Adds a <changefreq> line to each URL in the sitemap. Expects one of the standard sitemap change frequencies, e.g. always, hourly, daily, weekly, monthly, yearly, or never.
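For example, marking every URL as changing weekly and including its last modification date:

sitemap-generator --change-freq weekly --last-mod https://example.com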

priority-map

Adds a <priority> value to each URL according to its crawl depth. Values range between 1.0 and 0.0, example: "1.0,0.8 0.6,0.4".
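For example, giving the start page priority 1.0 and each successive depth the next value from the map (the map string follows the format shown above):

sitemap-generator --priority-map "1.0,0.8 0.6,0.4" https://example.com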

License

MIT © Lars Graubner