
crawlme

v0.0.7

Published

Makes your ajax web application indexable by search engines by generating HTML snapshots on the fly. Caches results for blazing-fast responses and better page ranking.

Downloads

6

Readme

# Crawlme

A Connect/Express middleware that makes your node.js web application indexable by search engines. Crawlme generates static HTML snapshots of your JavaScript web application on the fly and has a built-in, periodically refreshing in-memory cache, so even though snapshot generation may take a second or two, search engines get them very quickly. This is beneficial for SEO, since response time is one of the factors used in the page rank algorithm.

Making ajax applications crawlable has always been tricky, since search engines don't execute the JavaScript on the sites they crawl. The solution is to provide the search engines with pre-rendered HTML versions of each page on your site, but creating those HTML versions has until now been a tedious and error-prone process with many manual steps. Crawlme fixes this by rendering HTML snapshots of your web application on the fly whenever the Googlebot crawls your site. Besides making the more or less manual creation of indexable HTML versions of your site obsolete, this also means that Google will always index the latest version of your site rather than some stale pre-rendered version.

Follow optimalbits for news and updates regarding this library.

## How to use

  1. Make your ajax app use the hashbang #! instead of just # in its URLs. This tells Google that those URLs support ajax crawling and indexing.
  2. Insert the Crawlme middleware before your server in the Connect/Express middleware chain.
  3. Sit back and relax. Crawlme takes care of the rest. :)

## Example

```js
var connect = require('connect'),
    http    = require('http'),
    crawlme = require('crawlme');

var app = connect()
  .use(crawlme())
  .use(connect.static(__dirname + '/webroot'));

http.createServer(app).listen(3000);
```
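
If you use Express rather than bare Connect, the setup is essentially the same. The following is a minimal sketch, not an official example from this package, and assumes an Express version that ships `express.static` (3.x or later):

```js
var express = require('express'),
    http    = require('http'),
    crawlme = require('crawlme');

var app = express();

// Mount crawlme before your static handler and routes so it can intercept
// _escaped_fragment_ requests and answer them with HTML snapshots.
app.use(crawlme());
app.use(express.static(__dirname + '/webroot'));

http.createServer(app).listen(3000);
```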

## Install

```
npm install crawlme
```

## How it works

Google detects that your page `your.server.com/page.html#!key=value` is ajax-crawlable by the hashbang `#!` in the URL. The Googlebot doesn't evaluate JavaScript, so it can't index the page directly. Instead it requests the URL `your.server.com/page.html?_escaped_fragment_=key=value` and expects to find an HTML snapshot of your page there. Crawlme catches all requests to this kind of URL and generates an HTML snapshot of the original ajax page on the fly.
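
As an illustration of that mapping, here is a small sketch of the URL rewrite described above. The helper is not part of crawlme's API, and it omits the extra URL-encoding of special characters that Google's scheme applies:

```js
// Sketch only: shows how a hashbang URL corresponds to the
// _escaped_fragment_ URL that the Googlebot requests instead.
function toEscapedFragmentUrl(hashbangUrl) {
  var parts = hashbangUrl.split('#!');
  if (parts.length < 2) return hashbangUrl;                  // no hashbang, nothing to map
  var separator = parts[0].indexOf('?') === -1 ? '?' : '&';  // keep any existing query string
  return parts[0] + separator + '_escaped_fragment_=' + parts[1];
}

console.log(toEscapedFragmentUrl('http://your.server.com/page.html#!key=value'));
// -> http://your.server.com/page.html?_escaped_fragment_=key=value
```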

More info on Google's ajax crawling can be found here

## Test

### Running the unit tests

  1. Install dev-dependencies: `npm install`
  2. Run the test suite: `npm test`

### Testing that your ajax web application is crawlable

Pick an ajax URL to some part of your web application, for example `your.server.com/page.html#!key=value`. Now replace the hashbang with `?_escaped_fragment_=`; the new URL will be `your.server.com/page.html?_escaped_fragment_=key=value`. Go to that URL: Crawlme should intercept the request and render an HTML snapshot of your page.
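
The same check can be scripted. Here is a minimal sketch that assumes the example app above is running locally on port 3000 and uses a hypothetical page and fragment:

```js
// Illustrative smoke test, not part of crawlme: request the
// _escaped_fragment_ form of a page and print the snapshot HTML.
var http = require('http');

http.get('http://localhost:3000/page.html?_escaped_fragment_=key=value', function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    console.log(res.statusCode);   // 200 if a snapshot was rendered
    console.log(body);             // the pre-rendered HTML snapshot
  });
});
```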

You can also use Google's Fetch as Googlebot tool.

## Reference

`crawlme(options)`

Arguments

  • `options` {Object} options object

## Options

Crawlme provides the following configuration options (a configuration sketch follows the list):

  • `waitFor`: The time (in ms) Crawlme waits before it assumes that your ajax page has finished loading and takes an HTML snapshot. Set this high enough to make sure that your page loads completely before the snapshot is taken. Defaults to 1000 ms.
  • `protocol`: The protocol Crawlme should use to fetch the ajax pages. Under Express this is determined automatically; under plain Connect this option is used. Defaults to `http`.
  • `cacheSize`: The size of the cache Crawlme uses for snapshots. `String.prototype.length` is used to determine the "size" of each snapshot. A cache size of 0 disables caching. Defaults to 2^20.
  • `cacheRefresh`: The number of seconds between cache refreshes. Defaults to 15 minutes (900 seconds).
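
For example, here is a configuration sketch using the options above; the values are illustrative, not recommendations:

```js
var connect = require('connect'),
    http    = require('http'),
    crawlme = require('crawlme');

var app = connect()
  .use(crawlme({
    waitFor: 2000,        // give the page up to 2 s to finish loading before snapshotting
    protocol: 'http',     // only needed under plain Connect; Express detects this itself
    cacheSize: 1 << 20,   // cache up to 2^20 snapshot characters (the default)
    cacheRefresh: 900     // refresh cached snapshots every 15 minutes (the default)
  }))
  .use(connect.static(__dirname + '/webroot'));

http.createServer(app).listen(3000);
```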

## Under the hood

Crawlme uses the excellent headless browser zombie.js to render the HTML snapshots.

## License

(The MIT License)

Copyright (c) 2012 Optimal Bits Sweden AB (http://optimalbits.com)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.