Doogle
Node.js app for taking HTML snapshots of JavaScript pages to make your dynamic apps Google crawlable.
Google translates any hash-bang URLs (#!) it finds into an actual GET parameter, _escaped_fragment_. By listening for this parameter and querying Doogle, we can generate and cache static HTML pages for your JavaScript content.
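For example, when Googlebot encounters http://www.example.com/#!/about it will typically request http://www.example.com/?_escaped_fragment_=/about instead (the fragment value may be URL-encoded), and that is the request you answer with a Doogle snapshot.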
For a complete reference on Google's AJAX crawling, please see Google's getting started guide.
Install with npm: npm install doogle
Getting Started
Doogle is simple to get up and running. First of all, you need to let Doogle know which base URL you're going to be using.
var doogle = require('doogle')('http://www.example.com/');
Any time you specify a path to retrieve, it will be appended to the base URL (http://www.example.com/).
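For example, with that base URL a path of /about would be retrieved from http://www.example.com/about (the /about path is purely illustrative).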
You then need to decide where to store your HTML snapshots, and how long (in hours) they remain valid.
// Set the HTML snapshot directory.
doogle.setDirectory(__dirname + '/snapshots');
// Set the expiry in hours.
doogle.setExpiry(24);
If you don't want to use the cache at all (which isn't recommended), you can pass false to setExpiry.
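For example:
// Disable snapshot caching entirely (not recommended).
doogle.setExpiry(false);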
You then need to tell Doogle which path to fetch.
doogle.fetch('/');
As fetch returns a Q promise, you need to define what happens when Doogle resolves the promise.
doogle.fetch('/').then(function(data) {
    // "response" is assumed to be your HTTP response object (for example, from Express).
    response.send(JSON.stringify(data));
});
From there on, you're on your own! Doogle lets you decide how to serve the HTML snapshot to Googlebot.
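As an illustration only, here is a minimal sketch of how that might look with Express; the wildcard route, the port, and the way the _escaped_fragment_ value is mapped back to a path are assumptions of this sketch, not part of Doogle itself.
var express = require('express');
var doogle  = require('doogle')('http://www.example.com/');
var app     = express();

// Configure Doogle as described above.
doogle.setDirectory(__dirname + '/snapshots');
doogle.setExpiry(24);

app.get('*', function(request, response, next) {

    // Only intercept requests made via Google's AJAX crawling scheme.
    if (typeof request.query._escaped_fragment_ === 'undefined') {
        return next();
    }

    // Map the escaped fragment back onto a path and serve the cached snapshot.
    var path = request.query._escaped_fragment_ || '/';

    doogle.fetch(path).then(function(data) {
        response.send(JSON.stringify(data));
    });

});

app.listen(3000);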