
grunt-escaped-seo

Generate a static SEO version and sitemap for sites using Google's escaped fragments

Getting Started

This plugin requires Grunt ~0.4.1

If you haven't used Grunt before, be sure to check out the Getting Started guide, as it explains how to create a Gruntfile as well as install and use Grunt plugins. Once you're familiar with that process, you may install this plugin with this command:

npm install grunt-escaped-seo --save-dev

Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:

grunt.loadNpmTasks('grunt-escaped-seo');

This plugin requires a local installation of PhantomJS (phantomjs.org) and the npm "phantom" module (~0.6.1).
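
If needed, the "phantom" module can be installed alongside the plugin; PhantomJS itself has to be installed separately, e.g. from phantomjs.org:

npm install phantom@~0.6.1 --save-dev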

The "escaped_seo" task

Overview

Thanks to Mathieu Desvé (https://github.com/mazerte), who brought the idea and contributed some of the code.

Use this plugin to generate a static version of your AJAX-driven single-page application. This static version can be crawled by Googlebot, and the generated sitemap.xml tells Google which pages to index.

To work with Googlebot you need to follow Google's AJAX-crawling specification (https://developers.google.com/webmasters/ajax-crawling/docs/specification): use #! hash fragments in your URLs, or add a meta tag to your HTML page:

<meta name="fragment" content="!">
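
Under this scheme, the crawler replaces the #! fragment with an _escaped_fragment_ query parameter, which your server then maps to a prerendered page. For example (the domain is illustrative):

http://yourdomain.com/#!about
  -> crawled as   http://yourdomain.com/?_escaped_fragment_=about
  -> rewritten to /seo/about.html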

Don't forget to add a rewrite rule. For example, in an .htaccess file on an Apache server:

<ifModule mod_rewrite.c>
    RewriteCond %{QUERY_STRING} ^_escaped_fragment_=$
    RewriteRule ^$ /seo/index.html [L]
    RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
    RewriteRule ^$ /seo/%1.html [L]
</ifModule>

And if you are using pushState, these rules serve the prerendered /seo pages only to known crawler user agents:

<ifModule mod_rewrite.c>
    RewriteCond %{HTTP_USER_AGENT} (Googlebot|bingbot|Googlebot-Mobile|Baiduspider|Yahoo|YahooSeeker|DoCoMo|Twitterbot|TweetmemeBot|Twikle|Netseer|Daumoa|SeznamBot|Ezooms|MSNBot|Exabot|MJ12bot|sogou\sspider|YandexBot|bitlybot|ia_archiver|proximic|spbot|ChangeDetection|NaverBot|MetaJobBot|magpie-crawler|Genieo\sWeb\sfilter|Qualidator.com\sBot|Woko|Vagabondo|360Spider|ExB\sLanguage\sCrawler|AddThis.com|aiHitBot|Spinn3r|BingPreview|GrapeshotCrawler|CareerBot|ZumBot|ShopWiki|bixocrawler|uMBot|sistrix|linkdexbot|AhrefsBot|archive.org_bot|SeoCheckBot|TurnitinBot|VoilaBot|SearchmetricsBot|Butterfly|Yahoo!|Plukkie|yacybot|trendictionbot|UASlinkChecker|Blekkobot|Wotbox|YioopBot|meanpathbot|TinEye|LuminateBot|FyberSpider|Infohelfer|linkdex.com|Curious\sGeorge|Fetch-Guess|ichiro|MojeekBot|SBSearch|WebThumbnail|socialbm_bot|SemrushBot|Vedma|alexa\ssite\saudit|SEOkicks-Robot|Browsershots|BLEXBot|woriobot|AMZNKAssocBot|Speedy|oBot|HostTracker|OpenWebSpider|WBSearchBot|FacebookExternalHit) [NC]
    RewriteRule ^$ /seo/index.html [QSA,L]

    RewriteCond %{HTTP_USER_AGENT} (Googlebot|bingbot|Googlebot-Mobile|Baiduspider|Yahoo|YahooSeeker|DoCoMo|Twitterbot|TweetmemeBot|Twikle|Netseer|Daumoa|SeznamBot|Ezooms|MSNBot|Exabot|MJ12bot|sogou\sspider|YandexBot|bitlybot|ia_archiver|proximic|spbot|ChangeDetection|NaverBot|MetaJobBot|magpie-crawler|Genieo\sWeb\sfilter|Qualidator.com\sBot|Woko|Vagabondo|360Spider|ExB\sLanguage\sCrawler|AddThis.com|aiHitBot|Spinn3r|BingPreview|GrapeshotCrawler|CareerBot|ZumBot|ShopWiki|bixocrawler|uMBot|sistrix|linkdexbot|AhrefsBot|archive.org_bot|SeoCheckBot|TurnitinBot|VoilaBot|SearchmetricsBot|Butterfly|Yahoo!|Plukkie|yacybot|trendictionbot|UASlinkChecker|Blekkobot|Wotbox|YioopBot|meanpathbot|TinEye|LuminateBot|FyberSpider|Infohelfer|linkdex.com|Curious\sGeorge|Fetch-Guess|ichiro|MojeekBot|SBSearch|WebThumbnail|socialbm_bot|SemrushBot|Vedma|alexa\ssite\saudit|SEOkicks-Robot|Browsershots|BLEXBot|woriobot|AMZNKAssocBot|Speedy|oBot|HostTracker|OpenWebSpider|WBSearchBot|FacebookExternalHit) [NC]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^[#!/]*([\w\/\-_]*)$ /seo/$1.html [QSA,L]
</ifModule>

In your project's Gruntfile, add a section named escaped-seo to the data object passed into grunt.initConfig().

grunt.initConfig({
  'escaped-seo': {
    options: {
      domain: 'http://yourdomain.com'
    },
  },
})
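
As a minimal sketch, you can then chain the task after your regular build steps, assuming the task name matches the 'escaped-seo' config key above ('uglify' here is just a placeholder for your own tasks):

// Generate the static SEO pages after the normal build.
grunt.registerTask('build', ['uglify', 'escaped-seo']);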

In your HTML you can add a nofollow class to any tag, and that tag and its contents will be skipped:

<div class="nofollow">
    This content will not be indexed
</div>

Options

options.domain

Type: String

The final domain of your site, used for the sitemap.xml

options.server

Type: String
Default value: options.domain

The server to crawl when generating the static version and the site tree. By default options.domain is used.

options.delay

Type: Number
Default value: 2000

Time to wait (in milliseconds) before capturing each page, so the JavaScript has time to generate the whole page.

options.public

Type: String
Default value: dist

The local folder corresponding to your public document root. The sitemap and the static version will be created inside it.

options.folder

Type: String
Default value: seo

The subfolder, inside options.public, in which the static HTML files will be created.

options.changefreq

Type: String
Default value: daily

The changefreq value to use in the sitemap.xml
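
For illustration, an entry in the generated sitemap.xml might then look like this (a sketch only; the domain is illustrative and the exact output may differ):

<url>
    <loc>http://yourdomain.com/about</loc>
    <changefreq>daily</changefreq>
    <priority>0.5</priority>
</url>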

options.replace

Type: Object
Default value: {}

You can define replace rules for the static HTML versions in this object. Each value (a String or RegExp) will be replaced by its corresponding key. If you use a String instead of a RegExp, only the first occurrence will be replaced.

Usage Examples

grunt.initConfig({
  'escaped-seo': {
    options: {
      domain: 'http://pr0d.fr',
      server: 'http://localhost:9001',
      public: 'dist',
      folder: 'seo',
      changefreq: 'daily',
      delay: 2000,
      replace: {
        '[email protected]': /[a-z0-9_\-\.]+@[a-z0-9_\-\.]+\.[a-z]*/gi
      }
    }
  }
})

Contributing

In lieu of a formal styleguide, take care to maintain the existing coding style.

Release History

0.5.1 Fix the priorities in the sitemap
0.5.0 Add the nofollow possibility
0.4.1 Add index on files inside folders if needed
0.4.0 Add the protocol inside the sitemap loc
0.3.1 Bug correction with sitemap domain
0.3.0 Bug correction with redirection domain
0.2.0 Pushstate compatibility added