galaxiat.serve.seo

v1.8.3

About

galaxiat.serve.seo is a Node.js package that lets you crawl the paths you choose on a cron schedule and serve pre-rendered HTML versions of them (for dynamically rendered apps such as React) without having to run SSR on every request.

We use it at Galaxiat to render https://galaxiatapp.com/pub/hash/dev.

Package support

galaxiat.serve.seo supports both static and dynamic routes.

Static routes are declared in the .galaxiat.json file, while dynamic routes can be delivered by a remote JSON endpoint.

Installation

Node.js 16.9.0 or newer is required.

npm install galaxiat.serve.seo
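
A start script might look like the sketch below. The createServer export is hypothetical, not a documented API of the package, and the actual entry point may differ; configuration is read from .galaxiat.json as described in the Example section.

// start.js: hypothetical usage sketch. The createServer export is an
// assumption, not the documented API of galaxiat.serve.seo.
const { createServer } = require('galaxiat.serve.seo');

// The server reads .galaxiat.json (or .galaxiat.{env}.json when
// GALAXIAT_SERVE_ENV is set) and serves pre-rendered HTML from the
// configured "public" directory.
createServer().then(() => {
  console.log('galaxiat.serve.seo started');
});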

Example

Static and remote crawl

.galaxiat.json OR .galaxiat.{env}.json

To select an env-specific config file, set the GALAXIAT_SERVE_ENV environment variable (for example, GALAXIAT_SERVE_ENV=prod loads .galaxiat.prod.json).

type : remote | local

  • remote connects to a remote Chrome instance using the URL provided on the remote key.
    • NB: include the /playwright suffix at the end of the URL.
    • For remote usage we recommend using a token; see the browserless.io docs for more info.
  • local spawns a headless Chrome browser with the arguments given on the args key.

The cron and crawl_cron values are six-field cron expressions with seconds as the leading field; for example, "0 */15 * * * *" fires every fifteen minutes. A full example config:
{
  "hostname" : "galaxiatapp.com",
  "port" : 3000,
  "type" : "remote",
  "args" : ["--no-sandbox", 
    "--disable-setuid-sandbox"],
  "remote" : "wss://chrome.shared.svc.galaxiat.fr/playwright?token=MWkH6L4K3knkG3hvsaHrnzA5g6dtfucYk5nD9YVBRRh9ZtdPyDaE",
  "target" : "http://localhost:3000",
  "public" : "./public",
  "crawl" : [
    {
      "type" : "config",
      "url" : "/path",
      "file" : "/cache/path.html",
      "cron" : "0 * * * * *"
    },
    {
      "type" : "remote",
      "json_url" : "https://api.galaxiatapp.com/seo/galaxiat.json",
      "cron" : "0 */15 * * * *"
    }
  ],
  "crawl_cron" : "* * * * * *",
  "crawl_max_num" : 3,
  "crawl_queue_num" : 10,
  "errors" : {
    "https" : false
  }
}

The remote endpoint (here https://galaxiatapp.com/seo/galaxiat.json) returns a JSON array of routes:

[
  {
    "url": "https://galaxiatapp.com/pub/hash/dev",
    "file": "/pub/hash/dev.html"
  },
  {
    "url": "https://galaxiatapp.com/pub/hash/something",
    "file": "/pub/hash/something.html"
  }
]
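
The endpoint only needs to return a JSON array of url/file pairs. Below is a minimal sketch of such an endpoint; Express is an assumption here, and any server that returns this shape works.

// seo-routes.js: minimal sketch of a remote JSON endpoint. Express is
// an assumption; any server returning this JSON shape will work.
const express = require('express');
const app = express();

// Return the dynamic routes to crawl, in the same { url, file } shape
// as the static example above.
app.get('/seo/galaxiat.json', (req, res) => {
  res.json([
    { url: 'https://galaxiatapp.com/pub/hash/dev', file: '/pub/hash/dev.html' },
    { url: 'https://galaxiatapp.com/pub/hash/something', file: '/pub/hash/something.html' },
  ]);
});

app.listen(4000, () => console.log('SEO route endpoint listening on :4000'));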

Roadmap

  • V1.X.X - Single workload implementation
    • Per-node deployment -> not so good for performance
    • Crawling is done on the local node
  • V2.X.X - Multiple workload implementation
    • Multi-node deployment -> better performance
    • Crawling is done on a remote node
  • V3.X.X - Advanced multiple workload implementation
    • Multi-node deployment + cluster cache -> better performance
    • Cache is cluster-wide instead of a local cache per node

We have added remote Chrome support; see browserless.io for more info.

Local mode is not recommended for production.

Contributing

Before creating an issue, please ensure that it hasn't already been reported/suggested.

License

This software is released under the MIT license.