
sparender

v2.0.1

Dynamically renders SPA sites, providing an SEO solution for JavaScript sites and search engines.

What is this?

This is a high-performance SSR solution based on puppeteer. It uses Headless Chrome to generate HTML from a web page and then returns the HTML content over HTTP.
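
The core flow can be sketched in a few lines of Node.js. This is only an illustration of the idea, not the project's actual source; Express is an assumption for the example, while the /render route and port 3001 follow the local example shown later in this README.

const express = require('express');
const puppeteer = require('puppeteer');

const app = express();

// Render a URL with Headless Chrome and return the resulting HTML over HTTP.
app.get('/render', async (req, res) => {
  const url = req.query.url;
  if (!url) return res.status(400).send('missing url parameter');

  // The sketch launches a fresh browser per request; the real project keeps a
  // browser pool instead (see the feature list below).
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle0' }); // wait until the SPA has finished loading
    const html = await page.content();                   // serialized, fully rendered DOM
    res.set('Content-Type', 'text/html').send(html);
  } finally {
    await browser.close();
  }
});

app.listen(3001, () => console.log('render service listening on :3001'));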

What problem does it solve?

Many companies and developers build applications and websites with JavaScript frameworks (including AngularJS, BackboneJS, ReactJS, VueJS). However, many search engines, social media platforms, and crawlers cannot execute JavaScript when crawling pages, which makes SEO for such sites impossible.

By inspecting the UserAgent, requests that come from crawlers can be reverse-proxied (via nginx, Tomcat, Apache, etc.) to this service, which passes the fully rendered HTML back to the search engine. This achieves SEO indirectly while keeping a pure front-end development workflow and avoiding the server load that full SSR would incur.

It can also be used for crawler data collection, generating page screenshots, and generating page PDFs.
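
For the screenshot and PDF use cases, plain puppeteer already provides the needed calls; a minimal stand-alone sketch (illustrative only, not taken from this project's code):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('http://www.example.com', { waitUntil: 'networkidle0' });

  await page.screenshot({ path: 'page.png', fullPage: true }); // full-page screenshot
  await page.pdf({ path: 'page.pdf', format: 'A4' });          // PDF export (headless mode only)

  await browser.close();
})();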

Usage

git clone  
cd sparender
npm i
npm start

Free access

Free access to the hosted SSR rendering service.

Request URL: http://api.zuoyanit.com/render

Method: GET

Example request: http://api.zuoyanit.com/render/http://www.zuoyanit.com

For the reverse-proxy configuration, see the documentation.

  • To prevent abuse, please contact the author before use so your domain can be added to the whitelist

  • 200 pages are provided free of charge (measured by the number of records stored in redis)

Check the result

http://127.0.0.1:3001/render?url=http://www.example.com

Features

  • puppeteer connection pool (see the sketch after this list)
  • render concurrency limit
  • log4j logging
  • built-in task scheduling
  • production and development environment configurations
  • redis cache
  • automatic request detection: if a request comes from a mobile device, the request UA and viewport are set automatically (using iPhone X emulation parameters)
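
As a rough illustration of the connection pool and the render concurrency limit, the sketch below pools browser instances with the generic-pool package. This is an assumption for the example: the actual project may pool pages or use a different mechanism, and the pool sizes shown are made up.

const puppeteer = require('puppeteer');
const genericPool = require('generic-pool');

// Pool of Headless Chrome instances; `max` doubles as the concurrency limit.
const browserPool = genericPool.createPool(
  {
    create: () => puppeteer.launch({ headless: true }),
    destroy: (browser) => browser.close(),
  },
  { min: 2, max: 10 }
);

async function render(url) {
  const browser = await browserPool.acquire(); // waits if 10 renders are already in flight
  const page = await browser.newPage();
  try {
    await page.goto(url, { waitUntil: 'networkidle0' });
    return await page.content();
  } finally {
    await page.close();                  // avoid leaking tabs
    await browserPool.release(browser);  // hand the browser back to the pool
  }
}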

Performance comparison

Server: 12 cores, 16 GB RAM; concurrency: 10; duration: 60 s

Project configuration: cache disabled; images, fonts, and other media blocked

Request URL: http://xxxx/render?url=https://www.baidu.com

Comparison of rendering approaches

The following content is excerpted from https://markdowner.net/article/73058307795021824, with some modifications based on my own experience.

Server-side dynamic rendering (based on the user-agent)

To improve the user experience we use SPA techniques; for SEO we use SSR, prerendering, and so on. Each approach has trade-offs, and no single one combines all the advantages. But if you think about it, the users who need those different advantages are actually different audiences: SPA targets ordinary browser users, while SSR targets web crawlers such as googlebot and baiduspider. So why not serve different pages to different users? That is exactly what server-side dynamic rendering does.

Basic principle: the server inspects the request's user-agent. Browsers get the SPA page directly; crawlers get a dynamically rendered HTML page. (Since spiders will not cause a DDoS, this approach saves considerable server resources compared with full SSR.)
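
A minimal sketch of that principle as Express middleware. In practice the user-agent check is usually done in nginx (or another reverse proxy) as described above; Express, the bot list, and the URLs here are illustrative assumptions only.

const express = require('express');

const BOT_UA = /googlebot|baiduspider|bingbot|twitterbot|facebookexternalhit/i;
const RENDER_SERVICE = 'http://127.0.0.1:3001/render?url=';

const app = express();

app.use(async (req, res, next) => {
  const ua = req.headers['user-agent'] || '';
  if (!BOT_UA.test(ua)) return next(); // ordinary browsers get the SPA as usual

  // Crawlers get pre-rendered HTML from the dynamic-rendering service.
  const target = RENDER_SERVICE + encodeURIComponent('https://' + req.headers.host + req.originalUrl);
  const html = await (await fetch(target)).text(); // global fetch, Node 18+
  res.set('Content-Type', 'text/html').send(html);
});

app.use(express.static('dist')); // static build of the SPA

app.listen(8080);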

PS: You may ask: if crawlers are served a different page, will that be considered cloaking (search spam)? Google has answered this:

Dynamic rendering is not cloaking. Googlebot generally doesn't consider dynamic rendering as cloaking. As long as your dynamic rendering produces similar content, Googlebot won't view dynamic rendering as cloaking. When you're setting up dynamic rendering, your site may produce error pages. Googlebot doesn't consider these error pages as cloaking and treats the error as any other error page. Using dynamic rendering to serve completely different content to users and crawlers can be considered cloaking. For example, a website that serves a page about cats to users and a page about dogs to crawlers can be considered cloaking.

In other words, if we are not deliberately cheating but simply using dynamic rendering to solve the SEO problem, the crawler will compare the site content, find no significant difference, and will not treat it as cheating.

As for Baidu, see "Is docin.com doing black-hat SEO?"

Docin checks the user-agent and directs Baiduspider to a plain HTML page.

The basic explanation is:

Indeed, judging from a single feature it is hard to distinguish cloaking-style spam from docin-style search optimization, but a search engine never judges spam on a single dimension. For a site like docin, signals such as its backlink graph, Alexa traffic, and user click behavior in the search results are more than enough to keep it out of the spam bucket.

Besides, typical spam usually shows many other traits: keyword stuffing, incoherent text, link farms, and so on. In short, anti-spam is a comprehensive algorithm that weighs many factors before reaching a verdict.

And if all else fails, there is always a whitelist as a final fallback, which is more than enough to rescue big sites like these.

So as long as you do not go overboard with black-hat or gray-hat techniques, Baidu will not treat it as cheating either.