
@us3r-network/metadata-scraper v0.3.3

metadata-scraper


A JavaScript library for scraping/parsing metadata from a web page.

👋 Introduction

metadata-scraper is a JavaScript library that scrapes/parses metadata from web pages. You only need to supply it with a URL or an HTML string, and it will apply a set of rules to find the most relevant metadata, such as:

  • Title
  • Description
  • Favicons/Images
  • Language
  • Keywords
  • Author
  • and more (full list below)

🚀 Get started

Install metadata-scraper via npm:

npm install metadata-scraper

📚 Usage

Import metadata-scraper and pass it a URL or options object:

const getMetaData = require('metadata-scraper')

const url = 'https://github.com/BetaHuhn/metadata-scraper'

getMetaData(url).then((data) => {
	console.log(data)
})

Or with async/await:

const getMetaData = require('metadata-scraper')

async function run() {
	const url = 'https://github.com/BetaHuhn/metadata-scraper'
	const data = await getMetaData(url)
	console.log(data)
}

run()

This will return:

{
	title: 'BetaHuhn/metadata-scraper',
	description: 'A Javascript library for scraping/parsing metadata from a web page.',
	language: 'en',
	url: 'https://github.com/BetaHuhn/metadata-scraper',
	provider: 'GitHub',
	twitter: '@github',
	image: 'https://avatars1.githubusercontent.com/u/51766171?s=400&v=4',
	icon: 'https://github.githubassets.com/favicons/favicon.svg'
}

You can see a list of all the metadata that metadata-scraper tries to scrape below.

⚙️ Configuration

You can change the behaviour of metadata-scraper by passing an options object:

const getMetaData = require('metadata-scraper')

const options = {
	url: 'https://github.com/BetaHuhn/metadata-scraper', // URL of web page
	maxRedirects: 0, // Maximum number of redirects to follow (default: 5)
	ua: 'MyApp', // Specify User-Agent header
	lang: 'de-CH', // Specify Accept-Language header
	timeout: 1000, // Request timeout in milliseconds (default: 10000ms)
	forceImageHttps: false, // Force all image URLs to use https (default: true)
	customRules: {} // more info below
}

getMetaData(options).then((data) => {
	console.log(data)
})

You can specify the URL by either passing it as the first parameter, or by setting it in the options object.
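Both call styles resolve to the same options shape. As a rough illustration of that equivalence (a hypothetical sketch, not the library's actual implementation), normalizing the first argument might look like this:

```javascript
// Hypothetical sketch: normalize a URL-or-options argument into a
// single options object (illustration only; not metadata-scraper's code).
function normalizeInput(input) {
	// A plain string is treated as the URL to scrape
	if (typeof input === 'string') {
		return { url: input }
	}
	// Otherwise assume it is already an options object
	return { ...input }
}

console.log(normalizeInput('https://example.com'))
console.log(normalizeInput({ url: 'https://example.com', timeout: 1000 }))
```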

📖 Examples

Here are some examples of how to use metadata-scraper:

Basic

Pass a URL as the first parameter and metadata-scraper automatically scrapes it and returns everything it finds:

const getMetaData = require('metadata-scraper')
const data = await getMetaData('https://github.com/BetaHuhn/metadata-scraper')

Example file located at examples/basic.js.


HTML String

If you already have an HTML string and don't want metadata-scraper to make an HTTP request, specify it in the options object:

const getMetaData = require('metadata-scraper')

const html = `
	<meta name="og:title" content="Example">
	<meta name="og:description" content="This is an example.">
`

const options = {
	html: html, 
	url: 'https://example.com' // Optional URL to make relative image paths absolute
}

const data = await getMetaData(options)

Example file located at examples/html.js.


Custom Rules

Look at the rules.ts file in the src directory to see all rules which will be used.

You can expand metadata-scraper easily by specifying custom rules:

const getMetaData = require('metadata-scraper')

const options = {
	url: 'https://github.com/BetaHuhn/metadata-scraper',
	customRules: {
		name: {
			rules: [
				[ 'meta[name="customName"][content]', (element) => element.getAttribute('content') ]
			],
			processor: (text) => text.toLowerCase()
		}
	}
}

const data = await getMetaData(options)

customRules needs to contain one or more objects, where each key (name above) identifies the value in the returned data.

You can then specify different rules for each item in the rules array.

The first item is the selector that gets passed to the browser's querySelector function, and the second item is a function that receives the matched HTML element:

[ 'querySelector', (element) => element.innerText ]

You can also specify a processor function which will process/transform the result of one of the matched rules:

{
	processor: (text) => text.toLowerCase()
}

If you find a useful rule, let me know and I will add it (or create a PR yourself).

Example file located at examples/custom.js.
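To make the rule-tuple shape concrete, here is a small standalone sketch of how such a [selector, extractor] pair could be evaluated. This is purely illustrative: it uses a stubbed document object rather than a real DOM, and the real library's internals may differ.

```javascript
// Hypothetical evaluator for a [selector, extractor] rule tuple
// (illustration only; the real library runs rules against a parsed DOM).
function evaluateRule([selector, extractor], document, processor) {
	const element = document.querySelector(selector)
	if (!element) {
		return undefined
	}
	const result = extractor(element)
	return processor ? processor(result) : result
}

// Minimal stand-in for a DOM document containing one matching meta element
const fakeDocument = {
	querySelector: (selector) =>
		selector === 'meta[name="customName"][content]'
			? { getAttribute: () => 'Example' }
			: null
}

const rule = [ 'meta[name="customName"][content]', (element) => element.getAttribute('content') ]
const value = evaluateRule(rule, fakeDocument, (text) => text.toLowerCase())
console.log(value) // 'example'
```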

📇 All metadata

Here's what metadata-scraper currently tries to scrape:

{
	title: 'Title of page or article',
	description: 'Description of page or article',
	language: 'Language of page or article',
	type: 'Page type',
	url: 'URL of page',
	provider: 'Page provider',
	keywords: ['array', 'of', 'keywords'],
	section: 'Section/Category of page',
	author: 'Article author',
	published: 1605221765, // Date the article was published
	modified: 1605221765, // Date the article was modified
	robots: ['array', 'for', 'robots'],
	copyright: 'Page copyright',
	email: 'Contact email',
	twitter: 'Twitter handle',
	facebook: 'Facebook account id',
	image: 'Image URL',
	icon: 'Favicon URL',
	video: 'Video URL',
	audio: 'Audio URL'
}
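Not every page provides every field, so entries may be missing from the result. A small generic helper (plain JavaScript, not part of metadata-scraper) can drop empty entries before logging or storing the data:

```javascript
// Generic helper (not part of metadata-scraper): keep only the entries
// of a result object that actually have a value.
function pickDefined(data) {
	return Object.fromEntries(
		Object.entries(data).filter(([, value]) => value !== undefined && value !== null)
	)
}

const example = { title: 'Example', description: undefined, language: 'en' }
console.log(pickDefined(example)) // { title: 'Example', language: 'en' }
```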

If you find a useful metatag, let me know and I will add it (or create a PR yourself).

💻 Development

Issues and PRs are very welcome!

Please check out the contributing guide before you start.

This project adheres to Semantic Versioning. To see the differences from previous versions, refer to the CHANGELOG.

❔ About

This library was developed by me (@betahuhn) in my free time. If you want to support me:

Donate via PayPal

Credits

This library is based on Mozilla's page-metadata-parser. I converted it to TypeScript, implemented a few new features, and added more rules.

License

Copyright 2020 Maximilian Schiller

This project is licensed under the MIT License - see the LICENSE file for details.