

Feedparser - Robust RSS, Atom, and RDF feed parsing in Node.js


Feedparser is for parsing RSS, Atom, and RDF feeds in Node.js.

It has a couple features you don't usually see in other feed parsers:

  1. It resolves relative URLs (such as those seen in Tim Bray's "ongoing" feed).
  2. It properly handles XML namespaces (including those in unusual feeds that define a non-default namespace for the main feed elements).

Installation

npm install feedparser

Usage

This example is just to briefly demonstrate basic concepts.

Please also review the complete example for a thorough working example that is a suitable starting point for your app.


var FeedParser = require('feedparser');
var fetch = require('node-fetch'); // for fetching the feed

var req = fetch('http://somefeedurl.xml');
var feedparser = new FeedParser(); // pass an options object here if you need one (see "options" below)

req.then(function (res) {
  if (res.status !== 200) {
    throw new Error('Bad status code');
  }
  else {
    // The response `body` -- res.body -- is a stream
    res.body.pipe(feedparser);
  }
}, function (err) {
  // handle any request errors
});

feedparser.on('error', function (error) {
  // always handle errors
});

feedparser.on('readable', function () {
  // This is where the action is!
  var stream = this; // `this` is `feedparser`, which is a stream
  var meta = this.meta; // **NOTE** the "meta" is always available in the context of the feedparser instance
  var item;

  while ((item = stream.read())) {
    console.log(item);
  }
});

You can also check out this nice working implementation that demonstrates one way to handle all the hard and annoying stuff. :smiley:

options

  • normalize - Set to false to override Feedparser's default behavior, which is to parse feeds into an object that contains the generic properties patterned after (although not identical to) the RSS 2.0 format, regardless of the feed's format.

  • addmeta - Set to false to override Feedparser's default behavior, which is to add the feed's meta information to each article.

  • feedurl - The url (string) of the feed. FeedParser is very good at resolving relative urls in feeds. But some feeds use relative urls without declaring the xml:base attribute any place in the feed. This is perfectly valid, but we don't know the feed's url before we start parsing the feed and trying to resolve those relative urls. If we discover the feed's url, we will go back and resolve the relative urls we've already seen, but this takes a little time (not much). If you want to be sure we never have to re-resolve relative urls (or if FeedParser is failing to properly resolve relative urls), you should set the feedurl option. Otherwise, feel free to ignore this option.

  • resume_saxerror - Set to false to override Feedparser's default behavior, which is to emit any SAXError on error and then automatically resume parsing. In my experience, SAXErrors are not usually fatal, so this is usually helpful behavior. If you want total control over handling these errors and optionally aborting parsing the feed, use this option.
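
Here is a minimal sketch of constructing a parser with these options. Every value shown is just the documented default (and the feedurl is a made-up placeholder), so in practice you only pass the options you want to change.

var FeedParser = require('feedparser');

var feedparser = new FeedParser({
  normalize: true,                         // default: map feed fields onto the generic RSS 2.0-style properties
  addmeta: true,                           // default: attach the feed's meta to each article
  feedurl: 'http://example.com/feed.xml',  // hypothetical url; only needed if relative-url resolution fails
  resume_saxerror: true                    // default: emit SAXErrors and automatically resume parsing
});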

Examples

See the examples directory.

API

Transform Stream

Feedparser is a transform stream operating in "object mode": XML in -> JavaScript objects out. Each readable chunk is an object representing an article in the feed.
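
For instance, here is a minimal sketch of piping a local file into the parser and reading articles off the other end (the 'feed.xml' path is just a placeholder):

var fs = require('fs');
var FeedParser = require('feedparser');

var feedparser = new FeedParser();

fs.createReadStream('feed.xml') // raw XML goes in...
  .pipe(feedparser)
  .on('readable', function () {
    var item;
    while ((item = feedparser.read())) { // ...JavaScript objects come out
      console.log(item.title);
    }
  })
  .on('error', function (err) {
    console.error(err);
  });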

Events Emitted

  • meta - emitted with the feed's meta information once it has been parsed
  • error - emitted with the error whenever there is a Feedparser error of any kind (SAXError, Feedparser error, etc.)
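
For example, a minimal sketch wiring up both events:

feedparser.on('meta', function (meta) {
  // the feed-level information, parsed before any articles arrive
  console.log('feed:', meta.title);
});

feedparser.on('error', function (error) {
  // always handle errors; by default parsing resumes after a SAXError
  console.error(error);
});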

What is the parsed output produced by feedparser?

Feedparser parses each feed into a meta (emitted on the meta event) portion and one or more articles (emitted on the data event, or available via read() after the readable event is emitted).

Regardless of the format of the feed, the meta and each article contain a uniform set of generic properties patterned after (although not identical to) the RSS 2.0 format, as well as all of the properties originally contained in the feed. So, for example, an Atom feed may have a meta.description property, but it will also have a meta['atom:subtitle'] property.

The purpose of the generic properties is to provide the user a uniform interface for accessing a feed's information without needing to know the feed's format (i.e., RSS versus Atom) or having to worry about handling the differences between the formats. However, the original information is also there, in case you need it. In addition, Feedparser supports some popular namespace extensions (or portions of them), such as portions of the itunes, media, feedburner and pheedo extensions. So, for example, if a feed article contains either an itunes:image or media:thumbnail, the url for that image will be contained in the article's image.url property.

All generic properties are "pre-initialized" to null (or to empty arrays or objects for certain properties). This should save you from having to do a lot of checking for undefined -- for example, when you are using jade templates.

In addition, all properties (and namespace prefixes) use only lowercase letters, regardless of how they were capitalized in the original feed. ("xmlUrl" and "pubDate" also are still used to provide backwards compatibility.) This decision places ease-of-use over purity -- hopefully, you will never need to think about whether you should camelCase "pubDate" ever again.

The title and description properties of meta and the title property of each article have any HTML stripped if you let feedparser normalize the output. If you really need the HTML in those elements, there are always the originals: e.g., meta['atom:subtitle']['#'].
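
As a quick illustration (assuming an Atom feed that defines a subtitle), the normalized property is plain text while the original element keeps its markup:

feedparser.on('meta', function (meta) {
  console.log(meta.description);              // generic property, HTML stripped
  if (meta['atom:subtitle']) {
    console.log(meta['atom:subtitle']['#']);  // original element text, markup intact
  }
});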

List of meta properties

  • title
  • description
  • link (website link)
  • xmlurl (the canonical link to the feed, as specified by the feed)
  • date (most recent update)
  • pubdate (original published date)
  • author
  • language
  • image (an Object containing url and title properties)
  • favicon (a link to the favicon -- only provided by Atom feeds)
  • copyright
  • generator
  • categories (an Array of Strings)

List of article properties

  • title
  • description (frequently, the full article content)
  • summary (frequently, an excerpt of the article content)
  • link
  • origlink (when FeedBurner or Pheedo puts a special tracking url in the link property, origlink contains the original link)
  • permalink (when an RSS feed has a guid field and the isPermalink attribute is not set to false, permalink contains the value of guid)
  • date (most recent update)
  • pubdate (original published date)
  • author
  • guid (a unique identifier for the article)
  • comments (a link to the article's comments section)
  • image (an Object containing url and title properties)
  • categories (an Array of Strings)
  • source (an Object containing url and title properties pointing to the original source for an article; see the RSS Spec for an explanation of this element)
  • enclosures (an Array of Objects, each representing a podcast or other enclosure and having a url property and possibly type and length properties)
  • meta (an Object containing all the feed meta properties; especially handy when using the EventEmitter interface to listen to article emissions)
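
For example, here is a minimal sketch reading a few of these properties off each article (the values, of course, depend entirely on the feed):

feedparser.on('readable', function () {
  var item;
  while ((item = feedparser.read())) {
    console.log(item.title);
    console.log(item.pubdate);       // original published date
    console.log(item.categories);    // Array of Strings (possibly empty)
    item.enclosures.forEach(function (enclosure) {
      console.log(enclosure.url, enclosure.type, enclosure.length);
    });
    console.log(item.meta.title);    // the feed meta attached to each article (addmeta)
  }
});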

Help

  • Don't be afraid to report an issue.
  • You can drop by Gitter, too.

Contributors

View all the contributors.

Although node-feedparser no longer shares any code with node-easyrss, it was the original inspiration and a starting point.

License

(The MIT License)

Copyright (c) 2011-2020 Dan MacTough and contributors

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.