wtf_wikipedia
v10.3.2
parse wikiscript into json
import wtf from 'wtf_wikipedia'
let doc = await wtf.fetch('Toronto Raptors')
let coach = doc.infobox().get('coach')
coach.text() //'Darko Rajaković'
.text()
get clean plaintext:
let str = `[[Greater_Boston|Boston]]'s [[Fenway_Park|baseball field]] has a {{convert|37|ft}} wall. <ref>Field of our Fathers: By Richard Johnson</ref>`
wtf(str).text()
// "Boston's baseball field has a 37ft wall."
let doc = await wtf.fetch('Glastonbury', 'en')
doc.sentences()[0].text()
// 'Glastonbury is a town and civil parish in Somerset, England, situated at a dry point ...'
.json()
get all the data from a page:
let doc = await wtf.fetch('Whistling')
doc.json()
// { categories: ['Oral communication', 'Vocal skills'], sections: [{ title: 'Techniques' }], ...}
the default .json() output is really verbose, but you can cherry-pick data by poking-around like this:
// get just the links:
doc.links().map((link) => link.json())
//[{ page: 'Theatrical superstitions', text: 'superstitions' }]
// just the images:
doc.images()[0].json()
// { file: 'Image:Duveneck Whistling Boy.jpg', url: 'https://commons.wiki...' }
// json for a particular section:
doc.section('see also').links()[0].json()
// { page: 'Slide Whistle' }
run it on the client-side:
<script src="https://unpkg.com/wtf_wikipedia"></script>
<script>
wtf.fetch('Radiohead', { 'Api-User-Agent': 'Name your script here' }, function (err, doc) {
let members = doc.infobox().get('current members')
members.links().map((l) => l.page())
//['Thom Yorke', 'Jonny Greenwood', 'Colin Greenwood'...]
})
</script>
or the server-side:
import wtf from 'wtf_wikipedia'
// or,
const wtf = require('wtf_wikipedia')
full wikipedia dumps
With this library, in conjunction with dumpster-dive, you can parse the whole English wikipedia in an afternoon.
npm install -g dumpster-dive
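once installed, a run looks roughly like this (see dumpster-dive's own docs for its exact options):
dumpster ./enwiki-latest-pages-articles.xml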
Ok first, 🛀
Wikitext is no small thing.
Consider:
- the partial-implementation of inline-css,
- nested elements do not honour the scope of other elements
- the language has no errors
- deep recursion of similar-syntax templates
- the egyptian hieroglyphics syntax
- 'Birth_date_and_age' vs 'Birth-date_and_age'
- the unexplained hashing scheme for image paths
- the custom encoding of whitespace and punctuation
- right-to-left values in left-to-right templates
- PEG-based parsers struggle with wikitext's backtracking/lookarounds
- there are 634,755 templates in en-wikipedia (as of Nov-2018)
- there are a large number of pages that don't render properly on wikipedia, or its apps..
this library supports many recursive shenanigans, deprecated and obscure template variants, and illicit wiki-shorthands.
What it does:
- Detects and parses redirects and disambiguation pages
- Parse infoboxes into a formatted key-value object
- Handles recursive templates and links - like [[.. [[...]] ]]
- Per-sentence plaintext and link resolution
- Parse and format internal links
- creates image thumbnail urls from File:XYZ.png filenames
- Properly resolve dynamic templates like {{CURRENTMONTH}} and {{CONVERT ..}}
- Parse images, headings, and categories
- converts 'DMS-formatted' (59°12'7.7"N) geo-coordinates to lat/lng
- parse and combine citation and reference metadata
- Eliminate xml, latex, css, and table-sorting cruft
What it doesn't do:
- external 'transcluded' page data [1]
- AST output
- smart (or 'pretty') formatting of html in infoboxes or galleries [1]
- maintain perfect page order [1]
- per-sentence references (by 'section' element instead)
- maintain template or infobox css styling
- large tables that span different sections [1]
It is built to be as flexible as possible. In all cases, it tries to fail in considerate ways.
How about html scraping..?
Wikimedia's official parser turns wikitext ➔ HTML.
if you prefer this screen-scraping workflow, you can pluck at parts of a page like that.
that's cool!
getting structured data this way is still a complex, weird process. Manually spelunking the html is sometimes just as tricky and error-prone as scanning the wikitext itself.
The contributors to this library have come to that conclusion, as many others have.
This library is gracious to the Parsoid contributors.
okay,
flip your wikitext into a Doc object
import wtf from 'wtf_wikipedia'
let txt = `
==Wood in Popular Culture==
* Harry Potter's wand
* The Simpson's fence
`
wtf(txt)
// Document {text(), json(), lists()...}
doc.links()
let txt = `Whistling is featured in a number of television shows, such as [[Lassie (1954 TV series)|''Lassie'']], and the title theme for ''[[The X-Files]]''.`
wtf(txt)
.links()
.map((l) => l.page())
// [ 'Lassie (1954 TV series)', 'The X-Files' ]
doc.text()
returns nice plain-text of the article
let txt =
"[[Greater_Boston|Boston]]'s [[Fenway_Park|baseball field]] has a {{convert|37|ft}} wall.<ref>{{cite web|blah}}</ref>"
wtf(txt).text()
//"Boston's baseball field has a 37ft wall."
doc.sections():
a section is a heading '==Like This=='
wtf(page).sections()[1].children() //traverse nested sections
wtf(page).section('see also').remove() //delete one
doc.sentences()
let s = wtf(page).sentences()[4]
s.links()
s.bolds()
s.italics()
s.text()
s.wikitext()
doc.categories()
let doc = await wtf.fetch('Whistling')
doc.categories()
//['Oral communication', 'Vocal music', 'Vocal skills']
doc.images()
let img = wtf(page).images()[0]
img.url() // the full-size wikimedia-hosted url
img.thumbnail() // 300px, by default
img.format() // jpg, png, ..
Fetch
You can grab and parse articles from any wiki api. This includes any language, any wiki-project, and most 3rd-party wikis.
// 3rd-party wiki
let doc = await wtf.fetch('https://muppet.fandom.com/wiki/Miss_Piggy')
// wikipedia français
doc = await wtf.fetch('Tony Hawk', 'fr')
doc.sentences()[0].text() // 'Tony Hawk est un skateboarder professionnel et un acteur ...'
// accept an array, or wikimedia pageIDs
let docs = await wtf.fetch(['Whistling', 2983], { follow_redirects: false })
// article from german wikivoyage
wtf.fetch('Toronto', { lang: 'de', wiki: 'wikivoyage' }).then((doc) => {
console.log(doc.sentences()[0].text()) // 'Toronto ist die Hauptstadt der Provinz Ontario'
})
you may also pass the wikipedia page id as a parameter instead of the page title:
let doc = await wtf.fetch(64646, 'de')
the fetch method follows redirects.
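a rough sketch of both behaviours - 'UK' redirects to 'United Kingdom' on en-wikipedia, and the outputs here are illustrative:
let doc = await wtf.fetch('UK')
doc.title() // 'United Kingdom' - the redirect was followed
let raw = await wtf.fetch('UK', { follow_redirects: false })
raw.isRedirect() // true
raw.redirectTo() // roughly { page: 'United Kingdom' }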
API plugin
wtf.getCategoryPages(title, [options])
retrieves all pages and sub-categories belonging to a given category:
wtf.extend(require('wtf-plugin-api'))
let result = await wtf.getCategoryPages('Category:Politicians_from_Paris')
/*
[
  {"pageid":52502362,"ns":0,"title":"William Abitbol"},
  {"pageid":50101413,"ns":0,"title":"Marie-Joseph Charles des Acres de L'Aigle"},
  ...
  {"pageid":62721979,"ns":14,"title":"Category:Councillors of Paris"},
  {"pageid":856891,"ns":14,"title":"Category:Mayors of Paris"}
]
*/
wtf.random([options])
fetches a random wikipedia article, from a given language or domain
wtf.extend(require('wtf-plugin-api'))
wtf.random().then((doc) => {
console.log(doc.title(), doc.categories())
//'Whistling' ['Oral communication', 'Vocal skills']
})
see wtf-plugin-api
Tutorials
- Gentle Introduction - Getting NBA Team data
- Parsing tables - getting all Apollo Astronauts as JSON
- Parsing Timezones
- MLB season schedules
- Fetching a list of pages
- Parsing COVID outbreak table
Plugins
these add all sorts of new functionality:
wtf.extend(require('wtf-plugin-classify'))
;(await wtf.fetch('Toronto Raptors')).classify()
// 'Organization/SportsTeam'
wtf.extend(require('wtf-plugin-summary'))
;(await wtf.fetch('Pulp Fiction')).summary()
// 'a 1994 American crime film'
wtf.extend(require('wtf-plugin-person'))
;(await wtf.fetch('David Bowie')).birthDate()
// {year:1947, date:8, month:1}
wtf.extend(require('wtf-plugin-i18n'))
;(await wtf.fetch('Ziggy Stardust', 'fr')).infobox().json()
// {nom:{text:"Ziggy Stardust"}, oeuvre:{text:"The Rise and Fall of Ziggy Stardust"}}
| Plugin | Description |
| ---------------------------------------------------------- | --------------------------------------- |
| classify | person/place/thing |
| summary | short description text |
| person | birth/death information |
| api | fetch more data from the API |
| i18n | improves multilingual template coverage |
| wtf-mlb | fetch baseball data |
| wtf-nhl | fetch hockey data |
| nsfw | flag sexual/graphic/adult articles |
| image | additional methods for .images() |
| html | output html |
| wikitext | output wikitext |
| markdown | output markdown |
| latex | output latex |
Good practice:
The wikipedia api is pretty welcoming, though it recommends three things if you're going to hit it heavily:
- pass an Api-User-Agent as something they can use to easily throttle bad scripts
- bundle multiple pages into one request as an array (say, groups of 5?)
- run it serially, or at least, slowly.
wtf
.fetch(['Royal Cinema', 'Aldous Huxley'], {
lang: 'en',
'Api-User-Agent': '[email protected]',
})
.then((docList) => {
let links = docList.map((doc) => doc.links())
console.log(links)
})
Full API
Document
- .title() - get/set the title of the page from the first-sentence
- .pageID() - get/set the wikimedia id of the page, if we have it.
- .wikidata() - get/set the wikidata id of the page, if we have it.
- .domain() - get/set the domain of the wiki we're on, if we have it.
- .url() - (try to) generate the url for the current article
- .lang() - get/set the current language (used for url method)
- .namespace() - get/set the wikimedia namespace of the page, if we have it
- .isRedirect() - if the page is just a redirect to another page
- .redirectTo() - the page this redirects to
- .isDisambiguation() - is this a placeholder page to direct you to one-of-many possible pages
- .isStub() - if the page is flagged as incomplete
- .categories() - return all categories of the document
- .sections() - return a list of the Document's sections
- .paragraphs() - return a list of Paragraphs, in all sections
- .sentences() - return a list of all sentences in the document
- .images() - return all images found in the document
- .links() - return a list of all links, in all parts of the document
- .lists() - sections in a page where each line begins with a bullet point
- .tables() - return a list of all structured tables in the document
- .templates() - any type of structured-data elements, typically wrapped {{like this}}
- .infoboxes() - specific type of template, that appear on the top-right of the page
- .references() - return a list of 'citations' in the document
- .coordinates() - geo-locations that appear on the page
- .text() - plaintext, human-readable output for the page
- .json() - a 'stringifyable' output of the page's main data
- .wikitext() - original wiki markup
- .description() - get/set the page's short description, if we have one.
- .pageImage() - get/set the page's representative image, if we have one.
- .revisionID() - get/set the latest edit id of the page, if we have it.
- .timestamp() - get/set the time of the most recent edit of the page, if we have it.
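a quick sketch tying a few of these together (the outputs are illustrative):
let doc = await wtf.fetch('Toronto')
doc.title() // 'Toronto'
doc.isDisambiguation() // false
doc.sections().map((s) => s.title()) // ['', 'History', 'Geography', ...]
doc.categories().length // how many categories the page has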
Section
- .title() - the name of the section, between ==these tags==
- .index() - which number section is this, in the whole document.
- .indentation() - how many steps deep into the table of contents it is
- .sentences() - return a list of sentences in this section
- .paragraphs() - return a list of paragraphs in this section
- .links() - list of all links, in all paragraphs and templates
- .tables() - list of all html tables
- .templates() - list of all templates in this section
- .infoboxes() - list of all infoboxes found in this section
- .coordinates() - list of all coordinate templates found in this section
- .lists() - list of all lists in this section
- .interwiki() - any links to other language wikis
- .images() - return a list of any images in this section
- .references() - return a list of 'citations' in this section
- .remove() - remove the current section from the document
- .nextSibling() - a section following this one, under the current parent: eg. 1920s → 1930s
- .lastSibling() - a section before this one, under the current parent: eg. 1930s → 1920s
- .children() - any sections more specific than this one: eg. History → [PreHistory, 1920s, 1930s]
- .parent() - the section, broader than this one: eg. 1920s → History
- .text() - readable plaintext for this section
- .json() - return all section data
- .wikitext() - original wiki markup
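a small sketch of section traversal, assuming nested '===' headings parse as children:
let doc = wtf(`==History==
===PreHistory===
some text.
===1920s===
more text.`)
doc.section('History').children().map((s) => s.title())
// ['PreHistory', '1920s']
doc.section('1920s').parent().title()
// 'History'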
Paragraph
- .sentences() - return a list of sentence objects in this paragraph
- .references() - any citations, or references in all sentences
- .lists() - any lists found in this paragraph
- .images() - any images found in this paragraph
- .links() - list of all links in all sentences
- .interwiki() - any links to other language wikis
- .text() - generate readable plaintext for this paragraph
- .json() - generate some generic data for this paragraph in JSON format
- .wikitext() - original wiki markup
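a small sketch, assuming paragraphs are split on blank lines:
let doc = wtf('One sentence. Another sentence.\n\nA new paragraph.')
doc.paragraphs().length // 2
doc.paragraphs()[0].sentences().length // 2
doc.paragraphs()[1].text() // 'A new paragraph.'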
Sentence
- .links() - list of all links
- .bolds() - list of all bold texts
- .italics() - list of all italic formatted text
- .text() - generate readable plaintext
- .json() - return all sentence data
- .wikitext() - original wiki markup
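a small sketch (the outputs are illustrative):
let s = wtf(`The '''Raptors''' are a ''basketball'' team.`).sentences()[0]
s.bolds() // ['Raptors']
s.italics() // ['basketball']
s.text() // 'The Raptors are a basketball team.'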
Image
- .url() - return url to full size image
- .thumbnail() - return url to thumbnail (pass size to customize)
- .links() - any links from the caption (if present)
- .format() - get file format (e.g. jpg)
- .text() - does nothing
- .json() - return some generic metadata for this image
- .wikitext() - original wiki markup
Template
- .text() - does this template generate any readable plaintext?
- .json() - get all the data for this template
- .wikitext() - original wiki markup
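a small sketch - the exact json shape varies by template:
wtf('a {{convert|37|ft}} wall').templates()[0].json()
// something like { template: 'convert', num: 37, ... }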
Infobox
- .links() - any internal or external links in this infobox
- .keyValue() - generate simple key:value strings from this infobox
- .image() - grab the main image from this infobox
- .get() - lookup properties from their key
- .template() - which infobox, eg 'Infobox Person'
- .text() - generate readable plaintext for this infobox
- .json() - generate some generic 'stringifyable' data for this infobox
- .wikitext() - original wiki markup
List
- .lines() - get an array of each member of the list
- .links() - get all links mentioned in this list
- .text() - generate readable plaintext for this list
- .json() - generate some generic easily-parsable data for this list
- .wikitext() - original wiki markup
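a small sketch, reusing the wikitext from earlier (outputs are illustrative):
let doc = wtf(`==Wood in Popular Culture==
* Harry Potter's wand
* The Simpson's fence`)
doc.lists()[0].lines().map((s) => s.text())
// ["Harry Potter's wand", "The Simpson's fence"]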
Reference
- .title() - generate human-facing text for this reference
- .links() - get any links mentioned in this reference
- .text() - returns nothing
- .json() - generate some generic metadata for this reference
- .wikitext() - original wiki markup
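a small sketch - the parsed shape depends on which citation template is used:
let doc = wtf(`Boston's wall is 37ft.<ref>{{cite book|title=Field of our Fathers|last=Johnson}}</ref>`)
doc.references().length // 1
doc.references()[0].title() // roughly 'Field of our Fathers'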
Table
- .links() - get any links mentioned in this table
- .keyValue() - generate a simple list of key:value objects for this table
- .text() - returns nothing
- .json() - generate some useful metadata for this table
- .wikitext() - original wiki markup
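a small sketch, assuming a simple wikitable (outputs are illustrative):
let txt = `{| class="wikitable"
! Name !! Year
|-
| Pulp Fiction || 1994
|-
| Trainspotting || 1996
|}`
wtf(txt).tables()[0].keyValue()
// roughly [ { Name: 'Pulp Fiction', Year: '1994' }, { Name: 'Trainspotting', Year: '1996' } ]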
Configuration
Adding new methods:
you can add new methods to any class of the library, with wtf.extend()
wtf.extend((models) => {
// throw this method in there...
models.Doc.prototype.isPerson = function () {
return this.categories().find((cat) => cat.match(/people/))
}
})
;(await wtf.fetch('Stephen Harper')).isPerson()
Adding new templates:
does your wiki use a {{foo}} template? Add a custom parser for it:
wtf.extend((models, templates) => {
// create a custom parser function
templates.foo = (tmpl, list, parse) => {
let obj = parse(tmpl) //or do a custom regex
list.push(obj)
return 'new-text'
}
// array-syntax allows easy-labeling of parameters
templates.bar = ['a', 'b', 'c']
// number-syntax for returning by param # '{{name|zero|one|two}}'
templates.baz = 0
// replace the template with a string '{{asterisk}}' -> '*'
templates.asterisk = '*'
})
by default, if there's no parser for a template, it will just be ignored and generate an empty string. However, it's possible to configure a fallback parser function to handle these templates:
wtf('some {{weird_template}} here', {
templateFallbackFn: (tmpl, list, parse) => {
let obj = parse(tmpl) //or do a custom regex
list.push(obj)
return '[unsupported template]' // or return null to ignore this template
},
})
you can determine which templates are understood to be 'infoboxes' with the 3rd parameter:
wtf.extend((models, templates, infoboxes) => {
Object.assign(infoboxes, { person: true, place: true, thing: true })
})
Notes:
3rd-party wikis
by default, a public API is provided by any installed mediawiki application. This means that most wikis have an open api, even if they don't realize it. Some wikis may turn this feature off.
It can usually be found by visiting http://mywiki.com/api.php
to fetch pages from a 3rd-party wiki:
wtf.fetch('Kermit', { domain: 'muppet.fandom.com' }).then((doc) => {
console.log(doc.text())
})
some wikis will change the path of their API from ./api.php to elsewhere. If your api has a different path, you can set it like so:
wtf.fetch('2016-06-04_-_J.Fernandes_@_FIL,_Lisbon', { domain: 'www.mixesdb.com', path: 'db/api.php' }).then((doc) => {
console.log(doc.template('player').json())
})
for image-urls to work properly, the wiki should also have Special:Redirect enabled. Some wikis (like wikia) have intentionally disabled this.
i18n and multi-language:
wikitext is (amazingly) used across all languages, wikis, and even in right-to-left languages. This parser actually does an okay job at it too.
Wikipedia I18n language information for Redirects, Infoboxes, Categories, and Images is included in the library, with pretty-decent coverage.
To improve coverage of i18n templates, use wtf-plugin-i18n
Please make a PR if you see something missing for your language.
Builds:
this library ships separate client-side and server-side builds, to preserve filesize.
- ./wtf_wikipedia-client.mjs - as an es-module (or Deno)
- ./wtf_wikipedia-client.min.js - for production
- ./wtf_wikipedia.cjs - node commonjs build
- ./wtf_wikipedia.mjs - node/deno/typescript esm build
the browser version uses fetch() and the server version uses require('https').
Performance:
It is not the fastest parser, and is very unlikely to beat a single-pass parser in C or Java.
Using dumpster-dive, this library can parse a full English wikipedia in around 4 hours on a macbook.
That's about 100 pages/second, per thread.
See also:
Other alternative javascript parsers:
and many more!
MIT