nlp-corpus
texts for integration testing of nlp components
nlp-corpus is a proud series of weird texts from a delicious smattering of sources - aimed at getting cosmopolitan flavours of english - highbrow, lowbrow and unibrow - dialects, typos, shakespeare, unicode, 19th century, aggressive emoji, and epic nsfw slurs into your training data.
it is 100,000 sentences, or about 12mb, split into 100 files of randomized sentences.
its role is mainly to kick the tires a bit, as creatively as possible, for fuzzy linguistic parsing.
- suggestive American rock lyrics
- campy Friends tv-show transcripts
- vulnerable drug-trip reports from Erowid
- singaporean SMS messages
- State of the union logorrhea
- generally-offensive 90's rap
- Legal descriptions in NAFTA
- 20th century romantic fiction
- pedantic arguments on reddit
- arcane and dense jeopardy questions
Note that some of this text is nsfw, or contains offensive content, badly-formatted unicode, weird indentation, ascii art, antiquated shorthands, etc.
These texts were found just clicking around on the internet. Running them blindly through your parser should be considered fair use, but please don't commercially republish them, or anything like that.
ok go.
npm install nlp-corpus
running this library server-side loads a subset of the documents - about 3mb total
import corpus from 'nlp-corpus'
// all 10k sentences, in an array
let arr = corpus.all()
// or load just a few:
arr = corpus.some(400)
// random sentence
let str = corpus.random()
// random 5 sentences
arr = corpus.some(5) // n must be <= 1,500
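that's really the whole idea - a minimal smoke-test sketch, where `parse` is a hypothetical stand-in for whatever nlp library you're actually testing:
import corpus from 'nlp-corpus'
// hypothetical - swap in the parser you're testing
import parse from 'my-nlp-lib'

// feed every sentence through, and log anything that throws
for (const sentence of corpus.all()) {
  try {
    parse(sentence)
  } catch (e) {
    console.error('choked on: ' + sentence)
  }
}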
or on the client-side, there's a one-liner that fetches the docs:
<script src="https://unpkg.com/nlp-corpus"></script>
<script type="module">
// load a document lazily
await nlpCorpus.fetch(2) // 1 - 20
// (each doc is about 150kb)
let arr = nlpCorpus.random(4) // 1 - 1,500
</script>
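to pull a few docs at once, something like this works - a sketch, assuming fetch() resolves with that document's sentences:
<script type="module">
// grab docs 1-3 in parallel, then flatten into one array
const docs = await Promise.all([1, 2, 3].map((n) => nlpCorpus.fetch(n)))
const sentences = docs.flat()
console.log(sentences.length)
</script>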
Contents:
the corpus is available as [100 150kb files](./builds), where each file is 1,000 mixed sentences.
This is a good size for picking at from the client-side - but you can loop through them all in nodejs.
in total, there are 100,000 sentences, at around 12mb.
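you can also read the build files directly in nodejs - a sketch, assuming plain-text files with one sentence per line (check ./builds for the actual names and format):
import { readdirSync, readFileSync } from 'fs'
import { join } from 'path'

// hypothetical path - wherever nlp-corpus got installed
const dir = './node_modules/nlp-corpus/builds'
let sentences = []
for (const file of readdirSync(dir)) {
  const text = readFileSync(join(dir, file), 'utf8')
  sentences = sentences.concat(text.split('\n'))
}
console.log(sentences.length) // should be around 100,000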
Dialog
the National University of Singapore's SMS Corpus - 56 thousand SMS messages, about 3mb.
'Friends' Transcripts - uses @silentrob's parser of versatel transcripts of the Friends tv show. All 10 seasons, about 2.5mb
Music lyrics
short, modern texts with some nice slang.
- nltk lyric corpus by JacobGo
Fiction
some CC-BY fiction pieces from selected authors. Mix of tense, dialogue, subject, and style. ~300kb
Erowid trip reports - some very casual and modern slang-filled drug-use reports from erowid.org ~nsfw.
Children's stories from the facebook children's stories corpus
Speeches
State of the Union transcripts - American presidential speeches from 2000-2015. ~600kb
Wikipedia
a bunch of articles from Wikipedia's 'List of articles every Wikipedia should have'
Internet comments
Reddit /r/TLDR corpus from this dataset
Questions
sample of jeopardy questions from this dataset
Instructions
sample of wikihow instructions from this dataset
News Headlines
sample of Times of India headlines from this dataset
Reviews
subset of reviews from the Yelp academic dataset
subset of IMDB reviews from the Stanford Large Movie Review Dataset
Legal Text
subset of the SigmaLaw - Large Legal Text Corpus
subset of the United Nations Multilingual corpus (english subset)
Jokes & puns
Super-corny dad-jokes (some offensive) from CrowdTruth/Short-Text-Corpus-For-Humor-Detection
Literature
subsets of Infinite Jest and Edgar Allan Poe short stories
Email text
subsets of the ENRON email dataset