

@datafire/oxforddictionaries

Client library for Oxford Dictionaries

Installation and Usage

npm install --save @datafire/oxforddictionaries
let oxforddictionaries = require('@datafire/oxforddictionaries').create();

// For example, list the available language datasets (any of the documented actions below works here):
oxforddictionaries.languages.get({
  "app_id": "",
  "app_key": ""
}, context).then(data => {
  console.log(data);
});

Description

Actions

domains.source_domains_language.target_domains_language.get

Returns a list of the available domains for a given bilingual language dataset.

oxforddictionaries.domains.source_domains_language.target_domains_language.get({
  "source_domains_language": "",
  "target_domains_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_domains_language required string (values: en, es, nso, zu, ur, de, pt): IANA language code
    • target_domains_language required string (values: es, nso, zu, ms, id, tn, ro, de, pt): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

domains.source_language.get

Returns a list of the available domains for a given monolingual language dataset.

oxforddictionaries.domains.source_language.get({
  "source_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_language required string (values: en, es, nso, zu, hi, sw, ur, de, pt, ta, gu): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

entries.source_language.word_id.sentences.get

Use this to retrieve sentences extracted from corpora which show how a word is used in the language. This is available for English and Spanish. For English, the sentences are linked to the correct sense of the word in the dictionary. In Spanish, they are linked at the headword level.

oxforddictionaries.entries.source_language.word_id.sentences.get({
  "source_language": "",
  "word_id": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_language required string (values: en, es): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

entries.source_lang.word_id.get

Use this to retrieve definitions, pronunciations, example sentences, grammatical information and word origins. It only works for dictionary headwords, so you may need to use the Lemmatron first if your input is likely to be an inflected form (e.g., 'swimming'). This would return the linked headword (e.g., 'swim') which you can then use in the Entries endpoint. Unless specified using a region filter, the default lookup will be the Oxford Dictionary of English (GB).

oxforddictionaries.entries.source_lang.word_id.get({
  "source_lang": "",
  "word_id": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en, es, lv, hi, sw, ta, gu, fr): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
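
As an illustration of the call above with concrete values (the word, the credentials and the response handling are placeholders, not taken from the package), a lookup of the headword 'swim' without a region filter uses the default Oxford Dictionary of English (GB) data:

    oxforddictionaries.entries.source_lang.word_id.get({
      "source_lang": "en",
      "word_id": "swim",          // must be a dictionary headword, not an inflected form
      "app_id": "YOUR_APP_ID",    // placeholder credential
      "app_key": "YOUR_APP_KEY"   // placeholder credential
    }, context).then(entry => {
      console.log(entry);         // definitions, pronunciations, example sentences, word origins
    });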

entries.source_lang.word_id.antonyms.get

Retrieve words that are opposite in meaning to the input word (antonym).

oxforddictionaries.entries.source_lang.word_id.antonyms.get({
  "source_lang": "",
  "word_id": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

entries.source_lang.word_id.regions_region.get

Use this filter to restrict the lookup to either our Oxford Dictionary of English (GB) or New Oxford American Dictionary (US).

oxforddictionaries.entries.source_lang.word_id.regions_region.get({
  "source_lang": "",
  "word_id": "",
  "region": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • region required string (values: gb, us): Region filter parameter. gb = Oxford Dictionary of English. us = New Oxford American Dictionary.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
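
For example (placeholder credentials, illustrative word), restricting a lookup of 'color' to the New Oxford American Dictionary would pass region 'us':

    oxforddictionaries.entries.source_lang.word_id.regions_region.get({
      "source_lang": "en",
      "word_id": "color",
      "region": "us",             // 'gb' = Oxford Dictionary of English, 'us' = New Oxford American Dictionary
      "app_id": "YOUR_APP_ID",    // placeholder
      "app_key": "YOUR_APP_KEY"   // placeholder
    }, context).then(data => console.log(data));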

entries.source_lang.word_id.synonyms.get

Use this to retrieve words that are similar in meaning to the input word (synonym).

oxforddictionaries.entries.source_lang.word_id.synonyms.get({
  "source_lang": "",
  "word_id": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

entries.source_lang.word_id.synonyms_antonyms.get

Retrieve available synonyms and antonyms for a given word and language.

oxforddictionaries.entries.source_lang.word_id.synonyms_antonyms.get({
  "source_lang": "",
  "word_id": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

entries.source_lang.word_id.filters.get

Use filters to limit the entry information that is returned. For example, you may only require definitions and not everything else, or just pronunciations. The full list of filters can be retrieved from the filters Utility endpoint. You can also specify values within the filter using '='. For example 'grammaticalFeatures=singular'. Filters can also be combined using a semicolon.

oxforddictionaries.entries.source_lang.word_id.filters.get({
  "source_lang": "",
  "word_id": "",
  "filters": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en, es, lv, hi, sw, ta, gu, fr): IANA language code
    • word_id required string: An Entry identifier. Case-sensitive.
    • filters required string: Separate filtering conditions using a semicolon. Conditions take values grammaticalFeatures and/or lexicalCategory and are case-sensitive. To list multiple values in a single condition, separate them with a comma.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
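
A sketch of a filtered lookup. The filter string below is illustrative only; the actual valid filter names should be taken from the filters utility endpoint, and the credentials are placeholders.

    oxforddictionaries.entries.source_lang.word_id.filters.get({
      "source_lang": "en",
      "word_id": "swim",
      "filters": "definitions;grammaticalFeatures=singular",  // '=' sets a value, ';' combines filters
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(data => console.log(data));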

entries.source_translation_language.word_id.translations_target_translation_language.get

Use this to return translations for a given word. In the event that a word in the dataset does not have a direct translation, the response will be a definition in the target language.

oxforddictionaries.entries.source_translation_language.word_id.translations_target_translation_language.get({
  "source_translation_language": "",
  "word_id": "",
  "target_translation_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_translation_language required string (values: en, es, nso, zu, ms, id, tn, ur, de, pt): IANA language code
    • word_id required string: The source word
    • target_translation_language required string (values: es, nso, zu, ms, id, tn, ro, de, pt): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
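
For example, an English-to-Spanish lookup might look like this (credentials and the source word are placeholders; if the word had no direct translation, the response would contain a definition in the target language instead):

    oxforddictionaries.entries.source_translation_language.word_id.translations_target_translation_language.get({
      "source_translation_language": "en",
      "word_id": "book",                   // the source word
      "target_translation_language": "es",
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(data => console.log(data));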

filters.get

Returns a list of all the valid filters to construct API calls.

oxforddictionaries.filters.get({
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

filters.endpoint.get

Returns a list of all the valid filters for a given endpoint to construct API calls.

oxforddictionaries.filters.endpoint.get({
  "endpoint": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • endpoint required string (values: entries, inflections, translations): Name of the endpoint.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

grammaticalFeatures.source_language.get

Returns a list of the available grammatical features for a given language dataset.

oxforddictionaries.grammaticalFeatures.source_language.get({
  "source_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_language required string (values: en, es, lv, nso, zu, ms, tn, ur, hi, sw, de, pt, ta, gu): IANA language code. If provided output will be filtered by sourceLanguage.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

inflections.source_lang.word_id.filters.get

Use this to check if a word exists in the dictionary, or what 'root' form it links to (e.g., swimming > swim). The response tells you the possible lemmas for a given inflected word. This can then be combined with other endpoints to retrieve more information.

oxforddictionaries.inflections.source_lang.word_id.filters.get({
  "source_lang": "",
  "filters": "",
  "word_id": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en, es, hi, nso, tn, zu, de, pt): IANA language code
    • filters required string: Separate filtering conditions using a semicolon. Conditions take values grammaticalFeatures and/or lexicalCategory and are case-sensitive. To list multiple values in a single condition, separate them with a comma.
    • word_id required string: The input word
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
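
A minimal sketch of combining this endpoint with the Entries endpoint, as described above: first resolve the inflected form 'swimming' to its linked headword, then look that headword up. The traversal of the response assumes the Lemmatron schema listed under Definitions; the empty filters value and the credentials are placeholders, and error handling is omitted.

    oxforddictionaries.inflections.source_lang.word_id.filters.get({
      "source_lang": "en",
      "word_id": "swimming",     // an inflected form
      "filters": "",             // placeholder: no filtering conditions
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(lemmatron => {
      // Take the first linked headword (e.g., 'swim') per the Lemmatron schema under Definitions.
      let headword = lemmatron.results[0].lexicalEntries[0].inflectionOf[0].id;
      return oxforddictionaries.entries.source_lang.word_id.get({
        "source_lang": "en",
        "word_id": headword,
        "app_id": "YOUR_APP_ID",
        "app_key": "YOUR_APP_KEY"
      }, context);
    }).then(entry => console.log(entry));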

languages.get

Returns a list of monolingual and bilingual language datasets available in the API.

oxforddictionaries.languages.get({
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • sourceLanguage string (values: es, en, lv, nso, zu, ms, id, tn, ur, hi, sw, ro, de, pt, ta, gu): IANA language code. If provided, output will be filtered by sourceLanguage.
    • targetLanguage string (values: es, en, lv, nso, zu, ms, id, tn, ur, hi, sw, ro): IANA language code. If provided, output will be filtered by targetLanguage.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

lexicalcategories.language.get

Returns a list of available lexical categories for a given language dataset.

oxforddictionaries.lexicalcategories.language.get({
  "language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • language required string (values: es, en, lv, nso, zu, ms, id, tn, ur, hi, sw, ro, de, pt, ta, gu): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

regions.source_language.get

Returns a list of the available regions for a given monolingual language dataset.

oxforddictionaries.regions.source_language.get({
  "source_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_language required string (values: en): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

registers.source_language.get

Returns a list of the available registers for a given monolingual language dataset.

oxforddictionaries.registers.source_language.get({
  "source_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_language required string (values: en, es, hi, id, lv, ms, sw, ur, de, pt, ta, gu): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

registers.source_register_language.target_register_language.get

Returns a list of the available registers for a given bilingual language dataset.

oxforddictionaries.registers.source_register_language.target_register_language.get({
  "source_register_language": "",
  "target_register_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_register_language required string (values: en, es, ms, id, ur, de, pt): IANA language code
    • target_register_language required string (values: es, en, nso, zu, ms, id, tn, ro, de, pt): IANA language code
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

search.source_lang.get

Use this to retrieve possible headword matches for a given string of text. The results are calculated using headword matching, fuzzy matching, and lemmatization.

oxforddictionaries.search.source_lang.get({
  "source_lang": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en, es, hi, lv, sw, ta, gu): IANA language code
    • q string: Search string
    • prefix boolean (values: false, true): Set prefix to true if you'd like to get results only starting with search string.
    • regions string: If searching in English, filter words with specific region(s) either 'us' or 'gb'.
    • limit string: Limit the number of results per response. Default and maximum limit is 5000.
    • offset string: Offset the start number of the result.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
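
An illustrative search call (placeholder credentials and query). With prefix set to true, only headwords starting with the search string are returned, and limit caps the number of results per response:

    oxforddictionaries.search.source_lang.get({
      "source_lang": "en",
      "q": "swi",
      "prefix": true,
      "limit": "10",
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(matches => console.log(matches));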

search.source_search_language.translations_target_search_language.get

Use this to find matches in our translation dictionaries.

oxforddictionaries.search.source_search_language.translations_target_search_language.get({
  "source_search_language": "",
  "target_search_language": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_search_language required string (values: en, es, nso, zu, ms, id, tn, ur, de, pt): IANA language code
    • target_search_language required string (values: es, nso, zu, ms, id, tn, ro, de, pt): IANA language code
    • q string: Search string.
    • prefix boolean (values: false, true): Set prefix to true if you'd like to get results only starting with search string.
    • regions string: Filter words with specific region(s) E.g., regions=us. For now gb, us are available for en language.
    • limit string: Limit the number of results per response. Default and maximum limit is 5000.
    • offset string: Offset the start number of the result.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output

stats.frequency.ngrams.source_lang.corpus.ngram_size.get

This endpoint returns frequencies of ngrams of size 1-4, that is, the number of times a word (ngram size = 1) or sequence of words (ngram size > 1) appears in the corpus. Ngrams are case-sensitive ("I AM" and "I am" will have different frequencies) and frequencies are calculated per word (true case), so "the book" and "the books" are two different ngrams. The results can be filtered based on query parameters. Parameters can be provided in PATH, GET or POST (form or json). The parameters in PATH are overridden by parameters in GET, POST and json (in that order). In PATH, individual options are separated by a semicolon and values are separated by commas (where multiple values can be used). Example for bigrams (ngram of size 2):

  • PATH: /tokens=a word,another word

  • GET: /?tokens=a word&tokens=another word

  • POST (json):

      {
          "tokens": ["a word", "another word"]
      }

Either "tokens" or "contains" has to be provided. Some queries with "contains" or "sort" can exceed the 30s timeout, in which case the API will return an error message with status code 503. You mitigate this by providing additional restrictions such as "minFrequency" and "maxFrequency". You can use the parameters "offset" and "limit" to paginate through large result sets. For convenience, the HTTP header "Link" is set on the response to provide links to "first", "self", "next", "prev" and "last" pages of results (depending on the context). For example, if the result set contains 50 results and the parameter "limit" is set to 25, the Links header will contain an URL for the first 25 results and the next 25 results. Some libraries such as python's requests can parse the header automatically and offer a convenient way of iterating through the results. For example:

    while url:
        r = requests.get(url)
        r.raise_for_status()
        for item in r.json()['results']:
            yield item
        url = r.links.get('next', {}).get('url')

oxforddictionaries.stats.frequency.ngrams.source_lang.corpus.ngram_size.get({
  "source_lang": "",
  "corpus": "",
  "ngram-size": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string: IANA language code
    • corpus required string: For corpora other than 'nmc' (New Monitor Corpus) please contact [email protected]
    • ngram-size required string: the size of ngrams requested (1-4)
    • tokens string: List of tokens to filter. The tokens are separated by spaces, the list items are separated by comma (e.g., for bigrams (n=2) tokens=this is,this was, this will)
    • contains string: Find ngrams containing the given token(s). Use comma or space as token separators; the order of tokens is irrelevant.
    • punctuation string: Flag specifying whether to lookup ngrams that include punctuation or not (possible values are "true" and "false"; default is "false")
    • format string: Option specifying whether tokens should be returned as a single string (option "google") or as a list of strings (option "oup")
    • minFrequency integer: Restrict the query to entries with frequency of at least minFrequency
    • maxFrequency integer: Restrict the query to entries with frequency of at most maxFrequency
    • minDocumentFrequency integer: Restrict the query to entries that appear in at least minDocumentFrequency documents
    • maxDocumentFrequency integer: Restrict the query to entries that appear in at most maxDocumentFrequency documents
    • collate string: collate the results by wordform, trueCase, lemma, lexicalCategory. Multiple values can be separated by commas (e.g., collate=trueCase,lemma,lexicalCategory).
    • sort string: sort the resulting list by wordform, trueCase, lemma, lexicalCategory, frequency, normalizedFrequency. Descending order is achieved by prepending the value with the minus sign ('-'). Multiple values can be separated by commas (e.g., sort=lexicalCategory,-frequency)
    • offset integer: pagination - results offset
    • limit integer: pagination - results limit
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
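
A sketch of a bigram frequency query against the 'nmc' corpus. The tokens and the frequency threshold are chosen only for illustration, and the credentials are placeholders:

    oxforddictionaries.stats.frequency.ngrams.source_lang.corpus.ngram_size.get({
      "source_lang": "en",
      "corpus": "nmc",                      // New Monitor Corpus
      "ngram-size": "2",                    // bigrams
      "tokens": "a word,another word",      // list items separated by commas
      "minFrequency": 10,                   // restricting the query helps avoid the 30s timeout
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(ngrams => console.log(ngrams.results));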

stats.frequency.word.source_lang.get

This endpoint provides the frequency of a given word. When multiple database records match the query parameters, the returned frequency is the sum of the individual frequencies. For example, if the query parameter is lemma=test, the returned frequency will include the verb "test", the noun "test" and the adjective "test" in all forms (Test, tested, testing, etc.). If you are interested in the frequency of the word "test" but want to exclude other forms (e.g., tested), use the option trueCase=test. Normally, the word "test" will be spelt with a capital letter at the beginning of a sentence. The option trueCase will ignore this and count "Test" and "test" as the same token. If you are interested in the frequencies of "Test" and "test" separately, use the option wordform=test or wordform=Test. Note that trueCase is not just a lower-cased form of the word, as some words are genuinely spelt with a capital letter, such as the word "press" in Oxford University Press. Parameters can be provided in PATH, GET or POST (form or json). The parameters in PATH are overridden by parameters in GET, POST and json (in that order). In PATH, individual options are separated by a semicolon and values are separated by commas (where multiple values can be used). Examples:

  • PATH: /lemma=test;lexicalCategory=noun

  • GET: /?lemma=test&lexicalCategory=noun

  • POST (json):

      {
        "lemma": "test",
        "lexicalCategory": "noun"
      }

One of the options wordform/trueCase/lemma/lexicalCategory has to be provided.

oxforddictionaries.stats.frequency.word.source_lang.get({
  "source_lang": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string: IANA language code
    • corpus string: For corpora other than 'nmc' (New Monitor Corpus) please contact [email protected]
    • wordform string: The written form of the word to look up (preserving case e.g., Books vs books)
    • trueCase string: The written form of the word to look up with normalised case (Books --> books)
    • lemma string: The lemma of the word to look up (e.g., Book, booked, books all have the lemma "book")
    • lexicalCategory string: The lexical category of the word(s) to look up (e.g., noun or verb)
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
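
For example (placeholder credentials), the summed frequency of the noun lemma 'test' across all its forms could be requested as below; to count only the exact token regardless of sentence-initial capitalisation, pass trueCase instead of lemma:

    oxforddictionaries.stats.frequency.word.source_lang.get({
      "source_lang": "en",
      "lemma": "test",
      "lexicalCategory": "noun",
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(stats => console.log(stats.result.frequency));  // see StatsWordResult under Definitions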

stats.frequency.words.source_lang.get

This endpoint provides a list of frequencies for a given word or words. Unlike the /word/ endpoint, the results are split into the smallest units. To exclude a specific value, prepend it with the minus sign ('-'). For example, to get frequencies of the lemma 'happy' but exclude superlative forms (i.e., happiest), you could use the options 'lemma=happy;grammaticalFeatures=-degreeType:superlative'. Parameters can be provided in PATH, GET or POST (form or json). The parameters in PATH are overridden by parameters in GET, POST and json (in that order). In PATH, individual options are separated by a semicolon and values are separated by commas (where multiple values can be used). The parameters wordform/trueCase/lemma/lexicalCategory also exist in a plural form, taking a list of items. Examples:

  • PATH: /wordforms=happy,happier,happiest

  • GET: /?wordforms=happy&wordforms=happier&wordforms=happiest

  • POST (json):

      {
          "wordforms": ["happy", "happier", "happiest"]
      }

A more complex example of retrieving frequencies of multiple lemmas:

  {
      "lemmas": ["happy", "content", "cheerful", "cheery", "merry", "joyful", "ecstatic"],
      "grammaticalFeatures": {
          "adjectiveFunctionType": "predicative"
      },
      "lexicalCategory": "adjective",
      "sort": ["lemma", "-frequency"]
  }

Some queries with "collate" or "sort" can exceed the 30s timeout, in which case the API will return an error message with status code 503. You mitigate this by providing additional restrictions such as "minFrequency" and "maxFrequency". You can use the parameters "offset" and "limit" to paginate through large result sets. For convenience, the HTTP header "Link" is set on the response to provide links to "first", "self", "next", "prev" and "last" pages of results (depending on the context). For example, if the result set contains 50 results and the parameter "limit" is set to 25, the Links header will contain an URL for the first 25 results and the next 25 results. Some libraries such as python's requests can parse the header automatically and offer a convenient way of iterating through the results. For example:

    while url:
        r = requests.get(url)
        r.raise_for_status()
        for item in r.json()['results']:
            yield item
        url = r.links.get('next', {}).get('url')

oxforddictionaries.stats.frequency.words.source_lang.get({
  "source_lang": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string: IANA language code
    • corpus string: For corpora other than 'nmc' (New Monitor Corpus) please contact [email protected]
    • wordform string: The written form of the word to look up (preserving case e.g., Book vs book)
    • trueCase string: The written form of the word to look up with normalised case (Books --> books)
    • lemma string: The lemma of the word to look up (e.g., Book, booked, books all have the lemma "book")
    • lexicalCategory string: The lexical category of the word(s) to look up (e.g., adjective or noun)
    • grammaticalFeatures string: The grammatical features of the word(s) to look up entered as a list of k:v (e.g., degree_type:comparative)
    • sort string: sort the resulting list by wordform, trueCase, lemma, lexicalCategory, frequency, normalizedFrequency. Descending order is achieved by prepending the value with the minus sign ('-'). Multiple values can be separated by commas (e.g., sort=lexicalCategory,-frequency)
    • collate string: collate the results by wordform, trueCase, lemma, lexicalCategory. Multiple values can be separated by commas (e.g., collate=trueCase,lemma,lexicalCategory).
    • minFrequency integer: Restrict the query to entries with frequency of at least minFrequency
    • maxFrequency integer: Restrict the query to entries with frequency of at most maxFrequency
    • minNormalizedFrequency number: Restrict the query to entries with a normalized frequency of at least minNormalizedFrequency
    • maxNormalizedFrequency number: Restrict the query to entries with a normalized frequency of at most maxNormalizedFrequency
    • offset integer: pagination - results offset
    • limit integer: pagination - results limit
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
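
A sketch based on the example in the description above: frequencies for the lemma 'happy', excluding superlative forms, sorted by descending frequency. Credentials are placeholders.

    oxforddictionaries.stats.frequency.words.source_lang.get({
      "source_lang": "en",
      "lemma": "happy",
      "grammaticalFeatures": "-degreeType:superlative",  // '-' excludes a value
      "sort": "-frequency",
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(stats => console.log(stats.results));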

wordlist.source_lang.filters_advanced.get

Use this to apply more complex filters to the list of words. For example, you may only want to filter out words for which all senses match the filter, or only its 'prime sense'. You can also filter by word length or match by substring (prefix).

oxforddictionaries.wordlist.source_lang.filters_advanced.get({
  "source_lang": "",
  "filters_advanced": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en, es, hi, lv, sw, ta, gu): IANA language code
    • filters_advanced required string: Semicolon separated list of wordlist parameters, presented as value pairs: LexicalCategory, domains, regions, registers. Parameters can take comma separated list of values. E.g., lexicalCategory=noun,adjective;domains=sport. Number of values limited to 5.
    • exclude string: Semicolon separated list of parameters-value pairs (same as filters). Excludes headwords that have any senses in specified exclusion attributes (lexical categories, domains, etc.) from results.
    • exclude_senses string: Semicolon separated list of parameters-value pairs (same as filters). Excludes only those senses of a particular headword that match specified exclusion attributes (lexical categories, domains, etc.) from results but includes the headword if it has other permitted senses.
    • exclude_prime_senses string: Semicolon separated list of parameters-value pairs (same as filters). Excludes a headword only if the primary sense matches the specified exclusion attributes (registers, domains only).
    • word_length string: Parameter to specify the minimum (>), exact or maximum (<) length of the words required. E.g., >5 - more than 5 chars; <4 - less than 4 chars; >5<10 - from 5 to 10 chars; 3 - exactly 3 chars.
    • prefix string: Filter words that start with prefix parameter
    • exact boolean (values: false, true): If exact=true wordlist returns a list of entries that exactly matches the search string. Otherwise wordlist lists entries that start with prefix string.
    • limit string: Limit the number of results per response. Default and maximum limit is 5000.
    • offset string: Offset the start number of the result.
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
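
An illustrative advanced wordlist query (placeholder credentials; the filter, exclusion and length values are examples only): nouns in the 'sport' domain between 5 and 10 characters long, excluding senses marked with a particular register.

    oxforddictionaries.wordlist.source_lang.filters_advanced.get({
      "source_lang": "en",
      "filters_advanced": "lexicalCategory=noun;domains=sport",
      "word_length": ">5<10",                  // from 5 to 10 characters
      "exclude_senses": "registers=informal",  // illustrative exclusion
      "limit": "100",
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(words => console.log(words.results));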

wordlist.source_lang.filters_basic.get

Use this to retrieve a list of words for particular domain, lexical category, register and/or region. View the full list of possible filters using the filters Utility endpoint. The response only includes headwords, not all their possible inflections. If you require a full wordlist including inflected forms, contact us and we can help.

oxforddictionaries.wordlist.source_lang.filters_basic.get({
  "source_lang": "",
  "filters_basic": "",
  "app_id": "",
  "app_key": ""
}, context)

Input

  • input object
    • source_lang required string (values: en, es, hi, lv, sw, ta, gu): IANA language code
    • filters_basic required string: Semicolon separated list of wordlist parameters, presented as value pairs: LexicalCategory, domains, regions, registers. Parameters can take comma separated list of values. E.g., lexicalCategory=noun,adjective;domains=sport. Number of values limited to 5.
    • limit string: Limit the number of results per response. Default and maximum limit is 5000.
    • offset string: Offset the start number of the result
    • app_id required string: App ID Authentication Parameter
    • app_key required string: App Key Authentication Parameter

Output
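
A basic wordlist lookup using the example filter string from the parameter description above (credentials are placeholders):

    oxforddictionaries.wordlist.source_lang.filters_basic.get({
      "source_lang": "en",
      "filters_basic": "lexicalCategory=noun,adjective;domains=sport",
      "limit": "100",
      "app_id": "YOUR_APP_ID",
      "app_key": "YOUR_APP_KEY"
    }, context).then(words => console.log(words.results));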

Definitions

ArrayOfRelatedEntries

  • ArrayOfRelatedEntries array: A list of written or spoken words
    • items object

CategorizedTextList

  • CategorizedTextList array: various types of notes that appear
    • items object
      • id string: The identifier of the word
      • text required string: A note text
      • type required string: The descriptive category of the text

CrossReferencesList

  • CrossReferencesList array: A reference to another word that is closely related, might provide additional information about the subject, has a variant spelling or is an abbreviated form of it.
    • items object: cross references of a sense
      • id required string: The word id of cooccurrence
      • text required string: The word of cooccurrence
      • type required string: The type of relation between the two words. Possible values are 'close match', 'related', 'see also', 'variant spelling', and 'abbreviation' in case of crossreferences, or 'pre', 'post' in case of collocates.

Entry

ExamplesList

Filters

GrammaticalFeaturesList

  • GrammaticalFeaturesList array: The different forms are correlated with meanings or functions, which we tag as 'features'
    • items object
      • text required string
      • type required string

HeadwordEntry

  • HeadwordEntry object: Description of a word
    • id required string: The identifier of a word
    • language required string: IANA language code
    • lexicalEntries required array: A grouping of various senses in a specific language, and a lexical category that relates to a word
    • pronunciations PronunciationsList
    • type string: The json object type. Could be 'headword', 'inflection' or 'phrase'
    • word required string: A given written or spoken realisation of an entry, lowercased.

HeadwordLemmatron

  • HeadwordLemmatron object: Description of an inflected form of a word
    • id required string: The identifier of a word
    • language required string: IANA language code
    • lexicalEntries required array: A grouping of various senses in a specific language, and a lexical category that relates to a word
    • type string: The json object type. Could be 'headword', 'inflection' or 'phrase'
    • word required string: A given written or spoken realisation of an entry, lowercased.

HeadwordThesaurus

  • HeadwordThesaurus object: description of thesaurus information of a word
    • id required string: The identifier of a word
    • language required string: IANA language code
    • lexicalEntries required array: A grouping of various senses in a specific language, and a lexical category that relates to a word
    • type string: The json object type. Could be 'headword', 'inflection' or 'phrase'
    • word required string: A given written or spoken realisation of an entry, lowercased.

InflectionsList

  • InflectionsList array: A grouping of the modifications of a word to express different grammatical categories
    • items object
      • id required string: The identifier of the word
      • text required string

Languages

  • Languages object: Schema for the languages endpoint.
    • metadata object: Additional Information provided by OUP
    • results array: A list of languages available.
      • items object
        • region string: Name of region.
        • source string: Name of source dictionary.
        • sourceLanguage object: Source language of the results
          • id string: IANA language code
          • language string: Language label.
        • targetLanguage object: Translation language of the results
          • id string: IANA language code
          • language string: Language label.
        • type string (values: monolingual, bilingual): whether monolingual or bilingual.

Lemmatron

  • Lemmatron object: Schema for the inflections endpoint.
    • metadata object: Additional Information provided by OUP
    • results array: A list of inflections matching a given word

LemmatronLexicalEntry

  • LemmatronLexicalEntry object: Description of an entry for a particular part of speech and grammatical features
    • grammaticalFeatures GrammaticalFeaturesList
    • inflectionOf required InflectionsList
    • language required string: IANA language code
    • lexicalCategory required string: A linguistic category of words (or more precisely lexical items), generally defined by the syntactic or morphological behaviour of the lexical item in question, such as noun or verb
    • text required string: A given written or spoken realisation of an entry.

NgramsResult

  • NgramsResult object: Schema for corpus ngrams.
    • metadata object: Additional Information provided by OUP
    • results array: A list of found ngrams along with their frequencies
      • items object: Ngrams matching the given options
        • frequency required integer: The number of times the ngram (a sequence of n words) appears in the corpus
        • tokens required array: A list of tokens
          • items string

PronunciationsList

  • PronunciationsList array: A list of possible pronunciations of a word
    • items object: A grouping of pronunciation information
      • audioFile string: The URL of the sound file
      • dialects arrayofstrings
      • phoneticNotation string: The alphabetic system used to display the phonetic spelling
      • phoneticSpelling string: Phonetic spelling is the representation of vocal sounds which express pronunciations of words. It is a system of spelling in which each letter represents invariably the same spoken sound
      • regions arrayofstrings

Regions

  • Regions object: Schema for region endpoint.
    • metadata object: Additional Information provided by OUP
    • results object: A mapping of regions available.

RetrieveEntry

  • RetrieveEntry object: Schema for the 'entries' endpoints
    • metadata object: Additional Information provided by OUP
    • results array: A list of entries and all the data related to them

Sense

SentencesEntry

  • SentencesEntry object: Description of a word
    • id required string: The identifier of a word
    • language required string: IANA language code
    • lexicalEntries required array: A grouping of various senses in a specific language, and a lexical category that relates to a word
    • type string: The json object type. Could be 'headword', 'inflection' or 'phrase'
    • word required string: A given written or spoken realisation of an entry, lowercased.

SentencesLexicalEntry

  • SentencesLexicalEntry object: Description of an entry for a particular part of speech
    • grammaticalFeatures GrammaticalFeaturesList
    • language required string: IANA language code
    • lexicalCategory string: A linguistic category of words (or more precisely lexical items), generally defined by the syntactic or morphological behaviour of the lexical item in question, such as noun or verb
    • sentences required ExamplesList
    • text required string: A given written or spoken realisation of an entry.

SentencesResults

  • SentencesResults object: Schema for the 'sentences' endpoint
    • metadata object: Additional Information provided by OUP
    • results array: A list of entries and all the data related to them

StatsWordResult

  • StatsWordResult object: Schema for lexi-stats results for a word/trueCase/lemma/lexicalCategory returned as a frequency
    • metadata object: Additional Information provided by OUP
    • result object: Frequency information for a given entity
      • frequency required integer: The number of times a word appears in the entire corpus
      • lemma string: A lemma of the word (e.g., the wordforms "lay", "laid" and "laying" all have the lemma "lay")
      • lexicalCategory string: A lexical category such as 'verb' or 'noun'
      • matchCount required integer: The number of database records that matched the query params (stated frequency is the sum of the individual frequencies)
      • normalizedFrequency required integer: The number of times a word appears on average in 1 million words
      • trueCase string: A given written realisation of an entry (e.g., "lay"), usually lower case
      • wordform string: A given written realisation of an entry (e.g., "Lay"), preserving case

StatsWordResultList

  • StatsWordResultList object: Schema for lexi-stats results for a word/trueCase/lemma/lexicalCategory returned as a list of frequencies per wordform-trueCase-lemma-lexicalCategory entry.
    • metadata object: Additional Information provided by OUP
    • results array: A list of found words along with their frequencies
      • items object: Statistical information about a word
        • frequency required integer: The number of times a word appears in the entire corpus
        • lemma required string: A lemma of the word.
        • lexicalCategory required string: A lexical category such as 'verb' or 'noun'
        • normalizedFrequency required integer: The number of times a word appears on average in 1 million words
        • trueCase required string: A given written realisation of an entry (e.g., "lay"), usually lower case
        • wordform required string: A given written realisation of an entry (e.g., "lay"), preserving case

SynonymsAntonyms

Thesaurus

  • Thesaurus object: Schema for thesaurus endpoint
    • metadata object: Additional Information provided by OUP
    • results array: A list of found synonyms or antonyms

ThesaurusEntry

  • ThesaurusEntry object
    • homographNumber string: Identifies the homograph grouping. The last two digits identify different entries of the same homograph. The first one/two digits identify the homograph number.
    • senses array: Complete list of senses
    • variantForms VariantFormsList

ThesaurusLexicalEntry

  • ThesaurusLexicalEntry object: Description of an entry for a particular part of speech
    • entries array
    • language required string: IANA language code
    • lexicalCategory required string: A linguistic category of words (or more precisely lexical items), generally defined by the syntactic or morphological behaviour of the lexical item in question, such as noun or verb
    • text required string: A given written or spoken realisation of an entry.
    • variantForms VariantFormsList

ThesaurusSense

TranslationsList

UtilityLabels

  • UtilityLabels object: Schema for lexicalcategories, registers utility endpoints.
    • metadata object: Additional Information provided by OUP
    • results object: Mapping of labels available.

VariantFormsList

  • VariantFormsList array: Various words that are used interchangeably depending on the context, e.g. 'aluminium' and 'aluminum'

Wordlist

  • Wordlist object: Schema for wordlist endpoint.
    • metadata object: Additional Information provided by OUP
    • results array: A list of found words
      • items object: Description of found word
        • id required string: The identifier of a word
        • matchString string
        • matchType string
        • region string: Name of region.
        • word required string: A given written or spoken realisation of an entry, lowercased.

arrayofstrings

  • arrayofstrings array
    • items string

lexicalEntry

  • lexicalEntry object: Description of an entry for a particular part of speech

thesaurusLink

  • thesaurusLink object: Link to a sense of a specific entry in the thesaurus Dictionary
    • entry_id required string: identifier of a word
    • sense_id required string: identifier of a sense