
N-Gram

An N-gram is a sequence of N words: a 2-gram (or bigram) is a two-word sequence of words like “lütfen ödevinizi”, “ödevinizi çabuk”, or “çabuk veriniz”, and a 3-gram (or trigram) is a three-word sequence of words like “lütfen ödevinizi çabuk”, or “ödevinizi çabuk veriniz”.
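As a standalone illustration (independent of this library's API), the following sketch shows how the n-grams of a token sequence are formed:

function extractNGrams(tokens: string[], n: number): string[][] {
    // Collect every contiguous run of n tokens.
    const nGrams: string[][] = [];
    for (let i = 0; i + n <= tokens.length; i++) {
        nGrams.push(tokens.slice(i, i + n));
    }
    return nGrams;
}

// Prints: [ 'lütfen ödevinizi', 'ödevinizi çabuk', 'çabuk veriniz' ]
console.log(extractNGrams(["lütfen", "ödevinizi", "çabuk", "veriniz"], 2).map(g => g.join(" ")));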

Smoothing

To keep a language model from assigning zero probability to unseen events, we’ll have to shave off a bit of probability mass from some more frequent events and give it to the events we’ve never seen. This modification is called smoothing or discounting.

Laplace Smoothing

The simplest way to do smoothing is to add one to all the bigram counts, before we normalize them into probabilities. All the counts that used to be zero will now have a count of 1, the counts of 1 will be 2, and so on. This algorithm is called Laplace smoothing.
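In the standard formulation (not specific to this library), with C(·) the raw counts and V the vocabulary size, the Laplace-smoothed bigram estimate is:

$$P_{\text{Laplace}}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\,w_i) + 1}{C(w_{i-1}) + V}$$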

Add-k Smoothing

One alternative to add-one smoothing is to move a bit less of the probability mass from the seen to the unseen events. Instead of adding 1 to each count, we add a fractional count k. This algorithm is therefore called add-k smoothing.
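The corresponding add-k estimate (again the standard formulation, not library-specific) is:

$$P_{\text{Add-}k}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\,w_i) + k}{C(w_{i-1}) + kV}$$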


For Developers

You can also check out the Python, Java, C++, Swift, Cython, or C# repositories for the same library.

Requirements

Node.js

To check if you have a compatible version of Node.js installed, use the following command:

node -v

You can find the latest version of Node.js at nodejs.org.

Git

Install the latest version of Git.

Npm Install

npm install nlptoolkit-ngram

Download Code

To work on the code, create a fork from the GitHub page, then use Git to clone your fork locally:

git clone <your-fork-git-link>

A directory called ngram-js will be created. Alternatively, you can clone the original repository to explore the code:

git clone https://github.com/starlangsoftware/ngram-js.git

Open the project with the WebStorm IDE

Steps for opening the cloned project:

  • Start the IDE
  • Select File | Open from the main menu
  • Choose the NGram-Js directory
  • Select the "Open as Project" option
  • After a couple of seconds, the dependencies will be downloaded automatically.

Detailed Description

Training NGram

To create an empty NGram model:

NGram(N: number)

For example,

let a = new NGram(2)

creates an empty bigram model (N = 2).

To add a sentence to the NGram model:

addNGramSentence(symbols: Array<Symbol>)

For example,

let nGram = new NGram(2)
nGram.addNGramSentence(["jack", "read", "books", "john", "mary", "went"])
nGram.addNGramSentence(["jack", "read", "books", "mary", "went"])

With the lines above, an empty NGram model is created and two sentences are added to the bigram model.

The NoSmoothing class is the simplest smoothing technique and doesn't require training: probabilities are calculated directly from the raw counts. For example, to calculate the probabilities of a given NGram model using NoSmoothing:

a.calculateNGramProbabilitiesSimple(new NoSmoothing())

The LaplaceSmoothing class is a simple smoothing technique that doesn't require training either: probabilities are calculated by adding 1 to each count. For example, to calculate the probabilities of a given NGram model using LaplaceSmoothing:

a.calculateNGramProbabilitiesSimple(new LaplaceSmoothing())

The GoodTuringSmoothing class is a more complex smoothing technique that also doesn't require training. To calculate the probabilities of a given NGram model using GoodTuringSmoothing:

a.calculateNGramProbabilitiesSimple(new GoodTuringSmoothing())
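For reference, the standard Good-Turing estimate (the textbook formulation, not necessarily the exact computation in this implementation) replaces a raw count c by an adjusted count c*, where N_c is the number of n-grams occurring exactly c times:

$$c^{*} = (c + 1)\,\frac{N_{c+1}}{N_{c}}$$

and assigns the unseen events a total probability mass of N_1 / N, where N is the total number of observed n-grams.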

The AdditiveSmoothing class is a smoothing technique that requires training: the additive constant is estimated from a training corpus (trainedCorpus below, assumed to be a list of sentences).

a.calculateNGramProbabilitiesTrained(trainedCorpus, new AdditiveSmoothing())

Using NGram

To find the probability of an NGram:

getProbability(...symbols: Array<Symbol>): number

For example, to find the bigram probability:

a.getProbability("jack", "reads")

To find the trigram probability:

a.getProbability("jack", "reads", "books")