
corenlp v1.5.2

A NodeJS CoreNLP library


CoreNLP for NodeJS

This library helps you build NodeJS/Web applications using the state-of-the-art technology for Natural Language Processing: Stanford CoreNLP. It is compatible with the latest release of CoreNLP, 3.9.0.

This project is under active development; please stay tuned for updates. More documentation and examples are coming.

Example

Assuming that StanfordCoreNLPServer is running on http://localhost:9000:

import CoreNLP, { Properties, Pipeline } from 'corenlp';

const props = new Properties({
  annotators: 'tokenize,ssplit,pos,lemma,ner,parse',
});
const pipeline = new Pipeline(props, 'English'); // uses ConnectorServer by default

const sent = new CoreNLP.simple.Sentence('The little dog runs so fast.');
pipeline.annotate(sent)
  .then(sent => {
    console.log('parse', sent.parse());
    console.log(CoreNLP.util.Tree.fromSentence(sent).dump());
  })
  .catch(err => {
    console.log('err', err);
  });

API

Read the full API documentation.

Setup

1. Install the package:

npm i --save corenlp

2. Download Stanford CoreNLP

2.1. Shortcut (recommended to give this library a first try)

After installing this library, run the following command from your own project:

npm explore corenlp -- npm run corenlp:download

Once downloaded, you can start the server by running:

npm explore corenlp -- npm run corenlp:server

Or you can manually download the package from Stanford's CoreNLP download section at https://stanfordnlp.github.io/CoreNLP/download.html. Apart from the full package, you may also want to download additional language models (see more on that page).

2.2. Via sources

For advanced projects that require more customization, we highly recommend downloading Stanford CoreNLP from the original repository and compiling the source code with ant jar.

NOTE: Some functionality included in this library (TokensRegex, Semgrex and Tregex) requires the latest version from that repository, which contains fixes needed by this library that are not included in the latest stable release.

3. Configure Stanford CoreNLP

There are two methods to connect your NodeJS application to Stanford CoreNLP:

  1. Via HTTP (recommended): CoreNLP initializes just once and then serves many requests. This also avoids extra I/O, since the CLI method needs to write temporary files on every run.
  2. Via the Command Line Interface: the library spawns CoreNLP processes from your app.

3.1. Using StanfordCoreNLPServer

# Run the server using all jars in the current directory (e.g., the CoreNLP home directory)
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000

CoreNLP connects by default via StanfordCoreNLPServer, using port 9000. You can also opt to set up the connection differently:

import CoreNLP, { Properties, Pipeline, ConnectorServer } from 'corenlp';

const connector = new ConnectorServer({ dsn: 'http://localhost:9000' });
const props = new Properties({
  annotators: 'tokenize,ssplit,pos,lemma,ner,parse',
});
const pipeline = new Pipeline(props, 'English', connector);

3.2. Use CoreNLP via CLI

By default, CoreNLP expects the Stanford CoreNLP package to be placed (unzipped) inside the path ${YOUR_NPM_PROJECT_ROOT}/corenlp/. You can also opt to set up the CLI interface differently:

import CoreNLP, { Properties, Pipeline, ConnectorCli } from 'corenlp';

const connector = new ConnectorCli({
  classPath: 'corenlp/stanford-corenlp-full-2017-06-09/*', // specify the paths relative to your npm project root
  mainClass: 'edu.stanford.nlp.pipeline.StanfordCoreNLP', // optional
  props: 'StanfordCoreNLP-spanish.properties', // optional
});
const props = new Properties({
  annotators: 'tokenize,ssplit,pos,lemma,ner,parse',
});
const pipeline = new Pipeline(props, 'English', connector);

4. Usage

4.1 Pipeline

// ... include dependencies

const props = new Properties({ annotators: 'tokenize,ssplit,lemma,pos,ner' });
const pipeline = new Pipeline(props, 'English', connector);
const sent = new CoreNLP.simple.Sentence('Hello world');
pipeline.annotate(sent)
  .then(sent => {
    console.log(sent.words());
    console.log(sent.nerTags());
  })
  .catch(err => {
    console.log('err', err);
  });

4.2 Penn TreeBank traversing

// ... include dependencies

const props = new Properties();
props.setProperty('annotators', 'tokenize,ssplit,pos,lemma,ner,parse');
const pipeline = new Pipeline(props, 'Spanish');

const sent = new CoreNLP.simple.Sentence('Jorge quiere cinco empanadas de queso y carne.');
pipeline.annotate(sent)
  .then(sent => {
    console.log('parse', sent.parse()); // constituency parsing string representation
    const tree = CoreNLP.util.Tree.fromSentence(sent);
    tree.visitLeaves(node =>
      console.log(node.word(), node.pos(), node.token().ner()));
    console.log(tree.dump());
  })
  .catch(err => {
    console.log('err', err);
  });

4.3 TokensRegex, Tregex and Semgrex

// ... include dependencies

const props = new Properties();
props.setProperty('annotators', 'tokenize,ssplit,regexner,depparse');
const expression = new CoreNLP.simple.Expression(
  'John Snow eats snow.',
  '{ner:PERSON}=who <nsubj ({pos:VBZ}=action >dobj {}=what)');
const pipeline = new Pipeline(props, 'English');

pipeline.annotateSemgrex(expression, true)  // similarly use pipeline.annotateTokensRegex / pipeline.annotateTregex
  .then(expression => expression.sentence(0).matches().map(match => {
      console.log('match', match.group('who'), match.group('action'), match.group('what'));
  }))
  .catch(err => {
    console.log('err', err);
  });
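
As a sketch of the TokensRegex variant mentioned in the comment above, assuming the pattern syntax and the result shape mirror the Semgrex example (the annotators, the pattern, and the matches()/group() calls below are assumptions, not taken from the API docs):

// ... include dependencies

const props = new Properties();
props.setProperty('annotators', 'tokenize,ssplit,pos,lemma,ner');
// Assumption: TokensRegex named-capture syntax (?$name ...) and the same
// Expression / matches() / group() shape as the Semgrex example above.
const expression = new CoreNLP.simple.Expression(
  'John Snow eats snow.',
  '(?$who [{ner:PERSON}]+) /eats/');
const pipeline = new Pipeline(props, 'English');

pipeline.annotateTokensRegex(expression, true)
  .then(expression => expression.sentence(0).matches().forEach(match => {
    console.log('match', match.group('who'));
  }))
  .catch(err => {
    console.log('err', err);
  });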

5. Client Side

This library is isomorphic, which means it also works in a browser. The API is exactly the same, and you can use it directly by requiring it via a <script> tag, using AMD (RequireJS), or within your app bundle.

The browser-ready version of corenlp can be found at dist/index.browser.min.js once built (npm run build).
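
As a minimal sketch of in-bundle usage, assuming a bundler such as webpack (the bundler choice, server URL, and sample sentence below are illustrative assumptions), the API calls are the same as in the Node examples; the browser still needs a reachable StanfordCoreNLPServer:

// browser-entry.js -- bundle this file with your tool of choice (e.g. webpack)
import CoreNLP, { Properties, Pipeline, ConnectorServer } from 'corenlp';

// The browser build still talks to a running StanfordCoreNLPServer (example URL).
const connector = new ConnectorServer({ dsn: 'http://localhost:9000' });
const props = new Properties({
  annotators: 'tokenize,ssplit,pos',
});
const pipeline = new Pipeline(props, 'English', connector);

const sent = new CoreNLP.simple.Sentence('Browsers can parse too.');
pipeline.annotate(sent)
  .then(sent => console.log(sent.words()))
  .catch(err => console.log('err', err));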

See the examples folder for more details.

6. External Documentation

Properties
Pipeline
Service
ConnectorServer                   # https://stanfordnlp.github.io/CoreNLP/corenlp-server.html
ConnectorCli                      # https://stanfordnlp.github.io/CoreNLP/cmdline.html
CoreNLP
  simple                          # https://stanfordnlp.github.io/CoreNLP/simple.html
    Annotable
    Annotator
    Document
    Sentence
    Token
    annotator                     # https://stanfordnlp.github.io/CoreNLP/annotators.html
      TokenizerAnnotator          # https://stanfordnlp.github.io/CoreNLP/tokenize.html
      WordsToSentenceAnnotator    # https://stanfordnlp.github.io/CoreNLP/ssplit.html
      POSTaggerAnnotator          # https://stanfordnlp.github.io/CoreNLP/pos.html
      MorphaAnnotator             # https://stanfordnlp.github.io/CoreNLP/lemma.html
      NERClassifierCombiner       # https://stanfordnlp.github.io/CoreNLP/ner.html
      ParserAnnotator             # https://stanfordnlp.github.io/CoreNLP/parse.html
      DependencyParseAnnotator    # https://stanfordnlp.github.io/CoreNLP/depparse.html
      RelationExtractorAnnotator  # https://stanfordnlp.github.io/CoreNLP/relation.html
      CorefAnnotator              # https://stanfordnlp.github.io/CoreNLP/coref.html
      SentimentAnnotator          # https://stanfordnlp.github.io/CoreNLP/sentiment.html - Coming soon...
      RelationExtractorAnnotator  # https://stanfordnlp.github.io/CoreNLP/relation.html - TODO
      NaturalLogicAnnotator       # https://stanfordnlp.github.io/CoreNLP/natlog.html - TODO
      QuoteAnnotator              # https://stanfordnlp.github.io/CoreNLP/quote.html - TODO
  util
    Tree                          # http://www.cs.cornell.edu/courses/cs474/2004fa/lec1.pdf

7. References

This library is not maintained by StanfordNLP. However, it's based on and depends on StanfordNLP/CoreNLP to function.

7.1 Stanford CoreNLP Reference

Manning, Christopher D., Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 55-60.