tfjsmodel-model
v0.0.4-rv1
This package originates from https://github.com/tensorflow/tfjs-models.git, with some packages upgraded.
Universal Sentence Encoder lite
The Universal Sentence Encoder (Cer et al., 2018) (USE) is a model that encodes text into 512-dimensional embeddings. These embeddings can then be used as inputs to natural language processing tasks such as sentiment classification and textual similarity analysis.
This module is a TensorFlow.js FrozenModel converted from the USE lite module on TFHub, a lightweight version of the original. The lite model is based on the Transformer (Vaswani et al., 2017) architecture and uses an 8k word-piece vocabulary.
In this demo we embed six sentences with the USE, and render their self-similarity scores in a matrix (redder means more similar).
The matrix shows that USE embeddings can be used to cluster sentences by similarity.
The sentences (taken from the TensorFlow Hub USE lite colab):
- I like my phone.
- Your cellphone looks great.
- How old are you?
- What is your age?
- An apple a day, keeps the doctors away.
- Eating strawberries is healthy.
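The self-similarity scores in the matrix are pairwise cosine similarities between sentence embeddings. As a minimal plain-JavaScript sketch (the function names and the toy 3-dimensional vectors are illustrative, not part of the library; real USE embeddings are 512-dimensional arrays extracted from the embeddings tensor):

```javascript
// Cosine similarity between two embedding vectors (plain arrays).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Build the full self-similarity matrix for a list of embeddings.
function selfSimilarityMatrix(embeddings) {
  return embeddings.map(a => embeddings.map(b => cosineSimilarity(a, b)));
}

const toyEmbeddings = [
  [1, 0, 0],
  [0.9, 0.1, 0],  // close to the first vector -> high similarity
  [0, 0, 1]       // orthogonal to the first -> similarity 0
];
const matrix = selfSimilarityMatrix(toyEmbeddings);
console.log(matrix[0][0].toFixed(2)); // 1.00 (every sentence is maximally similar to itself)
console.log(matrix[0][2].toFixed(2)); // 0.00
```

Sentence pairs like "How old are you?" / "What is your age?" produce embeddings with high cosine similarity, which is why they cluster together in the matrix.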
Installation
Using yarn:

$ yarn add @tensorflow/tfjs @tensorflow-models/universal-sentence-encoder
Using npm:

$ npm install @tensorflow/tfjs @tensorflow-models/universal-sentence-encoder
Usage
To import in npm:
import * as use from '@tensorflow-models/universal-sentence-encoder';
or as a standalone script tag:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/universal-sentence-encoder"></script>
Then:
// Load the model.
use.load().then(model => {
  // Embed an array of sentences.
  const sentences = [
    'Hello.',
    'How are you?'
  ];
  model.embed(sentences).then(embeddings => {
    // `embeddings` is a 2D tensor consisting of the 512-dimensional
    // embeddings for each sentence. So in this example `embeddings`
    // has the shape [2, 512].
    embeddings.print(true /* verbose */);
  });
});
To use the Tokenizer separately:
use.loadTokenizer().then(tokenizer => {
  tokenizer.encode('Hello, how are you?'); // [341, 4125, 8, 140, 31, 19, 54]
});