LIBMF Node
LIBMF - large-scale sparse matrix factorization - for Node.js
Installation
Run:
npm install libmf
Getting Started
Prep your data in the format rowIndex, columnIndex, value
import { Matrix } from 'libmf';
const data = new Matrix();
data.push(0, 0, 5.0);
data.push(0, 2, 3.5);
data.push(1, 1, 4.0);
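For example, a minimal sketch of building a Matrix from an in-memory array of [rowIndex, columnIndex, value] entries (the ratings array below is hypothetical):
import { Matrix } from 'libmf';

// Hypothetical ratings: [rowIndex, columnIndex, value]
const ratings = [
  [0, 0, 5.0],
  [0, 2, 3.5],
  [1, 1, 4.0]
];

const data = new Matrix();
for (const [row, col, value] of ratings) {
  data.push(row, col, value);
}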
Create a model
import { Model } from 'libmf';
const model = new Model();
model.fit(data);
Make predictions
model.predict(rowIndex, columnIndex);
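For instance, using the example data above, a sketch of predicting the entry at row 0, column 2:
// Predict the value at row 0, column 2 of the training matrix
const prediction = model.predict(0, 2);
console.log(prediction);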
Get the latent factors (these approximate the training matrix)
model.p();
model.q();
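A sketch of how the factors approximate the matrix, assuming p() and q() each return one factor array per row and column index (this return format is an assumption, not confirmed here):
// Assumption: P[i] and Q[j] are arrays of length `factors`
const P = model.p();
const Q = model.q();

// The dot product of a row factor and a column factor
// approximates the corresponding entry of the training matrix
let approx = 0;
for (let k = 0; k < P[0].length; k++) {
  approx += P[0][k] * Q[2][k];
}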
Get the bias (average of all elements in the training matrix)
model.bias();
Save the model to a file
model.save('model.txt');
Load the model from a file
const model = Model.load('model.txt');
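A loaded model can make predictions without refitting, as in this sketch:
const restored = Model.load('model.txt');
restored.predict(0, 2); // should match the original model's prediction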
Pass a validation set
model.fit(data, evalSet);
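The validation set uses the same rowIndex, columnIndex, value format as the training data; for example:
// Build a validation set the same way as the training data
const evalSet = new Matrix();
evalSet.push(2, 0, 4.0);
evalSet.push(2, 1, 3.0);

model.fit(data, evalSet);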
Destroy the model
model.destroy();
Cross-Validation
Perform cross-validation
model.cv(data);
Specify the number of folds
model.cv(data, 5);
Parameters
Pass parameters - default values below
import { Loss } from 'libmf';
new Model({
  loss: Loss.REAL_L2, // loss function
  factors: 8, // number of latent factors
  threads: 12, // number of threads used
  bins: 25, // number of bins
  iterations: 20, // number of iterations
  lambdaP1: 0, // coefficient of L1-norm regularization on P
  lambdaP2: 0.1, // coefficient of L2-norm regularization on P
  lambdaQ1: 0, // coefficient of L1-norm regularization on Q
  lambdaQ2: 0.1, // coefficient of L2-norm regularization on Q
  learningRate: 0.1, // learning rate
  alpha: 1, // importance of negative entries
  c: 0.0001, // desired value of negative entries
  nmf: false, // perform non-negative MF (NMF)
  quiet: false // no outputs to stdout
});
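Only the parameters you want to override need to be passed; for example, a sketch of non-negative factorization with KL-divergence loss:
// Override a few parameters; the rest keep their defaults
const model = new Model({
  loss: Loss.REAL_KL,
  factors: 20,
  nmf: true,
  quiet: true
});
model.fit(data);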
Loss Functions
For real-valued matrix factorization
- Loss.REAL_L2 - squared error (L2-norm)
- Loss.REAL_L1 - absolute error (L1-norm)
- Loss.REAL_KL - generalized KL-divergence
For binary matrix factorization
- Loss.BINARY_LOG - logarithmic error
- Loss.BINARY_L2 - squared hinge loss
- Loss.BINARY_L1 - hinge loss
For one-class matrix factorization
- Loss.ONE_CLASS_ROW - row-oriented pair-wise logarithmic loss
- Loss.ONE_CLASS_COL - column-oriented pair-wise logarithmic loss
- Loss.ONE_CLASS_L2 - squared error (L2-norm)
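For example, a sketch of setting up binary matrix factorization (this assumes binary labels are encoded as positive and negative values):
import { Model, Loss } from 'libmf';

// Assumption: entries in `data` carry positive/negative binary labels
const model = new Model({ loss: Loss.BINARY_LOG });
model.fit(data);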
Metrics
Calculate RMSE (for real-valued MF)
model.rmse(data);
Calculate MAE (for real-valued MF)
model.mae(data);
Calculate generalized KL-divergence (for non-negative real-valued MF)
model.gkl(data);
Calculate logarithmic loss (for binary MF)
model.logloss(data);
Calculate accuracy (for binary MF)
model.accuracy(data);
Calculate MPR (for one-class MF)
model.mpr(data, transpose);
Calculate AUC (for one-class MF)
model.auc(data, transpose);
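For example, a sketch of evaluating a fitted real-valued model on a held-out set (the testSet below is hypothetical):
// Hypothetical held-out set in the same format as the training data
const testSet = new Matrix();
testSet.push(1, 0, 2.5);
testSet.push(0, 1, 4.5);

console.log(model.rmse(testSet)); // root mean squared error
console.log(model.mae(testSet)); // mean absolute error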
History
View the changelog
Contributing
Everyone is encouraged to help improve this project. Here are a few ways you can help:
- Report bugs
- Fix bugs and submit pull requests
- Write, clarify, or fix documentation
- Suggest or add new features
To get started with development:
git clone https://github.com/ankane/libmf-node.git
cd libmf-node
npm install
npm run vendor
npm test