Neural Network
Installing
npm i @death_raider/neural-network
About
This is an easy-to-use neural network package that trains with stochastic gradient descent (SGD), using backpropagation to compute the gradients.
Creating the model
const NeuralNetwork = require("@death_raider/neural-network").NeuralNetwork
//creates an ANN with 2 input nodes, 1 hidden layer with 2 nodes, and 1 output node
let network = new NeuralNetwork({
input_nodes : 2,
layer_count : [2],
output_nodes : 1,
weight_bias_initilization_range : [-1,1]
});
The activations for the hidden and output layers default to leaky ReLU and sigmoid respectively, but they can be changed:
//format for activation function = [ function , derivative of function ]
network.Activation.hidden = [(x)=>1/(1+Math.exp(-x)),(x)=>x*(1-x)] //sets the hidden-layer activation to sigmoid (the derivative is written in terms of the activation's output)
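The output activation can presumably be changed the same way; a minimal sketch, assuming a network.Activation.output slot that mirrors the hidden one:
//assumption: the output activation uses the same [ function , derivative of function ] format
network.Activation.output = [(x)=>x,(x)=>1] //identity activation for the output layer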
Training, Testing and Using
For this example we'll be testing it on the XOR function.
There are 2 ways we can go about training:
- Inbuilt Function
function xor(){
let inp = [Math.floor(Math.random()*2),Math.floor(Math.random()*2)]; //random inputs 0 or 1 per cell
let out = (inp.reduce((a,b)=>a+b)%2 == 0)?[0]:[1]; //if even number of 1's in input then 0 else 1 as output
return [inp,out]; //train or validation functions should have [input,output] format
}
network.train({
TotalTrain : 1e+6, //total data for training (not epochs)
batch_train : 1, //batch size for training
trainFunc : xor, //training function to get data
TotalVal : 1000, //total data for validation (not epochs)
batch_val : 1, //batch size for validation
validationFunc : xor, //validation function to get data
learning_rate : 0.1, //learning rate (default = 0.0000001)
momentum : 0.9 // momentum for SGD
});
The trainFunc and validationFunc receive the batch iteration and the current epoch as arguments, which can be used inside those functions.
NOTE: The validationFunc is called AFTER the training is done
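For example, a data function could use those arguments to log progress; a sketch assuming the arguments arrive in the order (batch iteration, epoch) as described above:
function xorLogged(batchIteration, epoch){
  //same XOR data as before, but logs whenever a new epoch starts (argument order is an assumption)
  if(batchIteration == 0) console.log("starting epoch", epoch);
  let inp = [Math.floor(Math.random()*2),Math.floor(Math.random()*2)];
  let out = (inp.reduce((a,b)=>a+b)%2 == 0)?[0]:[1];
  return [inp,out];
}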
Now to see the average validation loss:
console.log("Average Validation Loss ->",network.Loss.Validation_Loss.reduce((a,b)=>a+b)/network.Loss.Validation_Loss.length);
// Result after running it a few times
// Average Validation Loss -> 0.00004760326022482792
// Average Validation Loss -> 0.000024864418333478723
// Average Validation Loss -> 0.000026908106414283446
- Iterative
for(let i = 0; i < 10000; i++){
let [inputs,outputs] = xor()
let dnn = network.trainIteration({
input : inputs,
desired : outputs,
})
network.update(dnn.Updates.updatedWeights,dnn.Updates.updatedBias,0.1)
console.log(dnn.Cost,dnn.layers); //optional to view the loss and the hidden layers
}
// output after 10k iterations
// 0.00022788194782669534 [
// [ 1, 1 ],
// [ 0.6856085043616054, -0.6833685003507397 ],
// [ 0.021348627488749498 ]
// ]
This iterative method can be used for visualizations, dynamic learning rates, etc.
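For example, a decaying learning rate can be passed to network.update on every iteration (a sketch built only from the calls shown above):
for(let i = 0; i < 10000; i++){
  let [inputs,outputs] = xor();
  let dnn = network.trainIteration({ input : inputs, desired : outputs });
  let lr = 0.1/(1 + 1e-4*i); //learning rate decays as training progresses
  network.update(dnn.Updates.updatedWeights,dnn.Updates.updatedBias,lr);
}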
To use the network:
// network.use(inputs) --> returns the hidden node values as well
let output = [ //truth table for xor gate
network.use([0,0]),
network.use([0,1]),
network.use([1,0]),
network.use([1,1])
]
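Since use returns the hidden node values as well, the final prediction is presumably the last entry of each returned structure (mirroring the dnn.layers output shown earlier); a sketch under that assumption:
//assumption: use() returns per-layer activations like dnn.layers above
let predictions = output.map(layers => layers[layers.length-1]);
console.log(predictions); //the output-node values for each row of the truth table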
To get the gradients w.r.t. the inputs (questionable; correct me if the values are wrong):
console.log( network.getInputGradients() );
Saving and Loading Models
This package allows the parameters (weights and biases) to be saved to file(s) and unpacked later, so pretrained models can be reused. Saving the model couldn't be simpler:
network.save(path)
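For example, assuming path is an ordinary file-system path string:
network.save("./xor_model") //hypothetical path; writes the weight/bias file(s) there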
Loading the model is asynchronous:
const {NeuralNetwork} = require("@death_raider/neural-network")
let network = new NeuralNetwork({
input_nodes : 2,
layer_count : [2],
output_nodes : 1,
weight_bias_initilization_range : [-1,1]
});
(async () =>{
await network.load(path) //make sure network is of correct structure
let output = [
network.use([0,0]), //each call returns the hidden node values as well
network.use([0,1]),
network.use([1,0]),
network.use([1,1])
]
})()
Linear Algebra
This class is not as optimized as it could be, but the implementations of certain functions follow the traditional methods of solving them. Those functions are marked with the * symbol.
Base function
The base function (basefunc) is a recursive function that takes 3 parameters: a, b, and opt, where a is an array, b is any object (possibly another array), and opt is a function. basefunc walks over every element of a (and of b, if b is an array) and passes those elements to the user-defined opt function. opt takes 2 parameters and can return any object.
const {LinearAlgebra} = require("@death_raider/neural-network")
const linearA = new LinearAlgebra()
let a = [
[1,2,3,4],
[5,6,7,8]
]
let b = [
[8,7,6,5],
[4,3,2,1]
]
function foo(p,q){
return p*q
}
console.log(linearA.basefunc(a,b,foo))
// [ [ 8, 14, 18, 20 ], [ 20, 18, 14, 8 ] ]
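Since b need not be an array, the same call can presumably broadcast a plain number over every element of a; a sketch based on the description above:
//assumption: a non-array b is passed unchanged to opt for every element of a
console.log(linearA.basefunc(a, 2, foo))
// expected: [ [ 2, 4, 6, 8 ], [ 10, 12, 14, 16 ] ]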
Matrix Manipulation
# Convolution
This class can compute the convolution of a 3-dimensional array with a 4-dimensional filter using the im2row operator; more details can be found here. Aside from convolution, it also provides the input gradients and updates the filter based on the previous gradients and a learning rate.
const {Convolution, LinearAlgebra} = require("@death_raider/neural-network")
const conv = new Convolution
let input = [[
[0,0,1,1,0,0],
[0,0,1,1,0,0],
[1,1,1,1,1,1],
[1,1,1,1,1,1],
[0,0,1,1,0,0],
[0,0,1,1,0,0]
]] // shape -> 1x6x6
let filter = [
[[
[0,1,0],
[0,1,0],
[0,1,0]
]],
[[
[0,0,0],
[1,1,1],
[0,0,0]
]]
] // shape -> 2x1x3x3
let output = conv.convolution(input,filter,true,(x)=>x)
console.log(output)
// [
// [
// [ 1, 3, 3, 1 ],
// [ 2, 3, 3, 2 ],
// [ 2, 3, 3, 2 ],
// [ 1, 3, 3, 1 ]
// ],
// [
// [ 1, 2, 2, 1 ],
// [ 3, 3, 3, 3 ],
// [ 3, 3, 3, 3 ],
// [ 1, 2, 2, 1 ]
// ]
// ]
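For intuition, here is a minimal standalone sketch of the im2row idea (an illustration, not the package's internal code): every filter-sized patch of the input becomes one row of a matrix, so the whole convolution reduces to a matrix product with the flattened filters.
//standalone illustration: im2row for one channel and a KxK filter
function im2row(img, K){
  let rows = [];
  for(let i = 0; i <= img.length-K; i++){
    for(let j = 0; j <= img[0].length-K; j++){
      let patch = [];
      for(let y = 0; y < K; y++)
        for(let x = 0; x < K; x++)
          patch.push(img[i+y][j+x]); //flatten the KxK patch into one row
      rows.push(patch);
    }
  }
  return rows; //shape -> (H-K+1)*(W-K+1) x K*K
}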
let fake_grads = [
[0,0],[1,0],[0,1],[1,1],[0,0],[1,0],[0,1],[1,0],
[0,0],[1,1],[0,1],[1,0],[0,1],[1,0],[0,1],[1,0]
]
let next_layer_grads = conv.layerGrads(fake_grads)
console.log(next_layer_grads)
// [
// [
// [ 0, 0, 0, 0, 0, 0 ],
// [ 0, 1, 1, 2, 2, 1 ],
// [ 0, 1, 2, 2, 2, 0 ],
// [ 1, 4, 5, 5, 4, 1 ],
// [ 1, 2, 2, 1, 1, 0 ],
// [ 0, 1, 1, 1, 1, 0 ]
// ]
// ]
If you have previous gradients of shape D×H″×W″, then you can convert them into that format using the LinearAlgebra class:
let fake_grads = [
[
[0,1,1,0],
[0,1,1,0],
[0,1,1,0],
[0,1,1,0]
],
[
[0,0,0,0],
[1,1,1,1],
[1,1,1,1],
[0,0,0,0]
]
]
const La = new LinearAlgebra
fake_grads = La.vectorize(fake_grads)
fake_grads = La.reconstructMatrix(fake_grads,{x:4*4,y:2,z:1}).flat(1)
fake_grads = La.transpose(fake_grads)
let next_layer_grads = conv.layerGrads(fake_grads)
console.log(next_layer_grads)
// [
// [
// [ 0, 0, 1, 1, 0, 0 ],
// [ 0, 0, 2, 2, 0, 0 ],
// [ 1, 2, 6, 6, 2, 1 ],
// [ 1, 2, 6, 6, 2, 1 ],
// [ 0, 0, 2, 2, 0, 0 ],
// [ 0, 0, 1, 1, 0, 0 ]
// ]
// ]
conv.filterGrads(fake_grads,0.1) //updates the filters using the gradients and a learning rate of 0.1
conv.saveFilters("path") //saves the filters to the given path
Max Pool
Does a max pool on a matrix using the im2row method.
const {MaxPool} = require("@death_raider/neural-network")
const mxpool = new MaxPool
let input = [[
[0,0,1,1,0,0],
[0,0,1,1,0,0],
[1,1,1,1,1,1],
[1,1,1,1,1,1],
[0,0,1,1,0,0],
[0,0,1,1,0,0]
]] // shape -> 1x6x6
let output = mxpool.pool(input) //other arguments default to 2, 2, and true
console.log(output)
// [
// [
// [ 0, 1, 0 ],
// [ 1, 1, 1 ],
// [ 0, 1, 0 ]
// ]
// ]
let fake_grads = [
[ 0 ], [ 1 ],
[ 0 ], [ 1 ],
[ 5 ], [ 1 ],
[ 0 ], [ 1 ],
[ 0 ]
]
let input_grads = mxpool.layerGrads(fake_grads)
console.log(input_grads);
// [
// [ 0 ], [ 0 ], [ 1 ], [ 0 ], [ 0 ],
// [ 0 ], [ 0 ], [ 0 ], [ 0 ], [ 0 ],
// [ 0 ], [ 0 ], [ 1 ], [ 0 ], [ 5 ],
// [ 0 ], [ 1 ], [ 0 ], [ 0 ], [ 0 ],
// [ 0 ], [ 0 ], [ 0 ], [ 0 ], [ 0 ],
// [ 0 ], [ 1 ], [ 0 ], [ 0 ], [ 0 ],
// [ 0 ], [ 0 ], [ 0 ], [ 0 ], [ 0 ],
// [ 0 ]
// ]
mxpool.savePool("path")
# Application of CNN
In the Application.js file, I have created a simple CNN for MNIST digit recognition, but a few more modules need to be installed first:
npm install mnist cli-progress
pip install numpy matplotlib
# Future Updates
- Convolution and other image processing functions ✔️ done
- Convolutional Neural Network (CNN) ✔️ done
- Visualization of Neural Network ❌ pending (next)
- Recurrent Neural Network (RNN) ❌ pending
- Long Short Term Memory (LSTM) ❌ pending