
greataptic v1.0.0-pre2

A simplistic neural network library.


Greataptic

Just like life evolves, neural networks do too!

...and neural network libraries too!

Greataptic is a minimalistic, sequential-only, simple and flexible Node.js library, created after Neataptic was abandoned. It is probably extensible (though it lacks a proper extension/plugin system), and it ships a simple and efficient genetic algorithm.

Tutorials

Quickstart

First, you create a simple neural network.

const greataptic = require('greataptic');

let net = greataptic.sequential(3, [
    {size: 9, post: 'sigmoid'},
    {size: 6, post: 'sigmoid'},
    {size: 2, post: 'sigmoid'}
]);

In this example, the network will decide whether we should go to the park, given whether it is raining and the temperature in Celsius. Its three inputs are a two-value one-hot rain flag plus the temperature; its two outputs score "go" and "don't go".

To make our life easier, we can just define a function that returns whether we should go to the park, given these parameters:

function goToThePark(bRain, temperature, raw = false) {
    // One-hot encode the rain flag, then read both output scores.
    let l = net.compute([+!bRain, +bRain, temperature]).data;

    if (raw) return l;
    return l[0] > l[1]; // output 0 scores "go", output 1 "don't go"
}
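The `[+!bRain, +bRain, temperature]` line deserves a note: unary `+` coerces the booleans to numbers, one-hot encoding the rain flag into two inputs. A quick standalone check:

```javascript
// One-hot encoding of a boolean via unary plus:
// +true === 1 and +false === 0, so [+!b, +b] yields
// [1, 0] for "not raining" and [0, 1] for "raining".
function encodeRain(bRain, temperature) {
    return [+!bRain, +bRain, temperature];
}

console.log(encodeRain(false, 20)); // [1, 0, 20]
console.log(encodeRain(true, -5));  // [0, 1, -5]
```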

Now all we have to do is train it.

net = net.train({
    inputSet:    [[1, 0, 20], [0, 1, 18], [1, 0, 40], [0, 1, 32], [0, 1, 42], [0, 1, -20], [1, 0, -10]],
    expectedSet: [[1, 0],     [0, 1],     [0, 1],     [1, 0],     [0, 1],     [0, 1],      [0, 1]],
    debug: true,
    mutation: 20,
    maxGens: 300,
    elitism: 0.04
});
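Those options feed greataptic's genetic algorithm. I haven't finished documenting the internals, so treat the exact mapping of mutation and elitism as an assumption, but a generic generation step looks roughly like this sketch (NOT greataptic's actual implementation):

```javascript
// Toy genetic-algorithm generation step.
// - elitism: fraction of top performers copied over unchanged
// - mutation: how strongly offspring weights are perturbed
function nextGeneration(population, fitness, { elitism, mutation }) {
    const ranked = [...population].sort((a, b) => fitness(b) - fitness(a));
    const eliteCount = Math.max(1, Math.floor(elitism * ranked.length));
    const elites = ranked.slice(0, eliteCount);

    const next = [...elites];
    while (next.length < population.length) {
        const parent = elites[Math.floor(Math.random() * elites.length)];
        // Mutate: jitter each weight by up to ±mutation.
        next.push(parent.map((w) => w + (Math.random() * 2 - 1) * mutation));
    }
    return next;
}

// One step on a toy population of single-weight genomes:
const pop = [[0], [1], [2], [3]];
const closerTo3 = (g) => -Math.abs(g[0] - 3);
console.log(nextGeneration(pop, closerTo3, { elitism: 0.25, mutation: 0.1 }));
```

The elite genomes survive untouched, which is why fitness can never get worse from one generation to the next.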

Once training is done, we can use our function to our heart's content!

> goToThePark(false, 20)
true
> goToThePark(false, -20)
false
> goToThePark(true, -20)
false
> goToThePark(true, 50)
false
> goToThePark(true, 15)
false
> goToThePark(true, 35)
false
> goToThePark(true, 32)
false
> goToThePark(false, 26)
true
> goToThePark(false, 31)
false

Spiking Neural Networks

A digital spiking neural network is similar to an ordinary linear neural network, but with the added nuance that its behaviour varies over time. It trades continuous scalar outputs for discrete spikes, and in turn it will likely not produce the same result for the same input on every single activation.

Basically, every neuron has a threshold and an accumulator. When the accumulator reaches the threshold, the neuron releases a specified output signal, which adds to (or subtracts from) the accumulators of other neurons.
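That accumulate-and-fire behaviour can be sketched in isolation (an illustrative model, not greataptic's actual neuron implementation):

```javascript
// Minimal accumulate-and-fire neuron: integrates incoming signal
// into an accumulator and, once the threshold is crossed, emits a
// spike of fixed strength and resets.
class SpikingNeuron {
    constructor(threshold, output) {
        this.threshold = threshold;
        this.output = output;
        this.accumulator = 0;
    }

    receive(signal) {
        this.accumulator += signal;
        if (this.accumulator >= this.threshold) {
            this.accumulator = 0; // reset after firing
            return this.output;   // spike
        }
        return 0;                 // stay silent
    }
}

// The same input yields different outputs over time:
const n = new SpikingNeuron(1.0, 0.5);
console.log(n.receive(0.6)); // 0   (accumulator at 0.6)
console.log(n.receive(0.6)); // 0.5 (1.2 crossed the threshold; fired)
console.log(n.receive(0.6)); // 0   (accumulator was reset)
```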

For example, let's say we want to create a random word from noise: the stronger the signal from a specific letter, the earlier it comes in this word, and if this signal is too weak, it will not appear at all.

First, we have to initialize our spiking neural network. To use spiking layers, we pass type: 'spiking' in the object describing each layer given to the network construction utility sequential.

const greataptic = require('greataptic');

// A few adjustable constants to make sure our word is the right size.
const signalThreshold = 0.6;
const noiseSize = 9;

const alphabet = 'abcdefghijklmnopqrstuvwxyz';
let net = greataptic.sequential(5, [
    {size: noiseSize, type: 'spiking'},
    {size: 23, type: 'spiking'},
    {size: alphabet.length, type: 'spiking', post: 'sigmoid'}
]);

Now we can write ourselves a utility function to make our lives easier - "do what you want 'cause a 'coder' is free! You're the programmer!"

function inputNoise(size) {
    // `size` random values in the range [-2, 2).
    return new Array(size).fill(0).map(() => Math.random() * 4 - 2);
}

function genWord(input = inputNoise(noiseSize)) {
    let letters = net.compute(input).data;

    // Keep letters whose signal clears the threshold, strongest first.
    return letters
        .map((l, i) => [alphabet[i], l])
        .filter((e) => e[1] >= signalThreshold)
        .sort((a, b) => b[1] - a[1])
        .map((e) => e[0])
        .join('');
}

At first, it will be fully random:

> genWord()
'oxjkmie'
> genWord()
'boxjkmz'
> genWord()
'poxjkmiywe'
> genWord()
'plxjckmie'
> genWord()
'oxjkmiz'
> genWord()
'poxjkmfiye'
> genWord()
'oxjkmie'
> genWord()
'oxjkmiz'
> genWord()
'boxjkmz'
> genWord()
'lxkmie'
> genWord()
'plxjckmie'

Since it is a completely generative approach, at this point, our best way to train this network is most likely through a simple but efficient technique called...

Generative Adversarial Network (GAN)

Remember our previous code? We can upgrade it using GANs.

In a GAN, two neural networks oppose each other: the generator, which takes in a random noise vector and outputs what we want (well, it should), and the discriminator, which tells the generator how 'true' its output looks. The discriminator learns to differentiate between real and fake data using two sets (a set of actual, true data, and a set of fake data generated by the generator in a previous iteration), while the generator simultaneously learns from the rating the discriminator gives it.

Unlike in other GAN libraries, this rating is computed through a fitness callback function and fed into a genetic algorithm. This way, the 'truer' a generator network's output is, the better it ranks, which ends up breeding (and mutating) better, more robust networks, à la Darwin.
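To see the fitness-callback idea in the smallest possible setting, here is a toy sketch where the 'generators' are plain numbers and the discriminator is a fixed closeness score (nothing like the real GAN class, but the selection loop is the same in spirit):

```javascript
// Toy "GAN trained by a genetic algorithm". Generators are plain
// numbers; real data clusters around 10; the discriminator scores a
// sample by its closeness to the real data's mean. Generators that
// fool the discriminator survive and breed mutated copies.
const realData = [9.5, 10, 10.5];
const realMean = realData.reduce((a, b) => a + b, 0) / realData.length;

// Discriminator: 1 means "looks real", values near 0 mean "fake".
const discriminate = (x) => 1 / (1 + Math.abs(x - realMean));

function evolveGenerators(population, generations) {
    for (let g = 0; g < generations; g++) {
        // Fitness callback: rank generators by how real they look.
        population.sort((a, b) => discriminate(b) - discriminate(a));
        const elite = population[0];
        // Keep the elite, breed mutated copies of it.
        population = population.map(
            (gen, i) => (i === 0 ? elite : elite + (Math.random() - 0.5))
        );
    }
    return population;
}

// Starting far from the real data, the elite drifts toward it:
console.log(evolveGenerators([0, 0, 0], 500));
```

Since the elite is carried over unchanged each generation, the best generator can only get closer to the real data over time.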

So, how can we use this powerful technology?

First, scrap that greataptic.sequential call. We're going to take a look at a class called greataptic.GAN! (And we can also use the an-array-of-english-words package to get the real data our GAN will require.)

// @sequential GET OFF MY SWAMP
// let net = greataptic.sequential(5, [ {size: noiseSize, type: 'spiking'}, {size: 23, type: 'spiking'}, {size: alphabet.length, type: 'spiking', post: 'sigmoid'} ]);

Did that? Good. Now we can begin writing the file.

Begin with the useful constants:

const greataptic = require('greataptic');
const shuffle = require('shuffle-array');
const realWords = shuffle(require('an-array-of-english-words')).slice(0, 10000);

// A few adjustable constants to make sure our word is the right size.
const signalThreshold = 0.6;
const noiseSize = 15;
const netType = 'linear';
const alphabet = 'abcdefghijklmnopqrstuvwxyz';

Note that we're back to good old linear neural networks. Spiking neurons and generativity don't seem to mix well.

Now, declare some functions so you can encode and decode words into arrays:

function encode(word) {
    let bad = Math.min(-1, signalThreshold - 0.5);

    // Earlier letters get stronger signals; absent letters get `bad`.
    let d = greataptic.layerTypes.sigmoid.process(greataptic.$vec(Array.from(alphabet).map((l) =>
        word.indexOf(l) !== -1 ? (word.length - word.indexOf(l)) : bad
    )));
    return d;
}

function decode(letters) {
    return letters
        .map((l, i) => [alphabet[i], l])
        .filter((e) => e[1] >= signalThreshold)
        .sort((a, b) => b[1] - a[1])
        .map((e) => e[0])
        .join('');
}
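To sanity-check the round trip without the library, you can substitute the standard logistic sigmoid for greataptic.layerTypes.sigmoid (an assumption about what that post-processor computes) and pipe encode straight into decode:

```javascript
// Standalone round-trip check for encode/decode, using the plain
// logistic sigmoid in place of greataptic's post-processor (an
// assumption about its behaviour).
const signalThreshold = 0.6;
const alphabet = 'abcdefghijklmnopqrstuvwxyz';
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

function encode(word) {
    const bad = Math.min(-1, signalThreshold - 0.5);
    // Earlier letters get stronger signals; absent letters get `bad`.
    return Array.from(alphabet).map((l) =>
        sigmoid(word.indexOf(l) !== -1 ? word.length - word.indexOf(l) : bad)
    );
}

function decode(letters) {
    return letters
        .map((l, i) => [alphabet[i], l])
        .filter((e) => e[1] >= signalThreshold)
        .sort((a, b) => b[1] - a[1])
        .map((e) => e[0])
        .join('');
}

console.log(decode(encode('cab'))); // 'cab'
```

(The round trip only works for words with distinct letters, since indexOf only sees the first occurrence of a repeated letter.)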

Good! NOW comes the interesting part.

Initialize the GAN thingie, providing the right arguments. We'll look into those once I finish the API documentation.

console.log(' * [INFO] Encoding English word subset.');
const realEncodedWords = realWords.map(encode);

let gan = new greataptic.GAN({
    size: {
        output: alphabet.length,
        noise: noiseSize
    },

    outputType: netType
});

I guess this is a bit more code than our previous examples, but hey, I'm sure it's totally worth it!

Now you can begin training! Hooray!

console.log(' * [INFO] Training.');
gan.trainAsync(realEncodedWords, {
    maxGens: 80,
    population: 100,
    maxComparisonSize: 150,
    fitnessQuota: 0.95,
    mutation: 0.6,
    elitism: 0.0625,
    survivalRate: 0.15,
    debug: true,

    discriminatorTrainOptions: { fitnessQuota: 0.9 },

    genCallback: (data) => {
        // Log a sample of up to 8 decoded fake words per generation.
        let sample = data.fake.slice(0, 8).map(decode);
        if (data.fake.length > 8) sample.push('...');
        console.log(`(${sample.join(', ')})`);
    }
}).then(() => {
    function genWord() {
        let letters = gan.generate();

        return decode(letters);
    }

    for (let i = 1; i <= (isNaN(+process.argv[2]) ? 50 : +process.argv[2]); i++) {
        console.log('>', genWord());
    }
});

Also note that we're using the .trainAsync function, which is supposedly faster.

This might take a long time. We have profiled the application, but if training the GAN still takes too long for you, there is probably not much that can be done at that point. I'm sorry.

...

[Generation #1] Best fitness: 0 | Worst fitness: 0
(hvwycfixnzl, nhvrwikmc, ivowycxjgn, tzfeigmknv, fgheovkiwxc, hwznflkgto, vhegyksq, yfhwesgvkci, ...)
[Generation #2] Best fitness: 0.6310722878538669 | Worst fitness: 0.06214089501871067
(hoefclxv, vhyfwglnm, ivofwyqgxlth, ztfegvim, xkfzvohnjei, hozqnfwal, vgfhkei, hyesfvwicgqlpk, ...)
[Generation #3] Best fitness: 0.7720685810330974 | Worst fitness: 0.02534786537134085
(fvoehzcxukn, efmvdzcn, oigvyftxwnqjk, hxfeyloqz, mqhwfoisenz, hzftogpume, hcesfdt, exzbfmgwov, ...)
[Generation #4] Best fitness: 0.9099892193256698 | Worst fitness: 0.011222608262300893
(hcdest, tzmfgavdqne, fizvetgkmb, joemitwafzguh, uefxvckniat, efszwkoh, feucvzhdtonwxslq, fokezcjqtm, ...)

At first it won't seem to improve much, but hey, it's a GAN.

Once you're done, go Wild Words, cowboy! .......yes? no? But, come on, that was a good pun!

You know what else is good? This beautiful output:

(duatzrjpvmyfe, uzdmaprtjlyc, jtudzmoyralpwb, uajtoylrdpzbwq, jdrpxyltfnusvz, jdaoupyftvlx, urtdzyapmjf, udjtaylrpmoz, ...)