
Perceptron Javascript Inconsistencies

I'm building a basic Perceptron. My results after training are very inconsistent, even after thousands of epochs. The weights seem to adjust properly, yet the model fails to predict accurately; accuracy consistently tops out at around 60%. A second pair of eyes on the structure would be greatly appreciated, as I'm struggling to find where I went wrong.

// Perceptron
class Perceptron {

    constructor (x_train, y_train, learn_rate = 0.1, epochs = 10) {
        this.epochs = epochs
        this.x_train = x_train
        this.y_train = y_train
        this.learn_rate = learn_rate
        this.weights = new Array(x_train[0].length)

        // initialize random weights
        for (let n = 0; n < x_train[0].length; n++) {
            this.weights[n] = this.random()
        }
    }

    // generate random float between -1 and 1 (for generating weights)
    random () {
        return Math.random() * 2 - 1
    }

    // activation function
    activation (n) {
        return n < 0 ? 0 : 1
    }

    // y-hat output given an input tensor
    predict (input) {
        let total = 0
        this.weights.forEach((w, index) => { total += input[index] * w }) // multiply each weight by each input vector value
        return this.activation(total)
    }

    // training perceptron on data
    fit () {
        for (let e = 0; e < this.epochs; e++) { // epochs loop
            for (let i = 0; i < this.x_train.length; i++) { // iterate over each training sample
                let prediction = this.predict(this.x_train[i]) // predict sample output
                console.log('Expected: ' + this.y_train[i] + '    Model Output: ' + prediction) // log expected vs predicted
                let loss = this.y_train[i] - prediction // calculate loss
                for (let w = 0; w < this.weights.length; w++) { // loop weights for update
                    this.weights[w] += loss * this.x_train[i][w] * this.learn_rate // update all weights to reduce loss
                }
            }
        }
    }
}

x = [[1, 1, 1], [0, 0, 0], [0, 0, 1], [1, 1, 0], [0, 0, 1]]
y = [1, 0, 0, 1, 0]

p = new Perceptron(x, y, epochs=5000, learn_rate=.1)

Updated:

// Perceptron
module.exports = class Perceptron {

    constructor (x_train, y_train, epochs = 1000, learn_rate = 0.1) {

        // used to generate percent accuracy
        this.accuracy = 0
        this.samples = 0
        this.x_train = x_train
        this.y_train = y_train
        this.epochs = epochs
        this.learn_rate = learn_rate
        this.weights = new Array(x_train[0].length)
        this.bias = 0

        // initialize random weights
        for (let n = 0; n < x_train[0].length; n++) {
            this.weights[n] = this.random()
        }
    }

    // returns percent accuracy
    current_accuracy () {
        return this.accuracy / this.samples
    }

    // generate random float between -1 and 1 (for generating weights)
    random () {
        return Math.random() * 2 - 1
    }

    // activation function
    activation (n) {
        return n < 0 ? 0 : 1
    }

    // y-hat output given an input tensor
    predict (input) {
        let total = this.bias
        this.weights.forEach((w, index) => { total += input[index] * w }) // multiply each weight by each input vector value
        return this.activation(total)
    }

    // training perceptron on data
    fit () {
        // epochs loop
        for (let e = 0; e < this.epochs; e++) {

            // for each training sample
            for (let i = 0; i < this.x_train.length; i++) {

                // get prediction
                let prediction = this.predict(this.x_train[i])
                console.log('Expected: ' + this.y_train[i] + '    Model Output: ' + prediction)

                // update accuracy measures
                this.y_train[i] === prediction ? this.accuracy += 1 : this.accuracy -= 1
                this.samples++

                // calculate loss
                let loss = this.y_train[i] - prediction

                // update all weights
                for (let w = 0; w < this.weights.length; w++) {
                    this.weights[w] += loss * this.x_train[i][w] * this.learn_rate
                }

                this.bias += loss * this.learn_rate
            }

            // accuracy post epoch
            console.log(this.current_accuracy())
        }
    }
}

It's just an argument-order slip :)

JavaScript has no named arguments: epochs=5000 and learn_rate=.1 are ordinary assignment expressions, so their values are passed positionally. Given the constructor signature (x_train, y_train, learn_rate = 0.1, epochs = 10), your call hands 5000 to learn_rate and 0.1 to epochs. Switch the order of the last two arguments, like this:

p = new Perceptron(x, y, learn_rate=.1, epochs=5000)

And now everything should work fine.
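As an aside, the epochs= and learn_rate= labels do nothing useful inside the call (in sloppy mode they merely assign to globals as a side effect), so an equivalent and plainer way to write it, assuming the original constructor signature, is:

// original signature: (x_train, y_train, learn_rate = 0.1, epochs = 10)
const p = new Perceptron(x, y, 0.1, 5000)
p.fit() // train; fit() is defined in the class above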

However, a more serious problem lies in your implementation:

You forgot the bias

With a perceptron you're trying to learn a linear function, something of the form

y = wx + b

but what you're currently computing is just

y = wx

This is fine if what you're trying to learn is just the identity function of a single input, like in your case. But it fails to work as soon as you start doing something slightly more complex like trying to learn the AND function, which can be represented like this:

y = x1 + x2 - 1.5
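A quick way to see this, using the same 0/1 step activation as your code (the helper names below are just for illustration): thresholding x1 + x2 - 1.5 at zero reproduces the AND truth table, whereas without a bias the weighted sum at the input (0, 0) is always 0, which your activation maps to 1, so no choice of weights alone can ever get AND right.

// AND as a thresholded linear function: weights [1, 1], bias -1.5
const activation = n => (n < 0 ? 0 : 1)
const andOutput = (x1, x2) => activation(1 * x1 + 1 * x2 - 1.5)

console.log(andOutput(0, 0)) // 0 + 0 - 1.5 = -1.5 -> 0
console.log(andOutput(0, 1)) // 0 + 1 - 1.5 = -0.5 -> 0
console.log(andOutput(1, 0)) // 1 + 0 - 1.5 = -0.5 -> 0
console.log(andOutput(1, 1)) // 1 + 1 - 1.5 =  0.5 -> 1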

How to fix?

Really easy: just initialise this.bias = 0 in the constructor. Then, in predict(), initialise let total = this.bias and, in fit(), add this.bias += loss * this.learn_rate right after the inner-most loop.
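With those changes in place (your updated class above already includes all three), here is a minimal sketch of training on the AND data and checking the outputs; the require path is just a placeholder for wherever the exported class lives:

const Perceptron = require('./perceptron') // placeholder path for the module.exports class above

// AND training data
const x = [[0, 0], [0, 1], [1, 0], [1, 1]]
const y = [0, 0, 0, 1]

// updated constructor order: (x_train, y_train, epochs = 1000, learn_rate = 0.1)
const p = new Perceptron(x, y, 1000, 0.1)
p.fit()

x.forEach(sample => console.log(sample, '->', p.predict(sample)))
// should settle at 0, 0, 0, 1 once training has converged (AND is linearly separable)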
